Comments on Computational Complexity: "Are most lower bounds really upper bounds?" (blog by Lance Fortnow)

Stasys (http://www.thi.informatik.uni-frankfurt.de/~jukna/), 2013-02-20 06:51:
P.S. Most lower-bound proofs for a complexity measure c(f) first UPPER bound some more tractable combinatorial measure m(f) in terms of c(f), and then LOWER bound m(f). The first step is indeed an upper-bound problem: after all, m(f) must not be much larger than c(f). This is usually achieved by upper bounding the gate-by-gate or level-by-level progress. So every proof is a mix of solving upper-bound and lower-bound problems; it depends on which part is harder. In the AC^0 results, upper bounding is harder. In monotone circuits, lower bounding is harder (especially for the Perfect Matching function). In the paper by your student, lower bounding is also the harder part. In some proofs, both steps are equally hard. So my question actually was: does somebody know a proof where a lower bound on c(f) is obtained by proving an upper bound on m(f)?

Stasys (http://www.thi.informatik.uni-frankfurt.de/~jukna/), 2013-02-20 05:58:
Bill, I think the title of your post has little (if any) to do with its content, with "algorithmic or not". EVERY lower-bound proof that reduces the problem to lower bounding some combinatorial/algebraic measure may be viewed as "algorithmic". Most of the proofs in circuit complexity are of this kind. But I don't know of any result here proved via *upper bounding* some measure. I mean standard "direct" (not Ryan's-type) proofs for formulas, branching programs/trees, bounded-depth or monotone circuits. Does somebody know such an "upper-bounding" proof there?
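The two-step scheme Stasys describes (upper bound m(f) in terms of c(f), then lower bound m(f)) has a classical textbook instance in Khrapchenko's bound for DeMorgan formula size L(f); a sketch:

```latex
% Step 1 (the UPPER-bound part): for any A \subseteq f^{-1}(1), B \subseteq f^{-1}(0),
% with C = \{(a,b) \in A \times B : a \text{ and } b \text{ differ in exactly one bit}\},
% the measure is bounded above by formula size:
\[
    m(f) \;=\; \frac{|C|^2}{|A|\cdot|B|} \;\le\; L(f).
\]
% Step 2 (the LOWER-bound part): for parity of n bits, take A = all odd-weight
% inputs and B = all even-weight inputs, so |A| = |B| = 2^{n-1} and |C| = n\,2^{n-1}:
\[
    L(\oplus_n) \;\ge\; \frac{\bigl(n\,2^{n-1}\bigr)^2}{2^{n-1}\cdot 2^{n-1}} \;=\; n^2.
\]
```

Note that the upper-bound step m(f) <= L(f) is proved by induction on the formula, i.e., by bounding gate-by-gate progress, exactly as described above.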
Anonymous, 2013-02-20 02:47:
I think the proof that parity is not in AC0 also has an algorithm at its core: given any small-size circuit of constant depth, we find (probabilistically) a low-degree polynomial which approximates it.

Joshua Grochow (http://www.cs.toronto.edu/~jgrochow/), 2013-02-19 17:53:
The hierarchy theorems are proven by constructing a universal simulating algorithm for the smaller class; for example, by showing that there is an *algorithm* that runs in time O(n^3) and can simulate all algorithms that run in time O(n^2). Finding a better *algorithm* for simulating k tapes on 2 tapes is (or at least seems to be) the bottleneck for improving the Time Hierarchy Theorem.

(Sometimes diagonalization is non-algorithmic: for example, a few oracle constructions are non-computable, yielding non-computable oracles. But I'd say not in the case of the hierarchy theorems. Many oracle constructions are algorithmic, however, though the underlying algorithms are often very inefficient.)

GASARCH (https://www.blogger.com/profile/06134382469361359081), 2013-02-19 16:22:
Thanks. NOW I have fixed it. Hopefully.
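The "find a low-degree polynomial that approximates the circuit" step mentioned in the parity/AC^0 comment above can be illustrated for a single OR gate over GF(2). This is a toy sketch of one gate of the Razborov-Smolensky argument, not the whole proof; the values of n and t and the brute-force check are chosen purely for illustration:

```python
import itertools
import random

def inner(r, x):
    # Inner product over GF(2).
    return sum(ri & xi for ri, xi in zip(r, x)) % 2

def approx_or(rs):
    # p(x) = 1 - prod_i (1 + <r_i, x>) over GF(2): a polynomial of degree
    # at most t = len(rs) that equals 1 iff some <r_i, x> is 1.
    # On x = 0 it is always correct; on any fixed x != 0 it errs only if
    # every <r_i, x> = 0, which happens with probability 2^(-t) over the r_i.
    def p(x):
        return 0 if all(inner(r, x) == 0 for r in rs) else 1
    return p

n, t = 8, 16
random.seed(0)  # fixed seed so the demo is reproducible
rs = [[random.randint(0, 1) for _ in range(n)] for _ in range(t)]
p = approx_or(rs)

# Brute-force comparison against the true OR on all 2^n inputs.
errors = sum(
    p(x) != (1 if any(x) else 0)
    for x in itertools.product([0, 1], repeat=n)
)
error_rate = errors / 2 ** n
print(f"disagreements with OR on {2**n} inputs: {errors} (rate {error_rate:.4f})")
```

The full argument approximates the whole constant-depth circuit gate by gate in this way, and then shows that parity admits no good low-degree approximation.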
Anonymous, 2013-02-19 16:17:
You fixed the wrong instance of ACC0. You are now underselling Ryan Williams' result.

Anonymous, 2013-02-19 15:55:
This does seem to be a deep principle that might somehow be formalized more generally and rigorously. Maybe it is some kind of tradeoff phenomenon between time and space (and other computational resources); this can be seen in SAT lower bounds that are stated in terms of TISP, i.e., time and space, since the two are interrelated as far as optimal algorithms go.

It would seem that the interrelation between lower bounds and upper bounds can be visualized in terms of the inherent tradeoff in compression algorithms: one can compress strings "better" (shorter) if one has more time. The concept of compression seems quite fundamental to TCS, e.g., in Kolmogorov complexity, and perhaps in a larger "unifying framework" someday.

The following question is an attempt to formalize some of this via questions on the compression of the (state, symbol) sequence that ensues on TM computations: http://mathoverflow.net/questions/86339/compression-of-a-turing-machine-run-sequence (compression of a TM run sequence)

André Luiz Barbosa (http://www.andrebarbosa.eti.br), 2013-02-19 15:28:
Dear Dr. Gasarch,
I think his definition (2) of Cook's Theorem is incomplete, since it is missing that the reduction must be done in polynomial time, and that the running time of that poly-time NTM must be given in order for the reduction to work (with a poly-time construction of a Boolean formula using the description of that NTM and the input x).

Notice that this is not mere preciosity: without such details, many computer theorists cannot understand the serious flaw that affects this theorem, as explained at http://www.andrebarbosa.eti.br/The_Cook-Levin_Theorem_is_False.pdf.

GASARCH (https://www.blogger.com/profile/06134382469361359081), 2013-02-19 15:10:
Fixed.

Anonymous, 2013-02-19 15:07:
There is a typo in 5: ACC0 should be AC0[3].

Anonymous, 2013-02-19 15:04:
Petition to ask ACM to join Open Access:
http://teardownthispaywall.appspot.com/

Anonymous, 2013-02-19 14:18:
What is the difference between a "constructive" proof (possibly a randomized constructive one) and an "algorithmic" one in your definition? Proofs have constructive parts (as in Ryan Williams' algorithm) as well as non-constructive parts (the diagonalization that is at the core of NEXP not in ACC^0). Ditto for the Ajtai-FSS-Yao-Hastad proofs that parity is not in AC^0.

Anonymous, 2013-02-19 11:15:
...and also this related question: http://cstheory.stackexchange.com/q/14085

Anonymous, 2013-02-19 09:32:
See also the same question on cstheory:
http://cstheory.stackexchange.com/q/3229
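Returning to the diagonalization behind the hierarchy theorems discussed in Joshua Grochow's comment above, the core idea can be sketched as a finite toy (the table of machine behaviours below is hypothetical, purely for illustration):

```python
def diagonalize(table):
    # Row i lists the 0/1 answers of "machine i" on inputs 0..n-1.
    # Flipping the diagonal yields a behaviour that differs from every
    # row i at input i, so no listed machine computes it.
    return [1 - table[i][i] for i in range(len(table))]

# A hypothetical 4-machine table, just for illustration.
machines = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
d = diagonalize(machines)
print(d)  # [1, 0, 1, 0]: differs from row i in position i
```

In the Time Hierarchy Theorem the "table" is infinite and the diagonal row must itself be computed by an efficient universal simulator, which is exactly where the algorithmic content pointed out above comes in.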