Tuesday, February 19, 2013

Are most lower bounds really upper bounds?

Recently Daniel Apon (a grad student of Jon Katz at UMCP, but he also hangs out with me) proved a LOWER BOUND by proving an UPPER BOUND. His paper is here. I have heard it said (I think by Avi W and Lane H) that MOST lower bounds are really upper bounds. Below I use the term Non-Alg Lower Bound for a lower bound whose proof is NOT essentially an algorithm. This is not a rigorous notion, and the items below are up for debate.

  1. Time Hier, Space Hier: proved by diagonalization. I would call those Non-Alg.
  2. Cook's Theorem: this is a polynomial-time ALGORITHM that transforms a nondeterministic poly-time TM (together with its time bound) and a string x into a Boolean Formula.
  3. All reductions can be viewed as ALGORITHMS (a toy reduction is sketched after this list).
  4. Parity not in AC0: The Yao-Hastad proof can be viewed as a Non-Alg lower bound for depth 1 or 2, plus a randomized ALGORITHM (a random restriction, sketched after this list) that transforms depth d to depth d-1.
  5. Parity not in AC0[3]: This is a Non-Alg lower bound--- you show that parity has some property (it cannot be approximated by low-degree polynomials) and then you show that AC0[3] cannot cope with this property.
  6. Comm Complexity: The det lower bound on EQ is a Non-Alg lower bound (a rank-method demo appears after this list). I think the randomized lower bound on DISJOINT is also a Non-Alg lower bound. Many other lower bounds are proved by reductions from these two, and hence are algorithms.
  7. Multiparty Comm Comp: I'll just mention one result: Chandra-Furst-Lipton's lower bound on EXACT-N for k-player Number-on-Forehead. The lower bound shows that if there is a protocol using t bits then some structure can be colored in a certain way; then Ramsey Theory is used. A Non-Alg lower bound, I think.
  8. Decision Tree Complexity (Comparisons): The lower bounds on SORTING and MAX are Non-Alg lower bounds. The following leaf-counting lower bound for
    2nd largest is sort-of a reduction to MAX, but I still think it's Non-Alg (the matching tournament ALGORITHM is sketched after this list): First note that the lower bound for MAX is very strong- even the best case
    requires n-1 comps. Hence any DT for MAX has at least 2^{n-1} leaves.
    Let T be a DT for 2nd largest. For each i=1,...,n let T_i be the set of leaves where x_i WINS (is the max). Restricted to those leaves, T is a MAX tree for the remaining n-1 elements, so T_i has at least 2^{n-2} leaves. All these sets of leaves are disjoint, so T has at least n*2^{n-2} leaves. Hence T has height at least n + \log_2 n - 2.
  9. Decision Tree Complexity (Other queries): Algebraic queries and k-ary queries have both been studied.
    The algebraic-query lower bounds use number-of-components arguments and seem Non-Alg. Some of the k-ary query lower bounds use Ramsey Theory to reduce to the comparison case.
  10. Lower bounds for branching programs and other models often reduce to comm complexity. Is that an algorithm?
  11. Ryan Williams' proof that NEXP is not in ACC is really, at its core, an ALGORITHM (for ACC circuit satisfiability) that does slightly better than brute force (a brute-force baseline is sketched after this list).
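
To make item 3 concrete, here is a minimal sketch (my own toy code, not from any of the papers above) of the standard SAT-to-3SAT clause-splitting reduction. The point is that the reduction is literally an ALGORITHM you can run.

```python
# Standard SAT-to-3SAT clause splitting: a reduction that IS an algorithm.
# Literals are nonzero ints: +i means x_i, -i means NOT x_i.

def sat_to_3sat(clauses, num_vars):
    """Transform a CNF formula into an equisatisfiable CNF with <=3 literals per clause."""
    out = []
    next_var = num_vars + 1                  # fresh variables start here
    for clause in clauses:
        if len(clause) <= 3:
            out.append(list(clause))         # already small enough
        else:
            # (l1 v l2 v ... v lk) becomes
            # (l1 v l2 v y1)(-y1 v l3 v y2)...(-y_{k-3} v l_{k-1} v lk)
            lits = list(clause)
            y = next_var
            out.append([lits[0], lits[1], y])
            for l in lits[2:-2]:
                out.append([-y, l, y + 1])
                y += 1
            out.append([-y, lits[-2], lits[-1]])
            next_var = y + 1
    return out, next_var - 1

# One 5-literal clause becomes three 3-literal clauses:
print(sat_to_3sat([[1, 2, 3, 4, 5]], 5))
# -> ([[1, 2, 6], [-6, 3, 7], [-7, 4, 5]], 7)
```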
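
For item 4, here is a minimal sketch of the randomized ALGORITHM inside the Yao-Hastad proof: applying a random restriction, which leaves each variable free with probability p and fixes it to a random constant otherwise. This is my own illustration; the hard part of the proof (the switching lemma, which says the restricted bottom-level DNF/CNF simplifies with high probability) is not shown.

```python
import random

def random_restriction(n, p):
    """Map each variable to 0, 1, or '*' (left free with probability p)."""
    return {i: ('*' if random.random() < p else random.randint(0, 1))
            for i in range(1, n + 1)}

def restrict_clause(clause, rho):
    """Apply a restriction to one OR of literals (signed ints).
    Returns True if the clause became satisfied, else the surviving literals."""
    survivors = []
    for lit in clause:
        v = rho[abs(lit)]
        if v == '*':
            survivors.append(lit)            # variable still free
        elif (v == 1) == (lit > 0):
            return True                      # literal made true: clause satisfied
        # else: literal made false, drop it ([] means the clause is now false)
    return survivors

rho = random_restriction(6, p=0.3)
print(rho, restrict_clause([1, -2, 5], rho))
```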
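
For item 6, here is a small numerical demo (mine, using numpy) of the rank method behind the det lower bound on EQ: a c-bit protocol partitions the communication matrix into at most 2^c monochromatic rectangles, so D(f) >= log_2 rank(M_f). For EQ on n-bit strings, M_EQ is the 2^n x 2^n identity matrix, which has full rank, giving D(EQ) >= n.

```python
import numpy as np

n = 4
size = 2 ** n
M_EQ = np.eye(size, dtype=int)       # M[x][y] = 1 iff x == y
rank = np.linalg.matrix_rank(M_EQ)
print(rank, np.log2(rank))           # -> 16 4.0, so D(EQ) >= n = 4
```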
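
For item 8, the leaf-counting lower bound of n + \log_2 n - 2 comparisons is matched by the classic tournament ALGORITHM: find MAX by a balanced knockout (n-1 comparisons), then take the max of the ceil(log_2 n) elements that lost directly to the winner. A sketch (my own code):

```python
import math

def second_largest(xs):
    """Return (second-largest element, number of comparisons made)."""
    comparisons = 0
    # Each "player" is (value, list of values that lost directly to it).
    players = [(x, []) for x in xs]
    while len(players) > 1:                  # knockout rounds
        nxt = []
        for i in range(0, len(players) - 1, 2):
            (a, la), (b, lb) = players[i], players[i + 1]
            comparisons += 1
            if a >= b:
                nxt.append((a, la + [b]))    # b lost directly to a
            else:
                nxt.append((b, lb + [a]))
        if len(players) % 2 == 1:
            nxt.append(players[-1])          # odd player gets a bye
        players = nxt
    champ, losers = players[0]
    # The 2nd largest lost directly to the champion; finding the max of
    # the losers costs len(losers) - 1 further comparisons.
    comparisons += len(losers) - 1
    return max(losers), comparisons

print(second_largest(list(range(16))))        # -> (14, 18)
print(16 + math.ceil(math.log2(16)) - 2)      # the lower bound: 18
```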
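
And for item 11, the baseline that Williams' ACC-SAT algorithm improves on is plain brute force over all 2^n assignments; beating it by even a modest factor for ACC circuits already yields the lower bound. A schematic sketch (`circuit` here is just any Python predicate standing in for an ACC circuit):

```python
from itertools import product

def brute_force_sat(circuit, n):
    """Try all 2^n assignments; return a satisfying one, or None."""
    for x in product([0, 1], repeat=n):
        if circuit(x):
            return x
    return None

print(brute_force_sat(lambda x: x[0] and not x[1], 3))   # -> (1, 0, 0)
```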

My Final Opinion: The above is a random sample, but it seems to me that there are plenty of lower bounds that are Non-Alg. However, as more and more lower bounds become known, more and more reductions will be used, and hence there will be more algorithms.

14 comments:

  1. See also the same question on cstheory:
    http://cstheory.stackexchange.com/q/3229

    Replies
    1. ...and also this related question: http://cstheory.stackexchange.com/q/14085

  2. What is the difference between a "constructive" proof (possibly a randomized constructive one) and an "algorithmic" one in your definition? Proofs have constructive parts (as in Ryan Williams' algorithm) as well as non-constructive parts (the diagonalization that is at the core of NEXP not in ACC^0). Ditto for the Ajtai-FSS-Yao-Hastad parity-not-in-AC^0 proofs.

  3. Petition to ask ACM to join Open Access:

    http://teardownthispaywall.appspot.com/

  4. There is a typo in 5: ACC0 should be AC0[3].

    Replies
    1. You fixed the wrong instance of ACC0. You are now underselling Ryan Williams' result.

    2. Thanks.
      NOW I have fixed it. Hopefully.

  5. Dear Dr. Gasarch,

    I think his definition (2) of Cook's Theorem is incomplete, since it is missing that the reduction must be done in polynomial time, and that the running time of the poly-time NTM must be given in order for the reduction to work (the Boolean formula is constructed in polynomial time from the description of that NTM and the input x).

    Notice that this is not mere pedantry, because without such details many computer theorists cannot understand the serious flaw that affects this theorem, as explained at http://www.andrebarbosa.eti.br/The_Cook-Levin_Theorem_is_False.pdf.

  6. this does seem to be a deep principle that might somehow be formalized more rigorously and generally. maybe it is some kind of tradeoff phenomenon between time and space (and other computational resources). this can be seen in SAT lower bounds that are stated in terms of TISP, i.e. time and space; the two are interrelated as far as optimal algorithms go.

    it would seem that the interrelation between lower bounds and upper bounds can be visualized in terms of the inherent tradeoff in compression algorithms: one can compress strings "better" (shorter) if one has more time. it appears that the concept of compression is quite fundamental to TCS, e.g. as in kolmogorov complexity, and maybe part of a larger "unifying framework" someday.

    the following question is an attempt to formalize some of this via questions on the compression of the (state, symbol) sequence that ensues during TM computations.

    compression of a TM run sequence

  7. The hierarchy theorems are proven by constructing a universal simulating algorithm for the smaller class. For example, by showing that there is an *algorithm* that runs in time O(n^3) and can simulate all algorithms that run in time O(n^2). Finding a better *algorithm* for simulating k tapes on 2 tapes is (or at least seems to be) the bottleneck for improving the Time Hierarchy Theorem.

    (Sometimes diagonalization is non-algorithmic - for example, a few oracle constructions are non-computable, yielding non-computable oracles - but I'd say not in the case of the hierarchy theorems. Many oracle constructions are algorithmic, though the underlying algorithms are often very inefficient.)
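
    A toy, runnable illustration of that algorithmic content (my own stand-in model: Python functions play the machines and traced events play the steps, not a real universal TM):

    ```python
    import sys

    def run_with_budget(f, x, budget):
        """Universal simulation with a step budget: run f(x), abort on 'timeout'."""
        steps = 0
        def tracer(frame, event, arg):
            nonlocal steps
            steps += 1
            if steps > budget:
                raise TimeoutError       # budget exhausted
            return tracer
        sys.settrace(tracer)
        try:
            return f(x)
        except TimeoutError:
            return 'timeout'
        finally:
            sys.settrace(None)

    def D(f, x):
        """Diagonalize: do the opposite of f on x within the smaller time bound."""
        result = run_with_budget(f, x, budget=len(x) ** 2)
        return not (result is True)      # flip accept; treat reject/timeout alike

    print(D(lambda s: True, 'abcd'))     # -> False: D differs from this f
    ```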

  8. I think the proof that parity is not in AC0 also has an algorithm at its core: given any small-size constant-depth circuit, we find (probabilistically) a low-degree polynomial which approximates it.
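
    A minimal sketch of that core step for a single OR gate (my own demo; a Razborov-Smolensky-style approximation, done over GF(3), where parity provably has no good low-degree approximation): OR is approximated by the degree-2t polynomial 1 - prod_{j=1..t} (1 - <r_j,x>^2), which errs on any fixed input with probability at most 3^{-t}. Composing such polynomials gate by gate approximates the whole circuit.

    ```python
    import random

    def approx_or(n, t):
        """Sample a random degree-2t polynomial over GF(3) approximating n-bit OR."""
        rs = [[random.randrange(3) for _ in range(n)] for _ in range(t)]
        def p(x):
            prod = 1
            for r in rs:
                inner = sum(ri * xi for ri, xi in zip(r, x)) % 3
                prod = (prod * (1 - inner * inner)) % 3   # factor is 1 iff inner == 0
            return (1 - prod) % 3
        return p

    # Empirical error rate: should be at most about 3^-t ~ 0.012.
    n, t, trials, errors = 8, 4, 20000, 0
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        if approx_or(n, t)(x) != (1 if any(x) else 0):
            errors += 1
    print(errors / trials)
    ```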

  9. Bill, I think the title of your post has little (if anything) to do with its content, which is about "algorithmic or not". EVERY lower-bound proof which reduces the problem to lower bounding some combinatorial/algebraic measure may be viewed as "algorithmic". Most of the proofs in circuit complexity are such. But I don't know of any result here proved via *upper bounding* some measure. I mean standard "direct" (not Ryan's type) proofs for formulas, branching programs/trees, bounded-depth or monotone circuits. Does somebody know such an "upper-bounding" proof there?

    Replies
    1. P.S. Most lower-bound proofs for a complexity measure c(f) first UPPER bound some more tractable combinatorial measure m(f) in terms of c(f), and then LOWER bound m(f). The first step is indeed an UPPER-bound problem: after all, m(f) must not be much larger than c(f). This is usually achieved by UPPER bounding the gate-by-gate or level-by-level progress. So, every proof is a mix of solving upper- and lower-bound problems; it depends on which part is harder. In the AC^0 stuff, UPPER bounding is harder. In monotone circuits, LOWER bounding is harder (especially for the Perfect Matching function). In the paper by your student, LOWER bounding is also harder. In some proofs, both steps are equally hard. So, my question actually was: does somebody know a proof where a lower bound on c(f) is obtained by proving an upper bound on m(f)?
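
      In symbols (my paraphrase of the scheme above, assuming the translation function \alpha is increasing):

      ```latex
      \[
        \underbrace{m(f) \le \alpha\bigl(c(f)\bigr)\ \text{for all } f}_{\text{Step 1: an UPPER bound}}
        \quad\text{and}\quad
        \underbrace{m(f_0) \ge \beta}_{\text{Step 2: a LOWER bound}}
        \;\Longrightarrow\;
        c(f_0) \ge \alpha^{-1}(\beta).
      \]
      ```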
