Monday, November 15, 2010

Is Ryan's Theorem Only Interesting to Complexity Theorists?

Last week, the complexity theory bloggers Dick, Scott, Luca and I all heaped praise on Ryan Williams' new circuit lower bound: NEXP is not in ACC0. Outside the computational complexity community the reaction has been something like "Wow, those complexity theorists are excited by Ryan's paper." So let me try to explain why we are excited by the result and perhaps why you should be too.


In the 1980's complexity theorists started looking at circuit complexity. Circuits are AND, OR and NOT gates that feed into each other (no cycles) with input bits on the bottom and an output bit at the top. Every function mapping n bits to one bit can be expressed as a circuit, but it may require an exponential (2^{O(n)}) number of gates. Every polynomial-time computable function can be expressed by a circuit with a polynomial number of gates. If you could show that some problem in NP cannot be computed by polynomial-size circuits, then you would have proven P ≠ NP.
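
To make this concrete, here is a minimal sketch (my own illustration in Python; the gate-list representation and the little XOR example are made up for this post, not taken from Ryan's paper) of a circuit as a list of gates evaluated bottom-up:

    # Minimal sketch: a circuit is a list of gates evaluated in order.
    # Each gate is (operation, list of wire indices); wires 0..n-1 are the
    # input bits, each gate appends one new wire, the last wire is the output.
    def eval_circuit(inputs, gates):
        wires = list(inputs)
        for op, args in gates:
            vals = [wires[i] for i in args]
            if op == "AND":
                wires.append(int(all(vals)))
            elif op == "OR":
                wires.append(int(any(vals)))
            elif op == "NOT":
                wires.append(1 - vals[0])
        return wires[-1]

    # Five gates computing XOR (2-bit parity): (x0 AND NOT x1) OR (NOT x0 AND x1).
    xor_gates = [("NOT", [0]), ("NOT", [1]),        # wires 2 and 3
                 ("AND", [0, 3]), ("AND", [1, 2]),  # wires 4 and 5
                 ("OR", [4, 5])]                    # wire 6, the output
    assert eval_circuit([1, 0], xor_gates) == 1
    assert eval_circuit([1, 1], xor_gates) == 0

The size of a circuit is its number of gates (five here) and its depth is the longest chain of gates from an input to the output (three here); these are the two measures the results below are about.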

Furst, Saxe and Sipser made the first breakthrough in 1981. They showed that a simple function (parity) cannot be computed by polynomial-size, constant-depth circuits, where depth is the length of the longest chain of gates from the input bits to the output. Yao and Hastad would soon show that a constant-depth circuit needs an exponential number of gates to compute parity.

I started graduate school in 1985 with Mike Sipser as my advisor. During that first year we saw Hastad's paper and then a new result by a Russian, Alexander Razborov. Razborov showed that no polynomial-size monotone circuit (just AND and OR gates, no NOTs) can compute the clique function. At this point Sipser was sure a proof of P ≠ NP using circuit complexity was just around the corner.

In 1987, Razborov and Smolensky gave a nice extension of the constant-depth bounds. They showed that one cannot compute the parity function with constant-depth polynomial-size circuits even if we augment them with mod3 gates. A mod3 gate outputs 1 if the number of its inputs that are 1 is not a multiple of 3. More generally they showed that for any primes p ≠ q, one cannot compute the modp function with ANDs, ORs, NOTs and modq gates. Their proof breaks down if the modulus is not a prime power.
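
In the same sketch style as above (again just my own illustrative Python, not code from any of the papers), the behavior of a mod gate is simply:

    # A MOD_m gate outputs 1 iff the number of its inputs equal to 1
    # is NOT a multiple of m. Note that MOD_2 is exactly the parity function.
    def mod_gate(m, bits):
        return int(sum(bits) % m != 0)

    assert mod_gate(2, [1, 0, 1]) == 0     # two 1s, a multiple of 2
    assert mod_gate(3, [1, 1, 0, 0]) == 1  # two 1s, not a multiple of 3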

That's six major circuit results in six years. We had every expectation that the results would keep coming. But nothing in 1988. Nothing in 1989. Nothing in the 90's. Nothing in the first decade of the 2000's.

Nothing is a bit strong: there were several nice lower bounds for uniform and restricted models, and some new characterizations of circuit classes. We also saw arguments that further progress would be difficult: Razborov showed that the techniques behind his clique result break down completely once NOT gates are allowed, and then, of course, there are natural proofs.

But for 23 years we had no progress on general circuits beyond Razborov-Smolensky. Forget NP; we couldn't even show there were problems in NEXP (problems verifiable in exponential time) that didn't have polynomial-size constant-depth circuits with only Mod6 gates. (Nothing is special about 6; the same holds for any number that is a multiple of at least two distinct primes.)

That is, until last week, when Ryan showed that NEXP cannot have polynomial-size constant-depth circuits with AND, OR, NOT and Modm gates for any m, prime or not.

And that's why we are so excited about this paper.

44 comments:

  1. To be fair, there were real advances in arithmetic circuit lower bounds over the last 20 years (such as the work of Ran Raz and collaborators on multilinear formulas and circuits). But Ryan's result is certainly the first major (positive!) advance in Boolean circuit lower bounds since the 1980s.

  2. And, don't forget Ajtai's 2001 lower bound for linear length branching programs (where previously even read-twice BPs had no lower bounds.)

  3. I will use Yao's allegory: it is like monkeys climbing trees hoping to get to the moon.

    To get to the moon we need some engineering---artificial intelligence, better systems.

    This is not to say that monkeys should stop climbing trees (they are inspiration, after all), just that it is hard to get the engineers excited about a taller tree being climbed (it has no impact on the (more important) engineering).

  4. Has the proof been verified? The blogs have been praising it but not quite saying it's been verified.

    I am not skeptical but I have to ask...

  5. My own interests are more in algorithms (upper bounds) than lower bounds. But I'm interested in Ryan's result because there is something algorithmic at its core. In STOC'10 he already demonstrated this connection: if you get a better exponential upper bound for certain kinds of circuit satisfiability, you also get better lower bounds separating nondeterministic exponential time classes from polynomial nonuniform circuit classes, because composing the new algorithm with the circuits implied by a collapse of these classes would give you something violating the time hierarchy theorem. Now he's shown this more concretely by developing a better algorithm for another type of satisfiability problem and getting from it an unconditional lower bound.

    Some of my other favorite theorems in complexity theory (e.g. Savitch's, and Immerman–Szelepcsényi) can also be thought of as algorithms in disguise (e.g. in Savitch's case, a low-space algorithm for finding paths in graphs). So I guess complexity theory and concrete algorithms are less distinct than we may often think.

  6. I think so. Think about it, what result in AI or other branches of CS would excite you?

    As long as there are no applications of Ryan's result, they won't be excited. That's different for complexity theorists. You missed one of the big issues in your post: not only did we have no progress in Boolean circuit complexity while we kept relating it to lots of other problems like derandomization, we also had no new methods, and the old methods seemed to suffer in a very strong sense. Ryan's paper is interesting because it seems to avoid all known barriers and seems promising for getting stronger results. After all, all you need to do is get a small improvement over the trivial satisfiability algorithm for a circuit complexity class. It seems promising; hopefully there will be a very high-level account and a meta-theorem that will make it clear exactly what is needed, and then lots of people will start working on improving satisfiability algorithms.

  7. Very nice explanation. (I also like Yao's allegory!) There is a lesson here for other fields as well. If you want to promote your field, then promote the cool results! Complexity theory keeps its influence because of its strong outreach efforts.

  8. And don't forget many other important results. Just to mention a few:

    1. Width-5 branching programs are as powerful as formulas [Barrington].

    2. Super-polynomial lower bounds for span programs [Babai, Gal, Pudlak, Wigderson and others].

    3. The pigeonhole principle requires resolution proofs of exponential length, even if the number of pigeonholes is arbitrarily large [Raz, Razborov]

    4. Length vs. width of resolution proofs [Ben-Sasson, Wigderson]

    5. Exponential lower bounds for the length of cutting plane proofs [Impagliazzo–Pitassi–Urquhart, Krajicek, and Bonet-Pitassi-Raz, Pudlak]

    ... and many, many more. All this requires (and produces!) quite non-trivial mathematics (not diagonalization, not simulation, not counting anyway). In our expectation for "great separations" of complexity classes we perhaps forget the main goal of computational complexity: to understand WHY some computational tasks are easy and why others are hard, not just "there exist some difficult tasks in some classes".

  9. The blog post fails to answer the question in its title.

  10. I agree with above anon.

    Are there any consequences for other areas (e.g. algorithms) from this result? If so, can you explain them?

    Does this give other complexity-class consequences based on assumptions about the existence of faster SAT algorithms?

  11. A question: Can parity be computed by constant-depth circuits, even if we allow the circuits to be of exponential size? The claim that parity "requires" circuits to be at least exponential doesn't quite seem to be an affirmative answer to that question.

  12. "So I guess complexity theory and concrete algorithms are less distinct than we may often think."

    Amen to that. Note that the results which Ryan uses (Coppersmith's matrix mult algorithm, time hierarchies, the characterization of ACC^0 by depth 2 AND-SYM circuits) have been around for a while. Part of the reason the result has been a long time coming is that algorithmists and complexity theorists don't talk much to each other.

  13. It's hard not to be excited about this result when you understand it's a lower bound for a problem as natural as being in NEXP, and in a computational model as strong as constant depth with mod 6 gates :)

  14. You can compute Parity as an OR of 2^{n-1} ANDs of literals: just take one monomial for each accepted vector.
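
    For example, with n = 3 this gives Parity(x1, x2, x3) = (x1 ∧ ¬x2 ∧ ¬x3) ∨ (¬x1 ∧ x2 ∧ ¬x3) ∨ (¬x1 ∧ ¬x2 ∧ x3) ∨ (x1 ∧ x2 ∧ x3): one AND for each of the 2^{n-1} = 4 inputs with an odd number of 1s. So constant depth (in fact depth 2) suffices once exponential size is allowed.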

  15. This is a very nice post, Lance. I want to see more such posts that explain the main results of an exciting paper so lucidly that readers can't help but read it.

  16. Very interesting post. I'm happy that complexity theorists are excited about the new result, but I still fail to see one point: why are mod-n gates interesting? I really believe the result is amazing, from what people are saying. Let me explain why I fail to see the reason for it even after reading the post.

    The post starts by showing a relation between circuits without mod-n gates and the P=NP question. Then it changes to results on circuits with mod-n gates, where much stronger results were already known for circuits without these gates. It does not show any interesting consequences of not having constant-depth circuits with mod-n gates, even if that were proved for problems easier than NEXP (say, NP). It is already hard to see why the Razborov and Smolensky result is interesting, since the strong assumption seems to be the "constant depth" part, not whether mod-n gates are allowed.

  17. Is there a story for why this paper isn't on ECCC or the arXiv?

  18. it is like monkeys climbing trees hoping to get to the moon.

    To get to the moon we need some engineering---artificial intelligence, better systems.

    This is not to say that monkeys should stop climbing trees (they are inspiration, after all), just that it is hard to get the engineers excited about a taller tree being climbed (it has no impact on the (more important) engineering).


    I think this analogy loses its force when you remember that, in the world of complexity theory, we're all just monkeys, and none of us (including you!) can even climb particularly tall trees. There are no "aerospace engineers" in complexity theory. But Ryan is now the monkey who, standing on the shoulders of other monkeys, has climbed the tallest tree that any of us monkeys have ever climbed. And instead of sitting on the ground throwing feces at each other, we now have the option to follow Ryan, if not to the moon, then at least to high up in the rainforest canopy, where little slivers of sunlight shine through.

    And I think this metaphor is done. :-)

  19. Lance, in my humble opinion, this is not the right attitude. You are basically stating the reason it is interesting to complexity theorists; you are not explaining why it should be of any interest to others. Put it differently, you are saying that complexity theory had been in a pathetic and embarrassing state with no progress in the last two decades until last week, and now it is *a tiny bit* less embarrassing.

  20. It is already hard to see why the Razborov and Smolensky result is interesting ...

    When climbing, monkeys learn new climbing techniques. And R&S gave us one more of them: approximation by low degree polynomials. We want "great" results without having a "great toolbox". Want quickies without long, long study of "small" problems. Counting, diagonalization and the like will not help monkeys much ... They need real mathematics.

  21. The going-to-the-moon metaphor by McCarty doesn't seem to work. The point of that metaphor is that if you want to go to the moon, the first step is not climbing trees. That metaphor would be appropriate only if you had reason to think that the method developed by Ryan is not going to go far in solving P vs NP. I don't see any reason for this at the moment.

  22. Is Ryan's Theorem Only Interesting to Complexity Theorists?

    This "only" is somewhat too "arrogant". Let us ask: is it interesting for people working in circuit complexity? For those trying to find *explicit* hard functions? Not just to show their existence in some huge sets.

    Let us separate things: this is perhaps a great result in *structural* complexity, but not in *circuit* complexity.

  23. For those trying to find *explicit* hard functions? Not just to show their existence in some huge sets.

    I don't understand this complaint: if you want an explicit function that is hard for ACC, just take any NEXP-complete function...

  24. Modern complexity theory suffers from being simultaneously so abstract as to be absolutely useless in real life, yet so complicated as to require pages and pages just to define.

    IMHO further breakthroughs in complexity theory will need to come from a totally fresh standpoint, a revolution or something. To an outsider, it seems you guys are just making longer and longer acronyms, conjecturing whether N-COMP-EXP-SPACE-CLOSURE-PRIME is equivalent to NP-PRECOMP-HARD-REDUCIBLE and so on.

    In fairness, it sounds like Ryan's result is more useful in real life than, say, Fermat's Last Theorem, and we all know how excited that made everyone. Again, though, FLT doesn't take pages to define...

  25. Very poor metaphor Scott! Even you can do better.

  26. I don't understand this complaint: if you want an explicit function that is hard for ACC, just take any NEXP-complete function...

    But what about this: Is Majority in ACC? Is Multiplication of two n-bit numbers there? Does this result answer these questions?

  27. Anonymous says: "Complexity theory [has] been in a pathetic and embarrassing state with no progress in last two decades until last week, now it is *a tiny bit* less embarrassing."

    As Ketan Mulmuley has been emphasizing, this statement is *not* true ... provided that we recognize that one of the key questions in complexity theory is "How big is the class P?"

    If we measure progress in complexity theory by the size of the class of problems that have been demonstrated—either rigorously or empirically—to be solvable in P, then the growth of this class has been truly spectacular, and shows no signs of slowing, and very few fundamental limits to continued growth are known.

    Indeed, it can be argued that a truly great achievement of complexity theory in the past three decades has been to establish that there are surprisingly few (hmmmm ... any?) foreseeable limits to continued expansion in the size of the class of problems in P.

  28. SJ, we all know it's you.

    why don't you yourself show that majority is not in ACC?

  29. John Sidles, you haven't understood my comment. Please reread it. I am not saying complexity theory has been pathetic; I was trying to explain why non-complexity theorists are not excited about this result, and Lance's post is not going to change that.

  30. @John Sidles

    PS: I am personally very excited about Ryan's result and expect stronger results from him (or based on his work) in near future (and hopefully no new barriers).

  31. why don't you yourself show that majority is not in ACC?

    Would like to do this :-) I even hope this could be done using graph complexity, albeit for a little bit more complex function whose communication matrix has many ones but no 2x2 all-1 submatrices.

    But more seriously: Ryan's result is indeed a great separation of two complexity classes, no doubts. B.t.w. recently there were some other intriguing separations, like that of the communication complexity classes \Sigma_2 and UPP (unbounded-error PP) done by Razborov and Sherstov. My only "complaint" is that we call Ryan's nice result a breakthrough in *circuit* (not in *structural*) complexity. I think these are quite different fields, both by the nature of the problems and the techniques used. In structural complexity one tries to answer more global questions, like whether space is a more powerful resource than time, etc. Whereas in circuit complexity one tries (at least I try) to answer more "pragmatic" questions like that about the Majority. Hence the difference in techniques used.

  32. @John Sidles:

    Ketan's argument isn't very good. Here it is in a nutshell:

    "Provided that we include things that are not normally considered complexity and that do not use the methods of complexity then complexity is doing ok".

    Pretty weak, don't you think?

    I do believe it is high time that complexity examines its lack of progress over the last twenty years, yet I'm equally surprised that all of these negative comments about complexity come out now, when finally some progress has been made.

    Ryan's result is a breakthrough and the first hint that complexity might yet start to move again after the last two decades of stasis.

  33. I am surprised by the strongly negative reaction to my "innocent" doubt concerning our *allocation*, not the *merits* of the result. Concerning Lance's post, not Ryan's result itself. Concerning the difference between structural and circuit complexity. Do you guys really think there is no difference?

  34. The monkeys that climbed the highest trees developed an advantage, evolved into humans, then went to the moon. I don't get the allegory. Perhaps it is a statement about time?

  35. Last anon, that was hilarious.

    No, it is not about time, it is about what the intelligent thing to do is. Scientists at NASA were not climbing trees as a first step towards going to the moon; similarly, proving lower bounds for very weak models of computation like constant-depth circuits is not the right step towards separating P vs NP. So we should stop claiming that we are making progress towards solving P vs NP by proving lower bounds for very weak models of computation.

    Creating programs capable of a very limited amount of intelligence (even less than a monkey's) and claiming that the ultimate goal is human-like intelligence, achievable in the near future, is just bogus.

  36. So we should stop claiming that we are making progress towards solving P vs NP by proving lower bounds for very weak models of computation.

    That circuit lower bounds can prove P!=NP is a joke, I agree completely. A sentence in grant proposals, no more. This is clearly not the goal of circuit complexity. Many bright brains (Kolmogorov among them) even conjectured that all of P can be done by linear-size circuits. But should humans first try to go to the moon before they have tried to discover the wheel?

  37. I think I have a decent high-level understanding of this paper now, thanks to some inspired talks by Pavan Aduri over the last two weeks, and some of the criticisms left on this blog are missing the point.

    The results in the Williams paper are obtained through remarkable breadth and cleverness, as opposed to the depth of original arguments. The author combines several known results in a novel way, and the circuit lower bounds follow directly from that novel combination. This feels different to me from, say, Razborov's method of approximations, which to my naive eye looks as though it came out of nowhere -- that paper/technique seems more "deep" than "broad."

    I'm sure some people will take this as a slam against complexity theory, or against Ryan Williams, because I am saying the new result is broader than it is deep, but I mean it as a compliment to both him and the field, and I think it should be taken that way. What Williams has shown is that all the ingredients -- the work on fast matrix multiplication, the SYM+ characterization of ACC circuits, etc. -- were already there, once they were viewed with the new eyes he brought to the problem. Among other things, this implies that there has been steady advancement in circuit lower bounds over the last twenty years, because Williams crafts a jigsaw out of puzzle pieces/theorems proved during that time period.

    Rather than this being the breakthrough showing all those other schmoes were wrong, it is more a confirmation that a lot of people have been on the right track.

  38. Some people mentioned that Williams' lower bound was derived by combining some existing results. I would like to emphasize that Williams' main contribution is the new philosophy of proving lower bounds via upper bounds. This method will be much more important than Razborov's approximation method.

  39. Unfortunately, the so-called "advances" in complexity theory are advances only in the eyes of complexity theorists. Ryan's theorem is of interest to a handful of people who earn their bread and wine by doing similar things. To the rest of the world, these "advances" make little sense and have little effect. If someone asks me how Intel's advances in gesture recognition technology will affect him/her, I can give a convincing answer. But if someone asks me how Ryan's theorem will affect him/her, I won't be able to say anything sensible. And I think even Ryan will not be able to give a convincing answer. -- guptill

  40. @Anon 12:15 AM, November 22, 2010

    It seems that you have forgotten that complexity theory is part of THEORETICAL computer science. Comparing it to HCI and other applied CS is like comparing apples to oranges. I can criticize HCI based on the criteria we have in TCS, but that does not mean much, right? Your comment is a little bit insulting to me; just to get the feeling, would you like me to say HCI is engineering and not computer *science*?

  41. That's why we are so excited about this paper.

    I don't think that you've really captured why I find Ryan's result so interesting. It is not that it proves the specific lower bounds for ACC^0. It is the fact that it strongly validates the fascinating new approach that Ryan sketched in his STOC paper, a fundamentally new idea about how to approach circuit lower bounds for uniform exponential-time complexity classes. (Indeed the result in some sense seems less about ACC^0 than about NEXP.)

    By the way, in all the recent discussion of ACC^0 I have seen, none has mentioned a characterization of ACC^0 that is very natural and intuitive and is part of a continuum that includes TC^0 as well:

    ACC^0 can be characterized as the class of the Boolean functions computable by polynomial-size constant depth unbounded fan-in arithmetic circuits over finite fields.

    If one allows the size of the field to grow as a function of the number of input bits, then TC^0 can be characterized as the Boolean functions computable by such circuits when the field size grows anywhere from log^{Omega(1)} n to n^{O(1)}.

  42. Unfortunately, the so-called "advances" in complexity theory are advances only in the eyes of complexity theorists. Ryan's theorem is of interest to a handful of people who earn their bread and wine by doing similar things. To the rest of the world, these "advances" make little sense and have little effect.

    Would you apply the same argument to every advance in math, theoretical physics, astronomy, and other fields without immediate applications?

    If not, then how do you justify the double-standard?

    If so, then why do you spend your time reading an academic blog?

  43. By the way, in all the recent discussion of ACC^0 I have seen, none has mentioned a characterization of ACC^0 that is very natural and intuitive and is part of a continuum that includes TC^0 as well:

    ACC^0 can be characterized as the class of the Boolean functions computable by polynomial-size constant depth unbounded fan-in arithmetic circuits over finite fields.


    This is probably the nicest characterization of ACC0 known. For those interested, the reference is: Agrawal, Allender, Datta, "On TC0, AC0, and Arithmetic Circuits".

    - Not Agrawal, Allender, or Datta
