Sunday, November 27, 2011

The Death of Complexity Classes?

In the 2011 Complexity proceedings there are three papers that analyze complexity classes: Ryan Williams' great paper on ACC, Russell Impagliazzo's paper on average versus worst case for NP, and Hartmut Klauck's paper on AM in communication complexity. And that's the conference devoted to complexity. At STOC, there was only the Aaronson-Arkhipov paper on linear optics.

Complexity classes capture important aspects of problems based on how they can use various resources. The complexity zoo has nearly 500 classes. Many of the greatest theorems in theoretical computer science relate classes, like NPSPACE = PSPACE, NL = co-NL, PH ⊆ P^#P, IP = PSPACE, NP = PCP(log n,1), SL = L, NEXP ⊄ ACC and many others. The most important open question in our field (and maybe any field) is the relationship between the classes P and NP.

So why do we see so few papers about complexity classes these days? We are victims of our own successes and failures. We have succeeded in establishing most of the relationships between classes for which we don't have relativizable counterexamples, and we have failed, outside of interactive proofs, to develop many tools that let us get around relativization and other barriers.

So here we sit, still putting out the occasional paper on complexity classes but mostly hitting a wall. Occasionally we see a nice breakthrough like Ryan's paper but mostly we are just clawing at that wall waiting for someone to create a wrecking ball so we can make real progress. 

18 comments:

  1. How many of these classes are used by people outside TCS? I think this might be a good measure of the useful output.

  2. It's funny, I had a dream last night involving someone separating NC1 from P. It was probably Ryan Williams though---so even in my dream-world, your point about Ryan being among the few people making progress on fundamental relationships between complexity classes might still stand.

  3. Do you witness the same "issue" with parameterized complexity?

  4. Yeah, I noticed the problem when I worked on Complexity Zoology. I got the sense that it's a great topic that very few people are still pursuing.

    But besides just "hitting a wall", the list of names involved revealed a striking pattern of CS departments marginalizing complexity theory, and even CS theory in general. Many CS departments, though not all, and less so the best ones, seem to view complexity theory or CS theory as a boutique product that they don't need. This is my explanation for why I found people who I thought had done quite strong research at relatively lower-ranked universities --- the threshold of employability just seems to be higher for them.

    Quantum computation is a partial exception, because that is all the rage in the media and at funding agencies.

    There is a parallel phenomenon with string theory in physics departments. The best antidote that I can think of in both cases, even though it is far from perfect, is math departments. If you're proving theorems for whatever reason, then your work should be taken seriously as mathematics. If CS or physics or any other department loses interest in a subfield that proves theorems, then instead of everyone in that subfield being squeezed out of academia entirely, math should play pickup.

  5. Daniel asked: "How many of these classes are used by people outside TCS?" I second him, a good question. Lance said "The complexity zoo has nearly 500 classes". This is my second "sorrow": is this not too much? When I started to work in complexity, there were relatively few classes, each capturing a fundamental resource: P, L, NC, NP, EXP, PSPACE, etc. Then came classes with subscripts/superscripts (NP^NP, etc.), and I was lost and turned to "simpler" things, without classes. Having 500 classes looks like the classification of species in biology. Will somebody seriously try to separate (or prove equality of) all these 500 classes? I understand that classes are our "flags". But with so many flags it is not clear where THE right route is.

    @Greg: "If you're proving theorems for whatever reason, then your work should be taken seriously as mathematics." Why then do we write long "motivations" in our papers? I think everything reduces to the "degree of seriousness", to how seriously our theorems influence the mathematics of computer science. Just proving "a" theorem is not enough.

  6. Stasys - Of course not all theorems are equally interesting, and I'm in favor of motivation as much as anyone else. That said, there is a fashionable concern lately in TCS that papers are too technical and don't have enough conceptual motivation. Maybe it reflects self-conscious fears about the status of TCS within computer science.

    Again, I think that math departments are relatively charitable towards good ideas just because they are good ideas --- at least if they are rigorous --- instead of ignoring good ideas because they are in the "wrong" area. I can't say that math departments are above petty judgments based on area of research, but we might at least be less bad about it.

  7. I thought Lance's analysis was more or less the reason why the name of the conference "Structure in Complexity Theory" was changed to "Computational Complexity". That was quite a few years ago. Do you mean that things have gotten even worse since then?

  8. You missed at least one that fits your definition (Russell Impagliazzo's paper on oracle separations of worst-case vs. average-case NP), but I think that you have missed other topics by taking too narrow a view when you describe papers as being "about" complexity classes.

    I think that your dichotomy is wrong and goes back to discussions 25 years ago about "structural" versus "concrete" complexity theory in the founding of CCC as "Structure in Complexity Theory". Though even at the beginning there were papers of both sorts (those that focused on the classes - at that time often inspired by recursion-theoretic ideas - versus those that focused on concrete computational problems), it wasn't until the early 1990's that the two groups realized that the separation was kind of silly. An exemplar for the marriage of the two is Noam Nisan's beautiful paper on his PRG: RL ⊆ SC. In core content the paper is very closely related to today's papers about pseudorandom generators for simple computing devices like read-once branching programs and polynomial threshold functions.
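    For readers who haven't seen Nisan's construction, here is a toy sketch of the recursion behind it. It is illustrative only: the block length (8 bits), the GF(2^8) hash family, and the function names are choices made here for readability, not the parameters of the actual RL ⊆ SC argument.

    ```python
    # Toy sketch of Nisan's generator for space-bounded computation, the engine
    # behind RL being in SC. Block length n = 8 bits and the GF(2^8) hash family
    # are chosen for readability, not to match the theorem's parameters.
    import random

    N_BITS = 8
    MODULUS = 0x11B  # x^8 + x^4 + x^3 + x + 1, irreducible over GF(2)

    def gf_mul(a, b):
        """Multiply two elements of GF(2^8)."""
        result = 0
        while b:
            if b & 1:
                result ^= a
            a <<= 1
            if a & (1 << N_BITS):
                a ^= MODULUS
            b >>= 1
        return result

    def make_hash():
        """Sample h(x) = a*x + b over GF(2^8), a pairwise-independent family."""
        a = random.randrange(1 << N_BITS)
        b = random.randrange(1 << N_BITS)
        return lambda x: gf_mul(a, x) ^ b

    def nisan_generator(x, hashes):
        """Expand one n-bit block into 2^k blocks using k hash functions:
        G_0(x) = x
        G_i(x; h_1..h_i) = G_{i-1}(x; h_1..h_{i-1}) || G_{i-1}(h_i(x); h_1..h_{i-1})
        """
        if not hashes:
            return [x]
        *rest, h_k = hashes
        return nisan_generator(x, rest) + nisan_generator(h_k(x), rest)

    if __name__ == "__main__":
        k = 4
        seed_block = random.randrange(1 << N_BITS)
        hs = [make_hash() for _ in range(k)]
        output = nisan_generator(seed_block, hs)
        print(f"seed: 1 block + {k} hashes -> {len(output)} pseudorandom blocks")
    ```

    The point is that the entire exponentially long output is recomputable from the short seed, which is what lets the randomness be traded away in the SC simulation.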

  9. Meanwhile, much more interesting things have been happening in algorithms. Speaking of Ryan's result, his wife Virgi just announced an even more groundbreaking result: a new bound on the matrix multiplication exponent (see the sketch below for what that exponent measures)!

    http://www.cs.berkeley.edu/~virgi/matrixmult.pdf

    I hope we can have a post about this breakthrough soon!
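    For context, the matrix multiplication exponent ω is the smallest c such that two n × n matrices can be multiplied in roughly n^c arithmetic operations; the announced paper improves on the long-standing Coppersmith-Winograd bound of about 2.376. The sketch below is not that algorithm: it is Strassen's classical recursion, the first to push the exponent below 3 (to log2 7 ≈ 2.81), included only to illustrate what improving the exponent means. The power-of-two size restriction and function names are simplifications made here.

    ```python
    # Strassen's recursion: 7 multiplications of half-size blocks instead of 8,
    # giving O(n^log2(7)) = O(n^2.81). Assumes n is a power of two for simplicity.

    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    def split(X):
        """Split an n x n matrix into four n/2 x n/2 blocks."""
        n = len(X) // 2
        return ([row[:n] for row in X[:n]], [row[n:] for row in X[:n]],
                [row[:n] for row in X[n:]], [row[n:] for row in X[n:]])

    def strassen(A, B):
        n = len(A)
        if n == 1:
            return [[A[0][0] * B[0][0]]]
        A11, A12, A21, A22 = split(A)
        B11, B12, B21, B22 = split(B)
        M1 = strassen(add(A11, A22), add(B11, B22))
        M2 = strassen(add(A21, A22), B11)
        M3 = strassen(A11, sub(B12, B22))
        M4 = strassen(A22, sub(B21, B11))
        M5 = strassen(add(A11, A12), B22)
        M6 = strassen(sub(A21, A11), add(B11, B12))
        M7 = strassen(sub(A12, A22), add(B21, B22))
        C11 = add(sub(add(M1, M4), M5), M7)
        C12 = add(M3, M5)
        C21 = add(M2, M4)
        C22 = add(sub(M1, M2), add(M3, M6))
        top = [r1 + r2 for r1, r2 in zip(C11, C12)]
        bottom = [r1 + r2 for r1, r2 in zip(C21, C22)]
        return top + bottom

    if __name__ == "__main__":
        print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
    ```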

  10. It might be worth pointing out that the three CCC'11 papers Lance listed---the only three that "analyze complexity classes"---were (in my opinion) three of the most memorable papers of the conference. This might support Greg's thesis that complexity theory is currently getting short shrift, even within complexity theory! :-) I.e., "the bar is higher" for such papers. Or it might support Lance's thesis that so much is already known about basic complexity classes that, if something new is discovered, then it's likely to be important.

    @Daniel Lemire: Would you also say that a good measure of particle physics' "useful output" is how often the top quark, W boson, etc. get used by people in other fields? Or that a good output measure for astronomy is how often other fields need to distinguish spiral from elliptical galaxies? If not, then why do you make this argument about complexity theory?

    @Stasys: I've never been able to understand the argument that complexity theory is a failure because there are 500+ classes that at least one person found interesting enough to study. The truth is that, much like with the Periodic Table of the Elements, there are only 15 or so classes that "everyone ought to know" (P, NP, MA, AM, coNP, BPP, PH, PSPACE, P/poly, EXP, BQP, #P, L, NC, AC0). The rest are different combinations of the same basic ingredients, or can be learned on a "need-to-know basis" when and if they arise in specific applications. I feel bad if the Complexity Zoo contributed to giving anyone the wrong impression here.

  11. "How many of these classes are used by people outside TCS? I think this might be a good measure of the useful output."

    I don't think this is a good measure of academic work. We tell our students that there is a difference between a university and a trade school, and that part of the difference is learning for the sake of learning and for the sake of being well-rounded and curious. Why should faculty "output" be judged in the way you suggest? How many departments would be shut down if the same metric were applied?

  12. @Daniel Lemire. "How many of these classes are used by people outside TCS? I think this might be a good measure of the useful output."

    With this kind of attitude no one would've considered the class NEXP, but then we would not have had the hardness of approximation results of today.

    Note the word "Science" in CS. Maybe we should split CS departments into CS and SC departments, the latter standing for "Software Construction". It might help put an end to this type of debate.

  13. The "wrecking ball":

    http://arxiv.org/abs/0907.3965

  14. @Scott: I agree that "the rest are different combinations of the same basic ingredients". But the "rest" already contains almost 500 "species". One feeling I cannot shake: in our world, the encoding of the inputs predetermines the complexity of the problem itself. Say, if a graph is given by a small circuit describing its adjacency relation, then even such trivial problems as "is the graph non-empty?" turn into NP-hard problems (as observed by Galperin and Wigderson in 1983; see the sketch after this comment). So what: should we introduce the next collection of 500+ complexity classes? I don't believe that the difference between the problems "does a graph have at least k edges" and "does a graph have a clique with k edges" lies in the encoding. I am afraid that "classes" do not capture the problem itself, the problem being: WHY is one property harder to detect than the other?

    @Paul: I completely agree that drawing a line between "structural" and "concrete" complexity was just nonsense. Our views of objects are just as complex as the objects themselves. The difference between "structural" and "concrete" lies only in what methods we use, not in what we are interested in.
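    To illustrate the Galperin-Wigderson point above, here is a toy sketch of the succinct-encoding effect. Everything in it (the hypercube adjacency predicate, the function names) is made up for illustration; the original result uses Boolean circuits as the succinct encoding, under which non-emptiness becomes essentially circuit satisfiability.

    ```python
    # Toy illustration of succinct graph encodings: the same question,
    # "is the graph non-empty?", asked of an explicit edge list versus a graph
    # described only by a small adjacency predicate ("circuit").
    from itertools import combinations

    def succinct_adjacency(u, v):
        """A tiny adjacency 'circuit': u ~ v iff they differ in exactly one bit
        (the Boolean hypercube). Any small program/circuit would do."""
        diff = u ^ v
        return diff != 0 and diff & (diff - 1) == 0

    def is_nonempty_explicit(edge_list):
        """Explicit encoding: one glance at the input answers the question."""
        return len(edge_list) > 0

    def is_nonempty_succinct(adj, n_bits):
        """Succinct encoding: the graph has 2^n_bits vertices, so the obvious
        algorithm searches exponentially many pairs relative to the input size;
        in general the question amounts to circuit satisfiability."""
        vertices = range(1 << n_bits)
        return any(adj(u, v) for u, v in combinations(vertices, 2))

    if __name__ == "__main__":
        print(is_nonempty_explicit([(0, 1), (1, 2)]))        # True, trivially
        print(is_nonempty_succinct(succinct_adjacency, 16))  # True, but by search
    ```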

  15. Timing your research (6:52 PM, November 29, 2011):

    There is nothing wrong in walking away from certain problems if there is no evidence that we are ready to mount a frontal attack on them.

    This happens all the time in mathematics, so it shouldn't surprise us if, after twenty years of banging our heads against the (complexity) wall, new students are loath to pursue this research.

    Ironically, you might be writing this obituary 20 years too late, just as CC is about to rise from the grave and walk again, as the recent results in CCC'11 suggest that maybe, just maybe, it is time to come back and have a second look at complexity using modern tools.

  16. There was a time when proving a problem NP-hard was publishable. Now this will only happen in very exceptional cases, not because the reductions became less profound; it's simply that for the community this has become less interesting unless something really exciting is happening. So just defining two complexity classes and separating them is no longer that interesting unless the analysis is fascinating. Now you have to have an interesting motivation or an open problem.

  17. @Unknown: Separating complexity classes is HARD. The reason we have so many complexity classes in the Zoo is that we cannot separate even such apparently distant classes as P and PSPACE. There are more or less natural classes in between those two (AM, NP, coNP, BPP, #P, to name just some important ones). If we could separate any two classes in that chain, it would separate P and PSPACE, since P = PSPACE would collapse everything in between....

  18. The classification of problems according to their complexity is still important. A typical completeness result might not be very interesting theoretically anymore, but for applications it is still a very important task. We should continue to classify problems not just for mathematical reasons, similar to classifications of other objects like simple groups, but also for practical reasons. It might not be deep theoretically but it is still very useful. And if in the process one comes up with a new natural complexity class, so be it. I think one can define a nice project for extending the Complexity Garden and get good funding for employing theory graduates to do it.

    http://qwiki.stanford.edu/index.php/Complexity_Garden
