Rahul Santhanam guest posts with thoughts from last week's Barriers in Computational Complexity workshop in Princeton.

1. What's a complexity theorist's definition of an algorithm? A failed attempt to prove a lower bound. One of the more famous examples of this is Arora's approximation algorithm for Euclidean TSP. At the workshop, I heard about a couple more. Chris Umans mentioned that the new algorithms for matrix multiplication in his paper with Henry Cohn came about in part because of a failed attempt to prove their approach was unviable. While chatting with Dave Barrington about his counter-intuitive characterization of NC^1 by bounded-width branching programs, I learned that this too resulted from an "obstruction" to proving a lower bound. (By the way, Dave's result was voted the most surprising result in complexity theory by an eminent panel on the first day.) It struck me that this ability to toggle freely between "hard" (reductions from complete problems, oracle & black-box results) and "easy" (algorithms) viewpoints is a very productive component of our way of thinking. Of course this is formalized in results such as Razborov-Rudich and Kabanets-Impagliazzo, but it's also an indication that Nature is far from adversarial. It could have been that interesting problems are not just hard but also that we could say nothing interesting about their hardness... Instead, we have an extensive theory, a rich web of implications. Sure, lower bound proofs are difficult, but then there are the chance flowerings of failed attempts.
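The heart of Dave's construction is a simple group-theoretic trick: a Boolean value is encoded by whether a width-5 program multiplies out to a fixed 5-cycle in S_5 or to the identity, and AND is then realized by a commutator of two such programs. Here is a minimal self-checking sketch of just that commutator step (the particular pair of 5-cycles is one choice that happens to work; this is a toy illustration of mine, not the full width-5 construction):

```python
# Toy illustration of the commutator trick behind Barrington's theorem:
# if program P multiplies out to the 5-cycle A when f = 1 (identity when f = 0),
# and program Q multiplies out to the 5-cycle B when g = 1 (identity when g = 0),
# then P Q P^{-1} Q^{-1} yields the commutator [A, B] exactly when f AND g = 1,
# and the identity otherwise.

def compose(p, q):
    """Composition p after q; permutations are tuples acting on {0, ..., 4}."""
    return tuple(p[q[i]] for i in range(5))

def inverse(p):
    inv = [0] * 5
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def is_five_cycle(p):
    """True iff the orbit of 0 under p has length 5, i.e. p is a single 5-cycle."""
    x, length = p[0], 1
    while x != 0:
        x, length = p[x], length + 1
    return length == 5

IDENTITY = (0, 1, 2, 3, 4)
A = (1, 2, 3, 4, 0)  # the 5-cycle (0 1 2 3 4)
B = (2, 0, 4, 1, 3)  # a 5-cycle chosen so that the commutator [A, B] is also a 5-cycle

for f in (0, 1):
    for g in (0, 1):
        p = A if f else IDENTITY
        q = B if g else IDENTITY
        comm = compose(p, compose(q, compose(inverse(p), inverse(q))))
        # The commutator is a 5-cycle exactly when f AND g = 1, else the identity.
        assert is_five_cycle(comm) == bool(f and g)
        assert (comm == IDENTITY) == (not (f and g))

print("commutator trick verified on all four input combinations")
```

In the actual theorem the resulting program is conjugated back to a standard 5-cycle so the trick can be applied recursively up a log-depth circuit, with the program length growing by only a constant factor per level - hence polynomial-length, width-5 branching programs for all of NC^1.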
2. The highlight of the workshop for me was talking with Ketan Mulmuley about his approach to P vs NP using geometric invariant theory. Ketan has been working on this approach for more than a decade now. Given his seminal work on semantics of programming languages, parallel algorithms and computational geometry, and his reputation as a brilliant problem solver, he was always going to be taken seriously. But there was also an element of mystification - was all this high-powered mathematics really necessary to attack P vs NP? Why spend time learning about this approach when it might not pay any dividends for the next half a century? And how could Ketan claim that this was in a sense the only possible approach to P vs NP? The fact that Ketan wasn't comfortable evangelizing about his approach didn't help matters.
It seems to me that there's now a qualitative shift. There seem to be two factors in the shift - the first is that Ketan is more confident in the viability of his program than he was in the formative stages, and the second is that he is more aware now of the importance of communicating both with mathematicians and with complexity theorists about his approach. Indeed, the scale of the project is so immense that collaborations across the board are necessary to its success.
To his credit, Ketan has realized this. Earlier, when the possibility was suggested to him that his approach might lead to weaker but still interesting lower bounds along the way, or to connections with other problems, he didn't seem very interested, possibly because he was focused on the ultimate goal of separating P and NP. Now, he is more actively encouraging of the search for such connections. Also, he doesn't claim that his approach is the only one for separating NP vs P - such a claim is hardly tenable. Instead, he makes a much more nuanced argument in terms of the barriers that other approaches run up against and which his own program avoids, as well as for the mathematical naturalness and aesthetic value of his approach. As for the time scale of the project, it's true that carrying it out in its present form would involve solving mathematical conjectures that algebraic geometers consider far beyond the reach of present techniques. But there is always the possibility of short cuts arising from unexpected connections and new ideas. For this reason, estimates of time scales are speculative, and perhaps not all that relevant.
The P vs NP question is the most important in our area, and as of now there seems to be exactly one program (in the mathematical sense) for solving it, and that's Ketan's program. Simply for that reason, it deserves serious study. At the very least, the decade of mathematical labour that has gone into developing the approach, together with the current efforts to explicate the approach and its relation to barriers, both complexity-theoretic and mathematical, has raised the standards for any future approach to be taken seriously.
The best sources for learning about the approach are his Complexity Theoretic Overview of GCT and Mathematical Overview of GCT.
3. A few people at the workshop questioned the focus on barriers. Ran Raz gave a great talk in which the "barrier" slide merely had a picture of the human brain (but then, isn't it even more unlikely that a computer could prove P != NP?). Why are we so obsessed with barriers? Perhaps it's because we are computer scientists rather than mathematicians. Mathematicians don't care about constructivity - they believe that proofs exist, however long it takes to find them. It was a question of when Fermat's last theorem would be proved, not whether. We, however, are used to doing things efficiently (at least in theory). So if we fail to find a proof quickly, the fault surely "lies not in ourselves but in our stars". Walls tend to spring up around us just as we're on the verge of something extraordinary. Oracles give us gloomy portents. We're forced to be unnatural. ZFC shrugs and turns away from the question...
Yet there is something uncanny about the whole thing. Lower bound questions can be formulated (and have been formulated) in many different ways mathematically - there hasn't been any real progress with any of these formulations. Just as an algorithm for SAT would also solve every other NP-complete problem, a lower bound proof would say something about all these formulations at once, which seems odd since they pertain to apparently unrelated areas of mathematics.
Just another barrier with which to amuse ourselves.
4. The very last talk of the workshop was given by Luca Trevisan, on the advantages of being a polyglot. Historically, complexity theory is rooted in logic and combinatorics. As it matures as a discipline, theorists are able to ply their skills in other mathematical provinces. Pseudorandomness is an especially "extroverted" part of complexity theory. Theorists have made important contributions to problems such as explicit constructions of Ramsey graphs (Barak-Rao-Shaltiel-Wigderson) and the Kakeya conjecture (Dvir) using the language and techniques of pseudorandomness.
Luca's talk was about connections between pseudorandomness and additive number theory, with an emphasis on the result by Green and Tao that the primes contain arbitrarily long arithmetic progressions. He made the point that the techniques that go towards proving this result can be phrased in a couple of different languages: the language of functional analysis (functions, norms) and the language of pseudorandomness (distributions, distinguishability, statistical tests). It's useful to construct a "dictionary" between these two languages, since concepts that are transparent when phrased in one language become less so when translated into the other. For example, the functional analysis viewpoint implies that distributions and adversaries are the same kind of object, which seems strange from the other viewpoint. Not only is this dictionary useful for learning the new language, but it also exposes new concepts that our native language is not well equipped to handle. Indeed, there have already been many fruitful applications of the Gowers uniformity concept to theoretical computer science, including the Samorodnitsky-Trevisan work on low-error PCPs, the work by Bogdanov, Viola and Lovett on PRGs for low-degree polynomials, and the recent beautiful work by Kolaitis and Kopparty on modular convergence laws for first-order logic with the Mod p operator. It seems likely that there are many fruitful connections still unexplored. Luca's survey in the SIGACT News complexity column (PDF) is well worth checking out.
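As one small concrete entry in such a dictionary, here is a brute-force Python sketch (my own toy illustration, not code from the talk) checking that the Gowers U^2 norm of a Boolean function on F_2^n, defined analytically as an average over additive quadruples, equals the sum of fourth powers of its Fourier coefficients - that is, it measures how well the function correlates with linear tests:

```python
# Brute-force check of a basic dictionary entry: the (analytic) Gowers U^2 norm
# of f : F_2^n -> {-1,+1} equals the L^4 norm of its Fourier spectrum, which
# quantifies correlation with the simplest statistical tests, the linear ones.

from itertools import product
import random

n = 4
points = list(product((0, 1), repeat=n))

def xor(x, y):
    return tuple(a ^ b for a, b in zip(x, y))

def dot(a, x):
    return sum(ai * xi for ai, xi in zip(a, x)) % 2

# A random Boolean function in +/-1 notation.
random.seed(0)
f = {x: random.choice((-1, 1)) for x in points}

# Analytic side: ||f||_{U^2}^4 = E_{x,h1,h2} f(x) f(x+h1) f(x+h2) f(x+h1+h2).
u2_fourth = sum(
    f[x] * f[xor(x, h1)] * f[xor(x, h2)] * f[xor(xor(x, h1), h2)]
    for x in points for h1 in points for h2 in points
) / len(points) ** 3

# Pseudorandomness side: Fourier coefficients measure correlation with the
# linear tests x -> (-1)^{a.x}; the identity says ||f||_{U^2}^4 = sum_a fhat(a)^4.
def fourier_coefficient(a):
    return sum(f[x] * (-1) ** dot(a, x) for x in points) / len(points)

spectral_fourth = sum(fourier_coefficient(a) ** 4 for a in points)

assert abs(u2_fourth - spectral_fourth) < 1e-9
print("U^2 norm to the fourth:", u2_fourth, "= sum of fourth powers of Fourier coefficients")
```

The higher U^k norms play the analogous role with respect to degree-(k-1) polynomial tests, which is the setting of the PRG results mentioned above.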
5. There were also several cool talks at the workshop where I learned about new results. Ben Rossman talked about average-case monotone lower bounds for Clique under natural distributions - this is a problem that has been open for a while. He shows that there are two values of the edge probability for Erdos-Renyi graphs, p_1 and p_2, such that no monotone circuit of size less than n^{k/4} can solve k-Clique well on average on both of the corresponding graph distributions. This result complements Ben's other recent result showing that k-Clique does not have constant-depth circuits of size less than n^{k/4}, and uses some of the same techniques, inspired by intuitions from finite model theory. Toni Pitassi spoke about work with Paul Beame and Trinh Huynh on "lifting" proof complexity lower bounds from rank lower bounds for Resolution to lower bounds for stronger systems such as Cutting Planes and Lovasz-Schrijver. This builds on the "pattern matrix" method of Sherstov, which Toni discovered was also implicit in the Raz-McKenzie separation of the monotone NC hierarchy from more than a decade back (see correction below). Of course it would be very interesting to "lift" circuit lower bounds in this fashion, but few results of that kind are known. Adam Kalai talked about work with Shang-Hua Teng on learning decision trees and DNFs under smoothed distributions - product distributions where every co-ordinate probability is chosen uniformly at random from some small range. These learning algorithms do not use membership queries - corresponding results for the uniform distribution would solve longstanding open problems. Adam made the point that his result can be thought of as modelling Nature in a way that is not fully adversarial. At least for "most" reasonable distributions, we can in fact learn efficiently.
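To make the smoothed model concrete, here is one way to read it, as a short Python sketch (the base biases and the perturbation width are arbitrary choices of mine, purely for illustration): each coordinate's probability is drawn uniformly at random from a small range around a base value, and examples are then drawn from the resulting product distribution.

```python
# Illustrative sketch of a "smoothed" product distribution over {0,1}^n, in the
# spirit of the model described above: each coordinate probability is perturbed
# uniformly at random within a small range, and examples are drawn independently
# coordinate by coordinate. Base biases and width c are toy choices.

import random

def smoothed_biases(base_biases, c, rng):
    """Perturb each coordinate bias uniformly in [p - c, p + c], clipped to [0, 1]."""
    return [min(1.0, max(0.0, p + rng.uniform(-c, c))) for p in base_biases]

def sample(biases, rng):
    """Draw one example from the product distribution with the given biases."""
    return [1 if rng.random() < p else 0 for p in biases]

rng = random.Random(0)
n = 10
base_biases = [0.5] * n          # an adversary could pick these; smoothing perturbs them
biases = smoothed_biases(base_biases, c=0.1, rng=rng)

# A batch of unlabeled examples; a learner for decision trees or DNFs would see
# such examples together with their labels, with no membership queries.
for _ in range(5):
    print(sample(biases, rng))
```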
6. The workshop was a great success, in part because it brought together more complexity theorists than any current conference does. It was also very smoothly organized. Thanks to Avi, Russell, the session organizers, the admin staff and the student volunteers for making it such a valuable experience.
Correction from Sherstov (10/5/09): What Toni meant is that the two works study related communication problems (just like there are many papers on the disjointness problem); the two differ fundamentally as to the techniques used and results achieved. This point is clarified in the revised version of Toni's paper on ECCC (page 3, line -15).