Tuesday, September 27, 2005

Circuit Complexity and P versus NP

In 1983, Michael Sipser suggested an approach to separating P and NP.
One way to gain insight into polynomial time would be to study the expressive power of polynomial-sized circuits. Perhaps the P=?NP question will be settled by showing that some problem in NP does not have polynomial-sized circuits. Unfortunately, there are currently no known techniques for establishing significant lower bounds on circuit size for NP problems. The strongest results to date give linear lower bounds, and it does not seem likely that the ideas there can go much beyond that.
Over the next few years, circuit complexity played a central role in theoretical computer science. In 1985, Yao showed that parity requires exponential-sized constant-depth circuits, greatly strengthening the bounds given by Furst, Saxe and Sipser. Håstad quickly followed with essentially tight bounds.

Shortly after, Razborov showed that the clique function requires superpolynomial-sized monotone circuits. If we could just handle those pesky NOT gates, then we would have proven P≠NP.

Then Razborov (in Russian) and Smolensky showed strong lower bounds for computing the Mod p function using constant-depth circuits with Mod q gates, for distinct primes p and q. These great circuit results kept coming one after another. We could taste P≠NP.

But then it stopped. We still saw many good circuit complexity papers and some beautiful connections between circuit complexity and communication complexity, derandomization and proof complexity. But the march of great circuit results toward P≠NP hit a wall after the 1987 Razborov-Smolensky papers. As far as we know today, NP still could have linear-sized circuits and NEXP could have polynomial-sized constant-depth circuits with Mod 6 gates.

Boppana and Sipser wrote a wonderful survey on these results in the Handbook of Theoretical Computer Science. The survey remains surprisingly up to date.

10 comments:

  1. I am surprised that you call the Boppana-Sipser survey up to date. I thought circuit complexity was more or less dead (or at least acquired a massive barrier to entry) after the Natural Proofs work of Razborov and Rudich, and that work appeared in 1994, well after the survey was written.

    Siva

    ReplyDelete
  2. It amazes me that people seem sort of vaguely uninterested in Razborov-Rudich. The proof doesn't strike me as any more complicated than, say, Blum-Micali-Yao. And it seems like it should give complexity theory some direction and something to grapple with. But instead people just seem to sort of not care.

    ReplyDelete
  3. Isn't the question of lower bounds still interesting for algebraic circuits? Or does some Razborov-Rudich type obstruction exist for that situation too?

    Rahul.

    ReplyDelete
  4. For me, Razborov-Rudich is a fascinating example of "mining algorithms from a proof." Some logicians talk about this, for example Kohlenbach and Oliva. From what little I have seen, however, that work has not concentrated on discrete math and TCS. The Natural Proofs work, in contrast, is a direct and surprising application of that idea. I suspect that the two communities developed the idea independently, but I don't know.

    In my case, I do care, but at the same time it seems hard to think of any proof technique that does not yield an efficient algorithm yet is also good for separating complexity classes. You are left with counting arguments of various types or diagonalization.

    For a while I was interested in Ehrenfeucht-Fraïssé games, as there is a result showing that deciding the winner of such a game is PSPACE-complete in general. Therefore an algorithm mined from such a proof might be too expensive to act as an efficient distinguisher for a pseudo-random function. Unfortunately, that doesn't mean that the game used to separate a particular pair of complexity classes is hard to decide... so it's not clear what this means. Perhaps you could find a new proof of a result established via an EF game, but "pump up" the complexity of the game used for the separation?

    Here is another question: is there any known proof technique which is both not natural and also does not relativize?

    ReplyDelete
  5. On my understanding, Fortnow & co.'s method of 'arithmetization' is both non-natural and nonrelativizing:

    http://citeseer.ist.psu.edu/babai91arithmetization.html

    Basically, and with variations, the prover interpolates a low-degree polynomial through a boolean function, then runs an interactive proof to demonstrate a value or identity involving the polynomial. It works roughly because distinct low-degree polynomials differ at most points, and this difference is probabilistically detectable by the verifier.
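
    To make that concrete, here's a toy sketch in Python (my own example and naming, not the construction from the paper): arithmetize the Boolean connectives by NOT(a) = 1-a, AND(a,b) = ab, OR(a,b) = a+b-ab, and observe that two distinct low-degree polynomials agree at few random points of a large field (Schwartz-Zippel).

        # Toy illustration only: arithmetization plus random evaluation.
        # The modulus and the example polynomials are my own choices.
        import random

        P = 101  # prime modulus, much larger than the polynomial degrees

        def NOT(a): return 1 - a
        def AND(a, b): return a * b
        def OR(a, b): return a + b - a * b

        # Arithmetization of the clause (x OR NOT y OR z): a degree-3
        # polynomial that agrees with the clause on {0,1} inputs.
        def clause(x, y, z):
            return OR(OR(x, NOT(y)), z) % P

        # Some other low-degree polynomial.
        def other(x, y, z):
            return (x + y + z) % P

        # Distinct low-degree polynomials rarely agree at a random point,
        # so random evaluation distinguishes them with high probability.
        trials = 10000
        agree = 0
        for _ in range(trials):
            pt = [random.randrange(P) for _ in range(3)]
            if clause(*pt) == other(*pt):
                agree += 1
        print(agree / trials)  # roughly at most degree/P, per Schwartz-Zippel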

    It's nonrelativizing because the natural complete problems one can arithmetize (e.g. 3-SAT) are no longer complete under relativizations--their vocabulary fails to capture the dependence of a nondeterministic machine's behavior on exponentially many oracle bits. One can't hope to fix the technique because of Chang et al.'s result that IP^A doesn't contain coNP^A for random A; this is proved by standard techniques w/o reference to interpolation.

    It's not 'natural' because it's just not a Razborov-Rudich hardness test for circuits (in any form I understand, at least). Its main thrust is to prove surprising class *equalities*. However, there is at least one nonrelativizing circuit lower bound that follows; see Buhrman-Fortnow-Thierauf,
    http://citeseer.ist.psu.edu/10573.html

    How is this done? (The latter paper is short, so see for yourself..) Basically it's the same 'a bridge too far' method that Fortnow argues makes diagonalization a still-viable tool: class collapses combine powerfully with each other to produce further collapse; arithmetization is a collapsing technique; the assumed collapse one is trying to disprove then causes so much collapse that you contradict the classical (relativizable) hierarchy theorems.

    As I see it, there's no particular reason to expect this particular technique to solve all complexity questions. To argue this, you might construct two oracle worlds that share the known equality results garnered by arithmetization yet differ as to whether e.g. P = NP.

    ReplyDelete
  6. Are there any known results for circuits which consist only of xor gates?

    Consider the following problem: we have n inputs, and n = p * p for some prime p. Each input can be specified as (x, y) for x, y in [0, p). We also have n outputs, referred to in the same way. The value of the (x, y) output is equal to the xor of each input (m, (x + y * m) mod p) for m in [0, p).

    There's no pair of inputs which gets xored together for two different outputs in this problem, so it's 'obvious' that the smallest number of xor gates which can be used is achieved by simply doing all the xors for each output separately, which leads to a circuit complexity just under n * sqrt(n). It's also 'obvious' that using non-xor gates can't help compute the values any faster.

    Clearly that second conjecture hasn't been proven, because it would imply a superlinear circuit lower bound. It seems like the first conjecture should be easy to prove, though, but I can't get anywhere with it.

    -Bram

    ReplyDelete
  7. Bram, I'm unsure of how to interpret your problem. Try to state it more formally. Especially worrisome: what is the xor of numbers in Z_p? Do you mean addition mod p?

    Also: be aware that if your input is n pairs of elements of [0, p), the strict input 'size' is bit-encoding size, i.e. ~2n*log(p) = 2p^2log(p). Your conjectured circuit lower bound, to be interesting, has to be superlinear in *this*, not just in n.

    Finally, your output is more than a single bit, so it's not a decision problem (the more commonly studied setting), and more is required to state it. Still, proving you need a big circuit might be easier. But ask yourself whether the individual output bits seem nearly as hard to produce; if so, you might prefer to concentrate on the problem of computing just one of them.

    If the complexity seems instead to come from there being many outputs, the issue is those outputs being relatively 'orthogonal' as mathematical entities--combining their computations doesn't give significant savings. I'm not aware of natural problems for which this is known in a strong way, and I'm not sure your problem is the one to achieve it since the bits seem integrally related.
    Good luck.

    P.S. For a classic example of how problems can yield savings when computed together, try to write a program to find both the max and min of n numbers with many fewer comparisons than just superimposing a max-search and a min-search.
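
    (Spoiler sketch, if you'd rather not work it out yourself: pair up the elements, spend one comparison within each pair, then compare the pair's larger element to the running max and its smaller one to the running min, for about 3n/2 comparisons instead of 2n.)

        # Sketch of the pairing trick: max and min together in about 3n/2
        # comparisons, versus about 2n for two independent scans.
        def max_min(a):
            if not a:
                raise ValueError("empty input")
            if len(a) % 2:    # odd length: seed both with the first element
                hi = lo = a[0]
                rest = a[1:]
            else:             # even length: seed with the first pair
                hi, lo = (a[0], a[1]) if a[0] >= a[1] else (a[1], a[0])
                rest = a[2:]
            for i in range(0, len(rest), 2):   # 3 comparisons per pair
                x, y = rest[i], rest[i + 1]
                if x < y:
                    x, y = y, x
                if x > hi:
                    hi = x
                if y < lo:
                    lo = y
            return hi, lo

        print(max_min([5, 1, 4, 2, 8, 7, 3]))  # (8, 1)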

    ReplyDelete
  8. I don't think this problem should be easy at all, even for linear circuits, although it looks interesting.

    If I understand correctly, the inputs are $n = p*p$ single bits (just ordered on the p*p grid) and the (x,y)-th output is the XOR of all the bits that reside on the line specified by x and y.
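
    Concretely, on that reading the transform is something like this (Python; the notation is mine):

        # The xor-of-lines transform on a p*p grid of bits:
        # output (x, y) is the xor of the inputs (m, (x + y*m) % p).
        import random

        p = 5
        bits = {(m, b): random.randrange(2) for m in range(p) for b in range(p)}

        out = {}
        for x in range(p):
            for y in range(p):
                acc = 0
                for m in range(p):
                    acc ^= bits[(m, (x + y * m) % p)]
                out[(x, y)] = acc

        # Naively, each of the p*p outputs costs p-1 two-input xor gates,
        # about n*sqrt(n) gates in total, matching Bram's count.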

    Thus if (a,b) and (a',b') are two distinct points then indeed they are contained in at most one common line. (Although I am pretty sure this property alone won't suffice for a lower bound.)
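
    A tiny check of that property for a small p (again assuming the mod-p reading): two points in distinct columns lie on exactly one common line, two points in the same column lie on none, so no pair of inputs is xored together for two different outputs.

        # Verify: with lines L(x,y) = {(m, (x + y*m) % p) : m in [0,p)},
        # two points sharing a first coordinate lie on no common line;
        # otherwise they lie on exactly one.
        from itertools import combinations

        p = 5
        lines = {(x, y): {(m, (x + y * m) % p) for m in range(p)}
                 for x in range(p) for y in range(p)}
        points = [(m, b) for m in range(p) for b in range(p)]
        for u, v in combinations(points, 2):
            shared = sum(1 for line in lines.values() if u in line and v in line)
            assert shared == (1 if u[0] != v[0] else 0)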

    Each individual bit is certainly easy to produce here, as it takes \sqrt{n} XORs, so indeed any proof of this form is some kind of direct product argument.

    ReplyDelete
  9. Thanks, that makes perfect sense. Cool problem.

    ReplyDelete
  10. Yeah, there are n outputs, each of which requires sqrt(n) xors to compute individually, and my conjecture is basically that computing them together gives no savings; they're completely orthogonal.

    -Bram

    ReplyDelete