Thursday, January 28, 2016

We Still Can't Beat Relativization

As we celebrate our successes in computational complexity, here's a sobering fact: we have had no new non-relativizing techniques in the last 25 years.

A little background: In 1975, a few years after Cook formulated the P v NP problem, Baker, Gill and Solovay constructed two sets A and B with the following properties: every NP machine that can ask questions about A can be simulated by a P machine that can ask questions about A, while some language accepted by an NP machine that can ask questions about B cannot be decided by any P machine that can ask questions about B. In other words, P^A = NP^A but P^B ≠ NP^B.
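In oracle notation, the Baker-Gill-Solovay result can be written as the following pair of statements (a minimal rendering; the superscript denotes the class relative to that oracle):

```latex
% Baker–Gill–Solovay (1975): there are oracles relative to which
% P vs NP resolves in either direction.
\exists A \;\bigl(\mathrm{P}^{A} = \mathrm{NP}^{A}\bigr)
\qquad\text{and}\qquad
\exists B \;\bigl(\mathrm{P}^{B} \neq \mathrm{NP}^{B}\bigr)
```

Hence any proof technique that carries over unchanged to every oracle world cannot prove either P = NP or P ≠ NP.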

All the known proof techniques at the time relativized, i.e., if you proved two classes were the same or different, they would be the same or different relative to any set. BGS implied that these techniques could not settle the P v NP problem.

In the 80's and 90's we got very good at creating these relativized worlds and, with a couple of exceptions, we created relativized worlds making nearly every open question about complexity classes between P and PSPACE come out both true and false.

There were some results that were non-relativizing for technically uninteresting reasons. In the late 80s/early 90s we had some real progress: results like NP having zero-knowledge proofs, co-NP (and later PSPACE) having interactive proofs and NEXP having (exponential-sized) probabilistically checkable proofs, despite relativized worlds making those statements false. But then it stopped. We had a few more non-relativizing results, but those just used the results above, not any new techniques.

In 1994 I wrote a survey on what relativization meant post-interactive proofs, still mostly up to date. We have seen new barriers put up, such as natural proofs and algebrization, but until we can at least get past traditional relativization we just cannot make much more progress with complexity classes.

5 comments:

  1. I thought Ryan Williams' lower bound did not relativize.

    ReplyDelete
  2. I thought the whole point of the Aaronson, Wigderson paper you linked to on algebrization is that we CAN get past relativization (via arithmetization), but that it's not good enough, and so they introduce their new barrier.
    A lot of the circuit lower bounds of the last decade got past, I thought, both naturalization (via diagonalization) and relativization (via arithmetization) simultaneously by combining the two techniques.

    So, don't we have good ways to get past relativization? And then Aaronson and Wigderson's response was that yes, we get past it, but here's a new barrier (algebrization) that all our techniques to go past it succumb to.

    ReplyDelete
  3. Indeed, NEXP not in ACC doesn't relativize. In particular, all known SAT algorithms that beat exhaustive search are non-relativizing.

    ReplyDelete
  4. Do we know any barriers to the method used by Ryan in his ACC vs. NEXP result?

    His result actually made me hopeful that he might make considerable progress in settling questions at least about small complexity classes. My limited understanding was that we just need to find good representations of small circuit classes and then exploit them to give faster algorithms for them. This should push people to look for representation theorems for small classes. Unfortunately it seems that the only person who is really pushing this thread is Ryan.

    ReplyDelete
  5. A sentence like "we created relativized worlds that make nearly all the open questions of complexity classes between P and PSPACE both true and false" makes me wonder why we focused on extremely powerful classes like PSPACE here. Why not create relativized worlds to make all open question between L-uniform NC^1 and NP both true and false? Or between ALogTime and PH, if we want to think big? And why only focus on relativization, and not also try to carry over the natural proofs and the algebrizing proofs barrier (to L != PH for example)?

    ReplyDelete