The parallel repetition problem has come up in this blog more than once before (perhaps because the topic of interactive proofs is near and dear to Lance's heart). I've been thinking about it lately because Venkat Guruswami and I will be teaching a course on PCPs and hardness of approximation this term.

The dirty little secret about PCP courses is that everyone always skips the proof of Ran's Parallel Repetition Theorem, even though it's an essential component of very many optimal hardness of approximation results. I used to think Dinur's new proof of the basic PCP theorem promised an opportunity to get all the way from soup to nuts in one course on PCPs and hardness of approximation. However, it really only promises the following: instead of sweating for 10 lectures to get through the basic PCP theorem and then waving your hands over parallel repetition, you get to proceed happily for 3 lectures through the basic PCP theorem and then sweat over how to deal with parallel repetition.

So how to describe getting from a basic PCP with soundness, say, .9, to a PCP with soundness .0001? Here are three possible strategies I've considered:

1. Take a stab at explaining Ran's paper. This is no mean feat -- I've tried reading it perhaps six times over the years, and only on my last attempt did I make any sort of headway. That included completely disregarding the 20 pages of calculations at the end. Still, I've absorbed enough intuition from Avi Wigderson and Uri Feige (check out the latter's survey, the only explanatory writing on the theorem I've ever found) that maybe I could say something in a couple of lectures that would vaguely make sense.

2. Attempt a proof or sketch of Feige and Kilian's parallel repetition theorem. This underappreciated result is definitely easier to follow than Ran's, but it would still probably be challenging to cover in a small number of lectures. It has weaker parameters than Raz's result -- to get soundness down to epsilon you need poly(1/epsilon) parallel repetitions, rather than log(1/epsilon). But come on: if the new proof size is going to be n^{2^2^10000} anyway, does it really matter?

3. Teach the Lovász-Feige method of parallel repetition from STOC '92. Unless I'm mistaken, this is a neat and simple way to do a kind of parallel repetition, using multilinear extensions and a sumcheck-protocol-style analysis. The only downside: the proof gets blown up to quasipolynomial size (although, as a compensatory bonus, you get quasipolynomially small soundness). But again, does it really matter? The hypotheses NP not in P and NP not in TIME(n^{polylog(n)}) are pretty much equally good to me.
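To get a rough feel for the numbers behind the three options, here is some back-of-the-envelope arithmetic (illustrative only: it uses the naive bound in which soundness drops exactly as s^k, which the actual parallel repetition theorems only approximate, with constants and answer-size dependence hidden):

```python
import math

def naive_repetitions(s, eps):
    """Repetitions k needed if soundness dropped exactly as s**k.
    The real theorems only approximate this: Raz gives soundness
    2**(-Omega(k)) with a loss depending on answer size, while
    Feige-Kilian needs poly(1/eps) repetitions rather than log(1/eps).
    """
    return math.ceil(math.log(eps) / math.log(s))

# Going from soundness .9 down to .0001, as in the post:
print(naive_repetitions(0.9, 0.0001))  # 88 repetitions under the naive count
```

Even under this optimistic count, the repeated game's question and answer sizes grow by a factor of k, which is where the proof-size blowups in options 2 and 3 come from.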

I'll probably go with either #1 or #3. Any opinions?

I'd say 3, if it were me. The idea is to make sure you and the students at least understand this technique in detail. Then in a future year, if you want to attempt 1 (or heavens, even a follow-up course - "PCP 202") you have some appreciation for what is different.

Keep in mind, though, that I don't know either result at all. This is just going by what you've said.

I don't think you can use Feige-Lovasz because the answer size is too large, so you cannot apply the long code (you get an exponential-size proof).

As for Feige-Kilian, I think it depends on the application.

Presumably part of the reason it's not an easily approachable result is that people haven't approached it much. A catch-22. If people push their way through it, and then do their best to exposit it, and then this process is repeated enough times, I think almost any result can be made much more sensible. Perhaps its being perceived as some mystifying technical result is part of the problem.

I must be missing something, but I thought Raz's result was only needed for multi-prover interactive proofs, not PCPs. Can someone explain further?

I'm not very familiar with Ran's theorem, but if so few people understand it, it might be well worth going over it again in detail.

Please go over #1. The benefit from repetition in this case is exponential.
