Wednesday, July 29, 2009


A great new result in quantum complexity: Rahul Jain, Zhengfeng Ji, Sarvagya Upadhyay and John Watrous have shown that QIP is contained in PSPACE, solving a decade-old open problem. The Pontiff has already blessed this result, but as someone who has studied both quantum and interactive proofs, let me give you my take. Steve Fenner gives a mostly non-technical outline of the proof below.

QIP stands for quantum interactive proofs. That PSPACE sits in QIP follows from IP=PSPACE and the fact that quantum interactive proofs can easily simulate classical ones. QIP=PSPACE implies IP=QIP, so classical interactive proofs can also simulate quantum ones. Amazing.

Earlier papers by Watrous and by Kitaev and Watrous showed two already amazing results:
  1. QIP can be simulated by 3-round quantum interactive proofs.
  2. 3-round quantum interactive proofs can be simulated in exponential time.
The second result was proven by creating an exponential-sized semidefinite program that captures the acceptance probability of these proof systems.

In classical interactive proofs, if we could simulate unbounded rounds with bounded rounds then PSPACE would collapse to the polynomial-time hierarchy (highly unlikely). So while we now know classical interactive proofs can simulate quantum proofs, they will need many more rounds to do it.

In an upcoming FOCS paper, Jain, Upadhyay and Watrous show that two-round quantum interactive proof systems can be simulated in PSPACE. The QIP in PSPACE proof uses techniques from all these papers.

Still open: The power of multi-prover quantum interactive proofs where the provers share prior entanglement but otherwise can't communicate. Classically we have MIP=NEXP (nondeterministic exponential time). Does QMIP=MIP? Neither containment is known.

Here is Steve Fenner's outline of the proof. 

The proof that QIP is in PSPACE is in three distinct parts:

  1. Reduce a given QIP protocol P to an equivalent semidefinite programming problem S. In other words, find a semidefinite programming problem S whose optimal value is the Verifier's maximum acceptance probability over all Prover strategies.
  2. Describe a high-level decision procedure for P based on computing a solution to S that is close to optimal.
  3. Show that this numerical solution can be approximated to sufficient accuracy in PSPACE.

The proof relies in a crucial way on an earlier result of Marriott and Watrous (building on Kitaev and Watrous) that any QIP protocol can be simulated by a much simpler QMAM (quantum Merlin-Arthur-Merlin) protocol. In this protocol, which is a three-round quantum version of a standard Arthur-Merlin protocol, Merlin is an all-powerful quantum process and Arthur is a polynomial-size quantum circuit. It runs as follows:

  • Merlin gives Arthur some quantum state r.
  • Arthur flips a *single* coin and sends the result to Merlin.
  • Merlin gives Arthur another quantum state s in a different register.
  • Arthur performs one of two possible measurements, based on the value of his previous coin flip. He accepts if the result of the measurement is 1 and rejects otherwise.
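To make the message pattern concrete, here is a purely schematic classical skeleton of the QMAM interaction above. The function names are hypothetical, and this toy does not model quantum states or measurements at all; it only shows the order of messages and the single public coin.

```python
import random

# Schematic skeleton of the QMAM message pattern only.  The real protocol
# exchanges quantum states and Arthur performs a quantum measurement; this
# classical sketch just traces the flow of the three rounds.

def qmam_round(merlin_first, merlin_second, measure):
    rho = merlin_first()              # round 1: Merlin's first message
    coin = random.randint(0, 1)       # round 2: Arthur's single public coin
    sigma = merlin_second(coin)       # round 3: Merlin's reply, may depend on the coin
    return measure(coin, rho, sigma)  # Arthur accepts iff the measurement yields 1
```

The point of the Marriott-Watrous result is that this tiny three-message pattern, with just one random bit from Arthur, already captures all of QIP.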

The proof actually starts with this equivalent QMAM protocol and turns it into a semidefinite programming problem in Part 1, above.

Every semidefinite program has a primal and a dual formulation. The primal is a maximization problem, and the dual is a minimization problem. In Part 2, the authors give an iterative process that finds better and better feasible solutions to both formulations simultaneously. Each solution approaches the optimal one, the primal from below and the dual from above, giving better and better bounds in both directions. The optimal value is Arthur's maximum acceptance probability over all possible behaviors of Merlin. They show that after a certain bounded number of iterations, the solutions are close enough to the optimal to correctly determine the outcome of the QMAM protocol.
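For readers who want the generic shape of this duality, a standard semidefinite programming primal/dual pair looks like the following (this is the textbook form, not the specific program constructed in the paper):

```latex
% Generic SDP primal/dual pair (illustrative only).
% Primal (maximization):
\begin{align*}
\text{maximize}\quad & \langle C, X \rangle \\
\text{subject to}\quad & \langle A_i, X \rangle \le b_i, \quad i = 1, \dots, m,\\
& X \succeq 0.
\end{align*}
% Dual (minimization):
\begin{align*}
\text{minimize}\quad & b^{\mathsf T} y \\
\text{subject to}\quad & \sum_{i=1}^m y_i A_i \succeq C,\\
& y \ge 0.
\end{align*}
```

Weak duality says every primal feasible value is at most every dual feasible value, which is exactly why maintaining feasible solutions on both sides sandwiches the optimum between improving lower and upper bounds.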

Finally, in Part 3, the authors argue that the iterative process in Part 2 can be implemented efficiently to reasonable accuracy in parallel, using polynomial space. This follows from the fact that basic matrix operations (especially exponentiation and spectral decomposition) can be done in the class NC of polynomial-size, polylogarithmic-depth circuits. Here the circuits work on exponential-size data (in the number of qubits), yet everything can still be executed using polynomial space.
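As a toy illustration of why matrix powering is so parallel-friendly (this is not the paper's implementation, just the standard repeated-squaring trick): computing a k-th power needs only O(log k) matrix multiplications, and each multiplication is itself highly parallelizable, which is the kind of structure behind NC matrix algorithms.

```python
# Toy illustration: matrix powering by repeated squaring over the integers.
# Only O(log k) multiplications are needed, and each multiplication
# parallelizes well -- the flavor of structure that puts basic matrix
# operations in NC.

def mat_mult(a, b):
    """Multiply two square matrices given as lists of lists."""
    n = len(a)
    return [[sum(a[i][t] * b[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(a, k):
    """Compute a**k by repeated squaring (k >= 0)."""
    n = len(a)
    result = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # identity
    while k > 0:
        if k & 1:
            result = mat_mult(result, a)
        a = mat_mult(a, a)
        k >>= 1
    return result

# Example: powers of [[1, 1], [1, 0]] encode Fibonacci numbers.
print(mat_pow([[1, 1], [1, 0]], 10))  # [[89, 55], [55, 34]]
```

The actual proof needs matrix exponentials and spectral decompositions to bounded precision, which are harder than integer powering, but the same "few rounds of parallel work" principle applies.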

One last comment: The proof improves upon and uses some techniques from a recent result of Jain, Upadhyay, and Watrous that two-message quantum interactive proofs are in PSPACE (to appear in FOCS 2009). The same basic technique was also used in a paper by Jain and Watrous in the most recent CCC conference, showing that another quantum class is contained in PSPACE.


  1. This is definitely a great result. But I can't help observing the irony: complexity is a field that's supposed to be doing lower bounds and separations, but almost all great results have been upper bounds and class collapses :) [Say, IP=PSPACE, Toda, NL=coNL, PCPs for NP, derandomization...]

    How do we fight the following criticism (which is unreasonable, but not totally so): we are inept at doing what we're supposed to be doing, so we keep inventing problems we can solve (classes that might collapse), and anointing them as important.

  2. The idea that "we are supposed to be doing lower bounds and separations" is not correct.
    Complexity is supposed to determine how difficult computations are. This may involve nontrivial upper bounds, or nontrivial lower bounds.

    Upper bounds are easier: you have to prove the existence of something. For a lower bound you have to prove nonexistence. Not surprisingly, we have been more successful at the former than the latter.

    As for making up problems, coNL was not a "class invented to collapse". Similarly, for IP=PSPACE, if one wants a class we "invented" it would be IP, and it is not hard to prove that IP is contained in PSPACE. The unexpected collapse is that IP can simulate PSPACE.

    It is true that we have made little progress on fundamental lower bounds. The reason may be that these problems are truly very difficult. Moreover, at least some (e.g. Mulmuley) believe that in fact the difficulty in ALL these problems is the same, and major mathematical obstacles have to be overcome in order to make progress.

  3. Complexity is supposed to determine how difficult computations are. This may involve nontrivial upper bounds, or nontrivial lower bounds.

    I guess that would be called "Theoretical Computer Science". The separation between algorithms and complexity is of course hard to draw formally (a formal separation shouldn't exist!), but it's hard to argue that the classes Algorithms, Complexity, and TCS collapse.

  4. "But I can't help observing the irony: complexity is a field that's supposed to be doing lower bounds and separations, but almost all great results have been upper bounds and class collapses :)"

    A "collapse" says that we have found yet another definition for the same thing. This is always good--to look at the same thing from different sides. But I share your "irony": in circuit complexity we have just the opposite. There most "classes" (=restricted circuit models) turn out to be *incomparable* in their power (=do not collapse). Why? I think because the reason to introduce these classes is different: they localize the scope of a particular lower bounds argument. The "importance" of the resulting model is secondary. The goal is to prove *any* nontrivial lower or surprising upper bound, not just to find a new definition. I think this is where the "high level" and the "low level" complexity differ.

    Randomness is good, quantum is even more attractive. But what about the "classics", about what we have and will have for a long time? Do we really hope to understand the power of quantum algorithms without having understood that of "simplest", classical ones? This nice result just shows that we *cannot* hope.

  5. Probably the nicest way to summarize all this is QIP = BQPSPACE = PSPACE = IP (the middle equality was proved by Watrous 11 years ago). If we only knew whether or not BQP was in the PH...

  6. It seems unlikely that this'll have the same effect that IP = PSPACE did; for one thing, the "hard" collapse for the quantum case is the "easy" part classically. Still a great result.

    My question, though -- will this have an effect on the study of QMA, and in particular will we finally be able to formulate the "right" quantum analogues of various classical results around P and NP? (I'm thinking in particular of stuff like Valiant-Vazirani, the PCP theorem, etc, which as far as I know don't have good quantum counterparts.)

  7. "every separation is a collapse in disguise"