As my plane descended toward Toronto on Monday, it felt as though I was landing on the surface of another planet (though maybe the Mars rover was too fresh in my mind). All I could see out the window was white snow crisscrossed by black highways. On the ground, the weather was probably the coldest I've ever experienced. Call me a wuss if you're from Alaska, northern Canada, Siberia, or Antarctica, but I did go to Cornell.
Before QIP started I visited the University of Western Ontario for a day, to work with Dan Christensen on the complexity of simulating spin-foam models of quantum gravity. We didn't get far. The trouble is that no one knows how to define measurement in these models, and the answer could strongly affect computational complexity. Maybe spin-foam models can solve graph isomorphism in polynomial time; then again, maybe they can't even simulate garden-variety quantum computers.
I took the train to Waterloo on Tuesday night, then on Wednesday hung around the Perimeter Institute, which is a converted tavern full of theoretical physicists and quantum computing people. The conference talks started on Thursday; here are summaries of a few.
- Richard Cleve spoke about some extremely cool joint work with Peter Høyer, Ben Toner, and John Watrous. They point out that the classical proof of MIP = NEXP breaks down if the two provers share entanglement -- regardless of whether the verifier is able to manipulate, or even knows anything about, quantum information. (It might still be true that multiple provers who share entanglement can convince us of any language in NEXP, but if so it will need a new proof.) Cleve et al. give explicit examples of 2-prover interactive proof systems that are classically sound but become unsound if the provers share entanglement. To me, the most exciting aspect of this work is that it offers a new, complexity-theoretic way to understand the famous Bell inequalities. It turns out that Bell inequality violation is "really" about two provers convincing a verifier that (say) a certain graph has a 3-coloring when in fact it doesn't, by using entanglement to correlate their answers to the verifier's queries.
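To make the Bell-inequality connection concrete, here's a little sketch using the standard CHSH game (my illustration, not an example from the talk): a referee sends random bits x and y to two non-communicating provers, who reply with bits a and b and win iff a XOR b = x AND y. The best classical (deterministic) strategies win 3/4 of the time, while provers sharing entanglement can win with probability cos^2(pi/8), Tsirelson's bound:

```python
import itertools
import math

def classical_value():
    """Best classical winning probability for the CHSH game.

    A classical strategy can be taken to be deterministic: each prover's
    answer is a function of their own question bit, so we can simply
    enumerate all 4 x 4 strategy pairs.
    """
    best = 0.0
    for a in itertools.product([0, 1], repeat=2):      # Alice answers a[x]
        for b in itertools.product([0, 1], repeat=2):  # Bob answers b[y]
            wins = sum((a[x] ^ b[y]) == (x & y)
                       for x in (0, 1) for y in (0, 1))
            best = max(best, wins / 4)
    return best

# With shared entanglement, the optimal winning probability is
# Tsirelson's bound: cos^2(pi/8) ~ 0.8536 > 3/4.
quantum_value = math.cos(math.pi / 8) ** 2

print(classical_value())  # 0.75
print(quantum_value)      # ~0.8536
```

A verifier whose soundness analysis assumed the 3/4 bound is exactly the kind of protocol that breaks once the provers share entanglement.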
- John Watrous spoke about stronger error reduction for QMA. Recall that QMA, or Quantum MA, is the class of languages for which there exist polynomial-size quantum proofs that convince a polynomial-time quantum verifier that an input is in the language when indeed it is. Here the completeness and soundness errors are 1/3. Early on Kitaev observed that the prover can amplify the correctness probability to 1-2^(-p(n)) by giving the verifier O(p(n)) copies of the proof. The verifier then checks each proof independently (destroying it in the process) and outputs the majority result. Against everyone's intuitions (or at least mine!), Watrous now shows that O(p(n)) copies are overkill -- the verifier can amplify the correctness probability arbitrarily using a single copy of the proof! This means that a "quantum library" could store proofs on the shelves, to be borrowed and returned intact by quantum patrons who want to convince themselves of the truth of various statements. The conventional wisdom -- that learning something from a quantum state always disturbs that state -- is wrong in the case of proofs. (Could this be related to zero-knowledge proofs?) Another implication of Watrous' result is that QMA_log = BQP.
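For a feel of why Kitaev's majority-vote amplification needs O(p(n)) copies, here's a quick calculation (my own back-of-the-envelope, not from the talk): if each independent check is correct with probability 2/3, the majority of k checks errs only when at most half of them succeed, and that probability decays exponentially in k:

```python
from math import comb

def majority_error(k, p=2/3):
    """Probability that the majority vote over k independent checks
    is wrong, when each check is correct with probability p.

    For odd k, the majority errs iff at most floor(k/2) checks succeed;
    we sum the binomial tail exactly.
    """
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(k // 2 + 1))

for k in (1, 11, 51, 101):
    print(k, majority_error(k))  # error shrinks exponentially with k
```

So driving the error below 2^(-p(n)) by repetition requires O(p(n)) copies -- which is what makes Watrous' single-copy amplification so surprising.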
- Scott Aaronson spoke about Multilinear Formulas and Skepticism of Quantum Computing. Journalistic objectivity precludes me from commenting on the excellence or otherwise of that particular talk. Next Ran Raz explained why Multi-Linear Formulas for Permanent and Determinant are of Super-Polynomial Size -- a brilliant result whose relevance to quantum computing is that it provides the technical tools for my talk. I'd say more about Raz's result, but it's 1AM and I have to get up early tomorrow for another day of manipulating entanglement at ultra-cold temperatures.