All of you readers should be familiar with the P vs NP question. P is the set of problems efficiently solvable by a deterministic computer. NP is the set of problems for which there exists a witness that we can verify quickly. The polynomial-time hierarchy (PH) is the constant-alternation generalization of NP that has played a major role in computational complexity. So why don't we talk about the P vs PH question? That's just equivalent to P vs NP.
BQP is the class of problems efficiently solvable by a quantum computer. Since P sits in BQP, which sits in PSPACE, we can't prove any separations for BQP outright without separating P from PSPACE. We can, though, get an understanding of relative complexity by giving the machines access to the same oracle and seeing what we can separate. We already knew BQP doesn't sit in NP relative to an oracle. The same was true for BPP (efficient probabilistic computation), but BPP sits in PH relative to all oracles, as do constant-round interactive proofs. So we might expect BQP to sit in PH, that is, BQP solvable with constant alternations, relative to all oracles. Raz and Tal refute that hypothesis.
Raz and Tal's proof builds on the Forrelation concept developed by Aaronson (under the name Fourier Checking) and extended by Aaronson and Ambainis. The oracle isn't complicated: Pick n numbers x1,...,xn, each xi chosen from a normal distribution with mean 0 and variance a small ε > 0. Let y = y1,...,yn be the Hadamard transform applied to x1,...,xn. You then probabilistically round the x's and y's to 1 or -1. A quantum machine can distinguish this distribution from uniform using the fact that quantum machines can do Hadamards easily, and Raz and Tal use Fourier analysis to show that low-depth alternating circuits (which capture PH) can't distinguish so easily. The paper is well-written if you want details.
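To make the sampling step above concrete, here is a minimal NumPy sketch of how one might draw from the "forrelated" distribution: Gaussian x's, Hadamard-transformed y's, then a probabilistic rounding to ±1. The function names, the truncation to [-1, 1], and the rounding rule (round to +1 with probability (1 + z)/2) are my own illustrative choices; the paper's exact parameters differ in detail.

```python
import numpy as np

def hadamard(n):
    """Normalized n x n Hadamard matrix (n a power of 2); satisfies H @ H = I."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

def sample_forrelated(n, eps=0.01, rng=None):
    """Sample one instance of the rounded forrelated distribution (illustrative).

    x is drawn i.i.d. N(0, eps); y is the Hadamard transform of x; each
    coordinate z is truncated to [-1, 1] and rounded to +1 with
    probability (1 + z) / 2, else -1.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = rng.normal(0.0, np.sqrt(eps), size=n)     # n Gaussians, variance eps
    y = hadamard(n) @ x                           # Hadamard transform of x
    z = np.clip(np.concatenate([x, y]), -1.0, 1.0)
    bits = np.where(rng.random(2 * n) < (1.0 + z) / 2.0, 1, -1)
    return bits[:n], bits[n:]
```

Since the normalized Hadamard matrix is orthogonal, y is also Gaussian with the same variance, so marginally each rounded bit looks nearly uniform; the distinguishing signal lives in the correlation between the x-bits and y-bits, which a quantum algorithm can pick up with a single Hadamard layer.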
A few questions for further directions:
- Can you show a relativized world where P = NP (=PH) but P ≠ BQP? I'd suspect it will be a messy but not too hard generalization of this paper.
- How far can you push the Raz-Tal result, i.e., for what function f(n) can we show that BQP cannot be solved by f(n)-alternating polynomial-time Turing machines? Can you show f(n) = n^Ω(1)?
Also see Scott's personal take on this new result.
Hi Lance,
Thank you for your wonderful post. You ask how many alternations we can handle. The answer is that, using Theorem 7.4, our lower bound would still hold for any f(n) = o(n/log(n)) alternations.
Best, Avishay.
Wasn't this problem featured in a recent episode of Rick and Morty?
What do you think about BH?
I am looking for a way to understand (and explain to students) this theorem in the easiest way. OK, BQP^A is easy: this is the set of languages recognized by a quantum computer which can also ask extra questions of the form "does x belong to set A?". P^A would be easy as well: same definition without the word "quantum". Similarly, we can easily define P^(PH,A): a standard computer with 2 oracles. But what is PH^A? The same definition with "low-depth alternating circuits" instead of "quantum computer"? Too complicated! So, my questions are: Is P^PH = PH? Is, for any oracle A, P^(PH,A) = PH^A? Is the theorem we are discussing here equivalent to the statement that there exists an oracle A such that BQP^A is not a subset of P^(PH,A)?
If you think of PH as languages defined by bounded-alternation polynomial-time machines, then you get PH^A by just letting these machines access A as an oracle. It's not the case in general that P^(PH,A) = PH^A.