Thursday, October 03, 2019

Quantum Supremacy: A Guest Post by Abhinav Deshpande

I am delighted to introduce you to Abhinav Deshpande, who is a graduate student at the University of Maryland, studying Quantum Computing. This will be a guest post on the rumors of the recent Google breakthrough on Quantum Supremacy. For other blog posts on this exciting rumor, see Scott Aaronson's post, Scott Aaronson's second post on it, John Preskill's Quanta article, Fortnow's post,
and there may be others.

Guest post by Abhinav:

I (Abhinav) thank Bill Fefferman for help with this post, and Bill Gasarch for inviting me to do a guest post.


The quest towards quantum computational supremacy

September saw some huge news in the area of quantum computing, with rumours that the Google AI Lab has achieved a milestone known as 'quantum computational supremacy', also termed 'quantum supremacy' or 'quantum advantage' by some authors. Today, we examine what this term means, the most promising approach towards achieving this milestone, and the best complexity-theoretic evidence we have so far against classical simulability of quantum mechanics. We will not be commenting on details of the purported paper since there is no official announcement or claim from the authors so far.

What it means

First off, the field of quantum computational supremacy arose from trying to formally understand the differences in the power of classical and quantum computers. A complexity theorist would view this goal as trying to give evidence to separate the complexity classes BPP and BQP. However, it turns out that one can gain more traction from considering the sampling analogues of these classes, SampBPP and SampBQP.  These are classes of distributions that can be efficiently sampled on classical and quantum computers, respectively. Given a quantum circuit U on n qubits, one may define an associated probability distribution over 2^n outcomes as follows: apply U to the fiducial initial state |000...0> and measure the resulting state in the computational basis. This produces a distribution D_U.

A suitable way to define the task of simulating the quantum circuit is as follows:

Input: Description of a quantum circuit U acting on n qubits.

Output: A sample from the probability distribution D_U obtained by measuring U|000...0> in the computational basis.

One of the early works in this field was that of Terhal and DiVincenzo, which first considered the complexity of sampling from a distribution (weak simulation) as opposed to that of calculating the exact probability of a certain outcome (strong simulation). Weak simulation is arguably the more natural notion of simulating a quantum system, since in general, we cannot feasibly compute the probability of a certain outcome even if we can simulate the quantum circuit. Subsequent works by Aaronson and Arkhipov, and by Bremner, Jozsa, and Shepherd established that if there is a classically efficient weak simulator for different classes of quantum circuits, the polynomial hierarchy collapses to the third level.
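To make the weak/strong distinction concrete, here is a minimal brute-force sketch (purely illustrative: it assumes the circuit is handed to us as a dense 2^n x 2^n unitary matrix, and the function names are made up for this post). Strong simulation computes the exact probability of one particular outcome; weak simulation draws samples from D_U. Both routines scale exponentially in n, which is why they are only useful for small circuits or for spot-checking.

```python
import numpy as np

def output_probability(U, outcome):
    """'Strong' simulation: the exact probability of a single outcome,
    i.e. |<outcome| U |00...0>|^2, computed by brute force."""
    return abs(U[outcome, 0]) ** 2     # column 0 of U is the state U|00...0>

def sample_outcomes(U, num_samples, rng=None):
    """'Weak' simulation: draw bit strings from the distribution D_U obtained
    by measuring U|00...0> in the computational basis."""
    rng = np.random.default_rng() if rng is None else rng
    probs = np.abs(U[:, 0]) ** 2       # Born-rule probabilities over all 2^n outcomes
    n = int(np.log2(U.shape[0]))
    outcomes = rng.choice(U.shape[0], size=num_samples, p=probs)
    return [format(int(x), f"0{n}b") for x in outcomes]

# Toy usage on 3 qubits with a random unitary built via a QR decomposition.
n = 3
X = np.random.randn(2**n, 2**n) + 1j * np.random.randn(2**n, 2**n)
U, _ = np.linalg.qr(X)
print(output_probability(U, outcome=0b101))
print(sample_outcomes(U, num_samples=5))
```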


So far, we have only considered the question of exactly sampling from the distribution D_U. However, any realistic experiment is necessarily noisy, and a more natural problem is to sample not from D_U exactly, but from some distribution D_O that is ε-close to D_U in a suitable distance measure, say the total variation distance.
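As a small illustration of this distance measure (again assuming the full probability vectors over the 2^n outcomes can be written down explicitly, which is only feasible for small n), the total variation distance is half the L1 distance between the two vectors:

```python
import numpy as np

def total_variation_distance(p, q):
    """Total variation distance: half the L1 distance between two probability vectors."""
    return 0.5 * np.sum(np.abs(np.asarray(p) - np.asarray(q)))

# Toy example: an ideal distribution vs. a globally depolarised version of it.
p = np.array([0.5, 0.25, 0.125, 0.125])
f = 0.9
q = f * p + (1 - f) * np.ones_like(p) / len(p)   # mix with the uniform distribution
print(total_variation_distance(p, q))
```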

The aforementioned work by Aaronson and Arkhipov was the first to consider this problem, and they made progress towards showing that a special class of quantum circuits (linear optical circuits) is classically hard to approximately simulate in the sense above. The task of sampling from the output of linear optical circuits is known as boson sampling. At the time, it was the best available way to show that quantum computers may solve some problems that are far beyond the reach of classical computers.

Even granting that the PH doesn't collapse, one still needs to make an additional conjecture to establish that boson sampling is not classically simulable. The conjecture is that additively approximating the output probabilities of a random linear optical quantum circuit is #P-hard. The reason this may be true is that output probabilities of random linear optical quantum circuits are given by permanents of Gaussian random matrices, and the permanent is as hard to compute on a random matrix as it is on a worst-case matrix. Therefore, the only missing link is to go from average-case hardness of exact computation to average-case hardness of an additive estimation. In addition, if we make a second conjecture known as the "anti-concentration" conjecture, we can show that this additive estimation is non-trivial: it suffices to give us a good multiplicative estimation with high probability.
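To give a feel for the quantity at the heart of boson sampling, here is a short sketch that computes the permanent of a complex Gaussian random matrix exactly via Ryser's inclusion-exclusion formula, which takes roughly 2^n time (in contrast with the polynomial-time determinant). In boson sampling, output probabilities are proportional to squared magnitudes of such permanents. This is purely illustrative and not part of any supremacy proposal.

```python
import itertools
import numpy as np

def permanent_ryser(A):
    """Exact permanent of a square matrix via Ryser's inclusion-exclusion formula,
    running in time roughly 2^n * n^2."""
    n = A.shape[0]
    total = 0.0 + 0.0j
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            row_sums = A[:, list(cols)].sum(axis=1)   # sum of the chosen columns, per row
            total += (-1) ** r * np.prod(row_sums)
    return (-1) ** n * total

# A complex Gaussian random matrix, the kind whose permanent shows up in boson sampling.
A = (np.random.randn(5, 5) + 1j * np.random.randn(5, 5)) / np.sqrt(2)
print(permanent_ryser(A))
```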

So that's what quantum computational supremacy is about: we have a computational task that is efficiently solvable with quantum computers, but which would collapse the polynomial hierarchy if done by a classical computer (assuming certain other conjectures are true). One may substitute "collapse of the polynomial hierarchy" with stronger conjectures and incur a corresponding tradeoff in the likelihood of the conjecture being true.

Random circuit sampling

In 2016, Boixo et al. proposed to replace the class of quantum circuits for which some hardness results were known (commuting circuits and boson sampling) by random circuits of sufficient depth on a 2D grid of qubits with nearest-neighbour interactions. Concretely, the proposed experiment would be to apply random unitaries from a specified set to n qubits arranged on a 2D grid for sufficient depth, and then sample from the resulting distribution. The two-qubit unitaries in the set are restricted to act between nearest neighbours, respecting the geometry of the grid. This task is called random circuit sampling (RCS).
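As a rough illustration of the structure of such an experiment (with a made-up gate set and a simplified alternating coupling pattern, not the gate set or schedule actually used by Google), one could lay out a random circuit on a small grid as follows.

```python
import random

ROWS, COLS, DEPTH = 3, 3, 4
SINGLE_QUBIT_GATES = ["sqrt_X", "sqrt_Y", "T"]   # an illustrative gate set

def coupling_layer(rows, cols, layer):
    """One layer of non-overlapping nearest-neighbour pairs on the grid:
    horizontal pairs on even layers, vertical pairs on odd layers
    (an illustrative pattern, not the actual experimental schedule)."""
    pairs = []
    if layer % 2 == 0:
        for r in range(rows):
            for c in range(0, cols - 1, 2):
                pairs.append(((r, c), (r, c + 1)))
    else:
        for c in range(cols):
            for r in range(0, rows - 1, 2):
                pairs.append(((r, c), (r + 1, c)))
    return pairs

circuit = []
for layer in range(DEPTH):
    # A layer of random single-qubit gates on every qubit...
    singles = {(r, c): random.choice(SINGLE_QUBIT_GATES)
               for r in range(ROWS) for c in range(COLS)}
    # ...followed by two-qubit gates restricted to nearest neighbours on the grid.
    circuit.append({"single_qubit": singles,
                    "two_qubit": coupling_layer(ROWS, COLS, layer)})

for i, layer in enumerate(circuit):
    print(f"layer {i}: {len(layer['two_qubit'])} nearest-neighbour two-qubit gates")
```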

At the time, the level of evidence for the hardness of this scheme was not yet the same as for the linear optical scheme. However, given the theoretical and experimental interest in the idea of demonstrating a quantum speedup over classical computers, subsequent works by Bouland, Fefferman, Nirkhe and Vazirani, and by Harrow and Mehraban bridged this gap (the relevant work by Aaronson and Chen will be discussed in the following section). Harrow and Mehraban proved anticoncentration for random circuits. In particular, they showed that a 2-dimensional grid of n qubits achieves anticoncentration in depth O(\sqrt{n}), improving upon earlier results requiring higher depth due to Brandao, Harrow and Horodecki. Bouland et al. established the same kind of supporting evidence for RCS as exists for boson sampling, namely a worst-to-average-case reduction for exactly computing most output probabilities, even without the permanent structure possessed by linear optical quantum circuits.

Verification

So far, we have not discussed the elephant in the room: verifying that the experiment actually samples from (something close to) the output distribution D_U, which is supported on 2^n outcomes. It turns out that there are concrete lower bounds, such as those due to Valiant and Valiant, showing that verifying whether an empirical distribution is close to a target distribution is impossible if one has too few samples.

Boixo et al. proposed a way of certifying the fidelity of the purported simulation. Their key observation was that if the experimental system is well modelled by a noise model called global depolarising noise, estimating the output fidelity is possible with relatively few samples. Under global depolarising noise with fidelity f, the noisy distribution takes the form D_N = f D_U + (1-f) I, where I is the uniform distribution over the 2^n outcomes. Together with another empirical observation about the statistics of output probabilities of the ideal distribution D_U, they argued that computing the following cross-entropy score would serve as a good estimator of the fidelity:

f ~ H(I, D_U) - H(D_exp, D_U), where H(D_A, D_B) = -\sum_x p_A(x) \log p_B(x) is the cross-entropy between the two distributions.

The proposal here was to experimentally collect several samples from D_exp, classically compute by brute force the probabilities of these outcomes under the ideal distribution D_U, and estimate the cross-entropy from this information. If the test outputs a high score for a computation on sufficiently many qubits and of sufficient depth, the claim is that quantum supremacy has been achieved.
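A toy sketch of this estimator follows (function and variable names are illustrative). It takes the brute-force ideal probabilities p_U(x) of the observed bit strings and uses the approximation, coming from the Porter-Thomas statistics of ideal output probabilities (the empirical observation alluded to above), that H(I, D_U) is roughly n ln 2 plus Euler's constant; that approximation is an assumption of the sketch.

```python
import numpy as np

def estimate_fidelity_from_cross_entropy(ideal_probs_of_samples, num_qubits):
    """Toy estimator of f ~ H(I, D_U) - H(D_exp, D_U)."""
    # H(D_exp, D_U) is estimated by the empirical mean of -log p_U(x) over the
    # observed samples x.
    h_exp = -np.mean(np.log(ideal_probs_of_samples))
    # Under Porter-Thomas statistics of the ideal output probabilities, H(I, D_U)
    # is approximately log(2^n) + Euler's constant (an assumption of this sketch).
    h_uniform = num_qubits * np.log(2) + np.euler_gamma
    return h_uniform - h_exp

# Toy usage: pretend these were the brute-force ideal probabilities p_U(x) of four
# bit strings observed in a 20-qubit experiment.
observed_probs = np.array([1.4e-6, 0.9e-6, 1.1e-6, 1.6e-6])
print(estimate_fidelity_from_cross_entropy(observed_probs, num_qubits=20))
```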

Aaronson and Chen gave an alternative form of evidence for the hardness of scoring well on a test, similar to the one above, that aims to certify quantum supremacy. This sidesteps the issue of whether such a test does indeed certify the fidelity. The specific problem considered was "Heavy Output Generation" (HOG), the problem of outputting strings that have greater-than-median probability in the output distribution. Aaronson and Chen linked the hardness of HOG to a closely related problem called "QUATH", and conjectured that QUATH is hard for classical computers.
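Here is a toy sketch of the HOG check (unrealistically assuming we can enumerate the entire ideal distribution; names are illustrative): count the fraction of submitted strings whose ideal probability exceeds the median output probability. Under Porter-Thomas-like statistics, an ideal quantum sampler is "heavy" roughly (1 + ln 2)/2 ≈ 0.85 of the time, while a uniform guesser manages only about 1/2.

```python
import numpy as np

def heavy_fraction(submitted_probs, all_output_probs):
    """Fraction of submitted outcomes that are 'heavy', i.e. whose ideal
    probability exceeds the median output probability."""
    median = np.median(all_output_probs)
    return float(np.mean(np.asarray(submitted_probs) > median))

# Toy example over 2^10 outcomes with exponentially distributed (Porter-Thomas-like)
# probabilities; samples drawn from the distribution itself land on heavy outputs
# noticeably more often than half the time.
rng = np.random.default_rng(0)
dim = 2 ** 10
probs = rng.exponential(size=dim)
probs /= probs.sum()                       # a stand-in ideal distribution D_U
samples = rng.choice(dim, size=2000, p=probs)
print(heavy_fraction(probs[samples], probs))                       # ~ (1 + ln 2)/2
print(heavy_fraction(probs[rng.integers(dim, size=2000)], probs))  # uniform guesser ~ 0.5
```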

Open questions

Assuming the Google team has performed the impressive feat of both running the experiment outlined before and classically computing the probabilities of the relevant outcomes to see a high score on their cross-entropy test, I discuss the remaining positions a skeptic might take regarding the claim about quantum supremacy.

"The current evidence of classical hardness of random circuit sampling is not sufficient to conclude that the task is hard". Assuming that the skeptic believes that the polynomial hierarchy does not collapse, a remaining possibility is that there is no worst-to-average-case reduction for the problem of *approximating* most output probabilities, which kills the proof technique of Aaronson and Arkhipov to show hardness of approximate sampling.

"The cross-entropy proposal does not certify the fidelity." Boixo et al. gave numerical evidence and other arguments for this statement, based on the observation that the noise is of the global depolarising form. A skeptic may argue that the assumption of global depolarising noise is a strong one.

"The QUATH problem is not classically hard." In order to give evidence for the hardness of QUATH, Aaronson and Chen examined the best existing algorithms for this problem and also gave a new algorithm that nevertheless do not solve QUATH with the required parameters.

It would be great if the community could work towards strengthening the evidence we already have for this task to be hard, either phrased as a sampling experiment or together with the verification test.

Finally, I think this is an exciting time for quantum computing, and it is remarkable to witness this landmark event. It may not be the first experiment that is "hard" to classically simulate, since there are many quantum experiments beyond the reach of current classical simulations, but the inherent programmability and control present in this experimental system are what enable the tools of complexity theory to be applied to the problem. A thought that fascinates me is the idea that we may be exploring quantum mechanics in a regime never probed this carefully before, the "high complexity regime" of quantum mechanics. One imagines there are important lessons in physics here.
