Sunday, January 18, 2004

Algorithmic Cooling on Mars I: Algorithmic Cooling (by guest blogger Scott Aaronson)

Sorry I haven't posted for a while -- QIP has left me with nary an hour to spare. Today Leonard Schulman gave a talk whose title was announced as "Physical Limits of Heat-Bath Algorithmic Cooling on Mars." No, the talk didn't actually have anything to do with Mars; Leonard just wanted to show us his Windows wallpaper (a Mars photo), and suggest that Mars, being even colder than Waterloo, might be an even better site for quantum computing experiments.

Nevertheless, the title provides an excuse to discuss two things on my mind: Leonard's talk and Mars. I'll start with Leonard's talk. Liquid NMR quantum computing has the advantage that hundreds or thousands of quantum gates can be applied before decoherence sets in, and the disadvantage that the qubits are difficult to initialize to the standard "all-0" state. Instead, the starting state is exponentially close to the maximally mixed state. This means that in the final measurement outcome, the ratio of signal to noise decreases exponentially in the number of qubits -- so exponentially many repetitions are needed to extract a signal, negating any quantum speedup.
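
To make the "exponentially close to maximally mixed" point concrete, here's a toy Python calculation (my own illustration, not from the talk); the bias ε = 10^-5 is just an assumed ballpark for room-temperature liquid NMR:

```python
# Toy illustration (assumed numbers): in a liquid-NMR thermal state, each qubit
# is |0> with probability 1/2 + eps, independently. Even the single most likely
# basis state (all zeros) has probability barely above the maximally mixed
# value 2^-n, so the usable "signal" per basis state is exponentially small.
def all_zeros_probability(n, eps):
    return (0.5 + eps) ** n

eps = 1e-5  # assumed room-temperature polarization; real values depend on the field strength
for n in (10, 20, 50):
    print(f"n={n:2d}: P(all zeros) = {all_zeros_probability(n, eps):.4e}"
          f"   (maximally mixed: {2.0 ** -n:.4e})")
```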

But is this fundamental? A few years ago Schulman and Vazirani introduced algorithmic cooling, a technique that starts with a hot, high-entropy state, then uses data compression to cool down a few qubits, at the cost of making the rest of the qubits even hotter. (As long as we're limited to reversible unitary gates, the total entropy of the system must remain constant.) The cold ("Waterloo/Mars") qubits can then be used as a quantum computer's standard initial state. The trouble is that too much chaff is needed for too little wheat: with current NMR error rates, extracting a few dozen cold qubits could take 10^8 or 10^12 starting qubits.
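
For a rough sense of where numbers like 10^8 or 10^12 come from, here's a back-of-the-envelope sketch (mine, not the Schulman-Vazirani analysis) that uses only the entropy-conservation constraint: a qubit of bias ε carries H(1/2+ε) bits of entropy, so a reversible protocol can turn at most a fraction 1 - H(1/2+ε), roughly 2ε²/ln 2, of its qubits into nearly pure ones.

```python
from math import log2

# Back-of-the-envelope sketch (my own, assuming entropy conservation is the
# only constraint on a reversible protocol): each qubit of bias eps carries
# H(1/2 + eps) bits of entropy, so at most a 1 - H(1/2 + eps) fraction of the
# qubits can come out (nearly) pure.
def binary_entropy(p):
    return -p * log2(p) - (1 - p) * log2(1 - p)

def starting_qubits_per_cold_qubit(eps):
    return 1.0 / (1.0 - binary_entropy(0.5 + eps))

for eps in (1e-4, 1e-5, 1e-6):  # assumed biases in the liquid-NMR ballpark
    print(f"eps = {eps:.0e}: roughly {starting_qubits_per_cold_qubit(eps):.0e} "
          f"starting qubits per cold qubit")
```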

A natural alternative, proposed by Boykin, Mor, Roychowdhury, Vatan, and Vrijen, is to let the qubits evolve nonunitarily; that is, interact with the environment. In physics jargon, this is called "coupling the qubits to a heat bath," even though the goal is to cool the qubits. Amazingly, it turns out that by using classical tricks (for example, mapping the basis state |a,b,c> to |a+c,b+c,MAJ(a,b,c)>, where addition is mod 2 and MAJ denotes the majority function), the qubits can be made even colder than the environment to which they're coupled. This raises a question: are there any limits to such cooling? Schulman, jointly with collaborators who I can't recall right now (one of them is Tal Mor), has given an affirmative answer. Suppose each qubit initially has bias ε (that is, is in the mixed state (1/2+ε)|0><0|+(1/2-ε)|1><1|). Then the heat-bath method can't increase the probability (that is, |amplitude|^2) of any basis state above 2^(-n)·exp(ε·2^n), where n is the number of qubits. This bound is essentially tight: if ε > 2^(4-n), then the initial state can be cooled significantly. Unfortunately, the algorithm that achieves this cooling requires order 1/ε^2 steps, which is exponential assuming ε is exponentially small. Furthermore, this exponential blowup seems to be unavoidable (Schulman didn't give a rigorous lower bound, but said it would be easy to obtain).
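
To see the classical compression trick in action, here's a short simulation (my own sketch, not from the talk) of the 3-bit step above: it checks that when each input bit has bias ε, the majority bit comes out with bias about 3ε/2. In the heat-bath setting, the other two bits would then be refreshed against the bath and the step repeated.

```python
from itertools import product

# Sketch of the 3-bit compression step (a,b,c) -> (a+c, b+c, MAJ(a,b,c)),
# tracking only the majority bit. Each input bit is 0 with probability
# 1/2 + eps; the majority bit ends up 0 with probability about 1/2 + 3*eps/2.
def majority_bias(eps):
    p0 = 0.5 + eps
    prob_maj_zero = 0.0
    for a, b, c in product((0, 1), repeat=3):
        prob = 1.0
        for bit in (a, b, c):
            prob *= p0 if bit == 0 else 1.0 - p0
        if a + b + c <= 1:  # at most one 1, so the majority of the three bits is 0
            prob_maj_zero += prob
    return prob_maj_zero - 0.5

eps = 0.01
print(f"input bias {eps}, majority-bit bias {majority_bias(eps):.6f}")  # about 1.5 * eps
```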

To my mind, the cooling result raises a fascinating question for complexity theory. Imagine that each generation of human beings, just as it plants trees and stores wine in cellars, starts cooling quantum states -- so that future generations, millions of years hence, could use those states to perform whatever quantum computations they wanted. "Vintage" quantum states would then be highly prized possessions (served chilled, of course). In this situation, would we be using exponential time (the time needed to cool the states), or polynomial time (the time between specifying the input and measuring the output)?

Part II of "Algorithmic Cooling on Mars" will be about Mars.
