Tuesday, December 08, 2009

Dequantification

After a talk on derandomization at Midwest Theory Day, someone asked if those techniques could also be used in quantum computing. 

For classical randomness, under reasonable hardness assumptions we can construct a set S of strings of length m, with |S| polynomial in m, such that every circuit of size m accepts with roughly the same probability whether we draw its input uniformly from S or uniformly from all strings of length m. Can we have an analog for quantum?
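
To make the classical statement concrete, here is a minimal Python sketch: enumerate the pseudorandom set S by running a generator over all short seeds, and estimate the circuit's acceptance probability from S alone. The names circuit and prg are placeholders for illustration, not any particular construction.

    import itertools

    def acceptance_on_S(circuit, prg, m, seed_len):
        # S = { prg(seed, m) : seed in {0,1}^seed_len } has size
        # 2**seed_len, which is poly(m) when seed_len = O(log m).
        seeds = list(itertools.product((0, 1), repeat=seed_len))
        accepted = sum(bool(circuit(prg(seed, m))) for seed in seeds)
        # If prg is a good generator, this average is within a small
        # epsilon of the acceptance probability over all m-bit strings.
        return accepted / len(seeds)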

There are many good parallels between probabilistic computing and quantum computing. Both seem to allow an exponential number of possible computations to run in parallel, combined in a limited way. Probabilistic computation is essentially multiplying exponentially large stochastic matrices, and quantum computing is multiplying exponentially large unitary (or just orthogonal) matrices.
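
As a toy illustration (a sketch only, with 2x2 matrices standing in for the exponentially large ones), one step of each kind of computation is just a matrix-vector product:

    import numpy as np

    # One probabilistic step: a column-stochastic matrix (nonnegative
    # entries, columns summing to 1) acting on a distribution.
    P = np.array([[0.9, 0.5],
                  [0.1, 0.5]])
    dist = P @ np.array([1.0, 0.0])       # -> [0.9, 0.1]

    # One quantum step: a unitary (here just orthogonal) matrix acting
    # on a unit-norm amplitude vector; H is the Hadamard matrix.
    H = np.array([[1.0,  1.0],
                  [1.0, -1.0]]) / np.sqrt(2)
    amp = H @ np.array([1.0, 0.0])        # -> [0.7071, 0.7071]
    probs = np.abs(amp) ** 2              # measurement gives [0.5, 0.5]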

If a quantum algorithm uses classical randomness, one can replace that randomness with measurements of appropriate qubits. So classical randomness does not give any additional power to quantum computation.

But could one use some hardness assumptions to remove the quantumness from a quantum computation, leaving a deterministic algorithm? It's not clear we can pull out the quantumness, and even if we could, quantumness and randomness just work differently. Quantum algorithms get their power from interference: positive and negative amplitudes add and cancel, something very difficult to simulate probabilistically or with any form of pseudorandom generator.
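
To see the cancellation numerically (a minimal sketch, reusing the Hadamard matrix above): applying H twice maps |0> back to |0> exactly, because the two contributions to the |1> amplitude, +1/2 and -1/2, cancel. Any simulation that squares the amplitudes into probabilities after the first step throws away the signs and wrongly predicts a 50/50 outcome.

    import numpy as np

    H = np.array([[1.0,  1.0],
                  [1.0, -1.0]]) / np.sqrt(2)

    state = np.array([1.0, 0.0])          # |0>
    once = H @ state                      # amplitudes [0.7071, 0.7071]
    twice = H @ once                      # [1, 0]: the |1> amplitude's
                                          # +1/2 and -1/2 parts cancel

    # Squaring after the first step discards the signs; from [0.5, 0.5]
    # no stochastic matrix can recover the certain outcome.
    probs_after_one = np.abs(once) ** 2   # [0.5, 0.5]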

In short, no hope for dequantification. You'll just have to build that quantum computer if you want to run those quantum algorithms.

17 comments:

  1. Has anyone taken a closer look at the line of papers by Castagnoli (e.g. arXiv:0912.0401 and earlier ones), who claims that quantum computation can be simulated by probabilistic computation given 50% of the output as a nondeterministic witness? So, if you can nondeterministically guess 50% of the output of your FBQP algorithm, you can compute the other 50% in FBPP; that is how I understand his claim.

    ReplyDelete
  2. The derandomization result reminded me of various results in quantum computing which show that you can replace a unitary uniformly drawn with Haar measure by a random circuit composed of Clifford unitaries. Of course, this is not a dequantization, or even a derandomization, since it is just replacing a continuous measure by a discrete one, but perhaps there is a connection to classical derandomization results.

    ReplyDelete
  3. Lance has posted a Great Truth in asserting "In short, no hope for dequantification. You'll just have to build that quantum computer if you want to run those quantum algorithms".

    Now, in the mathematical blogosphere, the great advantage of posting a Great Truth is, of course, that the opposite is also true ... which stimulates good discussions ... and so Lance's Great Truth leads us to ask, "What is the evidence for the contrary Great Truth?"

    At QIP 2009 Matt Hastings gave a terrific talk titled Area laws for quantum many-body systems: gapped one-dimensional quantum systems are in NP that (IMHO) articulated the contrary position very clearly. However, another nice thing about Great Truths is that people care about them so passionately, that we can generally find wonderful essays in the literature that argue either side.

    In particular Daina Taimiņa has a wonderful book titled Crocheting Adventures with Hyperbolic Planes, which has an introduction by Bill Thurston, that (in my reading of it) bears directly on Lance's Great Truth and Hastings' QIP presentation.

    Thurston's introduction to Taimiņa's book is so enjoyable, and so broadly applicable to Great Truth discussions in mathematics (it has one of the longest annotation fields in my BibTeX database) that I will simply post an excerpt here, together with an encouragement to take a look at Taimiņa's book on Google Books; it makes a terrific holiday present for the mathematician in your life.

    (Thurston's introduction) "Our brains are complicated devices, with many specialized modules working behind the scenes to give us an integrated understanding of the world. Mathematical concepts are abstract, so it ends up that there are many different ways that they can sit in our brains. A given mathematical concept might be primarily a symbolic equation, a picture, a rhythmic pattern, a short movie---or best of all, an integrated combination of several different representations. These non-symbolic mental models for mathematical concepts are extremely important, but unfortunately, many of them are hard to share."

    "Mathematics sings when we feel it in our whole brain. People are generally inhibited about even trying to share their personal mental models. People like music, but they are afraid to sing. You only learn to sing by singing."

    ... "Why do Daina's crochet models have such a great resonance with so many people? It's because they break through the austere, formal stereotype of mathematics, and present a path to a whole-brain understanding of a beautiful cluster of simple and significant but rarely understood ideas. The crochet models also break through the stereotype that mathematics is only relevant to traditionally male interests."

    ... "In any case, I hope this book gives you pause and changes your way of thinking about mathematics."

    --------------------------------------------------------

    @unpublished{*,Author = {Matthew B. Hastings}, Note = {QIP 2009: Twelfth Workshop on Quantum Information Processing (\url{http://info.phys.unm.edu/qip2009/index.html})}, Title = {Area laws for quantum many-body systems: gapped one-dimensional quantum systems are in {NP}}, Year = {2009}}

    @book{Taimina:2009sy, Address = {Wellesley, MA}, Author = {Taimi{\c{n}}a, Daina}, Publisher = {A K Peters Ltd.}, Title = {Crocheting {A}dventures with {H}yperbolic {P}lanes}, Year = {2009}}

    ReplyDelete
  4. OK, now how do we start with Thurston's essay, apply Taimiņa's geometric insights to link it to Hastings' QIP presentation, and arrive at the opposite of Lance's Great Truth?

    In the immortally overconfident words of Brave Sir Robin (in Monty Python's Holy Grail) ... "That's Easy!".

    We start with the Thurston-esque question: "What are the natural mathematical objects of quantum mechanics, and how do we 'dequantize' them in a natural way?"

    One approach---which has been popular for many decades---is to embrace Hilbert space vectors as the natural objects of quantum mechanics, and 'dequantize' them (that is, make calculations feasible with classical resources) by focusing upon Hilbert vectors that are sparse in some natural basis (in the sense of Candes and Tao). Very often the required vector-representation sparsity is physically motivated, by imposing symmetries or low temperatures (or both).

    A more general approach---which has become *very* popular in recent years, and which Hastings' QIP talk in particular embraces---is to regard the natural state-space of quantum mechanics as a join space of multilinear algebraic varieties, restricted such that all the bipartitions of any given state have a Schmidt rank r that is polynomial in the number of basis qubits n. The purest examples of such state-spaces require that the maximal Schmidt rank r be some fixed, finite number for all 2^(n-1) bipartitions of the n-spin state-space. Physically speaking, these states impose no spatial ordering, and spread entanglement uniformly across all bipartitions (so that instead of area-like boundaries, there are dust-like bipartitions).
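
    (For concreteness, a small numpy sketch of the quantity in question, purely illustrative: the Schmidt rank across a cut is the number of nonzero singular values of the reshaped state vector. The contiguous cuts below are only a few of the 2^(n-1) bipartitions, but they show the definition at work.)

        import numpy as np

        def schmidt_rank(psi, cut, n, tol=1e-12):
            # Schmidt rank across the bipartition (first `cut` qubits
            # vs. the remaining n - cut): count nonzero singular values.
            M = psi.reshape(2**cut, 2**(n - cut))
            return int(np.sum(np.linalg.svd(M, compute_uv=False) > tol))

        n = 4
        prod = np.zeros(2**n); prod[0] = 1.0              # |0000>
        ghz = np.zeros(2**n); ghz[0] = ghz[-1] = 2**-0.5  # GHZ state

        for cut in range(1, n):                # contiguous cuts only
            print(cut, schmidt_rank(prod, cut, n),        # always 1
                  schmidt_rank(ghz, cut, n))              # always 2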

    If we think about the resulting state-spaces, we realize that they are familiar mathematical objects. To physicists they are simply matrix product states, which in their purest embodiment are specified by diagonal (hence abelian, hence orderless) matrices. To geometers they are simply ruled hyperbolic manifolds of the kind that Taimiņa's book discusses. To algebraists they are not sparse vectors, but rather are sparse varieties ... such that the sparse reconstruction methods of Candes and Tao generalize naturally (which is good). To biological simulationists the question "Which quantum systems are simulatable?" looks much like the question "Which classical systems are simulatable?" ... to which latter question the answer is "Systems for which enthalpy dominates entropy."

    As far as Lance's Great Truth is concerned, these algebraic state-spaces are "dequantized" in the sense that trajectories can be integrated with polynomial resources ... and yet these same state-spaces retain their quantum goodness in the sense that Hamiltonian and Lindbladian processes pull back naturally onto them ... such that a great many practical quantum systems can be simulated naturally and efficiently within this broad context.

    The preceding is why (IMHO) the first two slides of Matt Hastings' QIP 2009 presentation captured the present situation in quantum mechanics beautifully ... the first slide being a funny XKCD cartoon ... the second slide being a historical review of quantum simulation to similar effect as the Seabees' motto: "The difficult we simulate immediately, the impossible takes a little longer".

    That is why (IMHO) the twenty-first century will both "dequantize" and "requantize" our understanding of quantum mechanics.

    At the same time, these advances will challenge the scope of our mathematical toolset and the scale of the enterprises we envision.

    Finally, they will challenge our sense of wonder and our sense of humor. Which is good! :)

    In summary ... happy holidays to everyone! :)

    ReplyDelete
  5. I don't understand why Hastings' result indicates the opposite Great Truth. There are Hamiltonians in nature that are not gapped. Cluster states do not obey the area law. Local Hamiltonians whose ground state is the cluster state used in measurement-based quantum computation are not gapped.

    ReplyDelete
  6. Matt, I had in mind not Hastings' result, but Hastings' second slide in his QIP talk Area Laws for Quantum Many-Body Systems, for which the executive summary is:

    -----------------------

    Q: How hard is Quantum Many-Body Theory?

    A1 (computer scientists): Really, really hard.

    A2 (brute-force numerics): Really, really hard.

    A3 (80 years of practice): In some cases, not that hard.

    -----------------------

    And here please allow me to encourage folks to read Hastings' whole talk ... it's excellent!

    It is a striking historical fact that ever since the 1950s, quantum simulation capabilities have exhibited the same exponential increase that classical simulation algorithms (and classical computing hardware too) have exhibited.

    As with any Moore's Law growth, this exponential increase must eventually slow, but there is little evidence (AFAICT) that this slowing will happen anytime soon.

    It is instead likely (IMHO) that the growth rates in quantum simulation capability will accelerate in the next few decades ... thanks to the freshet of ideas and theorems from QIT/QIP (which is why engineers like me study the QIT/QIP literature, go to the conferences, and read the blogs).

    That is why, in any debate on the topic: Resolved: Real-world quantum systems are not harder to simulate than classical systems, it seems (to me) that Matt's presentation affords powerful arguments on both sides of the question.

    The point being that many real-world quantum dynamical systems are hard and/or infeasible to simulate (e.g., quantum phase transitions and quantum computations), but when we pull back their dynamics onto state-spaces of Schmidt rank-one (i.e., classical state-spaces), we find that the resulting classical dynamics (e.g., classical phase transitions and classical computations) also are hard and/or infeasible to simulate.

    This leads us to wonder, is "the curse of dimensionality" really worse for real-world quantum systems than for real-world classical systems?

    The intent of my post was to point out that there are solid grounds for arguing either side of this (nicely debatable) proposition.

    ReplyDelete
  7. Hmm, but what about all the results showing that determining the ground state energy of a general local Hamiltonian, even in 1D, is QMA-complete? Admittedly, these rely on artificially constructed Hamiltonians that are far from what a physicist would call "natural", but, unless you believe that QMA=MA, it does seem to indicate that the quantum problem is much harder than the classical one. It is an interesting project to determine if there are any Hamiltonians encoding QMA-completeness that do satisfy various notions of "naturalness", e.g. translation invariance, low Hilbert space dimension of the individual spins, etc. Nevertheless, I wouldn't like to rule out finding such a Hamiltonian in a condensed matter system a priori and, if we manage to build a quantum computer, then we can engineer them anyway.

    My main problem with your comment is not that there may be no practical problem simulating Hamiltonians that come up in nature, but that you seem to be implying that this means that quantum complexity = classical complexity. If so, you are commenting on the wrong blog, because complexity theorists are well-known aficionados of the worst-case scenario rather than the generic one, and it would be truly remarkable if quantum and classical were equal in worst-case complexity measures.

    ReplyDelete
  8. Matt, I completely agree with everything you say, and it seems to me that our respective posts are wholly compatible.

    In particular, your example of the QMA-completeness of estimating ground-state energies is to-the-point.

    But when we pull back this problem onto Schmidt rank-one (classical) product states, can't we structure the interactions (in a reasonably natural way) to obtain a reduction to 3SAT?

    The point being, it seems to be a pretty reliable heuristic that hard quantum problems pull back to hard classical problems ... and the debate topic "Resolved: Real-world quantum systems are not harder to simulate than classical systems" was crafted with this heuristic in mind.

    If anyone can suggest a QMA-complete quantum simulation problem that does *not* pull back (reasonably) naturally to an NP-complete classical simulation problem ... heck ... our QSE Group definitely will deconstruct that example in our coming "Golden Spike" simulation seminar series.

    And we'll be grateful for it, too! :)

    ReplyDelete
  9. Followup to: "If anyone can suggest a QMA-complete quantum simulation problem that does *not* pull back (reasonably) naturally to an NP-complete classical simulation problem ... heck ... our QSE Group definitely will deconstruct that example in our coming 'Golden Spike' simulation seminar series."

    Oh yeah ... as an example of what we're looking for ... our "Golden Spike" seminar series is *definitely* planning to deconstruct the (scheduled) ETH-QIP/2010 presentation by Scott Aaronson and Alex Arkhipov titled Efficient simulation of quantum mechanics collapses the polynomial hierarchy ... just as soon as a preprint is available!

    For reasons that we posted on Dave Bacon's blog, "It seems to [us engineers] that Scott and his coauthor Alex Arkhipov have made a very serious and potentially very significant contribution to our understanding of the fundamental question, 'What physical systems can be simulated with classical resources?'"

    Definitely these are wonderfully interesting times, no matter *what* one's goals in mathematics, science, and engineering may be.

    ReplyDelete
  10. I would be very surprised if the Hamiltonian problem discussed in http://arxiv.org/abs/0705.4077 is in NP.

    ReplyDelete
  11. That's a terrific article you cite, Matt ... but doesn't its Hilbert-to-Kähler pullback yield a class of classical simulation problems that are not just NP-complete, but NP-hard?

    As the article itself says: "It can be harder to find the ground state energy than to simulate the dynamics of a system. More concretely, it has long been known that it is NP-hard to find the ground state energy of some classical spin systems, such as a three-dimensional Ising model."

    In contrast, what intrigued me about the Aaronson/Arkhipov model is that its Hilbert-to-Kähler pullback (seemingly) does *not* yield NP-complete (or harder) classical simulation problems (although I am still thinking about this, and am looking forward to a preprint from Aaronson/Arkhipov).

    These are tough issues, and it seems likely (to me) that both the classical and quantum sectors of the Complexity Zoo will get a whole lot bigger before we understand all the animals that are in them.

    ReplyDelete
  12. Of course it is NP-hard, because NP is in QMA and the problem is complete for QMA. However, that does not mean that it is NP-complete, since to be complete the problem has to actually be in the complexity class it is hard for. This problem is almost certainly not actually in NP, unless you believe that QMA is equal to NP, which seems very unlikely to me.

    It seems that we may be talking at cross purposes here. I thought that you were implying that finding ground state energies of local quantum Hamiltonians is no harder than the equivalent problem for local classical Hamiltonians. I simply don't see how this can be the case, because it would be a huge step towards proving that classical and quantum computation are equivalent. Perhaps I need to look at this Hilbert-to-Kähler pullback business more closely, but surely the resulting classical Hamiltonian must have some pathological properties.

    ReplyDelete
  13. "Now, in the mathematical blogsphere, the great advantage of posting a Great Truth is, of course, that the opposite is also true".

    John, you seem to me a true follower of paraconsistency...
    Interesting comment and discussion anyway!

    ReplyDelete
  14. Hmmmm ... inspired by Lance's question (which amounts to) "What might be a natural definition of 'dequantification'?", here is a description of Hilbert-to-Kähler pullback that is framed so as to provide a theorem-proving environment (and I have lightly updated our seminar notes to reflect these ideas).

    Given any quantum algorithm, dequantification proceeds in the following steps.

    (1) If any steps in the algorithm are discrete, convert them to equivalent continuous Lindblad/Ito processes. Do the same for all noise processes and measurement processes.

    (2) Specify the resulting Lindblad/Ito process in terms of one-forms and symplectic structures.

    (3) Convert the Ito increments to Stratonovich increments.

    Remark: at this point we are still formally working in the Hilbert-space world of (say) Nielsen and Chuang's textbook, but we have gained two advantages: (a) general Lindblad-Ito one-forms are dynamically concentrative, and (b) metric/symplectic structures and Stratonovich increments pull back naturally onto lower-dimension manifolds. To continue:

    (4) Pick a Kählerian submanifold (preferably having a geometry well-matched to the concentrative one-forms) and pullback the Lindblad-Ito-Stratonovich increments onto it.

    Remark: it will surprise no-one that matrix product states are suitable Kähler manifolds, and there is one particular matrix product state that is (in retrospect) obviously natural, namely, the case of diagonal matrices.

    The point is, if we specify for these diagonal matrices a Schmidt rank r, then all 2^(n-1) bipartitions of an n-qubit Kählerian state-space are Schmidt rank r. In other words, the spins have no intrinsic ordering, and quantum entanglement is spread thinly but evenly among all bipartitions.

    In our hands these diagonal matrix product states are the Swiss Army Knife of matrix product states: not optimal for any one system, but pretty good for a wide variety of systems. In our practical calculations we use a fixed Schmidt rank ~10--100, and (as is often the case with matrix product states) we find that this works exceedingly well.
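
    (Purely as a sketch, here is the elementary one-cut kernel that any fixed-Schmidt-rank method repeats: truncate the Schmidt decomposition across a bipartition to its r largest terms via an SVD and renormalize. A real matrix product state code sweeps such truncations over every cut; nothing below is specific to our seminar notes.)

        import numpy as np

        def truncate_to_rank(psi, cut, n, r):
            # Keep the r largest Schmidt terms across one bipartition:
            # the best rank-r approximation in the 2-norm, renormalized.
            M = psi.reshape(2**cut, 2**(n - cut))
            U, s, Vh = np.linalg.svd(M, full_matrices=False)
            approx = (U[:, :r] * s[:r]) @ Vh[:r, :]
            return (approx / np.linalg.norm(approx)).reshape(-1)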

    But this same Hilbert-to-Kähler pullback framework encourages us to ask questions that are much harder to test numerically, yet might be formally accessible via these concentration-and-pullback methods.

    For example, how much Schmidt rank do we need to simulate a QMA-complete n-qubit process? Do we really need rank 2^n? Perhaps if we are clever, a Schmidt rank of 2^(n^(1/3)) might be enough (i.e., a Kählerian state-space dimensionality that is superpolynomial, but sub-exponential)?

    That's a question whose answer would interest many people (well, me anyway) ... because it's always bothered me that Nature is so dimensionally parsimonious (and nonlinear) with Riemannian space, and yet so dimensionally profligate (and linear) with Hilbert space.

    Now, it's true that a lot of mathematical features are lost in Hilbert-to-Kähler pullback (most notably, linearity). But new features appear instead (most notably, concentrative dynamical flows). IMHO, only time will tell whether we've lost more than we've gained by this exchange.

    The resulting mathematical situation reminds me of the old Iowa joke: "I like both kinds of music: country *and* western."

    Similarly, we engineers like both kinds of quantum mechanics: Hilbert *and* Kähler. :)

    ReplyDelete
  15. Oh yeah -- I forgot to mention why Hilbert-to-Kähler pullback deserves to be called "dequantification."

    It's simply that pulling back the symplectic structure of Hilbert space onto a rank-1 product of coherent states yields the familiar classical symplectic structure of the Bloch equations and (in the large-j limit) Newtonian test-masses.

    So pulling back to a Kählerian manifold having a larger (resp. smaller) Schmidt rank yields dynamics that are more quantum (resp. more classical), and in the rank-1 limit the dynamics are precisely classical.

    ReplyDelete
  16. To put this post and discussion thread in context:


    From Wikipedia: “Quantum Turing machines can be related to classical and probabilistic Turing machines in a framework based on transition matrices, shown by Lance Fortnow” (Lance Fortnow (2003). "One Complexity Theorist's View of Quantum Computing". Theoretical Computer Science 292: 597–610. doi:10.1016/S0304-3975(01)00377-2.).

    In the following,
    --Z2, Z, Q, R, C as usual.
    --Unrestricted: the positions of the matrices can be filled with numbers from any of the above systems without restrictions;
    --Restricted: some algebraic restrictions are imposed, so that once some positions of the matrices have been filled, the other positions cannot be filled freely. Deterministic and probabilistic refer to computations, as usual (the input and output vectors are distributions).

    The paper compares the first three models in the same framework, where inputs can be represented as vectors (binary, stochastic, quantum...), algorithm steps by matrices (binary, stochastic, quantum...) and outputs again by vectors. Computations are done by matrix multiplication (see the toy sketch after the list below). So according to the paper we have the following hierarchy (Fortnow does not include the RDTM or the BSSM, but I think their inclusion is in the spirit of his paper).

    0.RDTM (Restricted Deterministic TM: restricted, over Z2, deterministic)
    1.DTM (Deterministic TM: unrestricted, over Z2, deterministic)
    2.PTM (Probabilistic TM: restricted, over Q, probabilistic)
    3.QTM (Quantum TM: restricted, over R or C, probabilistic)
    4.BSSM (Blum-Shub-Smale machine: unrestricted, R or C, deterministic)
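
    (A toy rendering of the matrix framework, assuming nothing beyond the paper's picture of computation as matrix multiplication: a deterministic step is a 0/1 matrix with exactly one 1 per column acting on a standard basis vector, and relaxing the entries to probabilities or to amplitudes gives the PTM and QTM rows above.)

        import numpy as np

        # A deterministic step: a 0/1 matrix with exactly one 1 per
        # column, acting on a standard basis vector (the configuration).
        D = np.array([[0, 1, 0],    # configuration 1 -> configuration 0
                      [1, 0, 0],    # configuration 0 -> configuration 1
                      [0, 0, 1]])   # configuration 2 -> configuration 2
        config = np.array([1, 0, 0])   # start in configuration 0
        config = D @ config            # -> [0, 1, 0]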

    Regarding computability, all machines up to the QTM are considered equivalent (that's the Church-Turing-Deutsch thesis) but, if I'm not wrong, the BSSM is more a kind of hypercomputing model. Regarding physical realisation, we do not know yet whether scalable QTMs can be constructed. Regarding complexity, we know that machines of a higher level can simulate machines of a lower level efficiently (i.e. through a polynomial transformation), but we do not know if the reverse is true.

    So the computer-science theoretical question is: can we effect exactly the same computations, with the same efficiency (up to polynomial transformations), with unrestricted Boolean matrices as we can with restrictions of the BSSM?

    And the ontological question is: which model of computation do we need to generate reality (I mean the organized: matter, cells, brains, societies)? RDTM? DTM? PTM? QTM? BSSM?

    ReplyDelete
  17. Proaonuiq, that was a good link to an excellent article by Lance Fortnow, One Complexity Theorist's View of Quantum Computing (arXiv:quant-ph/0003035v1).

    It's tough to decide how best to comment on Lance's essay: Seriously? Sardonically? Humorously? Poetically?

    So heck, let's respond on all four channels ... but to save time, we'll mainly reference these comments.

    Some Serious Remarks ... are provided by Terry Tao's essay Perelman's proof of the Poincaré conjecture: a nonlinear PDE perspective (arXiv.org:math/0610903).

    Footnote 3 of Tao's essay can be read (IMHO) as a self-contained mini-commentary on Lance's essay. Here the point is that of course topology and quantum mechanics can be conceived in discrete, algebraic terms. And equally topology and quantum mechanics can be conceived in geometric, differential terms.

    The history of the Poincaré conjecture illustrates (as Tao's footnote argues persuasively) that progress on the Poincaré conjecture required that *both* points of view be respected. This same diversity is absolutely essential (IMHO) for progress in complexity-theoretic aspects of quantum mechanics as well.

    A Sardonic Remark ... is offered by Bjarne Stroustrup's on-line C++ Style and Technique FAQ: "Whenever something can be done in two ways, someone will be confused."

    We notice that Stroustrup's FAQ links to an on-line C++ glossary that has 558 entries. Inspired by Stroustrup's glossary, our concentration-and-pullback seminar notes provide a Hilbert-to-Kähler pullback glossary (with each entry hyperlinked to a Wikipedia article).

    The Hilbert-to-Kähler pullback glossary covers mathematical terms that are *not* defined in Nielsen and Chuang's text; there turn out to be (about) 80 such mathematical terms.

    Hmmm ... if learning geometric quantum mechanics requires learning 80 new mathematical elements, and learning C++ requires learning 558 new programming elements ... ("Does wood float on water? What else floats on water?", Monty Python) ... then it is obvious that learning C++ is 6.98 times harder than learning geometric quantum mechanics! :)

    A Humorous Remark ... is offered by John Lee in his book Introduction to Smooth Manifolds: "The old joke that 'differential geometry is the study of properties that are invariant under change of notation' is funny primarily because it is alarmingly close to the truth. The reader's main job ... is to absorb all the definitions and learn to think about familiar objects in new ways."

    It is regrettable (IMHO) that most students learn quantum mechanics *before* learning differential geometry. If the opposite order were the norm, then Nielsen and Chuang's textbook would have looked more like our concentration-and-pullback seminar notes ... and the seminar would have been far easier to teach!

    A Poetic Remark ... is offered by John Godfrey Saxe's nineteenth-century poem

    The Blind Men and the Elephant
    "So oft in theologic wars,
    The disputants, I ween,
    Rail on in utter ignorance
    Of what each other mean,
    And prate about an Elephant
    Not one of them has seen!"


    A Concluding Remark ... It seems that a key challenge for twenty-first century quantum mechanics is to find the time in the curriculum, and the motivation for students, and the resources to support those students, while we all learn multiple ways of appreciating the vast elephant that is quantum mechanics.

    Because (when we think about it) it's clear that we are all *very* lucky that quantum mechanics is such a vast and wonderful beast!

    On which note, the Sidles family extends best wishes for happy holidays to all!

    ReplyDelete