Thursday, December 07, 2006

The Efficient Church-Turing Thesis

The Church-Turing thesis roughly states that everything computable is computable by a Turing machine. I strongly believe the Church-Turing thesis and have vigorously defended it in this weblog.

Sometimes we hear about an efficient or strong version of the thesis, for example in the Bernstein-Vazirani paper Quantum Complexity Theory.

Just as the theory of computability has its foundations in the Church-Turing thesis, computational complexity theory rests upon a modern strengthening of this thesis, which asserts that any "reasonable" model of computation can be efficiently simulated on a probabilistic Turing machine (an efficient simulation is one whose running time is bounded by some polynomial in the running time of the simulated machine). Here, we take reasonable to mean in principle physically realizable.
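
To make the "efficient simulation" requirement concrete (my paraphrase, not Bernstein and Vazirani's notation): a model of computation M is efficiently simulated by a probabilistic Turing machine S if there is a fixed polynomial p such that

\[ T_S(x) \;\le\; p\bigl(T_M(x)\bigr) \quad \text{for every input } x, \]

where T_M and T_S denote the running times of M and S. The thesis then asserts that every "reasonable" model, meaning every in-principle physically realizable one, admits such a simulation.
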
You mostly hear about the thesis from the quantum computing folks, who use it to justify why quantum computing changes everything we believe about efficient computation. But did anyone actually state such a strong version of the Church-Turing thesis before quantum computing came along? The closest I can find comes from Steve Cook's 1982 Turing Award Lecture.
The identification of P with the tractable (or feasible) problems has been generally accepted in the field since the early 1970's…The most notable practical algorithm that has an exponential worst-case running time is the simplex algorithm for linear programming…but it is important to note that Khachiyan showed that linear programming is in P using another algorithm. Thus, our general thesis, that P equals the feasible problems, is not violated.
But not even Cook in 1982 seemed to believe that P captured all the "physically realizable" efficient algorithms, as he goes on to describe probabilistic and parallel algorithms.

By no means does computational complexity "rest upon" a strong Church-Turing thesis. The goal of computational complexity is to consider different notions of efficient computation and to compare the relative strengths of these models. Quantum computing does not break the computational complexity paradigm but rather fits nicely within it.

Having said all that, as of today the strong version of the Church-Turing thesis as described by Bernstein and Vazirani seems true, as we are not even close to physically realizable quantum computers. We don't even need "probabilistic" since we believe we have strong pseudorandom generators. Maybe someday we will build those quantum machines or create other physical devices that will efficiently compute problems beyond polynomial time. But we are not there yet.
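
To unpack the pseudorandomness remark: by standard hardness-versus-randomness results (Impagliazzo and Wigderson, for instance), strong enough circuit lower bounds let deterministic machines match probabilistic ones up to polynomial overhead. One standard formulation:

\[ \exists\, L \in \mathrm{E} = \mathrm{DTIME}(2^{O(n)}) \text{ with circuit complexity } 2^{\Omega(n)} \;\Longrightarrow\; \mathrm{P} = \mathrm{BPP}. \]

Under that assumption, the word "probabilistic" in the Bernstein-Vazirani statement is superfluous.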

7 comments:

  1. But did anyone actually state such a strong version of the Church-Turing thesis before quantum computing came along?

    They may not have, but I get the impression that many of the skeptics of quantum computing believe it. It may seem a little unfair to formulate such a thesis a posteriori, only to then say that quantum computing proves it wrong. But it may not be so unfair after all, if it accurately summarizes a main intuitive objection.

    Of course we all have to concede that quantum computers don't yet exist, and may not exist for the foreseeable future. I object to techno-hype about quantum computing as much as anyone does. Nonetheless, I think that it is important to debate whether quantum computers could exist in principle. It's important practically, because it could guide the invention of quantum computers if they really are possible. And it's important philosophically, because it reveals the ultimate implications of the true theory of quantum mechanics. (Or more precisely, the true theory of quantum probability.)

    To put it another way, given that quantum computers don't exist, is that only because we aren't sufficiently clever or industrious? Or is there a more fundamental reason that they are impractical or impossible? If the former, then maybe we should devote more resources to the effort. If the latter, then we should surely try to identify the fundamental obstruction.

  2. Here is an extensive discussion of the ECT even in the realm of classical physics: Andrew Yao, "Classical physics and the Church-Turing Thesis," JACM 50(1), 100-105 (2003).

  3. It seems to me that we should either talk about theory or practice.

    If we're talking about theory, then quantum computers seem realizable and so the strong C-T thesis does not seem to hold. (Assuming factoring cannot be done in poly-time on a classical computer.)
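
    Spelling out that step: Shor's algorithm places factoring (as a decision problem) in BQP, so

    \[ \mathrm{FACTORING} \in \mathrm{BQP} \ \text{and} \ \mathrm{FACTORING} \notin \mathrm{BPP} \;\Longrightarrow\; \mathrm{BQP} \not\subseteq \mathrm{BPP}, \]

    and a physically realizable machine deciding a problem outside BPP is exactly what the strong thesis forbids.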

    If we're talking about practice, then no one knows how to build a computer with an infinitely long memory tape, either.

  4. An oft-cited reference that predates the Bernstein-Vazirani paper is the "Invariance Thesis" on page 5 of Peter van Emde Boas's article "Machine Models and Simulations", Handbook of Theoretical Computer Science A, Elsevier, 1990.

    It says:
    "Reasonable" machines can simulate each other within a polynomially bounded overhead in time and a constant-factor overhead in space.

    Van Emde Boas's paper discusses the evidence for this thesis.

    -- Ronald de Wolf

  5. I enjoyed Greg Kuperberg's thoughtful post very much (as I enjoy all Greg's posts). I wonder if Greg's closing dichotomy exhausts the interesting logical possibilities?

    Challenge: triangulate the Kuperberg Dichotomy: "Given that quantum computers don't [yet] exist, is that only because we aren't sufficiently clever or industrious? Or is there a more fundamental reason that they are impractical or impossible?"

    E.g., mightn't it turn out that all four of the following are true?

    (1) BPP is much larger than we think.

    (2) BQP is also much larger than we think.

    (3) There is an exponentially large number of algorithmic complexity classes that are both "natural" and intermediate between BPP and BQP.

    (4) Practical quantum computers (as contrasted with ideal BQP computers) can instantiate some of these intermediate classes, but not others.

    Just to point out: it would be very good news, for pretty much everyone, if all four were true, in the sense that (1) creates abundant jobs for engineers, (2) creates abundant jobs for quantum computation theorists, (3) provides for an indefinitely large number of conceptually novel arXiv abstracts to be written by grad students, and (4) motivates physical quantum computers to be built and operated for practical purposes.

    However, this world would not be a math and science utopia in the usual sense, because its conceptual foundations would not be simple; simplicity is a required characteristic of utopias as they are traditionally conceived.

    Which reminds us that the word utopia derives from the Greek for "no place." :)

  6. Add-on to Ronald's comment: in fact, the Invariance Thesis was put forward in the STOC'84 paper by Slot and van Emde Boas. Polynomial-time simulation and constant-factor-space simulation were each known separately, but it was nontrivial to achieve BOTH conditions with the same simulation.

  7. Ah, I found an early statement of the efficient Church-Turing thesis:

    Parberry, I., 'Parallel speedup of sequential machines: a defense of parallel computation thesis,' ACM SIGACT News, Volume 18, Issue 1, pp. 54-67, 1986.
