Thursday, January 06, 2011

The Enduring Legacy of the Turing Machine

Last February Peter Wegner asked if I would be interested in writing an article for a series in ACM Ubiquity on "What is Computation?"
Our expanding collaboration with other fields is broadening our understanding of computation, and it is appropriate to take stock of where we are.  It is likely that the question "What is computation?" will never be completely settled, just as the question "What is life?" is never settled in biology and "what are the fundamental forces?" is never settled in physics.  Engaging with our question is valuable even if we may not find a completely satisfactory answer.
Well, I know exactly what computation is, thanks to Alan Turing's beautiful 1936 paper. My initial reaction was to stay far away, but then I realized I had a forum to truly make the point that Turing had the right notion from the start. As I started writing, I realized I didn't have to make any new arguments; Turing himself had anticipated the future objections. I just used his words.
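
For the concrete picture, here is a minimal sketch of the object Turing described: a finite table of transitions driving a read/write head over an unbounded tape. The simulator and the little unary-increment machine are illustrative toys, not anything drawn from the 1936 paper.

```python
# Minimal Turing machine simulator: a finite transition table driving a
# read/write head over an unbounded tape. The "increment" machine below
# is an illustrative example, not a machine from Turing's paper.

def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    """transitions maps (state, symbol) -> (new_state, write_symbol, move),
    with move -1 (left) or +1 (right). The machine halts when no rule applies."""
    cells = dict(enumerate(tape))   # sparse tape; absent cells read as blank
    head = 0
    for _ in range(max_steps):
        key = (state, cells.get(head, blank))
        if key not in transitions:  # no rule for this (state, symbol): halt
            break
        state, cells[head], move = transitions[key]
        head += move
    span = range(min(cells), max(cells) + 1)
    return "".join(cells.get(i, blank) for i in span)

# Example machine: append a '1' to a block of 1s (unary increment).
increment = {
    ("start", "1"): ("start", "1", +1),  # scan right across the 1s
    ("start", "_"): ("halt",  "1", +1),  # write a 1 on the first blank
}
print(run_tm(increment, "111"))  # -> "1111"
```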

The Ubiquity "What is Computation?" papers are coming out once a week. My article, The Enduring Legacy of the Turing Machine, was published last week.

Also check out the article by David Bacon, who turns the question around: instead of asking "Can the universe compute beyond Turing Machines?" he asks "Why can the universe compute at all?"

4 comments:

  1. The ACM's What is Computation? series is terrific, and Lance's Enduring Legacy of the Turing Machine is one of many excellent articles.

    Particularly enjoyable was the disagreement among the various eminent authors ... this discord is evidence that Great Truths are being discussed. :)

    In aggregate, these ACM/Ubiquity articles reminded me of the September 04, 2007 post here on Computational Complexity, in which GASARCH drew attention to a now-classic 1979 article by De Millo, Lipton, and Perlis, titled Social processes and proofs of theorems and programs.

    In essence, if one substitutes throughout the De Millo/Lipton/Perlis 1979 article the word "computation" for the word "theorem", the result can be read as a remarkably foresighted summary of the aggregated ACM/Ubiquity What is Computation? 2010 series.

    In this respect the De Millo/Lipton/Perlis article was at least thirty-two years ahead of its time. :)

    From this we may infer ... uhh ... or can we?... that the merging of the notion of "proof technology" with the notion of "computation technology" is likely to continue ... and even accelerate ... in coming decades.

    My thanks and appreciation are extended to Computational Complexity for pointing us toward these thought-provoking essays.

    ReplyDelete
  2. Lance,

    Thanks for sharing this intriguing series.

    I am quite sympathetic to your view, though with a small amendment to the characterization of later developments in our understanding of computation. Indeed, Turing beautifully anticipated many of the directions future study would take and addressed many of their liabilities. I would add that substantive extensions and applications have been made, but none of them redefines the core sense of computation so much as explores his ideas (e.g., the Type-2 Turing Machines in Weihrauch's work on Computable Analysis explore the computation of real numbers outlined in Turing's paper, as do the various works on oracles), or adds new structure (finitary or infinitary) for the computation to work with (infinite speedups or discrimination, and other supertasks) or within (neural networks, tilings). Also, it seems to me that the various attempts to integrate information and computation have led to a great deal of confusion, with the exception of well-defined (if abused) notions of information (e.g., Shannon information, algorithmic information). These points are, I suspect, a matter of emphasis and not disagreement.
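
    To make the computable-analysis direction concrete: in the standard Cauchy-style representation, a real number is computable when some program can supply a rational approximation to any requested precision. The Python fragment below is a toy illustrative rendering of that idea, not Weihrauch's formalism.

    ```python
    from fractions import Fraction

    # Toy rendering of a computable real: a function that, given a
    # precision n, returns a rational within 2**-n of the number.
    # (Illustrative only; not Weihrauch's Type-2 formalism.)

    def sqrt2(n: int) -> Fraction:
        """Approximate sqrt(2) to within 2**-n by bisection on [1, 2]."""
        lo, hi = Fraction(1), Fraction(2)
        while hi - lo > Fraction(1, 2**n):
            mid = (lo + hi) / 2
            if mid * mid < 2:
                lo = mid
            else:
                hi = mid
        return lo

    print(float(sqrt2(20)))  # ~1.4142135, within 2**-20 of sqrt(2)
    ```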

    ReplyDelete
  3. GASARCH said: Also worth checking out: the book A Guided Tour Through Alan Turing's Historic Paper on Computability and the Turing Machine

    In parallel with Turing's 1936 article, it's great fun to deconstruct von Neumann's 1945 article First Draft of a Report on the EDVAC.

    Four observations: (1) both Turing and von Neumann have an astounding capacity for working things through in detail ... in both articles, seminal ideas are interspersed with in-depth analyses of minutiae.

    (2) Neither article is easy to read ... and von Neumann carefully explains why this necessarily is the case:

    ----------------
    "The ideal procedure would be, to take up the five specific parts [of the von Neumann architecture] in some definite order, to treat each one of them exhaustively, and go on to the next one only after the predecessor is completely disposed of. However, this seems hardly feasible. The desirable features of the various parts, and the decisions based on them, emerge only after a somewhat zigzagging discussion. It is therefore necessary to take up one part first, pass after an incomplete discussion to a second part, return after an equally incomplete discussion of the latter with the combined results to the first part, extend the discussion of the first part without yet concluding it, then possibly go on to a third part, etc. Furthermore, these discussions of specific parts will be mixed with discussions of general principles, of arithmetical procedures, of the elements to be used, etc."

    "In the course of such a discussion the desired features and the arrangements which seem best suited to secure them will crystallize gradually until the device and its control assume a fairly definite shape. As emphasized before, this applies to the physical device as well as to the arithmetical and logical arrangements which govern its functioning."
    ----------------

    (3) Even though these are (arguably) the two most important works in the history of computer science, neither was recognized as a "breakthrough" at the time of publication. Prior to WWII, only two of Turing's colleagues expressed any substantial appreciation of the work (these two being Richard Braithwaite and Heinrich Scholz). Similarly, von Neumann's IAS colleagues showed no great appreciation of the merits of actually building computing machines ... many IAS colleagues actively opposed von Neumann's line of research.

    (4) And finally, neither Turing nor von Neumann envisioned future trends with perfect accuracy. For example, von Neumann envisioned a total memory of 2^18 bits (which he called "memory units") ... and asserted that it was "obviously impractical" to store that many bits using logic gates (which he called "E-elements"). Nowadays, the per-bit cost of "E-element" memory has fallen to 10^(-10) dollars. If computers make sense without Moore's Law, how much more sense do they make with Moore's Law?
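
    (Taking that 10^(-10) figure at face value, a quick back-of-the-envelope calculation makes the contrast vivid; the snippet below is nothing more than that arithmetic.)

    ```python
    # Back-of-the-envelope: today's cost of the 2**18-bit memory that
    # von Neumann judged "obviously impractical" to build from E-elements,
    # using the 10**-10 dollars-per-bit figure quoted above.
    bits = 2**18
    dollars_per_bit = 1e-10
    print(f"{bits} bits -> ${bits * dollars_per_bit:.2e}")
    # 262144 bits -> $2.62e-05, i.e. a few thousandths of a cent
    ```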

    Here is a 1954 audio recording of von Neumann defending computer research ... only after a full decade of hard work and repeated public presentations by von Neumann, Turing, and many other pioneers did widespread appreciation emerge of the unbounded potential for advances in computer science.

    ReplyDelete