Tuesday, July 03, 2012

Why is it called the Turing Award?

The ACM Turing Award represents the highest honor in the computing research community, the closest thing we have to a Nobel Prize in computer science. In this centenary celebration of Alan Turing, it seems obvious that the award should be named after him.

But back in 1966, when Alan Perlis won the first Turing Award, Turing's influence on the field was far less obvious. Turing's model had only a few years earlier found its way into the computer science literature, which was still dominated by automata, parsing and circuits. In 1966, what is now the Foundations of Computer Science conference had just been renamed from Switching Circuit Theory and Logical Design to Switching and Automata Theory.

So why name the award after Turing back then? This question came up during one of the panel sessions at the ACM Turing Celebration where there were some vague guesses. I asked around during the breaks and nobody seemed to know. I even put the question to Turing biographer Andrew Hodges in Cambridge and he similarly didn't know.

The Knuth Prize aside, most awards are named after people who have passed away. Back in 1966 there weren't very many dead computer scientists. Even among Turing's generation, Church, Post, Kleene, even Gödel were alive and kicking. Turing would have been only 54 had circumstances been different. John von Neumann died young from cancer in 1957 but wasn't thought of primarily as a computer scientist. There were also Charles Babbage and Ada Lovelace, but I'm not sure they were in people's minds back then.

I'm guessing that Turing was just the default choice for the name of the award, and the ACM got extraordinarily lucky in picking the right person in spite of themselves.

8 comments:

  1. But back in 1966 when Alan Perlis won the first Turing award, Turing's influence on the field was far less obvious.

    [citation needed]

  2. Google Ngram Viewer shows a tremendous increase in the incidence of "Turing machine" spanning the years 1957-63.

    Then with the help of Google Books, we find a surge both in academic publications, and introductory texts like Martin Davis' Computability and Unsolvability (1958) and Charles Houghton's An Elementary Introduction to the Concept of a Turing Machine (1961).

    In physics we used to call this phenomenon "a bandwagon." For whatever reason(s) — plausibly a confluence of multiple reasons — there was a pronounced Turing bandwagon during the years 1957-63.

  3. Just one correction: Emil L. Post died April 21, 1954, at age 57. In contrast to Turing, he did no practical work in CS. But since by now we have Turing, Gödel, and Kleene awards, it is high time to create a Post award, too.

  4. The first book with "Turing Machine" in the title was Charles Joseph Houghton's An Elementary Introduction to the Concept of a Turing Machine (1961); this was followed by many more.

    Similarly we have Ernest Nagel and James Newman's (justly celebrated) exposition Gödel's Proof (1960); this was followed by many more.

    Kleene's name is associated with several eponymous books, like Renate A. Schmidt's Relations and Kleene Algebras in Computer Science (2008).

    However, the sole book with "Post" in the title apparently is Solvability, provability, definability: the collected works of Emil L. Post (1994).

    Conclusion: Emil Post deserves that someone write a book-length scholarly survey with the word "Post" appearing eponymously in the title.

  5. A related exercise is Simon Raper's "Graphing the history of philosophy".

    Is there a similarly richly structured graph for CS / CT / QM / QC / QIT? Where do Turing, Gödel, Kleene, and Post appear on it?

  6. I'm sure this'll be unpopular, but I don't think Turing would approve very much. Alan Turing was a mathematician first and foremost, and would much rather have his name used for a mathematics prize. He would probably not approve at all of what theoretical CS has turned into (a sink: a collection of many bright young minds utterly wasted by a near-religious devotion to taxonomy, with no real innovation in decades).

  7. I believe that there were good reasons, not "lucky guesses," for the choice of Turing as the name of the prize. Although this period is even before I became a theoretician, it is not hard to look at what was known in Theoretical Computer Science in 1966. Besides Formal Languages, where the most visible greats were still very active and which was seen as important but specialized, without a single major figure, there was the beginning of Computational Complexity: Juris Hartmanis (in collaboration with Stearns, and sometimes with Lewis) had published the first time and space hierarchy results, and Michael Rabin had independently developed similar techniques. Parsing of context-free languages on Turing machines was well understood (the n^3 CYK algorithm, and a log^2 space algorithm). The computational model for these results was the Turing machine. The existence of Universal Turing Machines gave a clean mathematical model, and motivation to pursue such results. There was the hope that a general theory, one that would be a foundation for Theoretical Computer Science, could be based on what was then called Recursion Theory.

    Turing also showed that many questions that could be posed as foundational were not. For example: "Is there a fundamental difference between hardware and software?" "Should we use base 3 logic?" "Is it important for programs to be self-modifying?"

    Perhaps most important, the implication of the idea of universality was that "symbol manipulation" could be as good as "specific problem solving." Incredible as this may sound today, this was considered a revolutionary insight in AI. In other words, you did not have to build a checker-playing program: you could instead set up a formal system in which you could prove the theorem "X is a winning configuration for White" and then use a theorem prover for this system: the resulting program would play checkers. A less trivial version is that in order to write a program to recognize patterns, you do not have to try to imitate biology.

    Come to think of it, this idea of "pushing symbols" instead of "dealing with the problem" is far from being universally accepted even today.

    Computer programs manipulate symbols. Turing showed that this is all any rigorous scientific method can do, so in a sense he showed that Computer Science is the most important scientific endeavor.

    People did know this in 1966. So, as a "lucky guess," they decided to honor him.

  8. Reply to Anonymous 4:19
    The problem is not that you are unpopular, but that you are misinformed. Look at some recent FOCS/STOC/SODA/ICALP/SOCG conference proceedings: there will be hardly any paper dealing with taxonomy. On the other hand, in the last few decades there have been great results in such diverse subareas as sublinear time bounds, computational economics, computational game theory, distributed computing, efficient network algorithms, data structures, and quantum computing, as well as truly wonderful and important interdisciplinary work: coding theory, statistics, compressed sensing, and embeddings of one topological space into another. I wrote the list just as topics came to mind; it is very far from comprehensive and, on purpose, avoids Complexity Theory.

    As for what Turing would have liked, it is very much open to debate. He did spend most of his professional life in institutions that today would be affiliated with CS. His name is not being used: we honor him. He was a genius, first and foremost.
