The canonical wisdom in most computability textbooks is that several researchers in the early 1930s were working on various formal definitions to capture the intuitive notion of a computable function, and that Turing was just one of them and shares the credit with them. This is incorrect. It was Turing and Turing alone: not Church, not Kleene, not Post, and not even Gödel himself. (You can see Soare's historical view here.) It was Turing who:

- gave the first convincing model for an intuitively computable function;
- gave a precise demonstration that this formal model captured the intuitive notion; and
- defined the universal Turing machine, which is of immense importance.

Yesterday Soare gave a preview of a talk at the Chicago logic seminar. He focused mostly on the work of the 1930s and on how Kleene later established the terminology "Recursion Theory" and "Church's Thesis." Soare argues that Turing deserves most of the credit for the Thesis because Turing gave solid arguments for it, and Gödel always credited Turing. Church may have formulated the thesis first, but Turing's work made it credible.

We computer scientists have been using "Church-Turing Thesis" for years, and even with Soare's historical background, "Church-Turing" still seems like the way to go.

This just highlights that naming is rather arbitrary. Von Neumann predates Yao by three decades or so and gets no credit. Church and Levin postdate their corresponding names and they get full co-credit. There are many other examples.

So let's stop pretending that results are named after their discoverers. In the process, let's also stop the pretense that we "fix" attributions by adding or subtracting names from the list of supposed creators.

We should all just agree that results are named to honor famous researchers who have done good (often pioneering) work in the area. Nothing more, nothing less.

Don't feel so sorry for von Neumann: he is usually given full credit for the stored-program computer, though there apparently is a letter from him to Turing making it clear that the main idea came from Turing's universal TM. As part of the consulting for which he is given credit for the idea, he did develop circuitry for implementing random-access memory, so maybe it should be called the "Turing-von Neumann architecture."

To anon 2: I hadn't heard about the Turing connection with the von Neumann architecture (do you have a reference?), but certainly the architecture itself was invented by Eckert, and in this case Johnny was just the guy who happened to write the key technical report on the subject. On the other hand, George Dantzig often stated that linear programming and the simplex algorithm had substantial contributions from von Neumann and that Johnny selflessly declined joint authorship.

This only reinforces my point that objects are all too often not named after their discoverers, so we should stop pretending they are.

Here's the von Neumann letter.

My impression was that only a few logicians ever thought Church deserved equal credit with Turing. (Sure, Church might've gotten the Thesis, but Turing got the machines!) So if we accept Soare's argument, this might be one of the few instances where the general public (OK, the general nerd public) got it right and the specialists got it wrong.

Thanks for the reference. I construe the letter as indicating that he might well have been aware of Turing machines, which are a first step towards stored-program computers. However, there is very little of the RAM (von Neumann) architecture in Turing's work. IMHO his contributions are more of a theoretical kind (but no less relevant).

P.S., a bit of trivia: Turing's PhD is from Princeton, not Cambridge as one might naturally expect.

The most important thing in getting your thesis well known is who your supervisor is.

Computability theory is a stupid name. It just sounds better for selling research grants. In reality, it ties us to a particular abstraction. Recursion theory can be described nicely in terms of closure under certain functions, and this approach avoids all the chrome of Turing machines or whatever your other favorite equivalent model is. Ideally, one doesn't want to get stuck on any one way of explaining things, because different problems are easier to think about in different models.

"Recursion Theory" is a much stupider name. It already has a meaning in CS and math. What if by historical accident we called Computability Theory "Calculus" because we calculate things? That would be just as stupid, since Calculus already means something else. Computability Theory is descriptive: it is the study of what is computable and also what is not computable. It may be tied to one model, but it is the way 100% of all computability theorists think about things.

bill gasarch

"Computability Theory is descriptive: it is the study of what is computable and also what is not computable. It may be tied to one model, but it is the way 100% of all computability theorists think about things."

This is 100% false. I know a professor active in recursion theory. In a graduate course he taught, he defined which functions were recursive by saying they were the least set closed under universal functions, projections, a certain sort of recursion property, and perhaps one or two other simple things I've forgotten. I've also heard of it being taught from the point of view of what's definable by certain types of bounded-quantifier formulas.

In my experience, recursion theorists are also rarely talking about computation. Usually they seem to be busy constructing functions or sets which step by step avoid or include certain situations.

"This is 100% false. I know a professor active in recursion theory. In a graduate course he taught, he defined which functions were recursive by saying they were the least set closed under universal functions, projections, a certain sort of recursion property, and perhaps one or two other simple things I've forgotten. I've also heard of it being taught from the point of view of what's definable by certain types of bounded-quantifier formulas."

What you've hinted at is Kleene's mu-recursive-function definition of the computable functions. This is still computability theory as everyone knows it, and it is equivalent to TMs. Nobody requires that TMs be the only expression of computability; the lambda calculus is also equivalent. They are all part of the same objective, describing what can be built/computed by an algorithmic process, i.e. computability theory.
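For readers who haven't seen Kleene's definition, here is a toy sketch (not from the thread; all function names are my own illustration) of the mu-recursive functions in Python: the computable functions are the least class containing zero, successor, and the projections, closed under composition, primitive recursion, and unbounded minimization.

```python
def zero(*args):
    """The constant zero function."""
    return 0

def succ(n):
    """Successor."""
    return n + 1

def proj(i):
    """The i-th projection: proj(i)(x0, ..., xk) returns xi."""
    return lambda *args: args[i]

def compose(f, *gs):
    """Composition: x -> f(g1(x), ..., gm(x))."""
    return lambda *args: f(*(g(*args) for g in gs))

def prim_rec(base, step):
    """Primitive recursion on the first argument:
       h(0, xs)     = base(xs)
       h(n + 1, xs) = step(n, h(n, xs), xs)
    (implemented with a loop rather than literal recursion)."""
    def h(n, *xs):
        acc = base(*xs)
        for i in range(n):
            acc = step(i, acc, *xs)
        return acc
    return h

def mu(f):
    """Unbounded minimization: mu(f)(xs) = the least n with f(n, xs) == 0.
    The search may never halt -- this is exactly where partiality enters."""
    def g(*xs):
        n = 0
        while f(n, *xs) != 0:
            n += 1
        return n
    return g

# Example built purely from the basis: addition.
#   add(0, y)     = y
#   add(n + 1, y) = succ(add(n, y))
add = prim_rec(proj(0), compose(succ, proj(1)))
```

Note that primitive recursion alone yields only total functions; it is the mu operator whose unbounded search introduces partiality, which is why this least class matches what Turing machines compute.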