This is the first of a long series of posts giving an informal introduction to computational complexity.
Computational complexity theorists try to determine which problems are efficiently solvable on a computer. This sentence already leads to many questions: What is a problem? What is efficiently solvable? Let us start with a truly basic question: what is a computer?
In 1936, before electronic computers existed, Alan Turing invented a theoretical computing device now called a Turing machine. He tried to give a simple model of the thought processes of mathematicians. His model has stood the test of time and remains the standard model of computation that we study today.
Instead of giving the formal definition of a Turing machine, let us try a more modern approach. Consider a current programming language like Java, and imagine a world where a Java program has access to a potentially infinite amount of storage. A Turing machine then corresponds to a specific Java program. You might find it a little confusing to think of a Turing machine as a Java program, but that is the best way to think about it.
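To make the "Turing machine = Java program" picture concrete, here is a minimal sketch of a Turing machine written as a Java program. The machine itself (a single-state machine that flips each bit and halts at the first blank) is invented for illustration; the point is that the tape is a map from position to symbol, so storage is unbounded in principle, limited only by real memory in practice.

```java
import java.util.HashMap;
import java.util.Map;

// A sketch of a Turing machine as a Java program. The "tape" is a
// map from head position to symbol, so it is unbounded in principle.
// This particular machine is a toy example: it flips every bit of
// its input and halts when it scans the first blank cell.
public class FlipMachine {
    static final char BLANK = '_';

    public static String run(String input) {
        Map<Integer, Character> tape = new HashMap<>();
        for (int i = 0; i < input.length(); i++) {
            tape.put(i, input.charAt(i));
        }
        int head = 0;
        // Single state: flip the scanned bit, move right, halt on blank.
        while (tape.getOrDefault(head, BLANK) != BLANK) {
            char c = tape.get(head);
            tape.put(head, c == '0' ? '1' : '0');
            head++;
        }
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < head; i++) {
            out.append(tape.get(i));
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(run("0110")); // prints 1001
    }
}
```

A real Turing machine also has a finite table of states and transitions; the loop above plays that role for a one-state machine.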
Does it matter which programming language we choose? What if we used C++ or Visual Basic or the original Turing machine model? No, it makes no difference. Consider a C++ program P. There are, at least theoretically, Java programs that will interpret C++ programs. If you consider a Java program that interprets P, it will have the same computational behavior as P. This idea holds between any two programming languages including the original Turing machine and leads to the Church-Turing thesis:
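The interpretation argument above can be sketched in code. The following Java program interprets programs written in a tiny hypothetical counter language (the language, its instructions INC/DEC/JNZ, and the sample program are all invented here for illustration, not taken from any real system): whatever such a program computes, this Java interpreter computes too.

```java
import java.util.HashMap;
import java.util.Map;

// A sketch of one language interpreting another. Programs in a toy
// counter language are arrays of instructions: "INC x" adds one to
// counter x, "DEC x" subtracts one (not below zero), and "JNZ x n"
// jumps to instruction n if counter x is nonzero. Execution halts
// when the program counter runs past the end of the program.
public class TinyInterpreter {
    public static Map<String, Long> run(String[] program) {
        Map<String, Long> counters = new HashMap<>();
        int pc = 0;
        while (pc < program.length) {
            String[] parts = program[pc].split(" ");
            String op = parts[0], x = parts[1];
            long v = counters.getOrDefault(x, 0L);
            switch (op) {
                case "INC": counters.put(x, v + 1); pc++; break;
                case "DEC": counters.put(x, Math.max(0, v - 1)); pc++; break;
                case "JNZ": pc = (v != 0) ? Integer.parseInt(parts[2]) : pc + 1; break;
                default: throw new IllegalArgumentException("unknown op: " + op);
            }
        }
        return counters;
    }

    public static void main(String[] args) {
        String[] moveAToB = {
            "INC a", "INC a", "INC a",  // initialize a = 3
            "INC one",                  // a helper counter that stays nonzero
            "JNZ a 6",                  // while a != 0, go to the loop body
            "JNZ one 9",                // otherwise jump past the end (halt)
            "DEC a",
            "INC b",
            "JNZ one 4"                 // back to the loop test
        };
        System.out.println(run(moveAToB).get("b")); // prints 3
    }
}
```

The Java interpreter and the counter program it runs have the same computational behavior, which is exactly the relationship the paragraph above describes between a Java interpreter for C++ and a C++ program P.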
Everything computable is computable by a Turing machine.

You can interpret this as saying that everything computable is computable by a Java program.
The Church-Turing thesis is a thesis, not a theorem, so it cannot be proven; instead it has led us to define computable as computable by a Turing machine. And after about half a century of real computers, the Turing machine has truly proven itself the right model of computation.
I find the first-order logic notions of $\Sigma_1$ subsets and $\Delta_1$ subsets of $\mathbb{N}$ to be much more natural than Turing enumerable or Turing decidable, which are after all defined in terms of a particular kind of machine (one of many possible). In my view, the Church-Turing hypothesis looks more plausible if stated in terms of recursive functions (or $\Sigma_1$/$\Delta_1$ sets) than through the language of "machines". Talking about different machines gives the hypothesis a mystifying aura that it does not deserve.
Turing machines don't tell you what a computer is. They may be special because of their historical role, and their simplicity, but they're just a model of a computer, and even there they're not intrinsically any more special than any other computational system. No particular computational model/system tells you what a computer is. We don't really understand exactly what a 'computer' is. See, for example, http://www.calculemus.org/lect/L-I-MNS/12/ekon-i-modele/smith-foundtns.pdf
That is not the Church-Turing thesis!?