## Tuesday, November 20, 2012

### Recursion Theorem follows from Church-Turing

During a homework assignment in a graduate complexity course I took at Cornell back in 1985, I used the following reasoning: since a computer's code sits in RAM where a program can read it, by the Church-Turing thesis we can assume a program has access to its own code.

The TA marked me wrong on the problem because I had assumed the recursion theorem, which we hadn't yet covered. But I wasn't assuming the recursion theorem; I was assuming the Church-Turing thesis and concluding the recursion theorem.

I did deserve to lose points: the Church-Turing thesis is not a mathematical theorem, or even a mathematical statement, and not something to use in a mathematical proof. Nevertheless, I still believe that if you accept the Church-Turing thesis then you have to accept the recursion theorem.

Now the recursion theorem does not have a trivial proof. So the Church-Turing thesis has real meat on it, in ways that Turing himself didn't anticipate. Since the recursion theorem does have a proof, this only adds to my faith in, and my sense of the importance of, the Church-Turing thesis.

1. The really interesting question is: Who is this infamous TA? ;-)

1. TA = Teaching Assistant.

2. I just got back from giving a lecture covering the Y-combinator and saw this. While the Y-combinator is definitely also mysterious, it is perhaps easier to check that it spits out fixed points for any lambda expression F, and maybe the idea of applying F to itself infinitely many times to converge to a fixed point can serve as some vague intuition for coming up with such a combinator.

3. Sure, a PC has something called "RAM" which it can look at (disregard concerns about the OS enforcing that only certain memory is readable etc.)

But a Turing machine does not!

A Turing machine has a tape, yes, but the particular instructions of a particular Turing machine program are not written anywhere on that tape.

1. As it turns out, a Turing machine can, in fact, obtain its own description, then use the description for some task, such as printing the description or recursively calling itself in simulation. The recursion theorem guarantees this. One need only add the proper steps into the machine.
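As a concrete illustration of "obtaining its own description" (in Python rather than on a Turing machine; purely illustrative), the familiar quine trick shows how a program can reproduce its own text without reading it from a file or from memory:

```python
# A minimal Python quine: the program prints its own source, using the
# same self-application trick that underlies proofs of the recursion
# theorem. No file-reading or memory inspection is involved.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running this prints exactly the two lines of the program itself: `%r` inserts a quoted copy of `s` into `s`, and `%%` becomes a literal `%`.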

4. I don't think this requires the Church-Turing thesis, merely the existence of universal Turing machines.

Suppose you have a universal Turing machine. It accepts the program M as input, and simulates its execution --- but the original program is still sitting in the input buffer. The UTM simulation will not allow the simulated program M to access this data. At a certain point, you could imagine that the UTM has a special instruction which "breaks out" of the simulation and grants access to the input.

This is essentially your argument, it has nothing to do with physical machines, only with the interaction between the Universal Turing Machine and the program it is simulating.
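The "break out of the simulation" idea can be sketched with a toy interpreter, where Python's `exec` stands in for the UTM and the names (`MY_CODE`, `ARG`, `RESULT`) are purely illustrative conventions:

```python
def run(program_src, arg):
    """Toy 'UTM': executes program_src and deliberately exposes that very
    source to the simulated program as MY_CODE -- the hypothetical
    instruction that grants access to the input buffer."""
    env = {"MY_CODE": program_src, "ARG": arg, "RESULT": None}
    exec(program_src, env)
    return env["RESULT"]

# The simulated program can now inspect its own text as well as its input.
prog = "RESULT = (len(MY_CODE), ARG * 2)"
print(run(prog, 21))
```

The point is only that self-access is a feature the simulator can choose to provide; it requires nothing about physical machines.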

1. I was just about to post a comment saying "I don't see how this intuition comes from the Church-Turing thesis; it seems to come instead from the idea that Turing machines can simulate themselves" -- then I saw the first sentence of your comment :)

5. @Anonymous: It can. Go read the Wikipedia article on 'Universal Turing machine'.

6. @NP Slagle, @Anonymous 3:52AM: can you be more specific? I don't see anything in that Wikipedia article that comes out and says that a TM can obtain its own description.

Of course, a TM can be *devised* that writes out its own description onto the tape, but there's nothing *obliging* one to do so.

Are you suggesting the existence of a "subroutine" of sorts that could be added to an arbitrary TM, that would cause it to write a description of the (entire) TM to the tape? If so, I have a very hard time imagining this subroutine.

Anything more involved than that (e.g. adding "debugging prints" to the state transition rules, then deducing a description of the machine from the execution trace) would seem to require some knowledge of the structure of the machine in the first place, which in some sense defeats the point.

Of course, I may be completely misunderstanding what you are suggesting.

7. So, what about Harvard architecture computers? Or a program that does not live in a modifiable piece of memory? Many, many computers do not conform to the classic von Neumann architecture in all aspects. Apart from the problem you noted, there is also the omission of stating which particular machine model you assume.

8. Indeed, Turing machines can be thought of as a "Harvard architecture" of sorts, with a "program" hard-coded in inaccessible "ROM".

@NP Slagle, the more I think about it, the more I think what you say is incorrect, or at least quite misleading. Adding "the proper steps to the machine" requires that you already *know* the makeup of the machine -- ask anyone who's tried writing a quine.

Example: Say I have a 6502-based computer in front of me. In 6502, I can write a 6502 emulator. In 6502, I can also devise a program which prints itself out without examining itself in memory. But in no way can I write a 6502 program which, when run on my computer, tells me how the processor is constructed from logic gates, *and* when run on my 6502 emulator, tells me what instructions I used in my 6502 emulator.

9. In course notes I wrote for our grad intro theory course, I present the Recursion Theorem as a consequence of having eval (same as universal-sim in comments above) and substitution. A student asked why we can't simply have a program read its own code stored in a file, which let me emphasize the point that RT follows in any programming language that has these two properties (which define an "acceptable programming system", APS), even if it has no notion of file-reading or even direct support for strings.

An example of a non-APS system is a "Friedberg numbering". Here every partial recursive function has a unique code, so no RT. This does not gainsay Bill's answer, since the C-T Thesis entails that APS exist. My notes go further by presenting the RT construction in programming-y terms.
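The two properties mentioned in comment 9, eval and substitution, can be sketched concretely. In this sketch, Python source strings stand in for program codes; the function names and the convention of reading `x` and setting `RESULT` are illustrative, not part of any formal definition:

```python
def eval_prog(src, x):
    """'eval': a universal interpreter for programs given as source
    strings that read the variable x and set RESULT."""
    env = {"x": x}
    exec(src, env)
    return env["RESULT"]

def smn(F_src, y):
    """'substitution' (s-m-n): from the source of a two-input program
    (reading x and y) and a fixed value y, return the source of a
    one-input program computing x -> F(x, y)."""
    return f"y = {y!r}\n" + F_src

F_src = "RESULT = x + y"      # a hypothetical two-input program
G_src = smn(F_src, 7)         # one-input program computing x + 7
print(eval_prog(G_src, 3))
```

Note that `smn` is a purely syntactic operation on program text, which is why it works even in languages with no file-reading.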

10. @KWRegan, thank you for the reference to Friedberg numbering. I was not aware of it.

However, I think the word "entails" is overly strong. I would say that the Church-Turing thesis presupposes universal computing machines exist (which, by their nature, support APS's), in much the same way "all crows are black" presupposes black crows exist. The C-T thesis was formulated only after several such systems were observed to be able to simulate themselves and each other.

If, tomorrow, someone exhibited a "white crow" which falsified the C-T thesis, would that say anything at all about the recursion theorem? I don't think so, because the theorem follows from what we already know to exist, and the C-T thesis is essentially a statement about what does *not* exist, i.e., effectively calculable functions that can't be expressed as Turing machines (or any of their many equivalents.)

11. You are clearly outside the average-to-genius range (IQ 150+), so you have to get used to most people failing to understand things that seem blindingly obvious to you. You also need to get over what was probably your one low grade in how many assignments? We all had one, you know; some of us had more. I even failed an exam in a course where I averaged 78%, but I passed the retake.

12. The existence of acceptable numberings is the Church-Turing Thesis. Most computability theorists now call the Church-Turing Thesis the "s-m-n Theorem"; however, changing the name of this statement does not make it any more true. Nevertheless, the s-m-n Theorem remains the most widely used result in computability theory, and results like the Recursion Theorem rely on it. Go get your points back, Lance.

NB: Computability theory is still interesting without the s-m-n Theorem. See for example this paper.

13. Hi Lance,

Maybe I am missing something, but I think there is a problem in the argument.

As I understand it, the CTT tells us only that if a function $f$ over the natural numbers is computable then there is a TM computing it. It doesn't say that we can transfer an arbitrary mathematical statement in some machine model to the TM model. So there is a need for an argument.

RT says that if we have a machine $F(x,y)$ then there is a machine with code $\langle G \rangle$ s.t. $G(x) = F(x, \langle G \rangle)$. We need to assume that we can simulate these machines with TMs and vice versa (a similar argument would not hold in a model that has RT but is not Turing-complete). Moreover, it seems to me that in the end we get a TM which has access to the code of a *similar* TM (one computing the same function) and not to its own code.
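For what it's worth, the fixed-point construction behind $G(x) = F(x, \langle G \rangle)$ can be sketched in Python, with source strings standing in for machine codes. Everything here (the `code`/`x`/`RESULT` conventions, the sample `F_body`) is illustrative, not the formal proof:

```python
# Sketch of the recursion theorem's fixed-point construction.
# F is given as a program body reading `x` and `code`; we build G whose
# own source is available to it as `code`, so that G(x) = F(x, <G>).
F_body = "RESULT = len(code) + x"       # a hypothetical F(x, code)

# Quine-style self-reference: executing G_src makes `code` equal G_src.
t = "t = %r\ncode = t %% t\n" + F_body
G_src = t % t                           # the code <G>

def G(x):
    env = {"x": x}
    exec(G_src, env)                    # inside, `code` == G_src itself
    return env["RESULT"]

print(G(5))  # equals F(5, <G>) = len(G_src) + 5
```

Note that `code` here really is the code of the very program being run, not merely a similar one, which is what distinguishes this from the weaker "access to an equivalent machine" reading.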

14. I think Lance's argument is slippery and the TA was correct. The CTT says that any intuitively computable function is partial recursive. Now, how do you intuitively define and compute a function that accesses its own code? Where does this code come from? Out of the blue? You also have to argue that it is finite. There is no escape here from the very chicken-and-egg problem that only one of the standard non-trivial proofs of the recursion theorem resolves.

1. The difficulties can be fixed with a bit more formalization, e.g., a random access machine where the code of the program is stored at address 0. This machine model still captures the partial recursive functions. (This does not appeal to the CTT, but can be proved.) Then a program can read out its code with just one command. It does not refer to itself; it speaks only about the access of addresses. So there is no paradox involved here.
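A toy version of such a machine might look as follows (Python; the instruction set and names are made up for illustration):

```python
def run_ram(program, x):
    """Minimal sketch of a machine whose own code sits at ordinary
    readable addresses 0..n-1. Reading the code needs no self-reference,
    only memory access. 'SELF' here simply adds the program length to
    the accumulator; 'HALT' stops and returns it."""
    mem = list(program)   # the running program, stored starting at address 0
    acc, pc = x, 0
    while mem[pc] != "HALT":
        if mem[pc] == "SELF":
            acc += len(mem)   # "read out the code": inspect own instructions
        pc += 1
    return acc

prog = ["SELF", "HALT"]
print(run_ram(prog, 10))  # input 10 plus program length 2 -> 12
```

The point of the model is exactly as the comment says: the instruction talks only about addresses, so no circularity is involved.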