A single paper is like a snapshot of the state of research at one point in time. Research itself is a dynamic, changing, live stream of results and ideas, and a single snapshot often cannot do justice to the richness of the topic. The RAM paper is an excellent summary of definitions and problems, and worth reading today, but, at the time, it seemed to me more a crystallization of concepts that were "in the air" and a clear, precise summary of important known questions than a trailblazing exposition of new ideas. The theory of RAMs is fascinating, and I'll try to summarize some of the relevant work that preceded Cook's.
The RAM formulation dates back to von Neumann (after all, the "von Neumann architecture" IS a RAM). von Neumann used the RAM formulation to derive instruction counts for some programs for his first computer. So "unit cost RAMs" were well known from the beginning of computers, and counting the number of operations was known to be important. Knuth was a very strong advocate of the idea of analyzing the running time of algorithms using instruction counts: the first edition of the first volume of The Art of Computer Programming is from 1968.
Theoreticians interested in computability theory have published extensively on RAMs: an example of an early paper is Shepherdson and Sturgis (JACM, 1963), which has a bibliography of earlier work. These papers came from two different motivations. One was to find further examples of formalisms equivalent to Turing machines, as a kind of experimental evidence for Church's Thesis (see the book by Brainerd and Landweber for an exposition of a few dozen formalisms—Markov Algorithms, Post Systems, λ-calculus, and so on). The other was to find "better", more realistic theoretical models for real computers.
For example, one of the novel features of the ENIAC was that the program actually resided in the computer's memory (as opposed to an external, fixed set of instructions, as in the earlier Harvard Mark machines). Much was made of this "stored program" feature, which allows a program to use itself as data and modify itself on the fly, something that was judged to be "good for AI." Of course, the existence of a two-state universal Turing machine is a clear illustration that at the fundamental level of computability there is no difference between "hardware" and "software". Still, there was a great deal of interest in modeling such "ease of programming" features at a theoretical level. For example, Juris Hartmanis has an elegant result showing that there is a function that can be computed faster on a RASP (random access stored program machine) than on a RAM (Mathematical Systems Theory, 1971).
So "RAM complexity" was alive and well. What made things confusing was that fixed length register RAMs are uninteresting, but if one allows unbounded length registers, it is unclear whether unit cost is a realistic measure, and, if not, what would be reasonable. A natural option is to charge for the length of the register that is effectively used, the log of the value stored. Of course, there is the problem that determining the complexity of an algorithm becomes even harder. Even peskier questions appear if one asks whether addition and multiplication should have the same cost, and if not, should one use the schoolbook (quadratic) cost, or perhaps the Sconhage-Strassen cost? Most researchers opted to use the unit cost, and avoid all these complications.
To make things worse, many algorithms in optimization are expressed naturally in terms of RAMs with real-number registers. Note that fundamental questions about this latter model are still very much open.
To summarize, measuring the number of RAM steps as a complexity measure was not a novel idea. What made the Cook paper relevant was exactly the proliferation of RAM measure results. In particular, the Stockmeyer-Pratt-Rabin vector machine paper (and the later Hartmanis-Simon multiplication RAMs), as well as the real-number RAMs used in the OR community, made it important to be precise about the exact meaning of the "number of RAM instructions" measure. The community was quite aware that the logarithmic-cost measure was polynomially equivalent to Turing machines, and these papers showed that unit cost likely wasn't. Cook and Reckhow wrote down precisely what was likely a kind of unwritten consensus among researchers in the area. This was necessary and useful, but it did not "set the stage to make RAMs the gold standard". The stage was already set, people were using RAMs to analyze algorithms, and Cook and Reckhow gave a precise and meticulous description of what this meant.
In short, if one wants a picture of the great things that got started in that period, the paper is an excellent choice, but, as usual, the actual history is complicated, dynamic, and, I hope, interesting.
RAM machines didn't get fun until 1970, when Yuri Matiyasevich showed how we could turn them into Diophantine equations that have integer solutions iff the program halts.
ReplyDelete"see the book by Brainerd and Landweber for an exposition of a few dozen formalisms�Markov Algorithms, Post Systems, ?-calculus, and so on"
A few dozen? Wow. Contemporary texts don't seem to bother with this at all. I guess with computing being so ubiquitous in today's world, people accept the Church-Turing thesis at face value and don't need to be convinced of it anymore.
The ENIAC was not a stored-program computer. That concept emerged in the EDVAC proposal.
What is this mystical 'AI' which is 'good' for RAM, lisp, OO, and just about everything else it seems? No one has ever been able to point to a single concrete example for me, making me conclude that the concept is hooey.
Regarding ENIAC, EDVAC, and von Neumann (and Eckert): Papadimitriou taught a course on the classics, and someone wrote a paper about von Neumann's first draft which people might find relevant to this discussion. I guess the first draft itself is also there if people want to read it.
This sucks! It doesn't tell you what things are stored in RAM!