The symposium, held March 20-22, 1972 at IBM Yorktown, marked a shift in theoretical computer science away from logic, automata and grammars and toward algorithms and complexity. It attracted 225 registrants, on par with our current STOC and FOCS conferences. From the preface:
The symposium dealt with complexity studies closely related to how computations were actually performed on computers. Although this area of study has not yet found an appropriate or generally accepted name, the area is recognized by a significant commonality in problems, approaches, and motivations. The area can be described and delineated by examples such as the following.
- Determining lower bounds on the number of operations or steps required for computational solutions of specific problems such as matrix and polynomial calculations, sorting and other combinatorial problems, iterative computations, solving equations, and computer resource allocation.
- Developing improved algorithms for the solution of such problems which provide good upper bounds on the number of required operations, along with experimental and theoretical evidence concerning the efficiency and numerical accuracy of those algorithms.
- Studying the effects on the efficiency of computation brought about by variations in sequencing and the introduction of parallelism.
- Studying the relative complexity of classes of problems with respect to lower bounds on computation time. In this effort, specific problems are classified as having equivalent difficulty of computation; for example, those problems which can be solved in a number of operations which is a polynomial function of the size of the input.
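To make the preface's first two examples concrete (this illustration is mine, not from the proceedings): evaluating a degree-n polynomial naively, recomputing each power of x, takes about n²/2 multiplications, while Horner's rule needs only n multiplications and n additions; showing that n multiplications are actually necessary is exactly the kind of lower-bound question being described. A minimal sketch, with hypothetical function names:

```python
# Illustrative sketch (mine, not from the proceedings): two ways to evaluate
# p(x) = c[0] + c[1]*x + ... + c[n]*x^n given coefficients c[0..n].

def eval_naive(coeffs, x):
    # Recomputes each power of x from scratch:
    # roughly n*(n+1)/2 multiplications for degree n.
    total = 0
    for i, c in enumerate(coeffs):
        term = c
        for _ in range(i):
            term *= x
        total += term
    return total

def eval_horner(coeffs, x):
    # Horner's rule: exactly n multiplications and n additions for degree n.
    result = 0
    for c in reversed(coeffs):
        result = result * x + c
    return result

if __name__ == "__main__":
    coeffs = [5, -2, 0, 3]  # 5 - 2x + 3x^3
    assert eval_naive(coeffs, 2) == eval_horner(coeffs, 2) == 25
```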
The symposium had a panel discussion on the state of the field, with a panel that included four future Turing Award winners: Robert Floyd, John Hopcroft, Richard Karp and Michael Rabin. The proceedings have a transcription of the panel session. Here are some edited quotes.

Ray Miller (Moderator): There are four general questions.
- How is the theory developing from originally being a scattering of a few results on lower bounds and some algorithms into a more unified theory?
- What specific examples have been found to demonstrate how real computer computations were improved from studies of this type?
- Is the progress of numerical-type computations and understanding of them much ahead of the combinatorial?
- Are there important open questions?
Floyd (responding to the first question): Slowly.

Karp: We need to find a name for our subject. "Computational Complexity" is too broad in view of the work of Blum and others, at least until we can gather them into our fold; "Concrete Computational Complexity" is too much like civil engineering; "Complexity of Computer Computations" doesn't ring true; Markov has already taken "theory of algorithms" away from us… Getting these materials into university curricula, particularly the undergraduate curriculum, is important and it's going to happen quickly.
There are lots of things that computer people do that aren't mathematical in the nineteenth century sense. We manipulate strings, we prove theorems, we retrieve information, we manipulate algebraic formulas. Looking at the primitives appropriate to these domains, we can get a much richer class of problems.
Charles Fiduccia: Working against unification is the failure to specify a model of computation. There seems to be too much emphasis on what is being computed and little emphasis on the model.
Hopcroft (responding to Floyd): You could change "slowly" to "it's not". Karp has shown many of these problems complete, the implication being that therefore these problems are hard. That might not be the case; they could even have algorithms that run in linear time. On the other hand, the fact that we know so little about lower bounds provides a lot of interest in the area.
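An aside from me, not from the transcript, spelling out the logic behind Hopcroft's caveat: completeness gives only conditional hardness, not an unconditional lower bound.

```latex
% Karp's notion of completeness and what it does (and does not) imply:
\[
  L \text{ is NP-complete} \iff L \in \mathrm{NP} \ \wedge\ \forall L' \in \mathrm{NP}:\ L' \le_p L .
\]
\[
  \text{If some NP-complete } L \text{ is in } \mathrm{P}
  \text{ (even solvable in linear time), then } \mathrm{P} = \mathrm{NP}.
\]
```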
Albert Meyer: The obstacle is not that we don't have a good formulation of a machine model. We have many different models, but we can't prove lower bounds for any of them.
Mike Paterson: Suppose somebody were to prove P≠NP; then this would be of great personal and dramatic interest. But if it were to be established by a quite traditional, ordinary argument, one which it just happened that nobody had thought of, then we would see in retrospect that it was the wrong problem.
The numerical and combinatorial algorithmic people, with some small exceptions, never did integrate well. But Blum and the computational complexity people did come "into the fold", and algorithms and complexity would come to dominate theoretical computer science in the US. The naming issue never did have a satisfactory solution; the Europeans call it Theory A. And I would be very happy with a "traditional ordinary" proof of P≠NP, but I strongly doubt the solution is so simple.

As a side note, had I been able to find Karp's paper online I would never have read about this symposium that marked the new direction of theoretical computer science research.
Interesting: Even reading STOC/FOCS proceedings from the 80s was fun in that one could spot initial stabs at ideas that we now take for granted.
On the issue of serendipity, it is an oft-repeated claim that the internet has destroyed the kind of serendipity you refer to. For sure, you could not have found the transcript had you searched for the paper online, but internet searching tends to replace one kind of serendipity by another. I can't even remember now the number of times I was doing a web search and started a random walk off a link on a search page and found all kinds of interesting material.
...but you can find the paper online:
http://www.cs.berkeley.edu/~luca/cs172/karp.pdf