## Wednesday, September 21, 2011

### Where do theorems go to die?

(Joint post by Bill Gasarch and Daniel Apon)

Recently I (Daniel) reviewed Dexter Kozen's Theory of Computation (it was AWESOME). You can find the review here. Many of the topics were standard, but some we (Bill and Daniel) suspect are not covered in any complexity class (well... maybe in Dexter's). This DOES NOT detract from the book; however, we now raise the question:
Where do Theorems go to die?
In some eras there are theorems that are taught in EVERY basic grad complexity course. Then, all of a sudden, they are NOT taught at all. (Both the EVERY and the NOT are exaggerations.)
1. The Blum Speedup theorem: In the 1970's EVERY comp sci grad student knew it. In the 1980's EVERY theorist knew it. In the 1990's EVERY complexity theorist knew it. In the 2000's (that sounds like it means 2000-3000) EVERY... Actually NOBODY knew it, except maybe students who took Dexter's class. I doubt Blum teaches it anymore. Blum's speedup theorem was proven before Cook's theorem when people were still trying to get complexity right. Blum has since said Cook got it right! Even so, Blum's Speedup theorem should be in the preface to God's book of algorithms (See here) as a warning that there might not be an optimal algorithm.
2. The Complexity of Decidable Theories (e.g., Presburger, WS1S, S1S, S2S). We all know that, by Gödel's theorem, the theory of (N,+,×) is undecidable. There are some subtheories that are decidable. However, the decision procedures are complete in various rather high-up classes (WS1S is provably not primitive recursive). This material I (Bill) learned from Michael Rabin (who himself proved S2S decidable) in 1981, but my impression is that it is not taught anymore. Was it ever widely taught? We think that students should know the STATEMENTS of these results, because these are NATURAL problems (Bill did the capitalization) that are complete in classes strictly larger than P. (See the second conversation here for comment on that.) One of us thinks they should also know the proofs.
3. omega-Automata. These are used to prove S1S decidable, and we've heard they are used in proving fair termination of concurrent programs. Sounds like they should be better known. But it is rare to find them in a complexity course. They may well be taught in other classes. (I (Bill) touch on them in undergrad automata theory.)
4. Jon Katz is pondering not teaching Ladner's theorem!!! The students should surely know the result (YES, and stop calling me Shirley); however, should they know the proof?
5. Sparse Sets: Bill blogged about no longer doing Mahaney's Theorem. Karp-Lipton is still used A LOT, but Mahaney's theorem is not really used at all. Daniel thinks Mahaney's Theorem is AWESOME; however, Daniel is bright-eyed and bushy-tailed and thinks ALL of this stuff is AWESOME.
6. Computability Theory: Friedberg-Muchnik, Arithmetic Hierarchy, Analytic Hierarchy, Recursion Theorem. This is like Blum Speedup: this material used to be better known to complexity theorists but is hardly taught to them anymore. It IS, of course, taught in courses on computability theory. But such courses are taken by complexity theorists far less often than they were. When I (Bill) tell someone that there are c.e. sets that are neither decidable nor complete, they say Oh, just like Ladner's theorem. It's the other way around; Ladner's result came far later.
7. Part of this is that complexity theory has changed from Logic-Based to Combinatorics-Based (see our post on the CCC Call for Papers). We don't teach Blum Speedup (and other things) anymore because we have better things to teach. However, are there results that students should know even if the fields they come from are dead? YES, and Ladner's theorem is one of them.
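For readers who have never met item 3: a Büchi automaton runs over *infinite* words and accepts iff some accepting state is visited infinitely often, so nonemptiness reduces to finding a "lasso" - a reachable accepting state that lies on a cycle. A minimal sketch (the example automaton and all names here are our own, not from any library):

```python
# Buchi nonemptiness: the automaton accepts some infinite word iff an
# accepting state is reachable from the start AND lies on a cycle.

def reachable(delta, start):
    """All states reachable from start via the transition relation."""
    seen, stack = {start}, [start]
    while stack:
        q = stack.pop()
        for _, q2 in delta.get(q, []):
            if q2 not in seen:
                seen.add(q2)
                stack.append(q2)
    return seen

def buchi_nonempty(delta, start, accepting):
    """delta maps a state to a list of (symbol, next_state) pairs."""
    for f in accepting & reachable(delta, start):
        # f lies on a cycle iff f is reachable from one of its successors
        if any(f == s or f in reachable(delta, s)
               for _, s in delta.get(f, [])):
            return True
    return False

# Automaton over {a, b} for "infinitely many a's": state 1 (accepting)
# is entered on every 'a', state 0 on every 'b'.
delta = {0: [('a', 1), ('b', 0)], 1: [('a', 1), ('b', 0)]}
print(buchi_nonempty(delta, 0, {1}))  # True: it accepts, e.g., aaa...
```

The same lasso idea underlies the emptiness tests at the heart of the S1S decision procedure and of model checkers for linear-time properties.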

Who decides such things? Textbook writers, for one. A proof theorist who worked on lower bounds for Resolution told me he hopes that Arora-Barak's book will include this material. He even said: If it does not, the field will die. Is this true? If so, then Arora and Barak have more power than they realize.

1. Items 2 and 3 are the basis of a number of "formal methods" that are used by practitioners to find bugs in systems. Dexter has a history in both the complexity and formal methods areas, so it is no wonder that he chose to cover them; bringing these two streams together is one of the nice things about his book. (Sipser also covered 2 in his text.)

I think that you are exaggerating the extent to which these things used to be taught. I took courses on these things in the early 1980's from Cook and others, but most of the above topics were not taught. (In the computability course we did 6, and in the complexity theory course we were told about Ladner's theorem but didn't do anything with the proof - though it may have been mentioned that its proof was like 6.)

On the other hand, there can be a lot of value in knowing about "dead" theorems. Büchi (better known for his omega-automata work and hence virtually unknown in complexity circles) proved the following initially surprising property (which I liked to give as an extra-credit exercise when I used to teach automata): the set of strings that can ever appear on a PDA's stack is a regular language. This theorem was an essential part of the development that leveraged tools for verifying finite-state systems into tools that can verify stack machines.
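One can even watch Büchi's theorem in action on a toy example. The pushdown system below (our own invention; the rules rewrite the top stack symbol) is explored by breadth-first search, and every reachable stack content turns out to match the regular expression A*Z, as the theorem promises:

```python
import re
from collections import deque

# Toy pushdown system: rules rewrite the top stack symbol.  Buchi's
# theorem says the reachable stack contents form a regular language;
# for this system they all match A*Z.
rules = {
    ('p', 'Z'): [('p', 'AZ')],             # push an A above the bottom marker
    ('p', 'A'): [('p', 'AA'), ('p', '')],  # push another A, or pop one
}

def reachable_stacks(start=('p', 'Z'), limit=200):
    """BFS over configurations (state, stack); stack[0] is the top."""
    seen, queue = {start}, deque([start])
    while queue and len(seen) < limit:
        state, stack = queue.popleft()
        if stack:
            for state2, repl in rules.get((state, stack[0]), []):
                cfg = (state2, repl + stack[1:])
                if cfg not in seen:
                    seen.add(cfg)
                    queue.append(cfg)
    return {stack for _, stack in seen}

stacks = reachable_stacks()
print(all(re.fullmatch(r'A*Z', s) for s in stacks))  # True
```

The real proof is constructive - a saturation procedure builds a finite automaton recognizing the stack language - which is exactly what the pushdown verification tools alluded to above exploit.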

Maybe your topic should be about the relative impact of textbooks versus papers in shaping a field. My sense is that our field suffered from a lack of quality textbooks precisely because the papers were outstripping authors' ability to organize and synthesize. There was a long gap between Papadimitriou's text, which came out when the field was on the cusp of major advances, and Arora-Barak. (Sipser's text, though very readable, is much broader than a complexity text and only gave groundwork for a few aspects of complexity.)

2. theorems never die, they just fade away

3. For the record, I covered Ladner's theorem and Mahaney's theorem the first time I taught a complexity course. This time around, I'm stating Ladner's theorem (but skipping the proof) and omitting Mahaney's theorem entirely.

4. For what it's worth, I agree-- I think the *proof* of Ladner's theorem is about as close to pedagogically vacuous as a proof can get (at least, for the general audience of a complexity course).

As an example (sample size: 1), I recently spent a long hour attempting to explain the details of the proof to someone, only to have a final reaction whose gist was "Well... Ok. Whatever!"

I do (strongly) believe the theorem itself is important to teach, though, if for no other reason than to combat potential faulty intuition about the structure of complexity classes.

5. "Part of this is that Complexity theory has changed from Logic-Based to Combinatorics-Based"

Amazingly, this happened just as logic-based algorithms were being used more and more in industrial applications.

-- Moshe Vardi

6. I haven't taught the local complexity course in over a decade, but if I did, several theorems from my old notes would have to "die" to make room for the PCP theorem, APX-hardness, FPTASs, quantum, etc.

Given the fixed number of lectures, this seems like the normal thing to do.

7. The OLD course in complexity theory at Maryland, CMSC 650 (recursion theory, NPC, Mahaney's theorem), and the NEW course, CMSC 652 (GI NPC --> PH collapses, USAT \in P --> NP=R, PCP stuff), overlap in ONLY one place: Cook's theorem.

Even the new course, now that I look at it, is a bit outdated. Maybe a byte outdated.

8. Some results you quote have not died. They have found their true home where they have been nurtured, fed and have multiplied with joyful abandon.

Items 2 and 3 alone constitute core material behind graduate courses, research papers, and tools in various different areas. I was taught Presburger arithmetic in three courses at ETH Zurich, and then another three courses at Oxford. In contrast, I expect that students at these, and many other, universities see (insert your favourite complexity theorem) at most once.

- The decidability of Presburger arithmetic is taught in courses on automata theory, program verification, and automated deduction (yes, those are very different areas). The quest for efficient implementations and efficiently decidable sub-cases continues to this day.

- The automata-based decidability proofs of logical theories have descendants in recent papers and tools using automata to decide theories of linear-real arithmetic and strings.

- Rabin's theorem and various relatives are covered in courses and conferences on games, modal logics, and automata theory. The best paper award at STOC '11 went, I believe, to Oliver Friedmann's work on the Simplex algorithm. The bulk of his work is on fixed-point logic and parity games.

- The automata-based proof method is alive in Jerome Leroux's recent work on the reachability problem for Vector Addition Systems.

- Omega-automata *are* known. Well known. Extremely well known. Better known now than they have ever been. This material is bread and butter for not one, not two, but many, many disjoint research areas.
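To give a taste of the automata-based decision procedure mentioned above: an atomic Presburger formula like x + y = z becomes a finite automaton that reads the binary encodings of all variables in parallel, least-significant bit first, with states tracking the carry. A minimal sketch (our own toy code, not a full decision procedure - no quantifiers, no Boolean combinations):

```python
# Automata-based view of Presburger arithmetic: the solutions of
# x + y = z, read as bit-columns LSB-first, form a regular language.
# The DFA's state is the carry of the column-by-column addition.

def encode(*nums, width):
    """Bit-columns (one bit per number), least-significant bit first."""
    return [tuple((n >> i) & 1 for n in nums) for i in range(width)]

def accepts_sum(word):
    """DFA for x + y = z over bit-columns (bx, by, bz)."""
    carry = 0
    for bx, by, bz in word:
        s = bx + by + carry
        if s % 2 != bz:      # the column's output bit must match z's bit
            return False
        carry = s // 2       # propagate the carry to the next column
    return carry == 0        # accept iff no carry is left over

print(accepts_sum(encode(3, 5, 8, width=5)))  # True:  3 + 5 = 8
print(accepts_sum(encode(3, 5, 9, width=5)))  # False: 3 + 5 != 9
```

Closing such automata under product, complement, and projection handles the Boolean connectives and quantifiers, which is exactly the construction behind the decidability proofs (and the modern tools) discussed here.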

The heritage of the automata-logic proofs appears in programming language semantics, verification of functional programs, analysis of XML and XML schemas, malware detection in Javascript, automated high-level synthesis of hardware, program termination systems, mining of interfaces from Java APIs, and modern results in language theory - in papers that have appeared in the last 5 years. There are multiple tools whose core algorithms can be traced back to such proofs.

It is true that different communities form and study increasingly narrow problems as the field progresses. What is not true is that contemporary complexity course syllabi are representative of the fundamental theoretical problems in computer science. For example, I consider the language inclusion problem for boring, old-school finite automata and its omega-variants an extremely fundamental problem at the heart of much of computer science. How much airtime would this problem get in a course? Yet there is much current interest in these problems (implementation competitions, best paper awards in theory conferences, etc.).

9. I've seen it mentioned a few times on this blog that complexity theory is moving from logical to combinatorial techniques. Would you place descriptive complexity/finite model theory (Immerman, Grohe-type work) in the set of techniques being moved away from?