Thursday, November 10, 2005

Favorite Theorems: Defining the Future

October Edition

We end the list of favorite theorems from 1965-74 with two seminal papers by Cook and Reckhow.

Stephen Cook and Robert Reckhow, Time-bounded random access machines, STOC 1972, JCSS 1973.

Stephen Cook and Robert Reckhow, On the lengths of proofs in the propositional calculus, STOC 1974, JSL 1979.

The first paper developed the random-access machine (RAM) for measuring the time complexity of algorithms. "Random" here has nothing to do with probability; it just means we can access memory in an arbitrary order.
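
As a concrete illustration, here is a minimal Python sketch of a toy unit-cost RAM: memory is a table addressed by integers, and every instruction, including a read or write at an arbitrary address, is charged a single time step. The opcode names and the dictionary-as-memory encoding are illustrative choices, not Cook and Reckhow's actual instruction set.

    # Toy unit-cost RAM: any address can be touched in one step.
    def run_ram(program, memory):
        pc, steps = 0, 0
        while pc < len(program):
            op, *args = program[pc]
            if op == "CONST":        # memory[a] = constant c
                a, c = args
                memory[a] = c
            elif op == "ADD":        # memory[a] = memory[b] + memory[c]
                a, b, c = args
                memory[a] = memory[b] + memory[c]
            elif op == "LOAD":       # memory[a] = memory[memory[b]]  (indirect read)
                a, b = args
                memory[a] = memory[memory[b]]
            elif op == "JZERO":      # if memory[a] == 0, jump to instruction t
                a, t = args
                if memory[a] == 0:
                    pc, steps = t, steps + 1
                    continue
            pc, steps = pc + 1, steps + 1
        return memory, steps

    # Adding values stored at two far-apart addresses takes two steps,
    # something a sequential-tape machine cannot do without a long walk.
    mem = {0: 0, 5: 0, 42: 21, 1000000: 21}
    prog = [("ADD", 0, 1000000, 42),   # one step despite the huge address
            ("ADD", 5, 0, 0)]          # memory[5] = 42 + 42
    print(run_ram(prog, mem))          # memory[0] -> 42, memory[5] -> 84, in 2 steps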

Up to that time the multitape Turing machine was considered the model of choice for complexity, but one had to access its memory sequentially. The RAM model allowed quick access to all of memory and captured how real-world computers operate much more accurately than previous models did. Most algorithms papers give their time bounds in the RAM model, and the RAM is the gold standard for proving lower bounds.

The second paper had a more delayed effect on complexity. Cook and Reckhow formalized a proof system for tautologies as a polynomial-time computable function f whose range is exactly the set of tautologies. If f(p)=φ, then p is said to be a proof of φ. They showed that NP=co-NP iff there is some proof system in which every tautology φ has a proof whose length is polynomial in the length of φ.
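
In symbols (a compact restatement, writing TAUT for the set of propositional tautologies):

    \[
    f \text{ is a proof system} \iff f:\{0,1\}^* \to \{0,1\}^* \text{ is polynomial-time computable and } \operatorname{range}(f) = \mathrm{TAUT},
    \]
    \[
    f \text{ is polynomially bounded} \iff \exists \text{ a polynomial } q\ \forall \varphi \in \mathrm{TAUT}\ \exists p:\ f(p) = \varphi \text{ and } |p| \le q(|\varphi|),
    \]
    \[
    \mathrm{NP} = \text{co-NP} \iff \text{some polynomially bounded proof system exists}.
    \]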

This suggests an approach to separating NP from co-NP (which would imply P≠NP): show that no proof system has polynomial-size proofs of every tautology. In 1985, Haken showed that the pigeonhole principle, encoded as a tautology, does not have polynomial-size resolution proofs. This led to a very active research area, proof complexity, which proves lower bounds for various classes of tautologies in different proof systems, though we still remain quite far from NP≠co-NP.
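
For concreteness, one standard propositional encoding of the pigeonhole principle uses variables p_{i,j} for "pigeon i sits in hole j," with n+1 pigeons and n holes; Haken's theorem is that resolution proofs of this tautology require size exponential in n:

    \[
    \mathrm{PHP}^{n+1}_{n}:\qquad
    \bigwedge_{i=1}^{n+1} \bigvee_{j=1}^{n} p_{i,j}
    \;\longrightarrow\;
    \bigvee_{1 \le i < i' \le n+1}\ \bigvee_{j=1}^{n} \bigl(p_{i,j} \wedge p_{i',j}\bigr)
    \]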

Paul Beame and Toni Pitassi have a nice though slightly dated 1998 survey on propositional proof complexity.

7 comments:

  1. Why does it seem (or am I wrong) that the reception of results in proof complexity is lukewarm compared to lower bounds in, say, circuit complexity, communication complexity, etc.?

  2. The latter fields obtain decent bounds by restricting us to extra-simple toy models of computing. This seems intuitively acceptable, since I'm guessing many (most?) theorists have had the formative experience of being turned off somewhat by the operational complexity of real programming; further, they've been encouraged by results indicating that simple models like TMs are 'almost' as expressive as, say, RAMs--the spirit of computation flows undiminished here, and there's no upper bound on the conceptual complexity of algorithms.

    Now, most theorists would not, I hazard, admit to having been traumatized by the mere logical form of mathematical arguments, such that they would take comfort in the retreat to a toy model of *thinking* in which basic nonconstructive techniques like the Pigeonhole Principle become near-impossible to implement. Indeed, to prove the most difficult nonconstructive principles in math, PHP etc. had better be near-axiomatic, along with other concepts currently beyond our ken.

    In summary:
    i) the simplifications in proof systems seem to lose too much;
    ii) we're programming wusses but theorem-proving stallions by self-assessment, and putting on the analytical blinders of proof systems is hard to accept.

  3. Why do results in proof complexity receive a cooler reception than results in circuit or communication complexity?

    In a word, Frege systems. If you put the phrase "___ Frege system" in the title of your talk, you take a 30% hit in attendance right there.

    Seriously though, it probably comes down to applicability. People often get interested in new areas because the new area sheds light on some other questions they had been thinking about before. Circuit complexity assumptions are often useful in cryptography or the study of complexity classes. Communication complexity has been a key tool for transferring results from the computational realm to the combinatorial one, and thus for getting results in many areas (such as circuit complexity, proof complexity, time-space lower bounds, etc.).

    Proof complexity, to the best of my knowledge (and dammit, I should know), has not found many applications for its results outside of the immediate sphere of logic and satisfiability algorithms. If you don't shed light on other folks' problems, then those other folks will go to the other parallel session or the coffee shop.

    The notion that "limited logical systems seem intuitively unacceptable to the computer scientist" would be complete hogwash, but for the weasel-word "seem". The "toy systems" considered by contemporary propositional proof complexity can actually implement many interesting algorithms that people actually use for solving SAT or 0/1 LPs: for example, resolution for DPLL-based SAT solvers, and cutting planes or Lovász-Schrijver systems for 0/1 LPs. All the fancy clause-learning-based SAT solvers can be handled by constant-depth Frege systems.

    N

  4. Thank you, Lance.
    We were (and still are) waiting for your post on the latest proof of the PCP theorem by Dinur.

  5. The idea of a random-access machine is usually attributed to von Neumann. Cook and Reckhow certainly knew this.

  6. Didn't mean to disparage anyone's work; even resolution lower bounds are very interesting. My point ii) was speculative and slightly facetious, but I do think that, in light of most researchers' feeling that fantastically new concepts are needed to prove P != NP, it's psychologically difficult (not to say methodologically invalid) to accept restrictions on theorem-proving that render even old concepts inaccessible (in some cases, the first results of our field--Shannon's circuit lower bounds--require exponential proofs).

    This hard-to-swallow pill shouldn't be sugar-coated (though the power of the proof systems deserves to be noted, as in the cases N cites), in the same way that complexity research more generally shouldn't be (and needn't be) promoted by a false promise that P != NP is close at hand.

  7. While von Neumann deserves some credit for the idea of random-access memory (and computability using random-access memory), complexity is a separate matter, and it was not properly formalized until Cook and Reckhow's RAM paper. (There were even competing notions, such as storage modification machines, at the time.)

    (Von Neumann is probably better known for the notion of a stored-program computer, but from his letters to Turing it is clear that he drew this directly from Turing's universal machine.)
