This month we honor the papers that gave us the first NP-complete problems and marked the official beginning of the P versus NP question. The P and NP notation did not originate with these papers, but I will use it anyway for clarity.

Steve Cook, *The Complexity of Theorem-Proving Procedures*, STOC 1971.

> Suppose a nondeterministic Turing machine M accepts a set S of strings within time Q(n), where Q(n) is a polynomial. Given an input w for M, we will construct a propositional formula A(w) in conjunctive normal form such that A(w) is satisfiable iff M accepts.

In this paper Cook gives the first formal treatment of the P versus NP problem. He gives several examples of NP problems and notes that his notion of NP is equivalent to a class of extended positive rudimentary relations due to James Bennett. He introduces P-reducibility (now often called Cook reductions), where one reduces a problem A to a problem B by solving A with a polynomial-time machine that has access to an oracle for B. His main theorems show that Tautology and Subgraph Isomorphism are hard for NP under P-reducibility. The quote above comes from the beginning of the proof for Tautology.

> The theorems suggest that it is fruitless to search for a polynomial decision procedure for the subgraph problem, since success would bring polynomial decision procedures to many other apparently intractable problems…The theorems suggest that Tautology is a good candidate for a set not in P, and I feel it is worth spending considerable effort trying to prove this conjecture. Such a proof would be a major breakthrough in complexity theory.

Indeed.

Leonid Levin, *Universal'nyie Perebornyie Zadachi (Universal Search Problems)*, Problemy Peredachi Informatsii 1973. Translation in appendix of the Trakhtenbrot survey.

> If we assume that there exists some (even if artificially formulated) problem of the search type that is unsolvable by simple (in terms of volume of computation) algorithms, then it can be shown that many "classical" search problems have the same property.

Levin claims that any NP problem reduces to any of a list of problems including tautology, subgraph isomorphism, tiling, set cover and a few others. As was the Russian tradition of the times, Levin's paper does not have fully formal definitions or any proofs. Still he has similar results and the same insight as Cook: that one can reduce any NP problem to specific natural problems.

> All of these problems are solved by trivial algorithms entailing the sequential scanning of all possibilities. The operating time of the algorithms, however, is exponential, and mathematicians nurture the conviction that it is impossible to find simpler algorithms…but not one has yet succeeded in proving it.

In today's world, results proven today get transmitted around the world in minutes. But given the technological, and even more important the political, situation of the 70's, Cook and Levin did not learn of each other's work until years later. Today we give them both credit for taking the P versus NP problem out of the abstract and connecting it to solving natural problems. Now, only three decades later, the P versus NP problem has become one of the great open questions in all of mathematics.
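The "sequential scanning of all possibilities" that Levin describes is easy to make concrete for satisfiability: try all 2^n truth assignments. A minimal sketch, where the DIMACS-style literal encoding is my own illustrative choice, not something from either paper:

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Decide CNF satisfiability by scanning all 2^n assignments.

    Each clause is a list of nonzero ints: literal k means variable k
    is true, -k means it is false (an illustrative encoding).
    """
    for assignment in product([False, True], repeat=n_vars):
        # A clause is satisfied when at least one of its literals holds.
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True   # found a satisfying assignment
    return False          # exhausted all 2^n possibilities

# (x1 or x2) and (not x1 or x2) and (x1 or not x2): satisfied by x1 = x2 = True
print(brute_force_sat([[1, 2], [-1, 2], [1, -2]], 2))  # True
# x1 and (not x1): unsatisfiable
print(brute_force_sat([[1], [-1]], 1))                 # False
```

The loop runs up to 2^n times, which is exactly the exponential behavior the quote laments; no algorithm avoiding that blow-up for SAT is known.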

From your message it seems Cook published his paper in 1971 and Levin in 1973. Why then is Levin given equal credit if he was two years late? Usually academic credit is given on a first-past-the-post basis.

> Why is then Levin given equal credit if he was two years late?

Your point was taken into consideration by the ACM: Cook got the Turing award, while, unfortunately, Levin did not.

Levin's discovery was independent of Cook's discovery. I think it is only fair that both of them should be credited equally for their discovery.

Wasn't it just a few weeks ago that this blog talked about a student who independently discovered a result a mere few weeks after some researchers, and it was questioned whether he deserved any credit at all?

The student independently discovered a *weaker* result; otherwise of course he deserves the same amount of credit.

The student in question (Vladimir Trifonov) and the researcher in question (Omer Reingold) have both been awarded best paper for their results at STOC 2005.

I think that Lance's original post suggesting that there should be any question about acceptance of Trifonov's work was ridiculous. These were essentially simultaneous results. Yes, one was stronger but both were major results and submitted to the same conference. It just happened that word got out a bit faster for the stronger result.

The issue with NP-completeness is a relativistic notion of simultaneity.

It is clear that, in the West, Cook's paper and Karp's follow-on paper had already been widely disseminated before Levin's work was even published (and Levin's work was not known in the West until several years later).

However, there is a well-known story that from the point of view of an observer in Czechoslovakia (actually two observers meeting after simultaneous eastward and westward trips) the two events appeared to be simultaneous.

It is interesting to note that parts of the NP-completeness concept were known before the two papers.

In a letter to von Neumann, Gödel posed the question of the complexity of proofs of tautologies of sentential calculus (does a Boolean formula always evaluate to "TRUE"?) as a very important problem of mathematics. This is, of course, the question "NP = coNP?", closely related to the P vs NP problem.
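Gödel's question concerns tautologies: formulas that evaluate to "TRUE" under every assignment. The obvious procedure again scans all 2^n assignments; a minimal sketch, where representing formulas as Python callables is purely an illustrative assumption:

```python
from itertools import product

def is_tautology(formula, n_vars):
    """Check whether formula evaluates to True under every one of
    the 2^n assignments (the trivial exponential-time procedure)."""
    return all(bool(formula(*values))
               for values in product([False, True], repeat=n_vars))

# The law of excluded middle, x or not x, is a tautology:
print(is_tautology(lambda x: x or not x, 1))   # True
# x or y is not: it fails when both variables are False.
print(is_tautology(lambda x, y: x or y, 2))    # False
```

Whether every tautology also has a *short proof* is the NP = coNP question; this checker only demonstrates the trivial exponential upper bound.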

Unrelated to the above, researchers in Operations Research had tried to understand Integer Programming: some subcases reduce to Linear Programming (which we now know is in P, and which even then researchers knew was "easy" in practice because of the usually efficient simplex method). In contrast, some subcases seemed to be as hard as the general problem. These hardness results were effectively reductions between NP-complete problems.

Finally, the idea of encoding computations into logical formulae was well known in Computability Theory.

Of course none of this diminishes the beauty, importance, or impact of the papers by Cook and Levin. It just might help give some of the scientific sources that (at least in retrospect) contributed to this great result.

One reason why credit hasn't been divided equally between Reingold and Trifonov is that Reingold's proof is so much more elegant and insightful. Such a criterion may seem rather arbitrary, but I'm sure most theorists are in agreement on this point; it's not as subjective as one may think. In any case time's the final judge...

By awarding equal credit to results proved "independently" (i.e., without knowledge of an existing result), aren't we providing a kind of incentive for researchers to _not_ fully investigate previous work? (Yes, you could make the case that in the Cook/Levin case knowledge of the previous result would have been very difficult to obtain, but now...)

> One reason why credit hasn't been divided equally between Reingold and
> Trifonov is that Reingold's proof is so much more elegant and insightful.
> Such a criterion may seem rather arbitrary, but I'm sure most theorists
> are in agreement on this point, it's not as subjective as one may think.
> In any case time's the final judge...

I don't believe that. Trifonov's result is weaker, and therefore Reingold's result will receive the main credit; otherwise Trifonov's result would deserve the same amount of credit.

Still, Trifonov was awarded best student paper at STOC '05 (Reingold's result was awarded best paper of the conference).

Congrats to both Trifonov and Reingold for their best paper awards at STOC!!! Well deserved winners.

Man, why are Computer Scientists so result-centric? Our goal as a field is not to get tight bounds for everything we can get our hands on; it is to understand fully the nature of computation, and the related mathematics.

Tight bounds are an indication of progress, but not ends unto themselves. Thus if Trifonov's result had been as insightful as Reingold's, the extra O(log log n) factor would be merely an annoyance, not evidence that Reingold should receive most of the credit.

The fact is that Reingold's proof is revolutionary--unlike the previous works, which progressively achieved exponents 3/2, 4/3, ..., the essence of Omer's idea can be explained in a sentence or two, and for people who have given thought to the problem previously, that sentence is enough to reproduce the proof from scratch.

See also Dinur's recent (and, again, revolutionary) proof of the PCP theorem, which is "inspired," in part, by Omer's technique.

In an interview, Levin commented that he was advised by Kolmogorov to rush his results on NP-completeness to publication. Apparently Andrei Nikolaevich had heard rumors of the publication of similar results in the West by Cook.

All this fixation with credit is quite meaningless. We are all progressing slowly, building on each other's work, and it is impossible to tell the contribution of a particular work or line of research before enough time has passed. Actually, as is evidenced by the comments above on the work of Cook and Levin, even with time it is not possible to completely isolate the contribution of one paper.

Credit is a necessary mechanism, but it is not the most important thing. In fact, if a contribution is truly revolutionary then it is quite likely not to be appreciated at first - some of the field's best papers, including the GMR paper that invented interactive proofs and zero knowledge, were initially rejected from our conferences.

Thus, what is important is not whether Reingold's work is better than Trifonov's or vice versa (which, despite the awards decision, we don't really know at the moment) but that we know much more about probabilistic low-space computation today than we did a year ago.

--Boaz

We have the GMR paper as an example of something that was so far ahead of its time that it was initially rejected. Are there other examples that people have in mind? I'd find it interesting, at least, to see what other fixtures of today's landscape had trouble becoming accepted.

-David Molnar

I heard that GMR was initially rejected because it wasn't clearly written. The ideas in the paper weren't rejected--they weren't initially understood, at least partially because of the exposition.

Rejecting a badly written paper with a great idea is not an instance of people not accepting something because it is ahead of its time.

> I'd find it interesting, at least, to see what other fixtures of today's landscape had trouble becoming accepted.

Daniel Simon's paper "On the power of quantum computation" was initially rejected from STOC. But its manuscript inspired Peter Shor to generalize its ideas so as to solve the discrete log problem in quantum polynomial time (and then factoring as well). In the subsequent FOCS, both of their papers were accepted. I think that Simon's result is a fixture in the quantum computing landscape.