Wednesday, December 07, 2005

Surprising Results

Guest Post by Bill Gasarch with help from Harry Lewis and Richard Ladner.

What are the surprising results in theory? By surprising we DO NOT mean surprising that they were PROVEN (e.g., PRIMES in P) but surprising that they were TRUE. Some results are surprising since people thought the opposite was true, but some results are surprising because people didn't even think in those terms. The latter we call BIG surprises. I will note which ones are BIG. Here is a list of results that were surprising on some level.

While compiling this list and talking to people, one (obvious) thing really struck me: what people consider surprising is very much a function of WHEN they were in graduate school. For example, median finding in linear time was a big surprise at the time, but we now teach it to undergrads who are not impressed. And neither are we.

  1. Gödel's Incompleteness Theorem (1931). REALLY surprised Hilbert and others. BIG
  2. HALT is undecidable. (1931–though in a different form). BIG
  3. Multiplication in subquadratic time (Toom 1963, Cook 1966). Later improved to O(n log n log log n) (Schönhage-Strassen 1971). People did it in O(n^2) for so long! BIG. (See the sketch after this list.)
  4. Matching in polynomial time (Edmonds 1965, Canadian Journal of Mathematics). The paper also argued that polynomial time is the right notion of an efficient algorithm. BIG.
  5. Matrix multiplication in O(n^2.81) time (less than n^3!!) (Strassen 1969).
  6. The Speed-Up Theorem: there are decidable sets that cannot be assigned a complexity intelligently, since every algorithm for them can be sped up dramatically. (M. Blum STOC69, JACM71)
  7. Savitch's Theorem (JCSS 1970): NSPACE(T(n)) ⊆ DSPACE(T(n)^2).
  8. Cook's Theorem. (1971 STOC) EVERY NP-problem is reducible to SAT! (Cook also had CLIQUE, so I've left off Karp's 21 problems). BIG.
  9. Equivalence problem for Regular Expressions with Squaring is EXPSPACE-complete (Meyer, Stockmeyer FOCS72). A 'natural' problem proven NOT to be in P!
  10. Median in linear time (Blum, Floyd, Pratt, Rivest, Tarjan STOC72, JCSS73). (See the sketch after this list.)
  11. Super-Exponential Complexity of Presburger Arithmetic (Fischer-Rabin, 1974). A new way of looking at Hilbert's program and its difficulties.
  12. Amortized Union-Find in Θ(n α(n)), where α is the inverse Ackermann function (Tarjan 1975). Inverse Ackermann comes up naturally! First time? (See the sketch after this list.)
  13. P vs NP cannot be resolved by techniques that relativize (Baker, Gill, Solovay 1975). BIG
  14. Public Key Crypto (Diffie-Hellman 1976, IEEE Trans Inform Theory; Rivest-Shamir-Adleman 1978). Establishing a shared key without meeting. In retrospect it solved a 2000-year-old problem. BIG.
  15. Permanent is #P complete. (Valiant 1979) BIG–Introduced #P.
  16. Linear Programming in P. (Khachiyan 1979).
  17. Integer Programming with fixed number of variables in P (H.W. Lenstra 1983) Note that the range of the variables is unbounded.
  18. Shortest Path problem on planar graphs BETTER THAN O(n log n) (Frederickson, 1983, FOCS). You don't need to sort!
  19. Bit commitment ⇒ SAT in Zero Knowledge (Brassard, Crepeau FOCS86; Goldreich, Micali, Wigderson CRYPTO86, JACM91). If φ is in SAT, I can convince you that φ is in SAT without giving you a clue about how to satisfy it. Result did not relativize.
  20. NSPACE(n) = coNSPACE(n). (Szelepcsenyi 1987, BEATCS; Immerman 1988, IEEECCC, SICOMP)
  21. Fixed Parameter Tractability material: hard to pin down ONE result. I was surprised at Vertex Cover for fixed k in O(f(k) n^3) time via the Graph Minor Theorem. Other candidates are solid evidence that Dominating Set and Clique are NOT Fixed Parameter Tractable. More progress on Vertex Cover (down to something like n + (1.34)^(k+O(1))) and Subgraph Homeomorphism for fixed graphs in O(n^3) (Robertson and Seymour).
  22. Under hardness assumptions, P=BPP (Nisan, Wigderson FOCS 1988, JCSS 1994). People had thought P ≠ BPP. BIG.
  23. PH ⊆ P^#P (Toda FOCS89, SICOMP91). PH is the Polynomial Hierarchy. #P is REALLY powerful!
  24. IP=PSPACE (Lund-Fortnow-Karloff-Nisan, Shamir FOCS90 JACM 92). Bonus: Proof did not relativize. BIG.
  25. Connection between PCP and Approximation. (Feige,Goldwasser,Lovasz,Safra,Szegedy FOCS91, JACM96). BIG.
  26. Factoring is in QUANTUM P (Shor 1994 FOCS). A natural candidate for a problem in quantum P but not in P! BIG.
  27. Limits on what we can prove about circuits (Razborov and Rudich STOC94, JCSS97).
  28. A randomized approximation algorithm for MAX CUT achieving .87856... × OPT (Goemans, Williamson, 1995).
  29. Grover's quantum O(n^0.5) search algorithm (STOC 1996, Physical Review Letters 1997).
  30. HALT is truth-table reducible to RAND, where RAND is the set of Kolmogorov-random strings (Kummer STACS96). All the people working on this problem (granted, not many) thought it was false.
  31. Planar TSP has a PTAS. (Arora FOCS97, JACM98). Most people thought false.
  32. Assuming the Unique Games Conjecture, the Max Cut algorithm of GW is optimal unless P=NP (Mossel, O'Donnell, Oleskiewicz FOCS 2005).
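
Since a few of the items above are concrete algorithms, here are some minimal sketches in Python. Everything in them (function and class names, base cases, thresholds) is my own illustrative choice, not taken from the original papers or this post. First, for item 3: the three-multiplication divide-and-conquer trick of Karatsuba, which the Toom and Cook constructions generalize. Using three recursive products instead of four gives roughly an O(n^1.59) bound instead of O(n^2).

    # Minimal sketch: subquadratic integer multiplication via Karatsuba's trick.
    # Splitting by bit length and the small-input cutoff are illustrative choices.
    def karatsuba(x, y):
        """Multiply non-negative integers x and y with three recursive products."""
        if x < 10 or y < 10:                  # small inputs: fall back to built-in multiply
            return x * y
        half = max(x.bit_length(), y.bit_length()) // 2
        hi_x, lo_x = x >> half, x & ((1 << half) - 1)
        hi_y, lo_y = y >> half, y & ((1 << half) - 1)
        a = karatsuba(hi_x, hi_y)                           # product of high parts
        b = karatsuba(lo_x, lo_y)                           # product of low parts
        c = karatsuba(hi_x + lo_x, hi_y + lo_y) - a - b     # cross terms, one multiplication
        return (a << (2 * half)) + (c << half) + b

    assert karatsuba(123456789, 987654321) == 123456789 * 987654321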
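
For item 10 (and the median-finding remark in the introduction), a sketch of worst-case linear-time selection in the median-of-medians style of Blum, Floyd, Pratt, Rivest, and Tarjan. Group size 5 is the classical choice; the cutoff of 25 is just an illustrative base case.

    # Minimal sketch: worst-case linear-time selection (median of medians).
    def select(items, k):
        """Return the k-th smallest element (0-indexed) of the list items."""
        if len(items) <= 25:                               # small case: sort directly
            return sorted(items)[k]
        # Split into groups of 5 and take each group's median.
        groups = [items[i:i + 5] for i in range(0, len(items), 5)]
        medians = [sorted(g)[len(g) // 2] for g in groups]
        # Recursively find the median of the medians and use it as the pivot.
        pivot = select(medians, len(medians) // 2)
        # Partition around the pivot; recurse into only one side.
        lo = [x for x in items if x < pivot]
        hi = [x for x in items if x > pivot]
        num_equal = len(items) - len(lo) - len(hi)
        if k < len(lo):
            return select(lo, k)
        if k < len(lo) + num_equal:
            return pivot
        return select(hi, k - len(lo) - num_equal)

    assert select(list(range(100, 0, -1)), 49) == 50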
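
And for item 12, a sketch of union-find with union by rank and path compression, the combination for which Tarjan proved the amortized Θ(n α(n)) bound, α being the inverse Ackermann function. The class name and interface are illustrative.

    # Minimal sketch: disjoint sets (union-find) with union by rank + path compression.
    class UnionFind:
        def __init__(self, n):
            self.parent = list(range(n))     # each element starts as its own root
            self.rank = [0] * n              # rank: upper bound on tree height

        def find(self, x):
            # Path compression: point x and its ancestors directly at the root.
            if self.parent[x] != x:
                self.parent[x] = self.find(self.parent[x])
            return self.parent[x]

        def union(self, x, y):
            rx, ry = self.find(x), self.find(y)
            if rx == ry:
                return
            if self.rank[rx] < self.rank[ry]:    # union by rank: shorter tree goes under
                rx, ry = ry, rx
            self.parent[ry] = rx
            if self.rank[rx] == self.rank[ry]:
                self.rank[rx] += 1

    uf = UnionFind(5)
    uf.union(0, 1)
    uf.union(3, 4)
    assert uf.find(1) == uf.find(0) and uf.find(2) != uf.find(4)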

25 comments:

  1. Nice list. Comments might reveal the precision and recall...

    Lance, your reference for (32) is wrong/inaccurate. It's Khot, Kindler, Mossel, O'Donnell. And in (31) it should be TSP in the plane (or Euclidean but not planar).

    BTW, which direction of resolving the unique games conjecture will be considered surprising?

    Robi.

    ReplyDelete
  2. What about the more recent

    (33) 2-Player Nash is PPAD complete

    (see Lance's post from a couple of days ago)

    ReplyDelete
  3. Why is 23 not BIG?

    22: randomness from hardness
    30: hardness from randomness

    ReplyDelete
  4. I might add another recent surprise: Cryptography in NC^0 (Applebaum, Ishai, Kushilevitz).

    ReplyDelete
  5. In 32 it should be: Oleszkiewicz

    ReplyDelete
  6. LP is not strongly in P. Proving that it is strongly polynomial is still wide open.

    Planar TSP was independently done by Mitchell at the same time.

    I also think Cook's theorem was independently done by Levin.

    I doubt if most of those results would be remembered 20 years from now. The list is too long.

    ReplyDelete
  7. The list is neat, but, of course, its content could well vary from person to person. Instead of suggesting inserts (and deletes), here is a suggestion: why not make an on-line poll?

    ReplyDelete
  8. You list "Connection between PCP and Approximation", but do not list the PCP theorem itself.

    For me, the PCP theorem was the greatest (and probably the only) surprising result in complexity theory.

    ReplyDelete
  9. With all due respect, most of the list isn't (and shouldn't have been) surprising. Come on, how long had people been multiplying numbers more than 20 digits using computers, and how much effort had they invested in improving the quadratic-time algorithm? It was clever, decidedly cool, but people were SURPRISED by that? Shows to go how egotistic we are -- just because we couldn't conjure up ways to do something faster after a few minutes or few months of thinking (multiplication better than n^2, matrix multiplication faster than n^3, planar something without sorting, etc.), we are SURPRISED, and surprised BIG that it could be done?
    Give me a break.

    Siva

    ReplyDelete
  10. I don't know why you say (28) that GW was surprising. It was beautiful and a big result to prove. But there was a conjecture by Poljak already for several years that the 5-cycle was the worst case for the so-called eigenvalue bound, which is equivalent to the SDP relaxation. So while it was a huge result to actually prove a good bound, I doubt it was surprising since the conjecture that there *was* a good, efficiently computable bound was around and was not due to GW.

    ReplyDelete
  11. Siva's comment reflects the point that surprises are a function of time. For example, in retrospect it's hard to understand why Godel's results were surprising.

    By now we have gotten over the surprising fact that you can formally model computation, which I think is a BIG surprise.

    The issue is that people have developed some intuition, and it told them that matrix multiplication is a simple problem and should have a simple complexity (indeed, this intuition is probably right, and the right complexity is probably n^2). If you think of the matrix as a linear function acting on vectors, where a single application takes n^2 work, then it's really surprising that computing its output on n independent vectors can be done faster than n^3.

    Similarly, it seemed "obvious" that tossing coins helps speed up some algorithms. Now we may wonder why, and also how come it was not "obvious" that if there can be subexponential speedup then it's probably the case that BPP=P.

    One addition is list decoding: it seemed obvious (not to mention proven) that you cannot decode binary codes with more than 25% errors, before people understood that they were asking the wrong question.

    ReplyDelete
  12. Excellent post and discussion, everyone!

    ReplyDelete
  13. #15 is a fundamental result. It introduced a whole new way to look at things, but it wasn't surprising in the sense that people were not thinking that the permanent was in poly-time. In fact most people hadn't even heard of the permanent.

    ReplyDelete
  14. These are missing:

    Cache oblivious algorithms (Prokop et al. FOCS 1999)

    Triangulation in linear time (Chazelle FOCS 1990).

    Practical sorting in the unit cost RAM in o(n log n) time (Andersson FOCS 1996).

    Poly-time certificates for primality (Miller??).

    Existence of zero knowledge proofs.

    Burrows-Wheeler transform for compression.

    ReplyDelete
  15. Interesting as always, but very skewed. What about other areas of computer science, like logic, cryptography, computational geometry, parallel and distributed computing? SZK is closed under complement, for a start.

    The terminology is misleading. A result can't be called a surprise, let alone a "BIG surprise", if people weren't thinking along those terms. A conceptual advance, alright, but certainly not a surprise...

    Rahul.

    ReplyDelete
  16. In 22, under cryptographic assumptions, P=BPP is due to Yao I think.

    ReplyDelete
  17. "What are the surprising results in theory?"

    I guess that by "theory" Bill Gasarch meant "the theory of computing"; however, if one includes Gödel's incompleteness result, which speaks about decidability (in contrast to tractability), then it also seems appropriate to include a big surprise (if I'm not wrong) and certainly a very big result in itself: the independence of the Continuum Hypothesis from ZFC (P. Cohen 1963 [and Gödel 1938, who showed that CH is consistent with ZFC]).

    ReplyDelete
  18. I really enjoyed this post. I wonder if the authors can compile a similar list of beautiful results. Of course, beauty is in the eye of the beholder, but certain concepts are so simple and beautiful that we can explain them to a 10 year old. And beautiful results are almost always simple.

    The concept of undecidability is one such, in my opinion: you can explain it to anyone who knows how to write a program. Fermat's Last Theorem was another; in the words of Andrew Wiles himself, the beauty and simplicity of the theorem were what fascinated him and led him to work on it.

    ReplyDelete
  19. Two other surprises from the 1980's:

    a. Razborov's superpolynomial lower bounds for monotone circuits computing CLIQUE. At the time, using slice function arguments, it appeared that monotone and nonmonotone complexities were within an additive quadratic term.

    b. Barrington's proof that NC^1 is solvable using width-5 Branching Programs. At the time there were conjectures that MAJORITY was not efficiently solvable using only constant width.

    ReplyDelete
  20. About plane TSP: indeed, the shifting strategy used for plane TSP, which was essentially the main idea of this result, was invented much earlier by Baker, and it is very unfortunate that the paper on plane TSP does not refer to Baker's (FOCS'83) paper and essentially ignores her result. I consider Baker's result a much bigger one.

    ReplyDelete
  21. In response to the anonymous post on Razborov's CLIQUE lower bound: it is still true that for slice functions, monotone and non-monotone complexities differ by a small polynomial (even less than quadratic). However, no one knows how to prove quadratic or better lower bounds on the monotone complexity of an explicit slice function.

    ReplyDelete
  22. Why don't we build a result-tree or a result-dag, like the Complexity Zoo, and give weights to the nodes? After that, we can design our courses based on this graph by DFS or BFS. We can also build a bipartite graph between results and methods (ideas). We can do it in a wiki way.

    ReplyDelete
  23. Did Janos mean the Lovasz Local Lemma or Szemeredi's regularity lemma? I guess both have been extremely useful.

    ReplyDelete
  24. Janos makes many good suggestions, and indeed, the theoretical ability to sort in linear time caused quite a stir when introduced. I don't agree that we must concentrate on techniques. A single isolated result can be surprising (like the median was many moons ago), or an entirely new technique can catch us by surprise (the existence of zero knowledge proofs and PKC).

    ReplyDelete
  25. #17: fixed-dimension integer linear programming is in P. Is there an analogous result for L vs NL?

    ReplyDelete