*Guest Post by Bill Gasarch with help from Harry Lewis and Richard Ladner.*

What are the surprising results in theory? By surprising we DO NOT mean surprising that they were PROVEN (e.g., PRIMES in P) but surprising that they were TRUE. Some results are surprising because people thought the opposite was true, but some are surprising because people didn't even think in those terms. The latter we call BIG surprises. I will note which ones are BIG. Here is a list of results that were surprising on some level.

While compiling this list and talking to people one (obvious) thing really struck me: what people consider surprising is very much a function of WHEN they were in graduate school. For example, Median Finding in Linear time was a big surprise at the time, but we now teach it to undergrads who are not impressed. And neither are we.

- Gödel's Incompleteness Theorem (1931). REALLY surprised Hilbert and others. BIG
- HALT is undecidable. (1931–though in a different form). BIG
- Multiplication in subquadratic time (Toom 1963, Cook 1966). Later improved to n log n log log n (Schönhage-Strassen 1971). People did it in O(n^2) for so long! BIG
- Matching in Polynomial Time (Edmonds 1965, Canadian Journal of Mathematics). Poly time seen as important. BIG.
- Matrix Multiplication in O(n^{2.81}) time (less than n^3!!) (Strassen 1969).
- The Speed Up Theorem: there are decidable sets that cannot be assigned a complexity intelligently (M. Blum STOC69, JACM71).
- Savitch's Theorem (JCSS 1970): NSPACE(T(n)) ⊆ DSPACE(T^2(n)).
- Cook's Theorem (STOC 1971). EVERY NP-problem is reducible to SAT! (Cook also had CLIQUE, so I've left off Karp's 21 problems.) BIG.
- Equivalence problem for Regular Expressions with Squaring is EXPSPACE-complete (Meyer, Stockmeyer FOCS72). A "natural" problem proven to NOT be in P!
- Median in Linear time (Blum, Floyd, Pratt, Rivest, Tarjan STOC72, JCSS73).
- Super-Exponential Complexity of Presburger Arithmetic (Fischer-Rabin, 1974). A new way of looking at Hilbert's program and its difficulties.
- Amortized Union-Find in Θ(n·α(n)), where α is the inverse Ackermann function (Tarjan 1975). Inverse Ackermann comes up naturally! First time?
- P vs NP cannot be solved by techniques that relativize (Baker, Gill, Solovay 1975). BIG
- Public Key Crypto (Diffie-Hellman 1976, IEEE Trans. Inform. Theory; Rivest-Shamir-Adleman 1978). Establishing a shared key without meeting. In retrospect solved a 2000-year-old problem. BIG.
- Permanent is #P complete. (Valiant 1979) BIG–Introduced #P.
- Linear Programming in P. (Khachiyan 1979).
- Integer Programming with fixed number of variables in P (H.W. Lenstra 1983) Note that the range of the variables is unbounded.
- Shortest Path problem on planar graphs BETTER THAN O(n log n) (Frederickson, 1983, FOCS). You don't need to sort!
- Bit commitment⇒SAT in Zero Knowledge (Brassard,Crepeau FOCS86, Goldreich,Micali,Wigderson CRYPTO86, JACM91) if φ in SAT, can convince you that φ in SAT without giving you a clue on how to satisfy it. Result did not relativize.
- NSPACE(n) = coNSPACE(n) (Szelepcsényi 1987 BEATCS; Immerman 1988 IEEE Structures, SICOMP).
- Fixed Parameter Tractability material. Hard to pin down ONE result. I was surprised at Vertex Cover for fixed k in O(f(k)n^3) via the Graph Minor Theorem. Other candidates are solid evidence that Dominating Set and Clique are NOT Fixed Parameter Tractable, more progress on Vertex Cover (down to something like n + (1.34)^k + O(1)), and Subgraph Homeomorphism for fixed graphs in O(n^3) (Robertson and Seymour).
- Under assumptions, P=BPP (Nisan, Wigderson FOCS 1988, JCSS 1994). People had thought P ≠ BPP. BIG.
- PH ⊆ P^{#P} (Toda FOCS89, SICOMP91). PH is the Poly Hierarchy. #P REALLY powerful!
- IP=PSPACE (Lund-Fortnow-Karloff-Nisan, Shamir FOCS90, JACM92). Bonus: proof did not relativize. BIG.
- Connection between PCP and Approximation. (Feige,Goldwasser,Lovasz,Safra,Szegedy FOCS91, JACM96). BIG.
- Factoring is in QUANTUM P (Shor FOCS 1994). Natural candidate for QP − P! BIG.
- Limits on what we can prove about circuits (Razborov and Rudich STOC94, JCSS97).
- Randomized approx algorithms for MAX CUT achieving 0.87856... · OPT (Goemans, Williamson 1995).
- Grover's Quantum O(n^{0.5}) search algorithm (STOC 1996, Phys. Rev. Letters 1997).
- HALT is truth-table reducible to RAND, where RAND is the set of Kolmogorov random strings (Kummer STACS96). All the people working on this problem (granted, not many) thought it was false.
- Planar TSP has a PTAS. (Arora FOCS97, JACM98). Most people thought false.
- Assuming the Unique Games Conjecture, the Max Cut algorithm of GW is optimal unless P=NP (Mossel, O'Donnell, Oleskiewicz FOCS 2005).
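To see why subquadratic multiplication is even possible, here is a sketch of Karatsuba's trick, the simplest member of the Toom-Cook family: it replaces the four half-size products of the grade-school recursion with three, giving O(n^{log_2 3}) ≈ O(n^{1.585}) instead of O(n^2). This is an illustrative sketch, not the Toom-Cook or Schönhage-Strassen algorithms cited above.

```python
def karatsuba(x, y):
    """Multiply nonnegative integers with three recursive half-size
    products instead of four: O(n^1.585) bit operations."""
    if x < 16 or y < 16:                       # base case: small operands
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> m, x & ((1 << m) - 1)    # x = hi_x * 2^m + lo_x
    hi_y, lo_y = y >> m, y & ((1 << m) - 1)    # y = hi_y * 2^m + lo_y
    a = karatsuba(hi_x, hi_y)                  # product of high parts
    b = karatsuba(lo_x, lo_y)                  # product of low parts
    # (hi_x+lo_x)(hi_y+lo_y) - a - b = hi_x*lo_y + lo_x*hi_y (cross terms)
    c = karatsuba(hi_x + lo_x, hi_y + lo_y) - a - b
    return (a << (2 * m)) + (c << m) + b
```

The whole surprise is in the line computing `c`: one multiplication recovers both cross terms.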
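Similarly, the inverse-Ackermann bound for Union-Find comes from combining two heuristics, union by rank and path compression. A minimal sketch (my own illustrative code, not Tarjan's presentation; `find` uses path halving, a standard variant of compression):

```python
class UnionFind:
    """Disjoint sets with union by rank and path halving; m operations
    on n elements take O(m * alpha(n)) amortized time (Tarjan 1975),
    where alpha is the inverse Ackermann function."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path halving: make every other node on the path point
        # at its grandparent, flattening the tree as we search.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        # Union by rank: attach the shallower tree under the deeper one.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
```

Either heuristic alone gives O(log n) per operation; it is their combination that produces the strange inverse-Ackermann bound.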

Nice list. Comments might reveal the precision and recall...

Lance, your reference for (32) is wrong/inaccurate. It's Khot, Kindler, Mossel, O'Donnell. And in (31) it should be TSP in the plane (or Euclidean but not planar).

BTW, which direction of resolving the unique games conjecture will be considered surprising?

Robi.

What about the more recent

(33) 2-Player Nash is PPAD-complete

(see Lance's post from a couple of days ago)

Why is 23 not BIG?

22: randomness from hardness

30: hardness from randomness

I might add another recent surprise: Cryptography in NC^0 (Applebaum, Ishai, Kushilevitz).

In 32 it should be: Oleszkiewicz.

LP is not strongly in P. Proving that it is strongly polynomial is still wide open.

ReplyDeletePlanar TSP was independently done by Mitchell at the same time.

I also think Cook's theorem was independently proven by Levin.

I doubt if most of those results would be remembered 20 years from now. The list is too long.

The list is neat, but, of course, its content could well vary from person to person. Instead of suggesting inserts (and deletes), here is a suggestion: why not make an on-line poll?

You list "Connection between PCP and Approximation", but do not list the PCP theorem itself.

ReplyDeleteFor me, the PCP theorem was the greatest (and probably the only) surprising result in complexity theory.

With all due respect, most of the list isn't (and shouldn't have been) surprising. Come on, how long had people been multiplying numbers more than 20 digits using computers, and how much effort had they invested in improving the quadratic-time algorithm? It was clever, decidedly cool, but people were SURPRISED by that? Shows to go how egotistic we are -- just because we couldn't conjure up ways to do something faster after a few minutes or few months of thinking (multiplication better than n^2, matrix multiplication faster than n^3, planar something without sorting, etc.), we are SURPRISED, and surprised BIG that it could be done?

Give me a break.

Siva

I don't know why you say (28) that GW was surprising. It was beautiful and a big result to prove. But there was a conjecture by Poljak already for several years that the 5-cycle was the worst case for the so-called eigenvalue bound, which is equivalent to the SDP relaxation. So while it was a huge result to actually prove a good bound, I doubt it was surprising since the conjecture that there *was* a good, efficiently computable bound was around and was not due to GW.

Siva's comment reflects the point that surprises are a function of time. For example, in retrospect it's hard to understand why Gödel's results were surprising.

By now we have gotten over the surprising fact that you can formally model computation, which I think is a BIG surprise.

The issue is that people have developed some intuition, and it told them that matrix multiplication is a simple problem and should have a simple complexity (indeed, this intuition is probably right, and the right complexity is probably n^2). And if you think of the matrix as a linear function acting on vectors, which has n^2 complexity, then it's really surprising that computing its output on n independent vectors can be done faster than n^3.
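To make that surprise concrete: Strassen's recursion computes a 2×2 block product with 7 block multiplications instead of 8, which is exactly where the exponent log_2 7 ≈ 2.81 comes from. Here is a toy sketch for power-of-two sizes (my own illustration; a real implementation would pad arbitrary sizes and cut over to the naive method for small blocks):

```python
def strassen(A, B):
    """Multiply n x n matrices (n a power of two) using 7 recursive
    block products per level: O(n^log2(7)) ~ O(n^2.81)."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    add = lambda X, Y: [[x + y for x, y in zip(r, s)] for r, s in zip(X, Y)]
    sub = lambda X, Y: [[x - y for x, y in zip(r, s)] for r, s in zip(X, Y)]
    blk = lambda M, i, j: [row[j*h:(j+1)*h] for row in M[i*h:(i+1)*h]]
    A11, A12, A21, A22 = (blk(A, i, j) for i in (0, 1) for j in (0, 1))
    B11, B12, B21, B22 = (blk(B, i, j) for i in (0, 1) for j in (0, 1))
    # Strassen's 7 products (instead of the obvious 8).
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    # Reassemble the four quadrants of the product.
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    return [C11[i] + C12[i] for i in range(h)] + \
           [C21[i] + C22[i] for i in range(h)]
```

Each level trades one multiplication for a pile of additions, and additions only cost n^2, so the saving compounds across the recursion.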

Similarly, it seemed "obvious" that tossing coins helps speed up some algorithms. Now we may wonder why, and also how come it was not "obvious" that if there can be subexponential speedup then it's probably the case that BPP=P.
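A concrete instance of coins genuinely helping, in the spirit of this comment (my own illustration, not from the post): Freivalds' test checks a claimed matrix product AB = C in O(n^2) time per coin toss, while no deterministic O(n^2) check is known.

```python
import random

def freivalds(A, B, C, trials=20):
    """Probabilistically check whether A*B == C. Each trial multiplies
    by a random 0/1 vector in O(n^2) time; a wrong C survives a trial
    with probability <= 1/2, so the error probability is <= 2^-trials."""
    n = len(A)
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n))
                           for i in range(n)]
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compare A(Br) with Cr -- three O(n^2) matrix-vector products.
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False   # definitely A*B != C
    return True            # A*B == C with high probability
```

One-sided error: a "False" answer is always correct, and repetition drives the error of a "True" answer down exponentially.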

One addition is list decoding: it seemed obvious (not to mention proven) that you cannot decode binary codes with more than 25% errors, before people understood that they were asking the wrong question.

Excellent post and discussion, everyone!

#15 is a fundamental result. It introduced a whole new way to look at things, but it wasn't surprising in the sense that people were not thinking that the permanent was in poly-time. In fact most people hadn't even heard of the permanent.

ReplyDeleteThese are missing:

Cache oblivious algorithms (Prokop et al. FOCS 1999)

Triangulation in linear time (Chazelle FOCS 1990).

Practical sorting in the unit cost RAM in o(n log n) time (Andersson FOCS 1996).

Poly-time certificates for primality (Miller??).

Existence of zero knowledge proofs.

Burrows-Wheeler transform for compression.

Interesting as always, but very skewed. What about other areas of computer science, like logic, cryptography, computational geometry, parallel and distributed computing? SZK is closed under complement, for a start.

The terminology is misleading. A result can't be called a surprise, let alone a "BIG surprise", if people weren't thinking along those terms. A conceptual advance, alright, but certainly not a surprise...

Rahul.

In 22, under cryptographic assumptions, P=BPP is due to Yao I think.

ReplyDeleteWhat are the surprising results in theory?"

ReplyDeleteI guess that by "theory" Bill Gasarch meant "the theory of computing"; however, if one includes G�del incompleteness result, that speaks about decidability (in contrast to tractability), then it also seems appropriate to include a big surprise (if I'm not wrong) and certainly a very big result in itself: the independence of the Continuum Hypothesis from ZFC (P. Cohen 1963 [and (G�del 1938) that showed that CH is consistent with ZFC]).

I really enjoyed this post. I wonder if the authors can compile a similar list of beautiful results. Of course, beauty is in the eye of the beholder, but certain concepts are so simple and beautiful that we can explain them to a 10 year old. And beautiful results are almost always simple. The concept of undecidability is one such, in my opinion: you can explain it to anyone who knows how to write a program. Fermat's Last Theorem was another, and in the words of Andrew Wiles himself, the beauty and simplicity of the theorem was what fascinated him and led him to work on it.

Two other surprises from the 1980's:

a. Razborov's superpolynomial lower bounds for monotone circuits computing CLIQUE. At the time, using slice function arguments, it appeared that monotone and nonmonotone complexities were within an additive quadratic term.

b. Barrington's proof that NC^1 is solvable using width-5 Branching Programs. At the time there were conjectures that MAJORITY was not efficiently solvable using only constant width.

About plane TSP: Indeed, the shifting strategy used for plane TSP, which was essentially the main idea of this result, was invented much earlier by Baker, and it is very unfortunate that the paper on plane TSP does not refer to Baker's (FOCS'83) paper and essentially ignores her result. I consider Baker's result a much bigger result.

In response to the anonymous post on Razborov's CLIQUE lower bound: it is still true that for slice functions, monotone and non-monotone complexities differ by a small polynomial (even less than quadratic). However, no one knows how to prove quadratic or better lower bounds on the monotone complexity of an explicit slice function.

Why don't we build a result-tree or a result-dag like the complexity zoo and give weights to the nodes? After that, we can design our course based on this graph by DFS or BFS. We can also build a bipartite graph between results and methods (ideas). We can do it in a Wiki way.

Corrections and additions:

"Permanent is #P complete" did NOT introduce #P. The class was introduced in another paper of Valiant's, "The Complexity of Enumeration and Reliability Problems" (SICOMP 79, but, like the Permanent paper, published as a Tech Report earlier). A close relative of #P, PP, was introduced by Gill in 74, and many combinatorial counting problems related to it appear in my ICALP 77 paper.

Again, the linear time sorting seems to me much less BIG than the first sub-(n log n) algorithm on RAMs (Willard..).

Other results that were BIG:

Planarity in linear time (Hopcroft and Tarjan); all bounded genus in linear time (Mohar); genus NP-hard.

Planarity in NC.

Space is better than time: SPACE(n/log n) contains linear time (Hopcroft, Paul, Valiant).

Undirected graph reachability in sub-log^2 space (Nisan et al).

Circuit depth is a version of communication complexity (Karchmer)

More generally, I think it is better to think of *techniques* that were new and surprising.

Let me toss out an initial random selection:

splay trees

amortized complexity

Kolmogorov techniques for proving lower bounds (Paul)

Underlying graph structure of computations to prove lower bounds (Valiant -- gave us superconcentrators and much more)

Isolation lemma (Vaziranis, Mulmuley)

Depth-first search (Hopcroft and Tarjan)

Bob and Alice's adventures in cryptoland (Blum, Goldwasser and Micali)

Good pseudorandom generators

PCP

PCP implies nonapproximability

Fourier analysis of Boolean functions

Szemeredi local lemma

......

Did Janos mean the Lovasz Local Lemma or Szemeredi's regularity lemma? I guess both have been extremely useful.

Janos makes many good suggestions, and indeed, the theoretical ability to sort in linear time caused quite a stir when introduced. I don't agree that we must concentrate on techniques. A single isolated result can be surprising (like the median was many moons ago), or an entire new technique can catch us by surprise (the existence of zero knowledge proofs and PKC).

#17, fixed-dimension integer linear programming, is in P. Is there an analogous result for L vs NL?
