That may have been the last time I was truly shocked by a theorem in theoretical computer science. I've been shocked by proofs: that Primes are in P, that undirected connectivity is in log space, NEXP not in ACC^0, Graph Isomorphism in quasi-polynomial time. But the theorems themselves all went in the directions we expected.

In the ten years before Shor we had plenty of surprises: interactive proofs, zero-knowledge proofs, probabilistically checkable proofs, nondeterministic space closed under complementation, hardness versus randomness, the permanent being hard for the polynomial-time hierarchy. The surprises seemed to come to a hard stop after Shor.

There have been some mild surprises since: the Hadamard matrix isn't rigid, holographic algorithms, the complexity of Nash equilibria, QIP = PSPACE, and many others. But nothing that has made us rethink the complexity world.

This reflects the maturity of our field. How many shocking theorems have we seen recently in mathematics in general? We were shocked by the proofs of the Poincaré conjecture and Fermat's last theorem, but both went in the expected direction.

We will have some shocking theorem in the future, maybe Factoring in P or L = NL. To be truly shocked, it would have to be something I can't even imagine being true today.

In algebraic complexity, we have recently seen a few fairly surprising theorem statements: the chasms at depth four and three, and the recent bootstrapping results in polynomial identity testing (Agrawal, Ghosh, Saxena '18; Kumar, Saptharishi, Tengse '18), which show that even a bare improvement over the trivial hitting set would lead to a derandomization of polynomial identity testing.

I think homomorphic encryption was a major surprise. Crypto seems to produce surprises (maybe more so to outsiders) consistently...

NEEXP in MIP*!!

I agree with Anonymous, and I would also add that there are surprising analogues in Boolean complexity, namely the "hardness magnification" results. There's Allender-Koucky [JACM'10], which shows how n^{1.001} lower bounds can imply super-polynomial lower bounds, but there are also the recent papers:

https://eccc.weizmann.ac.il/report/2018/199/

https://eccc.weizmann.ac.il/report/2018/139/

https://eccc.weizmann.ac.il/report/2018/158/

https://people.csail.mit.edu/rrw/MCSP-MKTP-stoc19.pdf

An example theorem: from Razborov-Rudich, we believe there are no natural properties useful against P/poly that are computable in P/poly. If we could even prove there are no such properties computable in size N*poly(log N) and depth poly(log N), then already NP is not in P/poly.

One could claim none of these are "shocking" because, if you expect the lower bounds to hold, the statements are of the form "True implies True". But the fact that such stuff is provable with existing complexity theory is pretty shocking to me. Hardness magnification shows in particular that there are good reasons why even weak lower bounds look out of reach: because they imply strong lower bounds!

Results more along the lines of what Lance means by "shocking" would be the results on catalytic computation.

https://iuuk.mff.cuni.cz/~koucky/papers/cats.pdf

It's extremely counterintuitive to me that you can use a "dirty memory" to compute so much (and leave the dirty memory intact).

Note that, in order to provide something that may be shocking, I had to cite an algorithmic result. It seems we are very pessimistic about what algorithms can do, so we're always shocked by algorithms but never by lower bounds. Can you think of any lower bound result you would be shocked by?

@ryanw: I think I was shocked (mildly) by both the algorithms and the lower bounds for frequency moments in the streaming model (Alon, Matias, Szegedy, 1995). Before this paper came out, I think most theorists would've predicted that either all frequency moments are approximable in small space or none of them is. The paper showed -- through highly non-trivial ideas -- that F_0 and F_2 are approximable in logarithmic space, while higher moments need polynomial space.
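To make the F_2 result concrete, here is a minimal sketch of the Alon-Matias-Szegedy idea in Python. Everything below (the class name, the SHA-256-based sign hash, the `width` parameter) is illustrative and not from the paper; a faithful implementation would use 4-wise independent hash families and take a median of means for its guarantee.

```python
import hashlib

def _sign(seed: int, item: str) -> int:
    # Pseudorandom +/-1 sign for `item` under hash function number `seed`.
    # (AMS requires 4-wise independent signs; SHA-256 is a stand-in here.)
    digest = hashlib.sha256(f"{seed}:{item}".encode()).digest()
    return 1 if digest[0] & 1 else -1

class AMSSketchF2:
    """Streaming estimator for the second frequency moment
    F_2 = sum_i f_i^2, where f_i is the frequency of item i.
    Space is O(width) counters, independent of the number of items."""

    def __init__(self, width: int = 64):
        self.width = width
        self.counters = [0] * width  # counter j holds sum_i f_i * s_j(i)

    def add(self, item: str, count: int = 1) -> None:
        for j in range(self.width):
            self.counters[j] += count * _sign(j, item)

    def estimate(self) -> float:
        # Each squared counter is an unbiased estimator of F_2:
        # the cross terms f_i * f_k * s_j(i) * s_j(k) vanish in expectation.
        return sum(z * z for z in self.counters) / self.width
```

The point of the example is the space bound: the stream may contain millions of distinct items, yet the sketch keeps only `width` integer counters and still approximates F_2 to within a constant factor with high probability.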

Sure, those results could count as well.

I think the best example so far is Sanketh's. I didn't see it when I was posting initially, but I should have thought of it too! :)

Ewin Tang's killing of certain proclaimed "quantum speedups" by simulating those quantum algorithms using sophisticated classical sampling techniques was quite entertaining!

I found Håstad's optimal inapproximability results, especially the fact that MAX-3SAT can't be approximated beyond a random guess, quite shocking. Also all the consequences of the UGC (e.g., the Goemans-Williamson Max-Cut algorithm, the trivial factor-2 vertex cover, etc., all being optimal), and especially Raghavendra's optimal algorithm for all CSPs. Soon we may see a proof of the UGC, which would add another entry to the list of shocking results (when the UGC was proposed, many thought it would be settled either way by the next conference, and nobody could foresee its power...).

I also think that "the Hadamard [and more recently, the DFT] matrix isn't rigid" is quite shocking. It was a widely accepted guess that these are the most natural candidates for proving rigidity.

"Shocking" may not be the right choice of word here. Perhaps you meant "surprising," as in Lance's post.

I would say that the recent result of De Grey -- that the chromatic number of the plane is at least 5 -- is shocking. In my opinion, all the evidence pointed to it being 4.

"Cofinality spectrum theorems in model theory, set theory and general topology" by Malliaris and Shelah proves that two cardinals, denoted p and t, are the same. This was highly surprising.

Another example in cardinal arithmetic, by Shelah: if \aleph_\omega is a strong limit, then 2^{\aleph_\omega} < \aleph_{\omega_4}. To quote Shelah himself in an expository paper:

You may think this is a typographical error (in fact almost all who saw it for the first time were convinced this is a typographical error) and we still do not know:

Dream/Question: Why the hell is it four?

The Taniyama-Shimura conjecture was dismissed for years by number theorists.
