Emanuele Viola just outright states "I Believe P = NP." He starts with an argument similar to one we've seen from Richard Lipton: we've had surprising results in the past, so why should the conventional wisdom that P ≠ NP be true? I have two major problems with this line of reasoning:
- These arguments cherry pick certain results and ignore the vast majority of theorems that went the way we expected. It's akin to saying that because there have been lottery winners in the past, the ticket I hold in my hand is likely to be a winner.
- All the major surprises in complexity occurred in the first two decades of the P vs. NP era; we've seen no surprises at that level since the early '90s. We have seen surprising proofs of results like Primes in P, Undirected Connectivity in Log Space and NEXP not in ACC0, but the theorems themselves surprised no one.
Viola also makes the argument that we've seen fewer significant lower bound results than upper bounds. This doesn't surprise me: an upper bound requires exhibiting a single algorithm, while a lower bound requires showing that infinitely many algorithms all fail. I fully agree that we're unlikely to see a proof that P ≠ NP in the near future. But this isn't a reason to think the classes are the same. And we've seen no non-trivial algorithms for general NP problems like circuit-SAT.
Which brings me to Ryan Williams' Some Estimated Likelihoods For Computational Complexity, where he gives a 20% probability (which I consider nearly 20% too high) that P = NP, based on similar logic. Ryan surprises me even more by giving a whopping 80% chance that NEXP = coNEXP. That would imply every easily describable tautology has a short proof of being a tautology (for the appropriate definitions of "describable" and "short"), which I find hard to fathom.
If I had to guess which collapses might happen, I would suggest L = NL (directed connectivity in log space) or one that Ryan gives even odds: TC0 = NC1. TC0 captures constant-depth neural networks, and as we've seen from the ML community, those neural nets look awfully powerful.
Since you've been in the business for a while, have any of your beliefs changed over time? Would you have guessed before the NN revolution that TC0 = NC1?
The convincing power of a conjecture such as P ≠ NP also depends on how convincing its equivalent reformulations are. Consider the following conjecture: NPI (NP-intermediate) is non-empty. Since not a single natural problem has ever been _proven_ to fall in NPI, it is not hard to imagine an unsuspecting person "voting" for the claim that NPI is empty. However, it is known (by Ladner's Theorem) that the emptiness of NPI is _equivalent_ to P = NP.
I find the existence of NP-intermediate problems just as obvious as P ≠ NP. Consider Graph Isomorphism. (On a side note, I don't think Babai's breakthrough is progress towards showing GI is in P, but the converse!) If it were in P, that would be insane. I also conjecture that it is not in BQP. And by the beautiful result of Boppana, Håstad and Zachos, if GI is NP-complete then the polynomial hierarchy collapses. (I know, I know, using the infiniteness of the polynomial hierarchy as evidence that P ≠ NP is circular, but think about it!)
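To make concrete why Graph Isomorphism sits so naturally in NP, here is a minimal illustrative sketch (my own, not any published GI algorithm): a candidate relabeling is cheap to verify, even though the naive search runs over all n! bijections.

```python
from itertools import permutations

def is_isomorphism(g, h, perm, n):
    """Polynomial-time check that perm maps the edges of g exactly onto h."""
    return all(((perm[u], perm[v]) in h) == ((u, v) in g)
               for u in range(n) for v in range(n))

def brute_force_iso(g, h, n):
    """The trivial n! search; verifying any single certificate is the cheap part."""
    return any(is_isomorphism(g, h, p, n) for p in permutations(range(n)))

# Two relabelings of the path 0-1-2, as sets of directed edge pairs.
g = {(0, 1), (1, 0), (1, 2), (2, 1)}
h = {(1, 0), (0, 1), (0, 2), (2, 0)}
print(brute_force_iso(g, h, 3))  # True: e.g. the map 0->1, 1->0, 2->2 works
```

The gap between the cheap certificate check and the factorial search is exactly the NP template; Babai's quasipolynomial-time algorithm narrows that gap without closing it.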
Proving that a specific problem is NP-intermediate is insanely hard, because (as you mentioned) it would in particular prove P ≠ NP.
What I find curious about NPI is that we _know_ it is non-empty, assuming P ≠ NP. On the other hand, under the same assumption, we are unable to prove that any *natural* problem is in NPI.
It is good to be agnostic about mathematical conjectures. Like Lance, I found Viola's argument for P = NP to be cherry-picking. Going beyond agnosticism and believing P = NP requires more than a few falsified conjectures about specific problems. Regarding UGC, various people have said and still say various things, but we have to wait for the truth. Patience is called for.
"TC0 captures constant-depth neural networks"
That's not literally true by definition. Is there a formalization that makes it true? Even assuming 0/1 inputs, ReLU gates, and a signum at the top, I don't see it immediately right now.
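One direction of the analogy is uncontroversial and easy to demonstrate: a TC0 threshold gate is exactly a McCulloch-Pitts-style neuron with 0/1 inputs, integer weights, and a hard-threshold activation. Whether ReLU gates with a signum at the top give precisely TC0 is the open formalization question raised above; this sketch (mine, for illustration only) shows just the classical threshold-gate view.

```python
def threshold_gate(weights, bias, inputs):
    """Fires iff the weighted sum meets the threshold: a single hard-threshold neuron."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= bias)

def majority(bits):
    """MAJORITY, the canonical TC0 gate, realized as one neuron with unit weights."""
    n = len(bits)
    return threshold_gate([1] * n, (n + 1) // 2, bits)

print(majority([1, 1, 0, 1, 0]))  # 1: three of five bits are set
print(majority([1, 0, 0, 1, 0]))  # 0: only two of five bits are set
```

A constant-depth, polynomial-size circuit of such gates is a TC0 circuit by definition; the delicate part is translating real-weight, real-valued ReLU networks into this discrete model without blowing up depth or size.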
If you haven't already seen it, there has been a recent breakthrough regarding UGC; see
Subhash Khot, Dor Minzer, and Muli Safra:
Pseudorandom Sets in Grassmann Graph have Near-Perfect Expansion. Electronic Colloquium on Computational Complexity (ECCC) 25: 6 (2018)
Changed my view from "agnostic" to "almost certainly true".
Thank you for responding to Emanuele's post.
I would add that we (the TCS community) have a pretty clear conjectural view of the computational landscape and the foundation stone of that conjectural view is P != NP.
Whereas, if P=NP, then we really don't have a good explanation for anything; vast swathes of complexity theory and essentially all of cryptography fly out the window.
Is it possible that most of TCS is wrong? Sure, we can't rule it out. But I think someone promoting that idea should provide an alternative coherent theory. How do you explain why some problems seem harder than others? "We just haven't found the algorithm yet" is not very compelling.
"If I would guess collapses that might happen, ..." What about ALogTime = L (where ALogTime is just uniform NC1)?
There was a very specific reason why I asked this question to EJ at the end of last year (his answer: "While ALogTime = L is a possibility, I think it is not very likely."). My specific reason from back then has since been reduced to the difference between expression evaluation in (canonical) binary tree representation vs. expression evaluation in (canonical) directed forest representation. I should be able to resolve the remaining unclarity myself as soon as I find the time. But after RW wrote that '... Estimated Likelihoods ...' paper, I just had to ask him too. And since I got no reply, asking the same question here was too tempting for me.
Who is EJ? Who is RW? Are we supposed to guess?
Is it a mega name-drop level where they don't just drop the name, they drop the initials, showing just how inside they are?
RW is Ryan Williams; EJ is Anonymous, but a specific anonymous. Obviously I am not very inside, otherwise I would have talked about Alice and Bob instead. How do you explain the background to a question in a few words without getting complaints about all kinds of perceived misbehaviour? I don't know and don't care. I just ask what I want to know, and give some background if I believe it is necessary or helpful.
"And we've seen no non-trivial algorithms for general NP problems like circuit-SAT."
What counts as a non-trivial algorithm? Does the simplex algorithm count? It works fine in practice, most of the time. Do the recent analog Max-SAT solvers (like this one from 20 Jan 2018, or that from MemComputing, Inc., initially presented on 16 Dec 2015) count as non-trivial algorithms? The inventors claim they work fine in practice, but I admit that claims from startups should be taken with a grain of salt.
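For context, the "trivial" baseline the post refers to is exhaustive search over all 2^n input assignments. A minimal sketch (the circuit encoding below is my own illustrative convention, not taken from any of the solvers mentioned):

```python
from itertools import product

def eval_circuit(gates, assignment):
    """gates: list of ('AND'|'OR'|'NOT', operand wire indices); wires 0..n-1 are inputs."""
    wires = list(assignment)
    for op, args in gates:
        if op == 'AND':
            wires.append(wires[args[0]] & wires[args[1]])
        elif op == 'OR':
            wires.append(wires[args[0]] | wires[args[1]])
        else:  # NOT
            wires.append(1 - wires[args[0]])
    return wires[-1]  # the last gate is the circuit's output

def trivial_circuit_sat(gates, n):
    """Try all 2^n assignments; return a satisfying one, or None if unsatisfiable."""
    for assignment in product([0, 1], repeat=n):
        if eval_circuit(gates, assignment):
            return assignment
    return None

# (x0 AND x1) OR (NOT x0): satisfiable, e.g. by setting x0 = 0.
gates = [('AND', (0, 1)), ('NOT', (0,)), ('OR', (2, 3))]
print(trivial_circuit_sat(gates, 2))  # (0, 0)
```

A "non-trivial" algorithm in the post's sense would beat this 2^n bound in the worst case; simplex and the analog solvers are heuristics without such a guarantee.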
NP-complete/hard problems are a contrived set, because real-world "noise" in problem formulation imposes an upper bound well before we would hit the computational upper bound, especially given the massive hardware resources readily available at our fingertips.
To take Euclidean TSP as one prototypical example (yes, there is a trivial factor-of-2 approximation): I do not know of a situation where one possesses perfect data for the edge costs across M sites (M > 100, or whatever the upper bound of the best TSP solver is) before planning begins. By the time we traverse a small fraction of the computed plan (say, a Lyft or Uber autonomous car picking up passengers), the subsequent costs will have changed enough to require re-planning every so often. So while the exact TSP solution has proven optimality, it may be very sub-optimal (fragile) in an ever-changing environment.
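The factor-of-2 approximation alluded to is the classic MST-doubling heuristic for metric TSP: build a minimum spanning tree, then shortcut a depth-first walk of it. A minimal sketch (my own, for illustration):

```python
import math

def mst_double_tour(points):
    """Classic 2-approximation for metric TSP: MST plus a shortcut preorder walk."""
    n = len(points)
    in_tree = [False] * n
    parent = [0] * n
    best = [math.inf] * n
    best[0] = 0.0
    children = [[] for _ in range(n)]
    for _ in range(n):
        # Prim's algorithm: pull in the cheapest vertex not yet in the tree.
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if u != 0:
            children[parent[u]].append(u)
        for v in range(n):
            d = math.dist(points[u], points[v])
            if not in_tree[v] and d < best[v]:
                best[v], parent[v] = d, u
    # A preorder walk visits every vertex; shortcutting repeats costs at most
    # twice the MST weight, and the MST weight lower-bounds the optimal tour.
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour

print(mst_double_tour([(0, 0), (0, 1), (1, 1), (1, 0)]))  # [0, 1, 2, 3]
```

The guarantee needs only the triangle inequality, which holds for Euclidean costs; with the noisy, shifting costs described above, even that premise is shaky.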
Despite optimal chip design being another prototypical NP-hard problem, we now have incredible chips that incorporate many billions of transistors without breaking a sweat (at least not mine :-)).
While the NSA would love to crack encrypted data, they have probably figured out alternative schemes to get what they need, in many if not all situations.
Chess, Go and other games are already contrived in that no one really cares how well you can play on an ever-increasing NxN board.
Meanwhile, my 8 years of effort on P vs NP:
I should note that RJL had a change of heart when it struck us that any P = NP proof must embody a proof of strong circuit lower bounds, yet no strategy ever proposed shows any sign of awareness of this need: https://rjlipton.wordpress.com/2017/12/08/pnp-perhaps-i-change-my-mind/ (We still both believe factoring is classically tractable, though.)
My following paper has gained much attention on the Academia.edu preprint server. It is right now one of the top papers on that site. Perhaps this is a good approach to a final solution: who knows? This is the abstract and the URL link:
P versus NP is considered one of the most important open problems in computer science. It consists in answering the following question: Is P equal to NP? This question was first mentioned in a letter written by John Nash to the National Security Agency in 1955. A precise statement of the P versus NP problem was introduced independently in 1971 by Stephen Cook and Leonid Levin. Since that date, all efforts to find a proof for this problem have failed. Another major complexity class is Sharp-P. Whether P = Sharp-P is another fundamental question that is as important as it is unresolved. If any single Sharp-P-complete problem can be solved in polynomial time, then every NP problem has a polynomial-time algorithm. The problem Sharp-MONOTONE-2SAT is known to be Sharp-P-complete. We prove Sharp-MONOTONE-2SAT is in P. In this way, we settle the P versus NP problem.
See my proof of P = NP in
Thanks in advance…