A guest post by Eric Allender prompted by an (incorrect) P ≠ NP proof recently published in Springer Nature's Frontiers of Computer Science.
For a time, I served as Editor-in-Chief of ACM Transactions on Computation Theory, and in this role I had to deal regularly with submissions that claimed to resolve the P vs NP problem. Finding referees for these papers was sometimes challenging, so I frequently ended up reviewing them myself. Dealing with such submissions involves enough overhead that ToCT, J.ACM and ACM Transactions on Algorithms limit the frequency with which authors can submit work of this sort. But for every submission, on any topic, the overall process was the same: Find someone on the Editorial Board who has sufficient expertise to find referees and make knowledgeable use of their recommendations. If there was no such person on the Editorial Board, then it was always the case that the submission was out of scope for the journal.
These thoughts are brought to mind by a recent case, where it seems to me that the editorial process broke down.
Springer publishes several high-quality academic journals that deal with Theoretical Computer Science. One Springer journal, Frontiers of Computer Science, recently published an article entitled SAT Requires Exhaustive Search, where one of the authors is a Deputy Editor-in-Chief of the journal. The abstract of the article states that it proves a result that is stronger than P not equal to NP. The Editorial Board of the journal has some members who are expert in computational complexity theory. However, all the ones whom I know personally have asserted that they had no knowledge of this paper, and that they were not involved at all in handling the paper.
When Ryan Williams and I learned about the publication of this article, we drafted a comment, which we sent to the Editor-in-Chief. We recommended that the paper be retracted, in which case there would be no need to publish our comment. However, the Editor-in-Chief declined to retract the article, saying that he could find no evidence of misconduct, and thus we have been assured that an edited version of our comment will appear in the journal.
Our comment calls attention to some shortcomings in the proof of the main theorem (similar to shortcomings in several other failed attempts to separate P from NP). But we were able to say more. Often, when one is looking for bugs in a purported proof, one has to deal with the situation where the claimed theorem is probably true, and the only problem is that the proof is not convincing. However, the main theorem in their paper (Theorem 3.2) states that a particular constraint satisfaction problem requires time more than \(d^{cn}\) for any constant \(c<1\) (where \(d\) is the domain size, and \(n\) is the number of variables). In particular, their purported proof claims that this holds even when \(k=2\) (meaning that each constraint has at most 2 variables). However, Ryan Williams presented an algorithm more than two decades ago that runs in time \(O(d^{(0.8)n})\) in this special case, contradicting the lower bound claimed in the article.
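To make the inconsistency explicit (a sketch using the notation above; the 0.8 exponent is the one cited for Williams' algorithm):

```latex
\begin{align*}
&\text{Claimed (Thm.~3.2, even for } k=2\text{):} && T(n) > d^{cn} \text{ for every constant } c<1.\\
&\text{Known (Williams):} && T(n) = O\!\left(d^{0.8n}\right) \text{ when } k=2.\\
&\text{Take } c=0.9\text{:} && O\!\left(d^{0.8n}\right) \subseteq o\!\left(d^{0.9n}\right),
\end{align*}
```

so for all sufficiently large \(n\) the algorithm runs in time below \(d^{0.9n}\), contradicting the claimed lower bound at \(c=0.9\).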
The article contains an appendix, with glowing testimonies from various researchers; the lead-in to the appendix also contains enthusiastic comments from Gregory Chaitin. I contacted Gregory Chaitin, and he asserts that he did not read the paper, and that he was quoted out of context.
The edited version of the comment that Ryan Williams and I wrote, which will supposedly appear in Frontiers of Computer Science soon, differs from the version linked here (our original submission) primarily in one respect: Our closing paragraph was removed. Here is that paragraph:
Finally, it is our opinion that the publication of this article is a complete embarrassment to this journal and its publisher. We believe that, at the very least, the paper should be withdrawn, and Springer should conduct an investigation to understand how such a paper could have made it through the peer review process.
Update (Lance, 8/5/25): The authors of the paper in question have asked me to link to their reply to the Allender-Williams comment. I do so with no endorsement.
Eric Allender asked me to note that their claim that Ryan’s algorithm requires exponential space is misleading: the algorithm’s space usage is at most its runtime, which is itself less than the lower bound that they claim. (And their theorem does not state that \(d^{cn}\) time is required under the additional assumption that the algorithm uses only \(n^{O(1)}\) space.)
See also https://arxiv.org/abs/2312.02071 for another rebuttal of an earlier version of the same paper.
I would be careful of Eppstein’s comments; he is a non-expert in this domain. But fine. Let him speak too.
@Anonymous, what are you smoking?? Eppstein is a prominent and accomplished theoretical computer scientist, and his comment isn't making any claims to be "careful" of beyond linking to another paper.
Eppstein is simply pointing to a paper written by other people. Ignore anonymous attacks!
There is no need to be an expert in complexity to comment on this paper. It takes anyone with a basic education in TCS less than 15 minutes of reading to see that the claimed """proof""" is hilarious.
Also see https://arxiv.org/pdf/2401.01193 and https://arxiv.org/pdf/2309.05689, which are even more hilarious.
It is a shame that this blog repeatedly publishes such defamatory comments about good researchers.
This seems to be a popular topic: https://eprint.iacr.org/2025/445. But of course they don't perform peer review; it's just a preprint server.
This is either hilarious or sad. Or both.
Both.
With so many attempts, something will slip through the cracks. Good job pointing this out!
If a paper did not brag that it solved P ≠ NP, and that result appeared only in the middle of the paper, then yes, I can imagine it slipping through. But when the claim that P ≠ NP is right in the abstract, this is not a "just slipped through" scenario.
Don't let President Trump know. Fake Science + China!
The comment "authors can submit work of this sort" seems very derogatory. "Work of this sort"? What the hell is this about here? Either have the courage to be fully descriptive, but not so obnoxious, please.
"Work of this sort" refers to "submissions that claimed to resolve the P vs NP problem" from earlier in the paragraph.
This was a reference to the P/NP policy of these journals, which is described at length here: https://dl.acm.org/journal/toct/pnp-policy
To save you from having to follow the link, here is what it says: "No author may submit more than one paper to J. ACM, ACM Trans. on Algorithms, or ACM Trans. on Computation Theory in any 24-month period, purporting to resolve the P versus NP question or related long-standing questions in complexity theory, except by invitation of the Editor-in-Chief. This applies to resubmissions of previously rejected manuscripts."
I am sorry that you found my language to be obnoxious.
Another thing: an editor-in-chief should be aware of how such a paper made it into his own journal. This journal is not as wide-ranging as TCS… Also likely is that the co-author who is an editor made things happen where other papers would have gotten stuck.
When I hear stuff like "Gregory Chaitin. I contacted Gregory Chaitin, and he asserts that he did not read the paper, and that he was quoted out of context," I cringe… What does this even mean? Did he say it or not?
Is Greg still active in research or not? If not, I can understand. If you don't read a paper, how do you have an opinion about it? How can you say something favorable other than… nice format and nice typeface! So something feels eerie here.
The paper says 'As of March 2025, more than 30 experts have provided highly positive feedback, including remarks like “truly revolutionary” from Prof. Gregory Chaitin'. I can believe Chaitin said something like "This result would be truly revolutionary if it's correct, so check the proof extremely carefully" and they quote-mined him.
The article says "As of March 2025, more than 30 experts have provided highly positive feedback, including remarks like “truly revolutionary” from Prof. Gregory Chaitin." It's possible that Greg said something like, "if true, the result is truly revolutionary."
The paper quotes Chaitin as saying "truly revolutionary". That's it, that's the whole quote. Did he say it? Does it even matter? There are many ways to use that phrase, and many of them are not positive feedback, as the paper claims it is. For example, "if you managed to prove that, that would be truly revolutionary" (which roughly means: you probably didn't prove anything and you may want to make extra sure that you did), which is a comment that could be made based on the title alone.
Technically, you did not point out the error, but merely that Ryan's result and the paper are inconsistent, and therefore at least one of them is wrong. :)
No, our full comment points to a serious (and obvious) mistake in the proof of the main result. The authors flatly assume that any algorithm for SAT must proceed by reducing an instance of size n to instances of size n-1 in a particular way.
The fact that the statement of the main result was also directly refuted 20 years ago just makes the whole thing extra silly.
Nothing "eerie" here: the paper in question is what blatant academic fraud looks like.
"Eerie" is an eerie word.
We should open a journal for proofs of P=NP and P ≠ NP, and all such publications go in there. Problem solved?
World Scientific in 2016 even published a whole book implying P=NP, via a purported polynomial-size LP for TSP (something that was already disproved in our STOC'12 paper).
https://www.worldscientific.com/worldscibooks/10.1142/9725#t=aboutBook
To be more precise & avoid confusion: they gave a purported polynomial-size extended formulation for the TSP polytope (which would imply a polynomial-size LP)
I took a look at the book, wondering whether the P=NP claim was buried in the middle (so the editor may have missed it) or overt (so there was no excuse to miss it). The TITLE of the book is fine: Advances in Combinatorial Optimization: LP formulations of the TSP and other hard opt problems (SO I THOUGHT: they will either formulate approximate versions, or show how SOME TSP problems can be formulated as an LP, or stuff like that).
Money quote from the abstract on the website: "This work also represents a proof of the equality of the complexity class 'P' (polynomial time) and 'NP' (nondeterministic polynomial time), and makes a contribution to the theory and application of 'extended formulations' (EFs)."
Nothing subtle about that. I wonder (a) why this was not caught early in the process, and (b) why this book from April 2016 was not noticed earlier.
Next step: Either this book or the recent paper try to claim the $1,000,000 prize.
Eric or Ryan: I did a blog post a while back about how `proofs' of P=NP or P ≠ NP have never (literally never) had an interesting idea in them. So NO `They don't prove P=NP but they have an idea for a better SAT solver' or `They don't prove P ≠ NP but the ideas here could lead to SAT not in n^2'.
SO: is there ANYTHING of interest in the current paper?
There are sections of the paper that discuss the probability with which a random instance of CSP (in their model) is satisfiable. I didn't go through this part of their paper carefully, but the reference that David Eppstein provided in an earlier comment (https://arxiv.org/abs/2312.02071) indicates that people have gone through that argument carefully and found no real problems there. This is conceivably of interest, but I don't think that it is very helpful for understanding why P is not equal to NP (assuming that this is the case). The paper also contains philosophical discussions (with mentions of Wittgenstein and others), which would have been more interesting if their argument had been correct.
The Moser-Tardos algorithm does not appear to be based on divide and conquer. As such, Xu and Zhou's assertion in their article that "As we know, at least until now, all exact algorithms for CSPs are based on divide and conquer style, such as backtracking method" is inaccurate.
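For readers who haven't seen it: the Moser-Tardos algorithm just resamples the variables of some violated constraint in place, over and over, until no constraint is violated; the instance is never split into subinstances. A minimal sketch (the `(vars, allowed)` encoding and all names here are my own illustrative choices, not the paper's; guaranteed fast termination needs the Lovász Local Lemma conditions, which this toy version does not check; it simply caps the number of resampling steps):

```python
import random

def moser_tardos(n, d, constraints, seed=0, max_steps=10**6):
    """Search for a satisfying assignment of a CSP by local resampling.

    `constraints` is a list of (vars, allowed) pairs, where `vars` is a
    tuple of variable indices and `allowed` is a set of value tuples
    satisfying the constraint.  Note there is no divide and conquer:
    a violated constraint just has its variables redrawn in place.
    """
    rng = random.Random(seed)
    assignment = [rng.randrange(d) for _ in range(n)]
    for _ in range(max_steps):
        # Find the variable tuple of some violated constraint, if any.
        violated = next((vs for vs, allowed in constraints
                         if tuple(assignment[v] for v in vs) not in allowed),
                        None)
        if violated is None:
            return assignment          # every constraint satisfied
        for v in violated:             # resample only that constraint's vars
            assignment[v] = rng.randrange(d)
    return None                        # step budget exhausted
```

For instance, 2-coloring a 4-vertex path can be phrased as a binary CSP with `allowed = {(0, 1), (1, 0)}` on each edge, and the loop finds a proper coloring after a few resamplings.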
First of all, thanks to Eric and Ryan for pointing this out.
As a Ph.D. student in a peripherally related field who happened to see a post about this on X and came here, I am surprised that Eric/Ryan even bothered to draft this much of a comment in response to this paper in the first place. This paper seems to come out of a Chinese university that writes papers for the sake of publication alone, without any regard for quality. This reminds me of a similar instance a few years ago where a professor purported to have discovered sequences that both converge and diverge (or something of this ridiculous sort), with the exception that this time the paper was even accepted for publication!
Alas, it is highly possible that there are many such instances (e.g., at ICML/NeurIPS) whose errors are undetected...
Maybe the paper was refereed by ChatGPT?
Or someone equally dumb.