Comments on Computational Complexity: Is Persistence an Anachronism?
Blog author: Lance Fortnow (http://www.blogger.com/profile/06752030912874378610)
17 comments

The editors of Combinatorica should retract Professor Vazirani's 1994 proof of correctness of his algorithm, citing the many errors discussed in the new MOR paper.
-- Anonymous (2024-04-30 11:12)

When people say FOCS/STOC are biased in what they accept, this is a good example. If you are well known, or connected to the well known, you will get a pass that others won't get.

And this in turn leads to bias in the credentials that are used to decide who gets what position and what resources.

It might have been okay when the field was young and small, a clique of people who all knew each other, but we are way past that time.
-- Anonymous (2024-04-28 10:33)
Everybody is you and your friends.

In the age of the Internet, you need to publish and publicize the incorrectness of your claim as soon as you notice it. That is what academic honesty requires.
-- Anonymous (2024-04-28 10:28)

Please also note that the above 2017 paper is a short and self-contained presentation of an O(\sqrt n m) matching algorithm.
-- hal gabow (https://home.cs.colorado.edu/~hal/) (2024-04-27 21:09)

I distinctly remember a talk of Vazirani roughly ten years ago where he started with, "Everybody knows my FOCS 1980 paper is wrong!" There are many things disturbing and deeply wrong about this statement, and about the context in which he made it, that turned me off of TCS research. Mainly:
- The statement is false: not everybody could know, as there was no full version of that paper available (to even try to check), and there was no erratum or withdrawal of the paper.
- The speaker was unashamed of naming the prestigious conference in which the extended abstract was published, which surely boosted his career, despite the errors, to the detriment of others.
-- Anonymous (2024-04-27 15:24)

Is it okay to say a result is "not WRONG" even though its proof as written is wrong, just because later correct proofs are found?

For example, many incorrect P not equal to NP proofs are posted online nowadays. If in the future someone presents a correct proof that P does not equal NP, is it then justified to say that all those earlier posts were "not WRONG"?
-- Anonymous (2024-04-25 20:11)
That's a good point: it still means that the statement "the MV algorithm is still the fastest known algorithm for computing a maximum cardinality matching" is wrong as written.
-- Anonymous (2024-04-25 20:09)

The paper cites Wikipedia!
-- Anonymous (2024-04-25 13:24)

Regarding alternatives to the MV algorithm, please see these two papers:

"The weighted matching approach to maximum cardinality matching," H.N. Gabow, Fundamenta Informaticae (Elegant Structures in Computation: To Andrzej Ehrenfeucht on His 85th Birthday), 154, 1-4, 2017, pp. 109-130. This algorithm finds a maximum cardinality matching in time $O((\sqrt n) m)$.

"Maximum cardinality $f$-matching in time $O(n^{2/3} m)$," H.N. Gabow, arXiv:2311.14236, 24 pages. Currently submitted for publication. The cited bound is for simple graphs with an arbitrary degree-constraint function $f$. The algorithm also runs in time $O(\min(\sqrt{f(V)}, n)\, m)$ for multigraphs with arbitrary $f$, where $f(V) = \sum_v f(v)$. When $f$ is identically 1 this is again the MV bound.

These algorithms don't use double depth-first search, just ordinary depth-first search.
-- hal gabow (2024-04-25 12:28)

Agree that a result known to be incorrect or incomplete should be retracted (if unfixable).
However, the new paper does a nice job examining and explaining the mistakes and gaps in the original attempt.
-- Anonymous (2024-04-25 10:48)

I don't think the two algorithms are comparable in terms of asymptotic complexity. MV is faster for sparse graphs but slower for dense graphs.
-- Anonymous (2024-04-25 10:45)

There was a proof, or an idea-for-a-proof-in-the-minds-of-MV, when it was first published. How does the final proof relate to the initial attempt? An entirely new proof? Based on it but needing only a few details filled in (I doubt that, since it would not have taken over 40 years)? I suspect somewhere in between, but I would be curious if Vazirani or Bulliger can tell us.

Rather than complain about the situation, I ask a question:

a) How common is it for a result in a prestige conference to be WRONG (note that the MV result was not WRONG)?

b) How common is it for a result in a prestige conference to have a proof or proof sketch that is NOT EVEN CLOSE to the final proof and needs new ideas, though the original result is still true?

c) How common is it for the proof sketch to omit some detail that experts could fill in but non-experts could not? I would guess this is common.

For important papers the problems will probably be discovered. But for unimportant papers that nobody reads, an error can linger for decades. That can be a problem if some unimportant paper becomes important as a lemma to a later result.
-- gasarch (https://www.blogger.com/profile/03004932739846901628) (2024-04-25 10:43)

The story of the Micali-Vazirani algorithm really bothers me, and I'm sure that's true for many others too. Indeed, it is well known that there were gaps in earlier proofs. Fine, that happens. But I cannot find any published statement about this, except in other papers in the interim that all purport to fill in the gaps. Is this how we want to set the norms of our field? Is it okay to publish yet another proof of correctness without retracting and explaining the issues in all previous attempts? I'm disappointed that this blog, which is already well past its prime, doesn't discuss these kinds of issues, which are definitely relevant to our field.
-- Anonymous (2024-04-25 09:14)

Seems to be a good candidate for Coq or similar proof assistants.
-- Anonymous (2024-04-25 07:56)

I'm not saying the paper is wrong; I'm quoting the blog post, and pointing out that a statement it makes is wrong.
-- Anonymous (2024-04-24 22:47)

That result is mentioned on page 6 of the paper.
-- Anonymous (2024-04-24 18:10)
"the MV algorithm is still the fastest known algorithm for computing a maximum cardinality matching"

The above comment is wrong as written, since matrix-multiplication-based algorithms are faster than the Micali-Vazirani algorithm for maximum matching.
-- Anonymous (2024-04-24 15:39)
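A minimal sketch of the trade-off the comparison comments above describe, assuming MV's O(m·sqrt(n)) running time and an O(n^omega) bound for matrix-multiplication-based matching (omega ≈ 2.373 is an assumed value of the matrix-multiplication exponent; constants are ignored, so this only shows which exponent dominates for sparse versus dense graphs):

```python
# Compare the dominant terms of two maximum-matching bounds (constants ignored):
#   Micali-Vazirani:              m * sqrt(n)
#   matrix-multiplication-based:  n ** OMEGA
OMEGA = 2.373  # assumed matrix-multiplication exponent

def mv_cost(n: int, m: int) -> float:
    """Dominant term of the Micali-Vazirani O(m * sqrt(n)) bound."""
    return m * n ** 0.5

def algebraic_cost(n: int) -> float:
    """Dominant term of the O(n^omega) algebraic-matching bound."""
    return n ** OMEGA

n = 10 ** 6
sparse_m = n           # m ~ n:   MV term ~ n^1.5, below n^2.373
dense_m = n * n // 2   # m ~ n^2: MV term ~ n^2.5, above n^2.373

print(mv_cost(n, sparse_m) < algebraic_cost(n))  # prints True:  MV wins on sparse graphs
print(mv_cost(n, dense_m) < algebraic_cost(n))   # prints False: algebraic wins on dense graphs
```

This agrees with the comment that the two algorithms are incomparable in asymptotic terms: which is faster depends on the graph's density.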