Bullinger's post on this blog last week focused on Vijay Vazirani's public obsession with finding a proof for the 1980 Micali-Vazirani matching algorithm. But why do Vijay, and theoretical computer science more broadly, obsess over proofs?

You can't submit a paper to a theory conference without a claimed complete proof, often contained in an appendix dozens of pages long. Often we judge papers more on the complexity of the proof than the statement of the theorem itself, even though for a given theorem a simpler proof is always better.

A proof does not make a theorem true; it was always true. The Micali-Vazirani algorithm is no faster with the new proof. Would we have been better off if the algorithm didn't get published before there was a full proof?

We're *theoretical* computer scientists--doesn't that mean we need proofs? Theoretical economists and physicists don't put such an emphasis on proofs; they focus on models and the theorems that justify them.

A senior CS theorist once told economists that his group had given the first full proof of a famous economics theorem and wondered why the economists didn't care. The economists said they already knew the theorem was true, so the proof added little to their knowledge base.

More than one journalist has asked me about the importance of a proof that P \(\ne\) NP. A proof that P = NP would be surprising and would, we hope, yield an algorithm. A proof that P \(\ne\) NP would be incredibly interesting and would settle a major mathematical challenge, but it wouldn't do much more than confirm what we already believe.

I'm not anti-proof; it is useful to be absolutely sure that a theorem is true. But does focusing on proofs hold our field back from giving intuitively correct algorithms and theorems? Is working out the gory details of a lengthy proof, which no one will ever read, the best use of anyone's time?

As computing enters a phase of machine learning and optimization where we have little formal proof of why these models and algorithms work as well as they do, does our continued focus on proofs make our field even less relevant to the computing world today?

If you insist on proofs, then there is a time hierarchy theorem for BPP, as shown by this simple argument: https://cstheory.stackexchange.com/questions/48081/provable-bpp-hierarchy

I don't think there's anything wrong with pursuing proofs alongside moving forward with assumptions. For example, much of cryptography is fleshing out possibilities from different assumptions. I think it's critical to state and investigate the underlying assumptions, but that shouldn't stop us from building useful things before we know they're true/secure.

I resonate with that feeling one hundred percent. The topic definitely deserves serious discussions in the PC meetings and business meetings of the conferences.

It is a bit weird to ask if insisting on proofs holds our field back, when it is not even clear what our field would be without proofs. What would a typical TCS paper in this world where proofs don't matter much look like?

> A proof does not make a theorem true; it was always true.

You should mark satire as satire, or some people will believe it.

You, of course, know full well that a proof allows us to distinguish between what is certain truth and what are educated guesses (or, worse still, uneducated guesses) that could very well be wrong.

You know that our field is a hard science, not a social science or theology.

Theoretical physics is a discipline where many things are done with mathematically suspect arguments. Beyond tricks like renormalization to get rid of infinities, the methods of statistical physics involving the replica method, replica symmetry breaking and the cavity method start from a guessed form of solution (stated as an "ansatz") and often involve mathematical absurdities such as approximating a system by the first few terms of a non-convergent series, or analyzing it via the limit as k goes to 0 of the derivative with respect to k of the log of the partition function of k copies of the system. (Parisi won the Nobel Prize in Physics for some of these methods, which, despite their apparent absurdity, (a) seem accurate and (b) have had many of their consequences formally established in later deep work by Talagrand and others.)
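For concreteness, the replica trick described above computes the disorder-averaged ("quenched") free energy from integer numbers of copies and then continues to the limit \(k \to 0\) (a sketch in standard statistical-physics notation, not from the original comment):

```latex
\[
\langle \ln Z \rangle \;=\; \lim_{k \to 0} \frac{\partial}{\partial k} \ln \langle Z^k \rangle,
\]
```

where \(\langle \cdot \rangle\) denotes the average over the random disorder and \(\langle Z^k \rangle\) is the averaged partition function of \(k\) independent replicas of the system. The "absurdity" is that \(\langle Z^k \rangle\) is computed for integer \(k\) and then treated as a function of a real \(k\) tending to 0.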

For example, the structure of solutions for random k-SAT at different clause densities was first hypothesized using these methods and then rigorously proven in a series of TCS papers by Achlioptas, Coja-Oghlan, Sly and others. It was not initially obvious that the methods would actually translate since some (incorrect) early uses of these methods to do curve-fitting for experimental results on related questions gave provably incorrect answers.

When we in TCS can't prove things, we make hypotheses and prove conditional results, which in some sense is a reasonable analogue. However, our claims are usually not accompanied by the kind of deep intuitions found in statistical physics (and maybe economics). We might not be able to develop the same kind of intuitions, since very few of our questions are as "smooth" as questions about instances with random inputs such as random k-SAT. (In fact, we have often enough been surprised by counter-intuitive properties: e.g., a clever simple algorithm that behaves very differently from what we expect.)

I found the post surprising. Correctness and robustness of algorithms and software is of paramount importance given the ubiquity of computers in our lives and future lives. It may be less important to worry about the precise theoretical running time of an algorithm (which we do also obsess over) but correctness is clearly important. We obsess about running times/approximation ratios/bounds in general because we need metrics to improve. Otherwise we will not have an incentive to innovate and develop new ideas. Also, without proofs we do not have a scaffolding to think. Lance may be channelling his frustration about our theorems rather than the proofs. Many of our theorems and directions of research may not be interesting to him and feel like a waste of time.

https://lance.fortnow.com/papers/files/ip.pdf

(Bill) There needs to be a place for speculative work: ideas for theorems that there is evidence for but that haven't been proven, labelled as such. Perhaps ITCS already does this. My SIGACT News open problems column might qualify. But then the question arises: how much credit should speculation get? More than it gets now? Credit in what form?

"A proof does not make a theorem true; it was always true."

But how did you know it was true?

Leaving a comment, because I got a bit of whiplash from that last paragraph.

It's true that NeurIPS et al are big on empirical benchmarks, and aren't necessarily huge sticklers for proofs.

But! When I think of "proofs" and "modern ML" intersected, I think of how LLMs are helping mathematicians prove new things. The more things we can prove are true, the better our stochastic parrots will be at priming us to prove the next true thing.

I read between the lines of this letter, and presume: "A conversation with $CHATBOT about math, will go much smoother when $CHATBOT can tool-use Lean, and over a broader set of proven results": https://unlocked.microsoft.com/ai-anthology/terence-tao/
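For what it's worth, the statements Lean checks mechanically can be tiny (a hypothetical Lean 4 snippet for illustration, not from the linked essay):

```lean
-- Two toy theorems that Lean 4 verifies mechanically; once checked,
-- they become reliable building blocks for any tool (or chatbot)
-- that can call out to Lean.
theorem add_zero_example (n : Nat) : n + 0 = n := rfl

theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The point is that each verified lemma enlarges the pool of trusted results a model can build on, rather than guess at.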

Proofs in STOC and FOCS and all are the raw ingredients that make these seems-to-work-in-practice ML systems actually work in practice.

The words "theorem" and "theoretical" have the same etymology. But "a theorem without a proof is a mere hypothesis" [cit. me] ... so you get "hypothesis computer science," which doesn't sound very good.


"Theoretical economists and physicists don't put such an emphasis on proofs" LMAO...

Of course not, that would be the end of their trade!

On Bullinger's blog post, the discussions/comments raise many interesting aspects pertaining to diversity, bias, cliques, and non-public information that is available to only a select few. I am disappointed that instead of discussing those aspects and raising awareness about them, this blog post chose to focus on an aspect that none of the comments were interested in.

A proof is what lets you know that an algorithm actually works and behaves the way you say it does, even in the worst case.

ML folks don't care about worst case or even average case. Their methods are just heuristics; they fail all the time. If that is the level of reliability you need from your system, fine, you don't even need an algorithm. Just tell an LLM your wish and with probability less than .5 you will get what you want.

Try running a plane on an LLM, or performing surgery, or making a loan decision without a human in the loop, and see what happens.

> confirm what we already believe.

Suspect, yes. But not believe. Most people wouldn't bet their life on P!=NP. And quite a number of people consider P=NP more likely.

Quite a number of people believe in QAnon conspiracies, too... Of course, we might even have both possibilities - the question might be independent.

Here's one if you don't like that one: NP doesn't have linear-size circuits.

Is it perhaps that without the proofs there is nothing of value being generated by the field? Even with the proofs what actual societal impact can be attributed to complexity theory other than material for tenure boards?

P = NP could be 100% correct.

You are drinking too much LLM hype. None of these huge LLMs can multiply 20-digit numbers.
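By contrast, exact multiplication of 20-digit integers is a one-liner for a conventional program (a minimal Python sketch; the specific numbers are just illustrative):

```python
# Exact multiplication of two 20-digit integers: trivial for a
# deterministic algorithm, since Python ints are arbitrary precision.
a = 12345678901234567890
b = 98765432109876543210
product = a * b
print(product)

# Properties any correct multiplier must satisfy, no matter the inputs:
assert product // b == a   # exact, no rounding or approximation
assert product % 10 == 0   # both factors end in 0, so the product must
```

This is the reliability gap the comment is pointing at: the algorithm's guarantee holds for every input, not just with some probability.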

They are useful for some tasks. They are far from what the hype makes people believe.

AGI is nowhere near, yet people are raising huge sums of money (think billions) as if it is just around the corner.

There seems to be some level of envy in your post regarding what ML researchers do.

And economists and physicists do prove theorems. That is often the most prestigious work. Just take a look at Nobel Prize winners in those fields. They might not be at the same level of rigor as we are in math, but they still do prove things.

In CS, the cases where proofs were available used to outnumber the cases where proofs were missing. Before the current "phase of machine learning and optimization," including huge LLMs, flipped the scales, the absence or difficulty of proof was the mystery that caught my attention (not sure whether it caught your attention too). In this current phase, the absence or difficulty of proof is not so mysterious.

The obsession with proofs had good reasons: either a proof was attainable, or else the mere fact that a proof was apparently too difficult in a specific case was an intriguing result too.

Not sure whether Lance is just trolling, but as a computer scientist who also publishes in economics, I strongly disagree with the claim that "theoretical economists [...] don't put such an emphasis on proofs". Quite to the contrary, there is more rigor in theoretical economics than in TCS. Most results in STOC/FOCS are not thoroughly checked for correctness before they are published. Some don’t even have complete proofs and the authors don't bother making a journal version. Vijay's 1980 paper is not an exception.

There is a reason why the community resisted double-blind reviewing for so long. It's easier to check the authors' names than the correctness of proofs ;-)

I agree with Lance's observation that TCS papers are often judged based on the complexity of a proof rather than the statement of the theorem itself, which is crazy.

One of my math teachers (Glenn Stevens) liked saying, "We don't prove things to establish truth."

Implicitly, he was saying that one of the most important things proofs provide is *understanding*. When we prove, we understand better, and that helps us make progress.

It is not the *only* way to make progress, and demanding proof to the exclusion of all other ways of understanding would be folly -- but proof is a very effective catalyst for progress.

If you didn't know who the author of that paper was, and the algorithm was submitted without a proof of correctness or a complexity analysis, would you accept the paper? Or are you implying that the lower bar should apply only to well-known people like him?

We can have a separate experimental-algorithms track or conference where people submit heuristic algorithms and benchmarks on canonical datasets, like the SAT competitions, and so on. It is not going to replace algorithms with correctness proofs and performance analysis; they serve different purposes.

Your post reads like a criticism of people who are criticizing the fact that Vijay essentially got away without officially retracting a paper for an issue where a retraction would be expected.

You might want to clarify your position on that, if that is not your intention.

The original paper had value as the algorithm is more important than the proof. But instead of a separate publication for the proof, it could have been the journal version of the original paper. It would be different if the proof was done by someone who wasn't an author on the first paper.

Should there have been some kind of errata for an incomplete proof? Maybe, but those were tricky, especially back in 1980, and Vazirani hardly kept the lack of a proof a secret.

Do you think the 1994 Combinatorica paper should have been retracted by the authors and/or journal? The paper asserts a proof of correctness of MV, but the proof is fundamentally flawed.

Is the algorithm actually valuable? It would be if it were known to perform well in practice (when compared to other competing algorithms) and always output a correct solution, even if it did not have the claimed worst-case guarantee on its running time. Our community does a poor job of implementing and testing algorithms even for fundamental problems. The goal is mainly to prove things. Nothing wrong with that, and we make fantastic progress, but then the proof of correctness is important.

Yes, there could have been a formal retraction or at least a note for the Combinatorica article. But I strongly disagree that "the goal is mainly to prove things." Our goal is understanding. Proofs are just a small part of that.

Thanks for the clarification.

It is a reasonable position to say the field was young and the norms were different. But we should be clear about what the expected behavior is these days, and be careful to avoid creating an impression of favoritism and bias.

I agree with you that algorithms can be useful even without formal correctness proof and without proven complexity upper bounds. We should explore what criteria we should apply to these when reviewing them.

Simplex is a good example of an algorithm that essentially everyone used in practice even though its worst case was clearly exponential, and for a very long time we didn't have much theory about why it performs so much better in practice.
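For reference, the exponential worst case comes from constructions like the Klee–Minty cube (a sketch; exact formulations vary by source): maximize \(x_n\) over the perturbed unit cube

```latex
\[
0 \le x_1 \le 1, \qquad
\varepsilon\, x_{j-1} \;\le\; x_j \;\le\; 1 - \varepsilon\, x_{j-1}
\quad (j = 2, \dots, n),
\]
```

for a small constant \(\varepsilon \in (0, 1/2)\). Simplex with Dantzig's pivot rule can visit all \(2^n\) vertices of this cube before reaching the optimum, even though the feasible region is only a slightly deformed hypercube.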

The new paper discusses errors in the Combinatorica paper in great detail. If the 1994 paper is not a poster child for retraction, what sorts of papers should be retracted? And what message does the absence of retraction send?

I think this is a good question, but an answerable one! And the answer is that, as long as there's no proof of a mathematical statement, there's apparently something crucial still to be understood about it (since otherwise why ISN'T there a proof?). Furthermore, this is just as true in physics as it is in CS! The difference is that, in physics, in principle the ultimate arbiter of truth is "can you predict what's going to happen in this experiment?" -- and if the prediction comes out right, you usually feel vindicated enough to move forward even without the full understanding that would've come from clear mathematical definitions of all your concepts and proofs of all the statements you make about them. Now, we of course also have experiments in CS, and those ARE relied on in all the non-theoretical branches of the field! But the difference is, in CS we're typically concerned with the WORST that an adversary could possibly do to break our cryptosystem, or invalidate our protocol, or make our program or algorithm fail. And here experiment is inherently less useful as a guide to truth. And that forces us to fall back more on actually understanding what's going on, i.e. proof. And when that slows us down too much, we also have our own workaround for it, which is to agree socially on a relatively small set of conjectures, and then prove theorems modulo those conjectures. :-)

We not only need proofs, but UNDERSTANDABLE proofs. In that sense the community’s “obsession” with proofs is warranted – it can take a lot of work to get to the heart of the matter. The new MOR proof needs to be judged not only on correctness but on crispness and simplicity.
