Comments on Computational Complexity: "Do We Need to Formalize?" (blog by Lance Fortnow, http://www.blogger.com/profile/06752030912874378610)

---

Anonymous (2023-12-17):

Aside from fields like algebraic geometry, homology, and homotopy becoming too complicated for humans, there is also interest in verified software for critical parts of important systems.

---

Josh Burdick (https://www.blogger.com/profile/12231348292069164630) (2023-12-11):

Many proof checkers can "extract" runnable code. (It looks like Coq and Isabelle can extract OCaml and Haskell, and Lean is at least partway to extracting programs via C.) I will be curious to see whether this gets more widely used in software engineering (compared to, e.g., unit tests and "fuzzing").

Even if you don't have a full specification, being able to say that a slow (but easy to understand) program always returns the same thing as a fast (but tricky to understand) program seems like a specification many people would find useful. Proving this can be tricky, which leads to the <a href="https://martinsteffen.github.io/compilerconstruction/fullemployment/" rel="nofollow">full employment "theorem" for compiler writers</a>.

I'm sure that if this does become popular, there will still be ways programs can be buggy. (But they'll be new and different ways!
Progress!)

---

Evangelos Georgiadis (2023-12-08):

@Paul Beame, regarding Voevodsky's reasons for formalizing mathematical proofs using Coq: that is a fairly accurate, succinct, and diplomatic statement. To add a little color, with an \epsilon amount of precision, on why Voevodsky warmed up to automated proof assistants:

Voevodsky was actually fairly confident of his proofs (i.e., he intrinsically trusted them) until he gave a series of lectures in which Pierre Deligne was actively engaged as a 'proof-check' participant rather than a mere spectator.

The unnoticed mistake, which persisted and propagated in the works of other mathematicians for an extended period before detection, was Proposition 4.23 of the paper "Cohomological Theory of Presheaves with Transfers" [source: https://www.math.ias.edu/Voevodsky/files/files-original/Dropbox/Published_papers/Motives/Collection/s3.pdf], later weakened to Lemma 22.10 in lecture 22, "Zariski sheaves with transfers", of "Lecture Notes on Motivic Cohomology" [source: www.ams.org/bookpages/cmim-2]. It prompted Voevodsky to reevaluate his own stance.

This experience made Voevodsky more critical and warmed him up to the idea of using computer languages to formalize mathematical statements, so as to represent mathematical structures and proofs in a way that lets mathematical knowledge be archived and logically validated in some fixed format. He pioneered this work in Coq rather than Lean.
(A historical footnote for people not familiar with proof checkers: Vladimir's efforts stretched from February 2010 to December 2013, whereas Lean was first released for research purposes in 2013 by Microsoft Research.)

The incident underscores a common challenge: confidence in correctness until an unforeseen event exposes a mistake. This dilemma extends to contemporary proof checkers like Lean and its many predecessors. It also raises a more pertinent question: what exactly makes us believe that these proof checkers are bug-free and can be relied upon in an almost trustless manner? (NB: similar concerns, with a different nuance, were already raised in the 1980s, when the National Security Agency began to believe that European work in formal methods was ahead of the US ... the NSA was more interested in understanding how adversaries could maliciously tinker with these systems.)

This reassessment is crucial when entrusting formalized code to ostensibly trustless, open-source proof assistants. As we delve into the intricate dance between trust and verification, the essay "Reflections on Trusting 'Trustlessness' in the era of 'Crypto'/Blockchains" provides valuable insights into the multifaceted nature of trust within the realm of open-source solutions.

---

Paul Beame (https://www.cs.washington.edu/people/faculty/beame) (2023-12-06):

Your post conflates two things: formalizing proofs to allow checking vs. the problem of proof search. These are fundamentally different complexity tasks. We shouldn't expect new proofs from this process, just indications of where we might need to add to our proofs. At heart, proof assistants like Lean are just in the proof-checking business.
(Unlike complete SAT solvers, which are in the proof-finding business.)

Your confidence in our proofs seems a bit higher than warranted: what portion of the proofs in a typical theory conference have significant bugs or gaps that the authors did not consider? It is likely considerable. Would it be worthwhile to eliminate the time others spend building on buggy or incorrect arguments? Would it be worthwhile to resolve the <a href="https://rebeccaleamorris.com/2021/04/23/its-not-as-easy-as-abc-what-happens-now-a-controversial-proof-has-been-published/" rel="nofollow">controversy about the claimed proof of the ABC Conjecture</a> that has gone on for a decade already?

One of the reasons Voevodsky was so keen on formalizing mathematical proofs using Coq was that, in the abstract domain of algebraic geometry he was working in, he felt he no longer had the same level of confidence in the correctness of his own proofs.

I agree that NuPrl and Boyer-Moore were very far from fun to work with, and the interaction of "tactics" was a bit of a nightmare, so your basic sense from your experience is understandable. Somehow computer scientists who want people to trust their software - or want to trust their own software - have sufficient motivation, and I believe that things are vastly better these days, though that still doesn't avoid the need to specify tactics.

---

Yannick Forster (https://yforster.de) (2023-12-06):

There is a lengthy discussion about formalisation of complexity for Lean, but first (impressive) steps have been made in other proof assistants: for instance in Coq already in 2021 (https://drops.dagstuhl.de/opus/volltexte/2021/13915/pdf/LIPIcs-ITP-2021-20.pdf) and in Isabelle just recently (no paper available yet).
What makes formalising complexity much harder is that there is much more invisible mathematics present than in other kinds of maths. Since in proof assistants one has to spell out both the visible and the invisible mathematics, complexity proofs have a very high overhead factor when going from paper to proof assistant.

---

Stefan Grosser (https://www.blogger.com/profile/05077025860797995882) (2023-12-06):

While it is a different meaning of "formalization", formalizing complexity-theoretic results in bounded arithmetic (subtheories of Peano Arithmetic corresponding to efficient reasoning) is very important to computational/proof complexity! It is important to us that we formalize our results in bounded arithmetic theories, as these can correspond to algorithms (via witnessing theorems of Sam Buss and others) and to proof complexity upper/lower bounds via propositional translations.

I imagine Lean and other proof assistants would be able to take a formally verified proof and then indicate a logical theory in which said proof could be formalized. That would be an incredibly helpful tool.

---

dom (https://www.blogger.com/profile/05790539025733385232) (2023-12-06):

I don't think that we are a long way from unassisted formalization. Probably the next step will be some kind of more human-friendly, interactive interface that can ask you questions while it is processing the paper but does all the formalization in the background. I think these systems are within reach. Once this works well and has enough samples, it can become unassisted in formalizing.
And even after that, it will be able to come up with the proofs as well.

---

Josh Grochow (https://home.cs.colorado.edu/~jgrochow/) (2023-12-06):

I think some killer apps are more like things that we nowadays take for granted when coding. A good IDE can find all invocations of a method for you and makes refactoring somewhat easier. Imagine if you could refactor your proofs just as easily.

As a simple example: suppose you prove some theorem about fields, and I need the same result for commutative rings. Right now the process is: I have to go through your proof line by line and see if everything still holds for commutative rings. But in a proof assistant, this should literally just become an act of refactoring; where the refactoring fails can tell you where you might need to prove a new lemma, or maybe it succeeds and then you're good to go.

And *search*. As more math becomes formalized, it becomes easier to search based not on keywords but on actual structure/content. Maybe someone proved a combinatorial lemma about something they call widgets, but it's secretly the same as what you call gizmos. If you search for "gizmo" (on Google, MathSciNet, DBLP, etc.) you'd never find it, but if you can search based on the axioms and actual structure of the mathematics, you could.
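[Editor's note: the fields-vs-commutative-rings refactoring above can be made concrete with a minimal Lean 4 sketch. This assumes mathlib's `CommRing` and `Field` typeclasses; the lemma and its name are illustrative, not taken from mathlib.]

```lean
import Mathlib.Tactic

-- A lemma stated once, for an arbitrary commutative ring.
-- (Illustrative name; any identity provable by `ring` works here.)
theorem sq_diff_factor {R : Type*} [CommRing R] (a b : R) :
    a ^ 2 - b ^ 2 = (a + b) * (a - b) := by
  ring

-- Every field is a commutative ring, so the same lemma applies to
-- fields with no refactoring at all: Lean's typeclass resolution
-- finds the `CommRing` instance sitting behind `Field` automatically.
example {F : Type*} [Field F] (a b : F) :
    a ^ 2 - b ^ 2 = (a + b) * (a - b) :=
  sq_diff_factor a b
```

Going the other direction, from fields down to rings, is where the refactoring can fail: a proof that silently used division would leave unclosed goals under `CommRing`, and those failing goals point to exactly the lemmas that need re-proving, just as the comment describes.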