tag:blogger.com,1999:blog-3722233Tue, 21 May 2024 13:19:44 +0000typecastfocs metacommentsComputational ComplexityComputational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarchhttps://blog.computationalcomplexity.org/noreply@blogger.com (Lance Fortnow)Blogger3103125tag:blogger.com,1999:blog-3722233.post-8938799245865772881Mon, 20 May 2024 01:13:00 +00002024-05-19T22:06:36.086-05:00I don't do that well when the Jeopardy category is MathBill and Darling are watching Jeopardy.<div><br /></div><div>DARLING: Bill, one of the categories is MATH TALK. You will kick butt!</div><div><br /></div><div>BILL: Not clear. I doubt they will have <i>the least number n such that R(n) is not known</i>. They will ask things easy enough so that my math knowledge won't help.</div><div><br /></div><div>DARLING: But you can answer faster.</div><div><br />BILL: Not clear. </div><div>--------------------------------------------</div><div>Recall that in Jeopardy they give the answers and you come up with the question.</div><div>Like Sheldon Cooper <i>I prefer my questions in the form of a question. </i></div><div>Even so, I will present the answers that were given on the show (that sounds funny), then </div><div>I will provide the questions (that sounds funny), what happened, and what I would have gotten right. </div><div><br /></div><div>$400</div><div>ANSWER: It's a demonstrably true mathematical statement; Calculus has a ``Fundamental'' one.</div><div>QUESTION: What is a Theorem?</div><div>WHAT HAPPENED: Someone buzzed in and said AXIOM. This one I knew and would have won!</div><div><br /></div><div>$800</div><div>ANSWER: Fire up the engines of your mind and name this solid figure with equal and parallel circles at either end. </div><div>QUESTION: What is a Cylinder?</div><div>WHAT HAPPENED: Someone buzzed in with the correct answer. I had a hard time parsing this one and only got it right in hindsight. This one I would have lost on. 
Note that the phrase <i>Fire up your engines</i> is supposed to make you think of <i>firing on all cylinders</i>. This did not help me.</div><div><br /></div><div>$1200</div><div>ANSWER: Multiply the numerator of one fraction by the denominator of another (and vice versa) to get the ``cross'' this. </div><div>QUESTION: What is a Product?</div><div>WHAT HAPPENED: I got this one very fast. So did the contestant on the real show. Not clear what would have happened if I had been there.</div><div><br /></div><div>$1600</div><div>ANSWER: See if you can pick off this term for the point at which a line or curve crosses an axis. </div><div>QUESTION: What is an Intercept?</div><div>WHAT HAPPENED: Someone buzzed in with the correct answer. I really didn't know what they were getting at. Even in hindsight the answer does not seem right, though I am sure that it is. The phrase <i>pick off this term</i> is supposed to remind me of something, but it didn't. Lance happened to read a draft of this post and did the obvious thing: he asked ChatGPT about it. ChatGPT said that in football a <i>pick off</i> is an <i>interception</i>. To see the ChatGPT transcript see <a href="https://chatgpt.com/share/499fee35-36d5-4ba8-9a88-fd9cb4f82b4a?oai-dm=1">here</a>.</div><div><br /></div><div>$2000</div><div>ANSWER: In 19-5=14, 19 is the minuend; 5 is this other ``end''</div><div>QUESTION: What is a Subtrahend?</div><div>WHAT HAPPENED: Someone buzzed in with the correct answer. The answer was news to me. It is correct; however, I am not embarrassed to say I had never heard these terms. Spellcheck thinks that <i>minuend</i> and <i>subtrahend</i> are words. This is similar to when I was not smarter than a fifth grader (see blog post <a href="https://blog.computationalcomplexity.org/2013/05/are-you-smarter-than-fifth-grader-im-not.html">here</a>). 
</div><div><br /></div><div>----------------------------------------------------------------</div><div>So the final tally:</div><div>The $400 question I would have gotten right.</div><div>The $1200 question I might have gotten right if I was fast on the buzzer.</div><div><br /></div><div>But that's it. Why did I do so badly? </div><div>1) Two of the ones I got wrong were phrased in funny ways. I thought so anyway. And note that they did not use advanced math knowledge, so my math knowledge didn't help. (This is not a complaint- it would be bad if they used advanced math knowledge. Like when a crossword puzzle my wife was working on wanted <i>Log-Man</i> and it began with N and I knew <i>Napier</i>. Why was that in a crossword puzzle for laypeople? Because <i>Napier</i> has a lot of vowels in it.) <br /></div><div><br /></div><div>2) For one of them I really did not have the math knowledge. Is it arrogant to say that if there is a math question on Jeopardy where I don't know the answer then it's a bad question? I leave that as an exercise for the reader. </div><div><br /></div><div>On questions about presidents, vice presidents, or American history, I do well.</div><div><br /></div><div>On questions about novelty songs (they sometimes come up) I do very well. (One question was about this song <a href="https://www.youtube.com/watch?v=BvUZijEuNDQ">here</a>. The question: <a href="https://www.youtube.com/watch?v=GxpH6BX4YpU">here</a>.) <br /></div><div><br /></div><div>But math... not so much. </div><div><br /></div><div>For computer science questions I also do not do that well, but I've learned some common abbreviations that I did not know: </div><div><br /></div><div>BIT: Binary Integer (A reader named Anonymous, who makes many comments, pointed out that BIT is actually Binary Digit. I have a possibly false memory of Jeopardy telling me Binary Integer. Either my memory is wrong or Jeopardy is wrong. But Anonymous is right- it's Binary Digit.) 
</div><div><br /></div><div>HTTP: Hypertext Transfer Protocol</div><div><br /></div><div>HTML: Hyper Text Markup Language</div><div><br /></div><div>FORTRAN: Formula Translation</div><div><br /></div><div>Those were more interesting than learning about minuend and subtrahend, terms I had never heard before and won't hear again unless I catch a rerun of Jeopardy (at which time I will get it right).</div><div><br /></div><div><br /></div><div><br /></div><div><br /></div>https://blog.computationalcomplexity.org/2024/05/i-dont-do-that-well-when-jeopardy.htmlnoreply@blogger.com (gasarch)3tag:blogger.com,1999:blog-3722233.post-5789051896891651132Wed, 15 May 2024 14:00:00 +00002024-05-15T09:00:08.563-05:00Jim Simons (1938-2024)<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhG6mfnrhlSyf8dpuMhZz8CX-YmgKJGoAxLDPmGdO4rBSw-E_yXh9Fd0oR4j7wM9YyPRZALHoNK_4Zz-DV1Qf1shpFUuEp0FVn1g5WZMWVNUOonhCUsrR-ZXKne6DfcXQEC6X7R7em6pGJxSrvgdOCbxBcycTS3pnHoPfTyJArRGIRUYxEH82G8/s1278/8SR.jpeg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="1278" data-original-width="1278" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhG6mfnrhlSyf8dpuMhZz8CX-YmgKJGoAxLDPmGdO4rBSw-E_yXh9Fd0oR4j7wM9YyPRZALHoNK_4Zz-DV1Qf1shpFUuEp0FVn1g5WZMWVNUOonhCUsrR-ZXKne6DfcXQEC6X7R7em6pGJxSrvgdOCbxBcycTS3pnHoPfTyJArRGIRUYxEH82G8/s320/8SR.jpeg" width="320" /></a></div>Jim Simons <a href="https://www.nytimes.com/2024/05/10/business/dealbook/jim-simons-dead.html?unlocked_article_code=1.rU0.y9EO.Gh7AqpX_ylPB&smid=url-share">passed away</a> Friday at the age of 86. 
In short, he was a math professor who quit to use math to make money before it was fashionable, and he used part of his immense wealth to start the <a href="https://www.simonsfoundation.org/">Simons Foundation</a> to advance research in mathematics and the basic sciences.<p></p><p>While his academic research focused on manifolds, Simons and his foundation made theoretical computer science one of their priorities and helped fund and promote our field on several fronts.</p><p>Foremost of course is the <a href="https://simons.berkeley.edu">Simons Institute</a>, a center for collaborative research in theoretical computer science. It was <a href="https://blog.computationalcomplexity.org/2010/08/new-institute-for-theory-of-computing.html">announced as a competition</a> in 2010 (I was on team Chicago), with the foundation eventually landing on UC Berkeley's campus. At the time, I <a href="https://blog.computationalcomplexity.org/2012/05/berkeley-wins-simons.html">wrote</a> "this will be a game changer for CS theory," which has, if anything, proven to be an understatement over the last dozen years.</p><p>Beyond the institute, the Simons Foundation has funded a <a href="https://www.simonsfoundation.org/grant/simons-investigators/?tab=awardees&filter_disciplines=theoretical-computer-science">number of theorists</a> through its investigator and other programs.</p><p>Let's not forget <a href="https://www.quantamagazine.org/">Quanta Magazine</a>, an online science publication funded by the foundation without subscriptions or paywalls, even as <a href="https://www.sciencefriday.com/segments/science-journalism-trust-in-science/">science journalism has been seeing cuts elsewhere</a>. 
Quanta has been <a href="https://www.quantamagazine.org/tag/computational-complexity">particularly friendly</a> to the computational complexity community, as with this <a href="https://www.quantamagazine.org/the-researcher-who-explores-computation-by-conjuring-new-worlds-20240327/">recent article</a> on Russell Impagliazzo and his worlds.</p><p>The Simons Foundation will continue strong even without its founder. But as we see <a href="https://cra.org/govaffairs/blog/2024/03/cra-statement-concern-fy24-budgets/">challenges in government funding</a>, how much can or should we count on <a href="https://blog.computationalcomplexity.org/2013/11/the-new-patrons.html">wealthy patrons</a> to support our field?</p><p>Read more on Jim Simons from <a href="https://scottaaronson.blog/?p=7996">Scott</a>, <a href="https://rjlipton.com/2024/05/11/just-jim/">Dick</a>, the <a href="https://www.simonsfoundation.org/2024/05/10/simons-foundation-co-founder-mathematician-and-investor-jim-simons-dies-at-86">foundation</a> and the <a href="https://simons.berkeley.edu/news/remembering-jim-simons-1938-2024">institute</a>.</p>https://blog.computationalcomplexity.org/2024/05/jim-simons-1938-2024.htmlnoreply@blogger.com (Lance Fortnow)5tag:blogger.com,1999:blog-3722233.post-4992994049965930699Sat, 11 May 2024 18:38:00 +00002024-05-12T00:17:56.377-05:00What is Closed Form? The Horse Numbers are an illustration<p>In the book <i>Those Fascinating Numbers</i>, Jean-Marie De Koninck finds interesting (or `interesting') things to say about many numbers. I reviewed the book in a SIGACT News book review column <a href="https://www.cs.umd.edu/~gasarch/bookrev/42-2.pdf">here</a>. The entry for 13 is odd: 13 is the third <i>Horse Number</i>. The nth Horse Number is the number of ways n horses can finish a race. You might think: OH, that's just n!. AH- horses can tie. So it's the number of ways to order n objects allowing ties. </p><p>Is there a closed form for H(n)? We will come back to that later. 
</p><p>0) The Wikipedia entry on horse races that ended in a dead heat is <a href="https://en.wikipedia.org/wiki/List_of_dead_heat_horse_races">here</a>. They list 78 dead heats (two horses tie for first place) and 10 triple dead heats (three horses tie for first place). For the Horse Numbers we care if (say) two horses tie for 4th place. In reality nobody cares about that. </p><p>1) I have found no other source where these numbers are called <i>The Horse Numbers</i>. </p><p>2) They are called the <i>Ordered Bell Numbers</i>. The Wikipedia entry <a href="https://en.wikipedia.org/wiki/Ordered_Bell_number">here</a> has some applications.</p><p>3) They are also called the <i>Fubini Numbers</i>, according to the Ordered Bell Number Wikipedia page.</p><p>4) I had not thought about the Horse Numbers for a long time when they came up while I was making slides for the proof that (Q,<) is decidable (the slides are <a href="https://www.cs.umd.edu/~gasarch/COURSES/452/S24/slides/qlesstalk.pdf">here</a>).</p><p>5) There is an OEIS page for the Horse Numbers, though they are called the Ordered Bell Numbers and the Fubini Numbers. It is <a href="https://oeis.org/A000670">here</a>. That page says H(n) is asymptotically \(\frac{1}{2}n!(\log_2(e))^{n+1}\), which is approximately \(\frac{1}{2}n!(1.44)^{n+1}\). <br /></p><p>6) There is a recurrence for the Horse Numbers:</p><p>H(0)=1</p><p>H(1)=1</p><p>H(2)=3</p><p>For all \(n\ge 3\) we split H(n) according to the number i of horses tied for last place: choose which i of the n horses they are (\(\binom{n}{i}\) ways) and order the remaining horses in H(n-i) ways. Hence</p><p>\( H(n) = \binom{n}{1}H(n-1) + \binom{n}{2}H(n-2) + \cdots + \binom{n}{n}H(0) \)</p><p>Using \(\binom{n}{i} = \binom{n}{n-i}\) we get</p><p>\( H(n) = \binom{n}{0}H(0) + \binom{n}{1}H(1) + \cdots + \binom{n}{n-1}H(n-1) \)</p><p>STUDENT: Is there a closed form for H(n)?</p><p>BILL: Yes. 
It's H(n).</p><p>STUDENT: That's not closed form.</p><p>BILL: Is there a closed form for the number of ways to choose i items out of n?</p><p>STUDENT: Yes, \(\binom{n}{i}\) or \( \frac{n!}{i!(n-i)!}\) </p><p>BILL: Does that let you compute it easily? No. The way you compute \(\binom{n}{i}\) is with a recurrence. The way you compute H(n) is with a recurrence. Just having a nice notation for something does not mean you have a closed form for it. </p><p>STUDENT: I disagree! We know what n! is!</p><p>BILL: Do not be seduced by the familiarity of the notation. </p><p><br /></p><p><br /></p>https://blog.computationalcomplexity.org/2024/05/what-is-closed-form-horse-numbers-are.htmlnoreply@blogger.com (gasarch)23tag:blogger.com,1999:blog-3722233.post-3147753408711957519Wed, 08 May 2024 19:21:00 +00002024-05-08T14:21:21.799-05:00Favorite Theorems: Dichotomy<p>A constraint satisfaction problem has a group of constraints applied to a set of variables, and we want to know if there is a setting of the variables that makes all the constraints true. In CNF-Satisfiability the variables are Boolean and the constraints are ORs of variables and their negations. In graph coloring, the variables are the colors of the nodes and the constraints, one for each edge, say that the two endpoints must get different colors. These problems lie in NP: just guess the values of the variables and check the constraints. They are often NP-complete. They are sometimes in P, like 2-coloring graphs. 
But they are never in between--all such problems are either in P or NP-complete.</p><p></p><div style="text-align: center;"><a href="https://doi.org/10.1109/FOCS.2017.37">A Dichotomy Theorem for Nonuniform CSPs</a> by Andrei Bulatov</div><div style="text-align: center;"><a href="https://doi.org/10.1145/3402029">A Proof of the CSP Dichotomy Conjecture</a> by Dmitriy Zhuk</div><div style="text-align: left;"><br /></div><div style="text-align: left;"><a href="https://blog.computationalcomplexity.org/2005/09/favorite-theorems-np-incomplete-sets.html">Ladner's Theorem</a> states that if P \(\neq\) NP then there exists a set in NP that is not in P and not NP-complete. Ladner's proof works by blowing holes in Satisfiability, an unsatisfying construction as it gives us a set that is NP-complete on some input lengths and easy on others. One could hope that some version of a constraint satisfaction problem could lead to a more natural intermediate set but dichotomy theorems tell us we need to look elsewhere.</div><div style="text-align: left;"><br /></div><div style="text-align: left;">In 1978, Thomas Schaefer gave a <a href="https://doi.org/10.1145/800133.804350">dichotomy theorem for satisfiability problems</a>, basically CSP problems over Boolean variables. In 1990, Pavol Hell and Jaroslav Nešetřil <a href="https://doi.org/10.1016/0095-8956(90)90132-J">showed</a> a dichotomy result for homomorphisms of undirected graphs as described in <a href="https://blog.computationalcomplexity.org/2017/02/the-dichotomy-conjecture.html">my 2017 blog post</a>. In 1998 Tomás Feder and Moshe Vardi <a href="https://doi.org/10.1137/S0097539794266766">formalized</a> the constraint satisfaction dichotomy conjecture and expressed it as homomorphisms of directed graphs. The blog post described a claimed but later retracted solution to the dichotomy conjecture. Bulatov and Zhuk announced independent and different correct proofs later that year. 
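To make the CSP framework above concrete, here is a small brute-force sketch in Python (my own illustration, not from any of the papers discussed; the function name and encoding are invented for this post). It encodes graph 2-coloring as a CSP and searches all assignments, directly reflecting the guess-and-check argument for membership in NP; the exhaustive search is of course exponential in general.

```python
from itertools import product

def brute_force_csp(num_vars, domain, constraints):
    """Try every assignment of domain values to the variables.

    constraints is a list of (vars, predicate) pairs; the predicate gets the
    values of those variables and returns True if the constraint holds.
    Returns a satisfying assignment (as a tuple) or None.
    """
    for assignment in product(domain, repeat=num_vars):
        if all(pred(*(assignment[v] for v in vs)) for vs, pred in constraints):
            return assignment
    return None

# Graph 2-coloring: one "endpoints differ" constraint per edge.
differ = lambda a, b: a != b
triangle = [((u, v), differ) for u, v in [(0, 1), (1, 2), (0, 2)]]
path = [((u, v), differ) for u, v in [(0, 1), (1, 2)]]

print(brute_force_csp(3, [0, 1], triangle))  # None: an odd cycle is not 2-colorable
print(brute_force_csp(3, [0, 1], path))      # (0, 1, 0)
```

The dichotomy theorems discussed here say that for any fixed constraint language, deciding such instances is either in P (as with 2-coloring) or NP-complete (as with 3-coloring); no constraint language gives a problem of intermediate complexity.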
In 2020 Zhuk received the <a href="https://eatcs.org/index.php/component/content/article/1-news/2859-presburger-award-2020-laudatio-for-dmitriy-zhuk-">Presburger Award</a> for his paper (Bulatov was too senior for the award). </div>https://blog.computationalcomplexity.org/2024/05/favorite-theorems-dichotomy.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-1577021321253064150Mon, 06 May 2024 02:19:00 +00002024-05-06T10:57:36.371-05:00May the fourth be with you. Too many -days? <p>(This post was inspired by Rachel F, a prior REU-CAAR student, emailing me wishing me a happy Star Wars Day.) <br /></p><p> I am writing this on May 4 which is Star Wars day. Off the top of my head I know of the following special days, in calendar order (I exclude official holidays, though the term official has no official meaning.) <br /></p><p>Jan 25: Opposite Day <a href="https://en.wikipedia.org/wiki/Opposite_Day">Wikipedia Link</a></p><p>Feb 2: Groundhog Day <a href="https://en.wikipedia.org/wiki/Groundhog_Day">Wikipedia Link</a></p><p>Feb 12: Darwin Day <a href="https://en.wikipedia.org/wiki/Darwin_Day">Wikipedia Link</a></p><p>March 14: Pi Day <a href="https://en.wikipedia.org/wiki/Pi_Day">Wikipedia Link</a></p><p>April 22: Earth Day <a href="https://en.wikipedia.org/wiki/Earth_Day">Wikipedia Link</a></p><p>April 25: Take your Child to Work Day <a href="https://en.wikipedia.org/wiki/Take_Our_Daughters_and_Sons_to_Work_Day">Wikipedia Link</a></p><p>May 4: Star Wars Day <a href="https://en.wikipedia.org/wiki/Star_Wars_Day">Wikipedia Link</a></p><p>Sept 19: Talk like a Pirate Day <a href="https://en.wikipedia.org/wiki/International_Talk_Like_a_Pirate_Day">Wikipedia Link</a></p><p>Sept 21: National Cleanup Day <a href="https://en.wikipedia.org/wiki/National_CleanUp_Day">Wikipedia Link</a></p><p>Sept 22: Hobbit Day <a href="https://en.wikipedia.org/wiki/Hobbit_Day">Wikipedia Link</a></p><p>Oct 1: International Coffee Day <a href="https://en.wikipedia.org/wiki/International_Coffee_Day">Wikipedia Link</a></p><p>Oct 8: Ada Lovelace Day <a href="https://en.wikipedia.org/wiki/Ada_Lovelace_Day">Wikipedia Link</a></p><p>Oct 16: Boss's Day <a href="https://en.wikipedia.org/wiki/Boss%27s_Day#:~:text=Boss's%20Day%20(also%20written%20Bosses,and%20fair%20throughout%20the%20year.">Wikipedia Link</a><br /></p><p>Oct 23: Mole Day <a href="https://en.wikipedia.org/wiki/Mole_Day">Wikipedia Link</a></p><p>Nov 13: Sadie Hawkins Day <a href="https://en.wikipedia.org/wiki/Sadie_Hawkins_Day">Wikipedia Link</a></p><p>A few notes</p><p>1a) Oct 23 is also Weird Al's birthday.</p><p>1b) May 4 is also Edward Nelson's birthday (he invented the problem of finding the chromatic number of the plane). See my post (actually a guest post by Alexander Soifer) on the problem <a href="https://blog.computationalcomplexity.org/2018/07/the-breakthrough-on-chromatic-number-of.html">here</a> for more information on that.</p><p>1c) I left off St. Patrick's Day (March 17) and International LGBT+ Pride Day (June 28) and many others. Those left off are so well known that they are official, whereas I was looking for unofficial holidays. But see the next point. <br /></p><p>2) The Wikipedia entry for <i>Talk Like a Pirate Day</i> says it's a <i>parodic</i> holiday. The entries on the other holidays use terms like <i>unofficial</i>. I prefer <i>unofficial</i> since ALL holidays are made up, so the only real question is which ones are recognized. But even that is problematic since one can ask <i>recognized by whom?</i> Also, despite collecting parody music and videos for the last 50 years, I have never heard the term <i>parodic</i>. Therefore it is not a word. Spellcheck agrees!<br /></p><p>3) Darwin Day should be Darwin-Lincoln Day since they were both born on Feb 12. In fact, they were both born in 1809. Most famous same-birthday-and-year pair ever. Second place is Lenny Bruce and Margaret Thatcher (Oct 13, 1925). 
</p><p>4) The page on Pi Day mentions Tau Day, but Tau Day has no page of its own. Tau is \(2\pi\), which some say comes up more often than \(\pi\) and hence should be THE constant. Some say that \(2\pi i\) comes up so often that it should be THE constant. However, there can't really be a day to celebrate it. (I blogged about is-tau-better-than-pi <a href="https://blog.computationalcomplexity.org/2007/08/is-pi-defined-in-best-way.html">here</a>.)<br /></p><p>5) In the future every day will be some kind of day. The Future Is NOW: <a href="https://www.timeanddate.com/holidays/fun/">Website of Fun Holidays</a></p><p>Are the holidays on the list real? Depends what you mean by <i>real</i>. Because of the web anyone can post a list of anything, and it's just one person's opinion. I do not know who controls that website, but even if I did, it would be hard to say YES THOSE ARE REAL or NO THOSE ARE NOT. </p><p>One could say that to be a real DAY, it has to be on Wikipedia. But there are two problems with this:</p><p>a) <i>Goodhart's law</i>: when a measure becomes a target, it ceases to be a good measure. If I want Jan 15 to be <i>Bagel and Lox Day</i>, I'll make a page for it.</p><p>b) I'm still waiting for Raegan Revord, who has played Missy on Young Sheldon <i>for 7 years</i>, to get a Wikipedia page. So what hope does <i>Polar Bear Plunge Day</i> (Jan 1) have of getting a Wikipedia page? 
</p><p><br /></p><p><br /></p>https://blog.computationalcomplexity.org/2024/05/may-fourth-be-with-you-to-many-days.htmlnoreply@blogger.com (gasarch)4tag:blogger.com,1999:blog-3722233.post-2456039078989668567Wed, 01 May 2024 20:00:00 +00002024-05-01T15:00:52.042-05:00Our Obsession with Proofs<p><a href="https://blog.computationalcomplexity.org/2024/04/is-persistence-anachronism.html">Bullinger's post</a> on this blog last week focused on Vijay Vazirani's public obsession with finding a proof for the <a href="https://doi.org/10.1109/SFCS.1980.12">1980 Micali-Vazirani matching algorithm</a>. But why does Vijay, and theoretical computer science in general, obsess over proofs? </p><p>You can't submit a paper to a theory conference without a claimed complete proof, often contained in an appendix dozens of pages long. Often we judge papers more on the complexity of the proof than on the statement of the theorem itself, even though for a given theorem a simpler proof is always better.</p><p>A proof does not make a theorem true; it was always true. The Micali-Vazirani algorithm is no faster with the new proof. Would we have been better off if the algorithm hadn't been published before there was a full proof?</p><p>We're <i>theoretical </i>computer scientists--doesn't that mean we need proofs? Theoretical economists and physicists don't put such an emphasis on proofs; they focus on models and on theorems that justify them.</p><p>Once a senior CS theorist told economists that his group had given the first full proof of a famous economics theorem and wondered why the economists didn't care. The economists said they already knew the theorem was true, so the proof added little to their knowledge base.</p><p>More than one journalist has asked me about the importance of a proof that P \(\ne\) NP. A proof that P = NP would be surprising and would hopefully give an algorithm. 
While a proof that P \(\ne\) NP would be incredibly interesting and solve a major mathematical challenge, it wouldn't do much more than confirm what we already believe.</p><p>I'm not anti-proof; it is useful to be absolutely sure that a theorem is true. But does focusing on proofs hold our field back from giving intuitively correct algorithms and theorems? Is working out the gory details of a lengthy proof, which no one will ever read, the best use of anyone's time? </p><p>As computing enters a phase of machine learning and optimization where we have little formal proof of why these models and algorithms work as well as they do, does our continued focus on proofs make our field even less relevant to the computing world today?</p>https://blog.computationalcomplexity.org/2024/05/our-obsession-with-proofs.htmlnoreply@blogger.com (Lance Fortnow)33tag:blogger.com,1999:blog-3722233.post-8421989012948948278Sun, 28 Apr 2024 23:15:00 +00002024-04-28T18:15:36.386-05:00Math Thoughts Inspired by the TV show Succession<p>I watched Succession one episode a day on the treadmill for 39 days. I'm glad I did this in 2023 since Season 2 aired its last show on Oct 19, 2019, and Season 3 had its first show on Oct 17, 2021, so I would have been in suspense (or at least as much suspense as corporate board meetings can generate) for about 2 years. </p><p>The show inspired the following mathematical thoughts.</p><p>1) There was a scene which I paraphrase as follows:</p><p><b>Alice</b>: I'll give you two billion dollars for your shares in the company.</p><p><b>Bob</b>: Don't insult me. It's worth at least 2.5 billion. </p><p>My thought: I would take the two billion. </p><p>My Math Thought: Let's say the shares really were worth 2.5 billion. Is it worth haggling? I think not, since I can't imagine I will ever spend that much money in my life. (See the blog post <a href="https://blog.computationalcomplexity.org/2021/07/would-you-take-this-bet-part-2.html">here</a> on the St. 
Petersburg paradox, which is about how much money is enough.) To be fair, if I wanted to buy Tesla (note that I mean buying <i>the company</i>, not buying <i>the car</i>) or buy the <i>Comedy Cable Station</i> (I would run it so that it only airs funny commercials) then I might need the extra half a billion to even begin trying (Tesla is worth around 740 billion). So the question of <b>is it worth the haggling</b> depends on your situation. <br /></p><p>So let's assume you don't need the money for some large purchase. Would you haggle? One reason to haggle is that you don't want the person who is ripping you off by only offering 2 billion to get away with it and/or think you are a chump. Whether one cares about this might depend on the relationship you have with that person. Even so, I would just take the 2 billion. Perhaps that's why I am in academia instead of business.</p><p>MATH QUESTION: Can we quantify what amount of money it's not worth haggling over? <br /></p><p>2) Alice wants to buy Bob's company. After months of negotiations they agree to x dollars (and there are many side issues as well). The next day: </p><p>Alice thinks: OH, if he was willing to sell for x, he would be willing to sell for x-1. </p><p>Bob thinks: OH, if she was willing to buy for x, she would be willing to buy for x+1.</p><p>(In the show this scenario happened many times, but usually with only one party wanting to re-negotiate, and it's not just +1 and -1; it's things like seats-on-the-board and who-will-be-CEO.) </p><p>MATH QUESTION: If Alice and Bob behave as above, is there any mechanism to make them actually come to an agreement? 
This might involve assuming they can't factor fast or can't solve NP-complete problems fast.</p>https://blog.computationalcomplexity.org/2024/04/math-thoughts-inspired-by-tv-show.htmlnoreply@blogger.com (gasarch)3tag:blogger.com,1999:blog-3722233.post-4354975669070450545Wed, 24 Apr 2024 19:15:00 +00002024-04-24T14:15:37.471-05:00Is Persistence an Anachronism?<p><i>Guest post by <a href="https://www.cs.ox.ac.uk/people/martin.bullinger/">Martin Bullinger</a></i></p>Very recently, <a href="https://ics.uci.edu/~vazirani/">Vijay Vazirani</a>'s paper <a href="https://ics.uci.edu/~vazirani/Matching_paper.pdf">A Theory of Alternating Paths and Blossoms, from the Perspective of Minimum Length</a> was accepted to Mathematics of Operations Research. For the first time, it gives a complete and correct proof that the Micali-Vazirani algorithm finds a maximum cardinality matching in time \(\mathcal O\left(m\sqrt{n}\right)\). I would like to give an account of the extraordinary story of this proof and how Vazirani's contribution inspires persistence.<p>My fascination with matching already started during my undergrad when I gave a talk on <a href="https://en.wikipedia.org/wiki/Blossom_algorithm">Edmonds' blossom algorithm</a>. It was at this time that I first heard about the <a href="https://ics.uci.edu/~vazirani/MV.pdf">Micali-Vazirani (MV) algorithm</a>. Naturally, I was quite excited when I got to know Vazirani personally years later. When I talked to him about the MV algorithm I was, however, shocked: Vazirani admitted that even to that day, there did not exist a complete proof of its correctness. How can a theoretical result be accepted to FOCS without a proof?</p><p>Now, 44 years after publication of the algorithm, a proof exists and has been peer-reviewed in great depth. But why did it take so long? Apparently, some results just need time. Sometimes a lot of time. Think of Fermat's Last Theorem, <a href="https://en.wikipedia.org/wiki/Fermat's_Last_Theorem">whose proof took 358 years</a>! 
So what is the story behind the MV algorithm? It can without a doubt be seen as a lifework. Together with his fellow PhD student Silvio Micali, Vazirani discovered it in the first year of his PhD in 1979-80. Without even attempting a proof, it was published in the proceedings of FOCS 1980. The first proof attempt by Vazirani was published <a href="https://ics.uci.edu/~vazirani/Matching-1994.pdf">in 1994 in Combinatorica</a>. Unfortunately, this proof turned out to be flawed. It took another 30 years until his current paper.</p><p>What kept Vazirani going for so long? In the acknowledgements of his paper, he thanks matching theory for its gloriously elegant structure. Vazirani was driven by his passion for the subject matter---but passion by itself can only go so far. Even more important was his belief in the correctness of the algorithm and the theory, which he had broadly outlined in his 1994 paper. Similar to <a href="https://en.wikipedia.org/wiki/Wiles%27s_proof_of_Fermat%27s_Last_Theorem">Andrew Wiles' story</a>, his perseverance led him to the idea which clinched the proof. In Vazirani's case, this was to use the new algorithmic idea of double depth-first search, which forms the core of the MV algorithm, and now, its proof as well. But Vazirani's result is also the story of an excellent research environment. Finding deep results requires colleagues or friends to discuss ideas with. Vazirani had these in the form of strong postdocs and PhD students. About ten years ago, he had been discussing ideas towards his proof with his former postdoc <a href="https://rutamehta.cs.illinois.edu/">Ruta Mehta</a>, and in the last three years, he discussed the final touches of his proof with his current PhD student <a href="https://ics.uci.edu/~rgangam/">Rohith Gangam</a>. Needless to say, both of them gained a lot from these discussions.</p><p>So why should we care for the MV algorithm? I have several reasons. 
First, without doubt, it is a historic result within combinatorial optimization. Matching is one of the most fundamental objects in discrete mathematics and we keep finding new applications for it, for example, in health, labor markets, and modern day matching markets on the Internet, basically in every part of our lives. But there is more. Once again, one can look at Vazirani's paper where he describes the impact of matching to the development of the theory of algorithms: Matching theory has led to foundational concepts like the definition of the complexity classes \(\mathcal P\) <a href="https://www.cambridge.org/core/journals/canadian-journal-of-mathematics/article/paths-trees-and-flowers/08B492B72322C4130AE800C0610E0E21">(Edmonds, 1965a)</a> and \(\# \mathcal P\) <a href="https://www.sciencedirect.com/science/article/pii/0304397579900446">(Valiant, 1979)</a>, the primal-dual paradigm <a href="https://onlinelibrary.wiley.com/doi/abs/10.1002/nav.3800020109">(Kuhn, 1955)</a>, and polyhedral combinatorics <a href="https://nvlpubs.nist.gov/nistpubs/jres/69B/jresv69Bn1-2p125_A1b.pdf">(Edmonds, 1965b)</a>. The impact of matching on complexity theory was an <a href="https://blog.computationalcomplexity.org/2017/11/matching-and-complexity.html">earlier topic of this blog</a>.</p><p>Despite being around for decades, the MV algorithm is still the fastest known algorithm for computing a maximum cardinality matching. This is surprising, to put it mildly. Similar to many other fundamental problems in combinatorial optimization, I would have expected the discovery of better algorithms in the last four decades. Why has this not happened? Vazirani appears to have gotten to the essence of the problem: a profound theory that interleaves algorithmic invariants and graph-theoretic concepts. 
It seems to be the kind of theory which would play an active role in the field of combinatorial optimization.</p><p>However, Vazirani's result proves something else, possibly even more important: the massive gains to be made by single-minded persistence. In a world in which departments and promotion procedures focus on publishing large numbers of papers, it seems impossible to work on one result for more than a year, let alone for decades. Vazirani managed to achieve both: pursue his passion and get the unfinished job done, but not let it get in the way of the rest of his otherwise-active research career. As a young researcher, I find this inspiring! In the end, it is through such persistence that science will take big steps forward.</p><p><i>This blog post evolved from many enjoyable discussions I had with Vijay Vazirani during a research stay at UC Irvine in spring 2024. I am grateful to Ruta Mehta for feedback on the initial version of this post. Vazirani recently presented his paper in a mini-series of <a href="https://simons.berkeley.edu/events/theory-alternating-paths-blossoms-perspective-minimum-length">two talks available online</a>.</i></p>https://blog.computationalcomplexity.org/2024/04/is-persistence-anachronism.htmlnoreply@blogger.com (Lance Fortnow)17tag:blogger.com,1999:blog-3722233.post-6870657438011112341Sun, 21 Apr 2024 18:56:00 +00002024-04-21T13:56:39.755-05:00Intelligent Comments on Bill's G.H. Hardy/Avi W post that we did not post.I posted (see <a href="https://blog.computationalcomplexity.org/2024/04/avi-wigderson-is-counterexample-to-two.html">here</a>) about Avi Wigderson being a counterexample to two of G.H. Hardy's opinions:<div><br /></div><div>1) Hardy thought Math was a young man's game. I got some good comments on this. Some agreed and some disagreed. <br /></div><div><br /></div><div>2) Hardy thought applied math is dull. I got no comments on this one. 
I assume everyone agreed with my assessment that Hardy was wrong about this.</div><div><br /></div><div>AND I got the following comment: </div><div><br /></div><div><i>Avi Wigderson's brilliance shatters the false assumptions of G.H. Hardy, proving that intelligence knows no limits. His groundbreaking ideas challenge the status quo and inspire a new generation of thinkers.</i></div><div><i><br /></i></div><div>I agree! I was going to post it! Then I realized that </div><div>a) There were two links below it. A bad sign.</div><div><br /></div><div>b) While Avi is a brilliant thinker, is he known for challenging the status quo? The NW result pointed to P=BPP, which was not thought to be true at the time. (Michael Sipser thought P=BPP, see <a href="https://www.sciencedirect.com/science/article/pii/0022000088900359">here,</a> but he was an outlier. If my memory is wrong then leave a polite comment.) Aside from that result I don't think Avi's work challenges the status quo. If I am WRONG leave a polite comment correcting my error. </div><div><br /></div><div>c) As a side note- I think of challenging the status quo as being overrated. Wiles' proof of FLT did not challenge the status quo. Many great results do not challenge the status quo. If Avi shows P NE NP that would be awesome even though it is the prevailing opinion. </div><div><br /></div><div>d) Upon reading the comment carefully, it looks like it was AI-generated. </div><div><br /></div><div>e) It fooled me for about 5 minutes. It didn't fool Lance for a second. </div><div><br /></div><div>3) But this comment inspired Lance to use ChatGPT to generate comments to my post: one agreeing and one disagreeing.</div><div><br /></div><div>BEGIN COMMENT THAT AGREED<br /></div><div>Avi Wigderson’s recent Turing Award is a compelling moment to reflect on the fluid nature of mathematical careers and the erroneous stereotypes surrounding them. G.H. 
Hardy’s assertion in *A Mathematician’s Apology* that mathematics is a young man's game is eloquently countered by Avi’s illustrious career and the achievements of others well beyond the supposed prime age. This challenges Hardy's notion and suggests it warrants reevaluation as a mere conjecture rather than a stated fact.<br /><br />Moreover, Hardy’s dismissal of applied mathematics overlooks the dynamic interplay between various disciplines. Even in his era, the boundaries between pure and applied math, along with physics and computer science, were permeable and productive. Avi Wigderson’s work, though not strictly applied math, beautifully illustrates how applied considerations can drive significant theoretical advancements.<br /><br />In this light, the recognition of Wigderson’s contributions is not just a celebration of his individual genius but also a testament to the evolving and interconnected landscape of mathematics, which continues to defy the narrow confines set by earlier academic opinions.</div><div>END OF COMMENT</div><div><br /></div><div>This is an intelligent comment; however, it mostly reflects back what I said. Indeed, it is a more eloquent exposition of what I said. Fortunately I can't be fired from blogging and replaced with an AI since I don't get paid anyway. </div><div><br /></div><div>BEGIN COMMENT THAT DISAGREED<br /></div><div>While Avi Wigderson's Turing Award certainly highlights his exceptional contributions to theoretical computer science, invoking this as a refutation of G.H. Hardy's views on mathematics as a young man's game might be an oversimplification. Hardy's assertion, though controversial, was not without merit when considering the context of his times and the examples he witnessed firsthand. 
It's important to remember that exceptions like Wigderson do not necessarily disprove the rule but rather highlight remarkable outliers.<br /><br />Furthermore, Hardy's critique of applied mathematics as being dull must be understood in the philosophical context of his personal commitment to pure mathematics. While we may not agree with his perspective, dismissing it entirely fails to appreciate the depth of passion that fueled his work and the work of many pure mathematicians. The interplay between disciplines enriches mathematics, indeed, but Hardy’s emphasis on the beauty of pure theory has inspired generations and continues to hold significant value in the mathematical community. </div><div>END OF COMMENT</div><div> </div><div>This does not seem to be just reflecting back what I wrote. </div><div> </div><div>The argument that Avi is an outlier is reasonable. <br /></div><div> </div><div>Here is my counterargument to this defense of Hardy's negative opinion of applied math: just because the notion that applied math is dull fueled passion does not make it correct. And one must also see what harm the notion caused.</div><div> </div><div> If I debated a human on this point they might change their mind, or give better counterarguments. Can ChatGPT change its mind? Does that question even make sense? I suspect that ChatGPT could give better or at least different counterarguments. <br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div>https://blog.computationalcomplexity.org/2024/04/intelligent-comments-on-bills-gh.htmlnoreply@blogger.com (gasarch)2tag:blogger.com,1999:blog-3722233.post-7996908622879327094Thu, 18 Apr 2024 13:11:00 +00002024-04-19T07:12:28.414-05:00Favorite Theorems: Quantum Provers<p>For our next favorite theorem, we look at the surprising power of provers who share entangled bits. 
If you can prove something to a computable verifier with unbounded running time, then two entangled provers can convince a polynomial-time verifier.</p><p style="text-align: center;"><a href="https://arxiv.org/abs/2001.04383">MIP* = RE</a><br />Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright and Henry Yuen</p><p style="text-align: left;">A quick tour:</p><p style="text-align: left;"></p><ul style="text-align: left;"><li>A powerful prover convincing a skeptical computable deterministic verifier is another way of capturing the computably enumerable sets (traditionally called recursively enumerable or RE). You can convince a verifier that a program halts by giving the halting time, and the verifier can simulate the machine for that many steps.</li><li>A powerful prover convincing a skeptical polytime deterministic verifier is another way of capturing the class NP, like giving a 3-coloring of a graph that can be easily checked.</li><li>If you allow the verifier to ask random questions, you can convince the verifier with high confidence that a graph is not 3-colorable, or more generally handle any PSPACE problem.</li><li>If you add two separated provers that a skeptical probabilistic verifier can play off each other, the provers can convince the verifier of any problem in NEXP, non-deterministic exponential time.</li></ul><div>One of many quantum variations of interactive proofs, MIP* has two provers that cannot communicate but have entangled quantum bits. 
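To make the first bullet concrete, here is a minimal sketch (my own toy illustration, not from the paper) of a deterministic verifier for halting claims: the prover's certificate is a claimed halting time, and the verifier simply simulates for that many steps. A simple step-function machine stands in for a Turing machine.

```python
# Toy illustration of a deterministic verifier for the c.e. language
# "this machine halts": the certificate is a claimed halting time t,
# and the verifier simulates the machine for at most t steps.
# A "machine" here is just a step function returning ("halt", result)
# or ("continue", next_state), standing in for a Turing machine.

def verify_halting_claim(step, start_state, claimed_steps):
    """Accept iff the machine halts within claimed_steps steps."""
    state = start_state
    for _ in range(claimed_steps):
        status, state = step(state)
        if status == "halt":
            return True   # claim verified
    return False          # machine still running; reject the claim

# Example machine: counts down from n and halts at 0.
def countdown_step(n):
    return ("halt", 0) if n == 0 else ("continue", n - 1)

print(verify_halting_claim(countdown_step, 5, 10))  # True
print(verify_halting_claim(countdown_step, 5, 3))   # False
```

Note the one-sided guarantee: a true halting claim with a large enough time is always accepted, but a rejection proves nothing about halting in general, which is exactly why this captures computable enumerability rather than decidability.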
This change, giving the provers shared entanglement, could go either way: </div><div><ul style="text-align: left;"><li>The provers could coordinate their answers better, and so they would no longer convince the verifier for all the languages in NEXP.</li><li>The verifier could ask more complex questions to the provers, which they could answer using the entanglement, allowing the provers to convince the verifier of even more complex languages.</li></ul></div><div>It turns out it's the latter, in a very strong way.</div><div><br /></div><div><a href="https://doi.org/10.1109/FOCS.2012.11">Ito and Vidick</a> showed that you can create a protocol that prevents the provers from coordinating better, recovering all problems in NEXP. <a href="https://doi.org/10.1109/FOCS.2019.00039">Natarajan and Wright</a> showed you can ask more questions, showing that provers with entangled bits can convince a verifier of everything in NEEXP, non-deterministic double exponential time (\(2^{2^{n^c}}\)), already a proof too large for the verifier to even point into. The MIP* = RE paper takes that all the way to the computably enumerable sets, all the languages you would get with a classical prover convincing a deterministic verifier unrestricted by time.</div><p></p>https://blog.computationalcomplexity.org/2024/04/favorite-theorems-quantum-provers.htmlnoreply@blogger.com (Lance Fortnow)3tag:blogger.com,1999:blog-3722233.post-4564344477760220378Mon, 15 Apr 2024 13:53:00 +00002024-04-15T08:53:30.035-05:00Avi Wigderson is a counterexample to TWO stupid thoughts of G.H. Hardy<p> Recently</p><p>1) Avi Wigderson won the Turing Award (See blog posts by Fortnow-<a href="https://blog.computationalcomplexity.org/2024/04/avi-wins-turing-award.html">here</a>, Scott-<a href="https://scottaaronson.blog/?p=7936">here</a>, Lipton-Regan <a href="https://rjlipton.com/2024/04/10/2023-turing-award/">here</a>, and the ACM announcement <a href="https://amturing.acm.org/">here</a>). 
The last time I could find when Fortnow-Gasarch, Scott, Lipton-Regan all blogged on the same topic was when Goldwasser-Micali won the Turing Award- see the blog entries (<a href="https://blog.computationalcomplexity.org/2013/03/turing-award-to-shafi-and-silvio.html">here</a>, <a href="https://scottaaronson.blog/?p=1267">here</a>, <a href="https://rjlipton.com/2013/06/06/psst-its-the-definition/">here</a>). We rarely coordinate. For that matter, even Fortnow and Gasarch rarely coordinate. <br /></p><p>2) My joint book review of G.H. Hardy's <i>A Mathematician's Apology (1940)</i> and L.N. Trefethen's <i>An Applied Mathematician's Apology</i> appeared in SIGACT News. </p><p>These two events would seem unrelated. However, I criticize two points in Hardy's book; and those two points relate to Avi. The book review is <a href="https://www.cs.umd.edu/~gasarch/bookrev/NICK/apology.pdf">here</a>. </p><p>POINT ONE: Hardy says that Mathematics is a young man's game and that if you are over 40 then you are over the hill. He gives some fair examples (Gauss, Newton) and some unfair ones (Galois and Ramanujan, who died before they were 40). Rather than STATE this fact he should have made it a CONJECTURE to be studied. I would make it two conjectures: </p><p>Was it true for math that Hardy would know about? Since most people died younger in those days, there might be too small a sample size. Euler and Leibniz might be counterexamples.</p><p>Is it true now? AVI is clearly a counterexample. Other modern counterexamples: Michael Rabin, Leslie Valiant, Roger Apery (proved zeta(3) irrational at the age of 62), Yitang Zhang (bounded gaps between primes at age 58, which, alas, is not a prime-- would have been really cool if it was a twin prime), Louis de Branges (proof of the Bieberbach conjecture at 51), Andre Weil, Jean-Pierre Serre. Is that enough people to disprove Hardy's conjecture? 
</p><p>Despite the counterexamples I provided, we have all seen some mathematicians stop producing after a time. I offer two reasons for this:</p><p>a) (Andrew Gleason told me this one) A mathematician works in a field, and the field dries up. Changing fields is hard since math has so much prereq knowledge. CS has less of that problem. One can check whether, for the counterexamples above and others, the fields they worked in avoided drying up. </p><p>b) The Peter Principle: Abosla is a great researcher, so let's make her department chair!</p><p>My conjecture: The notion that math is a young man's game is false. </p><p>POINT TWO: Applied Math is dull. Trefethen's book makes a good counterargument to this. I will say something else.</p><p>Even in Hardy's time he would have seen (if his head was not so far up his ass) that math, applied math, physics, computer science and perhaps other areas interact with each other. It is common to say that things done in pure math get applied. However, there are also cases where pure math uses a theorem from applied math. Or where Physics MOTIVATES a topic in pure or applied math. The boundaries are rather thin and none of these areas has the intellectual or moral high ground. There is the matter of personal taste, and if G.H. Hardy prefers pure math, that's fine for him. But he should not mistake his tastes for anything global. And as is well known, he thought pure math like number theory would never apply to the real world. He was wrong about that of course. But also notice that Cryptography motivated work in number theory. I am not sure if I would call AVI's work applied math, but it was certainly motivated by applied considerations. 
<br /></p>https://blog.computationalcomplexity.org/2024/04/avi-wigderson-is-counterexample-to-two.htmlnoreply@blogger.com (gasarch)9tag:blogger.com,1999:blog-3722233.post-268446554761000069Wed, 10 Apr 2024 16:11:00 +00002024-04-10T11:27:36.395-05:00Avi wins the Turing Award<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://www.ias.edu/sites/default/files/styles/portrait/public/images/scholars/757.jpg?itok=G6mMA9iI" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="240" data-original-width="200" height="240" src="https://www.ias.edu/sites/default/files/styles/portrait/public/images/scholars/757.jpg?itok=G6mMA9iI" width="200" /></a></div>The ACM <a href="https://awards.acm.org/award-recipients/wigderson_3844537">announced</a> that Avi Wigderson, a force in computational complexity and beyond, will receive the 2023 A. M. Turing Award (<a href="https://www.quantamagazine.org/avi-wigderson-complexity-theory-pioneer-wins-turing-award-20240410/">Quanta article</a>). He is the first researcher primarily in computational complexity to win the award since Andy Yao in 2000. Avi can add this to his <a href="https://abelprize.no/abel-prize-laureates/2021">Abel</a>, <a href="https://www.mathunion.org/imu-awards/rolf-nevanlinna-prize">Nevanlinna</a> and <a href="https://www.ias.edu/news/avi-wigderson-awarded-2019-knuth-prize">Knuth</a> prizes. Avi has served on the faculty at the Institute for Advanced Study since 1999 after many years at Hebrew University. He's hosted many postdocs and visiting faculty and students who've gone on to great careers of their own.<div><p></p><p>Avi is my first co-author to win the Turing award. 
<a href="https://doi.org/10.1007/BF01275486">Our paper</a> was just one link in a chain of papers, from <a href="https://doi.org/10.1016/S0022-0000(05)80043-1">Nisan-Wigderson</a> to <a href="https://dl.acm.org/doi/abs/10.1145/258533.258590">Impagliazzo-Wigderson</a> showing how circuit bounds yield derandomization. Philosophically these results tell us randomness is just computation we cannot predict.</p><p>But Avi did so much more. <a href="https://dl.acm.org/doi/abs/10.1145/116825.116852">NP has zero-knowledge proofs</a>. <a href="https://doi.org/10.1109/SFCS.2000.892006">Zig-Zag expanders</a> that led to Reingold's <a href="https://doi.org/10.1145/1391289.1391291">SL = L</a>. <a href="https://doi.org/10.1145/62212.62265">Monotone circuit lower-bounds using communication complexity</a>. <a href="https://blog.computationalcomplexity.org/2017/11/matching-and-complexity.html">Upper and lower bound for matching</a>. <a href="https://doi.org/10.1145/780542.780630">Optimal Extractors</a>. And that's just the tip of the iceberg. </p><p>Notably none of these papers are solely authored or even have much overlap in their author lists. Avi shared his wisdom with many, nearly 200 distinct co-authors according to DBLP. </p><p>Beyond his research, Avi has been a great advocate for our field, advocating to the NSF as a founding member of the SIGACT Committee for the Advancement for Theoretical Computer Science, and on the Simons Foundation scientific board which led to Simons Fellows and the <a href="https://simons.berkeley.edu/homepage">Simons Institute</a>. Avi <a href="https://www.amazon.com/Mathematics-Computation-Avi-Wigderson/dp/0691189137">wrote a book</a> placing computational complexity as a great mathematical discipline that just also happens to have great applications in so many different fields.</p><p>Congrats to Avi and this capstone to an incredible career and individual. 
</p></div>https://blog.computationalcomplexity.org/2024/04/avi-wins-turing-award.htmlnoreply@blogger.com (Lance Fortnow)9tag:blogger.com,1999:blog-3722233.post-8440922765313444517Tue, 09 Apr 2024 14:20:00 +00002024-04-09T09:20:14.475-05:00Rance Cleaveland passed away on March 27, 2024. He will be missed<p> <br />My friend and colleague Rance Cleaveland passed away on March 27, 2024 at the age of 62. He was a professor at The University of Maryland at College Park in the Computer Science Department. He worked in Software Engineering. He did program verification and model checking. He had his own company, so he did both theoretical and practical work.<br /><br />He joined UMCP in 2005. I had known him and some of his work before then so we got together for lunch about once a month to talk about the department (he was new to the dept. so I filled him in on things) and about computer science. He would probably be considered a theorist in Europe, though he was considered a Software Engineer in America.<br /><br />The department's announcement is <a href="https://www.cs.umd.edu/article/2024/03/remembering-rance-cleaveland-1961-2024">here</a>.</p><p>Below are:</p><p> <br />1) A note from Rance's grad student Peter Fontana, who got his PhD in 2014.<br /><br />2) An email that Jeroen Keiren sent to the concurrency mailing list.</p><p>3) A picture of Peter, Jeroen, and Rance in that order left to right. </p><p>-------------------------------------------</p><p>Peter Fontana's Note:</p><p>PERSONAL:</p><p>I’m truly shocked and saddened to hear that Rance Cleaveland passed away. Rance advised me (as a Ph.D. student of his at UMCP) in model checking (also called formal verification).<br /><br />Rance was an extremely kind advisor and extremely skilled in leadership and communication. He also had all of the swiftness, communication, and people understanding of a skilled manager. 
He always encouraged us to find the simplest example possible to illustrate a nuanced corner-case of a property we wanted to prove or disprove. The simplicity made complicated things clearer. He was also an extremely clear communicator and extremely skilled with people. Rance was always patient and kind, eager to guide rather than to chastise. I will truly miss him.<br /><br /></p><p>COMPUTER SCIENCE</p><p>Model checking involves the following:</p><p>(1) abstracting a program as a state machine (automaton) with labels,</p><p>(2) writing a desirable property (such as “bad event X will never happen”) as a formula in a logic,</p><p>(3) using a computer to automatically show (over all possible cases) that the specified property is true or false. This is model checking.<br /><br />Theorists will note that this process of model checking is asking if state q of a machine satisfies a property phi, which is the model checking problem. If you are in the world of boolean formulas and propositions, the NP-complete satisfiability problem (SAT) asks: does there exist a boolean assignment q that satisfies formula \phi? The analogous model checking problem is: given boolean assignment q (each proposition is either T or F), does q satisfy \phi? For boolean assignments, the model-checking problem is in P.<br /><br />While Rance worked in a variety of areas related to formal verification that spanned process algebras, different logics, different automata types, and cyber-physical systems, with me he improved the art of timed automata model checking using a timed modal-mu calculus (timed logic). Timed automata and timed logics extend state machines by introducing a finite number of clocks. These clocks all advance (like time advancing) and can reset, and timed logics now have timing constraints (“within T time” is the most common constraint). 
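Returning to the boolean case above for concreteness, here is a minimal sketch (my own illustrative code, not Rance's): formulas are nested tuples, and evaluating one under a fixed assignment takes a single pass over the formula, which is one way to see that this version of model checking is in P.

```python
# Evaluate a boolean formula phi under a fixed assignment q.
# Formulas are nested tuples: ("var", name), ("not", f),
# ("and", f, g), ("or", f, g).
# One recursive pass over phi suffices, so this runs in time linear
# in |phi|; contrast with SAT, where the assignment is existentially
# quantified rather than given.

def satisfies(q, phi):
    op = phi[0]
    if op == "var":
        return q[phi[1]]
    if op == "not":
        return not satisfies(q, phi[1])
    if op == "and":
        return satisfies(q, phi[1]) and satisfies(q, phi[2])
    if op == "or":
        return satisfies(q, phi[1]) or satisfies(q, phi[2])
    raise ValueError(f"unknown operator: {op}")

# phi encodes (p or not r) and (not p or r), i.e., p if and only if r.
phi = ("and",
      ("or", ("var", "p"), ("not", ("var", "r"))),
      ("or", ("not", ("var", "p")), ("var", "r")))

print(satisfies({"p": True, "r": True}, phi))   # True
print(satisfies({"p": True, "r": False}, phi))  # False
```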
We worked on extending the state of the art of what we could model-check on timed automata, both with theoretical proofs and by implementing a model checker (in C++) to model-check these richer timed properties. It turns out that this model checking work is decidable (model checking the simplest formulas in timed automata was previously shown to be PSPACE-hard). I inherited work started by others, enhanced it, and passed it on; that work is being enhanced by others today. Our approach was novel in that we used a proof system of predicates and were able to model check more expressive formulas on timed automata with this enhanced system (See <a href="https://link.springer.com/chapter/10.1007/978-3-319-10512-3_9">this paper</a>). For details see my PhD thesis <a href="https://drum.lib.umd.edu/items/9de558ba-1e0a-41bd-b85a-37f89db8aadd">here</a>.</p><p></p><p>----------------------------------------------------------------------------------------------<br /><br />Jeroen Keiren's message to the concurrency mailing list<br /><br />Dear colleagues,<br /><br />It is with great sadness that I share the news of the sudden and unexpected passing of<br />our colleague Rance Cleaveland on 27 March 2024.<br /><br />My thoughts are with his family, friends and loved ones during this sad time.<br /><br />I am convinced that those of us who interacted with Rance throughout his career will remember him as a friendly person. He was always happy to discuss research, but also to dwell on more personal topics or give his honest, yet in my experience always constructive, advice.<br /><br />Rance obtained his PhD at Cornell, and held academic positions at Sussex, NC State and Stony Brook, before accepting his current position at the University of Maryland, College Park in 2005. 
UMD shared an obituary for Rance <a href="https://www.cs.umd.edu/article/2024/03/remembering-rance-cleaveland-1961-2024">here</a>.<br /><br />Rance was not only interested in the theoretical foundations of our field, witnessed by over 150 scientific publications. He also had a strong focus on building the tools (for instance the Concurrency Workbench), and commercializing it (through his company Reactis).<br /><br />Rance was also an active member of our community. He was one of the co-founders of TACAS, for which he was still serving on the steering committee, as well as one of the co-founders of the Springer journal Software Tools for Technology Transfer.<br /><br />In the words of his wife “For Rance, the concurrency community was truly his intellectual home and he appreciated working with colleagues in Europe and the US - and around the world.”<br /><br />I’m sorry to bring this news.<br /><br />With kind regards,<br /><br />Jeroen Keiren<br />Assistant professor, Formal System Analysis group<br />Eindhoven University of Technology, The Netherlands</p><p>------------------------------------</p><p>A Picture of Peter, Jeroen, and Rance, left to right:</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgAvmtvs1PNLS5u-bPc24uAuHEvTac1Vle5G3IsgeeUBmxZcofhBAj6KAsQWg2p2X1XiSOiEf2AOib3TTaFXIxJ2ttZQEKHDOq83dpwDcN6Uh2oRtN7R7k5AYyqA1iYXAYueDGd7-dj3vGk6wDd4x0H3qk6GcuTcdjrQFJSsnuYcJHFAh1BeZOq7w/s4032/rance.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="3024" data-original-width="4032" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgAvmtvs1PNLS5u-bPc24uAuHEvTac1Vle5G3IsgeeUBmxZcofhBAj6KAsQWg2p2X1XiSOiEf2AOib3TTaFXIxJ2ttZQEKHDOq83dpwDcN6Uh2oRtN7R7k5AYyqA1iYXAYueDGd7-dj3vGk6wDd4x0H3qk6GcuTcdjrQFJSsnuYcJHFAh1BeZOq7w/s320/rance.jpg" width="320" /></a></div><br /><p><br /><br /><br 
/></p>https://blog.computationalcomplexity.org/2024/04/rance-cleaveland-passed-away-on-march.htmlnoreply@blogger.com (gasarch)1tag:blogger.com,1999:blog-3722233.post-7929318468146152414Wed, 03 Apr 2024 14:01:00 +00002024-04-03T09:13:19.261-05:00Answer to Question. MetaQuestion remains unsolved<p> In a prior post I asked the following question:</p><p><br /></p><p>find x,y,z positive natural numbers such that the following is true:</p><p><br /></p><p>$$ \frac{x}{y+z} + \frac{y}{x+z} + \frac{z}{x+y} = 4. $$</p><p>I first saw the question in a more fun way:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/fruit.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="255" data-original-width="367" height="255" src="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/fruit.png" width="367" /></a></div><br /><p><br /></p><p>I did not put the answer in the post (should I have? That was the meta question.)</p><p>The question has an infinite number of (x,y,z) that work, so I'll just give the least one:</p><p><br /></p><p><br /></p><p>x= 154476802108746166441951315019919837485664325669565431700026634898253202035277999</p><p><br /></p><p>y = 36875131794129999827197811565225474825492979968971970996283137471637224634055579</p><p><br /></p><p>z = 4373612677928697257861252602371390152816537558161613618621437993378423467772036</p><p><br /></p><p>1) For details on how you could have found the answer see <a href="https://www.quora.com/How-do-you-find-the-positive-integer-solutions-to-frac-x-y+z-+-frac-y-z+x-+-frac-z-x+y-4/answer/Alon-Amit">here</a>. Or watch a YouTube video on it <a href="https://www.youtube.com/watch?v=0sw0l3SRCc8&t=3s">here</a>.</p><p>2) Did I really expect my readers to get this one? Note that I posted it on April Fools Day, though it is a legit problem with a legit answer. 
<br /></p><p>3) The image that says that 95% of all people couldn't solve it---I wonder what their sample size was and where it was drawn from. I suspect that among <i>mathematicians </i>99% or more can't solve it. </p><p>4) Comments on the comments I got:</p><p>a) Austin Buchanan says that Wolfram Alpha says NO SOLUTION. I wonder if Wolfram Alpha cannot handle numbers of this size.</p><p>b) Anonymous right after Austin had a comment that I MISREAD as saying that they found it using a Python program. I asked that person to email me, and it turns out that NO-- they recalled where to look (on the web I assume).</p><p>c) Several commenters solved it by looking at the web. Math Overflow and Quora had solutions. So did other places. This may make the meta question<i> should a blogger post the solution </i>a moot point for a well-known problem. If you get a problem off the web it's quite likely it's well known, or at least well enough known, to have the answer also on the web. If you make up a problem yourself then it's harder to tell. <br /></p><p>5) I think it's a very hard problem to solve unless you have the prior KNOWLEDGE to solve it, so it would not be a good math competition problem. </p><p>6) The cute pictures of fruit in the presentation of the problem make it LOOK like it's a cute problem. It's not. </p><p>7) Only one comment on the meta question about<i> should a blogger post the solution at the same time as the problem </i>(There were more comments about the unimportant question of whether 0 is a natural number.) The one comment says that a blogger SHOULD NOT - let the reader enjoy/agonize for a while. I agree. <br /></p><p>8) Determining if a given math problem is interesting is a hard problem; however, that will be a topic for another blog. (Tip for young bloggers, if there are any (blogs are so 2010): If you do ONE idea per blog then your blog can last longer.) 
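For the skeptical reader, the solution above can be checked exactly with rational arithmetic; Python integers have arbitrary precision, so the 80-digit values pose no problem (a quick sketch, using only the numbers given in the post):

```python
# Exact check of the claimed solution using rational arithmetic.
from fractions import Fraction

x = 154476802108746166441951315019919837485664325669565431700026634898253202035277999
y = 36875131794129999827197811565225474825492979968971970996283137471637224634055579
z = 4373612677928697257861252602371390152816537558161613618621437993378423467772036

total = Fraction(x, y + z) + Fraction(y, x + z) + Fraction(z, x + y)
print(total == 4)  # True
```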
<br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p>https://blog.computationalcomplexity.org/2024/04/answer-to-question-metaquestion-remains.htmlnoreply@blogger.com (gasarch)11tag:blogger.com,1999:blog-3722233.post-1093332795126035941Mon, 01 Apr 2024 20:39:00 +00002024-04-01T19:40:24.343-05:00A Math Question and a Meta Question<p> 1) Question: find x,y,z natural numbers such that the following is true:</p><p><br /></p><p>$$ \frac{x}{y+z} + \frac{y}{x+z} + \frac{z}{x+y} = 4. $$</p><p>I was first presented the problem in a more fun way:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/fruit.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="255" data-original-width="367" height="255" src="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/fruit.png" width="367" /></a></div><br /><p>(NOTE- a commenter pointed out that ``Natural Numbers'' and `Positive Whole Values' are different since some people (and I AM one of them) include 0 as a natural. SO- to clarify, I want x and y and z to be naturals that are \(\ge 1\). )</p><p>2) Meta Question: When a blogger poses a question like that should they also post the answer? Have a pointer to the answer? Not have an answer? Pros and Cons:</p><p>a) If you do not list the answer at all (or post it in a few days) then people will not be tempted to look at the answer. They have to really work on it. (Maybe they can find the answer on the web). </p><p>b) There are people whose curiosity far exceeds their ego. So they want to work on it for (say) 5 minutes and then LOOK AT THE ANSWER! I am one of those people. </p><p>c) When you first look at a problem and work on it you are curious. If you have to wait a few days to get the answer then you may lose interest. </p><p>I invite you to work on both the question and the meta question. 
I will not be blocking people who post the answer in the comments, so if you want to work on it you might not want to look at the answers.</p><p>I will post the answer in a few days. </p><p><br /></p><p><br /></p>https://blog.computationalcomplexity.org/2024/04/a-math-question-and-meta-question.htmlnoreply@blogger.com (gasarch)18tag:blogger.com,1999:blog-3722233.post-5394625418887154332Wed, 27 Mar 2024 13:21:00 +00002024-03-27T08:21:51.064-05:00The Softening of Computing Jobs: Blip or Trend?<p>Tech companies are performing exceptionally well, driving the S&P 500 to new heights with their soaring stock prices. However, the tech sector, apart from AI, expects a <a href="https://www.wsj.com/tech/ai/ai-jobs-demand-tech-layoffs-5b7344c0?st=2srtmwkhj4rno8j&reflink=desktopwebshare_permalink">job decline</a> to <a href="https://www.wsj.com/articles/tech-job-seekers-without-ai-skills-face-a-new-reality-lower-salaries-and-fewer-roles-db63f6e0?st=p243n7qwn6kobym&reflink=desktopwebshare_permalink">persist</a> throughout the year, accompanied by a <a href="https://www.wired.com/story/tech-job-interviews-out-of-control/">tougher hiring process</a>. The situation becomes even more challenging for foreign students requiring sponsorship to secure a job after college, or older tech workers without the latest skills.</p><p>Despite these challenges, tech jobs remain more abundant than in most other fields, although the era of immediate employment with nearly six-figure salaries straight out of college with any CS degree has ended.</p><p>In discussions with industry leaders, I encounter varied perspectives on whether these changes represent a temporary fluctuation or a long-term trend. Let's explore some of the reasons for this.</p><p><b>The Elon Effect</b></p><p>Since Elon Musk took over Twitter in the fall of 2022, he <a href="https://www.cnn.com/2023/04/12/tech/elon-musk-bbc-interview-twitter-intl-hnk/index.html">cut about 80% of the employees</a>. 
The platform had some hiccups but didn't fall apart. You might not like what the platform, now called X, has become, but technically it still functions pretty well. Many other companies started looking at their workforces and started thinking about whether they needed all the software developers they've hired.</p><p><b>Over Supply</b></p><p>We've seen <a href="https://www.wired.com/story/tech-jobs-layoffs-hiring/">tremendous layoffs</a> among the larger tech companies, paring down from overhiring during the pandemic, and <a href="https://www.theatlantic.com/technology/archive/2024/03/computing-college-cs-majors/677792/">massive growth</a> of computer science programs at universities across the country and world. We just have too many job seekers going after too few tech jobs.</p><p><b>Uncertainty</b></p><p>Companies hold back hiring in the face of uncertainty: uncertainty in elections in the US and abroad, international relations particularly with China, regulatory and legal landscapes, wars, interest rates, and the economy. </p><p><b>Artificial Intelligence</b></p><p>Almost everyone I talk to thinks (wrongly) that their careers will not dramatically change with AI, except for programmers, where we already see significant productivity improvements with ML tools like GitHub Copilot. So many companies are re-assessing how many software developers they need. AI also adds to the uncertainty as the tools continue to improve, but how much and how fast remain difficult to predict. </p><p><b>Blip or Trend?</b></p><p>The supply will balance itself out in the future, though possibly through a drop in the number of CS majors. The world will get more certain, hopefully in a good way. But how AI will affect tech (and every other) jobs will take years if not decades to play out.</p><p>So what is my advice for those looking for tech jobs? Build skills, get certificates, perhaps a Master's to wait out the job market, and learn AI, both as a skill in demand and as a way to make yourself more productive. 
Be aggressive (but not <a href="https://blog.computationalcomplexity.org/2023/10/saving-grace.html">too aggressive</a>), network, enter competitions, build a portfolio. The jobs are not as plentiful but they are out there. </p>https://blog.computationalcomplexity.org/2024/03/the-softening-of-computing-jobs-blip-or.htmlnoreply@blogger.com (Lance Fortnow)6tag:blogger.com,1999:blog-3722233.post-679450709559825833Mon, 25 Mar 2024 12:32:00 +00002024-03-25T07:32:21.957-05:00I know what A-B-C-D-F mean but what about V? X? HP? <p> I am looking at LOTS of transcripts of students who applied for my program <a href="https://www.cs.umd.edu/projects/reucaar/">REU-CAAR</a>, so I sometimes come across grades that I don't understand. The transcripts do not have a guide to them, and I have been unable to find the meanings online.</p><p>Normal grades of A,B,C,D,F possibly with + or - I DO understand, as do you, though standards differ from school to school.</p><p>UMCP also has </p><p>P for Pass in a course the student chose to take Pass-Fail</p><p>W for withdrawing from a course</p><p>WW, which will be on all courses in a semester- so the student dropped out that semester </p><p>XF, which means failed because you cheated. I suspect people outside of UMCP would not know that, though the `F' part looks bad. </p><p>NG seems to mean they placed out of the course somehow. Might stand for No Grade. </p><p>I've seen</p><p>V at Georgia Tech. Lance told me that means the student audited the course. </p><p>Q at the University of Texas at Austin. I do not know what that means. </p><p>X at DePauw. I do not know what that means.</p><p>HP at Harvey Mudd. I do not know what that means.</p><p>WV at Harvey Mudd. I suspect some kind of withdrawal but I don't know. </p><p>SX at Cornell. I do not know what that means. </p><p>E is used at some schools for FAIL and at other schools for EXCELLENT.</p><p>Some schools in India use O for Outstanding- higher than an A. 
</p><p>Fortunately it is rare that I NEED to know a grade that has a letter I don't understand the meaning of. However, another problem is the names of courses. </p><p><i>Analysis</i> could mean Calculus with or without proofs, one or many variables. <br /></p><p><i>Discrete Math</i> could mean an easy course on how to prove simple things or a hard course in combinatorics. Often I can tell if it's a Freshman, Sophomore, Junior, or Senior course, so that may help tell what it is. </p><p><i>Foundations</i> could mean... a lot of things. </p><p>So what can be done? The only thing I can think of is to have schools include a legend on their transcripts that tells what each grade means. Why hasn't this already been done? Speculation:</p><p>a) Harder than it seems to do.</p><p>b) Not really an important problem (this blog is the only time I've ever seen it mentioned).</p><p>There may be some threshold, combining how easy a problem is to fix and how important it is, that must be reached before anyone takes action, and this problem does not reach that threshold.</p><p>This would seem to be a problem for admissions to grad school as well, yet I have not heard of people complaining about it there either.<br /></p><p><br /></p><p><br /></p>https://blog.computationalcomplexity.org/2024/03/i-know-what-b-c-d-f-mean-but-what-about.htmlnoreply@blogger.com (gasarch)5tag:blogger.com,1999:blog-3722233.post-4833099759906403906Wed, 20 Mar 2024 13:41:00 +00002024-03-29T15:37:33.269-05:00Can you feel the machine?In the recent Academy Award-winning movie <a href="https://www.imdb.com/title/tt15398776/">Oppenheimer</a>, Niels Bohr <a href="https://www.youtube.com/watch?v=qiuSBWVdgLI">tests</a> a young Oppenheimer.<div><div></div><blockquote><div><b>Bohr</b>: Algebra's like sheet music, the important thing isn't can you read music, it's can you hear it. Can you hear the music, Robert?</div><div><b>Oppenheimer</b>: Yes, I can.</div></blockquote><p>I can't hear the algebra but I feel the machine. 
</p><p>A Turing machine has a <a href="https://en.wikipedia.org/wiki/Turing_machine#Formal_definition">formal mathematical definition</a> but that's not how I think of it. When I write code, or prove a theorem involving computation, I feel the machine processing step by step. I don't really visualize or hear the machine, but it sings to me, I feel it humming along, updating variables, looping, branching, searching until it arrives at its final destination and gives us an answer. I know computers don't physically work exactly this way, but that doesn't stop my metaphorical machine from its manipulations.</p><p>I have felt this way since I first started programming in BASIC in the 1970s, or maybe even before that on calculators or using the phone or sending a letter. I could feel the process moving along.</p><p>So when I took my first CS theory class, I could feel those finite automata moving along states and the PDAs pushing and popping. I even felt the languages pumping. After graduate complexity, I knew I had found my calling.</p><p>And that's why I prefer Kolmogorov complexity arguments to information theory. Information doesn't sing to me but I can see the compression.</p><p>As computational complexity evolved, focusing more on combinatorics, algebra and analysis, and often on models that have little to do with computation, I sometimes find myself lost amongst the static math. But give me a Turing machine and I can compute anything!</p><div></div></div>https://blog.computationalcomplexity.org/2024/03/can-you-feel-machine.htmlnoreply@blogger.com (Lance Fortnow)6tag:blogger.com,1999:blog-3722233.post-8292548797332354154Sun, 17 Mar 2024 18:54:00 +00002024-03-17T13:54:59.851-05:00Grad Student Visit Day: That was then, this is now.<p>(Harry Lewis helped me with this post.) </p><p>March 15 was UMCP Computer Science Grad Student Visit Day. I suspect many of my readers are at schools that had their Grad Student Visit Day recently, or will have it soon. 
</p><p>In 1980 I got into Harvard Grad School in Applied Sciences. I went there in the Spring to talk to some people and take a look at the campus. I paid my own way- it did not dawn on me to ask them to reimburse me, and I doubt they had the mechanism to do so. </p><p>I know a student who got into two grad schools in 1992 and contacted them about a visit. Both set up the visit and reimbursed his travel, and the two trips helped him decide which school to go to. His criterion: the grad students looked happy at one and sad at the other. He went to the school where the grad students looked happy. Was it the right choice? He got his PhD so... it was not the wrong choice.<br /></p><p>That was then. </p><p>In 1980 no schools that I know of had anything called<i> Grad Student Visit Day. </i>In 1992 I suspect some did but it was a mixed bag. </p><p>Now all schools that I know of (including Harvard) have a day in the spring when prospective grad students in CS are invited to come to campus. There are talks about grad school at that school, a very nice lunch, and 1-on-1 (or perhaps finite-to-1) meetings of grad students with faculty. Students are reimbursed for travel and lodging (within reason). There are variants on all of this, but that's the basic structure. The idea is to convince grad students to go to that school. It also serves the purpose of helping grad students who are already coming to get a sense of the place and a free lunch. It costs money: reimbursing students for travel, plus food for the students on the day itself. </p><p>Random thoughts.</p><p>1) In 1980 no grad schools in CS did this. In 2024 (and I think for quite some time, except the COVID years) all CS grad programs do this. Does anyone know when the change happened? I suspect the early 1990's. </p><p>2) Do other departments do this? For example Math? Physics? Applied Math? Chemistry? Biology? Engineering? I doubt Art History does. </p><p>3) Does it really help convince students to go to that school? 
I suspect that at this point if a school DIDN'T do it they would look bad and might lose students. Is there a way out? See the next two points. </p><p>4) Do students judge a school based on the professors they see (``Oh, UMCP has more people doing a combination of ML and Vision than I thought- I'll go there!'') or on the quality of the food (``UMCP had gingerbread for one of their desserts, which is my favorite dessert, so I'll go there.'') or how smoothly the trip went (``UMCP had an easy mechanism for reimbursement, whereas school X had me fill out a lot of forms.'')? <br /></p><p>5) Are we really advancing the public good for which we (schools) either have tax-exempt status OR are using tax-payer money by spending extra money to provide better meals and softer beds than our competitors do? Maybe we should all agree with each other not to waste money trying to outdo each other on stuff of no educational or research significance to the students. But wait---THIS MIGHT BE A VIOLATION OF ANTITRUST LAW. We are competitors, and under federal law are not allowed to (I think) cooperate to prevent a bidding war, even when it would be in the public interest that we do so in order to save money to use on the stuff that matters. </p><p>6) One benefit I get from this as a professor: During the talks I hear things about my department I didn't know. 
</p><p><br /></p>https://blog.computationalcomplexity.org/2024/03/grad-student-visit-day-that-was-then.htmlnoreply@blogger.com (gasarch)7tag:blogger.com,1999:blog-3722233.post-7673905085641261669Wed, 13 Mar 2024 14:51:00 +00002024-03-13T09:51:42.999-05:00Translation in Context<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijRI3dsmNkDZdzE8GN9S13gtKevcF9Pu8SzWxVEObESWPl2wA6IRmVwIl4smMdQoJzkQAWNrPONgAuE1sOn_StGgllBq1alXcFqQ81AjdFLhLExLNsTE_dA-suOgyswjM6cMq1sh3acvrGpNZ87EVQjvX30wTqRddjq17O-bF5mFcR_YuXyRiM/s669/35.jpg" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="446" data-original-width="669" height="266" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijRI3dsmNkDZdzE8GN9S13gtKevcF9Pu8SzWxVEObESWPl2wA6IRmVwIl4smMdQoJzkQAWNrPONgAuE1sOn_StGgllBq1alXcFqQ81AjdFLhLExLNsTE_dA-suOgyswjM6cMq1sh3acvrGpNZ87EVQjvX30wTqRddjq17O-bF5mFcR_YuXyRiM/w400-h266/35.jpg" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">La Scala in Milan</td></tr></tbody></table><br />Google translate generally impresses but consider this translation from a short <a href="https://www.ansa.it/lombardia/notizie/2024/02/25/alla-scala-un-telefonino-cade-da-un-palco-e-colpisce-uno-spettatore_0f364833-c7e6-4595-a2d6-ed5e1de604d5.html">Italian news article</a>. I boldfaced a few items.<div><div></div><blockquote><div>Not scheduled at the premiere of Mozart's Die Entführung aus dem Serail (The Abduction from the Seraglio) which debuted this evening at La Scala.</div><div><br /></div><div>During the second act, in fact, someone dropped a mobile phone from <b>the stage</b> and hit a spectator in the front row on the cheek.</div><div><br /></div><div>'<b>I'll report you</b>,' the man who was hit shouted. 
And then at halftime he explained to the ushers that he would keep the smartphone until the owner arrived to report it. In the meantime, the performance, in the historic staging by Giorgio Strehler conducted by Thomas Guggeis, continued with applause from the audience for almost every aria.</div></blockquote><p>The stage? Last I checked Mozart didn't write any mobile phones into his opera, though you can never tell with those European directors. It certainly would have made the abduction easier. Even if a phone was dropped from the stage, how did it jump over the orchestra pit into the audience?</p><p>So I asked ChatGPT to translate the story.</p><p></p><blockquote><p>An unexpected event occurred during the premiere of Mozart's "Die Entführung aus dem Serail" (The Abduction from the Seraglio) which debuted this evening at La Scala.</p><p>During the second act, indeed, a cellphone fell from <b>a box</b> and struck a spectator in the front row on the cheek.</p><p>"<b>I'll sue you!</b>" the struck gentleman shouted. And then, during the interval, he explained to the ushers that he would keep the smartphone until the owner arrived to sue him. In the meantime, the performance, in the historic staging by Giorgio Strehler with the direction of Thomas Guggeis, continued amidst the audience's applause almost after every aria.</p></blockquote><p></p><p>That makes more sense. So what's going on? The word for box is "palco" and the word for stage is "palcoscenico," sometimes shortened to "palco" when the context is clear (Italian speakers can fact-check me). LLMs capture the context and seem to yield better translations.</p><p>Also note the difference in what the man shouted. According to ChatGPT:</p><blockquote><p>The Italian verb "denunciare" means "to report" or "to denounce" in English. When conjugated in the first person present indicative, it becomes "denuncio," translating to "I report" or "I denounce." 
It's used to express the action of making a formal accusation or report against someone, often in a legal or official context. So, in the context you provided earlier, "Ti denuncio" translates to "I'll sue you" or more literally, "I report you" or "I denounce you," indicating the speaker's intent to take formal action against the person responsible for dropping the cellphone.</p></blockquote><p>We love to talk about the failures of LLMs, but here ChatGPT both improves and explains the translation. </p><p>I wonder if the owner ever did try to get his phone back. </p><div></div></div>https://blog.computationalcomplexity.org/2024/03/translation-in-context.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-2641739516639189440Sun, 10 Mar 2024 19:32:00 +00002024-03-11T09:32:14.852-05:00The Thrill of Seeing Your Name in Print is Gone<p> In the 1980's and 1990's when I got a paper accepted to a journal or conference it seemed important to <i>see it in print. </i>Having a paper accepted was nice, but it didn't seem real until I held the conference proceedings or journal in my hand. I can't imagine that now.</p><p>Random Thoughts</p><p>1) Now when a paper gets ACCEPTED, that's when it's REAL. For some journals the paper goes on their website very soon after that. For many papers this does not matter since the paper has been on arXiv for months, so having it on a journal's website (perhaps behind a paywall) does not increase its exposure anyway. Caveat- there may be people who subscribe to that journal who are not aware of the paper until they get the issue. Then they go to arXiv to get an e-copy. <br /></p><p>2) For e-journals there is no such thing as holding a journal in your hands. </p><p>3) There are still some people who want to see their name in print. 
That was part of <a href="https://blog.computationalcomplexity.org/2023/09/acm-to-go-paper-free-good-bad.html">this blog post</a> and <a href="https://blog.computationalcomplexity.org/2011/02/all-news-that-fit-to-tweet.html">this blog post</a>.</p><p>4) This post was inspired by the following: I had a paper accepted for a special issue of Discrete Mathematics, a memorial issue for Landon Rabern (a Combinatorist who died young). The paper is <a href="https://arxiv.org/abs/2302.04755">here</a> (that's the arXiv version). I recently got the journal, on paper, mailed to me. So I got to see my name in print. My wife was thrilled. I didn't care. I don't know if they send a paper copy of the journal to all authors for all issues, or only for special issues. (My spellcheck thinks that <i>combinatorist</i> is not a word. Or perhaps I am spelling it wrong. ADDED LATER: There is a debate about this word in the comments. Really!)<br /></p><p>5) Note for aspiring bloggers: Getting that issue was the inspiration for this post. When you are inspired, try to write the blog post soon, before you lose interest.<i><br /></i></p><p>6) Before I got the paper copy I didn't know (or care) if there was a paper copy.</p><p>7) I recently asked an editor for a BEATCS column if BEATCS also has a paper copy. <i>He didn't know. </i>That is how unimportant it has become. So why did I ask? I was planning a blog post on which journals are paper-free and which aren't- though I don't think I'll write that after all- too much work, and this post and some prior posts (the ones pointed to in item 3) cover that ground. <br /></p><p>8) I wonder for my younger readers: Did you EVER feel a thrill seeing your name in print? Have you ever even seen your name in print? 
If you don't understand the question, ask your grandparents what <i>in print </i>means.<br /></p>https://blog.computationalcomplexity.org/2024/03/the-thrill-of-seeing-your-name-in-print.htmlnoreply@blogger.com (gasarch)8tag:blogger.com,1999:blog-3722233.post-4924761439783763157Wed, 06 Mar 2024 22:13:00 +00002024-03-06T16:13:32.943-06:00Favorite Theorems: Sensitivity<p>Our next favorite theorem gave a relatively simple proof of the sensitivity conjecture, a long-standing problem of Boolean functions.</p><p></p><div style="text-align: center;"><a href="https://dx.doi.org/10.4007/annals.2019.190.3.6">Induced subgraphs of hypercubes and a proof of the Sensitivity Conjecture</a></div><div style="text-align: center;">by Hao Huang</div><div style="text-align: left;"><br /></div><div style="text-align: left;">Consider the following measures of Boolean functions \(f:\{0,1\}^n\rightarrow\{0,1\}\) on an input x</div><div style="text-align: left;"><ul style="text-align: left;"><li>Decision tree complexity: How many bits of an input do you need to query to determine \(f(x)\)</li><li>Probabilistic decision tree complexity</li><li>Quantum decision tree complexity</li><li>Certificate complexity: Least number of bits that forces the function</li><li>Polynomial degree complexity: What is the smallest degree of a polynomial p over the reals such that \(p(x_1,\ldots,x_n)=f(x_1,\ldots,x_n)\) for \(x\in\{0,1\}^n\). 
</li><li>Approximate polynomial degree complexity: Same as above but only require \(|p(x_1,\ldots,x_n)-f(x_1,\ldots,x_n)|\leq 1/3\).</li><li>Sensitivity: Over all x, what is the maximum number of i such that if x' is x with the ith bit flipped then \(f(x)\neq f(x')\).</li><li>Block sensitivity: Same as sensitivity but you can flip disjoint blocks of bits.</li></ul><div>Based mostly on the 1994 <a href="https://doi.org/10.1007/BF01263419">classic paper</a> by Nisan and Szegedy, all of the above measures, except for sensitivity, are polynomially related; in particular, if, say, f has polylogarithmic decision-tree complexity then they all have polylogarithmic complexity. The sensitivity conjecture states that sensitivity is polynomially related to the rest. I <a href="https://blog.computationalcomplexity.org/2017/12/razors-edge.html">wrote about the conjecture</a> in 2017, along with a <a href="https://blog.computationalcomplexity.org/2018/01/a-sensitive-game.html">combinatorial game</a> that, if solved the right way, could settle the conjecture.</div><div><br /></div><div>Twenty-five years later Huang settled the conjecture in the affirmative, and now all the above measures are known to be polynomially related. His proof uses eigenvalues of some cleverly constructed matrices. </div><div><br /></div><div>So what's left? The <a href="https://blog.computationalcomplexity.org/2018/01/a-sensitive-game.html">game</a> remains open, as is whether the <a href="https://blog.computationalcomplexity.org/2003/11/rational-functions-and-decision-tree.html">rational degree</a> is polynomially related to all the above. But sensitivity was the big one!</div></div>https://blog.computationalcomplexity.org/2024/03/favorite-theorems-sensitivity.htmlnoreply@blogger.com (Lance Fortnow)2tag:blogger.com,1999:blog-3722233.post-9139688569964111347Sun, 03 Mar 2024 20:07:00 +00002024-03-03T14:07:26.441-06:00The letter to recommend John Nash was ``The Best Recommendation Letter Ever''- I do not think so. 
<p>There is an article about the letter Richard Duffin wrote for John Nash that helped him get into Princeton: <a href="https://www.vox.com/2015/6/6/8738229/john-nash-recommendation-letter">here</a>. The title of the article is </p><p><br /><i>The Best Recommendation Letter Ever</i>.</p><p>The article appeared in 2015. </p><p>The letter is dated Feb 11, 1948. </p><p>The letter itself is short enough that I will just write it:</p><p><i>Dear Professor Lefschetz:</i></p><p><i>This is to recommend Mr. John F Nash Jr, who has applied for entrance to graduate college at Princeton.</i></p><p><i>Mr. Nash is nineteen years old and is graduating from Carnegie Tech in June. He is a mathematical genius.</i></p><p><i>Yours sincerely, </i></p><p><i>Richard Duffin</i></p><p><br /></p><p>I am right now looking through 365 applicants for my REU program. (Am I bragging or complaining? When it was 200 I was bragging. At 365 I am complaining.) </p><p>If I got a letter like that would I be impressed?</p><p>No.</p><p>A good letter doesn't just say <i>this person is a genius. </i>It has to justify that. Examples: </p><p><i>She did research with me on topological algebraic topology. I was impressed with her insights. We are working on a paper that will be submitted to the journal of algebraic topological algebra. See the following arXiv link for the current draft. </i></p><p><i>They had my course on Ramsey theory as a Freshman and scored the highest A in the class. However, more impressive is that, on their own, they discovered that R(5)=49 by using their knowledge of both History and Mathematics. </i></p><p><i>Writing a letter for Jack Lotsofmath makes me sad we live in an era of overinflated letters. I worked with him on recursion theory when he was a high school student; however, he ended up teaching me 0''' priority arguments. </i></p><p>So my questions are:</p><p>1) Why was just stating that John Nash was a genius good enough back in 1948? 
Was Richard Duffin a sufficiently impressive person that his name carried some weight? His Wikipedia entry is <a href="https://en.wikipedia.org/wiki/Richard_Duffin">here</a>.</p><p>2) Maybe it's just me, but if a letter comes from a very impressive person I still want it to say why the applicant is so great. </p><p>3) Was there more of an old-boys-network in 1948? Could the thinking have been <i>if Duffin thinks he's</i> <i>good, then he's good. </i>The old-boys-network was bad since it excluded blacks, women, Jews, Catholics, and perhaps people not of a certain social standing. But did it err on the other side-- did they take people who weren't qualified because they were part of the in crowd? And was Duffin's letter a way to say <i>but this guy really is good and not just one of us.</i></p><p>4) I suspect there were both fewer people applying to grad school and fewer slots available. I do not know how that played out.</p><p>5) Having criticized the letter, there is one aspect I do like.</p><p>Today letters sometimes drone on about the following:</p><p>The letter writer's qualifications to judge:</p><p><i>Example:</i> I have supervised over 1000 students and have been funded by the NSF on three large grants and the NIH on one small grant. Or maybe it's the other way around.</p><p>The research project, which is fine, but the letter needs to say what the student DID on the project. <br /></p><p><i>Example</i>: The project was on finite versions of Miletti's Can Ramsey Theory proof. This had never been attempted in the history of mankind! This is important because the Can Ramsey Theory is fundamental to Campbell's soup. This connects up with my upcoming book on the Can Ramsey Theorem to be published by Publish or Perish Press, coming out in early 2025. 
<br /></p><p> Irrelevant things for my program or for grad school, though perhaps relevant for College: </p><p><i>Example</i>: He is in the fencing club and the Latin club and was able to trash talk his opponents in Latin. They didn't even know they were being insulted!</p><p>So I give credit to Duffin for keeping it short and to the point. Even so, I prefer a letter to prove its assertions. <br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p>https://blog.computationalcomplexity.org/2024/03/the-letter-to-recommend-john-nash-was.htmlnoreply@blogger.com (gasarch)7tag:blogger.com,1999:blog-3722233.post-7912911629140519727Wed, 28 Feb 2024 15:48:00 +00002024-02-28T09:48:18.864-06:00A Quantum State<p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgR5xUTVIOdCNDTK9Y0SG_ObviNOlIuqf03LCogPLNBIWo_TyvztTtSjzHLVgfF2oxyN7PdmpfVSK2AWy_uDmjkn6nez1aqJ4ffvHQFQe3RFnL4m0J428Lkz3Avx6tsVq3vHnQsCJiR0iG0R73TBR8-qsbtFY-5xWQTMiNGpc9Lvt-trMy3-cvb/s1024/Quantum%20Abe.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1024" data-original-width="1024" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgR5xUTVIOdCNDTK9Y0SG_ObviNOlIuqf03LCogPLNBIWo_TyvztTtSjzHLVgfF2oxyN7PdmpfVSK2AWy_uDmjkn6nez1aqJ4ffvHQFQe3RFnL4m0J428Lkz3Avx6tsVq3vHnQsCJiR0iG0R73TBR8-qsbtFY-5xWQTMiNGpc9Lvt-trMy3-cvb/s320/Quantum%20Abe.jpg" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Illinois' most famous citizen working on a quantum computer</td></tr></tbody></table><br />The governor of Illinois, JB Pritzker, unveiled his budget last week including <a href="https://www.axios.com/2024/02/21/illinois-jb-pritzker-quantum-computing-semiconductors">$500 million for quantum computing research</a>. 
Is this the best way to spend my tax dollars?</p><p>As long-time readers know, I have <a href="https://blog.computationalcomplexity.org/2021/04/quantum-stories.html">strong doubts</a> about the real-world applications of quantum computing and the hype for the field. But the article does not suggest any applications of quantum computing, rather</p><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px;"><p style="text-align: left;">Pritzker says he's optimistic that the Illinois state legislature will embrace his proposal as a catalyst for job creation and investment attraction.</p></blockquote><p>That does make sense. Investing in quantum may very well bring in extra federal and corporate investment into quantum in Chicago. At the least it will bring in smart people to Illinois to fill research roles. And it's not as if this money would go to any other scientific endeavor if we don't put it into quantum.</p><p>So it makes sense financially and scientifically even if these machines don't actually solve any real-world problems. Quantum winter will eventually come, but might as well take advantage of the hype while it's still there. Or should we?</p><p>A physicist colleague strongly supports Illinois spending half a billion on quantum. He lives in Indiana. </p>https://blog.computationalcomplexity.org/2024/02/a-quantum-state.htmlnoreply@blogger.com (Lance Fortnow)2tag:blogger.com,1999:blog-3722233.post-6079621552130924717Sun, 25 Feb 2024 19:40:00 +00002024-02-25T13:40:54.474-06:00When is it worth the time and effort to verify a proof FORMALLY?<p>(This post was inspired by Lance's <a href="https://twitter.com/fortnow/status/1758134974568431713">tweet</a> and later <a href="https://blog.computationalcomplexity.org/2024/02/sumchecks-and-snarks.html">post</a> on part of IP=PSPACE being formally verified.) 
</p><p>We now have the means to verify that a proof (probably just some proofs) is correct (one could quibble about verifying the verifier but we will not address that here). </p><p><i>The Coq proof assistant</i> (see <a href="https://coq.inria.fr/">here</a>). This was used to verify the proof of <i>the four-color theorem,</i> see <a href="https://www.ams.org/notices/200811/tx081101382p.pdf">here</a>.</p><p><i>The Lean Programming Language and Proof Verifier</i> (see <a href="https://lean-lang.org/">here</a>). It has been used to verify <i>Marton's Conjecture</i>, which had been proven by Gowers-Green-Tao (see <a href="https://www.quantamagazine.org/a-team-of-math-proves-a-critical-link-between-addition-and-sets-20231206/">here</a> for a Quanta article, and <a href="https://terrytao.wordpress.com/2023/11/18/formalizing-the-proof-of-pfr-in-lean4-using-blueprint-a-short-tour/">here</a> for Terry Tao's blog post about it.) </p><p><i>HOL (Higher Order Logic)</i> is a family of interactive theorem-proving systems (see <a href="https://hol-theorem-prover.org/">here</a>). The Wikipedia entry for HOL (proof assistant) (see <a href="https://en.wikipedia.org/wiki/HOL_(proof_assistant)">here</a>) says: </p><p><i>The CakeML project developed a formally proven compiler for ML [ML is a programming language, not to be confused with Machine Learning.]</i></p><p><i>Isabelle</i> is an HOL system (see <a href="https://en.wikipedia.org/wiki/Isabelle_(proof_assistant)">here</a>). The Wikipedia page says that it is used to verify hardware and software. It has recently been used to verify the <i>Sumcheck Protocol</i>, which was developed to help prove IP=PSPACE (see <a href="https://arxiv.org/pdf/2402.06093v1.pdf">here</a>). One of the authors of the IP=PSPACE paper, Lance Fortnow, <a href="https://twitter.com/fortnow/status/1758134974568431713">said of this</a>:</p><p><i>Now I can sleep soundly at night.</i></p><p><i>The Kepler Conjecture</i> was proven by Thomas Hales. 
(Alfred Hales is a co-author of the <i>Hales-Jewett Theorem</i>. I do not know if Thomas and Alfred Hales are related, and <i>Wikipedia</i> does not discuss mathematicians' blood relatives, only their academic ancestors.) The proof was... long. Over a period of many years it was verified by HOL Light and Isabelle. See <a href="https://arxiv.org/pdf/2402.06093v1.pdf">here</a>. (Note- the paper itself says it used HOL Light and Isabelle, but the Wikipedia site on Lean says that Hales used Lean.) </p><p>I am sure there are other proof assistants and other theorems verified by them and the ones above. </p><p>My question is </p><p><b>Which proofs should we be spending time and effort verifying? </b></p><p>Random Thoughts</p><p>1) The time and effort is now much less than it used to be, so perhaps <a href="https://www.dailymotion.com/video/x54vjte">the question is moot.</a></p><p>2) Proofs that were done with the help of a program (e.g., the four-color theorem) should be verified.</p><p>3) The theorem has to have a certain level of importance that is hard to define, but item 1 might make that a less compelling criterion. </p><p>4) A proof in an area where there have been mistakes made before should be verified. That is the justification given in the paper about verifying part of the IP=PSPACE proof.</p><p>5) A rather complicated and long proof should be verified. That was the case for Marton's Conjecture. </p><p>6) A cautionary note: Usually when a long and hard proof comes out, people try to find an easier proof. Once a theorem is proven (a) we know it's true (hmmm..) and (b) we have some idea how the proof goes. Hence an easier proof may be possible. In some cases just a better write-up and presentation is done, which is also a good idea. 
I wonder if, after having a verifier say YES, people will stop bothering to find easier proofs.</p><p>7) Sometimes the people involved with the original proof were also involved in the verification (Kepler Conjecture, Marton's Conjecture) and sometimes not (IP=PSPACE, 4-color theorem). </p><p>8) When I teach my Ramsey Theory class I try to find easier proofs, or at least better write-ups, of what is in the literature. I sometimes end up with well-written but WRONG proofs. The students, who are very good, act as my VERIFIER. This does not always work: there are still some students who, 10 years ago, believed an incorrect proof of the a-ary Canonical Ramsey Theorem, though they went on to live normal lives nonetheless. Would it be worth it for me to use a formal verifier? I would guess no, but again, see item (1). </p><p>9) Might it one day be ROUTINE that BEFORE submitting an article you use a verifier? Will using a verifier be as easy as using LaTeX? </p><p>10) The use of these tools for verifying code--- does that put a dent in the argument by Demillo-Lipton-Perlis (their paper is <a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/social.pdf">here</a>, and I had two blog posts on it <a href="https://blog.computationalcomplexity.org/2021/06/i-went-to-debate-about-program-verif.html">here</a> and <a href="https://blog.computationalcomplexity.org/2007/09/social-process-and-proofs-of-theorems.html">here</a>) that verifying software is impossible?</p><p>11) HOL stands for Higher Order Logic. I could not find out what Isabelle, Lean, or Coq stand for. I notice that they all use a capital letter followed by small letters, so perhaps they don't stand for anything. </p>https://blog.computationalcomplexity.org/2024/02/when-is-it-worth-time-and-effort-to.htmlnoreply@blogger.com (gasarch)9