tag:blogger.com,1999:blog-37222332024-03-18T17:27:11.680-05:00Computational ComplexityComputational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill GasarchLance Fortnowhttp://www.blogger.com/profile/06752030912874378610noreply@blogger.comBlogger3085125tag:blogger.com,1999:blog-3722233.post-82925487973323541542024-03-17T13:54:00.000-05:002024-03-17T13:54:59.851-05:00Grad Student Visit Day: That was then, this is now.<p>(Harry Lewis helped me with this post.) </p><p>March 15 was UMCP Computer Science Grad Student Visit Day. I suspect many of my readers are at schools that had their Grad Student Visit Day recently, or will have it soon. </p><p>In 1980 I got into Harvard Grad School in Applied Sciences. I went there in the Spring to talk to some people and take a look at the campus. I paid my own way- it did not dawn on me to ask them to reimburse me and I doubt they had the mechanism to do so. </p><p>I know a student who got into two grad schools in 1992 and contacted them about a visit. Both set up the visit and reimbursed his travel, and the two trips helped him decide which school to go to. His criteria: The grad students looked happy at one and sad at the other. He went to the school where the grad students looked happy. Was it the right choice? He got his PhD so... it was not the wrong choice.<br /></p><p>That was then. </p><p>In 1980 no schools that I know of had anything called<i> Grad Student Visit Day. </i>In 1992 I suspect some did but it was a mixed bag. </p><p>Now all schools that I know of (including Harvard) have a day in the spring where prospective grad students in CS are invited to come to campus. There are talks about grad school at that school, a very nice lunch, and 1-on-1 (or perhaps finite-to-1) meetings of grad students with faculty. Students are reimbursed for travel and lodging (within reason). There are variants on all of this, but that's the basic structure. 
The idea is to convince grad students to go to that school. It also serves the purpose of helping grad students who are already coming to get a sense of the place and a free lunch. It costs money: reimbursing students for travel, and food on the day itself. </p><p>Random thoughts.</p><p>1) In 1980 no grad schools in CS did this. In 2024 (and I think for quite some time except the COVID years) all CS grad schools do this. Does anyone know when the change happened? I suspect the early 1990's. </p><p>2) Do other departments do this? For example Math? Physics? Applied Math? Chemistry? Biology? Engineering? I doubt Art History does. </p><p>3) Does it really help convince students to go to that school? I suspect that at this point if a school DIDN'T do it they would look bad and might lose students. Is there a way out? See the next two points. </p><p>4) Do students judge a school based on the professors they see (``Oh, UMCP has more people doing a combination of ML and Vision than I thought- I'll go there!'') or on the quality of the food (``UMCP had gingerbread for one of their desserts, which is my favorite dessert, so I'll go there.'') or on how smoothly the trip went (``UMCP had an easy mechanism for reimbursement, whereas school X had me fill out a lot of forms.'')? <br /></p><p>5) Are we really advancing the public good for which we (schools) either have tax-exempt status OR are using tax-payer money by spending extra money to provide better meals and softer beds than our competitors do? Maybe we should all agree with each other to not waste money trying to outdo each other on stuff of no educational or research significance to the students. But wait---THIS MIGHT BE A VIOLATION OF ANTITRUST LAW. We are competitors, and under federal law are not allowed to (I think) cooperate to prevent a bidding war, even when it would be in the public interest that we do so in order to save money to use on the stuff that matters. 
</p><p>6) One benefit I get from this as a professor: During the talks I hear things about my department I didn't know. </p><p><br /></p>gasarchhttp://www.blogger.com/profile/03004932739846901628noreply@blogger.com6tag:blogger.com,1999:blog-3722233.post-76739050856412616692024-03-13T09:51:00.000-05:002024-03-13T09:51:42.999-05:00Translation in Context<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijRI3dsmNkDZdzE8GN9S13gtKevcF9Pu8SzWxVEObESWPl2wA6IRmVwIl4smMdQoJzkQAWNrPONgAuE1sOn_StGgllBq1alXcFqQ81AjdFLhLExLNsTE_dA-suOgyswjM6cMq1sh3acvrGpNZ87EVQjvX30wTqRddjq17O-bF5mFcR_YuXyRiM/s669/35.jpg" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="446" data-original-width="669" height="266" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijRI3dsmNkDZdzE8GN9S13gtKevcF9Pu8SzWxVEObESWPl2wA6IRmVwIl4smMdQoJzkQAWNrPONgAuE1sOn_StGgllBq1alXcFqQ81AjdFLhLExLNsTE_dA-suOgyswjM6cMq1sh3acvrGpNZ87EVQjvX30wTqRddjq17O-bF5mFcR_YuXyRiM/w400-h266/35.jpg" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">La Scala in Milan</td></tr></tbody></table><br />Google translate generally impresses but consider this translation from a short <a href="https://www.ansa.it/lombardia/notizie/2024/02/25/alla-scala-un-telefonino-cade-da-un-palco-e-colpisce-uno-spettatore_0f364833-c7e6-4595-a2d6-ed5e1de604d5.html">Italian news article</a>. 
I boldfaced a few items.<div><div></div><blockquote><div>Not scheduled at the premiere of Mozart's Die Entführung aus dem Serail (The Abduction from the Seraglio) which debuted this evening at La Scala.</div><div><br /></div><div>During the second act, in fact, someone dropped a mobile phone from <b>the stage</b> and hit a spectator in the front row on the cheek.</div><div><br /></div><div>'<b>I'll report you</b>,' the man who was hit shouted. And then at halftime he explained to the ushers that he would keep the smartphone until the owner arrived to report it. In the meantime, the performance, in the historic staging by Giorgio Strehler conducted by Thomas Guggeis, continued with applause from the audience for almost every aria.</div></blockquote><p>The stage? Last I checked Mozart didn't write any mobile phones into his opera, though you can never tell with those European directors. It certainly would have made the abduction easier. Even if a phone was dropped from the stage, how did it jump over the orchestra pit into the audience?</p><p>So I asked ChatGPT to translate the story.</p><p></p><blockquote><p>An unexpected event occurred during the premiere of Mozart's "Die Entführung aus dem Serail" (The Abduction from the Seraglio) which debuted this evening at La Scala.</p><p>During the second act, indeed, a cellphone fell from <b>a box</b> and struck a spectator in the front row on the cheek.</p><p>"<b>I'll sue you!</b>" the struck gentleman shouted. And then, during the interval, he explained to the ushers that he would keep the smartphone until the owner arrived to sue him. In the meantime, the performance, in the historic staging by Giorgio Strehler with the direction of Thomas Guggeis, continued amidst the audience's applause almost after every aria.</p></blockquote><p></p><p>That makes more sense. So what's going on? 
The word for box is "palco" and the word for stage is "palcoscenico," sometimes shortened to "palco" when the context is clear (Italian speakers can fact check me). LLMs capture the context and seem to yield better translations.</p><p>Also note the difference in what the man shouted. According to ChatGPT</p><blockquote><p>The Italian verb "denunciare" means "to report" or "to denounce" in English. When conjugated in the first person present indicative, it becomes "denuncio," translating to "I report" or "I denounce." It's used to express the action of making a formal accusation or report against someone, often in a legal or official context. So, in the context you provided earlier, "Ti denuncio" translates to "I'll sue you" or more literally, "I report you" or "I denounce you," indicating the speaker's intent to take formal action against the person responsible for dropping the cellphone.</p></blockquote><p>We love to talk about the failures of LLMs, but here an LLM improves and explains the translations. </p><p>I wonder if the owner ever did try to get his phone back. </p><div></div></div>Lance Fortnowhttp://www.blogger.com/profile/06752030912874378610noreply@blogger.com0tag:blogger.com,1999:blog-3722233.post-26417395166391894402024-03-10T14:32:00.001-05:002024-03-11T09:32:14.852-05:00The Thrill of Seeing Your Name in Print is Gone<p> In the 1980's and 1990's when I got a paper accepted to a journal or conference it seemed important to <i>see it in print. </i>Having a paper accepted was nice, but it didn't seem real until I held the conference proceedings or journal in my hand. I can't imagine that now.</p><p>Random Thoughts</p><p>1) Now when a paper gets ACCEPTED, that's when it's REAL. For some journals the paper goes on their website very soon after that. For many papers this does not matter since the paper has been on arXiv for months, so having it on a journal's website (perhaps behind a paywall) does not make it more out-there anyway. 
Caveat- there may be people who subscribe to that journal who are not aware of the paper until they get the issue. Then they go to arXiv to get an e-copy. <br /></p><p>2) For e-journals there is no such thing as holding a journal in your hands. </p><p>3) There are still some people who want to see their name in print. That was part of <a href="https://blog.computationalcomplexity.org/2023/09/acm-to-go-paper-free-good-bad.html">this blog post</a> and <a href="https://blog.computationalcomplexity.org/2011/02/all-news-that-fit-to-tweet.html">this blog post</a>.</p><p>4) This post was inspired by the following: I had a paper accepted for a special issue of Discrete Mathematics, a memorial issue for Landon Rabern (a Combinatorist who died young). The paper is <a href="https://arxiv.org/abs/2302.04755">here</a> (that's the arXiv version). I recently got the journal, on paper, mailed to me. So I got to see my name in print. My wife was thrilled. I didn't care. I don't know if they send a paper copy of the journal to all authors, for all issues, or if they only do that for special issues. (My spellcheck thinks that <i>combinatorist</i> is not a word. Or perhaps I am spelling it wrong. ADDED LATER: There is a debate about this word in the comments. Really!)<br /></p><p>5) Note for aspiring bloggers: Getting that issue was the inspiration for this post. When you are inspired, try to write the blog post soon, before you lose interest.<i><br /></i></p><p>6) Before I got the paper copy I didn't know (or care) if there was a paper copy.</p><p>7) I recently asked an editor for a BEATCS column if BEATCS also has a paper copy. <i>He didn't know. </i>That is how unimportant it has become. So why did I ask? I was planning a blog post on which journals are paper-free and which aren't- though I don't think I'll write that after all- too much work, and this post and some prior posts (the ones pointed to in item 3) cover that ground. 
<br /></p><p>8) I wonder for my younger readers: Did you EVER feel a thrill seeing your name in print? Have you ever even seen your name in print? If you don't understand the question, ask your grandparents what <i>in print </i>means.<br /></p>gasarchhttp://www.blogger.com/profile/03004932739846901628noreply@blogger.com6tag:blogger.com,1999:blog-3722233.post-49247614397837631572024-03-06T16:13:00.001-06:002024-03-06T16:13:32.943-06:00Favorite Theorems: Sensitivity<p>Our next favorite theorem gave a relatively simple proof of the sensitivity conjecture, a long-standing open problem about Boolean functions.</p><p></p><div style="text-align: center;"><a href="https://dx.doi.org/10.4007/annals.2019.190.3.6">Induced subgraphs of hypercubes and a proof of the Sensitivity Conjecture</a></div><div style="text-align: center;">by Hao Huang</div><div style="text-align: left;"><br /></div><div style="text-align: left;">Consider the following measures of Boolean functions \(f:\{0,1\}^n\rightarrow\{0,1\}\) on an input x</div><div style="text-align: left;"><ul style="text-align: left;"><li>Decision tree complexity: How many bits of an input do you need to query to determine \(f(x)\)?</li><li>Probabilistic decision tree complexity</li><li>Quantum decision tree complexity</li><li>Certificate complexity: The least number of input bits that force the value of the function</li><li>Polynomial degree complexity: What is the smallest degree of a polynomial p over the reals such that \(p(x_1,\ldots,x_n)=f(x_1,\ldots,x_n)\) for \(x\in\{0,1\}^n\)? 
</li><li>Approximate polynomial degree complexity: Same as above but only require \(|p(x_1,\ldots,x_n)-f(x_1,\ldots,x_n)|\leq 1/3\).</li><li>Sensitivity: Over all x, what is the maximum number of i such that if x' is x with the ith bit flipped then \(f(x)\neq f(x')\).</li><li>Block sensitivity: Same as sensitivity but you can flip disjoint blocks of bits.</li></ul><div>Based mostly on the 1994 <a href="https://doi.org/10.1007/BF01263419">classic paper</a> by Nisan and Szegedy, all of the above measures, except for sensitivity, are polynomially related; in particular, if say f has polylogarithmic decision-tree complexity then they all have polylogarithmic complexity. The sensitivity conjecture states that sensitivity is polynomially related to the rest. I <a href="https://blog.computationalcomplexity.org/2017/12/razors-edge.html">wrote about the conjecture</a> in 2017 and a <a href="https://blog.computationalcomplexity.org/2018/01/a-sensitive-game.html">combinatorial game</a> that, if solved the right way, could settle the conjecture.</div><div><br /></div><div>Twenty-five years later Huang settled the conjecture in the affirmative, and now all the above measures are known to be polynomially related. His proof uses eigenvalues of some cleverly constructed matrices. </div><div><br /></div><div>So what's left? The <a href="https://blog.computationalcomplexity.org/2018/01/a-sensitive-game.html">game</a> remains open. Also whether the <a href="https://blog.computationalcomplexity.org/2003/11/rational-functions-and-decision-tree.html">rational degree</a> is polynomially related to all the above. But sensitivity was the big one!</div></div>Lance Fortnowhttp://www.blogger.com/profile/06752030912874378610noreply@blogger.com2tag:blogger.com,1999:blog-3722233.post-91396885699641113472024-03-03T14:07:00.000-06:002024-03-03T14:07:26.441-06:00The letter to recommend John Nash was ``The Best Recommendation Letter Ever''- I do not think so. 
<p>There is an article about the letter Richard Duffin wrote for John Nash that helped him get into Princeton: <a href="https://www.vox.com/2015/6/6/8738229/john-nash-recommendation-letter">here</a>. The title of the article is </p><p><br /><i>The Best Recommendation Letter Ever</i>.</p><p>The article appeared in 2015. </p><p>The letter is dated Feb 11, 1948. </p><p>The letter itself is short enough that I will just write it:</p><p><i>Dear Professor Lefschetz:</i></p><p><i>This is to recommend Mr. John F Nash Jr, who has applied for entrance to graduate college at Princeton.</i></p><p><i>Mr. Nash is nineteen years old and is graduating from Carnegie Tech in June. He is a mathematical genius.</i></p><p><i>Yours sincerely, </i></p><p><i>Richard Duffin</i></p><p><br /></p><p>I am right now looking through 365 applicants for my REU program. (Am I bragging or complaining? When it was 200 I was bragging. At 365 I am complaining.) </p><p>If I got a letter like that, would I be impressed?</p><p>No.</p><p>A good letter doesn't just say <i>this person is a genius. </i>It has to justify that. Examples: </p><p><i>She did research with me on topological algebraic topology. I was impressed with her insights. We are working on a paper that will be submitted to the journal of algebraic topological algebra. See the following arXiv link for the current draft. </i></p><p><i>They had my course on Ramsey theory as a Freshman and scored the highest A in the class. However, more impressive is that, on their own, they discovered that R(5)=49 by using their knowledge of both History and Mathematics. </i></p><p><i>Writing a letter for Jack Lotsofmath makes me sad we live in an era of overinflated letters. I worked with him on recursion theory when he was a high school student; however, he ended up teaching me 0''' priority arguments. </i></p><p>So my question is</p><p>1) Why was just stating that John Nash was a genius good enough back in 1948? 
Was Richard Duffin a sufficiently impressive person so that his name carried some weight? His Wikipedia entry is <a href="https://en.wikipedia.org/wiki/Richard_Duffin">here</a>.</p><p>2) Maybe it's just me, but if a letter comes from a very impressive person I still want it to say why the applicant is so great. </p><p>3) Was there more of an old-boys-network in 1948? Could the thinking have been <i>if Duffin thinks he's</i> <i>good, then he's good. </i>The old-boys-network was bad since it excluded blacks, women, Jews, Catholics, and perhaps people not of a certain social standing. But did it err on the other side-- did they take people who weren't qualified because they were part of the in crowd? And was Duffin's letter a way to say <i>but this guy really is good and not just one of us?</i></p><p>4) I suspect there were both fewer people applying to grad school and fewer slots available. I do not know how that played out.</p><p>5) Having criticized the letter, there is one aspect I do like.</p><p>Today letters sometimes drone on about the following:</p><p>The letter writer's qualifications to judge:</p><p><i>Example:</i> I have supervised over 1000 students and have been funded by the NSF on three large grants and the NIH on one small grant. Or maybe it's the other way around.</p><p>The research project, which is fine, but the letter needs to say what the student DID on the project. <br /></p><p><i>Example</i>: The project was on finite versions of Miletti's Can Ramsey Theory proof. This had never been attempted in the history of mankind! This is important because the Can Ramsey Theory is fundamental to Campbell's soup. This connects up with my upcoming book on the Can Ramsey Theorem to be published by Publish or Perish Press, coming out in early 2025. 
<br /></p><p> Irrelevant things for my program or for grad school, though perhaps relevant for College: </p><p><i>Example</i>: He is in the fencing club and the Latin club and was able to trash talk his opponents in Latin. They didn't even know they were being insulted!</p><p>So I give credit to Duffin for keeping it short and to the point. Even so, I prefer a letter to prove its assertions. <br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p>gasarchhttp://www.blogger.com/profile/03004932739846901628noreply@blogger.com7tag:blogger.com,1999:blog-3722233.post-79129116291405197272024-02-28T09:48:00.000-06:002024-02-28T09:48:18.864-06:00A Quantum State<p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgR5xUTVIOdCNDTK9Y0SG_ObviNOlIuqf03LCogPLNBIWo_TyvztTtSjzHLVgfF2oxyN7PdmpfVSK2AWy_uDmjkn6nez1aqJ4ffvHQFQe3RFnL4m0J428Lkz3Avx6tsVq3vHnQsCJiR0iG0R73TBR8-qsbtFY-5xWQTMiNGpc9Lvt-trMy3-cvb/s1024/Quantum%20Abe.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1024" data-original-width="1024" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgR5xUTVIOdCNDTK9Y0SG_ObviNOlIuqf03LCogPLNBIWo_TyvztTtSjzHLVgfF2oxyN7PdmpfVSK2AWy_uDmjkn6nez1aqJ4ffvHQFQe3RFnL4m0J428Lkz3Avx6tsVq3vHnQsCJiR0iG0R73TBR8-qsbtFY-5xWQTMiNGpc9Lvt-trMy3-cvb/s320/Quantum%20Abe.jpg" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Illinois' most famous citizen working on a quantum computer</td></tr></tbody></table><br />The governor of Illinois, JB Pritzker, unveiled his budget last week including <a href="https://www.axios.com/2024/02/21/illinois-jb-pritzker-quantum-computing-semiconductors">$500 million for quantum computing research</a>. 
Is this the best way to spend my tax dollars?</p><p>As long-time readers know, I have <a href="https://blog.computationalcomplexity.org/2021/04/quantum-stories.html">strong doubts</a> about the real-world applications of quantum computing and the hype for the field. But the article does not suggest any applications of quantum computing, rather</p><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px;"><p style="text-align: left;">Pritzker says he's optimistic that the Illinois state legislature will embrace his proposal as a catalyst for job creation and investment attraction.</p></blockquote><p>That does make sense. Investing in quantum may very well bring in extra federal and corporate investment into quantum in Chicago. At the least it will bring in smart people to Illinois to fill research roles. And it's not as if this money would go to any other scientific endeavor if we don't put it into quantum.</p><p>So it makes sense financially and scientifically even if these machines don't actually solve any real-world problems. Quantum winter will eventually come but we might as well take advantage of the hype while it's still there. Or should we?</p><p>A physicist colleague strongly supports Illinois spending half a billion on quantum. He lives in Indiana. </p>Lance Fortnowhttp://www.blogger.com/profile/06752030912874378610noreply@blogger.com2tag:blogger.com,1999:blog-3722233.post-60796215521309247172024-02-25T13:40:00.000-06:002024-02-25T13:40:54.474-06:00When is it worth the time and effort to verify a proof FORMALLY?<p>(This post was inspired by Lance's <a href="https://twitter.com/fortnow/status/1758134974568431713">tweet</a> and later <a href="https://blog.computationalcomplexity.org/2024/02/sumchecks-and-snarks.html">post</a> on part of IP=PSPACE being formally verified.) </p><p>We now have the means to verify that a proof (probably just some proofs) is correct (one could quibble about verifying the verifier but we will not address that here). 
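For a concrete, if trivial, taste of what these proof assistants check, here is a tiny machine-checkable example in Lean 4 syntax (the theorem and its proof term are standard-library facts; this is nothing like the scale of the verifications listed below):

```lean
-- Lean's kernel checks that the proof term has the type of the theorem;
-- if this file compiles, the theorem is certified.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Small concrete facts can be certified by pure computation:
example : 2 + 2 = 4 := rfl
```

The point is that the checker trusted here is a small kernel, not the human-written proof text.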
</p><p><i>The Coq proof assistant</i> (see <a href="https://coq.inria.fr/">here)</a>. This was used to verify the proof of<i> the four-color theorem,</i> see <a href="https://www.ams.org/notices/200811/tx081101382p.pdf">here</a>.</p><p><i>The Lean Programming Language and Proof Verifier</i> (see <a href="https://lean-lang.org/">here)</a>. It has been used to verify<i> Marton's Conjecture </i>which had been proven by Gowers-Green-Manners-Tao (see <a href="https://www.quantamagazine.org/a-team-of-math-proves-a-critical-link-between-addition-and-sets-20231206/">here</a> for a Quanta article, and <a href="https://terrytao.wordpress.com/2023/11/18/formalizing-the-proof-of-pfr-in-lean4-using-blueprint-a-short-tour/">here</a> for Terry Tao's blog post about it.) </p><p><i>HOL (Higher Order Logic)</i> is a family of interactive theorem-proving systems (see <a href="https://hol-theorem-prover.org/">here)</a>. The Wikipedia entry for HOL (proof assistant) (see <a href="https://en.wikipedia.org/wiki/HOL_(proof_assistant)">here</a>) says: </p><p><i>The CakeML project developed a formally proven compiler for ML [ML is a programming language, not to be confused with Machine Learning.]</i></p><p><i>Isabelle is an HOL system</i>. See <a href="https://en.wikipedia.org/wiki/Isabelle_(proof_assistant)">here</a>. The Wikipedia page says that it is used to verify hardware and software. It has recently been used to verify the <i>Sumcheck Protocol </i>which was developed to help prove IP=PSPACE (See <a href="https://arxiv.org/pdf/2402.06093v1.pdf">here</a>.) One of the authors of the IP=PSPACE paper, Lance Fortnow, <a href="https://twitter.com/fortnow/status/1758134974568431713">said of this</a></p><p><i>Now I can sleep soundly at night.</i></p><p><i>The Kepler Conjecture</i> was proven by Thomas Hales. (Alfred Hales is the co-author of the <i>Hales-Jewett Theorem. 
</i>I do not know if Thomas and Alfred Hales are related, and <i>Wikipedia </i>does not discuss mathematicians' blood relatives, only their academic ancestors.) The proof was... long. Over a period of many years it was verified by HOL Light and Isabelle. See <a href="https://arxiv.org/pdf/2402.06093v1.pdf">here</a>. (Note- the paper itself says it used HOL Light and Isabelle, but the Wikipedia site on Lean says that Hales used Lean.) </p><p>I am sure there are other proof assistants and other theorems verified by them and the ones above. </p><p>My question is </p><p><b>Which proofs should we be spending time and effort verifying? </b></p><p>Random Thoughts</p><p>1) The time and effort is now much less than it used to be so perhaps <a href="https://www.dailymotion.com/video/x54vjte">the question is moot.</a></p><p>2) Proofs that were done with the help of a program (e.g., the four-color theorem) should be verified.</p><p>3) The theorem has to have a certain level of importance that is hard to define, but item 1 might make that a less compelling criterion. </p><p>4) A proof in an area where there have been mistakes made before should be verified. That is the justification given in the paper about verifying part of the IP=PSPACE proof.</p><p>5) A rather complicated and long proof should be verified. That was the case for Marton's Conjecture. </p><p>6) A cautionary note: Usually when a long and hard proof comes out, people try to find an easier proof. Once a theorem is proven (a) we know it's true (hmmm...) and (b) we have some idea how the proof goes. Hence an easier proof may be possible. In some cases just a better write-up and presentation is done, which is also a good idea. I wonder if after having a verifier say YES people will stop bothering to find easier proofs.</p><p>7) Sometimes the people involved with the original proof also were involved in the verification (Kepler Conjecture, Marton's Conjecture) and sometimes not (IP=PSPACE, 4-color theorem). 
</p><p>8) When I teach my Ramsey Theory class I try to find easier proofs, or at least better write-ups, of what is in the literature. I sometimes end up with well-written but WRONG proofs. The students, who are very good, act as my VERIFIER. This does not always work: there are still some students who, 10 years ago, believed an incorrect proof of the a-ary Canonical Ramsey Theorem, though they went on to live normal lives nonetheless. Would it be worth it for me to use a formal verifier? I would guess no, but again, see item (1). </p><p>9) Might it one day be ROUTINE that BEFORE submitting an article you use a verifier? Will using a verifier be as easy as using LaTeX? </p><p>10) The use of these tools for verifying code--- does that put a dent in the argument by DeMillo-Lipton-Perlis (their paper is <a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/social.pdf">here</a>, and I had two blog posts on it <a href="https://blog.computationalcomplexity.org/2021/06/i-went-to-debate-about-program-verif.html">here</a> and <a href="https://blog.computationalcomplexity.org/2007/09/social-process-and-proofs-of-theorems.html">here</a>) that verifying software is impossible?</p><p>11) HOL stands for Higher Order Logic. I could not find out what Isabelle, Lean, or Coq stand for. I notice that they all use a Cap letter then small letters so perhaps they don't stand for anything. </p>gasarchhttp://www.blogger.com/profile/03004932739846901628noreply@blogger.com9tag:blogger.com,1999:blog-3722233.post-43210939345357939422024-02-21T10:19:00.003-06:002024-02-21T10:22:14.484-06:00Sumchecks and Snarks<p>Last summer as I lamented that my research didn't have real-world implications, <a href="https://blog.computationalcomplexity.org/2023/08/turning-sixty.html?showComment=1692147431425#c4042051939245936837">one of the comments</a> mentioned the sumcheck protocol used for zero-knowledge SNARKs. I tried to figure out the connection back then but got lost in technical papers. 
</p><p>With the <a href="https://arxiv.org/abs/2402.06093v1">formal verification</a> of the sumcheck protocol announced last week, I tried again, this time using Google's new Gemini Ultra. Gemini gave a <a href="https://g.co/gemini/share/940eaeb5ead0">readable explanation</a>. Let me try to summarize here.</p><p>Sumcheck comes from the <a href="https://dl.acm.org/doi/10.1145/146585.146605">paper</a> where Howard Karloff, Carsten Lund, Noam Nisan and I showed that co-NP has interactive proofs. It was Carsten who <a href="https://blog.computationalcomplexity.org/2009/11/breaking-walls.html#:~:text=I%20remember%20most%20the%20breakthrough%20moments%20when%20Carsten%20walked%20into%20my%20office%20asking%20me%20why%20his%20protocol%20for%20the%20permanent%20didn%27t%20work%20(it%20did)">first came up</a> with the sumcheck protocol in fall of 1989. Here's a simple version:</p><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px;"><p style="text-align: left;">Suppose there is a hard-to-compute polynomial p of low degree d and a prover claims p(0)+p(1) = v. The prover gives a degree-d polynomial q to the verifier claiming q = p. The verifier checks q(0)+q(1) = v and picks an r from a large range. If q was actually p then p(r) = q(r). But if p(0)+p(1) \(\neq\) v then p and q are different polynomials and with high probability p(r) \(\neq\) q(r) since p and q can agree on at most d points.</p></blockquote><p>The protocol reduces checking a sum to checking a single randomly chosen location. Our first proof used the sumcheck protocol to reduce checking the permanent of an n x n matrix to an (n-1) x (n-1) matrix and then we repeat until the matrix is small enough to compute the permanent directly.</p><p>Now let's break down the properties of zkSNARK (zero-knowledge succinct non-interactive arguments of knowledge). 
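Before getting to the SNARK properties: the single-round check described above can be sketched numerically. This is a toy sketch over a large prime modulus, not the actual protocol from the paper; in the real protocol the verifier can evaluate p at only one point (e.g., via recursion), which passing p's coefficients in here merely simulates.

```python
import random

def eval_poly(coeffs, x, mod):
    """Evaluate the polynomial with coefficient list coeffs
    (constant term first) at x, modulo a prime."""
    result = 0
    for c in reversed(coeffs):
        result = (result * x + c) % mod
    return result

def sumcheck_round(p_coeffs, q_coeffs, claimed_v, mod=2**31 - 1):
    """One round of the toy sumcheck: the verifier receives q and the
    claim q(0)+q(1) = v, then spot-checks q against p at a random point.
    (Passing p_coeffs in simulates the verifier's one-point oracle.)"""
    # Check the claimed sum against the polynomial the prover sent.
    if (eval_poly(q_coeffs, 0, mod) + eval_poly(q_coeffs, 1, mod)) % mod != claimed_v % mod:
        return False
    # Spot check: two distinct degree-d polynomials agree on at most d
    # points, so a random r catches q != p with probability >= 1 - d/mod.
    r = random.randrange(mod)
    return eval_poly(q_coeffs, r, mod) == eval_poly(p_coeffs, r, mod)

# Honest prover: q really is p = 4x^2 + x + 3, and v = p(0) + p(1) = 11.
assert sumcheck_round([3, 1, 4], [3, 1, 4], 11)
# Wrong claimed sum with an honest q is rejected immediately.
assert not sumcheck_round([3, 1, 4], [3, 1, 4], 12)
# Cheating prover: q = p + 1 makes q(0)+q(1) = 13 self-consistent, but
# q - p is a nonzero constant, so the random spot check always catches it.
assert not sumcheck_round([3, 1, 4], [4, 1, 4], 13)
```

In general a cheating prover escapes only when r lands on one of the at most d agreement points, which is where the "large range" in the description above comes in.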
</p><p></p><ul style="text-align: left;"><li><b>Efficient: </b>A prover who knows the secret input can generate the proof efficiently.</li><li><b>Zero-Knowledge</b>: A prover can convince a verifier that they know a secret input that satisfies a statement without revealing anything else about that secret.</li><li><b>Succinct</b>: The proof itself is extremely small and easy to verify.</li><li><b>Non-Interactive</b>: No repeated interaction between prover and verifier is necessary. The prover generates one proof, and anyone can verify it.</li></ul><p></p><p>Our original sumcheck protocol is not at all efficient: the prover needs to solve #P-hard problems, but in SNARKs we assume the prover already knows the problem's solution.</p><p>Our protocol requires significant interaction, in particular that r is chosen after q is set. But for SNARKs, since we can assume the prover is efficient, we can use r as a hash of q. It should be difficult to find a q different from p such that q(hash(q)) = p(hash(q)).</p><p>Our protocol is not at all zero-knowledge: if p = q then the protocol reveals p(0) and p(1). Lots more tricks are needed to make it zero-knowledge and succinct but I leave that to the interested reader.</p>Lance Fortnowhttp://www.blogger.com/profile/06752030912874378610noreply@blogger.com3tag:blogger.com,1999:blog-3722233.post-3346204013249817232024-02-18T15:24:00.002-06:002024-02-19T07:52:05.104-06:00ChatGPT thinks Graph Isomorphism has real applications. Is it right? <p>Lance did a post on Babai's result on Graph Isomorphism (see <a href="https://blog.computationalcomplexity.org/2024/02/favorite-theorems-graph-isomorphism.html">here</a>). I then did a post asking if Graph Isomorphism has real applications (see <a href="https://blog.computationalcomplexity.org/2024/02/are-there-any-real-applications-of.html">here</a>). Lance proofread my post (There were some typos! Really!) and then he was inspired to ask ChatGPT what it thought. 
The conversation is below (I added some fonts, and my opinion is clearly marked, but nothing else). I leave it to the reader to determine what ChatGPT got right and got wrong. In areas that I am unfamiliar with I tend to think that ChatGPT is right, perhaps a case of Gell-Mann amnesia (see <a href="https://www.epsilontheory.com/gell-mann-amnesia/">here</a>).</p><p>LANCE: What are some applications of Graph Isomorphism? <br /><br />ChatGPT:<br /><br />Isomorphism, a concept from the field of graph theory in mathematics and computer science, refers to the problem of determining whether two graphs are isomorphic, meaning there is a one-to-one correspondence between their vertex sets that preserves adjacency. Despite its theoretical underpinnings, graph isomorphism has practical applications in various domains:<br /><br /><i>Chemical Informatics</i>: It's crucial for identifying chemical compounds. Since chemical structures can be represented as graphs (with atoms as vertices and bonds as edges), determining if two graphs are isomorphic can indicate whether two compounds are structurally identical.</p><p>BILL'S OPINION: I suspect simple heuristics work fine. Also I suspect that chemicals are more complicated than graphs. More generally, instead of asking<i> Does GI have any real applications?</i> I should have asked<i> Do the last 45 years of graph isomorphism algorithms produced by our community have any applications? </i>I suspect the answer is no. <br /><br /><i>Pattern Recognition</i>: In image and pattern recognition tasks, objects can be represented as graphs, and finding isomorphic graphs helps in identifying similar patterns or shapes within different datasets.</p><p>BILL'S OPINION: I suspect that in real-world pattern recognition, especially for images, approximate matches are what you really want. What makes GI hard/interesting is that you need exact isomorphism. So I doubt GI really helps for pattern recognition. 
<br /><br /><i>Network Analysis</i>: Graph isomorphism is used to analyze and compare different network structures in social networks, biological networks (like neural connections and protein interactions), and the Internet, to understand their underlying properties and dynamics.</p><p>BILL'S OPINION: This sounds plausible to me. The people doing the studying are other academics; however, that can still count. <br /><br /><i>Computer Vision: </i>For object recognition and classification, where objects within an image can be represented as graphs, and their identification involves finding isomorphic subgraphs within larger scene graphs.</p><p>BILL'S OPINION: Same issue as with pattern recognition. <br /><br /><i>Cryptography</i>: Graph isomorphism problems can serve as the basis for cryptographic protocols, including public key cryptography, due to the computational difficulty of solving graph isomorphism for general cases.</p><p>BILL'S OPINION: Just NO. One reason factoring is so useful for crypto is that one can easily generate hard instances. For GI you cannot (at least not yet). I think ChatGPT confused GI being in Zero-Knowledge with GI actually being used for Crypto. More on this later.<br /><br /><i>Database Indexing:</i> In databases, especially graph databases, graph isomorphism helps in query optimization and efficient data retrieval by identifying identical or similar graph structures.</p><p>BILL'S OPINION: I can believe this! Gell-Mann amnesia? <br /><br />These applications demonstrate the wide-ranging impact of graph isomorphism beyond theoretical computer science, influencing fields where structural and relational data analysis is crucial.</p><p><br />LANCE: Can you give me specific examples of these?</p><p>(BILL: I won't comment on these, except in one case, as it would be the same comments as above.) <br /><br />ChatGPT:<br /><br />Certainly! 
Here are specific examples for the practical applications of graph isomorphism mentioned:<br /><br /><i>Chemical Informatics:</i> A classic example is the identification of isomers in chemistry. Isomers are compounds with the same molecular formula but different structures. Graph isomorphism algorithms can distinguish between different isomers by comparing their graph representations, where vertices represent atoms and edges represent chemical bonds. For instance, butane and isobutane are structural isomers with different connectivity among the same set of atoms, which can be identified as distinct through graph isomorphism.<br /><br /><i>Pattern Recognition:</i> In character recognition systems, letters or symbols can be represented as graphs, where the strokes are edges and intersections are vertices. Graph isomorphism algorithms can then be used to match a scanned, handwritten, or printed character against a database of graph-represented characters to identify the letter or symbol.<br /><br /><i>Network Analysis:</i> In social network analysis, researchers might use graph isomorphism to identify similar subnetworks within larger networks. For example, detecting communities or cliques within social networks that have identical or similar interaction patterns, which can be crucial for understanding social dynamics or the spread of information.<br /><br /><i>Computer Vision: </i>An application in computer vision could be recognizing objects regardless of their orientation in an image. By representing objects as graphs, where vertices correspond to object features and edges to spatial relationships between these features, graph isomorphism can help in identifying an object by finding a match between its graph representation and that of a known object.<br /><br /><i>Cryptography</i>: A specific use case in cryptography is the design of cryptographic primitives based on the Graph Isomorphism problem, such as the Goldreich-Goldwasser-Halevi (GGH) cryptosystem. 
This cryptosystem constructs a public key from a hard instance of a lattice problem, which is closely related to finding isomorphisms between noisy graph representations, leveraging the computational difficulty of graph isomorphism for security.</p><p>BILL: The pointer to the Wikipedia page on the GGH system is <a href="https://en.wikipedia.org/wiki/GGH_encryption_scheme">here</a>. It has nothing to do with graph isomorphism. So this one is complete garbage. <br /><br /><i>Database Indexing</i>: In the context of graph databases used for storing knowledge graphs or social network data, an example application is the optimization of query processing. When a user queries the database to find data patterns that match a specific graph structure, graph isomorphism algorithms can help quickly locate all instances of the data that are isomorphic to the query graph, enhancing search efficiency and performance.<br /><br />These examples highlight how graph isomorphism plays a crucial role in solving real-world problems across different domains by leveraging the structure and properties of graphs.<br /><br />LANCE: Give me a specific research paper in Chemistry that uses graph isomorphism<br /><br />ChatGPT<br />A notable research paper that applies graph isomorphism in the field of chemistry is:<br /><br />Title: "Development and Application of a Novel Graph-Theoretical Approach for Molecular Similarity Analysis"<br /><br />Authors: Raymond E. Carhart, David H. Smith, Robert Venkataraghavan</p><p></p><p><br />Journal: Journal of Chemical Information and Computer Sciences<br /><br />Year: 1985, Volume: 25, Issue: 1, Pages: 64–73, DOI: 10.1021/ci00045a002<br /><br />Abstract:<br /><br />In this paper, the authors introduce a novel method for analyzing molecular similarity based on graph-theoretical concepts. 
They present an algorithm, known as the MCS (Maximum Common Subgraph) algorithm, which is used to identify the largest subgraph common to two chemical compounds represented as graphs. This process relies on graph isomorphism to determine the structural similarity between molecules. The approach was groundbreaking at the time because it provided a quantitative method for assessing molecular similarity, which is crucial for drug discovery and the development of chemical databases.<br /><br />The significance of this work lies in its application of graph isomorphism to solve a practical problem in chemistry, demonstrating the interdisciplinary nature of graph theory and its potential to impact research beyond theoretical computer science. This paper has been influential in the fields of cheminformatics and drug design, where molecular similarity plays a key role in identifying potential therapeutic agents.</p><p>BILL'S COMMENT: This information from ChatGPT would be more impressive if this paper actually existed. </p><p>(ADDED LATER- there is some debate in the comments about whether the paper exists. </p><p>DOES NOT EXIST: Bill could not find it on the web.</p><p>DOES NOT EXIST: Bill went to the website of that journal and did a search and did not find it. </p><p>DOES NOT EXIST: The DOI does not point to it.</p><p>DOES EXIST: Some slide packet pointed to it as a reference. </p><p>If someone leaves a comment with a pointer to it, or emails me a pointer to it, I will update this post.)</p><p><br /><br /><br /></p><p><br /></p>gasarchhttp://www.blogger.com/profile/03004932739846901628noreply@blogger.com10tag:blogger.com,1999:blog-3722233.post-19060735206630898552024-02-15T11:10:00.027-06:002024-02-15T13:12:23.483-06:00ChangeAn academic field often organizes itself by pulling ideas from its own field. 
For example, students on the economics faculty job search <a href="https://www.aeaweb.org/joe/signal/">can signal at most two schools</a>, without giving any other rules on how the signals should be used, allowing equilibrium behavior to rule.<div><br /></div><div>In computer science, our field is decentralized without any single organization setting policies or much coordination. We're very data oriented, so we do a good job with efforts like the <a href="https://cra.org/resources/taulbee-survey/">Taulbee Survey</a>. And because we're a field based on programming and logic, we tend to outsource decisions to specific rules, formulas for <a href="https://csrankings.org/">ranking departments</a>, judging faculty productivity (h-index, major conference publications, course evaluations) and graduation requirements.</div><div><br /></div><div>Computing itself has changed over the last decade with the growth of artificial intelligence and learning from data. We see many examples where computers trained on data dramatically outperform those built on logic, certainly in machine translation, facial recognition and spam detection.</div><div><br /></div><div>Will the changes in computing change the way we think about how we run our field? Probably, but over a long period of time. People don't change their ways, but eventually the people change.</div>Lance Fortnowhttp://www.blogger.com/profile/06752030912874378610noreply@blogger.com0tag:blogger.com,1999:blog-3722233.post-32732485469000600612024-02-11T13:53:00.000-06:002024-02-11T13:53:18.183-06:00Are there any REAL applications of Graph Isomorphism?Lance's post on Babai's result on Graph Isomorphism (henceforth GI) inspired some random thoughts on GI. (Lance's post is <a href="https://blog.computationalcomplexity.org/2024/02/favorite-theorems-graph-isomorphism.html">here</a>.) <div><br /></div><div>1) Here is a conversation with someone who I will call DAVE. This conversation took place in the late 1980's. 
</div><div><br /></div><div>BILL: You got your ugrad degree in engineering and then went to CS grad school. You are a pragmatic guy. So why did you work on graph isomorphism?</div><div><br /></div><div>DAVE: My advisor was working on it. The 1980's was a good time for GI: the problems restricted to bounded genus, bounded degree, and bounded eigenvalue multiplicity were proven to be in P. Tools from Linear Algebra and Group Theory were being used. People thought that GI might be shown to be in P within the next five years. Then it all stopped for quite some time. </div><div><br /></div><div>But back to your question about GI being practical, I asked my advisor for real applications of GI. He said: </div><div><br /></div><div><i>Organic Chemists use graph isomorphism to match chemical structures.</i></div><div><i><br /></i></div><div>By the time I found out this wasn't true, I already had my PhD. I quickly switched to Computational Geometry which really does have applications. Maybe. </div><div><br /></div><div>2) So why was graph isomorphism studied? Much to my surprise, a survey by Grohe and Schweitzer (see <a href="https://cacm.acm.org/magazines/2020/11/248220-the-graph-isomorphism-problem/fulltext">here</a>) says that </div><div><br /></div><div><i>Graph isomorphism as a computational problem first appeared in the chemical documentation literature of the 1950's (for example Ray and Kirsh) as the problem of matching a molecular graph against a database of such graphs.</i></div><div><i><br /></i></div><div>(The article by Ray and Kirsh is <a href="https://www.science.org/doi/10.1126/science.126.3278.814">here</a>.) </div><div><br /></div><div>So at one time it was thought that GI would apply to Chemistry. Did it? I suspect some simple algorithms and heuristics did, but (1) chemicals are more complicated than graphs, and (2) by the time we are talking about graph isomorphism of bounded eigenvalue multiplicity, the algorithms got too complicated to really use. 
BUT these are just my suspicions (what is the difference between a suspicion and a guess?) so I welcome comments or corrections.</div><div><br /></div><div>3) The same article also says:</div><div><i><br /></i></div><div><i>Applications [of GI] spans a broad field of areas ranging from Chemistry to computer vision. Closely related is the problem of detecting symmetries of graphs and of general combinatorial structures. Again this has many application domains, for example, combinatorial optimization, the generation of combinatorial structures, and the computation of normal forms. </i></div><div><i><br /></i></div><div>That <i>sounds</i> impressive, but I would like to know if the applications are to the real world, or to other theory things. </div><div><br /></div><div>4) The prominence of GI in the CS theory literature is because it's one of the few natural problems that is in NP, not known to be in P, and not known to be (and unlikely to be) NP-complete.</div><div><br /></div><div>5) Graph Isomorphism IS a natural problem. What is an unnatural problem? </div><div><br /></div><div>BILL: Do you find the following interesting: There is an r.e. set that is not decidable and not Turing-equivalent to HALT.</div><div><br /></div><div>DARLING: Yes. Unless it was some dumb-ass set that people like you constructed for the whole point of being r.e., not decidable, and not Turing-equivalent to HALT. </div><div><br /></div><div>BILL: You nailed it!</div><div> </div><div>6) So, is GI in P? This is a truly open question in that it really could go in either direction. There is no real consensus. </div><div> </div><div>7) I have heard that Babai's result is as far as current techniques can take us. 
So we await a new idea.</div>gasarchhttp://www.blogger.com/profile/03004932739846901628noreply@blogger.com14tag:blogger.com,1999:blog-3722233.post-83900846864188328032024-02-07T09:37:00.000-06:002024-02-07T09:37:44.692-06:00Favorite Theorems: Graph Isomorphism<p>We start our favorite theorems of the last decade with a blockbuster improvement in a long-standing problem. </p><p style="text-align: center;"><a href="https://arxiv.org/abs/1512.03547">Graph Isomorphism in Quasipolynomial Time</a> by László Babai</p><p style="text-align: left;">The graph isomorphism problem takes two graphs G and H both on n nodes and asks if there is a permutation \(\sigma\) such that \((i,j)\) is an edge of G if and only if \((\sigma(i),\sigma(j))\) is an edge of H. While we can solve graph isomorphism in practice, until Babai's paper we had no known subexponential algorithm for the problem.</p><p style="text-align: left;">Graph Isomorphism has a long history in complexity. It has its own self-named complexity class in the <a href="https://complexityzoo.net/Complexity_Zoo:G#gi">zoo</a>. It's one of the few decision problems in NP not known to be NP-complete or in P. Graph Isomorphism and its complement are the poster child problems for complexity classes <a href="https://dl.acm.org/doi/abs/10.1145/116825.116852">AM</a>, <a href="https://dl.acm.org/doi/abs/10.1145/116825.116852">SZK</a> and <a href="https://doi.org/10.1016/j.ic.2006.02.002">SPP</a>, <a href="https://dl.acm.org/doi/10.1145/12130.12137">public-coin proof systems</a>, and the <a href="https://arxiv.org/abs/1710.09806">minimum-size circuit problem</a>. Köbler, Schöning and Torán wrote a <a href="https://link.springer.com/book/10.1007/978-1-4612-0333-9">whole book</a> on the structural complexity of graph isomorphism.</p><p style="text-align: left;">Graph Isomorphism has the kind of group structure that suggests a quick quantum algorithm, but that remains an annoying open problem. 
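The definition above can be run literally as a brute-force check over all n! permutations. A minimal Python sketch (the example graphs are made up for illustration, and this is the factorial-time baseline, nothing like Babai's algorithm):

```python
from itertools import permutations

def is_isomorphic(n, edges_g, edges_h):
    """Brute-force graph isomorphism on n nodes: try every permutation
    sigma and check that (i,j) is an edge of G iff (sigma(i),sigma(j))
    is an edge of H.  This is the definition run literally, in n! time."""
    G = {frozenset(e) for e in edges_g}
    H = {frozenset(e) for e in edges_h}
    if len(G) != len(H):  # a bijection must preserve the edge count
        return False
    return any(
        all(frozenset((sigma[i], sigma[j])) in H for i, j in G)
        for sigma in permutations(range(n))
    )

c4     = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a 4-cycle
c4_rel = [(0, 2), (2, 1), (1, 3), (3, 0)]   # the same cycle, relabeled
k3_pen = [(0, 1), (1, 2), (2, 0), (2, 3)]   # triangle with a pendant edge

assert is_isomorphic(4, c4, c4_rel)       # relabeling preserves structure
assert not is_isomorphic(4, c4, k3_pen)   # same edge count, different shape
```

Practical solvers do vastly better than n! on real instances; Babai's contribution was bringing the worst case down to quasipolynomial.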
</p><p style="text-align: left;">Babai found a simple algorithm and proof based on <a href="https://link.springer.com/book/10.1007/978-1-4612-0333-9">Johnson Graphs</a>. Who am I kidding? Even an expert group theorist would need several months to work through the paper. </p><p style="text-align: left;">Babai has been working on graph isomorphism since the 80's. In November 2015, he announced a talk with that title and the theory world went crazy. They needed a larger room. We invited Babai down to Georgia Tech to give a talk. A week before, he sent me an email saying the proof no longer gave the quasipolynomial time bound and asked if he should still give the talk. We said of course and he gave the talk on January 9, 2017, describing the problem with the proof. And then he announced "It's been fixed". I <a href="https://blog.computationalcomplexity.org/2017/01/babai-strikes-back.html">witnessed history</a> in the making. </p>Lance Fortnowhttp://www.blogger.com/profile/06752030912874378610noreply@blogger.com2tag:blogger.com,1999:blog-3722233.post-57340324419718198462024-02-04T13:38:00.000-06:002024-02-04T13:38:54.709-06:00The advantage of working on obscure things<p>I got an email from an organization that wants to publicize one of my papers. Which paper did they want to publicize?</p><p>1) If the organization was Quanta, they are KNOWN so I would trust it. Indeed, they have contacted me about my paper proving primes infinite from Ramsey theory (they contacted all of the authors of papers that did that). 
The article they published is <a href="https://www.quantamagazine.org/why-mathematicians-re-prove-what-they-already-know-20230426/">here</a> and I blogged about those proofs <a href="https://blog.computationalcomplexity.org/2023/05/there-are-infinite-number-of-proofs.html">here</a>.</p><p>2) Someone writing an article on my muffin work for a recreational journal contacted me- also believable.</p><p>3) But the email I got was from an organization I had never heard of and they wanted to publicize <a href="https://arxiv.org/abs/1205.3813">An NP-Complete Problem in Grid Coloring</a>. This is not a topic of general interest, even among complexity theorists or Ramsey theorists. Indeed, the paper has three authors and only two of them care about it. Hence I suspected the organization was a scam or quasi-scam. I looked them up and YES, at some point they would have wanted money for their efforts. </p><p>I want to say <i>none of my work is of general interest </i>but that may depend on what is meant by <i>general </i>and by <i>interest. </i>Suffice to say that if a scammer claims they want to publicize my work and picks a random paper to ask me about, the probability that it's one where I will say <i>Yes, I can see that one being worth getting to a wider audience</i> is negligible. Indeed, Darling was amazed that Quanta cared about using Ramsey Theory to prove primes infinite. She may have a point there.</p><p>But if I worked in Quantum or ML or Quantum ML then it would be harder for me to say for a random paper <i>they can't possibly think that this has wider appeal, so it's obviously a scam. </i></p><p>That is the advantage of working on obscure things. <br /></p>gasarchhttp://www.blogger.com/profile/03004932739846901628noreply@blogger.com1tag:blogger.com,1999:blog-3722233.post-24403831575480336462024-01-31T14:45:00.001-06:002024-01-31T14:45:56.841-06:00University Challenges<p>Seems like US universities have been in the news quite a bit recently. 
You'd think it would be for the great academics and research. Alas, no.</p><p>I decided to make a list of the not-so-unrelated topics. The list got long and is still far from complete.</p><p></p><ul style="text-align: left;"><li>Public perception of universities has been <a href="https://news.gallup.com/poll/508352/americans-confidence-higher-education-down-sharply.aspx">dropping</a>. </li><li><a href="https://www.forbes.com/sites/michaeltnietzel/2024/01/24/college-enrollment-up-176000-undergraduates-according-to-new-report/">Enrollment</a>: Up in 2023 for the first time since the pandemic. But still down 900,000 undergrads from five years ago and the <a href="https://www.bestcolleges.com/news/analysis/looming-enrollment-cliff-poses-serious-threat-to-colleges/">demographic cliff</a> is just a couple years away.</li><li><a href="https://www.insidehighered.com/news/business/financial-health/2024/01/25/2024-begins-wave-job-and-program-cuts">Fiscal Challenges</a> even at Penn State and University of Chicago.</li><li>On the other hand are those with <a href="https://www.cbsnews.com/news/harvard-endowment-2023-harvard-president-salary/">huge endowments</a>.</li><li>Where have the men gone? While CS is still predominantly male, men make up <a href="https://www.pewresearch.org/short-reads/2023/12/18/fewer-young-men-are-in-college-especially-at-4-year-schools/">only about 40%</a> of undergrads at four-year colleges. The percentages are lower for African-American and Hispanic men.</li><li>The <a href="https://www.diverseeducation.com/faculty-staff-issues/article/15380737/report-shows-the-shift-away-from-tenured-faculty-in-the-us-academic-workforce">increase in teaching faculty</a> over tenure-track. </li><li>AI - How best to use it to help the educational process without letting students cheat, and where do you draw the line? </li><li>State governments exerting control. 
We are seeing a number of conservative states pushing back against DEI and the perceived liberal bias.</li><li>Affirmative Action - How do universities maintain a diverse student body in the wake of the Supreme Court ruling last summer?</li><li>Admissions policies that favor the rich, notably legacy admissions and sports other than football and basketball, where wealthier kids have the time, training and equipment to succeed.</li><li>SAT - Most schools have eliminated the SAT requirement but <a href="https://www.nytimes.com/2024/01/07/briefing/the-misguided-war-on-the-sat.html?unlocked_article_code=1.Rk0.qCF4.fSt83c_Fb2ur&bgrp=a&smid=url-share">should they bring it back</a>? </li><li>College is seen more as a place to build career skills. STEM fields especially in computing have seen huge gains in enrollment while humanities and social sciences have been decimated. How should colleges respond?</li><li>Competition from certificate programs, online degrees, apprenticeships and boot camps.</li><li>Athletics - Chasing ever-increasing broadcast revenue has restructured conferences (Goodbye <a href="https://en.wikipedia.org/wiki/Pac-12_Conference">Pac-12</a>). <a href="https://en.wikipedia.org/wiki/Student_athlete_compensation">Name-Image-Likeness</a> has made some college athletes, notably in football and basketball, significant money. <a href="https://www.insidehighered.com/news/students/athletics/2023/10/23/report-documents-rise-donor-collectives-college-sports">Alumni collectives</a> and easier transfer rules have turned college student athletes into free agents. 
Meanwhile some lower revenue sports are getting cut.</li><li>Colleges as a <a href="https://www.insidehighered.com/news/government/student-aid-policy/2023/11/06/trump-hits-colleges-wokeness-pitches-free-online">political football</a> as <a href="https://www.nytimes.com/2021/09/08/us/politics/how-college-graduates-vote.html?unlocked_article_code=1.Rk0.r0EO.nBUA4E4bz2qP&bgrp=a&smid=url-share">college graduates trend democratic</a>.</li></ul><p></p><div>The Israel-Gaza-Hamas conflict alone has supercharged a number of issues.</div><div><ul style="text-align: left;"><li>When and how should universities take a stand on political issues? </li><li><a href="https://www.axios.com/2023/12/06/harvard-university-presidents-congress-hearing-antisemitism-campus">Congressional hearings</a> into university policies.</li><li>Free speech, especially when it creates disruption, makes people feel unsafe and leads to discrimination. Where do you draw the line?</li><li>Activist donors and boards. As universities rely more on "high-net-worth individuals", these large donors can hold considerable sway, including pushing out presidents. </li></ul><div>So as a college professor or graduate student, how do you deal with all of the above? 
Best to ignore it all and focus on your research and teaching.</div></div><p></p>Lance Fortnowhttp://www.blogger.com/profile/06752030912874378610noreply@blogger.com2tag:blogger.com,1999:blog-3722233.post-52502111152866502932024-01-28T12:21:00.003-06:002024-01-28T12:27:43.339-06:00Certifying a Number is in a set A using Polynomials <p>(This post was done with the help of Max Burkes and Larry Washington.)</p><p><br />During this post \(N= \{0,1,2,\ldots \}\) and \(N^+=\{1,2,3,\ldots \}\).<br /><b></b><br /></p><p></p><p></p><p><b>Recall:</b> Hilbert's 10th problem was to (in today's terms) find an algorithm that would, on input a polynomial \(p(x_1,\ldots,x_n)\in Z[x_1,\ldots,x_n]\), determine if there are integers \(a_1,\ldots,a_n\) such that \(p(a_1,\ldots,a_n)=0\).<br /><br />From the combined work of Martin Davis, Yuri Matiyasevich, Hilary Putnam, and Julia Robinson it was shown that there is no such algorithm. I have a survey on the work done since then, see <a href="https://arxiv.org/abs/2104.07220">here</a>.<br />The following is a corollary of their work:<br /><br /><br /><b>Main Theorem:</b><br />Let \(A\subseteq N^+\) be an r.e. set. 
There is a polynomial \(p(y_0,y_1,\ldots,y_n)\in Z[y_0,y_1,\ldots,y_n]\) such that<br /><br />\((x\in A) \hbox{ iff } (\exists a_1,\ldots,a_n\in {\sf N})[(p(x,a_1,\ldots,a_n)=0) \wedge (x>0)].\)<br /><br /><b>Random Thoughts:</b><br /><br />1) Actual examples of polynomials \(p\) are of the form<br /><br />\(p_1(y_0,y_1,\ldots,y_n)^2 + p_2(y_0,y_1,\ldots,y_n)^2 + \cdots + p_m(y_0,y_1,\ldots,y_n)^2\)<br /><br />as a way of saying that we want \(a_1,\ldots,a_n\) such that the following are all true simultaneously:<br /><br />\(p_1(x,a_1,\ldots,a_n)=0\), \(p_2(x,a_1,\ldots,a_n)=0\), \(\ldots\), \(p_m(x,a_1,\ldots,a_n)=0\).<br /><br />2) The condition \(x>0\) can be phrased<br /><br />\((\exists z_1,z_2,z_3,z_4)[x-1-z_1^2-z_2^2-z_3^2-z_4^2=0].\)<br /><br />This phrasing uses that every natural number is the sum of 4 squares.<br /><br /><br />The <b>Main Theorem</b> gives a way to certify that \(x\in A\): find \(a_1,\ldots,a_n\in N\)<br />such that \(p(x,a_1,\ldots,a_n)=0\).<br /><br />Can we really find such \(a_1,\ldots,a_n\)?</p><p></p><p></p><p></p><p></p><p></p><p></p><p></p><p></p><p>A high school student, Max Burkes, working with my math colleague Larry Washington, worked on the problem of finding \(a_1,\ldots,a_n\).<br /><br />Not much is known on this type of problem. We will see why soon. Here is a list of what is known.<br /><br />1) Jones, Sato, Wada, Wiens (see <a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/Jonesh10.pdf">here</a>) obtained a 26-variable polynomial \(q(x_1,\ldots,x_{26})\in Z[x_1,\ldots,x_{26}]\) such that<br /><br />\(x\in \hbox{PRIMES iff } (\exists a_1,\ldots,a_{26}\in N)[(q(a_1,\ldots,a_{26})=x) \wedge (x>0)].\)<br /><br />To obtain a polynomial that fits the Main Theorem take<br /><br />\(p(x,x_1,\ldots,x_{26},z_1,z_2,z_3,z_4)=(x-q(x_1,\ldots,x_{26}))^2+(x-1-z_1^2-z_2^2-z_3^2-z_4^2)^2.\)<br /><br />Jones et al. 
wrote the polynomial \(q\) using as variables \(a,\ldots,z\), which is cute since that's all of the letters in the English alphabet. See their paper pointed to above, or see Max's paper <a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/BurkesMax.pdf">here</a>.<br /><br />2) Nachiketa Gupta, in his Masters Thesis (see <a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/PrimeThesis.pdf">here</a>), tried to obtain the 26 numbers \(a_1,\ldots,a_{26}\) such that \(q(a_1,\ldots,a_{26})=2\) where \(q\) is the polynomial that Jones et al. came up with. Nachiketa Gupta found 22 of them. The other 4 are, like the odds of getting a Royal Fizzbin, astronomical. Could today's computers (21 years later) or AI or Quantum or Quantum AI obtain those four numbers? No, the numbers are just too big.<br /><br />3) There is a 19-variable polynomial \(p\) from the Main Theorem for the set<br /><br />\(\{ (x,y,k) \colon x^k=y\}.\)<br /><br />See Max's paper <a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/BurkesMax.pdf">here</a>, pages 2 and 3, equations 1 to 13. The polynomial \(p\) is the sum of squares of those equations. So for example \(r(x,y,z)=1\) becomes \((r(x,y,z)-1)^2\).<br /><br />Max Burkes found the needed numbers to prove \(1^1=1\) and \(2^2=4.\) The numbers for the \(2^2=4\) are quite large, though they can be written down (as he did).</p><p> <br /><b>Some Random Thoughts:</b><br /><br />1) It is good to know some of these values, but we really can't go much further.<br /><br />2) Open Question: Can we obtain polynomials for primes and other r.e. sets so that the numbers used are not that large? 
Tangible goals:<br /><br />(a) Get a complete verification-via-polynomials that 2 is prime.<br /><br />(b) Get the numbers to verify that \(2^3=8\).<br /><br />3) In a 1974 book about progress on Hilbert's problems (I reviewed it in my book review column, see <a href="https://www.cs.umd.edu/~gasarch/bookrev/44-4.pdf">here</a>) there is a chapter on Hilbert's 10th problem by Davis-Matiyasevich-Robinson that notes the following. Using the polynomial for primes, there is a constant \(c\) such that, for all primes \(p\), there is a computation that shows \(p\) is prime in \( \le c \) operations. The article did not mention that the operations are on enormous numbers.<br /><br />OPEN: Is there some way to verify a prime with a constant number of operations using numbers that are not quite so enormous?<br /><br /></p><p></p><p></p><p><br /><br /><br /></p>gasarchhttp://www.blogger.com/profile/03004932739846901628noreply@blogger.com0tag:blogger.com,1999:blog-3722233.post-69456094018777050262024-01-24T10:54:00.001-06:002024-01-26T07:23:47.392-06:00Learning Complexity on Your OwnThe following request came from a <a href="https://blog.computationalcomplexity.org/2024/01/favorite-theorems-2024.html?showComment=1704703425269#c1875304943136644983">comment</a> earlier this month (shortened)<div><blockquote><div>Could you give some advice on how to study complexity theory on one's own, and/or to follow the research frontier even to find one's own research topic, for someone with solid math and TCS background (say somewhere between Sipser and Arora & Barak), nonetheless an outsider? For example, what books/papers to read? How hard is it to follow all the important results? 
How would you determine whether a new paper is worth reading in full details?</div><div><br /></div><div>Of course the short answer is to go to graduate school, and my question is mainly for those who don't have the luxury, partially motivated by <a href="https://goodtheorist.science/">’t Hooft</a> and <a href="https://theoreticalminimum.com/biography">Susskind</a> on physics.</div></blockquote><p>Let me suggest a backwards approach. Find interesting papers by checking the latest proceedings in major conferences such as <a href="https://dl.acm.org/conference/stoc">STOC</a>, <a href="http://ieee-focs.org/">FOCS</a> and <a href="https://computationalcomplexity.org/conferences.php">Computational Complexity</a> or those mentioned on blogs or <a href="https://www.quantamagazine.org/computer-science/">Quanta</a>. If you don't have access to the papers you can often find them on authors' home pages or on <a href="https://arxiv.org/list/cs.CC/recent">arXiv</a> or <a href="https://eccc.weizmann.ac.il//eccc/">ECCC</a>. First look at titles that interest you, then read the abstract, intro and the full paper. If you lose interest, go on to another paper. </p><p>Once you find a paper that excites you, start reading in detail. When you find terms or references to old results you don't know, go do some research, either the cited papers or sites like the <a href="https://complexityzoo.net/Complexity_Zoo">Complexity Zoo</a>, Wikipedia and <a href="https://cstheory.stackexchange.com/">TCS Stack Exchange</a>. You can also find videos and lecture notes on various topics. Large language models like ChatGPT can help you understand concepts, but keep a healthy skepticism about what they say, particularly on specialized material.</p><p>As you read the cited papers, repeat the process. You're basically doing a depth-first search through the literature. 
If you do this for a paper like Ryan Williams' <a href="https://www.cs.cmu.edu/~ryanw/acc-lbs.pdf">NEXP is not in ACC\(^0\)</a>, you'll get a pretty well-rounded education in complexity. </p><div></div></div>Lance Fortnowhttp://www.blogger.com/profile/06752030912874378610noreply@blogger.com5tag:blogger.com,1999:blog-3722233.post-71316417626249721172024-01-21T20:45:00.002-06:002024-01-21T20:45:16.292-06:00A paper that every Undergraduate should read<p> The paper <i>As we may think</i></p><p>by</p><p>Vannevar Bush </p><p>appeared in <br /></p><p><i>The Atlantic Monthly</i> in July 1945.</p><p> I first read it since it was one of the papers in <a href="https://www.amazon.com/Ideas-That-Created-Future-Computer/dp/0262045303">Ideas that Created the Future: Classic Papers in Computer Science</a>, which I reviewed <a href="https://www.cs.umd.edu/~gasarch/bookrev/FRED/greatideas.pdf">here</a>.</p><p>The link to the paper is <a href="http://web.mit.edu/STS.035/www/PDFs/think.pdf">here</a>.</p><p>This paper predicts so much about the future that it should be read by every undergraduate. Not just every computer science major, but every undergraduate. I am teaching Honors Discrete Math (Freshmen and Sophomores) and Automata Theory (Juniors and Seniors) and on hw00 they are required to read the paper and write about two advances that were predicted.</p><p>I now ask my readers to do the same: Read it and</p><p>a) Leave comments about what was predicted that came true.</p><p>b) Leave comments about what was predicted that did not come true.
</p><p>c) Leave comments on anything related to the paper that you want to.</p><p><br /></p><p>Enjoy!</p><p><br /></p>gasarchhttp://www.blogger.com/profile/03004932739846901628noreply@blogger.com3tag:blogger.com,1999:blog-3722233.post-48025633096567226662024-01-17T07:15:00.001-06:002024-01-17T07:16:07.270-06:00Offer Timing<p>In most academic fields, departments, either formally or informally, have their interviews and make their junior faculty offers at about the same time. Faculty candidates know their options and decisions get made quickly. </p><p>Not so in computer science, decentralized by design. Interviews and offers are made from January through June (or beyond). Candidates sometimes will hold an offer for months to see if they will eventually get an offer that fits them better. Finding jobs in the same location for partners further muddies the waters. This is certainly a topic I've <a href="https://blog.computationalcomplexity.org/2016/11/computer-science-academic-hiring.html">tackled</a> <a href="https://blog.computationalcomplexity.org/2016/12/fixing-academic-job-market.html">before</a>, but I've lost hope that we'll ever see real reform in CS faculty recruiting. So how should departments time their offers?</p><p>I tried many different strategies when I was department chair at Georgia Tech. We tried making offers early, say in February, but by the time these candidates were ready to decide months later, your university was out of their recent memory. Some departments try to make early offers with short strict fuses. I don't like that approach--it puts undue stress on young candidates and creates resentment even if successful.
Candidates will also use early offers as leverage to get offers out of other schools.</p><p>On the other hand, if you hold off too long, you may lose the candidate to another university or not have time to find a position for the partner.</p><p>I found the sweet spot is early April to start making offers, though you need to hold off faculty chomping at the bit to make offers earlier. But that's the job as chair. As the saying goes, if a coach starts listening to the crowd, he soon becomes one of them.</p><p>I've heard the argument that not making an offer right after the interview makes the candidate feel that we didn't want them. That goes away if you say upfront you hold off offers until April and stick to it. </p><p>If a candidate tells me that they have another offer with a "tight" deadline, I tell them to push back. Almost all departments will give an extension rather than lose a good candidate.</p><p>Every rule has its exceptions. If a candidate you want says they'll accept an offer if you make it now and you believe them, go ahead and lock them up. If there is a risk that holding off on making an offer means that position might disappear for financial or other reasons, then make those offers earlier.</p><p>My secret plan is that every department steals this strategy, we all make offers at the same time, and the reason for this post goes away.
The problem is NOT mine and the answer is KNOWN. However, I DO NOT think it's well known, and I DO think it's interesting. My next post will have the answer. </p><p>PROBLEM:</p><p>A <i>6-sided fair die</i> is a 6-tuple of positive natural numbers. (NOTE- POSITIVE NATURAL NUMBERS SO YOU CANNOT USE ZERO. I am emphasizing this since some answers in the comments used 0.) </p><p>The <i>standard 6-sided die</i> is (1,2,3,4,5,6).</p><p>Do there exist two 6-sided dice \(a_1,\ldots,a_6\) and \(b_1,\ldots,b_6\) (the numbers need not be distinct) such that</p><p>a) The dice are NOT standard.</p><p>b) When you roll the two dice you get THE SAME probabilities of sums as rolling two standard dice (we are assuming the dice are fair so the prob of any side is 1/6, though if a die has two faces with 4 pips on them, then of course the prob of getting a 4 will be 1/3). So, for example, the probability of getting a sum of 2 is 1/36, the probability of getting a sum of 7 is 1/6.</p><p>-----------------------------------------------------------</p><p>Some pips on this:</p><p>a) This problem was posed and solved in a Martin Gardner column, and later generalized by Gallian and Rusin (the paper is <a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/nonstandarddice.pdf">here</a>). I did a writeup of just the two 6-sided dice case and a few other things <a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/billdice.pdf">here</a>. I also have slides on the problem <a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/dicetalk.pdf">here</a>.</p><p>b) The only answer is (1,2,2,3,3,4) and (1,3,4,5,6,8).
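That claim is easy to check by direct computation; here is a quick sanity check I sketched in Python, comparing the counts of each sum over the 36 equally likely rolls:

```python
from collections import Counter
from itertools import product

def sum_distribution(die_a, die_b):
    """Counts of each sum over the 36 equally likely (face, face) outcomes."""
    return Counter(a + b for a, b in product(die_a, die_b))

standard = (1, 2, 3, 4, 5, 6)
sicherman_a = (1, 2, 2, 3, 3, 4)
sicherman_b = (1, 3, 4, 5, 6, 8)

# Since each die is fair, equal counts mean equal probabilities of every sum.
assert sum_distribution(sicherman_a, sicherman_b) == sum_distribution(standard, standard)
print("(1,2,2,3,3,4) and (1,3,4,5,6,8) match two standard dice")
```

A brute-force search over all 6-tuples with faces in, say, {1,...,11} would confirm this is the only nonstandard pair, but the polynomial argument below is cleaner.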
Here is the first step of the solution, though you're better off reading my writeup pointed to in item a.</p><p>We seek \(a_1,\ldots,a_6\) and \(b_1,\ldots,b_6\) (NOT (1,2,3,4,5,6)) such that</p><p>\((x^{a_1} + \cdots + x^{a_6})(x^{b_1}+ \cdots + x^{b_6}) = (x^1 + \cdots + x^6)^2 \).</p><p>Using polynomials also lets you solve the problem for other types of dice, as shown in my writeup, my slides, and the original paper. (Spellcheck thinks <i>writeup</i> is not a word. I disagree.) <br /></p><p>c) How did I find this question, and its answer, <i>at random</i>? I intentionally went to the math library, turned my cell phone off, and browsed some back issues of the journal <i>Discrete Mathematics</i>. I would read the table of contents, decide which articles sounded interesting, and read enough to see if I really wanted to read them. I then SAT DOWN AND READ THE ARTICLES, taking some notes on them. </p><p>d) This is more evidence that perhaps we should unplug, at least partially, sometimes. I blogged about that <a href="https://blog.computationalcomplexity.org/2022/11/would-you-be-more-productive-if-you.html">here</a>.</p><p>e) Technology is NOT the real issue here. It's allowing yourself the freedom to NOT work on a paper for the next STOC/FOCS/WHATNOT conference and just read math for FUN without thinking in terms of writing a paper. I am supposed to say <i>you never know when random</i> <i>knowledge may help you get a result</i> but that's the wrong mentality since it circles back to being non-random, as you will only read things that you think will lead to results.</p><p>f) The STEM library at UMCP stopped getting PAPER journals a while back. Hence the only journals I can look at are from before a certain date. Should the STEM library subscribe to paper journals just so I can read journals at random? OF COURSE NOT. Instead I may in the future surf the web in an intelligent way looking for random articles I find interesting.
I have sometimes used the arXiv for this, though I also sometimes go into a NON-PRODUCTIVE black hole.<br /></p><p>g) Older journals are sometimes more readable. The STEM library still has plenty of them, so I might not need to use the web for this for a while. </p><p>h) For the purposes of random browsing, journals that are focused and whose name says what their focus is (e.g., <i>Discrete Math</i>, <i>Journal of Symbolic Logic</i>, <i>Conference on Topological Algebraic Topology</i>) are good since you know what's in them and hence whether you care (or have enough background knowledge). By contrast, journal titles like <i>Proceedings of the London Math Society</i>, <i>Duke Mathematics Journal</i>, or <i>Southern North Dakota Math Journal</i> don't tell you anything about what's in them, so they're not worth looking at for this purpose. <br /></p><p>i) SO, is the dice problem I posted on <i>well known</i>? From the comments on the original post, and my own observations, here are arguments for YES and for NO.<br /></p><p>YES, Well known: (i) There is a name for this kind of dice, Sicherman dice. (ii) There is a Wikipedia page on Sicherman dice <a href="https://en.wikipedia.org/wiki/Sicherman_dice">here</a>. (iii) You can buy Sicherman dice <a href="https://trick-dice.com/product/sicherman-dice/">here</a>. (iv) The problem is in the book <i>Concrete Mathematics</i> as an exercise in Chapter 8, page 431. (v) There was a Martin Gardner column, so it was well known at one time (though that may have faded). </p><p>NO, Not Well known: (i) I had not heard of it and I've been around math and dice (I have a paper on loaded dice) for a long time. (ii) Lance had not heard of it.
(iii) Some of my readers had not heard of it.</p><p>UPSHOT: The notion of <i>well known </i>is not <i>well defined</i>; however, if some of my readers did not know it, and are now enlightened, then I am happy.</p>gasarchhttp://www.blogger.com/profile/03004932739846901628noreply@blogger.com1tag:blogger.com,1999:blog-3722233.post-86484270249293882512024-01-10T21:36:00.002-06:002024-01-11T16:12:32.209-06:00Guest Post by Mohammad Hajiaghayi on SODA Business Meeting 2024<p>This is a guest post by Mohammad Hajiaghayi on the SODA Business Meeting from 2024. The meeting was held in Alexandria, Virginia, Jan 7-10, 2024. The following meetings were held jointly: <br /></p><p>SODA: ACM-SIAM Symposium on Discrete Algorithms</p><p>ALENEX: SIAM Symposium on Algorithm Engineering and Experiments<br /></p><p>SOSA: Symposium on Simplicity in Algorithms</p><p>The website for SODA 2024 is <a href="https://www.siam.org/conferences/cm/conference/soda24">here</a>. <br /></p><p>And now the guest post:</p><p>---------------------------------------------------------------------------------------------</p><p>SODA Business Meeting Report:<br /><br />After a few years of being unable to attend the SODA business meeting due to factors like Covid, I was able to participate this time, especially since it took place nearby in Alexandria, VA. So I thought it would be a good idea to provide a brief report on the business meeting.<br /><br />DISCLAIMER: Please be aware that this report is solely based on MY PERSONAL recollection of the meeting and is intended solely as an informal update.<br /><br />The session started with Piotr Indyk, the chair of the SODA Steering Committee, presenting a slide outlining the meeting agenda. Following that, SODA24 PC Chair David Woodruff gave a brief overview of SODA24's numbers, including the record-high number of submissions, exceeding 650.
Following that, Seth Pettie, SOSA PC Chair, shared a concise overview, highlighting the enjoyable experience of being a PC member for SOSA, where papers are typically short, often less than 15 pages, emphasizing simplicity in algorithms. Best paper awards were then presented for both SODA and SOSA, detailed <a href="https://www.siam.org/conferences/cm/program/special-events/soda24-special-events">here</a>, including Greg Bodwin's <i>An Alternate Proof of Near-Optimal Light Spanners</i> for SOSA. Pettie bragged about a distinctive trophy for SOSA's best paper authors, in contrast to SODA, which only offers paper acknowledgments for awards. :)<br /><br />Looking ahead, SODA is scheduled for New Orleans next year, followed by Vancouver in the subsequent year.<br /><br />The majority of the discussion focused on the acceptance rate for PC submissions, particularly since, as David mentioned, the process is double-blind and the PC is sizable, so allowing PC members to submit can enhance the overall paper quality. The PC acceptance rate was initially around 42%, significantly higher than the overall acceptance rate of 29%, and the Steering Committee of SODA requested a reduction to 35% for PC submissions this year.<br /><br />In the past year, the fixed rate was 30%, prompting a debate on determining the appropriate acceptance rate. Nikhil Bansal shared insights into the acceptance rate of FAMOUS AUTHORS (authors with more than 20 papers in SODA) over the past few years, revealing no significant changes. While some argued against controlling the PC acceptance rate, I think the summary was to examine the average acceptance rate of PC members who were not part of the PC in the past few years. This evaluation could potentially guide the establishment of a threshold rate.
This aligns with a truthful mechanism that I discussed with David Woodruff before the session.<br /><br />During the discussion of PC submissions, Mikkel Thorup raised a subtle point about double-blind submissions, where the authors might subtly boast about their results compared to their own previous work, potentially resulting in incremental advancements, as improving upon others is highly regarded in our community. I believe PC members, rather than reviewers, should diligently verify such claims for all papers, particularly regarding previous work, without the need for author names. Additionally, if necessary, the PC could inquire with the PC chair who has access to the authors' names for cross-referencing with previous works.<br /><br />Last but not least, in the concluding business segment of the meeting, I expressed my satisfaction with the adoption of double-blind submissions as the default for SODA/STOC/FOCS, consistently advocating for it in theory conference business meetings and behind the scenes (e.g., refer to my earlier conversation with SODA founder, the late David Johnson, <a href="https://blog.computationalcomplexity.org/2016/03/mohammadtaghi-hajiaghayi-on-david.html">here</a>.) Despite experiencing increased difficulty in getting my own papers accepted in a double-blind SODA, being ranked second among SODA <i>Famous Authors</i> by DBLP (as defined by Nikhil), I firmly believe that double-blind submissions are the correct and fair approach. This is based on the principle that an author's name should not influence the fate of a scientific claim. I also voiced my happiness at the extensive discussion on fair policies for PC authors and even advocated for transparent criteria in selecting PC Chairs for SODA/FOCS/STOC. I stressed in particular the significance of valuing contributions to the conference, commonly measured by the number of papers, in selecting PC Chairs as those who dedicate effort to a conference should have a say in the process. 
This stands in contrast to relying solely on potentially biased personal opinions when such a quantitative criterion does not exist. Regardless, I highlighted the need for transparency in any criteria used to select a PC Chair, who can contribute further to diversity by forming the PC. I suggested discussing these criteria in business meetings, akin to our approach to other topics, to foster fairness and a sense of belonging, and to prevent the conference from becoming an <i>Insider Club</i> (the term used by Martin Farach-Colton during the meeting), ensuring inclusivity.<br /><br /></p>gasarchhttp://www.blogger.com/profile/03004932739846901628noreply@blogger.com0tag:blogger.com,1999:blog-3722233.post-80851353343762356262024-01-07T15:41:00.005-06:002024-01-07T21:25:02.903-06:00A nice dice problem- Part I<p>In this blog I pose a dice problem. The problem is NOT mine and the answer is KNOWN. However, I DO NOT think it's well known, and I DO think it's interesting. My next post will have the answer. </p><p>PROBLEM:</p><p>A <i>6-sided fair die</i> is a 6-tuple of positive natural numbers. (NOTE- POSITIVE NATURAL NUMBERS SO YOU CANNOT USE ZERO. I am emphasizing this since some answers given used 0.) </p><p>The <i>standard 6-sided die</i> is (1,2,3,4,5,6).</p><p>Do there exist two 6-sided dice \(a_1,\ldots,a_6\) and \(b_1,\ldots,b_6\) (the numbers need not be distinct) such that</p><p>a) The dice are NOT standard.</p><p>b) When you roll the two dice you get THE SAME probabilities of sums as rolling two standard dice (we are assuming the dice are fair so the prob of any side is 1/6, though if a die has two faces with 4 pips on them, then of course the prob of getting a 4 will be 1/3). So, for example, the probability of getting a sum of 2 is 1/36, the probability of getting a sum of 7 is 1/6. <br /></p><p>A few pips for now:</p><p>a) Try to get a method to solve it for two <i>d</i>-sided dice.
Or other generalizations.</p><p>b) You may post the answer or whatever thoughts on the problem you want; however, if you want to solve it without any hints, don't look at the comments. </p><p>c) I do not know how hard this problem is to solve since I read the solution before really trying to solve it. I do not know how hard it is to find the answer online since I found the paper at random. </p><p>d) I think the problem is not well known. I could be wrong. Leave comments if you've heard of it or not heard of it or whatever, to comment on that point. Note that <i>well known </i>is not a well defined term.<br /></p><p> <br /></p><p><br /></p><p><br /></p>gasarchhttp://www.blogger.com/profile/03004932739846901628noreply@blogger.com9tag:blogger.com,1999:blog-3722233.post-36364444323242882482024-01-04T13:39:00.003-06:002024-01-05T08:14:13.611-06:00Favorite Theorems 2015-2024<p>In 1994, the FST&TCS conference invited me to give a talk at their meeting in Madras (now Chennai). I was in my tenth year as a complexity theorist since I started graduate school and looked back at a very impressive decade in computational complexity. So for the talk and <a href="http://dx.doi.org/10.1007/3-540-58715-2_130">corresponding paper</a> (<a href="https://lance.fortnow.com/papers/files/topten.pdf">PDF</a>) I went through my favorite ten theorems from 1985-1994, and used them to describe progress in a slew of subareas of complexity. </p><p>In 2002 I started this blog and in 2004 decided to <a href="https://blog.computationalcomplexity.org/2004/12/favorite-theorems-recap.html">repeat the exercise</a> for the following decade, and <a href="https://blog.computationalcomplexity.org/2014/12/favorite-theorems-recap.html">again in 2014</a>.
I also went back through my favorite theorems from the early days of complexity, <a href="http://blog.computationalcomplexity.org/2005/12/favorite-theorems-first-decade-recap.html">1965-1974</a> and <a href="http://blog.computationalcomplexity.org/2006/12/favorite-theorems-second-decade-recap.html">1975-1984</a>. </p><p>It's 2024 and as I approach my fortieth year in complexity, it's once again time to look back through the sixth decade of the field, still producing amazing results in a now mature field. I'll announce one theorem a month from February through November with a wrap-up in December. The choices are mine though feel free to send me your nominees. I choose theorems not based on technical depth but on how they change the way we think about complexity, with an eye towards hitting many different subareas of the field.</p><p>See you in February where we'll go back to 2015 for a theorem decades in the making.</p>Lance Fortnowhttp://www.blogger.com/profile/06752030912874378610noreply@blogger.com5tag:blogger.com,1999:blog-3722233.post-17109175433416826792024-01-02T12:12:00.000-06:002024-01-02T12:12:05.278-06:00The Betty White Award for 2023: Tommy Smothers<p> Betty White died on December 31, 2021. When I mention that, even now, some people are surprised that they didn't hear about it. Why? Because she died AFTER all of the<i> who we said goodbye to this year </i>articles had come out. I think it's idiotic to have these things come out in DECEMBER instead of early January. Same with Time Magazine's <i>Person of the Year. </i>If Joe Biden had managed to create peace between Israel and the Palestinians AND between Russia and Ukraine on Dec 31, then he might, just might, have been a better choice than Taylor Swift for <i>Person of the Year.</i></p><p>BUT, rather than complain, I created <i>The Betty White Award </i>for famous people who die towards the end of December. </p><p>Past Winners:</p><p>2006: James Brown and Gerald Ford.
I didn't have the award then but I noticed their deaths came too late in the year to be noticed; see my post <a href="https://blog.computationalcomplexity.org/search?q=Gerald">here</a>.<br /></p><p>2021: Betty White and Bishop Tutu. See my post <a href="https://blog.computationalcomplexity.org/2022/01/did-betty-white-die-in-2021why-do.html?m=0">here</a>. I didn't quite have the award then, but it's implied.<br /></p><p>2022: Pele, Barbara Walters, Pope Emeritus Benedict. See my post <a href="https://blog.computationalcomplexity.org/2023/01/the-betty-white-award-for-2022.html">here</a>.</p><p>I thought based on these three data points that I would often have 2 or more winners. But no. For 2023 there is one winner: <br /></p><p>Tommy Smothers, who died on Dec 26, 2023 at the age of 86.</p><p>Tommy and Dickie Smothers (his brother is still alive at the age of 85) were known as <i>The Smothers Brothers.</i> They did Comedy and Music together. They are better known for their comedy. A few notes:</p><p>0) I wrote this post on Dec 31. I was going to post it on Dec 31 at 11:00PM but then thought <i>What if someone famous dies in the next hour? Then I'll need to redo the post, probably giving both that person and Tommy Smothers the award!</i> I waited until Jan 2 since it's possible that someone famous dies and I don't hear about it for a few days. <br /></p><p>1) They had a comedy-variety show <a href="https://en.wikipedia.org/wiki/The_Smothers_Brothers_Comedy_Hour">The Smothers Brothers Comedy Hour</a> from Feb 1967 to June 1969. It was controversial at the time for some of its political comments though, frankly, if you saw it now I really don't think you'd see why it was controversial. They had very high ratings but were cancelled because of the controversy. I don't understand that either- if it's making the network money, then why cancel it? Possibilities:</p><p>a) They were losing sponsors. I have not read that this was the case.
And I would think they could get new ones.</p><p>b) Some local stations refused to air it. This seems to be true.</p><p>c) The suits themselves were offended and put their principles above profits. </p><p>d) They were scared that the FCC would crack down on them.</p><p>e) They got angry letters (this is true though I don't see why this means you cancel). <br /></p><p>What controversy? They made fun of the president! They were against the war in Vietnam! <br /></p><p>So I am puzzled that they were cancelled. </p><p>I was young at the time and didn't really understand it (I still don't) but I recalled a great Mad Magazine piece about it (in 1968). And thanks to the wonder of the internet, I found it in about 5 seconds. Here it is: <a href="https://imgur.com/gallery/Db49N50">here</a>. I suspect that my young readers are not at all surprised that I can easily find an article I read over 50 years ago. This still surprises me. And I note that it's not always so easy. <br /></p><p>2) The show was one of the first to do political humor about real living politicians (Another one was <a href="https://en.wikipedia.org/wiki/That_Was_the_Week_That_Was">That Was the Week That Was</a>, which had Tom Lehrer on it, among others.) Comedy featuring a generic politician being, say, dishonest, was certainly allowed, but not an actual one with current issues. As such it led the way for SNL, The Daily Show, and other shows. </p><p>3) While <i>Smothers Brothers</i> sounds like a made up name, their last name really was Smothers. Reminds me of Clint Eastwood and Dolly Parton having great stage names for their real names. <br /></p><p>4) One of their types of comedy was to start singing a real song (they were good singers- folk singers) and then in the middle of it get into an argument or discussion about it. Here is an example: <a href="https://www.youtube.com/watch?v=pAaG6fJBUwA">here</a>.
(warning- this is NOT controversial or political)</p><p>5) There was one episode that never aired because of its controversy. Here it is: <a href="https://www.youtube.com/watch?v=cXx8qfIy9qI">here</a>. I watched it. I cannot see what is controversial in it. There is also some commentary after it that explains why it was controversial.</p><p>More odd- I didn't find it that funny. And note that this is the kind of thing I usually DO find funny. But this is not a criticism- it's just evidence that some shows (or art in general) have a short shelf life.</p><p>6) That one episode being on YouTube reminds me that the ONE picture of FDR in a wheelchair is ALL OVER the web. Here is one place: <a href="https://lib.arizona.edu/hsl/materials/collections/secret-illness/fdr">here</a>. Not just the web--- it was in my High School History Text with the caption a <i>rare photo of FDR in a wheelchair. </i>Even then I doubted it was rare since it was in many textbooks. And now being on the web makes it even harder to say it's rare. I did a post a long time ago (I was a guest blogger) on how things on the web can't really be considered rare. It's <a href="https://blog.computationalcomplexity.org/2007/02/definition-of-rare.html">here</a>.</p><p>7) The fact that they were controversial then but would not be now reminds me of the TV show <a href="https://en.wikipedia.org/wiki/Tanner_%2788">Tanner 88</a>, which was on HBO in 1988 (one of their first original series) and followed the fictional presidential campaign of Jack Tanner. At the time it was considered biting political satire. I bought the DVD and watched it in 2008. A typical episode of West Wing (a good show, but hardly a biting political satire) had more bite. But again, much like The Smothers Brothers, it paved the way. Which brings us back to our original point: <br /></p><p> Tommy and Dickie Smothers: We THANK YOU for paving the way for political satire and commentary.
</p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><i> </i><br /></p><p> <br /></p>gasarchhttp://www.blogger.com/profile/03004932739846901628noreply@blogger.com5tag:blogger.com,1999:blog-3722233.post-25793598085275063972023-12-20T08:02:00.000-06:002023-12-20T08:02:38.099-06:002023 Complexity Year in Review<p>Result of the year goes to </p><p style="text-align: center;"><a href="https://arxiv.org/abs/2305.15140">Polynomial-Time Pseudodeterministic Construction of Primes</a><br />Lijie Chen, Zhenjian Lu, Igor C. Oliveira, Hanlin Ren and Rahul Santhanam</p><p style="text-align: left;">An impressive year for complexity and we wrote several theorem posts this year including the <a href="https://blog.computationalcomplexity.org/2023/05/finding-primes-pseudodeterministically.html">one above</a>, new <a href="https://blog.computationalcomplexity.org/2023/03/new-upper-bound-on-rk-wow.html">Ramsey</a> <a href="https://blog.computationalcomplexity.org/2023/09/asymptotics-of-r4k-new-result.html">bounds</a>, <a href="https://blog.computationalcomplexity.org/2023/09/half-exponential-no-more.html">exponential circuit bounds for \(\Sigma_2^E\)</a>, <a href="https://blog.computationalcomplexity.org/2023/11/inverting-function.html">inverting compression functions</a> and <a href="https://blog.computationalcomplexity.org/2023/05/breaking-ground-in-isomorphism-testing.html">advances in group isomorphism</a>. </p><p style="text-align: left;">It's a year dominated by wars, <a href="https://time.com/6342806/person-of-the-year-2023-taylor-swift/">Taylor Swift</a>, <a href="https://blog.computationalcomplexity.org/search?q=chatgpt&max-results=20&by-date=false">ChatGPT</a> and mathematicians <a href="https://leanprover-community.github.io/"><i>Lean</i>ing in</a>. 
The blog word of the year is <a href="https://blog.computationalcomplexity.org/2023/01/meta-complexity.html">meta-complexity</a>.</p><p>We remember <a href="https://blog.computationalcomplexity.org/2023/09/the-contradiction-of-margaritaville-and.html">Jimmy Buffett</a>, <a href="https://blog.computationalcomplexity.org/2023/01/martin-davis-passed-away-on-jan-1-2023.html">Martin Davis</a>, <a href="https://blog.computationalcomplexity.org/2023/04/neil-jones-1941-2023.html">Neil Jones</a>, <a href="https://www.ynetnews.com/business/article/hjdfzoa2o">Abraham Lempel</a>, <a href="https://www.nytimes.com/2023/03/24/technology/gordon-moore-dead.html?unlocked_article_code=owzmROHYy0P-jLeq0vYYYPIp0Vi-de0GdvaNQOf7U56jhbe9HPXgt-aWliD28R_by_npqmUyHK_6aPUia5d65jtw8M_ZuqasR0u1ND_WHnYrYBJfE1Fe10COTEALML_QMVwCHofcvoKB-iu4jqBBZNyy-qZZwjzm4ERd_FQ52N4G2mSxSWIJ26Mce6olrzdpZ2HePuYnGsSrsdr_cYqxj_6sweS7ohL2dPuEEgtHgPNfqUG83tZL_gE8dBcwX1-SBRJz-60T59GpCr-6ilSdKHUJlM2EFKD4yq00AGhZyS8Vh6bO4_tncbR3JAwRMweVzx30e1791DsKd27ITyUqNABd4F4PMD9X">Gordon</a> <a href="https://blog.computationalcomplexity.org/2023/04/thoughts-on-gordon-moore.html">Moore</a>, <a href="https://www.nytimes.com/2023/02/20/technology/roger-c-schank-dead.html?unlocked_article_code=1.EU0.iVsy.rgz5IA5fPZLM&hpgrp=k-abar&smid=url-share">Roger Schank</a>, <a href="https://mta.hu/english/mathematician-vera-t-sos-full-member-of-mta-passes-away-112816">Vera Sós</a>, <a href="https://rjlipton.wpcomstaging.com/2023/03/26/william-wulf-1939-2023/">William Wulf</a>, <a href="https://www.itsoc.org/news/jacob-ziv-pioneer-lossless-data-compression-passed-away">Jacob Ziv</a> and <a href="https://blog.computationalcomplexity.org/2023/03/goodbye-dilbert.html">Dilbert</a>. Happy to forget <a href="https://blog.computationalcomplexity.org/2023/06/ted-kaczynski-was-at-one-time-worlds.html">Ted Kaczynski</a>. 
</p><p>Thanks to our guest posters <a href="https://blog.computationalcomplexity.org/2023/04/neil-jones-1941-2023.html">Eric Allender</a>, <a href="https://blog.computationalcomplexity.org/2023/07/chernoff-turns-100.html">Ravi Boppana</a>, <a href="https://blog.computationalcomplexity.org/2023/03/identities-in-computational-complexity.html">Josh</a> <a href="https://blog.computationalcomplexity.org/2023/05/breaking-ground-in-isomorphism-testing.html">Grochow</a>, and <a href="https://blog.computationalcomplexity.org/2023/05/breaking-ground-in-isomorphism-testing.html">Youming Qiao</a>.</p><p>Enjoy the holidays and we'll see you in 2024.</p>Lance Fortnowhttp://www.blogger.com/profile/06752030912874378610noreply@blogger.com1tag:blogger.com,1999:blog-3722233.post-15536987530036946482023-12-17T19:14:00.001-06:002023-12-18T08:22:49.243-06:00Archimedes, Pi, and Pickleball<p>I typed </p><p>3,1,4,1,5,9 </p><p>into OEIS and, as expected, I found that these are the first few digits of \(\pi\). See <a href="https://oeis.org/search?q=3%2C1%2C4%2C1%2C5%2C9&language=english&go=Search">here</a>.</p><p>I then read on the page:</p><p><i>\(\pi\) is sometimes referred to as Archimedes constant, because the Greek mathematician computed lower and upper bounds of \(\pi\) by drawing regular polygons inside and outside a circle. </i></p><p>In my 40 or so years being involved with math, I have <b>never </b>heard \(\pi\) called <i>Archimedes constant</i>. What's going on here?</p><p>(a) The term is STILL in use; I just happened never to hear it. I noted in a blog <a href="https://blog.computationalcomplexity.org/search?q=Trig">here</a> that I didn't have a single book on trig in my house, so this is plausible.</p><p>(b) The term is NOT in use but the person who wrote that entry either is rather old or THOUGHT it was in use or heard it once when he was 10 years old or...</p><p>(c) Some weighted sum of (a) and (b). </p><p>I suspect the term is NOT in use anymore.
So when was it in use, and when did it fall out of favor? </p><p>More random thoughts</p><p>1) I googled <i>why is it called pi </i>but it was auto filled to ask <i>why is it called pickleball</i>. I'll report on both:</p><p><i>pi: It was first called pi in 1706 by [The Welsh Mathematician] William Jones because pi is the first letter in the Greek word perimitros which means perimeter. (ADDED LATER- the comments say that the question-and-answer are not as simple as this quote would imply. I am sure they are right.) <br /></i></p><p><i>Pickleball:</i> This website <a href="https://longcoveclub.com/news/the-history-of-pickleball">here</a> says the following. Joel Pritchard invented the game (the story is more complicated than that). With that in mind here is a direct quote: <br /></p><i>Okay, now you know how it all started but the question we’re all thinking still remains: why do they call pickleball, well, pickleball? According the to U.S.A. Pickleball Association, the origins of the name differs between different accounts. <br /><br />Joel Pritchard’s wife, Joan, started to call their game pickleball because “the combination of different sports reminded me of the pickle boat in crew where oarsmen were chosen from the leftovers of other boats.” But according to Barney McCallum, they named the game after Pritchard’s dog, who was (as you might’ve guessed) named Pickles! Despite the sour taste of actual pickles, their dog was sweet and known to run off with the ball while it was still being played!<br /><br />Depending on who you ask, both accounts of the game’s name may actually be true. At the start, there wasn’t any name for pickleball until an official one was needed when the game started to gain traction. With the laid-back nature of pickleball, it’s only appropriate that it was named in a similar fashion. <br /></i><p><br /></p>gasarchhttp://www.blogger.com/profile/03004932739846901628noreply@blogger.com9