tag:blogger.com,1999:blog-3722233Sun, 13 Oct 2024 02:45:04 +0000typecastfocs metacommentsComputational ComplexityComputational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarchhttps://blog.computationalcomplexity.org/noreply@blogger.com (Lance Fortnow)Blogger3143125tag:blogger.com,1999:blog-3722233.post-1081777910302234097Wed, 09 Oct 2024 12:51:00 +00002024-10-09T07:51:48.827-05:00Fall Jobs Post 2024<p>In the fall, I write a jobs post predicting the upcoming CS faculty job market and giving suggestions and links. In the spring I used to crowdsource a list of where everyone got jobs but have since outsourced the crowdsource to <a href="https://grigory.us/blog/">Grigory Yaroslavtsev</a>. Let's start with a few announcements.</p><p><a href="https://focs.computer.org/2024/">FOCS in Chicago</a> will have a <a href="https://docs.google.com/forms/d/e/1FAIpQLScv_nTBzJfmZW_QNE1iYBsXFqWVgwnxfSYh-KJ0qNay4iSjCg/viewform">Graduating Bits</a> on Sunday, October 27 from 12-2 PM. If you have job openings for postdocs and researchers, the FOCS organizers are <a href="https://docs.google.com/forms/d/e/1FAIpQLSc0FDVG6Lh9J5Mo9JEDbflzwM3y68fk-GLrAwJLdKZhdBeU6Q/viewform">collecting them here</a>. The CATCS also maintains a <a href="https://cstheory-jobs.org/">Theoretical Computer Science Jobs posting site</a>. You are also free to add pointers to theory-related job listings as comments to this post. More generally in CS, there is the <a href="https://cra.org/cv-database/">CRA Database of Job Candidates</a> and the <a href="https://cra.org/ads/">CRA</a> and <a href="https://jobs.acm.org/">ACM</a> job postings.</p><p>MohammadTaghi Hajiaghayi is organizing a virtual theory jobs panel in November, including yours truly. 
I'll post details here and on social media when it's finalized.</p><p>If you are a bit more senior, the Simons Institute is <a href="https://simons.berkeley.edu/news/institute-director-search-launched">looking for a new director</a>. </p><p><a href="https://blog.computationalcomplexity.org/2023/10/fall-jobs-post-2023.html">Last year</a> I suggested AI (which by the way just won two <a href="https://www.nobelprize.org/prizes/physics/2024/summary/">Nobel</a> <a href="https://www.nobelprize.org/prizes/chemistry/2024/summary/">Prizes</a>) wasn't dramatically affecting the CS faculty job market yet but</p><blockquote><p>Many see programming, rightly or wrongly, as one of the first careers that AI will displace, which may reduce enrollment in the future, as offshoring fears drove CS enrollment down 20 years ago</p></blockquote><p>It didn't take long. In two years we have gone from nearly all of our CS graduates getting jobs in the field to many of them struggling to get internships and jobs <a href="https://www.wsj.com/lifestyle/careers/computer-science-majors-job-market-7ad443bf?st=vr2Bs5&reflink=desktopwebshare_permalink">in the top companies</a> if <a href="https://www.wsj.com/tech/tech-jobs-artificial-intelligence-cce22393?st=fGhgat&reflink=desktopwebshare_permalink">at all</a>. If the past is any guide, a weak tech job market leads to fewer majors, which leads to fewer slots for CS faculty. We'll start to see these trends this year and they will accelerate quickly if the tech job market doesn't recover.</p><p>Areas related to data, such as Artificial Intelligence, Data Science and Cybersecurity, will draw the most interest. Best if you can tie your research to those areas, or at least show that you are open to teaching in them.</p><p>Have a well-designed website with all your job materials and links to your papers. Make sure your Google Scholar, LinkedIn and GitHub (if relevant) sites are accurate and up to date. 
Make a short video describing your research to a general CS crowd. Take advantage of all the links above. Network at FOCS if you can make it. And start early. </p><p><br /></p>https://blog.computationalcomplexity.org/2024/10/fall-jobs-post-2024.htmlnoreply@blogger.com (Lance Fortnow)2tag:blogger.com,1999:blog-3722233.post-8836223629403060537Sun, 06 Oct 2024 20:33:00 +00002024-10-06T15:33:58.972-05:00Emil Post Anticipated (more than anticipated) Godel and Turing<p> (Thanks to James De Santis for pointing out the article that inspired this post on Post. The article is pointed to in this post.) <br /></p><p>What is Emil Post known for? I know of him for the following: <br /></p><p>a) Post's Problem: Show that there is an r.e. set A that is strictly in between Decidable and Halt using Turing Reductions. He posed the problem in 1944. It was solved in 1956 by the priority method, simultaneously invented by Friedberg and Muchnik. (My students wondered who posted to the web first and if the other one could have seen it there.) <br /></p><p>b) He invented Post Tag Systems, a natural problem that is undecidable. Or a model of computation. Depends on how you look at it. <br /></p><p>c) The Kleene-Post Theorem which produces a set A that is strictly in between Decidable and Halt. It did not solve Post's Problem since A is not r.e. The proof used forcing and may have been the first or close to the first use of forcing. This was published in 1954. <br /></p><p>In summary, I thought of Post as being <i>a recursion theorist. </i></p><p><i> </i>The more I read about Post the more I realize that calling him <i>a recursion theorist </i>is wrong but in an interesting way. Back in the 1950's I am sure Emil Post was called <i>a logician. </i>The over-specialization that produces Recursion Theorists, Model Theorists, Set Theorists, Proof Theorists was about a decade away. In fact, people who worked in logic often also worked in other parts of math. (Post worked on Laplace Transforms. 
See <a href="https://en.wikipedia.org/wiki/Inverse_Laplace_transform">The Post Inversion Formula</a> as part of the Wikipedia entry on Laplace Transforms.) <br /></p><p>I recently read the article, <a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/EmilPost.pdf">Emil Post and His Anticipation of Godel and Turing</a> which shows that Post really did have some of the ideas of Godel and Turing at about the same time, and possibly before they did. I will discuss briefly what he did and why it is not better known; however, the article is worth reading for more on this.</p><p><b>What did Post Do? </b></p><p>Part of Post's Thesis (1920) was showing that the Propositional Logic in Russell-Whitehead was consistent and complete. </p><p>He tried to show that all of RW was consistent and complete. He got RW down to a normal form; however, he realized that rather than reducing the complexity he had just localized it. In 1921 he noted the following (I quote the article).<br /></p><p>a) Normal Systems can simulate any symbolic logic, indeed any mechanical system for proving theorems.</p><p>b) This means, however, that all such systems can be mechanically listed, and the diagonal argument shows that the general problem of deciding whether a given theorem is produced by a given system is unsolvable.</p><p>c) It follows, in turn, that no consistent mechanical system can produce all theorems.<br /></p><p>Wow- that sounds like the outline of a proof of Godel's Incompleteness theorem!</p><p><b>Why Didn't Post Publish This? </b></p><p><b> </b>In 1921 Post suffered his first (of many) attacks of manic-depression. He was unable to get an academic job until 1935. By the time he could have written a paper, his ideas were already known. Note that the items about Post I knew are all post-1940. <br /></p><p><b>The Story is Interesting Because it's Boring</b></p><p>There are no interesting conflicts between mathematicians in this story. 
No Newton vs Leibniz rap battle (see <a href="https://www.youtube.com/watch?v=COeKdP3EkXU">here</a>), no plagiarism. Just bad timing and bad luck. So this story is not even worth being exaggerated. </p><p>But I find that interesting. Timing and Luck play a big role, perhaps bigger than is commonly thought. <br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><br />https://blog.computationalcomplexity.org/2024/10/emil-post-anticipated-more-than.htmlnoreply@blogger.com (gasarch)2tag:blogger.com,1999:blog-3722233.post-2090441422971695167Wed, 02 Oct 2024 12:16:00 +00002024-10-02T07:16:07.668-05:00Favorite Theorems: Gradient Descent<p><a href="https://blog.computationalcomplexity.org/2024/09/favorite-theorems-parity-games.html">September Edition</a></p><p>Who thought the algorithm behind machine learning would have cool complexity implications?</p><p></p><div style="text-align: center;"><a href="https://doi.org/10.1145/3568163">The Complexity of Gradient Descent: CLS = PPAD ∩ PLS</a></div><div style="text-align: center;">John Fearnley, Paul Goldberg, Alexandros Hollender and Rahul Savani</div><p></p><p>Let's unpack these classes, subclasses of TFNP, where for every input we know there is an easily verifiable solution and we are looking at the complexity of finding it. PPAD is the class for which finding a Nash Equilibrium is a complete problem; see <a href="https://blog.computationalcomplexity.org/2005/12/what-is-ppad.html">this post</a> for details.</p><p>PLS is the class of problems where we look for a local minimum. Finding a global minimum is NP-complete--think vertex cover. 
Finding a local minimum is often easier but still could be hard if you are optimizing over an exponential set of values.</p><p>CLS is a bit trickier to define: basically you are finding an approximate local minimum of a function mapping three real variables to one real value.</p><p>The authors show that gradient descent is complete for PPAD ∩ PLS even if you only use two input variables. Since gradient descent is in CLS, the equality follows. </p><p>More in my <a href="https://blog.computationalcomplexity.org/2021/07/intersecting-classes.html">2021 post</a>. On that post author Paul Goldberg <a href="https://blog.computationalcomplexity.org/2021/07/intersecting-classes.html?showComment=1625214868607#c3015651882903900540">commented</a></p><blockquote><p>The paper is a fine example of the humorous stereotype of complexity theorists proving a problem "hard" when meanwhile the problem is being routinely solved in practice in the real world.</p></blockquote><p>Nevertheless it's a neat complexity result and now officially one of my favorite theorems.</p>https://blog.computationalcomplexity.org/2024/10/favorite-theorems-gradient-descent.htmlnoreply@blogger.com (Lance Fortnow)3tag:blogger.com,1999:blog-3722233.post-3119974858044789011Mon, 30 Sep 2024 00:47:00 +00002024-10-02T22:57:00.335-05:00Progress on R(5). Will there be more? <p><b> (</b>I had a post a while back requesting people to submit open problems in Luca Trevisan's honor with deadline Oct 1. I am extending that to Oct 14, but that is a HARD deadline. See my original post, which I have updated, <a href="https://blog.computationalcomplexity.org/2024/08/request-open-problems-in-honor-of-luca.html">here</a>.)</p><p>And now back to our regularly scheduled program. 
<br /></p><p><b> ====================================</b></p><p><b>Breaking News</b>: \(R(5) \le 46 \)</p><p>I know this since 46 people have emailed me this link <a href="https://arxiv.org/abs/2409.15709">here</a>.</p><p><br /></p><p>(ADDED) Recall that \(K_n\), the complete graph on \(n\) vertices, consists of \(n\) vertices with an edge between every pair of them. <br /></p><p>Recall that \(R(5)\) is the least \(n\) such that </p><p><i>For all 2-colorings of the edges of \(K_n\) there is a set of 5 vertices such that all of the edges between them are the same color. </i></p><p>Here is a history of what is known about \(R(5)\). It is not complete. </p><p>Proofs of points 1 and 2 below are on my slides <a href="https://www.cs.umd.edu/~gasarch/COURSES/250/S24/slides/COMB/smallramseytalk.pdf">here</a>.<br /></p><p>1) Let \(R(a,b)\) be the least \(n\) such that </p><p>For all 2-colorings of the edges of \(K_n\) there is either a RED \(K_a\) or a BLUE \(K_b\).</p><p>Note that</p><p>\(R(2,b)=b\)</p><p>\(R(a,2)=a\)</p><p>It is well known that \(R(3,3)=6\).</p><p>It is well known that \( R(a,b) \le R(a,b-1) + R(a-1,b) \).</p><p>From this one can derive \( R(5,5) = R(5) \le 70 \).</p><p>2) One can also show that if R(a,b-1) and R(a-1,b) are even then </p><p>\( R(a,b) \le R(a-1,b) + R(a,b-1) -1 \).</p><p>From this one can derive \(R(5,5)=R(5)\le 62 \). </p><p>3) In 1989 Exoo showed \(R(5) \ge 43 \). That is, Exoo gave a 2-coloring of \(K_{42}\) with no monochromatic \(K_5\). I was unable to find his paper online; however, there is a modern exposition of his paper <a href="https://arxiv.org/pdf/2212.12630">here</a>.</p><p>4) In 1997 McKay and Radziszowski showed \(R(5) \le 49\). The paper is <a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/Rof5LE49.pdf">here</a>. This used some clever math and lots of computer time. This paper also has a more complete history of \(R(5)\) up to 1997 than I have in this post. 
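As a sanity check of points 1 and 2, the two recurrences can be evaluated mechanically from the base cases \(R(2,b)=b\) and \(R(a,2)=a\). A minimal sketch (my own illustration, not from any of the papers; the actual results below need far cleverer math plus serious computer time):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def basic(a, b):
    """Upper bound on R(a,b) from R(a,b) <= R(a-1,b) + R(a,b-1)."""
    if a == 2:
        return b
    if b == 2:
        return a
    return basic(a - 1, b) + basic(a, b - 1)

@lru_cache(maxsize=None)
def improved(a, b):
    """Same recurrence, but subtract 1 whenever both bounds are even."""
    if a == 2:
        return b
    if b == 2:
        return a
    x, y = improved(a - 1, b), improved(a, b - 1)
    return x + y - 1 if x % 2 == 0 and y % 2 == 0 else x + y

print(basic(5, 5))     # 70, the bound in point 1
print(improved(5, 5))  # 62, the bound in point 2
```

One caveat: point 2 states the parity rule for the exact values of \(R(a-1,b)\) and \(R(a,b-1)\); the sketch applies the parity test to the computed upper bounds, which reproduces the classical chain \(R(3,4)\le 9\), \(R(3,5)\le 14\), \(R(4,4)\le 18\), \(R(4,5)\le 31\), \(R(5,5)\le 62\).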
(Radziszowski also has a dynamic survey of small Ramsey numbers <a href="https://www.combinatorics.org/ojs/index.php/eljc/article/view/DS1">here</a>.)<br /></p><p>5) In 2017 Angeltveit and McKay showed \(R(5) \le 48 \). This used some clever math and lots of computer time. The paper is <a href="https://arxiv.org/pdf/1703.08768">here</a>. That is the arxiv version. The journal version is behind a paywall; however, if you want to reference it and need to know the journal etc, the link is <a href="https://onlinelibrary.wiley.com/doi/10.1002/jgt.22235">here</a>.<br /></p><p>6) In 2024 Angeltveit and McKay showed \(R(5) \le 46 \). This used some clever math and a big computer search. The paper is <a href="https://arxiv.org/pdf/2409.15709">here</a>. (ADDED LATER - they used Glucose, a SAT Solver.) <br /></p><p>COMMENTS ON ALL THIS</p><p>1) It is widely believed that R(5)=43. </p><p>2) I have been asked if AI or SAT Solvers will help. I asked Radziszowski and Angeltveit and McKay and they all thought NO. There is just no way around ALL those possibilities. </p><p><b>Lance</b>: Getting 46 via an extensive computer search using "30 years of CPU time." Hard to believe AI and SAT Solvers won't play some role in future advances. </p><p><b>Bill:</b> Some problems are just really hard for SAT Solvers. Getting \(R(5)\le 45\) may take 3000 years of CPU time. So it may not be for a while or ever. </p><p><b>Bill: </b>How about a wager? If R(5)=43 is proven by 2040 then you win, otherwise I win.</p><p><b>Lance:</b> Let's make it an exact value for R(5). "Widely believed" doesn't always mean "true". What do we win?</p><p><b>Bill:</b> Bragging rights! 
And the loser buys dinner.</p><p>(ADDED LATER- A commenter left a comment which probably means they want to ask the following:</p><p><b>Commenter: </b>Since SAT Solvers did play a role in the new result, why are you skeptical that they will play a role in getting further improvements, and perhaps in finding R(5)?</p><p><b>Bill: </b>Even with SAT Solvers, it is taking more and more time. Do you want in on the bet? If so then email me privately. </p><p>)<br /></p><p>3) IF there is some clever math that will help a lot then an AI or a Human may find it. But I am skeptical this will happen.</p><p>4) I am surprised the bound went from 48 to 46 without a paper about 47. <br /></p><p>5) Has any nice math come out of trying to find R(5) (and other concrete values)? </p><p>a) YES- the work on R(k) for large k has led to nice math.</p><p>b) NO- the results above on R(5), and the math for other concrete values, have largely been clever arguments and computer work, but nothing that generalizes that much. </p><p>6) One of the Anonymous commenters pointed this out: <a href="https://cdn.vox-cdn.com/thumbor/6yteUIDdQxwbFo2aHerioITCB6w=/1400x1400/filters:format(jpeg)/cdn.vox-cdn.com/uploads/chorus_asset/file/24418234/1456247036.jpg">here</a><br /></p><p><br /></p>https://blog.computationalcomplexity.org/2024/09/progress-on-r5-will-there-be-more.htmlnoreply@blogger.com (gasarch)7tag:blogger.com,1999:blog-3722233.post-2502491932873778983Thu, 26 Sep 2024 14:42:00 +00002024-09-26T09:42:04.594-05:00LeetCode and AI<p>I often do <a href="https://leetcode.com/">LeetCode</a> problems for fun. This site mainly provides short coding problems for students and others to train for the kinds of questions that come up in technical job interviews. I use the site to keep up my programming skills and it often requires clever algorithms and data structures. The Daily Question is like a Wordle for recreational programmers. 
Try <a href="https://leetcode.com/problems/insert-delete-getrandom-o1/description/">this problem</a> which asks you to create a data structure for sets with insert, delete and get-random-element operations in expected constant amortized time.</p><p>I have to turn off GitHub Copilot; otherwise it will give you the solution before you even finish typing the function name. There are so many solutions to these problems out there and in the training sets for LLMs.</p><p>A student asked me last week why he should do LeetCode problems if AI can solve them all. I responded that doing the problems (and CS homework more importantly) gives you the skills to understand code and algorithms, and in your future jobs you'll encounter problems AI may not solve fully, correctly, or efficiently; having those skills will allow you to solve the kinds of problems AI alone can't tackle.</p><p>But is this the right answer as AI continues to improve? Ideally we want to create students who transcend AI instead of being replaced by it. For that they need to fully understand programming and computing and be smart enough to know when and when not to outsource that skill to AI. That's the challenge for CS departments: teaching students how to use AI to make themselves better computer scientists without relying on it. 
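For readers who want the spoiler to the set problem above: the standard trick pairs a list (for constant-time random access) with a dictionary mapping each value to its list position, and deletes by swapping with the last element. A sketch in Python (my own code and naming, not any official reference solution):

```python
import random

class RandomizedSet:
    """Insert, delete, and get a uniformly random element, each O(1)
    (expected for the hashing, amortized for the list growth)."""

    def __init__(self):
        self.items = []   # the elements, in arbitrary order
        self.index = {}   # value -> its position in self.items

    def insert(self, val) -> bool:
        if val in self.index:
            return False
        self.index[val] = len(self.items)
        self.items.append(val)
        return True

    def remove(self, val) -> bool:
        i = self.index.pop(val, None)
        if i is None:
            return False
        last = self.items.pop()
        if i < len(self.items):   # val was not the last element:
            self.items[i] = last  # move the old last element into its slot
            self.index[last] = i
        return True

    def get_random(self):
        return random.choice(self.items)
```

The swap-with-last move is the whole point: deleting from the middle of a list is O(n), but overwriting the deleted slot with the final element and popping keeps every operation constant time.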
It's a hard balance to strike when we can't predict AI's capabilities at the time these students graduate.</p>https://blog.computationalcomplexity.org/2024/09/leetcode-and-ai.htmlnoreply@blogger.com (Lance Fortnow)7tag:blogger.com,1999:blog-3722233.post-6476692104331014359Mon, 23 Sep 2024 16:06:00 +00002024-09-23T12:52:56.001-05:00I thought I knew what pizza was...<p> On Page 75 of </p><p>The Existential Theory of the Reals as a Complexity Class: A Compendium</p><p>by Marcus Schaefer, Jean Cardinal, Tillmann Miltzow</p><p>(see <a href="https://arxiv.org/pdf/2407.18006">here</a> for the paper) </p><p>I came across the following definition:</p><p><br /></p><p><i>a pizza is a mass distribution (measure) on \( [0,1]^2 \) that can be computed for polygonal subsets using arithmetic circuits.</i></p><p>Okay. Can I add green peppers and mushrooms?</p><p>There are many words in English that we use for math terms. How close are their English meanings and their math meanings?</p><p>Greedy and Girth are pretty close.</p><p>Pizza, not so much. </p><p>Forcing is a bit odd- when we do a forcing argument we are (say) creating a model of set theory where CH fails. I don't think the model minds that, whereas the term forcing in English usually means the person being forced does not like it.</p><p>Dynamic Programming sounds like the programmer is typing really fast.</p><p>Divide and Conquer- that matches pretty well.</p><p>Games. There are some math papers about actual games people play. And there are also math papers that inspired Darling to say <i>Math Games are NOT Fun Games! 
</i>I blogged about that a few times, most notably <a href="https://blog.computationalcomplexity.org/2010/06/whats-your-game-mr-bond-sequel.html">here</a>.</p><p><br /></p><p><br /></p><p><br /></p>https://blog.computationalcomplexity.org/2024/09/i-thought-i-knew-what-pizza-was.htmlnoreply@blogger.com (gasarch)4tag:blogger.com,1999:blog-3722233.post-822167356481006496Wed, 18 Sep 2024 17:38:00 +00002024-09-18T12:41:00.159-05:00Embracing the Future of Manufacturing<p></p><div style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigYi8HaBm_3e99Z3n0QSe1y3HESmrEHmULaQFj7QMEvPrzZKQlYyENkJxQujQnRuPwmmZIs9U8CIg6N27MNwDx9PUw_0edMast1ESptak9Ow4v6ZzxPEbx-lL1fr2YbPetIiC2OxF_Y_mUCT-IzQuBKogGNrcVAP5xEjU1mj23KvZe6fg_zpba/s4000/IMTS.jpg" imageanchor="1"><img border="0" data-original-height="3000" data-original-width="4000" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigYi8HaBm_3e99Z3n0QSe1y3HESmrEHmULaQFj7QMEvPrzZKQlYyENkJxQujQnRuPwmmZIs9U8CIg6N27MNwDx9PUw_0edMast1ESptak9Ow4v6ZzxPEbx-lL1fr2YbPetIiC2OxF_Y_mUCT-IzQuBKogGNrcVAP5xEjU1mj23KvZe6fg_zpba/w400-h300/IMTS.jpg" width="400" /></a></div><br />Last week I walked around the <a href="https://www.imts.com/">International Manufacturing Technology Show</a> with 89,000 other participants at McCormick Place in Chicago. I went to see how AI was affecting manufacturing and it wasn't quite the way I was expecting.<p></p><p>The sheer scale of physical machinery is overwhelming: from elaborate manufacturing equipment and numerous robots, to water jets performing precision cuts on metal. You also notice the international aspects--the big booths belonged to German, Japanese and Taiwanese companies, not so much American. The US leads the world in many things, but manufacturing technology is not one of them.</p><p>There was one exception--the three big cloud providers: Google, Microsoft and AWS (Amazon). 
All, especially Microsoft, have dedicated cloud services for manufacturing. All were touting their AI services for manufacturing from predictive maintenance to Edge AI to generative AI diagnosing machine failures. It was less clear if the audience was listening.</p><p>When I walked around a <a href="https://blog.computationalcomplexity.org/2023/04/health-tech.html">Health Tech show</a> last year, one could feel the focus on data and the excitement about AI even just a few months after ChatGPT went public. At IMTS the feeling was different. The most exciting "new technology" was allowing supervisors to get stats and adjust the machines remotely from the beach, with vendors offering retrofits to old machines to make it happen. One exhibitor showed me an app where a line worker could show a video to a distant supervisor to get some help (they need a new app for that?). I suggested that soon the supervisor could be replaced by AI in the app and they gave me a funny look.</p><p>The most interesting use of AI came from a small Oregon-based company <a href="https://www.machineresearch.com/">Machine Research</a>, which uses AI to help small manufacturers create bids from proposals. It's a small operation--just seven employees, with basically only two developers working on the cloud. </p><p>Like academia, the manufacturing industry is slow to change. Replacing old machines and products is expensive--you can't just do a software update. Both presidential candidates are pushing to increase manufacturing in the US but unless we can take advantage of our technical strengths, embracing the future and <a href="https://www.sap.com/products/scm/industry-4-0.html">Industry 4.0</a>, we'll find it hard to compete. 
Tariffs and financial incentives will only take us so far.</p>https://blog.computationalcomplexity.org/2024/09/embracing-future-of-manufacturing.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-5638176079836846050Mon, 16 Sep 2024 00:52:00 +00002024-09-15T19:52:23.501-05:00How will chatGPT affect Homework? <p>LANCE: I gave my final exam for my ugrad theory course (regular, Context Free, P, NP, Decidable, Undecidable) to the new <a href="https://openai.com/index/introducing-openai-o1-preview/">ChatGPT o1</a>, which claims to reason about math, to see how it would do.</p><p>BILL: How did it do?</p><p>LANCE: Impressive--beat most of my students. Let's just say I could write a pretty good letter of recommendation.</p><p>BILL (arrogantly): I am a great problem maker, so this won't affect me. I ask problems in Grad Ramsey that ChatGPT would crash and burn on.</p><p>LANCE: Okay big-shot, email me some HWs from Ramsey and we'll see how ChatGPT does on them.</p><p>BILL: Okay! </p><p>[Bill emails Lance the first two HW assignments and predictions on how ChatGPT will do.</p><p>Bill predicts that it will do well on the standard problems, but crash and burn on unusual problems that (key!) are not on the web. Concepts that Bill invented just for his course. Bill is mostly correct in these predictions.]<br /></p><p>LANCE: But how did Chatty do compared to the students who took the course?</p><p>BILL: About the same with one caveat: when ChatGPT gets something wrong it's really weird- it has the look and feel of a proof but what it outputs is something no human would ever write. So it does badly on getting partial credit. And one other thing- it does badly when asked to do a proof A CERTAIN WAY or SIMILAR TO WHAT I DID IN CLASS. <br /></p><p>LANCE: I wonder if the same holds for Ramsey courses taught at other universities. Wait, is there even anyone else who teaches Ramsey theory?</p><p>BILL: Very funny. 
In your theory course I assume students can't use ChatGPT on an exam so this is not really a problem. But for HW it might be a problem. It also brings up the analog of the question</p><p><i>do we allow students to use slide rules on an exam</i></p><p>LANCE: Slide rules? Your age is showing.</p><p>BILL: I am only 2 years older than you. Or is it 3? I forget these things.</p><p>LANCE: Yes, I would allow students to use ChatGPT as long as they acknowledge they did so, write up their own solution and take responsibility for any mistakes. Problem is they will just cut and paste.</p><p>BILL: Let's break this down course by course.</p><p>GRAD RAMSEY: The students want to be there and are very good. My hope is that </p><p>1) For standard problems they won't need ChatGPT.</p><p>2) For unusual problems ChatGPT won't help. And I am good at making up unusual problems. <br /></p><p>3) In any case, if they use it and they UNDERSTAND the answer and put it into their own words, I am okay with that.</p><p>HONORS DISCRETE MATH: Similar to Ramsey, though I am a bit nervous. <br /></p><p>UNDERGRAD THEORY (similar to Lance's course): I would like to think it's similar to Ramsey Theory. However, I may be wrong.</p><p>ALL COURSES: <br /></p><p>There is a more profound problem here that applies to all courses: the students may confuse getting the answer from ChatGPT with understanding. </p><p>LANCE: ChatGPT is a hell of a lot better at solving problems than it was last week. Maybe ChatGPT won't help on the unusual problems today but that might not last for long. Perhaps the solution is to just give up on assessments and stop grading altogether. Students can learn if they want to and waste their money if they don't.</p><p>BILL: That kind of change in how we educate is above my pay grade. You are a Dean, so it may be at your pay grade. 
<br /></p><p>LANCE: Nope, even deans have their limits.</p>https://blog.computationalcomplexity.org/2024/09/how-will-chatgpt-affect-homework.htmlnoreply@blogger.com (gasarch)6tag:blogger.com,1999:blog-3722233.post-8002228486873527006Wed, 11 Sep 2024 14:41:00 +00002024-09-12T09:07:34.846-05:00Natural Proofs is Not the Barrier You Think It Is<p>If there's a position where I differ from most other complexity theorists it's that I don't believe that natural proofs present a significant barrier to proving circuit results. I wrote about <a href="https://blog.computationalcomplexity.org/2006/05/importance-of-natural-proofs.html">this before</a> but I buried the lead and nobody noticed.</p><p>Let's review natural proofs. I'll give a very high-level description. Li-Yang Tan gives a good <a href="https://theory.stanford.edu/~liyang/teaching/projects/natural-proofs-barrier-and-P-NP.pdf">technical description</a> or you could read the original Razborov-Rudich <a href="https://doi.org/10.1006/jcss.1997.1494">paper</a>. A natural proof to prove lower bounds against a circuit class \(\cal C\) consists of a collection \(C_n\) of Boolean functions on \(n\) inputs such that<br /></p><p></p><ol style="text-align: left;"><li>No polynomial-size circuit family from \(\cal C\) can compute an element of \(C_n\) for large enough \(n\). </li><li>\(C_n\) is a large fraction of all the functions on \(n\) inputs.</li><li>A large subset of \(C_n\) is constructive--given the truth-table of a function, you can determine whether it sits in the subset in time polynomial in the length of the truth-table. Note: This is different from the usual notion of "constructive proof". </li></ol><div>The natural proof theorem states that if all three conditions hold then you can break pseudorandom generators and one-way functions.</div><div><br /></div><div>My problem is with the third property, constructivity. I haven't seen good reasons why a proof should be constructive. 
When I saw Rudich give an early talk on the paper, he both had to change the definition of constructivity (allowing subsets instead of requiring an algorithm for \(C_n\) itself) and needed to give heavily modified proofs of old theorems to make them constructive. Nothing natural about it. Compare this to the often maligned relativization, where most proofs in complexity relativize without any changes.</div><div><br /></div><div>Even Razborov and Rudich acknowledge they don't have a good argument for constructivity.</div><div><blockquote>We do not have any formal evidence for constructivity, but from experience it is plausible to say that we do not yet understand the mathematics of \(C_n\) outside exponential time (as a function of \(n\)) well enough to use them effectively in a combinatorial style proof.</blockquote></div><div>Let's call a proof semi-natural if conditions (1) and (2) hold. If you have a semi-natural proof you get the following implication. </div><div><br /></div><div style="text-align: center;">Constructivity \(\Rightarrow\) One-way Functions Fail</div><div style="text-align: center;"><br /></div><div>In other words, you still get the lower bound, just with the caveat that if an algorithm exists for the property then an algorithm exists to break a one-way function. You are not breaking a one-way function, just showing that recognizing the proofs would be as hard as breaking one-way functions. An algorithm begets another algorithm. You don't have to determine constructivity either way to get the lower bound. </div><div><br /></div><div>Even if they aren't a great barrier to circuit lower bounds, natural proofs can be an interesting, if badly named, concept in their own right. 
For example the Carmosino-Impagliazzo-Kabanets-Kolokolova paper <a href="https://doi.org/10.4230/LIPIcs.CCC.2016.10">Learning Algorithms from Natural Proofs</a>.</div><div><br /></div><div>So if I don't believe in the barrier, why are circuit lower bounds hard? In recent years, we've seen the surprising power of neural nets which roughly correspond to the complexity class TC\(^0\), and we simply don't know how to prove lower bounds against powerful computational models. Blame our limited ability to understand computation, not a natural proof barrier that really isn't there.</div><p></p>https://blog.computationalcomplexity.org/2024/09/natural-proofs-is-not-barrier-you-think.htmlnoreply@blogger.com (Lance Fortnow)6tag:blogger.com,1999:blog-3722233.post-6456208407222329728Sun, 08 Sep 2024 19:24:00 +00002024-09-08T14:24:03.395-05:00Very few problems are in NP intersect coNP but not known to be in P. What to make of that?<p>Someone once told me:</p><p><i> I was not surprised when Linear Programming was in P since it was already in \( NP \cap coNP \), and problems in that intersection tend to be in P.</i></p><p>The same thing happened for PRIMALITY.</p><p>However FACTORING, viewed as the set </p><p>\( \{ (n,m) \colon \hbox{there is a factor of n that is } \le m \} \),</p><p>is in \( {\rm NP} \cap {\rm coNP} \). Is that indicative that FACTORING is in P? I do not think so, though that's backwards-since I already don't think FACTORING is in P, I don't think being in the intersection is indicative of being in P.</p><p>Same for DISCRETE LOG, viewed as the set</p><p>\( \{ (g,b,y,p) : \hbox{p prime, g is a gen mod p and } (\exists x\le y)[ g^x\equiv b \pmod p] \} \) </p><p>which is in \( {\rm NP} \cap {\rm coNP} \).<br /></p><p>1) Sets in \({\rm NP} \cap {\rm coNP} \) but not known to be in P:</p><p>FACTORING- thought to not be in P, though number theory can surprise you. 
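Why state FACTORING as a set of pairs rather than a function? Because a polynomial-time decider for that set would give polynomial-time factoring outright: binary search on \(m\) finds the least nontrivial factor with \(O(\log n)\) decision queries. A toy sketch (my illustration, not from the post; the brute-force decider stands in for the hypothetical fast one, and "factor" here means a nontrivial factor):

```python
def has_factor_up_to(n, m):
    """Decision version: does n have a nontrivial factor d with 2 <= d <= m?
    (Brute force stand-in for a hypothetical polynomial-time decider.)"""
    return any(n % d == 0 for d in range(2, min(m, n - 1) + 1))

def find_factor(n):
    """Binary search: O(log n) calls to the decision procedure
    recover the smallest nontrivial factor of n."""
    if not has_factor_up_to(n, n - 1):
        return None   # n is prime (or too small to have one)
    lo, hi = 2, n - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if has_factor_up_to(n, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

print(find_factor(91))   # 7
print(find_factor(97))   # None (97 is prime)
```

So the set version of FACTORING is in P exactly when factoring itself is; the membership and non-membership certificates are, respectively, a factor \(d\le m\) and the prime factorization of \(n\) showing no such factor exists.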
</p><p>DISCRETE LOG: thought not to be in P, but again, number theory...<br /></p><p>PARITY GAMES: thought not to be in P. (See Lance's post on the theorem that PARITY GAMES are in quasi-polynomial time <a href="https://blog.computationalcomplexity.org/2024/09/favorite-theorems-parity-games.html">here</a>.) </p><p>Darn, only three that I know of. If you know others, then let me know. <br /></p><p>2) Candidates for sets in \( {\rm NP} \cap {\rm coNP} \) but not known to be in P:</p><p>Graph Isomorphism. It's in NP and in co-AM. Under the standard derandomization assumptions GI is in AM = NP, so GI would be in \( {\rm NP} \cap {\rm coNP} \). It is not known to be in P. Is it in P? I do not think there is a consensus on this. </p><p>Darn, only one that I know of. If you know of others, then let me know, but make sure they are not in the third category below. <br /></p><p>3) Problems with an \( {\rm NP} \cap {\rm coNP} \) feel to them but not known to be in P.</p><p>A binary relation B(x,y) is in TFNP if B(x,y) is decidable in polynomial time and, for all x, there is a y such that</p><p>|y| is bounded by a polynomial in |x|, and</p><p>B(x,y) holds.</p><p>So these problems don't really fit into the set-framework of P and NP.</p><p>TFNP has the feel of \( {\rm NP} \cap {\rm coNP} \).</p><p>Nash Equilibrium is in TFNP and is not known to be in P, and indeed thought not to be in P. There are other problems in here as well, and some complexity classes, and some completeness results. But these problems are not sets, so they are not in the intersection. (I will probably have a future post on those other classes.) <br /></p><p>--------------------------</p><p>So what to make of this? Why are so few problems in the intersection? Does the intersection equal P (I do not think so)? <br /></p><p>---------------------------</p><p>I close with another quote: </p><p><i>As a former recursion theorist I hope that \(NP \cap coNP\)=P, but as someone whose credit card numbers are on the web, I hope not. 
</i></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p>https://blog.computationalcomplexity.org/2024/09/very-few-problems-are-in-np-intersect.htmlnoreply@blogger.com (gasarch)8tag:blogger.com,1999:blog-3722233.post-6280197218191985777Wed, 04 Sep 2024 13:59:00 +00002024-09-04T08:59:32.002-05:00Favorite Theorems: Parity Games<p><a href="https://blog.computationalcomplexity.org/2024/08/favorite-theorems-random-oracles.html">August Edition</a></p><p>A quasi-polynomial-time algorithm for a long-standing open problem. Yes, we have <a href="https://blog.computationalcomplexity.org/2024/02/favorite-theorems-graph-isomorphism.html">two</a> of them this decade.</p><p></p><div style="text-align: center;"><a href="https://doi.org/10.1137/17M1145288">Deciding Parity Games in Quasi-polynomial Time</a></div><div style="text-align: center;">Cristian Calude, Sanjay Jain, Bakhadyr Khoussainov, Wei Li and Frank Stephan</div><div style="text-align: left;"><br /></div><div style="text-align: left;">I <a href="https://blog.computationalcomplexity.org/2017/03/parity-games-in-quasipolynomial-time.html">covered this theorem</a> in 2017. In a parity game, Alice and Bob take turns walking along a directed graph with integer weights on the vertices and no sinks. Alice wins if the largest weight seen infinitely often is even. 
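The winner of such a game can be computed with the classical recursive algorithm of Zielonka, which is exponential in the worst case (this is not the quasi-polynomial algorithm of the paper). A minimal Python sketch under an assumed encoding where each vertex maps to (owner, weight, successors) and player 0 plays the role of Alice:

```python
def attractor(player, target, game):
    # Vertices from which `player` can force the play into `target`.
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v, (owner, _, succs) in game.items():
            if v in attr:
                continue
            if (owner == player and any(w in attr for w in succs)) or \
               (owner != player and all(w in attr for w in succs)):
                attr.add(v)
                changed = True
    return attr

def restrict(game, keep):
    # Subgame induced on the vertex set `keep`.
    return {v: (o, w, [u for u in succs if u in keep])
            for v, (o, w, succs) in game.items() if v in keep}

def zielonka(game):
    # Returns (W0, W1): the vertices from which player 0 / player 1 wins.
    if not game:
        return set(), set()
    p = max(w for (_, w, _) in game.values())
    i = p % 2                      # the player who likes the top weight
    top = {v for v, (_, w, _) in game.items() if w == p}
    A = attractor(i, top, game)
    W = list(zielonka(restrict(game, set(game) - A)))
    if not W[1 - i]:
        W[i], W[1 - i] = set(game), set()
        return W[0], W[1]
    B = attractor(1 - i, W[1 - i], game)
    W2 = list(zielonka(restrict(game, set(game) - B)))
    W2[1 - i] |= B
    return W2[0], W2[1]
```

For example, in the two-vertex game {0: (0, 0, [0, 1]), 1: (1, 1, [1, 0])}, each player can simply loop at their own vertex forever, so player 0 wins from vertex 0 and player 1 from vertex 1.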
While it's not hard to show computing the winner sits in NP\(\cap\)co-NP and even <a href="https://doi.org/10.1016/S0020-0190(98)00150-1">UP\(\cap\)co-UP</a>, the authors give the surprising result that you can determine the winner in quasi-polynomial time.</div><div style="text-align: left;"><br /></div><div style="text-align: left;">The result has implications for <a href="https://en.wikipedia.org/wiki/Modal_logic">modal logics</a>, for example that model checking for the <a href="https://en.wikipedia.org/wiki/Modal_%CE%BC-calculus">\(\mu\)-calculus</a> can now be solved in quasi-polynomial time.</div><div style="text-align: left;"><br /></div><div style="text-align: left;">In follow-up work, Hugo Gimbert and Rasmus Ibsen-Jensen <a href="https://arxiv.org/abs/1702.01953">give a short proof of correctness</a> of the parity games algorithm. Marcin Jurdziński and Ranko Lazić <a href="https://doi.org/10.1109/LICS.2017.8005092">give an alternative algorithm</a> that reduces the space complexity from quasi-polynomial to nearly linear.</div><p></p>https://blog.computationalcomplexity.org/2024/09/favorite-theorems-parity-games.htmlnoreply@blogger.com (Lance Fortnow)0tag:blogger.com,1999:blog-3722233.post-7167876175577402946Sun, 01 Sep 2024 19:29:00 +00002024-09-02T08:11:40.115-05:00Six degrees of separation has been proven. Really? 
<p>There is a paper (see <a href="https://studyfinds.org/six-degrees-of-separation-math/">here</a> for an article about the paper; the link to the paper itself is later) that claims to PROVE that, on average, the distance (for some definition of distance) between any two people is 6.</p><p>1) We've blogged about this kind of thing:</p><p><a href="https://blog.computationalcomplexity.org/2013/11/my-pope-number-is-2-smaller-world.html">My Pope Number is 2</a><br /></p><p><a href="https://blog.computationalcomplexity.org/2003/01/what-is-your-erds-number.html">What is your Erdos Number?</a><br /></p><p><a href="https://blog.computationalcomplexity.org/2018/07/the-six-degrees-of-vdw.html">The six degrees of VDW</a><br /></p><p>2) The paper's link is <a href="https://doi.org/10.1103/PhysRevX.13.021032">here</a> and in case link rot sets in, my copy of it is <a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/sixdegrees.pdf">here</a>.</p><p>3) The paper has 14 authors. AH, so that's why we are, on average, 6 degrees apart: because papers have so many authors. (Actually papers in Biology have LOTS more than 14.) <br /></p><p>4) The paper defines a mathematical model for social networks and analyzes a person's cost and benefit of forming a connection. </p><p>5) Is the mathematical model realistic? I think so. But it's always tricky since empirical evidence already gave the answer of <i>six. </i>The true test of a mathematical model is to predict something we didn't already know. </p><p>6) One thing about applying this to the real world: What is a connection? Friendship (also hard to define)? Handshakes? I like the <i>will respond to my emails </i>metric, though that may leave out half of my colleagues and even some of my friends. <br /></p><p>(How come people do not respond to important emails? Perhaps a topic for a later blog.) 
</p><p>7) My Jeopardy number is 1 in two different ways:</p><p>a) My co-author Erik Demaine was mentioned in a question on Jeopardy, see <a href="https://j-archive.com/showgame.php?game_id=8839">here</a> and look at the 800 dollar question in Double Jeopardy, Category Boy Genius.</p><p>b) My cousin Adam Winkler's book, Gunfight, was mentioned in a question on Jeopardy, see <a href="https://x.com/adamwinkler/status/1121953811692544002?lang=es">here</a>. It was a 400 dollar question.</p><p>In both cases the question was easy, hence my inside information did not give me an advantage. </p><p>(Note: they were actually mentioned in an answer on Jeop since Jeop has that weird format where they give the answer and you need to find the question. For example:</p><p>Game show that has a format that Bill Gasarch thinks is stupid</p><p>What is Jeopardy?)</p>https://blog.computationalcomplexity.org/2024/09/six-degrees-of-separation-has-been.htmlnoreply@blogger.com (gasarch)0tag:blogger.com,1999:blog-3722233.post-235093537785309311Thu, 29 Aug 2024 14:18:00 +00002024-08-29T09:18:44.008-05:00My Quantum Summer<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjq1laOz2yisGCfaGofNWOcRyJClHCJROoyJtGj1ajx2G2QwbAqTtRoO80yhbpBI2VEhntd52zbX4YeYL0Unc9d075wfa_pqFo4VigyJszQmBh8awkpbxwUKWYUUyPyYs2spUnszvfgu5DIEiT73JGKDi_S_raqxKrhR7LYyYizMVhC497qC4TN/s1536/PsiQuantum_IQMP_2.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="918" data-original-width="1536" height="239" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjq1laOz2yisGCfaGofNWOcRyJClHCJROoyJtGj1ajx2G2QwbAqTtRoO80yhbpBI2VEhntd52zbX4YeYL0Unc9d075wfa_pqFo4VigyJszQmBh8awkpbxwUKWYUUyPyYs2spUnszvfgu5DIEiT73JGKDi_S_raqxKrhR7LYyYizMVhC497qC4TN/w400-h239/PsiQuantum_IQMP_2.jpg" width="400" 
/></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Rendering of PsiQuantum's facility in Chicago</td></tr></tbody></table><p>I wasn't looking for quantum this summer but it found me. At various events I ran into some of the most recognized names in quantum computing: Peter Shor, Charlie Bennett, Gilles Brassard and Scott Aaronson (twice), Harry Buhrman and Ronald de Wolf.</p><p>I was invited to Amsterdam for a <a href="https://www.cwi.nl/en/groups/algorithms-and-complexity/events/farewell-symposium-harry-buhrman/">goodbye event</a> for Harry Buhrman. Harry co-founded and co-led the CWI quantum center <a href="https://qusoft.org/">QuSoft</a> and has now moved to London to join <a href="https://www.quantinuum.com/">Quantinuum</a> as chief scientist. I was invited to give a talk on Harry's classical complexity work before he joined the dark side. Ronald and Gilles gave talks after mine. </p><p>On the way to Amsterdam I spent a few days visiting Rahul Santhanam in Oxford. Scott Aaronson and Dana Moshkovitz showed up with <a href="https://scottaaronson.blog/?p=8022">kids in tow</a>. Scott <a href="https://www.cs.ox.ac.uk/seminars/2656.html">gave a talk</a> on AI, not quantum, in Oxford. I would see Scott again at the <a href="https://blog.computationalcomplexity.org/2024/07/complexity-in-michigan.html">Complexity conference</a> in Michigan.</p><p>Peter Shor and Charlie Bennett both attended the <a href="https://www.cs.bu.edu/faculty/gacs/lnd-symp/index.html">Levin Event</a> I mentioned last week.</p><p>I talked to all of them about the future of quantum computing. Even though I'm the quantum skeptic in the crowd, we don't have that much disagreement. Everyone agreed we haven't yet achieved practical applications of quantum computing and that the power of quantum computing is often overstated, especially in what it can achieve for general search and optimization problems. 
There is some disagreement on when we'll get large-scale quantum computers, say enough to factor large numbers. Scott and Harry would say growth will come quickly, like we've seen in AI; others think it will be more gradual. Meanwhile, machine learning <a href="https://deepmind.google/discover/blog/ferminet-quantum-physics-and-chemistry-from-first-principles/">continues to solve problems</a> we were waiting for quantum machines to attack. </p><p>My city of Chicago had a big quantum announcement, the <a href="https://blockclubchicago.org/2024/07/25/former-south-works-site-to-host-quantum-computing-campus-psiquantum/">Illinois Quantum and Microelectronics Park</a>, built on an old steel works site on the Southeast Side of the city with federal, state and local funds as well as a big investment from <a href="https://www.psiquantum.com/">PsiQuantum</a>. I have my doubts as to whether this will lead to a practical quantum machine but no doubt having all this investment in quantum will bring more money and talent to the area, and we'll get a much better scientific and technological understanding of quantum. </p><p>PsiQuantum's website claims they are "Building the world's first useful quantum computer". PsiQuantum is using photonic qubits, based on particles of light. Harry's company Quantinuum is using trapped ions. IBM and Google are trying superconducting qubits. Microsoft is using topological qubits and Intel silicon qubits (naturally). Who will succeed? They all might. None of them? Time will tell, though it might be a lot of time.</p>https://blog.computationalcomplexity.org/2024/08/my-quantum-summer.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-4349899153217989879Mon, 26 Aug 2024 19:49:00 +00002024-08-26T14:49:02.335-05:00What's worse for a company: being hacked or having technical difficulties? 
I would have thought being hacked but...<p>At the Trump-Musk interview:</p><p>1) There were technical difficulties, which caused it to start late and have some other problems.</p><p>2) Musk and (I think) Trump claimed that this was a DDOS attack because people were trying to prevent Donald from having his say (listen to the beginning of the interview).<br /></p><p>3) Experts have said it was not a DDOS attack, or any kind of attack.</p><p>(For the interview see <a href="https://x.com/elonmusk/status/1823254086126608862">here</a>. If the link does not work either blame a DDOS hack or technical difficulties.)</p><p>When a company is hacked, do they admit it? This is hard to tell since if it never comes out, how do you know? </p><p>Would a company rather admit to the public they had tech difficulties OR admit to the public they were attacked? I would think they would rather admit to the public they had tech difficulties. </p><p>I suspect that X had tech difficulties because LOTS of people wanted to hear Trump. <br /></p><p>Faced with this, what were Elon's options? <br /></p><p>Options</p><p>1) Claim that this was caused because so many people wanted to hear Trump that the system could not handle it. This would make Trump look good, not make Elon look too bad, and is probably true, so it won't be discovered later that he lied.</p><p>2) Claim that his system was attacked. This allows Trump to claim his enemies are out to get him, thus pushing the narrative that he is a victim. But Musk looks worse than if the system was just overloaded. Plus it's false, which will (and did) come out. However, there were absolutely no consequences to lying. </p><p>I think it's unusual for a company to lie by claiming they were hacked when they weren't. 
If you know of any other examples then please comment.</p><p><br /></p><p><br /></p><p><br /></p><p><br /></p>https://blog.computationalcomplexity.org/2024/08/whats-worse-for-company-being-hacked-or.htmlnoreply@blogger.com (gasarch)4tag:blogger.com,1999:blog-3722233.post-5093068895888573450Wed, 21 Aug 2024 14:49:00 +00002024-08-21T10:04:16.196-05:00The Levin Translation<p>Earlier this summer I attended a <a href="https://www.cs.bu.edu/faculty/gacs/lnd-symp/index.html">Celebration for Leonid Levin</a>, who recently turned 75. To prepare my talk I wanted to go back to Levin's 1971 <a href="https://www.mathnet.ru/eng/ppi914">two-page Russian masterpiece</a> that covered both his independent discovery of NP-completeness and <a href="https://blog.computationalcomplexity.org/2003/05/universal-search.html">Universal Search</a>. Boris Trakhtenbrot's wonderful <a href="https://doi.org/10.1109/MAHC.1984.10036">Survey of Russian Approaches to Perebor</a> has a messy translation of the Levin paper in the appendix. Could I use AI to get a cleaner version? TL;DR: Here's the <a href="https://lance.fortnow.com/papers/files/Levin%20Universal.pdf">PDF</a>.</p><p>Let me first take a detour to when I was a graduate student in the 1980s. I went to meet my advisor Mike Sipser and found him with a couple of Russian students and postdocs carefully translating the latest paper from Alexander Razborov, then a young researcher at the Steklov Mathematical Institute in Moscow. Razborov had sent a physical copy of his latest paper and they were translating word by word. Would this be the paper that showed NP sets don't have polynomial-size circuits and P ≠ NP? Spoiler alert: It wasn't, but the process remained etched in my memory four decades later.</p><p>But now we live in the 2020s and I turned to my friend <a href="https://claude.ai/">Claude</a> for help. 
I gave Claude the <a href="https://www.mathnet.ru/links/3bf8add1b04882bbafcefb73556b3357/ppi914.pdf">Russian PDF</a> and asked <i>Translate into English</i></p><blockquote><p>Certainly, I'll translate the Russian text into English for you. Here's the translation:</p><p>PROBLEMS OF INFORMATION TRANSMISSION</p><p>Volume IX 1973 Issue 3</p><p>BRIEF COMMUNICATIONS</p><p>UDC 519.14</p><p>UNIVERSAL SEQUENTIAL SEARCH PROBLEMS</p><p>L. A. Levin</p><p>The article examines several well-known mass problems of the "sequential search type" and proves that these problems can only be solved in the time it takes to solve any problem of the specified type in general...</p></blockquote><p>So far so good. <i>Summarize the article. </i></p><blockquote><p>This article by L. A. Levin, titled "Universal Sequential Search Problems," discusses a class of computational problems known as "sequential search problems" or "enumeration problems." Here's a summary of the key points:</p><p>1. The article addresses well-known computational problems that require exhaustive search (enumeration) to solve.</p><p>2. While many classical problems have been proven to be algorithmically unsolvable, there exists a class of problems that are solvable but require an impractically large amount of computation time.</p><p>3. These problems, called "sequential search problems," include tasks like minimizing Boolean functions, searching for proofs of limited length, and determining graph isomorphism.</p><p>4. While these problems can be solved by simple algorithms that enumerate all possibilities, such algorithms require exponential time, leading mathematicians to believe that simpler algorithms are impossible.</p><p>5. The article introduces the concept of "universal sequential search problems" - problems to which all other sequential search problems can be reduced.</p><p>6. Levin presents six specific problems (numbered 1-6 in the text) and claims they are universal sequential search problems.</p><p>7. 
The main theorem (Theorem 1) states that if there exists any sequential search problem that cannot be solved in less than f(n) time (for some function f), then the six presented problems also cannot be solved in less than f(n) time.</p><p>8. Another important result (Theorem 2) states that for any sequential search problem, there exists an algorithm that solves it in optimal time, up to a constant factor and an additive term comparable to the input length.</p><p>9. The article suggests that this approach can be extended to most interesting sequential search problems.</p><p>10. While the article doesn't solve the fundamental question of whether these problems truly require exponential time, it establishes relationships between different problems in this class.</p><p>This work is significant in computational complexity theory, as it relates various hard problems to each other and suggests a hierarchy of difficulty among computational tasks.</p></blockquote><p>The summary is almost as long as the article.</p><p>Then I noticed the term "mass problems" that occurs throughout the paper. <i>In the translation the word "mass" seems off. What are some alternatives?</i></p><p></p><blockquote><p>You're right to notice that the word "mass" in this context might seem unusual or potentially confusing. In the original Russian text, the term used is likely "массовые задачи" (massovye zadachi), which is a term commonly used in Russian mathematical literature. However, in English, we typically use different terminology. Here are some alternative translations that might be more appropriate in this context:</p><p>1. "General problems" or "general computational problems"</p><p>2. "Large-scale problems"</p><p>3. "Broad classes of problems"</p><p>4. "Universal problems"</p><p>5. "Widely applicable problems"</p><p>6. "Common computational problems"</p><p>7. "Algorithmic problems"</p><p>8. "Computational class problems"</p><p>9. "Generic problems"</p><p>10. 
"Generalized problems"</p><p>In the context of this paper, the author is likely referring to problems that are not specific to a particular instance but represent a whole class or type of problem. The term "universal" is actually used later in the paper, which aligns well with this interpretation.</p><p>So, a more idiomatic translation might replace phrases like "classical mass problems" with "classical universal problems" or "broad classes of classical problems".</p><p>This choice of wording would better convey the idea that these are problems that apply broadly across many instances or types of computational tasks, rather than being specific to a particular case.</p></blockquote><p></p><p>So "mass problems" is mass as in mass audience not mass as in physics. <i>What if we left out the word "mass" completely.</i></p><p></p><blockquote><p>You're right, omitting the word "mass" entirely could indeed improve the clarity and flow of the translation without losing the essential meaning. Let's consider how some key sentences would read without it:</p><p>1. Instead of:</p><p>"After clarifying the concept of an algorithm, the algorithmic unsolvability of a number of classical mass problems was proven..."</p><p>We could have:</p><p>"After clarifying the concept of an algorithm, the algorithmic unsolvability of a number of classical problems was proven..."</p><p>2. Instead of:</p><p>"However, if we assume that there exists some (even artificially constructed) mass problem of the sequential search type that cannot be solved by simple algorithms..."</p><p>We could have:</p><p>"However, if we assume that there exists some (even artificially constructed) problem of the sequential search type that cannot be solved by simple algorithms..."</p><p>3. 
And for the theorem:</p><p>"If there exists any problem of sequential search (quasi-sequential search) type that cannot be solved in time less than f(n) for argument length comparable to n, then problems 1-6 also have this property."</p><p>This approach works well because:</p><p>1. The context already implies that we're dealing with broad classes of problems, not specific instances.</p><p>2. The term "sequential search problems" itself conveys the idea of a general class of problems.</p><p>3. It makes the text more concise and easier to read in English.</p><p>4. It avoids potential confusion that might arise from an unusual use of the word "mass" in this context.</p><p>So yes, simply omitting "mass" throughout the translation would likely result in a clearer, more idiomatic English text while preserving the original meaning of the article.</p></blockquote><p><i>Give me the full translation without the mass, </i>which Claude dutifully did. Why stick with a text version? <i>Format this nicely in latex. </i></p><p></p><blockquote><p>Certainly! Here's the article formatted in <a href="https://lance.fortnow.com/papers/files/Levin%20Universal.tex">LaTeX</a>.</p><p>This LaTeX code will produce a nicely formatted version of the paper. It includes:</p><p></p><ol style="text-align: left;"><li>Proper title, author, and abstract</li><li>Sections for introduction, definitions and problems, and main results</li><li>Theorem, lemma, and definition environments</li><li>Proper mathematical formatting for equations and logical statements</li><li>Enumerated list for the six problems</li><li>Bibliography section</li></ol><p></p><p>To compile this LaTeX document, you would need to save it as a .tex file and run it through a LaTeX compiler. The output will be a professionally formatted academic paper.</p></blockquote><p>So I did and here is the <a href="https://lance.fortnow.com/papers/files/Levin%20Universal.pdf">final PDF</a>. 
</p><p>Russian papers at the time famously had no proofs so for fun I tried <i>Can you fill in the proofs of the theorems</i></p><p></p><blockquote><p>Certainly! I'll expand the LaTeX document to include proofs for the theorems and lemma. Please note that since the original paper doesn't provide detailed proofs, I'll have to construct plausible proofs based on the information given and general knowledge of the field. These proofs should be considered as educated guesses rather than the actual proofs from Levin's original work.</p><p>Here's the updated <a href="https://lance.fortnow.com/papers/files/Levin%20Universal%20with%20Proofs.tex">LaTeX code</a> with added proofs.</p><p>This expanded version includes proof sketches for Theorem 1, Lemma 1, and Theorem 2. These proofs are constructed based on the information given in the paper and general principles of computational complexity theory. They should give a good idea of the reasoning behind the theorems, but may not exactly match Levin's original proofs.</p><p>The main ideas in these proofs are:</p><p></p><ol style="text-align: left;"><li>For Theorem 1: Using the universality of problems 1-6 to show that they must be at least as hard as any other sequential search problem.</li><li>For Lemma 1: Demonstrating how an arbitrary sequential search problem can be reduced to one of the listed problems (in this case, Problem 2).</li><li>For Theorem 2: Constructing a universal algorithm that simulates all possible algorithms in parallel, achieving optimal time complexity up to a constant factor.</li></ol><p></p><p>These proofs maintain the paper's focus on the relationships between different types of sequential search problems and the idea of universal sequential search problems.</p></blockquote><p>You can judge the <a href="https://lance.fortnow.com/papers/files/Levin%20Universal%20with%20Proofs.pdf">proofs</a> on your own. 
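Theorem 2 in Claude's summary is Levin's universal search: dovetail over all programs so that the i-th program gets roughly a \(2^{-i}\) share of the time, and you solve any search problem in optimal time up to a constant factor. A toy Python sketch of the dovetailing idea, with a fixed list of candidate generators standing in (hypothetically) for the enumeration of all programs:

```python
from itertools import count

def universal_search(verifier, programs):
    # programs: callables producing generators of candidate solutions.
    # Run them all in a dovetailed schedule with staggered budgets;
    # return the first candidate the (assumed fast) verifier accepts.
    gens = [p() for p in programs]
    alive = [True] * len(gens)
    for phase in count(1):
        if not any(alive):
            return None  # every candidate program ran out (toy case only)
        for i, g in enumerate(gens):
            if not alive[i]:
                continue
            for _ in range(2 ** max(0, phase - i)):  # program i gets ~2^-i of the time
                try:
                    y = next(g)
                except StopIteration:
                    alive[i] = False
                    break
                if verifier(y):
                    return y
```

For instance, with verifier `lambda y: y * y == 625` and a single counting generator, the search returns 25. The real construction never terminates with None, since it enumerates all programs; the constant-factor overhead hides the (astronomically large) 2^i slowdown for the first program that solves the problem.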
</p><p></p><p></p>https://blog.computationalcomplexity.org/2024/08/the-levin-translation.htmlnoreply@blogger.com (Lance Fortnow)2tag:blogger.com,1999:blog-3722233.post-3995032929363357995Sun, 18 Aug 2024 22:14:00 +00002024-09-29T19:42:16.043-05:00Request open problems in honor of Luca Trevisan<p><b> Request for Open Problems In Memory of Luca Trevisan</b></p><p><b>ADDED LATER: DEADLINE EXTENDED TO OCT 14 BUT THAT IS A VERY VERY HARD DEADLINE. <br /></b></p><p>Luca Trevisan passed away on June 19, 2024, at the age of 52, of cancer.</p><p>I am putting together an open problems column in his honor.</p><p>If you are interested in contributing then please email me a document with the following specifications.</p><p>1) It can be as short as half-a-page or as long as 2 pages. One way to make it short is to give many references or pointers to papers with more information.</p><p>2) It should be about an open problem that is either by Luca or inspired by Luca or a problem you think Luca would care about.</p><p>3) In LaTeX. Keep it simple as I will be cutting-and-pasting all of these into one column.</p><p>4) Email to </p><p>gasarch@umd.edu <br /></p><p>Deadline is Oct 1, 2024. EXTENDED TO OCT 14.<br /></p><p> <br /></p><p><br /></p><p><br /></p>https://blog.computationalcomplexity.org/2024/08/request-open-problems-in-honor-of-luca.htmlnoreply@blogger.com (gasarch)0tag:blogger.com,1999:blog-3722233.post-4066846424758672535Wed, 14 Aug 2024 13:04:00 +00002024-08-14T08:04:47.224-05:00Favorite Theorems: Random Oracles<div style="text-align: left;"><a href="https://blog.computationalcomplexity.org/2024/07/favorite-theorems-extracting-ramsey.html">July Edition</a></div><div style="text-align: left;"><br /></div><div style="text-align: left;">This month's favorite theorem is a circuit result that implies the polynomial-time hierarchy is infinite relative to a random oracle, answering an open question that goes back to the 80s. 
</div><div style="text-align: center;"><br /></div><div style="text-align: center;"><a href="https://doi.org/10.1145/3095799">An Average-Case Depth Hierarchy Theorem for Boolean Circuits</a></div><div style="text-align: center;">Johan Håstad, Benjamin Rossman, Rocco A. Servedio and Li-Yang Tan</div><div style="text-align: left;"><br /></div><div style="text-align: left;">The authors show how to separate depth d from depth d+1 circuits for random inputs. As a corollary, the polynomial hierarchy is infinite with a random oracle, which means that if we choose an oracle R at random, with probability one, the k+1-st level of the polynomial-time hierarchy relative to R is different from the k-th level relative to R. </div><div style="text-align: left;"><br /></div><div style="text-align: left;">Why should we care about random oracles? By the <a href="https://blog.computationalcomplexity.org/2018/08/the-zero-one-law-for-random-oracles.html">Kolmogorov zero-one law</a>, every complexity statement holds with probability zero or probability one with a random oracle, so for every statement either it or its negation holds with probability one. And since the countable intersection of measure-one sets is measure one, all the complexity statements true relative to a random oracle hold simultaneously, a kind of consistent world. With a random oracle, we have full derandomization, like BPP = P, AM = NP and PH in \(\oplus\mathrm{P}\). We have separations like P ≠ UP ≠ NP. We have results like NP doesn't have measure zero and SAT solutions can be found with non-adaptive queries to an NP oracle. And now we have that the PH is infinite simultaneously with all these other results. </div><div style="text-align: left;"><br /></div><div style="text-align: left;"><a href="https://blog.computationalcomplexity.org/2015/04/ph-infinite-under-random-oracle.html">More details</a> on this paper from a post I wrote back in 2015. 
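The measure argument in the post can be written out in two lines (a sketch in my notation, with \(S_k\) ranging over complexity statements viewed as events over the random oracle \(R\)):

```latex
% Each complexity statement S_k is (essentially) a tail event in R,
% so the Kolmogorov zero-one law gives
\[ \Pr_R[S_k] \in \{0,1\} \quad \text{for every } k. \]
% A countable intersection of measure-one sets has measure one,
% so all the probability-one statements hold together:
\[ \Pr_R[S_k] = 1 \text{ for all } k \;\Longrightarrow\; \Pr_R\Big[\bigcap_{k\ge 1} S_k\Big] = 1. \]
```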
</div>https://blog.computationalcomplexity.org/2024/08/favorite-theorems-random-oracles.htmlnoreply@blogger.com (Lance Fortnow)2tag:blogger.com,1999:blog-3722233.post-3593941461040824201Sun, 11 Aug 2024 01:23:00 +00002024-08-10T20:23:09.848-05:00The combinatorics of Game Shows<p> (Inspired by Pat Sajak stepping down from Wheel of Fortune)</p><p>How many different game shows are there? Many. How many could there be?</p><p>1) Based on Knowledge or something else. <i>Jeopardy</i> is knowledge.<i> Wheel </i>and<i> Deal-No Deal </i>are something else. The oddest are <i>Family Feud </i>and <i>America Says</i> where you have to guess what people said, which might not be what's true. Reminds me of the SNL sketch about a fictional game show <a href="https://www.youtube.com/watch?v=e0HGEZXTy8Y">Common Knowledge</a>. Oddly enough there now is a REAL game show of that name, but it actually wants true answers to questions. </p><p>2) Do the losers get anything more than some minimum? On <i>Wheel </i>they do, but I do not know of any other show where they do. On <i>People Puzzler </i>they get a subscription to People Magazine!</p><p>3) Is how much the winners win based on how well they do in the game, or is it a flat number? <i>Jeop </i>and <i>Wheel </i>are based on how they do in the game. On <i>America Says, Switch,</i> and many others, the winner plays a final round to either win (say) $10,000 or just $1,000.</p><p>4) MONEY: I want to say can you win A LOT or NOT SO MUCH. I'll set $20,000 above or below. On <i>Jeop </i>or <i>Wheel </i>you CAN get > $20,000. As noted in the above item there are shows where at the end you can win $10,000 but that's it.</p><p>5) Do winners get to come back the next day? <i>Jeop </i>and <i>Masterminds </i>do that but I do not know of any other show that does. </p><p>6) Are celebrities involved? This used to be more common, but seems to have faded. <i>Hollywood Squares</i> was an example. 
<i>Masterminds</i> is hard to classify in this regard since the celebs are people who know a lot of stuff (they are called Masterminds) and their main claim to fame might be being on <i>Masterminds</i>. Or on some other quiz show. Ken Jennings was a Mastermind in the first season. </p><p>7) If it's question-based then is it multiple choice or fill in the blank? Some do both. Common Knowledge has the first round multiple choice, and later rounds fill-in-the-blank, and the finale is multiple-choice. </p><p>8) Do you win money or merchandise? There are many combinations of the two. </p><p>9) Are the players teams of 3 (<i>Common Knowledge</i>), individuals (<i>Jeopardy</i>), just one person or team (<i>Deal-No Deal</i> and <i>Cash Cab</i>), or something else? <br /></p><p>10) Is it a competition, people vs people (most of the quiz shows), or is it one person answering questions (<i>Who wants to be a Millionaire, Cash Cab</i>)? I've noticed that on <i>Cash Cab </i>if you are close to the answer they give it to you, which I don't think they would do on <i>Jeop</i>. This could be because there is no competitor, so being flexible is not unfair to someone else. <i>Millionaire</i> can't do this since it's strictly multiple choice. </p><p>There are other parameters but I will stop here. I leave it to the reader to more fully expand these categories and find out how many game shows there can be.</p><p><br /></p><p><br /></p><p><br /></p><p><br /></p>https://blog.computationalcomplexity.org/2024/08/the-combinatorics-of-game-shows.htmlnoreply@blogger.com (gasarch)2tag:blogger.com,1999:blog-3722233.post-5329945089839495048Mon, 05 Aug 2024 02:13:00 +00002024-08-05T10:17:04.533-05:00Determining which math problems are hard is a hard problem<p> I was wondering what the hardest math problems were, and how to define that. So I googled </p><p><i>Hardest Math Problems</i></p><p>The first hit is <a href="https://moonpreneur.com/math-corner/worlds-hardest-math-problems-with-solutions/">here</a>. 
The 10 problems given there bring up the question of <i>what is meant by hard?</i></p><p>I do not think the order the problems were given in is an indication of hardness. Then again, they seem to implicitly use many definitions of hardness.</p><p>1) The 4-color problem. It required a computer to solve it and had lots of cases. But even if that is why it's considered hard, the solution to the Kepler Conjecture (see <a href="https://annals.math.princeton.edu/wp-content/uploads/annals-v162-n3-p01.pdf">here</a>) is harder. And, of course, it's possible that either of these may get simpler proofs (the 4-color theorem already has, though it still needs a computer).</p><p>2) Fermat's Last Theorem. Open a long time, used lots of hard math, so that makes sense.</p><p>3) The Monty Hall Paradox. Really? If<i> hard </i>means <i>confusing to most people and even some mathematicians </i> then yes, it's hard. But on a list of the 10 hardest math problems of all time? I think not. </p><p>4) The Traveling Salesperson problem. If they mean resolving P vs NP then yes, it's hard. If they mean finding a poly time algorithm for TSP then it may be impossible. <br /></p><p>5) The Twin Primes Conjecture. Yes, that one is hard. Open a long time and the Sieve method is known to NOT be able to solve it. There is a song about it <a href="https://www.youtube.com/watch?v=djzKCZHeVjY">here</a>.<br /></p><p>6) The Poincare Conjecture. Yes, that was hard before it was solved. It's still hard. This is another issue with the list- they mix together SOLVED and UNSOLVED problems.</p><p>7) The Goldbach Conjecture. Yes, that one is hard. </p><p>8) The Riemann hypothesis is the only problem on both Hilbert's 23 problems in 1900 and on the Clay prize list. Respect! There is a song about it <a href="https://www.youtube.com/watch?v=gKS3pxscyHM">here</a>.<br /></p><p>9) The Collatz conjecture. Hard, but this might not be a good problem. 
Fermat was a good problem since working on it led to math of interest even before it was solved. Riemann is a good problem since we really want to know the answer. Collatz has not led to that much math of interest and the final result is not that interesting.</p><p>10) Navier-Stokes and Smoothness. Hard! Note that it's a Millennium problem. </p><p>NOTES</p><p>1) TSP, Poincare, Riemann, Navier-Stokes are all Millennium problems. While that's fine, it also means that there are some Millennium problems that were not included: The Hodge Conjecture, The Birch and Swinnerton-Dyer Conjecture, Yang-Mills and the Mass gap (that's one problem: YM and the Mass gap). These three would be hard to explain to a layperson. <i>Yang Mills and the Mass Gap </i>is a good name for a rock band. <br /></p><p>2) Four have been solved (4-color, FLT, Monty Hall (which was never open), Poincare) and six have not been solved (TSP, Twin primes, Goldbach, RH, Collatz, Navier-Stokes)</p><p>3) I have also asked the web for the longest amount of time between a problem being posed and solved. FLT seems to be the winner with 358 years, though I think that number is too precise since it's not quite clear when it was posed. I have another candidate but you might not want to count it: The Greek Constructions of trisecting an angle, duplicating the cube, and squaring the circle. The problem is that the statement:</p><p>In 400BC the Greeks posed the question: Prove or Disprove that one can trisect an angle with a ruler and compass</p><p>is false on many levels:</p><p>a) Nobody thought of <i>prove or disprove </i>back in 400BC (and that date is too precise). </p><p>b) Why would a compass, which helps you find where North is, help you with this problem? <br /></p><p>(ADDED LATER: Some of the comments indicate that people do not know that point b is a joke. Perhaps not a good joke, but a joke.) </p><p>SO, when was it POSED in the modern sense is much harder to say. 
For more on this problem see the book <i>Tales of Impossibility</i> or read my review of it <a href="https://www.cs.umd.edu/~gasarch/bookrev/FRED/impossible.pdf">here</a>.</p><p>(ADDED LATER: A comment pointed out that constructing a trisection (and duplicating the cube and squaring the circle) were proven impossible. I knew that but forgot to say it, and to make the point about the very long time between posing and solving, so I will elaborate here:</p><p>1837: Wantzel showed that there is no way to, with a straightedge and compass, trisect an angle or duplicate the cube. This used Field Theory.</p><p>1882: Lindemann showed pi was transcendental and hence there is no straightedge and compass construction to square the circle.</p><p>So one<i> could say</i> it took 400+1882 = 2282 years to solve the problem, but as noted above, to say the problem was posed in 400BC is not really right.)</p><p>4) Songs are needed for the other problems on this list UNION the Millennium problems. The Hodge Conjecture would be a challenge. I DID see some songs on YouTube that claimed to be about some of these problems, but they weren't. Some were instrumentals and some seemed to have no connection to the math. <br /></p><p>5) Other lists I've seen include:</p><p>a) Prove there are no odd perfect numbers. That seems to be hard. This could have been posed before FLT was posed, but it's hard to say. <br /></p><p>b) Prove the following are transcendental: pi + e, the Euler-Mascheroni constant. There are other open problems here as well. </p><p>These lists make me think more carefully about what I mean by HARD and PROBLEM and even MATH.</p><p><br /></p><p><br /></p>https://blog.computationalcomplexity.org/2024/08/determing-which-math-problems-are-hard.htmlnoreply@blogger.com (gasarch)10tag:blogger.com,1999:blog-3722233.post-3373321512834477308Sun, 28 Jul 2024 12:23:00 +00002024-07-28T07:23:24.878-05:00In the future we will all have songs written about us, and it will be Lance's fault. 
<p>In response to my blog post about how it's easier to FIND novelty songs (and other things) than it used to be (see <a href="https://blog.computationalcomplexity.org/2024/06/technology-1966-2006-2023.html">here</a>) Lance showed how easy it is to CREATE a novelty song using AI. He had an AI write lyrics and music for THE BILL, see <a href="https://suno.com/song/777de155-4493-45ad-bdab-8f356c0d6ba5">here</a>.</p><p>The song is pretty good and pretty accurate (except that I don't drink coffee or burn toast and I would not say that in math I'm <i>quite the star</i>), but this post is NOT about the song. </p><p>There have been songs about</p><p>The Mandelbrot set (see <a href="https://www.youtube.com/watch?v=ES-yKOYaXq0">here</a>), </p><p>Lobachevsky (see <a href="https://www.youtube.com/watch?v=gXlfXirQF3A">here</a>), </p><p>Gauss's Law (see <a href="https://www.youtube.com/watch?v=RnPy0jJYZX0">here</a>), </p><p>Galois (see <a href="https://www.youtube.com/watch?v=l3hNtBbh_E0">here</a>), </p><p>The Bolzano-Weierstrass Theorem (see <a href="https://www.youtube.com/watch?v=dfO18klwKHg">here</a>), </p><p>William Rowan Hamilton (see <a href="https://www.youtube.com/watch?v=SZXHoWwBcDc">here</a>), </p><p>and I will end this list with the Newton-Leibniz Rap (see <a href="https://www.youtube.com/watch?v=COeKdP3EkXU">here</a>).</p><p>(I am sure there are more songs about famous mathematicians. If you know any that are better than the BW Rap, that is, any of them, please leave a comment.) <br /></p><p>Side note: There are poems about Fermat's last theorem, as discussed in my post <a href="https://blog.computationalcomplexity.org/2024/07/flt-solution-annouement-had-its-31s.html">here</a>.</p><p>So what do Mandelbrot, Lobachevsky, Gauss, Galois, Bolzano, Weierstrass, Hamilton, Newton, Leibniz, and Fermat have in common? </p><p>They are all famous and for a good reason- they all did math worth doing that is remembered many years later. 
</p><p>Bill Gasarch- not so much (unless the Muffin Problem is the key to solving world hunger).</p><p>In the past the EFFORT to write a song about someone was great enough that one would only bother for famous people. </p><p>With AI it is now EASY, as Lance did with his song THE BILL. He used <a href="https://chatgpt.com/share/1b531380-6e19-41b1-a6cc-0d0659305cc9">ChatGPT for the lyrics</a> and <a href="https://suno.com/song/777de155-4493-45ad-bdab-8f356c0d6ba5">Suno for the song itself</a>. </p><p>So what does this say about the future? It will be so easy to write songs about ANYBODY that it will be done. So having a song about you will no longer be a sign that you are famous or special. We are RIGHT NOW in a transition. If I tell my nephew that there is a song about me and that I have a Wikipedia page, he is impressed. My great-niece- not so much. </p><p><br /></p>https://blog.computationalcomplexity.org/2024/07/in-future-we-will-all-have-songs.htmlnoreply@blogger.com (gasarch)2tag:blogger.com,1999:blog-3722233.post-948304751345278166Wed, 24 Jul 2024 13:07:00 +00002024-07-24T08:07:37.411-05:00Complexity in Michigan<p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPPQMfJyV8y3GZNMocA2fWM2vr2k3PBwwo4fUq_grZrJuWEeOWfRiNnn2x1D_gFrDMgk72SJ0WfPysSGcqWKcf1RTnBIqMQlAeviLunN7vJ-RhPv7zhPfxgzir_GWlRhHfdx7ADlDUcL7e5MKrGrkmIK-MfXAQtMT3pu_W68QhyZ_P0zxJ807O/s4080/PXL_20240724_000544404.jpg" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="3072" data-original-width="4080" height="301" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPPQMfJyV8y3GZNMocA2fWM2vr2k3PBwwo4fUq_grZrJuWEeOWfRiNnn2x1D_gFrDMgk72SJ0WfPysSGcqWKcf1RTnBIqMQlAeviLunN7vJ-RhPv7zhPfxgzir_GWlRhHfdx7ADlDUcL7e5MKrGrkmIK-MfXAQtMT3pu_W68QhyZ_P0zxJ807O/w400-h301/PXL_20240724_000544404.jpg" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Invited Speaker Nutan Limaye, Conference Chair Valentine Kabanets,<br />2024 PC Chair Rahul Santhanam, myself, 2025 PC Chair Srikanth Srinivasan<br />and 2025 Local Arrangements chair Swastik Kopparty enjoy some tapas.</td></tr></tbody></table>I have a long history with the <a href="https://computationalcomplexity.org/">Computational Complexity conference</a>. I attended the first 26 meetings (1986-2011) and 30 of the first 31. I chaired the conference committee from 2000-2006. <a href="https://dblp.org/db/conf/coco/index.html">According to DBLP</a> I still have the most papers appearing in the conference (32). I even donated the domain name for the conference (with the caveat that I could keep the subdomain for this blog).</p><p>Only Eric Allender had a longer streak, having attended the first 37 conferences through 2022 in Philadelphia (if you count the two online during the pandemic) before he retired.</p><p>But I haven't been back to Complexity since that <a href="https://blog.computationalcomplexity.org/2016/06/ccc-2016.html">31st conference</a> in Tokyo in 2016. Due to my administrative roles, various conflicts and changes in the field you just start missing conferences. But with the conference at the University of Michigan in Ann Arbor within driving distance of Chicago it was time to go home for the 39th meeting. And it's a good thing I drove as more than one person had flight delays due to the CrowdStrike outage.</p><p>The complexity conference remains relatively stable at about 75-100 registrants, the majority students and young researchers. I've moved from wise old sage to "who is that guy?" 
But I'm having great fun talking to old acquaintances and new. I'm impressed with the newer generations of complexity theorists--the field is in good hands.</p><p>Best paper goes to Michael Forbes for <a href="https://doi.org/10.4230/LIPIcs.CCC.2024.31">Low-Depth Algebraic Circuit Lower Bounds over Any Field</a>. The work of Limaye, Srinivasan and Tavenas I talked about <a href="https://blog.computationalcomplexity.org/2024/06/favorite-theorems-algebraic-circuits.html">last month</a> gave explicit polynomials with superpolynomial-size lower bounds for constant-depth algebraic circuits but required polynomials over large fields. Forbes extended the lower bounds to all field sizes.</p><p>Best student paper goes to Ted Pyne from MIT for <a href="https://doi.org/10.4230/LIPIcs.CCC.2024.4">Derandomizing Logspace with a Small Shared Hard Drive</a> for showing how to reduce space for randomized log-space algorithms on catalytic machines.</p><p>Check out all the papers in the <a href="https://drops.dagstuhl.de/entities/volume/LIPIcs-volume-300">online proceedings</a>.</p><p>From a relatively quiet business meeting: 36 papers accepted out of 104 submissions, a bit up from previous years. 75 attendees including 42 students, similar to recent years. 2025 conference at the Fields Institute in Toronto August 5-8. 2026 in Lisbon or Auckland.</p><p>The <a href="https://blog.computationalcomplexity.org/2024/06/luca-trevisan-1971-2024.html">loss of Luca Trevisan</a>, PC Chair 2005 and local arrangements chair in 2013 in San Jose, loomed large in the business meeting and at the conference.</p>https://blog.computationalcomplexity.org/2024/07/complexity-in-michigan.htmlnoreply@blogger.com (Lance Fortnow)0tag:blogger.com,1999:blog-3722233.post-6986208603456574594Sun, 21 Jul 2024 13:00:00 +00002024-07-28T07:38:45.334-05:00The FLT solution announcement had its 31st anniversary about a month ago. 
Some poems about FLT NOT from ChatGPT <p>On June 21, 1993, at the Isaac Newton Institute for Mathematical Sciences, Andrew Wiles announced that he had proven Fermat's Last Theorem. That wasn't quite right- there was a hole in the proof that was later patched up with the help of Richard Taylor (his former grad student). A correct proof was submitted in 1994 and appeared in 1995. Wiles is the sole author. </p><p>June 21, 2024 was the 31st anniversary of the announcement. (So today is the 31-year and 1-month anniversary.) I COULD have had ChatGPT write some poems about it. But there is no need. There are already some very nice poems about it written by humans. Will humans eventually lose the ability to write such things? Would that be a bad thing? Either ponder those questions or just enjoy the poems. (My spellcheck still thinks ChatGPT is not a word. It needs to get with the times.) <br /></p><p>1) A link to a set of poems about FLT: <a href="http://scm.org.co/archivos/revista/Articulos/698.pdf">here</a>.</p><p>2) Here is a poem that is not in that set but is excellent. </p><p>A challenge for many long ages<br />Had baffled the savants and sages<br />Yet at last came the light <br />Seems that Fermat was right<br />To the margins add 200 pages </p><p>(I don't know who wrote this or even where I read it. If you know anything about where it was published or who wrote it, please let me know. ADDED LATER: Eric Angelini left a comment telling me that this limerick was written by Paul Robert Chernoff. The comments also have a link to lots of limericks that Paul Robert Chernoff wrote. 
Thanks!)<br /></p><p>3) Here is a poem by Jonathan Harvey that mentions the gap in the original proof.<br /></p><p>A mathematician named Wiles<br />Had papers stacked in large piles<br />Since he saw a clue<br />He could show Fermat true<br />Mixing many mathematical styles<br /></p><p>He labored in search of the light<br />To find the crucial insight<br />Young Andrew, it seems<br />Had childhood dreams<br />To prove Mr. Fermat was right</p><p>He studied for seven long years<br />Expending much blood, sweat, and tears<br />After showing the proof<br />A skeptic said “Poof!<br />There’s a hole here”, raising deep fears.</p><p>This shattered Mr. Wiles’s belief<br />His ship was wrecked on a reef<br />Then a quick switcheroo<br />Came out of the blue<br />Providing his mind much relief.</p><p>Mr. Wiles had been under the gun<br />But the obstacle blocking Proof One<br />Fixed a much older way<br />From an earlier day<br />And now Wiles has his place in the sun<br /></p><p>4) Here is a poem by John Fitzgerald that mentions other unsolved problems including P vs NP<br /></p><p>Fermat’s theorem has been solved,<br />What will now make math evolve?<br /><br />There are many problems still,<br />None of which can cause that thrill.<br /><br />Years and years of history,<br />Gave romance to Fermat-spree,<br /><br />Amateurs and top men too,<br />Tried to push this theorem through.<br /><br />Some have thought they reached the goal,<br />But were shipwrecked on the shoal,<br /><br />So the quest grew stronger still;<br />Who would pay for Fermat’s bill?<br /><br />So what is now the pearl to probe,<br />The snark to hunt, the pot of gold,<br /><br />The fish to catch, the rainbows end,<br />The distant call towards which to tend?<br /><br />One such goal’s the number brick,<br />where integers to all lengths stick:<br /><br />To sides, diagonals, everyone,<br />Does it exist or are there none?<br /><br />Then there are those famous pearls,<br />That have stymied kings and 
earls:<br /><br />Goldbach, Twin Primes, Riemann Zeta;<br />No solutions, plenty data.<br /><br />Find a perfect number odd;<br />Through 3n + 1 go plod;<br /><br />Will the P = N P ?<br />Send a code unbreakably.<br /><br />Are independence proofs amiss;<br />Continuum Hypothesis;<br /><br />Find a proof which has some texture<br />of the Poincaré conjecture.<br /><br />And so, you see, onward we sail,<br />there still are mountains we must scale;</p><p><br />But now there’s something gone from math,<br />At Fermat’s end we weep and laugh.<br /><br /><br /></p>https://blog.computationalcomplexity.org/2024/07/flt-solution-annouement-had-its-31s.htmlnoreply@blogger.com (gasarch)2tag:blogger.com,1999:blog-3722233.post-3103016214493120243Thu, 18 Jul 2024 13:39:00 +00002024-07-19T08:15:55.074-05:00The Story of Shor's Algorithm<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj26KOMyvXsFHWmkkQDqdYyo_phnQvygcxMObQ7-UeSuq1XuBrf0EhFoRBw8oDn11SL4cPDz1TtPLSFZzwL5JvTvA7hd8Xl9UGrgGg1Ht5G0m7lg1CwjKzoz6-Z0dtpqe8ChElzr2O3fJOIO-jLQ25cY-MiRvtKKiujgXsiKSingTrG6_N6aZ0V/s1260/pic3.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="1260" data-original-width="1225" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj26KOMyvXsFHWmkkQDqdYyo_phnQvygcxMObQ7-UeSuq1XuBrf0EhFoRBw8oDn11SL4cPDz1TtPLSFZzwL5JvTvA7hd8Xl9UGrgGg1Ht5G0m7lg1CwjKzoz6-Z0dtpqe8ChElzr2O3fJOIO-jLQ25cY-MiRvtKKiujgXsiKSingTrG6_N6aZ0V/s320/pic3.jpg" width="311" /></a></div>The quantum factoring algorithm of Peter Shor (<a href="https://doi.org/10.1109/SFCS.1994.365700">FOCS</a> 1994, <a href="https://doi.org/10.1137/S0036144598347011">SIAM Review</a> 1999) turns thirty this year. Before his algorithm, quantum computing lacked the killer app, something practical that quantum could do that seems hard for classical computers. 
Back in 1994, I said Shor's algorithm bought quantum computing another twenty years. How I misjudged the longevity of quantum hype. <p></p><p>Peter got the idea for his algorithm from a paper by Daniel Simon solving a theoretical complexity problem. The quantum factoring algorithm is a great example of how a complexity result can open doors to new algorithmic ideas.</p><p>Simon came up with a <a href="https://epubs.siam.org/doi/10.1137/S0097539796298637">beautifully simple example</a> of a problem that requires exponential time on a probabilistic machine but polynomial time on a quantum computer. Let's define addition over the \(n\)-bit strings: for \(x\) and \(y\) in \(\{0,1\}^n\), \(x+y\) is the bitwise XOR of \(x\) and \(y\). For example if \(x\) is 0110 and \(y\) is 1100, \(x+y = 1010\).</p><p>Suppose we have a Boolean function \(f:\{0,1\}^n\rightarrow\{0,1\}^n\) (maps \(n\) bits to \(n\) bits) with the property that \(f(x)=f(y)\) iff \(x=y+z\) for some fixed \(z\). The problem is: given \(f\) as an oracle or a circuit, find \(z\). A classical machine would need exponentially many steps to find \(z\) in the worst case.</p><div>Simon gave a simple quantum algorithm that would, with a single query, output a random \(w\) such that \(w\cdot z=0\). With \(n-1\) linearly independent \(w\), you can solve for \(z\).</div><p>Shor asked what if we could do the same for regular integer addition instead of bitwise XOR. Suppose you have a function \(f\) with \(f(x)=f(y)\) iff \(x-y\) is a multiple of \(z\) for a fixed \(z\). (In Simon's case over bits the only multiples are zero and one.) That means \(f\) is periodic and \(z\) is the period. Shor knew that by an <a href="https://doi.org/10.1145/800116.803773">algorithm by Miller</a>, finding a period leads to factoring.</p><p>Let \(m\) be an odd number with multiple prime factors. Consider \(f(x)=a^x\bmod m\) for a randomly chosen \(a\) relatively prime to \(m\). 
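The reduction from period finding to factoring is easy to check classically. Here is a toy Python sketch (function names are my own): the period is found by brute force, which takes time exponential in the number of bits of \(m\), and is exactly the step a quantum computer does efficiently; the rest is the classical post-processing.

```python
from math import gcd

def order(a, m):
    # Brute-force the multiplicative order of a mod m (the period z of
    # f(x) = a^x mod m).  This loop is the exponentially expensive step
    # that Shor's algorithm replaces with quantum period finding.
    z, x = 1, a % m
    while x != 1:
        x = (x * a) % m
        z += 1
    return z

def try_factor(a, m):
    # Classical post-processing: if the period z is even and
    # a^(z/2) is not -1 mod m, then gcd(a^(z/2) - 1, m) splits m.
    if gcd(a, m) != 1:
        return gcd(a, m)   # lucky guess: a already shares a factor with m
    z = order(a, m)
    if z % 2 == 1:
        return None        # odd period: try another a
    y = pow(a, z // 2, m)
    if y == m - 1:
        return None        # a^(z/2) = -1 mod m: try another a
    f = gcd(y - 1, m)
    return f if 1 < f < m else None

print(try_factor(2, 21))   # period of 2 mod 21 is 6; gcd(2**3 - 1, 21) = 7
```

Running this on \(m=21\), the largest number factored by Shor's algorithm on actual quantum hardware to date, finds the factor 7 instantly; the point is not speed here but that the gcd trick works.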
If this function has a period \(z\), then \(a^z\bmod m=1\), and with probability at least one-half, \(z\) is even and the gcd of \(a^{\frac{z}{2}}-1\) and \(m\) will be a nontrivial factor of \(m\). </p><p>Getting all this to work on a quantum computer requires a number of additional tricks beyond what Simon did but once Shor had the inspiration the rest followed. </p><p>Peter Shor really understood the landscape of theory from complexity to cryptography, had a curiosity about quantum computing, and had the vision to see how it all connected together to get the quantum algorithm that almost single-handedly brought billions of dollars to the field. </p><p>Peter just received the <a href="https://www.itsoc.org/news/shannon-award-2025">Shannon Award</a> for his work on quantum error correction that would help enable quantum computers to run his algorithm. Still, the largest number present-day quantum computers have factored with the algorithm is 21. If (and it's a big if) that number gets past the <a href="https://en.wikipedia.org/wiki/RSA_Factoring_Challenge">RSA challenge numbers</a>, Peter will have far larger prizes in his future.</p>https://blog.computationalcomplexity.org/2024/07/the-story-of-shors-algorithm.htmlnoreply@blogger.com (Lance Fortnow)3tag:blogger.com,1999:blog-3722233.post-3681611090316383574Sun, 14 Jul 2024 19:09:00 +00002024-07-14T14:09:44.299-05:00The Term Quantum Being Misused ... Again<p><br /></p><p>In a post from 2015 I noted that the word <i>quantum</i> is often misused (see <a href="https://blog.computationalcomplexity.org/2015/11/is-word-quantum-being-used-properly-by.html">here</a>). Have things gotten better since then? I think you know the answer. 
But two uses of the word <i>quantum </i>caught my attention.</p><p>1) The episode <i>Subspace Rhapsody</i> of <i>Star Trek- Strange New Worlds</i> is described on IMDB as follows:</p><p><i>An accident with an experimental quantum probability field causes everyone on the Enterprise to break uncontrollably into song, but the real danger is that the field is expanding and beginning to impact other ships--- allies and enemies alike.</i></p><p><i> </i>(I mentioned this episode and pointed to my website of all the songs in it <a href="https://blog.computationalcomplexity.org/2024/06/technology-1966-2006-2023.html">here</a>.) <br /></p><p>SO- is this an incorrect use of the word <i>quantum</i>? Since ST-SNW is fictional, it's a silly question. However, it seems like a lazy Sci-Fi convention to just use the word <i>quantum </i>for random technobabble. </p><p>2) <b>The Economist </b>is a serious British weekly newspaper. Or so I thought until I read this passage in the June 15-21, 2024 issue, in the article featured on the cover, <i>The rise of Chinese Science</i>:</p><p><i>Thanks to Chinese agronomists, farmers everywhere could reap more bountiful harvests. Its perovskite-based solar panels will work just as well in Gabon as in the Gobi desert. But a more innovative China may also thrive in fields with military uses, such as quantum computing or hypersonic weapons.</i></p><p>So <b>The Economist </b>is saying that Quantum Computing has military uses. I am skeptical of this except for the (in my opinion unlikely) possibility that QC can factor and break RSA which, if it happens, won't be for a while. </p><p>It also makes me wonder if the rest of the paragraph, which is on fields I don't know anything about, is also incorrect or deeply flawed. (See <a href="https://www.epsilontheory.com/gell-mann-amnesia/">Gell-Mann Amnesia</a>, which I've also heard called the Gell-Mann Effect.) </p><p>I am not surprised that ST:SNW uses quantum incorrectly (Or did it? 
Maybe an experimental quantum probability field would cause people to sing.) but I am surprised that <b>The Economist</b> misused it. I thought they were more reliable. Oh well. <br /></p><p> <br /></p><p><br /></p><p><br /></p><p><br /></p>https://blog.computationalcomplexity.org/2024/07/the-term-quantum-being-misused-again.htmlnoreply@blogger.com (gasarch)4tag:blogger.com,1999:blog-3722233.post-7923396283869695639Wed, 10 Jul 2024 11:34:00 +00002024-07-10T06:34:06.721-05:00Favorite Theorems: Extracting Ramsey Graphs<p><a href="https://blog.computationalcomplexity.org/2024/06/favorite-theorems-algebraic-circuits.html">June Edition</a></p><p>Two decades ago, I named the recently departed Luca Trevisan's <a href="https://doi.org/10.1145/502090.502099">paper</a> connecting extractors to pseudorandom generators as one of my <a href="https://blog.computationalcomplexity.org/2004/06/favorite-theorems-connections.html">favorite theorems</a> from 1995-2004. I'm dedicating this month's favorite theorem to him.</p><p>Suppose we have two independent sources with just a little bit of entropy each. Can I pull out a single random bit? This month's favorite theorem shows us how, with a nice application to constructing Ramsey graphs.</p><p></p><div style="text-align: center;"><a href="https://doi.org/10.1145/2897518.2897528">Explicit Two-Source Extractors and Resilient Functions</a></div><div style="text-align: center;">Eshan Chattopadhyay and David Zuckerman</div><p></p><p>More formally (feel free to skip this part) suppose we had two independent distributions U and V each with polylog min-entropy, which means for every string x of length n, the probability of choosing x from U and the probability of choosing x from V is at most \(2^{-(\log n)^c}\) for some c. 
There is a deterministic polytime function (which doesn't depend on U and V) such that f(x,y), with x and y chosen independently from U and V, will output 1 with probability \(1/2\pm\epsilon\) for an inverse-polynomially small \(\epsilon\).</p><p>Previous work required a linear amount of min-entropy for U and V. </p><p>As a corollary, we can use f to deterministically generate a Ramsey graph on n vertices with no cliques or independent sets of size \(2^{(\log\log n)^c}\) for a sufficiently large c. This is also an exponential improvement over previous constructions. Gil Cohen gave an <a href="https://doi.org/10.1145/2897518.2897530">independent construction</a> that doesn't go through extractors.</p><p>There have been several papers improving the bounds of Chattopadhyay and Zuckerman. In FOCS 2023 Xin Li <a href="https://doi.org/10.1109/FOCS57990.2023.00075">gave a construction</a> of extractors with \(O(\log n)\) min-entropy, the current state-of-the-art for extracting a single random bit with constant error, and Ramsey graphs with no cliques or independent sets of size \(\log^c n\) for some constant c.</p>https://blog.computationalcomplexity.org/2024/07/favorite-theorems-extracting-ramsey.htmlnoreply@blogger.com (Lance Fortnow)0
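The Chattopadhyay-Zuckerman construction itself is far too involved to sketch, but the flavor of two-source extraction shows up already in the classic Chor-Goldreich inner-product extractor, which works when the two min-entropies sum to more than n (the linear regime that CZ improve on). Here is a toy Python simulation; the source sizes and trial count are arbitrary choices for illustration.

```python
import random

def inner_product_bit(x, y):
    # Extractor output: inner product mod 2 of the two samples,
    # viewed as n-bit vectors.
    return bin(x & y).count("1") % 2

def empirical_bias(src_u, src_v, trials=20000):
    # Estimate Pr[f(X, Y) = 1] with X, Y drawn independently
    # from the two sources.
    ones = sum(inner_product_bit(random.choice(src_u), random.choice(src_v))
               for _ in range(trials))
    return ones / trials

random.seed(0)
n = 16
# Two independent flat sources: each uniform on its own random set of
# 2^10 of the n-bit strings, so each has min-entropy 10 (and 10+10 > 16).
U = random.sample(range(2 ** n), 2 ** 10)
V = random.sample(range(2 ** n), 2 ** 10)
print(abs(empirical_bias(U, V) - 0.5))  # small: the output bit is nearly unbiased
```

The Chor-Goldreich guarantee is that for any two flat sources with min-entropies \(k_1+k_2 > n\), the inner-product bit has bias at most \(2^{(n-k_1-k_2)/2}\); the whole difficulty that CZ overcame is making a single deterministic function work when each source has only polylog min-entropy.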