tag:blogger.com,1999:blog-3722233Sat, 03 Jun 2023 12:10:48 +0000typecastfocs metacommentsComputational ComplexityComputational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarchhttps://blog.computationalcomplexity.org/noreply@blogger.com (Lance Fortnow)Blogger3010125tag:blogger.com,1999:blog-3722233.post-7380998070167715843Sun, 28 May 2023 19:34:00 +00002023-05-28T17:47:08.856-05:00On Nov 10, 2014 the TV show Scorpion mentions Software that is like ChatGPT for Music<p><br /></p><p>Scorpion is a TV show that ran from 2014 to 2018. It involves a group of brilliant (to say the least) but socially awkward (to say the least) people who help the government battle threats and/or solve crimes. </p><p>The episode <i>Risky Business </i>aired on Nov 10, 2014 (episode 8). </p><p>In the episode someone wrote a program that (in today's terminology) scrapes the web for all pop songs that were hits for the last 50 years and creates hits that share those properties and hence will be popular. And it works! </p><p>1) The episode says that if the fans of group X find out that they didn't create their own music, but it's computer-generated, then sales will drop. This may be true but I am not quite sure WHY it's true. If I LIKE listening to a song, I will still like it even if I know it was computer-generated. </p><p>2) While they didn't go into any detail about what the program does, it SOUNDS a lot like ChatGPT to me. </p><p>3) Could ChatGPT do that now? Has it already? Lance happened to read an earlier version of this post and emailed me a link which says this is already happening to some extent. 
The link is <a href="https://www.nytimes.com/2023/04/19/arts/music/ai-drake-the-weeknd-fake.html?unlocked_article_code=EAbzgrrnYPlWWP8bMw7TdZYabKqbPRK84bx8yi2YwEHpQL7iuKUpJ4Qz7F-3STJMP9li11P1y0MRNGBRdHwJJtYjQS1-vDsLZpn6mAbWoPQmcevIRg665mw3n_e_OyjFTK2Cfe928mnnK5RCGYm4giIv5zNMqz5vp8FA0s3jW2wuD7je4PnTDMRBBBo1VD4hISgqnGogiyyYYpBXE4rTebEjqyFvtakYxQ-lkjgNfAVShci0Q60FIgfnPk4oP-b8l1kXST71XbBsi2GjeJI1EVRv5HFcFz3IrKhHd5AJnehEXoAwCK-6ycaaWNwD1yxUcTT1kF_sXBrs8jfqFNFUsCHhv-U5&smid=url-share">here</a> but might be behind a paywall. (Note- My spellchecker DOES think paywall is a word. Yeah! It also thinks that spellchecker is a word. Yeah!)</p><p>4) Is it impressive that the show's writers predicted this kind of technology way back in 2014? </p><p>5) In the real world would someone be murdered because they are going to reveal that group X's songs were not authentic? Either TV has far more murders than the real world OR on TV they just get caught more often. I would like to think that TV has far more murders (see <a href="https://www.dailymail.co.uk/news/article-2191990/Murder-capital-world-Quiet-seaside-town-Cabot-Cove-named-dangerous-place-Earth.html">here</a>). It certainly has far more <i>interesting</i> murders. </p>https://blog.computationalcomplexity.org/2023/05/on-nov-10-2014-tv-show-scorpion.htmlnoreply@blogger.com (gasarch)3tag:blogger.com,1999:blog-3722233.post-1455510985147072060Thu, 25 May 2023 13:17:00 +00002023-05-25T14:17:16.658-05:00Finding Primes Pseudodeterministically<p>In 2002, Agrawal, Kayal and Saxena showed that primality testing is in P, i.e., you can test for primes without using any randomness.</p><p>What if you want to find a prime? Specifically, given a number m in binary, can you find a prime p > m in time polynomial in the length of m? In 2009 I <a href="https://blog.computationalcomplexity.org/2009/08/finding-primes.html">posted</a> about a polymath project to find a deterministic algorithm for finding primes and the problem remains open today. 
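</p><p>For contrast, here is a minimal Python sketch (my own illustration, not code from any of the papers mentioned) of the standard randomized approach: sample candidates between m and 2m and test each with Miller-Rabin. The fixed witness set below makes the test deterministic for numbers up to about 3 × 10<sup>24</sup>.</p>

```python
import random

# These witnesses make Miller-Rabin deterministic for n < 3.3 * 10**24.
_WITNESSES = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)

def is_prime(n: int) -> bool:
    """Miller-Rabin primality test."""
    if n < 2:
        return False
    for p in _WITNESSES:
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in _WITNESSES:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime_above(m: int) -> int:
    """Sample in (m, 2m]; by the prime number theorem roughly a
    1/ln(m) fraction of candidates is prime, so the expected number
    of draws is O(log m) and the expected total time is polynomial."""
    while True:
        c = random.randrange(m + 1, 2 * m + 1)
        if is_prime(c):
            return c
```

<p>Bertrand's postulate guarantees a prime exists in this range, so the loop always terminates.</p><p>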
</p><p>Likely you could just try m+1, m+2, ... until you find a prime, but whether that is bounded by a polynomial number of steps is a big open question in number theory. You can choose random numbers between m and 2m and find one in expected polytime given the <a href="https://en.wikipedia.org/wiki/Prime_number_theorem">prime number theorem</a>. This algorithm will likely output a different prime every time you run it.</p><p>There's a <a href="https://eccc.weizmann.ac.il/report/2023/076/">new paper</a> by Lijie Chen, Zhenjian Lu, Igor Carboni Oliveira, Hanlin Ren and Rahul Santhanam that solves this problem pseudodeterministically for infinitely many inputs. This is a randomized polynomial-time algorithm B such that for infinitely many n, there is a specific prime p between 2<sup>n</sup> and 2<sup>n+1</sup> such that B(1<sup>n</sup>) outputs p with high probability. With high probability you will get the same prime every time you run the algorithm!</p><p>The proof uses a win-win kind of argument: if a certain algorithm fails to work you can use that to derandomize. Making that argument work requires bootstrapping on variants of previous pseudorandom generator and hitting set constructions. </p><p>Almost surely there is a deterministic polytime algorithm for finding primes. But for now the new result of Chen et al. is a nice step in that direction. </p>https://blog.computationalcomplexity.org/2023/05/finding-primes-pseudodeterministically.htmlnoreply@blogger.com (Lance Fortnow)6tag:blogger.com,1999:blog-3722233.post-2325300399606087215Sun, 21 May 2023 12:25:00 +00002023-05-21T14:37:51.877-05:00Logic and lack of Logic of Anti Vaxers<p>I wrote the post below the dotted line a long time ago but never got around to posting it. Now that </p><p><br /> WHO says COVID emergency is over.</p><p>(When I first saw it, I misread it as a question: </p><p> Who says COVID emergency is over? </p><p>) </p><p>I decided to post it (with one update that has an ADDED LATER on it). 
</p><p>If you didn't know, WHO is the World Health Organization. Actually, that is true whether or not you know it.</p><p>------------------------------------------------------------------------------------</p><p>Some of the reasons anti-vaxxers give are better than others. Some are consistent, some are not. </p><p>We list some of them and invite the commenters to list more. Our point is- which ones have something interesting to say? </p><p><br />I am PRO VAX but I often seek out intelligent opposing views on any of my opinions. Did I find any here? I leave that as an exercise for the reader. </p><p><br /></p><p>Reasons anti-vaxxers give or could give. </p><p>a) <i>FREEDOM! </i>That's not quite a reason. I am reminded of asking a neighbor why he flies a confederate flag and he said</p><p>FREEDOM!</p><p>I was hoping he would say either</p><p>To honor my great grandfather who died for the south in the civil war.</p><p>or</p><p>Because I don't like black people.</p><p>or</p><p>To honor those many people, black and white, who died in the civil war fighting for the south. </p><p>or</p><p>SOMETHING with content to it. <br /><br /></p><p>Any of those answers would tell me there was something to discuss. Just the word FREEDOM does not. </p><p>b) <i>Bill Gates put microchips in the vaccine so he can keep track of where you are</i>. Why? Bill Gates and Mark Zuckerberg can ALREADY keep track of where you are. I do wonder if this is a common anti-vax viewpoint or if it's VERY RARE and the media plays it up to make anti-vaxxers look stupid. Same with the notion that the vaccine makes you magnetic (that sounds cool!).</p><p>c) <i>I want to wait and see its effects since it was rushed out</i>. That might have made sense X months ago, but by now it's very well established.</p><p>d) <i>I haven't gotten around to it yet.</i> People in this group are a very good target for mandates, or at least for being talked into doing it. 
It may be more fun to talk about the radical anti-vaxxers; however, if the <i>haven't gotten</i> <i>around to it </i>group all got vaccinated we would be closer to herd immunity.</p><p>e) <i>I've got COVID so I am immune. </i>This is true for Y months (I am not sure what Y is); however, the person I know who told me this had COVID about 3 years ago. But at least it's an attempt at an argument. </p><p>f) <i>The medical community has been bad to (i) me, (ii) my people, (iii) some other people that I care about, hence I do not trust them. </i>The premise is actually true, so this is an attempt at an intelligent argument. </p><p>g) <i>Big Pharma is making a mint on this and ripping us off.</i> This is a view not of the extreme left but of the fringe left. Robert F. Kennedy Jr. is a big voice here, see <a href="https://www.mcgill.ca/oss/article/covid-19-health-pseudoscience/anti-vaccine-propaganda-robert-f-kennedy-jr">here</a>. On a right-wing relative's urging I listened to an hour-long interview with him. Mostly a rambling incoherent argument. Much of it was anti-business, which got no pushback from the usually-pro-business republicans. Are there any mainstream democrats who are anti-vax? I do not think so. Also, is it true that Big Pharma is making a mint? Ripping us off? Ripping someone off? This could be an interesting question if asked more coherently. But it's not a good reason to be anti-vax. </p><p>h) Trump said it was a hoax to get him out of office. Taking a vaccine now would be to admit it's not a hoax. Such people should take Karl Popper's view of science: Conjecture that, as Ted Cruz said, if Biden wins then COVID will go away since it was a hoax made up to get Biden to win. If Biden wins and COVID does not go away then you must reject your conjecture. (ADDED LATER: Odd point- Trump has sometimes said, though not lately, that people should get the Vax that HE invented. If we called it a Trumpzine then would people take it? 
As of now Trump and DeathSantos are trying to out-anti-vax each other.) </p><p>i) The first few days after you take it, it hurts and you feel tired or get chills or other reactions. This is actually true, though one has to balance that against getting COVID.</p><p>j) The Ford Administration really messed up on the Swine Flu, so why trust the government now? This is true- The Ford Admin DID mess up. Oddly enough, I have never (or perhaps rarely) heard this as a reason. This puzzles me- Why claim Bill Gates microchip stuff (which is absurd) or Vax causes autism (debunked) and NOT use arguments that are more reasonable?</p><p>k) Vaccines cause autism. That study was debunked a long time ago, but it still lives on.</p><p>l) Kamala Harris said she was hesitant since it was rushed. I've only heard this one as Republicans try to blame her for Vaccine Hesitancy. The problem with that argument is that you are claiming that the anti-vaxxers, who are mostly republicans, are listening to Kamala Harris for advice.</p><p>m) If many people get Vaxed and COVID goes away then Biden will look good, and we can't have that.</p><p>n) COVID was invented by the Chinese to cripple us. Uh- even if true (which it is not, though the lab leak hypothesis might be), that's a reason to TAKE it to thwart their plans. For that matter, Trump could have been Trumpian and PRO-MASK and PRO-VAX by blaming the Chinese and George Soros and the Deep State and Biden and Obama and of course Hillary, but using this to say WEAR THE MASK! TAKE THE VAX! to DEFEAT these ENEMIES who want to destroy America. Would that have worked to slow the spread of COVID? If so then would he have won re-election? 
</p>https://blog.computationalcomplexity.org/2023/05/logic-and-lack-of-logic-of-anti-vaxers.htmlnoreply@blogger.com (gasarch)20tag:blogger.com,1999:blog-3722233.post-5530482914168827134Thu, 18 May 2023 12:54:00 +00002023-05-18T07:55:08.516-05:00Community Guidelines that Ignore History<p>Last week I got the following email from Blogger:</p><blockquote><p>As you may know, our Community Guidelines (<a href="https://blogger.com/go/contentpolicy">https://blogger.com/go/contentpolicy</a>) describe the boundaries for what we allow-- and don't allow-- on Blogger. Your post titled "Mysteries of the Seventies" was flagged to us for review. This post was put behind a warning for readers because it contains sensitive content; the post is visible at <a href="http://blog.computationalcomplexity.org/2005/06/mysteries-of-seventies.html">http://blog.computationalcomplexity.org/2005/06/mysteries-of-seventies.html</a>. Your blog readers must acknowledge the warning before being able to read the post/blog.</p></blockquote><p>So let me describe what happened carefully so this post doesn't also get flagged. First a history lesson. In 1972, five men were arrested for breaking into the Democratic National Committee (DNC) headquarters at the Watergate complex in Washington, D.C. They were found with bugging devices and cameras, attempting to wiretap the DNC offices. One of them had ties to the Committee to Re-Elect the President (CREEP), which supported Richard Nixon's re-election campaign.</p><p>Two Washington Post reporters, Bob Woodward and Carl Bernstein, broke the story of the coverup that would eventually lead to Nixon's resignation in 1974. I highly recommend both the <a href="https://amzn.to/4561z5n">book</a> and the <a href="https://www.imdb.com/title/tt0074119/">film</a> <i>All the President's Men</i> about their investigation.<b style="font-style: italic;"> </b>Woodward and Bernstein got help from a then anonymous source. 
The identity of the source was subject to huge speculation and remained a mystery for three decades. In 2005 <a href="https://en.wikipedia.org/wiki/Mark_Felt">Mark Felt</a>, associate director of the FBI, outed himself as the source.</p><p>After the announcement I wrote <a href="https://blog.computationalcomplexity.org/2005/06/mysteries-of-seventies.html">a short post</a> (would have been a tweet today) mentioning that while we learned the solution of one mystery from the 1970s, <a href="https://www.claymath.org/millennium-problems/p-vs-np-problem">another</a> remained.</p><p>So why was this post flagged as sensitive eighteen years later? Because I used the pseudonym the Washington Post reporters gave to Mark Felt, a name taken from a popular adult movie at the time. </p>https://blog.computationalcomplexity.org/2023/05/community-guidelines-that-ignore-history.htmlnoreply@blogger.com (Lance Fortnow)0tag:blogger.com,1999:blog-3722233.post-6967500125921426284Sun, 14 May 2023 21:53:00 +00002023-05-17T10:42:32.726-05:00Take a number and map it to the number of letters in its name<p>Let f: N--> N map a number to the number of letters in its name in English (we will consider other languages later).</p><p>So for example 14 is fourteen, so it maps to 8. Let's iterate</p><p>14 --> 8 --> 5 --> 4 --> 4 --> 4 ...</p><p>It is known (perhaps well-known) that, given any starting point, the sequence eventually is all 4's. </p><p>I recently got an email asking if this was PROVEN. I could not find a proof on the web so I did it myself. My writeup of a proof is <a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/four.pdf">here</a>. </p><p>1) So now there IS a proof on the web. It may be hard to find. Does this problem have a well-known name that I should add to this blog post so that it is easier to find?</p><p>2) My next thought was</p><p>For which functions f: N-->N is it the case that every sequence a, f(a), f(f(a)), ... is eventually constant? 
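</p><p>Back to the concrete English-language f from above: the convergence to 4 is easy to check by machine. Below is a short Python sketch (my own, not from the writeup); the spelling rules are hand-rolled for 0 through 999, American style without "and", and hyphens and spaces are not counted as letters.</p>

```python
ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty",
        "sixty", "seventy", "eighty", "ninety"]

def spell(n: int) -> str:
    """English name of 0 <= n <= 999, letters only (no spaces/hyphens)."""
    if n < 20:
        return ONES[n]
    if n < 100:
        return TENS[n // 10] + (ONES[n % 10] if n % 10 else "")
    return ONES[n // 100] + "hundred" + (spell(n % 100) if n % 100 else "")

def f(n: int) -> int:
    """Map n to the number of letters in its English name."""
    return len(spell(n))

def trajectory(n: int) -> list:
    """Iterate f until a fixed point, e.g. 14 -> 8 ('fourteen') -> 5 -> 4."""
    seq = [n]
    while f(seq[-1]) != seq[-1]:
        seq.append(f(seq[-1]))
    return seq
```

<p>A loop over 1 through 999 confirms that every trajectory ends at 4.</p><p>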
I leave that for you to ponder.</p><p>3) The asker of the question is of a more real-world bent and emailed me what happens in other languages:</p><p>Spanish has one number whose name is its own length: 5 is cinco. I leave it to the reader to show that in Spanish the sequence always becomes all 5's. (NOTE- a commenter says NO, you can have a 4-6 cycle. Cool!) </p><p>French seems to have no such numbers, so it cannot have this behavior.</p><p>Norwegian has three such numbers: 2 is to, 3 is tre, 4 is fire. So I suspect every such sequence is eventually constant-2, constant-3, or constant-4.</p><p>See this article <a href="https://mathlair.allfunandgames.ca/languages.php">here</a> for more on the numbers 1-10 in several languages.</p><p>OBLIGATORY ChatGPT note: Lance saw this post in our blog account and was curious what ChatGPT would do with it. For what he got see <a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/mapnum.txt">here</a>. You can decide if a program can take over the blog anytime soon. </p>https://blog.computationalcomplexity.org/2023/05/take-number-and-map-it-to-number-of.htmlnoreply@blogger.com (gasarch)13tag:blogger.com,1999:blog-3722233.post-7739714885756524623Thu, 11 May 2023 13:38:00 +00002023-05-11T08:38:46.804-05:00Winter is Coming<p>I fear we are heading to a computer science winter. Now why would I say that when CS enrollments are at record levels and AI is driving incredible excitement in the field? Back in 2016 I <a href="https://blog.computationalcomplexity.org/2016/04/its-all-about-jobs.html">wrote</a></p><blockquote><p>We won’t see a decline in demand for computer science students until we automate ourselves out of a job.</p></blockquote><p>With advances in machine learning, especially generative AI, you can now use AI tools with little to no knowledge of computer science and mathematics. You can do quite a bit more with just a basic knowledge of Python programming and APIs. 
And tools like GitHub Copilot make programming that much easier.</p><p>In 2005, during the last period when computer science saw low enrollments, I <a href="https://blog.computationalcomplexity.org/2005/07/computer-science-in-high-school.html">suggested</a> that computing had become a commodity and we had lost the excitement in the field, leading to a decrease in interest and students. It didn't help that potential students had a (<a href="https://cra.org/govaffairs/blog/2006/03/ny-times-editorial-outsourcing-isnt-as-bad-as-you-think/">mostly mistaken</a>) perception that all the computing jobs were being outsourced to India.</p><p>We may soon see a time when computing loses its excitement again if everyone can just interact in English (or any other language). Students might now have a perception that computing jobs will be outsourced to AI. The recent tech layoffs don't help. Even the ones interested in computing might focus more on the various low-cost certificate programs instead of a full CS degree.</p><p>What can we do? We need to reframe our degrees or create new ones to recognize the move to data-oriented computing. We need to embrace teaching responsible AI but without fighting the future. </p><p>CS is in a great place right now but we have to continually adjust to ensure our future relevance or we may no longer have it.</p>https://blog.computationalcomplexity.org/2023/05/winter-is-coming.htmlnoreply@blogger.com (Lance Fortnow)7tag:blogger.com,1999:blog-3722233.post-7835545901904295299Mon, 08 May 2023 23:06:00 +00002023-05-08T18:06:29.696-05:00Other Ramseys <p> I often Google Ramsey stuff to find something. I often end up back on my own collection of Ramsey theory papers. But I sometimes find OTHER uses of the name <i>Ramsey</i>. </p><p>To tell these Ramseys apart we will call the math Ramsey guy <i>Frank Ramsey </i>since that was his name. </p><p>1) Frank's brother Arthur Michael Ramsey (usually just Michael Ramsey). 
He was the Archbishop of Canterbury from 1961 until 1974. He was ahead of his time in being ecumenical and supporting women clergy. His Wikipedia page is <a href="https://en.wikipedia.org/wiki/Michael_Ramsey">here</a>. </p><p>2) The Ramsey Effect. Aaron Ramsey is a football player (what Americans call soccer). There were four times when he scored a goal and soon after an important person died. This was dubbed <i>The Ramsey Effect. </i>This is of course silly, and if he asked Frank about the probabilities (unlikely- Frank died in 1930 and Aaron was born in 1990) I am sure Frank would tell him that four is too small a number to make anything out of this. The Ramsey Effect is discussed <a href="https://knowyourmeme.com/memes/the-ramsey-effect">here</a>. There is no Wikipedia page about it, and I found it by accident so, as the kids say, it's NOT a thing. </p><p>3) JonBenét Ramsey. A girl who was murdered when she was six. Case still unsolved. See the Wikipedia page on this <a href="https://en.wikipedia.org/wiki/Killing_of_JonBen%C3%A9t_Ramsey">here</a>.</p><p>4) First name Ramsey: <a href="https://en.wikipedia.org/wiki/Ramsey_(given_name)">here</a>. The only one I had heard of is Ramsey Clark.</p><p>5) Last name Ramsey: <a href="https://en.wikipedia.org/wiki/Ramsey_(surname)">here</a>. Lots of people! Most I had not heard of. 
</p><p>(With regard to 4 and 5: there are so many famous people you can't have heard of all of them)</p><p>6) For more Ramsey Stuff see <a href="https://en.wikipedia.org/wiki/Ramsey">here</a></p><p><br /></p><p><br /></p>https://blog.computationalcomplexity.org/2023/05/other-ramseys.htmlnoreply@blogger.com (gasarch)1tag:blogger.com,1999:blog-3722233.post-7182332023085822484Thu, 04 May 2023 11:12:00 +00002023-05-04T09:49:55.210-05:00Breaking Ground in Isomorphism Testing: A Leap Forward for a Bottleneck Case of Group Isomorphism<p><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"><i>Guest post by Josh Grochow and </i></span><span style="font-family: Arial;"><span style="font-size: 14.6667px; white-space: pre-wrap;"><i>Youming Qiao</i></span></span></p><p><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">There has, quietly, been somewhat of a breakthrough in isomorphism testing. No, not as big as Babai's 2016 </span><a href="https://arxiv.org/abs/1512.03547" style="text-decoration-line: none;"><span style="color: #1155cc; font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; text-decoration-line: underline; text-decoration-skip-ink: none; vertical-align: baseline; white-space: pre-wrap;">Graph Isomorphism in Quasipolynomial Time</span></a><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">. But a first foothold in climbing a wall for which no one had gotten much off the ground before. 
The result, due to </span><a href="https://arxiv.org/abs/2303.15412" style="text-decoration-line: none;"><span style="color: #1155cc; font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; text-decoration-line: underline; text-decoration-skip-ink: none; vertical-align: baseline; white-space: pre-wrap;">Xiaorui Sun in this year's STOC</span></a><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">, is an algorithm for testing isomorphism of a certain class of groups - p-groups of class 2 and exponent p if you must know, but we'll get to that - in time \(n^{O(\log^{5/6} n)}\) where n is the order of the group. To understand why we're excited about this we have to tell a bit of a story. </span></p><span id="docs-internal-guid-259ecb6b-7fff-b1a3-e5ab-283b928791b9"><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">In the 1970s, when Graph Isomorphism was still a mystery, people also thought more widely about isomorphism testing of other combinatorial and algebraic structures. For finite groups of order n, Robert Tarjan realized that there is an \(n^{\log n+O(1)}\)-time algorithm, simply because a group of order n has a generating set of size \(\log n\). 
This observation was recorded by Gary Miller in a </span><a href="https://doi.org/10.1145/800133.804331" style="text-decoration-line: none;"><span style="color: #1155cc; font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; text-decoration-line: underline; text-decoration-skip-ink: none; vertical-align: baseline; white-space: pre-wrap;">paper in STOC'78</span></a><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">, and independently realized by </span><a href="https://www.sciencedirect.com/science/article/abs/pii/B9780080129754500114" style="text-decoration-line: none;"><span style="color: #1155cc; font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; text-decoration-line: underline; text-decoration-skip-ink: none; vertical-align: baseline; white-space: pre-wrap;">Felsch and Neubüser</span></a><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">. 
A natural question is then whether Group Isomorphism can be solved in time poly(n) where n is the group order.</span></p><br /><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">Not only is this question natural from the perspective of studying groups computationally, it is also natural from the perspective of </span><span style="font-family: Arial; font-size: 11pt; font-style: italic; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">Graph</span><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"> Isomorphism. For Group Isomorphism reduces to Graph Isomorphism in polynomial-time (as does the isomorphism problem for any finite algebraic or relational structure, see </span><a href="https://doi.org/10.1007/BF02104746" style="text-decoration-line: none;"><span style="color: #1155cc; font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; text-decoration-line: underline; text-decoration-skip-ink: none; vertical-align: baseline; white-space: pre-wrap;">Zemlyachenko, Korneenko, & Tyshkevich</span></a><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">). While this has been known for a long time, Babai’s result on Graph Isomorphism brings the running times quite close: \(n^{O(\log^2 n)}\) for graphs, and \(n^{O(\log n)}\) for groups. 
So not only does Group Isomorphism stand in the way of getting Graph Isomorphism into P, but in our current state of knowledge, it even stands in the way of shaving off more than a single log in the exponent of the runtime.</span></p><br /><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">Since the general Group Isomorphism problem seems difficult, attention turned to special classes of groups. It was not hard to see that isomorphism of Abelian groups could be computed in polynomial time. However, a group class that is just “one step away” from Abelian - groups G where, when you mod out by the center Z(G), what’s left is Abelian - turned out to be difficult. Such groups are called class-2 nilpotent, and in one sense, their group-theoretic structure is relatively straightforward: both G/Z(G) and Z(G) are Abelian. Yet, to devise an efficient isomorphism testing procedure turned out to be extremely difficult (see e.g. 
</span><a href="https://doi.org/10.1016/0022-0000(91)90012-T" style="text-decoration-line: none;"><span style="color: #1155cc; font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; text-decoration-line: underline; text-decoration-skip-ink: none; vertical-align: baseline; white-space: pre-wrap;">Garzon-Zalcstein</span></a><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">, </span><a href="https://doi.org/10.1016/j.tcs.2015.05.036" style="text-decoration-line: none;"><span style="color: #1155cc; font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; text-decoration-line: underline; text-decoration-skip-ink: none; vertical-align: baseline; white-space: pre-wrap;">Rosenbaum-Wagner</span></a><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">, </span><a href="https://www.math.auckland.ac.nz/~obrien/research/isom.pdf" style="text-decoration-line: none;"><span style="color: #1155cc; font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; text-decoration-line: underline; text-decoration-skip-ink: none; vertical-align: baseline; white-space: pre-wrap;">O’Brien</span></a><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">, </span><a href="https://www.sciencedirect.com/science/article/pii/S0021869309004463" style="text-decoration-line: none;"><span style="color: #1155cc; font-family: Arial; font-size: 11pt; 
font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; text-decoration-line: underline; text-decoration-skip-ink: none; vertical-align: baseline; white-space: pre-wrap;">Wilson</span></a><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">), to the point that this is usually considered as a bottleneck for putting Group Isomorphism in P. </span></p><br /><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">Among class-2 nilpotent groups, the “key case” to resolve is widely believed, </span><a href="https://cstheory.stackexchange.com/a/42551/129" style="text-decoration-line: none;"><span style="color: #1155cc; font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; text-decoration-line: underline; text-decoration-skip-ink: none; vertical-align: baseline; white-space: pre-wrap;">for several reasons</span></a><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">, to be p-groups of class 2 and exponent p. In such groups, both the center Z(G) and quotient G/Z(G) are elementary abelian, i.e., of the form \((Z_p)^d\). Despite having an even simpler group-theoretic structure, this group class still turns out to be difficult! 
For a long time, the asymptotic growth of the exponent of the runtime for solving this restricted problem has not improved over the \(n^{\log n+O(1)}\)-time algorithm, which works for all groups.<sup>1</sup></span></p><br /><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">Xiaorui Sun’s result represents the first substantial improvement, cracking open this decades-old quest. His algorithm runs in time \(n^{O(\log^{5/6} n)}\), and its techniques are indeed novel. The starting point of this algorithm is to consider the following equivalent problem in (multi)linear algebra: let \(f, g:Z_p^d \times Z_p^d \rightarrow Z_p^e\) be two skew-symmetric bilinear maps. Do there exist change of bases A in \(GL(d, p)\) and B in \(GL(e, p)\), such that for all \(u, v\) in \(Z_p^d\), \(f(A(u), A(v))=B(g(u, v))\)?</span></p><br /><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><a href="https://www.jstor.org/stable/1989886" style="text-decoration-line: none;"><span style="color: #1155cc; font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; text-decoration-line: underline; text-decoration-skip-ink: none; vertical-align: baseline; white-space: pre-wrap;">Baer’s Correspondence</span></a><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"> sets up an equivalence of categories between p-groups of class 2 and exponent p, and skew-symmetric bilinear maps over \(Z_p\). This viewpoint allows Xiaorui to use multilinear algebra to study the structure of these bilinear maps. 
He also crucially depends on a result of </span><a href="https://doi.org/10.1137/18M1165682" style="text-decoration-line: none;"><span style="color: #1155cc; font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; text-decoration-line: underline; text-decoration-skip-ink: none; vertical-align: baseline; white-space: pre-wrap;">Ivanyos and Qiao</span></a><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">, which built on </span><a href="https://doi.org/10.1016/j.jalgebra.2009.07.029" style="text-decoration-line: none;"><span style="color: #1155cc; font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; text-decoration-line: underline; text-decoration-skip-ink: none; vertical-align: baseline; white-space: pre-wrap;">Wilson’s use</span></a><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"> of involutive algebras in this context. He also uses the individualization-and-refinement technique (but for matrix spaces, not graphs!), a characterization of spaces of matrices of low rank, and reducing a tensor to a “semi-canonical” form part of which is somewhat reminiscent of the Tucker decomposition.</span></p><br /><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">All this results in an algorithm which solves the above problem on bilinear maps in time \(p^{(d+e)^{1.8} \log p}\). 
For groups of order \(p^n\) with \(\log_p(n)\) larger than \(\log^5 p\), Baer’s Correspondence then says that this algorithm does it; when \(\log_p n\) is smaller than \(\log^5 p\), he can fall back on the generator-enumerator algorithm, since the number of generators is at most \(\log_p n\).</span></p><br /><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">For us, who have been working on Group Isomorphism for more than a decade, Xiaorui’s result represents an exciting development on this classic algorithmic problem, and we look forward to seeing more progress in this direction in the near future. </span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline;"></span></span></p><hr /><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline;"><sup>1</sup></span><a href="https://doi.org/10.1016/j.tcs.2015.05.036" style="font-family: "Times New Roman"; font-size: medium; text-decoration-line: none; white-space: normal;"><span style="color: #1155cc; font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal;
font-variant-numeric: normal; text-decoration-line: underline; text-decoration-skip-ink: none; vertical-align: baseline; white-space: pre-wrap;">Rosenbaum & Wagner</span></a><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline;"> improved the exponent to \(\frac{1}{2}\log_p(n) + O(1)\), and later improved to \(\frac{1}{4}\log_p(n) + O(1)\) for all groups, see p.5 of </span><a href="https://arxiv.org/abs/1609.08253" style="font-family: "Times New Roman"; font-size: medium; text-decoration-line: none; white-space: normal;"><span style="color: #1155cc; font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; text-decoration-line: underline; text-decoration-skip-ink: none; vertical-align: baseline; white-space: pre-wrap;">Le Gall & Rosenbaum</span></a><span style="font-family: Arial; font-size: 11pt; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline;">. In 2014, at a conference on Groups, Computation, and Geometry organized by Wilson, Brooksbank, Hulpke, Kantor, and Penttila, it was concluded that modern practical methods, such as those used in GAP and MAGMA, still take \(n^{O(\log n)}\) steps in the worst case.</span></span><p></p></span>
See <a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/alpoge-primes.pdf">here</a> or <a href="https://www.tandfonline.com/doi/abs/10.4169/amer.math.monthly.122.8.784">here</a>.</p><p><i>Squares in arithmetic progression and infinitely many primes </i>by Granville. See <a href="https://arxiv.org/abs/1708.06951">here</a> or <a href="https://www.tandfonline.com/doi/abs/10.4169/amer.math.monthly.124.10.951">here</a>.</p><p><i>Fermat's last theorem implies the infinitude of primes</i> by Elsholtz. See <a href="https://arxiv.org/abs/2009.06722">here</a> or <a href="https://www.tandfonline.com/doi/full/10.1080/00029890.2021.1856544">here</a>.</p><p><i>Fermat's last theorem, Schur's theorem (in Ramsey Theory), and the infinitude of the primes </i>by Gasarch. See <a href="https://arxiv.org/abs/2302.04755">here</a> and <a href="https://www.sciencedirect.com/science/article/pii/S0012365X23000225?via%3Dihub">here</a>.</p><p>(I included the arxiv version and the doi pointer.)</p><p>We also note that</p><p>1) Mestrovic has collected up 183 proofs that the primes are infinite, see <a href="https://arxiv.org/pdf/1202.3670.pdf">here</a>. Some of them are similar, so if you mod out by similarity you would get fewer proofs. How many you get depends on your criteria for similarity. This comment applies to other theorems that have many proofs. </p><p>2) Quanta recently published an article about the fact that there are so many proofs, though highlighting the four mentioned above. See <a href="https://www.quantamagazine.org/why-mathematicians-re-prove-what-they-already-know-20230426/">here</a>. (This might be behind a paywall.) </p><p>This raises the obvious question:</p><p>Why are there so many proofs that the primes are infinite? </p><p>Some thoughts</p><p>1) Which other theorems have many proofs?</p><p>a) <i>The Pythagorean Theorem</i>.
See <a href="https://link.springer.com/article/10.1057/jt.2009.16#:~:text=There%20are%20well%20over%20371,the%20United%20States%20James%20A.">here</a> for the claim that there are 371 proofs. There is a recent claim of a proof using trigonometry <a href="https://www.youtube.com/watch?v=p6j2nZKwf20">here</a> (this was thought to be impossible since it would involve a circular argument). </p><p>b) According to Wikipedia (see <a href="https://en.wikipedia.org/wiki/Quadratic_reciprocity">here</a>) <i>The law of quadratic reciprocity</i> has 240 proofs. This paper <a href="https://egrove.olemiss.edu/cgi/viewcontent.cgi?article=2539&context=etd#:~:text=Having%20a%20total%20of%20246,laws%20in%20the%20natural%20sciences.">here</a> has some of them. That paper also shows </p><p><i>QR IMPLIES primes infinite</i>.</p><p> Actually more: <i>QR IMPLIES primes \(\equiv 4 \pmod 5\) are infinite.</i></p><p>c) \(\sqrt 2\) <i>is irrational</i> has many proofs. I can't find a clean reference that states there are many proofs---if you know one, leave a comment. Wikipedia (see <a href="https://en.wikipedia.org/wiki/Square_root_of_2#:~:text=Geometrically%2C%20the%20square%20root%20of,number%20known%20to%20be%20irrational.">here</a>) has five proofs, though there are many more. </p><p>d) There is a mathoverflow post about theorems with many proofs <a href="https://mathoverflow.net/questions/401493/theorems-with-many-distinct-proofs">here</a>. I had thought only easy theorems had many proofs; however, there are several hard ones on this list. </p><p>2) Primes are so basic that many parts of math can be used to prove they are infinite.</p>
We look at those four papers: </p><p>a) Alpoge showed <i>Van Der Waerden's theorem and Unique factorization</i> IMPLIES <i>primes infinite.</i> Alpoge didn't use this as a springboard to other questions, but is amused that VDW can be used to prove primes infinite. I will note that the proof made me realize (a) the proof of Unique Factorization does NOT use that primes are infinite, and (b) ANY integral domain with Unique Factorization has an infinite number of primes. </p><p>b) Granville showed <i>VDW's Theorem and also a result of Fermat that there cannot be four squares</i> <i>in arithmetic progression </i>IMPLIES <i>primes infinite</i>. He then uses this as a springboard to talk about other interactions between Ramsey Theory and Number Theory. Let Q(N) be the max number of squares in an arithmetic sequence of length N. Find upper and lower asymptotic bounds on Q(N). Using Szemeredi's theorem (which is part of Ramsey theory), Szemeredi himself showed that for all \(\delta>0\), \(Q(N) < \delta N\). Granville's paper shows how to get this result from a corollary to Faltings' theorem. He also notes that better is known: \(Q(N) < N^{3/5 + \epsilon}\). </p><p>c) Elsholtz showed <i>Schur's theorem (for all c there is an S=S(c) such that for all c-colorings of {1,...,S}</i> <i>there exists x,y,z the same color such that x+y=z)</i> and <i>FLT (the n=3 case or n=4 case) </i>IMPLIES primes infinite. He then looks at various INFINITE Ramsey Theorems that imply the primes are infinite.</p><p>d) Gasarch's proof is identical to Elsholtz's. He then looks at (1) for domains with a finite number of primes, what goes wrong? (2) when does the proof apply to other integral domains? The last question involves looking at variants of FLT, some of which are open questions. </p><p>4) Gasarch also wondered about the following (though it's not in his paper).
The four papers above, and also other proofs that the primes are infinite, are of the form </p><p><i>A </i>IMPLIES<i> Primes are infinite</i></p><p>Are we really using A? How to pin this down? Find a logical system L such that </p><p>1) From L one can show <i>A </i>IMPLIES<i> Primes are infinite</i></p><p>2) From L one CANNOT prove <i>Primes are infinite. </i>(You may need some hardness assumption.)</p><p>One can also examine this kind of question for other theorems, like \(\sqrt 2\) is irrational. </p><p>I have shown this to several people and am told it's not doable. Oh well. </p><p>I had a prior blog on this, after I saw Alpoge's proof, see <a href="https://blog.computationalcomplexity.org/2017/11/van-der-waerdens-theorem-implies.html#comment-form">here</a>.</p><p>ADDED LATER: a commenter left a link to a MathOverflow page that has information on the reverse mathematics of Euclid's theorem that the primes are infinite. The link is <a href="https://mathoverflow.net/questions/319686/reverse-mathematics-of-euclids-theorem">here</a>.</p><p><br /></p><p>5) USUALLY mathematicians want to (or should want to) find EASIER proofs of HARD theorems. </p><p>For some theorems with many proofs, such as the infinitude of the primes, QR, and the irrationality of \(\sqrt 2\), mathematicians want to find HARD proofs of EASY theorems.
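</p><p>Euclid's own argument is easy to run as a program: given any finite list of primes, every prime factor of (their product)+1 is a new prime, since the product plus one leaves remainder 1 when divided by each prime on the list. A quick sketch (my illustration, taking the smallest prime factor at each step):</p>

```python
def least_prime_factor(m):
    """Smallest prime factor of m >= 2, by trial division."""
    i = 2
    while i * i <= m:
        if m % i == 0:
            return i
        i += 1
    return m

def euclid_new_prime(primes):
    """Given a nonempty finite list of primes, return a prime not on the
    list: any prime factor of (product + 1) works, since product + 1 has
    remainder 1 modulo every prime on the list."""
    prod = 1
    for q in primes:
        prod *= q
    return least_prime_factor(prod + 1)

primes = [2]
for _ in range(4):
    primes.append(euclid_new_prime(primes))
print(primes)  # [2, 3, 7, 43, 13]
```
<p>Always taking the smallest prime factor, as above, generates the Euclid-Mullin sequence.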
</p><p><br /></p><p><br /></p><p><br /></p><p><br /></p>https://blog.computationalcomplexity.org/2023/05/there-are-infinite-number-of-proofs.htmlnoreply@blogger.com (gasarch)9tag:blogger.com,1999:blog-3722233.post-8797517964009295640Wed, 26 Apr 2023 20:59:00 +00002023-04-26T15:59:10.949-05:00Comic Book Alignment<p>Talk about AI "alignment" makes me think of a time in the 1950s that a group of companies decided to</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOJyHIQsK2gmTZrReKtomoONYxMochTlBFA2aG4vRoMaixDHYVo34nRw7jK6zJLd3h2ur4Q5siUgsCqZ9luhIooNr43H-KFP7Ui-1TmN14ZZTBSMtD0BhXOndA3lPXGr9rL_J-nu0YihbC9V7-qiqMn8yxUnIqplSFfkMtKOFBF2Y9L9nO0A/s209/Approved_by_the_Comics_Code_Authority.gif" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="209" data-original-width="172" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOJyHIQsK2gmTZrReKtomoONYxMochTlBFA2aG4vRoMaixDHYVo34nRw7jK6zJLd3h2ur4Q5siUgsCqZ9luhIooNr43H-KFP7Ui-1TmN14ZZTBSMtD0BhXOndA3lPXGr9rL_J-nu0YihbC9V7-qiqMn8yxUnIqplSFfkMtKOFBF2Y9L9nO0A/w165-h200/Approved_by_the_Comics_Code_Authority.gif" width="165" /></a></div>create an industry organization to self-govern their work to make sure that they were following the values of the time and avoid government oversight. Of course, I'm talking about the <a href="https://en.wikipedia.org/wiki/Comics_Code_Authority">Comics Code Authority</a> (CCA). <p></p><p>Fueled by psychiatrist Fredric Wertham's book <a href="https://en.wikipedia.org/wiki/Seduction_of_the_Innocent">Seduction of the Innocent</a> and a series of U.S. Congressional hearings, the comic publishers realized they needed to police themselves and formed a trade group, the Comics Magazine Association of America (CMAA). 
The CMAA created the Comics Code Authority (CCA) in 1954 to enforce a code of content guidelines that comic book publishers would adhere to. The Comics Code prohibited explicit violence, sexual content, and other adult themes in comic books, as well as requiring a "positive portrayal" of authority figures and institutions. The CCA seal, which was a small stamp indicating that a comic book had been reviewed and approved by the organization, became a requirement for distribution by most newsstands and retailers, pushing many publishers to follow the code.</p><p>I started reading comic books in the 1970s with the code in full swing. It was not a golden time for comic books, with mostly bland, straightforward stories, and I gave it up as I went into high school. In college in the '80s, a friend brought me back into comics with some series, like <a href="https://en.wikipedia.org/wiki/Watchmen">Watchmen</a>, having given up the code and the seal. I started reading comics voraciously, so much that I had to go cold turkey in grad school so I could focus on research. The code itself was abandoned in 2011 after even Archie Comics gave up using the seal.</p><p>There's not a direct parallel between comic book writers and large language models, but the principle is the same. If you try to enforce a collection of values, you will get blander, less interesting output. I'm not saying that all alignment is a bad idea, but that you need to realize you will lose something when you do.</p>https://blog.computationalcomplexity.org/2023/04/comic-book-alignment.htmlnoreply@blogger.com (Lance Fortnow)5tag:blogger.com,1999:blog-3722233.post-98191395906298011Sun, 23 Apr 2023 20:05:00 +00002023-04-23T15:05:59.720-05:00Thoughts on Gordon Moore<p> Gordon Moore passed away on March 24, 2023. He was 94 years old. </p><p>He is best known for the article </p><p><br /></p><p><i>Cramming more components onto integrated circuits.
</i></p><p>It appeared in the magazine Electronics (it is now defunct), Volume 38, No. 8, April 19, 1965. Do you need to track it down in the basement of your library? No. It's <a href="https://hasler.ece.gatech.edu/Published_papers/Technology_overview/gordon_moore_1965_article.pdf">here</a> and <a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/gmoore.pdf">here</a>. I wonder if Moore would have predicted that his article would be available easily over 50 years later. Or is it? Link rot is a serious problem so you might want to download it to your local files. Note that the first link is some sort of official version and the second is my local version. Not clear which link will rot first. The article also has an addendum, an interview with Moore that was done later.</p><p>In the article Moore said that the number of components-per-circuit (I think that means chip) would double every year. Moore credits Dave House with modifying it to `doubling every 18 months' and Carver Mead with calling it `Moore's Law'. Later it came to be quoted as computer SPEED would double every 18 months. We will take this to be Moore's Law in this blog post. </p><p>Is Moore's law dead? Browsing Google, the answer seems to be that it is slowing down but not dead yet. (IDEA for a comedy sketch: Redo the Monty Python Dead Parrot sketch about the death of Moore's law.) </p><p>If Moore had 1 penny in April 1965 and it doubled every 18 months then how rich would he be now? How rich was he in April 2022? Compare the two numbers.
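</p><p>For the curious, here is the arithmetic, taking "doubling every 18 months" literally and (an assumption on my part) counting only completed 18-month periods:</p>

```python
from datetime import date

def doubled_penny_dollars(start, end, period_months=18):
    """Dollar value of one cent that doubles every period_months,
    counting only whole periods between the two dates."""
    months = (end.year - start.year) * 12 + (end.month - start.month)
    return 2 ** (months // period_months) / 100

print(doubled_penny_dollars(date(1965, 4, 1), date(2022, 4, 1)))  # 38 doublings: ~$2.75 billion
print(doubled_penny_dollars(date(1965, 4, 1), date(2023, 4, 1)))  # still 38 whole doublings
```
<p>Amusingly the two numbers agree: no new 18-month period completes between April 2022 and April 2023.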
</p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p>https://blog.computationalcomplexity.org/2023/04/thoughts-on-gordon-moore.htmlnoreply@blogger.com (gasarch)11tag:blogger.com,1999:blog-3722233.post-5261164684490370098Thu, 20 Apr 2023 21:30:00 +00002023-04-20T16:30:58.433-05:00Health Tech<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyGNMicDXyUD2rOs44Tlkus5rR0BF-8qwueFzOJQdXm6L_eaGZ3V9blhL9X1Viqg01o_gl-YkkgkE7wwPWfbJlN9q0HJU8RpEJIw6iVauLUFPD-c_iK0645ZzXrpgVAuDHASeM1X7zFcBn3YQyHIPJrRKsPIw6Rj0VYGzzY4yZbmRwQ9rwIQ/s4080/PXL_20230418_180451901.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="3072" data-original-width="4080" height="301" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyGNMicDXyUD2rOs44Tlkus5rR0BF-8qwueFzOJQdXm6L_eaGZ3V9blhL9X1Viqg01o_gl-YkkgkE7wwPWfbJlN9q0HJU8RpEJIw6iVauLUFPD-c_iK0645ZzXrpgVAuDHASeM1X7zFcBn3YQyHIPJrRKsPIw6Rj0VYGzzY4yZbmRwQ9rwIQ/w400-h301/PXL_20230418_180451901.jpg" width="400" /></a></div><br />On Tuesday, at the behest of an alumnus, I spent the afternoon at <a href="https://www.himss.org/global-conference">HIMSS</a>, a large health tech conference being held at the big convention center in Chicago. When I think of Health Tech, I imagine fancy medical devices, but most of the exhibitors were focused on software solutions.<p></p><p>Cybersecurity had the biggest theme area, no surprise given the devastating role ransomware has had on some hospital chains. The second largest theme area focused on Interoperability. Just a few years ago, the vast majority of medical data was transferred via fax. A few companies, like Epic Systems, dominate the electronic health records space and don't share nicely.
There's a relatively new standard, <a href="https://en.wikipedia.org/wiki/Fast_Healthcare_Interoperability_Resources">FHIR</a>, for transferring medical data, making it easily accessible via APIs while keeping it secure. Hopefully, we can finally kill off the fax machines in doctors offices. Patient Engagement was the other big theme area.</p><p>Of course the big discussion topics are about how AI will change health care. For example, the advances in electronic records have led to doctors spending far too much time entering data instead of seeing patients. AI could make data entry quick, easier or perhaps even unnecessary. Also AI could help provide functionality for triage and initial diagnoses, helping to extend the capabilities in a staff-limited environment and help bring down health-care costs. Many of the exhibited software systems boasted about using AI but it won't be until next year's meeting that we see the true integration of large-language models into health care technology.</p><p>Many of the challenges of technology in health care carry over to higher education. We don't generally use faxes, but why do we send transcripts by PDFs? Health and student data share similar privacy and security challenges, why can't we develop a FHIR-like system for higher education? Cybersecurity and <strike>Patient</strike> Student Engagement challenges loom large for universities as well. </p>https://blog.computationalcomplexity.org/2023/04/health-tech.htmlnoreply@blogger.com (Lance Fortnow)3tag:blogger.com,1999:blog-3722233.post-6757105980544925558Thu, 13 Apr 2023 17:07:00 +00002023-04-18T07:56:08.365-05:00My Week at Simons<p>This week finds me at the <a href="https://simons.berkeley.edu/">Simons Institute for Theoretical Computer Science</a> in Berkeley California. Simons started about the same time I joined the administrative ranks and never had the opportunity to spend a full semester there. 
I can manage a shorter trip and purposely chose a week with no workshops and great visitors including Sam Buss, Russell Impagliazzo, Valentine Kabanets, Toni Pitassi, Ryan Williams, former student Rahul Santhanam and former postdocs Pavan Aduri and Vinod Variyam and <a href="https://simons.berkeley.edu/people/visitors">many others</a> including the next generations of complexity theory leaders. Simons is having programs on <a href="https://simons.berkeley.edu/programs/Meta-Complexity2023">Meta-Complexity</a> and an "extended reunion" for <a href="https://simons.berkeley.edu/programs/extended-reunion-satisfiability">Satisfiability</a>. Apparently I used to work on Meta-Complexity before it was a thing.</p><p>Computational complexity traditionally has tried to get ahead of new technologies, and modelled randomized, parallel, quantum computation and cryptography in the infancy of their development allowing complexity to help guide our understanding and development of these areas. In the last twenty years or so, complexity has migrated more towards mathematics, and has mostly missed technological changes like cloud computing, hierarchical memory models, edge and mobile computing for example. </p><p>But the recent advances in optimization and machine learning cannot be ignored. There has certainly been plenty of discussion of ChatGPT and Russell gave an informal lecture yesterday trying to model large-language models at some level. I've been having some discussions about how complexity can answer questions like what it means for a model to be explainable. </p><p>Complexity theory also ought to reckon that practically we seem to be getting the best of P = NP while avoiding losing cryptography <a href="https://blog.computationalcomplexity.org/2020/12/optiland.html">simultaneously</a> in Heuristica and Cryptomania among Russell's <a href="https://blog.computationalcomplexity.org/2004/06/impagliazzos-five-worlds.html">five worlds</a>. 
Russell claims we're not in Heuristica, at least not now, since we can still generate hard-to-solve problems. But if our models aren't modeling the world we live in, perhaps it's time to rethink the models. </p>https://blog.computationalcomplexity.org/2023/04/my-week-at-simons.htmlnoreply@blogger.com (Lance Fortnow)14tag:blogger.com,1999:blog-3722233.post-8698511952609437812Mon, 10 Apr 2023 17:37:00 +00002023-04-10T12:38:31.259-05:00Complexity and Explainable AI<p>About six years ago, I <a href="https://blog.computationalcomplexity.org/2017/04/understanding-machine-learning.html">posted</a> on why it was important to understand machine learning, mentioning trust, fairness, security and causality. But then I brought in complexity.</p><p></p><blockquote>What if P = NP? Would that help? Actually it would make things worse. If you had a quick algorithm for NP-complete problems, you could use it to find the smallest possible circuit for say matching or traveling salesman but you would have no clue why that circuit works. </blockquote><p>Ryan Williams <a href="https://blog.computationalcomplexity.org/2017/04/understanding-machine-learning.html?showComment=1492464695392#c7595821335622802218">countered</a></p><p></p><blockquote>If P=NP you could also find the shortest proof in your favorite formal system that the smallest possible circuit does what you wanted it to do, as well as any other claim you are wondering that may be true about the circuit. That proof might not be comprehensible to you, but it could be written in a format where proof assistant software such as HOL or Coq could parse it and convince you it is correct. So if P=NP (with feasible low constants) I think that would definitely help.</blockquote><p>So if P = NP maybe you couldn't understand how the circuit works but any question about how it works you could answer.
</p><p>While we don't live in a world where P = NP, we are in a world where we can produce very strong learning algorithms where we can't understand the programs they produce. They are now powerful enough to apply Ryan's idea and use the algorithms themselves to understand the decisions they make.</p><p>For example, I tried using Google Translate on a <a href="https://mta.hu/mta_hirei/elhunyt-t-sos-vera-matematikus-az-mta-rendes-tagja-112809">Hungarian obituary</a> of Vera Sós. Hungarian does not use gendered pronouns and the translation incorrectly produced male pronouns. I then fed the translation into ChatGPT, which translated with the correct female pronouns. </p><p>You can speculate why Google Translate uses male pronouns but you can't ask it why. But you can ask ChatGPT.</p><p></p><blockquote>I used female pronouns in the translation because Vera T. Sós is a female name. Vera is a feminine given name, and since the context of the original text is about her life and accomplishments, it is appropriate to use female pronouns in the English translation to maintain clarity and accurately convey information about her.</blockquote><p>So if you want to understand how ML works, perhaps we should seek stronger algorithms, not weaker ones, algorithms that can explain themselves.
As <a href="https://knivesengraved.com/blogs/news/why-sharp-knives-are-safer-than-dull-knives">they say</a>, a dull knife is more dangerous than a sharp one.</p><p></p><p></p><p></p><p></p>https://blog.computationalcomplexity.org/2023/04/complexity-and-generative-ai.htmlnoreply@blogger.com (Lance Fortnow)5tag:blogger.com,1999:blog-3722233.post-4674935859497045234Tue, 04 Apr 2023 19:05:00 +00002023-04-04T14:05:15.778-05:00Neil Jones (1941-2023)<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvZYTofhc2zPgPq7_2CuI3JUbXlbSvpWL3xla-H8c4gpWz_wXGCjQQLGt3sNl3XP0E5PZ4qIzpnwddhMX1FzqbzTnicWf_oROVBk6TEzBq7novIBCrqlQ1-qj_KNB0LqJGeM6TXNGzx8yoFR4UyDOqme79mlKdSRpbGJrA0G-L_1m8nJF-1w/s885/Neil_D._Jones.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="885" data-original-width="657" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvZYTofhc2zPgPq7_2CuI3JUbXlbSvpWL3xla-H8c4gpWz_wXGCjQQLGt3sNl3XP0E5PZ4qIzpnwddhMX1FzqbzTnicWf_oROVBk6TEzBq7novIBCrqlQ1-qj_KNB0LqJGeM6TXNGzx8yoFR4UyDOqme79mlKdSRpbGJrA0G-L_1m8nJF-1w/w149-h200/Neil_D._Jones.jpg" width="149" /></a></div><i>Eric Allender graciously agreed to write this remembrance of Neil Jones.<br /></i><p></p><p><a href="https://en.wikipedia.org/wiki/Neil_D._Jones">Neil Jones</a> passed away on March 27.</p><p>Neil's work had a profound impact on the field of computational complexity theory, although he was primarily known for his work in other areas of computer science. For example, his 1998 ACM Fellow citation is "For outstanding contributions to semantics-directed compilation, especially partial evaluation, and to the theory of computation, formal models and their practical realization." 
Note that there's no mention of "complexity" (except as it is bundled together with "theory of computation" -- and Jones also published in the area of computability theory).</p><p>So what were some ways that Neil influenced the development of computational complexity theory?</p><p>In 1972, in the 4th STOC conference, Neil <a href="https://doi.org/10.1145/800152.804909">collaborated</a> with Alan Selman, to show that a notion that logicians had been studying since the 1950's coincided exactly with a natural complexity class. More specifically, given a logical formula F, the "spectrum" of F is the set of numbers {n : F has a finite model of size n}. What kinds of sets can be the spectrum of a first-order formula? Is this class of sets closed under complement? These were some of the questions that logicians had struggled with. Jones and Selman gave a precise characterization, as NE (nondeterministic exponential time). Thus the complement of every spectrum is a spectrum if and only if NE=coNE. As D. Sivakumar points out in a <a href="https://www.linkedin.com/feed/update/urn:li:activity:7047571379897921536?commentUrn=urn%3Ali%3Acomment%3A%28activity%3A7047571379897921536%2C7048828674200006656%29&replyUrn=urn%3Ali%3Acomment%3A%28activity%3A7047571379897921536%2C7048876431841378304%29">LinkedIn comment</a> on Neil's death: "The following year, Fagin proved that generalized spectra coincide with NP, and descriptive complexity theory was born."</p><p>Of course, a lot of other things were happening in the late 60's and early 70's: Savitch proved Savitch's Theorem. The first NP-completeness results appeared. It appears that several people were trying to build on Savitch's theorem, to show that everything in P can be done in log<sup>2</sup> space, and this motivated Steve Cook to define a problem ("Path Systems") and show (1) certain algorithms for Path Systems could not be implemented in small space, and (2) Path Systems has a small-space algorithm iff everything in P does. 
This result of Cook's was similar in flavor to a theorem of Savitch, showing that a problem he called "Threadable Mazes" was in L if and only if NL=L. Although these notions were clearly in the air, Jones (and -- simultaneously -- Meyer & Stockmeyer) were the first to explicitly formalize the notion of logspace reducibility (including closure under composition), and to notice that the NP-completeness results of Cook and Karp held also under logspace reducibility. And Jones was the first one to go ahead and develop the general theory of logspace reducibility and the theory of completeness for subclasses of P (first for P itself (with Laaser), and later for NL (with Laaser and Lien)). I think that this is when people started to get the idea that completeness was not a special property that only a few problems shared. Rather: Nearly EVERY natural computational problem was likely to be complete for some reasonable complexity class.</p><p>Notably, Jones also recognized that logspace was overkill, when considering reductions. He also wanted to have a really restricted notion of reducibility, so that one could talk meaningfully about problems being complete for L. To this end, he <a href="https://doi.org/10.1016/S0022-0000(75)80050-X">defined</a> "log-rudimentary" reducibility. This was quite natural for him, since he had work previously on Smullyan's "Rudimentary Relations". But log-rudimentary reductions never really caught on. Instead, after Furst, Saxe, and Sipser kickstarted the study of AC<sup>0</sup> circuits, a notion of AC<sup>0</sup> reducibility was developed by Chandra, Stockmeyer, and Vishkin in the mid-1980's, which turned out to be very useful in classifying problems as being complete in various subclasses of P. Much later, in 1991, I published a <a href="https://doi.org/10.1016/0020-0190(91)90015-A">paper</a> with Vivek Gore, showing that Neil's log-rudimentary reductions are precisely the same thing as uniform AC<sup>0</sup> reducibility. 
Thus Neil Jones had the insight to define and study a notion that would not become mainstream for another decade, and which still provides the best tool for classifying the complexity of natural problems in subclasses of P.</p><p>I only had the pleasure of meeting Neil once, during a visit to Copenhagen in 2004, although we would occasionally exchange some e-mail about topics in complexity. It is interesting to note that the most <a href="https://doi.org/10.48550/arXiv.2008.02932">recent paper</a> on Neil's <a href="https://dblp.org/pid/j/NeilDJones.html">DBLP page</a> deals with complexity classes. I haven't spent much time looking at the paper, but I do see that the authors define a complexity class that lies between NL and P. It might be interesting to see if this class coincides with SAC<sup>1</sup> (also known as LogCFL).</p><p>I thank Lance and Bill for encouraging me to write a few lines about Neil's importance to the field. </p>https://blog.computationalcomplexity.org/2023/04/neil-jones-1941-2023.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-691726575962306125Sat, 01 Apr 2023 14:04:00 +00002023-04-01T09:04:54.386-05:00Who's on April First<p> </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUtAqIYgEqirBv-z2YRBAEYrFBAawDGs1NjdMsbwYKeoNs_ma5zlR9yyRD5ifxav8zQt3FhOBV8_xLQjScraH3UoscL-RS-nmYPpzQvEuZW9frJXlc1txBOpoTsGz5lUtJ1dAmArOvA2dagN1l-J5Uda-v959LDoBiidDWUZCQQVSsc7LUoA/s1024/baseball.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="769" data-original-width="1024" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUtAqIYgEqirBv-z2YRBAEYrFBAawDGs1NjdMsbwYKeoNs_ma5zlR9yyRD5ifxav8zQt3FhOBV8_xLQjScraH3UoscL-RS-nmYPpzQvEuZW9frJXlc1txBOpoTsGz5lUtJ1dAmArOvA2dagN1l-J5Uda-v959LDoBiidDWUZCQQVSsc7LUoA/w320-h240/baseball.jpg" width="320" /></a></div><br /><div class="separator" 
style="clear: both; text-align: center;">Carlos May waving to the crowd on April 1, 1972</div><p><i>Instead of the usual April Fools’ Day post, I present one of the best April Fools Day stunts ever. Here’s the text from an old Parade Magazine clipping I dug up recently that was published on April 1, 1985.</i></p><p>When it comes to innovative and wacky ideas in baseball, Bill Veeck was a true legend. As a team owner and promoter, Veeck was known for his creative approach to the sport, from planting ivy on the walls at Wrigley Field to his famous "exploding scoreboard" at Comiskey Park. But did you know about the time Veeck pulled off an unforgettable April Fools' stunt by having the 1972 Chicago White Sox wear the names from the classic "Who's on First?" sketch?</p><p>It was April 1, 1972, and the Chicago White Sox were getting ready to play a game that would go down in history. Bill Veeck had decided to pay homage to the <a href="https://youtu.be/sShMA85pv8M">iconic comedy routine</a> by Bud Abbott and Lou Costello, considered by many the greatest comedy sketch ever performed. For those unfamiliar with the sketch, it revolves around a series of misunderstandings based on the names of the players on a fictional baseball team. The names sound like common phrases, leading to a hilariously confusing conversation.</p><p>In Veeck's version of the stunt, the White Sox players would take the field with the names of the "Who's on First?" team on the back of their jerseys. The players, initially skeptical of the idea, eventually embraced the spirit of April Fools' Day and played along.</p><p>As the game commenced, fans were treated to a scene straight out of the Abbott and Costello routine. Instead of their usual names, the players' jerseys featured names like "Who," "What," "I Don't Know," "Why," "Because," "Tomorrow," and "Today." 
Here was the starting lineup:</p><p></p><ol style="text-align: left;"><li>Who - First Base: Dick Allen</li><li>What - Second Base: Mike Andrews</li><li>I Don't Know - Third Base: Bill Melton</li><li>Why - Left Field: Carlos May</li><li>Because - Center Field: Ken Berry</li><li>Abbott - Right Field: Jay Johnstone</li><li>I Don't Care - Shortstop: Luis Aparicio</li><li>Today - Catcher: Ed Herrmann</li><li>Tomorrow - Pitcher: Wilbur Wood</li></ol><div>The right fielder is never named in the sketch. Pat Kelly pinch hit for Johnstone in the 6th wearing “Costello”. </div><p></p><p>The confusion was not only limited to the fans in the stadium. The opposing team and the umpires struggled to keep track of the game, often leading to comical misunderstandings on the field. For instance, the umpire might have shouted, "Who's out!" only to be met with the response, "No, Who's on first!"</p><p>Though some traditional baseball fans were initially taken aback by the stunt, the majority embraced the humor, making the game one of the most memorable in White Sox history. It was a testament to Veeck's genius that he could seamlessly blend comedy with the sport he loved.</p><p>The "Who's on First?" game became a cherished part of baseball lore and added to the legend of Bill Veeck. It demonstrated his willingness to think outside the box, engage fans, and remind everyone that, at its core, baseball should be a source of fun and entertainment.</p><p>The 1972 Chicago White Sox "Who's on First?" April Fools' Day game captured the spirit of Bill Veeck's inventive approach to baseball. 
As we celebrate April Fools' Day this year, let's remember the time when the White Sox took the field with the most confusing lineup in baseball history and showed us all that, sometimes, laughter truly is the best medicine.</p>https://blog.computationalcomplexity.org/2023/04/whos-on-april-first.htmlnoreply@blogger.com (Lance Fortnow)4tag:blogger.com,1999:blog-3722233.post-448142924631254224Wed, 29 Mar 2023 15:29:00 +00002023-03-29T10:29:59.716-05:00Alan Turing, The Opera<p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEih91e9xcRo-Hr1RCp_82Gpy7YFV2sUTXy8IYTFSJ-LMZrr__CZtlFk2EHGczD2-XZdOtbkYNOC2pX_nSJqFqz6PFQTj0vNEDQMgSi02QC5vQ0gF5KnEsIXCDh4wB_NTBP2TzFdwYXCeYIoNRUDjKwYpNZpCGvTHyH8a2tW-q2Bw3F2vpMwxw/s1312/Turing.png" style="clear: right; display: inline; float: right; margin-bottom: 1em; margin-left: 1em; text-align: center;"><img border="0" data-original-height="1312" data-original-width="855" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEih91e9xcRo-Hr1RCp_82Gpy7YFV2sUTXy8IYTFSJ-LMZrr__CZtlFk2EHGczD2-XZdOtbkYNOC2pX_nSJqFqz6PFQTj0vNEDQMgSi02QC5vQ0gF5KnEsIXCDh4wB_NTBP2TzFdwYXCeYIoNRUDjKwYpNZpCGvTHyH8a2tW-q2Bw3F2vpMwxw/w210-h320/Turing.png" width="210" /></a> </p>Last Thursday I attended the world premier of <a href="https://chicagooperatheater.org/season/turing">The Life and Death(s) of Alan Turing</a>, a new production from Chicago Opera Theater composed by Justine Chen with the libretto (text) from David Simpatico.<p></p><p>The opera takes a mostly chronological trip through his life in seven scenes, focusing less on the computer science and more on Turing's struggle with homosexuality and his prosecution. Turing does fiddle with an old computer throughout the opera. The opera ends with a different take on his death (spoiler alert), where he attempts to upload his consciousness to escape his chemically castrated body. 
</p><p>Between the scenes, a chorus chants random numbers and words as they float on a scrim, in what the composer calls "chat clouds". </p><blockquote><p>These “chat clouds” transport the listeners with a sonic approximation of internet chatter, filled with information that brings them to the next moment. The aural depiction of these "chat clouds" was inspired by the animated movies and television series of Masamune Shirow's cyberpunk manga Ghost in the Shell. Another sonic influence was Walt Disney’s fantastical Snow White, one of Alan’s greatest obsessions.</p></blockquote><p>I found them reminiscent of Philip Glass's <a href="https://www.youtube.com/watch?v=9YRzS9y-8S8">knee plays</a> from Einstein on the Beach. I really enjoyed these sequences, though another academic I ran into during intermission felt otherwise.</p><p>Overall I enjoyed the music and the opera, particularly the courtroom scene where Turing gave an impassioned defense, though it turns out to be all in his head. The cast did well across the board, especially Jonathan Michie, who sang Turing. 
</p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgTEyFR4Sjvw7Ew4ip5OXdi6Q0Q3iJLmzZg7P0jknvTJjHgVdUj7f_XtvW16CiHsbDu8pJxL8QVKurCOc3crdwE03zcWzfh0FF1SLPBGUBqqunj_GAUmmcGnoQKHr2b-QPI5X2fafywY1UrVIEXQBHJGxChdL699rjzysU0OgQbQvY1tuJnDg/s1954/PXL_20230324_024100463.jpg" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="769" data-original-width="1954" height="158" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgTEyFR4Sjvw7Ew4ip5OXdi6Q0Q3iJLmzZg7P0jknvTJjHgVdUj7f_XtvW16CiHsbDu8pJxL8QVKurCOc3crdwE03zcWzfh0FF1SLPBGUBqqunj_GAUmmcGnoQKHr2b-QPI5X2fafywY1UrVIEXQBHJGxChdL699rjzysU0OgQbQvY1tuJnDg/w400-h158/PXL_20230324_024100463.jpg" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Librettist David Simpatico (wearing jacket), Composer Justine Chen (in gown)<br />conductor Lidiya Yankovskaya behind her and Jonathan Michie (in red robe)</td></tr></tbody></table><p>One can't help comparing this opera to <a href="https://en.wikipedia.org/wiki/The_(R)evolution_of_Steve_Jobs">The (R)evolution of Steve Jobs</a>, which I <a href="https://blog.computationalcomplexity.org/2022/05/the-revolution-of-steve-jobs.html">saw in Atlanta</a> last year. Both operas chart a metaphysical journey of a computing giant through various important moments of their lives. During a Q&A I asked Simpatico about the two and he said he purposely avoided the Jobs opera so as not to affect how he wrote this one. 
Probably for the best.</p>https://blog.computationalcomplexity.org/2023/03/alan-turing-opera.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-6985939692001476036Sun, 26 Mar 2023 19:25:00 +00002023-03-26T14:28:26.598-05:00The SIGACT Book Review column list of books it wants reviewed<p>I am posting this for Nick Tran who is the current SIGACT Book Review Editor (before him it was Fred Green for about 6 years, and before him it was me (Bill Gasarch) for 18 years. Nobody should have the job for more than 6 years. No TV show should go on for more than 6 years. The 6-year rule probably has other applications.)</p><p>Nick asked me to post the list of books that need reviewing. It is most of this post. </p><p>If you spot one you want to review, then email him (email address later) the name of the book you want to review and your postal address so he can send it to you or have it sent to you. Here are his specs:</p><p>Reviews of recently published or bucket-list books of interest to the TCS community are welcome. Manuscripts (NOTE FROM BILL `manuscripts'? Really? Sounds like the kind of thing you would FAX or postal mail) should be between 3 and 6 pages and include a brief introduction, a detailed content summary, an assessment of the work, and a recommendation to the book's targeted audience. </p><p>Nick's email is ntran@scu.edu</p><p>The books are: </p><p>ALGORITHMS</p><p>Knebl, H. (2020). Algorithms and Data Structures: Foundations and Probabilistic Methods for Design and Analysis. Springer.</p><p>Roughgarden, T. (2022). Algorithms Illuminated: Omnibus Edition. Cambridge University Press.</p><p><br /></p><p>MISCELLANEOUS COMPUTER SCIENCE</p><p>Amaral Turkman, M., Paulino, C., & Müller, P. (2019). Computational Bayesian Statistics: An Introduction (Institute of Mathematical Statistics Textbooks). Cambridge University Press.</p><p>Nakajima, S., Watanabe, K., & Sugiyama, M. (2019). Variational Bayesian Learning Theory. 
Cambridge University Press.</p><p>Hidary, J. D. (2021). Quantum Computing: An Applied Approach (2nd ed.). Springer.</p><p>Apt, K. R., & Hoare, T. (Eds.). (2022). Edsger Wybe Dijkstra: His Life, Work, and Legacy (ACM Books). Morgan & Claypool.</p><p>Burton, E., Goldsmith, J., Mattei, N., Siler, C., & Swiatek, S. (2023). Computing and Technology Ethics: Engaging through Science Fiction. The MIT Press.</p><p><br /></p><p>DISCRETE MATHEMATICS AND COMPUTING</p><p>O’Regan, G. (2020). Mathematics in Computing: An Accessible Guide to Historical, Foundational and Application Contexts. Springer Publishing.</p><p>Rosenberg, A. L., & Trystram, D. (2020). Understand Mathematics, Understand Computing: Discrete Mathematics That All Computing Students Should Know. Springer Publishing.</p><p>Liben-Nowell, D. (2022). Connecting Discrete Mathematics and Computer Science (2nd ed.). Cambridge University Press.</p><p><br /></p><p>CRYPTOGRAPHY AND SECURITY</p><p>Oorschot, P. C. (2020). Computer Security and the Internet: Tools and Jewels (Information Security and Cryptography). Springer.</p><p><br /></p><p>COMBINATORICS AND GRAPH THEORY</p><p>Golumbic, M. C., & Sainte-Laguë, A. (2021). The Zeroth Book of Graph Theory: An Annotated Translation of Les Réseaux (ou Graphes)—André Sainte-Laguë (1926) (Lecture Notes in Mathematics). Springer.</p><p>Beineke, L., Golumbic, M., & Wilson, R. (Eds.). (2021). Topics in Algorithmic Graph Theory (Encyclopedia of Mathematics and its Applications). Cambridge University Press.</p><p><br /></p><p>PROGRAMMING ETC.</p><p>Nielson, F., & Nielson, R. H. (2019). Formal Methods: An Appetizer. Springer.</p><p>Sanders, P., Mehlhorn, K., Dietzfelbinger, M., & Dementiev, R. (2019). Sequential and Parallel Algorithms and Data Structures: The Basic Toolbox. Springer.</p><p><br /></p><p>MISCELLANEOUS MATHEMATICS</p><p>Kurgalin, S., & Borzunov, S. (2022). Algebra and Geometry with Python. 
Springer.</p><p><br /></p><p> </p>https://blog.computationalcomplexity.org/2023/03/the-sigact-book-review-column-list-of.htmlnoreply@blogger.com (gasarch)7tag:blogger.com,1999:blog-3722233.post-1843365612377959262Thu, 23 Mar 2023 13:58:00 +00002023-03-23T08:58:07.888-05:00A Strange Hiring Season<p>In my <a href="https://blog.computationalcomplexity.org/2022/11/fall-jobs-post-2022.html">fall jobs post</a>, I had trouble predicting this season's CS faculty job market, but I didn't expect this: strong supply and strong demand, something we haven't seen since the early '80s when the then-young CS departments were ramping up.</p><p>We have a confluence of two forces that have both strengthened since my post back in November: The layoffs and hiring freezes at the major internet companies (Alphabet, Amazon, Apple, Meta and Microsoft) contrasted with major excitement in machine learning. I wrote the jobs post before ChatGPT was released, and Amazon and Meta have since announced even more layoffs.</p><p>ML-mania is leading to very strong demand from undergrad and grad students for computing degrees. Across the US we have a record number of open faculty slots in computing as universities try to meet that need.</p><p>Meanwhile, PhDs who might have gone to industry are going on the academic job market instead. Also some tech researchers who have been laid off or spooked by the layoffs are considering academic jobs.</p><p>Between these two forces we will likely have a record number of faculty hires this year but we may see fewer senior people switching universities because good departments can fill their needs with junior faculty.</p><p>There is a mismatch of area. There is a big demand in CS departments to hire in machine learning because that is where the student interest is. ML is not where the big companies are cutting back. 
If you are on the market this year, position your research to make it relevant to learning, or at least make clear that you are willing to teach ML courses.</p><p>By the way, this post is for computer science. Count yourself extremely lucky if you can get a tenure-track job in a non-computing field.</p>https://blog.computationalcomplexity.org/2023/03/a-strange-hiring-season.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-8933765861878524476Mon, 20 Mar 2023 00:47:00 +00002023-03-19T21:46:31.377-05:00New Upper Bound on R(k). WOW!<p> R(k) is the least n such that for all 2-colorings of the edges of \(K_n\) there is a monochromatic \(K_k\)</p><p>(so there are k vertices such that the coloring restricted to the edges between those k vertices is constant)</p><p><br />Here is some history. If I miss a reference or a result, let me know in the comments.</p><p><br /></p><p>\(R(k) \le 2^{2k} = 4^k \) seems to be folklore and is a very old result. Proof could be shown to HS students. I have. Or 9-year-olds who are good at math. I have. </p><p><br /></p><p>\(R(k)\le {2k-2 \choose k-1}\) which is approx \(\frac{4^k}{\sqrt k}\) was shown by Erdos and Szekeres in 1935. Proof could be shown to HS students. I have. Or 9-year-olds who are good at math. I have. </p><p><br /></p><p>Thomason in 1988 got</p><p>$$R(k) \le \frac{4^k}{k^A\sqrt{\log k}}$$</p><p>for some A. This paper is behind paywalls so will be lost to history and I cannot comment on whether the proof is easy or hard. (If you know of a link to it, let me know and I will provide it.) </p><p>Conlon in 2006 got the denom to be super-poly. We omit the result but the paper is <a href="https://arxiv.org/abs/math/0607788">here</a>. The proof used the method of quasi-randomness. 
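For very small k the definition can be checked directly by brute force. The sketch below is illustrative only (there are \(2^{\binom{n}{2}}\) colorings, so this is hopeless beyond tiny n); it verifies that R(3)=6 by enumerating all 2-colorings of the edges of \(K_5\) and \(K_6\):

```python
from itertools import combinations, product

def has_mono_clique(coloring, n, k):
    """Does this edge 2-coloring of K_n contain a monochromatic K_k?"""
    for verts in combinations(range(n), k):
        colors = {coloring[e] for e in combinations(verts, 2)}
        if len(colors) == 1:
            return True
    return False

def ramsey_holds(n, k):
    """True iff EVERY 2-coloring of K_n's edges has a monochromatic K_k."""
    edges = list(combinations(range(n), 2))
    for bits in product((0, 1), repeat=len(edges)):
        if not has_mono_clique(dict(zip(edges, bits)), n, k):
            return False  # found a coloring with no monochromatic K_k
    return True

print(ramsey_holds(5, 3))  # False: some coloring of K_5 avoids a mono triangle
print(ramsey_holds(6, 3))  # True: so R(3) = 6
```

(The witness for n=5 is the classic pentagon coloring: 5-cycle edges one color, chords the other.)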
</p><p>Sah (his paper is <a href="https://arxiv.org/abs/2005.09251">here</a>) optimized Conlon's technique to get </p><p>$$R(k) \le 4^{k-c(\log k)^2}.$$</p><p>(According to the paper I will point to soon that has the major result, Sah's result is the best one can do with the Thomason-Conlon technique. In complexity theory we might have formalized that and called it a barrier result.)</p><p>These results were interesting and used interesting techniques. However, the best known lower bound is \(R(k) \ge 2^{k/2}\) (I've left out constants) so the REAL questions are</p><p>a) Can the 4 be replaced with a smaller number in the upper bound?</p><p>b) Is there a number a so that R(k) is essentially \(2^{ak}\)? </p><p>We note in passing that the lower bound was proven by the Prob Method. That last statement obscures the history--- the Prob Method was invented by Erdos TO PROVE the lower bound. I've seen the prob method called <i>an application of Ramsey Theory</i> which is not quite right. It's more like Ramsey INSPIRED a technique that is used A LOT, including things that are actually practical. </p><p>But back to our story. SO, while all of the above results were interesting, were they making progress towards the REAL question? We can now say YES as progress HAS been made and DID use some of the techniques.</p><p>On March 16, 2023 Campos, Griffiths, Morris, and Sahasrabudhe posted a paper on arXiv, <a href="https://arxiv.org/pdf/2303.09521.pdf">here</a>, that showed</p><p>$$R(k) \le (4-\epsilon)^k$$</p><p> for some (quite small) epsilon. </p><p>Here are some thoughts which are influenced by my email correspondence with Jacob Fox (an eminent combinatorialist) on this topic.</p><p>1) I am NOT surprised that the result is true. </p><p>2) I AM surprised that it's been proven. Lots of brilliant people have worked on this problem for many years so... why now? 
Since it's been open for around 70 years, I thought it would take another 70 years to crack.</p><p>3) Will this result lead to better results? I hope so!</p><p>4) Does R(k) have a nice upper bound? Not clear. The following is unlikely though something like it may be possible:</p><p>R(k) roughly \((3.5)^k\) for k a power of a Fibonacci prime</p><p>R(k) roughly \((3.8)^k\) otherwise</p><p>5) (Jacob pointed me to this) There is a paper on Book-Ramsey Numbers that also (on page 2) notes a connection to Ramsey Numbers. The paper is <a href="https://arxiv.org/pdf/2001.00407.pdf">here</a>. A conjecture on Book-Ramsey will lead to a big improvement on Ramsey. Those kinds of results are odd since I can't tell if the upshot is</p><p>Let's work hard on Book-Ramsey so we can get better bounds on Ramsey!</p><p>or</p><p>Book-Ramsey is HARD so let's give up. (Who first said <i>if at first you don't succeed, quit. Why make a damn fool of yourself?</i>)</p><p>6) The paper with the new result is 57 pages. It looks dense. I will wait for the movie. I may have to wait a long time. </p>https://blog.computationalcomplexity.org/2023/03/new-upper-bound-on-rk-wow.htmlnoreply@blogger.com (gasarch)6tag:blogger.com,1999:blog-3722233.post-2275048041517285399Thu, 16 Mar 2023 20:31:00 +00002023-03-17T11:28:44.822-05:00Identities in Computational Complexity<p><i>Guest post by Josh Grochow</i></p>
<p>On the birdsite, Jay Cummings <a href="https://twitter.com/LongFormMath/status/1623389627880194048" target="_blank">tweeted</a>:</p>
<center><blockquote class="twitter-tweet"><p dir="ltr" lang="en">TOP 5 IDENTITIES OF ALL TIME<br />5. You<br />4. Can't <br />3. Rank<br />2. Identities <br />1. e<sup>iπ</sup> + 1 = 0</p>— Jay Cummings (@LongFormMath) <a href="https://twitter.com/LongFormMath/status/1623389627880194048?ref_src=twsrc%5Etfw">February 8, 2023</a></blockquote> <script async="" charset="utf-8" src="https://platform.twitter.com/widgets.js"></script></center>
<p><br />
And it got me thinking about identities in computational complexity. At first I
thought: how many could there be? 10? After some thought and with some help,
there turn out to be quite a few more than that, and I think they make a pretty
good list!</p>
<p>Rules/comments on the list:</p>
<p></p><ul style="text-align: left;"><li>Some are relatively simple alternative characterizations, such as the various definitions of PH; most are pretty nontrivial theorems.</li><li>I didn't include existentially quantified oracle results (such as "there exists A such that P<sup>A</sup>=NP<sup>A</sup>"), despite them being some of my favorite results, because there'd be waaaay too many of those. Probably thousands? Ask Lance. In some circles he's known as the "oracle oracle" - as in, he's who you go ask if you want to know if a certain oracle exists. I <i>did</i> include identities involving oracles where one side of the identity didn't have an oracle, such as EXP=NP<sup>R<sub>Kt</sub></sup> or AlmostP=BPP (AlmostP is the class of languages L such that {A : L is in P<sup>A</sup>} has measure 1).</li><li>I also didn't include some important conditional equalities, such as "<a href="http://doi.org/10.1007/BF01200056" target="_blank">EXP in P/poly iff EXP=MA</a>" or "<a href="https://doi.org/10.1016/0022-0000(88)90037-2">NP in BPP iff NP=RP</a>". I guess it's not really an "identity" if it's conditional. Games are only fun if the rules are at least a little constraining!</li><li>There are some surprising containments that either aren't equalities, or aren't known to be equalities, and I didn't include those either, despite some of them being very important. For example, <a href="https://doi.org/10.1016/0020-0190(83)90044-3" target="_blank">BPP in Σ<sub>2</sub>P</a>, other depth reduction results (such as the <a href="https://doi.org/10.1109/FOCS.2008.32" target="_blank">chasms at depth 4</a> and <a href="https://doi.org/10.1137/140957123" target="_blank">3</a>), and <a href="https://doi.org/10.1007/BF01263423" target="_blank">Beigel-Tarui</a>/<a href="https://doi.ieeecomputersociety.org/10.1109/FSCS.1990.89583" target="_blank">Yao</a>.</li></ul><p></p>
<p>One could teach a class based on a list like this and cover
a lot of good ground, but I think it'd be sad to leave out any lower bounds.
Actually, by my estimate, if you were teaching from "scratch" (say,
starting after an undergrad algorithms class), this list, and all its
implied prerequisite material, is already probably the equivalent of about 3-4
semester-long classes!</p>
<p>What other complexity identities did I miss?</p>
<p>Finally, without further ado, a list of (some) identities in
computational complexity:</p>
<p><a href="https://doi.org/10.1145/800119.803889" target="_blank">RP∩coRP=ZPP</a></p>
<p><a href="https://dl.acm.org/doi/10.1145/3568163" target="_blank">CLS=PPAD∩PLS</a> [from Paul Goldberg @paulwgoldberg]</p>
<p><a href="https://doi.org/10.1137/16M1107632" target="_blank">quasi-poly Frege = quasi-poly noncommutative formula IPS</a></p>
<p><b>Space classes</b></p>
<p><a href="https://doi.org/10.1016%2FS0022-0000%2870%2980006-X" target="_blank">PSPACE=NPSPACE</a></p>
<p><a href="https://en.wikipedia.org/wiki/Immerman%E2%80%93Szelepcs%C3%A9nyi_theorem" target="_blank">NL=coNL</a></p>
<p><a href="https://doi.org/10.1145%2F1391289.1391291" target="_blank">L=SL</a></p>
<p><a href="https://complexityzoo.net/Complexity_Zoo:G#gapl" target="_blank">DET=L<sup>GapL</sup></a> (see <a href="#update1">Update 1</a>)</p>
<p><b>Interactive Proofs</b></p>
<p><a href="https://en.wikipedia.org/wiki/PCP_theorem#History" target="_blank">NP=PCP(O(log n), 3)</a></p>
<p><a href="https://en.wikipedia.org/wiki/IP_(complexity)" target="_blank">IP=PSPACE</a></p>
<p><a href="https://doi.org/10.1016/0022-0000(88)90028-1" target="_blank">AM = MAM = AMAM =</a> ... [From Albert Atserias @atserias]</p>
<p><a href="https://doi.org/10.1007/BF01200056" target="_blank">MIP=NEXP=PCP(poly,poly)</a></p>
<p><b>Alternating Turing machines</b></p>
<p><a href="https://doi.org/10.1016/0304-3975(76)90061-X" target="_blank">Σ<sub>k</sub> P = ∃ Π<sub>k-1</sub> P = NP<sup>Σ<sub>k-1</sub> P</sup> = Σ<sub>k</sub>TIME(poly(n))</a> [from Lance]</p>
<p><a href="https://dl.acm.org/doi/10.1145/322234.322243">AP=PSPACE</a></p>
<p><a href="https://dl.acm.org/doi/10.1145/322234.322243">APSPACE=EXP</a></p>
<p><b>Circuit classes within P</b></p>
<p><a href="https://doi.org/10.1016%2F0022-0000%2889%2990037-8" target="_blank">NC<sup>1</sup>=5-PBP</a></p>
<p><a href="https://doi.org/10.1145/48014.63138" target="_blank">ACC<sup>0</sup>=branching programs over solvable monoids</a></p>
<p><a href="https://doi.org/10.1145/321623.321625" target="_blank">P=AuxPDA</a></p>
<p><a href="https://doi.org/10.1016/0022-0000(91)90020-6" target="_blank">SAC<sup>1</sup> = LogCFL</a> [from Michael Levet @Michael_Levet]</p>
<p>REG = DSPACE(O(1)) = NSPACE(O(1)) [from Levet]</p>
<p>REG=DTIME<sub>1tape</sub>(o(n log n))</p>
<p><a href="https://doi.org/10.1137/0213027">AC<sup>k</sup> = log<sup>k</sup> time on a CRCW PRAM</a></p>
<p><b>Quantum</b></p>
<p><a href="https://doi.org/10.1145/2049697.2049704" target="_blank">QIP=PSPACE</a></p>
<p><a href="https://cacm.acm.org/magazines/2021/11/256404-mip-re/fulltext" target="_blank">MIP*=CE</a></p>
<p><a href="https://doi.org/10.1007/s00037-005-0194-x" target="_blank">QMA<sub>log</sub>(1)=BQP</a> [from Martin Schwarz @martin_schwarz]</p>
<p><a href="https://arxiv.org/abs/1108.0617" target="_blank">QMA<sub>log</sub>(poly)=QCMA</a> [ditto]</p>
<p><a href="https://doi.org/10.1007/s00037-016-0140-0" target="_blank">QAC<sup>0</sup><sub>f</sub> = QNC<sup>0</sup><sub>f</sub></a> [from Ale `Scinawa' Luongo @scinawa]</p>
<p><a href="https://doi.org/10.1145/2432622.2432625" target="_blank">QMA(k) = QMA(2)</a> for any k ≥ 2 [from Sarvagya Upadhyay @sarvagya82]</p>
<p><b>Algebraic complexity</b></p>
<p><a href="http://doi.org/10.1137/0212043" target="_blank">VP=VNC<sup>2</sup></a></p>
<p><a href="https://doi.org/10.1016/j.jco.2006.09.006" target="_blank">VDET=VABP=VP<sub>s</sub>=VP<sub>ws</sub></a></p>
<p><a href="https://doi.org/10.1137/0221006" target="_blank">VP<sub>e</sub>=VABP<sub>3</sub></a></p>
<p><a href="https://doi.org/10.1145/3209663" target="_blank">border-VP<sub>e</sub>=border-VABP<sub>2</sub></a></p>
<p>For bilinear maps, tensor rank = Θ(algebraic circuit size)</p>
<p><b>Logical characterizations</b></p>
<p><a href="https://doi.org/10.1007/978-1-4612-0539-5" target="_blank">FO=AC<sup>0</sup></a></p>
<p><a href="https://doi.org/10.1007/978-1-4612-0539-5" target="_blank">AC=NC=FO[poly(log)]</a></p>
<p><a href="https://en.wikipedia.org/wiki/Fagin%27s_theorem" target="_blank">∃SO=NP</a></p>
<p><a href="https://en.m.wikipedia.org/wiki/Fixed-point_logic#Least_fixed-point_logic" target="_blank">P = FO + LFP on ordered structures</a> [thanks to Lance, and Michael Levet]</p>
<p><b>Kolmogorov-random strings as oracles</b></p>
<p><a href="https://doi.org/10.1137/050628994" target="_blank">EXP=NP<sup>R<sub>Kt</sub></sup></a></p>
<p><a href="https://doi.org/10.1137/050628994" target="_blank">PSPACE=ZPP<sup>R<sub>KS</sub></sup></a></p>
<p><a href="https://doi.org/10.1016/j.apal.2005.06.003" target="_blank">P=COMP∩{dtt reducible to R<sub>K</sub>U}</a></p>
<p><b>Almost-classes</b></p>
<p><a href="http://dx.doi.org/10.1137/0210008" target="_blank">AlmostP=BPP</a></p>
<p><a href="https://doi.org/10.1016/S0022-0000(05)80043-1" target="_blank">AlmostNP=AM</a></p>
<p><a href="https://doi.org/10.1007/s000370050012" target="_blank">AlmostPSPACE=BP<sup>exp</sup>.PSPACE</a></p>
<hr />
Updates
<ol>
<li id="update1">March 17th update, h/t Eric Allender: Using Cook's original definition of DET as problems NC<sup>1</sup>-reducible to the integer determinant, apparently this equality is not known! See Eric's recent <a href="https://dl.acm.org/doi/10.1145/3586165.3586175">guest column in SIGACT News</a> for details and a $1000-prize related to this question, along with many other interesting open questions.</li>
</ol>https://blog.computationalcomplexity.org/2023/03/identities-in-computational-complexity.htmlnoreply@blogger.com (Lance Fortnow)6tag:blogger.com,1999:blog-3722233.post-6482721495747819843Mon, 13 Mar 2023 15:39:00 +00002023-03-25T10:01:56.257-05:00Problems we assume are hard. Are they? ALSO- revised version of Demaine-Gasarch-Hajiaghayi is posted!<p> (The latest version of </p><p>Computational Intractability: A Guide to Algorithmic Lower Bounds</p><p>by Demaine-Gasarch-Hajiaghayi is posted <a href="https://hardness.mit.edu/">here</a>. Its new and improved: (1) we have made all or most of the corrections send to us by proofreaders, (2) there is a chapter on quantum computing, (3) there is an index. Feel free to read it and send us corrections! </p><p>This post is related to it in that most of the problems-assumed-hard mentioned in this post were the basis for chapters of the book.)</p><p><br /></p><p>We think SAT is hard because (1) its NPC, and (2) many years of effort have failed to get it into P. </p><p>Imagine a world where we didn't have the Cook-Levin Theorem. We may still think SAT is hard. We may even take it as a hardness assumption to prove other things hard by showing SAT \le A. We may also be curious if they are equivalent and have lots of reductions A \le SAT. The reductions might be Karp or Cook. </p><p>You do not have to imagine this world! We already have it- in different contexts. In other areas of complexity theory there are problems that are assumed hard, but for which there is no analog of Cook-Levin. Are they hard? They seem to be- but the evidence is empirical. Not that there's anything wrong with that. </p><p>I will list out problems that </p><p>a) we assume are hard, for some definition of <i>we </i>and <i>hard.</i></p><p>b) we have NO proof of that and NO completeness or hardness result. 
ADDED LATER- a commenter wanted me to clarify this.</p><p> For SAT there is a well defined set of problems, NP, defined independently of any particular problem (that is, NP was NOT defined as all sets Karp-reducible to TSP or anything of the sort) and by Cook-Levin we have </p><p>if SAT is in P then NP is contained in P.</p><p>For FPT (the class the commenter was interested in) there IS the result</p><p>if weighted k-SAT is in FPT then W[1] is contained in FPT.</p><p>but W[1] is DEFINED as the set of problems FPT-reducible to Weft-1 circuits (some books use a different basic problem) which IN MY OPINION is not a natural class. One may disagree with this of course. </p><p><br /></p><p>COUNTERARGUMENT: but W[1] WAS defined as all problems FPT-reducible to Weft-1 circuits, and this is not the same. </p><p>(I do not include hardness assumptions for crypto because that's not quite the same thing. Also there are just so many of them!) </p><p>I am sure I missed some- which is where YOU come in! Please leave comments with additional problems that I forgot (or perhaps did not know) to include. </p><p>1) 1vs2 cycle: Given a graph that you are guaranteed is 1 cycle or the union of 2 cycles, determine which is the case. This is assumed to be hard to parallelize (we omit the details of defining that formally). This has been used for lower bounds in parallelism. See <a href="https://doi.org/10.1109/FOCS.2019.00097">here</a>.</p><p>2) 3SUM: Given an array of integers x1,...,xn, are there 3 that add to 0? There is an O(n^2) algorithm. The hardness assumption is that, for all epsilon, there is no O(n^{2-epsilon}) algorithm. This assumption has been used to get lower bounds in computational geometry. See the Wikipedia entry <a href="https://en.wikipedia.org/wiki/3SUM">here</a> or the introduction of this paper <a href="https://arxiv.org/pdf/2203.08356.pdf">here</a>. 
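The O(n^2) algorithm for 3SUM mentioned above can be sketched as the standard sort-then-two-pointers approach (a minimal illustration, not from the post; the function name is mine):

```python
# Quadratic-time 3SUM: sort once, then for each index i scan the remaining
# suffix with two pointers, looking for a pair summing to -a[i].
def three_sum_exists(nums):
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        target = -a[i]
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[lo] + a[hi]
            if s == target:
                return True
            elif s < target:
                lo += 1   # sum too small: move the low pointer up
            else:
                hi -= 1   # sum too big: move the high pointer down
    return False
```

For example, three_sum_exists([3, -1, -2, 7]) returns True since 3 + (-1) + (-2) = 0, while three_sum_exists([1, 2, 4, 8]) returns False. The 3SUM conjecture says the inner scan cannot be beaten by more than an n^{o(1)} factor.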
(ADDED LATER, a 2-pager on some algorithms for 3SUM including some randomized ones: <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/3sum.pdf">here</a>. A commenter asked about randomized algorithms. Perhaps the conjecture should be that no randomized algorithm is subquadratic.) </p><p>4) APSP (All Pairs Shortest Path): Given a graph G, find for each pair of vertices the length of the shortest path between them. There is an O(n^3) algorithm. The hardness assumption is that, for all epsilon, there is no O(n^{3-epsilon}) algorithm. This assumption has been used to get lower bounds on graph problems. For more details see the introduction of this paper: <a href="https://arxiv.org/pdf/2203.08356.pdf">here</a>.</p><p>5) Weighted-SAT-k: Given a Boolean formula (it can be taken to be in 2CNF form), is there a satisfying assignment that has exactly k of the variables set to TRUE? This is assumed to not be fixed parameter tractable (that is, there is no function f such that this problem can be solved in O(f(k)n^{O(1)}) time). Problems that are FPT-equivalent to it are called W[1]-complete and are all thought to not be in FPT. W[2], ..., W[t] are also defined but we omit this. W[1]-completeness has also been defined in other ways, but I can't seem to find a Cook-Levin type theorem for them. </p><p>6) Graph Isomorphism: One of the few problems that is in NP but thought to not be NP-complete and, at least for now, is not known to be in P. Babai has shown it's in quasi-polynomial time (n^{(log n)^{O(1)}}). There is a notion of GI-hard: problems such that, if they are in P, then GI is in P. See the Wikipedia entry <a href="https://en.wikipedia.org/wiki/Graph_isomorphism_problem">here</a>. Most of the GI-hard problems are variants of GI, e.g., graph isomorphism for directed graphs. GI could be in P without unbelievable consequences for complexity and without terrifying consequences for cryptography. </p><p>7) Unique Games Conjecture: I won't define it formally here; see the Wikipedia entry <a href="https://en.wikipedia.org/wiki/Unique_games_conjecture">here</a>. 
From UGC it follows that several approximation results are optimal. Does that argue for UGC being true-- having consequences we believe? I would say YES, since in some cases the algorithm that gives a good approximation achieves an alpha-approximation for some weird number alpha, and assuming UGC you get the SAME alpha as a lower bound. </p><p>8) Existential Theory of the Reals: Easier for you to read the Wikipedia entry <a href="https://en.wikipedia.org/wiki/Existential_theory_of_the_reals">here</a>. It is used for problems that are in between NP and PSPACE. (ADDED LATER: One of the commenters says that Blum-Shub-Smale showed ETR is complete for the real version of NP, so this item should not be on my list.) </p><p>9) Orthogonal Vectors Conjecture: See this paper: <a href="https://www.mit.edu/~lijieche/SODA_2019_OV-Equiv-Fullversion.pdf">here</a>. This is used to show problems are not in subquadratic time. </p><p>Possible research directions and thoughts:</p><p>a) Try to prove a Cook-Levin type theorem for one of these problems.</p><p>b) Build classes analogous to the polynomial hierarchy on one of these problems.</p><p>c) Ask bounded-query questions. For example: are k queries to 3SUM more powerful than k-1? (This is a VERY Gasarchian question.) </p><p>d) Try to prove that one of these problems is actually hard. That seems hard. Perhaps on a weaker model (that's probably already been done for at least one of them.)</p>https://blog.computationalcomplexity.org/2023/03/problems-we-assume-are-hard-are-they.htmlnoreply@blogger.com (gasarch)6tag:blogger.com,1999:blog-3722233.post-2234125126384970909Mon, 06 Mar 2023 14:08:00 +00002023-03-06T08:08:06.608-06:00Peer ReviewI ran into a partner of a computer scientist at a social event who asked me "Is the publication system in CS screwed up or really screwed up?" If you don't know my response you haven't been reading this blog long.<div><br /></div><div>Today let's talk about peer review. 
<a href="https://sigcrap.org/2023/01/on-peer-review-in-academic-publications/">Kevin McCurley</a> and <a href="https://experimentalhistory.substack.com/p/the-rise-and-fall-of-peer-review">Adam</a> <a href="https://experimentalhistory.substack.com/p/the-dance-of-the-naked-emperors">Mastroianni</a> have recent, not so positive, takes on this topic. </div><div><br /></div><div>Peer review came out of a system where we had limited slots in journals and, in computer science, conferences, and we had to make tough decisions. Journals and conferences would gain a reputation based somewhat on how difficult it was to get papers published there.</div><div><br /></div><div>Now we have basically unlimited space to publish our results. And you definitely should do so, posting your papers on your own webpage and on a paper archive site like <a href="https://arxiv.org/">arXiv</a> or <a href="https://eccc.weizmann.ac.il">ECCC</a>. The research community would flourish in a world where everyone posts their papers online for public comment, people can promote their favorite papers on social media and we have a TikTok-style system for recommending papers to you.</div><div><br /></div><div>So why do we still need peer review? Mostly because we have to review researchers for jobs and grants, and with <a href="https://www.chronicle.com/article/is-it-time-to-eliminate-recommendation-letters-hint-yes">the questioning of the value of recommendation letters</a>, publication quality and quantity have become a stronger proxy for measuring people, for better or for worse.</div><div><br /></div><div>Peer review is a cornerstone of science. Would you rather have papers reviewed by faceless bureaucrats who know little about the subject area? Or papers ranked only by manipulable statistics like citations? </div><div><br /></div><div>But the way we apply peer review, to decide acceptances in conferences, just adds too much randomness to the system. 
CS conferences have multiplied and continue to get <a href="https://analyticsindiamag.com/icml-is-turning-down-papers-by-the-dozen-and-researchers-are-pissed">increased submissions</a> as the field grows. It's just impossible to maintain any sort of uniformity in the quality of acceptances. Too often we find conference committees and funding panels playing it safe rather than taking risks with new research. With so much randomness, it's best to submit many papers instead of focusing on a stronger result, leading to too much <a href="https://doi.org/10.1073/pnas.2021636118">incremental research</a>, especially in academia. </div><div><br /></div><div>For hiring, promotion, tenure and funding decisions, we too often rely on shortcuts, such as the number of papers accepted to major conferences. Those who don't win the conference lottery get disillusioned and often leave academia for industry, and no one wins.</div>https://blog.computationalcomplexity.org/2023/03/peer-review.htmlnoreply@blogger.com (Lance Fortnow)3tag:blogger.com,1999:blog-3722233.post-3317586011216436608Thu, 02 Mar 2023 13:19:00 +00002023-03-02T07:29:47.174-06:00Goodbye Dilbert<p>Scott Adams, creator of Dilbert, had a racist rant in a video he posted last week. As a result <a href="https://www.wsj.com/articles/newspapers-drop-dilbert-after-cartoonist-calls-black-americans-hate-group-21348ce1?st=mcxbs51a70by7i3&reflink=desktopwebshare_permalink">most newspapers that carried the comic strip are dropping Dilbert</a>, including our local Chicago Tribune. I fully support these moves. Much as I believe in separating the art from the artist, it's different when the artist is living and profiting from their art.</p><p>So we need to say goodbye to Dilbert, marking the end of an era. 
Dilbert started in 1989 as a strip that captured the absurdities of the workplace in an anonymous tech company, predating movies like <a href="https://www.imdb.com/title/tt0151804/">Office Space</a> and shows like <a href="https://www.imdb.com/title/tt1235547/">Better Off Ted</a> and <a href="https://www.imdb.com/title/tt2575988/">Silicon Valley</a>. I used Dilbert strips (with permission) in my book, namely <a href="https://dilbert.com/strip/2001-10-25">this strip</a> to introduce Kolmogorov complexity and <a href="https://dilbert.com/strip/1997-12-22">this strip</a> to describe my research area. Just call me Dan.</p><p>Farewell to Dilbert, Dogbert, Wally, Alice, Asok, the pointy-haired boss and the rest. I won't miss Scott Adams, but I will miss his creations.</p>https://blog.computationalcomplexity.org/2023/03/goodbye-dilbert.htmlnoreply@blogger.com (Lance Fortnow)9tag:blogger.com,1999:blog-3722233.post-4886994334648303687Mon, 27 Feb 2023 15:10:00 +00002023-02-27T09:10:59.133-06:00I wish we had less students in a Class. Demographics says I may get my wish.<p> According to <a href="https://www.vox.com/the-highlight/23428166/college-enrollment-population-education-crash">this</a> article, in the near future FEWER people will be going to college. There is even a name for this upcoming shift: <i>The Enrollment Cliff. </i>Why?</p><p>Is it Covid-related? Is it that college has gotten too expensive? Too liberal? Too much cancel culture? Too many dead white males in the core? The core is too multicultural? Online learning is stealing our students? </p><p>No. The reason is actually very boring and does not serve anyone's political agenda (that's not quite right), or any agenda. And you can probably guess the cause from the title of this blog post.</p><p>For some years up until 2007 the birth rate was slowly dropping. Then there was a large drop in the birth rate after the recession of 2007, and the birth rate has never really recovered. 
And the recession might not have that much to do with it-- the long term move from an agricultural society (where kids are an economic gain) to an industrial one (where, after child labor laws and the expense of college, kids are an economic loss, though that can be debated) has resulted in a very long term decline in births. </p><p>And from personal experience, I know (a) very few people who have 4 or more kids, and (b) there is NO stigma about having 0 kids as there once was. Of course the sample of people I know may be skewed. </p><p>ANYWAY, what will this mean for colleges? </p><p>a) Harvard, Yale, etc. will not be affected. Plenty of people will still apply. Note that they draw from all of America and also internationally. </p><p>b) Colleges that draw from a local area may be affected a lot since they depend on locals, and that population may be shrinking. </p><p>c) Schools in between Harvard and small colleges: hard to say. </p><p>d) The sports betting companies paying schools to let them promote on campus (and in some cases helping them promote it) may find far fewer students to sucker into this loser's game. See my blog post on this topic <a href="https://blog.computationalcomplexity.org/2023/02/it-is-more-important-than-ever-to-teach.html">here</a>.</p><p>Univ of MD has around 4000 Computer Science majors (depending on who tells you this, it's either a brag or a complaint). In the Spring of 2023 there are three lectures of Discrete Math of sizes 240, 270, and 90. Each of those also has recitations of 30 (or so) students each. If the decline is gradual (either from demographics, or from the CS majors bubble finally bursting, or from the other reasons above) then I am sure we can handle it. If it declines very suddenly we may have a problem adjusting. </p><p>One caveat to this that I've heard is that immigration will save us. Maybe. But America is politically going in the opposite direction. 
The counterargument of <i>without immigration there will be fewer students going to college</i> is not that compelling to most Americans. There are other more intelligent and compelling pro-immigration arguments. However, American politics is no longer interested in compelling and logical arguments. (The notion that it once was may be nostalgia for a time that never was.) </p>https://blog.computationalcomplexity.org/2023/02/i-wish-we-had-less-students-in-class.htmlnoreply@blogger.com (gasarch)6