Computational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch
Wednesday, October 29, 2025
AI and the Power of Nonuniform Circuits
Sunday, October 26, 2025
Bill's Bad Advice
I sometimes give the following advice for research which I label Bill's Bad Advice. We will later see who it might be good advice for. Spoiler alert: the number of people for whom it is good advice is shrinking but might include Lance especially now (see his post about stepping down from admin, here).
When you come across a possible research topic or problem, or have some idea, and are wondering if you want to pursue it, here is my bad advice:
1) DON"T CARE if anyone else cares. If YOU care then that is enough to at least get started.
2) DON"T CARE if it has the potential for a published paper. FIRST do the work then, if you feel like it, look for a good venue. You might not bother if posting to arxiv or making an open problems column out of it or a (guest) blog post out of it is a good endpoint. (I run the SIGACT News Open Problems Column- feel free to contact me if you want to submit one.)
3) DON"T CARE if it has practical implications.
4) DON"T CARE if you can get a grant for it. With the current state of the NSF this advice may soon become irrelevant.
5) DON"T CARE if someone else already did it (though at a later stage you should check on this). Even if you work on it and find someone else did it, you will have LEARNED about the problem through your efforts. You might then want to do a survey for your own benefit to consolidate your knowledge.
Why should you NOT CARE about any of these things? Because they get in the way of actually DOING something.
Here are two examples of when this approach WORKED and one where it DID NOT WORK, though both classifications might depend on your definition of WORKED.
WORKED: My work on Muffins. All I wanted was to get some High School and Undergraduate Projects out of it. I ended up with a book on it which made me twenty dollars last year! More to the point, I learned a lot, as did my co-authors and I am happy with the book. The co-authors were Undergraduates so my dept put me up for a mentoring award (I have other credentials as well). I did not win, but I got a nice letter saying they had many qualified applicants. OH- it didn't say if I was one of them.
WORKED: I have had many Ramsey Projects where a High School Student codes stuff up and learns some Ramsey, some coding, and gets the experience of research. Sometimes they do a survey paper or open problems column. We both learn A LOT from this and the student gets a good letter from me. Do they do something NEW? Publishable? No, though some surveys and open problems columns have come out of this. I DO tell them ahead of time that the work is unlikely to lead to original results (and hence unlikely to be good for a science competition).
DID NOT WORK: See this blog post here about the math and here about finding out that the problem we were working on was already known and more was known than we thought. I didn't mind this, but one of the authors did.
Note that WORKED and DID NOT WORK also depend on your goals.
For whom is this bad advice? Good advice?
1) It was always bad advice for young assistant professors who need to get papers and grants to get tenure.
2) Hypothetically, once you get tenure you have job security and hence can change fields or follow my bad advice without consequences. But with grants and salary and teaching load issues, this is less the case. Perhaps I am nostalgic for a time that never was.
3) High School Students were my main audience for bad advice. It's not as important for them to get papers out as for (say) assistant professors. But even this is changing. Colleges are getting more competitive. And HS students may well want a project that can lead to Science competitions. I am not going to say things were better when I was a kid but instead pose non-rhetorical questions:
a) Are high school students getting into research earlier than they used to? I am sure the answer is yes.
b) Are we losing the safe space where a high school student can just learn things and do things and not worry so much about if it's publishable? Yes, but I am not sure how widespread that is.
c) If we are losing that safe space, is that a bad thing?
d) Points a, b, c apply even more to undergraduates who want to go to graduate school than to high school students who want to go to college.
4) Full Professors may have more freedom to follow my bad advice. Lance is looking for things to do now that he is no longer a dean, and indeed, is back to being a teaching-and-research professor. So he might follow my advice. However, he actually cares if people care about his work. He does not have to follow all of my advice, but he can follow some of it.
Thursday, October 23, 2025
AI and Intro Theory
This fall, for the first time at Illinois Tech, I'm teaching Introduction to Theory of Computation. While I've taught a variation of this course a couple dozen times, I last taught this class in Spring 2016 at Georgia Tech. Intro Theory is a course whose main theorems don't change, but it feels different in the AI era.
Here are the big and little things for me.
NP as a Hard Problem?
Garey and Johnson's classic 1979 book on NP-completeness had this famous cartoon.
Monday, October 20, 2025
Sept 16, 2025 was Pythagorean Day
     
Several people emailed me that September 16, 2025---written as 9-16-25 in the US---satisfies 9 + 16 = 25; the three numbers are the squares of the side lengths of the 3-4-5 right triangle.
9-16-25 is the only such triple that is also a valid date. This kind of mathematical alignment only happens once every 100 years.  The next occurrence will be  September 16, 2125.
Since this is such a rare event, let's explore some more math-themed dates.
1) Pythagorean Triples That Work as Future Dates
Note that 9-16-25 is not a Pythagorean triple; however, 3-4-5 is.
Here are some future  dates that are both Pythagorean triples and valid calendar dates:
March  4, 2105 is 3-4-5
May 12, 2113 is 5-12-13
June 8, 2110 is 6-8-10
July 24, 2125 is 7-24-25 (Darn---July 24, 2025 was recent and I missed it!)
August 15, 2117 is 8-15-17
I think that's it.  Recall that we need the month to be in \(\{1,\ldots,12\}\) and the day to be in \(\{1,\ldots,31\}\) with some exceptions:
Thirty days has September, April, June, and November
All the rest have thirty-one,
Excepting February, fun!
And that has twenty-eight days clear
And twenty-nine in a Leap Year
There are 24 versions of this poem at a website which is  here.
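If you want to double-check my list (or catch a date I missed), here is a minimal sketch in Python. The two-digit-year convention and the range of years searched are my choices, not part of the original question.

    from datetime import date

    # Sketch: find m-d-yy (US style, two-digit year) with m^2 + d^2 = yy^2
    # that are valid calendar dates. The year range searched is my choice.
    def pythagorean_dates(first_year=2000, last_year=2199):
        hits = []
        for year in range(first_year, last_year + 1):
            yy = year % 100
            for m in range(1, 13):
                for d in range(1, 32):
                    if m * m + d * d == yy * yy:
                        try:
                            date(year, m, d)  # validates the day, including leap years
                        except ValueError:
                            continue
                        hits.append(f"{m}-{d}-{yy:02d} is {date(year, m, d):%B %d, %Y}")
        return hits

    if __name__ == "__main__":
        print("\n".join(pythagorean_dates()))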
2) Why Didn't Anyone Email Me About Earlier Dates?
I wonder why nobody emailed me on, say, March 4, 2005 (3-4-5).  That's a Pythagorean triple, but maybe it just looked like three consecutive numbers. Oh well.
And what about May 12, 2013 (5-12-13)? That's a really cool Pythagorean triple. Oh well.
3) Other Math-Related Dates Using Month, Day, and Year.
So dates like Pi Day don't count---we want the full date to be interesting mathematically. Side note---I looked up how Pi Day is referred to and it's Pi Day, not \(\pi\) day. Probably because not all typesetting systems can easily produce \(\pi\).
Every year divisible by 4 is a leap year, except for century years not divisible by 400 (so 2000 was a leap year but 2100 will not be).
That is the end of my post. A bonus for my readers: a mini meta post. Long time reader and fellow SUNY Stony Brook math major David Marcus (he was class of 1979, I was class of 1980) has been proofreading some of my posts before I post them. Lance suggested I use Chatty instead. Rather than replace David with Chatty, I used Chatty and then David. David still found mistakes. I give an example here, pointing to all three versions.
Wednesday, October 15, 2025
Fall Jobs Post 2025
In two years we have gone from nearly all of our computer science graduates getting jobs in the field to many of them struggling to get internships and jobs at the top companies, if at all. If the past is any guide, a weak tech job market leads to fewer majors, which leads to fewer slots for computer science faculty. We'll start to see these trends this year, and they will accelerate quickly if the tech job market doesn't recover.
In fact the tech job market for computer science graduates has become much more challenging, ironically as the tech companies have seen huge market growth with advances in artificial intelligence. Academia moves slowly; some departments are still catching up on hiring, but that will soon slow. Many computer science departments are adding AI majors and specializations, and computer science enrollments over the next few years will be hard to predict.
Computer science is not protected from larger forces. Many universities are facing long-term fiscal challenges, and cuts in research and student funding from the US government aren't helping, leading to hiring freezes at many schools. The demographic cliff, a drop in the US birth rate around the 2008 fiscal crisis, will start hitting colleges in earnest next year.
The Trump administration has made it more difficult for foreigners to get student visas, leading to a projected drop in incoming international students of between 30% and 40%. Recent changes to H-1Bs, namely the $100K fee an employer would need to pay, may discourage more students who study in the US with the hopes of building a career and a life here. International students, many of whom have gone into computer science, are a major source of tuition for many universities.
Universities would also need to pay the $100K to hire foreigners as faculty and postdocs. Startup packages for new computer science faculty typically run several hundred thousand dollars so this is not out of the question, but schools might be reluctant to pay the $100K in tight fiscal times. These rules may evolve via litigation or administrative reinterpretation.
Even in a turbulent market, visibility and adaptability matter most. Universities outside the US are trying to take advantage by going after students and faculty. More than ever you should consider positions around the world, especially, but not only, if you are not a US citizen or permanent resident. Other countries have their own challenges, so tread carefully.
On to specific advice for students on the academic job market.
CATCS maintains a Theoretical Computer Science Jobs posting site. More generally in computer science, there is the CRA Database of Job Candidates and the CRA and ACM job postings.
And in general make sure you stand out. Areas related to data, such as Artificial Intelligence, Data Science, and Cybersecurity, will draw the most interest. It's best if you can tie your research to those areas, or at least show that you are open to teaching in them. Make sure your teaching statement talks about how you will handle AI in the classroom.
Have a well-designed website with all your job materials and links to your papers. Make sure your Google Scholar, LinkedIn and GitHub (if relevant) sites are accurate and up to date. Make a short video describing your research to a general computer science crowd. Take advantage of all the links above. Network at FOCS or SODA if you can get there. Reach out directly to professors you know at schools you are interested in. And start now.
Sunday, October 12, 2025
The Most Common Name in the World is Not Charles Lin. But It Seems That Way To Me.
In 2001 I supervised Charles Lin's Master's Thesis, which was on Private Information Retrieval.
In 2025 I supervised Charles Lin's Master's Thesis, which was on Ramsey Theory.
These are different people. To celebrate the second one's thesis, I took them both out to lunch together.
A Picture of Charles, Charles, and Bill is here.
I asked ChatGPT about common names.
1) ChatGPT tells me (and I believe them) that the number of people in America named Charles Lin is between 200 and 900. The answer pointed out that Charles is common and Lin is common but the combination is not common.
2) ChatGPT tells me (and I believe them) that the number of people named John Smith is between 25,000 and 40,000. I've heard it's a common name, but I do not know anyone named that. How is that possible?
3) I predict that people with last name Smith will be reluctant to name their child John because it is so common. Hence this name will decline. However, I made that same prediction 40 years ago. Maybe it did decline TO 40,000.
4) The most common boys' names in 2025 (so far) in the US are, in order:
Liam, Noah, Oliver, Theodore, James, Henry, Mateo, Elijah, Lucas, William
According to this website here, the five most common boys' names in 1960 (the year I was born), in order, are
David, Michael, John, James, Robert.
James is the only name on both lists.
I don't have the next five, though I suspect Henry and William would be on it. I also suspect that Liam and Mateo would not be in the top 100.
5) The most common girls' names in 2025 (so far) in the US are, in order:
Olivia, Emma, Amelia, Charlotte, Mia, Sophia, Isabella, Evelyn, Ava, Sofia
(Should they have Sophia and Sofia both on the list? I would say yes but it is odd.)
According to that same website, the most common girls' names in 1960 (the year I was born), in order, are
Mary, Susan, Linda, Karen, Donna.
None of the 1960 names are on the 2025 list. I suspect that would still be true for the next 10.
6) When I was in grade school, if the teacher said Quiet down, Bill and Lisa, the class went quiet.
7) The most common first name in the world is Mohammad. The most common last name in the world is Wang. Hence the most common name in the world has to be Mohammad Wang. I've asked students to find the flaw in that logic and all 8 Mohammad Wangs in my class found the flaw. Kudos to them! (Darling says I should always tell you when I am kidding. I am kidding. Only 2 of the 8 Mohammad Wangs in my class found the flaw.)
8) (ADDED LATER) One of my readers emailed me the following two items
a) From The Big Bang Theory
Howard: It gets better. Someone has to go up with the telescope as a payload specialist, and guess who that someone is.
Sheldon: Mohammed Lee.
Howard: Who’s Mohammed Lee?
Sheldon: Mohammed is the most common first name in the world, Lee, the most common surname. As I didn’t know the answer, I thought that gave me a mathematical edge.
b) As an interesting aside, I noticed that this semester I have three students named "Luke" and three named "Ben," which made me realize that kids who grew up with the original Star Wars movies would have kids in college around now (but none of my current students are named "Leia").
9) (ADDED LATER) I HEARD that the name `Barack' was popular when he was president.
I looked it up:
5 babies got that name in 2007
52 babies got that name in 2008
69 babies got that name in 2009
It has declined since then.
BAD SCIENCE TIME: The number of babies named Barack increased 10-fold from 2007 to 2008.
TRUE but not significant.
10) I hope to supervise at least one more Charles Lin before I retire.
Wednesday, October 08, 2025
Big Bots Don't Cry
Sunday, October 05, 2025
If you use AI in your work do you brag about it or hide it?
You used AI in your work. Do you hide it or brag about it?
1) In 2002 there was a movie Simone about an actress who is really an AI. The Wikipedia entry tells the entire plot. I saved time by reading it in two minutes rather than watching the movie for two hours. I don't think I missed anything.
Key Plot Point: The creator of Simone hides the fact that Simone is an AI. Why hide it? Think of the publicity that would be generated by bragging about it!
The following comment would have been clever in 2002 but is so obvious now that it's banal:
Hiding that it's an AI actress is more unrealistic than having an AI actress.
Fast Forward to 2025 and into reality: There IS an AI actress named Tilly Norwood (do AIs need last names? Apparently yes). She has not appeared in any movies yet, but she will be on Instagram and other media soon. Are the creators trying to hide that she is not human? Quite the contrary---they are bragging about it. The Screen Actors Guild is complaining about this, see here. The complaint is that she can't be a real actress since she has no real life experiences to draw upon. If they are saying she won't be a good actress, that's for the studios, the audience, and the critics to decide. If a new technology is threatening your livelihood, then arguing that it's not very good is a terrible argument: the claim may well turn out to be false, and it isn't really what you are concerned about anyway.
2) Recently Scott used GPT-5 Thinking to help him with a proof. Did he hide this fact? Quite the contrary, he pointed it out as an interesting application of AI. See his blog post here.
3) There was a Chemistry Nobel prize, and a Physics Nobel prize, for work done where AI played a role. Did they try to hide that AI was used? Quite the contrary. See Lance's post on this, here.
4) Do you know of any case where AI or a computer was used and the authors wanted to hide that fact? Aside from chess players using a computer, and students using ChatGPT, I cannot think of one. Not for any serious research. (My spellcheck thinks ChatGPT is not a word. This time spellcheck is wrong.)
5) The opposite has happened: the Mechanical Turk, a chess-playing "machine" that secretly hid a human player inside. See here.
6) I recently saw the movie I, Robot from 2004. The main human character is against robots and says Can a robot write a symphony? He means this to be a rhetorical question whose answer is no. I counter: can a computer write a movie as dumb as I, Robot? Simone?
Wednesday, October 01, 2025
Computers Don't Want
Sunday, September 28, 2025
Clyde Kruskal talks about his Father Martin on Martin's 100th birthday
Martin Kruskal was born Sept 28, 1925 and passed away on Dec 26, 2006, at the age of 81 (we did two posts for his memorial, here and here). Today, Sept 28, 2025, is his 100th birthday. His son Clyde Kruskal wrote today's blog post as a tribute to his father.
We note that Clyde did a blog post for his Uncle Bill's 100th birthday, here, and plans on doing one for his Uncle Joe in two years.
-----------------------------------------------------------------------------------------
My father, Martin Kruskal, was a mathematician and physicist, best known for his early work in plasma physics, the discovery (independently of George Szekeres) of Kruskal-Szekeres coordinates, the modern discovery with Norman Zabusky of solitons, and the discovery with Clifford Gardner, John Greene, and Robert Miura of the inverse scattering transform.  He had an incredible willingness to tackle interesting, seemingly small problems---some of which later turned out to be highly influential.  Here, in chronological order, is a list of some of his more esoteric and/or less well-known contributions, not necessarily research contributions (besides me).
1. Random Mappings
Renowned mathematicians Nicholas Metropolis and Stanislaw Ulam (inventors of the Monte Carlo method) posed the problem of determining the expected number of components in a random mapping from \(\{1,\ldots,N\}\) to itself. Framing this as a random graph problem: Construct a graph on \(N\) vertices by choosing, for each vertex \(i\) (from \(1\) to \(N\)), a random vertex \(j\) (also from \(1\) to \(N\)), and create the undirected edge \((i,j)\). If \(i=j\), no edge is created (or equivalently, a self-loop is created). The question is: What is the expected number of connected components in such a graph? In 1954, in his first paper after graduate school, he showed that this value is asymptotically \((1/2)(\ln(2N) + C) + o(1)\), where \(C=0.5772\ldots\) is Euler's constant. For the paper see here.
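For readers who want to see the formula in action, here is a minimal simulation sketch (mine, not from the paper); the values of \(N\) and the number of trials are arbitrary choices.

    import math
    import random

    EULER_C = 0.5772156649  # Euler's constant

    def components_of_random_mapping(n):
        # Union-find over the undirected graph with an edge (i, f(i)) for each i;
        # a self-loop (i = f(i)) adds no edge, matching the convention above.
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        comps = n
        for i in range(n):
            j = random.randrange(n)
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj
                comps -= 1
        return comps

    if __name__ == "__main__":
        for n in (100, 1000, 10000):
            trials = 200
            avg = sum(components_of_random_mapping(n) for _ in range(trials)) / trials
            predicted = 0.5 * (math.log(2 * n) + EULER_C)
            print(f"N={n}: simulated {avg:.2f}, asymptotic {predicted:.2f}")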
2. Computer Chess
My father was a strong chess player and, in 1956, arguably became the first person to play a game of chess against a computer.  When I learned about this in junior high school, I figured that this must be his most significant achievement.  Because the MANIAC 1 computer was slow and had little memory, the game was played on a 6×6 chessboard without bishops and simplified rules.  My father gave the computer queen odds. Spoiler alert: He won.  For more details, including the full game, see this explanation on Wikipedia here.
3. The Card Game Delphi
In 1956, Bob Abbott invented Eleusis, a card game involving inductive inference.  While the game itself was highly original, the scoring system was somewhat arbitrary.  In 1962, my father devised the game Delphi, which, according to Martin Gardner, made several important improvements, including a much more logical scoring system.  While it solved the scoring problem, Delphi turned out to be less fun to play than Eleusis.  Later, Bob Abbott created an improved version of Eleusis called New Eleusis, which, unfortunately, still did not completely solve the scoring problem.
For the rules to Delphi, see this article in Denexa Games here.
4. The Kruskal Count
My father invented a card trick, called the Kruskal Count, with the unique characteristic that the magician never touches the deck of cards during the performance.  To make this plausible, he would claim to the audience that he was psychic.  There are various YouTube videos demonstrating and discussing it.  Shortly after devising the trick, my father, brother, and I visited a young married couple.  My father performed the trick several times on the husband, who kept accusing his wife of secretly colluding with him.  She was insulted and kept denying it---truthfully. Finally my father said to the wife, Well, the cat's out of the bag.  You have been in on it all along, and you can admit it now. (She hadn't.) My brother and I (both teenagers) thought it was hilarious; I doubt the couple did.
The trick turned out to have serious applications: According to Wikipedia, the underlying phenomenon has applications in cryptography, code-breaking, software tamper protection, code self-synchronization, control-flow resynchronization, design of variable-length codes and variable-length instruction sets, web navigation, object alignment, and others.
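If you want to see how reliably the trick works without shuffling a real deck, here is a minimal simulation sketch. The counting convention (aces count 1, face cards count 5) and the ten possible starting positions are the usual ones for the trick, but they are assumptions here rather than details from my father's original write-up.

    import random

    def card_value(rank):
        # Convention assumed here: aces count 1, face cards count 5, others their face value.
        return 5 if rank >= 11 else rank

    def final_position(values, start):
        # Step forward by the value of the current card until a step would leave the deck.
        pos = start
        while pos + values[pos] < len(values):
            pos += values[pos]
        return pos

    def all_starts_agree():
        deck = [r for r in range(1, 14) for _ in range(4)]  # standard 52-card deck
        random.shuffle(deck)
        values = [card_value(r) for r in deck]
        finals = {final_position(values, s) for s in range(10)}  # ten possible starting cards
        return len(finals) == 1

    if __name__ == "__main__":
        trials = 10000
        hits = sum(all_starts_agree() for _ in range(trials))
        print(f"All ten starting points reached the same card in {100 * hits / trials:.1f}% of shuffles")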
The Kruskal Count was featured in an episode of NUMB3RS, which Gasarch blogged about here.
5. Surreal Numbers
My father had a lifelong fascination with infinity starting in childhood, which deepened when he read Donald Knuth's introductory book on surreal numbers.  For more about his interaction with Knuth after reading the book, see this blog entry here.  Wikipedia provides a concise explanation on my father's webpage about surreal numbers and his contributions.  The entry highlights important results---both positive and negative---by Ovidiu Costin, Philip Ehrlich, and the mathematical logician Harvey Friedman (see here and here for the papers).
Martin's website has the following passage, which captures the issue beautifully.
Surreal numbers, which are defined constructively, have all the basic properties and operations of the real numbers.  They include the real numbers alongside many types of infinities and infinitesimals. Kruskal contributed to the foundation of the theory, to defining surreal functions, and to analyzing their structure.  He discovered a remarkable link between surreal numbers, asymptotics, and exponential asymptotics. A major open question, raised by Conway, Kruskal, and Norton in the late 1970s, and investigated by Kruskal with great tenacity, is whether sufficiently well behaved surreal functions possess definite integrals.  This question was answered negatively in the full generality, for which Conway et al. had hoped, by Costin, Friedman, and Ehrlich in 2015.  However, the analysis of Costin et al. shows that definite integrals do exist for a sufficiently broad class of surreal functions for which Kruskal's vision of asymptotic analysis, broadly conceived, goes through.  At the time of his death, Kruskal was in the process of writing a book on surreal analysis with O. Costin.
6. Alpha-Beta Search
While in graduate school, I came across Donald Knuth and Ronald Moore's paper on alpha-beta search (see here).  They posed the question: what is the expected branching factor in a game tree with \(c\) children per node, depth \(d\), and uniformly distributed values at the leaves?  They gave a partial solution.  Gerard Baudet found a recurrence for this value and improved the bound (see here).  This was the kind of asymptotics that my father excelled at, so I asked him if he could solve the recurrence.  He solved it by finding an asymptotic value for the number of leaves evaluated.  Right after that, Judea Pearl published the asymptotic value for the branching factor (see here).
At the time I figured that the original question had been answered by Pearl, so the stronger result was not interesting.  A more mature version of me might have pursued it.  In reality, answering the original question is interesting, but for a number of reasons any further improvement is not particularly interesting, so I was probably right to drop it, even if for the wrong reasons.  Unfortunately the write-up was in a file drawer lost during the move to our new building, but if anyone cares, I think I remember the exact high-order term.
7.  Public Key Crypto
When he learned about public key cryptography, he invented his own system based on a quadratic mapping, just to understand how such systems work.  He gave a talk on it in 1983. He showed it to Ron Rivest, who said that it was not secure. Oh well.
Thursday, September 25, 2025
Self-Driving Cars
A few weeks ago I took an Uber to a regional airport and was picked up by a Tesla. The driver used FSD, so-called Full Self-Driving, never touching the steering wheel during the entire trip. Should you tip a driver who just sits there? In the end I gave my usual tip and five-star rating. I figured the driver had to be there or the car wouldn't drive, and he probably ponied up his own money for the FSD. I give five stars to any driver who gets me from point A to point B without major incident. Remember when five stars meant "above and beyond"? Grade inflation gets us all.
But you can't tip Waymos. Last fall during a trip to San Francisco, I took my first Waymo ride. At one point a couple of people jaywalked in front of the Waymo and the Waymo promptly stopped. The car behind us started honking. Who are you honking at? It's a Waymo. It did drop me off at the wrong hotel, but that's on me, who thought there would be two Hyatt Regencys in San Francisco.
In both cases the driving felt safer than the average Uber driver. It really is safer. About 40,000 people die in human-driven traffic crashes per year in the United States. But even a single accident could be devastating to a self-driving car company, so they are designed to drive on the safer side. I have friends who see self-driving as the ultimate safety upgrade, especially as they lose fast reflexes as they age.
With new technologies (to paraphrase Hemingway), things generally move gradually, then suddenly. Waymo and its rivals are expanding into several cities in the US (but not Chicago 😞) and robotaxis are all over China and across Asia. Cities like Shenzhen and Beijing have thousands of robotaxis operating daily. Is the US falling behind? And why are we not seeing plans for self-driving cars in the midwest? Is it the weather, or just the usual blind spot technology companies have for this part of the country?
Nevertheless, I suspect we'll see the vast majority of ride-share as robotaxis by the end of the decade and cars without a steering wheel soon after that.
Sunday, September 21, 2025
We can find more papers on the web than we used to. Are we reading them?
STUDENT: What did you do before the web to find papers?
BILL: We went to the library and copied papers to read later.
STUDENT: Did you read them later?
BILL: Well, uh, hmm, ...
BILL to a professor in his 80's: What did you do before copy machines?
PROF: We went to the library, took the journal off of the shelf, and read the article. We may also have taken notes on paper on the article.
Back to 2025: I do the following sequence of events with a branch point at the end.
1) Get interested in some problem.
I give an example:
Rado's Theorem for equations with a constant. Short summary:
Rado's Theorem: Let \(a_1,\ldots,a_n\) be integers. The following are equivalent:
--For all finite colorings of the positive integers there is a monochromatic solution to \(\sum_{i=1}^n a_ix_i= 0\).
--Some nonempty subset of \(a_1,\ldots,a_n\) sums to 0.
My interest: What if we look at equations which replace 0 by some other constant? AND what if you just want to look at (say) 2-colorings?
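As a concrete starting point for that question, here is a brute-force sketch; the helper names are mine and the search is only feasible for tiny \(n\).

    from itertools import product

    # Brute-force sketch (helper names are mine): look for a 2-coloring of
    # {1,...,n} with no monochromatic solution to a_1*x_1 + ... + a_k*x_k = c.
    def has_mono_solution(coloring, coeffs, c, n):
        for xs in product(range(1, n + 1), repeat=len(coeffs)):
            if sum(a * x for a, x in zip(coeffs, xs)) == c:
                if len({coloring[x - 1] for x in xs}) == 1:
                    return True
        return False

    def good_2_coloring(coeffs, c, n):
        # Fix the color of 1 to break symmetry, then try all 2-colorings of {1,...,n}.
        for bits in product([0, 1], repeat=n - 1):
            coloring = (0,) + bits
            if not has_mono_solution(coloring, coeffs, c, n):
                return coloring
        return None

    if __name__ == "__main__":
        # Sanity check on Schur's equation x + y = z (coefficients 1, 1, -1; constant 0):
        print(good_2_coloring([1, 1, -1], 0, 4))  # a good coloring of {1,...,4} exists
        print(good_2_coloring([1, 1, -1], 0, 5))  # None: every 2-coloring of {1,...,5} fails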
2) Assemble a website of papers on the topic.
I assembled a website of papers on the Rado question. The website is here.
3) I then read the papers and understand them. Or not.
I intend to do the following:
a) Read the paper with pen and pad and take notes, draw diagrams, do examples. I may create some handwritten notes. The benefit of the notes is from making them. The actual notes are terrible.
b) Type in the notes and while doing that polish them. I might work with a student on this.
c) If needed make slides out of the proof for presentation.
I used to sometimes go straight from the paper to the slides. I've stopped doing that, since it is valuable to first go through the Understanding the proof phase before going to the Best way to present the proof phase.
----------------------------------------
Step 3 can be a problem. I have many websites of papers that I never got around to reading. The key for me is to strike while the iron is hot; that is, SOON after assembling the papers, READ them.
For Rado's Theorem with constants I have not gotten around to reading any of the papers yet.
There is some benefit to having to read a paper in the library since then you actually read it. I am NOT saying things were better when I was a kid. The lesson is more of how to combine current technology with the helpful constraints of a prior era.
Another approach: go to the library and look through journals to find something interesting. I've done this a few times with success. I blogged about one of them here.
Wednesday, September 17, 2025
What is "PhD-Level Intelligence"?
When announcing OpenAI's latest release last month, Sam Altman said "GPT-5 is the first time that it really feels like talking to an expert in any topic, like a PhD-level expert." Before we discuss whether GPT-5 got there, what does "PhD-level intelligence" even mean?
We could just dismiss the idea but I'd rather try to formulate a reasonable definition, which I would expect from a good PhD student. It's not about knowing stuff, which we can always look up, but the ability to talk and engage about current research. Here is my suggestion.
The ability to understand a (well-presented) research paper or talk in the field.
The word "field" has narrowed over time as knowledge has become more specialized, but since the claim is that GPT-5 is an expert over all fields, that doesn't matter. The word "understand" causes more problems, it is hard to define for humans let alone machines.
In many PhD programs, there's an oral exam where we have previously given the candidate a list of research papers and they are expected to answer questions about these papers. If we claim a LLM has "PhD-level" knowledge, I'd expect the LLM to pass this test.
Does GPT-5 get there? I did an experiment with two recent papers, one showing Dijkstra's algorithm is optimal and another showing Dijkstra is not optimal. I used the GPT-5 Thinking model and GPT-5 Pro on the last question about new directions. The answers were a little more technical than I would have liked, but it would likely have passed the oral exam. A good PhD student may work harder to get a more intuitive idea of the paper in order to understand it, and later on extend it.
You could ask for far more--getting a PhD requires significant original research, and LLMs for the most part haven't gotten there (yet). I've not had luck getting any large-language model to make real progress on open questions and haven't seen many successful examples from other people trying to do the same.
So while large-language models might have PhD-level expertise, they can't replace PhD students who actually do the research.
Monday, September 15, 2025
``I'm on vacation so I won't be checking email'' will sound funny soon. Maybe it already does.
"I'll be on vacation so I won't be checking email.''
"I can't be at the meeting since I will be out of town''
Technology has made it so that:
a) You really CAN get email when you are on vacation.
b) You really CAN go to a meeting if you are out of town (if the meeting is on Zoom, which is becoming more and more common). (My proofreader tells me that Zoom is spelled with a capital Z. I did not know that! The phrase I Googled is formally correct, though I googled is acceptable. I have never seen anyone say or write I Binged or I binged for searching, though I have seen I binge watched Babylon 5, for example. I have never seen I Yahooed or I yahooed for search or for any other reason. Fun fact: the term Yahoo was coined by Jonathan Swift for use in the book Gulliver's Travels.)
Caveats:
1) You are on vacation! You DO NOT WANT to answer email!
2) You are out of town at a conference and the meeting is at the same time as a talk you want to go to.
3) There are too many meetings so it's good to have an excuse to not go to some.
Personal:
This is one issue where I may be LESS of a Luddite than Lance:
When Lance is on vacation, he DOES NOT get email.
When I am out of town, I DO get email, and respond to it. This first struck me when I was on a riverboat cruise in France and I got an email from my chairman asking if I would be on some committee (I said yes). I was more `in contact' than people on campus who don't respond to email.
How we talk about things: My advisor told me he would be giving a talk in Zurich. I azoomed he meant a talk on Zoom. He actually meant in person. Fortunately it was recorded so I didn't have to get up at 5:00 a.m. to see it. (My spellcheck thinks azoomed is not a word. It is correct. For now.)
Thursday, September 11, 2025
Is the Prob Method `Just Counting'? I say no and HELL NO
(After I wrote this post Lance tweeted a pointer to a great talk by Ronald de Wolf with more examples, and also examples of quantum proofs, see here.)
I was teaching the prob method for lower bounds on Ramsey Numbers (see my slides here).
As often happens, a student said:
That's not a new technique--- you're just counting.
My response
1) For this example, it can be rephrased as counting, but there is no advantage in that.
2) There are other examples where either
a) You could rephrase it as counting but it's MUCH EASIER to use probability, or
b) You really CANNOT rephrase as counting.
And I came prepared---I presented the following result (see my slides here):
Theorem: Let \(G\) be a graph on \(n\) vertices where every vertex has degree \( \ge d \). Then \(G\) has a dominating set of size
\( \le n \frac{\ln(d+1)+1}{d+1} \).
The proof uses that \( E(X+Y)=E(X)+E(Y) \). I cannot see a way to make the argument into a counting argument. Clearly 2a is true. It may even be that 2b is true, though that would be hard to formalize.
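For completeness, here is the standard one-paragraph version of the argument (the version in my slides may be phrased differently): put each vertex into a random set \(S\) independently with probability \(p\), and let \(T\) be the set of vertices not dominated by \(S\). A vertex \(v\) lands in \(T\) only if \(v\) and all of its at least \(d\) neighbors avoid \(S\), so by linearity of expectation
\[ E(|S|+|T|) = E(|S|)+E(|T|) \le np + n(1-p)^{d+1} \le np + ne^{-p(d+1)}. \]
Taking \(p=\frac{\ln(d+1)}{d+1}\) gives
\[ E(|S|+|T|) \le n\,\frac{\ln(d+1)}{d+1} + \frac{n}{d+1} = n\,\frac{\ln(d+1)+1}{d+1}, \]
so some outcome achieves this bound, and \(S\cup T\) is always a dominating set.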
The student admitted that yes, the Dom Set Theorem either could not be rephrased as a counting argument or it would be hard to do so.
Can you make it into a counting argument? Would you want to bother?
Monday, September 08, 2025
A Restless Soul
Tuesday, September 02, 2025
Guest Post on Why Coding Style is Important
Coding Style Is Important
Does coding style matter? We teach students how to write code and about algorithms. But do we discuss coding style? Some people may say that style is just personal preference. But there is undoubtedly good style and bad style, both for prose and for code. What follows is a guest post by David Marcus: a review of a book that focuses on coding style.
====
This is a review of the book "Professional Pascal: Essays on the Practice of Programming" by Henry Ledgard. This book changed my life: After reading it, I rewrote all my code.
"Professional Pascal" is not about algorithms. It is about writing code. I consider it to be the Strunk & White for programmers.
While programs must be understood by the computer, they must also be understood by people: We need to read the code to determine that the program is correct (testing can only show that it is incorrect, not that it is correct). We need to read the code to maintain/improve/extend/debug/fix it. This applies even if you are the only person who will ever read or use your code. You probably spend more time reading your code than writing it. If you need to do something with code that you wrote long ago, you will be grateful if the style makes it easy to understand.
If you don't use Pascal (or Delphi), don't be put off by the "Pascal" in the title. Almost all of the advice can be applied to any programming language.
I won't try to summarize what Professor Ledgard has written in his book: He explains it much better than I can. But, I will mention a few of the topics that he covers and some things that I found notable.
One of the essays in the book is "The Naming Thicket". Names are important. Variables should be nouns, procedures should be verbs, booleans should be adjectives.
I was once reading some code in a commercial product, and I saw several methods that had a variable named "Ary". I eventually figured out that this was an abbreviation for "Array". Of course, this was a terrible name: "Array" didn't tell me what the contents of the array were, plus the unnecessary abbreviation turned it into a puzzle.
Another time I was reading a method in a commercial product and the method had a two-letter name. It wasn't clear to me what the two letters meant. I eventually guessed that they were the initials of the developer. I confirmed this with another developer who worked on the same team.
Another essay in the book is "Comments: The Reader's Bridge".
Following Professor Ledgard's advice, in my code I put a comment at the beginning of every procedure, function, method, unit, group of constants/types, and every class, but I never put comments mixed in with the code. If I'm tempted to put a comment in the code, I just make it a method name and call the method at that point in the code.
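As a small illustration of that habit (my example, in Python for brevity, not one of Professor Ledgard's), here is the same idea of turning a would-be comment into a name:

    from dataclasses import dataclass

    @dataclass
    class Row:
        name: str
        age_days: int

    # Instead of burying "# keep only rows from the last 30 days" as a comment
    # inside save_report, make the comment a name and call it:
    def keep_recent_rows(rows, max_age_days=30):
        return [r for r in rows if r.age_days <= max_age_days]

    def save_report(rows, writer):
        writer(keep_recent_rows(rows))

    if __name__ == "__main__":
        save_report([Row("a", 5), Row("b", 400)], print)  # prints only the recent row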
I own a commercial database library that comes with source code. There are some comments in the source, but most of them appear to be there so that they can be extracted into the Help. I know someone who works for the company. I asked them whether the source code that they ship is the same as their internal copy. I thought maybe they strip out some of the comments in the shipped version. Nope. It is the same as their internal version.
Another essay in the book is "Program Layout". To differ slightly from Professor Ledgard, I would say the layout should make the syntax clear (not the semantics). In case you haven't figured it out, the correct indentation for blocks is three spaces (I don't think Professor Ledgard says this, but he indents three spaces in his examples).
Another essay in the book is "A Purist's View of Structured Programming". Only use one-in, one-out control structures. Never exit a method or block in the middle. If you know the code is written to adhere to this rule, it is much easier to understand: You don't have to scan the whole method looking for quit/exit/return statements.
Another essay in the book is "The Persistence of Global Variables". I once had a bug in one of my programs. I spent a lot of time trying to figure out what was wrong. The problem was in a method that called several other methods of the same class, something like
   DoThis( X, Y );
   DoThat( X, Y );
After much confusion, a light bulb went off in my head when I remembered that DoThis was changing the value of the object's fields. The object is global to the method calls because the compiler passes it in, but it isn't explicitly listed in the parameters/arguments of the methods. After that experience, I always include the object when calling a method, e.g.,
   self.DoThis( X, Y );
   self.DoThat( X, Y );
Another essay in the book is "It Worked Right the Third Time". When I write code, I compile it to catch syntax errors (like a spell checker). I then proofread it to make sure that it is correct (just like reading a math proof to check it). (Ideally, I will print the code out, but if the changes are scattered in multiple files, then viewing a diff onscreen works better.) Only then will I try it. The emphasis should be on writing good code. No amount of testing can make poor code good.
This book was published in 1986, and that is when I first read it. So, why am I writing a book review now? I read a lot of code written by people who make their living writing code. And, all of these people would benefit from reading this book.
Professor Ledgard has also written the books "Software Engineering Concepts (Professional Software, Vol 1)" and "Programming Practice (Professional Software, Vol 2)" (both written with John Tauer). Much of the material in "Professional Pascal" is also in these books. "Professional Pascal" is shorter, so if you are only going to read one of these books, read it. If a book is too long, here is an article: "Coding Guidelines: Finding the Art in the Science: What separates good code from great code?" by Robert Green and Henry Ledgard, ACM Queue, vol. 9, no. 11, 2011-11-02. The link is here.
Wednesday, August 27, 2025
The Logical Argument
Saturday, August 23, 2025
Was the George Foreman Grill The Best Invention of the last 50 Years?
(This post was inspired by George Foreman, who passed away March 21, 2025, at the age of 76.)
About 10 years ago I asked my class
What is the best invention or tech advance of the last 50 years?
Here are the answers I got, NOT ranked.
1) The internet. We can look things up so easily! (see my post on one aspect of this here). Young people reading this blog have no idea what the world was like before the internet. Here is a short story that shows how much has changed.
At MIT in 1982 I saw Anil Nerode give a great talk about Recursive Mathematics: using recursion theory to show that some theorems whose proofs were not effective could not be made effective (e.g., there is a computable 2-coloring of pairs of naturals with no decidable infinite homogeneous set, so the proof of Ramsey's theorem on \(K_N\) has to be non-effective). The talk got me interested in the topic, so that day I went to the math library (ask your grandparents what a library is) and found the paper journals that had some of the articles he talked about (Journals used to be on paper? See my post here about that.) I then copied the articles on the copy machine (ask your grandparents what a copy machine is) and read them. A few years later Anil Nerode asked me to write a survey of recursive combinatorics for a collection of articles in recursive mathematics; he was one of the editors of a joint America/USSR project (ask your parents what the USSR was), and I was happy to do it. The book was delayed when the USSR fell apart (ask your parents about that important historical era), but was eventually published. The survey is 131 pages and is here. The book, Handbook of Recursive Mathematics, is in two volumes. It's available on Amazon: Volume 1 and Volume 2. I've also sometimes seen it available for free download, though I don't know if it really is free or if that's some kind of scam (if your grandparents try to download it, warn them that it might be a scam).
2) Advances in medicine. You can have an operation and be home that night (this is partially medical advances and partially the insurance companies forcing you to have short stays). I asked the question pre-COVID. If I asked it again, I may have some people saying vaccines.
3) VCR/DVD/DVR/Streaming so we have a lot more control over our entertainment (Is this really on a par with medical advances? Yes- you can watch TV of your choice while you recover.)
4) E-ZPass for toll booths. The students said that it was always a pain having to bring change to toss into a basket at the toll booth. And sometimes they missed.
5) The George Foreman Lean Mean Fat-Reducing Grilling Machine. The student really liked using it and said that burgers were tastier and healthier. I asked him if he really thought George Foreman got to be the heavyweight champion of the world (twice!) because of the grill. He didn't know George Foreman had been a boxer; he thought that George Foreman was a pitchman. Which is true but incomplete. Whether he is a pitchman or a boxer or an ex-boxer, does he really know enough about healthy eating so that you should take his advice? Note also that he has a financial incentive to believe that the grill produces tasty and healthy food. See here for a blog post on the illogic of advertising.
6) The Cell phone. It would be rude to, at a party, go into a corner and read a newspaper. But it is socially acceptable to go into a corner and read the news (or anything else) on your phone. I discussed this, though about laptops, as part of a post here. On the one hand, maybe it was good to be forced to talk to people. On the other hand, I can check my mail without being at home or the office!
7) THE CELL PHONE!!!!!! See Lance's post here.
8) Caller ID. The Cell Phone makes it possible to call anyone, anytime. Caller ID makes it possible to avoid talking to anyone, anytime. Symmetry!
9) ChatGPT was not on the list 10 years ago but might be now. And other AI things will be also.
But for now I ask: What do YOU think were the greatest tech advances of the last 50 years? If you prefer thinking on a full stomach then grill some burgers and eat them before answering.
BILL to LANCE: Anything you want to add?
LANCE:
10) GPS - First launched in 1978 and now you never get lost, even at sea.
BILL: Some people trust their GPS too much, see here, though I think these stories happen less and less over time.
11) CNN - Watching wars play out in real time was a game changer.
BILL comment on CNN: In 1991 I was on a cruise. In the time I was gone there was an attempted coup against Gorbachev. I didn't hear anything about it until the cruise was over. CNN existed but it was not-a-thing to have it wherever you go. The next time I went on a cruise was 2022. While on that cruise, Gorbachev died and I heard about it right away. Two points here: (a) How fast we got news anytime-anywhere has changed a lot, and (b) For the sake of the Gorbachev family I should stop going on cruises.
12) Cloud Computing - Enabled small startups to do big things.
BILL: Sometimes very small, like one person. New word: Solopreneur.
Wednesday, August 20, 2025
The Phone
I've heard this story from a few places. A father watches Back to the Future II with his kid. The 1989 movie's view of 2015 looks entirely different, when in fact not much has changed except for the fashion and the lack of mobile phones. This is supposed to be a parable about the lack of technological progress.
I disagree. It's about a lack of imagination of the future because of those phones. Phones that are hidden from sight, and have become so ubiquitous we take them for granted.
The phone in your pocket is more powerful than the most powerful computer of 1985 and it's not even close. But that's the least interesting thing about the phone.
You no longer need a printed newspaper, encyclopedia, atlas, almanac, dictionary, thesaurus, dining guide, programming manual, or any other reference book. The printed sports almanac at the center of the plot of the movie doesn't exist anymore.
You can read virtually any book ever published, listen to any song ever recorded, watch any movie ever filmed. You have access to nearly all publicly available information. You can watch major sporting events live. On that phone.
You can buy tickets to any event and save and use them on your phone. You can pay for just about anything with your phone and, if you wish, have it shipped anywhere.
You no longer have to fold a map or ask for directions. The phone will give you directions, taking into account traffic and transit delays, and guide you along the way.
When visiting a foreign country you can point your phone at a sign in Japanese and have the words magically replaced by English. Take a picture of a menu in German and have it describe the dishes in English. Instantly translate voice as well.
You can have a conversation with your phone about anything. It will understand your voice and respond likewise.
You can take photos and videos on your phone and share them instantly with your friends or with everyone around the world. The quality is far superior to any consumer-level camera from 1985. Or you can have the phone create its own photos and videos based on your descriptions.
It's also, of course, a phone. You can speak to anyone anywhere, with video if desired, for zero marginal cost. Back in 1985 it cost about a dollar a minute to call someone in a different state, and much more internationally, on a landline. You can have impromptu video meetings and you can send messages to anyone and share them with everyone.
And I'm just scratching the surface.
So between the phone and the flying hoverboards, I'll take the phone any day.
Saturday, August 16, 2025
A few more notes about Tom L
Tom Lehrer passed away on July 26, 2025, at the age of 97. I wrote a blog-obit here. One of my readers read the post and went down a rabbit hole (or did he?), which led to a blog post about rabbit holes here.
A few more notes about Tom L.
1) When someone of interest to our readers dies, Lance calls me (he seems to always know before I do) to discuss who should do the obit (me, him, someone else, or perhaps no obit needed). This was a case where there was no need for a discussion. To say I am a fan of novelty songs would be an understatement---I was a fan of novelty songs before I was a fan of mathematics. A link to my website of novelty songs by topic, including Math, is here.
2) Whenever Lance calls I answer with Who Died?
3) Who was the oldest novelty song singer to die in the Summer of 2025? The summer is not over yet but I think the answer is known. And it's NOT Tom L!
Jane Morgan, a singer, died on August 4, 2025, at the age of 101. She recorded a parody of Johnny Cash's A Boy Named Sue titled A Girl Named Johnny Cash.
A Boy Named Sue: here
A girl named Johnny Cash: here
Wikipedia Page for Jane Morgan: here
To my knowledge, Jane Morgan only had one novelty song (she had many non-novelty songs), so calling her a novelty singer is a stretch, but it's a great song, so I will call her a novelty singer. (I checked if the flipside of that song was a novelty song. It was Charley, a love song, not a novelty song. Ask your grandparents what a flipside is.)
4) Tom Lehrer and Jim Lehrer (news anchor on PBS, deceased) are not related.
I googled how many people have the last name Lehrer.
Google-AI: FamilySearch.org estimates there are 211,503 records for the surname.
But this website says approximately 5982 bear this surname.
I'm surprised by the large gap there.
5) There have been more elements found since Tom L did The Elements. I wondered whether someone updated it. Of course they did:
The Original by Tom L: here
Updated version by Leonard Lehrman that has the latest element, Number 118, Oganesson: here
I've heard that 118 is a natural barrier---we probably won't create 119 or higher. Elements that high are created, not discovered. They also don't last long.
6) There are many elements, so a list-song made sense. There are a lot of complexity classes, so a list-song makes sense for those as well.
Dave Barrington sings Song of the Complexity Classes: here
7) How many people emailed me to tell me that Tom L passed away? R(5).
8) One of my favorite and lesser-known Tom L songs, whose advice he did not take: Selling Out.
9) Tom L wrote a paper with R.E. Fagen (not Ronald Fagin) for the NSA. It's a real math paper, but note the third reference in the bibliography. The paper is here.
Wednesday, August 13, 2025
Total Pixel Space
Last month the New York Times highlighted some AI generated short movies, including Total Pixel Space, by Jacob Adler that gets philosophical about information, à la infinite monkeys. It imagines the finite number of images and video that contain all human history, both real and fake, along with all possible creatures, and even the rise and fall of alien civilizations.
Let's talk about the math. According to the video there are roughly \(7.8 \times 10^{7{,}575{,}667}\) possible 1024x1024 images, and \(9.3 \times 10^{1{,}309{,}075{,}411{,}322}\) possible two-hour films, incomprehensibly large numbers. The narrator acknowledges that most of the images are random noise.
But within this ocean of pixel possibility, natural images are but a drop. Recognizable scenes, faces and objects are extremely rare islands in a vast sea of noise.
So how do we capture which images actually matter? Let's visit our friend Kolmogorov Complexity, which measures information by the amount we can compress. A JPEG image typically compresses (lossily) to about 200 KB, which reduces the number of images to \(8.7 \times 10^{1{,}061{,}650}\). Still enormous.
All the images and videos were generated by short prompts. JPEG compression reduces raw pixel space by discarding details our eyes don’t notice. Prompts do something similar at a higher level: they compress ideas, not pixels.
Consider prompts of length 100 over the 10,000-word vocabulary typically used in prompting and we're down to \(10^{400}\) possibilities, merely the fifth power of the number of atoms in the universe. And you could cut that down by considering grammatical structure.
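Here is a quick back-of-the-envelope sketch of where such numbers come from (my own arithmetic, not taken from the video; I assume 24-bit color and 24 frames per second, which lands close to, though not exactly on, the figures quoted above).

    import math

    # log10 of the number of 1024x1024 images at 24 bits per pixel: 256^(3*1024*1024)
    images_log10 = 3 * 1024 * 1024 * math.log10(256)
    print(f"1024x1024 images:  about 10^{images_log10:,.0f}")

    # A two-hour film at 24 frames per second is a sequence of 172,800 such images.
    frames = 2 * 3600 * 24
    print(f"two-hour films:    about 10^{images_log10 * frames:,.0f}")

    # Prompts: 100 words drawn from a 10,000-word vocabulary gives 10^400.
    prompts_log10 = 100 * math.log10(10_000)
    print(f"length-100 prompts: 10^{prompts_log10:.0f}")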
Now from a prompt you won't get the same image each time, based on the randomness of machine learning. But that's the whole power of AI, the randomness just gives us examples of the structure embedded in the prompt. It's the structure that matters.
And perhaps those monkeys would be more efficient if they entered prompts into AI, instead of generating random text. What images they could produce!
Sunday, August 10, 2025
My Tom L post inspired a mathematical definition of Rabbithole
BILL: Uh- I was only kidding.
Thursday, August 07, 2025
AI and ...
AI and Vacation
I'm back from my German vacation. This was my first AI vacation, by which I mean how I used AI to navigate a foreign country. Taking a picture of a hand-written menu board, not just to translate the dishes, but to get a description of each. Visiting a palace with a German-speaking tour guide and translating in real time. Even the more mundane taking pictures of buildings to learn about them.
On the TV I saw something about Künstliche Intelligenz, and Chatty told me it was German for Artificial Intelligence. The Germans use KI instead of AI partly because AI can be confused with Ei (egg). At least that's what AI tells me, so it must be true.
AI and Math
At the beginning of my vacation, Google announced that they achieved gold medal status in the International Mathematical Olympiad. An impressive achievement, though Terry Tao makes a good point that comparing an AI system to time-constrained high-school students is not an apples-to-apples comparison.
I already find talking to AI about math topics quite useful, though it's like talking to an early PhD student. Sometimes they just say things that aren't correct, but usually they are. The reasoning models are particularly good at finding holes in P v NP proofs. For example here's the conclusion of ChatGPT o3-pro's review of the paper from Eric's guest post.
The paper is a reminder that lower‑bound proofs live or die on the exact breadth of the algorithmic model they exclude—too narrow and the result is unsurprising, too broad and the proof tends to break. At present this work sits in the first category.
What I want to see is AI come up with a solution to an open math problem, a true new insight beyond just some optimized search. I'm not looking for P ≠ NP, just some result that would be publishable in a respectable conference or journal, even just a new completeness result. We haven't really seen that yet, but I suspect we will soon and then we can figure out where math goes from there.
AI and Bill
In his presidents question and solution, Bill states that AI had failed to find the right answer to his problem. Back in June, I saw Bill's draft post and tried AI to solve it.
AI initially failed the test but for a good reason. Bill's initial draft post had Ford and Dole in Group Two because they received LLBs instead of JDs. In the past the LLB was the professional law degree. Yale didn't change to JD until 1971. Ford got his LLB from Yale in 1941.
When I removed Ford and Dole, ChatGPT o3-pro correctly gave the reason for the partition, though it did take over 13 minutes.
Every name in Group One spent time in a law school—most completed a J.D. or LL.B., and the two exceptions (Al Gore and, in some accounts, Lloyd Bentsen) still enrolled in law studies.
examples: Walter Mondale (University of Minnesota Law), JD Vance (Yale Law), Al Gore (attended Vanderbilt Law). No one in Group Two ever attended a law school; their highest formal education is in fields such as engineering (Jimmy Carter), economics (George H. W. Bush), business (Donald Trump), political science (Paul Ryan), or acting (Ronald Reagan).
So the distinguishing property is legal education: every Group One figure went to law school, while none in Group Two did.
Another commenter got a similar result from ChatGPT o4-mini-high. I just tried it on Gemini 2.5-Pro and it also gave the correct response, this time in seconds.
On the other hand, E tried several base models and none of them succeeded. The lesson: if you want to solve a tricky problem, pony up the $20/month and use a reasoning model.
Monday, August 04, 2025
Some thoughts on journals, refereeing, and the P vs NP problem
A guest post by Eric Allender prompted by an (incorrect) P ≠ NP proof recently published in Springer Nature's Frontiers of Computer Science.
For a time, I served as Editor-in-Chief of ACM Transactions on Computation Theory, and in this role I had to deal regularly with submissions that claimed to resolve the P vs NP problem. Finding referees for these papers was sometimes challenging, so I frequently ended up reviewing them myself. Dealing with such submissions involves enough overhead that ToCT, J.ACM and ACM Transactions on Algorithms limit the frequency with which authors can submit work of this sort. But for every submission, on any topic, the overall process was the same: Find someone on the Editorial Board who has sufficient expertise to find referees and make knowledgeable use of their recommendations. If there was no such person on the Editorial Board, then it was always the case that the submission was out of scope for the journal.
These thoughts are brought to mind by a recent case, where it seems to me that the editorial process broke down.
Springer publishes several high-quality academic journals that deal with Theoretical Computer Science. One Springer journal, Frontiers of Computer Science, recently published an article entitled SAT Requires Exhaustive Search, where one of the authors is a Deputy Editor-in-Chief of the journal. The abstract of the article states that it proves a result that is stronger than P not equal to NP. The Editorial Board of the journal has some members who are expert in computational complexity theory. However, all the ones whom I know personally have asserted that they had no knowledge of this paper, and that they were not involved at all in handling the paper.
When Ryan Williams and I learned about the publication of this article, we drafted a comment, which we sent to the Editor-in-Chief. We recommended that the paper be retracted, in which case there would be no need to publish our comment. However, the Editor-in-Chief declined to retract the article, saying that he could find no evidence of misconduct, and thus we have been assured that an edited version of our comment will appear in the journal.
Our comment calls attention to some shortcomings in the proof of the main theorem (similar to shortcomings in several other failed attempts to separate P from NP). But we were able to say more. Often, when one is looking for bugs in a purported proof, one has to deal with the situation where the claimed theorem is probably true, and the only problem is that the proof is not convincing. However, the main theorem in their paper (Theorem 3.2) states that a particular constraint satisfaction problem requires time more than \(d^{cn}\) for any constant \(c<1\) (where \(d\) is the domain size, and \(n\) is the number of variables). In particular, their purported proof claims that this holds even when \(k=2\) (meaning that each constraint has at most 2 variables). However, Ryan Williams presented an algorithm more than two decades ago that runs in time \(O(d^{0.8n})\) in this special case, contradicting the lower bound claimed in the article.
The article contains an appendix, with glowing testimonies from various researchers; the lead-in to the appendix also contains enthusiastic comments from Gregory Chaitin. I contacted Gregory Chaitin, and he asserts that he did not read the paper, and that he was quoted out of context.
The edited version of the comment that Ryan Williams and I wrote, which will supposedly appear in Frontiers of Computer Science soon, differs from the version linked here (our original submission) primarily in one respect: Our closing paragraph was removed. Here is that paragraph:
Finally, it is our opinion that the publication of this article is a complete embarrassment to this journal and its publisher. We believe that, at the very least, the paper should be withdrawn, and Springer should conduct an investigation to understand how such a paper could have made it through the peer review process.
Update (Lance, 8/5/25): The authors of the paper in question have asked me to link to their reply to the Allender-Williams comment. I do so with no endorsement.
Eric Allender asked me to note that their claim about Ryan’s algorithm requiring exponential space is misleading; the amount of space is not more than the runtime of the algorithm, which is less than the lower bound that they claim. (Their theorem does not state that \(d^{cn}\) time is required under the additional assumption that the algorithm use only \(n^{O(1)}\) space.)
 
 
