Monday, December 22, 2025

Complexity Year in Review

An easy choice for paper of the year: a paper that has nothing to do with randomness, interaction, quantum, circuits or codes. Just a near-quadratic improvement in the amount of memory you need to simulate time.

Simulating Time with Square-Root Space by Ryan Williams

Any time-\(t(n)\) algorithm can be simulated in space \(O(\sqrt{t(n)\log t(n)})\), greatly improving the \(O(t(n)/\log t(n))\) result from the 1970s. Ryan's work makes strong use of last year's space-efficient tree evaluation by James Cook and Ian Mertz. More in my February post and a Quanta article, which did a better job explaining the importance of the result than I could.
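To see where "near quadratic" comes from, here is a back-of-the-envelope comparison (mine, not from the paper) plugging \(t(n) = n^2\) into both bounds:

```latex
\[
\underbrace{O\!\left(\tfrac{t(n)}{\log t(n)}\right)}_{\text{1970s bound}}
\;\xrightarrow{\;t(n)=n^2\;}\; O\!\left(\tfrac{n^2}{\log n}\right),
\qquad
\underbrace{O\!\left(\sqrt{t(n)\log t(n)}\right)}_{\text{Williams}}
\;\xrightarrow{\;t(n)=n^2\;}\; O\!\left(n\sqrt{2\log n}\right) = O\!\left(n\sqrt{\log n}\right).
\]
```

The space requirement drops from nearly quadratic in \(n\) to nearly linear, i.e. roughly the square root of the old bound.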

Bill is also excited by the new \(O(m\log^{2/3}n)\) single-source shortest-path algorithm by Ran Duan, Jiayi Mao, Xiao Mao, Xinkai Shu and Longhui Yin that beats out Dijkstra on sparse graphs.
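For concreteness, here is the standard comparison (my summary, not from the paper) against Dijkstra with Fibonacci heaps on a graph with \(n\) vertices and \(m\) edges:

```latex
\[
\text{Dijkstra (Fibonacci heaps)}:\; O(m + n\log n)
\qquad \text{vs.} \qquad
\text{Duan et al.}:\; O(m\log^{2/3} n).
\]
\[
\text{On sparse graphs, } m = \Theta(n):\quad
O(n\log n) \;\;\text{vs.}\;\; O(n\log^{2/3} n).
\]
```

On dense graphs the \(O(m)\) term dominates both bounds, which is why the win shows up in the sparse case.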

Last year I wrote

We're heading to a perfect storm for US higher education with the oncoming trains of the new administration, artificial intelligence, fiscal challenges and the demographic cliff. Hang on tight, it's going to be a bumpy ride.

Bumpy is an understatement and we're just starting the ride. Limited immigration, National Science Foundation woes in its 75th anniversary, and a drop in computer science enrollments as AI continues to suck up the atmosphere. Do we buckle down or should we completely rethink our institutions? 

In the spirit of all the AI-wrapped content, I asked Claude to put together a full year in review for this blog. This is getting scarily good.

We remember George Foreman, Frank Gehry, Ray Laflamme, Tom Lehrer, Charles Lin, Pradyut Shah and Tom Stoppard.

We thank our guest posters Eric Allender, Daniel Fernández and Alberto Fraile, Clyde Kruskal and Nick Sovich.

See you all in January!

4 comments:

  1. I read the Claude-generated Review of the blog. It was very good. I did find ONE mistake (ONE is a VERY small number). The review claimed that Bill G wrote the blog post about LOTS of P vs NP proofs, when actually Lance wrote that one. I wonder why it got that wrong.

    That aside, I concur with Lance that it's scary how well Claude did.

    ReplyDelete
  2. Why do you care how well Claude did?

    I'm serious here. We know what the underlying algorithms are: random text generation with no underlying model for logic or computation, and thus no means of verifying the generated text. And no scientific model of what meaning in language consists of. In any sense of "intellectual content", there's none there. Sure, it's "better" than single-word-based Markov chain text generation. But it's still exactly the same thing: random text generation.

    The world's gone crazy, and some of the smartest people around have gone there with it.

    ReplyDelete
    Replies
    1. I care how well Claude did since I am curious how well AI can do at various tasks. You are saying it's just random text generation. Fine. I am curious how well random text generation can do.

      Delete
    2. " i am curious how well random text generation can do."

      But that makes no _intellectual_ sense. Reading randomly generated text is, any way you could possibly think about it, no more sensible now than it was in 1990 with the Markov Chain programs.

      LLMs can _copy_ reasoning they find on the net, but they don't know what multiplication is (they look up the answer). They are, in logical principle, really really dumb.

      Even worse, playing with an LLM and thinking you are going to "figure out what it can, or cannot, do" is a fool's errand, because it has way more text in its database than you have in your head, and there are a lot of blokes with a lot of their money on the line, dependent on creating front and rear ends for their pet LLM that hide the infelicities.

      As Peter Shor (yes, that Peter Shor) pointed out (over at "Not Even Wrong"), they are good at "solving" problems for which there are solutions on the net, and seriously terrible for problems that don't have solutions.

      The hilarious thing is Google and MickeySoft messed up their search engines by putting in ads and links to paying sites ("enshittification"), so the LLMs act for most people as better search engines than the search engines. (They're not, since you don't know when they're wrong, but that's another discussion.)



      Delete