Wednesday, January 28, 2026

The Fighting Temeraire (Re)visited

The Fighting Temeraire by JWM Turner

A year ago I wrote about an experiment I ran to learn about the modern period of art from ChatGPT. Chatty picked four paintings to discuss and I wrote about Joseph Mallord William Turner's The Fighting Temeraire. To review, the Temeraire fought in the Battle of Trafalgar, but in this painting it's being towed by a steamboat up the Thames to be broken down for parts. I liked the painting because it captured the change in technology from the great sailing ships to boats moving without sails, and how technology can move us from the beautiful to the practical, with parallels to what we see today.

I wrote the post based on high-resolution images, but there is nothing like seeing a painting in person. So during a trip into London, we made a pilgrimage to Room 40 of the National Gallery to see the Temeraire up close. The National Gallery is across the street from Trafalgar Square and a short walk from the Thames, which the Temeraire traveled on its last trip.



Seeing it in person was a new experience. I could see the brushstrokes, and details I had missed before, like the people on the steamboat and how its wheels pushed it along the water. More than that, I felt the emotion of watching the last trip of a great ship. A reminder that no matter how digital our world gets, seeing art in its original form brings the artist's true intentions to mind.

In the same room hangs another JMW Turner masterpiece, Rain, Steam, and Speed - The Great Western Railway.


Rain, Steam, and Speed - The Great Western Railway by Turner

Turner painted The Great Western Railway in 1844, less than a decade after the train line started running. Like the Temeraire, it captures the change in technology: a big, fast train moving quickly toward the viewer through a quiet countryside. On the right side of the painting, a bit hard to see even in person, is a man with a horse-drawn plough; on the left, a small boat on the river.

Coincidentally, I took the Great Western Railway into London that day, but it's not the same railroad company.

Turner captured a new time in history, when man could travel faster than a horse on land or the wind at sea, in the early days of the industrial revolution, a reminder of the technological changes we see today. But the trip also reminded me of the importance of the museum: of seeing something in person that no digital experience can replicate, in a place where you can focus on art, undistracted by anything except other art.

Two hundred years from now will someone go to a museum to see art that captures the early days of the AI revolution? And will it be generated by a human?

Saturday, January 24, 2026

Online Talks on Accessible Theorems!

Bogdan Grechuk has written a book, Landscapes of 21st Century Mathematics, that came out in 2021, with a revised version coming soon. The theme is that he takes theorems whose statements can be understood and describes them in 5–10 pages. No proofs, but lots of context and, most importantly, if you read all 800 pages you will know about many areas of mathematics and where to look things up. He is organizing a series of online seminars with accessible talks about great recent theorems, featuring world-renowned mathematicians:
  • January 28, 2026. Prof. Boaz Klartag. “Sphere packing.” Join the talk here.
  • February 11, 2026. Prof. Avi Wigderson. “Expander graphs.” Join the talk here.
  • March 4, 2026. Prof. Assaf Naor. “Sparsest cut.”
  • April 2026. Prof. Harald Helfgott. Date and topic to be decided.
  • May 20, 2026. Prof. Ben Green. “The polynomial Freiman–Ruzsa conjecture.”
More details can be found on the seminar webpage and the blog devoted to great accessible theorems across mathematics.

Thursday, January 22, 2026

Community

I once had a provost who felt that academic departments hindered the university as they tended to silo the faculty. He would argue we should eliminate departments and that would increase cross-disciplinary work. That went nowhere of course.

He missed a critical point. While departments play an official role in hosting academic programs, they also serve as the main community for the faculty within them. You see your department colleagues in meetings, at seminars, and just passing in the hallways. They are your peers, the ones who hired you and who judge your promotion and tenure cases. You will argue with them, but you all share a common mission: to make your department as strong as possible, so you can attract even better colleagues.

At Oxford, most faculty have a position at one of the 36 colleges, and faculty community comes from the college instead of the department. Many faculty walk from their department offices to have lunch in their college. The colleges are interdisciplinary, with at most a couple of computer scientists in each. My visiting fellowship is with Magdalen College, not the CS department, though I plan to spend some time there.

Does the Oxford system lead to more interdisciplinary research, or less research in the field? Not sure, and it's hard to run a controlled experiment.

We all have two communities, one where we work, but also our academic community, the people who do research in the same areas, the people in our field that we collaborate with and see at conferences and workshops. The ones we try to impress with our own research so they may accept our papers, give us grants, nominate us for awards and write good letters for us. That's the real challenge of interdisciplinary research--you hope to impress researchers in both fields, but really you just impress those in the intersection.

In computer science our academic communities are pretty small, usually centered on the conferences where we publish regularly. I was having a conversation with another computer scientist at Oxford. I asked him which conference he publishes in and he said ICALP B. I said I've published in ICALP A. Completely different!

Monday, January 19, 2026

What to do about students using ChatGPT to do their homework?

Students are using ChatGPT to do their HW. Here are things I've heard and some of my thoughts on the issue (Lance also added some comments). I have no strong opinions on the issue. Some of what I say here applies to any AI or, for that matter, to old-fashioned cheating by having your friend do the homework for you or by going to the web for the answer. (Is ChatGPT just going to the web for the answer but with a much better search tool?)

1) Ban the use of ChatGPT.  That might be impossible. 

2) Allow them to use ChatGPT but they must take what it outputs and put it in their own words. This should work in an advanced course where the students want to be there and are mature, but might be problematic in, say, Freshman calculus. This is also a problem if you want to see if they can come up with the answer on their own. Lance: How do you know if the words come from AI or the student?

3) Assign problems that ChatGPT does not do well on. I can do this in Ramsey Theory and (to some extent) in Honors Discrete Math but (a) over time it will be harder, and (b) this may be hard to do in a standard course like calculus. Lance: Pretty much impossible for most undergrad courses. Bill: Slight disagreement: computer science is a fast-moving field where recent papers can be used to create problem sets before ChatGPT has digested the paper. Sometimes.

4) For some assignments, have them use ChatGPT and hand in both ChatGPT's output and the student's rewriting of it. If ChatGPT made a mistake they should also report that. Lance: Many would just have AI rewrite the AI. Bill: Alas, true.

5) Revise the curriculum so that the course is about HOW to use ChatGPT intelligently, including cases where it is wrong and types of problems it gets wrong (do we know that?). Lance: And that changes as models change. Bill: True, but not necessarily bad.

6) Make sure that a large percent of the grade is based on in-class exams. I don't like this but it may become necessary. Lance: I just grade homework for completion instead of correctness so that students don't feel they need to use AI to keep up with their classmates. Bill: I like the idea, but they may still use AI just because they are lazy.

7) The in-class exams will have variants of the homework so if a student used ChatGPT this will test if they actually learned the material (I do this anyway and warn the students on day one that I will do this to discourage cheating of any kind.) Lance: This works until the students wear AI glasses. Bill: I can't tell if you are kidding. More seriously, how far off is that? Lance: Not far.

8) Abolish grades. Lance thinks yes and blogged about it here. Abstractly I wish students were mature enough not to need grades as motivation. But even putting that aside, we do need some way to tell if students are good enough for a job or for grad school. Lance: We do need that, but grades aren't playing that role well anymore. Bill: What to do about this is a topic for another blog post.

9) In a small class have the students do the homework by meeting with them and having them tell you the answer, and you can ask follow up questions. Perhaps have them do this in groups. Lance: Panos Ipeirotis used AI to automate oral questioning. Bill: Looks great, though I'll wait till they work out the bugs.

So, readers, what have been your experiences?

Wednesday, January 14, 2026

Rational Functions Solved!

It's not every day that one of my open problems is solved, especially one that I asked about over three decades ago. Matt Kovacs-Deak, Daochen Wang and Rain Zimin Yang just posted a paper showing that if you have a Boolean function \(f\) and two polynomials \(p\) and \(q\) of degree at most \(d\) such that \(f(x)=p(x)/q(x)\) for every \(x\) of length \(n\) then \(f\) has decision tree complexity at most \(2d^4\).

Noam Nisan and Mario Szegedy had a beautiful paper in the early 90s showing that if a low-degree polynomial approximates a Boolean function, then the decision tree complexity, the number of bits you need to query from the input to decide the output, is bounded by a polynomial in the degree. For rational functions that isn't true: the majority function can be approximated by the ratio of two low-degree polynomials. This was the main tool used by Beigel, Reingold and Spielman when they showed that the complexity class PP is closed under intersection.
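To make decision tree complexity concrete, here is a small brute-force sketch in Python (my own illustration, not code from any of these papers) that computes D(f), the minimum over decision trees of the worst-case number of bits queried, for tiny Boolean functions:

```python
from itertools import product

def dt_depth(f, n, restriction=None):
    """Minimum decision-tree depth of f on the subcube fixed by restriction."""
    if restriction is None:
        restriction = {}
    inputs = [x for x in product([0, 1], repeat=n)
              if all(x[i] == b for i, b in restriction.items())]
    if len({f(x) for x in inputs}) == 1:
        return 0  # f is constant on this subcube: no queries needed
    best = n
    for i in range(n):
        if i in restriction:
            continue
        # Query bit i; pay 1 plus the cost of the worse of the two branches.
        cost = 1 + max(dt_depth(f, n, {**restriction, i: b}) for b in (0, 1))
        best = min(best, cost)
    return best

maj3 = lambda x: int(sum(x) >= 2)
print(dt_depth(maj3, 3))  # prints 3: majority on 3 bits is evasive
```

This is exponential-time brute force, so it is only feasible for very small n, but it matches the definition of decision tree complexity used in this line of work.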

Motivated by the complexity class C\(_=\)P\(\cap\)co-C\(_=\)P, I asked Mario whether, if a low-degree rational function exactly computed a Boolean function, that function would have low decision tree complexity. Mario thought about it, said it was an interesting problem, and added it as an open question to the journal version of his paper with Nisan.

I gave the problem to every student who asked me for an open combinatorial problem, as well as every reasoning AI model I could get my hands on. Iyer et al. posted a paper that examined this problem and showed it true for symmetric functions but the general case remained open until now.

The main insight of Kovacs-Deak, Wang and Yang is to consider the minimum block sensitivity, usually an uninteresting property of a function. The majority function, for example, has large maximum and average block sensitivity but low min block sensitivity. Nevertheless, they bounded the min block sensitivity by twice the degree of the rational function and used that to reduce the degree of the rational function by querying a small number of bits of the input. Nice.
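These quantities are concrete enough to check by brute force on toy examples. The Python sketch below (again my own illustration, following the standard definitions rather than anything in the paper) computes the block sensitivity at an input, the maximum number of disjoint blocks of bits whose flipping changes the function's value, and then takes the minimum over all inputs:

```python
from itertools import product, combinations

def flip(x, block):
    """Flip the bits of x indexed by block."""
    return tuple(1 - b if i in block else b for i, b in enumerate(x))

def bs_at(f, n, x):
    """Block sensitivity of f at x: max number of disjoint sensitive blocks."""
    sensitive = [frozenset(c) for r in range(1, n + 1)
                 for c in combinations(range(n), r)
                 if f(flip(x, frozenset(c))) != f(x)]
    def pack(used):
        # Exact search for the largest family of pairwise disjoint blocks.
        return max((1 + pack(used | b) for b in sensitive if not (b & used)),
                   default=0)
    return pack(frozenset())

def min_bs(f, n):
    """Minimum block sensitivity over all inputs of length n."""
    return min(bs_at(f, n, x) for x in product([0, 1], repeat=n))

maj3 = lambda x: int(sum(x) >= 2)
print(min_bs(maj3, 3))  # prints 1: min block sensitivity of 3-bit majority
```

For 3-bit majority the minimum is 1, attained at the all-zeros and all-ones inputs, where every sensitive block has at least two bits so no two are disjoint, even though at a balanced input like 110 the block sensitivity is 2.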

I always joked that if I told people the complexity application of the problem they would lose interest. The question came out of work with Steve Fenner, Stuart Kurtz and Lide Li on An Oracle Builder's Toolkit. C\(_=\)P is a generalization of co-NP where we ask if a GapP function (the difference of two #P functions) is zero. So C\(_=\)P\(\cap\)co-C\(_=\)P is a generalization of NP\(\cap\)co-NP. This new theorem implies the technical result that if P = PSPACE unrelativized then P = C\(_=\)P\(\cap\)co-C\(_=\)P relative to a generic oracle. This implies an oracle where P = C\(_=\)P\(\cap\)co-C\(_=\)P and the PH is infinite and another where C\(_=\)P\(\cap\)co-C\(_=\)P has no complete sets.

If you are more of a quantum person, C\(_=\)P\(\cap\)co-C\(_=\)P = PostEQP while PostBQP = PP, and this work shows quite a difference between these classes. PP has complete sets and if P = PP then the polynomial-time hierarchy and even the counting hierarchy collapses to P.

Sunday, January 11, 2026

Is `smells like' commutative?

 
1) Smells Like... Something

In many TV shows having to do with murder (and there are plenty of them), I’ve heard the following exchange:

        His breath smells like bitter almonds. So he was poisoned with cyanide

They’re either saying

        bitter almonds smell like cyanide

or

        cyanide smells like bitter almonds.

If you say X smells like Y, you mean that X is the new smell and Y is the familiar one. However, on these shows, people seem to smell cyanide a lot, yet I've never seen them smell or taste bitter almonds. That's good, since bitter almonds can be lethal (see here). So there should be mystery stories where bitter almonds are used and the cops say

             His breath smells like cyanide. So he was poisoned with bitter almonds.

I don’t know what either one smells like.

2) Rotten Eggs

In real life: My Darling grew up in Pittsburgh when it was still a steel-mill city. She said she often smelled something that

        smelled like rotten eggs.

It was sulfur. But in telling me this, she assumed I'd smelled rotten eggs. I haven't. But I have smelled other things that I was told smell like rotten eggs.

I think the phrase

        smells like rotten eggs

is often used by people who’ve never actually smelled rotten eggs.

3) Cardboard and Matzoh

A  blog post by Scott (see here), and my post about his post (see here), brought up the question:

        Does matzoh taste like cardboard?

I doubt any of us have actually tasted cardboard.


My proofreader once accidentally did, while eating takeout from a paper container. He says
(1) it doesn’t taste like matzoh, and
(2) it doesn’t taste like food — which matzoh does.


4) Dog Food

I’ve heard the cliché insult:

        Your cooking is so bad that it tastes like dog food.

I’ve never eaten dog food.  Maybe it tastes good.

5) When X Smells Like Y

If someone says X smells like Y, then:

a) If people know what Y smells like but not X, that’s informative.
b) If people know what X smells like but not Y, that’s not informative.
c) If I hear that X smells like rotten eggs and Y smells like rotten eggs, then I know X and Y smell the same —
even though I don’t know what rotten eggs smell like.
Oh wait — I do. They smell like X or Y!

6) How do the following fit into this discussion?

a) The Nirvana song Smells Like Teen Spirit, video here.
b) The Weird Al song Smells Like Nirvana, video here.


Friday, January 09, 2026

Computational Depth

I'm posting from Oxford University where I will be spending the "Hilary Term" (through late March) as a visiting fellow at Magdalen College. If you are relatively local, reach out if you'd like to connect.

I plan to get back into research after thirteen years of administration, working primarily with Rahul Santhanam and his group. I haven't had a significant sabbatical or leave since Amsterdam 30 years ago, which is what comes from changing jobs too often.

Today I'd like to talk about a 2006 paper on a topic I first thought about in Amsterdam, one that will likely play a role in this visit: Computational Depth: Concept and Applications by Luis Antunes, Dieter van Melkebeek, Vinod Variyam and myself.

In Amsterdam, I was hosted by Paul Vitányi and Harry Buhrman at CWI, and naturally worked on Kolmogorov complexity, the algorithmic study of randomness, as well as various problems in computational complexity.

Very simple strings don't have much information. Completely random strings have maximal Kolmogorov complexity but aren't particularly useful either, as we can create our own random strings by flipping coins. Is there some way to measure useful information?

In particular I had this question: If NP reduces to a sparse set then it has polynomial-size circuits. If NP reduces to a random set then it has polynomial-size circuits. Is this just coincidence or is there a broader principle here?

Charlie Bennett developed a notion of logical depth that has a similar philosophy, but it is a difficult definition to work with. Luis Antunes, a student from Porto, came to work with me at NEC Research in New Jersey when I was there around the turn of the century. We developed the simpler notion of computational depth as the difference of two Kolmogorov measures, like time-bounded Kolmogorov complexity minus traditional Kolmogorov complexity. This would be small for both very easy strings and fully random strings. With then NEC postdoc (and now Nebraska professor) Vinod Variyam, we found a connection to finding SAT witnesses, and with DIMACS and IAS postdoc Dieter van Melkebeek (now a Wisconsin professor), we came up with the notion of shallow sets, which generalizes both random and sparse sets and, as hoped, if NP reduces to a shallow set then it has polynomial-size circuits. Luis would title his PhD thesis "Useless Information".

Luis and I would later find a nice connection of computational depth to average-case complexity. 

Computational Depth had a resurgence of popularity with the rise of meta-complexity, with 50 of the original paper's 90 citations coming since 2020.

So I hope to find new applications of computational depth working with Rahul who's at the center of meta-complexity. Maybe computational complexity can help us understand machine learning better based on my frozen concentrate analogy, where the learning process removes the randomness, leaving structured information behind. 

Monday, January 05, 2026

AI and Research Papers

2026 will be a year of AI disruption across all of academia. Let's start by talking about how AI is changing how we write research papers. Not the research itself (that's another post), just the dissemination thereof.

Technology has changed research distribution since the invention of paper; the printing press, typewriters, journals and conferences have all had dramatic effects. And we've seen dramatic changes just within my research career, from authors doing their own formatting (TeX/LaTeX) to instant distribution of papers (email/web) and collaborative writing (email/shared repositories/Overleaf).

How AI affects research writing follows a trend not unlike other AI applications. 

The Slop Phase

This is where AI does more harm than good. It's useful for finding grammar mistakes and making fancy LaTeX tables, but if you use it for writing it may make stuff up, create citations out of thin air, and put phrases like "as a large language model" in the text. Things have gotten much better with the latest models (I'm currently a fan of Gemini 3), but we haven't fully left the slop phase, and anything AI-written needs close verification.

The Verification Phase

2026 will be the year we go from humans having to look over AI-generated content to AI having to look over human-generated content. Already you should give your paper, ideally in LaTeX, to the best model you have access to with a prompt asking for a critical review, or use a specialized service. You don't have to follow what the AI says, but you should be open to improving your paper. Google provided an optional tool for those preparing STOC 2026 submissions that many found valuable. In 2026 some conferences and journals will start requiring submitted papers to go through an AI review as well as a human review (for now), especially venues overwhelmed by submissions.

At some point, we may require authors, with AI help, to have their theorems verified by theorem provers like Lean or Rocq. It will be a while, though, especially in computational complexity, where it is hard to properly state a theorem in formal logic, let alone its proof.

The Collaborative Phase

Not just AI critiquing papers, but researchers and AI working together on paper writing. We already see pieces of this: having AI fill in details of arguments, give a first draft of an introduction, help create diagrams, find relevant citations, and so on. Right now anything the AI writes requires a careful lookover by the researcher. As the models improve, we'll see more and more generated by the AI, and researchers will start to trust the output, maybe more than they should. This will lead to...

The Generation Phase

Here the AI writes the LaTeX from scratch based on a proof sketch given by the researchers. As an experiment, I "vibe-coded" a minor research paper and it went better than expected.

The researcher will ask for updates and touch up the LaTeX at the end, but at some point the human won't touch the LaTeX file at all, just a prompt or a markdown file that generates it.

By the end of the 1980s, researchers just trusted the LaTeX for formatting, fixing only syntax errors and overfull boxes. Most researchers (but not all) stopped fiddling with the look of the paper. At some point, and at higher stakes, we'll do the same with the paper content as well.

Friday, January 02, 2026

The Betty White Award for 2025

The Betty White Award goes to people who die at the end of the year---too late to be in those articles with titles like

                                                            people we lost this year.

The point of the award is that news outlets and blogs should WAIT until January before running any articles that look back on the prior year. (I tell Lance we should have our end-of-the-year post in January just in case someone solves P vs NP in the last week of December. [Lance: I'm not worried])

I made up the award in 2023. More recently, I looked at the web to see who would have won it in past years; my chart is here. As I was writing this post I wondered if there was already a Betty White Award for something else. The response is here. Short version: ChatGPT says there is an informal award by that name for reading cue cards really well on Saturday Night Live. As opposed to my award, which is formal(?).

This year's list of winners so far has only one person:

Brigitte Bardot, who died on Dec. 28, 2025, at the age of 91. She was an actress, singer, model, and activist (she was for animal rights and against interracial marriage---see her Wikipedia page for other views she had). As with most celebrities who die when they are over 90, I suspect many people either didn't know who she was or thought she was already dead. For more on her, see her Wikipedia page here. Or type her name into chatty, though be warned that you may get some false statements.

1) This year I am taking NOMINATIONS for additions to the list. The number of winners can be more than one; the max in a year so far has been four. SO

If you know of someone who

a) died between Dec. 20 and Dec. 31

b) is famous

then leave a comment.  DEADLINE: Jan. 8 at noon East Coast time. On Jan. 8 I will update this blog post with more winners if need be. 

I've already looked at Wikipedia's list of who died in 2025, here. If there is either (1) someone on there who is more famous than I thought (that happened last year with Manmohan Singh) or (2) someone not on the Wikipedia list who is famous to our community, let me know.

2) As noted above, the point of the award is that publications should WAIT until January to run those lists. I am happy to say that Entertainment Tonight has found a good way to handle this---they had an article titled Celebrity deaths 2025: Remembering those who've died this year but they are updating it (at least the online version). The updated version includes Bardot! See here. Paper publications won't be able to do that. This may not matter---will there still be paper publications at the end of 2026? I blogged on paper-free stuff here.