Wednesday, February 04, 2026

Sampling the Oxford CS Library

Wandering around the maze known as the Computer Science building at Oxford, I found the computer science library. Rarely these days do you see a library (and a librarian) devoted to computer science. The librarian found their copy of The Golden Ticket and asked me to inscribe and sign it, just like at Dagstuhl, perhaps the only other active CS library I know of.

It brought back memories of the early 90s when I would often head to the Math/CS library at the University of Chicago to track down some conference or journal paper. Now we just click and download, but we miss finding something else interesting in the proceedings or the stacks in general.

I had time to kill so I wandered around the library, finding memories in the stacks, including the 1987 STOC proceedings, home to my first conference paper, The Complexity of Perfect Zero-Knowledge. The paper might be best known for my upper bound protocol, which is republished here in its entirety.

That's how I wrote it nearly four decades ago, without proof, just an intuition for why it works. Those were the days. I did work out the full covariance argument in the journal version, though I missed other bugs in the proof.

The upper bound requires the verifier to have a random sample of the distribution unknown to the prover. Rahul Santhanam, who is hosting my visit to Oxford, asked if the converse was known. Goldreich, Vadhan and Wigderson, in the appendix of their Laconic Prover paper, show a sampling protocol based on the upper bound on the size of a set, though the sample is not completely unknown to the prover. Neat to revisit questions from my first conference paper. 

Oxford CS Librarian Aza Ballard-Whyte

Sunday, February 01, 2026

Before the ChatGPT-HW debate there were other "If students use X to do their HW" debates

Lance and I had a blog debate about What to do about students using ChatGPT to do their homework.

Some commenters pointed out that we've been here before. I will now list past technologies that looked like they were a problem for student assignments and ponder what happened. 

If students can consult diagrams in their text then they will lose the ability to I DON'T KNOW AND I DON'T CARE. I did a post about this titled In the 1960's students protested the Vietnam war!/In 1830 students protested...Math? I suspect that students eventually got to consult their texts. Actually, the entire topic of the geometry of conic sections seems to have gone away.

If students learn how to read then they will lose the ability to listen to their elders tell stories and also lose the ability to memorize. I've heard this was a concern, though I don't really know if it was. In any case, people are probably worse at memorizing than they used to be, but the plus of having books and reading far outweighs the negative of weaker memories.

If students use spellcheck they will forget how to spell. I think people are sloppier with first drafts than they used to be since they know that spellcheck will catch their spelling errors. And even before ChatGPT there were programs to check grammar as well. My spellcheck wants me to replace ChatGPT with catgut. This points to the need to use spellcheck carefully, which foreshadows having to use ChatGPT carefully. My spellcheck does think that spellcheck is spelled correctly.

If students have calculators they will forget that 7*8 equals... hmm, I forgot. I think we ask far fewer questions that depend on calculation than we used to. Do kids in grade school still memorize times tables? If so, then up to what number? In my blog post on Numbers That Look Prime But Aren't, I casually mentioned that students learn up to 12x12, but I do not know if that's true.

SO, for those of you who have kids in grade school, leave a comment on whether they

a) Memorize Times Tables.

b) Learn an algorithm for multiplication (O(n^2) or O(n^{1.58}) or O(n log n)). I used Wikipedia for the pointer to the O(n^{1.58}) algorithm; the entry describes it very well. I also used Wikipedia for the O(n log n) algorithm; that entry just says that there is a galactic algorithm (one that needs very large n to be worth using) and gives neither the algorithm nor a pointer to a paper that has it. (A sketch of the O(n^{1.58}) algorithm appears after this list.)

c) Are allowed calculators on exams.

d) Some combination of the above or something else. 
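
As promised in item (b), here is a minimal Python sketch of the O(n^{1.58}) multiplication algorithm (Karatsuba's). This is my own illustration, not code from the posts or Wikipedia entries mentioned above; the function name and the check at the bottom are made up for the example.

def karatsuba(x: int, y: int) -> int:
    # Illustration only: multiply nonnegative integers with three recursive
    # multiplications instead of four, giving O(n^{log2 3}) = O(n^{1.58}) steps.
    if x < 10 or y < 10:                     # small numbers: ordinary multiply
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    a, b = x >> half, x & ((1 << half) - 1)  # x = a*2^half + b
    c, d = y >> half, y & ((1 << half) - 1)  # y = c*2^half + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # (a+b)(c+d) - ac - bd = ad + bc, the cross term, from one multiplication
    cross = karatsuba(a + b, c + d) - ac - bd
    return (ac << (2 * half)) + (cross << half) + bd

if __name__ == "__main__":
    import random
    x, y = random.getrandbits(256), random.getrandbits(256)
    assert karatsuba(x, y) == x * y          # sanity check against built-in multiply
    print("karatsuba matches built-in multiplication")

The whole trick is in the middle: the cross term ad+bc is recovered from the single product (a+b)(c+d) after subtracting ac and bd, so multiplying two n-bit numbers costs three multiplications of roughly (n/2)-bit numbers instead of four.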


If students use Encyclopedias they will not be using primary sources. Over time encyclopedias came to be seen as primary sources. My proofreader relayed the following to me: When I was in fourth grade we weren't supposed to use encyclopedias for our reports if the library had suitable books. So the trick was to find a topic that the library did not have suitable books on. My proofreader is only a few years older than me, and lived in the same state, but I was allowed to use encyclopedias.


If students use Wikipedia they will not be using primary sources. I don't hear this debated anymore but I am not sure how the issue was resolved, or if it was resolved. If someone who has kids in school knows, please leave a comment. 

Annie and Lance Fortnow had a blog entry about the Wikipedia issue. 

I reviewed a book titled Should You Believe Wikipedia? by Amy Bruckman. The review is here. Spoiler alert: she thinks yes, but I am more skeptical.

I once needed a list of ER-complete problems and asked an expert if there was a survey. He said that the best source was the Wikipedia page. For other examples of Wikipedia being the only source, see this blog post.

A similar issue is referring to papers on arXiv that have not been refereed. That might be the topic for a future blog post. 

If the first programming language is a high-level language then students will not learn assembly code and stuff about how computers really work. This has happened. I think students do not know as much about low-level code as they used to. Is that a problem? This type of concern arises whenever a higher-level language becomes available. Students using ChatGPT to write code for them is another example of this issue, though it also has other problems.

If students learn typing too early they will not learn cursive. I am an early example of this: my handwriting was bad (still is), so I eagerly took a typing class in my school in 5th grade (the class was 13 girls whose parents wanted them to be secretaries, and me), worked really hard at it, and began handing in typed book reports.

The only letters I know how to do in cursive are

 W I L L I A M   G A S A R C H  

and only in that order.

ANYWAY, people have lost the ability to write in cursive, or even write in print neatly. Drew Faust, a former history professor at Harvard (she retired in 2018), pointed out in her article Gen Z Never Learned to Read Cursive that students have a hard time reading cursive.

I ask non-rhetorically: is losing the ability to read or write cursive a problem?

Takeaways: 

1) (From the prior blog on ChatGPT) Grading has been broken for a long time. ChatGPT just makes that more obvious.

2) When a new technology comes along we may have to rethink education.

Wednesday, January 28, 2026

The Fighting Temeraire (Re)visited

The Fighting Temeraire by JMW Turner

A year ago I wrote about an experiment I ran to learn about the modern period of art from ChatGPT. Chatty picked four paintings to discuss and I wrote about Joseph Mallord William Turner's The Fighting Temeraire. To review, the Temeraire fought in the Battle of Trafalgar, but in this painting it's being towed by a steamboat up the Thames to be broken down for parts. I liked the painting because it captured the change in technology from the great sailing ships to boats moving without sails, and how technology can move us from the beautiful to the practical, with parallels to what we see today.

I wrote the post based on high-resolution images but there is nothing like seeing a painting in person. So during a trip into London, we made a pilgrimage to Room 40 of the National Gallery to see the Temeraire up close. The National Gallery is across the street from Trafalgar Square and a short walk from the Thames, the river the Temeraire traveled on its last trip.



I had a new experience with the painting. I could see the brush strokes, and details I missed before, like the people on the steamboat and how its wheels pushed it along the water. More than the details, I felt the emotional experience of seeing this last trip of a great ship. A reminder that no matter how digital our world gets, seeing art in its original form brings the artist's true intentions to mind.

In the same room hangs another JMW Turner masterpiece, Rain, Steam, and Speed - The Great Western Railway.


Rain, Steam, and Speed - The Great Western Railway by Turner

Turner painted The Great Western Railway in 1844, less than a decade after the train line started running. Like the Temeraire, it captures a change in technology: a big fast train moving quickly towards the viewer through a quiet countryside. On the right side of the painting, a bit hard to see even in person, is a man with a horse-drawn plough, and on the left a small boat on the river.

Coincidentally I took the Great Western Railway into London that day, but it's not the same railroad company.

Turner captured a new time in history, in the early days of the industrial revolution, when man could travel faster than by horse on land and by wind on the sea, a reminder of the technological changes we see today. But the visit also reminded me of the importance of the museum: seeing something in person that no digital experience can replicate, in a place where you can focus on art, undistracted by anything else except other art.

Two hundred years from now will someone go to a museum to see art that captures the early days of the AI revolution? And will it be generated by a human?

Saturday, January 24, 2026

Online Talks on Accessible Theorems!

Bogdan Grechuk has written a book, Landscape of 21st Century Mathematics, that came out in 2021. There is a revised version coming out soon. The theme is that he takes theorems whose statements can be understood by a broad audience and describes them in 5–10 pages. No proofs, but lots of context and, most importantly, if you read all 800 pages of it you will know about many areas of mathematics and where to look things up. He is organizing a series of online seminars with accessible talks about great recent theorems, featuring world-renowned mathematicians:
  • January 28, 2026. Prof. Boaz Klartag. “Sphere packing.” Join the talk here.
  • February 11, 2026. Prof. Avi Wigderson. “Expander graphs.” Join the talk here.
  • March 4, 2026. Prof. Assaf Naor. “Sparsest cut.”
  • April 2026. Prof. Harald Helfgott. Date and topic to be decided.
  • May 20, 2026. Prof. Ben Green. “The polynomial Freiman–Ruzsa conjecture.”
More details can be found on the seminar webpage and the blog devoted to great accessible theorems across mathematics.

Thursday, January 22, 2026

Community

I once had a provost who felt that academic departments hindered the university as they tended to silo the faculty. He argued that we should eliminate departments and that doing so would increase cross-disciplinary work. That went nowhere, of course.

He missed a critical point. While departments play an official role in hosting academic programs, they importantly serve as the main community for the faculty within the department. You see your colleagues in the department in meetings, at seminars and just walking by them in the hallways. They are your peers and the ones who hired you and judge your promotion and tenure cases. You will argue with them but you all share a common mission to make your department as strong as possible, so you can attract even better colleagues.

At Oxford, most faculty have a position at one of the 36 colleges, and faculty community comes from the college instead of the department. Many faculty walk from their department offices to have lunch in their college. The colleges are interdisciplinary; there are at most a couple of computer scientists in each college. My visiting fellowship is with Magdalen College, not the CS department, though I plan to spend some time there.

Does the Oxford system lead to more interdisciplinary research, or less research in the field? Not sure, and it's hard to run a controlled experiment.

We all have two communities, one where we work, but also our academic community, the people who do research in the same areas, the people in our field that we collaborate with and see at conferences and workshops. The ones we try to impress with our own research so they may accept our papers, give us grants, nominate us for awards and write good letters for us. That's the real challenge of interdisciplinary research--you hope to impress researchers in both fields, but really you just impress those in the intersection.

In computer science our academic communities are pretty small, usually the conferences where we publish regularly. I was having a conversation with another computer scientist at Oxford. I asked him which conferences he publishes in and he said ICALP B. I said I've published in ICALP A. Completely different!

Monday, January 19, 2026

What to do about students using ChatGPT to do their homework?

Students are using ChatGPT to do their HW. Here are things I've heard and some of my thoughts on the issue (Lance also added some comments). I have no strong opinions on the issue. Some of what I say here applies to any AI or, for that matter, to old-fashioned cheating by having your friend do the homework for you or by going to the web for the answer. (Is ChatGPT going to the web for the answer, but with a much better search tool?)

1) Ban the use of ChatGPT. That might be impossible to enforce.

2) Allow them to use ChatGPT but they must take what it outputs and put it in their own words. This should work in an advanced course where the students want to be there and are mature, but might be problematic in, say, freshman calculus. This is also a problem if you want to see if they can come up with the answer on their own. Lance: How do you know if the words come from AI or the student?

3) Assign problems that ChatGPT does not do well on. I can do this in Ramsey Theory and (to some extent) in Honors Discrete Math, but (a) over time it will be harder, and (b) this may be hard to do in a standard course like calculus. Lance: Pretty much impossible for most undergrad courses. Bill: Slight disagreement: computer science is a fast-moving field where recent papers can be used to get problem sets before ChatGPT has digested the paper. Sometimes.

4) For some assignments have them use ChatGPT and hand in both what ChatGPT output and the student's rewriting of it. If ChatGPT made a mistake they should also report that. Lance: Many would just have AI rewrite the AI. Bill: Alas, true.

5) Revise the curriculum so that the course is about HOW to use ChatGPT intelligently, including cases where it is wrong and the types of problems it gets wrong (do we know that?). Lance: And that changes as models change. Bill: True, but not necessarily bad.

6) Make sure that a large percentage of the grade is based on in-class exams. I don't like this but it may become needed. Lance: I just grade homework for completion instead of correctness so that students don't feel they need to use AI to keep up with their classmates. Bill: I like the idea but they may still use AI just because they are lazy.

7) The in-class exams will have variants of the homework, so if a student used ChatGPT, the exam will test whether they actually learned the material. (I do this anyway and warn the students on day one that I will do this to discourage cheating of any kind.) Lance: This works until the students wear AI glasses. Bill: I can't tell if you are kidding. More seriously, how far off is that? Lance: Not far.

8) Abolish Grades. Lance thinks yes and blogged about it here. Abstractly I wish students were mature enough to not need grades as a motivation. But even putting that aside, we do need some way to tell if students are good enough for a job or for grad school. Lance: We do need that, but grades aren't playing that role well anymore. Bill: What to do about this is a topic for another blog post.

9) In a small class, have the students do the homework by meeting with them and having them tell you the answer, and you can ask follow-up questions. Perhaps have them do this in groups. Lance: Panos Ipeirotis used AI to automate oral questioning. Bill: Looks great, though I'll wait till they work out the bugs.

So, readers, what have been your experiences?

Wednesday, January 14, 2026

Rational Functions Solved!

It's not every day that one of my open problems is solved, especially one that I asked about over three decades ago. Matt Kovacs-Deak, Daochen Wang and Rain Zimin Yang just posted a paper showing that if you have a Boolean function \(f\) and two polynomials \(p\) and \(q\) of degree at most \(d\) such that \(f(x)=p(x)/q(x)\) for every \(x\) of length \(n\), then \(f\) has decision tree complexity at most \(2d^4\).

Noam Nisan and Mario Szegedy had a beautiful paper in the early 90s showing that if a low-degree polynomial approximates a Boolean function, then the decision tree complexity, the number of bits you need to query from the input to decide the output, is bounded by a polynomial in the degree. For rational functions that isn't true: the majority function can be approximated by the ratio of two low-degree polynomials. This was the main tool used by Beigel, Reingold and Spielman when they showed that the complexity class PP is closed under intersection.

Motivated by the complexity class C\(_=\)P\(\cap\)co-C\(_=\)P, I asked Mario whether, if a low-degree rational function exactly computes a Boolean function, that function must have low decision tree complexity. Mario thought about it, said it was an interesting problem, and added it as an open question to the journal version of his paper with Nisan.

I gave the problem to every student who asked me for an open combinatorial problem, as well as to every reasoning AI model I could get my hands on. Iyer et al. posted a paper that examined this problem and proved it for symmetric functions, but the general case remained open until now.

The main insight of Kovacs-Deak, Wang and Yang is to consider the minimum block sensitivity, which is usually an uninteresting property of a function. The majority function, for example, has large maximum and average block sensitivity but low min block sensitivity (it is 1 at the all-ones input). Nevertheless, their proof bounds the min block sensitivity by twice the degree of the rational function and uses it to get a degree reduction of the rational function by querying a small number of bits of the input. Nice.
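
To make the min-versus-max contrast concrete, here is a small brute-force Python sketch, my own illustration and not code from the paper, that computes the block sensitivity of majority on 5 bits at every input. All names are made up for the example. It reports minimum block sensitivity 1 (attained at the all-zeros and all-ones inputs) and maximum block sensitivity 3.

from functools import lru_cache
from itertools import product

def maj(x):
    # Majority of a tuple of bits (n odd, so no ties).
    return 1 if sum(x) > len(x) // 2 else 0

def flip(x, block):
    # Flip the bits of x whose positions are set in the bitmask `block`.
    return tuple(b ^ ((block >> i) & 1) for i, b in enumerate(x))

def block_sensitivity(f, x):
    # bs(f, x): the maximum number of pairwise disjoint blocks of positions
    # such that flipping any one block changes f(x). Brute force, so only
    # suitable for very small n.
    n, fx = len(x), f(x)
    sensitive = [m for m in range(1, 1 << n) if f(flip(x, m)) != fx]

    @lru_cache(maxsize=None)
    def best(avail):
        # Largest packing of sensitive blocks using only positions in `avail`.
        return max((1 + best(avail & ~m) for m in sensitive if (m & ~avail) == 0),
                   default=0)

    return best((1 << n) - 1)

n = 5
bs = [block_sensitivity(maj, x) for x in product((0, 1), repeat=n)]
print(f"MAJ on {n} bits: min block sensitivity {min(bs)}, max {max(bs)}")
# Expected output: min block sensitivity 1, max 3.

The min is 1 because flipping the all-ones input requires changing more than half the bits, so two disjoint sensitive blocks cannot fit; the max is 3 because at a balanced input every single 1 (or 0) is its own sensitive block.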

I always joked that if I told people the complexity application of the problem they would lose interest. The question came out of work with Steve Fenner, Stuart Kurtz and Lide Li on An Oracle Builder's Toolkit. C\(_=\)P is a generalization of co-NP where we ask if a gap-P function (the difference of two #P functions) is zero. So C\(_=\)P\(\cap\)co-C\(_=\)P is a generalization of NP\(\cap\)co-NP. This new theorem implies the technical result that if P = PSPACE unrelativized then P = C\(_=\)P\(\cap\)co-C\(_=\)P relative to a generic oracle. This implies an oracle where P = C\(_=\)P\(\cap\)co-C\(_=\)P and the PH is infinite and another where C\(_=\)P\(\cap\)co-C\(_=\)P has no complete sets.

If you are more of a quantum person, C\(_=\)P\(\cap\)co-C\(_=\)P = PostEQP while PostBQP = PP, and this work shows quite a difference between these classes. PP has complete sets, and if P = PP then the polynomial-time hierarchy and even the counting hierarchy collapse to P.

Sunday, January 11, 2026

Is "smells like" commutative?

1) Smells Like... Something

In many TV shows having to do with murder (and there are plenty of them), I’ve heard the following exchange:

        His breath smells like bitter almonds. So he was poisoned with cyanide.

They’re either saying

        bitter almonds smell like cyanide

or

        cyanide smells like bitter almonds.

If you say X smells like Y, you mean that X is the new smell and Y is the familiar one. However, on these shows, people seem to smell cyanide a lot, yet I've never seen them smell or taste bitter almonds. That's good since bitter almonds can be lethal (see here). So there should be mystery stories where bitter almonds are used and the cops say

             His breath smells like cyanide. So he was poisoned with bitter almonds.

I don’t know what either one smells like.

2) Rotten Eggs

In real life: My Darling grew up in Pittsburgh when it was still a steel-mill city. She said she often smelled something that

        smelled like rotten eggs.

It was sulfur. But in telling me this, she assumed I'd smelled rotten eggs. I haven't. But I have smelled other things that I was told smell like rotten eggs.

I think the phrase

        smells like rotten eggs

is often used by people who’ve never actually smelled rotten eggs.

3) Cardboard and Matzoh

A blog post by Scott (see here), and my post about his post (see here), brought up the question:

        Does matzoh taste like cardboard?

I doubt any of us have actually tasted cardboard.


My proofreader once accidentally did, while eating takeout from a paper container. He says
(1) it doesn’t taste like matzoh, and
(2) it doesn’t taste like food — which matzoh does.


4) Dog Food

I’ve heard the cliché insult:

        Your cooking is so bad that it tastes like dog food.

I’ve never eaten dog food.  Maybe it tastes good.

5) When X Smells Like Y

If someone says X smells like Y, then:

a) If people know what Y smells like but not X, that’s informative.
b) If people know what X smells like but not Y, that’s not informative.
c) If I hear that X smells like rotten eggs and Y smells like rotten eggs, then I know X and Y smell the same — even though I don't know what rotten eggs smell like. Oh wait — I do. They smell like X or Y!

6) How do the following fit into this discussion?

a) The Nirvana song Smells Like Teen Spirit, video here.
b) The Weird Al song Smells Like Nirvana, video here.