Wednesday, February 18, 2026

Joe Halpern (1953-2026)

Computer Science Professor Joseph Halpern passed away on Friday after a long battle with cancer. He was a leader in mathematical reasoning about knowledge. His paper with Yoram Moses, Knowledge and Common Knowledge in a Distributed Environment, received both the 1997 Gödel Prize and the 2009 Dijkstra Prize. Halpern also co-authored a comprehensive book on the topic.

Halpern helped create a model of knowledge representation consisting of a set of states of the world, where each person has a partition of the states into cells: two states lie in the same cell if that person can't distinguish between them. You can use this system to define knowledge and common knowledge, and to model problems like the muddy children puzzle. It also serves as a great framework for temporal logic.
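This partition semantics is easy to play with in code. Here's a minimal sketch; the states, agents, and partitions below are made up for illustration, not taken from Halpern's work:

```python
# Toy partition model of knowledge (illustrative example).
# States of the world; each agent partitions the states into cells of
# states they cannot distinguish.
states = {1, 2, 3, 4}
partitions = {
    "alice": [{1, 2}, {3, 4}],
    "bob":   [{1}, {2, 3}, {4}],
}

def cell(agent, state):
    """The cell of the agent's partition containing the given state."""
    return next(c for c in partitions[agent] if state in c)

def knows(agent, event, state):
    """An agent knows event E at state s iff every state the agent
    considers possible (same cell as s) is in E."""
    return cell(agent, state) <= event

def everyone_knows(event):
    """The set of states at which every agent knows the event."""
    return {s for s in states if all(knows(a, event, s) for a in partitions)}

def common_knowledge(event, state):
    """Common knowledge: iterate 'everyone knows' to a fixed point."""
    e = event
    while True:
        e2 = everyone_knows(e)
        if e2 == e:
            return state in e
        e = e2
```

Here the whole state space is common knowledge everywhere, but the event {1, 2} is not common knowledge even at state 1, where both agents individually know it: Bob can't rule out that Alice can't rule out state 3.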

Halpern led the creation of the Computing Research Repository (CoRR), a forerunner of arXiv, and would later moderate CS papers for arXiv. 

Joe Halpern was the driving force behind the Theoretical Aspects of Rationality and Knowledge (TARK) conference, which attracts philosophers, economists, computer scientists and others to discuss what it means to know stuff. I had two papers in TARK 2009 at Stanford. But my favorite TARK memory comes from a debate at the 1998 TARK conference at Northwestern.

Consider the centipede game, where two players alternate turns; on each turn the mover can either play right (R/r), or defect (D/d) to end the game immediately, with payouts in the diagram below.

The game is solved by backward induction: in each subgame, the player to move does better by defecting.
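The backward-induction computation itself is tiny. Here is a sketch in Python with a standard textbook choice of payoffs, since the diagram's actual payouts aren't reproduced here:

```python
# Backward induction on a small centipede game.
# defect_payoffs[i] = (player 1 payoff, player 2 payoff) if the mover
# defects at node i; player 1 moves at even nodes, player 2 at odd nodes.
# These payoffs are illustrative, not the ones from the diagram.
defect_payoffs = [(1, 0), (0, 2), (3, 1), (2, 4)]
pass_payoff = (5, 3)  # payoffs if both players pass at every node

def backward_induction():
    """Solve from the last node to the first: at each node the mover
    compares defecting now against the outcome of the rest of the game."""
    outcome = pass_payoff
    plan = []
    for i in reversed(range(len(defect_payoffs))):
        mover = i % 2  # 0 = player 1, 1 = player 2
        if defect_payoffs[i][mover] >= outcome[mover]:
            outcome = defect_payoffs[i]
            plan.append((i, "defect"))
        else:
            plan.append((i, "pass"))
    return outcome, list(reversed(plan))

outcome, plan = backward_induction()
```

With these payoffs every rational mover defects, so the subgame-perfect outcome is that player 1 defects immediately for (1, 0), even though both players would do better if everyone passed.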

The debate asked the following: to carry out the backward induction, player 1 must reason about player 2's second move, which presumes player 2 played right on its first move. But playing right is an irrational move, so why should player 1 assume player 2 will be rational on its second move later on?

Someone said such reasoning is fine, like when we assume that square root of two is rational, in order to get a contradiction. The counter argument: Square root of two does not "choose" to be irrational.

Thank you Joe for helping us think about knowledge and giving us the forums to do so.

Sunday, February 15, 2026

Assigning Open Problems in Class

I sometimes assign open problems as extra credit problems. Some thoughts:

1) Do you tell the students the problems are open?

YES- it would be unfair for a student to work on something they almost surely won't get.

NO- Some Open Problems are open because people are scared to work on them. Having said that, I think P vs NP is beyond the one-smart-person phase, or even the if-they-don't-know-it's-hard-maybe-they-can-solve-it phase.

NO- See Page 301 of this interview with George Dantzig where he talks about mistaking an open problem for a homework problem and ... solving it.

CAVEAT---There are OPEN PROBLEMS!!! and there are open problems???  If I make up a problem, think about it for 30 minutes, and can't solve it, it's open but might not be hard. See next point. 

 I tell the students:

This is a problem I made up but could not solve. It may be that I am missing just one idea or combination of ideas so it is quite possible you will solve it even though I could not. Of course, it could be that it really is hard. 

A friend of mine who is not in academia thought that telling the students I came up with a problem I could not solve, but maybe they can, is a terrible idea. He said that if a student solves it, they will think worse of me. I think he's clearly wrong. If I am enthusiastic about their solution and give NO indication that I was close to solving it (even if I was), then there is no way they would think less of me.

Is there any reason why telling the students I could not solve it but they might be able to is a bad idea? 

2) Should Extra Credit count towards the grade? (We ignore the far more serious problems with grading caused by whatever seems to make it obsolete: calculators, Cliff notes, cheating, encyclopedias, Wikipedia, the Internet, ChatGPT, other AI, your plastic pal who's fun to be with.)

No- if they count towards the grade then they are not extra credit. 

I tell the students they DO NOT count for the grade but they DO count for a letter I may write them.

What do you do? 

Thursday, February 12, 2026

The Future of Mathematics and Mathematicians

A reader worried about the future.

I am writing this email as a young aspiring researcher/scientist. We live in a period of uncertainty and I have a lot of doubts about the decisions I should make. I've always been interested in mathematics and physics and I believe that a career in this area would be a fulfilling one for me. However, with the development of AI I'm starting to have some worries about my future. It is difficult to understand what is really happening. It feels like everyday these models are improving and sooner rather than later they will render us useless.

I know that the most important objective of the study of mathematics/science is understanding and that AI will not take that pleasure away from us. But still I feel that it will be removing something fundamental to the discipline. If we have a machine that is significantly better than us at solving problems doesn't science lose a part of its magic? The struggle that we face when trying to tackle the genuinely difficult questions is perhaps the most fascinating part of doing research.

I would very much like to read about your opinion on this subject. Will mathematicians/scientists/researchers still have a role to play in all of this? Will science still be an interesting subject to pursue?

There are two questions here, the future of mathematics and the future of mathematicians. Let's take them separately.

It looks like the same letter went to Martin Hairer and was highlighted in a recent NYT article about the state of AI doing math and the too-early First Proof project. According to Hairer, "I believe that mathematics is actually quite 'safe'; I haven't seen any plausible example of an L.L.M. coming up with a genuinely new idea and/or concept."

I don't disagree with Hairer, but the state of the art can change quickly. A few months ago I would have said that AI had yet to prove a new theorem; that's no longer true.

It's too early to tell just how good AI will get at mathematics. Right now it is like an early-career graduate student: good at literature search, formalizing proofs, writing paper drafts, exploring known concepts and holding simple mathematical discussions. But no matter how good it gets, mathematics is a field of infinite concepts; there will always be more to explore, as Gödel showed. I do hope AI gets strong at finding new concepts and novel proofs of theorems, and that I'm still here to see new math that might not otherwise have happened.

For mathematicians the future looks more complicated. AI may never come up with new ideas and Hairer might be right. Or AI could become so good at theorem proving that if you give a conjecture to AI and it can't resolve it, you might as well not try. The true answer is likely in-between and we'll get there slower rather than faster. 

People go into mathematics for different reasons. Some find joy in seeing new and exciting ideas in math. Some like to create good questions and models to help us make sense of mathematical ideas. Some enjoy the struggle of proving new theorems, working against an unseen force that seems to spoil your proofs until you finally break through, and impressing their peers with their prowess. Some take pleasure in education, exciting others in the importance and excitement of mathematics. These will all evolve with advances in AI and the mathematician's role will evolve with it.

My advice: Embrace mathematics research but be ready to pivot as AI evolves. We could be at the precipice of an incredible time for mathematical advances. How can you not be there to see it? And if not, then math needs you even more.

Sunday, February 08, 2026

I used to think historians in the future will have too much to work with. I could be wrong

(I thought I had already posted this but the blogger system we use says I didn't. Apologies if I did. Most likely I posted something similar. When you blog for X years you forget what you've already blogged about.)

Historians who study ancient Greece often have to work with fragments of text or just a few pottery shards. Nowadays we preserve so much that historians 1000 years from now will have an easier time. Indeed, they may have too much to look at; and have to sort through news, fake news, opinions, and satires, to figure out what was true.

The above is what I used to think. But I could be wrong. 

1) When technology changes stuff is lost. E.g., floppy disks.

2) (This is the inspiration for this blog post) Harry Lewis gave a talk in Zurich on 

The Birth of Binary: Leibniz and the origins of computer arithmetic

On Dec. 8, 2022 at 1:15PM-3:30PM Zurich time. I didn't watch it live (too early in the morning, east coast time) but it was taped and I watched a recording later. Yeah!

His blog about it (see here) had a pointer to the video, and my blog about it (see here) had a pointer to both the video and to his blog.

A while back  I was writing a blog post where I wanted to point to the video. My link didn't work. His link didn't work. I emailed him asking where it was. IT IS LOST FOREVER! Future Historians will not know about Leibniz and binary! Or they might--- he has a book on the topic that I reviewed here. But what if the book goes out of print and the only information on this topic is my review of his book? 

3) Entire journals can vanish. I blogged about that here.

4) I am happy that the link to the Wikipedia entry on Link Rot (see here) has not rotted.

5) I did a post on what tends to NOT be recorded and hence may be lost forever here.

6) (This is a bigger topic than my one point here.) People tend to OWN less than they used to.


DVDs- don't bother buying! Whatever you want is on streaming. (I recently watched, for the first time, Buffy the Vampire Slayer, one episode a day on the treadmill, and it was great!)

CDs- don't bother buying! Use Spotify. I do that and it's awesome- I have found novelty songs I didn't know about! Including a song by The Doubleclicks which I thought was about Buffy: here. I emailed them about that and they responded with: Hello! Buffy, hunger games, divergent, Harry Potter, you name it.

JOURNALS- don't bother buying them, it's all on arXiv (very true in TCS, might be less true in other fields).

CONFERENCES: Not sure. I think very few have paper proceedings. At one time they gave out memory sticks with all the papers on them, so that IS ownership, though it depends on technology that might go away. Not sure what they do now.

This may make it easier to lose things since nobody has a physical copy. 

7) Counterargument: Even given the points above, far more today IS being preserved than used to be. See my blog post on that here. But will that be true in the long run? 

8) I began by saying that I used to think future historians will have too much to look at and will have to sort through lots of stuff (using quicksort?) to figure out what's true. Then I said they may lose a lot. Oddly enough, both might be true: of the stuff they DO have, they will have a hard time figuring out what's true. (E.g., was Pope Leo's ugrad thesis on Rado's Theorem for Non-Linear Equations? No. See my blog about that falsehood getting out to the world here. Spoiler alert: it was my fault.)

QUESTIONS:

1) Am I right--- will the future lose lots of stuff?

2) If so, what can we do about this? It's not clear who we is in that last sentence.

Wednesday, February 04, 2026

Sampling the Oxford CS Library

Wandering around the maze known as the Computer Science building at Oxford, I found the computer science library. Rarely these days do you see a library (and a librarian) devoted to computer science. The librarian found their copy of The Golden Ticket and asked me to inscribe and sign it, just like at Dagstuhl, perhaps the only other active CS library I know of.

It brought back memories of the early 90s when I would often head to the Math/CS library at the University of Chicago to track down some conference or journal paper. Now we just click and download, but you miss finding something else interesting in the proceedings or the stacks in general.

I had time to kill so I wandered around the library finding memories in the stacks including the 1987 STOC Proceedings, home to my first conference paper, The complexity of perfect zero-knowledge. The paper might be best known for my upper bound protocol which is republished here in its entirety. 

That's how I wrote it nearly four decades ago: without proof, just an intuition for why it works. Those were the days. I did work out the full covariance argument in the journal version, though I missed other bugs in the proof.

The upper bound requires the verifier to have a random sample of the distribution unknown to the prover. Rahul Santhanam, who is hosting my visit to Oxford, asked if the converse was known. Goldreich, Vadhan and Wigderson, in the appendix of their Laconic Prover paper, show a sampling protocol based on the upper bound on the size of a set, though the sample is not completely unknown to the prover. Neat to revisit questions from my first conference paper. 

Oxford CS Librarian Aza Ballard-Whyte

Sunday, February 01, 2026

Before the ChatGPT-HW debate there were other ``If students use X to do their HW'' debates

Lance and I had a blog-debate about What to do about students using ChatGPT to do their Homework.

Some commenters pointed out that we've been here before. I will now list past technologies that looked like they were a problem for student assignments and ponder what happened. 

If students can consult diagrams in their text then they will lose the ability to I DON'T KNOW AND I DON'T CARE. I did a post about this titled In the 1960's students protested the Vietnam war!/In 1830 students protested...Math? I suspect that students eventually got to consult their texts. Actually, the entire topic of the geometry of conic sections seems to have gone away.

If students learn how to read then they will lose the ability to listen to their elders tell stories and also lose the ability to memorize. I've heard this was a concern, though I don't really know if it was. In any case people are probably worse at memorizing than they used to be, but the plus of having books and reading far outweighs the negative of weaker memories.

If students use spellcheck they will forget how to spell. I think people are sloppier with first drafts than they used to be, since they know that spellcheck will catch their spelling errors. And even before ChatGPT there were programs to check grammar as well. My spellcheck wants me to replace ChatGPT with catgut. This points to the need to use spellcheck carefully, which foreshadows having to use ChatGPT carefully. My spellcheck does think that spellcheck is spelled correctly.

If students have calculators they will forget that 7*8 equals... hmm, I forgot. I think we ask far fewer questions that depend on calculation than we used to. Do kids in grade school still memorize times tables? If so, then up to what number? In my blog post on Numbers That Look Prime But Aren't, I casually mentioned that students learn up to 12x12, but I do not know if that's true.

SO- for those of you who have kids in grade school, leave a comment on if they 

a) Memorize Times Tables.

b) Learn an algorithm for multiplication (O(n^2), O(n^{1.58}), or O(n log n)). I used Wikipedia for the pointer to the O(n^{1.58}) algorithm; the entry describes the algorithm very well. I also used Wikipedia for the O(n log n) algorithm. That entry just says that there is a galactic algorithm (one that needs very large n to be worth using); it does not give the algorithm or a pointer to a paper that has it.

c) Are allowed calculators on exams.

d) Some combination of the above or something else. 
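For the curious, the O(n^{1.58}) algorithm mentioned in (b) is Karatsuba's. A minimal sketch on Python integers (base case and splitting point chosen for simplicity, not speed):

```python
# Karatsuba multiplication: the O(n^{log_2 3}) ~ O(n^{1.58}) algorithm,
# sketched on nonnegative Python integers.

def karatsuba(x, y):
    """Multiply x and y using three recursive multiplications
    instead of the four that the grade-school split would need."""
    if x < 10 or y < 10:              # base case: a single-digit factor
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    p = 10 ** m
    xh, xl = divmod(x, p)             # split x = xh*p + xl
    yh, yl = divmod(y, p)             # split y = yh*p + yl
    a = karatsuba(xh, yh)             # high parts
    c = karatsuba(xl, yl)             # low parts
    b = karatsuba(xh + xl, yh + yl) - a - c   # middle term, one multiply
    return a * p * p + b * p + c
```

The trick: (xh + xl)(yh + yl) - xh*yh - xl*yl equals the middle term xh*yl + xl*yh, so three recursive multiplications suffice instead of four, which is where the n^{log_2 3} exponent comes from.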


If students use Encyclopedias they will not be using primary sources. Over time encyclopedias became seen as primary sources. My proofreader relayed the following to me: When I was in fourth grade we weren't supposed to use encyclopedias for our reports if the library had suitable books. So the trick was to find a topic that the library did not have suitable books on. My proofreader is only a few years older than me, and lived in the same state, but I was allowed to use encyclopedias.


If students use Wikipedia they will not be using primary sources. I don't hear this debated anymore but I am not sure how the issue was resolved, or if it was resolved. If someone who has kids in school knows, please leave a comment. 

Annie and Lance Fortnow had a blog entry about the Wikipedia issue. 

I reviewed a book titled Should you Believe Wikipedia? by Amy Bruckman. The review is here. Spoiler alert: She thinks yes but I am more skeptical.

I once needed a list of ER-complete problems and asked an expert if there was a survey. He said that the best source was the Wikipedia page. For other examples of Wikipedia being the only source  see this blog post.

A similar issue is referring to papers on arXiv that have not been refereed. That might be the topic for a future blog post. 

If the first programming language is a high-level language then students will not learn assembly code and how computers really work. This has happened. I think students do not know as much about low-level code as they used to. Is that a problem? This type of concern arises whenever a higher-level language becomes available. Students using ChatGPT to write code for them is another example of this issue, though it also has other problems.

If students learn typing too early they will not learn cursive. I am an early example of this: my handwriting was bad (still is), so I eagerly took a typing class in my school in 5th grade (the class was 13 girls whose parents wanted them to be secretaries, and me), worked really hard at it, and began handing in typed book reports.

The only letters I know how to do in cursive are

 W I L L I A M   G A S A R C H  

and only  in that order.  

ANYWAY, people have lost the ability to write in cursive, or even to write neatly in print. Drew Faust, a former history professor at Harvard (she retired in 2018), points out that students have a hard time reading cursive in her article Gen Z Never Learned to Read Cursive.

I ask non-rhetorically: is losing the ability to read or write cursive a problem?

Takeaways: 

1) (From the prior blog on ChatGPT) Grading has been broken for a long time. ChatGPT just makes that more obvious.

2) When a  new technology comes along we may have to rethink education. 

Wednesday, January 28, 2026

The Fighting Temeraire (Re)visited

The Fighting Temeraire by JWM Turner

A year ago I wrote about an experiment I ran to learn about the modern period of art from ChatGPT. Chatty picked four paintings to discuss and I wrote about Joseph Mallord William Turner's The Fighting Temeraire. To review, the Temeraire fought in the Battle of Trafalgar, but in this painting it's being towed by a steamboat up the Thames to be broken down for parts. I liked the painting because it captured the change in technology from the great sailing ships to boats moving without sails, and how technology can move us from the beautiful to the practical, with parallels to what we see today.

I wrote the post based on high-resolution images but there is nothing like seeing a painting in person. So during a trip into London, we made a pilgrimage to Room 40 of the National Gallery to see the Temeraire up close. The National Gallery is across the street from Trafalgar Square and a short walk from the Thames, which the Temeraire traveled on its last trip.


I had a new experience with the painting. I could see the brush strokes, and details I missed before, like the people on the steamboat and how its wheels pushed it along the water. More than that, I felt the emotion of seeing the last trip of a great ship. A reminder that no matter how digital our world gets, seeing art in its original form brings the artist's true intentions to mind.

In the same room hangs another JMW Turner masterpiece, Rain, Steam, and Speed - The Great Western Railway.


Rain, Steam, and Speed - The Great Western Railway by Turner

Turner painted The Great Western Railway in 1844, less than a decade after the train line started running. Like the Temeraire, it captures a change in technology: a big fast train moving quickly towards the viewer in a quiet countryside. On the right side of the painting, a bit hard to see even in person, is a man with a horse-drawn plough, and on the left a small boat on the river.

Coincidentally I took the Great Western Railway into London that day, but it's not the same railroad company.

Turner captured a new time in history, where man could travel faster than by horse on land and by wind on the sea, in the early days of the industrial revolution, a reminder of the technological changes we see today. But also the importance of the museum, of seeing something in person that no digital experience can replicate and a location where you can focus on art, undistracted by anything else, except other art.

Two hundred years from now will someone go to a museum to see art that captures the early days of the AI revolution? And will it be generated by a human?

Saturday, January 24, 2026

Online Talks on Accessible Theorems!

Bogdan Grechuk has written a book, Landscapes of 21st Century Mathematics, that came out in 2021; a revised version is coming out soon. The theme is that he takes theorems whose statements can be understood and describes them in 5–10 pages each. No proofs, but lots of context and, most importantly, if you read all 800 pages of it you will know about many areas of mathematics and where to look things up. He is organizing a series of online seminars with accessible talks about great recent theorems, featuring world-renowned mathematicians:
  • January 28, 2026. Prof. Boaz Klartag. “Sphere packing.” join the talk here.
  • February 11, 2026. Prof. Avi Wigderson. “Expander graphs.” join the talk here.
  • March 4, 2026. Prof. Assaf Naor. “Sparsest cut.”
  • April 2026. Prof. Harald Helfgott. Date and topic to be decided.
  • May 20, 2026. Prof. Ben Green. “The polynomial Freiman–Ruzsa conjecture.”
More details can be found on the seminar webpage and the blog devoted to great accessible theorems across mathematics.