Tuesday, March 10, 2026

Tony Hoare (1934-2026)

Turing Award winner and former Oxford professor Tony Hoare passed away last Thursday at the age of 92. Hoare is famous for quicksort, ALGOL, Hoare logic and so much more. Jim Miles gives his personal reflections.

Jill Hoare, Tony Hoare, Jim Miles. Cambridge, 7 September 2021

Last Thursday (5th March 2026), Tony Hoare passed away at the age of 92. He made many important contributions to Computer Science, going well beyond the one for which most Maths/CompSci undergraduates might know his name: the quicksort algorithm. His achievements in the field are covered comprehensively in easy-to-find books and articles, and I am sure they will be addressed in detail as obituaries are published over the coming weeks. I was invited to write this entry remembering the Tony that I knew, so here I will write about his personality as I saw it on the occasions we met.

I visited Tony Hoare several times over the past 5 years, as we both live in Cambridge (UK) and it turned out that my family knew his. As a Mathematics graduate, I was very keen to meet the great man and learn about his life from him directly. I was further prompted by a post on this blog which mentioned Tony a few times and summarised a relevant portion of his work. I took a printout of that entry the first time I visited him to help break the ice - it is the green sheet of paper in the picture above.

Tony read the entry and smiled, clearly recalling the material of his that it referenced, and then elaborated a bit, explaining how vastly programs had scaled up in a rather short space of time and how they typically require methods different from many of those he had been developing in the early days.

I was aware that Tony had studied Classics and Philosophy at university, so I was keen to learn how one thing had led to another in the development of his career. He explained that after completing his degree he had been intensively trained in Russian on the Joint Services School for Linguists programme, and that he was personally very interested in statistics as well as the emerging and exciting world of computers. So after his National Service (which was essentially the JSSL) he took on a job 'demonstrating' an early type of computer around the world, especially in the Soviet Union. He described the venues of these demonstrations as 'fairs', though I suppose we might now call them 'expos'. In a sense this seemed like a very modest description of his job, when in fact - reading up on Tony's career - he was also involved in developing code for these devices. But perhaps that's a historical quirk of the period: being a demonstrator of these machines meant really knowing them inside and out, to the point of serving on the development team (AND, one might deduce, being fluent in Russian!).

Tony would tell these stories with a clarity and warmth that made it clear he was certainly still entirely 'all there' mentally, and that his memory was pinpoint sharp, even if there were some physical health issues, typical for anyone who makes it so far into their 80s (and, as we now know, beyond!).

A story that I was determined to hear from the source was the legendary quicksort 'wager'. The story goes that Tony told his boss at Elliott Brothers Ltd that he knew a faster sorting algorithm than the one he had just implemented for the company. He was told 'I bet you sixpence you don't!'. Lo and behold, quicksort WAS faster. I asked Tony to tell this story pretty much every time we met, because I enjoyed it so much and it always put a smile on both of our faces. To his credit, Tony never tired of telling me this story 'right from the top'. I had hoped to visit again in the past year and record him telling it so that there was a record, but unfortunately this did not happen. However, I discovered that it is indeed recorded elsewhere. One detail I might be able to add: I asked Tony whether the wager was actually paid out or whether it had merely been a figure of speech. He confirmed that he WAS paid the wager (!). A detail of this story that I find particularly reflective of Tony's humble personality is that he went ahead and implemented the slower algorithm he had been asked to, even while believing quicksort to be faster, before chiming in with that belief. It speaks to a professionalism that Tony always carried.

About half of our meetings were spent talking about these matters relating to his career, while the rest ranged across a vast array of topics. In particular, I wanted to ask him about a story I had heard from a relative: that Tony - whilst working at Microsoft in Cambridge - liked to slip out some afternoons and watch films at the local Arts Picturehouse. This had come to light because on one occasion a then-current film was brought up in conversation and it transpired that Tony had seen it, much to the bemusement of some present. The jig was up - Tony admitted that, yes, sometimes he would nip out on an afternoon and visit the cinema. When I met Tony and gently questioned him about this anecdote, he confirmed that it was indeed one of his pleasures and that his position at Microsoft more than accommodated it.

On the topic of films, I wanted to follow up with Tony about a quote I have seen online attributed to him concerning Hollywood's portrayal of geniuses, often in relation to Good Will Hunting. A typical example is: "Hollywood's idea of genius is Good Will Hunting: someone who can solve any problem instantly. In reality, geniuses struggle with a single problem for years". Tony agreed with the idea that cinema often misrepresents ability in abstract fields such as mathematics, which is learned over countless hours of thought rather than - as the movies like to make out - imparted, unexplained, to people of 'genius'. However, he was unsure where exactly he had said this or how/why it had gotten onto the internet, and he agreed that online quotes on the subject attributed to him may well be erroneous.

One final note I would like to share from these meetings with Tony is perhaps the most intriguing thing he said, but also the one he delivered with the greatest outright confidence. In a discussion about the future development of computers - whether we are reaching the limits of Moore's Law, whether Quantum Computers will be required to reinvigorate progress, and other rather shallow and obvious hardware talking points raised by me in an effort to spark Tony's interest - he said 'Well, of course, nothing we have even comes close to what the government has access to. They will always be years ahead of what you can imagine'. When pressed on this, in particular whether he believed such technology to be on the scale of solving the large-number factorisation problems on which the world's cryptographic protocols are based, he was cagey and shrugged enigmatically. One wonders what he had seen; or perhaps he was engaging in a bit of knowing trolling - Tony had a fantastic sense of humour and was certainly capable of leading me down the garden path with irony and satire before I realised a joke was being made.

I will greatly miss this humour, patience, and sharpness of mind, as I will miss everything else about Tony.

RIP Tony Hoare (11 January 1934 - 5 March 2026)

Sunday, March 08, 2026

How does AI do on Baseball-Brothers-Pitchers?

In my graduate Ramsey Theory class I taught Kruskal's tree theorem (KTT), which was proved by Joe Kruskal in his PhD thesis and published in 1960. (Should that be in a graduate Ramsey Theory class? There are not enough people teaching such a course to get a debate going.) A simpler proof was discovered (invented?) by Nash-Williams in 1963.

The theorem is that the set of finite trees under the homeomorphic embedding ordering is a well quasi order.
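
For readers who haven't seen the terms, here is the statement spelled out (my paraphrase of the standard formulation):

    A quasi order (X, ≤) is a well quasi order (wqo) if every infinite
    sequence x_1, x_2, x_3, ... of elements of X contains indices i < j
    with x_i ≤ x_j.

    KTT: Define T_1 ≤ T_2 to mean that the finite tree T_1 embeds
    homeomorphically into T_2. Then the finite trees under ≤ form a wqo.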

But this blog post is not about well quasi orderings. It's about baseball brothers and AI.

The Kruskals are one of the best math families of all time. See my post on math families. The Bernoullis are another great math family. What makes both families so great is that they had at least THREE great math people. Most great math families have only two.

Having taught the KTT and talked briefly about math families, I was curious how ChatGPT would do on the better-defined question of the largest number of wins by a pair of brothers in baseball. So I asked my students to look that up, and to include which tools they used, as a HW problem (worth 0 points, but they had to do it).

I wrote up 9 pages on what the answer is (there are some issues) and what the students' answers were. See my writeup.

In case those 9 pages are tl;dr, here are the main takeaways:

1) 7 of the answers given were just WRONG no matter how you look at it.

2) 7 gave either the Clarksons, who played in the 1880s and hence don't count as modern baseball, or a trio rather than a pair of brothers, or both. Even so, one could argue these answers are correct.

3) 1 got it right. (That's 'one got it right', not 'I, bill g, got it right'. It can be hard to distinguish the numeral one, the capital letter I, and the lowercase letter L. I blogged about L vs I here in the context of Weird AI vs Weird Al.)


4) There were 13 different answers, which I find amazing.

As usual, when I study some odd issue, I learn a few other things of interest, at least to me, which may also have life lessons, though YMMV. 

a) Around 85% of pitchers in the Hall of Fame have won over 200 games. Dizzy Dean (whose brother was also a pitcher, which is why I was looking at him) got into the Hall of Fame with only 150 wins. Why? For 6 years he was the most dominant pitcher in the game. In addition, (1) there was some sympathy for him since his career was cut short by an injury he suffered in an All-Star game, and (2) he had a colorful personality and people liked him. The four lowest win totals for Hall of Fame pitchers are below (W-L means Wins-Losses; for example, Candy Cummings won 145 games and lost 94):

Candy Cummings (W-L 145-94), who is said to have invented the curveball.

Dizzy Dean (W-L 150-83).

Addie Joss (W-L 160-97), who played only 9 years, one short of the 10 needed for HOF eligibility, but was given an exception because he died young.

Sandy Koufax (W-L 165-87), who, like Dean, was dominant for a short time. Sandy is the only one of the four still alive.


b) Modern baseball starts in 1900, but this is arbitrary. In 1904 Jack Chesbro won 41 games, which would never happen in modern baseball. But you have to draw the line someplace. History is hard because there are fuzzy boundaries.

c) I had not known about the Clarkson brothers. 

d) My interest in this subject goes back to 1973. In that year I heard the following during a baseball game and, ever since I began blogging, I have wondered how I could fit it into a blog post:


An old baseball trick question is now gone!  Just last week [in 1973] if you asked

What pair of brothers in baseball won the most games?

the answer was Christy and Henry Mathewson. Christy played for 16 seasons and had a W-L record of 373-188, while his brother Henry played in 3 games and had a W-L record of 0-1. So their total number of wins is 373, which was the record.

But this last week [in 1973] the Perry brothers, Gaylord and Jim, got 374 wins between them.  Hence the question

What pair of brothers in baseball won the most games?

is no longer a trick question since both Perrys are fine pitchers [Gaylord made it into the Hall of Fame; Jim didn't, but Jim was still quite good.]

As you will see in my writeup, the Perrys' record was broken by the Niekros.

e) Christy and Henry are a trick answer because Henry wasn't much of a player, with his 0-1 W-L record. Are there other pairs like that? Greg (355 wins) and Mike (39 wins) Maddux might seem that way, but they are not. While Mike had only 39 wins, he had a 14-year career as a relief pitcher. Such pitchers can be valuable to the team, but because of their role they do not get many wins.



SO, the usual question: Why did AI get this one wrong? Lance will say that the paid version wouldn't have, and he may be right. David in Tokyo might say that ChatGPT is a random word generator. Others tell me that ChatGPT does not understand the questions it is asked (my proofreader thinks this is obvious). I'll add that the pitcher-brother question has more ambiguity than I thought: do you count pitchers before 1900? Do you count a brother who won 0 games? What about 3 brothers (not the restaurant)?

Wednesday, March 04, 2026

The Purpose of Proofs

In discussions of AI and Mathematics, the conversation often turns to mathematical proofs, such as the First Proof challenge. So let's look at the role of proofs in mathematics.

Without a proof, you don't even know whether a theorem is true or false. Indeed, it's not a theorem until you have a proof, just a conjecture or hypothesis. You might have some intuition, but you don't know the hardness of a proof until you find it. Even then, that only gives you an upper bound on hardness, as someone might find a simpler alternative proof in the future.

Proofs give an understanding of why a theorem is true. A proof can look like a piece of art, especially if it works in a new or clever way, maybe even a proof from the book like Kelly's proof of the Sylvester–Gallai theorem. Mathematicians often have their most satisfying experiences finding a proof of a theorem that they or others had struggled to prove.

Conjectures drive the need for proofs, but often it goes in the other direction. A failed proof leads to a new conjecture and maybe a different theorem altogether. My time-space tradeoffs came out of a failed attempt to show NP different from L. Too often I see papers whose theorems are really just whatever the best proof the authors could find happens to achieve.

Proofs are supposedly objective: either a proof is valid or it isn't, which is the biggest reason AI tackles proofs, since we can grade them as right or wrong. However, most academic papers have high-level proofs, and a referee needs to make a subjective decision about whether a proof has enough detail to count as legitimate. If a proof has minor fixable errors, then strictly speaking it isn't a proof, but we usually count it as one.

Now we have proof systems like Lean where we can make proofs truly objective. There's a large mathlib project to formalize much of mathematics in Lean, and talk of a cslib as well. 
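
As an illustration, here is roughly the smallest possible machine-checked proof, a toy Lean 4 sketch of my own (core library only, no mathlib): once it compiles, the kernel has verified it, with no referee judgment required.

    -- 0 + n = n, proved by induction on n and checked by Lean's kernel.
    theorem my_zero_add (n : Nat) : 0 + n = n := by
      induction n with
      | zero => rfl
      | succ k ih => rw [Nat.add_succ, ih]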

Lean excites me less; I'm not sure what we gain by formalizing proofs besides certainty. Will Fermat's last theorem be any more true once we formalize it? One could argue Lean also helps AI reason about mathematics in a grounded way, though could AI just get lost in the logical weeds?

I worry that the heavy task of converting proofs to Lean, even with AI help, takes away from time spent finding and proving new theorems, and that we lose the beauty of the proofs. If there were a fully Lean-verified proof of P ≠ NP that you couldn't understand, would you find it satisfying?

Sunday, March 01, 2026

Goodhart's law: Ken Jennings and Types of Knowledge

Goodhart's law: When a measure becomes a target, it stops being a measure.

I was watching the show Masterminds where Ken Jennings is one of the Masterminds. Here is what happened: 

Brooke Burns (the host): The only vice president in the 20th century with initials H.H. was Hubert Humphrey. Who was the only vice president in the 19th century with initials H.H.?

Bill Gasarch shouting at the screen: Hannibal Hamlin, Lincoln's VP for his first term! This is really obscure so this may be a rare case where I get a question right that Ken Jennings does not know!

Ken Jennings: Hannibal Hamlin. 

Bill Gasarch: Darn! However, I suspect he studied lists of presidents and vice presidents for the purpose of doing well on Jeopardy and now other shows. My knowledge is more natural, in that I read books on presidents and vice presidents (the best book, maybe the only book, on Vice Presidents: Bland Ambition). My knowledge is different from his. I tend to think my knowledge is more legitimate, though it would be hard to make that statement rigorous. If quiz shows asked follow-up questions, that might help alleviate this problem, if it is indeed a problem.

Misc: 

Ken Jennings is a Mormon, so he does not drink or know about drinks. However, before going on Jeopardy he studied drinks for the show.

Ken Jennings does know novelty songs, and that is legit, since novelty songs come up rarely on Jeopardy, so I doubt he studied them in his preparation for going on the show.

a) He knew that Shel Silverstein wrote A Boy Named Sue, which was sung by Johnny Cash.

b) He knew that Johnny Cash did a cover of Shel Silverstein's I'm being swallowed by a boa constrictor.

c) During his streak there was a category on novelty songs. He got 4 out of the 5 correct, and the one he got wrong I could tell he really knew. I got them all right, and faster, but Ken Jennings gets my respect for his legit knowledge of novelty songs. See here for the questions and answers. 

Goodhart's law (maybe): Jeopardy is supposed to be about what people know. Is it supposed to be about what people study? Does studying for it help you with things other than quiz shows?

When I watch a quiz show and get a question right, there are levels of legitimacy:

1) I know the area naturally. 

2) I saw the question and answer on a different show, and I then looked it up and know more about it. So this is now legit.

3) I saw the question and answer on a different show and just know it as stimulus-response with very little understanding. Example: Kelly Clarkson was the first winner on American Idol, but I have never seen the show and only vaguely know how it works.

4) I saw the question and answer on the show I am watching, the exact episode, and I am watching a repeat. Sometimes I know stuff I don't normally know and then say OH, I've seen this episode before. 

Back to my point:

Is Ken Jennings memorizing lists a less-legit form of knowledge? If so, how to make that statement rigorous?

Is this what AI does? See also the Chinese Room, since that's also about a device that gets the right answers but perhaps for the "wrong" reason.



Wednesday, February 25, 2026

A Probability Challenge

Last week I had the pleasure of meeting Alex Bellos in Oxford. Among other things, Bellos writes the Guardian Monday puzzle column. He gave me a copy of his latest book, Puzzle Me Twice, full of puzzles where the obvious answer is not correct. I got more right than wrong, but I hated being wrong. Here is one of those puzzles, Sistery Mystery (page 28), which is a variation of a puzzle from Rob Eastaway.

Puzzle 1: Suppose each child is a girl with probability 51%, independently and uniformly over all children. In expectation, who has more sisters, a boy or a girl?

Go ahead and try to solve this before reading further.

In any family with both boys and girls, each boy will have one more sister than each girl. For example in a family with four girls, each boy has four sisters and each girl only has three. Thus boys have more sisters on average.

Wrong: it's exactly the same. To see this, consider Alex, the sixth child of a ten-child family. The number of Alex's sisters is independent of Alex's gender. This is a pretty robust result: it doesn't depend on the probability of a child being a girl, or on whether we allow non-binary children, or on the distributions being identical (say the probability of a girl is higher for later kids in a family). All you need is independence.

So what's wrong with my initial intuition that in every mixed family boys have more sisters than girls? Eastaway suggests this gets balanced by the families of a single gender, but those are rare for large families. Instead it's a variation of Simpson's paradox: the naive argument doesn't account for the fact that girls are overrepresented in girl-heavy families. Consider a family of two boys and eight girls. Each of the two boys has eight sisters, but four times as many girls have seven sisters each, so this family contributes more total weight to the girls' average than to the boys'.

If you lose independence, the solution may not hold, for example if we have identical twins.
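
If you want to convince yourself numerically, here is a quick Monte Carlo sketch (my own check; the family size of 4 is an arbitrary choice and the claim doesn't depend on it):

    import random

    P_GIRL, FAMILY_SIZE, TRIALS = 0.51, 4, 200_000

    boy_sum = boy_n = girl_sum = girl_n = 0
    for _ in range(TRIALS):
        kids = [random.random() < P_GIRL for _ in range(FAMILY_SIZE)]  # True = girl
        girls = sum(kids)
        for is_girl in kids:
            if is_girl:
                girl_sum += girls - 1   # her sisters: the other girls
                girl_n += 1
            else:
                boy_sum += girls        # his sisters: all the girls
                boy_n += 1

    # Both averages come out near 0.51 * 3 = 1.53: boys and girls the same.
    print(boy_sum / boy_n, girl_sum / girl_n)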

I'll leave you with one more puzzle.

Puzzle 2: Suppose you are in a country where each family has children until they get their first boy. In this country, do boys or girls have more sisters on average?

Answer below.

In Puzzle 2 we lose independence: if Alex is a girl, she's more likely to be in a family with many girls. Indeed, if boys and girls have equal probability, when you work out the infinite sums, in expectation a boy will have one sister and a girl will have two.
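
The same kind of Monte Carlo check works here (again my own sketch, with boys and girls equally likely):

    import random

    TRIALS = 200_000
    boy_sum = girl_sum = girl_n = 0
    for _ in range(TRIALS):
        girls = 0
        while random.random() < 0.5:     # children until the first boy
            girls += 1                   # each child is a girl with prob 1/2
        boy_sum += girls                 # the one boy's sisters: all the girls
        girl_sum += girls * (girls - 1)  # each girl has girls - 1 sisters
        girl_n += girls

    # Prints roughly 1.0 and 2.0: a boy has one sister, a girl has two.
    print(boy_sum / TRIALS, girl_sum / girl_n)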

Sunday, February 22, 2026

ChatGPT gets an easy math problem wrong (I got it right). How is that possible?

A commenter on this post asked for me (or anyone) to solve the problem without AI:

A,B,C,D,E are digits (the poster said A could be 0 but I took A to be nonzero) such that

ABCDE + BCDE + CDE + DE + E = 20320.

(CLARIFICATION ADDED LATER: We allow two letters to map to the same digit.) 

I solved it completely by hand. You can try it yourself or look at my solution which is here. I found seven solutions. 

I THEN asked ChatGPT to give me all solutions to see if I missed any. 
 
I had it backwards. ChatGPT missed some solutions.  The entire exchange between chatty and me is here.

I asked it how it could get it wrong and how I can trust it. It responded to that and follow-up questions intelligently. 

Note that the problem is NOT a Putnam problem or anything of the sort. But I've read that AI can solve Putnam problems. SO, without an ax to grind, I am curious: how come ChatGPT got the ABCDE problem wrong?

Speculative answers

1) The statement that AI has solved IMO problems refers to AI systems specially trained for such competition problems, not the free ChatGPT. For more issues with the AI-IMO results, see Terry Tao's comments here.

2) ChatGPT is really good when the answer to the question is on the web someplace or can even be reconstructed from what's on the web. But if a problem, even an easy one, is new to the web, it can hallucinate (it didn't do that on my problem but it did on muffin problems) or miss some cases (it did that on my problem). 

3) It's only human. (It pretty much says this.)

4) The next version or even the paid version is better! Lance ran it on his paid-for ChatGPT, and it wrote a program to brute force the problem and got all 7 solutions.

5) I said ChatGPT got the problem wrong. If a student had submitted the solution, it would get lots of partial credit, since the solution took the right approach and only missed a few cases. So should I judge ChatGPT more harshly than a student? Yes.

The question still stands: how come ChatGPT could not do this well-defined, simple math problem?




Wednesday, February 18, 2026

Joe Halpern (1953-2026)

Computer Science Professor Joseph Halpern passed away on Friday after a long battle with cancer. He was a leader in mathematical reasoning about knowledge. His paper with Yoram Moses, Knowledge and Common Knowledge in a Distributed Environment, received both the 1997 Gödel Prize and the 2009 Dijkstra Prize. Halpern also co-authored a comprehensive book on the topic.

Halpern helped create a model of knowledge representation that consists of a set of states of the world, together with, for each person, a partition of the states, where two states lie in the same block of a person's partition if that person can't distinguish between them. You can use this system to define knowledge and common knowledge, and model problems like the muddy children puzzle. It also serves as a great framework for temporal logic.
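
To make that concrete, here is a toy version of the partition model (my own illustration, not Halpern's notation): an agent knows a fact at a state if the fact holds at every state the agent cannot distinguish from it.

    # States record the weather in two cities: 's' = sunny, 'r' = rainy.
    states = {"ss", "sr", "rs", "rr"}
    partitions = {
        "alice": [{"ss", "sr"}, {"rs", "rr"}],  # Alice observes only city 1
        "bob":   [{"ss", "rs"}, {"sr", "rr"}],  # Bob observes only city 2
    }

    def cell(agent, state):
        # The block of the agent's partition that contains this state.
        return next(block for block in partitions[agent] if state in block)

    def knows(agent, fact, state):
        # fact = the set of states where the proposition holds.
        return cell(agent, state) <= fact

    sunny_in_city1 = {"ss", "sr"}
    print(knows("alice", sunny_in_city1, "sr"))  # True
    print(knows("bob", sunny_in_city1, "sr"))    # False: Bob can't rule out "rr"

Common knowledge can then be defined by iterating: everyone knows the fact, everyone knows that everyone knows it, and so on.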

Halpern led the creation of the Computing Research Repository (CoRR), the forerunner of arXiv's computer science section, and would later moderate CS papers for arXiv.

Joe Halpern was the driving force behind the Theoretical Aspects of Rationality and Knowledge (TARK) conference, which attracts philosophers, economists, computer scientists, and others to discuss what it means to know stuff. I had two papers in TARK 2009 at Stanford. But my favorite TARK memory came from a debate at the 1998 TARK conference at Northwestern.

Consider the centipede game, where two players alternate turns, and at each turn the mover can either play right (R/r) to continue, or defect (D/d) to end the game immediately, with payouts as in the diagram below.

The game is solved by backward induction, working out that in each subgame the mover does better defecting.
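
Here is a small backward-induction sketch (my own illustration: the payoffs are made up, since the diagram isn't reproduced here, but they follow the usual centipede pattern where the pot grows yet defecting now beats being defected on next move):

    # payoffs[k] = (player 1's payoff, player 2's payoff) if the mover
    # defects at move k; player 1 moves at even k, player 2 at odd k.
    payoffs = [(1, 0), (0, 2), (3, 1), (2, 4)]
    end_payoff = (5, 3)                # if nobody ever defects

    def solve(k=0):
        # Subgame-perfect outcome of the game from move k onward.
        if k == len(payoffs):
            return end_payoff
        mover = k % 2
        cont = solve(k + 1)            # outcome if the mover plays right
        # The mover defects iff defecting pays them more than continuing.
        return payoffs[k] if payoffs[k][mover] > cont[mover] else cont

    print(solve())   # (1, 0): the game unravels to defection at the first move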

The debate asked the following. Backward induction requires player 1 to reason about what happens after player 2 plays right at its first move. But playing right is an irrational move, so why should player 1 assume player 2 will be rational at its second move later on?

Someone said such reasoning is fine, like when we assume that the square root of two is rational in order to get a contradiction. The counterargument: the square root of two does not "choose" to be irrational.

Thank you, Joe, for helping us think about knowledge and for giving us the forums to do so.

Sunday, February 15, 2026

Assigning Open Problems in Class

I sometimes assign open problems as extra credit problems. Some thoughts:

1) Do you tell the students the problems are open?

YES- it would be unfair for a student to work on something they almost surely won't get.

NO- Some open problems are open because people are scared to work on them. Having said that, I think P vs NP is beyond the 'one smart person' phase, or even the 'if they don't know it's hard, maybe they can solve it' phase.

NO- See page 301 of this interview with George Dantzig, where he talks about mistaking an open problem for a homework problem and ... solving it.

CAVEAT--- There are OPEN PROBLEMS!!! and there are open problems??? If I make up a problem, think about it for 30 minutes, and can't solve it, it's open but it might not be hard. See the next point.

 I tell the students:

This is a problem I made up but could not solve. It may be that I am missing just one idea or combination of ideas so it is quite possible you will solve it even though I could not. Of course, it could be that it really is hard. 

A friend of mine who is not in academia thought that telling the students I came up with a problem I could not solve, but maybe they can, is a terrible idea. He said that if a student solves it, they will think worse of me. I think he's clearly wrong. If I am enthused about their solution and give NO indication that I was close to solving it (even if I was), then there is no way they would think less of me.

Is there any reason why telling the students I could not solve it but they might be able to is a bad idea? 

2) Should Extra Credit count towards the grade? (We ignore the far more serious problems with grades caused by whatever currently seems to make them obsolete: calculators, Cliff notes, cheating, encyclopedias, Wikipedia, the Internet, ChatGPT, other AI, your plastic pal who's fun to be with.)

No- if they count towards the grade then they are not extra credit. 

I tell the students they DO NOT count for the grade but they DO count for a letter I may write them.

What do you do?