Saturday, January 24, 2026

Online Talks on Accessible Theorems!

Bogdan Grechuk has written a book, Landscapes of 21st Century Mathematics, that came out in 2021, with a revised version coming soon. The theme: he takes theorems whose statements can be understood without specialized background and describes each in 5–10 pages. No proofs, but lots of context and, most importantly, if you read all 800 pages you come away knowing about many areas of mathematics and where to look things up. He is organizing a series of online seminars with accessible talks about great recent theorems, featuring world-renowned mathematicians:
  • January 28, 2026. Prof. Boaz Klartag. “Sphere packing.” Join the talk here.
  • February 11, 2026. Prof. Avi Wigderson. “Expander graphs.” Join the talk here.
  • March 4, 2026. Prof. Assaf Naor. “Sparsest cut.”
  • April 2026. Prof. Harald Helfgott. Date and topic to be decided.
  • May 20, 2026. Prof. Ben Green. “The polynomial Freiman–Ruzsa conjecture.”
More details can be found on the seminar webpage and the blog devoted to great accessible theorems across mathematics.

Thursday, January 22, 2026

Community

I once had a provost who felt that academic departments hindered the university as they tended to silo the faculty. He would argue we should eliminate departments and that would increase cross-disciplinary work. That went nowhere of course.

He missed a critical point. While departments play an official role in hosting academic programs, they importantly serve as the main community for the faculty within the department. You see your colleagues in the department in meetings, at seminars and just walking by them in the hallways. They are your peers and the ones who hired you and judge your promotion and tenure cases. You will argue with them but you all share a common mission to make your department as strong as possible, so you can attract even better colleagues.

At Oxford, most faculty have a position at one of the 36 colleges, and the faculty community centers on the college rather than the department. Many faculty walk from their department offices to have lunch in their college. The colleges are interdisciplinary; each has at most a couple of computer scientists. My visiting fellowship is with Magdalen College, not the CS department, though I plan to spend some time there.

Does the Oxford system lead to more interdisciplinary research, or less research in the field? Not sure, and it's hard to run a controlled experiment.

We all have two communities: the one where we work, and our academic community, the people who do research in the same areas, whom we collaborate with and see at conferences and workshops. They are the ones we try to impress with our research so they may accept our papers, give us grants, nominate us for awards and write good letters for us. That's the real challenge of interdisciplinary research--you hope to impress researchers in both fields, but really you just impress those in the intersection.

In computer science our academic communities are pretty small, usually centered on the conferences where we publish regularly. I was having a conversation with another computer scientist at Oxford. I asked him which conference he publishes in and he said ICALP B. I said I've published in ICALP A--completely different!

Monday, January 19, 2026

What to do about students using ChatGPT to do their homework?

Students are using ChatGPT to do their homework. Here are approaches I've heard of, along with some of my thoughts (Lance also added some comments). I have no strong opinions here. Some of what I say applies to any AI or, for that matter, to old-fashioned cheating by having your friend do the homework for you or by going to the web for the answer. (Is ChatGPT just going to the web for the answer, but with a much better search tool?)

1) Ban the use of ChatGPT.  That might be impossible. 

2) Allow them to use ChatGPT but they must take what it outputs and put it in their own words. This should work in an advanced course where the students want to be there and are mature, but might be problematic in, say, Freshman calculus. This is also a problem if you want to see if they can come up with the answer on their own. Lance: How do you know if the words come from AI or the student?

3) Assign problems that ChatGPT does not do well on. I can do this in Ramsey Theory and (to some extent) in Honors Discrete Math, but (a) over time it will get harder, and (b) this may be hard to do in a standard course like calculus. Lance: Pretty much impossible for most undergrad courses. Bill: Slight disagreement: computer science is a fast-moving field where recent papers can be used to create problem sets before ChatGPT has digested the paper. Sometimes.

4) For some assignments, have them use ChatGPT and hand in both ChatGPT's output and the student's rewriting of it. If ChatGPT made a mistake they should also report that. Lance: Many would just have AI rewrite the AI. Bill: Alas, true.

5) Revise the curriculum so that the course is about HOW to use ChatGPT intelligently, including cases where it is wrong and the types of problems it gets wrong (do we know that?). Lance: And that changes as models change. Bill: True, but not necessarily bad.

6) Make sure that a large percentage of the grade is based on in-class exams. I don't like this but it may become necessary. Lance: I grade homework for completion instead of correctness so that students don't feel they need to use AI to keep up with their classmates. Bill: I like the idea, but they may still use AI just because they are lazy.

7) The in-class exams will have variants of the homework so if a student used ChatGPT this will test if they actually learned the material (I do this anyway and warn the students on day one that I will do this to discourage cheating of any kind.) Lance: This works until the students wear AI glasses. Bill: I can't tell if you are kidding. More seriously, how far off is that? Lance: Not far.

8) Abolish grades. Lance thinks yes and blogged about it here. Abstractly I wish students were mature enough not to need grades as motivation. But even putting that aside, we do need some way to tell if students are good enough for a job or for grad school. Lance: We do need that, but grades aren't playing that role well anymore. Bill: What to do about this is a topic for another blog post.

9) In a small class, have the students do the homework by meeting with you and telling you the answer, and you can ask follow-up questions. Perhaps have them do this in groups. Lance: Panos Ipeirotis used AI to automate oral questioning. Bill: Looks great, though I'll wait till they work out the bugs.

So, readers: what have been your experiences?

Wednesday, January 14, 2026

Rational Functions Solved!

It's not every day that one of my open problems is solved, especially one that I asked about over three decades ago. Matt Kovacs-Deak, Daochen Wang and Rain Zimin Yang just posted a paper showing that if you have a Boolean function \(f\) and two polynomials \(p\) and \(q\) of degree at most \(d\) such that \(f(x)=p(x)/q(x)\) for every \(x\) of length \(n\) then \(f\) has decision tree complexity at most \(2d^4\).

Noam Nisan and Mario Szegedy had a beautiful paper in the early 90s showing that if a low-degree polynomial approximates a Boolean function, then the decision tree complexity, the number of bits you need to query from the input to determine the output, is polynomially bounded in the degree. For rational functions that isn't true: the majority function can be approximated by the ratio of two low-degree polynomials. This was the main tool used by Beigel, Reingold and Spielman when they showed that the complexity class PP is closed under intersection.

Motivated by the complexity class C\(_=\)P\(\cap\)co-C\(_=\)P, I asked Mario whether, if a low-degree rational function exactly computes a Boolean function, that function must have low decision tree complexity. Mario thought about it, said it was an interesting problem, and added it as an open question to the journal version of his paper with Nisan.

I gave the problem to every student who asked me for an open combinatorial problem, as well as to every reasoning AI model I could get my hands on. Iyer et al. posted a paper that examined this problem and proved it for symmetric functions, but the general case remained open until now.

The main insight of Kovacs-Deak, Wang and Yang is to consider the minimum block sensitivity, usually an uninteresting property of a function. The majority function, for example, has large maximum and average block sensitivity but low minimum block sensitivity. Nevertheless, their proof bounds the minimum block sensitivity by twice the degree of the rational function and uses it to reduce the degree of the rational function by querying a small number of bits of the input. Nice.
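For readers who want the underlying notion: the block sensitivity of \(f\) at an input \(x\) counts disjoint blocks of coordinates whose flipping changes the output. A sketch of the standard definition (the paper's exact variant may differ):

```latex
\mathrm{bs}(f,x) = \max\left\{ k : \exists \text{ disjoint } B_1,\dots,B_k \subseteq [n]
   \text{ with } f(x^{B_i}) \neq f(x) \text{ for all } i \right\},
\qquad \mathrm{bs}_{\min}(f) = \min_{x} \mathrm{bs}(f,x),
```

where \(x^{B}\) denotes \(x\) with the bits in \(B\) flipped. For majority, the all-ones input has block sensitivity 1, since any sensitive block must flip more than half the bits and two such blocks can't be disjoint, so \(\mathrm{bs}_{\min}\) is tiny even though the maximum block sensitivity is large.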

I always joked that if I told people the complexity application of the problem they would lose interest. The question came out of work with Steve Fenner, Stuart Kurtz and Lide Li on An Oracle Builder's Toolkit. C\(_=\)P is a generalization of co-NP where we ask if a gap-P function (the difference of two #P functions) is zero. So C\(_=\)P\(\cap\)co-C\(_=\)P is a generalization of NP\(\cap\)co-NP. This new theorem implies the technical result that if P = PSPACE unrelativized then P = C\(_=\)P\(\cap\)co-C\(_=\)P relative to a generic oracle. This implies an oracle where P = C\(_=\)P\(\cap\)co-C\(_=\)P and the PH is infinite and another where C\(_=\)P\(\cap\)co-C\(_=\)P has no complete sets.

If you are more of a quantum person, C\(_=\)P\(\cap\)co-C\(_=\)P = PostEQP while PostBQP = PP, and this work shows quite a difference between these classes. PP has complete sets and if P = PP then the polynomial-time hierarchy and even the counting hierarchy collapses to P.

Sunday, January 11, 2026

Is `smells like' commutative?

 
1) Smells Like... Something

In many TV shows having to do with murder (and there are plenty of them), I’ve heard the following exchange:

        His breath smells like bitter almonds. So he was poisoned with cyanide.

They’re either saying

        bitter almonds smell like cyanide

or

        cyanide smells like bitter almonds.

If you say X smells like Y, you mean that X is the new smell and Y is the familiar one. However, on these shows, people seem to smell cyanide a lot, yet I’ve never seen them smell or taste bitter almonds. That's good, since bitter almonds can be lethal (see here). So there should be mystery stories where bitter almonds are used and the cops say

             His breath smells like cyanide. So he was poisoned with bitter almonds.

I don’t know what either one smells like.

2) Rotten Eggs

In real life: My Darling grew up in Pittsburgh when it was still a steel-mill city.
She said she often smelled something that

        smelled like rotten eggs.

It was sulfur. But in telling me this, she assumes I’ve smelled rotten eggs.
I haven’t. But I have smelled other things that I was told smell like rotten eggs.

I think the phrase

        smells like rotten eggs

is often used by people who’ve never actually smelled rotten eggs.

3) Cardboard and Matzoh

A  blog post by Scott (see here), and my post about his post (see here), brought up the question:

        Does matzoh taste like cardboard?

I doubt any of us have actually tasted cardboard.


My proofreader once accidentally did, while eating takeout from a paper container. He says
(1) it doesn’t taste like matzoh, and
(2) it doesn’t taste like food — which matzoh does.


4) Dog Food

I’ve heard the cliché insult:

        Your cooking is so bad that it tastes like dog food.

I’ve never eaten dog food.  Maybe it tastes good.

5) When X Smells Like Y

If someone says X smells like Y, then:

a) If people know what Y smells like but not X, that’s informative.
b) If people know what X smells like but not Y, that’s not informative.
c) If I hear that X smells like rotten eggs and Y smells like rotten eggs, then I know X and Y smell the same —
even though I don’t know what rotten eggs smell like.
Oh wait — I do. They smell like X or Y!

6) How do the following fit into this discussion?

a) The Nirvana song Smells Like Teen Spirit, video here.
b) The Weird Al song Smells Like Nirvana, video here.


Friday, January 09, 2026

Computational Depth

I'm posting from Oxford University where I will be spending the "Hilary Term" (through late March) as a visiting fellow at Magdalen College. If you are relatively local, reach out if you'd like to connect.

I plan to get back into research after thirteen years of administration, working primarily with Rahul Santhanam and his group. I haven't had a significant sabbatical or leave since Amsterdam 30 years ago, which is what comes from changing jobs too often.

Today I'd like to talk about a 2006 paper on a topic I first thought about in Amsterdam and that will likely play a role in this visit: Computational Depth: Concept and Applications by Luis Antunes, Dieter van Melkebeek, Vinod Variyam and myself.

In Amsterdam, I was hosted by Paul Vitányi and Harry Buhrman at CWI, and naturally worked on Kolmogorov complexity, the algorithmic study of randomness, as well as various problems in computational complexity.

Very simple strings don't have much information. Completely random strings have maximal Kolmogorov complexity but aren't particularly useful either, as we can create our own random strings by flipping coins. Is there some way to measure useful information?

In particular I had this question: If NP reduces to a sparse set then it has polynomial-size circuits. If NP reduces to a random set then it has polynomial-size circuits. Is this just coincidence or is there a broader principle here?

Charlie Bennett developed a notion of logical depth with a similar philosophy, but it is a difficult definition to work with. Luis Antunes, a student from Porto, came to work with me at NEC Research in New Jersey when I was there around the turn of the century. We developed a simpler notion, computational depth, defined as the difference of two Kolmogorov measures, such as time-bounded Kolmogorov complexity minus traditional Kolmogorov complexity. This quantity is small both for very easy strings and for fully random strings. With then-NEC postdoc (and now Nebraska professor) Vinod Variyam, we found a connection to finding SAT witnesses. With DIMACS and IAS postdoc Dieter van Melkebeek (now a Wisconsin professor), we came up with the notion of shallow sets, which generalizes both random and sparse sets, and, as hoped, if NP reduces to a shallow set then it has polynomial-size circuits. Luis would title his PhD thesis "Useless Information".
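In symbols, the simplest version of the idea (a sketch; the paper works with several refinements) is the gap between time-bounded and plain Kolmogorov complexity:

```latex
\mathrm{depth}^{t}(x) \;=\; K^{t}(x) \;-\; K(x)
```

For a very simple string both terms are small, and for a random string both are close to \(|x|\), so the difference is small in exactly the two "useless" cases.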

Luis and I would later find a nice connection of computational depth to average-case complexity. 

Computational Depth had a resurgence of popularity with the rise of meta-complexity, with 50 of the original paper's 90 citations coming since 2020.

So I hope to find new applications of computational depth working with Rahul who's at the center of meta-complexity. Maybe computational complexity can help us understand machine learning better based on my frozen concentrate analogy, where the learning process removes the randomness, leaving structured information behind. 

Monday, January 05, 2026

AI and Research Papers

2026 will be a year of AI disruption across all of academia. Let's start by talking about how AI is changing how we write research papers. Not the research itself (another post), just the dissemination thereof.

Technology has changed research distribution since the invention of paper; the printing press, typewriters, journals and conferences have all had dramatic effects. But we've seen equally dramatic changes just within my research career, from authors doing their own formatting (TeX/LaTeX), to instant distribution of papers (email/web), to collaborative writing (email/shared repositories/Overleaf).

How AI affects research writing follows a trend not unlike other AI applications. 

The Slop Phase

This is where AI does more harm than good. It's useful for finding grammar mistakes and making fancy LaTeX tables, but if you use it for writing it may make stuff up, create citations out of thin air, and drop phrases like "as a large language model" into the text. Things have gotten much better with the latest models (I'm currently a fan of Gemini 3), but we haven't fully left the slop phase, and anything AI-written needs close verification.

The Verification Phase

2026 will be the year we go from humans having to look over AI-generated content to AI having to look over human-generated content. Already you should give your paper, ideally in LaTeX, to the best model you have access to, with a prompt asking for a critical review, or use a specialized service. You don't have to follow what the AI says, but you should be open to improving your paper. Google provided an optional tool for those preparing STOC 2026 submissions that many found valuable. In 2026, some conferences and journals, especially those overwhelmed by submissions, will start requiring submitted papers to go through an AI review as well as a human review (for now).

At some point, we may require authors, with AI help, to have their theorems verified by theorem provers like Lean or Rocq. It will be a while, especially in computational complexity, where it is hard to properly state a theorem in formal logic, let alone its proof.

The Collaborative Phase

Not just AI critiquing papers, but researchers and AI working together on paper writing. We already see pieces of this: having AI fill in details of arguments, draft an introduction, help create diagrams, find relevant citations, and so on. Right now anything the AI writes requires a careful lookover by the researcher. As these models improve, we'll see more and more generated by the AI, and researchers will start to trust the output, maybe more than they should. This will lead to...

The Generation Phase

Here the AI writes the LaTeX from scratch based on a proof sketch given by the researchers. As an experiment, I "vibe-coded" a minor research paper and it went better than expected.

The researcher will go and ask for updates and touch up the LaTeX at the end, but at some point the human won't ever touch the LaTeX file, just a prompt or a markdown file that generates it. 

By the end of the 1980s, researchers just trusted the LaTeX for formatting, fixing only syntax errors and overfull boxes. Most researchers (but not all) stopped fiddling with the look of the paper. At some point, and at higher stakes, we'll do the same with the paper content as well.

Friday, January 02, 2026

The Betty White Award for 2025

The Betty White Award goes to people who die at the end of the year--- too late to be on those articles with titles like 

                                                            people we lost this year.

The  point of the award is that news outlets and blogs should WAIT until Jan before having any articles that look back on the prior year. (I tell Lance we should have our end-of-the-year post in January just in case someone solves P vs NP the last week of December. [Lance: I'm not worried])

I made up the award in 2023. More recently, I searched the web to see who would have won it in past years; my chart is here. As I was writing this post I wondered if there was already a Betty White Award for something else. ChatGPT's response is here. Short version: ChatGPT says there is an informal award by that name for reading cue cards really well on Saturday Night Live. As opposed to my award, which is formal(?).

This year's list of winners so far has only one person:

Brigitte Bardot, who died on Dec. 28, 2025, at the age of 91. She was an actress, singer, model, and activist (she was for animal rights and against interracial marriage---see her Wikipedia page here for her other views). Like most celebrities who die when they are over 90, I suspect many people either don't know who she was or thought she was already dead. Or type her name into chatty, though be warned that you may get some false statements.

1) This year I am taking NOMINATIONS for additions to the list.  The number of winners can be more than 1. The max in a year has been 4 so far.  SO 

If you know of someone who

a) died between Dec. 20 and Dec. 31

b) is famous

then leave a comment.  DEADLINE: Jan. 8 at noon East Coast time. On Jan. 8 I will update this blog post with more winners if need be. 

I've already looked at Wikipedia's list of who died in 2025, here. If there is either (1) someone on there who is more famous than I thought (that happened last year with Manmohan Singh) or (2) someone famous to our community who is not on the Wikipedia list, let me know.

2) As noted above, the point of the award is that publications should WAIT until January to run those lists. I am happy to say that Entertainment Tonight has found a good way to handle this: they ran an article titled Celebrity deaths 2025: Remembering those who've died this year, but they are updating it (at least the online version). The updated version includes Bardot! See here. Paper publications won't be able to do that. This may not matter---will there still be paper publications at the end of 2026? I blogged on paper-free stuff here.



Monday, December 22, 2025

Complexity Year in Review

An easy choice for paper of the year, a paper that has nothing to do with randomness, interaction, quantum, circuits or codes. Just a near quadratic improvement in the amount of memory you need to simulate time.

Simulating Time with Square-Root Space by Ryan Williams

Any time-\(t(n)\) algorithm can be simulated in space \(O(\sqrt{t(n)\log t(n)})\), greatly improving the \(O(t(n)/\log t(n))\) result from the 1970s. Ryan's work makes strong use of last year's space-efficient tree evaluation by James Cook and Ian Mertz. More in my February post and a Quanta article, which did a better job explaining the importance of the result than I could.

Bill is also excited by the new \(O(m\log^{2/3}n)\) single-source shortest path algorithm by Ran Duan, Jiayi Mao, Xiao Mao, Xinkai Shu and Longhui Yin that beats out Dijkstra on sparse graphs.

Last year I wrote

We're heading to a perfect storm for US higher education with the oncoming trains of the new administration, artificial intelligence, fiscal challenges and the demographic cliff. Hang on tight, it's going to be a bumpy ride.

Bumpy is an understatement, and we're just starting the ride. Limited immigration, National Science Foundation woes in its 75th-anniversary year, and a drop in computer science enrollments as AI continues to suck up the atmosphere. Do we buckle down or should we completely rethink our institutions?

In the spirit of all the AI wrapped content, I asked Claude to put together a full year in review for this blog. This is getting scarily good.

We remember George Foreman, Frank Gehry, Ray Laflamme, Tom Lehrer, Charles Lin, Pradyut Shah and Tom Stoppard.

We thank our guest posters Eric Allender, Daniel Fernández and Alberto Fraile, Clyde Kruskal and Nick Sovich.

See you all in January!

Wednesday, December 17, 2025

A Place Away From Tech

 

The Fine Arts Building

Last week, I partook of the second-Friday open house in the Fine Arts Building: ten floors of offices all related to the arts and creatives in some way. Art studios of all kinds, from fine art to photography; music instrument sales, repairs and instruction; an opera company and various music ensembles; puppetry; jewelry makers; authors; interior decorators; a store that sells printed music scores; an independent bookstore; and much more.

On the evening of the second Friday of each month, the building holds an open house, so you can visit the art studios and hear music performances in small spaces. You can stop by the Exile in Bookville bookstore and have Keir Graff sign a copy of his recent coffee table book about the building.

The building opened as the Studebaker building in 1885, a factory and showroom for horse-drawn carriages. By 1896, the Studebaker family had converted it into studios for artists, architects, musicians and others, and it has more or less remained that way ever since. The big theater in the building, recently renovated, still bears the family name.

As a one-time tuba player, I appreciate that Arnold Jacobs, principal tubist of the Chicago Symphony for 44 years and perhaps the greatest to play the instrument, taught tuba from an office in the building.

Marker for Arnold Jacobs' Studio

The building is so low-tech that it has the last remaining manually operated elevators in the city. It's not that everyone is anti-technology; most studios have a computer to keep up with finances, email, web pages and social media. But that's tech in the background. In a world focused on technology, it's nice to see a building in Chicago devoted to those who still create, a magical place that stands the test of time.

Sunday, December 14, 2025

Weird Al vs Weird AI

 ONE

The following headline confused me:
 

                  Trump, 79, Deletes Weird AI Video Shilling Magic Beds (see here). 

Was Weird Al selling magic beds? Magic beds?! How does that relate to President Trump? What’s going on?

The problem is the font: a capital I (as in AI) can look like a lowercase l (as in Al).
So the headline should really be:

Trump, 79, Deletes Weird Artificial Intelligence Video Shilling Magic Beds.

This case is particularly confusing because:

a) "Weird AI videos" reads as "Weird Al videos," which clearly means the videos that go with Weird Al's songs.


While we’re on this topic, here are some Weird Al videos related to computer science or math:

Don't Download This Song 

 It's all about the Pentiums

 Polka Patterns 

The first two songs are dated but I still like them. They also show that Weird Al has had (and still has) a long career.

b) The story, even when understood completely, is really weird. 

TWO

I pointed out the Weird AI vs Weird Al issue to a fellow Weird Al fan, and he emailed me the following links:

McDonald's Making job Applicants Take Weird AI Personality Tests

MAGA Rep Weighs in on Sydney Sweeney Jeans Debate with Weird AI Ad

A Weird AI Camera With a Human Name is Coming for Your Cell Phone

Leith Ross Denounces Weird AI Songs Uploaded to Their Spotify Profile

Why We Don't Believe MIT Nanda's Weird AI Study

Anthropic Researchers Discover the weird AI Problem: Why thinking longer makes models dumber

Samsung phones are getting a weird AI shopping platform nobody asked for

THREE

0) The one with Weird AI Songs is the most confusing since that clearly means songs by Weird Al. 

1) Am I the first person to notice this? Of course not-- see here

2) Has someone used this confusion? Of course--see here

3) Will this confusion help or hurt Weird AL's career? Time will teLL.


Thursday, December 11, 2025

Learning the Mathematical Process

Watching Mathematicians at Work (AI generated)

The Smithsonian Natural History Museum has a FossiLab where visitors can peek through windows watching scientists prepare fossils for conservation. Maybe we should have a similar exhibit at math museums or universities. How else can we learn what mathematicians do? 

In 2025, artificial intelligence achieved gold medal status at the International Mathematical Olympiad but so far has contributed only modestly to finding new theorems. Of course, finding and proving new theorems requires a different set of skills than competition problems, but it goes further than that.

The Internet has considerable text and video on how to solve math competition problems that machine learning systems can train on. Mathematical research papers, on the other hand, usually contain little more than theorems and proofs. Maybe some intuition. Rarely do papers describe the thinking process and the false steps one takes before finding the proof. For some problems I've spent weeks proving a theorem, but only the last day's work gets written up.

Now I doubt many mathematicians would give up their privacy and time to train AI systems to take over their jobs, but just suppose we wanted to do so. We could equip every mathematician with a camera recording every mathematical conversation and everything they write, especially the ideas that don't pan out. We can transcribe it all and feed it into an ML system. But it probably won't be enough.

Trouble is most mathematical breakthroughs just happen inside of people's heads. If you ask a mathematician how they came up with the clever idea that led to a major new result, they can rarely truly explain the process behind it. Not unlike neural nets. 

If machines can't learn to prove theorems by watching mathematicians, perhaps they'll have to take the route mathematicians do: a grad-school slog toward PhD research, learning from endless failure.

Sunday, December 07, 2025

Tom Stoppard 1937-2025

 
The playwright Tom Stoppard passed away at the age of 88 on Nov. 29, 2025.

ONE) He wrote many plays and some movies. Below I highlight the works whose themes I think will be of interest to my readers (or at least to me—your mileage may vary).

1) Rosencrantz and Guildenstern are Dead (1966)

This is Hamlet told from the point of view of two minor characters who, in Shakespeare’s original, can best be described as plot furniture.

The play begins with R and G flipping coins. R bets heads ninety-two times in a row and wins each time. The play explores determinism and free will, as well as the mathematical question: At what point should you stop flipping coins and go buy a lottery ticket?
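For scale, a quick back-of-the-envelope check (a minimal sketch; the 1-in-300-million lottery odds are a made-up round number, not any particular lottery):

```python
# Probability that a fair coin comes up heads 92 times in a row.
p_streak = 0.5 ** 92

# Hypothetical round-number odds of winning a big-jackpot lottery.
p_lottery = 1 / 300_000_000

print(f"92 heads in a row: {p_streak:.2e}")   # roughly 2e-28
print(f"the lottery is {p_lottery / p_streak:.2e} times more likely")
```

So well before the ninety-second flip, the lottery ticket is the better bet by an astronomical margin.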

There is also a movie version of this one. I think it works better as a play.

2) Jumpers (1972)

A play about academic philosophers—so naturally it includes gymnastics, both literal and intellectual. There’s also a murder mystery, discussions of God, and enough moral philosophy to power an entire semester of office-hour arguments.

3) Travesties (1974) 

This play imagines that Vladimir Lenin, James Joyce, and Tristan Tzara (a Dadaist poet; see here for the Dada Art Movement) met in Zurich in 1917. They actually were in Zurich in 1917, but the events are narrated by an unreliable octogenarian, so accuracy is...doubtful.

Literature, politics, and art are explored, often simultaneously.


4) Arcadia (1993)

The Stoppard play with the most math—a sentence that delights mathematicians and terrifies theater majors.

It takes place in two time periods: 1809 and 1993.

In 1809, a 13-year-old girl named Thomasina Coverly is (i) trying to prove Fermat’s Last Theorem and (ii) inventing chaos theory by stirring pudding. (This is the only known instance of dessert contributing to mathematics other than pie and muffins.)

In 1993, a historian is working on a theory about Lord Byron, which is only slightly less complicated than Fermat's Last Theorem.

Themes: math, determinism, academia, and the strong correlation between intellectual brilliance and household messes.

Note that this was written a year before FLT was proved. If it had come out a year after FLT was proved, this would not change anything, since (i) Thomasina Coverly is working in 1809, and (ii) FLT was still a well-known problem when the play came out. If the play had come out in 2025, there might be a problem, since FLT is not nearly as well known now as it was in 1993.

Some say that FLT being proved was bad for math since the problem

(a) was understandable to the non-mathematician,

(b) had prize money attached to it,

(c) had the wonderful margin story,

(d) was open for many years, and

(e) inspired many poems (see here), which is a consequence of (a)-(d). They were written without ChatGPT. This criterion is no longer important since ChatGPT allows you to write poems about any math problem you want. I blogged on that here.

 I don't think anything has come close to replacing FLT.

P vs NP: (a) it's hard to get across to non-math people what the problem even is; (b) I think it's well known, though perhaps I wouldn't know, since the non-math people I hang out with know about it from me; (c) no; (d) no (hmm, is 50+ years a long time?); (e)

Goldbach's conjecture has (a) and (d). As for (b): at one time there was a million-dollar prize for solving it, see here, as a way to promote the book Uncle Petros and Goldbach's Conjecture, but I think the prize expired. The link to the contest rules just points to the book company's website. In any case, the prize was never that well known.

While I would not expect a problem to have (c), does any open problem have (a), (b), some story like (c), and (d)? I doubt it.

5) Enigma (2001)

A fictional film about cracking the Enigma code.

Despite expectations, Alan Turing does not appear, nor is he even mentioned. This confused me, since Andrew Hodges's 1983 biography is titled Alan Turing: The Enigma (see here), which was the inspiration for the movie The Imitation Game.

Note that Enigma-the-movie has zero real people, but Travesties-the-play has three real people.


TWO) The Tom Stoppard Prize

The Tom Stoppard Prize was established in 1983 and first awarded in 1984. It is given annually for:
 

outstanding primarily non-fiction work by a writer of Czech origin.

This raises a question: Which is the greater honor—winning an award, or having one named after you while you’re still alive? The answer probably depends on both the award you receive and the award named after you.

In computer science, the only award I know named after a living person is the Knuth Prize. If there are others, leave a comment.

In country music there is the Willie Nelson Lifetime Achievement Award; note that Willie Nelson is still alive. Also, he won the first one.

If you ever get this trivia question: 

What do Tom Stoppard, Donald Knuth, and Willie Nelson have in common?

you now know the answer: they are all famous enough to have been turned into prizes while they could still appreciate it.

Thursday, December 04, 2025

Finding Papers Before the Web

Inspired by Daniel Litt's X Post

and Bill's recent post on finding papers on the web, I thought I would tell the story of the before times.

In the 1980s if you wanted to read a paper, you either had to find it in a journal or conference proceedings or have it mailed to you. You could reach out to an author or SIGACT News would publish a list of tech reports from various universities. Departments would keep a master copy of each paper. You would send a stamped self-addressed envelope to the department which would copy the paper, put on a tech-report cover and send it back to you.

If you had a particularly exciting result, you would share it by physically mailing it out to your colleagues. I found out about the latest circuit results from Håstad and Razborov, as they sent papers to my advisor Michael Sipser, often hand-written and, in Razborov's case, in Russian. Neil Immerman sent a copy of his nondeterministic-space-closed-under-complement paper to Sipser, but he was away for the summer. I found out about the result from a Berkeley talk announcement.

Email wasn't a common method of communication until the mid-80s, and it wasn't until a few years after that that people figured out how to send papers by putting the LaTeX or PostScript text directly in the email. This was before attachments and PDFs. Old mail systems put a ">" before "From" so it wouldn't be confused with a header, and LaTeX rendered ">From" as "¿From", which you'd often see in conference papers from around that time.

In my first year as an assistant professor in 1989-90, there was a flurry of emailed papers marking (and causing) the quick progress we had in interactive proofs, described so well by László Babai's E-mail and the Unexpected Power of Interaction. Babai had a warning about researchers disadvantaged because they weren't receiving these emails.

I got tired of emailing papers so as soon as the web became a thing in 1993, I put all my papers online and have maintained it since. Now with sites like arXiv and ECCC, everyone has access to the latest and greatest in complexity.

Now how long before the next generation asks how we discovered papers before we had chatbots to find them for us?

Sunday, November 30, 2025

Does ChatGPT really help programmers?

 
BILL: I honestly do not know whether ChatGPT will make programmers more productive. (I am not touching question of whether it puts programmers out of work. That's a problem for Future Bill.) Who can I ask? I found two people who disagree on the issue:

Alice, who supports developers in industry. She doesn't write code full time now, but she has written plenty before. She thinks that NO, ChatGPT and LLMs DO NOT help programmers.

Bob is a friend of a friend who writes code for a living and owns a small software/database company. He thinks YES, ChatGPT and LLMs DO help programmers.

I asked Alice and Bob to discuss the issue over email and make a blog post out of it, or have ChatGPT do it for them. Below is the result.

---------------------------------------------

ALICE: Recently I needed to add CSV import to my apps. The columns in the file needed to be optional and allowed in any order. I could have figured it out myself, but I thought: Why not let an LLM do the boring part?

The first two answers it gave me ignored my requirement on the columns. If a human had done this, I would have asked them whether they had read what I wrote. On the third try, it produced something reasonable. But in the end I went with a different approach anyway.

So yes, the LLM gave me code---just not the code I wanted. Sort of like ordering a salad and getting a smoothie: technically vegetable, but not what I asked for.

BOB: Alice, you're basically proving my point for me. You gave the model a fuzzy spec, it gave you fuzzy code. That's not an indictment of LLMs; that's just computers doing what computers have done for 70 years.

When I give an LLM a tight spec, it behaves. When I'm vague, it hallucinates. Honestly, that's not that different from hiring a junior dev, except that the LLM doesn't steal your sparkling water out of the refrigerator or ask what CRUD stands for. (Create-Read-Update-Delete.)

BILL: I recommend keeping a small refrigerator in your office rather than use a communal one.

BOB: Good idea! Anyway, this (writing code, not giving advice on cutting down office theft) is exactly why LLMs save me time: I can hand them the dull bits: scaffolding, boilerplate, adapters — all the stuff I can write but would rather not spend a day on when a decent model will spit out 80% of it in 20 seconds.


ALICE: We tried having an LLM add some error checking to one of my apps. It put the checks in a plausible place, but not the best place, because it didn't understand that another method already did some of the error checking. It was able to modify a method in a way that was similar to another method it was told to use as a sample. But I didn't find this too useful. Trying to get it to make such small changes to the code just interrupted my train of thought and required me to spend effort carefully reviewing what it wanted to do.

BOB: I get the train of thought thing — but that’s because you’re trying to use the model as a scalpel. I use it as a bulldozer.

I don’t ask it to make tiny edits inside an existing method; I ask it to refactor an entire 20-year-old unit. That is where it shines.

Recently I refactored code I hadn't touched since the middle ages. Without an LLM? A week of muttering at ancient comments I'm embarrassed to admit I wrote. With an LLM? Half a day.

It identified the structure, spotted duplications, clarified naming, and turned an archaeological dig into a Tuesday morning. That's the sort of productivity gain you don't shrug off.

ALICE: We had an LLM review one of my apps. Some of what it wrote was correct, but some was just nonsense. It was writing plausible-sounding things, picking bits of my code to justify them, but it didn't understand the code. Its suggestions were vague (e.g., "add unit tests"), which is the code equivalent of your dentist saying "floss more."

BOB: If you ask an LLM to "review my app," you'll get vague motherhood statements, same as you would from a human. Ask it something focused, like "Focus only on concurrency issues in this module" or "Explain where this user's error could be triggered based on these logs," and it becomes very sharp.


This is, frankly, where it’s scarily good: tracking down where a user-reported issue is probably happening. Humans tell me symptoms; the LLM translates that into “It’s almost certainly in this 40-line cluster over here”, and it’s right far more often than chance. That alone pays for the subscription.

BILL: I've read through your 194-email saga, and I still don't know what the upshot is. So, like a court case, each of you give your closing statement.

BOB: LLMs are not magic interns who understand your architecture. They're incredibly powerful amplifiers when you use them properly. For me, the time savings are real — especially in refactoring legacy code, debugging from cryptic user reports, and generating UI scaffolding that would otherwise be tedious.

Do I trust them to invent a brand-new algorithm whole cloth? Of course not — I write NP-Complete solvers for fun. But for 70% of my day-to-day coding, they turn chores into quick wins. And that makes me a more productive programmer, not a lazier one. 

ALICE: LLMs may be helpful for writing small, self-contained chunks of code or for suggesting approaches. But they do not understand code, and most of what programmers do requires understanding the code. So LLMs will be of little, if any, benefit to most competent programmers. Coaxing LLMs to produce something useful takes more time than people realize.

Or as I like to put it, they're great at writing code-shaped objects, not actual code.

BILL: Thank you. I am enlightened. And don't forget to floss before brushing.




Monday, November 24, 2025

The Little Theorems

Last week we had a talk by Purdue philosophy professor Eamon Duede, Tail Novelty, Knowledge Collapse, and Useful Frictions in Science. In part of the talk he argued that if AI makes writing technical papers easier, researchers will write up small results that they wouldn't have bothered with before. He thought having a swarm of minor-result papers was a bad thing, but I'm less sure. Let's get the little results out there. We'll sort them out later, probably with the same AI that helped write them in the first place.

In my career I've had several little theorems in complexity. They would never get into any respectable conference, so what do you do with them? Sometimes I stick them in a sort-of-related paper, in a blog post, or let someone else mention the theorem. Too often the results just don't get written. Did you know that finding an \(S_2^p\) witness is equivalent to \(\mathrm{TFNP^{NP}}\)? I never found a good place to put it.

Fermat himself never published his "little theorem" from last week's post. He put it in a letter to his friend and fellow mathematician Bernard Frénicle de Bessy. Euler would first publish the proof nearly a century later.

The journal Information Processing Letters used to play the role of publishing moderately interesting research but like most Elsevier journals has long lost its cachet. You can stick little theorems on arXiv, though that still requires writing them up. Using AI to mostly write up results is still frowned upon, but maybe we should make exceptions for papers that wouldn't normally get written. 

Once I used the fact that \(P^{PP} = P^{\#P}\) and a referee asked for a reference. The proof isn't hard, just a simple binary search: "are there at least \(k\) accepting paths?" is a PP question, so a polynomial-time machine with a PP oracle can recover the exact \(\#P\) count. But I couldn't find anyone who first proved it. Not Valiant, who introduced #P, nor Gill, who defined PP. Eventually I cited a paper by Toda, who had mentioned the result but didn't claim it. Perhaps whoever proved it first never thought to write it up, assuming it was already well known.

Thursday, November 20, 2025

Factoring Carmichael Numbers

Carmichael Numbers are the bane of probabilistic primality algorithms. You have to go through extra steps just to handle these relatively rare numbers. But did you know that the Miller-Rabin algorithm not only determines the compositeness of Carmichael numbers but can actually find non-trivial factors? Apparently none of the AI models I tried did either. Feel free, Google, OpenAI and Anthropic, to train on this blog post.

Let's start with Fermat's little theorem, not as big as his "last" one, but one where we know he had a proof. It states that for any prime \(p\), \(a^p \equiv a\bmod p\) for all integers \(a\). That suggests a primality test: if you can find an \(a\) such that \(a^m \not\equiv a\bmod m\), then you know \(m\) is not prime. For example, since \(29^{143}\equiv 35\bmod 143\), \(143\) is not prime.

The set of \(a\) such that \(a^m \equiv a\bmod m\) forms a subgroup of \(Z_m^*\) (the numbers \(b\) less than \(m\) with gcd\((b,m)=1\)), so if any \(a\) in \(Z_m^*\) fails the test, then at least half of them will. So you could just choose a random \(a\) and check whether \(a^m \equiv a\bmod m\). That would work for all the composites where there is an \(a\) such that \(a^m \not\equiv a\bmod m\). Alas, there is a relatively rare set of composites, known as Carmichael numbers, for which \(a^m \equiv a\bmod m\) for all \(a\) coprime to \(m\). The first few Carmichael numbers are \(561\), \(1105\), \(1729\) and \(2465\). For example, \(29^{561}\equiv 29\bmod 561\).
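To see the Carmichael behavior concretely, here is a minimal Python sketch of the Fermat test (the function name is mine). It catches \(143\) with base \(29\), but \(561\) fools it for every base:

```python
def fermat_witness(m, a):
    """True if a witnesses that m is composite, i.e., a^m != a (mod m)."""
    return pow(a, m, m) != a % m

# 143 = 11 * 13 is caught: 29^143 = 35 (mod 143), not 29.
# The Carmichael number 561 = 3 * 11 * 17 fools every base:
# fermat_witness(561, a) is False for all a (561 is squarefree,
# so a^561 = a (mod 561) holds even when gcd(a, 561) > 1).
```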

So probabilistic primality tests need to account for these Carmichael numbers. Let's look at the Miller-Rabin test:

  • If \(m\) is even, output "prime" if \(m=2\), composite otherwise.
  • Pick \(s\) and odd \(d\) such that \(m-1=2^sd\).
  • Pick random \(a\) between 2 and \(m-2\). If gcd\((a,m) \neq 1\) return composite.
  • Let \(x = a^d\bmod m\)
  • Repeat s times
    • Let \(y=x^2\bmod m\)
    • If \(y=1\) and \(x\not\in\{1,m-1\}\) then composite.
    • Let \(x=y\)
  • If \(y=1\) output "probably prime" otherwise "composite"
Here's the kicker. If the Miller-Rabin test says "composite" before the last step, you'll get a factor. Note that \(y-1=x^2-1=(x-1)(x+1)\equiv 0\bmod m\). But since neither \(x-1\) nor \(x+1\) is a multiple of \(m\), the greatest common divisor of \(x-1\) and \(m\), as well as the gcd of \(x+1\) and \(m\), will give you a non-trivial factor of \(m\).

For example, if we start with \(m=561\) and \(a=29\), we have \(d=35\) and \(s=4\), \(29^{35}\bmod 561=164\), and the \(y\)'s will be \(529,463,67,1\). For \(x=67\), the gcd of \(x-1=66\) and \(561\) is \(33\) and the gcd of \(x+1=68\) and \(561\) is \(17\). \(561=33\times 17.\)

Finally, Carmichael numbers will never trigger "composite" in the last step, so on a Carmichael number the Miller-Rabin algorithm will always find a factor when it says "composite".
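Putting the pieces together, here is a Python sketch of one Miller-Rabin round with the factor extraction bolted on (the function name is mine; a real test would try several random bases):

```python
from math import gcd

def miller_rabin_with_factor(m, a):
    """One Miller-Rabin round on odd m > 2 with base a.
    Returns ('probably prime', None) or ('composite', factor_or_None)."""
    g = gcd(a, m)
    if g != 1:
        return ('composite', g)
    # Write m - 1 = 2^s * d with d odd.
    d, s = m - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, m)
    y = x
    for _ in range(s):
        y = pow(x, 2, m)
        if y == 1 and x not in (1, m - 1):
            # x^2 = 1 (mod m) with x != +-1, so gcd(x-1, m) is a factor.
            return ('composite', gcd(x - 1, m))
        x = y
    if y != 1:
        return ('composite', None)  # failed the Fermat condition; no factor
    return ('probably prime', None)
```

On the Carmichael number \(561\) with base \(29\) this returns `('composite', 33)`, matching the worked example above.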

In 2002, Agrawal, Kayal and Saxena gave the first provably polynomial-time algorithm for primality. As far as I know, AKS doesn't factor Carmichael numbers, and there isn't any known deterministic polynomial-time algorithm for finding non-trivial factors of Carmichaels, though if the generalized Riemann hypothesis is true, you need only try the first \(O(\log^2 m)\) values of \(a\) in the Miller-Rabin test.

Sunday, November 16, 2025

Test of Time Awards: A Good Idea but ....

Since there is now a CCC Test-of-Time Award (see here; CCC stands for Computational Complexity Conference), I decided to look at other Test-of-Time awards in computer science.

Below is a list of various computer science Test-of-Time awards, along with their eligibility requirements.

1) IEEE Visualization (VIS) Test of Time Award, see here. Their 2023 award page says:

Papers are selected for each of the three historic conferences (VAST, InfoVis, and SciVis). ... 
This year VAST gave out a 10-year test of time award, InfoVis a 10- and 20-year award, and SciVis a 13, 14 and 25 year award.

2) SC Test of Time Award, see here. SC=Supercomputing Conference.  Their site says:

Papers appearing in the SC Program 10 to 25 years prior are eligible.

3) CIKM Test of Time Award, see here. CIKM = Conference on Information and Knowledge Management. Their page states:

The Test of Time Award is given for CIKM papers published at least ten years ago.

4) ACM SIGCSE Test of Time Award, see here. SIGCSE = Special Interest Group on Computer Science Education. ACM = Association for Computing Machinery. The name is from the era when computers looked like angry filing cabinets. The ACM SIGCSE Test-of-Time page states:

The award recognizes an outstanding paper published in the SIGCSE community. 

 Note that this is broader than just their conference. Good!

5) LICS Test of Time Award, see here. LICS = Logic in Computer Science. Their award page says:

The LICS Test-of-Time Award recognizes a small number of papers from the LICS proceedings from 20 years prior.

6) STOC Test of Time Award, see here. STOC = Symposium on Theory of Computing. Their page states:

There are three awards, for papers presented at the STOC conference 10, 20, or 30 years ago [they later say there is some flexibility on that].

7) FOCS Test of Time Award, see here. FOCS stands for Foundations of Computer Science. Their page states:

The awards recognize papers published in the Proceedings of the Annual IEEE Symposium on FOCS. [From elsewhere on the page they have three categories: papers 10-years ago, 20-years-ago, and 30-years ago, but they have some flexibility with that.]

8) SIGecom Test of Time Award, see here. SIGecom = Special Interest Group on Economics and Computation. Their page states:

...must have been first published in an archival journal or conference proceedings 10–25 years before. At least one author must not be dead. [A surprisingly nontrivial constraint in a Test-of-Time award.]

I think this is the only award on this page that states the not-dead criterion explicitly. Are people in Economics and Computation publishing papers into their 90s? While in hospice care?

9) NeurIPS Test of Time Award, see here. NeurIPS = Neural Information Processing Systems. I couldn't find an eligibility description, so I asked ChatGPT, which confidently stated:

Yes—the paper must have been published in NeurIPS at least 10 years ago. 

If this is wrong, please let me know. ChatGPT is sometimes confident the way undergraduates are confident before seeing the homework solution.

10) CCC Test of Time Award, see here. CCC = Computational Complexity Conference. Their page states:

Eligible papers are those that appeared in CCC (or Structures, pre-1996) at least 10 years ago. 

------------------------------------------------------------------------
These awards raise some questions:

1) Why tie eligibility to a conference?  Suppose Abisola proves P ≠ NP and publishes it in Annals of Mathematics.  Under many of these rules, she would be ineligible for most theory Test-of-Time awards.

CON: Important results may not appear in the “right” conference or any conference at all.

PRO: None. Absolutely none.  Why is it done?  To publicize the conference and provide a nice ceremony venue.  These are not strong intellectual reasons. (I had a nastier comment about this but ChatGPT convinced me to be more polite.)

My proofreader brought up the following:

a) Who will give out the award? Answer: An organization like the ACM, and it DOES NOT need to go to an ACM conference.

b) How do you limit which papers are eligible? Any paper that was in a conference or journal (though see item 3 below). That seems like a rather large set of papers to consider; however, after 10 years we DO know which papers stood the test of time. As an example, when Lance made his best-theorems-of-the-decade lists he did not pay attention to which venue the result appeared in.

2) Why the 10–25 years ago window?  I understand waiting 10 years—time needs time.  But why exclude older papers?  What if the paper Some Connections Between Bounded Query Classes and Nonuniform Complexity (STRUCTURES/CCC 1990) suddenly becomes important? Too bad: it aged out, like a carton of milk.

Are there examples of papers that became important many years after they were published?

I have one sort-of example: Google AI Overview says

The permanent of a matrix was first introduced independently by Cauchy in 1812 and later independently rediscovered by Binet in 1812. [The identical years make me suspicious, and also make the notion of "later" rather odd. Did Cauchy do it in February and Binet in June?]

The permanent became much more important after Valiant showed, in 1979, that computing it is #P-complete. So perhaps Cauchy's paper should get a test-of-time award.
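For readers who haven't seen it: the permanent is the determinant with all the minus signs replaced by plus. A brute-force Python sketch (my own code; exponential time, which is exactly what Valiant's hardness result leads us to expect):

```python
from itertools import permutations
from math import prod

def permanent(A):
    """Sum over all permutations of products of entries, with no signs."""
    n = len(A)
    return sum(prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# For [[1, 2], [3, 4]]: det = 1*4 - 2*3 = -2, but perm = 1*4 + 2*3 = 10.
```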

Fun Fact: The Wikipedia entry on The Permanent (see here) does not mention Valiant, though there is a separate Wikipedia entry on Computing the Permanent (see here) that does.

3) Why require publication in a journal or conference at all?  Suppose Perelman proves P ≠ NP, posts it on arXiv, and never submits it anywhere.  Perelman did just that with his proof of the Poincaré Conjecture, and that was good enough for a Fields Medal.

This touches on the much broader, and increasingly relevant, question: What is the future role of journals and conferences in the age of arXiv?

4) (Bonus question): Is there any real difference between the STOC conference and the FOCS conference in terms of the scope of the papers?  Was there ever?  I would guess no and no. Maybe some of my older readers can tell me, unless they are too busy writing papers in Economics and Computation. 

5) Another proofreader pointed out that it would be a good idea to live up to the title of this post, Test of Time Awards: A Good Idea but, and say why they are a good idea instead of bitching and moaning about eligibility. Good Idea! Some thoughts:

a) They might encourage researchers to aim for deep contributions, not just fashionable ones. I doubt this is true, since I doubt authors think about which award they might win.

b) They reward long-term impact over short-term excitement. Is it much of a reward? How does it benefit the recipient? I ask non-rhetorically.

c) They are an objective record of subjective opinion.  Useful for historians!
 

Wednesday, November 12, 2025

The Future of Teaching Assistants

In 2016, in the pre-transformer times, Georgia Tech professor Ashok Goel gave a prescient TEDx Talk on an AI teaching assistant for his large online Artificial Intelligence course. Students would ask questions on an online forum, and fellow students and TAs would answer them. Unbeknownst to the students, Goel had an AI system answer questions from the forum under the moniker Jill Watson (it was built on technology used for Watson's Jeopardy! win). Jill answered questions about administrative issues in the class so well that most students were surprised it was an AI system. Goel still needed human TAs to handle questions about course content.

In the talk Goel made comments ahead of his time.

Today, we have classes with a hundred or more students, and few get personal attention. We even have massive open online courses, where students learn from prerecorded video lessons. Some of these classes attract 100,000 students, but only 10 percent or fewer complete them—because of the lack of teaching assistance and personal attention. 
This raises a fundamental issue: can we have personal attention at scale? What if we could give personal attention to every student, when and how they needed it? That would create an educational revolution—because learning and teaching would become personal again...I envision a future where everyone has access to AI teaching assistants like Jill Watson—anytime, anywhere, for any task. A future where education is affordable and accessible to all, yet still personal and fun. 

Fast forward to today and AI assistants can answer questions about course content pretty well for nearly every undergraduate course in all disciplines. A well-trained AI tutor outperformed in-class active learning. While I haven't seen formal studies, students seem to go to ChatGPT before reaching out to TAs, instructors or their fellow students. AI can grade assignments and exams. 

Teaching assistants still have an important role to play. They teach recitation sections and run student labs. A great TA can provide a human connection with a student that no computer could. A teaching assistantship gives important training to students who might become academics, as well as providing an important funding mechanism, especially in the humanities and social sciences, which have limited grant support for graduate students. But in a time of budget constraints at many institutions, how can we argue for the same TA support we had back in 2016?

Sunday, November 09, 2025

A Presidential Trivia Question, and How I Tried to Solve It

       

A friend of mine told me that in the last six months, the last grandchild of one of our former presidents (who had already passed away) died.

I tried to deduce who it was without checking the web directly. For example, I looked up when various presidents were born to narrow it down, but I tried not to search for the answer itself.


I could look this up easily. However, by spending time thinking about it and exploring related questions, I might learn things I  didn’t know before.

Things I care about? Maybe.

I now leave a lot of white space, so you can think about the answer yourself and perhaps make a guess; then come my reasoning process and the answer.





























Before answering, I’ll walk through the reasoning I used for my educated guess.
Assume the president was born in year x.

ADDED LATER: In the original version of this post (and it's still there) I made comments like:


                           Warren G. Harding: why is the G always mentioned?

A commenter pointed out that middle names and initials are an American thing. This made me wonder why some presidents didn't use their middle initial. Hence I have ADDED information about that. I won't bother writing ADDED LATER for those late adds.

END OF ADDED LATER


Assume the president’s youngest child is born in approximately year x+35.

Assume that child’s youngest child is born in approximately year x+70.

Assume the grandchild lives 80 years, so dies in approximately year x+150.

Then x+150=2025, so x=1875.

Since this is an approximation, I looked at all presidents born between 1850 and 1900. The website I consulted tells us the answer. What would we do without the internet? (Maybe spend more time offline tightening our bounds on the polynomial van der Waerden numbers.)
Here are all the presidents who were born in a year in [1850,1900].

Theodore Roosevelt — born 1858. Formal name: Theodore Roosevelt Jr. No middle name. 

William Howard Taft — born 1857. He seems to have his full name mentioned. Why? 

Woodrow Wilson — born 1856. Formal name: Thomas Woodrow Wilson. So it would be impossible to use a middle initial unless you do what David Dwight Eisenhower did and switch the two names; then he would be Woodrow T. Wilson.

Warren G. Harding — born 1865

Calvin Coolidge — born 1872. Formal name: John Calvin Coolidge Jr. Similar to Woodrow Wilson. Cal's dad was John Calvin Coolidge Sr. Naming your kid after yourself, including the middle name, could be a mistake, see here.


Herbert Hoover — born 1874. Formal name: Herbert Clark Hoover. I am surprised he wasn't called Herbert C. Hoover. Maybe Herbert Hoover was better because it's alliterative.

Franklin D. Roosevelt — born 1882

Harry S Truman — born 1884 (There really is no period after the S. Both chatty and Wikipedia get that wrong.)

Dwight D. Eisenhower — born 1890 (born David Dwight Eisenhower; he preferred to be called Dwight).

When I was asked the question, I guessed Harry S Truman. I was wrong.


In fact, none of the presidents on that list is the one. 

The correct answer is John Tyler, who was born in 1790.


My rule-of-thumb assumptions (35 years to have a child; an 80-year lifespan) were  large underestimates for this case. John Tyler had sixteen children. The third-to-last was Lyon Tyler, born in 1853 — John Tyler was 63 at the time, which is 28 years more than my estimate of 35. Lyon Tyler had six children; the second-to-last was Harrison Tyler, born in 1928 — Lyon was 75, which is 40 years more than my estimate of 35.

(In 1840, William Henry Harrison and John Tyler were elected Prez and Veep. WHH died after giving his inaugural speech in the cold rain, and Tyler became president. Lyon Tyler named his son Harrison. My thought was: Did Lyon name his son after WHH? I asked Google:

Why did Lyon Tyler name his son Harrison? 

The AI Overview said:

Lyon Tyler named his son Harrison because his own mother, Julia Gardiner, was a member of the Harrison family of Virginia, making Harrison a great-grandson of both President John Tyler and William Henry Harrison, the president whom John Tyler succeeded. Naming his son Harrison was a way for Lyon Tyler to honor the family connection to both presidents, particularly the presidential connection. 

That's not quite right: Harrison is John Tyler's grandson, not great-grandson.

I didn't know that John Tyler's grandchild was named Harrison.

I didn't know that John Tyler's grandchild was also related to WHH.

)


Harrison Tyler died in 2025 at the age of 96, which is 16 years more than my estimate of 80.

So my point (do my posts need a point?) is that I made an approximation but was still way off. John Tyler is an outlier, which is hard to account for.

Let’s say I assumed 60-year-old fathers and the grandson lives to 90. Then we would have:

x + 210 = 2025
x = 1815

This is an approximation, so I would look at presidents born between 1790 and 1840:

John Tyler: 1790. No middle name. 

James K. Polk: 1795. Why is the K always mentioned?

Zachary Taylor: 1784. No middle name. 

Millard Fillmore: 1800. No middle name. 

James Buchanan: 1791. He never had kids. That's just as well, since his children would have had the stigma of their father being, by most accounts, one of the worst presidents of all time.

Abraham Lincoln: 1809. Born February 12, 1809 — the same day and year Charles Darwin was born. No middle name. 

Andrew Johnson: 1808. No middle name. 

Ulysses S. Grant: 1822.  Born Hiram Ulysses Grant, but he didn’t like that the initials were H.U.G.

Rutherford B. Hayes: 1822. Why is the B always mentioned?

James Garfield: 1831. James Abram Garfield. I sometimes see the A initial and sometimes not. 

Chester Arthur: 1829. My Darling's favorite president — really! Formal name: Chester Alan Arthur, which I have seen written, though not as much as the others. Perhaps I just didn't notice him as much as my Darling did.

Grover Cleveland: 1837. Formal name: Stephen Grover Cleveland.

Three points:

1) When I picked 60–60–90, I did not know that John Tyler would actually make it onto the list.

2) He just barely made the list.

3) I would not have picked 60–60–90 unless I had already learned that 35–35–80 was far too small.