Sunday, March 09, 2025

Numbers that look prime but aren't

 
A long time ago I made up the question (are questions ever really made up?)

What is the least number that looks prime but isn't?

It was not quite a joke in that it has an answer despite being non-rigorous.

My answer is 91:

Dividing by 2, 3, 5, and 11 has TRICKS.

Squares are well known.

So the first number that looks prime but isn't is \(7\times 13=91.\)

I recently saw the question asked on Quora and given a more interesting answer. The answer is by Nathan Nannon, who has a PhD in Math from Univ of CA, Davis (graduated 2021). I paraphrase it here and then have some questions.

-----------------

What does it mean to look prime?

FIRST CRITERION:

1) Its decimal representation ends in 1, 3, 7, or 9, and

2) It is not on the multiplication tables that a lot of people memorize, which go up to 12.

Based on these criteria, 39 is the first composite number that looks prime.

(If people no longer learn the mult tables then the answer would be 21.) 

SECOND CRITERION: Use the trick that a number is divisible by 3 iff the sum of its digits is divisible by 3. Then the first composite number that looks prime is 91, followed by 119, 133, and 143.
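A quick brute-force check (a minimal sketch in Python; the cutoff of 150 is just for illustration) recovers exactly that list:

```python
# Composites that "look prime" under the second criterion: they end in
# 1, 3, 7, or 9, are not on the 12x12 multiplication table, and their
# digit sum is not divisible by 3 (so the divisibility trick misses them).
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

table = {i * j for i in range(2, 13) for j in range(2, 13)}
looks_prime = [n for n in range(2, 150)
               if n % 10 in (1, 3, 7, 9)
               and n not in table
               and sum(map(int, str(n))) % 3 != 0
               and not is_prime(n)]
print(looks_prime)  # [91, 119, 133, 143]
```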

THIRD CRITERION: Fermat's Test: If \(p\) is prime then for all \(a\), \(a^p\equiv a \pmod p\).
Numbers that pass this test and yet are composite are called Carmichael numbers.
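A brute-force search (a minimal sketch in Python; the bound of 10000 is just for illustration) finds exactly the composites that pass Fermat's test for every base:

```python
# Fermat's test: n passes base a if a^n = a (mod n). The composites
# that pass for EVERY base a are exactly the Carmichael numbers.
def passes_fermat_all_bases(n):
    return all(pow(a, n, n) == a for a in range(2, n))

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

carmichael = [n for n in range(2, 10000)
              if not is_prime(n) and passes_fermat_all_bases(n)]
print(carmichael)  # [561, 1105, 1729, 2465, 2821, 6601, 8911]
```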

Here are the first few Carmichael numbers:

561 AH- that does not count since 5+6+1 is divisible by 3.

1105 AH-doesn't count, ends with 5.

1729 AH- Nathan Nannon points out that 1729 is the sum of two cubes (more on that later) and hence we can use \(a^3+b^3 = (a+b)(a^2-ab+b^2)\). This only works if you KNOW that 1729 is the sum of two cubes. AH- most mathematicians do know this because (1) it is the least number that can be written as the sum of 2 cubes in 2 diff ways, and (2) this fact has been popularized by the following true story (I quote from Wikipedia, see here) which explains why such numbers are called Taxicab Numbers

The name is derived from a conversation ca. 1919 involving mathematicians G. H. Hardy and Srinivasa Ramanujan. As told by Hardy:

I remember once going to see him [Ramanujan] when he was lying ill at Putney. I had ridden in taxi-cab No. 1729, and remarked that the number seemed to be rather a dull one, and that I hoped it was not an unfavourable omen. "No," he replied, "it is a very interesting number; it is the smallest number expressible as the sum of two cubes in two different ways."

Note 

1) Oddly enough, the fact that 1729 is the sum of two cubes in TWO diff ways does not make it any easier to factor. We just needed one way. 

2) To say that 1729 does NOT look prime depends on history as well as math. If not for the Hardy-Ramanujan story, would most mathematicians know that 1729 is the sum of 2 cubes? Possibly, since it's 1000+729. But not clear. Martians may think 1729 looks prime.


2465 AH-doesn't count, ends with 5

2821 AH- just looking at it, it is clearly divisible by 7.

6601 AH- OKAY, this one really does look prime.

UPSHOT
Depending on what criteria you use, the least number that looks prime but isn't is either 21 OR 39 OR 91 OR 6601, or something else entirely.

------------------------------------------------------

QUESTION
Is there some natural and simple criterion that rules out 6601? This may depend on your definitions of natural and simple.

QUESTION The first few Carmichael numbers had small factors. 6601 is divisible by 7. Is there some function \(f\) with \(f(n) \ll \sqrt n\) such that if \(n\) is a Carmichael number then it has a factor \(< f(n)\)?

The next Carmichael number after 6601 is 8911, which 7 divides. So that looks good. But alas, Jack Chernick proved (see here) that any number of the form \((6k+1)(12k+1)(18k+1)\), where \(6k+1\), \(12k+1\), and \(18k+1\) are all primes, is a Carmichael number. It is not known if this generates infinitely many Carmichael numbers. Hence if some \(f(n)\) exists then it's probably \(\Omega(n^{1/3})\).
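Here is a minimal sketch in Python of Chernick's construction (the range of \(k\) is just for illustration); note that \(k=1\) gives \(7\times 13\times 19 = 1729\), the taxicab number from earlier in the post:

```python
# Chernick: (6k+1)(12k+1)(18k+1) is a Carmichael number whenever all
# three factors are prime.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

for k in range(1, 100):
    factors = (6 * k + 1, 12 * k + 1, 18 * k + 1)
    if all(is_prime(q) for q in factors):
        print(k, factors, factors[0] * factors[1] * factors[2])
```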


Wednesday, March 05, 2025

Taking a Stand

On February 20th we got the news from the National Science Foundation Algorithms Foundations Team that long-time NSF program director Tracy Kimbrel was leaving the NSF, and not by choice.

Along with many others in part-time status at NSF, my service has been terminated earlier than I intended or expected.  It has been a great privilege and a great honor to serve the Algorithmic Foundations community over the last decade and a half.  It's disappointing to have it end so abruptly.  I will miss it and all of you.

Tracy is just one of many government employees losing their jobs, but when you know someone it feels personal. Tracy has been a fixture at the NSF and often came to theory conferences to talk about grant opportunities and the state of the NSF. In my yearly pre-covid pilgrimages to the foundation for panels, I always had great conversations with Tracy and watched him work, getting the information he needed from us to make the tough decisions of which projects to fund, always many more worthy than the available funding. The theory community loses with Tracy out of the NSF.

We did get some good news earlier this week with the NSF reinstating most of their probationary employees. And Trump did say near the end of his speech yesterday "we are going to conquer the vast frontiers of science" but apparently we'll do it with a much smaller NSF if Trump follows through with his plans.

Talking with some fellow academics at another university, I found they had almost given up. "What can we do?"

We can push back.

Start by doing nothing. Don't preemptively change your policies and your values. Too many universities and organizations are abandoning DEI programs, changing their curricula, and freezing hiring of faculty and students in anticipation of challenges to come. We may see a time when new policies will survive the courts and force us to change, but not yet.

While the Democrats in Congress seem powerless, many of the governors, including my own governor JB Pritzker, have fought back, mostly in the courts, and have stopped, for now, much of the damage to the NIH and NSF. The computing societies urge Congress to protect our research funding, especially at a time when we need to compete technologically with China and other countries.

As individuals, we can take our own steps, participate in Stand Up for Science on Friday, reach out to our representatives at the national and state level, and just be part of the resistance. We can't let bullies dictate our future, we must control it for ourselves. 

Sunday, March 02, 2025

Karp recently turned 90 but there was no conference to celebrate that. Which numbers do we use and why?

Karp turned 90 in January of 2025. I searched to see if there is a 90th Birthday Conference for him.  I did not find one (is there one?).  For which years do we have celebratory birthday conferences?

Here are some conferences in honor of  60th Birthdays, by alphabetical order of last name. 

Eric Allender here

Laci Babai here

Allan Borodin here

Stephen Cook here

Rod Downey here

Juris Hartmanis (at the 1988 Structures, now Complexity, conference, predating the web). Lance posted about it here.

Russell Impagliazzo here

Ravi Kannan here

Richard Karp (I could not find the link.)

Stuart Kurtz here

Michael Rabin (I could not find the link but I recall planning to go but snow cancelled my flight.)

Ronitt Rubinfeld here

Michael Saks here

Alexander Shen here

Michael Sipser here

Shang-Hua Teng here

Leslie Valiant here (I thought he also had an 80th bday but I am wrong- he is younger than 80.) 

Vijay Vazirani here

Nikolay Vereshchagin here

Avi Wigderson here

I am sure there are more. 

Having a conference for someone's 80th birthday is also done. Here are a few:

Richard Stanley here

Michael Rabin here

Stephen Cook here (This was actually a conference for 50 years of Complexity Theory: A Celebration of the work of Stephen Cook. It was 50 years since Cook published what is now called the Cook-Levin Theorem. It also happened to be the year he turned 80.) 

Donald Knuth here, but also see Lance's blog post on the meeting here and Knuth's favorite bday here.

Martin Kruskal here, and here for my post on it and Clyde Kruskal's (Martin's son) post on it. That was back in 2007 when I was a guest poster. And Clyde was my guest, so he was a guest-guest poster.

I am sure there are many more. 

Numbers between 60 and 80 are rare (my wife read this and noted that there are 19 of them, not including endpoints) but here are some:

John Osborne (UMCP Math Prof) had a 64th. I could not find a link.

Dick Dudley 65. (MIT Math professor, not to be confused with Big Dick Dudley, who was a wrestler, or Underwood Dudley, who wrote a book on Math Cranks, see here, which I was amazed to find out I did NOT write a review of.)

Mike Paterson here (Why 66? Why not 66!)

Harry Lewis had a 70th, see here (I asked WHY 70? He said the organizer, Margo Seltzer, wanted it then. That is another point- the organizer really controls which year and also if it happens at all.) 

Mihalis Yannakakis had a 70th here 

Ravi Kannan 70 here

Leonid Levin had a 75th, see here

Dick Lipton has a 75th, see here

Manuel Blum had a 77th since 77=7*11 is a Blum Integer. ( The only reason I know it exists is because Lance went to it.) 

I've seen 100th bday conferences. 

Turing here (This is one of many Turing Celebrations for his 100th. It was in 2012. Turing died in 1954.)

Erdos here (This was in 2013. Erdos died in 1996)

Chernoff here (He attended. He is still alive as of this writing, at the age of 101) 

Kolmogorov here (The day K turned 100, a student told me this. I then gave a lecture on Kolmogorov complexity instead of the planned topic, on the fly. Now that my course is all on slides, and some classrooms don't even have a blackboard or whiteboard, I can't do that anymore. Oh well.)

I am sure there are more. 

1) Why are 60, 80, 100 the usual numbers? They are nice and round. And 60 is big enough so that the person being celebrated has done stuff, but not so big that they are dead. 

2) There should be more clever ones like Osborne (64) and Blum (77). If there were ever a conference in my honor that would be hard, since the number most associated with me is 5/12 (see here). I had not done much in math at the age of 5 months. Oh well.

Wednesday, February 26, 2025

You Need Much Less Memory than Time

Just as I was complaining that we haven't seen many surprising breakthroughs in complexity recently, we get an earthquake of a result to start the year, showing that all algorithms can be simulated using considerably less memory than the time of the original algorithm. You can reuse space (memory) but you can't reuse time, and this new result from Ryan Williams in an upcoming STOC paper provides the first stark difference.

DTIME(\(t(n)\)) \(\subseteq\) DSPACE(\(\sqrt{t(n)\log t(n)}\))

This is a vast improvement on the previous best known simulation, the classic 1977 Hopcroft-Paul-Valiant paper showing

DTIME(\(t(n)\)) \(\subseteq\) DSPACE(\(t(n)/\log t(n)\))

only slightly lower than the trivial \(t(n)\) bound. Williams gets a huge near-quadratic improvement that will go down as a true classic complexity theorem. Note that the space simulation does not maintain the time bound.

Williams' proof relies on a space-efficient tree evaluation algorithm by James Cook and Ian Mertz from last year's STOC conference. Cook and Mertz's algorithm builds on earlier work on catalytic computing, highlighted in a recent Quanta article.

Let me give a highly oversimplified view of the combined proof.

A \(t(n)\) time Turing machine uses at most that much space on its tapes. Split the tapes into \(\sqrt{t(n)}\) segments of size \(\sqrt{t(n)}\). Using the fact that it takes \(\sqrt{t(n)}\) time to cross an entire segment, Williams with some clever tricks models acceptance of the Turing machine as a circuit of bounded degree and depth \(\sqrt{t(n)}\), where the wires carry the contents of the size-\(\sqrt{t(n)}\) segments at various times in the computation.

Williams then applies the tree evaluation algorithm of Cook and Mertz. Cook and Mertz use finite fields to encode these segments as a combination of registers of size \(\log t(n)\) and show how to compute the value of each node of the tree using only \(\sqrt{t(n)}\) space for the local computation plus needing to only remember a constant number of registers while reusing the rest of the space when recursively computing the tree. It's pretty magical how they manage to make it all work. 
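To get a feel for the space reuse (though none of the Cook-Mertz magic), here is a toy sketch in Python: a depth-first evaluation of a depth-\(d\) binary tree touches exponentially many nodes but only ever holds \(O(d)\) intermediate values, reusing the space of one subtree when it recurses into the next. Cook and Mertz do far better, remembering only a constant number of registers per level.

```python
def eval_tree(depth, leaf, combine, path=0):
    """Evaluate a full binary tree bottom-up by depth-first search.
    Only one root-to-leaf path of partial results is alive at a time."""
    if depth == 0:
        return leaf(path)                     # value at leaf number `path`
    left = eval_tree(depth - 1, leaf, combine, 2 * path)
    right = eval_tree(depth - 1, leaf, combine, 2 * path + 1)
    return combine(left, right)               # subtree space is now reused

# Example: combine about a million leaves with only ~20 frames live at once.
print(eval_tree(20, leaf=lambda i: i * i, combine=lambda a, b: a + b))
```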

It's worth going through the proof yourself. I recommend Section 3.1 and Footnote 6 in Williams' paper (a slightly weaker space bound but much simpler) and Sections 2-4 of the Cook-Mertz paper. Oded Goldreich has an alternative exposition of the Cook-Mertz algorithm and proof.

Williams' theorem works for multitape Turing machines and oblivious random-access machines, where the queries to the memory are fixed in advance. He shows how to use this result to compute the output of a circuit of size \(s\) using nearly \(\sqrt{s}\) space. Fully general random-access machines remain open, as do nondeterministic and other models of computation (randomized, quantum, etc.).

In 1986 my advisor Mike Sipser gave the first hardness vs randomness result, showing roughly that if there were problems that took time \(2^n\) but could not be solved in space \(2^{.99n}\) on multi-tape Turing machines then RP = P. Williams' theorem kills this assumption though we've developed weaker assumptions since. 

Moving forward, can we push Williams' result to get a simulation in space \(n^\epsilon\) for \(\epsilon<1/2\)? A simulation for all \(\epsilon>0\) would separate P from PSPACE. Even a slight improvement would have applications for alternating time. Maybe try to use the Cook-Mertz techniques directly in the Turing machine simulation instead of going through computation trees.

Read sections 4 and 5 of Williams' paper for some further consequences and challenges for further improvements. 

Sunday, February 23, 2025

Why my department hopes I do not die this spring

Alice is scheduled to teach X in the Spring.

Then Alice CAN'T! (illness, death, or some other reason)

What is the department to do?

1) If it's an undergraduate class then likely there are other people who are QUALIFIED. Perhaps a grad student, perhaps a postdoc, perhaps someone borrowed from another dept (Math for a theory course for example), perhaps a teacher of another section of the course, perhaps a retired teacher in the area. Might be rough because of the short notice. In Math this is easier since the courses are more standardized.

2) If it's a graduate class then either

a) It's still something that someone else could teach. The set of people is smaller but might be nonempty. 

b) Nobody else can teach it. Still, it's a graduate class, so it's small, so it can be cancelled and the students can probably find other courses to take.

There is another positive aspect to this negative situation: If nobody else can teach it then probably the entire department is small, so even more likely that the class is small.

If I die this semester then the department will be faced with the perfect storm:

a) I am teaching a graduate course that would be a lot of work for someone else to get up to speed on: Ramsey Theory. (I do have good slides, so that might help.)

b) There are around 30 students taking it. (There are around 30 in most of the graduate courses, and more in the AI graduate courses.)

SO, what would they do? 

1) Cancel it. There are a few other theory grad courses the students could take, and some might want to take a non-theory course. Still awkward since those courses are already fairly full.

2) Have someone teach a similar course, like Prob method (the only part of Ramsey Theory that Lance thinks is worthwhile, see here) or combinatorics. This would be a lot of work so it may be hard to find someone who BOTH is qualified AND wants to. Perhaps a grad student, though I think we try to avoid having grad students teach grad courses. Then again, desperate times call for desperate measures. Perhaps a former grad student who is still in the area geographically and mathematically. 

I've been talking about the teacher being unable to teach the course BEFORE it begins. What if the teacher becomes unavailable DURING the semester? That's even harder to deal with.  

OKAY, the above has all been abstract and the events portrayed are rare. Here are things I have SEEN happen and what was done:

1) Someone hired as a lecturer to start in the spring ends up being unable to come for the spring. This was found out in November. They were going to teach 2 ugrad courses. 2 profs did an overload.

2) Someone who was going to teach a grad course ended up being unable to do so. This was found out in December. That teacher really did have a former grad student in the area who was available and qualified. Lucky!

3) In December a teacher ends up being unable to teach with 2 lectures to go, and the final to be administered and graded. Another teacher (who has taught that course) takes over, but this is not a big deal since it's not much work to finish the course.

4) A teacher knows ahead of time that they won't be able to teach for 4 weeks. Two other teachers agree to do the course in that teacher's absence.

None of the above are ideal, but solutions were found that did work (for some definition of worked). But I do wonder if there will come a time when no solution can be found.

One piece of advice: If you are not going to be able to fulfill your commitment to teach a course, let your chairman know with a lot of lead time so they can find a solution. 


Wednesday, February 19, 2025

Tomorrow and Yesterday

I recently completed Tomorrow, and Tomorrow, and Tomorrow by Gabrielle Zevin, a book recommended by many including the City of Chicago. The novel covers the decades-long journey of two game developers, Sadie and Sam, and how their lives interact with the games they create.

A paragraph towards the end made me rethink the whole book (not a spoiler):

Well, if we’d been born a little bit earlier, we wouldn’t have been able to make our games so easily. Access to computers would have been harder. We would have been part of the generation who was putting floppy disks in Ziploc bags and driving the games to stores. And if we’d been born a little bit later, there would have been even greater access to the internet and certain tools, but honestly, the games got so much more complicated; the industry got so professional. We couldn’t have done as much as we did on our own.

This paragraph hearkens back to my post last week, about how the era you grew up in can affect your trajectory. But also I'm a generation older than the book's main characters, and indeed Ribbit was distributed on a floppy disk in a Ziploc bag.

The novel at its heart is about two friends making games. I was lucky to have that experience myself for a couple of years in the early 80s, with high school friend Chris Eisnaugle, working on Ribbit, Excalibur and some other games that never saw the light of day. We coded for days on end while listening to music like REO Speedwagon, and taking time off for bowling or watching early Schwarzenegger movies. Coding in assembly language on slow processors with limited graphics, taking advantage of our complementary strengths and making it work. I don't regret leaving that life behind for the theoretical wonders of computational complexity, but that doesn't mean I don't miss it.

Sunday, February 16, 2025

Big Breakthrough in the exciting world of sum-free sets!

 Let \([n]\) be the set \(\{1,\ldots,n\}\). (This is standard.)

Let \(X\) be a set of integers. \(X\) is sum-free if there are NO \(x,y,z\in X\) such that \(x+y=z\). (Note that \(x=y\) is allowed.)

Let's try to find a large sum-free subset of \([n]\). One obvious candidate is

\(\{1,3^1, 3^2,\ldots,3^{\log_3 n} \}\) (assume \(n\) is a power of 3). 

So we can get a sum-free subset of \([n]\) of size \(\log_3 n\). Can we do better?

YES: 

Erdos showed that every set of \(n\) reals has a sum-free subset of size \(n/3\). I have a slightly weaker version of that on slides here.
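Here is a minimal sketch in Python of Erdos's argument for sets of positive integers (the statement for reals uses a continuous variant of the same idea; the constants and the number of random trials are just for illustration). The "middle third" of \(\mathbb{Z}_p\) for a prime \(p=3k+2\) is sum-free mod \(p\), and a random dilation \(x \mapsto tx \bmod p\) lands more than a third of the set there in expectation.

```python
import random

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))

def sum_free_subset(A):
    # find a prime p = 3k+2 larger than every element of A
    p = max(A) + 1
    while not (is_prime(p) and p % 3 == 2):
        p += 1
    k = (p - 2) // 3
    middle = set(range(k + 1, 2 * k + 2))    # {k+1,...,2k+1}: sum-free mod p
    best = []
    for _ in range(100):                     # each trial hits > 1/3 on average
        t = random.randrange(1, p)
        S = [x for x in A if (t * x) % p in middle]
        if len(S) > len(best):
            best = S
    return best

A = random.sample(range(1, 10**6), 60)
S = sum_free_subset(A)
assert all(x + y != z for x in S for y in S for z in S)  # genuinely sum-free
print(len(A), len(S))  # len(S) > 60/3 = 20 with overwhelming probability
```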

That result led to many questions:

a) Find \(f(n)\) that goes to infinity  such that every set of \(n\) reals has a sum-free subset of size \( n/3 + f(n)\).  

b) Replace reals with other sets closed under addition: naturals, integers, some groups of interest.

c) Just care about \([n]\).

Tao and Vu had a survey in 2016 of sum-free results, see here.

We mention three results from their survey

(0) Eberhard, Green, and Manners showed that the 1/3 cannot be improved (see the survey for a more formal statement of this). So, for example, nobody will ever be able to replace 1/3 with 1/2.9.

(1) Alon and Kleitman modified the prob argument of Erdos (in the slides) and were able to replace \(n/3\) with \((n+1)/3\).

(2) Bourgain showed, with a sophisticated argument, that \(n/3\) could be replaced by \((n+2)/3\).

I believe that result (2) was the best until a recent paper of Ben Bedert (see here). Did he manage to push it to \((n+100)/3\)? No. He did much better:

For every set of \(n\) reals there is a sum-free  subset of size \(n/3 + c\log\log n\).

Reflections

1) This is a real improvement!

2) Will it stay at \(n/3 + c\log\log n\) or will there NOW be an avalanche of results?

3) Contrast: 

Erdos's proof I can teach (and have taught) to my High School Ramsey Gang (note: you DO NOT want to mess with them).

 The proof by Ben Bedert is 36 pages of dense math.

4) This leads to the obvious open question: Is there an elementary proof of some bound of the form \(n/3 + f(n)\) where \(f(n)\) goes to infinity?



Wednesday, February 12, 2025

Research Then and Now

A student asked me if complexity research was easier when I was a student. Interesting question. Let's compare research now versus the late 80's.

The big advantage today is technology. Just a small sampling below.

Information: Google, Wikipedia, Complexity Zoo and the ever more powerful AI systems

Online Papers: arXiv, digital libraries, individual's web sites

Collaboration: Zoom, Overleaf, Theory Stack Exchange

Communication: Social Media, Online Talks and Classes, YouTube, Mobile Phones

Back in the 80's we had LaTeX and email. LaTeX was slow and you had to print out the paper to see it. Email was text-only and it was difficult to share papers electronically. You had to go to the library to read papers unless you had the paper proceedings nearby. It was often easier to reprove a theorem than to go find the proof.

We did have some advantages back then. Reproving a theorem made you understand it better. Reading a paper in a proceedings or journal often introduced you to other papers in the publication. We didn't have the distractions of social media and the Internet in general so you could actually focus on research. (Though there was a Mac in the MIT student area that ran Tetris.)

People came to the office every day. That opened up collaborations formal and informal. I could walk into an office and ask questions of Johan Håstad, Noam Nisan, Paul Beame and Michael Kearns, all postdocs at MIT, not to mention my fellow grad students or the professors. There was a huge advantage to being at a place like MIT or Berkeley; better that those advantages have been mitigated somewhat since.

But the biggest advantage of all, and I'm not afraid to admit it, was low-hanging fruit. Computational complexity was just 20 years old when I started grad school, and the P v NP problem a young teenager. There was surprising theorem after surprising theorem through the early 90's and not so much since. You didn't need to know deep math, and most graduate students could follow nearly all of the 47 talks at my first conference (STOC 1986); that's not likely for the 219 STOC 2025 papers.

Much of my best work was done by reading a paper or watching a talk and wondering why the authors didn't ask some question, and then asking it and sometimes solving it myself or with others. Now you need to spend time climbing the trees and going down deep branches to find new problems that only people on nearby branches would care about or even understand.

But who knows, AI may soon climb those branches for you.

Sunday, February 09, 2025

Does Lance dislike Ramsey Theory Because he's colorblind?

BILL: Lance, my wife asked if you dislike Ramsey Theory because you are colorblind.

LANCE: (laughs) It's why I don't like Ramsey Theory talks--impossible for me to follow. But I don't actually dislike Ramsey theory. I just don't like it as much as you do.

BILL: Nobody does, except possibly Graham, Rothschild, Spencer, and your average Hungarian first grader.

LANCE: To me Ramsey Theory had one useful cool idea: The Probabilistic Method, which made people actually think Ramsey Theory was far more interesting than it really is.

BILL: Um, that is bad history and bad mathematics.

LANCE: I learned Ramsey Theory from Spencer and his book entitled Ten Lectures on the Probabilistic Method. But the Probabilistic Method was far more interesting than the Ramsey Theory. I suspect this is common: learn Ramsey Theory because of the Probabilistic Method. And some people get suckered into thinking that Ramsey Theory is interesting or important or both. My favorite application of the Probabilistic Method has nothing to do with Ramsey theory: Lautemann's proof that \(BPP \subseteq \Sigma_2\).

BILL:  A few points to make here

a) Some people call the Prob Method an application of  Ramsey Theory. I do not. The Prob Method was developed by Erdos to get lower bounds on asymptotic Ramsey numbers and was then used for many other things, that you and others find far more interesting. That's great, but that's not really an application of Ramsey Theory.

b) If the prob method had not been discovered by Erdos for Ramsey Theory, would it have been discovered later when it was needed for something you care about more? Probably. But it may have been much later.

c) Ramsey Theory has real applications. I have a website of them here. And there are more. For example---

LANCE: Bill, your graduate course in Ramsey theory is titled Ramsey Theory and its ``Applications''. So you do not believe there are real applications.

BILL: Depends how you define real and applications. I put it in quotes since many of the applications are to pure math. Some are to lower bounds in models of computation, but some may still consider that to not be a real application. Rather than debate with the students what a real application is, I put it in quotes.

LANCE: Are there any real applications? That is, applications that are not to pure math, and not to lower bounds.

BILL: I would think you would count lower bounds as a real application.

LANCE: I am asking on behalf of the unfortunate Programming Language Student who takes your class thinking there will be real applications- perhaps they missed the quotes around applications.

BILL: I know of one real application. And it's to Programming Languages! Ramsey Theory has been used to prove that programs terminate. I wrote a survey of that line of research here.

LANCE: One? There is only One real application? Besides the application of learning the probabilistic method so they can use the method for more interesting problems. Or to save the earth from aliens.

BILL: Judging a field of Pure Math by how many applications it has does not really make sense. I find the questions and answers themselves interesting. Here is a list of theorems in Ramsey Theory that I find interesting. Do you? (This might not be a fair question either since many theorems are interesting because of their proofs.) 

a) (Ramsey's Theorem, Infinite Case) For every 2-coloring of the edges of \(K_\omega\) there exists an infinite \(H\subseteq \mathbb{N}\) such that every edge between vertices in \(H\) is the same color. My slides here.

b) (Van der Waerden's Theorem) For every \(c,k\) there exists \(W\) such that for every \(c\)-coloring of \(\{1,\ldots,W\}\) there exist \(a,d\), both \(\ge 1\), such that \(a, a+d, \ldots, a+(k-1)d\) are all the same color. My slides here. (A brute-force check of the smallest case appears after this list.)

c) (Subcase of the Poly VDW Theorem) For every \(c\) there exists \(W\) such that for every \(c\)-coloring of \(\{1,\ldots,W\}\) there exist \(a,d\), both \(\ge 1\), such that \(a, a+d, a+d^2,\ldots,a+d^{100}\) are all the same color. My slides here.

d) For every finite coloring of the Euclidean plane there exist three points, all the same color, such that the area of the triangle they form is 1. My slides: here.
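As promised above, here is the brute-force check for the smallest case of b): with \(c=2\) colors and \(k=3\), the least such \(W\) (the van der Waerden number \(W(2,3)\)) is 9. A minimal sketch in Python:

```python
from itertools import product

def has_mono_ap(coloring, k):
    """Does this coloring of {1,...,n} (a tuple indexed from 0) contain
    a monochromatic k-term arithmetic progression a, a+d, ..., a+(k-1)d?"""
    n = len(coloring)
    for a in range(1, n + 1):
        d = 1
        while a + (k - 1) * d <= n:
            if len({coloring[a + i * d - 1] for i in range(k)}) == 1:
                return True
            d += 1
    return False

def vdw(c, k):
    """Least W such that every c-coloring of {1,...,W} has a mono k-AP."""
    W = 1
    while not all(has_mono_ap(col, k) for col in product(range(c), repeat=W)):
        W += 1
    return W

print(vdw(2, 3))  # 9
```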

So Lance, do these Theorems interest you?

LANCE: Sorry I fell asleep after you said "Ramsey". Let me look. Your slides are terrible. All of the colors are the same!

BILL: Maybe my wife was right. 

Friday, January 31, 2025

The Situation at the NSF

The National Science Foundation is one of the agencies most affected by the various executive orders issued by the Trump administration. As a critical funder of research in theoretical computer science, and science and engineering more broadly, the NSF has effectively come to a standstill sending universities scrambling. Here's a little primer on what's going on in the foundation.

The NSF, like all parts of the federal government, has to follow the Executive Orders, and the first step is to determine which orders apply to NSF business, then how these orders can be addressed in a way that satisfies the White House and other supervisory agencies. This is a process that the NSF leadership works out and negotiates.

For the immediate NSF business, most influential is the order Ending Illegal Discrimination and Restoring Merit-Based Opportunity, which states, among other things, that some DEIA measures violate anti-discrimination laws. Since NSF cannot pay for activities that violate current law, all active awards are being checked to see whether they might be affected by it. This is done with the highest priority, since it affects the cash flow of current awards. After that, awards that are in progress toward being made, as well as entire programs and solicitations, will be reviewed for compliance with the executive order. Until compliance is assured in a way acceptable to the supervisory agencies, no new awards or other financial commitments can be made. After that, normal business should resume, although probably with a huge backlog.

The Hiring Freeze Executive Order also has a huge influence on NSF. The hiring freeze applies to all federal agencies, but NSF has a large number of rotators, usually university researchers who serve as a program director for a one to four-year term. The rotators are essential to the programs, and the hiring freeze prevents new rotators from starting their role at the NSF. The hiring freeze will last for 90 days; then a plan to reduce the size of the federal workforce will be presented, and NSF might, we hope, start hiring again. In the past, NSF hiring processes were excruciatingly slow, so we need to expect NSF to be understaffed for a significant period beyond the 90 days. The recent Fork in the Road letter of the Office of Personnel Management might lead more people to leave federal employment, and the strong stand on return to in-person work might make hiring rotators even more difficult. So, although all this is in flux and changing, it currently looks like difficult times ahead for the NSF.

What does this all mean for the research community? Some current funding is locked up, hopefully for a short time; there will be heavy delays on new grants, given higher scrutiny and lower staffing; and funding in fairness or focused heavily on broadening participation might be killed altogether. A lengthy delay will mean less funding for PhD students and postdocs next academic year. Given the reduced funding and the political atmosphere, we may lose America's ability to recruit the world's top talent to our universities.

You can read official news on how the executive orders affect NSF, and also see Scott's take.

Update 2/2/2025: For now, the NSF is getting back to business due to a temporary restraining order.

Wednesday, January 29, 2025

Lautemann's Beautiful Proof

In writing the drunken theorem post, I realized I never wrote a post on Lautemann's amazing proof that BPP is contained in \(\Sigma^p_2\), the second level of the polynomial-time hierarchy.

Clemens Lautemann, who passed away in 2005 at the too-young age of 53, wasn't the first to prove this theorem. That honor goes to Michael Sipser, who proved that BPP is in the polynomial-time hierarchy using Kolmogorov complexity, and Peter Gacs, who put it into the second level using hash functions, both results in the same paper.

Nisan and Wigderson, after Sipser, Gacs and Lautemann, note that BPP in \(\Sigma^p_2\) follows from their pseudorandom generators: simply guess a potentially hard function and use the universal quantifier to check that it's hard. Once you have a hard function you can use their generator to derandomize BPP.

But Lautemann's proof is incredibly beautiful because he just directly gives the \(\Sigma^p_2\) (\(\exists\forall\)) expression, and two simple probabilistic-method arguments show it works. QED.

Let \(L\) be in BPP, accepted by a probabilistic polynomial-time Turing machine \(M\) with error bounded by \(2^{-n}\) (standard amplification gets the error this low). Let \(A(x,r)\) be true iff \(M(x)\) using random tape \(r\) accepts. \(A(x,r)\) is deterministically polynomial-time computable. We can assume \(|r|=|x|^k\) for some \(k\).

Here is the \(\Sigma^p_2\) expression for input \(x\) of length \(n\). All quantification is over binary strings of length \(n^k\). Let \(\oplus\) be bitwise parity and \(\vee\) be logical OR.

\(\exists z_1,\ldots,z_{n^k} \forall y\ A(x,z_1\oplus y) \vee \ldots \vee A(x,z_{n^k}\oplus y)\)

That's it, the beautiful self-contained formula. We just need to show that this expression is true if and only if \(x\) is in L. 

Suppose \(x\) is in L and pick \(z_1,\ldots,z_{n^k}\) independently at random. For a string \(y\) the probability that \(A(x,z_i\oplus y)\) is false for all \(i\) is at most \((2^{-n})^{n^k}=2^{-n^{k+1}}\). So the probability that for some \(y\) this happens is at most \(2^{n^k}2^{-n^{k+1}}=2^{n^k-n^{k+1}}\ll 1\) so for some choice (even most choices) of the \(z_i\), the expression will be true.

Now suppose \(x\) is not in L. Fix \(z_1,\ldots,z_{n^k}\) and now pick \(y\) at random. The probability that \(A(x,z_i\oplus y)\) is true for a fixed \(i\) is at most \(2^{-n}\), so the probability that one of them is true is at most \(n^k2^{-n}\ll 1\). So some \(y\) makes every disjunct false, and the expression is false.
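Both probabilistic arguments are concrete enough to watch on a toy example. Here is a minimal sketch in Python (the parameters are illustrative, not those of the proof): random shifts of a dense accepting set cover the whole space, while shifts of a sparse one cannot.

```python
import random
random.seed(1)

m, s = 10, 10                    # random tapes = {0,1}^m as ints, s shifts
U = range(2 ** m)

def covered(A, shifts):
    # the inner predicate: for all y, some shift z has z XOR y in A
    return all(any((z ^ y) in A for z in shifts) for y in U)

dense = {r for r in U if random.random() < 1 - 2**-8}   # x in L: most tapes accept
sparse = {r for r in U if random.random() < 2**-8}      # x not in L: few accept

shifts = [random.randrange(2 ** m) for _ in range(s)]   # random z_1,...,z_s
print(covered(dense, shifts))   # True: random shifts cover (first argument)
print(covered(sparse, shifts))  # False: s*|sparse| << 2^m (second argument)
```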

MIC DROP

Sunday, January 26, 2025

People who live through two square years

 44*44=1936.

45*45=2025. This year!

46*46= 2116.

Since my fake birthday is Oct 1, 1960 (I do not reveal my real birthday to try to prevent ID theft), which is past 1936, and I won't live to 2116 unless Quantum-AI finds a way to put my brain in a vat, I will not see two square years in my life :-(

Since I keep a list of celebrities (defined as people I know who have some fame - so it's subjective) who are over 80, I have a LIST of celebrities who were born in 1936 or earlier. I was going to list them in this post but there are too many. So I list those that I think my readers care about, and point to the full list.

Here are people who have lived through two squares who I think you care about. I leave out how old they are or what year they were born.  They are in alphabetical order by their first names. I put a * next to the people who, in 1936, were  AWARE that it was a square year. So the starred names are those who truly ENJOYED living through 2 square years.

NAME                  KNOWN TO US FOR

Andrzej Ehrenfeucht   Math. EF-games (he is the E)
Anil Nerode           Math. Recursive Math
Aviezri Fraenkel      Math. Combinatorial Games

Buzz Aldrin           Walked on the moon

Charles Duke          Walked on the moon

Dana Scott            Math-CS. Prog Langs. Turing Award
David Scott           Walked on the moon
Dirk Van Dalen        Math. Logic

Eric Hirsch Jr        American Educator *

Harrison Schmitt      Walked on the moon
Harry Furstenberg     Math. Ergodic methods in Ramsey Theory
Heisuke Hironaka      Math. Algebraic Geom. Fields Medal
Herman Chernoff       Math. Probability *

Jack Edmonds          CS. Theorist
James Watson          Biologist. The Double Helix. Nobel Prize with Crick. *
Jane Goodall          Zoologist and Activist
Jean-Pierre Serre     Math. Algebraic X. Fields Medal. *
John Thompson         Math. Group Theory. Fields Medal, Abel Prize

Michael Rabin         CS/Math. Theorist. Turing Award

Noam Chomsky          Linguistics. Did work on Grammars

Richard Friedberg     Physicist. Also invented the Priority method in Rec Theory
Richard Karp          CS. Theorist. Turing Award
Richard Stearns       CS. Theorist. Turing Award

Stephen Smale         Math. Lots of Stuff

Tom Baker             Actor. Dr. Who
Tom Lehrer            Math/Novelty songs. *
Tony Hoare            CS. PL. Turing Award

Volker Strassen       Math-CS

Walter Koenig         Actor. Star Trek
William Shatner       Actor. Star Trek

For my complete list see here

It is currently  impossible for someone to live through three square years (again, unless they get their brain in a vat, or some other not-yet-invented mechanism). In a much earlier era it was possible: 

20*20=400

21*21=441

22*22=484

So if you were born in 400 and lived to be 84 years old, three squares! While MOST people didn't live that long back then, SOME did.
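The arithmetic generalizes (a two-line sketch in Python): to live through the three square years \(k^2\), \((k+1)^2\), \((k+2)^2\) you must survive \((k+2)^2 - k^2 = 4k+4\) years.

```python
# Lifespan needed to live through three consecutive square years.
for k in (10, 20, 44):
    print(f"born by {k*k}, survive to {(k+2)**2}: that's {4*k + 4} years")
# born by 100, survive to 144: that's 44 years
# born by 400, survive to 484: that's 84 years
# born by 1936, survive to 2116: that's 180 years
```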

10*10=100

11*11=121

12*12=144

Born in the year 100, live to be 44 years old. Was the calendar well established by then?

(ADDED LATER: A colleague emailed me about the calendar. Dionysius Exiguus is credited with inventing the AD/BC calendar (not to be confused with the band AC/DC). He was born in 470, so let's say that the modern calendar was known from 500 on. (I am not dealing with the times it's changed a bit.) SO

23*23=529

24*24=576

25*25=625

To live through three squares you would need to live to 96.

To be aware that you lived through three squares you would need to live to 104.

To enjoy the 3-squareness you would need to be a healthy 104 year old.

Did someone who was born in 529 live to 625? I doubt it. See here for an article about how long people lived in the Middle Ages. Or, more accurately, how short they lived.

)


Reminds me of a great line from A Funny Thing Happened on the Way to the Forum, a musical that takes place in the early AD's. The character Pseudolus says:

(Looking at a bottle of wine) Was 1 a good year?


Wednesday, January 22, 2025

The Fighting Temeraire

What does an 1838 painting tell us about technological change?

A colleague and I decided to see how well LLMs could teach us a topic we knew nothing about. We picked the Romanticism art movement. I asked ChatGPT to tutor me on the topic for an hour. Chatty picked four paintings. 

Top Left: Liberty Leading the People (Delacroix, 1830)
Top Right: Wanderer above the Sea of Fog (Friedrich, 1818)
Bottom Left: The Fighting Temeraire (Turner, 1838)
Bottom Right: The Third of May 1808 (Goya, 1814)
For each of these paintings, I put the painting up on one screen and used the voice feature to have ChatGPT give me an overview of each, and then we would have a discussion about it where I would ask about various features. We ended up spending about an hour on each. Was it successful? I now know significantly more about the Romantic art period and these paintings, though of course I am not an expert. It was certainly a better and more enjoyable experience than the freshman seminar course on art history I took in college.

Let's discuss one of these paintings in more detail, the 1838 painting The Fighting Temeraire, tugged to her last berth to be broken up by Joseph Mallord William Turner, on display at the National Gallery in London.

The Fighting Temeraire by J.M.W. Turner (click on picture for more detail)

The 98-gun ship Temeraire featured on the left fought in the Battle of Trafalgar. This painting captures the ship being towed by a steam tug through the Thames to be broken up for scrap. 

Much to love in the painting: the reddish sunset, the reflections of the boats in the water, the detail of the Temeraire and the lack of detail of the tug.

But also note the nod to technological change, the tall sailboat being taken to its end by a coal-powered tugboat, marking the new era of shipping vessels, the industrial revolution in full swing, and the beauty we lose to progress. Now a bad metaphor for the AI revolution of today.

If you want to learn more about the painting, you can watch this lecture from the National Gallery, or you could ask your favorite LLM.

Sunday, January 19, 2025

Presidential Quiz!

I made up a quiz about the American Presidents here.  

It has 40 questions. In the modern electronic age you can probably look up most or even all of the answers. So what to do about that?

1) The quiz is not for money or credits or anything, so if you ``cheat'' you only cheat yourself.

2) Be honest with yourself and take it in three hours.

3) USE the web and see how long it takes you to finish it. You can make up some way to measure how well you did by combining how many you got right with how long it took.

4) The answers, which I will post later in the week or next week, have lots of other information of interest. So whichever of 1, 2, 3 you do, or something else, read the solutions (even those you got right) and be enlightened.

It is actually titled Prez Trivia Quiz. This might not be accurate.

What is trivia? I think it is knowledge that does not connect to other knowledge and hence is not important. Some of my questions are trivia, and some are not. I give examples of each:

What is the most common middle initial for a president? This is clearly trivia. I don't know the answer and it's not on the quiz, but I might put it on the next time I do this, four years from now.

(ADDED LATER: A comment used an AI and gave the wrong answer. However, this encouraged me to find out the right answer. The website I found is here. The answer is H: William HENRY Harrison, George HERBERT WALKER Bush, William HOWARD Taft, Barack HUSSEIN Obama. Second place is W with three: George WALKER Bush, Ronald WILSON Reagan, and, much to my surprise, President Woodrow Wilson, who was actually Thomas WOODROW Wilson. There are many initials that appeared twice. Does any of this enlighten me? About presidents, no. Finding out why the AI got it wrong would be interesting but perhaps unknowable.)

Five presidents ran again for president four or more years after leaving office. Name them and how they did. This is not trivia (what word means the opposite of trivia? See later). Since Trump ran four years later and won, it is of interest to see what circumstances in the past led to a former prez (a) running again, and (b) winning.

If you are so inclined you can, for each question on the quiz, say if it's trivia or not. YMMV.

I googled "opposite of trivia" and got this informative website (I am not being sarcastic) here.

Wednesday, January 15, 2025

"Our Days Are Numbered"

Proofs are amenable to chess techniques. "Our Days are Numbered".

Slide in Lev Reyzin's JMM talk "Problems in AI and ML for Mathematicians" Reyzin is paraphrasing Telgarsky. Posted with permission.

Last week I attended the Joint Mathematics Meeting in Seattle with a theme of

We Decide Our Future: Mathematics in the Age of AI

With little computational complexity in the conference, I attended many of the AI talks. You could divide them into Math for AI and AI for Math. Mathematics of course plays critical roles in machine learning optimization, and several talks focused on provably good learning algorithms, though they overemphasized the importance of such results. Do you get on a plane because you understand the physics or because air travel has a strong safety record?

But let's focus on AI for Math which generated the most discussion and angst. 

I started the conference sitting on a panel on the challenges of peer review in mathematics. Math doesn't have the replication crisis that dogs other scientific communities. But math does have papers with very specialized, long, detailed proofs and few qualified referees willing to check them over with care. 

We're not there yet, but in the "near future" we ought to have AI systems that can verify well-written proofs by compiling them using proof assistants like Lean. Referees could spend less time on checking proofs and more on deciding whether the model, theorem and/or techniques merit publication. We might get to the point that you couldn't even submit a paper to a journal until the proofs have been verified.
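To make that concrete, here is a tiny example (a sketch, assuming Lean 4 with Mathlib) of the kind of artifact a referee could trust without re-deriving it: once the compiler accepts the file, the statement is checked.

```lean
import Mathlib.Tactic

-- A toy machine-checked statement: the compiler, not a referee,
-- verifies that the proof is correct.
theorem sum_sq_nonneg (a b : ℝ) : 0 ≤ a ^ 2 + b ^ 2 := by
  positivity
```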

It's what comes next that really spooks mathematicians. While AI has made significant progress in solving competition problems with DeepMind's AlphaProof and OpenAI's o3, it has only played minor roles in developing new ideas for theorems. Eventually AI systems will find critical steps for publishable theorems. Who do we give the credit to? When AI systems become stronger than typical PhD students, what kinds of problems do we give the students?

We'll get plenty of AI slop, flooding journals with mathematical papers that technically prove new theorems but don't offer any particularly interesting or illuminating insights. But we'll also get mathematicians who can unleash an army of virtual grad students to find new connections between different subfields, or make significant progress on major open problems in the field. 

Some mathematicians don't want AI trained on their papers and using their techniques and theorems, even though they wouldn't have problems with other mathematicians doing so. In any case, they might not have much choice as many publishers are making deals with AI companies.

The future of mathematicians might follow that of artists and programmers. The best will do fine since they can find proofs beyond what an AI can do. Mathematicians who know how to ask the right questions can harness AI to make themselves far more productive. All mathematicians will have to navigate this brave new world.

"At least they'll still need us to teach the courses," one mathematician told me. Don't be so sure.

Sunday, January 12, 2025

Random Thought on AI from someone in the REAL WORLD

Guest Post from Nick Sovich. 

-----------------------------------------------------------------------------

Bill Gasarch recently blogged on RANDOM THOUGHTS ON AI here. He is in the realm of theory. I am in the realm of applications, so I asked if I could do a post on AI from that background. He agreed, and here's the post:

1) Words Matter

Today you tell a foundation model who to be and how to behave by giving it a system prompt and a chat prompt. The system prompt is the AI analog of 

I’m a CS professor, I’m an expert in Ramsey theory, and my goal is to help students learn.

The chat prompt is the AI analog of


Hi CS Professor, can you please help me write a blog post to formulate my thoughts on AI?

Words comprise these system prompts and chat prompts. Getting the right words is very important, in the same way writing the right lines of code is important.

So if you have an AI that *can* be a 157-IQ expert in any field, you still have to tell it what kind of expert to be, what its worldview is, and how to communicate. Because it can’t have infinite worldviews and communication styles all at once.

Example System Prompt and Chat Prompt is here.
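For readers who want to see the mechanics, here is a minimal sketch using the OpenAI Python client (the model name is illustrative; the prompts are the ones above):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        # the system prompt: who the model should be and how to behave
        {"role": "system",
         "content": "I'm a CS professor, I'm an expert in Ramsey theory, "
                    "and my goal is to help students learn."},
        # the chat prompt: what the user actually asks
        {"role": "user",
         "content": "Hi CS Professor, can you please help me write a blog "
                    "post to formulate my thoughts on AI?"},
    ],
)
print(response.choices[0].message.content)
```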


 
2) If English becomes the #1 programming language, then what’s the difference between a poet and a computer scientist?

There has been talk in the industry of English eventually becoming the number 1 programming language. What does that mean? A poet has mastery over the English language. A computer scientist has mastery over programming languages. A poet can create works of art.  A poet can be creative, but so can a computer scientist. A computer scientist can be pragmatic, but so can a poet. Will poets become better programmers than computer scientists?  Will computer scientists become better artists than poets? Does it matter?

3) What is code and what is poetry?

Bad code doesn’t compile, or it compiles but doesn’t run, or it runs but has bugs. Bad poetry doesn’t express the author’s intent, or it doesn’t evoke the reader’s emotion, or it doesn’t convey a message to society. Bad code is still code, bad poetry is still poetry. Humans can produce good and bad code and good and bad poetry. AI can produce good and bad code and bad poetry. Can AI produce good poetry?


4) AI is a tool, like auto-complete. 

This is both good and bad.

Underuse the tool, and you waste a lot of time.

Overuse the tool, and you don't understand your own product.

5) Where should AI start and end in academia?


This is an important question worthy of a separate blog post that I might do in the future.

6) AI can either replace people or empower people. 

We should look for ways that it empowers people. A nice 38-second clip about that is here.

As an example:

Good: Giving the vet tech tools to record my intake questions through natural language instead of pen and paper.

Bad: Automating away all vet tech jobs and replacing them with humanoid robots that would scare my dog anyway. It’s traumatic already when a human pins her down to give her eye drops.

Example System Prompt and Chat Prompt here.

7) How many tokens and how much test-time computation would it take to fully capture my dog’s personality?

First of all, the real thing is better. To see what the latest models are capable of, I used the reasoning models to help generate prompts for the diffusion models, to see how closely I could replicate my dog’s appearance. The reasoning models capture the veterinary terminology very well, and increased specificity in prompting leads to better results from the diffusion models. The diffusion models get close, but don’t seem to have the level of expert veterinary knowledge that the reasoning models do (yet).

Example Prompt and Resulting Image here.

 
8) If spending an hour a day on social media can radicalize you politically, can spending an hour a day reasoning with an AI on technical topics make you more technical?

Spending an hour a day searching the internet for technical topics and reading about them can certainly make you more technical. If AI helps you get that information more efficiently, and if the AI actually works (hallucinates less often than a web search leads you to incorrect technical information), then it follows that AI can be more efficient than a web search at making you more technical. AI needs to be “aligned”, just like a web search needs to be “unbiased”. And make sure that one of the topics you learn, either through web search or AI, is how to read a map, in case you lose internet access.

BILL's comment on point 8: While I agree that spending an hour a day (or more) reasoning with an AI on a technical topic is a good idea, make sure you don't fall into a rabbit hole.  For example, while using AI to help me with Ramsey Theory I, on a whim, asked it to write me a poem about Ramsey Theory. It wasn't that good but it was better than what I could do. I won't reproduce it for fear of sending my readers down a rabbit hole.



 



Wednesday, January 08, 2025

When DO Names Change? When SHOULD Names Change?

 BILL: Good news for Jimmy Carter! He won  The Betty White Award! (see here).

LANCE: That's not good news. He had to die to get it.

BILL: Call it a mixed bag. Good news for me, in that I have a famous person for The Betty White Award. And I later found out that Manmohan Singh, a former prime minister of India, passed away on Dec 26, 2024, so I have two famous people for The Betty White Award.

LANCE: You should change the name to The Jimmy Carter Award. Did Betty White start a Department of Education?

BILL: No can do. What if someone even more famous dies next year? I don't want to play musical chairs with the name. Past winners were Pele and the Pope Emeritus. Were they more famous than Betty White?

LANCE: YES!

BILL: And that's the problem. If I changed it to The Pele Award and later to The Jimmy Carter Award, that's three names in three years. Also,

If it was The Pele Award, people would think it has to do with Soccer.

If it was The Pope Benedict award people would think it has to do with the Catholic Church.

If it was The Jimmy Carter award, people would think it was about building houses.

If it was The Manmohan Singh award, people would think I am not an ignorant American who knows nothing about Indian Politics.

With Betty White there is nothing so striking about her to think its about something else. (Not quite true- she was involved with Animal Rights.)

LANCE: Yup, Carter got the Nobel Prize for homebuilding. You overestimate how many people care about The Betty White Award. But you named the award after Betty White only because she died shortly before you started the award. Hardly seems like a good criterion.

BILL: Well, we do this all the time in computer science.

LANCE: Indeed. Some of them, like the Turing Award, survived the test of time. Others, which we shall not name, have not.

But anyway, all of this raises the question, should we change the name when we have more famous dead people? 

BILL: Rarely. Do you have examples of names being changed?

LANCE: Yes, though not replaced by other names.

The best student paper award at the Complexity Conference was named after Ronald Book from 1999 to 2014 and then quietly dropped when the conference left the IEEE. And of course when Rolf Nevanlinna got cancelled in 2019, his award was changed to the Abacus Medal. Not a person named Abacus but the ancient computing device. Personally I'm holding out for the Slide Rule Statue.

BILL: So, we need more examples of name changes that are NOT because a person was cancelled or a conference changes organizations.

LANCE: It's hard to do because it's admitting you made a mistake to begin with. So instead we are getting gun-shy on new award names. The ACM has made it harder to create a named award, and the more recent SIGACT awards have been named after those who died long ago (Gödel) or are still living (Knuth). An exception was made for the Danny Lewin Best Student Paper Award at STOC. Danny died fighting the terrorists on American Flight 11 on September 11, 2001, and the award was named for him starting in 2002.

BILL: So it's no longer worth dying early to get an award.

LANCE: I wouldn't recommend it.

Sunday, January 05, 2025

The Betty White Award for 2024

In Jan of 2023 I established the Betty White Award (see here), which is given to people who died late in the prior year and hence won't be in the those we lost in year X articles. I also gave out a few for prior years. Here are past winners, some in retrospect.

2006: James Brown and Gerald Ford. See my post here. The post was not on the Betty White award since Betty White had not died yet. But the idea is there.

2021: Betty White and Archbishop Tutu. See my post here. I didn't have the award yet but I do now. Can person X win the X award? I say yes. It's rare, since usually the person an award is named after is dead, but in this case that's kind of the point.

2022: Pele (soccer player), Barbara Walters (broadcast journalist), Pope Emeritus Benedict. See my post here. Three famous people! To quote my post: One was prominent in one of the world's largest religions. The others were a broadcast journalist and a soccer player.

2023: Tommy Smothers (Singer and Comedian). See my post here. Since I collect novelty songs I knew who he was, but I think most people did not. 

2024: I began writing this post on Dec 28. Bad idea- the whole point of the award is that we should WAIT until the year is OVER before doing retrospectives. In fact, the award was going to go to Rickey Henderson (famous baseball player), Greg Gumbel (sportscaster), and Olivia Hussey (actress). The last two I had never heard of until they died, but I didn't want to just give it to one person.

Then on Dec 29 Jimmy Carter died. Okay then. Greg and Olivia will still get Honorable Mention. After that but before Jan 1, Linda Lavin died; she will also get Honorable Mention.

ADDED LATER: An alert commenter noted that Manmohan Singh, who was prime minister of India 2004-2014 (analogous to being president of America), passed away on Dec 26, 2024. Hence I have added him as well.

Here are the WINNERS of the Betty White Award for 2024:

Rickey Henderson A Hall of Fame baseball player who was truly a superstar. He died on Dec 20, 2024, at the age of 65. See his Wikipedia entry here. There are two criteria for the Betty White Award.

a) Being famous enough so that IF he had died earlier, he WOULD be in the those we lost in 2024 articles. On this criterion, Rickey Henderson is solid. Note that he holds the record for most stolen bases in a career by A LOT: RH has 1406; Lou Brock is second with 938.

b) Dying too late in the year to be on those lists. I checked- he is on some lists but not others. To NOT get the Betty White Award because he died late but not late enough would be really sad. SO, even though on this criterion he is borderline, the judges have decided to give it to him.

Jimmy Carter A former president of the United States; however, he may be more known for his post-presidency work on charities. He won the Nobel Peace Prize in 2002. He died on Dec 29, 2024 at the age of 100. Providing a Wikipedia link sounds silly- you all know who he is. Here is some miscellany:

a) I was hoping he would last until Biden stepped down so there would be six living ex-presidents:

Carter, Clinton, Bush Jr, Obama, Trump, Biden.  I blogged about this here.

b) Carter is the prez who lived the longest. He also had the longest marriage: Jimmy and Rosalynn Carter were married for 77 years. George and Barbara Bush were second with 73 years of marriage.

c) Carter is the only president sworn into office by his nickname Jimmy (his `real' name is James). It annoyed me when BILL Clinton was sworn in as WILLIAM and when JOE Biden was sworn in as JOSEPH. If Jeb Bush had become prez he probably would have been sworn in by his nickname Jeb (his `real' name is John). Oh Well. 

Jimmy is Clearly famous enough and Clearly died late enough in the year. 

Manmohan Singh A former prime minister of India; however, some of my American readers may not know that (indeed- I did not). He served as prime minister from 2004 to 2014, serving two 5-year terms. He was the first Sikh prime minister of India. See his Wikipedia entry here

Manmohan is not famous in America but he is very famous in India and in countries that pay more attention to world events than Americans do. And he died late enough in the year.


AND three Honorable Mentions:

Greg Gumbel A sportscaster, died on December 27, 2024 at the age of 78. See his Wikipedia entry here. SO, does he deserve it? 

a) Being famous. I've read that he is famous but frankly, I had never heard of him until I saw the obit and said Betty White Award Contender? That he had an obit on some news site IS an indicator of fame. And more to the point of the award, had he died a few days later, in 2025, he would have been on the those we lost in 2025 lists. If Jimmy Carter hadn't died he would have won the award (with Rickey Henderson and Olivia Hussey and Linda Lavin), so I decided to give him an Honorable Mention. My award, my rules.

b) Dying late. OH YEAH! Dec 27 is very late. 

Olivia Hussey An actress, died December 27, 2024 at the age of 73. See her Wikipedia entry here. SO, does she deserve the award? 

The Being Famous and Dying late comments are identical to those for Greg Gumbel. 

So she also gets an Honorable Mention. 

Linda Lavin An actress, died December 29, 2024 at the age of 87. See her Wikipedia entry here. SO does she deserve the award? 

The Being Famous and Dying late comments are identical to those for Greg Gumbel. 

So she also gets an Honorable Mention. Also she gets credit for dying really late in the year.


Thursday, January 02, 2025

My Drunken Theorem

Bill's SIGACT Open Problems Column remembering Luca Trevisan is out. I chose the problem of whether Promise-ZPP in P implies Promise-BPP in P, an extension of an earlier theorem by Luca and his co-authors, which showed that Promise-RP in P implies Promise-BPP in P. But now, let me share a story that I didn’t include in print.

In the mid-1990s, I received an email from Luca saying that Noam Nisan had told him I'd come up with an easier proof of one of his theorems. Luca asked if he could use it in his upcoming paper. I had no idea what he was talking about.

Then, I vaguely remembered…

I was in Dagstuhl, back when we’d hang out in a room meant for drinking beer and wine. I had, shall we say, more than one good German Pilsner, when Noam came by and asked if I knew how to show that Promise-RP in P implies P = BPP. I mumbled something about how it might follow from Lautemann's proof that BPP is in the second level of the polynomial-time hierarchy. Lautemann’s proof uses the probabilistic method in both directions, which I thought might fit nicely into Promise-RP.

Now to all you kids out there: you should never drink and derive. A theorem you prove after a couple of beers usually falls apart when you sober up. But this time it turns out I was right—and I totally forgot about it until I got Luca’s email.

I never admitted this to Luca but did give him permission to include it in his paper with Andreev, Clementi, and Rolim. And they did.

However, Lautemann’s proof doesn’t tell us anything about Promise-ZPP, so that problem remains open. Go ahead, read Bill’s column, and give it a try. If you drink a couple of Warsteiners along the way, it may not help you prove the theorem—but at least you’ll enjoy some good beer.