Sunday, February 09, 2025

Does Lance dislike Ramsey Theory because he's colorblind?

BILL: Lance, my wife asked if you dislike Ramsey Theory because you are colorblind.

LANCE: (laughs) It's why I don't like Ramsey Theory talks--impossible for me to follow. But I don't actually dislike Ramsey theory. I just don't like it as much as you do.

BILL: Nobody does, except possibly Graham, Rothschild, Spencer, and your average Hungarian first grader.

LANCE: To me Ramsey Theory had one useful cool idea, the Probabilistic Method, which made people think Ramsey Theory was far more interesting than it really is.

BILL: Um, that is bad history and bad mathematics.

LANCE: I learned Ramsey Theory from Spencer and his book entitled Ten Lectures on the Probabilistic Method. But the Probabilistic Method was far more interesting than the Ramsey Theory itself. I suspect this is common: people learn Ramsey Theory because of the Probabilistic Method, and some get suckered into thinking that Ramsey Theory is interesting or important or both. My favorite application of the Probabilistic Method has nothing to do with Ramsey Theory: Lautemann's proof that \(BPP \subseteq \Sigma_2^p\).

BILL: A few points to make here:

a) Some people call the Prob Method an application of Ramsey Theory. I do not. The Prob Method was developed by Erdos to get lower bounds on asymptotic Ramsey numbers and was then used for many other things that you and others find far more interesting. That's great, but that's not really an application of Ramsey Theory.

b) If the Prob Method had not been discovered by Erdos for Ramsey Theory, would it have been discovered later, when it was needed for something you care about more? Probably. But it might have been much later.

c) Ramsey Theory has real applications. I have a website of them here. And there are more. For example---

LANCE: Bill, your graduate course in Ramsey theory is titled Ramsey Theory and its ``Applications''. So you do not believe there are real applications.

BILL: That depends on how you define real and applications. I put it in quotes since many of the applications are to pure math. Some are to lower bounds in models of computation, but some may still not consider that a real application. Rather than debate with the students what a real application is, I put it in quotes.

LANCE: Are there any real applications? That is, applications that are not to pure math, and not to lower bounds.

BILL: I would think you would count lower bounds as a real application.

LANCE: I am asking on behalf of the unfortunate Programming Languages student who takes your class thinking there will be real applications; perhaps they missed the quotes around Applications.

BILL: I know of one real application. And it's to Programming Languages! Ramsey Theory has been used to prove that programs terminate. I wrote a survey of that line of research here.

LANCE: One? There is only one real application? Besides learning the Probabilistic Method so they can use it for more interesting problems. Or saving the Earth from aliens.

BILL: Judging a field of Pure Math by how many applications it has does not really make sense. I find the questions and answers themselves interesting. Here is a list of theorems in Ramsey Theory that I find interesting. Do you? (This might not be a fair question either since many theorems are interesting because of their proofs.) 

a) (Ramsey's Theorem, Infinite Case) For every 2-coloring of the edges of \(K_\omega\) there exists an infinite \(H\subseteq \mathbb{N}\) such that every edge between vertices in \(H\) is the same color. My slides are here.

b) (van der Waerden's Theorem) For every \(c,k\) there exists \(W\) such that for every \(c\)-coloring of \(\{1,\ldots,W\}\) there exist \(a,d\ge 1\) such that

\(a, a+d, \ldots, a+(k-1)d\) are all the same color. My slides are here. (A brute-force sketch for tiny \(c,k\) appears after this list.)

c) (Subcase of the Polynomial van der Waerden Theorem) For every \(c\) there exists \(W\) such that for every \(c\)-coloring of \(\{1,\ldots,W\}\) there exist \(a,d\ge 1\) such that

\(a, a+d, a+d^2,\ldots,a+d^{100}\) are all the same color. My slides are here.

d) For every finite coloring of the Euclidean plane there exist three points, all the same color, such that the triangle they form has area 1. My slides are here.
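
For the programmers among you, here is a brute-force check of (b) for tiny parameters, in Python. It is only a sketch: the van der Waerden numbers grow far too fast for exhaustive search much beyond \(c=2,k=3\), where the least such \(W\) is 9.

    # Brute-force search for the least W in van der Waerden's theorem (item (b)).
    # Only feasible for tiny c and k; for c = 2 colors and k = 3 it finds W = 9.
    from itertools import product

    def has_mono_ap(coloring, k):
        # coloring[i] is the color of the integer i+1; look for a monochromatic
        # k-term arithmetic progression a, a+d, ..., a+(k-1)d in {1,...,len(coloring)}.
        n = len(coloring)
        for a in range(1, n + 1):
            for d in range(1, (n - a) // (k - 1) + 1):
                if len({coloring[a + i * d - 1] for i in range(k)}) == 1:
                    return True
        return False

    def vdw(c, k):
        # Least W such that EVERY c-coloring of {1,...,W} has a monochromatic k-term AP.
        W = 1
        while True:
            if all(has_mono_ap(col, k) for col in product(range(c), repeat=W)):
                return W
            W += 1

    print(vdw(2, 3))  # prints 9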

So Lance, do these Theorems interest you?

LANCE: Sorry I fell asleep after you said "Ramsey". Let me look. Your slides are terrible. All of the colors are the same!

BILL: Maybe my wife was right. 

Friday, January 31, 2025

The Situation at the NSF

The National Science Foundation is one of the agencies most affected by the various executive orders issued by the Trump administration. As a critical funder of research in theoretical computer science, and in science and engineering more broadly, the NSF has effectively come to a standstill, sending universities scrambling. Here's a little primer on what's going on at the foundation.

The NSF, like all parts of the federal government, has to follow the executive orders. The first step is to determine which orders apply to NSF business, and then how those orders can be addressed in a way that satisfies the White House and other supervisory agencies. This is a process that the NSF leadership works out and negotiates.

For immediate NSF business, the most influential is the order Ending Illegal Discrimination and Restoring Merit-Based Opportunity, which states, among other things, that some DEIA measures violate anti-discrimination laws. Since the NSF cannot pay for activities that violate current law, all active awards are being checked for whether they might be affected. This is done with the highest priority, since it affects the cash flow of current awards. After that, awards that are in progress toward being made, as well as entire programs and solicitations, will be reviewed for compliance with the executive order. Until compliance is assured in a way acceptable to the supervisory agencies, no new awards or other financial commitments can be made. After that, normal business should resume, although probably with a huge backlog.

The Hiring Freeze executive order also has a huge influence on the NSF. The hiring freeze applies to all federal agencies, but the NSF has a large number of rotators, usually university researchers who serve as program directors for one- to four-year terms. The rotators are essential to the programs, and the hiring freeze prevents new rotators from starting their roles at the NSF. The hiring freeze will last for 90 days; then a plan to reduce the size of the federal workforce will be presented, and the NSF might, we hope, again start hiring. In the past, NSF hiring processes were excruciatingly slow, so we should expect the NSF to be understaffed for a significant period beyond the 90 days. The recent Fork in the Road letter from the Office of Personnel Management might lead more people to leave federal employment, and the strong stand on returning to in-person work might make hiring rotators even more difficult. So, although all of this is in flux and changing, it currently looks like difficult times ahead for the NSF.

What does this all mean for the research community? Some current funding is locked up, hopefully only for a short time; new grants face heavy delays given higher scrutiny and lower staffing; and funding for research in fairness, or funding focused heavily on broadening participation, might be killed altogether. A lengthy delay will mean less funding for PhD students and postdocs next academic year. Given the reduced funding and the political atmosphere, we may lose America's ability to recruit the world's top talent to our universities.

You can read the official news on how the executive orders affect the NSF, and also see Scott's take.

Update 2/2/2025: For now, the NSF is getting back to business due to a temporary restraining order.

Wednesday, January 29, 2025

Lautemann's Beautiful Proof

In writing the drunken theorem post, I realized I never wrote a post on Lautemann's amazing proof that BPP is contained in \(\Sigma^p_2\), the second level of the polynomial-time hierarchy.

Clemens Lautemann, who passed away in 2005 at the too-young age of 53, wasn't the first to prove this theorem. That honor goes to Michael Sipser, who proved that BPP is in the polynomial-time hierarchy using Kolmogorov complexity, and Peter Gacs, who put it into the second level using hash functions, both results in the same paper.

Nisan and Wigderson, after Sipser, Gacs and Lautemann, noted that BPP in \(\Sigma^p_2\) follows from their pseudorandom generators: simply guess a potentially hard function and use the universal quantifier to check that it's hard. Once you have a hard function you can use their generator to derandomize BPP.

But Lautemann's proof is incredibly beautiful because he just directly gives the \(\Sigma^p_2\) (\(\exists\forall\)) expression and two simple probabilistic-method arguments to show it works. QED.

Let L be in BPP accepted by a probabilistic polynomial-time Turing machine M with error bounded by \(2^{-n}\). Let \(A(x,r)\) be true iff \(M(x)\) using random tape \(r\) accepts. \(A(x,r)\) is deterministically polynomial-time computable. We can assume \(|r|=|x|^k\) for some \(k\). 

Here is the \(\Sigma^p_2\) expression for input \(x\) of length \(n\). All quantification is over binary strings of length \(n^k\). Let \(\oplus\) be bitwise parity and \(\vee\) be logical OR.

\(\exists z_1,\ldots,z_{n^k} \forall y\ A(x,z_1\oplus y) \vee \ldots \vee A(x,z_{n^k}\oplus y)\)

That's it, the beautiful self-contained formula. We just need to show that this expression is true if and only if \(x\) is in L. 

Suppose \(x\) is in L and pick \(z_1,\ldots,z_{n^k}\) independently at random. For a string \(y\), the probability that \(A(x,z_i\oplus y)\) is false for all \(i\) is at most \((2^{-n})^{n^k}=2^{-n^{k+1}}\). So the probability that this happens for some \(y\) is at most \(2^{n^k}2^{-n^{k+1}}=2^{n^k-n^{k+1}}\ll 1\), so for some choice (even most choices) of the \(z_i\), the expression will be true.

Now suppose \(x\) is not in L. Fix \(z_1,\ldots,z_{n^k}\) and pick \(y\) at random. The probability that \(A(x,z_i\oplus y)\) is true for a fixed \(i\) is at most \(2^{-n}\), so the probability that at least one of them is true is at most \(n^k2^{-n}\ll 1\). In particular some \(y\) makes every \(A(x,z_i\oplus y)\) false, so the expression is false.

MIC DROP
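
If you want to kick the tires, here is a toy-scale sanity check of the covering argument in Python. It is only a sketch: the tape length and error rate are chosen so it runs instantly, not to match the \(2^{-n}\) bound above.

    # Toy-scale check of Lautemann's covering argument. Model the accepting
    # random tapes of M(x) as a subset of {0,1}^m: dense (density 1 - eps) when
    # x is in L, sparse (density eps) when x is not. Then test whether m random
    # shifts z_1,...,z_m cover every y, i.e. some z_i XOR y is an accepting tape.
    import random

    m = 12                    # toy random-tape length
    eps = 2.0 ** -6           # toy error probability of the simulated machine
    universe = range(2 ** m)
    shifts = [random.randrange(2 ** m) for _ in range(m)]   # z_1, ..., z_m

    def covers(accepting_set):
        return all(any((z ^ y) in accepting_set for z in shifts) for y in universe)

    dense  = {r for r in universe if random.random() > eps}    # x in L
    sparse = {r for r in universe if random.random() < eps}    # x not in L

    print("x in L, every y covered:    ", covers(dense))   # almost always True
    print("x not in L, every y covered:", covers(sparse))  # almost always False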

Sunday, January 26, 2025

People who live through two square years

 44*44=1936.

45*45=2025. This year!

46*46= 2116.

Since my fake birthday is Oct 1, 1960 (I do not reveal my real birthday to try to prevent ID theft), which is past 1936, and I won't live to 2116 unless Quantum-AI finds a way to put my brain in a vat, I will not see two square years in my life :-(

Since I keep a list of celebrities (defined as people I know who have some fame, so it's subjective) who are over 80, I have a LIST of celebrities who were born in 1936 or earlier. I was going to list them all in this post but there are too many. So I list those that I think my readers care about, and point to the full list.

Here are people who have lived through two squares who I think you care about. I leave out how old they are or what year they were born. They are in alphabetical order by their first names. I put a * next to the people who, in 1936, were AWARE that it was a square year. So the starred names are those who truly ENJOYED living through 2 square years.

NAME                  KNOWN TO US FOR

Andrzej Ehrenfeucht   Math. EF-games (he is the E)
Anil Nerode           Math. Recursive Math
Aviezri Fraenkel      Math. Combinatorial Games

Buzz Aldrin           Walked on the moon

Charles Duke          Walked on the moon

Dana Scott            Math-CS. Prog Langs. Turing Award
David Scott           Walked on the moon
Dirk van Dalen        Math. Logic

Eric Hirsch Jr        American Educator *

Harrison Schmitt      Walked on the moon
Harry Furstenberg     Math. Ergodic methods in Ramsey Theory
Heisuke Hironaka      Math. Algebraic Geometry. Fields Medal
Herman Chernoff       Math. Probability *

Jack Edmonds          CS. Theorist
James Watson          Biologist. The Double Helix. Nobel Prize with Crick *
Jane Goodall          Zoologist and Activist
Jean-Pierre Serre     Math. Algebraic X. Fields Medal *
John Thompson         Math. Group Theory. Fields Medal, Abel Prize

Michael Rabin         CS/Math. Theorist. Turing Award

Noam Chomsky          Linguistics. Did work on Grammars

Richard Friedberg     Physicist. Also invented the Priority Method in Rec Theory
Richard Karp          CS. Theorist. Turing Award
Richard Stearns       CS. Theorist. Turing Award

Stephen Smale         Math. Lots of stuff

Tom Baker             Actor. Dr. Who
Tom Lehrer            Math/Novelty songs *
Tony Hoare            CS. PL. Turing Award

Volker Strassen       Math-CS

Walter Koenig         Actor. Star Trek
William Shatner       Actor. Star Trek

For my complete list see here

It is currently impossible for someone to live through three square years (again, unless they get their brain in a vat, or some other not-yet-invented mechanism). In a much earlier era it was possible:

20*20=400

21*21=441

22*22=484

So if you were born in 400 and lived to be 84 years old, three squares! While MOST people didn't live that long back then, SOME did.

10*10=100

11*11=121

12*12=144

Born in the year 100 and living to be 44 years old would also do it. Was the calendar well established by then?

(ADDED LATER: A colleague emailed me about the calendar. Dionysius Exiguus is credited with inventing the AD/BC calendar (not to be confused with the band AC/DC). He was born in 470, so let's say that the modern calendar was known from 500 on. (I am not dealing with the times it has changed a bit.) So:

23*23=529

24*24=576

25*25=625

To live through three squares you would need to live to 96.

To be aware that you lived through three squares you would need to live to 104.

To enjoy the 3-squareness you would need to be a healthy 104 year old.

Did someone who was born in 529 live to 625? I doubt it. See here for an article about how long people lived in the Middle Ages. Or, more accurately, how short they lived.

)
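
For the record, the arithmetic above in a few lines of Python: the gap from \(n^2\) to \((n+2)^2\) is \(4n+4\), which is the lifespan needed to live through three square years.

    # The gap from n^2 to (n+2)^2 is 4n+4: the lifespan needed for three square years.
    for n in (10, 20, 23):   # the cases discussed above
        print(n * n, "->", (n + 2) ** 2, ": need to live", 4 * n + 4, "years")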


Reminds me of a great line from A Funny Thing Happened on the Way to the Forum, a musical that takes place in the early AD's. The character Pseudolus says:

(Looking at a bottle of wine) Was 1 a good year?


Wednesday, January 22, 2025

The Fighting Temeraire

What does an 1838 painting tell us about technological change?

A colleague and I decided to see how well LLMs could teach us a topic we knew nothing about. We picked the Romanticism art movement. I asked ChatGPT to tutor me on the topic for an hour. Chatty picked four paintings. 

Top Left: Liberty Leading the People (Delacroix, 1830)
Top Right: Wanderer above the Sea of Fog (Friedrich, 1818)
Bottom Left: The Fighting Temeraire (Turner, 1838)
Bottom Right: The Third of May 1808 (Goya, 1814)
For each of these paintings, I put the painting up on one screen and used the voice feature to have ChatGPT give me an overview, and then we would have a discussion where I would ask about various features. We ended up spending about an hour on each. Was it successful? I now know significantly more about the Romantic art period and these paintings, though I am of course not an expert. It was certainly a better and more enjoyable experience than the freshman seminar course on art history I took in college.

Let's discuss one of these paintings in more detail, the 1838 painting The Fighting Temeraire, tugged to her last berth to be broken up by Joseph Mallord William Turner, on display at the National Gallery in London.

The Fighting Temeraire by J.M.W. Turner (click on picture for more detail)

The 98-gun ship Temeraire featured on the left fought in the Battle of Trafalgar. This painting captures the ship being towed by a steam tug through the Thames to be broken up for scrap. 

Much to love in the painting: the reddish sunset, the reflections of the boats in the water, the detail of the Temeraire and the lack of detail of the tug.

But also note the nod to technological change: the tall sailboat being taken to its end by a coal-powered tugboat, marking the new era of shipping vessels, the industrial revolution in full swing, and the beauty we lose to progress. Now a bad metaphor for the AI revolution of today.

If you want to learn more about the painting, you can watch this lecture from the National Gallery, or you could ask your favorite LLM.

Sunday, January 19, 2025

Presidential Quiz!

I made up a quiz about the American Presidents here.  

It has 40 questions. In the modern electronic age you can probably look up most or even all of the answers. So what to do about that?

1) The quiz is not for money or credits or anything, so if you ``cheat'' you only cheat yourself.

2) Be honest with yourself and take it in three hours.

3) USE the web and see how long it takes you to finish it. You can make up some way to measure how well you did by combining how many you got right with how long it took.

4) The answers, which I will post later this week or next week, contain lots of other information of interest. So whichever of 1, 2, 3 you do (or something else), read the solutions (even for those you got right) and be enlightened.

It is actually titled Prez Trivia Quiz. This might not be accurate.

What is trivia? I think it is knowledge that does not connect to other knowledge and hence is not important. Some of my questions are trivia, and some are not. I give examples of each:

What is the most common middle initial for a president? This is clearly trivia. I don't know the answer and it's not on the quiz, but I might put it on the next time I do this, four years from now.

(ADDED LATER: A comment used an AI and gave the wrong answer. However, this encouraged me to find out the right answer. The website I found is here. The answer is H: William HENRY Harrison, George HERBERT WALKER Bush, William HOWARD Taft, Barack HUSSEIN Obama. Second place is W with three: George WALKER Bush, Ronald WILSON Reagan, and, much to my surprise, President Woodrow Wilson, who was actually Thomas WOODROW Wilson. There are many initials that appeared twice. Does any of this enlighten me? About presidents, no. Finding out why the AI got it wrong would be interesting but perhaps unknowable.)

Five presidents ran again for president four or more years after leaving office. Name them and how they did. This is not trivia (what word means the opposite of trivia? See later). Since Trump ran four years later and won, it is of interest to see what circumstances in the past led to a former prez (a) running again, and (b) winning.

If you are so inclined you can, for each question on the quiz, say if its trivia or not. YMMV.

I googled "opposite of trivia" and got this informative website (I am not being sarcastic) here.

Wednesday, January 15, 2025

"Our Days Are Numbered"

Proofs are amenable to chess techniques. "Our Days are Numbered".

Slide from Lev Reyzin's JMM talk "Problems in AI and ML for Mathematicians"; Reyzin is paraphrasing Telgarsky. Posted with permission.

Last week I attended the Joint Mathematics Meeting in Seattle with a theme of

We Decide Our Future: Mathematics in the Age of AI

With little computational complexity at the conference, I attended many of the AI talks. You could divide them into Math for AI and AI for Math. Mathematics of course plays critical roles in machine learning optimization, and several talks focused on provably good learning algorithms, though they overemphasized the importance of such guarantees. Do you get on a plane because you understand the physics or because air travel has a strong safety record?

But let's focus on AI for Math, which generated the most discussion and angst.

I started the conference sitting on a panel on the challenges of peer review in mathematics. Math doesn't have the replication crisis that dogs other scientific communities. But math does have papers with very specialized, long, detailed proofs and few qualified referees willing to check them over with care. 

We're not there yet, but in the "near future" we ought to have AI systems that can verify well-written proofs by compiling them using proof assistants like Lean. Referees could spend less time on checking proofs and more on deciding whether the model, theorem and/or techniques merit publication. We might get to the point that you couldn't even submit a paper to a journal until the proofs have been verified.
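
To give a flavor of what compiling a proof means, here are two toy statements in Lean 4 that a proof assistant accepts only if every step type-checks; this is of course a sketch at the scale of a homework exercise, nothing like a formalized research paper.

    -- Toy examples of machine-checked statements in Lean 4. The file is
    -- accepted only if every step type-checks against the library.

    -- Appeal to a library lemma:
    theorem sum_comm (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

    -- A small proof by induction, spelled out:
    theorem my_zero_add (n : Nat) : 0 + n = n := by
      induction n with
      | zero => rfl
      | succ k ih => rw [Nat.add_succ, ih]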

It's what comes next that really spooks mathematicians. While AI has made significant progress in solving competition problems with DeepMind's AlphaProof and OpenAI's o3, it has only played minor roles in developing new ideas for theorems. Eventually AI systems will find critical steps for publishable theorems. Who do we give the credit to? When AI systems become stronger than typical PhD students, what kinds of problems do we give the students?

We'll get plenty of AI slop, flooding journals with mathematical papers that technically prove new theorems but don't offer any particularly interesting or illuminating insights. But we'll also get mathematicians who can unleash an army of virtual grad students to find new connections between different subfields, or make significant progress on major open problems in the field. 

Some mathematicians don't want AI trained on their papers and using their techniques and theorems, even though they wouldn't have problems with other mathematicians doing so. In any case, they might not have much choice as many publishers are making deals with AI companies.

The future of mathematicians might follow that of artists and programmers. The best will do fine since they can find proofs beyond what an AI can do. Mathematicians who know how to ask the right questions can harness AI to make themselves far more productive. All mathematicians will have to navigate this brave new world.

"At least they'll still need us to teach the courses," one mathematician told me. Don't be so sure.

Sunday, January 12, 2025

Random Thought on AI from someone in the REAL WORLD

Guest Post from Nick Sovich. 

-----------------------------------------------------------------------------

Bill Gasarch recently blogged on RANDOM THOUGHTS ON AI here. He is in the realm of theory. I am in the realm of applications, so I asked if I could do a post on AI from that background. He agreed, and here's the post:

1) Words Matter

Today you tell a foundation model who to be and how to behave by giving it a system prompt and a chat prompt. The system prompt is the AI analog of 

I’m a CS professor, I’m an expert in Ramsey theory, and my goal is to help students learn.

The chat prompt is the AI analog of


Hi CS Professor, can you please help me write a blog post to formulate my thoughts on AI?

Words comprise these system prompts and chat prompts. Getting the right words is very important, in the same way writing the right lines of code is important.

So if you have an AI that *can* be a 157-IQ expert in any field, you still have to tell it what kind of expert to be, what its worldview is, and how to communicate. Because it can’t have infinite worldviews and communication styles all at once.

Example System Prompt and Chat Prompt here.
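
For concreteness, here is a minimal sketch of how a system prompt and a chat prompt get passed to a model programmatically. It assumes the OpenAI Python client (openai>=1.0), an API key in the OPENAI_API_KEY environment variable, and an illustrative model name; adjust for whatever model and provider you actually use.

    # Minimal sketch: the system prompt says who the model is, the chat (user)
    # prompt is the actual request. The model name is illustrative.
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "You are a CS professor and an expert in Ramsey theory; "
                        "your goal is to help students learn."},
            {"role": "user",
             "content": "Hi Professor, can you please help me write a blog post "
                        "to formulate my thoughts on AI?"},
        ],
    )
    print(response.choices[0].message.content)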


 
2) If English becomes the #1 programming language, then what’s the difference between a poet and a computer scientist?

There has been talk in the industry of English eventually becoming the number 1 programming language. What does that mean? A poet has mastery over the English language. A computer scientist has mastery over programming languages. A poet can create works of art.  A poet can be creative, but so can a computer scientist. A computer scientist can be pragmatic, but so can a poet. Will poets become better programmers than computer scientists?  Will computer scientists become better artists than poets? Does it matter?

3) What is code and what is poetry?

Bad code doesn’t compile, or it compiles but doesn’t run, or it runs but has bugs. Bad poetry doesn’t express the author’s intent, or it doesn’t evoke the reader’s emotion, or it doesn’t convey a message to society. Bad code is still code, bad poetry is still poetry. Humans can produce good and bad code and good and bad poetry. AI can produce good and bad code and bad poetry. Can AI produce good poetry?


4) AI is a tool, like auto-complete. 

This is both good and bad.

Underuse the tool, and you waste a lot of time.

Overuse the tool, and you don't understand your own product.

5) Where should AI start and end in academia?


This is an important question worthy of a separate blog that I might do in the future.

6) AI can either replace people or empower people. 

We should look for ways that it empowers people. A nice 38 second clip about that here.

As an example:

Good: Giving the vet tech tools to record my intake questions through natural language instead of pen and paper.

Bad: Automating away all vet tech jobs and replacing them with humanoid robots that would scare my dog anyway. It’s traumatic already when a human pins her down to give her eye drops.

Example System Prompt and Chat Prompt here.

7) How many tokens and how much test-time computation would it take to fully capture my dog’s personality?

First of all, the real thing is better. To see what the latest models are capable of, I used the reasoning models to help generate prompts for the diffusion models, to see how closely I could replicate my dog’s appearance. The reasoning models capture the veterinary terminology very well, and increased specificity in prompting leads to better results from the diffusion models. The diffusion models get close, but don’t seem to have the level of expert veterinary knowledge that the reasoning models do (yet).

Example Prompt and Resulting Image here.

 
8) If spending an hour a day on social media can radicalize you politically, can spending an hour a day reasoning with an AI on technical topics make you more technical?

Spending an hour a day searching the internet for technical topics and reading about them can certainly make you more technical. If AI helps you get that information more efficiently, and if the AI actually works (hallucinates less often than a web search would lead you to incorrect technical information), then it follows that AI can be more efficient than a web search in making you more technical. AI needs to be “aligned”, just like the web search needs to be “unbiased”. And make sure that one of the topics you learn, either through web search or AI, is how to read a map in case you lose internet access.

BILL's comment on point 8: While I agree that spending an hour a day (or more) reasoning with an AI on a technical topic is a good idea, make sure you don't fall into a rabbit hole.  For example, while using AI to help me with Ramsey Theory I, on a whim, asked it to write me a poem about Ramsey Theory. It wasn't that good but it was better than what I could do. I won't reproduce it for fear of sending my readers down a rabbit hole.