Wednesday, February 19, 2025

Tomorrow and Yesterday

I recently completed Tomorrow, and Tomorrow, and Tomorrow by Gabrielle Zevin, a book recommended by many, including the City of Chicago. The novel covers the decades-long journey of two game developers, Sadie and Sam, and how their lives interact with the games they create.

A paragraph towards the end made me rethink the whole book (not a spoiler):

Well, if we’d been born a little bit earlier, we wouldn’t have been able to make our games so easily. Access to computers would have been harder. We would have been part of the generation who was putting floppy disks in Ziploc bags and driving the games to stores. And if we’d been born a little bit later, there would have been even greater access to the internet and certain tools, but honestly, the games got so much more complicated; the industry got so professional. We couldn’t have done as much as we did on our own.

This paragraph hearkens back to my post last week, about how the era you grew up in can affect your trajectory. But also I'm a generation older than the book's main characters, and indeed Ribbit was distributed on a floppy disk in a Ziploc bag.

The novel at its heart is about two friends making games. I was lucky to have that experience myself for a couple of years in the early 80s, with high school friend Chris Eisnaugle, working on Ribbit, Excalibur, and some other games that never saw the light of day. We coded for days on end while listening to music like REO Speedwagon, taking time off for bowling or watching early Schwarzenegger movies. Coding in assembly language on slow processors with limited graphics, we took advantage of our complementary strengths and made it work. I don't regret leaving that life behind for the theoretical wonders of computational complexity, but that doesn't mean I don't miss it.

Sunday, February 16, 2025

Big Breakthrough in the exciting world of sum-free sets!

 Let \([n]\) be the set \(\{1,\ldots,n\}\). (This is standard.)

Let \(X\) be a set of integers. \(X\) is sum-free if there are no \(x,y,z\in X\) such that \(x+y=z\). (Note that \(x=y\) is allowed.)

Let's try to find a large sum-free subset of \([n]\). One obvious candidate is

\(\{1,3^1, 3^2,\ldots,3^{\log_3 n} \}\) (assume \(n\) is a power of 3). 
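This construction is easy to sanity-check by brute force. A minimal sketch (the helper `is_sum_free` is my own name, not from the post):

```python
def is_sum_free(xs):
    """True if no x, y, z in xs (x = y allowed) satisfy x + y = z."""
    s = set(xs)
    return not any(x + y in s for x in xs for y in xs)

# The candidate set {1, 3, 3^2, ..., 3^7} inside [n] for n = 3^7 = 2187.
candidate = [1] + [3 ** i for i in range(1, 8)]
print(is_sum_free(candidate))  # True: 3^i + 3^j is never 1 or a power of 3
```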

So we can get a sum-free subset of \([n]\) of size \(\log_3 n + 1\). Can we do better?

YES: 

Erdos showed that every set of \(n\) reals has a sum-free subset of size \(n/3\). I have a slightly weaker version of that on slides here.
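Erdos's proof is the "middle third" trick: multiply every element by a random \(\theta\in(0,1)\) and keep those whose fractional part lands in \((1/3,2/3)\); that slice is always sum-free, and in expectation it captures a third of the set. A small simulation of the idea (my own sketch, not the code from the slides):

```python
import random

def erdos_sum_free_subset(xs, trials=200):
    """Take the best of several random 'middle third' slices.
    Each slice {x : frac(theta*x) in (1/3, 2/3)} is sum-free,
    since two fractional parts in (1/3, 2/3) sum (mod 1) to a value
    outside (1/3, 2/3). Expected slice size is len(xs)/3."""
    best = []
    for _ in range(trials):
        theta = random.random()
        slice_ = [x for x in xs if 1 / 3 < (theta * x) % 1 < 2 / 3]
        if len(slice_) > len(best):
            best = slice_
    return best

random.seed(1)
xs = list(range(1, 31))               # any 30 positive integers work
sub = erdos_sum_free_subset(xs)
s = set(sub)
print(len(sub) >= len(xs) // 3)       # True: some trial reaches n/3
print(not any(a + b in s for a in sub for b in sub))  # True: sum-free
```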

That result led to many questions:

a) Find \(f(n)\) that goes to infinity  such that every set of \(n\) reals has a sum-free subset of size \( n/3 + f(n)\).  

b) Replace reals with other sets closed under addition: naturals, integers, some groups of interest.

c) Just care about \([n]\).

Tao and Vu had a survey in 2016 of sum-free results, see here.

We mention three results from their survey:

(0) Eberhard, Green, and Manners showed that the 1/3 cannot be improved (see the survey for a more formal statement of this). So, for example, nobody will ever be able to replace 1/3 with 1/2.9.

(1) Alon and Kleitman  modified the prob argument of Erdos (in the slides) and were able to replace \( n/3\) with   \( (n+1)/3\).

(2) Bourgain showed, with a sophisticated argument, that \(n/3\) could be replaced by \((n+2)/3\).

I believe that result (2) was the best known until a recent paper of Ben Bedert (see here). Did he manage to push it to \((n+100)/3\)? No. He did much better:

For every set of \(n\) reals there is a sum-free  subset of size \(n/3 + c\log\log n\).

Reflections

1) This is a real improvement!

2) Will it stay at \(n/3 + c\log\log n\) or will there NOW be an avalanche of results?

3) Contrast: 

Erdos's proof  I can (and have) taught to my High School Ramsey Gang (note: you DO NOT want to mess with them).

 The proof by Ben Bedert is 36 pages of dense math.

4) This leads to the obvious open question: Is there an elementary proof of some bound of the form \(n/3 + f(n)\) where \(f(n)\) goes to infinity?



Wednesday, February 12, 2025

Research Then and Now

A student asked me if complexity research was easier when I was a student. Interesting question. Let's compare research now versus the late 80's.

The big advantage today is technology. Just a small sampling below.

Information: Google, Wikipedia, Complexity Zoo and the ever more powerful AI systems

Online Papers: arXiv, digital libraries, individuals' web sites

Collaboration: Zoom, Overleaf, Theory Stack Exchange

Communication: Social Media, Online Talks and Classes, YouTube, Mobile Phones

Back in the 80's we had LaTeX and email. LaTeX was slow and you had to print out the paper to see it. Email was only text and it was difficult to share papers electronically. You had to go to the library to read papers unless you had the paper proceedings nearby. It was often easier to reprove a theorem than to go find the proof.

We did have some advantages back then. Reproving a theorem made you understand it better. Reading a paper in a proceedings or journal often introduced you to other papers in the publication. We didn't have the distractions of social media and the Internet in general, so you could actually focus on research. (Though there was a Mac in the MIT student area that ran Tetris.)

People came to the office every day. That opened up collaborations formal and informal. I could walk into an office and ask questions to Johan Håstad, Noam Nisan, Paul Beame and Michael Kearns, all postdocs at MIT, not to mention my fellow grad students or the professors. There was a huge advantage to being at a place like MIT or Berkeley; it's for the better that those advantages have been somewhat mitigated since.

But the biggest advantage of all, and I'm not afraid to admit it, was low-hanging fruit. Computational complexity was just 20 years old when I started grad school, and the P v NP problem a young teenager. There was surprising theorem after surprising theorem through the early 90's and not so much since. You didn't need to know deep math, and most graduate students could follow nearly all of the 47 talks at my first conference (STOC 1986); that's not likely for the 219 papers at STOC 2025.

Much of my best work was done by reading a paper or watching a talk and wondering why the authors didn't ask some question, and then asking it and sometimes solving it myself or with others. Now you need to spend time climbing the trees and going down deep branches to find new problems that only people on nearby branches would care about or even understand.

But who knows, AI may soon climb those branches for you.

Sunday, February 09, 2025

Does Lance dislike Ramsey Theory Because he's colorblind?

BILL: Lance, my wife asked if you dislike Ramsey Theory because you are colorblind.

LANCE: (laughs) It's why I don't like Ramsey Theory talks--impossible for me to follow. But I don't actually dislike Ramsey theory. I just don't like it as much as you do.

BILL: Nobody does, except possibly Graham, Rothschild, Spencer, and your average Hungarian first grader.

LANCE: To me Ramsey Theory had one useful cool idea, the Probabilistic Method, which made people think Ramsey Theory was far more interesting than it really is.

BILL: Um, that is bad history and bad mathematics.

LANCE: I learned Ramsey Theory from Spencer and his book entitled Ten Lectures on the Probabilistic Method. But the Probabilistic Method was far more interesting than the Ramsey Theory. I suspect this is common: learn Ramsey Theory because of the Probabilistic Method. And some people get suckered into thinking that Ramsey Theory is interesting or important or both. My favorite application of the Probabilistic Method has nothing to do with Ramsey theory: Lautemann's proof that \(BPP \subseteq \Sigma_2\).

BILL:  A few points to make here

a) Some people call the Prob Method an application of  Ramsey Theory. I do not. The Prob Method was developed by Erdos to get lower bounds on asymptotic Ramsey numbers and was then used for many other things, that you and others find far more interesting. That's great, but that's not really an application of Ramsey Theory.

b) If the prob method had not been discovered by Erdos for Ramsey Theory, would it have been discovered later, when it was needed for something you care about more? Probably. But it might have been much later.

c) Ramsey Theory has real applications. I have a website of them here. And there are more. For example---

LANCE: Bill, your graduate course in Ramsey theory is titled Ramsey Theory and its ``Applications''. So you do not believe there are real applications.

BILL: Depends how you define real and applications. I put it in quotes since many of the applications are to pure math. Some are to lower bounds in models of computation, but some may still consider that to not be a real application. Rather than debate with the students what a real application is, I put it in quotes.

LANCE: Are there any real applications? That is, applications that are not to pure math, and not to lower bounds.

BILL: I would think you would count lower bounds to be a real application. 

LANCE: I am asking on behalf of the unfortunate Programming Languages student who takes your class thinking there will be real applications; perhaps they missed the quotes around applications.

BILL: I know of one real application. And it's to Programming Languages! Ramsey Theory has been used to prove programs terminate. I wrote a survey of that line of research here.

LANCE: One? There is only One real application? Besides the application of learning the probabilistic method so they can use the method for more interesting problems. Or to save the earth from aliens.

BILL: Judging a field of Pure Math by how many applications it has does not really make sense. I find the questions and answers themselves interesting. Here is a list of theorems in Ramsey Theory that I find interesting. Do you? (This might not be a fair question either since many theorems are interesting because of their proofs.) 

a) (Ramsey's Theorem, Infinite Case) For every 2-coloring of \(K_\omega\) there exists \(H\subseteq N\) such that \(H\) is infinite and every edge between vertices in \(H\) is the same color. My slides here.

b) (Van der Waerden's Theorem) For every \(c,k\) there exists \(W\) such that for all \(c\)-colorings of \(\{1,\ldots,W\}\) there exist \(a,d\), both \(\ge 1\), such that

\(a, a+d, \ldots, a+(k-1)d\) are all the same color. My slides here.

c) (Subcase of the Poly VDW Thm) For every \(c\) there exists \(W\) such that for all \(c\)-colorings of \(\{1,\ldots,W\}\) there exist \(a,d\), both \(\ge 1\), such that

\(a, a+d, a+d^2,\ldots,a+d^{100}\) are the same color. My slides here.

d) For all finite colorings of the Euclidean plane there exist three points, all the same color, such that the area of the triangle they form is 1. My slides: here.
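For tiny parameters, statements like these can be verified by exhaustive search. As an illustration (my own code, not from the slides), here is a brute-force computation of the classical van der Waerden number \(W(2,3)=9\): the least \(W\) such that every 2-coloring of \(\{1,\ldots,W\}\) contains a monochromatic 3-term arithmetic progression.

```python
from itertools import product

def has_mono_3ap(coloring):
    """coloring[i-1] is the color of i. True if some a, a+d, a+2d
    (d >= 1) all get the same color."""
    n = len(coloring)
    return any(coloring[a - 1] == coloring[a + d - 1] == coloring[a + 2 * d - 1]
               for a in range(1, n + 1)
               for d in range(1, (n - a) // 2 + 1))

def least_w(max_w=12):
    """Smallest W such that EVERY 2-coloring of {1..W} has a mono 3-AP."""
    for w in range(1, max_w):
        if all(has_mono_3ap(c) for c in product(range(2), repeat=w)):
            return w

print(least_w())  # 9
```

For \(W=8\) the search finds a witness coloring with no monochromatic 3-AP (e.g. RRBBRRBB), and for \(W=9\) every one of the \(2^9\) colorings has one.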

So Lance, do these Theorems interest you?

LANCE: Sorry I fell asleep after you said "Ramsey". Let me look. Your slides are terrible. All of the colors are the same!

BILL: Maybe my wife was right. 

Friday, January 31, 2025

The Situation at the NSF

The National Science Foundation is one of the agencies most affected by the various executive orders issued by the Trump administration. As a critical funder of research in theoretical computer science, and science and engineering more broadly, the NSF has effectively come to a standstill sending universities scrambling. Here's a little primer on what's going on in the foundation.

The NSF, like all parts of the federal government, has to follow the Executive Orders, and the first step is to determine which orders apply to NSF business, then how these orders can be addressed in a way that satisfies the White House and other supervisory agencies. This is a process that the NSF leadership works out and negotiates.

For the immediate NSF business, most influential is the order Ending Illegal Discrimination and Restoring Merit-Based Opportunity, which states, among other things, that some DEIA measures violate anti-discrimination laws. Since NSF cannot pay for activities that violate current law, all active awards are checked whether they might be affected by that. This is done with the highest priority, since it affects the cash flow of current awards. After that, awards that are in progress towards being made, as well as entire programs and solicitations, will be reviewed for compliance with the executive order. Until compliance is assured in a way acceptable to the supervisory agencies, no new awards or other financial commitments can be made. After that, normal business should resume, although probably with a huge backlog.

The Hiring Freeze Executive Order also has a huge influence on NSF. The hiring freeze applies to all federal agencies, but NSF has a large number of rotators, usually university researchers who serve as a program director for a one- to four-year term. The rotators are essential to the programs, and the hiring freeze prevents new rotators from starting their role at the NSF. The hiring freeze will last for 90 days; then a plan to reduce the size of the federal workforce will be presented, and NSF might, we hope, again start hiring. In the past, NSF hiring processes were excruciatingly slow, so we should expect NSF to be understaffed for a significant period beyond the 90 days. The recent Fork in the Road letter of the Office of Personnel Management might lead more people to leave federal employment, and the strong stand on return to in-person work might make hiring rotators even more difficult. So, although all this is in flux and changing, it currently looks like difficult times ahead for the NSF.

What does this all mean for the research community? Some current funding is locked up, hopefully for a short time; there will be heavy delays on new grants given higher scrutiny and lower staffing; and funding in fairness, or focused heavily on broadening participation, might be killed altogether. A lengthy delay will mean less funding for PhD students and postdocs next academic year. Given the reduced funding and the political atmosphere, we may lose America's ability to recruit the world's top talent to our universities.

You can read official news on how the executive orders affect NSF and also see Scott's take.

Update 2/2/2025: For now, the NSF is getting back to business due to a temporary restraining order.

Wednesday, January 29, 2025

Lautemann's Beautiful Proof

In writing the drunken theorem post, I realized I never wrote a post on Lautemann's amazing proof that BPP is contained in \(\Sigma^p_2\), the second level of the polynomial-time hierarchy.

Clemens Lautemann, who passed away in 2005 at the too young age of 53, wasn't the first to prove this theorem. That honor goes to Michael Sipser, who proved that BPP is in the polynomial-time hierarchy using Kolmogorov complexity, and Peter Gacs, who put it into the second level using hash functions, both results in the same paper.

Nisan and Wigderson, after Sipser, Gacs, and Lautemann, note that BPP in \(\Sigma^p_2\) follows from their pseudorandom generators: simply guess a potentially hard function and use the universal quantifier to check that it's hard. Once you have a hard function you can use their generator to derandomize BPP.

But Lautemann's proof is incredibly beautiful because he just directly gives the \(\Sigma^p_2\) \((\exists\forall)\) expression, and two simple probabilistic method arguments to show it works. QED.

Let \(L\) be in BPP, accepted by a probabilistic polynomial-time Turing machine \(M\) with error bounded by \(2^{-n}\) (possible by standard amplification). Let \(A(x,r)\) be true iff \(M(x)\) using random tape \(r\) accepts. \(A(x,r)\) is deterministically polynomial-time computable. We can assume \(|r|=|x|^k\) for some \(k\).

Here is the \(\Sigma^p_2\) expression for input \(x\) of length \(n\). All quantification is over binary strings of length \(n^k\). Let \(\oplus\) be bitwise parity and \(\vee\) be logical OR.

\(\exists z_1,\ldots,z_{n^k} \forall y\ A(x,z_1\oplus y) \vee \ldots \vee A(x,z_{n^k}\oplus y)\)

That's it, the beautiful self-contained formula. We just need to show that this expression is true if and only if \(x\) is in L. 

Suppose \(x\) is in L and pick \(z_1,\ldots,z_{n^k}\) independently at random. For a string \(y\) the probability that \(A(x,z_i\oplus y)\) is false for all \(i\) is at most \((2^{-n})^{n^k}=2^{-n^{k+1}}\). So the probability that for some \(y\) this happens is at most \(2^{n^k}2^{-n^{k+1}}=2^{n^k-n^{k+1}}\ll 1\) so for some choice (even most choices) of the \(z_i\), the expression will be true.

Now suppose \(x\) is not in L. Fix  \(z_1,\ldots,z_{n^k}\) and now pick \(y\) at random. The probability that \(A(x,z_i\oplus y)\) is true for a fixed i is at most \(2^{-n}\), so the probability that one of them is true is at most \(n^k2^{-n}\ll 1\). 
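Both probabilistic arguments can be watched in action at toy scale. In this sketch (the parameters and the simulated accepting sets are made up for illustration) the random tape has \(m=10\) bits and we use \(k=12\) shifts:

```python
import random

m, k = 10, 12                       # toy scale: m-bit tapes, k shifts
U = list(range(2 ** m))

def covered(acc, shifts):
    """True iff for every y some z XOR y lands in the accepting set,
    i.e. the OR over shifts in Lautemann's formula holds for all y."""
    return all(any((z ^ y) in acc for z in shifts) for y in U)

random.seed(0)
shifts = [random.randrange(2 ** m) for _ in range(k)]

# x in L: almost all tapes accept, so random shifts cover every y.
acc_yes = set(random.sample(U, int(0.99 * 2 ** m)))
print(covered(acc_yes, shifts))     # True

# x not in L: few tapes accept; k shifts cover at most k*|acc| < 2^m strings.
acc_no = set(random.sample(U, 2 ** m // 100))
print(covered(acc_no, shifts))      # False
```

The second case fails for a counting reason, exactly as in the proof: twelve shifts of a 10-element set can cover at most 120 of the 1024 strings \(y\).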

MIC DROP

Sunday, January 26, 2025

People who live through two square years

 44*44=1936.

45*45=2025. This year!

46*46= 2116.

Since my fake birthday is Oct 1, 1960 (I do not reveal my real birthday to try to prevent ID theft), which is past 1936, and I won't live to 2116 unless Quantum-AI finds a way to put my brain in a vat, I will not see two square years in my life :-(

Since I keep a list of celebrities (defined as people I know who have some fame, so it's subjective) who are over 80, I have a LIST of celebrities who were born in 1936 or earlier. I was going to list them in this post but there are too many. So I list those that I think my readers care about, and point to the full list.

Here are people who have lived through two squares who I think you care about. I leave out how old they are or what year they were born.  They are in alphabetical order by their first names. I put a * next to the people who, in 1936, were  AWARE that it was a square year. So the starred names are those who truly ENJOYED living through 2 square years.

NAME                  KNOWN TO US FOR

Andrzej Ehrenfeucht   Math. EF-games (he is the E)
Anil Nerode           Math. Recursive Math
Aviezri Fraenkel      Math. Combinatorial Games

Buzz Aldrin           Walked on the moon

Charles Duke          Walked on the moon

Dana Scott            Math-CS. Prog Langs. Turing Award
David Scott           Walked on the moon
Dirk Van Dalen        Math. Logic

Eric Hirsch Jr        American Educator *

Harrison Schmitt      Walked on the moon
Harry Furstenberg     Math. Ergodic methods in Ramsey Theory
Heisuke Hironaka      Math. Algebraic Geom. Fields Medal
Herman Chernoff       Math. Probability *

Jack Edmonds          CS. Theorist
James Watson          Biologist. The Double Helix. Nobel Prize with Crick *
Jane Goodall          Zoologist and Activist
Jean-Pierre Serre     Math. Algebraic X. Fields Medal *
John Thompson         Math. Group Theory. Fields Medal, Abel Prize

Michael Rabin         CS/Math. Theorist. Turing Award

Noam Chomsky          Linguistics. Did work on Grammars

Richard Friedberg     Physicist. Also invented Priority method in Rec Theory
Richard Karp          CS. Theorist. Turing Award
Richard Stearns       CS. Theorist. Turing Award

Stephen Smale         Math. Lots of Stuff

Tom Baker             Actor. Dr. Who
Tom Lehrer            Math/Novelty songs *
Tony Hoare            CS. PL. Turing Award

Volker Strassen       Math-CS

Walter Koenig         Actor. Star Trek
William Shatner       Actor. Star Trek

For my complete list see here.

It is currently  impossible for someone to live through three square years (again, unless they get their brain in a vat, or some other not-yet-invented mechanism). In a much earlier era it was possible: 

20*20=400

21*21=441

22*22=484

So if you were born in 400 and lived to be 84 years old, three squares! While MOST people didn't live that long back then, SOME did.

10*10=100

11*11=121

12*12=144

Born in the year 100, live to be 44 years old. Was the calendar well established by then?

(ADDED LATER: A colleague emailed me about the calendar. Dionysius Exiguus is credited with inventing the AD/BC calendar (not to be confused with the band AC/DC). He was born in 470, so let's say that the modern calendar was known from 500 on. (I am not dealing with the times it's changed a bit.) SO

23*23=529

24*24=576

25*25=625

To live through three squares you would need to live to 96.

To be aware that you lived through three squares you would need to live to 104.

To enjoy the 3-squareness you would need to be a healthy 104 year old.

Did someone who was born in 529 live to 625? I doubt it. See here for an article about how long people lived in the Middle Ages. Or, more accurately, how short they lived.

)
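The arithmetic in all of these examples reduces to one formula: someone born in year \(n^2\) must live \((n+2)^2 - n^2 = 4n+4\) years to see a third square. A quick sketch:

```python
def three_square_span(n):
    """Years needed, if born in year n^2, to reach year (n+2)^2."""
    return (n + 2) ** 2 - n ** 2    # always 4n + 4

print(three_square_span(23))  # 96: born 529, reach 625
print(three_square_span(10))  # 44: born 100, reach 144
print(three_square_span(44))  # 180: born 1936, you'd need 180 years to see 2116
```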


Reminds me of a great line from A Funny Thing Happened on the Way to the Forum, a musical that takes place in the early AD's. The character Pseudolus says:

(Looking at a bottle of wine) Was 1 a good year?


Wednesday, January 22, 2025

The Fighting Temeraire

What does an 1838 painting tell us about technological change?

A colleague and I decided to see how well LLMs could teach us a topic we knew nothing about. We picked the Romanticism art movement. I asked ChatGPT to tutor me on the topic for an hour. Chatty picked four paintings. 

Top Left: Liberty Leading the People (Delacroix, 1830)
Top Right: Wanderer above the Sea of Fog (Friedrich, 1818)
Bottom Left: The Fighting Temeraire (Turner, 1838)
Bottom Right: The Third of May 1808 (Goya, 1814)
For each of these paintings, I put the painting up on one screen and used the voice feature to have ChatGPT give me an overview of each, and then we would have a discussion about it where I would ask about various features. We ended up spending about an hour on each. Was it successful? I now know significantly more about the Romantic art period and these paintings, though I'm of course not an expert. It was certainly a better and more enjoyable experience than the freshman seminar course on art history I took in college.

Let's discuss one of these paintings in more detail, the 1838 painting The Fighting Temeraire, tugged to her last berth to be broken up by Joseph Mallord William Turner, on display at the National Gallery in London.

The Fighting Temeraire by J.M.W. Turner (click on picture for more detail)

The 98-gun ship Temeraire, featured on the left, fought in the Battle of Trafalgar. This painting captures the ship being towed by a steam tug through the Thames to be broken up for scrap.

Much to love in the painting: the reddish sunset, the reflections of the boats in the water, the detail of the Temeraire and the lack of detail of the tug.

But also note the nod to technological change, the tall sailboat being taken to its end by a coal-powered tugboat, marking the new era of shipping vessels, the industrial revolution in full swing, and the beauty we lose to progress. Not a bad metaphor for the AI revolution of today.

If you want to learn more about the painting, you can watch this lecture from the National Gallery, or you could ask your favorite LLM.