Wednesday, April 09, 2025

Why Can't We Break Cryptography?

In the recent Signalgate scandal, several senior Trump administration appointees used the Signal app on their phones to discuss an attack on the Houthis. People discussed the risk of the phones being compromised, or human factors such as adding a prominent journalist to the group chat by mistake. But almost no one raised concerns about the cryptography itself in this widely available app.

It wasn't always this way--cryptography used to be a cat and mouse game, most notably the breaking of the German Enigma machine dramatized in the movie The Imitation Game. Then Diffie and Hellman in their classic 1976 paper stated

Theoretical developments in information theory and computer science show promise of providing provably secure cryptosystems, changing this ancient art into a science.

And in recent memory we've had several successful cybersecurity attacks, but never because the encryption was broken.

We've made incredible progress in solving problems once thought unsolvable, including language translation, chess, Go, protein folding and traveling salesman. We have great SAT and Mixed-Integer Programming algorithms. We've blown through the Turing test. None of these algorithms work all of the time, but hard problems no longer seem so hard. Yet cryptography remains secure. Why? How did we get to this Optiland, where only the problems we want to be hard are hard? Quantum computers, if we ever have them at scale, could attack some cryptographic protocols, but we're a very long way from having that capability. 

My latest theory involves compression. Machine learning works by finding a representation of a function, in a neural net or other form, that gives an imperfect compressed version of that function, removing the random components to reveal the structure inside. You get a smaller representation that, by Occam's Razor, is hopefully a mostly accurate version of that data. For example, we learn a representation of a Go player by training a neural net, having the computer play itself over and over again.

Cryptography is designed to look completely random. No compression. If you remove the randomness you have nothing left. And thus modern algorithms have no hook to attack it. 
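
As a loose illustration of this intuition (my sketch, not part of the argument above): off-the-shelf compression finds the structure in patterned data but gets nowhere on data that looks random, and good ciphertext is designed to look random. Here Python's zlib stands in for "finding structure" and random bytes stand in for ciphertext.

```python
import os
import zlib

# Structured data: highly patterned text compresses to a small fraction of its size.
structured = b"the quick brown fox jumps over the lazy dog " * 200

# Stand-in for ciphertext: good encryption output should be indistinguishable
# from uniformly random bytes, which zlib cannot shrink at all.
random_looking = os.urandom(len(structured))

for label, data in [("structured", structured), ("random-looking", random_looking)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{label:15} compressed to {ratio:.0%} of original size")
```

The structured text drops to a few percent of its original size; the random-looking bytes stay at essentially 100%. In the analogy, the learning algorithm has nothing to grab onto.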

This is just the beginning of a theory. I don't have a good formalization, and certainly not the right questions to ask of it.

So to me it's still a big mystery and one that deserves more thought, if we really want to understand computational complexity in the world we actually live in.

Sunday, April 06, 2025

I was invited to a Nutrition conference. Why?

 
From November of 2024 to March of 2025 I have gotten emails inviting me to speak at conferences and/or submit to journals on the following topics:

NOT EVEN CLOSE TO MY FIELD:

Addiction Medicine, Behavioral Health and Psychiatry.

Looking forward to your involvement and your invaluable contributions. 

Possible reason for the invite: My name is Bill and I am a chocoholic.


Infections Diseases

Your exceptional expertise in the field of infections diseases aligns perfectly with our conference focus.

Possible reason for the invite: I had a mild case of COVID two years ago. (I suspect most of my readers did. Did you also get an invite?)

 Vaccines

We eagerly anticipate your participation and the invaluable perspective you will bring to the event. As a first step we kindly request the title of your presentation.

Clearly they want me to talk about using Schur's Theorem and FLT (n=3 case) to show that the primes are infinite. Why that one? All of my papers are tied for relevance to vaccines.

Possible reason for the invite: I have gotten both the flu vaccine and the COVID vaccine.



Nutrition 

We are excited to extend an invitation for you to join us as a distinguished speaker on your field since you have expertise in the field of nutrition.

Possible reason for the invite: I take a Vitamin D pill every morning. 


Pharmacy 

We are excited to extend an invitation for you to join us as a distinguished speaker on your field since you have expertise in the field of pharmacy.
(Identical wording to the Nutrition invite.) 

Possible reason for the invite: My wife's first degree was in Pharmacy and they got confused. They were very confused since she got the degree under her maiden name.

Drug R+D

We are pleased to offer two nights of accommodations at the venue hotel. (I am skeptical of that)

Possible reason for the invite: I watched Breaking Bad and blogged about it here and here.


Aquaculture and Fisheries

We are thrilled to invite you as a Speaker. Your exceptional expertise in the field of aquaculture aligns perfectly with our conference focus. (The wording is a mashup of the wordings in other invites.) 

Possible reason for the invite: I am a pescatarian.

Env and Earth, medical sciences, and immunotherapy

This email did one smart thing and one dumb thing. Smart: listing three areas, so maybe they will get lucky. Dumb: the following sentence, which proves they are a scam (not that proof is needed):

Your publication ``On the sizes of DPDAs, PDAs, LBAs" has caught our attention. We believe your expertise would greatly enrich our journal's focus on these rapidly evolving areas.

That is indeed a paper I wrote, with Richard Beigel (I wonder if he also got an invite). It's here. I am very happy with that paper; however, its applications to Environmental Science, Earth Science, Medical Science, or Immunotherapy are .... limited. To be fair, it has been said that Math done for its own sake often has applications that the author did not foresee. So I look forward to lower bounds on the sizes of DPDAs being the key to solving global warming. 

Possible reason for the invite: I live on the Earth.

Linguistics

Your publication ``On the sizes of DPDAs, PDAs, LBAs" has caught our attention. We believe your expertise would greatly enrich our journal's focus on these rapidly evolving areas.

The astute reader may notice that this is identical to what the Env-Earth-Med-Imm people sent and wonder if I made a mistake with cutting-and-pasting. A fair question, indeed it would be the null hypothesis, but no--the ``linguistics'' and the ``E-E-M'' people are using the same script. 

Apparently linguists are also impressed with my publication on the size of DPDAs, PDAs, LBAs. This makes more sense since CFGs and PDAs are equivalent. NO IT DOESN'T. I don't think linguists have cared about CFGs in a long time. 

Possible reason for the invite: Aside from my paper with Richard Beigel on lower bounds for whatnot, I also had at one time a paper with Lance about the size of CFGs for {w} that attracted some attention from an AI journal that would give us a discount on page costs (we declined their kind offer). See here

Modern Management

Impressed by your research On the size of DPDAs, PDA, LBAs I cordially invite you to share your new research findings for presentation at the INSERT NAME OF CONFERENCE  on Modern Management. 

How do I know this is a scam? Because the paper appeared in 2015 and they are calling it `new.'  Also, they forgot to cut and paste the actual name of the conference. 

Why has my PDA lower bounds paper gotten so much attention? I do not know. I assume someone pointed to it and others happen to find it. Or perhaps the paper has more applications than I thought. 

Possible reason for the invite (not including my brilliant work on the sizes of devices): The conference is on Modern Management. I have managed my REU-CAAR grant since 2013. That includes managing it through COVID and through Elon. 

Diabetes

We are excited to extend an invitation for you to join us as a distinguished Speaker at the INSERT NAME OF CONFERENCE  Your expertise in the field of Diabetes, making you an an excellent

(Very similar to Nutrition and Pharmacy. The last sentence IS as it was in the email- it is incomplete and does not have a period.) 

Both Modern Management and Diabetes need to up their game--leaving in `INSERT NAME' is just incompetent. 

Possible reason for the invite: My work on Muffin Mathematics (see here). 

LNG

 Greetings to you,

I hope this message finds you well.

I would like to explore the possibility of collaborating with you on an LNG project. Would you be available to discuss this opportunity further?

Looking forward to your response.

At first I thought it was LLM and it might be an HS student who wants to work with me in AI. Then I noticed it was LNG, which is Liquefied Natural Gas. 

Possible reason for the invite: Since Darling drives and I don't, I pump the gas.

Social Science and Business

A journal on these two topics together emailed me that they exist but wisely made no claim that I am an expert in the field. So they may be legit.

Possible reason for the invite: The royalties on my muffin book are less than the cost of buying muffins for the audience whenever I give a talk on it. This is either a counterexample of how to run a business or a clever way to avoid taxes. Either way, it could be an article in their fine journal. 

 

CLOSE BUT NO CIGAR (That's an old phrase. Younger readers- have you heard that expression before? I was curious so I decided to run it up the flagpole and see who salutes.) 

Numerical Analysis

 It is a great pleasure to invite you to organize a Symposiums on your research subject.

Possible reason for the invite: I took a course on Numerical Analysis  in 1978 and got an A.

Fuzzy Systems 

We bring together global experts. (Am I one of them?)

Possible reason for the invite: I teach Freshman Discrete Math so I have seen many fuzzy proofs. 


Nanotechnology 

We are thrilled to invite you to be part of our prestigious event.

Possible reason for the invite: They got vaccinated for COVID and the nanotech in their vaccine, put there by the other Bill G, made them invite this Bill G. 


Optics

We are thrilled to invite you to be part of our prestigious event
(Identical wording to the invite for Nanotechnology.) 

Possible reason for the invite: I wear glasses.

Systemics, Cybernetics, and Informatics

We invite you to  submit a paper and/or organize a workshop in Systemics, Cybernetic, or Informatics [ Wow! I didn't know I was a big shot in Systemics. I don't even know what that is.]

Possible reason for the invite: I use email.

Quantum

Based on your research interests we would like to invite you to consider NAME OF JOURNAL OMITTED  for the publication of your next article if accepted after peer review.

I have one article that has the word quantum in the title, see here. I am working on a slide-packet with a colleague on quantum entanglement and the CHSH game (ah- my wife had conjectured that math games are not fun games, see here, here, here; perhaps her conjecture should be extended to Physics). I've also had some blog posts about quantum computing. 

The paper is public. The slide-packet is not, though it's hard to know what's public nowadays (see here). However I really doubt my paper or my slide-packet is why I got the invite. And I doubt my quantum computing posts indicate that I know any quantum computing (this is not self-deprecating- my posts are more about non-tech aspects, for example see here).

Might be legit: They didn't praise my work in quantum. 

Might not be legit: The quote from the email I have above is so generic that I could easily have gotten (and probably have) an email with the exact same wording asking me to submit to a journal on (say) Fluid Mechanics. Also, as noted, I don't work in quantum. 

Possible reason for the invite: Uncertain. 

Fabrication, circuits, etc. 

We are hiring tenure-track faculty in device and fabrication, circuits, architecture, and systems, and EDA areas with highly competitive salaries in the world and cutting-edge research facilities. I'd appreciates you recommending your students to us. [They then gave two links to check out, which I of course did not do.] 

Not my area (what does EDA stand for?)  but it is plausible that I have students in my classes who would be interested. Even so, I won't be passing this on. 

Possible reason for the invite: I got an A in a hardware course in 1977.

Mobile Applications

We have gone through your prior article entitled ''Finding Large Sets Without Arithmetic Progressions of Length three: An Empirical View and Survey II" [Really? You may be the first ones to read it.] We thought it was extremely well written [Really?]. It with excellent pleasure [is that really a phrase `excellent pleasure'?] that we extend an invitation In connection with your to contribute any article of  your choosing to our esteemed journal. Your involvement would undoubtedly enhance the value of our publication.

You may be thinking that Bill left out a period between `invitation' and `In'. Actually, they left it out. 

Only the heading of the email says `Mobile Applications'. 

Later they refer to the Journal of International Journal of ... as their esteemed journal. 

Possible reason for the invite: This was sent to me by mistake. It was meant to be sent to a puzzle column where the reader is asked to spot 10 things wrong with the paragraph. 


Not Sure What The Field Is

I am the editor at [Insert Name of Journal]. We are currently seeking high-qualify manuscript for our upcoming issue, and would like to invite you to submit your work for consideration. [The word `qualify' is not a typo on my part, it's what they had. It is probably a typo on their part.]

The email actually had in it the wording [insert name of journal]. This indicates that this is a generic email where they forgot to put in the name. The only reason it might not be a scam is that they didn't call me an expert or anything of the sort. 

Possible reason for the invite: [Insert Reason]

Not Sure WTF 

Since your published paper ``Review of  `What Can Be Computed?' '' has been widely recognized,  [really?] we welcome your submission of original research on hot topics [Is Ramsey Theory on Ordered Sets a Hot Topic?] in your specialized of interested fields [does that last part make sense in English?].

Possible reason for the invite: They were impressed with a book review. Some people are easy to impress. Are you? The review is in this column here so you can judge for yourself. 



WHY ALL THE INVITES?

1) I suspect that most (all?) of them are some sort of scam. I end up paying a conference fee or page costs or something. But is that really a scam? If I know exactly what I am getting--- a line on my resume--- and I know it's a no-quality conference or journal, then it's not a scam. On the other hand, I don't think an article in Pharmacy will help my resume.

2) Perhaps it is easier to email Everyone than to try to figure out who's a Math person, who's a Nutrition person, etc.

3) The UPSIDE of emailing EVERYONE is that they cast a wide net and might get someone who they didn't think of.

4) The DOWNSIDE of emailing EVERYONE is... THERE IS NO DOWNSIDE. Email is free, and it's not as though emailing ME will make me respect the conference on optics any less -- or, more accurately, my negative opinion of these venues will not harm them.

5) There is a harm to society. I delete so many emails without looking at them carefully that I might delete something I do care about. Has this happened? Not that I know of-- but I wouldn't know.

6) I normally delete all such emails; however, I kept all those that came in November 2024, December 2024, and January 2025, and wrote the post. Then I got a few more in February and added them to the post. Then one in March, so I added that one to the post. I thought it best to post this before I get even more. I could do a statistical study of all of these emails and write a really bad article. Maybe a Nutrition Journal will publish it. 




Tuesday, April 01, 2025

PDQ Shor (?-2025)

PDQ Shor

PDQ Shor, Peter Shor's smarter brother, passed away last week. PDQ was a Physicist/Computer Scientist/Mathematician/Astrologer/Psychic at the University of Northern South Dakota in Wakpala.

Dr. Phineas Dominic Quincy Shor III, PhD, MBA, BLT, received his education at Europa U. during one of his many alien abductions. He ended up in South Dakota after having fled every other state.

He was most famous for the concept of unnatural proofs, collected in his anthology Proofs from the Other Book, which includes his classic "interpretive dance proof" of the Pythagorean theorem. Critics complain the proof only handles the case where the right angle is on the left.

His follow up book, Proofs from the Crypt, contains his masterpiece, a 1237 page proof that conclusively shows that the empty set contains no irrational numbers.

Like his brother he's moved to the quantum space, reverse engineering Peter's work by giving a doubly exponential time quantum algorithm for multiplying numbers. He created the innovative dipping bird quantum error collection machine that constantly monitors a quantum machine collapsing all entanglement. Apple bought the device for $327 million which immediately destroyed their plans for a QiPhone.

PDQ used the proceeds to create the perpetual Turing machine, guaranteed to never halt. Until it did.

Sadly PDQ passed away from paranormal causes last week. Or was it last year? No one is quite sure. He donated his body to pseudoscience, currently lying in state in an undisclosed state. We hardly knew you.

With apologies to Peter Schickele. This April Fools post was inspired by the complexity class PDQMA.

Saturday, March 29, 2025

Surveys are done stupidly / A stupid question from a survey

I have often begun taking a survey and quit in the middle. Why?

1) It goes on too long. When I told one surveyor that he might get more people quitting for that reason, so he should make it shorter, he said, rather rudely, that he is an expert on surveys and they need to ask this many questions to calibrate things properly. I tried to engage him in an intelligent conversation about the tradeoff: the longer it is the better the info, but fewer people fill it out, so what is the optimal point? He told me I was an idiot. Well... that's not going to encourage me to fill out his survey.

2) It asks questions that are too personal. 

3) It asks questions that seem irrelevant to me for their purpose (to be fair, perhaps I do not know the real purpose)

4) They ask a really stupid question. Here is the stupidest I've seen:


Challenge: Have you ever seen a stupider question? 

As always, I ask non-rhetorically. 

Wednesday, March 26, 2025

What Happened to MOOCS?

In 2012 I wrote a blog post about the growing influence of Massively Open Online Courses, or MOOCs.

John Hennessy, then president of Stanford, gave the CRA keynote address arguing that MOOCs would save universities. He attributed the untenable costs of universities to personnel costs (faculty salaries) making colleges unaffordable (not sure I fully agree). He argued that MOOCs would help teach courses more effectively. The hidden subtext: fewer professors and probably fewer universities, or as someone joked, we'll all be branch campuses of Stanford.

I ended the post "MOOCs may completely change higher education in America and around the world. Or they won't." A reader asked "Wondering what are you takes about MOOCS now?". Good question.

If you want a detailed answer I had chatty put together a deep research report. Here's my take, mostly from the US computing perspective. The term MOOC is rarely used anymore, but we have seen tremendous growth in online courses and degrees, particularly in Masters programs.

We've seen some major successes, most notably the Georgia Tech Online Masters of Science in Computer Science program that we started in 2014. By we, I mostly mean then-dean Zvi Galil, whose tenacity made it happen. Zvi made the right moves (after some pushing): getting faculty buy-in, strong incentives for faculty participation, putting significant resources into course development, a very low-cost degree, and most importantly insisting that we have the same if not better quality than our on-campus offerings. The program grew tremendously, reaching about 10,000 students by 2020. Georgia Tech had to add a new graduation ceremony for students who finished the degree remotely but traveled to campus for graduation.

We've seen a plethora of new programs. Most domestic students can get a good computing masters degree at a fraction of the cost of an in-person program. On-campus masters programs in computing are now almost entirely international, for an on-campus program can deliver something an online course cannot: a visa, and a chance to build a life in the United States.

These new programs vary quite a bit in quality, some truly strong, others less so. Some are outright misleading, making a deal with a university to use their name but otherwise having no connection to the school's faculty or academic departments. These programs often feature 'professional certificates' marketed under university branding but are actually developed and administered by third-party education companies.

While we learned to teach everything online during the pandemic, online degrees don't work as well for bachelor's degrees, where the on-campus experience matters almost more than the courses, or for research-intensive PhD programs.

We are not all branch campuses of Stanford but the story isn't done. Colleges continue to have financial challenges, artificial intelligence will continue to play new roles in education, not to mention the recent actions of the Trump administration. Hopefully MOOCs won't be the only thing surviving.

Sunday, March 23, 2025

Recording lectures? Posting the Recordings? Using Slides?

The issue of whether to record lectures or post slides, or more generally how much material to give to the students, is a relatively new question (the last 20 years?), but I do have an anecdote from 1978.

I was taking an applied math course on Mathematical Modelling from James C Frauenthal (He would sometimes write his initials \(\binom{J}{F}\)) and he passed out his notes ahead of time. I think I was the only student who read them ahead of time. One time I had read the following passage ahead of time:

We have been somewhat cavalier in our assumptions.

During class he said

What is wrong with this mathematical model? 

I replied

We have been somewhat cavalier in our assumptions.

He was somewhat surprised, but pleased that someone was reading his notes. 

FAST FORWARD TO MODERN DAY.

 

How much material should we make available for our students? I post slides and recordings. 

PRO: If a student misses class they can catch up. Especially good if missing class is rare.

PRO: If a student is in class then they can go back to some point they were confused on.  

PRO for slides: When asking a student when they began getting lost we can find the exact slide. This is much better than the word salad that students sometimes emit when describing where they are lost.

BILL: So you understood the definition of P. So you were lost when I defined NP? 

STUDENT: No, I got lost when you described some kind of really exciting algorithm.

BILL: Exciting algorithm? What did it do?

STUDENT: You said it was a paradox.

BILL: This is a class in algorithms. We have not discussed any paradoxes.

STUDENT: Did so!

BILL: We can figure out what ails thee. What did the algorithm do?

STUDENT: Something about the whole being greater than the sum of its parts.

BILL: Parts! I think you mean that we solve sub parts and then put them together. This is the Dynamic Programming paradigm. OH- I think you confused  paradigm and paradox.

STUDENT:  That's exactly what I said. Dynamic means exciting! And paradox is just another name for paradigm.

Often it was hard to see where they got lost.  

CON: Students may skip class and not go over the slides or recordings!

CON: The technology sometimes does not work.

BILL: You missed class and expect me to redo the lecture in my office. Did you watch the recording?

STUDENT: No. The recording did not work and it's your fault!

BILL:  The first day of class I said you should come to class for the following reasons

1) You can ask questions. The paradox is that's hard to do in a large class, but with so many students cutting class, it's a small class!

2)  Taking notes is a good way to solidify knowledge.

3) Going to class forces you to keep up.

4) The technology might not work. Last semester this happened four times. Twice it was my fault, and twice it was not my fault. But that does not matter- it will happen. 

5) If  you show up in my office hours and want me to explain  what I lectured on I will be annoyed.

STUDENT: Uh,... I missed the first day.

CON: In the long term students get in the habit of not going to class.  I can't tell if this is worse than it used to be. 

CON for Slides: It's hard to be spontaneous. Some of the classrooms don't even have whiteboards to go off-script with. The following happened in the pre-slide days (apologies- I've told this story before on this blog) on April 25, 2003 in my Automata Theory class. I had already done decidability and was going to do r.e. sets.

STUDENT: Do you know whose 100th birthday it is today?

BILL: Will there be cake? If so will they let me eat cake?

STUDENT: Today is Kolmogorov's 100th birthday.

BILL: AH! I was going to do r.e. sets but instead I will do Kolmogorov Complexity!

STUDENT: Great! Uh. Maybe. Is it hard? 

BILL: Its fun!

I then gave a lecture on Kolmogorov complexity on the fly, on the whiteboard. I made it part of the course and on the final I asked them to show that if  w is a K-random string of length n then any context free grammar for {w} in Chomsky Normal Form requires at least \( n^{0.99} \) rules (this is not the strongest result possible). 

This is impossible with slides. No more on-the-fly lectures. 

CON for slides: Some proofs are impossible to do on slides. The Poly VDW theorem and the Kruskal Tree Theorem are two of them. Fortunately those are both in Ramsey Theory, which has 30 students and a whiteboard, so for those lectures I use the whiteboard. 

PRO for slides: My handwriting isn't that good, so slides help a great deal.

CAVEAT: I used to read a proof, write it out by hand, type it up in LaTeX, and then make  slides.  Now I go straight from reading it to slides. This is sometimes not a good idea as I am worrying about fonts and  formatting before I really understand the proof. I recently went BACK to the handwritten notes  THEN LaTeX THEN slides. That increased my understanding since (1) when doing the handwritten notes I was not distracted by fonts or formatting, and (2) at every iteration I picked up some subtle point I had missed. 

CAVEAT: When teaching a large class you really can't use the whiteboard since the people in the back row can't see. I don't know if that's an argument FOR slides or AGAINST large classes. 

SO- what do you do: record or not record, slides or no slides? And why? And does it work? 

 

Wednesday, March 19, 2025

A Failure to Communicate

With care you can explain major ideas and results in computational complexity to the general public, like the P v NP problem, zero-knowledge proofs, the PCP theorem and Shor's factoring algorithm, in a way that a curious non-scientist can find interesting. Quanta magazine keeps coming back to complexity because we have an inherently interesting field.

So why am I having such a difficult time with the new Ryan Williams result, that time can be simulated in nearly quadratically less memory, or more precisely DTIME(\(t(n)\)) \(\subseteq\) DSPACE(\(\sqrt{t(n)\log t(n)}\)), based on the Cook-Mertz space-efficient tree evaluation algorithm?

Many results in complexity are quite specialized and technical but this shouldn't be one of them. Ryan's result involves a major new relationship between time and memory, the two most basic resource measures, whose study goes back to the 1960s. Everybody understands time--waiting while their computer or phone is spinning. They know about memory, at least that you have to pay more for an iPhone with more of it. Yet still I'm challenged to explain this result.

Maybe because for historical reasons we use SPACE instead of MEMORY, or the pesky \(\log t(n)\) factor. Someone said they got lost in the logarithms. But we can talk about the gist of the result without mentioning space or logarithms.

It might be the \(t(n)\). We take for granted that we measure resources as a function of the problem size but outsiders could find it a challenging concept. For P v NP I just talk about efficient computation, here you can't avoid more specific running times. 

Or maybe people don't think about time and memory. Most operations on a computer happen close to instantaneously. Nobody thinks about memory unless they run out of it. 

Or because we don't have good real-world examples. When would you prefer a space \(\sqrt{t(n)}\) time \(2^\sqrt{t(n)}\) procedure to a space \(t(n)\) time \(t(n)\) algorithm? 
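
Just plugging numbers into the bounds in that question shows why (a back-of-the-envelope sketch of mine): the memory savings are real, but the running time becomes astronomical.

```python
import math

# Compare a time-t, space-t algorithm with a space-sqrt(t), time-2^sqrt(t) procedure.
for t in [10**4, 10**6, 10**8]:
    s = math.isqrt(t)  # space of the memory-frugal procedure
    print(f"t = {t:>11,}: space {t:,} -> {s:,}, time {t:,} -> 2^{s} "
          f"(about 10^{int(s * math.log10(2))})")
```

Even for modest \(t\), the \(2^{\sqrt{t(n)}}\) running time is hopeless in practice, which is why the trade is hard to motivate with an everyday example.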

AI is supposed to be good at this. OK, Claude take it away.

Ryan Williams' breakthrough paper shows that computers can solve complex problems using far less memory than previously thought possible. For 50 years, computer scientists believed that problems requiring a certain amount of time (t) needed at least t divided by a small factor of memory. Williams proved that these same problems can actually be solved using only about the square root of t memory. He achieved this by connecting these problems to the Tree Evaluation problem and leveraging a recent algorithm by Cook and Mertz. This result makes significant progress toward resolving the longstanding P versus PSPACE problem, suggesting that many algorithms could potentially run efficiently on devices with much less memory than we currently use.

Imagine a navigation app trying to find the optimal route through a complex city network. Before this discovery, engineers believed that calculating detailed routes required either substantial memory or accepting slower performance on memory-limited devices. Williams' theorem suggests these calculations could run using dramatically less memory—potentially reducing requirements from 100 MB to just 10 KB (roughly the square root). This breakthrough could enable sophisticated navigation features on devices with severe memory constraints, such as smart watches, older phones, or embedded car systems, allowing them to handle complex routing problems with multiple stops or constraints without performance degradation.

Sunday, March 16, 2025

My Post-Covid Post

 I had been meaning to write a post-COVID post for a while, but

a) Are we finally post COVID? (I think so)

b) Are the long-term effects of COVID (on society, not health) known yet?

However, Lance wrote a post-COVID post (see here) which inspired me to do the same. 

Random Thoughts on COVID

1) COVID probably helped Biden win the 2020 election. If Harris  had won in 2024 then Biden winning in 2020 would have been a bigger change. 

2)  VAX-skepticism is now mainstream. This had not been a partisan issue before COVID though there were some people against vaccines. Oddly enough I think mostly on the far left: a back-to-nature thing. And VAX-skepticism has gone beyond COVID- some states are letting people NOT get vaccinated which has already caused a measles epidemic.

3)  I used to get more work done at school. Now I get more work done at home. COVID forced me to enter the 21st century.

4) People come into school less often.  There are faculty whose tenure cases I will vote on who I never met. To be fair, we do have a big department so (a general theme) COVID accelerated some trends that were already there.

5) Office buildings are less full as more people work from home. I've read that this may cause an economic crisis for people who borrowed money to build NEW office buildings. There are some plans to convert office buildings into residential space, but that seems harder than it sounds.

6) My favorite place to have lunch, THE FOOD FACTORY closed down!

7) I used to mentor around 10 HS students a year (some of the Magnet schools in the area have a research requirement-though the students mostly ARE good and ARE NOT just there for the requirement).  It was a logistical issue to get them or their parents parking passes (also an issue of what their parents DO while I am teaching their kids Ramsey Theory). Now I do most of my mentoring on zoom. I mentored 32 in 2024 (in groups- so it was not 32 projects).

8) I can now hold extra office hours at night on zoom. 

9) Before COVID I was slowly switching from whiteboard to slides since I was recording lectures and my handwriting is not very good. Now MY ENTIRE COURSE is on slides. Clyde Kruskal points out:

 If your entire course is on slides then either  your slides are too dense or your course is too shallow.

He may have a point there. However, in a small class I sometimes DO go to the whiteboard. I did it this semester in my Ramsey Theory course when I taught the Kruskal Tree Theorem (the set of trees under the minor ordering is a well quasi order - a theorem of Joe Kruskal, Clyde's uncle).

10) This is a bigger issue- is technology driving what topics we cover? 

11) COVID --> classes recorded and slides available --> student attendance is down. Is this bad? Depends. If the students who don't show up actually keep up, it's fine. If they hunt and peck through the slides so they can do the HW, that's bad. COVID might not have caused this problem, but it accelerated it. The question of Post/Record or Not is an issue for a later blog. Personally, I post and record.

12) School children had to learn at home, which was probably bad for their future education.

13) Chem labs and Physics labs---do we have a class of chemists who did less lab work?

14) Some couples had to spend more time with each other than usual. This could be good or bad (for me it was good).

15) Some scenes on the TV show Monk (about an OCD Detective) now seem normal- like wiping off doors for germs.

16) Wearing masks in public is not considered weird.  It has gone back to being unusual, but it has not gone back to being weird. I know someone who found that by wearing one he does not get ordinary colds so he keeps wearing it.

17) By around May of 2020 there were about 100 or more novelty songs about COVID. I compiled a website of what I considered the best ones. It's part of my website of novelty songs, here. The three best IMHO are here, here, here. OH- while getting those linked I found another awesome one: here

18) Some of the working-at-home or meetings-on-zoom was because of COVID. And some is technology (zoom). But some is sociological. Here is an example:

DARLING (on a Sunday in 2018): Bill, my back hurts and I don't think I should drive today, but I want to go to church. So... what  can we do?

BILL: Uh-OH, I think our church streams its service.

DARLING: Well pierce my ears and call me drafty! You're right! I remember that now. Great! You are my brilliant Bill!

BILL: And you are my darling Darling!

(We watched the service online and it was fine.)

 Suffice to say, thinking of going to church online would not take a brilliant Bill now. 

19) There is a down side: Meetings online, church on line, classes on line, one can get more distracted.

20) Faculty meetings are hybrid and I usually go on zoom. The Math dept has said that you HAVE TO GO in order to vote. They are NOT being Luddites- they see the value of in-person meetings. I do not know who is right.

If the meeting is on zoom more people are at the meeting.

If the meeting is in person then fewer people come but they are paying more attention. Or are they? People can be in person and still tune out, see here.

In the past someone could say I'll be out of town so I can't go to that meeting. That may be less of an excuse in the future. Maybe even now.

21) One of my wife's relatives died of COVID (before vaccines were available) and one of my friends lost his sense of smell because of COVID (before vaccines)

22) Some TV shows incorporated COVID into their story lines. For some shows the order in which episodes are shot differs from the order in which they air, so you could have one episode with people wearing masks and COVID in the background, and the next week nothing about COVID.  

23) I managed to still run my REU program - virtually- in summer 2020 and summer 2021. The research was as good as normal, and I could admit more students since I was not paying for housing, but the students had a much worse time because of the lack of social activities-- we did have some online but it's really not the same. (As for my REU program in Summer 2025-- there are other issues that I will discuss in a later blog post.) 

24) I used to see Lance about once a year when he came to Washington DC either on chairman-business or Dean-business, or NSF-business. I have not seen him in person since... hmm, I do not know. Might be since before COVID. I do see him on zoom once in a while. And whenever a theorist dies he gives me a call to discuss the blog-obit.

25) I am a homebody- I can stay at home for many days in a row. I watch TV, go on treadmill,  and watch TV  while on treadmill. I also  surf the web, read papers, think brilliant thoughts, and  make up slides. Other people feel a  need to GET OUTSIDE THE HOUSE. 

 26) My book club and my game night have both moved online and have not resumed being in person.

book club: Two of the people in it moved to Georgia so we thought we would not see them anymore. But then COVID hit and it's just so much easier for them and everyone else to have book club on zoom. This works pretty well. 

game night: One person is still COVID-shy (this may be reasonable in her case) hence does not want to go to gatherings. And during COVID 2 people from OUT OF STATE joined the game night. So now it is always online. This does LIMIT which games we can play, and some games are not as good online. 

27) Since Darling and I stayed at home so much we got out of the habit of putting our wedding rings on before leaving the house. We still have not gotten back in the habit. This may be the least important long-term effect of COVID.

28) (ADDED LATER, INSPIRED BY A COMMENT) One of the comments asked (though assumed yes) whether I am back to living a normal life. That's mostly true except for the following:

I am VERY CAREFUL not to injure myself (e.g., no more jogging outside where a crack in the sidewalk could make you break a bone) because of wait times in hospitals during COVID- but it seemed like a good idea even post-COVID (if we are indeed post-COVID- the commenter challenges that). 

I do mask when I go shopping.

I test if I have symptoms (I had a mild case once.)

I get the flu vaccine- I didn't use to- but I got it since I didn't want to get the flu and THINK it was COVID.

Some of my friends and relatives don't eat in restaurants anymore, or insist I test before coming over, or... and I HAPPILY accommodate them.

The COMMENT is very good and I recommend everyone read it.

 

 

Wednesday, March 12, 2025

Covid and Complexity

As we hit five years from when the world shut down, lots of discussions on how Covid has changed society. What about academia and computer science?

It's a challenging question to ask as Covid is not the only major change in the last five years. We've seen wars in Ukraine and Gaza and huge political changes around the world. We've had major technological changes as well, most notably the rise of machine learning, particularly large-language models. 

But Covid changed us quickly, moving immediately to online teaching, meetings and conferences. Both my children moved in with us for six months. Crowded, but it was great to spend time together. 

Most of us have moved on from Covid, though a small number still take it seriously, avoiding crowded areas, wearing masks, not eating indoors at restaurants, even still isolating. We need to all respect everyone's individual risk decisions when it comes to the disease. And we should never forget the many we've lost to the disease. 

We saw many attempts at virtual and later hybrid conferences but none worked particularly well, despite some valiant efforts. There's just a limit to how long you can be engaged staring at a screen. The best we have now are recorded talks with a watch party, and a separate in-person meeting. Not just for Covid, but because international travel has become more difficult for many.

By necessity we saw a vast improvement in online collaboration tools, at least until Google killed Jamboard. With papers generally available online, there is very little you can do in your office you can't do at home. So we see fewer people coming into work every day: faculty, students and staff. Collaborating with someone across an ocean is almost as easy as collaborating with someone at your university. I find new faculty feel less need to choose a university for its resources and colleagues than for its location.

Personally I try to avoid virtual meetings as much as I can, which is not nearly enough. I have a student I work with at another university in Chicago who prefers to make the long trek here to talk research rather than meet on zoom, as we are much more productive that way. Others seem to prefer and even thrive on online meetings. Each to their own. 

The students have suffered the worst from Covid, especially those who lost a year or more of in-class pre-college education. We see some incoming students struggling more, both academically and socially, with many just watching their lectures online or not fully engaging if they come in person. It will take a whole generation before we fully recover as a society from that disease.

Sunday, March 09, 2025

Numbers that look prime but aren't

 
A long time ago I made up the question (are questions ever really made up?)

What is the least number that looks prime but isn't?

It was not quite a joke in that it has an answer despite being non-rigorous.

My answer is 91:

Dividing by 2, 3, 5, and 11 has TRICKS.

Squares are well known.

So the first number that looks prime but isn't is \(7\times 13=91.\)

I recently  saw the question asked on Quora and given a more interesting answer. The answer is by Nathan Nannon, a PhD in Math at Univ of CA, Davis (Graduated 2021). I paraphrase it here and then have some questions.

-----------------

What does it mean to look prime?

FIRST CRITERION: 

1) Its decimal representation ends in 1, 3, 7, or 9, and

2) It is not on the multiplication tables that a lot of people memorize, which go up to 12.

Based on these criteria, 39 is the first composite number that looks prime.

(If people no longer learn the mult tables then the answer would be 21.) 

SECOND CRITERION: Use the trick that a number is divisible by 3 if the sum of its digits is divisible by 3. Then the first composite number that looks prime is 91, followed by 119, 133, and 143.
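
Both claims are easy to check by brute force. Here is a sketch (mine, under one reasonable reading of the criteria: `on the times tables' means expressible as a product of two factors between 2 and 12):

```python
import math

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1))

def criterion_1(n):
    """Ends in 1, 3, 7, or 9 and is not on the 12-by-12 times tables."""
    on_tables = any(n % a == 0 and 2 <= n // a <= 12 for a in range(2, 13))
    return n % 10 in (1, 3, 7, 9) and not on_tables

def criterion_2(n):
    """Criterion 1 plus the digit-sum test for divisibility by 3."""
    return criterion_1(n) and sum(map(int, str(n))) % 3 != 0

composites = [n for n in range(4, 200) if not is_prime(n)]
print(next(n for n in composites if criterion_1(n)))    # 39
print([n for n in composites if criterion_2(n)][:4])    # [91, 119, 133, 143]
```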

THIRD CRITERION: Fermat's Test: If \(p\) is prime then for all \(a\), \(a^p\equiv a \pmod p\).
Numbers that pass this test and yet are composite are called Carmichael numbers.
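
Checking the Fermat condition at every base is only feasible for small numbers, but a brute-force sketch along those lines (mine) does recover the list that follows:

```python
import math

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1))

def passes_fermat(n):
    """True if a^n = a (mod n) for every base a in 0..n-1."""
    return all(pow(a, n, n) == a for a in range(n))

carmichael = [n for n in range(2, 3000) if not is_prime(n) and passes_fermat(n)]
print(carmichael)   # [561, 1105, 1729, 2465, 2821]
```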

Here are the first few Carmichael numbers:

561 AH- that does not count since 5+6+1 is divisible by 3.

1105 AH-doesn't count, ends with 5.

1729 AH- Nathan Nannon points out that 1729 is the sum of two cubes (more on that later) and hence we can use \(a^3+b^3 = (a+b)(a^2-ab+b^2)\). This only works if you KNOW that 1729 is the sum of two cubes. AH- most mathematicians do know this because (1) it is the least number that can be written as the sum of 2 cubes in 2 diff ways, and (2) this fact has been popularized by the following true story (I quote from Wikipedia, see here) which explains why such numbers are called Taxicab Numbers

The name is derived from a conversation ca. 1919 involving mathematicians G. H. Hardy and Srinivasa Ramanujan. As told by Hardy:

I remember once going to see him [Ramanujan] when he was lying ill at Putney. I had ridden in taxi-cab No. 1729, and remarked that the number seemed to be rather a dull one, and that I hoped it was not an unfavourable omen. "No," he replied, "it is a very interesting number; it is the smallest number expressible as the sum of two cubes in two different ways."

Note 

1) Oddly enough, the fact that 1729 is the sum of two cubes in TWO diff ways does not make it any easier to factor. We just needed one way. 

2) To say that 1729 does NOT look prime depends on history as well as math. If not for the Hardy-Ramanujan story, would most mathematicians know that 1729 is the sum of 2 cubes? Possibly, since it's 1000+729. But it's not clear. Martians may think 1729 looks prime.


2465 AH-doesn't count, ends with 5

2821 AH- just looking at it, it is clearly divisible by 7.

6601 AH- OKAY, this one really does look prime.

UPSHOT
The least number that looks prime but isn't is either 21 OR 39 OR 91 OR 6601, or something else, depending on what looks prime to you.

------------------------------------------------------

QUESTION
Is there some natural and simple criterion that rules out 6601? This may depend on your definitions of natural and simple.

QUESTION The first few Carmichael numbers have small factors. 6601 is divisible by 7. Is there some function \(f\) with \(f(n) \ll \sqrt{n}\) such that if \(n\) is a Carmichael number then it has a factor \(< f(n)\)?

The next Carmichael number after 6601 is 8911, which 7 divides. So that looks good. But alas, Jack Chernick proved (see here) that any number of the form \((6k+1)(12k+1)(18k+1)\), where \(6k+1\), \(12k+1\), and \(18k+1\) are all prime, is a Carmichael number. It is not known if this generates infinitely many Carmichael numbers. Hence if some \(f(n)\) exists then it's probably \(\Omega(n^{1/3})\).
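
For what it's worth, Chernick's construction is easy to play with (a sketch of mine, with plain trial-division primality testing):

```python
import math

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1))

# Chernick: if 6k+1, 12k+1 and 18k+1 are all prime, their product is a
# Carmichael number. The first hit, k=1, is the taxicab number 1729.
chernick = [(6*k + 1) * (12*k + 1) * (18*k + 1)
            for k in range(1, 100)
            if all(is_prime(m) for m in (6*k + 1, 12*k + 1, 18*k + 1))]
print(chernick[:4])   # [1729, 294409, 56052361, 118901521]
```

Note that the smallest factor \(6k+1\) of such an \(n=(6k+1)(12k+1)(18k+1)\) is roughly \(0.55\,n^{1/3}\), which is where the \(\Omega(n^{1/3})\) guess above comes from.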


Wednesday, March 05, 2025

Taking a Stand

On February 20th we got the news from the National Science Foundation Algorithms Foundations Team that long-time NSF program director Tracy Kimbrel was leaving the NSF, and not by choice.

Along with many others in part-time status at NSF, my service has been terminated earlier than I intended or expected.  It has been a great privilege and a great honor to serve the Algorithmic Foundations community over the last decade and a half.  It's disappointing to have it end so abruptly.  I will miss it and all of you.

Tracy is just one of many government employees losing their jobs but when you know someone it feels personal. Tracy has been a fixture at the NSF and often comes to theory conferences to talk about grant opportunities and the state of the NSF. In my yearly pre-covid pilgrimages to the foundation for panels, I always had great conversations with Tracy and watched him work, getting the information he needed from us to make the tough decisions of which projects to fund, always many more worthy than the available funding. The theory community loses with Tracy out of the NSF.

We did get some good news earlier this week with the NSF reinstating most of their probationary employees. And Trump did say near the end of his speech yesterday "we are going to conquer the vast frontiers of science" but apparently we'll do it with a much smaller NSF if Trump follows through with his plans.

Talking with some fellow academics at another university, I found they had almost given up. "What can we do?". 

We can push back.

Start by doing nothing. Don't preemptively change your policies and your values. Too many universities and organizations are abandoning DEI programs, changing their curricula, and freezing hiring of faculty and students in anticipation of challenges to come. We may see a time that new policies will survive the courts and force us to change, but not yet.

While the democrats in congress seem powerless, many of the governors, including my own governor JB Pritzker, have fought back, mostly in the courts, and have stopped, for now, much of the damage to the NIH and NSF. The computing societies urge congress to protect our research funding, especially in a time when we need to compete technologically with China and other countries. 

As individuals, we can take our own steps, participate in Stand Up for Science on Friday, reach out to our representatives at the national and state level, and just be part of the resistance. We can't let bullies dictate our future, we must control it for ourselves. 

Sunday, March 02, 2025

Karp recently turned 90 but there was no conference to celebrate that. Which numbers do we use and why?

Karp turned 90 in January of 2025. I searched to see if there is a 90th Birthday Conference for him.  I did not find one (is there one?).  For which years do we have celebratory birthday conferences?

Here are some conferences in honor of  60th Birthdays, by alphabetical order of last name. 

Eric Allender here

Laci Babai here

Allan Borodin here

Stephen Cook here

Rod Downey here

Juris Hartmanis (at the 1988 Structures, now Complexity, conference, predating the web). Lance posted about it here.

Russell Impagliazzo here

Ravi Kannan here

Richard Karp (I could not find the link.)

Stuart Kurtz here

Michael Rabin (I could not find the link but I recall planning to go but snow cancelled my flight.)

Ronitt Rubinfeld here

Michael Saks here

Alexander Shen here

Michael Sipser here

Shang-Hua Teng here

Leslie Valiant here (I thought he also had an 80th bday but I am wrong- he is younger than 80.) 

Vijay Vazirani here

Nikolay Vereshchagin here

Avi Wigderson here

I am sure there are more. 

Having a conference for someone's 80th birthday is also done. Here are a few:

Richard Stanley here

Michael Rabin here

Stephen Cook here (This was actually a conference for 50 years of Complexity Theory: A Celebration of the work of Stephen Cook. It was 50 years since Cook published what is now called the Cook-Levin Theorem. It also happened to be the year he turned 80.) 

Donald Knuth here but also see Lance's blog post on the meeting   here  and Knuth's favorite bday     here.

 Martin Kruskal here and here for my post on it, and Clyde Kruskal (Martin's son) post on it. That was back in 2007 when I was a guest poster. And Clyde was my guest, so he was a guest-guest poster.

I am sure there are many more. 

Numbers between 60 and 80 are rare (my wife read this and noted that there are 18 of them not including endpoints) but here are some:

John Osborne  (UMCP Math Prof) had a 64th. Could not find a link. 

Dick Dudley 65. (MIT Math professor, not to be confused with Big Dick Dudley, who was a wrestler, or Underwood Dudley, who wrote a book on Math Cranks, see here, which I was amazed to find out I DID NOT write a review of.)

Mike Paterson here (Why 66? Why not 66!)

Harry Lewis had a 70th, see here (I asked WHY 70? He said the organizer, Margo Seltzer, wanted it then. That is another point- the organizer really controls which year and also if it happens at all.) 

Mihalis Yannakakis had a 70th here 

Ravi Kannan 70 here

Leonid Levin had a 75th, see here

Dick Lipton had a 75th, see here

Manuel Blum had a 77th since 77=7*11 is a Blum Integer. ( The only reason I know it exists is because Lance went to it.) 

I've seen 100th bday conferences. 

Turing here (This is one of many Turing Celebrations for his 100th. It was in 2012. Turing died in 1954.)

Erdos here (This was in 2013, the centennial of his birth. Erdos died in 1996.)

Chernoff here (He attended. He is still alive as of this writing, at the age of 101) 

Kolmogorov here (The Day K turned 100 a student told me this. I then gave a lecture on Kolm complexity instead of the planned topic, on the fly. Now that my course is all on slides, and some classrooms don't even have  a blackboard or whiteboard, I can't do that anymore. Oh well.)

I am sure there are more. 

1) Why are 60, 80, 100 the usual numbers? They are nice and round. And 60 is big enough so that the person being celebrated has done stuff, but not so big that they are dead. 

2) There should be more clever ones like Osborne (64) and Blum (77). If there were ever a conference in my honor that would be hard, since the number most associated with me is 5/12 (see here). I had not done much in math at the age of 5 months. Oh well.

Wednesday, February 26, 2025

You Need Much Less Memory than Time

Just as I was complaining that we haven't seen many surprising breakthroughs in complexity recently, we get an earthquake of a result to start the year, showing that all algorithms can be simulated using considerably less memory than the time of the original algorithm. You can reuse space (memory) but you can't reuse time, and this new result from Ryan Williams in an upcoming STOC paper provides the first stark difference.

DTIME(\(t(n)\)) \(\subseteq\) DSPACE(\(\sqrt{t(n)\log t(n)}\))

This is a vast improvement on the previous best known simulation, the classic 1977 Hopcroft-Paul-Valiant paper showing

DTIME(\(t(n)\)) \(\subseteq\) DSPACE(\(t(n)/\log t(n)\))

only slightly lower than the trivial \(t(n)\) bound. Williams gets a huge near quadratic improvement that will go down as a true classic complexity theorem. Note that the space simulation does not maintain the time bound.

Williams' proof relies on a space-efficient tree evaluation algorithm by James Cook and Ian Mertz from last year's STOC conference. Cook and Mertz's algorithm builds on earlier work on catalytic computing, highlighted in a recent Quanta article.

Let me give a highly oversimplified view of the combined proof.

A \(t(n)\) time Turing machine uses at most that much space on its tapes. Split the tapes into \(\sqrt{t(n)}\) segments of size \(\sqrt{t(n)}\). Using the fact that it takes \(\sqrt{t(n)}\) time to cross an entire segment, Williams with some clever tricks models acceptance of the Turing machine as a circuit of bounded degree and depth \(\sqrt{t(n)}\), where the wires carry the contents of the size \(\sqrt{t(n)}\) segments at various times in the computation. 

Williams then applies the tree evaluation algorithm of Cook and Mertz. Cook and Mertz use finite fields to encode these segments as a combination of registers of size \(\log t(n)\) and show how to compute the value of each node of the tree using only \(\sqrt{t(n)}\) space for the local computation plus needing to only remember a constant number of registers while reusing the rest of the space when recursively computing the tree. It's pretty magical how they manage to make it all work. 
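
To make the space-reuse intuition concrete, here is a toy sketch (mine, and emphatically not the Cook-Mertz algorithm): ordinary depth-first evaluation of a depth-\(d\) binary tree keeps only one root-to-leaf path of partial results in memory at a time, so it needs about \(d\) intermediate values rather than one per node. The magic of Cook and Mertz is doing substantially better than even that, by encoding node values over a finite field so that the same registers can be reused across siblings.

```python
def eval_tree(node, depth, leaf_value, combine):
    """Depth-first evaluation of an implicit full binary tree.

    Only the current root-to-leaf path (at most `depth` stack frames,
    each holding O(1) values) is in memory at any time; the tree with
    its 2**depth leaves is never written down."""
    if depth == 0:
        return leaf_value(node)
    left = eval_tree(2 * node, depth - 1, leaf_value, combine)
    right = eval_tree(2 * node + 1, depth - 1, leaf_value, combine)
    return combine(left, right)

# Example: combine 2**20 leaves while storing only ~20 partial results.
print(eval_tree(1, 20, lambda i: i % 97, lambda x, y: x + y))
```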

It's worth going through the proof yourself. I recommend Section 3.1 and Footnote 6 in Williams' paper (a slightly weaker space bound but much simpler) and Sections 2-4 of the Cook-Mertz paper. Oded Goldreich has an alternative exposition of the Cook-Mertz algorithm and proof.

Williams' theorem works for multitape Turing machines and oblivious random-access machines, where the queries to the memory are fixed in advance. He shows how to use this result to compute the output of a circuit of size \(s\) using nearly \(\sqrt{s}\) space. Fully general random-access machines remain open, as do nondeterministic and other models of computation (random, quantum, etc.).

In 1986 my advisor Mike Sipser gave the first hardness vs randomness result, showing roughly that if there were problems that took time \(2^n\) but could not be solved in space \(2^{.99n}\) on multi-tape Turing machines then RP = P. Williams' theorem kills this assumption though we've developed weaker assumptions since. 

Moving forward, can we push Williams' result to get a simulation in space \(t(n)^\epsilon\) for \(\epsilon<1/2\)? A simulation for all \(\epsilon>0\) would separate P from PSPACE. Even a slight improvement would have applications for alternating time. Maybe try to use the Cook-Mertz techniques directly in the Turing machine simulation instead of going through computation trees.

Read sections 4 and 5 of Williams' paper for some further consequences and challenges for further improvements. 

Sunday, February 23, 2025

Why my department hopes I do not die this spring

Alice is scheduled to teach X in the Spring.

Then Alice CAN'T! (illness, death, or some other reason)

What is the department to do?

1) If it's an undergraduate class then likely there are other people who are QUALIFIED. Perhaps a grad student, perhaps a postdoc, perhaps someone borrowed from another dept (Math for a theory course for example), perhaps a teacher of another section of the course, perhaps a retired teacher in the area. Might be rough because of the short notice. In Math this is easier since the courses are more standardized.

2) If it's a graduate class then either

a) It's still something that someone else could teach. The set of people is smaller but might be nonempty. 

b) Nobody else can teach it. Still, it's a graduate class, so it's small, so it can be cancelled and the students can probably find other courses to take.

There is another positive aspect to this negative situation: If nobody else can teach it then probably the entire department is small, so even more likely that the class is small.

If I die this semester then the department will be faced with the perfect storm:

a) I am teaching a graduate course that would be a lot of work for someone else to get up to speed on: Ramsey Theory. (I do have good slides, so that might help.)

b) There are around 30 students taking it. (There are around 30 in most of the graduate courses, and more in the AI graduate courses.)

SO, what would they do? 

1) Cancel it. There are a few other theory grad courses the students could take, and some might want to take a non-theory course. Still awkward since those courses are already fairly full.

2) Have someone teach a similar course, like Prob method (the only part of Ramsey Theory that Lance thinks is worthwhile, see here) or combinatorics. This would be a lot of work so it may be hard to find someone who BOTH is qualified AND wants to. Perhaps a grad student, though I think we try to avoid having grad students teach grad courses. Then again, desperate times call for desperate measures. Perhaps a former grad student who is still in the area geographically and mathematically. 

I've been talking about the teacher being unable to teach the course BEFORE it begins. What if the teacher becomes unavailable DURING the semester? That's even harder to deal with.  

OKAY, the above has all been abstract and the events portrayed are rare. Here are things I have SEEN happen and what was done:

1) Someone hired as a lecturer to start in the spring ends up being unable to come for the spring. This was found out in November. They were going to teach 2 ugrad courses. 2 profs did an overload.

2) Someone who was going to teach a grad course ended up being unable to do so. This was found out in December. That teacher really did have a former grad student in the area who was available and qualified. Lucky!

3) In December a teacher ends up being unable to teach with 2 lectures to go, and the final to be administered and graded. Another teacher (who has taught that course) takes over, but this is not a big deal since it's not much work to finish the course. 

4) A teacher knows ahead of time that they won't be able to teach for 4 weeks. Two other teachers agree to cover the course in that teacher's absence. 

None of the above are ideal, but solutions were found that did work (for some definition of worked). But I do wonder if there will come a time when no solution can be found.

One piece of advice: If you are not going to be able to fulfill your commitment to teach a course, let your chairman know with a lot of lead time so they can find a solution. 


Wednesday, February 19, 2025

Tomorrow and Yesterday

I recently completed Tomorrow, and Tomorrow, and Tomorrow by Gabrielle Zevin, a book recommended by many, including the City of Chicago. The novel covers the decades-long journey of two game developers, Sadie and Sam, and how their lives interact with the games they create.

A paragraph towards the end made me rethink the whole book (not a spoiler):

Well, if we’d been born a little bit earlier, we wouldn’t have been able to make our games so easily. Access to computers would have been harder. We would have been part of the generation who was putting floppy disks in Ziploc bags and driving the games to stores. And if we’d been born a little bit later, there would have been even greater access to the internet and certain tools, but honestly, the games got so much more complicated; the industry got so professional. We couldn’t have done as much as we did on our own.

This paragraph hearkens back to my post last week, about how the era you grew up in can affect your trajectory. But also I'm a generation older than the book's main characters, and indeed Ribbit was distributed on a floppy disk in a Ziploc bag.

The novel at its heart is about two friends making games. I was lucky to have that experience myself for a couple of years in the early 80s, with high school friend Chris Eisnaugle, working on Ribbit, Excalibur and some other games that never saw the light of day. We coded for days on end while listening to music like REO Speedwagon, and taking time off for bowling or watching early Schwarzenegger movies. Coding in assembly language on slow processors with limited graphics, taking advantage of our complementary strengths and making it work. I don't regret leaving that life behind for the theoretical wonders of computational complexity, but that doesn't mean I don't miss it.

Sunday, February 16, 2025

Big Breakthrough in the exciting world of sum-free sets!

 Let \([n]\) be the set \(\{1,\ldots,n\}\). (This is standard.)

Let \(X\) be a set of integers. \(X\) is sum-free if there are NO \(x,y,z\in X\) such that \(x+y=z\). (Note that \(x=y\) is allowed.)

Let's try to find a large sum-free subset of \([n]\). One obvious candidate is 

\(\{1,3^1, 3^2,\ldots,3^{\log_3 n} \}\) (assume \(n\) is a power of 3). This is sum-free since the sum of two powers of 3 (even the same power taken twice) is never a power of 3.

So we can get a sum-free subset of \([n]\) of size about \(\log_3 n\). Can we do better?

YES: 

Erdos showed that every set of \(n\) reals has a sum-free subset of size at least \(n/3\). I have a slightly weaker version of that on slides here.
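Here is a toy sketch of Erdos's probabilistic argument, stated for sets of positive integers (the code, parameters, and set \(B\) below are my own made-up illustration, not from the slides). Pick \(\theta\) uniformly in \((0,1)\); the elements \(b\) whose fractional part \(\{b\theta\}\) lands in the middle third \((1/3,2/3)\) always form a sum-free set, and the expected number of such elements is \(n/3\).

import random

def middle_third_subset(B, theta):
    # keep b if the fractional part of b*theta lands in (1/3, 2/3)
    return [b for b in B if 1/3 < (b * theta) % 1 < 2/3]

def is_sum_free(S):
    S = set(S)
    return not any(x + y in S for x in S for y in S)

random.seed(1)
B = random.sample(range(1, 10**6), 300)      # an arbitrary set of 300 positive integers
trials = [middle_third_subset(B, random.random()) for _ in range(50)]
best = max(trials, key=len)
print(len(B) // 3, len(best), is_sum_free(best))   # the subset is always sum-free; its size hovers around n/3

The sum-freeness is unconditional: if \(x+y=z\) then \(\{x\theta\}+\{y\theta\}\) equals \(\{z\theta\}\) or \(\{z\theta\}+1\), and neither can happen when all three fractional parts lie in \((1/3,2/3)\).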

That result led to many questions:

a) Find \(f(n)\) that goes to infinity  such that every set of \(n\) reals has a sum-free subset of size \( n/3 + f(n)\).  

b) Replace reals with other sets closed under addition: naturals, integers, some groups of interest.

c) Just care about \([n]\).

Tao and Vu had a survey in 2016 of sum-free results, see here.

We mention three results from their survey:

(0) Eberhard, Green, and Manners showed that the 1/3 cannot be improved (see the survey for a more formal statement of this). So, for example, nobody will ever be able to replace 1/3 with 1/2.9.

(1) Alon and Kleitman  modified the prob argument of Erdos (in the slides) and were able to replace \( n/3\) with   \( (n+1)/3\).

(2) Bourgain showed, with a sophisticated argument, that \(n/3\) could be replaced by \((n+2)/3\).

I believe that result (2) was the best until a recent paper of Ben Bedert (see here). Did he manage to push it to \((n+100)/3\)? No. He did much better:

For every set of \(n\) reals there is a sum-free  subset of size \(n/3 + c\log\log n\).

Reflections

1) This is a real improvement!

2) Will it stay at \(n/3 + c\log\log n\) or will there NOW be an avalanche of results?

3) Contrast: 

Erdos's proof  I can (and have) taught to my High School Ramsey Gang (note: you DO NOT want to mess with them).

 The proof by Ben Bedert is 36 pages of dense math.

4) This leads to the obvious open question: Is there an elementary proof of some bound of the form \(n/3 + f(n)\) where \(f(n)\) goes to infinity? 



Wednesday, February 12, 2025

Research Then and Now

A student asked me if complexity research was easier when I was a student. Interesting question. Let's compare research now versus the late 80's.

The big advantage today is technology. Just a small sampling below.

Information: Google, Wikipedia, Complexity Zoo and the ever more powerful AI systems

Online Papers: arXiv, digital libraries, individuals' websites

Collaboration: Zoom, Overleaf, Theory Stack Exchange

Communication: Social Media, Online Talks and Classes, YouTube, Mobile Phones

Back in the 80's we had LaTeX and email. LaTeX was slow and you had to print out the paper to see it. Email was text only and it was difficult to share papers electronically. You had to go to the library to read papers unless you had the paper proceedings nearby. It was often easier to reprove a theorem than to go find the proof.

We did have some advantages back then. Reproving a theorem made you understand it better. Reading a paper in a proceedings or journal often introduced you to other papers in the publication. We didn't have the distractions of social media and the Internet in general, so you could actually focus on research. (Though there was a Mac in the MIT student area that ran Tetris.)

People came to the office every day. That opened up collaborations, formal and informal. I could walk into an office and ask questions of Johan Håstad, Noam Nisan, Paul Beame and Michael Kearns, all postdocs at MIT, not to mention my fellow grad students or the professors. There was a huge advantage to being at a place like MIT or Berkeley; it's for the better that those advantages have since been somewhat mitigated.

But the biggest advantage of all, and I'm not afraid to admit it, was low-hanging fruit. Computational complexity was just 20 years old when I started grad school, and the P v NP problem was a young teenager. There was surprising theorem after surprising theorem through the early 90's and not so much since. You didn't need to know deep math, and most graduate students could follow nearly all of the 47 talks at my first conference (STOC 1986), something not likely for the 219 STOC 2025 papers.

Much of my best work was done by reading a paper or watching a talk and wondering why the authors didn't ask some question, and then asking it and sometimes solving it myself or with others. Now you need to spend time climbing the trees and going down deep branches to find new problems that only people on nearby branches would care about or even understand.

But who knows, AI may soon climb those branches for you.

Sunday, February 09, 2025

Does Lance dislike Ramsey Theory Because he's colorblind?

BILL: Lance, my wife asked if you dislike Ramsey Theory because you are colorblind.

LANCE: (laughs) It's why I don't like Ramsey Theory talks--impossible for me to follow. But I don't actually dislike Ramsey theory. I just don't like it as much as you do.

BILL: Nobody does, except possibly Graham, Rothschild, Spencer, and your average Hungarian first grader.

LANCE: To me Ramsey Theory had one useful cool idea: the Probabilistic Method, which made people actually think Ramsey Theory was far more interesting than it really is.

BILL: Um, that is bad history and bad mathematics.

LANCE: I learned Ramsey Theory from Spencer and his book entitled Ten Lectures on the Probabilistic Method. But the Probabilistic Method was far more interesting than the Ramsey Theory. I suspect this is common: learn Ramsey Theory because of the Probabilistic Method. And some people get suckered into thinking that Ramsey Theory is interesting or important or both. My favorite application of the Probabilistic Method has nothing to do with Ramsey theory: Lautemann's proof that \(BPP \subseteq \Sigma_2\).

BILL:  A few points to make here

a) Some people call the Prob Method an application of Ramsey Theory. I do not. The Prob Method was developed by Erdos to get lower bounds on asymptotic Ramsey numbers and was then used for many other things that you and others find far more interesting. That's great, but that's not really an application of Ramsey Theory.

b) If the prob method had not been discovered by Erdos for Ramsey Theory, would it have been discovered later when it was needed for something you care about more? Probably. But it may have been much later.

c) Ramsey Theory has real applications. I have a website of them here. And there are more. For example---

LANCE: Bill, your graduate course in Ramsey theory is titled Ramsey Theory and its ``Applications''. So you do not believe there are real applications.

BILL: Depends how you define real and applications. I put it in quotes since many of the applications are to pure math. Some are to lower bounds in models of computation, but some may still consider that to not be a real application. Rather than debate with the students what a real application is, I put it in quotes.

LANCE: Are there any real applications? That is, applications that are not to pure math, and not to lower bounds.

BILL: I would think you would count lower bounds to be a real application. 

LANCE: I am asking on behalf of the unfortunate Programming Language student who takes your class thinking there will be real applications; perhaps they missed the quotes around applications.

BILL: I know of one real application. And it's to Programming Languages! Ramsey Theory has been used to prove programs terminate. I wrote a survey of that line of research here.

LANCE: One? There is only One real application? Besides the application of learning the probabilistic method so they can use the method for more interesting problems. Or to save the earth from aliens.

BILL: Judging a field of Pure Math by how many applications it has does not really make sense. I find the questions and answers themselves interesting. Here is a list of theorems in Ramsey Theory that I find interesting. Do you? (This might not be a fair question either since many theorems are interesting because of their proofs.) 

a) (Ramsey's Theorem, Infinite Case) For every 2-coloring of the edges of \(K_\omega\) there exists \(H\subseteq N\) such that \(H\) is infinite and every edge between vertices in \(H\) is the same color. My slides here.

b) (van der Waerden's Theorem) For every \(c,k\) there exists \(W\) such that for all \(c\)-colorings of \(\{1,\ldots,W\}\) there exist \(a,d\), both \(\ge 1\), such that \(a, a+d, \ldots, a+(k-1)d\) are all the same color. My slides here. (A small brute-force illustration for \(c=2, k=3\) appears after this list.)

c) (Subcase of the Poly VDW Thm) For every \(c\) there exists \(W\) such that for all \(c\)-colorings of \(\{1,\ldots,W\}\) there exist \(a,d\), both \(\ge 1\), such that \(a, a+d, a+d^2,\ldots,a+d^{100}\) are the same color. My slides here.

d) For all finite colorings of the Euclidean plane there exist three points, all the same color, such that the area of the triangle they form is 1. My slides: here.
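Here is the small brute-force illustration promised in item b (a throwaway Python snippet of my own, not from the slides): it checks that every 2-coloring of \(\{1,\ldots,9\}\) contains a monochromatic 3-term arithmetic progression, while some 2-coloring of \(\{1,\ldots,8\}\) avoids one, so \(W=9\) works (and is needed) for \(c=2, k=3\).

from itertools import product

def has_mono_ap(coloring, k=3):
    # coloring[i-1] is the color of i; look for a monochromatic k-term AP
    n = len(coloring)
    for a in range(1, n + 1):
        for d in range(1, n + 1):
            if a + (k - 1) * d > n:
                break
            if len({coloring[a + i * d - 1] for i in range(k)}) == 1:
                return True
    return False

for n in (8, 9):
    avoiders = [c for c in product((0, 1), repeat=n) if not has_mono_ap(c)]
    print(n, "colorings avoiding a monochromatic 3-AP:", len(avoiders))
    # expect a positive count for n=8 and zero for n=9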

So Lance, do these Theorems interest you?

LANCE: Sorry I fell asleep after you said "Ramsey". Let me look. Your slides are terrible. All of the colors are the same!

BILL: Maybe my wife was right. 

Friday, January 31, 2025

The Situation at the NSF

The National Science Foundation is one of the agencies most affected by the various executive orders issued by the Trump administration. As a critical funder of research in theoretical computer science, and science and engineering more broadly, the NSF has effectively come to a standstill, sending universities scrambling. Here's a little primer on what's going on in the foundation.

The NSF, like all parts of the federal government, has to follow the Executive Orders, and the first step is to determine which orders apply to NSF business, then how these orders can be addressed in a way that satisfies the White House and other supervisory agencies. This is a process that the NSF leadership works out and negotiates.

For immediate NSF business, the most influential order is Ending Illegal Discrimination and Restoring Merit-Based Opportunity, which states, among other things, that some DEIA measures violate anti-discrimination laws. Since NSF cannot pay for activities that violate current law, all active awards are being checked for whether they might be affected. This is done with the highest priority, since it affects the cash flow of current awards. After that, awards that are in progress towards being made, as well as entire programs and solicitations, will be reviewed for compliance with the executive order. Until compliance is assured in a way acceptable to the supervisory agencies, no new awards or other financial commitments can be made. After that, normal business should resume, although probably with a huge backlog.

The Hiring Freeze Executive Order also has a huge influence on NSF. The hiring freeze applies to all federal agencies, but NSF has a large number of rotators, usually university researchers who serve as a program director for a one- to four-year term. The rotators are essential to the programs, and the hiring freeze prevents new rotators from starting their role at the NSF. The hiring freeze will last for 90 days; then a plan to reduce the size of the federal workforce will be presented, and NSF might, we hope, again start hiring. In the past, NSF hiring processes were excruciatingly slow, so we should expect NSF to be understaffed for a significant period beyond the 90 days. The recent Fork in the Road letter of the Office of Personnel Management might lead more people to leave federal employment, and the strong stand on return to in-person work might make hiring rotators even more difficult. So, although all this is in flux and changing, it currently looks like difficult times ahead for the NSF.

What does this all mean for the research community? Some current funding is locked up, hopefully for a short time; there will be heavy delays on new grants, given higher scrutiny and lower staffing; and funding for fairness or for work focused heavily on broadening participation might be killed altogether. A lengthy delay will mean less funding for PhD students and postdocs next academic year. Given the reduced funding and the political atmosphere, we may lose America's ability to recruit the world's top talent to our universities. 

You can read official news on how the executive orders affect NSF and also see Scott's take.

Update 2/2/2025: For now, the NSF is getting back to business due to a temporary restraining order.

Wednesday, January 29, 2025

Lautemann's Beautiful Proof

In writing the drunken theorem post, I realized I never wrote a post on Lautemann's amazing proof that BPP is contained in \(\Sigma^p_2\), the second level of the polynomial-time hierarchy.

Clemens Lautemann, who passed away in 2005 at the too-young age of 53, wasn't the first to prove this theorem. That honor goes to Michael Sipser, who proved that BPP is in the polynomial-time hierarchy using Kolmogorov complexity, and Peter Gacs, who put it into the second level using hash functions, both results in the same paper.

Nisan and Wigderson, after Sipser, Gacs and Lautemann, note that BPP in \(\Sigma^p_2\) follows from their pseudorandom generators: simply guess a potentially hard function and use the universal quantifier to check that it's hard. Once you have a hard function you can use their generator to derandomize BPP.

But Lautemann's proof is incredibly beautiful because he just directly gives the \(\Sigma^p_2\) \((\exists\forall)\) expression and two simple probabilistic-method arguments to show it works. QED. 

Let \(L\) be in BPP, accepted by a probabilistic polynomial-time Turing machine \(M\) with error bounded by \(2^{-n}\). Let \(A(x,r)\) be true iff \(M(x)\) using random tape \(r\) accepts. \(A(x,r)\) is deterministically polynomial-time computable. We can assume \(|r|=|x|^k\) for some \(k\). 

Here is the \(\Sigma^p_2\) expression for input \(x\) of length \(n\). All quantification is over binary strings of length \(n^k\). Let \(\oplus\) be bitwise parity and \(\vee\) be logical OR.

\(\exists z_1,\ldots,z_{n^k} \forall y\ A(x,z_1\oplus y) \vee \ldots \vee A(x,z_{n^k}\oplus y)\)

That's it, the beautiful self-contained formula. We just need to show that this expression is true if and only if \(x\) is in L. 

Suppose \(x\) is in L and pick \(z_1,\ldots,z_{n^k}\) independently at random. For a string \(y\) the probability that \(A(x,z_i\oplus y)\) is false for all \(i\) is at most \((2^{-n})^{n^k}=2^{-n^{k+1}}\). So the probability that for some \(y\) this happens is at most \(2^{n^k}2^{-n^{k+1}}=2^{n^k-n^{k+1}}\ll 1\) so for some choice (even most choices) of the \(z_i\), the expression will be true.

Now suppose \(x\) is not in L. Fix \(z_1,\ldots,z_{n^k}\) and pick \(y\) at random. The probability that \(A(x,z_i\oplus y)\) is true for a fixed \(i\) is at most \(2^{-n}\), so the probability that any of them is true is at most \(n^k2^{-n}\ll 1\). In particular, for every choice of the \(z_i\), most \(y\) make every \(A(x,z_i\oplus y)\) false, so the expression is false. 
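To see both cases concretely, here is a toy numerical sketch of the covering idea (my own illustration with made-up parameters; the real proof uses \(n^k\) shifts of \(n^k\)-bit strings and error \(2^{-n}\)): when the accepting set of random strings is almost everything, a few random shifts cover every \(y\); when it is tiny, a counting bound shows no few shifts can.

import random

m, k = 8, 5                          # toy sizes: 8-bit "random tapes", 5 shifts
universe = list(range(2 ** m))

def covers(S, shifts):
    # True iff for every y, some z_i XOR y lands in the accepting set S
    return all(any((z ^ y) in S for z in shifts) for y in universe)

random.seed(0)

# x in L: the accepting set is nearly all of {0,1}^m, so random shifts almost surely cover
big_S = set(random.sample(universe, int(0.95 * 2 ** m)))
print("x in L:", covers(big_S, random.sample(universe, k)))   # almost certainly True

# x not in L: the accepting set is tiny, and k shifts can cover at most k*|S| < 2^m strings
small_S = set(random.sample(universe, int(0.05 * 2 ** m)))
print("x not in L:", k * len(small_S), "<", 2 ** m, "so no", k, "shifts can cover")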

MIC DROP

Sunday, January 26, 2025

People who live through two square years

 44*44=1936.

45*45=2025. This year!

46*46= 2116.

Since my fake birthday is Oct 1, 1960 (I do not reveal my real birthday to try to prevent ID theft), which is past 1936, and I won't live to 2116 unless Quantum-AI finds a way to put my brain in a vat, I will not see two square years in my life :-(

Since I keep a list of celebrities (defined as people I know who have some fame, so it's subjective) who are over 80, I have a LIST of celebrities who were born in 1936 or earlier. I was going to list them in this post but there are too many. So I list those that I think my readers care about, and point to the full list.

Here are people who have lived through two squares who I think you care about. I leave out how old they are or what year they were born.  They are in alphabetical order by their first names. I put a * next to the people who, in 1936, were  AWARE that it was a square year. So the starred names are those who truly ENJOYED living through 2 square years.

NAME                  KNOWN TO US FOR

Andrzej Ehrenfeucht   Math. EF-games (he is the E)
Anil Nerode           Math. Recursive Math
Aviezri Fraenkel      Math. Combinatorial Games

Buzz Aldrin           Walked on the moon

Charles Duke          Walked on the moon

Dana Scott            Math/CS. Prog Langs. Turing Award
David Scott           Walked on the moon
Dirk van Dalen        Math. Logic

Eric Hirsch Jr        American Educator *

Harrison Schmitt      Walked on the moon
Harry Furstenberg     Math. Ergodic methods in Ramsey Theory
Heisuke Hironaka      Math. Algebraic Geom. Fields Medal
Herman Chernoff       Math. Probability *

Jack Edmonds          CS. Theorist
James Watson          Biologist. The Double Helix. Nobel Prize with Crick *
Jane Goodall          Zoologist and Activist
Jean-Pierre Serre     Math. Algebraic X. Fields Medal *
John Thompson         Math. Group Theory. Fields Medal, Abel Prize

Michael Rabin         CS/Math. Theorist. Turing Award

Noam Chomsky          Linguistics. Did work on Grammars

Richard Friedberg     Physicist. Also invented the Priority method in Rec Theory
Richard Karp          CS. Theorist. Turing Award
Richard Stearns       CS. Theorist. Turing Award

Stephen Smale         Math. Lots of Stuff

Tom Baker             Actor. Dr. Who
Tom Lehrer            Math/Novelty songs *
Tony Hoare            CS. PL. Turing Award

Volker Strassen       Math/CS

Walter Koenig         Actor. Star Trek
William Shatner       Actor. Star Trek

For my complete list see here

It is currently  impossible for someone to live through three square years (again, unless they get their brain in a vat, or some other not-yet-invented mechanism). In a much earlier era it was possible: 

20*20=400

21*21=441

22*22=484

So if you were born in 400 and lived to be 84 years old, three squares! While MOST people didn't live that long back then, SOME did.

10*10=100

11*11=121

12*12=144

Born in the year 100, live to be 44 years old. Was the calendar well established by then?

(ADDED LATER: a colleague emailed me about the calendar. Dionysius Exiguus is credited with inventing the AD/BC calendar (not to be confused with the band AC/DC). He was born in 470, so let's say that the modern calendar was known from 500 on. (I am not dealing with the times it's changed a bit.) SO

23*23=529

24*24=576

25*25=625

To live through three squares you would need to live to 96.

To be aware that you lived through three squares you would need to live to 104.

To enjoy the 3-squareness you would need to be a healthy 104 year old.

Did someone who was born in 529 live to 625? I doubt it. See here for an article about how long people lived in the Middle Ages. Or, more accurately, how short they lived. 

)
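A quick sanity check of these spans (a throwaway Python snippet; the values of \(k\) are just the ones discussed above): the gap from \(k^2\) to \((k+2)^2\) is \(4k+4\) years.

for k in (10, 20, 23, 44):
    print(k * k, (k + 1) ** 2, (k + 2) ** 2, "years needed to span all three:", (k + 2) ** 2 - k * k)
    # prints spans of 44, 84, 96 and 180 years respectively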


Reminds me of a great line from A Funny Thing Happened on the Way to the Forum, a musical that takes place in the early AD's. The character Pseudolus says: 

(Looking at a bottle of wine) Was 1 a good year?


Wednesday, January 22, 2025

The Fighting Temeraire

What does an 1838 painting tell us about technological change?

A colleague and I decided to see how well LLMs could teach us a topic we knew nothing about. We picked the Romanticism art movement. I asked ChatGPT to tutor me on the topic for an hour. Chatty picked four paintings. 

Liberty Leading the People (Delacroix, 1830)
Wanderer above the Sea of Fog (Friedrich, 1818)
The Fighting Temeraire (Turner, 1838)
The Third of May 1808 (Goya, 1814)
For each of these paintings, I put the painting up on one screen and used the voice feature to have ChatGPT give me an overview, and then we would have a discussion about it where I would ask about various features. We ended up spending about an hour on each. Was it successful? I now know significantly more about the Romantic art period and these paintings, though I'm of course not an expert. It was certainly a better and more enjoyable experience than the freshman seminar course on art history I took in college.

Let's discuss one of these paintings in more detail: the 1838 painting The Fighting Temeraire, tugged to her last berth to be broken up, by Joseph Mallord William Turner, on display at the National Gallery in London. 

The Fighting Temeraire by J.M.W. Turner

The 98-gun ship Temeraire featured on the left fought in the Battle of Trafalgar. This painting captures the ship being towed by a steam tug through the Thames to be broken up for scrap. 

Much to love in the painting: the reddish sunset, the reflections of the boats in the water, the detail of the Temeraire and the lack of detail of the tug.

But also note the nod to technological change, the tall sailboat being taken to its end by a coal-powered tugboat, marking the new era of shipping vessels, the industrial revolution in full swing, and the beauty we lose to progress. Now a bad metaphor for the AI revolution of today.

If you want to learn more about the painting, you can watch this lecture from the National Gallery, or you could ask your favorite LLM.