Wednesday, September 04, 2024

Favorite Theorems: Parity Games

August Edition

A quasipolynomial-time algorithm for a long-standing open problem. Yes, we have two of them this decade.

Cristian Calude, Sanjay Jain, Bakhadyr Khoussainov, Wei Li and Frank Stephan

I covered this theorem in 2017. In a parity game, Alice and Bob take turns walking along a directed graph with integer weights on the vertices and no sinks. Alice wins if the largest weight seen infinitely often is even. While it's not hard to show that computing the winner sits in NP\(\cap\)co-NP and even UP\(\cap\)co-UP, the authors give the surprising result that you can determine the winner in quasipolynomial time.
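For contrast, here is a minimal sketch (mine, not the paper's) of Zielonka's classical recursive algorithm, which solves parity games but can take exponential time; the quasipolynomial algorithm replaces this recursion with a clever bounded counting argument. Each vertex has a priority (the weight above) and an owner, with player 0 playing Alice's role:

def attractor(vs, player, succ, owner, target):
    """Vertices from which `player` can force the token into `target`."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in vs - attr:
            if (owner[v] == player and succ[v] & attr) or \
               (owner[v] != player and succ[v] <= attr):
                attr.add(v)
                changed = True
    return attr

def zielonka(vs, succ, owner, prio):
    """Returns (w0, w1): the vertices from which player 0 (Alice, even)
    and player 1 (Bob, odd) win. `succ[v]` must be a set of successors
    inside `vs`. Exponential in the worst case."""
    if not vs:
        return set(), set()
    p = max(prio[v] for v in vs)        # highest priority still in play
    player = p % 2                      # 0 if p is even (good for Alice)
    a = attractor(vs, player, succ, owner, {v for v in vs if prio[v] == p})
    sub = {v: succ[v] - a for v in vs - a}
    w0, w1 = zielonka(vs - a, sub, owner, prio)
    opp = w1 if player == 0 else w0
    if not opp:                         # player wins the whole game
        return (vs, set()) if player == 0 else (set(), vs)
    b = attractor(vs, 1 - player, succ, owner, opp)
    sub = {v: succ[v] - b for v in vs - b}
    w0, w1 = zielonka(vs - b, sub, owner, prio)
    return (w0, w1 | b) if player == 0 else (w0 | b, w1)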

The result has implications for modal logics; for example, model checking for the \(\mu\)-calculus can now be solved in quasipolynomial time.

In follow-up work, Hugo Gimbert and Rasmus Ibsen-Jensen give a short proof of correctness of the parity games algorithm. Marcin Jurdziński and Ranko Lazić give an alternative algorithm that reduces the space complexity from quasipolynomial to nearly linear.

Sunday, September 01, 2024

Six degrees of separation has been proven. Really?

There is a paper (see here for an article about the paper, the link to the paper itself is later) that claims to PROVE that, on average, the distance (for some definition of distance) between any two people is 6.

1) We've blogged about this kind of thing:

My Pope Number is 2

What is your Erdos Number?

The six degrees of VDW

2) The paper's link is here and in case link rot sets in, my copy of it is here.

3) The paper has 14 authors. AH- so that's why we are, on average, 6 degrees apart- because papers have so many authors. (Actually, papers in Biology have LOTS more than 14.)

4) The paper defines a mathematical model for social networks and analyzes a person's cost and benefit of forming a connection.

5) Is the mathematical model realistic? I think so. But it's always tricky since empirical evidence already gave the answer of six. The true test of a mathematical model is to predict something we didn't already know.
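As a toy sanity check (my sketch, not the paper's model): in a random graph on n people where everyone knows about d others, the average distance grows like ln(n)/ln(d), which is how six degrees can emerge among billions of people from modest connectivity:

import random
from collections import deque

def avg_distance(n, d, trials=200):
    """Average distance between random pairs in a random graph where
    each of the n people knows at least d others."""
    adj = [set() for _ in range(n)]
    for v in range(n):
        while len(adj[v]) < d:            # add random acquaintances
            u = random.randrange(n)
            if u != v:
                adj[v].add(u)
                adj[u].add(v)
    total = found = 0
    for _ in range(trials):
        s, t = random.sample(range(n), 2)
        dist, q = {s: 0}, deque([s])      # plain BFS from s
        while q and t not in dist:
            v = q.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    q.append(u)
        if t in dist:                     # true w.h.p. for these parameters
            total, found = total + dist[t], found + 1
    return total / found

print(avg_distance(10_000, 10))           # about ln(10000)/ln(10), i.e. 4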

6) One thing about applying this to the real world: what is a connection? Friendship (also hard to define)? Handshakes? I like the will-respond-to-my-emails metric, though that may leave out half of my colleagues and even some of my friends.

(How come people do not respond to important emails? Perhaps a topic for a later blog.) 

7) My Jeopardy number is 1 in two different ways:

a)  My co-author Erik Demaine was mentioned in a question on Jeopardy, see here and look at the 800 dollar question in Double Jeopardy, Category Boy Genius.

b) My cousin Adam Winkler's book, Gunfight, was mentioned in a question on Jeopardy, see here. It was a 400 dollar question.

In both cases the question was easy, hence my inside information did not give me an advantage. 

(Note: they were actually mentioned in an answer on Jeop since Jeop has that weird format where they give the answer and you need to find the question. For example:

Game show that has a format that Bill Gasarch thinks is stupid

What is Jeopardy?)

Thursday, August 29, 2024

My Quantum Summer

Rendering of PsiQuantum's facility in Chicago

I wasn't looking for quantum this summer but it found me. At various events I ran into some of the most recognized names in quantum computing: Peter Shor, Charlie Bennett, Gilles Brassard and Scott Aaronson (twice), Harry Buhrman and Ronald de Wolf.

I was invited to Amsterdam for a goodbye event for Harry Buhrman. Harry co-founded and co-led the CWI quantum center QuSoft and has now moved to London to join Quantinuum as chief scientist. I was invited to give a talk on Harry's classical complexity work before he joined the dark side. Ronald and Gilles gave talks after mine.

On the way to Amsterdam I spent a few days visiting Rahul Santhanam in Oxford. Scott Aaronson and Dana Moshkovitz showed up with kids in tow. Scott gave a talk in Oxford on AI, not quantum. I would see Scott again at the Complexity conference in Michigan.

Peter Shor and Charlie Bennett both attended the Levin Event I mentioned last week.

I talked to all of them about the future of quantum computing. Even though I'm the quantum skeptic in the crowd, we don't have that much disagreement. Everyone agreed we haven't yet achieved practical applications of quantum computing and that the power of quantum computing is often overstated, especially in what it can achieve for general search and optimization problems. There is some disagreement on when we'll get large scale quantum computers, say enough to factor large numbers. Scott and Harry would say growth will come quickly like we've seen in AI, others thought it would be more gradual. Meanwhile, machine learning continues to solve problems we were waiting for quantum machines to attack. 

My city of Chicago had a big quantum announcement: the Illinois Quantum and Microelectronics Park, built on an old steelworks site on the Southeast Side of the city with federal, state and local funds as well as a big investment from PsiQuantum. I have my doubts as to whether this will lead to a practical quantum machine, but no doubt having all this investment in quantum will bring more money and talent to the area, and we'll get a much better scientific and technological understanding of quantum.

PsiQuantum's website claims they are "Building the world's first useful quantum computer". PsiQuantum is using photonic qubits, based on particles of light. Harry's company Quantinuum is using trapped ions. IBM and Google are trying superconducting qubits. Microsoft is betting on topological qubits, and Intel (naturally) on silicon qubits. Who will succeed? They all might. None of them? Time will tell, though it might be a lot of time.

Monday, August 26, 2024

What's worse for a company: being hacked or having technical difficulties? I would have thought being hacked but...

At the Trump-Musk interview:

1) There were technical difficulties which caused it to start late and have some other problems.

2) Musk and (I think) Trump claimed that this was a DDOS attack because people were trying to prevent Donald from having his say (listen to the beginning of the interview).

3) Experts have said it was not a DDOS attack, or any kind of attack.

(For the interview see here. If the link does not work either blame a DDOS hack or technical difficulties.)

When a company is hacked, do they admit it? This is hard to tell since if it never comes out, how do you know?

Would a company rather admit to the public they had tech difficulties OR admit to the public they were attacked? I would think they would rather admit to the public they had tech difficulties. 

I suspect that X had tech difficulties because LOTS of people wanted to hear Trump.

Faced with this, what are Elon's options:

Options

1) Claim that this was caused because so many people wanted to hear Trump that the system could not handle it. This would make Trump look good, not make Elon look too bad, and is probably true so it won't be discovered later that he lied.

2) Claim that his system was attacked. This allows Trump to claim his enemies are out to get him, thus pushing the narrative that he is a victim. But Musk looks worse than if the system was just overloaded. Plus it's false, which will (and did) come out. However, there were absolutely no consequences to lying.

I think it's unusual for a company to lie by claiming they were hacked when they weren't. If you know of any other examples then please comment.





Wednesday, August 21, 2024

The Levin Translation

Earlier this summer I attended a Celebration for Leonid Levin who recently turned 75. To prepare my talk I wanted to go back to Levin's 1971 two-page Russian masterpiece that covered both his independent discovery of NP-completeness and Universal Search. Boris Trakhtenbrot's wonderful Survey of Russian Approaches to Perebor has a messy translation of the Levin paper in the appendix. Could I use AI to get a cleaner version? TLDR Here's the PDF.

Let me first take a detour to when I was a graduate student in the 1980s. I went to meet my advisor Mike Sipser and found him with a couple of Russian students and postdocs carefully translating the latest paper from Alexander Razborov, then a young researcher at the Steklov Mathematical Institute in Moscow. Razborov had sent a physical copy of his latest paper and they were translating word by word--would this be the paper that showed NP sets don't have polynomial-size circuits and P ≠ NP? Spoiler alert: It wasn't, but the process remained etched in my memory four decades later.

But now we live in the 2020s and I turned to my friend Claude for help. I gave Claude the Russian PDF and asked Translate into English

Certainly, I'll translate the Russian text into English for you. Here's the translation:

PROBLEMS OF INFORMATION TRANSMISSION

Volume IX 1973 Issue 3

BRIEF COMMUNICATIONS

UDC 519.14

UNIVERSAL SEQUENTIAL SEARCH PROBLEMS

L. A. Levin

The article examines several well-known mass problems of the "sequential search type" and proves that these problems can only be solved in the time it takes to solve any problem of the specified type in general...

So far so good. Summarize the article.  

This article by L. A. Levin, titled "Universal Sequential Search Problems," discusses a class of computational problems known as "sequential search problems" or "enumeration problems." Here's a summary of the key points:

1. The article addresses well-known computational problems that require exhaustive search (enumeration) to solve.

2. While many classical problems have been proven to be algorithmically unsolvable, there exists a class of problems that are solvable but require an impractically large amount of computation time.

3. These problems, called "sequential search problems," include tasks like minimizing Boolean functions, searching for proofs of limited length, and determining graph isomorphism.

4. While these problems can be solved by simple algorithms that enumerate all possibilities, such algorithms require exponential time, leading mathematicians to believe that simpler algorithms are impossible.

5. The article introduces the concept of "universal sequential search problems" - problems to which all other sequential search problems can be reduced.

6. Levin presents six specific problems (numbered 1-6 in the text) and claims they are universal sequential search problems.

7. The main theorem (Theorem 1) states that if there exists any sequential search problem that cannot be solved in less than f(n) time (for some function f), then the six presented problems also cannot be solved in less than f(n) time.

8. Another important result (Theorem 2) states that for any sequential search problem, there exists an algorithm that solves it in optimal time, up to a constant factor and an additive term comparable to the input length.

9. The article suggests that this approach can be extended to most interesting sequential search problems.

10. While the article doesn't solve the fundamental question of whether these problems truly require exponential time, it establishes relationships between different problems in this class.

This work is significant in computational complexity theory, as it relates various hard problems to each other and suggests a hierarchy of difficulty among computational tasks.

The summary is almost as long as the article.
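Item 8 is Levin's universal search. A minimal sketch of the dovetailing idea (my rendering, with hypothetical helpers step and check, not Levin's own formulation):

import itertools

def levin_search(programs, instance, check, step):
    """Dovetail over an enumeration of programs, giving program i
    roughly a 2^(-i) share of the time: in phase k, program i runs
    for 2^(k-i) steps. If program i solves the instance in t steps,
    a verified answer appears within about 2^i * t steps total --
    optimal up to a constant factor. `step(p, instance, budget)` runs
    p for `budget` steps (returning None on timeout) and `check`
    verifies a candidate answer; both are hypothetical helpers."""
    for k in itertools.count():
        for i, p in enumerate(programs[:k + 1]):
            answer = step(p, instance, 2 ** (k - i))
            if answer is not None and check(instance, answer):
                return answer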

Then I noticed the term "mass problems" that occurs throughout the paper. In the translation the word "mass" seems off. What are some alternatives?

You're right to notice that the word "mass" in this context might seem unusual or potentially confusing. In the original Russian text, the term used is likely "массовые задачи" (massovye zadachi), which is a term commonly used in Russian mathematical literature. However, in English, we typically use different terminology. Here are some alternative translations that might be more appropriate in this context:

1. "General problems" or "general computational problems"

2. "Large-scale problems"

3. "Broad classes of problems"

4. "Universal problems"

5. "Widely applicable problems"

6. "Common computational problems"

7. "Algorithmic problems"

8. "Computational class problems"

9. "Generic problems"

10. "Generalized problems"

In the context of this paper, the author is likely referring to problems that are not specific to a particular instance but represent a whole class or type of problem. The term "universal" is actually used later in the paper, which aligns well with this interpretation.

So, a more idiomatic translation might replace phrases like "classical mass problems" with "classical universal problems" or "broad classes of classical problems".

This choice of wording would better convey the idea that these are problems that apply broadly across many instances or types of computational tasks, rather than being specific to a particular case.

So "mass problems" is mass as in mass audience not mass as in physics. What if we left out the word "mass" completely.

You're right, omitting the word "mass" entirely could indeed improve the clarity and flow of the translation without losing the essential meaning. Let's consider how some key sentences would read without it:

1. Instead of:

"After clarifying the concept of an algorithm, the algorithmic unsolvability of a number of classical mass problems was proven..."

We could have:

"After clarifying the concept of an algorithm, the algorithmic unsolvability of a number of classical problems was proven..."

2. Instead of:

"However, if we assume that there exists some (even artificially constructed) mass problem of the sequential search type that cannot be solved by simple algorithms..."

We could have:

"However, if we assume that there exists some (even artificially constructed) problem of the sequential search type that cannot be solved by simple algorithms..."

3. And for the theorem:

"If there exists any problem of sequential search (quasi-sequential search) type that cannot be solved in time less than f(n) for argument length comparable to n, then problems 1-6 also have this property."

This approach works well because:

1. The context already implies that we're dealing with broad classes of problems, not specific instances.

2. The term "sequential search problems" itself conveys the idea of a general class of problems.

3. It makes the text more concise and easier to read in English.

4. It avoids potential confusion that might arise from an unusual use of the word "mass" in this context.

So yes, simply omitting "mass" throughout the translation would likely result in a clearer, more idiomatic English text while preserving the original meaning of the article.

Give me the full translation without the mass, which Claude dutifully did. Why stick with a text version? Format this nicely in LaTeX.

Certainly! Here's the article formatted in LaTeX.

This LaTeX code will produce a nicely formatted version of the paper. It includes:

  1. Proper title, author, and abstract
  2. Sections for introduction, definitions and problems, and main results
  3. Theorem, lemma, and definition environments
  4. Proper mathematical formatting for equations and logical statements
  5. Enumerated list for the six problems
  6. Bibliography section

To compile this LaTeX document, you would need to save it as a .tex file and run it through a LaTeX compiler. The output will be a professionally formatted academic paper.

So I did, and here is the final PDF.

Russian papers at the time famously had no proofs, so for fun I tried Can you fill in the proofs of the theorems

Certainly! I'll expand the LaTeX document to include proofs for the theorems and lemma. Please note that since the original paper doesn't provide detailed proofs, I'll have to construct plausible proofs based on the information given and general knowledge of the field. These proofs should be considered as educated guesses rather than the actual proofs from Levin's original work.

Here's the updated LaTeX code with added proofs.

This expanded version includes proof sketches for Theorem 1, Lemma 1, and Theorem 2. These proofs are constructed based on the information given in the paper and general principles of computational complexity theory. They should give a good idea of the reasoning behind the theorems, but may not exactly match Levin's original proofs.

The main ideas in these proofs are:

  1. For Theorem 1: Using the universality of problems 1-6 to show that they must be at least as hard as any other sequential search problem.
  2. For Lemma 1: Demonstrating how an arbitrary sequential search problem can be reduced to one of the listed problems (in this case, Problem 2).
  3. For Theorem 2: Constructing a universal algorithm that simulates all possible algorithms in parallel, achieving optimal time complexity up to a constant factor.

These proofs maintain the paper's focus on the relationships between different types of sequential search problems and the idea of universal sequential search problems.

You can judge the proofs on your own. 

Sunday, August 18, 2024

Request open problems in honor of Luca Trevisan

 Request for Open Problems In Memory of Luca Trevisan


Luca Trevisan passed away of cancer on June 19, 2024, at the age of 52.

I am putting together an open problems column of open problems in his honor.

If you are interested in contributing then please email me a document with the following specifications.

1) It can be as short as half a page or as long as 2 pages. One way to keep it short is to give many references or pointers to papers with more information.

2) It should be about an open problem that is either by Luca or inspired by Luca or a problem you think Luca would care about.

3) In LaTeX. Keep it simple as I will be cutting-and-pasting all of these into one column.

Deadline is Oct 1, 2024.

 



Wednesday, August 14, 2024

Favorite Theorems: Random Oracles


This month's favorite theorem is a circuit result that implies the polynomial-time hierarchy is infinite relative to a random oracle, answering an open question that goes back to the 80's.

Johan Håstad, Benjamin Rossman, Rocco A. Servedio and Li-Yang Tan

The authors show how to separate depth-d from depth-(d+1) circuits on random inputs. As a corollary, the polynomial-time hierarchy is infinite relative to a random oracle, which means that if we choose an oracle R at random then, with probability one, the k+1-st level of the polynomial-time hierarchy relative to R differs from the k-th level relative to R.

Why should we care about random oracles? By the Kolmogorov zero-one law, every complexity statement holds with probability zero or probability one relative to a random oracle, so for every statement either it or its negation holds with probability one. And since the countable intersection of measure-one sets is measure one, all the complexity statements true relative to a random oracle hold simultaneously relative to a single random oracle, a kind of consistent world. With a random oracle we have full derandomization, like BPP = P, AM = NP and PH in \(\oplus\mathrm{P}\). We have separations like P ≠ UP ≠ NP. We have results like NP doesn't have measure zero and SAT solutions can be found with non-adaptive queries to an NP oracle. And now we have that the PH is infinite simultaneously with all these other results.

More details on this paper from a post I wrote back in 2015. 

Saturday, August 10, 2024

The combinatorics of Game Shows

 (Inspired by Pat Sajak stepping down from Wheel of Fortune)

How many different game shows are there? Many. How many could there be?

1) Based on knowledge or something else? Jeopardy is knowledge. Wheel and Deal-No Deal are something else. The oddest are Family Feud and America Says, where you have to guess what people said, which might not be what's true. Reminds me of the SNL sketch about a fictional game show Common Knowledge. Oddly enough there now is a REAL game show of that name, but it actually wants true answers to questions.

2) Do the losers get anything more than some minimum? On Wheel they do, but I do not know of any other show where they do. On People Puzzler they get a subscription to People Magazine!

3) Is how much the winners win based on how well they do in the game, or is it a flat amount? Jeop and Wheel are based on how you do in the game. On America Says, Switch, and many others the winner plays a final round to either win (say) $10,000 or settle for just $1,000.

4) MONEY: I want to say can you win A LOT or NOT SO MUCH. I'll set the cutoff at $20,000. On Jeop or Wheel you CAN get > $20,000. As noted in the above item, there are shows where at the end you can win $10,000 but that's it.

5) Do winners get to come back the next day? Jeop and Masterminds do that but I do not know of any other show that does.

6) Are celebrities involved? This used to be more common, but seems to have faded. Hollywood Squares was an example. Masterminds is hard to classify in this regard since the celebs are people who know a lot of stuff (they are called Masterminds) and their main claim to fame might be being on Masterminds, or on some other quiz show. Ken Jennings was a Mastermind in the first season.

7) If it's question-based, then is it multiple choice or fill-in-the-blank? Some do both. Common Knowledge has the first round multiple choice, later rounds fill-in-the-blank, and the finale multiple-choice.

8) Do you win money or merchandise? There are many combinations of the two. 

9) Are the players teams of 3 (Common Knowledge), individuals (Jeopardy), just one person or team (Deal-No Deal and Cash Cab), or something else?

10) Is it a competition, so people vs people (most of the quiz shows), or is it one person answering questions (Who Wants to be a Millionaire, Cash Cab)? I've noticed that on Cash Cab if you are close to the answer they give it to you, which I don't think they would do on Jeop. This could be because there is no competitor, so being flexible is not unfair to someone else. Millionaire can't do this since it's strictly multiple choice.

There are other parameters but I will stop here. I leave it to the reader to more fully expand these categories and find out how many game shows there can be; a toy count is sketched below.
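As a back-of-the-envelope count (the option totals below are my own rough guesses, one entry per parameter above):

from math import prod

# Hypothetical number of choices for each of the ten parameters above:
# knowledge or not, loser prizes, score-based or flat winnings, money cap,
# returning champions, celebrities, question format, money or merchandise,
# team structure, head-to-head or solo.
options = [2, 2, 2, 2, 2, 2, 3, 3, 4, 2]

print(prod(options))  # 4608 game show formats in this toy model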





Sunday, August 04, 2024

Determining which math problems are hard is a hard problem

I was wondering what the hardest math problems were, and how to define hard. So I googled

Hardest Math Problems

The first hit is here. The 10 problems given there bring up the question of what is meant by hard.

I do not think the order the problems were given in is an indication of hardness. Then again, they seem to implicitly use many definitions of hardness.

1) The 4-color problem. It required a computer to solve it and had lots of cases. But even if that is why it's considered hard, the solution to the Kepler Conjecture (see here) is harder. And, of course, it's possible that either of these may get simpler proofs (the 4-color theorem already has, though it still needs a computer).

2) Fermat's Last Theorem. Open a long time, used lots of hard math, so that makes sense.

3) The Monty Hall Paradox. Really? If hard means confusing to most people and even some mathematicians, then yes, it's hard. But on a list of the 10 hardest math problems of all time? I think not.

4) The Traveling Salesperson problem. If they mean resolving P vs NP then yes, it's hard. If they mean finding a poly-time algorithm for TSP then it may be impossible.

5) The Twin Primes Conjecture. Yes that one is hard. Open a long time and the Sieve method is known to NOT be able to solve it. There is a song about it here.

6) The Poincare Conjecture. Yes, that was hard before it was solved. It's still hard. This is another issue with the list- they mix together SOLVED and UNSOLVED problems.

7) The Goldbach Conjecture. Yes, that one is hard. 

8) The Riemann Hypothesis is the only problem on both Hilbert's list of 23 problems from 1900 and the Clay prize list. Respect! There is a song about it here.

9) The Collatz conjecture. Hard, but this might not be a good problem. Fermat was a good problem since working on it led to math of interest even before it was solved. Riemann is a good problem since we really want to know it. Collatz has not led to that much math of interest and the final result is not that interesting. (See the sketch right after this list.)

10) Navier-Stokes and Smoothness. Hard! Note that its a Millennium problem. 
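For item 9, here is the iteration in question: easy to state, apparently very hard to analyze (a minimal sketch, mine):

def collatz_steps(n: int) -> int:
    """Number of steps for n to reach 1 under the 3n+1 map. The Collatz
    conjecture says this terminates for every n >= 1; nobody can prove it."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111 steps, despite the tiny starting value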

NOTES

1) TSP, Poincare, Riemann, Navier-Stokes are all Millennium problems. While that's fine, it also means that there are some Millennium problems that were not included: The Hodge Conjecture, The Birch and Swinnerton-Dyer Conjecture, and Yang-Mills and the Mass Gap (that's one problem: YM and the Mass Gap). These three would be hard to explain to a layperson. Yang-Mills and the Mass Gap is a good name for a rock band.

2) Four have been solved (4-color, FLT, Monty Hall (which was never open), Poincare) and six have not been solved (TSP, Twin primes, Goldbach, RH, Collatz, Navier-Stokes)

3) I have also asked the web for the longest amount of time between a problem being posed and solved. FLT seems to be the winner with 358 years, though I think the number is too precise since it's not quite clear when it was posed. I have another candidate but you might not want to count it: the Greek constructions of trisecting an angle, duplicating the cube, and squaring the circle. The problem is that the statement:

In 400BC the Greeks posed the question: Prove or Disprove that one can trisect an angle with a ruler and compass

is false on many levels:

a) Nobody thought of prove or disprove back in 400BC (and that date is too precise).

b) Why would a compass, which helps you find where North is, help you with this problem?

(ADDED LATER: Some of the comments indicate that people do not know that point b is a joke. Perhaps not a good joke, but a joke.) 

SO, when was it POSED in the modern sense is much harder to say. For more on this problem see the book Tales of Impossibility or read my review of it here.

(ADDED LATER: A comment pointed out that trisecting an angle (and duplicating a cube and squaring the circle) were proven impossible. I knew that but forgot to say it and to make the point about the very long time between posing and solving, so I will elaborate here:

1837: Wantzel showed that there is no way, with a straightedge and compass, to trisect an angle or duplicate the cube. This used Field Theory.

1882: Lindemann showed pi was transcendental and hence there is no straightedge and compass construction to square the circle.

So one could say it took 1882 + 400 = 2282 years to solve the problem, but as noted above, to say the problem was posed in 400BC is not really right.)

4) Songs are needed for the other problems on this list UNION the Millennium problems. The Hodge Conjecture would be a challenge. I DID see some songs on YouTube that claimed to be about some of these problems, but they weren't. Some were instrumentals and some seemed to have no connection to the math.

5) Other lists I've seen include:

a) Prove there are no odd perfect numbers. That seems to be hard. This could have been posed before FLT was posed, but it's hard to say.

b) Prove the following are transcendental: pi + e, the Euler-Mascheroni constant. There are other open problems here as well.

These lists make me think more carefully about what I mean by HARD and PROBLEM and even MATH.



Sunday, July 28, 2024

In the future we will all have songs written about us, and it will be Lance's fault.

In response to my blog post about how it's easier to FIND novelty songs (and other things) than it used to be (see here), Lance showed how easy it is to CREATE a novelty song using AI. He had an AI write lyrics and music for THE BILL, see here.

The song is pretty good and pretty accurate (except that I don't drink coffee or burn toast and I would not say that in math I'm quite the star), but this post is NOT about the song. 

There have been songs about

The Mandelbrot set (see here),

Lobachevsky (see here), 

Gauss's Law (see here), 

Galois (see here),  

The Bolzano-Weierstrass Theorem (see here), 

William Rowan Hamilton (see here), 

and I will end this list with the Newton-Leibniz Rap (see here).

(I am sure there are more songs about famous mathematicians. If you know any that are better than the BW Rap, that is, any of them, please leave a comment.)

Side note: There are poems about Fermat's last theorem, as discussed in my post here.

So what do  Mandelbrot, Lobachevsky, Gauss, Galois, Bolzano, Weierstrass, Hamilton, Newton, Leibniz, and Fermat have in common? 

They are all famous and for a good reason- they all did math worth doing that is remembered many years later. 

Bill Gasarch- not so much (unless the Muffin Problem is the key to solving world hunger).

In the past the EFFORT to write a song about someone was enough so that one would only bother for famous people. 

With AI it is now EASY, as Lance did with his song THE BILL. He used ChatGPT for the lyrics and Suno for the song itself.

So what does this say about the future? It will be so easy to write songs about ANYBODY that it will be done. So having a song about you will no longer be a sign that you are famous or special. We are RIGHT NOW in a transition. If I tell my nephew that there is a song about me and that I have a Wikipedia page, he is impressed. My great niece- not so much. 


Wednesday, July 24, 2024

Complexity in Michigan

Invited Speaker Nutan Limaye, Conference Chair Valentine Kabanets,
2024 PC Chair Rahul Santhanam, myself, 2025 PC Chair Srikanth Srinivasan
and 2025 Local Arrangements chair Swastik Kopparty enjoy some tapas.
I have a long history with the Computational Complexity conference. I attended the first 26 meetings (1986-2011) and 30 of the first 31. I chaired the conference committee from 2000-2006. According to DBLP I still have the most papers appearing in the conference (32). I even donated the domain name for the conference (with the caveat that I could keep the subdomain for this blog).

Only Eric Allender had a longer streak, having attended the first 37 conferences through 2022 in Philadelphia (if you count the two held online during the pandemic), before he retired.

But I haven't been back to Complexity since that 31st conference in Tokyo in 2016. Due to my administrative roles, various conflicts and changes in the field, you just start missing conferences. But with the conference at the University of Michigan in Ann Arbor, within driving distance of Chicago, it was time to go home for the 39th meeting. And it's a good thing I drove, as more than one person had flight delays due to the CrowdStrike bug.

The complexity conference remains relatively stable at about 75-100 registrants, the majority students and young researchers. I've moved from wise-old sage to who is that guy. But I'm having great fun talking to old acquaintances and new. I'm impressed with the newer generations of complexity theorists--the field is in good hands.

Best paper goes to Michael Forbes for Low-Depth Algebraic Circuit Lower Bounds over Any Field. The work of Limaye, Srinivasan and Tavenas I talked about last month gave explicit polynomials requiring superpolynomial-size constant-depth algebraic circuits, but it required polynomials over large fields. Forbes extended the lower bounds to all field sizes.

Best student paper goes to Ted Pyne from MIT for Derandomizing Logspace with a Small Shared Hard Drive for showing how to reduce space for randomized log-space algorithms on catalytic machines.

Check out all the papers in the online proceedings.

From a relatively quiet business meeting: 36 papers accepted out of 104 submissions, a bit up from previous years. 75 attendees including 42 students, similar to recent years. 2025 conference at the Fields Institute in Toronto August 5-8. 2026 in Lisbon or Auckland.

The loss of Luca Trevisan, PC Chair 2005 and local arrangements chair in 2013 in San Jose, loomed large in the business meeting and at the conference.

Sunday, July 21, 2024

The FLT solution announcement had its 31st anniversary about a month ago. Some poems about FLT NOT from ChatGPT

On June 21, 1993, at the Isaac Newton Institute for Mathematical Sciences, Andrew Wiles announced that he had proven Fermat's Last Theorem. That wasn't quite right- there was a hole in the proof that was later patched up with the help of Richard Taylor (his former grad student). A correct proof was submitted in 1994 and appeared in 1995. Wiles is the sole author.

June 21, 2024 was the 31st anniversary of the announcement. (So today is the 31-years-and-1-month anniversary.) I COULD have had ChatGPT write some poems about it. But there is no need. There are already some very nice poems about it written by humans. Will humans eventually lose the ability to write such things? Would that be a bad thing? Either ponder those questions or just enjoy the poems. (My spellcheck still thinks ChatGPT is not a word. It needs to get with the times.)

1) A link to a set of poems about FLT: here.

2) Here is a poem that is not in that set but is excellent.

A challenge for many long ages
Had baffled the savants and sages
Yet at last came the light
Seems that Fermat was right
To the margins add 200 pages 

(I don't know who wrote this or even where I read it. If you know anything about where it was published or who wrote it, please let me know. ADDED LATER: Eric Angelini left a comment telling me that this limerick was written by Paul Robert Chernoff. The comments also have a link to lots of limericks that Paul Robert Chernoff wrote. Thanks!)

3)  Here is a poem by Jonathan Harvey that mentions the gap in the original proof.

A mathematician named Wiles
Had papers stacked in large piles
Since he saw a clue
He could show Fermat true
Mixing many mathematical styles

He labored in search of the light
To find the crucial insight
Young Andrew, it seems
Had childhood dreams
To prove Mr. Fermat was right

He studied for seven long years
Expending much blood, sweat, and tears
After showing the proof
A skeptic said “Poof!
There’s a hole here”, raising deep fears.

This shattered Mr. Wiles’s belief
His ship was wrecked on a reef
Then a quick switcheroo
Came out of the blue
Providing his mind much relief.

Mr. Wiles had been under the gun
But the obstacle blocking Proof One
Fixed a much older way
From an earlier day
And now Wiles has his place in the sun

4) Here is a poem by John Fitzgerald that mentions other unsolved problems including P vs NP

Fermat’s theorem has been solved,
What will now make math evolve?

There are many problems still,
None of which can cause that thrill.

Years and years of history,
Gave romance to Fermat-spree,

Amateurs and top men too,
Tried to push this theorem through.

Some have thought they reached the goal,
But were shipwrecked on the shoal,

So the quest grew stronger still;
Who would pay for Fermat’s bill?

So what is now the pearl to probe,
The snark to hunt, the pot of gold,

The fish to catch, the rainbows end,
The distant call towards which to tend?

One such goal’s the number brick,
where integers to all lengths stick:

To sides, diagonals, everyone,
Does it exist or are there none?

Then there are those famous pearls,
That have stymied kins and earls:

Goldbach, Twin Primes, Riemann Zeta;
No solutions, plenty data.

Find a perfect number odd;
Through 3n + 1 go plod;

Will the P = N P ?
Send a code unbreakably.

Are independence proofs amiss;
Continuum Hypothesis;

Find a proof which has some texture
of the Poincaré conjecture.

And so, you see, onward we sail,
there still are mountains we must scale;


But now there’s something gone from math,
At Fermat’s end we weep and laugh.


Thursday, July 18, 2024

The Story of Shor's Algorithm

The quantum factoring algorithm of Peter Shor (FOCS 1994, SIAM Review 1999) turns thirty this year. Before his algorithm, quantum computing lacked a killer app, something practical that quantum could do that seems hard for classical computers. Back in 1994, I said Shor's algorithm bought quantum computing another twenty years. How I misjudged the longevity of quantum hype.

Peter got the idea for his algorithm from a paper by Daniel Simon solving a theoretical complexity problem. The quantum factoring algorithm is a great example of how a complexity result can open doors to new algorithmic ideas.

Simon came up with a beautifully simple example of a problem that requires exponential time on a probabilistic machine but polynomial time on a quantum computer. Let's define addition over \(n\)-bit strings: for \(x\) and \(y\) in \(\{0,1\}^n\), \(x+y\) is the bitwise parity (XOR) of \(x\) and \(y\). For example if \(x\) is 0110 and \(y\) is 1100, \(x+y = 1010\).

Suppose we have a function \(f:\{0,1\}^n\rightarrow\{0,1\}^n\) (mapping \(n\) bits to \(n\) bits) with the property that \(f(x)=f(y)\) iff \(x=y\) or \(x=y+z\) for some fixed \(z\). The problem: given \(f\) as an oracle or a circuit, find \(z\). A classical machine needs exponentially many queries to find \(z\) in the worst case.

Simon gave a simple quantum algorithm that with a single query outputs a random \(w\) such that \(w\cdot z=0\). With \(n-1\) linearly independent such \(w\), you can solve for \(z\).

Shor asked what if we could do the same for regular integer addition instead of bitwise parity. Suppose you have a function \(f\) with \(f(x)=f(y)\) iff \(x-y\) is a multiple of a fixed \(z\). (In Simon's case over bits the only multiples are zero and one.) That means \(f\) is periodic and \(z\) is the period. Shor knew, from an algorithm of Miller, that finding a period leads to factoring.

Let \(m\) be an odd number with multiple prime factors. Consider \(f(x)=a^x\bmod m\) for a randomly chosen \(a\) relatively prime to \(m\). If this function has period \(z\), then \(a^z\bmod m=1\), and with probability at least one-half \(z\) is even and \(\gcd(a^{z/2}-1,m)\) is a nontrivial factor of \(m\).
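Here's a minimal classical sketch of that reduction (mine, not from Shor's paper). It finds the period by brute force, which is exactly the exponential step the quantum computer replaces:

import math, random

def factor_from_period(m):
    """Classical half of Shor's algorithm: use the period z of
    f(x) = a^x mod m to split m. Assumes m is odd with at least
    two distinct prime factors."""
    while True:
        a = random.randrange(2, m)
        if math.gcd(a, m) > 1:
            return math.gcd(a, m)         # lucky: a already shares a factor
        z, power = 1, a
        while power != 1:                 # brute-force the least z with a^z = 1 (mod m)
            power = power * a % m
            z += 1
        if z % 2 == 0:
            r = pow(a, z // 2, m)         # a square root of 1 mod m, other than 1
            if r != m - 1:                # avoid the trivial root -1
                return math.gcd(r - 1, m) # a nontrivial factor of m

print(factor_from_period(21))             # prints 3 or 7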

Getting all this to work on a quantum computer requires a number of additional tricks beyond what Simon did, but once Shor had the inspiration the rest followed.

Peter Shor really understood the landscape of theory from complexity to cryptography, had a curiosity about quantum computing, and had the vision to see how it all connected together to get the quantum algorithm that almost single-handedly brought billions of dollars to the field.

Peter just received the Shannon Award for his work on quantum error correction that would help enable quantum computers to run his algorithm. Still, the largest number present-day quantum computers can factor with the algorithm is 21. If (and it's a big if) that number gets up past the RSA challenge numbers, Peter will have far larger prizes in his future.

Sunday, July 14, 2024

The Term Quantum Being Misused ... Again


In a post from 2015 I noted that the word quantum is often misused (see here). Have things gotten better since then? I think you know the answer. But two uses of the word quantum caught my attention:

1) The episode Subspace Rhapsody of  Star Trek- Strange New Worlds is described on IMDB as follows:

An accident with an experimental quantum probability field causes everyone on the Enterprise to break uncontrollably into song, but the real danger is that the field is expanding and beginning to impact other ships--- allies and enemies alike.

 (I mentioned this episode and pointed to my website of all the songs in it  here.)

SO- is this an incorrect use of the word quantum? Since ST-SNW is fictional, it's a silly question. However, it seems like a lazy Sci-Fi convention to just use the word quantum for random technobabble.

2) The Economist is a serious British weekly newspaper. Or so I thought until I read this passage in the June 15-21, 2024 issue, in the article featured on the cover, The rise of Chinese Science:

Thanks to Chinese agronomists, farmers everywhere could reap more bountiful harvests. Its perovskite-based solar panels will work just as well in Gabon as in the Gobi desert. But a more innovative China may also thrive in fields with military uses, such as quantum computing or hypersonic weapons.

So The Economist is saying that Quantum Computing has military uses. I am skeptical of this except for the (in my opinion unlikely) possibility that QC can factor and break RSA, which, if it happens, won't be for a while.

It also makes me wonder if the rest of the paragraph, which is on fields I don't know anything about, is also incorrect or deeply flawed. (See Gell-Mann Amnesia, which I've also heard called the Gell-Mann Effect.)

I am not surprised that ST:SNW uses quantum incorrectly (Or did it? Maybe an experimental quantum probability field would cause people to sing.) but I am surprised that The Economist misused it. I thought they were more reliable. Oh well.

 




Wednesday, July 10, 2024

Favorite Theorems: Extracting Ramsey Graphs

June Edition

Two decades ago, I named the recently departed Luca Trevisan's paper connecting extractors to pseudorandom generators as one of my favorite theorems from 1995-2004. I'm dedicating this month's favorite theorem to him.

Suppose we have two independent sources with just a little bit of entropy each. Can I pull out a single random bit? This month's favorite theorem shows us how, with a nice application to constructing Ramsey graphs.

Eshan Chattopadhyay and David Zuckerman

More formally (feel free to skip this part), suppose we have two independent distributions U and V, each of polylog min-entropy, which means for every string x of length n, the probability of choosing x from U and the probability of choosing x from V is at most \(2^{-(\log n)^c}\) for some c. There is a deterministic polytime function f (which doesn't depend on U and V) such that f(x,y), with x and y chosen independently from U and V, will output 1 with probability \(1/2\pm\epsilon\) for \(\epsilon\) polynomially small.

Previous work required a linear amount of min-entropy for U and V. 
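For intuition, the classical two-source extractor of Chor and Goldreich, the inner product mod 2, already handles that easier high-min-entropy regime; a minimal sketch (mine):

def inner_product_extractor(x: int, y: int) -> int:
    """Chor-Goldreich two-source extractor on n-bit strings:
    the inner product <x,y> mod 2. Nearly unbiased when each source
    has min-entropy somewhat above n/2 -- far from the polylog
    min-entropy that Chattopadhyay-Zuckerman achieve."""
    return bin(x & y).count("1") % 2

print(inner_product_extractor(0b0110, 0b1100))  # parity of 0b0100 = 1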

As a corollary, we can use f to deterministically generate a Ramsey graph on n vertices with no cliques or independent sets of size \(2^{(\log\log n)^c}\) for a sufficiently large c. This is also an exponential improvement over previous constructions. Gil Cohen gave an independent construction that doesn't go through extractors.

There have been several papers improving the bounds of Chattopadhyay and Zuckerman. In FOCS 2023 Xin Li gave a construction of extractors with \(O(\log n)\) min-entropy, the current state-of-the-art for extracting a single random bit with constant error, and Ramsey graphs with no cliques or independent sets of size \(\log^c n\) for some constant c.

Sunday, July 07, 2024

The combinatorics of picking a Vice President

 Trump is pondering who to pick for his vice president. For a recent podcast about it go here. Spoiler alert: Doug B or Marco R or J.D. Vance. 

In 2008 I did a blog post titled I would bet on INTRADE that INTRADE will do badly picking VP nominations, where I showed that about half the time the VP candidate is not on anyone's short list, and hence betting markets would do badly. At the time INTRADE was synonymous with betting markets. I would not have bet that INTRADE would go out of business.

What criteria does a prez nominee use when picking a vice president? How many combinations are there?

1) Someone who will help with a block of voters. 

Trump-Pence 2016: Mike Pence was thought to help Trump with Evangelicals and establishment Republicans.

Biden-Harris 2020: Kamala Harris was thought to help Biden with women and African-Americans.

JFK-LBJ 1960: LBJ was thought to help JFK in the South.

Kerry-Edwards 2004: Edwards was thought to help win North Carolina (Edwards's state). It didn't work.

Dukakis-Bentsen 1988: Mike Dukakis (liberal) picked Lloyd Bentsen (moderate) as the VP. The ticket lost, though it's possible that Bentsen brought in some votes, just not enough.

There are other examples. Even in the cases where the candidate won, it's not clear if the VP mattered. The podcast says that Trump thinks that this kind of thing (e.g., picking a woman or an African American) won't help him get their votes. He might be right. But (my speculation) a woman on the ticket might help some men be more comfortable voting for him. That is, they could think Trump is not a misogynist, see- he picked a woman for VP. Similar for an African-American.

Caveat: Perhaps a candidate who would help in Swing States.

2) Someone who will help him if he wins. 

Obama-Biden 2008: Biden helped newcomer Obama since Biden had Congressional experience, having been a senator for X years for some large value of X.

Bush-Cheney 2000: Dick Cheney knew Washington DC and hence could help George W Bush (who had been a governor but had no FEDERAL experience).

3) Someone who the voters can see taking over the presidency in case that is needed.

Clinton-Gore 1992: I've heard that Clinton chose Gore for that reason. I'm NOT an insider so it may not be true.

FDR-Truman 1944: The party chose Harry Truman as VP knowing that FDR would likely pass away and we'd have President Truman. (I've read this and believe it is true on some level.)

4) Party Unity- Pick someone you fought in the primary to show that the party is united. Bonus: the VP nominee has been vetted and is somewhat known to the public.

Biden-Harris 2020 may have had some of this.

This mentality is rarer now since lately people tend NOT to pick people they ran against in the primary.

JFK-LBJ 1960 was in this category.

Biden did run for the nomination in 2008 but didn't run much (I think he dropped out either right before or right after the Iowa Caucus) so that one doesn't really count.

5) DO NO HARM. Counterexamples:

Some people voted against McCain since they didn't want to see Sarah Palin one heartbeat away from the presidency. This was especially important since McCain was old. And hence this may be important for Trump in 2024.

Biden may have the same problem with Harris. Note that the issue is NOT whether Harris would BE a bad prez, it's whether people THINK she would be a bad prez.

Kristi Noem- Trump doesn't want to answer questions about why his VP shot a dog and a goat. (Note- if Trump himself had shot a dog and a goat, the party and FOX News would be defending that action.)

6) Someone who the Prez candidate gets along with personally. I've heard that Clinton-Gore and Obama-Biden got along. JFK and LBJ did not.

7) Someone who won't outshine the president.

Dukakis-Bentsen 1988 might have had this problem.

 8) All of the above might matter less than usual since there are so few undecided people in swing states. And that's NOT just because the country is polarized. Ponder the following:

In most elections it's either two people NEITHER of whom has been president, so you don't quite know what they will do, OR one has been prez and the other has not, so you don't know what the newcomer will do.

But in this election BOTH have been president. We KNOW what they will do. So there is less room for doubt. 

History: This only happened once before: 

1884: Cleveland beats Blaine

1888: Harrison beats Cleveland

1892: Cleveland vs Harrison and Cleveland wins

Even though I say it's hard to predict, and it could be someone NOT on the short list, here are my thoughts.

a) Marco R. The electors in the electoral college cannot vote for a Prez and Vice-Prez who are residents of the same state. (Note 1- This is an idiotic rule which dates from either the late 1700's or the early 1800's. Note 2- Dick Cheney changed his residency from Texas to Wyoming so he could be Bush's VP in 2000. I have NO problem with that whatsoever.) So one of Marco R or Trump would have to change residency. Trump won't bother. Marco R is a SENATOR from Florida so I doubt he would either. Also, Marco said nasty things about Trump when he ran against him for the nomination in 2016. I am surprised Marco is on anyone's short list. NOT going to be the VP nominee.

b) Doug B. Who? He doesn't outshine Trump, and he gets along with Trump. Won't bring in any voters, but Trump says he doesn't care about that. How would Americans view him as a possible prez? I doubt Trump cares. QUITE POSSIBLE to be the VP nominee.

c) JD Vance. Author of a thoughtful book, Hillbilly Elegy, which indirectly explains why poor white rural voters are attracted to Trump. He then became a Senator and is now all-in on Trump. This is NOT hypocritical, but it's odd: in 2016 he was anti-Trump but now he is pro-Trump. Even that is NOT hypocritical by the usual standards for politicians. He has praised Trump and there may be people who think he would be a good president. He is young (39) and handsome, so I wonder if Trump worries that Vance might outshine him. Even so, QUITE POSSIBLE to be the VP nominee.

d) I am surprised that Tim Scott and Elise Stefanik seem to have fallen off Trump's short list, though they were at one time on it, so it would not be too big a surprise if either becomes the VP nominee. IF one thinks that Tim Scott will help with the African-American vote, or that Elise Stefanik will help with the women's vote (OR, as noted above, would help white men feel more comfortable voting for Trump) then either would be a politically good choice. However, Trump does not think this is true, and he may be right. I've also heard that Trump doesn't want people saying something like Tim Scott helped Trump win the African-American vote since Trump wants to think that HE won the election without help. I would think neither will be VP but YOU NEVER KNOW.

e) Someone NOT on the horizon. This brings us back to my 2008 post- IT REALLY COULD BE SOMEONE THAT NOBODY IS TALKING ABOUT. So who? I DON'T KNOW, SINCE NOBODY IS TALKING ABOUT THEM. Maybe Lance.

Wednesday, July 03, 2024

Why is it called the Turing Award (revisited)?

Avi Wigderson gave his ACM Turing Award lecture last week, and instead of telling his own story, he focused on Alan Turing and his influence on complexity. If you didn't see it live, you can watch on YouTube or below.

I want to revisit a post I wrote for the Turing centenary in 2012 asking why the prize is named after Turing. Since that post, Turing has become even more popular, especially through the 2014 movie starring Benedict Cumberbatch. I caught this picture of Turing as a legend in the Chicago Pride Parade last Sunday.

But Turing was not always a legend. The first Turing Award went to Alan Perlis in 1966. Turing's work during World War II remained classified until the 1970s and wasn't widely known until the 80's. Alan Turing's homosexuality would have won him no favors in the mid-60s.

When I gave a talk celebrating Juris Hartmanis, I posited that Juris not only received the Turing award but may have been the reason the award has its name. The Hartmanis-Stearns paper, as Avi also noted, established the Turing machine as the right model for studying computational complexity, as the model easily captures time and space (memory). That paper was published in 1965, fresh in the minds of those at the ACM creating the award. That, perhaps combined with the early days of AI and Turing's intelligence paper, may have been enough to decide to name the award for Turing.

Today there is no question that ACM made the right move in naming the award for Turing. Just watch Avi show that Turing's influence on complexity justifies the award's name on its own.


Sunday, June 30, 2024

Technology: 1966, 2006, 2023.

 In 2013 I wrote a blog to celebrate Lance's 50th birthday by contrasting what things were like when Lance was 10 to when he was 50. That post is here.

But life has changed even from 2006 to 2023. I will tell three stories: one from 1966, one from 2006, one from 2023. They all have to do with my hobby of collecting novelty songs; however, I am sure there are similar stories in other realms.

1) On Sept 21, 1966 there was an episode of Batman with special guest villain The Minstrel. He sang several songs in the episode that I thought were funny. My favorite was when Batman and Robin are tied up over a rotisserie and the Minstrel sings, to the tune of Rock-a-bye Baby:

Batman and Robin Rotate and Resolve

As the heat grows, your bodies Dissolve

When its still hotter, then you will Melt

Nothing left but your Utility Belt. 

I LIKED the song and WANTED it. So I found out when the episode would re-run and set up my tape recorder to record it. I still have the tape, though I don't have a tape player (see my blog post here); however, it doesn't matter because a compilation of the songs in that episode (actually two episodes) is on YouTube here.

2) On March 6, 2006 there was an episode of Monk, Mr. Monk Goes to the Dentist, which has in it The Randy Disher Project singing Don't Need a Badge. This was great and I wanted that song. At the time I was buying the DVDs of Monk. When the DVD of that season came out I assumed the song would be included as an extra. It was not :-(. By that time I was busier than in 1966 so I didn't have the time, patience, or tape recorder to track it down. But that does not matter since 8 years later it was on YouTube here. But I had to wait 8 years.

3) On Aug 23, 2023 there was an episode of ST-SNW entitled Subspace Rhapsody that had NINE songs in it, sung by the crew (actually sung by the actors!). I don't have streaming so I didn't watch it, but I heard about it (people know I am interested in novelty songs so they tell me about stuff like that). I spent about 30 minutes on YouTube finding ALL NINE and putting them in my file of novelty song links, see here. And it was worth the effort- three of the songs are GREAT and the rest are pretty good (in my opinion).

Points

1) Also easier to find now than it was in 2006 and certainly in 1966: everything. Okay, let's list some examples: music (not just novelty), TV shows, movies, journal articles, conference articles, books. But see the next point.

2) Big Caveat: For a recording from 1890 to have survived it would have to be on wax cylinder, then vinyl, then CD, maybe back to vinyl (vinyl is having a comeback), and perhaps mp3, streaming, YouTube, or Spotify. Some music will be lost. I would like to think that the lost music is not the good stuff, but I know of cases where that is incorrect (my blog post here gives an example). For journal articles there is also the issue of language. Some articles were never translated. And some are written in a style we no longer understand. And some you really can't find. And there may be some songs where the only copy is in my collection.

3) Corollary to the Big Caveat: Some things are on YouTube one day and gone the next. There is an SNL short video Conspiracy Theory Rock which seems to come and go and come and go. I don't think it's on YouTube, but I found it here. Let's hope it stays. I have that one on VHS tape but I don't have a VHS tape player. And modern e-journals might vanish. See my post on that issue here.

4) Some of my fellow collectors think they miss the days when only they had access to (say) Weird Al's Patterns, which he sang on Square One Television (a math-for-kids show on PBS which I discovered and liked when I was 45). The song is on YouTube here. I find this point of view idiotic. The PRO of the modern world is I can find lots of stuff I like and listen to it (and it's free!). The CON is a loss of bragging rights for people like me. Really? Seems like a very minor CON. I do not miss the days of hunting in used record shops for an old Allan Sherman record (ask your grandmother what a used record shop is and what an Allan Sherman is).

5) When I played the song Combinatorics (see here) in my discrete math class the students liked it (for some reason the TA hated it, oh well) and the students asked 

Is that a real song

I asked them to clarify the question. They couldn't. To ask if it ever came out on a physical medium is a silly question- it didn't, but that doesn't matter. Did it make money? Unlikely, but that would be a rather crass criterion. There are lots of VERY GOOD songs on YouTube (whether Combinatorics is one of them is a question I leave to the reader) so the question Is that a Real Song is either ill-defined or crass. All that matters is: do you like it?


Wednesday, June 26, 2024

E versus EXP

Why do we have two complexity classes for exponential time, E and EXP?

First the definitions:

E is the set of problems computable in time \(2^{O(n)}\).

EXP is the set of problems computable in time \(2^{\mathrm{poly}(n)}\).

The nondeterministic variants NE and NEXP have similar definitions and properties.

By the time hierarchy theorem, E is strictly contained in EXP. But they have basically the same complexity:

  • There are polynomial-time many-one complete sets for EXP in E.
  • EXP is the closure of E under polynomial-time many-one reductions.
  • E is in NP if and only if NP = EXP. You can replace NP by PSPACE, BPP, BQP or any other class closed under poly-time many-one reductions.
Quiz: Show that PSPACE \(\neq\) E. Hint: The proof doesn't tell you which class might be larger.

EXP is the natural class for exponential time since it is closed under polynomial-time reductions and is known to contain PSPACE and all those other classes above. You have results like MIP = NEXP but not MIP = NE since MIP (interactive proofs with multiple provers) is closed under polynomial-time reductions. 

E = NE implies EXP = NEXP but not necessarily the other way around. P = NP implies both equalities, but again not the other way around. You get P = NP implies E = NE because \(\mathrm{poly}(2^n) = 2^{O(n)}\).
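
For concreteness, here is the standard padding argument for the first implication (a sketch, in my notation): Assume E = NE and take \(L\) in NEXP, accepted in nondeterministic time \(2^{n^c}\). Let \(L' = \{x01^{|x|^c-|x|-1} : x \in L\}\), so padded inputs have length \(m = |x|^c\) and \(L'\) is accepted in nondeterministic time \(2^{|x|^c} = 2^m\). Thus \(L'\) is in NE = E. To decide \(x \in L\), pad \(x\) and run the E algorithm, which takes time \(2^{O(|x|^c)}\), putting \(L\) in EXP. The argument doesn't reverse since unpadding would have to shrink the input exponentially. The \(\mathrm{poly}(2^n) = 2^{O(n)}\) equality above plays a role in other theorems related to E and NE: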

Impagliazzo-Wigderson: If E is not computable by circuits of subexponential size (\(2^{o(n)}\)) then P = BPP. A similar assumption for EXP would only put BPP in quasipolynomial time. 

Hartmanis-Immerman-Sewelson: show that there are sparse (polynomial-sized) sets in NP-P if and only if E \(\ne\) NE. Their paper leads to endless confusion because they state the result as EXPTIME \(\ne\) NEXPTIME without defining the terms, writing before the terminology was settled.

In fact I just fixed the Wikipedia article on EXPTIME which had the incorrect statement. Aargh!

Sunday, June 23, 2024

Soliciting open problems in honor of Luca T for my Open Problems Column

As you all know Luca Trevisan, a Giant in our field, passed away at the too-young age of 52. See Lance's post on Luca HERE. 

As the editor of the SIGACT News Open Problems Column I am putting together an open problems column in his memory.  (I did the same for Juris Hartmanis, see here, so you will have an idea of what I want.) 

If you want to submit an open problem, email me (gasarch@umd.edu) either 

a) Your IDEA for an open problem to see if it's in scope, or 

b) If you are sure it's in scope, Just Do It and send me the LaTeX code. Page limit: \(\le 2\) pages.

The problems should be either BY Luca or INSPIRED by Luca. 

I am thinking of open problems about derandomization and extractors; however, if Luca did some work in some other area that I am less familiar with (this is likely), that's fine; just cite that work. 



Thursday, June 20, 2024

Luca Trevisan (1971-2024)

Complexity theorist Luca Trevisan lost his battle with cancer yesterday in Milan at the age of 52. A terrible loss for our community, and our hearts go out to his family.  

The community will honor Trevisan's life and legacy 12:30 PM Pacific Time Monday at the TCS4All talk that he was scheduled to give at the STOC conference in Vancouver. Register to watch the talk online.

Luca was one of the great minds of our field, an expert on randomness and pseudorandomness. He was the first computer science member of Italy's National Academy of Science. He taught at Columbia, Berkeley and Stanford until 2019, when he moved back to his home country to join Bocconi University in Milan. 

My favorite result from Trevisan is his connection between extractors and pseudorandom generators, surprising since the former work on arbitrary distributions and the latter fool computationally bounded randomized algorithms. This paper laid the framework for better bounds for both extractors and generators. I had one paper with Trevisan, where, with Rahul Santhanam, we showed time hierarchies for almost all natural semantic classes with a small amount of advice.

Trevisan had his own blog In Theory, full of technical course notes and wonderful stories. Bill has two guest posts on the polynomial van der Waerden theorem in Luca's blog, following up on Luca's posts on Szemerédi's theorem.

A few years ago Trevisan started the BEATCS theory blogs column to highlight theory blogs and bloggers. Bill and I were both highlighted in this column. 

Trevisan is one of the first theoretical computer scientists to come out as openly gay and many followed. We've come a long way from Turing.

More remembrances from Boaz and Scott.

In 2014 Luca Trevisan returned to Berkeley and joined the Simons Institute as its first permanent senior scientist. Christos Papadimitriou interviewed Luca for the occasion. 

Wednesday, June 19, 2024

Rethinking Heuristica

I've argued that more and more we seem to live in an Optiland, a computational utopia where, through recent developments in optimization and learning, we can solve the NP-problems that come up in practice and yet cryptography remains unscathed. We seem to simultaneously live in Heuristica and Cryptomania, two of Russell Impagliazzo's Five Worlds.

But we don't. Impagliazzo defined Heuristica as the world where P \(\ne\) NP but we can solve NP-complete problems easily on average. Since cryptography requires problems that are hard on average, if we are in Cryptomania we can't be in Heuristica. 

That definition made sense in 1995 but it didn't envision a world where we can solve many NP-problems in practice but not easily on average. Despite its name, Heuristica as defined does not capture solving real-world problems. To be fair, Impagliazzo entitled his paper "A Personal View of Average-Case Complexity," not "A Treatise on Solving Real World Problems". 

So we need to rethink Heuristica or create a new world (Practica?) that better captures real-world problems. How would we do so? 

When I talked with the SAT Solving researchers at Simons last spring, they suggested that problems designed to be hard are the ones that are hard. But how can we mathematically capture that? Maybe it's connected to learning theory and TC0 (low-depth circuits with threshold gates). Maybe it's connected to constraint-satisfaction problems. Maybe it's connected to time-bounded Kolmogorov complexity. 

As complexity theorists this is something we should think about. As we study the mathematics of efficient computation, we should develop and continue to revise models that attempt to capture what kinds of problems we can solve in practice.

But for now I don't have the answers, I don't even know the right questions.

Sunday, June 16, 2024

Should Prover and Verifier have been Pat and Vanna?

LANCE: I had my first Quanta article published! I explore computation, complexity, randomness, learning and feeling the machine.

BILL: Feels to me like a mashup of old blog posts. Changing topics, I told Darling that you used Pat for Prover and Vanna for Verifier in a 1987 conference talk but those terms did not catch on. She was shocked!

LANCE: I'm shocked you two have been married 32 years.

BILL: We hope to get to 64. However, she thought those were really good names for the concept (she has a master's degree in Computer Science so she knows stuff) and wondered why they didn't catch on.

LANCE: I think it's frowned upon to use a cultural icon tied to one country. There are Europeans who have no idea who Pat and Vanna are. For that matter, there are some Americans, particularly academics, who have no idea who Pat and Vanna are. And who would remember either of them once they stopped hosting the show? And who thought that would be 2024?

BILL: Who do papers on Interactive Proof Systems use? Of course Author-Merlin games. Is the legend of King Author so well known (or at least is it well known that there IS a legend) that it's okay to use those names? I think yes. 

LANCE: Did you really think his name is Author? I command thee to see Excalibur and learn the legend for yourself. Excalibur also being the name of a Computer Othello program I wrote in the '80s.

BILL: All right, Arthur. For one thing, we, or at least everyone but me, still know who they are many years later, whereas Pat and Vanna will be lost to history. Hey, Arthur and Merlin even got a science cartoon for their role in interactive proofs.

LANCE: Did Arthur and Merlin ever host a game show? I used Victor and Pulu in my thesis. I've also written papers where we use Prover and Verifier.

BILL: Pulu? Anyway, Prover and Verifier are boring!

LANCE: Sometimes boring works. We should only use cultural icons that span many cultures and won't be forgotten in 200 years. Just to be on the safe side, use cultural icons that are over 200 years old. 

BILL: Can you think of any cultural icon that has been used in Math or Computer Science and the name did catch on?

LANCE: The Monty Hall Problem.

BILL: I suspect there are many people who know who Monty Hall is only because of the paradox. And that is a paradox. Here is a name that didn't catch on: Sheldon's Conjecture was named after Sheldon Cooper from The Big Bang Theory. However, since it was solved, the name won't catch on, which is probably just as well. 

LANCE: How does the Chicken McNugget Theorem fit into this?

BILL: I don't know but it's making me hungry. Let's eat!

Thursday, June 13, 2024

Favorite Theorems: Algebraic Circuits

May Edition

Most of my favorite theorems tell us something new about the world of complexity. But let's not forget the greatest technical challenges in our area: proving separations that are "obviously" true. Here's the most exciting such result from the past decade.  

Superpolynomial Lower Bounds Against Low-Depth Algebraic Circuits
Nutan Limaye, Srikanth Srinivasan and Sébastien Tavenas

In this model, the inputs are variables and constants, and the goal is to create a specific formal polynomial using the gate operations of plus and times. Limaye, Srinivasan and Tavenas find an explicit polynomial such that no polynomial-size constant-depth algebraic circuit can compute it. 

How explicit? Here it is: take \(d\) \(n \times n\) matrices, multiply them together and output the top-left entry of the product. The \(N=dn^2\) variables are the entries of the matrices. The top-left entry is a polynomial in the inputs that can be computed by a simple polynomial-size circuit that just does the iterated multiplication, though not in constant depth. The paper shows that for any unbounded \(d\) that is \(o(\log n)\), there is no constant-depth polynomial-size algebraic circuit.
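
Written out, with the standard name for this polynomial: \(\mathrm{IMM}_{n,d}(X_1,\dots,X_d) = (X_1 X_2 \cdots X_d)_{1,1}\), where each \(X_i\) is an \(n \times n\) matrix of distinct variables, a polynomial of degree \(d\) in the \(N = dn^2\) variables.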

The authors first prove a lower bound for set multilinear circuits and then extend to more general algebraic circuits.

Sunday, June 09, 2024

CFG-Kolm-complexity of singleton sets with Lance and Bill

For this post all Context Free Grammars (henceforth CFGs) are assumed to be in Chomsky Normal Form. The size of a CFG \(G\)  is the number of rules. We denote this by \(|G|\).

BILL: In my automata theory class I want to do some lower bounds on the size of CFGs. It is easy to show that if \(w=0^n\) then there is a CFG \(G\) such that \(L(G)=\{w\}\) and \(|G|=O(\log n)\). I showed that if \(w\) is a Kolmogorov random string of length \(n\), and \(G\) is a CFG such that \(L(G)=\{w\}\), then \(|G|=\Omega(n/\log n)\), though this is surely known. So here is my question: Is there a natural such \(w\)? I will blog about that and make an open problems column out of it.
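
(A sketch of the \(O(\log n)\) upper bound, in my notation: for \(n = 2^k\), the grammar with start symbol \(A_k\), rules \(A_i \to A_{i-1}A_{i-1}\) for \(1 \le i \le k\), and \(A_0 \to 0\) generates exactly \(\{0^{2^k}\}\) with \(k+1\) rules, and it is in Chomsky Normal Form; general \(n\) follows by writing \(n\) in binary.)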

LANCE: Kolmogorov strings are natural!

BILL: Oh yeah? If that were true then spell check would not flag Kolmogorov as being misspelled. So there!

LANCE: Can you ask a more rigorous question?

BILL: Okay. We can view the Kolm-result as saying that there is a function \(f\) from \(1^*\) to \(\{0,1\}^*\) such that \(f(1^n)\) is a string of length \(n\) for which any CFG for \(\{f(1^n)\}\) is large. But the function \(f\) is not computable!

LANCE: That shouldn't bother you. You wrote an entire book about how many queries to HALT and other incomputable sets are needed to solve certain problems (see here). Also, now that you know there are such strings, you can simply search for such a \(w\) by testing all small CFGs. So Computable!

BILL: Still not natural. And what is the complexity? Exponential? Poly?

WE DROP THE TOPIC AND PICK IT UP AGAIN A FEW TIMES. WE (meaning mostly Lance) HAVE SOME BRILLIANT INSIGHTS THAT LEAD TO THE FOLLOWING RESULTS:  

1) For every \(w \in \{0,1\}^n\) there is a CFG \(G\) with \(L(G)=\{w\}\) and \(|G|=O(n/\log n)\).

2) If \(w\) is a de Bruijn sequence of length \(n\) and order \(k=\log n\) (we assume \(n\) is a power of 2), then every CFG \(G\) with \(L(G)=\{w\}\) has \(|G|=\Omega(n/\log n)\). There is a known algorithm that, given \(1^n\), produces a de Bruijn sequence of length \(n\) and order \(k=\log n\) in time quasilinear in \(n\). 
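
(For reference, the definition we use: a binary de Bruijn sequence of order \(k\) has length \(2^k\) and, read cyclically, contains every \(k\)-bit string exactly once. For \(k = 2\), the string 0011 works: its cyclic windows are 00, 01, 11, 10.)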

BILL: That bums me out for two contradictory reasons:

a) The problem is NOT solved since de Bruijn is flagged by spellcheck, so the sequences are not natural.

b) The problems IS solved, so I can't use it for an open problems column. 

LANCE: Do not despair!

a) De Bruijn sequences have a Wikipedia page and therefore are natural. 

b) We can post on ArXiv. 

WE DID, and a day later Markus Lohrey emailed us that, aside from the de Bruijn result, the results were already known under a different terminology, word chains. See his survey here. Then the next day, Giovanni Pighizzini emailed us that he had previously published lower bounds for de Bruijn sequences. We have since withdrawn the paper. We revised it by putting in references and history but will not put it back on arXiv. The revised paper is here.

LANCE: Bill, are you bummed out? Why did we even write the paper anyway?

BILL: Not at all! My original goal was pedagogical, and the paper we have can still be taught in automata theory next spring. PLUS, we got invited to submit to Advances in AI and ML with a 10% discount on publication fees (see here). Since we are used to getting a 100% discount on publication fees we won't be submitting, but it was nice to be asked. 

LANCE: Yeah, nice to be asked to be parted from my money. At least I learned about word chains.

Thursday, June 06, 2024

The Godzilla Moment


On the plane earlier this week I got around to watching the Academy Award-winning movie Godzilla Minus One, one of the best monster movies I've seen. Set in Japan during the aftermath of World War II, it has a pretty emotional substory about a man dealing with his demons from the war. I had to hide my tears from the nearby passengers.

It wasn't the story that earned the movie an Oscar. Godzilla Minus One won the award for Best Visual Effects. I found nothing wrong with the effects, but they didn't excel beyond what you see in any typical movie of the genre.

In 2008, I lamented that special effects in movies had improved so much that we had lost the amazement we felt in the 70s. Perhaps I spoke too soon, as James Cameron's Avatar came out the following year and did amaze. However, special effects have since become a commodity, something filmmakers must include because audiences expect it but rarely do you go to a movie for the effects. In the not-too-distant future, special effects will be automated with AI, becoming just another plugin for Final Cut Pro. 

It's time to retire the visual effects award, especially with new awards coming to the Oscars.

I wrote that 2008 column to mirror the lack of enthusiasm about computing at the time, which also felt like a commodity. Now we're at an exciting time in computing, particularly with the advances in artificial intelligence. But we should be wary: once (if?) AI gets consistently good, it may feel like a commodity again and once again we become victims of our own success. 

Monday, June 03, 2024

FOCS 2024 Test of Time Award. Call for nominations and my opinion

 The call for nominations for the Test of Time Award at FOCS 2024 has been posted here.

Eligibility and past winners are here.


Points

1) It is good to have an award that waits until the dust settles and we can see what was really important.

2) The winners are all excellent papers that really have passed the test of time. 

3) And of course it is really important that they appeared in FOCS. NO IT ISN'T! See next point.

4) I would prefer a test-of-time award that is independent of WHERE the paper first appeared. Tying it to FOCS or STOC or FOCS-or-STOC seems bad. I would opt for appearing in ANY journal or conference. Appearing in a journal of low quality is not a problem since this award should be for papers that are judged on their merit and influence, and not on their pedigree.

5) My proposal to allow any journal or conference may be impractical because some organization has to give it out, and if that organization is IEEE or ACM they will restrict to their own publications. 


6) STOC also has a test of time award, see here.

7) I tried to find out if the SODA conference has a test of time award but mostly got hits about the Baking Soda Test for determining if a pregnant woman is going to have a boy or girl. It actually works 50% of the time! See here.

8) I was not able to find any other test-of-time award for Comp Sci THEORY. 

9) I DID find test of time awards for

SIGCSE- Comp Sci Education, here. Must be for a paper published in a conference co-sponsored by SIGCSE or in an ACM journal.  So an excellent paper published elsewhere wouldn't count. 

SC2- High Performance Computing, see here. Paper must have been published in the SC conference. 

ACM CCS - Security, Audit(?) and Control, see here. I think these must appear in the CCS conference. 

Wednesday, May 29, 2024

Double Digit Delights

It started with a post from Fermat's Library.

My immediate reaction was: why not list them all? Giving the smallest such number suggests there are an infinite number of them. But the value of a d-digit number grows exponentially in d, while the 2-digit sum grows only quadratically, so there can be only finitely many. 

Let's be a little more formal. Let's restrict ourselves to positive integers with no leading zeros. The 2-digit sum of x is the sum of all 2-digit numbers formed by concatenating the ith digit of x and the jth digit of x for all i,j with i\(\neq\)j. The 2-digit sum of 132 is 13+12+31+32+21+23 = 132. The 2-digit sum of 121 is 12+11+21+21+11+12 = 88. A number x is 2-idempotent if the 2-digit sum of x is x.

Let's look at the possible lengths of 2-idempotent numbers.

For 1-digit numbers the 2-digit sum is zero.

For 2-digit numbers the 2-digit sum is that number plus another positive number so never equal.

For 5-digit numbers, the 2-digit sum is bounded by 20*99 = 1980 < 10000. So there are no 2-idempotent numbers with 5-digits. More than 5 digits can be discarded similarly. 

For 4-digit numbers, the 2-digit sum is at most 12*99 = 1188, so a 2-idempotent number must begin with a one. That bounds the sum by 3*19+3*91+6*99 = 924 < 1000. So there are no 2-idempotent numbers of 4 digits.

So every 2-idempotent must have 3 digits. I wrote up a quick Python program and the only three 2-idempotents are 132, 264 and 396. Note that 264 is 2*132 and 396 is 3*132. That makes sense: if you double every digit and don't generate carries, every two-digit part of the sum also doubles.

Biscuit asks if there is some mathematical argument that avoids a computer or manual search. You can cut down the search space: every 3-digit 2-idempotent is bounded by 6*99 = 594 and must be even since every digit appears in the ones place twice. But I don't know how to avoid the search completely.

Two more Python searches: 35964 is the only 3-idempotent number. If you allow leading zeros then 0594 is 2-idempotent. There may (or may not) be infinitely many such numbers.
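
My quick program isn't shown above, so here is a minimal reconstruction of these searches (a sketch, not the original code; the function name k_digit_sum and the search bounds are mine, justified in the comments):

from itertools import permutations

def k_digit_sum(x: int, k: int) -> int:
    # Sum, over all ordered k-tuples of distinct digit positions of x,
    # of the k-digit number those digits spell.
    # Example: k=2, x=132 gives 13+12+31+32+21+23 = 132.
    digits = list(str(x))
    return sum(int("".join(tup)) for tup in permutations(digits, k))

# Every 2-idempotent has exactly 3 digits (argued above).
print([x for x in range(100, 1000) if k_digit_sum(x, 2) == x])   # [132, 264, 396]

# A 3-idempotent has at most 6 digits: a d-digit number's 3-digit sum is at
# most d(d-1)(d-2)*999, and for d = 7 that is 209790 < 10^6, with the gap
# growing from there. Exhaustive but slow in pure Python (a few minutes).
print([x for x in range(100, 10**6) if k_digit_sum(x, 3) == x])  # [35964]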

Sunday, May 26, 2024

National BBQ day vs World Quantum Day

After my post on different holiDAYS (here), such as Talk Like a Pirate Day and Raegan Revor Day, two other Days were brought to my attention:

1) Lance emailed me about National BBQ day, which is May 16. See here

2) While at a Quantum Computing Prelim I saw a poster for World Quantum Day, which is April 14. See here.

The obvious question: Which of these days is better known? I Googled them again, but this time noted the number of hits. 

I found out that Google seems to have removed that feature!

When using Google on both Firefox and Chrome, I did not get the number of hits. 

Some points about this:

1) Is there a way to turn the number-of-hits feature on?

2) Bing DOES give number of hits.

World Quantum Day: 899,000 hits

National BBQ Day: 418,000 hits

To get a baseline I binged Pi Day. This did not reveal the number of hits. An unscientific set of Bing searches seems to indicate that if the number of hits is large then they are not shown.

Is hits-on-Bing a good measure of popularity? I do not know.

3) Duck Duck Go does not give number of hits. This might be part of their privacy policy.

4) I also noticed a while back that YouTube no longer shows DISLIKES, just likes. That may explain why my Muffin Math song on YouTube (see here), with Lance on the piano, has 0 dislikes. It does not explain why it got 19 likes.

5) Google said that the number-of-hits is really an approximation and one should not take it too seriously. 

YouTube said (not in these words) that the haters caused dislikes to be far more numerous than they should be.

On the one hand, I want to know those numbers. On the other hand, I think Google and YouTube are right about the numbers not being that accurate. Even more so for Bing, which is used less and so (I assume) has less data to work from.

6) Back to my question: What is better known, National BBQ Day or World Quantum Day? The nation and the world may never know. 

7) All of the above is speculation.