Thursday, August 29, 2024

My Quantum Summer

Rendering of PsiQuantum's facility in Chicago

I wasn't looking for quantum this summer but it found me. At various events I ran into some of the most recognized names in quantum computing: Peter Shor, Charlie Bennett, Gilles Brassard and Scott Aaronson (twice), Harry Buhrman and Ronald de Wolf.

I was invited to Amsterdam for a goodbye event for Harry Buhrman. Harry co-founded and co-led the CWI quantum center QuSoft and has now moved to London to join Quantinuum as chief scientist. I was asked to give a talk on Harry's classical complexity work before he joined the dark side. Ronald and Gilles gave talks after mine.

On the way to Amsterdam I spent a few days visiting Rahul Santhanam in Oxford. Scott Aaronson and Dana Moshkovitz showed up with kids in tow. Scott gave a talk on AI not quantum in Oxford. I would see Scott again at the Complexity conference in Michigan.

Peter Shor and Charlie Bennett both attended the Levin Event I mentioned last week.

I talked to all of them about the future of quantum computing. Even though I'm the quantum skeptic in the crowd, we don't have that much disagreement. Everyone agreed we haven't yet achieved practical applications of quantum computing and that the power of quantum computing is often overstated, especially in what it can achieve for general search and optimization problems. There is some disagreement on when we'll get large-scale quantum computers, say enough to factor large numbers. Scott and Harry would say growth will come quickly, like we've seen in AI; others thought it would be more gradual. Meanwhile, machine learning continues to solve problems we were waiting for quantum machines to attack.

My city of Chicago had a big quantum announcement: the Illinois Quantum and Microelectronics Park, to be built on an old steelworks site on the Southeast Side of the city with federal, state and local funds as well as a big investment from PsiQuantum. I have my doubts as to whether this will lead to a practical quantum machine, but no doubt all this investment in quantum will bring more money and talent to the area, and we'll get a much better scientific and technological understanding of quantum.

PsiQuantum's website claims they are "Building the world's first useful quantum computer". PsiQuantum is using photonic qubits, based on particles of light. Harry's company Quantinuum is using trapped ions. IBM and Google are trying superconducting qubits. Microsoft is using topological qubits and Intel silicon qubits (naturally). Who will succeed? They all might. None of them? Time will tell, though it might be a lot of time.

Monday, August 26, 2024

What's worse for a company: being hacked or having technical difficulties? I would have thought being hacked but...

At the Trump-Musk interview:

1) There were technical difficulties which caused it to start late and have some other problems.

2) Musk and (I think) Trump claimed that this was a DDOS attack because people were trying to prevent Donald from having his say (listen to the beginning of the interview).

3) Experts have said it was not a DDOS attack, or any kind of attack.

(For the interview see here. If the link does not work either blame a DDOS hack or technical difficulties.)

When a company is hacked, do they admit it? This is hard to tell, since if it never comes out, how do you know?

Would a company rather admit to the public they had tech difficulties OR admit to the public they were attacked? I would think they would rather admit to the public they had tech difficulties. 

I suspect that X had tech difficulties because LOTS of people wanted to hear Trump.

Faced with this, what are Elon's options:


1) Claim that this was caused because so many people wanted to hear Trump that the system could not handle it. This would make Trump look good, not make Elon look too bad, and is probably true so it won't be discovered later that he lied.

2) Claim that his system was attacked. This allows Trump to claim his enemies are out to get him, thus pushing the narrative that he is a victim. But Musk looks worse than if the system was just overloaded. Plus it's false, which will (and did) come out. However, there were absolutely no consequences to lying.

I think it's unusual for a company to lie by claiming they were hacked when they weren't. If you know of any other examples then please comment.





Wednesday, August 21, 2024

The Levin Translation

Earlier this summer I attended a Celebration for Leonid Levin who recently turned 75. To prepare my talk I wanted to go back to Levin's 1971 two-page Russian masterpiece that covered both his independent discovery of NP-completeness and Universal Search. Boris Trakhtenbrot's wonderful Survey of Russian Approaches to Perebor has a messy translation of the Levin paper in the appendix. Could I use AI to get a cleaner version? TLDR Here's the PDF.

Let me first take a detour to when I was a graduate student in the 1980s. I went to meet my advisor Mike Sipser and found him with a couple of Russian students and postdocs carefully translating the latest paper from Alexander Razborov, then a young researcher at the Steklov Mathematical Institute in Moscow. Razborov had sent a physical copy of his latest paper and they were translating it word by word--would this be the paper that showed NP sets don't have polynomial-size circuits and P ≠ NP? Spoiler alert: It wasn't, but the process remains etched in my memory four decades later.

But now we live in the 2020s and I turned to my friend Claude for help. I gave Claude the Russian PDF and asked Translate into English

Certainly, I'll translate the Russian text into English for you. Here's the translation:

PROBLEMS OF INFORMATION TRANSMISSION

Volume IX 1973 Issue 3

BRIEF COMMUNICATIONS

UDC 519.14

UNIVERSAL SEQUENTIAL SEARCH PROBLEMS

L. A. Levin

The article examines several well-known mass problems of the "sequential search type" and proves that these problems can only be solved in the time it takes to solve any problem of the specified type in general...

So far so good. Summarize the article.  

This article by L. A. Levin, titled "Universal Sequential Search Problems," discusses a class of computational problems known as "sequential search problems" or "enumeration problems." Here's a summary of the key points:

1. The article addresses well-known computational problems that require exhaustive search (enumeration) to solve.

2. While many classical problems have been proven to be algorithmically unsolvable, there exists a class of problems that are solvable but require an impractically large amount of computation time.

3. These problems, called "sequential search problems," include tasks like minimizing Boolean functions, searching for proofs of limited length, and determining graph isomorphism.

4. While these problems can be solved by simple algorithms that enumerate all possibilities, such algorithms require exponential time, leading mathematicians to believe that simpler algorithms are impossible.

5. The article introduces the concept of "universal sequential search problems" - problems to which all other sequential search problems can be reduced.

6. Levin presents six specific problems (numbered 1-6 in the text) and claims they are universal sequential search problems.

7. The main theorem (Theorem 1) states that if there exists any sequential search problem that cannot be solved in less than f(n) time (for some function f), then the six presented problems also cannot be solved in less than f(n) time.

8. Another important result (Theorem 2) states that for any sequential search problem, there exists an algorithm that solves it in optimal time, up to a constant factor and an additive term comparable to the input length.

9. The article suggests that this approach can be extended to most interesting sequential search problems.

10. While the article doesn't solve the fundamental question of whether these problems truly require exponential time, it establishes relationships between different problems in this class.

This work is significant in computational complexity theory, as it relates various hard problems to each other and suggests a hierarchy of difficulty among computational tasks.

The summary is almost as long as the article.

Then I noticed the term "mass problems" that occurs throughout the paper. In the translation the word "mass" seems off. What are some alternatives?

You're right to notice that the word "mass" in this context might seem unusual or potentially confusing. In the original Russian text, the term used is likely "массовые задачи" (massovye zadachi), which is a term commonly used in Russian mathematical literature. However, in English, we typically use different terminology. Here are some alternative translations that might be more appropriate in this context:

1. "General problems" or "general computational problems"

2. "Large-scale problems"

3. "Broad classes of problems"

4. "Universal problems"

5. "Widely applicable problems"

6. "Common computational problems"

7. "Algorithmic problems"

8. "Computational class problems"

9. "Generic problems"

10. "Generalized problems"

In the context of this paper, the author is likely referring to problems that are not specific to a particular instance but represent a whole class or type of problem. The term "universal" is actually used later in the paper, which aligns well with this interpretation.

So, a more idiomatic translation might replace phrases like "classical mass problems" with "classical universal problems" or "broad classes of classical problems".

This choice of wording would better convey the idea that these are problems that apply broadly across many instances or types of computational tasks, rather than being specific to a particular case.

So "mass problems" is mass as in mass audience not mass as in physics. What if we left out the word "mass" completely.

You're right, omitting the word "mass" entirely could indeed improve the clarity and flow of the translation without losing the essential meaning. Let's consider how some key sentences would read without it:

1. Instead of:

"After clarifying the concept of an algorithm, the algorithmic unsolvability of a number of classical mass problems was proven..."

We could have:

"After clarifying the concept of an algorithm, the algorithmic unsolvability of a number of classical problems was proven..."

2. Instead of:

"However, if we assume that there exists some (even artificially constructed) mass problem of the sequential search type that cannot be solved by simple algorithms..."

We could have:

"However, if we assume that there exists some (even artificially constructed) problem of the sequential search type that cannot be solved by simple algorithms..."

3. And for the theorem:

"If there exists any problem of sequential search (quasi-sequential search) type that cannot be solved in time less than f(n) for argument length comparable to n, then problems 1-6 also have this property."

This approach works well because:

1. The context already implies that we're dealing with broad classes of problems, not specific instances.

2. The term "sequential search problems" itself conveys the idea of a general class of problems.

3. It makes the text more concise and easier to read in English.

4. It avoids potential confusion that might arise from an unusual use of the word "mass" in this context.

So yes, simply omitting "mass" throughout the translation would likely result in a clearer, more idiomatic English text while preserving the original meaning of the article.

Give me the full translation without the mass, which Claude dutifully did. Why stick with a text version? Format this nicely in latex.

Certainly! Here's the article formatted in LaTeX.

This LaTeX code will produce a nicely formatted version of the paper. It includes:

  1. Proper title, author, and abstract
  2. Sections for introduction, definitions and problems, and main results
  3. Theorem, lemma, and definition environments
  4. Proper mathematical formatting for equations and logical statements
  5. Enumerated list for the six problems
  6. Bibliography section

To compile this LaTeX document, you would need to save it as a .tex file and run it through a LaTeX compiler. The output will be a professionally formatted academic paper.

So I did, and here is the final PDF.

Russian papers at the time famously had no proofs so for fun I tried Can you fill in the proofs of the theorems

Certainly! I'll expand the LaTeX document to include proofs for the theorems and lemma. Please note that since the original paper doesn't provide detailed proofs, I'll have to construct plausible proofs based on the information given and general knowledge of the field. These proofs should be considered as educated guesses rather than the actual proofs from Levin's original work.

Here's the updated LaTeX code with added proofs.

This expanded version includes proof sketches for Theorem 1, Lemma 1, and Theorem 2. These proofs are constructed based on the information given in the paper and general principles of computational complexity theory. They should give a good idea of the reasoning behind the theorems, but may not exactly match Levin's original proofs.

The main ideas in these proofs are:

  1. For Theorem 1: Using the universality of problems 1-6 to show that they must be at least as hard as any other sequential search problem.
  2. For Lemma 1: Demonstrating how an arbitrary sequential search problem can be reduced to one of the listed problems (in this case, Problem 2).
  3. For Theorem 2: Constructing a universal algorithm that simulates all possible algorithms in parallel, achieving optimal time complexity up to a constant factor.

These proofs maintain the paper's focus on the relationships between different types of sequential search problems and the idea of universal sequential search problems.

You can judge the proofs on your own. 
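Speaking of Theorem 2, its universal algorithm is what's now called Levin's universal search. Here's a minimal Python sketch of the dovetailing idea, under my own simplifying assumptions (each "program" is a callable we can run for a step budget, and candidate solutions can be verified quickly); this is an illustration, not Levin's exact construction:

```python
from itertools import count

def universal_search(programs, verify, max_rounds=25):
    """Levin-style universal search (sketch).

    programs: list of callables; programs[i](steps) simulates program i
              for `steps` steps, returning a candidate answer or None.
    verify:   a fast check of whether a candidate is a correct solution.

    In round r, program i gets a budget of 2**(r - i) steps, so the
    total time spent on program i is within roughly a factor of 2**i
    of running it alone -- optimal up to a constant factor.
    """
    for r in count():
        if r >= max_rounds:
            return None  # cap for the demo; the real search runs forever
        for i, prog in enumerate(programs[:r + 1]):
            candidate = prog(2 ** (r - i))
            if candidate is not None and verify(candidate):
                return candidate
```

For instance, with `verify = lambda c: 91 % c == 0` and a list of toy "programs," the search returns a verified factor of 91 no matter which program in the list eventually produces it.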

Sunday, August 18, 2024

Request open problems in honor of Luca Trevisan

 Request for Open Problems In Memory of Luca Trevisan


Luca Trevisan passed away on June 19, 2024 at the age of 52, of cancer.

I am putting together an open problems column of open problems in his honor.

If you are interested in contributing then please email me a document with the following specifications.

1) It can be as short as half-a-page or as long as 2 pages. One way to make it short is to give many references or pointers to papers with more information.

2) It should be about an open problem that is either by Luca or inspired by Luca or a problem you think Luca would care about.

3) In LaTeX. Keep it simple as I will be cutting-and-pasting all of these into one column.

Deadline is Oct 1, 2024.

 



Wednesday, August 14, 2024

Favorite Theorems: Random Oracles


This month's favorite theorem is a circuit result that implies the polynomial-time hierarchy is infinite relative to a random oracle, answering an open question that goes back to the 80s.

Johan Håstad, Benjamin Rossman, Rocco A. Servedio and Li-Yang Tan

The authors show how to separate depth d from depth d+1 circuits for random inputs. As a corollary, the polynomial hierarchy is infinite with a random oracle, which means that if we choose an oracle R at random, with probability one, the k+1-st level of the polynomial-time hierarchy relative to R is different than the k-th level relative to R. 

Why should we care about random oracles? By the Kolmogorov zero-one law, every complexity statement holds with probability zero or probability one relative to a random oracle, so for every statement either it or its negation holds with probability one. And since the countable intersection of measure-one sets is measure one, all the complexity statements true relative to a random oracle hold simultaneously relative to a single random oracle, a kind of consistent world. With a random oracle, we have full derandomization, like BPP = P, AM = NP and PH in \(\oplus\mathrm{P}\). We have separations like P ≠ UP ≠ NP. We have results like NP doesn't have measure zero and SAT solutions can be found with non-adaptive queries to an NP oracle. And now we have that the PH is infinite simultaneously with all these other results.
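Spelled out, the measure argument behind this "consistent world" is just two standard facts:

\[
\Pr_R[\,S \text{ holds relative to } R\,] \in \{0,1\}
\qquad \text{(Kolmogorov zero-one law)},
\]

and, for a countable family of statements \(S_1, S_2, \ldots\) each holding with probability one,

\[
\Pr_R\!\left[\,\bigwedge_{i\ge 1} S_i \text{ holds relative to } R\,\right] = 1.
\]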

More details on this paper from a post I wrote back in 2015. 

Saturday, August 10, 2024

The combinatorics of Game Shows

 (Inspired by Pat Sajak stepping down from Wheel of Fortune)

How many different game shows are there? Many. How many could there be?

1) Based on knowledge or something else? Jeopardy is knowledge. Wheel and Deal-No Deal are something else. The oddest are Family Feud and America Says where you have to guess what people said, which might not be what's true. Reminds me of the SNL sketch about a fictional game show Common Knowledge. Oddly enough there now is a REAL game show of that name, but it actually wants true answers to questions.

2) Do the losers get anything more than some minimum? On Wheel they do, but I do not know of any other show where they do. On People Puzzler they get a subscription to People Magazine!

3) Is how much the winners win based on how well they do in the game, or is it a flat amount? On Jeop and Wheel it's based on how they do in the game. On America Says, Switch, and many others the winner does something else to either win (say) $10,000 or just $1,000.

4) MONEY: I want to say can you win A LOT or NOT SO MUCH. I'll set the cutoff at $20,000. On Jeop or Wheel you CAN get > $20,000. As noted in the above item, there are shows where at the end you can win $10,000 but that's it.

5) Do winners get to come back the next day? Jeop and Masterminds do that but I do not know of any other show that does.

6) Are celebrities involved? This used to be more common, but seems to have faded. Hollywood Squares was an example. Masterminds is hard to classify in this regard since the celebs are people who know a lot of stuff (they are called Masterminds) and their main claim to fame might be being on Masterminds. Or on some other quiz show. Ken Jennings was a Mastermind in the first season.

7) If it's question-based, then is it multiple choice or fill-in-the-blank? Some do both. Common Knowledge has the first round multiple choice, later rounds fill-in-the-blank, and the finale is multiple choice.

8) Do you win money or merchandise? There are many combinations of the two. 

9) Are the players teams of 3 (Common Knowledge), individuals (Jeopardy), just one person or team (Deal-No Deal and Cash Cab), or something else?

10) Is it a competition, so people vs people (most of the quiz shows), or is it one person answering questions (Who Wants to Be a Millionaire, Cash Cab)? I've noticed that on Cash Cab if you are close to the answer they give it to you, which I don't think they would do on Jeop. This could be because there is no competitor, so being flexible is not unfair to someone else. Millionaire can't do this since it's strictly multiple choice.

There are other parameters but I will stop here. I leave it to the reader to more fully expand these categories and find out how many game shows there can be.
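For fun, the categories above multiply out. A quick Python count, with rough option tallies per axis (the tallies are my own reading of the list, not anything official):

```python
from math import prod

# Rough number of options per axis, one entry per numbered question
# above (these counts are my own guesses, not definitive):
axes = {
    "knowledge or something else": 2,
    "losers get more than a minimum": 2,
    "winnings scale with play or flat": 2,
    "big money (> $20,000) or not": 2,
    "winners return the next day": 2,
    "celebrities involved": 2,
    "multiple choice / fill-in / both": 3,
    "money, merchandise, or both": 3,
    "teams of 3 / individuals / one player or team / other": 4,
    "competitive or solo": 2,
}

combinations = prod(axes.values())
print(combinations)  # 2**7 * 3 * 3 * 4 = 4608
```

So even this partial list of parameters already allows for over four thousand distinct game show formats.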





Sunday, August 04, 2024

Determining which math problems are hard is a hard problem

 I was wondering what the hardest math problems were, and how to define it. So I googled 

Hardest Math Problems

The first hit is here. The 10 problems given there bring up the question of what is meant by hard?

I do not think the order the problems were given in is an indication of hardness. Then again, they seem to implicitly use many definitions of hardness.

1) The 4-color problem. It required a computer to solve and had lots of cases. But even if that is why it's considered hard, the solution to the Kepler Conjecture (see here) is harder. And, of course, it's possible that either of these may get simpler proofs (the 4-color theorem already has, though it still needs a computer).

2) Fermat's Last Theorem. Open a long time, used lots of hard math, so that makes sense.

3) The Monty Hall Paradox. Really? If hard means confusing to most people and even some mathematicians then yes, it's hard. But on a list of the 10 hardest math problems of all time? I think not.

4) The Traveling Salesperson problem. If they mean resolving P vs NP then yes, it's hard. If they mean finding a poly time algorithm for TSP then it may be impossible.

5) The Twin Primes Conjecture. Yes that one is hard. Open a long time and the Sieve method is known to NOT be able to solve it. There is a song about it here.

6) The Poincare Conjecture. Yes, that was hard before it was solved. It's still hard. This is another issue with the list: they mix together SOLVED and UNSOLVED problems.

7) The Goldbach Conjecture. Yes, that one is hard. 

8) The Riemann hypothesis is the only problem on both Hilbert's 23 problems in 1900 and on the Clay prize list. Respect! There is a song about it here.

9) The Collatz conjecture. Hard, but this might not be a good problem. Fermat was a good problem since working on it led to math of interest even before it was solved. Riemann is a good problem since we really want to know it. Collatz has not led to that much math of interest and the final result is not that interesting.

10) Navier-Stokes and Smoothness. Hard! Note that it's a Millennium problem.
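The Collatz map from item 9 is simple to state in code; the conjecture is exactly the claim that this loop always halts:

```python
def collatz_steps(n):
    """Number of steps for n to reach 1 under the Collatz map.

    The conjecture says this loop halts for every positive integer n;
    that is what nobody can prove.
    """
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 27 famously takes 111 steps to reach 1
```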

NOTES

1) TSP, Poincare, Riemann, Navier-Stokes are all Millennium problems. While that's fine, it also means that some Millennium problems were not included: The Hodge Conjecture, The Birch and Swinnerton-Dyer Conjecture, and Yang-Mills and the Mass Gap (that's one problem: YM and the Mass Gap). These three would be hard to explain to a layperson. Yang-Mills and the Mass Gap is a good name for a rock band.

2) Four have been solved (4-color, FLT, Monty Hall (which was never open), Poincare) and six have not been solved (TSP, Twin primes, Goldbach, RH, Collatz, Navier-Stokes)

3) I have also asked the web for the longest amount of time between a problem being posed and solved. FLT seems to be the winner with 358 years, though I think the number is too precise since it's not quite clear when it was posed. I have another candidate but you might not want to count it: the Greek constructions of trisecting an angle, duplicating the cube, and squaring the circle. The problem is that the statement:

In 400BC the Greeks posed the question: Prove or Disprove that one can trisect an angle with a ruler and compass

is false on many levels:

a) Nobody thought of prove-or-disprove back in 400BC (and that date is too precise).

b) Why would a compass, which helps you find where North is, help you with this problem?

(ADDED LATER: Some of the comments indicate that people do not know that point b is a joke. Perhaps not a good joke, but a joke.) 

SO, when was it POSED in the modern sense is much harder to say. For more on this problem see the book Tales of Impossibility or read my review of it here.

(ADDED LATER: A comment pointed out that trisecting an angle (and duplicating the cube and squaring the circle) were proven impossible. I knew that but forgot to say it, and to make the point about the very long time between posing and solving, so I will elaborate here:

1837: Wantzel showed that there is no way, with a straightedge and compass, to trisect an angle or duplicate the cube. This used field theory.

1882: Lindemann showed pi was transcendental and hence there is no straightedge and compass construction to square the circle.

So one could say it took 1882+400 years to solve the problem, but as noted above, to say the problem was posed in 400BC is not really right.)

4) Songs are needed for the other problems on this list UNION the Millennium problems. The Hodge Conjecture would be a challenge. I DID see some songs on YouTube that claimed to be about some of these problems, but they weren't. Some were instrumentals and some seemed to have no connection to the math.

5) Other lists I've seen include:

a) Prove there are no odd perfect numbers. That seems to be hard. This could have been posed before FLT was posed, but it's hard to say.

b) Prove the following are transcendental: pi + e, the Euler-Mascheroni constant. There are other open problems here as well.

These lists make me think more carefully about what I mean by HARD and PROBLEM and even MATH.