Sunday, June 04, 2023

Quantifiers: To Parenthesize or not to Parenthesize? Matrix of Formula: To Bracket or not to Bracket?

 For the book 

Computational  Intractability: A Guide to Algorithmic Lower Bounds

by Demaine-Gasarch-Hajiaghayi 

(See  here for a link to a first draft.) 

we had to make some choices about which notation to use. One of the least important ones was the following: 

When defining NP, and in a few other places should we use: 

                                                  (\exists x)(\forall y)[B(x,y)]

or 

                                                   \exists x : \forall y : B(x,y)

or 

                                                    something else.
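
For concreteness, here is roughly how the full definition of NP reads in each style, with the polynomial witness bound (elided in the displays above) put back in. This is my illustration, not any book's exact notation:

                                  x \in L iff (\exists y, |y| \le p(|x|))[B(x,y)]

versus

                                  x \in L iff \exists y, |y| \le p(|x|) : B(x,y)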

We ended up doing it the second way. But I wondered: which, if either, is standard? So I looked through many math and theoretical CS books for places where they used quantifiers. Here is what I found.

a) Most papers and books really don't use quantifiers at all!  This surprised me. 

b) When quantifiers are used, they are used in definitions, not theorems. 

c) One exception is in logic when they deal with formulas as objects in themselves.  For example, the inductive definition of a formula will have a step:

                       If f(x_1,...,x_n) is a formula then (\exists x_i)[f(x_1,...,x_n)] is a formula. 

d) Here is a list of the few places I saw quantifiers used and whether they used parentheses or not. I say if it has parentheses (abbreviated Parens) or not, and if the matrix of the formula is in square brackets, no brackets, or something else.

Cook's classic paper. Page 154. Parens, no Brackets. (1971)

Stockmeyer's paper where he defines PH.  Page 6 Parens and Brackets  (1976)

Computers and Intractability by Garey & Johnson. Page 164. Parens and Brackets (1979)

Morass-like construction of aleph_2 trees in L by Devlin.  Page 2 Parens and matrix in Parens (1979)

Descriptive Complexity by Immerman. Page 38 Parens no Brackets (1999) 

Bounded Queries in Recursion Theory by Gasarch and Martin. Parens and Brackets  Throughout the book.  (1999)

Complexity Theory from Godel to Feynman by Rudich. No Parens, No Brackets in Def of PH. (2003) 

Parameterized Complexity Theory by Flum & Grohe. Page 81 no Parens and no Brackets. 

Computational Complexity: A Modern Approach by Arora & Barak. Page 40. No Parens No Brackets.(2007) 

Computational Complexity: A Conceptual Perspective by Goldreich.  Page 114. No Parens, no Brackets. (2008)

On Quantifier Rank Equivalence between Linear Orders by Siders. On page 417 they use quantifiers to state a theorem, which is unusual. Parens, no Brackets.

Presburger Arithmetic, Rational Generating Functions, and Quasi-Polynomials by Woods. Parens, no Brackets. (2012)

Who Witnesses the Witness? by Abel et al. On Page 69 (which the pointer takes you to): No Parens, no Brackets. Colons between quantifiers. (2018)

e) What to make of all this?

First off, the RARITY of the use of quantifiers really surprised me. The only place I saw them used a lot was my book, co-authored with Georgia Martin, Bounded Queries in Recursion Theory. Perhaps it would have sold better if I hadn't used so many quantifiers. Oh well.

Second off, later works don't use parens and brackets. This is most clear if you just look at complexity theory books:

Garey & Johnson - 1979- parens and brackets

Flum & Grohe - 1998 - no parens and no brackets

Immerman- 1999 - parens but no brackets (this is the one exception) 

Arora & Barak - 2007 - no parens and no brackets

Goldreich-2008- no parens and no brackets

If you have a complexity theory book around that is not on this list, look up the definition of NP and the definition of the Poly Hierarchy and see (a) if they use parens around the quantifiers, and (b) if they use square brackets, no brackets, or something else. Please leave a comment about it so I can test the conjecture that parentheses are just so 1979.








Sunday, May 28, 2023

On Nov 10, 2014 the TV show Scorpion mentioned software that is like ChatGPT for Music


Scorpion is a TV show that ran from 2014 to 2018. It involves a group of brilliant (to say the least) but socially awkward (to say the least) people who help the government battle threats and/or solve crimes. 

The episode Risky Business aired on Nov 10, 2014 (episode 8). 

In the episode someone wrote a program that (in today's terminology) scrapes the web for all pop songs that were hits over the last 50 years and creates new songs that share those properties and hence will be popular. And it works!

1) The episode says that if the fans of group X find out that they didn't create their own music, but it's computer generated, then sales will drop. This may be true but I am not quite sure WHY it's true. If I LIKE listening to a song, I will still like it even if I know it was computer generated.

2) While they didn't go into any detail about what the program does, it SOUNDS a lot like ChatGPT to me. 

3) Could ChatGPT do that now? Has it already? Lance happened to read an earlier version of this post and emailed me a link which says this is already happening to some extent. The link is here but might be behind a paywall. (Note- My spellchecker DOES think paywall is a word. Yeah! It also thinks that spellchecker is a word. Yeah!)

4) Is it impressive that the show's writers predicted this kind of technology way back in 2014?

5) In the real world would someone be murdered because they are going to reveal that group X's songs were not authentic? Either TV has far more murders than the real world OR on TV they just get caught more often. I would like to think that TV has far more murders (see here). It certainly has far more interesting murders. 

Thursday, May 25, 2023

Finding Primes Pseudodeterministically

In 2002, Agrawal, Kayal and Saxena showed that primality testing is in P, i.e., you could test for primes without using any randomness.

What if you want to find a prime? Specifically, given a number m in binary, can you find a prime p > m in time polynomial in the length of m? In 2009 I posted about a polymath project to find a deterministic algorithm for finding primes, and the problem remains open today.

You could likely just try m+1, m+2, ... until you find a prime, but whether that is bounded by a polynomial number of steps is a big open question in number theory. You can choose random numbers between m and 2m and find a prime in expected polytime given the prime number theorem. This algorithm will likely output a different prime every time you run it.
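
Here is a minimal sketch of that randomized algorithm in Python, assuming sympy's isprime as a stand-in for your favorite polynomial-time primality test:

    import random
    from sympy import isprime

    def find_prime(m: int) -> int:
        """Return a prime in (m, 2m]; expected O(log m) samples by the prime number theorem."""
        while True:
            p = random.randint(m + 1, 2 * m)
            if isprime(p):
                return p

    print(find_prime(10**30))   # likely a different prime on every run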

There's a new paper by Lijie Chen, Zhenjian Lu, Igor Carboni Oliveira, Hanlin Ren and Rahul Santhanam that solves this problem pseudodeterministically for infinitely many inputs. This is a randomized polynomial-time algorithm B such that for infinitely many n, there is a specific prime p between \(2^n\) and \(2^{n+1}\) such that \(B(1^n)\) outputs p with high probability. With high probability you will get the same prime every time you run the algorithm!

The proof uses a win-win kind of argument, if a certain algorithm fails to work you can use that to derandomize. Making that argument work requires bootstrapping on variants of previous pseudorandom generator and hitting sets constructions. 

Almost surely there is a deterministic polytime algorithm for finding primes. But for now the new result of Chen et al. is a nice step in that direction.

Sunday, May 21, 2023

Logic and Lack of Logic of Anti-Vaxxers

I wrote the post below the dotted line a long time ago but never got around to posting it. Now that

                                        WHO says COVID emergency is over

(when I first saw that headline I misread it as a question:

                                        Who says COVID emergency is over?)

I decided to post it (with one update that has an ADDED LATER on it).

If you didn't know, WHO is the World Health Organization. Actually, that is true whether or not you know it.

------------------------------------------------------------------------------------

Some of the reasons anti-vaxxers give are better than others. Some are consistent, some are not.

We list some of them and invite commenters to list more. Our point is: which ones have something interesting to say?


I am PRO VAX but I often seek out intelligent opposing views on any of my opinions. Did I find any here? I leave that as an exercise for the reader. 


Reasons anti-vaxxers give or could give. 

a) FREEDOM! That's not quite a reason. I am reminded of asking a neighbor why he flies a Confederate flag; he said

FREEDOM!

I was hoping he would say either

To honor my great grandfather who died for the south in the civil war.

or

Because I don't like black people.

or

To honor those many people, black and white, who died in the civil war fighting for the south. 

or

SOMETHING with content to it. 

Any of those answers would tell me something could be discussed. Just the word FREEDOM does not.

b) Bill Gates put microchips in the vaccine so he can keep track of where you are. Why? Bill Gates and Mark Zuckerberg can ALREADY keep track of where you are. I do wonder if this is a common anti-vax viewpoint or if it's VERY RARE and the media plays it up to make anti-vaxxers look stupid. Same with the notion that the vaccine makes you magnetic (that sounds cool!).

c) I want to wait and see its effects since it was rushed out. That might have made sense X months ago, but by now it's very well established.

d) I haven't gotten around to it yet. These people are a very good target for mandates, or at least for being talked into doing it. It may be more fun to talk about the radical anti-vaxxers; however, if the haven't-gotten-around-to-it group all got vaccinated we would be closer to herd immunity.

e) I've got COVID so I am immune. This is true for Y months (I am not sure what Y is); however, the person I know who told me this had COVID about 3 years ago. But at least it's an attempt at an argument.

f) The medical community has been bad to (i) me, (ii) my people, (iii) some other people that I care about hence I do not trust them. The premise is actually true, so this is an attempt at an intelligent argument. 

g) Big Pharma is making a mint on this and ripping us off. This is a view of, not the extreme left, but the fringe left. Robert F. Kennedy Jr. is a big voice here, see here. On a right-wing relative's urging I listened to an hour-long interview with him. Mostly a rambling incoherent argument. Much of it was anti-business, which got no pushback from the usually-pro-business Republicans. Are there any mainstream Democrats who are anti-vax? I do not think so. Also, is it true that Big Pharma is making a mint? Ripping us off? Ripping someone off? This could be an interesting question if asked more coherently. But it's not a good reason to be anti-vax.

h) Trump said it was a hoax to get him out of office. Taking a vaccine now would be to admit it's not a hoax. Such people should take Karl Popper's view of science: conjecture (as Ted Cruz did) that if Biden wins then COVID will go away, since it was a hoax made up to get Biden to win. If Biden wins and COVID does not go away, then you must reject your conjecture. (ADDED LATER: Odd point: Trump has sometimes said, though not lately, that people should get the vax that HE invented. If we called it a Trumpzine then would people take it? As of now Trump and DeathSantos are trying to out-anti-vax each other.)

i) For the first few days after you take it, it hurts and you feel tired or get chills or have other reactions. This is actually true, though one has to balance that against getting COVID.

j) The Ford Administration really messed up on the Swine Flu, so why trust the government now? This is true: the Ford Admin DID mess up. Oddly enough, I have never (or perhaps rarely) heard this as a reason. This puzzles me. Why claim the Bill Gates microchip stuff (which is absurd) or that the vax causes autism (debunked) and NOT use arguments that are more reasonable?

k) Vaccines cause autism. That study was debunked a long time ago, but it still lives on.

l) Kamala Harris said she was hesitant since it was rushed. I've only heard this one as Republicans try to blame her for vaccine hesitancy. The problem with that argument is that you are claiming that the anti-vaxxers, who are mostly Republicans, are listening to Kamala Harris for advice.

m) If many people get vaxed and COVID goes away then Biden will look good, and we can't have that.

n) COVID was invented by the Chinese to cripple us. Uh, even if true (which it is not, though the lab leak hypothesis might be), that's a reason to TAKE it, to thwart their plans. For that matter, Trump could have been Trumpian and PRO-MASK and PRO-VAX by blaming the Chinese and George Soros and the Deep State and Biden and Obama and of course Hillary, but using this to say WEAR THE MASK! TAKE THE VAX! to DEFEAT these ENEMIES who want to destroy America. Would that have worked to slow the spread of COVID? If so, would he have won re-election?

Thursday, May 18, 2023

Community Guidelines that Ignore History

Last week I got the following email from Blogger:

As you may know, our Community Guidelines (https://blogger.com/go/contentpolicy) describe the boundaries for what we allow-- and don't allow-- on Blogger. Your post titled "Mysteries of the Seventies" was flagged to us for review. This post was put behind a warning for readers because it contains sensitive content; the post is visible at http://blog.computationalcomplexity.org/2005/06/mysteries-of-seventies.html. Your blog readers must acknowledge the warning before being able to read the post/blog.

So let me describe what happened carefully so this post doesn't also get flagged. First a history lesson. In 1972, five men were arrested for breaking into the Democratic National Committee (DNC) headquarters at the Watergate complex in Washington, D.C. They were found with bugging devices and cameras, attempting to wiretap the DNC offices. One of them had ties to the Committee to Re-Elect the President (CREEP), which supported Richard Nixon's re-election campaign.

Two Washington Post reporters, Bob Woodward and Carl Bernstein, broke the story of the coverup that would eventually lead to Nixon's resignation in 1974. I highly recommend both the book and the film All the President's Men about their investigation. Woodward and Bernstein got help from a then-anonymous source. The identity of the source was subject to huge speculation and remained a mystery for three decades. In 2005 Mark Felt, associate director of the FBI, outed himself as the source.

After the announcement I wrote a short post (would have been a tweet today) mentioning that while we learned the solution of one mystery from the 1970s, another remained.

So why was this post flagged as sensitive eighteen years later? Because I used the pseudonym the Washington Post reporters gave to Mark Felt, a name taken from a popular adult movie at the time.

Sunday, May 14, 2023

Take a number and map it to the number of letters in its name

Let f: N --> N map a number to the number of letters in its name in English (we will consider other languages later).

So for example 14 is fourteen so it maps to 8. Let's iterate:

14 --> 8 --> 5 --> 4 --> 4 --> 4 ...

It is known (perhaps well-known) that, given any starting point, the sequence eventually is all 4's. 

I recently got an email asking if this was PROVEN. I could not find a proof on the web so I did it myself. My writeup of a proof is here.
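
For an empirical check (not a proof!), here is a quick Python sketch assuming the third-party num2words package for English number names:

    from num2words import num2words   # third-party; any number-to-words routine works

    def letters(n: int) -> int:
        """Count the letters in the English name of n (ignoring spaces and hyphens)."""
        return sum(c.isalpha() for c in num2words(n))

    def orbit(n: int) -> list:
        """Iterate n -> letters(n) until a value repeats."""
        seen = []
        while n not in seen:
            seen.append(n)
            n = letters(n)
        return seen + [n]

    print(orbit(14))   # [14, 8, 5, 4, 4]
    assert all(orbit(s)[-1] == 4 for s in range(1, 10**4))   # every orbit ends at 4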

1) So now there IS a proof on the web. It may be hard to find. Does this problem have a well-known name that I should add to this blog post so that it is easier to find?

2) My next thought was

For which functions f: N --> N is it the case that every sequence a, f(a), f(f(a)), ... is eventually constant? I leave that for you to ponder.

3) The asker of the question is of a more real-world bent and emailed me what happens in other languages:

Spanish has one number whose name is its own length: 5 is cinco. I leave it to the reader to show that in Spanish the sequence always becomes all 5's. (NOTE: a commenter says NO, you can have a 4-6 cycle. Cool!)

French seems to have no such numbers, so it cannot have this behavior.

Norwegian has three such numbers: 2 is to, 3 is tre, 4 is fire. So I suspect every such sequence is eventually constant-2, constant-3, or constant-4.

See this article here for more on the numbers 1-10 in several languages.

OBLIGATORY ChatGPT note: Lance saw this post in our blog account and was curious what ChatGPT would do with it. For what he got, see here. You can decide if a program can take over the blog anytime soon.

Thursday, May 11, 2023

Winter is Coming

I fear we are heading to a computer science winter. Now why would I say that when CS enrollments are at record levels and AI is driving incredible excitement in the field? Back in 2016 I wrote

We won’t see a decline in demand for computer science students until we automate ourselves out of a job.

With advances in machine learning, especially generative AI, you can now use AI tools with little to no knowledge of computer science and mathematics. You can do quite a bit more with just a basic knowledge of Python programming and APIs. And tools like GitHub Copilot make programming that much easier.

In 2005, during the last period when computer science saw small enrollments, I suggested computing had become a commodity and we had lost the excitement in the field, leading to a decrease of interest and students. It didn't help that potential students had a (mostly mistaken) perception that all the computing jobs were being outsourced to India.

We may soon see a time when computing loses its excitement again if everyone can just interact in English (or any other language). Students might now have a perception that computing jobs will be outsourced to AI. The recent tech layoffs don't help. Even the ones interested in computing might focus more on the various low-cost certificate programs instead of a full CS degree.

What can we do? We need to reframe our degrees or create new ones to recognize the move to data-oriented computing. We need to embrace teaching responsible AI but without fighting the future. 

CS is in a great place right now but we have to continually adjust to ensure our future relevance or we may no longer have it.

Monday, May 08, 2023

Other Ramsey's

I often Google Ramsey stuff to find something, and I often end up back on my own collection of Ramsey theory papers. But I sometimes find OTHER uses of the name Ramsey.

To tell these Ramseys apart we will call the math Ramsey guy Frank Ramsey, since that was his name.

1) Frank's brother Arthur Michael Ramsey (usually just Michael Ramsey). He was the Archbishop of Canterbury from 1961 until 1974.   He was ahead of his time in being Ecumenical and supporting women clergy. His Wikipedia page is here

2) The Ramsey Effect. Aaron Ramsey is a football player (what Americans call soccer). There were four times when he scored a goal and soon after an important person died. This was dubbed The Ramsey Effect. This is of course silly, and if Aaron asked Frank about the probabilities (unlikely: Frank died in 1930 and Aaron was born in 1990) I am sure Frank would tell him that four is too small a number to make anything out of this. The Ramsey Effect is discussed here. There is no Wikipedia page about it, and I found it by accident, so, as the kids say, it's NOT a thing.

3) JonBenét Ramsey. A girl who was murdered when she was six. The case is still unsolved. See the Wikipedia page on this here.

4) First name Ramsey: here. The only one I had heard of is Ramsey Clark.

5) Last name Ramsey: here. Lots of people! Most I had not heard of. 

(With regard to 4 and 5: there are so many famous people that you can't have heard of all of them.)

6) For more Ramsey stuff see here.



Thursday, May 04, 2023

Breaking Ground in Isomorphism Testing: A Leap Forward for a Bottleneck Case of Group Isomorphism

Guest post by Josh Grochow and Youming Qiao

There has, quietly, been somewhat of a breakthrough in isomorphism testing. No, not as big as Babai's 2016 Graph Isomorphism in Quasipolynomial Time. But a first foothold in climbing a wall for which no one had gotten much off the ground before. The result, due to Xiaorui Sun in this year's STOC, is an algorithm for testing isomorphism of a certain class of groups - p-groups of class 2 and exponent p if you must know, but we'll get to that - in time \(n^{O(\log^{5/6} n)}\) where n is the order of the group. To understand why we're excited about this we have to tell a bit of a story. 

In the 1970s, when Graph Isomorphism was still a mystery, people also thought more widely about isomorphism testing of other combinatorial and algebraic structures. For finite groups of order n, Robert Tarjan realized that there is an \(n^{\log n+O(1)}\)-time algorithm, simply because a group of order n has a generating set of size \(\log n\). This observation was recorded by Gary Miller in a paper in STOC'78, and independently realized by Felsch and Neubüser. A natural question is then whether Group Isomorphism can be solved in time poly(n) where n is the group order.


Not only is this question natural from the perspective of studying groups computationally, it is also natural from the perspective of Graph Isomorphism. For Group Isomorphism reduces to Graph Isomorphism in polynomial-time (as does the isomorphism problem for any finite algebraic or relational structure, see Zemlyachenko, Korneenko, & Tyshkevich). While this has been known for a long time, Babai’s result on Graph Isomorphism brings the running times quite close: \(n^{O(\log^2 n)}\) for graphs, and \(n^{O(\log n)}\) for groups. So not only does Group Isomorphism stand in the way of getting Graph Isomorphism into P, but in our current state of knowledge, it even stands in the way of shaving off more than a single log in the exponent of the runtime.


Since the general Group Isomorphism problem seems difficult, attention turned to special classes of groups. It was not hard to see that isomorphism of Abelian groups could be computed in polynomial time. However, a group class that is just “one step away” from Abelian - groups G where, when you mod out by the center Z(G), what’s left is Abelian -  turned out to be difficult. Such groups are called class-2 nilpotent, and in one sense, their  group-theoretic structure is relatively straightforward: both G/Z(G) and Z(G) are Abelian. Yet, to devise an efficient isomorphism testing procedure turned out to be extremely difficult (see e.g. Garzon-Zalcstein, Rosenbaum-Wagner, O’Brien, Wilson), to the point that this is usually considered as a bottleneck for putting Group Isomorphism in P. 


Among class-2 nilpotent groups, the “key case” to resolve is widely believed, for several reasons, to be p-groups of class 2 and exponent p. In such groups, both the center Z(G) and quotient G/Z(G) are elementary abelian, i.e., of the form \((Z_p)^d\). Despite having an even simpler group-theoretic structure, this group class still turns out to be difficult! For a long time, the asymptotic growth of the exponent of the runtime for solving this restricted problem has not improved over the \(n^{\log n+O(1)}\)-time algorithm, which works for all groups.¹


Xiaorui Sun’s result represents the first substantial improvement, cracking open this decades-old quest. His algorithm runs in time \(n^{O(\log^{5/6} n)}\), and its techniques are indeed novel. The starting point of this algorithm is to consider the following equivalent problem in (multi)linear algebra: let \(f, g:Z_p^d \times Z_p^d \rightarrow Z_p^e\) be two skew-symmetric bilinear maps. Do there exist change of bases A in \(GL(d, p)\) and B in \(GL(e, p)\), such that for all \(u, v\) in \(Z_p^d\), \(f(A(u), A(v))=B(g(u, v))\)?
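
To make the problem concrete, here is a brute-force check of this equivalence for tiny p, d, e (an illustration of the definition only, emphatically not Xiaorui's algorithm, which avoids the exponential search over \(GL(d,p) \times GL(e,p)\)):

    import itertools
    import numpy as np

    def gl(n, p):
        """Yield all invertible n x n matrices over Z_p (feasible only for tiny n, p)."""
        for entries in itertools.product(range(p), repeat=n * n):
            M = np.array(entries, dtype=int).reshape(n, n)
            if round(np.linalg.det(M)) % p != 0:   # nonzero determinant mod p
                yield M

    def equivalent(F, G, p):
        """Is there (A, B) with f(Au, Av) = B(g(u, v)) for all u, v?

        F, G are d x d x e tensors: f(u, v)[m] = sum_{k,l} u[k] v[l] F[k,l,m] mod p."""
        d, _, e = F.shape
        for A in gl(d, p):
            FA = np.einsum('ki,lj,klm->ijm', A, A, F) % p   # tensor of (u,v) -> f(Au, Av)
            for B in gl(e, p):
                if np.array_equal(FA, np.einsum('am,ijm->ija', B, G) % p):
                    return True
        return False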


Baer’s Correspondence sets up an equivalence of categories between p-groups of class 2 and exponent p, and skew-symmetric bilinear maps over \(Z_p\). This viewpoint allows Xiaorui to use multilinear algebra to study the structure of these bilinear maps. He also crucially depends on a result of Ivanyos and Qiao, which built on Wilson’s use of involutive algebras in this context. He also uses the individualization-and-refinement technique (but for matrix spaces, not graphs!), a characterization of spaces of matrices of low rank, and reducing a tensor to a “semi-canonical” form part of which is somewhat reminiscent of the Tucker decomposition.


All this results in an algorithm which solves the above problem on bilinear maps in time \(p^{(d+e)^{1.8} \log p}\). For groups of order \(p^n\) with \(n\) larger than \(\log^5 p\), Baer’s Correspondence then says that this algorithm does it; when \(n\) is smaller than \(\log^5 p\), he can fall back on the generator-enumerator algorithm, since the number of generators is at most \(n\).


For us, who have been working on Group Isomorphism for more than a decade, Xiaorui’s result represents an exciting development on this classic algorithmic problem, and we look forward to seeing more progress in this direction in the near future. 


¹Rosenbaum & Wagner improved the exponent to \(\frac{1}{2}\log_p n + O(1)\), and it was later improved to \(\frac{1}{4}\log_p n + O(1)\) for all groups; see p. 5 of Le Gall & Rosenbaum. In 2014, at a conference on Groups, Computation, and Geometry organized by Wilson, Brooksbank, Hulpke, Kantor, and Penttila, it was concluded that modern practical methods, such as those used in GAP and MAGMA, still take \(n^{O(\log n)}\) steps in the worst case.

Monday, May 01, 2023

There are an infinite number of proofs that there are an infinite number of primes

In the last few years there have been four papers that prove the primes are infinite using some number theory and some Ramsey Theory. The papers are:

Van der Waerden and the primes by Alpoge. See here or here.

Squares in arithmetic progression and infinitely many primes by Granville. See here or here.

Fermat's last theorem implies the infinitude of primes by Elsholtz. See here or here.

Fermat's last theorem, Schur's theorem (in Ramsey Theory), and the infinitude of the primes by Gasarch. See here and here.

(I included the arxiv version and the doi pointer.)

We also note that

1) Mestrovic has collected 183 proofs that the primes are infinite, see here. Some of them are similar, so if you mod out by similarity you would get fewer proofs. How many you get depends on your criteria for similarity. This comment applies to other theorems that have many proofs.

2) Quanta recently published an article about the fact that there are so many proofs, highlighting the four mentioned above. See here. (This might be behind a paywall.)

This raises the obvious question:

Why are there so many proofs that the primes are infinite? 

Some thoughts

1) Which other theorems have many proofs?

a) The Pythagorean Theorem. See here for the claim that there are 371 proofs. There is a recent claim of a proof using trigonometry here (this was thought to be impossible since it would involve a circular argument). 

b) According to Wikipedia (see here) The law of quadratic reciprocity has 240  proofs.  This paper here has some of them. That paper also shows 

QR IMPLIES primes infinite.

Actually more: QR IMPLIES the set of primes \(\equiv 4 \pmod 5\) is infinite.

c) \(\sqrt 2\) is irrational has many proofs. I can't find a clean reference that states there are many proofs---if you know one, leave a comment. Wikipedia (see here) has five proofs, though there are many more. 

d) There is a mathoverflow post about theorems with many proofs here. I had thought only easy theorems had many proofs; however, there are several hard ones on this list. 

2) Primes are so basic that many parts of math can be used to prove they are infinite.

3) WHY do people do these proofs? For the four papers listed above, and likely for many of the other proofs,  the proof that primes are infinite is  a springboard to other questions or concepts. We look at those four papers: 

a) Alpoge showed that van der Waerden's theorem and unique factorization IMPLY primes infinite. Alpoge didn't use this as a springboard to other questions, but is amused that VDW can be used to prove primes infinite. I will note that the proof made me realize (a) the proof of unique factorization does NOT use that primes are infinite, and (b) ANY integral domain with unique factorization has an infinite number of primes.

b) Granville showed that VDW's theorem, together with a result of Fermat that there cannot be four squares in arithmetic progression, IMPLIES primes infinite. He then uses this as a springboard to talk about other interactions between Ramsey theory and number theory. Let Q(N) be the max number of squares in an arithmetic sequence of length N. Find upper and lower asymptotic bounds on Q(N). Using Szemeredi's theorem (which is part of Ramsey theory), Szemeredi himself showed that for all \(\delta>0\), \(Q(N) < \delta N\). Granville's paper shows how to get this result from a corollary to Faltings' theorem. He also notes that better is known: \(Q(N) < N^{3/5 + \epsilon}\).

c) Elsholtz showed that Schur's theorem (for all c there is an S=S(c) such that for all c-colorings of {1,...,S} there exist x,y,z the same color such that x+y=z) and FLT (the n=3 or n=4 case) IMPLY primes infinite. He then looks at various INFINITE Ramsey theorems that imply the primes are infinite.
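
As an aside, the two-color case of Schur's theorem is small enough to verify by machine. Here is a quick sketch (an illustration of the statement above, not of Elsholtz's proof):

    from itertools import product

    def has_mono_schur_triple(coloring):
        """coloring[i-1] is the color of i; look for same-colored x, y, x+y."""
        n = len(coloring)
        return any(coloring[x-1] == coloring[y-1] == coloring[x+y-1]
                   for x in range(1, n + 1) for y in range(x, n + 1) if x + y <= n)

    # S(2) = 5: every 2-coloring of {1,...,5} has a monochromatic x, y, x+y,
    assert all(has_mono_schur_triple(c) for c in product((0, 1), repeat=5))
    # while coloring {1,4} red and {2,3} blue shows that {1,...,4} does not.
    assert not has_mono_schur_triple((0, 1, 1, 0))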

d) Gasarch's proof is identical to Elsholtz's. He then looks at (1) for domains with a finite number of primes, what goes wrong? and (2) when does the proof apply to other integral domains? The last question involves looking at variants of FLT, some of which are open questions.

4) Gasarch also wondered about the following (though it's not in his paper). The four papers above, and also other proofs that the primes are infinite, are of the form

A IMPLIES Primes are infinite

Are we really using A? How can we pin this down? Find a logical system L such that

1) From L one can show A IMPLIES Primes are infinite

2) From L one CANNOT prove Primes are infinite. (You may need some hardness assumption)

One can also examine this kind of question for other theorems, like \(\sqrt 2\) is irrational.

I have shown this to several people and am told it's not doable. Oh well.

I had a prior blog post on this, after I saw Alpoge's proof; see here.

ADDED LATER: a commenter left a link to a math overflow page that has information on the reverse mathematics of Euclid's theorem that the primes are infinite. The link is here.


5) USUALLY mathematicians want to (or should want to) find EASIER proofs of HARD theorems. 

For some of the proofs that primes are infinite, QR, \(\sqrt 2\) irrational, some other theorems that have many proofs, mathematicians want to find HARD proofs of EASY theorems. 





Wednesday, April 26, 2023

Comic Book Alignment

Talk about AI "alignment" makes me think of a time in the 1950s when a group of companies decided to create an industry organization to self-govern their work, to make sure they were following the values of the time and to avoid government oversight. Of course, I'm talking about the Comics Code Authority (CCA).

Fueled by psychiatrist Fredric Wertham's book Seduction of the Innocent and a series of U.S. Congressional hearings, the comic publishers realized they needed to police themselves and formed a trade group, the Comics Magazine Association of America (CMAA). The CMAA created the CCA in 1954 to enforce a code of content guidelines that comic book publishers would adhere to. The Comics Code prohibited explicit violence, sexual content, and other adult themes in comic books, as well as promoting a "positive portrayal" of authority figures and institutions. The CCA seal, a small stamp indicating that a comic book had been reviewed and approved by the organization, became a requirement for distribution by most newsstands and retailers, pushing many publishers to follow the code.

I started reading comic books in the 1970s with the code in full swing. It was not a golden time for comic books with mostly bland, straightforward stories, and I gave it up as I went into high school. In college in the '80s, a friend brought me back into comics with some series, like Watchmen, having given up the code and the seal. I started reading comics voraciously, so much that I had to go cold turkey in grad school so I could focus on research. The code itself was abandoned in 2011 after even Archie Comics gave up using the seal.

There's not a direct parallel between comic book writers and large language models, but the principle is the same. If you try to enforce a collection of values, you will get blander, less interesting output. I'm not saying that all alignment is a bad idea, but that you need to realize you will lose something when you do.

Sunday, April 23, 2023

Thoughts on Gordon Moore

 Gordon Moore passed away on March 24, 2023. He was 94 years old. 

He is best known for the article 


Cramming more components onto integrated circuits. 

It appeared in the magazine Electronics (now defunct), Volume 38, No. 8, April 19, 1965. Do you need to track it down in the basement of your library? No. It's here and here. I wonder if Moore would have predicted that his article would be easily available over 50 years later. Or is it? Link rot is a serious problem, so you might want to download it to your local files. Note that the first link is some sort of official version and the second is my local copy. It is not clear which link will rot first. The article also includes an addendum: an interview with Moore that was done later.

In the article Moore said that the number of components-per-circuit (I think that means chip) would double every year. Moore credits Dave House with modifying it to `doubling every 18 months' and Carver Mead with calling it `Moore's Law'. Later it came to be quoted as: computer SPEED would double every 18 months. We will take this to be Moore's Law in this blog post.

Is Moore's law dead? Browsing Google the answer seems to be that it is slowing down but not dead yet. (IDEA for a comedy sketch: Redo the Monty Python Dead Parrot sketch about the death of Moore's law.) 

If Moore had 1 penny in April 1965 and it doubled every 18 months, then how rich would he be now? How rich was he in April 2022? Compare the two numbers.
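
Just for fun, here is the arithmetic in Python (my sketch: April-to-April month counts, whole doublings only):

    # Back-of-the-envelope for the penny question above.
    def pennies(start, end):
        months = (end[0] - start[0]) * 12 + (end[1] - start[1])
        return 2 ** (months // 18)   # completed doublings only

    for when in [(2022, 4), (2023, 4)]:
        print(when, f"${pennies((1965, 4), when) / 100:,.0f}")
    # (2022, 4) $2,748,779,069   (exactly 38 doublings)
    # (2023, 4) $2,748,779,069   (the 39th doubling arrives in Oct 2023)

Amusingly, the two answers agree: the 38th doubling lands exactly in April 2022 and the 39th not until October 2023.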







Thursday, April 20, 2023

Health Tech


On Tuesday, at the behest of an alumnus, I spent the afternoon at HIMSS, a large health tech conference being held at the big convention center in Chicago. When I think of Health Tech, I imagine fancy medical devices, but most of the exhibitors were focused on software solutions.

Cybersecurity had the biggest theme area, no surprise given the devastating role ransomware has had on some hospital chains. The second largest theme area focused on interoperability. Until just a few years ago, the vast majority of medical data was transferred via fax. A few companies, like Epic Systems, dominate the electronic health records space and don't share nicely. There's a relatively new standard, FHIR, for transferring medical data, making it easily accessible via APIs while keeping it secure. Hopefully, we can finally kill off the fax machines in doctors' offices. Patient Engagement was the other big theme area.
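
For a sense of why FHIR matters, here is a minimal sketch of a FHIR read in Python: a patient record is structured JSON over REST. The base URL below is a hypothetical placeholder, not a real server:

    import requests

    base = "https://fhir.example.org"            # placeholder endpoint
    resp = requests.get(f"{base}/Patient/123",   # the standard FHIR "read" interaction
                        headers={"Accept": "application/fhir+json"})
    patient = resp.json()
    print(patient["name"][0]["family"])          # structured fields, not a scanned page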

Of course the big discussion topics are about how AI will change health care. For example, the advances in electronic records have led to doctors spending far too much time entering data instead of seeing patients. AI could make data entry quicker, easier or perhaps even unnecessary. Also AI could help provide functionality for triage and initial diagnoses, helping to extend capabilities in a staff-limited environment and bring down health-care costs. Many of the exhibited software systems boasted about using AI, but it won't be until next year's meeting that we see the true integration of large-language models into health care technology.

Many of the challenges of technology in health care carry over to higher education. We don't generally use faxes, but why do we send transcripts as PDFs? Health and student data share similar privacy and security challenges; why can't we develop a FHIR-like system for higher education? Cybersecurity and Patient (er, Student) Engagement challenges loom large for universities as well.

Thursday, April 13, 2023

My Week at Simons

This week finds me at the Simons Institute for the Theory of Computing in Berkeley, California. Simons started about the same time I joined the administrative ranks, and I never had the opportunity to spend a full semester there. I can manage a shorter trip and purposely chose a week with no workshops and great visitors including Sam Buss, Russell Impagliazzo, Valentine Kabanets, Toni Pitassi, Ryan Williams, former student Rahul Santhanam and former postdocs Pavan Aduri and Vinod Variyam, and many others including the next generation of complexity theory leaders. Simons is having programs on Meta-Complexity and an "extended reunion" for Satisfiability. Apparently I used to work on Meta-Complexity before it was a thing.

Computational complexity traditionally has tried to get ahead of new technologies, and modelled randomized, parallel, and quantum computation and cryptography in the infancy of their development, allowing complexity to help guide our understanding and development of these areas. In the last twenty years or so, complexity has migrated more towards mathematics, and has mostly missed technological changes like cloud computing, hierarchical memory models, and edge and mobile computing.

But the recent advances in optimization and machine learning cannot be ignored. There has certainly been plenty of discussion of ChatGPT and Russell gave an informal lecture yesterday trying to model large-language models at some level. I've been having some discussions about how complexity can answer questions like what it means for a model to be explainable. 

Complexity theory also ought to reckon with the fact that, practically, we seem to be getting the best of P = NP while avoiding losing cryptography, living simultaneously in Heuristica and Cryptomania among Russell's five worlds. Russell claims we're not in Heuristica, at least not now, since we can still generate hard-to-solve problems. But if our models aren't modeling the world we live in, perhaps it's time to rethink the models.

Monday, April 10, 2023

Complexity and Explainable AI

About six years ago, I posted on why it was important to understand machine learning, mentioning trust, fairness, security and causality. But then I brought in complexity.

What if P = NP? Would that help? Actually it would make things worse. If you had a quick algorithm for NP-complete problems, you could use it to find the smallest possible circuit for, say, matching or traveling salesman, but you would have no clue why that circuit works.

Ryan Williams countered

If P=NP you could also find the shortest proof in your favorite formal system that the smallest possible circuit does what you wanted it to do, as well as any other claim you are wondering that may be true about the circuit. That proof might not be comprehensible to you, but it could be written in a format where proof assistant software such as HOL or Coq could parse it and convince you it is correct. So if P=NP (with feasible low constants) I think that would definitely help.

So if P = NP maybe you couldn't understand how the circuit works but any question about how it works you could answer. 

While we don't live in a world where P = NP, we are in a world where we can produce very strong learning algorithms where we can't understand the programs they produce. They are now powerful enough to apply Ryan's idea and use the algorithms themselves to understand the decisions they make.

For example, I tried using Google Translate on a Hungarian obituary of Vera Sós. Hungarian does not use gendered pronouns, and the translation incorrectly produced male pronouns. I then fed the translation into ChatGPT, which translated with the correct female pronouns.

You can speculate why Google Translate uses male pronouns but you can't ask it why. But you can ask ChatGPT.

I used female pronouns in the translation because Vera T. Sós is a female name. Vera is a feminine given name, and since the context of the original text is about her life and accomplishments, it is appropriate to use female pronouns in the English translation to maintain clarity and accurately convey information about her.

So perhaps if you want to understand how ML works, perhaps we should seek stronger algorithms, not weaker ones, algorithms that can explain themselves. As they say, a dull knife is more dangerous than a sharp one.

Tuesday, April 04, 2023

Neil Jones (1941-2023)

Eric Allender graciously agreed to write this remembrance of Neil Jones.

Neil Jones passed away on March 27.

Neil's work had a profound impact on the field of computational complexity theory, although he was primarily known for his work in other areas of computer science.  For example, his 1998 ACM Fellow citation is "For outstanding contributions to semantics-directed compilation, especially partial evaluation, and to the theory of computation, formal models and their practical realization."  Note that there's no mention of "complexity" (except as it is bundled together with "theory of computation" -- and Jones also published in the area of computability theory).

So what were some ways that Neil influenced the development of computational complexity theory?

In 1972, in the 4th STOC conference, Neil collaborated with Alan Selman, to show that a notion that logicians had been studying since the 1950's coincided exactly with a natural complexity class.  More specifically, given a logical formula F, the "spectrum" of F is the set of numbers {n : F has a finite model of size n}.  What kinds of sets can be the spectrum of a first-order formula?  Is this class of sets closed under complement?  These were some of the questions that logicians had struggled with.  Jones and Selman gave a precise characterization, as NE (nondeterministic exponential time).  Thus the complement of every spectrum is a spectrum if and only if NE=coNE.  As D. Sivakumar points out in a LinkedIn comment on Neil's death: "The following year, Fagin proved that generalized spectra coincide with NP, and descriptive complexity theory was born."

Of course, a lot of other things were happening in the late 60's and early 70's:  Savitch proved Savitch's Theorem.  The first NP-completeness results appeared.  It appears that several people were trying to build on Savitch's theorem, to show that everything in P can be done in \(\log^2\) space, and this motivated Steve Cook to define a problem ("Path Systems") and show (1) certain algorithms for Path Systems could not be implemented in small space, and (2) Path Systems has a small-space algorithm iff everything in P does.  This result of Cook's was similar in flavor to a theorem of Savitch, showing that a problem he called "Threadable Mazes" was in L if and only if NL=L.  Although these notions were clearly in the air, Jones (and -- simultaneously -- Meyer & Stockmeyer) were the first to explicitly formalize the notion of logspace reducibility (including closure under composition), and to notice that the NP-completeness results of Cook and Karp held also under logspace reducibility.  And Jones was the first one to go ahead and develop the general theory of logspace reducibility and the theory of completeness for subclasses of P (first for P itself (with Laaser), and later for NL (with Laaser and Lien)).  I think that this is when people started to get the idea that completeness was not a special property that only a few problems shared.  Rather: Nearly EVERY natural computational problem was likely to be complete for some reasonable complexity class.

Notably, Jones also recognized that logspace was overkill, when considering reductions.  He also wanted to have a really restricted notion of reducibility, so that one could talk meaningfully about problems being complete for L.  To this end, he defined "log-rudimentary" reducibility.  This was quite natural for him, since he had worked previously on Smullyan's "Rudimentary Relations".  But log-rudimentary reductions never really caught on.  Instead, after Furst, Saxe, and Sipser kickstarted the study of AC0 circuits, a notion of AC0 reducibility was developed by Chandra, Stockmeyer, and Vishkin in the mid-1980's, which turned out to be very useful in classifying problems as being complete in various subclasses of P.  Much later, in 1991, I published a paper with Vivek Gore, showing that Neil's log-rudimentary reductions are precisely the same thing as uniform AC0 reducibility.  Thus Neil Jones had the insight to define and study a notion that would not become mainstream for another decade, and which still provides the best tool for classifying the complexity of natural problems in subclasses of P.

I only had the pleasure of meeting Neil once, during a visit to Copenhagen in 2004, although we would occasionally exchange some e-mail about topics in complexity.  It is interesting to note that the most recent paper on Neil's DBLP page deals with complexity classes.  I haven't spent much time looking at the paper, but I do see that the authors define a complexity class that lies between NL and P.  It might be interesting to see if this class coincides with SAC1 (also known as LogCFL).

I thank Lance and Bill for encouraging me to write a few lines about Neil's importance to the field. 

Saturday, April 01, 2023

Who's on April First

 


Carlos May waving to the crowd on April 1, 1972

Instead of the usual April Fools’ Day post, I present one of the best April Fools’ Day stunts ever. Here’s the text from an old Parade Magazine clipping I dug up recently that was published on April 1, 1985.

When it comes to innovative and wacky ideas in baseball, Bill Veeck was a true legend. As a team owner and promoter, Veeck was known for his creative approach to the sport, from planting ivy on the walls at Wrigley Field to his famous "exploding scoreboard" at Comiskey Park. But did you know about the time Veeck pulled off an unforgettable April Fools' stunt by having the 1972 Chicago White Sox wear the names from the classic "Who's on First?" sketch?

It was April 1, 1972, and the Chicago White Sox were getting ready to play a game that would go down in history. Bill Veeck had decided to pay homage to the iconic comedy routine by Bud Abbott and Lou Costello, considered by many the greatest comedy sketch ever performed. For those unfamiliar with the sketch, it revolves around a series of misunderstandings based on the names of the players on a fictional baseball team. The names sound like common phrases, leading to a hilariously confusing conversation.

In Veeck's version of the stunt, the White Sox players would take the field with the names of the "Who's on First?" team on the back of their jerseys. The players, initially skeptical of the idea, eventually embraced the spirit of April Fools' Day and played along.

As the game commenced, fans were treated to a scene straight out of the Abbott and Costello routine. Instead of their usual names, the players' jerseys featured names like "Who," "What," "I Don't Know," "Why," "Because," "Tomorrow," and "Today." Here was the starting lineup:

  1. Who - First Base: Dick Allen
  2. What - Second Base: Mike Andrews
  3. I Don't Know - Third Base: Bill Melton
  4. Why - Left Field: Carlos May
  5. Because - Center Field: Ken Berry
  6. Abbott - Right Field: Jay Johnstone
  7. I Don't Care - Shortstop: Luis Aparicio
  8. Today - Catcher: Ed Herrmann
  9. Tomorrow - Pitcher: Wilbur Wood
The right fielder is never named in the sketch. Pat Kelly pinch-hit for Johnstone in the 6th wearing “Costello”.

The confusion was not only limited to the fans in the stadium. The opposing team and the umpires struggled to keep track of the game, often leading to comical misunderstandings on the field. For instance, the umpire might have shouted, "Who's out!" only to be met with the response, "No, Who's on first!"

Though some traditional baseball fans were initially taken aback by the stunt, the majority embraced the humor, making the game one of the most memorable in White Sox history. It was a testament to Veeck's genius that he could seamlessly blend comedy with the sport he loved.

The "Who's on First?" game became a cherished part of baseball lore and added to the legend of Bill Veeck. It demonstrated his willingness to think outside the box, engage fans, and remind everyone that, at its core, baseball should be a source of fun and entertainment.

The 1972 Chicago White Sox "Who's on First?" April Fools' Day game captured the spirit of Bill Veeck's inventive approach to baseball. As we celebrate April Fools' Day this year, let's remember the time when the White Sox took the field with the most confusing lineup in baseball history and showed us all that, sometimes, laughter truly is the best medicine.

Wednesday, March 29, 2023

Alan Turing, The Opera

 

Last Thursday I attended the world premiere of The Life and Death(s) of Alan Turing, a new production from Chicago Opera Theater, composed by Justine Chen with the libretto (text) by David Simpatico.

The opera takes a mostly chronological trip through his life in seven scenes, focusing less on the computer science and more on Turing's struggle with homosexuality and his prosecution. Turing does fiddle with an old computer throughout the opera. The opera ends with a different take on his death (spoiler alert): he attempts to upload his consciousness to escape his chemically castrated body.

Between the scenes, a chorus chants random numbers and words as they floated on a scrim, in what the composer calls "chat clouds". 

These “chat clouds” transport the listeners with a sonic approximation of internet chatter, filled with information that brings them to the next moment. The aural depiction of these "chat clouds" was inspired by the animated movies and television series of Masamune Shirow's cyberpunk manga Ghost in the Shell. Another sonic influence was Walt Disney’s fantastical Snow White, one of Alan’s greatest obsessions.

I found them reminiscent of the knee plays from Philip Glass's Einstein on the Beach. I really enjoyed these sequences, though another academic I ran into during intermission felt otherwise.

Overall I enjoyed the music and the opera, particularly the courtroom scene where Turing gives an impassioned defense, though it turns out to be all in his head. The cast did well across the board, especially Jonathan Michie, who sang Turing.

Librettist David Simpatico (wearing jacket), Composer Justine Chen (in gown),
conductor Lidiya Yankovskaya behind her, and Jonathan Michie (in red robe)

One can't help but compare this opera to The (R)evolution of Steve Jobs, which I saw in Atlanta last year. Both operas chart a metaphysical journey of a computing giant through various important moments of their lives. During a Q&A I asked Simpatico about the two and he said he purposely avoided the Jobs opera so as not to affect how he wrote this one. Probably for the best.

Sunday, March 26, 2023

The SIGACT Book Review column list of books it wants reviewed

I am posting this for Nick Tran, who is the current SIGACT Book Review Editor. (Before him it was Fred Green for about 6 years, and before him it was me (Bill Gasarch) for 18 years. Nobody should have the job for more than 6 years. No TV show should go on more than 6 years. The 6-year rule probably has other applications.)

Nick asked me to post the list of books that need reviewing. It makes up most of this post.

If you spot one you want to review, then email him (email address below) the name of the book you want to review and your postal address so he can send it to you or have it sent to you. Here are his specs:

Reviews of recently published or bucket-list books of interest to the TCS community are welcome. Manuscripts (NOTE FROM BILL `manuscripts'? Really? Sounds like the kind of thing you would FAX or postal mail) should be between 3 and 6 pages and include a brief introduction, a detailed content summary, an assessment of the work, and a recommendation to the book's targeted audience. 

Nick's email is ntran@scu.edu

The books are: 

ALGORITHMS

Knebl, H. (2020).  Algorithms and Data Structures: Foundations and Probabilistic Methods for Design and Analysis. Springer.

Roughgarden, T. (2022). Algorithms Illuminated: Omnibus Edition. Cambridge University Press.


MISCELLANEOUS COMPUTER SCIENCE

Amaral Turkman, M., Paulino, C., & Müller, P. (2019). Computational Bayesian Statistics: An Introduction (Institute of Mathematical Statistics Textbooks). Cambridge University Press.

Nakajima, S., Watanabe, K., & Sugiyama, M. (2019). Variational Bayesian Learning Theory. Cambridge University Press.

Hidary, J. D. (2021). Quantum Computing: An Applied Approach (2nd ed.). Springer.

Apt, K. R., & Hoare, T. (Eds.). (2022). Edsger Wybe Dijkstra: His Life, Work, and Legacy (ACM Books). Morgan & Claypool.

Burton, E., Goldsmith, J., Mattei, N., Siler, C., & Swiatek, S. (2023). Computing and Technology Ethics: Engaging through Science Fiction. The MIT Press.


DISCRETE MATHEMATICS AND COMPUTING

O’Regan, G. (2020). Mathematics in Computing: An Accessible Guide to Historical, Foundational and Application Contexts. Springer Publishing.

Rosenberg, A. L., & Trystram, D. (2020). Understand Mathematics, Understand Computing: Discrete Mathematics That All Computing Students Should Know. Springer Publishing.

Liben-Nowell, D. (2022). Connecting Discrete Mathematics and Computer Science (2nd ed.). Cambridge University Press.


CRYPTOGRAPHY AND SECURITY

van Oorschot, P. C. (2020). Computer Security and the Internet: Tools and Jewels (Information Security and Cryptography). Springer.


COMBINATORICS AND GRAPH THEORY

Golumbic, M. C., & Sainte-Laguë, A. (2021). The Zeroth Book of Graph Theory: An Annotated Translation of Les Réseaux (ou Graphes), André Sainte-Laguë (1926) (Lecture Notes in Mathematics). Springer.

Beineke, L., Golumbic, M., & Wilson, R. (Eds.). (2021). Topics in Algorithmic Graph Theory (Encyclopedia of Mathematics and its Applications).  Cambridge University Press.


PROGRAMMING ETC.

Nielson, F., & Nielson, R. H. (2019). Formal Methods: An Appetizer. Springer.

Sanders, P., Mehlhorn, K., Dietzfelbinger, M., & Dementiev, R. (2019). Sequential and Parallel Algorithms and Data Structures: The Basic Toolbox. Springer.


MISCELLANEOUS MATHEMATICS

Kurgalin, S., & Borzunov, S. (2022). Algebra and Geometry with Python. Springer.


 

Thursday, March 23, 2023

A Strange Hiring Season

In my fall jobs post, I had trouble predicting this season's CS faculty job market, but I didn't expect this: strong supply and strong demand, something we haven't seen since the early '80s when the then-young CS departments were ramping up.

We have a confluence of two forces that have both strengthened since my post back in November: The layoffs and hiring freezes at the major internet companies (Alphabet, Amazon, Apple, Meta and Microsoft) contrasted with major excitement in machine learning. I wrote the jobs post before ChatGPT was released and Amazon and Meta have since announced even more layoffs.

ML-mania is leading to very strong demand from undergrad and grad students for computing degrees. Across the US we have a record number of open faculty slots in computing as universities try to meet that need.

Meanwhile, PhDs who might have gone to industry are going on the academic job market instead. Also some tech researchers who have been laid off or spooked by the layoffs are considering academic jobs.

Between these two forces we will likely have a record number of faculty hires this year but we may see fewer senior people switching universities because good departments can fill their needs with junior faculty.

There is a mismatch of areas. There is big demand in CS departments to hire in machine learning because that is where the student interest is, but ML is not where the big companies are cutting back. If you are on the market this year, position your research to make it relevant to learning, or at least make clear you are willing to teach ML courses.

By the way, this post is for computer science. Count yourself extremely lucky if you can get a tenure-track job in a non-computing field.

Sunday, March 19, 2023

New Upper Bound on R(k). WOW!

 R(k) is the least n such that for all 2-colorings of the edges of \(K_n\) there is a monochromatic \(K_k\)

(so there are k vertices such that the coloring restricted to the edges between those k vertices is constant)
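
To make the definition concrete, here is a brute-force verification of the classic fact that R(3) = 6 (a Python sketch for illustration only; this approach is hopeless well before k = 5):

    from itertools import combinations, product

    def has_mono_triangle(n, color):
        """color maps each edge (i, j), i < j, of K_n to 0 or 1."""
        return any(color[(a, b)] == color[(a, c)] == color[(b, c)]
                   for a, b, c in combinations(range(n), 3))

    def every_coloring_has_triangle(n):
        edges = list(combinations(range(n), 2))
        return all(has_mono_triangle(n, dict(zip(edges, c)))
                   for c in product((0, 1), repeat=len(edges)))

    print(every_coloring_has_triangle(5))  # False: the pentagon/pentagram coloring escapes
    print(every_coloring_has_triangle(6))  # True, so R(3) = 6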


Here is some history. If I missed a reference or a result, let me know in the comments.


\(R(k) \le 2^{2k} = 4^k \) seems to be folklore and is a very old result. The proof could be shown to HS students. I have. Or to 9-year-olds who are good at math. I have.


\(R(k)\le {2k-2 \choose k-1}\), which is approximately \(\frac{4^k}{\sqrt k}\), was shown by Erdos and Szekeres in 1935. The proof could be shown to HS students. I have. Or to 9-year-olds who are good at math. I have.
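
As a quick sanity check on that \(\sqrt k\) savings (my addition, using the standard asymptotics \({2m \choose m} \sim 4^m/\sqrt{\pi m}\)), the ratio of the folklore bound to the Erdos-Szekeres bound should grow like \(4\sqrt{\pi(k-1)}\):

```python
from math import comb, pi, sqrt

# Ratio of the folklore bound 4^k to binom(2k-2, k-1), which should
# grow like 4*sqrt(pi*(k-1)) by Stirling's approximation.
for k in [5, 10, 20, 40, 80]:
    ratio = 4**k / comb(2 * k - 2, k - 1)
    print(f"k={k:3d}  ratio={ratio:8.2f}  4*sqrt(pi*(k-1))={4 * sqrt(pi * (k - 1)):8.2f}")
```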


Thomason in 1988 got

$$R(k) \le \frac{4^k}{k^A\sqrt{\log k}}$$

for some A. This paper is behind paywalls, so it will be lost to history, and I cannot comment on whether the proof is easy or hard. (If you know of a link to it, let me know and I will provide it.)

Conlon in 2006 got the denominator to be super-polynomial. We omit the exact statement, but the paper is here. The proof used the method of quasi-randomness.

Sah (his paper is here) optimized Conlon's technique to get

$$R(k) \le 4^{k-c(\log k)^2}.$$

(According to the paper I will point to soon, which has the major result, Sah's result is the best one can do with the Thomason-Conlon technique. In complexity theory we would have formalized that and called it a barrier result.)

These results were interesting and used interesting techniques. However, the best known lower bound is \(R(k) \ge 2^{k/2}\) (I've left out constants), so the REAL questions are

a) Can the 4 be replaced with a smaller number in the upper bound?

b) Is there a number a so that R(k) is essentially \(2^{ak}\)?  

We note in passing that the lower bound was proven by the Prob Method. That last statement obscures the history: the Prob Method was invented by Erdos TO PROVE the lower bound. I've seen the Prob Method called an application of Ramsey Theory, which is not quite right. It's more like Ramsey INSPIRED a technique that is used A LOT, including for things that are actually practical.
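
In case a reader has not seen that argument, the calculation is short (a sketch; constants elided, as elsewhere in this post). Color each edge of \(K_n\) with one of two colors independently at random. A fixed set of k vertices is monochromatic with probability \(2\cdot 2^{-\binom{k}{2}}\), so

$$E[\text{number of monochromatic } K_k] \le \binom{n}{k} 2^{1-\binom{k}{2}},$$

and for \(n = 2^{k/2}\) this is less than 1 (the k! in \(\binom{n}{k}\le n^k/k!\) is what saves you). So some 2-coloring of \(K_n\) has no monochromatic \(K_k\), giving \(R(k) > 2^{k/2}\).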

But back to our story. SO, while all of the above results were interesting, were they making progress towards the REAL question? We can now say YES, as progress HAS been made and DID use some of the earlier techniques.

On March 16, 2023, Campos, Griffiths, Morris, and Sahasrabudhe posted a paper on arxiv, here, that showed

$$R(k) \le (4-\epsilon)^k$$

 for some (quite small) epsilon. 

Here are some thoughts, which are influenced by my email correspondence with Jacob Fox (an eminent combinatorialist) on this topic.

1) I am NOT surprised that the result is true. 

2) I AM surprised that it's been proven. Lots of brilliant people have worked on this problem for many years, so... why now? Since it's been open for around 70 years, I thought it would take another 70 years to crack.

3) Will this result lead to better results? I hope so!

4) Does R(k) have a nice upper bound? Not clear. The following is unlikely, though something like it may be possible:

R(k) roughly \((3.5)^k\) for k a power of a Fibonacci prime

R(k) roughly \((3.8)^k\) otherwise

5) (Jacob pointed me to this.) There is a paper on Book-Ramsey numbers that also (on page 2) notes a connection to Ramsey numbers. The paper is here. A conjecture on Book-Ramsey numbers would lead to a big improvement on Ramsey numbers. Those kinds of results are odd, since I can't tell if the upshot is

Let's work hard on Book-Ramsey so we can get better bounds on Ramsey!

or

Book-Ramsey is HARD, so let's give up. (Who first said: if at first you don't succeed, quit. Why make a damn fool of yourself?)

6) The paper with the new result is 57 pages. It looks dense. I will wait for the movie.  I may have to wait a long time. 

Thursday, March 16, 2023

Identities in Computational Complexity

Guest post by Josh Grochow

On the birdsite, Jay Cummings tweeted:


And it got me thinking about identities in computational complexity. At first I thought: how many could there be? 10? After some thought, and with some help, there turned out to be quite a few more than that, and I think they make a pretty good list!

Rules/comments on the list:

  • Some are relatively simple alternative characterizations, such as the various definitions of PH; most are pretty nontrivial theorems.
  • I didn't include existentially quantified oracle results (such as "there exists A such that P^A = NP^A"), despite them being some of my favorite results, because there'd be waaaay too many of those. Probably thousands? Ask Lance. In some circles he's known as the "oracle oracle": as in, he's who you go ask if you want to know whether a certain oracle exists. I did include identities involving oracles where one side of the identity didn't have an oracle, such as EXP = NP^{R_{Kt}} or AlmostP = BPP (AlmostP is the class of languages L such that {A : L is in P^A} has measure 1).
  • I also didn't include some important conditional equalities, such as "EXP in P/poly iff EXP=MA" or "NP in BPP iff NP=RP". I guess it's not really an "identity" if it's conditional. Games are only fun if the rules are at least a little constraining!
  • There are some surprising containments that either aren't equalities, or aren't known to be equalities, and I didn't include those either, despite some of them being very important. For example, BPP ⊆ Σ_2 P, other depth reduction results (such as the chasms at depth 4 and 3), and Beigel-Tarui/Yao.

One could teach a class based on a list like this and cover a lot of good ground, but I think it'd be sad to leave out any lower bounds. Actually, by my estimate, if you were teaching from "scratch" (say, starting after an undergrad algorithms class), this list, and all its implied prerequisite material, is already probably the equivalent of about 3-4 semester-long classes!

What other complexity identities did I miss?

Finally, without further ado, a list of (some) identities in computational complexity:

RP∩coRP=ZPP

CLS=PPAD∩PLS [from Paul Goldberg @paulwgoldberg]

quasi-poly Frege = quasi-poly noncommutative formula IPS

Space classes

PSPACE=NPSPACE

NL=coNL

L=SL

DET = L^{GapL} (see Update 1)

Interactive Proofs

NP=PCP(O(log n), 3)

IP=PSPACE

AM = MAM = AMAM = ... [From Albert Atserias @atserias]

MIP=NEXP=PCP(poly,poly)

Alternating Turing machines

Σ_k P = ∃·Π_{k-1} P = NP^{Σ_{k-1} P} = Σ_k TIME(poly(n)) [from Lance]

AP=PSPACE

APSPACE=EXP

Circuit classes within P

NC^1 = 5-PBP

ACC^0 = branching programs over solvable monoids

P = AuxPDA

SAC^1 = LogCFL [from Michael Levet @Michael_Levet]

REG = DSPACE(O(1)) = NSPACE(O(1)) [from Levet]

REG = DTIME_{1-tape}(o(n log n))

AC^k = log^k time on a CRCW PRAM

Quantum

QIP=PSPACE

MIP*=CE

QMA_{log}(1) = BQP [from Martin Schwarz @martin_schwarz]

QMA_{log}(poly) = QCMA [ditto]

QAC^0_f = QNC^0_f [from Ale `Scinawa' Luongo @scinawa]

QMA(k) = QMA(2) for any k ≥ 2 [from Sarvagya Upadhyay @sarvagya82]

Algebraic complexity

VP = VNC^2

VDET = VABP = VP_s = VP_{ws}

VP_e = VABP_3

border-VP_e = border-VABP_2

For bilinear maps, tensor rank = Θ(algebraic circuit size)

Logical characterizations

FO = AC^0

AC=NC=FO[poly(log)]

∃SO = NP

P = FO + LFP on ordered structures [thanks to Lance, and Michael Levet]

Kolmogorov-random strings as oracles

EXP = NP^{R_{Kt}}

PSPACE = ZPP^{R_{KS}}

P = COMP ∩ {dtt reducible to R_{KU}}

Almost-classes

AlmostP=BPP

AlmostNP=AM

AlmostPSPACE = BP_{exp}·PSPACE


Updates
  1. March 17th update, h/t Eric Allender: Using Cook's original definition of DET as the class of problems NC^1-reducible to the integer determinant, apparently this equality is not known! See Eric's recent guest column in SIGACT News for details and a $1000 prize related to this question, along with many other interesting open questions.