Sunday, December 08, 2024

My comments on Lance's Favorite Theorems

In Lance's last post (see here) he listed his favorite theorems from 1965 to 2024.
There are roughly 60 Theorems. I mostly agree with his choices and omissions. I will point out where I don't.

I could make a comment on every single entry; however, that would be madness! Madness I say!

Instead, here are some random thoughts.  (Is that Random as in Random Restriction or Random as in Random Access Machine?  I leave that an exercise for the reader.)

I) 1965-1974

MANY BASIC RESULTS WITH EASY PROOFS.
EXAMPLE:
The Cook-Levin Theorem: the definitions of P and NP, and the proof that SAT is NPC.

ANSWERS A QUESTION:
Ladner: Answers a very important question: YES, if P NE NP then there are
intermediary sets (sets in NP that are neither in P nor NPC). The set is constructed for the sole purpose of not being in P or NPC. Graph Isomorphism and Factoring are natural candidates for being intermediary.

SEEMS TO HAVE BEEN FORGOTTEN:
Blum: Abstract Complexity Theory. Seems to not be taught anymore. I give a corollary for our young readers who might not know it:

There is a decidable set A such that if A is in DTIME(T(n)) then A is in DTIME(log T(n)). Hence A cannot be assigned a complexity. (The set A is constructed for the sole purpose of having this property. There are no natural examples, or even candidates, for sets with this behavior.)

I might disagree with putting this on the list, since it has not stood the test of time; however, it still seems important.


II) 1975-1984.

This may be my favorite decade on the list; however, it's been said that everyone thinks the best music is from when they were a teenager.

EXAMPLES:

INTERESTING THEOREMS WITH INTERESTING PROOFS:
Everything on the list is in this category but I pick out three:

Baker-Gill-Solovay Oracles: The basic paper for my thesis work.

Furst-Saxe-Sipser: Parity is not computable by constant-depth polynomial-size circuits. A meaningful lower bound on a natural problem! It was motivated by an open question about oracles (separating PH from PSPACE); however, circuit complexity quickly became a field unto itself. Which is more interesting, the circuit lower bound or the oracle corollary? I would vote for the circuit lower bound. The issue was discussed here.

Valiant-Permanent. That the Permanent is #P-complete is a theorem I've learned and forgotten many times. Scott much later gave a proof that may be more intuitive for some people (I am not one of them); see here. The only theorem I've learned-and-forgotten more often is the Hales-Jewett Theorem.

 
III) 1985-1994.

A Decade of Surprises!

Barrington: Branching programs more powerful than we thought!

Toda: #P is more powerful than we thought!

LFKN: IP is more powerful than we thought! Bonus: it used non-relativizing methods! (This result was not on the list but would have been if anybody except L or F or K or N had written the list.)

Nisan: Randomization is less powerful than we thought!

Lance did not have Factoring in Quantum P on the list for 1985-1994. It came out towards the end of 1994, so it ended up missing both the 1985-1994 list and the 1995-2004 list. Reminds me of the Betty White awards, see here.  I do not think we disagree on the importance and merit of the result, though we do disagree about altering the past: I would have included it in the post he did recently and explained that it was a late addition to an old list.

IV) 1995-2004.

In 1990 a theorist told me that he could teach EVERYTHING known in complexity theory in a year-long graduate course.  Even then that was not quite right, and he may have really meant that he could teach everything he thought was important. By 1995 this was no longer true. The PCP result alone would take a few months.

Theory begins to get really hard. Most of the papers before 1992 I have read and understood. Most of the papers after 1992 I have not, though I know what's in them. Sometimes not even that!

PCP: Many problems are hard to approximate.  Good to know, bad that it's true. Proofs are hard!

Raz's Parallel Repetition: Really useful in later non-approximability results (my favorite: Set Cover lower bounds; see this survey here) but also Really Hard to read.

Most of the other papers are also hard and important.

AKS- Primality in P- not that hard. Indeed, surprising that it was not proven until 2002.

Lance did not include the work on Natural Proofs by Razborov and Rudich. He says why in a blog post here. I disagree with him- I would have put it in a top theorems list.

V) 2005-2014.

Some upper bounds and some lower bounds. By now it was hard to have a surprising result since our intuitions were no longer firm enough for anything to be surprising. (There is one exception in the 2015-2024 list.)

Reingold-Undirected Connectivity in Log Space: great result! I wish the proof was easier. I think people thought this would be true.

Lots of interesting lower bounds: hardness of Nash Equilibrium, the Unique Games Conjecture, a new proof of the PCP theorem. None of these was surprising, though perhaps the fact that we could prove things about these concepts is surprising.

JJUW-QIP=PSPACE. Really! I was shocked to find out that was true. No, I wasn't. I didn't understand QIP well enough. Was this surprising or not? Was the fact that this could be proven surprising or not?


VI) 2015-2024.

No real theme here though they all have hard proofs. I discuss a few.

Babai's Graph Isomorphism result is the only one in this decade that I can explain to Darling. And she has a Master's Degree in SE so she knows stuff. (I recently told her the result that for all 2-colorings of \(R^6\) there is a mono unit square (see here). She was unimpressed.)

BZ- Dichotomy: Excellent result; it explains the lack of natural intermediary problems among constraint satisfaction problems.

CZ-Extracting Ramsey Graphs: An example of TCS helping to prove a result in math, though it also shows that the border between the two is thin. Obviously a favorite of mine.

JNVWY- MIP* = RE. This surprised people, including me.

Wednesday, December 04, 2024

Favorite Theorems: The Complete List

Now in one place all of my sixty favorite theorems from the six decades of computational complexity (1965-2024).

2015-2024

2005-2014

1995-2004

1985-1994

To mark my first decade in computational complexity during my pre-blog days, I chose my first set of favorite theorems from that time period for an invited talk and paper (PDF) at the 1994 Foundations of Software Technology and Theoretical Computer Science (FST&TCS) conference in Madras (now Chennai), India. The links below go to the papers directly, except for Szelepcsényi’s, which I can't find online.
1975-1984 (From 2006)

1965-1974 (From 2005)


Will I do this again in ten years when I'm 70? Come back in 2034 and find out.

Sunday, December 01, 2024

Conway's Trick for Divisibility. Asking about its complexity is an odd question.

 (I got this material from a nice article by Arthur Benjamin here.)

 Conway suggested the following trick to determine if a number is divisible by each of the following: 

2,3,5,7,11,13,17,19,31

Note that

\( 152=2^3\times 19\)

\(153 =3^2 \times 17\)

\(154=2  \times 7 \times 11\)

\(155=5 \times 31\)

\(156=2^2 \times 3 \times 13 \)

Here is the Div trick:

a) Input N

b) Divide N by 150 and note the quotient q and the remainder r. So

 N=150q+r

r=N-150q 

Subtract q from r a few times: 

Note that

r-q = N-150q-q = N-151q

r-2q=N-152q

AH HA!- since \(152=2^3\times 19\), if 19 divides r-2q then 19 divides N. So divide r-2q by 19. (Note that r-2q is much smaller than N. Small enough to make this technique feasible? That is the question!)

r-3q=N-153q.

AH HA!- if 17 divides r-3q then 17 divides N. So divide r-3q by 17.

r-4q=N-154q

AH HA!- since \(154=2\times 7\times 11\), if 7 (or 11) divides r-4q then 7 (or 11) divides N. So divide r-4q by 7 and by 11.

r-5q=N-155q

AH HA- if 31 divides r-5q then 31 divides N. So divide r-5q by 31.

r-6q=N-156q

AH HA- if 13 divides r-6q then 13 divides N. So divide r-6q by 13.

Complexity: with 1 division, 6 subtractions, and 6 divisions of small numbers (\(r<150\) and \(q\le N/150\)) you find out the divisibility by 7, 13, 17, 19, 31. For 2, 3, 5, 11 there are well-known tricks to use, OR you can test those as well, for example by dividing r-4q = N-154q by 11.
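To make the bookkeeping concrete, here is a short Python sketch of the 150 trick. The function name and the testing loop are mine; the offsets and primes come from the factorizations above.

```python
# A sketch of the 150 trick as described above.
# Key fact: N = 150q + r, so r - kq = N - (150+k)q, and hence for any prime p
# dividing 150+k we have p | N  iff  p | r - kq.

def conway_div_150(N):
    """Return which of 7, 11, 13, 17, 19, 31 divide N, using one division by 150
    followed by a few subtractions and divisions of the (smaller) numbers r - kq."""
    q, r = divmod(N, 150)
    # offset k -> primes dividing 150 + k (from the factorizations of 152..156 above)
    checks = {2: [19], 3: [17], 4: [7, 11], 5: [31], 6: [13]}
    found = []
    for k, primes in checks.items():
        s = r - k * q            # equals N - (150+k)*q, possibly negative
        found += [p for p in primes if s % p == 0]
    return sorted(found)

# sanity check against plain trial division
for N in range(1, 50000):
    assert conway_div_150(N) == [p for p in (7, 11, 13, 17, 19, 31) if N % p == 0]
```

Whether this beats six straight trial divisions of N is exactly the question in the points below.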

Some Points

1) Is this method faster than just dividing N by the numbers (and using tricks for 2,3,5,11)? You would need to get into whether addition/subtraction is faster than division, and look at the size of the numbers involved.

2) Is this method practical? For hand calculation YES. For computers it would be easy to code up, but the main question of this post remains: is it better than just dividing N by the numbers?

3) Are there larger runs of numbers that pick up more divisors? Yes. We present one. The order will look funny but we explain it later.

\(2000=2^4 \times 5^3 \) (you could skip this one, though dividing by 2000 is easier than by 2001)

\(2001=23\times 29\times 3\) (one would divide r-q = N-2001q by both 23 and 29)

\(2002=7\times 11\times 13\times 2\)

\(1998=37\times 54\)

\(2006=2\times 17\times 59\)

\(2010=67\times 30\)

\(2014=19\times 53\times 2\)

\(2013=61\times 33\)

\(2015=31\times 65\)

\(2009=41\times 49\)

\(2021=43\times 47\)

The order was suggested by Conway so that the algorithm at every step adds or subtracts one of q, 2q, 4q, 6q, 8q, or 12q. So after you get q you can compute these multiples of q once and use them throughout.

I leave it to the reader to count the number of divisions, subtractions, and size of the numbers involved.

4) For cracking RSA this technique is useless since RSA uses numbers of the form pq where p and q are large primes. For factoring randomly generated numbers I would be curious if this method is better than just dividing by numbers.

5) Project: find other sequences like those above that cover more prime factors.
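For point 5 (and to see what makes runs like 152-156 or the one near 2002 good), here is a minimal search sketch in Python. The window size, base range, and scoring are arbitrary choices of mine; a real search would also want to weight which primes get covered.

```python
# Search for bases B such that B-W, ..., B+W together have many distinct prime
# factors; as with 150 (and 2000), one division by B then yields divisibility
# tests for all of those primes via r - kq (or r + kq).

def prime_factors(n):
    """Distinct prime factors of n by trial division (fine for small n)."""
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def coverage(base, window):
    """All primes dividing some number within `window` of `base`."""
    primes = set()
    for k in range(-window, window + 1):
        primes |= prime_factors(base + k)
    return primes

def best_bases(lo, hi, window, top=5):
    scored = sorted(((len(coverage(b, window)), b) for b in range(lo, hi)), reverse=True)
    return scored[:top]

print(sorted(coverage(150, 6)))        # the example at the top of the post
print(best_bases(100, 1000, 6))        # small bases with a symmetric window of 6
```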



Monday, November 25, 2024

We Will All Write Like AI

Will our writing all converge to a generic AI style? 

Let's take a quick detour into LaTeX. Back in the late '80s, before LaTeX was the standard, there was TeX—a system with no default formatting, which meant everyone had their own unique style for papers. Then LaTeX arrived, and suddenly all our papers looked polished and professional. But the catch was, LaTeX only got you about 80% of the way there. The original manual even mentioned that you needed to add some extra TeX commands to really finish the job. Most of us didn’t bother, though, and soon all our papers had that same uniform look. It was efficient, and it was good enough. Microsoft Word ended up doing the same thing for everyone else—you could tweak it to be different, sure, but most people didn’t. It turns out most of us are just fine with "good enough."

I generally don't like to use large language models to write for me. But I do use them to refine my work, to catch errors or suggest improvements. The thing is, AI likes to take what I write and make it sound smoother, and I often think, "Wow, that really does sound better." But here’s the tricky part: it might be better, but it’s not mine. It’s the AI's voice. And still, if the stakes aren’t too high, sometimes I let the AI’s version slide. That’s the start of a slippery slope. Before you know it, we’re all letting AI make our writing a bit more generic, a bit more uniform. And eventually, we end up writing to match the AI’s preferred style.

For this blog post, I didn’t resist at all. Could you tell this is ChatGPT's style?

Wednesday, November 20, 2024

For what d is the following true: for all 2-colorings of \(R^d\) there is a mono unit square (Answering(?) the Question)

 In my last post (see here) I invited you to work on the following question:

Find a \(d\) such that

--There is a 2-coloring of \(R^d\) with no mono unit square.

--For all 2-colorings of \(R^{d+1}\) there is a mono unit square. 

Actually, I should have phrased my question as: what do we know about \(d\)?

Here is what we know

a) \(d \ge 2\).  There is a 2-coloring of  \(R^2\) with no mono unit square. This is easy and I leave it to you.

b) \(d\le 5\). For all 2-colorings of \(R^6\) there is a mono unit square. I will give pointers to the relevant papers and to my slides later in this post.

c) \(d\le 4\). For all 2-colorings of \(R^5\) there is a mono unit square. This is by an observation about the proof for \(R^6\). It will be in the slides about \(R^6\).

d) \(d\le 3\). This is in a paper that the reader Dom emailed me a pointer to. Dom is better at Google Search than I am. The link is here.

MY SLIDES:

\(K_6\) is the complete graph on 6 vertices. We will be looking at 2-colorings of its edges

\(C_4\) is the cycle on 4 vertices. A mono \(C_4\) has all four edges the same color.

We need a result by Chvátal and Harary in this paper here.

Lemma: For all 2-colorings of the edges of \(K_6\) there is a mono \(C_4\).

The proof appears both in their paper, here, and on slides I wrote here.
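The lemma is also small enough to check by brute force. Here is a quick Python check (not a proof technique, just a sanity check; the enumeration of 4-cycles is mine):

```python
# Check: every 2-coloring of the 15 edges of K_6 contains a monochromatic C_4.
from itertools import combinations, product

edges = list(combinations(range(6), 2))                # the 15 edges of K_6
four_cycles = []
for a, b, c, d in combinations(range(6), 4):
    # the 3 distinct 4-cycles on the vertex set {a, b, c, d}
    for w, x, y, z in [(a, b, c, d), (a, b, d, c), (a, c, b, d)]:
        four_cycles.append([tuple(sorted(e)) for e in [(w, x), (x, y), (y, z), (z, w)]])

def has_mono_C4(color):                                # color: dict edge -> 0 or 1
    return any(len({color[e] for e in cyc}) == 1 for cyc in four_cycles)

assert all(has_mono_C4(dict(zip(edges, bits))) for bits in product((0, 1), repeat=15))
print("every 2-coloring of the edges of K_6 has a mono C_4")
```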

Stefan Burr used this to prove the following theorem.

Thm: For all 2-colorings of \(R^6\) there is a mono unit square. 

The proof appears (with credit given to Stefan Burr) in a paper by Erdős, Graham, Montgomery, Rothschild, Spencer, and Straus, here, and on slides I wrote here.
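For concreteness, here is a sketch of how the lemma gives the theorem. This is my reconstruction and may differ in details from the writeups linked above. Let \(e_1,\ldots,e_6\) be the standard basis of \(R^6\) and, for each edge \(\{i,j\}\) of \(K_6\), let \(p_{ij}=\frac{1}{\sqrt{2}}(e_i+e_j)\). A 2-coloring of \(R^6\) colors these 15 points and hence the edges of \(K_6\), so by the lemma there is a mono \(C_4\), say on the edges \(\{i,j\},\{j,k\},\{k,l\},\{l,i\}\). Then \(p_{ij}-p_{jk}=\frac{1}{\sqrt{2}}(e_i-e_k)\) and \(p_{jk}-p_{kl}=\frac{1}{\sqrt{2}}(e_j-e_l)\) are orthogonal unit vectors, and \(p_{kl}-p_{li}=-(p_{ij}-p_{jk})\), so the four points \(p_{ij},p_{jk},p_{kl},p_{li}\) are the corners of a mono unit square.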

Random Points

1) It is open what happens in \(R^3\). 

2) The proof for \(R^6\) uses very little geometry. Dom had a proof for \(R^6\) in a comment on my last post that used geometry. The proof for \(R^4\) uses geometry. 

3) An ill-defined open question: Find a proof that every 2-coloring of \(R^4\) has a mono unit square that does not use that much geometry, so that I can make slides about it more easily.



Sunday, November 17, 2024

For what d is the following true: for all 2-colorings of \(R^d\) there is a mono unit square (Asking the Question)

 In this post I give a question for you to think about. 

My next post will have the answer and the proof. 

1) The following are known and I have a set of slides about it here

a) For all 2-colorings of \(R^2\) there exist two points an inch apart that are the same color. (You can do this one.)

b) For all 3-colorings of \(R^2\) there exist two points an inch apart that are the same color. (You can do this one.)

c) For all 4-colorings of  \(R^2\) there exist two points an inch apart that are the same color. (You cannot do this one.)

2) SO, let's look at other shapes.

A unit square is a square with all sides of length 1.

Given a coloring of \(R^d\), a mono unit square is a unit square with all four corners the same color.

a) There is a 2-coloring of \(R^2\) with no mono unit square. (You can do this one.)

b) What is the value of d such that 

-- There is a 2-coloring of  \(R^d\) with no mono unit square.

-- For all 2-colorings of \(R^{d+1}\) there is a mono unit square. 

My next post will tell you what is known about this problem.

Until then, you are invited to think about it and see what you can find out. Perhaps you will get a better result than what is known since you are untainted by conventional thinking. Perhaps not.

Feel free to leave comments. However, if you don't want any hints then do not read the comments.



Thursday, November 14, 2024

Favorite Theorems: Learning from Natural Proofs

October Edition

I had a tough choice for my final favorite theorem from the decade 2015-2024. Runners up include Pseudodeterministic Primes and Hardness of Partial MCSP.  But instead, in memory of the recently departed Steven Rudich, this month's favorite theorem shows how to learn circuit classes that have a natural proof of hardness.

Marco L. Carmosino, Russell Impagliazzo, Valentine Kabanets and Antonina Kolokolova

In 1993, Linial, Mansour and Nisan showed that any function f computed by constant-depth polynomial-size AND-OR-NOT circuits (AC0) has a quasipolynomial-time PAC learning algorithm that, after seeing a quasipolynomial number of uniformly randomly chosen inputs x and their values f(x), can with high probability predict f on other randomly chosen inputs. Their proof works by looking at the Fourier basis of f, showing that learning the low-degree coefficients gives a good approximation to f. Their paper was one of the first to apply Fourier transformations to computational complexity. Adam Klivans did a guest post on this paper back in 2004.
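To make the Fourier idea concrete, here is a minimal sketch of low-degree learning from uniform samples. The toy target, the degree bound, and the sample sizes are illustrative choices of mine, not from the paper.

```python
# Low-degree (Fourier) learning in the spirit of Linial-Mansour-Nisan:
# estimate the coefficients on sets of size <= d from random samples of (x, f(x)),
# then predict with the sign of the resulting low-degree polynomial.
import itertools, random

def chi(S, x):
    """Parity character chi_S(x) = (-1)^{sum_{i in S} x_i} for x in {0,1}^n."""
    return -1 if sum(x[i] for i in S) % 2 else 1

def learn_low_degree(f, n, d, samples=10000):
    data = [tuple(random.randint(0, 1) for _ in range(n)) for _ in range(samples)]
    # empirical Fourier coefficient hat{f}(S) = average of f(x) * chi_S(x)
    coeffs = {S: sum(f(x) * chi(S, x) for x in data) / samples
              for k in range(d + 1) for S in itertools.combinations(range(n), k)}
    return lambda x: 1 if sum(c * chi(S, x) for S, c in coeffs.items()) >= 0 else -1

n, d = 6, 3
f = lambda x: 1 if (x[0] and x[1] and x[2]) else -1     # toy target: an AND of 3 bits (Fourier degree 3)
g = learn_low_degree(f, n, d)
test = [tuple(random.randint(0, 1) for _ in range(n)) for _ in range(2000)]
print(sum(f(x) == g(x) for x in test) / len(test))      # accuracy on fresh inputs
```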

Now, suppose we allow unbounded fan-in parity gates in the circuit. The Fourier argument falls apart, and learning such circuits remained open for over a quarter century until the Carmosino et al. paper. Their paper uses a different technique, building on the proof of the Nisan-Wigderson pseudorandom generator. They show that if a circuit class has a natural proof of hardness, the constructivity and largeness properties of the natural proof can be used to break a pseudorandom generator for that class, which can then be used to create a learning algorithm.

The authors then applied the Razborov and Rudich naturalization of Razborov and Smolensky's lower bounds for AC0 with parity gates to get a quasipolynomial learning algorithm for that circuit class.

Monday, November 11, 2024

Steven Rudich (1961-2024)

Complexity theorist Steven Rudich passed away on October 29 at the age of 63. His works on Natural Proofs and Program Obfuscation were both highly influential. Russell Impagliazzo had a great result with him showing that one-way permutations alone aren't enough to get secret key exchange.

I started graduate school in 1985 at Berkeley, where Steven was the unofficial student leader of the theory group when I arrived, and I got a taste of his optimistic style before I moved to MIT the following summer. I'd see Rudich at various conferences and during visits to each other's institutions. I first learned about natural proofs from a talk he gave at the University of Chicago, and we had many debates about its importance for years. Alas, health issues took their toll and we lost one of the great researchers and personalities in our field.

Impagliazzo has been a friend and colleague of Steven's since they were both undergrads at Wesleyan, so we asked him to write a guest post about Steven. Russell sent us a very honest and personal 7-page writeup. It's too long for a blog post, but I strongly recommend reading this powerful story. Also see Scott Aaronson's memories.