Monday, December 23, 2024

Complexity Year in Review

Back in the day (circa 1989) we studied locally random reductions, which would lead to all those exciting interactive proof results. Somehow locally random reductions got rebranded as locally correctable codes, and this year's result of the year settled a long-standing open question.

Pravesh Kothari and Peter Manohar

Roughly: if you want a code where each bit can be recovered as a linear combination of three other appropriately chosen bits, even in the presence of a constant fraction of errors, you're going to need a very long code. More in Quanta.

Things Bill wanted me to mention in this post: R(5), the new Mersenne prime, Busy Beaver, Vazirani's delayed proof, formal verification of the sum-check protocol and AI song generation.

2024 was quite a year, we saw a computational complexity theorist, Avi Wigderson, win the Turing Award and computer scientists win Nobel Prizes in both chemistry and physics. Also some elections, wars and college protests. It's all a prelude to a perfect storm for US higher education with the oncoming trains of the new administration, artificial intelligence, fiscal challenges and the demographic cliff. Hang on tight, it's going to be a bumpy ride.

We remember Rance Cleaveland, Peter Higgs, Thomas Kurtz, Phil Lewis, Steven Rudich, Frank Ryan, Jim Simons, Luca Trevisan, Dominic Welsh and Niklaus Wirth.

We thank all our guest posters and collaborators Eric Allender, Martin Bullinger, Max Burkes, James De Santis, Mohammad Hajiaghayi, Neil Immerman, Valentine Kabanets, Harry Lewis and Larry Washington.

Enjoy the holidays and we'll see you in January. 

Wednesday, December 18, 2024

Information is Physical?

I've heard a few times recently the phrase "Information only exists in a physical state". It comes from the quantum computing world, where they claim quantum changes the game when it comes to representing information.

As one who has spent his career studying theoretical information that has never existed and never will exist in a physical state, how can we reckon with such a statement? For starters let's consider the set of all primes--how does that infinite set exist in our finite world?

Information is physical, but not directly; rather, it is physical through its description. We can discuss a computational process, or more generally a mathematical model, that captures the set of all primes, and we can (and do) store that description physically.

Let's consider a single prime, the recently discovered Mersenne prime \(2^{136279841}-1\). Note how we must describe the number in a very compressed format, certainly not as a collection of \(2^{136279841}-1\) ping pong balls or even \(2^{136279841}-1\) atoms, far more than the roughly \(2^{266}\) atoms in the observable universe.
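To see just how compressed that description is, here is a tiny Python sketch (mine, just for illustration): the one-line description determines a number whose decimal expansion runs to roughly 41 million digits.

```python
from math import log10

p = 136279841                      # the entire description of 2^p - 1 fits in one line
digits = int(p * log10(2)) + 1     # number of decimal digits of 2^p - 1
print(f"2^{p} - 1 has {digits:,} decimal digits")   # about 41 million digits
```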

In a similar fashion, a large language model stores information through its weights--not a direct encoding of the sentences it can generate. 

Now let's think of quantum computing. The quantum algorithm is always classically defined. All the information in quantum states has a classical description. An entangled quantum state may require an exponentially large explicit description, but the algorithm generating it provides a short classical physical description. So if we allow information to be physically represented only by its description, then it's hard to argue that quantum is somehow special. There are differences in how quantum works, but when we try to simplify the message, it can confuse people into thinking quantum is more powerful than it really is.


Sunday, December 15, 2024

Random Thoughts on AI (Human Generated)

 (I wrote this post without any AI help. OH- maybe not- I used spellcheck. Does that count? Lance claims he proofread it and found some typos to correct without any AI help.)


I saw a great talk on AI recently by Bill Regli, who works in the field. 

Announcement of the talk: here

Video of the talk:  here

-----------------------------------------------
1) One item Bill R mentioned was that AI requires lots of energy, so Three Mile Island is being reopened. See here.

Later I recalled the song

        The Girl from 3-Mile Island

to the tune of

        The Girl from Ipanema.

The song is in my audio tape collection but that is not useful, so I looked for it on the web. The copy on YouTube doesn't work; however, this website of songs about Three Mile Island here includes it.

In the 1990's I was in charge of the Dept Holiday Entertainment since I have an immense knowledge of, and collection of, novelty songs- many in CS and Math.

Today- My talents are no longer needed as anyone can Google Search and find stuff. I did a blog on that here. I still have SOME advantage since I know what's out there, but not as much. Indeed, AI can even write and sing songs. I blogged about that and pointed to one such song here.

SO, some people's talents and knowledge are becoming obsolete. On the level of novelty songs I am actually HAPPY that things change: I can access so much stuff I could not before. But humans becoming obsolete is a serious issue of employment and self-worth. Far more serious than MACHINES TAKE OVER THE WORLD scenarios.

---------------------------------------------------------
2) When technology made farming jobs go away, manufacturing jobs took their place. That was true in the LONG run, but in the SHORT run there were starving ex-farmers. The same may happen now.

(ADDED LATER: someone emailed me that machines taking over farming and other things have caused standards of living to go up. YES, I agree: in the LONG run very good, but in the short run people did lose their livelihoods.)

Truck Drivers and Nurses may do better than Accountants and Lawyers:

Self-driving trucks are 10 years away and always will be.
Nurses need to have a bedside manner that AI doesn't (for now?).

One ADVANTAGE of AI is that if it makes white collar workers lose jobs the government might get serious about

Guaranteed Basic Income, and

Univ. Health care

(ADDED LATER: someone emailed me that GBI is not the way to go. Okay, then I should rephrase: when white collar workers lose their jobs, the problem of a social safety net will suddenly become important.)

Similar: If global warming makes the Cayman Islands sink then suddenly global warming will be an important problem to solve.

------------------------------------------------
3) An example of AI taking away jobs is the Writers Strike.

OLD WAY: There were 10 people writing Murder She Wrote Scripts.

NEW WAY: An AI generates a first draft and it only takes 2 people to polish it.

KEY: In a murder mystery the guilty person is an innocuous character you saw in the first 10 minutes or a celebrity guest star. Sometimes the innocuous character is the celebrity guest star.

-------------------------------------------------
4) ChatGPT and school and cheating.

Calculator Scenario: We will allow students to use ChatGPT as we now allow calculators. Students are not as good at arithmetic, but we don't care. Is ChatGPT similar?

Losing battle scenario: Ban ChatGPT.

My solution, which works (for now): ask questions that ChatGPT is not good at, allow ChatGPT, and insist that the students understand their own work and admit they used it. Works well in grad courses and even senior courses. Might be hard in freshman courses.

Lance's Solution: Stop giving out grades. See here.

----------------------------------------------
5) Bill R said that we will always need humans who are better at judgment.

Maybe a computer has better judgment. I blogged on this here

 --------------------------------------------------
6) I asked two AI people at lunch if the AI revolution is just because of faster computers and hence is somewhat limited. They both said YES.

SO- could it be that we are worrying about nothing?

This also may be an issue with academia: if we hire lots of AI people because it's a hot area, it may cool off soon. Actually I thought the same thing about Quantum Computing, but I was wrong there.

----------------------------------------------
7) LLMs use LOTS of energy. If you get to ask one How do we solve global warming? it might say

First step: Turn me off!

----------------------------------------------
8) Scott did a great  blog post about the ways AI could go. See here.

--------------------------------
9) I recently emailed Lance a math question.

He emailed me the answer 5 minutes later.

I emailed that I was impressed.

He emailed that he just asked ChatGPT. He had not meant to fool me; he just assumed I would assume that. Like if you asked me what 13498*11991 was and I answered quickly, you would assume I used a calculator. And if there is a complicated word in this post that is spelled correctly then you would assume I used spellcheck, and there is no embarrassment in that.

--------------------------------
10) If a painting is done with AI does any human get credit for it?

I always thought that people who forge paintings that look JUST LIKE (say) a van Gogh should be able to be honest about what they do and get good money, since it LOOKS like a van Gogh; who cares that it is NOT a van Gogh. Same with AI: we should not care that a human was not involved.

IF an AI finds a cure for cancer, Great!

If an AI can write a TV series better than the human writers, Great!

--------------------------------------------------------
11) AI will force us to make moral choices. Here is a horrifying scenario:

Alice buys a self-driving car and is given some options, essentially the trolley problem:

If your car has to choose who to run over, what do you choose?

You have the option of picking by race, gender, age, who is better dressed, anything you want.

-------------------------------------------------------
12) Climate Change has become a political problem in that

Democrats think it IS a problem
Republicans think it is NOT a problem

Which is a shame since free-market solutions that would normally appeal to Republicans are not being tried (e.g., a carbon tax). Indeed, we are doing the opposite: some states impose a tax on hybrid cars.


SO- how will AI go with politics? Scenarios

a) Dems are for regulation, Reps are against it. Elon Musk worries about AI and he is a powerful Rep so this might not happen.  Then again, he supports Reps, many of whom have made E-cars in their states harder to get or own.

(ADDED LATER: I originally said states had BANNED e-cars. A commenter inquired which states did this so I looked it up. NONE, so I amended the post. Some states are making it harder to have an E-car:

Extra fees for owning an E-car. See here. The reasoning given is that E-cars don't pay gas taxes. While that is true, I don't really buy it: Republicans seem to be against ALL taxes EXCEPT those on E-cars (and lately tariffs).

Want to phase out E-cars: see here.

In many states it is illegal to sell cars directly to customers; they must go through dealers. This has nothing to do with e-cars but is clearly an idiotic law. E-car makers are trying to get around it, and there is pushback on that, see here.

)

 
b) AI-doomsayers want more regulation, AI-awesomers do not, and this cuts across party lines.

c) We will ignore the issue until it's too late.

If I were a betting man ...

----------------------------------------------------------
13) International cooperation on being careful with AI. Good luck with that.

My cynical view: International Treaties only work when there is nothing at stake

The chemical weapons ban works because such weapons are hard to use anyway.

The treaty on exploring Antarctica was working until people found stuff there they wanted. It is now falling apart.

Wednesday, December 11, 2024

It's Time to Stop Using Grades

We use grades to evaluate students and motivate them to learn. That works as long as grades remain a reasonably good measure of how well the student understands the material in a class. But Goodhart's law, "When a measure becomes a target, it ceases to be a good measure," cannot escape even this most basic of academic measurements. Grades become irrelevant or even worse, counterproductive, as chasing grades may undermine a student's ability to master the material. So perhaps it is time to retire the measure.

Grading became a weaker measure due to grade inflation and academic dishonesty. Let's do a short dive into both of these areas.

The average grade has increased about a full grade level since I went to college in the '80s, and now more than half of all grades given are A's. As college tuition increased, students started thinking of college more transactionally, expecting more from their college experience while putting less effort into classes. Administrators put more weight on student course surveys for faculty evaluation, and the easiest way to improve scores is to give higher grades. And repeat.

If everyone gets an A, no one gets an A. It just becomes harder to distinguish the strong students from the merely good.

Academic dishonesty goes back to the beginning of academics but has advanced dramatically with technology. In my fraternity, we had filing cabinets full of old homework and exams ostensibly to use as study guides. However, if a professor reused questions from year to year, one could gain an unfair advantage.

With the growth of the Internet, Chegg, and more recently large-language models, those looking for an edge never had it so good. ChatGPT o1 can answer nearly any undergraduate exam question in any field; it even got an easy A when I tested it with one of my undergraduate theory of computing finals.

AI becomes like steroids: those who don't use it find themselves at a disadvantage. If a pretty good student sees their peers using LLMs, they'll start using them as well, initially just as a learning aid. But there's a very fine line between using AI as a study guide and using AI to give you the answers. Many fall down a slippery slope, and this starts to undermine the mastery that comes with tackling problems on your own.

We can try to counter all this by returning to harsher grading and more heavily weighting in-person, no-tech exams, but these approaches cause other problems. Already we see companies and graduate schools devalue grades and focus on projects and research instead.

So let's acknowledge this endgame and just eliminate grades, maybe keeping only Pass and Fail for those who don't even show up. The ones who want to master the material can focus on doing so. Others can concentrate on working on projects. Still others can earn their way to a degree with little effort but also with little reward.

Sunday, December 08, 2024

My comments on Lance's Favorite Theorems

In Lance's last post (see here) he listed his favorite theorems from 1965 to 2024.
There are roughly 60 Theorems. I mostly agree with his choices and omissions. I will point out where I don't.

I could make a comment on every single entry; however, that would be madness! Madness I say!

Instead, here are some random thoughts.  (Is that Random as in Random Restriction or Random as in Random Access Machine?  I leave that an exercise for the reader.)

I) 1965-1974

MANY BASIC RESULTS WITH EASY PROOFS.
EXAMPLE:
The Cook-Levin Theorem. P, NP, and SAT is NP-complete.

ANSWERS A QUESTION:
Ladner: Answers a very important question: YES, if \(P \ne NP\) there are
intermediate sets. The set is constructed for the sole purpose of being neither in P nor NP-complete. Graph Isomorphism and Factoring are natural candidates for being intermediate.

SEEMS TO HAVE BEEN FORGOTTEN:
Blum: Abstract Complexity Theory. Seems to not be taught anymore. I give a corollary for our young readers who might not know it:

There is a decidable set A such that if A is in DTIME(T(n)) then A is in DTIME(log T(n)). Hence A cannot be assigned a complexity. (The set A is constructed for the sole purpose of having this property. There are no natural examples, or even candidates, for sets that have this behavior.)

I might disagree with putting this on the list. It has not stood the test of time; however, it still seems important. 


II) 1975-1984.

This may be my favorite decade on the list; however, it's been said
that everyone thinks the best music was made when they were a teenager.

EXAMPLES:

INTERESTING THEOREMS WITH INTERESTING PROOFS:
Everything on the list is in this category but I pick out three:

Baker-Gill-Solovay Oracles: The basic paper for my thesis work.

Furst-Saxe-Sipser: Parity is not in constant depth. A meaningful lower bound on a natural problem! Motivated by an oracle open question (separating PH from PSPACE); however, circuit complexity quickly became a field unto itself. Which is more interesting: the circuit lower bound or the oracle corollary? I would vote for the circuit lower bound. The issue was discussed here.

Valiant-Permanent: Perm is #P-complete is a theorem I've learned and forgotten many times. Scott much later gave a proof that may be more intuitive for some people (I am not one of them); see here. The only theorem I've learned-and-forgotten more is the Hales-Jewett Theorem.

 
III) 1985-1994.

A Decade of Surprises!

Barrington: Branching programs more powerful than we thought!

Toda: #P is more powerful than we thought!

LFKN: IP is more powerful than we thought! Bonus: used non-relativizing methods! (This result was not on the list but would have been if anybody except L or F or K or N had written the list.)

Nisan: Randomization is less powerful than we thought!

Lance did not have Factoring in Quantum P on the list for 1985-1994. It came out towards the end of 1994, so it ended up missing both the 1985-1994 list and the 1995-2004 list. Reminds me of the Betty White awards, see here. I do not think we disagree on the importance and merit of the result, though we disagree about altering the past: I would have included it in the post he did recently and explained that it was a late add to an old list.

IV) 1995-2004.

In 1990 a theorist told me that he could teach EVERYTHING known in complexity theory in a year-long graduate course. Even then that was not quite right; he may have really meant that he could teach everything he thought was important. By 1995 this was no longer true. The PCP result alone would take a few months.

Theory begins to get really hard. Most of the papers before 1992 I have read and understood. Most of the papers after 1992 I have not, though I know what's in them. Sometimes not even that!

PCP: Many problems are hard to approximate. Good to know, bad that it's true. Proofs are hard!

Raz's Parallel Repetition: Really useful in later non-approximability results (my favorite: Set Cover lower bounds, see this survey here) but also Really Hard to read.

Most of the other papers are also hard and important.

AKS- Primality in P- not that hard. Indeed, surprising that it was not proven until 2002.

Lance did not include the work on Natural Proofs by Razborov and Rudich. He says why in a blog post here. I disagree with him- I would have put it in a top theorems list.

V) 2005-2014.

Some upper bounds and some lower bounds. By now it was hard to have a surprising result since our intuitions were not so firm as to be surprised. (There is one exception in the 2015-2024 list.)

Reingold-Undirected Connectivity in Log Space: great result! I wish the proof were easier. I think people thought this would be true.

Lots of interesting lower bounds: Nash Equilibrium, the Unique Games Conjecture, a new PCP proof. None of which was surprising, though perhaps the fact that we could prove things about these concepts is surprising.

JJUW-QIP=PSPACE. Really! I was shocked to find out that was true. No, I wasn't. I didn't understand QIP well enough. Was this surprising or not? Was the fact that this could be proven surprising or not?


VI) 2015-2024.

No real theme here though they all have hard proofs. I discuss a few.

Babai-Graph Isomorphism is the only result in this decade that I can explain to Darling. And she has a Master's Degree in SE so she knows stuff. (I recently told her the result that for all 2-colorings of \(R^6\) there is a mono unit square (see here). She was unimpressed.)

BZ-Dichotomy: Excellent result that explains the lack of natural intermediate problems.

CZ-Extracting Ramsey Graphs: An example of TCS helping to prove a result in math, though it also shows that the border between the two is thin. Obviously a favorite of mine.

JNVWY- MIP* = RE. This surprised people, including me.

Wednesday, December 04, 2024

Favorite Theorems: The Complete List

Now in one place all of my sixty favorite theorems from the six decades of computational complexity (1965-2024).

2015-2024

2005-2014

1995-2004

1985-1994

To mark my first decade in computational complexity during my pre-blog days, I chose my first set of favorite theorems from that time period for an invited talk and paper (PDF) at the 1994 Foundations of Software Technology and Theoretical Computer Science (FST&TCS) conference in Madras (now Chennai), India. The links below go to the papers directly, except for Szelepcsényi’s, which I can't find online.

1975-1984 (From 2006)

1965-1974 (From 2005)


Will I do this again in ten years when I'm 70? Come back in 2034 and find out.

Sunday, December 01, 2024

Conway's Trick for Divisibility. Asking its complexity is an odd question.

 (I got this material from a nice article by Arthur Benjamin here.)

Conway suggested the following trick to determine if a number is divisible by each of the following:

2, 3, 5, 7, 11, 13, 17, 19, 31

Note that

\( 152=2^3\times 19\)

\(153 =3^2 \times 17\)

\(154=2  \times 7 \times 11\)

\(155=5 \times 31\)

\(156=2^2 \times 3 \times 13\)

Here is the Div trick:

a) Input N

b) Divide N by 150 and note the remainder. So

 N=150q+r

r=N-150q 

Subtract q from r a few times: 

Note that

r-q = N-150q-q = N-151q

r-2q=N-152q

AH HA!- if 19 divides r-2q then 19 divides N. So divide r-2q by 19. (Note that r-2q is much smaller than N. Small enough to make this technique feasible? That is the question!)

r-3q=N-153q.

AH HA!- if 17 divides r-3q then 17 divides N. So Divide r-3q by 17.

r-4q=N-154q

AH HA- if 11 divides r-4q then 11 divides N. So divide r-4q by 11. (And since \(154=2\times 7\times 11\), if 7 divides r-4q then 7 divides N.)

r-5q=N-155q

AH HA- if 31 divides r-5q then 31 divides N. So Divide r-5q by 31.

r-6q=N-156q

AH HA- if 13 divides r-6q then 13 divides N. So Divide r-6q by 13. 

Complexity: with 1 division, 6 subtractions, and 6 divisions of small numbers (\(r\le 150\) and \(q\le N/150\)) you find out divisibility by 7, 13, 17, 19, and 31. For 2, 3, 5, and 11 there are well-known tricks to use, OR you can test those as well by (for example) dividing r-4q by 11 (since 11 divides 154). A code sketch follows below.
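To make the bookkeeping concrete, here is a minimal Python sketch of the trick (the function name and the test value are mine, not Conway's). The key fact: if p divides 150+k then p divides N = (150+k)q + (r-kq) if and only if p divides r-kq.

```python
def conway_tests(N):
    """Return the subset of {7, 11, 13, 17, 19, 31} dividing N,
    using one division by 150 plus a few small subtractions and divisions."""
    q, r = divmod(N, 150)                    # the one big division
    offsets = {19: 2, 17: 3, 7: 4, 11: 4, 31: 5, 13: 6}   # p divides 150+k
    return {p for p, k in offsets.items() if (r - k * q) % p == 0}

print(conway_tests(152 * 153))   # {17, 19}, since 152*153 = 2^3 * 3^2 * 17 * 19
```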

Some Points

1) Is this method faster than just dividing N by the numbers (and using tricks for 2, 3, 5, 11)? You would need to get into whether addition is faster than division, and look at the sizes of the numbers involved.

2) Is this method practical? For hand calculation YES. For computers it would be easy to code up, but the main question of this post remains: is it better than just dividing N by the numbers?

3) Are there larger runs of numbers that pick up more divisors? Yes. We present one. The order will look funny but we explain it later.

\(2000=2^4 \times 5^3 \) (you could skip this one, though dividing by 2000 is easier than by 2001)

\(2001=23\times 29\times 3\) (would divide r-q by both 23 and 29)

\(2002=7\times 11\times 13\times 2\)

\(1998=37\times 54\)

\(2006=17\times 29\times 2\)

\(2010=67\times 30\)

\(2014=19\times 53\times 2\)

\(2013=61\times 33\)

\(2015=31\times 65\)

\(2009=41\times 49\)

\(2021=43\times 47\)

The order was suggested by Conway so that the algorithm at every step adds or subtracts one of q, 2q, 4q, 6q, 8q, or 12q relative to the previous value. So after you get q you can precompute these multiples. (A quick sanity check in code follows.)
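Here is a two-line Python check of that claim, using the run above in Conway's order:

```python
run = [2000, 2001, 2002, 1998, 2006, 2010, 2014, 2013, 2015, 2009, 2021]
steps = [b - a for a, b in zip(run, run[1:])]
print(steps)   # [1, 1, -4, 8, 4, 4, -1, 2, -6, 12]: each step is one of the six multiples
```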

I leave it to the reader to count the number of divisions, subtractions, and size of the numbers involved.

4) For cracking RSA this technique is useless since RSA uses numbers of the form pq where p and q are large primes. For factoring randomly generated numbers I would be curious whether this method is better than plain trial division.

5) Project: find other sequences like those above that cover more prime factors.



Monday, November 25, 2024

We Will All Write Like AI

Will our writing all converge to a generic AI style? 

Let's take a quick detour into LaTeX. Back in the late '80s, before LaTeX was the standard, there was TeX—a system with no default formatting, which meant everyone had their own unique style for papers. Then LaTeX arrived, and suddenly all our papers looked polished and professional. But the catch was, LaTeX only got you about 80% of the way there. The original manual even mentioned that you needed to add some extra TeX commands to really finish the job. Most of us didn’t bother, though, and soon all our papers had that same uniform look. It was efficient, and it was good enough. Microsoft Word ended up doing the same thing for everyone else—you could tweak it to be different, sure, but most people didn’t. It turns out most of us are just fine with "good enough."

I generally don't like to use large language models to write for me. But I do use them to refine my work, to catch errors or suggest improvements. The thing is, AI likes to take what I write and make it sound smoother, and I often think, "Wow, that really does sound better." But here’s the tricky part: it might be better, but it’s not mine. It’s the AI's voice. And still, if the stakes aren’t too high, sometimes I let the AI’s version slide. That’s the start of a slippery slope. Before you know it, we’re all letting AI make our writing a bit more generic, a bit more uniform. And eventually, we end up writing to match the AI’s preferred style.

For this blog post, I didn’t resist at all. Could you tell this is ChatGPT's style?

Wednesday, November 20, 2024

For what d is the following true: For all 2-colorings of \(R^d\) there is a mono unit square (Answering(?) the Question)

 In my last post (see here) I invited you to work on the following question:

Find a \(d\) such that

--There is a 2-coloring of \(R^d\) with no mono unit square.

--For all 2-colorings of \(R^{d+1}\) there is a mono unit square. 

Actually I should have phrased my question as What do we know about d?  

Here is what we know

a) \(d \ge 2\). There is a 2-coloring of \(R^2\) with no mono unit square. This is easy and I leave it to you.

b) \(d\le 5\). For all 2-colorings of \(R^6\) there is a mono unit square. I will give pointers to the relevant papers and to my slides later in this post.

c) \(d\le 4\). For all 2-colorings of \(R^5\) there is a mono unit square. This is by an observation about the proof for \(R^6\). It will be in the slides about \(R^6\).

d) \(d\le 3\). This is in a paper that the reader Dom emailed me a pointer to. Dom is better at Google Search than I am. The link is here.

MY SLIDES:

\(K_6\) is the complete graph on 6 vertices. We will be looking at 2-colorings of its edges

\(C_4\) is the cycle on 4 vertices. A mono \(C_4\) has all four edges the same color.

We need a result by Chvátal and Harary from this paper here.

Lemma: For all 2-colorings of the edges of \(K_6\) there is a mono \(C_4\).

The proof appears both in their paper, here, and on slides I wrote here.
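The lemma is also small enough to verify by brute force. Here is a minimal Python sketch (mine, not from the paper) that checks all \(2^{15}\) 2-colorings of the edges of \(K_6\):

```python
from itertools import combinations

edges = list(combinations(range(6), 2))        # the 15 edges of K_6
cycles = []                                    # all 4-cycles, each as a list of 4 edges
for a, b, c, d in combinations(range(6), 4):
    # the 3 distinct 4-cycles on the vertex set {a, b, c, d}
    for p, q, r, s in [(a, b, c, d), (a, b, d, c), (a, c, b, d)]:
        cycles.append([tuple(sorted(e)) for e in [(p, q), (q, r), (r, s), (s, p)]])

def has_mono_c4(color):                        # color: dict mapping edge -> 0 or 1
    return any(len({color[e] for e in cyc}) == 1 for cyc in cycles)

# exhaust all 2^15 = 32768 edge colorings
assert all(has_mono_c4({e: (mask >> i) & 1 for i, e in enumerate(edges)})
           for mask in range(1 << 15))
print("every 2-coloring of the edges of K_6 has a mono C_4")
```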

Stefan Burr used this to prove the following theorem.

Thm: For all 2-colorings of \(R^6\) there is a mono unit square. 

The proof appears (with credit given to Stefan Burr) in a paper by Erdős, Graham, Montgomery, Rothschild, Spencer, and Straus, here, and on slides I wrote here.

Random Points

1) It is open what happens in \(R^3\). 

2) The proof for \(R^6\) uses very little geometry. Dom had a proof for \(R^6\) in a comment on my last post that used geometry. The proof for \(R^4\) uses geometry. 

3) An ill-defined open question: Find a proof that every 2-coloring of \(R^4\) has a mono unit square that does not use that much geometry, so that I can make slides about it more easily.



Sunday, November 17, 2024

For what d is the following true: for all 2-colorings of \(R^d\) there is a mono unit square (Asking the Question)

 In this post I give a question for you to think about. 

My next post will have the answer and the proof. 

1) The following are known and I have a set of slides about it here

a) For all 2-colorings of \(R^2\) there exist two points an inch apart that are the same color. (You can do this one.)

b) For all 3-colorings of \(R^2\) there exist two points an inch apart that are the same color. (You can do this one.)

c) For all 4-colorings of \(R^2\) there exist two points an inch apart that are the same color. (You cannot do this one.)

2) SO, let's look at other shapes.

A unit square is a square with all sides of length 1.

Given a coloring of \(R^d\) a mono unit square is a unit square with all four corners the same color. 

a) There is a 2-coloring of \(R^2\) with no mono unit square. (You can do this one.)

b) What is the value of d such that 

-- There is a 2-coloring of  \(R^d\) with no mono unit square.

-- For all 2-colorings of \(R^{d+1}\) there is a mono unit square. 

My next post will tell you what is known about this problem.

Until then, you are invited to think about it and see what you can find out. Perhaps you will get a better result than what is known since you are untainted by conventional thinking. Perhaps not.

Feel free to leave comments. However, if you don't want any hints then do not read the comments.



Thursday, November 14, 2024

Favorite Theorems: Learning from Natural Proofs

October Edition

I had a tough choice for my final favorite theorem from the decade 2015-2024. Runners-up include Pseudodeterministic Primes and Hardness of Partial MCSP. But instead, in memory of the recently departed Steven Rudich, this month's favorite theorem shows how to learn circuit classes that have a natural proof of hardness.

Marco L. Carmosino, Russell Impagliazzo, Valentine Kabanets and Antonina Kolokolova

In 1993, Linial, Mansour and Nisan showed that any function f computed by constant-depth polynomial-size AND-OR-NOT circuits (AC0) has a quasipolynomial-time PAC learning algorithm that, after seeing a quasipolynomial number of uniformly randomly chosen inputs x and their values f(x), can with high probability predict f on other randomly chosen inputs. Their proof works by looking at the Fourier expansion of f, showing that learning the low-degree coefficients gives a good approximation to f. Their paper was one of the first to apply Fourier transforms to computational complexity. Adam Klivans did a guest post on this paper back in 2004.
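For readers who have not seen the low-degree algorithm, here is a minimal Python sketch of the idea on a toy function (the target f, the sample count, and the degree cutoff are illustrative choices of mine, not parameters from the paper): estimate each low-degree Fourier coefficient empirically, then predict with the sign of the truncated expansion.

```python
import itertools, random

n = 4
def f(x):                         # toy target on n bits, outputs in {0, 1}
    return x[0] ^ (x[1] & x[2])

def chi(S, x):                    # Fourier character chi_S(x) = (-1)^(sum of x_i, i in S)
    return (-1) ** sum(x[i] for i in S)

random.seed(0)
xs = [tuple(random.randint(0, 1) for _ in range(n)) for _ in range(5000)]
data = [(x, (-1) ** f(x)) for x in xs]        # work with the +/-1-valued version of f

degree = 2                                    # keep only low-degree coefficients
coeffs = {}
for d in range(degree + 1):
    for S in itertools.combinations(range(n), d):
        # empirical estimate of hat{f}(S) = E[f(x) * chi_S(x)]
        coeffs[S] = sum(y * chi(S, x) for x, y in data) / len(data)

def hypothesis(x):
    # predict with the sign of the low-degree approximation to f
    return 0 if sum(c * chi(S, x) for S, c in coeffs.items()) >= 0 else 1
```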

Now, suppose we add unbounded fan-in parity gates to the circuit. The Fourier argument falls apart, and learning such circuits remained open for over a quarter century until the Carmosino et al. paper. Their paper uses a different technique, building on the proof of the Nisan-Wigderson pseudorandom generator. They show that if a circuit class has a natural proof of hardness, then the constructivity and largeness properties of the natural proof can be used to break a pseudorandom generator for that class, which in turn can be used to create a learning algorithm.

The authors then applied the Razborov and Rudich naturalization of Razborov and Smolensky's lower bounds for AC0 with parity gates to get a quasipolynomial learning algorithm for that circuit class.

Monday, November 11, 2024

Steven Rudich (1961-2024)

Complexity theorist Steven Rudich passed away on October 29 at the age of 63. His works on Natural Proofs and Program Obfuscation were both highly influential. Russell Impagliazzo had a great result with him showing that one-way permutations alone aren't enough to get secret key exchange.

I started graduate school in 1985 at Berkeley; Steven was the unofficial student leader of the theory group when I arrived, and I got a taste of his optimistic style before I moved to MIT the following summer. I'd see Rudich at various conferences and during visits to each other's institutions. I first learned about natural proofs from a talk he gave at the University of Chicago, and we had many debates about its importance for years. Alas, health issues took their toll and we lost one of the great researchers and personalities in our field.

Impagliazzo has been a friend and colleague of Steven's since they were both undergrads at Wesleyan and we asked him to write a guest post about Steven. Russell sent us a very honest and personal 7-page writeup. Too long for a blog post but I strongly recommend reading this powerful story. Also see Scott Aaronson's memories.

Thursday, November 07, 2024

Higher Education Under Trump

It feels eerie, as pretty much everyone seemingly avoided talking about the election. But Trump back in the White House will likely have a profound effect on US colleges and universities. Trump is no fan of universities and his vice president once proclaimed "the professors are the enemy".

While most US universities act outside of federal control, almost all of them take Federal funds in scholarships and grants and Trump could use that as a lever to push his policies.

Now Trump is always unpredictable and the courts could step in but here is what could happen.

International Students

I still remember Trump's ban on Iranians in the US and what it meant to our students and faculty at Georgia Tech. While that thankfully got blocked by the courts, who knows what will happen in this next administration.

International students at all levels in higher ed play a major role in increasing both the intellectual and financial strengths of universities. International students, especially from countries like China, may have a harder time coming to or staying in the US under a Trump administration, or may have less desire to do so.

On the other hand, last June Trump said "What I want to do and what I will do is, you graduate from a college, I think you should get automatically, as part of your diploma, a Green Card to be able to stay in this country." If he means it, that will greatly increase interest among foreign students, but I doubt he will carry it out.

Student Loan Forgiveness
Not going to happen

Title IX
University Title IX offices get whiplash with each change of administration. Expect Title IX (sex discrimination) protections to be watered down and eliminated for transgender individuals. 

Look at Republican States
Trump may try to apply the policies of Republican-led states to all universities: dismantling DEI efforts, weakening tenure protection, allowing campus carry of guns, and imposing "free speech" policies while limiting what can be taught and cracking down on protests.

Accreditation
Trump and Republicans are no fans of accreditation, and Trump has threatened to replace accreditors to enforce certain points of view.

Sunday, November 03, 2024

A new Mersenne Prime Has Been Found. When will we find the next one?

(Lance posted on the search for Mersenne primes in 2006 after a new one was discovered. I will comment on his post later. ADDED LATER: YouTube video by Matt Parker on the new prime, here.)

A Mersenne Prime is a prime of the form \(2^n-1\) where n is a natural number. One can show that n must be a prime. There is a list of all known Mersenne primes here. A more complete list of large primes going back to 1456 (which is not prime) when they discovered that 8191 is prime, is here.
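For the curious, the standard way to certify a Mersenne prime is the Lucas-Lehmer test. Here is a minimal Python sketch (GIMPS of course uses heavily optimized FFT-based arithmetic, not this):

```python
def is_prime(n):                  # trial division, just to restrict to prime exponents
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def lucas_lehmer(p):
    """For odd prime p: 2^p - 1 is prime iff s ends at 0, where s starts
    at 4 and is repeatedly squared minus 2, all mod 2^p - 1."""
    if p == 2:
        return True               # special case: 2^2 - 1 = 3 is prime
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print([p for p in range(2, 130) if is_prime(p) and lucas_lehmer(p)])
# [2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127]: the Mersenne exponents up to 127
```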

New Mersenne primes were found in 

1985

1992 (so a 7 year gap)

1994

1996

1997

1998

1999

2001

2003

2004

2005

2005 (wow, two in one year!)

2006

2008

2008 (wow, two in one year!)

2009

2013 (so a 4 year gap)

2016

2017

2018

2024 (so a 6 year gap)

When will the next one be? Possibilities:

a) The techniques that got us to the one in 2024 are NEW and will be USED to get us some new ones soon. 

b) The techniques that got us to the one in 2024 were the old techniques on their last legs. A new idea is needed either from number theory, algorithms, or hardware. So we may not find one for a while.

I do not know which of (a) or (b) is closer to the truth. It may be hard to tell. 

There are two obstacles for making progress on numbers like this.

a) Technology, math, etc.

b) Sociology. Do enough people care about finding these numbers? 

AND these two obstacles can interact to give you either

a) a death spiral: the problem is too hard so people don't want to work on it, and people don't want to work on it because the problem is too hard, or

b) a live spiral: a breakthrough was made so let's all now really go for it!

I do not know how Mersenne primes do in this context. 

In 2006 Lance's post raised the question Why Do We Care About Finding Large Primes? 

a) We don't. Not that many people are working on this.  

b) The actual primes aren't that interesting, but the techniques used to find them might be. 

c) Why do people climb mountains?

Because they like getting frostbite!

Better off finding primes which you can do without risking life and limb. 

d) As someone who has worked on proving the infinitude of primes using Ramsey Theory, I am reluctant to tell someone else that their work is not worthwhile. I blogged on using Ramsey Theory to prove that the number of primes is infinite here.

e) I leave the following as an open problem: Compare and contrast value and interest in finding the following:

Large Mersenne Primes

Largest non-Mersenne Prime. Mersenne primes are easier to find, so most known large primes are Mersenne. Finding large non-Mersenne primes has been looked at, see here, but not much.

The Busy Beaver Function. Scott did a post on the new breakthrough here.

Small Universal Turing Machines. I did a blog post on that here. There is a cash prize for it!

Ramsey Numbers, in particular R(5). I did a blog post on the new breakthrough here.

Wednesday, October 30, 2024

FOCS 2024

Junior/Senior lunch in 80°F Chicago

Last summer I attended the Complexity Conference in Ann Arbor for the first time in eight years largely because it was within driving distance. So with FOCS in Chicago this year I didn't have much of an excuse to skip it. I haven't attended a FOCS in person since the 2010 conference in Vegas, though I have been at a few STOCs in that span. 

The 65th IEEE Symposium on Foundations of Computer Science is being held this week at the voco hotel, a beautiful venue in Wolf's Point where the three branches of the Chicago river meet—though you'd never know it from the windowless conference venue on the 14th floor. I'm enjoying reconnecting with colleagues old and new. Sometimes you can go back home again.

Even with triple sessions, we had 12 minute talks for most papers with longer online versions linked from the schedule page. I liked the 12-minute format; with 20 minutes, speakers tend to rush through proofs, while here, they could take their time giving high-level overviews of their papers.

Stats: 274 registrants. 157 students. 16 international. Large compared to recent years but it's not a huge conference. 133 accepted papers chosen among 463 submissions from a program committee of 47 members.

The Knuth Prize winner Rajeev Alur gave his lecture on Specification-guided Reinforcement Learning. 

The Machtey Prize for best student papers goes to Brice Huang for Capacity Threshold for the Ising Perceptron and Meghal Gupta, Mihir Singhal and Hongxun Wu for Optimal Quantile Estimation: Beyond the Comparison Model.

Best paper awards go to Bernhard Haeupler, Richard Hladík, Václav Rozhoň, Robert Tarjan and Jakub Tětek for Universal Optimality of Dijkstra via Beyond-Worst-Case Heaps (Quanta Article) and Mohsen Ghaffari and Christoph Grunau for Near-Optimal Deterministic Network Decomposition and Ruling Set, and Improved MIS.

Test of Time Awards

In 2025 FOCS goes down under to Sydney, Australia, December 15-17.

Sunday, October 27, 2024

Random Thoughts on the Election

 Here are my random thoughts on the election:

1) Here is a list of things I DONT care about

a) Candidates' gender or race. The people who say it's about time we had a female president might not want to vote for a President Marjorie Taylor Greene. (A while back I thought that the first African-American president and/or first female president would be a Republican, since such a candidate might take some usually-Democratic voters into the Republican camp. One of many predictions I was wrong about.) I will note here that Kamala has rarely brought up I will be the first female prez.

b) Candidates' personal lives. Attempts to tie a candidate's affairs to their policies never made sense to me.

c) How well the candidate did in school. I care what they know and don't know now, and also if they know what they don't know. (Who was the last president to not have a college degree? I will tell you at the end of this post.)

d) Their religion. There were people who agreed with JFK on policy issues but did not want to vote for him because he was Catholic. I didn't understand it then nor do I understand it now. Biden is Catholic but this rarely comes up. Romney was Mormon and this only came up in the Republican primary, not in the general.  So I am glad it is no longer an issue. Having said that, we still haven't had a Jewish president, a Muslim President, or an Atheist president. 

e) Do I care if candidate X will benefit me personally? It is very hard to tell. Someone like Elon Musk is clearly going to support Kamala since she believes global warming is real and that e-cars will be a part of dealing with it. This is an example of someone supporting a candidate because it benefits them personally. Oh, bad example.

 Vance thinks people vote this way as he recently said that women past child-bearing age should not care about abortion rights.

f) There are ads that say things like I served in the military defending our country, and now I want to defend your rights to XXX. I don't see how serving in the military makes them a better senator or whatever.

g) I don't care how they are doing in the polls. Some of the articles saying that candidate X is ahead also tend to say that that shows X is better. I campaigned for George McGovern in 1972 and he went on to lose by a lot. Some of my friends told me that I backed a loser and tried to make that an insult (I have better friends now). This puzzled me then, since the fact that my candidate lost says NOTHING about how he would have done as president.

2) DO I care if they lie? Depends on how much and about what. That Vance changed his mind about Trump (he was anti-Trump in 2016) or that Kamala changed her mind on fracking are the standard lies that politicians always tell, so I don't care about that. This may be a reflection of the low standards we have. More serious are lies that they KNOW are false and might ENDANGER people.

3) DO I care if they changed their mind on an issue? All I care about is how they feel NOW, though if they changed their mind I might not believe them.

4) I  DO care about policy and ability to get things done and to consider all sides of an issue. (I won't say what policies I like since I am trying to keep this post non-partisan).

5) Some of the attacks on Kamala have been that a woman should not be president. This puzzles me since there will come a time when the Republicans have a female candidate.

6) I am surprised there is anyone who is still undecided. We know both candidates  VERY WELL. What more information do you need?

7) Articles like Five Reasons Kamala Will Win or Five Reasons Trump Will Win are usually crap. For example, one of the reasons Kamala will win is People are tired of  Trump's corruption. But that just means that the author of the article is tired of it, and the author does not speak for the American people. An argument like Kamala is ahead in 5 of the 7 swing states (if it were true) is the kind of argument I want to see. More to the point- an argument that someone will win should not be partisan.

8) Since JD Vance might lose Trump some votes (actually I doubt that), I have heard the old argument that Sarah Palin cost McCain the presidency, or at least cost him some votes. I began to think, do we really know that? So I looked at some articles, both new and old, about this. I found things like:

The American public did not want Sarah Palin a heartbeat away from the presidency because of her views, or because of her perceived lack of intelligence or blah blah. Hence she cost McCain votes.

NONE of the articles I read pointed to polls or other EVIDENCE for this point of view.

There probably is some evidence on the issue (I do not know which way it would go)  somewhere but the LACK of any INTEREST in it bothers me.

9) I am surprised there are any undecided voters at this point. Both candidates are known to the public, so I don't see what more there is to think about. I accidentally already said this; however, (a) I don't want to mess up my numbering and (ii) I do feel strongly about this point, so it's worth having twice.

10) Because of America's two-party system you end up voting for people you agree with on some things but not others. I can imagine Mike Pence saying:

I prefer Trump to Kamala on the abortion issue.

I prefer Kamala to Trump on the  Hang Mike Pence issue.

Gee, who do I vote for?

 Actually he has said he is voting for neither one. 

11) IF Kamala wins then the decision to make Tim Walz her VP will be seen as genius.

      IF Kamala does not win Pennsylvania and loses the election then NOT picking Josh Shapiro (popular governor of PA) will be seen as stupid.

I call bullshit on both of those. There are SO MANY factors  at play here. This reminds me of when a basketball team wins 100-99 and the sportscasters are saying things like

The decision to concentrate on 3-point shots was genius!

12) More generally, after-the-fact pundits all have their own theories about why candidate X won, even those who were confident that candidate Y was going to win, and those theories are either partisan or just uninformed. It is hard to know what works, even after the fact.

13) I had a post, here, about breaking the record for number-of-living-ex-presidents.  I speculated that  if

a) Biden won in 2020 and lost in 2024 (I overlooked the option of his not running.)

b) Carter lives to see the winner of the 2024 election inaugurated

THEN we would have SIX living ex-presidents, which would break the record. They would be 

Carter, Clinton, Bush Jr, Obama, Trump, Biden.

If Trump wins then should Trump count? I think so- he would be a president AND an ex-president. If Kamala wins I won't have that issue. When I wrote that I didn't think Carter would last this long (he is 100), but he may very well. In fact, he has already voted!

14) Allan Lichtman is an American historian (does that mean he studies American history or that he is American AND a historian?) who has a system to predict who will win the presidential election; it has been correct in 9 of the last 10 elections. See here for his system. He predicts Kamala. I personally do not put too much stock in that since it's going to be VERY CLOSE (hmmm, that's my prediction, and I could be wrong). While he is a Democrat his system is OBJECTIVE; it would have to be for him to be right so often. Even so, he is getting hate mail. This makes no sense. Predicting that X will happen does not mean you have an opinion about whether X is good or bad.

15) When people argue passionately about a rules change (e.g., get rid of the electoral college) I do not believe them- they would argue the other way if the current rules favored their candidate. 

16) JD Vance recently came out against the laws that say you can't wear political stuff at the polling place. He said that BOTH Kamala supporters and Trump supporters should be allowed to wear political T-shirts and hats and whatnot at the polling place. NO HE DIDN'T. He applauded a Trump supporter who called a poll worker a XXX and told her to YYY, for asking him to follow the polling place's rules prohibiting political merchandise. See here. (I use XXX and YYY so that this post won't get censored, as happened in the past, see here. Oddly enough, this morning the ban was lifted, so for the original post that was banned see here.) No word yet on whether he is going to praise a Texas man for punching a poll worker who told him to remove his Trump hat, see here.

17) Nikki Haley bravely tells Trump that he should not call Kamala the c-word. Actually it was not brave at all; here is her quote:

You've got affiliated PACs that are doing commercials about calling Kamala the c-word. Or you have speakers at Madison Square Garden referring to her and her pimps. This is not the way to win women.

So being nasty to Kamala is fine in itself, but Trump shouldn't do it since it will lose him votes. 



The last president who did not have a college degree was Harry Truman.


Wednesday, October 23, 2024

Family Feud vs Pointless

Every now and then I feel like doing a Gasarchian post. This is one of those weeks. I'm going to look at the mathematics behind the American game show Family Feud and the British Pointless. I caught a glimpse of Pointless while I was in Oxford over the summer and then got hooked for a while on old episodes on YouTube. 

In both shows, 100 people are surveyed to give elements in a category, like "Robert Redford Films" and you get points based on how many people match the player's or team's answer. After that the similarity ends. In Family Feud you want to maximize your points, in Pointless you want to get as few as possible.

In Family Feud the categories are vague and not fact checked. If you say "The Hustler" and five other people said "The Hustler" you would get five points, even though Redford did not appear in that film. The only sanity check is that you need at least two matches to get points. The questions are often humorous or risqué like "Things people do right before going to sleep".

In Pointless if you said "The Hustler" you would get 100 points for a wrong answer. To define wrong answers, the category has to have a very formal definition: "Any movie where Robert Redford has a credited role released before 2023. TV movies and documentaries do not count. Voice-over animated films do count."

Pointless often has British-based questions. I can't name any professional darts players but apparently there are several famous ones in the UK. 

Other differences: In Family Feud, each person surveyed gives one answer. In Pointless, each person surveyed has 100 seconds to give as many answers as they can. Family Feud ends with a quick round of five categories where two players need to get 200 points total. Pointless ends where the 2-player team needs to find a pointless answer in a category.

How would AI do in these shows? I asked ChatGPT to come up with an obscure Robert Redford movie and it came up with a good one, Situation Hopeless -- But Not Serious and for a popular one Butch Cassidy and the Sundance Kid. When I asked "give me one thing people do before they go to sleep" it gave me "check their phones". AI wants us to think the last thing we should do at night is talk to AI.

Family Feud has a British version named Family Fortunes. A US version of Pointless never got past the pilot. We're not a country that likes to aim for zero. 

Sunday, October 20, 2024

Contrast an Episode of Columbo with the recent Nobel Prizes

 I quote Lance's blog post (here) about Computing and the Nobels

a) On Wednesday October 9th half of the Chemistry Nobel was awarded to computer scientists Demis Hassabis and John Jumper for the protein-folding prediction algorithm AlphaFold, which I (Lance) consider the most impressive and game-changing application of machine learning.

b) The day before, John Hopfield and Geoffrey Hinton were awarded the Physics Nobel for the development of neural nets that led to AlphaFold, large-language models, and so much more.

Bottom line: The Nobel Prize in CHEM and PHYSICS went to COMPUTER SCIENTISTS. That's probably not quite right since I am sure that the people involved KNEW lots of Chemistry and Physics. 

On Feb 10, 1974 there was an episode of Columbo called Mind Over Mayhem (see here for a summary) where one of the plot threads was the following:

a) Carl Finch, a great scientist, dies (of natural causes) and in his files is a groundbreaking theory of molecular matter.

b) Neil Cahill, who was working with Finch as a computer programmer, knows about the file, codes up the stuff in it, and claims the work as his own. He is initially not caught and he wins the Scientist of the Year Award.

c) I won't get into who gets murdered or how Columbo catches them. (Has anyone in the real world been murdered because of an academic dispute?)

When I first saw it I had two thoughts:

1) If Neil had claimed co-authorship that would have been more honest and hence would not have needed to be covered up or led to murder. AND Neil would STILL get credit.

2) More important for now: a computer programmer who coded up stuff was considered NOT part of the research in 1974. And now? Well, the description of what the Nobel laureates did seems far more impressive than what Neil Cahill did, though since Neil Cahill is fictional it's hard to say.

The question of how much credit should a programmer on a project get? was unintentionally raised way back in 1974. And now? I have not seen ANY objection to computer scientists winning Nobel Prizes in chemistry and physics, so the question seems not to be controversial. I agree with this, though I am surprised by the lack of controversy. I also note that I used the term programmer, which is not accurate: they were computer scientists. Having said that, programmers also deserve credit. How much is hard to say. The distinction between computer scientists and programmers is also hard to draw. But if programmers had been considered part of the research in 1974, a fictional murder could have been prevented.

(ADDED LATER: Lance saw this post after I posted it and emailed me a link to an article that DID have some objections to giving a Nobel Prize for AI work. I disagree with the objections, but in the interest of giving intelligent opposing viewpoints, the link is here.)





Wednesday, October 16, 2024

Computing and the Nobels

Herb Simon

Herbert Simon, while a political scientist in the 1940s at my institution, the Illinois Institute of Technology, developed the theory of bounded rationality, realizing that people do not always make the most rational decisions because of lack of information and limited cognitive ability. After Illinois Tech, Herb Simon moved to the Carnegie Institute of Technology and in the 1960s helped found its Computer Science Department, later the Carnegie Mellon School of Computer Science. With his colleagues at Carnegie, Allen Newell and J. C. Shaw, he applied his principles to artificial intelligence, leading to an ACM Turing Award in 1975. Bounded rationality would help him win the Nobel Prize in Economics in 1978.

Computing would go on to play an important role in nearly all scientific research. Most notably in 2013, biochemists Martin Karplus, Michael Levitt and Arieh Warshel won the Nobel Prize in Chemistry for their work using computers to model large molecules and simulate chemical reactions. Their entire research was done on computers, not in the lab. But no other computer scientist would win a Nobel Prize for the next 45 years. 

Demis Hassabis

That all changed last week. On Wednesday October 9th half of the Chemistry Nobel was awarded to computer scientists Demis Hassabis and John Jumper for the protein-folding prediction algorithm AlphaFold, which I consider the most impressive and game-changing application of machine learning. 

The day before John Hopfield and Geoffrey Hinton were awarded the Physics Nobel for the development of neural nets that led to AlphaFold, large-language models and so much more. Hinton with his 2018 Turing Award became only the second person, after Herb Simon, to win both prizes.

Is it physics? One podcast tried to answer this question.

Geoffrey Hinton
First of all, [Hopfield's] analogy to spin glasses is a use of a physical model. Certainly, information cannot exist in the absence of matter or energy. So ultimately, information science reduces to matter and energy as much as it reduces to mathematics. And thirdly, Nobel's will was written in the 1890s. The first prize was awarded in 1901. Things have moved on since then. So the will prescribes physics, chemistry, and physiology or medicine as the three science prizes, and sometimes various Nobel committees are criticised for being a bit narrow. I think this shows a certain creativity on the part of the academy to include a very up-to-date and very important field in physics by a little bit of creative accounting.

As a theorist who deals with information and computation, I disagree that these can only exist with matter and energy. The set of primes, for example, cannot fit into our finite universe.

But the third point is the most important. Nobel's will predates Turing and the development of computation as a new field. Computing and artificial intelligence have taken on such importance that the Nobel Prize committees felt they needed to honor them, even if it means broadening the categories so that computer science counts as a part of physics.

What does this mean for the Turing Award, now that computer scientists seem more eligible for Nobel Prizes? I'll leave that as an open question for now.

Sunday, October 13, 2024

A Trip Down Memory Lane: Desc comp, Constant Round Sorting, Division Breakthrough, Derandomization.

I came across (by accident) a link to all of the BEATCS complexity columns from 1987 to 2016. See HERE. (If you know a link to a more recent webpage, then email me or make a comment. There is a link to all of the issues of BEATCS here; however, I want a page with just the complexity theory columns.)

This gave me a trip down memory lane and a series of blog posts: briefly describe some articles and get commentary from the original authors on what they think about the area now.

I did the first in this series, on two articles by Eric Allender, here. Today is the second: articles by Neil Immerman, Gasarch-Golub-Kruskal, Eric Allender (again!), and Valentine Kabanets.


67) Progress in Descriptive Complexity by Neil Immerman, 1999. We usually think of P, NP, and other classes in terms of TIME or SPACE. Descriptive complexity is about defining complexity classes in terms of how problems can be described in a logical language; see the sketch after Neil's comment below for a concrete example. (For reviews of three books on this topic, including Neil Immerman's, see here.) I asked Neil Immerman to comment on the state of Descriptive Complexity now and he said:


Currently the two most promising and exciting areas of progress in Descriptive Complexity are, in my opinion,
  • Graph Neural Nets:  see Martin Grohe, "The Descriptive Complexity of Graph Neural Networks'', (see here);    and
  • Graph Isomorphism: see Martin Grohe,  Descriptive Complexity, Canonisation, and Definable Graph Structure Theory, Cambridge University Press, 2017, and Anuj Dawar, "The Nature and Power of Fixed-Point Logic with Counting", ACM SIGLOG News, Jan. 2015, Vol. 2, No. 1.
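As a concrete example of describing a problem logically (my addition, not part of Neil's comment): Fagin's theorem says that NP is exactly the class of properties expressible in existential second-order logic. For instance, 3-colorability of a graph with edge relation \(E\) is expressed by guessing three color sets \(C_1, C_2, C_3\) and checking, in first-order logic, that every vertex gets a color and no edge is monochromatic:

\[\exists C_1\, \exists C_2\, \exists C_3\; \Big[ \forall x\, \big(C_1(x) \lor C_2(x) \lor C_3(x)\big) \;\land\; \forall x\, \forall y\, \Big(E(x,y) \rightarrow \bigwedge_{i=1}^{3} \neg\big(C_i(x) \land C_i(y)\big)\Big) \Big]\]

The second-order quantifiers guess the NP witness (the coloring) and the first-order part verifies it in polynomial time.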

 

72) A Survey of Constant Round Sorting by Bill Gasarch, Evan Golub and Clyde Kruskal in 2000. The model of computation allows many comparisons in each round, with transitive closure considered free, so the model was not practical and didn't claim to be; however, lower bounds in this model imply lower bounds in more realistic models. (A toy illustration of the model appears after Bill's comment below.) I asked Bill Gasarch to comment on the paper from the perspective of 2023 and he said:

I don't think the paper is outdated in terms of later results, though the model of parallelism used is not studied anymore. A survey of sorting in more realistic models would be a good idea and may actually exist.
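To make the model concrete, here is a minimal sketch (my own, not from the survey) showing that a single round of all \(\binom{n}{2}\) comparisons suffices to sort distinct keys: after the round, each element's rank is just the number of comparisons it wins. The interesting regime, and the subject of the tradeoffs the survey studies, is what you can do with far fewer comparisons per round.

```python
from itertools import combinations

def one_round_sort(xs):
    """Sort using a single 'round' of comparisons.

    All pairwise comparisons are issued at once (the round);
    afterwards each element's rank equals the number of
    comparisons it won. Assumes distinct keys.
    """
    n = len(xs)
    # The round: every pairwise comparison, issued simultaneously.
    results = {(i, j): xs[i] < xs[j] for i, j in combinations(range(n), 2)}
    # Post-processing: rank[i] = number of elements smaller than xs[i].
    rank = [0] * n
    for (i, j), i_smaller in results.items():
        if i_smaller:
            rank[j] += 1
        else:
            rank[i] += 1
    out = [None] * n
    for i in range(n):
        out[rank[i]] = xs[i]
    return out

print(one_round_sort([31, 4, 15, 9, 2, 65]))  # [2, 4, 9, 15, 31, 65]
```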

74) The Division Breakthrough by Eric Allender, 2001. How complicated are addition, subtraction, multiplication, and division? The addition and subtraction algorithms that you learned in grade school are log-space algorithms (your teacher should have pointed that out to you): you only ever need to remember your position in the input and the current carry or borrow. Multiplication is also in log space; that's an easy exercise. But what about division? When Eric Allender was doing division in grade school he wondered: can this be done in log space? This was open for quite some time, but was solved (Division is in log space) by Chiu, Davida, and Litow in 2001 (actually done by Chiu in his MS thesis in 1995 but not published until 2001; fun fact: Chiu went to law school). Log space is an upper bound; what about a lower bound? Hesse showed that Division is complete for DLOGTIME-uniform \(TC^0\). What is needed is a complete write-up of all of this in a uniform notation. Eric Allender has provided that! I asked Eric if there are more open problems in the area of complexity-of-arithmetic. He responded:

What complexity class best captures the complexity of GCD (computing the Greatest Common Divisor)? GCD is hard for \(TC^0\) (proved in a paper I wrote with Mike Saks and Igor Shparlinski) but we have no upper bound better than P. Similarly: what is the complexity of modular exponentiation (given \(x\), \(y\), and \(m\), compute \(x^y \bmod m\))? Again, we have no upper bound better than P. Modular exponentiation is an important subroutine in the polynomial-time algorithms for checking primality, and the lack of a better upper bound for this problem is one of the main reasons that we have no better upper bound for primality-checking than P.

...and while we're talking about the primes, let me mention that my SIGACT News article (see here) from early 2023 explains that I'm offering a $1,000 reward to anyone who can provide a modest improvement on the \(TC^0\)-hardness of primality-checking. We currently only know a non-uniform \(AC^0\) reduction from MAJORITY to primality-checking. I'm asking for a uniform \(AC^0\) reduction.
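Since modular exponentiation comes up in Eric's answer, here is a minimal sketch (my addition, not Eric's) of the standard repeated-squaring algorithm, which is why the problem is in P at all: it needs only \(O(\log y)\) multiplications mod \(m\), polynomial in the bit-length of the input.

```python
def mod_exp(x, y, m):
    """Compute x^y mod m by repeated squaring.

    Processes the exponent bit by bit, so it needs only O(log y)
    multiplications mod m, polynomial in the bit-length of the input.
    """
    if m == 1:
        return 0
    result = 1
    x %= m
    while y > 0:
        if y & 1:                  # low bit of exponent is 1: multiply it in
            result = (result * x) % m
        x = (x * x) % m            # square the base
        y >>= 1                    # move to the next bit
    return result

# Sanity check via Fermat's little theorem: 2^(p-1) mod p == 1 for prime p.
# The new Mersenne exponent 136279841 is prime, so this prints 1.
print(mod_exp(2, 136279840, 136279841))
```

Python's built-in pow(x, y, m) computes the same thing; the open question Eric raises is whether a class far below P suffices.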

76) Derandomization: A Brief Survey by Valentine Kabanets, 2002. A very nice survey. Has there been progress since then? I asked Valentine and he said:

There have been new approaches to the BPP vs P question. See for example a recent survey by Chen and Tell: "New ways of studying the BPP = P conjecture (a survey)," ACM SIGACT News, 2023.


 

Wednesday, October 09, 2024

Fall Jobs Post 2024

In the fall, I write a jobs post predicting the upcoming CS faculty job market and giving suggestions and links. In the spring I used to crowdsource a list of where everyone got jobs but have since outsourced the crowdsource to Grigory Yaroslavtsev. Let's start with a few announcements.

FOCS in Chicago will have a Graduating Bits session on Sunday, October 27 from 12-2 PM. If you have job openings for postdocs and researchers, the FOCS organizers are collecting them here. The CATCS also maintains a Theoretical Computer Science Jobs posting site. You are also free to add pointers to theory-related job listings as comments to this post. More generally in CS, there is the CRA Database of Job Candidates and the CRA and ACM job postings.

Mohammad Hajiaghayi is organizing a virtual theory jobs panel November 10th with Nicole Immorlica, Samir Khuller and yours truly. 

If you are a bit more senior, the Simons Institute is looking for a new director.

Last year I suggested that AI (which, by the way, just won two Nobel Prizes) wasn't yet dramatically affecting the CS faculty job market, but

Many see programming, rightly or wrongly, as one of the first careers that AI will displace, which may reduce enrollment in the future, as offshoring fears drove CS enrollment down 20 years ago

It didn't take long. In two years we have gone from nearly all of our CS graduates getting jobs in the field to many of them struggling to get internships and jobs at the top companies, if at all. If the past is any guide, a weak tech job market leads to fewer majors, which leads to fewer slots for CS faculty. We'll start to see these trends this year, and they will accelerate quickly if the tech job market doesn't recover.

Areas related to data, such as Artificial Intelligence, Data Science and Cybersecurity, will draw the most interest. Best if you can tie your research to those areas, or at least be open to teaching in them.

Have a well-designed website with all your job materials and links to your papers. Make sure your Google Scholar, LinkedIn and GitHub (if relevant) sites are accurate and up to date. Make a short video describing your research to a general CS crowd. Take advantage of all the links above. Network at FOCS if you can make it. And start early.