Wednesday, October 29, 2025

AI and the Power of Nonuniform Circuits

The advances in artificial intelligence have changed the way we think about computing. For today's post: how nonuniformity plays a much larger role than I previously believed.

First, some background. Circuit complexity gives a different, more combinatorial, way to look at computation. Instead of viewing algorithms as step-by-step programs, circuit complexity captures computation as fixed hardware, gates wired to compute a function of their inputs.

Every polynomial-time computable language L can be computed by a family \(C_0,C_1,\ldots\) of polynomial-size circuits, where \(C_n\) has \(n\) inputs, the number of gates of \(C_n\) is bounded by \(n^c\) for some constant \(c\), and \(x=x_1\ldots x_n\) is in L iff \(C_n(x_1,\ldots,x_n)\) outputs 1.

The converse isn't true: there are languages L that have polynomial-size circuits but that you can't compute in polynomial time, because each \(C_n\) need not have any relation to \(C_m\) for \(n\neq m\). We can get the equivalence by adding a uniformity condition, that there is some easily computable function \(f\) with \(f(1^n)=C_n\). Or we can keep the nonuniformity and define a larger class called P/poly.
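To make the nonuniformity concrete, here is a toy sketch in Python (just my illustration, not a standard construction): one circuit per input length, with nothing tying the different lengths together.

```python
# Toy illustration of a nonuniform family: one "circuit" per input length n,
# with no requirement that the circuits for different lengths are related
# or even computable from n.

def make_circuit(n):
    # An arbitrary per-length rule, standing in for hard-wired gates.
    def circuit(bits):
        assert len(bits) == n
        return int(sum(bits) % 2 == n % 2)
    return circuit

family = {n: make_circuit(n) for n in range(1, 6)}   # C_1, ..., C_5

def in_language(x):          # x is a bit string like "1011"
    bits = [int(b) for b in x]
    return family[len(bits)](bits) == 1

print(in_language("1011"))   # only C_4 matters here; C_3 and C_5 are irrelevant
```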

In practice, nonuniformity rarely helps. We can hide randomness in the nonuniformity, but we could likely use pseudorandom generators to the same effect. Nonuniformity was more of a technicality we had to deal with than something that seemed to give circuits significantly more power.

Until this new age of AI. A machine learning model like a neural net goes through a lengthy training process to learn its weights, with the training optimized offline. The learned weights become the nonuniform circuits; as technology improves and we create larger retrained circuits, the weights change as well.

We typically use the complexity class TC\(^0\), languages accepted by polynomial-size, constant-depth circuits with threshold gates, as a rough theoretical model for neural nets. Against nonuniform TC\(^0\) we cannot prove any significant lower bounds; we can't even show that NEXP, nondeterministic exponential time, cannot be computed by nonuniform TC\(^0\) circuits. And while computing NEXP or even P in TC\(^0\) would be incredibly surprising, these advances in artificial intelligence tell us these nonuniform circuits are far more powerful than we would ever have expected.
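For readers who haven't seen threshold circuits, here is a minimal sketch of a single threshold gate, the building block of TC\(^0\) and, in idealized form, of a neural net's neuron (again just a toy example of mine). The weights and threshold are exactly the kind of learned, nonuniform data discussed above.

```python
# A threshold gate: output 1 iff the weighted sum of the inputs meets a
# threshold. Constant-depth, polynomial-size stacks of such gates, one
# circuit per input length, make up nonuniform TC^0.

def threshold_gate(weights, theta):
    def gate(bits):
        return int(sum(w * b for w, b in zip(weights, bits)) >= theta)
    return gate

# MAJORITY on 5 bits: all weights 1, threshold 3.
majority5 = threshold_gate([1, 1, 1, 1, 1], 3)
print(majority5([1, 0, 1, 1, 0]))   # 1
print(majority5([1, 0, 0, 1, 0]))   # 0
```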

Sunday, October 26, 2025

Bill's Bad Advice

I sometimes give the following advice for research, which I label Bill's Bad Advice. We will later see who it might be good advice for. Spoiler alert: the number of people for whom it is good advice is shrinking, but it might include Lance, especially now (see his post about stepping down from admin, here).

When you come across a possible research topic or problem, or have some idea,  and are wondering if you want to pursue it,  here is my bad advice: 

1) DON"T  CARE if anyone else cares. If YOU care then that is enough to at least get started. 

2) DON"T  CARE if it has the potential for a published paper. FIRST do the work then, if you feel like it, look for a good venue. You might not bother if posting to arxiv or making an open problems column out of it or a (guest) blog post out of it is  a good endpoint.  (I run the SIGACT News Open Problems Column- feel free to contact me if you want to submit one.) 

3) DON"T CARE if it has practical implications. 

4) DON"T  CARE if you can get a grant for it. With the current state of the NSF this advice may soon become irrelevant. 

5) DON"T  CARE if someone else already did it (though at a later stage you should check on this). Even if you work on it and find someone else did it, you will have LEARNED about the problem through your efforts. You might then want to do a survey for your own benefit to consolidate your  knowledge. 

Why should you NOT CARE about any of these things? Because they get in the way of actually DOING something.  

Here are two examples of when this approach WORKED and one where it DID NOT WORK, though both classifications might depend on your definition of WORKED. 

WORKED: My work on Muffins. All I wanted was to get some High School and Undergraduate Projects out of it. I ended up with a book on it which made me twenty dollars last year! More to the point, I learned a lot, as did my co-authors and I am happy with the book. The co-authors were Undergraduates so my dept put me up for a mentoring award (I have other credentials as well). I did not win, but I got a nice letter saying they had many qualified applicants. OH- it didn't say if I was one of them. 

WORKED: I have had many Ramsey Projects where a High School Student codes stuff up and learns some Ramsey, some coding, and gets the experience of research. Sometimes they do a survey paper or open problems column. We both learn A LOT from this and the student gets a good letter from me. Do they do something NEW? Publishable? No, though some surveys and open problems columns have come out of this.  I DO tell them ahead of time that the work is unlikely to lead to original results (and hence unlikely to be good for a science competition).   

DID NOT WORK: See this blog post here about the math and here about finding out that the problem we were working on was already known and more was known than we thought. I didn't mind this, but one of the authors did. 

Note that WORKED and DID NOT WORK also depend on your goals. 

For whom is this bad advice? Good advice? 

1) It was always bad advice for young assistant professors who need to get papers and grants to get tenure. 

2) Hypothetically, once you get tenure you have job security and hence can change fields or follow my bad advice without consequences. But with grants and salary and teaching load issues, this is less the case. Perhaps I am nostalgic for a time that never was.

3) High School Students were my main audience for bad advice. It's not as important for them to get papers out as for (say) assistant professors. But even this is changing. Colleges are getting more competitive. And HS students may well want a project that can lead to Science competitions. I am not going to say things were better when I was a kid but instead pose non-rhetorical questions:

a) Are high school students getting into research earlier than they used to? I am sure the answer is yes.

b) Are we losing the safe space where a high school student can just learn things and do things and not worry so much about if it's publishable?  Yes, but I am not sure how widespread that is.

c) If we are losing that safe space, is that a bad thing? 

d) Points a, b, and c apply more to undergraduates who want to go to graduate school than to high school students who want to go to college.

4) Full Professors may have more freedom to follow my bad advice. Lance is looking for things to do now that he is no longer a dean, and indeed, is back to being a teaching-and-research professor. So he might follow my advice. However, he actually cares if people care about his work. He does not have to follow all of my advice, but he can follow some of it. 



Thursday, October 23, 2025

AI and Intro Theory

This fall, for the first time at Illinois Tech, I'm teaching Introduction to Theory of Computation. While I've taught a variation of this course a couple dozen times, I last taught this class in Spring 2016 at Georgia Tech. Intro Theory is a course whose main theorems don't change, but it feels different in the AI era.

Here are the big and little things for me.

NP as a Hard Problem?

Garey and Johnson's classic 1979 book on NP-completeness had this famous cartoon.

While the employee couldn't prove that a certain NP-complete problem didn't have a quick solution, at least he could tell his boss that all these famous computer scientists had also tried and failed. The implication is that once you show a problem is NP-complete, you should give up. But these days, we have great tools that can often solve, or approximate well, many practical instances of NP-complete problems such as satisfiability, mixed-integer programming and traveling salesman. NP-completeness is no longer an ending point where you give up, but a starting point to try other approaches.

Computation is Hard to Understand

Rice's theorem tells us that, in general, we can't decide anything nontrivial about the language a Turing machine computes. We can't even tell if a machine halts on blank tape. But for most well-designed programs, we knew how they worked, or at least how they were supposed to work if correctly implemented.

With machine learning, we now have neural nets where we cannot understand how they compute what they do. And like Turing machines, for the most part, all we can do to understand how they work is to look at their input/output behavior.

New Theorems

I first taught this course not long after Neil Immerman and Robert Szelepcsényi independently proved that nondeterministic space is closed under complement, and it's now a staple in my course. There are more theorems proven since then that I mention, but don't prove, including the PCP theorem, Primes in P, undirected graph connectivity in deterministic log space, and very recently deterministic time \(t(n)\) in \(\sqrt{t(n)\log t(n)}\) space.

AI in Teaching

I simply can no longer come up with problems that I can expect undergrads to solve but AI can't. So I decided to grade homework on completion instead of correctness, so students who wanted to try hard wouldn't feel they needed to use AI to get full credit. For exams: in-class, proctored, with no electronics. I definitely preferred take-home exams; I just can't do them anymore.

I use AI to help me format lecture notes and check over my homeworks and exams. The students mostly use AI as their first stop for help instead of office hours, and for good reason: the latest LLMs know the material in this course pretty well.

I did manually grade the midterms but for fun asked AI to grade a few of them and its scores were similar to mine.

It won't be long before AI can create videos of me teaching more coherently than I do myself.

Just this week, ChatGPT Pulse out of nowhere gave me a fully formatted intuitive description of the switching lemma for me to teach in class. So I did.

At what point does the AI just take over?

Monday, October 20, 2025

Sept 16, 2025 was Pythagorean Day

     
Several people emailed me that September 16, 2025---written as 9-16-25 in the US---satisfies \(9+16=25\): the numbers are the squares of the side lengths of the 3-4-5 right triangle.


9-16-25 is the only such triple of squares that is also a valid date. Since only the last two digits of the year are used, this kind of mathematical alignment only happens once every 100 years. The next occurrence will be September 16, 2125.

Since this is such a rare event, let's explore some more math-themed dates.

1) Pythagorean Triples That Work as Future Dates

Note that 9-16-25 is not a Pythagorean triple; however, 3-4-5 is.

Here are some future  dates that are both Pythagorean triples and valid calendar dates:

March  4, 2105 is 3-4-5

May 12, 2113 is 5-12-13

June 8, 2110 is 6-8-10

July 24, 2125 is 7-24-25 (Darn---July 24, 2025 was recent and I missed it!)

August 15, 2117 is 8-15-17

I think that's it (a short script to check this appears after the rhyme below). Recall that we need the month to be in \(\{1,\ldots,12\}\) and the day to be in \(\{1,\ldots,31\}\) with some exceptions:

Thirty days has September, April, June, and November

All the rest have thirty-one,

Excepting February, fun!

And that has twenty-eight days clear

And twenty-nine in a Leap Year

There are 24 versions of this poem at a website which is  here.
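If you want to check the list above against those rules yourself, here is a minimal sketch (my own check, nothing official) that enumerates months, days, and two-digit years forming a Pythagorean triple, using Python's datetime to validate the day against the month (years 2100-2199 stand in for the two-digit years). Note it also catches the month/day-swapped orderings, such as 4-3-5 for April 3, 2105.

```python
# Enumerate dates m-d-y (month, day, two-digit year) with m^2 + d^2 = y^2.
# datetime validates the day-of-month; 2100+y stands in for the year 21yy.
from datetime import date

for m in range(1, 13):
    for d in range(1, 32):
        for y in range(1, 100):
            if m * m + d * d == y * y:
                try:
                    date(2100 + y, m, d)   # is m-d a real calendar date?
                except ValueError:
                    continue
                print(f"{m}-{d}-{y:02d}")
```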

2) Why Didn't Anyone Email Me About Earlier Dates?

I wonder why nobody emailed me on, say, March 4, 2005 (3-4-5).  That's a Pythagorean triple, but maybe it just looked like three consecutive numbers. Oh well.

And what about May 12, 2013 (5-12-13)? That's a really cool Pythagorean triple. Oh well.

3) Other Math-Related Dates Using Month, Day, and Year.
So dates like Pi Day don't count---we want the full date to be interesting mathematically. Side note---I looked up how Pi Day is referred to, and it's Pi Day, not \(\pi\) Day, probably because not all typesetting systems can easily produce \(\pi\).



a) Square days:

Dates where the full 8-digit number (MMDDYYYY) is a perfect square.

a1) September  27, 2025 is 9-27-2025 and \(9272025=3045^2\).

Bonus: if you write it as 27-09-2025 then: \(27092025=5205^2\).

a2) Feb 2, 2084 is 2-02-2084 and \(2022084=1422^2\).
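Here is a minimal sketch (again just my own check) that scans for square days in both the MMDDYYYY and DDMMYYYY orderings:

```python
# Scan dates and report those whose MMDDYYYY (or DDMMYYYY, for the bonus
# ordering) digits form a perfect square.
from datetime import date, timedelta
from math import isqrt

def is_square(n):
    return isqrt(n) ** 2 == n

d = date(2025, 1, 1)
while d.year < 2100:
    us = int(d.strftime("%m%d%Y"))      # e.g. 09272025 -> 9272025
    eu = int(d.strftime("%d%m%Y"))      # e.g. 27092025
    if is_square(us) or is_square(eu):
        print(d, us if is_square(us) else eu)
    d += timedelta(days=1)
```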

b) Palindrome days

b1) March 11, 2030, written 03-11-30, might be the next one.

b2) I was hoping that Feb 2, 2022 (2-2-22) would be a Tuesday (Twosday) but alas, it was not. I asked ChatGPT for the next year ending in 22 in which Feb 2 is a Tuesday. It gave me incorrect answers four times. When I pointed this out, it thanked me for checking its work and then gave me yet another incorrect answer. It then gave me a Python program I could run to find out myself (a sketch of such a program appears after the list below). I found that between the years 1622 and 9922, looking only at years ending in 22, the following pattern holds:

Feb 2, 1622 is a Wednesday
Feb 2, 1722 is a Monday
Feb 2, 1822 is a Saturday
Feb 2, 1922 is a Thursday

Feb 2, 2022 is a Wednesday
Feb 2, 2122 is a Monday
Feb 2, 2222 is a Saturday
Feb 2, 2322 is a Thursday

This pattern repeats as far as I computed, which was 9922. 
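Here is a sketch of the kind of program I mean; Python's datetime handles the Gregorian calendar rules for you:

```python
# For each year ending in 22, from 1622 to 9922, print the day of the week
# that Feb 2 falls on. datetime uses the (proleptic) Gregorian calendar.
from datetime import date

for year in range(1622, 10000, 100):    # 1622, 1722, ..., 9922
    print(year, date(year, 2, 2).strftime("%A"))
```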

I started in 1622 since the Gregorian calendar started in 1582. I stopped at 9922 because of Python's date-handling limits (datetime only goes up to the year 9999). At first I did not know if the pattern goes on forever, but it does: the Gregorian calendar repeats exactly every 400 years (146,097 days, which is a multiple of 7), so the leap-year exceptions cannot break it. Recall that

Every year divisible by 4 is a leap year,
EXCEPT that if the year is divisible by 100 but not by 400 then it's not a leap year. Those are the only exceptions in the Gregorian calendar.
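That rule as code (a sketch; Python's calendar.isleap does the same thing):

```python
# Gregorian leap-year rule: divisible by 4, except centuries not divisible
# by 400. Equivalent to calendar.isleap.
def is_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap(2024), is_leap(1900), is_leap(2000))   # True False True
```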



b3) I was hoping Feb 22, XX22 (2-22-22) would sometimes be a Tuesday. Here are all the years after 1622, ending in 22, before 10,000, where Feb 22 is a Tuesday: 2022, 2422, 2822, 3222, ..., 9622, and in general \(2022+400x\), again because of the 400-year cycle.
 

-------------------------------

That is the end of my post. A bonus for my readers: a mini meta post. Long time reader and fellow SUNY Stony Brook math major David Marcus (he was class of 1979, I was class of 1980) has been proofreading some of my posts before I post them. Lance suggested I use chatty (ChatGPT) instead. Rather than replacing David, I used chatty and then David. David still found mistakes. I give an example here by pointing to all three versions:

Original Version: here

After ChatGPT: here

After David: here (this is not the final version since I read it and made some minor changes)

I may at some later point look at several ORIG/AFTER CHAT/AFTER DAVID and see what errors chatty is not catching. 




Wednesday, October 15, 2025

Fall Jobs Post 2025

Each fall I try to predict the theory computer science faculty job market to come and give suggestions to those going after them. Get set for a rocky ride, with AI’s disruption of computer science, fiscal stress at universities, and new U.S. policies affecting visas and funding.

In last year's post I wrote
In two years we have gone from nearly all of our computer science graduates getting jobs in the field to many of them struggling to get internships and jobs in the top companies if at all. If the past is any guide, a weak tech job market leads to fewer majors which leads to fewer slots for computer science faculty. We'll start to see these trends this year and they will accelerate quickly if the tech jobs market doesn't recover.

In fact the tech job market for computer science graduates has become much more challenging, ironically as the tech companies have seen huge market growth with advances in artificial intelligence. Academia moves slowly; some departments are still catching up on hiring, but that will soon slow. Many computer science departments are adding AI majors and specializations, and computer science enrollments over the next few years will be hard to predict.

Computer science is not protected from larger forces. Many universities are facing long-term fiscal challenges, and cuts in research and student funding from the US government aren't helping, leading to hiring freezes at many schools. The demographic cliff, a drop in the US birth rate around the 2008 fiscal crisis, will start hitting colleges in earnest next year. 

The Trump administration has made it more difficult for foreigners to get student visas, leading to a projected drop in incoming international students of between 30% and 40%. Recent changes to H-1Bs, namely the $100K fee an employer would need to pay, may discourage more students who study in the US with the hope of building a career and a life here. International students, many of whom have gone into computer science, are a major source of tuition for many universities.

Universities would also need to pay the $100K to hire foreigners as faculty and postdocs. Startup packages for new computer science faculty typically run several hundred thousand dollars so this is not out of the question, but schools might be reluctant to pay the $100K in tight fiscal times. These rules may evolve via litigation or administrative reinterpretation. 

Even in a turbulent market, visibility and adaptability matter most. Universities outside the US are trying to take advantage by going after students and faculty. More than ever you should consider positions around the world, especially, but not only, if you are not a US citizen or permanent resident. Other countries have their own challenges, so tread carefully.

On to specific advice for students on the academic job market. 

CATCS maintains a Theoretical Computer Science Jobs posting site. More generally in computer science, there is the CRA Database of Job Candidates and the CRA and ACM job postings. 

And in general make sure you stand out. Areas related to data, such as Artificial Intelligence, Data Science and Cybersecurity, will draw the most interest. Best if you can tie your research to those areas, or at least show that you are open to teaching in them. Make sure your teaching statement talks about how you will handle AI in the classroom.

Have a well-designed website with all your job materials and links to your papers. Make sure your Google Scholar, LinkedIn and GitHub (if relevant) sites are accurate and up to date. Make a short video describing your research to a general computer science crowd. Take advantage of all the links above. Network at FOCS or SODA if you can get there. Reach out directly to professors you know at schools you are interested in. And start now.

Sunday, October 12, 2025

The Most Common Name in the World is Not Charles Lin. But It Seems That Way To Me.


In 2001 I supervised Charles Lin's Master's Thesis, which was on Private Information Retrieval.

In 2025 I supervised Charles Lin's Master's Thesis, which was on Ramsey Theory.

These are different people. To celebrate the second one's thesis, I took them both out to lunch together. 

A Picture of Charles, Charles, and Bill is here.

I asked ChatGPT about common names.

1) ChatGPT tells me (and I believe them) that the number of people in America named Charles Lin is between 200 and 900. The answer pointed out that Charles is common and Lin is common but the combination is not common.

2) ChatGPT tells me (and I believe them) that the number of people named  John Smith is between 25,000 and 40,000. I've heard it's a common name, but I do not know anyone named that. How is that possible?

3)  I predict that people with last name Smith will be reluctant to name their child John because it is so common. Hence this name will decline. However, I made that same prediction 40 years ago. Maybe it did decline TO 40,000.

4) The most common boys' names in 2025 (so far) in the US are, in order:

 Liam, Noah, Oliver, Theodore, James, Henry, Mateo, Elijah, Lucas, William

According to this website here, the five  most common  boys' names in 1960 (the year I was born), in order, are 

David, Michael, John, James, Robert.

James is the only name on both lists.

I don't have the next five from 1960, though I suspect Henry and William would be on that list. I also suspect that Liam and Mateo would not have been in the 1960 top 100.

5) The most common girls' names in 2025 (so far) in the US are, in order:

Olivia, Emma, Amelia, Charlotte, Mia, Sophia, Isabella, Evelyn, Ava, Sofia

(Should they have Sophia and Sofia both on the list? I would say yes but it is odd.)

 According to that same website,  the most common girls' names in 1960 (the year I was born), in order, are

 Mary, Susan, Linda, Karen, Donna

None of the 1960 names are on the 2025 list. I suspect that would still be true for the next 10.

6) When I was in grade school, if the teacher said "Quiet down, Bill and Lisa" the class went quiet.

7) The most common first name in the world is Mohammad. The most common last name in the world is Wang. Hence the most common name in the world has to be Mohammad Wang. I've asked students to find the flaw in that logic and all 8 Mohammad Wangs in my class found the flaw. Kudos to them! (Darling says I should always tell you when I am kidding. I am kidding. Only 2 of the 8 Mohammad Wangs in my class found the flaw.)

8) (ADDED LATER) One of my readers emailed me the following two items

a) From The Big Bang Theory

Howard: It gets better. Someone has to go up with the telescope as a payload specialist, and guess who that someone is.

Sheldon: Mohammed Lee.

Howard: Who’s Mohammed Lee?

Sheldon: Mohammed is the most common first name in the world, Lee, the most common surname. As I didn’t know the answer, I thought that gave me a mathematical edge.

b) As an interesting aside, I noticed that this semester I have three students named "Luke" and three named "Ben", which made me notice that kids who grew up with the original Star Wars movies would have kids in college around now (but none of my current students are named "Leia").

9) (ADDED LATER) I HEARD that the name "Barack" was popular when he was president.

I looked it up:

5 babies got that name in 2007

52 babies got that name in 2008

69 babies got that name in 2009

It declined since then.

BAD SCIENCE TIME: The number of babies named Barack increased 10-fold from 2007 to 2008.

TRUE but not significant. 

10)  I hope to supervise at least one more Charles Lin before I retire.


Wednesday, October 08, 2025

Big Bots Don't Cry

A few comments on last week's post Computers Don't Want suggested that human brains are just advanced computers, yet still possess agency and desires. But are we just Turing machines? I wrote about this question before, but let's revisit it in a world where artificial general and super intelligence may (or may not) be right around the corner.

Much of what our brain does, the way we store and retrieve our memories, how we process our senses, how we reason and learn, is very much computational. There is something else, something that gives us an internal view of ourselves, that combined with the computational power of the brain leads to self-awareness, agency, free will, emotions and desires. When I see attempts to give computational explanations for our internal concepts, like the Blums' work on consciousness and free will, I see them capturing the properties we attribute to these concepts but failing to capture the intuitive notion I have of them. I have some internal capability, a "soul" for lack of a better name, that allows us to reason about ourselves.

I think of René Descartes and his Meditations on First Philosophy, famous for cogito ergo sum (I think, therefore I am) in the second meditation. Computation and even its execution are mathematical concepts, and mathematics lies outside of any physical world. You can't reason from a computational object that it exists. Yet Descartes is able to reason about his own existence. In the sixth meditation, Descartes talks about substance dualism, a separation of mind and body. I view the body as containing the brain, the computational part of our being, along with some other entity which, combined with our powerful computational brain, enables us to reason about ourselves and gives us free will, emotions and desires.

I met a religion professor and asked him about this topic. He mentioned that he had a crying baby sitting next to him on the plane to Chicago. Babies cry for many reasons but sometimes just to be held by their mother or father, a need for companionship. I can imagine a computer doing the equivalent of crying but not the need to do so.

I can't explain how the soul interacts with our brains; I suspect it goes beyond some simple physical mechanism. I can't prove that I have a soul, and while I can't prove the rest of humanity also has souls, I believe they do, since otherwise we wouldn't even have concepts like self-awareness. But I don't believe AI models, now or in the future, have something like a soul, and we shouldn't reason about them as though they do.

Sunday, October 05, 2025

If you use AI in your work do you brag about it or hide it?

You used AI in your work. Do you hide it or brag about it? 

1) In 2002 there was a movie Simone about an actress who is really an AI. The Wikipedia entry tells the entire plot. I saved time by reading it in two minutes rather than watching it in two hours. I don't think I missed anything.

Key Plot Point: The creator of Simone hides the fact that Simone is an AI. Why hide it? Think of the publicity that would be generated by bragging about it!

The following comment would have been clever in 2002 but is so obvious now that it's banal:

Hiding that it's an AI actress is more unrealistic than having an AI actress. 

Fast forward to 2025 and into reality: there IS an AI actress named Tilly Norwood (do AIs need last names? Apparently yes). She has not appeared in any movies yet but she will be on Instagram and other media soon. Are the creators trying to hide that she is not human? Quite the contrary---they are bragging about it. The Screen Actors Guild is complaining about this, see here. The complaint is that she can't be a real actress since she has no real life experiences to draw upon. Whether she will be a good actress is for the studios, the audience, and the critics to decide. If a new technology is threatening your livelihood, then arguing that it's not very good is a terrible argument, since it may well be false and it's not really what your concern is.

2) Recently Scott used AI (GPT-5 Thinking) to help him with a proof. Did he hide this fact? Quite the contrary, he pointed it out as an interesting application of AI. See his blog post here.

3) There was a Chemistry Nobel Prize, and a Physics Nobel Prize, for work in which AI played a role. Did the laureates try to hide that AI was used? Quite the contrary. See Lance's post on this, here.

4) Do you know of any case where AI or a computer was used and the authors wanted to hide that fact? Aside from chess players using a computer, and students using ChatGPT, I cannot think of one. Not for any serious research. (My spellcheck thinks ChatGPT is not a word. This time spellcheck is wrong.)

5) The opposite has happened: the Mechanical Turk, a chess-playing "machine" with a human hidden inside. See here.

6) I recently saw the movie I, Robot from 2004. The main human character is against robots and asks "Can a robot write a symphony?" He means this to be a rhetorical question whose answer is no. I counter: can a computer write a movie as dumb as I, Robot? Or Simone?