Thursday, October 23, 2025

AI and Intro Theory

This fall, for the first time at Illinois Tech, I'm teaching Introduction to Theory of Computation. While I've taught variations of this course a couple dozen times, I last taught this class in Spring 2016 at Georgia Tech. Intro Theory is a course whose main theorems don't change, but it feels different in the AI era.

Here are the big and little things that have changed for me.

NP as a Hard Problem?

Garey and Johnson's classic 1979 book on NP-completeness had this famous cartoon.

While the employee couldn't prove that a certain NP-complete problem had no quick solution, at least he could tell his boss that all these famous computer scientists had also tried and failed. The implication is that once you show a problem is NP-complete, you should give up. But these days we have great tools that can often solve, or approximate well, many practical instances of NP-complete problems such as satisfiability, mixed-integer programming, and traveling salesman. NP-completeness is no longer an endpoint where you give up, but a starting point for trying other approaches.
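To make that "starting point" concrete, here is a minimal sketch of handing a toy satisfiability instance to an off-the-shelf solver. It assumes the open-source python-sat package, and the instance itself is made up; modern solvers routinely dispatch real instances with millions of clauses.

```python
# A toy SAT instance solved with an off-the-shelf solver.
# Assumes: pip install python-sat
from pysat.solvers import Glucose3

# Encode (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3).
# Literals are integers: positive = the variable, negative = its negation.
with Glucose3() as solver:
    solver.add_clause([1, -2])
    solver.add_clause([2, 3])
    solver.add_clause([-1, -3])
    if solver.solve():
        print("satisfiable:", solver.get_model())  # a satisfying assignment
    else:
        print("unsatisfiable")
```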

Computation is Hard to Understand

Rice's theorem tells us that, in general, we can't decide anything nontrivial about the language a Turing machine computes. We can't even tell whether a machine halts on blank tape. But for most well-designed programs, we knew how they worked, or at least how they were supposed to work if correctly implemented.
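For the record, one standard formulation: if \(P\) is any nonempty, proper subset of the computably enumerable languages, then
\[
\{\, \langle M \rangle : L(M) \in P \,\}
\]
is undecidable.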

With machine learning, we now have neural nets where we cannot understand how they compute what they do. And like Turing machines, for the most part all we can do to understand how they work is look at their input/output behavior.

New Theorems

I first taught this course not long after Neil Immerman and Robert Szelepcsényi independently proved that nondeterministic space is closed under complement, and it's now a staple in my course. There are more theorems proven since then that I mention but don't prove, including the PCP theorem, Primes in P, undirected graph connectivity in deterministic log space, and, very recently, deterministic time \(t(n)\) in \(\sqrt{t(n)\log t(n)}\) space.
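In symbols, the first and the most recent of those results: for space-constructible \(s(n) \ge \log n\),
\[
\mathsf{NSPACE}(s(n)) = \mathsf{coNSPACE}(s(n)),
\]
and, by Ryan Williams' 2025 simulation,
\[
\mathsf{TIME}(t(n)) \subseteq \mathsf{SPACE}\left(\sqrt{t(n)\log t(n)}\right).
\]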

AI in Teaching

I simply can no longer come up with problems that I can expect undergrads to solve but AI can't. So I decided to grade homework on completion instead of correctness, so students who wanted to try hard wouldn't feel they needed to use AI to get full credit. Exams are in-class and proctored, with no electronics. I definitely preferred take-home exams; I just cannot do that anymore.

I use AI to help me format lecture notes and to check over my homeworks and exams. The students mostly use AI as their first stop for help instead of office hours, and for good reason: the latest LLMs know the material in this course pretty well.

I did manually grade the midterms, but for fun I asked AI to grade a few of them, and its scores were similar to mine.

It won't be long before AI can create videos of me teaching more coherently than I do myself.

Just this week, ChatGPT Pulse out of nowhere gave me a fully formatted intuitive description of the switching lemma for me to teach in class. So I did.

At what point does the AI just take over?

22 comments:

  1. A similarly interesting question: when do we reach the point where we always just follow AI without question? But then, lost in our amazement, something really bad will happen (because AI is not always right). So, in conclusion, I hope the bubble cracks soon, at least a bit.

  2. When you say "It won't be long before AI can [...]" do you mean you think this will happen this year? Next year? In 2027? Or later?

    Replies
    1. I would say the technology to create a lecture of me teaching better than the actual me will be there in a year. Another couple of years before it's cost-feasible.

    2. I don't know, professor. I guess you are under the influence of LLMs; education is not just a matter of reciting facts.

    3. I agree, intro theory is a story. The point of this post is that the facts are the same, but the story has shifted.

      Can AI tell that story? It's getting there.

  3. It looks more solid than those "other" intro courses that just plagiarise from Gödel, Escher, Bach and call it a day!

  4. ""At what point does the AI just take over?
    In your posted video, you go over Rice's Theorem, which states any semantic property of a formal system is undecidable. So when will AI just take over? According to Rice's Theorem, NEVER!!!!!

  5. Going to office hours as a woman in CS/Math always scared me. What if it's another creepy professor? What if I get human trafficked into solving Ramsey Theory graph colourings? I'd rather use AI in that case.

  6. There is something wrong with a student if they would use an AI to do their homework for them. You don't learn by having someone else do your homework.

  7. Which NP-hard problem has AI solved in practice?

    Also, the algorithms course really doesn't have much to do with what people actually do as software engineers in industry; it never did.

    There are a small number of people who might use them to write new libraries or solve new problems, but most of what software engineers do in practice, both before and after LLMs, makes little to no use of what we teach in an algorithms course. It is not even 10% useful really, beyond interviewing of course.

    Replies
    1. The obvious example is protein folding, and machine learning has significantly improved many SAT and MIP solvers.


    2. > significantly improved many SAT and MIP solvers

      Can you give a reference?

    3. Sure, here are some 2022 surveys for SAT and MIP. Simple Google searches yield more recent work as well.

    4. The theory says that if you can solve one NP-complete problem efficiently, you can solve them all.

      If AI is able to solve one but not the others, there is some confusion, I would say.

      Is AI as a heuristic giving good-enough outputs in many cases? Yes. But has it really solved the protein folding problem? Maybe it is solving the easy ones.

      Is there a dataset for benchmarking protein folding algorithms like there is for SAT?

      Maybe it is a heuristic that performs significantly better than some previous ones but has not really solved the problem.

      Since heuristics for SAT are much better studied, it would be easier there to see how much impact AI has had as a heuristic.

    5. "If AI is able to solve one but not the others, there is some confusion I would say."

      Yes. There's lots of confusion. The SAT stuff (and presumably the MIP stuff) doesn't "solve an NP complete problem". It finds "good enough" answers without doing the complete calculation.

      But, the real confusion is that there are TWO types of AI: AI that works with models of the domain of concern, and LLMs.

      LLMs have no model of reality, and no way to check the correctness of their outputs. Everything an LLM outputs is unverified random text. As many lawyers, a Stanford prof., and Deloitte have found out.

      The Go and protein folding programs have models of how Go and protein folding work, and compute against those models. The machine learning stuff (as I understand it) has a mathematical model of the problem it deals with, and does gradient descent against that model (in insanely-high dimensional spaces).
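      (To illustrate that last point: a toy one-parameter example of gradient descent against a model. The data and step size are made up purely for illustration; real training runs the same loop over billions of dimensions.)

      ```python
      # Fit y = w*x by gradient descent on mean squared error.
      def loss_gradient(w, data):
          # d/dw of mean((w*x - y)^2) over the data
          return sum(2 * (w * x - y) * x for x, y in data) / len(data)

      data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # made-up (x, y) pairs
      w, step = 0.0, 0.05
      for _ in range(200):
          w -= step * loss_gradient(w, data)
      print(round(w, 3))  # converges near the least-squares slope (~2.04)
      ```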

  8. One question, Lance.

    I’m currently teaching an Algorithms course, and I’ve been facing a similar challenge when choosing problems for problem sets — finding questions that I expect my students to solve but that AI cannot is becoming increasingly difficult.

    As for exams, I don’t allow any electronic devices in the examination hall. One of my students recently argued that such policies are antiquated and suggested that exams should also permit the use of AI tools, since “future jobs won’t impose such artificial restrictions.”

    My knee-jerk reaction (and my actual response at the time) was that I simply don’t want to grade the AI’s work — and I ended the discussion there. To be honest, I was quite irritated that a student had the temerity to raise such a point. But later that day, after cooling down, I found myself wondering if I was the problem.

    So here’s my question: am I just clinging to the old guard? I still believe there’s real value in how exams were conducted in the pre-AI era, but I’m no longer sure how to defend that view without simply shutting down the conversation.

    Replies
    1. Here's an attempt.

      Science, math, and technology courses have actual content, and that content builds on itself. If you don't know what the basic words mean, you won't be able to understand an answer to any question, whatever generated that answer. Ditto for third-year course material without having learned the second-year material.

      So the point of any course is to teach material that will let the student understand the subject one step better. To verify that the student has learned that material, closed-book/closed-device tests are one way to go. Suggestions for better testing methods are appreciated, but not methods that don't actually test.

    2. What I always tell my students is that if all you know how to do in every class is copy the output of an LLM, why would anyone ever pay you for your work when they could just do that themselves? You need to develop some irreplaceable skills to justify a salary.

      Not just with AI, but in theory and math courses, there have always been Bart Simpson-level questions from students of the form "Why do I need to know this?", like "Why do I need to know my times tables when the calculator does this for me?" There has never been a good answer, because these students already do not have an open mind. Euclid was asked such a question once and said something like "give him a penny so he may feel like he earned something for what he learned."

    3. Do you have a source for the Euclid quote?

    4. Great quote. I found the full apocryphal quote: A youth who had begun to read geometry with Euclid, when he had learnt the first proposition, inquired, "What do I get by learning these things?" So Euclid called a slave and said "Give him threepence, since he must make a gain out of what he learns."

  9. An interesting emerging area of research is understanding what various ML model architectures cannot do. This is interesting also because many top ML scientists think the current wave is not going to reach cat-level intelligence, for fundamental architectural reasons.

    Replies
    1. It's amusing that ML is doing so well nowadays, given that (as my memory has it, anyway) Newell and Simon argued, in the 1960s (!!!), that gradient descent was hopeless because of local minima, and that AI programs would have to be smarter than that. (I remember a figure from one of their books in which a robot has climbed a tree and is screaming "I've gotten closer to the moon!")

      Still, when I was doing materials science, 2-component systems were pretty much all we could deal with: graphing phase transitions (vertical axis) against composition (0% x with 100% y to 100% x with 0% y) told you what happens at various compositions. But an article in Science a while ago (some time in '24) was doing a 5-component system using ML. Wow. Just wow.

      Of course, howling at the moon remains a problem that has to be dealt with. And note that cats don't howl at the moon.

      Delete