Thursday, October 23, 2025

AI and Intro Theory

This fall, for the first time at Illinois Tech, I'm teaching Introduction to Theory of Computation. While I've taught a variation of this course a couple dozen times, I last taught this particular class in Spring 2016 at Georgia Tech. Intro Theory is a course whose main theorems don't change, but it feels different in the AI era.

Here are the big and little things for me.

NP as a Hard Problem?

Garey and Johnson's classic 1979 book on NP-completeness had this famous cartoon.

While the employee couldn't prove that a certain NP-complete problem didn't have a quick solution, at least he could tell his boss that all these famous computer scientists had also tried and failed. The implication is that once you show a problem is NP-complete, you should give up. But these days, we have great tools that can often solve, or approximate well, many practical instances of NP-complete problems such as satisfiability, mixed-integer programming and traveling salesman. NP-completeness is no longer an ending point where you give up, but a starting point to try other approaches.
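
To make that concrete, here is a toy DPLL-style SAT sketch in plain Python. It bears no resemblance to the engineering inside industrial solvers like MiniSat or Z3, but it shows the spirit of attacking instances directly rather than giving up at NP-completeness (the example formula at the bottom is made up for illustration):

```python
# A minimal DPLL-style SAT sketch: clauses are lists of nonzero ints,
# where a negative integer means a negated variable.
def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                      # clause already satisfied
        remaining = [l for l in clause if abs(l) not in assignment]
        if not remaining:
            return None                   # clause falsified: backtrack
        simplified.append(remaining)
    if not simplified:
        return assignment                 # everything satisfied
    for clause in simplified:             # unit propagation
        if len(clause) == 1:
            lit = clause[0]
            return dpll(clauses, {**assignment, abs(lit): lit > 0})
    var = abs(simplified[0][0])           # branch on an unassigned variable
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))
```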

Computation is Hard to Understand

Rice's theorem tells us that, in general, we can't determine anything nontrivial about the language a Turing machine computes. We can't even tell whether a machine halts on a blank tape. But for most well-designed programs, we knew how they worked, or at least how they were supposed to work if correctly implemented.

With machine learning, we now have neural nets whose computations we cannot understand. And as with Turing machines, for the most part all we can do to understand how they work is to look at their input/output behavior.

New Theorems

I first taught this course not long after Neil Immerman and Robert Szelepcsényi independently proved that nondeterministic space is closed under complement, and it's now a staple in my course. More theorems have been proven since then that I mention but don't prove, including the PCP theorem, Primes in P, undirected graph connectivity in deterministic log space, and very recently simulating deterministic time \(t(n)\) in \(\sqrt{t(n)\log t(n)}\) space.

AI in Teaching

I simply can no longer come up with problems that I can expect undergrads to solve but AI can't. So I decided to grade homework on completion instead of correctness, so students who wanted to try hard wouldn't feel they needed to use AI to get full credit. For exams: in-class, proctored, no electronics. I definitely preferred take-home exams; I just can't give them anymore.

I use AI to help me format lecture notes and check over my homeworks and exams. The students mostly use AI as their first stop for help instead of office hours. For good reason: the latest LLMs know the material in this course pretty well.

I did manually grade the midterms but for fun asked AI to grade a few of them and its scores were similar to mine.

It won't be long before AI can create videos of me teaching more coherently than I do myself.

Just this week, ChatGPT Pulse out of nowhere gave me a fully formatted intuitive description of the switching lemma for me to teach in class. So I did.

At what point does the AI just take over?

Monday, October 20, 2025

Sept 16, 2025 was Pythagorean Day

Several people emailed me that September 16, 2025---written as 9-16-25 in the US---satisfies \(9+16=25\); the three numbers are the squares of the sides of a 3-4-5 right triangle.


9-16-25 is the only such combination that is also a valid date. This kind of mathematical alignment only happens once every 100 years. The next occurrence will be September 16, 2125.

Since this is such a rare event, let's explore some more math-themed dates.

1) Pythagorean Triples That Work as Future Dates

Note that 9-16-25 is not a Pythagorean triple; however, 3-4-5 is.

Here are some future  dates that are both Pythagorean triples and valid calendar dates:

March  4, 2105 is 3-4-5

May 12, 2113 is 5-12-13

June 8, 2110 is 6-8-10

July 24, 2125 is 7-24-25 (Darn---July 24, 2025 was recent and I missed it!)

August 15, 2117 is 8-15-17

I think that's it. Recall that we need the month to be in \(\{1,\ldots,12\}\) and the day to be in \(\{1,\ldots,31\}\), with some exceptions (a small search sketch appears after the poem):

Thirty days has September, April, June, and November

All the rest have thirty-one,

Excepting February, fun!

And that has twenty-eight days clear

And twenty-nine in a Leap Year

There are 24 versions of this poem at a website, here.
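
If you want to check lists like the one above mechanically, here is a short Python sketch that enumerates every valid date M-D-YY with \(M^2+D^2=YY^2\), using the day limits from the rhyme. (No Pythagorean triple contains a 2, so February and leap years never come up; I allow February 29 days anyway just to be safe.)

```python
# Enumerate dates M-D-YY (two-digit year) that form Pythagorean triples.
DAYS = {1: 31, 2: 29, 3: 31, 4: 30, 5: 31, 6: 30,
        7: 31, 8: 31, 9: 30, 10: 31, 11: 30, 12: 31}

for month in range(1, 13):
    for day in range(1, DAYS[month] + 1):
        for year in range(100):
            if month ** 2 + day ** 2 == year ** 2:
                print(f"{month}-{day}-{year:02d}")
```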

2) Why Didn't Anyone Email Me About Earlier Dates?

I wonder why nobody emailed me on, say, March 4, 2005 (3-4-5).  That's a Pythagorean triple, but maybe it just looked like three consecutive numbers. Oh well.

And what about May 12, 2013 (5-12-13)? That's a really cool Pythagorean triple. Oh well.

3) Other Math-Related Dates Using Month, Day, and Year.
So dates like Pi Day don't count---we want the full date to be interesting mathematically. Side note---I looked up how Pi Day is referred to and it's Pi Day, not \(\pi\) day. Probably because not all typesetting systems can easily produce \(\pi\).



a) Square days:

Dates where MMDDYYYY, read as a single integer, is a perfect square.

a1) September  27, 2025 is 9-27-2025 and \(9272025=3045^2\).

Bonus: if you write it day-first as 27-09-2025, then \(27092025=5205^2\).

a2) Feb 2, 2084 is 2-02-2084 and \(2022084=1422^2\).
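
Here is a quick sketch of how one might hunt for square days, treating the date as the single integer formed by month-day-year as in the examples above (dropping the month's leading zero doesn't change the value):

```python
from datetime import date, timedelta
from math import isqrt

# Scan dates and report those whose M-D-YYYY digits form a perfect square.
d = date(2025, 1, 1)
while d.year < 2100:
    n = int(f"{d.month}{d.day:02d}{d.year:04d}")
    if isqrt(n) ** 2 == n:
        print(d, "=", isqrt(n), "squared")
    d += timedelta(days=1)
```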

b) Palindrome days

b1) March 11, 2030, written 03-11-30, might be the next one.

b2) I was hoping that Feb 2, 2022 (2-2-22) would be a Tuesday (Twosday) but alas, it was not. I asked ChatGPT for the next year ending in 22 in which Feb 2 is a Tuesday. It gave me incorrect answers four times. When I pointed this out it thanked me for checking its work and then gave me a later year, which was also incorrect. It then gave me a Python program that I could run to find out myself (a sketch of such a program appears after the pattern below). I found that between the years 1622 and 9922, looking only at years ending with 22, the following pattern holds:

Feb 2, 1622 is a Wednesday
Feb 2, 1722 is a Monday
Feb 2, 1822 is a Saturday
Feb 2, 1922 is a Thursday

Feb 2, 2022 is a Wednesday
Feb 2, 2122 is a Monday
Feb 2, 2222 is a Saturday
Feb 2, 2322 is a Thursday

This pattern repeats as far as I computed, which was 9922. 
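
Here is a sketch of the kind of program involved (not necessarily the one ChatGPT produced). Note that Python's datetime only handles years up to 9999:

```python
from datetime import date

# For every year ending in 22 from 1622 through 9922, print the weekday
# of Feb 2.
WEEKDAYS = ["Monday", "Tuesday", "Wednesday", "Thursday",
            "Friday", "Saturday", "Sunday"]

for year in range(1622, 10000, 100):
    print(f"Feb 2, {year} is a {WEEKDAYS[date(year, 2, 2).weekday()]}")
```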

I started in 1622 since the Gregorian calendar started in 1582. I stopped in 9922 because of numerical Python issues. I do not know if the pattern goes on forever, or if one of the leap year exceptions will cause that pattern to change. Recall that

Every year divisible by 4 is a leap year.
EXCEPT if the year is divisible by 100 but not by 400, then it's not a leap year. I have not found any other exceptions on the web, but there may still be some.
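
That rule, exactly as stated, in a few lines of Python:

```python
def is_leap(year):
    # Divisible by 4, except century years must also be divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap(2000), is_leap(2024), is_leap(1900), is_leap(2100))
# True True False False
```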



b3) I was hoping Feb 22, XX22 (2-22-22) would sometimes be a Tuesday. Here are all the years after 1622 and before 10,000, ending in 22, where Feb 22 is a Tuesday: 2022, 2422, 2822, 3222, ..., 9622, and I assume in general \(2022+400x\). Again, the pattern might not hold if there are leap year exceptions I don't know about.
 

-------------------------------

That is the end of my post. A bonus for my readers: a mini meta post. Longtime reader and fellow SUNY Stony Brook math major David Marcus (he was class of 1979, I was class of 1980) has been proofreading some of my posts before I post them. Lance suggested I use chatty instead. Rather than replacing David with chatty, I used chatty and then David. David still found mistakes. I give an example here by pointing to all three versions:

Original Version: here

After ChatGPT: here

After David: here (this is not the final version since I read it and made some minor changes)

I may at some later point look at several ORIG/AFTER CHAT/AFTER DAVID and see what errors chatty is not catching. 




Wednesday, October 15, 2025

Fall Jobs Post 2025

Each fall I try to predict the coming theoretical computer science faculty job market and give suggestions to those going after those jobs. Get set for a rocky ride, with AI's disruption of computer science, fiscal stress at universities, and new U.S. policies affecting visas and funding.

In last year's post I wrote
In two years we have gone from nearly all of our computer science graduates getting jobs in the field to many of them struggling to get internships and jobs in the top companies if at all. If the past is any guide, a weak tech job market leads to fewer majors which leads to fewer slots for computer science faculty. We'll start to see these trends this year and they will accelerate quickly if the tech jobs market doesn't recover.

In fact the tech job market for computer science graduates has become much more challenging, ironically as the tech companies have seen huge market growth from advances in artificial intelligence. Academia moves slowly; some departments are still catching up on hiring, but that will soon slow. Many computer science departments are adding AI majors and specializations, and computer science enrollments over the next few years will be hard to predict.

Computer science is not protected from larger forces. Many universities are facing long-term fiscal challenges, and cuts in research and student funding from the US government aren't helping, leading to hiring freezes at many schools. The demographic cliff, a drop in the US birth rate around the 2008 fiscal crisis, will start hitting colleges in earnest next year. 

The Trump administration has made it more difficult for foreigners to get student visas, leading to a projected drop in incoming international students of between 30% and 40%. Recent changes to H-1Bs, namely the $100K fee an employer would need to pay, may discourage students who study in the US with hopes of building a career and a life here. International students, many of whom have gone into computer science, are a major source of tuition for many universities.

Universities would also need to pay the $100K to hire foreigners as faculty and postdocs. Startup packages for new computer science faculty typically run several hundred thousand dollars so this is not out of the question, but schools might be reluctant to pay the $100K in tight fiscal times. These rules may evolve via litigation or administrative reinterpretation. 

Even in a turbulent market, visibility and adaptability matter most. Universities outside the US are trying to take advantage by going after students and faculty. More than ever you should consider positions around the world, especially, but not only, if you are not a US citizen or permanent resident. Other countries have their own challenges, so tread carefully.

On to specific advice for students on the academic job market. 

CATCS maintains a Theoretical Computer Science Jobs posting site. More generally in computer science, there is the CRA Database of Job Candidates and the CRA and ACM job postings. 

And in general make sure you stand out. Areas related to data, such as Artificial Intelligence, Data Science and Cybersecurity, will draw the most interest. Best if you can tie your research to those areas, or at least show that you are open to teaching in them. Make sure your teaching statement talks about how you will handle AI in the classroom.

Have a well-designed website with all your job materials and links to your papers. Make sure your Google Scholar, LinkedIn and GitHub (if relevant) sites are accurate and up to date. Make a short video describing your research to a general computer science crowd. Take advantage of all the links above. Network at FOCS or SODA if you can get there. Reach out directly to professors you know at schools you are interested in. And start now.

Sunday, October 12, 2025

The Most Common Name in the World is Not Charles Lin. But It Seems That Way To Me.


In 2001 I supervised Charles Lin's Master's Thesis, which was on Private Information Retrieval.

In 2025 I supervised Charles Lin's Master's Thesis, which was on Ramsey Theory.

These are different people. To celebrate the second one's thesis, I took them both out to lunch together. 

A Picture of Charles, Charles, and Bill is here.

I asked ChatGPT about common names.

1) ChatGPT tells me (and I believe them) that the number of people in America named Charles Lin is between 200 and 900. The answer pointed out that Charles is common and Lin is common but the combination is not.

2) ChatGPT tells me (and I believe them) that the number of people named  John Smith is between 25,000 and 40,000. I've heard it's a common name, but I do not know anyone named that. How is that possible?

3)  I predict that people with last name Smith will be reluctant to name their child John because it is so common. Hence this name will decline. However, I made that same prediction 40 years ago. Maybe it did decline TO 40,000.

4) The most common boys' names in 2025 (so far) in the US are, in order:

 Liam, Noah, Oliver, Theodore, James, Henry, Mateo, Elijah, Lucas, William

According to this website here, the five  most common  boys' names in 1960 (the year I was born), in order, are 

David, Michael, John, James, Robert.

James is the only name on both lists.

I don't have the next five for 1960, though I suspect Henry and William would be on it. I also suspect that Liam and Mateo would not have been in the 1960 top 100.

5) The most common girls' names in 2025 (so far) in the US are, in order:

Olivia, Emma, Amelia, Charlotte, Mia, Sophia, Isabella, Evelyn, Ava, Sofia

(Should they have Sophia and Sofia both on the list? I would say yes but it is odd.)

 According to that same website,  the most common girls' names in 1960 (the year I was born), in order, are

 Mary, Susan, Linda, Karen, Donna

None of the 1960 names are on the 2025 list. I suspect that would still be true for the next 10.

6) When I was in grade school, if the teacher said Quiet down, Bill and Lisa, the class went quiet.

7) The most common first name in the world is Mohammad. The most common last name in the world is Wang. Hence the most common name in the world has to be Mohammad Wang. I've asked students to find the flaw in that logic and all 8 Mohammad Wangs in my class found the flaw. Kudos to them! (Darling says I should always tell you when I am kidding. I am kidding. Only 2 of the 8 Mohammad Wangs in my class found the flaw.)

8) (ADDED LATER) One of my readers emailed me the following two items

a) From The Big Bang Theory

Howard: It gets better. Someone has to go up with the telescope as a payload specialist, and guess who that someone is.

Sheldon: Mohammed Lee.

Howard: Who’s Mohammed Lee?

Sheldon: Mohammed is the most common first name in the world, Lee, the most common surname. As I didn’t know the answer, I thought that gave me a mathematical edge.

b) As an interesting aside, I noticed that this semester I have three students named "Luke" and three named "Ben", which made me realize that kids who grew up with the original Star Wars movies would have kids in college around now (but none of my current students are named "Leia").

9) (ADDED LATER)  I HEARD that the name `Barack' was popular when he was president.

I looked it up:

5 babies got that name in 2007

52 babies got that name in 2008

69 babies got that name in 2009

It has declined since then.

BAD SCIENCE TIME: The number of babies named Barack increased 10-fold from 2007 to 2008.

TRUE but not significant. 

10)  I hope to supervise at least one more Charles Lin before I retire.


Wednesday, October 08, 2025

Big Bots Don't Cry

A few comments on last week's post Computers Don't Want suggested that human brains are just advanced computers, yet still possess agency and desires. But are we just Turing machines? I wrote about this question before but let's revisit it in a world where artificial general and super intelligence may (or may not) be right around the corner.

Much of what our brain does, the way we store and retrieve our memories, how we process our senses, how we reason and learn, is very much computational. There is something else, something that gives us an internal view of ourselves, that combined with the computational power of the brain leads to self-awareness, agency, free will, emotions and desires. When I see attempts to give computational explanations for our internal concepts, like the Blums' work on consciousness and free will, I see them capturing the properties we attribute to these concepts but failing to capture the intuitive notion I have of them. I have some internal capability, a "soul" for lack of a better name, that allows me to reason about myself.

I think of René Descartes and his Meditations on First Philosophy, famous for cogito ergo sum (I think, therefore I am) in the second meditation. Computation and even its execution are mathematical concepts, and mathematics lies outside of any physical world. You can't reason from a computational object that it exists. Yet Descartes is able to reason about his own existence. In the sixth meditation, Descartes talks about substance dualism, a separation of mind and body. I view the body as containing the brain, the computational part of our being, and the mind as some other entity which, combined with our powerful computational brain, enables us to reason about ourselves and gives us free will, emotions and desires.

I met a religion professor and asked him about this topic. He mentioned that he had a crying baby sitting next to him on the plane to Chicago. Babies cry for many reasons but sometimes just to be held by their mother or father, a need for companionship. I can imagine a computer doing the equivalent of crying but not the need to do so.

I can't explain how the soul interacts with our brains; I suspect it goes beyond some simple physical mechanism. I can't prove that I have a soul, and while I can't prove the rest of humanity also has souls, I believe they do since otherwise we wouldn't even have concepts like self-awareness. But I don't believe AI models, now or in the future, have something like a soul, and we shouldn't reason about them as though they do.

Sunday, October 05, 2025

If you use AI in your work do you brag about it or hide it?

You used AI in your work. Do you hide it or brag about it? 

1) In 2002 there was a movie Simone about an actress who is really an AI. The Wikipedia entry tells the entire plot. I saved time by reading it in two minutes rather than watching it in two hours. I don't think I missed anything.

Key Plot Point: The creator of Simone hides the fact that Simone is an AI. Why hide it? Think of the publicity that would be generated by bragging about it!

The following comment would have been clever in 2002 but is so obvious now that it's banal:

Hiding that it's an AI actress is more unrealistic than having an AI actress. 

Fast forward to 2025 and into reality: There IS an AI actress named Tilly Norwood (do AIs need last names? Apparently yes.) She has not appeared in any movies yet but she will be on Instagram and other media soon. Are the creators trying to hide that she is not human? Quite the contrary---they are bragging about it. The Screen Actors Guild is complaining about this; see here. The complaint is that she can't be a real actress since she has no real life experiences to draw upon. If they are saying she won't be a good actress, that's for the studios and the audience and the critics to decide. If a new technology is threatening your livelihood, then arguing that it's not very good is a terrible strategy, since the claim may well be false and it isn't your real concern anyway.

2) Recently Scott used GPT-5-Thinking to help him with a proof. Did he hide this fact? Quite the contrary, he pointed it out as an interesting application of AI. See his blog post here.

3) There was a Chemistry Nobel Prize, and a Physics Nobel Prize, for work in which AI played a role. Did they try to hide that AI was used? Quite the contrary. See Lance's post on this, here.

4) Do you know of any case where AI or a computer was used and the authors wanted to hide that fact? Aside from chess players using a computer, and students using ChatGPT, I cannot think of one. Not for any serious research. (My spellcheck thinks ChatGPT is not a word. This time spellcheck is wrong.)

5) The opposite has happened: the Mechanical Turk chess-playing "machine," which secretly had a human inside. See here.

6) I recently saw the movie I, Robot from 2004. The main human character is against robots and says Can a robot write a symphony? He means this to be a rhetorical question whose answer is no. I counter: can a computer write a movie as dumb as I, Robot? Simone?

Wednesday, October 01, 2025

Computers Don't Want

I read through the new book If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares. "It" refers to Artificial Super Intelligence (ASI). A very short version of the authors' argument: You can view advanced AI as though it has its own desires and agency, its needs will be incompatible with those of the human race, and AI has the capability to eliminate humans even without the killer robots.

I have no doubt that a crafty sentient AI hellbent on destroying humanity could do so. But let's look at the first part of the argument: should we reason about AI as though it has agency and preferences? The authors make a subtle argument in Chapter 3 that while AI doesn't have its own wants and desires, we can reason about AI as though it does. In the following chapters, the authors go all in and think of ASI as though it has preferences and acts in its own self-interest.

I think of computing as a Turing machine, a device that follows a set of simple instructions, interacting with input and memory, and producing some output. The machine does not have wants or desires; all it does is follow its instructions.
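
To make the point concrete, here is a bare-bones sketch of such a machine in Python (my own toy example, a machine that flips the bits of its input); its entire "behavior" is a transition table followed mechanically:

```python
# A bare-bones Turing machine simulator: a transition table, a tape, a head,
# and a state. Nothing else.
def run_tm(transitions, tape, state="start"):
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = tape.get(head, "_")      # "_" is the blank symbol
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

# A machine that flips every bit of its input and halts at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm(flip, "0110"))  # prints 1001
```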

But we also realize the immense complexity that can arise from such simplicity. We have Rice's Theorem that says we can't understand, in general, anything about a Turing machine's output from the code of the machine. And there's a reason why we can't prove P ≠ NP or even have a viable approach: we have no idea how to bound the complexity of efficient algorithms. But we shouldn't mistake complexity, and our inability to understand the algorithm, for evidence of agency and desires. Even if AI seems to exhibit goal-oriented behavior, it's a property of its training and not evidence of independent agency.

I worry less about AI developing its own hostile agency than about how humans will wield it, whether through malicious misuse or misplaced trust. These are serious risks, but they're the kinds of risks we can work to mitigate while continuing to develop transformative technology. The "everyone dies" framing isn't just fatalistic, it's premised on treating computational systems as agents, which substitutes metaphor for mechanism.

Sunday, September 28, 2025

Clyde Kruskal talks about his Father Martin on Martin's 100th birthday

Martin Kruskal was born Sept 28, 1925 and passed away on Dec 26, 2006, at the age of 81 (we did two posts for his memorial, here  and here). Today, Sept 28, 2025, is his 100th birthday. His son Clyde Kruskal wrote today's blog post as a tribute to his father.

We note that Clyde did a blog post for his Uncle Bill's 100th birthday, here, and plans on doing one for his Uncle Joe in two years.

-----------------------------------------------------------------------------------------

My father, Martin Kruskal, was a mathematician and physicist, best known for his early work in plasma physics, the discovery (independently of George Szekeres) of Kruskal-Szekeres coordinates, the modern discovery with Norman Zabusky of solitons, and the discovery with Clifford Gardner, John Greene, and Robert Miura of the inverse scattering transform. He had an incredible willingness to tackle interesting, seemingly small problems, some of which later turned out to be highly influential. Here, in chronological order, is a list of some of his more esoteric and/or less well-known contributions, not necessarily research contributions (besides me).


1. Random Mappings

Renowned mathematicians Nicholas Metropolis and Stanislaw Ulam (inventors of the Monte Carlo method) posed the problem of determining the expected number of components in a random mapping from \(\{1,\ldots,N\}\) to \(\{1,\ldots,N\}\). Framing this as a random graph problem: Construct a graph on \(N\) vertices by choosing, for each vertex \(i\) (from \(1\) to \(N\)), a random vertex \(j\) (also from \(1\) to \(N\)), and create the undirected edge \((i,j)\). If \(i=j\), no edge is created (or equivalently, a self-loop is created). The question is: What is the expected number of connected components in such a graph? In 1954, in his first paper after graduate school, he showed that this value is asymptotically \((1/2)(\ln(2N) + C) + o(1)\), where \(C=0.5772\ldots\) is Euler's constant. For the paper see here.
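
A quick Monte Carlo sketch of the quantity in question, building the random graph and counting components with union-find, which can be compared against the formula quoted above:

```python
import random
from math import log

# Estimate the expected number of connected components in the random graph
# where each vertex i picks a random j and the edge {i, j} is added.
def components(n):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    count = n
    for i in range(n):
        j = random.randrange(n)
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            count -= 1
    return count

n, trials = 10_000, 200
avg = sum(components(n) for _ in range(trials)) / trials
print("simulated:", avg)
print("formula:  ", 0.5 * (log(2 * n) + 0.5772156649))  # the formula above
```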


2. Computer Chess

My father was a strong chess player and, in 1956, arguably became the first person to play a game of chess against a computer. When I learned about this in junior high school, I figured that this must be his most significant achievement. Because the MANIAC 1 computer was slow and had little memory, the game was played on a 6×6 chessboard without bishops and with simplified rules. My father gave the computer queen odds. Spoiler alert: He won. For more details, including the full game, see this explanation on Wikipedia here.


3. The Card Game Delphi

In 1956, Bob Abbott invented Eleusis, a card game involving inductive inference.  While the game itself was highly original, the scoring system was somewhat arbitrary.  In 1962, my father devised the game Delphi, which, according to Martin Gardner, made several important improvements, including a much more logical scoring system.  While it solved the scoring problem, Delphi turned out to be less fun to play than Eleusis.  Later, Bob Abbott created an improved version of Eleusis called New Eleusis, which, unfortunately, still did not completely solve the scoring problem.
For the rules to Delphi, see this article in Denexa Games here.

4. The Kruskal Count

My father invented a card trick, called the Kruskal Count, with the unique characteristic that the magician never touches the deck of cards during the performance. To make this plausible, he would claim to the audience that he was psychic. There are various YouTube videos demonstrating and discussing it. Shortly after devising the trick, my father, brother, and I visited a young married couple. My father performed the trick several times on the husband, who kept accusing his wife of secretly colluding with him. She was insulted and kept denying it, truthfully. Finally my father said to the wife, Well, the cat's out of the bag. You have been in on it all along, and you can admit it now. (She hadn't.) My brother and I (both teenagers) thought it was hilarious; I doubt the couple did.

The trick turned out to have serious applications: According to Wikipedia, the underlying phenomenon has applications in cryptography, code-breaking, software tamper protection, code self-synchronization, control-flow resynchronization, design of variable-length codes and variable-length instruction sets, web navigation, object alignment, and others.
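
For the curious, here is a small simulation sketch of the underlying phenomenon, using one common set of conventions (face cards count 5, aces 1, other cards their face value). It estimates how often every possible starting position in a shuffled deck gets absorbed into the same final card:

```python
import random

# Follow the Kruskal Count chain: each card's value tells you how far to
# jump ahead; stop when the next jump would run off the end of the deck.
def final_position(values, start):
    pos = start
    while pos + values[pos] < len(values):
        pos += values[pos]
    return pos

def trial():
    # Ace=1, ..., 10=10, and face cards count as 5 (one common convention).
    deck = [rank if rank <= 10 else 5 for rank in range(1, 14)] * 4
    random.shuffle(deck)
    finals = {final_position(deck, start) for start in range(10)}
    return len(finals) == 1               # did all ten starts coalesce?

trials = 10_000
print(sum(trial() for _ in range(trials)) / trials)
```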

The Kruskal Count was featured in an episode of NUMB3RS, which Gasarch blogged about here.

5. Surreal Numbers

My father had a lifelong fascination with infinity, starting in childhood, which deepened when he read Donald Knuth's introductory book on surreal numbers. For more about his interaction with Knuth after reading the book, see this blog entry here. Wikipedia's page on my father provides a concise explanation of surreal numbers and his contributions. The entry highlights important results, both positive and negative, by Ovidiu Costin, Philip Ehrlich, and the mathematical logician Harvey Friedman (see here and here for the papers).

Martin's website has the following passage, which captures the issue beautifully.

Surreal numbers, which are defined constructively, have all the basic properties and operations of the real numbers.  They include the real numbers alongside many types of infinities and infinitesimals. Kruskal contributed to the foundation of the theory, to defining surreal functions, and to analyzing their structure.  He discovered a remarkable link between surreal numbers, asymptotics, and exponential asymptotics. A major open question, raised by Conway, Kruskal, and Norton in the late 1970s, and investigated by Kruskal with great tenacity, is whether sufficiently well behaved surreal functions possess definite integrals.  This question was answered negatively in the full generality, for which Conway et al. had hoped, by Costin, Friedman, and Ehrlich in 2015.  However, the analysis of Costin et al. shows that definite integrals do exist for a sufficiently broad class of surreal functions for which Kruskal's vision of asymptotic analysis, broadly conceived, goes through.  At the time of his death, Kruskal was in the process of writing a book on surreal analysis with O. Costin.

6. Alpha-Beta Search

While in graduate school, I came across Donald Knuth and Ronald Moore's paper on alpha-beta search (see here). They posed the question of the expected branching factor of alpha-beta search on a game tree with \(c\) children per node, depth \(d\), and uniformly distributed values at the leaves. They gave a partial solution. Gerard Baudet found a recurrence for this value and improved the bound (see here). This was the kind of asymptotics that my father excelled at, so I asked him if he could solve the recurrence. He solved it by finding an asymptotic value for the number of leaves evaluated. Right after that, Judea Pearl published the asymptotic value for the branching factor (see here).

At the time I figured that the original question had been answered by Pearl, so the stronger result was not interesting. A more mature version of me might have pursued it. In reality, answering the original question is interesting, but for a number of reasons any further improvement is not particularly interesting, so I was probably right to drop it, even if for the wrong reasons. Unfortunately it was in a file drawer lost during the move to our new building, but if anyone cares I think I remember the exact high-order term.
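
For readers who want to see the setup, here is a sketch of the experiment behind the question (not Baudet's recurrence or Pearl's analysis): run alpha-beta, in negamax form, on a uniform tree with \(c\) children per node and depth \(d\), with i.i.d. uniform leaf values, and count how many leaves actually get evaluated:

```python
import random

# Alpha-beta in negamax form on a uniform random tree; counter[0] tracks
# how many leaves get evaluated.
def alphabeta(depth, c, alpha, beta, counter):
    if depth == 0:
        counter[0] += 1
        return random.random()            # a fresh i.i.d. uniform leaf value
    value = float("-inf")
    for _ in range(c):
        value = max(value, -alphabeta(depth - 1, c, -beta, -alpha, counter))
        alpha = max(alpha, value)
        if alpha >= beta:
            break                          # prune the remaining children
    return value

c, d, trials = 3, 8, 200
total = 0
for _ in range(trials):
    counter = [0]
    alphabeta(d, c, float("-inf"), float("inf"), counter)
    total += counter[0]
print("average leaves evaluated:", total / trials, "out of", c ** d)
```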

7.  Public Key Crypto

When he learned about public key cryptography, he invented his own system, based on a quadratic mapping, just to understand how such systems work. He gave a talk on it in 1983. He showed it to Ron Rivest, who said that it was not secure. Oh well.