The one larger point I would suggest adding is my operational definition of progress: progress is being made on a problem if, when the solution is published, it will cite work being published today. Of course that is "operational" only after the fact. DeMillo, Lipton, and Perlis have a nice riff on this at the end. The alchemists thought they were making progress on turning lead into gold but they weren't, even though we now know that was actually a solvable problem. Likewise, jumping off of higher and higher buildings was not making progress toward heavier-than-air flight.
Sunday, October 17, 2021
Friday, October 15, 2021
When László Babai first announced his quasipolynomial-time graph isomorphism result, I wrote
We think of theory as a young person's game, most of the big breakthroughs coming from researchers early in their careers. Babai is 65, having just won the Knuth Prize for his lifetime work on interactive proofs, group algorithms and communication complexity. Babai uses his extensive knowledge of combinatorics and group theory to get his algorithm. No young researcher could have had the knowledge base or maturity to be able to put the pieces together the way that Babai did.
Babai's proof is an exceptional story, but it is the exception. Most CS theorists have done their best work early in their careers. I got myself into a Twitter discussion on the topic. For me, I'm proud of the research I did through my forties, but I'll always be best known, research-wise, for my work on interactive proofs around 1990. It would be hard to run a scientific study to determine cause and effect, but here are some reasons, based on my own experiences, why we don't see research dominated by the senior people in theory.

The field changes - Computational complexity has moved from a computational-based discipline to one now dominated by combinatorics, algebra and analysis. I'm not complaining, a field should evolve over time, but it plays less to my strengths. It's hard to teach this old dog new tricks.
Sunday, October 10, 2021
Lance: How come you haven't blogged about your muffin book? You've blogged about two books by Harry Lewis (see here and here), one book by the lesswrong community (see here), and you even did a mashup of a post by two different Scott A's (see here), but not one on your own work.
Bill: I thought I did a post on my muffin book.
Lance: No. You have blogged about the muffin problem, and sometimes you mention either the book or the problem in passing, but you haven't had a post that says
HEY, I wrote a book!
And this is all the more strange since you asked me to have the book on our blog page.
Bill: (Searches blog with keyword muffin and finds no ref to the muffin book.) Well pierce my ears and call me drafty! I have not posted on the muffin book! Do you recall my thoughts on when to tell people you are working on a book?
Bill: I had a college roommate who was an aspiring science fiction writer who told me there are two kinds of people: Those who talk about writing a book, and those who write a book. I have adapted this to:
Do not tell people you are writing a book until you are picking out the cover art.
Lance: I posted about my book when I hadn't even decided on the title. But your cover art is picked out (see here). And, by the way, it's very nice, though it makes me hungry. So I think you can begin talking about the book.
Bill: Indeed! I will!
Hey I have a book! (See here to buy it on amazon.)
Title: Mathematical Muffin Morsels: Nobody Wants a Small Piece
by Gasarch, Metz, Prinz, Smolyak
(The other authors were undergraduates when we wrote the book. Prinz and Smolyak are now grad students in CS; Metz is in finance.)
Martin Gardner wrote a recreational mathematics column for Scientific American for many years, starting in 1956 and ending in the early 1980s. For many STEM people of my generation (using my fake birthday of Oct 1, 1960, I am 62 years old), Martin Gardner's columns were both an inspiration and an early exposure to mathematics. His columns also made the line between mathematical recreation and so-called serious mathematics thin or nonexistent. (See here for a review of Martin Gardner in the 21st Century, a book about the kind of math Gardner wrote about. The book makes a mockery of the distinction between recreational and serious mathematics.) He passed away in 2010 at the age of 95.
There is a gathering in his honor, called Gathering for Gardner, held roughly every 2 years. (It was cancelled in Spring 2020 and Spring 2021 because of COVID, though it's in Atlanta where the CDC is, so they could have had it as an experiment and told the CDC the results.) You have to be invited to go to it. I got an invite for 2016 from my contact at World Scientific, who published my previous book, Problems with a Point: Exploring Math and Computer Science, co-authored with Clyde Kruskal (I had two blog posts on it, here and here, and you can buy it on amazon here). I did three posts on G4G-2016 (here, here, and here).
Aside from seeing some great talks that I understood and liked, I also picked up a pamphlet titled:
The Julia Robinson Math Festival
A Sample of Mathematical Puzzles
Compiled By Nancy Blackman
One of the problems, credited to Alan Frank, was
How can you divide and distribute 5 muffins for 3 students so that everyone gets 5/3 and the smallest piece is as big as possible?
They had some other values for muffins and students as well.
I solved the (5,3) problem and the other ones as well. That was fun. (For (5,3) the answer turns out to be 5/12: cut four muffins into pieces of sizes (5/12, 7/12) and cut one muffin in half; one student gets the four 5/12 pieces, and each of the other two students gets two 7/12 pieces and one half.)
When I got home I began looking at the problem for m muffins and s students. I let f(m,s) be the biggest smallest piece possible when giving out m muffins to s students so that everyone gets m/s. I proved a general theorem, called the Floor-Ceiling theorem, that always gives an upper bound FC(m,s) on f(m,s). I worked out formulas for
f(m,3) (it's always FC(m,3)),
f(m,4) (it's always FC(m,4)).
While working on f(m,5) I found that f(m,5) was always FC(m,5) EXCEPT for m=11. So what's up with f(11,5)?
By the Floor-Ceiling theorem, f(11,5) \le 11/25. We (at that point several ugrads and HS students had joined the project) were unable to find a protocol that would show f(11,5) \ge 11/25. Personally I thought there WAS such a protocol but perhaps it was more complicated than the ones we had found (we were finding them by hand using some easy linear algebra); perhaps a computer program was needed. We did find a protocol showing f(11,5) \ge 13/30, which we assumed was not optimal.
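For readers who want to experiment, here is a minimal sketch of the Floor-Ceiling bound in Python, assuming the standard formula FC(m,s) = max(1/3, min(m/(s\lceil 2m/s\rceil), 1 - m/(s\lfloor 2m/s\rfloor))) as I recall it from the muffin literature (it applies when s does not divide m). As a sanity check, it reproduces the 11/25 bound for f(11,5) quoted above.

```python
from fractions import Fraction
from math import ceil, floor

def FC(m, s):
    # Floor-Ceiling upper bound on f(m,s); assumes s does not divide m
    # (if s | m, just hand out whole muffins and f(m,s) = 1).
    V = Fraction(2 * m, s)  # the average student gets 2m/s pieces
    return max(Fraction(1, 3),
               min(Fraction(m, s * ceil(V)),
                   1 - Fraction(m, s * floor(V))))

print(FC(11, 5))  # 11/25
print(FC(5, 3))   # 5/12
```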
While on an Amtrak I began working out the following train of thought: a protocol achieving f(11,5) \ge 11/25 MUST have
(1) every muffin cut into two pieces,
(2) 3 students get 4 pieces,
(3) 2 students get 5 pieces (note 3\cdot 4 + 2\cdot 5 = 22, matching the 22 pieces from (1)).
While working on getting a protocol for f(11,5) \ge 11/25 with these properties I found that... there could be no such protocol! Then by reworking what I did I found that f(11,5) \le 13/30. So it was done! And we had a new technique, which we call the Half Method. To see the full proof, see my slides here.
The story above is typical: we get f(m,k) for all 1 \le k \le SOMETHING, we get stuck, and then we find ANOTHER technique to show upper bounds (which in this case are limits on how well we can do). This happened about 8 times, depending on how you count. After a while we realized that this could not just be an article: this was a book! World Scientific agreed to publish it, and it's out now.
1) I got a conference paper out of it, in the Fun with Algorithms conference, with some of the co-authors on the book and some other people. Here is the conference paper.
2) Early on we realized that f(m,s) = (m/s)f(s,m), so we only had to look at the m > s case. (For example, f(3,5) = (3/5)f(5,3) = (3/5)(5/12) = 1/4.)
3) The fact that f(m,s) exists and is rational is not obvious, but it is true. In fact, f(m,s) can be found by a mixed-integer program (see the sketch after this list).
4) Late in the process I found that there was a by-invite-only math newsgroup that had discussed the problem, and in fact was where Alan Frank first posted it. I obtained their materials and found that they had already shown f(m,s) = (m/s)f(s,m), and also that the answer always exists and is rational. Aside from that, our results did not overlap.
5) Even later in the process, Scott Huddleston emailed me (out of the blue) that he had a program that solved the muffin problem quickly. I was skeptical at first, but he did indeed have a whole new way to look at the problem, and his code was very fast (I had Jacob Prinz, one of the co-authors on the book, recode it). Later Richard Chatwin (see here) seems to have proven that Scott's method always works. The approach of Scott and Richard is where to go if you want to do serious further research on muffins. My book is where to go if you want to learn some easy and fun math (a HS student could read it).
6) I co-authored a column on muffins with Scott H, Erik Metz, and Jacob Prinz, featuring Scott's technique, in Lane's complexity column, here.
7) I had an REU student, Stephanie Warman, write a muffin package based on the book.
8) I gave an invited talk on The Muffin Problem at a Joint AMS-MAA meeting.
9) I gave a talk at Gathering for Gardner 2018 on The Muffin Problem.
10) I often give talks on it to groups of High School students.
11) When I teach Discrete Math Honors I talk about it and assign problems on it- it really is part of the course. As such, it's a good way to reinforce the pigeonhole principle.
12) I contacted Alan Frank about my work. We arranged to meet at an MIT combinatorics seminar where I was to give a talk on muffins. He brought 11 muffins, with 1 cut (1/2, 1/2), 2 cut (14/30, 16/30), and 8 cut (13/30, 17/30), so that the 5 of us could each get 11/5 with smallest piece 13/30.
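As promised in item 3, here is a hedged sketch of the mixed-integer programming approach. The formulation and variable names are mine (not the book's or the newsgroup's): x[i][j] is how much of muffin i student j gets, the binary y[i][j] flags whether that piece exists, and we maximize the size z of the smallest existing piece. It uses the PuLP library with its bundled CBC solver.

```python
from pulp import (LpProblem, LpMaximize, LpVariable, lpSum,
                  LpBinary, PULP_CBC_CMD)

def muffin_mip(m, s):
    prob = LpProblem("muffins", LpMaximize)
    z = LpVariable("z", 0, 1)  # size of the smallest piece
    # x[i][j] = fraction of muffin i given to student j
    x = [[LpVariable(f"x_{i}_{j}", 0, 1) for j in range(s)] for i in range(m)]
    # y[i][j] = 1 iff muffin i contributes a piece to student j
    y = [[LpVariable(f"y_{i}_{j}", cat=LpBinary) for j in range(s)]
         for i in range(m)]
    prob += z  # objective: maximize the smallest piece
    for i in range(m):
        prob += lpSum(x[i]) == 1  # each muffin is fully handed out
    for j in range(s):
        prob += lpSum(x[i][j] for i in range(m)) == m / s  # everyone gets m/s
    for i in range(m):
        for j in range(s):
            prob += x[i][j] <= y[i][j]            # no piece unless flagged
            prob += x[i][j] >= z - (1 - y[i][j])  # flagged pieces are >= z
    prob.solve(PULP_CBC_CMD(msg=0))
    return z.value()

print(muffin_mip(11, 5))  # about 0.4333, i.e., 13/30
```

Since CBC works in floating point, this only approximates f(m,s); recovering the exact rational value takes extra care.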
Why did I keep working on this problem? I kept working on it because I kept hitting barriers and (with co-authors) breaking them with new techniques that were interesting. If early on a barrier had not been breakable, I would have stopped. If (say) Floor-Ceiling had solved everything, then I might have gotten a paper out of this, but surely not a book.
Lesson for all of us: look around you! It's not clear what is going to inspire a project!
Lasting effect: I am reluctant to throw out old math magazines and pamphlets since you never know when one will lead to a book.
Friday, October 08, 2021
Potbelly, a local sandwich chain, made me an offer I couldn't refuse: change my password and earn a free (and quite tasty) oatmeal chocolate chip cookie. A free cookie is a great motivator, so after checking that this wasn't some clever phishing attack, I changed my password and got my cookie. I'm not sure why Potbelly wanted me to change my password, but I'm happy to take their cookie.
Potbelly likely didn't make this offer to everyone, so what if you want a cookie?
- Use an app to get a cookie delivered.
- Visit a specialty cookie store.
- Go to your local supermarket and pick up a package of Chip's Ahoy.
- Buy some pre-made cookie dough and put it in the oven.
- Buy some cookie mix, add ingredients and bake.
- Find a cookie recipe, buy the ingredients, and get cooking.
- Get fresh ingredients direct from a farm stand.
- Grow and gather your own ingredients, a la Pancakes, Pancakes!
The same range of levels exists for machine learning. You could be:
- Not even realizing you are using machine learning, such as getting recommendations on Netflix or Facebook.
- Using ML implicitly, like talking to Alexa.
- Using pre-trained ML through an app, like Google Translate.
- Using pre-trained ML through an API.
- Using a model like GPT-3 with an appropriate prompt.
- Using an easily trained model like Amazon Fraud Detector.
- Using an integrated machine learning environment like Sagemaker.
- Using pre-built ML tools like TensorFlow or PyTorch (a toy sketch follows this list).
- Coding up your own ML algorithms in C++.
- Building your own hardware and software.
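To make one of these levels concrete, here is a toy sketch at the TensorFlow/PyTorch level (the example and data are mine, made up for illustration): a one-parameter PyTorch model trained to fit y = 2x. The library supplies autograd and the optimizer, but you still assemble the model and training loop yourself.

```python
import torch
import torch.nn as nn

# Made-up toy data: ten points on the line y = 2x.
x = torch.linspace(0, 1, 10).unsqueeze(1)
y = 2 * x

model = nn.Linear(1, 1)  # one weight and one bias to learn
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

for _ in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)  # how far off are the predictions?
    loss.backward()              # autograd computes the gradients
    optimizer.step()             # the optimizer updates the parameters

print(model.weight.item(), model.bias.item())  # roughly 2 and 0
```

Contrast this with the bottom rung, where you would also write the gradient computation and the update rule yourself.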
Sunday, October 03, 2021
(Disclosure - Harry Lewis was my PhD advisor.)
It seems like just a few weeks ago that I blogged about a book of Harry Lewis's that had recently become available (see here). And now I am blogging about another one. Writing two books in two years seems hard! I can only think of one other computer scientist who has done that recently (see here and here).
In 2008 Abelson, Ledeen, and Lewis wrote
Blown to Bits: Your Life, Liberty, and Happiness after the Digital Explosion
which I reviewed in SIGACT News (see here).
Both computers and society have changed since 2008. Hence an update was needed.
In 2021 Abelson, Ledeen, Lewis, and Seltzer wrote a second edition.
Should you buy the new version if you bought the old version?
1) Not my problem- I got them both for free since I reviewed them.
2) Not your problem- The second edition is available free online here. Is that a link to some dark corner of the dark web? No, it's the official webpage for the book. So the book is legally available free online, if you care (and even if you don't care).
3) If you like paper, the book is on amazon. (If you don't like paper, the book is still on amazon).
I reviewed it in SIGACT news. A non-paywalled link: here (is that link legal? I have no idea.)
In this post I'll just mention two things that have changed since the first edition:
1) Shared music and pirating were an issue back in 2008. They don't seem to be anymore, since there is now a variety of services that make pirating not worth it: iTunes, streaming services, and bands that give music away for free and ask you to pay what you think it's worth. Movies are still struggling with this issue.
2) AI systems that reinforce existing bias are a new problem.
Thursday, September 30, 2021
If you have Netflix and are interested in the academic world, I recommend The Chair, a six-episode dramatic series starring Sandra Oh as a new English department chair at a "lower tier ivy league university". The series takes many artistic liberties and compresses much into a short time period, but it gets a lot about academia right: the tension between faculty and the administration with the chair caught in the middle, the need to create majors that attract students, faculty past their prime teaching the same courses in the same way for decades, faculty who get themselves into a hole and keep digging, alumni donors controlling academic decisions, pressure to build a diverse faculty, faculty feeling underappreciated and getting outside offers, and a wonderful exposition of how the field has changed over the past thirty years, delivered to someone who had dropped out before finishing their PhD to take on a different career.
When I served as department chair at Georgia Tech, I dealt with most if not all of the issues above, though not at the same time. I also had some challenges that today's English departments don't face, such as handling enrollments that more than doubled while barely being able to hire more faculty than were departing. Not that I would trade them, even for a second, for the existential crisis that English departments are going through.
When I left Georgia Tech after seven years, I had outlasted every other current chair in the Colleges of Computing, Science and Engineering. Not sure what this says about me or about Georgia Tech.
Being chair is the most challenging job in academia. The faculty technically report to you but you aren't their boss in any traditional sense--they came to academia for the freedom to work on what they want and they won't give it up. It's virtually impossible to fire anyone with tenure. The joke goes that a chair needs two umbrellas: one to block the stuff coming from the administration toward the faculty, and the other to block the stuff from the faculty going toward the administration. Since I left, things have gotten much uglier in the University System of Georgia, which has no mask or vaccine mandates, and I'm glad I'm not the chair who has to deal with that.
This all may sound like I'm discouraging you from becoming a department chair, and it certainly isn't a job for everyone, but it can be a very rewarding one. You can help shape the future of the department through the faculty you hire and the vision you set, and create an environment that helps your faculty and students succeed.
Sunday, September 26, 2021
I got my PhD from Harvard in 1985 with advisor Harry Lewis.
Harry Lewis got his PhD from Harvard in 1974 with advisor Burton Dreben (Dreben was in the Philosophy department and did logic). Burton Dreben never got a PhD (more on that later). So I thought my lineage stopped there. A while back I was in an email conversation with Harry and for some odd reason Galileo came up.
He then emailed me the following:
Did you know you were descended from Galileo, via Newton? See below. The data is from the Math Genealogy project (see here). As you know, Dreben had no PhD, but it would certainly be fair to call Quine his advisor anyway. And, in fact, the Math Genealogy project lists Quine as Dreben's advisor. By starting with Dreben and clicking backwards I found the following:
In the list below everyone was advised (in some form) by the person below them.
William Gasarch, Harvard 1985
Harry Lewis, Harvard 1974
Burton Dreben, Harvard 1955
WVO Quine, Harvard 1932
AN Whitehead, Cambridge 1884
Edward John Routh, Cambridge 1857
William Hopkins, Cambridge 1830
Adam Sedgwick, Cambridge 1811
Thomas Jones, Cambridge 1782
Thomas Postlethwaite, Cambridge 1756
Stephen Whisson, Cambridge 1742
Walter Taylor, Cambridge 1723
Robert Smith, Cambridge 1715
Roger Cotes, Cambridge 1706
Isaac Newton, Cambridge 1668
Isaac Barrow, Cambridge 1652
Vincenzo Viviani, Pisa 1642
Galileo Galilei, Pisa 1585
A few observations
1) Dreben was a philosophy professor at Harvard without a PhD. How? He was a Junior Fellow, which is for brilliant people, some of whom were made professors without the burden of going through the PhD-getting ritual. Andrew Gleason was a professor of Math at Harvard without a PhD-- also a Junior Fellow (he solved Hilbert's 5th problem, which surely helped). Tom Cheatham was a CS professor at Harvard who did not have a PhD but was not a Junior Fellow. I do not know how he did that. Things are more formal now, and more people have PhDs, so I suspect it is much rarer to be a professor without a PhD. Harvard still has the Junior Fellows program, but even they have PhDs now. If someone solved P vs NP as an ugrad, I suspect they would be hired as a professor even though they do not have a PhD. That's one way for a theorist to get out of taking graduate systems courses.
2) Note that Galileo and Viviani were in Pisa, but then there is a long line of people from Cambridge. In those days schools hired their own. Is this good or bad? They know what they are getting, but you could have an old-boys network blocking fresh new talent, and you may get stuck in your ways. Nowadays, at least in America, it is uncommon to stay at the school where you got your PhD.
3) The shift from Pisa to Cambridge might be part of a more general phenomenon--the intellectual center of science shifting from Italy to England. What caused this? Amir Alexander, in his book Infinitesimal: How a Dangerous Mathematical Theory Shaped the Modern World (see my review here), speculates that the Catholic Church's rejection of infinitesimals was the cause. I suspect that letting non-scientists interfere with science was the cause (a lesson for us all).
4) Lance did a blog post on his lineage here. He has Gauss and Euler as ancestors.
5) To honor the myths about my two most famous academic ancestors, Galileo and Newton, I am going to travel to Italy and have Darling drop two apples of different weights off the leaning tower of Pisa and see if they hit my head at the same time.
Thursday, September 23, 2021
An undergrad thesis from North Carolina State University tackles the question of why computer science uses conferences as its main and most prestigious publication venues. The author, Elijah Bouma-Sims, gives a synopsis, with some interesting follow-up conversation, in this Twitter thread.
The upshot is that the conference culture grew organically early in computing and just took hold as the field grew. My personal non-scientific theory is that technology not available to earlier fields, namely jet airplanes, allowed CS to have national and international meetings that researchers could regularly attend. Before that, conferences in more established fields like math were held either locally (AMS sectional meetings) or less often (the ICM, held every four years), traditions that continue to this day.
Covid has temporarily suspended fully on-site conferences, and new technologies allow us to have virtual meetings. It's still not clear what the new normal for conferences will be. I hope we get to a model with more virtual meetings and rarer in-person meetings that people make more of an effort to attend: conferences focused on networking instead of publications.
The culture of conference publications has been slowly changing. Many subfields of CS, though not so much theory, have moved to a hybrid model where papers are submitted to a journal and the accepted ones are invited to be presented at a conference.
Conferences used to be the first place you would hear about new results, but that's no longer the case. Papers posted on arXiv get noticed, and Google Scholar doesn't treat citations to an arXiv paper differently from those to any other publication venue.
Now you don't even need a presentation or a paper, just a promise of one. How many of you are excited about linear-size locally testable codes based on a talk announcement alone?