Wednesday, September 20, 2023

We Must Be Doing Something Right

The Chicago Tribune ran an editorial Monday that started

What’s the best four-year college in Illinois? Not the University of Chicago, Northwestern University or the University of Illinois at Urbana-Champaign.

No, the best college in the state is the Illinois Institute of Technology, of course!

The editorial was referring to the Wall Street Journal rankings, which placed Illinois Tech 23rd in the nation, tops in Illinois, up from 117th last year. Illinois Tech also cracked the top 100 in the latest US News rankings, up from 127th.

Did we just get that much better? Yes, yes we did! 

Or maybe it had to do with changes in methodology. The Wall Street Journal's rankings this year put a heavy emphasis on "how much will it improve the salaries they earn after receiving their diplomas". As Illinois Tech caters to students who often are the first in their families to go to college, and focuses on technical degrees, we can really raise up students who might not otherwise have had such an education. It's one of the things that brought me to the university in the first place.

US News also "increased the emphasis on how often schools' students from all socioeconomic backgrounds earned degrees and took advantage of information on graduate outcomes that was not available until recently". 

All rankings should be taken with a grain of salt, and universities will always tout rankings where they do well while conveniently ignoring others. It's impossible to linearly order colleges--there are just too many different factors that make different schools better for different people.

But as people start to question the value of college, the rankings are starting to address their concerns. And if that bumps up my university, so be it.

Sunday, September 17, 2023

ACM to go paper-free! Good? Bad?

The ACM (Association for Computing Machinery) will soon stop having print versions of most of its publications. Rather than list which ones are going paper-free, I list all those that are not going paper-free:
Communications of the ACM
ACM Inroads
Interactions
XRDS: Crossroads

What are the PROS and CONS of this? What are the PROS and CONS of any publication or book being paper free?


1) I like getting SIGACT News on paper since 
a) It reminds me to read it
b) Reading on the screen is either on my phone which is awkward (especially for math) or my desktop (so I have to be AT my desktop). 
I DO NOT think this is my inner-Luddite talking. 
 
2) QUESTION: Will SIGACT News and JACM and other ACM publications continue to have a page limit for articles? When I was SIGACT News Book Rev Col editor, and now as Open Problems Col editor, I have often had to ask the editor Can I have X pages this time? The answer was always yes, so perhaps there never really was a page limit. But is having no page limit good? Not necessarily. Having a limit may force you to write down only the important parts.

3) PRO: It's good for the environment to not make so much paper. While this is certainly true, I think the world needs to rethink our entire consumer society to really make a difference for the environment. In fact, I wonder if e-cars, carbon offsets, and paper-free products make us feel good without really helping much.

4) CON, but good timing: I recently had an open problems column with two co-authors. One of them is not in the ACM and is not in the community, but wanted to see a copy of the article. I have arranged to have a paper copy of that one issue sent to him. If I had published this column in 2024, I could not do this. And saying "Just go to link BLAH" does not have the same impact as PAPER. I could have printed it out for him, but that just does not seem the same as having an official copy.
I DO think this is my inner-Luddite talking. Or his.

5) For quite some time computer science conference proceedings have not been on paper (there have been a variety of ways this is done). Before that time the following happened a lot: I am in Dave Mount's office talking about something (e.g., who should teach what). He gets a phone call but motions that it will be short so I should still hang out. While hanging out I pick up a RANDOM proceedings of the conference RANDOM and find the one or two articles in it about Ramsey Theory and read them, or at least note them and read them later. That kind of RANDOM knowledge SEEMS less common in a paper-free age. But maybe not. I HAVE clicked around the web and accidentally learned things, like the facts I learned for my post on simulation theory here.

6) Similar to point 5: I used to go to the math library and RANDOMLY look at a volume of the American Math Monthly or some other similar journal and look at some articles in it. But now that's harder since libraries have stopped getting journals on paper and only get them electronically. To be fair, paper versions of the journals are EXPENSIVE.

7) In the year 1999 my grad student Evan Golub got his PhD and he had to bring a PAPER copy of it to some office where they measured margins and stuff on EVERY PAGE to make sure it was within university specs. Why? Because in an earlier era this was important for when the thesis was put on microfilm. Were they doing that in 1999? I doubt it. Some of my younger readers are thinking OH, they didn't have LaTeX packages that take care of margins for you. Actually they DID have such packages but, to be fair, the requirement that the university literally measure margins on EVERY PAGE was completely idiotic. I am happy to say that in 2007, when my student Carl Anderson got his PhD, nobody needed a paper version. I do not know when the rules changed but I am glad they did.
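For the record, here is a minimal sketch of how margins are handled today. The geometry package's margin option is real; the 1.5in value is just an example, not any university's actual spec:

```latex
\documentclass{report}
% One line replaces measuring every page by hand:
\usepackage[margin=1.5in]{geometry}
\begin{document}
Every page now satisfies the margin spec automatically.
\end{document}
```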

8) The ACM should promote this paper-free change by doing a rap song like Progressive Insurance did here.

9) Recently I had a paper with two co-authors; all three of us, and some others, proofread it (I thought) very carefully. The referee accepted it but with a somewhat nebulous "this paper needs a better proofreading." I then PRINTED IT OUT and read it AWAY FROM MY COMPUTER (the paper is on Overleaf) with a RED PEN and I found LOTS of stuff to fix that we all missed before. So there are some advantages to getting OFF of the computer, though that may require PRINTING. (I also blogged about this topic here.)

Thursday, September 14, 2023

Mr. Jaeger and The Scroll

There were three major influencers in my educational journey: Mike Sipser, my PhD advisor at Berkeley and MIT, Juris Hartmanis who founded the field of computational complexity and led me to it at Cornell, and Philip Jaeger, a math teacher at Millburn High School in New Jersey.

Mr. Jaeger
Perhaps your "scroll" will outlast both of us.

I took two math courses with Philip Jaeger, Algebra II and Probability. Mr. Jaeger (I still can't call him Phil) also ran the computer room, a narrow room with three teletype machines where we saved programs on paper tape and where, among other programming projects, we ran simulations of poker hands for probability class. He truly set me up for my future career.

Mr. Jaeger was also the advisor of the computer club when I was president. 

That's me in the middle of the first photo, with Mr. Jaeger on my right.

Sometime during my high school years I took one of the rolls used in the teletype machine and wrote down a lengthy formula. According to Mr. Jaeger, the formula is for the area of a random triangle. I'm sure it made sense to me in high school.

The Scroll


The Scroll partially unscrolled

Another student, Cheryl, copied the entire formula onto an index card. Mr. Jaeger saved both the roll and the card.

The index card (actual size is 3x5 in)

Fast forward to 2023. Mr. Jaeger finds the roll and the card. His son, an academic in his own right, suggested that they track Cheryl and me down. I'm not hard to find. After some emails, they invited me to visit to pick up the roll. Last weekend I was in New Jersey visiting family so I did just that.

What great fun to talk to my old teacher, reminiscing about the high school people and times, and catching up after four decades. It was Mr. Jaeger who showed me the computer dating form that led me to create a version for our high school.

I gave Mr. Jaeger a copy of my book and he gave me the scroll and the index card. (Cheryl, now a lawyer, wasn't interested in it.) I safely brought it all home to Chicago along with all the memories. I dug up my old yearbook for the first two photos above. The scroll will indeed outlast the both of us.

Mr. Jaeger, myself and the scroll.

Sunday, September 10, 2023

Asymptotics of R(4,k): a new result!

 At the workshop 

Ramsey Theory: Yesterday, Today, and Tomorrow, Edited by Alexander Soifer, 2011. (There is a printed proceedings that you can find.)

 I saw Joel Spencer give a great talk titled 

                               80 years of R(3,k).

(Recall that \(R(a,b)\) is the least \(n\) such that for all 2-colorings of the edges of \(K_n\) there is either a red \(K_a\) or a blue \(K_b\).)
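As an illustration of the definition (my own sketch, nothing from the talk): brute force confirms the classic fact \(R(3,3)=6\), since some 2-coloring of the edges of \(K_5\) avoids a monochromatic triangle but no 2-coloring of the edges of \(K_6\) does.

```python
from itertools import combinations, product

def forces_mono_triangle(n):
    """True if EVERY red/blue coloring of K_n's edges has a monochromatic K_3."""
    edges = list(combinations(range(n), 2))
    for colors in product((0, 1), repeat=len(edges)):
        coloring = dict(zip(edges, colors))
        mono = any(coloring[(i, j)] == coloring[(i, k)] == coloring[(j, k)]
                   for i, j, k in combinations(range(n), 3))
        if not mono:
            return False  # found a coloring with no monochromatic triangle
    return True

print(forces_mono_triangle(5), forces_mono_triangle(6))  # False True
```

(For \(K_5\) the witness coloring is the familiar one: color the edges of a 5-cycle red and the diagonals blue.)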

 The talk was about improvements on both the upper and lower bound on R(3,k) and finally:

                                  \(R(3,k) = \Theta\biggl(\frac{k^2}{\ln^2 k}\biggr)\).

The obvious question was raised: What about R(4,k)? The general sense I got was that this would be a much harder problem. However, there has been some progress. A recent paper, here, improved the best known lower bound, so the bounds now stand at

                                   \( c_1\frac{k^3}{\log^4 k} \le R(4,k) \le c_2\frac{k^3}{\log^2 k} \)

How long before we see matching upper and lower bounds?

1) How long did it take to get matching upper and lower bounds for R(3,k)? The name of the talk would make one think 80 years, but I would start at a paper of Erdos from 1961 which had the first non-trivial lower bounds. And it was solved in 1995, so that's 34 years. (Note: Spencer's talk was also on later algorithmic aspects of the problem.)

2) Argument for why matching bounds on R(4,k) will be found in \(\le 10\) years: There are more people using more sophisticated tools than were known during the R(3,k) search.

3) Argument for why matching bounds on R(4,k) will take \(\ge 20\) years: This problem is dang hard! Triangles are much easier to deal with than 4-cliques.

This is a general problem math has: if a problem has not been solved, is it just dang hard, or are people one or two (or some small finite number of) steps away from solving it?

Wednesday, September 06, 2023

Books to Inspire Math

Two of my colleagues and co-authors from my early days at the University of Chicago have released books over the past few months designed to excite people with math, Howard Karloff's Mathematical Thinking: Why Everyone Should Study Math and Lide Li's Math Outside the Classroom. Karloff was a fellow professor and Li was my PhD student. Neither are currently in academia but both still found the need to inspire young people in mathematics.



Both books aim to make math fun, moving away from the rote problem solving of high school and early calculus courses to concepts like prime and irrational numbers (Karloff) and sequences and geometric shapes (Li). The books have some overlap: both cover deriving e from interest rates, and probability, including the Monty Hall problem. Both books have lots of problems to work on.

Between the two I would suggest Karloff's book for junior high/high school age kids and Li's book for older high school and early college students given the topics covered.

At a time when math plays a larger role in our society, especially dealing with data, finding ways to get more young people interested in mathematics is important. These books fill an important niche for mathematically curious students to dive into topics they won't likely see in their math classes. Great to see my former colleagues taking the time to reach these students through these books.

Sunday, September 03, 2023

The CONTRADICTION of Margaritaville and other songs

Jimmy Buffett passed away on Sept 1, 2023. His Wikipedia entry (see here) says his death was peaceful and he was surrounded by friends, family, and his dog, so it was likely expected and of natural causes. I later saw a report that he had a serious skin cancer. He was 76. 

He is not related to Warren Buffett--- they actually took a DNA test to find out, see here. They are friends. Buffett isn't that common a name, see here, so it was plausible they were related, but, alas, they are not.

His signature song is Margaritaville (My spellcheck thinks that I misspelled Margaritaville   but I checked it and it looks fine. OR it's one of those things where I keep misreading it.) It wasn't just his signature song---he made a career of it outside of music, see here.

Jimmy Buffett fans are called parrot heads.

There are songs where the lyrics are misheard. Margaritaville is not one of them. Instead, its lyrics are misunderstood. This raised the question:

There are  other songs whose LYRICS and WHAT PEOPLE THINK ABOUT THEM are in CONTRADICTION. What caused the contradiction? Could I make this into a HW assignment the next time I teach logic? Not if my students are looking for their lost shaker of salt.

This link here has 25 songs with misunderstood lyrics. Margaritaville comes in at the 24th. I think it should rank higher (lower index, higher ranking) but I can't complain since I am not an expert and they put in the work (unlike my ranking of satires of Bob Dylan, here, where I am an expert and I put in the work).

I list a few of the songs, plus two more, and WHY the contradiction. I also listened to them with the following question: ONCE you know what the song is supposed to be about and you listen to it, do you say OF COURSE THAT'S WHAT IT'S ABOUT or REALLY? I STILL DON'T SEE IT. This is similar to reading a math proof knowing where it is going, so perhaps you say OF COURSE. Of course, you might also say REALLY? I STILL DON'T SEE IT.

 The name of the song in the list below is also a pointer to a video of it.

Imagine by John Lennon. People think it's about peace and love. The writer John Lennon (not to be confused with Vladimir Lenin) says it's a Communist manifesto. I just listened to it and OF COURSE it's a Communist manifesto- but it's sung with such an optimistic loving tone that one could miss that. This is John Lennon's best known post-Beatles song.

Total Eclipse of the Heart by Bonnie Tyler. People think it's a power ballad- about love and such. It's actually a vampire love song. REALLY? I STILL DON'T SEE IT. A love song is a love song. It could be about humans, vampires, or, in the case of The Klein Four, Math, but unless they put something Vampire-ish into it, you can't tell it's about vampires. Two notes: (1) It's Bonnie Tyler's biggest hit, and
(2) it was released in 1983 but also had a large number of sales in 2017. Why? Either guess or see here.

Blackbird by the Beatles. People think it's just about a blackbird with a broken wing. It's about civil rights for blacks (in all countries- so I can't use the term African American). REALLY? I STILL DON'T SEE IT. I believe the Beatles intended that meaning. They also would not play to audiences that had Whites-only rules, or were segregated. So YEAH for them, but I still don't see it. Or hear it. Why the contradiction? Perhaps if I heard it in 1968 I would have understood what it was about. Perhaps they really weren't that clear about it. Perhaps they had to avoid it being censored.

Born in the USA by Bruce Springsteen. This is a well-known misunderstood song, so better to say people USED TO think it was a Pride-in-America song, but it was really about the plight of lower-class Americans, especially Vietnam War veterans, after the war. OF COURSE IT'S ABOUT THAT. Why was there the contradiction? (1) The chorus is loud and understandable and belts out BORN IN THE USA! as if that's a good thing, (2) the other lyrics are somewhat mumbled (I had to listen to it on a YouTube video with closed captions to understand the song), (3) people hear what they want to hear.

Notes: I was GOING to look up what The Boss's top hit ever was, expecting it to be Born to Run, but that's only his 18th biggest hit. Born in the USA is 8th, and Secret Garden is 1st. Even so, I think of Born to Run as his best known song. Why? (1) It was sung as the opening number of the 2010 Emmy awards (not by him, but done really well- Jimmy Fallon does a GREAT Bruce Springsteen), see here. (2) There are several parodies of it, see Born to Run (COVID), Born to Run (Bridgegate), Meant To Be (a best man's song), Jedi are Done. Having gone to the effort to find parodies of Born to Run, I then found parodies of Born in the USA: Bored in the USA (COVID), Touched by the TSA, Conned in the USA, Borune'D in the USA (cryptocurrency). And there are more. Upshot: trying to find out what someone's best-known song is can be a quagmire, but at least I got to find some cool parodies.

Who Let the Dogs Out by the Baha Men. People thought this song was about ... Hmmm, I don't know what people thought. Perhaps it was about someone who let the dogs out. It's actually about how BAD it is when men cat-call women. OF COURSE IT'S ABOUT THAT once you see the lyrics. Why the contradiction? It's really hard to understand anything except the chorus. Their biggest hit.

The Macarena by Los Del Rio. People tend to not listen to the lyrics of music that they dance to. So people really did not think it had a meaning. Also some of it is in Spanish. I can't write what it's about here since I may violate community standards as we did with a prior post (see here). See here for what the lyrics mean, OR the list I pointed to above. If you listen to it or read the lyrics: OF COURSE IT MEANS THAT! HOW DID I MISS IT? TOO BUSY DANCING! Why the contradiction? As I said above, it's really a dance song. Their biggest hit.

Note: Dance songs usually don't have that many words. Knuth (see here for the original article and here for the Wikipedia page about it, which has later results) noted that the complexity of That's the way uh-uh I like it is O(1).
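A sketch of Knuth's point (my code, not his): the program below has constant size no matter how many verses of output you ask for, so the song's description length is O(1).

```python
def thats_the_way(verses):
    # A constant-size program whose output length grows with `verses`:
    # the space needed to describe the lyrics is O(1).
    line = "That's the way, uh-huh uh-huh, I like it, uh-huh uh-huh"
    return "\n".join(line for _ in range(verses))

print(thats_the_way(3).count("I like it"))  # 3
```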

Margaritaville by Jimmy Buffett (I'd be curious to see a version by Warren Buffett). This is a well-known misunderstood song, so better to say that people USED TO think it celebrated a relaxed lifestyle, but it's actually a sad song about a drunk. OF COURSE THE SONG IS ABOUT BEING DRUNK AND DEPRESSED. So why the contradiction? The tune is so happy-go-lucky, and Jimmy Buffett (and others) talk POSITIVELY about the Margaritaville lifestyle. What's really odd is that the real meaning of the song IS WELL KNOWN, yet is ignored.

Note: A more realistic take on this topic, to the same tune, is here. A marijuana version of the song is here. A crystal meth version of the song is here. There are FOUR parodies that are NOT about being drunk, high, or on crystal meth, but about... COVID: here, here, here, and here.

99 Red Balloons, or 99 Luftballons (the original German version). To quote the original link: Whether in its original German language or in English, the happy-pop New Wave jam is easily the most danceable song about a nuclear holocaust caused by balloons. When I listened to it and read the lyrics: OF COURSE IT'S ABOUT A NUCLEAR HOLOCAUST CAUSED BY BALLOONS. Why the contradiction? The more popular version is in German, the English version is a bit mumbled (but not much), but most importantly, if there is going to be a nuclear holocaust caused by balloons I will get up and dance!

Note: I knew of one parody, 99 Dead Baboons, but through the wonders of search and YouTube I found more: the social media song, 99 90s shows, 99 unused balloons, 99 Steins of Beer.

For two more, though they are not on the list, see 'The Pina Colada Song' is Really Messed Up and Why the Beastie Boys Hate 'Fight for Your Right to Party'.

TO SUM UP: songs that get misunderstood may (1) have some hard-to-understand lyrics, (2) be dance songs, (3) have the melody and instruments at odds with the lyrics, (4) have lyrics that people want to hear and others that they don't, (5) be partly or wholly in a foreign language. I am sure there are other reasons.

Jimmy Buffett: You will be missed!






Wednesday, August 30, 2023

What Makes a Constructive Proof?

In this weblog, we've used constructive in different ways. Often we talk about constructive as something we can create in polynomial time, like an expander. But how about constructive as in logic, where you don't get to assume the law of the excluded middle, i.e., that every statement is either true or false?

The simplest well-known example is the theorem: There exists irrational \(a\) and \(b\) such that \(a^b\) is rational. 

  1. \(\sqrt{2}^\sqrt{2}\) is rational. Let \(a = b = \sqrt{2}\).
  2. \(\sqrt{2}^\sqrt{2}\) is irrational. Let \(a = \sqrt{2}^\sqrt{2}\) and \(b = \sqrt{2}\).
You don't know which \(a\) is correct. You just know it exists. (In case 2, note \(a^b = (\sqrt{2}^{\sqrt{2}})^{\sqrt{2}} = \sqrt{2}^{\sqrt{2}\cdot\sqrt{2}} = \sqrt{2}^2 = 2\). A far more complicated argument shows \(\sqrt{2}^{\sqrt{2}}\) is in fact irrational.)

When I teach intro theory, my first proof that there are non-computable sets is by claiming the computable sets are countable but there are uncountably many sets over \(\Sigma^*\), so there must be a non-computable set. I claim this is a non-constructive proof because I didn't give you the set, and do an aside on constructive proofs using the example above. But that's not correct--the proof that there are uncountably many sets over \(\Sigma^*\) is a constructive diagonalization. Give me an enumeration of the computable sets and I can easily construct a set not on that list.
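That last sentence can be sketched in a few lines. In this toy model (my own simplification), a "set" is a characteristic function on indices 0, 1, 2, ..., and the diagonal set differs from the i-th set at index i:

```python
def diagonalize(sets):
    """Given an enumeration of sets, each a characteristic function on
    indices 0, 1, 2, ..., return a set that differs from the i-th set at i."""
    return lambda i: (i < len(sets)) and not sets[i](i)

enumeration = [lambda i: i % 2 == 0,  # the evens
               lambda i: i < 3,       # {0, 1, 2}
               lambda i: False]       # the empty set
D = diagonalize(enumeration)
print([D(i) != enumeration[i](i) for i in range(3)])  # [True, True, True]
```

The same flip applied to an enumeration of the computable sets yields a set missing from the list, and that is the constructive content of the counting argument.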

In complexity, a well-known non-constructive theorem is by Kannan, showing that \(\Sigma^P_2\) does not have \(n^2\)-size circuits. A counting argument gives a language \(L\) in \(\Sigma^P_4\) without \(n^2\)-size circuits, and the proof splits into two cases.

  1. SAT doesn't have \(n^2\)-size circuits. Since SAT is in \(\Sigma^P_2\) we are done.
  2. SAT has \(n^2\)-size circuits. Then by Karp-Lipton \(\Sigma^P_4 = \Sigma^P_2\), so \(L\) is in \(\Sigma^P_2\) and we are done.
Jin-Yi Cai and Osamu Watanabe, and independently Sunny Daniels, gave a constructive \(\Sigma^P_2\) machine and thus a single language in \(\Sigma^P_2\) that doesn't have \(n^2\)-size circuits. But it is not a constructive proof, as the argument that the machine works requires the two cases as to whether SAT has small circuits. As far as I know, a true constructive proof of Kannan's theorem remains open.

I have no problem with non-constructive proofs—I'm a firm believer in \(P\vee\neg P\). But if you do talk about constructivity, be sure to use it appropriately.

Sunday, August 27, 2023

Theorems and Lemmas and Proofs, Oh My!

I was recently asked by a non-mathematician about the difference between the terms Theorem, Lemma, etc. My first reaction was I probably have a blog post on that. Actually, I looked and I don't seem to. Since I have, according to Ken Regan, over 1000 posts (see here and here) I can easily confuse things I meant to write a post on with things I wrote a post on. My next thought was Lance probably has a post on that. I asked him, and he also thought he had, but also had not. So now I will!

Open Question: A well-defined question that you don't know the answer to and may not even have a guess as to which way it goes. The above is not quite right: sometimes an open question is not that well defined (e.g., Hilbert's 6th problem: Make Physics Rigorous) and sometimes you have some idea, or a rooting interest, in how it goes. I tried to find some open questions in mathematics where people in the know don't have a consensus opinion. I can think of two offhand: Is Graph Isomorphism in P? Is Factoring in P? Maybe the Unique Games Conjecture, though I think people in the know think it's true. Here is a website of open questions, but I think for all of them people in the know think we know how they go: here.

Conjecture: A statement that you think is true, and may even have some evidence that it's true, but have not proven yet. I am used to using this term in math, and hence I hope someone will PROVE the conjecture. Are there conjectures in empirical sciences? If so, then how do they finally decide it's true? Also note: I blogged about conjectures, and how once they are proven the conjecturer is forgotten, here. EXAMPLES OF CONJECTURES: The same link as in open problems above.

True story but I will leave out names: There was a conjecture which I will call B's Conjecture. C & S solved it AS WRITTEN but clearly NOT AS INTENDED. Even so, C & S got a published paper out of it. This paper made M so mad that he wrote a GREAT paper that solved the conjecture as intended (and in the opposite direction). That paper also got published. So one conjecture led to two opposite solutions and two papers.

Wild-Ass Guess: You can take a wild-ass guess what this is. 

Hypothesis: An assumption that you may not think is true but are curious what may be derived from it. The Continuum Hypothesis is one. For some reason Riemann's problem is called The Riemann Hypothesis even though it's really a conjecture. So is my notion of Hypothesis wrong? In any case, if you know other things that are called a Hypothesis then please leave a comment.

Lemma: A statement that is proven but only of interest in the service of proving a theorem. There are exceptions where a Lemma ends up being very important, see here. The word is also used in English, see here, but I've never heard it used that way.

Theorem: A statement that has been proven. Usually it is somewhat general. There are a few exceptions: Fermat's Last Theorem was called that before it was a Theorem. If you know other things that were called theorems but weren't,  please comment.  EXAMPLES OF THEOREMS: The Fundamental Theorem of X (fill in the X), Ramsey's Theorem, VDW's theorem, Cook-Levin Theorem, The Governor's theorem (see here). There are many more theorems that have names and many that do not. 

Corollary: A statement that follows directly from a Theorem. Perhaps an interesting subcase of a Theorem. Often this is what you really care about. When trying to find a famous corollary I instead found The Roosevelt Corollary to the Monroe Doctrine,  Corollaries of the Pythagorean theorem, and Uses of the word Corollary in English. Are there any famous corollaries in mathematics that have names?

Claim: I do the following, though I do not know if it's common: During a proof I have something that I need for it, but it is tied to the proof of the theorem, so it would be hard to make it a separate lemma. So I prove it inside the proof of the theorem and call it a claim. I use Claim, Proof of Claim, End of Proof of Claim to delimit it.

Porism: A statement that you can get from a theorem by a minor adjustment of the proof. I've also heard the phrase Corollary of the proof of Theorem X. I first saw this in Jeffry Hirst's PhD thesis, which is here, on Reverse Mathematics. I liked the notion so much I've used it a few times. It does not seem to have caught on; however, there is a Wikipedia entry for the term here, which also gives two examples of its use, which are not from Hirst's thesis or my papers.

Proposition: I see this so rarely that I looked up the difference between a Proposition and a Theorem. From what I read, a Proposition is either of lesser importance, or is easy enough to not need a proof, as opposed to a Theorem, which is important and needs a proof.

Axiom: A statement that one assumes is true; usually they are self-evident. An exception is the Axiom of Choice, which some people reject since it is non-constructive. Also some people do not think the Axiom of Determinacy is self-evident. Same for Large Cardinal Axioms. But really, most axioms are self-evident. Note that all branches of math use axioms.

Postulate: Euclid used the term Postulate instead of Axiom. Actually, Euclid wrote in Ancient Greek, so to say he used the term Postulate is probably not quite right. However, the term Postulate seems to mean an axiom of Euclid's, or perhaps an axiom in geometry. One exception: Bertrand's Postulate, which was a conjecture but is now a theorem. The link is to a Math StackExchange post where there is some explanation for the weird name.

Paradox: A Paradox is a statement that is paradoxical. Hmmm, that last sentence is self-referential, so it's not enlightening. A paradox is supposed to be a statement that seems absurd or self-contradictory, though under closer examination may make sense. Russell's Paradox shows that Frege's definition of a set does not work. The Monty Hall Paradox and the Banach-Tarski Paradox are just theorems that at first glance seem absurd. The Monty Hall Paradox is not absurd. Darling thinks the BT-paradox means that math is broken, see this post here for more on that.





Thursday, August 24, 2023

Transcripts for the 21st Century

When I start a new academic job, I need to prove that I actually have a PhD. I have to log in to my MIT alumni page, pay my $10, and they email my graduate transcript to whomever, all to verify that one bit of information. Why don't I just have a digitally signed certificate I can hand over?

Our CS department spends an inordinate amount of time looking through transcripts of accepted Masters students to determine if they have the right prerequisites for various classes. It would be great if we could automate this process, but the transcripts come in PDF or JPEG and don't have a standardized format, especially from foreign countries. Also, a course name does not give enough information to know what it covers.

The Chronicle of Higher Education did a forum on The Transcript of the Future, and maybe some solutions to these problems are on the horizon. Here are three potential future trends and an elephant in the room.

Modality

Since I went to school, transcripts have moved from paper to PDF. PDFs work for humans to look at, but don't work well as input to computers for better analysis. Transcripts should move to a structured format, perhaps JSON, to make them machine-readable. It's easy to go from JSON to PDF but less easy in the other direction.
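For instance, a transcript as JSON might look like the following. Every field name here is purely illustrative, not from any actual transcript standard:

```python
import json

# A hypothetical machine-readable transcript record.
transcript = {
    "student": {"name": "Jane Doe", "id": "20230042"},
    "institution": "Example University",
    "courses": [
        {"code": "CS501", "title": "Algorithms", "term": "2021-FA",
         "credits": 3, "grade": "A",
         "topics": ["graph algorithms", "dynamic programming", "NP-completeness"]}
    ],
}

as_json = json.dumps(transcript, indent=2)   # easy to render as PDF later
parsed = json.loads(as_json)                 # and easy for machines to query
print(parsed["courses"][0]["topics"][2])     # NP-completeness
```

A prerequisite checker could then query course topics directly instead of scraping a PDF.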

To make this work you need a standard so each university's transcript doesn't use a different format. Some standards are in the works, but this doesn't seem quite settled yet, as best I can figure out from Internet searching.

Content

Once you go digital you can add much more information. You can add a syllabus: the topics a course covers, not just its title. You can add competencies, credentials, certificates, projects and skills achieved. You can add students' activities such as internships, athletics, clubs, and leadership roles. You can give grad schools and companies a much fuller picture of a student beyond the grades.

The more stuff we stick into a transcript, the more standards you need to make sense of it. 

Provenance

Who owns the transcript? Right now the university does; that's why I have to pay $10 for MIT to send it out. But why not keep it in some common database, or on a blockchain, or in a file owned by the individual, with all the proper technology so it can't be forged? There are privacy and security issues that we would need to figure out. You don't want a student to lose access to a transcript because they lost a password, the way many have lost cryptocurrency.

Artificial Intelligence

If we do have access to standardized digital transcripts, there will be the temptation to outsource decision making related to them to AI, such as job interviews (already happening) and grad admissions. We can use AI responsibly to help in the process, but we need to remember that all these students are individuals and we need people to judge the people behind the transcripts.

Monday, August 21, 2023

Why I have some sympathy for the Simulation Theory (We are all characters in a video game.)

There are some people who believe that we are all characters in a video game written by Abisola (this is sometimes called The Simulation Hypothesis). I first dismissed this as nonsense. Then I read about it in the great  book But What if We're Wrong by Chuck Klosterman. He had  some reasons why The Simulation Hypothesis  is plausible. I thought about some more examples of his reasons. I still DO NOT believe it, but the reasons TO believe it do raise some questions.

There is one word that describes all of the reasons: Glitches! That is, Abisola, who wrote the code, made some mistakes that sometimes show through. Actually, some might not be mistakes; perhaps Abisola planned it. She has the bug/feature issue, as do we.

Here are some of those glitches: 

1) Real people who, from an accident, gained an ability that they did not have before.

a) Jason Padgett: After being attacked and getting a concussion, he woke up a math genius.

b) Derek Amato: After a head injury became a brilliant composer.

c) McMahon: After a head injury he woke up speaking fluent Mandarin. I could not find a Wikipedia entry for him.

d) Tony Cicero: After being struck by lightning he became an excellent musician. I read this in a Quora entry but could not find it anywhere else. If you have more evidence on this one, please leave a comment and I will add it here later.

2) While I dismiss most accounts of ghosts, ESP, miracles, etc, there are so many of them that perhaps some are real and caused by glitches. Or features. 

3) This happens a lot to me, and I am sure to others (or analogs of it): I have a LaTeX bug. I delete a line. The bug goes away. I put the line back. Now the bug is gone.

4) What color is the dress?

5) Neil deGrasse Tyson and Elon Musk are fans of the Simulation Hypothesis. Not sure if this is any kind of justification for the Simulation Hypothesis or if Abisola coded them up to be that way.

6) All of a sudden the spell-check mechanism this blogger uses stops working AND when I leave a comment it does not automatically put my name on it, nor does it automatically bypass moderation (which is how it used to work). Lance and I try to fix it, to no avail. The staff here tries to fix it, to no avail. A week later it works again. And YES, the first thing I did was turn everything off and on again, and that didn't work. (Update: this problem seems to come and go.)

7) A watched pot never boils. Lost socks in the laundry. Etc.

8) On a more positive side: The Unreasonable Effectiveness of Mathematics in the Natural Sciences and The Unreasonable Effectiveness of Physics in the Mathematical Sciences. You can Google to find more unreasonable effectivenesses. Thanks Abisola, though I wish you had made Physics and Math easier.

9) I emailed a HIGH-TECH colleague. My system says that YES, I sent that mail. He remembers READING that email when it came. Later on he can't find it: not in his normal files, not in trash, not in spam. Gmail search can't find it anywhere. (My spell check thinks gmail is not a word. Nor Gmail. Really?)

10) The speed with which humans went from PONG to DWARF FORTRESS is not plausible. Maybe Abisola found a way to speed up her program. (This observation has been made by others.)

11) A relative got eye surgery recently. (a) The technology for the surgery was fantastic: outpatient, able to drive in 3 days, drive at night in 7 days, totally painless. (b) He is still waiting for the paperwork that will allow him to drive without glasses. Gee, eye surgery should be hard and paperwork should be easy. I think Abisola found switching the two amusing.

12) Aaronson's law of dark irony, see here.

13) The possibility of Elon Musk and Mark Zuckerberg having an actual physical fight (see here) makes no sense. Whatever they disagree on (e.g., how a social network should be run, who is wealthier, who has the biggest....) will NOT be settled by a fight. If M wins then we know that M can beat Z in a fight; that is all that will be established. If Z wins then we know that Z can beat M in a fight; again, that is all that will be established. In neither case does the fight settle whatever they disagree on. So why might they fight? Ask Abisola if it is a bug or a feature.

14) I am sure you can add your own reasons. 

Tuesday, August 15, 2023

Turning Sixty

I turn sixty today, spending my birthday reviewing a computer science department in Asia. Sixty is a good time to look back and reflect in a rambling way.

I started grad school in 1985. The P v NP problem was only 14 years old when I started. 38 years later we are no closer to solving it.

Nevertheless the field of computational complexity has remained strong, producing exciting research nearly every year. We're not seeing the surprising results that we saw through the early 90s but we've gained a much better understanding of pseudorandomness, coding theory, proof complexity, communication complexity, quantum complexity and circuit complexity (to an extent). The hard problems remain hard but that hasn't hampered progress in the field.

The field hasn't grown as dramatically as some others in theoretical computer science but neither has it shrunk. We have many great young researchers coming up in the field and its future is secure for decades to come.

I have some regrets for the field. Computational complexity has moved more towards mathematics with a focus on technical difficulty over conceptual novelty. We were quick to pick up on probabilistic, parallel and quantum computing, much slower on the cloud and machine learning. Combinatorial optimization has gotten extremely good, we can solve many NP-complete problems in practice, a point we rarely acknowledge.

For myself, I had an active research career for a good two decades. But then the field moved away from my strength in the structure of complexity classes toward more combinatorial, algebraic and analytic techniques. A field should evolve, but I found it difficult to keep up. So I focused on this blog, wrote a book, and took on larger leadership and administrative roles. I try to follow what's going on in the field, but I'm happy to leave the research to the next generations, especially to my former students, several of whom have become leaders in the field themselves.

Someone recently asked me if I have regrets in my research career. I said that I’ve lived through some incredible advances in computing, but my research has played no significant role in any of it. 

Nevertheless as the computing world, if not the world as a whole, continually gets more complex, computational complexity has a continual mission to make sense of it. And so we shall.

Wednesday, August 09, 2023

The Acting Professor

When I taught Programming for Engineers at Northwestern in 2008, the textbook gave access to PowerPoint slides I could use to teach the class. Since C++ is not a specialty of mine, I tried using the slides for the course. It just felt wrong and lazy--it wasn't me teaching and the students were picking up on it. So I went back to teaching my own way and even though I would occasionally make a mistake (or two or ten), they were my mistakes and the class learned better with me.

The Chronicle of Higher Education recently ran a series on courseware that goes much further.

Romano’s instructor was using a courseware product from the publishing titan Cengage. In a departure from traditional supplementary class materials, like textbooks, many courseware tools offer the “soup to nuts” of an entire course: Not only the digital version of a textbook, but homework assignments and assessments that an instructor can select from a bank of premade options. Educational videos, slide presentations, and study flashcards. Auto-grading and performance-analytics capabilities.

It all made for an underwhelming, and often frustrating, learning experience. “There were never ways we could learn from the instructor,” said Romano, who double-majored in political science and environmental science. “It was just a really weird class.” 

At what point are lecturers just actors, reading the material and running the course on autopilot? Is this an advantage over pre-recorded online courses?

With AI perhaps you remove the instructor completely and a course just becomes a fancy computer game. Will the students learn? Will they want to?

Sunday, August 06, 2023

Permutable Primes and Compatible Primes

This post is about an open problems column by Gasarch-Gezalyn-Patrick so they can be considered co-authors on this post. The column is here.

Known: A permutable prime is a prime so that, no matter how you permute the digits, you get a prime.

The first 22 of them are: 

2, 3, 5, 7, 11, 13, 17, 31, 37, 71, 73, 79, 97, 113, 131, 199, 311, 337, 373, 733, 919, 991.

11 might seem like a silly example. However, all of the known ones past 991 consist of all 1's. Let \(R_n\) be the base-10 number represented by n 1's.

The next three permutable primes are \(R_{23}\), \(R_{317}\), \(R_{1031}\). 
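For small numbers the definition is easy to check directly. Here is a quick Python sketch (trial-division primality, so only suitable for small inputs) that confirms the list above is exactly the set of permutable primes below 1000.

```python
from itertools import permutations

def is_prime(n: int) -> bool:
    # simple trial division; fine for small numbers
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_permutable_prime(n: int) -> bool:
    # every permutation of the digits must be prime
    return all(is_prime(int("".join(p))) for p in permutations(str(n)))

known = [2, 3, 5, 7, 11, 13, 17, 31, 37, 71, 73, 79, 97,
         113, 131, 199, 311, 337, 373, 733, 919, 991]
```

For example, 19 fails because 91 = 7 x 13 is not prime, while 113, 131, 311 are all prime, so 113 qualifies.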

Those are all of the permutable primes that are known. There are two conjectures:

(1) there are an infinite number of them, and

(2) all those past 991 are of the form \(R_n\).

There is more information and more conjectures about them in the open problems column pointed to above.

I know of three online papers on permutable primes; see my website about them here.

New: A number n is a k-compatible (henceforth k-comp) prime if exactly k permutations of its digits are prime (that is, there are k such permutations but not k+1).

Examples:

a) 103 is 3-comp: 013, 031, 103 are prime BUT 130, 301, 310 are NOT prime.

b) 97 is 2-comp: 79 and 97 are prime and there are no other perms of 97.

c) 41 is 1-comp: 41 is prime BUT 14 is NOT prime. 
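A small Python sketch of this definition. Matching example (a), a permutation with a leading zero is read as the shorter number (013 counts as 13), and repeated digits are counted once by taking the set of distinct values.

```python
from itertools import permutations

def is_prime(n: int) -> bool:
    # trial division; fine for small numbers
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def comp_count(n: int) -> int:
    # number of distinct values of digit-permutations of n that are prime
    # (a leading zero makes a shorter number, e.g. 013 counts as 13)
    perms = {int("".join(p)) for p in permutations(str(n))}
    return sum(1 for m in perms if is_prime(m))
```

So comp_count(103) is 3, comp_count(97) is 2, and comp_count(41) is 1, matching the examples above.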

We have some (though not much) empirical data that suggests the following. Fix L, the length of the numbers being considered. If you look at L-length primes that are 1-comp, 2-comp, etc., the number of them will (with some exceptions) first increase and then (with some exceptions) decrease, though past k = L/2 (actually smaller) there are no L-length k-comp primes.

For rigorous conjectures and more information  see the paper pointed to above. 


Wednesday, August 02, 2023

The College Visits

Campus tour at an AI generated university
My wife's cousin and her daughter came and visited Chicago. The daughter, between junior and senior year of high school, is on the tail end of college tour season and visited Northwestern and University of Chicago while here. She's visited a dozen plus schools at this point.

As an academic and administrator I think of universities in terms of the faculty who are there, their academic strengths and reputation and sometimes the internal and external politics (people like to talk). All academic departments and universities have issues, in their own unique ways.

The visiting prospective students get a thoroughly different view: a tour highlighting the architecture and amenities, giving cherry-picked statistics that put the school in its best light. The impressions students get have little to do with the quality of the education and can be affected by the personality of the tour guide or even the weather on the day of the visit.

The daughter is interested in computer science but the Northwestern tour focused on South campus, the more artsy part of the university where she wouldn't be spending much of her days. Her favorite school, which I won't name, has by far the weakest CS program of the ones she has visited. But it seems those first impressions are also the lasting ones.

If you are a high school student, take the tour but don't let that be your only impression. Track down the places on campus that matter to you, whether a department, college or extracurricular activity, and talk to the people there, particularly the students. Understand the parts of the university that matter to you, not just the ones that they put on display.

Or perhaps avoid the campus visits entirely. You'll probably make a better choice if you aren't distracted by the size of the spires. 

Sunday, July 30, 2023

Another problem with ChatGPT

 I was giving a recruiting talk for my REU program and I had some slides with testimonials from students:


-------------------------------------------------------------------------

TESTIMONIAL ONE

This REU experience was greatly beneficial in expanding my knowledge and experience with machine learning. Dr. Gasarch, the mentors, my team, and the professors were all very supportive and encouraging, and I learned so much from them over the course of the program. The program was a perfect way to explore different research aspects and allow me to get a better idea of how research is conducted. I am very thankful for this experience.


TESTIMONIAL TWO

The experience of REU CAAR was excellent. I participated in some research before, yet this is the first time for me to do research in a group, which was great!

Though auction design as a topic was not familiar to me before, I learned it by reading several papers. Our program includes both mathematical and computer science components. That is nice as I am interested in both, and our group members divided the work so we all worked on stuff we cared about.

Aside from the research, the lunches and talks were interesting. Thanks to Professor Gasarch, his helper Auguste,  and all the mentors. I would recommend it to anyone interested in computer science or mathematics.
------------------------------------------------------------------

The students laughed and all said 

Those are obviously produced by ChatGPT.

They weren't. In fact, they were emailed to me by students in August 2022, before ChatGPT was a thing. I told the audience that; however, I doubt they believed me and I didn't want to dwell on the point. The more you say it's not ChatGPT, the more it sounds like it is.

This incident is minor. But it portends a bigger problem: will everything be suspect? How do you prove something was not produced by ChatGPT? For some things that might not matter, but for testimonials it matters that they're from a human being. For what other writings is it important that they came from a human being? Here are some examples, counterexamples, and questions.

1) A witness to a crime has his written statement done by ChatGPT. IF it's just helping him write up what he really saw, that's fine(?). IAMNAL (I am not a lawyer).

2) You write a novel but it's later found out that it's by ChatGPT. IF you are up front about this on the cover of the book AND the novel is good, I see no problem with this. But what about the other way around: it really IS written by you but nobody believes you. And if nobody cares then it will be even harder to convince people that it was written by you. Imagine the following:

AUDIENCE: The novel was clearly written by ChatGPT, but that's fine since it was a great novel and you provided the input.

YOU: No, It really was written by me.

AUDIENCE: (annoyed) whatever you say. 

3) When a celebrity writes a book we just assume it was ghostwritten (I do anyway). And I don't care: all I care about is whether the book is good or bad. But ghostwriters cost money. ChatGPT will allow us all to have ghostwriters. But I am thinking of the other direction: if a writer (celeb or not) actually writes a book without a ghostwriter or ChatGPT, it would be hard to convince us of that.

4) I wonder if the number of times it matters that it was not ChatGPT is small and getting smaller.

Wednesday, July 26, 2023

Paper Writing Before Overleaf

 A twitter question from a young researcher.

Reminds me of the time one of my children asked me how I drove places before GPS.

I'm not old enough to remember paper writing before computers. Collaboration mostly happened when people were in the same physical location. There were more single-authored and single-institution papers back then. You'd often work on a writeup when you were at the same conference, or even travel just to finish up a paper. 

By the late 80's people started to get comfortable sending LaTeX files by email. BibTeX wasn't popular yet, so LaTeX files tended to be self-contained. Mail back then added a ">" in front of any line beginning with "From", and LaTeX typesets a stray ">" in text mode as "¿", so you'd often see "¿From" in papers from that time.

We'd just take turns working on the paper. If we were really sophisticated, we'd work on different sections at the same time using a master file with \include statements. We added notes directly into the LaTeX using comment lines beginning with %LANCE: Fix this!
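For readers who never saw that workflow, a master file might have looked something like this (file names invented for illustration):

```latex
% master.tex -- each coauthor edits a different included file
\documentclass{article}
\begin{document}
\include{intro}
\include{mainresult}
%LANCE: Fix this!   <- notes to coauthors lived in comments like this
\include{conclusion}
\end{document}
```

The \include mechanism let coauthors email each other only the section they had changed, which mattered when a whole paper was too big for some mail systems.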

Oh, and before GPS I used a physical map to get around that I kept in the car. The biggest challenge was folding the map back up afterwards. My kids didn't believe me.

Saturday, July 22, 2023

"Never give AI the Nuclear Codes" Really? Have you seen Dr. Strangelove?

(I covered a similar topic here.)

 In the June 2023 issue of The Atlantic is an article titled:

Never Give Artificial Intelligence the Nuclear Codes

                                    by Ross Andersen

(This might be a link to it: here. Might be behind a pay wall.) 

As you can tell from the title, the article is against having AI able to launch nuclear weapons.

I've read similar things elsewhere. 

There is a notion that PEOPLE will be BETTER able to discern when a nuclear strike is needed (and, more importantly, NOT needed) than an AI. Consider the following two scenarios:

1) An AI has the launch codes and thinks a nuclear strike is appropriate (but is wrong). It launches, and there is no override for a human to intervene.

2) A human knows in his gut (and from some of what he sees) that a nuclear attack is needed. The AI says NO IT'S NOT (the AI knows A LOT more than the human and is correct). The human overrides the AI (pulls out the plug?) and launches the attack.

Frankly, I think (2) is more likely than (1). Perhaps there should be a mechanism so that BOTH the AI and the human have to agree. Of course, both AIs and humans are clever and may find a way to override.

Why do people think that (1) is more likely than (2)? Because they haven't seen the movie Dr. Strangelove. They should!

Wednesday, July 19, 2023

More on Pseudodeterminism

Back in May I posted about a paper that finds primes pseudodeterministically and Quanta magazine recently published a story on the result. Oddly enough the paper really isn't about primes at all.

Consider any set A such that 

  1. A is easy to compute, i.e., A is in P.
  2. A is very large: there is a c such that for all n, the number of strings of length n in A is at least \(2^n/n^c\). If you throw a polynomial number of darts you are very likely to hit A.

Chen et al show that for any such A, there is a probabilistic polynomial-time Turing machine M such that for infinitely many x in A, M on input \(1^{|x|}\) will output x with high probability.

The set of primes has property 1 by Agrawal, Kayal and Saxena and property 2 by the prime number theorem. Why focus the paper on primes though? Because deterministic prime number finding is a long-standing open problem and there aren't many other natural problems that have properties 1 and 2 where we don't know easy deterministic algorithms to find examples.
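The density property is what makes the "dart throwing" work: sample random n-bit numbers and test each for primality, and you expect to hit a prime after roughly n tries. Here is a sketch of that easy randomized direction (not the paper's construction); note that different runs return different primes, and the whole point of a pseudodeterministic algorithm is to output the same one with high probability.

```python
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2^r with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

def random_prime(bits: int) -> int:
    """Throw darts: random odd n-bit numbers until one is prime.
    The prime number theorem says ~bits/2 tries suffice on average."""
    while True:
        cand = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(cand):
            return cand

p = random_prime(32)
```

The hard part, which the Chen et al result addresses, is getting a canonical output rather than whichever dart happened to land first.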

With the general result, we can think about oracle results. It's pretty easy to create an A such that A has property 2 and no deterministic polynomial-time algorithm with oracle access to A can find an element of length n on input \(1^n\) for infinitely many n. Since A is always in \(P^A\), we get property 1 for free. That suggests that pushing the result to finding primes deterministically would require using properties of the primes beyond being large and easy.

Maybe not. Rahul Santhanam, one of the authors, tells me their proof doesn't relativize, though whether the result relativizes is an open question, i.e. whether there is an A fulfilling property 2 such that any pseudodeterministic algorithm with oracle access to A will fail to find more than a finite number of elements of A. It does seem hard to construct this oracle, even if we just want the pseudodeterministic algorithms to fail for infinitely many n. 

Nevertheless it seems unlikely you'd be able to use these techniques to prove a deterministic algorithm works for all large easy A. If you want to find primes deterministically you'll probably need to prove some new number theory.

Sunday, July 16, 2023

Finding an answer to a math question: 1983 vs 2023

In 1983, as a grad student, I knew that HALT \( \le_T \) KOLG but didn't know how to prove it. I asked around, and A MONTH later, through a series of connections, I got the proof from Peter Gacs.

See here for the full story and see here for a write-up of the proof.

In 2023, as professor, I heard about a pennies-on-a-chessboard problem and wanted to know what was known about it. I blogged about it and AN HOUR later I found what I wanted. 

See here for my blog post asking about it and see here for my write-up of all that's known.

Why the difference, from a MONTH to an HOUR?

1) In 1983 there was no Google. I don't recall if there were other search engines, but even if there were there wasn't much to search. I do note that in 2023 Googling did not help me, though it may have helped my readers who pointed me to the information I wanted. 

2) In 1983 email wasn't even that common, so the notion of just emailing such-and-such for the answer did not occur to me. (Also note that I am a last-blocker; see here.)

3) Here is a factor that should have been in my favor: the community of theorists was smaller, so we kind of knew each other and knew who was doing what. AH, but who is this we? I started at Harvard in 1980 but really only began doing theory in 1981 and didn't know the players yet. However, I think the smallness of the community DID help Mihai Gereb (a grad student at Harvard) know to contact Peter Gacs to help me find the proof. Contrast: today the community is so large that I might not know the players (I didn't for the penny-chess problem), though with email and the blog (next item) I can reach out to people who DO know the players.

4) The blog: I have the ability to ASK several people at one time, and one of them might know. That worked in this case and in other cases, but it does not always work.

5) People without a blog can do something close to what I did: ask several people. And that can be more targeted. (ADDED LATER: a commenter pointed out that Math Overflow, Stack Exchange, and sites like that are also options. Often when I Google a question I am taken to those sites. I have had less luck asking on them directly, but that could just be me.)

QUESTION: Is it easier to find things out NOW or 40 years ago? I think NOW, by far, but I do see some possible counterarguments. There is a balance: there were fewer places to look back then. Indeed, the answer to my question was NOT in a journal article or a conference paper; it was on another blog.

Lastly THANKS to my readers for pointing me to the sources I needed! Kudos! 

ADDED LATER: a link to the Erdos-Guy paper that I need to put here to aid one of my comments is here.

 

Thursday, July 13, 2023

Whither Twitter

Twitter tells me it's my twitterversary, 15 years since I first started tweeting. Not sure I'll make it to sweet sixteen.

No longer do tweets show up on this blog page. Twitter can't tell the difference between some AI engine slurping up tweets and their own display widget showing tweets on a weblog. Bill complains that he can no longer follow my Twitter since he refuses to set up an account on the site and you can't read tweets without one. Also, I will soon lose access to TweetDeck without paying the $8/month, which wouldn't solve the other problems anyway. This is ignoring the fact that since Elon took over Twitter, the site has just become far less fun.

I have accounts on Mastodon and now Threads. The two combined have less than 100 followers, compared to the 5400 I have on Twitter. Not sure how many of those 5400 are bots or people who no longer read Twitter regularly.

For now my tweets get imported into Mastodon, which now appears on this blog page. Once Threads gets proper web support and APIs, I expect it will become the new Twitter and I'll move there. We'll have to see.

Sunday, July 09, 2023

A further comment on Chernoff--- and the future of ....

Ravi Boppana recently did a guest post for us on Chernoff turning 100, here.

Consider this unpublished comment on that post:
 
 --------------------------------------------------
As we delve into the depths of Professor Chernoff's impressive centennial celebration, it strikes me that the most astounding aspect isn't the breadth of his influence, but the fact that his eponymous 'Chernoff Bound' remains as relevant today as it was when first conceived in 1952. It's not just a mathematical theorem - it's a testament to the timeless power of innovative thinking and a reminder that truly exceptional ideas can cross boundaries, transcending disciplines and even generations.

As statisticians, computer scientists, and mathematicians, we are not just the beneficiaries of Professor Chernoff's scientific legacy; we are the continuation of it. Every time we use Chernoff bounds in our work, we're not merely applying a theorem - we're participating in a story that began over 70 years ago, and will hopefully continue for many more.

So, as we say 'Happy 100th Birthday' to Professor Chernoff, let's also raise a toast to his contributions that have shaped our field and will continue to guide future generations. It's a living testament that the bounds of his impact are far from being reached. Here's to a legacy that defies the bounds, much like his own theorem!
--------------------------------------------------------

This SEEMS like an intelligent comment.

This IS an intelligent comment.

(One of the comments on this blog post points out that the ChatGPT comment is INCORRECT. Can a comment be INCORRECT and still INTELLIGENT? I think so if it contributes to the conversation.) 

Who wrote it? You can probably guess: ChatGPT. Lance asked ChatGPT for a comment and this is what he got. 

We have, for many years, often gotten bot-generated comments that pick out a few key words and then include a link to buy something. My favorite was

                                  Nice post on P vs NP. For a good deal on tuxedo's click here. 
 
I would like to think it was written by a human who thinks Lance will crack P vs NP and wants him to look good when picking up the Millennium  Prize. But of course I doubt that.

Of more importance is that the attempts to look like a real comment were pathetic and content-free. In the future (actually, the future is now) ChatGPT may be used to generate an intelligent comment that has a link at the end, or worse, in a place we don't notice. So we will need to be more vigilant. This has not been a problem for us yet, but I suspect it will be.


Wednesday, July 05, 2023

Automating Mathematicians

The New York Times published an article with the ominous title A.I. Is Coming for Mathematics, Too. We know by the work of Gödel and Turing that we can't automate mathematics. But can we automate mathematicians? Is AI coming for us?

Reading the article, I'm not immediately scared. The focus is on logical systems like the Lean theorem prover and SAT solvers.

Developed at Microsoft by Leonardo de Moura, a computer scientist now with Amazon, Lean uses automated reasoning, which is powered by what is known as good old-fashioned artificial intelligence, or GOFAI — symbolic A.I., inspired by logic. So far the Lean community has verified an intriguing theorem about turning a sphere inside out as well as a pivotal theorem in a scheme for unifying mathematical realms, among other gambits.

So Lean is being used mainly to logically verify known theorems. This could actually make life harder for mathematicians, if journals start requiring formal verification for submitted papers.

I'd love a higher-level AI: one that takes a journal-style proof and verifies it, or even better, takes a proof sketch, fills in the details, and writes it up in LaTeX. In other words, let me think of the high-level ideas and leave the messiness to AI.

I won't be holding my breath. Right now, generative AI has limits in its ability to reason, and reasoning is at the heart of mathematics.

On the other hand, AI now plays a mean game of chess and Go, which we also think of as reasoning. So maybe automating mathematicians is closer than we think. AI might go further and start suggesting new theorems and proving them on its own.

Ultimately, as in most other fields, AI won't eliminate the need for mathematicians. But as in nearly every career moving forward, those who succeed will not be the ones who push back on AI but those who work smarter and harder to use AI as a tool to do even greater things. Best to think about how to do that now, before it's too late.

Saturday, July 01, 2023

Chernoff Turns 100

Guest post by Ravi Boppana

Herman Chernoff celebrates a milestone today: he turns 100 years old. 

We in theoretical computer science know Professor Chernoff primarily for his ubiquitous Chernoff bounds.  The Chernoff bound is an exponentially small upper bound on the tail of a random variable, in terms of its moment generating function.  This bound is particularly useful for the sum of independent random variables.   
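For readers who want the precise statement: one standard way to write the generic bound described above is, for any real random variable \(X\) and any threshold \(a\),

```latex
\Pr[X \ge a] \;\le\; \inf_{t > 0} \, e^{-ta}\,\mathbb{E}\!\left[e^{tX}\right].
```

When \(X = X_1 + \cdots + X_n\) with the \(X_i\) independent, the moment generating function factors as \(\mathbb{E}[e^{tX}] = \prod_i \mathbb{E}[e^{tX_i}]\), which is what typically yields exponentially small tail bounds.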

Many, many results in theoretical computer science use Chernoff bounds. For one set of examples, Chernoff bounds are employed in the analysis of algorithms such as Quicksort, linear probing in hashing, MinHash, and a randomized algorithm for set balancing. For another example, Chernoff bounds are used to reduce the error probability in complexity classes such as BPP. These examples merely scratch the surface of the wide-ranging impact that Chernoff bounds have had.

Professor Chernoff introduced the Chernoff bound in his seminal paper from 1952. Chernoff credits another Herman (Herman Rubin) for the elegant proof of the bound in that paper. Similar bounds had been established earlier by Bernstein and by Cramér.

In his distinguished career, Chernoff was a professor for decades at Stanford, MIT, and ultimately Harvard.  In May, Harvard proudly hosted a symposium in honor of Professor Chernoff's centenary, which he attended.  The photo above shows him at the symposium, looking as cheerful as ever (photo credit: Professor Sally Thurston). 

Beyond his remarkable research accomplishments, Professor Chernoff has passionately guided an entire generation of exceptional statisticians.  According to the Mathematical Genealogy Project, he has advised 26 PhD students, leading to a lineage of 288 mathematical descendants.  Chernoff himself earned his PhD at Brown University in 1948 under the supervision of Abraham Wald.  

Professor Chernoff and his wife, Judy Chernoff, have been married for more than 75 years.  A Boston TV news story said that the Chernoffs are believed to be the oldest living couple in Massachusetts.  At the symposium in May, Professor Chernoff doubted the claim, though he had previously acknowledged that it might be true.  Maybe his cherished field of statistics can be used to estimate the likelihood of the claim.   

On this extraordinary milestone day, we extend our heartfelt congratulations and warmest wishes to Professor Chernoff.  Happy 100th birthday, Professor Chernoff!  Mazel tov. 

Saturday, June 24, 2023

Can you put n pennies on an n x n chessboard so that all of the distances are distinct/how to look that up?

In Jan 2023 I went to the Joint Math Meeting of the AMS and the MAA and took notes on things to look up later. In one of the talks they discussed a problem and indicated that the answer was known, but did not give a reference or a proof. I emailed the authors and got no response. I tried to search the web but could not find it. SO I am using this blog post to see if someone either knows the reference or can solve it outright: either leave the answer in the comments, point in the comments to a paper that has the answer, or email me personally.

--------------------------------------------------------------------

A chessboard has squares that are 1 by 1. 

Pennies have diameter 1.

QUESTION: 

For which n is there a way to place n pennies on squares of the n x n chessboard so that all of the distances between centers of the pennies are DIFFERENT?

-----------------------------------------------------------

I have figured out that you CAN do this for n=3,4,5. I THINK the talk said it cannot be done for n=6. If you know or find a proof or disproof then please tell me. I am looking for human-readable proofs, not computer proofs. Similarly for higher n.

I have a writeup of the n=3,4,5 cases here (ADDED LATER- I will edit this later in light of the very interesting comments made on this blog entry.) 

----------------------------------------------------------------------

With technology and search engines it SHOULD be easier to find answers to questions than it was in a prior era. And I think it is. But there are times when you are still better off asking someone or, in my case, blogging about it, to find the answer. Here is hoping it works!

ADDED LATER: Within 30 minutes of posting this, one of my readers wrote a program, found that you CAN do it for n=6, and gives the answer. Another commenter pointed to a website with the related question of putting as many pawns as you can on an 8x8 board.
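A brute-force search of the kind that commenter presumably ran is short enough to sketch. Below is a hypothetical Python version (my own reconstruction, not the commenter's actual program). It compares squared distances, which are integers, since the distances are distinct exactly when the squared distances are. It is only practical for small n.

```python
from itertools import combinations

def find_placement(n):
    """Search for n cells of an n x n board whose pairwise
    center-to-center distances are all different.
    Returns a tuple of (row, col) cells, or None if none exists."""
    cells = [(r, c) for r in range(n) for c in range(n)]
    for placement in combinations(cells, n):
        # Squared distances are integers, and they are distinct
        # if and only if the distances themselves are.
        sq = {(r1 - r2) ** 2 + (c1 - c2) ** 2
              for (r1, c1), (r2, c2) in combinations(placement, 2)}
        if len(sq) == n * (n - 1) // 2:  # all C(n,2) distances distinct
            return placement
    return None

for n in range(3, 6):
    print(n, find_placement(n))
```

The search over C(n*n, n) placements blows up fast: n=6 already means checking about two million placements, and the larger n in the full solution below are far out of reach for this naive approach.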

ADDED LATER: There are now comments on the blog pointing to the FULL SOLUTION to the problem, which one can find here. In summary: 

for n=3,...,7  there IS a way to put n pennies on a chessboard such that all distances are distinct.

for n=8,...,14 a computer search shows that there is no such way.

for n=15 there is an INTERESTING PROOF that there is no such way (good thing- the computer program had not halted yet. I do not know if it ever did.)

for n ≥ 16 there is a NICE proof that there is NO such way.

I am ECSTATIC! I wanted to know the answer and now I do, and it's easy to understand!


Thursday, June 22, 2023

Don't Negotiate with Logic

Computer scientists and mathematicians often try to use logic to negotiate, whether at a university or in life in general. I've tried it myself and it doesn't usually work. Even if you have that (rare) perfect argument, remember Upton Sinclair's words: "It is difficult to get a man to understand something, when his salary depends on his not understanding it."

So make sure their salary depends on them understanding it. Or, more to the point, in a world with limited resources, show why it makes sense for them to help you. 

  1. Ideally go for the win-win: why a certain decision helps the department/college/university as well as yourself. Asking for a small investment as a seed towards a large grant, for example.
  2. How would the decision make you or your students more successful? The success of a department is measured by the success of its faculty and students. Conversely, why would a different decision hold you and your students back?
Even outside the university, align your objectives with those of the person you are negotiating with to reach a better outcome.

Of course sometimes you are haggling over a price or a salary when it really is a zero-sum game. There it's good to know the BATNA (Best Alternative To a Negotiated Agreement) for yourself and for the other party. In other words, if they aren't selling to you, what other options do you and they have?

There are whole books written about negotiating strategies. Mostly it comes down to making it work for both parties. That's what matters, not the logic.

Sunday, June 18, 2023

Ted Kaczynski was at one time the world's most famous living mathematician

(This post is inspired by the death of Ted Kaczynski on June 10, 2023.) 

From 1978 until 1995, 23 mail bombs were sent to various people; 3 caused deaths and the rest caused injuries. The culprit was nicknamed the Unabomber (I wonder if he liked that nickname). For more on his story see here.

The culprit was Ted Kaczynski. He had a BS in Mathematics from Harvard in 1962 and a PhD in Math from the University of Michigan in 1967. He got a job in the Berkeley math dept but resigned in 1969. He soon thereafter moved to a shack in the woods (I wonder if his prison accommodations were better) and began sending out the mail bombs. 

When he was caught in 1996 he was (a) famous and (b) a mathematician. That last point is debatable in that I doubt he was doing math while living in his shack. But we will ignore that point for now. Would you call him a famous mathematician? If so then he was, in 1996, the most famous living mathematician. 

In 1995 Andrew Wiles proved Fermat's Last Theorem (this is not quite right- there was a bug in his original proof that was fixed with help from Richard Taylor) and he was, for a brief time, the world's most famous living mathematician, though perhaps Wiles and Kaczynski were tied. Wiles made People magazine's 25 most intriguing people of the year! (NOTE- I originally had, incorrectly, that Wiles had proven it in 1986. A comment alerted me to the error, which makes the story MORE interesting since Ted and Andrew were competing for Most Famous Living Mathematician!)

Terry Tao won the Fields Medal (2006) AND the MacArthur genius award (2006) AND the Breakthrough Prize (2015). The last one got him a spot on The Colbert Report (2014) (see here). For those 15 minutes he might have been the most famous living mathematician. He did not have much competition for the honor.  

And then there is Grigori Perelman, who solved the Poincaré Conjecture and declined the Fields Medal and the Millennium Prize (Colbert commented on this, see here). For a very brief time Perelman may have been the most famous living mathematician. He did not have much competition for the honor. 

The most famous mathematicians of all time: Pythagoras of Samos, Euclid, Lewis Carroll. 

1) Pythagoras might not count since it's not clear how much he had to do with his theorem.

2) Lewis Carroll is the most interesting case. He IS famous. He DID do Mathematics. He DID mathematics while he wrote the books that made him famous. So he is a famous mathematician but he is not famous for his math. But that does not quite seem right. 

3) The Math version of AND and the English version of AND are different. Lewis Carroll is FAMOUS and Lewis Carroll is A MATHEMATICIAN, but it doesn't seem quite right to call him a FAMOUS MATHEMATICIAN. Same for Ted K. Andrew W was, for a short time, a legit FAMOUS MATHEMATICIAN. 

4) Stephen Hawking appeared on ST:TNG and voiced himself on The Simpsons, Futurama, and The Big Bang Theory. He is famous for a combination of his disability, his expository work, and his physics. Is he a famous actor?

5) Science expositors like Carl Sagan and Neil deGrasse Tyson are famous for being expositors of science, not quite for their science. How do Professor Proton and Bill Nye the Science Guy fit into this?

6) Looking at Ted K, Andrew W, Terry T, and Grigori P, one other point comes up: all of them were famous for a short time but it faded QUICKLY. So- fame is fleeting!