Sunday, May 18, 2025

Is Satire Dangerous in the AI-Age?

There have been times when satire has been mistaken for reality. A list of Onion stories that were mistaken for reality (or was it a mistake?) is here. When I say mistaken for reality, I mean that a large set of people were fooled.

My own Ramsey-History-Hoax (blog here, latest version of the paper here) has fooled some people; however, the number of people fooled is small, since the number of people who know the underlying math is small.

In my last blog post (see here) I said that Pope Leo XIV majored in math (that is true) and then I gave a false title for his thesis (I HAVE SINCE REMOVED THE ENTIRE PASSAGE).

Later in the post I said that his ugrad thesis was not on that topic, but gave another false title (I HAVE REMOVED THAT AS WELL).

I thought the reader would know that it was false, but one commenter inquired about it, so I left a comment admitting it was false.

This is all very minor: not that many people read this blog, and very few non-math people would care what the topic of the Pope's undergraduate thesis was.

The last part of the last sentence is false. It's the POPE! People DO care about his background.

But surely my blog post isn't so well read as to make the fictional title of his thesis a hoax that fools a lot of people.

Even so, I left a comment wondering if LLMs might learn the incorrect title of the Pope's ugrad thesis.

A reader named E posted the following:

 It might be too late. I did this search this evening:

E: Did Pope Leo XIV study Ramsey Theory?

Gemini: Pope Leo XIV, whose given name is Robert Francis Prevost,
earned a Bachelor of Science degree in mathematics from Villanova
University in 1977. His undergraduate thesis focused on Rado's Theorem
for Nonlinear Equations.

0) This may not be too bad: one would have to ask about the Pope and Ramsey Theory to get that answer. But in the future this answer might pop up for the question `What did the Pope study as an undergraduate?' or similar questions.

1) Might future satires or April Fools' Day jokes be mistaken for reality by AI and hence reach a much larger audience than this blog does?

2) If so, should we be careful with what we post (not sure how to do that)?

3) What about people who have a much larger following than complexityblog (yes, there are such people)?

4) In the past one had to be a celebrity or similar to change people's perception of reality (see Stephen Colbert and Wikipedia here). Now a complexity blogger may be able to change people's perception of reality. Hence I ask:

Is Satire Dangerous in the AI-Age?

Wednesday, May 14, 2025

A Bittersweet Anniversary

The National Science Foundation was founded on May 10, 1950, 75 years ago last Saturday. No doubt the NSF has seen better days, but first let's take a look back.

At the end of World War II, Vannevar Bush, Director of the Office of Scientific Research and Development, wrote Science, The Endless Frontier, where he laid out the importance of scientific research and the need for the US government to foster that research.

A new agency should be established, therefore, by the Congress for the purpose. Such an agency, moreover, should be an independent agency devoted to the support of scientific research and advanced scientific education alone. Industry learned many years ago that basic research cannot often be fruitfully conducted as an adjunct to or a subdivision of an operating agency or department. Operating agencies have immediate operating goals and are under constant pressure to produce in a tangible way, for that is the test of their value. None of these conditions is favorable to basic research. Research is the exploration of the unknown and is necessarily speculative. It is inhibited by conventional approaches, traditions, and standards. It cannot be satisfactorily conducted in an atmosphere where it is gauged and tested by operating or production standards. Basic scientific research should not, therefore, be placed under an operating agency whose paramount concern is anything other than research. Research will always suffer when put in competition with operations.

The report laid out a National Research Foundation whose functions would eventually spread across three agencies: DARPA, NIH, and the NSF.

While Bush, given the time, didn't significantly mention computing, computing would become a central part of NSF's mission with the establishment of the Computing and Information Science and Engineering (CISE) directorate in 1986, placing computing at the same level as the Math and Physical Sciences and Engineering directorates.

In 1999, the President’s Information Technology Advisory Committee (PITAC) issued a report that led to the NSF Information Technology Research (ITR) program, which became one of the largest NSF research initiatives of the early 2000s. The report helped reframe computing not just as infrastructure but as a scientific discipline in its own right, deserving of the same kind of basic science funding as physics or biology.

CISE has led many initiatives through the years; for example, the TRIPODS program established several centers devoted to the theoretical study of data science.

In recent weeks, the NSF director stepped down, hundreds of grants were canceled, new grants were put indefinitely on hold, indirect costs on new grants will be capped at 15%, and many staff members were pushed out. Divisions below the directorates are slated for elimination, advisory committees have been disbanded, and Trump's proposed budget cuts NSF's allocation by about half. The CISE AD (Assistant Director, the head of CISE), Greg Hager, stepped down last week and, through the CRA, sent a message to the community.

Under these circumstances, my ability to carry out my vision, to provide a voice for computing research, and to provide authentic leadership to the community are diminished to the point that I can have more impact outside NSF than within it. Echoing Dr. Nelson’s powerful article, leaving “allows me to speak more clearly in my own language,” and, in doing so, even more effectively amplify the work of the incredible, dedicated CISE leadership and staff who continue to strive to fulfill NSF’s mission. 

As I move beyond NSF, I will continue to make the case for computing research. Computing is central to so much in today’s world and computing advances are now core assets to the Nation’s research enterprise. NSF’s support for the past 75 years has forcefully demonstrated the value of computing research for advancing national health, prosperity and welfare; enhancing national economic competitiveness; securing the national defense and helping promote all of science and engineering. NSF-funded work has continually catalyzed new innovations, created new industries, and made us the envy of the world.  

We all need to join Greg in continuing the fight to ensure that Vannevar Bush's vision continues to survive another 75 years and beyond.

Sunday, May 11, 2025

Random Thought on the New Pope (the actual New Pope, not the TV series). He was a math major!

The New Pope is Pope Leo XIV (pre-Pope name: Robert Prevost).

1) Pope names are one of the few places we still use Roman numerals. I saw an article, probably satirical, claiming that Americans prefer Roman numerals (the numbers Jesus used) over Arabic numerals. Also note that Pope Francis did not have a Roman numeral; that is because he was the first Pope Francis. They could call him Pope Francis I now, rather than later, to avoid having to rewrite things. (John Paul I took the name John Paul I from the start.)

2) Over the years I have read the following and had the following thoughts. (Spoiler: I was wrong on all of them.)

a) The last non-Italian Pope (until 1978) was Pope Adrian VI, who was Pope from Jan 9, 1522 to Sept 14, 1523. His Wikipedia entry is here. He was 62 years old when he became Pope and died of a heart attack.

BILL THOUGHT: We will never see a non-Italian Pope again.

REALITY: John Paul II from Poland was Pope from 1978 until 2005. His Wikipedia page is here.

MORE REALITY: Since then we've had Pope Benedict XVI (Germany), Pope Francis I (Argentina), and Pope Leo XIV (America). I now wonder if we will ever have an Italian Pope again, but I make no predictions.

b) There will never be an American Pope because people think that America already has too much power and if there ever was an American Pope then people would think it was engineered by the CIA.

BILL THOUGHT: That sounded correct to me. Not that the election would be engineered by the CIA, but that people would think that. 

REALITY: Pope Leo XIV is American. Some MAGA people are calling Pope Leo a Woke Marxist Pope (see here). Not the type the CIA would install. 

QUESTION: How much power does the Pope really have? I ask non-rhetorically as always. 

c) The shortest Papal reign was that of Pope Urban VII (1590), who reigned for 13 days. The tenth shortest was Pope Benedict V (964), who reigned for 33 days. I originally thought the short reigns were from assassinations, but I looked it up and there were two who may have been murdered; the rest died of natural causes. Having looked it up, I wrote it up here.

BILL THOUGHT: The 10th shortest reign was 33 days. With better health care and less skullduggery in the Papacy that won't happen again.

REALITY: Pope John Paul I in 1978 died of a heart attack after being Pope for 33 days.

d) The last Pope to resign was Pope Gregory XII in 1415 (his Wikipedia page is here). He resigned to heal a schism in the church (it's more complicated than that; see his Wikipedia page).

BILL THOUGHT: We will never see a Pope resign again.

REALITY: Pope Benedict XVI resigned (see here for the Wikipedia page on the resignation) in 2013. He said it was for health reasons.

BILL THOUGHT: Now that Pope Benedict has resigned it will be easier for later popes who feel they are not healthy enough for the job to resign. But I make no predictions. 

3) Pope Leo XIV has a degree in mathematics. I emailed my Ramsey Theory class the following excerpt from his Wikipedia page, with one incorrect sentence added. (NOTE: I USED TO HAVE A TITLE OF THE POPE'S UGRAD MATH THESIS HERE, WHICH WAS FAKE, BUT SINCE AIs PICKED IT UP AS REAL I HAVE DELETED IT, AND ALSO DELETED A LATER PART OF THIS POST WHERE I GIVE THE REAL TITLE, WHICH IS ALSO FAKE.)

Prevost earned a Bachelor of Science (BS) degree in mathematics from Villanova University, an Augustinian college, in 1977. He obtained a Master of Divinity (MDiv) from Catholic Theological Union in Chicago in 1982, also serving as a physics and math teacher at St. Rita of Cascia High School in Chicago during his studies. He earned a Licentiate of Canon Law in 1984, followed by a Doctor of Canon Law degree in 1987 from the Pontifical University of Saint Thomas Aquinas in Rome. His doctoral thesis was titled The Role of the Local Prior in the Order of Saint Augustine. Villanova University awarded him an honorary Doctor of Humanities degree in 2014.

4) He is not the first Pope who knew some mathematics. In a general sense people used to be more well-rounded before fields got so specialized, so in that sense I am sure that some prior Popes knew some math. But more concretely, Pope Sylvester II was, according to the article When the Pope was a Mathematician, Europe's leading mathematician (at the time a modest distinction). He reigned as Pope from 999 to 1003. His Wikipedia page is here.

5) Since Pope Leo XIV was a mathematician, as Pope he won't only know about sin but also about cos.

6) The name Leo struck me, since one of my TAs is named Leo. I asked him whether, if he became Pope, he would change his name. He said:

Hmm, after careful consideration, I think I would take another name. I like being Leo, but I think I would want to try out a different name. I looked up Papal names, and I would probably pick something cool like Boniface, Honorius, or Valentine. But I would do the name change after the semester ends so as not to screw up the payroll office. 

7) Popes did not always change their names. For a long article on that see here. For a short version here are some points:

a) The first Pope to change his name was born Mercurius, the name of a pagan god, and changed it to become Pope John II. That was in 533.

b) The name change did not become standard for a while. Before the year 1000 only 3 Popes changed their names, all to John. The other two had the given name Peter and felt they should not reign as Peter, since Peter was the first Pope and an apostle. Kind of like having a jersey number retired by a sports team.

c) After the year 1000 some changed, some didn't, but the last one not to change was Pope Marcellus II in 1555. His reign was 22 days, though I doubt that is related to not changing his name.

8) Math question: What is the average length of a Papacy and what is the average length of a presidency (of America)?

The first Pope was Peter, whose papacy began in 30 AD.

The 266th Pope was Francis whose reign ended in 2025.

SO we have 266 Popes in 1995 years, so the average reign is 7.5 years.

The first president was George Washington whose presidency began in 1789.

The 46th president was Joe Biden, whose presidency ended in 2025.

SO we have 46 presidents (we ignore the Grover C thing) in 236 years, so the avg reign is 5.1 years.

The 7.5 and 5.1 are more different than they look, since the length of a presidency is usually 4 or 8 years, while the length of a Papal reign has had a min of 13 days and a max of 31 years (Pope Pius IX).

I'd be curious what the standard deviation and variance are for both Papal reigns and presidencies. I suspect they're much bigger for Papal reigns, and not just because a presidency is at most 8 years (with one exception: FDR was president for 12 years).
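A quick sanity check of the arithmetic (a throwaway Python sketch; the counts are the ones given above):

    pope_years = 2025 - 30      # Peter (30 AD) through Francis (2025)
    num_popes = 266
    print(round(pope_years / num_popes, 1))        # 7.5

    pres_years = 2025 - 1789    # Washington (1789) through Biden (2025)
    num_presidents = 46         # ignoring the Grover C thing
    print(round(pres_years / num_presidents, 1))   # 5.1

Given the full list of reign lengths, statistics.pstdev from the standard library would answer the standard deviation question the same way.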

9) There were betting markets on the next Pope. This raises the question of how often someone NOT on the short list (so probably not bet on much) becomes Pope. Let's look at the last few:

Pope Leo XIV- not on the short list

Pope Francis- not on the short list

Pope Benedict XVI- a favorite

Pope John Paul II- not on the short list

Pope John Paul I- I don't know and I will stop here.

Upshot: it may be foolish to bet on the next Pope. Even more so than betting on the Vice Prez nominee which I commented on here.

10) Art imitates life: Some of the cardinals at the conclave watched the movie Conclave to get an idea of what a conclave is like. I suspect the real conclave was much less dramatic than the movie Conclave. 

11) Trump thinks that since the Pope is American, America should annex the Vatican. Or does he? See this article here.  

12) Pope Leo has an opinion about AI (I wonder if his math background helps); see here. This is a good example of the limits of the Pope's power: does anyone who can actually do anything care what Pope Leo XIV thinks? I ask non-rhetorically, as always.




Thursday, May 08, 2025

Using AI for Reviews

I reviewed a paper recently and had to agree not to use AI in any aspect of the reviewing process. So I didn't, but it felt strange, like being told I couldn't use a calculator to check the calculations in a paper. Large language models aren't perfect, but they've gotten very good, and while we shouldn't trust them to find issues in a paper, they are certainly worth listening to. What we shouldn't do is have AI just write the review with little or no human oversight; the journal probably wanted me to check the box to ensure I wouldn't do that, though I'm sure some do and check the box anyway.

I've been playing with OpenAI's o3 model, and color me impressed, especially when it comes to complexity. It solves all my old homework problems and cuts through purported P v NP proofs like butter. I've tried it on some of my favorite open problems, where it doesn't make new progress, but it doesn't create fake proofs and does a good job giving the state of the art, some of which I didn't even know about beforehand.

We now have AI at the level of new graduate students, and we should treat it as such. Sometimes we give grad students papers to review for conferences, but we need to look over what they say afterwards; the same goes for these new AI systems. Just because o3 can't find a bug doesn't mean there isn't one. The analogy isn't perfect: we give students papers to review so they can learn the state of the art and become better critical thinkers, in addition to getting help with our reviews.

We do have a privacy issue. Most papers under review are not for public consumption, and if uploaded into a large language model they could become training data and be revealed if someone asks a relevant question. Ideally, if we use AI for reviewing, we should use a system that doesn't train on our inputs, but both the probability of leakage and the amount of damage are low, so I wouldn't worry too much about it.

If you are an author, have AI review your paper before you submit it. Make sure you ask the AI to give you a critical review and make suggestions. Maybe in the future we'd require all submitted papers to be AI-certified. It would make conference reviewers' jobs less onerous.
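Something like the following is all it takes (a minimal sketch, assuming the OpenAI Python client with an API key in OPENAI_API_KEY; the model name and file name are placeholders for whatever you actually use):

    # Ask a model for a critical pre-submission review of a draft.
    from openai import OpenAI

    client = OpenAI()
    paper = open("mypaper.txt").read()   # plain-text version of the draft

    resp = client.chat.completions.create(
        model="o3",   # any reasoning model you have access to
        messages=[{
            "role": "user",
            "content": "Give me a critical review of this paper. "
                       "Point out errors and make suggestions.\n\n" + paper,
        }],
    )
    print(resp.choices[0].message.content)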

For now, neither humans alone nor AI alone is the best way to do conference reviews; when you do a review, working with an AI system as an assistant will lead to a stronger one. I suspect in the future, perhaps not that far off, AI alone might do a better job. We're not there yet, but we're further along than you'd probably expect.

Sunday, May 04, 2025

My response to Scott's least controversial post ever!

In a recent post by Scott (see here, or just read my post, which includes his) he listed topics that he conjectured would NOT cause an outrage.

I was going to write a long comment in his comments section, which would only be read by people who made it to comment 100 or so. OR I could comment on it in my blog.

SO, here is his blog post and my comments on it.

------------------------------

A friend and I were discussing whether there’s anything I could possibly say, on this blog, in 2025, that wouldn’t provoke an outraged reaction from my commenters.  So I started jotting down ideas. Let’s see how I did.

1) Pancakes are a delicious breakfast, especially with blueberries and maple syrup.

BILL: Pancakes have a terrible ratio of health to enjoyment.

2) Since it’s now Passover, and no pancakes for me this week, let me add: I think matzoh has been somewhat unfairly maligned. Of course it tastes like cardboard if you eat it plain, but it’s pretty tasty with butter, fruit preserves, tuna salad, egg salad, or chopped liver.

BILL: UNFAIRLY MALIGNED. That's an interesting concept in itself, since there are so many opinions on the internet that there is not really a consensus on... anything. My 2 cents: I like the taste of cardboard, and hence I like the taste of matzoh.

3) Central Texas is actually really nice in the springtime, with lush foliage and good weather for being outside.

BILL: I WILL DEFER to Scott, who is now a Texan, on this one.

4) Kittens are cute. So are puppies, although I’d go for kittens given the choice.

BILL: PETS are a waste of time and energy. My opinion shows something more important: Scott and I disagree on this but we are not OUTRAGED at each other.

5) Hamilton is a great musical—so much so that it’s become hard to think about the American Founding except as Lin-Manuel Miranda reimagined it, with rap battles in Washington’s cabinet and so forth. I’m glad I got to take my kids to see it last week, when it was in Austin (I hadn’t seen it since its pre-Broadway previews a decade ago). Two-hundred fifty years on, I hope America remembers its founding promise, and that Hamilton doesn’t turn out to be America’s eulogy.

BILL: Agree. It also led to the best math novelty song of all time; see here.

6) The Simpsons and Futurama are hilarious.

BILL: The cliche that The Simpsons was better in its first X seasons is true, but it can still crank out an excellent episode once in a while. The episode Treehouse of Horror Presents: Simpsons Wicked This Way Comes (from 2024--Wikipedia entry here) is a microcosm of the series: two okay satires of two okay stories by Ray Bradbury and then a BRILLIANT satire of Fahrenheit 451. (Spellcheck thinks Treehouse is not a word. I looked it up to see what the great God Google would say. The Treehouse of Horror episodes of The Simpsons use Treehouse. I googled Is Treehouse one word and got a YES. This is a rare time when spellcheck is just wrong.)

BILL: I think Futurama benefited from being on the air, then off, then on, then off, then on (is it on now?), since each time it came back with new stories.

7) Young Sheldon and The Big Bang Theory are unjustly maligned. They were about as good as any sitcoms can possibly be.

BILL: AGREE, though again, some malign, some praise, some just watch and laugh. I've had 5 blog posts inspired by these shows, and a few more that mention them in passing. I recently saw TBBT on a list of OVERRATED shows, so someone must like it for it to be on that list.

BILL: In an earlier era it would have been hard to watch every episode of a TV show, since episodes aired once and then maybe in some reruns, but maybe not. I've seen every episode of both TBBT and YS without even trying to.

BILL: (Added later, inspired by a comment) For the entire run of YS, the actress who played Missy, Raegan Revord, did not have a Wikipedia page. I noted this in two prior blog posts. I am happy to say that she finally does; see here. It is a long overdue honor. (Is it an honor?)

8) For the most part, people should be free to live lives of their choosing, as long as they’re not harming others.

BILL: TRICKY: the for the most part causes arguments and outrage. Example: helmet laws for motorcyclists. Should they be free to risk brain injuries that the rest of society must pay for? I ask non-rhetorically, as always.

9) The rapid progress of AI might be the most important thing that’s happened in my lifetime. There’s a huge range of plausible outcomes, from “merely another technological transformation like computing or the Internet” to “biggest thing since the appearance of multicellular life,” but in any case, we ought to proceed with caution and with the wider interests of humanity foremost in our minds.

BILL: I doubt it's the biggest thing since the appearance of multicellular life. My blog post on AI is here. I agree that caution is needed, in two ways:

a) Programs are written that we don't understand and might be wrong in serious ways. (You can replace programs with other things.)

b) The shift in the job market may be disruptive. People point out that farmers stopped farming and moved to factory work, but the transition period was awful. And the AI shift might be much faster. Fortunately for me, ChatGPT is terrible at solving problems in Ramsey Theory. For now.

10) Research into curing cancer is great and should continue to be supported.

BILL: This one seems obvious but one has to ask the broader question: Which medical things should be funded and why? More generally, what should the government fund and why? These require careful thought. 

11) The discoveries of NP-completeness, public-key encryption, zero-knowledge and probabilistically checkable proofs, and quantum computational speedups were milestones in the history of theoretical computer science, worthy of celebration.

BILL: Of course I agree. But the following questions haunt me:

a) What is a natural problem, and do we spend too much time on unnatural ones? Even Graph Isomorphism, which seems like a natural problem, does not have any applications (see my blog posts here and a ChatGPT-generated post on this topic here).

b) Darling has asked me IF WE PROVE P NE NP THEN HOW WILL THAT HELP SOCIETY? Good question.

12) Katalin Karikó, who pioneered mRNA vaccines, is a heroine of humanity. We should figure out how to create more Katalin Karikós.

BILL: Cloning?

BILL: This raises the general question of how much ONE PERSON is responsible for great scientific discoveries.

13) Scientists spend too much of their time writing grant proposals, and not enough doing actual science. We should experiment with new institutions to fix this.

BILL: Also writing up papers and waiting for referees' reports. A paper I submitted with students 3 years ago was accepted (Yeah) with many helpful comments (Yeah) but way too late to help those students get into grad school (they did anyway: Yeah). We had forgotten what we wrote and why we cared (Boo). We did get the corrections done and resubmitted it, so I could say it will be out soon. But that's the weird thing: we posted it to arXiv three years ago, so it's been out for a while.

14) I wish California could build high-speed rail from LA to San Francisco. If California’s Democrats could show they could do this, it would be an electoral boon to Democrats nationally.

BILL: This seems fine, but it is an arbitrary thing to want as opposed to other pairs of cities and other achievements.

15) I wish the US could build clean energy, including wind, solar, and nuclear. Actually, more generally, we should do everything recommended in Derek Thompson and Ezra Klein’s phenomenal new book Abundance, which I just finished.

BILL: You inspired me to recommend the book Abundance to my book club. This is the second time that's happened: I also had them read the Steven Pinker book Enlightenment Now based on your blog's recommendation.

BILL: Some of the problem is political and some is technical. I don't know how much of each.

16) The great questions of philosophy—why does the universe exist? how does consciousness relate to the physical world? what grounds morality?—are worthy of respect, as primary drivers of human curiosity for millennia. Scientists and engineers should never sneer at these questions. All the same, I personally couldn’t spend my life on such questions: I also need small problems, ones where I can make definite progress.

BILL: Indeed, I like well-defined questions that have answers, even if they are hard to answer. The questions you raise are above my pay grade.

17) Quantum physics, which turns 100 this year, is arguably the most metaphysical of all empirical discoveries. It’s worthy of returning to again and again in life, asking: but how could the world be that way? Is there a different angle that we missed?

BILL: I can either have an opinion on this or defer to one of the world's leading authorities on the topic.

18) If I knew for sure that I could achieve Enlightenment, but only by meditating on a mountaintop for a decade, a further question would arise: is it worth it? Or would I rather spend that decade engaged with the world, with scientific problems and with other people?

BILL: If that enlightenment includes obtaining a proof that P NE NP then sign me up!

19) I, too, vote for political parties, and have sectarian allegiances. But I’m most moved by human creative effort, in science or literature or anything else, that transcends time and place and circumstance and speaks to the eternal.

BILL: I find myself less interested in politics and more interested in math. Non-partisan example: I read many articles about who Trump would pick for his VP. Then he picked one. I then read many articles about who Harris would pick for her VP. Then she picked one. I WISH I HAD SPENT THAT TIME ON THE POLYNOMIAL HALES-JEWETT THEOREM INSTEAD!

20) As I was writing this post, a bird died by flying straight into the window of my home office. As little sense as it might make from a utilitarian standpoint, I am sad for that bird.

BILL: If we could, without too much effort, make this not happen in the future, that would be good. There were some suggestions for that in your blog comments.



Wednesday, April 30, 2025

P v NP Papers Galore

As someone who has literally written a book on the topic, I have had many people over the years send me their attempts at P v NP proofs. On average, I receive about one a month, but I've had seven in the last two weeks. And not just the usual email with a PDF attachment. A DM on X. A phone call with a baby in the background. Via LinkedIn, in Spanish. One with forty-one follow-up emails.

P v NP is still the most important problem in theoretical computer science and perhaps all of mathematics, and not that difficult to understand, at least on an intuitive level. I can see the fascination in wanting to solve it, much the way my teenage self dreamed of being the first to prove Fermat's last theorem (damn you Wiles).

But why the spate of "proofs" right now? Maybe it's an artifact of the crazy world we live in.

In one sense, I appreciate the continued interest in this great question, even though these papers rarely (really never) provide new insights into the P v NP problem. Most of my colleagues just ignore these emails. I usually try to respond once, but most authors come back, and I just don't have time for those continued conversations.

These papers come in three categories.

  1. Claiming to prove P ≠ NP by arguing that a polynomial-time machine must search a large space of solutions. To truly prove P ≠ NP, one cannot make assumptions about how the machine might work.
  2. Giving an algorithm for an NP-complete problem that works on some small test case. I usually ask them to solve SAT competition problems, solve RSA challenge problems, or mine themselves some bitcoin. 
  3. Giving a new philosophical or physical spin on the P v NP problem and claiming that tells us about whether P = NP. But P v NP is at its heart a mathematical question, and no amount of philosophy or physics will help settle it.
I have a new suggestion for those who think they've settled P v NP: Run your paper through an AI system, preferably a reasoning model, using the prompt "Give me a critical review of this paper". If you can't convince AI, you're not likely to convince me.

Sunday, April 27, 2025

A personal view of the NSF hot mess: My REU program

I wrote this about a month ago but wanted to wait until after the REU PI conference (which was April 21-22-23) to post it. I add a few comments based on what has happened since, which I preface with ADDED.

------------------------------------------------

Lance (see here, here) and Scott (see here, here) have commented on the hot mess at the NSF. (ADDED: Funding has gotten worse; see this blog post by Lance, this post by Scott, and this other post by Scott. If you want a break from all this bad news, see this blog post by Scott.)

I agree with their posts and do not have anything to add about the general situation. 

Hence I give you a personal view. While not as important as the general problem, what is happening to me may be considered one of many canaries in the coal mine. (Do my readers know that expression and where it came from? If not, see here.)

Random points about my NSF-REU grant.

1) I got my REU grant, REU-CAAR (Combinatorics and AI for Real problems--that's not what it stood for then, but it's what it stands for now), in 2013, for 2013-14-15. (To see what REU grants are, either go to my post about them here or go to my current website about the program here.)

2) It has been renewed for 2016-17-18, 2019-2020-2021, 2022-2023-2024, and 2025-26-27, the last one with a caveat.

3) In all but the last one, being recommended for a grant was equivalent to getting the funding. But for 2025-2026-2027 I have not seen a dime, and I assume I will not get funding in time for a Summer 2025 program to run normally. (ADDED: I was correct about this.)

4) I am running the program anyway, with mostly local students (who don't need housing) and who don't need stipends. There may be a little (not much) money for a few stipends from other sources.
 
ADDED: Here is a list of approaches taken by people who have been promised money but haven't gotten it:

a) Run a program with very little money and have the students come WITHOUT stipends, WITHOUT housing, WITHOUT transportation, WITHOUT food. Mostly local students. The program can still run, but this goes against the whole point of REU grants, which is broadening participation and giving students from non-research schools a chance to do research. (Only 3 of my students are from non-research schools. Another 4 are from high schools, so I am not sure how that counts.) One odd pro: in my case I have 25 students; I can make more offers since I am not paying anything.

b) Run a program with some money you have lying around. You may decide to give stipends but NOT housing. Hence you may (like approach (a)) take mostly local students, but all students get stipends.

c) You assume you will get money by (say) May 15. So you take applicants, accepting and rejecting as appropriate. PRO: if the money comes in, you run a normal program. CON: you may end up cancelling in (say) mid-May, leaving students in the lurch.

d) Do not begin trying to run a program UNTIL you get funding. If you get funded late then run a small program of mostly local students. 

e) There are probably other approaches or combinations of the above. 

5) I've heard that the reason I won't see money in time is NOT that REU grants are considered DEI, but that the staff cuts at NSF make it harder to get funds out the door.

6) Will I get funded in time for 2026? I'd be surprised either way. Is it possible for both A and NOT(A) to be surprising? I'll make that an REU project in 2026 if I get funded by then.
 
7) One of the original motivations for the REU program was to give students at non-research schools a chance to do research. Hence I use comes from a non-research school as a non-merit criterion. Is using that criterion DEI? I doubt Elon has thought that through. (HE MIGHT HAVE: see comment 9.5, which I added after I posted this.)

8) Another non-merit criterion I use is how many students want to work on which projects. For example, if 10 very qualified applicants want to work on Ramsey Theory, I can't take all of them. I urge applicants to specify at least 3 projects they are happy to work on, though many do not do that. Is using the distribution of projects, a non-merit criterion, DEI? I doubt Elon has thought that through. (HE MIGHT HAVE: see comment 9.5, which I added after I posted this.)

9) Another non-merit criterion I use is veteran status. It is rare that a veteran applies to my program, but it does happen, and they get a preference (we've had 3 veterans). Is that non-merit criterion DEI? I doubt Elon has thought that through. (HE MIGHT HAVE: see comment 9.5, which I added after I posted this.)

9.5) An astute reader emailed me that comments 7-8-9 may be unfair.

The guidelines on NSF grants are here. I quote an FAQ question:


4. Can I still propose broadening participation activities (e.g.,
outreach) in fulfillment of the Broader Impacts criterion?

Investigators should prioritize the first six broader impacts goals as
defined by the America COMPETES Reauthorization Act of 2010 (see here)
Investigators wishing to address goal seven — expanding participation
in STEM for women and underrepresented groups — must ensure that all
outreach, recruitment, or participatory activities in NSF projects are
open and available to all Americans. Investigators may conduct these
types of engagement activities to individuals, institutions, groups,
or communities based on protected characteristics only as part of
broad engagement activities. Investigators may also expand
participation in STEM based on non-protected characteristics,
including but not limited to institutional type, geography,
socioeconomic status, and career stage. However, engagement activities
aimed at these characteristics cannot indirectly preference or exclude
individuals or groups based on protected characteristics.

Institutional type can be interpreted as making non-research schools okay. Career stage can be interpreted as covering returning students. Not sure about veterans or too-many-Ramsey-theorists.


10) One of the things that made America great in the past was our scientific achievement. Hence we need a president who wants to Make America Great Again. We can abbreviate that to MAGA.

11) A good Popperian scientist would STUDY the NSF (and other programs), SEE what is wasteful and what is not, and ACCEPT what they find. Had they done this, it would have led to some minimal changes at the NSF, the NIH, and other organizations. But instead they just asserted that the NSF and NIH waste money. (Spellcheck thinks that Popperian is not a word. Oh well. For those who don't know what it means, see Karl Popper's Wikipedia entry here.)

12) Is the overhead on grants too high? That is a fair question to ask. But cutting overhead from 50% to 15% overnight is disruptive and does not get into the issue of how high overhead should be. 

13) Will industry step in and fund research? I doubt it will be enough. 



Wednesday, April 23, 2025

Real People

Right after the election I wrote a post predicting what would happen to higher education under Trump, most of which is coming true, but I had a massive failure of imagination, missing the direct attacks on major universities. I won't detail all the challenges to higher ed, which change daily with every new executive order and court ruling. The Chronicle has a good tracker of these changes.

But often lost in all the news are the actual people hurt by these actions through no fault of their own, people who aren't making the news: A student who had his visa cancelled while out of the country so he can't get back in. PhD students who just lost their funding when their advisor's grant was cancelled. A postdoc in a similar situation just months before he starts an academic job. A young faculty member who had to hold off submitting a CAREER award proposal until a TRO restored the indirect costs. A recently tenured professor at a good US university interviewing outside of the country. Potential students in other countries trying to decide if they should still go to school and build a life in the US.

The number of people, grants, and universities affected is still on the low end; nearly all students continue to study, graduate, and work without any problems, and many universities have offered legal and financial support to affected students and faculty. In my humble opinion, the strong educational opportunities inside the US still greatly exceed those outside. Universities have weathered many challenges before: McCarthyism in the 50s, campus occupations and protests in the 60s, and budget challenges from the Great Depression to the fiscal crisis and COVID. Trump's time as president has an end date, and we'll get through all of this, but it requires all of us to push back and remind the administration and the public about the important role our universities play in the health of the country.

And as much as it pains me to say this as a Cornell alum, I'm rooting for Harvard.

Wednesday, April 16, 2025

Eisenhower's Farewell Address

I'm short on time this week, so I thought it would be good to look back, some 64 years, to Dwight Eisenhower's farewell address. It calls for balance between the military-industrial complex and the scientific-technological elite. While written for a different time, it's well worth taking the time to watch or read the full speech and think about what it says about today's complex world.

Dwight D. Eisenhower — Farewell Address

The White House, January 17, 1961 

I. The American Experience

Good evening, my fellow Americans.

First, I should like to express my gratitude to the radio and television networks for the opportunities they have given me over the years to bring reports and messages to our nation.

My special thanks go to them for the opportunity of addressing you this evening.

Three days from now, after half a century in the service of our country, I shall lay down the responsibilities of office as, in traditional and solemn ceremony, the authority of the Presidency is vested in my successor.

This evening I come to you with a message of leave-taking and farewell, and to share a few final thoughts with you, my countrymen.

Like every other citizen, I wish the new President, and all who will labor with him, Godspeed. I pray that the coming years will be blessed with peace and prosperity for all.

Our people expect their President and the Congress to find essential agreement on issues of great moment — the wise resolution of which will better shape the future of the nation.

My own relations with the Congress, which began on a basis of mutual confidence and ended on the same note, have been throughout marked by a spirit of co-operation. In the councils of government, we must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military-industrial complex.

The potential for the disastrous rise of misplaced power exists and will persist.

We must never let the weight of this combination endanger our liberties or democratic processes.

We should take nothing for granted.

Only an alert and knowledgeable citizenry can compel the proper meshing of the huge industrial and military machinery of defense with our peaceful methods and goals, so that security and liberty may prosper together.

II. The Balance in Government

Akin to, and largely responsible for the sweeping changes in our industrial-military posture, has been the technological revolution during recent decades.

In this revolution, research has become central; it also becomes more formalized, complex, and costly.

A steadily increasing share is conducted for, by, or at the direction of the Federal government.

Today, the solitary inventor, tinkering in his shop, has been overshadowed by task forces of scientists in laboratories and testing fields.

In the same fashion, the free university, historically the fountainhead of free ideas and scientific discovery, has experienced a revolution in the conduct of research.

Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity.

The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present — and is gravely to be regarded.

Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.

It is the task of statesmanship to mold, to balance, and to integrate these and other forces, new and old, within the principles of our democratic system — ever aiming toward the supreme goals of our free society.

III. Peace and Responsibility

Another factor in maintaining balance involves the element of time.

As we peer into society’s future, we — you and I, and our government — must avoid the impulse to live only for today, plundering for our own ease and convenience the precious resources of tomorrow.

We cannot mortgage the material assets of our grandchildren without risking the loss also of their political and spiritual heritage.

We want democracy to survive for all generations to come, not to become the insolvent phantom of tomorrow.

Down the long lane of the history yet to be written, America knows that this world of ours, ever growing smaller, must avoid becoming a community of dreadful fear and hate, and be, instead, a proud confederation of mutual trust and respect.

Such a confederation must be one of equals. The weakest must come to the conference table with the same confidence as do we, protected as we are by our moral, economic, and military strength.

That table, though scarred by many past frustrations, cannot be abandoned for the certain agony of the battlefield.

Disarmament, with mutual honor and confidence, is a continuing imperative.

Together we must learn how to compose differences, not with arms, but with intellect and decent purpose.

Because this need is so sharp and apparent, I confess that I lay down my official responsibilities in this field with a definite sense of disappointment.

As one who has witnessed the horror and the lingering sadness of war — as one who knows that another war could utterly destroy this civilization which has been so slowly and painfully built over thousands of years — I wish I could say tonight that a lasting peace is in sight.

Happily, I can say that war has been avoided.

Steady progress toward our ultimate goal has been made.

But so much remains to be done.

As a private citizen, I shall never cease to do what little I can to help the world advance along that road.

IV. Final Thoughts

So — in this my last good night to you as your President — I thank you for the many opportunities you have given me for public service in war and peace.

I trust that in that service you find some things worthy; as for the rest of it, I know you will find ways to improve performance in the future.

You and I — my fellow citizens — need to be strong in our faith that all nations, under God, will reach the goal of peace with justice.

May we be ever unswerving in devotion to principle, confident but humble with power, diligent in pursuit of the nation’s great goals.

To all the peoples of the world, I once more give expression to America’s prayerful and continuing aspiration:

We pray that peoples of all faiths, all races, all nations, may have their great human needs satisfied — that those now denied opportunity shall come to enjoy it to the full — that all who yearn for freedom may experience its spiritual blessings.

Those who have freedom will understand, also, its heavy responsibilities — that all who are insensitive to the needs of others will learn charity — that the scourges of poverty, disease and ignorance will be made to disappear from the earth, and that, in the goodness of time, all peoples will come to live together in a peace guaranteed by the binding force of mutual respect and love.

Now, on Friday noon, I am to become a private citizen. I am proud to do so. I look forward to it.

Thank you, and good night.

Monday, April 14, 2025

I want an application of this application of Ramsey Theory to Semigroups

I recently read the following theorem.

Def: A semigroup is a pair \((G,*)\) where \(G\) is a set and \(*\) is a binary operation on \(G\) such that \(*\) is associative. NOTE: we do not require an identity element, we do not require inverses, and we do not require commutativity. We DO require that \(G\) is closed under \(*\).

Theorem: Let \((G,*)\) be a finite semigroup. There exists \(x \in G\) such that \(x*x=x\).

Proof: Let \(r\) be such that any \(|G|\)-coloring of the edges of \(K_r\) has a mono triangle. Such an \(r\) exists by Ramsey's Theorem (\(|G|\) colors, seeking a mono \(K_3\)). Let \(x_1,x_2,\ldots,x_r\) be a sequence of elements of \(G\) (repetition is allowed---indeed required, since \(r > |G|\)).

Consider the following coloring: for \(i<j\), \(COL(i,j) = x_i * \cdots * x_{j-1}\).

By the choice of \(r\) there exist \(i<j<k\) such that

\(x_i * \cdots * x_{j-1} = x_j * \cdots * x_{k-1} = x_i * \cdots * x_{k-1}\). We call this common value \(x\).

Since \(x_i * \cdots * x_{k-1} = (x_i * \cdots * x_{j-1}) * (x_j * \cdots * x_{k-1})\), we have \(x*x=x\).

End of Proof.
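The argument is easy to check empirically. Here is a minimal Python sketch (my own illustration, not part of the original theorem) that uses multiplication mod 10 as the finite semigroup, colors pairs \(i<j\) by the partial product, and extracts the idempotent the proof promises. The value r=50 is a demo-sized assumption: the true Ramsey bound for \(|G|\) colors is astronomical, but for a random sequence a mono triangle shows up almost immediately.

    import itertools, random

    # A finite semigroup: G = {0,...,9} under multiplication mod 10.
    n = 10

    def star(a, b):
        return (a * b) % 10

    r = 50   # demo-sized; the proof's Ramsey bound is far larger
    xs = [random.randrange(n) for _ in range(r)]

    def prod(i, j):
        # x_i * x_{i+1} * ... * x_{j-1}, i.e. the color COL(i,j)
        p = xs[i]
        for t in range(i + 1, j):
            p = star(p, xs[t])
        return p

    for i, j, k in itertools.combinations(range(r), 3):
        x = prod(i, j)
        if x == prod(j, k) == prod(i, k):
            assert star(x, x) == x   # the idempotent the theorem guarantees
            print("idempotent found:", x)   # mod 10 these are 0, 1, 5, 6
            break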

Great! Let's find some semigroups to apply this to.

1) If \(G\) has an identity element \(e\) then the Theorem is trivial: take \(x=e\). So we seek a semigroup without an identity.

2) Can't we just take a group and remove its identity element? No: then it won't be closed under \(*\) (for any \(g \ne e\), the product \(g * g^{-1} = e\) falls outside the set).

3) Can't we just take the set of naturals that are \(\ge 1\), under addition? No good: that's infinite. Note that the theorem does not hold there (no \(x \ge 1\) satisfies \(x+x=x\)).

4) Can't we just Google? I kept getting infinite examples, or being told that I can ADD an identity element to a semigroup that lacks one.

5) Can't we just ask AI? I used Claude, which gave me a trivial 2-element example. I then asked for an example with more than 10 elements. It DID give me one:

\(G=\{1,\ldots,12\} \)

\(x*y=\min\{x,y,10\}\)

For this semigroup (and similar ones) the theorem is trivial, since \(\forall x\le 10\), \(x*x=x\).

I asked Claude for an example with more than 10 elements that does not use MIN, and it said:

 Due to capacity constraints NO CAN DO.

6) SO what I really want is the following:

Give me a FINITE semigroup \(G\) WITHOUT identity for which the statement

                                     is there an \(x\) with \(x*x=x\)

is not obviously true, so that the Theorem above is interesting.
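If any reader wants to go hunting, here is a minimal Python harness (mine, and only a search aid, not an answer) that takes a candidate Cayley table, checks that it really is a semigroup without identity, and lists its idempotents, so you can see whether the theorem is interesting for it:

    # A candidate semigroup is a Cayley table over {0,...,n-1}: table[a][b] = a*b.

    def is_associative(table):
        n = len(table)
        return all(table[table[a][b]][c] == table[a][table[b][c]]
                   for a in range(n) for b in range(n) for c in range(n))

    def has_identity(table):
        n = len(table)
        return any(all(table[e][x] == x and table[x][e] == x for x in range(n))
                   for e in range(n))

    def idempotents(table):
        return [x for x in range(len(table)) if table[x][x] == x]

    # Claude's example, 0-indexed: G = {0,...,11} with x*y = min(x, y, 9).
    table = [[min(x, y, 9) for y in range(12)] for x in range(12)]
    assert is_associative(table) and not has_identity(table)
    print(idempotents(table))   # [0,...,9]: almost every element, hence trivial

What you want is a table that passes the first two checks but whose list of idempotents is short and non-obvious.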



Wednesday, April 09, 2025

Why Can't We Break Cryptography?

In the recent Signalgate scandal, several senior Trump administration appointees used the Signal app on their phones to discuss an attack on the Houthis. People discussed the risk of the phones being compromised, or human factors such as adding a prominent journalist to the group chat by mistake. But mostly no one had concerns about the cryptography itself in this widely available app.

It wasn't always this way--cryptography used to be a cat-and-mouse game, most notably in the breaking of the German Enigma machine, dramatized in the movie The Imitation Game. Then Diffie and Hellman, in their classic 1976 paper, stated:

Theoretical developments in information theory and computer science show promise of providing provably secure cryptosystems, changing this ancient art into a science.

And in recent memory we've had several successful cybersecurity attacks, but never because the encryption was broken.

We've made incredible progress in solving problems once thought unsolvable, including language translation, chess, Go, protein folding, and traveling salesman. We have great SAT and mixed-integer programming algorithms. We've blown through the Turing test. None of these algorithms works all of the time, but hard problems no longer seem so hard. Yet cryptography remains secure. Why? How did we get to this Optiland, where only the problems we want to be hard are hard? Quantum computers, if we ever build them, can attack some cryptographic protocols, but we're a very long way from having those capabilities.

My latest theory involves compression. Machine learning works by finding a representation of a function, in a neural net or other form, that gives an imperfect compressed version of that function, removing the random components to reveal the structure inside. You get a smaller representation that, through Occam's Razor, is hopefully a mostly accurate version of that data. For example, we learn the representation of a Go player by training a neural net, having the computer play itself over and over again.

Cryptography is designed to look completely random. No compression. If you remove the randomness you have nothing left. And thus modern algorithms have no hook to attack it. 
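You can see the contrast with a toy Python sketch (zlib standing in for "find the structure", and random bytes standing in for a good ciphertext, which by design is indistinguishable from random):

    import os, zlib

    text = b"the quick brown fox jumps over the lazy dog " * 200
    ciphertext_like = os.urandom(len(text))   # proxy for well-encrypted data

    print(len(zlib.compress(text)) / len(text))              # tiny: lots of structure
    print(len(zlib.compress(ciphertext_like)) / len(text))   # about 1: nothing to grab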

This is just the beginning of a theory. I don't have a good formalization, and certainly not even the right questions to ask of it.

So to me it's still a big mystery and one that deserves more thought, if we really want to understand computational complexity in the world we actually live in.

Sunday, April 06, 2025

I was invited to a Nutrition conference. Why?

 
From November 2024 to March 2025 I have gotten emails inviting me to speak at conferences and/or submit to journals on the following topics:

NOT EVEN CLOSE TO MY FIELD:

Addiction Medicine, Behavioral Health and Psychiatry.

Looking forward to your involvement and your invaluable contributions. 

Possible reason for the invite: My name is Bill and I am a chocoholic.


Infections Diseases

Your exceptional expertise in the field of infections diseases aligns perfectly with our conference focus.

Possible reason for the invite: I had a mild case of COVID two years ago. (I suspect most of my readers did. Did you also get an invite?)

 Vaccines

We eagerly anticipate your participation and the invaluable perspective you will bring to the event. As a first step we kindly request the title of your presentation.

Clearly they want me to talk about using Schur's Theorem and FLT (n=3 case) to show that the primes are infinite. Why that one? All of my papers are tied for relevance to vaccines.

Possible reason for the invite: I have gotten both the flu vaccine and the COVID vaccine.



Nutrition 

We are excited to extend an invitation for you to join us as a distinguished speaker on your field since you have expertise in the field of nutrition.

Possible reason for the invite: I take a Vitamin D pill every morning. 


Pharmacy 

We are excited to extend an invitation for you to join us as a distinguished speaker on your field since you have expertise in the field of pharmacy.
(Identical wording to the Nutrition invite.) 

Possible reason for the invite: My wife's first degree was in Pharmacy and they got confused. They were very confused since she got the degree under her maiden name.

Drug R+D

We are pleased to offer two nights of accommodations at the venue hotel. (I am skeptical of that)

Possible reason for the invite: I watched Breaking Bad and blogged about it here and here.


Aquaculture and Fisheries

We are thrilled to invite you as a Speaker. Your exceptional expertise in the field of aquaculture aligns perfectly with our conference focus. (The wording is a mashup of the wordings in other invites.) 

Possible reason for the invite: I am a pescatarian.

Env and Earth, medical sciences, and immunotherapy

This email did one smart thing and one dumb thing. Smart: listing three areas, so maybe they will get lucky. Dumb: the following sentence, which proves they are a scam (not that proof is needed):

Your publication ``On the sizes of DPDAs, PDAs, LBAs" has caught our attention. We believe your expertise would greatly enrich our journal's focus on these rapidly evolving areas.

That is indeed a paper I wrote, with Richard Beigel (I wonder if he also got an invite). It's here. I am very happy with that paper; however, its applications to Environmental Science, Earth Science, Medical Science, or Immunotherapy are... limited. To be fair, it has been said that math done for its own sake often has applications the author did not foresee. So I look forward to lower bounds on the sizes of DPDAs being the key to solving global warming.

Possible reason for the invite: I live on the Earth.

Linguistics

Your publication ``On the sizes of DPDAs, PDAs, LBAs" has caught our attention. We believe your expertise would greatly enrich our journal's focus on these rapidly evolving areas.

The astute reader may notice that this is identical to what the Env-Earth-Med-Imm people sent and wonder if I made a mistake with cutting and pasting. A fair question; indeed, it would be the null hypothesis. But no--the ``linguistics'' and the ``E-E-M'' people are using the same script.

Apparently linguists are also impressed with my publication on the sizes of DPDAs, PDAs, LBAs. This makes more sense, since CFGs and PDAs are equivalent. NO IT DOESN'T. I don't think linguists have cared about CFGs for a long time.

Possible reason for the invite: Aside from my paper with Richard Beigel on lower bounds for whatnot, I also at one time had a paper with Lance about the size of CFGs for {w} that attracted some attention from an AI journal that would give us a discount on page costs (we declined their kind offer). See here.

Modern Management

Impressed by your research On the size of DPDAs, PDA, LBAs I cordially invite you to share your new research findings for presentation at the INSERT NAME OF CONFERENCE  on Modern Management. 

How do I know this is a scam? Because the paper appeared in 2015 and they are calling it `new.'  Also, they forgot to cut and paste the actual name of the conference. 

Why has my PDA lower bounds paper gotten so much attention? I do not know. I assume someone pointed to it and others happened to find it. Or perhaps the paper has more applications than I thought.

Possible reason for the invite (not including my brilliant work on the sizes of devices): The conference is on Modern Management. I have managed my REU-CAAR grant since 2013. That includes managing it through COVID and through Elon.

Diabetes

We are excited to extend an invitation for you to join us as a distinguished Speaker at the INSERT NAME OF CONFERENCE  Your expertise in the field of Diabetes, making you an an excellent

(Very similar to Nutrition and Pharmacy. The last sentence IS as it was in the email- it is incomplete and does not have a period.) 

Both Modern Management and Diabetes need to up their game--leaving in `INSERT NAME' is just incompetent. 

Possible reason for the invite: My work on Muffin Mathematics (see here). 

LNG

 Greetings to you,

I hope this message finds you well.

I would like to explore the possibility of collaborating with you on an LNG project. Would you be available to discuss this opportunity further?

Looking forward to your response.

At first I thought it said LLM and that it might be a HS student who wants to work with me in AI. Then I noticed it was LNG, which is Liquefied Natural Gas.

Possible reason for the invite: Since Darling drives and I don't, I pump the gas.

Social Science and Business

A journal on these two topics together emailed me that they exist but wisely made no claim that I am an expert in the field. So they may be legit.

Possible reason for the invite: The royalties on my muffin book are less than the cost of buying muffins for the audience whenever I give a talk on it. This is either a counterexample of how to run a business or a clever way to avoid taxes. Either way, it could be an article in their fine journal. 

 

CLOSE BUT NO CIGAR (That's an old phrase. Younger readers- have you heard that expression before? I was curious so I decided to run it up the flagpole and see who salutes.) 

Numerical Analysis

 It is a great pleasure to invite you to organize a Symposiums on your research subject.

Possible reason for the invite: I took a course on Numerical Analysis  in 1978 and got an A.

Fuzzy Systems 

We bring together global experts. (Am I one of them?)

Possible reason for the invite: I teach Freshman Discrete Math so I have seen many fuzzy proofs. 


Nanotechnology 

We are thrilled to invite you to be part of our prestigious event.

Possible reason for the invite: They got vaccinated for COVID and the nanotech in their vaccine, put there by the other Bill G, made them invite this Bill G. 


Optics

We are thrilled to invite you to be part of our prestigious event
(Identical wording to the invite for Nanotechnology.) 

Possible reason for the invite: I wear glasses.

Systemics, Cybernetics, and Informatics

We invite you to  submit a paper and/or organize a workshop in Systemics, Cybernetic, or Informatics [ Wow! I didn't know I was a big shot in Systemics. I don't even know what that is.]

Possible reason for the invite: I use email.

Quantum

Based on your research interests we would like to invite you to consider NAME OF JOURNAL OMITTED  for the publication of your next article if accepted after peer review.

I have one article that has the word quantum in the title, see here. I am working on a slide-packet with a colleague on quantum entanglement and the CHSH game (ah--my wife had conjectured that math games are not fun games, see here, here, here; perhaps her conjecture should be extended to Physics). I've also had some blog posts about quantum computing. 

The paper is public. The slide-packet is not, though it's hard to know what's public nowadays (see here). However, I really doubt my paper or my slide-packet is why I got the invite. And I doubt my quantum computing posts indicate that I know any quantum computing (this is not self-deprecating--my posts are more about non-tech aspects, for example see here).

Might be legit: They didn't praise my work in quantum. 

Might not be legit: The quote from the email I have above is so generic that I could easily have gotten (and probably have) an email with the exact same wording asking me to submit to a journal on (say) Fluid Mechanics. Also, as noted, I don't work in quantum. 

Possible reason for the invite: Uncertain. 

Fabrication, circuits, etc. 

We are hiring tenure-track faculty in device and fabrication, circuits, architecture, and systems, and EDA areas with highly competitive salaries in the world and cutting-edge research facilities. I'd appreciates you recommending your students to us. [They then gave two links to check out, which I of course did not do.] 

Not my area (what does EDA stand for?)  but it is plausible that I have students in my classes who would be interested. Even so, I won't be passing this on. 

Possible reason for the invite: I got an A in a hardware course in 1977.

Mobile Applications

We have gone through your prior article entitled ''Finding Large Sets Without Arithmetic Progressions of Length three: An Empirical View and Survey II" [Really? You may be the first ones to read it.] We thought it was extremely well written [Really?]. It with excellent pleasure [is that really a phrase `excellent pleasure'?] that we extend an invitation In connection with your to contribute any article of  your choosing to our esteemed journal. Your involvement would undoubtedly enhance the value of our publication.

You may be thinking `Bill left out a period between `invitation' and `In'.' Actually, they left it out. 

Only the heading of the email says `Mobile Applications'. 

Later they refer to the Journal of International Journal of ... as their esteemed journal. 

Possible reason for the invite: This was sent to me by mistake. It was meant to be sent to a puzzle column where the reader is asked to spot 10 things wrong with the paragraph. 


Not Sure What The Field Is

I am the editor at [Insert Name of Journal]. We are currently seeking high-qualify manuscript for our upcoming issue, and would like to invite you to submit your work for consideration. [The word `qualify' is not a typo on my part, it's what they had. It is probably a typo on their part.]

The email actually had in it the wording [insert name of journal]. This indicates that this is a generic email where they forgot to put in the name. The only reason it might not be a scam is that they didn't call me an expert or anything of the sort. 

Possible reason for the invite: [Insert Reason]

Not Sure WTF 

Since your published paper ``Review of  `What Can Be Computed?' '' has been widely recognized,  [really?] we welcome your submission of original research on hot topics [Is Ramsey Theory on Ordered Sets a Hot Topic?] in your specialized of interested fields [does that last part make sense in English?].

Possible reason for the invite: They were impressed with a book review. Some people are easy to impress. Are you? The review is in this column here so you can judge for yourself. 



WHY ALL THE INVITES?

1) I suspect that most (all?) of them are some sort of scam. I end up paying a conference fee or page costs or something. But is that really a scam? If I know exactly what I am getting--- a line on my resume--- and I know it's a no-quality conference or journal, then it's not a scam. On the other hand, I don't think an article in Pharmacy will help my resume.

2) Perhaps it is easier to email Everyone than to try to figure out who's a Math person, who's a Nutrition person, etc.

3) The UPSIDE of emailing EVERYONE is that they cast a wide net and might get someone who they didn't think of.

4) The DOWNSIDE of emailing EVERYONE is... THERE IS NO DOWNSIDE. Email is free, and it's not as though emailing ME will make me respect the conference on optics any less -- or, more accurately, my negative opinion of these venues will not harm them.

5) There is a harm to society. I delete so many emails without looking at them carefully that I might delete something I do care about. Has this happened? Not that I know of-- but I wouldn't know.

6) I normally delete all such emails; however, I kept all those that came in November 2024, December 2024, and January 2025, and wrote this post. Then I got a few more in February and added them; then one more in March, which I also added. I thought it best to post this before I get even more. I could do a statistical study of all of these emails and write a really bad article. Maybe a Nutrition Journal will publish it. 




Tuesday, April 01, 2025

PDQ Shor (?-2025)


PDQ Shor, Peter Shor's smarter brother, passed away last week. PDQ was a Physicist/Computer Scientist/Mathematician/Astrologer/Psychic at the University of Northern South Dakota in Wakpala.

Dr. Phineas Dominic Quincy Shor III, PhD, MBA, BLT, received his education at Europa U. during one of his many alien abductions. He ended up in South Dakota after having fled every other state.

He was most famous for the concept of unnatural proofs, collected in his anthology Proofs from the Other Book, which includes his classic "interpretive dance proof" of the Pythagorean theorem. Critics complain the proof only handles the case where the right angle is on the left.

His follow-up book, Proofs from the Crypt, contains his masterpiece, a 1237-page proof that conclusively shows that the empty set contains no irrational numbers.

Like his brother, he moved into the quantum space, reverse-engineering Peter's work by giving a doubly exponential time quantum algorithm for multiplying numbers. He created the innovative dipping bird quantum error collection machine that constantly monitors a quantum machine, collapsing all entanglement. Apple bought the device for $327 million, which immediately destroyed their plans for a QiPhone.

PDQ used the proceeds to create the perpetual Turing machine, guaranteed to never halt. Until it did.

Sadly PDQ passed away from paranormal causes last week. Or was it last year? No one is quite sure. He donated his body to pseudoscience, currently lying in state in an undisclosed state. We hardly knew you.

With apologies to Peter Schickele. This April Fools post was inspired by the complexity class PDQMA.

Saturday, March 29, 2025

Surveys are done stupidly/A stupid question from a survey

I have often begun taking a survey and quit in the middle. Why?

1) It goes on too long. When I told one surveyor that he might get more people quitting for that reason, so he should make it shorter, he said, rather rudely, that he is an expert on surveys and they need to ask this many questions to calibrate things properly. I tried to engage him in an intelligent conversation about the tradeoff: the longer the survey, the better the info, but the fewer people fill it out, so what is the optimal point? He told me I was an idiot. Well... that's not going to encourage me to fill out his survey.

2) It asks questions that are too personal. 

3) It asks questions that seem irrelevant to their purpose (to be fair, perhaps I do not know the real purpose).

4) They ask a really stupid question. Here is the stupidest I've seen:

[screenshot of the survey question, not reproduced here]

Challenge: Have you ever seen a stupider question? 

As always, I ask non-rhetorically. 

Wednesday, March 26, 2025

What Happened to MOOCS?

In 2012 I wrote a blog post about the growing influence of Massively Open Online Courses, or MOOCs.

John Hennessy, president of Stanford, gave the CRA keynote address arguing that MOOCs will save universities. He pinned the untenable costs of universities on personnel costs (faculty salaries) that are making colleges unaffordable (not sure I fully agree). He argued that MOOCs will help teach courses more effectively. The hidden subtext: fewer professors and probably fewer universities, or as someone joked, we'll all be branch campuses of Stanford.

I ended the post "MOOCs may completely change higher education in America and around the world. Or they won't." A reader asked "Wondering what are you takes about MOOCS now?". Good question.

If you want a detailed answer I had chatty put together a deep research report. Here's my take, mostly from the US computing perspective. The term MOOC is rarely used anymore, but we have seen tremendous growth in online courses and degrees, particularly in Masters programs.

We've seen some major successes, most notably the Georgia Tech Online Master of Science in Computer Science program that we started in 2014. By we, I mostly mean then-dean Zvi Galil, whose tenacity made it happen. Zvi made the right moves (after some pushing): getting faculty buy-in, creating strong incentives for faculty participation, putting significant resources into course development, keeping the degree very low-cost, and most importantly insisting that we have the same if not better quality than our on-campus offerings. The program grew tremendously, reaching about 10,000 students by 2020. Georgia Tech had to add a new graduation ceremony for students who finished the degree remotely but traveled to campus for graduation.

We've seen a plethora of new programs. Most domestic students can get a good computing masters degree at a fraction of the cost of an in-person program. On-campus masters programs in computing are now almost entirely international, for on-campus programs can deliver something an online course cannot: a visa, and a chance to build a life in the United States.

These new programs vary quite a bit in quality, some truly strong, others less so. Some are outright misleading, making a deal with a university to use their name but otherwise having no connection to the school's faculty or academic departments. These programs often feature 'professional certificates' marketed under university branding but are actually developed and administered by third-party education companies.

While we learned to teach everything online during the pandemic, online degrees don't work as well for bachelor's degrees, where the on-campus experience arguably matters more than the courses, or for research-intensive PhD programs.

We are not all branch campuses of Stanford but the story isn't done. Colleges continue to have financial challenges, artificial intelligence will continue to play new roles in education, not to mention the recent actions of the Trump administration. Hopefully MOOCs won't be the only thing surviving.

Sunday, March 23, 2025

Recording lectures? Posting the Recordings? Using Slides?

The issue of whether to record lectures or post slides, or more generally how much material to give to the students, is a new question (the last 20 years?), but I do have an anecdote from 1978.

I was taking an applied math course on Mathematical Modelling from James C Frauenthal (He would sometimes write his initials \(\binom{J}{F}\)) and he passed out his notes ahead of time. I think I was the only student who read them ahead of time. One time I had read the following passage ahead of time:

We have been somewhat cavalier in our assumptions.

During class he said

What is wrong with this mathematical model? 

I replied

We have been somewhat cavalier in our assumptions.

He was somewhat surprised, but pleased that someone was reading his notes. 

FAST FORWARD TO MODERN DAY.

 

How much material should we make available for our students? I post slides and recordings. 

PRO: If a student misses class they can catch up. Especially good if missing class is rare.

PRO: If a student is in class then they can go back to some point they were confused on.  

PRO for slides: When asking a student when they began getting lost we can find the exact slide. This is much better than the word salad that students sometimes emit when describing where they are lost.

BILL: So you understood the definition of P. So you were lost when I defined NP? 

STUDENT: No, I got lost when you described some kind of really exciting algorithm.

BILL: Exciting algorithm? What did it do?

STUDENT: You said it was a paradox.

BILL: This is a class in algorithms. We have not discussed any paradoxes.

STUDENT: Did so!

BILL: We can figure out what ails thee. What did the algorithm do?

STUDENT: Something about the whole being greater than the sum of its parts.

BILL: Parts! I think you mean that we solve sub parts and then put them together. This is the Dynamic Programming paradigm. OH- I think you confused  paradigm and paradox.

STUDENT:  That's exactly what I said. Dynamic means exciting! And paradox is just another name for paradigm.

Without slides, it was often hard to see where they got lost.  

CON: Students may skip class and not go over the slides or recordings!

CON: The technology sometimes does not work.

BILL: You missed class and expect me to redo the lecture in my office. Did you watch the recording?

STUDENT: No. The recording did not work and it's your fault!

BILL: The first day of class I said you should come to class for the following reasons:

1) You can ask questions. The paradox is that's hard to do in a large class, but with so many students cutting class, it's a small class!

2)  Taking notes is a good way to solidify knowledge.

3) Going to class forces you to keep up.

4) The technology might not work. Last semester this happened four times. Twice it was my fault, and twice it was not my fault. But that does not matter- it will happen. 

5) If  you show up in my office hours and want me to explain  what I lectured on I will be annoyed.

STUDENT: Uh,... I missed the first day.

CON: In the long term students get in the habit of not going to class.  I can't tell if this is worse than it used to be. 

CON for Slides: It's hard to be spontaneous. Some of the classrooms don't even have whiteboards to go off-script with. The following happened in the pre-slide days (apologies- I've told this story before on this blog) on April 25, 2003 in my Automata Theory class. I had already done decidability and was going to do r.e. sets.

STUDENT: Do you know whose 100th birthday it is today?

BILL: Will there be cake? If so will they let me eat cake?

STUDENT: Today is Kolmogorov's 100th birthday.

BILL: AH! I was going to do r.e. sets but instead I will do Kolmogorov Complexity!

STUDENT: Great! Uh. Maybe. Is it hard? 

BILL: Its fun!

I then gave a lecture on Kolmogorov complexity on the fly, on the whiteboard. I made it part of the course, and on the final I asked them to show that if \(w\) is a K-random string of length \(n\) then any context-free grammar for \(\{w\}\) in Chomsky Normal Form requires at least \( n^{0.99} \) rules (this is not the strongest result possible). 
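
For the curious, here is a minimal sketch of one way to prove it: a Chomsky Normal Form grammar with \(m\) rules can be written down in \(O(m \log m)\) bits, since each rule names at most two of the \(m\) nonterminals or a terminal. If the grammar generates exactly \(\{w\}\), that description determines \(w\), so \(K(w) \le c\, m \log m\) for some constant \(c\). If \(w\) is K-random then \(K(w) \ge n\), so \(m \log m \ge n/c\), which forces \(m \ge n/(c' \log n) \ge n^{0.99}\) for all sufficiently large \(n\).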

Such a lecture would be impossible with slides. No more on-the-fly lectures. 

CON for slides: Some proofs are impossible to do on slides. The Poly VDW theorem and the Kruskal Tree Theorem are two of them. Fortunately both are in Ramsey Theory, which has 30 students and a whiteboard, so for those lectures I use the whiteboard. 

PRO for slides: My handwriting isn't that good, so slides help a great deal.

CAVEAT: I used to read a proof, write it out by hand, type it up in LaTeX, and then make  slides.  Now I go straight from reading it to slides. This is sometimes not a good idea as I am worrying about fonts and  formatting before I really understand the proof. I recently went BACK to the handwritten notes  THEN LaTeX THEN slides. That increased my understanding since (1) when doing the handwritten notes I was not distracted by fonts or formatting, and (2) at every iteration I picked up some subtle point I had missed. 

CAVEAT: When teaching a large class you really can't use the whiteboard since the people in the back row can't see. I don't know if that's an argument FOR slides or AGAINST large classes. 

SO- what do you do: record or not record, slides or no slides? And why? And does it work? 

 

Wednesday, March 19, 2025

A Failure to Communicate

With care you can explain major ideas and results in computational complexity to the general public, like the P v NP problem, zero-knowledge proofs, the PCP theorem and Shor's factoring algorithm, in a way that a curious non-scientist can find interesting. Quanta magazine keeps coming back to complexity because we have an inherently interesting field.

So why am I having such a difficult time with the new Ryan Williams result, that time can be simulated in nearly quadratically less memory, or more precisely DTIME(\(t(n)\)) \(\subseteq\) DSPACE(\(\sqrt{t(n)\log t(n)}\)), based on the Cook-Mertz space-efficient tree evaluation algorithm?

Many results in complexity are quite specialized and technical but this shouldn't be one of them. Ryan's result involves a major new relationship between time and memory, the two most basic resource measures, whose study goes back to the 1960s. Everybody understands time--waiting while their computer or phone is spinning. They know about memory, at least that you have to pay more for an iPhone with more of it. Yet still I'm challenged to explain this result.

Maybe because for historical reasons we use SPACE instead of MEMORY, or the pesky \(\log t(n)\) factor. Someone said they got lost in the logarithms. But we can talk about the gist of the result without mentioning space or logarithms.

It might be the \(t(n)\). We take for granted that we measure resources as a function of the problem size, but outsiders could find it a challenging concept. For P v NP I just talk about efficient computation; here you can't avoid more specific running times. 

Or maybe people don't think about time and memory. Most operations on a computer happen close to instantaneously. Nobody thinks about memory unless they run out of it. 

Or because we don't have good real-world examples. When would you prefer a space \(\sqrt{t(n)}\) time \(2^\sqrt{t(n)}\) procedure to a space \(t(n)\) time \(t(n)\) algorithm? 
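
To get a sense of the scales, a back-of-the-envelope calculation (my numbers, just for illustration): take \(t(n) = 10^9\) steps, so \(\log_2 t(n) \approx 30\) and \(\sqrt{t(n)\log t(n)} \approx \sqrt{3 \times 10^{10}} \approx 1.7 \times 10^5\). The simulation uses a couple hundred thousand memory cells instead of up to a billion, but its running time can blow up to something like \(2^{O(\sqrt{t(n)\log t(n)})}\), which is why it's hard to point to a practical win.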

AI is supposed to be good at this. OK, Claude take it away.

Ryan Williams' breakthrough paper shows that computers can solve complex problems using far less memory than previously thought possible. For 50 years, computer scientists believed that problems requiring a certain amount of time (t) needed at least t divided by a small factor of memory. Williams proved that these same problems can actually be solved using only about the square root of t memory. He achieved this by connecting these problems to the Tree Evaluation problem and leveraging a recent algorithm by Cook and Mertz. This result makes significant progress toward resolving the longstanding P versus PSPACE problem, suggesting that many algorithms could potentially run efficiently on devices with much less memory than we currently use.

Imagine a navigation app trying to find the optimal route through a complex city network. Before this discovery, engineers believed that calculating detailed routes required either substantial memory or accepting slower performance on memory-limited devices. Williams' theorem suggests these calculations could run using dramatically less memory—potentially reducing requirements from 100 MB to just 10 KB (roughly the square root). This breakthrough could enable sophisticated navigation features on devices with severe memory constraints, such as smart watches, older phones, or embedded car systems, allowing them to handle complex routing problems with multiple stops or constraints without performance degradation.

Sunday, March 16, 2025

My Post-Covid Post

 I had been meaning to write a post-COVID post for a while, but

a) Are we finally post-COVID? (I think so)

b) Are the long-term effects of COVID (society, not health) known yet?

However, Lance wrote a post-COVID post (see here) which inspired me to do the same. 

Random Thoughts on COVID

1) COVID probably helped Biden win the 2020 election. If Harris  had won in 2024 then Biden winning in 2020 would have been a bigger change. 

2) VAX-skepticism is now mainstream. This had not been a partisan issue before COVID, though there were some people against vaccines--oddly enough, I think mostly on the far left: a back-to-nature thing. And VAX-skepticism has gone beyond COVID: some states are letting people NOT get vaccinated, which has already caused a measles epidemic.

3)  I used to get more work done at school. Now I get more work done at home. COVID forced me to enter the 21st century.

4) People come into school less often. There are faculty whose tenure cases I will vote on whom I have never met. To be fair, we do have a big department, so (a general theme) COVID accelerated some trends that were already there.

5) Office buildings are less full as more people work from home. I've read that this may cause an economic crisis for people who borrowed money to build NEW office buildings. There are some plans to convert office buildings into residential housing, but that seems harder than it sounds.

6) My favorite place to have lunch, THE FOOD FACTORY closed down!

7) I used to mentor around 10 HS students a year (some of the Magnet schools in the area have a research requirement-though the students mostly ARE good and ARE NOT just there for the requirement).  It was a logistical issue to get them or their parents parking passes (also an issue of what their parents DO while I am teaching their kids Ramsey Theory). Now I do most of my mentoring on zoom. I mentored 32 in 2024 (in groups- so it was not 32 projects).

8) I can now hold extra office hours at night on zoom. 

9) Before COVID I was slowly switching from whiteboard to slides since I was recording lectures and my handwriting is not very good. Now MY ENTIRE COURSE is on slides. Clyde Kruskal points out:

 If your entire course is on slides then either  your slides are too dense or your course is too shallow.

He may have a point there. However, in a small class I sometimes DO go to the whiteboard. I did it this semester in my Ramsey Theory course when I taught the Kruskal Tree Theorem (the set of trees under the minor ordering is a well quasi-order--by Joe Kruskal, Clyde's uncle).

10) This is a bigger issue- is technology driving what topics we cover? 

11) COVID --> classes recorded and slides available --> student attendance is down. Is this bad? Depends. If the students who don't show up actually keep up, it's fine. If they hunt and peck through the slides so they can do the HW, that's bad. COVID might not have caused this problem, but it accelerated it. The question of Post/Record or Not is an issue for a later blog. Personally, I post and record.

12) School children had to learn at home, which was probably bad for their future education.

13) Chem labs and Physics labs---do we have a class of chemists who did less lab work?

14) Some couples had to spend more time with each other than usual. This could be good or bad (for me it was good).

15) Some scenes on the TV show Monk (about an OCD Detective) now seem normal- like wiping off doors for germs.

16) Wearing masks in public is not considered weird.  It has gone back to being unusual, but it has not gone back to being weird. I know someone who found that by wearing one he does not get ordinary colds so he keeps wearing it.

17) By around May of 2020 there were 100 or more novelty songs about COVID. I compiled a website of what I considered the best ones. It's part of my website of novelty songs, here. The three best IMHO are here, here, here. OH- while getting those linked I found another awesome one: here.

18) Some of the working-at-home or meetings-on-zoom was because of COVID. And some is technology (zoom). But some is sociological. Here is an example:

DARLING (on a Sunday in 2018): Bill, my back hurts and I don't think I should drive today, but I want to go to church. So... what  can we do?

BILL: Uh-OH, I think our church streams its service.

DARLING: Well pierce my ears and call me drafty! You're right! I remember that now. Great! You are my brilliant Bill!

BILL: And you are my darling Darling!

(We watched the service online and it was fine.)

Suffice it to say, thinking of going to church online would not take a brilliant Bill now. 

19) There is a downside: with meetings online, church online, and classes online, one can get more distracted.

20) Faculty meetings are hybrid and I usually go on zoom. The Math dept has said that you HAVE TO GO in person in order to vote. They are NOT being Luddites- they see the value of in-person meetings. I do not know who is right.

If the meeting is on zoom more people are at the meeting.

If the meeting is in person then fewer people come, but they are paying more attention. Or are they? People can be in person and still tune out, see here.

In the past someone could say I'll be out of town so I can't go to that meeting. That may be less of an excuse in the future. Maybe even now.

21) One of my wife's relatives died of COVID (before vaccines were available) and one of my friends lost his sense of smell because of COVID (also before vaccines).

22) Some TV shows incorporated COVID into their story lines. For some shows the order in which episodes are shot differs from the order in which they are shown, so you could have one episode with people wearing masks and COVID in the background, and the next week nothing about COVID.  

23) I managed to still run my REU program - virtually- in summer 2020 and summer 2021. The research was as good as normal, and I could admit more students since I was not paying for housing, but the students had a much worse time because of the lack of social activities-- we did have some online but it's really not the same. (As for my REU program in Summer 2025-- there are other issues that I will discuss in a later blog post.) 

24) I used to see Lance about once a year when he came to Washington DC either on chairman-business or Dean-business, or NSF-business. I have not seen him in person since... hmm, I do not know. Might be since before COVID. I do see him on zoom once in a while. And whenever a theorist dies he gives me a call to discuss the blog-obit.

25) I am a homebody- I can stay at home for many days in a row. I watch TV, go on treadmill,  and watch TV  while on treadmill. I also  surf the web, read papers, think brilliant thoughts, and  make up slides. Other people feel a  need to GET OUTSIDE THE HOUSE. 

 26) My book club and my game night have both moved online and have not resumed being in person.

book club: Two of the people in it moved to Georgia, so we thought we would not see them anymore. But then COVID hit and it's just so much easier for them and everyone else to have book club on zoom. This works pretty well. 

game night: One person is still COVID-shy (this may be reasonable in her case) and hence does not want to go to gatherings. And during COVID two people from OUT OF STATE joined the game night. So now it is always online. This does LIMIT which games we can play, and some games are not as good online. 

27) Since Darling and I stayed at home so much we got out of the habit of putting our wedding rings on before leaving the house. We still have not gotten back in the habit. This may be the least important long-term effect of COVID.

28) (ADDED LATER, INSPIRED BY A COMMENT) One of the comments asked whether (though assumed that) I am back to living a normal life. That's mostly true, except for the following:

I am VERY CAREFUL not to injure myself (e.g., no more jogging outside, where a crack in the sidewalk could make you break a bone) because of wait times in hospitals during COVID- but it seemed like a good idea even post-COVID (if we are indeed post-COVID- the commenter challenges that). 

I do mask when I go shopping.

I test if I have symptoms (I had a mild case once.)

I get the flu vaccine- I didn't use to- but I got it since I didn't want to get the flu and THINK it was COVID.

Some of my friends and relatives don't eat in restaurants anymore, or insist I test before coming over, or... and I HAPPILY accommodate them.

The COMMENT is very good and I recommend everyone read it.