Wednesday, April 24, 2024

Is Persistence an Anachronism?

Guest post by Martin Bullinger

Very recently, Vijay Vazirani's paper A Theory of Alternating Paths and Blossoms, from the Perspective of Minimum Length got accepted to Mathematics of Operations Research. For the first time, it gives a complete and correct proof that the Micali-Vazirani algorithm finds a maximum cardinality matching in time \(\mathcal O\left(m\sqrt{n}\right)\). I would like to give an account of the extraordinary story of this proof and how Vazirani's contribution inspires persistence.

My fascination with matching started during my undergrad, when I gave a talk on Edmonds' blossom algorithm. It was at this time that I first heard about the Micali-Vazirani (MV) algorithm. Naturally, I was quite excited when I got to know Vazirani personally years later. When I talked to him about the MV algorithm I was, however, shocked: Vazirani admitted that even to that day, there did not exist a complete proof of its correctness. How can a theoretical result be accepted to FOCS without a proof?

Now, 44 years after publication of the algorithm, a proof exists and has been peer-reviewed in great depth. But why did it take so long? Apparently, some results just need time. Sometimes a lot of time. Think of Fermat's Last Theorem, whose proof took 358 years! So what is the story behind the MV algorithm? It can without a doubt be seen as a lifework. Together with his fellow PhD student Silvio Micali, Vazirani discovered it in the first year of his PhD in 1979-80. It was published in the proceedings of FOCS 1980 without even an attempted proof. The first proof attempt by Vazirani was published in 1994 in Combinatorica. Unfortunately, this proof turned out to be flawed. It took another 30 years until his current paper.

What kept Vazirani going for so long? In the acknowledgements of his paper, he thanks matching theory for its gloriously elegant structure. Vazirani was driven by his passion for the subject matter---but passion by itself can only go so far. Even more important was his belief in the correctness of the algorithm and the theory, which he had broadly outlined in his 1994 paper. Similar to Andrew Wiles' story, his perseverance led him to the idea which clinched the proof. In Vazirani's case, this was to use the new algorithmic idea of double depth-first search, which forms the core of the MV algorithm, and now, its proof as well. But Vazirani's result is also the story of an excellent research environment. Finding deep results requires colleagues or friends to discuss ideas with. Vazirani had these in the form of strong postdocs and PhD students. About ten years ago, he had been discussing ideas towards his proof with his former postdoc Ruta Mehta, and in the last three years, he discussed the final touches of his proof with his current PhD student Rohith Gangam. Needless to say, both of them gained a lot from these discussions.

So why should we care about the MV algorithm? I have several reasons. First, without doubt, it is a historic result within combinatorial optimization. Matching is one of the most fundamental objects in discrete mathematics and we keep finding new applications for it, for example, in health, labor markets, and modern day matching markets on the Internet, basically in every part of our lives. But there is more. Once again, one can look at Vazirani's paper where he describes the impact of matching on the development of the theory of algorithms: Matching theory has led to foundational concepts like the definition of the complexity classes \(\mathcal P\) (Edmonds, 1965a) and \(\# \mathcal P\) (Valiant, 1979), the primal-dual paradigm (Kuhn, 1955), and polyhedral combinatorics (Edmonds, 1965b). The impact of matching on complexity theory was an earlier topic of this blog.

Despite being around for decades, the MV algorithm is still the fastest known algorithm for computing a maximum cardinality matching. This is surprising, to put it mildly. Similar to many other fundamental problems in combinatorial optimization, I would have expected the discovery of better algorithms in the last four decades. Why has this not happened? Vazirani appears to have gotten to the essence of the problem: a profound theory that interleaves algorithmic invariants and graph-theoretic concepts. It seems to be the kind of theory which would play an active role in the field of combinatorial optimization.

However, Vazirani's result proves something else, possibly even more important: the massive gains to be made by single-minded persistence. In a world in which departments and promotion procedures focus on publishing large numbers of papers, it seems impossible to work on one result for more than a year, let alone for decades. Vazirani managed to achieve both: pursue his passion and get the unfinished job done, but not let it come in the way of the rest of his otherwise-active research career. As a young researcher, this inspires me! In the end, it is through such persistence that science will take big steps forward.

This blog post evolved from many enjoyable discussions, which I had with Vijay Vazirani during a research stay at UC Irvine in spring 2024. I am grateful to Ruta Mehta for feedback on the initial version of this post. Vazirani recently presented his paper in a mini series of two talks available online.

Sunday, April 21, 2024

Intelligent Comments on Bill's G.H. Hardy/Avi W post that we did not post.

I posted (see here) about Avi Wigderson being a counterexample to two of G.H. Hardy's opinions:

1) Hardy thought Math was a young man's game. I got some good comments on this. Some agreed and some disagreed.

2) Hardy thought applied math is dull. I got no comments on this one. I assume everyone agreed with my assessment that Hardy was wrong about this.

AND I got the following comment: 

Avi Wigderson's brilliance shatters the false assumptions of  G.H. Hardy, proving that intelligence knows no limits. His groundbreaking ideas challenge the status quo and inspire a new generation of thinkers.

I agree! I was going to post it! Then I realized that 
a) There were two links below it. A bad sign

b) While Avi is a brilliant thinker, is he known for challenging the status quo? The NW result pointed to P=BPP which was not thought to be true at the time. (Michael Sipser thought P=BPP, see here, but he was an outlier. If my memory is wrong then leave a polite comment.) Aside from that result I don't think Avi's work challenges the status quo. If I am WRONG leave a polite comment correcting my error.

c) As a side note- I think of challenging the status quo as being over-rated. Wiles' proof of FLT did not challenge the status quo. Many great results do not challenge the status quo. If Avi shows P NE NP that would be awesome even though it is the prevailing opinion.

d) Upon reading the comment carefully,  it looks like it was AI generated. 

e) It fooled me for about 5 minutes. It didn't fool Lance for a second. 

3) But this comment inspired Lance to use ChatGPT to generate comments to my post: one agreeing, and one disagreeing

BEGIN COMMENT THAT  AGREED
Avi Wigderson’s recent Turing Award is a compelling moment to reflect on the fluid nature of mathematical careers and the erroneous stereotypes surrounding them. G.H. Hardy’s assertion in *A Mathematician’s Apology* that mathematics is a young man's game is eloquently countered by Avi’s illustrious career and the achievements of others well beyond the supposed prime age. This challenges Hardy's notion and suggests it warrants reevaluation as a mere conjecture rather than a stated fact.

Moreover, Hardy’s dismissal of applied mathematics overlooks the dynamic interplay between various disciplines. Even in his era, the boundaries between pure and applied math, along with physics and computer science, were permeable and productive. Avi Wigderson’s work, though not strictly applied math, beautifully illustrates how applied considerations can drive significant theoretical advancements.

In this light, the recognition of Wigderson’s contributions is not just a celebration of his individual genius but also a testament to the evolving and interconnected landscape of mathematics, which continues to defy the narrow confines set by earlier academic opinions.
END OF COMMENT

This is an intelligent comment; however, it mostly reflects back what I said. Indeed, it is a more eloquent exposition of what I said. Fortunately I can't be fired from blogging and replaced with an AI since I don't get paid anyway. 

BEGIN COMMENT THAT DISAGREED
While Avi Wigderson's Turing Award certainly highlights his exceptional contributions to theoretical computer science, invoking this as a refutation of G.H. Hardy's views on mathematics as a young man's game might be an oversimplification. Hardy's assertion, though controversial, was not without merit when considering the context of his times and the examples he witnessed firsthand. It's important to remember that exceptions like Wigderson do not necessarily disprove the rule but rather highlight remarkable outliers.

Furthermore, Hardy's critique of applied mathematics as being dull must be understood in the philosophical context of his personal commitment to pure mathematics. While we may not agree with his perspective, dismissing it entirely fails to appreciate the depth of passion that fueled his work and the work of many pure mathematicians. The interplay between disciplines enriches mathematics, indeed, but Hardy’s emphasis on the beauty of pure theory has inspired generations and continues to hold significant value in the mathematical community. 
END OF COMMENT
 
This does not seem to be just reflecting back what I wrote. 
 
The argument that Avi is an outlier is reasonable. 
 
Here is my counterargument to this defense of Hardy's negative opinion of applied math: just because the notion that applied math is dull fueled passion does not make it correct. And one must also see what harm the notion caused.
 
 If I debated a human on this point they might change their mind, or give better counterarguments. Can ChatGPT change its mind? Does that question even make sense? I suspect that ChatGPT could give better or at least different counterarguments.







Thursday, April 18, 2024

Favorite Theorems: Quantum Provers

For our next favorite theorem, we look at the surprising power of provers who share entangled bits. If you can prove something to a computable verifier with unbounded running time, then two entangled provers can convince a polynomial-time verifier.

MIP* = RE
Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright and Henry Yuen

A quick tour:

  • A powerful prover convincing a skeptical computable deterministic verifier is another way of capturing computably enumerable (traditionally called recursively enumerable or RE). You can convince a verifier that a program halts by giving the halting time, and the verifier can simulate the machine for that many steps (see the sketch after this list).
  • A powerful prover convincing a skeptical polytime deterministic verifier is another way of capturing the class NP, like giving a 3-coloring of a graph that can be easily checked.
  • If you allow the verifier to ask random questions, you can convince the verifier with high confidence that a graph is not 3-colorable, or more generally any PSPACE problem.
  • If you add two separated provers that a skeptical probabilistic verifier can play off each other, the provers can convince the verifier that any problem in NEXP, non-deterministic exponential time.
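To make the first bullet concrete, here is a toy sketch of that verifier (my own illustration, with the "program" modeled as a Python generator that yields once per simulated step; nothing here is from the paper):

```python
# Toy verifier for the RE bullet above: the prover's "proof" that a program
# halts is just a claimed number of steps; the verifier simulates and checks.
def halting_verifier(program, claimed_steps):
    steps = 0
    for _ in program():          # one loop iteration per simulated step
        steps += 1
        if steps > claimed_steps:
            return False         # the claimed bound was wrong
    return True                  # the program halted within the claimed bound

def countdown():                 # example "program": halts after 10 steps
    n = 10
    while n > 0:
        n -= 1
        yield

print(halting_verifier(countdown, 10))   # True
print(halting_verifier(countdown, 5))    # False
```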
One of many quantum variations of interactive proofs, MIP* has two provers that cannot communicate but have entangled quantum bits. This change could go either way: 
  • The provers could coordinate their answers better and so they wouldn't be able to convince the verifier of all the languages in NEXP
  • The verifier could ask more complex questions to the provers which they could answer using the entanglement, allowing the provers to convince the verifier for even more complex languages.
Turns out it's the latter, in a very strong way.

Ito and Vidick showed that you can create a protocol that prevents the provers from coordinating better, recovering all problems in NEXP. Natarajan and Wright showed you can ask more questions, showing that provers with entangled bits can convince a verifier of everything in NEEXP, non-deterministic double exponential time (\(2^{2^{n^c}}\)), already a proof too large for the verifier to even point into. The MIP* = RE paper takes that all the way to the computably enumerable sets, all the languages you would get with a classical prover convincing a deterministic verifier unrestricted by time.

Monday, April 15, 2024

Avi Wigderson is a counterexample to TWO stupid thoughts of G.H. Hardy

 Recently

1) Avi Wigderson won the Turing Award (See blog posts by Fortnow-here, Scott-here, Lipton-Regan here, and the ACM announcement here). The last time I could find when Fortnow-Gasarch, Scott, Lipton-Regan all blogged on the same topic was when Goldwasser-Micali won the Turing Award- see the blog entries (here, here, here). We rarely coordinate. For that matter, even Fortnow and Gasarch rarely coordinate.

2) My joint book review of G.H. Hardy's  A Mathematician's Apology (1940) and L.N. Trefethen's An Applied Mathematician's Apology appeared in SIGACT News. 

These two events would seem unrelated. However, I criticize two points in Hardy's book; and those two points relate to Avi.  The book review is here

POINT ONE: Hardy says that Mathematics is a young man's game and that if you are over 40 then you are over the hill. He gives some fair examples (Gauss, Newton) and some unfair ones (Galois and Ramanujan, who died before they were 40). Rather than STATE this fact he should have made it a CONJECTURE to be studied. I would make it two conjectures: 

Was it true for the math that Hardy would know about? Since most people died younger in those days, there might be too small a sample size. Euler and Leibniz might be counterexamples.

Is it true now? AVI is clearly a counterexample. Other modern counterexamples: Michael Rabin, Leslie  Valiant, Roger Apery (proved zeta(3) irrational at the age of 62), Yitang Zhang (bounded gaps between primes at age 58, which, alas, is not a prime-- would have been really cool if it was a twin prime), Louis de Branges (proof of the Bieberbach conjecture at 51), Andre Weil, Jean-Pierre Serre. Is that enough people to disprove Hardy's conjecture? 

Despite the counterexamples I provided, we have all seen some mathematicians stop producing after a time. I offer two reasons for this

a) (Andrew Gleason told me this one) A mathematician works in a field, and the field dries up. Changing fields is hard since math has so much prereq knowledge. CS has less of that problem. One can check whether, for the counterexamples above and others, the fields they were in didn't dry up. 

b) The Peter Principle: Abosla is a great researcher, so let's make her department chair!

My conjecture: The notion that math is a young man's game is false. 

POINT TWO: Applied Math is dull. Trefethen's book makes a good counterargument to this. I will say something else.

Even in Hardy's time he would have seen (if his head was not so far up his ass) that math, applied math, physics, computer science and perhaps other areas interact with each other. It is common to say that things done in pure math get applied. However, there are also cases where pure math uses a theorem from applied math. Or where Physics MOTIVATES a topic in pure or applied math. The boundaries are rather thin and none of these areas has the intellectual or moral high ground. There is the matter of personal taste, and if G.H. Hardy prefers pure math, that's fine for him. But he should not mistake his tastes for anything global. And as is well known, he thought pure math like number theory would never apply to the real world. He was wrong about that of course. But also notice that Cryptography motivated work in number theory. I am not sure if I would call AVI's work applied math, but it was certainly motivated by applied considerations.

Wednesday, April 10, 2024

Avi wins the Turing Award

The ACM announced that Avi Wigderson, a force in computational complexity and beyond, will receive the 2023 A. M. Turing Award (Quanta article). He is the first primarily complexity theorist to win the award since Andy Yao in 2000. Avi can add this to his Abel, Nevanlinna and Knuth prizes. Avi has served on the faculty at the Institute for Advanced Study since 1999 after many years at Hebrew University. He's hosted many postdocs and visiting faculty and students who've gone on to great careers of their own.

Avi is my first co-author to win the Turing award. Our paper was just one link in a chain of papers, from Nisan-Wigderson to Impagliazzo-Wigderson showing how circuit bounds yield derandomization. Philosophically these results tell us randomness is just computation we cannot predict.

But Avi did so much more. NP has zero-knowledge proofs. Zig-Zag expanders that led to Reingold's SL = L. Monotone circuit lower-bounds using communication complexity. Upper and lower bounds for matching. Optimal extractors. And that's just the tip of the iceberg. 

Notably none of these papers are solely authored or even have much overlap in their author lists. Avi shared his wisdom with many, nearly 200 distinct co-authors according to DBLP. 

Beyond his research, Avi has been a great advocate for our field, advocating to the NSF as a founding member of the SIGACT Committee for the Advancement of Theoretical Computer Science, and on the Simons Foundation scientific board which led to Simons Fellows and the Simons Institute. Avi wrote a book placing computational complexity as a great mathematical discipline that just also happens to have great applications in so many different fields.

Congrats to Avi, an incredible individual, on this capstone to an incredible career. 

Tuesday, April 09, 2024

Rance Cleaveland passed away on March 27, 2024. He will be missed

 
My friend and colleague Rance Cleaveland passed away on March 27, 2024 at the age of 62.  He was a professor at The University of Maryland at College Park in the Computer Science Department. He worked in Software Engineering. He did program verification and model checking. He had his own company, so he did both theoretical and practical work.

He joined UMCP in 2005. I had known him and some of his work before then so we got together for lunch about once a month to talk about the department (he was new to the dept., so I filled him in on things) and about computer science. He would probably be considered a theorist in Europe, though he was considered a Software Engineer in America.

The department's announcement is here

Below are:

 
1) A note from Rance's Grad student Peter Fontana, who got his PhD in 2014.

2) An email that Jeroen Keiren sent to the concurrency mailing list.

3) A picture of Peter, Jeroen, and Rance in that order left to right. 

-------------------------------------------

Peter Fontana's Note:

PERSONAL:

I’m truly shocked and saddened to hear that Rance Cleaveland passed away. Rance advised me (as a Ph.D. student of his at UMCP) in model checking (also called formal verification).

Rance was an extremely kind advisor and extremely skilled in leadership and communication. He also had all of the swiftness, communication, and people understanding of a skilled manager. He always encouraged us to find the simplest example possible to illustrate a nuanced corner-case of a property we wanted to prove or disprove. The simplicity made complicated things clearer. He was also an extremely clear communicator and extremely skilled with people. Rance was always patient and kind, eager to guide rather than to chastise.  I will truly miss him.

COMPUTER SCIENCE

Model checking involves the following:

(1) abstracting a program as a state machine (automaton) with labels,

(2) writing a desirable property (such as “bad event X will never happen”) as a formula in a logic,

(3) using a computer to automatically show (over all possible cases) that the specified property is true or false.  This is model checking.

Theorists will note that this process of model checking asks whether a state q of a machine satisfies a property \(\phi\), which is the model-checking problem. In the world of boolean formulas and propositions, the NP-complete satisfiability problem (SAT) asks: does there exist a boolean assignment that satisfies formula \(\phi\)? The analogous model-checking problem is: given a boolean assignment q (each proposition is either T or F), does q satisfy \(\phi\)? For boolean assignments, the model-checking problem is in P.
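For concreteness, here is a tiny sketch of the boolean case (an illustrative sketch, not from Peter's note; the formula and assignment are made up):

```python
# Boolean model checking versus SAT: checking one GIVEN assignment is easy
# (polynomial time); SAT asks whether SOME assignment works (a search).
from itertools import product

def phi(a):
    """Example formula: (x or y) and (not x or z)."""
    return (a['x'] or a['y']) and ((not a['x']) or a['z'])

# Model checking: does this particular assignment q satisfy phi?
q = {'x': True, 'y': False, 'z': True}
print(phi(q))                                              # True

# SAT: does any assignment satisfy phi?  Brute force over all 2^3 assignments.
print(any(phi(dict(zip('xyz', bits)))
          for bits in product([False, True], repeat=3)))   # True
```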

While Rance worked in a variety of areas related to formal verification that spanned process algebras, different logics, different automata types, and cyber-physical systems, with me we improved the art of timed automata model checking using a timed modal-mu calculus (timed logic). Timed automata and timed logics extend state machines by introducing a finite number of clocks. These clocks all advance (like time advancing) and can reset, and timed logics now have timing constraints (“within T time” is the most common constraint). We worked on extending the state of the art of what we could model-check on timed automata, both with theoretical proofs and by implementing a model checker (in C++) to model-check these richer timed properties. It turns out that this model checking work is decidable (model checking the simplest formulas in timed automata was previously shown to be PSPACE HARD).  I inherited work started by others, enhanced it, and passed it on; that work is currently being enhanced by others today. Our approach was novel in that we used a proof system of predicates and were able to model check more expressive formulas on timed automata with this enhanced system (See this paper). For details see my PhD thesis here.

----------------------------------------------------------------------------------------------

Jeroen Keiren's message to the concurrency mailing list

Dear colleagues,

It is with great sadness that I share the news of the sudden and unexpected passing of
our colleague Rance Cleaveland on 27 March 2024.

My thoughts are with his family, friends and loved ones during this sad time.

I am convinced that those of us that interacted with Rance throughout his career will remember him as a friendly person. He was always happy to discuss research, but also dwell on more personal topics or give his honest, yet in my experience always constructive, advice.

Rance obtained his PhD at Cornell, and held academic positions at Sussex, NC State and Stony Brook, before accepting his current position at the University of Maryland, College Park in 2005. UMD shared an obituary for Rance here.

Rance was not only interested in the theoretical foundations of our field, witnessed by over 150 scientific publications. He also had a strong focus on building the tools (for instance the Concurrency Workbench), and commercializing it (through his company Reactis).

Rance was also an active member of our community. He was one of the co-founders of TACAS, for which he was still serving on the steering committee, as well as one of the co-founders of the Springer journal Software Tools for Technology Transfer.

In the words of his wife “For Rance, the concurrency community was truly his intellectual home and he appreciated working with colleagues in Europe and the US - and around the world.”

I’m sorry to bring this news.

With kind regards,

Jeroen Keiren
Assistant professor, Formal System Analysis group
Eindhoven University of Technology, The Netherlands

------------------------------------

A Picture of Peter, Jeroen, and Rance, left to right:





Wednesday, April 03, 2024

Answer to Question. MetaQuestion remains unsolved

 In a prior post I asked the following question:


find x,y,z positive natural numbers such that the following is true:


$$ \frac{x}{y+z} + \frac{y}{x+z} + \frac{z}{x+y} = 4. $$

I first saw the question in a more fun way:



I did not put the answer in the post (should I have? That was the meta question.)

The question has an infinite number of (x,y,z) that work, so I'll just give the least one:



x= 154476802108746166441951315019919837485664325669565431700026634898253202035277999


y = 36875131794129999827197811565225474825492979968971970996283137471637224634055579


z = 4373612677928697257861252602371390152816537558161613618621437993378423467772036
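
If you want to check the claim yourself, here is a quick sanity check using exact rational arithmetic (a sketch; the values are the ones listed above):

```python
# Verify x/(y+z) + y/(x+z) + z/(x+y) = 4 exactly, using Python's Fraction.
from fractions import Fraction

x = 154476802108746166441951315019919837485664325669565431700026634898253202035277999
y = 36875131794129999827197811565225474825492979968971970996283137471637224634055579
z = 4373612677928697257861252602371390152816537558161613618621437993378423467772036

total = Fraction(x, y + z) + Fraction(y, x + z) + Fraction(z, x + y)
print(total == 4)   # should print True if the claim holds
```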


1) For details on how you could have found the answer see here. Or watch a YouTube video on it here.

2) Did I really expect my readers to get this one? Note that I posted it on April Fools Day, though it is a legit problem with a legit answer.

3) The image that says that 95% of all people couldn't solve it---I wonder what their sample size was and where it was drawn from. I suspect that among mathematicians 99% or more can't solve it. 

4)  Comments on the comments I got:

a) Austin Buchanan says that Wolfram Alpha says NO SOLUTION. I wonder if Wolfram Alpha cannot handle numbers of this size.

b) Anonymous right after Austin had a comment that I MISREAD as saying that they found it using a python program. I asked that person to email me, and it turns out that NO-- they recalled where to look (on the web I assume).

c) Several commenters solved it by looking at the web. Math Overflow and Quora had solutions. So did other places. This may make the meta question (should a blogger post the solution?) a moot point for a well-known problem. If you get a problem off the web it's quite likely it's well known, or at least well enough known, to have the answer also on the web. If you make up a problem yourself then it's harder to tell.

5) I think it's a very hard problem to solve unless you have the prior KNOWLEDGE to solve it, so it would not be a good math competition problem. 

6) The cute pictures of fruit in the presentation of the problem make it LOOK like it's a cute problem. It's not. 

7) Only one comment on the meta question about whether a blogger should post the solution at the same time as the problem. (There were more comments about the unimportant question of whether 0 is a natural number.) The one comment says that a blogger SHOULD NOT - let the reader enjoy/agonize for a while. I agree.

8) Determining if a given math problem is interesting is a hard problem; however, that will be a topic for another blog. (Tip for young bloggers, if there are any (blogs are so 2010):  If you do ONE idea per blog then your blog can last longer.)





Monday, April 01, 2024

A Math Question and a Meta Question

 1) Question: find x,y,z natural numbers such that the following is true:


$$ \frac{x}{y+z} + \frac{y}{x+z} + \frac{z}{x+y} = 4. $$

I was first presented the problem in a more fun way:


(NOTE- a commenter pointed out that ``Natural Numbers'' and `Positive Whole Values' are different since some people (and I AM one of them) include 0 as a natural. SO- to clarify, I want x and y and z to be naturals that are \(\ge 1\). )

2) Meta Question: When a blogger poses a question like that should they also post the answer? Have a pointer to the answer? Not have an answer? Pros and Cons:

a) If you do not list the answer at all (or post it in a few days) then people will not be tempted to look at the answer. They have to really work on it. (Maybe they can find the answer on the web). 

b) There are people whose curiosity far exceeds their ego. So they want to work on it for (say) 5 minutes and then LOOK AT THE ANSWER! I am one of those people. 

c) When you first look at a problem and work on it you are curious. If you have to wait a few days to get the answer then you may lose interest. 

I invite you to work on both the question and the meta question.  I will not be blocking people who post the answer in the comments, so if you want to work on it you might not want to look at the answers.

I will post the answer in a few days. 



Wednesday, March 27, 2024

The Softening of Computing Jobs: Blip or Trend?

Tech companies are performing exceptionally well, driving the S&P 500 to new heights with their soaring stock prices. However, the tech sector, apart from AI, expects a job decline to persist throughout the year, accompanied by a tougher hiring process. The situation becomes even more challenging for foreign students requiring sponsorship to secure a job after college, or older tech workers without the latest skills.

Despite these challenges, tech jobs remain more abundant than in most other fields, although the era of immediate employment with nearly six-figure salaries straight out of college with any CS degree has ended.

In discussions with industry leaders, I encounter varied perspectives on whether these changes represent a temporary fluctuation or a long-term trend. Let's explore some of the reasons for this.

The Elon Effect

Since Elon Musk took over Twitter in the fall of 2022, he cut about 80% of the employees. The platform had some hiccups but didn't fall apart. You might not like what the site now called X became, but technically it still functions pretty well. Many other companies started looking at their workforce and started thinking about whether they needed all the software developers they'd hired.

Over Supply

We've seen tremendous layoffs among the larger tech companies, paring down from over hiring during the pandemic, and massive growth of computer science programs at universities across the country and world. We just have too many job searchers going after too few tech jobs.

Uncertainty

Companies hold back hiring in the face of uncertainty. Uncertainty in elections in the US and abroad, international relations particularly with China, regulatory and legal landscapes, wars, interest rates, and the economy. 

Artificial Intelligence

Almost everyone I talk to thinks (wrongly) that their careers will not dramatically change with AI, except for programmers, where we already see significant productivity improvements with ML tools like Github co-pilot. So many companies are re-assessing how many software developers they need. AI also adds to the uncertainty as the tools continue to improve, but how much and how fast remain difficult to predict. 

Blip or Trend?

The supply will balance itself out in the future, though possibly through a drop in the number of CS majors. The world will get more certain, hopefully in a good way. But how AI will affect the tech (and every other) job will take years if not decades to play out.

So what advice for those looking for tech jobs: build skills, get certificates, perhaps a Masters to wait out the job market, learn AI both as a skill in demand but also to make yourself more productive. Be aggressive (but not too aggressive), network, enter competitions, build a portfolio. The jobs are not as plentiful but they are out there. 

Monday, March 25, 2024

I know what A-B-C-D-F mean but what about V? X? HP?

 I am looking at LOTS of transcripts of students who applied for my program REU-CAAR so I sometimes come across grades that I don't understand. The transcripts do not have a guide to them, and I have been unable to find the meanings online.

Normal grades of A,B,C,D,F possibly with + or - I DO understand, as do you, though standards differ from school to school.

UMCP also has 

P for Pass in a course the student chose to take Pass-Fail

W for withdrawing from a course

WW which will be on all courses in a semester- so the student dropped out that semester 

XF means failed because you cheated. I suspect people outside of UMCP would not know that, though the `F' part looks bad. 

NG seems to mean they placed out of the course somehow. Might stand for No Grade. 

I've seen

V at Georgia Tech. Lance told me that means the student audited the course. 

Q at University of Texas at Austin. I do not know what that means. 

X at Depauw. I do not know what that means.

HP at Harvey Mudd. I do not know what that means.

WV at Harvey Mudd. I suspect some kind of withdrawal but I don't know. 

SX at Cornell. I do not know what that means.

E is used at some schools for FAIL and at other schools for EXCELLENT

Some schools in India use O for Outstanding- higher than an A. 

Fortunately it is rare that I NEED to know a grade that has a letter I don't understand the meaning of. However, another problem is names of courses. 

Analysis could mean Calculus with or without proofs, one or many variables. 

Discrete Math could mean an easy course on how to prove simple things or a hard course in combinatorics. Often I can tell if it's a Freshman, Sophomore, Junior, or Senior course so that may help tell what it is. 

Foundations could mean... a lot of things. 

So what can be done? The only thing I can think of is to have schools include a legend on their transcripts that tells what each grade means. Why hasn't this already been done? Speculation:

a) Harder than it seems to do.

b) Not really an important problem (this blog is the only time I've ever seen it mentioned)

There may be some tradeoff between how easy something is to do and how important a problem is to solve in order to take action. And this problem does not reach that threshold.

This would seem to be a problem for admissions to grad school as well, yet I have not heard of people complaining about it there either.



Wednesday, March 20, 2024

Can you feel the machine?

In the recent academy award winning movie Oppenheimer, Niels Bohr tests a young Oppenheimer.
Bohr: Algebra's like sheet music, the important thing isn't can you read music, it's can you hear it. Can you hear the music, Robert?
Oppenheimer: Yes, I can.

I can't hear the algebra but I feel the machine. 

A Turing machine has a formal mathematical definition but that's not how I think of it. When I write code, or prove a theorem involving computation, I feel the machine processing step by step. I don't really visualize or hear the machine, but it sings to me, I feel it humming along, updating variables, looping, branching, searching until it arrives at its final destination and gives us an answer. I know computers don't physically work exactly this way, but that doesn't stop my metaphorical machine from its manipulations.

I have felt this way since I first started programming in Basic in the 1970's, or maybe even before that, on calculators or using the phone or sending a letter. I could feel the process moving along.

So when I took my first CS theory class, I could feel those finite automata moving along states and the PDAs pushing and popping. I even felt the languages pumping. After graduate complexity, I knew I found my calling.

And that's why I prefer Kolmogorov Complexity arguments to information theory. Information doesn't sing to me but I can see the compression.

As computational complexity evolved, focusing more on combinatorics, algebra and analysis, and often on models that have little to do with computation, I sometimes find myself lost amongst the static math. But give me a Turing machine and I can compute anything!

Sunday, March 17, 2024

Grad Student Visit Day: That was then, this is now.

(Harry Lewis helped me with this post.)

March 15 was UMCP Computer Science Grad Student Visit Day. I suspect many of my readers are at schools that had their Grad Student Visit Day recently, or will have it soon. 

In 1980 I got into Harvard Grad School in Applied Sciences. I went there in the Spring to talk to some people and take a look at the campus. I paid my own way- it  did not dawn on me to ask them to reimburse me and I doubt they had the mechanism to do so. 

I know a student who got into two grad schools in 1992 and contacted them about a visit. Both set up the visit and reimbursed his travel, and the two trips helped him decide which school to go to. His criteria: The grad students looked happy at one and sad at the other. He went to the school where the grad students looked happy. Was it the right choice? He got his PhD so... it was not the wrong choice.

That was then. 

In 1980 no  schools that I know of had anything called Grad Student Visit Day.  In 1992 I suspect some did but it was a mixed bag. 

Now all schools that  I know of (including Harvard)  have a day in the spring where prospective grad students in CS are invited to come to campus. There are talks about grad school at that school, a very nice lunch, and 1-on-1 (or perhaps finite-to-1) meetings of grad students with faculty. Students are reimbursed for travel and lodging (within reason). There are variants on all of this, but that's the basic structure. The idea is to convince grad students to go to that school. It also serves the purpose of helping grad students who are already coming to get a sense of the place and a free lunch. It costs money: reimbursing students for travel and food for the students on the day itself. 

Random thoughts.

1) In 1980 no grad schools in CS did this. In 2024 (and I think for quite some time except the COVID years) all CS grad programs do this. Does anyone know when the change happened? I suspect the early 1990's. 

2) Do other departments do this? For example Math? Physics? Applied Math? Chemistry? Biology? Engineering?  I doubt Art History does. 

3) Does it really help convince students to go to that school? I suspect that at this point if a school DIDN'T do it they would look bad and might lose students. Is there a way out? See the next two points. 

4) Do students judge a school based on the professors they see (``Oh, UMCP has more people doing a combination of ML and Vision than I thought- I'll go there!'') or on the quality of the food (``UMCP had ginger bread for one of their desserts, which is my favorite dessert, so I'll go there.'') or how smoothly the trip went (``UMCP had an easy mechanism for reimbursement, whereas school X had me fill out a lot of forms.'')

5) Are we really advancing the public good for which we (schools) either have tax-exempt status OR are using tax-payer money by spending extra money to provide better meals and softer beds than our competitors do? Maybe we should all agree with each other to not waste money trying to outdo each other on stuff of no educational or research significance to the students. But wait---THIS MIGHT BE A VIOLATION OF ANTITRUST LAW. We are competitors, and under federal law are not allowed to (I think) cooperate to prevent a bidding war, even when it would be in the public interest that we do so in order to save money to use on the stuff that matters. 

6) One benefit I get from this as a professor: During the talks  I  hear things about my department I didn't know. 


Wednesday, March 13, 2024

Translation in Context

La Scala in Milan

Google translate generally impresses but consider this translation from a short Italian news article. I boldfaced a few items.
Not scheduled at the premiere of Mozart's Die Entführung aus dem Serail (The Abduction from the Seraglio) which debuted this evening at La Scala.

During the second act, in fact, someone dropped a mobile phone from the stage and hit a spectator in the front row on the cheek.

'I'll report you,' the man who was hit shouted. And then at halftime he explained to the ushers that he would keep the smartphone until the owner arrived to report it. In the meantime, the performance, in the historic staging by Giorgio Strehler conducted by Thomas Guggeis, continued with applause from the audience for almost every aria.

The stage? Last I checked Mozart didn't write any mobile phones into his opera, though you can never tell with those European directors. It certainly would have made the abduction easier. Even if a phone was dropped from the stage, how did it jump over the orchestra pit into the audience?

So I asked ChatGPT to translate the story.

An unexpected event occurred during the premiere of Mozart's "Die Entführung aus dem Serail" (The Abduction from the Seraglio) which debuted this evening at La Scala.

During the second act, indeed, a cellphone fell from a box and struck a spectator in the front row on the cheek.

"I'll sue you!" the struck gentleman shouted. And then, during the interval, he explained to the ushers that he would keep the smartphone until the owner arrived to sue him. In the meantime, the performance, in the historic staging by Giorgio Strehler with the direction of Thomas Guggeis, continued amidst the audience's applause almost after every aria.

That makes more sense. So what's going on? The word for box is "palco" and the word for stage is "palcoscenico" sometimes shortened to "palco" when the context is clear (Italian speakers can fact check me). LLMs capture the context and seem to yield better translations.

Also note the difference in what the man shouted. According to ChatGPT

The Italian verb "denunciare" means "to report" or "to denounce" in English. When conjugated in the first person present indicative, it becomes "denuncio," translating to "I report" or "I denounce." It's used to express the action of making a formal accusation or report against someone, often in a legal or official context. So, in the context you provided earlier, "Ti denuncio" translates to "I'll sue you" or more literally, "I report you" or "I denounce you," indicating the speaker's intent to take formal action against the person responsible for dropping the cellphone.

We love to talk about the failures of LLMs, but here it improves and explains the translations. 

I wonder if the owner ever did try to get his phone back. 

Sunday, March 10, 2024

The Thrill of Seeing Your Name in Print is Gone

 In the 1980's and 1990's when I got a paper accepted to a journal or conference  it seemed important to see it in print. Having a paper accepted was nice, but it didn't seem real until I held the conference proceedings or journal in my hand. I can't imagine that now.

Random Thoughts

1) Now when a paper gets ACCEPTED, that's when it's REAL. For some journals the paper goes on their website very soon after that. For many papers this does not matter since the paper has been on arXiv for months, so having it on a journal's website (perhaps behind a paywall) does not increase its being out-there anyway. Caveat- there may be people who subscribe to that journal who are not aware of the paper until they get the issue. Then they go to arXiv to get an e-copy.

2) For e-journals there is no such thing as holding a journal in your hands. 

3) There are still some people who want to see their name in print. That was part of this blog and this blog

4) This post was inspired by the following: I had a paper accepted for a special issue of Discrete Mathematics, a memorial issue for Landon Rabern (a Combinatorist who died young). The paper is here (that's the arXiv version). I recently got the journal, on paper, mailed to me. So I got to see my name in print. My wife was thrilled. I didn't care. I don't know if they send a paper copy of the journal to all authors for all issues, or only do that for special issues. (My spellcheck thinks that combinatorist is not a word. Or perhaps I am spelling it wrong. ADDED LATER: There is a debate about this word in the comments. Really!)

5) Note for aspiring bloggers: Getting that issue was the inspiration for this post. When you are inspired try to write the blog post soon before you lose interest.

6) Before I got the paper copy I didn't know (or care) if there was a paper copy.

7) I recently asked an editor for a BEATCS column if BEATCS also has a paper copy. He didn't know. That is how unimportant it has become. So why did I ask? I was planning a blog post on which journals are paper free and which aren't- though I don't think I'll write that after all- too much work, and this post and some prior posts (the ones pointed to in item 3) cover that ground.

8) I wonder for my younger readers: Did you EVER feel a thrill seeing your name in print? Have you ever even seen your name in print? If you don't understand the question, ask your grandparents what in print means.

Wednesday, March 06, 2024

Favorite Theorems: Sensitivity

Our next favorite theorem gave a relatively simple proof of the sensitivity conjecture, a long-standing problem of Boolean functions.

Induced subgraphs of hypercubes and a proof of the Sensitivity Conjecture
by Hao Huang

Consider the following measures of Boolean functions \(f:\{0,1\}^n\rightarrow\{0,1\}\) on an input x
  • Decision tree complexity: How many bits of an input do you need to query to determine \(f(x)\)
  • Probabilistic decision tree complexity
  • Quantum decision tree complexity
  • Certificate complexity: Least number of bits of x that force the value of \(f(x)\)
  • Polynomial degree complexity: What is the smallest degree of a polynomial p over the reals such that \(p(x_1,\ldots,x_n)=f(x_1,\ldots,x_n)\) for \(x\in\{0,1\}^n\). 
  • Approximate polynomial degree complexity: Same as above but only require \(|p(x_1,\ldots,x_n)-f(x_1,\ldots,x_n)|\leq 1/3\).
  • Sensitivity: Over all x, what is the maximum number of i such that if  x' is x with the ith bit flipped then \(f(x)\neq f(x')\).
  • Block sensitivity: Same as sensitivity but you can flip disjoint blocks of bits.
Based mostly on the 1994 classic paper by Nisan and Szegedy, all of the above measures, except for sensitivity, are polynomially related; in particular, if say f has polylogarithmic decision-tree complexity then they all have polylogarithmic complexity. The sensitivity conjecture states that sensitivity is polynomially related to the rest. I wrote about the conjecture in 2017 and a combinatorial game that, if solved the right way, could settle the conjecture.
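
To make the sensitivity definition concrete, here is a tiny brute-force sketch (mine, not from any of the papers) that computes it for a function given as a black box on n bits:

```python
# Sensitivity: the maximum, over inputs x, of the number of bits whose flip
# changes f(x).  Brute force over all 2^n inputs, so only for tiny n.
from itertools import product

def sensitivity(f, n):
    best = 0
    for x in product([0, 1], repeat=n):
        flips = 0
        for i in range(n):
            y = list(x)
            y[i] ^= 1                    # flip the ith bit
            if f(tuple(y)) != f(x):
                flips += 1
        best = max(best, flips)
    return best

# Example: OR on 3 bits has sensitivity 3 (witnessed at the all-zeros input).
print(sensitivity(lambda x: int(any(x)), 3))   # 3
```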

Twenty-five years later Huang settled the conjecture in the positive, and now all the above measures are known to be polynomially-related. His proof uses eigenvalues of some cleverly constructed matrices. 

So what's left? The game remains open. Also whether the rational degree is polynomially-related to all the above. But sensitivity was the big one!

Sunday, March 03, 2024

The letter to recommend John Nash was ``The Best Recommendation Letter Ever''- I do not think so.

There is an article about the letter Richard Duffin wrote for John Nash that helped John Nash get into Princeton: here. The title of the article is 


The Best Recommendation Letter Ever.

The article appeared in 2015. 

The letter is dated Feb 11, 1948. 

The letter itself is short enough that I will just write it:

Dear Professor Lefschetz:

This is to recommend Mr. John F Nash Jr, who has applied for entrance to graduate college at Princeton.

Mr. Nash is nineteen years old and is graduating from Carnegie Tech in June. He is a mathematical genius.

Yours sincerely, 

Richard Duffin


I am right now looking through 365  applicants for my REU program. (Am I bragging or complaining? When it was 200 I was bragging. At 365 I am complaining.) 

If I got a letter like that would I be impressed?

No.

A good letter doesn't just say this person is  genius. It has to justify that. Examples: 

She did research with me on topological  algebraic topology. I was impressed with her insights. We are working on a paper that will be submitted to the journal of algebraic topological algebra. See the following arXiv link for the current draft. 

They  had my course on Ramsey theory as a Freshman and scored the highest A in the class. However, more impressive is that, on their own, they discovered that R(5)=49 by using their knowledge of both History and Mathematics. 

Writing a letter for Jack Lotsofmath  makes me sad we live in an era of overinflated letters. I worked with him on recursion theory when he was a high school student; however, he ended up teaching me 0''' priority arguments. 

So my question is

1) Why was just stating that John Nash was a genius good enough back in 1948? Was Richard Duffin a sufficiently impressive person so that his name carried some weight? His Wikipedia entry is here.

2) Maybe it's just me, but if a letter comes from a very impressive person I still want it to say why the applicant is so great. 

3) Was there more of an old-boys-network in 1949? Could the thinking have been: if Duffin thinks he's good, then he's good? The old-boys-network was bad since it excluded blacks, women, Jews, Catholics, and perhaps people not of a certain social standing. But did it err on the other side-- did they take people who weren't qualified because they were part of the in crowd? And was Duffin's letter a way to say but this guy really is good and not just one of us?

4) I suspect there were both fewer people applying to grad school and fewer slots available. I do not know how that played out.

5) Having criticized the letter there is one aspect I do like.

Today letters sometimes drone on about the following:

The letter writer's qualifications to judge:

Example: I have supervised over 1000 students and have been funded by the NSF on three large grants and the NIH on one small grant. Or maybe it's the other way around.

The research project, which is fine, but the letter needs to say what the student DID on the project.

Example: The project was on finite versions of Miletti's Can Ramsey Theory proof. This had never been attempted in the history of mankind! This is important because the Can Ramsey Theory is fundamental to Campbell's soup. This connects up with my upcoming book on the Can Ramsey Theorem to be published by Publish or Perish Press, coming out in early 2025.

 Irrelevant things for my program or for grad school, though perhaps relevant for College: 

Example: He is in the fencing club and the Latin club and was able to trash talk his opponents in Latin. They didn't even know they were being insulted!

So I give credit to Duffin for keeping it short and to the point. Even so, I prefer a letter to prove its assertions.





Wednesday, February 28, 2024

A Quantum State

Illinois' most famous citizen working on a quantum computer

The governor of Illinois, JB Pritzker, unveiled his budget last week including $500 million for quantum computing research. Is this the best way to spend my tax dollars?

As long-time readers know, I have strong doubts about the real-world applications of quantum computing and the hype for the field. But the article does not suggest any applications of quantum computing, rather

Pritzker says he's optimistic that the Illinois state legislature will embrace his proposal as a catalyst for job creation and investment attraction.

That does make sense. Investing in quantum may very well bring in extra federal and corporate investment into quantum in Chicago. At the least it will bring in smart people to Illinois to fill research roles. And it's not as if this money would go to any other scientific endeavor if we don't put it into quantum.

So it makes sense financially and scientifically even if these machines don't actually solve any real-world problems. Quantum winter will eventually come but might as well take advantage of the hype while it's still there. Or should we?

A physicist colleague strongly supports Illinois spending half a billion on quantum. He lives in Indiana. 

Sunday, February 25, 2024

When is it worth the time and effort to verify a proof FORMALLY?

(This post was inspired by Lance's tweet and later post on part of IP=PSPACE being formally verified.) 

We now have the means to verify that a proof (probably just some proofs) is correct (one could quibble about verifying the verifier but we will not address that here). 

The Coq proof assistant (see here). This was used to verify the proof of the four-color theorem, see here.

The Lean Programming Language and Proof Verifier (see here). It has been used to verify Marton's Conjecture which had been proven by Gowers-Green-Tao (see here for a quanta article, and here for Terry Tao's blog post about it.) 
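For readers who have never seen one of these systems, here is a tiny Lean 4 example (illustrative only, and nowhere near the scale of the projects above) showing the kind of statement a proof assistant checks mechanically:

```lean
-- A trivial machine-checked theorem: addition on the naturals is commutative.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```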

HOL (Higher Order Logic) is a family of interactive theorem-proving systems (see here). The Wikipedia entry for HOL (proof assistant) (see here) says: 

The CakeML project developed a formally proven compiler for ML [ML is a programming language, not to be confused with Machine Learning.]

Isabelle is an HOL system. See here. The Wikipedia page says that it is used to verify hardware and software. It has recently been used to verify the Sumcheck Protocol, which was developed to help prove IP=PSPACE (see here). One of the authors of the IP=PSPACE paper, Lance Fortnow, said of this

Now I can sleep soundly at night.

The Kepler Conjecture was proven by Thomas Hales. (Alfred Hales is the co-author of the Hales-Jewett Theorem. I do not know if Thomas and Alfred Hales are related, and Wikipedia does not discuss mathematicians' blood relatives, only their academic ancestors.) The proof was... long. Over a period of many years it was verified by HOL Light and Isabelle. See here. (Note- the paper itself says it used HOL Light and Isabelle, but the Wikipedia site on Lean says that Hales used Lean.) 

I am sure there are other proof assistants and other theorems verified by them and the ones above. 

My question is 

Which proofs should we be spending time and effort verifying? 

Random Thoughts

1) The time and effort is now much less than it used to be so perhaps the question is moot.

2) Proofs that were done with the help of a program (e.g., the four-color theorem) should be verified.

3) The theorem has to have a certain level of importance that is hard to define, but item 1 might make that a less compelling criterion. 

4) A proof in an area where there have been mistakes made before should be verified. That is the justification given in the paper about verifying part of the IP=PSPACE proof.

5) A rather complicated and long proof should be verified. That was the case for Marton's Conjecture. 

6) A cautionary note: Usually when a long and hard proof comes out, people try to find an easier proof. Once a theorem is proven (a) we know it's true (hmmm..) and (b) we have some idea how the proof goes. Hence an easier proof may be possible. In some cases just a better write up and presentation is done, which is also a good idea. I wonder if, after having a verifier say YES, people will stop bothering to find easier proofs.

7) Sometimes the people involved with the original proof also were involved in the verification (Kepler Conjecture, Marton's Conjecture) and sometimes not (IP=PSPACE, 4-color theorem). 

8) When I teach my Ramsey Theory class I try to find easier proofs, or at least better write ups, of what is in the literature. I sometimes end up with well-written but WRONG proofs. The students, who are very good, act as my VERIFIER. This does not always work:  there are still some students who, 10  years ago, believed an  incorrect proof of the a-ary Canonical Ramsey Theorem, though they went on to live normal lives nonetheless. Would it be worth it for me to use a formal verifier? I would guess no, but again, see item (1). 

9) Might it one day be ROUTINE that BEFORE submitting an article you use a verifier? Will using a verifier be as easy as using LaTeX? 

10) The use of these tools for verifying code--- does that put a dent in the argument by Demillo-Lipton-Perlis (their paper is here, and I had two blog posts on it here and here) that verifying software is impossible?

11) HOL stands for Higher Order Logic. I could not find out what Isabelle, Lean, or Coq stand for. I notice that they all use a Cap letter then small letters so perhaps they don't stand for anything. 

Wednesday, February 21, 2024

Sumchecks and Snarks

Last summer as I lamented that my research didn't have real world implications, one of the comments mentioned the sumcheck protocol used for zero-knowledge SNARKs. I tried to figure out the connection back then but got lost in technical papers. 

With the formal verification of the sumcheck protocol announced last week I tried again this time using Google's new Gemini Ultra. Gemini gave a readable explanation. Let me try to summarize here.

Sumcheck comes from the paper where Howard Karloff, Carsten Lund, Noam Nisan and I showed that co-NP has interactive proofs. It was Carsten who first came up with the sumcheck protocol in fall of 1989. Here's a simple version:

Suppose there is a hard-to-compute polynomial p of low degree d and a prover claims p(0)+p(1) = v. The prover gives a degree-d polynomial q to the verifier claiming q = p. The verifier checks q(0)+q(1) = v and picks an r from a large range. If q was actually p then p(r) = q(r). But if p(0)+p(1) \(\neq\) v then p and q are different polynomials and with high probability p(r) \(\neq\) q(r), since p and q can agree on at most d points.
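
Here is a toy instantiation of that single round (my own sketch: the modulus, the example polynomial, and the fact that the verifier evaluates p directly are all simplifications; in the real protocol the check p(r) = q(r) becomes the next, smaller claim):

```python
# One round of a toy sumcheck: the prover claims p(0) + p(1) = v for a
# degree-d polynomial p over the field F_P.
import random

P = 2**61 - 1                      # a large prime modulus (chosen for the sketch)

def evaluate(coeffs, x):
    """Evaluate the polynomial with the given coefficients at x, mod P (Horner)."""
    result = 0
    for c in reversed(coeffs):
        result = (result * x + c) % P
    return result

p = [3, 1, 4, 1, 5]                          # the prover's polynomial (degree 4)
v = (evaluate(p, 0) + evaluate(p, 1)) % P    # the claimed sum

q = p                                        # what the prover sends; change a
                                             # coefficient to simulate cheating

# Verifier: check the sum, then spot-check at a random point chosen AFTER q is fixed.
assert (evaluate(q, 0) + evaluate(q, 1)) % P == v
r = random.randrange(P)
print(evaluate(q, r) == evaluate(p, r))      # True for honest q; rarely True otherwise
```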

The protocol reduces checking a sum to checking a single randomly chosen location. Our first proof used the sumcheck protocol to reduce checking the permanent of an \(n\times n\) matrix to an \((n-1)\times(n-1)\) matrix, and then we repeat until the matrix is small enough to compute the permanent directly.

Now let's break down the properties of zkSNARK (zero-knowledge succinct non-interactive arguments of knowledge). 

  • Efficient: A prover who knows the secret input can generate the proof efficiently.
  • Zero-Knowledge: A prover can convince a verifier that they know a secret input that satisfies a statement without revealing anything else about that secret.
  • Succinct: The proof itself is extremely small and easy to verify.
  • Non-Interactive: No repeated interaction between prover and verifier is necessary. The prover generates one proof, and anyone can verify it.

Our original sumcheck protocol is not at all efficient: the prover needs to solve #P-hard problems. But in SNARKs we assume the prover already knows the problem's solution.

Our protocol requires significant interaction, in particular that r is chosen after q is set. But for SNARKs, since we can assume the prover is efficient, we can use r as a hash of q. It should be difficult to find a q different from p such that q(hash(q)) = p(hash(q)).
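
As a rough illustration of that last step, the challenge r can be computed by hashing the prover's message (the encoding and the choice of hash below are my own assumptions, not any particular SNARK's format):

import hashlib

PRIME = 2**61 - 1

def challenge_from(q_coeffs):
    """Derive the challenge point r deterministically from the prover's polynomial q."""
    message = ",".join(str(c) for c in q_coeffs).encode()
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % PRIME

# The prover commits to q before learning r: any change to q changes r.
print(challenge_from([1, 0, 3]))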

Our protocol is not at all zero-knowledge: if p = q then the protocol reveals p(0) and p(1). Many more tricks are needed to make it zero-knowledge and succinct, but I leave that to the interested reader.

Sunday, February 18, 2024

ChatGPT thinks Graph Isomorphism has real applications. Is it right?

Lance did a post on Babai's result on Graph Isomorphism (see here). I then did a post asking if Graph Isomorphism has real applications (see here). Lance proofread my post (there were some typos! Really!) and then he was inspired to ask ChatGPT what it thought. The conversation is below (I added some fonts and my opinions, clearly marked, but nothing else). I leave it to the reader to determine what ChatGPT got right and got wrong. In areas that I am unfamiliar with I tend to think that ChatGPT is right, perhaps a case of Gell-Mann Amnesia (see here).

LANCE: What are some applications of Graph Isomorphism?

ChatGPT:

Isomorphism, a concept from the field of graph theory in mathematics and computer science, refers to the problem of determining whether two graphs are isomorphic, meaning there is a one-to-one correspondence between their vertex sets that preserves adjacency. Despite its theoretical underpinnings, graph isomorphism has practical applications in various domains:

Chemical Informatics: It's crucial for identifying chemical compounds. Since chemical structures can be represented as graphs (with atoms as vertices and bonds as edges), determining if two graphs are isomorphic can indicate whether two compounds are structurally identical.

BILL'S OPINION: I suspect simple heuristics work fine. Also I suspect that chemicals are more complicated than graphs. More generally, instead of asking Does GI have any real applications? I should have asked Have the last 45 years of graph isomorphism algorithms produced by our community had any applications? I suspect the answer is no.

Pattern Recognition: In image and pattern recognition tasks, objects can be represented as graphs, and finding isomorphic graphs helps in identifying similar patterns or shapes within different datasets.

BILL'S OPINION: I suspect that in real-world pattern recognition, especially for images, approximate matches are what you really want. What makes GI hard/interesting is that you need exact isomorphism. So I doubt GI really helps for pattern recognition.

Network Analysis: Graph isomorphism is used to analyze and compare different network structures in social networks, biological networks (like neural connections and protein interactions), and the Internet, to understand their underlying properties and dynamics.

BILL'S OPINION: This sounds plausible to me. The people doing the studying are other academics; however, that can still count. 

Computer Vision: For object recognition and classification, where objects within an image can be represented as graphs, and their identification involves finding isomorphic subgraphs within larger scene graphs.

BILL'S OPINION: Same issue as with Pattern Recognition. 

Cryptography: Graph isomorphism problems can serve as the basis for cryptographic protocols, including public key cryptography, due to the computational difficulty of solving graph isomorphism for general cases.

BILL'S OPINION: Just NO.  One reason factoring is so useful for crypto is that one can easily generate hard instances. For GI you cannot (at least not yet). I think ChatGPT confused GI being in Zero-Knowledge with GI actually being used for Crypto. More on this later.

Database Indexing: In databases, especially graph databases, graph isomorphism helps in query optimization and efficient data retrieval by identifying identical or similar graph structures.

BILL'S OPINION: I can believe this! Gell-Mann Amnesia? 

These applications demonstrate the wide-ranging impact of graph isomorphism beyond theoretical computer science, influencing fields where structural and relational data analysis is crucial.


LANCE: Can you give me specific examples of these?

(BILL: I won't comment on these, except in one case, as my comments would be the same as above.) 

ChatGPT:

Certainly! Here are specific examples for the practical applications of graph isomorphism mentioned:

Chemical Informatics: A classic example is the identification of isomers in chemistry. Isomers are compounds with the same molecular formula but different structures. Graph isomorphism algorithms can distinguish between different isomers by comparing their graph representations, where vertices represent atoms and edges represent chemical bonds. For instance, butane and isobutane are structural isomers with different connectivity among the same set of atoms, which can be identified as distinct through graph isomorphism.

Pattern Recognition: In character recognition systems, letters or symbols can be represented as graphs, where the strokes are edges and intersections are vertices. Graph isomorphism algorithms can then be used to match a scanned, handwritten, or printed character against a database of graph-represented characters to identify the letter or symbol.

Network Analysis: In social network analysis, researchers might use graph isomorphism to identify similar subnetworks within larger networks. For example, detecting communities or cliques within social networks that have identical or similar interaction patterns, which can be crucial for understanding social dynamics or the spread of information.

Computer Vision: An application in computer vision could be recognizing objects regardless of their orientation in an image. By representing objects as graphs, where vertices correspond to object features and edges to spatial relationships between these features, graph isomorphism can help in identifying an object by finding a match between its graph representation and that of a known object.

Cryptography: A specific use case in cryptography is the design of cryptographic primitives based on the Graph Isomorphism problem, such as the Goldreich-Goldwasser-Halevi (GGH) cryptosystem. This cryptosystem constructs a public key from a hard instance of a lattice problem, which is closely related to finding isomorphisms between noisy graph representations, leveraging the computational difficulty of graph isomorphism for security.

BILL: The pointer to the Wikipedia page on the GGH system is here. It has nothing to do with graph isomorphism. So this one is complete garbage.

Database Indexing: In the context of graph databases used for storing knowledge graphs or social network data, an example application is the optimization of query processing. When a user queries the database to find data patterns that match a specific graph structure, graph isomorphism algorithms can help quickly locate all instances of the data that are isomorphic to the query graph, enhancing search efficiency and performance.

These examples highlight how graph isomorphism plays a crucial role in solving real-world problems across different domains by leveraging the structure and properties of graphs.

LANCE: Give me a specific research paper in Chemistry that uses graph isomorphism

ChatGPT
A notable research paper that applies graph isomorphism in the field of chemistry is:

Title: "Development and Application of a Novel Graph-Theoretical Approach for Molecular Similarity Analysis"

Authors: Raymond E. Carhart, David H. Smith, Robert Venkataraghavan


Journal: Journal of Chemical Information and Computer Sciences

Year: 1985, Volume: 25, Issue: 1, Pages: 64–73, DOI: 10.1021/ci00045a002

Abstract:

In this paper, the authors introduce a novel method for analyzing molecular similarity based on graph-theoretical concepts. They present an algorithm, known as the MCS (Maximum Common Subgraph) algorithm, which is used to identify the largest subgraph common to two chemical compounds represented as graphs. This process relies on graph isomorphism to determine the structural similarity between molecules. The approach was groundbreaking at the time because it provided a quantitative method for assessing molecular similarity, which is crucial for drug discovery and the development of chemical databases.

The significance of this work lies in its application of graph isomorphism to solve a practical problem in chemistry, demonstrating the interdisciplinary nature of graph theory and its potential to impact research beyond theoretical computer science. This paper has been influential in the fields of cheminformatics and drug design, where molecular similarity plays a key role in identifying potential therapeutic agents.

BILL'S COMMENT: This information from ChatGPT would be more impressive if this paper actually existed. 

(ADDED LATER- there is some debate in the comments about whether the paper exists. 

DOES NOT EXIST: Bill could not find it on the web.

DOES NOT EXIST: Bill went to the website of that journal and did a search and did not find it. 

DOES NOT EXIST: The DOI does not point to it.

DOES EXIST: Some slide packet pointed to it as a reference. 

If someone leaves a comment with a pointer to it, or emails me a pointer to it, I will update this post.)

Thursday, February 15, 2024

Change

An academic field often organizes itself using ideas from its own field. For example, students on the economics faculty job market can signal to at most two schools, with no other rules on how the signals should be used, allowing equilibrium behavior to rule.

In computer science, our field is decentralized, without any single organization setting policies or much coordination. We're very data oriented, so we do a good job collecting it, as with the Taulbee Survey. And because we're a field based on programming and logic, we tend to outsource decisions to specific rules: formulas for ranking departments, judging faculty productivity (h-index, major conference publications, course evaluations), and graduation requirements.

Computing itself has changed over the last decade with the growth of artificial intelligence and learning from data. We see many examples where systems trained on data dramatically outperform those built on logic, for example in machine translation, facial recognition and spam detection.

Will the changes in computing change the way we think about how we run our field? Probably, but over a long period of time. People don't change their ways, but eventually the people change.

Sunday, February 11, 2024

Are there any REAL applications of Graph Isomorphism?

Lance's post on Babai's result on Graph Isomorphism (henceforth GI) inspired some random thoughts on GI. (Lance's post is here.) 

1) Here is a conversation with someone who I will call DAVE. This conversation took place in the late 1980's. 

BILL: You got your ugrad degree in engineering and then went to CS grad school. You are a pragmatic guy. So why did you work on graph isomorphism?

DAVE: My advisor was working on it. The 1980's was a good time for GI: the problems restricted to bounded genus, bounded degree, and bounded eigenvalue multiplicity were proven to be in P. Tools from Linear Algebra and Group Theory were being used. People thought that GI might be shown to be in P within the next five years. Then it all stopped for quite some time. 

But back to your question about GI being practical, I asked my advisor for real applications of GI. He said:  

Organic Chemists use graph isomorphism to match chemical structures.

By the time I found out this wasn't true, I already had my PhD. I quickly switched to Computational Geometry, which really does have applications. Maybe. 

2) So why was graph isomorphism studied? Much to my surprise, a survey by Grohe and Schweitzer (see here) says that 

Graph isomorphism as a computational problem first appeared in the chemical documentation literature of the 1950's (for example Ray and Kirsh) as the problem of matching a molecular graph against a database of such graphs.

(The article by Ray and Kirsh is here.) 

So at one time it was thought that GI would apply to Chemistry. Did it? I suspect some simple algorithms and heuristics did, but (1) chemicals are more complicated than graphs, and (2) by the time we are talking about graph isomorphism of bounded eigenvalue multiplicity, the algorithms got too complicated to really use. BUT these are just my suspicions (what is the difference between a suspicion and a guess?) so I welcome comments or corrections.

3) The same article also says:

Applications [of GI] spans a broad field of areas ranging from Chemistry to computer vision. Closely related is the problem of detecting symmetries of graphs and of general combinatorial structures. Again this has many application domains, for example, combinatorial optimization, the generation of combinatorial structures, and the computation of normal forms. 

That sounds impressive, but I would like to know if the applications are to the real world or to other theory things. 

4) The prominence of GI in the CS theory literature is because it's one of the few natural problems in NP that is not known to be in P, yet also not known to be (and unlikely to be) NP-complete.

5) Graph Isomorphism IS a natural problem. What is an unnatural problem? 

BILL: Do you find the following interesting: There is an r.e. set that is not decidable and not Turing-equivalent to HALT.

DARLING: Yes. Unless it was some dumb-ass set that people like you constructed for the whole point of being r.e., not decidable, and not Turing-equivalent to HALT. 

BILL: You nailed it!
 
6) So, is GI in P? This is a truly open question in that it really could go either way. There is no real consensus. 
 
7) I have heard that Babai's result is as far as current techniques can take us. So we await a new idea.

Wednesday, February 07, 2024

Favorite Theorems: Graph Isomorphism

We start our favorite theorems of the last decade with a blockbuster improvement in a long-standing problem. 

Graph Isomorphism in Quasipolynomial Time by László Babai

The graph isomorphism problem takes two graphs G and H, both on n nodes, and asks if there is a permutation \(\sigma\) such that \((i,j)\) is an edge of G if and only if \((\sigma(i),\sigma(j))\) is an edge of H. While we can solve graph isomorphism in practice, until Babai's paper we had no known subexponential algorithm for the problem.
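
To make the definition concrete, here is a brute-force check (an illustrative sketch of my own, bearing no resemblance to the algorithms discussed below): try every permutation of the vertices and test whether it maps the edges of G exactly onto the edges of H.

from itertools import permutations

def are_isomorphic(n, edges_g, edges_h):
    """edges_g, edges_h: sets of frozenset({i, j}) over vertices 0..n-1."""
    if len(edges_g) != len(edges_h):
        return False
    for sigma in permutations(range(n)):
        # Map each edge {i, j} of G to {sigma(i), sigma(j)} and compare with H.
        mapped = {frozenset(sigma[v] for v in edge) for edge in edges_g}
        if mapped == edges_h:
            return True
    return False

# Two labelings of a 4-cycle.
G = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
H = {frozenset(e) for e in [(0, 2), (2, 1), (1, 3), (3, 0)]}
print(are_isomorphic(4, G, H))  # True

This takes n! time; the whole point of the work described here is doing dramatically better.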

Graph Isomorphism has a long history in complexity. It has its own self-named complexity class in the zoo. It's one of the few decision problems in NP not known to be NP-complete or in P. Graph Isomorphism and its complement are the poster child problems for complexity classes AM, SZK and SPP, public-coin proof systems, and the minimum-size circuit problem. Köbler, Schöning and Torán wrote a whole book on the structural complexity of graph isomorphism.

Graph Isomorphism has the kind of group structure that suggests a quick quantum algorithm, but that remains an annoying open problem. 

Babai found a simple algorithm and proof based on Johnson Graphs. Who am I kidding? Even an expert group theorist would need several months to work through the paper. 

Babai has been working on graph isomorphism since the 80's. In November 2015, he announced a talk with that title and the theory world went crazy. They needed a larger room. We invited Babai down to Georgia Tech to give a talk. A week before, he sent me an email saying the proof no longer gave the quasipolynomial time bound and asked if he should still give the talk. We said of course, and he gave the talk on January 9, 2017, describing the problem with the proof. And then he announced "It's been fixed". I witnessed history in the making. 

Sunday, February 04, 2024

The advantage of working on obscure things

 I got an email from an organization that wants to publicize one of my papers. Which paper did they want to publicize?

1) If the organization was Quanta, they are KNOWN so I would trust it. Indeed, they have contacted me about my paper proving primes infinite from Ramsey theory (they contacted all of the authors of papers that did that). The article they published is here and I blogged about those proofs here.

2) Someone writing an article on my muffin work for a recreational journal contacted me- also believable.

3) But the email I got was from an organization I never heard of and they wanted to publicize An NP-Complete Problem in Grid Coloring. This is not a topic of general interest, even among complexity theorists or Ramsey theorists. Indeed, the paper has three authors and only two of them care about it. Hence I suspected the organization was a scam or quasi-scam. I looked them up and YES, at some point they would have wanted money for their efforts. 

I want to say none of my work is of general interest but that may depend on what is meant by general and by interest. Suffice it to say that if a scammer claims they want to publicize my work and picks a random paper to ask me about, the probability that it's one where I will say Yes, I can see that one being worth getting to a wider audience is negligible. Indeed, Darling was amazed that Quanta cared about using Ramsey Theory to prove primes infinite. She may have a point there.

But if I worked in Quantum or ML or Quantum ML, then it would be harder for me to say, of a random paper, they can't possibly think this has wider appeal, so it's obviously a scam. 

That is the advantage of working on obscure things.

Wednesday, January 31, 2024

University Challenges

Seems like US universities have been in the news quite a bit recently. You'd think for the great academics and research. Alas, no.

I decided to make a list of the not-so-unrelated topics. The list got long and is still far from complete.

  • Public perception of universities has been dropping
  • Enrollment: Up in 2023 for the first time since the pandemic. But still down 900,000 undergrads from five years ago, and the demographic cliff is just a couple of years away.
  • Fiscal Challenges even at Penn State and University of Chicago.
  • On the other hand are those with huge endowments.
  • Where have the men gone? While CS is still predominantly male, men make up only about 40% of undergrads at four-year colleges. The percentages are lower for African-American and Hispanic men.
  • The increase in teaching faculty over tenure-track. 
  • AI - How best to use it to help the educational process without letting students cheat, and where do you draw the line? 
  • State government exerting control. We are seeing a number of conservative states pushing back against DEI and the perceived liberal bias.
  • Affirmative Action - How do universities maintain a diverse student body in the wake of the Supreme Court ruling last summer?
  • Admissions policies that favor the rich, notably legacy admissions and sports other than football and basketball, where wealthier kids have the time, training and equipment to succeed.
  • SAT - Most schools have eliminated the SAT requirement, but should they bring it back?
  • College is seen more as a place to build career skills. STEM fields, especially computing, have seen huge gains in enrollment while humanities and social sciences have been decimated. How should colleges respond?
  • Competition from certificate programs, online degrees, apprenticeships and boot camps.
  • Athletics - Chasing ever-increasing broadcast revenue has restructured conferences (Goodbye Pac-12). Name-Image-Likeness has made some college athletes, notably in football and basketball, significant money. Alumni collectives and easier transfer rules have turned college student athletes into free agents. Meanwhile some lower revenue sports are getting cut.
  • Colleges as a political football, as college graduates trend Democratic.

The Israel-Gaza-Hamas conflict alone has supercharged a number of issues.
  • When and how should universities take a stand on political issues? 
  • Congressional hearings into university policies
  • Free speech, especially when it creates disruption, makes people feel unsafe and leads to discrimination. Where do you draw the line?
  • Activist donors and boards. As universities rely more on "high net-worth individuals", these large donors can hold considerable sway, including pushing out presidents. 
So as a college professor or graduate student, how do you deal with all of the above? Best to ignore it all and focus on your research and teaching.

Sunday, January 28, 2024

Certifying a Number is in a set A using Polynomials

 (This post was done with the help of Max Burkes and Larry Washington.)


During this post \(N= \{0,1,2,\ldots \}\) and  \(N^+=\{1,2,3,\ldots \}\).

Recall: Hilbert's 10th problem was to (in today's terms) find an algorithm that would, on input a polynomial \(p(x_1,\ldots,x_n)\in Z[x_1,\ldots,x_n]\), determine if there are integers \(a_1,\ldots,a_n\) such that \(p(a_1,\ldots,a_n)=0\).

From the combined work of Martin Davis, Yuri Matiyasevich, Hilary Putnam, and Julia Robinson it was shown that there is no such algorithm. I have a survey on the work done since then; see here.
The following is a corollary of their work:


Main Theorem:
Let \(A\subseteq N^+\) be an r.e. set. There is a polynomial \(p(y_0,y_1,\ldots,y_n)\in Z[y_0,y_1,\ldots,y_n]\) such that

\((x\in A) \hbox{ iff } (\exists a_1,\ldots,a_n\in N)[(p(x,a_1,\ldots,a_n)=0) \wedge (x> 0)].\)

Random Thoughts:

1) Actual examples of polynomials \(p\) are of the form

\(p_1(y_0,y_1,\ldots,y_n)^2 + p_2(y_0,y_1,\ldots,y_n)^2 + \cdots + p_m(y_0,y_1,\ldots,y_n)^2\)

as a way of saying that we want \(a_1,\ldots,a_n\) such that the following are all true simultaneously:

\(p_1(x,a_1,\ldots,a_n)=0\), \(p_2(x,a_1,\ldots,a_n)=0\), \(\ldots\), \(p_m(x,a_1,\ldots,a_n)=0\).

2) The condition \(x>0\) can be phrased

\((\exists z_1,z_2,z_3,z_4)[x-1-z_1^2-z_2^2-z_3^2-z_4^2=0].\)

This phrasing uses the fact that every natural number is the sum of four squares.


The Main Theorem gives a way to certify that \(x\in A\): find \(a_1,\ldots,a_n\in N\)
such that \(p(x,a_1,\ldots,a_n)=0\).
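
The check itself is mechanical: plug the candidate numbers into the polynomial(s) and confirm everything vanishes. Here is a minimal sketch (the function names are mine and the polynomials are placeholders); the four-square check encodes the condition \(x>0\) from item 2 above.

def check_certificate(x, witnesses, polynomials):
    """polynomials: callables p_i(x, a_1, ..., a_n) returning integers; all must vanish."""
    return sum(p(x, *witnesses) ** 2 for p in polynomials) == 0

def certifies_positive(x, z1, z2, z3, z4):
    """Certify x > 0 via x - 1 = z1^2 + z2^2 + z3^2 + z4^2."""
    return x - 1 - z1**2 - z2**2 - z3**2 - z4**2 == 0

print(certifies_positive(7, 2, 1, 1, 0))  # 7 - 1 = 4 + 1 + 1 + 0, so True

Since the arithmetic is exact, the enormity of the witnesses is only a size problem, not a correctness problem.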

Can we really find such \(a_1,\ldots,a_n\)?

A high school student, Max Burkes, working with my math colleague Larry Washington, tackled the problem of finding \(a_1,\ldots,a_n\).

Not much is known about this type of problem; we will see why soon. Here is a list of what is known.

1) Jones, Sato, Wada, Wiens (see here) obtained a 26-variable polynomial \(q(x_1,\ldots,x_{26})\in Z[x_1,\ldots,x_{26}]\) such that

\(x\in \hbox{PRIMES iff } (\exists a_1,\ldots,a_{26}\in N)[(q(a_1,\ldots,a_{26})=x) \wedge (x>0)].\)

To obtain a polynomial that fits the main theorem take

\(p(x,x_1,\ldots,x_{26},z_1,z_2,z_3,z_4)=(x-q(x_1,\ldots,x_{26}))^2+(x-1-z_1^2-z_2^2-z_3^2-z_4^2)^2.\)

Jones et al. wrote the polynomial \(q\) using as variables \(a,\ldots,z\), which is cute since that's all 26 letters of the English alphabet. See their paper pointed to above, or see Max's paper here.

2) Nachiketa Gupta, in his Master's Thesis (see here), tried to obtain the 26 numbers \(a_1,\ldots,a_{26}\) such that \(q(a_1,\ldots,a_{26})=2\), where \(q\) is the polynomial that Jones et al. came up with. He found 22 of them. The other 4 are, like the odds of getting a Royal Fizzbin, astronomical. Could today's computers (21 years later) or AI or Quantum or Quantum AI obtain those four numbers? No, the numbers are just too big.

3) There is a 19-variable polynomial \(p\) from the Main Theorem for the set

\(\{ (x,y,k) \colon x^k=y\}.\)

See Max's paper here, pages 2 and 3, equations 1 to 13. The polynomial \(p\) is the sum of the squares of those equations. So, for example, \(r(x,y,z)=1\) becomes \((r(x,y,z)-1)^2\).

Max Burkes found the needed numbers to prove \(1^1=1\) and \(2^2=4\). The numbers for \(2^2=4\) are quite large, though they can be written down (as he did).

 
Some Random Thoughts:

1) It is good to know some of these values, but we really can't go much further.

2) Open Question: Can we obtain polynomials for primes and other r.e. sets so that the numbers used are not that large? Tangible goals:

(a) Get a complete verification-via-polynomials that 2 is prime.

(b) Find the numbers to verify that \(2^3=8\).

3) In a 1974 book about progress on Hilbert's problems (I reviewed it in this book review column, see here) there is a chapter on Hilbert's 10th problem by Davis-Matiyasevich-Robinson that notes the following. Using the polynomial for primes, there is a constant \(c\) such that, for all primes \(p\), there is a computation showing \(p\) is prime in \(\le c\) operations. The article did not mention that the operations are on enormous numbers.

OPEN: Is there some way to verify a prime with a constant number of operations using numbers that are not quite so enormous?