
Monday, November 28, 2022

Would you be more productive if you didn't log on? It worked for Christopher Havens!

There are times when NOT having computer access (is that possible anymore?) can make you MORE productive. 

1) I did a lot of my Muffin Work when I was in Mexico for a bar mitzvah and had no computer access, and no television in English (though Walker, Texas Ranger and Kindergarten Cop were actually pretty good in Spanish, even though I don't know Spanish).

2) I did work on NIM with Cash when I was stuck at an airport for 8 hours. 

3) I proofread (on paper!) most of my book with Erik D and Mohammad H when I was at a relative's house for four days; they had weak wifi and only got ABC, NBC, CBS, PBS, some local channels, and COZI (not sure why they got COZI, though I am glad since I caught a good episode of Columbo).

4) At Thanksgiving at my Mom's apartment (she's 93 years young!), with no computer, I relearned the proof of the Hales-Jewett theorem. I seem to learn/forget/learn/forget that one a lot.

There are times when being AWAY from technology is helpful. I sometimes go to the Math Library and try to NOT use my cell phone. 

Having said that, I do think that overall having access to papers online is great and that overall academic productivity has increased. But there are times when the computer can distract you and time AWAY from it is good.

Which brings us to the story of Christopher Havens. He works in Number Theory and logs on very rarely, perhaps never. He just works on math 10 hours a day. He has a paper (with co-authors). It takes discipline to resist the urge to log on. How did he manage this?

He is in prison for murder.

Here is a podcast with him as a guest. 

Here is an article about him. 

Here is a math article where he is a co-author. 

Here is the short version of all of this: 

1) He is guilty of murder and in a max security prison in America. It was a drug-related shooting (why do people only talk about drug deals gone bad when they should also talk about drug deals gone good?). When I first read Prison Inmate Solves Math Problem I thought that maybe it was a white collar criminal who majored in Math and was in a min security prison with access to the web (do white collar criminals have access to the web?). But NO, Christopher Havens really did murder someone and is in max security.

2) He really has turned his life around. He really is not the person he was, and when he gets out I cannot imagine he will go back to drugs and crime. I suspect he will work on the Math Prison Project which I mention later. 

3) His mother says that when he was in high school (which is as far as he got in his education) he was helping students in math who were two grades above him, but he has no recollection of this.

4) When he was in the hole (solitary confinement) someone who did Prison Education gave him (and others, he was not picked out) an envelope of math problems to work on--pre-algebra. Christopher did them well, liked it, and began requesting more advanced math books and learned math by himself, working 10 hours a day. When he requested a book it was random which ones he would get. I don't know why some were blocked. I don't think he knows why some were blocked.

5) Some mathematicians from Italy (Italy?) contacted him and they began corresponding and yada-yada-yada, he has a paper now.

6) He has conceptualized the Math Prison Project to help other prisoners do math, though I would suspect not on the level he is on. Then again, maybe the reason that P vs NP is still open is that we all have too many distractions, and conference deadlines, that a prisoner would not have.

7) Some articles say that he solved an ancient problem in math that Euclid couldn't solve. This is not true. He helped solve some problems about continued fractions.

8) There is going to be a movie about him, see here. I predict it will take an interesting story and make it less interesting and more fictional. 

What to make of all this? 

1) KUDOS to him!

2) I don't know which side of the nature/nurture argument this supports:

a) He OBVIOUSLY had math talent naturally or else he couldn't have learned all of that math.
b) He shows that HARD WORK and TENACITY can overcome other issues.

3) Back to my original point: if you had the FREEDOM to work 10 hours a day JUST on math and had no other distractions, but also limited access to books and people, would you be MORE productive? LESS productive? Also note: no faculty meetings, no teaching obligations, and no word processor to distract you.



Monday, November 21, 2022

A Celebration of Juris


On November 4th I travelled to my undergraduate alma mater Cornell for a Celebration of the Life and Career of Juris Hartmanis, who passed away in July. The workshop attracted many Cornell faculty and students, many of Hartmanis' former colleagues and students, grad and undergrad, as well as his family. For the most part, the talks did not focus on technical content but rather memories of the great man.

I talked about how Hartmanis founded the field of Computational Complexity and brought me into it. Herbert Lin told the story behind Computing the Future, a 1992 agenda for the future of computer science led by Hartmanis and the challenge to the report by John McCarthy, one of the founders of AI. Should the agenda of computer science be solely in the hands of academic computer scientists, or should it take into account its role in the larger scientific and world-wide community? We still face these questions today.

Ryan Williams gave a powerful talk about how Hartmanis personally intervened to ensure Ryan had a future in complexity. We are all better off for that.

After the workshop, Ryan and I walked around the campus and Collegetown reminiscing on how things have changed in the two decades since Ryan was an undergrad and the four decades (!) since I was. Most of the bars and restaurants have disappeared. The Arts quad is mostly the same, while the engineering buildings have been mostly rebuilt. There's a new computer science building with another on the way.

I stayed in town to catch the Cornell football game the next day, as I once was on that field playing tuba for the marching band. They tore down the west stands to put up a parking lot and the east stands were sparsely filled watching Penn dominate the game.

Goodbye Juris. You created a discipline, started one of the first CS departments, and plotted the future of both computational complexity and computer science as a whole. A master and commander indeed.

Thursday, November 17, 2022

Fall Jobs Post 2022

In the fall I try to make my predictions on the faculty job market for the spring. The outlook this year is hazy as we have two forces pushing in opposite directions. 

Most of the largest tech companies are having layoffs and hiring freezes amidst a recession, higher expenses and a drop in revenue from cloud and advertising. Meanwhile computing has never had a more exciting (or scary) year of advances, particularly in generative AI. I can't remember such a dichotomy in the past. In the downturn after the 2008 financial crisis computing wasn't particularly exciting as the cloud, smart phones and machine learning were then just nascent technologies.

We'll probably have more competition in the academic job market as many new PhDs may decide to look at academic positions because of limited opportunities in large tech companies. We might even see a reverse migration from industry to academia from those who now might see universities as a safe haven.

What about the students? Will they still come in droves driven by the excitement in computing, or get scared off by the downturn in the tech industry? They shouldn't worry--the market should turn around by the time they graduate, and even today there are plenty of tech jobs in smaller and midsize tech companies as well as companies that deal with data, which is pretty much every company.

But perception matters more than reality. If students do stay away that might reduce pressure to grow CS departments.

Onto my usual advice. Give yourself a good virtual face. Have a well-designed web page with access to all your job materials and papers. Maintain your Google Scholar page. Add yourself to the CRA's CV database. Find a way to stand out, perhaps a short video describing your research. 

The best sources for finding jobs are the ads from the CRA and the ACM. For theoretical computer science specific postdoc and faculty positions check out TCS Jobs and Theory Announcements. If you have jobs to announce, please post to the above and/or feel free to leave a comment on this post. Even if you don't see an ad for a specific school they may still be hiring; check out their website or email someone at the department. You'll never know if you don't ask.

Monday, November 14, 2022

Who first thought of the notion of Polynomial Time?

(An updated version of Computational Intractability: A Guide to Algorithmic Lower Bounds by Demaine-Gasarch-Hajiaghayi is here)


Any question like who first thought of X is often hard to answer. I blogged about who first came up with the Fib numbers here. I've heard rumors that IBM had search engines way before Google but could not figure out how to make money off of them. There are other examples.

I had learned that Cobham defined P in the paper The intrinsic computational difficulty of functions, in 1965. It appeared in the proceedings of the Conference on Logic, Methodology, and Philosophy of Science. The paper is here. Jack Edmonds had the notion of P in the paper Paths, Trees, and Flowers (here) in 1965.

While it is true that Cobham defined P in that paper, and he might have been the first one to do so, was the notion somehow around earlier? I first thought the answer was no. Why? Because if you look at Joe Kruskal's paper on MST (see here) you don't see anything resembling time complexity. No O(E + V log V) or whatnot. So I thought that if the notion of `this algorithm runs in such-and-such time' was not in the air, then certainly any notion of P could not have been.

Hence I was surprised when I accidentally (more on that later) came across the following: 

In 1910 (really, 1910) H.C. Pocklington analyzed two algorithms for solving quadratic congruences and noticed that

one took time proportional to a power of the log of the modulus, whereas

the other took time proportional to the modulus itself or its square root. 

THAT is the distinction between P and NOT-P. 

The paper is titled The determination of the exponent to which a number belongs, the practical solution of certain congruences, and the law of quadratic reciprocity. It appeared in 1910, in the Proceedings of the Cambridge Philosophical Society, Volume 16, pages 1-5. (I could not find it online. If you know of a place online for it, leave a comment.)

ADDED LATER: Here is the article in pieces: Page 1, Pages 2-3, Pages 4-5.

How did I come across this? And why had I NOT come across this in my roughly 40 years working in complexity theory? 

I came across it while reading a blog post of Scott's, The Kolmogorov Option (see here), where Pocklington is mentioned in passing. I am surprised how arbitrary the set of things one knows can be. I have put the Pocklington story in the Demaine-Gasarch-Hajiaghayi book Computational Intractability: A Guide to Algorithmic Lower Bounds so that this knowledge gets to be better known.

ADDED LATER: That Cobham and Edmonds are known for discovering or inventing P is an example of the well known

Columbus Principle: Things are named after the LAST person to discover them. (Note that Columbus was the last person to discover America.)

Bonus Question: For most principles that are not named after their author, the author is unknown. NOT in this case: I KNOW who coined the term `Columbus Principle'. Do you? (It was not me.)







Thursday, November 10, 2022

The Structure of Data and Machine Learning

Terry Tao entitled his 2006 Fields Medal Lecture "The Dichotomy between structure and randomness" and stated the Structure Theorem: every object is a superposition of a structured object and a pseudorandom error. He gave many examples and showed how he used these results to prove (with Ben Green) that the primes contain arbitrarily long arithmetic progressions.

There is a nice Kolmogorov interpretation. Recall K(x) is the length of the smallest program that produces the string x. For a finite set S, K(S) is the length of the smallest program that recognizes membership in S. For a string x, take the largest set S containing x such that K(x) is close to K(S) + log(|S|). S is the structure and x is a random element of S. Vereshchagin and Vitanyi have a nice paper on this topic.
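To make that one line of math explicit (a minimal sketch of the two-part-code view, following my reading of Vereshchagin-Vitanyi): describing x by first describing S and then giving x's index inside S costs

K(S) + \log_2 |S|

bits, which is always at least K(x) - O(1). When we can find an S with

K(S) + \log_2 |S| \approx K(x),

the two-part description is essentially optimal: the K(S) bits are the structure, and the \log_2 |S| index bits are incompressible given S, which is exactly what it means for x to look like a random element of S.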

Machine learning aims to discover the set S from a set of examples x. This is why I think of P = NP as giving us an ideal machine learning algorithm--use P = NP to find a circuit that describes S for a time-bounded version of Kolmogorov complexity. Recent tools in machine learning seem to find this S without needing the full power of P = NP.

Consider languages, where German or Japanese is a random example of a "natural language". Linguistics tries to understand the structure of natural languages. Recent ML translation algorithms use that structure (without understanding it) to translate between pairs of languages.

How about generative AI? Diffusion methods create a set S of all reasonable images by turning images into random noise. To create images it reverses that process, starting with random noise to create random elements of S. Prompts just add conditions to the process.

I read the Vereshchagin-Vitanyi paper back in 2004, and the Kolmogorov structure model goes back to the 1970s. Finding the structure seemed intractable for any interesting application. Now that ML is proving this wrong, the world is a different place.

Monday, November 07, 2022

Euclidean TSP is NP-hard but not known to be in NP. Why not known?

BILL: Lance, I was looking into TSP, metric TSP, and Euclidean TSP since I am going to teach about P, NP, NPC, and approximating NPC problems, and I came across the following from Lipton's book The P=NP Question and Gödel's Lost Letter (I paraphrase):

Graham proved that Euclidean TSP is NP-hard. But it is open if it is in NP. The difficulty hinges on a beautiful problem that is still open over thirty years later: Can we tell, given naturals a_1, ..., a_n, k, whether

\sqrt{a_1} + \cdots + \sqrt{a_n} \le k ?

What I want to know is, is it still open? Lipton's book was written in 2010, so that was, uh, uh...

LANCE: 11 years ago.

BILL: Which is a long time ago--so has it been cracked?

LANCE: No it has not. And, by the way, I blogged on it in 2003.


This raises some questions:

1) Is the sqrt problem above in P? NP? (I have seen it stated that the problem is in PSPACE.) For a feel of why precision is the issue, see the sketch at the end of this post.

2) Where do people think the problem is?

3) Why is it still open? Some options (I am sure there are more):

a) Not that many people are working on it. But if not, why not?

b) The problem is just dang hard! (That's probably why P vs NP is still unsolved, why FLT took so long, and why my proof of the Riemann hypothesis was so long in coming.) I am reminded of Erdos' quote on the Collatz Conjecture: Mathematics may not be ready for such problems. And you all know what Erdos said about R(5).

c) Reasons a and b above may lead to a death spiral: people THINK it's hard so they don't work on it, then since nobody works on it no progress is made, reinforcing that it's hard.
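As promised, here is a minimal sketch (mine, purely illustrative--NOT a polynomial-time algorithm, which is exactly what's open) of why the problem is about precision: you can always compare \sqrt{a_1} + \cdots + \sqrt{a_n} with k by computing with more and more digits, but nobody knows a polynomial bound on how many digits are needed in the worst case.

```python
# Sketch: decide sum(sqrt(a_i)) <= k by increasing precision until the
# answer is unambiguous. The open question is how many digits suffice in
# the worst case; no bound polynomial in the input's bit-length is known.
from decimal import Decimal, getcontext

def sqrt_sum_leq(a, k, max_digits=10_000):
    digits = 30
    while digits <= max_digits:
        getcontext().prec = digits
        s = sum(Decimal(ai).sqrt() for ai in a)
        # Crude bound on the accumulated rounding error of the n square roots.
        err = len(a) * (s + 1) * Decimal(10) ** (5 - digits)
        if s + err < k:
            return True
        if s - err > k:
            return False
        digits *= 2  # too close to call: retry with more precision
    raise RuntimeError("could not separate the sum from k (they may be equal)")

print(sqrt_sum_leq([2, 3, 5], 6))  # sqrt 2 + sqrt 3 + sqrt 5 ~ 5.38, so True
```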



Thursday, November 03, 2022

Should you quit Twitter and Texas?

Generally with some exceptions, I use Facebook for personal stuff, LinkedIn for Illinois Tech stuff and Twitter and this blog for CS stuff. Many of you got to this post through the Twitter link. Now that Elon Musk has bought the social media company, should I and the rest of the academic twitterverse move on to something else?

I'd say not yet. Let's see what Elon does to the place. Maybe he can allow more points of view, without turning it into a cesspool. Or maybe he ruins it. It'll be a network effect--if too many academics leave Twitter, I'd have to follow or I'd have few followers. I wonder where they will go. I hope it isn't TikTok.

In a similar vein, I often hear of those who suggest we not hold conferences in certain jurisdictions for political reasons, for example Texas, because of its laws restricting abortion and transgender rights. I don't believe computer science, as a field, should be making decisions based on politics. Academics who live in these states don't generally hold the same views as the political leaders in those states.

Should we not have meetings in Illinois because some in our field might be opposed to abortion? Or do we just assume everyone in the field has the same political views? Individuals can make their own choices as to whether to attend, but it's best when politics is left out of academics. FOCS 2022 is wrapping up today in Denver. Seems like a safe choice--perhaps all US conferences in the future should be in Colorado.

There are limits--I wouldn't attend or organize a conference in Russia in the near future. But if we start eliminating locations based on politics, we'll only be able to meet up in the metaverse, and we won't have social media to tell us how to get there.

Sunday, October 30, 2022

What was the recent Nobel Prize in Physics really about? (Guest Post)

David Marcus was a math major a year ahead of me at SUNY Stony Brook (he graduated in 1979, I graduated in 1980). He then got a PhD from MIT in math, and is a reader of this blog. Recently he emailed me that he thinks the current Nobel Prize winners in Physics do not understand their own work. Is it true? Let's find out!

------------------------

(Guest blog from David Marcus)

2022 Nobel Prize in Physics Awarded for Experiments that Demonstrate Nonlocality

The 2022 Nobel Prize in Physics was recently awarded to experimenters who demonstrated that the world is nonlocal. The curious thing is that neither the writers of the Nobel Prize press release nor the recipients seem to understand that this is what they demonstrated.


For example, the press release (see here) says: "John Clauser developed John Bell's ideas, leading to a practical experiment. When he took the measurements, they supported quantum mechanics by clearly violating a Bell inequality. This means that quantum mechanics cannot be replaced by a theory that uses hidden variables." That is not what the experiments mean, and the statement is false.

The word "locality" means that doing something here cannot instantly change something other there.

The experimental setup is the following: You prepare two particles, A and B, and send them in opposite directions so that they are far apart. You and your colleague do experiments on each particle at the same time. If you and your colleague perform the same experiment, then, from your experiment on A, you can predict with certainty the result of your colleague's experiment on B (and vice versa).

In a paper in 1935, Einstein, Podolsky, and Rosen pointed out that, assuming locality, the experimental results at A and B must be determined by the source that prepared the particles. They didn't actually say, "assuming locality", but they implicitly assumed it. (If you disagree with them, please offer an alternative.)

In 1964, John Bell published his paper. In it, he considered three of the experiments that could be done on the particles A and B. Assuming the results are determined by the source (which follows from Einstein, Podolsky, and Rosen's argument), he derived an inequality on the correlations between the results of the three experiments on the two particles. The math is simple; for details, see here.
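A minimal version of the math (this is the three-setting form popularized by Mermin, not necessarily the exact inequality Bell wrote down): suppose, as the EPR argument forces under locality, that each pair of particles carries predetermined answers A, B, C \in \{+1, -1\} to the three possible experiments. Among any three values in \{+1, -1\}, at least two must be equal, so if the two sides independently pick different settings uniformly at random,

P(outcomes agree) \ge 1/3.

Quantum mechanics predicts, for suitably chosen settings, agreement with probability only 1/4, and experiment sides with quantum mechanics. Hence the predetermined values--and with them locality--must go.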

The Nobel Prize winners did experiments, and their results violated Bell's inequality (or similar inequalities). Hence, the world is nonlocal.

The simplest theory that agrees with experiment is Bohmian Mechanics. This is a deterministic theory of particles whose motion is governed by a wave (the wave function being the solution of the Schrödinger equation). Of course, Bohmian Mechanics is nonlocal, as is the world.

Thursday, October 27, 2022

The Media Coverage of the Matrix result is Terrible (though not worse than usual)

BILL: A computer program (or an AI or an ML or whatever) found a BETTER way to do matrix mult! It's in the same spirit as Strassen. I've always wondered if Strassen was practical, since it is simple and computers have come a long way since 1969, though I suspected not (I WAS WRONG ABOUT THAT). I'll blog about it and ask if Strassen will ever be used/practical (I did that post here).

READERS: Uh, Bill, (1) Strassen IS used and practical and (2) the new algorithm only works in GF(2). (Lance did a post about the new algorithm where he makes this explicit here.) Some readers claimed it was GF(2^k) and some that it was fields of characteristic 2. In any case, NO, it is not a general algorithm.

BILL: There is good news and what others might consider bad news but I do not.

GOOD NEWS: I learned that Strassen IS practical and used, which I did not know. 

GOOD NEWS: I learned that I was WRONG about the new algorithm since I just assumed it worked in general, and updated the post so others would not be deceived. 

BAD NEWS: Darling asked if I was embarrassed to be wrong. If I were embarrassed that easily I would have quit blogging in 2009.

DARLING: So Bill, how did you get it so wrong?

BILL: Well, obviously my bad for not doing my due diligence. But that's not what's interesting. What's interesting is that if you read the articles about it in the popular press you would have NO IDEA that it only works mod 2. It's like reading that quantum computing will solve world hunger.

DARLING: It won't?

BILL: No it won't. 

DARLING: I was being sarcastic. 

BILL: Anyway, the coverage pushed two points:

a) IMPRESSIVE that a computer could FIND these things that humans could not. This is TRUE. (Gee, how do I know that? The Gell-Mann Effect is that people who are disgusted when they read a newspaper article on something they know about and find the mistakes later assume that the other articles are fine. SHOUT OUT to Jim Hefferon, who told me the name Gell-Mann Effect and left a comment with a pointer. The original version of this post had a BLANK there.)

b) The algorithm is practical! They did not quite say that but it was implied. And certainly there was NO mention of it only working in GF(2). And I was fooled into thinking that it might be competitive with Strassen. 

READERS (of this blog entry, I predict): Uh, Bill, the popular press getting science news wrong and saying it's more practical than it is probably predates the Bible. I can imagine the Cairo Times in 2000 BC writing `Scientists discover that in any right triangle with sides a,b,c, a^2+b^2=c^2, and this will enable us to build food silos and cure Hunger.' In reality they knew that the 3,4,5 triangle was a right triangle, were nowhere near a proof of a general theorem, and I doubt it would have helped cure hunger.

BILL: This time the news was REALLY CLOSE to what I do (if R(5) is found by a computer and the media claims it's practical I'll either have a very angry blog or repost my April Fools' Day article on Ramsey Theory's application to History) AND I posted incorrectly about it. So, to quote many a bad movie:

THIS TIME ITS PERSONAL!

Monday, October 24, 2022

Cheating in Chess and in Class

In the 24th move of the second game of the 1978 Chess Championship, a cup of blueberry yogurt was delivered to the defending champion Anatoly Karpov who offered a draw shortly thereafter. The challenger Victor Korchnoi claimed the flavor of yogurt was a coded message to Karpov and later in the tournament all food deliveries had to be decided on in advance. The good old days.

With computer chess programs now far more powerful than humans, chess cheating has become far more common and came to a head last month with the controversy between Magnus Carlsen and Hans Niemann. Did Niemann cheat in his win over Carlsen in St. Louis or was it just a rare upset? How can we tell?

This brings up cheating by students in class. Reports and statistics show that cheating has increased over the last few years. The pandemic played a role, but a good rule is that the pandemic didn't change behaviors, rather accelerated changes already in progress. Technology has made it easier to cheat. It's difficult to near impossible to create a homework problem where someone couldn't just look up an answer. Sites like Chegg provide solutions to all sorts of problems while there are many sites where you can hire someone to write a paper or do a project for you. Advances in generative AI, like GPT-3 and GitHub Copilot, will soon make cheating as easy as clicking a button.

But it's more than technology. As students view university education less about learning and more about getting the credentials for a job, the inhibitions to cheat disappear. And while the vast majority of students don't significantly cheat, it's hard for anyone to avoid using Google when they get stuck on a problem. 

We can continue to use technology to fight the technology in an ever-growing arms race to catch cheaters, but it can feel like a losing war. We should take solace that the students who work hard solving problems and projects will be the ones who will succeed in life.

Thursday, October 20, 2022

Alpha Tensor

In a recent post, Bill used the announcement of a new AI multiplication algorithm to discuss the applications of Strassen's famous algorithm. For this post I'd like to focus on the new algorithm itself, Alpha Tensor, the algorithm behind the algorithm, what it has actually accomplished and what it means for us theorists. 

To multiply two 2x2 matrices in the usual way you need eight multiplication steps. In 1969 Strassen surprised the world by showing how to multiply those matrices using only seven multiplications.

You can recurse on larger matrices. For 4x4 matrices you can use 7^2 = 49 multiplications instead of the naïve 64. In general for nxn matrices you need roughly n^{log_2 7} ≈ n^{2.81} multiplications.
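To make the recursion concrete, here is a minimal sketch of Strassen's scheme in Python (my own illustration, not AlphaTensor output; it assumes square matrices whose dimension is a power of 2, and real implementations pad and tune the cutoff):

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Multiply square matrices A and B; dimension must be a power of 2."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B  # fall back to the naive method on small blocks
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Strassen's seven recursive products (instead of eight)
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    # Reassemble the four quadrants of the product
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

AlphaTensor's contribution is, in effect, a new list of products like the M's above: 47 of them for 4x4 blocks over GF(2).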

No one has found an algorithm for 4x4 matrices that uses fewer than the 49 multiplications you get from recursing on Strassen. Alpha Tensor does so for the special case of working over GF[2], where addition and subtraction are interchangeable. Their algorithm does not work for general fields such as the real numbers.

Here's the full table of Alpha Tensor results from the Nature paper for multiplying an nxm matrix by an mxp matrix. The Modular column is for GF[2] and the standard column is for general fields. Alpha Tensor does improve on the best known for general fields for specific problems like multiplying a 3x4 matrix by a 4x5 matrix. Much of the press failed to make this distinction for 4x4 multiplication, leading to some confusion.


What does this mean for theory? Recursing on 4x4 matrices now reduces the time for matrix multiplication to roughly n^{2.78}, nowhere close to the best theoretical upper bound of about n^{2.37}. The Alpha Tensor result may be more practical though time will tell.

Manuel Kauers and Jakob Moosbauer, shortly after the Alpha Tensor announcement, reduced the 5x5 case over GF[2] to 95 multiplications. Nice to see the last word isn't by machine (yet!), but that shouldn't reduce the excitement over Alpha Tensor. Often we see a breakthrough followed by a small improvement. Note that 95 multiplications for 5x5 matrices won't give a faster asymptotic algorithm for nxn multiplication than Strassen.

What excites me the most is not the algorithm, but the algorithm to find the algorithm. Alpha Tensor uses the tools that AlphaZero used to play Chess and Go to search the large search space of potential algorithms using Monte Carlo Tree search, basically searching at random and learning and updating the probabilities of the search. Before using machine learning, we had few good approaches to searching large combinatorial spaces of this nature. 

In general, most new algorithms come from new approaches, not just from the structural decomposition we see in matrix multiplication so theorists won't be out of a job anytime soon. Nevertheless this is just another lesson that using ML has dramatically improved our ability to search through a large number of possibilities looking for a specific solution. 

The Alpha Tensor Nature paper was submitted in October of 2021. A year is an eternity in the ML world. I wonder what is happening now that we don't know about.

Tuesday, October 18, 2022

BYTE LYNX- an awesome video game/Am I an influencer?


Tucker Bane is a friend of mine who has an AWESOME video game available that is called BYTE LYNX.

I am curious: can I be an INFLUENCER?

Let's find out!

At the link below there are

a) Reviews of the game.

b) Videos of the game being played.

c) The ability to purchase and download the game.

Link is here.



Thursday, October 13, 2022

How Not to Pass a Polygraph Test


Many years ago I was asked to serve on an advisory board for an organization that did confidential research. To be on the board I had to have US top secret clearance. The first step was filling out a lengthy form which asked deep details about every aspect of my life. Then there were the interviews with me and with many people I interacted with, especially internationally, and I have many international colleagues. Eventually I got past these steps.

The final step required taking a polygraph (lie-detector) test. So I flew to Baltimore to visit a nondescript office building near the airport. I failed the test. Twice more I went to Baltimore and failed those tests as well.

Just to be clear, I never tried to falsify or hide information on these tests. In one case I was asked "Do you live in Atlanta?" I said no. The person administering the test stopped and said I put my address down as Atlanta. I said my mailing address was Atlanta but (at the time) I lived just north of the border in Sandy Springs. She said I should use Atlanta for the test, in other words I should lie. The test didn't go well after that.

In another case, I was asked if I was ever arrested. For the record, I have never been arrested but the answer came up as inconclusive. The administrator, different than before, trusted the machine more than me and the rest of the day didn't go well. 

Perhaps the test wasn't meant to just test whether I was telling the truth, but also my ability to keep a secret. At least that would make more sense why I failed three times. More likely I took questions too literally, a product of a mathematician's mind.

I never joined the advisory board but that wasn't the worst of it. In 2014 the Chinese hacked into the US Office of Personnel Management taking information from, among others, those who applied for security clearance. It's the main reason I keep security freezes with the credit bureaus.

Sunday, October 09, 2022

Will Strassen's Matrix Mult Alg ever be practical?

All time bounds are asymptotic and really O-of.

Recall that Strassen found a clever way to multiply two 2x2 matrices with 7 mults (and lots of adds), leading to a matrix mult alg running in n^{\log_2 7} = n^{2.807...}.


Recently (see here) a deep-mind AI found a way to multiply two 4x4 matrices with 47 mults (and lots of adds), leading to a matrix mult alg running in n^{\log_4 47} = n^{2.777...}. NOTE ADDED: The new algorithm only works over GF[2] for 4x4 matrices.
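Where do these exponents come from? A quick sketch of the standard recurrence argument (my addition): splitting an nxn product into blocks gives

T(n) = 7 T(n/2) + O(n^2) for Strassen, and

T(n) = 47 T(n/4) + O(n^2) for the new algorithm.

Since \log_2 7 ≈ 2.807 > 2 and \log_4 47 ≈ 2.777 > 2, the master theorem gives T(n) = \Theta(n^{\log_2 7}) and T(n) = \Theta(n^{\log_4 47}) respectively; the additions only affect the hidden constant.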

Much better is known, see our blog posts here and here.


The more advanced algorithms are complicated and have large constants so will never be practical. But Strassen's result, and now the new algorithm, SEEM to me they could be practical.

(ADDED LATER- many of the comments inform me that Strassen IS practical and IS being used. Great! Now we know!)

Thoughts about Strassen that also apply to the  new algorithm. 

1) n has to be large for Strassen to give an improvement. But as we deal with larger data sets the value of n is getting larger.

2) People are mostly interested in sparse matrices for which there are better methods. I've heard that for a while- but is it still true? I thought ML used dense matrices. 

3) Strassen is hard to code up. Actually it doesn't look that hard to code up. However, I have never tried to code it up, so maybe there are subtle points there.

4) Strassen only works on matrices of size 2^n x 2^n. You can pad matrices out but that might kill whatever time advantage you get. (The new alg only works on  4^n x 4^n). 

5) Strassen uses recursion and there is the hidden cost of recursion. I think that is a thing of the past, and our younger readers do not know what I am talking about.

6) (This is obvious) the recursion would only go down to a certain level and THEN you would use ordinary Matrix Mult. This may also add time. 


I suspect that 2 and 4 are the most important reasons Strassen (or the new algorithm) is not practical, BUT I would like to hear your thoughts.

Does any package NOW use Strassen's Algorithm?

Side Note: I like to ask students if they think there is a better-than-cubic algorithm for Matrix Mult. They do not. Then I show it to them and tell them THIS is why LOWER BOUNDS are hard. You have to show that NO, nobody clever will find a trick you hadn't thought of. 


Thursday, October 06, 2022

Art and Technology

Last weekend I went to one of Chicago's jewels, the Art Institute, and saw the opening of a new exhibit by Berlin-based artist Josephine Pryde entitled The Vibrating Slab, referring mainly to the phone that constantly tries to gain our attention. The exhibit used photographs and sculptures to tie smart phones to prehistoric rocks. No pictures here because, ironically, they didn't allow us to take photos with our "slabs".

After I saw the Pryde exhibit I went to see the artist herself give a presentation. She related a story about going to the movies and seeing Top Gun: Maverick, not knowing it was a new release. Tom Cruise, who controlled a computer with hand movements in Minority Report, goes old-school in Maverick. Cruise and several young actors, through various plot contrivances, flew 20th century planes in a movie that could have taken place in the 90s. According to IMDb, at the insistence of Tom Cruise, minimal green screen and CGI aerial shots exist in the film, and even the close up cockpit shots were taken during real in-flight sequences. Old school indeed. Kind of the opposite of, say, the Disney series The Mandalorian, filmed on a soundstage with everything generated by CGI.

Pryde's exhibit looked at the interaction with technology as art. Upstairs from Pryde's exhibit was art from technology, a series of prints that David Hockney created on another slab, the iPad, in Normandy during the early days of the Covid pandemic. 

No. 340, 21st May 2020 - David Hockney

On the way from Pryde's exhibit to the lecture I passed through the Art Institute's impressionism collection and compared real Monets with the fake one I created with Dall-E. Monet manages to capture a detailed scene with broad brush strokes--if you zoom in the detail disappears. Dall-E can't quite pull that off.

Vétheuil by Monet

Monet Dagstuhl by Dall-E


Tuesday, October 04, 2022

Is it okay for a paper or book to say `for more on this topic see Wikipedia Entry BLAH'?

One of the proofreaders for Computational Intractability: A Guide to Algorithmic Lower Bounds (available here) made the following objection, which raises some questions.

I object to telling readers to see a Wikipedia Entry.  Wikipedia is marvelous, but it is unstable. I have been led astray by short-lived editorial changes made by trolls. 

The proofreader is surely correct that  `See Wikipedia entry X' should be minimized. And indeed, I have gone through all of the cases we had of such things and tried to minimize them. But there are times when there seems to be no way around it. Or maybe there is but I can't see it. 

a) I want to refer to the set of problems that are (exists R)-complete. The ONLY list I know of is on Wikipedia here.

b) I want to discuss the complexity of the video game Braid. There is a nice Wikipedia page about the game Braid here. There are some sites that have videos about the game, but not really an explanation of it. I DID find a site that looks pretty good, here, but is that site more stable than the Wikipedia entry? There did not seem to be an official site. (I had the same issue with the 15-puzzle and some other puzzles that do not seem to have a natural official site.)

c) I want to refer the reader to a list of algorithms for discrete log. Wikipedia has a great page on this here. Is there a good article that does the same? Is it behind a paywall?


I tend to think that the Wikipedia pages above are stable and accurate. It helps that they are not on controversial topics. They should be fine. Articles that are behind paywalls are much worse. As for articles on authors' websites--are they more or less stable than Wikipedia?







Thursday, September 29, 2022

Machine Learning and Complexity

 

Schloss Dagstuhl by Monet by Dall-E

At Dagstuhl earlier this month, I hung out for a little bit with the participants of the other seminar, Knowledge Graphs and their Role in the Knowledge Engineering of the 21st Century. Knowledge graphs are what you would expect them to be: nodes are objects like "Berlin" and "Germany", with directed edges with labels like "capital". Think of having knowledge graphs of hundreds of millions of nodes and how that could help answer queries about the world. These secondary workshops are shorter and focus on creating a new vision, in this case how to maximize the importance of knowledge graphs in an increasingly ML-focused world.
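A toy illustration (mine, not from the seminar) of the data structure: a knowledge graph is essentially a set of (subject, label, object) triples that you query by pattern matching--real systems index hundreds of millions of these.

```python
# Hypothetical mini knowledge graph as (subject, label, object) triples.
triples = {
    ("Berlin", "capital_of", "Germany"),
    ("Germany", "member_of", "EU"),
    ("Paris", "capital_of", "France"),
}

def query(subject=None, label=None, obj=None):
    """Return all triples matching the fields that are not None."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (label is None or t[1] == label)
            and (obj is None or t[2] == obj)]

print(query(label="capital_of", obj="Germany"))
# [('Berlin', 'capital_of', 'Germany')]  -- i.e., "what is the capital of Germany?"
```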

Perhaps we need such a visioning seminar for complexity. While we often get lost in the mathematical questions and techniques in our field, computational complexity is designed to understand the difficulty of solving various problems. Machine learning and advances in optimization should be changing that conversation. If you imagine a world where P = NP (and I did exactly that in chapter 2 of my 2013 book) much of what you consider is starting to happen anyway. ML does fail to break cryptography but then again, isn't this the best of all possible worlds? 

Look at what Scott Aaronson said back in 2006.

If P=NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in “creative leaps,” no fundamental gap between solving a problem and recognizing the solution once it’s found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss; everyone who could recognize a good investment strategy would be Warren Buffett. 

If I can be a Monet, can Mozart be far behind? ML trading by some hedge funds is beating Warren Buffett, but remember, if everyone trades perfectly, no one beats the average. Gauss is going to be trickier but it's coming. There's a reason Scott is spending a year at OpenAI to understand "what, if anything, can computational complexity contribute to a principled understanding of how to get an AI to do what we want and not do what we don’t want".

Monday, September 26, 2022

Is the complexity of approximating Vertex Cover of degree 3 open? (ADDED LATER-NO)

RECALL:

A max-problem f has a P-time approximation scheme (PTAS) if there is an algorithm ALG such that, for all \epsilon > 0 and all inputs x,

ALG(x, \epsilon) \ge (1-\epsilon) f(x).

A min-problem f has a PTAS if there is an algorithm ALG such that, for all \epsilon > 0 and all inputs x,

ALG(x, \epsilon) \le (1+\epsilon) f(x).

(In both cases the running time is polynomial in |x| for each fixed \epsilon; the polynomial can depend on \epsilon, so it may be something like n^{1/\epsilon}.)


MAX3SAT is: given a formula with \le 3 literals per clause, find an assignment that maximizes the number of clauses satisfied.


VCB-a is Vertex Cover restricted to graphs of degree \le a.


The following are known:

0) MAX3SAT is in APX.

1) The PCP paper, here, showed that if MAX3SAT has a PTAS then P=NP.

2) Papadimitriou and Yannakakis (here) had shown much earlier that MAX3SAT \le VCB-4 with an approximation-preserving reduction.

3) From (1) and (2) we have that if VCB-4 has a PTAS then P=NP. (VC is in APX by an easy 2-approximation; a sketch of it appears after this list.)

4) Clearly VCB-2 is in P.
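Here is the promised sketch of the easy 2-approximation (a standard argument, my code): greedily pick any uncovered edge and take BOTH endpoints. The picked edges form a matching, and any vertex cover must contain at least one endpoint of each matched edge, so the output is at most twice optimal.

```python
def vertex_cover_2approx(edges):
    """Greedy matching-based 2-approximation for Vertex Cover."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge not yet covered
            cover.add(u)
            cover.add(v)
    return cover

# Example: a path on 4 vertices. Optimal cover is {2, 3} (size 2);
# the algorithm may output all 4 vertices, within the factor-2 guarantee.
print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))
```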

The following seems to be open, though if you know otherwise please leave a comment:


Is VCB-3 a) in P? b) NPC? (ADDED LATER- NPC- See comments.) 

Is the following true: if VCB-3 has a PTAS then P=NP? (ADDED LATER: NO PTAS--see comments.)


NOTE- all of the above is true for Ind Set-4 and Dom Set-4. So that leads to more open problems.


Wednesday, September 21, 2022

POSTED: Updated version of Computational Intractability: A Guide to Algorithmic Lower Bounds (new title)

We have posted a revised version of 


Computational Intractability: A Guide to Algorithmic Lower Bounds

by Demaine-Gasarch-Hajiaghayi

The book is here.

(For the original post about it, edited to use the new title (see below), see HERE.)


We changed the title (the title above is the new one) since the earlier title looked too much like the title of Garey and Johnson's classic. While that was intentional, we later felt that it was too close to their title and might cause confusion. Of course changing the title might also cause confusion; however, this post (and we will email various people as well) will stem that confusion.


We welcome corrections, suggestions and comments on the book. Email us at hardness-book@mit.edu


Monday, September 19, 2022

There are two different definitions of Integer Programming. Why?

Alice and Bob have the following conversation.

===============================

ALICE: In your book you define INT PROG as: given a matrix A and vectors b, c, find the integer vector x such that Ax \le b and c \cdot x is maximized. This is not correct! You also need x \ge 0.


BOB: Really? I always heard it without that extra constraint, though I am sure they are equivalent and both NP-complete (Alice nods). Where did you see it defined with that extra constraint?


ALICE:

The Wikipedia entry on IP

Chapter of a book at an MIT website

Something on Science Direct

A course at Duke

An article by Papadimitriou 

An article on arxiv

The book Graphs, Networks and Algorithms by Dieter Jungnickel

Bob, do you have examples where they do not use that extra constraint?

BOB: 

Math Works

Lecture notes from UIUC

Lecture notes from Lehigh Univ.

The book Parameterized Complexity Theory by Flum and Grohe

The book Computers and Intractability : A Guide to the Theory of NP-Completeness by Garey and Johnson

ALICE: Both of our lists are impressive. So now what? 

--------------------------------------------------------------------

(This is Bill again.)

What indeed!

1) Why are there two definitions of Int Prog?

2) When is it good to use which one? 
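A note from me on why the two definitions are equivalent (the standard variable-splitting trick; this is surely what Bob had in mind): any instance without the nonnegativity constraint reduces to one with it by writing each unrestricted integer variable as a difference of two nonnegative ones,

x = x^+ - x^-, with x^+, x^- \ge 0,

so max { c \cdot x : Ax \le b, x integral } becomes max { c \cdot x^+ - c \cdot x^- : Ax^+ - Ax^- \le b, x^+, x^- \ge 0 integral }. Conversely, the version with x \ge 0 is a special case of the version without it: just append the rows -x \le 0 to the constraint matrix. So the two definitions are interchangeable for NP-completeness, which is perhaps why authors are so casual about which one they state.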



Thursday, September 15, 2022

Monarchy: A Problem with Definitions

As I am sure you know, Queen Elizabeth II passed away at the age of 96 recently. I am not a royal-watcher, but I am a royal-watcher-watcher. That is, the question of why people care about the lives of these people intrigues me. A few notes:

1) Was she a good Queen? People tend to think so; however, since the job is somewhat ill-defined it's hard to say.

2) The Queen is supposed to be above politics (she does not vote- I was surprised to find out that legally she can, but she really can't). We know very few of Queen Elizabeth II's opinions on political events. But the notion of political is not well defined. One would think that if she did an appeal for people to take the COVID vax that would not be political, but somehow it is (I do not know if she did such an appeal). King Charles III believes in global warming and that we need to do something about it. This again should not be political but is. 

3) She is the second longest reigning monarch. First is King Louis XIV, who became king at the age of 4. I had a blog post complaining about this here. However, there is a more interesting point I want to make. From the first to the last day of King Louis XIV's reign not much had changed. Technology, politics, other things just didn't change much. By contrast the world changed A LOT between Queen Elizabeth II's first and last day:

a) The British were an important power in 1952. Less so now.

b) When her father died she was in Kenya and it took 4 hours to get the news to her. Now that would be immediate. 

c) Divorce was considered bad in 1952 and is why King Edward VIII could not remain king (he wanted to marry a twice-divorced woman whose ex-husbands were still alive). And now three of the Queen's children have been divorced.

d) Gay people.. enough said. There has even been a royal gay wedding, see here

Black people (can't call them African-Americans), Women,... you fill it in. 

e) When Charles wanted to get married it seemed to be important that he marry a virgin. We cannot imagine this mentality anymore. When Prince William and Kate got married they were already living together and this was NOT an issue for ANYONE. I looked up what the Church of England thought of it and all I got was some very bland comments like That's what young people do nowadays. 

4) Is the monarchy a good thing? As an American I feel I do not have a right to an opinion. If the citizens of the United Kingdom approve of the monarchy (polls show they do) then who am I to tell them they are wrong? Even so, let's look at reasons for it:

a) Tourism. It has been said that the monarchy leads to MONEY from tourism. So is it worth the price? Nobody seems to know and it would be hard to tell. However, I don't think the citizens of the United Kingdom view money as the reason for the monarchy. The American analog is giving Disneyland tax breaks to be in Florida because it generates jobs. I doubt they think of the monarchy in those mundane transactional terms.

b) CS Lewis said 

Where men are forbidden to honour a king they honour millionaires, athletes, or film stars instead: even famous prostitutes and gangsters. For spiritual nature, like bodily nature, will be served; deny it food and it will gobble poison.

This is a bit odd--they must all pretend to like the monarchy to make it work. A long time ago, when Charles and Diana were both having affairs, 80% of the citizens of the United Kingdom thought that was okay so long as they were discreet so the people wouldn't find out. But--those ARE the people.

Also odd: CS Lewis was a theologian and a believing Christian; however, his comment above can apply to God as well as to Kings.





Monday, September 12, 2022

Thirty Years of Dagstuhl

 

Dagstuhl old-timers at the original castle

I'm back at Dagstuhl for the seminar on Algebraic and Analytic Methods in Computational Complexity. My first seminar at Dagstuhl was back in 1992. I've been coming for thirty years and have been here roughly thirty times. My last trip was pre-covid (virtual Dagstuhls don't count) and I really needed this chance to hang out and talk complexity with colleagues old and new.

Some changes since my last trip. The room doors have locks (there are rumors of an incident). You have to create your own keycard on a new machine by logging into your Dagstuhl account. I had a long random password through a password manager and it was not so easy a process.

The main conference room has been updated with tech for hybrid meetings and new LED lights. Books were removed from the library to create a coffee breakout space.

No Bill this time so no typecasts. Still the best part of the week is talking and hearing about complexity. Today I learned about the orientations of Sperner's lemma, that there is one more triangle oriented according to the direction of the corner vertices than those oriented the other way. Christian Ikenmeyer used this fact to motivate a study of closure properties of #P-functions.
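In symbols (the oriented version of Sperner's lemma, as I understood the talk): color the vertices of a triangulated triangle by 1, 2, 3 subject to the usual Sperner boundary conditions, let N_+ be the number of small triangles whose three colors appear in the same rotational order as on the three corners, and N_- the number with the opposite orientation. Then

N_+ - N_- = 1.

In particular N_+ \ge 1, which is the classical Sperner's lemma.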

Wednesday, August 31, 2022

The NIST Process for Post-Quantum Cryptography

Guest post by Jonathan Katz

Over the past few months there have been several interesting developments in the NIST post-quantum standardization process.

By way of background, since the advent of Shor's algorithm in 1994 we have known that a large-scale, general-purpose quantum computer would be able to break all currently deployed public-key cryptography in (quantum) polynomial time. While estimates vary as to when (or even whether!) quantum computers will become a realistic threat to existing public-key cryptosystems, it seems prudent to already begin developing/deploying newer "post-quantum" schemes that are plausibly secure against quantum computers.

With the above in mind, NIST initiated an open process in 2017 for designing post-quantum cryptographic standards. Researchers from around the world submitted candidate algorithms for public-key encryption/key exchange and digital signatures. These were winnowed down over a series of rounds as cryptographers publicly debated the relative merits of different proposals, or showed security weaknesses in some candidates.

On July 5 of this year, NIST announced that it had selected four of the submissions as finalists for standardization. Only one candidate for public-key encryption was chosen, along with three digital signature schemes. Three of the four selected algorithms rely on the hardness of lattice problems; the only non-lattice scheme is a hash-based signature scheme. (It is possible to build digital signatures using "symmetric-key" assumptions alone.) In addition, four other public-key encryption schemes not based on lattices were designated for further study and possible standardization at a later point in time.

Less than one month later, Castryck and Decru announced a classical attack on SIKE, one of the public-key encryption schemes chosen for further study. SIKE is based on a conjectured hard problem related to isogenies on supersingular elliptic curves. The attack was not just theoretical; the researchers were able to implement the attack and run it in a day or less, depending on the security level being considered. Details of the attack are quite complex, but Galbraith gives a high-level overview. Subsequent improvements to the attack followed.

It is worth adding that the above follows an entirely classical attack shown roughly six months earlier on Rainbow, another submission to the NIST standardization process that made it to the 3rd round. (Rainbow is a signature scheme that relies on an entirely different mathematical problem than SIKE.) For completeness, note that none of the four finalists are impacted by any of these attacks.

A few reflections on the above:
  • It is amazing that the factoring and RSA problems are still hard (for classical computers), more than 40 years after they were proposed for cryptography. The same goes for the discrete-logarithm problem (in certain groups).
  • It is not easy to find other hard mathematical problems! Code-based cryptography has been around about as long as factoring, but has been somewhat unpopular for reasons of efficiency. Lattice-based cryptosystems still seem to give the leading candidates.
  • We need more (non-cryptographers) studying cryptographic assumptions. The attacks on SIKE involved deep mathematics; attacks on lattice problems may involve algorithmic ideas that cryptographers haven't thought of.

Wednesday, August 24, 2022

Computational Intractability: A Guide to Algorithmic Lower Bounds. First draft available! Comments Welcome!

(This post is written by Erik Demaine, William Gasarch, and Mohammad Hajiaghayi)

In 1979 Garey and Johnson published the classic

        Computers and Intractability: A Guide to the Theory of NP-Completeness

There has been A LOT of work on lower bounds since then.

Topics that were unknown in 1979 include:

Parameterized complexity,

Lower bounds on approximation,

Other hardness assumptions (ETH, 3SUM-conjecture, APSP-conjecture, UGC, others),

Online algorithms,

Streaming algorithms,

Polynomial parity arguments, and

Parallelism.

In addition, many new problems have been shown complete or hard in NP, PSPACE, and other classes.


Hence a sequel is needed. While it is impossible for one book to encompass all, or even a large fraction, of the work since then, we are proud to announce a book that covers some of that material:

Computational Intractability: A Guide to Algorithmic Lower Bounds

by Erik Demaine, William Gasarch, and Mohammad Hajiaghayi. MIT Press. 2024

See HERE for a link to a first draft.

We welcome corrections, suggestions and comments on the book. Either leave a comment on this blog post or email us at hardness-book@mit.edu



Monday, August 22, 2022

20 Years of the Computational Complexity Weblog

Birthday Cake

I first posted on this blog twenty years ago today. It is still the oldest and longest running weblog in theoretical computer science, possibly in all of computer science. In those twenty years we've had nearly 3000 posts, over 23,000 comments, and 10,000,000 page views. Bill Gasarch joined me officially as a co-blogger over 15 years ago on March 30, 2007.

We've seen major results in computational complexity but the biggest challenges remain, in particular major separations of complexity classes. We've also had a front row seat to a computing revolution with  the growth of cloud and mobile computing, social networks connecting us for better or for worse, and incredible successes of machine learning. It's almost as though we've been handed an oracle that gives us much of the goodness of P = NP while leaving cryptography intact. 

What will the next twenty years bring? We'll be watching, posting and tweeting. Hope you keep reading and commenting. 

Thursday, August 18, 2022

Conference Modality

We have had an almost normal summer conference season, for some sense of normal. At one of those conferences I participated in a hybrid conversation about whether the conference should be in-person, virtual or hybrid the following year. Here are some general thoughts.

In-Person

The traditional conference format. People travel from near and far to a hotel, conference center or campus location. Talks given in large rooms, often in parallel. A reception, some social activities, participants gathering in small groups to go out for meals. 

Positives: In-person maximizes interaction between participants. Being physically away from your home means you can focus your time on the conference and your fellow participants. This was more true before the mobile devices/laptops/internet days, but still most participants will spend more time on-site than on-line.

Negatives: Expensive--with registration, hotel and air fare, even a domestic participant could have to pay $2000 or up, higher for those traveling internationally. Visas can be hard to get. Some still feel unsafe in large groups. People often leave early, pity the last speakers. And don't forget the carbon footprint. 

As countries declare war on other countries or states deny certain rights, there is a push against meetings in certain places. Note the disclaimer for next year's FCRC. You might upset some people if you have conferences at these locations (and others if you don't).

Virtual

Virtual conferences would never in the past have been taken seriously but Covid forced our hands. 

Talks are streamed or pre-recorded. Some interaction with chats in talks, zoom get-togethers, or through systems like Virtual Chair. Even if we had a perfect "metaverse" experience where we could get together as though we were in person, not being there in person means we wouldn't make it a priority.

The big advantages: costs are low, it's easy to attend talks, and there is no danger of spreading disease. Still, a virtual conference can feel too much like just a collection of streamed and recorded talks.

Hybrid

So let's make the conference hybrid and have the best of both worlds. Alas, it doesn't work out that way. It's nearly impossible to have good interaction between in-person and virtual participants--basically you have to run two separate meetings. Do you allow virtual talks or require an author to show up in person?

How do you set prices? Lower prices for virtual attendance increase access but decrease in-person attendance. Participants (or their advisors) might opt to save expenses and attend virtually instead of in-person, reducing the networking opportunities for everyone.

Most conferences tend to take the hybrid route to avoid the complaints if one went fully in-person or virtual, but hybrid just pretty much guarantees a mediocre experience for all.

Opinion

My suggestion is to run the conference virtually some years and in person the others. We already have too many conferences, a byproduct of our field using conferences as the primary publication venue. I suggest following conferences like the International Congress of Mathematicians or the Game Theory World Congress, held every four years. If the main conference of a field is held every four years, researchers, particularly senior researchers, make a bigger effort to be there. You can have the virtual meetings the other years so researchers, particularly students, can continue to present their work.

There are no easy solutions, and CS conferences have not worked well for years. Maybe the struggle to define future conferences will let us focus more on connecting researchers than on being just "journals that meet in a hotel".

Monday, August 15, 2022

A non-controversial question about the Documents Donald Trump had in his house

This is a non-partisan post. In the interest of disclosure I will divulge that I do not think private citizens should have top secret government documents in their house.

One phrase I kept hearing in the reporting was (I paraphrase and may have the number wrong)

                                  The FBI removed 15 boxes of documents

Documents? Like--- on paper? 

a) Are the documents also online someplace? Perhaps they intentionally are not so that they can't be hacked. 

b) Is having top secret documents only on paper safer than having them in electronic form? Normally I would think so. Donald Trump  having them is a very unusual case. 

c) Having to store all of those documents on paper would seem to create storage problems. I can imagine someone with NO bad purposes making copies and taking them home since they are tired of only being able to read them in a special room. 

d) A problem with having them ONLY on paper is that if an accident happens and they get destroyed there is no backup. Or is there? Are there copies somewhere? That leads to twice the storage problems. 

e) There is a tradeoff between security and convenience. Having the documents only on paper is an extreme point on that tradeoff, but it may be the right one. It may depend on how important it is to keep the documents secret. 

f) I've heard (apocryphal?) stories about some top secret documents also being available in public through quite legal sources (e.g., a physics journal that discusses nuclear stuff). Does the government classify too much? If so then the problem arises of people not taking the classification seriously and getting careless. I doubt that is what happened here. 

g) The question I am most curious about is: why did he take them? For most of his other actions his motivations are clear (e.g., he is pushing STOP THE STEAL since he wants to be president). But for this one it's not clear. Unfortunately, I doubt we'll ever find out. Maybe the answer is in some documents, either on paper or electronic. 





Monday, August 08, 2022

The Godfather of Complexity

Juris Hartmanis 1928-2022

On Friday, July 29th, I was in the immigration line at an airport in Mexico. My phone rang with Bill Gasarch on the Caller ID but, as I was starting vacation, I declined the call. The voicemail gave me the sad news that Juris Hartmanis, the person who founded computational complexity and brought me into it, had passed away earlier that day. I messaged Bill and told him to write an obit and I'd follow with something personal when I returned.


Hartmanis and Stearns in 1963

In November 1962 Hartmanis, working with Richard Stearns at the GE Labs in Schenectady, determined how to use Turing machines to formalize the basic idea of measuring resources, like time and memory, as a function of the problem being solved. Their classic Turing Award-winning paper On the Computational Complexity of Algorithms not only gave this formulation but also showed that increasing resources increases the set of problems one can solve. The photo above, from a post celebrating the 50th anniversary of the paper, shows Hartmanis and Stearns with the main theorem of their paper on the board.
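An aside, in my wording rather than theirs: the time hierarchy theorem in the form usually quoted today, which folds in a later simulation improvement by Hennie and Stearns, reads

If T_2 is time-constructible and T_1(n) log T_1(n) = o(T_2(n)), then DTIME(T_1(n)) is strictly contained in DTIME(T_2(n)).

Their original 1965 paper proved a version with a weaker separation; the point, then and now, is that more time provably lets you solve more problems.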

Twenty-one years later, a junior at Cornell University still trying to find his way took undergraduate theory from the man himself. Juris brought the topics to life and I found my passion. At the beginning of the class, he said the highest grade usually went to an undergrad followed by the grad students in the class. I was a counterexample, as I had the second highest grade. Never did find out who beat me out.

In the spring of my senior year, 1985, I forwent the traditional senior-slump Wines course for graduate complexity with Juris. He focused the course around the isomorphism conjecture he developed with his student Len Berman, which implies P≠NP, and Hartmanis believed the conjecture might lead to settling P vs NP. He offered an automatic A to anyone who could prove the isomorphism conjecture. I guess any other proof of P≠NP only warranted a B?

I would later be obsessed by the isomorphism conjecture as an assistant professor, coming up with not one but two oracles making it true. The isomorphism conjecture didn't end up settling P vs NP, but then again neither did any other approach.

It wasn't just me; there is a reason that many of the great American complexity theorists, including Ryan Williams, Scott Aaronson and my own PhD advisor Michael Sipser, were undergrads at Cornell. Many more were PhD students of Hartmanis.

Juris Hartmanis had a certain gravitas in the community. Maybe it was his age, the way he dressed up, his seminal research in the field, or just that Latvian accent. He founded the CS department at Cornell in the 60s and served as head of the CISE directorate at the National Science Foundation in the 90s. His 60th birthday party at the 3rd Structures in Complexity conference (now the Computational Complexity Conference) was the only time I've seen complexity theorists in ties.

Juris Hartmanis (center) being toasted by Janos Simon

A few of my favorite Hartmanis quotes:
  • "We all know P is different than NP. We just don't know how to prove it." - Still true.
  • "I only make mistakes in the last five minutes of the class." - Sometimes he made a mistake with ten minutes left but only admit it in the last five minutes.
  • "Primality is a problem not yet know to be in P but is hanging on by its fingernails with its grip continuing to loosen each day." - Juris Hartmanis said this in 1986, with primality hanging on for another 16 years.
Thanks, Juris, for creating the foundations of our field and inspiring so many of us, yours truly included, to dedicate ourselves to it.

Much more to read:

Sunday, August 07, 2022

The Held Prize for comb. opt. AND Disc Opt AND Alg AND Complexity theory AND related parts of CS.

Dan Spielman asked me to blog about the Held Prize. I first present what he sent me, and then have some thoughts.


FROM DAN: 

--------------------------------------------------------------------------------------------------

Nominations are now being accepted for the National Academy of Sciences' 2023 Michael and Sheila Held Prize. The Held Prize honors outstanding, innovative, creative, and influential research in the areas of combinatorial and discrete optimization, or related parts of computer science, such as the design and analysis of algorithms and complexity theory. This $100,000 prize is intended to recognize recent work (defined as published within the last eight years). Additional information, including past recipients, eligibility requirements, and more, can be found here.

All nominations must be submitted online. Unless otherwise stated, the following materials must be submitted: 

A letter from the nominator describing the candidate's work and why he or she should be selected for the award. No more than three (3) pages.

Curriculum vitae. No more than two (2) pages (similar to CVs included with NSF proposals).

Bibliography listing no more than twelve (12) of the nominee's most significant publications.

Suggested citation. A 50-word summary stating why the nominee should be considered for this award. (Citation examples)

Two letters of support. Support letters must be written by individuals from institutions outside both the nominator's and the nominee's institution. Up to three letters of support are accepted.

Nominations will be accepted through Monday, October 3, 2022. Please help spread the word that the nomination process is underway. 

----------------------------------------------------------------------------------------------------

BILL COMMENTS

1) The scope seems rather broad (Dan confirmed this in private email) in that it's Comb Opt AND Discrete Opt OR related fields like algorithms and complexity theory. 

2) The research has to be Outstanding AND Innovative AND Creative AND Influential. That seems hard to do :-( If they made it an OR instead of an AND I might ask someone to nominate me for my Muffin Work. It does use 0-1 programming!

3) The past winners are, of course, very impressive. But there is one I want to point out to emphasize that the scope is broad: Amit Sahai won in 2022, and here is what the webpage says about it:

For a leading role in development of cryptographic Software Obfuscation and its applications, starting from initial conception of "Indistinguishability Obfuscation" and culminating in new constructions based upon well-founded cryptographic assumptions. These breakthroughs highlight how computational complexity can enable secrecy while computing in insecure environments.

4) Comb Opt and Discrete Opt seem to be Operations Research. So this inspires the following question:

What are the similarities and differences between Operations Research and Research on Algorithms? 

I tend to think of Operations Research as being more tied to the real world--but is that true?

5) Not enough 2-letter combinations  for what you want to say: I had to use the term Operations Research instead of the abbreviation OR since I was using OR for or. Oh well. 



Saturday, July 30, 2022

Juris Hartmanis passed away on July 29 at the age of 94

 Juris Hartmanis, one of the founders of Complexity Theory, passed away on July 29 at the age of 94.  The Gödel's Last Letter blog has an obit posted here.  Scott Aaronson has some words and a guest post by Ryan Williams here. When other bloggers post obits I will update this paragraph to point to them. 

Here is one non-blog obit: here.  For an interview with Juris see here.

Hartmanis and Stearns shared the 1993 Turing Award for the paper On the Computational Complexity of Algorithms (see here for the paper and see here for his Turing Award talk). In that paper they define DTIME(T(n)) for Turing Machines. They also proved some theorems about how changes to the model (1-tape, 2-tape, 1-dim, 2-dim, others) change the notion of DTIME(T(n))--so they were well aware that the definition was not robust. They also have some theorems about computable numbers. 

We are used to the notion that you can measure how long a computation takes by counting the number of steps on a Turing Machine. Before the Hartmanis-Stearns paper this was not how people thought of things. Knuth (I suspect independently) was looking at what we might now call concrete complexity--how many operations an algorithm needs. Hartmanis-Stearns were beginning what is now called Complexity Theory. 

 Recall that, later, the Cook-Levin Theorem used Turing Machines. 

If Hartmanis-Stearns had not started the process of putting complexity on a rigorous mathematical basis, how might complexity theory have evolved? It is possible we would not have the Cook-Levin Theorem or the notion of NP. It is possible that we would ASSUME that SAT is hard and use that to show other problems hard, and also do reverse reductions to show some problems equivalent to SAT. Indeed, this IS what we do in other parts of theory, assuming the following problems are hard (for various definitions of hard): 3SUM, APSP, Unique Games. And in Crypto: Factoring, DL, SVP, and other problems. 

Hartmanis did other things as well. I list some of them that are of interest to me - other people will likely list other things of interest to them. 

0) He had 21 PhD Students, some of them quite prominent. The list of them is here.

1) The Berman-Hartmanis Conjecture: All NP-complete sets are polynomial-time isomorphic. Seems true for all natural NP-complete sets. Still open. This conjecture inspired a lot of work on sparse sets, including the result that if a sparse set S is btt-hard for NP then P=NP (proven by Ogiwara-Watanabe).

2) The Boolean Hierarchy: we all know what NP is. What about sets that are the difference of two NP sets? What about sets of the form A - (B-C) where A,B,C are all in NP? These form a hierarchy (one standard way to define the levels is sketched after this list). We of course do not know if the hierarchy is proper, but if it collapses then PH collapses.

3) He helped introduce time-bounded Kolmogorov complexity into complexity theory, see here.

4) He was Lance Fortnow's undergraduate advisor. 

5) There is more but I will stop here.
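
For concreteness, here is one standard way to set up the Boolean Hierarchy from item 2. This is my rendering from memory--there are several equivalent definitions in the literature, so check a reference (e.g., the Cai-Hartmanis line of papers) before quoting it:

BH_1 = NP

BH_{2k} = {A \cap B : A \in BH_{2k-1}, B \in coNP}

BH_{2k+1} = {A \cup B : A \in BH_{2k}, B \in NP}

So the difference of two NP sets is A \cap \overline{B}, which lives in BH_2 (this class is often called DP), and a set of the form A - (B - C) = (A \cap \overline{B}) \cup (A \cap C) lives in BH_3.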



Sunday, July 24, 2022

100 Best Number Theory books of all Time---except many are not on Number Theory

I was recently emailed this link:


That sounds like a good list to have!  But then I looked at it. 

The issue IS NOT that the books on it are not good. I suspect they all are.

The issue IS that many of the books on the list are not on Number Theory.

DEFINITELY NOT:

A Mathematician's Apology by Hardy

The Universe Speaks in Numbers by Farmelo (looks like Physics)

Category Theory in Context by Riehl

A First Course in Mathematical Logic and Set Theory by O'Leary

Astronomical Algorithms by Meeus (Algorithms for Astronomy)

Pocket Book of Integrals and Mathematical Formulas by Tallarida

Entropy and Diversity by Leinster

BORDERLINE:

Too many to name, so I will name categories (not the type Riehl talks about).

Logic books. Here Number Theory  seems to mean Peano Arithmetic and they are looking at what you can and can't prove in it. 

Combinatorics books:  Indeed, sometimes it is hard to draw the line between Combinatorics and Number Theory, but I still would not put a book on Analytic Combinatorics on a list of top books in Number Theory. 

Discrete Math textbooks: Yes, most discrete math textbooks have some elementary number theory in them, but that does not make them number theory books.

Abstract Algebra, Discrete Harmonic Analysis, other hard math books: These have theorems in Number Theory as an Application.  But they are not books on number theory. 

WHAT OF ALL THIS? 

Lists like this often have several problems:

1) The actual object of study is not well defined.

2) The criteria for being good are not well defined.

3) The list is just one person's opinion. If I think the best math-novelty song of all time is William-Rowan-Hamilton (see here) and the worst one is the Bolzano-Weierstrass rap (see here), that's just my opinion. Even if I were the leading authority on math novelty songs and had the largest collection in the world, it's still just my opinion. (Another contender for worst math song of all time is here.)

4) Who is the audience for such lists? For the Number Theory books, is the audience ugrad math majors? Grad math majors? Number theorists? This needs to be well defined.

5) The list may tell more about the people making the list than about the intrinsic quality of the objects. This is more common in, say, the ranking of presidents. My favorite is Jimmy Carter since he is the only one with the guts to be sworn in by his nickname Jimmy, unlike Bill Clinton (sworn in as William Jefferson Clinton--a name only used by his mother when she was mad at him) or Joe Biden (sworn in as Joseph Robinette Biden, which I doubt even his mother ever used). My opinion may seem silly, but it reflects my bias towards nicknames, just as the people who rank presidents use their biases. 



Wednesday, July 20, 2022

What is known about that sequence

 In my last post I wrote:


---------------------------

Consider the recurrence


a_1=1

for all n\ge 2, a_n = a_{n-1} + a_{n/2}.

For which M does this recurrence have infinitely many n such that a_n \equiv  0 mod M?


I have written an open problems column on this for SIGACT News which also says what is known (or at least what I know is known). It will appear in the next issue.

I will post that open problems column here on my next post.

Until then  I would like you to work on it, untainted by what I know. 
----------------------------------------

I will now say what is known and point to the open problems column, co-authored with Emily Kaplitz and Erik Metz. 

If  M=2 or M=3 or M=5 or M=7 then there are infinitely many n such that a_n \equiv 0 mod M

If M\equiv 0 mod 4 then there are no n such that a_n \equiv 0 mod M

Empirical evidence suggests that

If M \not\equiv 0 mod 4 then there are infinitely many n such that a_n\equiv 0 mod M

That is our conjecture. Any progress would be good--for example, proving it for M=9. M=11 might be easier since 11 is prime. 
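
If you want to gather your own evidence before reading the column, here is a minimal Python sketch. It is mine, not from the column, and it assumes n/2 in the recurrence means floor(n/2) (the standard reading). It counts how many n up to N have a_n \equiv 0 mod M:

# Count the n in [1,N] with a_n = 0 (mod M), where a_1 = 1 and
# a_n = a_{n-1} + a_{floor(n/2)}. Working mod M keeps the numbers small.
def zeros_mod(M, N=10**6):
    a = [0, 1 % M]   # a[0] is unused padding; a[1] = 1
    count = 0
    for n in range(2, N + 1):
        a.append((a[n - 1] + a[n // 2]) % M)
        if a[n] == 0:
            count += 1
    return count

for M in [2, 3, 4, 5, 7, 8, 9, 11, 12]:
    print(M, zeros_mod(M))

If the conjecture is right, the counts for M=4, 8, 12 should stay at 0, while the counts for the other values of M should keep growing as you increase N.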

The article that I submitted is HERE