Wednesday, December 21, 2022

Complexity Year in Review 2022

Complexity result of the year goes to NP-Hardness of Learning Programs and Partial MCSP

by Shuichi Hirahara

Consider the following version of Occam's Razor: find the smallest program consistent with your data. Hirahara shows that this problem is NP-hard under randomized reductions. The paper also takes a major step toward understanding the complexity of the Minimum Circuit Size Problem, a major area of study in computational complexity over the past few years.

Runner up goes to Maximum Flow and Minimum-Cost Flow in Almost-Linear Time by Li Chen, Rasmus Kyng, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg and Sushant Sachdeva. The title speaks for itself.

In the computing world, 2022 became the year machine learning took over. It seemed like almost every week we saw a new ML breakthrough, from AlphaTensor improving on Strassen, to AlphaFold mapping out the structure of nearly every known protein, to AI generating new text, pictures and code and playing Diplomacy. Not to mention ChatGPT, which will disrupt education and everything else. Enough to almost make us forget the tech layoffs and the downfall of Twitter as we know it.

We remember Fred Brooks, Juris Hartmanis, Joel Moses, Rolf Niedermeier, Alexander Vardy, Gerhard Woeginger and Fastlane.

Thanks to our guest posters Boaz Barak, David and Tomas Harris, Jonathan Katz, David Marcus, Jelani Nelson, Prahalad Rajkumar and Aravind Srinivasan.

Bill and I wish you the best for the holidays and look forward to keeping you informed and entertained with minimal AI assistance. 

Sunday, December 18, 2022

Voter Suppression, Harvard Style

The following appeared on Harry Lewis's blog, here, hence it is written in his voice, though it is co-authored. You'll see why later. I then have some comments on it.


Voter Suppression, Harvard-Style

(This piece is jointly authored by Harry Lewis (Harvard) and Bill Gasarch

(University of Maryland at College Park). Harry was Bill's PhD Advisor.)

There are elections in Hong Kong, but to get on the ballot you have to be nominated by a committee controlled by the Beijing government.

Elections for the Harvard Board of Overseers—one of Harvard’s two governing bodies—are almost as well-controlled. A Harvard Alumni Association (HAA) nominating committee curates a slate of candidates, from which alumni make their selections.

But an alternative route onto the Harvard ballot exists, at least in theory. So-called “petition” candidates have always been rare—but after several climate activists were elected in 2020, the rules were changed to make it even harder. Among other things, the number of petition signatures required to get on the ballot was raised by a factor of fifteen, to more than three thousand.

This year, noted civil libertarian Harvey Silverglate, concerned about freedom of expression at Harvard, is trying to make it onto the ballot.

The authors are computer scientists. We are neither technologically naïve nor afraid of computers. Harry has long been concerned about issues of student freedom and Harvard governance, and suggested to Bill, Harry’s sometime PhD student, that he sign Silverglate’s petition. This is an account of Bill’s trip through the resulting electronic purgatory.

To add your name, you have to fill out a web form. To access the web form, you need a HarvardKey. To get a HarvardKey, you have to fill out another web form. So far, so good.

The HarvardKey web form wanted Bill’s 10-digit HAA ID, which he was told to find on the address sticker of his copy of a recent Harvard Magazine (sent to all alumni). Bill had one handy, so he looked and found … a 9-digit number. He tried entering that number—no luck. He noticed it began with three 0s, and tried adding a fourth—that did not work either.

The web form had a number to call. Someone answered, and said some information would be needed before dealing with digits. Name (fine). Year of degree (fine). MIDDLE name (well, fine, though no one but Bill’s mother ever used it, and only when indignant). Date of birth (well, OK, but now we’re getting into territory we don’t casually reveal any more). When he got his MASTER’s degree. Bill did not know—that’s just something Harvard gives en route to the PhD. Turned out he actually didn’t need to know, an estimate was good enough. The person on the phone gave him his HAA ID, which bore no relation to the number on his address sticker.

Let’s pause there. Some people never call tech support because they have never found it helpful to do so. Any such person with a 9-digit address sticker number could not participate in the petition process.

Bill entered his HAA ID and received an error message saying that … KEY-5003 was missing. Happily, Bill had kept the support person on the phone (this was not his first rodeo).

Missing KEY-5003 turned out to mean that Harvard did not have his email address. He supplied it and was told he would get an email confirmation later in the day.

He did get an email later in the day. It listed eleven steps to claim his HarvardKey. Step 6 was to wait for a confirming email (he thought this WAS the confirming email), but after step 5 the system told him he was not in the system and it could not continue.

Another call to a support line. No, Bill was told, he had to wait 24 hours for his email address to be updated, and he would not get a confirming email. Just try tomorrow. Like the email said. Except that it didn’t say that, nor had the person he spoke to on the previous call.

Bill waited 24 hours and tried again, and got a little further through the eleven steps—and then was told to wait ANOTHER 24 hours for the account to activate.

24 hours later he tried again, from home, and failed again. Then he went to his office and succeeded—no clue why.

Now finally he got to the petition, which required Bill’s graduation year—and Silverglate’s, which Bill found, though it shouldn’t have been needed since this petition was specific to Silverglate.

Three days and two phone calls to sign the petition. To be fair, the people Bill spoke to on the phone were kind and helpful. Probably they themselves were struggling with the systems.

And we knew already that HAA is technologically challenged. A few weeks ago, it abruptly announced that it could no longer handle email forwarding. After alumni blowback, it just as abruptly announced that it would NOT end its forwarding service—oddly, while cautioning that the service was unlikely ever to work very well.

When election officials want to suppress the vote somewhere, they under-resource the voting process, forcing voters to cross town and wait in long lines. What happened to Bill is so comical that it is hard to imagine that the specifics were intentional. On the other hand, under-resourcing the petitioning process, allowing it to be so defective, misinformed, and hard to use that many people won’t exercise their franchise—isn’t that a form of voter suppression?

Why not be true to Harvard’s motto, Veritas, and just post on the web: “For the alumni to choose the Overseers is an anachronism. Today’s alumni voters can’t be trusted to do it wisely. Since we can’t get rid of this system, we are going to make it all but impossible to nominate by petition. Try if you wish, but if you do, abandon all hope, ye who enter here.”


I originally emailed Harry what I had to do to sign the petition with my point being

"Yes, I am a luddite but that has NOTHING TO DO with why I am having a problem."

(I had meant to do a blog on that topic. I may still.)

Harry saw this for what it was: voter suppression. He wrote the Op-Ed using my story and made me a co-author. (We have never published together, so it amused me that our first pub together would come 37 years after he was my advisor; perhaps a record. Does having a joint blog post count?) What is above is basically that Op-Ed (I think the version Harry submitted omitted that my mom is the only one who uses my middle name.)

He submitted this as an Op-Ed to three places sequentially. 

1) The New York Times turned us down (not a surprise) by not even acknowledging it had been submitted (really?).

2) The Boston Globe turned us down (a surprise).

3) The Harvard Crimson turned us down (a shock!).

Voter Suppression followed by censorship.

AND back to the original issue: Harvey does not have enough people on the petition yet. I urge you to READ Harry Lewis's post on Harvey here and IF you are a Harvard alumnus AND you agree that Harvey would be a good overseer THEN sign the petition. Or at least try.

Thursday, December 15, 2022

FinTech is Dead, Long Live FinTech

Bill didn't feel he had the expertise to share new insights on the FTX affair. Never stopped me.

The FTX affair is nothing short of corporate malfeasance in a poorly regulated industry, and since Bill posted, Sam Bankman-Fried has been arrested and charged with fraud and conspiracy. The whole affair has rippled through the cryptocurrency ecosphere and will likely lead to a much stricter regulatory framework than the industry was hoping for.

Cryptocurrencies have had an interesting 2022, starting with the Super Bowl ads Bill was railing against. FTX even sponsored baseball umpires, sports arenas and fortune cookies, including this prescient one

Those were in the glory times of FinTech, i.e. early 2022. Even before FTX had its problems, we had the Three Arrows Capital bankruptcy and the TerraUSD stablecoin collapse. Bitcoin lost over 60% of its value during the year. 

Of all the problems we've seen with cryptocurrency, none have been because of the cryptography itself. I'm still shocked how the person or people called Satoshi Nakamoto got it mostly right on the first try. They did fail to account for the energy needed to mine bitcoin once bitcoin became valuable, but on the other hand, who could have predicted bitcoin would become so valuable? Certainly not me.

I still don't see much of a market for most cryptocurrencies beyond speculation and illegal activities, though we have much of both. Nevertheless it feels weird in 2022 that we still use physical bills and coins. I am hopeful for a CBDC (Central Bank Digital Currency), not a stablecoin tied to a dollar, but US currency itself just in digital form.

We now use digital tickets for most events and transport. Even driver's licenses and passports are moving digital--they only get scanned anyway. There are a few remaining uses of paper--property titles, legal wills, social security cards, birth, death, marriage and divorce certificates. Perhaps we could use blockchain or even centralized databases to eliminate these as well.

Blockchain could also be helpful to track our digital goods, so we can use the music, movies, books, games, virtual clothes and accoutrements on different platforms without having to buy them twice. (HT Siva)

The most important use of FinTech will be for those who have the least, those who pay large fees to have a bank account, cash a paycheck or send money to family overseas. FinTech can be a great democratizing tool, if we use it that way and not just as another get rich scheme.

Sunday, December 11, 2022

Commercials are not logical. FTX edition.

Some people asked me to comment on FTX since I teach crypto. My insights are no better than anyone else's; however, I have wanted to do a blog post about the illogic of commercials, so I will do that with FTX as an example.

ALL conversations in the post are fictional.


Alice: Bob,  why did you invest $1,000,000 with FTX?

Bob: Because Tom Brady endorsed it. (See here for an article about that and here for a commercial Tom Brady did for FTX.) 

Alice: But Tom Brady is a football player, not a finance person. 

Bob: Well... I know that now. 

(For an absolutely fantastic and now ironic commercial for FTX see here.)


The people in commercials are  paid to hawk the product. 

a) Are they experts? In the above case about FTX, NO, they are not. So this should not work.

b) There are commercials where the people ARE experts. For example, a basketball player endorsing a sneaker brand (not quite an expert, but they DO use the product). But even this should not work since the viewers KNOW that the person is being PAID to tell us it's a good product.

c) Commercial spokesmen (Geico's gecko, Progressive's Flo, Dos Equis' most interesting man in the world, see his commercial here and a Ramsey Meme based on it here) are even stranger- they are fictional characters who are urging me to buy something. So the question are they experts? doesn't even make sense. Someone pretending to be someone they are not is pretending to like a product they do not use.

d) Some commercials pretend to be informative but are not. For example, insurance companies seem to brag about something that ALL of the insurance companies do (bundling, not-paying-for-what-you-don't-need).

e) A truly new product may have an informative commercial, just to tell me that it's out there. For example, Stephen Colbert's Americone Dream ice cream, which is awesome. (My spellchecker thinks Americone is not a word. This time it is probably right.)

f) SO, if I did a commercial for Stephen Colbert's Americone Dream ice cream, would you try it? Let's say I wasn't being paid and I truly did it for my love of that ice cream. STILL doesn't make sense- my tastes and yours may differ. However, you can try it and decide, and it costs far less than $1,000,000.


So what are we left with? The only commercials that make logical sense would be those where 

a) the person is an expert

b) the person is not being paid

c) the person is telling you something about the product that distinguishes it from its competitors OR it's a new product

d) It's not a matter of taste- there is an absolute standard that is easily understood.

Do you know of ANY commercials like that? Influencers who are NOT paid by the people whose product they are hawking (are there such influencers?) might begin to qualify. 


So LOGICALLY commercials should not work. So why do they?

a) They don't. It's possible they just are not that effective. See here for an argument that they DON'T work and here for 8 times when they did work. I think this is a great commercial (watch it to the end) but it does not make me want to go out and buy soup. (Stan Freberg wrote some GREAT commercials. They are on YouTube and I recommend them highly for entertainment. He also has several great novelty-song albums.)

b) Indirectly. They build brand awareness.

c) Because people are stupid. This is not an interesting answer since I still want to know what their reasoning is, whether or not it's faulty.

d) Because you are part of a movement. Drink the Uncola (7UP) to be rebellious! Or the 1984-Apple commercial (see here). These are both odd since drinking 7UP or having an Apple Computer seem to me to be the opposite of rebellious. 

e) Buying the product to express your philosophy: 

Ted: Carol, why did you invest $1,000,000 in FTX?

Carol: Because I believe in effective altruism.

Ted: Do you still?

Carol: This might need a rethink. But for now I'll  invest by getting a Freedom Unlimited credit card that gets cash back since Kevin Hart says I should (see here).

Thursday, December 08, 2022

Harry Lewis's talk on The Birth of Binary on Dec 8 (That's today!)

(Added later: the talk is now available here. When I tried it, it said I did not have permission, but I got it to work anyway.)

More information on Harry Lewis's talk is in his blog post about it: here

1) On Dec 8, 2022 Harry Lewis is giving the annual Thoralf Skolem Memorial Lecture on

                                                     The Birth of Binary

It's in Oslo. BOO- I can't get there!

It will be on Zoom- YEAH I can see it! Here is the link: here

It will be at 7:15AM East Coast Time- BOO- I needs my sleep!
(Plus, I am posting this after it's over.)

It will be recorded-YEAH I can see it later. CAVEAT- Will I?

(It will be linked to at the link in item 4 below.)

2) Lloyd Strickland and Harry Lewis have a book on the subject, here, titled

                               Leibniz on Binary: The Invention of Computer Arithmetic.

3) Questions like who invented binary, or who invented parallelism, or who first proved the dual-muffin theorem can be hard to answer. First is an ill-defined term. In the current era we can see who published first, which is usually well defined, though it might not get at the heart of the issue.

Which brings us to Leibniz on Binary. He had lots of notes, not really intended for modern readers or even for readers in his own time. The notes lay out binary notation and some algorithms, but not in a modern way. Hence the book is quite valuable for telling a modern audience what Leibniz did. Leibniz published very little of it. Even so, seeing what he did, the statement
                                             Leibniz invented binary
is reasonable. Some of his notes dealt with what we would now call complexity, so I need to add him to my already-long list of people who had modern ideas about complexity.
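In modern terms, the two basic things Leibniz's notes work through, positional binary notation and carrying in binary addition, can be sketched as follows (a sketch for illustration only, not Leibniz's own presentation):

```python
def to_binary(n):
    """Convert a nonnegative integer to a binary digit string
    by repeated division by 2, collecting the remainders."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))  # remainder is the next bit
        n //= 2
    return "".join(reversed(digits))

def add_binary(a, b):
    """Add two binary digit strings with a ripple carry:
    at each position the digit is (sum mod 2), the carry is (sum div 2)."""
    i, j = len(a) - 1, len(b) - 1
    carry, out = 0, []
    while i >= 0 or j >= 0 or carry:
        s = carry
        if i >= 0:
            s += int(a[i]); i -= 1
        if j >= 0:
            s += int(b[j]); j -= 1
        out.append(str(s % 2))
        carry = s // 2
    return "".join(reversed(out))
```

For example, `to_binary(13)` gives `"1101"`, and adding `"1"` to it carries through the low bits to give `"1110"`.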

4) For more on the Skolem Lecture, see here

Monday, December 05, 2022

The Future of Education is Personal


With all the excitement about ChatGPT, how will machine learning disrupt education, say five to ten years down the road?

My guess: individualized tutors. Imagine a tutor working with you, teaching you important concepts, walking you through examples, answering your questions, going at your own pace, like the Oxford system. The Oxford tutor system doesn't scale well, or at least it wouldn't if we had human tutors. But we can scale using machine learning, and we're not far away from being able to do so. Such tutors will be infinitely patient, have full knowledge of all written material, and speak in any language with any voice and personality.

You can "meet" with your tutor in many different ways, from a deep fake video chat, or with augmented or virtual reality to have a tutor in the room with you, or perhaps a physical robot, neural implant or something we haven't even thought of yet. In poorer countries you could get tutored with something as simple as text messages on a cell phone.

A tutor can take on any form. You could get tutored by fictional characters such as Yoda, Darth Vader or Miss Frizzle. ML can capture the personality of real people--imagine a course about Kurt Vonnegut taught by the author, government from Henry Kissinger or a course in quantum computing from your own personal Scott Aaronson. But most importantly you can have a tutor who looks and sounds like you, with your own language, gender, race and ethnicity.

Somehow we'll have to find ways to include the social aspects such as working in groups, socializing, playing sports and living together. But that one-on-one teaching experience that most of us cannot afford today will be cheaply available tomorrow.

And what will this all mean for teachers, professors and universities? A good question for future blog posts.

Thursday, December 01, 2022

How do we keep the community connected?

A colleague said they enjoyed watching the collapse of Twitter under Elon Musk. But I use Twitter to keep connected to the CS community. On Twitter I hear not only about new results but about the ones that excite particular people. I watch the debate between those who see ML as revolutionary and those who see ML as revolting. I see the issues that our community worries about and those that it celebrates. I follow people as they progress in their careers or outright change them. Mostly it just makes me feel part of an academic community that goes beyond my own institution.

A bit surprisingly, Twitter hasn't collapsed so far. But it could, and I expect many of my followers and those I follow will spend less time there. I set up a Mastodon account but not much happens over there, though feel free to tell me who I should be following. There are many other social networks but none that bring us together as a field.

Blog posts and their comments play a role but not like they used to. There's CS Theory StackExchange which has some good (and not so good) technical discussions but we don't really have conversations there. 

How about conferences now that researchers are (mostly) willing to attend in person? Since conferences in CS are the primary publication venue, we have too many meetings and many won't go, or at least won't go in person, if they don't have a paper in the conference.

So, once again, I suggest a big theory conference we hold every four years that everyone who's anyone will make sure to attend. Hey, it works for the World Cup. 

Monday, November 28, 2022

Would you be more productive if you didn't log on? It worked for Christopher Havens!

There are times when NOT having computer access (is that possible anymore?) can make you MORE productive. 

1) I did a lot of my muffin work when I was in Mexico for a bar mitzvah and had no computer access, and no television in English (though Walker, Texas Ranger and Kindergarten Cop were actually pretty good in Spanish, even though I don't know Spanish).

2) I did work on NIM with Cash when I was stuck at an airport for 8 hours. 

3) I proofread (on paper!) most of my book with Erik D and Mohammad H when I was for four days at a relative's house that had weak wifi and only got ABC, NBC, CBS, PBS, some local channels, and COZI (not sure why they got COZI, though I am glad since I caught a good episode of Columbo).

4) At Thanksgiving at my Mom's apartment (she's 93 years young!), with no computer, I relearned the proof of the Hales-Jewett theorem. I seem to learn/forget/learn/forget that one a lot.

There are times when being AWAY from technology is helpful. I sometimes go to the Math Library and try to NOT use my cell phone. 

Having said that I do think that overall having access to papers online is great and that overall academic productivity has increased.  But there are times when the computer can distract you and time AWAY from it is good. 

Which brings us to the story of Christopher Havens. He works in number theory and logs on very rarely, perhaps never. He just works on math 10 hours a day. He has a paper (with co-authors). It takes discipline to resist the urge to log on. How did he manage this?

He is in prison for murder.

Here is a podcast with him as a guest. 

Here is an article about him. 

Here is a math article where he is a co-author. 

Here is the short version of all of this: 

1) He is guilty of murder and is in a max security prison in America. It was a drug-related shooting (why do people only talk about drug deals gone bad when they should also talk about drug deals gone good?). When I first read Prison Inmate Solves Math Problem I thought that maybe it was a white collar criminal who majored in math and was in a min security prison with access to the web (do white collar criminals have access to the web?). But NO, Christopher Havens really did murder someone and is in max security.

2) He really has turned his life around. He really is not the person he was, and when he gets out I cannot imagine he will go back to drugs and crime. I suspect he will work on the Math Prison Project which I mention later. 

3) His mother says that when he was in high school (which is as far as he got in his education) he was helping students in math who were 2 grades above him, but he has no recollection of this.

4) When he was in the hole (solitary confinement), someone who did prison education gave him (and others, he was not singled out) an envelope of math problems to work on--pre-algebra. Christopher did them well, liked it, and began requesting more advanced math books, learning math by himself, working 10 hours a day. When he requested a book it was random which ones he would get. I don't know why some were blocked. I don't think he knows why some were blocked.

5) Some mathematicians from Italy (Italy?) contacted him and they began corresponding and yada-yada-yada, he has a paper now.

6) He has conceptualized the Math Prison Project to help other prisoners do math, though I suspect not on the level he is on. Then again, maybe the reason P vs NP is still open is that we all have too many distractions, and conference deadlines, that a prisoner would not have.

7) Some articles say that he solved an ancient problem in math that Euclid couldn't solve. This is not true. He helped solve some problems about continued fractions.

8) There is going to be a movie about him, see here. I predict it will take an interesting story and make it less interesting and more fictional. 

What to make of all this? 

1) KUDOS to him!

2) I don't know which side of the nature/nurture argument this supports:

a) He OBVIOUSLY had math talent naturally, or else he couldn't have learned all of that math.
b) He shows that HARD WORK and TENACITY can overcome other issues.

3) Back to my original point: if you had the FREEDOM to work 10 hours a day JUST on math and had no other distractions, but also limited access to books and people, would you be MORE productive? LESS productive? Also note: no faculty meetings, no teaching obligations, and no word processor to distract you.

Monday, November 21, 2022

A Celebration of Juris

On November 4th I travelled to my undergraduate alma mater Cornell for a Celebration of the Life and Career of Juris Hartmanis, who passed away in July. The workshop attracted many Cornell faculty and students, many of Hartmanis' former colleagues and students, grad and undergrad, as well as his family. For the most part, the talks did not focus on technical content but rather on memories of the great man.

I talked about how Hartmanis founded the field of Computational Complexity and brought me into it. Herbert Lin told the story behind Computing the Future, a 1992 agenda for the future of computer science led by Hartmanis and the challenge to the report by John McCarthy, one of the founders of AI. Should the agenda of computer science be solely in the hands of academic computer scientists, or should it take into account its role in the larger scientific and world-wide community? We still face these questions today.

Ryan Williams gave a powerful talk about how Hartmanis personally intervened to ensure Ryan had a future in complexity. We are all better off for that.

After the workshop, Ryan and I walked around the campus and Collegetown reminiscing on how things have changed in the two decades since Ryan was an undergrad and the four decades (!) since I was. Most of the bars and restaurants have disappeared. The Arts quad is mostly the same, while the engineering buildings have been mostly rebuilt. There's a new computer science building with another on the way.

I stayed in town to catch the Cornell football game the next day, as I once was on that field playing tuba for the marching band. They tore down the west stands to put up a parking lot and the east stands were sparsely filled watching Penn dominate the game.

Goodbye, Juris. You created a discipline, started one of the first CS departments, and plotted the future of both computational complexity and computer science as a whole. A master and commander indeed.

Thursday, November 17, 2022

Fall Jobs Post 2022

In the fall I try to make my predictions on the faculty job market for the spring. The outlook this year is hazy as we have two forces pushing in opposite directions. 

Most of the largest tech companies are having layoffs and hiring freezes amidst a recession, higher expenses and a drop in revenue from cloud and advertising. Meanwhile computing has never had a more exciting (or scary) year of advances, particularly in generative AI. I can't remember such a dichotomy in the past. In the downturn after the 2008 financial crisis computing wasn't particularly exciting as the cloud, smart phones and machine learning were then just nascent technologies.

We'll probably have more competition in the academic job market as many new PhDs may decide to look at academic positions because of limited opportunities in large tech companies. We might even see a reverse migration from industry to academia from those who now might see universities as a safe haven.

What about the students? Will they still come in droves driven by the excitement in computing, or get scared off by the downturn in the tech industry? They shouldn't worry--the market should turn around by the time they graduate, and even today there are plenty of tech jobs in smaller and midsize tech companies as well as companies that deal with data, which is pretty much every company.

But perception matters more than reality. If students do stay away that might reduce pressure to grow CS departments.

Onto my usual advice. Give yourself a good virtual face. Have a well-designed web page with access to all your job materials and papers. Maintain your Google Scholar page. Add yourself to the CRA's CV database. Find a way to stand out, perhaps a short video describing your research. 

The best sources for finding jobs are the ads from the CRA and the ACM. For theoretical computer science specific postdoc and faculty positions check out TCS Jobs and Theory Announcements. If you have jobs to announce, please post to the above and/or feel free to leave a comment on this post. Even if you don't see an ad for a specific school, they may still be hiring; check out their website or email someone at the department. You'll never know if you don't ask.

Monday, November 14, 2022

Who first thought of the notion of Polynomial Time?

(An updated version of Computational Intractability: A Guide to Algorithmic Lower Bounds by Demaine-Gasarch-Hajiaghayi is here.)

Any question like who first thought of X is often hard to answer. I blogged about who first came up with the Fibonacci numbers here. I've heard rumors that IBM had search engines way before Google but could not figure out how to make money off of them. There are other examples.

I had learned that Cobham defined P in the paper The Intrinsic Computational Difficulty of Functions, in 1965, at the Conference on Logic, Methodology, and Philosophy of Science. The paper is here. Jack Edmonds had the notion of P in the paper Paths, Trees, and Flowers, here, also in 1965.

While it is true that Cobham defined P in that paper, and he might have been the first one to do so, was the notion somehow around earlier? I first thought the answer was no. Why? Because if you look at Joe Kruskal's paper on MST (see here) you don't see anything resembling time complexity. No O(E + V log V) or whatnot. So I thought that if the notion of this algorithm runs in such-and-such time was not in the air, then certainly any notion of P could not have been.

Hence I was surprised when I accidentally (more on that later) came across the following: 

In 1910 (really, 1910) H. C. Pocklington analyzed two algorithms for solving quadratic congruences and noticed that

one took time proportional to a power of the log of the modulus, whereas

the other took time proportional to the modulus itself or its square root.

THAT is the distinction between P and NOT-P. 
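The contrast Pocklington noticed can be illustrated with a toy sketch. These are not his actual 1910 algorithms, just the same two running-time regimes for solving x^2 ≡ a (mod p):

```python
def sqrt_mod_brute(a, p):
    """Solve x^2 = a (mod p) by trying every residue:
    time proportional to the modulus p itself."""
    for x in range(p):
        if x * x % p == a % p:
            return x
    return None  # a is not a quadratic residue mod p

def sqrt_mod_fast(a, p):
    """For a prime p with p = 3 (mod 4), x = a^((p+1)/4) mod p is a
    square root of a whenever one exists.  Fast modular exponentiation
    uses only O(log p) multiplications."""
    assert p % 4 == 3
    x = pow(a, (p + 1) // 4, p)
    return x if x * x % p == a % p else None
```

For example, with p = 23 and a = 3, the brute-force search walks up to the answer one residue at a time, while the fast version computes a single modular power; both return a root whose square is 3 mod 23.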

The paper is titled The determination of the exponent to which a number belongs, the practical solution of certain congruences, and the law of quadratic reciprocity. It appeared in 1910, in the Proceedings of the Cambridge Philosophical  Society, Volume 16, pages 1-5. (I could not find it online. If you know of a place online for it, leave a comment.) 

ADDED LATER: Here is the article in pieces: Page 1, Pages 2-3, Pages 4-5.

How did I come across this? And why had I NOT come across this in my roughly 40 years working in complexity theory? 

I came across it while reading a blog post of Scott's, The Kolmogorov Option, see here, where Pocklington is mentioned in passing. I am surprised how arbitrary the set of things one knows can be. I have put the Pocklington story in the Demaine-Gasarch-Hajiaghayi book Computational Intractability: A Guide to Algorithmic Lower Bounds so that this knowledge gets to be better known.

ADDED LATER: That Cobham and Edmonds are known for discovering or inventing P is an example of the well-known

Columbus Principle: Things are named after the LAST person to discover them. (Note that Columbus was the last person to discover America.)

Bonus Question: For most principles not named after their author, the author is unknown. NOT in this case. I KNOW who coined the term `Columbus Principle'. Do you? (It was not me.) 

Thursday, November 10, 2022

The Structure of Data and Machine Learning

Terry Tao entitled his 2006 Fields Medal Lecture "The Dichotomy between structure and randomness" and stated the Structure Theorem: Every object is a superposition of a structured object and a pseudorandom error. He gave many examples and showed how he used these results to prove (with Ben Green) that the primes contain arbitrarily long arithmetic progressions.

There is a nice Kolmogorov interpretation. Recall K(x) is the length of the smallest program that produces the string x. For a finite set S, K(S) is the length of the smallest program that recognizes membership in S. For a string x, take the largest set S containing x such that K(x) is close to K(S) + log(|S|). S is the structure and x is a random element of S. Vereshchagin and Vitányi have a nice paper on this topic.
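As a crude back-of-the-envelope illustration (my addition, not from the Vereshchagin-Vitányi paper): take the model class "strings of length n with exactly k ones." Since K is uncomputable, just count bits: the two-part description K(S) + log|S| is roughly the cost of writing down (n, k) plus an index into the set, which beats the raw n bits whenever k is far from n/2.

```python
from math import comb, log2

# Toy two-part description in the spirit of the Kolmogorov structure function.
# Model (the "structure"): x is a length-n bit string with exactly k ones.
# Data-to-model code: an index into that set, costing about log2 |S| bits.
n, k = 1000, 100
model_bits = 2 * log2(n)          # crude cost to write down n and k
index_bits = log2(comb(n, k))     # log |S|: which member of S x is
two_part = model_bits + index_bits
print(f"raw: {n} bits, two-part: {two_part:.0f} bits")
```

For n = 1000 and k = 100 the index costs about 1000·H(0.1) ≈ 469 bits, so the two-part description is roughly half the raw length.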

Machine learning aims to discover the set S from a set of examples x. This is why I think of P = NP as giving us an ideal machine learning algorithm--use P = NP to find a circuit that describes S for a time-bounded version of Kolmogorov complexity. Recent tools in machine learning seem to find this S without needing the full power of P = NP.

Consider natural languages, where German or Japanese is a random example of a "natural language". Linguistics tries to understand the structure of natural languages. Recent ML translation algorithms use that structure (without understanding it) to translate between pairs of languages. 

How about generative AI? Diffusion methods create a set S of all reasonable images by turning images into random noise. To create images it reverses that process, starting with random noise to create random elements of S. Prompts just add conditions to the process.
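A toy sketch of the forward (noising) half of that process, on scalar "data" rather than images. The schedule and constants here are made up for illustration, and the learned reverse process--the actual generative part--is not shown: the point is only that repeated noising drives every starting point to the same simple distribution.

```python
import random, math

random.seed(0)

# Forward diffusion on scalar "data": repeatedly mix in Gaussian noise.
# After enough steps any starting point is driven to roughly N(0, 1);
# a diffusion model learns to run this process in reverse (not shown).
T, beta = 200, 0.05            # illustrative step count and noise rate

def diffuse(x0):
    x = x0
    for _ in range(T):
        x = math.sqrt(1 - beta) * x + math.sqrt(beta) * random.gauss(0, 1)
    return x

samples = [diffuse(5.0) for _ in range(5000)]   # all start at the same "image"
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))
```

After T steps the original signal is scaled by (1-β)^(T/2) ≈ 0.006, so the samples are statistically indistinguishable from pure noise, mean near 0 and variance near 1.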

I read the Vereshchagin-Vitányi paper back in 2004 and the Kolmogorov structure model goes back to the 1970s. Finding the structure seemed intractable for any interesting application. Now that ML is proving this wrong, the world is a different place.

Monday, November 07, 2022

Euclidean TSP is NP-hard but not known to be in NP. Why not known?

BILL: Lance, I was looking into TSP, metric TSP, and Euclidean TSP since I am going to teach about P, NP, NPC, and approximating NPC problems, and I came across the following from Lipton's book The P=NP Question and Gödel's Lost Letter (I paraphrase):

Graham proved that Euclidean TSP was NP-hard. But it is open if it is in NP. The difficulty hinges on a beautiful problem that is still open over thirty years later: Can we tell, given naturals a_1,...,a_n,k, if

\sqrt{a_1} + ... + \sqrt{a_n} \le k

What I want to know is: is it still open? Lipton's book was written in 2010, so that was, uh, uh...

LANCE: 11 years ago.

BILL: Which is a long time ago--so has it been cracked?

LANCE: No it has not. And, by the way, I blogged on it in 2003.

This raises some questions:

1) Is  the sqrt problem above in P? NP? (I have seen it stated that the problem is in PSPACE.) 

2) Where do people think the problem is?

3) Why is it still open? Some options (I am sure there are more.)

a) Not that many people are working on it. But if not, why not?

b) The problem is just dang hard! That's probably why P vs NP is still unsolved, why FLT took so long, and why my proof of the Riemann hypothesis was so long in coming. I am reminded of Erdos's quote on the Collatz Conjecture: Mathematics may not be ready for such problems. And you all know what Erdos said about R(5). 

c) Reasons a and b above may lead to a death spiral: People THINK it's hard so they don't work on it; then, since nobody works on it, no progress is made, reinforcing that it's hard. 
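For readers who want to play with the sum-of-square-roots problem, here is a minimal sketch of the obvious approach: raise the working precision until the error interval around the sum excludes k. This is a heuristic illustration (the gap tolerance is my own crude choice), not one of the certified methods--and the whole open question is whether polynomially many digits always suffice.

```python
from decimal import Decimal, getcontext

# Compare sqrt(a_1) + ... + sqrt(a_n) with the natural number k by
# computing to a fixed precision and leaving a crude error allowance.
def compare_sqrt_sum(nums, k, digits=50):
    getcontext().prec = digits
    total = sum(Decimal(a).sqrt() for a in nums)
    gap = Decimal(10) ** (2 - digits)   # crude allowance for rounding error
    if total > Decimal(k) + gap:
        return 1                        # sum is definitely > k
    if total < Decimal(k) - gap:
        return -1                       # sum is definitely < k
    return 0                            # undecided: raise the precision

print(compare_sqrt_sum([2, 3], 3))
```

For example, sqrt(2) + sqrt(3) ≈ 3.146 > 3, so the call above returns 1; for inputs like [1, 4, 9] versus 6 the sum is exactly 6 and the routine stays undecided at any precision.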

Thursday, November 03, 2022

Should you quit Twitter and Texas?

Generally with some exceptions, I use Facebook for personal stuff, LinkedIn for Illinois Tech stuff and Twitter and this blog for CS stuff. Many of you got to this post through the Twitter link. Now that Elon Musk has bought the social media company, should I and the rest of the academic twitterverse move on to something else?

I'd say not yet. Let's see what Elon does to the place. Maybe he can allow more points of view, without turning it into a cesspool. Or maybe he ruins it. It'll be a network effect--if too many academics leave Twitter, I'd have to follow or I'd have few followers. I wonder where they will go. I hope it isn't TikTok.

In a similar vein, I often hear from those who suggest we not hold conferences in certain jurisdictions for political reasons--for example Texas, because of its laws on abortion and transgender rights. I don't believe computer science, as a field, should be making decisions based on politics. Academics who live in these states don't generally hold the same views as the political leaders of those states.

Should we not have meetings in Illinois because some in our field might be opposed to abortion? Or do we just assume everyone in the field has the same political views? Individuals can make their own choices as to whether to attend, but it's best when politics is left out of academics. FOCS 2022 is wrapping up today in Denver. Seems like a safe choice--perhaps all US conferences in the future should be in Colorado. 

There are limits--I wouldn't attend or organize a conference in Russia in the near future. But if we start eliminating locations based on politics, we'll only be able to meet up in the metaverse, and we won't have social media to tell us how to get there.

Sunday, October 30, 2022

What was the recent Nobel Prize in Physics really about? (Guest Post)

David Marcus was a Math major a year ahead of me at SUNY Stony Brook (he graduated in 1979, I graduated in 1980). He then got a PhD in Math from MIT, and is a reader of this blog. Recently he emailed me that he thinks the current Nobel Prize winners in Physics do not understand their own work. Is it true? Let's find out!


(Guest blog from David Marcus)

2022 Nobel Prize in Physics Awarded for Experiments that Demonstrate Nonlocality

The 2022 Nobel Prize in Physics was recently awarded to experimenters who demonstrated that the world is nonlocal. The curious thing is that neither the writers of the Nobel Prize press release nor the recipients seem to understand that this is what they demonstrated.

For example, the press release (see here) says: "John Clauser developed John Bell's ideas, leading to a practical experiment. When he took the measurements, they supported quantum mechanics by clearly violating a Bell inequality. This means that quantum mechanics cannot be replaced by a theory that uses hidden variables." That is not what the experiments mean, and the statement is false.

The word "locality" means that doing something here cannot instantly change something over there.

The experimental setup is the following: You prepare two particles, A and B, and send them in opposite directions so that they are far apart. You and your colleague do experiments on each particle at the same time. If you and your colleague perform the same experiment, then, from your experiment on A, you can predict with certainty the result of your colleague's experiment on B (and vice versa).

In a paper in 1935, Einstein, Podolsky, and Rosen pointed out that, assuming locality, the experimental results at A and B must be determined by the source that prepared the particles. They didn't actually say, "assuming locality", but they implicitly assumed it. (If you disagree with them, please offer an alternative.)

In 1964, John Bell published his paper. In it, he considered three of the experiments that could be done on the particles A and B. Assuming the results are determined by the source (which follows from Einstein, Podolsky, and Rosen's argument), he derived an inequality on the correlations between the results of the three experiments on the two particles. The math is simple; for details, see here.
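(An aside from the blog, not part of David's argument.) The CHSH form of Bell's inequality is easy to check numerically: a deterministic local strategy just fixes an outcome of +1 or -1 for each of the two settings on each side, and no such strategy can push the CHSH combination past 2, while the quantum singlet correlation E(a,b) = -cos(a-b) reaches 2√2 at the standard angles.

```python
from itertools import product
from math import cos, pi

# Local hidden variables: each side fixes outcomes +/-1 for its two settings.
# The CHSH combination A1*B1 + A1*B2 + A2*B1 - A2*B2 never exceeds 2.
lhv_max = max(abs(A1*B1 + A1*B2 + A2*B1 - A2*B2)
              for A1, A2, B1, B2 in product([1, -1], repeat=4))

# Quantum singlet-state correlation at the standard CHSH angles.
def E(a, b):
    return -cos(a - b)

a1, a2, b1, b2 = 0, pi/2, pi/4, 3*pi/4
S = abs(E(a1, b1) + E(a2, b1) + E(a2, b2) - E(a1, b2))

print(lhv_max, round(S, 4))
```

The brute force over all 16 deterministic strategies gives exactly 2; the quantum value is 2√2 ≈ 2.828, which is what the experiments confirm.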

The Nobel Prize winners did experiments, and their results violated Bell's inequality (or similar inequalities). Hence, the world is nonlocal.

The simplest theory that agrees with experiment is Bohmian Mechanics. This is a deterministic theory of particles whose motion is governed by a wave (the wave function being the solution of the Schrödinger equation). Of course, Bohmian Mechanics is nonlocal, as is the world.

Thursday, October 27, 2022

The Media Coverage of the Matrix result is Terrible (though not worse than usual)

BILL: A computer program (or an AI or an ML or whatever) found a BETTER way to do matrix mult! It's in the same spirit as Strassen. I've always wondered if Strassen was practical, since it is simple and computers have come a long way since 1969, though I suspected not (I WAS WRONG ABOUT THAT). I blogged about it and asked if Strassen will ever be used/practical (I did that post here).

READERS: Uh, Bill, (1) Strassen IS used and practical and (2) the new algorithm only works in GF(2). (Lance did a post about the new algorithm where he makes this explicit here.) Some readers claimed it was GF(2^k) and some that it was fields of characteristic 2. In any case, NO, it is not a general algorithm.

BILL: There is good news and what others might consider bad news but I do not.

GOOD NEWS: I learned that Strassen IS practical and used, which I did not know. 

GOOD NEWS: I learned that I was WRONG about the new algorithm since I just assumed it worked in general, and updated the post so others would not be deceived. 

BAD NEWS: Darling asked if I was embarrassed to be wrong. If I were embarrassed that easily I would have quit blogging in 2009. 

DARLING: So Bill, how did you get it so wrong?

BILL: Well obviously my bad for not doing my due diligence. But that's not what's interesting. What's interesting is that if you read the articles about it in the popular press you would have NO IDEA that it only works mod 2. It's like reading that quantum computing will solve world hunger.

DARLING: It won't?

BILL: No it won't. 

DARLING: I was being sarcastic. 

BILL: Anyway, the coverage pushed two points

a) IMPRESSIVE that a computer could FIND these things that humans could not. This is TRUE. (Gee, how do I know that? The Gell-Mann Effect is that people, disgusted when they read a newspaper article on something they know about and find the mistakes, later assume that the other articles are fine. SHOUT OUT to Jim Hefferon, who told me the name Gell-Mann Effect and left a comment with a pointer. The original version of this post had a BLANK there.) 

b) The algorithm is practical! They did not quite say that but it was implied. And certainly there was NO mention of it only working in GF(2). And I was fooled into thinking that it might be competitive with Strassen. 

READERS (of this blog entry, I predict): Uh, Bill, the popular press getting science news wrong and saying it's more practical than it is probably predates the Bible. I can imagine the Cairo Times in 2000 BC writing `Scientists discover that in any right triangle with sides a,b,c, a^2+b^2=c^2, and this will enable us to build food silos and cure hunger.' In reality they knew that the 3,4,5 triangle was a right triangle, were nowhere near a proof of a general theorem, and I doubt it would have helped cure hunger. 

BILL: This time the news was REALLY CLOSE to what I do (if R(5) is found by a computer and the media claims it's practical I'll either have a very angry blog post or repost my April Fools' Day article on Ramsey Theory's application to History) AND I posted incorrectly about it. So, to quote many a bad movie


Monday, October 24, 2022

Cheating in Chess and in Class

In the 24th move of the second game of the 1978 Chess Championship, a cup of blueberry yogurt was delivered to the defending champion Anatoly Karpov who offered a draw shortly thereafter. The challenger Victor Korchnoi claimed the flavor of yogurt was a coded message to Karpov and later in the tournament all food deliveries had to be decided on in advance. The good old days.

With computer chess programs now far more powerful than humans, chess cheating has become far more common and came to a head last month with the controversy between Magnus Carlsen and Hans Niemann. Did Niemann cheat in his win over Carlsen in St. Louis, or was it just a rare upset? How can we tell?

This brings up cheating by students in class. Reports and statistics show that cheating has increased over the last few years. The pandemic played a role, but a good rule is that the pandemic didn't change behaviors, rather it accelerated changes already in progress. Technology has made it easier to cheat. It's difficult, if not impossible, to create a homework problem where someone couldn't just look up an answer. Sites like Chegg provide solutions to all sorts of problems, while there are many sites where you can hire someone to write a paper or do a project for you. Advances in generative AI, like GPT-3 and GitHub Copilot, will soon make cheating as easy as clicking a button.

But it's more than technology. As students view a university education as less about learning and more about getting the credentials for a job, the inhibitions against cheating disappear. And while the vast majority of students don't significantly cheat, it's hard for anyone to avoid using Google when they get stuck on a problem. 

We can continue to use technology to fight technology in an ever-growing arms race to catch cheaters, but it can feel like a losing war. We should take solace that the students who work hard solving problems and projects will be the ones who succeed in life. 

Thursday, October 20, 2022

Alpha Tensor

In a recent post, Bill used the announcement of a new AI matrix multiplication algorithm to discuss the applications of Strassen's famous algorithm. For this post I'd like to focus on the new algorithm itself and on Alpha Tensor, the algorithm behind the algorithm: what it has actually accomplished and what it means for us theorists. 

To multiply two 2x2 matrices in the usual way you need eight multiplication steps. In 1969 Strassen surprised the world by showing how to multiply those matrices using only seven multiplications.

You can recurse on larger matrices. For 4x4 matrices you can use 7^2 = 49 multiplications instead of the naïve 64. In general for nxn matrices you need roughly n^{log_2 7} ≈ n^{2.81} multiplications.
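Strassen's seven products are easy to verify directly. Here they are for a single 2x2 multiplication, checked against the naive rule (scalar entries here, but the same identities work blockwise, which is what makes the recursion go):

```python
# Strassen's seven products for a 2x2 matrix multiplication.
def strassen2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p1 + p5 - p3 - p7]]

# Check against the usual eight-multiplication rule.
A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
naive = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
print(strassen2x2(A, B) == naive)
```

Seven multiplications instead of eight at every level of the recursion is exactly where the exponent log_2 7 comes from.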

No one has found an algorithm for 4x4 matrices that uses fewer than the 49 multiplications you get from recursing on Strassen. Alpha Tensor does so for the special case of working over GF[2], where addition and subtraction are interchangeable. Their algorithm does not work for general fields such as the real numbers.

Here's the full table of Alpha Tensor results from the Nature paper for multiplying an nxm matrix by an mxp matrix. The Modular column is for GF[2] and the Standard column is for general fields. Alpha Tensor does improve on the best known for general fields for specific problems, like multiplying a 3x4 matrix by a 4x5 matrix. Much of the press failed to make this distinction for 4x4 multiplication, leading to some confusion.

What does this mean for theory? Recursing on 4x4 matrices now reduces the time for matrix multiplication (over GF[2]) to roughly n^{2.78}, nowhere close to the best theoretical upper bound of about n^{2.37}. The Alpha Tensor result may be more practical, though time will tell.

Manuel Kauers and Jakob Moosbauer, shortly after the Alpha Tensor announcement, reduced the 5x5 case over GF[2] to 95 multiplications. Nice to see the last word isn't by machine (yet!), but that shouldn't reduce the excitement over Alpha Tensor. Often we see a breakthrough followed by a small improvement. Note that 95 multiplications for 5x5 matrices won't give a faster asymptotic algorithm for nxn multiplication than Strassen, since log_5 95 ≈ 2.83 > log_2 7 ≈ 2.81.

What excites me the most is not the algorithm, but the algorithm to find the algorithm. Alpha Tensor uses the tools that AlphaZero used to play Chess and Go to search the large search space of potential algorithms using Monte Carlo Tree search, basically searching at random and learning and updating the probabilities of the search. Before using machine learning, we had few good approaches to searching large combinatorial spaces of this nature. 

In general, most new algorithms come from new approaches, not just from the structural decomposition we see in matrix multiplication so theorists won't be out of a job anytime soon. Nevertheless this is just another lesson that using ML has dramatically improved our ability to search through a large number of possibilities looking for a specific solution. 

The Alpha Tensor Nature paper was submitted in October of 2021. A year is an eternity in the ML world. I wonder what is happening now that we don't know about.

Tuesday, October 18, 2022

BYTE LYNX- an awesome video game/Am I an influencer?

Tucker Bane is a friend of mine who has an AWESOME video game available called BYTE LYNX.

I am curious: can I be an INFLUENCER?

Let's find out!

At the link below there are

a) Reviews of the game.

b) Videos of the game being played.

c) The ability to purchase and download the game.

Link is here

Thursday, October 13, 2022

How Not to Pass a Polygraph Test

Many years ago I was asked to serve on an advisory board for an organization that did confidential research. To be on the board I had to have US top secret clearance. The first step was filling out a lengthy form which asked deep details about every aspect of my life. Then there were the interviews with myself and many people I interacted with, especially internationally, and I have many international colleagues. Eventually I got past these steps.

The final step required taking a polygraph (lie-detector) test. So I flew to Baltimore to visit a non-descript office building near the airport. I failed the test. Twice more I went to Baltimore and failed those tests as well. 

Just to be clear, I never tried to falsify or hide information on these tests. In one case I was asked "Do you live in Atlanta?" I said no. The person administering the test stopped and said I put my address down as Atlanta. I said my mailing address was Atlanta but (at the time) I lived just north of the border in Sandy Springs. She said I should use Atlanta for the test, in other words I should lie. The test didn't go well after that.

In another case, I was asked if I was ever arrested. For the record, I have never been arrested but the answer came up as inconclusive. The administrator, different than before, trusted the machine more than me and the rest of the day didn't go well. 

Perhaps the test wasn't meant to just test whether I was telling the truth, but also my ability to keep a secret. At least that would make more sense why I failed three times. More likely I took questions too literally, a product of a mathematician's mind.

I never joined the advisory board but that wasn't the worst of it. In 2014 the Chinese hacked into the US Office of Personnel Management taking information from, among others, those who applied for security clearance. It's the main reason I keep security freezes with the credit bureaus.

Sunday, October 09, 2022

Will Strassen's Matrix Mult Alg ever be practical?

All time bounds are asymptotic and really O-of.

Recall that Strassen found a clever way to multiply two 2x2 matrices with 7 mults (and lots of adds), leading to a matrix mult alg that runs in n^{\log_2 7} = n^{2.807...}.

Recently (see here) a DeepMind AI found a way to multiply two 4x4 matrices with 47 mults (and lots of adds), leading to a matrix mult alg that runs in n^{\log_4 47} = n^{2.777...}. NOTE ADDED: The new algorithm only works over GF[2] for 4x4 matrices.

Much better is known, see our blog posts here and here.

The more advanced algorithms are complicated and have large constants so will never be practical. But Strassen's result, and now the new algorithm, SEEM to me they could be practical.

(ADDED LATER- many of the comments inform me that Strassen IS practical and IS being used. Great! Now we know!)

Thoughts about Strassen that also apply to the  new algorithm. 

1) n has to be large for Strassen to give an improvement. But as we deal with larger data sets the value of n is getting larger. 

2) People are mostly interested in sparse matrices, for which there are better methods. I've heard that for a while--but is it still true? I thought ML used dense matrices. 

3) Strassen is hard to code up. Actually it doesn't look that hard to code up. However, I have never tried to code it up, so maybe there are subtle points there.

4) Strassen only works on matrices of size 2^n x 2^n. You can pad matrices out, but that might kill whatever time advantage you get. (The new alg only works on 4^n x 4^n.) 

5) Strassen uses recursion and there is the hidden cost of recursion. I think that is a thing of the past, and our younger readers may not know what I am talking about. 

6) (This is obvious.) The recursion would only go down to a certain level, and THEN you would use ordinary matrix mult. This may also add time. 

I suspect that 2 and 4 are the most important reasons Strassen (or the new algorithm) is not practical, BUT I would like to hear your thoughts.
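To make points 4 and 6 concrete, here is a sketch (my own code, not from any package) of recursive Strassen with zero-padding to a power of two and a cutoff below which it falls back to the ordinary algorithm. The cutoff of 32 is an arbitrary illustrative choice; real implementations tune it.

```python
# Recursive Strassen with zero-padding (point 4) and a naive cutoff (point 6).
def naive(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(A, B): return [[x + y for x, y in zip(r, s)] for r, s in zip(A, B)]
def sub(A, B): return [[x - y for x, y in zip(r, s)] for r, s in zip(A, B)]

def strassen(A, B, cutoff=32):
    n = len(A)
    if n <= cutoff:                      # below the cutoff, recursion overhead
        return naive(A, B)               # outweighs the saved multiplication
    m = n // 2
    q = lambda M, i, j: [row[j*m:(j+1)*m] for row in M[i*m:(i+1)*m]]
    a, b, c, d = q(A, 0, 0), q(A, 0, 1), q(A, 1, 0), q(A, 1, 1)
    e, f, g, h = q(B, 0, 0), q(B, 0, 1), q(B, 1, 0), q(B, 1, 1)
    p1 = strassen(a, sub(f, h), cutoff)
    p2 = strassen(add(a, b), h, cutoff)
    p3 = strassen(add(c, d), e, cutoff)
    p4 = strassen(d, sub(g, e), cutoff)
    p5 = strassen(add(a, d), add(e, h), cutoff)
    p6 = strassen(sub(b, d), add(g, h), cutoff)
    p7 = strassen(sub(a, c), add(e, f), cutoff)
    top = [r1 + r2 for r1, r2 in zip(add(sub(add(p5, p4), p2), p6), add(p1, p2))]
    bot = [r1 + r2 for r1, r2 in zip(add(p3, p4), sub(sub(add(p1, p5), p3), p7))]
    return top + bot

def pad_to_pow2(M):
    n, N = len(M), 1
    while N < n:
        N *= 2
    return [row + [0] * (N - n) for row in M] + [[0] * N for _ in range(N - n)]

def multiply(A, B, cutoff=32):
    n = len(A)
    C = strassen(pad_to_pow2(A), pad_to_pow2(B), cutoff)
    return [row[:n] for row in C[:n]]    # strip the padding back off
```

Padding a matrix of size n up to the next power of two can nearly double each dimension in the worst case, which is exactly the concern raised in point 4.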

Does any package NOW use Strassen's Algorithm?

Side Note: I like to ask students if they think there is a better-than-cubic algorithm for Matrix Mult. They do not. Then I show it to them and tell them THIS is why LOWER BOUNDS are hard. You have to show that NO, nobody clever will find a trick you hadn't thought of. 

Thursday, October 06, 2022

Art and Technology

Last weekend I went to one of Chicago's jewels, the Art Institute, and saw the opening of a new exhibit by Berlin-based artist Josephine Pryde entitled The Vibrating Slab referring mainly to the phone that constantly tries to gain our attention. The exhibit used photographs and sculptures to tie smart phones to prehistoric rocks. No pictures here because ironically they didn't allow us to take photos with our "slabs".

After I saw the Pryde exhibit I went to see the artist herself give a presentation. She related a story about going to the movies and seeing Top Gun: Maverick, not knowing it was a new release. Tom Cruise, who controlled a computer with hand movements in Minority Report, goes old-school in Maverick. Cruise and several young actors, through various plot contrivances, flew 20th century planes in a movie that could have taken place in the 90s. According to IMDb, at the insistence of Tom Cruise, minimal green screen and CGI aerial shots exist in the film, and even the close-up cockpit shots were taken during real in-flight sequences. Old school indeed. Kind of the opposite of, say, the Disney series The Mandalorian, filmed on a soundstage with everything generated by CGI.

Pryde's exhibit looked at the interaction with technology as art. Upstairs from Pryde's exhibit was art from technology, a series of prints that David Hockney created on another slab, the iPad, in Normandy during the early days of the Covid pandemic. 

No. 340, 21st May 2020 - David Hockney

On the way from Pryde's exhibit to the lecture I passed through the Art Institute's impressionism collection and compared real Monets with the fake one I created with Dall-E. Monet manages to capture a detailed scene with broad brush strokes--if you zoom in the detail disappears. Dall-E can't quite pull that off.

Vétheuil by Monet

Monet Dagstuhl by Dall-E

Tuesday, October 04, 2022

Is it okay for a paper or book to say `for more on this topic see Wikipedia Entry BLAH'?

One of the proofreaders for Computational Intractability: A Guide to Algorithmic Lower Bounds (available here) made the following objection, which raises some questions.

I object to telling readers to see a Wikipedia Entry.  Wikipedia is marvelous, but it is unstable. I have been led astray by short-lived editorial changes made by trolls. 

The proofreader is surely correct that  `See Wikipedia entry X' should be minimized. And indeed, I have gone through all of the cases we had of such things and tried to minimize them. But there are times when there seems to be no way around it. Or maybe there is but I can't see it. 

a) I want to refer to the set of problems that are (exists R)-complete. The ONLY list I know of is on Wikipedia here.

b) I want to discuss the complexity of the video game Braid. There is a nice Wikipedia page about the game Braid here. There are some sites that have videos of the game being played, but not really an explanation of it. I DID find a site that looks pretty good, here, but is that site more stable than the Wikipedia entry? There did not seem to be an official site. (I had the same issue with the 15-puzzle and some other puzzles that do not seem to have a natural official site.) 

c) I want to refer the reader to a list of algorithms for discrete log. Wikipedia has a great site on this here. Is there a good article that does the same? Is it behind paywalls?

I tend to think that the Wikipedia sites above are stable and accurate. It helps that they are not on controversial topics. They should be fine. Articles that are behind paywalls are much worse. As for articles on authors' websites--are they more or less stable than Wikipedia?

Thursday, September 29, 2022

Machine Learning and Complexity


Schloss Dagstuhl by Monet by Dall-E

At Dagstuhl earlier this month, I hung out for a little bit with the participants of the other seminar, Knowledge Graphs and their Role in the Knowledge Engineering of the 21st Century. Knowledge graphs are what you would expect them to be, nodes are objects like "Berlin" and "Germany" with directed edges with labels like "capital". Think of having knowledge graphs of hundreds of millions of nodes and how that could help answer queries about the world. These secondary workshops are shorter and focus on creating a new vision, in this case how to maximize the importance of knowledge graphs in an increasing ML-focused world.
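At bottom a knowledge graph is a big set of labeled triples, and queries are pattern matches over them. A toy sketch, with made-up example entities (nothing here is from the seminar):

```python
# A toy knowledge graph: labeled directed edges stored as (subject, relation, object).
triples = [
    ("Berlin", "capital_of", "Germany"),
    ("Germany", "part_of", "European Union"),
    ("Paris", "capital_of", "France"),
]

# Match any combination of fixed and wildcard (None) positions.
def query(subject=None, relation=None, obj=None):
    return [(s, r, o) for s, r, o in triples
            if subject in (None, s) and relation in (None, r) and obj in (None, o)]

print(query(relation="capital_of", obj="Germany"))
```

Real systems do this over hundreds of millions of triples with indexes and join optimization, but the data model is this simple.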

Perhaps we need such a visioning seminar for complexity. While we often get lost in the mathematical questions and techniques in our field, computational complexity is designed to understand the difficulty of solving various problems. Machine learning and advances in optimization should be changing that conversation. If you imagine a world where P = NP (and I did exactly that in chapter 2 of my 2013 book) much of what you consider is starting to happen anyway. ML does fail to break cryptography but then again, isn't this the best of all possible worlds? 

Look at what Scott Aaronson said back in 2006.

If P=NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in “creative leaps,” no fundamental gap between solving a problem and recognizing the solution once it’s found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss; everyone who could recognize a good investment strategy would be Warren Buffett. 

If I can be a Monet, can Mozart be far behind? ML trading by some hedge funds is beating Warren Buffett, but remember: if everyone trades perfectly, no one beats the average. Gauss is going to be trickier, but it's coming. There's a reason Scott is spending a year at OpenAI to understand "what, if anything, can computational complexity contribute to a principled understanding of how to get an AI to do what we want and not do what we don’t want".

Monday, September 26, 2022

Is the complexity of approximating Vertex Cover of degree 3 open? (ADDED LATER-NO)


A max-problem f has a P-time approximation scheme (PTAS) if there is an algorithm ALG such that, for all \epsilon > 0 and all inputs x,

ALG(x, \epsilon) \ge (1-\epsilon) f(x).

A min-problem f has a PTAS if there is an algorithm ALG such that, for all \epsilon > 0 and all inputs x,

ALG(x, \epsilon) \le (1+\epsilon) f(x).

(For each fixed \epsilon the algorithm runs in polynomial time; the polynomial can depend on \epsilon, so it may be something like n^{1/\epsilon}.)

MAX3SAT is: given a formula with \le 3 literals per clause, find an assignment that maximizes the number of clauses satisfied.

VCB-a is Vertex Cover restricted to graphs of degree \le a.

The following are known:

0) MAX3SAT is in APX.

1) The PCP paper, here, showed that if MAX3SAT has a PTAS then P=NP.

2) Papadimitriou and Yannakakis (here) had shown much earlier that MAX3SAT \le VCB-4 with an approximation-preserving reduction.

3) From (1) and (2) we have that if VCB-4 has a PTAS then P=NP. (VC is in APX by an easy 2-approximation.)

4) Clearly VCB-2 is in P.
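The easy 2-approximation mentioned in (3) is worth spelling out: greedily take both endpoints of any uncovered edge (equivalently, a maximal matching). Any cover must contain at least one endpoint of each matched edge, so the result is at most twice optimal. A sketch:

```python
# 2-approximation for Vertex Cover: take both endpoints of each uncovered edge.
def vc_2approx(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # edge not yet covered:
            cover.update((u, v))                # grab both its endpoints
    return cover

edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
cover = vc_2approx(edges)
print(sorted(cover), all(u in cover or v in cover for u, v in edges))
```

On this example the algorithm picks the matched edges (1,2) and (3,4), returning a cover of size 4 against an optimum of size 2 ({1,3}), so the factor-2 bound is tight here.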

The following seems to be open, though if you know otherwise please leave a comment:

Is VCB-3 a) in P? b) NPC? (ADDED LATER- NPC- See comments.) 

Is the following true: if VCB-3 has a PTAS then P=NP. (ADDED LATER- NO PTAS-See Comments)

NOTE- all of the above is true for Ind Set-4 and Dom Set-4. So that leads to more open problems.

Wednesday, September 21, 2022

Updated version of Computational Intractability: A Guide to Algorithmic Lower Bounds posted (new title)

We have posted a revised version of 

Computational Intractability: A Guide to Algorithmic Lower Bounds

by Demaine-Gasarch-Hajiaghayi

The book is here.

(For the original post about it, edited it to use the new title (see below), see HERE.) 

We changed the title (the title above is the new one) since the earlier title looked too much like the title of Garey and Johnson's classic. While that was intentional, we later felt that it was too close to their title and might cause confusion. Of course changing the title might also cause confusion; however, this post (and we will email various people as well) will stem that confusion. 

We welcome corrections, suggestions and comments on the book. Email us at

Monday, September 19, 2022

There are two different definitions of Integer Programming. Why?

Alice and Bob have the following conversation.


ALICE: In your book you define INT PROG as: given a matrix A and vectors b, c, find an integer vector x such that Ax \le b and c \cdot x is maximized. This is not correct! You also need x \ge 0.

BOB: Really? I always heard it without that extra constraint, though I am sure they are equivalent and both NP-complete (Alice nods). Where did you see it defined with that extra constraint?


Wikipedia entry in IP

Chapter of a book at an MIT website

Something on Science Direct

A course at Duke

An article by Papadimitriou 

An article on arxiv

The book Graphs, Networks and Algorithms by Dieter Jungnickel

ALICE: Bob, do you have examples where they do not use that extra constraint?


Math Works

Lecture notes from UIUC

Lecture notes from Lehigh Univ.

The book Parameterized Complexity Theory by Flum and Grohe

The book Computers and Intractability : A Guide to the Theory of NP-Completeness by Garey and Johnson

ALICE: Both of our lists are impressive. So now what? 


(This is Bill again.)

What indeed!

1) Why are there two definitions of Int Prog?

2) When is it good to use which one? 
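One reason the two definitions coexist: they are equivalent, since a free integer variable x can be split as x = u - v with u, v \ge 0 (doubling the columns of A and c). Here is a brute-force check on a toy instance of my own making (the matrix, vectors, and search box are illustrative, and exhaustive search is obviously not how one solves IPs):

```python
from itertools import product

# Toy instance: maximize c.x subject to A x <= b, x integer.
A = [[2, 1], [-1, 2]]
b = [10, 6]
c = [3, 2]

def feasible(A, b, x):
    return all(sum(a * xi for a, xi in zip(row, x)) <= bi
               for row, bi in zip(A, b))

def val(c, x):
    return sum(ci * xi for ci, xi in zip(c, x))

# Definition without x >= 0: x ranges over all integers (box [-8, 8]^2 here).
best_free = max(val(c, x) for x in product(range(-8, 9), repeat=2)
                if feasible(A, b, x))

# Definition with x >= 0: substitute x = u - v, giving variables (u1, u2, v1, v2).
A2 = [row + [-a for a in row] for row in A]
c2 = c + [-ci for ci in c]
best_split = max(val(c2, y) for y in product(range(0, 9), repeat=4)
                 if feasible(A2, b, y))

print(best_free, best_split)
```

Both searches find the same optimum (17, at x = (3, 4)), as the substitution guarantees; this is also why both versions are NP-complete.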

Thursday, September 15, 2022

Monarchy: A Problem with Definitions

As I am sure you know, Queen Elizabeth II recently passed away at the age of 96. I am not a royal-watcher, but I am a royal-watcher-watcher. That is, the question of why people care about the lives of these people intrigues me. A few notes:

1) Was she a good Queen? People tend to think so; however, since the job is somewhat ill-defined it's hard to say. 

2) The Queen is supposed to be above politics (she does not vote- I was surprised to find out that legally she can, but she really can't). We know very few of Queen Elizabeth II's opinions on political events. But the notion of political is not well defined. One would think that if she did an appeal for people to take the COVID vax that would not be political, but somehow it is (I do not know if she did such an appeal). King Charles III believes in global warming and that we need to do something about it. This again should not be political but is. 

3) She is the second-longest-reigning monarch. First is King Louis XIV, who became king at the age of 4. I had a blog post complaining about this here. However, there is a more interesting point I want to make. From the first to the last day of King Louis XIV's reign, not much changed: technology, politics, and other things stayed largely the same. By contrast, the world changed A LOT between Queen Elizabeth II's first and last day:

a) The British were an important power in 1952. Less so now.

b) When her father died she was in Kenya and it took 4 hours to get the news to her. Now that would be immediate. 

c) Divorce was considered bad in 1952, and is why King Edward VIII could not remain king (he wanted to marry a twice-divorced woman whose ex-husbands were still alive). And now three of the Queen's children have been divorced.

d) Gay people... enough said. There has even been a royal gay wedding; see here.

Black people (can't call them African-Americans), Women,... you fill it in. 

e) When Charles wanted to get married it seemed to be important that he marry a virgin. We cannot imagine this mentality anymore. When Prince William and Kate got married they were already living together and this was NOT an issue for ANYONE. I looked up what the Church of England thought of it and all I got was some very bland comments like That's what young people do nowadays. 

4) Is the monarchy a good thing? As an American I feel I do not have a right to an opinion. If the citizens of the United Kingdom approve of the monarchy (polls show they do), then who am I to tell them they are wrong? Even so, let's look at reasons for it:

a) Tourism. It has been said that the Monarchy brings in MONEY from tourism. So is it worth the price? Nobody seems to know, and it would be hard to tell. However, I don't think the citizens of the United Kingdom view money as the reason for the Monarchy. The American analog is giving Disney World tax breaks to be in Florida because it generates jobs. I doubt they think of the Monarchy in those mundane transactional terms. 

b) CS Lewis said 

Where men are forbidden to honour a king they honour millionaires, athletes, or film stars instead: even famous prostitutes and gangsters. For spiritual nature, like bodily nature, will be served; deny it food and it will gobble poison.

This is a bit odd: they must all pretend to like the monarchy to make it work. A long time ago, when Charles and Diana were both having affairs, 80% of the citizens of the United Kingdom thought that was okay so long as they were discreet so the people didn't find out. But those ARE the people.

Also odd: CS Lewis was a theologian and a believing Christian; however, his comment above can apply to God as well as to Kings. 

Monday, September 12, 2022

Thirty Years of Dagstuhl


Dagstuhl old-timers at the original castle

I'm back at Dagstuhl for the seminar on Algebraic and Analytic Methods in Computational Complexity. My first seminar at Dagstuhl was back in 1992. I've been coming for thirty years and have been here roughly thirty times. My last trip was pre-covid (virtual Dagstuhls don't count) and I really needed this chance to hang out and talk complexity with colleagues old and new.

Some changes since my last trip. The room doors have locks (there are rumors of an incident). You have to create your own keycard at a new machine by logging into your Dagstuhl account. I had a long random password from a password manager, and entering it was not an easy process.

The main conference room has been updated with tech for hybrid meetings and new LED lights. Books were removed from the library to create a coffee breakout space.

No Bill this time so no typecasts. Still the best part of the week is talking and hearing about complexity. Today I learned about an oriented version of Sperner's lemma: there is exactly one more fully labeled triangle oriented the same way as the corner vertices than there are oriented the other way. Christian Ikenmeyer used this fact to motivate a study of closure properties of #P-functions.
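The signed version of Sperner's lemma can be checked directly on a toy triangulation. Here is a stdlib-only sketch; the 9-cell grid and the particular labeling are my own illustration, chosen so that rainbow triangles of both orientations actually appear:

```python
def rainbow_sign(tri, labels):
    """+1 or -1 if the triangle carries all three labels 1, 2, 3
    (sign = orientation of the label-1, label-2, label-3 vertices), else 0."""
    if sorted(labels[p] for p in tri) != [1, 2, 3]:
        return 0
    v = {labels[p]: p for p in tri}          # vertex carrying each label
    (x1, y1), (x2, y2), (x3, y3) = v[1], v[2], v[3]
    cross = (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1)
    return 1 if cross > 0 else -1

# Big triangle with corners (0,0)=1, (3,0)=2, (0,3)=3, cut into 9 cells.
# Boundary labels obey Sperner's condition (each edge uses only its
# endpoints' labels); the interior point (1,1) may be labeled freely.
labels = {(0, 0): 1, (1, 0): 2, (2, 0): 1, (3, 0): 2,
          (0, 1): 3, (1, 1): 3, (2, 1): 2,
          (0, 2): 1, (1, 2): 3,
          (0, 3): 3}
up   = [((i, j), (i + 1, j), (i, j + 1))
        for j in range(3) for i in range(3 - j)]
down = [((i + 1, j), (i + 1, j + 1), (i, j + 1))
        for j in range(2) for i in range(2 - j)]

signs = [rainbow_sign(t, labels) for t in up + down]
pos, neg = signs.count(1), signs.count(-1)
print(pos, neg, pos - neg)   # two rainbow triangles one way, one the other
```

Here the signed count pos − neg comes out to exactly 1, matching the lemma, even though there are three rainbow triangles in total.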

Wednesday, August 31, 2022

The NIST Process for Post-Quantum Cryptography

Guest post by Jonathan Katz

Over the past few months there have been several interesting developments in the NIST post-quantum standardization process.

By way of background, since the advent of Shor's algorithm in 1994 we have known that a large-scale, general-purpose quantum computer would be able to break all currently deployed public-key cryptography in (quantum) polynomial time. While estimates vary as to when (or even whether!) quantum computers will become a realistic threat to existing public-key cryptosystems, it seems prudent to already begin developing/deploying newer "post-quantum" schemes that are plausibly secure against quantum computers.

With the above in mind, NIST initiated an open process in 2017 for designing post-quantum cryptographic standards. Researchers from around the world submitted candidate algorithms for public-key encryption/key exchange and digital signatures. These were winnowed down over a series of rounds as cryptographers publicly debated the relative merits of different proposals, or showed security weaknesses in some candidates.

On July 5 of this year, NIST announced that it had selected four of the submissions as finalists for standardization. Only one candidate for public-key encryption was chosen, along with three digital signature schemes. Three of the four selected algorithms rely on the hardness of lattice problems; the only non-lattice scheme is a hash-based signature scheme. (It is possible to build digital signatures using "symmetric-key" assumptions alone.) In addition, four other public-key encryption schemes not based on lattices were designated for further study and possible standardization at a later point in time.

Less than one month later, Castryck and Decru announced a classical attack on SIKE, one of the public-key encryption schemes chosen for further study. SIKE is based on a conjectured hard problem related to isogenies between supersingular elliptic curves. The attack was not just theoretical; the researchers were able to implement the attack and run it in a day or less, depending on the security level being considered. Details of the attack are quite complex, but Galbraith gives a high-level overview. Subsequent improvements to the attack followed.

It is worth adding that the above follows an entirely classical attack shown roughly six months earlier on Rainbow, another submission to the NIST standardization process that made it to the 3rd round. (Rainbow is a signature scheme that relies on an entirely different mathematical problem than SIKE.) For completeness, note that none of the four finalists are impacted by any of these attacks.

A few reflections on the above:
  • It is amazing that the factoring and RSA problems are still hard (for classical computers), more than 40 years after they were proposed for cryptography. The same goes for the discrete-logarithm problem (in certain groups).
  • It is not easy to find other hard mathematical problems! Code-based cryptography has been around for about as long as factoring-based cryptography, but has been somewhat unpopular for reasons of efficiency. Lattice-based cryptosystems still seem to give the leading candidates.
  • We need more people (not just cryptographers) studying cryptographic assumptions. The attacks on SIKE involved deep mathematics; attacks on lattice problems may involve algorithmic ideas that cryptographers haven't thought of.