Sunday, June 30, 2024

Technology: 1966, 2006, 2023.

In 2013 I wrote a blog post to celebrate Lance's 50th birthday by contrasting what things were like when Lance was 10 with what they were like when he was 50. That post is here.

But life has changed even from 2006 to 2023. I will tell three stories: one from 1966, one from 2006, and one from 2023. They all have to do with my hobby of collecting novelty songs; however, I am sure there are similar stories in other realms.

1) On Sept 21, 1966 there was an episode of Batman with special guest villain The Minstrel. He sang several songs in the episode that I thought were funny. My favorite was when Batman and Robin are tied up over a rotisserie and the Minstrel sings, to the tune of Rock-a-bye Baby:

Batman and Robin Rotate and Resolve

As the heat grows, your bodies Dissolve

When it's still hotter, then you will Melt

Nothing left but your Utility Belt. 

I LIKED the song and WANTED it. So I found out when the episode would re-run and set up my tape recorder to record it. I still have the tape, though I don't have a tape player (see my blog post here). However, that doesn't matter, because a compilation of the songs in that episode (actually two episodes) is on YouTube here.

2) On March 6, 2006 there was an episode of Monk, Mr. Monk Goes to the Dentist, which has in it The Randy Disher Project singing Don't Need a Badge. This was great and I wanted that song. At the time I was buying the DVDs of Monk, so when the DVD of that season came out I assumed the song would be included as an extra. It was not :-(. By that time I was busier than in 1966, so I didn't have the time, patience, or tape recorder to track it down. But that does not matter, since 8 years later it was on YouTube here. But I had to wait 8 years.

3) On Aug 23, 2023 there was an episode of ST-SNW (Star Trek: Strange New Worlds) entitled Subspace Rhapsody that had NINE songs in it, sung by the crew (actually sung by the actors!). I don't have streaming so I didn't watch it, but I heard about it (people know I am interested in novelty songs so they tell me about stuff like that). I spent about 30 minutes on YouTube finding ALL NINE and putting them in my file of novelty song links, see here. And it was worth the effort: three of the songs are GREAT and the rest are pretty good (in my opinion).

Points

1) Also easier to find now than it was in 2006, and certainly than in 1966: Everything. Okay, let's list some examples: Music (not just novelty), TV shows, Movies, Journal articles, Conference articles, books. But see the next point.

2) Big Caveat: For a recording from 1890 to have survived, it would have to make the jump from wax cylinder to vinyl, then to CD, maybe back to vinyl (vinyl is having a comeback), and perhaps to mp3, streaming, YouTube, or Spotify. Some music will be lost. I would like to think that the lost music is not the good stuff, but I know of cases where that is incorrect (my blog post here gives an example). For journal articles there is also the issue of language: some articles were never translated. And some are written in a style we no longer understand. And some you really can't find. And there may be some songs where the only copy is in my collection.

3) Corollary to the Big Caveat: Some things are on YouTube one day and gone the next. There is an SNL short video, Conspiracy Theory Rock, which seems to come and go and come and go. I don't think it's on YouTube now, but I found it here. Let's hope it stays. I have that one on VHS tape, but I don't have a VHS tape player. And modern e-journals might vanish. See my post on that issue here.

4) Some of my fellow collectors say they miss the days when only they had access to (say) Weird Al's Patterns, which he sang on Square One Television (a math-for-kids show on PBS which I discovered and liked when I was 45). The song is on YouTube here. I find this point of view idiotic. The PRO of the modern world is that I can find lots of stuff I like and listen to it (and it's free!). The CON is a loss of bragging rights for people like me. Really? Seems like a very minor CON. I do not miss the days of hunting in used record shops for an old Allan Sherman record (ask your grandmother what a used record shop is and what an Allan Sherman is).

5) When I played the song Combinatorics (see here) in my discrete math class the students liked it (for some reason the TA hated it, oh well) and they asked

Is that a real song?

I asked them to clarify the question. They couldn't. To ask if it ever came out on a physical medium is a silly question: it didn't, but that doesn't matter. Did it make money? Unlikely, but that would be a rather crass criterion. There are lots of VERY GOOD songs on YouTube (whether Combinatorics is one of them is a question I leave to the reader), so the question Is that a real song? is either ill-defined or crass. All that matters is: do you like it?


Wednesday, June 26, 2024

E versus EXP

Why do we have two complexity classes for exponential time, E and EXP?

First the definitions:

E is the set of problems computable in time \(2^{O(n)}\).

EXP is the set of problems computable in time \(2^{\mathrm{poly}(n)}\).

The nondeterministic variants NE and NEXP have similar definitions and properties.

By the time hierarchy theorem, E is strictly contained in EXP. But they have basically the same complexity:

  • There are polynomial-time many-one complete sets for EXP in E.
  • EXP is the closure of E under polynomial-time many-one reductions (see the padding sketch below).
  • E is in NP if and only if NP = EXP. You can replace NP by PSPACE, BPP, BQP or any other class closed under poly-time many-one reductions.
Quiz: Show that PSPACE \(\neq\) E. Hint: The proof doesn't tell you which class might be larger.
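
To flesh out the second bullet above, here is a sketch of the standard padding argument (the specific padding format is just one convenient choice). Given \(L \in \mathrm{EXP}\) decided in time \(2^{n^c}\), let \(L' = \{\,x10^{|x|^c - |x| - 1} : x \in L\,\}\). A padded string has length \(m = |x|^c\), and deciding membership means checking the format, stripping the padding and running the original algorithm, which takes time \(2^{|x|^c} = 2^{O(m)}\). So \(L' \in \mathrm{E}\), and \(x \mapsto x10^{|x|^c - |x| - 1}\) is a polynomial-time many-one reduction from \(L\) to \(L'\).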

EXP is the natural class for exponential time since it is closed under polynomial-time reductions and is known to contain PSPACE and all those other classes above. You have results like MIP = NEXP but not MIP = NE since MIP (interactive proofs with multiple provers) is closed under polynomial-time reductions. 

E = NE implies EXP = NEXP but not necessarily the other way around. P = NP implies both equalities but again not the other way around. You get P = NP implies E = NE because \(\mathrm{poly}(2^n) = 2^{O(n)}\). That equality plays a role in other theorems related to E and NE:

Impagliazzo-Wigderson: If E is not computed by subexponential-size (\(2^{o(n)}\)) circuits then P = BPP. A similar assumption for EXP would only put BPP in quasipolynomial time.

Hartmanis-Immerman-Sewelson: show that there are sparse (polynomial-sized) sets in NP-P if and only if E \(\ne\) NE. Their paper leads to endless confusion because they state the result as EXPTIME \(\ne\) NEXPTIME; the paper predates the settled terminology, and by those terms they meant E and NE.
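
The easy direction is a tally-set translation, sketched here (the other direction is the substantive part of their paper). Suppose \(L \in \mathrm{NE} - \mathrm{E}\) and let \(T = \{1^n : \) the binary expansion of \(n\) is \(1x\) for some \(x \in L\}\). On input \(1^n\) we can recover \(x\), of length about \(\log n\), and run the \(2^{O(|x|)}\)-time nondeterministic machine for \(L\) in time polynomial in \(n\), so \(T \in \mathrm{NP}\). If \(T\) were in P then \(L\) would be in E: given \(x\), write down \(1^n\) (of length \(2^{O(|x|)}\)) and run the polynomial-time algorithm. So \(T\) is a tally, hence sparse, set in NP-P.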

In fact I just fixed the Wikipedia article on EXPTIME which had the incorrect statement. Aargh!

Sunday, June 23, 2024

Soliciting open problems in honor of Luca T for my Open Problems Column

As you all know Luca Trevisan, a Giant in our field, passed away at the too-young age of 52. See Lance's post on Luca HERE. 

As the editor of the SIGACT News Open Problems Column I am putting together an open problems column in his memory.  (I did the same for Juris Hartmanis, see here, so you will have an idea of what I want.) 

If you want to submit an open problem, email me (gasarch@umd.edu) either 

a) Your IDEA for an open problem, to see if it's in scope, or

b) If you are sure it's in scope, Just Do It and send me the LaTeX code. Page limit: \(\le 2\) pages.

The problems should be either BY Luca or INSPIRED by Luca. 

I am thinking of open problems about derandomization and extractors; however, if Luca did some work in some other area that I am less familiar with (this is likely), that's fine, but cite that work.



Thursday, June 20, 2024

Luca Trevisan (1971-2024)

Complexity theorist Luca Trevisan lost his battle with cancer yesterday in Milan at the age of 52. A terrible loss for our community, and our hearts go out to his family.

The community will honor Trevisan's life and legacy 12:30 PM Pacific Time Monday at the TCS4All talk that he was scheduled to give at the STOC conference in Vancouver. Register to watch the talk online.

Luca was one of the great minds of our field, an expert on randomness and pseudorandomness. He was the first computer science member of Italy's National Academy of Science. He taught at Columbia, Berkeley and Stanford until 2019, when he moved back to his home country to join Bocconi University in Milan.

My favorite result from Trevisan is his connection between extractors and pseudorandom generators, surprising because extractors work on arbitrary distributions while pseudorandom generators only need to fool efficient randomized algorithms. This paper laid the framework for better bounds for both extractors and generators. I had one paper with Trevisan, where, with Rahul Santhanam, we showed time hierarchies for almost all natural semantic classes with a small amount of advice.

Trevisan had his own blog, In Theory, full of technical course notes and wonderful stories. Bill has two guest posts on the polynomial van der Waerden theorem in Luca's blog, following up on Luca's posts on Szemerédi's theorem.

A few years ago Trevisan started the BEATCS theory blogs column to highlight theory blogs and bloggers. Bill and I were both highlighted in this column. 

Trevisan was one of the first theoretical computer scientists to come out as openly gay, and many followed. We've come a long way since Turing.

More remembrances from Boaz and Scott.

In 2014 Luca Trevisan returned to Berkeley and joined the Simons Institute as its first permanent senior scientist. Christos Papadimitriou interviewed Luca for the occasion. 

Wednesday, June 19, 2024

Rethinking Heuristica

I've argued that more and more we seem to live in an Optiland, a computational utopia where, through recent developments in optimization and learning, we can solve the NP-problems that come up in practice and yet cryptography remains unscathed. We seem to simultaneously live in the Heuristica and Cryptomania of Russell Impagliazzo's five worlds.

But we don't. Impagliazzo defined Heuristica as the world where P \(\ne\) NP but we can solve NP-complete problems easily on average. Since cryptography requires problems that are hard on average, if we are in Cryptomania we can't be in Heuristica. 

That definition made sense in 1995 but it didn't envision a world where we can solve many NP-problems in practice but not easily on average. Despite its name, Heuristica as defined does not capture solving real-world problems. To be fair, Impagliazzo entitled his paper "A Personal View of Average-Case Complexity," not "A Treatise on Solving Real World Problems". 

So we need to rethink Heuristica or create a new world (Practica?) that better captures real-world problems. How would we do so? 

When I talked with the SAT-solving researchers at Simons last spring, they suggested that the problems designed to be hard are the ones that are hard. But how can we mathematically capture that? Maybe it's connected to learning theory and TC0 (low-depth circuits with threshold gates). Maybe it's connected to constraint-satisfaction problems. Maybe it's connected to time-bounded Kolmogorov complexity.

As complexity theorists this is something we should think about. As we study the mathematics of efficient computation, we should develop and continue to revise models that attempt to capture what kinds of problems we can solve in practice.

But for now I don't have the answers, I don't even know the right questions.

Sunday, June 16, 2024

A Trip Down Memory Lane: Eric Allender on asymptotic lower bounds and isomorphism

I recently came across (by accident) a link to all of the BEATCS complexity columns from 1987 to 2016. See HERE. (If you know a link to a more recent webpage then email me or make a comment. There is a link to all of the issues of BEATCS here; however, I want a page with just the complexity theory columns.)

This gave me a trip down memory lane and an idea for a series of blog posts: briefly describe an article and also get commentary from the original author on what they think about the area now.

First up: Eric Allender

62 and 66) Some pointed questions concerning asymptotic lower bounds and news from the isomorphism front by Eric Allender, 1997 and 1998. The article I point to is a later article that combined both columns. This paper is about why asymptotic lower bounds ARE relevant to real computations, and also why the isomorphism of complete sets is relevant to real computations. I asked Eric (a) how people outside of theory regard lower bounds today, and (b) whether there is any more news on the isomorphism front.

Here is Eric's answer to question (a):

Sadly, one still frequently runs into folks who contend that complexity theory has no value for practitioners.  I wish that people who hold those views would read my article, and -- if they find it unpersuasive -- that they would put forward some new, more convincing arguments about why complexity theory is useless, rather than trotting out the same old "straw man" arguments that I so effectively destroy in this article (in my own humble opinion).

The article was written for a general audience, but I don't think that it ever reached the audience I had in mind.  Google Scholar says that it was cited 8 times by authors other than myself -- and some of those 8 are duplicates.

Re-reading the article, I think that it's aged fairly well, although there are a few things that are a bit out-of-date. For instance, in Section 2, I describe the paucity of meaningful conditional lower bounds on the time-complexity of problems in P. The development of fine-grained complexity theory has done a lot to fill in this gap in our understanding. And parameterized complexity (which gets a brief mention in this same paragraph) has a much bigger footprint than when I wrote the article.

Of course, there have been some significant developments regarding Graph Isomorphism (e.g., Babai's work). And in Section 5, I say that it's open whether \(\mathrm{NTIME}(2^n)\) is in \(AC^0[6]\), but Williams settled that question.

There are still not very many examples of concrete circuit lower bounds for reasonable-sized inputs, of the type that I cite from Stockmeyer's thesis (which finally appeared in a J.ACM article some 28 years after Stockmeyer got his PhD). I was able to find exactly one new result in this direction, by Wesolowski and Williams, see here.

Here is Eric's answer to (b):

The biggest news on this front is Manindra Agrawal's spectacular isomorphism theorem (JCSS 2011), which improves on the \(AC^0\)-isomorphism results that are discussed in the paper. (He gives a completely uniform isomorphism theorem, which avoids the uniformity issues in the earlier results showing that complete sets are \(AC^0\)-isomorphic.) Manindra also wrote an excellent survey on the isomorphism conjecture, see here. More recently, there has also been some nice work discussing isomorphisms via reductions more powerful than poly-time (Harkins, Hitchcock, and Pavan, FSTTCS 2007, and Computability, the journal of the Association CiE, 2014).

My article tries to make the point that isomorphisms might be useful in showing that the factoring problem is not complete for any "standard" complexity class. Upon some more reflection (and discussions with Michal Koucky) I came to see that some of the approaches I suggested are probably not very workable. I discuss this in another survey I wrote, see here.

Should Prover and Verifier have been Pat and Vanna?

LANCE: I had my first Quanta article published! I explore computation, complexity, randomness, learning, and feeling the machine.

BILL: Feels to me like a mashup of old blog posts. Changing topics: I told Darling that you used Pat for Prover and Vanna for Verifier in a 1987 conference talk, but those terms did not catch on. She was shocked!

LANCE: I'm shocked you two have been married 32 years.

BILL: We hope to get to 64. However, she thought those were really good names for the concept (she has a master's degree in Computer Science so she knows stuff) and wondered why they didn't catch on.

LANCE: I think it's frowned upon to use a cultural icon tied to one country. There are Europeans who have no idea who Pat and Vanna are. For that matter, there are some Americans, particularly academics, who have no idea who Pat and Vanna are. And who would remember either of them once they stopped hosting the show? And who thought that would be 2024?

BILL: What names do papers on Interactive Proof Systems use? Of course: Author-Merlin games. Is the legend of King Author so well known (or is it at least well known that there IS a legend) that it's okay to use those names? I think yes.

LANCE: Did you really think his name is Author? I command thee to see Excalibur and learn the legend for yourself. Excalibur also being the name of a Computer Othello program I wrote in the 80's.

BILL: All right, Arthur. For one thing, we (or at least everyone but me) still know who they are many years later, whereas Pat and Vanna will be lost to history. Hey, Arthur and Merlin even got a science cartoon for their role in interactive proofs.

LANCE: Did Arthur and Merlin ever host a game show? I used Victor and Pulu in my thesis. I've also written papers where we use Prover and Verifier.

BILL: Pulu? Anyway, Prover and Verifier are boring!

LANCE: Sometimes boring works. We should only use cultural icons that span many cultures and won't be forgotten in 200 years. Just to be on the safe side, use cultural icons that are over 200 years old.

BILL: Can you think of any cultural icon that has been used in Math or Computer Science and the name did catch on?

LANCE: The Monty Hall Problem.

BILL: I suspect there are many people who know who Monty Hall is only because of the paradox. And that is a paradox. Here is a name that didn't catch on: Sheldon's Conjecture, named after Sheldon Cooper from The Big Bang Theory. However, since it was solved, the name won't catch on, which is probably just as well.

LANCE: How does the Chicken McNugget Theorem fit into this?

BILL: I don't know but it's making me hungry. Let's eat!

Thursday, June 13, 2024

Favorite Theorems: Algebraic Circuits

May Edition

Most of my favorite theorems tell us something new about the world of complexity. But let's not forget the greatest technical challenges in our area: proving separations that are "obviously" true. Here's the most exciting such result from the past decade.  

Superpolynomial Lower Bounds Against Low-Depth Algebraic Circuits
Nutan Limaye, Srikanth Srinivasan and Sébastien Tavenas

In this model, the inputs are variables and constants, and the goal is to compute a specific formal polynomial using the gate operations of plus and times. Limaye, Srinivasan and Tavenas find an explicit polynomial such that no polynomial-size constant-depth algebraic circuit can compute it.

How explicit? Here it is: take \(d\) \(n \times n\) matrices, multiply them together and output the top left element of the product. The \(N=dn^2\) variables are the entries of the matrices. The top left element is a polynomial in the inputs that can be computed by a simple polynomial-size circuit that just does the iterated multiplication, just not in constant depth. The paper shows that for any unbounded \(d\) that is \(o(\log n)\), there is no constant-depth polynomial-size algebraic circuit for it.
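
Written out as a formula (in notation of my choosing, not the paper's), with \(x^{(k)}_{i,j}\) denoting the \((i,j)\) entry of the \(k\)-th matrix, the polynomial is

\(\mathrm{IMM}_{n,d} = \sum_{i_1,\ldots,i_{d-1} \in [n]} x^{(1)}_{1,i_1}\, x^{(2)}_{i_1,i_2} \cdots x^{(d)}_{i_{d-1},1},\)

a sum of \(n^{d-1}\) monomials, one for each choice of intermediate indices, each of degree \(d\).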

The authors first prove a lower bound for set multilinear circuits and then extend to more general algebraic circuits.

Sunday, June 09, 2024

CFG-Kolm-complexity of singleton sets (with Lance and Bill)

For this post all Context Free Grammars (henceforth CFGs) are assumed to be in Chomsky Normal Form. The size of a CFG \(G\)  is the number of rules. We denote this by \(|G|\).

BILL: In my automata theory class I want to do some lower bounds on the size of CFGs. It is easy to show that if \(w=0^n\) then there is a CFG \(G\) such that \(L(G)=\{w\}\) and \(|G|=O(\log n)\). I showed that if \(w\) is a Kolmogorov random string of length \(n\), and \(G\) is a CFG such that \(L(G)=\{w\}\), then \(|G|=\Omega(n/\log n)\), though this is surely known. So here is my question: Is there a natural such \(w\)? I will blog about that and make an open problems column out of it.
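
(A minimal sketch of that \(O(\log n)\) upper bound, assuming \(n=2^k\): use the rules \(A_0 \to 0\) and \(A_i \to A_{i-1}A_{i-1}\) for \(1 \le i \le k\), with start symbol \(A_k\). Then \(A_k\) generates exactly \(0^{2^k}=0^n\) with \(k+1 = O(\log n)\) rules, all in Chomsky Normal Form; for general \(n\), combine the \(A_i\) according to the binary expansion of \(n\).)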

LANCE: Kolmogorov strings are natural!

BILL: Oh yeah? If that were true then spell check would not flag Kolmogorov as being misspelled. So there!

LANCE: Can you ask a more rigorous question?

BILL: Okay. We can view the Kolmogorov result as saying that there is a function \(f\) from \(1^*\) to \(\{0,1\}^*\) such that \(f(1^n)\) is a string of length \(n\) for which any CFG for \(\{f(1^n)\}\) is large. But the function \(f\) is not computable!

LANCE: That shouldn't bother you. You wrote an entire book about how many queries to HALT and other incomputable sets are needed to solve certain problems (see here). Also, now that you know there are such strings, you can simply search for a \(w\) and test all small CFGs. So: Computable!

BILL: Still not natural. And what is the complexity? Exponential? Poly?

WE DROP THE TOPIC AND PICK IT UP AGAIN A FEW TIMES. WE (meaning mostly Lance) HAVE SOME BRILLIANT INSIGHTS THAT LEAD TO THE FOLLOWING RESULTS:  

1) For every \(w \in \{0,1\}^n\) there is a CFG G with \(L(G)=\{w\}\) and \( |G|=O(n/\log n)\)

2) If \(w\) is a de Bruijn sequence of length \(n\) and order \(k=\log n\) (we assume \(n\) is a power of 2), then every CFG \(G\) with \(L(G)=\{w\}\) has \(|G|=\Omega(n/\log n)\). There is a known algorithm that will, given \(1^n\), produce a de Bruijn sequence of length \(n\) and order \(k=\log n\) in time quasilinear in \(n\); a sketch of one standard construction is below.
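
For the curious, here is a sketch of one standard way to generate such a sequence, the FKM (Fredricksen-Kessler-Maiorana) construction via Lyndon words; this code is an illustration, not the specific algorithm from any paper mentioned here, and it produces the sequence in time roughly linear in the output length.

    def de_bruijn(k: int, order: int) -> str:
        """Return a de Bruijn sequence over {0,...,k-1} of the given order:
        a cyclic string of length k**order containing every length-`order`
        string exactly once."""
        a = [0] * (order + 1)
        seq = []

        def db(t: int, p: int) -> None:
            if t > order:
                if order % p == 0:  # a[1..p] is a Lyndon word whose length divides order
                    seq.extend(a[1:p + 1])
            else:
                a[t] = a[t - p]
                db(t + 1, p)
                for j in range(a[t - p] + 1, k):
                    a[t] = j
                    db(t + 1, t)

        db(1, 1)
        return "".join(map(str, seq))

    # A binary de Bruijn sequence of order 4 has length 2^4 = 16:
    print(de_bruijn(2, 4))  # 0000100110101111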

BILL: That bums me out for two contradictory reasons

a) The problem is NOT solved since de Bruijn is flagged by spellcheck, so the sequences are not natural.

b) The problem IS solved, so I can't use it for an open problems column.

LANCE: Do not despair!

a) De Bruijn sequences have a Wikipedia page and therefore are natural. 

b) We can post on ArXiv. 

WE DID, and a day later Markus Lohrey emailed us that, aside from the de Bruijn result, the results were already known under a different terminology, word chains. See his survey here. Then the next day, Giovanni Pighizzini emailed us that he had previously published lower bounds for de Bruijn sequences. We have since withdrawn the paper. We revised it by putting in references and history but will not put it back on arxiv. The revised paper is here.

LANCE: Bill, are you bummed out? Why did we even write the paper anyway?

BILL: Not at all! My original goal was pedagogical, and the paper we have can still be taught in automata theory next spring. PLUS, we got invited to submit to Advances in AI and ML with a 10% discount on publication fees (see here). Since we are used to getting a 100% discount on publication fees we won't be submitting, but it was nice to be asked.

LANCE: Yeah, nice to be asked to be parted from my money. At least I learned about word chains.

Thursday, June 06, 2024

The Godzilla Moment


On the plane earlier this week I got around to watching the Academy Award winning movie Godzilla Minus One, one of the best monster movies I've seen. It is set in Japan during the aftermath of World War II, with a pretty emotional substory about a man dealing with his demons from the war. I had to hide my tears from the nearby passengers.

It wasn't the story that earned the movie an Oscar: Godzilla Minus One won the award for Best Visual Effects. I found nothing wrong with the effects, but they didn't excel beyond what you see in any typical movie of the genre.

In 2008, I lamented that special effects in movies had improved so much that we had lost the amazement we felt in the 70s. Perhaps I spoke too soon, as James Cameron's Avatar came out the following year and did amaze. However, special effects have since become a commodity, something filmmakers must include because audiences expect it, but rarely do you go to a movie for the effects. In the not-too-distant future, special effects will be automated with AI, becoming just another plugin for Final Cut Pro.

It's time to retire the visual effects award, especially with new awards coming to the Oscars.

I wrote that 2008 column to mirror the lack of enthusiasm about computing at the time, which also felt like a commodity. Now we're at an exciting time in computing, particularly with the advances in artificial intelligence. But we should be wary: once (if?) AI gets consistently good it may feel like a commodity again, and once again we become victims of our own success.

Monday, June 03, 2024

FOCS 2024 Test of Time Award. Call for nominations and my opinion

 The call for nominations for the Test of Time Award at FOCS 2024 has been posted here.

Eligibility and past winners are here.


Points

1) It is good to have an award that waits until the dust settles and we can see what was really important.

2) The winners are all excellent papers that really have passed the test of time. 

3) And of course it is really important that they appeared in FOCS. NO IT ISN'T! See the next point.

4) I would prefer a test-of-time award that is independent of WHERE the paper first appeared. Tying it to FOCS or STOC or FOCS-or-STOC seems bad. I would opt for allowing any journal or conference. Appearing in a journal of low quality is not a problem, since this award should be for papers that are judged on their merit and influence, and not on their pedigree.

5) My proposal to allow any journal or conference may be impractical because some organization has to give it out, and if that organization is IEEE or ACM they will restrict to their own publications. 


6) STOC also has a test of time award, see here. I tried to find out if the SODA conference has a test of time award, but mostly got hits about the baking soda test for determining if a pregnant woman is going to have a boy or a girl. It actually works 50% of the time! See here.

7) I was not able to find any other test-of-time award for Comp Sci THEORY. 

8) I DID find test of time awards for

SIGCSE- Comp Sci Education, here. Must be for a paper published in a conference co-sponsored by SIGCSE or in an ACM journal.  So an excellent paper published elsewhere wouldn't count. 

SC2- High Performance Computing, see here. Paper must have been published in the SC conference.

ACM CCS - Security, Audit(?) and Control, see here. I think these must appear in the CCS conference.