Thursday, December 23, 2021

Complexity Year in Review 2021

The pandemic hampered many activities but research flourished with a number of great results in complexity. Result of the year goes to

Locally Testable Codes with constant rate, distance, and locality by Irit Dinur, Shai Evra, Ron Livne, Alexander Lubotzky and Shahar Mozes
and independently in
Asymptotically Good Quantum and Locally Testable Classical LDPC Codes by Pavel Panteleev and Gleb Kalachev

They achieved the seemingly impossible, a code that can be checked with constant queries, constant rate, constant distance and constant error. Here's Irit's presentation, a Quanta article and an overview by Oded Goldreich. Ryan O'Donnell presents the Panteleev-Kalachev paper, which also resolved a major open question in quantum coding theory.

Other great results include Superpolynomial Lower Bounds Against Low-Depth Algebraic Circuits by Nutan Limaye, Srikanth Srinivasan and Sébastien Tavenas, The Complexity of Gradient Descent by John Fearnley, Paul W. Goldberg, Alexandros Hollender and Rahul Savani, Slicing the Hypercube is not easy by Gal Yehuda and Amir Yehudayoff, and The Acrobatics of BQP by Scott Aaronson, DeVon Ingram, and William Kretschmer. The latter paper answers a 16-year-old open question of mine, suggesting that you cannot pull out quantumness from a computation the way you can pull out randomness.

In computing overall, the story continues to be the growth in machine learning and the power of data. We're entering a phase where data-driven programming often replaces logic-based approaches to solving complex problems. This is also the year that the metaverse started to gain attention. It's too early to know where that will take us, but the virtual space may become as disruptive as the Internet over the next decade, and its potential effect on research and education should not be ignored. In the next pandemic, we may wonder how we survived earlier pandemics without it.

The NSF might be going through some major changes and significant increases, or not, especially with the Build Back Better bill on hold. The Computing Research Policy Blog can help you through the morass. 

We remember Matthew Brennan, Benny Chor, Alan Hoffman, Arianna Rosenbluth, Walter Savitch, Alan Selman, Bob Strichartz and Stephen Wiesner.

We thank our guest posters Paul Beame, Varsha Dani, Evangelos Georgiadis, Bogdan Grechuk, David Marcus and Hunter Monroe.

In May I posted on emerging from the pandemic. Then came Delta. Now comes Omicron pushing us back online next month. I hope Pi-day doesn't bring us the Pi-variant.

Wishing for a more normal 2022.

Friday, December 17, 2021

Fifty Years of P vs. NP and the Possibility of the Impossible


I have a new article, Fifty Years of P vs. NP and the Possibility of the Impossible, marking the anniversary of the 1971 publication of Steve Cook's seminal paper a month too late, in the January 2022 Communications of the ACM.

Initially Moshe Vardi asked me to update my 2009 CACM survey The Status of the P versus NP Problem. The P vs NP problem hasn't changed much, but computing has gone through dramatic changes in the last dozen years. I try to view P vs NP through the lens of modern optimization and learning, where we are heading toward a seemingly impossible Optiland (a play on Impagliazzo's Pessiland), in which we can solve many of the toughest NP-complete problems in practice and yet cryptography remains unscathed.

CACM produced a slick companion video to the article.

Fifty Years of P Versus NP and the Possibility of the Impossible from CACM on Vimeo.

Sunday, December 12, 2021

Did Lane Hemaspaandra invent the Fib numbers?

 (I abbreviate Fibonacci by Fib throughout. Lane Hemaspaandra helped me with this post.) 

We all learned that Fib invented or discovered the Fib Numbers:

$f_0=1$,

$f_1=1$, and

for all $n\ge 2$, $f_n = f_{n-1} + f_{n-2}$.
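
For concreteness, here is the recurrence above in code (a minimal Python sketch, not from any of the sources in this post):

def fib(n):
    # f_0 = f_1 = 1, and f_n = f_{n-1} + f_{n-2} for n >= 2
    f_prev, f_curr = 1, 1
    for _ in range(n - 1):
        f_prev, f_curr = f_curr, f_prev + f_curr
    return f_curr if n >= 1 else f_prev

# [fib(n) for n in range(8)]  ->  [1, 1, 2, 3, 5, 8, 13, 21]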

We may have learned that they come up in nature (NOT true, see here) or that they were important in mathematics (questionable--see this blog post here, which says no, though some comments give good counterarguments). We also learned that Fibonacci was the mathematician who first studied them. Also not true! This one surprised me.

1) I came across a blog post (here) that says they were invented by Hemachandra first. Wow--I then recalled that Lane Hemaspaandra's birth surname was Hemachandra (he married Edith Spaan and they are now both Hemaspaandra). So naturally I emailed him to ask how a 20th-century person could invent something earlier than 1170. He told me a picture of him in the basement ages while he stays young.

2) It would be nice to say OH, let's call them Hemachandra numbers (would that be easier than convincing the world to use tau instead of pi? See The Tau Manifesto) and let students know that there were people other than Europeans who did math back then. But even that story is not as simple as it seems. Lane emailed me this article: here, which tells the whole muddled story. (In what follows I leave out the accents.)

Virahanka seems to be the formulator of the Fib recurrence, though not quite the numbers. His motivation was Sanskrit poetry. He did this between 600 and 800 AD.

Gopala, in work prior to 1135, was aware of Virahanka's work. In particular he knew about the inductive rule. But he also set the initial values and generated the numbers, so he was the first to have the sequence we now call the Fib numbers. His motivation was Sanskrit poetry.

Hemachandra in 1150 also formulated them, independently. His motivation was Sanskrit poetry.

(I will learn some Sanskrit poetry the next time I teach Discrete Math so I can give the students an application of the material!)
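
To spell out that application (a minimal sketch under the standard setup, not taken from the articles linked above): in Sanskrit prosody a short syllable takes 1 beat and a long syllable takes 2, and a meter of n beats must end in one or the other, which gives exactly the Fib recurrence.

def count_meters(n):
    # number of ways to fill n beats with short (1-beat) and long (2-beat) syllables;
    # the last syllable is short or long, so count(n) = count(n-1) + count(n-2)
    if n < 2:
        return 1  # n = 0: the empty meter; n = 1: a single short syllable
    return count_meters(n - 1) + count_meters(n - 2)

# count_meters(n) agrees with f_n under the convention f_0 = f_1 = 1:
# [count_meters(n) for n in range(8)]  ->  [1, 1, 2, 3, 5, 8, 13, 21]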

So does Virahanka win? Hardly:

Acarya Pingala's writings from the 5th or 6th century BC (YES, BC!) indicate that he knew about the Fib numbers in the context of (you guessed it!) Sanskrit poetry.

3) I would support changing the name of the Fib Numbers to the Pingala numbers. This is both good and bad news for Lane:

a) Bad news in that he does not get a sequence of numbers that shares his pre-marriage name.

b) Good news in that if I settled on Hemachandra numbers then Lane and Edith would have to decide if 0 or 1 or 2 of them want to change their name to Hemachandra. I would guess not--too complicated. Plus one name change in a life is enough. 

4) The story (especially  the articles I pointed to) shows just how complicated history can get. Even a straightforward question like:

Who first formulated the Fib Numbers?

might not have a well-defined answer. Perhaps it is the wrong question, since if people formulate the concept independently of each other, they should all get some credit. Even if the authors are 1000 years apart.

Side note: Independent discovery may be harder to assert now since, with the web, Alice could have seen Bob's paper, so it may be hard to call Alice's discovery independent. As I have mentioned before on this blog, my students have a hard time with the notion of Cook and Levin coming up with NP-completeness independently, since surely one would have posted it and the other would have seen it. An era before posting was possible! Unimaginable to them. Sometimes even to me.

Wednesday, December 08, 2021

Defending the Status Quo

When the Wall Street Journal's editorial board and the New York Post endorse your efforts, that should ring warning bells.

Several members of the theory and mathematics communities have written and endorsed an Open Letter on K-12 Mathematics that attacks the proposed revisions to the California Mathematics Framework (CMF). I have mixed feelings about these efforts.

Certainly the CMF has its issues, and the FAQs protest too much. But the letter goes too far in the other direction, arguing mainly for the status quo that worked well for those who signed it, very few of whom have significant experience in K-12 education. The open letter allows for only incremental change, unlikely to lead to any significant improvements.

Before you sign the letter, take a look at the CMF introduction:

To develop learning that can lead to mathematical power for all California students, the framework has much to correct; the subject and community of mathematics has a history of exclusion and filtering, rather than inclusion and welcoming. There persists a mentality that some people are “bad in math” (or otherwise do not belong), and this mentality pervades many sources and at many levels. Girls and Black and Brown children, notably, represent groups that more often receive messages that they are not capable of high-level mathematics, compared to their White and male counterparts. As early as preschool and kindergarten, research and policy documents use deficit-oriented labels to describe Black and Latinx and low-income children’s mathematical learning and position them as already behind their white and middle-class peers. These signifiers exacerbate and are exacerbated by acceleration programs that stratify mathematics pathways for students as early as sixth grade.

Students internalize these messages to such a degree that undoing a self-identity that is “bad at math” to one that “loves math” is rare. Before students have opportunities to excel in mathematics, many often self-select out of mathematics because they see no relevance for their learning, and no longer recognize the inherent value or purpose in learning mathematics.

You may or may not agree with the CMF approach, but it's hard to deny the real challenges they are trying to address and students they are trying to help. If you don't agree with the CMF, work with them to come up with a good alternative that helps create a more inclusive mathematical citizenry. An outright rejection of the approach won't fix problems and probably won't be taken seriously, except from the conservative press.

Update (1/12/22): Boaz Barak and Jelani Nelson respond to this post.

Sunday, December 05, 2021

Yes Virginia, there is a Santa Clause for Complexity Theorists, If you Only Believe

(Guest Post by Hunter Monroe) In this  guest post and discussion paper, I present a remarkable set of structurally similar conjectures which, if you only believe them, conjure up a dream world for theorists by asserting a new form of diagonalization based on naturally nonrelativizing facts invoking a deep linkage to underlying noncomputable languages. These conjectures, all stronger than the things to be proved, imply that the polynomial hierarchy does not collapse because the arithmetic hierarchy does not collapse, and P≠NP≠coNP. The diagonalizations imply the existence of hard instances, with the result that many complexity classes have speedup, including the Π side of PH, and proof speedup for tautologies stems from proof speedup for arithmetic. These conjectures do two things: (1) let us explore a hypothetical world where many open problems about uniform complexity classes are resolved and consider steps beyond e.g. to circuit complexity, and (2) reduce numerous open questions to a single plausible claim about how Turing machines have limited information about noncomputable languages. This would potentially allow a slew of open questions to be resolved at once with a skeleton key.

The following conjecture implying $\textbf{P}\neq\textbf{NP}$ is remarkable: it hints at a deeper, unnoticed relationship between complexity and noncomputability; it is equivalent to speedup for all paddable $\textbf{coNP}$-complete languages and in proof length for tautologies; tweaked versions would separate other complexity classes; and if true it is a nonrelativizing fact.

Conjecture: (*) For any deterministic TM $M$ accepting the $\textbf{coNP}$-complete language ``nondeterministic TM $N$ on input $x$ does not halt within $t$ steps'' ($\texttt{coBHP}$), there exists a $\langle N',x'\rangle\in\texttt{coHP}$ ($N'$ does not halt on $x'$, ever) with $M$'s running time $f(t)=T_M(N',x',1^t)$ not bounded by any polynomial.
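
To fix notation (a sketch of the definitions implicit in the conjecture, not additional claims): $\texttt{coBHP}=\{\langle N,x,1^t\rangle|$ nondeterministic TM $N$ does not halt on input $x$ within $t$ steps$\}$, $\texttt{coHP}=\{\langle N,x\rangle|$ $N$ never halts on input $x\}$, and $T_M(N',x',1^t)$ is the running time of the deterministic machine $M$ on the input $\langle N',x',1^t\rangle$. For a fixed non-halting pair $\langle N',x'\rangle$, this running time becomes a function $f(t)=T_M(N',x',1^t)$ of $t$ alone, and (*) says that every $M$ accepting $\texttt{coBHP}$ has some pair on which $f$ is not polynomially bounded.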

If true, (*) is a nonrelativizing fact; there is no hard $\langle N, x\rangle$ for $M$ with an exponential time oracle. The noncomputable language $\texttt{coHP}$ potentially explains why (*) is true, by analogy with this trivial theorem:

Theorem. For any $M$ accepting $\texttt{coBHP}$, there exists some non-halting $\langle N',x'\rangle\in\texttt{coHP}$ with $f(t)=T_M(N',x',1^t)$ not bounded by a constant.

Otherwise, $M$ would accept $\texttt{coHP}$ and have too much information about a non-c.e. language; (*) is just a stronger version. In the extreme, any $M$ is completely ignorant about some $\langle N',x'\rangle$ and requires on the order of $2^t$ steps to rule out every potentially halting branch. Tweaking (*) yields conjectures implying that $\textbf{PH}$ does not collapse:

Conjectures: For any $M^{\Pi^p_i}$ that accepts $\Pi^p_{i+1}=\{\langle N^{\Pi^p_i},x,1^t\rangle|$ $N^{\Pi^p_i}$ does not halt on input $x$ in $t$ steps$\}$, there is a non-halting $\langle N'^{\Pi^p_i},x'\rangle\in \Pi_i$ with $M^{\Pi^p_i}$'s running time not bounded by any polynomial.

By invoking every level $\Pi_i$ of the arithmetic hierarchy ($\textbf{AH}$), these conjectures state that the noncollapse of $\textbf{PH}$ is due to the noncollapse of $\textbf{AH}$. The conjecture (*) can be calibrated depending on the desired separation to equip $M$ with an oracle or nondeterminism or constrain its resources, to choose a resource-bounded complete problem and underlying non-c.e. language, and to fine-tune how hard a hard instance needs to be.

Proof speedup for tautologies (equivalent to (*)) may stem from the proof speedup for arithmetic that occurs when adding undecidable statements as new axioms, allowing new theorems to be proved and shortening the proofs of existing theorems. This literature translates any arithmetic theorem free of existential quantifiers into a tautology by replacing $k$-bit numbers with $k$ Boolean variables. The analogy with (*) suggests this stronger conjecture may in fact also be equivalent:

Conjecture: The following two statements are equivalent: (1) there is no optimal propositional proof system; and (2) any propositional proof system $P$ is outperformed by a sufficiently powerful conservative extension $T$ of Peano arithmetic, and $T$ can be improved further by adding any statement undecidable in $T$ as a new axiom.

So (*) is a Swiss army knife for generating conjectures that give us a vision of a world in which answering essentially one question would serve as a skeleton key that unlocks many open problems.

Wednesday, December 01, 2021

TheoretiCS: A New TCS Journal

Guest Post from Paul Beame on behalf of the TheoretiCS Foundation

I am writing to let you know of the launch today of TheoretiCS, a new fully open-access journal dedicated to Theoretical Computer Science developed by members of our community, which I have been involved in and which I gave a brief pre-announcement about at STOC.

This journal has involved an unprecedented level of cooperation of representatives of leading conferences from across the entire Theoretical Computer Science spectrum. This includes representatives from STOC, FOCS, SODA, CCC, PODC, SoCG, TCC, COLT, ITCS, ICALP, which may be more familiar to readers of your blog, as well as from LICS, CSL, CONCUR, ICDT, MFCS and a number of others.

Two Points of Emphasis

  • Our quality objective - TheoretiCS aims at publishing articles of a very high quality, and at becoming a reference journal on par with the leading journals in all of Theoretical Computer Science.
  • The inclusive view of Theoretical Computer Science that this journal represents, which is evident in the choice of two excellent co-editors-in-chief, Javier Esparza and Uri Zwick, and an outstanding inaugural editorial board.

Guiding principles and objectives

  • We believe that our field (and science in general) needs more 'virtuous' open-access journals, a whole eco-system of them, with various levels of specialization and of selectivity. We also believe that, along with the structuring role played by conferences in theoretical computer science, we collectively need to re-develop the practice of journal publications.
  • The scope of TheoretiCS is the whole of Theoretical Computer Science, understood in an inclusive meaning (concretely: including, but not restricted to, the Theory of Computing and the Theory of Programming; or equivalently, the so-called TCS-A and TCS-B, reminiscent of Jan van Leeuwen et al.'s Handbook of Theoretical Computer Science).
  • Our aim is to rapidly become a reference journal and to contribute to the unity of the Theoretical Computer Science global community. In particular, we will seek to publish only papers that make a very significant contribution to their respective fields, that strive to be accessible to a wider audience within theoretical computer science, and that are, generally, of a quality on par with the very best journals in the field.
  • TheoretiCS adheres to the principles of 'virtuous' open-access: there is no charge to read the journal, nor to publish in it. The copyright of the papers remains with the authors, under a Creative Commons license.

Organization and a bit of history

The project started in 2019 and underwent a long gestation. From the start, we wanted to have a thorough discussion with a wide representation of the community, on how to best implement the guiding principles sketched above. It was deemed essential to make sure that all fields of theoretical computer science would feel at home in this journal, and that it would be recognized as a valid venue for publication all over the world.

This resulted in the creation of an Advisory Board, composed of representatives of most of the main conferences in the field (currently APPROX, CCC, COLT, CONCUR, CSL, FOCS, FoSSaCS, FSCD, FSTTCS, ICALP, ICDT, ITCS, LICS, MFCS, PODC, SoCG, SODA, STACS, STOC, TCC) and of so-called members-at-large. 

Logistics and answers to some natural questions

  • The journal is published by the TheoretiCS Foundation, a non-profit foundation established under German law. Thomas Schwentick, Pascal Weil, and Meena Mahajan are officers of the foundation.
  • TheoretiCS is based on the platform episciences.org, in the spirit of a so-called overlay journal.
  • The Advisory Board, together with the Editors-in-Chief and the Managing Editors, spent much of their effort designing and implementing an efficient 2-phase review system: efficient in terms of the added value it brings to the published papers and their authors, and of the time it takes. Yet, as this review system relies in an essential fashion on the work and expertise of colleagues (as in all classical reputable journals), we cannot guarantee a fixed duration for the evaluation of the papers submitted to TheoretiCS.
  • Being charge-free for authors and readers does not mean that there is no cost to publishing a journal. These costs are covered for the foreseeable future by academic institutions (at the moment, CNRS and Inria, in France; others may join).
  • The journal will have an ISSN, and each paper will have a DOI. There will be no print edition.