Wednesday, June 23, 2021

I went to the ``debate'' about Program Verif and the Lipton-DeMillo-Perlis paper

On Thursday June 17 I went to (on Zoom- does that need to be added anymore?)

A Debate on Program Correctness

There was no subtitle but it could have been:

Have the points made in Social Processes and Proofs of Theorems and Programs by DeMillo, Lipton, and Perlis survived the test of time? (Spoiler Alert: Yes.)

I found out about it from the Lipton-Regan blog here

The debaters were Richard DeMillo and Richard Lipton and the moderator was Harry Lewis (Alan Perlis passed away in 1990).  Calling it a debate is not correct since DeMillo and Lipton (and Lewis) all agree. (DeMillo and Lipton even have the same first name!)  The DLP paper is in Harry Lewis's collection Ideas that created the future.  The event  should have been advertised as a discussion. However, it was a good discussion so this is not a complaint.

Here are some things that came out of the discussion.

1) The main topic was the 1979 DeMillo-Lipton-Perlis paper (see here) that gave arguments why Proofs of Program correctness could not work.

An all-too-brief summary of the DLP paper: Some researchers are trying to set up frameworks for doing proofs that programs are correct, analogous to the certainty we get with a proof of a Theorem in Mathematics. But proofs in Mathematics are, in reality, NOT that rigorous. Often details are left out or left to the reader. This is fine for mathematics (more on that later) but unacceptable for programs, which need rather precise and rigorous proofs.

How do theorems in mathematics really get verified? By having enough people look at them and make sure they match intuitions, what DLP call A Social Process.  (NOTE FROM BILL: Papers that are not important do not get looked at so there may well be errors.)

2) The notion of proving-programs-correct was very seductive; however, the people who were trying to do this had a blind spot about how the analogy of proving-programs-correct and proving-theorem-correct differ.  In particular, a program is rather complicated and even stating carefully what you want to prove is difficult. By contrast, for most math statements, what you want to prove is clear. Note also that a program has lots of code (far more now than when DLP was written) and so much can happen that you cannot account for.

3) The DLP paper had a large effect on the field of program verification.  Funding for it was reduced and students were discouraged from going into it.

4) When DLP appeared DeMillo and Lipton were pre-tenure. Hence it took lots of courage to publish it. Alan Perlis had tenure and had already won a Turing award.  This did give DeMillo and Lipton some cover; however, it still took courage.

5) How did the Program Verification Community deal with the objections in DLP?  DeMillo said that he looked at a large set of papers in the field, and very few even mentioned DLP. He recommends reading the book Mechanizing Proof: Computing, Risk, and Trust by Donald MacKenzie (see here).

6) So how can we be more certain that programs are correct?

a) Testing.
b) Modularize and test. Fix errors. Modularize  and test. Fix errors...
c) Try to isolate side effects.
d) More testing.
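The modularize-and-test loop in (b) can be sketched in a few lines of Python. This is my own illustration (the function names are hypothetical, not from the discussion): test a small module in isolation against a trusted oracle on many random inputs.

```python
import random

# A tiny module under test: small enough to test thoroughly in isolation.
def merge(xs, ys):
    """Merge two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(xs) and j < len(ys):
        if xs[i] <= ys[j]:
            out.append(xs[i]); i += 1
        else:
            out.append(ys[j]); j += 1
    return out + xs[i:] + ys[j:]

# Test the module against a trusted oracle (Python's built-in sorted)
# on many small random inputs; fix errors, then test again.
def test_merge(trials=1000):
    for _ in range(trials):
        xs = sorted(random.sample(range(100), random.randint(0, 10)))
        ys = sorted(random.sample(range(100), random.randint(0, 10)))
        assert merge(xs, ys) == sorted(xs + ys)

test_merge()
```

Of course, as DLP would point out, passing a thousand random tests is evidence of correctness, not a proof of it.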

Some point to Model Checking, which could be considered very sophisticated testing, but that's used to verify circuits and perhaps low-level code, not programs. Model checking is a success story and note that Edmund Clarke, E. Allen Emerson, and Joseph Sifakis shared a (well deserved) Turing award for this work. But see next note.

6.5) An audience member pointed out that Program Verification people have won several Turing Awards:

Dijkstra 1972

Floyd 1978 

Hoare 1980

Pnueli 1996

(Are there more?) 

so the field is alive and healthy. DeMillo responded that prizes for academic research are a poor measure of  success. 

7) Can computers themselves help with proofs of correctness? That is the only hope; however, there are scaling problems.

8) When DLP was written a program with 100,000 lines of code was considered large. Now we have programs with millions of lines of code. And now we have more concurrency. So the lessons of the DLP paper are probably more relevant now than they were then.

9) Since Program Verification does not seem to be used, how come we don't have a Software crisis?

a) We do! The Q+A mechanism at the meeting was terrible. 
b) See the answer to question 6.

10) SPECS are a problem. Tony Hoare once gave a talk where he proved that a program sorted correctly and then pointed out that if the program just output 0,...,0 that would have also satisfied the SPEC, since all that was required was that the output be sorted, not the (overlooked!) requirement that it be the same numbers as the input. So one needs to be careful!
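Hoare's cautionary tale is easy to make concrete. The Python below is my own illustration (not his actual example): a bogus "sort" that outputs all zeros passes a sortedness-only SPEC, and only the full SPEC, which also demands the output be a permutation of the input, catches it.

```python
from collections import Counter

def is_sorted(lst):
    return all(a <= b for a, b in zip(lst, lst[1:]))

# The incomplete SPEC asks only: "the output is sorted."
def bogus_sort(lst):
    return [0] * len(lst)   # sorted, but not the same numbers as the input!

# The full SPEC: sorted AND the same multiset of elements as the input.
def full_spec(inp, out):
    return is_sorted(out) and Counter(out) == Counter(inp)

inp = [3, 1, 2]
assert is_sorted(bogus_sort(inp))            # passes the incomplete SPEC
assert not full_spec(inp, bogus_sort(inp))   # caught by the full SPEC
assert full_spec(inp, sorted(inp))           # a real sort passes both
```

A proof of correctness is only as good as the SPEC it is a proof against.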

11) Despite being a leader in the field, Tony Hoare has come to see the limitations of the proving-programs-correct approach to Software Verification.  His 1969 paper An Axiomatic Basis for Computer Programming is in Harry Lewis's collection Ideas that Created the Future; looking back on that line of work years later, Hoare said the following:

Ten years ago, researchers into formal methods (and I was the most mistaken among them) predicted that the programming world would embrace with gratitude every assistance promised by formalization to solve the problems of reliability that arise when programs get large and more safety-critical. Programs have now got very large and very critical--well beyond the scale which can be comfortably tackled by formal methods. There have been many problems and failures, but these have nearly always been attributable to inadequate analysis of requirements or inadequate management control. It has turned out that the world just does not suffer significantly from the kind of problem that our research was originally intended to solve.

12) Richard Lipton told a story where he showed that the program in question satisfied the SPEC, but the SPEC was a tautology that any program would satisfy.  Again, one needs to be careful!

13) The test of time: Verifying large-scale programs does not seem to be common in industry. Is industrial adoption a fair measure? 

14) Harry Lewis's  book Ideas that created the future collects up, edits, and comments on 46 important papers in Computer Science (I reviewed it in the issue of SIGACT News that is in your mailbox---I will blog about it at a later time.) There are several papers in it about program verification, including DLP, Hoare's paper, and three papers by Dijkstra.

a) When Harry discussed including DLP some people said `You're going to include that!  It's a polemic, not a paper!'

b) When Harry teaches a course from this book (it must be an awesome class!) and asks the students at the end which papers they learned the most from, the top two are an excerpt from Fred Brooks's The Mythical Man-Month (see my post on Brooks's work here) and DLP.

c) I am hoping that this is just one of 46 talks with authors of the papers in his book.  I look forward to his interview with Aristotle, Leibniz, Boole, Turing, ...

Monday, June 21, 2021

Collusion Rings

Grigory Yaroslavtsev now maintains the Theory Jobs Spreadsheet, a crowd-sourced listing of where your favorite CS theorists have taken new positions. Read his blog post for more info.

Michael Littman wrote a CACM Viewpoint on collusion rings designed to tilt the odds into getting papers into conferences. See also this post by T. N. Vijaykumar. I started to write a post on the topic but couldn't get it going so I asked the AI algorithm GPT-3 to write a guest post. Here is its story.

I have been part of a collusion ring and I know how it feels. I was a postdoc at the University of California, Berkeley, in the early 2000s when I first learned about the practice of “paper rings.” A paper ring is a group of researchers who agree to submit each other’s papers to conferences without disclosing this fact to the conference organizers. The idea is that if you have several submissions from the same conference, your chances of getting accepted are higher. The pressure to be part of such a ring can be immense. You are told that you need to publish or perish and that you need to get papers into top conferences. If you don’t play along, you will be left behind. I was lucky enough to have a mentor who warned me against being part of such rings. He told me that if I did not want my name on a paper, then I should not submit it to a conference. He also told me that if I did not want to be part of a paper ring, then I should not submit papers for others in the ring. He said that he would help me with my own research and that he would help me get my own papers into conferences. He was right: My own papers were accepted into conferences and they were published in journals.

I am now an associate professor at the University of Massachusetts Amherst and I have been on the other side of this issue as well. In 2012, I was asked by an editor at a top computer science conference to review a paper submitted by one of my students. The student had been part of a paper ring and had submitted his own paper along with those submitted by his collaborators in the ring. The editor told me that he had rejected the student’s paper because it was too similar to other papers submitted by members of the ring. However, he wanted me to review it so that he could reconsider accepting it for publication if I recommended doing so.

What should I do? Should I tell him about the paper ring? Should I recommend accepting the paper? Should I tell him that there is no way he should accept it? Should I tell him that he should reject it but then publish it anyway? What would you do? I decided not to tell him about the paper ring and not to recommend accepting the paper for publication. Instead, I told him that he should reject it but then publish it anyway because this would send a strong message about what we expect from our students and what we think about such practices. The editor agreed with my decision and published the student’s paper along with two other papers submitted by members of the same paper ring.

The student received his PhD degree but did not get any credit for his work on these three papers because they were considered as part of his dissertation work when he defended his thesis later that year. In retrospect, I think that my decision was wrong because it allowed these students to continue their unethical behavior without any consequences for them or their advisors (who were also part of this ring). But at least in this case, there were no negative consequences for anyone involved in this incident except for these students who did not get credit for their work on these three papers because they were considered as part of their dissertation work when they defended their thesis later that year.

Thursday, June 17, 2021

Benny Chor (1956-2021)

Benny Chor passed away on June 10, 2021. Luca Trevisan had a blog post on the news.

We present a guest post on Benny Chor's life and works by Oded Goldreich. The post has many refs to papers. They will appear at the end with pointers to them.

Benny Chor was born on December 23rd 1956 and grew up in Tel-Aviv, Israel. He studied Mathematics at the Hebrew University, receiving a B.Sc. in 1980 and an M.Sc. in 1981. He then switched to studying Computer Science at MIT, and graduated in 1985 with a PhD thesis titled

 Two Issues in Public Key Cryptography -- RSA Bit Security and a New Knapsack Type System

which received an ACM Distinguished Dissertation award. After post-doctoral periods at MIT and Harvard, he took a faculty position at the Computer Science Department of the Technion (1987-2001), and then at Tel-Aviv University, where he served as the chairman of the department for two years. He died on June 10th 2021, from a terminal disease.

Although Benny was a very articulate and verbal person, I find it impossible to describe his personality in words. The point is that words cannot capture the experience of interacting with him, which was always sheer fun. Still, I guess I should say something personal as a close friend of his. So I will just mention that he lived happily with Metsada, whom he met in the summer of 1981, and that they lovingly raised three kids: Arnon, Omer and Aya. Actually, let me also mention his life-long close relationship with his brother, Kobi.

Focusing on his contributions to science, I will confine myself to the areas of cryptography and randomized computation. Still, let me mention that in the mid 1990s, Benny's research interests gradually shifted to computational biology, but I will not review his contributions to that area, since it is very remote from my own expertise.

In my opinion, Benny's most important contribution to cryptography is the fundamental
paper on Verifiable Secret Sharing [CGMA].  Loosely speaking, Verifiable Secret Sharing
is a method to share a secret so that any majority of share-holders may reconstruct the
secret, whereas no minority may do so, and still each share-holder can verify that the
share that they got is a valid one.  This primitive had a tremendous impact on the
development of secure multi-party protocols, and almost all subsequent works in this
area utilize it.
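To make the sharing idea concrete, here is a hedged Python sketch of plain Shamir-style threshold secret sharing; any majority reconstructs the secret, a minority learns nothing. (This is the textbook scheme only; the verifiability layer that [CGMA] adds, letting each share-holder check that their share is valid, is not shown.)

```python
import random

P = 2**31 - 1  # a prime modulus; all arithmetic is in the field Z_P

def share(secret, n, t):
    """Split `secret` into n shares; any t of them reconstruct it."""
    # A random degree-(t-1) polynomial with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    # Share i is the polynomial evaluated at x = i, for i = 1..n.
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse (Fermat's little theorem).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(42, n=5, t=3)          # 5 share-holders, majority = 3
assert reconstruct(shares[:3]) == 42  # any 3 shares suffice
```

Verifiable secret sharing solves the problem this sketch ignores: a cheating dealer or share-holder could hand out inconsistent values, and nothing here would detect it.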

Additional research milestones of Benny, in the area of Cryptography, include the proof
of the existence of a ``hard-core'' in the RSA and Rabin functions [BCS, ACGS], the
introduction and design of Private Information Retrieval (PIR) schemes [CGKS] (as well
as their computational counterparts [CG97]), and the introduction and design of
Tracing Traitor schemes [CFNP].  Each of these works deserves more than the foregoing
brief mention, yet I'm not even mentioning other important works like [CK].

Turning to the area of Randomized Computation, Benny's contributions span diverse areas
ranging from the manipulation of randomness to the use of randomness in complexity theory
and distributed computing.  In particular, his work on weak sources of randomness [CG88]
identified min-entropy (of the source) as the key parameter for randomness extraction and
presented a simple two-source extractor.  Distilling an aspect of [ACGS], his work on
pairwise-independent sampling [CG89] demonstrates the general applicability of this method. His works in distributed computing include a randomized Byzantine Agreement protocol [CC] that beats the deterministic bound without using unproven assumptions.  Lastly, let me just mention a couple of his complexity-theoretic works: [BCGL, CCGHHRR].
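The pairwise independence mentioned above can be illustrated with the standard linear hash family (a textbook construction, offered here as an illustration rather than the specific sampler of [CG89]): for a prime p, the maps h(x) = (a*x + b) mod p, over all a, b, are pairwise independent, meaning that for any fixed x != y the pair (h(x), h(y)) is uniformly distributed.

```python
from collections import Counter

p = 7  # a small prime, so we can check the family exhaustively

def h(a, b, x):
    return (a * x + b) % p

# Exhaustive check for one pair x != y: as (a, b) ranges over all p*p
# choices, every value pair (h(x), h(y)) should occur exactly once,
# i.e. (h(x), h(y)) is uniform over Z_p x Z_p.
x, y = 2, 5
counts = Counter((h(a, b, x), h(a, b, y))
                 for a in range(p) for b in range(p))
assert len(counts) == p * p and all(c == 1 for c in counts.values())
```

The point of such families is that they need only two random field elements (a and b) yet behave, for any two sample points at a time, like fully independent randomness, which is what makes derandomized sampling arguments go through.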

Benny was a superb teacher and a devoted graduate student adviser.  It is tempting to try
listing the numerous educational projects that he initiated, and his former students, but
these lists will be too long and the risk of a crucial omission is too high.  So let me
end by expressing the feeling of loss that is shared by many in our community,
which adds to my personal sorrow.

Oded and Benny

Benny with his family's namesake (Chor = Bull)

[ACGS] Werner Alexi, Benny Chor, Oded Goldreich, Claus-Peter Schnorr:
RSA and Rabin Functions: Certain Parts are as Hard as the Whole.
SIAM J. Comput. 17(2): 194-209 (1988) LINK

[BCGL] Shai Ben-David, Benny Chor, Oded Goldreich, Michael Luby:
On the Theory of Average Case Complexity. J. Comput. Syst. Sci. 44(2): 193-219 (1992) LINK

[BCS] Michael Ben-Or, Benny Chor, Adi Shamir: On the Cryptographic Security of Single RSA Bits STOC 1983: 421-430. LINK

[CCGHHRR] Richard Chang, Benny Chor, Oded Goldreich, Juris Hartmanis, Johan Håstad, Desh Ranjan, Pankaj Rohatgi: The Random Oracle Hypothesis Is False.
J. Comput. Syst. Sci. 49(1): 24-39 (1994) LINK

[CC] Benny Chor, Brian A. Coan:
A Simple and Efficient Randomized Byzantine Agreement Algorithm.
IEEE Trans. Software Eng. 11(6): 531-539 (1985) LINK

[CFNP] Benny Chor, Amos Fiat, Moni Naor, Benny Pinkas:
Tracing traitors. IEEE Trans. Inf. Theory 46(3): 893-910 (2000) LINK

[CG97] Benny Chor, Niv Gilboa:
Computationally Private Information Retrieval. STOC 1997: 304-313 LINK

[CG88] Benny Chor, Oded Goldreich:
Unbiased Bits from Sources of Weak Randomness and Probabilistic Communication Complexity.
SIAM J. Comput. 17(2): 230-261 (1988) LINK

[CG89] Benny Chor, Oded Goldreich:
On the power of two-point based sampling. J. Complex. 5(1): 96-106 (1989) LINK

[CGKS] Benny Chor, Oded Goldreich, Eyal Kushilevitz, Madhu Sudan:
Private Information Retrieval. J. ACM 45(6): 965-981 (1998) LINK

[CGMA] Benny Chor, Shafi Goldwasser, Silvio Micali, Baruch Awerbuch:
Verifiable Secret Sharing and Achieving Simultaneity in the Presence of Faults.
FOCS 1985: 383-395. LINK

[CK] Benny Chor, Eyal Kushilevitz:
A Zero-One Law for Boolean Privacy.  STOC 1989: 62-72. LINK

Thursday, June 10, 2021

The Future of Faculty Hiring

Faculty hiring in computer science is a process long due for an overhaul. The pandemic certainly changed some of the dynamics, moving most interviews online and saving a ton of money and time. Will this be the start of a fresh approach to recruiting?

A typical search in the past few years had some schools flying in 30-40 candidates, typically costing over $1000 each and a full-time job for a staff member during the search. We'd justify the expense as small compared to the millions we'd invest in a faculty member throughout their career, but it is generally the largest discretionary expense for a CS department. It also gives advantages to rich departments over others.

During the pandemic all those interviews moved online and worked reasonably well at virtually no additional cost. Also no need to scrounge around to find faculty willing to skip family meals to have dinner with the candidates. And if a faculty member had a conflict with a candidate on the interview day, they could schedule on a different day. There really is no reason to have all the meetings on the same day.

With the pandemic mostly behind us, will we go back to in-person interviews moving forward? I suspect the airport interview, where you fly out 20 or so candidates to have hour-long interviews in a hotel near an airport with a search committee for an administrative position, will be the first to go completely virtual. 

Even for regular faculty interviews, there will be great pressure to reduce the number of in-person visits, perhaps to just the top candidates, or just the ones who have offers--make the "second visit" the only visit. Richer departments may find the expense worthwhile to make a bigger impression on the candidates and that will only expand the advantage of wealthier universities.

Times like this are the perfect opportunity for CS leadership to come in and give some sanity to the hiring process but I'm not holding my breath.

Sunday, June 06, 2021

When do you use et al. as opposed to listing out the authors? First names? Middle initials? Jr?

If I were referring to the paper with bibtex entry: 


  author    = {Jeffrey Bosboom and
               Spencer Congero and
               Erik D. Demaine and
               Martin L. Demaine and
               Jayson Lynch},
  title     = {Losing at Checkers is Hard},
  year      = {2018},
  note      = {\newline\url{}},


(The paper is here.)

I would write 

Bosboom et al.~\cite{BCDDL-2018} proved that trying to lose at checkers (perhaps you are playing a child and want to keep up their self-esteem, or a Wookie and don't want your arms to be torn off your shoulders   (see here),  or a Wookie child) is hard. 

Why did I use et al. ? Because it would be a pain to write out all of those names. 

How many names does it take to make you write et al. ? Are there exceptions? 

I have not seen any discussion of this point on the web. So here are my rules of thumb and some questions. (CORRECTION: a commenter points out that this IS discussed on the web. Even so, you can comment on my thoughts or give your thoughts or whatever you like.) 

1) If there are 3 or more people then use et al. Exception: If the three are VERY WELL KNOWN as a triple. E.g., the double play was Tinker to Evers to Chance. Well, that would not really come up since in baseball nobody ever says the double play was Tinker et al.  More relevant examples:

Aho, Hopcroft, and Ullman

Cormen, Leiserson, and Rivest, also known as CLR

The more recent edition is

Cormen, Leiserson, Rivest, and Stein. I have heard CLRS but I don't know if people WRITE all four names. 

Lenstra-Lenstra-Lovasz also usually mentions all three. 

2) If there is one name do not use et al.  unless that one person has a multiple personality disorder.

3) If there are 2 people it can be tricky and perhaps unfair. If the second one has a long name then I am tempted to use et al. For example

Lewis and Papadimitriou (If I misspelled Christos's name-- well- that's the point!- to avoid spelling errors I want to use et al.) 

Lampropoulos and Paraskevopoulou (the first one is UMCP new faculty!). After typing in the first name I would not be in the mood to type in the second. 

Similar if there are lots of accents in the name making it hard to type in LaTeX (though I have macros for some people like Erdos who come up a lot) then I might use et al. 

(ADDED LATER- some of the commenters object to my `rule' of leaving out the last name if its complicated. That is not really my rule- the point of this post was to get a discussion going about the issue, which I am happy to say has happened.) 


There are other issues along these lines: when to include the first name (when there is more than one person with that last name, e.g. Ryan Williams and Virginia Vassilevska Williams), and when to use middle initials (in the rare case where there is someone with the same first and last name- Michael J. Fox added the J and uses it since there was an actor named Michael Fox).

I will soon give a quote from a math paper that amused me, but first some context.  The problem of determining if a poly in n variables over Z has an integer solution is called E(n). By the solution to Hilbert's 10th problem we know that there exists n such that E(n) is undecidable. E(9) is undecidable, but the status of E(8) is unknown (as of May 2021) and has been since the early 1980's. 

Here is the quote (from here).

Can we replace 9 by a smaller number? It is believed so. In fact, A. Baker, Matiyasevich and J.Robinson  even conjectured that E(3) is undecidable over N.

Note that Baker and Robinson get their first initial but Matiyasevich does not.

I suspect that they use J. Robinson since there is another mathematician with last name Robinson: Rafael Robinson, who was Julia Robinson's husband (to my younger readers--- there was a time when a woman who got married took her husband's last name). There is at least one other Math-Robinson: Robert W Robinson. I do not think he is closely related. 

Baker: I do not know of another mathematician named Baker. I tried Google, but the Bakers I found were not in the right time frame. I also kept finding hits to an anecdote about Poincare and a man whose profession was a baker (see here, though it's later in that blog post). However, I suspect there was another mathematician named Baker, which is why the author uses the first initial. It's possible the author did not want to confuse Alan Baker with Theodore Baker, one of the authors of Baker-Gill-Solovay, which showed there were oracles that made P = NP and others that made P ≠ NP. But somehow, that just doesn't seem right to me. I suspect there is only one mathematician with last name Matiyasevich. 

Thomas Alva Edison named his son Thomas Alva Edison Jr.  This was a bad idea but not for reasons of authorship, see here.

Thursday, June 03, 2021

What happened to self-driving cars?

In 2014, I wrote a blog post about a fake company Elfdrive.

With a near record-setting investment announced last week, the self-driving car service Elfdrive is the hottest, most valuable technology start-up on the planet. It is also one of the most controversial.

The company, which has been the target of protests across Europe this week, has been accused of a reckless attitude toward safety, of price-gouging its customers, of putting existing cabbies out of work and of evading regulation. And it has been called trivial. In The New Yorker last year, George Packer huffed that Elfdrive typified Silicon Valley’s newfound focus on “solving all the problems of being 20 years old, with cash on hand.”

It is impossible to say whether Elfdrive is worth the $117 billion its investors believe it to be; like any start-up, it could fail. But for all its flaws, Elfdrive is anything but trivial. It could well transform transportation the way Amazon has altered shopping — by using slick, user-friendly software and mountains of data to completely reshape an existing market, ultimately making many modes of urban transportation cheaper, more flexible and more widely accessible to people across the income spectrum.

It was a spoof on Uber but now it looks more like Tesla, except that Tesla's market value is over half a trillion, about six times larger than General Motors.

The post was really about self-driving cars, which I thought at the time would be commonplace by 2020. We are mostly there, but there are issues of silent communication between drivers, or between a driver and a pedestrian, about who goes first that are hard to duplicate for a self-driving car. There is the paradox that if we make a car that will always stop if someone runs in front of it, then some people will run in front of it.

There is also the man-bites-dog problem. Any person killed by a self-driving car will be a major news item while the person killed by a human-driven car while you've been reading this post will never be reported.

We'll get to self-driving cars eventually, it just won't be all at once. We already have basically self-driving cars on highways and on many other roads as well. As the technology improves and people see that it's safe, at some point people will say, "So why do we even need the steering wheel anymore?"

Sunday, May 30, 2021

What is a natural question? Who should decide?

(Thanks to Timothy Chow for inspiring this post.)

My survey on Hilbert's Tenth Problem (see here) is about variants of the problem. One of them is as follows: 

For which degrees d and number-of-vars n, is Hilbert's tenth problem decidable? undecidable? unknown? 

 I wondered why there was not a website with this information. More generally, the problem didn't seem to be getting much attention. (My survey does report on the attention it has gotten.) 

I got several emails telling me it was the wrong question. I didn't quite know what they meant until Timothy Chow emailed me the following eloquent explanation:


One reason there isn't already a website of the type you envision is that from a number-theoretic (or decidability) point of view, parameterization by  degree and number of variables is not as natural as it might seem at first glance. The most fruitful lines of research have been geometric, and so geometric concepts such as smoothness, dimension, and genus are more natural than, say, degree. A nice survey by a number theorist is the book Rational Points on Varieties by Bjorn Poonen. Much of it is highly technical; however, reading the preface is very enlightening. Roughly speaking, the current state of the art is that there is really only one known way to prove that a system of Diophantine equations has no rational solution.



1) ALICE: Why are you looking for your keys under the lamppost instead of where you dropped them?

   BOB: The light is better here.

2) I can imagine the following conversation:

BILL: I want to know about what happens with degree 3, and number of variables 3.

MATHPERSON: That's the wrong question you moron. The real question is what happens for fixed length of cohomology subchains.

BILL: Why is that more natural?

MATHPERSON: Because that is what we can solve. And besides, I've had 10 papers on it.


1) They are working on really hard problems so it is natural to gravitate towards those that can be solved.

2) I suspect that the math that comes out of studying classes of equations based on smoothness, dimension, genus is more interesting than what comes out of degree and number of vars. Or at least it has been so far. 


Who gets to decide what problems are natural?

People outside the field (me in this case) are asking the kind of questions that a layperson would ask and there is some merit to that.

People inside the field KNOW STUFF and hence their opinion of what's interesting to study has some merit. But they can also mistake `I cannot solve X' for `X is not interesting'.

Tuesday, May 25, 2021

Does the university matter?

As we come out of a pandemic with online teaching and research collaborations, how much do we actually need the university?

Theoretical research in computer science, math and elsewhere hardly slowed down with everyone hunkered down at home, and when it did it was more because of supervising kids with their on-line learning than being away from the university. Collaborating with those around the world was basically the same as collaborating with your colleagues on campus.

Many courses, especially the larger ones, worked about as well on-line as they do in person.

The pandemic is accelerating changes already in place. Before, say, 1960, the fastest travel for the masses was on trains and boats, and fast communication was limited to expensive phone calls. You needed strong colleagues, a strong library and a strong support staff to be a successful academic. Traveling to meet colleagues and attend conferences was a luxury that few could do often.

The 60's gave us air travel though it wasn't until the 90's that we could readily send academic papers electronically. It's really only recently that we have the infrastructure to allow high-quality teaching and research collaboration online.

Suppose universities now just disappeared. Professors would be free agents, supported by grants and tuition from students taking their on-line courses and consulting. Students would pick and choose courses from the best instructors, their "transcript" being recorded on a blockchain-like database. PhD students would be more of an apprenticeship, a low salary in exchange for personalized mentoring from a professor. 

For the experimental sciences, there would be a set of national labs that professors could join.

Startups would create apps to enable all these things, professors would just become yet another piece of the gig economy. Superstar academics can pull in large salaries, the rest would struggle to make a decent wage--not that different from actors and musicians. 

Don't worry, universities aren't going anywhere soon. Or are they?

Thursday, May 20, 2021

Emerging from the Pandemic

The City of Chicago yesterday agreed with the latest CDC guidelines that those of us fully vaccinated no longer have to wear masks in most settings. Lollapalooza, the big Chicago music festival, will be held in full capacity this summer. There are still some restrictions but barring any surprise setbacks should be mostly gone by the Fourth of July.

It's not appropriate to call the pandemic over. Too many countries are suffering and lack good access to vaccines or strong health care. But in most of the US vaccinations are readily available and it really feels like we are putting the pandemic in the past.

Variants that beat the vaccines could emerge. Vaccines could become ineffective over time. Too many still need to be vaccinated and too many don't want to do so. Typically problems start small and quickly get big by exponential growth but likely with enough warning if we need boosters or extra precautions. And all those who cut in line to get shots early are now my canaries for the effects wearing off.

What will this post-pandemic world look like? Continued virtual meetings from people who don't want to spend the 10-15 minutes to get across campus or to come to campus at all. A bigger move to casual dress, even in the corporate world. No more snow days. Changes in ways we won't expect. 

We all have our different attitudes but I'm ready to move on. Time to start the "after times".

Sunday, May 16, 2021

Why do countries and companies invest their own money (or is it?) in Quantum Computing (Non-Rhetorical)

 There have been some recent blog posts by Scott (see here) and Lance (see here)  about the hype for SHORT TERM APPLICATIONS of Quantum Computing, which they both object to. 

I have a question that has been touched on but I want to get it more out there.

PREAMBLE TO QUESTION:  The following scenarios, while distasteful, do make sense: 

a) A researcher on their grants exaggerates or even out-right lies about the applications of their work. 

b) A journalist in their articles exaggerates or even out-right lies about the applications of the science they are covering.

c) A company exaggerates or even out-right lies about the applications of their project to a venture capitalist or other kind of investor. 


QUESTION: Why does a company or country invest THEIR OWN MONEY into Quantum Computing, which is unlikely to have a short-term profit or benefit? Presumably they hire honest scientists to tell them the limits of the applications in the short term. 

POSSIBLE ANSWERS:

1) QC might be able to do something cool and profitable, like factoring, or simulating physics quantum experiments or something else, in the short term. Quantum Crypto is already happening, and that's a close cousin of Quantum Computing. 

2) QC might be able to do something cool and profitable (like in (1)) in the long term, and both companies and countries think they will be around for a long time. (For a list of America's 10 oldest companies see here.) 

3) The company or country is in this for the long term, not for a practical project, but because they realize that doing GOOD SCIENCE is of a general benefit (this might make more sense for a country than a company). And funding Quantum Computing is great for science. 

4) Bait and Switch: The company claims they are doing Quantum to attract very smart people to work with them, and then have those smart people do something else.

5) (this is a variant of 1) While Quantum Computing may not have any short term applications, there will be classical applications INSPIRED by it (this has already happened, see here).

6) Some of these companies make money by applying for GRANTS to do QC, so it's NOT their money. In fact, they are using QC to GET money.

7) For a country it's not the leader's money, it's the taxpayers' money--though that still leaves the question of why spend taxpayer money on this and not on something else.

8) For a company it's not their money--it's the Venture Capitalists' and other investors' (though for a big company like Google I would think it IS their money). 

9) The scientists advising the company or country are giving them bad (or at least self-serving) advice so that those scientists can profit- and do good science. So this is a variant of (3) but without the company or country knowing it. 

10) In some countries and companies group-think sets in, so if the leader (who perhaps is not a scientist) thinks intuitively that QC is good, the scientists who work for them, who know better, choose to not speak up, or else they would lose their jobs...or worse. 

11) For countries this could be like going to the moon: Country A wants to beat Country B to the moon for bragging rights. Scientists get to do good research even if they don't care about bragging rights. 

12) (similar to 11 but for a company) If a company does great work on QC then it is good publicity for that company. 

13) Some variant of the greater fool theory. If so, will there be a bubble? A bail-out? 

Thursday, May 13, 2021

Cryptocurrency, Blockchains and NFTs

I first wrote about bitcoin in this blog ten years ago after I gave a lecture in a cryptography class I taught at Northwestern. Two years later I had a follow-up post, noting the price had moved from $3 to $1000 with a market cap of about $11 billion. My brother, who thought they were a scam back then, has since become a cryptocurrency convert. The bitcoin market cap is now over a trillion dollars and other cryptocurrencies are not far behind. No longer can we view cryptocurrencies as simply a neat exercise in applied cryptography now that they have serious value.

The main uses of cryptocurrencies are for speculation or illegal activities, such as drug sales, ransoms, money laundering and tax evasion. Sure you can buy a Tesla with bitcoins but that's more of a gimmick. Cryptocurrency spending is simply too slow, expensive and volatile right now to replace other methods of electronic payment. 

Non-fungible tokens (NFTs) truly puzzle me. They are just a digital certificate of authentication. What could you do with them that you couldn't do with DocuSign? Collectibles of publicly available digital goods are a fad already fading.

I'm not a fan of a fiat currency governed by strict rules not under governmental control. Bad things could happen. However, cryptocurrencies and the blockchain technology that underlies them have brought attention to real needs for our digital world.

  • An easy way to pay online without significant fees, expenses or energy consumption.
  • An easy and cheap way to transfer money between different countries.
  • A distributed database to allow tracking of supply chains, credentials and financial transactions, for example. I see less of a need to make these databases decentralized.
  • A need, for some, to have a digital replacement for the anonymity of cash.
  • People need something to believe in once they have given up believing in religion and a functioning democracy. 
Don't change your investing habits based on anything I write in this post. Speculation and illegal activities are powerful forces. Or it could all collapse. Make your bets, or don't.

Note: Since I wrote this post yesterday, Elon Musk tweeted that Tesla will no longer accept bitcoins, and the bitcoin market cap has dropped below a trillion.

Sunday, May 09, 2021

Trump, Facebook, and ComplexityBlog

 I care about the Facebook decision to ban Trump, but I do not have a strong opinion about it. I have heard arguments on both sides now, from up and down, and still somehow... I don't know how I feel. So instead of posting my opinion I post other opinions and my opinion of them.

1) Facebook is a private company. If they want to have liberal bias or a free-for-all or whatever then it is not the government's place to interfere. If enough people don't like what they see then they will lose customers. The invisible hand of the market will regulate it enough. Libertarians and honest small-gov Republicans might believe this. On a personal level, I don't want someone else telling Lance and me that we can't block some comment; however, for now, more people use Facebook than read Complexity Blog. 

2) Facebook is a private company but they need to follow standard business practices of having their users sign an agreement and stick to it. Since the user signed the agreement, Facebook need only stick to that agreement. This is problematic in that (1) if the agreement is not that rigorous then Facebook can be arbitrary and capricious, but (2) if the agreement is too rigorous then people can game the system. Imagine if Lance and I had a rule that you could not use profanity in the comments. Then someone could comment 

People who think P vs NP is ind of ZFC can go Fortnow themselves. They are so full of Gasarch.

 (Something like this was the subplot of an episode of The Good Fight)

3) Facebook is so big that it has an obligation to let many voices be heard, within reason. This could lead to contradictions and confusions:

a) Facebook cannot ban political actors. What is a political actor? (Jon Voight is pro-Trump and Dwayne ``The Rock'' Johnson is anti-Trump, but that's not what I mean.) High-level people in the two main parties qualify (how high-level?). IMHO third parties (Libertarian and Green come to mind) need the most protection since they don't have as many other ways to get out their message and they are serious. (I wonder if Libertarians would object to the Government forcing Facebook to not ban them.) What about the Surprise Party or the Birthday Party (which did have a platform, see here)? And what about people running for Mayor of a small town (much easier to do now with Facebook)? Should just running be enough to ban banning? 

b) Facebook can ban posts that are a threat to public health and safety. I am thinking of anti-vaxers and insurrectionists, though I am always wary of making them free speech martyrs. 

c) Fortunately a and b above have never conflicted. But they could. I can imagine a president who has lost an election urging his followers to storm the capitol. Then what should Facebook do?  (ADDED LATER- A commenter points to a case where a and b conflicted that is not the obvious case.) 

4) Facebook is so big that it has an obligation to block posts that put people in danger. This may have some of the same problems as point 3---who decides? 

5)  Facebook is so big and controls so much of the discourse that it should be heavily regulated (perhaps like a utility).  This has some of the same problems as above- who decides how to regulate it and how?

6) As a country we want to encourage free speech and a diversity of viewpoints. There are times when blocking someone from posting may be better for free speech than letting them talk. When? When that person is advocating nonsensical views that stifle the public discussion. But I am talking about what the country should want. What do they want? What does Facebook want? Does either entity even know what they want? These are all ill-defined questions. 

7) Facebook is a monopoly, so use Anti-Trust laws on it. Anti-Trust was originally intended to protect the consumer from price-gouging. Since Facebook is free, this would require a new interpretation of antitrust. Judicial activism? The Justices solving a problem that the elected branches of government are currently unable to solve? Is that a bad precedent? And what does it mean to break up Facebook anyway--it's a network, and hence breaking it up probably doesn't make sense (maybe use MaxCut). 

(ADDED LATER- a commenter pointed out that anti-trust is NOT just for consumer protection, but also about market manipulation to kill small innovators.) 

8) Let's say that Facebook and Society and the Government and... whoever, finally agree on some sort of standards. Then we're done! Not so fast. Facebook is so vast that it would be hard to monitor everything. 

9) As a side note: because Facebook and Twitter have banned or tagged some kinds of speech or even some people, there have been some alternative platforms set up. They always claim that they are PRO FREE SPEECH. Do liberals post on those sites? Do those sites ban anyone? Do they have SOME rules of discourse? I ask non-rhetorically. 

Thursday, May 06, 2021

Negotiating Your Faculty Offer

So you got an offer to be an assistant professor in the computer science department at Prestigious U. Congratulations! 

Time to negotiate your offer with the chair. Don't be nervous. This shouldn't be adversarial. Both of you have the same goal in mind--for you to come to Prestigious and be successful. 

Let's discuss the different aspects of each package.

Startup package

Funds for supporting your research such as equipment, graduate student support, travel and postdocs. Here you should explain what you need to be successful. This will vary by subdiscipline; a systems researcher will need more equipment and students than a theorist. Keep in mind the university is giving you funds for 2-4 years to start your research, after which you are expected to fund your own research via grants.

I don't recommend taking on a postdoc right at the start of your first academic appointment. Postdocs require good mentoring while you need to spend the first year getting your research up and running. If you do ask for postdoc money, ask to have a flexible start time.

Many departments give course reductions to get your research going. I'd suggest asking to spend your first semester teaching a graduate topics course based on your thesis research to pick up some PhD students, followed by a semester with no classes to get your research program going.

Salary

This includes actual salary, which is also the base for future raises, and summer salary in the first couple of years. Feel free to ask for more salary, but often these numbers are fixed for new assistant professors. There is more give if you take an academic job later in your career. You could also say something like, "Well if you can't give me more salary maybe you could give me another semester of grad student support?"

Partner

It seems 80% of the time, a job candidate has a partner that needs accommodating. Don't wait until the end of negotiations, bring it up early. The more time we have, the better we can help. Doesn't matter what job they want--we know people and we know people who know people.

Finishing your thesis

Many schools won't hire you as an assistant professor if you haven't finished your thesis. This has to do with how college rankings work. Don't worry--they will generally give you some other role with the same package until you finish. This might delay your tenure clock though.

Delayed start time

A January start is usually fine with good reason, but if you weren't planning to start until the fall of 2022, why are you on the market this year? If you do get the department to hold a position for you, remember you are also making a commitment--this is not an opportunity to try again for something better.

Final thoughts

You may not get all that you want after a negotiation--don't take it personally. You shouldn't necessarily choose the place that gives you the biggest package. It's far more important in the long run that you pick a place where you can best succeed both professionally and personally, and the package is just a small piece of that puzzle.

Sunday, May 02, 2021

The Mythical Man-Month, Hen-Day, and Cat-Minute (Fred Brooks Turned 90)

The Mythical Man-Month is a great book which makes the (obvious in retrospect) point that putting more people on a project may slow it down. It was written by Fred Brooks, who turned 90 in April (he is still alive). It's a good read. I actually read it many years ago when I exchanged books with a software engineer I was dating: she lent me The Mythical Man-Month, which I found interesting, and I lent her What is the Name of This Book? by Smullyan, which she found amusing. Did this exchange of books help our relationship? We have now been married for many years, though it's not clear if we can trace this to the exchange of books OR to the fact that she had KNUTH Volumes 1 and 3, and I had KNUTH Volume 2. 

 Fred Brooks: You have my thanks and of course Happy Birthday!

When I read The Mythical Man-Month  I was reminded of a math problem I heard as a kid: 

If a hen-and-a-half lays an egg-and-a-half in a day-and-a-half then how many eggs can seven hens lay in seven days? 

My answer: if (3/2) hens lay (3/2) eggs in (3/2) days then that's 2/3 of an egg per hen-day, so the answer is 

49 × 2/3 = 32 and 2/3 eggs.

It did not bother me one whit that (1) you can't have 2/3 of an egg, and (2) Just like adding more people might slow down a project, adding more hens might end up being a bad idea-- especially if they are all crowded into the same chicken-coop and hence don't feel much like laying eggs.
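The hen-day arithmetic above can be double-checked with exact fractions; here is a throwaway sketch (mine, not from the original post):

```python
# Sanity-check the hen-day arithmetic with exact fractions.
from fractions import Fraction

# (3/2) eggs per ((3/2) hens * (3/2) days) = 2/3 of an egg per hen-day.
eggs_per_hen_day = Fraction(3, 2) / (Fraction(3, 2) * Fraction(3, 2))

# 7 hens * 7 days = 49 hen-days.
answer = 7 * 7 * eggs_per_hen_day
print(eggs_per_hen_day, answer)  # 2/3 98/3  (i.e., 32 and 2/3 eggs)
```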

Who was the first person to note that adding more people or hens might be a bad idea? I do not know, but here is an amusing, yet realistic, article by Lewis Carroll on what I would call The Mythical Cat-Minute. My advisor Harry Lewis sent it to me in the midst of an email exchange about The Mythical Man-Month. He got it from a student of his, Larry Denenberg. Here it is: 


The following piece first appeared in ``The Monthly Packet'' of February
1880 and is reprinted in _The_Magic_of_Lewis_Carroll_, edited by John
Fisher, Bramhall House, 1973.

   If 6 cats kill 6 rats in 6 minutes, how many will be needed to kill
   100 rats in 50 minutes?

   This is a good example of a phenomenon that often occurs in working
   problems in double proportion; the answer looks all right at first, but,
   when we come to test it, we find that, owing to peculiar circumstances in
   the case, the solution is either impossible or else indefinite, and needing
   further data.  The 'peculiar circumstance' here is that fractional cats or
   rats are excluded from consideration, and in consequence of this the
   solution is, as we shall see, indefinite.

   The solution, by the ordinary rules of Double Proportion, is 12 cats.
   [Steps of Carroll's solution, in the notation of his time, omitted.]

   But when we come to trace the history of this sanguinary scene through all
   its horrid details, we find that at the end of 48 minutes 96 rats are dead,
   and that there remain 4 live rats and 2 minutes to kill them in: the
   question is, can this be done?

   Now there are at least *four* different ways in which the original feat,
   of 6 cats killing 6 rats in 6 minutes, may be achieved.  For the sake of
   clearness let us tabulate them:
      A.  All 6 cats are needed to kill a rat; and this they do in one minute,
          the other rats standing meekly by, waiting for their turn.
      B.  3 cats are needed to kill a rat, and they do it in 2 minutes.
      C.  2 cats are needed, and do it in 3 minutes.
      D.  Each cat kills a rat all by itself, and takes 6 minutes to do it.

   In cases A and B it is clear that the 12 cats (who are assumed to come
   quite fresh from their 48 minutes of slaughter) can finish the affair in
   the required time; but, in case C, it can only be done by supposing that 2
   cats could kill two-thirds of a rat in 2 minutes; and in case D, by
   supposing that a cat could kill one-third of a rat in two minutes.  Neither
   supposition is warranted by the data; nor could the fractional rats (even
   if endowed with equal vitality) be fairly assigned to the different cats.
   For my part, if I were a cat in case D, and did not find my claws in good
   working order, I should certainly prefer to have my one-third-rat cut off
   from the tail end.

   In cases C and D, then, it is clear that we must provide extra cat-power.
   In case C *less* than 2 extra cats would be of no use.  If 2 were supplied,
   and if they began killing their 4 rats at the beginning of the time, they
   would finish them in 12 minutes, and have 36 minutes to spare, during which
   they might weep, like Alexander, because there were not 12 more rats to
   kill.  In case D, one extra cat would suffice; it would kill its 4 rats in
   24 minutes, and have 26 minutes to spare, during which it could have killed
   another 4.  But in neither case could any use be made of the last 2
   minutes, except to half-kill rats---a barbarity we need not take into
   consideration.

   To sum up our results.  If the 6 cats kill the 6 rats by method A or B,
   the answer is 12; if by method C, 14; if by method D, 13.

   This, then, is an instance of a solution made `indefinite' by the
   circumstances of the case.  If an instance of the `impossible' be desired,
   take the following: `If a cat can kill a rat in a minute, how many would be
   needed to kill it in the thousandth part of a second?'  The *mathematical*
   answer, of course, is `60,000,' and no doubt less than this would *not*
   suffice; but would 60,000 suffice?  I doubt it very much.  I fancy that at
   least 50,000 of the cats would never even see the rat, or have any idea of
   what was going on.

   Or take this: `If a cat can kill a rat in a minute, how long would it be
   killing 60,000 rats?'  Ah, how long, indeed!  My private opinion is that
   the rats would kill the cat.

Sunday, April 25, 2021

Ferrers Diagrams can be used to prove X theorems about partitions. What is X?

1978: I took an excellent ugrad course in combinatorics from James C Frauenthal (he sometimes wrote his name as the binomial coefficient (J choose F)) and he covered Ferrers diagrams. They are a nice way to prove equalities about types of partitions. See here for a definition and a few examples. I have this (possibly false) memory that there were LOTS of partition theorems proven nicely with Ferrers diagrams.

Fast forward to 2021:

2021: My TA Emily needs a topic to cover in Honors Discrete Math. I have this memory that there were LOTS of theorems about partitions proven with Ferrers diagrams. We look at many websites on Ferrers diagrams and find only TWO examples:

The number of partitions of n into k parts is the number of partitions of n into parts the largest of which is k.

The number of partitions of n into \le k parts is the number of partitions of n into parts the largest of which is \le k.
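For small n both identities are easy to verify by brute force. Here is a quick sketch (my own code, with a homemade `partitions` generator, not from any library):

```python
def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples with parts <= max_part."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

for n in range(1, 13):
    for k in range(1, n + 1):
        # Identity 1: partitions into exactly k parts <-> largest part exactly k.
        assert (sum(1 for p in partitions(n) if len(p) == k)
                == sum(1 for p in partitions(n) if p[0] == k))
        # Identity 2: partitions into <= k parts <-> largest part <= k.
        assert (sum(1 for p in partitions(n) if len(p) <= k)
                == sum(1 for p in partitions(n) if p[0] <= k))
print("both identities hold for all n up to 12")
```

Both identities are proved by conjugation: flipping a Ferrers diagram across its diagonal swaps rows and columns, exchanging "number of parts" with "largest part."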

We DO find many theorems about partitions such as this corollary to the Rogers-Ramanujan theorem:

The number of partitions of n such that adjacent parts differ by at least 2 is the number of partitions of n such that each part is \equiv 1 mod 5 or \equiv 4 mod 5.

This is a HARD theorem and there is no Ferrers-diagram or other elementary proof. 

SO, I have one MEMORY but the reality seems different. Possibilities:

1) My memory is wrong. There really are only 2 examples (or some very small number).

2) There are other examples but I can't find them on the web. I HOPE this is true--if someone knows of other ways to use Ferrers diagrams to get partition results, please comment. 

Thursday, April 22, 2021

The Million Dollar Sermon

Illinois Tech has one of the greatest origin stories for a university. In 1890 Frank Gunsaulus, a pastor on the south side of Chicago, gave a sermon where he said "If I had a million dollars I would build a school to provide students from all backgrounds meaningful roles in a changing industrial society". Philip Armour, a wealthy industrialist in the congregation, went up to Gunsaulus after the sermon and said, "if you give me five years for this school, I'll give you the million dollars". Thus started the Armour Institute of Technology which after some mergers became the Illinois Institute of Technology.

The "million dollar sermon" really happened, though the exact wording and even the exact year are lost to posterity or, as speculated, hidden in a cornerstone of one of the original campus buildings. 

In 1890 we were in the beginnings of the second industrial revolution, a revolution of communication, transportation, electrification and soon the assembly line. The first revolution happened a century earlier with mechanization and the steam engine. The third revolution was computers and automation, and we are now in the early parts of the fourth industrial revolution, one based on data and information. 

There are many parallels between 1890 and today. Like 1890, the private economy is dominated by a small number of large companies that have an outsized influence in our society. Like 1890, technology is changing faster than we can manage it. Like 1890, many workers are finding their skills quickly outdated.

Today Gunsaulus's words ring truer than ever. We more than ever need to provide students of all backgrounds meaningful roles in a changing technological society. 

Sunday, April 18, 2021

An investment puzzle and speculation as to why some think it's hard

 This is a Guest Post by David Marcus. He gives a puzzle and its solution, which is interesting, and then speculates as to why some people get it wrong. 



Investing Puzzle or Arithmetic Can Be Useful

The following is an example of investment results that I saw in an investment newsletter. There are two portfolios that use different strategies. Both portfolios start with $1 million twenty years ago and withdraw 5% each year. The idea is that you are retired and withdrawing money to spend. Not all years are shown in the tables.

Portfolio A

Year   Return  Withdrawal    Balance
2000   15.31%      57,655  1,095,445
2005    1.81%      59,962  1,139,273
2008  -12.65%      51,000    969,004
2009   34.26%      65,049  1,235,936
2010   11.94%      69,175  1,314,331
2015   -2.48%      64,935  1,233,764
2020   10.27%      66,935  1,271,765

Total Withdrawal: 1,685,190
Change in Balance: 27.18%

Portfolio B

Year   Return  Withdrawal    Balance
2000   -0.95%      49,524    940,956
2005    3.80%      44,534    846,154
2008  -20.11%      35,922    682,523
2009   18.27%      40,360    766,833
2010   11.57%      42,777    812,764
2015    0.99%      50,767    964,567
2020   13.35%      65,602  1,246,433

Total Withdrawal: 1,425,573
Change in Balance: 24.64%

Portfolio A has a final balance that is 25,000 more than Portfolio B's and had about 260,000 more in withdrawals. Does the example lend credence to the Portfolio A strategy being better than the Portfolio B strategy?



Investing Puzzle or Arithmetic Can Be Useful: Analysis

Summary: The two portfolios have about the same performance over the 20 years. The difference is mainly due to Portfolio A having a good year or years near the beginning before much money was withdrawn. The example merely shows that it is better to withdraw money after a gain rather than before.

Detailed Analysis:

The scenario is: Start with X = $1 million. Withdraw 5% a year.

Define "gain factor" to be 1 plus the percentage return. For example, if a portfolio returns 5%, then the gain factor is 1.05.

Let A_j, j = 1, ..., 20, be the gain factors each year for Portfolio A. Let B_j, j = 1, ..., 20, be the gain factors each year for Portfolio B.

The final amount in Portfolio A is

   F = X * A_1 * 0.95 * A_2 * 0.95 * ... * A_20 * 0.95 .

The final amount in Portfolio B is

   G = X * B_1 * 0.95 * B_2 * 0.95 * ... * B_20 * 0.95 .

From the "Change in Balance" values or the balances for year 2020, we see that F and G are almost the same:

   F = 1.271865 * X,
   G = 1.246433 * X.

But, as we learned in elementary school, multiplication is commutative, so

   F = X * 0.95^20 * \prod_{j=1}^20 A_j,
   G = X * 0.95^20 * \prod_{j=1}^20 B_j.

Since F and G are almost the same, the total gains (product of the gain factors) for the two portfolios are almost the same, i.e.,

   \prod_{j=1}^20 A_j \approx \prod_{j=1}^20 B_j.

Then what accounts for the big difference in the amounts withdrawn? Portfolio A must have had some good years near the beginning. (We see in the tables that Portfolio A did better in 2000 than Portfolio B.) So, all the example shows is that it is better to withdraw your money after your gains rather than before.

To take an extreme example, suppose an investment is going to go up 100% this year. It is better to take your money out at the end of the year (after the gain) than at the beginning of the year (before the gain). This is a triviality.

The example tells us nothing useful about the two strategies.

Note: The total gains aren't exactly the same, but the timing of the yearly gains is what is driving the results. We have (rounding off)

   ( F - G ) / 0.95^20 = 70942.81 .

So, if there had been no withdrawals, the difference in the portfolio balances would have been about $71,000, much less than the $260,000 + $25,000 difference with the withdrawals.
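The timing effect is easy to see in a tiny simulation (my own sketch with made-up returns, not the newsletter's data): two return sequences that are permutations of each other give identical final balances, because the gain factors commute, but different total withdrawals.

```python
def run(returns, start=1_000_000, rate=0.05):
    """Each year: apply the gain factor, then withdraw `rate` of the balance."""
    balance, withdrawn = float(start), 0.0
    for r in returns:
        balance *= 1 + r           # the year's gain factor
        w = rate * balance         # withdraw 5% of the post-gain balance
        balance -= w
        withdrawn += w
    return balance, withdrawn

good_year_early = [0.30, 0.02, 0.02, 0.02, 0.02]
good_year_late  = [0.02, 0.02, 0.02, 0.02, 0.30]  # same factors, reordered

final_a, withdrawn_a = run(good_year_early)
final_b, withdrawn_b = run(good_year_late)
print(round(final_a) == round(final_b))  # True: final balances agree
print(withdrawn_a > withdrawn_b)         # True: early gains mean bigger withdrawals
```

This is just the commutativity argument in code: the final balance depends only on the multiset of gain factors, while the withdrawals depend on their order.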



Many people have trouble with this puzzle. The difficulty may be that such people don't make a mental model (or a written model) of the process that is producing the balance. If you write down (or have in your head) a formula for the balance, then you see that the gain factors are independent of the withdrawal factors. That is, we could withdraw more or less money, or even deposit money, without affecting the gain factors we would use in the model. This then leads us to consider the gain factors on their own, and to recognize that the gain factors are the true measures of how the strategies perform.

Thursday, April 15, 2021

Ordering Beauty

First, congratulations to fellow complexity theorist and blogger Scott Aaronson for receiving the 2020 ACM Prize in Computing for "groundbreaking contributions to quantum computing". The prize is ACM's highest honor for mid-career researchers. Well deserved! 

Now back to our regularly scheduled post...

Every freshman at Cornell back in 1981 had to take two seminar courses, basically one-shot courses, usually in a humanities area, which required no prerequisites but lots of writing. I took my first course in philosophy. The instructor, a PhD student, at one point described his research, a philosophical argument that there is an intrinsic total ordering of beauty, say that Beethoven would always sit above the Beatles, no matter the beholder. I didn't believe him then and still don't today. A few months ago the Washington Post ran a story with the same theme entitled Maradona was great, and maybe the greatest. Can we make similar claims about artists?

Somehow we have this belief when it comes to conference submissions, that there is some perfect ordering of the submissions and a good PC can suss it out. That's not really how it works. Let's say a conference has an accept rate of 30%. Typically 10% of the submissions are strong and will be accepted by any committee. About half the submissions are just okay or worse and would be rejected. The other 40% of the submissions will be chosen seemingly randomly based on the tastes of the specific members of the program committee. Experiments in the NeurIPS and ESA conferences have borne this out. 

Why not make the randomness explicit instead of implicit? Have the PC divide the papers into three piles: definite accepts, definite rejects and the middle. Take the third group and randomly choose which ones to accept. It will create a more interesting program. Also, randomness removes biases: randomness doesn't care about gender, race and nationality, or whether the authors are senior professors at MIT or first-year grad students at Southern North Dakota. 
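The three-pile procedure could be sketched as follows (my own illustration; the pile sizes mirror the 10%/40%/50% split above, and none of this is any conference's actual tooling):

```python
import random

def select(strong, middle, weak, accept_rate=0.30):
    """Accept all strong papers, reject the weak pile, fill remaining slots at random."""
    total = len(strong) + len(middle) + len(weak)
    slots = round(accept_rate * total) - len(strong)
    return strong + random.sample(middle, max(0, min(slots, len(middle))))

submissions = [f"paper{i}" for i in range(100)]
strong, middle, weak = submissions[:10], submissions[10:50], submissions[50:]

accepted = select(strong, middle, weak)
print(len(accepted))  # 30: a 30% accept rate, 20 of them drawn at random from the middle
```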

We put far too much weight on getting accepted into a conference given the implicit randomness of a PC. If we make the randomness explicit that would devalue that weight. We would have to judge researchers on the quality of their research instead of their luck in conferences.

Given that conferences, especially the virtual ones, have no real limits on the number of papers and talks, you might say why not just accept all the papers in the middle. Works for me.

Sunday, April 11, 2021

Is the following reaction to getting the first COVID shot logical?

 Alice works at a charity that puts together bag and box lunches for children.

They all wear masks and they are 12 feet apart and very careful, and nobody there has gotten COVID.

Then Alice gets her first COVID shot and says:

I am not going to work for that charity until I have had my second shot and waited 4 weeks so I am immune. 

She is really scared of getting COVID NOW that  she is on the verge of being immune. 

Is that logical? She was not scared before. So does it make sense to be scared now? I see where she is coming from emotionally, but is there a logical argument for her viewpoint? I ask nonrhetorically.

bill g. 

Thursday, April 08, 2021

Quantum Stories

Scott Aaronson wrote last month about the hype over quantum computing. I'd thought I'd drop a few stories.

I was once asked to review a grant proposal (outside the US) that claimed it would find a quantum algorithm for NP-hard problems. I wrote a scathing review but the grant was funded because I failed to prove that it was impossible. I replied that they should fund my research to teleport people from Chicago to Paris because they couldn't prove I couldn't do it. I never got a response.

I was at an NSF sponsored meeting on quantum computing. I suggested, as a complexity theorist, that we need to explore the limits of quantum computing. A senior researcher said we shouldn't mention that in the report or it might hurt our chances of funding the field if they think quantum computing might not be a complete success.

I went to a Microsoft Faculty Research Summit which had a big focus on quantum computing. I complained of the quantum computing hype. My friends in the field denied the hype. Later at the summit a research head said that Microsoft will solve world hunger with quantum computing.

I was meeting with a congressional staffer who had worked on the National Quantum Initiative which coincidentally was being announced that day. I said something about high risk, high reward. He looked shocked--nobody had told him before that quantum computing is a speculative technology.

Quantum computing has generated a large number of beautiful and challenging scientific questions. Thinking about quantum has helped generate classical complexity and algorithmic results. But quantum computing having a real-world impact in the near or mid-term is unlikely. Most scientists I know working directly in quantum research are honest about the limitations and challenges in quantum computing. But somehow that message is not often getting to the next layers up, the policy makers, the research managers, the university administrators, the media and the venture capitalists. 

But who knows, maybe some quantum heuristic that doesn't need much entanglement will change the world tomorrow. I can't prove it's impossible.

Sunday, April 04, 2021

Do any NP-reductions use deep mathematics? Non-rhetorically

BILL: Let's say we didn't know that FACTORING NPC --> NP=coNP.
Then which direction would we guess: FACTORING in P, or NPC?

STUDENT: In P. After all, Number Theory is a deep subject and I can imagine some deep Theorem in it leading to FACTORING in P. 

BILL: That cuts both ways. I can imagine some deep theorem in NT being the key to showing 

FACTORING poly-reduces to  SAT

STUDENT: Deep mathematics HAS been used for algorithms. Factoring algorithms are one example; the graph minor theorem yielding MANY problems in P is another. Can you give me an example where deep mathematics has been used for an NPC reduction?

BILL: Oh. Gee. I do not know of any. 

STUDENT: If only you had some mechanism to ask the theory community. Maybe you could call it a web log, or weblog.

BILL: If only...

1) Are there any NPC reductions that use deep math? (I realize that the phrase `deep math' is not well defined.)

2) Are there other reductions that use deep math?

3) If P NE NP then:

For all epsilon > 0 there is no approx-alg for MAX3SAT which yields \ge (7/8 + epsilon)OPT.

For all delta < 1 there is no approx-alg for CLIQUE which yields \ge n^{-delta}OPT.

There is no approx-alg for SET COVER which yields a cover of size \le (1-o(1))(ln n)OPT.

All of these proofs use  the PCP machinery or something similar. My impression is that the original PCP theorem, while long, hard, and impressive, did not use deep math. I have a vague memory of some paper or PhD thesis stating that the ONLY theorem needed was that a poly of degree d over a finite field has \le d roots. 

However, getting the optimal lower bounds seems to require some deep math. But I am more ASKING than telling.

Thursday, April 01, 2021

Want to Buy a Theorem?

This is embarrassing to admit but after a few badly timed trades on GameStop options I find myself a bit tight on money. To raise some cash, I reluctantly decided to sell one of my prized possessions, one of my theorems.

For Sale: Boolean formula satisfiability cannot be solved in both logarithmic space and quasilinear time. For a more formal and slightly more general statement and a proof, see this paper.

Bidding starts at 12 BTC (about $705,000). 

The winning bid, upon verified payment, will receive:

  • The ability to give the theorem the name of your choice such as your own name, your best friend's mother's name or "Teddy McTheoremface".
  • A non-fungible token (NFT) attesting ownership of the theorem and the name you have chosen for it.
  • Anyone citing this result will be required to note that you own it and use the name you chose above. You cannot, however, limit the use of the theorem or receive compensation for its use. 
  • By virtue of owning this theorem you will have a Fortnow number of zero. This immediately gives you an Erdős number of 2. If you have previously written a paper with Paul Erdős then both of us will now have an Erdős number of 1.
Frequently Asked Questions

Q: Why this theorem?

A: The theorem is in one of my few solely authored papers and I can't afford to share the proceeds of the sale. 

Q: Don't Ryan Williams and others have stronger theorems?

A: The results are incomparable. Ryan gives bounds on a single algorithm with low time and space. My theorem allows different machines for the time and space bounds.

Also, to the best of my knowledge, Ryan's theorem is not for sale.

Q: Doesn't the proof of the theorem rely on other people's theorems, such as Nepomnjascii's? Shouldn't he get some of the value from this sale?

A: I'm not selling the proof of the theorem, just the theorem itself.

Q: If I purchase this theorem will I get to write next year's April Fools post?

A: No.

Monday, March 29, 2021

Slicing the Hypercube

Here's a neat result I heard about at virtual Dagstuhl last week, a new lower bound on the number of hyperplanes needed to cut all the edges of a hypercube.

An n-dimensional hypercube has 2^n vertices corresponding to the binary strings of length n. Edges are between two vertices that differ in exactly one bit, for a total of (n/2)2^n edges. Hypercubes played a large role in Hao Huang's recent proof of the sensitivity conjecture.

A hyperplane is just a linear inequality in x_1,…,x_n, the bits of the string corresponding to a vertex. An edge is cut if the inequality is true for one vertex and false for the other.

With n hyperplanes you can cut all the edges in two very different ways. 

  • The hyperplanes x_i > 0 for each i. These are n orthogonal hyperplanes.
  • The hyperplanes x_1+…+x_n > i for each i from 0 to n-1. These are n parallel hyperplanes.
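Both constructions are easy to verify by brute force for small n. Here is a quick sanity check in Python (my own sketch, not anything from the paper):

```python
from itertools import product

def cuts_all_edges(hyperplanes, n):
    """Check that every edge of the n-dimensional hypercube is cut by at
    least one hyperplane. A hyperplane is a pair (weights, threshold); it
    cuts an edge if sum(w*x) > threshold holds for exactly one endpoint."""
    def side(ws, t, v):
        return sum(w * a for w, a in zip(ws, v)) > t
    for v in product([0, 1], repeat=n):
        for i in range(n):
            if v[i] == 1:
                continue
            u = v[:i] + (1,) + v[i + 1:]  # neighbor of v differing in bit i
            if not any(side(ws, t, v) != side(ws, t, u)
                       for ws, t in hyperplanes):
                return False
    return True

n = 6
# n orthogonal hyperplanes: x_i > 0 for each i
orthogonal = [(tuple(int(j == i) for j in range(n)), 0) for i in range(n)]
# n parallel hyperplanes: x_1 + ... + x_n > i for i = 0, ..., n-1
parallel = [((1,) * n, i) for i in range(n)]
print(cuts_all_edges(orthogonal, n), cuts_all_edges(parallel, n))  # True True
```

Dropping any one of the parallel hyperplanes leaves some edge uncut, which is what makes the "fewer than n" question interesting.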
Do you need n hyperplanes to cut all the edges? Mike Paterson found 5 explicit hyperplanes that cut all the edges of a 6-dimensional hypercube (see this survey by Mike Saks). Scaling that up shows 5n/6 hyperplanes suffice to cut an n-dimensional hypercube. That remains the best known upper bound.

For the lower bound, in 1971 Patrick O'Neil showed that any hyperplane can cut at most an O(n^{-0.5}) fraction of the edges (sharp for the hyperplane x_1+…+x_n > n/2). Thus you need at least Ω(n^{0.5}) hyperplanes, which for 50 years was the best known bound.

Gil Yehuda and Amir Yehudayoff have given a new lower bound of Ω(n^{0.57}). The paper gives an Ω(n^{0.51}) bound, but Yehudayoff said in a talk last week that the bound has been improved.

Yehudayoff didn't go into much detail in his 20-minute talk, but the proof uses geometry, probability and antichains.

The result has some applications to complexity, namely that you need at least n^{1.57} wires in a depth-two threshold circuit for parity. But the main fun is the question itself: we finally made progress and there is still a long way to go.

Thursday, March 25, 2021

The key to my Taylor series problem: Buddy can you spare a penny, nickel, dime, or quarter

In my last blog post I posed a question about finding the coefficient of x^100 in a particular Taylor series. The question and answer are given here:

The key to the problem was to recognize that it was asking how many ways you can make change for a dollar using pennies, nickels, dimes, and quarters. This can be done by hand (it's 242).
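If you want to check the 242 without doing the casework by hand, the standard coin-change dynamic program does it in a few lines. This is a Python sketch of the usual textbook method, not the software my correspondent used:

```python
def ways_to_change(amount, coins=(1, 5, 10, 25)):
    """Count the ways to make 'amount' cents from the given coins.
    ways[a] = number of multisets of coins summing to a; processing the
    coins one at a time means order of coins does not matter."""
    ways = [1] + [0] * amount
    for c in coins:
        for a in range(c, amount + 1):
            ways[a] += ways[a - c]
    return ways[amount]

print(ways_to_change(100))  # 242
```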

1) Someone I gave the problem to solved it using available software, but when he saw the answer was 242 he realized how to do it via making change.

2) How hard would this problem be to do completely by hand, say as a Putnam problem? It's hard for me to say, since I started with the trick and then found the problem.

3) Is this a well known trick? 

Monday, March 22, 2021

A Taylor Series Problem

 I post a problem today and its solution on Thursday.

Comments are fine, though if you don't want to get a hint, don't read them. 

Find the coefficient of x^100 in the Taylor series for the rational function which has

numerator 1 

and denominator

x^41 - x^40 - x^36 + x^35 - x^31 + x^30 + x^26 - x^25 - x^16 + x^15 + x^11 - x^10 + x^6 - x^5 - x + 1

For better readability see my pdf file with the problem in it here

Is there a clever way to do the problem? If the way to do it were to actually grind out the Taylor series, then

1) I wouldn't post it

2) I probably could not do it (or it would take too long to bother)  though maybe there are freely available programs that could. 
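In fact you don't even need a computer algebra system: the series of 1/p(x) satisfies a simple linear recurrence driven by the coefficients of p. Here is a Python sketch that grinds out the coefficient mechanically from the denominator above (running it reveals the answer, so don't run it if you want to solve the puzzle yourself):

```python
# Coefficients of the denominator polynomial, degree -> coefficient,
# taken from the problem statement.
den = {41: 1, 40: -1, 36: -1, 35: 1, 31: -1, 30: 1, 26: 1, 25: -1,
       16: -1, 15: 1, 11: 1, 10: -1, 6: 1, 5: -1, 1: -1, 0: 1}

def series_coeff(m):
    """Coefficient of x^m in the Taylor series of 1/den(x). Since
    den(x) * series(x) = 1 and den has constant term 1, the series
    coefficients satisfy c[i] = -sum_{k>=1} den[k] * c[i-k]."""
    c = [0] * (m + 1)
    c[0] = 1
    for i in range(1, m + 1):
        c[i] = -sum(coef * c[i - k] for k, coef in den.items()
                    if 1 <= k <= i)
    return c[m]

print(series_coeff(100))
```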

So yes, there is a clever solution. At least I think it's clever. 

Wednesday, March 17, 2021

I read the news today oh boy

I'm absolutely elated that László Lovász and Avi Wigderson won the Abel Prize. More from the New York Times, Quanta Magazine and Gil Kalai. Another example of how complexity theory and theoretical computer science have reached the upper echelons of mathematics.

I'm horrified by the spa shootings in Atlanta. I've driven by those spas many times when I lived there.

I'm saddened by the closing of Mills College in Oakland, California. I lived in a dorm Berkeley rented at Mills College during my crazy year there.

I've got mixed emotions about the death of James Levine. What he did musically with the Metropolitan opera and orchestra was nothing short of incredible. What he did to the young people he abused is inexcusable. I remember watching a Met taped videocast of Die Walküre with my daughter, enjoying the production but telling her "Do you realize we are watching an opera composed by an anti-Semite and conducted by a child molester?"  Can you separate art from artist?

Monday, March 15, 2021

I didn't understand The Art Market before the NFT sale, and I understand it less now

(This post is a sequel to my post from Feb 13, 2007, which you can find here. While the gap from 2007 until 2021 seems large, it's not as long as the gap between Knuth Vol 3 and Knuth Vol 4, nor as long as the gap between Stan Freberg's vinyl record History of America Part I and his CD History of America Part 2, both novelty records, and quite good.)

My 2007 post was about people posting a clip on YouTube and calling it `rare footage of...'

My point was: how can something be rare if it's on YouTube?

I also pondered: if someone can make a REALLY REALLY GOOD copy of the Mona Lisa, that is a talent that should be respected, and such a person should be able to sell the copy for about the same price as the original (I am not sure if I want the copy to be worth A LOT or the original to be worth LESS than it is).

IF Art is to-look-at-cause-its-pretty then it should not matter who the artist is. 

IF Art is an investment then that could be risky since it does not have intrinsic value. 

IF Art is neither to-look-at nor an investment then... what is it? We'll see below that one answer might be Bragging Rights.

This is NOT a RANT, this is an admitance of my lack of understanding. (Spell check thinks admitance is not a word. Maybe it's admitence. No. Hmmm.)

And now there is a new wrinkle in all of this: 

$69 million for an NFT (Non-Fungible Token) of an artwork: story here

1) What is the buyer getting? Bragging rights?

2) Can people SEE the art even if they don't OWN it? I don't know.

3) If someone hacked in and got a perfect copy of the art and posted it on a website, would that be theft? Nothing was taken. 

4) In the normal art world does it happen that prices go DOWN because people wake up and say WHAT? Why did I EVER think that White on White was worth 10 million dollars? 

5) Might this happen here also? 

6) Is my Feb 13 blog post, which is in a different format (or I didn't know how to use the blogger interface then), going to be worth something one day?

Friday, March 12, 2021

Cake Cutting in Overtime

There's a new proposal out of Baltimore for a new way to handle overtime in National Football League games. This post is about American football; soccer has its own overtime issues that we won't talk about here.

Instead of randomly choosing who gets the ball, which gives an advantage to the team on offense, the Ravens' coach John Harbaugh suggests a "spot and choose" rule based on cake cutting. One team picks a starting position and the other team decides whether to be on offense or defense.

Sounds intriguing, but there's a problem. In cake cutting, if you cut off a crumb, everyone would definitely choose the rest of the cake. But in football suppose the spotting team chose the offensive team's one-yard line (99 yards needed for a touchdown). For spot and choose to work, the one-yard line would have to be an obvious choice for defense. But many teams might still choose starting at the one-yard line on offense. There's a chance you can march down the field, and if not you can always punt it away. So the team that gets to choose whether to be on offense could get an advantage no matter what the spotting team did.
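The objection can be made concrete with a toy model. Below is a minimal Python sketch in which the win-probability curve is completely made up (the function p_offense_wins and its numbers are my invention, not NFL data): the spotting team picks the yard line that leaves the chooser as close to indifferent as possible, and because no yard line makes the two sides exactly equal in this model, the chooser still keeps a small edge.

```python
from fractions import Fraction

def p_offense_wins(y):
    """HYPOTHETICAL win-probability model, made up for illustration:
    the offense's chance of winning when it starts y yards from the
    end zone. Exact rational arithmetic avoids float ties."""
    return Fraction(max(0, min(200, 191 - 2 * y)), 200)

def spot_and_choose():
    # The spotting team picks the yard line minimizing the chooser's
    # advantage; the chooser then takes the better of the two sides.
    best = min(range(1, 100),
               key=lambda y: max(p_offense_wins(y), 1 - p_offense_wins(y)))
    p = p_offense_wins(best)
    side = "offense" if p >= Fraction(1, 2) else "defense"
    return best, side, max(p, 1 - p)

print(spot_and_choose())
```

In this made-up model the best the spotting team can do still leaves the chooser winning slightly more than half the time, which is exactly Lance's worry.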

I still like the idea of "spot and choose". Maybe you let the first team choose not only the yard line but also what down to start on. No one would choose to start at their own one-yard line on 4th down.

Are there variations for spot and choose in other sports? I like using game theory to figure out how to play actual games. 

Sunday, March 07, 2021

When do I need to warn about Spoilers?

In a recent post here I mentioned in passing a plot point from the last season of The Big Bang Theory. Note that the last season was in 2019.  WARNING- do not read that post if you are watching The Big Bang Theory and do not want a plot point revealed. 

Someone who perhaps thinks Lance and I are the same person (are we? See here) left Lance a tweet complaining about the spoiler. At least I think they are complaining. The tweet is in Spanish and it's here.


Some possibilities:

1) Some country is two years behind America in showing The Big Bang Theory.

2) The person who tweeted has them on DVR (or something like that) and is watching them a few years after they air. (I watched Firefly on a DVD I borrowed from a friend 10 years after it went off the air. Ask your grandparents what a DVD used to be.)

3) They are kidding us and making fun of the notion of spoilers.

This raises the question: When is it okay to post spoilers without warning? A few random thoughts:

1) ``Don't tell me who won the superb owl! I have it on tape and want to watch it without knowing who won!''  This always seemed odd to me. Rooting for events that have already happened seems weird to me. When I was 10 years old I was in New York listening to a Knicks-Celtics basketball game on the radio, and during halftime I accidentally found a Boston radio station that had the game 30 minutes later (I did not realize that the channel I was on originally was 30 minutes behind). So I heard how the game ended, then switched back to listening to a game knowing how it would end. I didn't root for my team (the Knicks, who lost), but it just felt very weird listening to it. If I had thought of it I might have noticed how the different broadcasts differed and gotten a paper out of the data, but as a 10 year old I was not thinking about how to pad my resume quite yet.

2) I like seeing a mystery twice: the first time I don't know who did it; the second time I do, but can look for clues I missed the first time.

3) I would have thought that 2 years after a show goes off the air it's fine to spoil. But... maybe not.

4) It also matters how important the plot point is. I didn't think the plot point I revealed was that important. 

5) Many TV shows are predictable so I am not sure what `spoiler' even means. If I said to Darling:

 The bad guy is an unimportant character we meet in the first 10 minutes.

that does not show I've seen it before. It shows that I am a master of TV-logic.

6) With arc TV shows this is more of a problem. While it was possible to spoil an episode (Captain Kirk will survive but Ensign Red Shirt will bite the dust), it was impossible to spoil a long-term arc. TV has gotten too complicated. And I say that without having watched Game of Thrones.

Sunday, February 28, 2021

Using number-of-PhDs as a measure of smartness is stupid.

In Thor: Ragnarok, Bruce Banner mentions that he has 7 PhDs. Gee, I wonder how he managed to slip that into a conversation casually.  Later in the movie:

Bruce: I don't know how to fly one of those (it's an alien spacecraft).

Thor: You're a scientist. Use one of your PhDs.

Bruce: None of them are for flying alien spaceships.

On the episode Double Date of Archer (Season 11, Episode 6), Gabrielle notes that she has 2 PhDs whereas Lana only has 1 PhD.

I am sure there are other examples of a work of fiction using number of PhDs as a way to say that someone is smart. In reality the number of PhDs one has is... not really a thing.

In reality if a scientist wants to do work in another field they... do work in that field.

Gödel did research in physics in the 1950's, but it would have been silly for him to go back and get a PhD in it.

Fortnow did research in Economics, but it would have been silly to go back and get a PhD in it. 

Amy Farrah Fowler worked in neurobiology and then in physics. Her Nobel Prize in Physics (with Sheldon Cooper) is impressive; getting a PhD in physics would be... odd. Imagine someone looking at her resume: she has a Nobel Prize in Physics, but does she have a PhD? Did she pass her qualifying exams?  This is the flip side of what I mentioned in a prior post about PhDs: not only does Dr. Doom want to take over the world, but his PhD is from the University of Latveria, which is not accredited.

There are other examples.

There ARE some people who get two PhDs for job-market or other such reasons. That's absolutely fine, of course. However, I wonder if in the real world they brag about it. I doubt it.

Is there anyone who has 3 PhDs? I would assume yes, but again, I wonder if they brag about it. Or should. 

WHY do TV and movies use number-of-PhDs as a sign of genius? I do not know, especially since there are BETTER ways to say someone is a genius in a way the audience can understand: number-of-Nobel-prizes, number-of-times-mentioned-in-complexityblog, number of Dundies (see here), etc.

Thursday, February 25, 2021

Complexity is the Enemy of Speed

The title of this post came from an opinion piece in the Wall Street Journal yesterday on vaccine distribution. Many attempts to get the vaccines to the right groups first have slowed down distribution and sometimes even caused vaccines to go to waste. Rules to help spread vaccines across minority groups often backfire. Often when some rules lead to inequity, we try to fix it with more rules when we need fewer. Attempts to distribute vaccines to multiple medical and pharmacy sites have made it difficult to get appointments even if you are eligible.

Randomness is the simplest way to fairness. The movie Contagion got it right: just choose birthdays by picking balls from a bin to distribute the vaccine. Then people can just show up at a few chosen sites with proof of birthday. No need to sign up.
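The protocol really is as simple as it sounds. Here is a Python sketch of the Contagion-style lottery (the seed and the decision to include Feb 29 are my choices for illustration, not anything from the movie):

```python
import random

def birthday_lottery(seed=None):
    """Put all 366 possible birthdays (month, day) in a bin and draw
    them in a random order. People become vaccine-eligible in the
    order their birthday is drawn; no sign-up needed."""
    days_in_month = [31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    birthdays = [(m, d) for m in range(1, 13)
                 for d in range(1, days_in_month[m - 1] + 1)]
    random.Random(seed).shuffle(birthdays)
    return birthdays

order = birthday_lottery(seed=2021)
print(order[:5])  # the first five birthdays called up
```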

You could argue to add back conditions like age, medical conditions, jobs but that just leads you down the same problematic path. The fastest way to get past this pandemic is to get vaccines into arms. Trust the randomness.

Monday, February 22, 2021

Good Names and Bad Names of Game Shows and theorems

In my post on Alex Trebek (see here) I noted that Jeopardy! is not a good name for the game show, since it doesn't tell you much about the show. Perhaps Answers and Questions would be a better name.

The following game shows have names that tell you something about the game and hence have better names: 

Wheel of Fortune, The Price is Right, Let's Make a Deal, Beautiful Women Have Suitcases Full of Money (the original name for Deal or No Deal), Win Ben Stein's Money, Beat the Geeks.

In math we often name a concept after a person. While this may be a good way to honor someone, the name does not tell us much about the concept, and it leads to statements like:

A Calabi-Yau manifold is a compact complex Kähler manifold with a trivial first Chern class.

A Kähler manifold is a Hermitian manifold for which the Hermitian form is closed.

A Hermitian manifold is the complex analog of a Riemannian manifold.

(These examples are from an article I will point to later; I do not understand any of these terms, though I once knew what a Riemannian manifold was. I heard the term Kähler manifold in the song Bohemian Gravity. It's at about the 4-minute-30-second mark.)

While I am amused by the name Victoria Delfino Problems (probably the only realtor who has problems in math named after her, see my post here) it's not a descriptive way to name open problems in descriptive set theory. 

Sometimes  a name becomes SO connected to a concept that it IS descriptive, e.g.:

The first proof of VDW's theorem yields ACKERMANN-LIKE bounds.

but you cannot count on that happening AND it is only descriptive to people already somewhat in the field. 

What to do? This article makes the ballian point that we should STOP DOING THIS and that the person who first proves a theorem should name it in a way that tells you something about the concept. I would agree, but this can still be hard to really do.

In my book on Muffin Mathematics (see here) I have a sequence of methods called

Floor Ceiling, Half, Mid, Interval, Easy-Buddy-Match, Hard-Buddy-Match, Gap, Train. 

There was one more method that I didn't quite name, but in my description of it I used the phrase `Scott Muffin Problem' to honor Scott Huddleton, who came up with the method.

All but the last concept were given ballian names.  Even so, you would need to read the book to see why the names make sense. Still, that would be easier than trying to figure out what a Calabi-Yau manifold is.