Thursday, October 31, 2019

Statistics to Scare

So how do you parse the following paragraph from Monday's NYT Evening Briefing?
A study in JAMA Pediatrics this year found that the average Halloween resulted in four additional pedestrian deaths compared with other nights. For 4- to 8-year-olds, the rate was 10 times as high.
The paragraph means the percent increase in pedestrian deaths for 4-8 year olds was ten times the percent increase for people as a whole, a number you cannot determine from the information given. Using the fact that roughly 7% of Americans are in the 4-8 year range, that yields a little under three additional deaths for 4-8 year olds and about one for the other age ranges.

The paper unfortunately sits behind a paywall. But I found a press release.
Children in the United States celebrate Halloween by going door-to-door collecting candy. New research suggests the popular October 31 holiday is associated with increased pedestrian traffic fatalities, especially among children. Researchers used data from the National Highway Traffic Safety Administration to compare the number of pedestrian fatalities from 1975 to 2016 that happened on October 31 each year between 5 p.m. and 11:59 p.m. with those that happened during the same hours on a day one week earlier (on October 24) and a day one week later (on November 7). During the 42-year study period, 608 pedestrian fatalities happened on the 42 Halloween evenings, whereas 851 pedestrian fatalities happened on the 84 other evenings used for comparison. The relative risk (an expression of probability) of a pedestrian fatality was higher on Halloween than those other nights. Absolute mortality rates averaged 2.07 and 1.45 pedestrian fatalities per hour on Halloween nights and the other evenings, respectively, which is equivalent to the average Halloween resulting in four additional pedestrian deaths each year. The biggest risk was among children ages 4 to 8. Absolute risk of pedestrian fatality per 100 million Americans was small and declined from 4.9 to 2.5 between the first and final decades of the study interval. 
Doing the math, we see a 43% increase overall and a more than quintupling of the number of pedestrian deaths for the youngsters. That sounds scary indeed, though it only adds up to a handful of deaths. Moreover, the authors didn't take into account the larger number of pedestrians on Halloween, particularly among 4-8 year olds.
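To double-check that arithmetic, here is a quick back-of-the-envelope calculation in Python (my own sketch; it assumes the 5 p.m. to 11:59 p.m. window is treated as roughly 7 hours, which matches the press release's rates):

```python
# Back-of-the-envelope check of the press-release numbers.
halloween_rate = 2.07   # pedestrian fatalities per hour on Halloween evenings
control_rate   = 1.45   # per hour on the comparison evenings
hours          = 7      # assumed length of the 5 p.m. - 11:59 p.m. window

extra_deaths = (halloween_rate - control_rate) * hours
pct_increase = (halloween_rate / control_rate - 1) * 100

print(f"extra pedestrian deaths per Halloween: {extra_deaths:.1f}")   # about 4.3
print(f"percent increase over control nights:  {pct_increase:.0f}%")  # about 43%
```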

A smaller fraction of people die as pedestrians on Halloween today than on a random day when I was a kid. I wonder if that's because there are fewer pedestrians today.

Also from the New York Times, a sociologist has found "no evidence that any child had been seriously injured, let alone killed, by strangers tampering with candy." I feel lied to as a kid.

So the upshot: Tell your kids to take the usual precautions but mostly let them dress up, have fun trick-or-treating and enjoy their candy.

Monday, October 28, 2019

Random non-partisan thoughts on the Prez Election


This post is non-partisan, but in the interest of full disclosure I disclose that I will almost surely be voting for the Democratic nominee. And I say almost surely because very weird things could happen. I can imagine a Republican saying, in 2015, that they would almost surely be voting for the Republican nominee, and then later deciding not to vote for Trump.


My Past Predictions: Early in 2007 I predicted it would be Obama vs McCain and that Obama would win. Was I smart or lucky? Early in 2011 I predicted Paul Ryan would be the Republican candidate. Early in 2015, and even into 2016, I predicted that Trump would not get the nomination. After he got the nomination I predicted he would not become president. So, in answer to my first question, I was lucky, not smart. Having said all of this, I predict that the Democratic candidate will be Warren. Note- this is an honest prediction, not one fueled by what I want to see happen. I predict Warren since she seems to be someone who can bridge the so-called establishment and the so-called left (I dislike the terms LEFT and RIGHT since issues and views change over time). Given my past record, you should not take me too seriously. Also, since this prediction is not particularly unusual, if I am right it would NOT be impressive (my Obama prediction was impressive, and my Paul Ryan prediction would have been very impressive had I been right).

Electability: My spell checker doesn't think it's a word. Actually it shouldn't be a word. It's a stupid concept. Recall:

JFK was unelectable since he was Catholic.

Ronald Reagan was unelectable because he was too conservative.

A draft dodging adulterer named Bill Clinton could not possibly beat a sitting president who just won a popular war.

Nobody named Barack Hussein Obama, who is half-black, could possibly get the nomination, never mind the presidency. And Hillary had the nomination locked up in 2008--- she had no serious challengers.

(An article in The New Republic in 2007 predicted a brokered convention for the Republicans where Fred Thompson, Mitt Romney, and Rudy Giuliani would split the vote, and at the same time a cakewalk for Hillary Clinton with Barack Obama winning Illinois in the primaries but not much else. Recall that 2008 was McCain vs Obama.)

Donald Trump will surely be stopped from getting the nomination because, in the end, The Party Decides.

Republican voters in 2016 will prefer Rubio to Trump since Marco is more electable AND more conservative. Hence, in the space of Republican candidates, Rubio dominates Trump. So, by simple game theory, Trump can't get the nomination. The more electable Rubio, in the 2016 primaries, won Minnesota, Washington DC, and Puerto Rico (Puerto Rico has a primary. Really!). One of my friends thought he also won Guam (Guam?) but I could not find evidence of that on the web. Okay, so why did Trump win? Because voters are not game theorists.

ANYWAY, my point is that how can anyone take the notion of electability seriously when unelectable people have gotten elected?

Primaries: Dem primary voters are torn between who they want to be president and who can beat Trump. Since it's so hard to tell who can beat whom, I would recommend voting for who you like and not saying stupid things like

America would never elect a 76-year-old socialist who's recently had a heart attack.

or

Trump beat a woman in 2016 so we can't nominate a woman.

or

America is not ready to elect a gay president yet. (America is never ready to do X until after it does X and then the pundits ret-con their opinions. For example, of course America is ready for gay marriage. Duh.)

Who won the debate?
Whoever didn't bother watching it :-). I think the question is stupid and has become about who got out a clever sound bite. We need sound policy, not sound bites!

Monday, October 21, 2019

Differentiation and Integration

Recently there was an excellent xkcd about differentiation and integration, see here.

This brings up thoughts on diff and int:

1) For some students, integration is when math gets hard.

Diff (at least on the level of Calc I) is rote memorization. A computer program can EASILY do diff.

Integration by humans requires more guesswork and thought. Computers can now do it very well, but I think it was harder to get to work.

Someone who has worked on programs for both, please comment.
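For what it's worth, here is a quick sympy experiment (my own sketch, not from the post) showing that symbolic differentiation is completely mechanical, while symbolic integration sometimes has to reach for special functions or give up:

```python
import sympy as sp

x = sp.symbols('x')

# Differentiation is purely rule-based: product rule, chain rule, done.
print(sp.diff(x**3 * sp.sin(x), x))          # x**3*cos(x) + 3*x**2*sin(x)

# Integration works impressively often, but the answers can leave the
# world of elementary functions...
print(sp.integrate(sp.sin(x) / x, x))        # Si(x), the sine integral
print(sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo)))  # sqrt(pi)

# ...and sometimes sympy just returns the integral unevaluated.
print(sp.integrate(x**x, x))                 # Integral(x**x, x)
```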

2) When I took Honors Calculus back in 1976 (from Jeff Cheeger at SUNY Stony Brook) he made a comment which really puzzled the class, myself included, but later I understood it:

             Integration is easier than Differentiation

The class thought this was very odd since the problem of, GIVEN a function, finding its diff is easier than, GIVEN a function, finding its int. And of course I am talking about the kinds of functions one is given in Calc I and Calc II, so this is not meant to be a formal statement.

What he meant was that integration has better mathematical properties than differentiation. For example, differentiating the function f(x)=abs(x) (absolute value of x) is problematic at 0, whereas it has no problem with integration anywhere (alas, if only our society were as relaxed about integration as f(x)=abs(x) is).
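A tiny numerical illustration of his point (my own addition): the one-sided difference quotients of abs(x) disagree at 0, while its antiderivative F(t) = t*abs(t)/2 is perfectly differentiable there.

```python
def f(x):
    return abs(x)

def F(t):
    # antiderivative of abs(x) with F(0) = 0, namely t*|t|/2
    return t * abs(t) / 2

h = 1e-6

# Difference quotients of abs(x) at 0 from the left and from the right:
print((f(0) - f(-h)) / h, (f(h) - f(0)) / h)   # about -1.0 and 1.0: no derivative

# The same quotients for the antiderivative F:
print((F(0) - F(-h)) / h, (F(h) - F(0)) / h)   # both about 0.0: F'(0) exists
```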

So I would say that the class and Dr. Cheeger were both right (someone else might say they were both wrong); we were just looking at different notions of easy and hard.

Are there other cases in math where `easy' and `hard' can mean very different things?



Thursday, October 17, 2019

2019 Fall Jobs Post

Starting PhD students would always assume that the computer science academic job market would be as strong or as weak when they graduate as it was when they started, and they would always be wrong. That may have changed. We've had such a stretch of strong growth in computer science, starting as we pulled out of the financial crisis in 2012, that students who started in the strong market back then see only a much stronger market today.

Every fall I recap advice for students, and others, looking for academic jobs. The best sources are the ads from the CRA and the ACM. For postdoc and faculty positions specific to theoretical computer science, check out TCS Jobs and Theory Announcements. If you have jobs to announce, please post to the above and/or feel free to leave a comment on this post. Even if you don't see an ad, almost surely your favorite university is looking to hire computer scientists. Check out their website or email someone at the department. The CRA just published a member book, a collection of one-pagers for several departments, almost all of which are trying to grow.

Needless to say, we're trying to greatly expand computing at Illinois Tech; come join us.

Something new this year: CATCS is collecting and disseminating profiles of junior theory researchers on the job market. Definitely take advantage, whether by signing up as a job seeker or by reaching out to theorists on the market once the profiles are posted. The CRA also maintains a CV database for candidates for academic, industrial and government research positions.

While this is a job-seekers' market, you still need to put your best foot forward. Reach out to professors at conferences, such as the upcoming FOCS. Polish your CV and get your Google Scholar page in good shape. Practice your job talk; a bad one can kill your visit. Research the people you will see during the interview ahead of time; I like to write down one interesting discussion topic for each. You'll need to sell yourself to non-theorists. Data, cybersecurity and quantum are hot this year, so highlight your work in those areas without making it look fake.

In any case have fun! You'll meet lots of interesting people in your job search and eat way too much.

Sunday, October 13, 2019

The Sheldon Conjecture (too late for Problems with a Point)


Chapter 5 of Problems with a Point (by Gasarch and Kruskal) is about how mathematical objects get their names. If it were an e-book that I could edit and add to (is this a good idea or not? more on that later), then I would have added the following.

The Sheldon Conjecture

Background: Nobel Laureate Sheldon Cooper has said that 73 is the best number because (see the quick check after this list):

a) 73 is prime.

b) 73 is the 21st prime and note that 7*3=21.

c) The mirror of 73, namely 37, is prime.

d) 37 is the 12th prime, and 12 is the mirror of 21.
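Here is a quick sympy check of properties (a)-(d) (my own sketch; mirror is just a helper for reversing the decimal digits):

```python
from sympy import isprime, prime

def mirror(n):
    """Reverse the decimal digits of n."""
    return int(str(n)[::-1])

assert isprime(73)           # (a) 73 is prime
assert prime(21) == 73       # (b) 73 is the 21st prime ...
assert 7 * 3 == 21           #     ... and 7*3 = 21
assert isprime(mirror(73))   # (c) the mirror of 73, namely 37, is prime
assert prime(12) == 37       # (d) 37 is the 12th prime ...
assert mirror(21) == 12      #     ... and 12 is the mirror of 21
print("73 checks out")
```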

Sheldon never quite said it's the only such number; that was conjectured by Jessie Byrnes, Chris Spicer, and Alyssa Turnquist here. They called it Sheldon's Conjecture, probably since Sheldon Cooper should have conjectured it.

Why didn't Sheldon make Sheldon's conjecture? This kind of question has been asked before:



Could Euler have conjectured the prime number theorem?

Why didn't Hilbert (or others) pursue Ramsey Theory?

(readers are encouraged to give other examples)

I doubt we'll be asking this about Sheldon Cooper since he is a fictional character.

I am delighted that

a) There is a Sheldon's Conjecture.

b) It has been solved by Pomerance and Spicer; see here.

Actually (b) might not be so delightful--- once a conjecture is proven, it stops being called by the name of the conjecturer. If you don't believe me, just ask Vazsonyi or Baudet. If you don't know who they are, then (1) see here and (2) that proves my point. So perhaps I wish it had not been solved, so that The Sheldon Conjecture would live on as a name.

Another issue this brings up: Let's say that Problems with a Point were an online book that I was able to edit easily. Then I might add material on The Sheldon Conjecture. And while I am at it, I would add The Monty Hall Paradox to the chapter on how theorems get their names. Plus, I would fix some typos and update some references. Now let's say that all books were online and the authors could modify them. Would this be good or bad?

1) Good- The book would get better and better as errors got removed.

2) Good- The book would get to include material that is appropriate but came out after it was published.

3) Good- The book would get to include material that is appropriate but the authors forgot to include the first time around.

4) Bad- For referencing the book or for book reviews of the book, you are looking at different objects. The current system has First Edition, Second Edition, etc., so you can point to which one you are looking at. Easily-edited books would have more of a continuous update process, making it harder to point to things.

5) Bad- When Clyde and I emailed the final version to the publisher we were almost done. When we got the galleys and commented on them we were DONE DONE! Maybe I want to fix typos and errors online, but entire new sections--- when we're done, we are DONE.

6) Bad- at what point is it (i) a corrected version of the old book, (ii) a new edition of the old book, (iii) an entirely new book? Life is complicated enough.

I would probably like a system where you can fix errors but can't add new material. I'm not sure that's really a clean distinction.

Thursday, October 10, 2019

William Kruskal's 100th birthday

Today, Oct 10, 2019, is William Kruskal's 100th birthday (he's dead, so no cake; oh well). William Kruskal was a great statistician. To honor him we have a guest post by his nephew Clyde Kruskal. We also note that the Kruskal Family is one of the top two math families of all time (see here). William is a reason why the other two Kruskal brothers went into mathematics: as a much older sibling (6 years older than Martin and 8 years older than Joseph), he encouraged their early mathematical development.

Here are some pictures of William Kruskal and of the Kruskal Family: here

Guest Post by Clyde Kruskal


I was asked to blog about my uncle, the statistician, William H. Kruskal, on the centennial of his birth. We called him Uncle Bill. He is best known for co-inventing the Kruskal-Wallis test.
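For readers who haven't met it, the Kruskal-Wallis test is a nonparametric test of whether several samples come from the same distribution; scipy has an implementation. A minimal sketch with made-up data (my addition, purely for illustration):

```python
from scipy.stats import kruskal

# Three small samples (toy numbers, purely for illustration)
group_a = [6.1, 5.9, 6.4, 6.0, 6.2]
group_b = [7.2, 7.0, 6.8, 7.5, 7.1]
group_c = [5.5, 5.8, 5.6, 5.9, 5.7]

stat, p = kruskal(group_a, group_b, group_c)
print(f"H = {stat:.2f}, p = {p:.4f}")   # a small p-value suggests the groups differ
```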

There are two stories that I know about Bill's childhood, which must have been family lore:

(1) As a young child, Bill was a prolific reader. His reading comprehension outstripped his conversational English. One morning, having just read the word "schedule" in a book, and obviously having never heard it pronounced, he sat down to breakfast and asked:
"What is the ske·DU·le for today?"

(2) My grandparents once had Bill take an occupational assessment test. The tester said that Bill was a very bright child, and should become a traffic engineer to solve the problems with traffic congestion. (This would have been the 1920s!) As you probably know, Uncle Bill did not succeed in ending traffic congestion. Oh well.


Recently there has been a controversy over whether to ask about citizenship in the 2020 census. In the late 1900s there was a different controversy: whether to adjust the known undercount statistically. In general, Democrats wanted to adjust the count and Republicans did not (presumably because Democratic states tended to have a larger undercount). A national committee was set up to study the issue, with four statisticians in favor and four against. I was surprised to learn that Uncle Bill was on the commission as one of those against adjustment, since I thought his political views were more closely aligned with those of the Democrats. He was very principled, basing his views only on statistical arguments. I saw him give a talk on the subject, which seemed quite convincing (but, then again, I did not see the other side). They ended up not adjusting.


For more on William Kruskal, in general, and his view on adjusting the census, in particular, see the pointers at the end of this post.


I have more to say. I just hope that I am on the ske·DU·le to blog about Uncle Bill at the bicentennial of his birth.


The William Kruskal Legacy: 1919-2005 by Fienberg, Stigler, and Tanur

A short biography of William Kruskal by J.J. O'Connor and E.F. Robertson

William Kruskal: Mentor and Friend by Judith Tanur

William Kruskal: My Scholarly and Scientific Model by Stephen Fienberg

A conversation with William Kruskal by Sandy Zabell

Testimony for the House Subcommittee on Census and Population for 1990 (see page 140)






Monday, October 07, 2019

What comes first, theory or practice? It's Complicated!

Having majored in pure math, I had the impression that usually the theory comes first and then someone works out how to make it work in practice. While this is sometimes true, it is often NOT true, and this will not surprise any of my blog readers. Even so, I want to tell you about some times it surprised me. This says more about my ignorance than about math or applications or whatnot.

1) Quantum

a) Factoring was proven to be in BQP way before actual quantum computers could do this in reasonable time (we're still waiting).

b) Quantum Crypto- This really is out there. I do not know what came first, the theory or the practice. Or if they were in tandem.

c) (this one is the inspiration for the post) When I first heard the term Quantum Supremacy I thought it meant the desire for a THEOREM that some problem A is in BQP but is provably not in P. For example, if someone proved factoring is not in P (unlikely this will be proven, and hey- maybe factoring is in P). Perhaps some contrived problem like those constructed by diagonalization (my spell checker thinks that's not a word. Having worked in computability theory, I think it is. Darn- my spellchecker thinks computability is not a word.) Hence when I heard that Google had a paper proving Quantum Supremacy (I do not recall if I actually heard the word proven) I assumed that there was some theoretical breakthrough. I was surprised and not in the slightest disappointed to find out it involved actual quantum computers.

Question: When the term Quantum Supremacy was first coined, did they mean theoretical, or IRL, or both?

2) Ramsey Theory

a) For Ramsey's theorem, van der Waerden's theorem, Rado's theorem, and others I could name, first a theorem gave an upper bound on a number, and later computers, and perhaps some more math, got better bounds on that number.

b) Consider the following statement:

For all c there exists P such that for all c-colorings of {1,...,P} there exist x, y, z of the same color such that x^2 + y^2 = z^2.

Ronald Graham conjectured the c=2 case and offered $100 in the 1980s. (I do not know if he had any comment on the general case.) I assumed that it would be proven with ginormous bounds on the P(c) function, and then perhaps some reasonable bound would be found by clever programming and some math. (See here for the Wikipedia entry about the problem, which also has pointers to other material.)

Instead, the c=2 case was proven with an exact bound, P(2)=7825, by a computer program in 2016. The proof is 200 terabytes. So my prediction was incorrect.
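To make the statement concrete, here is a small checker (my own illustration, not from the actual proof) that looks for a monochromatic Pythagorean triple under a given coloring; the hand-picked coloring below shows that n=16 is not yet enough to force one.

```python
def find_mono_pythagorean_triple(coloring):
    """Return a monochromatic triple (x, y, z) with x^2 + y^2 = z^2 under
    the given coloring of {1,...,n}, or None if there is none."""
    n = len(coloring)
    for x in range(1, n + 1):
        for y in range(x, n + 1):
            z2 = x * x + y * y
            z = round(z2 ** 0.5)
            if z <= n and z * z == z2 and coloring[x] == coloring[y] == coloring[z]:
                return (x, y, z)
    return None

n = 16
all_red = {i: 'red' for i in range(1, n + 1)}
print(find_mono_pythagorean_triple(all_red))      # (3, 4, 5)

# A hand-picked 2-coloring of {1,...,16} with no monochromatic triple:
blue = {5, 8, 12}
two_colors = {i: ('blue' if i in blue else 'red') for i in range(1, n + 1)}
print(find_mono_pythagorean_triple(two_colors))   # None

# The 2016 result says that once n reaches 7825, every 2-coloring of
# {1,...,n} contains a monochromatic Pythagorean triple.
```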

As for the result:

PRO: We know the result is true for c=2 and we even know the exact bound. Wow! And for Ramsey theory it's unusual to have exact bounds!

CON: It would be good to have a human-readable proof. This is NOT an anti-technology statement. For one thing, a human-readable proof might help us get the result for c=3 and beyond.

3) This item is a cheat in that I knew the empirical results first. However, I will tell you what I am sure I would have thought (and been wrong) had I not known them.

Given k, does the equation


x^3 + y^3 + z^3 = k

have a solution in Z? I would have thought that some hard number theory would determine for which k it has a solution (with a proof that does not give the actual solutions), and then computer programs would try to find the solutions. Instead, (1) some values of k are ruled out by simple mod considerations, and (2) as for the rest, computers have found solutions for some of them. Lipton-Regan (here) and Gasarch (here) have blogged about the k=33 case. Lipton-Regan also comment on the more recent k=42 case.
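The mod considerations in (1) are easy to see computationally: a quick sketch of the standard mod 9 argument (my addition), where cubes mod 9 only take the values 0, 1, and 8, so a sum of three cubes can never be 4 or 5 mod 9.

```python
# Cubes mod 9 take only the values 0, 1, 8.
cubes_mod9 = {(x ** 3) % 9 for x in range(9)}
print(sorted(cubes_mod9))                     # [0, 1, 8]

# Hence x^3 + y^3 + z^3 mod 9 can never be 4 or 5.
reachable = {(a + b + c) % 9
             for a in cubes_mod9 for b in cubes_mod9 for c in cubes_mod9}
print(sorted(set(range(9)) - reachable))      # [4, 5]
```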


Thursday, October 03, 2019

Quantum Supremacy: A Guest Post by Abhinav Deshpande

I am delighted to introduce you to Abhinav Deshpande, who is a graduate student at the University of Maryland, studying Quantum Computing. This will be a guest post on the rumors of the recent Google breakthrough on Quantum Supremacy. For other blog posts on this exciting rumor, see Scott Aaronson's post, Scott Aaronson's second post on it, John Preskill's Quanta article, and Fortnow's post, and there may be others.

Guest post by Abhinav:

I (Abhinav) thank Bill Fefferman for help with this post, and Bill Gasarch for inviting me to do a guest post.


The quest towards quantum computational supremacy

September saw some huge news in the area of quantum computing, with rumours that the Google AI Lab has achieved a milestone known as 'quantum computational supremacy', also termed 'quantum supremacy' or 'quantum advantage' by some authors. Today, we examine what this term means, the most promising approach towards achieving this milestone, and the best complexity-theoretic evidence we have so far against classical simulability of quantum mechanics. We will not be commenting on details of the purported paper since there is no official announcement or claim from the authors so far.

What it means

First off, the field of quantum computational supremacy arose from trying to formally understand the differences in the power of classical and quantum computers. A complexity theorist would view this goal as trying to give evidence to separate the complexity classes BPP and BQP. However, it turns out that one can gain more traction from considering the sampling analogues of these classes, SampBPP and SampBQP.  These are classes of distributions that can be efficiently sampled on classical and quantum computers, respectively. Given a quantum circuit U on n qubits, one may define an associated probability distribution over 2^n outcomes as follows: apply U to the fiducial initial state |000...0> and measure the resulting state in the computational basis. This produces a distribution D_U.

A suitable way to define the task of simulating the quantum circuit is as follows (a toy numerical sketch follows this spec):

Input: Description of a quantum circuit U acting on n qubits.

Output: A sample from the probability distribution D_U obtained by measuring U|000...0> in the computational basis.
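As a toy illustration of the task (my own sketch; it replaces an actual gate-by-gate circuit with a Haar-random unitary on a handful of qubits, which a real experiment would of course not do):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3                       # a toy number of qubits
dim = 2 ** n

# Stand-in for U: a Haar-random unitary from the QR decomposition of a
# complex Gaussian matrix, with the phases of R's diagonal factored out.
Z = (rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))) / np.sqrt(2)
Q, R = np.linalg.qr(Z)
U = Q * (np.diag(R) / np.abs(np.diag(R)))

state = np.zeros(dim, dtype=complex)
state[0] = 1.0              # the initial state |000...0>
out = U @ state

D_U = np.abs(out) ** 2      # the distribution D_U over 2^n outcomes
samples = rng.choice(dim, size=5, p=D_U)
print([format(int(s), f"0{n}b") for s in samples])   # e.g. ['101', '011', ...]
```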

One of the early works in this field was that of Terhal and DiVincenzo, which first considered the complexity of sampling from a distribution (weak simulation) as opposed to that of calculating the exact probability of a certain outcome (strong simulation). Weak simulation is arguably the more natural notion of simulating a quantum system, since in general, we cannot feasibly compute the probability of a certain outcome even if we can simulate the quantum circuit. Subsequent works by Aaronson and Arkhipov, and by Bremner, Jozsa, and Shepherd established that if there is a classically efficient weak simulator for different classes of quantum circuits, the polynomial hierarchy collapses to the third level.


So far, we have only considered the question of exactly sampling from the distribution D_U. However, any realistic experiment is necessarily noisy, and a more natural problem is to sample from any distribution D_O that is ε-close to D_U in a suitable distance measure, say the variation distance.

The aforementioned work by Aaronson and Arkhipov was the first to consider this problem, and they made progress towards showing that a special class of quantum circuits (linear optical circuits) is classically hard to approximately simulate in the sense above. The task of sampling from the output of linear optical circuits is known as boson sampling. At the time, it was the best available way to show that quantum computers may solve some problems that are far beyond the reach of classical computers.

Even granting that the PH doesn't collapse, one still needs to make an additional conjecture to establish that boson sampling is not classically simulable. The conjecture is that additively approximating the output probabilities of a random linear optical quantum circuit is #P-hard. The reason this may be true is that output probabilities of random linear optical quantum circuits are given by permanents of Gaussian random matrices, and the permanent is as hard to compute on a random matrix as it is on a worst-case matrix. Therefore, the only missing link is to go from average-case hardness of exact computation to average-case hardness of an additive estimation. In addition, if we make a second conjecture known as the "anti-concentration" conjecture, we can show that this additive estimation is non-trivial: it suffices to give us a good multiplicative estimation with high probability.
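For the curious, computing a permanent exactly is itself expensive: the standard approach is Ryser's formula, which takes on the order of 2^n * n operations. A minimal sketch (my addition, not from the post):

```python
import numpy as np
from itertools import combinations

def permanent(A):
    """Exact permanent via Ryser's formula, O(2^n * n) arithmetic operations."""
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            # product over rows of the sum of the chosen columns
            total += (-1) ** r * np.prod(A[:, list(cols)].sum(axis=1))
    return (-1) ** n * total

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(permanent(A))   # 1*4 + 2*3 = 10
```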

So that's what quantum computational supremacy is about: we have a computational task that is efficiently solvable with quantum computers, but which, if it could be done efficiently by a classical computer, would collapse the polynomial hierarchy (assuming certain other conjectures are true). One may substitute "collapse of the polynomial hierarchy" with stronger conjectures and incur a corresponding tradeoff in the likelihood of the conjecture being true.

Random circuit sampling

In 2016, Boixo et al. proposed to replace the class of quantum circuits for which some hardness results were known (commuting circuits and boson sampling) by random circuits of sufficient depth on a 2D grid of qubits having nearest-neighbour interactions. Concretely, the proposed experiment would be to apply random unitaries from a specified set on n qubits arranged on a 2D grid for sufficient depth, and then sample from the resulting distribution. The two-qubit unitaries in the set are restricted to act between nearest neighbours, respecting the geometric locality of the grid. This task is called random circuit sampling (RCS).

At the time, the level of evidence for the hardness of this scheme was not yet the same as for the linear optical scheme. However, given the theoretical and experimental interest in the idea of demonstrating a quantum speedup over classical computers, subsequent works by Bouland, Fefferman, Nirkhe and Vazirani, and Harrow and Mehraban bridged this gap (the relevant work by Aaronson and Chen will be discussed in the following section). Harrow and Mehraban proved anticoncentration for random circuits. In particular, they showed that a 2-dimensional grid of n qubits achieves anticoncentration in depth O(\sqrt{n}), improving upon earlier results with higher depth due to Brandao, Harrow and Horodecki. Bouland et al. established the same supporting evidence for RCS as that for boson sampling, namely a worst-to-average-case reduction for exactly computing most output probabilities, even without the permanent structure possessed by linear optical quantum circuits.

Verification

So far, we have not discussed the elephant in the room: verifying the output distribution, which is supported on 2^n outcomes. It turns out that there are concrete lower bounds, such as those due to Valiant and Valiant, showing that verifying whether an empirical distribution is close to a target distribution is impossible if one has too few samples.

Boixo et al. proposed a way of certifying the fidelity of the purported simulation. Their key observation was to note that if their experimental system is well modelled by a noise model called global depolarising noise, estimating the output fidelity is possible with relatively few outcomes. Under global depolarising noise with fidelity f, the noisy distribution takes the form D_N = f D_U + (1-f) I, where I is the uniform distribution over the 2^n outcomes. Together with another empirical observation about the statistics of output probabilities of the ideal distribution D_U, they argued that computing the following cross-entropy score would serve as a good estimator of the fidelity:

f ~ H(I, D_U) - H(D_exp, D_U), where H(D_A, D_B) is the cross-entropy between the two distributions: H(D_A, D_B) = -\sum_i p_A(i) \log p_B(i).

The proposal here was to experimentally collect several samples from D_exp, classically compute, by brute force, the probabilities of these outcomes under the ideal distribution D_U, and estimate the cross-entropy using this information. If the test outputs a high score for a computation on sufficiently many qubits at sufficient depth, the claim is that quantum supremacy has been achieved.
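Here is a small simulation of that estimator (my own sketch; it replaces the real D_U with an exponentially distributed, Porter-Thomas-like stand-in, which plays the role of the empirical observation about output statistics alluded to above, and it assumes the global depolarising noise model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12                       # toy number of qubits, so all 2^n outcomes fit in memory
N = 2 ** n

# Stand-in for the ideal distribution D_U: Porter-Thomas-like probabilities.
p_U = rng.exponential(size=N)
p_U /= p_U.sum()

f_true = 0.6                                   # the fidelity we hope to recover
D_N = f_true * p_U + (1 - f_true) / N          # global depolarising noise model

samples = rng.choice(N, size=200_000, p=D_N)   # the "experimental" samples

H_unif = -np.mean(np.log(p_U))                 # H(I, D_U), computed exactly
H_exp = -np.mean(np.log(p_U[samples]))         # estimate of H(D_exp, D_U)

print("estimated fidelity:", H_unif - H_exp)   # close to 0.6
```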

Aaronson and Chen gave an alternative form of evidence for the hardness of scoring well on a test that aims to certify quantum supremacy in a manner similar to the above. This sidesteps the issue of whether a test similar to the one above does indeed certify the fidelity. The specific problem considered was "Heavy Output Generation" (HOG), the problem of outputting strings that have higher than median probability in the output distribution. Aaronson and Chen linked the hardness of HOG to a closely related problem called "QUATH", and conjectured that QUATH is hard for classical computers.

Open questions

Assuming the Google team has performed the impressive feat of both running the experiment outlined before and classically computing the probabilities of the relevant outcomes to see a high score on their cross-entropy test, I discuss the remaining positions a skeptic might take regarding the claim about quantum supremacy.

"The current evidence of classical hardness of random circuit sampling is not sufficient to conclude that the task is hard". Assuming that the skeptic believes that the polynomial hierarchy does not collapse, a remaining possibility is that there is no worst-to-average-case reduction for the problem of *approximating* most output probabilities, which kills the proof technique of Aaronson and Arkhipov to show hardness of approximate sampling.

"The cross-entropy proposal does not certify the fidelity." Boixo et al. gave numerical evidence and other arguments for this statement, based on the observation that the noise is of the global depolarising form. A skeptic may argue that the assumption of global depolarising noise is a strong one.

"The QUATH problem is not classically hard." In order to give evidence for the hardness of QUATH, Aaronson and Chen examined the best existing algorithms for this problem and also gave a new algorithm that nevertheless do not solve QUATH with the required parameters.

It would be great if the community could work towards strengthening the evidence we already have for this task to be hard, either phrased as a sampling experiment or together with the verification test.

Finally, I think this is an exciting time for quantum computing, and an exciting time to witness this landmark event. It may not be the first experiment that is "hard" to classically simulate, since there are many quantum experiments that are beyond the reach of current classical simulations, but the inherent programmability and control present in this experimental system are what enable the tools of complexity theory to be applied to the problem. A thought that fascinates me is the idea that we may be exploring quantum mechanics in a regime never probed this carefully before, the "high complexity regime" of quantum mechanics. One imagines there are important lessons in physics here.