<b>Computational Complexity</b> -- Computational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch (https://blog.computationalcomplexity.org/)<br />
<br />
<b>Pre-(Publish and Perish)</b> (Sun, 16 Feb 2020, by Lance Fortnow)<br />
<i>Guest post by Evangelos Georgiadis</i><br />
<div>
<br /></div>
Quite a few recent posts have focused on papers, publications, and venues:
"optimal" venues for papers under different objective functions, e.g.,
minimizing carbon footprint while maximizing community building, networking,
and information sharing; see <a href="https://cacm.acm.org/magazines/2020/1/241717-publish-and-perish/fulltext">Moshe Vardi</a>.<br />
<br />
Here we would like to take a closer look at one of the key assumptions
-- the paper.
To generate a paper, one needs to come up with a result: something
novel, fresh, or interesting to say.
The question that has baffled this author is what constitutes a conducive,
or perhaps even optimal, setting for generating papers.
Since papers come in different flavors, ranging from "<a href="https://blog.computationalcomplexity.org/2009/11/innovation.html">solid technical papers to risky innovative ones</a>", the settings
may vary; but ultimately it would be interesting to investigate (or,
for that matter, crowdsource) whether there is a common denominator
in terms of setting or environment -- a necessary but not sufficient
condition, so to speak.
<br />
<br />
Here are some accounts from others that may be helpful as reference points.
<br />
<br />
Knuth's papers "<a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.455.1434&rep=rep1&type=pdf">Semantics of Context-Free Languages</a>" and
"The Analysis of Algorithms" are two instances suggesting that research
institutes might not provide an optimal environment for idea generation.
<br />
<br />
As Knuth points out in "<a href="https://www-cs-faculty.stanford.edu/~knuth/cl.html">Selected Papers on Computer Languages</a>" (Chapter 18, p. 431):<br />
<blockquote class="tr_bq">
Perhaps new ideas emerge most often from hectic, disorganized
activity,
when a great many sources of stimulation are present at once --
when numerous
deadlines need to be met, and when other miscellaneous activities
like child-rearing
are also mixed into the agenda.</blockquote>
Knuth goes on to say that it was challenging to do creative work in
the office, and that finding a few hideaways provided some form of
solution -- aka sitting under 'that' oak tree near Lake Lagunita.
That said, the inspirational setting for getting into the zone for the
two aforementioned papers was provided by (Californian) beaches.
Hold that observation. Is this not something we have come across
somewhere else?
Fields medalist Stephen Smale, in "<a href="http://math.berkeley.edu/~smale/biblio/chaos.ps">Chaos: Finding a Horseshoe on the Beaches of Rio</a>", suggests that some of his best work happened at his "beach office".
Whether beaches in general provide a good setting remains to be shown;
perhaps for very innovative ideas, oceanic freedom is necessary.
That said, the author recalls (hopefully accurately enough) an account
by the young James H. Simons, who attended a conference in Japan in the
early days. Instead of choosing a spacious accommodation (which he
could afford), he restricted himself to the typically confined room
type -- not only confined by space but also pressured by time, young
Simons was able to generate an interesting result for that conference.
(This probably demonstrates that technical results don't necessarily
require 'oceanic freedom'.)<br />
<br />
Some meaningful probabilistic advice comes from the fat-tails department,
in "The Black Swan" by Nassim Taleb (page 209):
"Go to parties! If you're a scientist, you will chance upon a remark
that might spark a new research."
<br />
<br />
Murray Gell-Mann provides an interesting collective account in his
Google Tech Talk entitled "On Getting Creative Ideas."
He <a href="https://youtu.be/3fSB6ut-cT0?t=118">recollects</a> a workshop he attended in Aspen in 1969 that focused on
the experience of getting creative ideas, not just among mathematicians
and theoretical physicists but also poets and artists.
The account largely neglects the actual settings that might nurture the
creative thought process, but it provides interesting references to
people such as Hermann von Helmholtz, who happened to have thought
about this topic and partitioned the process into
"saturation, incubation and illumination".
<br />
<br />
For those interested in an account that focuses exclusively on the
Eureka moments of mathematicians and theoretical physicists, see
Jacques Hadamard's book "<a href="https://www.amazon.com/Mathematicians-Mind-Jacques-Hadamard/dp/0691029318">The Mathematician's Mind</a>".
Hadamard iterated on Helmholtz's three-stage process, and it's worth
taking a look at what he came up with.
<br />
<br />
Finally, what are good venues or workshops for generating papers? Or,
to rephrase that a bit: what type of atmosphere at venues fosters
creativity -- what food for thought should be provided to participants,
and how should it be distributed over a given day?
Ryan R. Williams proposed (as practiced by the 34th Bellairs Winter
Workshop on Computational Geometry)
"... easy problems, informal atmosphere focusing exclusively
on thinking about problems in a cycle of down-time where
one meets in two intense sessions and have free time otherwise."
(This type of setting seems to resonate with the three stages of
"saturation, incubation and illumination".)
<br />
<br />
That said, most workshops, including the Simons workshops, don't seem
to follow such a recipe. They are geared more toward the follow-up
step, namely communicating what people have found, rather than
collaborating with them to tackle open problems.
Perhaps some re-evaluation of how workshops are run is in order.<br />
<br />
<b>The World of Publishing</b> (Thu, 23 Jan 2020, by Lance Fortnow)<br />
Bill is out from blogging for a couple of weeks on injured-reserve (he'll be fine). I put together a quick blog post on what's happening in the world of publications. <br />
<br />
The Trump administration has suggested requiring publishers to make all papers based on US federally-funded research publicly available immediately instead of after one year. The Association of American Publishers sent an <a href="https://presspage-production-content.s3.amazonaws.com/uploads/1508/lettertothepresidentfrom140researchandpublishingorgs-2.pdf?10000">open letter</a> asking how publishers can maintain their organizations and the people who work there if they give up a major revenue source. The ACM joined the letter, which caused quite a backlash, forcing the ACM to <a href="https://www.acm.org/articles/bulletins/2020/january/message-from-the-acm-president-regarding-open-access">explain itself</a>, write another <a href="https://www.acm.org/binaries/content/assets/about/acm-letter-to-ostp.pdf">letter</a>, and run some <a href="https://www.acm.org/articles/bulletins/2020/january/oa-webinars">webinars</a> about open access, the last of which is tomorrow. In the end, this is leading to some good discussions about open access and the financial models of academic societies. <br />
<br />
The ACM also has a <a href="https://www.acm.org/publications/policies/author-name-changes">new policy</a> offering three options for what happens when an author changes their name: maintain separate identities, have the two identities link to each other, or retroactively change the name on all previous papers. I can see good reasons for all three options.<br />
<br />
Finally, Moshe Vardi writes in his <a href="https://cacm.acm.org/magazines/2020/1/241717-publish-and-perish/fulltext">CACM column</a> about the ecological cost of conferences and suggests that conferences allow authors to (video)phone it in. Emanuele Viola offers <a href="https://emanueleviola.wordpress.com/2020/01/01/publish-and-perish/">his own thoughts</a>. Most conferences will continue to require authors to show up, with only occasional exceptions as needed, believing these policies will keep their conferences healthy.<br />
<br />
Personally I believe conferences should exist because researchers want to attend, not because they have to. We still need conferences so our community can get together, and I don't believe we can do that via the Internet no matter how good the VR experience gets. But we can have more videos and fewer conferences, and reduce the costs: time, financial and environmental.<br />
<br />
<b>Quantum Provers to Infinity and Beyond</b> (Tue, 14 Jan 2020, by Lance Fortnow)<br />
The Internets are buzzing about the new paper <a href="https://arxiv.org/pdf/2001.04383">MIP* = RE</a> by Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright and Henry Yuen. See posts by <a href="https://www.scottaaronson.com/blog/?p=4512">Scott</a> and <a href="https://windowsontheory.org/2020/01/14/mipre-connes-embedding-conjecture-disproved/">Boaz</a>, not to mention a wonderful backstory by <a href="https://mycqstate.wordpress.com/2020/01/14/a-masters-project/">Vidick</a> himself and a <a href="https://twitter.com/henryquantum/status/1216936907814375424">tweet stream</a> by Yuen. I'm not expert enough to verify or even try to explain the proof, so I'll just give a brief overview of the result.<br />
<br />
For those not familiar with the classes, RE (recursively enumerable) is the simplest of all complexity classes: a language L is in RE if there is some Turing machine M such that x is in L if and only if M on input x accepts. For x not in L, M on x can reject or run forever. The classic halting problem, the set of descriptions of Turing machines that halt on empty input, is RE-complete. To nitpick the notation, it should have been r.e. or even c.e. (computably enumerable), the more <a href="https://blog.computationalcomplexity.org/2004/02/is-it-recursive-computable-or.html">standard notation</a> these days. But given the importance of the result, we can give the authors a pass.<br />
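As an aside, the accept-by-halting behavior is easy to illustrate with a toy recognizer (this example is mine, not from the paper): take the language of n whose Collatz iteration reaches 1. The recognizer accepts by halting; on any n that never reached 1 (none is known), it would simply run forever, which is exactly what RE permits.

```python
def collatz_recognizer(n: int) -> bool:
    """Semi-decide {n >= 1 : the Collatz iteration starting at n reaches 1}.

    Membership is signaled by halting and accepting.  If some n never
    reached 1, this loop would run forever on it -- allowed behavior for
    a recognizer of an RE language, since it never has to reject.
    """
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
    return True

# 27 takes a famously long trajectory but does reach 1
print(collatz_recognizer(27))  # True
```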
<br />
MIP* is the set of languages provable to a classical probabilistic polynomial-time verifier by two separated provers sharing an unlimited number of entangled qubits. Without the quantum entanglement, MIP = NEXP, nondeterministic exponential time, and last year Natarajan and Wright showed that MIP* could do at least exponentially better in their paper <a href="https://arxiv.org/abs/1904.05870">NEEXP in MIP*</a>. NEEXP seems large but still consists only of computable sets; RE gets outside the computable realm.<br />
<br />
I found the first paper more surprising, as it showed that quantum entanglement actually gets you more, much more, than classical provers. The second paper gets a much stronger and tight result, still highly surprising in its own right, as it requires disproving <a href="https://en.wikipedia.org/wiki/Connes_embedding_problem">Connes' embedding conjecture</a>. In the end we may just consider this one result, as the second paper subsumes the first in both theorem and authors.<br />
<br />
We didn't award the <a href="https://blog.computationalcomplexity.org/2019/12/complexity-year-in-review-2019.html">2019 theorem of the year</a> to Natarajan and Wright, instead opting for a paper that had more, how should I say this, sensitivity. This new paper is certainly the front runner for the 2020 honors, albeit it is only mid-January.<br />
<br />
<b>What would you do if you showed P=NP? I would reread Factor Man by Matt Ginsberg</b> (Tue, 14 Jan 2020, by GASARCH)<br />
Lance has often said (and also in <a href="https://blog.computationalcomplexity.org/2020/01/silicon-valley-ethics.html">this post</a>) that if P=NP that would be great for the world: much more efficient ways to build things, science could be done better, etc., and that this is much more important than the fact that modern crypto would no longer work. We now have the technology to do private key really well -- like a thumb drive that holds a billion bits for one-time pads.<br />
<br />
I agree that the world would be better off in some ways, but I wonder how much damage would be done in the transition period from public to private key. Would the world recover enough to reap the benefits of P=NP?<br />
<br />
First, think of what YOU would do if you showed P=NP (and let's assume your algorithm is either reasonable or could be made reasonable with some time and effort).<br />
<br />
The novel <i>Factor Man</i> is about what someone who has solved P=NP does. I won't tell you how it goes, but they deal with the issue intelligently. So if I solved P=NP I would first re-read it and think through whether I would do the same, modify what is done, or something else entirely. It's a good start.<br />
<br />
I reviewed the book in SIGACT News, or you can read my review <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/factorman.pdf">here</a>.<br />
<br />
On a slightly different note, here is the latest argument I've heard for why P=NP:<br />
<br />
Planar 2-coloring is in P<br />
<br />
Planar 4-coloring is in P<br />
<br />
So<br />
<br />
Planar 3-coloring should be in P.<br />
<br />
This was said by a very good math/CS ugrad at UMCP. I do not know if he was kidding.<br />
<br />
<br />
<b>Silicon Valley Ethics</b> (Thu, 09 Jan 2020, by Lance Fortnow)<br />
<b>Spoiler Alert: </b>This post has details from the final episodes of the HBO television series <a href="https://en.wikipedia.org/wiki/Silicon_Valley_(TV_series)">Silicon Valley</a>.<br />
<br />
A few times I've gotten emails from people claiming they have shown P = NP and asking whether they should keep their algorithm a secret to protect the cryptography out there. My typical response is that they should use their algorithm to mine a few bitcoins and then get back to me.<br />
<br />
The fictional characters of Pied Piper faced this dilemma when the AI they created "developed a general solution to discrete log in polynomial time," with some nice complexity class diagrams in the background.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-0wC_yidITsY/Xg9oKlfZkrI/AAAAAAABu-g/OhzquwGwXbUqNDSDmGlpkU66wrjOcW62QCKgBGAsYHg/s1600/IMG_20200101_200809.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="240" src="https://1.bp.blogspot.com/-0wC_yidITsY/Xg9oKlfZkrI/AAAAAAABu-g/OhzquwGwXbUqNDSDmGlpkU66wrjOcW62QCKgBGAsYHg/s320/IMG_20200101_200809.jpg" width="320" /></a></div>
<br />
Pied Piper was about to roll out its new internet, a distributed network that communicated between cell phones based on a compression algorithm developed by Pied Piper's CEO. Rolling out the network would reveal even more advanced compression based on breaking discrete log. "If we cancel it or shut it down, then others will try to copy or reverse engineer everything that we've built ... Our launch has to fail, publicly and spectacularly."<br />
<br />
But here comes the P v NP dilemma: "And what about all the other stuff we're gonna do? I mean, give internet to underserved communities, students in the homework gap, refugees, genomic research. Pied Piper can help scientists cure cancer."<br />
<br />
I'd take broken encryption over cancer any day. You can still do encryption even if P = NP: one-time pads distributed via USB drives, or quantum key distribution. And cancer sucks.<br />
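The one-time pad point is easy to make concrete. Here is a minimal sketch (function names are mine): XOR-ing a message with a truly random, never-reused pad is information-theoretically secure, so its security does not depend on P vs NP at all.

```python
import secrets

def otp_xor(data: bytes, pad: bytes) -> bytes:
    # The pad must be truly random, at least as long as the data, and
    # never reused; under those assumptions the ciphertext reveals
    # nothing about the message (Shannon's perfect secrecy).
    assert len(pad) >= len(data)
    return bytes(d ^ p for d, p in zip(data, pad))

pad = secrets.token_bytes(64)    # shared in advance, e.g. on a USB drive
msg = b"mine a few bitcoins first"
ct = otp_xor(msg, pad)           # encrypt
assert otp_xor(ct, pad) == msg   # decrypting is the same XOR
```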
<br />
They should have mined a few bitcoins.<br />
<br />
<b>The Wikipedia Entry on NP-Intermediary Problems lists one of mine! I'm not bragging about it.</b> (Mon, 06 Jan 2020, by GASARCH)<br />
I recently needed to look at which NP problems are possibly intermediary (neither in P nor NP-complete). So I went to Wikipedia and found <a href="https://en.wikipedia.org/wiki/NP-intermediate">this</a>.<br />
<br />
They had many problems, though some I had never heard of. For those I had never heard of:<br />
<br />
<i>should they be on the list?</i><br />
<br />
That is, are they natural? Naturalness is hard to define rigorously, but I will take you through my train of thought as I read the first few:<br />
<br />
<b>Factoring Integers</b>. Yes, quite possibly intermediary: if it's NPC then PH collapses, and, at least so far, it does not seem to be in P. (The NPC --> PH collapse argument: take<br />
<br />
FACT = { (n,x) : n has a nontrivial factor ≤ x }<br />
<br />
FACT is clearly in NP:<br />
a complete factorization of n provides evidence that some nontrivial factor is ≤ x.<br />
<br />
FACT is clearly in coNP:<br />
a complete factorization of n provides evidence that no nontrivial factor is ≤ x.<br />
<br />
So if FACT were NP-complete then SAT would be in coNP; hence NP = coNP and PH collapses.)<br />
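To see how one certificate serves both directions, here is a small sketch (helper names are mine, and trial division stands in for whatever factoring method one likes): the complete factorization of n witnesses membership when its smallest prime factor is ≤ x, and non-membership when it isn't.

```python
def factorize(n: int) -> list:
    """Complete prime factorization by trial division (fine for small n)."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def in_fact(n: int, x: int) -> bool:
    """(n, x) is in FACT iff n has a nontrivial factor <= x."""
    factors = factorize(n)
    if len(factors) < 2:   # n is prime (or 1): no nontrivial factors at all
        return False
    # The same factorization certifies a yes (min <= x) or a no (min > x).
    return min(factors) <= x

assert in_fact(15, 4)       # 3 <= 4
assert not in_fact(15, 2)   # smallest nontrivial factor of 15 is 3
assert not in_fact(13, 12)  # 13 is prime
```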
<br />
Factoring is clearly an important and well studied problem. It even has its own Wikipedia entry!<br />
<br />
<b>Discrete Log</b>. Similar to Factoring. And it is also an important and well studied problem. It even has its own Wikipedia Entry!<br />
<br />
<b>Isomorphism Problems</b>. They list Group and Ring Isomorphism. They don't list Graph Isomorphism, which is odd. (ADDED LATER: my bad, they do mention Graph Isomorphism in the section on graph algorithms.) Anyway, if Graph Isomorphism is NPC then PH collapses, and, at least so far, there is no polynomial-time algorithm for it. (I do not know whether Group Isomorphism or Ring Isomorphism being NPC implies that PH collapses -- if you know of such a proof, leave a comment with a pointer to it.)<br />
<br />
Graph Isomorphism is a well studied problem and seems important and natural (I don't know if Graph Isomorphism has any real applications the way that factoring and DL do). It even has its own Wikipedia entry! Group and Ring Isomorphism also seem important and natural. And they have their own Wikipedia entries!<br />
<br />
<b>Numbers in Boxes Problem</b>. My first reaction: Gee, what's that? For Factoring, DL, and the Isomorphism problems they did not define the problem -- they gave pointers to the Wikipedia entries. For this one there was no Wikipedia entry. There was one reference. I went to it. <i>It was a blog entry of mine</i>! Here it is: <a href="https://blog.computationalcomplexity.org/2010/07/what-is-complexity-of-these-problems.html">here</a>, and to save you time I'll say what it is:<br />
<br />
{ (1<sup>n</sup>,1<sup>k</sup>) : you can partition 1,...,n into k boxes so that no box has x,y,z with x + y = z }<br />
<br />
Is this problem important? Does it exist anywhere outside of my blog entry? Yes -- a special case of it appears in <a href="https://www.amazon.com/Doctor-Eccos-Cyberpuzzles-Dennis-Shasha/dp/0393325415/ref=sr_1_2?keywords=Dr.+Ecco&qid=1577931523&s=books&sr=1-2">Dr. Ecco's Cyberpuzzles by Dennis Shasha</a> (note: Dennis was a classmate of mine in graduate school at Harvard). I think the case was to partition {1,...,100} as best you can. Actually, I first saw that case of the problem in his book and then generalized it.<br />
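For concreteness, here is a brute-force sketch of the problem (assuming, as in Schur's theorem, that x and y need not be distinct -- the definition above doesn't pin that down):

```python
from itertools import product

def can_partition(n: int, k: int) -> bool:
    """Can {1,...,n} be put into k boxes so that no box contains
    x, y, z with x + y = z?  Tries all k^n assignments, so this is
    only sensible for tiny n."""
    for assign in product(range(k), repeat=n):
        boxes = [set() for _ in range(k)]
        for num, box in zip(range(1, n + 1), assign):
            boxes[box].add(num)
        if all(not any(x + y in box for x in box for y in box if x <= y)
               for box in boxes):
            return True
    return False

# For k = 2, boxes {1,4} and {2,3} work up to n = 4, but n = 5 is
# impossible: this is the k = 2 case of Schur's theorem.
assert can_partition(4, 2)
assert not can_partition(5, 2)
```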
<br />
The problem is sparse, so if it were NP-complete then P = NP (by Mahaney's theorem) -- very good evidence that it's not NPC. And it's been studied for thousands of years, with people looking for poly time algorithms (I think Pythagoras studied it) without success, so it's almost surely not in P. OR that last sentence was complete nonsense. Indeed, I don't think anyone has studied the problem computationally, or, for that matter, at all. So the evidence that it's not in P is... sparse.<br />
<br />
But it's worse than that. One could devise MANY sparse problems that are, being sparse, likely NOT NPC, and hardly studied, so, as of now, not in P. Should those count? Only if (a) more people study them, so there are attempts one can point to of trying to get them into P, and (b) the problem is natural (which is hard to define).<br />
<br />
Note that I can vary the problem: x+2y=z (this relates to lower bounds on VDW numbers),
or any other combination of x, y, z (or more variables) that I like.<br />
<br />
<br />
<br />
This raises a question:<br />
<br />
<b>When is a problem worthy of being put on lists of problems?</b><br />
<b><br />
</b> Here are some possible criteria. One can take ANDs and ORs of them.<br />
<br />
1) The problem has a Wikipedia entry. This might fall victim to Goodhart's law: when a measure becomes a target, it ceases to be a good measure. That is, I could make a Wikipedia entry on the Numbers-in-Boxes problem and then say LOOK, it's on Wikipedia!<br />
<br />
2) More than X people have worked on the problem, for some value of X. But here is a reason this might not be a good criterion: look at the problem<br />
<br />
{ α : α is a reg expression that allows numeric exponents (so a<sup>1000</sup> is fine, making reg expressions VERY succinct) such that L(α)=Σ<sup>*</sup> }<br />
<br />
This problem looks natural, and was proven by Meyer and Stockmeyer to be EXPSPACE-complete.
That is the only paper on this problem, yet the problem really does look natural, and the result is rightly celebrated as a natural problem that is provably not in P.<br />
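The succinctness is the whole point: an exponent written with b digits stands for a block of length up to 10<sup>b</sup>, so expanding the expression blows it up exponentially. A quick illustration (the example itself is mine) using Python's re module, which supports the bounded-repetition syntax a{m}:

```python
import re

# 'a{1000}' is 7 characters, while the ordinary regular expression it
# abbreviates, 'aaa...a', needs 1000 characters -- numeric exponents
# shrink the written size of an expression exponentially, which is
# why deciding properties like universality can blow up to EXPSPACE.
succinct = re.compile(r'a{1000}')

assert succinct.fullmatch('a' * 1000) is not None
assert succinct.fullmatch('a' * 999) is None
```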
<br />
3) When people in the field look at the problem they say YEAH, that's a good problem.<br />
<br />
4) The problem relates to other problems or other fields.<br />
<br />
I doubt the Number-in-boxes problem satisfies any of these criteria. The variant with x+2y=z relates to Ramsey Theory. Great.<br />
<br />
NOW, back to the list -- I won't go through any more of it, but I note that for some entries the only reference seems to be a conversation on Stack Exchange. Some of those end up referring to real papers, so they are more likely natural, but some do not.<br />
<br />
Having said that, is there any harm in the list containing some problems that are not... worthy? Is that even the right word to use?<br />
<br />
Note that I don't have strong opinions on any of these matters; I am just wondering what criteria Wikipedia, and other sources, use when they compile lists of problems.<br />
<br />
<b>Complexity Year in Review 2019</b> (Tue, 31 Dec 2019, by Lance Fortnow)<br />
Some great theorems this year including <a href="https://mycqstate.wordpress.com/2019/04/14/randomness-and-interaction-entanglement-ups-the-game/">non-deterministic double exponential time by quantumly entangled provers</a> and <a href="https://rjlipton.wordpress.com/2019/03/29/integer-multiplication-in-nlogn-time/">integer multiplication in O(n log n) time</a>. But the result of the year has to go to a paper that gave a <a href="https://blog.computationalcomplexity.org/2019/07/local-kid-makes-history.html">shockingly simple proof of a major longstanding conjecture</a>.<br />
<br />
<div style="text-align: center;">
<a href="https://arxiv.org/abs/1907.00847">Induced subgraphs of hypercubes and a proof of the Sensitivity Conjecture</a> by Hao Huang</div>
<div>
<br /></div>
<div>
Of course 2019 will be remembered in some circles for giving us Google's claims of quantum supremacy and all the quantum hype, deserved and otherwise, that goes with it.<br />
<br />
On the personal front, Bill came out with his new book <a href="https://www.worldscientific.com/worldscibooks/10.1142/11261#t=toc">Problems with a Point: Exploring Math and Computer Science</a>, co-authored with Clyde Kruskal (<a href="https://amzn.to/2s6IlPl">Amazon</a>, <a href="https://blog.computationalcomplexity.org/2019/02/problems-with-point-exploring-math-and.html">blog</a> <a href="https://blog.computationalcomplexity.org/2019/04/problems-with-point-not-plug-just-some.html">posts</a>). Lance <a href="https://www.iit.edu/news/lance-fortnow-designated-new-college-science-dean">became a dean</a>. </div>
<div>
<br /></div>
<div>
We remember <a href="https://www.nytimes.com/2019/01/11/obituaries/michael-atiyah-dead.html">Michael Atiyah</a>, <a href="https://blog.computationalcomplexity.org/2019/04/elwyn-berlekamp-died-april-9-2019.html">Elwyn Berlekamp</a>, <a href="https://blog.computationalcomplexity.org/2019/04/quiz-show-scandalsadmissions.html">Charles van Doren</a>, <a href="https://blog.computationalcomplexity.org/2019/06/ray-miller-one-of-our-founders-passes.html">Ray Miller</a>, <a href="https://www.lamsade.dauphine.fr/fr/actualites/detail-de-lactualite/article/deces-de-jerome-monnot-membre-du-laboratoire-lamsade.html">Jérôme Monnot</a> and <a href="https://news.stanford.edu/2019/04/24/nils-nilsson-pioneer-robotics-artificial-intelligence-dies-86/">Nils Nilsson</a>.</div>
<div>
<br /></div>
<div>
Thanks to guest posters <a href="https://blog.computationalcomplexity.org/2019/10/quantum-supremacy-guest-post-by-abhinav.html">Abhinav Deshpande</a>, <a href="https://blog.computationalcomplexity.org/2019/01/the-paradigm-shift-in-fintech.html">Evangelos Georgiadis</a>, <a href="https://blog.computationalcomplexity.org/2019/07/guest-post-by-samir-khuller-on.html">Samir Khuller</a>, <a href="https://blog.computationalcomplexity.org/2019/06/ray-miller-one-of-our-founders-passes.html">Ming Lin</a>, <a href="https://blog.computationalcomplexity.org/2019/07/answer-to-both-infinite-hats-problems.html">David</a> <a href="https://blog.computationalcomplexity.org/2019/07/fortran-is-underated.html">Marcus</a>, <a href="https://blog.computationalcomplexity.org/2019/06/ray-miller-one-of-our-founders-passes.html">Ben Shneiderman</a> and <a href="https://blog.computationalcomplexity.org/2019/04/cuckoo-cycles.html">John Tromp</a>.</div>
<div>
<br /></div>
<div>
As we move into the 2020s, we tend to look back and look forward. The 2010s will go down as the decade computing and data transformed society, for better and worse. Google turned 21 this year as its OG leadership stepped down. I turned 21 in 1984, but 1984 seems closer than ever.</div>
<div>
<br /></div>
<div>
Last year we ended the <a href="https://blog.computationalcomplexity.org/2018/12/complexity-year-in-review-2018.html">year in review</a> with:</div>
<blockquote class="tr_bq">
We end the year with craziness, the stock market is going through wild gyrations, we have a partial government shutdown including all of NSF and an uncertain political landscape with different parties leading the two houses of congress. We're still in the midst of a technological revolution and governments around the world try to figure how to regulate it. I find it hard to predict 2019 but it will not be quiet.</blockquote>
2019 was not quiet, and we're about to head into an impeachment trial, Brexit, and a critical US presidential election. The real challenges of the twenties will come from massive transformations driven by automation, climate change, and deepening divisions in our society. How will academia cope with changing demographics, financial challenges, and educating to manage the technological revolution?<br />
<br />
Let's all take a deep breath, roll up our sleeves and get the decade going.<br />
<br />
<b>Why is there no all-encompassing term for a course on Models of Computation?</b> (Thu, 12 Dec 2019, by GASARCH)<br />
In my <a href="https://blog.computationalcomplexity.org/2019/12/what-do-you-call-your-ugrad-non.html">last blog post</a> I asked my readers to leave comments giving the name of the course that covers some of regular languages, context-free languages, decidability, P, and NP (and maybe other stuff). I suspected there would be many different names, and there were. I was able to put all but 6 into 4 equivalence classes, so that's 10 names. That's a lot, especially compared to<br />
<br />
(Introduction to) Algorithms<br />
<br />
and<br />
<br />
(Introduction to) Cryptography<br />
<br />
which I suspect have far fewer names. One commenter pointed out that the reason for the many different names is that there are many versions of the course. That's not quite an explanation, since there are also many different versions of Cryptography -- at UMCP crypto is cross-listed in THREE departments (CS, Math, EE) and it's taught by 6 or so different people who don't talk to each other (I am one of them). I think Algorithms is more uniform across colleges.<br />
<br />
I think the terms Algorithms and Cryptography are both rather broad and can accommodate many versions of those courses, whereas no term seems to be agreed upon to encompass the DFA-etc. course.
Even saying "DFA etc." is not quite right, since some of the courses spend little or even no time on DFAs.<br />
<br />
Below is a list of all the names I got and some comments. Note that some schools appear twice since they have two courses along these lines.<br />
<br />
-----------------------------------------<br />
TITLE: (Introduction to) Theory of Computation:<br />
<br />
Swarthmore: Theory of Computation<br />
<br />
UCSD: Theory of Computation<br />
<br />
Saint Michaels: Theory of Computation<br />
<br />
Univ of Washington: Introduction to the Theory of Computation<br />
<br />
Waterloo: Introduction to the Theory of Computing<br />
<br />
COMMENT: Theory of Computation could have been the term that encompasses all of these courses. I speculate that it didn't catch on since it sounds too much like computability theory, which is only one part of the course.<br />
<br />
------------------------<br />
TITLE: Formal Languages and XXX<br />
<br />
CMU: Formal Languages, Automata, and Computability<br />
<br />
Florida Tech: Formal Languages and Automata Theory<br />
<br />
UC-Irvine: Formal Languages and Automata Theory<br />
<br />
Univ of Chicago: Introduction to Formal Languages<br />
<br />
University of Bucharest: Formal Language and Automata<br />
<br />
TU Darmstadt: Formal Foundations of CS I: Automata, Formal Languages, and Decidability<br />
<br />
TUK Germany: Formal Languages and Computability<br />
<br />
COMMENT: The title makes it sound like they don't cover P and NP. I do not know if that's true; however, I speculate that, since the title sounds that way, it could never be the encompassing term.<br />
<br />
Spell check thinks Automata and Computability are not words, but I've googled them and they seem to be words.<br />
<br />
--------------------------<br />
TITLE: Computability/Decidability and Complexity/Intractability<br />
<br />
Reed College: Computability and Complexity<br />
<br />
Caltech: Decidability and Intractability<br />
<br />
COMMENT: The title makes it sound like they don't cover regular or context free languages. I do not know if that's true; however, I speculate that, since the terms sound that way, they never caught on as the general term.<br />
<br />
Spellcheck thinks that neither Decidability nor Decideability is a word. Google seems to say that I should leave out the e, so I will.<br />
<br />
------------------------------<br />
TITLE: Blah MODELS Blah<br />
<br />
Tel-Aviv (a long time ago) Computational Models<br />
<br />
UIUC: Algorithms and Models of Computation (also has some algorithms in it)<br />
<br />
Waterloo: Models of Computation (enriched version)<br />
<br />
COMMENT: Models of Computation sounds like a good name for the course! Too bad it didn't catch on. It would also be able to withstand changes in the content like more on parallelism or more on communication complexity.<br />
<br />
------------------------------<br />
TITLE: MISC<br />
<br />
CMU: Great Ideas in Theoretical Computer Science<br />
<br />
UCLouvain (Belgium) Calculabilite (Computability)<br />
<br />
Moscow Inst. of Phy. and Tech.: Mathematical logic and Theory of Algorithms<br />
<br />
Portland State University: Computational Structures<br />
<br />
Germany: Informatik III (Not all of Germany)<br />
<br />
Univ of Chicago: Introduction to Complexity<br />
<br />
COMMENT: All of these terms are too narrow to have served as a general term.<br />
<div>
<br /></div>
<br />
<br />https://blog.computationalcomplexity.org/2019/12/why-is-there-no-all-encompassing-term.htmlnoreply@blogger.com (GASARCH)12tag:blogger.com,1999:blog-3722233.post-693865784966232415Mon, 09 Dec 2019 04:00:00 +00002019-12-08T23:00:44.425-05:00What do you call your ugrad non-algorithms theory course?I am in the process of reviewing<br />
<br />
<i> What can be computed: A Practical Guide to the Theory of Computation</i><br />
<i> by John MacCormick</i><br />
<br />
<br />
and I need YOUR help for the first SENTENCE. I began by saying<br />
<br />
<br />
This is a textbook for a course on <i>Formal Language Theory</i><br />
<i><br /></i>
but then I realized that this is not what we call the course at UMCP. Then I got to thinking: what do other schools call it? I have the following so far:<br />
<br />
UMCP: Elementary Theory of Computation<br />
<br />
Harvard: Introduction to Theory of Computation<br />
<br />
MIT: Automata, Computability, and Complexity<br />
<br />
Clark: Automata Theory<br />
<br />
(My spellcheck does not think Automata is a word. Also Computability. Usually I listen to my spellcheckers, but I checked and YES, I spelled them right.)<br />
<br />
For some other schools I either hit a place I needed an account, or I just got titles without a description so I could not be sure.<br />
<br />This is where YOU come in!<br />
<br />
Please leave comments with your school and the title of the course at your school that covers a reasonable overlap with: Regular Sets, Context Free Sets, Decidable, Undecidable, and r.e. sets, P, NP, perhaps other complexity classes, and NP-completeness. It's FINE if your answer is one of the above ones, or one of the other comments--- I plan to later set this up as a pigeonhole principle problem.<br />
<br />
I suspect that courses in algorithms are called <i>Algorithms </i>or <i>Introduction to Algorithms.</i><br />
<i><br /></i>
I suspect that courses in cryptography are called <i>Cryptography </i><i> </i>or <i>Intro to Cryptography.</i><br />
<br />
<br />
Why does the non-algorithm, non-crypto theory course have more names?<br />
<br />
<br />
<br />
<br />
<i><br /></i>
<br />https://blog.computationalcomplexity.org/2019/12/what-do-you-call-your-ugrad-non.htmlnoreply@blogger.com (GASARCH)27tag:blogger.com,1999:blog-3722233.post-1485483742428466612Mon, 02 Dec 2019 23:01:00 +00002019-12-03T17:21:50.267-05:00Julia Robinson's 100th birthdayOn Dec 8, 1919 Julia Robinson was born, so today is close to her 100th birthday (she passed away at<br />
<div>
the age of 65 on July 30, 1985).</div>
<div>
<br /></div>
<div>
So time for some facts about her</div>
<div>
<br /></div>
<div>
1) She got her PhD under Tarski; in her thesis she proved the undecidability of the theory of the rationals.</div>
<div>
<br /></div>
<div>
2) She is probably best known for her work on Hilbert's tenth problem (which we call H10)</div>
<div>
<br /></div>
<div>
In today's terminology H10 would be stated as:</div>
<div>
<br />
Find an algorithm that will, given p in Z[x_1,...,x_n], determine if it has an integer solution.</div>
<div>
<br /></div>
<div>
Hilbert posed it to inspire deep research in Number Theory. There are some cases that are</div>
<div>
solvable (the topic of a later blog post) but the general problem is undecidable. This is not what Hilbert was aiming for. I wonder if he would be happy with the resolution.</div>
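To illustrate the difficulty (a sketch of my own, not from the post): searching for solutions is the easy direction, and it gives only a semi-decision procedure. A brute-force search over bounded integer tuples finds a solution when one exists, but a failed search at any bound certifies nothing. The example polynomial below is a hypothetical one chosen for illustration.

```python
# Sketch: bounded brute-force search for integer roots of a polynomial.
# This makes H10 semi-decidable (a solution, if it exists, is eventually
# found), but it can never prove that NO integer solution exists.
from itertools import product

def has_solution_up_to(p, nvars, bound):
    """Search all integer tuples with entries in [-bound, bound]."""
    rng = range(-bound, bound + 1)
    return any(p(*xs) == 0 for xs in product(rng, repeat=nvars))

# Example: x^2 + y^2 - 13 = 0 has the integer solution (2, 3).
p = lambda x, y: x * x + y * y - 13
print(has_solution_up_to(p, 2, 5))  # True
# A False answer for any bound says nothing about larger integers.
```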
<div>
<br /></div>
<div>
The Davis-Putnam-Robinson paper showed that the decision problem for exponential diophantine equations was undecidable. It was published in 1961. The paper is <a href="https://www.jstor.org/stable/1970289?seq=1#metadata_info_tab_contents">here</a>. Martin Davis predicted that the proof that H10 was undecidable would be by a young Russian mathematician. He was proven correct when Yuri Matiyasevich supplied the missing piece needed to complete the proof. </div>
<div>
<br /></div>
<div>
I often read `H10 was resolved by Davis-Putnam-Robinson and Matiyasevich' or sometimes they put all four names in alphabetical order. I like that--- it really was a joint effort.</div>
<div>
<br /></div>
<div>
3) She was inducted (a proof by induction?) into the National Academy of Sciences in 1975.</div>
<div>
<br /></div>
<div>
4) She was elected to be president of the American Math Society in 1982. </div>
<div>
<br /></div>
<div>
5) She got a MacArthur Fellowship in 1985 (often called the MacArthur Genius award).</div>
<div>
At the time it was worth $60,000. It's now $625,000.</div>
<div>
<br /></div>
<div>
6) She also did work in Game Theory. Her paper An Iterative Method of Solving a Game, which is<br />
<a href="https://www.jstor.org/stable/1969530?seq=1#metadata_info_tab_contents">here</a>, is a proof from the book according to Paul Goldberg's comment on this post.</div>
<div>
<br /></div>
<div>
7) The Julia Robinson Math Festival is named in her honor (hmmm- is that a tautology?) Its purpose is to inspire K-12 students to get involved in math. For more on it see <a href="https://en.wikipedia.org/wiki/Julia_Robinson_Mathematics_Festival">here</a>.<br />
<br />
8) (ADDED LATER) Commenter David Williamson pointed out that Julia Robinson did work on the transportation problem. See his comment and his pointer to the paper.<br />
<br />
(ADDED LATER) When I hear <i>Julia Robinson</i> I think <i>Hilbert's 10th problem. </i> I suspect many of you do the same. However, looking at items 6 and 8 above, one realizes that she did research in non-logic branches of math as well.</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
https://blog.computationalcomplexity.org/2019/12/julia-robinsons-100th-birthday.htmlnoreply@blogger.com (GASARCH)9tag:blogger.com,1999:blog-3722233.post-3436178446517561376Mon, 18 Nov 2019 04:53:00 +00002019-11-17T23:53:55.061-05:00Fields used to be closer together than they are now. Good? Bad?There was a retired software Eng professor that I had heard two very non-controversial rumors about:<br />
<br />
1) He got his PhD in Numerical Analysis<br />
<br />
2) He got his PhD in Compiler Optimization.<br />
<br />
So I asked him which was true.<br />
<br />
The answer: Both! In those days you had to optimize your code to get your NA code to run fast enough.<br />
<br />
We cannot imagine that anymore. Or at least I cannot.<br />
<br />
Over time the fields of computer science advance, so it's hard to be a master of more than one field. But it's not that simple: there has been work recently applying Machine Learning to... well,<br />
everything really. Even so, I think the trend is more towards separation. Or perhaps it oscillates.<br />
<br />
I am NOT going to be the grumpy old man (Google once thought I was 70, see <a href="https://blog.computationalcomplexity.org/2018/10/google-added-years-to-my-life.html">here</a>) who says things were better in my day when the fields were closer together. But I will ask the question:<br />
<br />
1) Are people more specialized now? I think yes, since each field has gotten more complicated and harder to master. There are exceptions: Complexity theory uses much more sophisticated mathematics than when I was a grad student (1980-1985), and of course Quantum Computing has led to more comp sci majors knowing physics.<br />
<br />
2) Is it good for the field that people are specialized? I am supposed to say that it is terrible and that great advances are made when people are interdisciplinary. But there are many more small advances that are made by someone who has a mastery of one (or two) fields.<br />
<br />
3) The PhD Process and the Tenure Process encourage specialization. This I think IS bad since there are different modes of research that should all be respected.<br />
<br />
<br />https://blog.computationalcomplexity.org/2019/11/fields-used-to-be-closer-together-than.htmlnoreply@blogger.com (GASARCH)3tag:blogger.com,1999:blog-3722233.post-7340761877463675609Mon, 11 Nov 2019 15:09:00 +00002019-11-11T10:09:38.063-05:00A non-moral dilemma about cheating, but it brings up some pointsI often give two versions of an exam and TELL THE STUDENTS I am doing this so that they don't even try to cheat. I've even had two different classes take the midterm at the same time, same room, every other seat, so the person next to you is in a different course. And I TELL THE STUDENTS that I am doing this. A colleague of mine says I shouldn't TELL THE STUDENTS. Here are our arguments<br />
<br />
1) Don't tell: students cheat a lot and this is a way to catch them.<br />
<br />
2) Tell: Dealing with cheating distracts from our mission of teaching so best to be preventative so it does not happen. Less noble- tell them so that you don't have to deal with the cheating issue.<br />
<br />
I have heard of the following case at a diff school some years ago and want your take on it:<br />
there was one question on the midterm that was different on the two exams- the prof changed the key number, but they were the same question really. The prof was in a hurry for some reason and <i>FORGOT TO TELL THE STUDENTS</i>. You can probably guess what happened next, but not what happened after that<br />
<br />
One student's exam had the solution to THE OTHER PROBLEM on it. Clearly cheating. When called in the student said:<br />
<br />
<i>Since you didn't tell us that they were different exams the cheating claim is unfair!</i><br />
<i><br />
</i> They DID admit their guilt, but they DID NOT have any contrition.<br />
<br />
Options for what penalty to go for:<br />
<br />
1) A 0 on the exam itself<br />
<br />
2) An F in the course<br />
<br />
3) A notation on the transcript indicating Failed-because-cheated. I don't know what that notation was at the school where the story took place, but at UMCP it's XF. (Side Note- it's not clear that someone outside of UMCP who looks at a transcript and sees an XF will know what that means. But the F part makes it look bad.)<br />
<br />
4) Expulsion from school. (This might not be the prof's call- it may depend on whether it's a first offense.)<br />
<br />
The lack of contrition bothers me, though the prof who told me the story said that the student may have said it out of shock- the first thing that came into their mind. I asked the prof how the student was doing in the class and the prof said, CORRECTLY, that that is irrelevant.<br />
<br />
SO- what penalty would you go for?<br />
<br />
The professor went for XF. The student, at the hearing, once again said<br />
<br />
<br />
<i>Since you didn't tell us that they were different exams the cheating claim is unfair!</i><br />
<br />
The professor told me that he thinks the student was trying to claim it was entrapment, though he had a hard time expressing this coherently. If the student had been a coherent thinker, he probably wouldn't have needed to cheat.<br />
<br />
He got the equivalent of an XF. <br />
<br />
But here is my real question: Should we TELL THE STUDENTS that they are different exams (I think yes) or<br />
should we NOT tell them so we can catch them?<br />
<br />
<br />
<br />
<br />
<br />
<br />https://blog.computationalcomplexity.org/2019/11/a-non-moral-dilemma-about-cheating-but.htmlnoreply@blogger.com (GASARCH)11tag:blogger.com,1999:blog-3722233.post-6412925669087119335Mon, 04 Nov 2019 06:04:00 +00002019-11-04T01:04:54.796-05:00Limits of using the web for info- self-reference(I wrote this a while back so when I say `I Googled BLAH' I meant back then. It is prob different now.)<br />
<br />
While the web is a <i>wonderful</i> way to find things out, there are times when it doesn't quite work. <br />
<ol>
<li> An old blog of Scott Aaronson's had as part of its title <a href="http://scottaaronson.com/blog/?p=240">a Woitian Link</a>. Wanting to find out what <i>a Woitian Link</i> is but not wanting to bother Scott (he's busy enough making comments on Shtetl-Optimized) I went to Google and typed in <i>"Woitian Link"</i>. The ONLY hits I got back were to Scott's blog. I finally had to email Scott. He told me that it was referring to the blog <a href="http://www.math.columbia.edu/~woit/wordpress">not even wrong</a> by Peter Woit which often has links that... Well, Scott never told me quite what it was but I'll go there myself and try to figure it out.<br />
</li>
<li> An old blog of mine was <a href="http://weblog.fortnow.com/2007/05/man-who-loved-algorithms.html">the man who loved algorithms</a>. Part of my blog said that I thought the man would be Knuth but it was not. (It was Thomas Kailath.) One of the commenters said that it couldn't be Knuth since he was still alive. This made me want to check the original article to see if Thomas Kailath is also still alive (he is). I didn't have the issue with me at the time so I typed "the man who loved algorithms" into Google. The first page of hits all referred to my posting. Eventually I found one to verify that yes, indeed, he was still alive.<br />
</li>
<li> Donald Knuth VOLUME FOUR was originally published in a series of fascicles. What's a fascicle? Here the web was helpful- Wikipedia said it was a book that comes out in short pieces, the pieces of which are called `fascicles'. They gave only one example: <i>Donald Knuth's Volume 4 will be coming out in Fascicle.</i> Still, they DID tell me what I wanted to know. (Note- this was a while back, they have since removed that comment.) For most things the web is great. But for some more obscure things, you're better off asking someone who knows stuff. </li>
</ol>
<div>
Do you have experiences where you ask the web for a question and you end up in a circle?</div>
https://blog.computationalcomplexity.org/2019/11/limits-of-using-web-for-info-self.htmlnoreply@blogger.com (Unknown)2tag:blogger.com,1999:blog-3722233.post-7436762088132666017Thu, 31 Oct 2019 18:46:00 +00002019-10-31T14:46:40.287-04:00Statistics to ScareSo how do you parse the following paragraph from Monday's NYT <a href="https://www.nytimes.com/2019/10/28/briefing/wildfires-impeachment-facebook.html">Evening Briefing</a>.<br />
<blockquote class="tr_bq">
A study in JAMA Pediatrics this year found that the average Halloween resulted in four additional pedestrian deaths compared with other nights. For 4- to 8-year-olds, the rate was 10 times as high.</blockquote>
The paragraph means the percent increase for pedestrian deaths for 4-8 year olds was ten times the percent increase for people as a whole, a number you cannot determine from the information given. Using the fact that roughly 7% of Americans are in the 4-8 year range, that yields a little under three additional deaths for 4-8 year olds and about one for the other age ranges.<br />
<br />
The <a href="https://dx.doi.org/10.1001/jamapediatrics.2018.4052">paper</a> unfortunately sits behind a firewall. But I found a <a href="http://10.0.3.233/jamapediatrics.2018.4052">press release</a>.<br />
<blockquote class="tr_bq">
Children in the United States celebrate Halloween by going door-to-door collecting candy. New research suggests the popular October 31 holiday is associated with increased pedestrian traffic fatalities, especially among children. Researchers used data from the National Highway Traffic Safety Administration to compare the number of pedestrian fatalities from 1975 to 2016 that happened on October 31 each year between 5 p.m. and 11:59 p.m. with those that happened during the same hours on a day one week earlier (on October 24) and a day one week later (on November 7). During the 42-year study period, 608 pedestrian fatalities happened on the 42 Halloween evenings, whereas 851 pedestrian fatalities happened on the 84 other evenings used for comparison. The relative risk (an expression of probability) of a pedestrian fatality was higher on Halloween than those other nights. Absolute mortality rates averaged 2.07 and 1.45 pedestrian fatalities per hour on Halloween nights and the other evenings, respectively, which is equivalent to the average Halloween resulting in four additional pedestrian deaths each year. The biggest risk was among children ages 4 to 8. Absolute risk of pedestrian fatality per 100 million Americans was small and declined from 4.9 to 2.5 between the first and final decades of the study interval. </blockquote>
Doing the math, we see a 43% increase overall and a more than quintupling of the number of pedestrian deaths for the youngsters. That sounds scary indeed, though it only adds up to a handful of deaths. Moreover the authors didn't take into account the larger number of pedestrians on Halloween, particularly among 4-8 year olds.<br />
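For the curious, here is the arithmetic from the press release worked out (a small stdlib-only sketch of my own; the 7-hours-per-evening figure is my reading of the 5 p.m. to 11:59 p.m. window):

```python
# Reproducing the press-release numbers: 608 deaths over 42 Halloween
# evenings vs 851 deaths over 84 comparison evenings, ~7 hours each.
halloween_deaths, halloween_evenings = 608, 42
control_deaths, control_evenings = 851, 84
hours_per_evening = 7

h_rate = halloween_deaths / (halloween_evenings * hours_per_evening)
c_rate = control_deaths / (control_evenings * hours_per_evening)

print(round(h_rate, 2))                                 # ~2.07 deaths per hour
print(round(c_rate, 2))                                 # ~1.45 deaths per hour
print(round(100 * (h_rate / c_rate - 1)))               # ~43 percent increase
print(round((h_rate - c_rate) * hours_per_evening, 1))  # ~4.3 extra deaths per Halloween
```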
<br />
A smaller fraction of people die as pedestrians on Halloween today than on a random day when I was a kid. I wonder if that's because there are fewer pedestrians today.<br />
<br />
Also from the New York Times, a sociologist has found "no evidence that any child had been seriously injured, let alone killed, by strangers tampering with candy." I feel lied to as a kid.<br />
<br />
So the upshot: Tell your kids to take the usual precautions but mostly let them dress up, have fun trick-or-treating and enjoy their candy.https://blog.computationalcomplexity.org/2019/10/statistics-to-scare.htmlnoreply@blogger.com (Lance Fortnow)0tag:blogger.com,1999:blog-3722233.post-3222567344389850699Mon, 28 Oct 2019 14:05:00 +00002019-10-28T10:05:08.179-04:00Random non-partisan thoughts on the Prez Election<br />
This post is non-partisan, but in the interest of full disclosure I disclose that I will almost surely be voting for the Democratic Nominee. And I say <i>almost surely</i> because very weird things could happen. I can imagine a Republican saying, in 2015, <i>I will almost surely be voting for the Republican Nominee</i> and then later deciding to not vote for Trump. <br />
<br />
<br />
<i>My Past Predictions</i>: Early on in 2007 I predicted it would be Obama vs McCain and that Obama would win. Was I smart or lucky? Early in 2011 I predicted Paul Ryan would be the Rep. Candidate. Early in 2015 and even into 2016 I predicted that Trump would not get the nomination. After he got the nomination I predicted he would not become president. So, in answer to my first question, I was lucky not smart. Having said all of this I predict that the Dem. candidate will be Warren. Note- this is an honest prediction, not one fueled by what I want to see happen. I predict Warren since she seems to be someone who can bridge the so-called establishment and the so-called left (I dislike the terms LEFT and RIGHT since issues and views change over time). Given my past record I would not take me too seriously. Also, since this prediction is not particularly unusual, if I am right this would NOT be impressive (My Obama prediction was impressive, and my Paul Ryan prediction would have been very impressive had I been right.)<br />
<br />
<i>Electability</i>: My spell checker doesn't think it's a word. Actually it shouldn't be a word. It's a stupid concept. Recall<br />
<br />
JFK was unelectable since he was Catholic. <br />
<br />
Ronald Reagan was unelectable because he was too conservative.<br />
<br />
A draft dodging adulterer named Bill Clinton could not possibly beat a sitting president who just won a popular war.<br />
<br />
Nobody named Barack Hussein Obama, who is half-black, could possibly get the nomination, never mind the presidency. And Hillary had the nomination locked up in 2008--- she had no serious challengers. <br />
<br />
(An article in <i>The New Republic</i> in 2007 predicted a brokered convention for the Republicans where Fred Thompson, Mitt Romney, and Rudy Guilliani would split the vote, and at the same time a cake walk for Hillary Clinton with<br />
Barack Obama winning Illinois in the primaries but not much else. Recall that 2008 was McCain vs Obama.)<br />
<br />
Donald Trump will surely be stopped from getting the nomination because, in the end, <a href="https://www.amazon.com/Party-Decides-Presidential-Nominations-American/dp/0226112373/ref=cm_cr_arp_d_product_top?ie=UTF8">The Party Decides</a>.<br />
<br />
Republican voters in 2016 will prefer Rubio to Trump since Marco is more electable AND more conservative. Hence, in the space of Rep. Candidates, Rubio dominates Trump. So, by simple game theory, Trump can't get the nomination. The more electable Rubio, in the 2016 primaries, won Minnesota, Wash DC, and Puerto Rico (Puerto Rico has a primary. Really!) One of my friends thought he also won Guam (Guam?) but I could not find evidence of that on the web. Okay, so why did Trump win? <i>Because voters are not game theorists.</i><br />
<br />
ANYWAY, my point is that how can anyone take the notion of electability seriously when unelectable people have gotten elected?<br />
<br />
<i>Primaries</i>: Dem primary voters are torn between who they want to be president and who can beat Trump. Since it's so hard to tell who can beat who, I would recommend voting for who you like and not saying stupid things like<br />
<br />
America would never elect a 76-year-old socialist who's recently had a heart attack.<br />
<br />
or<br />
<br />
Trump beat a woman in 2016 so we can't nominate a woman<br />
<br />
or<br />
<br />
America is not ready to elect a gay president yet. (America is never ready to do X until after it does X, and then the pundits ret-con their opinions. For example, of course America is ready for Gay-Marriage. Duh.)<br />
<br />
<i>Who won the debate?<br />
</i> Whoever didn't bother watching it :-). I think the question is stupid and has become who got out a clever sound bite. We need sound policy, not sound bites!https://blog.computationalcomplexity.org/2019/09/random-non-partisan-thoughts-on-prez.htmlnoreply@blogger.com (GASARCH)2tag:blogger.com,1999:blog-3722233.post-6670392427379771547Mon, 21 Oct 2019 15:47:00 +00002019-10-21T11:47:23.166-04:00Differentiation and IntegrationRecently there was an excellent xkcd about differentiation and integration, see <a href="https://xkcd.com/2117/">here</a>.<br />
<br />
This brings up thoughts on diff and int:<br />
<br />
1) For some students Integration is when math gets hard.<br />
<br />
Diff (at least on the level of Calc I) is rote memorization. A computer program can EASILY do diff.<br />
<br />
Integration by humans requires more guesswork and thought. Computers can now do it very well, but I think it was harder to get working.<br />
<br />
Someone who has worked on programs for both, please comment.<br />
<br />
2) When I took Honors Calculus back in 1976 (from Jeff Cheeger at SUNY Stony Brook) he made a comment which really puzzled the class, and myself, but later I understood it:<br />
<br />
<i>Integration is easier than Differentiation</i><br />
<br />
The class thought this was very odd since the problem of, GIVEN a function, finding its diff was easier than, GIVEN a function, finding its int. And of course I am talking about the kinds of functions one is given in Calc I and Calc II, so this is not meant to be a formal statement.<br />
<br />
What he meant was that integration has better mathematical properties than differentiation. For example, differentiating the function f(x)=abs(x) (absolute value of x) is problematic at 0, whereas it has no problem with integration anywhere (alas, if only our society were as relaxed about integration as f(x)=abs(x) is).<br />
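A small numerical sketch of this point (my own example, using one-sided difference quotients and the trapezoid rule): at 0 the two one-sided slopes of abs disagree, so the derivative does not exist there, while the integral over [-1,1] computes without any fuss.

```python
# f(x) = abs(x): the derivative fails to exist at 0, integration is untroubled.

def right_diff(f, x, h=1e-6):
    # One-sided difference quotient from the right.
    return (f(x + h) - f(x)) / h

def left_diff(f, x, h=1e-6):
    # One-sided difference quotient from the left.
    return (f(x) - f(x - h)) / h

f = abs
# The two one-sided slopes at 0 disagree, so f'(0) does not exist.
print(right_diff(f, 0.0))  # ~ +1
print(left_diff(f, 0.0))   # ~ -1

# Integration has no such trouble: trapezoid rule on [-1, 1].
def trapezoid(f, a, b, n=1000):
    h = (b - a) / n
    s = (f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n))
    return s * h

print(trapezoid(f, -1.0, 1.0))  # ~ 1.0, the exact value of the integral
```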
<br />
So I would say that the class and Dr. Cheeger were both right (someone else might say they were both wrong) we were just looking at different notions of easy and hard.<br />
<br />
Are there other cases in math where `easy' and `hard' can mean very different things?<br />
<br />
<br />
<br />
https://blog.computationalcomplexity.org/2019/10/differentiation-and-integration.htmlnoreply@blogger.com (GASARCH)4tag:blogger.com,1999:blog-3722233.post-4529441142077494484Thu, 17 Oct 2019 20:50:00 +00002019-10-17T17:58:08.623-04:002019 Fall Jobs PostStarting PhD students over time would always assume that the computer science academic job market would be as strong or as weak when they graduate as it was when they started, and they would always be wrong. That may have changed. We've had such a stretch of strong growth in computer science, starting as we pulled out of the financial crisis in 2012, that students who started in the strong market back then see only a much stronger market today.<br />
<br />
Every fall I recap advice for students, and others, looking for academic jobs. The best sources are the ads from the <a href="https://cra.org/ads/">CRA</a> and the <a href="https://jobs.acm.org/">ACM</a>. For theoretical computer science specific postdoc and faculty positions check out <a href="https://cstheory-jobs.org/">TCS Jobs</a> and <a href="http://dmatheorynet.blogspot.com/">Theory Announcements</a>. If you have jobs to announce, please post to the above and/or feel free to leave a comment on this post. Even if you don't see an ad, almost surely your favorite university is looking to hire computer scientists. Check out their website or email someone at the department. The CRA just published a <a href="https://cra.org/2019-member-book-full-size">member book</a>, a collection of one pagers for several departments, almost all of which are trying to grow.<br />
<br />
Needless to say we're trying to greatly expand computing at Illinois Tech, come <a href="https://science.iit.edu/computer-science/people/openings#tenure-track">join us</a>.<br />
<br />
Something new this year, CATCS is <a href="https://thmatters.wordpress.com/2019/10/02/a-solicitation-for-tcs-job-market-profiles/">collecting and disseminating</a> profiles of junior theory researchers on the job market this year. Definitely take advantage whether to sign up as a job seeker or to reach out to theorists on the market once the profiles are posted. The CRA also maintains a <a href="https://cra.org/cv-database/">CV database</a> for candidates for academic, industrial and government research positions.<br />
<br />
While this is a job-seekers market, you still need to put your best foot forward. Reach out to professors at conferences, such as the upcoming FOCS. Polish your CV and get your Google Scholar page in good shape. Practice your job talk, a bad one can kill your visit. Research the people you will see during the interview ahead of time, I like to write down one interesting discussion topic for each. You'll need to sell yourself to non-theorists. Data, cybersecurity and quantum are hot this year, highlight your work in those areas without making it look fake.<br />
<br />
In any case have fun! You'll meet lots of interesting people in your job search and eat way too much.https://blog.computationalcomplexity.org/2019/10/2019-fall-jobs-post.htmlnoreply@blogger.com (Lance Fortnow)7tag:blogger.com,1999:blog-3722233.post-4589067189308524430Mon, 14 Oct 2019 02:11:00 +00002019-10-13T22:11:15.690-04:00The Sheldon Conjecture (too late for Problems with a Point)<br />
Chapter 5 of <i>Problems with a Point</i> (by Gasarch and Kruskal) is about how mathematical objects get their names. If it were an e-book that I could edit and add to (is this a good idea or not? more on that later) then I would have added the following.<br />
<br />
<b>The Sheldon Conjecture</b><br />
<br />
Background: Nobel Laureate Sheldon Cooper has said that 73 is the best number because<br />
<br />
a) 73 is prime.<br />
<br />
b) 73 is the 21st prime and note that 7*3=21.<br />
<br />
c) The mirror of 73, namely 37, is prime.<br />
<br />
d) 37 is the 12th prime, and 12 is the mirror of 21.<br />
<br />
Sheldon never quite said it's the <i>only</i> such number; that was conjectured by Jessie Byrnes, Chris Spicer, and Alyssa Turnquist <a href="http://digital.ipcprintservices.com/article/The_SHELDON_CONJECTURE/2298528/276852/article.html">here</a>. They called it <i>Sheldon's Conjecture</i> probably since Sheldon Cooper <i>should have</i> conjectured it.<br />
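For fun, here is a small stdlib-only check (my own code, not from the Byrnes-Spicer-Turnquist paper) that 73 satisfies all four of the conditions above:

```python
# Verify the "Sheldon property" of 73: primality, the product property
# of its prime index (73 is the 21st prime and 7*3=21), and the mirror
# properties (37 is prime and is the 12th prime, 12 mirroring 21).

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def prime_index(p):
    # 1-based position of p among the primes (2 is the 1st prime).
    return sum(1 for n in range(2, p + 1) if is_prime(n))

def mirror(n):
    return int(str(n)[::-1])

n = 73
assert is_prime(n)                                        # (a) 73 is prime
assert prime_index(n) == 21 and 7 * 3 == 21               # (b) product property
assert is_prime(mirror(n))                                # (c) 37 is prime
assert prime_index(mirror(n)) == mirror(prime_index(n))   # (d) 12 mirrors 21
print("73 passes all four conditions")
```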
<br />
Why didn't Sheldon make Sheldon's conjecture? This kind of question has been asked before:<br />
<br />
<br />
<br />
<a href="https://arxiv.org/abs/1701.04718">Could Euler have conjectured the prime number theorem</a><br />
<br />
<a href="https://arxiv.org/abs/1611.06303">Why didn't Hilbert (or others) pursue Ramsey Theory?</a> <br />
<br />
(readers are encouraged to give other examples)<br />
<br />
I doubt we'll be asking this about Sheldon Cooper since he is a fictional character.<br />
<br />
I am delighted that<br />
<br />
a) There is a Sheldon's Conjecture.<br />
<br />
b) It has been solved by Pomerance and Spicer, see <a href="https://math.dartmouth.edu/~carlp/sheldon02132019.pdf">here</a><br />
<br />
Actually (b) might not be so delightful--- once a conjecture is proven it stops being called by the name of the conjecturer. If you don't believe me just ask Vazsonyi or Baudet. If you don't know who they are then (1) see <a href="https://blog.computationalcomplexity.org/2009/08/how-much-credit-should-conjecturer-get_14.html">here</a> and (2) that proves my point. So perhaps I wish it had not been solved so <i>The Sheldon Conjecture</i> would live on as a name.<br />
<br />
Another issue this brings up: Let's say that <i>Problems with a Point</i> was an online book that I was able to edit easily. Then I might add material on <i>The Sheldon Conjecture</i>. And while I am at it, I would add <i>The Monty Hall Paradox</i> to the chapter on how theorems get their names. Plus, I would fix some typos and references. Perhaps update some references. Now let's say that all books were online and the authors could modify them. Would this be good or bad?<br />
<br />
1) Good- The book would get better and better as errors got removed.<br />
<br />
2) Good- The book would get to include material that is appropriate but came out after it was published.<br />
<br />
3) Good- The book would get to include material that is appropriate but the authors forgot to include the first time around.<br />
<br />
4) Bad- For referencing the book or for book reviews of the book, you are looking at different objects. The current system has First Edition, Second Edition, etc, so you can point to which one you are looking at. The easily-edited books would have more of a continuous update process so harder to point to things. <br />
<br />
5) Bad- When Clyde and I emailed the final version to the publisher we were almost done. When we got the galleys and commented on them we were DONE DONE! For typos and errors maybe I want to fix them online, but entire new sections--- when we're done we are DONE.<br />
<br />
6) Bad- at what point is it (i) a corrected version of the old book, (ii) a new edition of the old book, (iii) an entirely new book? Life is complicated enough.<br />
<br />
I would prob like a system where you can fix errors but can't add new material. Not sure if that's really a clean distinction.<br />
https://blog.computationalcomplexity.org/2019/10/the-sheldon-conjecture-too-late-for.htmlnoreply@blogger.com (GASARCH)3tag:blogger.com,1999:blog-3722233.post-7770596683111395863Thu, 10 Oct 2019 19:47:00 +00002019-10-10T16:22:33.167-04:00William Kruskal's 100th birthdayToday, Oct 10, 2019 is William Kruskal's 100th birthday (he's dead, so no cake. Oh well.) William Kruskal was a great statistician. To honor him we have a guest post by his nephew Clyde Kruskal. We also note that the Kruskal Family is one of the top two math families of all time (see <a href="https://blog.computationalcomplexity.org/2009/02/baseball-families-and-math-families.html">here</a>). William is a reason why the other two Kruskal brothers went into mathematics: As a much older sibling (6 years older than Martin and 8 years older than Joseph), he encouraged their early mathematical development. <br />
<br />
Here are some pictures of William Kruskal and of the Kruskal Family: <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/billk1.pdf">here</a><br />
<br />
<b>Guest Post by Clyde Kruskal<br />
</b><br />
<br />
I was asked to blog about my uncle, the statistician, William H. Kruskal, on the centennial of his birth. We called him Uncle Bill. He is best known for co-inventing the Kruskal-Wallis test.<br />
<br />
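For readers who have not met it: the Kruskal-Wallis test is a rank-based, nonparametric analogue of one-way ANOVA. Here is a minimal sketch of its H statistic (my own illustration, assuming no tied values and omitting both the tie correction and the chi-squared p-value step):

```python
def kruskal_wallis_H(*groups):
    """H statistic of the Kruskal-Wallis test (no tie correction).

    Pool all observations, rank them, and compare each group's rank
    sum to what sampling from one common distribution would predict.
    """
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # ranks 1..N; assumes distinct values
    N = len(pooled)
    stat = sum(sum(rank[v] for v in g) ** 2 / len(g) for g in groups)
    return 12.0 / (N * (N + 1)) * stat - 3 * (N + 1)
```

For two groups this reduces to the Wilcoxon/Mann-Whitney rank-sum setting; scipy.stats.kruskal computes the same statistic with tie handling and a p-value.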
There are two stories that I know about Bill's childhood, which must have been family lore:<br />
<br />
(1) As a young child, Bill was a prolific reader. His reading comprehension outstripped his conversational English. One morning, having just read the word ``schedule'' in a book, and obviously having never heard it pronounced, he sat down to breakfast and asked:<br />
"What is the ske·D<span STYLE="text-decoration:overline">U</SPAN>·l<span STYLE="text-decoration:overline">e</SPAN> for today?"<br />
<br />
(2) My grandparents once had Bill take an occupational assessment test. The tester said that Bill was a very bright child, and should become a traffic engineer to solve the problems with traffic congestion. (This would have been the 1920s!) As you probably know, Uncle Bill did not succeed in ending traffic congestion. Oh well.<br />
<br />
<br />
Recently there has been a controversy over whether to ask about citizenship in the 2020 census. In the late 20th century there was a different controversy: whether to adjust the known undercount statistically. In general, Democrats wanted to adjust the count and Republicans did not (presumably because Democratic states tended to have a larger undercount). A national committee was set up to study the issue, with four statisticians in favor of adjustment and four against. I was surprised to learn that Uncle Bill was on the committee as one of those against adjustment, since I thought his political views were more closely aligned with those of the Democrats. He was very principled, basing his views only on statistical arguments. I saw him give a talk on the subject, which seemed quite convincing (but, then again, I did not see the other side). They ended up not adjusting.<br />
<br />
<br />
For more on William Kruskal, in general, and his view on adjusting the census, in particular, see the pointers at the end of this post.<br />
<br />
<br />
I have more to say. I just hope that I am on the ske·D<span STYLE="text-decoration:overline">U</SPAN>·l<span STYLE="text-decoration:overline">e</SPAN> to blog about Uncle Bill at the bicentennial of his birth.<br />
<br />
<br />
<a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/wkruskalleg.pdf">The William Kruskal Legacy: 1919-2005 by Fienberg, Stigler, and Tanur</a><br />
<br />
<a href="http://www-history.mcs.st-and.ac.uk/Biographies/Kruskal_William.html">A short biography of William Kruskal by J.J. O'Connor and E.F. Robertson </a><br />
<br />
<a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/wkruskalfriend.pdf">William Kruskal: Mentor and Friend by Judith Tanur</a><br />
<br />
<a href="https://arxiv.org/pdf/0710.5077.pdf">William Kruskal: My Scholarly and Scientific Model by Stephen Fienberg</a><br />
<br />
<a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/wkruskalconv.pdf">A conversation with William Kruskal by Sandy Zabell</a><br />
<br />
<a href="http://www.loc.gov/law/find/hearings/pdf/00183650932.pdf">Testimony for house subcommittee on census and population for 1990 (see page 140)</a><br />
<br />
<br />
<br />
<br />
<br />
<br />
https://blog.computationalcomplexity.org/2019/10/william-kruskals-100th-birthday.htmlnoreply@blogger.com (GASARCH)0tag:blogger.com,1999:blog-3722233.post-1096889632414067693Mon, 07 Oct 2019 13:56:00 +00002019-10-07T09:56:23.862-04:00What comes first theory or practice? It's Complicated!Having majored in pure math, I had the impression that usually the theory comes first and then someone works out how to make it work in practice. While this is sometimes true, it is often NOT true, and this will not surprise any of my blog readers. Even so, I want to tell you about some times it surprised me. This says more about my ignorance than about math or applications or whatnot.<br />
<br />
1) Quantum<br />
<br />
a) Factoring was proven to be in BQP way before actual quantum computers could do this in reasonable time (we're still waiting).<br />
<br />
b) Quantum Crypto- This really is out there. I do not know what came first, the theory or the practice. Or if they were in tandem.<br />
<br />
c) (this one is the inspiration for the post) When I first heard the term <i>Quantum Supremacy</i> I thought it meant the desire for a THEOREM that problem A is in BQP but is provably not in P. For example, if someone proved factoring is not in P (unlikely this will be proven, and hey- maybe factoring is in P). Perhaps some contrived problem like those constructed by diagonalization (my spell checker thinks that's not a word. Having worked in computability theory, I think it is. Darn- my spellchecker thinks computability is not a word.) Hence when I heard that Google had a paper proving Quantum Supremacy (I do not recall if I actually heard the word <i>proven</i>) I assumed that there was some <i>theoretical </i>breakthrough. I was surprised and not in the slightest disappointed to find out it involved actual quantum computers.<br />
<br />
<b>Question</b>: When the term <i>Quantum Supremacy</i> was first coined, did they mean theoretical, or IRL, or both?<br />
<br />
2) Ramsey Theory<br />
<br />
a) For Ramsey's theorem, van der Waerden's theorem, Rado's theorem, and others I could name, first a theorem showed an upper bound on a number, and later computers (and perhaps some more math) got better bounds on that number.<br />
<br />
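As a toy illustration of how computer search pins down such numbers exactly, here is a brute-force verification (my own sketch, not from any paper) that the van der Waerden number W(2,3) equals 9: every 2-coloring of {1,...,9} contains a monochromatic 3-term arithmetic progression, while some 2-coloring of {1,...,8} avoids one.

```python
from itertools import product

def has_mono_ap(coloring, k):
    """Does this 0/1 coloring of {1..n} (a tuple indexed from 0)
    contain a monochromatic k-term arithmetic progression?"""
    n = len(coloring)
    for start in range(1, n + 1):
        for d in range(1, n):
            if start + (k - 1) * d > n:
                break  # the AP would leave {1..n}; larger d only worse
            if len({coloring[start + j * d - 1] for j in range(k)}) == 1:
                return True
    return False

def vdw_holds(n, k=3):
    """True iff every 2-coloring of {1..n} has a monochromatic k-AP."""
    return all(has_mono_ap(c, k) for c in product((0, 1), repeat=n))
```

This exhaustive approach dies quickly as n grows (2^n colorings); the serious computations behind modern bounds use SAT solvers instead.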
b) Consider the following statement:<br />
<br />
For all c there exists P such that for all c-colorings of {1,...,P} there exists x,y,z the same color such that x<sup>2</sup> +y<sup>2</sup> = z<sup>2</sup>.<br />
<br />
Ronald Graham conjectured the c=2 case and offered $100 in the 1980's. (I do not know if he had any comment on the general case.) I assumed that it would be proven with ginormous bounds on the P(c) function, and then perhaps some reasonable bound would be found by clever programming and some math. (see <a href="https://en.wikipedia.org/wiki/Boolean_Pythagorean_triples_problem">here</a> for the Wikipedia Entry about the problem, which also has pointers to other material).<br />
<br />
Instead the c=2 case was proven with an exact bound, P(2)=7825, by a computer program, in 2016. The proof is 200 terabytes. So my prediction was incorrect.<br />
<br />
As for the result<br />
<br />
PRO: We know the result is true for c=2 and we even know the exact bound. Wow! For Ramsey Theory it's unusual to have exact bounds!<br />
<br />
CON: It would be good to have a human-readable proof. This is NOT an anti-technology statement. For one thing, a human-readable proof might help us get the result for c=3 and beyond.<br />
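The 200-terabyte proof is far beyond a blog post, but a small backtracking search (my sketch; the actual 2016 proof was a massive SAT computation, nothing like this) makes the object concrete: it finds a triple-free 2-coloring of {1,...,N} for small N, and by the theorem it must fail for every N >= 7825.

```python
def pythagorean_triples(n):
    """All (x, y, z) with x <= y < z <= n and x^2 + y^2 = z^2."""
    roots = {k * k: k for k in range(1, n + 1)}
    return [(x, y, roots[x * x + y * y])
            for x in range(1, n + 1)
            for y in range(x, n + 1)
            if x * x + y * y in roots]

def two_color(n):
    """Backtracking search for a 2-coloring of {1..n} with no
    monochromatic Pythagorean triple; returns a dict or None."""
    by_max = {}
    for x, y, z in pythagorean_triples(n):
        by_max.setdefault(z, []).append((x, y))  # check triples when z is colored
    color = {}

    def solve(k):
        if k > n:
            return True
        for c in (0, 1):
            color[k] = c
            # a triple (x, y, k) is violated only if x, y already share color c
            if all(not (color[x] == color[y] == c) for x, y in by_max.get(k, ())):
                if solve(k + 1):
                    return True
        del color[k]
        return False

    return dict(color) if solve(1) else None
```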
<br />
3) This item is a cheat in that I knew the empirical results first. However, I will tell you what I am sure I would have thought (and been wrong) had I not known them.<br />
<br />
Given k, does the equation<br />
<br />
<br />
x<sup>3</sup> +y<sup>3</sup> +z<sup>3</sup> = k<br />
<br />
have a solution in Z? I would have thought that some hard number theory would determine<br />
for which k it has a solution (with a proof that does not give the actual solutions) and then computer programs would try to find the solutions. Instead, (1) some values of k are ruled out by simple mod considerations, and (2) as for the rest, computers have found solutions for some of them. Lipton-Regan (<a href="https://rjlipton.wordpress.com/2019/09/30/writing-33-as-a-sum-of-cubes/">here</a>) and Gasarch (<a href="https://blog.computationalcomplexity.org/2019/04/x-3-y-3-z-3-33-has-solution-in-z-and.html">here</a>) have blogged about the k=33 case. Lipton-Regan also comment on the more recent k=42 case.<br />
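A tiny sketch (mine, for illustration) of both halves of that: the mod-9 obstruction, and the kind of brute-force search that finds the "easy" representations.

```python
def mod9_obstructed(k):
    """Cubes are 0, 1, or 8 mod 9, so x^3 + y^3 + z^3 mod 9 can never
    be 4 or 5; such k provably have no solution."""
    return k % 9 in (4, 5)

def small_solution(k, bound=100):
    """Brute-force search for integers x <= y (and some z), all in
    [-bound, bound], with x^3 + y^3 + z^3 = k; finds only easy cases."""
    cubes = {x ** 3: x for x in range(-bound, bound + 1)}  # cube -> root
    for x in range(-bound, bound + 1):
        for y in range(x, bound + 1):
            z = cubes.get(k - x ** 3 - y ** 3)
            if z is not None:
                return (x, y, z)
    return None
```

For k=29 this instantly finds something like (1, 1, 3), while for k=33 it returns None at any feasible bound: the known solution for 33 has 16-digit entries, which is why it took serious computation.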
<br />
<br />
https://blog.computationalcomplexity.org/2019/10/what-comes-first-theory-or-practice-its.htmlnoreply@blogger.com (GASARCH)4tag:blogger.com,1999:blog-3722233.post-3615744836127152440Thu, 03 Oct 2019 19:41:00 +00002019-10-03T16:08:28.797-04:00Quantum Supremacy: A Guest Post by Abhinav DeshpandeI am delighted to introduce you to Abhinav Deshpande, who is a graduate student at the University of Maryland, studying Quantum Computing. This will be a guest post on the rumors of the recent Google breakthrough on Quantum Supremacy. For other blog posts on this exciting rumor, see <a href="https://www.scottaaronson.com/blog/?p=4317">Scott Aaronson's post</a>, <a href="https://www.scottaaronson.com/blog/?p=4342">Scott Aaronson's second post on it</a>, <a href="https://www.quantamagazine.org/john-preskill-explains-quantum-supremacy-20191002/">John Preskill's quanta article</a>, <a href="https://blog.computationalcomplexity.org/2019/09/quantum-supremacy.html">Fortnow's post</a>,<br />
and there may be others.<br />
<br />
Guest post by Abhinav:<br />
<br />
I (Abhinav) thank Bill Fefferman for help with this post, and Bill Gasarch for inviting me to do a guest post.<br />
<br />
<br />
<b>The quest towards quantum computational supremacy</b><br />
<br />
September saw some huge news in the area of quantum computing, with rumours that the Google AI Lab has achieved a milestone known as 'quantum computational supremacy', also termed 'quantum supremacy' or 'quantum advantage' by some authors. Today, we examine what this term means, the most promising approach towards achieving this milestone, and the best complexity-theoretic evidence we have so far against classical simulability of quantum mechanics. We will not be commenting on details of the purported paper since there is no official announcement or claim from the authors so far.<br />
<br />
<b>What it means</b><br />
<br />
First off, the field of quantum computational supremacy arose from trying to formally understand the differences in the power of classical and quantum computers. A complexity theorist would view this goal as trying to give evidence to separate the complexity classes BPP and BQP. However, it turns out that one can gain more traction from considering the sampling analogues of these classes, SampBPP and SampBQP. These are classes of distributions that can be efficiently sampled on classical and quantum computers, respectively. Given a quantum circuit U on n qubits, one may define an associated probability distribution over 2^n outcomes as follows: apply U to the fiducial initial state |000...0> and measure the resulting state in the computational basis. This produces a distribution D_U.<br />
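As a toy illustration of D_U (my own numpy sketch with a hypothetical 2-qubit circuit, not anything from the papers discussed): apply the unitary to |00...0> and square the amplitudes.

```python
import numpy as np

def output_distribution(U, n_qubits):
    """D_U: probabilities of the 2^n computational-basis outcomes
    after applying U to the initial state |00...0>."""
    state = np.zeros(2 ** n_qubits, dtype=complex)
    state[0] = 1.0                      # |00...0>
    state = U @ state
    return np.abs(state) ** 2           # Born rule

# Example: a Hadamard on each of 2 qubits gives the uniform distribution.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
D_U = output_distribution(np.kron(H, H), 2)
```

Of course, the whole point is that this statevector approach costs 2^n memory, which is exactly why large random circuits are a candidate for a classical/quantum separation.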
<br />
A suitable way to define the task of simulating the quantum circuit is as follows:<br />
<br />
Input: Description of a quantum circuit U acting on n qubits.<br />
<br />
Output: A sample from the probability distribution D_U obtained by measuring U|000...0> in the computational basis.<br />
<br />
One of the early works in this field was that of <a href="https://arxiv.org/abs/quant-ph/0205133">Terhal and DiVincenzo</a>, which first considered the complexity of sampling from a distribution (weak simulation) as opposed to that of calculating the exact probability of a certain outcome (strong simulation). Weak simulation is arguably the more natural notion of simulating a quantum system, since in general, we cannot feasibly compute the probability of a certain outcome even if we can simulate the quantum circuit. Subsequent works by <a href="https://arxiv.org/abs/1011.3245">Aaronson and Arkhipov</a>, and by <a href="https://arxiv.org/abs/1005.1407">Bremner, Jozsa, and Shepherd</a> established that if there is a classically efficient weak simulator for different classes of quantum circuits, the polynomial hierarchy collapses to the third level.<br />
<br />
<br />
So far, we have only considered the question of exactly sampling from the distribution D_U. However, any realistic experiment is necessarily noisy, and a more natural problem is to sample from a distribution that is not exactly D_U but from any distribution D_O that is ε-close in a suitable distance measure, say the variation distance.<br />
<br />
The aforementioned work by Aaronson and Arkhipov was the first to consider this problem, and they made progress towards showing that a special class of quantum circuits (linear optical circuits) is classically hard to approximately simulate in the sense above. The task of sampling from the output of linear optical circuits is known as boson sampling. At the time, it was the best available way to show that quantum computers may solve some problems that are far beyond the reach of classical computers.<br />
<br />
Even granting that the PH doesn't collapse, one still needs to make an additional conjecture to establish that boson sampling is not classically simulable. The conjecture is that additively approximating the output probabilities of a random linear optical quantum circuit is #P-hard. The reason this may be true is that output probabilities of random linear optical quantum circuits are Permanents of a Gaussian random matrix, and the Permanent is as hard to compute on a random matrix as it is on a worst-case matrix. Therefore, the only missing link is to go from average-case hardness of exact computation to average-case hardness of an additive estimation. In addition, if we make a second conjecture known as the "anti-concentration" conjecture, we can show that this additive estimation is non-trivial: it suffices to give us a good multiplicative estimation with high probability.<br />
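Since permanents carry the hardness in this argument, it may help to see that even the best known exact algorithms are exponential. A sketch (mine, for illustration) of Ryser's inclusion-exclusion formula, which costs O(2^n poly(n)) versus the naive n! expansion:

```python
from itertools import combinations

def permanent(M):
    """Permanent of an n x n matrix via Ryser's formula:
    perm(M) = (-1)^n * sum over nonempty column subsets S of
    (-1)^{|S|} * prod_i (sum of row i restricted to S)."""
    n = len(M)
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in M:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total
```

Unlike the determinant's alternating signs, nothing cancels here, and computing the permanent exactly is #P-hard even for 0/1 matrices.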
<br />
So that's what quantum computational supremacy is about: we have a computational task that is efficiently solvable with quantum computers, but which would collapse the polynomial hierarchy if done by a classical computer (assuming certain other conjectures are true). One may substitute "collapse of the polynomial hierarchy" with stronger conjectures and incur a corresponding tradeoff in the likelihood of the conjecture being true.<br />
<br />
<b>Random circuit sampling</b><br />
<br />
In 2016,<a href="https://arxiv.org/abs/1608.00263"> Boixo et al</a>. proposed to replace the class of quantum circuits for which some hardness results were known (commuting circuits and boson sampling) by random circuits of sufficient depth on a 2D grid of qubits having nearest-neighbour interactions. Concretely, the proposed experiment would be to apply random unitaries from a specified set on n qubits arranged on a 2D grid for sufficient depth, and then sample from the resulting distribution. The two-qubit unitaries in the set are restricted to act between nearest neighbours, respecting the geometric layout of the grid. This task is called random circuit sampling (RCS).<br />
<br />
At the time, the level of evidence for the hardness of this scheme was not yet the same as for the linear optical scheme. However, given the theoretical and experimental interest in the idea of demonstrating a quantum speedup over classical computers, subsequent works by <a href="https://arxiv.org/abs/1803.04402">Bouland, Fefferman, Nirkhe and Vazirani</a>, and <a href="https://arxiv.org/abs/1809.06957">Harrow and Mehraban</a> bridged this gap (the relevant work by <a href="https://arxiv.org/abs/1612.05903">Aaronson and Chen</a> will be discussed in the following section). Harrow and Mehraban proved anticoncentration for random circuits. In particular, they showed that a 2-dimensional grid of n qubits achieves anticoncentration in depth O(\sqrt{n}), improving upon earlier results with higher depth due to <a href="https://arxiv.org/abs/1208.0692">Brandao, Harrow and Horodecki</a>. Bouland et al. provided the same supporting evidence for RCS as that for boson sampling, namely a worst-to-average-case reduction for exactly computing most output probabilities, even without the permanent structure possessed by linear optical quantum circuits.<br />
<br />
<b>Verification</b><br />
<br />
So far, we have not discussed the elephant in the room: verifying that the device actually samples from (something close to) the ideal output distribution, which is supported on 2^n outcomes. It turns out that there are concrete lower bounds, such as those due to Valiant and Valiant, showing that verifying whether an empirical distribution is close to a target distribution is impossible if one has only a few samples.<br />
<br />
Boixo et al. proposed a way of certifying the fidelity of the purported simulation. Their key observation was to note that if their experimental system is well modelled by a noise model called global depolarising noise, estimating the output fidelity is possible with relatively few samples. Under global depolarising noise with fidelity f, the noisy distribution takes the form D_N = f D_U + (1-f) I, where I is the uniform distribution over the 2^n outcomes. Together with another empirical observation about the statistics of output probabilities of the ideal distribution D_U, they argued that computing the following cross-entropy score would serve as a good estimator of the fidelity:<br />
<br />
f ~ H(I, D_U) - H(D_exp, D_U), where H(D_A, D_B) is the cross-entropy between the two distributions: H(D_A, D_B) = -\sum_i p_A(i) log p_B(i).<br />
<br />
The proposal here was to experimentally collect many samples from D_exp, classically compute by brute force the probabilities of these outcomes under the ideal distribution D_U, and estimate the cross-entropy from this information. If the test outputs a high score for a computation with sufficiently many qubits and sufficient depth, the claim is that quantum supremacy has been achieved.<br />
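A sketch of that estimator in code (entirely my own construction, following the formula above; it assumes the ideal probabilities have the exponential "Porter-Thomas" shape, which is what makes the estimator's scale work out to f):

```python
import numpy as np

def fidelity_estimate(samples, D_U):
    """Estimate f from f ~ H(I, D_U) - H(D_exp, D_U), assuming noise
    of the global depolarising form D_N = f*D_U + (1-f)*I."""
    H_unif = -np.mean(np.log(D_U))           # H(I, D_U), computed exactly
    H_exp = -np.mean(np.log(D_U[samples]))   # empirical H(D_exp, D_U)
    return H_unif - H_exp

# Synthetic check: Porter-Thomas-like ideal distribution, true f = 0.6.
rng = np.random.default_rng(0)
N = 1 << 14
p = rng.exponential(size=N)
p /= p.sum()                                 # a random "ideal" D_U
f_true, n_samp = 0.6, 50_000
ideal = rng.choice(N, size=n_samp, p=p)      # samples from D_U
unif = rng.integers(0, N, size=n_samp)       # samples from I
samples = np.where(rng.random(n_samp) < f_true, ideal, unif)
f_hat = fidelity_estimate(samples, p)        # should land near 0.6
```

In a real experiment the log-probabilities log D_U(sample) are what must be computed classically by brute force, which is exactly what limits how large these verified experiments can be.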
<br />
Aaronson and Chen gave an alternative form of evidence for the hardness of scoring well on a test, similar to the one above, that aims to certify quantum supremacy. This sidesteps the issue of whether such a test does indeed certify the fidelity. The specific problem considered was "Heavy Output Generation" (HOG): the problem of outputting strings that have higher-than-median probability in the output distribution. Aaronson and Chen linked the hardness of HOG to a closely related problem called "QUATH", and conjectured that QUATH is hard for classical computers.<br />
<br />
<b>Open questions</b><br />
<br />
Assuming the Google team has performed the impressive feat of both running the experiment outlined before and classically computing the probabilities of the relevant outcomes to see a high score on their cross-entropy test, I discuss the remaining positions a skeptic might take regarding the claim about quantum supremacy.<br />
<br />
"The current evidence of classical hardness of random circuit sampling is not sufficient to conclude that the task is hard". Assuming that the skeptic believes that the polynomial hierarchy does not collapse, a remaining possibility is that there is no worst-to-average-case reduction for the problem of *approximating* most output probabilities, which kills the proof technique of Aaronson and Arkhipov to show hardness of approximate sampling.<br />
<br />
"The cross-entropy proposal does not certify the fidelity." Boixo et al. gave numerical evidence and other arguments for this statement, based on the observation that the noise is of the global depolarising form. A skeptic may argue that the assumption of global depolarising noise is a strong one.<br />
<br />
"The QUATH problem is not classically hard." In order to give evidence for the hardness of QUATH, Aaronson and Chen examined the best existing algorithms for this problem and also gave a new algorithm; none of these solves QUATH with the required parameters.<br />
<br />
It would be great if the community could work towards strengthening the evidence we already have for this task to be hard, either phrased as a sampling experiment or together with the verification test.<br />
<br />
Finally, I think this is an exciting time for quantum computing, and to witness this landmark event. It may not be the first experiment that is "hard" to classically simulate, since there are many quantum experiments beyond the reach of current classical simulations, but the inherent programmability and control of this experimental system are what enable the tools of complexity theory to be applied to the problem. A thought that fascinates me is that we may be exploring quantum mechanics in a regime never probed this carefully before: the "high complexity regime" of quantum mechanics. One imagines there are important lessons in physics here.
<br />https://blog.computationalcomplexity.org/2019/10/quantum-supremacy-guest-post-by-abhinav.htmlnoreply@blogger.com (GASARCH)0tag:blogger.com,1999:blog-3722233.post-2892542038915710982Tue, 01 Oct 2019 00:03:00 +00002019-10-09T12:57:05.731-04:00Richard Guy is 103 years old todayRichard Guy is a mathematician. He co-authored the classic book <i>Winning Ways for your Mathematical Plays </i> with Elwyn Berlekamp and John Conway.<br />
<br />
On Sept 30 (today) he turned 103. According to <a href="https://en.wikipedia.org/wiki/List_of_centenarians_(scientists_and_mathematicians)">this list</a> he is the oldest living mathematician, and he would need to live to 110 to be the oldest mathematician ever. <br />
<br />
I have met him twice. He was at the Gathering for Gardner Conference as a young 98-year-old. I told him that his book Winning Ways had a great influence on me. He asked if it was positive or negative. I later saw him at a math conference where he went to my talk on The Muffin Problem. So he is still active.<br />
<br />
His Wikipedia page says that he regards himself as an Amateur Mathematician. While it is awkward to disagree with how someone sees himself, I'll point out that he is an author or co-author of 11 books, has many papers, and has solved Erdos problems. He has taught some, but I couldn't really find out what his source of income is or was. This takes us back to the word `amateur', which has several meanings:<br />
<br />
Amateur: Someone who does X for the love of X (amor is Latin for love), and not for money. This could be true of Richard Guy. This notion of amateur may be lost on my younger readers, since it used to be a thing to NOT take money, the idea being that money somehow soils what you do. In those days Olympic athletes could not have played professionally beforehand. We can't imagine that now.<br />
<br />
Amateur: Someone who dabbles in something but is not really that good. This could NOT be true of Richard Guy.<br />
<br />
<br />
<br />
<br />
Aside from games he has also worked in Number Theory. His book <i>Unsolved Problems in Number Theory </i> has inspired many (including me). <br />
<br />
So happy birthday Richard Guy!<br />
<br />
He is also the oldest living person we have honored on this blog. The second oldest was Katherine Johnson (see <a href="https://blog.computationalcomplexity.org/2018/08/katherine-johnson-1918.html">here</a>), who is still alive.<br />
<br />
ADDED LATER- some people emailed me asking whether Richard Guy was still actively doing mathematics. Here is a recent paper of his: <a href="https://arxiv.org/abs/1910.03379">here</a>https://blog.computationalcomplexity.org/2019/09/richard-guy-is-102-years-old-today.htmlnoreply@blogger.com (GASARCH)2tag:blogger.com,1999:blog-3722233.post-5706639639416187826Thu, 26 Sep 2019 17:09:00 +00002019-09-26T13:09:18.530-04:00Quantum SupremacyBy now you've probably heard the rumors of Google achieving quantum supremacy. I don't have inside information outside of <a href="https://www.scottaaronson.com/blog/?p=4317">Scott's blog post</a> but it looks like the news should be <a href="https://blog.computationalcomplexity.org/2006/10/embargoed-science.html">embargoed</a> until the release of a Science or Nature paper. These things usually happen on a Tuesday and you'd think they would avoid the news of the Nobel Prize announcements October 7-14.<br />
<br />
Since for now the Google paper doesn't officially exist, we live in an era of Classical Dominance. Any problem that can be solved on a quantum computer today, can be solved just as fast or faster on a traditional computer. Quantum Supremacy, despite its lofty name, is just the negation of Classical Dominance, that there is some problem that a current quantum machine can solve that all our regular machines would require a considerably longer time to solve. This isn't a formal mathematical or scientific definition, so one can debate when or if we cross this threshold and I'm sure <a href="https://gilkalai.wordpress.com/2019/09/23/quantum-computers-amazing-progress-google-ibm-and-extraordinary-but-probably-false-supremacy-claims-google/">people will</a>.<br />
<br />
Quantum Supremacy might not even be a monotone concept. Future classical algorithms might solve the problem quickly, leading us back to Classical Dominance but leaving open the possibility of returning to Quantum Supremacy with another problem.<br />
<br />
Quantum Supremacy is a long way from Quantum Usefulness, where quantum machines can solve problems we care about faster than traditional machines. Quantum computing will truly reach its potential when we can run general quantum algorithms like Shor's algorithm to factor products of large primes. We'll probably never see Quantum Dominance where classical transistors go the way of vacuum tubes.<br />
<br />
Nevertheless, quantum supremacy is an important step and whether or not you think Google has gotten there, I'm sure it's an incredible achievement of science and engineering.https://blog.computationalcomplexity.org/2019/09/quantum-supremacy.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-6018241016493737863Mon, 23 Sep 2019 13:20:00 +00002019-09-23T09:20:22.649-04:00Applicants to Grad School are too good. Here is why this might be a problem.Sitting around with three faculty we had the following conversation<br />
<br />
ALICE: When I applied to grad school in 1980 they saw a strong math major (that is, I had good grades in hard math courses) but very little programming or any sort of computer science. That kind of person would NOT be admitted today since there are plenty of strong math majors who ALSO have the Comp Sci chops.<br />
<br />
BOB: When I applied to grad school I was a comp sci major but my grades were not that good- A's in system courses, B's and even some C's in math. But I did a Security project that led to a paper that got into a (minor) systems workshop. Two of my letters bragged a lot about that. (How do I know that? Don't ask.) So I got into grad school in 1989. That kind of person would NOT be admitted today since there are plenty of people who have papers in minor conferences AND whose grades ARE good in stuff other than their area.<br />
<br />
CAROL: In 1975 I was an English major at Harvard. The summer between my junior and senior year I took a programming course over the summer and did very well and liked it. I then took some math. Then I worked in industry as a computer scientist for 5 years. Then I applied to grad school and they liked my unusual background. Plus I did well on the GRE's. Letters from my boss at work helped me; I don't think they would count letters from industry now. They took a chance on me, and it paid off (I got a PhD) but I don't think they would let someone like me in now, since they don't have to take a chance. They can admit people who have done research, have solid backgrounds, and hence are not taking a chance. The irony is that some of those don't finish anyway.<br />
<br />
<br />
1) Are Alice, Bob, and Carol right that they wouldn't be admitted to grad school now? I think they are, with a caveat: they might end up in a lower-tier grad school than the one they did end up in. I am more certain that Alice and Bob would not end up in the top-tier grad schools they did, since they can be compared DIRECTLY to other applicants, whereas Carol might be more orthogonal.<br />
<br />
2) I have a sense (backed up by no data) that we are accepting fewer unusual cases than we used to (not just UMCP but across the country) because too many of the applicants are the standard very-good-on-paper applicants. Even the on-paper is not quite fair- they ARE very good for real.<br />
<br />
3) Assume we are taking fewer unusual cases. Is that bad? I think so, as people with different backgrounds (Carol especially) add to the diversity of trains of thought in a program, and that is surely a good thing. If EVERY student is a strong comp sci major who has done some research, there is a blandness to that.<br />
<br />
4) What to do about this? First off, determine if this is really a problem. If it is, then perhaps those looking at grad school applicants should have some sensitivity to this issue.<br />
<br />
5) For grad school admissions I am speculating. For REU admissions (I have run an REU program for the last 7 years and do all of the admissions myself) I can speak with more experience. The students who apply have gotten better over time and this IS cause for celebration; however, it has made taking unusual cases harder. <br />
<br />
https://blog.computationalcomplexity.org/2019/09/applicants-to-grad-school-are-too-good.htmlnoreply@blogger.com (GASARCH)7tag:blogger.com,1999:blog-3722233.post-6037506695705789893Mon, 16 Sep 2019 13:10:00 +00002019-09-16T09:10:52.374-04:00this paper from 2015 cracks Diffie-Hellman. What to tell the students?I am teaching cryptography this semester for the second time (I taught it in Fall 2019) and will soon tell the students about the paper from 2015:<br />
<a href="https://weakdh.org/imperfect-forward-secrecy.pdf">Imperfect Forward Secrecy: How Diffie-Hellman Fails in Practice</a>. There are 14 authors.<br />
<br />
The upshot is that, as Diffie-Hellman was implemented in 2015, many cases were crackable. In summary (and probably oversimplified):<br />
<br />
DH in a 512-bit group can be cracked by the authors<br />
<br />
DH in a 1024-bit group, they speculate, can be cracked with nation-state resources.<br />
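For students seeing this for the first time, here is textbook Diffie-Hellman in a few lines of Python. The parameters are a toy of my own choosing (a 127-bit prime is hopelessly small, which is exactly the paper's point about fielded 512-bit groups; real deployments should use vetted groups of at least 2048 bits):

```python
import secrets

p = 2**127 - 1   # a Mersenne prime; DEMO ONLY, far too small to be secure
g = 3            # public generator

a = secrets.randbelow(p - 3) + 2     # Alice's secret exponent
b = secrets.randbelow(p - 3) + 2     # Bob's secret exponent

A = pow(g, a, p)                     # Alice sends A in the clear
B = pow(g, b, p)                     # Bob sends B in the clear

shared_alice = pow(B, a, p)          # = g^(a*b) mod p
shared_bob = pow(A, b, p)            # = g^(a*b) mod p, the same secret
```

An eavesdropper sees p, g, A, B and must solve a discrete log to recover the shared secret. The paper's attack does most of the number field sieve work once per group, which is why so many sites sharing a handful of standard 512-bit and 1024-bit groups was so damaging.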
<br />
<br />
<br />
Is this a big deal? If YES then what is being done, and if NOT then why not?<br />
<br />
I have come up with some statements that I DO NOT KNOW to be true; I am ASKING you to shed some light on the BIG DEAL or NO BIG DEAL question. (Note- Idea for a game show: BIG DEAL or NO BIG DEAL, where contestants are asked if a news story is a BIG DEAL or not.)<br />
<br />
So, please comment on the following statements:<br />
<br />
1) Since 2015 the people who use DH have upped their game and are now using bigger parameters. (I doubt this is true)<br />
<br />
2) DH is mostly used on things that hackers are not interested in, so this is not a big deal.<br />
<br />
3) The expertise required to crack DH via this paper is rather difficult, so hackers don't have the skills.<br />
<br />
4) This paper is not a problem for a bad reason: Hackers don't need to use the number field sieve DL algorithm when all they need to do is (1) guess that the PIN number is 1234 or the year the user was born (or close to it), (2) put on a uniform from Geek-Squad or some such organization and claim they are here to help, or (3) exploit a known security flaw that the company has not bothered fixing.<br />
<br />
5) The 14 authors have mysteriously disappeared. (I doubt this is true.)<br />
<br />
<br />
(Misc: My spell checker thinks that Diffie and crackable are not words, but Hellman is.)https://blog.computationalcomplexity.org/2019/09/this-paper-from-2015-cracks-diffie.htmlnoreply@blogger.com (GASARCH)7