Computational Complexity<br />
Computational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch<br />
http://blog.computationalcomplexity.org/ (feed updated Wed, 10 Feb 2016)<br />
<br />
<b>The Moral Hazard of Avoiding Complexity Assumptions</b> (Lance Fortnow, Mon, 08 Feb 2016)<br />
<br />
Moshe Vardi's CACM editor's letter <a href="http://cacm.acm.org/magazines/2016/2/197412-the-moral-hazard-of-complexity-theoretic-assumptions/fulltext">The Moral Hazard of Complexity-Theoretic Assumptions</a> practically begs a response from this blog. I also encourage you to read the <a href="http://cacm.acm.org/magazines/2016/2/197412-the-moral-hazard-of-complexity-theoretic-assumptions/fulltext#comments">discussion</a> between Moshe and Piotr Indyk and a <a href="https://twitter.com/vardi/status/693520910096187393">Twitter discussion</a> between Moshe and Ryan Williams.<br />
<br />
I take issue mostly with the title of Vardi's letter. Unfortunately we don't have the tools to prove strong unconditional lower bounds on solving problems. In computational complexity we rely on hardness assumptions, like P ≠ NP, to show that various problems are difficult to solve. Some of these assumptions, like the Strong Exponential-Time Hypothesis, the Unique Games Conjecture, and the circuit lower bounds needed for full derandomization, are quite strong, and we can't be completely confident that they are true. Nevertheless, computer scientists will not likely disprove these assumptions in the near future, so they do point to the extreme hardness of solving problems like getting a better-than-quadratic upper bound for edit distance or a better-than-(2-ε) approximation for vertex cover.<br />
<br />
If you read Vardi's letter, he doesn't disagree with the above paragraph. His piece focuses instead on the press that oversells theory results, claiming the efficiency of Babai's new graph isomorphism algorithm or the impossibility of improving edit distance. A science writer friend <a href="http://blog.computationalcomplexity.org/2004/04/view-of-science-writer.html">once told me</a> that scientists always want an article to be fully and technically correct, but he doesn't write for scientists; he writes for readers who want to be excited by science. Scientists rarely mislead the press, and we shouldn't, but do we really want to force science writers to downplay the story? These stories might not be completely accurate, but if we can get the public interested in theoretical computer science we all win. To paraphrase Oscar Wilde, as I <a href="https://twitter.com/fortnow/status/693056927560044545">tweeted</a>, the only thing worse than the press talking about theoretical computer science results is the press not talking about theoretical computer science results.<br />
<br />
Permalink: http://blog.computationalcomplexity.org/2016/02/the-moral-hazard-of-avoiding-complexity.html<br />
<br />
<b>Go Google Go</b> (Lance Fortnow, Thu, 04 Feb 2016)<br />
<br />
In 2009 I <a href="http://blog.computationalcomplexity.org/2009/08/computer-go.html">posted</a> about a surprising new approach that moved computer Go from programs that lose to beginners to programs that could beat good amateurs. That approach, now called Monte Carlo Tree Search (MCTS), involves evaluating a position using random game play and doing a bounded-depth tree search to maximize the evaluation.<br />
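The random-playout-plus-search skeleton is easy to sketch. Below is a toy illustration (my own example, using one-pile Nim rather than Go; all names are mine): positions are scored by the fraction of random playouts won, and a bounded-depth search maximizes that score. Real MCTS engines grow the tree adaptively rather than searching to a fixed depth, so treat this only as the bare idea.

```python
import random

# Toy game: one-pile Nim. Players alternately remove 1-3 stones;
# whoever takes the last stone wins.
MOVES = (1, 2, 3)

def legal_moves(pile):
    return [m for m in MOVES if m <= pile]

def random_playout(pile, player):
    # Finish the game with uniformly random moves; return the winner (0 or 1).
    while pile > 0:
        pile -= random.choice(legal_moves(pile))
        player = 1 - player
    return 1 - player  # the player who just moved took the last stone

def evaluate(pile, player, playouts=200):
    # Monte Carlo evaluation: fraction of random playouts won by `player`.
    wins = sum(random_playout(pile, player) == player for _ in range(playouts))
    return wins / playouts

def search(pile, player, depth=2):
    # Bounded-depth tree search maximizing the Monte Carlo evaluation.
    best_move, best_val = None, -1.0
    for m in legal_moves(pile):
        rest = pile - m
        if rest == 0:
            val = 1.0  # taking the last stone wins outright
        elif depth > 1:
            val = 1.0 - search(rest, 1 - player, depth - 1)[1]
        else:
            val = 1.0 - evaluate(rest, 1 - player)
        if val > best_val:
            best_move, best_val = m, val
    return best_move, best_val
```

From pile 3, for instance, the search finds the immediate winning move of taking all three stones.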
<br />
Google last week <a href="https://googleblog.blogspot.com/2016/01/alphago-machine-learning-game-go.html">announced</a> AlphaGo, a program that uses machine learning techniques to optimize MCTS. This program beat the European Go champion five games to none, a huge advance over beating amateurs. In March AlphaGo will play the world Go champion, Lee Sedol, in a five-game match. The world will follow the match closely (on YouTube, naturally). For now we should keep our expectations in check: Deep Blue failed to beat Kasparov in its first attempt.<br />
<br />
Google researchers describe AlphaGo in detail in a readable <a href="http://dx.doi.org/10.1038/nature16961">Nature article</a>. To oversimplify: they train deep neural nets to learn two functions, the probability of a win from a given position, and the probability distribution used to choose the next move in the random game play in MCTS. First they train with supervised learning on historical game data between expert players, and then refine with reinforcement learning, basically having the program play itself. AlphaGo uses these functions to guide the Monte Carlo Tree Search.<br />
<br />
AlphaGo differs quite a bit from chess algorithms.<br />
<ul>
<li>AlphaGo uses no built-in strategy for Go. The same approach could be used for most other two-player games. I guess this approach would fail miserably for chess but I would love to see it tried.</li>
<li>Machine learning has the nice property that you can train offline slowly and then apply the resulting neural nets quickly during gameplay. While we can refine chess algorithms offline, all the computation generally happens during the game.</li>
<li>If a computer chess program makes a surprising move, good or bad, one can work through the code and figure out why the program made that particular move. If AlphaGo makes a surprising move, we'll have no clue why.</li>
<li>I wonder if the same applies to human play. A chess player can explain the reasoning behind a particular move. Can Go players do the same or do great Go players rely more on intuition?</li>
</ul>
<div>
Machine learning applications like AlphaGo seemingly tackle difficult computational problems with virtually no built-in domain knowledge. Except for generating the game data for the supervised learning, humans play little role in how AlphaGo decides what to do. ML uses the same techniques to translate languages, to weed out spam, to set the temperature in my house, and in the near future to drive my car. Will they prove our theorems? Time will tell. </div>
Permalink: http://blog.computationalcomplexity.org/2016/02/go-google-go.html<br />
<br />
<b>Math questions that come out of the Iowa Caucus</b> (GASARCH, Tue, 02 Feb 2016)<br />
<br />
The link for the info I refer to is <a href="http://www.foxnews.com/politics/elections/2016/primary-caucus-results/iowa">here</a>.<br />
<br />
Jim Gilmore, a Republican, has 0% of the vote. Does that mean that literally NOBODY voted for him?<br />
<br />
Hillary beat Bernie, but BOTH get 21 delegates. I hardly call that a win. I call that a tie.<br />
<br />
<br />
Why do we refer to Hillary and Bernie by their first names, but most of the Republicans by their last names? The only exception is Jeb!, whom we call Jeb to distinguish him from his brother. Also, I think he wants to play down his relation to his brother. Not that it will matter.<br />
<br />
Cruz/Trump/Rubio will get 8, 7, and 6 delegates. (This might change but not by a lot.) I'd call that a tie. Later states will be winner-take-all, which makes no sense in a race with this many people (though the number may go down soon-- Huckabee has already <i>suspended his campaign</i>, which seems like an odd way of saying <i>I quit</i>). But in winner-take-all states there will be real winners. Why are there winner-take-all states? It was a deal made so that those states wouldn't move their primaries up.<br />
<br />
This is a terrible way to pick a president. I don't mean democracy, which is fine; I mean the confusing combination of caucuses and primaries, with some states winner-take-all, some proportional, and Iowa and NH having... more power than they should. This was NOT a planned system; it just evolved that way. But it's hard to change.<br />
<br />
If you are a registered Republican but want Hillary to win, do you (1) vote for the Republican you like best, or (2) vote for the Republican that Hillary can most easily beat? The problem with (2) is that you could end up with President Trump.<br />
<br />
Statistical analysis of polls and past trends has its limits for two reasons:<br />
<br />
1) The amount of data is small. The modern primary system has only been in place since 1972. Some races involve incumbents, which are very different from a free-for-all. The only times both parties had free-for-alls were 1988, 2000, 2008, and 2016.<br />
<br />
2) Whatever trends you do find, even long-term ones (e.g., the tallest candidate wins), might just change. The old machine learning warning: <i>Trends hold until they don't.</i><br />
<i><br /></i>
Most sciences get BETTER over time. Polling is a mixed bag. On the one hand, using modern technology you can poll more people. On the other hand, people can be reached in so many different ways that it's hard to know what to do. For example, it's no longer the case that everyone has a landline.<br />
<br />
Is this headline satire? See <a href="http://www.thedailybeast.com/cheats/2016/02/01/ben-carson-leaves-iowa-to-do-laundry0.html">here</a>.<br />
<br />
<br />
AI project: write a program that tells political satire from political fact. Might be hard.<br />
<br />
My wife pointed out that<br />
<br />
Hillary WINS since she didn't lose!<br />
<br />
Bernie WINS since, for an insurgent, TYING the favorite counts as a win.<br />
<br />
Cruz WINS since... well, he actually DID win<br />
<br />
Trump WINS since his support is mostly real and he didn't collapse. And he was surprisingly gracious in his concession speech. (This is the weakest `he WINS' argument on this list)<br />
<br />
Rubio might be the BIG WINNER since he did way better than expected. That's a rather odd criterion, and it makes people want to set their expectations low.<br />
<br />
SO--- like a little league game where they all tried hard, THEY'RE ALL WINNERS!<br />
<br />
The American Public--- not so much.<br />
<br />
<br />
<br />
<br />
Permalink: http://blog.computationalcomplexity.org/2016/02/math-questions-that-come-out-of-iowa.html<br />
<br />
<b>We Still Can't Beat Relativization</b> (Lance Fortnow, Thu, 28 Jan 2016)<br />
<br />
As we celebrate our successes in computational complexity, here's a sobering fact: we have had no new non-relativizing techniques in the last 25 years.<br />
<br />
A little background: in 1975, a few years after Cook formulated the P v NP problem, <a href="http://blog.computationalcomplexity.org/2006/03/favorite-theorems-relativization.html">Baker, Gill and Solovay</a> created two sets A and B such that every NP machine that can ask questions to A can be simulated by a P machine that can ask questions to A, while there is some language accepted by an NP machine that can ask questions to B that cannot be solved by any P machine that can ask questions to B. In other words, P<sup>A</sup> = NP<sup>A</sup> but P<sup>B</sup> ≠ NP<sup>B</sup>.<br />
<br />
All the known proof techniques at the time relativized, i.e., if you proved two classes were the same or different, they would be the same or different relative to any set. BGS implied that these techniques could not settle the P v NP problem.<br />
<br />
In the 80's and 90's we got very good at creating these relativized worlds and, with a couple of exceptions, we created relativized worlds that make nearly all the open questions of complexity classes between P and PSPACE both true and false.<br />
<br />
There were some results in the interim that were non-relativizing for technical, uninteresting reasons. In the late 80s/early 90s we had some real progress: results like NP having zero-knowledge proofs, co-NP (and later PSPACE) having interactive proofs, and NEXP having (exponential-sized) probabilistically checkable proofs, despite relativized worlds making those statements false. But then it stopped. We had a few more nonrelativizing results but those just used the results above, not any new techniques.<br />
<br />
In 1994 I wrote a <a href="http://www.cs.uchicago.edu/~fortnow/papers/relative.pdf">survey</a> on what relativization meant post-interactive proofs, still mostly up to date. We have seen new barriers put up, such as <a href="http://dx.doi.org/10.1006/jcss.1997.1494">natural proofs</a> and <a href="http://dx.doi.org/10.1145/1490270.1490272">algebrization</a>, but until we can at least get past traditional relativization we just cannot make much more progress with complexity classes.<br />
<br />
Permalink: http://blog.computationalcomplexity.org/2016/01/we-still-cant-beat-relativization.html<br />
<br />
<b>When do we stop giving the original reference?</b> (GASARCH, Mon, 25 Jan 2016)<br />
I was preparing a talk that included my result that there is a sane reduction 4-COL \le 3-COL (the paper is <a href="http://arxiv.org/abs/1407.5128">here</a>) when I realized that if I am going to claim the reduction is <i>by Gasarch</i> then I should find out and credit the person who proved 3-COL NP-complete (I assume the result 3-COL \le 4-COL is too trivial to have an author). I could not find the reference on the web (I am sure one can find it there, but a cursory glance did not yield it). The reference was not in Goldreich's complexity book, nor in the CLR algorithms book. I suspect it's not in most recent books.<br />
<br />
The reference is Stockmeyer, SIGACT News, 1973.<br />
<br />
Few modern papers reference Cook's or Levin's original papers for the Cook-Levin Theorem. I've even heard that doing so is a sign of a crank paper.<br />
<br />
Few modern papers reference Ramsey's original paper for Ramsey's Theorem.<br />
<br />
But the question arises: when is a result such common knowledge that a reference is no longer needed?<br />
<br />
A result can also go through a phase where the reference is to a book that contains it, rather than the original paper.<br />
<br />
On the one hand, one SHOULD reference the original so that people know who did it and what year it is from. On the other hand, there has to be a limit to this or else we would all be referring to Euclid and Pythagoras.<br />
<br />
Where does Stockmeyer's result fall? I do not know; however, I've noticed that I didn't reference him, and I will correct that.<br />
<br />
Permalink: http://blog.computationalcomplexity.org/2016/01/when-do-we-stop-giving-he-original.html<br />
<br />
<b>The Growing Academic Divide</b> (Lance Fortnow, Thu, 21 Jan 2016)<br />
<br />
<blockquote class="tr_bq">
The decreases of the past three years bring the number of advertised jobs to a new low, below the level reached after the severe drop between 2007–08 and 2009–10. </blockquote>
So reads the <a href="https://www.mla.org/content/download/40038/1744556/Rpt_JIL_14-15.pdf">Report on the Modern Language Association Job Information List</a>. The MLA is the main scholarly organization for the humanities in the US. The bright spot--<a href="http://www.theguardian.com/higher-education-network/2015/sep/25/japans-humanities-chop-sends-shivers-down-academic-spines">we aren't Japan</a>.<br />
<br />
Meanwhile in computer science we continue to see large enrollment increases in majors, minors, and students just wanting to take computer science courses. In the upcoming <a href="http://cra.org/events/snowbird-2016/">CRA Snowbird Conference</a>, "a major focus of the conference will be booming enrollments, with a short plenary followed by parallel sessions devoted to the topic, its various ramifications, and ideas to help you deal with it, including best practices for managing growth."<br />
<br />
The November 2015 CRA News had 83 pages of faculty job ads, up from 75 in 2014 and 34 in 2012. This doesn't even count the fact that many departments are looking to fill two, three, four or more positions in CS. It will be an interesting job market this spring.<br />
<br />
All of this is driven by jobs. We can't produce enough strong computer scientists to fill industry demand. And it's becoming increasingly hard for a humanities major to get a good first job.<br />
<br />
It's nice to be on the side of growth but it's a shame that faculty hiring seems to be a zero-sum game. We need poets as well as nerds. We've helped create a world where we have made ourselves indispensable, but is this a world we really want to live in?<br />
<br />
Permalink: http://blog.computationalcomplexity.org/2016/01/the-growing-academic-divide.html<br />
<br />
<b>Is it okay to praise an article or book in an article or book?</b> (GASARCH, Tue, 19 Jan 2016)<br />
<br />
I recently had a paper accepted (YEAH!). The referees had some good corrections and one that puzzled me.<br />
<br />
<i>You wrote ``our proof is similar to the one in the wonderful book by Wilf on generating functions [ref]''. You should not call a book wonderful as that is subjective. You can say it's well known.</i><br />
<br />
<br />
I asked a pretension of professors about this. Is it okay to praise an article or book? Is it okay to state an opinion? Would the following be acceptable:<br />
<br />
1) <i>In Cook's groundbreaking paper SAT was shown to be NP-complete</i>. It IS groundbreaking, so maybe that's okay.<br />
<br />
2) <i>Ramsey's paper, while brilliant, is hard for the modern reader to comprehend. Hence we give an exposition. </i>If I am writing an exposition then I might need to say why the original is not good to read so this is informative.<br />
<br />
3) <i>Ryan Williams proved an important lower bound in [ref]. </i>Is this okay to write? For most people yes, but NOT if you are Ryan Williams. (He never wrote such.)<br />
<br />
4) <i>William Gasarch proved an unimportant lower bound in [ref].</i> Is this okay to write? Only if you ARE William Gasarch. (He never wrote such.)<br />
<br />
The version that will be in the journal will indeed NOT call Wilf's book wonderful. The version on arXiv, which will be far more read (it is not behind a paywall), will call Wilf's book wonderful.<br />
<br />
<br />
<br />
<br />
Permalink: http://blog.computationalcomplexity.org/2016/01/is-it-okay-to-praise-article-or-book-in.html<br />
<br />
<b>A limit on the DFA-CFG divide for finite sets</b> (GASARCH, Fri, 15 Jan 2016)<br />
<br />
It is easy to see that for L<sub>n</sub> = {a,b}*a{a,b}<sup>2<sup>n</sup></sup>:<br />
<br />
There IS a CFG of size O(n)<br />
<br />
ANY DFA is of size double-exp-in-n<br />
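To make the O(n) upper bound concrete, here is a sketch (my own construction; the nonterminal names S, T, X, C<sub>i</sub> and helper functions are mine) of a grammar for L<sub>n</sub> built by repeated doubling: C<sub>i</sub> derives exactly 2<sup>i</sup> letters, so n+7 productions describe a block of 2<sup>n</sup> letters that a DFA would have to count out explicitly.

```python
import random
import re

# Sketch: an O(n)-size grammar for L_n = {a,b}* a {a,b}^(2^n).
# The "doubling" nonterminal C_i derives exactly 2^i letters.

def grammar(n):
    # Productions as (lhs, rhs) pairs; lowercase symbols are terminals.
    prods = [("S", ("T", f"C{n}")),            # a prefix, then exactly 2^n letters
             ("T", ("X", "T")), ("T", ("a",)), # T derives {a,b}* a
             ("X", ("a",)), ("X", ("b",)),
             ("C0", ("a",)), ("C0", ("b",))]
    for i in range(1, n + 1):
        prods.append((f"C{i}", (f"C{i-1}", f"C{i-1}")))  # C_i => 2^i letters
    return prods

def generate(prods, sym="S"):
    # Expand `sym` using uniformly random production choices.
    options = [rhs for lhs, rhs in prods if lhs == sym]
    if not options:
        return sym  # terminal symbol
    return "".join(generate(prods, s) for s in random.choice(options))
```

For n=4 the grammar has just 11 productions, and every string it generates matches [ab]*a[ab]^16.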
<br />
I was wondering if we can increase this gap. That is, a statement like: for all but a finite number of n there exists a lang L<sub>n</sub> such that<br />
<br />
There IS a CFG of size O(n)<br />
<br />
ANY DFA is of size triple-exp-in-n.<br />
<br />
I also looked at finite sets. For the COMPLEMENT of { ww : |w|=2<sup>n</sup> }<br />
<br />
There IS a CFG of size O(n)<br />
<br />
ANY DFA (in fact any DPDA) is of size double-exp-in-n.<br />
<br />
This is stated in my paper <a href="http://arxiv.org/abs/1503.08847">here</a>, though the really interesting math needed to get it is in <a href="http://www.cs.toronto.edu/~yuvalf/CFG-LB.pdf">here</a>, where they get lower bounds on the size of CFGs, generalizing techniques from <a href="https://cs.uwaterloo.ca/~shallit/Papers/re3.pdf">here</a>.<br />
<br />
SO my question still stands: can we get a triple-exp separation? I have shown that this CANNOT be achieved with finite sets:<br />
<br />
THEOREM: If L is a finite lang then there exists n such that (1) Any CFG in Chomsky Normal Form for L has to be of size \ge log n, AND (2) there IS a DFA of size 2<sup>n</sup>.<br />
<br />
Proof: Let n be such that the longest string in L is of length n. In order for a Chomsky Normal Form grammar to generate this string it needs \ge log n nonterminals. Since the lang has at most 2<sup>n</sup> strings in it, there is a DFA of size 2<sup>n</sup> for it.<br />
End of proof.<br />
<br />
So I have shown that one approach won't work. I am hoping that YOU know an approach to get a triple-exp separation and leave a comment about it.<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
Permalink: http://blog.computationalcomplexity.org/2016/01/a-limit-on-dfa-cfg-divide-for-finite.html<br />
<br />
<b>A question in Formal Lang Theory about Size of Desc of Languages</b> (GASARCH, Mon, 11 Jan 2016)<br />
(This post is inspired by me thinking more about <a href="http://blog.computationalcomplexity.org/2015/05/sizes-of-dfas-nfas-dpdas-ucfg-cfgs-csls.html">this blog entry </a>and <a href="http://arxiv.org/abs/1503.08847">this paper.</a>)<br />
<br />
Upper bounds of n are really O(n).<br />
Lower bounds of f(n) are really Omega(f(n)).<br />
<br />
DPDA = Deterministic Push Down Automata<br />
<br />
NDFA = Nondet. Finite Aut.<br />
<br />
DFA = Det. Finite Aut.<br />
<br />
It is known that FOR ALL n there is a lang L<sub>n</sub> such that there is an NDFA of size n, but ANY DFA for L<sub>n</sub> is of size at least 2<sup>n</sup>: take L<sub>n</sub> = (a,b)<sup>*</sup>a(a,b)<sup>n</sup>. For the DFA the size is exactly 2<sup>n</sup>; for the NDFA, n+2. One can also get 2<sup>n</sup> and n.<br />
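For small parameters you can watch the blowup directly by counting Myhill-Nerode classes, which equal the number of states of the minimal DFA. The sketch below is my own (helper names mine); with my indexing convention the count for (a,b)*a(a,b)<sup>k</sup> comes out to 2<sup>k+1</sup>, so the exact constant depends on how you index, while the obvious NDFA (guess the distinguished a, then count k more letters) has only about k+2 states.

```python
from itertools import product

def in_L(w, k):
    # w is in (a,b)* a (a,b)^k iff the (k+1)-th symbol from the end is 'a'.
    return len(w) > k and w[-(k + 1)] == "a"

def num_dfa_states(k):
    # Count Myhill-Nerode classes by brute force: two prefixes are
    # equivalent iff no extension tells them apart.  For this language,
    # extensions of length <= k+1 and prefixes of length <= k+2 suffice.
    exts = [""]
    for i in range(1, k + 2):
        exts += ["".join(t) for t in product("ab", repeat=i)]
    sigs = set()
    for i in range(k + 3):
        for t in product("ab", repeat=i):
            w = "".join(t)
            sigs.add(tuple(in_L(w + e, k) for e in exts))
    return len(sigs)  # number of states of the minimal DFA
```

Here num_dfa_states(1) is 4, num_dfa_states(2) is 8, and num_dfa_states(3) is 16: the DFA doubles each time while the NDFA gains one state.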
<br />
It is known that FOR ALL n there is a lang L<sub>n</sub> such that there is a DPDA of size n, but ANY NDFA for L<sub>n</sub> is of size at least roughly 2<sup>n</sup>: take L<sub>n</sub> = {a<sup>2<sup>n</sup></sup>}. (ADDED LATER TO CLARIFY: note that L<sub>n</sub> is a one-element set, hence regular.)<br />
<br />
Can we achieve both at the same time? Is the following true: for all n there exists L<sub>n</sub> such that<br />
<br />
1) There is a size n DPDA for L<sub>n</sub>.<br />
<br />
2) Any NDFA for L<sub>n</sub> requires size 2<sup>n</sup>.<br />
<br />
3) There IS an NDFA for L<sub>n</sub> of size 2<sup>n</sup>. <br />
<br />
4) Any DFA for L<sub>n</sub> requires size 2<sup>2<sup>n</sup></sup>.<br />
<br />
If we replace DPDA with PDA then { (a,b)*a(a,b)<sup>2<sup>n</sup></sup> } works.<br />
<br />
<br />
<br />http://blog.computationalcomplexity.org/2016/01/a-question-in-formal-lang-theory-about.htmlnoreply@blogger.com (GASARCH)5tag:blogger.com,1999:blog-3722233.post-8397512848780513117Wed, 06 Jan 2016 17:58:00 +00002016-01-06T12:58:32.042-05:00Rūsiņš Freivalds (1942-2016)<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.lu.lv/kontakti/foto/32605.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://www.lu.lv/kontakti/foto/32605.jpg" height="200" width="200" /></a></div>
Rūsiņš Mārtiņš Freivalds <a href="http://www.delfi.lv/news/national/politics/muziba-aizgajis-profesors-rusins-martins-freivalds.d?id=46907459">passed away</a> on Monday from a heart attack at the age of 73. I met Freivalds several times, often through Carl Smith, who <a href="http://blog.computationalcomplexity.org/2004/07/carl-smith-1950-2004.html">passed away</a> himself in 2004. Rūsiņš, Carl, Bill, I, and a couple of others have a <a href="http://dx.doi.org/10.1016/S0304-3975(97)00175-8">joint paper</a> on inductive inference and measure theory. Freivalds is the sixth co-author I've lost, and it never gets easier.<br />
<br />
<a href="https://lv.wikipedia.org/wiki/R%C5%ABsi%C5%86%C5%A1_M%C4%81rti%C5%86%C5%A1_Freivalds">Rūsiņš Freivalds</a> did much of the early work in probabilistic algorithms and automata, and in inductive inference, a computability-theoretic approach to learning. In the 70's he found fast probabilistic algorithms for checking multiplication of integers and matrices. More recently he has looked at quantum finite automata. He supervised several PhD students including Andris Ambainis, one of the leading researchers in quantum algorithms.<br />
<br />
Mostly I remember Freivalds as the leader of the Latvian theoretical computer science community and just an extremely nice and friendly colleague. I am glad to have known and worked with him.<br />
<br />
Permalink: http://blog.computationalcomplexity.org/2016/01/rusins-freivalds-1942-2016.html<br />
<br />
<b>Predictions for 2016</b> (GASARCH, Mon, 04 Jan 2016)<br />
<br />
This is the first post of 2016! (that is not a factorial). Hence I will make some predictions, and at the end of the year I'll see how I did.<br />
<br />
1) The USA Prez election will have two main candidates: Hillary Clinton and Ted Cruz. Clinton will win by floating rumors that Cruz was born in a foreign country.<br />
<br />
2) There will be a proof that there exists a constant c such that the methods that got a lower bound of (3+1/86)n on an explicit function cannot be extended to cn. <br />
<br />
3) P vs NP will not be solved. But see next point.<br />
<br />
4) I will be asked to look at at least two resolutions of P vs NP. This year I was asked to look at two proofs that P=NP. Neither was correct.<br />
<br />
5) GI will still not be known to be in P.<br />
<br />
6) There will be a proof that Babai's techniques cannot get GI into P.<br />
<br />
7) The fact that 2016=2<sup>5</sup>3<sup>2</sup>7 will be useful in a math competition.<br />
<br />
8) Posting a video of a talk (as Babai did) will become an acceptable way to claim a result, rather than having a paper on arXiv. Getting a paper in a Journal will become less and less relevant for big results.<br />
(This is a topic for a later blog post.)<br />
<br />
9) The Unique Games Conjecture... When it was first stated it seemed like maybe it could be proven or disproven. The longer it stays open the harder it seems. Clyde Kruskal tells me that a good way to guess how long a problem will stay open is how long it's been open. So I'll predict it won't be resolved this year. However, by that reasoning, I will always predict it will not be resolved in the following year.<br />
<br />
10) There will be a big data breach. The security protocols used were never proven secure, though that won't be what causes the breach. Nor will it be caused by a really fast factoring or DL algorithm.<br />
<br />
11) Comp Sci enrollment will continue to rise.<br />
<br />
12) Leave your predictions in the comments!<br />
<br />
Permalink: http://blog.computationalcomplexity.org/2016/01/predictions-of-new-year.html<br />
<br />
<b>Complexity Year in Review 2015</b> (Lance Fortnow, Mon, 28 Dec 2015)<br />
<br />
We had an incredible year for theorems in 2015, the strongest year in computational complexity in the last decade. Topping the list as the theorem of the year is László Babai's <a href="http://blog.computationalcomplexity.org/2015/11/a-primer-on-graph-isomorphism.html">quasipolynomial-time algorithm for graph isomorphism</a>. A little risky because nobody, including Babai himself, has fully verified the proof, but given Babai's reputation and the warm reception of his lectures, I have little doubt it will all work out. Again I strongly recommend watching the <a href="http://people.cs.uchicago.edu/~laci/2015-11-10talk.mp4">video</a> of Babai's first lecture. Not only is Babai a great researcher but he's an excellent speaker as well. For those up to the challenge, <a href="http://arxiv.org/abs/1512.03547">the paper</a> is now available as well.<br />
<br />
Beyond Babai's result we had a great year including <a href="http://blog.computationalcomplexity.org/2015/04/ph-infinite-under-random-oracle.html">random oracles separating the polynomial-time hierarchy</a>, the <a href="http://eccc.hpi-web.de/report/2015/166/">(3+1/86)n size lower bound for general circuits</a>, <a href="http://arxiv.org/abs/1509.05363">Terry Tao's solution to the Erdős discrepancy problem</a>, <a href="http://eccc.hpi-web.de/report/2015/119/">explicit</a> <a href="http://arxiv.org/abs/1506.04428">constructions</a> of Ramsey Graphs and <a href="http://arxiv.org/abs/1511.07860">new bounds on threshold circuits</a>.<br />
<br />
We celebrated 50 years of <a href="http://blog.computationalcomplexity.org/2015/05/fiftieth-anniversary-of-publication-of.html">computational complexity</a> and <a href="http://blog.computationalcomplexity.org/2015/04/fifty-years-of-moores-law.html">Moore's law</a>, the centenary of <a href="http://blog.computationalcomplexity.org/2015/02/richard-hamming-centenary.html">Hamming</a> and the bicentenaries of <a href="http://blog.computationalcomplexity.org/2015/12/ada-lovelace-1815-1852.html">Ada Lovelace</a> and <a href="http://blog.computationalcomplexity.org/2015/11/george-boole-1815-1864.html">George Boole</a>, the latter giving us the Boolean algebra. In 2016 we will celebrate the centenary of the person who used those bits to describe information.<br />
<br />
We thank our guest posters <a href="http://blog.computationalcomplexity.org/2015/06/a-metric-group-product.html">Dylan McKay</a> and <a href="http://blog.computationalcomplexity.org/2015/03/guest-post-by-thomas-zeume-on-app-of.html">Thomas Zeume</a>. Dylan's <a href="http://blog.computationalcomplexity.org/2015/06/a-metric-group-product.html">metric group problem</a> is still open if anyone wants to try and crack it.<br />
<br />
In 2015 we continued to see CS departments struggle to meet the demand of students and industry. US universities faced many difficult challenges, including dealing with race relations, freedom of speech and safety issues. Too much tragedy in the US and abroad, and a far too divisive political campaign. Here's hoping for reason to prevail in 2016.<br />
<br />
We remember <a href="http://www.nytimes.com/2015/11/13/technology/gene-amdahl-pioneer-of-mainframe-computing-dies-at-92.html">Gene Amdahl</a>, <a href="https://rjlipton.wordpress.com/2015/07/22/alberto-apostolico-1948-2015/">Alberto Apostolico</a>, <a href="http://blog.computationalcomplexity.org/2015/10/s-barry-cooper-1943-2015.html">Barry Cooper</a>, <a href="https://twitter.com/fortnow/status/602872182579015681">George Cox</a>, <a href="http://www.inf.ethz.ch/news-and-events/spotlights/jiri-matousek.html">Jiří Matoušek</a>, <a href="http://blog.computationalcomplexity.org/2015/05/john-nash-1928-2015.html">John Nash</a>, <a href="http://blog.computationalcomplexity.org/2015/03/leonard-nimoy-1931-2015_2.html">Leonard Nimoy</a>, <a href="http://blog.computationalcomplexity.org/2015/07/hartley-rogers-author-of-first-textbook.html">Hartley Rogers Jr.</a>, <a href="http://www.cs.duke.edu/news/index.php?article=584">Donald Rose</a>, <a href="http://blog.computationalcomplexity.org/2015/10/cancer-sucks.html">Karsten Schwan</a> and <a href="http://www.nytimes.com/2015/08/27/science/joseph-traub-who-helped-bring-computer-science-to-universities-dies-at-83.html">Joe Traub</a>.<br />
<br />
Wishing everyone a Happy New Year and a successful 2016.<br />
<br />
Permalink: http://blog.computationalcomplexity.org/2015/12/complexity-year-in-review-2015.html<br />
<br />
<b>Guns and Crypto</b> (Lance Fortnow, Tue, 22 Dec 2015)<br />
<br />
“We believe it would be wrong to weaken security for hundreds of millions of law-abiding customers so that it will also be weaker for the very few who pose a threat,” said a spokesperson from Smith & Wesson on the recent calls for increased gun control.<br />
<br />
The quote was actually from Apple on a <a href="http://www.nytimes.com/2015/12/22/world/europe/apple-pushes-against-british-talk-of-softening-encryption.html">proposed British law</a> that would require the company to devise methods to break into iPhone communications.<br />
<br />
In the wake of Paris and San Bernardino we've heard calls for controls on both guns and cryptography with eerily similar arguments coming from defenders. If we ban guns/crypto then only bad people will have guns/crypto. Any attempts to limit guns/crypto is a slippery slope that takes away our constitutional rights and our freedoms.<br />
<br />
I don't have a gun but I do use encryption, built into the iPhone and the various apps and browsers I use to communicate. Fat lot of good it does me as hackers stole my personal information because I shopped at Home Depot and Target. Because my health insurance runs through Anthem. Because I voted in the State of Georgia.<br />
<br />
I am a strong believer in individual rights: a person should be able to use cryptography to protect their communications and a gun, if they wish, to protect their family. But I do see the value in gaining access to communications to stop a terrorist, as well as making it harder for them to get the weapons to carry out their acts. Why can't the fingerprint technology that unlocks my iPhone also unlock a gun? The gun/crypto advocates don't trust the government to implement any restrictions reasonably and thus fight any changes.<br />
<br />
No laws can completely eliminate or even restrict guns or crypto but they can make them harder to use. The challenges aren't technological: we can create guns or crypto protocols that perform as we want them to perform. The challenges are social: finding the right balance between rights and security, and governments we can trust to enforce that balance.http://blog.computationalcomplexity.org/2015/12/guns-and-crypto.htmlnoreply@blogger.com (Lance Fortnow)10tag:blogger.com,1999:blog-3722233.post-5490604391604371121Mon, 21 Dec 2015 15:12:00 +00002015-12-21T10:12:43.798-05:00Blog FollowersIf you follow this or any other Blogger-based blog via a non-Google account, you'll need to follow with a Google account instead. <a href="http://buzz.blogger.com/2015/12/an-update-on-google-friend-connect.html">Details</a>.http://blog.computationalcomplexity.org/2015/12/blog-followers.htmlnoreply@blogger.com (Lance Fortnow)2tag:blogger.com,1999:blog-3722233.post-8047839355128781296Wed, 16 Dec 2015 20:00:00 +00002015-12-16T15:00:26.769-05:00Simons and BerkeleyThe <a href="https://simons.berkeley.edu/">Simons Institute for the Theory of Computing</a> in Berkeley held two programs this fall, <a href="https://simons.berkeley.edu/programs/complexity2015">Fine-Grained Complexity and Algorithm Design</a> and <a href="https://simons.berkeley.edu/programs/economics2015">Economics and Computation</a>, both ending this week. It would have been a great fall for me to spend there as I have strong connections in both groups, but as a department chair and father to a high-school senior it would have been impossible to be away from Atlanta for an extended period of time.<br />
<br />
But I did manage some short trips, my first to Simons. In September I went to attend the workshop on <a href="https://simons.berkeley.edu/workshops/complexity2015-1">Connections Between Algorithm Design and Complexity Theory</a> but had to leave early; to make up for it, I visited for a few days last week as well. The Simons Institute has taken over Calvin Lab, a circular building on the UC Berkeley campus. Lecture rooms on the first floor, wide-open working spaces on the second floor where most people congregate, and visitor offices on the third floor. It's not just the space but an <a href="https://simons.berkeley.edu/people/visitors">incredible group</a> of complexity theorists and EC types there this fall.<br />
<br />
This year marks thirty years since I spent that <a href="http://blog.computationalcomplexity.org/2006/04/one-miserable-year.html">not-so-happy year</a> of grad school in Berkeley. The city of Berkeley has mellowed a bit. When I lived there before, many recent graduates didn't leave and instead remained in their cheap rent-controlled apartments. Now you see almost a bimodal distribution centered around college-aged students and late middle-aged residents. Berkeley housing is getting hard to find again, this time because of overflow from limited affordable housing in San Francisco. Still, Berkeley seems like a city that has never grown up. I guess people like it that way.http://blog.computationalcomplexity.org/2015/12/simons-and-berkeley.htmlnoreply@blogger.com (Lance Fortnow)0tag:blogger.com,1999:blog-3722233.post-5929511404430088479Thu, 10 Dec 2015 13:17:00 +00002015-12-10T08:17:06.217-05:00Ada Lovelace (1815-1852)<div class="separator" style="clear: both; text-align: center;">
<a href="https://upload.wikimedia.org/wikipedia/commons/thumb/a/a4/Ada_Lovelace_portrait.jpg/220px-Ada_Lovelace_portrait.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="320" src="https://upload.wikimedia.org/wikipedia/commons/thumb/a/a4/Ada_Lovelace_portrait.jpg/220px-Ada_Lovelace_portrait.jpg" width="222" /></a></div>
Back in my teens I had ordered my first computer, a TRS-80 from Radio Shack, and I grew anxious in the weeks before it arrived. I learned the basics of BASIC and started writing programs on paper ready to be typed into this new machine. Imagine though if I had to wait not just a few weeks but over a hundred years. Such was the fate of Augusta Ada King, Countess of Lovelace, born two hundred years ago today.<br />
<br />
Ada, daughter of the poet Lord Byron, showed an early talent in mathematics. At age 17 she became a friend of Charles Babbage, who invented the difference engine and proposed a more complex analytical engine. Babbage gave a talk at the University of Turin and Ada had the task of translating an article written by Italian engineer Luigi Menabrea based on the talk. Ada did more than just translate: she added her own notes, which included a program that could be run on the analytical engine to compute a specific sequence of numbers. The analytical engine never got built and Ada never saw her program run, but nevertheless she gets credit as the first computer programmer, and most notably in 1980 the US Department of Defense named their preferred computer language "Ada" in her honor.<br />
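The specific sequence was the Bernoulli numbers. As a rough modern sketch (mine, using the standard recurrence rather than Lovelace's actual table of engine operations), the whole computation now fits in a few lines of Python:

```python
from fractions import Fraction
from math import comb

def bernoulli(m):
    """Bernoulli numbers B_0..B_m via the standard recurrence
    sum_{j=0}^{k} C(k+1, j) * B_j = 0 for k >= 1, with B_0 = 1."""
    B = [Fraction(1)]
    for k in range(1, m + 1):
        B.append(-sum(comb(k + 1, j) * B[j] for j in range(k)) / (k + 1))
    return B

# B_0 = 1, B_1 = -1/2, B_2 = 1/6, B_3 = 0, B_4 = -1/30
assert bernoulli(4) == [Fraction(1), Fraction(-1, 2), Fraction(1, 6),
                        Fraction(0), Fraction(-1, 30)]
```

Lovelace laid out essentially this kind of loop as a sequence of engine operations, over a century before there was a machine to run it.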
<br />
Here's to Lady Ada Lovelace, born way too far ahead of her time.http://blog.computationalcomplexity.org/2015/12/ada-lovelace-1815-1852.htmlnoreply@blogger.com (Lance Fortnow)2tag:blogger.com,1999:blog-3722233.post-6993213568088951160Mon, 07 Dec 2015 15:34:00 +00002015-12-07T10:34:11.816-05:00 crank math and crank people<br />
In the same week I got email claiming:<br />
<br />
1) Babai had shown that GI is in quasipoly time.<br />
<br />
2) Opeyemi Enoch, a Nigerian Mathematician, solved the Riemann hypothesis.<br />
<br />
I believe Babai's claim since (a) he's worked on the problem a lot and is a known expert, (b) the result is believable, and (c) the math we know now seems up to the task.<br />
<br />
I do not believe Enoch's claim. It would be unfair to disbelieve it merely because I had never heard of him. However, my sources in math say that RH is not in a position to be solved. Also, articles I've read on the web, including <a href="http://qz.com/552327/the-nigerian-mathematics-genius-who-fooled-the-british-media/">this article</a>, seem to say things that cast doubt on it, such as this quote:<br />
<br />
<i>motivation to solve the problem came from his students, who brought it to him with the hope of making 1 million ``off the internet''</i><br />
<br />
Bobby Jindal (who?) dropped out of the Prez race (why am I bringing this up? Is he working on RH?) Bobby Jindal was a Rhodes Scholar. (Why am I bringing this up? Wait and see.)<br />
<br />
In 2003 I was in a taxi (ask your grandparents what taxis were before we had Uber) and the driver began telling me lots of conspiracies--- there is no gold in Fort Knox, the novel Goldfinger was Ian Fleming's attempt to tell us this, Reagan was shot because he knew this, and the American presidency is controlled by a secret cabal. I pointed out that if that's the case, how did George Bush Sr (clearly a member of the Cabal) lose to Bill Clinton? His answer: Bill Clinton was a Rhodes Scholar and the Cabal is controlled by Rhodes Scholars.<br />
<br />
It's hard to argue with a conspiracy theorist about the past. But you can challenge him to predict the future. So I told him ``if you can predict who will be the Democratic nominee for Prez in 2004 then I will look at your website and take it seriously. If not, then not.'' He smiled and just said ``Wesley Clark was a Rhodes Scholar.'' If I had asked him who would be the Republican nominee in 2016 then he might have said ``Bobby Jindal is a Rhodes Scholar.''<br />
<br />
The way to expose conspiracy theorists, astrologers, and the National Enquirer is to take their predictions seriously and test them. I wonder why anyone believes these things given their track record.<br />
<br />
Judging math claims is different: a math paper can be read and should stand on its own.<br />
<br />
<br />http://blog.computationalcomplexity.org/2015/12/crank-math-and-crank-people.htmlnoreply@blogger.com (GASARCH)4tag:blogger.com,1999:blog-3722233.post-7808673758357566847Thu, 03 Dec 2015 14:43:00 +00002015-12-03T09:43:31.910-05:00Moonshots<span style="-webkit-line-break: after-white-space; -webkit-nbsp-mode: space; word-wrap: break-word;">In 1961 Kennedy said "this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to the Earth". In 1969 Neil Armstrong and Buzz Aldrin walked on the moon and returned to earth safely. In the 46 years hence man has gone no further.</span><br />
<br />
Bill Gates founded Microsoft to place "a computer on every desk and in every home". Once he succeeded, Microsoft lost its way. Their current mission takes a different tack: "Empower every person and every organization on the planet to achieve more."<br />
<br />
In my life I've seen many a moonshot succeed, from a computer chess program that can beat the best humans to the classification of finite simple groups and the proof of Fermat's last theorem. They all seem to end up with a "Now what?". If you have a finite mission, you can only achieve finite things. We often think of moonshots as an amazing challenge but they serve as limits as well.<br />
<br />
In computing we have a law that has become a mission, that the power of computing doubles roughly every two years. These improvements have enabled the incredible advances in learning, automation and connectivity that we see today. We keep making better computers because we don't have a limit, just a goal of getting better than where we are today.<br />
<br />
When I first started writing grant proposals, a senior faculty member chided me for saying "the ultimate goal of computational complexity is to prove P ≠ NP". He said that would kill any grant proposals after we did settle the question. While we are in no danger of solving this problem in the near future, eventually we will and I'd hate to see the field wither afterwards.<br />
<br />
My favorite mission comes from Star Trek, particularly the TNG version with "Its continuing mission: to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no one has gone before."http://blog.computationalcomplexity.org/2015/12/moonshots.htmlnoreply@blogger.com (Lance Fortnow)2tag:blogger.com,1999:blog-3722233.post-4518467302112184229Mon, 30 Nov 2015 03:45:00 +00002015-11-30T11:26:11.931-05:00One more sign that a Journal is bogus. Or is it?I went to an AMS conference with my High School Student Naveen (who presented this <a href="http://arxiv.org/abs/1507.04421">paper) </a>and an ugrad from my REU program Nichole (who presented this <a href="http://www.cs.umd.edu/~gasarch/reupapers/ramseysat.pdf">paper</a>). Note that this is a math conference- low prestige, mostly unrefereed, parallel sessions, but still a good place to pick up some new problems to work on and learn some things, and meet people.<br />
<br />
Both Naveen and Nichole later got email from a journal urging them to submit their work! They were also invited to be on the Editorial Board! I can understand inviting a HS student who is a SENIOR to be on an editorial board, but Naveen is a SOPHOMORE!<br />
<br />
Naveen and Nichole both emailed me asking what this was about and I looked around and found, not to my surprise, that the journal is an author-pays journal of no standards. On the other hand, it was open access; on the other other hand, they had an article claiming that R(4,6)=36 (might be true, but if this were known, I would know it; see <a href="http://blog.computationalcomplexity.org/2010/11/non-rigorous-proof-technique.html#comment-form">this blog post</a> for more on that proof technique).<br />
<br />
The pay-to-publish model is not necessarily bad, and is standard in some fields, but is unusual for math. Of perhaps more importance is that the journal had no standards. And they prey on the young and naive.<br />
<br />
Or do they?<br />
<br />
Consider the following scenario: Naveen publishes in this journal and this publication is on his college application, hence he gets into a good college with a scholarship. He knows exactly what he is buying. Or Nichole does this to help her grad school application. Or I do this for my resume and get a salary increase. Or an untenured prof does this to get tenure (<i>Deans can count but they can't read</i>). And it gets worse--- the school gives the prof tenure, knowing the papers are bogus, but now they can say they have a prof who publishes X papers a year! At some point I don't know who is scamming who.<br />
<br />
<a href="http://library-blog.syr.edu/drs/2014/09/03/a-cautionary-tale-about-predatory-publishers/">This </a>blog post (not mine) gives several criteria for when a journal is bogus. I'll add one: <i>When they ask a 15-year-old to be on their editorial board.</i><br />
<br />
<br />http://blog.computationalcomplexity.org/2015/11/one-more-sign-that-journal-is-bogus-or.htmlnoreply@blogger.com (GASARCH)5tag:blogger.com,1999:blog-3722233.post-98517605919665296Mon, 23 Nov 2015 15:30:00 +00002015-11-23T10:30:55.573-05:00Star Trek ComputingIn the wake of Leonard Nimoy's death last February, I decided to rewatch the entire original Star Trek series, all 79 episodes. I had watched them each many times over in high school in the 70's, though the local station removed a scene or two from each episode to add commercial time and I often missed the opening segment because I didn't get home from school in time. Back in those stone ages we had no DVR or other method to record shows. I hadn't seen many episodes of the original series since high school.<br />
<br />
Now I can watch the entire episodes whenever I want in full and in order through the magic of Netflix. I finished this quest a few days ago. Some spoilers below.<br />
<br />
I could talk about the heavy sexism, the ability to predict future technologies (the flat screen TV in episode 74), the social issues in the 23rd century as viewed from the 60's, or just the lessons in leadership you can get from Kirk. Given the topic of this blog, let's talk about computing in Star Trek, which the show often gets wrong, such as when Spock asks the computer to compute the last digit of π to force Jack-the-Ripper to remove his consciousness from the ship's computers.<br />
<br />
Too many episodes end with Kirk convincing a computer or robot to destroy itself. I'd like to see him try that with Siri. In one such episode "<a href="https://en.wikipedia.org/wiki/The_Ultimate_Computer">The Ultimate Computer</a>", a new computer is installed in the Enterprise that replaces most of the crew. A conversation between Kirk and McCoy sounds much like ones we have today (<a href="http://www.chakoteya.net/StarTrek/53.htm">source</a>).<br />
<br />
MCCOY: Did you see the love light in Spock's eyes? The right computer finally came along. What's the matter, Jim?<br />
KIRK: I think that thing is wrong, and I don't know why.<br />
MCCOY: I think it's wrong, too, replacing men with mindless machines.<br />
KIRK: I don't mean that. I'm getting a Red Alert right here. (the back of his head) That thing is dangerous. I feel. (hesitates) Only a fool would stand in the way of progress, if this is progress. You have my psychological profiles. Am I afraid of losing my job to that computer?<br />
MCCOY: Jim, we've all seen the advances of mechanisation. After all, Daystrom did design the computers that run this ship.<br />
KIRK: Under human control.<br />
MCCOY: We're all sorry for the other guy when he loses his job to a machine. When it comes to your job, that's different. And it always will be different.<br />
KIRK: Am I afraid of losing command to a computer? Daystrom's right. I can do a lot of other things. Am I afraid of losing the prestige and the power that goes with being a starship captain? Is that why I'm fighting it? Am I that petty?<br />
MCCOY: Jim, if you have the awareness to ask yourself that question, you don't need me to answer it for you. Why don't you ask James T. Kirk? He's a pretty honest guy.<br />
<br />
Later in the episode the computer starts behaving badly and Kirk has to convince it to shut itself down. But what if the computer just did its job? Is that our real future: Ships that travel to stars controlled only by machine. Or are we <a href="http://news.mit.edu/2015/nasa-gives-mit-humanoid-robot-future-space-missions-1117">already there</a>?http://blog.computationalcomplexity.org/2015/11/star-trek-computing.htmlnoreply@blogger.com (Lance Fortnow)5tag:blogger.com,1999:blog-3722233.post-6052109793999482549Thu, 19 Nov 2015 13:53:00 +00002015-11-19T08:53:05.061-05:00A Silly String TheoremFirst a note on a serious theorem: Babai has posted a <a href="http://people.cs.uchicago.edu/~laci/2015-11-10talk.mp4">video</a> (mp4, 1h 40 m, 653MB) of his first talk on his Graph Isomorphism algorithm.<br />
<b><br /></b>I was giving a talk on the Kleene star operator (don't ask) and came across this cute little problem. Say a language L commutes if for all u,v in L, uv=vu.<br />
<b><br />Problem: </b>Show that L commutes if and only if L is a subset of w* for some fixed string w.<br />
<br />
Here w* is the set of strings consisting of zero or more concatenations of w with itself. The if case is easy, but I found the other direction pretty tricky and came up with an ugly proof. I found a cleaner proof in Seymour Ginsburg's 1966 textbook <i>The Mathematical Theory of Context Free Languages </i>which I present here.<br />
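Before the proof, a brute-force sanity check of the statement on finite languages can be reassuring (a quick sketch I'm adding; the helper names are made up):

```python
from itertools import product

def commutes(L):
    """Check that uv == vu for every pair of strings u, v in L."""
    return all(u + v == v + u for u, v in product(L, repeat=2))

def power_word(L):
    """If every string in L is a power of one common word w, return w; else None."""
    nonempty = [s for s in L if s]
    if not nonempty:
        return ""                      # L contains at most the empty string
    v = min(nonempty, key=len)         # shortest non-empty string, as in the proof
    for k in range(1, len(v) + 1):     # try each prefix of v as the candidate w
        w = v[:k]
        if all(len(s) % k == 0 and s == w * (len(s) // k) for s in L):
            return w
    return None

# The two conditions agree, as the theorem predicts.
assert commutes({"ab", "abab", ""}) and power_word({"ab", "abab", ""}) == "ab"
assert not commutes({"ab", "ba"}) and power_word({"ab", "ba"}) is None
```

Only the ⇒ direction is non-trivial, and the candidate-prefix loop mirrors the proof's use of the shortest non-empty string.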
<b><br /></b>
<b>⇐</b> If u=w<sup>i</sup> and v=w<sup>j</sup> then uv=vu=w<sup>i+j</sup>.<br />
<br />
<b>⇒</b> Trivial if L contains at most one string. Assume L contains at least two strings.<br />
<br />
Proof by induction on the length of the shortest non-empty string v in L.<br />
<br />
Base case: |v|=1<br />
Suppose there is an x in L with x not in v*. Then x = v<sup>i</sup>bz for some i≥0, some character b in Σ-{v}, and some (possibly empty) string z. Then the i+1st character of xv is b and the i+1st character of vx is v, contradicting xv=vx. So we can let w=v.<br />
<br />
Inductive case: |v|=k>1.
<br />
<br />
Lemma 1: Let y=v<sup>r</sup>u be in L (u might be empty). Then uv=vu.<br />
Proof: Since yv=vy we have v<sup>r</sup>uv=vv<sup>r</sup>u=v<sup>r</sup>vu. So vu=uv.<br />
<br />
Let x in L be a string not in v* (if no such x exists we can let w=v). Pick j maximum such that x=v<sup>j</sup>z with z not the empty string. By Lemma 1 we have zv=vz. Note |z| < |v| otherwise v is a prefix of z, contradicting the maximality of j.<br />
<br />
Lemma 2: For all y in L we have yz=zy.<br />
Proof: As before let y = v<sup>r</sup>u. By Lemma 1, uv=vu<br />
Since yx=xy we have v<sup>r</sup>uv<sup>j</sup>z = v<sup>j</sup>zv<sup>r</sup>u<br />
v<sup>r</sup>uv<sup>j</sup>z = v<sup>r</sup>uzv<sup>j</sup> by swapping v's and z's.<br />
v<sup>j</sup>zv<sup>r</sup>u = zv<sup>r</sup>uv<sup>j</sup> by swapping v's and z's, and v's and u's.<br />
So we have v<sup>r</sup>uzv<sup>j</sup> = zv<sup>r</sup>uv<sup>j</sup><br />
or yzv<sup>j</sup> = zyv<sup>j</sup> and thus yz=zy.<br />
<br />
By Lemma 2, the set L∪{z} commutes. Since 0 < |z| < |v| by induction L∪{z} is a subset of w* for some w so L is a subset of w*.http://blog.computationalcomplexity.org/2015/11/a-silly-string-theorem.htmlnoreply@blogger.com (Lance Fortnow)4tag:blogger.com,1999:blog-3722233.post-6976871382979968922Mon, 16 Nov 2015 17:20:00 +00002015-11-16T14:15:24.782-05:00Is the word Quantum being used properly by civilians? Understood by them? You know the answer.<br />
I've seen the word `civilians' expanded in use from non-military to non-X for some X. Not sure I've ever seen `civilians' mean `people who don't do math stuff' until the title of today's post. Well, there is a first time for everything.<br />
<br />
I've blogged in the past about the use of the word Quantum (<a href="http://blog.computationalcomplexity.org/2008/12/quantum-leapquantum-of-solacewhat-does.html#comment-form">here</a>). The phrase <i>Quantum Leap</i> means a BIG leap, whereas Quantum stuff is small. Though, to be fair, the discovery (invention?) of Quantum Mechanics was a big leap. So maybe that IS proper use. The James Bond movie <i>Quantum of Solace</i> uses Quantum to mean small, the ONLY time I've seen Quantum used to mean small, so kudos to the title of an absolutely awful movie. Commenters on my prior blog on the subject pointed out that the original meaning of <i>quantum</i> was <i>quantity or amount without regard to size or discreteness</i>. I think using it that way now would be very rare.<br />
<br />
I came across a theatre called <a href="http://www.quantumtheatre.com/about/what-we-are/">Quantum Theatre.</a> What is Quantum Theatre? The following are actual quotes from their website. <br />
<br />
<i>Quantum artists mine all kinds of non-traditional spaces for the sensory possibilities they offer when combined with creative design.</i><br />
<br />
<i> We find it meaningful to place the audience and performer together, the moving parts inside the works.</i><br />
<br />
<i> We want to move people with our experiments.</i><br />
<br />
<i>The shows run the gamut from those you thought you knew but now
experience like never before, to shows that didn’t exist until their
elements mixed in our laboratory.</i><br />
<br />
I came across this theatre in a place it didn't belong-- in the middle of an article about Google and NASA trying to build a quantum computer (see <a href="http://www.thedailybeast.com/articles/2015/09/30/google-and-nasa-team-up-on-quantum-computer.html">here</a>). This news might be exciting but the article was full of mistakes and bad signs, so I'm not too excited about it. Plus the reference to Quantum Theatre is just odd.<br />
<br />
The BEST use of the word <i>quantum </i>that I've heard recently was in the episode <i>The Ricks Must Be Crazy</i> of the excellent TV show <i>Rick and Morty</i>:<br />
<br />
The car is broken<br />
<br />
Morty (a 14-year-old): W-What's wrong, Rick? Is it the Quantum carburetor or something?<br />
<br />
Rick (his grandfather, a brilliant scientist): Quantum carburetor? You can't just add a Sci-fi word to a car word and hope it means something.<br />
<br />
<br />http://blog.computationalcomplexity.org/2015/11/is-word-quantum-being-used-properly-by.htmlnoreply@blogger.com (GASARCH)5tag:blogger.com,1999:blog-3722233.post-1680472021350980055Thu, 12 Nov 2015 13:30:00 +00002015-12-14T08:15:19.236-05:00A Primer on Graph Isomorphism<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-huVsxq73vP0/VkOIj2UgO5I/AAAAAAABVgc/KFrQJ8bJkJI/s1600/CTeqULsVAAA-Edn.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="106" src="http://1.bp.blogspot.com/-huVsxq73vP0/VkOIj2UgO5I/AAAAAAABVgc/KFrQJ8bJkJI/s400/CTeqULsVAAA-Edn.jpg" width="400" /></a></div>
<br />
I spent 14 years on the faculty at the University of Chicago. I know László Babai well; we collaborated on some of my best-known work. I also know Ryerson 251, a room where I've seen hundreds of talks and given more than a few myself. So I could imagine the excitement in that room on Tuesday as Babai gave the most anticipated talk in the history of theoretical computer science, the first of <a href="http://people.cs.uchicago.edu/~laci/quasipoly.html">several talks</a> Babai is giving on his new algorithm for graph isomorphism [<a href="http://people.cs.uchicago.edu/~laci/2015-11-10talk.mp4">Video</a>]. Gabriel Gaster extensively <a href="https://storify.com/ptwiddle/babai-s-first-graph-isomorphism-talk">live-tweeted</a> the event. Jeremy Kun has <a href="http://jeremykun.com/2015/11/12/a-quasipolynomial-time-algorithm-for-graph-isomorphism-the-details/">some details</a>. Update (12/14): <a href="http://arxiv.org/abs/1512.03547">Paper now posted</a>.<br />
<br />
For this post instead of doing a bad job trying to overview Babai's proof, I'll explain the graph isomorphism problem and why it is important in the complexity world.<br />
<br />
Suppose we have two high schools, say HS North and HS South, each with 1000 students. Consider a diagram (or graph) containing a point for each student at HS North with lines between students who are facebook friends, and a similar diagram for HS South. Is there a 1-1 map from the students at HS North to the students at HS South so that these diagrams look identical? That's the graph isomorphism problem.<br />
<br />
To determine whether these two graphs were isomorphic you could look at all the possible mappings between students, but that's 1000! or more than 4x10<sup>2567</sup> possible maps. It has long been known how to search a smaller number of possibilities, especially if we put some restrictions on the diagrams, but the count is always exponential in n (the number of students) in the worst case. Ideally we'd like a polynomial-time algorithm and Babai gets very close, an algorithm that runs in time n<sup>log<sup>k</sup>n</sup> for some fixed k.<br />
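On a toy scale the brute-force search is easy to write down (my own illustrative sketch, not anything from Babai's algorithm):

```python
from itertools import permutations

def isomorphic(n, edges1, edges2):
    """Decide isomorphism of two graphs on vertices 0..n-1 by trying
    all n! relabelings -- feasible only for tiny n, which is exactly
    the exponential blow-up discussed above."""
    e2 = {frozenset(e) for e in edges2}
    return any({frozenset({pi[u], pi[v]}) for u, v in edges1} == e2
               for pi in permutations(range(n)))

# A 4-cycle matches another 4-cycle, but not a 3-edge path.
cycle_a = [(0, 1), (1, 2), (2, 3), (3, 0)]
cycle_b = [(0, 2), (2, 1), (1, 3), (3, 0)]
path    = [(0, 1), (1, 2), (2, 3)]
assert isomorphic(4, cycle_a, cycle_b)
assert not isomorphic(4, cycle_a, path)
```

At n = 1000 this loop would have to consider the 4x10<sup>2567</sup> maps mentioned above, which is why the quasipolynomial bound is such a big deal.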
<br />
Graph Isomorphism is one of those few problems, like factoring, known to be in NP but not known to be in P or NP-complete. Graph non-isomorphism is the poster child for the class AM, the problems solved by a randomized verifier asking a single question to a powerful prover. Graph non-isomorphism in AM implies that Graph Isomorphism is not likely NP-complete and that, under reasonable derandomization assumptions, Graph non-isomorphism is in NP. Köbler, Schöning and Torán wrote a <a href="http://www.springer.com/us/book/9780817636807">whole book</a> on the computational complexity issues of graph isomorphism.<br />
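The AM protocol for graph non-isomorphism is simple enough to simulate (a toy sketch of mine, with an exhaustive-search prover standing in for the all-powerful one): the verifier secretly permutes a random one of the two graphs and asks the prover which graph it started from. If the graphs are non-isomorphic, an honest prover can always tell; if they are isomorphic, the prover can do no better than guessing.

```python
import random
from itertools import permutations

def relabel(edges, pi):
    """Apply the vertex relabeling pi to a set of frozenset edges."""
    return {frozenset(pi[v] for v in e) for e in edges}

def prover_answer(n, g0, h):
    """A toy 'all-powerful' prover, simulated by exhaustive search:
    answer 0 if h is a relabeling of g0, else 1."""
    return 0 if any(relabel(g0, pi) == h
                    for pi in permutations(range(n))) else 1

def gni_round(n, g0, g1):
    """One round of the AM protocol for graph NON-isomorphism."""
    b = random.randrange(2)              # verifier's secret coin
    pi = list(range(n))
    random.shuffle(pi)
    h = relabel([g0, g1][b], pi)         # secretly permuted graph
    return prover_answer(n, g0, h) == b  # accept iff prover identifies b

# Triangle vs path: non-isomorphic, so the honest prover always wins.
g0 = {frozenset(e) for e in [(0, 1), (1, 2), (2, 0)]}
g1 = {frozenset(e) for e in [(0, 1), (1, 2)]}
assert all(gni_round(3, g0, g1) for _ in range(20))
```

When g0 and g1 are isomorphic, every permuted graph h is consistent with both, so a cheating prover is caught with probability 1/2 per round.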
<br />
Even small progress in graph isomorphism creates waves. At a theory conference in the late 80's a speaker caused a major stir when he casually mentioned he had a proof (he didn't) that Graph non-isomorphism was in NP. Babai's announcement caused a huge tsunami and those of us who know him realize he wouldn't make such an announcement without being sure he has a proof. The talk put together a large number of ideas from combinatorics and graph theory. My impression is that those who saw the talk didn't leave convinced of the proof, but did feel Babai had found the pieces to make it work.<br />
<br />
With Babai's breakthrough algorithm, the smart money says now that graph isomorphism sits in P. It took decades to get from quasipolynomial time to polynomial time for primality testing and the same time frame may be needed to get polynomial time for graph isomorphism. But it will likely happen and the complexity of graph isomorphism then gets a whole lot simpler.<br />
<br />
A couple of thoughts: All the breakthrough results that I can remember were released as papers, ready to devour. This is the first result of this caliber I remember being announced as a talk.<br />
<br />
Also we think of theory as a young person's game, most of the big breakthroughs coming from researchers early in their careers. Babai is 65, having just won the <a href="http://www.sigact.org/Prizes/Knuth/">Knuth Prize</a> for his lifetime work on interactive proofs, group algorithms and communication complexity. Babai uses his extensive knowledge of combinatorics and group theory to get his algorithm. No young researcher could have had the knowledge base or maturity to be able to put the pieces together the way that Babai did.<br />
<br />
More on Babai and graph isomorphism from <a href="http://www.scottaaronson.com/blog/?p=2521">Scott</a>, <a href="https://lucatrevisan.wordpress.com/2015/11/03/laci-babai-and-graph-isomorphism/">Luca</a>, <a href="http://blog.computationalcomplexity.org/2015/11/looking-forward-to-gi-result.html">Bill</a>, <a href="https://plus.google.com/+TimothyGowers0/posts/JbSPELbKQED">Tim</a>, <a href="https://rjlipton.wordpress.com/2015/11/04/a-big-result-on-graph-isomorphism/">Dick and Ken</a>, <a href="https://www.reddit.com/r/math/comments/3sdixw/babais_breakthrough_on_graph_isomorphism/">Reddit</a>, <a href="https://www.sciencenews.org/article/new-algorithm-cracks-graph-problem">Science News</a>, <a href="https://www.newscientist.com/article/dn28478-complex-problem-made-simple-sends-computer-scientists-wild">New Scientist</a> and <a href="http://news.sciencemag.org/math/2015/11/mathematician-claims-breakthrough-complexity-theory">Science</a>.http://blog.computationalcomplexity.org/2015/11/a-primer-on-graph-isomorphism.htmlnoreply@blogger.com (Lance Fortnow)5tag:blogger.com,1999:blog-3722233.post-5246892705611455093Mon, 09 Nov 2015 02:59:00 +00002015-11-08T21:59:58.656-05:00Looking forward to the GI resultAs you all know Laszlo Babai will give a talk Tuesday Nov 10 on a result: GI in quasipolynomial time (2 to a polylog). Other theory blogs have already commented on this (<a href="https://rjlipton.wordpress.com/2015/11/04/a-big-result-on-graph-isomorphism/">GLL</a>,<a href="https://lucatrevisan.wordpress.com/2015/11/03/laci-babai-and-graph-isomorphism/">In Theory</a>,<a href="http://www.scottaaronson.com/blog/?p=2521">Shetl-Opt</a>)<br />
<br />
When I was in Graduate School (early 1980's) it looked like GI would get into P. Three key results: graphs of bounded degree, graphs of bounded genus, and graphs of bounded eigenvalue multiplicity, were all shown to be in P. These results used group theory and linear algebra in serious ways, so the thought was that more advanced group theory, perhaps the classification of all finite simple groups (CFSG), would be used to show GI in P. If CFSG were used in an algorithm for GI in P then the algorithm might have an enormous constant, but that's okay since the quest for GI in P is not for practical reasons (I think that all the people who want to solve it already have fast enough algorithms). I was looking forward to the debates on <i>whether an algorithm with that big a constant would count as poly time</i> (I would say yes). But no algorithm for GI in P, using CFSG or not, was found. Also note that it was possible (and is still possible) that GI is NOT in P. In my 2012 survey of P vs NP I also asked about GI. Of the 20 people who responded, 14 thought GI is in P, 6 thought not. Note that GI is not known to be in co-NP or BPP. It IS in IP[2], and if GI were NPC<br />
then PH would collapse, so it is unlikely that GI is NP-complete. This tells us NOTHING about whether it is in P.<br />
<br />
An analogous thing happened with PRIMES. Lots of results that got it almost in P, lots of hard number theory used, but then the progress stopped. Two differences: most people thought PRIMES IS in P (likely because it's in coNP and BPP), and this was finally proven in 2002.<br />
<br />
Is Babai's proof likely to be correct? There are three parameters to consider when looking at that question: (1) who is making the claim, (2) how likely the claim is to be true, and (3) how likely the claim is to be provable with what is known today. You can give different weights to each one.<br />
<br />
Laszlo Babai is an expert in GI with a track record in the area (reading that over again, it undersells the case: Babai is, as the kids say, awesome!). That GI is in quasipolynomial time is quite believable; even the 6 who think that GI is not in P might believe that. It's harder to know if today's techniques are up to the task; however, there are no results like `group theory won't suffice' or any other known obstacles, so the claim certainly seems provable with today's technology.<br />
<br />
Here's hoping...http://blog.computationalcomplexity.org/2015/11/looking-forward-to-gi-result.htmlnoreply@blogger.com (GASARCH)2tag:blogger.com,1999:blog-3722233.post-7684043071724994534Thu, 05 Nov 2015 16:10:00 +00002015-11-05T13:30:43.095-05:00Computation and Journalism Part 2<i>The <a href="http://www.scottaaronson.com/blog/?p=2521">theory</a> <a href="https://rjlipton.wordpress.com/2015/11/04/a-big-result-on-graph-isomorphism/">blogosphere</a> is atwitter with an announcement of a talk next Tuesday at Chicago by László Babai <a href="https://calendar.google.com/calendar/render?eid=czNzOXNtZ2tydG00OG5obDJlZ3I3c21uY2cgYzU3c2hpY2k0NW0xN3FsMGdodmw1NmVrMzhAZw&ctz=America/Chicago">Graph Isomorphism in Quasipolynomial Time</a>. <a href="https://lucatrevisan.wordpress.com/">Luca</a> will blog from the scene and we will have a full report as details emerge. </i><br />
<br />
More from the Comp and Journalism conference<br />
<br />
0) (ADDED RIGHT BEFORE POSTING) The conference was about how journalists do or should use technology. Part of this is how news gets to people. Twitter and blogs are one method, as the above note about Babai's talk shows.<br />
<br />
1) There was a panel discussion on context. How one reads an article depends on where one is coming from. Here is a strange thing that technology has allowed journalists to do: an article can differ slightly depending on where it's being read. For example, if someone from New York is reading it, it may refer to Broadway, whereas if someone in LA is reading it, it might refer to Hollywood. The tech also exists to give the (online) reader a choice, so you can read the article as if you were in place X. This scares me a bit: what if an article can be tinkered with to appeal to left-handed Latino lesbians who work on the Lovász Local Lemma? Isn't our country fragmented enough? STILL, the point that context matters came through nicely.<br />
<br />
<br />
2) There was a panel of three ACTUAL JOURNALISTS (I think all of the other speakers were academics) who used BIG DATA or MACHINE LEARNING or FILL-IN-BUZZWORD to write articles. ProPublica, a nonprofit newspaper (aren't they all?), did an article about rating doctors<a href="http://projects.propublica.org/graphics/investigating-doctors"> (here)</a> that seems to be very well checked (example: if a patient has to come back because of a complication, is it the surgeon's fault? Hard to answer). However, some doctors (those that got low ratings?) complained that the article was not peer reviewed. This brings up a question: journalists run things by fact checkers and proofreaders but do not have peer review, while academics do. Which is the better system? Another journalist did a study of the Supreme Court and found that there is a small set of well-connected lawyers (one used to play darts with Alito) who are involved in over half of the cases the court sees. What interested me is: what if you have an idea for an article, you do ALL of the number crunching, you bring in ML people to help, and you find out in the end OH, the Supreme Court is actually doing a fair job, or OH, doctors are pretty good? No story! They said this happens about 1/3 of the time: either you don't have enough data for a story or the story is not really a story.<br />
<br />
3) There was a paper about YELP ratings of doctors. It seems that the actual doctor-stuff is rarely mentioned; usually it was how long the wait time was, how boring the magazines in the office were, etc. Also, most doctors got either 1 star or 5 stars. One thing they didn't mention but I will: Walter Palmer, the dentist who killed a lion in Africa, got lots of very negative YELP reviews from people who were never patients of his. So YELP ratings can be skewed by things independent of what is supposed to be measured. That is, of course, an extreme case.<br />
<br />
4) Keynote by Chris Wiggins, an academic who, on his sabbatical, worked for the NYT on issues of <i>How many copies of the paper should we print?</i> and other Operations Research-type questions. Print subscriptions still outnumber online subscriptions; online is doing pretty well and bringing in money, but they still NEED to find a better business model. I am reminded that when Jeff Bezos bought the Wash Post, Stephen Colbert said <i>there are more people buying newspapers than there are people buying newspapers.</i><br />
(My spellchecker thinks online is not a word. Everyone else does.)<br />
<br />
5) There was a panel on Education. From their point of view it was very pessimistic: most schools don't offer courses in how to use technology in journalism, there are no real books, and there is no agreement on what's important. And I suspect they will have a hard time updating courses once they have them. I wonder if the early days of computer science were like that.<br />
<br />
6) The issue of jobs in journalism going away was not quite ignored, but not quite confronted either. Some people told me that journalism schools are not being honest with their students about the dismal job market. Then again, they are journalism students; they should investigate!<br />
<br />
7) A journalism student told me of a case going to the Supreme Court that may be very important for privacy, Spokeo v. Robins:<a href="https://www.epic.org/amicus/spokeo/"> here</a><br />
<br />
8) UPSHOT: I am glad I went; I learned some things outside of what I usually think about. Unlike at RATLOCC (Ramsey Theory and Logic), I was able to tell my mom what I learned. Since I didn't bring my laptop and had no TV access, I was able to work on some math and had a minor breakthrough. But that's for a later blog.<br />
(My spellcheck thinks <i>blog</i> is not a word. It also thinks spellcheck is not a word.)http://blog.computationalcomplexity.org/2015/11/computation-and-journalism-part-2.htmlnoreply@blogger.com (GASARCH)0