tag:blogger.com,1999:blog-3722233Fri, 22 May 2015 21:16:19 +0000typecastfocs metacommentsComputational ComplexityComputational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarchhttp://blog.computationalcomplexity.org/noreply@blogger.com (Lance Fortnow)Blogger2276125tag:blogger.com,1999:blog-3722233.post-589748657827195716Thu, 21 May 2015 23:56:00 +00002015-05-21T18:58:48.727-05:00An Intentional and an Unintentional teaching experiment regarding proving the number of primes is infinite.<br />
I taught Discrete Math Honors this semester. Two of the days were cancelled entirely because of snow (the entire school was closed) and four more I couldn't make because of health issues (I'm fine now). People DID sub for me on those four days and DID cover what I would have done. I covered some crypto which I had not done in the past.<br />
<br />
Because of all of this I ended up not covering the proof that the primes were infinite until the last week.<br />
<br />
INTENTIONAL EXPERIMENT: Rather than phrase it as a proof by contradiction I phrased it, as I think Euclid did, as<br />
<br />
Given primes p1,p2,...,pn you can find a prime NOT on the list. (From this it easily follows that the primes are infinite.)<br />
<br />
Proof: the usual one. Look at p1xp2x...xpn + 1; either it's prime (and hence a prime not on the list) or it has a prime factor not on the list.<br />
<br />
The nice thing about doing it this way is that there are EASY examples where p1xp2x...xpn+1 is NOT prime<br />
<br />
(e.g., the list {2,5,11} yields 2x5x11 + 1 = 111 = 3 x 37, so 3 and 37 are both primes not in {2,5,11})<br />
<br />
<br />
whereas if you always use the product of the first n primes plus 1, you don't get to a non-prime until 2x3x5x7x11x13 + 1 = 30031 = 59 x 509.<br />
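The argument can be checked mechanically. Here is a minimal Python sketch (my own illustration, not from the post; the function name is mine) that, given a finite list of primes, finds a prime not on the list exactly as in the proof:

```python
def prime_not_on_list(primes):
    """Given a finite list of primes, return a prime not on the list,
    following Euclid's argument: factor (product of the list) + 1."""
    m = 1
    for p in primes:
        m *= p
    m += 1  # m leaves remainder 1 when divided by every prime on the list
    # Find the smallest prime factor of m by trial division.
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m  # m itself is prime

# The two examples from the post:
print(prime_not_on_list([2, 5, 11]))            # 111 = 3 x 37, returns 3
print(prime_not_on_list([2, 3, 5, 7, 11, 13]))  # 30031 = 59 x 509, returns 59
```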
<br />
<br />
UNINTENTIONAL: Since I did the proof at the end of the semester they ALREADY had some proof maturity, more so than had I done it (as I usually do) about 1/3 of the way through the course.<br />
<br />
They understood the proof better than prior classes had, even prior honors classes. Hence I should prove all of the theorems in the last week! :-)<br />
<br />
But seriously, they did understand it better; I just don't know which of the two factors, or what combination of them, caused it. Oh well.<br />
<br />
<br />http://blog.computationalcomplexity.org/2015/05/an-intentinoal-and-unintentional.htmlnoreply@blogger.com (GASARCH)0tag:blogger.com,1999:blog-3722233.post-3603431268870041295Tue, 19 May 2015 00:24:00 +00002015-05-18T19:24:04.704-05:00Theory Jobs 2015In the fall we <a href="http://blog.computationalcomplexity.org/2014/10/2014-fall-jobs-post.html">list theory jobs</a>, in the spring we see who got them. Like <a href="http://blog.computationalcomplexity.org/2014/05/theory-jobs-2014.html">last year</a>, I created a fully editable <a href="https://docs.google.com/spreadsheets/d/14pOq1dEI5X77LoSFQIMC0B0x-TZMv4rRrGiC3WFMqA0/edit?usp=sharing">Google Spreadsheet</a> to crowd source who is going where. Ground rules:<br />
<ul>
<li>I set up separate sheets for faculty, industry and postdoc/visitors.</li>
<li>People should be connected to theoretical computer science, broadly defined.</li>
<li>Only add jobs that you are absolutely sure have been offered and accepted. This is not the place for speculation and rumors.</li>
<li>You are welcome to add yourself, or people your department has hired.</li>
</ul>
This document will continue to grow as more jobs settle. So check it often.<br />
<br />
<iframe frameborder="0" height="500" src="https://docs.google.com/spreadsheets/d/14pOq1dEI5X77LoSFQIMC0B0x-TZMv4rRrGiC3WFMqA0/pubhtml?widget=true&headers=false" width="575"></iframe>
<a href="https://docs.google.com/spreadsheets/d/14pOq1dEI5X77LoSFQIMC0B0x-TZMv4rRrGiC3WFMqA0/edit?usp=sharing">Edit</a>http://blog.computationalcomplexity.org/2015/05/theory-jobs-2015.htmlnoreply@blogger.com (Lance Fortnow)10tag:blogger.com,1999:blog-3722233.post-8322009766625865132Thu, 14 May 2015 13:34:00 +00002015-05-14T08:39:35.094-05:00Fiftieth Anniversary of the Publication of the seminal paper on Computational Complexity<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-sMoKiQvVwB8/VVEMTLT-beI/AAAAAAAA2VQ/LMLeAxFklRg/s1600/GE.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="281" src="http://1.bp.blogspot.com/-sMoKiQvVwB8/VVEMTLT-beI/AAAAAAAA2VQ/LMLeAxFklRg/s400/GE.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Juris Hartmanis and Richard Stearns in a photo dated May 1963. The main theorem from their paper is on the board later improved by <a href="http://doi.acm.org/10.1145/321356.321362">Hennie and Stearns</a>. Photo courtesy of Richard Stearns.</td></tr>
</tbody></table>
The seminal paper of Juris Hartmanis and Richard Stearns, <a href="http://dx.doi.org/10.2307/1994208">On the Computational Complexity of Algorithms</a>, appeared in the Transactions of the American Mathematical Society in the May 1965 issue. This paper gave the name to the field of Computational Complexity which I took for the name of this blog. <a href="http://amturing.acm.org/award_winners/hartmanis_1059260.cfm">Hartmanis</a> and <a href="http://amturing.acm.org/award_winners/stearns_1081900.cfm">Stearns</a> received the Turing Award in 1993 for this work.<br />
<br />
I've mentioned this paper several times in the blog before, including as a <a href="http://blog.computationalcomplexity.org/2005/02/favorite-theorems-seminal-paper.html">favorite theorem</a>. Hartmanis and Stearns first formalized and truly popularized the idea of measuring time and other resources as a function of the problem size, laying the foundation for virtually every paper in computational complexity and algorithms to follow.<br />
<br />
Both <a href="http://dx.doi.org/10.1109/MAHC.1981.10005">Hartmanis</a> and <a href="http://dx.doi.org/10.1007/978-1-4612-4478-3_2">Stearns</a> wrote about those early days. The main breakthroughs for their paper started in November 1962, and on December 31 Hartmanis wrote in his logbook "This was a good year." A good year indeed.http://blog.computationalcomplexity.org/2015/05/fiftieth-anniversary-of-publication-of.htmlnoreply@blogger.com (Lance Fortnow)4tag:blogger.com,1999:blog-3722233.post-1981922211527036900Mon, 11 May 2015 12:59:00 +00002015-05-21T18:37:58.457-05:00The law of the excluded middle of the road republicans<br />
In the book <i>Hail to the Chiefs</i>, about the presidents, the author, when pointing to a race between two people who had no business being president (I think it was Franklin Pierce vs Winfield Scott), wrote something like <i>That's the thing about elections, someone has to win.</i><br />
<br />
<i> </i>Looking at the republicans running for the nomination I can (with the help of reading many of Nate Silver's Columns) tell you why, for each one, they can't win the nomination. Note that this is not a partisan thing. But again, someone has to win. Is it possible to have the statements<br />
<br />
A1 can't win AND A2 can't win AND .... AND An can't win<br />
<br />
and yet someone wins?<br />
<br />
Here is a list of the candidates and why they can't win.<br />
<br />
<ol>
<li>Jeb Bush is what passes for a front runner nowadays. He has the money but does not have the party (very few endorsements), and he is not doing well in polls in Iowa or New Hampshire, the first two states. Possibly because he does not hate Obama enough. It's an interesting question whether the party, the people, or the money picks the candidate. In the past it's been the party and the money together, but that might be changing. CAN'T WIN: Viewed as too moderate by the people who go to caucuses. Also some people may be put off by the family name. THOUGHT: If H CLINTON were not the likely Democratic candidate, making the family name less of an issue in the general, I don't think he would have run at all. </li>
<li>Marco Rubio. Running for president for the first time is hard. The Republicans rarely nominate someone who hadn't run before (W in 2000; Ford in 1976 is an outlier since he was already prez). Perhaps the voters/party/money like someone they are familiar with, OR perhaps first timers make mistakes. CAN'T WIN: He will make some mistake, and some might think it's not his turn since he's so young. Also, senators have it rough since they sometimes vote for bills in funny ways. TRIVIA: The electoral college reps from state X cannot cast both their Prez and Veep votes for people from state X. So you won't see a Bush-Rubio or Rubio-Bush ticket.</li>
<li>Rick Perry. He lost in 2012 because either he was too soft on immigrants (he supported some sort of Dream Act) or because of his Whoops moment in a debate. Frankly I have sympathy for that one--- I also sometimes forget which cabinet positions I want to get rid of. He has tried to cure both of his problems by flip-flopping (or evolving) on immigration and by wearing glasses so at least he looks smart. CAN'T WIN: His past failure makes him not look that serious this time around, and he will likely have another WHOOPS moment.</li>
<li>Scott Walker. Like Rubio, but might be in better shape because he's a governor. Still, people in his home state are beginning to turn on him, a bad sign. He may soon have the same problem that Gov Brownback of Kansas has--- you promise to cut taxes, close loopholes, and cut spending, which might actually be a good idea if done right, but instead you cut taxes, don't close loopholes, cut spending in stupid places like education, run up a big debt, destroy your state's economy, and ... get re-elected. CAN'T WIN: Having not run before, Walker will say something stupid. And as a gov of a moderate state I suspect some moderate things he did may be a problem for hard-core Republican voters. Plus his state's current problems may be an issue.</li>
<li>Ted Cruz. Which of these quotes did Ted Cruz really say and which did I hear in some satirical setting (as a fan of The Daily Show, The Colbert Report, John Oliver's show, The Capitol Steps, and others, I lose track of where I heard what): (1) <i>There was no ebola before Obamacare,</i> or (2) <i>Net neutrality is Obamacare for the internet.</i> I'll answer at the end of the post. He is now using Obamacare himself. CAN'T WIN: A niche candidate with a small but loyal set of supporters. Not enough. CAVEAT: Might be running to get some points of view out there, like Adlai Stevenson and Barry Goldwater, though they got all the way to the nomination, which I doubt he can. </li>
<li>Rand Paul. Interesting mathematically: an authenticity/electability tradeoff. Some people are attracted to his libertarianism, authenticity, and consistency, but not enough to get him anywhere close to the nomination. So he changes some of his positions to be more mainstream but less libertarian, authentic, and consistent. Alas, those who trade their integrity for electability end up with neither. CAN'T WIN: His evolving views might lose him his base but not gain him any establishment cred. Also he has the chicken-and-egg problem that even people who like him don't think he can win the general election. (Ted Cruz and others on this list may also have this problem.)</li>
<li>Mike Huckabee. Won't get much support beyond his Social Conservative base. His stance on Same Sex Marriage will hurt him outside of the social conservatives, especially in 2015 (as opposed to his last run in 2008). I'm surprised he's running- he got what he wanted from his last run, a show on FOX News. CAN'T WIN: Was a moderate on some economic and immigration issues (he may also `evolve', which won't work) and a conservative on social issues--- just the wrong mix for current Republican primary voters. Note that many candidates are trying to avoid the Same Sex Marriage question as they know that being opposed to it will hurt them in the general. Plus some don't want to be on the wrong side of history (or as the kids say, WSOH). Politics- sometimes you're forced to have a public opinion that you disagree with and know will put you on the WSOH, but you're stuck with it. I think Huckabee is sincere in his opposition to same sex marriage but he must surely know he's on the WSOH.</li>
<li>Ben Carson. I suspect he is actually running to be a FOX News commentator. At that he might succeed. CAN'T WIN: A first timer who never ran for anything; he'll be a curiosity, not a candidate. Which of the following did he say: <i>(1) Obama is a sociopath (2) Obamacare is like slavery</i>? This may even hurt him with the Republican hard core, who want someone who can win. CAVEAT: Is being African-American going to help or hurt? I doubt his campaign will get far enough to tell. The other candidates and the debate panelists (in the sanctioned debates) will treat him with kid gloves to avoid being called racist.</li>
<li>Carly Fiorina. She said that our founding fathers did not intend there to be a political class, what we now call politicians. They intended for ordinary people (like the president of HP), for the good of their community, to serve in office. She left out that the founding fathers also did not intend for women to be president. CAN'T WIN: A first timer who never ran for anything. (Correction added later: she ran for Senator of California. She got the nomination but lost to incumbent Barbara Boxer.) CAVEAT: Is being female going to help or hurt? I doubt her campaign will get far enough to tell. The other candidates and debate panelists (in the sanctioned debates) will treat her with kid gloves to avoid being called sexist. HER HOOK: She claims that as a woman she can neutralize H Clinton's woman-advantage in the general. Interesting that she is making an electability argument instead of a policy argument, given that she has no chance of being elected.</li>
<li>Chris Christie. CAN'T WIN: Hated inside of New Jersey. Hated outside of New Jersey.</li>
<li>Bobby Jindal. Once said the Republicans have to stop being the stupid party. Later said some stupid things about Muslims in America. CAN'T WIN: If he ran as the moderate sane voice who will rescue the party from itself, he might get some traction. If he runs as anything else he has too much competition. Also a first-time runner, which is hard. </li>
<li>Lindsey Graham. CAN'T WIN: Has worked with Democrats, which should be a PRO but it's a CON. </li>
<li>John Kasich. Gov of Ohio. CAN'T WIN: Not that well known. Democrats have nominated unknowns (B Clinton, B Obama) but Republicans almost never do (W might count).</li>
</ol>
<br />
There may be more (Donald Trump anyone?). But my point is that it seems like nobody can win, yet someone has to. Do you know other examples where A OR B OR C has to be true, yet none of A,B,C look plausible?<br />
<br />
I can phrase this another way:<br />
<br />
novices can't win (e.g., Ben Carson) AND first timers can't win (e.g., Marco Rubio) AND too moderate can't win (e.g., the perception of Jeb) AND unknowns can't win (e.g., John Kasich).<br />
<br />
They can't all be right, but looking at it, `first timers' is probably the weak link in my reasoning. I could replace it with `inexperienced politician'. Even so, it sure looks like nobody can win.<br />
<br />
Ted Cruz: Both of the quotes I attribute to him he really did say.<br />
<br />
I don't know Ben Carson's original quote but he backtracked to <i>Obama reminds you of a psychopath,</i> which is much better than saying Obama IS a psychopath. But he never said sociopath, so the quote I gave is NOT from Ben Carson.<br />
<br />
On Obamacare he said <i>it's the worst thing to happen in America since slavery.</i> But later, opposite-of-backtracking, he said it was <i>in a way like slavery because it robs you of your ability to control your own life.</i><br />
<br />
<br />http://blog.computationalcomplexity.org/2015/05/the-law-of-excluded-middle-of-road.htmlnoreply@blogger.com (GASARCH)8tag:blogger.com,1999:blog-3722233.post-2279347994238839622Thu, 07 May 2015 11:30:00 +00002015-05-07T06:30:04.109-05:00The Virtuous CycleLast week we had a celebration of the new CS graduates at Georgia Tech where each graduating student said what their plans were for next year. Other than the few going to grad school, they all have great jobs, but the difference I see this year is how many are staying in Atlanta. Any CS student who wants to stay in Atlanta can find a great job in Atlanta.<br />
<div>
<br /></div>
<div>
A large number of companies are moving their headquarters to, or establishing large facilities in, the Atlanta area, and the reasons they give almost always include the students and researchers at Georgia Tech. We're starting to grow a good start-up culture here as well. </div>
<div>
<br /></div>
<div>
Companies in Atlanta want our students and our research. That helps add to the prestige of Georgia Tech which in turn draws more companies. A virtuous cycle. A similar story is playing out in several other cities across the country and I even saw it in tiny but growing Bozeman where I visited Montana State earlier this week. Computer Science these days plays a major role if not the largest role in many of these industries.<br />
<br />
All this growth leads to challenges such as finding the people and resources to meet the growing demands. All told, though, it's a good problem to have.<br />
<br />
We don't want the success of the STEM fields to come at the expense of the rest of academics. It shouldn't have to be CS vs Classics, Physics or Philosophy. How we make it all successful will be the great challenge in higher education in the years to come.</div>
http://blog.computationalcomplexity.org/2015/05/the-virtuous-cycle.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-4478424031861564777Tue, 05 May 2015 01:29:00 +00002015-05-11T09:07:29.749-05:00Sizes of DFAs, NFAs, DPDAs, UCFG, CFGs, CSLs.<br />
If A is a decider (e.g., a DFA) or generator (e.g., a CFG) then L(A) is the language that it decides or generates.<br />
<br />
The following are well known:<br />
<br />
L(DFA) = L(NDFA) ⊂ L(DPDA) ⊂ L(PDA) ⊂ L(CSL).<br />
<br />
We are concerned with the <i>size</i> of these devices. For a DFA or NDFA the size is the number of states; for a DPDA or PDA the size is the number of states plus the size of the stack alphabet; and for a CSL it's the number of nonterminals. <br />
<br />
If D and E are two classes of devices (e.g., DFAs and DPDAs) then <i>a bounding function for (D,E)</i> is a function f such that if L is recognized by both a D-device and an E-device, and L is recognized by an E-device of size n, then it is recognized by a D-device of size ≤ f(n). We abbreviate this as b-fun.<br />
<br />
Readers of this column likely know that f(n)=2^n is a b-fun for (DFA,NFA) and probably know that this is tight. Below are some results that I suspect many readers don't know. Some of them may be suitable for inclusion in an undergrad theory class. In what is below, ≤ means Turing reducibility (Turing-less-than-or-equal).<br />
<br />
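As a concrete illustration of the 2^n bound (my own sketch, not from the post; the function names are mine): the classic witness language is "the n-th symbol from the end is 1," which has an (n+1)-state NFA, while the subset construction produces 2^n reachable DFA states.

```python
def nfa_for_nth_from_end(n):
    """NFA with n+1 states (0..n) accepting binary strings whose n-th
    symbol from the end is 1.  State 0 loops and nondeterministically
    guesses which 1 is the n-th from the end."""
    delta = {(0, '0'): {0}, (0, '1'): {0, 1}}
    for i in range(1, n):          # states 1..n-1 just count down to n
        delta[(i, '0')] = {i + 1}
        delta[(i, '1')] = {i + 1}
    return delta, 0                # state n (the accepting state) has no moves

def reachable_dfa_states(n):
    """Run the subset construction and count the reachable DFA states."""
    delta, start = nfa_for_nth_from_end(n)
    seen = {frozenset([start])}
    frontier = [frozenset([start])]
    while frontier:
        S = frontier.pop()
        for c in '01':
            T = frozenset(q for s in S for q in delta.get((s, c), ()))
            if T not in seen:
                seen.add(T)
                frontier.append(T)
    return len(seen)

for n in range(1, 6):
    print(n, reachable_dfa_states(n))   # 2, 4, 8, 16, 32 --- i.e., 2^n
```

The reachable subsets are exactly {0} ∪ S for S a subset of {1,...,n}, since state i is present iff the symbol read i positions ago was a 1, which is why the count is exactly 2^n.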
<ol>
<li> <a href="http://www.cs.umd.edu/~gasarch/BLOGPAPERS/stearnspda.pdf">Stearns</a> showed that f(n) = n^{n^{n^{O(n)}}} is a b-fun for (DFA,DPDA).</li>
<li> <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.129.8903&rep=rep1&type=pdf">Valiant </a>improved this to double exp for a b-fun for (DFA,DPDA).</li>
<li> <a href="http://people.csail.mit.edu/meyer/economy-of-description.pdf">Meyer and Fischer</a> showed the 2^n lower bound for (DFA,NDFA). They also showed a lower bound of 2^{2^{O(n)}} for (DFA,DPDA). I think the question of closing the gap between Valiant's result and the Meyer-Fischer result is still open; however, if you know a ref please leave a comment.</li>
<li><a href="http://www.cs.umd.edu/~gasarch/BLOGPAPERS/valiantpda.pdf">Valiant</a> showed that if f is a b-fun for (DPDA,UCFG) then HALT ≤ f.</li>
<li><a href="http://ecommons.library.cornell.edu/bitstream/1813/7319/1/76-277.pdf">Schmidt</a> showed that if f is a b-fun for (UCFG,CFG) then HALT ≤ f. </li>
<li><a href="http://www.cs.umd.edu/~gasarch/BLOGPAPERS/hartpda.pdf">Hartmanis</a> showed that if f is a b-fun for (DPDA,PDA) then HALT ≤ f</li>
<li><a href="http://www.cs.umd.edu/~gasarch/BLOGPAPERS/haypda.pdf">Hay </a> showed that if f is a b-fun for (DPDA,PDA) then f is NOT computable-in-HALT. </li>
<li><a href="http://arxiv.org/pdf/1503.08847v2.pdf">Beigel and Gasarch</a> prove a general theorem from which they obtain the following: (a) if f is a b-fun for (DPDA,PDA) then INF ≤ f (it is easy to show that there exists a b-fun f ≤ INF for (DPDA,PDA), so the Turing degree is now precise), and (b) the same as part (a) holds for (PDA,CSL).</li>
</ol>
Results 3,4,5,6,7 can be restated in the following way--- we'll do result 6 as an example:<br />
<br />
<i>If f ≤ HALT then there exist infinitely many n for which there is a language L_n such that (a) L_n is recognized by a DPDA, (b) there is a PDA for L_n of size n, and (c) any DPDA for L_n has size at least f(n). </i><br />
<br />
Are there any ``for almost all n'' type bounds? There are, but they are much weaker. The following theorems are from the Beigel-Gasarch paper pointed to above.<br />
<br />
<ol>
<li>For almost all n there exists a cofinite (hence DPDA-recognizable) L_n that has a PDA of size O(n) but any DPDA for it has size 2^{2^{n^{Ω(1)}}}. </li>
<li>Same as point 1 but for PDA,CSL.</li>
</ol>
<br />
Both results 1 and 2 above use natural languages, in that they were not created for the sole purpose of proving the theorem--- no diagonalization (my spell check says that's spelled wrong but I think it's spelled right). Using a construction, Beigel-Gasarch obtained (Meyer probably had the result 40 years earlier with a different proof; see the Beigel-Gasarch paper for historical details) that if f ≤ HALT then for almost all n there is a language L_n such that L_n has a CSL of size n but any PDA for it has size at least f(n). <br />
<br />
<br />
<br />
<br />
<br />http://blog.computationalcomplexity.org/2015/05/sizes-of-dfas-nfas-dpdas-ucfg-cfgs-csls.htmlnoreply@blogger.com (GASARCH)1tag:blogger.com,1999:blog-3722233.post-7166443596217832424Wed, 29 Apr 2015 23:50:00 +00002015-04-29T18:50:48.696-05:00Is Logarithmic Space Closed Under Kleene Star?A Georgia Tech student asked the title question in an introductory theory course. The instructor asked his TA, the TA asked me and I asked the oracle of all things log space, Eric Allender. Eric didn't disappoint and pointed me to Burkhard Monien’s <a href="http://dx.doi.org/10.1007/3-540-07407-4_15">1975 theorem</a><br />
<blockquote class="tr_bq">
L is closed under Kleene star if and only if L = NL.</blockquote>
L here is the set of problems solved in deterministic O(log n) space and NL is the nondeterministic counterpart. For a set of strings A, <a href="http://en.wikipedia.org/wiki/Kleene_star">Kleene star</a>, denoted A<sup>*</sup>, is the set of all finite concatenations of strings of A. For example if A = {00,1} then A<sup>*</sup> = {ε, 1, 00, 11, 001, 100, 111, 0000, 0011, 1001, 1100, 1111, …} where ε is the zero-length string.<br />
<br />
Kleene star comes up in regular expressions but also makes for many a good homework problem.<br />
<ol>
<li>Show that if A is in NL then A<sup>*</sup> is also in NL.</li>
<li>Show that if A is in P then A<sup>*</sup> is also in P.</li>
<li>Show that if A is c.e. (recognizable) then A<sup>*</sup> is also c.e.</li>
</ol>
Problem 1 above is equivalent to saying NL is closed under Kleene star and implies the “if” part of Monien’s result. Here is a simple proof of the other direction that L closed under Kleene star implies L = NL.<br />
<br />
Consider the following NL-complete problem: the set of triples (G,s,t) such that G is a directed graph, with the restriction that every edge (i,j) has i &lt; j, and there is a path from node s to node t.<br />
<br />
Define the following language<br />
<blockquote class="tr_bq">
B = {G#i+1#G#i+2#...#G#j# | there is an edge (i,j) in G}</blockquote>
B is computable in log space and the string G#s+1#G#s+2#…#G#t# is in B<sup>*</sup> if and only if there is a path from node s to node t. QED<br />
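For readers curious about Problem 2 above (if A is in P then A<sup>*</sup> is in P), the standard approach is a quadratic dynamic program over prefixes. A minimal Python sketch (my own illustration; the function names are mine, and in_A stands in for an assumed polynomial-time membership test for A):

```python
def kleene_star_member(w, in_A):
    """Decide whether w is in A*, given a membership test in_A for A.
    star[i] is True iff the prefix w[:i] is in A*; makes O(n^2) calls to in_A."""
    n = len(w)
    star = [False] * (n + 1)
    star[0] = True    # the empty string is in A* by definition
    for i in range(1, n + 1):
        # w[:i] is in A* iff some split point j has w[:j] in A* and w[j:i] in A
        star[i] = any(star[j] and in_A(w[j:i]) for j in range(i))
    return star[n]

# Toy check with A = {00, 1}, the regular-expression example earlier in this post:
in_A = lambda s: s in ("00", "1")
print(kleene_star_member("0011", in_A))   # True:  00 . 1 . 1
print(kleene_star_member("010", in_A))    # False: the lone 0 cannot be matched
```

The log-space version of this idea is exactly what makes Problem 1 harder: the DP table uses linear space, while an NL machine instead guesses the split points one at a time.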
<br />
Allender, Arvind and Mahajan <a href="http://dx.doi.org/10.1007/s00224-003-1077-7">give some generalizations</a> to log-space counting classes and also note that there are languages computable in AC<sup>0</sup> (constant-depth circuits) whose Kleene star is NL-complete; B above is one such set.http://blog.computationalcomplexity.org/2015/04/is-logarithmic-space-closed-under.htmlnoreply@blogger.com (Lance Fortnow)9tag:blogger.com,1999:blog-3722233.post-2907736180987079209Mon, 27 Apr 2015 17:20:00 +00002015-04-27T12:20:40.782-05:00Advice on running a theory dayLast semester I ran a THEORY DAY AT UMCP. Below I have ADVICE for people running theory days. Some I did, some I didn't do but wish I did, and some are just questions you need to ask yourself.<br />
<br />
1) Picking the day- I had two external speakers (Avrim Blum and Sanjeev Arora) so I was able to ask THEM what day was good for THEM. Another way is to pick the DAY first and then ask for speakers.<br />
<br />
2) Number of external speakers- We had two, and the rest were internal. The external speakers gave hour-long talks, the internal ones 20-minute talks. This worked well; however, one could have more or even all external speakers.<br />
<br />
3) Whoever is paying for it should be told of it towards the beginning of the process.<br />
<br />
4) Lunch- catered or out? I recommend catered if you can afford it since it's a good time for people to all talk. See the next point.<br />
<br />
5) If it's catered you need a head count, so you need people to register. The number you get may be off- you may want to ask when they register if they want lunch. Then add ten percent.<br />
<br />
6) Before the guest speakers make travel plans, tell them what arrival time works so you can coordinate their dinner the previous night.<br />
<br />
7) If you have the money and energy, do name tags ahead of time. If not, then just have some sticky tags and a magic marker.<br />
<br />
8) Guest speakers- give them a personal lift FROM Amtrak or the airport to dinner/hotel (they may be new to the city and a bit lost). For getting them from the event back TO the airport or Amtrak, you can call an airline limo or taxi (though if you can give them a ride, that's of course good).<br />
<br />
9) Pick a day early and stick with it. NO day is perfect, so if someone can't make it, or there is a faculty meeting that day, then don't worry about it.<br />
<br />
10) Have the website and speakers all set at least a month ahead of time. Advertise on local theory email lists, blogs you have access to, and the analogous lists for other places (I did NY, NJ, PA). Also email people to spread the word.<br />
<br />
11) Advertise to ugrads. Students are the future!<br />
<br />
12) If you are the organizer you might not want to give a talk since you'll be too busy doing other things.<br />
<br />
13) Well established regular theory days (e.g., NY theory day) can ignore some of the above as they already have things running pretty well. <br />
<br />
<br />http://blog.computationalcomplexity.org/2015/04/advice-on-running-theory-day.htmlnoreply@blogger.com (GASARCH)2tag:blogger.com,1999:blog-3722233.post-2164203448944404159Fri, 24 Apr 2015 11:20:00 +00002015-04-24T06:20:14.944-05:00Fifty Years of Moore's LawGordon Moore formulated his <a href="http://en.wikipedia.org/wiki/Moore%27s_law">famous law</a> in a <a href="http://www.cs.utexas.edu/~fussell/courses/cs352h/papers/moore.pdf">paper</a> dated fifty years and five days ago. We all have seen how Moore's law has changed real-world computing, but how does it relate to computational complexity?<br />
<br />
In complexity we typically focus on running times but we really care about how large a problem we can solve with current technology. In one of my <a href="http://blog.computationalcomplexity.org/2002/12/note-on-running-times.html">early posts</a> I showed how this view can change how we judge running time improvements from faster algorithms. Improved technology also allows us to solve bigger problems. This is one justification for asymptotic analysis. For polynomial-time algorithms a doubling of processor speed gives a constant multiplicative-factor increase in the size of the problem we can solve; for exponential-time algorithms we only get an additive increase.<br />
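A quick back-of-the-envelope check of that last observation (my own sketch, with an arbitrary time budget): doubling the speed multiplies the solvable size by about √2 for an n² algorithm but only adds 1 for a 2^n algorithm.

```python
def largest_n(time_fn, budget):
    """Largest problem size n with time_fn(n) <= budget (simple linear search)."""
    n = 0
    while time_fn(n + 1) <= budget:
        n += 1
    return n

budget = 10**6  # arbitrary number of "steps" the old processor can do
for name, f in [("n^2", lambda n: n * n), ("2^n", lambda n: 2 ** n)]:
    before = largest_n(f, budget)
    after = largest_n(f, 2 * budget)   # processor speed doubles
    print(name, before, "->", after)   # n^2: 1000 -> 1414;  2^n: 19 -> 20
```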
<br />
Although Moore's law continues, computers stopped getting faster about ten years ago. Instead we've seen the rise of new technologies: GPUs and other specialized processors, multicore, cloud computing, and more on the horizon.<br />
<br />
The complexity and algorithmic communities have been slow to catch up. With some exceptions, we still focus on single-core single-thread algorithms. Instead we need to find good models for these new technologies and develop algorithms and complexity bounds that map nicely into our current computing reality.http://blog.computationalcomplexity.org/2015/04/fifty-years-of-moores-law.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-1903515936383568055Wed, 22 Apr 2015 13:47:00 +00002015-04-23T08:47:01.936-05:00The New Oracle Result! The new circuit result! which do you care about?You have likely heard of the new result by Ben, Rocco, and Li-Yang on random oracles (<a href="http://arxiv.org/abs/1504.03398">see here for preprint</a>) from either <a href="http://blog.computationalcomplexity.org/2015/04/ph-infinite-under-random-oracle.html">Lance</a> or <a href="http://www.scottaaronson.com/blog/?p=2272">Scott</a> or some other source:<br />
<br />
Lance's headline was <i>PH infinite under random oracle</i><br />
<br />
Scott's headline was <i>Two papers </i>but when he stated the result he also stated it as a random oracle result.<br />
<br />
The paper itself has the title<br />
<br />
<i>An average case depth hierarchy theorem for Boolean circuits</i> <br />
<br />
and the abstract is: <br />
<br />
We prove an average-case depth hierarchy theorem for Boolean circuits over the standard basis of AND, OR, and NOT gates. Our hierarchy theorem says that for every d ≥ 2, there is an explicit
<span class="MathJax_Preview"></span><span class="MathJax" id="MathJax-Element-5-Frame" role="textbox"><nobr><span class="math" id="MathJax-Span-26" style="display: inline-block; width: 0.781em;"><span style="display: inline-block; font-size: 129%; height: 0px; position: relative; width: 0.592em;"><span style="clip: rect(1.657em, 1000em, 2.433em, -0.463em); left: 0em; position: absolute; top: -2.261em;"><span class="mrow" id="MathJax-Span-27"><span class="mi" id="MathJax-Span-28" style="font-family: MathJax_Math; font-style: italic;">n</span></span><span style="display: inline-block; height: 2.261em; width: 0px;"></span></span></span><span style="border-left: 0em solid; display: inline-block; height: 0.723em; overflow: hidden; vertical-align: -0.084em; width: 0px;"></span></span></nobr></span>-variable Boolean function <span class="MathJax_Preview"></span><span class="MathJax" id="MathJax-Element-6-Frame" role="textbox"><nobr><span class="math" id="MathJax-Span-29" style="display: inline-block; width: 0.712em;"><span style="display: inline-block; font-size: 129%; height: 0px; position: relative; width: 0.538em;"><span style="clip: rect(1.394em, 1000em, 2.627em, -0.429em); left: 0em; position: absolute; top: -2.261em;"><span class="mrow" id="MathJax-Span-30"><span class="mi" id="MathJax-Span-31" style="font-family: MathJax_Math; font-style: italic;">f<span style="display: inline-block; height: 1px; overflow: hidden; width: 0.06em;"></span></span></span><span style="display: inline-block; height: 2.261em; width: 0px;"></span></span></span><span style="border-left: 0em solid; display: inline-block; height: 1.313em; overflow: hidden; vertical-align: -0.334em; width: 0px;"></span></span></nobr></span>, computed by a linear-size depth-<span class="MathJax_Preview"></span><span class="MathJax" id="MathJax-Element-7-Frame" role="textbox"><nobr><span class="math" id="MathJax-Span-32" style="display: inline-block; width: 0.712em;"><span style="display: inline-block; 
font-size: 129%; height: 0px; position: relative; width: 0.538em;"><span style="clip: rect(1.405em, 1000em, 2.432em, -0.451em); left: 0em; position: absolute; top: -2.261em;"><span class="mrow" id="MathJax-Span-33"><span class="mi" id="MathJax-Span-34" style="font-family: MathJax_Math; font-style: italic;">d<span style="display: inline-block; height: 1px; overflow: hidden; width: 0.003em;"></span></span></span><span style="display: inline-block; height: 2.261em; width: 0px;"></span></span></span><span style="border-left: 0em solid; display: inline-block; height: 1.047em; overflow: hidden; vertical-align: -0.082em; width: 0px;"></span></span></nobr></span> formula,
which is such that any depth-<span class="MathJax_Preview"></span><span class="MathJax" id="MathJax-Element-8-Frame" role="textbox"><nobr><span class="math" id="MathJax-Span-35" style="display: inline-block; width: 3.976em;"><span style="display: inline-block; font-size: 129%; height: 0px; position: relative; width: 3.068em;"><span style="clip: rect(1.349em, 1000em, 2.672em, -0.39em); left: 0em; position: absolute; top: -2.261em;"><span class="mrow" id="MathJax-Span-36"><span class="mo" id="MathJax-Span-37" style="font-family: MathJax_Main;">(</span><span class="mi" id="MathJax-Span-38" style="font-family: MathJax_Math; font-style: italic;">d<span style="display: inline-block; height: 1px; overflow: hidden; width: 0.003em;"></span></span><span class="mo" id="MathJax-Span-39" style="font-family: MathJax_Main; padding-left: 0.222em;">−</span><span class="mn" id="MathJax-Span-40" style="font-family: MathJax_Main; padding-left: 0.222em;">1</span><span class="mo" id="MathJax-Span-41" style="font-family: MathJax_Main;">)</span></span><span style="display: inline-block; height: 2.261em; width: 0px;"></span></span></span><span style="border-left: 0em solid; display: inline-block; height: 1.429em; overflow: hidden; vertical-align: -0.392em; width: 0px;"></span></span></nobr></span> circuit that agrees with <span class="MathJax_Preview"></span><span class="MathJax" id="MathJax-Element-9-Frame" role="textbox"><nobr><span class="math" id="MathJax-Span-42" style="display: inline-block; width: 0.712em;"><span style="display: inline-block; font-size: 129%; height: 0px; position: relative; width: 0.538em;"><span style="clip: rect(1.394em, 1000em, 2.627em, -0.429em); left: 0em; position: absolute; top: -2.261em;"><span class="mrow" id="MathJax-Span-43"><span class="mi" id="MathJax-Span-44" style="font-family: MathJax_Math; font-style: italic;">f<span style="display: inline-block; height: 1px; overflow: hidden; width: 0.06em;"></span></span></span><span style="display: inline-block; 
height: 2.261em; width: 0px;"></span></span></span><span style="border-left: 0em solid; display: inline-block; height: 1.313em; overflow: hidden; vertical-align: -0.334em; width: 0px;"></span></span></nobr></span> on <span class="MathJax_Preview"></span><span class="MathJax" id="MathJax-Element-10-Frame" role="textbox"><nobr><span class="math" id="MathJax-Span-45" style="display: inline-block; width: 7.656em;"><span style="display: inline-block; font-size: 129%; height: 0px; position: relative; width: 5.922em;"><span style="clip: rect(1.403em, 1000em, 2.726em, -0.39em); left: 0em; position: absolute; top: -2.315em;"><span class="mrow" id="MathJax-Span-46"><span class="mo" id="MathJax-Span-47" style="font-family: MathJax_Main;">(</span><span class="mn" id="MathJax-Span-48" style="font-family: MathJax_Main;">1</span><span class="texatom" id="MathJax-Span-49"><span class="mrow" id="MathJax-Span-50"><span class="mo" id="MathJax-Span-51" style="font-family: MathJax_Main;">/</span></span></span><span class="mn" id="MathJax-Span-52" style="font-family: MathJax_Main;">2</span><span class="mo" id="MathJax-Span-53" style="font-family: MathJax_Main; padding-left: 0.222em;">+</span><span class="msubsup" id="MathJax-Span-54" style="padding-left: 0.222em;"><span style="display: inline-block; height: 0px; position: relative; width: 0.99em;"><span style="clip: rect(1.658em, 1000em, 2.433em, -0.45em); left: 0em; position: absolute; top: -2.261em;"><span class="mi" id="MathJax-Span-55" style="font-family: MathJax_Math; font-style: italic;">o</span><span style="display: inline-block; height: 2.261em; width: 0px;"></span></span><span style="left: 0.484em; position: absolute; top: -1.734em;"><span class="mi" id="MathJax-Span-56" style="font-family: MathJax_Math; font-size: 70.7%; font-style: italic;">n</span><span style="display: inline-block; height: 1.884em; width: 0px;"></span></span></span></span><span class="mo" id="MathJax-Span-57" style="font-family: MathJax_Main;">(</span><span 
class="mn" id="MathJax-Span-58" style="font-family: MathJax_Main;">1</span><span class="mo" id="MathJax-Span-59" style="font-family: MathJax_Main;">)</span><span class="mo" id="MathJax-Span-60" style="font-family: MathJax_Main;">)</span></span><span style="display: inline-block; height: 2.315em; width: 0px;"></span></span></span><span style="border-left: 0em solid; display: inline-block; height: 1.429em; overflow: hidden; vertical-align: -0.392em; width: 0px;"></span></span></nobr></span> fraction of all inputs must have size <span class="MathJax_Preview"></span><span class="MathJax" id="MathJax-Element-11-Frame" role="textbox"><nobr><span class="math" id="MathJax-Span-61" style="display: inline-block; width: 6.962em;"><span style="display: inline-block; font-size: 129%; height: 0px; position: relative; width: 5.383em;"><span style="clip: rect(1.152em, 1000em, 2.619em, -0.456em); left: 0em; position: absolute; top: -2.207em;"><span class="mrow" id="MathJax-Span-62"><span class="mi" id="MathJax-Span-63" style="font-family: MathJax_Main;">exp</span><span class="mo" id="MathJax-Span-64"></span><span class="mo" id="MathJax-Span-65" style="font-family: MathJax_Main;">(</span><span class="texatom" id="MathJax-Span-66"><span class="mrow" id="MathJax-Span-67"><span class="msubsup" id="MathJax-Span-68"><span style="display: inline-block; height: 0px; position: relative; width: 2.82em;"><span style="clip: rect(1.657em, 1000em, 2.433em, -0.463em); left: 0em; position: absolute; top: -2.261em;"><span class="mi" id="MathJax-Span-69" style="font-family: MathJax_Math; font-style: italic;">n</span><span style="display: inline-block; height: 2.261em; width: 0px;"></span></span><span style="left: 0.592em; position: absolute; top: -2.247em;"><span class="texatom" id="MathJax-Span-70"><span class="mrow" id="MathJax-Span-71"><span class="mi" id="MathJax-Span-72" style="font-family: MathJax_Main; font-size: 70.7%;">Ω</span><span class="mo" id="MathJax-Span-73" style="font-family: 
MathJax_Main; font-size: 70.7%;">(</span><span class="mn" id="MathJax-Span-74" style="font-family: MathJax_Main; font-size: 70.7%;">1</span><span class="texatom" id="MathJax-Span-75"><span class="mrow" id="MathJax-Span-76"><span class="mo" id="MathJax-Span-77" style="font-family: MathJax_Main; font-size: 70.7%;">/</span></span></span><span class="mi" id="MathJax-Span-78" style="font-family: MathJax_Math; font-size: 70.7%; font-style: italic;">d<span style="display: inline-block; height: 1px; overflow: hidden; width: 0.002em;"></span></span><span class="mo" id="MathJax-Span-79" style="font-family: MathJax_Main; font-size: 70.7%;">)</span></span></span><span style="display: inline-block; height: 1.884em; width: 0px;"></span></span></span></span></span></span><span class="mo" id="MathJax-Span-80" style="font-family: MathJax_Main;">)</span><span class="mo" id="MathJax-Span-81" style="font-family: MathJax_Main;">.</span></span><span style="display: inline-block; height: 2.207em; width: 0px;"></span></span></span><span style="border-left: 0em solid; display: inline-block; height: 1.614em; overflow: hidden; vertical-align: -0.392em; width: 0px;"></span></span></nobr></span> This
answers an open question posed by Hastad in his Ph.D. thesis.
<br />
Our average-case depth hierarchy theorem implies that the polynomial
hierarchy is infinite relative to a random oracle with probability 1,
confirming a conjecture of Håstad, Cai, and Babai. We also use our result
to show that there is no "approximate converse" to the results of Linial,
Mansour, Nisan and Boppana on the total influence of small-depth circuits, thus
answering a question posed by O'Donnell, Kalai, and Hatami.
<br />
A key ingredient in our proof is a notion of <i>random projections</i> which
generalize random restrictions.<br />
<br />
Note that they emphasize the circuit aspect.<br />
<br />
In Yao's paper, where he showed that PARITY in constant depth requires exponential size, the title was<br />
<br />
<i>Separating the polynomial-time hierarchy by oracles</i><br />
<br />
Håstad's paper and book had titles about circuits, not oracles.<br />
<br />
When Scott Aaronson showed that, relative to a random oracle, P<sup>NP</sup> is properly contained in Σ<sup>P</sup><sub>2</sub>, the title was<br />
<br />
<i>A Counterexample to the Generalized Linial-Nisan Conjecture</i><br />
<br />
However, the abstract begins with a statement of the oracle result.<br />
<br />
SO, here is the real question: What is more interesting, the circuit lower bounds or the oracle results that follow? The authors' titles and abstracts might tell you what they are thinking, then again they might not. For example, I can't really claim to know that Yao cared about oracles more than circuits.<br />
<br />
Roughly speaking, circuit results are interesting since they are actual lower bounds, often on reasonable models for natural problems (both of these statements can be counter-argued). Oracle results are interesting since they give us a sense that certain proof techniques are not going to work. Random oracle results are interesting since, for classes like these (admittedly not a well-defined notion), things true for random oracles tend to be things we think are true.<br />
<br />
But I want to hear from you, the reader: Which of PARITY NOT IN AC<sub>0</sub> and THERE IS AN ORACLE SEPARATING PH FROM PSPACE do you find more interesting? Which is easier to motivate to other theorists? To non-theorists? (For non-theorists I think PARITY.)http://blog.computationalcomplexity.org/2015/04/the-new-oracle-result-new-circuit.htmlnoreply@blogger.com (GASARCH)3tag:blogger.com,1999:blog-3722233.post-80531775970903263Thu, 16 Apr 2015 13:14:00 +00002015-04-16T08:14:04.261-05:00PH Infinite Under a Random OracleBenjamin Rossman, Rocco Servedio and Li-Yang Tan show <a href="http://arxiv.org/abs/1504.03398">new circuit lower bounds</a> that imply, among other things, that the polynomial-time hierarchy is infinite relative to a random oracle. What does that mean, and why is it important?<br />
<br />
The polynomial-time hierarchy can be defined inductively as follows: Σ<sup>P</sup><sub>0 </sub>= P, the set of problems solvable in polynomial-time. Σ<sup>P</sup><sub>i+1 </sub>= NP<sup>Σ<sup>P</sup><sub>i</sub></sup>, the set of problems computable in nondeterministic polynomial-time that can ask arbitrary questions to the previous level. We say the polynomial-time hierarchy is infinite if Σ<sup>P</sup><sub>i+1 </sub>≠ Σ<sup>P</sup><sub>i</sub> for all i and it collapses otherwise.<br />
<br />
Whether the polynomial-time hierarchy is infinite is one of the major assumptions in computational complexity and would imply a large number of statements we believe to be true including that NP-complete problems do not have small circuits and that Graph Isomorphism is not co-NP-complete.<br />
<br />
We don't have the techniques to settle whether or not the polynomial-time hierarchy is infinite so we can look at relativized worlds, where all machines have access to the same oracle. The Baker-Gill-Solovay oracle that makes P = NP also collapses the hierarchy. Finding an oracle that makes the hierarchy infinite was a larger challenge and required new results in circuit complexity.<br />
<br />
In 1985, Yao in his paper <a href="http://dx.doi.org/10.1109/SFCS.1985.49">Separating the polynomial-time hierarchy by oracles</a> showed that there were functions that had small depth d+1 circuits but large depth d circuits which was needed for the oracle. Håstad gave a <a href="http://dx.doi.org/10.1145/12130.12132">simplified</a> proof. Cai <a href="http://dx.doi.org/10.1145/12130.12133">proved</a> that PSPACE ≠ Σ<sup>P</sup><sub>i</sub> for all i even if we choose the oracle at random (with probability one). Babai later and independently <a href="http://dx.doi.org/10.1016/0020-0190(87)90036-6">gave a simpler proof</a>.<br />
<br />
Whether a randomly chosen oracle would make the hierarchy infinite required showing the depth separation of circuits in the average case which remained open for three decades. Rossman, Servedio and Tan solved that circuit problem and get the random oracle result as a consequence. They build on Håstad's proof technique of randomly restricting variables to true and false. Rossman et al. generalize to a random projection method that projects onto a new set of variables. <a href="http://arxiv.org/abs/1504.03398">Read their paper</a> to see all the details.<br />
<br />
In 1994, Ron Book <a href="http://dx.doi.org/10.1016/0020-0190(94)00157-X">showed</a> that if the polynomial-time hierarchy is infinite then it remains infinite relative to a random oracle. Rossman et al. thus give even more evidence to believe that the hierarchy is indeed infinite, in the sense that if they had proven the opposite result then the hierarchy would have collapsed.<br />
<br />
I <a href="http://dx.doi.org/10.1016/S0020-0190(99)00034-4">used</a> Book's paper to show that a number of complexity hypotheses held simultaneously with the hierarchy being infinite, now a trivial consequence of the Rossman et al. result. I can live with that.http://blog.computationalcomplexity.org/2015/04/ph-infinite-under-random-oracle.htmlnoreply@blogger.com (Lance Fortnow)4tag:blogger.com,1999:blog-3722233.post-3854119300364302672Tue, 14 Apr 2015 17:01:00 +00002015-04-14T12:01:06.560-05:00Baseball is More Than Data<div class="MsoNormal">
As baseball starts its second week, let's reflect a bit on
how data analytics has changed the game. Not just the Moneyball phenomenon of
ranking players but also the extensive use of defensive shifts (repositioning
the infielders and outfielders for each batter) and other maneuvers. We're not
quite to the point that technology can replace managers and umpires but give it
another decade or two.</div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
We've seen a huge increase in data analysis in sports. ESPN
<a href="http://espn.go.com/espn/feature/story/_/id/12331388/the-great-analytics-rankings">ranked teams</a> based on their use of analytics and it correlates well with how
those teams are faring. Eventually everyone will use the same learning
algorithms and games will just be a random coin toss with coins weighted by how
much each team can spend.<br />
<br /></div>
<div class="MsoNormal">
Steve Kettmann wrote an NYT op-ed piece <a href="http://www.nytimes.com/2015/04/08/opinion/baseball-by-the-numbers.html">Don't Let Statistics Ruin Baseball</a>. At first I thought this was just another Luddite who would be left behind, but he makes a salient point. We don’t go to baseball to watch the
stats. We go to see people play. We enjoy the suspense of every pitch, the one-on-one
battle between pitcher and batter and the great defensive moves. Maybe
statistics can tell which players a team should acquire and where the fielders
should stand, but it is still people who play the game.<br />
<br /></div>
<div class="MsoNormal">
Kettmann worries about the obsession of baseball writers
with statistics. Those who write based on stats can be replaced by machines.
Baseball is a great game to listen to on the radio, for the best broadcasters don't
talk about the numbers, they talk about the people. Otherwise you might as well
listen to competitive tic-tac-toe.</div>
http://blog.computationalcomplexity.org/2015/04/baseball-is-more-than-data.htmlnoreply@blogger.com (Lance Fortnow)5tag:blogger.com,1999:blog-3722233.post-2693479907911641019Thu, 09 Apr 2015 11:12:00 +00002015-04-09T06:41:33.963-05:00 FCRC 2015Every four years the Association for Computing Machinery organizes a <a href="http://fcrc.acm.org/">Federated Computing Research Conference</a> consisting of several co-located conferences and some joint events. This year's event will be held June 13-20 in Portland, Oregon and includes Michael Stonebraker's Turing award lecture. There is a <a href="https://www.regonline.com/Register/Checkin.aspx?EventID=1692718">single registration site</a> for all conferences (early deadline May 18th) and I recommend <a href="http://fcrc.acm.org/travel.cfm">booking hotels</a> early and definitely before the May 16th cutoff.<br />
<br />
Theoretical computer science is well represented.<br />
<ul>
<li><a href="http://acm-stoc.org/stoc2015/">47th ACM Symposium on the Theory of Computing</a>. Apply for <a href="http://acm-stoc.org/stoc2015/travel-support.html">student travel support</a> by May 9th. </li>
<li><a href="http://computationalcomplexity.org/Archive/2015/local.html">30th Computational Complexity Conference</a>, now an independent conference. <a href="http://computationalcomplexity.org/travelAllowance.php">Student travel support</a> deadline of May 9th. CCC is looking for a new logo, if you have ideas send them to <a href="http://pages.cs.wisc.edu/~dieter/contact.html">Dieter van Melkebeek</a>.</li>
<li><a href="http://www.sigecom.org/ec15/">16th ACM Conference on Economics and Computation</a> and its associated <a href="http://www.sigecom.org/ec15/schedule_workshops.html">workshops and tutorials</a></li>
<li><a href="http://www.cs.jhu.edu/~spaa/">27th ACM Symposium on Parallelism in Algorithms and Architectures</a></li>
<li>A plenary lecture by Andy Yao</li>
</ul>
<div>
The CRA-W is organizing mentoring workshops for early career and mid-career faculty and faculty supervising undergraduate research.</div>
<div>
<br /></div>
<div>
A number of other major conferences will also be part of FCRC including HPDC, ISCA, PLDI and SIGMETRICS. There are many algorithmic challenges in all these areas and FCRC really gives you an opportunity to sit in talks outside your comfort zone. You might be surprised in what you see.</div>
<div>
<br /></div>
<div>
See you in Portland!</div>
http://blog.computationalcomplexity.org/2015/04/fcrc-2015.htmlnoreply@blogger.com (Lance Fortnow)5tag:blogger.com,1999:blog-3722233.post-4036509745716529498Tue, 07 Apr 2015 12:47:00 +00002015-04-07T07:47:41.765-05:00Two cutoffs about Waring's problem for cubes<br />
Known:<br />
<ol>
<li>All numbers except 23 and 239 can be written as the sum of 8 cubes</li>
<li>All but a finite number of numbers can be written as the sum of 7 cubes </li>
<li>There are an infinite number of numbers that cannot be written as the sum of 3 cubes (this you can prove yourself, hint: look at cubes mod 9; the other two are hard, deep theorems).</li>
</ol>
Open: Find x such that: <br />
<ol>
<li>All but a finite number of numbers can be written as the sum of x cubes.</li>
<li>There exists an infinite number of numbers that cannot be written as the sum of x-1 cubes.</li>
</ol>
It is known that 4 ≤ x ≤ 7<br />
<br />
Let's say you didn't know any of this and were looking at empirical data.<br />
<br />
<ol>
<li>If you find that every number ≤ 10 can be written as the sum of 7 cubes this is NOT interesting because 10 is too small.</li>
<li> If you find that every number ≤ 1,000,000 except 23 and 239 can be written as the sum of 8 cubes this IS interesting since 1,000,000 is big enough that one thinks this is telling us something (though we could be wrong). What if you find all but 10 numbers (I do not know if that is true) ≤ 1,000,000 are the sum of seven cubes?</li>
</ol>
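Empirical data of this kind is easy to generate. Here is a minimal sketch (my code, not from the post; the bound N = 2000 is an arbitrary choice) that uses a coin-change style dynamic program to compute the fewest positive cubes needed to write each number up to N:<br />
<br />

```python
# Fewest positive cubes summing to each n <= N (coin-change style DP).
def min_cubes_table(N):
    cubes = [k ** 3 for k in range(1, N + 1) if k ** 3 <= N]
    best = [0] + [N + 1] * N  # best[n] = fewest cubes summing to n (n ones always work)
    for n in range(1, N + 1):
        best[n] = min(best[n - c] + 1 for c in cubes if c <= n)
    return best

best = min_cubes_table(2000)
need_nine = [n for n in range(1, 2001) if best[n] >= 9]
need_eight = [n for n in range(1, 2001) if best[n] >= 8]
print(need_nine)        # numbers that are not the sum of 8 cubes
print(max(need_eight))  # largest number found needing 8 or more cubes
```

Running this reproduces the classical data: only 23 and 239 fail to be sums of eight cubes, and the numbers needing even eight cubes all sit in the low hundreds.<br />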
Open but too informal to be a real question: Find x such that<br />
<ol>
<li>Information about sums-of-cubes for all numbers ≤ x-1 is NOT interesting</li>
<li>Information about sums-of-cubes for all numbers ≤ x IS interesting. </li>
</ol>
By the intermediate value theorem such an x exists. But of course this is silly. The fallacy probably lies in the informal notion `interesting'. But a serious question: How big does x have to be before data about this would be considered interesting? (NO- I won't come back with `what about x-1').<br />
<br />
More advanced form: Find a function f(x,y) and constants c1 and c2 such that<br />
<ol>
<li>If f(x,y) ≥ c1 then the statement <i>all but y numbers ≤ x are the sum of 7 cubes</i> is interesting.</li>
<li>If f(x,y) ≤ c2 then the statement <i>all but y numbers ≤ x are the sum of 7 cubes</i> is not interesting. </li>
</ol>
To end with a more concrete question: Show that there are an infinite number of numbers that cannot be written as the sum of 14 4th powers.<br />
<br />
<br />http://blog.computationalcomplexity.org/2015/04/two-cutoffs-about-warings-problem-for.htmlnoreply@blogger.com (GASARCH)5tag:blogger.com,1999:blog-3722233.post-5478491529188488125Wed, 01 Apr 2015 13:18:00 +00002015-04-03T13:59:49.281-05:00Which of these stories is falseI would rather challenge you than fool you on April Fools' Day. Below I have some news items. All but one are true. I challenge you to determine which one is false.<br />
<br />
<ol>
<li>Amazon opens a brick-and-mortar store: <a href="http://money.cnn.com/2015/02/04/technology/amazon-purdue/">Full story here.</a> If true this is really, really odd since I thought they saved time and money by not having stores.</li>
<li>You may have heard of some music groups releasing vinyl albums in recent times. They come with an MP3 chip so I doubt the buyers ever use the vinyl, but the size allows for more interesting art. What did people record on before vinyl? Wax cylinders! Some music groups have released songs on wax cylinders! See <a href="http://hyperallergic.com/82804/after-nearly-a-century-wax-cylinder-music-gets-a-new-release/">here</a> for a release a while back by <i>Tiny Tim</i> (the singer, not the fictional character) and <a href="http://en.wikipedia.org/wiki/The_Steampunk_Album_That_Cannot_Be_Named_For_Legal_Reasons">here</a> for a release by a group whose name is <i>The Men That Will Not Be Blamed For Anything</i>.</li>
<li>An error in Google Maps led to Nicaragua accidentally invading Costa Rica. Even more amazing: this excuse was correct and Google admitted the error. See <a href="http://www.wired.com/2010/11/google-maps-error-blamed-for-nicaraguan-invasion/">here</a> for details.</li>
<li>There was a conference called <i>Galileo was wrong, The Church Was Right </i>for people who think the Earth really is the centre of the universe (my spell checker says that `center' is wrong and `centre' is right. Maybe it's from England). I assume they mean that the sun and other stuff goes around the earth in concentric circles, and not that one can take any ref point and call it the center. The conference is run by <a href="http://en.wikipedia.org/wiki/Robert_Sungenis">Robert Sungenis</a> who also wrote a book on the topic (it's on Amazon <a href="http://www.amazon.com/Galileo-Was-Wrong-Church-Right/dp/0977964000">here</a> and the comments section actually has a debate on the merits of his point of view.) There is also a website on the topic <a href="http://galileowaswrong.blogspot.com/">here</a>. The Catholic Church does not support him or his point of view, and in fact asked him to take ``Catholic'' out of the name of his organization, which he has done. (ADDED LATER- A commenter named Shane Chubbs, who has read over the relevant material on this case more carefully than I have, commented that Robert Sungenis DOES claim that we can take the center of the universe to be anywhere, so it might as well be here. If that's Robert S's only point, it's hard to believe he got a whole book out of it.) OH- this is one of the TRUE points.</li>
</ol>
http://blog.computationalcomplexity.org/2015/04/which-of-these-stories-is-false.htmlnoreply@blogger.com (GASARCH)7tag:blogger.com,1999:blog-3722233.post-5701624028183470062Mon, 30 Mar 2015 18:36:00 +00002015-03-30T13:36:46.584-05:00Intuitive ProofsAs I <a href="http://blog.computationalcomplexity.org/2014/12/undergraduate-research.html">mentioned</a> a few months ago, I briefly joined an undergraduate research seminar my freshman year at Cornell. In that seminar I was asked if a two-dimensional random walk on a lattice would return to the origin infinitely often. I said of course. The advisor was impressed until he asked about three-dimensional walks and I said they also hit the origin infinitely often. My intuition was wrong.<br />
<div>
<br /></div>
<div>
33 years later I'd like to give the right intuition. This is rough intuition, not a proof, and I'm sure none of this is original with me.</div>
<div>
<br /></div>
<div>
In a 1-dimensional random walk, you will be at the origin on the nth step with probability about 1/n<sup>0.5</sup>. Since the sum of 1/n<sup>0.5</sup> diverges this happens infinitely often.</div>
<div>
<br /></div>
<div>
In a 2-dimensional random walk, you will be at the origin on the nth step with probability about (1/n<sup>0.5</sup>)<sup>2</sup> = 1/n. Since the sum of 1/n diverges this happens infinitely often.</div>
<br />
<div>
In a 3-dimensional random walk, you will be at the origin on the nth step with probability about (1/n<sup>0.5</sup>)<sup>3</sup> = 1/n<sup>1.5</sup>. Since the sum of 1/n<sup>1.5</sup> converges this happens finitely often.</div>
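This back-of-the-envelope argument is easy to check by simulation. The sketch below (my code, not from the post; the 500-step and 500-trial counts are arbitrary choices) estimates, for each dimension, the fraction of simple random walks that revisit the origin at least once within a fixed number of steps:<br />
<br />

```python
import random

def return_fraction(dim, steps, trials, rng):
    """Fraction of `trials` walks on Z^dim that revisit the origin within `steps` steps."""
    returned = 0
    for _ in range(trials):
        pos = [0] * dim
        for _ in range(steps):
            axis = rng.randrange(dim)         # pick one coordinate...
            pos[axis] += rng.choice((-1, 1))  # ...and step +/-1 along it
            if not any(pos):                  # back at the origin?
                returned += 1
                break
    return returned / trials

rng = random.Random(0)  # fixed seed so the run is reproducible
fracs = [return_fraction(d, steps=500, trials=500, rng=rng) for d in (1, 2, 3)]
print(fracs)
```

In dimensions 1 and 2 the estimated fraction creeps toward 1 as the walk length grows, while in dimension 3 it stalls well below 1/2, matching the convergent sum of 1/n<sup>1.5</sup>.<br />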
http://blog.computationalcomplexity.org/2015/03/intuitive-proofs.htmlnoreply@blogger.com (Lance Fortnow)4tag:blogger.com,1999:blog-3722233.post-4340163653560675939Wed, 25 Mar 2015 18:10:00 +00002015-03-25T13:10:49.472-05:00News AplentyBoth the Turing Award and the Abel Prize were announced this morning.<br />
<br />
MIT databases researcher Michael Stonebraker <a href="http://preview.acm.org/2014-turing-award">wins the ACM Turing Award</a>. He developed INGRES, one of the first relational databases. Stonebraker is the first Turing award winner since the prize went up to a cool million dollars.<br />
<br />
John Nash and Louis Nirenberg <a href="http://www.abelprize.no/nyheter/vis.html?tid=63589">share this year's Abel Prize</a> “for striking and seminal contributions to the theory of nonlinear partial differential equations and its applications to geometric analysis.” This work on PDEs is completely independent of the equilibrium results that won Nash the 1994 Nobel Prize.<br />
<br />
Earlier this week the CRA released their latest Best Practice Memo: <a href="http://cra.org/resources/bp-view/best_practices_memo_evaluating_scholarship_in_hiring_tenure_and_promot">Incentivizing Quality and Impact: Evaluating Scholarship in Hiring, Tenure, and Promotion</a>. In short: Emphasize quality over quantity in research.<br />
<br />
The NSF announced their <a href="http://www.nsf.gov/news/special_reports/public_access/">public access plan</a> to ensure that research is "available for download, reading and analysis free of charge no later than 12 months after initial publication".http://blog.computationalcomplexity.org/2015/03/news-aplenty.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-195292551308946501Mon, 23 Mar 2015 02:35:00 +00002015-03-22T21:35:50.034-05:00Which mathematician had the biggest gap between fame and contribution?(I was going to call this entry <i> Who was the worst mathematician of all time? </i>but Clyde Kruskal reminded me that it's not (say) Goldbach's fault that his conjecture got so well known, in fact it's a good thing! I'll come back to Goldbach later.) <br />
<br />
Would Hawking be as well known if he didn't have ALS? I suspect that within Physics yes, but I doubt he would have had guest shots on <i>ST:TNG, The Simpsons, Futurama, and The Big Bang Theory</i> (I just checked the IMDb database- they don't mention Futurama but they do say he's a Capricorn. I find it appalling that they mention a scientist's horoscope.) I also doubt there would be a movie about him.<br />
<br />
Would Turing be as well known if he hadn't been gay and hadn't died young (likely because of the ``treatment'')? I suspect that within Computer Science yes, but I doubt there would be a play, a movie, and there are rumors of a musical. Contrast him with John von Neumann, who one could argue contributed as much as Turing, but, alas, no guest shots on I Love Lucy, no movie, no rap songs about him. (The only scientist there may be a rap song about is Heisenberg, and that doesn't count since it would really be about Walter White.)<br />
<br />
Hawking and Turing are/were world class in their fields. Is there someone who is very well known but didn't do that much? <br />
<br />
SO we are looking for a large gap between how well known the person is and how much math they actually did. This might be unfair to well-known people (it might be unfair to ME since complexityblog makes me better known than I would be otherwise). However, I have AN answer that is defensible. Since the question is not that well defined there probably cannot be a definitive answer.<br />
<br />
First let's consider Goldbach (who is NOT my answer). He was a professor of math and did some work on the theory of curves, differential equations, and infinite series. Certainly respectable. But if not for his<br />
conjecture (every even number greater than 2 is the sum of two primes- still open) I doubt we would have heard of him.<br />
<br />
My answer: Pythagoras! He is well known as a mathematician but there is no evidence that he had any connection to the theorem that bears his name.<br />
<br />
Historians (or so-called historians) would say that it was well known that he proved the theorem, or gave the first rigorous proof, or something, but there is no evidence. Can people make things up out of whole cloth? Indeed they can.<br />
<br />
Witness this <a href="https://www.youtube.com/watch?v=bnROB_RCF7U">Mr. Clean Commercial</a> which says: <i>they say that after seeing a magician make his assistant disappear Mr Clean came up with a product that makes dirt disappear- the magic eraser.</i> REALLY? Who are ``they''? Is this entire story fabricated? Should we call the FCC :-) ANYWAY, yes, people can and do make up things out of whole cloth and then claim they are well known. Even historians.<br />
<br />
Commenters: I politely request that if you suggest other candidates for a large gap then they be people who died before 1950 (arbitrary but firm deadline). This is not just out of politeness to the living and recently deceased, it's also because these questions need time. Kind of like people who want to rank George W Bush or Barack Obama as the worst prez of all time--- we need lots more time to evaluate these things.<br />
<br />
<br />
<br />
<br />http://blog.computationalcomplexity.org/2015/03/which-mathematician-had-biggest-gap.htmlnoreply@blogger.com (GASARCH)21tag:blogger.com,1999:blog-3722233.post-743693179363652449Thu, 19 Mar 2015 13:58:00 +00002015-03-19T08:58:51.962-05:00Feeling UnderappreciatedAs academics we live and die by our research. While our proofs are either correct or not, the import of our work has a far more subjective feel. One can see where the work is published or how many citations it gets, and we often say that we care most about the true intrinsic or extrinsic value of the research. But the measure of success we truly care most about is how our research is viewed within the community. Such measures can have real value in terms of hiring, tenure, promotion, raises and grants, but it goes deeper, filling some internal need to have our research matter to our peers.<br />
<br />
So even little things can bother you. Not being cited when you think your work should be. Not being mentioned during a talk. Seeing a review that questions the relevance of your model. Nobody following up on your open questions. Difficulty in finding excitement in others about your work. We tend to keep these feelings bottled up since we feel we shouldn't be bragging about our own work.<br />
<br />
If you feel this way, here are a few things to keep in mind. It happens to all of us even though we rarely talk about it. You are not alone. Try not to obsess; it's counterproductive and just makes you feel even worse. If appropriate, let the authors know that your work is relevant to theirs; they truly may have been unaware. Sometimes it is just best to acknowledge to yourself that while you think the work is good, you can't always convince the rest of the world, and just move on.<br />
<br />
More importantly remember the golden rule, and try to cite all relevant research and show interest in other people's work as well as your own.http://blog.computationalcomplexity.org/2015/03/feeling-underappreciated.htmlnoreply@blogger.com (Lance Fortnow)5tag:blogger.com,1999:blog-3722233.post-7329933002808223925Mon, 16 Mar 2015 02:25:00 +00002015-03-15T22:54:30.452-05:00Has anything interesting ever come out of a claimed proof that P=NP or P ≠ NP?<br />
When I was young and foolish and I heard that someone claimed to have proven P=NP or P ≠ NP I would think <i>Wow- maybe they did!</i> Then my adviser, who was on the FOCS committee, gave me a <i>paper that claimed to resolve P vs NP!</i> to review for FOCS. It was terrible. I became more skeptical.<br />
<br />
When I was older and perhaps a byte less foolish I would think the following: <br />
<br />
For P=NP proofs: I am sure it does not prove P=NP BUT maybe there are some nice ideas here that could be used to speed up some known algorithms in practice, or give some insights, or something. Could still be solid research (A type of research that Sheldon Cooper has disdain for, but I think is fine).<br />
<br />
and<br />
<br />
For P ≠ NP proofs: I am sure it does not prove P ≠ NP BUT maybe there are some nice ideas here, perhaps an `if BLAH then P ≠ NP', perhaps an n log* n lower bound on something in some restricted model.<br />
<br />
Since I've joined this blog I've been emailed some proofs that claim to resolve P vs NP (I also get some proofs in Ramsey Theory, which probably saves Graham/Rothschild/Spencer some time since cranks might bother me instead of them). These proofs fall into some categories:<br />
<br />
P ≠ NP because there are all those possibilities to look through (or papers that are far less coherent than that but that's what it comes down to)<br />
<br />
P=NP look at my code!<br />
<br />
P=NP here is my (incoherent) approach. For example, `first look for two variables that are quasi-related'. What does `quasi-related' mean? They don't say.<br />
<br />
Papers where I can't tell what they are saying. NO, they are not saying it's independent of ZFC; I wish they were that coherent. Some say that it's the wrong question, a point which could be argued intelligently, but not by those who are writing such papers.<br />
<br />
OKAY, so is there ANY value to these papers? Sadly looking over all of the papers I've gotten on P vs NP (in my mind- I didn't save them --should I have?) the answer is an empirical NO. Why not? I'll tell you why not by way of counter-example:<br />
<br />
Very early on, before most people knew about FPT, I met Mike Fellows at a conference and he told me about the Graph Minor Theorem and Vertex Cover. It was fascinating. Did he say `I've solved P vs NP'? Of course not. <i>He knew better.</i><br />
<br />
Taking Mike Sipser's grad theory course back in the 1980's, I saw him present the then-recent result DTIME(n) ≠ NTIME(n). Did Mike Sipser or the authors (Paul, Pippenger, Szemeredi, Trotter) claim that they had resolved P vs NP? Of course not, <i>they knew better</i>.<br />
<br />
Think of the real advances made in theory. They are made by insiders, outsiders, people you've heard of, people you hadn't heard of before, but they were all made by people who... were pretty good and knew stuff. YES, some are made by people who are not tainted by conventional thinking, but such people can still differentiate an informal argument from a proof, and they know that an alleged proof that resolves P vs NP needs to be checked quite carefully before bragging about it.<br />
<br />
When it was shown that monotone circuits require exponential size for some problems, there was excitement that this might lead to a proof that P ≠ NP; however, nobody, literally nobody, claimed that these results proved P ≠ NP.<i> They knew better.</i><br />
<br />
So, roughly speaking, the people who claim they've resolved P vs NP either have a knowledge gap or can't see their own mistakes or something that makes their work unlikely to have value. One test for that is to ask if they retracted the proof once flaws have been exposed. <br />
<br />
This is not universally true- I know of two people who claimed to have solved the problem who are pretty careful normally. I won't name names since my story might not be quite right, and because both of them retracted IMMEDIATELY after seeing the error. (When Lance proofread this post he guessed one of them,<br />
so there just aren't that many careful people who claim to have resolved P vs NP.) And one of them got an obscure paper into an obscure journal out of their efforts.<br />
<br />
I honestly don't know how careful Deolalikar is, nor do I know if anything of interest ever came out of his work, or if he has retracted it. If someone knows, please leave a comment.<br />
<br />
I discuss Swart after the next paragraph. <br />
<br />
I WELCOME counter-examples! If you know of a claim to resolve P vs NP where the author's paper had something of value, please comment. The term <i>of value</i> means one of two things: there really was some theorem of interest OR there really were some ideas that were later turned into theorems (or, in the case of P=NP, turned into usable algorithms that worked well in practice).<br />
<br />
One partial counter-example- Swart's claim that P=NP inspired OTHER papers that were good: Yannakakis's proof that Swart's approach could not work, and some sequels that made Lance's list of best papers of the 2000's (see <a href="http://blog.computationalcomplexity.org/2014/04/favorite-theorems-extended-formulations.html#comment-form">this post</a>). I don't quite know how to count that.<br />
<br />
<br />
<br />http://blog.computationalcomplexity.org/2015/03/has-anything-interesting-every-come-out.htmlnoreply@blogger.com (GASARCH)19tag:blogger.com,1999:blog-3722233.post-397116112733971482Thu, 12 Mar 2015 11:25:00 +00002015-03-12T06:25:13.952-05:00Quotes with which I disagreeOften we hear pithy quotes by famous people but some just don't hold water.<br />
<br />
"Computer science is no more about computers than astronomy is about telescopes."<br />
<br />
Usually <a href="http://en.wikiquote.org/wiki/Computer_science#Disputed">attributed to Edsger Dijkstra</a>, the quote tries to capture that using computers or even programming is not computer science, a point with which I agree. But computer science is most definitely about the computers, making them connected, smarter, faster, safer, reliable and easier to use. You can get a PhD in computer science with a smarter cache system, but you can't get a PhD in Astronomy from developing a better telescope lens.<br />
<br />
"If your laptop cannot find it, neither can the market."<br />
<br />
This quote by Kamal Jain is used to say a market can't find equilibrium prices when the equilibrium problem is hard to compute. But to think that the market, with thousands of highly sophisticated and unknown trading algorithms combined with more than a few less-than-rational agents all interacting with each other, can be simulated on a sole laptop seems absurd, even in theory.<br />
<br />
"If you never miss the plane, you're spending too much time in airports."<br />
<br />
George Stigler, a 1982 Nobelist in economics, had this quote to explain individual rationality. But missing a flight is a selfish activity since you will delay seeing people at the place or conference you are heading to or family if you are heading home. I've seen people miss PhD defenses because they couldn't take an extra half hour to head to the airport earlier. If you really have no one on the other side, go ahead and miss your plane. But keep in mind usually you aren't the only one to suffer if you have to take a later flight.<br />
<br />
I take the opposite approach, heading to the airport far in advance of my flight and working at the airport free of distractions of the office. Most airports have the three ingredients I need for an effective working environment: wifi, coffee and restrooms.http://blog.computationalcomplexity.org/2015/03/quotes-with-which-i-disagree.htmlnoreply@blogger.com (Lance Fortnow)11tag:blogger.com,1999:blog-3722233.post-355866376643250287Tue, 10 Mar 2015 17:52:00 +00002015-03-15T22:17:31.021-05:00Guest Post by Thomas Zeume on Applications of Ramsey Theory to Dynamic Descriptive ComplexityGuest Post by Thomas Zeume on<br />
<br />
Lower Bounds for Dynamic Descriptive Complexity<br />
<br />
(A result that uses Ramsey Theory!)<br />
<br />
<br />
In a previous blog post Bill mentioned his hobby of collecting theorems that<br />
apply Ramsey theory. I will present one such application that arises in<br />
dynamic descriptive complexity theory. The first half of the post introduces<br />
the setting, the second part sketches a lower bound proof that uses Ramsey theory.<br />
<br />
Dynamic descriptive complexity theory studies which queries can be maintained by<br />
first-order formulas with the help of auxiliary relations, when the input structure<br />
is subject to simple modifications such as tuple insertions and tuple deletions.<br />
<br />
As an example consider a directed graph into which edges are inserted. When an edge<br />
(u, v) is inserted, then the new transitive closure T' can be defined from the old<br />
transitive closure T by a first-order formula that uses u and v as parameters:<br />
<br />
T'(x,y) = T(x,y) ∨ (T(x, u) ∧ T(v, y))<br />
<br />
Thus the reachability query can be maintained under insertions in this fashion<br />
(even though it cannot be expressed in first-order logic directly).<br />
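To make the update rule concrete, here is a toy sketch (my own illustration, not from the post): it keeps the reflexive transitive closure T as a set of pairs and applies the quantifier-free update formula pointwise on each edge insertion. Starting T from the reflexive closure is what handles the x = u and v = y corner cases.

```python
def make_closure(nodes):
    # reflexive transitive closure of the empty graph
    return {(x, x) for x in nodes}

def insert_edge(T, nodes, u, v):
    # apply T'(x,y) = T(x,y) or (T(x,u) and T(v,y)) pointwise
    return {(x, y) for x in nodes for y in nodes
            if (x, y) in T or ((x, u) in T and (v, y) in T)}

nodes = range(4)
T = make_closure(nodes)
for u, v in [(0, 1), (1, 2), (2, 3)]:
    T = insert_edge(T, nodes, u, v)

assert (0, 3) in T      # 0 reaches 3 through 1 and 2
assert (3, 0) not in T  # no backwards path exists
```

Each update consults only previously stored facts about T and never quantifies over intermediate nodes, which is the point: the closure is maintained even though reachability itself is not first-order expressible.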
<br />
The above update formula is an example of a dynamic descriptive complexity program.<br />
In general, dynamic programs may use several auxiliary relations that are helpful<br />
to maintain the query under consideration. Then each auxiliary relation has one<br />
update formula for edge insertions and one formula for edge deletions.<br />
The example above uses a single auxiliary relation T (which is also the designated<br />
query result) and only updates T under insertions.<br />
<br />
This basic setting has been independently formalized in very similar ways by<br />
Dong, Su and Topor [1, 2] and by Patnaik and Immerman [3]. For both groups one of<br />
the main motivations was that first-order logic is the core of SQL and therefore<br />
queries maintainable in this setting can also be maintained using SQL. Furthermore<br />
the correspondence of first-order logic with built-in arithmetic to uniform<br />
AC0-circuits (constant-depth circuits of polynomial size with unbounded fan-in)<br />
yields that queries maintainable in this way can be evaluated dynamically in a<br />
highly parallel fashion.<br />
<br />
One of the main questions studied in Dynamic Complexity has been whether<br />
Reachability on directed graphs can be maintained in DynFO<br />
(under insertions and deletions of edges). Here DynFO is the class of<br />
properties that can be maintained by first-order update formulas.<br />
The conjecture by Patnaik and Immerman that this is possible has been recently<br />
confirmed by Datta, Kulkarni, Mukherjee, Schwentick and the author of this post,<br />
but has not been published yet [4].<br />
<br />
In this blog post, I would like to talk about dynamic complexity LOWER rather<br />
than upper bounds. Research on dynamic complexity lower bounds has not been<br />
very successful so far. Even though there are routine methods to prove that a<br />
property can not be expressed in first-order logic (or, for that matter, not in AC0),<br />
the dynamic setting adds a considerable level of complication. So far, there is<br />
no lower bound showing that a particular property can not be maintained in<br />
DynFO (besides trivial bounds for properties beyond polynomial time).<br />
<br />
For this reason, all (meaningful) lower bounds proved so far in this setting<br />
have been proved for restricted dynamic programs. One such restriction is to<br />
disallow the use of quantifiers in update formulas. The example above illustrates<br />
that useful properties can be maintained even without quantifiers<br />
(though in this example under insertions only). Therefore proving lower bounds<br />
for this small syntactic fragment can be of interest.<br />
<br />
Several lower bounds for quantifier-free dynamic programs have been proved by using<br />
basic combinatorial tools. For example, counting arguments yield a lower bound for<br />
alternating reachability and non-regular languages [5], and Ramsey-like theorems<br />
as well as Higman's lemma can be used to prove that the reachability query<br />
(under edge insertions and deletions) cannot be maintained by<br />
quantifier-free dynamic programs with binary auxiliary relations [6].<br />
<br />
Here, I will present how bounds for Ramsey numbers can be used to obtain lower bounds.<br />
Surprisingly, the proof of the lower bound in the following result relies on both<br />
upper and lower bounds for Ramsey numbers. Therefore the result might be a good candidate<br />
for Bill's collection of theorems that use Ramsey-like results.<br />
<br />
THEOREM (from [7])<br />
When only edge insertions are allowed, then (k+2)-clique can be maintained by a<br />
quantifier-free dynamic program with (k+1)-ary auxiliary relations, but it cannot be<br />
maintained by such a program with k-ary auxiliary relations.<br />
<br />
SKETCH OF PROOF<br />
<br />
I present a (very) rough proof sketch of the lower bound in the theorem.<br />
The proof sketch aims at giving a flavour of how the upper and lower bounds<br />
on the size of Ramsey numbers are used to prove the above lower bound.<br />
<br />
Instead of using bounds on Ramsey numbers, it will be more convenient to use<br />
the following equivalent bounds on the size of Ramsey cliques. For every c and large enough n:<br />
<br />
1) Every c-colored complete k-hypergraph of size n contains a large Ramsey clique.<br />
<br />
2) There is a 2-coloring of the complete (k+1)-hypergraph of size n that does <i>not</i> contain a large Ramsey clique.<br />
<br />
<br />
In the following it is not necessary to know what "large" exactly means<br />
(though it roughly means of size log^{k-1} n in both statements).<br />
Those bounds are due to Rado, Hajnal and Erdős.<br />
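As a toy analogue of using both an upper and a lower Ramsey bound, here is a brute-force check of the classical graph case R(3,3) = 6 (my own illustration; the proof below needs the hypergraph versions, which are far beyond brute force):

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    # coloring maps each 2-element frozenset of vertices to color 0 or 1;
    # a triangle is monochromatic when its 3 edges use a single color
    return any(len({coloring[frozenset(p)] for p in combinations(tri, 2)}) == 1
               for tri in combinations(range(n), 3))

def every_coloring_has_triangle(n):
    edges = [frozenset(e) for e in combinations(range(n), 2)]
    return all(has_mono_triangle(n, dict(zip(edges, colors)))
               for colors in product([0, 1], repeat=len(edges)))

assert not every_coloring_has_triangle(5)  # lower bound: some 2-coloring of K_5 avoids mono triangles
assert every_coloring_has_triangle(6)      # upper bound: every 2-coloring of K_6 contains one
```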
<br />
Towards a contradiction we assume that there is a quantifier-free program P with<br />
k-ary auxiliary relations that maintains whether a graph contains a (k+2)-clique.<br />
<br />
The first step is to construct a graph G = (V ∪ W, E) such that in all large subsets<br />
C of V one can find independent sets A and B of size k+1 such that adding all edges<br />
between nodes of A yields a graph containing a (k+2)-clique while adding all edges<br />
between nodes of B yields a graph without a (k+2)-clique. Such a graph G can be constructed<br />
using (2). (Choose a large set V and let W := V^{k+1}. Color the set W according to<br />
(2) with colors red and blue. Connect all blue elements w = (v_1, ..., v_{k+1}) in W<br />
with the elements v_1, ..., v_{k+1} in V.)<br />
<br />
Now, if the program P currently stores G, then within the current auxiliary relations<br />
stored by P one can find a large subset C of V where all k-tuples are colored equally<br />
by the auxiliary relations. Such a set C can be found using (1). (More precisely:<br />
by a slight extension of (1) to structures.)<br />
<br />
By the construction of G there are subsets A and B of the set C with the property stated<br />
above. As A and B are subsets of C, they are isomorphic with respect to the auxiliary<br />
relations and the edge relation. A property of quantifier-free programs is that for such<br />
isomorphic sets, the application of corresponding modification sequences yields the same<br />
answer of the program, where "corresponding" means that they adhere to the isomorphism.<br />
<br />
Thus the dynamic program P will give the same answer when adding all edges of A and when adding all edges of B (in an order that preserves the isomorphism). This is a contradiction<br />
as the first sequence of modifications yields a graph with a (k+2)-clique while the second<br />
yields a graph without a (k+2)-clique. Hence such a program P cannot exist. This proves<br />
the lower bound from the above theorem.<br />
<br />
I thank Thomas Schwentick and Nils Vortmeier for many helpful suggestions on how to<br />
improve a draft of this blog post.<br />
<br />
[1] Guozhu Dong and Rodney W. Topor. Incremental evaluation of datalog queries. In ICDT 1992, pages 282–296. Springer, 1992.<br />
<br />
[2] Guozhu Dong and Jianwen Su. First-order incremental evaluation of datalog queries. In Database Programming Languages, pages 295–308. Springer, 1993.<br />
<br />
[3] Sushant Patnaik and Neil Immerman. Dyn-FO: A parallel, dynamic complexity class. J. Comput. Syst. Sci., 55(2):199–209, 1997.<br />
<br />
[4] Samir Datta, Raghav Kulkarni, Anish Mukherjee, Thomas Schwentick, and Thomas Zeume. Reachability is in DynFO. ArXiv 2015.<br />
<br />
[5] Wouter Gelade, Marcel Marquardt, and Thomas Schwentick. The dynamic complexity of formal languages. ACM Trans. Comput. Log., 13(3):19, 2012.<br />
<br />
[6] Thomas Zeume and Thomas Schwentick. On the quantifier-free dynamic complexity of Reachability. Inf. Comput. 240 (2015), pp. 108–129<br />
<br />
[7] Thomas Zeume. The dynamic descriptive complexity of k-clique. In MFCS 2014, pages 547–558. Springer, 2014.<br />
<br />http://blog.computationalcomplexity.org/2015/03/guest-post-by-thomas-zeume-on-app-of.htmlnoreply@blogger.com (GASARCH)1tag:blogger.com,1999:blog-3722233.post-8824991713370923749Thu, 05 Mar 2015 15:23:00 +00002015-03-05T09:55:35.839-06:00(1/2)! = sqrt(pi) /2 and other conventions (This post is inspired by the book <a href="http://www.amazon.com/The-Cult-Pythagoras-Math-Myths/dp/0822962705">The cult of Pythagoras: Math and Myths</a> which I recently<br />
read and reviewed. See <a href="http://www.cs.umd.edu/~gasarch/BLOGPAPERS/cult.pdf">here</a> for my review.)<br />
<br />
STUDENT: The factorial function is only defined on the natural numbers. Is there some way to extend it to all the reals? For example, what is (1/2)! ?<br />
<br />
BILL: Actually (1/2)! is sqrt(π)/2<br />
<br />
STUDENT: Oh well, ask a stupid question, get a stupid answer.<br />
<br />
BILL: No, I'm serious, (1/2)! is sqrt(π)/2.<br />
<br />
STUDENT: C'mon, be serious. If you don't know or if it's not known, just tell me.<br />
<br />
The Student has a point. (1/2)! = sqrt(π)/2 sounds stupid even though it's true. So I ask--- is there some other way that factorial could be extended to all the reals that is as well motivated as the Gamma function? Since 0!=1 and 1!=1, perhaps (1/2)! should be 1.<br />
<br />
Is there a combinatorial interpretation for (1/2)! = sqrt(π)/2?<br />
<br />
If one defined n! by piecewise linear interpolation, that works, but is it useful? Interesting?<br />
<br />
For that matter, is the Gamma function useful? Interesting?<br />
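Whatever its combinatorial status, the value is easy to check numerically with Python's standard library (a quick sketch; it uses the standard shift n! = Γ(n+1)):

```python
import math

# n! = Gamma(n+1), so (1/2)! corresponds to Gamma(3/2)
half_factorial = math.gamma(1.5)
assert math.isclose(half_factorial, math.sqrt(math.pi) / 2)

# Gamma agrees with factorial on the naturals, which is why
# it is regarded as the natural extension
assert all(math.isclose(math.gamma(n + 1), math.factorial(n))
           for n in range(10))
```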
<br />
ANOTHER CONVENTION: We say that 0^0 is undefined. But I think it should be 1.<br />
Here is why:<br />
<br />
d/dx x^n = nx^{n-1} is true except in the case n=1 at x=0, where the right-hand side involves 0^0. Let's make it ALSO true there by saying that x^0=1 ALWAYS,<br />
and that includes at 0.<br />
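Programming languages have largely adopted this convention already (a quick check; the behavior shown is Python's):

```python
# Python, like most languages, takes 0**0 to be 1
assert 0 ** 0 == 1
assert 0.0 ** 0 == 1.0

# the convention is what lets polynomial evaluation work uniformly at x = 0:
coeffs = [5, 3, 2]  # represents 5 + 3x + 2x^2
value_at_zero = sum(c * 0 ** n for n, c in enumerate(coeffs))
assert value_at_zero == 5  # just the constant term, as it should be
```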
<br />
A SECOND LOOK AT A CONVENTION: (-3)(4) = -12 makes sense since if I owe my bookie<br />
3 dollars 4 times then I owe him 12 dollars. But what about (-3)(-4)=12? This makes certain<br />
other laws of arithmetic extend to the negatives, which is well and good, but we should not<br />
mistake this convention for a discovered truth. If there were an application where defining<br />
NEG*NEG = NEG was useful, then that would be a nice alternative system, much like the alternative geometries.<br />
<br />
I COULD TALK ABOUT a^{1/2} = sqrt(a) also being a convention to make a rule work out;<br />
however, (1) my point is made, and (2) I think I blogged about that a while back.<br />
<br />
So what is my point- we adopt certain conventions, which is fine and good, but we should not<br />
mistake them for eternal truths. This may also play into the question of whether math is invented or<br />
discovered. <br />
<br />
<br />http://blog.computationalcomplexity.org/2015/03/12-sqrtpi-and-other-conventions.htmlnoreply@blogger.com (GASARCH)11tag:blogger.com,1999:blog-3722233.post-1123635599972944401Mon, 02 Mar 2015 20:46:00 +00002015-03-02T14:46:29.827-06:00Leonard Nimoy (1931-2015)<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-vAZdHhfPsY4/VPTL0FZ2-MI/AAAAAAAA0UM/-I_nORUSvXU/s1600/spock.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-vAZdHhfPsY4/VPTL0FZ2-MI/AAAAAAAA0UM/-I_nORUSvXU/s1600/spock.jpg" height="116" width="400" /></a></div>
<br />
Bill and I rarely write joint blog posts but with the loss of a great cultural icon we both had to have our say.<br />
<br />
<b>Bill:</b> Leonard Nimoy (Spock) died last week at the age of 83. DeForest Kelley (McCoy) passed away in 1999. William Shatner (Kirk) is still alive, though I note that he is four days older than Nimoy.<br />
<br />
Spock tried to always be logical. I wonder if an unemotional scientist would be a better or worse scientist.<br />
Does emotion drive our desire to learn things? Our choice of problems to work on? Our creativity?<br />
<br />
Did Star Trek (or its successors) inspire many to go into science? Hard to tell but I suspect yes. Did it inspire you?<br />
<br />
Their depiction of technology ranged from predictive (communicators are cell phones!) to awful (the episode 'The Ultimate Computer' wanted to show that humans are better than computers. It instead showed that humans are better than a malfunctioning killer computer. I think we knew that.) I think TV shows now hire science consultants to get things right (The Big Bang Theory seems to get lots of science right, though their view of academia is off) but in those days there was less concern for that.<br />
<br />
<b>Lance:</b> I'm too young to remember the original Star Trek series when it first aired but I did watch the series religiously during the 70's when a local TV station aired an episode every day, seeing every episode multiple times. The original Star Trek was a product of its time, using the future to reflect the current societal issues of the 60's. Later Star Trek movies and series seemed to have lost that premise.<br />
<br />
Every nerdy teenager, myself included, could relate to Spock with his logical exterior and his half-human emotional interior that he could usually suppress. Perhaps my favorite Spock episode was the penultimate "All Our Yesterdays" where Spock having been sent back in time takes on an earlier emotional state of the old Vulcans and falls in love.<br />
<br />
I did see Leonard Nimoy in person once, during a lecture at MIT in the 80's. He clearly relished being Spock and we all relished him.<br />
<br />
Goodbye, Leonard. You have lived long and prospered and gone well beyond where any man has gone before.http://blog.computationalcomplexity.org/2015/03/leonard-nimoy-1931-2015_2.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-4151756712487990603Thu, 26 Feb 2015 22:22:00 +00002015-02-26T16:23:08.129-06:00Selecting the Correct OracleAfter my <a href="http://blog.computationalcomplexity.org/2015/02/and-winners-are.html">post last week</a> on the Complexity accepts, a friend of Shuichi Hirahara sent Shuichi an email saying that I was interested in his paper. Shuichi contacted me, sent me his paper and we had a few good emails back and forth. He posted his paper <a href="http://arxiv.org/abs/1502.07258">Identifying an Honest EXP<sup>NP</sup> Oracle Among Many</a> on the arXiv yesterday.<br />
<div>
<br /></div>
<div>
Shuichi asks the following question: Given two oracles both claiming to compute a language L, figure out which oracle is correct. For which languages does there exist such a selector?</div>
<div>
<br /></div>
<div>
For deterministic polynomial-time selectors, every such L must sit in PSPACE and all PSPACE-complete languages have selectors. The question gets much more interesting if you allow probabilistic computation.</div>
<div>
<br /></div>
<div>
Shuichi shows that every language that has a probabilistic poly-time selector sits in S<sub>2</sub><sup>EXP</sup>, the exponential analogue of <a href="http://blog.computationalcomplexity.org/2002/08/complexity-class-of-week-s2p.html">S<sub>2</sub><sup>P</sup></a>. His main theorem shows that EXP<sup>NP</sup>-complete sets have this property. His proof is quite clever, using the EXP<sup>NP</sup>-complete problem of finding the lexicographically least witness of a succinctly-described exponential-size 3-SAT question. He uses PCP techniques to have each oracle produce a witness and then he has a clever way of doing binary search to find the least bit where these witnesses differ. I haven't checked all the details carefully but the proof ideas look good.</div>
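To illustrate just the binary-search ingredient, here is a toy sketch (my own, not Hirahara's actual protocol: the helper `agree_on_prefix` stands in for the PCP-based consistency check, and the witnesses are short strings rather than exponentially long oracle answers):

```python
def first_difference(w1, w2):
    """Least index where two distinct equal-length witnesses differ,
    found with O(log n) prefix-agreement queries."""
    def agree_on_prefix(m):
        # in the paper this check uses PCP machinery, since the
        # witnesses are exponentially long
        return w1[:m] == w2[:m]

    lo, hi = 0, len(w1)  # agree on first lo bits, differ within first hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if agree_on_prefix(mid):
            lo = mid
        else:
            hi = mid
    return lo  # w1[lo] != w2[lo]

assert first_difference("110100", "110000") == 3
assert first_difference("01", "11") == 0
```

Roughly, since the honest oracle holds the lexicographically least witness, the bit at this first difference (together with the verification machinery) tells the selector which oracle to trust.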
<div>
<br /></div>
<div>
Still leaves an interesting gap between EXP<sup>NP</sup> and S<sub>2</sub><sup>EXP</sup>. Is there a selector for Promise-S<sub>2</sub><sup>EXP</sup>-complete languages?</div>
http://blog.computationalcomplexity.org/2015/02/selecting-correct-oracle.htmlnoreply@blogger.com (Lance Fortnow)1