Computational Complexity<br />
Computational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch<br />
http://blog.computationalcomplexity.org/<br />
<br />
Fifty Years of Moore's Law (Fri, 24 Apr 2015, Lance Fortnow)<br />
<br />
Gordon Moore formulated his <a href="http://en.wikipedia.org/wiki/Moore%27s_law">famous law</a> in a <a href="http://www.cs.utexas.edu/~fussell/courses/cs352h/papers/moore.pdf">paper</a> dated fifty years and five days ago. We have all seen how Moore's law has changed real-world computing, but how does it relate to computational complexity?<br />
<br />
In complexity we typically focus on running times, but what we really care about is how large a problem we can solve with current technology. In one of my <a href="http://blog.computationalcomplexity.org/2002/12/note-on-running-times.html">early posts</a> I showed how this view can change how we judge running-time improvements from faster algorithms. Improved technology also allows us to solve bigger problems, which is one justification for asymptotic analysis. For a polynomial-time algorithm, a doubling of processor speed gives a constant multiplicative increase in the size of the problem we can solve; for an exponential-time algorithm we only get an additive increase.<br />
<br />
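To make that last point concrete, here is a small sketch (my own illustration, not from the post): with a fixed budget of machine operations, doubling the speed of the machine multiplies the largest solvable input size by a constant for an n<sup>2</sup>-time algorithm, but only adds 1 for a 2<sup>n</sup>-time one.

```python
# Illustration (my own, not from the post): how the largest solvable input
# size grows when the machine gets twice as fast, for a polynomial-time
# versus an exponential-time algorithm.

def max_size(budget, cost):
    """Largest n whose cost (in machine operations) fits within the budget."""
    n = 0
    while cost(n + 1) <= budget:
        n += 1
    return n

budget = 10**6  # operations we can afford on today's machine
for name, cost in [("n^2 (polynomial)", lambda n: n * n),
                   ("2^n (exponential)", lambda n: 2 ** n)]:
    slow = max_size(budget, cost)
    fast = max_size(2 * budget, cost)  # a machine twice as fast
    print(f"{name}: n = {slow} -> {fast}")
# n^2: 1000 -> 1414 (a multiplicative factor of about sqrt(2))
# 2^n: 19 -> 20     (just an additive 1)
```

The budget of 10<sup>6</sup> operations is arbitrary; any budget shows the same multiplicative-versus-additive contrast.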
Although Moore's law continues, computers stopped getting faster about ten years ago. Instead we've seen the rise of new technologies: GPUs and other specialized processors, multicore, cloud computing and more on the horizon.<br />
<br />
The complexity and algorithmic communities have been slow to catch up. With some exceptions, we still focus on single-core, single-thread algorithms. Instead we need to find good models for these new technologies and develop algorithms and complexity bounds that map nicely onto our current computing reality.<br />
<br />
(Posted by Lance Fortnow: http://blog.computationalcomplexity.org/2015/04/fifty-years-of-moores-law.html)<br />
<br />
The New Oracle Result! The New Circuit Result! Which Do You Care About? (Wed, 22 Apr 2015, GASARCH)<br />
<br />
You have likely heard of the new result by Benjamin Rossman, Rocco Servedio, and Li-Yang Tan on random oracles (<a href="http://arxiv.org/abs/1504.03398">see here for the preprint</a>) from either <a href="http://blog.computationalcomplexity.org/2015/04/ph-infinite-under-random-oracle.html">Lance</a> or <a href="http://www.scottaaronson.com/blog/?p=2272">Scott</a> or some other source:<br />
<br />
Lance's headline was <i>PH infinite under random oracle</i><br />
<br />
Scott's headline was <i>Two papers </i>but when he stated the result he also stated it as a random oracle result.<br />
<br />
The paper itself has the title<br />
<br />
<i>An average case depth hierarchy theorem for Boolean circuits</i> <br />
<br />
and the abstract is: <br />
<br />
We prove an average-case depth hierarchy theorem for Boolean circuits over the standard basis of AND, OR, and NOT gates. Our hierarchy theorem says that for every d ≥ 2, there is an explicit n-variable Boolean function f, computed by a linear-size depth-d formula, which is such that any depth-(d−1) circuit that agrees with f on a (1/2 + o<sub>n</sub>(1)) fraction of all inputs must have size exp(n<sup>Ω(1/d)</sup>). This answers an open question posed by Håstad in his Ph.D. thesis.
<br />
Our average-case depth hierarchy theorem implies that the polynomial
hierarchy is infinite relative to a random oracle with probability 1,
confirming a conjecture of Hastad, Cai, and Babai. We also use our result
to show that there is no "approximate converse" to the results of Linial,
Mansour, Nisan and Boppana on the total influence of small-depth circuits, thus
answering a question posed by O'Donnell, Kalai, and Hatami.
<br />
A key ingredient in our proof is a notion of <i>random projections</i> which generalize random restrictions.<br />
<br />
Note that they emphasize the circuit aspect.<br />
<br />
In Yao's paper, where he showed that PARITY in constant depth requires exponential size, the title was<br />
<br />
<i>Separating the polynomial-time hierarchy by oracles</i><br />
<br />
Håstad's paper and book had titles about circuits, not oracles.<br />
<br />
When Scott showed that relative to a random oracle P<sup>NP</sup> is properly contained in Σ<sup>p</sup><sub>2</sub>, the title was<br />
<br />
<i>A Counterexample to the Generalized Linial-Nisan Conjecture</i><br />
<br />
However, the abstract begins with a statement of the oracle result.<br />
<br />
So here is the real question: which is more interesting, the circuit lower bounds or the oracle results that follow? The authors' titles and abstracts might tell you what they are thinking; then again, they might not. For example, I can't really claim to know whether Yao cared about oracles more than circuits.<br />
<br />
Roughly speaking: circuit results are interesting since they are actual lower bounds, often on reasonable models for natural problems (both of these claims can be counter-argued); oracle results are interesting since they give us a sense that certain proof techniques are not going to work; and random oracle results are interesting since, for classes like these (a notion that is not well defined), things true relative to a random oracle tend to be things we think are true.<br />
<br />
But I want to hear from you, the reader: which of PARITY NOT IN AC<sup>0</sup> and THERE IS AN ORACLE SEPARATING PH FROM PSPACE do you find more interesting? Which is easier to motivate to other theorists? To non-theorists? (For non-theorists I think PARITY.)<br />
<br />
(Posted by GASARCH: http://blog.computationalcomplexity.org/2015/04/the-new-oracle-result-new-circuit.html)<br />
<br />
PH Infinite Under a Random Oracle (Thu, 16 Apr 2015, Lance Fortnow)<br />
<br />
Benjamin Rossman, Rocco Servedio and Li-Yang Tan show <a href="http://arxiv.org/abs/1504.03398">new circuit lower bounds</a> that imply, among other things, that the polynomial-time hierarchy is infinite relative to a random oracle. What does that mean, and why is it important?<br />
<br />
The polynomial-time hierarchy can be defined inductively as follows: Σ<sup>P</sup><sub>0 </sub>= P, the set of problems solvable in polynomial-time. Σ<sup>P</sup><sub>i+1 </sub>= NP<sup>Σ<sup>P</sup><sub>i</sub></sup>, the set of problems computable in nondeterministic polynomial-time that can ask arbitrary questions to the previous level. We say the polynomial-time hierarchy is infinite if Σ<sup>P</sup><sub>i+1 </sub>≠ Σ<sup>P</sup><sub>i</sub> for all i and it collapses otherwise.<br />
<br />
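For the record, the inductive definition above can be written compactly (my notation; PH denotes the union of the levels):

```latex
\Sigma^P_0 = \mathrm{P}, \qquad
\Sigma^P_{i+1} = \mathrm{NP}^{\Sigma^P_i}, \qquad
\mathrm{PH} = \bigcup_{i \ge 0} \Sigma^P_i .
```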
That the polynomial-time hierarchy is infinite is one of the major assumptions in computational complexity and would imply a large number of statements we believe to be true, including that NP-complete problems do not have small circuits and that Graph Isomorphism is not co-NP-complete.<br />
<br />
We don't have the techniques to settle whether or not the polynomial-time hierarchy is infinite, so we look at relativized worlds, where all machines have access to the same oracle. The Baker-Gill-Solovay oracle that makes P = NP also collapses the hierarchy. Finding an oracle that makes the hierarchy infinite was a larger challenge and required new results in circuit complexity.<br />
<br />
In 1985, Yao in his paper <a href="http://dx.doi.org/10.1109/SFCS.1985.49">Separating the polynomial-time hierarchy by oracles</a> showed that there are functions with small depth-(d+1) circuits but only large depth-d circuits, which is what the oracle separation requires. Håstad gave a <a href="http://dx.doi.org/10.1145/12130.12132">simplified</a> proof. Cai <a href="http://dx.doi.org/10.1145/12130.12133">proved</a> that PSPACE ≠ Σ<sup>P</sup><sub>i</sub> for all i even if we choose the oracle at random (with probability one). Babai, later and independently, <a href="http://dx.doi.org/10.1016/0020-0190(87)90036-6">gave a simpler proof</a>.<br />
<br />
Whether a randomly chosen oracle would make the hierarchy infinite required showing the depth separation of circuits in the average case, which remained open for three decades. Rossman, Servedio and Tan solved that circuit problem and got the random oracle result as a consequence. They build on Håstad's proof technique of randomly restricting variables to true and false, generalizing it to a random projection method that projects onto a new set of variables. <a href="http://arxiv.org/abs/1504.03398">Read their paper</a> for all the details.<br />
<br />
In 1994, Ron Book <a href="http://dx.doi.org/10.1016/0020-0190(94)00157-X">showed</a> that if the polynomial-time hierarchy is infinite then it remains infinite relative to a random oracle. Rossman et al. thus give even more evidence that the hierarchy is indeed infinite, in the sense that had they proven the opposite result, the hierarchy would have collapsed.<br />
<br />
I <a href="http://dx.doi.org/10.1016/S0020-0190(99)00034-4">used</a> Book's paper to show that a number of complexity hypotheses hold simultaneously with the hierarchy being infinite, now a trivial consequence of the Rossman et al. result. I can live with that.<br />
<br />
(Posted by Lance Fortnow: http://blog.computationalcomplexity.org/2015/04/ph-infinite-under-random-oracle.html)<br />
<br />
Baseball is More Than Data (Tue, 14 Apr 2015, Lance Fortnow)<br />
<br />
<div class="MsoNormal">
As baseball starts its second week, let's reflect a bit on
how data analytics has changed the game. Not just the Moneyball phenomenon of
ranking players but also the extensive use of defensive shifts (repositioning
the infielders and outfielders for each batter) and other maneuvers. We're not
quite to the point that technology can replace managers and umpires but give it
another decade or two.</div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
We've seen a huge increase in data analysis in sports. ESPN
<a href="http://espn.go.com/espn/feature/story/_/id/12331388/the-great-analytics-rankings">ranked teams</a> based on their use of analytics and it correlates well with how
those teams are faring. Eventually everyone will use the same learning
algorithms and games will just be a random coin toss with coins weighted by how
much each team can spend.<br />
<br /></div>
<div class="MsoNormal">
Steve Kettmann wrote an NYT op-ed piece, <a href="http://www.nytimes.com/2015/04/08/opinion/baseball-by-the-numbers.html">Don't Let Statistics Ruin Baseball</a>. At first I thought this was just another luddite who will be left behind, but he makes a salient point. We don't go to baseball to watch the stats. We go to see people play. We enjoy the suspense of every pitch, the one-on-one battle between pitcher and batter and the great defensive moves. Maybe statistics can tell us which players a team should acquire and where the fielders should stand, but it is still people who play the game.<br />
<br /></div>
<div class="MsoNormal">
Kettmann worries about the obsession of baseball writers
with statistics. Those who write based on stats can be replaced by machines.
Baseball is a great game to listen on the radio for the best broadcasters don't
talk about the numbers, they talk about the people. Otherwise you might as well
listen to competitive tic-tac-toe.</div>
(Posted by Lance Fortnow: http://blog.computationalcomplexity.org/2015/04/baseball-is-more-than-data.html)<br />
<br />
FCRC 2015 (Thu, 09 Apr 2015, Lance Fortnow)<br />
<br />
Every four years the Association for Computing Machinery organizes a <a href="http://fcrc.acm.org/">Federated Computing Research Conference</a> consisting of several co-located conferences and some joint events. This year's event will be held June 13-20 in Portland, Oregon, and includes Michael Stonebraker's Turing Award lecture. There is a <a href="https://www.regonline.com/Register/Checkin.aspx?EventID=1692718">single registration site</a> for all conferences (early deadline May 18th) and I recommend <a href="http://fcrc.acm.org/travel.cfm">booking hotels</a> early, and definitely before the May 16th cutoff.<br />
<br />
Theoretical computer science is well represented.<br />
<ul>
<li><a href="http://acm-stoc.org/stoc2015/">47th ACM Symposium on the Theory of Computing</a>. Apply for <a href="http://acm-stoc.org/stoc2015/travel-support.html">student travel support</a> by May 9th. </li>
<li><a href="http://computationalcomplexity.org/Archive/2015/local.html">30th Computational Complexity Conference</a>, now an independent conference. <a href="http://computationalcomplexity.org/travelAllowance.php">Student travel support</a> deadline of May 9th. CCC is looking for a new logo; if you have ideas, send them to <a href="http://pages.cs.wisc.edu/~dieter/contact.html">Dieter van Melkebeek</a>.</li>
<li><a href="http://www.sigecom.org/ec15/">16th ACM Conference on Economics and Computation</a> and its associated <a href="http://www.sigecom.org/ec15/schedule_workshops.html">workshops and tutorials</a></li>
<li><a href="http://www.cs.jhu.edu/~spaa/">27th ACM Symposium on Parallelism in Algorithms and Architectures</a></li>
<li>A plenary lecture by Andy Yao</li>
</ul>
<div>
The CRA-W is organizing mentoring workshops for early career and mid-career faculty and faculty supervising undergraduate research.</div>
<div>
<br /></div>
<div>
A number of other major conferences will also be part of FCRC, including HPDC, ISCA, PLDI and SIGMETRICS. There are many algorithmic challenges in all these areas, and FCRC really gives you an opportunity to sit in on talks outside your comfort zone. You might be surprised by what you see.</div>
<div>
<br /></div>
<div>
See you in Portland!</div>
(Posted by Lance Fortnow: http://blog.computationalcomplexity.org/2015/04/fcrc-2015.html)<br />
<br />
Two Cutoffs About Waring's Problem for Cubes (Tue, 07 Apr 2015, GASARCH)<br />
Known:<br />
<ol>
<li>All numbers except 23 and 239 can be written as the sum of 8 cubes.</li>
<li>All but a finite number of numbers can be written as the sum of 7 cubes.</li>
<li>There are an infinite number of numbers that cannot be written as the sum of 3 cubes (this one you can prove yourself; the other two are hard, deep theorems).</li>
</ol>
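For the third fact, the standard argument looks at cubes mod 9; the short check below (my addition, not from the post) verifies the two residue computations the argument relies on.

```python
# Every cube is 0, 1, or 8 (mod 9), so a sum of three cubes can only hit the
# residues {0,1,2,3,6,7,8} mod 9.  Numbers congruent to 4 or 5 mod 9 are
# therefore never the sum of three cubes -- and there are infinitely many.

cube_residues = {(n ** 3) % 9 for n in range(9)}
print(sorted(cube_residues))  # [0, 1, 8]

three_cube_sums = {(a + b + c) % 9
                   for a in cube_residues
                   for b in cube_residues
                   for c in cube_residues}
print(sorted(three_cube_sums))  # [0, 1, 2, 3, 6, 7, 8]
```

Ranging n over 0..8 suffices since cubing respects congruence mod 9.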
Open: Find x such that: <br />
<ol>
<li>All but a finite number of numbers can be written as the sum of x cubes.</li>
<li>There exists an infinite number of numbers that cannot be written as the sum of x-1 cubes.</li>
</ol>
It is known that 4 ≤ x ≤ 7.<br />
<br />
Let's say you didn't know any of this and were looking at empirical data.<br />
<br />
<ol>
<li>If you find that every number ≤ 10 can be written as the sum of 7 cubes, this is NOT interesting, because 10 is too small.</li>
<li>If you find that every number ≤ 1,000,000 except 23 and 239 can be written as the sum of 8 cubes, this IS interesting, since 1,000,000 is big enough that one thinks this is telling us something (though we could be wrong). What if you found that all but 10 numbers ≤ 1,000,000 (I do not know whether that is true) are the sum of seven cubes?</li>
</ol>
Open but too informal to be a real question: Find x such that<br />
<ol>
<li>Information about sums-of-cubes for all numbers ≤ x-1 is NOT interesting</li>
<li>Information about sums-of-cubes for all numbers ≤ x IS interesting. </li>
</ol>
By the intermediate value theorem such an x exists. But of course this is silly. The fallacy probably lies in the informal notion of `interesting'. But a serious question: how big does x have to be before data about this would be considered interesting? (NO, I won't come back with `what about x-1'.)<br />
<br />
More advanced form: Find a function f(x,y) and constants c1 and c2 such that<br />
<ol>
<li>If f(x,y) ≥ c1 then the statement <i>all but y numbers ≤ x are the sum of 7 cubes</i> is interesting.</li>
<li>If f(x,y) ≤ c2 then the statement <i>all but y numbers ≤ x are the sum of 7 cubes</i> is not interesting. </li>
</ol>
To end with a more concrete question: show that there are an infinite number of numbers that cannot be written as the sum of 14 fourth powers.<br />
<br />
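A hint in the same mod-arithmetic spirit (my addition, not part of the post): fourth powers take only the values 0 and 1 mod 16, so a sum of 14 of them can never be 15 mod 16. The check below confirms the residue fact.

```python
# Every fourth power is 0 or 1 (mod 16): even n gives n^4 divisible by 16,
# and odd n gives n^4 = (n^2)^2 with n^2 in {1, 9} mod 16, both of which
# square to 1.  So a sum of 14 fourth powers is at most 14 mod 16, and the
# infinitely many numbers congruent to 15 mod 16 are never such a sum.

fourth_power_residues = {(n ** 4) % 16 for n in range(16)}
print(sorted(fourth_power_residues))  # [0, 1]
```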
(Posted by GASARCH: http://blog.computationalcomplexity.org/2015/04/two-cutoffs-about-warings-problem-for.html)<br />
<br />
Which of These Stories is False (Wed, 01 Apr 2015, GASARCH)<br />
<br />
I would rather challenge you than fool you on April Fools' Day. Below are some news items. All but one are true. I challenge you to determine which one is false.<br />
<br />
<ol>
<li>Amazon opened a brick-and-mortar store: <a href="http://money.cnn.com/2015/02/04/technology/amazon-purdue/">full story here.</a> If true, this is really, really odd, since I thought they saved time and money by not having stores.</li>
<li>You may have heard of some music groups releasing vinyl albums in recent times. They come with an MP3 chip, so I doubt the buyers ever play the vinyl, but the size allows for more interesting art. What did people record on before vinyl? Wax cylinders! Some music groups have released songs on wax cylinders! See <a href="http://hyperallergic.com/82804/after-nearly-a-century-wax-cylinder-music-gets-a-new-release/">here</a> for a release a while back by <i>Tiny Tim</i> (the singer, not the fictional character) and <a href="http://en.wikipedia.org/wiki/The_Steampunk_Album_That_Cannot_Be_Named_For_Legal_Reasons">here</a> for a release by the group <i>The Men That Will Not Be Blamed for Nothing</i>.</li>
<li>An error in Google Maps led to Nicaragua accidentally invading Costa Rica. Even more amazing: this excuse was correct and Google admitted the error. See <a href="http://www.wired.com/2010/11/google-maps-error-blamed-for-nicaraguan-invasion/">here</a> for details.</li>
<li>There was a conference called <i>Galileo Was Wrong, The Church Was Right</i> for people who think the Earth really is the centre of the universe (my spell checker says that `center' is wrong and `centre' is right; maybe it's from England). I assume they mean that the sun and other stuff goes around the earth in concentric circles, and not that one can take any reference point and call it the center. The conference is run by <a href="http://en.wikipedia.org/wiki/Robert_Sungenis">Robert Sungenis</a>, who also wrote a book on the topic (it's on Amazon <a href="http://www.amazon.com/Galileo-Was-Wrong-Church-Right/dp/0977964000">here</a>, and the comments section actually has a debate on the merits of his point of view). There is also a website on the topic <a href="http://galileowaswrong.blogspot.com/">here</a>. The Catholic Church does not support him or his point of view, and in fact asked him to take ``Catholic'' out of the name of his organization, which he has done. (ADDED LATER: a commenter named Shane Chubbs, who has read the relevant material on this case more carefully than I have, commented that Robert Sungenis DOES claim that we can take the center of the universe to be anywhere, so it might as well be here. If that's Robert S's only point, it's hard to believe he got a whole book out of it.) OH, this is one of the TRUE items.</li>
</ol>
http://blog.computationalcomplexity.org/2015/04/which-of-these-stories-is-false.htmlnoreply@blogger.com (GASARCH)7tag:blogger.com,1999:blog-3722233.post-5701624028183470062Mon, 30 Mar 2015 18:36:00 +00002015-03-30T13:36:46.584-05:00Intuitive ProofsAs I <a href="http://blog.computationalcomplexity.org/2014/12/undergraduate-research.html">mentioned</a> a few months ago, I briefly joined an undergraduate research seminar my freshman year at Cornell. In that seminar I was asked if a two-dimensional random walk on a lattice would return to the origin infinitely often. I said of course. The advisor was impressed until he asked about three-dimensional walks and I said they also hit the origin infinitely often. My intuition was wrong.<br />
<div>
<br /></div>
<div>
33 years later I'd like to give the right intuition. This is rough intuition, not a proof, and I'm sure none of this is original with me.</div>
<div>
<br /></div>
<div>
In a 1-dimensional random walk, you will be at the origin on the nth step with probability about 1/n<sup>0.5</sup>. Since the sum of 1/n<sup>0.5</sup> diverges this happens infinitely often.</div>
<div>
<br /></div>
<div>
In a 2-dimensional random walk, you will be at the origin on the nth step with probability about (1/n<sup>0.5</sup>)<sup>2</sup> = 1/n. Since the sum of 1/n diverges this happens infinitely often.</div>
<br />
<div>
In a 3-dimensional random walk, you will be at the origin on the nth step with probability about (1/n<sup>0.5</sup>)<sup>3</sup> = 1/n<sup>1.5</sup>. Since the sum of 1/n<sup>1.5</sup> converges this happens finitely often.</div>
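This recurrence/transience dichotomy is easy to check empirically. The following rough Monte Carlo sketch (my own illustration, with arbitrary step and trial counts) estimates the fraction of d-dimensional lattice walks that come back to the origin within a fixed number of steps; the fraction stays high in 1 and 2 dimensions and drops in 3:

```python
import random

def return_fraction(dim, steps=200, trials=2000, seed=0):
    """Estimate the chance a simple random walk on the d-dimensional
    lattice revisits the origin within `steps` steps."""
    rng = random.Random(seed)
    returned = 0
    for _ in range(trials):
        pos = [0] * dim
        for _ in range(steps):
            axis = rng.randrange(dim)         # pick a coordinate
            pos[axis] += rng.choice((-1, 1))  # step +1 or -1 along it
            if all(c == 0 for c in pos):
                returned += 1
                break
    return returned / trials

for d in (1, 2, 3):
    print(d, return_fraction(d))
```

In 1 and 2 dimensions the true probability of eventually returning is 1; in 3 dimensions it is strictly less than 1 (roughly 0.34, by Pólya's theorem), which the falling estimates reflect.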
http://blog.computationalcomplexity.org/2015/03/intuitive-proofs.htmlnoreply@blogger.com (Lance Fortnow)4tag:blogger.com,1999:blog-3722233.post-4340163653560675939Wed, 25 Mar 2015 18:10:00 +00002015-03-25T13:10:49.472-05:00News AplentyBoth the Turing Award and the Abel Prize were announced this morning.<br />
<br />
MIT databases researcher Michael Stonebraker <a href="http://preview.acm.org/2014-turing-award">wins the ACM Turing Award</a>. He developed INGRES, one of the first relational databases. Stonebraker is the first Turing award winner since the prize went up to a cool million dollars.<br />
<br />
John Nash and Louis Nirenberg <a href="http://www.abelprize.no/nyheter/vis.html?tid=63589">share this year's Abel Prize</a> “for striking and seminal contributions to the theory of nonlinear partial differential equations and its applications to geometric analysis.” This work on PDEs is completely independent of the equilibrium results that won Nash the 1994 Nobel Prize.<br />
<br />
Earlier this week the CRA released their latest Best Practice Memo: <a href="http://cra.org/resources/bp-view/best_practices_memo_evaluating_scholarship_in_hiring_tenure_and_promot">Incentivizing Quality and Impact: Evaluating Scholarship in Hiring, Tenure, and Promotion</a>. In short: Emphasize quality over quantity in research.<br />
<br />
The NSF announced their <a href="http://www.nsf.gov/news/special_reports/public_access/">public access plan</a> to ensure that research is "available for download, reading and analysis free of charge no later than 12 months after initial publication".http://blog.computationalcomplexity.org/2015/03/news-aplenty.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-195292551308946501Mon, 23 Mar 2015 02:35:00 +00002015-03-22T21:35:50.034-05:00Which mathematician had the biggest gap between fame and contribution?(I was going to call this entry <i> Who was the worst mathematician of all time? </i>but Clyde Kruskal reminded me that it's not (say) Goldbach's fault that his conjecture got so well known, in fact it's a good thing! I'll come back to Goldbach later.) <br />
<br />
Would Hawking be as well known if he didn't have ALS? I suspect that within Physics yes, but I doubt he would have had guest shots on <i>ST:TNG, The Simpsons, Futurama, and The Big Bang Theory</i> (I just checked the IMDb database- they don't mention Futurama but they do say he's a Capricorn. I find it appalling that they mention a scientist's horoscope.) I also doubt there would be a movie about him.<br />
<br />
Would Turing be as well known if he weren't gay and hadn't died young (likely because of the ``treatment'')? I suspect that within Computer Science yes, but I doubt there would be a play, a movie, and there are rumors of a musical. Contrast him with John von Neumann who, one could argue, contributed as much as Turing, but, alas, no guest shots on I Love Lucy, no movie, no rap songs about him. (The only scientist that there may be a rap song about is Heisenberg, and that doesn't count since it would really be about Walter White.)<br />
<br />
Hawking and Turing are/were world class in their fields. Is there someone who is very well known but didn't do that much? <br />
<br />
SO we are looking for a large gap between how well known the person is and how much math they actually did. This might be unfair to well-known people (it might be unfair to ME since complexityblog makes me better known than I would be otherwise). However, I have AN answer that is defensible. Since the question is not that well defined there probably cannot be a definitive answer.<br />
<br />
First let's consider Goldbach (who is NOT my answer). He was a professor of math and did some stuff on the theory of curves, diff eqs, and infinite series. Certainly respectable. But if not for his<br />
conjecture (every even number greater than 2 is the sum of two primes- still open) I doubt we would have heard of him.<br />
<br />
My answer: Pythagoras! He is well known as a mathematician but there is no evidence that he had any connection to the theorem that bears his name.<br />
<br />
Historians (or so-called historians) would say that it was well known that he proved the theorem, or gave the first rigorous proof, or something, but there is no evidence. Can people make things up out of whole cloth? Indeed they can.<br />
<br />
Witness this <a href="https://www.youtube.com/watch?v=bnROB_RCF7U">Mr. Clean Commercial</a> which says: <i>they say that after seeing a magician make his assistant disappear Mr Clean came up with a product that makes dirt disappear- the magic eraser.</i> REALLY? Who are ``they''? Is this entire story fabricated? Should we call the FCC :-) ANYWAY, yes, people can and do make up things out of whole cloth and then claim they are well known. Even historians.<br />
<br />
Commenters: I politely request that if you suggest other candidates for large gap then they be people who died before 1950 (arbitrary but firm deadline). This is not just out of politeness to the living and recently deceased, it's also because these questions need time. Kind of like people who want to rank George W Bush or Barack Obama as the worst prez of all time--- we need lots more time to evaluate these things.<br />
<br />
<br />
<br />
<br />http://blog.computationalcomplexity.org/2015/03/which-mathematician-had-biggest-gap.htmlnoreply@blogger.com (GASARCH)21tag:blogger.com,1999:blog-3722233.post-743693179363652449Thu, 19 Mar 2015 13:58:00 +00002015-03-19T08:58:51.962-05:00Feeling UnderappreciatedAs academics we live and die by our research. While our proofs are either correct or not, the import of our work has a far more subjective feel. One can see where the work is published or how many citations it gets and we often say that we care most about the true intrinsic or extrinsic value of the research. But the measure of success we truly care most about is how our research is viewed within the community. Such measures can have a real value in terms of hiring, tenure, promotion, raises and grants but it goes deeper, filling some internal need to have our research matter to our peers.<br />
<br />
So even little things can bother you. Not being cited when you think your work should be. Not being mentioned during a talk. Seeing a review that questions the relevance of your model. Nobody following up on your open questions. Difficulty in finding excitement in others about your work. We tend to keep these feelings bottled up since we feel we shouldn't be bragging about our own work.<br />
<br />
If you feel this way, there are a few things to keep in mind. It happens to all of us even though we rarely talk about it. You are not alone. Try not to obsess, it's counterproductive and just makes you feel even worse. If appropriate let the authors know that your work is relevant to theirs, the authors truly may have been unaware. Sometimes it is just best to acknowledge to yourself that while you think the work is good, you can't always convince the rest of the world and just move on.<br />
<br />
More importantly remember the golden rule, and try to cite all relevant research and show interest in other people's work as well as your own.http://blog.computationalcomplexity.org/2015/03/feeling-underappreciated.htmlnoreply@blogger.com (Lance Fortnow)5tag:blogger.com,1999:blog-3722233.post-7329933002808223925Mon, 16 Mar 2015 02:25:00 +00002015-03-15T22:54:30.452-05:00Has anything interesting ever come out of a claimed proof that P=NP or P ≠ NP?<br />
When I was young and foolish and heard that someone thought they had proven P=NP or P ≠ NP I would think <i>Wow- maybe they did!</i> Then my adviser, who was on the FOCS committee, gave me a <i>paper that claimed to resolve P vs NP!</i> to review for FOCS. It was terrible. I became more skeptical.<br />
<br />
When I was older and perhaps a byte less foolish I would think the following: <br />
<br />
For P=NP proofs: I am sure it does not prove P=NP BUT maybe there are some nice ideas here that could be used to speed up some known algorithms in practice, or give some insights, or something. Could still be solid research (A type of research that Sheldon Cooper has disdain for, but I think is fine).<br />
<br />
and<br />
<br />
For P ≠ NP proofs: I am sure it does not prove P ≠ NP BUT maybe there are some nice ideas here, perhaps an `if BLAH then P ≠ NP', perhaps an nlog^* lower bound on something in some restricted model.<br />
<br />
Since I've joined this blog I've been emailed some proofs that claim to resolve P vs NP (I also get some proofs in Ramsey Theory, which probably saves Graham/Rothschild/Spencer some time since cranks might bother me instead of them). These proofs fall into some categories:<br />
<br />
P ≠ NP because there are all those possibilities to look through (or papers that are far less coherent than that but that's what it comes down to)<br />
<br />
P=NP look at my code!<br />
<br />
P=NP here is my (incoherent) approach. For example `first look for two variables that are quasi-related'. What does `quasi-related' mean? They don't say.<br />
<br />
Papers where I can't tell what they are saying. NO they are not saying independent of ZFC, I wish they were that coherent. Some say that it's the wrong question, a point which could be argued intelligently but not by those who are writing such papers.<br />
<br />
OKAY, so is there ANY value to these papers? Sadly looking over all of the papers I've gotten on P vs NP (in my mind- I didn't save them --should I have?) the answer is an empirical NO. Why not? I'll tell you why not by way of counter-example:<br />
<br />
Very early on, before most people knew about FPT, I met Mike Fellows at a conference and he told me about the Graph Minor Theorem and Vertex Cover. It was fascinating. Did he say `I've solved P vs NP'? Of course not. <i>He knew better.</i><br />
<br />
Taking Mike Sipser's Grad Theory course back in the 1980's he presented the recent result: DTIME(n) ≠ NTIME(n). Did Mike Sipser or the authors (Paul, Pippenger, Szemeredi, Trotter) claim that they had proven P vs NP? Of course not, <i>they knew better.</i><br />
<br />
Think of the real advances made in theory. They are made by insiders, outsiders, people you've heard of, people you hadn't heard of before, but they were all made by people who... were pretty good and knew stuff. YES, some are made by people who are not tainted by conventional thinking, but such people can still differentiate an informal argument from a proof, and they know that an alleged proof that resolves P vs NP needs to be checked quite carefully before bragging about it.<br />
<br />
When it was shown that monotone circuits require exponential size for some problems, there was excitement that this might lead to a proof that P ≠ NP; however, nobody, literally nobody, claimed that these results proved P ≠ NP.<i> They knew better.</i><br />
<br />
So, roughly speaking, the people who claim they've resolved P vs NP either have a knowledge gap or can't see their own mistakes or something that makes their work unlikely to have value. One test for that is to ask if they retracted the proof once flaws have been exposed. <br />
<br />
This is not universally true- I know of two people who claimed to have solved the problem who are pretty careful normally. I won't name names since my story might not be quite right, and because they retracted IMMEDIATELY after seeing the error. (When Lance proofread this post he guessed one of them,<br />
so there just aren't that many careful people who claim to have resolved P vs NP.) And one of them got an obscure paper into an obscure journal out of their efforts.<br />
<br />
I honestly don't know how careful Deolalikar is, nor do I know if anything of interest ever came out of his work, or if he has retracted it. If someone knows, please leave a comment.<br />
<br />
I discuss Swart after the next paragraph. <br />
<br />
I WELCOME counter-examples! If you know of a claim to resolve P vs NP where the author's paper had something of value, please comment. The term <i>of value</i> means one of two things: there really was some theorem of interest OR there really were some ideas that were later turned into theorems (or in the case of P=NP turned into usable algorithms that worked well in practice).<br />
<br />
One partial counter-example- Swart's claim that P=NP inspired OTHER papers that were good: Yannakakis's proof that Swart's approach could not work and some sequels that made Lance's list of best papers of the 2000's (see <a href="http://blog.computationalcomplexity.org/2014/04/favorite-theorems-extended-formulations.html#comment-form">this post</a>). I don't quite know how to count that.<br />
<br />
<br />
<br />http://blog.computationalcomplexity.org/2015/03/has-anything-interesting-every-come-out.htmlnoreply@blogger.com (GASARCH)18tag:blogger.com,1999:blog-3722233.post-397116112733971482Thu, 12 Mar 2015 11:25:00 +00002015-03-12T06:25:13.952-05:00Quotes with which I disagreeOften we hear pithy quotes by famous people but some just don't hold water.<br />
<br />
"Computer science is no more about computers than astronomy is about telescopes."<br />
<br />
Usually <a href="http://en.wikiquote.org/wiki/Computer_science#Disputed">attributed to Edsger Dijkstra</a>, the quote tries to capture that using computers or even programming is not computer science, a point with which I agree. But computer science is most definitely about the computers, making them connected, smarter, faster, safer, reliable and easier to use. You can get a PhD in computer science with a smarter cache system; you can't get a PhD in Astronomy from developing a better telescope lens.<br />
<br />
"If your laptop cannot find it, neither can the market."<br />
<br />
This quote by Kamal Jain is used to say a market can't find equilibrium prices when the equilibrium problem is hard to compute. But to think that the market, with thousands of highly sophisticated and unknown trading algorithms combined with more than a few less than rational agents all interacting with each other, can be simulated on a sole laptop seems absurd, even in theory.<br />
<br />
"If you never miss the plane, you're spending too much time in airports."<br />
<br />
George Stigler, a 1982 Nobelist in economics, had this quote to explain individual rationality. But missing a flight is a selfish activity since you will delay seeing people at the place or conference you are heading to or family if you are heading home. I've seen people miss PhD defenses because they couldn't take an extra half hour to head to the airport earlier. If you really have no one on the other side, go ahead and miss your plane. But keep in mind usually you aren't the only one to suffer if you have to take a later flight.<br />
<br />
I take the opposite approach, heading to the airport far in advance of my flight and working at the airport free of distractions of the office. Most airports have the three ingredients I need for an effective working environment: wifi, coffee and restrooms.http://blog.computationalcomplexity.org/2015/03/quotes-with-which-i-disagree.htmlnoreply@blogger.com (Lance Fortnow)11tag:blogger.com,1999:blog-3722233.post-355866376643250287Tue, 10 Mar 2015 17:52:00 +00002015-03-15T22:17:31.021-05:00Guest Post by Thomas Zeume on Applications of Ramsey Theory to Dynamic Descriptive ComplexityGuest Post by Thomas Zeume on<br />
<br />
Lower Bounds for Dynamic Descriptive Complexity<br />
<br />
(A result that uses Ramsey Theory!)<br />
<br />
<br />
In a previous blog post Bill mentioned his hobby of collecting theorems that<br />
apply Ramsey theory. I will present one such application that arises in<br />
dynamic descriptive complexity theory. The first half of the post introduces<br />
the setting, the second part sketches a lower bound proof that uses Ramsey theory.<br />
<br />
Dynamic descriptive complexity theory studies which queries can be maintained by<br />
first-order formulas with the help of auxiliary relations, when the input structure<br />
is subject to simple modifications such as tuple insertions and tuple deletions.<br />
<br />
As an example consider a directed graph into which edges are inserted. When an edge<br />
(u, v) is inserted, then the new transitive closure T' can be defined from the old<br />
transitive closure T by a first-order formula that uses u and v as parameters:<br />
<br />
T'(x,y) = T(x,y) ∨ (T(x, u) ∧ T(v, y))<br />
<br />
Thus the reachability query can be maintained under insertions in this fashion<br />
(even though it cannot be expressed in first-order logic directly).<br />
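The update formula above is easy to test directly. Here is a small sketch (the function name and representation are mine, not from the post) that maintains the reflexive transitive closure of a directed graph as a set of pairs, applying the first-order update on each edge insertion:

```python
def insert_edge(T, nodes, u, v):
    """Apply the first-order update
    T'(x,y) = T(x,y) or (T(x,u) and T(v,y))
    to every pair (x, y) after inserting edge (u, v)."""
    return {(x, y) for x in nodes for y in nodes
            if (x, y) in T or ((x, u) in T and (v, y) in T)}

nodes = [1, 2, 3, 4]
T = {(x, x) for x in nodes}  # reflexive closure of the empty graph
T = insert_edge(T, nodes, 1, 2)
T = insert_edge(T, nodes, 2, 3)
print((1, 3) in T)           # 1 reaches 3 through 2
print((3, 1) in T)           # but there is no path back
```

Note that starting from the reflexive closure is what makes the formula insert the edge (u, v) itself, via T(u,u) and T(v,v).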
<br />
The above update formula is an example of a dynamic descriptive complexity program.<br />
In general, dynamic programs may use several auxiliary relations that are helpful<br />
to maintain the query under consideration. Then each auxiliary relation has one<br />
update formula for edge insertions and one formula for edge deletions.<br />
The example above uses a single auxiliary relation T (which is also the designated<br />
query result) and only updates T under insertions.<br />
<br />
This basic setting has been independently formalized in very similar ways by<br />
Dong, Su and Topor [1, 2] and by Patnaik and Immerman [3]. For both groups one of<br />
the main motivations was that first-order logic is the core of SQL and therefore<br />
queries maintainable in this setting can also be maintained using SQL. Furthermore<br />
the correspondence of first-order logic with built-in arithmetic to uniform<br />
AC0-circuits (constant-depth circuits of polynomial size with unbounded fan-in)<br />
yields that queries maintainable in this way can be evaluated dynamically in a<br />
highly parallel fashion.<br />
<br />
One of the main questions studied in Dynamic Complexity has been whether<br />
Reachability on directed graphs can be maintained in DynFO<br />
(under insertions and deletions of edges). Here DynFO is the class of<br />
properties that can be maintained by first-order update formulas.<br />
The conjecture by Patnaik and Immerman that this is possible has been recently<br />
confirmed by Datta, Kulkarni, Mukherjee, Schwentick and the author of this post,<br />
but has not been published yet [4].<br />
<br />
In this blog post, I would like to talk about dynamic complexity LOWER rather<br />
than upper bounds. Research on dynamic complexity lower bounds has not been<br />
very successful so far. Even though there are routine methods to prove that a<br />
property cannot be expressed in first-order logic (or, for that matter, not in AC0),<br />
the dynamic setting adds a considerable level of complication. So far, there is<br />
no lower bound showing that a particular property cannot be maintained in<br />
DynFO (besides trivial bounds for properties beyond polynomial time).<br />
<br />
For this reason, all (meaningful) lower bounds proved so far in this setting<br />
have been proved for restricted dynamic programs. One such restriction is to<br />
disallow the use of quantifiers in update formulas. The example above illustrates<br />
that useful properties can be maintained even without quantifiers<br />
(though in this example under insertions only). Therefore proving lower bounds<br />
for this small syntactic fragment can be of interest.<br />
<br />
Several lower bounds for quantifier-free dynamic programs have been proved by using<br />
basic combinatorial tools. For example, counting arguments yield a lower bound for<br />
alternating reachability and non-regular languages [5], and Ramsey-like theorems<br />
as well as Higman's lemma can be used to prove that the reachability query<br />
(under edge insertions and deletions) cannot be maintained by<br />
quantifier-free dynamic programs with binary auxiliary relations [6].<br />
<br />
Here, I will present how bounds for Ramsey numbers can be used to obtain lower bounds.<br />
Surprisingly, the proof of the lower bound in the following result relies on both<br />
upper and lower bounds for Ramsey numbers. Therefore the result might be a good candidate<br />
for Bill's collection of theorems that use Ramsey-like results.<br />
<br />
THEOREM (from [7])<br />
When only edge insertions are allowed, then (k+2)-clique can be maintained by a<br />
quantifier-free dynamic program with (k+1)-ary auxiliary relations, but it cannot be<br />
maintained by such a program with k-ary auxiliary relations.<br />
<br />
SKETCH OF PROOF<br />
<br />
I present a (very) rough proof sketch of the lower bound in the theorem.<br />
The proof sketch aims at giving a flavour of how the upper and lower bounds<br />
on the size of Ramsey numbers are used to prove the above lower bound.<br />
<br />
Instead of using bounds on Ramsey numbers, it will be more convenient to use<br />
the following equivalent bounds on the size of Ramsey cliques. For every c and large enough n:<br />
<br />
1) Every c-colored complete k-hypergraph of size n contains a large Ramsey clique.<br />
<br />
2) There is a 2-coloring of the complete (k+1)-hypergraph of size n that does <i>not</i> contain a large Ramsey clique.<br />
<br />
<br />
In the following it is not necessary to know what "large" exactly means<br />
(though it roughly means of size log^{k-1} n in both statements).<br />
Those bounds are due to Rado, Hajnal and Erdős.<br />
<br />
Towards a contradiction we assume that there is a quantifier-free program P with<br />
k-ary auxiliary relations that maintains whether a graph contains a (k+2)-clique.<br />
<br />
The first step is to construct a graph G = (V UNION W, E) such that in all large subsets<br />
C of V one can find independent sets A and B of size k+1 such that adding all edges<br />
between nodes of A yields a graph containing a (k+2)-clique while adding all edges<br />
between nodes of B yields a graph without a (k+2)-clique. Such a graph G can be constructed<br />
using (2). (Choose a large set V and let W := V^{k+1}. Color the set W according to<br />
(2) with colors red and blue. Connect all blue elements w = (v_1, ..., v_{k+1}) in W<br />
with the elements v_1, ..., v_{k+1} in V.)<br />
<br />
Now, if the program P currently stores G, then within the current auxiliary relations<br />
stored by P one can find a large subset C of V where all k-tuples are colored equally<br />
by the auxiliary relations. Such a set C can be found using (1). (More precisely:<br />
by a slight extension of (1) to structures.)<br />
<br />
By the construction of G there are subsets A and B of the set C with the property stated<br />
above. As A and B are subsets of C, they are isomorphic with respect to the auxiliary<br />
relations and the edge relation. A property of quantifier-free programs is that for such<br />
isomorphic sets, the application of corresponding modification sequences yields the same<br />
answer of the program, where "corresponding" means that they adhere to the isomorphism.<br />
<br />
Thus the dynamic program P will give the same answer when adding all edges of A, and when adding all edges of B (in an order that preserves the isomorphism). This is a contradiction<br />
as the first sequence of modifications yields a graph with a (k+2)-clique while the second<br />
yields a graph without a (k+2)-clique. Hence such a program P cannot exist. This proves<br />
the lower bound from the above theorem.<br />
<br />
I thank Thomas Schwentick and Nils Vortmeier for many helpful suggestions on how to<br />
improve a draft of this blog post.<br />
<br />
[1] Guozhu Dong and Rodney W. Topor. Incremental evaluation of datalog queries. In ICDT 1992, pages 282–296. Springer, 1992.<br />
<br />
[2] Guozhu Dong and Jianwen Su. First-order incremental evaluation of datalog queries. In Database Programming Languages, pages 295–308. Springer, 1993.<br />
<br />
[3] Sushant Patnaik and Neil Immerman. Dyn-FO: A parallel, dynamic complexity class. J. Comput. Syst. Sci., 55(2):199–209, 1997.<br />
<br />
[4] Samir Datta, Raghav Kulkarni, Anish Mukherjee, Thomas Schwentick, and Thomas Zeume. Reachability is in DynFO. ArXiv 2015.<br />
<br />
[5] Wouter Gelade, Marcel Marquardt, and Thomas Schwentick. The dynamic complexity of formal languages. ACM Trans. Comput. Log., 13(3):19, 2012.<br />
<br />
[6] Thomas Zeume and Thomas Schwentick. On the quantifier-free dynamic complexity of Reachability. Inf. Comput. 240 (2015), pp. 108–129<br />
<br />
[7] Thomas Zeume. The dynamic descriptive complexity of k-clique. In MFCS 2014, pages 547–558. Springer, 2014.<br />
<br />http://blog.computationalcomplexity.org/2015/03/guest-post-by-thomas-zeume-on-app-of.htmlnoreply@blogger.com (GASARCH)1tag:blogger.com,1999:blog-3722233.post-8824991713370923749Thu, 05 Mar 2015 15:23:00 +00002015-03-05T09:55:35.839-06:00(1/2)! = sqrt(pi) /2 and other conventions (This post is inspired by the book <a href="http://www.amazon.com/The-Cult-Pythagoras-Math-Myths/dp/0822962705">The cult of Pythagoras: Math and Myths</a> which I recently<br />
read and reviewed. See <a href="http://www.cs.umd.edu/~gasarch/BLOGPAPERS/cult.pdf">here</a> for my review.)<br />
<br />
STUDENT: The factorial function is only defined on the natural numbers. Is there some way to extend it to all the reals? For example, what is (1/2)! ?<br />
<br />
BILL: Actually (1/2)! is sqrt(π)/2<br />
<br />
STUDENT: Oh well, ask a stupid question, get a stupid answer.<br />
<br />
BILL: No, I'm serious, (1/2)! is sqrt(π)/2.<br />
<br />
STUDENT: C'mon, be serious. If you don't know or if it's not known just tell me.<br />
<br />
The Student has a point. (1/2)! = sqrt(π)/2 sounds stupid even though it's true. So I ask--- is there some other way that factorial could be expanded to all the reals that is as well motivated as the Gamma function? Since 0!=1 and 1!=1, perhaps (1/2)! should be 1.<br />
<br />
Is there a combinatorial interpretation for (1/2)!=sqrt(π) /2?<br />
<br />
If one defined n! by piecewise linear interpolation that would work, but is it useful? Interesting?<br />
<br />
For that matter is the Gamma function useful? Interesting?<br />
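For the record, the value sqrt(π)/2 comes from the Gamma function, which extends factorial via x! = Γ(x+1). A quick numerical check (my own sketch, using Python's standard library; the function name is mine):

```python
import math

# The Gamma function extends factorial: n! = gamma(n + 1) for natural n.
def extended_factorial(x):
    return math.gamma(x + 1)

print(extended_factorial(5))            # 120.0, matches 5!
print(extended_factorial(0.5))          # ~0.8862
print(math.sqrt(math.pi) / 2)           # ~0.8862, the same value
```

So Γ agrees with n! on the naturals and fills in (1/2)! = sqrt(π)/2 in between.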
<br />
ANOTHER CONVENTION: We say that 0^0 is undefined. But I think it should be 1.<br />
Here is why:<br />
<br />
d/dx x^n = nx^{n-1} is true except when n=1 and x=0, where the right side requires 0^0. Let's make it ALSO true there by saying that x^0=1 ALWAYS,<br />
and that includes at x=0.<br />
<br />
A SECOND LOOK AT A CONVENTION: (-3)(4) = -12 makes sense since if I owe my bookie<br />
3 dollars 4 times then I owe him 12 dollars. But what about (-3)(-4)=12? This makes certain<br />
other laws of arithmetic extend to the negatives, which is well and good, but we should not<br />
mistake this convention for a discovered truth. IF there were an application where defining<br />
NEG*NEG = NEG was useful then that would be a nice alternative system, much like the different geometries.<br />
<br />
I COULD TALK ABOUT a^{1/2} = sqrt(a) also being a convention to make a rule work out;<br />
however, (1) my point is made, and (2) I think I blogged about that a while back.<br />
<br />
So what is my point- we adopt certain conventions which are fine and good, but should not<br />
mistake them for eternal truths. This may also play into the question of whether math is invented or<br />
discovered. <br />
<br />
<br />http://blog.computationalcomplexity.org/2015/03/12-sqrtpi-and-other-conventions.htmlnoreply@blogger.com (GASARCH)11tag:blogger.com,1999:blog-3722233.post-1123635599972944401Mon, 02 Mar 2015 20:46:00 +00002015-03-02T14:46:29.827-06:00Leonard Nimoy (1931-2015)<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-vAZdHhfPsY4/VPTL0FZ2-MI/AAAAAAAA0UM/-I_nORUSvXU/s1600/spock.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-vAZdHhfPsY4/VPTL0FZ2-MI/AAAAAAAA0UM/-I_nORUSvXU/s1600/spock.jpg" height="116" width="400" /></a></div>
<br />
Bill and I rarely write joint blog posts but with the loss of a great cultural icon we both had to have our say.<br />
<br />
<b>Bill:</b> Leonard Nimoy (Spock) died last week at the age of 83. DeForest Kelley (McCoy) passed away in 1999. William Shatner (Kirk) is still alive, though I note that he is four days older than Nimoy.<br />
<br />
Spock tried to always be logical. I wonder if an unemotional scientist would be a better or worse scientist.<br />
Does emotion drive our desire to learn things? Our choice of problems to work on? Our creativity?<br />
<br />
Did Star Trek (or its successors) inspire many to go into science? Hard to tell but I suspect yes. Did it inspire you?<br />
<br />
Their depiction of technology ranged from predictive (communicators are cell phones!) to awful (the episode 'The Ultimate Computer' wanted to show that humans are better than computers. It instead showed that humans are better than a malfunctioning killer-computer. I think we knew that.) I think TV shows now hire science consultants to get things right (The Big Bang Theory seems to get lots of science right, though their view of academia is off.) but in those days there was less of a concern for that.<br />
<br />
<b>Lance:</b> I'm too young to remember the original Star Trek series when it first aired but I did watch the series religiously during the 70's when a local TV station aired an episode every day, seeing every episode multiple times. The original Star Trek was a product of its time, using the future to reflect the current societal issues of the 60's. Later Star Trek movies and series seemed to have lost that premise.<br />
<br />
Every nerdy teenager, myself included, could relate to Spock with his logical exterior and his half-human emotional interior that he could usually suppress. Perhaps my favorite Spock episode was the penultimate "All Our Yesterdays" where Spock having been sent back in time takes on an earlier emotional state of the old Vulcans and falls in love.<br />
<br />
I did see Leonard Nimoy in person once, during a lecture at MIT in the 80's. He clearly relished being Spock and we all relished him.<br />
<br />
Goodbye, Leonard. You have lived long and prospered and gone well beyond where any man has gone before.http://blog.computationalcomplexity.org/2015/03/leonard-nimoy-1931-2015_2.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-4151756712487990603Thu, 26 Feb 2015 22:22:00 +00002015-02-26T16:23:08.129-06:00Selecting the Correct OracleAfter my <a href="http://blog.computationalcomplexity.org/2015/02/and-winners-are.html">post last week</a> on the Complexity accepts, a friend of Shuichi Hirahara sent Shuichi an email saying that I was interested in his paper. Shuichi contacted me, sent me his paper and we had a few good emails back and forth. He posted his paper <a href="http://arxiv.org/abs/1502.07258">Identifying an Honest EXP<sup>NP</sup> Oracle Among Many</a> on the arXiv yesterday.<br />
<div>
<br /></div>
<div>
Shuichi asks the following question: Given two oracles both claiming to compute a language L, figure out which oracle is correct. For which languages does there exist such a selector?</div>
<div>
<br /></div>
<div>
For deterministic polynomial-time selectors, every such L must sit in PSPACE and all PSPACE-complete languages have selectors. The question gets much more interesting if you allow probabilistic computation.</div>
<div>
<br /></div>
<div>
Shuichi shows that every language that has a probabilistically poly-time selector sits in S<sub>2</sub><sup>EXP</sup>, the exponential analogue of <a href="http://blog.computationalcomplexity.org/2002/08/complexity-class-of-week-s2p.html">S<sub>2</sub><sup>P</sup></a>. His main theorem shows that EXP<sup>NP</sup>-complete sets have this property. His proof is quite clever, using the EXP<sup>NP</sup>-complete problem of finding the lexicographically least witness of a succinctly-described exponential-size 3-SAT question. He uses PCP techniques to have each oracle produce a witness and then he has a clever way of doing binary search to find the least bit where these witnesses differ. I haven't checked all the details carefully but the proof ideas look good.</div>
<div>
<br /></div>
<div>
This still leaves an interesting gap between EXP<sup>NP</sup> and S<sub>2</sub><sup>EXP</sup>. Is there a selector for Promise-S<sub>2</sub><sup>EXP</sup>-complete languages?</div>
http://blog.computationalcomplexity.org/2015/02/selecting-correct-oracle.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-631270962699991525Mon, 23 Feb 2015 17:11:00 +00002015-02-23T11:11:29.875-06:00Eliminate the Postal ServiceIt's gotten very difficult to mail a letter these days. There are no mailboxes along my ten mile commute. Georgia Tech has contracted with an outside company to handle outgoing mail. To send a piece of mail requires filling out a form with an account number and many other universities have similar practices. Mail into or out of the university can tack on several days. I sent a piece of mail from Georgia Tech in Atlanta to the University of Pennsylvania in Philadelphia--two weeks from sender to recipient.<br />
<br />
Why do I have to send mail in this world with email, texts and instant messages? Some places require "original receipts". Some government agencies require forms sent by mail or fax, and I've given up trying to find a reliable fax machine with someone who knows how to work it. It's still not always easy to transfer money to another person or company with a physical check. I stopped using the Netflix DVD service because it lost its value when I had to make a special trip to mail the DVD back. It's easier to find a Redbox than a mailbox.<br />
<br />
Meanwhile most of the mail I receive is junk, or magazines, which look better on the iPad, or official letters that I have to scan to keep an electronic copy since they weren't emailed to me. I do get the occasional birthday card or hand-written thank you note, a nice Southern tradition, but we can live without it. USPS also does package delivery but that is often handled better by private providers such as UPS and FedEx.<br />
<br />
So what if we just eliminated the US Postal Service, say with a three-year warning? There is nothing it does that can't be replaced by electronic means, and a planned closing would force the various government agencies and businesses to make that final push. We'll reminisce about mail like we do about the telegram. But why keep an inferior technology alive? It's time to move on.http://blog.computationalcomplexity.org/2015/02/eliminate-postal-service.htmlnoreply@blogger.com (Lance Fortnow)8tag:blogger.com,1999:blog-3722233.post-1915448827082850620Thu, 19 Feb 2015 12:06:00 +00002015-02-20T09:27:05.206-06:00And the Winners Are...[Shortly after this post went up, <a href="http://acm-stoc.org/stoc2015/">STOC</a> announced their <a href="http://acm-stoc.org/stoc2015/acceptedpapers.html">accepted papers</a> as well]<br />
<br />
I always like that the announcement of <a href="http://www.computationalcomplexity.org/Archive/2015/accept.html">accepted papers</a> for the <a href="http://www.computationalcomplexity.org/">Computational Complexity Conference</a> happens around the time of the Academy Awards. These acceptances are the Oscars for our community that shares its name with this conference and the blog.<br />
<br />
The title that interests me is <i>Identifying an honest EXP^NP oracle among many </i>by Shuichi Hirahara since it seems closely related to some of my own research. Not only can I not find the paper online, I can't even find the author's email address. Shuichi, if you are reading this, please send me a copy of your paper.<br />
<br />
Luckily not all papers are so elusive. Murray and Williams <a href="http://eccc.hpi-web.de/report/2014/164/">show</a> that proving the NP-hardness of computing the circuit complexity would require proving real complexity class separation results. Oliveira and Santhanam <a href="http://eccc.hpi-web.de/report/2014/173/">give</a> tight lower bounds on how much you can compress majority so that you can compute it with constant-depth circuits. A different Oliveira has two papers in the conference: a solely authored paper <a href="http://www.cs.princeton.edu/~rmo/papers/small-depth-factors.pdf">showing</a> that polynomials of low individual degree with small low-depth arithmetic circuits have factors that are similarly computable, and a <a href="http://arxiv.org/abs/1411.7492">paper</a> with Shpilka and Volk on hitting sets for bounded-depth multilinear formulas. A hitting set is a small, easily and deterministically computable set that contains, for every such arithmetic circuit, an input with a non-zero output.<br />
<br />
Many more <a href="http://www.computationalcomplexity.org/Archive/2015/accept.html">interesting papers</a> and you can see them all at the conference in Portland, this year part of the <a href="http://fcrc.acm.org/">Federated Computing Research Conference</a> which includes STOC, SPAA and EC, which now stands for Economics and Computation. My tip: <a href="http://fcrc.acm.org/travel.cfm">book your hotels</a> now, they fill up fast.http://blog.computationalcomplexity.org/2015/02/and-winners-are.htmlnoreply@blogger.com (Lance Fortnow)5tag:blogger.com,1999:blog-3722233.post-7341087524587328276Tue, 17 Feb 2015 15:43:00 +00002015-02-17T09:44:44.698-06:00Stephen Colbert, Jon Stewart, Andrew Sullivan, William GasarchStephen Colbert is leaving the Colbert Report<br />
<br />
Jon Stewart is leaving the Daily Show<br />
<br />
Andrew Sullivan is no longer blogging<br />
<br />
Bill Gasarch has resigned as SIGACT News Book Review Editor<br />
<br />
Where will Gasarch get his news from?<br />
<br />
Where will Colbert-Stewart-Sullivan get their reviews-of-theory-books from?<br />
<br />
<br />
Why am I stepping down? I've been SIGACT News Book Review Editor for 17 years (just as long as Jon Stewart has been doing The Daily Show). That's enough (more than enough?) time. I want to spend more time with my books and my family. I will be on sabbatical next year so I am generally cutting down on my obligations. <br />
<br />
I have enjoyed it, gotten to know some publishers, got more free books than I know what to do with. I haven't paid for a math or CS book in... probably 17 years.<br />
<br />
While writing reviews is great, figuring out who reviews other books, getting them the books, getting the review from them, and editing it all into a column four times a year can get to be routine. Though I DO like reading the reviews.<br />
<br />
Who will take over? I asked Lance who would be good and he said `someone old who still reads books', so I asked Fred Green, who agreed to take the job. I then had to get my files (of reviews, of who-owes-me-reviews, of which-books-do-I-want-reviewed, etc.) in order to email to him. The usual: I wish I had cleaned up my files years ago so I could benefit from it.<br />
<br />
The main PLUS of the job was that I got to read lots of books and learn about some fields. As someone who would rather read a good book than produce a bad paper, the job suited me.<br />
<br />
The main NEGATIVE of the job was seeing so many books that I WANTED to review leave my office for someone else to review, either because I didn't have the time (and I usually KNEW that and had someone else review it) or because I found myself unable to (gee, that book is harder than I thought!).<br />
<br />
The biggest change that Fred will encounter is e-books. Will publishers want to send out free e-books instead of hardcopy? Will reviewers want hardcopy? This is of course a very tiny part of a more profound conversation about what will happen to the book market once e-books are more common. <br />
<br />
<br />http://blog.computationalcomplexity.org/2015/02/stephan-colbert-jon-stewart-andrew.htmlnoreply@blogger.com (GASARCH)5tag:blogger.com,1999:blog-3722233.post-485122826244171100Wed, 11 Feb 2015 12:23:00 +00002015-02-11T06:26:46.264-06:00Richard Hamming Centenary<div class="separator" style="clear: both; text-align: center;">
</div>
<a href="http://www-history.mcs.st-andrews.ac.uk/Biographies/Hamming.html">Richard Hamming</a> was born a hundred years ago today in Chicago. He worked on the Manhattan Project during World War II, moved to Bell Labs after the war and started working with Claude Shannon and John Tukey. It was there that he wrote his seminal 1950 paper <a href="http://dx.doi.org/10.1002/j.1538-7305.1950.tb00463.x">Error detecting and error correcting codes</a>.<br />
<div>
<br /></div>
<div>
Suppose you send a string of bits where a bit might have been flipped during the transmission. You can add an extra parity bit at the end that can be used to detect errors. What if you wanted to correct the error? Richard Hamming developed an <a href="http://en.wikipedia.org/wiki/Hamming(7,4)">error-correcting code</a> (now called the Hamming code) that encodes 4 bits into 7-bit codewords; every two of the 16 codewords differ in at least three bits (the number of differing bits is what we now call the Hamming distance). </div>
<div>
<br /></div>
<pre>0000000 1101001 0101010 1000011
1001100 0100101 1100110 0001111
1110000 0011001 1011010 0110011
0111100 1010101 0010110 1111111
</pre>
<div>
If there is a single error then there is a unique codeword within one bit of the damaged string. By having an error-correcting code you can continue a process instead of just halting when you detect a bad code.</div>
<div>
<br /></div>
<div>
The Hamming code is a linear code, the bitwise sum mod 2 of any two codewords is another codeword. This linear idea led to many more sophisticated codes which have had many applications in computer science, practical and theoretical.</div>
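Encoding and syndrome decoding are simple enough to sketch in a few lines of Python. A caveat: bit-ordering conventions vary, so this sketch (which places parity bits at positions 1, 2 and 4, a common convention) need not reproduce the exact codeword table above, though it is the same (7,4) code up to relabeling.

```python
def hamming_encode(d):
    """Encode 4 data bits as 7 bits, parity bits at positions 1, 2, 4 (1-indexed)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-indexed error position, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = hamming_encode([1, 0, 1, 1])
word[3] ^= 1  # flip one bit in transit
print(hamming_decode(word))  # [1, 0, 1, 1]
```

The clever part of the construction shows up in the decoder: each parity bit covers exactly the positions whose index has the corresponding binary digit set, so the syndrome spells out the error position in binary.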
<div>
<br /></div>
<div>
Hamming received several awards notably the <a href="http://amturing.acm.org/award_winners/hamming_1000652.cfm">1968 ACM Turing Award</a> and the inaugural <a href="http://www.ieee.org/about/awards/medals/hamming.html">IEEE Richard W. Hamming Medal</a> in 1988 given for exceptional contributions to information sciences, systems, and technology. Hamming <a href="http://www.nytimes.com/1998/01/11/business/richard-hamming-82-dies-pioneer-in-digital-technology.html">passed away</a> in 1998. </div>
http://blog.computationalcomplexity.org/2015/02/richard-hamming-centenary.htmlnoreply@blogger.com (Lance Fortnow)2tag:blogger.com,1999:blog-3722233.post-7939964039088101106Mon, 09 Feb 2015 03:24:00 +00002015-02-08T21:24:28.426-06:00Pros and Cons of students being wikiheadsA <i>wikihead</i> is someone who learns things from the web (not necessarily Wikipedia) but either learns things that are not correct or misinterprets them. I've also heard the term <i>webhead</i> but that's ambiguous since it also refers to fans of Spiderman.<br />
<br />
I like to end the first lecture of Discrete Math by talking about SAT and asking the students if they think it can be solved <i>much faster than the obvious 2^n algorithm.</i> This semester in honors DM I got the usual heuristics (look for a contradiction!) that may well help but certainly won't get down to much better than 2^n in all cases. This leads to nice discussions of worst-case vs avg-case and formal vs what-works-in-practice.<br />
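The "obvious" algorithm is worth spelling out, since 2^n is exactly the number of assignments it tries. A minimal sketch in Python (the DIMACS-style clause encoding, with signed integers for literals, is my choice for the illustration, not anything from the lecture):

```python
from itertools import product

def brute_force_sat(num_vars, clauses):
    """Try all 2^num_vars assignments; each clause is a list of
    DIMACS-style signed integers (3 means x3, -3 means NOT x3)."""
    for bits in product([False, True], repeat=num_vars):
        # an assignment works if every clause has some true literal
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits
    return None  # unsatisfiable

# (x1 or x2) and (not x1 or x2) and (x1 or not x2)
print(brute_force_sat(2, [[1, 2], [-1, 2], [1, -2]]))  # (True, True)
```

Heuristics like "look for a contradiction" prune branches of this search, but, as noted above, they aren't known to get much below 2^n in the worst case.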
<br />
I also got the following answers:<br />
<br />
SAT cannot be done better than 2^n since P ≠ NP.<br />
<br />
and<br />
<br />
SAT can be done in O(n) time with a Quantum Computer.<br />
<br />
They both made their assertions boldly! I gently corrected them. They had both <i>read it on the web.</i><br />
<br />
I suspect that the P ≠ NP person read something that was correct (perhaps a survey that said 80% of all theorists THINK P ≠ NP) and misconstrued it, while the second person read something that was just wrong (perhaps one of those <i>by the many worlds quantum theory a quantum computer can look at all possibilities at the same time</i> people).<br />
<br />
SO- they went and looked up stuff on their own (YEAH) but didn't quite understand it (BOO)<br />
or read incorrect things (BOO). But I will correct them (YEAH). But there are other people who will never get corrected (BOO). But there are others who will get interested in these things because of the false things they read (YEAH?). The quantum person might either NOT go into quantum computing since he thinks it's all bogus now, or GO INTO it since he is now curious about what is really going on.<br />
<br />
SO the real question is: if people get excited about math or science for the wrong reasons, is that good? Bad? Do you know of examples where incorrect but exciting science writing led to someone doing real science?<br />
<br />
<br />
<br />
<br />
<br />http://blog.computationalcomplexity.org/2015/02/pros-and-cons-of-students-being.htmlnoreply@blogger.com (GASARCH)4tag:blogger.com,1999:blog-3722233.post-4399966963758822371Thu, 05 Feb 2015 12:43:00 +00002015-02-05T06:43:04.226-06:00Computers Turn Good and EvilEntertainment Weekly proclaimed 2015 the year that <a href="http://www.ew.com/article/2014/12/22/aventers-ultron-terminator-genisys-artificial-intelligence">artificial intelligence will rule the (movie) world</a> with talk of the <i>Terminator</i> reboot, the new Avengers movie <i><a href="http://www.imdb.com/title/tt2395427/">Age of Ultron</a></i>, where Ultron is an attempt at a good AI robot turned evil, and <i><a href="http://www.sonypictures.com/movies/chappie/">Chappie</a></i> who saves the world. And then there is <a href="http://www.imdb.com/title/tt0470752/">Ex Machina</a>, where Domhnall Gleeson "must conduct a Turing test, the complex analysis measuring a machine’s ability to demonstrate intelligent human behavior, while wrestling with his own conflicted romantic longing for the humanoid."<br />
<br />
Let's not forget the return of the <i>Star Wars</i> droids and the hacker movie <i>Blackhat</i> that has already come and gone. On TV we have new computer-based procedurals, one for adults (<i><a href="http://www.cbs.com/shows/csi-cyber/">CSI:Cyber</a></i>) and one for kids (<i><a href="http://www.amazon.com/Buddy-Tech-Detective-HD/dp/B00RSGIAEW">Buddy: Tech Detective</a></i>).<br />
<br />
With Wired proclaiming <a href="http://www.wired.com/2015/01/ai-arrived-really-worries-worlds-brightest-minds/">AI Has Arrived, and That Really Worries the World’s Brightest Minds</a> and Uber <a href="http://bits.blogs.nytimes.com/2015/02/02/uber-to-open-center-for-research-on-self-driving-cars/">investing in technology</a> to eliminate their drivers, what is the average person to think? Let me quote the famous philosopher Aaron Rodgers and say "Relax". We still control the technology, don't we?http://blog.computationalcomplexity.org/2015/02/computers-turn-good-and-evil.htmlnoreply@blogger.com (Lance Fortnow)0tag:blogger.com,1999:blog-3722233.post-3271600808794839193Mon, 02 Feb 2015 16:35:00 +00002015-02-02T10:35:15.493-06:00Less elegant proofs that (2n)!/n!n! is an integerA conversation between Bill (the writer of this post) and Olivia (the 14-year-old daughter of a friend). All of the math involved is <a href="http://www.cs.umd.edu/~gasarch/BLOGPAPERS/nchoosek.pdf">here</a>.<br />
<br />
Bill: Do you know what 10! is?<br />
<br />
Olivia: Yes, when I turned 10 I yelled out ``I AM 10!''<br />
<br />
Bill: I mean it in a different way. In math 10! is 10 x 9 x 8 x 7 x 6 x 5 x 4 x 3 x 2 x 1.<br />
<br />
Olivia: Okay. So what?<br />
<br />
Bill: Do you think that 10!/5!5! is an integer?<br />
<br />
Olivia: No, but I'll try it. (She does.) Well pierce my ears and call me drafty, it is!<br />
<br />
Bill: Do you think that 100!/50!50! is an integer?<br />
<br />
Olivia: Fool me once shame on you, Fool me twice... uh, uh, We don't get fooled again was a great song by the Who!<br />
<br />
Bill: Who? Never mind, I'll save a whose-on-first-type blog for another day.<br />
<br />
Olivia: What?<br />
<br />
Bill: He's on second, but never mind that. It turns out that (1) 100!/50!50! is an integer, and (2) I can prove it without actually calculating it. (Bill then goes through combinatorics and shows that n!/k!(n-k)! solves a problem in combinatorics that must have an integer solution.)<br />
<br />
Olivia: You call that a proof! That's INSANE! You can't just solve a problem that must have an integer solution and turn that into a proof that the answer is an integer. It's unnatural. It is counter to the laws of God and Man!<br />
<br />
Inspired by Olivia I came up with a LESS elegant proof that (2n)!/n!n! is always an integer. Inspired by that I also came up with a LESS elegant proof that the Catalan numbers are integers. See the link above for all proofs.<br />
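For the skeptical Olivia, a quick empirical check in Python (not a proof, of course! just exact integer arithmetic confirming that no remainder ever shows up):

```python
from math import factorial

def central_binomial(n):
    """(2n)! / (n! * n!), computed exactly with big integers."""
    q, r = divmod(factorial(2 * n), factorial(n) ** 2)
    assert r == 0  # the remainder the combinatorial proof says can never appear
    return q

def catalan(n):
    """The n-th Catalan number, (2n)! / (n! * (n+1)!)."""
    q, r = divmod(central_binomial(n), n + 1)
    assert r == 0
    return q

print(central_binomial(5))             # 252
print([catalan(n) for n in range(6)])  # [1, 1, 2, 5, 14, 42]
```

A check for finitely many n proves nothing about all n, which is precisely Olivia's complaint in reverse.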
<br />
But realize- WE think it is OBVIOUS that (2n)!/n!n! is an integer (and for that matter n!/k!(n-k)! and the Catalan numbers) and we are right about that--- but there was a time when we would have reacted like Olivia.<br />
<br />
I've seen 5th graders who were sure there must be a fraction whose square is 2 since (1) I wouldn't have asked them if there wasn't, and (2) the concept of a problem not having a solution was alien to them. In an earlier grade the concept of having a problem whose answer was negative or a fraction was alien to them.<br />
<br />
When teaching students and daughters of friends we should be aware that what we call obvious comes from years of exposure that they have not had.<br />
<br />
When Olivia turns 15, will she say `I AM 15!'? More accurate would be `I am 15!.'http://blog.computationalcomplexity.org/2015/02/less-elegant-proofs-that-2nnn-is-integer.htmlnoreply@blogger.com (GASARCH)4tag:blogger.com,1999:blog-3722233.post-563845713372077583Thu, 29 Jan 2015 12:53:00 +00002015-01-29T06:53:35.291-06:00Reusing Data from PrivacyVitaly Feldman gave a talk at Georgia Tech earlier this week on his recent paper <a href="http://arxiv.org/abs/1411.2664">Preserving Statistical Validity in Adaptive Data Analysis</a> with Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold and Aaron Roth. This work looks at the problem of reusing cross-validation data in statistical inference/machine learning using tools from differential privacy.<br />
<br />
Many machine learning algorithms have a parameter that specifies the generality of the model, for example the number of clusters in a clustering algorithm. If the model is too simple it cannot capture the full complexity of what it is learning. If the model is too general it may overfit, fitting the vagaries of this particular data too closely.<br />
<br />
One way to tune the parameters is by cross-validation, running the algorithm on fresh data to see how well it performs. However if you always cross-validate with the same data you may end up overfitting the cross-validation data.<br />
<br />
Feldman's paper shows how to reuse the cross-validation data safely. They show how to get an exponential (in the dimension of the data) number of adaptive uses of the same data without significant degradation. Unfortunately their algorithm takes exponential time but sometimes time is much cheaper than data. They also have an efficient algorithm that allows a quadratic amount of reuse.<br />
<br />
The intuition and proof ideas come from differential privacy where one wants to make it hard to infer individual information from multiple database queries. A standard approach is to add some noise in the responses and the same idea is used by the authors in this paper.<br />
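To give a flavor of that standard approach, here is a sketch of the textbook Laplace mechanism (a generic differential-privacy primitive, not the algorithm of the paper): a counting query has sensitivity 1, since one person's data changes the count by at most 1, so adding Laplace noise of scale 1/&epsilon; to the true count gives &epsilon;-differential privacy.

```python
import random

def noisy_count(values, predicate, epsilon):
    """Epsilon-DP count: the true count plus Laplace(1/epsilon) noise."""
    true_count = sum(1 for v in values if predicate(v))
    scale = 1.0 / epsilon
    # a Laplace(scale) sample is the difference of two exponentials of mean scale
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

data = [random.gauss(0, 1) for _ in range(10000)]
print(noisy_count(data, lambda v: v > 0, epsilon=0.5))  # roughly 5000
```

The adaptive-reuse result needs much more machinery than this, but the privacy-implies-generalization intuition starts from exactly this kind of noisy answer.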
<br />
All of the above is pretty simplified and you should <a href="http://arxiv.org/abs/1411.2664">read the paper</a> for details. This is one of my favorite kinds of papers, where ideas developed for one domain (differential privacy) have surprising applications in a seemingly different one (cross-validation).<br />
<br />
<br />http://blog.computationalcomplexity.org/2015/01/reusing-data-from-privacy.htmlnoreply@blogger.com (Lance Fortnow)1