tag:blogger.com,1999:blog-3722233Thu, 30 Jun 2016 12:21:26 +0000typecastfocs metacommentsComputational ComplexityComputational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarchhttp://blog.computationalcomplexity.org/noreply@blogger.com (Lance Fortnow)Blogger2394125tag:blogger.com,1999:blog-3722233.post-3491866800616455832Thu, 30 Jun 2016 12:21:00 +00002016-06-30T08:21:26.739-04:00ExcaliburGoro Hasegawa <a href="http://www.nytimes.com/2016/06/26/world/asia/goro-hasegawa-creator-of-othello-board-game-dies-at-83.html">passed away</a> last week at the age of 83. Hasegawa created and popularized the board game Othello based on an old British game Reversi.<br />
<div>
<br /></div>
<div>
Back in the 80's, a high school friend, Chris Eisnaugle, and I used to write programs together, including the Frogger-like program <a href="http://lance.fortnow.com/ribbit/">Ribbit</a> for the Apple II. We decided to try our hands at board games and aimed at Othello as it seemed simpler to manage than the more popular attempts at computer chess. Our first program played awfully, but we contacted and had some great discussions with Peter Frey, a Northwestern psychology professor who worked on computer games. Frey pointed us to some great techniques like alpha-beta pruning and table lookup for edge positions. Who knew that I would become a Northwestern professor myself later in life (2008-2012).<br />
<br /></div>
<div>
Unlike many games, the number of moves in Othello is limited near the end of the game, so even on the old IBM PC we used, we could play a perfect endgame from 14 moves out. So a simple strategy of maximizing mobility in the early game, controlling the edges and corners in the mid-game and a perfect endgame gives a pretty strong Othello program. We called our game <a href="http://lance.fortnow.com/othello/">Excalibur</a> after King Arthur's sword and the title of a <a href="http://www.imdb.com/title/tt0082348/">great movie</a> telling his tale. We traveled to Cal State Northridge for the 1986 computer Othello tournament and captured third place, not bad considering the limited hardware we used. I entered a human tournament myself and for a brief time ranked as the 35th best Othello player in the US. </div>
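<div>
Alpha-beta pruning, one of the techniques Frey pointed us to, is still the workhorse of classic game programs. A minimal sketch (over an abstract game tree with numeric leaf scores, not our actual Othello code):

```python
def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning over a game tree given as nested
    lists (internal nodes) with numeric leaves (evaluation scores).
    Returns the minimax value while skipping branches that cannot
    affect the result."""
    if not isinstance(node, list):        # leaf: static evaluation
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:             # opponent will never allow this branch
                break
        return best
    best = float("inf")
    for child in node:
        best = min(best, alphabeta(child, alpha, beta, True))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best
```

In an Excalibur-style program the leaves would be scores from an evaluation function weighing mobility, edges and corners.</div>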
<div>
<br /></div>
<div>
In 1989 we tried another computer Othello tournament, this time just calling in, and came in fifth place. One of our games was against the eventual winner, then-CMU professor <a href="https://en.wikipedia.org/wiki/Kai-Fu_Lee">Kai-Fu Lee</a>. His program beat us of course, but he was still impressed with the play of Excalibur. Kai-Fu Lee would later work for Microsoft, then leave to build up Google China, leading to one of the more memorable <a href="http://www.cnet.com/news/microsoft-settles-with-google-over-executive-hire/">lawsuits</a> over a non-compete agreement.</div>
<div>
<br /></div>
<div>
<a href="https://en.wikipedia.org/wiki/Computer_Othello">Computer Othello</a> has improved greatly since then, and in 1997 Michael Buro's program Logistello easily beat the best human players. Michael Buro worked at the NEC Research Institute and we met when I joined in 1999. We chatted about Othello, but of course Excalibur was not in the same league as Logistello. Michael Buro would later join the University of Alberta, which became the academic center for computer games. </div>
<div>
<br /></div>
<div>
Computer Othello gained popularity because no one could create a computer Go program that could beat good amateurs. That changed with the randomized search and machine learning techniques that led to <a href="http://blog.computationalcomplexity.org/2016/02/go-google-go.html">AlphaGo</a>. </div>
<div>
<br /></div>
<div>
So thank you, Goro Hasegawa, for creating this simple game that played such an interesting part in my life back in the day. </div>
http://blog.computationalcomplexity.org/2016/06/excalibur.htmlnoreply@blogger.com (Lance Fortnow)0tag:blogger.com,1999:blog-3722233.post-8037830445610090135Mon, 27 Jun 2016 03:05:00 +00002016-06-28T16:15:18.290-04:00There is now a Bounded Discrete Envy Free Cake Cutting Protocol!<i>Lance</i>: Bill, there is a new result on cake cutting that was presented at STOC! Do you want to blog about it?<br />
<br />
<i>Bill:</i> Do snakes have hips! Does a chicken have lips!<br />
<br />
<i>Lance</i>: No to the first one and I don't know to the second one.<br />
<br />
<i>Bill:</i> Yes, I'll blog about it! What's the paper?<br />
<br />
<i>Lance</i>: It's <a href="https://arxiv.org/pdf/1604.03655v5.pdf">this</a> paper by Aziz and Mackenzie.<br />
<br />
<i>Bill</i>: Oh. That's not new. Five people emailed me about it a while back. But yes I will blog about it.<br />
<br />
<i>Cake Cutting</i>: There are n people with different tastes in cake (some like chocolate and some... OH, who doesn't like chocolate? Okay, someone prefers kale which is on the cake.) They want a protocol that divides the cake in a way that is fair. What is fair? There are many definitions but I'll talk about two of them.<br />
<br />
<i>Proportional</i>: Everyone gets 1/n of the cake (in their own opinion- I will omit saying this from now on).<br />
<br />
Proportional sounds fair, but consider the following scenario: Alice thinks she got 1/3 but she thinks Bob got 1/2 and Eve got 1/6. Alice will envy Bob.<br />
<br />
<i>Envy Free</i>: Everyone thinks they have the largest piece (or are tied for it).<br />
<br />
What is a protocol? It is a set of instructions and advice so that (1) if the players all follow the advice then the end result is fair, and (2) if a player does not follow the advice then that player might get less than his fair share. Hence all players are motivated to follow the advice. We assume that everyone acts in their own self interest and that they are at a diamond cutters convention (perhaps co-located with STOC) so they really can cut cake very finely.<br />
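To make the idea of a protocol concrete, here is the classic two-person cut-and-choose protocol in code (a sketch of my own; the piecewise-constant valuations are a simplifying assumption, not part of any of the papers below):

```python
def value(density, lo, hi):
    """Value of the interval [lo, hi] under a piecewise-constant density
    on len(density) equal segments of the cake [0, 1]; densities are
    chosen so the whole cake is worth 1."""
    k = len(density)
    total = 0.0
    for i, d in enumerate(density):
        seg_lo, seg_hi = i / k, (i + 1) / k
        overlap = max(0.0, min(hi, seg_hi) - max(lo, seg_lo))
        total += d * overlap
    return total

def cut_and_choose(alice, bob):
    """Alice cuts the cake into two pieces she values equally (found by
    bisection); Bob takes the piece he prefers; Alice gets the other.
    Returns (Alice's piece, Bob's piece) as (lo, hi) intervals."""
    lo, hi = 0.0, 1.0
    for _ in range(60):                   # bisect for Alice's halfway point
        mid = (lo + hi) / 2
        if value(alice, 0.0, mid) < 0.5:
            lo = mid
        else:
            hi = mid
    x = (lo + hi) / 2
    left, right = (0.0, x), (x, 1.0)
    if value(bob, *left) >= value(bob, *right):
        return right, left                # Bob chose the left piece
    return left, right
```

Alice ends up with exactly 1/2 in her own measure and Bob with at least 1/2 in his, so for n = 2 the outcome is both proportional and envy free.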
<br />
We will only consider discrete protocols. We won't define this formally.<br />
<br />
<i>Prior Results</i>:<br />
1) There is a protocol for Prop fairness for n people that uses O(n log n) cuts. See <a href="http://www.cs.umd.edu/~gasarch/BLOGPAPERS/prop.pdf">my notes</a><br />
<br />
2) Jeff Edmonds and Kirk Pruhs showed a lower bound of Omega(n log n). See <a href="http://www.eecs.yorku.ca/~jeff/research/kirk/cake.lower/talg.pdf">their paper.</a><br />
<br />
3) There is a protocol for Envy Free fairness for 3 people due to Conway and Selfridge. This was in 1960. This protocol takes 5 cuts. (It is in the doc I point to in the next item.)<br />
<br />
4) In 1995 Brams and Taylor obtained a protocol for envy free fairness for n people. But there is a catch: there is no bound on the number of cuts. For all N there is a way to set four people's tastes so that the protocol takes more than N cuts. See <a href="http://www.cs.umd.edu/~gasarch/BLOGPAPERS/envyfree.pdf">my notes.</a><br />
<br />
All items to follow are for an envy free protocol for n people.<br />
<br />
5) It was an open question to determine if there is a bounded protocol. Stromquist proved that there can be no bounded protocol if every player must receive a contiguous piece, though this was not required in the Brams-Taylor protocol. See <a href="https://www.cs.umd.edu/users/gasarch/TOPICS/cake/lbenvyfree.pdf">his paper</a>.<br />
<br />
At the time I thought there would be no bounded protocol. I found a way to measure unbounded protocols using ordinals and wrote a paper on it: See <a href="https://arxiv.org/abs/1507.08497">my paper.</a><br />
<br />
6) Aziz and MacKenzie showed there was a bounded protocol for 4 people. See <a href="http://arxiv.org/abs/1508.05143">their paper.</a><br />
<br />
7) Aziz and MacKenzie, STOC 2016, showed there is a protocol that takes at most n<sup>O(n)</sup> cuts. Hence a bounded protocol! See <a href="https://arxiv.org/abs/1604.03655">their paper.</a><br />
<br />
What's next? Either improve the number of cuts or show it can't be done!<br />
<br />
<br />http://blog.computationalcomplexity.org/2016/06/there-is-now-bounded-discrete-envy-free_0.htmlnoreply@blogger.com (GASARCH)1tag:blogger.com,1999:blog-3722233.post-8751777274825666398Thu, 23 Jun 2016 12:44:00 +00002016-06-23T08:44:16.603-04:00STOC 2016<div>
I attended the <a href="http://acm-stoc.org/stoc2016/">2016 STOC Conference</a> earlier this week in Cambridge, Massachusetts. No shortage of awesome results, highlighted by László Babai's new quasipolynomial-time graph isomorphism algorithm. Babai didn't have a chance to give more than a glimpse into his algorithm in the 20-minute talk, but you could see his joy in discovering the concept of "affected" last September, the key to making the whole algorithm work.<br />
<br />
Babai also received the ACM SIGACT Distinguished Service prize for his work for the community including his open-access journal <a href="http://theoryofcomputing.org/">Theory of Computing</a>.<br />
<br />
You can see and access all the papers from the <a href="http://acm-stoc.org/stoc2016/program.html">program page</a>. Links on that page will give you free access to the papers forever. I'll mention some papers here but you'll have to click over on the program page to access them.<br />
<br />
Troy Lee gave my favorite talk on his paper <i>Separations in Query Complexity Based on Pointer Functions</i> with Andris Ambainis, Kaspars Balodis, Aleksandrs Belovs, Miklos Santha and Juris Smotrovs. He gave a wonderful description of a (contrived) function that gives strong separations between deterministic, randomized and quantum decision tree complexity. Scott Aaronson, Shalev Ben-David and Robin Kothari followed up on that work in <i>Separations in query complexity using cheat sheets. </i><br />
<br />
Let me also mention the best paper winners<br />
<ul>
<li><i>Reed-Muller Codes Achieve Capacity on Erasure Channels </i>by Shrinivas Kudekar, Santhosh Kumar, Marco Mondelli, Henry D. Pfister, Eren Sasoglu and Rudiger Urbanke</li>
<li><i>Explicit Two-Source Extractors and Resilient Functions</i> by Eshan Chattopadhyay and David Zuckerman</li>
<li><i>Graph Isomorphism in Quasipolynomial Time </i>by Laszlo Babai</li>
</ul>
<div>
and the Danny Lewin best student paper awardees</div>
<div>
<ul>
<li><i>A Tight Space Bound for Consensus </i>by Leqi Zhu</li>
<li><i>The 4/3 Additive Spanner Exponent is Tight </i>by Amir Abboud and Greg Bodwin</li>
</ul>
</div>
The conference had 327 registrants, 44% students and 74% from the USA. <a href="http://www.computational-geometry.org/">The Symposium on Computational Geometry</a> was held at Tufts before STOC and they held a <a href="http://acm-stoc.org/stoc2016/workshop.html">joint workshop day</a> in between. There were about 20 registered for both conferences.<br />
<br />
STOC had 92 accepted papers out of 370 submissions. None of the three papers claiming to settle P v NP were accepted.<br />
<br />
STOC 2017 will be held in Montreal June 19-23 as the first of a series of theory festivals. There will be multi-day poster sessions, with a poster for every paper. Invited talks will include best papers from other theory conferences. There will be a mild increase in the number of accepted papers. The hope is to draw a broader and larger group of theorists for this festival.<br />
<br />
The first STOC conference was held in Marina del Rey in 1969. That makes the 2018 conference STOC's 50th, which will return to Southern California. The Conference on Computational Complexity will co-locate.<br />
<br />
<a href="http://www.wisdom.weizmann.ac.il/~dinuri/focs16/CFP.html">FOCS 2016</a> will be held October 9-11 in New Brunswick, New Jersey preceded by a <a href="https://www.math.ias.edu/avi60">celebration</a> for the 60th birthday for Avi Wigderson.<br />
<br />
<a href="http://siam.org/meetings/da17/">SODA 2017</a> will be held January 16-19 in Barcelona. Paper registration deadline is July 6.<br />
<br /></div>
<div>
If you weren't at the business meeting, it's worth looking at the slides for the <a href="https://thmatters.files.wordpress.com/2016/06/nsf-cise-stoc16-bus-mtg-1.pptx">NSF</a> and the <a href="https://thmatters.files.wordpress.com/2016/06/catcs-report-stoc-2016.pptx">Committee for the Advancement for Theoretical Computer Science</a>. Of particular note, the CATCS site <a href="https://thmatters.wordpress.com/">Theory Matters</a> has <a href="https://cstheory-jobs.org/">job announcements</a> and <a href="https://thmatters.wordpress.com/funding-opportunities-and-tips/career-examples-proposalscomments/">sample CAREER proposals</a>.</div>
http://blog.computationalcomplexity.org/2016/06/stoc-2016.htmlnoreply@blogger.com (Lance Fortnow)3tag:blogger.com,1999:blog-3722233.post-4792566710287614960Tue, 21 Jun 2016 18:54:00 +00002016-06-25T09:21:32.853-04:00When does n divide a_n? The answer, though you already know it. The Point, though its not as strong as I wanted. Oh well.In my last post <a href="http://blog.computationalcomplexity.org/2016/06/when-does-n-divide-in-this-sequence.html">When does n divide a_n?</a> I gave a sequence:<br />
<br />
a(1)=0<br />
<br />
a(2)=2<br />
<br />
a(3)=3<br />
<br />
for all n ≥ 4 a(n) = a(n-2) + a(n-3)<br />
<br />
and I noted that for 2 ≤ n ≤ 23 it looked like<br />
<br />
n divides a(n) iff n is prime<br />
<br />
and challenged my readers to prove it or disprove it.<br />
<br />
I thought it would be hard to find the first counterexample and hence I would make the point:<br />
<br />
<i>Just because a pattern holds for the first 271440 numbers does not mean that it always holds!</i><br />
<i><br /></i>
However, several readers found that 521<sup>2</sup>=271441 is a counterexample. In fact they found it rather quickly. I thought it would take longer since <a href="http://www.solipsys.co.uk/new/FastPerrinTest.html?ColinsBlog">this blog post (also my inspiration)</a> by Colin Wright seemed to indicate that fancy data structures and algorithmic tricks are needed. So I emailed Colin about this and it turns out he originally wrote the program many years ago. I quote his email:<br />
<br />
---------------------------<br />
> I posted on Perrin Primes (without calling them that) and people seemed to<br />
&gt; find the counterexample, 521^2, easily.<br />
<br />
Yes. These days the machines are faster, and languages with arbitrary<br />
precision arithmetic are common-place. When I first came across this<br />
problem, if you wanted arithmetic on more than 32 bits you had to write<br />
the arbitrary precision package yourself. This was pre-WWW,<br />
although not pre-internet. Quite.<br />
<br />
> So I wondered why your code needed those optimizations.<br />
<br />
Even now, it's easy to find the first few counter-examples, but it rapidly<br />
gets out-of-hand. The sequence grows exponentially, and very soon you are<br />
dealing with numbers that have thousands of digits. Again, not that bad now,<br />
but impossible when I first did it, and even now, to find anything beyond the<br />
first 20 or 30 counter-examples takes a very long time.<br />
<br />
So ask people how to find the first 1000 counter-examples,<br />
and what they notice about them all.<br />
<div>
-----------------------------------------</div>
<div>
<br /></div>
<div>
back to my post:</div>
<br />
It is known that if n is prime then n divides a(n). (ADDED LATER: see <a href="http://math.stackexchange.com/questions/1280125/a-question-about-the-proof-that-for-prime-p-p-divides-kp-where-k-is-the-pe">here</a> for a proof) The converse is false. Composites n such that n divides a(n) are called <a href="https://en.wikipedia.org/wiki/Perrin_number">Perrin Pseudoprimes</a>. The following questions seem open, interesting, and rather difficult:<br />
<br />
1) Why is the first Perrin Pseudoprime so large?<br />
<br />
2) Are there other simple recurrences b(n) for which "n divides b(n) iff n is prime" looks true for a while?<br />
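If you want to verify all this yourself, computing a(n) mod n directly avoids the thousands-of-digits problem Colin mentions (a sketch):

```python
def perrin_mod(n):
    """Return a(n) mod n, where a(1)=0, a(2)=2, a(3)=3 and
    a(k) = a(k-2) + a(k-3).  Reducing mod n at every step keeps
    the numbers small even for large n."""
    base = {1: 0, 2: 2, 3: 3}
    if n in base:
        return base[n] % n
    x, y, z = 0, 2 % n, 3 % n       # a(k-2), a(k-1), a(k), starting at k=3
    for _ in range(n - 3):
        x, y, z = y, z, (x + y) % n
    return z
```

Each test of a single n is still O(n) arithmetic operations, so checking every n out to the first pseudoprime 271441 and beyond gets expensive, as Colin says.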
<br />http://blog.computationalcomplexity.org/2016/06/when-does-n-divide-the-answer-though.htmlnoreply@blogger.com (GASARCH)6tag:blogger.com,1999:blog-3722233.post-4804501150043320086Fri, 17 Jun 2016 15:35:00 +00002016-06-17T11:35:27.469-04:00The Relevance of TCS<i>Avi Wigderson responds to yesterday's <a href="http://blog.computationalcomplexity.org/2016/06/karp-v-wigderson-20-years-later.html">post</a>.</i><br />
<br />
20 years is a long time, and TCS is in a completely different place today than it was then.<br />
I am happy to say that internally its members are far more confident of its importance and independence as a scientific discipline, and externally the recognition of that importance by all sciences (including computer science) has grown tremendously. I have no doubt that both will continue to grow, as will its impact on science and technology.<br />
<br />
Let me address a few aspects of the original post (one can elaborate much more than I do here).<br />
<br />
Young talent: The way we continuously draw exceptional young talent to our core questions is not just a fact to state - it carries important meaning, namely a key sign of our health and prosperity. After all, these exceptionally talented young people could have done well in any field in science and technology, and they freely chose us (and indeed have been responsible for the many great results of the field in the past 20 years)!<br />
<br />
Funding levels: In contrast, funding levels are always controlled by few and are subject to political pressures. So here our field was wise to grow up politically and realize the importance of advocacy and PR besides just doing great research. This has definitely helped, as did other factors.<br />
<br />
Growth of theory in academia: I have no idea of the exact statistics (or even how to measure it exactly) but I should note as an anecdote that as soon as Harvard got 12 new positions in CS it made three senior offers to theorists: Cynthia, Madhu and Boaz! I see it as a critically important development to have TCS well represented not only in the top 20 universities but in the top 100. Our educational mission is too important to be reserved only to the elite schools. (Needless to say, our science and way of thinking should be integrated into the K-12 educational system as well. While this is budding, significant meaningful presence will probably take decades.)<br />
<br />
Scientific relevance: While it may be too early to evaluate the true impact of our (many) specific incursions into and collaborations with Biology, Economics, Physics, Mathematics, Social Science etc., I believe the following general statement. *The emerging centrality of the notion of algorithm, and the limits of its efficient utilization of relevant resources, is nothing short of a scientific revolution in the making.* We are playing a major role in that revolution. Some of the modeling and analysis techniques we have developed and continue to develop, and even more, the language we have created over the past half century to discuss, invent and understand processes, is the fuel and catalyst of this revolution. Eventually all scientists will speak this language, and the algorithmic way of thought will be an essential part of their upbringing and research.<br />
<br />
Technological relevance: Even without going to great past achievements which are taken for granted and dominate technological products, and considering only current TCS work, the evidence is staggering. Sublinear algorithms, Linear solvers, Crypto (NC0-crypto, Homomorphic encryption,...), Privacy, Delegation, Distributed protocols, Network design, Verification tools, Quantum algorithms and error correction, and yes, machine learning and optimization as well, are constantly feeding technological progress. How much of it? Beyond counting patents, or direct implementations of conference papers, one should look at the integration of modeling and analysis techniques, ways of thinking, and the sheer impact of "proofs of concept" that may lead to drastically different implementations of that concept. Our part in the education we provide to future developers was, is and should be of central influence on technology.<br />
<br />
In a tiny field like ours, having the impact we do on so many scientific and technological fields that are factors 10-100 larger than us may seem miraculous. Of course we know the main reason since Turing - we have universality on our side - algorithms are everywhere. What is perhaps more miraculous is the talent and willingness of pioneers in our field over decades to search, interact, learn and uncover the numerous forms of this universal need in diverse scientific and technological fields, and then suggest and study models using our language and tools. This has greatly enriched not only our connections and impact on other disciplines, but also had the same effect on our intrinsic challenges and mysteries, many of which remain widely open. I am happy to say that at least part of that remarkable young talent constantly drawn into our field keeps focus on these intrinsic challenges and the natural, purely intellectual pursuits they lead to. Our history proves that there are direct connections between the study and progress on core questions and our interactions with the outside world. Our current culture luckily embraces both!<br />
<br />
All the above does not mean that we can't improve on various aspects, and constant questioning and discussion are welcome and fruitful. But I believe that the firm foundation of these deliberations should be the intrinsic scientific importance of our mission, to understand the power, limits and properties of algorithms of all incarnations, shapes and forms, and the study of natural processes and intellectual concepts from this viewpoint. This importance does not depend on utility to other disciplines (it rather explains it), and should not seek justification from them. The correct analogy in my view is expecting theoretical physicists to seek similar confirmation in their quest to uncover the secrets of the universe.http://blog.computationalcomplexity.org/2016/06/the-relevance-of-tcs.htmlnoreply@blogger.com (Lance Fortnow)12tag:blogger.com,1999:blog-3722233.post-1733567208294013592Thu, 16 Jun 2016 12:19:00 +00002016-06-16T08:19:13.887-04:00Karp v Wigderson 20 Years LaterThe <a href="http://acm-stoc.org/stoc2016/">48th ACM Symposium on the Theory of Computing</a> starts this weekend in Boston. Let's go back twenty years to the 28th STOC, part of the second Federated Computing Research Conference in Philadelphia. A year earlier in June of 1995, the NSF sponsored a workshop with the purpose of assessing the current goals and directions of the theory community. Based on that workshop a committee, chaired by Richard Karp, presented their report <a href="https://www.dropbox.com/s/4ipc142gsl5571b/karp-report.pdf?dl=0">Theory of Computing: Goals and Directions</a> at the 1996 STOC conference. While the report emphasized the importance of core research, the central thesis stated<br />
<blockquote class="tr_bq">
In order for TOC to prosper in the coming years, it is essential to strengthen our communication with the rest of computer science and with other disciplines, and to increase our impact on key application areas.</blockquote>
Oded Goldreich and Avi Wigderson put together a competing report, <a href="https://www.dropbox.com/s/aox8ga8uzmbjesx/toc-sp1.pdf?dl=0">Theory of Computing: A Scientific Perspective</a> that focuses on theory as a scientific discipline.<br />
<blockquote class="tr_bq">
In order for TOC to prosper in the coming years, it is essential that Theoretical Computer Scientists concentrate their research efforts in Theory of Computing and that they enjoy the freedom to do so. </blockquote>
There was a lively discussion at the business meeting, with Karp and Christos Papadimitriou on one side and Goldreich and Wigderson on the other. I remember one exchange where one side said that the people who implement an algorithm should get as much credit as those who developed it. Avi, I believe, said he'd like to see the implementer go first.<br />
<br />
So what has transpired in the last two decades? The theory community has not withered and died; the field continues to produce great results and attract many a strong student. On the other hand, the theory community has not had the growth we've seen in other CS areas, particularly in the recent university expansion. Industrial research in core theory, which had its highs in the 90's, has dwindled to a small number of researchers in a few companies. Foundation support has helped some: IAS now has a faculty position in theoretical CS, the Simons Foundation funds some faculty and recently started an institute in Berkeley, and the Clay Mathematics Institute has given the field a considerable boost by naming the P v NP problem as one of their Millennium Prize Problems.<br />
<br />
The main core theory conferences, STOC, FOCS, SODA, Complexity and others, have continued to focus on theorems and proofs. Rarely does the research in these papers affect real-world computing. The theory community has not played a major role in the growth of machine learning and has left real-world optimization to the operations research community.<br />
<br />
We have seen some other developments making some progress in connecting theory to applications.<br />
<ul>
<li>1996 saw the first <a href="https://en.wikipedia.org/wiki/Paris_Kanellakis_Award">Kanellakis Prize</a> to honor "specific theoretical accomplishments that have had a significant and demonstrable effect on the practice of computing"</li>
<li>Some companies, most notably Akamai, have come out of the theory community and helped shape real-world computing.</li>
<li>We have seen new research communities in EC and Quantum Computing that connect with economists and physicists. </li>
<li>The NSF now has a program <a href="http://www.nsf.gov/pubs/2016/nsf16515/nsf16515.htm">Algorithms in the Field</a> that connects theorists with applied computer scientists.</li>
<li>Some theory topics like differential privacy have entered the mainstream discussion.</li>
</ul>
<div>
We live in a golden age of computer science and computing research is transforming society as we know it. Do we view ourselves as a scientific discipline divorced from these changes, or should theory play a major role? This is a discussion and debate the theory community should continue to have. </div>
http://blog.computationalcomplexity.org/2016/06/karp-v-wigderson-20-years-later.htmlnoreply@blogger.com (Lance Fortnow)15tag:blogger.com,1999:blog-3722233.post-6704204702231101478Mon, 13 Jun 2016 02:56:00 +00002016-06-12T22:56:51.208-04:00When does n divide a_n in this sequence?Consider the following sequence:<br />
<br />
a(1)=0<br />
<br />
a(2)=2<br />
<br />
a(3)=3<br />
<br />
for all n ≥ 4 a(n) = a(n-2)+a(n-3)<br />
<br />
Here is a table of a(n) for 2 ≤ n ≤ 23<br />
<br />
n 2 3 4 5 6 7 8 9 10 11 12<br />
a(n) 2 3 2 5 5 7 10 12 17 22 29<br />
<br />
n 13 14 15 16 17 18 19 20 21 22 23<br />
a(n) 39 51 68 90 119 158 209 277 367 486 644<br />
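(The table is easy to reproduce; here is a short script, in case you want to check more values:)

```python
def a(n):
    """a(1)=0, a(2)=2, a(3)=3, and a(n) = a(n-2) + a(n-3) for n >= 4."""
    vals = [None, 0, 2, 3]          # vals[k] = a(k), 1-indexed
    for k in range(4, n + 1):
        vals.append(vals[k - 2] + vals[k - 3])
    return vals[n]
```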
<br />
For 2≤ n ≤ 23 the n such that n divides a(n) are<br />
<br />
n = 2,3,5,7,11,13,17,19,23.<br />
<br />
Notice a pattern? Of course you do!<br />
<br />
I propose the following question which I will answer in my next post (in a week or so)<br />
<br />
PROVE or DISPROVE the following:<br />
<br />
for all n ≥ 2, n divides a(n) iff n is prime.<br />
<br />
<br />
<div>
<br /></div>
http://blog.computationalcomplexity.org/2016/06/when-does-n-divide-in-this-sequence.htmlnoreply@blogger.com (GASARCH)14tag:blogger.com,1999:blog-3722233.post-8565965427617437841Thu, 09 Jun 2016 12:51:00 +00002016-06-09T08:51:59.000-04:00Math MoviesIn 1997 <a href="http://www.imdb.com/title/tt0119217/">Good Will Hunting</a>, a fictional movie about the hidden mathematical talents of an MIT janitor, grossed $225 million and won a best screenplay Oscar for Matt Damon and Ben Affleck. At the time the chair of the Chicago math department told me how much he disliked the movie given the way mathematics and mathematicians were portrayed. I told him the movie made math seem exciting and brought public awareness of the Fields medal, mentioned several times in the movie. You can't buy that kind of publicity for an academic field.<br />
<br />
In 2001 <a href="http://www.imdb.com/title/tt0268978/">A Beautiful Mind</a>, on the life of John Nash, grossed $313 million at the box office and won the best picture Oscar. In 2014 we saw the critically acclaimed movies <a href="http://www.imdb.com/title/tt2084970">The Imitation Game</a> (8 Oscar nominations with a win for adapted screenplay and a $233 million gross) on Alan Turing and <a href="http://www.imdb.com/title/tt2980516/">The Theory of Everything</a> (5 nominations with a win for best actor, $123 million) on Stephen Hawking. These movies focused more on the struggles of the lead character than the science itself. Though these movies had their flaws, they did show a popular audience that the goals of math and science are worth an incredible struggle.<br />
<br />
And complain all you want about the 2005 TV series <a href="https://en.wikipedia.org/wiki/Numbers_(TV_series)">Numbers</a>, but get your head around the fact that a show about a crime-solving mathematician lasted six seasons. <a href="https://en.wikipedia.org/wiki/The_Big_Bang_Theory">The Big Bang Theory</a> remains the top US television comedy heading into its tenth season this fall.<br />
<br />
Which takes us to the recent movie <a href="http://www.imdb.com/title/tt0787524/">The Man Who Knew Infinity</a> about the life of Ramanujan, a movie that has gotten <a href="http://www.math.columbia.edu/~woit/wordpress/?p=8427">wide</a> <a href="http://www.scottaaronson.com/blog/?p=2707">excitement</a> <a href="http://blogs.ams.org/mathgradblog/2016/05/02/man-knew-infinity-mathematical-movie">from</a> <a href="http://blogs.ams.org/blogonmathblogs/2016/05/27/the-ramanujan-movie">mathematicians</a> for the portrayal of the math itself, with credit given to consulting mathematician Ken Ono. I haven't seen the movie as it has barely played in Atlanta. It got mixed critical reviews, grossed only $3.4 million and will probably be forgotten come award season. The Ramanujan story is just not that dramatically interesting.<br />
<br />
What's more important: Getting the math right, or taking some liberties, telling a good story and drawing a large audience. Can you actually do both? Because you can't inspire people with a movie they don't see.http://blog.computationalcomplexity.org/2016/06/math-movies.htmlnoreply@blogger.com (Lance Fortnow)5tag:blogger.com,1999:blog-3722233.post-3319929913349181061Mon, 06 Jun 2016 02:33:00 +00002016-06-06T17:45:59.226-04:00What do Evolution, Same Sex Marriage, and Abstract Set Theory have in Common?(This post is based on articles from 2012 so it may no longer be true. Also- to be fair- I tried finding stuff on the web BY the people who object to our children being exposed to abstract set theory but could not, so this article is based on hearsay.)<br />
<div>
<br /></div>
<div>
Louisiana has a voucher system for poor kids to go to other schools, including religious ones. I am not here to debate the merits of that or the state of US education. However, from <a href="http://m.motherjones.com/blue-marble/2012/07/photos-evangelical-curricula-louisiana-tax-dollars">this article</a> it seems that they are learning some odd things.</div>
<div>
<br /></div>
<div>
As you would expect, some Christian schools teach that Evolution did not occur, same sex marriage is wrong (actually their opinion of gay people is far more negative than just being against same sex marriage), and that Abstract Set Theory is evil. (Note: Catholic schools have no problem with Evolution; in fact, the Catholic Church has never had a problem with it.)<br />
<br />
Come again? Yes, we would expect these opinions on evolution and same sex marriage, but Abstract Set Theory? Why? It's explained in <a href="http://boingboing.net/2012/08/07/what-do-christian-fundamentali.html">this article</a>, but I'll say briefly that they don't like the post-modern view of mathematics where anything goes. The coherent version of their point of view is that they are Platonists. A less charitable view is that they find Abstract Set Theory too dang hard. I've also seen somewhere that they object to Cantor's theory since there is only one infinity and it is God.<br />
<br />
The book <a href="http://www.amazon.com/Infinitesimal-Dangerous-Mathematical-Theory-Shaped/dp/0374534993">Infinitesimals: How a dangerous mathematical theory shaped the modern world</a> is about an earlier time, around 1600, when the Catholic church thought that using infinitesimals was... bad? sinful? contrary to the laws of God and Man? (I reviewed it <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/inf.pdf">here</a>.) I thought we were no longer living in a time where religion had an influence on Mathematics. And, to be fair, we ARE past that time. But this voucher program worries me. And I haven't even gotten to what they do to American History.<br />
<br />
<br />
<br />
<br /></div>
http://blog.computationalcomplexity.org/2016/06/what-do-evolution-same-sex-marriage-and.htmlnoreply@blogger.com (GASARCH)16tag:blogger.com,1999:blog-3722233.post-5799462906974107688Thu, 02 Jun 2016 14:07:00 +00002016-06-02T10:07:50.264-04:00CCC 2016Earlier this week I attended the <a href="http://www.al.ics.saitama-u.ac.jp/elc/ccc/">31st Computational Complexity Conference</a> in Tokyo. I've been to thirty of these meetings, missing only the 2012 conference in Porto. Eric Allender has attended them all.<br />
<br />
The conference had 130 participants, with fewer women than you can count on one hand; 26 made it from the States. There were 34 accepted papers out of 91 submitted.<br />
<br />
The <a href="http://drops.dagstuhl.de/portals/extern/index.php?semnr=16003">proceedings</a> are fully open access through the Dagstuhl LIPIcs series, including the <a href="http://dx.doi.org/10.4230/LIPIcs.CCC.2016.19">paper</a> by Rahul Santhanam and myself that I presented at the meeting. The <a href="http://dx.doi.org/10.4230/LIPIcs.CCC.2016.10">best paper</a> by Marco Carmosino, Russell Impagliazzo, Valentine Kabanets and Antonina Kolokolova drew a surprisingly strong connection between natural proofs and learning theory. In one of my favorite other talks, John Kim and Swastik Kopparty show <a href="http://dx.doi.org/10.4230/LIPIcs.CCC.2016.11">how to decode Reed-Muller codes</a> over an arbitrary product set instead of a structured field.<br />
<br />
The German government will in the future no longer support LIPIcs, due to EU rules to prevent unfair competition with commercial publishers. (Don't shoot the messenger.) LIPIcs will continue, but the conferences will have to spend a little more to use it.<br />
<br />
Next year's conference will be in Riga, Latvia, July 6-9, right before ICALP in Warsaw. The 2018 meeting is likely to take place near STOC in Southern California.<br />
<br />
Osamu Watanabe put together this <a href="https://youtu.be/3SzEHkk0Fmk">slide show</a> for the conference reception featuring pictures of attendees of the Complexity Conference through the ages, including the authors of this blog.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen="" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/s_vi/3SzEHkk0Fmk/default.jpg?sqp=CLTbuLoF&rs=AOn4CLAPnQXCqKrbA8dvOxz-pdlwP0mjxQ" frameborder="0" height="266" src="https://www.youtube.com/embed/3SzEHkk0Fmk?feature=player_embedded" width="320"></iframe></div>
<br />
Peter van Emde Boas forwarded the <a href="https://www.dropbox.com/s/rij1f2odazjisuz/callstruct1_1985_838.pdf?dl=0">call for papers</a> and <a href="https://www.dropbox.com/s/b0dhvwgxtwvxvmb/orgstruct1_1985_839.pdf?dl=0">initial letters</a> for the very first conference, originally called Structure in Complexity Theory.http://blog.computationalcomplexity.org/2016/06/ccc-2016.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-7077840493326820837Mon, 30 May 2016 04:20:00 +00002016-06-05T02:01:04.169-04:00New Ramsey Result that will be hard to verify but Ronald Graham thinks it's right which is good enough for me.<br />
If you finitely color the natural numbers there will be a monochromatic solution to<br />
<br />
x+2y+3z - 5w = 0<br />
<br />
There is a finite coloring of the natural numbers such that there is NO monochromatic solution to<br />
<br />
x+2y+3z - 7w = 0<br />
<br />
More generally:<br />
<br />
An equation is REGULAR if every finite coloring of the naturals yields a mono solution.<br />
<br />
RADO's THEOREM: A linear equation a<sub>1</sub>x<sub>1</sub> + ... + a<sub>n</sub>x<sub>n</sub> = 0 (where the a<sub>i</sub>'s are all integers) is regular IFF some nonempty subset of the a<sub>i</sub>'s sums to 0.<br />
<br />
Rado's theorem is well known.<br />
<br />
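Rado's condition is mechanical to check. Here is a little Python sketch (my own illustration; pass the coefficients as a list) confirming the two equations above:

```python
from itertools import combinations

def rado_regular(coeffs):
    """Rado's condition for a single linear equation
    a1*x1 + ... + an*xn = 0: it is regular iff some nonempty
    subset of the integer coefficients sums to zero."""
    return any(sum(subset) == 0
               for r in range(1, len(coeffs) + 1)
               for subset in combinations(coeffs, r))

print(rado_regular([1, 2, 3, -5]))  # True: 2 + 3 + (-5) = 0
print(rado_regular([1, 2, 3, -7]))  # False: no nonempty subset sums to 0
```

So 2+3-5=0 witnesses regularity of the first equation, while no nonempty subset of {1, 2, 3, -7} sums to 0.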
What about other equations? Ronald Graham has offered $100 to prove the following:<br />
<br />
For all two-colorings of the naturals there is a mono x,y,z such that<br />
<br />
x<sup>2</sup> + y<sup>2</sup> = z<sup>2</sup><br />
<br />
I've seen this conjecture before and I thought (as I am sure did others) that first there would be a proof that gave an ENORMOUS bound on<br />
<br />
f(c) = the least n such that for all c-colorings of {1,...,n} there is a mono (x,y,z) such that ...<br />
<br />
and then there would be some efforts using SAT solvers and such to get better bounds on f(c).<br />
<br />
This is NOT what has happened. Instead there is now a <a href="http://arxiv.org/abs/1605.00723">paper</a> by Heule, Kullmann, and Marek, where they show f(2)=7825.<br />
<br />
(NOTE- ORIGINAL POST HAD f(2)=7285. Typo, was pointed out in one of the comments<br />
below. Now fixed.)<br />
<br />
<br />
It is a computer proof and is the longest math proof ever. They also have a program that checked that the proof was correct.<br />
<br />
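For small n you can see the phenomenon with a naive backtracking search (a sketch of mine, nowhere near the engineered SAT machinery the authors needed; by their theorem the same search must fail at n = 7825):

```python
def two_colorable(n):
    """Backtracking search for a 2-coloring of {1,...,n} with no
    monochromatic Pythagorean triple. Only feasible for small n."""
    triples = [(x, y, z) for x in range(1, n + 1)
               for y in range(x, n + 1)
               for z in range(y, n + 1) if x * x + y * y == z * z]
    by_max = {}          # triples indexed by their largest element z
    for t in triples:
        by_max.setdefault(t[2], []).append(t)
    color = [None] * (n + 1)

    def extend(k):
        if k > n:
            return True
        for c in (0, 1):
            color[k] = c
            # a triple is fully colored exactly when its max element is k
            if all(len({color[x], color[y], color[z]}) > 1
                   for (x, y, z) in by_max.get(k, [])) and extend(k + 1):
                return True
        color[k] = None
        return False

    return extend(1)

print(two_colorable(100))  # True: {1,...,100} is still 2-colorable
```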
And what did they get for their efforts? A check from Ronald Graham for $100.00 and a blog entry about it!<br />
<br />
While I am sure their proof is correct, I wish there was a human-readable proof that f(2) exists, even if it gave worse bounds. For that matter I wish there was a proof that, for all c, f(c) exists. Maybe one day; however, I suspect that we are not ready for such problems.<br />
<br />
<br />
<br />
<br />
<br />
<br />http://blog.computationalcomplexity.org/2016/05/new-ramsey-result-that-will-be-hard-to.htmlnoreply@blogger.com (GASARCH)7tag:blogger.com,1999:blog-3722233.post-6679416278943668586Thu, 26 May 2016 12:23:00 +00002016-05-26T08:29:52.361-04:00Theory Jobs 2016In the fall we <a href="http://blog.computationalcomplexity.org/2015/10/2015-fall-jobs-post.html">point to theory jobs</a>, in the spring we see who got them. How is the CS enrollment explosion affecting the theory job market? We've seen some <a href="http://www.scottaaronson.com/blog/?p=2620">big</a> <a href="https://windowsontheory.org/2016/04/06/stanford-here-i-come/">name</a> <a href="http://news.harvard.edu/gazette/story/newsplus/microsofts-cynthia-dwork-joins-seas/">moves</a> but that's only part of the picture.<br />
<br />
Like <a href="http://blog.computationalcomplexity.org/2015/05/theory-jobs-2015.html">last year</a> and years past I created a fully editable <a href="https://docs.google.com/spreadsheets/d/16s5lKfp_mgHSVy6HNSadLwlSK0SLGJHxLUS6pwDP_NM/edit?usp=sharing">Google Spreadsheet</a> to crowd source who is going where. Ground rules:<br />
<ul>
<li>I set up separate sheets for faculty, industry and postdoc/visitors.</li>
<li>People should be connected to theoretical computer science, broadly defined.</li>
<li>Only add jobs that you are absolutely sure have been offered and accepted. This is not the place for speculation and rumors.</li>
<li>You are welcome to add yourself, or people your department has hired.</li>
</ul>
This document will continue to grow as more jobs settle. So check it often.<br />
<br />
<iframe frameborder="0" height="750" src="https://docs.google.com/spreadsheets/d/16s5lKfp_mgHSVy6HNSadLwlSK0SLGJHxLUS6pwDP_NM/pubhtml?widget=true&headers=false" width="575"></iframe>
<a href="https://docs.google.com/spreadsheets/d/16s5lKfp_mgHSVy6HNSadLwlSK0SLGJHxLUS6pwDP_NM/edit?usp=sharing">Edit</a>http://blog.computationalcomplexity.org/2016/05/theory-jobs-2016.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-4341125658789844980Tue, 24 May 2016 13:33:00 +00002016-05-24T09:33:11.608-04:00My third post on Gathering for Gardners(Workshop for women in computational topology in August: <a href="https://www.ima.umn.edu/2015-2016/SW8.15-19.16">see here</a>. For a post about these kinds of workshops see <a href="http://blog.computationalcomplexity.org/2010/06/lefthanded-latino-lesbians-in-algebraic.html">here</a>.)<br />
<br />
<br />
(I have already posted twice on stuff I saw or heard at the Gathering Conference <a href="http://blog.computationalcomplexity.org/2016/04/some-short-bits-from-gathering-for.html">here</a> and <a href="http://blog.computationalcomplexity.org/2016/05/some-more-bits-from-gathering-for.html">here</a>.)<br />
<br />
Meta Point- At the Gathering for Gardner conference I learned lots of math (probably more accurate to say I learned ABOUT lots of math) that I want to tell you about, which is why this is my third post on it, and I may post more.<br />
<br />
<i>The Pamphlets of Lewis Carroll: Games, Puzzles, and Related Pieces: </i>This was mostly puzzles that are by now familiar, but one new (to me) thing struck me: an <i>aloof word </i>is a word where if you change any one letter to anything else then it's no longer a word. I think aloof is such a word.<br />
<br />
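Checking aloofness is easy given a word list (the five-word "dictionary" below is just a toy stand-in; to hunt for real aloof words, feed it an actual word list):

```python
import string

def is_aloof(word, dictionary):
    """True if changing any single letter of `word` (to any other
    letter) never produces another word in `dictionary`."""
    for i, old in enumerate(word):
        for c in string.ascii_lowercase:
            if c != old and word[:i] + c + word[i + 1:] in dictionary:
                return False
    return True

toy = {"aloof", "lever", "never", "fever", "sever"}
print(is_aloof("aloof", toy))  # True (within this toy dictionary)
print(is_aloof("lever", toy))  # False: "never", "fever", "sever"
```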
<i>Some talk (I don't know which one)</i> was about the Piet Hein Egg, also called a <a href="https://en.wikipedia.org/wiki/Superegg">superegg</a>. The talk (which differs slightly from the page pointed to) said it was a solid whose surface has the equation<br />
<br />
|x/a|<sup>2.5</sup> + |y/a|<sup>2.5</sup> + |z/b|<sup>2.5</sup> = 1<br />
<br />
and it's an egg which can stand on its end. (Note the x/a, y/a, z/b- that is correct, not a typo.)<br />
(Personal Note: Piet Hein invented <a href="https://en.wikipedia.org/wiki/Soma_cube">Soma Cubes</a> which is a puzzle where you put together 3-d pieces<br />
made of small cubes into a large cube or other shapes. I learned about these in a Martin Gardner column and bought a set. I was very good at this- I put together every figure in the booklet within a week. This was the ONLY sign that I was GOOD at math when I was a kid, though there are many signs that I was INTERESTED in math. About 30 years ago my girlfriend at that time and I went to a restaurant and there was a SOMA set on the table, assembled into a cube. I took it apart and she said ``Bill, you'll never be able to put it back together!!!'' I then ``tried'' to and ended up putting together instead a bathtub, a dog, a wall, and a W. But gee, it ``seemed'' like I was fumbling around and couldn't get a cube. ``Gee Bill, I think you've seen this puzzle before''. And who is this insightful girlfriend? My wife of over 20 years!)<br />
<br />
<i>Magic Magic Square </i>(Sorry, don't know which talk) Try to construct a 4x4 magic square where (as usual) all rows and columns sum to the same thing. But also try to make all sets of four numbers that form a square (e.g., all four corners) also add to that number. Can you? If you insist on using naturals then I doubt it. Integers I also doubt it. But you CAN do it with rationals. How? If you want to figure it out yourself then DO NOT go to the answer which is at this link: <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/mmsq.txt">here</a><br />
<br />
<i>Droste Effect: </i>When a picture appears inside itself. For an example and why it's called that see <a href="https://en.wikipedia.org/wiki/Droste_effect">here</a><br />
<br />
<i>Black Hole Numbers: </i>If you have a rule that takes numbers to numbers, are there numbers that ALL numbers eventually go to? If so, they are black hole numbers for that rule.<br />
<br />
Map a number to the number of letters in its name<br />
<br />
20 (twenty) --> 6 (six) --> 3 (three) --> 5 (five) --> 4 (four) --> 4 (four) --> ...<br />
<br />
It turns out that ANY number eventually goes to 4.<br />
<br />
Map a number to the sum of the digits of its divisors<br />
<br />
12 has divisors 1,2,3,4,6,12 --> 1+2+3+4+6+1+2=19<br />
<br />
19 has divisors 1,19 so --> 1+1+9 = 11<br />
<br />
11 has divisors 1,11 so --> 1+1+1 = 3<br />
<br />
3 has divisors 1,3 so --> 1+3=4<br />
<br />
4 has divisors 1,2,4 so --> 1+2+4=7<br />
<br />
7 has divisors 1,7 so --> 1+7=8<br />
<br />
8 has divisors 1,2,4,8 --> 1+2+4+8=15<br />
<br />
15 has divisors 1,3,5,15 --> 1+3+5+1+5 = 15<br />
<br />
AH. It turns out ALL numbers eventually get to 15.<br />
<br />
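The divisor-digit-sum walk is a few lines of code; this sketch of mine replays the 12 --> 19 --> ... --> 15 chain above:

```python
def divisor_digit_sum(n):
    """Sum of the decimal digits of all divisors of n."""
    return sum(int(d) for k in range(1, n + 1) if n % k == 0
               for d in str(k))

def orbit(n, max_steps=100):
    """Iterate the map from n until it hits the fixed point 15
    (or give up after max_steps iterations)."""
    seq = [n]
    while seq[-1] != 15 and len(seq) <= max_steps:
        seq.append(divisor_digit_sum(seq[-1]))
    return seq

print(orbit(12))  # [12, 19, 11, 3, 4, 7, 8, 15]
```

Note that 15 really is a fixed point: its divisors 1, 3, 5, 15 have digit sum 1+3+5+1+5 = 15.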
<i>Boomerang Fractions: </i>Given a fraction f do the following:<br />
<br />
x1=1, x2=1+f, x3- you can either add f to x2 or invert x2. Keep doing this. Your goal is to get back to 1 as soon as possible. Here is a paper on it: <a href="http://seomtc.weebly.com/uploads/1/3/0/4/13040232/boomerang_fractions.pdf">here</a>. This notion can be generalized: given (s,f) start with s and try to get back to s. Can you always? How long would it take? Upper bounds?<br />
<br />
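Here is a small breadth-first search over exact fractions for my reading of the puzzle (start at 1, the first move adds f, each later move either adds f or takes the reciprocal; f assumed to be a positive fraction):

```python
from fractions import Fraction

def boomerang_moves(f, limit=15):
    """Fewest moves to return to 1: start at 1, the first move adds f,
    and each move thereafter either adds f or replaces x by 1/x.
    Returns None if no return happens within `limit` moves."""
    f = Fraction(f)
    frontier = {Fraction(1) + f}  # state after move 1
    seen = set(frontier)
    moves = 1
    while moves < limit:
        nxt = set()
        for x in frontier:
            for y in (x + f, 1 / x):
                if y == 1:
                    return moves + 1
                if y not in seen:
                    seen.add(y)
                    nxt.add(y)
        frontier = nxt
        moves += 1
    return None

print(boomerang_moves(Fraction(1, 2)))  # 4: 1 -> 3/2 -> 2 -> 1/2 -> 1
```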
<i>Liar/Truth teller patterns on a square plane by Kotani Yoshiyuki. </i>You have a 4 x 4 grid. Every grid point has a person. They all say ``I have exactly one liar adjacent (left, right, up, or down) to me.''<br />
How many ways can this happen? This can be massively generalized.<br />
<br />
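The 4 x 4 case is small enough to brute force: try all 2<sup>16</sup> assignments and demand that each truth-teller's claim is true and each liar's claim is false. (This is my formalization of the puzzle; I don't know what count the talk reported.)

```python
from itertools import product

def count_configs(rows, cols):
    """Count liar/truth-teller assignments on a rows x cols grid where
    everyone says 'exactly one liar is adjacent to me': the claim must
    be true for truth-tellers and false for liars."""
    cells = [(r, c) for r in range(rows) for c in range(cols)]
    nbrs = {(r, c): [(r + dr, c + dc)
                     for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= r + dr < rows and 0 <= c + dc < cols]
            for (r, c) in cells}
    total = 0
    for bits in product((0, 1), repeat=len(cells)):  # 1 = liar
        liar = dict(zip(cells, bits))
        # claim's truth value must be the opposite of liar status
        if all((sum(liar[n] for n in nbrs[p]) == 1) != bool(liar[p])
               for p in cells):
            total += 1
    return total

print(count_configs(4, 4))
```

As a sanity check, on a 2 x 2 grid the only valid configuration is everyone lying (each liar then has two liar neighbors, so each false claim checks out).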
<i>Speed Solving Rubik's Cube by Van Grol and Rik. </i>A robot can do it in 0.9 seconds: <a href="https://www.youtube.com/watch?v=ixTddQQ2Hs4">here.</a><br />
<br />
<br />http://blog.computationalcomplexity.org/2016/05/my-third-post-on-gathering-for-gardners.htmlnoreply@blogger.com (GASARCH)0tag:blogger.com,1999:blog-3722233.post-7062979489400396707Thu, 19 May 2016 13:28:00 +00002016-05-19T09:28:07.292-04:00UpfrontsThe US television industry has long fascinated me, an entertainment outlet driven by technology. David Sarnoff introduced television at the World's Fair in 1939 and developed the NBC network to provide content so people would buy RCA televisions, much the way Steve Jobs created the iTunes store to sell iPods. For decades television was broadcast over the air funded mostly by commercials. You could only watch a show when it aired and people adjusted their schedules to the broadcast schedule. People stayed home instead of going to the theater, movies, social clubs and restaurants. They all still exist but not to the extent before television. The nature of jobs changed. One funny comedian on TV would make considerable money but would put hundreds of vaudeville comedians out of a job.<br />
<br />
In the 70's came cable television to big cities, initially to provide a better signal. But it also provided more stations including stations that were paid explicitly by consumers like HBO and implicitly through cable subscriptions like ESPN. ESPN is the single largest source of revenue for Disney. Eventually we would have hundreds of cable stations, many very specialized.<br />
<br />
In the 80's came the VCR, then the DVR. No longer did we need to plan our time around the TV broadcast schedule. Eventually TV shows could have a continuing story line allowing for richer plot and character development.<br />
<br />
Then came the Internet and streaming video. You could watch videos from series and movies on Netflix to user generated short pieces on YouTube or shorter still on Vine. People are watching TV not so much on TVs anymore but on their computers and phones. Like many others we have cut the cable cord in the Fortnow household, a trend that the industry still tries to fathom. Every cord cutter is $6 less a month to ESPN and the Disney bottom line.<br />
<br />
Why bring up TV now? This is what used to be the most exciting week for television, the upfronts, where the broadcast networks reveal their new seasons to advertisers and the public at large. The networks are still having their presentations and parties, but the new shows fail to excite, with quite a few retreads and revivals including 24, Prison Break, MacGyver, Tales from the Crypt, and Gilmore Girls. Do you remember the Muppets returning last year? Neither do I.<br />
<br />
We are in a golden age of television. One could take a rich novel and turn it into an equally rich 10-13 episode TV series. There were over 400 scripted TV series, and more really good series than I have time to watch (basically when I run on the treadmill). Meanwhile the networks continue to promote and party though an undercurrent of a very uncertain future. Watching the television industry is itself a never ending story.http://blog.computationalcomplexity.org/2016/05/upfronts.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-33970272611540147Mon, 16 May 2016 18:42:00 +00002016-05-16T14:42:36.168-04:00Does this leak information?Here are four fictional stories though inspired by real world events or TV shows (I forget which is which). My question is, was a confidence broken or was some information leaked that should not have been? I do not have answers.<br />
<br />
Tenure: The candidate DOES find out the vote (e.g., 18 yes, 2 no) but DOES NOT find out who voted what. But what if the vote is 20 Yes, 0 No? Then the candidate DOES know how everyone voted. (Worse if it was 20 No, 0 Yes.) I am sure this has been studied in crypto. Here is one solution: randomly flip one bit.<br />
<br />
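The "randomly flip one bit" fix is basically randomized response. A toy sketch (my own illustration, not any real tenure procedure): flip each ballot independently with small probability before tallying, so even a reported 20-0 no longer proves how any individual voted:

```python
import random

def noisy_tally(votes, flip_p=0.1, seed=2016):
    """Report a tally after flipping each yes/no ballot (1/0)
    independently with probability flip_p. A unanimous true vote can
    then be reported as, say, 18-2, giving every voter deniability."""
    rng = random.Random(seed)
    return sum(v if rng.random() >= flip_p else 1 - v for v in votes)

print(noisy_tally([1] * 20))            # some number near, but not reliably, 20
print(noisy_tally([1] * 20, flip_p=0))  # 20: no noise, no deniability
```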
Lawyers:<br />
<br />
CLIENT: I have roughly X dollars counting all of my assets. Are you the right firm to handle my estate?<br />
<br />
LAWYER: Yes<br />
<br />
CLIENT: Do you always say that?<br />
<br />
LAWYER: No. If you had log(X) money then we would recommend a cheaper firm since your estate would not need our complex services. And if you had X<sup>10</sup> money then there are other firms that are more familiar with investments at that level.<br />
<br />
CLIENT: So, for example, Mitt Romney is not a client.<br />
<br />
LAWYER: That is correct.<br />
<br />
Did the lawyer break a confidence by saying that Mitt Romney was NOT a client? Could CLIENT go to lots of law firms and play this game and eventually find out Mitt Romney's lawyer?<br />
<br />
Nobel Prize: If the committee leaks that the winner has been notified THAT he or she won, but not WHO it was, is that a breach?<br />
<br />
Someone has confessed to a priest that he murdered someone (a staple of TV shows and movies). The wrong man, whose name is Bob, is in jail.<br />
<br />
PRIEST TO COP: You have the wrong man.<br />
<br />
COP: How do you know?<br />
<br />
PRIEST: I can't say how I know, but I know.<br />
<br />
COP: Oh, it must be that the guilty man confessed to you but you can't break the seal of the confession. I won't ask you to. But here is a question: Has Bob been to confession lately?<br />
<br />
PRIEST: No! (and he seems relieved to have gotten the message through)<br />
<br />
Did the Priest betray the killer's confidence?<br />
<br />
People in Crypto (and elsewhere) define Information, Knowledge, Security, and similar terms formally so they can have protocols and try to prove things. Are their definitions realistic? In the above scenarios, are these cases breaches or not? Is that even a rigorous question?<br />
<br />
<br />
<br />
<br />http://blog.computationalcomplexity.org/2016/05/does-this-leak-information.htmlnoreply@blogger.com (GASARCH)6tag:blogger.com,1999:blog-3722233.post-7249134896188452011Thu, 12 May 2016 21:20:00 +00002016-05-12T17:21:02.183-04:00The Challenges of Smart CitiesEarlier this week I attended the CCC workshop <a href="http://cra.org/ccc/events/computing-innovation-societal-needs-the-impact-of-computing-research">Computing Research: Addressing National Priorities and Societal Needs</a> (<a href="http://livestream.com/CompComCon/ComputingResearch">video</a>). The workshop covered a large collection of topics, highlighting challenges of big data, privacy, security, sustainability, education, the future of work, CS funding and partnerships and more.<br />
<div>
<br /></div>
<div>
I'd like to highlight the challenges of Smart Cities, addressed in a panel Monday morning and a talk by Keith Marzullo on Tuesday afternoon. Roughly, a smart city is one that uses technology to improve services, for example, sensors everywhere or preparing cities for autonomous vehicles. The speakers highlighted a number of major challenges.</div>
<div>
<ul>
<li>There are 382 <a href="https://en.wikipedia.org/wiki/Metropolitan_statistical_area">Metropolitan Statistical Areas</a> in the US, from New York to Carson City, that together account for 84% of the US population and 91% of GDP. Many cities share similar problems, but how easily can one port hardware and algorithms from one area to another? How do you scale smart cities without reinventing the wheel each time?</li>
<li>Who pays for the infrastructure? Sometimes one can get research grants or federal help to start new projects, but these projects need continual maintenance afterwards. Are researchers just in it to start a project, write a paper and get out? How do we keep the advantages going in the long run?</li>
<li>How do you keep the public's trust that the information collected will help the city and is not just keeping track of everyone a la Orwell's 1984?</li>
<li>How do we make sure we tackle the problems of the general public and not just the researchers and those who help fund? A great quote: We need to make sure we are focused more on mass transit than on how to make parking the Tesla easier.</li>
<li>If we use big data to predict crime and position police in response, could that cause discrimination and harassment?</li>
<li>How do we keep our research relevant?</li>
</ul>
<div>
Rural areas got their due as well. There were interesting presentations on how farmers can use sensors and machine learning to optimize crops, applying just the right amounts of fertilizer and water to each segment of the farm. </div>
</div>
<div>
<br /></div>
<div>
To paraphrase Tip O'Neill, all computing is local, but we face many challenges taking our broad tools of cloud, big data, machine learning, automation and internet of things and applying them in our own neighborhoods.</div>
http://blog.computationalcomplexity.org/2016/05/the-challenges-of-smart-cities.htmlnoreply@blogger.com (Lance Fortnow)2tag:blogger.com,1999:blog-3722233.post-1665186546970958827Tue, 10 May 2016 18:30:00 +00002016-05-11T19:33:24.903-04:00Math lessons from the Donald Trump Nomination (non-political)There may be articles titled <i>Donald Trump and the Failure of Democracy. </i>This is NOT one of them. This is about some math questions. I drew upon many sources but mostly Nate Silver's columns: <a href="http://fivethirtyeight.com/features/donald-trumps-six-stages-of-doom/">Donald Trump's Six Stages of Doom</a>, <a href="http://fivethirtyeight.com/features/how-the-republican-field-dwindled-from-17-to-donald-trump/">How the Republican Field Dwindled from 17 to Trump (a collection of articles)</a>, <a href="http://fivethirtyeight.com/features/the-four-things-i-learned-from-the-donald-trump-primary/">Four Things I Learned from the Donald Trump Primary</a>. For the best news piece of the year on Donald Trump see <a href="https://www.youtube.com/watch?v=DnpO_RTSNmQ">John Oliver's</a> segment.<br />
<br />
1) <i>Trends</i>. Since 1972 (the beginning of the modern era of prez primaries) the Republicans have ALWAYS (with one exception I'll get to) nominated someone who was either PREZ or a sitting or former Gov, Senator, or VP who had ALSO been a serious candidate in a prior prez primary race. The only exception is W, who was a sitting Governor but had never run before, though he of course had name recognition. In short, someone FAMILIAR. This also fits our image of the Republicans as an old boys network (Dole got the nomination in 1996 because it was his turn). Hence most pundits expected the same this year.<br />
<br />
a) The old ML maxim: Trends hold TILL THEY DON"T.<br />
<br />
b) Nobody quite fit the pattern. The only ones who had run before were Rick Santorum, Rick Perry, and Mike Huckabee. Rick S and Mike H were niche candidates; Rick P had wider appeal in 2012 but entered late and stumbled (the WHOOPS moment, though more on that later). Jeb was like W, a former Gov with the family name. So based just on Trends, Perry or Jeb should have been the nominee, but it's not that strong a match. (Added Later- A commenter says that John K ran briefly in 2000. My criterion was having been a serious candidate- not quite well defined, but John K would not have qualified. Even so, Governor and ran a bit, so he also was close to the criteria.) IF YOU HAVE A TREND AND USE IT TO PREDICT, MAKE SURE THE DATA YOU HAVE FITS THE TREND.<br />
<br />
c) I WILL NOT claim that I predicted any of this but there is an inkling of what happened in my post <a href="http://blog.computationalcomplexity.org/2015/05/the-law-of-excluded-middle-of-road.html">The law of the excluded middle of the road republicans</a> where I pointed out for each candidate (including Trump) why they couldn't win. IF YOU HAVE A LARGE NUMBER OF LOW PROB EVENTS WHERE ONE WILL HAPPEN, IT'S HARD TO PREDICT WHICH ONE.<br />
<br />
d) The pattern itself is only based on 11 data points, and you might not want to count the four where there was a Republican prez running for re-election. And 11 data points is not the full story--- the political situation from 1972 to 2016 changed dramatically. So these are data points on a moving target. Perhaps they should use papers like <a href="http://www.cs.nyu.edu/~mohri/pub/drift.pdf">New analysis and algorithms for learning with drifting distributions</a>. IT'S HARD TO DO ANY REAL DATA ANALYSIS WHEN YOU DON'T HAVE ENOUGH DATA AND IT CHANGES OVER TIME.<br />
<br />
2) <i>Domination:</i> Republican primary voters (in the past) wanted a candidate who was both conservative and electable. But what combination? I read that Chris Christie had no chance since it was thought that Jeb, S Walker, and Rubio were all MORE electable and MORE conservative- so they dominated him. Hence Donald Trump couldn't win since he was (thought to be) less electable and probably less conservative, though that's hard to tell since he never held office. But some voters were tired of voting for electable, as McCain and Romney were allegedly electable. And some were just plain angry. If you think your problems are because of immigrants vote Trump, if you think your problems are because of Wall Street then Feel the Bern. VOTERS DO NOT CARE ABOUT CONVEXITY AND DOMINATION.<br />
<br />
3) <i>Nate Silver</i>. He's the pollster who is NOT a pundit, does NOT let who he wants to see win affect what he predicts, wrote a great book about predictions: <a href="http://www.amazon.com/The-Signal-Noise-Many-Predictions/dp/159420411X/ref=cm_cr_arp_d_product_top?ie=UTF8">The Signal and the Noise: Why So Many Predictions Fail But Some Don't</a>, and got many predictions right in recent years. He wrote an excellent article, <a href="http://fivethirtyeight.com/features/donald-trumps-six-stages-of-doom/">Donald Trump's Six Stages of Doom</a>, in Aug 2015, which laid out the obstacles to the nomination and gave it a 2% chance. To his credit he has owned this prediction in that later columns have told us where he went wrong. (Most pundits never say `Gee I was wrong.') So why did his prediction not pan out?<br />
<br />
a) They did in a sense. All of the problems he pointed out that Trump would have, Trump DID have- for example, Trump did not have a good organization to control delegates, and the party did try to stop him. So in a strange sense Nate was right. Except that he was wrong.<br />
<br />
b) Back to Nate's 2%. Bill James (baseball stats guru) wrote (I am paraphrasing) <i>If you are given odds of 500-1 that some awful team will win the World Series then TAKE THAT BET. People have a hard time telling unlikely from REALLY unlikely. And the NY Mets did win the 1969 World Series. (A quote from 1962: There will be a man on the moon before the Mets win the World Series- true by two months.) </i>Also note that the Leicester soccer team won this year despite being (literally) 5000-1 underdogs (see <a href="http://www.cbssports.com/soccer/eye-on-soccer/25575254/everything-you-should-know-about-5000-to-1-premier-league-champ-leicester">here</a>). WAS NATE WRONG? If you give an event a 2% chance and it happens I can't say you are wrong. In fact, if most everyone else gave it less than 2% or even 0 (which is the case here) then you are... less wrong.<br />
<br />
4) <i>Bill Gasarch. </i>Based on TRENDS above I predicted Paul Ryan (and I owe Lance a dinner). My mistake was betting Ryan-I win, ANYONE ELSE-Lance wins (oddly enough, with a contested convention I might have still won that bet). I should have made Lance name 5 candidates, and if any of those five win, he wins, but if it's Ryan I win. I doubt he would have named Trump.<br />
<br />
5) <i>Game Theory: </i>Lance has posted about <a href="http://blog.computationalcomplexity.org/search?q=Primaries">Primary Game Theory</a>. The main issue for a Trump voter might be `Gee, if I vote Trump he is not electable even though I like him, so I'll vote for X instead who is more electable.' But voters are not game theorists. Plus they voted for John McCain and Mitt Romney based on that and they lost. So when Rubio said <i>A Vote for Trump is a Vote for Hillary </i>he may be right but the voters are not listening. Plus since Little Marco only won Minnesota and Puerto Rico (they have a primary! who knew!) he was not positioned to talk about electability. Plus one could argue that VERY few of the candidates could beat Hillary. In an early column Nate thought only Jeb, Little Marco, and Scott Walker (remember him?) could. So once Rubio dropped out the electability argument was useless.<br />
<br />
6) <i>More Game Theory:</i> Many of the candidates wanted someone ELSE to go after Trump so they went after each other.<br />
<br />
7) <i>The Pledge: </i>For fear that Trump would run third party they all signed a pledge promising to support whoever got the nomination. When they signed it they never imagined that Trump would be the nominee.<br />
<br />
8) <i>Prediction Markets: </i>They did pretty well; by about March they had come around. Last week David Brooks maintained that Trump would not be the nominee, but he was kidding. I think.<br />
<br />
<br />
<br />
<br />http://blog.computationalcomplexity.org/2016/05/math-lessons-from-donald-trump.htmlnoreply@blogger.com (GASARCH)4tag:blogger.com,1999:blog-3722233.post-6996481209208374401Thu, 05 May 2016 14:26:00 +00002016-05-05T10:26:06.524-04:00Open QuestionsThrough the years I've mentioned a few of my favorite open problems in computational complexity on this blog that have perplexed me through the years. Let me mention a few of them again in case they inspire some of the new generation of complexity theorists.<br />
<ol>
<li>Does the polynomial-time hierarchy look like the arithmetic hierarchy? I mentioned this in the <a href="http://blog.computationalcomplexity.org/2013/11/andrzej-mostowski-1913-1975.html">centenary post for Mostowski</a>. Probably not because it would imply factoring in P (since NP∩co-NP would equal P) but we have no proof of separation and no oracle that makes them look the same.</li>
<li><a href="http://blog.computationalcomplexity.org/2003/12/does-npup.html">Does UP = NP imply the polynomial-time hierarchy collapses</a>? What are the consequences if SAT had an NP algorithm with a unique accepting path? Remains open, again even in relativized worlds. </li>
<li><a href="http://blog.computationalcomplexity.org/2003/11/rational-functions-and-decision-tree.html">Do rational functions that agree with Boolean functions on the hypercube have low decision tree complexity</a>? I really expected someone to have come up with a proof or counterexample by now. </li>
<li>What happens if two queries to NP can be simulated by a single query? Does S<sub>2</sub><sup>P</sup>=ZPP<sup>NP</sup>? Both questions asked in a <a href="http://blog.computationalcomplexity.org/2002/08/complexity-class-of-week-s2p.html">post on S<sub>2</sub><sup>P</sup></a>.
</li>
<li>Separate NP from Logarithmic space. I gave four approaches in a pre-blog 2001 <a href="http://lance.fortnow.com/papers/files/diag.pdf">survey on diagonalization</a> (Section 3) though none have panned out. Should be much easier than separating P from NP.</li>
</ol>
http://blog.computationalcomplexity.org/2016/05/open-questions.htmlnoreply@blogger.com (Lance Fortnow)16tag:blogger.com,1999:blog-3722233.post-7870018573692638331Sun, 01 May 2016 21:56:00 +00002016-05-01T17:56:36.357-04:00Some more bits from the Gathering for Gardner<br />
I posted about the Gathering for Gardner conference and about some of the talks I saw <a href="http://blog.computationalcomplexity.org/2016/04/some-short-bits-from-gathering-for.html">here.</a> Today I continue with a few more talks.<br />
<br />
<i>Playing Penney's game with Roulette</i> by Robert Vallen. Penney's game is the following: let k be fixed. Alice and Bob pick different elements of {H,T}^k. They flip a coin until one of their sequences shows up as a block of consecutive outcomes, and that person wins. Which sequences have the best probability of winning?<br />
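Penney's game is a nice target for a quick simulation. Here is a minimal Python sketch (the particular pair HHT versus THH is my own illustrative choice, not from the talk); it shows the game's well-known non-transitive flavor: THH beats HHT roughly three quarters of the time.

```python
import random

def penney_winner(seq_a, seq_b, rng):
    """Flip a fair coin until seq_a or seq_b appears as the most recent flips."""
    k = len(seq_a)
    recent = ""
    while True:
        recent = (recent + rng.choice("HT"))[-k:]   # keep only the last k flips
        if recent == seq_a:
            return "A"
        if recent == seq_b:
            return "B"

rng = random.Random(0)      # fixed seed so the experiment is repeatable
trials = 20000
wins_b = sum(penney_winner("HHT", "THH", rng) == "B" for _ in range(trials))
print(wins_b / trials)      # close to 3/4: THH beats HHT
```

Swapping in other pairs shows that for every sequence there is another that beats it, which is what makes the game paradoxical.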
<br />
<i>New Polyhedral dice</i> by Robert Fathauer, Henry Segerman, Robert Bosch. This is a good example of how my mentality (and possibly yours) differs from others. When I hear ``60-sided dice'' I think ``p1,...,p60, all between 0 and 1, adding up to 1.'' I also thought that only the platonic solids could be used to form fair dice (so only 4-sided, 6-sided, 8-sided, 12-sided, and 20-sided dice can be made). NOT so. These authors actually MAKE real dice, and the dice do not have to be platonic solids. <a href="https://plus.google.com/+HenrySegerman/posts/AADYapU6BTo">Here</a> is their website.<br />
<br />
<i>Numerically balanced dice</i> by Robert Bosch (paper is <a href="http://www.oberlin.edu/math/faculty/bosch/nbd_abridged.pdf">here</a>). Why do opposite sides of a die sum to the same number? Read the paper to find out!<br />
<br />
<i>Secret messages in juggling and card shuffling</i> by Erik Demaine. Erik Demaine was one of about four theoretical computer scientists I met at the conference, though Erik is so well rounded that calling him a theoretical computer scientist doesn't seem quite right. I had never met him before, which surprised me. In this talk he showed us some new fonts, one based on juggling. See <a href="http://erikdemaine.org/fonts/juggling/">here</a> for an example of juggling fonts, co-authored with his father Martin.<br />
<br />
<i>Fibonacci Lemonade</i> by Andrea Johanna Hawksley. Put in the lemon and sugar in Fibonacci-number increments. <a href="http://blog.andreahawksley.com/fibonacci-lemonade/">Here</a> is their website. In my first post I said the talks were on a variety of topics and then presented mostly math talks; this talk is an example of that variety. There were other talks involving the Fibonacci numbers. I was surprised by this since they aren't that special (see <a href="http://www.goldennumber.net/golden-ratio-myth/">here</a>).<br />
<br />
<i>Penis Covers and Puzzles: Brain Injuries and Brain Health</i> by Gini Wingard-Phillips. She recounted having various brain injuries and how working on mathematical puzzles, of the type Martin Gardner popularized, HELPED HER RECOVER! As for the title: people with brain injuries sometimes have a hard time finding the words for things, so they use other words. In this case she wanted her husband to buy some <i>condoms</i> but couldn't think of the word, so she said <i>penis covers</i> instead.<br />
<br />
<i>Loop: Pool on an Ellipse</i> by Alex Bellos. Similar in my mind to the polyhedral dice talk (you'll see why). We all know that if you build an elliptical pool table with a hole at one focus, then a ball placed at the other focus and hit hard enough WILL go into the hole. But Alex Bellos actually MAKES these pool tables (see <a href="http://www.loop-the-game.com/">here</a> if you want to buy one for $20,000). He told us the history: someone else tried to make one in 1962 but nobody bought them (I wonder if anyone is going to buy his), and Alex had problems with friction, since the mathematical argument assumes a frictionless surface. So his game does require some skill. The similarity to the dice talk is that I (and you?) are used to thinking about dice and ellipses abstractly, not as objects people actually build.<br />
<br />
This post is getting long so I'll stop here and report more in a later post. Why so many posts? Six-minute talks that I can actually understand and am delighted to tell you about!<br />
<br />
<br />http://blog.computationalcomplexity.org/2016/05/some-more-bits-from-gathering-for.htmlnoreply@blogger.com (GASARCH)0tag:blogger.com,1999:blog-3722233.post-7636396135154304804Thu, 28 Apr 2016 18:06:00 +00002016-04-28T14:06:43.443-04:00Claude Shannon (1916-2001)<div class="separator" style="clear: both; text-align: center;">
<a href="https://upload.wikimedia.org/wikipedia/en/thumb/2/2f/Claude_Elwood_Shannon_(1916-2001).jpg/200px-Claude_Elwood_Shannon_(1916-2001).jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="https://upload.wikimedia.org/wikipedia/en/thumb/2/2f/Claude_Elwood_Shannon_(1916-2001).jpg/200px-Claude_Elwood_Shannon_(1916-2001).jpg" width="141" /></a></div>
Claude Shannon was born one hundred years ago Saturday. Shannon had an incredible career but we know him best for his 1948 paper <a href="http://dx.doi.org/10.1145/584091.584093">A Mathematical Theory of Communication</a> that introduced entropy and information theory to the world. Something I didn't know until looking him up: Shannon was the first to define information-theoretic security and to show that one-time pads are essentially the only perfectly secure code.<br />
<br />
Entropy has a <a href="https://en.wikipedia.org/wiki/Entropy_(information_theory)#Definition">formal definition</a>: the minimum expected number of bits to represent the output of a distribution. But I view information as a more abstract concept of which entropy is just one instantiation. When you think of concepts like conditional information, mutual information, and symmetry of information, the idea of an underlying distribution tends to fade away and you begin to think of information itself as an entity worth studying. And when you look at Kolmogorov complexity, often called algorithmic information theory, the measure is over strings, not distributions, yet it has many of the same concepts and relationships as the entropy setting.<br />
<br />
Computational Complexity owes much to Shannon's information. We can use information theory to get lower bounds on communication protocols and circuits, and even upper bounds on algorithms. Last spring the Simons Institute for the Theory of Computing had a semester program on <a href="https://simons.berkeley.edu/programs/inftheory2015">Information Theory</a> including a workshop on <a href="https://simons.berkeley.edu/workshops/inftheory2015-3">Information Theory in Complexity Theory and Combinatorics</a>. Beyond theory, relative entropy, or <a href="https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence">Kullback–Leibler divergence</a>, plays an important role in measuring the effectiveness of machine learning algorithms.<br />
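For concreteness, both quantities are a few lines of code; here is a minimal Python sketch (the example distributions are made up for illustration):

```python
from math import log2

def entropy(p):
    """Shannon entropy in bits: minimum expected number of bits
    to encode one draw from distribution p."""
    return -sum(pi * log2(pi) for pi in p if pi > 0)

def kl_divergence(p, q):
    """Relative entropy D(p||q): extra bits paid when coding draws
    from p with a code designed for q."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

print(entropy([0.5, 0.5]))                    # 1.0 -- a fair coin needs one bit
print(entropy([0.9, 0.1]))                    # about 0.47 -- a biased coin needs less
print(kl_divergence([0.5, 0.5], [0.9, 0.1]))  # positive -- the mismatch costs extra bits
```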
<br />
We live in an age of information, growing dramatically every year. How do we store information, how do we transmit it, how do we learn from it, how do we keep it secure and private? Let's celebrate the centenary of the man who gave us the framework to study these questions and so much more.http://blog.computationalcomplexity.org/2016/04/claude-shannon-1916-2001.htmlnoreply@blogger.com (Lance Fortnow)2tag:blogger.com,1999:blog-3722233.post-6069339588006717070Mon, 25 Apr 2016 00:01:00 +00002016-05-01T17:45:32.550-04:00Some short bits from the Gathering for Gardner Conference<br />
I attended G4G12 (Gathering for Gardner), a conference that meets every two years (though the gap between the first and second was three years) to celebrate the work of Martin Gardner. Most of the talks were on recreational mathematics, but there were also some on magic and some that are hard to classify.<br />
<br />
Martin Gardner had a column in Scientific American called Mathematical Games from 1956 to 1981. His column inspired many people to go into mathematics. Or perhaps people who liked math read his column. The first theorem I ever read outside of a classroom was in his column. It was, in our terminology: a graph is Eulerian iff every vertex has even degree.<br />
<br />
For a joint review of six G4G proceedings see <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/gardnerg.pdf">here</a>. For a joint review of six books on recreational math including three of Gardner's, see <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/gardneraha.pdf">here</a>. For a review of a book that has serious math based on the math he presented in his column see <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/gardner21.pdf">here</a>.<br />
<br />
The talks at G4G are usually six minutes long, so you can learn about a nice problem and then work on it yourself. There was a large variety of talks and topics. Many of the talks do not have an accompanying paper. Many of them are not on original material. But none of this matters--- the talks were largely interesting and told me stuff I didn't know.<br />
<br />
64=65 and Fibonacci, as Studied by Lewis Carroll, by Stuart Moshowitz. This was about a Lewis Carroll puzzle where he put together shapes in one way to get a rectangle of area 65, and another way to get a square of area 64. The following link is NOT to his talk or a paper of Moshowitz, but it is about the problem: <a href="https://mathlesstraveled.com/2011/05/02/an-area-paradox/">here</a>.<br />
<br />
How Math Can Save Your Life by <a href="https://en.wikipedia.org/wiki/Susan_Marie_Frontczak">Susan Marie Frontczak</a>. This was part talk about bricks and weights, and then she stood on the desk and sang <a href="https://www.youtube.com/watch?v=VuRgiZVMUO0">this song</a> (that's not her singing it).<br />
<br />
Twelve Ways to Trisect an Angle by David Richeson. This was NOT a talk about cranks who thought they had trisected an angle with straightedge and compass. It was about people who used ruler, compass, and JUST ONE MORE THING. I asked David later if the people who trisected the angle before it was shown impossible had a research plan to remove the ONE MORE THING and get a real trisection. He said no: people pretty much knew it was impossible even before the proof.<br />
<br />
The Sleeping Beauty Paradox Resolved by Pradeep Mutalik. This paradox would take an entire blog post to explain, so here is a pointer to the Wikipedia entry on it: <a href="https://en.wikipedia.org/wiki/Sleeping_Beauty_problem">here</a>. AH, this one DOES have a paper associated with it, so you can read his resolution <a href="https://www.quantamagazine.org/20160129-solution-sleeping-beautys-dilemma/">here</a>.<br />
<br />
Larger Golomb Rulers by Tomas Rokicki. A <a href="https://en.wikipedia.org/wiki/Golomb_ruler">Golomb ruler</a> is a ruler with marks on it such that all of the distances between pairs of marks are distinct. The number of marks is called the order of the ruler. Constructing a Golomb ruler is easy (e.g., marks at the 1, 2, 4, 8, ... positions I think works). The real question is to get one of shortest length. They had some new results but, alas, I can't find them on the web.<br />
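Checking the Golomb property is easy by brute force, and a small Python sketch also confirms the hedged powers-of-two claim above (the particular rulers are my own examples):

```python
from itertools import combinations

def is_golomb(marks):
    """A ruler is Golomb if all pairwise distances between marks are distinct."""
    dists = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(dists) == len(set(dists))

print(is_golomb([1, 2, 4, 8, 16]))  # True: powers of two do work...
print(is_golomb([0, 1, 4, 6]))      # True: ...but this order-4 ruler is shorter (length 6)
print(is_golomb([0, 1, 2, 4]))      # False: the distances 1 and 2 each appear twice
```

Powers of two always work because a difference 2^a - 2^b determines a and b from its binary representation, but as the order-4 example shows they are far from shortest.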
<br />
Chemical Pi by John Conway. There are people who memorize the first x digits of pi. John Conway does something else. He has memorized the digits of pi and the chemical elements in the following way:<br />
<br />
HYDROGEN 3.141592653 HELIUM next 10 digits of pi LITHIUM etc<br />
<br />
that is, he memorized the digits of pi in groups of 10, separated by the chemical elements in the order they appear on the periodic table. He claims this makes it easier to answer questions like: what is the 87th digit of pi? He also claims it gives a natural stopping point for how many digits of pi you need to memorize (need? maybe want). (ADDED LATER WHEN I CORRECTED HELIUM TO HYDROGEN: here are some mnemonic devices: <a href="https://www.mnemonic-device.com/chemistry/periodic-table/periodic-table-of-elements/">here</a>.)<br />
<br />
This post is getting long so I may report on more of the talks in a later post.<br />
<br />
<br />
<br />
<br />
<br />
<br />http://blog.computationalcomplexity.org/2016/04/some-short-bits-from-gathering-for.htmlnoreply@blogger.com (GASARCH)7tag:blogger.com,1999:blog-3722233.post-6594221842661710665Thu, 21 Apr 2016 13:07:00 +00002016-04-21T09:07:23.503-04:00The Master AlgorithmWe see so few popular science books on computer science, particularly outside of crypto and theory. Pedro Domingos' <a href="http://www.amazon.com/Master-Algorithm-Ultimate-Learning-Machine/dp/0465065708/ref=as_li_ss_tl?s=books&ie=UTF8&qid=1461241628&sr=1-1&keywords=master+algorithm&linkCode=ll1&tag=computation09-20&linkId=895411fe7361c0f6bd783e038d1ac56d">The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake the World</a>, despite the hyped title and prologue, does a nice job giving the landscape of machine learning algorithms and putting them in a common text from their philosophical underpinnings to the models that they build on, all in a mostly non-technical way. I love the diagram he creates:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-0dPkMd0BD44/VxjI2d9fyWI/AAAAAAABWN0/wSGS-ShV29YVI984nPMhYtHY6mkQZ4zKgCLcB/s1600/Five%252BTribes%252Bof%252BML%252BAccording%252Bto%252BDomnigos.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://2.bp.blogspot.com/-0dPkMd0BD44/VxjI2d9fyWI/AAAAAAABWN0/wSGS-ShV29YVI984nPMhYtHY6mkQZ4zKgCLcB/s320/Five%252BTribes%252Bof%252BML%252BAccording%252Bto%252BDomnigos.png" width="319" /></a></div>
Working out from the inner ring are the representations of the models, how we measure goodness, the main tool to optimize the model and the philosophies that drove that model. The book hits on other major ML topics including unsupervised and reinforcement learning.<br />
<br />
In the bullseye you can see the "Master Equation" or the Master Algorithm: one learning algorithm to rule them all. The quest for such an algorithm drives the book, and Domingos describes his own admittedly limited attempts toward reaching that goal.<br />
<br />
I diverge from Domingos in whether we can truly have a single Master Algorithm. What model captures all of the inner-ring models above? Circuits. A Master Algorithm would find a minimum-sized circuit relative to some measure of goodness. You can do that if P = NP, and while we don't think circuit minimization is NP-hard, it would break cryptography and factor numbers. One of Domingos' arguments states "If we invent an algorithm that can learn to solve satisfiability, it would have a good claim to being the Master Algorithm". Good luck with that.http://blog.computationalcomplexity.org/2016/04/the-master-algorithm.htmlnoreply@blogger.com (Lance Fortnow)5tag:blogger.com,1999:blog-3722233.post-4769870258477367307Mon, 18 Apr 2016 14:37:00 +00002016-04-18T10:37:18.216-04:00It's hard to tell if a problem is hard. Is this one hard?Here is a problem I heard about at the Gathering for Gardner. Is it hard? easy? boring? interesting? I don't know.<br />
<br />
Let N={1,2,3,...}<br />
<br />
PROBLEM: the parameters are s (the start point) and f (not sure why to call it f). Both are in N.<br />
<br />
Keep in mind the following sequence of operations, in this order:<br />
<br />DIVIDE BY f, SUBTRACT f, ADD f, MULTIPLY by f.<br />
<br />
Form the following sequence of numbers in N:<br />
<br />
a(0)= s<br />
<br />
Assume a(0),...,a(n) are known. Let A = {a(0),...,a(n)}. N-A are the elements in N that are NOT in A.<br />
<br />
If a(n)/f is in N-A then a(n+1)=a(n)/f<br />
<br />
Else<br />
<br />If a(n)-f is in N-A then a(n+1)=a(n)-f<br />
<br />
Else<br />
<br />
If a(n)+f is in N-A then a(n+1)=a(n)+f<br />
<br />
Else<br />
<br />
If a(n)*f is in N-A then a(n+1) = a(n)*f<br />
<br />
Else<br />
<br />
If none of the above holds then the sequence terminates.<br />
<br />
Let's do an example! Let s=14 and f=2.<br />
<br />
14, 7, 5, 3, 1, 2, 4, 6, 8, 10, 12, 24, 22, 11, 9, 18, 16, 32, 30, 15, 13, 26, 28, 56, 54, 27, 25, 23, 21,<br />
<br />
19, 17, 34, 36, 38, 40, 20, STOP since 10, 18, 22, 40 are all on the list.<br />
<br />
Let's do another example! Let s=7 and f=2.<br />
<br />
7, 5, 3, 1, 2, 4, 6, 8, 10, 12, 14, 16, 18, 9, 11, 13, 15, 17, ... (keeps going)<br />
<br />
If f=2 and you get to an odd number x so that ALL of the odds less than x have already appeared but NONE of the odd numbers larger than x have appeared, then the sequence will go forever<br />
with x, x+2, x+4, ...<br />
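The procedure is straightforward to simulate. Here is a minimal Python sketch (the step cap is my own addition, since the sequence may run forever); it reproduces both examples above:

```python
def run_sequence(s, f, max_steps=1000):
    """Generate the sequence; return (terms, True) if it terminates,
    or (terms, False) if it is still going after max_steps."""
    seen = {s}
    terms = [s]
    for _ in range(max_steps):
        a = terms[-1]
        # try the operations in order: divide, subtract, add, multiply
        candidates = (a // f if a % f == 0 else 0, a - f, a + f, a * f)
        for c in candidates:
            if c >= 1 and c not in seen:   # c must lie in N - A
                seen.add(c)
                terms.append(c)
                break
        else:
            return terms, True    # all four moves blocked: sequence terminates
    return terms, False

terms, stopped = run_sequence(14, 2)
print(stopped, terms[-1], len(terms))   # True 20 36 -- matches the first example
print(run_sequence(7, 2)[1])            # False -- the second example runs forever
```

Tabulating which (s,f) pairs stop within the cap at least gives experimental data for question (1) below.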
<br />
QUESTIONS and META QUESTIONS<br />
<br />
1) Can one characterize the (s,f) for which the sequence stops?<br />
<br />
2) Is it decidable, given (s,f), whether the sequence stops?<br />
<br />
3) Both (1) and (2) for either fixed s or fixed f.<br />
<br />
4) Are the above questions easy?<br />
<br />
5) Are the above questions interesting? <br />
<br />
There are four categories:<br />
<br />
Easy and interesting. Hmmm, if it's TOO easy (which I doubt) then I suppose it can't be interesting.<br />
<br />
Easy and boring.<br />
<br />
Hard and interesting. This means that some progress can be made and perhaps connections to other mathematics.<br />
<br />
Hard and boring. We can't solve it and are not enlightened for the effort.<br />
<br />
<br />http://blog.computationalcomplexity.org/2016/04/its-hard-to-tell-if-problem-is-hard-is.htmlnoreply@blogger.com (GASARCH)8tag:blogger.com,1999:blog-3722233.post-4948910746694899353Thu, 14 Apr 2016 11:37:00 +00002016-04-14T07:37:29.735-04:00Who Controls Machine Learning?After AlphaGo's victory, the New York Times ran an article <a href="http://www.nytimes.com/2016/03/26/technology/the-race-is-on-to-control-artificial-intelligence-and-techs-future.html">The Race Is On to Control Artificial Intelligence, and Tech’s Future</a>.<br />
<blockquote class="tr_bq">
A platform, in technology, is essentially a piece of software that other companies build on and that consumers cannot do without. Become the platform and huge profits will follow. Microsoft dominated personal computers because its Windows software became the center of the consumer software world. Google has come to dominate the Internet through its ubiquitous search bar. If true believers in A.I. are correct that this long-promised technology is ready for the mainstream, the company that controls A.I. could steer the tech industry for years to come.</blockquote>
I then <a href="https://twitter.com/fortnow/status/714513427872538625">tweeted</a> "Can a company control AI? More likely to become a commodity." The major machine learning algorithms are public knowledge and one can find a number of open-source implementations including Google's own <a href="https://www.tensorflow.org/">TensorFlow</a> that powered AlphaGo. What's to stop a start-up from implementing their own machine learning tools on the cloud?<br />
<br />
Some of my readers' comments forced me to rethink my hasty tweet. First, Google, Microsoft and Amazon can create ML infrastructure, cloud hardware that optimizes computational power and storage for machine learning algorithms to get a level of data analysis that one couldn't replicate in software alone.<br />
<br />
More importantly, Google etc. have access to huge amounts of data. Cloud companies can provide pretrained machine learning algorithms. Google <a href="https://cloud.google.com/products/machine-learning/">provides</a> image classification, voice transcription and translation. Microsoft <a href="https://azure.microsoft.com/en-us/services/cognitive-services/">offers</a> face and emotion detection and speech and text analysis. One could imagine, in the absence of privacy issues, Google taking your customer data, matching it with the data Google already has on the same customers, and drawing new inferences about how to market to those customers better.<br />
<br />
With almost all our computing heading to the cloud, cloud computing providers will continue to compete and provide continually better tools in machine learning and beyond. Will one company eventually "control AI"? That would surprise me, but we may still end up with an AI oligarchy. http://blog.computationalcomplexity.org/2016/04/who-controls-machine-learning.htmlnoreply@blogger.com (Lance Fortnow)5tag:blogger.com,1999:blog-3722233.post-7776462762560228708Mon, 11 Apr 2016 01:11:00 +00002016-04-10T21:11:49.775-04:00What Rock Band Name would you choose?<br />
I looked up my colleague Dave Mount on Wikipedia and found that he was a drummer for the glam rock band <a href="https://en.wikipedia.org/wiki/Mud_(band)">Mud</a>. He informed me that (a) on Wikipedia he is <a href="https://en.wikipedia.org/wiki/David_Mount">David Mount</a> and (b) if he had a rock band it would be named <i>Fried Purple Ellipsoids.</i><br />
<i><br /></i>
This set off an email discussion where people said what their rock band name would be. I noticed that many ideas for names had variants. For example, my favorite for Ramsey theorists, <i>The Red Cliques</i>, could be<br />
<br />
<i>The Red Cliques</i><br />
<br />
<i>Red Clique</i><br />
<br />
<i>Bill Gasarch and the Red Cliques!</i><br />
<br />
<i>Clique!</i><br />
<br />
So below I list one variant of each name but keep in mind that there are others.<br />
<br />
<i>The Hidden Subgroups</i><br />
<br />
<i>Amplitudes with Attitude </i><br />
<i><br /></i>
<i> Schrodinger's cat (I wonder if this IS a rock band already)</i><br />
<i><br /></i> <i>The Red Cliques</i><br />
<i><br /></i>
<i> Fried purple ellipsoids</i><br />
<i><br /></i>
<i> Fried green ellipsoids</i><br />
<i><br /></i>
<i>BIG A-G-T</i><br />
<br />
<i>The Biconnected Sets</i><br />
<i><br /></i>
<i>PRAM!</i><br />
<i><br /></i>
<i>BPP! (I wonder if any complexity class would work.)</i><br />
<i><br /></i>
<i>SAT (I wonder if other one-word problems would work. TSP!)</i><br />
<i><br /></i>
<i>Karp and the reductions</i><br />
<br />
<i>Avi and the derandomizers </i><br />
<br />
<i>Aravind and the Expanders</i><br />
<i><br /></i>
<i>(Could replace Karp, Avi, and Aravind with others, but these are the first that</i><br />
<i>came to mind. Plus THE EXPANDERS was Aravind Srinivasan's idea.)</i><br />
<br />
<i>The MIT Logarhythms</i> (This is a real a cappella group, <a href="https://mitlogs.com/">see here</a>.)<br />
<br />
<i>The Discrete Logarhythms</i><br />
<br />
<i>RSA!</i><br />
<i><br /></i>
<i>The Oracles</i><br />
<i><br /></i>
<i>The Interactive Proofs</i><br />
<i><br /></i>
<i>The Natural Proofs</i><br />
<i><br /></i>
<i>Fried Green Proofs</i><br />
<i><br /></i>
If we expand to include math we get lots more, so I'll just mention one real one: <a href="https://www.casa.org/groups/klein-four-group">The Klein Four</a>, an a cappella group.<br />
<br />
SO- what would YOUR rock band name be?<br />
<br />
<br />
<br />
<br />http://blog.computationalcomplexity.org/2016/04/what-rock-band-name-would-you-choose.htmlnoreply@blogger.com (GASARCH)3