tag:blogger.com,1999:blog-3722233Sat, 30 Apr 2016 10:01:27 +0000typecastfocs metacommentsComputational ComplexityComputational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarchhttp://blog.computationalcomplexity.org/noreply@blogger.com (Lance Fortnow)Blogger2375125tag:blogger.com,1999:blog-3722233.post-7636396135154304804Thu, 28 Apr 2016 18:06:00 +00002016-04-28T14:06:43.443-04:00Claude Shannon (1916-2001)<div class="separator" style="clear: both; text-align: center;">
<a href="https://upload.wikimedia.org/wikipedia/en/thumb/2/2f/Claude_Elwood_Shannon_(1916-2001).jpg/200px-Claude_Elwood_Shannon_(1916-2001).jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="https://upload.wikimedia.org/wikipedia/en/thumb/2/2f/Claude_Elwood_Shannon_(1916-2001).jpg/200px-Claude_Elwood_Shannon_(1916-2001).jpg" width="141" /></a></div>
Claude Shannon was born one hundred years ago Saturday. Shannon had an incredible career, but we know him best for his 1948 paper <a href="http://dx.doi.org/10.1145/584091.584093">A Mathematical Theory of Communication</a>, which introduced entropy and information theory to the world. Something I didn't know until looking him up: Shannon was the first to define information-theoretic security and to show that the one-time pad is essentially the only cipher that achieves it.<br />
<br />
Entropy has a <a href="https://en.wikipedia.org/wiki/Entropy_(information_theory)#Definition">formal definition</a>, the minimum expected number of bits to represent the output of a distribution. But I view information as a more abstract concept of which entropy is just one instantiation. When you think of concepts like conditional information, mutual information and symmetry of information, the idea of an underlying distribution tends to fade away and you begin to think of information itself as an entity worth studying. And when you look at Kolmogorov complexity, often called algorithmic information theory, the measure is over strings, not distributions, yet it has many of the same concepts and relationships as the entropy setting.<br />
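The formal definition is short enough to compute directly; here is a minimal sketch (the function name is mine):

```python
import math

def entropy(dist):
    """Shannon entropy, in bits, of a discrete distribution
    given as a list of probabilities."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# A fair coin carries exactly one bit; a deterministic outcome carries none.
print(entropy([0.5, 0.5]))   # 1.0
print(entropy([1.0]))        # 0.0
print(entropy([0.25] * 4))   # 2.0
```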
<br />
Computational Complexity owes much to Shannon's information. We can use information theory to get lower bounds on communication protocols and circuits, and even upper bounds on algorithms. Last spring the Simons Institute for the Theory of Computing had a semester program on <a href="https://simons.berkeley.edu/programs/inftheory2015">Information Theory</a> including a workshop on <a href="https://simons.berkeley.edu/workshops/inftheory2015-3">Information Theory in Complexity Theory and Combinatorics</a>. Beyond theory, relative entropy, or <a href="https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence">Kullback–Leibler divergence</a>, plays an important role in measuring the effectiveness of machine learning algorithms.<br />
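Relative entropy has an equally short formula; a quick sketch (the function name is mine), defined when q gives positive probability wherever p does:

```python
import math

def kl_divergence(p, q):
    """Relative entropy D(p||q), in bits, between two discrete
    distributions given as aligned lists of probabilities."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Zero iff the distributions agree; grows as p diverges from q.
print(kl_divergence([0.5, 0.5], [0.5, 0.5]))   # 0.0
print(kl_divergence([1.0, 0.0], [0.5, 0.5]))   # 1.0
```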
<br />
We live in an age of information, growing dramatically every year. How do we store information, how do we transmit it, how do we learn from it, how do we keep it secure and private? Let's celebrate the centenary of the man who gave us the framework to study these questions and so much more.http://blog.computationalcomplexity.org/2016/04/claude-shannon-1916-2001.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-6069339588006717070Mon, 25 Apr 2016 00:01:00 +00002016-04-25T10:28:40.076-04:00Some short bits from the Gathering for Gardner Conference<br />
I attended G4G12 (Gathering for Gardner), a conference that meets every two years (though the gap between the first and second was three years) to celebrate the work of Martin Gardner. Most of the talks were on recreational mathematics, but there were also some on magic and some that are hard to classify.<br />
<br />
Martin Gardner had a column in Scientific American called Mathematical Games from 1956 to 1981. His column inspired many people to go into mathematics. Or perhaps people who liked math read his column. The first theorem I ever read outside of a classroom was in his column. It was, in our terminology, that a connected graph is Eulerian iff every vertex has even degree.<br />
<br />
For a joint review of six G4G proceedings see <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/gardnerg.pdf">here</a>. For a joint review of six books on recreational math including three of Gardner's, see <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/gardneraha.pdf">here</a>. For a review of a book that has serious math based on the math he presented in his column see <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/gardner21.pdf">here</a>.<br />
<br />
The talks at G4G are usually 6 minutes long, so you can learn about a nice problem and then work on it yourself. There was a large variety of talks and topics. Many of the talks do not have an accompanying paper. Many of them are not on original material. But none of this matters--- the talks were largely interesting and told me stuff I didn't know.<br />
<br />
64=64 and Fibonacci, as Studied by Lewis Carroll, by Stuart Moshowitz. This was about a Lewis Carroll puzzle where he put together shapes one way to get a rectangle of area 65, and another way to get a square of area 64. The following link is NOT to his talk or a paper of Moshowitz, but it is about the problem: <a href="https://mathlesstraveled.com/2011/05/02/an-area-paradox/">here</a><br />
<br />
How Math can Save your life by <a href="https://en.wikipedia.org/wiki/Susan_Marie_Frontczak">Susan Marie Frontczak</a>. This was part talk about bricks and weights; then she stood on the desk and sang <a href="https://www.youtube.com/watch?v=VuRgiZVMUO0">this song</a> (that's not her singing it).<br />
<br />
Twelve ways to trisect an angle by David Richeson. This was NOT a talk about cranks who thought they had trisected an angle with straightedge and compass. It was about people who used straightedge, compass, and JUST ONE MORE THING. I asked David later if the people who trisected the angle before it was shown impossible had a research plan to remove the ONE MORE THING and get a real trisection. He said no: people pretty much knew it was impossible even before the proof.<br />
<br />
The Sleeping Beauty Paradox Resolved by Pradeep Mutalik. This paradox would take an entire blog post to explain, so here is a pointer to the Wikipedia entry on it: <a href="https://en.wikipedia.org/wiki/Sleeping_Beauty_problem">here</a>. AH, this one DOES have a paper associated with it, so you can read his resolution <a href="https://www.quantamagazine.org/20160129-solution-sleeping-beautys-dilemma/">here</a><br />
<br />
Larger Golomb Rulers by Tomas Rokicki. A <a href="https://en.wikipedia.org/wiki/Golomb_ruler">Golomb Ruler</a> is a ruler with marks on it so that all of the distances between marks are distinct. The number of marks is called the order of the ruler. Constructing a Golomb ruler is easy (e.g., marks at the 1,2,4,8,... positions I think works). The real question is to get one of shortest length. They had some new results but, alas, I can't find them on the web.<br />
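Checking the Golomb property is a one-liner, which makes the powers-of-two claim easy to confirm (a sketch; the helper name is mine):

```python
from itertools import combinations

def is_golomb(marks):
    """True iff all pairwise distances between marks are distinct."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

print(is_golomb([1, 2, 4, 8, 16]))  # True: distances 2^j - 2^i are all distinct
print(is_golomb([0, 1, 4, 6]))      # True: an optimal order-4 ruler, length 6
print(is_golomb([0, 1, 2]))         # False: the distance 1 appears twice
```

The powers-of-two ruler of order k has length about 2^k, while the shortest rulers are far shorter, which is what makes the shortest-length question hard.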
<br />
Chemical Pi by John Conway. There are people who memorize the first x digits of pi. John Conway does something else. He has memorized the digits of pi and the chemical elements in the following way:<br />
<br />
HYDROGEN 3.141592653 HELIUM next 10 digits of pi LITHIUM etc<br />
<br />
that is, he memorized the digits of pi in groups of 10, separated by the chemical elements in the order they appear on the periodic table. He claims this makes it easier to answer questions like: What is the 87th digit of pi? He also claims it gives a natural stopping point for how many digits of pi you need to memorize (need? maybe want). (ADDED LATER WHEN I CORRECTED HELIUM TO HYDROGEN: here are some mnemonic devices for the periodic table: <a href="https://www.mnemonic-device.com/chemistry/periodic-table/periodic-table-of-elements/">here</a>.)<br />
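Conway's lookup scheme is easy to simulate; here is a sketch using just the first 40 digits of pi and the first four elements (the function name is mine; Conway of course goes much further):

```python
# First 40 digits of pi (including the leading 3), in groups of 10,
# with group i labeled by element i of the periodic table.
PI_DIGITS = "3141592653" "5897932384" "6264338327" "9502884197"
ELEMENTS = ["Hydrogen", "Helium", "Lithium", "Beryllium"]

def nth_digit(n):
    """For the 1-based n-th digit of pi, return (element, digit):
    recall the element whose group contains position n, then count within it."""
    return ELEMENTS[(n - 1) // 10], PI_DIGITS[n - 1]

print(nth_digit(1))    # ('Hydrogen', '3')
print(nth_digit(15))   # ('Helium', '9')
```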
<br />
This post is getting long so I may report on more of the talks in a later post.<br />
<br />
<br />
<br />
<br />
<br />
<br />http://blog.computationalcomplexity.org/2016/04/some-short-bits-from-gathering-for.htmlnoreply@blogger.com (GASARCH)4tag:blogger.com,1999:blog-3722233.post-6594221842661710665Thu, 21 Apr 2016 13:07:00 +00002016-04-21T09:07:23.503-04:00The Master AlgorithmWe see so few popular science books on computer science, particularly outside of crypto and theory. Pedro Domingos' <a href="http://www.amazon.com/Master-Algorithm-Ultimate-Learning-Machine/dp/0465065708/ref=as_li_ss_tl?s=books&ie=UTF8&qid=1461241628&sr=1-1&keywords=master+algorithm&linkCode=ll1&tag=computation09-20&linkId=895411fe7361c0f6bd783e038d1ac56d">The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake the World</a>, despite the hyped title and prologue, does a nice job giving the landscape of machine learning algorithms and putting them in a common context, from their philosophical underpinnings to the models that they build on, all in a mostly non-technical way. I love the diagram he creates:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-0dPkMd0BD44/VxjI2d9fyWI/AAAAAAABWN0/wSGS-ShV29YVI984nPMhYtHY6mkQZ4zKgCLcB/s1600/Five%252BTribes%252Bof%252BML%252BAccording%252Bto%252BDomnigos.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://2.bp.blogspot.com/-0dPkMd0BD44/VxjI2d9fyWI/AAAAAAABWN0/wSGS-ShV29YVI984nPMhYtHY6mkQZ4zKgCLcB/s320/Five%252BTribes%252Bof%252BML%252BAccording%252Bto%252BDomnigos.png" width="319" /></a></div>
Working out from the inner ring are the representations of the models, how we measure goodness, the main tool used to optimize the model, and the philosophies that drove that model. The book hits on other major ML topics, including unsupervised and reinforcement learning.<br />
<br />
In the bullseye you can see the "Master Equation" or the Master Algorithm, one learning algorithm to rule them all. The quest for such an algorithm drives the book, and Domingos describes his own, admittedly limited, attempts toward reaching that goal.<br />
<br />
I diverge from Domingos in whether we can truly have a single Master Algorithm. What model captures all the inner-ring models above: circuits. A Master Algorithm would find a minimum-sized circuit relative to some measure of goodness. You could do that if P = NP, and while we don't think circuit minimization is NP-hard, an algorithm for it would still break cryptography and factor numbers. One of Domingos' arguments states "If we invent an algorithm that can learn to solve satisfiability, it would have a good claim to being the Master Algorithm". Good luck with that.http://blog.computationalcomplexity.org/2016/04/the-master-algorithm.htmlnoreply@blogger.com (Lance Fortnow)4tag:blogger.com,1999:blog-3722233.post-4769870258477367307Mon, 18 Apr 2016 14:37:00 +00002016-04-18T10:37:18.216-04:00It's hard to tell if a problem is hard. Is this one hard?Here is a problem I heard about at the Gathering for Gardner. Is it hard? easy? boring? interesting? I don't know.<br />
<br />
Let N={1,2,3,...}<br />
<br />
PROBLEM: The parameters are s (the start point) and f (I'm not sure why it's called f). Both are in N.<br />
<br />
Keep in mind the following sequence of operations, in order:<br />
<br />DIVIDE BY f, SUBTRACT f, ADD f, MULTIPLY by f.<br />
<br />
Form the following sequence of numbers in N:<br />
<br />
a(0)= s<br />
<br />
Assume a(0),...,a(n) are known. Let A = {a(0),...,a(n)}. N-A are the elements in N that are NOT in A.<br />
<br />
If a(n)/f is in N-A then a(n+1)=a(n)/f<br />
<br />
Else<br />
<br />If a(n)-f is in N-A then a(n+1)=a(n)-f<br />
<br />
Else<br />
<br />
If a(n)+f is in N-A then a(n+1)=a(n)+f<br />
<br />
Else<br />
<br />
If a(n)*f is in N-A then a(n+1) = a(n)*f<br />
<br />
Else<br />
<br />
If none of the above holds then the sequence terminates.<br />
<br />
Let's do an example! Let s=14 and f=2.<br />
<br />
14, 7, 5, 3, 1, 2, 4, 6, 8, 10, 12, 24, 22, 11, 9, 18, 16, 32, 30, 15, 13, 26, 28, 56, 54, 27, 25, 23, 21,<br />
<br />
19, 17, 34, 36, 38, 40, 20, STOP since 20/2=10, 20-2=18, 20+2=22, and 20*2=40 are all already on the list.<br />
<br />
Let's do another example! Let s=7, f=2.<br />
<br />
7, 5, 3, 1, 2, 4, 6, 8, 10, 12, 14, 16, 18, 9, 11, 13, 15, 17, ... (keeps going)<br />
<br />
If f=2 and you get to an odd number x so that ALL of the odds less than x have already appeared but NONE of the odd numbers larger than x have appeared, then the sequence will go forever<br />
with x, x+2, x+4, ...<br />
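The rule above is easy to code up, which makes experimenting with the questions below painless; a sketch (the names are mine):

```python
def gen_sequence(s, f, max_steps=10000):
    """From a(n), try a(n)/f, a(n)-f, a(n)+f, a(n)*f in that order; take the
    first value that is a positive integer not already in the sequence.
    Returns (sequence, terminated)."""
    seq, seen = [s], {s}
    for _ in range(max_steps):
        a = seq[-1]
        candidates = ([a // f] if a % f == 0 else []) + [a - f, a + f, a * f]
        nxt = next((c for c in candidates if c >= 1 and c not in seen), None)
        if nxt is None:
            return seq, True       # none of the four moves applies: STOP
        seq.append(nxt)
        seen.add(nxt)
    return seq, False              # step cap hit; may well run forever

seq, stopped = gen_sequence(14, 2)
print(stopped, len(seq), seq[-1])  # True 36 20, matching the first example
```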
<br />
QUESTIONS and META QUESTIONS<br />
<br />
1) Can one characterize for which (s,f) the sequence stops?<br />
<br />
2) Is it decidable, given (s,f), whether the sequence stops?<br />
<br />
3) Both (1) and (2) for either fixed s or fixed f.<br />
<br />
4) Are the above questions easy?<br />
<br />
5) Are the above questions interesting? <br />
<br />
There are four categories:<br />
<br />
Easy and Interesting- Hmmm, if it's TOO easy (which I doubt) then I suppose it can't be interesting.<br />
<br />
Easy and boring.<br />
<br />
Hard and interesting. This means that some progress can be made and perhaps connections to other mathematics.<br />
<br />
Hard and Boring. We can't solve it and are not enlightened for the effort.<br />
<br />
<br />http://blog.computationalcomplexity.org/2016/04/its-hard-to-tell-if-problem-is-hard-is.htmlnoreply@blogger.com (GASARCH)8tag:blogger.com,1999:blog-3722233.post-4948910746694899353Thu, 14 Apr 2016 11:37:00 +00002016-04-14T07:37:29.735-04:00Who Controls Machine Learning?After AlphaGo's victory, the New York Times ran an article <a href="http://www.nytimes.com/2016/03/26/technology/the-race-is-on-to-control-artificial-intelligence-and-techs-future.html">The Race Is On to Control Artificial Intelligence, and Tech’s Future</a>.<br />
<blockquote class="tr_bq">
A platform, in technology, is essentially a piece of software that other companies build on and that consumers cannot do without. Become the platform and huge profits will follow. Microsoft dominated personal computers because its Windows software became the center of the consumer software world. Google has come to dominate the Internet through its ubiquitous search bar. If true believers in A.I. are correct that this long-promised technology is ready for the mainstream, the company that controls A.I. could steer the tech industry for years to come.</blockquote>
I then <a href="https://twitter.com/fortnow/status/714513427872538625">tweeted</a> "Can a company control AI? More likely to become a commodity." The major machine learning algorithms are public knowledge, and one can find a number of open-source implementations, including Google's own <a href="https://www.tensorflow.org/">TensorFlow</a>, which powered AlphaGo. What's to stop a start-up from implementing its own machine learning tools on the cloud?<br />
<br />
Some of my readers' comments forced me to rethink my hasty tweet. First, Google, Microsoft and Amazon can create ML infrastructure: cloud hardware that optimizes computational power and storage for machine learning algorithms, achieving a level of data analysis that one couldn't replicate in software alone.<br />
<br />
More importantly, Google etc. have access to huge amounts of data. Cloud companies can provide pretrained machine learning algorithms. Google <a href="https://cloud.google.com/products/machine-learning/">provides</a> image classification, voice transcription and translation. Microsoft <a href="https://azure.microsoft.com/en-us/services/cognitive-services/">offers</a> face and emotion detection and speech and text analysis. One could imagine, in the absence of privacy issues, Google taking your customer data and matching it with data that Google already has on the same customers to draw new inferences about how to market to those customers better.<br />
<br />
With almost all our computing heading to the cloud, cloud computing providers will continue to compete and provide continually better tools in machine learning and beyond. Eventually will one company "control AI"? That would surprise me, but we may still end up with an AI oligarchy. http://blog.computationalcomplexity.org/2016/04/who-controls-machine-learning.htmlnoreply@blogger.com (Lance Fortnow)4tag:blogger.com,1999:blog-3722233.post-7776462762560228708Mon, 11 Apr 2016 01:11:00 +00002016-04-10T21:11:49.775-04:00What Rock Band Name would you choose?<br />
I looked up my colleague Dave Mount on Wikipedia and found that he was a drummer for the glam rock band <a href="https://en.wikipedia.org/wiki/Mud_(band)">Mud</a>. He informed me that (a) on Wikipedia he is <a href="https://en.wikipedia.org/wiki/David_Mount">David Mount</a> and (b) if he had a rock band it would be named <i>Fried Purple Ellipsoids.</i><br />
<i><br /></i>
This set off an email discussion where people said what their rock band name would be. I noticed that many ideas for names had variants. For example, my favorite for Ramsey Theorists: <i>The Red Cliques </i>could be<br />
<br />
<i>The Red Cliques</i><br />
<br />
<i>Red Clique</i><br />
<br />
<i>Bill Gasarch and the Red Cliques!</i><br />
<br />
<i>Clique!</i><br />
<br />
So below I list one variant of each name but keep in mind that there are others.<br />
<br />
<i>The Hidden Subgroups</i><br />
<br />
<i>Amplitudes with Attitude </i><br />
<i><br /></i>
<i> Schrodinger's cat (I wonder if this IS a rock band already)</i><br />
<i><br /></i> <i>The Red Cliques</i><br />
<i><br /></i>
<i> Fried purple ellipsoids</i><br />
<i><br /></i>
<i> Fried green ellipsoids</i><br />
<i><br /></i>
<i>BIG A-G-T</i><br />
<br />
<i>The Biconnected Sets</i><br />
<i><br /></i>
<i>PRAM!</i><br />
<i><br /></i>
<i>BPP! (I wonder if any complexity class would work.)</i><br />
<i><br /></i>
<i>SAT (I wonder if other one-word problems would work. TSP!)</i><br />
<i><br /></i>
<i>Karp and the reductions</i><br />
<br />
<i>Avi and the derandomizers </i><br />
<br />
<i>Aravind and the Expanders</i><br />
<i><br /></i>
<i>(Could replace Karp, Avi, and Aravind with others, but these are the first that</i><br />
<i>came to mind. Plus THE EXPANDERS was Aravind Srinivasan's idea.)</i><br />
<br />
<i>The MIT Logarhythms</i> (This is a real a cappella group; <a href="https://mitlogs.com/">see here</a>.)<br />
<br />
<i>The Discrete Logarhythms</i><br />
<br />
<i>RSA!</i><br />
<i><br /></i>
<i>The Oracles</i><br />
<i><br /></i>
<i>The Interactive Proofs</i><br />
<i><br /></i>
<i>The Natural Proofs</i><br />
<i><br /></i>
<i>Fried Green Proofs</i><br />
<i><br /></i>
If we expand to include math we get lots more, so I'll just mention one real one: <a href="https://www.casa.org/groups/klein-four-group">The Klein Four</a>, an a cappella group.<br />
<br />
SO- what would YOUR rock band name be?<br />
<br />
<br />
<br />
<br />http://blog.computationalcomplexity.org/2016/04/what-rock-band-name-would-you-choose.htmlnoreply@blogger.com (GASARCH)3tag:blogger.com,1999:blog-3722233.post-466295802437248693Thu, 07 Apr 2016 11:02:00 +00002016-04-07T07:02:41.394-04:00It's All About the JobsIn the April CACM Moshe Vardi asks <a href="http://cacm.acm.org/magazines/2016/4/200176-are-we-headed-toward-another-global-tech-bust/fulltext">Are We Headed toward Another Global Tech Bust?</a> I agree with some of Vardi’s points, mostly that VC money chasing after unicorns (potential billion-dollar start-ups) will not continue at its heavy pace and we’re already seeing a slowdown. But I disagree with Vardi’s assessment that “we should brace ourselves for another global tech and enrollment bust” in computer science. I suspect we’ll see more of a reality check, but that reality looks extremely strong.<br />
<br />
Vardi claims that “It is the dream of joining a unicorn that probably attracts many students to study computing”. It’s not just the unicorns bringing students to computer science, but essentially a 100% employment rate for CS graduates looking for a job in the field, many receiving six-figure starting salaries. Few, if any, other disciplines can claim full employment after the bachelor’s degree. Industry is desperate to hire computing professionals in machine learning, cloud computing, cybersecurity, mobile computing, automation, robotics and data science, among others. Not just the computing companies but every industry that deals with data, which is pretty much every industry. Unicorns may become rarer but we won’t see a decline in demand for computer science students until we automate ourselves out of a job.<br />
<br />
Take a look at this chart from Ed Lazowska's <a href="http://www.cccblog.org/2016/03/31/where-the-jobs-are-2016-edition/">Where The Jobs Are – 2016 Edition</a>. Those CS jobs won't fill themselves.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-xcTUrjErHOc/VwLZv2YN3lI/AAAAAAABV-k/NVSG00EhOf4SXBCSQ5ptcVBxB12bUoa7w/s1600/Jobs%2Bvs.%2Bdegrees.jpg" imageanchor="1" style="clear: left; float: center; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="335" src="https://1.bp.blogspot.com/-xcTUrjErHOc/VwLZv2YN3lI/AAAAAAABV-k/NVSG00EhOf4SXBCSQ5ptcVBxB12bUoa7w/s640/Jobs%2Bvs.%2Bdegrees.jpg" width="520" /></a></div>
<br />
<div>
<br /></div>
http://blog.computationalcomplexity.org/2016/04/its-all-about-jobs.htmlnoreply@blogger.com (Lance Fortnow)11tag:blogger.com,1999:blog-3722233.post-825143172796067327Tue, 05 Apr 2016 13:30:00 +00002016-04-05T09:30:27.442-04:00Are Perfect Numbers Bigger than Six initial sums of odd cubes (answered)<br />
(NONE of this is my work. In fact some of it is on Wikipedia.)<br />
<br />
In my <a href="http://blog.computationalcomplexity.org/2016/04/are-perfect-numbers-bigger-than-6.html">last blog</a> I noticed that<br />
<br />
28 = 1<sup>3</sup> + 3<sup>3</sup><br />
<br />
496= 1<sup>3</sup> + 3<sup>3</sup> + 5<sup>3</sup> + 7<sup>3</sup><br />
<br />
noting that 28 and 496 are the 2nd and 3rd perfect numbers.<br />
<br />
I asked if 8128, the next perfect number, is also an initial sum of odd cubes. It is!<br />
<br />
8128 = 1<sup>3</sup> + 3<sup>3</sup> + ... + 15<sup>3</sup><br />
<br />
I also asked if there was something interesting going on. The answer is YES, but it's not that interesting.<br />
<br />
All of the math, with proofs, is <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/perfect.pdf">here</a>. I sketch it below.<br />
<br />
Known Theorem 1: n is an even perfect number iff n is of the form (2<sup>p-1</sup>)(2<sup>p</sup>- 1) where 2<sup>p</sup>-1 is prime.<br />
<br />
Known Theorem 2: 1<sup>3</sup> + 3<sup>3</sup> + 5<sup>3</sup> + ... + (2(m-1)+1)<sup>3</sup> = m<sup>2</sup>(2m<sup>2</sup>-1).<br />
<br />
Interesting theorem: if n is an even perfect number larger than 6 and p is the p from Known Theorem 1, then n is the sum of the first 2<sup>(p-1)/2</sup> odd cubes.<br />
<br />
Why this is less interesting: The proof does not use that n is perfect. It holds for any number of the form 2<sup>p-1</sup>(2<sup>p</sup>-1) where p is odd.<br />
<br />
So the theorem has nothing to do with perfect numbers. Oh well.<br />
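Still, the Interesting Theorem is easy to confirm numerically; a quick sketch (the p=13 case gives the next even perfect number, 33550336):

```python
def sum_first_odd_cubes(k):
    # By Known Theorem 2 (with m = k) this equals k^2 (2k^2 - 1).
    return sum((2 * i + 1) ** 3 for i in range(k))

# For odd p with 2^p - 1 prime, n = 2^(p-1)(2^p - 1) is perfect and
# equals the sum of the first 2^((p-1)/2) odd cubes.
for p in [3, 5, 7, 13]:
    n = 2 ** (p - 1) * (2 ** p - 1)
    k = 2 ** ((p - 1) // 2)
    print(p, n, sum_first_odd_cubes(k) == n == k * k * (2 * k * k - 1))
```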
<br />
<br />
<br />
<br />http://blog.computationalcomplexity.org/2016/04/are-perfect-numbers-bigger-than-six.htmlnoreply@blogger.com (GASARCH)0tag:blogger.com,1999:blog-3722233.post-8635354215110811406Mon, 04 Apr 2016 15:52:00 +00002016-04-04T13:29:46.063-04:00Are perfect numbers bigger than 6 initial sums of odd cubes?<br />
I pose two questions today (Monday April 4).<br />
<br />
I will post the answers tomorrow (Tuesday April 5).<br />
<br />
Feel free to comment about the answers. If you don't want clues, don't look at the comments.<br />
If I need to clarify something I will do it in the main post. So, to reiterate: feel free to leave spoilers, but if you want to avoid reading them, don't read the comments.<br />
<br />
Note:<br />
<br />
The first four perfect numbers are 6, 28, 496, 8128<br />
<br />
28 = 1<sup>3</sup> + 3<sup>3</sup><br />
<br />
496 = 1<sup>3</sup> + 3<sup>3</sup> + 5<sup>3</sup> + 7<sup>3</sup><br />
<br />
Is 8128 the sum of the first six odd cubes? No, and that is not one of my questions.<br />
<br />
Questions:<br />
<br />
1) Is there a k such that 8128 is the sum of the first k odd cubes?<br />
<br />
2) Is there something interesting going on here?http://blog.computationalcomplexity.org/2016/04/are-perfect-numbers-bigger-than-6.htmlnoreply@blogger.com (GASARCH)7tag:blogger.com,1999:blog-3722233.post-2407499193923243828Fri, 01 Apr 2016 10:50:00 +00002016-04-01T06:53:05.051-04:00The Machine Learning TurkGoogle's AlphaGo took the world by storm when it won its match with Lee Sedol but Demis Hassabis now acknowledges the dark truth. Google wanted to promote its cloud computing division as Amazon AWS and Microsoft Azure have quite the head start. Google needed a killer app that would bring users to Google Cloud and decided they could win if they had the best machine learning tools. They bought Deepmind, run by Hassabis, and needed a showcase event and decided to focus on Go, a game yet to be conquered by computers. Hassabis and his team used clever machine learning techniques on top of Monte Carlo Tree Search but only made mild improvements to the game. Google was growing desperate so a plan was hatched.<br />
<br />
Using a modern version of the Mechanical Turk, an 18th-century chess-playing automaton that secretly hid a human inside playing the game, Hassabis enlisted Japanese Go player Yuta Iyama to secretly choose the moves for AlphaGo. Iyama, who worked with Google when they agreed to remove Iyama's embarrassing karaoke videos from YouTube, didn't have to physically be in the machine but relayed the moves by a method Hassabis wouldn't reveal. AlphaGo, secretly getting its moves from Iyama, easily dispatched the European champion in October.<br />
<br />
Hassabis and his team wrote up their failed algorithms and found it shockingly easy to fool the Nature editors and reviewers. Yann LeCun of Facebook looked at the Google team's <a href="http://www.nature.com/nature/journal/v529/n7587/full/nature16961.html">Nature paper</a> and didn't see much different from what Facebook had tried. "I just figured Google had chosen better parameters to make their program successful. At the time I should have realized what Google was up to."<br />
<br />
Google took a risk challenging Lee Sedol but Sedol, not realizing he was really facing Iyama, played the wrong style of game and lost the match four games to one.<br />
<br />
Will this revelation hurt the future of AI? "Machine learning continues to change society, but when it comes to Go," said LeCun, "Alpha fools".http://blog.computationalcomplexity.org/2016/04/the-machine-learning-turk.htmlnoreply@blogger.com (Lance Fortnow)10tag:blogger.com,1999:blog-3722233.post-3683747460014125843Mon, 28 Mar 2016 11:45:00 +00002016-03-28T12:48:46.471-04:00MohammadTaghi HajiAghayi on David Johnson <div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="font-family: inherit;"><span style="color: #222222; ">More than a week ago,
I heard the very sad news that David Johnson had passed away after a year-long fight with cancer. I felt that I should write a memorial note for him. Indeed I
have done the same for </span><a href="http://blog.computationalcomplexity.org/2012/06/mihai-patrascu-1982-2012.html"><span style="">Mihai Pătraşcu</span></a><span style="color: #222222; "> in the same blog and both Mihai and David were very similar to
me in several respects: both were my colleagues at AT&T and, more importantly, my dear friends; both got their Ph.D.s from MIT (where I got my Ph.D. as well); both were extraordinary researchers; and
both passed away due to cancer after almost a year-long fight with it (and I
was closely aware of their situations in that year). Indeed David read my memo
for Mihai and he told me that he liked it. In addition, there is another reason that I felt respect for David: he was just a bit older than my father, who also passed away very recently. So here I would like to put my
thoughts into words for David (and this took me more time in this case
since I wanted to mention some new thoughts given the </span><a href="http://blog.computationalcomplexity.org/2016/03/david-johnson-1945-2016.html"><span style="">comments</span></a><span style="color: #222222; "> already
in this blog). To do so, I would like to mention some of David’s personal characteristics
that I appreciated a lot and give some examples of them from my interactions
with him. Indeed I have even mentioned some of these to him when he was alive
and told him that because of these (and other reasons) I am always proud to mention that I had him as my boss at some point in my career.</span></span></div>
<div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="font-family: inherit;"><br /></span></div>
<div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="font-family: inherit;"><span style="color: #222222; ">First of all, David
was very humble and modest, especially given his extraordinary </span><a href="http://davidsjohnson.net/vita14.pdf"><span style="">CV</span></a><span style="color: #222222; ">: he won several awards, notably the Knuth Prize, he co-authored one of the most-cited books in CS, he was a fellow of almost every community that he was involved with (e.g., ACM, SIAM,
AT&T), he was a member and the chair of several prestigious award
committees (like Gödel, Knuth, ACM Kanellakis,</span> <span style="color: #222222; ">ACM Thesis Award) and indeed he was a founder of some of them (e.g.,
Kanellakis), and he was the founder of SODA, the best algorithms conference,
among others. Despite all this he was a very humble and modest man and I think
lots of people who interacted with him will fully agree on this. Just to give
an example, in 1998, while I was still a second-year undergrad at Sharif
University, I sent him an email asking whether he was aware of any book similar
to Garey & Johnson but for parallel computing (indeed this was my first remote
interaction with him); I was shocked by how fast he answered my email, in just a couple of hours, with a relevant reference. This was especially exciting and encouraging for me, since several other people never answered my emails at that time. More interestingly, later in 2012, I told him personally that I admired him for answering that email. He told me to wait a second, and in a couple of minutes he found the exact same email from 1998 that I had sent him; we even discussed some English improvements for the email text as well.<o:p></o:p></span></span></div>
<div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="font-family: inherit;"><br /></span></div>
<div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="color: #222222; font-family: inherit; ">Second, he was a perfectionist in several respects. Here are some examples. He was often the
reviewer for P=NP or P!=NP papers for several journals. Probably most of us do not even look at these papers unless they are written by someone well known; however, he read these papers very carefully to find the exact bugs and point them out to the authors. Indeed, even when I sent him referee requests for
conferences for which I served as a PC member, he always spent a lot of time reading the paper very carefully and often came up with novel improvements and simplifications, sometimes to such an extent that the authors of the paper under review wanted to have this anonymous referee as a co-author. All this happened despite his being a very busy man; he still took the task of refereeing a paper very seriously and respected the authors (and I think this is an example lots of us can learn from). He was a very good writer as well and spent
a lot of time to improve the presentation of a paper, simplify it, and present
it in a perfect way. I am proud to have one paper coauthored with David, a very
long paper with several co-authors. On this paper David had the lead and indeed
spent all the years that I was with AT&T (and even after than) to prepare
the journal version of the paper. Indeed he was sending us the almost final
version on Dec 2014 (and asked us for comments) just a month before he was
diagnosed with cancer (I hope that still we can send the paper to a journal
given the time that David spent on it). Another example of his perfectionism: he
attended ALL SODA while he was alive and almost ALL STOC and FOCS (expect 1-2
years that AT&T had travel restrictions). Not only that, anytime that there
was any talk in the conference, he attended at least one session. Yet another
example: we had group lunches every day at AT&T. That was David’s habit to ask everyone in the
group to see whether they want to join. Now the interesting point was that he
came exactly at noon EVERY DAY and you could even set your watch for 12pm when you
saw him for lunch.<o:p></o:p></span></div>
<div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="font-family: inherit;"><br /></span></div>
<div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="color: #222222; font-family: inherit; ">He was the founder of SODA,
the best algorithms conference. Indeed, many of us know David because he
founded SODA and ran the SODA business meetings for many years as
chair of the steering committee. As a result, I often had long
discussions with him regarding SODA and its future. We discussed what the
protocol for selecting the chair of SODA should be, whether SODA should have an
official rebuttal phase, etc. During these discussions some interesting
topics came up which are worth discussing in the community as well. David
believed that since SODA (and, in general, the other major TCS conferences) is a main
venue for publications, but we still need full and correct mathematical proofs
for our claims (unlike the rest of CS), we should have a <b>five-year period</b>: any major claim or theorem for which the
authors do not provide full proofs in a verifiable manner, on arXiv or in a
journal, within five years should be considered officially open for
anyone to grab, prove formally, and get the full credit. Another
discussion was that ideally SODA (and again other major TCS conferences) <b>should go double-blind</b>, like many
other major CS conferences in other fields. This would help make the
selection fairer, so that the names of the authors give no advantage or disadvantage
for acceptance (though the PC chair could still see the author lists in some
extreme cases). <o:p></o:p></span></div>
<div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="font-family: inherit;"><br /></span></div>
<div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="color: #222222; font-family: inherit; ">I could probably write
pages and pages of other memories of David’s excellent personal characteristics
(e.g., he was a marathon runner; he held the annual barbecue for
AT&T/Bell-Labs theory interns, researchers, and alumni for more than two
decades; he served in the Army between his
Masters and Ph.D. and kept the same spirit and discipline for the
rest of his life; he always emphasized putting his middle initial “S.” in
his name, especially for airport security, since his name is a very common
one; etc.), but I think I should stop at this point. <o:p></o:p></span></div>
<div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="font-family: inherit;"><br /></span></div>
<div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="color: #222222; font-family: inherit; ">I hope that we have a
great memorial event for him at the next SODA (SODA’17), the conference that he
founded.<o:p></o:p></span></div>
<div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="font-family: inherit;"><br /></span></div>
<div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="color: #222222; font-family: inherit; ">Rest in Peace, David,<o:p></o:p></span></div>
<span style="font-family: inherit;"><br /></span>
<br />
<div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="color: #222222; "><span style="font-family: inherit;">From Mohammad</span><span style="font-family: "arial" , sans-serif; "><o:p></o:p></span></span></div>http://blog.computationalcomplexity.org/2016/03/mohammadtaghi-hajiaghayi-on-david.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-3919768825342275273Thu, 24 Mar 2016 16:54:00 +00002016-03-24T12:54:35.441-04:00Complexity versus ComplexityFor those interested, I've started writing posts for the <a href="http://predictwise.com/blog/">Predictwise Blog</a>. Predictwise makes predictions of future events such as <a href="http://predictwise.com/politics/2016-president-republican-nomination">who will win the Republican Nomination</a> (currently Trump with an 80% probability) based on prediction markets and other betting sites. This has been a fascinating election in terms of predictions, strategies, rules and game theory, and I'm happy to try to make sense of it over at Predictwise without subjecting my readers here at Computational Complexity to too many political posts.<br />
<br />
A reader had asked me to comment on a Slate article <a href="http://www.slate.com/articles/technology/bitwise/2016/01/a_crude_look_at_the_whole_looks_at_complexity_theory_which_wants_to_understand.html">The Theory of Everything and Then Some</a>, a book review of John Miller's <a href="http://amzn.to/1o8RcHV">A Crude Look at the Whole: The Science of Complex Systems in Business, Life, and Society</a>. John Miller is a social scientist who works on the other "complexity theory," which studies how "simple local rules can have complex global implications". Often complex systems work quite well, like the invisible hand of the economy, but sometimes things can go wrong; the article mentions the "flash crash" of May 2010, in which trading programs reacting to each other caused a major drop in stock prices.<br />
<br />
Our fields with similar names are not as different as they might appear. Much of what they study is inherently computational-like processes, and we also look at emergent behavior from the simple operations of a Turing machine: read, write, and move the tape. What they call non-linear we call computation. We do take very different approaches. The computational complexity theory community proves theorems where we can and helps understand the mathematical challenges of when we can't. The other complexity theorists try to explain by examples, simulations, and simplified models.<br />
<br />
The two communities often, but not always, seem to have disdain for one another, and that's a shame. The tools of computational complexity can help understand the power and limitations of complex systems. These collaborations require them to understand how we can help them, and us to be willing to work on problems that may not yield difficult-to-prove theorems. That's what attracts me to prediction markets, a very simple kind of information aggregation system that is still very difficult to analyze as a computational mechanism.<br />
<br />
What's missing from the article is the role tools like machine learning can play in helping predict the outcomes of many complex systems. The big deluge of data that starts off the article may add to the complexity, but almost paradoxically it also makes it possible to learn from it.<br />
<script async="" charset="utf-8" src="//platform.twitter.com/widgets.js"></script>http://blog.computationalcomplexity.org/2016/03/complexity-versus-complexity.htmlnoreply@blogger.com (Lance Fortnow)2tag:blogger.com,1999:blog-3722233.post-6920660603832080783Mon, 21 Mar 2016 04:21:00 +00002016-03-23T01:23:14.482-04:00Hilary Putnam passed away on March 13<br />
Hilary Putnam passed away on March 13, 2016. Some of the obits say he was a philosopher, mathematician, logician, and computer scientist. <br />
<br />
He is probably best known to readers of this blog for his work on Hilbert's tenth problem and on resolution.<br />
<br />
HILBERT TENTH:<br />
<br />
Recall H10, stated in current terminology: find an ALGORITHM that will, given a polynomial p(x1,...,xn) in many variables with integer coefficients, determine whether it has an integer (Diophantine) solution.<br />
<br />
Martin Davis, Hilary Putnam, and Julia Robinson showed in the early 1960s that if you also allow exponentiation then the problem is undecidable. Yuri Matijasevich in 1970 showed how to express exponentiation in terms of polynomials to complete the proof. The solution to Hilbert's tenth problem is often credited to all four of them, which seems right to me.<br />
<br />
One consequence of their proof: for any c.e. set A there is a polynomial p such that<br />
<br />
A = { x | exists x1,...,xn p(x,x1,...,xn)=0}<br />
<br />
Later work got the polynomial down to 13 variables.<br />
<br />
<br />
<br />
RESOLUTION:<br />
<br />
John Robinson (but see comments) and later papers by Davis-Putnam and then Davis-Logemann-Loveland devised resolution theorem proving, an early SAT-solving algorithm. Many modern algorithms are based on it. (Note: an earlier version of this post had mistakes in it. I thank Paul Beame's comments below for clarifying the history.)<br />
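The splitting-plus-unit-propagation idea behind DPLL still fits in a few lines. Here is a minimal sketch (my own toy illustration, not the original Davis-Putnam procedure, and nothing like a modern solver), with clauses written as lists of signed integers in the DIMACS style:

```python
# A DPLL-style SAT solver sketch: simplify under the partial assignment,
# propagate unit clauses, then split on a variable. A clause is a list of
# nonzero ints (positive = variable, negative = its negation).

def dpll(clauses, assignment=None):
    if assignment is None:
        assignment = {}
    simplified = []
    for clause in clauses:
        new_clause = []
        satisfied = False
        for lit in clause:
            val = assignment.get(abs(lit))
            if val is None:
                new_clause.append(lit)
            elif (lit > 0) == val:
                satisfied = True
                break
        if satisfied:
            continue
        if not new_clause:
            return None              # empty clause: conflict
        simplified.append(new_clause)
    if not simplified:
        return assignment            # every clause satisfied
    for clause in simplified:
        if len(clause) == 1:         # unit propagation: forced value
            lit = clause[0]
            return dpll(simplified, {**assignment, abs(lit): lit > 0})
    var = abs(simplified[0][0])      # split on an unassigned variable
    for value in (True, False):
        result = dpll(simplified, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))  # a satisfying assignment
print(dpll([[1], [-1]]))                  # None: unsatisfiable
```

Modern CDCL solvers add clause learning, non-chronological backtracking, and clever heuristics on top of exactly this skeleton.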
<br />
HOW TO CLASSIFY HIM:<br />
<br />
I suspect that Hilary Putnam would have called himself a philosopher, since that was his MOTIVATION. That may be the best way to classify people (if we are inclined to do that): don't look at WHAT they do, look at WHY they do it.<br />
<br />
PHIL OF MATH: one problem with philosophy, even philosophy of math, is that it's hard to have well-defined questions and therefore hard to answer them. I am NOT criticizing the field, just saying why I would have a hard time working in it.<br />
<br />
<br />
<br />
<br />http://blog.computationalcomplexity.org/2016/03/hillary-putnam-passed-away-on-march-13.htmlnoreply@blogger.com (GASARCH)5tag:blogger.com,1999:blog-3722233.post-2766533177751347178Thu, 17 Mar 2016 11:35:00 +00002016-03-17T07:35:43.542-04:00The Value of ShapleyNobel laureate Lloyd Shapley <a href="http://www.nytimes.com/2016/03/15/business/economy/lloyd-s-shapley-92-nobel-laureate-and-a-father-of-game-theory-is-dead.html">passed away</a> Saturday. We best know Shapley for his stable matching algorithm with David Gale. Nicole Immorlica <a href="http://blog.computationalcomplexity.org/2008/03/life-of-david-gale.html">guest posted</a> on stable matching shortly after Gale's passing in 2008.<br />
<br />
I'd like to talk about another great innovation, the <a href="https://en.wikipedia.org/wiki/Shapley_value">Shapley Value</a>, a solution concept for cooperative games. For example, suppose no candidate has a majority of delegates heading into the Republican convention and there is no winner on the first ballot. Now we have many delegates that might group themselves into coalitions, and a union of coalitions that has enough delegates can determine the nominee. Larger coalitions have more power than smaller ones, but even a single-delegate coalition could tip the election. The Shapley value gives weights to the coalitions that measure their relative power, with some nice linearity and symmetry properties. In this scenario, the Shapley value of a coalition is the probability that adding that coalition will tip the election when coalitions are added in a random order.<br />
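The random-order characterization translates directly into a Monte Carlo estimator. A sketch for a hypothetical weighted voting game; the four delegate blocs and the quota of 51 below are made-up numbers for illustration:

```python
import random

# Monte Carlo estimate of Shapley values in a weighted voting game:
# blocs arrive in a random order, and a bloc scores when its arrival
# pushes the running total past the quota (it "tips the election").

def shapley_values(weights, quota, trials=100_000, seed=0):
    rng = random.Random(seed)
    n = len(weights)
    counts = [0] * n
    for _ in range(trials):
        order = list(range(n))
        rng.shuffle(order)
        total = 0
        for i in order:
            total += weights[i]
            if total >= quota:       # this bloc tipped the vote
                counts[i] += 1
                break
    return [c / trials for c in counts]

weights = [40, 30, 20, 10]           # hypothetical delegate blocs
print(shapley_values(weights, quota=51))
```

Note how power is not proportional to weight: with these numbers the 30- and 20-delegate blocs have exactly the same Shapley value (1/4 each), while the 40-delegate bloc gets 5/12 and the 10-delegate bloc only 1/12.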
<br />
Game Theorist Robert Aumann, another Nobel laureate, used the Shapley value to <a href="https://gilkalai.wordpress.com/2009/02/16/which-coalition/">predict winning coalitions in Israeli elections</a>.<br />
<br />
The main challenge of the Shapley value is computational: in general it is <a href="http://dx.doi.org/10.1007/BF01258278">#P-complete</a> to compute, but it can be <a href="http://dx.doi.org/10.1016/j.artint.2008.05.003">approximated efficiently</a>.http://blog.computationalcomplexity.org/2016/03/the-value-of-shapley.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-2030923131164089011Mon, 14 Mar 2016 13:54:00 +00002016-03-14T09:54:08.745-04:00On Phillip Rogaway's The Moral Character of Cryptographic Work.Some people have emailed me asking me to blog about the paper <a href="http://web.cs.ucdavis.edu/~rogaway/papers/moral-fn.pdf">The Moral Character of Cryptographic Work by Phillip Rogaway</a>. I urge you to read it, even if you disagree with it. Especially if you disagree with it. (Hmm, how will you know if you don't read it?)<br />
<br />
There are so many issues raised in this paper that it could be (and might be) the topic of many blog posts. The first three paragraphs are today's topic:<br />
<br />
<i>Preamble. Most academic cryptographers seem to think that our field is a fun,
deep, and politically neutral game—a set of puzzles involving communicating
parties and notional adversaries. This vision of who we are animates a field
whose work is intellectually impressive and rapidly produced, but also quite
inbred and divorced from real-world concerns. Is this what cryptography should
be like? Is it how we should expend the bulk of our intellectual capital? </i><br />
<i><br /></i>
<i>For me, these questions came to a head with the Snowden disclosures of 2013.
If cryptography’s most basic aim is to enable secure communications, how could
it not be a colossal failure of our field when ordinary people lack even a modicum
of communication privacy when interacting electronically? Yet I soon realized
that most cryptographers didn’t see it this way. Most seemed to feel that the
disclosures didn’t even implicate us cryptographers. </i><br />
<i><br /></i>
<i>I think that they do. So I want to talk about the moral obligations of cryptographers,
and my community as a whole. This is not a topic cryptographers
routinely discuss. In this post-Snowden era, I think it needs to be. </i><br />
<br />
My thoughts:<br />
<br />
1) I would add that the Target breach, the Sony hack, and the OPM break-in might also show that crypto has been a failure. He doesn't seem to mention those, but I think they strengthen his case.<br />
<br />
2) Might it be Security that is a colossal failure? Of course, crypto and security go together so it may be hard to disentangle whose failure it is.<br />
<br />
3) Might it be that good crypto research has been done but is not being used, i.e., a tech-transfer problem? He later claims that this would be relevant only if crypto had worked on the right problems in the first place.<br />
<br />
4) I tend to think he's right. Rather than me telling you why I think he's right, just read his paper.<br />
<br />
<br />http://blog.computationalcomplexity.org/2016/03/on-phillip-rogaways-moral-character-of.htmlnoreply@blogger.com (GASARCH)17tag:blogger.com,1999:blog-3722233.post-4317983773889402361Wed, 09 Mar 2016 13:43:00 +00002016-03-09T09:10:15.620-05:00David Johnson (1945-2016)<div class="separator" style="clear: both; text-align: center;">
<a href="https://4.bp.blogspot.com/-96ZgleV45Wo/VuAc39z9zBI/AAAAAAABV28/IS4C-88PP6U/s1600/Johnson_DS%254072dpi_0.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="https://4.bp.blogspot.com/-96ZgleV45Wo/VuAc39z9zBI/AAAAAAABV28/IS4C-88PP6U/s200/Johnson_DS%254072dpi_0.jpg" width="200" /></a></div>
David Johnson, a leader and advocate for algorithms and all of theoretical computer science, passed away yesterday at the age of 70. A truly sad day for us all.<br />
<br />
David's 1979 book with Michael Garey, <a href="http://amzn.to/1LcIJyv">Computers and Intractability: A Guide to the Theory of NP-Completeness</a>, is still the best reference on the topic and perhaps the single most important resource in any computer scientist's library. David Johnson also wrote the NP-completeness column for the Journal of Algorithms and later the ACM Transactions on Algorithms, as well as "A Catalog of Complexity Classes" for the 1990 Handbook of Theoretical Computer Science. David founded the Symposium on Discrete Algorithms (SODA), a conference that is now often mentioned with STOC and FOCS as a top theory venue. He created the DIMACS algorithms challenges. He led SIGACT from 1987-1991, really transforming that organization, and served as its face for many years thereafter. I'm only scratching the surface of what he's done for the community, and can think of no one who put more effort into making theoretical computer science as strong as it is.<br />
<br />
Of course David was a great researcher as well, working on NP-completeness and approximation algorithms.<br />
<br />
He was named an <a href="http://awards.acm.org/award_winners/johnson_1325893.cfm">ACM Fellow</a> in 1995, and received the <a href="http://www.sigact.org/Prizes/Service/1997.html">first SIGACT Distinguished Service Prize</a> in 1997 and the <a href="http://www.acm.org/press-room/news-releases/2010/knuth-prize-09">Knuth Prize</a> in 2010. He used his Knuth Prize lecture to push for practical applications of our algorithms. Just last month he was elected to the National Academy of Engineering.<br />
<br />
I worked with David Johnson closely on various SIGACT activities. David never missed a STOC and we always invited him to the SIGACT Executive Committee dinners, not because he had an official role, but because he was David Johnson. I truly respected and admired David and glad I could call him a friend. We'll miss him deeply. STOC and SODA just won't be the same without him.http://blog.computationalcomplexity.org/2016/03/david-johnson-1945-2016.htmlnoreply@blogger.com (Lance Fortnow)19tag:blogger.com,1999:blog-3722233.post-8048670625440627112Mon, 07 Mar 2016 15:04:00 +00002016-03-07T13:57:59.245-05:00When do we care about the constants?I've been reading two books recently: <a href="http://www.amazon.com/Asymptopia-Student-Mathematical-Library-Spencer/dp/1470409046/ref=sr_1_1?s=books&ie=UTF8&qid=1456408724&sr=1-1&keywords=asymptopia">Asymptopia</a> by Joel Spencer (He turns 70 soon! <a href="http://cims.nyu.edu/conferences/spencer70/index.html">Workshop for it!</a>. My nephew things that celebrating your bday with a workshop would be... odd) and <a href="http://www.amazon.com/Joy-Factoring-Student-Mathematical-Library/dp/1470410486/ref=sr_1_1?s=books&ie=UTF8&qid=1456408803&sr=1-1&keywords=Joy+of+factoring">The Joy of Factoring</a> by Simon Wagtaff. In terms of content they are on two different topics. In terms of practicality they are different: <i>Asymptopia</i> is clearly a pure math book (there is one chapter on algorithms, but the rest is really pure math) whereas <i>The Joy of Factoring </i>is very practical in that it focuses on real algorithms for the important (for crytography) practical problem of factoring. However, there is one thing the books had in common: They both often care about multiplicative constants.<br />
<br />
Example from <i>Asymptopia</i>: Spencer gives better and better lower bounds on the Ramsey numbers R(k):<br />
<br />
(1) R(k) ≥ (1+o(1))(k/(e sqrt(2))) 2<sup>k/2</sup>, roughly (1+o(1))(0.26)k2<sup>k/2</sup><br />
<br />
(2) R(k) ≥ (1+o(1))(k/e) 2<sup>k/2</sup>, roughly (1+o(1))(0.37)k2<sup>k/2</sup><br />
<br />
(3) R(k) ≥ (1+o(1))(sqrt(2)k/e) 2<sup>k/2</sup>, roughly (1+o(1))(0.52)k2<sup>k/2</sup><br />
<br />
(It may be hard to read, so I will clarify: the o(1) is little-o, a term that goes to 0 as k gets large.)<br />
<br />
The first lower bound uses the prob method and you the reader has prob seen it or could prob derive it yourself. Prob. The second lower bound uses prob and a clever way of coloring and then tossing out some vertices. The third lower bound uses the Lovász Local Lemma.<br />
<br />
Note that for this problem Joel Spencer cared about the constant.<br />
<br />
Example from <i>The Joy of Factoring: </i>Since many (but not all!) factoring algorithms do not have rigorously proven run times (Number Theory is Hard!), it's harder to give clean examples here. The book often refers to tricks for getting constants down, and the notion that constants matter permeates the book. Here is one rigorous example of caring about constants:<br />
<br />
Fermat's difference-of-squares algorithm goes as follows: we want to factor N. Let x=floor(sqrt(N)). Test each of the following numbers for being a square and stop when you get one: x<sup>2</sup>-N, (x+1)<sup>2</sup>-N, (x+2)<sup>2</sup> - N, etc. When you find an r such that (x+r)<sup>2</sup>-N=y<sup>2</sup> then you have (x+r-y)(x+r+y)=N. Almost surely this is a nontrivial factorization of N. (This algorithm is worse than the trivial sqrt(N) algorithm in some cases; however, it has some of the ideas needed for more sophisticated algorithms, including the Quadratic Sieve.) Of course, one might be looking for the right r for a long time. How long?<br />
<br />
Let a be the largest divisor of N that is ≤ sqrt(N). Let k=a/sqrt(N). Then the search will take<br />
<br />
1+ (1-k)<sup>2</sup>sqrt(N)/(2k)<br />
<br />
Again note that there are no hidden multiplicative constants.<br />
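The search described above fits in a few lines. A sketch (the function name is mine; it assumes N is odd and composite, since an even N with no odd factorization has no difference-of-squares representation and the loop would not terminate):

```python
import math

# Fermat's difference-of-squares method: scan x^2 - N, (x+1)^2 - N, ...
# for a perfect square y^2; then N = (x - y)(x + y).
# Assumes N is odd and composite.

def fermat_factor(N):
    x = math.isqrt(N)
    if x * x < N:
        x += 1                 # start at ceil(sqrt(N)) so x^2 - N >= 0
    while True:
        y2 = x * x - N
        y = math.isqrt(y2)
        if y * y == y2:        # found a difference of squares
            return x - y, x + y
        x += 1

print(fermat_factor(8051))     # → (83, 97)
```

Note how fast it is when the two factors are close to sqrt(N): for 8051 = 83 · 97 it succeeds on the very first x, matching the 1 + (1-k)<sup>2</sup>sqrt(N)/(2k) bound with k near 1.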
<br />
So when do we care about constants and why?<br />
<br />
1) If you are working on an algorithm for a problem people <i>really </i>want to solve then you need the constants to be small.<br />
<br />
2) If you can get good bounds on the exact constants then you should.<br />
<br />
3) If you have a technique and try it out, you might end up just improving the constant. Even so, you have shown that the technique has merit.<br />
<br />
4) Improving the constant may show progress which will later lead to more important improvements.<br />
<br />
5) Chicken and egg: Here is an example from <i>Asymptopia </i>where he didn't care about the constant: Fix ε. Choose three points at random in the unit square; what is the probability that the triangle they span has area ≤ ε? He showed it is Θ(ε). The proof is very nice, but tracking the constants used in it looks tedious. In order to care about the constants, perhaps we need an interesting proof about them. To look for a proof technique that applies to them, perhaps we need to care in the first place. Chicken and egg?<br />
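The Θ(ε) claim is easy to probe empirically even without knowing the constant. A quick Monte Carlo sketch (the function name and trial counts are mine): halving ε should roughly halve the probability.

```python
import random

# Estimate the probability that three uniformly random points in the
# unit square span a triangle of area at most eps.

def small_triangle_prob(eps, trials=200_000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        ax, ay = rng.random(), rng.random()
        bx, by = rng.random(), rng.random()
        cx, cy = rng.random(), rng.random()
        # triangle area = |cross product of the two edge vectors| / 2
        area = abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2
        if area <= eps:
            hits += 1
    return hits / trials

for eps in (0.01, 0.005, 0.0025):
    print(eps, small_triangle_prob(eps))
```

The printed probabilities scale roughly linearly in ε, consistent with Θ(ε); the experiment says nothing about what the hidden constant is, which is exactly the chicken-and-egg point.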
<br />
<br />
<br />http://blog.computationalcomplexity.org/2016/03/when-do-we-care-about-constants.htmlnoreply@blogger.com (GASARCH)4tag:blogger.com,1999:blog-3722233.post-4604792070179450225Wed, 02 Mar 2016 13:46:00 +00002016-03-02T08:50:27.335-05:00Changing This Ancient Art Into a Science<div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://pbs.twimg.com/media/Cce4irsWEAAzF9-.jpg:large" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="200" src="https://pbs.twimg.com/media/Cce4irsWEAAzF9-.jpg:large" width="114" /></a></div>
The ACM <a href="http://www.acm.org/awards/2015-turing">announced yesterday</a> that they will award the 2015 Turing Award to Whitfield Diffie and Martin Hellman for contributions to modern cryptography. The Turing award is the highest honor in all of computing. John Markoff in the New York Times also has<a href="http://www.nytimes.com/2016/03/02/technology/cryptography-pioneers-to-win-turing-award.html"> the story</a>.</div>
<div>
</div>
<div>
Diffie and Hellman are best known for public-key cryptography, the brilliant idea that one could communicate secretly with someone with whom you haven't communicated previously. Without public-key cryptography there would be no e-commerce. Equally important, Diffie and Hellman brought computational complexity to bear, moving cryptography into its modern age. I strongly recommend reading their 1976 gem <a href="http://dx.doi.org/10.1109/TIT.1976.1055638">New Directions in Cryptography</a> (<a href="http://cs.unc.edu/~fabian/course_papers/diffie.hellman.pdf">PDF</a>), particularly the introduction and chapter 6, where Diffie and Hellman connect cryptography to computational complexity and the P v NP problem itself, defined only five years earlier. Here's the first paragraph:</div>
<div>
<blockquote class="tr_bq">
We stand today on the brink of a revolution in cryptography. The development of cheap digital hardware has freed it from the design limitations of mechanical computing and brought the cost of high grade cryptographic devices down to where they can be used in such commercial applications as remote key cash dispensers and computer terminals. In turn, such applications create a need for new types of cryptographic systems which minimize the necessity of secure key distribution channels and supply the equivalent of a written signature. At the same time, theoretical developments in information theory and computer science show promise of providing provably secure cryptosystems, changing this ancient art into a science. </blockquote>
</div>
<div>
One question for which I shall offer no opinion: Should <a href="http://www.merkle.com/">Ralph Merkle</a> have been a co-recipient of this award? </div>
http://blog.computationalcomplexity.org/2016/03/changing-this-ancient-art-into-science.htmlnoreply@blogger.com (Lance Fortnow)2tag:blogger.com,1999:blog-3722233.post-7841739031138975409Mon, 29 Feb 2016 15:05:00 +00002016-02-29T10:05:45.825-05:00It works in practice, but does it work in theory (Pollard's Factorization algorithm)<br />
Throughout this post I ignore polylog factors.<br />
<br />
It is trivial to factor N in time N<sup>1/2</sup>. Pollard's rho-algorithm (see my write-up <a href="http://www.cs.umd.edu/~gasarch/BLOGPAPERS/pollardfact.pdf">here</a> or the <a href="https://en.wikipedia.org/wiki/Pollard%27s_rho_algorithm">Wikipedia entry</a>) for factoring does better: expected time N<sup>1/4</sup>. Or does it? It works well in practice but has not been proven to work well in theory. (If I missed some paper that DID prove it works well in theory, please leave a polite comment.)<br />
<br />
Here we state a conjecture that, if true, would show that Pollard's algorithm runs in randomized time N<sup>1/4</sup>. Let p be a prime. Let c be in {2,...,p-1}.<br />
<br />
Let f<sub>c</sub>(x)= x<sup>2</sup> + c mod p.<br />
<br />
x<sub>1</sub> will be specified in the conjecture. x<sub>i</sub> is f<sub>c</sub>(x<sub>i-1</sub>).<br />
<br />
For at least half of the elements x<sub>1</sub> in {2,...,p-1} and at least half of the elements c in {2,...,p-1}, the sequence x<sub>1</sub>, x<sub>2</sub>,... will have a repeat within its first O(p<sup>1/2</sup>) terms.<br />
<br />
This is thought to be true since the sequence is believed to be random enough for the birthday paradox to kick in. Still... no proof. <br />
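For concreteness, here is a sketch of the rho algorithm itself, iterating f<sub>c</sub>(x) = x<sup>2</sup> + c mod N with Floyd cycle detection; the heuristic N<sup>1/4</sup> running time rests on exactly the unproven randomness assumption above:

```python
import math

# Pollard's rho with Floyd (tortoise-and-hare) cycle detection.
# A cycle mod a prime factor p of N appears after about p^(1/2) ~ N^(1/4)
# steps -- heuristically, by the birthday paradox -- and is revealed by a
# nontrivial gcd long before the full cycle mod N closes.

def pollard_rho(N, x0=2, c=1):
    f = lambda x: (x * x + c) % N
    slow, fast = x0, f(x0)
    while True:
        d = math.gcd(abs(slow - fast), N)
        if d == N:
            return None            # cycle closed without a factor; retry with new c
        if d > 1:
            return d               # nontrivial factor of N
        slow, fast = f(slow), f(f(fast))

print(pollard_rho(10403))          # 10403 = 101 * 103; prints one of the factors
```

A production version would batch the gcds and retry with fresh c and x0 on failure, but the whole algorithm really is this short, which is part of its charm.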
<br />
When reading some algorithms papers the interest is in getting an algorithm whose run time you can PROVE. By contrast, in papers on factoring, discrete log, and other things used to break cryptosystems, the interest is more in getting something that actually works. I have to learn to stop thinking ``but they haven't proven that!'' and start thinking ``Oh, yes, that would work''. And to be fair, for Pollard's algorithm and others (e.g., the quadratic sieve and number field sieve, which do better in practice than Pollard for large enough numbers) there are REASONS to think they will work well. <br />
<br />
More generally, theoretical and applied work may need different mentalities. <br />
<br />
<!--1-->http://blog.computationalcomplexity.org/2016/02/it-works-in-practice-but-does-it-work.htmlnoreply@blogger.com (GASARCH)4tag:blogger.com,1999:blog-3722233.post-807393785579939265Thu, 25 Feb 2016 13:05:00 +00002016-02-25T08:20:37.568-05:00Primary Game Theory[Nominations open for the S<a href="http://www.sigact.org/Prizes/Service/">IGACT Distinguished Service Prize</a>. Deadline: April 1]<br />
<br />
The US presidential primaries have not gone as expected, as you can see from the crazy shifts in the <a href="http://predictwise.com/politics/">prediction markets</a>. This year, besides the usual democratic/republican split, we have an establishment/non-establishment split in both parties. Back in my day, outside candidates like Trump, Cruz, and Sanders would have run as independents, like Ross Perot and John Anderson.<br />
<br />
Despite the split, the establishment candidates focus more on themselves than the establishment. Christie's attack on Rubio in New Hampshire may have handed Trump the election and it certainly didn't save Christie's campaign. Kasich should just drop out now if he cares about keeping the nomination for an establishment candidate--it's just not his year, though maybe he's playing some <a href="http://www.nytimes.com/2016/02/25/upshot/john-kasich-republican-nomination.html">game theory of his own</a>.<br />
<br />
The democratic side does not offer such interesting game theory, since we have a two-horse race. Mostly a one-horse race, because the <a href="http://www.nytimes.com/2016/02/22/us/politics/delegate-count-leaving-bernie-sanders-with-steep-climb.html">delegate math</a> doesn't work well for Sanders.<br />
<br />
Let's look at the election from the point of view of a hypothetical Georgia voter voting on Super Tuesday next week. Such a voter can choose which primary to vote in on election day.<br />
<br />
Clinton will easily <a href="http://elections.huffingtonpost.com/pollster/2016-georgia-democratic-presidential-primary">win Georgia</a> but as long as Bernie gets at least 15% of the vote (likely), delegates will be allocated proportionally. So a vote in the democratic primary could affect a delegate, but is less likely to affect who will be the nominee than a vote on the Republican side. Unless Bernie surprises in South Carolina, the hypothetical voter may opt to vote in the Republican primary instead.<br />
<br />
The <a href="http://frontloading.blogspot.com/2015/12/2016-republican-delegate-allocation_10.html">republican delegate allocation rules</a> most likely mean that the candidates receiving at least 20% of the votes will share a proportional allocation of 31 delegates, and the winner in each of the 14 congressional districts gets two delegates while the runner-up gets one. Looking at the <a href="http://www.realclearpolitics.com/epolls/2016/president/ga/georgia_republican_presidential_primary-5471.html">polls</a>, Trump will easily win the state with Cruz and Rubio hovering around 20%. A single vote could affect 6 delegates (20% of Georgia's 31 at-large delegates). A vote for Kasich or Carson would not net either of them any delegates but could bolster Trump by pushing Rubio's vote percentage down towards that 20% mark.<br />
<br />
This scenario plays out across the Super Tuesday primaries. Trump is favored to win in every state voting that day except Cruz's Texas. If Rubio can get at least 20% of the vote in those states he keeps the race alive and could make up ground in winner-take-all states coming up later. Kasich doesn't draw many voters, but enough that by not dropping out he may help close out this election on Tuesday. Game theory indeed.<br />
<br />
In an early primary season already full of surprises we may see many more. It would be a lot more fun to watch if the fate of the US and the entire world didn't depend on the outcome.http://blog.computationalcomplexity.org/2016/02/primary-game-theory.htmlnoreply@blogger.com (Lance Fortnow)4tag:blogger.com,1999:blog-3722233.post-8881878164865187344Mon, 22 Feb 2016 13:33:00 +00002016-02-22T08:33:46.549-05:00What is a `previous publication'? Here are the guidelines about submission to STOC 2015 with regard to<br />
submitting a previously published paper. I assume that most theory conferences have a similar policy. <br />
<br />
<br />
<i>Prior and Simultaneous Submissions: The conference will follow SIGACT's policy on prior publication and simultaneous submissions. Abstract material which has been previously published in another conference proceedings or journal, or which is scheduled for publication prior to July 2015, will not be considered for acceptance at STOC 2015. The only exception to this policy are prior or simultaneous publications appearing in the Science and Nature journals. SIGACT policy does not allow simultaneous submissions of the same (or essentially the same) abstract material to another conference with published proceedings. The program committee may consult with program chairs of other (past or future) conferences to find out about closely related submissions.</i><br />
<br />
Here is a question that I ask non-rhetorically. What if Alice has a paper in arXiv in 2010 and submits it to STOC in 2012. Technically it has not been published before. However, it certainly is not new.<br />
<br />
Should this be allowed? Under the current rules of course YES. Should the rules be changed? A paper can be <i>out there </i> without being published. Should the rules be changed to reflect this? I think NOT, since it might be hard to define carefully and I don't want to discourage people from posting on arXiv.<br />
<br />
Should the committee be allowed to take its not-newness into account in judging it? Do they already? And the notions of <i>well known </i>and <i>out there </i>are subjective.<br />
<br />
BOB: This paper has been known about for years.<br />
<br />
EVE: Well, I didn't know about it, so for ME it's new!<br />
<br />
There might be a newness/quality trade-off. If Donna posted her proof that P=NP in 2020 but submitted it to STOC 2030, I think it would still get in. By contrast, if Bob posts a good but not great paper that is STOC-worthy in 2020, and then submits it in 2030, I think it would not get in.<br />
<br />
Then again, by 2030 maybe we will have changed the prestige-conference model we currently use.<br />
<br />
<br />http://blog.computationalcomplexity.org/2016/02/what-is-previous-publication.htmlnoreply@blogger.com (GASARCH)7tag:blogger.com,1999:blog-3722233.post-4064527292278181558Thu, 18 Feb 2016 12:33:00 +00002016-02-18T07:33:54.745-05:00Posting PapersIn the ancient days of the 80's, if someone wanted a paper from you, they would ask and you would mail via post. Sometimes I would get a self-addressed envelope asking for a certain paper. Departments would maintain collections of local technical reports. Someone could request a paper, an admin would make a copy, slap on a cover and send it out.<br />
<div>
<br /></div>
<div>
In the 90's, we started distributing papers by email, but then who you sent papers to <a href="http://people.cs.uchicago.edu/~laci/papers/email90.pdf">started to matter</a>. As soon as we had a browser in 1993, for fairness, though more because I got tired of responding to paper requests, I put together a page that had electronic copies of all my papers. Over the years those files have gone from postscript to pdf. The page started as html; later I used bib2html, which I kept running on my old Chicago CS account that nobody bothered turning off. Bib2html failed to work for me last week, so I asked the twitterverse for an alternative and <a href="https://twitter.com/fortnow/status/698932411137200129">they answered</a>. I went with <a href="http://bibbase.org/">bibbase</a> and now can reveal my new <a href="http://papers.fortnow.com/">paper page</a>. It's pretty easy to tell from the page when I started as a department chair. I kept the <a href="http://papers.fortnow.com/chicago.php">old page</a> active just in case, but it will no longer be updated.</div>
<div>
<br />
Sometimes I wonder why I bother and don't just let people use Google Scholar or DBLP to find my papers. I guess I'm just not ready to give up this record of my research life.</div>
http://blog.computationalcomplexity.org/2016/02/posting-papers.htmlnoreply@blogger.com (Lance Fortnow)2tag:blogger.com,1999:blog-3722233.post-4198450764241401127Sun, 14 Feb 2016 20:26:00 +00002016-03-14T14:02:08.185-04:00∑{p≤ n} 1/p = ln(ln(n)) + O(1). read it here because....(Last April Fools' Day I posted four links to stories that seemed absurd and asked which one was false. They all were true. I recently came across five stories that all seem absurd but are all real, but three of them can't wait until April 1 since they are about current political events. Here they are:<br />
<a href="http://www.cnbc.com/2016/02/02/amazon-plans-hundreds-of-physical-bookstores-dj-citing-mall-ceo.html">Amazon to open many brick-and-mortar stores.</a><br />
<a href="http://www.deathandtaxesmag.com/279423/john-kasich-reunite-pink-floyd-cnn/">Why John Kasich got second place in New Hampshire</a> (which was better than expected).<br />
<a href="http://www.theblaze.com/stories/2016/02/09/jim-gilmore-plans-to-get-more-than-12-votes-in-new-hampshire/">Jim Gilmore's (who?) low expectations</a>,<br />
<a href="http://mashable.com/2016/02/01/ben-carson-fresh-clothes/#CnIUs_NaBZqF">Why Ben Carson left Iowa</a>.<br />
<a href="http://www.bbc.com/news/technology-26418991">airpnp- not about P vs NP</a><br />
<a href="http://www.cnn.com/2016/03/03/politics/donald-trump-small-hands-marco-rubio/">Donald Trump defends....</a> (I added this one on March 14)<br />
And now back to our regularly scheduled blog)<br />
<br />
<br />
<br />
<br />
A while back Larry Washington (number theorist at UMCP) showed me a simple proof that<br />
<br />
∑<sub>p ≤ n</sub> 1/p = ln(ln(n)) + O(1)<br />
<br />
(p goes through all the primes ≤ n.)<br />
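Numerically, this is Mertens' second theorem territory: the sum minus ln(ln(n)) approaches the Meissel-Mertens constant M ≈ 0.26149. Here is a quick sanity check (a sketch I am adding, not part of the original proof; the cutoff n = 10^6 is an arbitrary illustrative choice):

```python
import math

# Sanity-check sketch (not from the post): compare sum_{p <= n} 1/p
# to ln(ln(n)). By Mertens' second theorem the difference approaches
# the Meissel-Mertens constant M ~ 0.26149; n below is an arbitrary
# illustrative cutoff.

def prime_reciprocal_sum(n):
    """Return the sum of 1/p over primes p <= n, via a basic sieve."""
    is_prime = bytearray([1]) * (n + 1)
    is_prime[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if is_prime[i]:
            is_prime[i * i :: i] = b"\x00" * len(range(i * i, n + 1, i))
    return sum(1.0 / p for p in range(2, n + 1) if is_prime[p])

n = 10 ** 6
print(prime_reciprocal_sum(n) - math.log(math.log(n)))  # close to 0.26149
```

So the O(1) term is not merely bounded, it converges, though the simple proof in question only needs the two-sided bound.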
<br />
Recently I tried to remember it and could not so I looked on the web and... I could not find it! I found proofs that the sum is at least ln(ln(n)) as part of a proof that the series diverges, but could not find a simple proof of the equality.<br />
<br />
I asked Larry Washington to email me the proof, and then (and this happens often) while waiting for the response I came up with it.<br />
<br />
Anyway, to try to avoid what Lance <a href="http://blog.computationalcomplexity.org/2015/07/will-our-understanding-of-math.html">pondered</a>, the deterioration of math over time, I post the proof <a href="http://www.cs.umd.edu/~gasarch/BLOGPAPERS/sump.pdf">here</a>. Read it here since you probably can't find this proof elsewhere.<br />
<br />
I continue to be amazed at both what IS and IS NOT on the web.<br />
<br />
(ADDED LATER- one of the comments pointed to links on the web that DO contain the proofs I could not find.)http://blog.computationalcomplexity.org/2016/02/p-n-1p-lnlnn-o1-read-it-here-because.htmlnoreply@blogger.com (GASARCH)9tag:blogger.com,1999:blog-3722233.post-5439438703035623693Thu, 11 Feb 2016 14:02:00 +00002016-02-11T09:29:50.460-05:00Test of Time Award- a good idea but...The ESA Conference (European Symposium on Algorithms) has a <a href="http://esa-symposium.org/test-of-time-award-15.php">test-of-time award</a><br />
which<br />
<br />
<i>recognizes outstanding papers in algorithms research that were published in the ESA proceedings 19-21 years ago and which are still influential and stimulating the field today.</i><br />
<i><br /></i>
This sounds like a great idea- some papers are more influential than people might have thought when they first got into ESA, and some papers are less influential than people might have thought. And I am happy that Samir Khuller (my chair) and Sudipto Guha (a grad student when the paper was written) won it for their paper <i>Approximation Algorithms for Connected Dominating Sets.</i><br />
<i><br /></i>
But there are two things that are not quite right.<br />
<br />
1) 19-21 years. That seems like a very small window.<br />
<br />
2) The paper has to have been published in ESA.<br />
<br />
Together this makes the job of the panel that decides the award easier, as they only have to look at three years of conferences. But there are many fine papers in algorithms that are not in ESA, and there may be an awesome three-year period and then a drought, so the window seems short.<br />
<br />
But rather than complain let me ask some well defined questions:<br />
<br />
Are there any other awards with a lower limit (in this case 19 years) on how long the paper has to be out, so that its influence can be better appreciated? This is a good idea, though 19 years seems high. Awards for a lifetime of work are often similar in that they are given after the work's influence is known.<br />
<br />
Are there any other awards that restrict themselves to ONE conference or journal? Of course best-paper and best-student-paper awards do that, but I don't know of any others.<br />
<br />
ADDED LATER: A commenter says that there are LOTS of test-of-time awards associated to conferences:<br />
<br />
<a href="https://www.google.com/?gws_rd=ssl#q=%22test+of+time+award%22">see here</a><br />
<br />
STOC, FOCS, SODA, CCC don't have them so I foolishly thought that was representative.<br />
That raises another question - why do some conferences have it and some don't?<br />
<br />
<br />http://blog.computationalcomplexity.org/2016/02/test-of-time-award-good-idea-but.htmlnoreply@blogger.com (GASARCH)12tag:blogger.com,1999:blog-3722233.post-5228107042238383034Mon, 08 Feb 2016 21:56:00 +00002016-02-09T06:16:02.781-05:00The Moral Hazard of Avoiding Complexity AssumptionsMoshe Vardi's CACM editor letter <a href="http://cacm.acm.org/magazines/2016/2/197412-the-moral-hazard-of-complexity-theoretic-assumptions/fulltext">The Moral Hazard of Complexity-Theoretic Assumptions</a> practically begs a response from this blog. I also encourage you to read the <a href="http://cacm.acm.org/magazines/2016/2/197412-the-moral-hazard-of-complexity-theoretic-assumptions/fulltext#comments">discussion</a> between Moshe and Piotr Indyk and a <a href="https://twitter.com/vardi/status/693520910096187393">twitter discussion</a> between Moshe and Ryan Williams.<br />
<br />
I take issue mostly with the title of Vardi's letter. Unfortunately we don't have the tools to prove strong unconditional lower bounds on solving problems. In computational complexity we rely on hardness assumptions, like P ≠ NP, to show that various problems are difficult to solve. Some of these assumptions, like the strong exponential-time hypothesis, the Unique Games Conjecture, and the circuit lower bounds needed for full derandomization, are quite strong and we can't be completely confident that they are true. Nevertheless, computer scientists are unlikely to disprove these assumptions in the near future, so they do point to the extreme hardness of solving problems like getting a better than quadratic upper bound for edit distance or a better than 2-ε approximation for vertex cover.<br />
<br />
If you read Vardi's letter, he doesn't disagree with the above paragraph. His piece focuses instead on the press that oversells theory results, claiming the efficiency of Babai's new graph isomorphism algorithm or the impossibility of improving edit distance. A science writer friend <a href="http://blog.computationalcomplexity.org/2004/04/view-of-science-writer.html">once told me</a> that scientists always want an article to be fully and technically correct, but he doesn't write for scientists, he writes for the readers who want to be excited by science. Scientists rarely mislead the press, and we shouldn't, but do we really want to force science writers to downplay the story? These stories might not be completely accurate but if we can get the public interested in theoretical computer science we all win. To paraphrase Oscar Wilde, as I <a href="https://twitter.com/fortnow/status/693056927560044545">tweeted</a>, the only thing worse than the press talking about theoretical computer science results is the press not talking about theoretical computer science results.http://blog.computationalcomplexity.org/2016/02/the-moral-hazard-of-avoiding-complexity.htmlnoreply@blogger.com (Lance Fortnow)11