Computational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch (http://blog.computationalcomplexity.org/)<br />
<br />
Tue, 24 May 2016: My third post on Gathering for Gardners<br />
<br />
(Workshop for women in computational topology in August: <a href="https://www.ima.umn.edu/2015-2016/SW8.15-19.16">see here</a>. For a post about these kinds of workshops see <a href="http://blog.computationalcomplexity.org/2010/06/lefthanded-latino-lesbians-in-algebraic.html">here</a>.)<br />
<br />
<br />
(I have already posted twice on stuff I saw or heard at the Gathering Conference <a href="http://blog.computationalcomplexity.org/2016/04/some-short-bits-from-gathering-for.html">here</a> and <a href="http://blog.computationalcomplexity.org/2016/05/some-more-bits-from-gathering-for.html">here</a>.)<br />
<br />
Meta point: At the Gathering for Gardner conference I learned lots of math (probably more accurate to say I learned ABOUT lots of math) that I want to tell you about, which is why this is my third post on it, and there may be more to come.<br />
<br />
<i>The pamphlets of Lewis Carroll: Games, Puzzles, and related pieces: </i>This was mostly puzzles that are by now familiar, but one new (to me) thing struck me: an <i>aloof word </i>is a word where if you change any one letter to anything else then it's no longer a word. I think aloof is such a word.<br />
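Checking aloofness is an easy brute-force exercise; here is a sketch (the word list below is a toy stand-in of my own, and a real check would need a full dictionary such as the words in /usr/share/dict/words):

```python
import string

def is_aloof(word, dictionary):
    """True if changing any single letter of `word` (to any other letter)
    never yields another word in `dictionary`."""
    for i in range(len(word)):
        for c in string.ascii_lowercase:
            if c == word[i]:
                continue
            if word[:i] + c + word[i + 1:] in dictionary:
                return False
    return True

# Toy word list just for illustration; a real check needs a full dictionary.
toy = {"aloof", "cat", "bat", "cot"}
print(is_aloof("cat", toy))    # False: "bat" is one letter away
print(is_aloof("aloof", toy))  # True, at least within this toy list
```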
<br />
<i>Some talk (I don't know which one)</i> was about the Piet Hein Egg, also called a <a href="https://en.wikipedia.org/wiki/Superegg">superegg</a>. The talk (which differs slightly from the page pointed to) said it was a solid whose surface has the equation<br />
<br />
|x/a|<sup>2.5</sup> + |y/a|<sup>2.5</sup> + |z/b|<sup>2.5</sup> = 1<br />
<br />
and it's an egg which can stand on its end. (Note the x/a, y/a, z/b: that is correct, not a typo.)<br />
(Personal Note: Piet Hein invented <a href="https://en.wikipedia.org/wiki/Soma_cube">Soma Cubes</a> which is a puzzle where you put together 3-d pieces<br />
made of small cubes into a large cube or other shapes. I learned about these in a Martin Gardner column and bought a set. I was very good at this- I put together every figure in the booklet within a week. This was the ONLY sign that I was GOOD at math when I was a kid, though there were many signs that I was INTERESTED in math. About 30 years ago my girlfriend at that time and I went to a restaurant and there was a SOMA set on the table, assembled into a cube. I took it apart and she said ``Bill, you'll never be able to put it back together!!!'' I then ``tried'' to and ended up putting together instead a bathtub, a dog, a wall, and a W. But gee, it ``seemed'' like I was fumbling around and couldn't get a cube. ``Gee Bill, I think you've seen this puzzle before''. And who is this insightful girlfriend? My wife of over 20 years!)<br />
<br />
<i>Magic Magic Square </i>(sorry, don't know which talk). Try to construct a 4x4 magic square where (as usual) all rows and columns sum to the same thing, but also where every set of four numbers that forms a square (e.g., the four corners) adds to that number. Can you? If you insist on using naturals then I doubt it; integers, I also doubt it. But you CAN do it with rationals. How? If you want to figure it out yourself then DO NOT go to the answer, which is at this link: <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/mmsq.txt">here</a><br />
<br />
<i>Droste Effect: </i>When a picture appears inside itself. For an example, and why it's called that, see <a href="https://en.wikipedia.org/wiki/Droste_effect">here</a><br />
<br />
<i>Black Hole Numbers: </i>If you have a rule that takes numbers to numbers, are there numbers that ALL numbers eventually go to? If so, they are black hole numbers for that rule.<br />
<br />
Map a number to the number of letters in its name<br />
<br />
20 (twenty) --> 6 (six) --> 3 (three) --> 5 (five) --> 4 (four) --> 4 (four) --> ...<br />
<br />
It turns out that ANY number eventually goes to 4.<br />
<br />
Map a number to the sum of the digits of its divisors<br />
<br />
12 has divisors 1,2,3,4,6,12 --> 1+2+3+4+6+1+2=19<br />
<br />
19 has divisors 1,19 so --> 1+1+9 = 11<br />
<br />
11 has divisors 1,11 so --> 1+1+1 = 3<br />
<br />
3 has divisors 1,3 so --> 1+3=4<br />
<br />
4 has divisors 1,2,4 so --> 1+2+4=7<br />
<br />
7 has divisors 1,7 so --> 1+7=8<br />
<br />
8 has divisors 1,2,4,8 --> 1+2+4+8 = 15<br />
<br />
15 has divisors 1,3,5,15 --> 1+3+5+1+5 = 15<br />
<br />
AH. It turns out ALL numbers (well, all numbers except 1, which maps to itself) eventually get to 15.<br />
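This rule is easy to experiment with; here is a sketch (the function names are mine, and I only check starting points up to 49):

```python
def step(n):
    """Map n to the sum of the digits of all of its divisors."""
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    return sum(int(digit) for d in divisors for digit in str(d))

def trajectory(n):
    """Iterate the rule until a value repeats."""
    seen = []
    while n not in seen:
        seen.append(n)
        n = step(n)
    return seen + [n]

print(trajectory(12))  # [12, 19, 11, 3, 4, 7, 8, 15, 15]

# Every starting point from 2 to 49 falls into the black hole 15.
# (1 is the exception: its only divisor is 1, so it maps to itself forever.)
print(all(trajectory(n)[-1] == 15 for n in range(2, 50)))  # True
```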
<br />
<i>Boomerang Fractions: </i>Given a fraction f, do the following:<br />
<br />
x1 = 1, x2 = 1 + f; after that, at each step you either add f or take the reciprocal. Keep doing this. Your goal is to get back to 1 as soon as possible. Here is a paper on it: <a href="http://seomtc.weebly.com/uploads/1/3/0/4/13040232/boomerang_fractions.pdf">here</a>. This notion can be generalized: given (s,f), start with s and try to get back to s. Can you always? How long does it take? Upper bounds?<br />
<br />
<i>Liar/Truth-teller patterns on a square plane</i> by Kotani Yoshiyuki. You have a 4x4 grid. Every grid point has a person. They all say ``I have exactly one liar adjacent (left, right, up, or down) to me.''<br />
How many ways can this happen? This can be massively generalized.<br />
<br />
<i>Speed Solving Rubik's Cube</i> by Rik van Grol. A robot can do it in 0.9 seconds: <a href="https://www.youtube.com/watch?v=ixTddQQ2Hs4">here.</a><br />
<br />
<br />
http://blog.computationalcomplexity.org/2016/05/my-third-post-on-gathering-for-gardners.html (GASARCH)<br />
<br />
Thu, 19 May 2016: Upfronts<br />
<br />
The US television industry, an entertainment outlet driven by technology, has long fascinated me. David Sarnoff introduced television at the World's Fair in 1939 and developed the NBC network to provide content so people would buy RCA televisions, much the way Steve Jobs created the iTunes store to sell iPods. For decades television was broadcast over the air, funded mostly by commercials. You could only watch a show when it aired, and people adjusted their schedules to the broadcast schedule. People stayed home instead of going to the theater, movies, social clubs and restaurants. They all still exist, but not to the extent before television. The nature of jobs changed: one funny comedian on TV could make considerable money but would put hundreds of vaudeville comedians out of a job.<br />
<br />
In the 70's came cable television to big cities, initially to provide a better signal. But it also provided more stations including stations that were paid explicitly by consumers like HBO and implicitly through cable subscriptions like ESPN. ESPN is the single largest source of revenue for Disney. Eventually we would have hundreds of cable stations, many very specialized.<br />
<br />
In the 80's came the VCR, then the DVR. No longer did we need to plan our time around the TV broadcast schedule. Eventually TV shows could have a continuing story line allowing for richer plot and character development.<br />
<br />
Then came the Internet and streaming video. You could watch videos from series and movies on Netflix to user generated short pieces on YouTube or shorter still on Vine. People are watching TV not so much on TVs anymore but on their computers and phones. Like many others we have cut the cable cord in the Fortnow household, a trend that the industry still tries to fathom. Every cord cutter is $6 less a month to ESPN and the Disney bottom line.<br />
<br />
Why bring up TV now? This is what used to be the most exciting week for television, the upfronts, where the broadcast networks reveal their new seasons to advertisers and the public at large. The networks are still having their presentations and parties, but the new shows fail to excite, with quite a few retreads and revivals, including 24, Prison Break, MacGyver, Tales from the Crypt, and Gilmore Girls. Do you remember the Muppets returning last year? Neither do I.<br />
<br />
We are in a golden age of television. One could take a rich novel and turn it into an equally rich 10-13 episode TV series. There were over 400 scripted TV series, and more really good series than I have time to watch (basically when I run on the treadmill). Meanwhile the networks continue to promote and party, though with an undercurrent of a very uncertain future. Watching the television industry is itself a never ending story.<br />
<br />
http://blog.computationalcomplexity.org/2016/05/upfronts.html (Lance Fortnow)<br />
<br />
Mon, 16 May 2016: Does this leak information?<br />
<br />
Here are four fictional stories, though inspired by real-world events or TV shows (I forget which is which). My question is: was a confidence broken, or was some information leaked that should not have been? I do not have answers.<br />
<br />
Tenure: The candidate DOES find out the vote (e.g., 18 yes, 2 no) but DOES NOT find out who voted what. But what if the vote is 20 yes, 0 no? Then the candidate DOES know how everyone voted. (Worse if it was 20 no, 0 yes.) I am sure this has been studied in crypto. Here is one solution: randomly flip one bit.<br />
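The ``randomly flip one bit'' idea is essentially randomized response, the trick behind local differential privacy: flip each recorded vote with some probability, so no individual report is certain, while the aggregate can still be estimated. A sketch (the 1/4 flip probability and the function names are my choices for illustration, not from any real tenure procedure):

```python
import random

FLIP = 0.25  # flip probability: chosen for illustration

def record_vote(true_vote, flip=FLIP):
    """Record the voter's bit, flipped with probability `flip`."""
    return true_vote ^ (random.random() < flip)

def estimate_yes_fraction(recorded, flip=FLIP):
    """Invert the known bias: E[recorded mean] = p*(1-flip) + (1-p)*flip,
    so p = (mean - flip) / (1 - 2*flip)."""
    mean = sum(recorded) / len(recorded)
    return (mean - flip) / (1 - 2 * flip)

random.seed(0)
true_votes = ([1] * 18 + [0] * 2) * 10000  # many copies of an 18-2 vote
recorded = [record_vote(v) for v in true_votes]
print(round(estimate_yes_fraction(recorded), 2))  # close to 0.9
```

No single recorded vote reveals a voter, yet the committee-level tally is recoverable to within sampling noise.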
<br />
Lawyers:<br />
<br />
CLIENT: I have roughly X dollars counting all of my assets. Are you the right firm to handle my estate?<br />
<br />
LAWYER: Yes<br />
<br />
CLIENT: Do you always say that?<br />
<br />
LAWYER: No. If you had log(X) money then we would recommend a cheaper firm since your estate would not need our complex services. And if you had X<sup>10</sup> money then there are other firms that are more familiar with investments at that level.<br />
<br />
CLIENT: So, for example, Mitt Romney is not a client.<br />
<br />
LAWYER: That is correct.<br />
<br />
Did the lawyer break a confidence by saying that Mitt Romney was NOT a client? Could CLIENT go to lots of law firms, play this game, and eventually find out who Mitt Romney's lawyer is?<br />
<br />
Nobel Prize: If the committee leaks that the winner has been notified THAT he or she won, but not WHO it was, is that a breach?<br />
<br />
Someone has confessed to a priest that he murdered someone (a staple of TV shows and movies). The wrong man, whose name is Bob, is in jail.<br />
<br />
PRIEST TO COP: You have the wrong man.<br />
<br />
COP: How do you know?<br />
<br />
PRIEST: I can't say how I know, but I know.<br />
<br />
COP: Oh, it must be that the guilty man confessed to you but you can't break the seal of the confession. I won't ask you to. But here is a question: has Bob been to confession lately?<br />
<br />
PRIEST: No! (and he seems relieved to have gotten the message through)<br />
<br />
Did the priest betray the killer's confidence?<br />
<br />
People in crypto (and elsewhere) define information, knowledge, security, and similar terms formally so they can have protocols and try to prove things. Are their definitions realistic? In the above scenarios, are these breaches or not? Is that even a rigorous question?<br />
<br />
<br />
<br />
<br />
http://blog.computationalcomplexity.org/2016/05/does-this-leak-information.html (GASARCH)<br />
<br />
Thu, 12 May 2016: The Challenges of Smart Cities<br />
<br />
Earlier this week I attended the CCC workshop <a href="http://cra.org/ccc/events/computing-innovation-societal-needs-the-impact-of-computing-research">Computing Research: Addressing National Priorities and Societal Needs</a> (<a href="http://livestream.com/CompComCon/ComputingResearch">video</a>). The workshop covered a large collection of topics, highlighting the challenges of big data, privacy, security, sustainability, education, the future of work, CS funding and partnerships, and more.<br />
<div>
<br /></div>
<div>
I'd like to highlight the challenges of Smart Cities, addressed in a panel Monday morning and a talk by Keith Marzullo on Tuesday afternoon. Roughly, a smart city uses technology to improve services, for example with sensors everywhere or by preparing for autonomous vehicles. The speakers highlighted a number of major challenges.</div>
<div>
<ul>
<li>There are 382 <a href="https://en.wikipedia.org/wiki/Metropolitan_statistical_area">Metropolitan Statistical Areas</a> in the US, from New York to Carson City, which together account for 84% of the US population and 91% of GDP. Many cities share similar problems, but how easily can one port hardware and algorithms from one area to another? How do you scale smart cities without reinventing the wheel each time?</li>
<li>Who pays for the infrastructure? Sometimes one can get research grants or federal help to start new projects, but these projects need continual maintenance afterwards. Are researchers just in it to start a project, write a paper and get out? How do we keep the advantages going in the long run?</li>
<li>How do you keep the public's trust that the information collected will help the city and not just keep track of everyone a la Orwell's 1984?</li>
<li>How do we make sure we tackle the problems of the general public and not just those of the researchers and the funders? A great quote: ``We need to make sure we are focused more on mass transit than on how to make parking the Tesla easier.''</li>
<li>If we use big data to predict crime and position police in response, could that cause discrimination and harassment?</li>
<li>How do we keep our research relevant?</li>
</ul>
<div>
Rural areas got their due as well, with interesting presentations on how farmers can use sensors and machine learning to apply just the right amounts of fertilizer and water needed for each segment of the farm. </div>
</div>
<div>
<br /></div>
<div>
To paraphrase Tip O'Neill, all computing is local, but we face many challenges in taking our broad tools of cloud, big data, machine learning, automation and the internet of things and applying them in our own neighborhoods.</div>
http://blog.computationalcomplexity.org/2016/05/the-challenges-of-smart-cities.html (Lance Fortnow)<br />
<br />
Tue, 10 May 2016: Math lessons from the Donald Trump Nomination (non-political)<br />
<br />
There may be articles titled <i>Donald Trump and the Failure of Democracy</i>. This is NOT one of them. This is about some math questions. I drew upon many sources, but mostly Nate Silver's columns: <a href="http://fivethirtyeight.com/features/donald-trumps-six-stages-of-doom/">Donald Trump's Six Stages of Doom</a>, <a href="http://fivethirtyeight.com/features/how-the-republican-field-dwindled-from-17-to-donald-trump/">How the Republican Field Dwindled from 17 to Trump (a collection of articles)</a>, and <a href="http://fivethirtyeight.com/features/the-four-things-i-learned-from-the-donald-trump-primary/">Four Things I Learned from the Donald Trump Primary</a>. For the best news piece of the year on Donald Trump see <a href="https://www.youtube.com/watch?v=DnpO_RTSNmQ">John Oliver's segment</a>.<br />
<br />
1) <i>Trends</i>. Since 1972 (the beginning of the modern era of prez primaries) the Republicans have ALWAYS (with one exception I'll get to) nominated someone who was either PREZ or a sitting or former Gov, Senator, or VP who had ALSO been a serious candidate in a prior primary prez race. The only exception is W, who was a sitting Governor but had never run before, though he of course had name recognition. In short, someone FAMILIAR. This also fits our image of the Republicans as an old boys network (Dole got the nomination in 1996 because it was his turn). Hence most pundits expected the same this year.<br />
<br />
a) The old ML maxim: Trends hold TILL THEY DON'T.<br />
<br />
b) Nobody quite fit the pattern. The only ones who had run before were Rick Santorum, Rick Perry, and Mike Huckabee. Rick S and Mike H were niche candidates; Rick P had wider appeal in 2012 but entered late and stumbled (the WHOOPS moment, though more on that later). Jeb was like W, a former Gov with the family name. So based just on trends, Perry or Jeb should have been the nominee, but it's not that strong a match. (Added later: A commenter says that John K ran briefly in 2000. My criterion was had been a serious candidate, not quite well defined, but John K would not have qualified. Even so, Governor and ran a bit, so he also was close to the criterion.) IF YOU HAVE A TREND AND USE IT TO PREDICT, MAKE SURE THE DATA YOU HAVE FITS THE TREND.<br />
<br />
c) I WILL NOT claim that I predicted any of this, but there is an inkling of what happened in my post <a href="http://blog.computationalcomplexity.org/2015/05/the-law-of-excluded-middle-of-road.html">The law of the excluded middle of the road republicans</a> where I pointed out for each candidate (including Trump) why they couldn't win. IF YOU HAVE A LARGE NUMBER OF LOW-PROB EVENTS WHERE ONE WILL HAPPEN, IT'S HARD TO PREDICT WHICH ONE.<br />
<br />
d) The pattern itself is based on only 11 data points, and you might not want to count the four where a Republican prez was running for re-election. And 11 data points is not the full story: the political situation from 1972 to 2016 changed dramatically. So these are data points on a moving target. Perhaps pundits should use papers like <a href="http://www.cs.nyu.edu/~mohri/pub/drift.pdf">New analysis and algorithms for learning with drifting distributions</a>. IT'S HARD TO DO ANY REAL DATA ANALYSIS WHEN YOU DON'T HAVE ENOUGH DATA AND IT CHANGES OVER TIME.<br />
<br />
2) <i>Domination:</i> Republican primary voters (in the past) wanted a candidate who was both conservative and electable. But what combination? I read that Chris Christie had no chance since it was thought that Jeb, Scott Walker, and Rubio were all MORE electable and MORE conservative, so they dominated him. By the same logic Donald Trump couldn't win since he was (thought to be) less electable and probably less conservative, though that's hard to tell since he never held office. Hence he can't win. But some voters were tired of voting for electable, as McCain and Romney were allegedly electable. And some were just plain angry. If you think your problems are because of immigrants, vote Trump; if you think your problems are because of Wall Street, then Feel the Bern. VOTERS DO NOT CARE ABOUT CONVEXITY AND DOMINATION.<br />
<br />
3) <i>Nate Silver</i>. He's the pollster who is NOT a pundit, does NOT let who he wants to see win affect what he predicts, wrote a great book about predictions, <a href="http://www.amazon.com/The-Signal-Noise-Many-Predictions/dp/159420411X/ref=cm_cr_arp_d_product_top?ie=UTF8">The Signal and the Noise: Why So Many Predictions Fail But Some Don't</a>, and got many predictions right in recent years. He wrote an excellent article, <a href="http://fivethirtyeight.com/features/donald-trumps-six-stages-of-doom/">Donald Trump's Six Stages of Doom</a>, in Aug 2015, which laid out the obstacles to the nomination and gave it a 2% chance. To his credit he has owned this prediction in that later columns have told us where he went wrong. (Most pundits never say `Gee, I was wrong.') So why did his prediction not pan out?<br />
<br />
a) It did in a sense. All of the problems he pointed out that Trump would have, Trump DID have: for example, Trump did not have a good organization to control delegates, and the party did try to stop him. So in a strange sense Nate was right. Except that he was wrong.<br />
<br />
b) Back to Nate's 2%. Bill James (baseball stats guru) wrote (I am paraphrasing): <i>If you are given odds of 500-1 that some awful team will win the World Series, then TAKE THAT BET. People have a hard time telling unlikely from REALLY unlikely. And the NY Mets did win the 1969 World Series. (A quote from 1962: there will be a man on the moon before the Mets win the World Series. True, by two months.) </i>Also note that Leicester City won this year despite being (literally) 5000-1 underdogs (see <a href="http://www.cbssports.com/soccer/eye-on-soccer/25575254/everything-you-should-know-about-5000-to-1-premier-league-champ-leicester">here</a>). WAS NATE WRONG? If you give an event a 2% chance and it happens, I can't say you are wrong. In fact, if most everyone else gave it less than 2% or even 0 (which is the case here) then you are... less wrong.<br />
<br />
4) <i>Bill Gasarch. </i>Based on TRENDS above I predicted Paul Ryan (and I owe Lance a dinner). My mistake was the form of the bet: Ryan, I win; ANYONE ELSE, Lance wins (oddly enough, with a contested convention I might still have won that bet). I should have made Lance name 5 candidates: if any of those five win, he wins, but if it's Ryan, I win. I doubt he would have named Trump.<br />
<br />
5) <i>Game Theory: </i>Lance has posted about <a href="http://blog.computationalcomplexity.org/search?q=Primaries">Primary Game Theory</a>. The main issue for a Trump voter might be `Gee, if I vote Trump he is not electable even though I like him, so I'll vote for X instead, who is more electable.' But voters are not game theorists. Plus they voted for John McCain and Mitt Romney based on that, and they lost. So when Rubio said <i>A Vote for Trump is a Vote for Hillary </i>he may have been right, but the voters were not listening. Plus since Little Marco only won Minnesota and Puerto Rico (they have a primary! who knew!) he was not positioned to talk about electability. Plus one could argue that VERY few of the candidates could beat Hillary. In an early column Nate thought only Jeb, Little Marco, and Scott Walker (remember him?) could. So once Rubio dropped out the electability argument was useless.<br />
<br />
6) <i>More Game Theory:</i> Many of the candidates wanted someone ELSE to go after Trump so they went after each other.<br />
<br />
7) <i>The Pledge: </i>For fear that Trump would run third party they all signed a pledge promising to support whoever got the nomination. When they signed it they never imagined that Trump would be the nominee.<br />
<br />
8) <i>Prediction Markets: </i>They did pretty well; around March they came around. Last week David Brooks maintained that Trump would not be the nominee, but he was kidding. I think.<br />
<br />
<br />
<br />
<br />
http://blog.computationalcomplexity.org/2016/05/math-lessons-from-donald-trump.html (GASARCH)<br />
<br />
Thu, 05 May 2016: Open Questions<br />
<br />
Through the years I've mentioned on this blog a few of my favorite open problems in computational complexity that have perplexed me. Let me mention a few of them again in case they inspire some of the new generation of complexity theorists.<br />
<ol>
<li>Does the polynomial-time hierarchy look like the arithmetic hierarchy? I mentioned this in the <a href="http://blog.computationalcomplexity.org/2013/11/andrzej-mostowski-1913-1975.html">centenary post for Mostowski</a>. Probably not because it would imply factoring in P (since NP∩co-NP would equal P) but we have no proof of separation and no oracle that makes them look the same.</li>
<li><a href="http://blog.computationalcomplexity.org/2003/12/does-npup.html">Does UP = NP imply the polynomial-time hierarchy collapses</a>? What are the consequences if SAT had an NP algorithm with a unique accepting path? Remains open, again even in relativized worlds. </li>
<li><a href="http://blog.computationalcomplexity.org/2003/11/rational-functions-and-decision-tree.html">Do rational functions that agree with Boolean functions on the hypercube have low decision tree complexity</a>? I really expected someone to have come up with a proof or counterexample by now. </li>
<li>What happens if two queries to NP can be simulated by a single query? Does S<sub>2</sub><sup>P</sup>=ZPP<sup>NP</sup>? Both questions asked in a <a href="http://blog.computationalcomplexity.org/2002/08/complexity-class-of-week-s2p.html">post on S<sub>2</sub><sup>P</sup></a>.
</li>
<li>Separate NP from Logarithmic space. I gave four approaches in a pre-blog 2001 <a href="http://lance.fortnow.com/papers/files/diag.pdf">survey on diagonalization</a> (Section 3) though none have panned out. Should be much easier than separating P from NP.</li>
</ol>
http://blog.computationalcomplexity.org/2016/05/open-questions.html (Lance Fortnow)<br />
<br />
Sun, 01 May 2016: Some more bits from the Gathering for Gardner<br />
I posted about the Gathering for Gardner conference and about some of the talks I saw <a href="http://blog.computationalcomplexity.org/2016/04/some-short-bits-from-gathering-for.html">here.</a> Today I continue with a few more talks.<br />
<br />
<i>Playing Penney's game with Roulette</i> by Robert Vallen. Penney's game is the following: let k be fixed. Alice and Bob pick different elements of {H,T}<sup>k</sup>. They flip a coin until one of their sequences shows up, and that person wins. Which sequences have the best probability of winning?<br />
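A Monte Carlo simulation shows the famous non-transitive surprise: against HHH, the sequence THH wins 7/8 of the time. A sketch (function names are mine):

```python
import random

def penney_winner(seq_a, seq_b, rng):
    """Flip a fair coin until seq_a or seq_b appears as a suffix;
    return 'A' or 'B' accordingly."""
    history = ""
    while True:
        history += rng.choice("HT")
        if history.endswith(seq_a):
            return "A"
        if history.endswith(seq_b):
            return "B"

def win_rate_b(seq_a, seq_b, trials=20000, seed=0):
    """Estimated probability that seq_b beats seq_a."""
    rng = random.Random(seed)
    wins = sum(penney_winner(seq_a, seq_b, rng) == "B" for _ in range(trials))
    return wins / trials

print(win_rate_b("HHH", "THH"))  # roughly 0.875
```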
<br />
<i>New Polyhedral dice</i> by Robert Fathauer, Henry Segerman, Robert Bosch. This is a good example of how my mentality (and possibly yours) differs from others'. When I hear ``60-sided dice'' I think ``p<sub>1</sub>,...,p<sub>60</sub> which are all between 0 and 1 and add up to 1.'' I also thought that only the Platonic solids could be used to form fair dice (so only 4-sided, 6-sided, 8-sided, 12-sided, and 20-sided dice can be made). NOT so. These authors actually MAKE real dice, and they do not have to be Platonic solids. <a href="https://plus.google.com/+HenrySegerman/posts/AADYapU6BTo">Here</a> is their website.<br />
<br />
<i>Numerically balanced dice</i> by Robert Bosch (paper is <a href="http://www.oberlin.edu/math/faculty/bosch/nbd_abridged.pdf">here</a>). Why do dice have opposite sides sum to the same thing? Read the paper to find out!<br />
<br />
<i>Secret messages in juggling and card shuffling</i> by Erik Demaine. Erik Demaine was one of about 4 theoretical computer scientists I met at the conference, though Erik is so well rounded that calling him a theoretical computer scientist doesn't seem quite right. I had never met him before, which surprised me. In this talk he showed us some new fonts, one using juggling. See <a href="http://erikdemaine.org/fonts/juggling/">here</a> for an example of juggling fonts, co-authored with his father Martin.<br />
<br />
<i>Fibonacci Lemonade</i> by Andrea Johanna Hawksley. Put in the lemon and sugar in Fibonacci-number increments. <a href="http://blog.andreahawksley.com/fibonacci-lemonade/">Here</a> is their website. In my first post I said the talks were on a variety of topics and then presented mostly math talks. This talk is an example of that variety. There were other talks involving the Fibonacci numbers. I was surprised by this since they aren't that special (see <a href="http://www.goldennumber.net/golden-ratio-myth/">here</a>).<br />
<br />
<i>Penis Covers and Puzzles: Brain Injuries and Brain Health</i> by Gini Wingard-Phillips. She recounted having various brain injuries and how working on mathematical puzzles, of the type Martin Gardner popularized, HELPED HER RECOVER! As for the title: people with brain injuries sometimes have a hard time finding the words for things, so they use other words. In this case she wanted her husband to buy some <i>condoms</i> but couldn't think of the word, so she said <i>Penis Covers</i> instead.<br />
<br />
<i>Loop: Pool on an Ellipse</i> by Alex Bellos. Similar in my mind to the Polyhedral dice talk (you'll see why). We all know that if you build an elliptical pool table with a hole at one of the foci, then if the ball is placed at the other focus and hit hard enough it WILL go into the hole. But Alex Bellos actually MAKES these pool tables (see <a href="http://www.loop-the-game.com/">here</a> if you want to buy one for $20,000). He told us the history: someone else tried to make one in 1962 but nobody bought them (I wonder if anyone is going to buy his), and Alex had problems with friction, as you may recall it only works on a frictionless surface. So his game does require some skill. The similarity to dice is that I (and you?) are used to thinking about dice and ellipses abstractly, not as objects people actually build.<br />
<br />
This post is getting long so I'll stop here and report more in a later post. Why so many posts? Six-minute talks that I can actually understand and am delighted to tell you about!<br />
<br />
<br />
http://blog.computationalcomplexity.org/2016/05/some-more-bits-from-gathering-for.html (GASARCH)<br />
<br />
Thu, 28 Apr 2016: Claude Shannon (1916-2001)<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://upload.wikimedia.org/wikipedia/en/thumb/2/2f/Claude_Elwood_Shannon_(1916-2001).jpg/200px-Claude_Elwood_Shannon_(1916-2001).jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="https://upload.wikimedia.org/wikipedia/en/thumb/2/2f/Claude_Elwood_Shannon_(1916-2001).jpg/200px-Claude_Elwood_Shannon_(1916-2001).jpg" width="141" /></a></div>
Claude Shannon was born a hundred years ago Saturday. Shannon had an incredible career but we know him best for his 1948 paper <a href="http://dx.doi.org/10.1145/584091.584093">A Mathematical Theory of Communication</a> that introduced entropy and information theory to the world. Something I didn't know until looking him up: Shannon was the first to define information-theoretic security and show that the one-time pad is the one and basically only perfectly secure code.<br />
<br />
Entropy has a <a href="https://en.wikipedia.org/wiki/Entropy_(information_theory)#Definition">formal definition</a>, the minimum expected number of bits to represent the output of a distribution. But I view information as a more abstract concept of which entropy is just one substantiation. When you think of concepts like conditional information, mutual information, symmetry of information, the idea of an underlying distribution tends to fade away and you begin to think of information itself as an entity worth mentioning. And when you look at Kolmogorov Complexity, often called algorithmic information theory, the measure is over strings, not distributions, yet has many of the same concepts and relationships in the entropy setting.<br />
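The formal definition fits in a few lines of code; a quick sketch of Shannon entropy in bits:

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits: H = -sum p*log2(p), taking 0*log 0 = 0."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))  # 1.0  (a fair coin carries one bit)
print(entropy([0.25] * 4))  # 2.0  (four equally likely outcomes: two bits)
```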
<br />
Computational Complexity owes much to Shannon's information. We can use information theory to get lower bounds on communication protocols and circuits, and even upper bounds on algorithms. Last spring the Simons Institute for the Theory of Computing had a semester program on <a href="https://simons.berkeley.edu/programs/inftheory2015">Information Theory</a>, including a workshop on <a href="https://simons.berkeley.edu/workshops/inftheory2015-3">Information Theory in Complexity Theory and Combinatorics</a>. Beyond theory, relative entropy, or <a href="https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence">Kullback–Leibler divergence</a>, plays an important role in measuring the effectiveness of machine learning algorithms.<br />
<br />
We live in an age of information, growing dramatically every year. How do we store information, how do we transmit it, how do we learn from it, how do we keep it secure and private? Let's celebrate the centenary of the man who gave us the framework to study these questions and so much more.<br />
<br />
http://blog.computationalcomplexity.org/2016/04/claude-shannon-1916-2001.html (Lance Fortnow)<br />
<br />
Mon, 25 Apr 2016: Some short bits from the Gathering for Gardner Conference<br />
I attended G4G12 (Gathering for Gardner), a conference that meets every two years (though the gap between the first and second was three years) to celebrate the work of Martin Gardner. Most of the talks were on recreational mathematics, but there were also some on magic and some that are hard to classify.<br />
<br />
Martin Gardner had a column in Scientific American called Mathematical Games from 1956 to 1981. His column inspired many people to go into mathematics. Or perhaps people who liked math read his column. The first theorem I ever read outside of a classroom was in his column. It was, in our terminology, that a connected graph is Eulerian iff every vertex has even degree.<br />
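That theorem is easy to check by machine; here is a small sketch (the helper name <code>is_eulerian</code> is mine, and it assumes the input graph is connected, as the theorem requires):<br />

```python
from collections import Counter

def is_eulerian(edges):
    """For a CONNECTED graph given as a list of edges, decide whether it has
    an Eulerian circuit: true iff every vertex has even degree."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return all(d % 2 == 0 for d in degree.values())

# A square: every vertex has degree 2, so it is Eulerian.
print(is_eulerian([(0, 1), (1, 2), (2, 3), (3, 0)]))                  # True
# A square plus both diagonals: every vertex has degree 3, so it is not.
print(is_eulerian([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]))  # False
```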
<br />
For a joint review of six G4G proceedings see <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/gardnerg.pdf">here</a>. For a joint review of six books on recreational math including three of Gardner's, see <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/gardneraha.pdf">here</a>. For a review of a book that has serious math based on the math he presented in his column see <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/gardner21.pdf">here</a>.<br />
<br />
The talks at G4G are usually 6 minutes long, so you can learn about a nice problem and then work on it yourself. There was a large variety of talks and topics. Many of the talks do not have an accompanying paper. Many of them are not on original material. But none of this matters--- the talks were largely interesting and told me stuff I didn't know.<br />
<br />
64=64 and Fibonacci, as Studied by Lewis Carroll, by Stuart Moshowitz. This was about a Lewis Carroll puzzle where he put together shapes one way to get a rectangle of area 65 and another way to get a square of area 64. The following link is NOT to his talk or to a paper of Moshowitz's, but it is about the problem: <a href="https://mathlesstraveled.com/2011/05/02/an-area-paradox/">here</a><br />
<br />
How Math can Save your life by <a href="https://en.wikipedia.org/wiki/Susan_Marie_Frontczak">Susan Marie Frontczak</a>. This was partly a talk about bricks and weights, and then she stood on the desk and sang <a href="https://www.youtube.com/watch?v=VuRgiZVMUO0">this song</a> (that's not her singing it).<br />
<br />
Twelve ways to trisect an angle by David Richeson. This was NOT a talk about cranks who thought they had trisected an angle with straightedge and compass. It was about people who used ruler, compass, and JUST ONE MORE THING. I asked David later if the people who trisected the angle before it was shown impossible had a research plan to remove the ONE MORE THING and get a real trisection. He said no- people pretty much knew it was impossible even before the proof.<br />
<br />
The Sleeping Beauty Paradox Resolved by Pradeep Mutalik. This paradox would take an entire blog post to explain, so here is a pointer to the Wikipedia entry on it: <a href="https://en.wikipedia.org/wiki/Sleeping_Beauty_problem">here</a>. AH, this one DOES have a paper associated with it, so you can read his resolution <a href="https://www.quantamagazine.org/20160129-solution-sleeping-beautys-dilemma/">here</a><br />
<br />
Larger Golomb Rulers by Tomas Rokicki. A <a href="https://en.wikipedia.org/wiki/Golomb_ruler">Golomb Ruler</a> is a ruler with marks on it so that all of the distances between marks are distinct. The number of marks is called the order of the ruler. Constructing a Golomb ruler is easy (e.g., marks at the 1,2,4,8,... positions works, since differences of distinct powers of two are distinct). The real question is to find one of shortest length. They had some new results but, alas, I can't find them on the web.<br />
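The distinct-distances check is a few lines of code; a quick sketch (the helper name <code>is_golomb</code> is mine):<br />

```python
from itertools import combinations

def is_golomb(marks):
    """True iff all pairwise distances between the marks are distinct."""
    dists = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(dists) == len(set(dists))

# Powers of two always work, since differences of distinct powers of two are distinct...
print(is_golomb([1, 2, 4, 8, 16]))   # True
# ...but they are far from shortest: an optimal order-5 ruler has length 11.
print(is_golomb([0, 1, 4, 9, 11]))   # True
print(is_golomb([0, 1, 2, 4]))       # False (the distance 1 appears twice)
```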
<br />
Chemical Pi by John Conway. There are people who memorize the first x digits of pi. John Conway does something else. He has memorized the digits of pi and the chemical elements in the following way:<br />
<br />
HYDROGEN 3.141592653 HELIUM next 10 digits of pi LITHIUM etc<br />
<br />
that is, he memorized the digits of pi in groups of 10, separated by the chemical elements in the order they appear on the periodic table. He claims this makes it easier to answer questions like: What is the 87th digit of pi? He also claims it gives a natural stopping point for how many digits of pi you need to memorize (need? maybe want). (ADDED LATER WHEN I CORRECTED HELIUM TO HYDROGEN: here are some mnemonic devices: <a href="https://www.mnemonic-device.com/chemistry/periodic-table/periodic-table-of-elements/">here</a>.)<br />
<br />
This post is getting long so I may report on more of the talks in a later post.<br />
<br />
<br />
<br />
<br />
<br />
<br />http://blog.computationalcomplexity.org/2016/04/some-short-bits-from-gathering-for.htmlnoreply@blogger.com (GASARCH)7tag:blogger.com,1999:blog-3722233.post-6594221842661710665Thu, 21 Apr 2016 13:07:00 +00002016-04-21T09:07:23.503-04:00The Master AlgorithmWe see so few popular science books on computer science, particularly outside of crypto and theory. Pedro Domingos' <a href="http://www.amazon.com/Master-Algorithm-Ultimate-Learning-Machine/dp/0465065708/ref=as_li_ss_tl?s=books&ie=UTF8&qid=1461241628&sr=1-1&keywords=master+algorithm&linkCode=ll1&tag=computation09-20&linkId=895411fe7361c0f6bd783e038d1ac56d">The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake the World</a>, despite the hyped title and prologue, does a nice job giving the landscape of machine learning algorithms and putting them in a common context from their philosophical underpinnings to the models that they build on, all in a mostly non-technical way. I love the diagram he creates:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-0dPkMd0BD44/VxjI2d9fyWI/AAAAAAABWN0/wSGS-ShV29YVI984nPMhYtHY6mkQZ4zKgCLcB/s1600/Five%252BTribes%252Bof%252BML%252BAccording%252Bto%252BDomnigos.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://2.bp.blogspot.com/-0dPkMd0BD44/VxjI2d9fyWI/AAAAAAABWN0/wSGS-ShV29YVI984nPMhYtHY6mkQZ4zKgCLcB/s320/Five%252BTribes%252Bof%252BML%252BAccording%252Bto%252BDomnigos.png" width="319" /></a></div>
Working out from the inner ring are the representations of the models, how we measure goodness, the main tool used to optimize the model, and the philosophies that drove each model. The book hits on other major ML topics including unsupervised and reinforcement learning.<br />
<br />
In the bullseye you can see the "Master Equation" or the Master Algorithm, one learning algorithm to rule them all. The quest for such an algorithm drives the book, and Domingos describes his own, admittedly limited attempts, towards reaching that goal.<br />
<br />
I diverge from Domingos in whether we can truly have a single Master Algorithm. What model captures all the inner-ring models above? Circuits. A Master Algorithm would find a minimum-sized circuit relative to some measure of goodness. You can do that if P = NP, and while we don't think circuit minimization is NP-hard, solving it would break cryptography and factor numbers. One of Domingos' arguments states "If we invent an algorithm that can learn to solve satisfiability, it would have a good claim to being the Master Algorithm". Good luck with that.http://blog.computationalcomplexity.org/2016/04/the-master-algorithm.htmlnoreply@blogger.com (Lance Fortnow)5tag:blogger.com,1999:blog-3722233.post-4769870258477367307Mon, 18 Apr 2016 14:37:00 +00002016-04-18T10:37:18.216-04:00Its hard to tell if a problem is hard. Is this one hard?Here is a problem I heard about at the Gathering for Gardner. Is it hard? easy? boring? interesting? I don't know.<br />
<br />
Let N={1,2,3,...}<br />
<br />
PROBLEM: the parameters are s (the start point) and f (not sure why it's called f). Both are in N.<br />
<br />
Keep in mind the following sequence of operations, in this order:<br />
<br />DIVIDE BY f, SUBTRACT f, ADD f, MULTIPLY by f.<br />
<br />
Form the following sequence of numbers in N:<br />
<br />
a(0)= s<br />
<br />
Assume a(0),...,a(n) are known. Let A = {a(0),...,a(n)}. N-A is the set of elements of N that are NOT in A.<br />
<br />
If a(n)/f is in N-A then a(n+1)=a(n)/f<br />
<br />
Else<br />
<br />If a(n)-f is in N-A then a(n+1)=a(n)-f<br />
<br />
Else<br />
<br />
If a(n)+f is in N-A then a(n+1)=a(n)+f<br />
<br />
Else<br />
<br />
If a(n)*f is in N-A then a(n+1) = a(n)*f<br />
<br />
Else<br />
<br />
If none of the above holds then the sequence terminates.<br />
<br />
Let's do an example! Let s=14 and f=2.<br />
<br />
14, 7, 5, 3, 1, 2, 4, 6, 8, 10, 12, 24, 22, 11, 9, 18, 16, 32, 30, 15, 13, 26, 28, 56, 54, 27, 25, 23, 21,<br />
<br />
19, 17, 34, 36, 38, 40, 20, STOP since 10, 18, 22, 40 are all on the list.<br />
<br />
Let's do another example! Let s=7, f=2.<br />
<br />
7, 5, 3, 1, 2, 4, 6, 8, 10, 12, 14, 16, 18, 9, 11, 13, 15, 17, ... (keeps going)<br />
<br />
If f=2 and you get to an odd number x so that ALL of the odds less than x have already appeared but NONE of the odd numbers larger than x have appeared, then the sequence will go forever<br />
with x, x+2, x+4, ...<br />
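These rules are easy to simulate if you want to experiment with the questions below. A sketch (the function name and the 10,000-step cap are my own choices; a run that hits the cap has merely not stopped yet, not been proven to run forever):<br />

```python
def sequence(s, f, max_steps=10000):
    """Run the rules above starting from s: from a(n), try divide by f (if exact),
    subtract f, add f, multiply by f, in that order, taking the first candidate
    that is a positive integer not already in the sequence.
    Returns (sequence, stopped); stopped=False only means it did not stop
    within max_steps, not that it provably runs forever."""
    seq, seen = [s], {s}
    for _ in range(max_steps):
        a = seq[-1]
        candidates = ([a // f] if a % f == 0 else []) + [a - f, a + f, a * f]
        for c in candidates:
            if c >= 1 and c not in seen:
                seq.append(c)
                seen.add(c)
                break
        else:                      # no legal move: the sequence terminates
            return seq, True
    return seq, False

seq, stopped = sequence(14, 2)
print(stopped, len(seq), seq[-1])   # True 36 20 -- matches the first example
print(sequence(7, 2)[1])            # False -- consistent with the odd-number argument
```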
<br />
QUESTIONS and META QUESTIONS<br />
<br />
1) Can one characterize the pairs (s,f) for which the sequence stops?<br />
<br />
2) Is it decidable, given (s,f), whether the sequence stops?<br />
<br />
3) Both (1) and (2) for either fixed s or fixed f.<br />
<br />
4) Are the above questions easy?<br />
<br />
5) Are the above questions interesting? <br />
<br />
There are four categories:<br />
<br />
Easy and Interesting- Hmmm, if it's TOO easy (which I doubt) then I suppose it can't be interesting.<br />
<br />
Easy and boring.<br />
<br />
Hard and interesting. This means that some progress can be made and perhaps connections to other mathematics.<br />
<br />
Hard and Boring. Can't solve it and are not enlightened by the effort.<br />
<br />
<br />http://blog.computationalcomplexity.org/2016/04/its-hard-to-tell-if-problem-is-hard-is.htmlnoreply@blogger.com (GASARCH)8tag:blogger.com,1999:blog-3722233.post-4948910746694899353Thu, 14 Apr 2016 11:37:00 +00002016-04-14T07:37:29.735-04:00Who Controls Machine Learning?After AlphaGo's victory, the New York Times ran an article <a href="http://www.nytimes.com/2016/03/26/technology/the-race-is-on-to-control-artificial-intelligence-and-techs-future.html">The Race Is On to Control Artificial Intelligence, and Tech’s Future</a>.<br />
<blockquote class="tr_bq">
A platform, in technology, is essentially a piece of software that other companies build on and that consumers cannot do without. Become the platform and huge profits will follow. Microsoft dominated personal computers because its Windows software became the center of the consumer software world. Google has come to dominate the Internet through its ubiquitous search bar. If true believers in A.I. are correct that this long-promised technology is ready for the mainstream, the company that controls A.I. could steer the tech industry for years to come.</blockquote>
I then <a href="https://twitter.com/fortnow/status/714513427872538625">tweeted</a> "Can a company control AI? More likely to become a commodity." The major machine learning algorithms are public knowledge and one can find a number of open-source implementations including Google's own <a href="https://www.tensorflow.org/">TensorFlow</a> that powered AlphaGo. What's to stop a start-up from implementing their own machine learning tools on the cloud?<br />
<br />
Some of my readers' comments forced me to rethink my hasty tweet. First, Google, Microsoft and Amazon can create ML infrastructure, cloud hardware that optimizes computational power and storage for machine learning algorithms to get a level of data analysis that one couldn't replicate in software alone.<br />
<br />
More importantly, Google etc. have access to huge amounts of data. Cloud companies can provide pretrained machine learning algorithms. Google <a href="https://cloud.google.com/products/machine-learning/">provides</a> image classification, voice transcription and translation. Microsoft <a href="https://azure.microsoft.com/en-us/services/cognitive-services/">offers</a> face and emotion detection and speech and text analysis. One could imagine, in the absence of privacy issues, Google taking your customer data, matching it with data that Google already has on the same customers to draw new inferences about how to market to those customers better.<br />
<br />
With almost all our computing heading to the cloud, cloud computing providers will continue to compete and provide ever-better tools in machine learning and beyond. Eventually will one company "control AI"? That would surprise me, but we may still end up with an AI oligarchy. http://blog.computationalcomplexity.org/2016/04/who-controls-machine-learning.htmlnoreply@blogger.com (Lance Fortnow)5tag:blogger.com,1999:blog-3722233.post-7776462762560228708Mon, 11 Apr 2016 01:11:00 +00002016-04-10T21:11:49.775-04:00What Rock Band Name would you choose?<br />
I looked up my colleague Dave Mount on Wikipedia and found that he was a drummer for the glam rock band <a href="https://en.wikipedia.org/wiki/Mud_(band)">Mud</a>. He informed me that (a) on Wikipedia he is <a href="https://en.wikipedia.org/wiki/David_Mount">David Mount</a> and (b) if he had a rock band it would be named <i>Fried Purple Ellipsoids.</i><br />
<i><br /></i>
This set off an email discussion where people said what their rock band name would be. I noticed that many ideas for names had variants. For example, my favorite for Ramsey Theorists: <i>The Red Cliques </i>could be<br />
<br />
<i>The Red Cliques</i><br />
<br />
<i>Red Clique</i><br />
<br />
<i>Bill Gasarch and the Red Cliques!</i><br />
<br />
<i>Clique!</i><br />
<br />
So below I list one variant of each name but keep in mind that there are others.<br />
<br />
<i>The Hidden Subgroups</i><br />
<br />
<i>Amplitudes with Attitude </i><br />
<i><br /></i>
<i> Schrodinger's cat (I wonder if this IS a rock band already)</i><br />
<i><br /></i> <i>The Red Cliques</i><br />
<i><br /></i>
<i> Fried purple ellipsoids</i><br />
<i><br /></i>
<i> Fried green ellipsoids</i><br />
<i><br /></i>
<i>BIG A-G-T</i><br />
<br />
<i>The Biconnected Sets</i><br />
<i><br /></i>
<i>PRAM!</i><br />
<i><br /></i>
<i>BPP! (I wonder if any complexity class would work.)</i><br />
<i><br /></i>
<i>SAT (I wonder if other one-word problems would work. TSP!)</i><br />
<i><br /></i>
<i>Karp and the reductions</i><br />
<br />
<i>Avi and the derandomizers </i><br />
<br />
<i>Aravind and the Expanders</i><br />
<i><br /></i>
<i>(Could replace Karp, Avi, and Aravind with others, but these are the first that</i><br />
<i>came to mind. Plus THE EXPANDERS was Aravind Srinivasan's idea.)</i><br />
<br />
<i>The MIT Logarhythms</i> (This is a real a cappella group; <a href="https://mitlogs.com/">see here</a>.)<br />
<br />
<i>The Discrete Logarhythms</i><br />
<br />
<i>RSA!</i><br />
<i><br /></i>
<i>The Oracles</i><br />
<i><br /></i>
<i>The Interactive Proofs</i><br />
<i><br /></i>
<i>The Natural Proofs</i><br />
<i><br /></i>
<i>Fried Green Proofs</i><br />
<i><br /></i>
If we expand to include math we get lots more, so I'll just mention one real one: <a href="https://www.casa.org/groups/klein-four-group">The Klein Four</a>, an a cappella group.<br />
<br />
SO- what would YOUR rock band name be?<br />
<br />
<br />
<br />
<br />http://blog.computationalcomplexity.org/2016/04/what-rock-band-name-would-you-choose.htmlnoreply@blogger.com (GASARCH)3tag:blogger.com,1999:blog-3722233.post-466295802437248693Thu, 07 Apr 2016 11:02:00 +00002016-04-07T07:02:41.394-04:00It's All About the JobsIn the April CACM Moshe Vardi asks <a href="http://cacm.acm.org/magazines/2016/4/200176-are-we-headed-toward-another-global-tech-bust/fulltext">Are We Headed toward Another Global Tech Bust?</a> I agree with some of Vardi’s points, mostly that VC money chasing after unicorns (potential billion-dollar start-ups) will not continue at its heavy pace and we’re already seeing a slow down. But I disagree with Vardi’s assessment that “we should brace ourselves for another global tech and enrollment bust” in computer science. I suspect we’ll see more of a reality check, but that reality looks extremely strong.<br />
<br />
Vardi claims that “It is the dream of joining a unicorn that probably attracts many students to study computing”. It’s not just the unicorns bringing students to computer science, but essentially a 100% employment rate for CS graduates looking for a job in the field, many receiving six-figure starting salaries. Few, if any, other disciplines can claim full employment after the bachelor’s degree. Industry is desperate to hire computing professionals in machine learning, cloud computing, cybersecurity, mobile computing, automation, robotics and data science, among others. Not just the computing companies but every industry that deals with data, which is pretty much every industry. Unicorns may become rarer but we won’t see a decline in demand for computer science students until we automate ourselves out of a job.<br />
<br />
Take a look at this chart from Ed Lazowska's <a href="http://www.cccblog.org/2016/03/31/where-the-jobs-are-2016-edition/">Where The Jobs Are – 2016 Edition</a>. Those CS jobs won't fill themselves.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-xcTUrjErHOc/VwLZv2YN3lI/AAAAAAABV-k/NVSG00EhOf4SXBCSQ5ptcVBxB12bUoa7w/s1600/Jobs%2Bvs.%2Bdegrees.jpg" imageanchor="1" style="clear: left; float: center; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="335" src="https://1.bp.blogspot.com/-xcTUrjErHOc/VwLZv2YN3lI/AAAAAAABV-k/NVSG00EhOf4SXBCSQ5ptcVBxB12bUoa7w/s640/Jobs%2Bvs.%2Bdegrees.jpg" width="520" /></a></div>
<br />
<div>
<br /></div>
http://blog.computationalcomplexity.org/2016/04/its-all-about-jobs.htmlnoreply@blogger.com (Lance Fortnow)11tag:blogger.com,1999:blog-3722233.post-825143172796067327Tue, 05 Apr 2016 13:30:00 +00002016-04-05T09:30:27.442-04:00Are Perfect Numbers Bigger than Six initial sums of odd cubes (answered)<br />
(NONE of this is my work. In fact some of it is on Wikipedia.)<br />
<br />
In my <a href="http://blog.computationalcomplexity.org/2016/04/are-perfect-numbers-bigger-than-6.html">last blog</a> I noticed that<br />
<br />
28 = 1<sup>3</sup> + 3<sup>3</sup><br />
<br />
496= 1<sup>3</sup> + 3<sup>3</sup> + 5<sup>3</sup> + 7<sup>3</sup><br />
<br />
noting that 28 and 496 are the 2nd and 3rd perfect numbers.<br />
<br />
I asked if 8128, the next perfect number is also an initial sum of odd cubes. It is!<br />
<br />
8128 = 1<sup>3</sup> + 3<sup>3</sup> + ... + 15<sup>3</sup><br />
<br />
I also asked if there was something interesting going on. The answer is YES but not that interesting.<br />
<br />
All of the math with proofs are <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/perfect.pdf">here</a>. I sketch below.<br />
<br />
Known Theorem 1: n is an even perfect number iff n is of the form (2<sup>p-1</sup>)(2<sup>p</sup>- 1) where 2<sup>p</sup>-1 is prime.<br />
<br />
Known Theorem 2: 1<sup>3</sup> + 3<sup>3</sup> + 5<sup>3</sup> + ... + (2(m-1)+1)<sup>3</sup> = m<sup>2</sup>(2m<sup>2</sup>-1).<br />
<br />
Interesting theorem: if n is an even perfect number larger than 6 and p is the p from Known Theorem 1 then n is the sum of the first 2<sup>(p-1)/2</sup> odd cubes.<br />
<br />
Why this is less interesting: The proof does not use that n is perfect. It holds for any number of the form 2<sup>p-1</sup>(2<sup>p</sup>-1) where p is odd.<br />
<br />
So the theorem has nothing to do with perfect numbers. Oh well.<br />
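Still, the identity is satisfying to check numerically; a quick sketch (the function name is mine):<br />

```python
def sum_of_first_odd_cubes(m):
    """1^3 + 3^3 + ... + (2m-1)^3; Known Theorem 2 says this equals m^2 * (2m^2 - 1)."""
    total = sum((2 * i + 1) ** 3 for i in range(m))
    assert total == m * m * (2 * m * m - 1)   # verify Known Theorem 2 along the way
    return total

# For odd p, 2^(p-1) * (2^p - 1) is the sum of the first 2^((p-1)/2) odd cubes,
# whether or not 2^p - 1 is prime: p = 5, 7 give the perfect numbers 496 and 8128,
# but p = 9 gives 130816 (not perfect, since 511 = 7 * 73) and the identity still holds.
for p in [3, 5, 7, 9, 13]:
    n = 2 ** (p - 1) * (2 ** p - 1)
    m = 2 ** ((p - 1) // 2)
    print(p, n, n == sum_of_first_odd_cubes(m))
```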
<br />
<br />
<br />
<br />http://blog.computationalcomplexity.org/2016/04/are-perfect-numbers-bigger-than-six.htmlnoreply@blogger.com (GASARCH)0tag:blogger.com,1999:blog-3722233.post-8635354215110811406Mon, 04 Apr 2016 15:52:00 +00002016-04-04T13:29:46.063-04:00Are perfect numbers bigger than 6 initial sums of odd cubes?<br />
I pose two questions today (Monday April 4).<br />
<br />
I will post the answers tomorrow (Tuesday April 5).<br />
<br />
Feel free to comment about the answers. If you don't want clues, don't look at the comments.
If I need to clarify something I will do it in the main post. So, to reiterate: feel free to leave spoilers, but if you want to avoid reading them, don't read the comments.<br />
<br />
Note:<br />
<br />
The first four perfect numbers are 6, 28, 496, 8128<br />
<br />
28 = 1<sup>3</sup> + 3<sup>3</sup><br />
<br />
496 = 1<sup>3</sup> + 3<sup>3</sup> + 5<sup>3</sup> + 7<sup>3</sup><br />
<br />
Is 8128 the sum of the first six odd cubes? No, and that is not one of my questions.<br />
<br />
Questions:<br />
<br />
1) Is there a k such that 8128 is the sum of the first k odd cubes?<br />
<br />
2) Is there something interesting going on here?http://blog.computationalcomplexity.org/2016/04/are-perfect-numbers-bigger-than-6.htmlnoreply@blogger.com (GASARCH)7tag:blogger.com,1999:blog-3722233.post-2407499193923243828Fri, 01 Apr 2016 10:50:00 +00002016-04-01T06:53:05.051-04:00The Machine Learning TurkGoogle's AlphaGo took the world by storm when it won its match with Lee Sedol but Demis Hassabis now acknowledges the dark truth. Google wanted to promote its cloud computing division as Amazon AWS and Microsoft Azure have quite the head start. Google needed a killer app that would bring users to Google Cloud and decided they could win if they had the best machine learning tools. They bought Deepmind, run by Hassabis, and needed a showcase event and decided to focus on Go, a game yet to be conquered by computers. Hassabis and his team used clever machine learning techniques on top of Monte Carlo Tree Search but only made mild improvements to the game. Google was growing desperate so a plan was hatched.<br />
<br />
Using a modern version of the mechanical turk, an 18th century chess playing automaton that secretly hid a human inside playing the game, Hassabis enlisted Japanese Go player Yuta Iyama to secretly choose the moves for AlphaGo. Iyama, who worked with Google when they agreed to remove Iyama's embarrassing Karaoke videos from YouTube, didn't have to physically be in the machine but relayed the moves by a method Hassabis wouldn't reveal. AlphaGo, secretly getting its moves from Iyama, easily dispatched the European champion in October.<br />
<br />
Hassabis and his team wrote up their failed algorithms and found it shockingly easy to fool the Nature editors and reviewers. Yann LeCun of Facebook looked at the Google team's <a href="http://www.nature.com/nature/journal/v529/n7587/full/nature16961.html">Nature paper</a> and didn't see that much different from what Facebook had tried. "I just figured Google had chosen better parameters to make their program successful. At the time I should have realized what Google was up to."<br />
<br />
Google took a risk challenging Lee Sedol but Sedol, not realizing he was really facing Iyama, played the wrong style of game and lost the match four games to one.<br />
<br />
Will this revelation hurt the future of AI? "Machine learning continues to change society, but when it comes to Go," said LeCun, "Alpha fools".http://blog.computationalcomplexity.org/2016/04/the-machine-learning-turk.htmlnoreply@blogger.com (Lance Fortnow)10tag:blogger.com,1999:blog-3722233.post-3683747460014125843Mon, 28 Mar 2016 11:45:00 +00002016-03-28T12:48:46.471-04:00MohammadTaghi HajiAghayi on David Johnson <div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="font-family: inherit;"><span style="color: #222222; ">More than a week ago,
I heard the very sad news that David Johnson passed away after a one-year
fight with cancer. I felt that I should write a memorial note for him. Indeed I
have done the same for </span><a href="http://blog.computationalcomplexity.org/2012/06/mihai-patrascu-1982-2012.html"><span style="">Mihai Pătraşcu</span></a><span style="color: #222222; "> in the same blog and both Mihai and David were very similar to
me in several respects: both were my colleagues at AT&T and, more
importantly, my dear friends; both got their Ph.D.s from MIT (the same place
that I got my Ph.D.); both were extraordinary researchers; and
both passed away due to cancer after almost a year-long fight with it (and I
was closely aware of their situations in that year). Indeed David read my memo
for Mihai and he told me that he liked it. In addition, there is another reason that I feel respect for David; he was just a bit older than my father who also passed away very recently. So here I would like to put my
thoughts into words for David (and this took me more time in this case
since I wanted to mention some new thoughts given the </span><a href="http://blog.computationalcomplexity.org/2016/03/david-johnson-1945-2016.html"><span style="">comments</span></a><span style="color: #222222; "> already
in this blog). To do so, I would like to mention some of David’s personal characteristics
that I appreciated a lot and give some examples of them from my interactions
with him. Indeed I even mentioned some of these to him when he was alive
and told him that because of these (and other reasons), I am always proud to mention
that I had him as my boss at some point in my career.</span></span></div>
<div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="font-family: inherit;"><br /></span></div>
<div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="font-family: inherit;"><span style="color: #222222; ">First of all, David
was very humble and modest, especially given his extraordinary </span><a href="http://davidsjohnson.net/vita14.pdf"><span style="">CV</span></a><span style="color: #222222; ">: he won several awards, notably the Knuth
Prize; he is the co-author of one of the top most-cited books in CS; he was
a fellow of almost every community that he was involved with (e.g., ACM, SIAM,
AT&T); he was a member and the chair of several prestigious award
committees (like the Gödel, Knuth, ACM Kanellakis,</span> <span style="color: #222222; ">and ACM Thesis Awards) and indeed he was a founder of some of them (e.g.,
Kanellakis); and he was the founder of SODA, the best algorithms conference,
among others. Despite all this he was a very humble and modest man, and I think
lots of people who interacted with him will fully agree on this. Just to give
an example, in 1998, while I was still a second-year undergrad at Sharif
University, I sent him an email asking whether he was aware of any book similar
to Garey & Johnson but for parallel computing (indeed this was my first remote
interaction with him); I was shocked at how fast he answered my email, in just a couple
of hours, with a relevant reference. This was especially exciting and encouraging
for me, since several other people never answered my emails at that time. More
interestingly, later in 2012, I told him personally that I admired him for answering
that email. He told me to wait a second, and in a couple of minutes he could
find the exact email from 1998 that I had sent him; then we even discussed
some English improvements for the email text as well.<o:p></o:p></span></span></div>
<div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="font-family: inherit;"><br /></span></div>
<div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="color: #222222; font-family: inherit; ">Second he was a
perfectionist in several respects. Here are some examples. He was often the
reviewer for P=NP or P!=NP papers for several journals. Probably most of us
do not even look at these papers unless they are written by someone well known; however,
he read these papers very carefully to find the exact bugs and mention
them to the authors. Indeed, even when I sent him referee requests for
conferences for which I served as a PC member, he always spent a lot of time
reading the paper very carefully and often came up with novel improvements and
simplifications, sometimes to the extent that the authors of the paper under review
wanted to have this anonymous referee as a co-author. All this happened even though
he was a very busy man; he still took the task of refereeing a
paper very seriously and respected the authors (and I think this is an example
that lots of us can learn from). He was a very good writer as well and spent
a lot of time to improve the presentation of a paper, simplify it, and present
it in a perfect way. I am proud to have one paper coauthored with David, a very
long paper with several co-authors. David took the lead on this paper and indeed
spent all the years that I was with AT&T (and even after that) preparing
the journal version of the paper. Indeed he sent us an almost-final
version in December 2014 (and asked us for comments), just a month before he was
diagnosed with cancer (I hope that we can still send the paper to a journal
given the time that David spent on it). Another example of his perfectionism: he
attended ALL SODAs while he was alive and almost ALL STOCs and FOCSs (except 1-2
years when AT&T had travel restrictions). Not only that, anytime there
were talks at the conference, he attended at least one session. Yet another
example: we had group lunches every day at AT&T. It was David's habit to ask everyone in the
group whether they wanted to join. The interesting point was that he
came at exactly noon EVERY DAY; you could set your watch to 12pm when you
saw him for lunch.<o:p></o:p></span></div>
<div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="font-family: inherit;"><br /></span></div>
<div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="color: #222222; font-family: inherit; ">He was founder of SODA,
the best algorithms conference. Indeed lots of us know David because he was the
founder of SODA and he was handling SODA business meetings for lots of year as
the chair of the steering committee. As a result, I often had lots of
discussion with him regarding SODA and its future. We discussed what the
protocol for selecting the chair of SODA should be, whether SODA should have an
official Rebuttal Phase or not, etc. During these discussions some interesting
topics came up which are good to discuss in the community as well. David
believed that since SODA (and in general other major TCS conferences) is a main
venue for publication, but we still need full and correct mathematical proofs
for our claims (unlike the rest of CS), we should have a <b>five-year period</b>: any major claims and theorems for which the
authors do not provide full proofs in a verifiable manner on arXiv or in a
journal during these five years should be considered officially open for
everyone to grab, prove formally, and get full credit for. Another
discussion was that ideally SODA (and again other major TCS conferences) <b>should go double-blind</b> like lots of
other major CS conferences in other fields. This would help produce a much
fairer selection process in which the authors' names give no advantage or disadvantage
for acceptance (though the PC chair could still see the author lists in some
extreme cases). <o:p></o:p></span></div>
<div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="font-family: inherit;"><br /></span></div>
<div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="color: #222222; font-family: inherit; ">I can probably write
pages and pages of other memories of David's excellent personal characteristics
(e.g. he was a marathon runner, he held the annual barbecue for
AT&T/Bell-labs theory interns, researchers, and alumni for more than two
decades, he served in the Army between his
Master's and Ph.D. and kept the same spirit and discipline for the
rest of his life, he always emphasized putting his middle initial “S.” in
his name, especially due to airport security, since his name is a very common
name, etc), but I think I should stop at this point. <o:p></o:p></span></div>
<div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="font-family: inherit;"><br /></span></div>
<div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="color: #222222; font-family: inherit; ">I hope we have a great memorial event for him at the next SODA (SODA’17), the conference that he founded.<o:p></o:p></span></div>
<div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="font-family: inherit;"><br /></span></div>
<div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="color: #222222; font-family: inherit; ">Rest in Peace David,<o:p></o:p></span></div>
<span style="font-family: inherit;"><br /></span>
<br />
<div class="MsoNormal" style="margin-bottom: 0.0001pt; text-align: justify;">
<span style="color: #222222; "><span style="font-family: inherit;">From Mohammad</span><span style="font-family: "arial" , sans-serif; "><o:p></o:p></span></span></div>http://blog.computationalcomplexity.org/2016/03/mohammadtaghi-hajiaghayi-on-david.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-3919768825342275273Thu, 24 Mar 2016 16:54:00 +00002016-03-24T12:54:35.441-04:00Complexity versus ComplexityFor those interested, I've started writing posts for the <a href="http://predictwise.com/blog/">Predictwise Blog</a>. Predictwise makes predictions of future events such as <a href="http://predictwise.com/politics/2016-president-republican-nomination">who will win the Republican Nomination</a> (currently Trump with an 80% probability) based on prediction markets and other betting sites. This has been a fascinating election in terms of predictions, strategies, rules and game theory and I'm happy to try and make sense out of it over at Predictwise without subjecting my readers here at Computational Complexity to too many political posts.<br />
<br />
A reader had asked me to comment on a Slate article <a href="http://www.slate.com/articles/technology/bitwise/2016/01/a_crude_look_at_the_whole_looks_at_complexity_theory_which_wants_to_understand.html">The Theory of Everything and Then Some</a>, a book review of John Miller's <a href="http://amzn.to/1o8RcHV">A Crude Look at the Whole: The Science of Complex Systems in Business, Life, and Society</a>. John Miller is a social scientist who works on the other "complexity theory," which studies how "simple local rules can have complex global implications". Often complex systems work quite well, like the invisible hand of the economy, but sometimes things go wrong: the article repeatedly mentions the "flash crash" of May 2010, when trading programs reacting to each other caused a major drop in stock prices.<br />
<br />
Our fields with their similar names are not as different as they might appear. Much of what they study is an inherently computational-like process, and we also look at emergent behavior arising from the simple operations of a Turing machine: read, write and move the tape. What they call non-linear we call computation. We do take very different approaches. The computational complexity theory community proves theorems where we can and helps understand the mathematical challenges where we can't. The other complexity theorists try to explain by examples, simulations and simplified models.<br />
<br />
The two communities often, but not always, seem to have disdain for one another and that's a shame. The tools of computational complexity can help understand the power and limitations of complex systems. These collaborations require them to understand how we can help them and for us to be willing to work on problems that may not yield difficult-to-prove theorems. That's what attracts me to prediction markets, a very simple kind of information aggregation system that still is very difficult to analyze as a computational mechanism.<br />
<br />
What's missing from the article is the role that tools like machine learning can play in predicting the outcomes of many complex systems. The big deluge of data that starts off the article may add to the complexity but, almost paradoxically, it also makes it possible to learn from it.<br />
<script async="" charset="utf-8" src="//platform.twitter.com/widgets.js"></script>http://blog.computationalcomplexity.org/2016/03/complexity-versus-complexity.htmlnoreply@blogger.com (Lance Fortnow)2tag:blogger.com,1999:blog-3722233.post-6920660603832080783Mon, 21 Mar 2016 04:21:00 +00002016-03-23T01:23:14.482-04:00Hilary Putnam passed away on March 13<br />
Hilary Putnam passed away on March 13, 2016. Some of the obits say he was a philosopher, mathematician, logician, and computer scientist. <br />
<br />
He is probably best known to readers of this blog for his work on Hilbert's 10th problem and on resolution.<br />
<br />
HILBERT TENTH:<br />
<br />
Recall H10, stated in current terminology: find an ALGORITHM that will, given a polynomial p(x1,...,xn) in many variables with integer coefficients, determine whether it has a solution in the integers.<br />
<br />
In the early 1960s Martin Davis, Hilary Putnam, and Julia Robinson showed that if you also allow exponentiation then the problem is undecidable. In 1970 Yuri Matijasevich showed how to express exponentiation in terms of polynomials, completing the proof. The solution to Hilbert's 10th problem is often credited to all four of them, which seems right to me.<br />
<br />
One consequence of their proof: for any c.e. set A there is a polynomial p such that<br />
<br />
A = { x | exists x1,...,xn p(x,x1,...,xn)=0}<br />
<br />
Later work got the polynomial down to 13 variables.<br />
<br />
<br />
<br />
RESOLUTION:<br />
<br />
John Alan Robinson (but see comments) and later papers by Davis-Putnam and Davis-Logemann-Loveland devised resolution theorem proving, an early approach to SAT solving. Many modern algorithms are based on it. (Note- an earlier version of this post had mistakes in it. I thank Paul Beame's comments below for clarifying the history.)<br />
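The Davis-Putnam-Logemann-Loveland (DPLL) procedure is simple enough to sketch. Here is a minimal toy version in Python: unit propagation plus branching, the skeleton of modern SAT solvers, not the historical algorithms verbatim (the formulas at the end are made-up examples).

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL SAT solver.  A clause is a set of nonzero ints;
    -v means "not v".  Returns a satisfying assignment (dict mapping
    variable -> bool) or None if the formula is unsatisfiable."""
    if assignment is None:
        assignment = {}
    clauses = [set(c) for c in clauses]   # work on copies
    # Unit propagation: a one-literal clause forces that literal.
    while True:
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        lit = next(iter(unit))
        assignment[abs(lit)] = lit > 0
        simplified = []
        for c in clauses:
            if lit in c:
                continue                  # clause already satisfied
            if -lit in c:
                c = c - {-lit}            # literal falsified; drop it
                if not c:
                    return None           # empty clause: conflict
            simplified.append(c)
        clauses = simplified
    if not clauses:
        return assignment                 # every clause satisfied
    # Branch on some variable appearing in a remaining clause.
    v = abs(next(iter(clauses[0])))
    for lit in (v, -v):
        result = dpll(clauses + [{lit}], dict(assignment))
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x2) and (not x2 or x3): satisfiable
print(dpll([{1, 2}, {-1, 2}, {-2, 3}]))
# x1 and not x1: unsatisfiable
print(dpll([{1}, {-1}]))      # None
```

Modern solvers add conflict-driven clause learning and clever branching heuristics on top of exactly this skeleton.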
<br />
HOW TO CLASSIFY HIM:<br />
<br />
I suspect that Hilary Putnam would call himself a philosopher since that was his MOTIVATION. That may be the best way to classify people (if we are inclined to do that): don't look at WHAT they do, look at WHY they do it.<br />
<br />
PHIL OF MATH- one problem with Philosophy, even Phil of Math, is that it's hard to have well-defined questions and therefore hard to answer them. I am NOT criticizing the field, just saying why I would have a hard time working in it.<br />
<br />
<br />
<br />
<br />http://blog.computationalcomplexity.org/2016/03/hillary-putnam-passed-away-on-march-13.htmlnoreply@blogger.com (GASARCH)5tag:blogger.com,1999:blog-3722233.post-2766533177751347178Thu, 17 Mar 2016 11:35:00 +00002016-03-17T07:35:43.542-04:00The Value of ShapleyNobel laureate Lloyd Shapley <a href="http://www.nytimes.com/2016/03/15/business/economy/lloyd-s-shapley-92-nobel-laureate-and-a-father-of-game-theory-is-dead.html">passed away</a> Saturday. We best know Shapley for his stable matching algorithm with David Gale. Nicole Immorlica <a href="http://blog.computationalcomplexity.org/2008/03/life-of-david-gale.html">guest posted</a> on stable matching shortly after Gale's passing in 2008.<br />
<br />
I'd like to talk about another great innovation, the <a href="https://en.wikipedia.org/wiki/Shapley_value">Shapley Value</a>, a solution concept for cooperative games. For example, suppose no candidate has a majority of delegates heading into the Republican convention and there is no winner on the first ballot. Now we have many delegates that might group themselves into coalitions, and a union of coalitions with enough delegates can determine the nominee. Larger coalitions have more power than smaller ones, but even a single-delegate coalition could tip the election. The Shapley value gives each coalition a weight that measures its relative power, with some nice linearity and symmetry properties. In this scenario, the Shapley value of a coalition is the probability that adding that coalition tips the election when coalitions are added in a random order.<br />
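This random-order characterization translates directly into a Monte Carlo estimate for weighted voting games. A sketch in Python (the bloc sizes, quota, and sample count below are made-up illustrative values):

```python
import random

def shapley_estimate(weights, quota, samples=20000, seed=0):
    """Monte Carlo estimate of Shapley values in a weighted voting
    game: a player's value is the probability that its arrival tips
    the running total past the quota in a uniformly random order."""
    rng = random.Random(seed)
    pivots = [0] * len(weights)
    players = list(range(len(weights)))
    for _ in range(samples):
        rng.shuffle(players)
        total = 0
        for p in players:
            total += weights[p]
            if total >= quota:        # player p is pivotal
                pivots[p] += 1
                break
    return [c / samples for c in pivots]

# Hypothetical convention scenario: delegate blocs of 49, 49 and 2,
# with 51 delegates needed to nominate.  The tiny 2-delegate bloc is
# exactly as powerful as the big ones: every estimate is near 1/3.
print(shapley_estimate([49, 49, 2], 51))
```

In that toy game the second bloc to arrive is always the pivotal one, whatever its size, which is why all three values come out equal.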
<br />
Game Theorist Robert Aumann, another Nobel laureate, used the Shapley value to <a href="https://gilkalai.wordpress.com/2009/02/16/which-coalition/">predict winning coalitions in Israeli elections</a>.<br />
<br />
The main challenge of the Shapley value is computational: in general it is <a href="http://dx.doi.org/10.1007/BF01258278">#P-complete</a> to compute, but it can be <a href="http://dx.doi.org/10.1016/j.artint.2008.05.003">approximated efficiently</a>.http://blog.computationalcomplexity.org/2016/03/the-value-of-shapley.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-2030923131164089011Mon, 14 Mar 2016 13:54:00 +00002016-03-14T09:54:08.745-04:00On Phillip Rogaway's The Moral Character of Cryptographic Work.Some people have emailed me asking me to blog about the paper <a href="http://web.cs.ucdavis.edu/~rogaway/papers/moral-fn.pdf">The Moral Character of Cryptographic Work by Phillip Rogaway</a>. I urge you to read it, even if you disagree with it. Especially if you disagree with it. (Hmm- how will you know if you don't read it!)<br />
<br />
There are so many issues raised in this paper that it could be (and might be) the topic of many blog posts. The first three paragraphs are today's topic:<br />
<br />
<i>Preamble. Most academic cryptographers seem to think that our field is a fun,
deep, and politically neutral game—a set of puzzles involving communicating
parties and notional adversaries. This vision of who we are animates a field
whose work is intellectually impressive and rapidly produced, but also quite
inbred and divorced from real-world concerns. Is this what cryptography should
be like? Is it how we should expend the bulk of our intellectual capital? </i><br />
<i><br /></i>
<i>For me, these questions came to a head with the Snowden disclosures of 2013.
If cryptography’s most basic aim is to enable secure communications, how could
it not be a colossal failure of our field when ordinary people lack even a modicum
of communication privacy when interacting electronically? Yet I soon realized
that most cryptographers didn’t see it this way. Most seemed to feel that the
disclosures didn’t even implicate us cryptographers. </i><br />
<i><br /></i>
<i>I think that they do. So I want to talk about the moral obligations of cryptographers,
and my community as a whole. This is not a topic cryptographers
routinely discuss. In this post-Snowden era, I think it needs to be. </i><br />
<br />
My thoughts:<br />
<br />
1) I would add that the Target breach, the Sony hack, and the OPM break-in might also show that crypto has been a failure. He doesn't mention those, but I think they strengthen his case.<br />
<br />
2) Might it be Security that is a colossal failure? Of course, crypto and security go together so it may be hard to disentangle whose failure it is.<br />
<br />
3) Might it be that good crypto research has been done but is not being used- the tech-transfer problem? He later argues that this would only be relevant if crypto had worked on the right problems in the first place.<br />
<br />
4) I tend to think he's right. Rather than me telling you why I think he's right, just read his paper.<br />
<br />
<br />http://blog.computationalcomplexity.org/2016/03/on-phillip-rogaways-moral-character-of.htmlnoreply@blogger.com (GASARCH)17tag:blogger.com,1999:blog-3722233.post-4317983773889402361Wed, 09 Mar 2016 13:43:00 +00002016-03-09T09:10:15.620-05:00David Johnson (1945-2016)<div class="separator" style="clear: both; text-align: center;">
<a href="https://4.bp.blogspot.com/-96ZgleV45Wo/VuAc39z9zBI/AAAAAAABV28/IS4C-88PP6U/s1600/Johnson_DS%254072dpi_0.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="https://4.bp.blogspot.com/-96ZgleV45Wo/VuAc39z9zBI/AAAAAAABV28/IS4C-88PP6U/s200/Johnson_DS%254072dpi_0.jpg" width="200" /></a></div>
David Johnson, a leader and advocate for algorithms and all of theoretical computer science, passed away yesterday at the age of 70. A truly sad day for us all.<br />
<br />
David's 1979 book with Michael Garey, <a href="http://amzn.to/1LcIJyv">Computers and Intractability: A Guide to the Theory of NP-Completeness</a>, is still the best reference on the topic and perhaps the single most important resource in any computer scientist's library. David Johnson also wrote the NP-completeness column for the Journal of Algorithms and later the ACM Transactions on Algorithms, as well as "A Catalog of Complexity Classes" for the 1990 Handbook of Theoretical Computer Science. David founded the Symposium on Discrete Algorithms (SODA), a conference that is now often mentioned with STOC and FOCS as a top theory venue. He created the DIMACS algorithms challenges. He led SIGACT from 1987-1991, really transforming that organization, and served as its face for many years thereafter. I'm only scratching the surface of what he's done for the community, and can think of no one who put more effort into making theoretical computer science as strong as it is.<br />
<br />
Of course David was a great researcher as well, working on NP-completeness and approximation algorithms.<br />
<br />
He was named an <a href="http://awards.acm.org/award_winners/johnson_1325893.cfm">ACM Fellow</a> in 1995 and received the <a href="http://www.sigact.org/Prizes/Service/1997.html">first SIGACT Distinguished Service Prize</a> in 1997 and the <a href="http://www.acm.org/press-room/news-releases/2010/knuth-prize-09">Knuth Prize</a> in 2010. He used his Knuth Prize lecture to push for practical applications of our algorithms. Just last month he was elected to the National Academy of Engineering.<br />
<br />
I worked with David Johnson closely on various SIGACT activities. David never missed a STOC and we always invited him to the SIGACT Executive Committee dinners, not because he had an official role, but because he was David Johnson. I truly respected and admired David and was glad I could call him a friend. We'll miss him deeply. STOC and SODA just won't be the same without him.http://blog.computationalcomplexity.org/2016/03/david-johnson-1945-2016.htmlnoreply@blogger.com (Lance Fortnow)19tag:blogger.com,1999:blog-3722233.post-8048670625440627112Mon, 07 Mar 2016 15:04:00 +00002016-03-07T13:57:59.245-05:00When do we care about the constants?I've been reading two books recently: <a href="http://www.amazon.com/Asymptopia-Student-Mathematical-Library-Spencer/dp/1470409046/ref=sr_1_1?s=books&ie=UTF8&qid=1456408724&sr=1-1&keywords=asymptopia">Asymptopia</a> by Joel Spencer (He turns 70 soon! <a href="http://cims.nyu.edu/conferences/spencer70/index.html">Workshop for it!</a>. My nephew thinks that celebrating your birthday with a workshop would be... odd) and <a href="http://www.amazon.com/Joy-Factoring-Student-Mathematical-Library/dp/1470410486/ref=sr_1_1?s=books&ie=UTF8&qid=1456408803&sr=1-1&keywords=Joy+of+factoring">The Joy of Factoring</a> by Samuel Wagstaff. In terms of content they are on two different topics. In terms of practicality they are different: <i>Asymptopia</i> is clearly a pure math book (there is one chapter on algorithms, but the rest is really pure math) whereas <i>The Joy of Factoring </i>is very practical in that it focuses on real algorithms for the important (for cryptography) practical problem of factoring. However, there is one thing the books have in common: they both often care about multiplicative constants.<br />
<br />
Example from <i>Asymptopia</i>: They gave better and better lower bounds on Ramsey numbers:<br />
<br />
(1) R(k) ≥ (1+o(1))(k/(e sqrt(2))) 2<sup>k/2</sup>, roughly (1+o(1))(0.26k)2<sup>k/2</sup><br />
<br />
(2) R(k) ≥ (1+o(1))(k/e) 2<sup>k/2</sup>, roughly (1+o(1))(0.37k)2<sup>k/2</sup><br />
<br />
(3) R(k) ≥ (1+o(1))(k/sqrt(2)) 2<sup>k/2</sup>, roughly (1+o(1))(0.71k)2<sup>k/2</sup><br />
<br />
(It may be hard to read so I will clarify- the o(1) is little-o, a term that goes to 0 as k gets large.)<br />
<br />
The first lower bound uses the prob method and you the reader has prob seen it or could prob derive it yourself. Prob. The second lower bound uses prob and a clever way of coloring and then tossing out some vertices. The third lower bound uses the Local Lovasz Lemma.<br />
<br />
Note that for this problem Joel Spencer cared about the constant.<br />
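The basic probabilistic-method bound behind (1) is also easy to check numerically: if C(n,k)2<sup>1-C(k,2)</sup> &lt; 1 then a random 2-coloring avoids monochromatic k-sets with positive probability, so R(k) &gt; n. A quick sketch in Python (k=10 is an arbitrary example):

```python
from math import comb, e, sqrt

def ramsey_lower_bound(k):
    """Largest n with C(n,k) * 2^(1 - C(k,2)) < 1; for such n a
    random 2-coloring of K_n avoids monochromatic k-cliques with
    positive probability, so R(k) > n."""
    threshold = 2 ** (comb(k, 2) - 1)   # need C(n,k) < this
    n = k
    while comb(n + 1, k) < threshold:
        n += 1
    return n

k = 10
print(ramsey_lower_bound(k))                 # exact counting bound: 100
print((k / (e * sqrt(2))) * 2 ** (k / 2))    # asymptotic form, about 83.2
```

The exact count beats the asymptotic formula for small k, but as k grows the two agree up to the (1+o(1)) factor, with the constant 1/(e sqrt(2)) ≈ 0.26 in front.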
<br />
Example from <i>The Joy of Factoring: </i>Since many (but not all!) factoring algorithms do not have rigorously proven run times (Number Theory is Hard!) it's harder to give clean examples here. The book often refers to tricks to get constants down, and the notion that constants matter permeates the book. Here is one rigorous example of caring about constants:<br />
<br />
Fermat's difference-of-squares algorithm goes as follows: We want to factor N. Let x=floor(sqrt(N)). Test each of the following numbers for being a square and stop when you get a square: x<sup>2</sup>-N, (x+1)<sup>2</sup>-N, (x+2)<sup>2</sup> - N, etc. When you find an r such that (x+r)<sup>2</sup>-N=y<sup>2</sup> then you have (x+r-y)(x+r+y)=N. Almost surely this is a nontrivial factorization of N. (This algorithm is worse than the trivial sqrt(N) algorithm in some cases; however, it has some of the ideas needed for more sophisticated algorithms including the Quadratic Sieve.) Of course, one might be looking for the right r a long time. How long?<br />
<br />
Let a be the largest divisor of N that is ≤ sqrt(N). Let k=a/sqrt(N). Then the search will take<br />
<br />
1+ (1-k)<sup>2</sup>sqrt(N)/(2k) steps.<br />
<br />
Again note that there are no hidden multiplicative constants.<br />
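Fermat's method is short enough to state directly in code. A sketch in Python, assuming N is odd and composite (the inputs below are made-up examples):

```python
from math import isqrt

def fermat_factor(n):
    """Fermat's difference-of-squares method for an odd composite n.
    Starting near sqrt(n), look for r with (x+r)^2 - n a perfect
    square y^2; then n = (x+r-y)(x+r+y)."""
    x = isqrt(n)
    if x * x < n:
        x += 1                 # start at ceil(sqrt(n))
    while True:
        y2 = x * x - n
        y = isqrt(y2)
        if y * y == y2:        # found a square: n = (x-y)(x+y)
            return x - y, x + y
        x += 1

print(fermat_factor(5959))     # (59, 101), found after testing r = 0, 1, 2
```

As the formula above predicts, the method is fast when N has a divisor close to sqrt(N) (k near 1) and painfully slow otherwise.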
<br />
So when do we care about constants and why?<br />
<br />
1) If you are working on an algorithm for a problem people <i>really </i>want to solve then you need the constants to be small.<br />
<br />
2) If you can get good bounds on the exact constants then you should.<br />
<br />
3) If you have a technique and try it out you might end up just improving the constant. Even so, you have showed that the technique has merit.<br />
<br />
4) Improving the constant may show progress which will later lead to more important improvements.<br />
<br />
5) Chicken and Egg: Here is an example from <i>Asymptopia </i>where he didn't care about the constant: Fix ε. Given three random points in the unit square, what is the prob that the triangle they form has area ≤ ε? He showed it's Θ(ε). This proof is very nice. Tracking the constants used in his proof looks tedious. In order to care about the constants perhaps we need an interesting proof about them. To look for a proof technique that applies to them perhaps we need to care in the first place. Chicken and Egg?<br />
<br />
<br />
<br />http://blog.computationalcomplexity.org/2016/03/when-do-we-care-about-constants.htmlnoreply@blogger.com (GASARCH)4tag:blogger.com,1999:blog-3722233.post-4604792070179450225Wed, 02 Mar 2016 13:46:00 +00002016-03-02T08:50:27.335-05:00Changing This Ancient Art Into a Science<div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://pbs.twimg.com/media/Cce4irsWEAAzF9-.jpg:large" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="200" src="https://pbs.twimg.com/media/Cce4irsWEAAzF9-.jpg:large" width="114" /></a></div>
The ACM <a href="http://www.acm.org/awards/2015-turing">announced yesterday</a> that they will award the 2015 Turing Award to Whitfield Diffie and Martin Hellman for contributions to modern cryptography. The Turing Award is the highest honor in all of computing. John Markoff in the New York Times also has <a href="http://www.nytimes.com/2016/03/02/technology/cryptography-pioneers-to-win-turing-award.html">the story</a>.</div>
<div>
</div>
<div>
Diffie and Hellman are best known for public-key cryptography, the brilliant idea that one could communicate secretly with someone with whom you haven't previously communicated. Without public-key cryptography there would be no e-commerce. Equally important, Diffie and Hellman brought computational complexity to bear, moving cryptography into its modern age. I strongly recommend reading their 1976 gem <a href="http://dx.doi.org/10.1109/TIT.1976.1055638">New Directions in Cryptography</a> (<a href="http://cs.unc.edu/~fabian/course_papers/diffie.hellman.pdf">PDF</a>), particularly the introduction and chapter 6, where Diffie and Hellman connect cryptography to computational complexity and the P v NP problem itself, defined only five years earlier. Here's the first paragraph:</div>
<div>
<blockquote class="tr_bq">
We stand today on the brink of a revolution in cryptography. The development of cheap digital hardware has freed it from the design limitations of mechanical computing and brought the cost of high grade cryptographic devices down to where they can be used in such commercial applications as remote key cash dispensers and computer terminals. In turn, such applications create a need for new types of cryptographic systems which minimize the necessity of secure key distribution channels and supply the equivalent of a written signature. At the same time, theoretical developments in information theory and computer science show promise of providing provably secure cryptosystems, changing this ancient art into a science. </blockquote>
</div>
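The key-exchange idea from that paper fits in a few lines. A toy sketch in Python; the prime and generator below are illustrative only, and real deployments need much larger, carefully chosen parameters:

```python
import secrets

# Toy public parameters: far too small for real security, where
# p would be a carefully chosen prime of 2048 bits or more.
p = 4294967291      # the prime 2**32 - 5
g = 2

# Each party picks a private exponent and publishes g^x mod p.
a = secrets.randbelow(p - 2) + 1    # Alice's secret
b = secrets.randbelow(p - 2) + 1    # Bob's secret
A = pow(g, a, p)                    # Alice sends A to Bob
B = pow(g, b, p)                    # Bob sends B to Alice

# Both sides derive the same shared secret g^(a*b) mod p, while an
# eavesdropper sees only p, g, A and B.
alice_key = pow(B, a, p)
bob_key = pow(A, b, p)
assert alice_key == bob_key
print(hex(alice_key))
```

Recovering the secret from p, g, A and B is the discrete logarithm problem, and the gap between easy exponentiation and (conjecturally) hard inversion is exactly the computational-complexity connection the paper introduced.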
<div>
One question for which I shall offer no opinion: Should <a href="http://www.merkle.com/">Ralph Merkle</a> have been a co-recipient of this award? </div>
http://blog.computationalcomplexity.org/2016/03/changing-this-ancient-art-into-science.htmlnoreply@blogger.com (Lance Fortnow)2