tag:blogger.com,1999:blog-3722233Tue, 06 Oct 2015 22:12:01 +0000typecastfocs metacommentsComputational ComplexityComputational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarchhttp://blog.computationalcomplexity.org/noreply@blogger.com (Lance Fortnow)Blogger2315125tag:blogger.com,1999:blog-3722233.post-1811916031439740547Mon, 05 Oct 2015 21:48:00 +00002015-10-05T16:48:55.699-05:00Is Kim Davis also against Nonconstructive proofs?<br />
Recall that Kim Davis is the Kentucky clerk who refused to issue marriage licenses for same-sex couples and was cheered on by Mike Huckabee and other Republican candidates for Prez. Had she refused to issue marriage licenses for couples who had been previously divorced, then Mike Huckabee would not be supporting her, and the Pope wouldn't have had a private 15-minute meeting with her telling her to stay strong (NOTE- I wrote this post in that tiny window of time when it was believed she had such a meeting with the Pope, which is not true. The Pope DID meet with a former student of his who is gay, and that student's partner.) Had she refused to issue marriage licenses to inter-racial couples (this did happen in the years after the Supreme Court said that states could not ban interracial marriage, <a href="https://en.wikipedia.org/wiki/Loving_v._Virginia">Loving vs Virginia, 1967</a>) then ... hmm, I'm curious what would happen. Suffice it to say that Mike H and the others would probably not be supporting her.<br />
<br />
David Hilbert solved Gordan's problem using methods that were nonconstructive in (I think) the 1890's. This was considered controversial and Gordan famously said <i>this is not math, this is theology.</i> Had someone else solved this problem in 1990 then the fact that the proof is non-constructive might be noted, and the desire for a constructive proof might have been stated, but nobody would think the proof was merely theology.<br />
<br />
I don't think the Prob Method was ever controversial; however, it was originally not used much and a paper might highlight its use in the title or abstract. Now it's used so often that it would be unusual to point to it as a novel part of a paper. If I maintained a website of <i>Uses of the Prob Method in Computer Science</i> then it would be very hard to maintain since papers use it without commentary.<br />
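For readers who haven't seen the Prob Method, the canonical first application is Erdős's lower bound for Ramsey numbers. Here is a quick sketch of the union-bound computation behind R(k,k) &gt; 2<sup>k/2</sup>; this example is mine and not from the post.

```python
from math import comb

def prob_method_bound(k: int) -> int:
    """Largest n for which the union bound shows that a random
    2-coloring of K_n has no monochromatic K_k with positive
    probability, hence R(k,k) > n (Erdos, 1947)."""
    n = k
    # E[# monochromatic K_k's] = C(n,k) * 2^(1 - C(k,2));
    # if this expectation is < 1, some coloring has none.
    while comb(n + 1, k) * 2 ** (1 - comb(k, 2)) < 1:
        n += 1
    return n

print(prob_method_bound(10))  # comfortably above 2^(10/2) = 32
```

The point of the technique: no coloring is exhibited, only the existence of one is shown, which is exactly the nonconstructive flavor discussed above.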
<br />
The same is becoming true of Ramsey Theory. I DO maintain a website of apps of Ramsey Theory to CS (see <a href="http://www.cs.umd.edu/~gasarch/TOPICS/ramsey/ramsey.html">here</a>) and it's getting harder to maintain since using Ramsey Theory is not quite so novel as to be worth a mention.<br />
<br />
SO- when does a math technique (e.g., prob method) or a social attitude (e.g., acceptance of same-sex marriage) cross a threshold where it's no longer controversial? Or no longer novel? How can you tell? Is it sudden or gradual? Comment on other examples from Math! CS! http://blog.computationalcomplexity.org/2015/10/is-kim-davis-also-against.htmlnoreply@blogger.com (GASARCH)2tag:blogger.com,1999:blog-3722233.post-4214365156853800764Thu, 01 Oct 2015 16:48:00 +00002015-10-06T11:09:24.755-05:00Cancer Sucks<div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.cc.gatech.edu/sites/default/files/images/people/karsten-schwan_1.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://www.cc.gatech.edu/sites/default/files/images/people/karsten-schwan_1.jpg" height="200" width="160" /></a></div>
Karsten Schwan said the title quote when we were gathered as a faculty two years ago mourning the Georgia Tech School of Computer Science faculty member Mary Jean Harrold who <a href="http://cacm.acm.org/news/168025-in-memoriam-mary-jean-harrold-1947-2013/fulltext">died from the disease</a>. Karsten just <a href="http://www.scs.gatech.edu/content/college-computing-mourns-loss-regents%E2%80%99-professor-karsten-schwan">lost his own battle with cancer</a> Monday morning and my department is now mourning another great faculty member.<br />
<br />
Just a few months ago, Alberto Apostolico, an algorithms professor at Georgia Tech, also <a href="http://www.cc.gatech.edu/news/427841/college-computing-remembers-alberto-apostolico">passed away from cancer</a>. </div>
<div>
<br /></div>
<div>
I went back through the obituaries in the blog and we lost quite a few to cancer, often way too young, including Mihai Pătraşcu, <a href="http://blog.computationalcomplexity.org/2010/10/benoit-mandelbrot-1924-2010.html">Benoît Mandelbrot</a>, <a href="http://blog.computationalcomplexity.org/2010/10/partha-niyogi-1967-2010.html">Partha Niyogi</a>, <a href="http://blog.computationalcomplexity.org/2008/12/ingo-wegener-1950-2008.html">Ingo Wegener</a>, <a href="http://blog.computationalcomplexity.org/2005/04/clemens-lautemann.html">Clemens Lautemann</a> and <a href="http://blog.computationalcomplexity.org/2004/07/carl-smith-1950-2004.html">Carl Smith</a>. I just taught Lautemann's proof that BPP is in Σ<sup>p</sup><sub>2</sub>∩Π<sup>p</sup><sub>2</sub> in class yesterday.<br />
<br />
With apologies to Einstein, God does play dice with people's lives, taking them at random in this cruel way. Maybe someday we'll find a way to cure or mitigate this terrible disease but for now all I can say is Cancer Sucks. </div>
http://blog.computationalcomplexity.org/2015/10/cancer-sucks.htmlnoreply@blogger.com (Lance Fortnow)2tag:blogger.com,1999:blog-3722233.post-339437391065343906Mon, 28 Sep 2015 17:14:00 +00002015-09-28T12:14:41.926-05:00Venn Diagrams are used (yeah) but abused (boo)<br />
In a <a href="http://blog.computationalcomplexity.org/search?q=Venn+diagrams">prior blog entry</a> I speculated about which math phrases would enter the English language and whether they would be used correctly. I thought <i>Prisoner's Dilemma</i> would enter and be used correctly, but <i>Turing Test</i> and <i>Venn Diagram</i> would not enter.<br />
<br />
Since then I've seen Turing Test used, but only because of the movie <i>The</i> <i>Imitation Game</i>. I don't think it will enter the language until a computer actually passes it for real, which might not be for a while. See <a href="http://www.scottaaronson.com/blog/?p=1858">this</a> excellent post by Scott Aaronson about a recent bogus claim that a machine passed the Turing Test.<br />
<br />
Venn Diagrams seem to be used more (Yeah!) but incorrectly (Boo!)<br />
<br />
1) In <a href="http://www.thedailybeast.com/articles/2015/09/10/will-republicans-move-to-unseat-boehner.html">this article</a> (which inspired this post), about who might replace John Boehner as speaker of the house, there is the following passage:<br />
<br />
<i>Option 3: An acceptable and respected conservative like Jeb Hensarling or<br />Tom Price emerges as speaker. Why these two? First, Paul Ryan doesn’t seem<br />to want the gig, so that leaves us with only a few options for someone who<br />fits in the Venn diagram of being enough of an outsider, well liked, and<br />sufficiently conservative</i>.<br />
<br />
Is this correct use? They really mean the intersection of outsider, well-liked, and sufficiently conservative. One can picture it and it sort of makes sense, but it's not quite correct mathematically.<br />
<br />
2) In <a href="http://popchartlab.tumblr.com/post/93786896999/in-celebration-of-john-venns-180th-birthday">this ad</a> for Venn Beer (in celebration of John Venn's 180th birthday!) they really mean union, not intersection.<br />
<br />
3) <a href="http://latenightseth.tumblr.com/post/93793705536/happy-180th-birthday-to-john-venn-creator-of-the">This Venn Diagram</a> about Vladimir Putin's and your aunt's record collections doesn't really make sense, but I know what they mean and it's funny.<br />
<br />
4) <a href="http://thefaultinourstarsmovie.com/post/87845256595/and-you-thought-women-were-complicated-turns-out">This Venn Diagram</a> about how to woo women is incorrect, not funny, and not mathematically meaningful. <br />
<br />
5) <a href="http://imgur.com/IT5FP?tags">This Venn Diagram</a> involves Doctors, Prostitutes, and TSA agents. At first it is funny and seems to make sense. But then you realize that the intersection of Doctors and Prostitutes is NOT people who make more per hour than you make all day; it's actually prostitutes with medical degrees. It's still funny and I see what they are getting at.<br />
<br />
6) This <a href="http://fivethirtyeight.com/datalab/donald-trump-is-the-nickelback-of-gop-candidates/">Venn Diagram</a> (it's later in the article) of Republican candidates for the 2016 nomination for Prez is mathematically correct and informative, though one may disagree with some of it (Is Trump really Tea-Party or should he be in his own category of Trumpness?)<br />
<br />
<br />http://blog.computationalcomplexity.org/2015/09/venn-diagrams-are-used-yeah-but-abused.htmlnoreply@blogger.com (GASARCH)4tag:blogger.com,1999:blog-3722233.post-3357661575827123993Thu, 24 Sep 2015 12:23:00 +00002015-09-24T07:25:00.767-05:0021st Century ProblemsMy youngest daughter, Molly, a high school senior, was talking colleges with a woman about ten years her senior. The woman remembered all her friends watching the clock so they could go home to check their emails to see if they were accepted. Molly said "Sheesh, you had to go home to check email?"<br />
<br />
My other daughter Annie, a college junior, went on an overnight last Thursday to a place without cell phone reception. She spent Friday night with her friends in her class catching up on emails, texts and Facebook messages.<br />
<br />
Now back in my day (that being the early 80's) we got our college acceptances and rejections by postal mail, where that one crucial bit of information could be determined by the thickness of the envelope. Some of my friends had their mail held by the post office so they could find out a few hours earlier.<br />
<br />
In college I did have email access, in fact I <a href="http://blog.computationalcomplexity.org/2011/06/creating-email-system-at-cornell.html">wrote an email system for Cornell</a>. But most students didn't use email so we resorted to other means. Student organizations could hire a service that put posters on key locations throughout campus. Chalk on sidewalks worked particularly well. The personals section of the Cornell Daily Sun served as a campus bulletin board. In my freshman dorm we had phones in our rooms but no answering machines. We did put little whiteboards on our doors so people could leave us messages. We had a lounge on our floor where you could find most people and talk to them in person. You young people should try that more often.<br />
<br />
We had to coordinate activities and meeting places ahead of time; if someone was late you waited for them. On the other hand I never had to spend my Friday nights catching up on emails and texts.http://blog.computationalcomplexity.org/2015/09/21st-century-problems.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-6876765130667206397Mon, 21 Sep 2015 12:55:00 +00002015-09-21T07:55:54.787-05:00When did Mathematicians realize that Fermat did not have a proof of FLT?I recently came across the following passage which is about Fermat's Last Theorem (FLT).<br />
<br />
<i>Pierre de Fermat had found a proof, but he did not bother to write it down. This is perhaps the most frustrating note in the history of mathematics, particularly as Fermat took his secret to the grave.</i><br />
<br />
<br />
AH- so at one time people thought Fermat DID have a proof of FLT. That is, a proof using just the math of his time, likely a very clever proof. I doubt anyone thinks that Fermat had a proof in this day and age. Actually it has appeared in fiction: in the 2010 <i>Doctor Who</i> episode <i>The Eleventh Hour</i>, the Doctor has to prove to some very smart people that they should take his advice. He does this by showing them Fermat's proof of FLT. Good Science Fiction but highly unlikely as Science Fact. In an episode of ST-TNG (Title: The Royale. Year: 1989) it is claimed that FLT is still open. Whoops. But in an episode of ST-DS9 (Title: Facets. Year: 1995) they refer to `Wiles's proof of FLT'.<br />
<br />
Wikipedia states:<i> It is not known whether Fermat had actually found a valid proof for all exponents n, but it appears unlikely.</i> I think that understates the case.<br />
<br />
<br />
So here is a question for all you math historians out there: When did the math community realize that FLT was really hard?<br />
<br />
We have one clue- the quote I began with. It's from... 2013. Whoops. The book is <i><a href="http://www.amazon.com/The-Simpsons-Their-Mathematical-Secrets/dp/1620402785/ref=pd_sim_14_6?ie=UTF8&refRID=0BWFXMJ1Z2DTNXWFQDKN&dpID=51woMcLhaRL&dpSrc=sims&preST=_AC_UL160_SR107%2C160_">The Simpsons and their mathematical secrets </a></i>by Simon Singh (author of <a href="http://www.amazon.com/Fermats-Enigma-Greatest-Mathematical-Problem/product-reviews/0385493622/ref=cm_cr_pr_hist_2/190-0701399-4452552?ie=UTF8&filterBy=addTwoStar&showViewpoints=0&sortBy=helpful&reviewerType=all_reviews&formatType=all_formats&filterByStar=two_star&pageNumber=1">Fermat's Enigma</a>, which is about the quest to prove FLT). I've read the passages about FLT in the Simpsons book over again to make sure he doesn't someplace say that Fermat probably didn't have a proof. No--- he seems to really say that Fermat had a proof. So what's going on here? Possibilities:<br />
<br />
1) I'm wrong. There are serious credible people who think Fermat had a proof and he talked to them, perhaps while working on Fermat's Enigma. I find this unlikely- I have never, literally never, heard of anyone, not even math cranks, who thinks Fermat had a simple proof. Some cranks think THEY have a simple proof, though even that seems far less common after FLT was proven.<br />
<br />
2) I'm right. He didn't have anyone who was both serious and credible check his book. I find this unlikely. He wrote Fermat's Enigma, so surely he is in contact with people who are both credible and serious.<br />
<br />
3) He did have someone check his book but thought the story was better the way he told it. (This was common on the TV show <i>Numb3rs </i>which never let a mathematical truth get in the way of a good story.)<br />
I find this unlikely since a better way to say it is <i>we'll never know if Fermat had a proof!</i> <br />
<br />
One problem with such mistakes is that it destroys his credibility on other math things he writes of. The info about Doctor Who I got from the book, but I suspect it's correct. And the stuff about the math that appears in the Simpsons is checkable and seems correct. I give one example: in one episode you see in the background<br />
<br />
3987<sup>12</sup> + 4365<sup>12</sup>= 4472<sup>12</sup><br />
<br />
which is correct on a calculator with only 10 digits of precision. Cute! But I have stopped reading any of the math history in the book for fear that I will get incorrect notions in my head.<br />
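The near-miss is easy to check with exact integer arithmetic; here is a quick sketch (mine, not from the book):

```python
# The Simpsons near-miss from the equation above: the two sides agree
# in their first 10 significant digits, which is why a 10-digit
# calculator is fooled, yet the equation is false.
lhs = 3987**12 + 4365**12
rhs = 4472**12

print(lhs == rhs)                      # False
print(str(lhs)[:10] == str(rhs)[:10])  # True

# A disproof without big arithmetic: 3987 and 4365 are both divisible
# by 3, so the left side is divisible by 3, while 4472 is not.
print(lhs % 3, rhs % 3)
```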
<br />
However, back to my original question: Was there a time when people thought Fermat really had a proof? Was there a time when people thought there was an easy proof? When did that change?<br />
<br />
http://blog.computationalcomplexity.org/2015/09/when-did-mathematicians-realize-that.htmlnoreply@blogger.com (GASARCH)8tag:blogger.com,1999:blog-3722233.post-9003988703313356477Thu, 17 Sep 2015 07:49:00 +00002015-09-17T02:49:25.754-05:00The Theorems ConferenceAll too often theoretical computer scientists get more obsessed by proofs than the theorems themselves. I suggest a theorems conference. Here's how it would work:<br />
<br />
Authors would submit two versions of a paper. One has the statement of the theorem and why that theorem matters, but no proofs. The second version includes the proofs.<br />
<br />
The program committee first makes tentative decisions based on the first version of the paper. If tentatively accepted the PC then looks at the second version. The PC can reject the paper if the proofs have significant flaws, gaps or readability issues. The PC cannot reject the paper for any other aspect of the proof such as length or lack of technical depth or originality.<br />
<br />
This way we truly judge papers based on what they prove--what the results add to the knowledge base of the field.<br />
<br />
Of course my plan has many flaws. Some papers with their proofs may have already been posted on archive sites, which the PC members could have seen. More likely, the PC will guess the difficulty of the proof and judge the paper based on this perceived difficulty, and not on the theorem itself.<br />
<br />
We need a culture shift, away from an emphasis on proofs. That's the only way we can judge our results for the results themselves.http://blog.computationalcomplexity.org/2015/09/the-theorems-conference.htmlnoreply@blogger.com (Lance Fortnow)13tag:blogger.com,1999:blog-3722233.post-8678647584142438200Mon, 14 Sep 2015 04:00:00 +00002015-09-13T23:00:59.908-05:00An Open(?) Question about Prime Producting Polynomials Part II in 3-D!(Apologies- No, this post is not in 3-D)<br />
<br />
I posted last week about <a href="http://blog.computationalcomplexity.org/2015/09/an-open-question-about-prime-producing.html">An Open(?) Question about Prime Producing Polynomials</a><br />
<br />
I got several emails about the post with more information, inspiring this post!<br />
All the math in this post can be found in my writeup <a href="http://www.cs.umd.edu/~gasarch/BLOGPAPERS/polyprimes.pdf">Polynomials and Primes,</a><br />
unless otherwise specified. NONE of the results are mine. <br />
<br />
You can read this post without reading the prior one.<br />
<br />
1) KNOWN: Let f(x) ∈ Z[x] be a poly of degree d. Then there exists a non-prime in f(x)'s image (actually there are an infinite number of non-primes, in fact an infinite number of composites). If f(1) is prime then among the values f(1+mf(1)) for m=0,1,2,..., at most 2d-2 are prime.<br />
<br />
2) Algorithm to find an x such that f(x) is not prime: compute f(1), f(1+f(1)),...,f(1+(2d-2)f(1)) until you find one that is not prime. This takes at most 2d-1 evals. OPEN(?): Is there a better deterministic algorithm where we measure complexity by number of evals? Since this is a simple model of computation lower bounds might be possible.<br />
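This 2d-1-evaluation search can be sketched as follows (my code, not from the write-up), using Euler's prime-rich polynomial x<sup>2</sup>-x+41 as the degree-2 test case:

```python
def is_prime(n: int) -> bool:
    n = abs(n)  # count -p as prime when p is, as in these posts
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def nonprime_input(f, d: int) -> int:
    """Return x with f(x) not prime, using at most 2d-1 evaluations
    of the degree-d integer polynomial f."""
    p = f(1)
    if not is_prime(p):
        return 1
    for m in range(1, 2 * d - 1):          # m = 1, ..., 2d-2
        if not is_prime(f(1 + m * p)):
            return 1 + m * p
    raise AssertionError("impossible by the theorem in item 1")

def euler(x: int) -> int:
    return x * x - x + 41                   # prime for x = 0..40

print(nonprime_input(euler, 2))  # 42, since euler(42) = 1763 = 41*43
```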
<br />
3) There is the following randomized algorithm: Eval f(1)- if it's not prime you're done. If f(1) is prime then pick a random m where 0≤ m ≤ (2d-2)<sup>2</sup> and eval f(1+mf(1)). This is non-prime with prob 1- (1/(2d-1)).<br />
<br />
4) What is it about Z that makes this theorem true? In my write-up I show that if D is an integral domain with only a finite number of units (numbers that have multiplicative inverses) then any poly in D[x] has to produce non-primes infinitely often. (A prime in D is a number a such that if a=bc then either b or c is a unit.)<br />
<br />
5) What about if D has an infinite number of units? See <a href="http://www.cs.umd.edu/~gasarch/BLOGPAPERS/polyprimesamm.pdf">this</a> paper for examples of polynomials over integral domains D such that the poly takes on only prime or unit values.<br />
<br />
6) What about polys over Q? over R? over C? In my write up I prove similar theorems for Q and then use that to get theorems for C.<br />
<br />
7) Looking at polys in Z[x,y] is much harder; see <a href="http://www.jstor.org/stable/2975080?seq=1#page_scan_tab_contents">this survey</a>.<br />
<br />
8) If f(x)∈Z[x] is a poly then does there exist a prime in f(x)'s image? An infinite number of primes? Easy but stupid answer is no: f(x)=4x never produces a prime (and f(x)=2x produces only the one prime, 2). Better question: assume that f(x)'s coefficients are rel prime.<br />
<br />
Dirichlet's theorem: if GCD(a,b)=1 then ax+b is prime infinitely often.<br />
<br />
Open: is x<sup>2</sup> + 1 prime infinitely often?<br />
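For that last open problem (one of Landau's problems) the data at least look encouraging; here is a quick empirical count (mine, not from the post):

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# x such that x^2 + 1 is prime; conjectured to be an infinite list.
hits = [x for x in range(1, 101) if is_prime(x * x + 1)]
print(hits[:5])   # [1, 2, 4, 6, 10]
print(len(hits))
```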
<br />
<br />http://blog.computationalcomplexity.org/2015/09/an-open-question-about-prime-producting.htmlnoreply@blogger.com (GASARCH)1tag:blogger.com,1999:blog-3722233.post-5586490118604985932Thu, 10 Sep 2015 14:23:00 +00002015-09-10T09:23:47.541-05:00Designer ChipsA computer architecture researcher talked to me about a role theoretical computer science can play for them: creating a new kind of computer processor. Microprocessors stopped getting faster a decade ago due to energy challenges so computer architects look for new ways to improve performance, moving away from the general-purpose CPU towards processors that handle more specific functions. The GPU, Graphics Processing Unit, has long been around to handle the graphics-intensive needs of modern computers and many have used GPUs for other purposes such as machine learning. These days we can program chips using FPGAs (Field-Programmable Gate Arrays) and are nearly at the point of cheaply compiling directly to hardware. How does this change the theory we do?<br />
<br />
What kind of specialized chips would speed up our algorithms? If we want to find matchings on graphs, for example, is there some routine one could put in a chip that would lead to a much more efficient algorithm?<br />
<br />
On the complexity side, how do we model a computer where we can program the hardware as well as the software? What are the right resource bounds and tradeoffs?<br />
<br />
In general our notions of computers are changing, now with multi-core, cloud computing and designer chips. Not only should we focus on applying theory to these new models of computing, but we should think about what future changes in computing could yield more efficient algorithms. Theorists should be involved in planning the future of computing and we're not even doing a great job reacting to changes around us.http://blog.computationalcomplexity.org/2015/09/designer-chips.htmlnoreply@blogger.com (Lance Fortnow)2tag:blogger.com,1999:blog-3722233.post-5240612111408130205Mon, 07 Sep 2015 05:20:00 +00002015-09-09T10:53:34.862-05:00An Open(?) question about prime-producing-polynomials<br />
Known Theorem: If f(x)∈ Z[x] is prime for all nat number inputs then f(x) is a constant.<br />
<br />
NOTE- Recall that if p is a prime then so is -p.<br />
<br />
Known Proof: Assume f(x) has degree d. f(1) IS prime. Let f(1)=p. Look at<br />
<br />
f(1+p), f(1+2p),...,f(1+(2d+1)p).<br />
<br />
One can easily show that p divides all of these. Hence if they are all primes then they must all be p or -p. Since there are 2d+1 of them, at least d+1 of them are the same, say p. Hence f is the constant p (a nonconstant poly of degree d takes the value p at most d times).<br />
<br />
END of known proof.<br />
<br />
Note that the proof gives the following theorem:<br />
<br />
Let f(x)∈ Z[x] of degree d, let p=f(1), and assume p ≥ 0. The least a such that f(a) is NOT prime is ≤ 1+(2d+1)p.<br />
<br />
(This can probably be improved a bit with some cases, but it's good enough for now.)<br />
<br />
Recall Euler's poly x<sup>2</sup>-x+41 produces primes for x=0,...,40. But at 41 you get a composite. This is much smaller than the upper bound 1+(2d+1)p = 1+5*41=206.<br />
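Both claims, and the gap below the bound, are easy to check mechanically; a quick sketch (mine, not part of the post):

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def euler(x: int) -> int:
    return x * x - x + 41   # d = 2 and p = euler(1) = 41

assert all(is_prime(euler(x)) for x in range(41))   # prime for x = 0..40
assert euler(41) == 41 * 41                          # composite at x = 41

# First non-prime value vs. the theorem's bound 1 + (2d+1)p = 206:
first = min(x for x in range(1, 207) if not is_prime(euler(x)))
print(first)  # 41
```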
<br />
Wolfram MathWorld has a <a href="http://mathworld.wolfram.com/Prime-GeneratingPolynomial.html">page</a> of other polys in Z[x] that produce lots of primes initially, but NONE come close to the bound.<br />
<br />
QUESTIONS:<br />
<br />
Prove a better upper bound.<br />
<br />
Prove a better lower bound (fix d and produce an infinite seq of polys of degree d...)<br />
<br />
Close the gap!<br />
<br />
If this is already known, then let me know please.<br />
<br />
Can also ask for polys in Q[x], R[x], C[x]. For Q[x] and R[x] the same theorem is true- no poly can produce all primes. I suspect it's also true for C[x] but I haven't seen it stated anywhere. (ADDED LATER- The proof for C[x] is easy. First prove it for Q[x]; then, by Lagrange interpolation, any poly with f(integer)=integer for infinitely many integers is in Q[x].)<br />
<br />
You can also NOT include negative primes and see how that changes things.<br />
<br />http://blog.computationalcomplexity.org/2015/09/an-open-question-about-prime-producing.htmlnoreply@blogger.com (GASARCH)0tag:blogger.com,1999:blog-3722233.post-1199023809395005544Thu, 03 Sep 2015 20:01:00 +00002015-09-03T15:01:08.556-05:00WhiplashedI recently watched the movie <a href="http://www.imdb.com/title/tt2582802/">Whiplash</a>, about a college jazz band director, Fletcher, played by J.K. Simmons, who torments his musicians to force them to be their best. The movie focuses on a drummer, Andrew, which makes for a great audio/video feast but in its essentials Whiplash is a story of a professor and his student.<br />
<br />
I can imagine playing the role, “Do you think your proof is correct? Yes or No? Are you applying Toda’s theorem correctly or are you using the same crazy logic your dad used when he left your mom?” OK, maybe not.<br />
<br />
Nevertheless Fletcher has a point. Too often I'm seeing graduate students doing just enough to get a paper into a conference instead of pushing themselves, trying to do great work and still not being satisfied. Fletcher says the two most dangerous words in the English language are “good job”. While that might be a little cruel, we do need to push our students and ourselves to take risks in research and be okay with failing. To roughly quote John Shedd and Grace Murray Hopper, "the safest place for a ship is in the harbor, but that’s not what ships are for."<br />
<br />
Whiplash had a different kind of scene that definitely hit home. Andrew could not impress his family with the fact that he was lead drummer in the top college jazz band in the nation. I’ve been there, trying to get my mother excited by the fact that I had a STOC paper early in my career. "That's nice dear".http://blog.computationalcomplexity.org/2015/09/whiplashed.htmlnoreply@blogger.com (Lance Fortnow)6tag:blogger.com,1999:blog-3722233.post-8033120432902392618Mon, 31 Aug 2015 03:26:00 +00002015-09-01T10:31:35.712-05:00Two candidates that I want to see run for Democratic NominationIt has been noted that while there are 17 Republican candidates for the nomination, of which 10 have been declared serious by FOX News via the first debate, there are far fewer Democratic candidates for the nomination and only one has been declared serious by the powers that be. This may change if Biden runs. <br />
<br />
I want to suggest two Democrats who I think should run. They have not made ANY moves in that direction, so it won't happen... until they see this blog post endorsing them and get inspired!<br />
<br />
<b>Rush Holt</b>. Was a US Congressman from NJ (NOTE- earlier version of this post incorrectly had him as a US Senator from WV which his father was. Thanks to Dave MB's comment for pointing that out.) He has a PhD in Physics. I want a president who wins a Nobel Prize in something OTHER THAN Peace (T. Roosevelt, Wilson, Carter, Obama have won it for peace, fictional Bartlett on West Wing won it for Economics). I can imagine one winning for Literature. But Physics- that would be awesome! While I doubt Dr. Holt will win one, it would be good to have someone as Prez who knows SOME science so he won't say stupid things about global warming. (Counter question- do the climate change deniers in the Senate really believe what they are saying or not?) He's also a Quaker, not sure how that plays into all of this. (I had a longer and incorrect passage here, but thanks to Andy P's comment below I changed it and it is now correct--- I hope.)<br />
<br />
Would having a scientist in the White House be good for Science Funding? I would guess a marginal yes. Would it improve my chance of getting my REU grant renewed? I would guess no.<br />
<br />
<b>Sheldon Whitehouse</b>. US Senator from Rhode Island. Look at his name- he was born to be Prez!<br />
<br />
You may think these are stupid criteria. You may be right. But is it any better than I voted for X in the primary since X has a better chance of winning in the general (One of Clinton's arguments against Obama in 2008 was that Obama couldn't win since... well, you know) or he looks just like a prez (Warren Harding's main qualification) or he's Rich and obnoxious (I would say who I am thinking of, but he's been known to sue people).http://blog.computationalcomplexity.org/2015/08/two-candidates-that-i-want-to-see-run.htmlnoreply@blogger.com (GASARCH)6tag:blogger.com,1999:blog-3722233.post-6272556978902059043Thu, 27 Aug 2015 12:01:00 +00002015-08-27T07:01:36.112-05:00PACMI serve on the conference committee of the ACM publications board and we've had extensive discussions on the question of the role of journals in publication venues. A number of CS conferences, though notably not in TCS, are moving to a hybrid publication model where their conference presentations make their way into refereed journal papers. One of our proposals is the creating of a specific venue for these activities, a new Proceedings of the ACM. In the September CACM, Joseph Konstan and Jack Davidson <a href="http://cacm.acm.org/magazines/2015/9/191173-should-conferences-meet-journals-and-where/fulltext">lay out this proposal</a>, with <a href="http://cacm.acm.org/magazines/2015/9/191177-the-pros-and-cons-of-the-pacm-proposal/fulltext">pros</a> and <a href="http://cacm.acm.org/magazines/2015/9/191179-the-pros-and-cons-of-the-pacm-proposal/fulltext">cons</a> by Kathryn McKinley and David Rosenblum respectively. The community (that means you) is being asked to <a href="https://www.surveymonkey.com/r/PACM2015">give their input</a>.<br />
<br />
The theory model has not significantly changed since I was a student. Papers submitted to a conference get reviewed but not refereed, the proofs read over usually just enough to feel confident that the theorem is likely correct. Once authors started submitting electronically they could submit entire proofs, though often in an appendix the program committee is not required to read.<br />
<br />
The papers appear in a proceedings and to quote from the STOC proceedings preface<br />
<blockquote class="tr_bq">
The submissions were not refereed, and many of these papers represent reports of continuing research. It is expected that most of them will appear in a more polished and complete form in scientific journals.</blockquote>
A small select number of papers from a conference are invited to a special issue of a journal where they do go through the full referee process. Some, but not most, of the other papers get submitted to journals directly. We don't have the proper incentives for authors to produce a journal version with full and complete proofs.<br />
<br />
Should theory conferences move towards a more hybrid or PACM type of model? I've had several debates with my fellow theorists, many of whom feel the advantages of requiring journal-level papers are outweighed by the extra effort and time required of the authors and the reviewers.http://blog.computationalcomplexity.org/2015/08/pacm.htmlnoreply@blogger.com (Lance Fortnow)8tag:blogger.com,1999:blog-3722233.post-6116946710163740731Mon, 24 Aug 2015 02:38:00 +00002015-08-23T21:38:42.504-05:00Interesting properties of the number 24 on someone's 24th wedding anniversary<br />
The story you are about to read is true. Only the names have been changed to protect the innocent. The Alice and Bob below are not the crypto Alice and Bob.<br />
<br />
------------------------------------------------------<br />
BOB (to ALICE): It's our 24th anniversary! Last year when it was our 23rd anniversary we celebrated by having you tell me that 23 is one of only two numbers (the other is 239) that require 9 cubes to sum to them, and that it's open how many cubes you need for large n, though it's between 4 and 7. Oh that was fun! What do you have planned for our 24th anniversary!<br />
<br />
ALICE (to BOB): I've prepared FIVE facts about 24! Oh, I mean 24, not 24 factorial! We'll see which one you want to discuss. Here they are:<br />
<br />
1) 24 is the largest natural number n such that every natural number m ≤ sqrt(n) divides n.<br />
<br />
2) 24 is the least nat number that has exactly 8 distinct factors. (1,2,3,4,6,8,12,24)<br />
<br />
3) 24 is the ONLY number m≥2 such that 1^2 + 2^2 + ... + m^2 is a square. The sum is 70^2, so if we are married 70 years, I'll have an interesting fact about 70.<br />
<br />
4) Let S be an n-sphere. How many spheres of the same size as S can kiss S? That's the kissing number, called kiss(n). kiss(2)=6 (so given a circle you can position 6 identical circles that kiss it), kiss(3)=12 (that's 3-dim), and kiss(4)=24.<br />
<br />
5) It's one of the few numbers that is the title of a TV show. <br />
<br />
BOB: Since it's our anniversary I'll go with the kissing number! Mathematically I'd go with the square fact.<br />
---------------------------------------------------<br />
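The first three of Alice's facts are easy to check by brute force. Here is a quick Python sketch (mine, not from the dialogue above):<br />

```python
import math

# Fact 1: 24 is the largest n such that every m <= sqrt(n) divides n.
def all_small_divide(n):
    return all(n % m == 0 for m in range(1, math.isqrt(n) + 1))

largest = max(n for n in range(1, 100_000) if all_small_divide(n))

# Fact 2: 24 is the least number with exactly 8 distinct divisors.
def num_divisors(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

least_with_8 = next(n for n in range(1, 1_000) if num_divisors(n) == 8)

# Fact 3: 1^2 + ... + m^2 = m(m+1)(2m+1)/6 is a perfect square only for
# m = 24 among m >= 2 (a known theorem; checked here up to 10,000).
squares = [m for m in range(2, 10_000)
           if math.isqrt(s := m * (m + 1) * (2 * m + 1) // 6) ** 2 == s]

print(largest, least_with_8, squares)  # 24 24 [24]
```

(The searches are over finite ranges, so the script only confirms the facts up to those bounds; the full statements are theorems.)<br />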
<br />
I am sure that for all numbers ≤ 94 one can come up with some facts of interest. The book <a href="http://www.amazon.com/dp/0821848070/?tag=googhydr-20&hvadid=40073556007&hvpos=1t1&hvexid=&hvnetw=g&hvrand=7999853277492080378&hvpone=42.02&hvptwo=&hvqmt=b&hvdev=c&ref=pd_sl_1jc6z4vgp3_b">Those Fascinating Numbers</a> has an interesting fact about many numbers. The least number that it has no interesting fact about is 95, but I suspect Alice and Bob won't be married that long.<br />
<br />
What are the coolest numbers (mathematically)? See <a href="http://planetmath.org/toptencoolestnumbers">here</a> for a possible answer. <br />
<br />
http://blog.computationalcomplexity.org/2015/08/interesting-properties-of-number-24-on.htmlnoreply@blogger.com (GASARCH)9tag:blogger.com,1999:blog-3722233.post-2381591759078444467Thu, 20 Aug 2015 12:18:00 +00002015-08-20T07:18:43.625-05:00Crowdsourcing the TruthA new project <a href="http://www.augur.net/">Augur</a> aims to create a decentralized prediction market. If this post so moves you, Augur is in the midst of a <a href="https://sale.augur.net/">reputation sale</a>. Don't miss out if you would like to be an Augur reporter.<br />
<br />
A prediction market takes a future event, such as Hillary Clinton winning the 2016 Presidential Election, and creates a security that pays off $100 if Hillary wins and $0 otherwise. The market allows buying, selling, and short selling the security, and the price of the security represents the probability that the event will happen. <a href="http://www.predictwise.com/">Predictwise</a>, which aggregates prediction markets, has the <a href="http://www.predictwise.com/politics/2016president">probability of Hillary winning</a> at 47% as I write this. But there are a limited number of markets out there for Predictwise to draw from.<br />
<br />
Intrade, which <a href="http://blog.computationalcomplexity.org/2013/03/goodbye-old-friends.html">shut down</a> due to financial improprieties in 2013, used to run markets on all aspects of elections and other current events. Many other prediction markets have disappeared over time. The Augur team put out a <a href="http://augur.link/augur.pdf">white paper</a> describing their fully decentralized prediction market immune to individuals bringing it down. They build on cryptocurrencies for buying and selling and a group of "reporters" financially incentivized to announce the correct answer for each market.<br />
<br />
It's that last part I find most interesting: instead of having an authority that reports the results, Augur will crowdsource the truth.<br />
<blockquote class="tr_bq">
A key feature of Augur is tradeable Reputation. The total amount of Reputation is a fixed quantity, determined upon the launch of Augur. Holding Reputation entitles its owner to report on the outcomes of events, after the events occur...Reputation tokens are gained and lost depending on how reliably their owner votes with the consensus.</blockquote>
Consensus may not be "the truth". Reporters are not incentivized to report the truth but what the other reporters will report as the consensus. Those who buy and sell on the market are not betting on the truth but the outcome as it is decided by the consensus of reporters. We have an ungrounded market.<br />
<br />
The purchasers of reputation tokens likely won't represent the public at large and biases may come into play. Would this consensus have agreed that Bush won the 2000 election?<br />
<br />
What would have the consensus done on the <a href="https://en.wikipedia.org/w/index.php?title=Intrade&oldid=651267155#Disputes">North Korea controversy</a>?<br />
<blockquote class="tr_bq">
[The Intrade security] allowed speculation on whether North Korea would, by 31 July 2006, successfully fire ballistic missiles that would land outside its airspace. On 5 July 2006 the North Korean government claimed a successful test launch that would have satisfied the prediction, a launch widely reported by world media. [Intrade] declared that the contract's conditions had not been met, because the US Department of Defense had not confirmed the action, and this confirmation was specifically required by the contract. (Other government sources had confirmed the claim, but these were not the sources referenced in the contract.) Traders considered this to be in strict compliance with the stated rule but contrary to the intention of the market (which was to predict the launch event, and not whether the US Defense Department would confirm it).</blockquote>
Since Augur will allow arbitrarily worded contracts on topics such as proofs of P v NP, the consensus reporting may lead to some interesting future blog posts.<br />
<br />
Despite my reservations, I'm quite excited to see Augur set up a prediction market system that can't be shut down and wish them all the luck. You have until October 1 to <a href="https://sale.augur.net/">buy reputation</a> if you want to be deciding the truth instead of just predicting it.http://blog.computationalcomplexity.org/2015/08/crowdsourcing-truth.htmlnoreply@blogger.com (Lance Fortnow)2tag:blogger.com,1999:blog-3722233.post-367939230197805901Mon, 17 Aug 2015 02:39:00 +00002015-08-16T21:39:31.599-05:00Have we made Progress on P vs NP?While teaching P vs NP in my class Elementary Theory of Computation (Finite Automata, CFG's, P-NP, Dec-undecid) I was asked <i>What progress has been made on P vs NP?</i><br />
<br />
I have heard respectable theorists answer this question in several ways:<br />
<br />
1) There has been no progress whatsoever- but the problem is only 40 years old, a drop in the mathematical bucket sort.<br />
<br />
2) There has been no progress whatsoever- this is terrible since 40 years of 20th and 21st century mathematics is a lot and we already had so much to draw on. We are in for the long haul.<br />
<br />
3) We have made progress on showing some techniques will not suffice, which is progress--- of a sort.<br />
<br />
4) We have made progress on showing P=NP: Barrington's result, FPT, Holographic algorithms, SAT in PCP with O(1) queries. Too bad- since P NE NP we've made no progress.<br />
<br />
5) We have made progress on showing P=NP: Barrington's result, FPT, Holographic algorithms, SAT in PCP with O(1) queries. We should not be closed minded to the possibility that P=NP. (NOTE- other theorists say YES WE SHOULD BE CLOSED MINDED.)<br />
<br />
6) Note that:<br />
<br />
a) We have pathetic lower bounds on real models of computation.<br />
<br />
b) We have meaningful lower bounds on pathetic models of computation.<br />
<br />
c) We DO NOT have meaningful lower bounds on real models of computation.<br />
<br />
7) Have we made progress? Once the problem is solved we'll be able to look back and say what was progress.<br />
<br />
I told the students my view: <i>We have made progress on showing some techniques will not suffice, which is progress--- of a sort</i>, which my class found funny. Then again, students find the fact that there are sets that are undecidable (and sets even harder than that!) to be funny too.http://blog.computationalcomplexity.org/2015/08/have-we-made-progress-on-p-vs-np.htmlnoreply@blogger.com (GASARCH)9tag:blogger.com,1999:blog-3722233.post-731987152010575644Thu, 13 Aug 2015 12:45:00 +00002015-08-13T07:45:44.367-05:00What is Theoretical Computer Science?Moshe Vardi asks a provocative question in <a href="http://windowsontheory.org/2015/06/09/why-doesnt-acm-have-a-sig-for-theoretical-computer-science-guest-post-by-moshe-vardi/">Windows on Theory</a> and <a href="http://cacm.acm.org/magazines/2015/8/189832-why-doesnt-acm-have-a-sig-for-theoretical-computer-science/fulltext">CACM</a>: "Why doesn't ACM have a SIG for Theoretical Computer Science?" The reaction of myself and many of my fellow Americans is that the question has a false premise. <a href="http://www.sigact.org/">SIGACT</a>, the ACM special interest group for algorithms and computation theory, plays this role as stated in its mission:<br />
<blockquote class="tr_bq">
SIGACT is an international organization that fosters and promotes the discovery and dissemination of high quality research in theoretical computer science (TCS), the formal analysis of efficient computation and computational processes. TCS covers a wide variety of topics including algorithms, data structures, computational complexity, parallel and distributed computation, probabilistic computation, quantum computation, automata theory, information theory, cryptography, program semantics and verification, machine learning, computational biology, computational economics, computational geometry, and computational number theory and algebra. Work in this field is often distinguished by its emphasis on mathematical technique and rigor.</blockquote>
Theoretical computer science in Europe has a much different balance, putting as much or even more emphasis on automata and logic as it does on algorithms and complexity. So from that point of view (a view shared by Moshe who has strong ties to CS logic) SIGACT does not cover the full range of TCS.<br />
<br />
The term "theoretical computer science" just doesn't have a universal meaning. Neither definition is right or wrong, though we all have our biases.<br />
<br />
Why does TCS have such a different meaning in Europe and the US? A different research culture and relatively little mixing. Very few North Americans go to grad school, take postdocs or faculty positions in Europe, very few Europeans go the other way, and those that do tend to do algorithms and complexity. Until we started to see web-based archives in the mid 90's, distribution of research between Europe and the US went quite slowly. Things have changed since, but by then the different notions of TCS had been set.<br />
<br />
SIGACT fully acknowledges that it doesn't do a good job covering the logic community and it has always strongly supported <a href="http://siglog.hosting.acm.org/">SIGLOG</a>, the special interest group for logic and computation.<br />
I would love to see joint events between SIGACT and SIGLOG. LICS should be part of FCRC with STOC and Complexity or hold a co-located meeting in other years. But SIGACT does do a great job representing the theoretical computer science community in North America.http://blog.computationalcomplexity.org/2015/08/what-is-theoretical-computer-science.htmlnoreply@blogger.com (Lance Fortnow)8tag:blogger.com,1999:blog-3722233.post-1683663089909714380Mon, 10 Aug 2015 17:29:00 +00002015-08-10T12:29:31.978-05:00Ways to deal with the growing number of CS majors.(Univ of MD at College Park is looking to hire a Comp Sci Lecturer. Here is the link: <a href="http://www.cs.umd.edu/job/2015/lecturer-computer-science">HERE)</a><br />
<br />
<br />Univ of MD at College Park will have 2100 students in the CS program next year. That's... a lot! CS is up across the country which is mostly a good thing but does raise some logistical questions. How are your schools handling the increase in the number of CS students? Here are some options I've heard people use:<br /><br />1) Hiring adjuncts who come in, teach a course, and leave is good economically (they don't get paid much) but bad for the long term. Better to have people that are integrated into the dept (lecturers are; see next point). Also, CS is a changing field so you can't just give someone some notes and slides and say TEACH THIS. For Calculus you probably can, and they can even do a good job. Calculus doesn't change as fast, or maybe even at all. I envy my friends in math departments who can count<br />on a stable first year course that everyone in the dept can teach. They envy CS profs who have the freedom to have diff versions of CS1 in diff schools. Even in diff semesters!<br /><br />2) Hiring lecturers who are full time and have a high teaching load is good in terms of them being fully integrated into the dept and being involved with course syllabus changes. Also, if they stay long term there is stability. If the first year courses are only taught by lecturers this may be bad abstractly as profs should be in touch with all levels of the dept. I once tried to make this <br />point over lunch at a Dagstuhl meeting and before I could even finish my point they shouted me down and asked if I wanted to teach Intro Programming. I do not, so I'll just shut up now.<br /><br />3) Don't let profs buy out. I've heard of this at some schools as a way to at least have the profs that are there teaching. Alternatively only let profs buy out of grad courses. I don't know if either is a good idea. For one thing, the money used to buy out can be used to hire someone else. But that goes back to point 1- not really good to have part timers.
If other profs teach overloads with the money that sounds okay. If half of the profs are paying the other half to teach for them, that sounds odd, but I'm not sure it's bad. Also, it would never get that extreme.<br /><br />4) Postdocs sometimes teach a course for extra money. This is good for them for teaching experience and resume, and if they are teaching a course with someone else this can work well. However, if a postdoc's point is to get more research done, this will of course cut into that.<br /><br />5) Grad students sometimes teach a course. I knew a grad student in math who was teaching the junior course in number theory. I asked her if this was an honor or exploitation. She just said YES.<br /><br />6) Increase class size. Going from lecturing to 80 to lecturing to 300 might be okay, though (a) you NEED to use powerpoint or similar and have resources on the web, maybe also Piazza, and (b) you NEED to have LOTS of recitations so they at least are small and (c) you NEED to have high-quality TAs for the recitations. In some schools it's even a problem getting ROOMS of that size!<br /><br />7) HIRE MORE PROFS! Profs have lower teaching loads than lecturers and in CS it's harder to teach outside of your area than in (say) Math. (How is it for other fields? If you know, please comment.) Should you hire based on research needs or teaching needs? If a recursive model theorist can teach graphics, that might be a real win if you NEED research in recursive model theory but also<br />NEED someone to teach graphics.<br /><br />8) Not a suggestion but a thought- IF many of the new majors aren't very good then many might flunk out of the major in the first year so this is not a problem for Sophomore, Junior, Senior courses. At least at UMCP this does NOT seem to be the case. To rephrase: many of the new majors are good and do not flunk out. I call that good news! But what about at your school?<br /><br />9) What does your school do? Does it work?<br /><br />
<br />http://blog.computationalcomplexity.org/2015/08/ways-to-deal-with-growing-number-of-cs.htmlnoreply@blogger.com (GASARCH)4tag:blogger.com,1999:blog-3722233.post-1442502959825675643Thu, 06 Aug 2015 11:56:00 +00002015-08-06T06:56:00.431-05:00How hard would this cipher be for Eve to crack?I've been looking at old ciphers since I am teaching a HS course on Crypto. We've done shift, affine, matrix, Playfair, 1-time pad, Vigenere, and then noting that in all of the above Alice and Bob need to meet, we did Diffie-Hellman.<br />
<br />
The <a href="https://en.wikipedia.org/wiki/Playfair_cipher">Playfair cipher</a> is interesting in that it gives a compact way to obtain a mapping from pairs of letters to pairs of letters that (informally) LOOKS random. Of course NOW it would not be hard for Alice and Bob to AGREE on a random mapping of {a,b,...,z}^2 to {a,b,c,...,z}^2 and use that. For that matter, Alice and Bob could agree on a random mapping from {a,b,c,...,z}^k to {a,b,c,...,z}^k for some reasonable values of k.<br />
<br />
So consider the following cipher with parameter k: Alice and Bob generate a RANDOM 1-1, onto mapping of {a,...,z}^k to {a,...,z}^k. To send a message Alice breaks it into blocks of k and encodes each block using the mapping they agree on.<br />
<br />
Question One: How big does k have to be before this is impractical for Alice and Bob?<br />
<br />
Question two: If k=1 then Eve can easily break this with freq analysis of single letters. For k=2 Eve can easily break this with freq analysis of digraphs (pairs of letters). Is there a value of k such that the distribution of k-grams is not useful anymore? As k goes up does it get easier or harder to crack? I originally thought harder, which is why I thought this would be a good code, but frankly I don't really know. Looking over the web it is EASY to find freq of letters and digraphs, but very hard to find any source for freq of (say) 10-grams. So at the very least Eve would need to build up her own statistics--- but of course she could do this.<br />
<br />
SO, here is the question: Is there a value of k such that this code is easy for Alice and Bob but hard for Eve TODAY? How about X years ago?<br />
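A sketch of the general-k cipher (the function names are mine and this is illustrative, not hardened crypto):<br />

```python
import itertools
import random
import string

def make_cipher(k, seed=0):
    """Build a random bijection on blocks of k lowercase letters.
    The table has 26^k entries, so merely writing the key down is
    already the bottleneck for modest k -- that is Question One."""
    blocks = [''.join(t) for t in
              itertools.product(string.ascii_lowercase, repeat=k)]
    images = blocks[:]
    random.Random(seed).shuffle(images)
    enc = dict(zip(blocks, images))
    dec = {v: b for b, v in enc.items()}
    return enc, dec

def encrypt(msg, enc, k):
    msg += 'x' * (-len(msg) % k)  # pad to a multiple of k
    return ''.join(enc[msg[i:i + k]] for i in range(0, len(msg), k))

def decrypt(ct, dec, k):
    return ''.join(dec[ct[i:i + k]] for i in range(0, len(ct), k))
```

For k=2 the table has 676 entries; for k=5 it already has almost 12 million. The key size blows up as 26^k, so storage and key exchange limit Alice and Bob well before Eve's statistics come into it.<br />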
<br />
<br />http://blog.computationalcomplexity.org/2015/08/how-hard-would-this-cipher-be-for-eve.htmlnoreply@blogger.com (GASARCH)3tag:blogger.com,1999:blog-3722233.post-565524057593401535Sun, 02 Aug 2015 21:14:00 +00002015-08-02T16:14:28.643-05:0017 candidates, only 10 in the debate- what to do?<br />On Thursday Aug 6 there will be Republican debate among 10 of the 17 (yes 17) candidates for the republican nomination.<br /><br />1) There are 17 candidates. Here is how I remember them: I think of the map of the US and go down the east coast, then over to Texas then up. That only works for the candidates that are or were Senators or Govenors. I THEN listthe outsiders. Hence my order is (listing their last job in government) George Pataki (Gov-NY), Chris Christie (Gov-NY), Rick Santorum (Sen-PA), Rand Paul(Sen KT),JimGilmore(Gov-VA), Lindsay Graham (Sen-SC),Jeb Bush (Gov-FL), Marco Rubio (Sen-FL), Bobby Jindal (Gov-LA), Ted Cruz (Sen-TX), Rick Perry (Gov-TX), Mike Huckabee (Gov-AK), Scott Walker (Gov-Wisc), John Kaisch (Gov-Ohio) Donald Trump (Businessman), Ben Carson (Neurosurgeon), Carly Fiorina (Businesswomen).<br />9 Govs, 5 Sens, 3 outsiders.<br /><br />2) Having a debate with 17 candidates would be insane. Hence they decided a while back to have the main debate with the top 10 in the average of 5 polls, and also have a debate with everyone else. There are several problems with this: (a) candidates hovering around slots 9,10,11,12 are closer together than the margin of error, (b) the polls are supposed to measure what the public wants, not dictate things, (c) the polls were likely supposd to determine who the serious candidates are, but note that Trump is leading the polls, so thats not quite right.QUESTION: Lets say that Chris Christie is at 2% with a margin of + or - 3%. 
Could he really be at -1%?<br /><br />3) A better idea might be to RANDOMLY partition the field into two groups, one of size 8 and one of size 9, and have two debates that way. What randomizer would they use? This is small enough they really could just put slips of paper in a hat and draw them. If they had many more candidates we might use Nisan-Wigderson.<br /><br />4) How did they end up with 17 candidates?<br /><br />a) Being a candidate is not a well-defined notion. What are the criteria to be a candidate? Could Lance Fortnow declare that he is a candidate for the Republican nomination (or, for that matter, the Democratic nomination)? YES. He's even written some papers on Economics so I'd vote for him over... actually, any of the 17. RUN LANCE RUN! So ANYONE who wants to run can! And they do! I'm not sure what they can do about this---it would be hard to define ``serious candidate'' rigorously.<br />
<br />b) The debate is in August but the Iowa Caucus isn't until Feb 1. So why have the debate now? I speculate that they wanted to thin out the field early, but this has the opposite effect--- LOTS of candidates now want to get into the debates.<br /><br />c) (I've heard this) Campaign finance laws have been gutted by the Supreme Court, so if you just have ONE mega-wealthy donor you have enough money to run. Or you can fund yourself (NOTE- while Trump could fund himself, so far he hasn't had to as the media is covering him so much).<br /><br />d) Because there are so many, and no dominating front runner, they all think they have a shot at it. So nobody is dropping out. Having a lot of people running makes more people want to run. (All the cool kids are doing it!)<br /><br />
<br />http://blog.computationalcomplexity.org/2015/08/17-candidates-only-10-in-debate-what-to.htmlnoreply@blogger.com (GASARCH)2tag:blogger.com,1999:blog-3722233.post-4910831260888267418Wed, 29 Jul 2015 03:44:00 +00002015-07-28T22:44:14.763-05:00Explain this Scenario in Jeopardy and some more thoughts<br />
In the last post I had the following scenario:<br />
<br />
Larry, Moe, and Curly are on Jeopardy.<br />
<br />
Going into Final Jeopardy:<br />
<br />
Larry has $50,000, Moe has $10,000, Curly has $10,000<br />
<br />
Larry bets $29,999, Moe bets $10,000, Curly bets $10,000<br />
<br />
These bets are ALL RATIONAL and ALL MATTER independent of what the category is. For example, these bets make sense whether the category is THE THREE STOOGES or CIRCUIT LOWER BOUNDS.<br />
<br />
Explain why this is.<br />
<br />
EXPLANATION: You were probably thinking of ordinary Jeopardy where the winner gets whatever he gets, and the losers' take-home is based ONLY on their rank ($2,000 for second place, $1,000 for third place). Hence Larry's bet seems risky since he may lose $29,999, and Moe's and Curly's bets seem irrelevant (or barely relevant- they both want to finish in second).<br />
<br />
BUT- these are Larry, Moe, Curly, The Three Stooges. This is CELEBRITY JEOPARDY! The rules for money are different. First place gets the MAX of what he wins and $50,000. So Larry has NOTHING TO LOSE by betting $29,999. Second and third place BOTH get the MAX of what they win and $10,000. So Moe and Curly have NOTHING TO LOSE by betting $10,000. (I suspect they do this because the money goes to a charity chosen by the celebrity.)<br />
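The payoff rule, as I reconstructed it above (so treat the numbers as an assumption, not the show's official rules), fits in a tiny function:<br />

```python
def celebrity_payoff(scores):
    """Celebrity Jeopardy payoffs as reconstructed above (not official):
    first place gets max(score, $50,000), everyone else max(score, $10,000).
    Assumes a unique leader; ties for first place are not handled."""
    leader = max(scores, key=scores.get)
    return {name: max(score, 50_000 if name == leader else 10_000)
            for name, score in scores.items()}

# Even if Larry's $29,999 bet fails he drops to $20,001, stays in first,
# and still collects $50,000 -- so the bet risks nothing:
print(celebrity_payoff({'Larry': 20_001, 'Moe': 20_000, 'Curly': 20_000}))
# -> {'Larry': 50000, 'Moe': 20000, 'Curly': 20000}
```

Under these rules Moe and Curly likewise lose nothing by betting everything, which is why all three bets are rational regardless of the category.<br />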
<br />
SIDE NOTE: I saw Celebrity Jeopardy and wanted to verify the above before posting. So I looked on the web for the rules for Celebrity Jeopardy. THEY WERE NOWHERE TO BE FOUND! A friend of mine finally found a very brief YouTube clip of Penn Jillette winning Celebrity Jeopardy and a VERY BRIEF look at the final scores and how much money everyone actually got. That's how I verified what I thought were the rules for Celebrity Jeopardy.<br />
<br />
IF I am looking up a theorem in Recursive Ramsey theory and can't find it on the web I am NOT surprised at all since that would be somewhat obscure (9 times out of 10 when I look up something in Ramsey Theory it points to one of the Ramsey Theory websites that I maintain. It is usually there!). But the rules for Final Jeopardy--- I would think that is not so obscure. Rather surprised it was not on the web.<br />
<br />http://blog.computationalcomplexity.org/2015/07/explain-this-scenario-in-jeapardy-and.htmlnoreply@blogger.com (GASARCH)4tag:blogger.com,1999:blog-3722233.post-6327277819057909538Mon, 27 Jul 2015 22:22:00 +00002015-07-27T21:33:05.534-05:00Explain this Scenario on Jeopardy<br />
Ponder the following:<br />
<br />
Larry, Moe, and Curly are on Jeopardy.<br />
<br />
Going into Final Jeopardy:<br />
<br />
Larry has $50,000, Moe has $10,000, Curly has $10,000<br />
<br />
Larry bets $29,999, Moe bets $10,000, Curly bets $10,000<br />
<br />
These bets are ALL RATIONAL and ALL MATTER independent of what the category is. For example, these bets make sense whether the category is THE THREE STOOGES or CIRCUIT LOWER BOUNDS.<br />
<br />
Explain why this is.<br />
<br />
I'll answer in my next post or in the comments of this one, depending on... not sure what it depends on.<br />
<br />http://blog.computationalcomplexity.org/2015/07/explain-this-scenario-on-jeapardy.htmlnoreply@blogger.com (GASARCH)6tag:blogger.com,1999:blog-3722233.post-5973096607801741217Thu, 23 Jul 2015 13:07:00 +00002015-07-23T13:12:48.644-05:00New Proof of the Isolation LemmaThe isolation lemma of <a href="http://dx.doi.org/10.1007/BF02579206">Mulmuley, Vazirani and Vazirani</a> says that if we take random weights for elements in a set system, with high probability there will be a unique set of minimum weight. Mulmuley et al. use the isolation lemma to randomly reduce matching to computing the determinant. The isolation lemma also gives an alternative proof to <a href="http://blog.computationalcomplexity.org/2006/09/favorite-theorems-unique-witnesses.html">Valiant-Vazirani</a> that show how to randomly reduce NP-complete problems to ones with a unique solution.<br>
<br>
Noam Ta-Shma, an Israeli high school student (and son of Amnon), recently <a href="http://eccc.hpi-web.de/report/2015/080/">posted</a> a new proof of the isolation lemma. The MVV proof is not particularly complicated but it does require feeling very comfortable with independent random variables. Ta-Shma's proof is a more straightforward combinatorial argument.<br>
<br>
Suppose you have a set system over a universe of n elements. Give each element i a weight w<sub>i</sub> uniformly chosen between 1 and m. The weight of a set is the sum of the weights of the elements of that set. Ta-Shma shows that there is a unique minimum-weight set with probability at least (1-1/m)<sup>n</sup>, which beats out the bound of (1-n/m) given by MVV.<br>
<br>
Here is a sketch of his proof: Suppose all the weights w<sub>i</sub> are between 2 and m. Let S be the lexicographically first minimum-weight set given these weights. Consider the function φ(w), defined on weights with all the w<sub>i</sub> at least 2, as follows:<br>
<ul>
<li>φ(w)<sub>i</sub> = w<sub>i</sub> -1 if i is in S</li>
<li>φ(w)<sub>i</sub> = w<sub>i</sub> if i is not in S</li>
</ul>
<div>
Note that S is now the unique minimum-weight set under the weights φ(w). Moreover φ is 1-1, for we can recover w from φ(w) by taking the unique minimum-weight set in φ(w) and adding one to the weight of each element in that set.</div>
<div>
<br></div>
<div>
So we have the probability that random weights yield a unique minimum set is at least<br>
<div style="text-align: center;">
|range(φ)|/m<sup>n</sup> = |domain(φ)|/m<sup>n</sup> = (m-1)<sup>n</sup>/m<sup>n</sup> = (1-1/m)<sup>n</sup>.</div>
</div>
<div style="text-align: left;">
<br></div>
<div style="text-align: left;">
Read all the details in Ta-Shma's <a href="http://eccc.hpi-web.de/report/2015/080/">paper</a>.</div>
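The bound is easy to sanity-check by simulation. Below is a sketch; the set system and parameters are my arbitrary choices, not from the paper:<br />

```python
import itertools
import random

def unique_min_prob(sets, m, trials=20_000, seed=1):
    """Monte Carlo estimate of the probability that i.i.d. uniform weights
    in {1,...,m} give a unique minimum-weight set in the set system."""
    universe = sorted(set().union(*sets))
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        w = {e: rng.randint(1, m) for e in universe}
        totals = sorted(sum(w[e] for e in s) for s in sets)
        if len(totals) == 1 or totals[0] < totals[1]:
            hits += 1
    return hits / trials

# All 2-element subsets of a 5-element universe (n = 5), weights in {1..20}:
pairs = [set(s) for s in itertools.combinations(range(5), 2)]
estimate = unique_min_prob(pairs, m=20)
bound = (1 - 1 / 20) ** 5  # Ta-Shma's lower bound (1 - 1/m)^n, about 0.774
```

Up to sampling noise, the estimate should come out at or above the bound; a simulation like this is of course consistency-checking, not a proof.<br />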
http://blog.computationalcomplexity.org/2015/07/new-proof-of-isolation-lemma.htmlnoreply@blogger.com (Lance Fortnow)2tag:blogger.com,1999:blog-3722233.post-2601546435391000317Tue, 21 Jul 2015 17:53:00 +00002015-07-22T16:13:26.525-05:00Hartley Rogers, Author of the first Textbook on Recursion Theory, passes awayHartley Rogers Jr passed away on July 17, 2015 (last week Friday as I write this). He was 89 and passed peacefully.<br />
<br />
For our community Rogers is probably best known for his textbook on Recursion Theory, which I discuss below. He did many other things, for which I refer you to his Wikipedia page <a href="https://en.wikipedia.org/wiki/Hartley_Rogers,_Jr.">here</a>.<br />
<br />
His book was:<br />
<br />
<b>The theory of recursive functions and effective computability.</b><br />
<br />
It was first published in 1967 but a paperback version came out in 1987.<br />
<br />
It was probably the first textbook in recursion theory. It was fairly broad. Here are the chapter headings and some comments.<br />
Recursive functions<br />
<br />
Unsolvable problems (The first edition came out before Hilbert's tenth problem was solved),<br />
<br />
Purpose: Summary,<br />
<br />
Recursive invariants, <br />
<br />
Recursive and recursively enumerable sets,<br />
<br />
Reducibilities,<br />
<br />
One-One Reducibilities; Many-one Reducibilities, (Maybe it's just me but I can't imagine caring if the reduction is 1-1 or many-one.)<br />
<br />
Truth-Table Reducibilities; simple sets, (``simple sets are not simple'' was a quote from Herbert Gelernter who taught me my first course in recursion theory.)<br />
<br />
Turing Reducibilities; hypersimple sets,<br />
<br />
Post's Problem; incomplete sets. (Post's problem was to find an r.e. set that is neither recursive nor Turing-complete. When I tell people there is such a set they often say `Oh, like Ladner's Theorem.' That's true but backwards. It's still open to find a NATURAL set that is incomplete, though they prob don't exist and it's hard to pin that down.)<br />
<br />
The Recursion Theorem, <br />
<br />
Recursively enumerable sets as a lattice,<br />
<br />
Degrees of unsolvability,<br />
<br />
The Arithmetic Hierarchy (Part 1),<br />
<br />
The Arithmetic Hierarchy (Part 2),<br />
<br />
The Analytic Hierarchy.<br />
<br />
Looking over his book I notice the following<br />
<br />
1) He thanks Noam Chomsky (a linguist) and Burton Dreben (a philosopher). I think we are more specialized now. Would it be surprising if a text in recursion theory written now thanked people who are not in math?<br />
<br />
2) He thanks his typist. I think that people who write math books now type it themselves. I wonder if novelists also now type it themselves.<br />
<br />
3) I think that Soare's book replaced it as THE book that young recursion theorists read. (Are there young recursion theorists?) Soare's book is chiefly on r.e. degree theory; Rogers's book is broader. When Rogers wrote his book much less was known (no 0'''-arguments, very little on random sets). It was possible to have most of what was known in one book. That would be hard now, though Odifreddi's book comes close. Note that Odifreddi's book is in two volumes with a third one to be finished... probably never.<br />
<br />
One personal note- I had a course on Recursion theory taught by Herbert Gelernter at Stony Brook (my ugrad school) in the Fall of 1979. We covered the first six chapters of Rogers's text. It was a great course from a great book taught by a great teacher and set me on the path to do work in recursion-theoretic complexity theory. http://blog.computationalcomplexity.org/2015/07/hartley-rogers-author-of-first-textbook.htmlnoreply@blogger.com (GASARCH)3tag:blogger.com,1999:blog-3722233.post-697629226368774002Thu, 16 Jul 2015 12:40:00 +00002015-07-16T07:40:40.359-05:00Microsoft Faculty SummitLast week I participated in my first <a href="http://research.microsoft.com/en-us/um/redmond/events/fs2015/default.aspx">Microsoft Faculty Summit</a>, an annual soiree where Microsoft brings about a hundred faculty to Redmond to see the latest in Microsoft Research. I love these kinds of meetings because I enjoy getting the chance to talk to computer scientists across the broad spectrum of research. Unlike other fields, CS hasn't had a true annual meeting since the 80's so it takes events like this to bring subareas together. "Unlike other fields" is an expression we say far too often in computer science.<br />
<br />
This was the first summit since the closing of the Silicon Valley lab and the reorganization of MSR into NExT (New Experiences and Technologies), led by Peter Lee, and MSR Labs, led by Jeannette Wing. Labs focuses on long-term research while NExT tries to put research into Microsoft products. Peter gave the example of real-time translation in Skype, already <a href="http://www.skype.com/en/translator-preview/">available</a> for public preview. Everyone in MSR emphasized that Microsoft remains committed to open long-term research and said the latest round of cuts (<a href="http://news.microsoft.com/2015/07/08/satya-nadella-email-to-employees-on-sharpening-business-focus/">announced</a> while the summit was happening) will not affect research.<br />
<br />
<a href="https://www.microsoft.com/microsoft-hololens/en-us">HoloLens</a>, a way to manipulate virtual three-dimensional images, generated the most excitement. Unfortunately the summit didn't have HoloLenses for us to try out, but I did get a cool HoloLens T-shirt. While one expects the most interest in HoloLens to come from gaming, Microsoft emphasized the educational aspect. Microsoft has a <a href="http://research.microsoft.com/en-us/projects/hololens/">call for proposals</a> for research and education uses of HoloLens.<br />
<br />
I didn't go to many of the parallel sessions, instead spending the time networking with colleagues old and new. I did really enjoy the <a href="http://research.microsoft.com/en-us/um/redmond/events/fs2015/demofest-abstracts.aspx">research showcase</a>, which highlighted many of the research projects. I tried out the Skype translator, failing a reverse Turing test because I thought I was talking to a computer when it was really a Spanish-speaking human. My colleagues at MSR NYC were showing off their <a href="https://prediction.microsoft.com/">wisdom of the crowds</a>. Microsoft is moving its defunct academic search directly into Bing and Cortana. I tried Binging myself on the prototype and it did indeed list my research papers, but not my homepage or this blog. They said they'll fix that in future updates.<br />
<br />
Monica Lam showed off her latest social messaging system, <a href="http://www.omlet.me/">Omlet</a>, which aims to improve privacy by keeping data on the Omlet server for no longer than two weeks, though I was more excited by their open API. Feel free to Omlet me.<br />
<br />
While the meeting had its share of hype (quantum computers to solve world hunger), I really enjoyed the couple of days in Redmond. Despite the SVC closing, Microsoft is still one of the few companies that has labs focused on true basic research.http://blog.computationalcomplexity.org/2015/07/microsoft-faculty-summit.htmlnoreply@blogger.com (Lance Fortnow)10tag:blogger.com,1999:blog-3722233.post-7539655553907516201Mon, 13 Jul 2015 13:40:00 +00002015-07-13T08:40:38.876-05:00Is there an easier proof? A less messy proof? <br />
Consider the following statement:<br />
<br />
BEGIN STATEMENT:<br />
<br />
For all a,b,c, the equations<br />
<br />
x + y + z = a<br />
<br />
x<sup>2</sup> +y<sup>2</sup> + z<sup>2</sup> = b<br />
<br />
x<sup>3</sup> + y<sup>3</sup> + z<sup>3</sup> = c<br />
<br />
have a unique solution (up to permutations of x,y,z). <br />
<br />
END STATEMENT<br />
<br />
One can also look at this with k equations, k variables, and powers 1,2,...,k.<br />
<br />
The STATEMENT is true. One can use Newton's identities (see <a href="https://en.wikipedia.org/wiki/Newton%27s_identities">here</a>) to recover from the sums of powers all of the elementary symmetric functions of x,y,z (uniquely). One can then form a polynomial which, in the k=3 case, is<br />
<br />
W<sup>3</sup> - (x+y+z)W<sup>2</sup> + (xy+xz+yz)W - xyz = 0<br />
<br />
whose roots are what we seek.<br />
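To make the recovery above concrete, here is a small numeric sketch (using numpy purely for illustration; the function name <code>recover_roots</code> is mine, not from the post). It computes the elementary symmetric functions from a, b, c via Newton's identities and reads off the multiset {x, y, z} as the roots of the cubic:<br />
<br />
```python
# Sketch: recover {x, y, z} from the power sums a, b, c via
# Newton's identities, then read the values off as roots of
# W^3 - e1*W^2 + e2*W - e3 = 0.
import numpy as np

def recover_roots(a, b, c):
    """Given a = x+y+z, b = x^2+y^2+z^2, c = x^3+y^3+z^3,
    return the multiset {x, y, z} as a sorted list."""
    e1 = a                          # Newton: p1 = e1
    e2 = (a * a - b) / 2            # Newton: p2 = e1*p1 - 2*e2
    e3 = (c - e1 * b + e2 * a) / 3  # Newton: p3 = e1*p2 - e2*p1 + 3*e3
    # The cubic W^3 - e1 W^2 + e2 W - e3 has roots exactly x, y, z.
    return sorted(np.roots([1, -e1, e2, -e3]).real.tolist())

# Example: x, y, z = 1, 2, 3 gives a = 6, b = 14, c = 36.
print(recover_roots(6, 14, 36))  # approximately [1.0, 2.0, 3.0]
```

Since the elementary symmetric functions, and hence the cubic, are determined by a, b, c, the solution set really is unique up to permutation.<br />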
<br />
I want to prove an easier theorem in an easier way that avoids using Newton's identities. Here is what I want to prove:<br />
<br />
Given those equations above (or the version with k powers), and told that a,b,c are nonzero natural numbers, I want to prove that there is at most one natural-number solution for (x,y,z) (or for x<sub>1</sub>,...,x<sub>k</sub> in the k-power case).<br />
<br />
It's hard to say `I want an easier proof' when the proof at hand really isn't that hard. And I don't want to say I want an `elementary' proof; I just want to avoid the messiness of Newton's identities. I doubt I can formalize what I want but, as <a href="https://en.wikipedia.org/wiki/I_know_it_when_I_see_it">Potter Stewart</a> said, I'll know it when I see it.<br />
<br />
<br />http://blog.computationalcomplexity.org/2015/07/is-there-easier-proof-less-messy-proof.htmlnoreply@blogger.com (GASARCH)2