tag:blogger.com,1999:blog-3722233Fri, 31 Jul 2015 22:34:22 +0000typecastfocs metacommentsComputational ComplexityComputational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarchhttp://blog.computationalcomplexity.org/noreply@blogger.com (Lance Fortnow)Blogger2297125tag:blogger.com,1999:blog-3722233.post-4910831260888267418Wed, 29 Jul 2015 03:44:00 +00002015-07-28T22:44:14.763-05:00Explain this Scenario in Jeopardy and some more thoughts<br />
In the last post I had the following scenario:<br />
<br />
Larry, Moe, and Curly are on Jeopardy.<br />
<br />
Going into Final Jeopardy:<br />
<br />
Larry has $50,000, Moe has $10,000, Curly has $10,000<br />
<br />
Larry bets $29,999, Moe bets $10,000, Curly bets $10,000<br />
<br />
These
bets are ALL RATIONAL and ALL MATTER independent of what the category
is. For example, these bets make sense whether the category is THE THREE
STOOGES or CIRCUIT LOWER BOUNDS.<br />
<br />
Explain why this is.<br />
<br />
EXPLANATION: You were probably thinking of ordinary Jeopardy, where the winner keeps whatever he wins and the losers' take-home is based ONLY on their rank ($2,000 for second place, $1,000 for third place). Hence Larry's bet seems risky, since he may lose $29,999, and Moe and Curly's bets seem irrelevant (or barely relevant: they both want to finish in second).<br />
<br />
BUT- these are Larry, Moe, and Curly, The Three Stooges. This is CELEBRITY JEOPARDY! The rules for money are different. First place gets the MAX of what he wins and $50,000. So Larry has NOTHING TO LOSE by betting $29,999: even if he is wrong he still has $20,001, which beats the $20,000 that Moe or Curly can reach at best. Second and third place BOTH get the MAX of what they win and $10,000. So Moe and Curly have NOTHING TO LOSE by betting $10,000. (I suspect they do this because the money goes to a charity chosen by the celebrity.)<br />
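The arithmetic above can be checked mechanically. Here is a minimal Python sketch, assuming the celebrity rules exactly as reconstructed above (my reading of the show, not an official source):

```python
# Celebrity rules as described above (my reconstruction, not official):
# first place gets max(final score, $50,000); second and third get
# max(final score, $10,000). Moe and Curly each have $10,000 and bet it all.

def guaranteed_first(larry_bet):
    """True if Larry finishes strictly ahead of Moe and Curly no matter
    who answers correctly. An opponent's best possible final is $20,000,
    so Larry needs even a wrong answer to leave him strictly above that."""
    return 50000 - larry_bet > 20000

assert guaranteed_first(29999)       # wrong answer still leaves $20,001
assert not guaranteed_first(30000)   # wrong answer risks a tie at $20,000

def larry_payoff(bet, right):
    """Larry's take-home (for his charity), assuming he finishes first."""
    final = 50000 + bet if right else 50000 - bet
    return max(final, 50000)

assert larry_payoff(29999, False) == larry_payoff(0, False) == 50000  # no downside
assert larry_payoff(29999, True) == 79999                             # pure upside
```

So $29,999 weakly dominates any smaller bet for Larry, and Moe and Curly's all-in bets are analogous: under these rules they cannot take home less than $10,000 no matter what.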
<br />
SIDE NOTE: I saw Celebrity Jeopardy and wanted to verify the above before posting. So I looked on the web for the rules for Celebrity Jeopardy. THEY WERE NOWHERE TO BE FOUND! A friend of mine finally found a very brief YouTube clip of Penn Jillette winning Celebrity Jeopardy, with a VERY BRIEF look at the final scores and how much money everyone actually got. That's how I verified what I thought were the rules for Celebrity Jeopardy.<br />
<br />
If I am looking up a theorem in Recursive Ramsey theory and can't find it on the web I am NOT surprised at all, since that is somewhat obscure (9 times out of 10 when I look up something in Ramsey Theory it points to one of the Ramsey Theory websites that I maintain. Usually it is there!). But the rules for Final Jeopardy -- I would think those are not so obscure. I am rather surprised they were not on the web.<br />
<br />http://blog.computationalcomplexity.org/2015/07/explain-this-scenario-in-jeapardy-and.htmlnoreply@blogger.com (GASARCH)4tag:blogger.com,1999:blog-3722233.post-6327277819057909538Mon, 27 Jul 2015 22:22:00 +00002015-07-27T21:33:05.534-05:00Explain this Scenario on Jeopardy<br />
Ponder the following:<br />
<br />
Larry, Moe, and Curly are on Jeopardy.<br />
<br />
Going into Final Jeopardy:<br />
<br />
Larry has $50,000, Moe has $10,000, Curly has $10,000<br />
<br />
Larry bets $29,999, Moe bets $10,000, Curly bets $10,000<br />
<br />
These bets are ALL RATIONAL and ALL MATTER independent of what the category is. For example, these bets make sense whether the category is THE THREE STOOGES or CIRCUIT LOWER BOUNDS.<br />
<br />
Explain why this is.<br />
<br />
I'll answer in my next post or in the comments of this one<br />
depending on... not sure what it depends on.<br />
<br />http://blog.computationalcomplexity.org/2015/07/explain-this-scenario-on-jeapardy.htmlnoreply@blogger.com (GASARCH)6tag:blogger.com,1999:blog-3722233.post-5973096607801741217Thu, 23 Jul 2015 13:07:00 +00002015-07-23T13:12:48.644-05:00New Proof of the Isolation LemmaThe isolation lemma of <a href="http://dx.doi.org/10.1007/BF02579206">Mulmuley, Vazirani and Vazirani</a> says that if we take random weights for elements in a set system, with high probability there will be a unique set of minimum weight. Mulmuley et al. use the isolation lemma to randomly reduce matching to computing the determinant. The isolation lemma also gives an alternative proof of <a href="http://blog.computationalcomplexity.org/2006/09/favorite-theorems-unique-witnesses.html">Valiant-Vazirani</a>, which shows how to randomly reduce NP-complete problems to ones with a unique solution.<br>
<br>
Noam Ta-Shma, an Israeli high school student (and son of Amnon), recently <a href="http://eccc.hpi-web.de/report/2015/080/">posted</a> a new proof of the isolation lemma. The MVV proof is not particularly complicated but it does require feeling very comfortable with independent random variables. Ta-Shma's proof is a more straightforward combinatorial argument.<br>
<br>
Suppose you have a set system over a universe of n elements. Give each element i a weight w<sub>i</sub> uniformly chosen between 1 and m. The weight of a set is the sum of the weights of the elements of that set. Ta-Shma shows that there is a unique minimum-weight set with probability at least (1-1/m)<sup>n</sup>, which beats out the bound of (1-n/m) given by MVV.<br>
<br>
Here is a sketch of his proof: Suppose all the w<sub>i</sub>'s have weights between 2 and m. Let S be the lexicographically first minimum-weight set given these weights. Consider the function φ(w), defined on weights with all the w<sub>i</sub> at least 2, as follows:<br>
<ul>
<li>φ(w)<sub>i</sub> = w<sub>i</sub> -1 if i is in S</li>
<li>φ(w)<sub>i</sub> = w<sub>i</sub> if i is not in S</li>
</ul>
<div>
Note that S is now the unique minimum-weight set under the weights φ(w). Moreover φ is 1-1, for we can recover w from φ(w) by taking the unique minimum-weight set in φ(w) and adding one to the weight of each element in that set.</div>
<div>
<br></div>
<div>
So the probability that random weights yield a unique minimum-weight set is at least<br>
<div style="text-align: center;">
|range(φ)|/m<sup>n</sup> = |domain(φ)|/m<sup>n</sup> = (m-1)<sup>n</sup>/m<sup>n</sup> = (1-1/m)<sup>n</sup>.</div>
</div>
<div style="text-align: left;">
<br></div>
<div style="text-align: left;">
Read all the details in Ta-Shma's <a href="http://eccc.hpi-web.de/report/2015/080/">paper</a>.</div>
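A quick empirical sanity check of the bound (my own sketch, not from the paper): fix a concrete set system, sample random weightings, and confirm that the fraction with a unique minimum-weight set is at least (1-1/m)<sup>n</sup>, up to sampling error.

```python
import itertools
import random

def unique_min_fraction(sets, n, m, trials=20000, seed=1):
    """Fraction of random weightings w: [n] -> {1,...,m} under which the
    set system has a unique minimum-weight set."""
    rng = random.Random(seed)
    unique = 0
    for _ in range(trials):
        w = [rng.randint(1, m) for _ in range(n)]
        totals = sorted(sum(w[i] for i in s) for s in sets)
        if len(totals) == 1 or totals[0] < totals[1]:
            unique += 1
    return unique / trials

n, m = 8, 20
sets = list(itertools.combinations(range(n), 3))  # all 3-element subsets
frac = unique_min_fraction(sets, n, m)
bound = (1 - 1 / m) ** n       # Ta-Shma's bound, about 0.663 here
assert frac >= bound - 0.02    # the bound, minus a little sampling slack
```

The lemma guarantees the bound for any set system; the all-3-subsets system here is just one arbitrary example.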
http://blog.computationalcomplexity.org/2015/07/new-proof-of-isolation-lemma.htmlnoreply@blogger.com (Lance Fortnow)2tag:blogger.com,1999:blog-3722233.post-2601546435391000317Tue, 21 Jul 2015 17:53:00 +00002015-07-22T16:13:26.525-05:00Hartley Rogers, Author of the first Textbook on Recursion Theory, passes awayHartley Rogers Jr passed away on July 17, 2015 (last Friday as I write this). He was 89 and passed peacefully.<br />
<br />
For our community Rogers is probably best known for his textbook on Recursion Theory which I discuss below. He did many other things, for which I refer you to<br />
his Wikipedia page <a href="https://en.wikipedia.org/wiki/Hartley_Rogers,_Jr.">here</a>.<br />
<br />
His book was:<br />
<br />
<b>The theory of recursive functions and effective computability.</b><br />
<br />
It was first published in 1967 but a paperback version came out in 1987.<br />
<br />
It was probably the first textbook in recursion theory, and it was fairly broad. Here are the chapter headings, with some comments:<br />
<ul>
<li>Recursive functions,</li>
<li>Unsolvable problems (the first edition came out before Hilbert's tenth problem was solved),</li>
<li>Purpose: Summary,</li>
<li>Recursive invariants,</li>
<li>Recursive and recursively enumerable sets,</li>
<li>Reducibilities,</li>
<li>One-one reducibilities; many-one reducibilities (maybe it's just me, but I can't imagine caring if the reduction is 1-1 or m-1),</li>
<li>Truth-table reducibilities; simple sets (``simple sets are not simple'' was a quote from Herbert Gelernter, who taught me my first course in recursion theory),</li>
<li>Turing reducibilities; hypersimple sets,</li>
<li>Post's problem; incomplete sets (Post's problem was to find an r.e. set that is neither recursive nor Turing-complete. When I tell people there is such a set, they often say `Oh, like Ladner's theorem.' That's true but backwards. It's still open to find a NATURAL set that is incomplete, though such sets probably don't exist and it's hard to pin that down),</li>
<li>The Recursion Theorem,</li>
<li>Recursively enumerable sets as a lattice,</li>
<li>Degrees of unsolvability,</li>
<li>The Arithmetic Hierarchy (Part 1),</li>
<li>The Arithmetic Hierarchy (Part 2),</li>
<li>The Analytic Hierarchy.</li>
</ul>
<br />
Looking over his book I noticed the following:<br />
<br />
1) He thanks Noam Chomsky (a linguist) and Burton Dreben (a philosopher). I think we are more specialized now. Would it be surprising if a text in recursion theory written now thanked people who are not in math?<br />
<br />
2) He thanks his typist. I think that people who write math books now type them themselves. I wonder if novelists also type their own books now.<br />
<br />
3) I think that Soare's book replaced it as THE book that young recursion theorists read. (Are there young recursion theorists?) Soare's book is chiefly on r.e. degree theory; Rogers's book is broader. When Rogers wrote his book much less was known (no 0'''-arguments, very little on random sets), so it was possible to have most of what was known in one book. That would be hard now, though Odifreddi's book comes close. Note that Odifreddi's book is in two volumes, with a third one to be finished... probably never.<br />
<br />
<br />
One personal note- I had a course on Recursion theory taught by Herbert Gelernter at Stony Brook (my ugrad school) in the Fall of 1979. We covered the first six chapters of Rogers's text. It was a great course from a great book taught by a great teacher and set me on the path to do work in recursion-theoretic complexity theory. http://blog.computationalcomplexity.org/2015/07/hartley-rogers-author-of-first-textbook.htmlnoreply@blogger.com (GASARCH)3tag:blogger.com,1999:blog-3722233.post-697629226368774002Thu, 16 Jul 2015 12:40:00 +00002015-07-16T07:40:40.359-05:00Microsoft Faculty SummitLast week I participated in my first <a href="http://research.microsoft.com/en-us/um/redmond/events/fs2015/default.aspx">Microsoft Faculty Summit</a>, an annual soiree where Microsoft brings about a hundred faculty to Redmond to see the latest in Microsoft Research. I love these kinds of meetings because I enjoy getting the chance to talk to computer scientists across the broad spectrum of research. Unlike other fields, CS hasn't had a true annual meeting since the 80's so it takes events like this to bring subareas together. "Unlike other fields" is an expression we say far too often in computer science.<br />
<br />
This was the first summit since the closing of the Silicon Valley lab and the reorganization of MSR into NExT (New Experiences and Technologies), led by Peter Lee, and MSR Labs, led by Jeannette Wing. Labs focuses on long-term research while NExT tries to put research into Microsoft products. Peter gave the example of real-time translation in Skype, already <a href="http://www.skype.com/en/translator-preview/">available</a> for public preview. Everyone in MSR emphasized that Microsoft will remain committed to open long-term research and said the latest round of cuts (<a href="http://news.microsoft.com/2015/07/08/satya-nadella-email-to-employees-on-sharpening-business-focus/">announced</a> while the summit was happening) will not affect research.<br />
<br />
<a href="https://www.microsoft.com/microsoft-hololens/en-us">HoloLens</a>, a way to manipulate virtual three-dimensional images, generated the most excitement. Unfortunately the summit didn't have HoloLenses for us to try out, but I did get a cool HoloLens T-shirt. While one expects the most interest in HoloLens for gaming, Microsoft emphasized the educational aspect. Microsoft has a <a href="http://research.microsoft.com/en-us/projects/hololens/">call for proposals</a> for research and education uses for HoloLens.<br />
<br />
I didn't go to many of the parallel sessions, instead spending the time networking with colleagues old and new. I did really enjoy the <a href="http://research.microsoft.com/en-us/um/redmond/events/fs2015/demofest-abstracts.aspx">research showcase</a>, which highlighted many of the research projects. I tried out the Skype translator, failing a reverse Turing test because I thought I was talking to a computer but it was really a Spanish-speaking human. My colleagues at MSR NYC were showing off their <a href="https://prediction.microsoft.com/">wisdom of the crowds</a>. Microsoft is moving their defunct academic search directly into Bing and Cortana. I tried Binging myself on the prototype and it did indeed list my research papers but not my homepage or this blog. They said they'll fix that in future updates.<br />
<br />
Monica Lam showed off her latest social messaging system <a href="http://www.omlet.me/">Omlet</a>, which aims to improve privacy by keeping data on the Omlet server for no longer than two weeks, though I was more excited by their open API. Feel free to Omlet me.<br />
<br />
While the meeting had its share of hype (quantum computers to solve world hunger), I really enjoyed the couple of days in Redmond. Despite the SVC closing, Microsoft is still one of the few companies that has labs focused on true basic research.http://blog.computationalcomplexity.org/2015/07/microsoft-faculty-summit.htmlnoreply@blogger.com (Lance Fortnow)10tag:blogger.com,1999:blog-3722233.post-7539655553907516201Mon, 13 Jul 2015 13:40:00 +00002015-07-13T08:40:38.876-05:00Is there an easier proof? A less messy proof? <br />
Consider the following statement:<br />
<br />
BEGIN STATEMENT:<br />
<br />
For all a,b,c, the equations<br />
<br />
x + y + z = a<br />
<br />
x<sup>2</sup> +y<sup>2</sup> + z<sup>2</sup> = b<br />
<br />
x<sup>3</sup> + y<sup>3</sup> + z<sup>3</sup> = c<br />
<br />
have a unique solution (up to perms of x,y,z). <br />
<br />
END STATEMENT<br />
<br />
One can also look at this with k equations, k variables, and powers 1,2,...,k.<br />
<br />
The STATEMENT is true. One can use Newton's identities (see <a href="https://en.wikipedia.org/wiki/Newton%27s_identities">here</a>) to obtain from the sums-of-powers all of the symmetric functions of x,y,z (uniquely). One can then form a polynomial which, in the k=3 case, is<br />
<br />
W<sup>3</sup> - (x+y+z)W<sup>2</sup> + (xy+xz+yz)W - xyz = 0<br />
<br />
whose roots are what we seek.<br />
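To make the Newton's-identities step concrete, here is a small sketch for the k=3 case (the numbers are my own example, not from the manuscript): recover the elementary symmetric functions from a, b, c, then check the roots of the resulting cubic.

```python
from fractions import Fraction

def elementary_from_power_sums(a, b, c):
    """Newton's identities for k = 3: recover the elementary symmetric
    functions e1, e2, e3 from the power sums p1 = a, p2 = b, p3 = c."""
    p1, p2, p3 = Fraction(a), Fraction(b), Fraction(c)
    e1 = p1                            # x + y + z
    e2 = (p1 * p1 - p2) / 2            # xy + xz + yz
    e3 = (p3 - e1 * p2 + e2 * p1) / 3  # xyz
    return e1, e2, e3

# Example: (x, y, z) = (1, 2, 3) gives a = 6, b = 14, c = 36.
e1, e2, e3 = elementary_from_power_sums(6, 14, 36)
assert (e1, e2, e3) == (6, 11, 6)

# The solutions are exactly the roots of W^3 - e1 W^2 + e2 W - e3 = 0,
# here W^3 - 6W^2 + 11W - 6 = (W-1)(W-2)(W-3):
for w in (1, 2, 3):
    assert w**3 - e1 * w**2 + e2 * w - e3 == 0
```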
<br />
I want to prove an easier theorem in an easier way that avoids using Newton's identities. Here is what I want to prove:<br />
<br />
Given those equations above (or the version with k powers), and told that a,b,c are nonzero natural numbers, I want to prove that there is at most one natural-number solution for (x,y,z) (or for x<sub>1</sub>,...,x<sub>k</sub> in the k-power case).<br />
<br />
It's hard to say `I want an easier proof' when the proof at hand really isn't that hard. And I don't want to say I want an `elementary' proof- I just want to avoid the messiness of Newton's identities. I doubt I can formalize what I want but, as <a href="https://en.wikipedia.org/wiki/I_know_it_when_I_see_it">Potter Stewart </a>said, I'll know it when I see it.<br />
<br />
<br />http://blog.computationalcomplexity.org/2015/07/is-there-easier-proof-less-messy-proof.htmlnoreply@blogger.com (GASARCH)2tag:blogger.com,1999:blog-3722233.post-7613370312904154780Thu, 09 Jul 2015 13:53:00 +00002015-07-09T08:53:04.529-05:00Will Our Understanding of Math Deteriorate Over Time?Scientific American <a href="http://www.scientificamerican.com/article/researchers-race-to-rescue-the-enormous-theorem-before-its-giant-proof-vanishes/">writes</a> about rescuing the enormous theorem (classification of finite simple groups) before the proof vanishes. How can a proof vanish?<br />
<br />
In mathematics and theoretical computer science, we read research papers primarily to find research questions to work on, or find techniques we can use to prove new theorems. What happens to a research area then when researchers go elsewhere?<br />
<br />
In a <a href="http://mathoverflow.net/a/44213">response</a> to a question about how can one contribute to mathematics, Bill Thurston notes that our knowledge of mathematics can deteriorate over time.<br />
<blockquote class="tr_bq">
Mathematical understanding does not expand in a monotone direction. Our understanding frequently deteriorates as well. There are several obvious mechanisms of decay. The experts in a subject retire and die, or simply move on to other subjects and forget. Mathematics is commonly explained and recorded in symbolic and concrete forms that are easy to communicate, rather than in conceptual forms that are easy to understand once communicated. Translation in the direction conceptual -> concrete and symbolic is much easier than translation in the reverse direction, and symbolic forms often replaces the conceptual forms of understanding. And mathematical conventions and taken-for-granted knowledge change, so older texts may become hard to understand. In short, mathematics only exists in a living community of mathematicians that spreads understanding and breathes life into ideas both old and new.</blockquote>
Once a research area fills out, researchers tend to move on to new and different ideas. Much of the research in the theoretical CS community in the 50's, 60's and 70's has been lost to journal articles, now nicely digitized but rarely downloaded.<br />
<br />
What will happen with complexity classes once people stop studying them? You already don't see that many recent papers on complexity classes, even in the Computational Complexity Conference. A victim of our own success and failures: We settled most of the easy questions and the rest are very hard. As my generation retires, the classes may retire as well, outside of a couple of the biggies like P and NP. The old papers will still be out there, and you can always look up the classes in the <a href="https://complexityzoo.uwaterloo.ca/Complexity_Zoo">zoo</a> or on Wikipedia, but the understanding that goes with people studying these classes, and why we cared about them, may deteriorate just like computer programs that go unattended.http://blog.computationalcomplexity.org/2015/07/will-our-understanding-of-math.htmlnoreply@blogger.com (Lance Fortnow)11tag:blogger.com,1999:blog-3722233.post-4581755612615888984Mon, 06 Jul 2015 01:29:00 +00002015-07-05T20:29:49.889-05:00Does Bob Deserve the lavish acknowledgement: A problem in Logic<br />
Alice and Carol are real mathematicians.<br />
Bob is an English major who does not know any mathematics. <br />
<br />
<br />
(This story is based on a true incident.)<br />
<br />
Alice writes a math paper. Carol reads it and offers corrections of style, grammar, and how-to-say-things. She also helps simplify some of the proofs. She does not deserve a co-authorship, but Alice does of course write in the acknowledgements<br />
<br />
<i>I would like to thank Carol for proofreading and for help with some of the proofs.</i><br />
<br />
Bob points out that this is silly --- if she would like to thank Carol then do so. So Alice changes it to<br />
<br />
<i>I thank Carol for proofreading and for help with some of the proofs.</i><br />
<br />
Even though Bob does not understand the math, he begins reading the paper. He finds a few grammar mistakes, some points of style, and even a math mistake:<br />
<br />
BOB: Alice, this sentence mentions A1 and A2, is A1 the steak sauce?<br />
<br />
ALICE: It's A sub 1, and no, it is not the steak sauce.<br />
<br />
BOB: But later in the sentence there is a reference to A? Maybe it's implicit what A is and I don't get it since I don't know the math, but it does look funny.<br />
<br />
ALICE: Well pierce my ears and call me drafty! You're right! It should be A1, A2, and A1 ∩ A2.<br />
<br />
SO, in the end Bob DID proofread the paper and DID help. Alice wants to include him in the acknowledgements. She modifies the ack to<br />
<br />
<i>I thank Bob and Carol for proofreading and help with some of the proofs</i>.<br />
<br />
Is that correct? Bob just did proofreading, and Carol did proofreading AND helped with some proofs. In logical terms<br />
<br />
If B did X and C did X and Y then<br />
<br />
B AND C did X AND Y<br />
<br />
does seem correct.<br />
<br />
But it also seems misleading. Alice could separate it out:<br />
<br />
<i>I thank Carol for proofreading and help with some of the proofs.</i><br />
<i>I thank Bob and Carol for proofreading.</i><br />
<br />
That's more accurate but also more cumbersome.<br />
<br />
But my real question is: is the I THANK BOB AND CAROL... statement correct or incorrect? In logic, correct; in English, perhaps not. We could ask Bob, who is an English major, and maybe get a paper out of it which Carol can proofread!<br />
<i><br /></i>http://blog.computationalcomplexity.org/2015/07/does-bob-deserve-lavish-acknowledgement.htmlnoreply@blogger.com (GASARCH)8tag:blogger.com,1999:blog-3722233.post-4088626805723670860Thu, 02 Jul 2015 11:56:00 +00002015-07-02T06:56:53.888-05:00Goodbye SIGACT and CRATuesday I served my last day on two organizations, the <a href="http://www.sigact.org/">ACM SIGACT</a> Executive Committee and the <a href="http://cra.org/">CRA</a> Board of Directors.<br />
<br />
I spent ten years on the SIGACT (Special Interest Group on Algorithms and Computation Theory) EC, four years as vice-chair, three years as chair and three years as ex-chair, admittedly not so active those last three years. SIGACT is the main US academic organization for theoretical computer science and organizes STOC as its flagship conference. I tried to do <a href="http://futureofstoc.blogspot.com/">big things</a>, managed a few smaller things (ToCT, a few more accepted papers in STOC, poster sessions, workshops, moving Knuth and Distinguished Service to annual awards, an award for best student presentation, a tiered PC), some of them stuck and some of them didn't. Glad to see a<a href="http://blog.computationalcomplexity.org/2015/06/changing-stoc.html"> new movement</a> to try big changes to meet the main challenge that no conference, including STOC, really brings the theory community together anymore. As Michael Mitzenmacher becomes chair and Paul Beame takes my place as ex-chair, I wish them them and SIGACT well moving forward.<br />
<br />
The Computing Research Association's main efforts promotes computing research to industry and government and increasing the diversity in computing research. It's a well-run organization and we can thank them particularly for helping improve the funding situation for computing in difficult financial times. The CRA occasionally puts out <a href="http://cra.org/resources/bp-memos/">best practices memos</a> like a <a href="http://cra.org/resources/bp-view/best_practices_memo_evaluating_scholarship_in_hiring_tenure_and_promot/">recent one</a> recommending quality over quantity for hiring and promotion. Serving on the board, I most enjoyed interacting with computer scientists from across the entire field, instead of just hanging with theorists at the usual conferences and workshops.<br />
<br />
One advantage of leaving these committees: I can now kibbitz more freely on the theory community and computing in general. Should be fun.http://blog.computationalcomplexity.org/2015/07/goodbye-sigact-and-cra.htmlnoreply@blogger.com (Lance Fortnow)0tag:blogger.com,1999:blog-3722233.post-3988619006753417194Mon, 29 Jun 2015 02:21:00 +00002015-06-28T21:21:11.538-05:00When do we care about small improvements?<br />
A while back this blog, Shtetl-Optimized, and GLL all blogged about the improved matrix multiplication algorithms (Complexity Blog: <a href="http://blog.computationalcomplexity.org/2011/11/matrix-mult-you-heard-it-here-third.html">here</a>, Shtetl-Optimized: <a href="http://www.scottaaronson.com/blog/?p=839">here</a>, GLL: <a href="https://rjlipton.wordpress.com/2011/11/29/a-breakthrough-on-matrix-product/">here</a>) of Stothers and Williams. They may have been on other theory blogs as well (if you know of any, let me know). We denote a Matrix Multiplication Algorithm by MMA, and we use n<sup>a</sup> instead of O(n<sup>a</sup>). All the papers we refer to can be found either <a href="https://en.wikipedia.org/wiki/Coppersmith%E2%80%93Winograd_algorithm">here</a> or <a href="https://en.wikipedia.org/wiki/Matrix_multiplication#Algorithms_for_efficient_matrix_multiplication">here</a>.<br />
<br />
1987: Coppersmith and Winograd get MMA in n<sup>2.375477</sup><br />
<br />
2010: Stothers gets MMA in n<sup>2.374</sup><br />
<br />
2011: Williams gets MMA in n<sup>2.3728642</sup><br />
<br />
(Williams and Stothers were independent, though Williams used some of Stothers's work to simplify her proofs for the final version.)<br />
<br />
The Stothers/Williams results were a big deal! Williams's paper got into STOC and, as mentioned above, three blogs reported on it AS SOON AS it was public. <br />
<br />
Fast forward to<br />
<br />
2014: Le Gall gets MMA in n<sup>2.3728639</sup>. Wikipedia says that this MMA is a simplification of Williams's algorithm.<br />
<br />
The 2014 result may or may not be interesting (that's a tautology!). But what strikes me is that I came across it in 2015 and had not heard of it. I emailed Lance to ask if he had heard of it; he had not. I don't quite know if Lance and I not knowing about it means it's not that well known, but it's at least one indicator.<br />
<br />
ALL of which brings us back to the title of this blog: When do we care about small improvements?<br />
<br />
<ol>
<li>We care about a small improvement if the result in question has been stuck where it was for a long time. This was the case for the Stothers/Williams MMA, and also when the approximation for metric TSP went from 3/2 to 3/2 - c (see <a href="http://blog.computationalcomplexity.org/2010/12/breakthrough-in-algorithms-improved.html">here</a>).</li>
<li>We care about small improvements if they illustrate a new technique. The leaf-counting technique for lower bounds on the number of comparisons needed to find the ith largest element gave clean proofs and (I think) small improvements to known results. (Leaf-counting technique in brief: any decision tree for MAX has EVERY branch of length at least n-1, and hence at least 2<sup>n-1</sup> leaves. Take a tree for the ith largest. For each choice of i-1 elements x<sub>1</sub>,...,x<sub>i-1</sub>, prune the tree by having these elements beat everything else; how they compare to each other can be determined arbitrarily. This results in a MAX tree for the remaining n-i+1 elements, which hence has at least 2<sup>n-i</sup> leaves. All such trees have disjoint sets of leaves, so the original tree has at least (n choose i-1)2<sup>n-i</sup> leaves, and hence a branch of length at least log<sub>2</sub>((n choose i-1)2<sup>n-i</sup>) = n - i + log<sub>2</sub>(n choose i-1). Was this better than what was known at the time? Not sure, but the proof was much simpler.)</li>
<li>We care about small improvements if they tell you that a natural upper or lower bound is NOT true:
<ol>
<li>Bent and John showed that median requires 2n - 2sqrt(n) - O(log n) comparisons, and later Dor and Zwick improved this to (2 + 1/2<sup>50</sup>)n. (I can't seem to find a free online version of this; if you do, please leave a comment.)</li>
<li>Schonhage, Paterson, and Pippenger had median in 3n + o(n), and Dor and Zwick (not a typo, the same people) improved this to 2.95n + o(n). (I can't seem to find a free online version of this either; if you do, please leave a comment.)</li>
<li>In both cases, especially the first, the improvement is really small and unimportant; however, knowing that 2n is NOT the lower bound and 3n is NOT the upper bound is worth knowing. Hence the exact complexity is NOT going to be the nice number 2n or 3n. The only nice number between 2 and 3 is e, so let's hope it's en. (I think I read that 2.5n is the actual conjecture. 2.5 is nice, but e is nicer.)</li>
</ol>
</li>
</ol>
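The leaf-counting bound from item 2 is easy to evaluate in code (a sketch of my reading of the argument, with the i-1 "large" elements chosen in (n choose i-1) ways):

```python
from math import comb, log2

def leaf_counting_bound(n, i):
    """Lower bound on comparisons to find the ith largest of n elements:
    the decision tree has at least C(n, i-1) * 2^(n-i) leaves, so some
    branch has length at least log2 of that."""
    return (n - i) + log2(comb(n, i - 1))

# i = 1 (MAX) recovers the classic n - 1 comparisons:
assert leaf_counting_bound(100, 1) == 99

# i = 2 (second largest) gives n - 2 + log2(n), in the ballpark of the
# known n + ceil(log2 n) - 2 bound:
assert abs(leaf_counting_bound(100, 2) - (98 + log2(100))) < 1e-9
```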
<br />
<br />http://blog.computationalcomplexity.org/2015/06/when-do-we-care-about-small-improvements.htmlnoreply@blogger.com (GASARCH)2tag:blogger.com,1999:blog-3722233.post-8248977757222767226Thu, 25 Jun 2015 12:33:00 +00002015-06-25T07:33:01.394-05:00Changing STOCAt the recently completed STOC and the previous FOCS, much of the discussion revolved around reforming the conferences. You can read the <a href="http://windowsontheory.org/tag/stoc-focs-reform/">discussion and comments</a> on Windows on Theory and I've also been cc'd on several very long email threads.<br />
<br />
STOC, as the flagship conference of ACM SIGACT, should be the focal point of the community, the place where researchers circle their calendars and make sure they attend the event. You see that at SOSP for the systems community or SIGCOMM for networking. But not STOC, which is smaller than it was thirty years ago, when the theory community had a fraction of the people we have today.<br />
<br />
Instead STOC has become a conference for its authors, to give researchers a prestigious line in their CVs. While authors get to present their papers, STOC is no longer a primary place for dissemination, better served by <a href="http://arxiv.org/">arXiv</a> and <a href="http://eccc.hpi-web.de/">ECCC</a>.<br />
<br />
The problem is conference overload. We have two top theory conferences a year, STOC and FOCS, not to mention SODA, Computational Complexity and so many others. Conferences are expensive in both time and money and we can't afford to attend too many. People often choose more specialized conferences and workshops where they can focus on talking to people in their specialized research areas.<br />
<br />
SOSP, on the other hand, meets only once every two years, accepts only thirty papers, and gets 600 attendees.<br />
<br />
The only true solution would be to merge and/or eliminate conferences. We don't need two major theory conferences a year. But that's not politically feasible.<br />
<br />
So the talk is of a Theory Festival centered around STOC in 2017, to make an event that all theorists would want to attend. What that theory festival should or should not do is the topic of all the discussion. I'm not going to talk about the various proposals, but I encourage strong experimentation to get us out of this bad equilibrium. Otherwise we end up with the status quo, and the status quo does not bring our community together.<br />
<br />http://blog.computationalcomplexity.org/2015/06/changing-stoc.htmlnoreply@blogger.com (Lance Fortnow)3tag:blogger.com,1999:blog-3722233.post-4786093465574505753Mon, 22 Jun 2015 18:15:00 +00002015-06-22T13:15:49.094-05:00Learning from teaching a HS student Schur's theorem on change<br />
(All the math this post refers to is in my manuscript which is <a href="http://www.cs.umd.edu/~gasarch/BLOGPAPERS/schur.pdf">here.)</a><br />
<br />
Recall Schur's theorem on making change, as stated in Wikipedia and other sources:<br />
<br />
<i>Let a<sub>1</sub>,...,a<sub>L</sub> be relatively prime coin denominations. Then the number of ways to make n cents</i><br />
<i>change is n<sup>L-1</sup>/((L-1)! a<sub>1</sub>a<sub>2</sub>...a<sub>L</sub>) + Θ(n<sup>L-2</sup>).</i><br />
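The theorem is easy to check numerically with the standard coin-change dynamic program (a sketch with denominations of my own choosing; the constant in the final assertion is a generous guess, not part of the theorem):

```python
from math import factorial

def ways(n, coins):
    """Number of ways to make n cents from the given denominations,
    order not mattering -- the standard coin-change DP."""
    dp = [1] + [0] * n
    for a in coins:
        for v in range(a, n + 1):
            dp[v] += dp[v - a]
    return dp[n]

coins = [1, 5, 10]          # relatively prime denominations, L = 3
L, n = len(coins), 10000
exact = ways(n, coins)

prod = 1
for a in coins:
    prod *= a
main_term = n ** (L - 1) / (factorial(L - 1) * prod)   # n^2 / 100 here

# Schur: exact = main_term + Theta(n^{L-2}), so the error grows only linearly:
assert abs(exact - main_term) < 100 * n ** (L - 2)
```

For these denominations the exact count at n = 10000 is 1,002,001 against a main term of 1,000,000, so the error is indeed linear-sized.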
<br />
The proof I knew (from Wilf's book on generating functions) was not difficult; however, it involved roots of unity, partial fractions, Taylor series, and generating functions. I needed to present the proof to a HS student who was in precalc. The writeup linked above is what I finally came up with. A few points:<br />
<br />
<ol>
<li>HS students, or at least mine, knew complex numbers. Hence roots-of-unity was okay. The proof of Schur's theorem has another plus: he had asked me just recently how complex numbers could be used in the real world, since they weren't... real. I said they are often used as an intermediary on the way to a real solution, and gave him an example of a cubic equation where you spot a complex solution and use it to obtain the real solutions. Schur's theorem is a more sophisticated example of using complex numbers to get a result about reals (about naturals!), so that's a win.</li>
<li>Partial Fractions. If the student had had calculus then he would know what partial fractions were and believe me when I said they always work. But since he had not had calculus I prepared a proof that they work. Then I realized--- I have never seen a proof that they work! This is a matter of timing- I saw them in High School Calculus in 1975, which was taught without proofs (just as well, analysis is a bad first-proof course), and I didn't realize then that the techniques they taught us aren't quite a proof that the decomposition works. I came up with my own proof (I can't imagine it's original but I have not found a ref) in 2015. That's 40 years between seeing a concept and proving that it works. A personal record.</li>
<li>Taylor Series. I needed the Taylor series for 1/(1-x)^b (just for b a natural). I came up with a proof that does not use calculus and that a HS student could follow. Very happy that I was forced to do this. It actually uses a nice combinatorial identity!</li>
<li>The lemmas about partial fractions and about Taylor series are of course very very old. Are my proofs new? I doubt it though I have not been able to find a reference. If you know one please leave a polite comment.</li>
<li>Having gone through the proof so carefully I noticed something else the proof yields: Let M be the LCM of a<sub>1</sub>,...,a<sub>L</sub>. For all 0 ≤ r ≤ M-1 there is a polynomial p of degree L-1 such that if n ≡ r (mod M) then p(n) is the number of ways to make change of n cents. I suspect this is known but could not find a ref (again- if you know one then please leave a polite comment.)</li>
</ol>
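The calculus-free Taylor series in point 3 can be sanity-checked in a few lines of code (a sketch; the function name is mine, not from the manuscript): the coefficient of x^n in 1/(1-x)^b is C(n+b-1, b-1), and multiplying a power series by 1/(1-x) = 1 + x + x^2 + ... just takes prefix sums of its coefficients.

```python
from math import comb

def taylor_coeffs_inv_power(b, N):
    """First N+1 Taylor coefficients of 1/(1-x)^b around 0.
    Multiplying a series by 1/(1-x) takes prefix sums of its
    coefficients, so start from 1/(1-x) and repeat b-1 times."""
    coeffs = [1] * (N + 1)          # coefficients of 1/(1-x): all 1s
    for _ in range(b - 1):
        for i in range(1, N + 1):
            coeffs[i] += coeffs[i - 1]
    return coeffs

# The combinatorial identity: the coefficient of x^n is C(n+b-1, b-1).
b, N = 4, 10
assert taylor_coeffs_inv_power(b, N) == [comb(n + b - 1, b - 1) for n in range(N + 1)]
```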
Moral of the story: By working with a HS student I was forced to find a proof of partial fraction decomposition, find a non-calculus proof of a Taylor series, and obtain an interesting corollary. Hence this is already a win!<br />
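To see the theorem in action one can count change the slow way by dynamic programming and compare with Schur's leading term. A quick sketch (the names here are mine):

```python
from math import factorial

def ways(n, coins):
    """Number of ways to make n cents from the given denominations,
    ignoring order, by the standard dynamic program."""
    dp = [1] + [0] * n
    for c in coins:
        for v in range(c, n + 1):
            dp[v] += dp[v - c]
    return dp[n]

coins = [1, 2, 5]              # relatively prime denominations, so L = 3
L, n = len(coins), 10000
prod = 1
for c in coins:
    prod *= c
exact = ways(n, coins)
leading = n ** (L - 1) / (factorial(L - 1) * prod)   # n^2/(2*10) here
# Schur: exact = leading + Theta(n^(L-2)), so the ratio tends to 1.
print(exact, leading, exact / leading)
```

For n = 10000 with these coins the exact count and the leading term agree to within a fraction of a percent, as the theorem predicts.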
<br />http://blog.computationalcomplexity.org/2015/06/learning-from-teaching-hs-student.htmlnoreply@blogger.com (GASARCH)1tag:blogger.com,1999:blog-3722233.post-1477335660022852869Thu, 18 Jun 2015 16:59:00 +00002015-06-18T11:59:07.680-05:00FCRC ComplexityOn Wednesday at FCRC, the Complexity, STOC and EC (Economics and Computation) conferences all have sessions, a smorgasbord of talks, but tough decisions on what session to attend. Here's one you might have misses, the EC paper <a href="http://dx.doi.org/10.1145/2764468.2764515">Why Prices Need Algorithms</a> by Roughgarden and Talgam-Cohen that has a nice application of complexity to the existence of equilibrium, not whether the equilibrium is hard to compute but whether it exists.<div>
<br /></div>
<div>
Roughly, Roughgarden and Talgam-Cohen show that if a certain kind of pricing equilibrium exists then one can get a certain kind of efficient reduction. Under reasonable complexity assumptions (like P <> NP) such reductions can't exist, and so neither can the equilibrium.</div>
<div>
<br /></div>
<div>
Late Wednesday came the Complexity business meeting, the first open business meeting of the now unaffiliated Computational Complexity Conference. There were 84 attendees, a little bit down from last FCRC but higher than last year. There were 30 papers accepted out of 110 submissions. The 2016 conference will be held in Tokyo May 29-June 1.</div>
<div>
<br /></div>
<div>
There was much discussion on where Complexity will be in 2017 and on which journal will get the special issue for the next three years. Watch my twitter to see when they get set.</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<div>
<br /></div>
<div>
<br /></div>
</div>
http://blog.computationalcomplexity.org/2015/06/fcrc-complexity.htmlnoreply@blogger.com (Lance Fortnow)0tag:blogger.com,1999:blog-3722233.post-8934833711553304501Tue, 16 Jun 2015 20:53:00 +00002015-06-16T15:53:46.279-05:00STOC Business MeetingThis week I'm at the <a href="http://fcrc.acm.org/">Federated Computing Research Conference</a> in Portland, a collection of many mostly ACM conferences. Last night was the STOC business meeting.<br />
<br />
The meeting had beer but the beer had no alcohol. Not an auspicious start.<br />
<br />
289 registrants, a bit lower than the past few STOCs. With EC and Complexity at the same conference you would think STOC would draw a larger crowd, but it didn't.<br />
<br />
The conference had 93 accepted papers out of 347 submissions. Best papers (copied from proceedings):<br />
The papers “Exponential Separation of Information and Communication for Boolean Functions”, by Anat Ganor, Gillat Kol and Ran Raz, “2-Server PIR with sub-polynomial communication” by Zeev Dvir and Sivakanth Gopi, and “Lower bounds on the size of semidefinite programming relaxations” by James Lee, Prasad Raghavendra and David Steurer, were selected for the STOC Best Paper Award. The paper “Inapproximability of Nash Equilibrium”, by Aviad Rubinstein, was selected for the Danny Lewin Best Student Paper Award.<br />
<br />
These and the 2015 STOC papers are publicly accessible via <a href="http://acm-stoc.org/stoc2015/toc.html">acm-stoc.org/stoc2015/toc.html</a> forever, part of a new <a href="http://acm-stoc.org/">STOC website</a> with a history of past conferences.<br />
<br />
Babai, Spielman, and Teng officially received the awards we've <a href="http://blog.computationalcomplexity.org/2015/06/award-season.html">mentioned below</a>. The distinguished service award went to Avi Wigderson.<br />
<br />
As of July 1 we have a new SIGACT Executive Committee: Paul Beame (ex-Chair), Anna Karlin, Michael Mitzenmacher (Chair), Toni Pitassi, Eric Vigoda, Ryan Williams<br />
<br />
Lots of useful information about funding and more in Salil Vadhan's <a href="https://thmatters.files.wordpress.com/2015/06/catcs-report-stoc-2015.pptx">CATCS Report</a>.<br />
<br />
Upcoming conferences:<br />
<br />
<ul>
<li><a href="http://www.cs.cmu.edu/~venkatg/FOCS-2015-cfp.html">FOCS 2015</a> October 17-20 in Berkeley</li>
<li><a href="https://www.siam.org/meetings/da16/">SODA 2016</a> January 10-12 in Arlington, VA. Abstract deadline July 1</li>
<li>STOC 2016 - Cambridge, MA June 18-21 collocated with SoCG</li>
<li>STOC 2017 - Potential "Theory Festival" - see below</li>
<li>STOC 2018 - 50th STOC possibly near Marina del Rey, the home of the first STOC</li>
</ul>
<div>
The business meeting ended with a discussion about a "Theory Festival" for STOC 2017. The goal is to get STOC to be a "must attend" meeting the way SOSP is for systems and SIGCOMM for networking. Check out the many discussions on Boaz Barak's <a href="http://windowsontheory.org/tag/stoc-focs-reform/">blog</a>.</div>
http://blog.computationalcomplexity.org/2015/06/stoc-business-meeting.htmlnoreply@blogger.com (Lance Fortnow)0tag:blogger.com,1999:blog-3722233.post-6099940531411532668Mon, 15 Jun 2015 02:12:00 +00002015-06-16T11:49:29.234-05:00First issue of SIGACT news where I wasn't the editor. But...<br />
I posted at some earlier time that I was resigning from the editorship of the SIGACT News book review column, and handing the reins over to Fred Green (who is older than me, so `handing over the reins' might not be quite right).<br />
<br />
You can find the column on Fred's webpage <a href="http://mathcs.clarku.edu/~fgreen/">here</a>.<br />
<br />
I wish him well of course. He's off to a great start:<br />
<br />
<ol>
<li><b>The Cult of Pythagoras: Math and Myths</b>
by Alberto A. Martinez. Review by Bill Gasarch.
</li>
<li><b>Infinitesimal:
How a dangerous mathematical theory shaped the modern world</b>,
by Amir Alexander. Review by Bill Gasarch.
</li>
<li><b>Martin Gardner in the Twenty-First Century</b>,
edited by Michael Henle and Brian Hopkins. Review by Bill Gasarch.
</li>
<li><b>Algorithmic Barriers Falling: P=NP?</b>, and
<b>The Essential Knuth</b>, both by Edgar Daylight. Review by Bill Gasarch.
</li>
<li><b>Love and Math: The Heart of Hidden Reality</b>
by Edward Frenkel. Review by Bill Gasarch.
</li>
<li><b>Structure and Randomness: Pages from Year One of a Mathematical Blog</b> by Terence Tao. Review by Bill Gasarch. </li>
</ol>
Why so many reviews by... me?! Fred writes that it's a tribute to my service, which is of course true and appreciated. But I also note that when I was editor it would have been... odd? impolite? to have all of the columns by me in a single issue. But having Fred do it is fine. When Fred resigns 17 years from now (not to give away his age, but I think he'll retire before then), perhaps his successor will do the same.<br />
<br />
There are still books to review! Here is a list of books I still have in my office. If you want to review one, email both gasarch@cs.umd.edu and fgreen@clarku.edu,<br />
naming the books you want and including the physical address to send them to.<br />
ALGORITHMS<br />
<br />
ReCombinatorics: The algorithmics of ancestral recombination graphs and<br />
phylogenetic networks by Gusfield.<br />
<br />
<br />
Tractability: Practical Approaches to Hard Problems. Edited by Bordeaux, Hamadi, Kohli.<br />
<br />
Recent progress in the Boolean Domain. Edited by Bernd Steinbach<br />
<br />
<br />
PROGRAMMING LANGUAGES<br />
<br />
Selected Papers on Computer Languages by Donald Knuth.<br />
<br />
MISC COMP SCI<br />
<br />
Introduction to reversible computing by Perumalla.<br />
<br />
<br />
Digital Logic Design: A Rigorous Approach by Even and Medina<br />
<br />
CoCo: The colorful history of Tandy's Underdog Computer by Boisy Pitre and<br />
Bill Loguidice.<br />
<br />
MATH AND HISTORY<br />
<br />
Professor Stewart's Casebook of Mathematical Mysteries by Ian Stewart.<br />
<br />
The Golden Ratio and Fibonacci Numbers by Richard Dunlap.<br />
<br />
<br />
<br />
Mathematics Galore! The first five years of the St. Marks Institute of Mathematics by Tanton.<br />
<br />
Mathematics Everywhere. Edited by Aigner and Behrends.<br />
<br />
An Episodic History of Mathematics: Mathematical Culture Through Problem Solving by Krantz.<br />
<br />
Proof Analysis: A Contribution to Hilbert's Last Problem by Negri and Von Plato.<br />
<br />
<br />
<br />
<br />
<br />
<br />http://blog.computationalcomplexity.org/2015/06/first-issue-of-sigact-news-where-i.htmlnoreply@blogger.com (GASARCH)2tag:blogger.com,1999:blog-3722233.post-7928080482644614795Thu, 11 Jun 2015 12:37:00 +00002015-06-12T12:35:38.318-05:00A Metric Group Product<div dir="ltr" style="margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: Arial;"><span style="font-size: 14.6666669845581px; line-height: 20.2399997711182px; white-space: pre-wrap;"><i>A guest post by Dylan McKay, recently graduated from Georgia Tech and soon to be PhD student at Stanford. </i></span></span><br />
<span style="font-family: Arial;"><span style="font-size: 14.6666669845581px; line-height: 20.2399997711182px; white-space: pre-wrap;"><i><br /></i></span></span>
<span style="font-family: Arial;"><span style="font-size: 14.6666669845581px; line-height: 20.2399997711182px; white-space: pre-wrap;">[Editor's Note: Turns out the given solution <a href="http://blog.computationalcomplexity.org/2015/06/a-metric-group-product.html?showComment=1434130382494#c1863051852889917210">doesn't work</a> and whether a metric group product over the non-negative reals exists remains open.]</span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Here is a cute puzzle motivated by a pair of undergrads and their poor understanding of what the phrase “Algebraic Geometry” really should mean:</span><br />
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Find a function </span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">f </span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">from the nonnegative reals to the nonnegative reals that satisfies the group axioms and the metric axioms, or prove that there is no such function. That is, find an </span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">f:RxR→R</span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> such that </span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">(R,f)</span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> is a group and a metric 
space. (I am using </span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: italic; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">R</span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> to refer to the set of nonnegative real numbers).</span><br />
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span></div>
<span style="font-family: Arial; font-size: 14.6666666666667px; line-height: 1.38; white-space: pre-wrap;">As a quick reminder, the group axioms are:</span><br />
<span style="color: black; font-family: Arial; font-size: 14.6666666666667px; vertical-align: baseline; white-space: pre-wrap;"><br /></span>
<span style="color: black; font-family: Arial; font-size: 14.6666666666667px; vertical-align: baseline; white-space: pre-wrap;">- Closure: </span><span style="color: black; font-family: Arial; font-size: 14.6666666666667px; font-weight: bold; vertical-align: baseline; white-space: pre-wrap;">f(a,b)</span><span style="color: black; font-family: Arial; font-size: 14.6666666666667px; vertical-align: baseline; white-space: pre-wrap;"> must be in </span><span style="color: black; font-family: Arial; font-size: 14.6666666666667px; font-weight: bold; vertical-align: baseline; white-space: pre-wrap;">R</span><br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">- Identity: There must exist an element </span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">e</span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> such that for all </span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">a</span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">, </span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">f(e,a)=f(a,e)=a</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">- Associative: for all </span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">a,b,c,</span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">f(f(a,b),c) = f(a,f(b,c))</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">- Inverse: for all </span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">a</span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">, there must exist a </span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">b</span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> such that </span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">f(a,b)=e</span><br />
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">And the metric axioms are:</span><br />
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">- f(a,b) = 0 iff a=b</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">- f(a,b) <= f(a,c) + f(c,b)</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">- f(a,b) = f(b,a)</span><br />
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">One really bizarre thing about this supposed function: since f is also a group product, f(x,y) = f(x,z) forces y = z by cancellation, so given some number x, every other number lies at a distance from x that is unique -- no other number is exactly that distance away! This is quite counterintuitive to how we think about distance. Each number is its own distance from 0, though, which is very much in line with intuition.</span><br />
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">(If you want to solve the puzzle yourself, don't read any further until you do! Unless you want some hints, in which case, look at the next paragraph.)</span><br />
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">We can, however, make some other observations that point us in the right direction. For instance, </span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">f(e,e) = 0</span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> from the metric axioms and </span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">f(e,e) = e</span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> from the group axioms, so you get that </span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">e = 0</span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">. 
From this and the metric axioms, you get that every element must be its own inverse! </span><br />
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Here, we are reminded of a pretty familiar and trusty function, the exclusive-or (XOR) function! And indeed, this will lead us to a solution.</span><br />
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Here is our candidate function:</span><br />
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Consider x and y in their binary representations. If they do not have the same number of bits to the left of the decimal point (or should it be called the binary point?), pad the one with fewer such bits with 0’s so they have the same number of such bits. Then </span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.6666666666667px; font-style: normal; font-variant: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><b>f(x,y)</b></span><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> will be the number represented in binary by the bit-wise exclusive or of these two strings (maintaining the same number of bits to the left of the decimal point, of course). And there we have it: our function! </span><br />
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Now of course, we still need to prove that it satisfies the axioms. But if you do not believe that it does, work it out for yourself! Each axiom is pretty simple to check, especially as we are (for the most part) familiar with this as an extension of an idea for some smaller groups. Well, all of them except our good friend the triangle inequality. This one actually isn’t too bad, but if you have trouble, I will include a hint at the bottom of the post.</span><br />
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span></div>
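Over the nonnegative integers, where binary expansions are unique, the XOR product can be checked against every axiom by brute force; here is a small sketch (per the editor's note above, the trouble arises when extending the construction to all nonnegative reals, not on the integers):

```python
def f(a, b):
    """Bitwise XOR of nonnegative integers: the integer special case
    of the proposed product (integers have unique binary expansions)."""
    return a ^ b

# Brute-force check of the group and metric axioms on 0..63.
N = 64
for a in range(N):
    assert f(0, a) == f(a, 0) == a                  # identity e = 0
    assert f(a, a) == 0                             # each element is its own inverse
    for b in range(N):
        assert f(a, b) == f(b, a)                   # symmetry
        assert (f(a, b) == 0) == (a == b)           # f(a,b) = 0 iff a = b
        for c in range(N):
            assert f(f(a, b), c) == f(a, f(b, c))   # associativity
            assert f(a, b) <= f(a, c) + f(c, b)     # triangle inequality
print("all group and metric axioms hold on 0..63")
```

The triangle inequality check passes for the reason the hint at the end of the post suggests: f(a,b) = f(a,c) XOR f(c,b), and x XOR y <= x + y because XOR is addition without carries.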
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Anyway, I hope you liked our puzzle!</span><br />
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Thanks!</span><br />
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">-Dylan</span><br />
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Hint for triangle inequality: XOR and addition are really similar, but there is a key difference between them that makes a XOR b <= a + b.</span></div>
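To see why the hint works, here is a small brute-force check (my own illustrative sketch, not part of the original puzzle): XOR is binary addition without the carries, so a XOR b <= a + b for nonnegative integers, and the triangle inequality for the distance d(x, y) = x XOR y follows.

```python
# Brute-force sanity check of the hint: for nonnegative integers,
# a XOR b <= a + b, because XOR is addition with the carries dropped.
# From this, d(x, y) = x XOR y satisfies the triangle inequality, since
# x XOR z = (x XOR y) XOR (y XOR z) <= (x XOR y) + (y XOR z).

def check_xor_bound(limit=256):
    # Verify a XOR b <= a + b for all pairs below the limit.
    return all((a ^ b) <= a + b for a in range(limit) for b in range(limit))

def check_triangle(limit=64):
    # Verify d(x, z) <= d(x, y) + d(y, z) for d(x, y) = x XOR y.
    return all((x ^ z) <= (x ^ y) + (y ^ z)
               for x in range(limit) for y in range(limit) for z in range(limit))

print(check_xor_bound())   # True
print(check_triangle())    # True
```

The key identity is x XOR z = (x XOR y) XOR (y XOR z); after that, the bound on XOR does all the work.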
http://blog.computationalcomplexity.org/2015/06/a-metric-group-product.htmlnoreply@blogger.com (Lance Fortnow)18tag:blogger.com,1999:blog-3722233.post-5029182752716776614Mon, 08 Jun 2015 19:02:00 +00002015-06-08T14:27:48.643-05:00The city where the book publishers resides is Funkytown!A while back when I got back the galleys for a paper, the publisher wanted to know the complete postal address of one of the co-authors and also the city where the publisher of a book in the bibliography was located. The publisher didn't really have a city, so I wrote in FUNKYTOWN. <br />
<br />
Is it important in a paper to have the city that the publisher is in? I assume that at one time it was (though I am not sure why even that is), but now it is<br />
completely irrelevant.<br />
<br />
Is it important to have an author's postal address? I can't imagine why nowadays in 2015, though I can imagine it being important in an earlier era. <br />
<br />
<br />
Question: What information do publishers ask for in galleys that is no longer relevant? Why do they ask? When will they stop?<br />
<br />
<br />http://blog.computationalcomplexity.org/2015/06/the-city-where-book-publishers-resides.htmlnoreply@blogger.com (GASARCH)2tag:blogger.com,1999:blog-3722233.post-4857864447576414386Thu, 04 Jun 2015 15:01:00 +00002015-06-08T15:56:58.359-05:00Langs with provably bigger CFG's than CSG's<br />
In a prior blog entry <a href="http://blog.computationalcomplexity.org/2015/05/sizes-of-dfas-nfas-dpdas-ucfg-cfgs-csls.html">HERE</a> I discussed very large differences in the size of machines. I didn't discuss CFGs vs CSGs, so I'll do that now. We assume that the CFGs and CSGs are in Chomsky Normal Form (for CSGs this means that the RHS of any rule has length 1 or 2).<br />
<br />
The following are known:<br />
<br />
1) Let PERMn be the set of permutations of {1,...,n} (so PERM3 is {123,132,213,231,312,321}). Then (1) there exists a CSG for PERMn of size O(n<sup>2</sup>), but (2) every CFG for PERMn requires size 2<sup>Ω(n)</sup>. The upper bound is easy; the lower bound is by Ellul, Krawetz, Shallit, Wang <a href="https://cs.uwaterloo.ca/~shallit/Papers/re3.pdf">Here</a>.<br />
<br />
2) Let Wn = { ww : |w|=n}. Then (1) there exists a CSG for Wn of size O(n), but (2) every CFG for Wn requires size 2<sup>Ω(n)</sup>. The upper bound we leave to the reader. See Filmus <a href="http://www.cs.toronto.edu/~yuvalf/CFG-LB.pdf">HERE</a> for the lower bound. (ADDED LATER: In a comment below Filmus (yes, the same one!) claims Wn has a CSG of size O(log n).)<br />
<br />
3) Let Sn = { w : |w|=3n and the numbers of a's, b's, c's in w are all the same}. Then (1) there exists a CSG for Sn of size O(log n) but (2) every CFG for Sn requires size 2<sup>Ω(n)</sup>. The upper bound is from <a href="http://arxiv.org/abs/1503.08847">Beigel and Gasarch</a> (if you know of an earlier source leave a polite comment). The lower bound is from Filmus (the ref above). (ADDED LATER: In a comment below Filmus (yes, the same one!) corrects me; his paper did not show an exponential lower bound for Sn, and in fact Sn DOES have a CFG of size O(n^3).)<br />
<br />
4) The languages above are (informally) natural. If one goes to unnatural languages then there is a result where the CFG is GINORMOUS: For all f ≤<sub>T</sub> HALT, for all n, there is a lang Ln such that (1) there exists a CSG for Ln of size n, but (2) every CFG for Ln has size ≥ f(n). First proven by Meyer, but a new proof by Beigel and Gasarch is in the paper pointed to above. (See that paper for the history.)<br />
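To make languages 1-3 concrete, here is a sketch of membership tests (my own illustration; the function names are made up, and the content of the results above is the grammar *sizes*, which these tests do not touch):

```python
# Membership tests for the three example languages (illustration only).

def in_perm(w, n):
    # PERMn: w is a permutation of 1..n, given as a sequence of numbers.
    return sorted(w) == list(range(1, n + 1))

def in_ww(s, n):
    # Wn = { ww : |w| = n }: the first half equals the second half.
    return len(s) == 2 * n and s[:n] == s[n:]

def in_s(s, n):
    # Sn: |s| = 3n over {a,b,c} and the counts of a's, b's, c's are all n.
    return (len(s) == 3 * n and set(s) <= set("abc")
            and s.count("a") == s.count("b") == s.count("c") == n)

print(in_perm([2, 3, 1], 3))   # True
print(in_ww("abab", 2))        # True
print(in_s("abccba", 2))       # True
```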
<br />
The results stated above are suitable for an ugrad formal lang theory course.<br />
<br />
Are there natural langs with triple-exp or larger gap between CFG's and CSG's? There are very few techniques to get lower bounds for CFG's (the papers of Ellul et al, and Filmus, are the only ones I know) so new techniques may be needed. Are there even any good candidates for natural languages with small CSG's and really really large CFG's?<br />
<br />
<br />
<br />http://blog.computationalcomplexity.org/2015/06/langs-with-provably-bigger-cfgs-then.htmlnoreply@blogger.com (GASARCH)1tag:blogger.com,1999:blog-3722233.post-2822666924398010803Mon, 01 Jun 2015 20:46:00 +00002015-06-01T15:47:23.996-05:00Award Season<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
László Babai will receive the <a href="http://www.sigact.org/Prizes/Knuth/citation2015.pdf">2015 Knuth Prize</a> and Daniel Spielman and Shang-Hua Teng will receive the <a href="http://www.sigact.org/Prizes/Godel/citation2015.pdf">2015 Gödel Prize</a>. ACM issued a <a href="http://www.acm.org/press-room/news-releases/2015/knuth-godel-prizes-2015">press release</a> for both awards which will be presented at the upcoming <a href="http://acm-stoc.org/stoc2015/">STOC</a> at <a href="http://fcrc.acm.org/">FCRC</a>.<br />
<br />
Babai did seminal research on interactive proofs, communication complexity, group algorithms and much more. One cannot count the number of PhD theses in mathematics and computer science that can trace themselves back to some initial work by Babai. I was lucky to have Laci Babai as a colleague, mentor and friend during my years at the University of Chicago.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
Spielman and Teng, who <a href="http://www.sigact.org/Prizes/Godel/2008.html">received the 2008 Gödel Prize</a> for smoothed analysis, won again for three papers using nearly linear time Laplacian solvers for a series of graph problems.<br />
<ul>
<li><a href="http://dx.doi.org/10.1137/08074489X">Spectral sparsification of graphs</a>. SIAM J. Computing 40:981-1025, 2011.</li>
<li><a href="http://dx.doi.org/10.1137/080744888">A local clustering algorithm for massive graphs and its application to nearly linear time graph partitioning</a>. SIAM J. Computing 42:1-26, 2013.</li>
<li><a href="http://dx.doi.org/10.1137/090771430">Nearly linear time algorithms for preconditioning and solving symmetric, diagonally dominant linear systems</a>. SIAM J. Matrix Anal. Appl. 35:835-885, 2014. </li>
</ul>
<div>
The <a href="http://awards.acm.org/">ACM Awards</a> ceremony later this month will have a number of theory related prizes.</div>
<div>
<ul>
<li>Dan Boneh is the recipient of the <a href="http://awards.acm.org/4125431_boneh.pdf">ACM-Infosys Foundation Award in the Computing Sciences</a> for the development of pairing-based cryptography and its application in identity-based encryption.</li>
<li>James Demmel is the recipient of the <a href="http://awards.acm.org/kanellakis/">Paris Kanellakis Theory and Practice Award</a> for his work on numerical linear algebra libraries, including LAPACK.</li>
<li>Charles Leiserson will receive the <a href="http://awards.acm.org/press_releases/kennedy-award-2014.pdf">ACM-IEEE CS Ken Kennedy award</a> for his work on parallel computing.</li>
<li>Jon Kleinberg is the 2014 recipient of the <a href="http://awards.acm.org/award_winners/kleinberg_0032532.cfm">ACM – AAAI Allen Newell Award</a> for social and information networks, information retrieval, and data science, and for bridging computing, economics and the social sciences.</li>
<li>New <a href="http://awards.acm.org/fellow/year.cfm">ACM Fellows</a> include Al Borodin, Faith Ellen, Michael Kearns, Valerie King, Yishay Mansour, Michael Mitzenmacher, Omer Reingold, Ronitt Rubinfeld and Aravind Srinivasan.</li>
</ul>
</div>
http://blog.computationalcomplexity.org/2015/06/award-season.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-5634291563235076760Thu, 28 May 2015 18:06:00 +00002015-05-28T13:06:15.336-05:00Who wins this bet?<br />
Alice and Bob are at an auction and Alice wants to buy an encyclopedia set from 1980. Bob says <i>don't buy that, you'll never use it. In this age of Wikipedia and Google and THE WEB. </i>Alice says <i>you don't know that. </i>They agree that Alice will spend no more than $20.00 on it (and she does win it at $20.00) and that:<br />
<br />
If Alice does not use the encyclopedia within 5 years then she owes Bob $10.00. If she does use it then Bob owes Alice $10.00.<br />
<br />
Three years later it's really cold outside. Alice's house is not that well insulated, so she puts carpets against the cracks at the bottom of the doors and weights them down with the volumes of the encyclopedia. This helps her keep warm and cuts down on her heating bill.<br />
<br />
Alice says<i> I used the encyclopedia. Pay up! In your face! I used them! You were wrong!</i><br />
<br />
Bob says <i>You didn't use them to look anything up. So that doesn't count.</i><br />
<br />
Alice and Bob are asking YOU to decide. So leave comments either way and whoever has more votes before my next post wins! Feel free to leave reasons as well to persuade the other readers.<br />
<br />
<br />
<br />
<br />
<br />
<br />http://blog.computationalcomplexity.org/2015/05/who-wins-this-bet.htmlnoreply@blogger.com (GASARCH)20tag:blogger.com,1999:blog-3722233.post-2723923268933467218Mon, 25 May 2015 14:56:00 +00002015-05-25T09:56:16.748-05:00John Nash (1928-2015)John Nash and his wife Alicia <a href="http://www.nytimes.com/2015/05/25/science/john-nash-a-beautiful-mind-subject-and-nobel-winner-dies-at-86.html">died in a taxi accident</a>, returning from the airport after he received the Abel prize in Norway. The public knew John Nash as the "Beautiful Mind" of book and screen, but we knew him as one of the great geniuses of the 20th century. Rakesh Vohra <a href="http://www.gametheorysociety.org/">captures</a> Nash's life and work, including his amazing <a href="http://blog.computationalcomplexity.org/2012/02/nash-and-nsa.html">letters to the NSA</a>.<br />
<br />
I briefly met John Nash at some MIT alumni events in New Jersey when I lived there (even though neither of us were MIT undergrads). He would come with his wife and son, the son wearing a winter coat no matter the season. Nash just seemed like any other introverted scientist and was happy to talk though understating his research ("I did some work in game theory") and never revealing the challenging life he led.<br />
<br />
Now that John and Alicia have found their final equilibrium, may we remember them and Nash's vision of using mathematics to understand the world we live in.http://blog.computationalcomplexity.org/2015/05/john-nash-1928-2015.htmlnoreply@blogger.com (Lance Fortnow)0tag:blogger.com,1999:blog-3722233.post-589748657827195716Thu, 21 May 2015 23:56:00 +00002015-05-21T18:58:48.727-05:00An Intentional and an Unintentional teaching experiment regarding proving the number of primes is infinite.<br />
I taught Discrete Math Honors this semester. Two of the days were cancelled entirely because of snow (the entire school was closed) and four more I couldn't make because of health issues (I'm fine now). People DID sub for me on those days and DID do what I would have done. I covered some crypto which I had not done in the past.<br />
<br />
Because of all of this I ended up not covering the proof that the primes were infinite until the last week.<br />
<br />
INTENTIONAL EXPERIMENT: Rather than phrase it as a proof by contradiction I phrased it, as I think Euclid did, as<br />
<br />
Given primes p1,p2,...,pn you can find a prime NOT on the list. (From this it easily follows that the primes are infinite.)<br />
<br />
Proof: the usual one; look at p1xp2x...xpn + 1 and either it's prime or it has a prime factor not on the list.<br />
<br />
The nice thing about doing it this way is that there are EASY examples where p1xp2x...xpn+1 is NOT prime<br />
<br />
(e.g., the list {2,5,11} yields 2x5x11 + 1 = 111 = 3 x 37, so 3 and 37 are both not in {2,5,11})<br />
<br />
<br />
whereas if you always use the product of the first n primes plus 1, you don't get to a non-prime until 2x3x5x7x11x13 + 1 = 30031 = 59 x 509.<br />
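Euclid's construction is easy to run by machine: factor (product of the list) + 1 and you get a prime off the list. A minimal sketch (my own, assuming trial division is fine at this scale):

```python
from math import prod

def smallest_prime_factor(n):
    # Trial division; if no divisor up to sqrt(n) exists, n is prime.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def prime_not_on_list(primes):
    # Euclid's construction: any prime factor of (product of the list) + 1
    # is not on the list, since dividing by a listed prime leaves remainder 1.
    return smallest_prime_factor(prod(primes) + 1)

print(prime_not_on_list([2, 5, 11]))            # 3, since 111 = 3 x 37
print(prime_not_on_list([2, 3, 5, 7, 11, 13]))  # 59, since 30031 = 59 x 509
```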
<br />
They understood the proof better than prior classes had, even prior honors classes. <br />
<br />
UNINTENTIONAL: Since I did the proof at the end of the semester they ALREADY had some proof maturity, more so than had I done it (as I usually do) about 1/3 of the way through the course.<br />
<br />
They understood the proof better than prior classes had, even prior honors classes. Hence I should prove all of the theorems in the last week! :-)<br />
<br />
But seriously, they did understand it better, but I don't know which of the two factors, or what combination caused it. Oh well.<br />
<br />
<br />http://blog.computationalcomplexity.org/2015/05/an-intentinoal-and-unintentional.htmlnoreply@blogger.com (GASARCH)2tag:blogger.com,1999:blog-3722233.post-3603431268870041295Tue, 19 May 2015 00:24:00 +00002015-05-23T16:00:41.613-05:00Theory Jobs 2015In the fall we <a href="http://blog.computationalcomplexity.org/2014/10/2014-fall-jobs-post.html">list theory jobs</a>, in the spring we see who got them. Like <a href="http://blog.computationalcomplexity.org/2014/05/theory-jobs-2014.html">last year</a>, I created a fully editable <a href="https://docs.google.com/spreadsheets/d/14pOq1dEI5X77LoSFQIMC0B0x-TZMv4rRrGiC3WFMqA0/edit?usp=sharing">Google Spreadsheet</a> to crowd source who is going where. Ground rules:<br />
<ul>
<li>I set up separate sheets for faculty, industry and postdoc/visitors.</li>
<li>People should be connected to theoretical computer science, broadly defined.</li>
<li>Only add jobs that you are absolutely sure have been offered and accepted. This is not the place for speculation and rumors.</li>
<li>You are welcome to add yourself, or people your department has hired.</li>
</ul>
This document will continue to grow as more jobs settle. So check it often.<br />
<br />
<iframe frameborder="0" height="750" src="https://docs.google.com/spreadsheets/d/14pOq1dEI5X77LoSFQIMC0B0x-TZMv4rRrGiC3WFMqA0/pubhtml?widget=true&headers=false" width="575"></iframe>
<a href="https://docs.google.com/spreadsheets/d/14pOq1dEI5X77LoSFQIMC0B0x-TZMv4rRrGiC3WFMqA0/edit?usp=sharing">Edit</a>http://blog.computationalcomplexity.org/2015/05/theory-jobs-2015.htmlnoreply@blogger.com (Lance Fortnow)18tag:blogger.com,1999:blog-3722233.post-8322009766625865132Thu, 14 May 2015 13:34:00 +00002015-05-14T08:39:35.094-05:00Fiftieth Anniversary of the Publication of the seminal paper on Computational Complexity<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-sMoKiQvVwB8/VVEMTLT-beI/AAAAAAAA2VQ/LMLeAxFklRg/s1600/GE.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="281" src="http://1.bp.blogspot.com/-sMoKiQvVwB8/VVEMTLT-beI/AAAAAAAA2VQ/LMLeAxFklRg/s400/GE.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Juris Hartmanis and Richard Stearns in a photo dated May 1963. The main theorem from their paper is on the board later improved by <a href="http://doi.acm.org/10.1145/321356.321362">Hennie and Stearns</a>. Photo courtesy of Richard Stearns.</td></tr>
</tbody></table>
The seminal paper of Juris Hartmanis and Richard Stearns, <a href="http://dx.doi.org/10.2307/1994208">On the Computational Complexity of Algorithms</a>, appeared in the Transactions of the American Mathematical Society in the May 1965 issue. This paper gave the name to the field of Computational Complexity which I took for the name of this blog. <a href="http://amturing.acm.org/award_winners/hartmanis_1059260.cfm">Hartmanis</a> and <a href="http://amturing.acm.org/award_winners/stearns_1081900.cfm">Stearns</a> received the Turing Award in 1993 for this work.<br />
<br />
I've mentioned this paper several times in the blog before, including as a <a href="http://blog.computationalcomplexity.org/2005/02/favorite-theorems-seminal-paper.html">favorite theorem</a>. Hartmanis and Stearns first formalized and truly popularized the idea of measuring time and other resources as a function of the problem size, laying the foundation for virtually every paper in computational complexity and algorithms to follow.<br />
<br />
Both <a href="http://dx.doi.org/10.1109/MAHC.1981.10005">Hartmanis</a> and <a href="http://dx.doi.org/10.1007/978-1-4612-4478-3_2">Stearns</a> wrote about those early days. The main breakthroughs for their paper started in November 1962 and on December 31 Hartmanis wrote in his logbook "This was a good year." A good year indeed.http://blog.computationalcomplexity.org/2015/05/fiftieth-anniversary-of-publication-of.htmlnoreply@blogger.com (Lance Fortnow)4tag:blogger.com,1999:blog-3722233.post-1981922211527036900Mon, 11 May 2015 12:59:00 +00002015-06-22T13:20:49.883-05:00The law of the excluded middle of the road republicans<br />
In the book <i>Hail to the Chiefs</i>, about the presidents, the author, pointing to a race between two people who had no business being president (I think it was Franklin Pierce vs Winfield Scott), wrote something like <i>That's the thing about elections, someone has to win. </i><br />
<br />
Looking at the republicans running for the nomination I can (with the help of reading many of Nate Silver's columns) tell you why, for each one, they can't win the nomination. Note that this is not a partisan thing. But again, someone has to win. Is it possible to have the statements<br />
<br />
A1 can't win AND A2 can't win AND .... AND An can't win<br />
<br />
and yet someone wins?<br />
<br />
Here is a list of the candidates and why they can't win.<br />
<br />
<ol>
<li>Jeb Bush is what passes for a front runner nowadays. Has the money, does not have the party (very few endorsements), and not doing well in polls in Iowa or New Hampshire, the first two states. Possibly because he does not hate Obama enough. It's an interesting question whether the party, the people, or the money pick the candidate. In the past it's been the party and the money together, but that might be changing. CAN'T WIN: Viewed as too moderate by the people who go to caucuses. Also some people may be put off by the family name. THOUGHT: If H CLINTON were not the likely Democratic candidate, thus making the family name less of an issue in the general, I don't think he would have run at all. </li>
<li>Marco Rubio. Running for president for the first time is hard. The republicans rarely nominate someone who hasn't run before (W in 2000, Ford in 1976 which is an outlier since he was prez). Perhaps the voters/party/money like someone they are familiar with OR perhaps first timers make mistakes. CAN'T WIN: Will make some mistake and some might think it's not his turn since he's so young. Also, Senators have it rough since they sometimes vote for bills in funny ways. TRIVIA: The electoral college reps from state X cannot cast both the Prez and Veep vote for people both from state X. So you won't see a Bush-Rubio or Rubio-Bush ticket.</li>
<li>Rick Perry. He lost in 2012 because either he was too soft on immigrants (he supported some sort of Dream Act) or because of his Whoops moment in a debate. Frankly I have sympathy on that one--- I also sometimes forget which cabinet positions I want to get rid of. He has tried to cure both of his problems by flip-flopping (or evolving) on immigration and by wearing glasses so at least he looks smart. CAN'T WIN: His past failure makes him not look that serious this time around, and he will likely have another WHOOPS moment.</li>
<li>Scott Walker. Like Rubio but might be in better shape because he's a governor. Still, people in his home state are beginning to turn on him, a bad sign. He may soon have the same problem that Gov Brownback of Kansas has--- you promise to cut taxes, close loopholes, and cut spending, and that might actually be a good idea if done right, but you cut taxes, don't close loopholes, cut spending in stupid places like education, run up a big debt, destroy your state's economy, and ... get re-elected. CAN'T WIN: Having not run before, Walker will say something stupid. And as a gov of a moderate state I suspect some moderate things he did may be a problem for hard core republican voters. Plus his state's current problems may be an issue.</li>
<li>Ted Cruz. Which of these quotes did Ted Cruz really say and which did I hear in some satirical setting (as a fan of The Daily Show, The Colbert Report, John Oliver's show, The Capitol Steps, and others, I lose track of where I heard what): (1) <i>There was no ebola before Obamacare, </i>or (2)<i> Net neutrality is Obamacare for the internet.</i> I'll answer at the end of the post. He is now using Obamacare himself. CAN'T WIN: A niche candidate with a small but loyal set of supporters. Not enough. CAVEAT: Might be running to get some points of view out there like Adlai Stevenson and Barry Goldwater, though they got all the way to the nomination, which I doubt he can. </li>
<li>Rand Paul. Interesting mathematically: an authenticity/electability tradeoff. Some people are attracted to his libertarianism, authenticity and consistency, but not enough to get him anywhere close to the nomination. So he changes some of his positions to be more mainstream but less libertarian, authentic, and consistent. Alas, those who trade their integrity for electability end up with neither. CAN'T WIN: His evolving views might lose him his base but not gain him any establishment cred. Also he has the chicken-egg problem in that even people who like him don't think he can win the general election. (Ted Cruz and others on this list may also have this problem.)</li>
<li>Mike Huckabee. Won't get much support beyond his Social Conservative base. His stance on Same Sex Marriage will hurt him outside of the social conservatives, especially in 2015 (as opposed to his last run in 2008). I'm surprised he's running- he got what he wanted from his last run, a show on FOX News. CAN'T WIN: Was a moderate on some economic and immigration issues (he may also `evolve', which won't work), a conservative on social issues --- just the wrong mix for current republican primary voters. Note that many candidates are trying to avoid the Same Sex Marriage question as they know that being opposed to it will hurt them in the general. Plus some don't want to be on the wrong side of history (or as the kids say WSOH). Politics- sometimes you're forced to have a public opinion that you disagree with and know will make you be on the WSOH, but you're stuck with it. I think Huckabee is sincere in his opposition to same sex marriage but he must surely know he's on the WSOH.</li>
<li>Ben Carson. I suspect he is actually running to be a FOX News commentator. At that he might succeed. CAN'T WIN: First timer, never ran for anything, he'll be a curiosity not a candidate. Which of the following did he say: <i>(1) Obama is a sociopath (2) Obamacare is like slavery</i>. This may even hurt him with the Republican hard core who want someone who can win. CAVEAT: is being African-American going to help or hurt? I doubt his campaign will get far enough to tell. The other candidates and the debate panelists (in the sanctioned debates) will treat him with kid gloves to avoid being called racist.</li>
<li>Carly Fiorina. She said that our founding fathers did not intend there to be a political class, what we now call politicians. They intended for ordinary people (like the president of HP), for the good of their community, to serve in office. She left out that the founding fathers also did not intend for women to be president. CAN'T WIN: First timer, never ran for anything. (Correction added later: She ran for Senator of California. She got the nomination but lost to incumbent Barbara Boxer.) CAVEAT: is being female going to help or hurt? I doubt her campaign will get far enough to tell. The other candidates and debate panelists (in the sanctioned debates) will treat her with kid gloves to avoid being called sexist. HER HOOK: She claims that as a woman she can neutralize H Clinton's women-advantage in the general. Interesting that she is making an electability argument instead of a policy argument, given that she has no chance of being elected.</li>
<li>Chris Christie. CAN'T WIN: Hated inside of New Jersey. Hated outside of New Jersey.</li>
<li>Bobby Jindal. Once said the Republicans have to stop being the stupid party. Later said some stupid things about Muslims in America. CAN'T WIN: If he ran as the moderate sane voice who will rescue the party from itself, he might get some traction. If he runs as anything else he has too much competition. Also a first-time-runner, which is hard. </li>
<li>Lindsey Graham. CAN'T WIN: Has worked with democrats, which should be a PRO but it's a CON. </li>
<li>John Kasich. Gov of Ohio. CAN'T WIN: Not that well known. Democrats have nominated unknowns (B Clinton, B Obama) but republicans almost never do (W might count).</li>
<li>Donald Trump (ADDED LATER). Some people say that corporate America controls this country. If we make Donald Trump Prez we're just cutting out the middle-man. Won't get the nomination--- over half of republicans say they would never vote for him--- but his running is a farewell gift to Jon Stewart.</li>
<li>Rick Santorum (ADDED LATER). Against Birth Control? Really? Google him to find other reasons he can't win. Odd thing- usually the person who came in second the last time around has a pretty good shot at getting the nomination this time around, but the fact that he came in second last time may be a sign of how weak the field was last time. </li>
<li>George Pataki (ADDED LATER). Would be at home in the democratic party. I want to see him challenge Hillary from the right. Oh, he's a republican? But he's pro-choice, pro-gay, anti-gun, and doesn't seem to hate Obama. </li>
<li>Rob Portman. (ADDED LATER) I saw him on a list of possible nominees. Not under `declared', not under `exploratory committee' (Chris Christie and Scott Walker are still exploring, which surprised me- I thought they had already declared), but on a list of `No indication'. Not sure why he's on any list; however, as Rick Perry taught us last time (and this is not a joke) you need to start early. </li>
</ol>
<br />
There may be more (Donald Trump anyone? ADDED LATER- When I first posted this, mentioning Donald Trump was supposed to be a joke. But political satire and reality may have finally merged, with a reality-TV star running for Prez). But my point is that it seems like nobody can win, yet someone has to. Do you know other examples where A OR B OR C has to be true, yet none of A,B,C look plausible?<br />
<br />
I can phrase this another way:<br />
<br />
novices can't win (e.g., Ben Carson) AND first timers can't win (e.g., Marco Rubio) AND too moderate can't win (e.g., perception of Jeb) AND unknowns can't win (e.g., John Kasich).<br />
<br />
They all can't be right, but looking at it, `first timers' is probably the weak link in my reasoning. I could replace it with `inexperienced politician'. Even so, it sure looks like nobody can win.<br />
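The apparent paradox has a simple probabilistic reading: each statement "candidate i can't (probably won't) win" can be individually plausible while their conjunction is false, since someone must win. A toy illustration with made-up numbers (not real forecasts):

```python
# 17 hypothetical candidates; each individual win probability is small,
# yet the probabilities sum to 1, so someone must win even though
# every single "X will win" looks implausible. (Made-up numbers.)
probs = [0.12, 0.10, 0.09, 0.08, 0.08, 0.07, 0.07, 0.06, 0.06,
         0.05, 0.05, 0.04, 0.04, 0.03, 0.03, 0.02, 0.01]

print(len(probs))                        # 17 candidates
print(max(probs))                        # 0.12 -- no one is likely to win
print(abs(sum(probs) - 1.0) < 1e-9)     # True -- yet someone wins
```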
<br />
Ted Cruz: Both of the quotes I attribute to him he really did say.<br />
<br />
I don't know Ben Carson's original quote but he backtracked to <i>Obama reminds you of a</i> <i>psychopath,</i><br />
which is much better than saying Obama IS a psychopath. But he never said sociopath, so the quote I gave is NOT from Ben Carson.<br />
<br />
On Obamacare he said <i>it's the worst thing to happen in America since slavery. </i>But later<br />
opposite-of-backtracking said it was <i>in a way like slavery because it robs you of your ability to control your own life.</i><br />
<br />
<br />http://blog.computationalcomplexity.org/2015/05/the-law-of-excluded-middle-of-road.htmlnoreply@blogger.com (GASARCH)8