tag:blogger.com,1999:blog-37222332014-07-23T05:19:21.456-05:00Computational ComplexityComputational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill GasarchLance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.comBlogger2190125tag:blogger.com,1999:blog-3722233.post-35260214701722907472014-07-22T10:42:00.002-05:002014-07-22T10:42:43.879-05:00The Burden of Large EnrollmentsThis week I'm at the <a href="http://cra.org/events/snowbird-2014/">CRA Snowbird conference</a>, the biennial meeting of CS chairs and other leaders in the field. In <a href="http://blog.computationalcomplexity.org/2012/08/moocs.html">2012</a> much of the discussion focused on MOOCs. This year the challenge facing most CS chairs is booming enrollments in CS courses. A nice problem to have, but a problem nevertheless.<br />
<br />
Last night we had a broad discussion about the burgeoning number of students. Ed Lazowska showed his <a href="http://lazowska.cs.washington.edu/NCWIT.pdf">NCWIT slides</a> giving anecdotal evidence. It's too early to get a complete survey of CS departments, but hardly anyone in the audience felt that enrollments were not going up by double (or triple) digit percentages. The increase comes not only from majors but also from non-majors, minors and others who take upper-level courses.<br />
At Georgia Tech it seems every engineering student wants to minor in CS.<br />
<br />
We all have to deal with the increase in the short term. Depending on the institution, we have more classes, larger classes, enrollment caps, more instructors, PhD lecturers, teaching postdocs, automated grading, undergrad and MS TAs, and other departments taking up some of the load. All of this could hurt the undergrad experience but with careful planning we can mitigate those effects.<br />
<br />
What's driving the increase? Some suggested the growing ubiquity of computing and a more positive opinion of geeks in society. More likely, though, the driver is jobs: the great need for computer scientists, and not just from computing companies, combined with the limited jobs for those graduating with non-technical majors.<br />
<br />
Are the big shifts in enrollment a permanent change or just another part of the boom and bust cycle in CS? More than a theoretical question: a permanent change makes for a better argument with deans and provosts to increase faculty sizes in computer science. There are some signs of "this time is different," such as the increase in non-majors in upper level courses, but it will be a hard sell unless the increase is sustained for several years. One good argument: if you draw a line through enrollments since the 70's you get a consistent 10% yearly increase averaged over booms and busts.Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com0tag:blogger.com,1999:blog-3722233.post-28362174389397926712014-07-17T08:24:00.001-05:002014-07-17T08:24:42.504-05:00Elfdrive<div>
New York Times, dateline June 11, 2019</div>
<div>
<blockquote>
With a near record-setting investment announced last week, the self-driving car service Elfdrive is the hottest, most valuable technology start-up on the planet. It is also one of the most controversial.<br />
The company, which has been the target of protests across Europe this week, has been accused of a reckless attitude toward safety, of price-gouging its customers, of putting existing cabbies out of work and of evading regulation. And it has been called trivial. In The New Yorker last year, George Packer huffed that Elfdrive typified Silicon Valley’s newfound focus on “solving all the problems of being 20 years old, with cash on hand.”<br />
It is impossible to say whether Elfdrive is worth the $117 billion its investors believe it to be; like any start-up, it could fail. But for all its flaws, Elfdrive is anything but trivial. It could well transform transportation the way Amazon has altered shopping — by using slick, user-friendly software and mountains of data to completely reshape an existing market, ultimately making many modes of urban transportation cheaper, more flexible and more widely accessible to people across the income spectrum.<br />
Elfdrive could pull this off by accomplishing something that has long been seen as a pipe dream among transportation scholars: It has the potential to decrease private car ownership.<br />
In its long-established markets, like San Francisco, using Elfdrive every day is already arguably cheaper than owning a private car. Elfdrive says that despite dust-ups about “elfpricing” at busy times, its cheapest service is usually 30 percent less expensive than taxis.<br />
The competition between Elfdrive and its rivals is likely to result in more areas of the country in which self-driving becomes both cheaper and more convenient than owning a car, a shift that could profoundly alter how people navigate American cities.<br />
Over the next few years, if Elfdrive and its rivals do reduce the need for private vehicle ownership, they could help lower the cost of living in urban areas, reduce the environmental toll exacted by privately owned automobiles (like the emissions we spew while cruising for parking), and reallocate space now being wasted on parking lots to more valuable uses, like housing.<br />
Paradoxically, some experts say, the increased use of self-driving cars could also spawn renewed interest in and funding for public transportation, because people generally use taxis in conjunction with many other forms of transportation.<br />
In other words, if Elfdrive and its competitors succeed, it wouldn’t be a stretch to see many small and midsize cities become transportation nirvanas on the order of Manhattan — places where forgoing car ownership isn’t just an outré lifestyle choice, but the preferred way to live.</blockquote>
If you haven't guessed, all I did was make minor edits to a Farhad Manjoo <a href="http://www.nytimes.com/2014/06/12/technology/personaltech/with-ubers-cars-maybe-we-dont-need-our-own.html">piece</a> on Uber. My point is that it all fits: there really isn't much difference between a car driven by a magical AI elf and one driven by a real person, if it's all controlled by an app. You can start to see the effects of self-driving cars, the good and the bad, from what's going on now with ride-sharing services. The big difference with self-driving cars will be that the complaints won't come just from the taxi drivers, but from the truckers and delivery people as well.</div>
Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com5tag:blogger.com,1999:blog-3722233.post-80536880636651453172014-07-13T22:34:00.001-05:002014-07-14T12:38:08.663-05:00What to call the top and bottom part of (n choose k)In my last post I asked for candidates for names for the top and bottom part of (n choose k) . Here are the candidates and my comments on them and ... the winner!<br />
<br />
<ol>
<li>Top part: Degree, Bottom part: Index. </li>
<li>Top part: Bino, Bottom part: Mial</li>
<li>Top part: Numerator, Bottom part: Denominator</li>
<li>Top part: Outcomes, Bottom part: Possibilities</li>
<li>Top part: Binomerator, Bottom part: I've got nothing</li>
<li>Top part: *, Bottom part: *</li>
<li>Top part: Biponendo, Bottom part: Bividendo</li>
<li>Top part: Choosand, Bottom part: choosee</li>
<li>Top part: Set size, Bottom part: Subset size.</li>
</ol>
I leave out the explanations for these since one criterion is that they be self-explanatory.<br />
While choices 4, 8, and 9 are tempting along those lines, the winner is<br />
<br />
Numerator/Denominator<br />
<br />
Why? One of the people who suggested it gave a pointer. The pointer actually went to calling the top and bottom parts of the Legendre symbol Numerator and Denominator. And that's just it- there are several<br />
other math objects that have a top and bottom part. We could try to find a name for each one, OR<br />
just use Numerator and Denominator for all of them. That seems to work. SO- next time you write a paper and need to refer to the top part of a binomial coefficient or a Legendre symbol or something else, use the terms<br />
Numerator and Denominator.<br />
<br />
CODA: `The Denominator' could be an Arnold Schwarzenegger character. A Math teacher by day, a crimefighter by night. Catchphrase: I'm going to put you under!GASARCHhttp://www.blogger.com/profile/06134382469361359081noreply@blogger.com5tag:blogger.com,1999:blog-3722233.post-47765689324423318682014-07-09T21:22:00.000-05:002014-07-09T21:22:03.646-05:00Is there a word for the top part of a binomial coefficient?Consider the sequence:<br />
<br />
x/y, (x+1)/y, (x+2)/y, ..., (x+z)/y<br />
<br />
one could say <i>in this sequence of fractions the numerator goes through z+1 consecutive numbers.</i><br />
<br />
Consider the sequence<br />
<br />
(x choose y), (x+1 choose y), (x+2 choose y),...,(x+z choose y)<br />
<br />
one could say <i>in this sequence of binomial coefficients the top-part goes through</i> <i>z+1 consecutive numbers.</i><br />
<br />
Is there a better way to say this? That is, is there a term for the top-part of a binomial coefficient? Or for that matter the bottom part? I have not been able to find one on the web. Hence I propose a contest:<br />
<br />
<ol>
<li>Leave as a comment a proposed name for the top-part and for the bottom-part of a binomial coefficient.</li>
<li>If you find a website that has some official name, leave that as a comment.</li>
</ol>
I am not sure if there will be a winner or what he or she will get. But if we can agree upon a term then we are all winners!<br />
<br />GASARCHhttp://www.blogger.com/profile/06134382469361359081noreply@blogger.com11tag:blogger.com,1999:blog-3722233.post-70455929922794544702014-07-07T07:48:00.000-05:002014-07-07T07:48:38.456-05:00Favorite Theorems: Compressing the Local LemmaNot only did Moser give a near optimal construction, but he did so in my favorite STOC talk of the decade.<br />
<br />
<div style="text-align: center;">
<a href="http://dx.doi.org/10.1145/1536414.1536462">A Constructive Proof of the Lovász Local Lemma</a> by Robin Moser</div>
<div style="text-align: center;">
<br /></div>
<div style="text-align: left;">
The <a href="http://en.wikipedia.org/wiki/Lov%C3%A1sz_local_lemma">Lovász local lemma</a> informally states that if you have a large set of events with limited dependence and they individually have a reasonable chance of happening, then there is a positive probability that they all happen. Moser focused on the special case of Boolean formula satisfiability: If we have a k-CNF formula φ where each clause shares a variable with at most r other clauses with r<2<sup>k</sup>/8 then φ is satisfiable. Note the result does not depend on the number of variables or clauses.<br />
<br />
Moser gave this simple algorithm and a proof using entropy that I recognized as a Kolmogorov complexity argument and so excited me that I immediately wrote up the <a href="http://blog.computationalcomplexity.org/2009/06/kolmogorov-complexity-proof-of-lov.html">Kolmogorov proof</a> as a blog post.<br />
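The algorithm really is that simple. Here is a sketch, in the flavor of the Moser-Tardos variant: pick a random assignment, and as long as some clause is violated, resample the variables of a violated clause (my own code and names, a simplified illustration rather than the paper's exact procedure):

```python
import random

def moser_solve(clauses, n_vars, seed=0):
    # clauses: list of clauses, each a list of signed literals;
    # literal 3 means x3 is true, -3 means x3 is false (vars 1..n_vars).
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}

    def violated(cl):
        # A clause is violated when every one of its literals is false.
        return all(assign[abs(l)] != (l > 0) for l in cl)

    # Resample any violated clause until none remain.  Moser's entropy
    # (equivalently Kolmogorov complexity) argument shows this terminates
    # quickly when clause dependencies are limited.
    while True:
        bad = [cl for cl in clauses if violated(cl)]
        if not bad:
            return assign
        for l in bad[0]:
            assign[abs(l)] = rng.random() < 0.5
```

The beauty of the proof is that a long run of resamplings would let you compress the random bits used, which is impossible for incompressible strings.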
<br />
Moser's paper kickstarted a whole research agenda, with tight bounds using even simpler algorithms (though more complex proofs) and various generalizations. A whole <a href="http://www.columbia.edu/~cs2035/stoc/stoc2014/tutorials.shtml#lll">tutorial day</a> of STOC 2014 focused on the local lemma describing research that directly or indirectly came from Moser's most beautiful construction.</div>
Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com3tag:blogger.com,1999:blog-3722233.post-19978707038615968712014-07-02T18:24:00.000-05:002014-07-02T18:25:05.316-05:00Four Centuries of Logarithms<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.nms.ac.uk/images/powerof10-book-348px.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://www.nms.ac.uk/images/powerof10-book-348px.jpg" height="320" width="236" /></a></div>
I just returned from visiting my former student Rahul Santhanam in Edinburgh. The National Museum of Scotland has an <a href="http://www.nms.ac.uk/our_museums/national_museum/exhibitions/power_of_ten.aspx">exhibit</a> on logarithms, first published in a book by John Napier in 1614.<br />
<br />
Napier invented logarithms to make calculations like multiplication, division and exponentiation easier, using identities like log(ab)=log(a)+log(b). Logarithmic tables and slide rules came soon thereafter. Slide rules became the standard way to do mathematical computations for your typical scientist/engineer until reasonably priced calculators appeared in the late 70's. My chemistry teacher in 1980 taught us how to use a slide rule, even though we all had calculators, and I even (foolishly) tried using my father's slide rule on a chemistry test.<br />
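The whole trick in code form, a toy sketch of what tables and slide rules did mechanically (function names are mine):

```python
import math

def multiply_via_logs(a, b):
    # Napier's identity log(ab) = log(a) + log(b): look up two logs,
    # add, and look the answer back up -- addition replaces multiplication.
    return math.exp(math.log(a) + math.log(b))

def power_via_logs(a, b):
    # Same idea for exponentiation: log(a^b) = b*log(a).
    return math.exp(b * math.log(a))
```

Of course with tables (or a slide rule) the lookups are approximate, which is why slide-rule answers were only good to a few digits.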
<br />
The exhibit struggled to find current uses for logarithms, mentioning only the Richter scale. In theoretical computer science we use logarithms all the time. Here's an incomplete list off the top of my head.<br />
<ul>
<li>As part of a running time usually as a result of divide and conquer. Sorting, for example, takes Θ(n log n) comparisons.</li>
<li>The size of a number, pointer or counter. To count up to n requires log n bits of storage.</li>
<li>The representation of a small amount of space as in the complexity classes L and NL.</li>
<li>To capture entropy, coupon collector and other topics in probability and information.</li>
<li>Roughly capturing the sum of 1/i, and exactly capturing the integral of 1/x.</li>
<li>The inverse of exponential growth.</li>
</ul>
<div>
Thanks John Napier for the logarithm and making our lives just a little less linear.</div>
Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com4tag:blogger.com,1999:blog-3722233.post-24581321836395872582014-06-29T15:05:00.000-05:002014-06-29T15:05:03.583-05:003000 lunchesLast week fortune smiled on two of my friends:<br />
<br />
<ol>
<li>Ken Regan made the cover of Chess Life for his work on catching chess cheaters. <a href="http://rjlipton.wordpress.com/2014/06/18/the-problem-of-catching-chess-cheaters/">(See here.)</a></li>
<li>Jacob Lurie, whom I mentored when he was a high school student (1993-1995), won $3,000,000 for doing pure math. I am not kidding. See <a href="https://breakthroughprize.org/?controller=Page&action=news&news_id=18">here</a>, here, and <a href="http://www.scientificamerican.com/article/with-awards-like-the-breakthrough-prize-who-needs-a-nobel/">here</a>. This is a relatively new award called the <a href="http://en.wikipedia.org/wiki/Breakthrough_Prize">Breakthrough Prize</a>.</li>
</ol>
The Breakthrough Prize started a few years ago in Life Sciences and Fundamental Physics (not sure what they mean-possibly Mathematical Physics or Theoretical Physics). This is the first year it is given for Mathematics. The other winners of the Breakthrough Prize in math are Simon Donaldson, Maxim Kontsevich, Terence Tao, and Richard Taylor.<br />
<br />
$3,000,000 is the largest amount of money for prizes of this sort (the Nobel is a paltry $1,200,000). Even so, I had not heard of the award until there was a math one. I wonder if it will become a household name like the Nobel has.<br />
<br />
A few weeks ago I told my student Tucker, who is going to Digipen- a 4-year college with a BIG emphasis on programming computer games- that he is more likely to become a millionaire than my student Sam, who is going to CMU to study pure math. I could be wrong.<br />
<br />
I taught Jacob lots of recursion theory and he won the Westinghouse Award with a paper on Recursive Surreal Numbers; however, the work was all his own and didn't use much of what I taught him. After that I made a policy that for every $1000 a high school student of mine wins I get a free lunch. I emailed Jacob- and while he's not sure he'll give me 3000 lunches, he will treat me to lunch next time I am in Boston.<br />
<br />
To quote the article, he won it for <br />
<br />
Foundations of higher category theory and derived algebraic geometry for the classification of fully extended topological quantum field theories; and for providing a moduli-theoretic interpretation of elliptic cohomology.<br />
<br />
I know what some of those words mean. <br />
<br />
<br />
<br />GASARCHhttp://www.blogger.com/profile/06134382469361359081noreply@blogger.com1tag:blogger.com,1999:blog-3722233.post-12024238515202264602014-06-26T03:36:00.000-05:002014-06-26T03:43:00.370-05:00Computer DatingMy brother went through his old stuff cleaning out his apartment in New York and stumbled upon the 1981 computer dating questionnaire I wrote in high school. Forgive the dated and adolescent nature of the questions.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-TgmZitxOQ4I/U6vUfusmCQI/AAAAAAAAvjw/JziTwdznkH4/s1600/IMG_1134.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-TgmZitxOQ4I/U6vUfusmCQI/AAAAAAAAvjw/JziTwdznkH4/s1600/IMG_1134.JPG" height="360" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
The computer science teacher showed us an example of a computer dating form created by some company, a new idea at the time. I decided to run computer dating at the high school, created a questionnaire and passed it around. I did this for a few years; the 1981 form must have been the last since that's the year I graduated. In those ancient days, my fellow students filled out these forms on paper and then I would type the answers into the big teletype machines in the computer lab. I wrote a program to create a matching; I have no idea what algorithm I used but I'm sure it was quite simplistic. I always got matched to the same person, but the pickup line "my computer dating program matched us up" didn't go very far. I did hear of one couple having one (and only one) date because of the program. There's a reason I became a theorist.</div>
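I no longer have the program, but a simplistic matcher in that spirit might have looked something like this (purely my reconstruction; the names, signature and scoring are made up):

```python
def match_pairs(boys, girls, answers):
    # answers: dict mapping each person to a tuple of questionnaire answers.
    # Greedily pair each boy with the still-unmatched girl who agrees with
    # him on the most answers -- about as simplistic as it gets.
    def score(a, b):
        return sum(x == y for x, y in zip(answers[a], answers[b]))

    free = set(girls)
    matching = {}
    for b in boys:
        if free:
            best = max(free, key=lambda g: score(b, g))
            matching[b] = best
            free.discard(best)
    return matching
```

A greedy pass like this depends on the order the boys are processed, which may explain why the same pairs kept showing up.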
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
My daughters tell me the whole scene has changed with traditional dating quite rare these days. "First you become boyfriend/girlfriend and then you date" which seems so backwards to me.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
Now we have websites that use sophisticated machine learning algorithms to connect people. I met my wife the old fashioned way--at a wine and cheese party sponsored by a Boston-area Jewish singles group.</div>
Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com3tag:blogger.com,1999:blog-3722233.post-49385033008914761192014-06-23T08:04:00.002-05:002014-06-23T18:36:15.861-05:00How many ways can you make change of n cents?(For a related post see <a href="http://blog.computationalcomplexity.org/2013/06/quantum-tecniquesgen-functions-dont-be.html">my post on Schur's theorem</a>. The paper THIS post refers to is <a href="http://arxiv.org/abs/1406.5213">here</a>.) <br />
<br />
(ADDED LATER: A commenter pointed to Graham-Knuth-Patashnik for a closed form for pennies, nickels, dimes, quarters, half-dollars. I have written up their account and it is available <a href="http://www.cs.umd.edu/~gasarch/BLOGPAPERS/knuthchange.pdf">here</a>. I also apply their technique to just get 1,5,10,25.)<br />
<br />
How many ways are there to make change of a dollar using pennies, nickels, dimes, and quarters? This is a well known question; however, the answers I found were disappointing. They were of two types<br />
<ol>
<li>There are 242 ways to make change. The author then points to a program he wrote or to the actual list of ways to do it.</li>
<li>The number of ways to make n cents change is the coefficient of x^n in the power series for 1/((1-x)(1-x^5)(1-x^{10})(1-x^{25})). This can be worked out. I have never seen it worked out.</li>
</ol>
The first answer yields an actual number but is not interesting mathematically. The second answer is interesting mathematically but does not easily yield an actual number. <br />
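For concreteness, here is the usual dynamic program behind answers of the first type; a quick sketch (my own code, not from any of the sources mentioned here):

```python
def count_change(n, coins=(1, 5, 10, 25)):
    # ways[j] = number of ways to make j cents using the coins seen so far.
    # Processing one coin denomination at a time counts each multiset of
    # coins exactly once (reorderings are not counted separately).
    ways = [1] + [0] * n
    for c in coins:
        for j in range(c, n + 1):
            ways[j] += ways[j - c]
    return ways[n]
```

Running count_change(100) reproduces the classic answer of 242 in O(n) time per coin, which is exactly the O(n) dynamic programming solution a closed form would beat.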
<br />
I looked for a closed form on the web but could not find one. So I did it myself. The paper is pointed to above. I did not use generating functions, just recurrences. I do not know if it is new. Whether it is old or new I hope you enjoy it. The paper actually gives a closed form for the coin sets {1,x,kx} and {1,x,kx,rx}. Along the way we derive the answer for a dollar and the usual currency by hand.<br />
<br />
This raises some questions:<br />
<br />
<ol>
<li>Is my formula new? I'd be surprised either way. (Is it possible to be surprised at both statement A and statement NOT(A)?). If it's new I'll be surprised since the question has been around for so long and surely someone would have done it. If it's known I'll be surprised since (a) I went to JSTOR and searched all the math journals they had for the words ``change'' and ``coin'', looked at the over 400 such articles (just the titles and abstracts) and didn't find it, (b) this came up on math.stackexchange <a href="http://math.stackexchange.com/questions/15521/making-change-for-a-dollar-and-other-number-partitioning-problems">here</a> and, true to form, every answer was either a program or a generating function and (c) other searches also did not turn up the result. (ADDED LATER- Since the 1,5,10,25,50 case was known, I guess I AM surprised.) </li>
<li>Even if it's not new it is clearly not well known. Why is that? It's a natural problem that people seemed to be interested in. One conjecture: The COMP SCI people interested in it were happy to write a program. The MATH people interested in it were happy to say `it's merely the nth coeff of...' So a closed form solution seems to not be of interest to either camp.</li>
<li>Is it interesting? (a) Comp Sci: the closed form solution gives an answer in O(1) steps instead of the usual dynamic programming O(n) solution, and (b) Math: the closed form solution tells you the coefficients of the power series of 1/((1-x)(1-x^5)(1-x^{10})(1-x^{25})). </li>
</ol>
<br />
Side Bar: I asked the 10 students in my REU program to just GUESS how many ways there are to make change of a dollar with pennies, nickels, dimes, and quarters. Most made guesses under 100. The lowest guess was 30. One person guessed 100! and one guessed 100!/2. I then told them that it was between 30 and 100! and assigned them to derive it themselves by hand. Most were able to.GASARCHhttp://www.blogger.com/profile/06134382469361359081noreply@blogger.com8tag:blogger.com,1999:blog-3722233.post-65522303321411062442014-06-18T15:12:00.000-05:002014-06-18T15:12:38.637-05:00Favorite Theorem: PCP SimplifiedThe famous <a href="http://dx.doi.org/10.1145/278298.278306">Probabilistically Checkable Proof Theorem</a> due to Arora, Lund, Motwani, Sudan and Szegedy states that every problem in NP has a proof that can be checked efficiently by a verifier using O(log n) random coins and a constant number of queries. Not only amazing in its own right, the PCP theorem has significant implications for hardness of approximation. In its <a href="http://dx.doi.org/10.1145/502090.502098">strongest form</a>, no efficient algorithm can maximize the number of satisfying clauses in a formula better than a 7/8 approximation unless P = NP. Both of these results have been on my previous <a href="http://people.cs.uchicago.edu/~fortnow/papers/topten.pdf">favorite</a> <a href="http://blog.computationalcomplexity.org/2004/05/favorite-theorems-probabilistically.html">theorems</a> lists so lets add one more.<br />
<br />
<div style="text-align: center;">
<a href="http://dx.doi.org/10.1145/1236457.1236459">The PCP Theorem by Gap Amplification</a> by Irit Dinur (<a href="http://www.wisdom.weizmann.ac.il/~dinuri/mypapers/combpcp.pdf">PDF</a>)</div>
<div style="text-align: center;">
<br /></div>
<div style="text-align: left;">
Dinur's main result doesn't prove a new theorem per se but gives a much simplified and intuitively understandable proof than the original paper. She increases the approximation gap by using an expansion graph though this causes an increase in the alphabet size which she reduces using another transformation. Repeating these steps yields the theorem. Unless you need tight bounds, Dinur's proof is the one to teach or read. </div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Dinur's paper followed shortly after <a href="http://blog.computationalcomplexity.org/2014/02/favorite-theorems-connecting-in-log.html">Omer Reingold's theorem on undirected connectivity</a>, and while these are very different proofs of different results, together they show the amazing power of expander graphs in computational complexity. </div>
Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com0tag:blogger.com,1999:blog-3722233.post-53754619193312701572014-06-16T08:01:00.004-05:002014-06-16T08:01:44.370-05:00Ref/info request for an obvious approach to GIWhile talking about Graph Isomorphism, some students came up with the following idea, which is certainly not new; however, I couldn't quite find much about it on the web.<br />
<br />
ALG(G,H): Let G(0,1) and H(0,1) be the usual adjacency matrices using 0 for EDGE NOT THERE and 1 for EDGE IS THERE. Choose n random pairs of numbers (a1,b1), (a2,b2),...,(an,bn) all \le n^2 (happy to change the bound if it will make it work better). For all i see if DET(G(ai,bi))=DET(H(ai,bi)). If there is an i such that they are NOT equal then output NOT ISOM (with complete certainty). If they are ALWAYS equal then output GEE, THOSE GRAPHS ARE PROB ISOMORPHIC.<br />
<br />
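Here is a sketch of ALG in Python (my own code and names; I use the Bareiss algorithm so the determinants are computed exactly over the integers, since isomorphic graphs have permutation-similar substituted matrices and hence equal determinants):

```python
import random

def det_bareiss(mat):
    # Fraction-free Gaussian elimination: intermediate values stay
    # integers, so the determinant is exact (no floating-point error).
    m = [row[:] for row in mat]
    n = len(m)
    sign, prev = 1, 1
    for k in range(n - 1):
        if m[k][k] == 0:
            for i in range(k + 1, n):
                if m[i][k] != 0:
                    m[k], m[i] = m[i], m[k]
                    sign = -sign
                    break
            else:
                return 0  # whole pivot column is zero
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                m[i][j] = (m[i][j] * m[k][k] - m[i][k] * m[k][j]) // prev
        prev = m[k][k]
    return sign * m[-1][-1]

def alg(G, H, trials=20, seed=0):
    # G, H: 0/1 adjacency matrices.  Substitute random (a, b) for
    # (no-edge, edge); one determinant mismatch certifies non-isomorphism.
    rng = random.Random(seed)
    n = len(G)
    if n != len(H):
        return "NOT ISOM"

    def subst(A, a, b):
        return [[b if A[i][j] else a for j in range(n)] for i in range(n)]

    for _ in range(trials):
        a, b = rng.randint(0, n * n), rng.randint(0, n * n)
        if det_bareiss(subst(G, a, b)) != det_bareiss(subst(H, a, b)):
            return "NOT ISOM"
    return "PROB ISOM"
```

Note the one-sided error: "NOT ISOM" is always correct, while "PROB ISOM" is only a guess, exactly as in the description above.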
<ol>
<li>A person who used to work on GI told me that he THINKS there are graphs G, H where the polynomials G(x,y) and H(x,y) are identical yet G and H are not isom. So there are some G,H where the above will definitely fail. Not surprising.</li>
<li>Are such graphs rare? Will this work on `almost all pairs of graphs' ? Not clear what that means.</li>
<li>If we also checked degree sequences and degrees-of-degrees, would that help? I doubt it since I think that graphs for which this fails are very similar in other ways, but I don't know that.</li>
</ol>
When I first heard the idea from my students it was new to me. Even so, I knew that it wasn't new and that there have to be graphs it fails on (item 1 above). How did I know this? If there were no graphs that it failed on then GI would be in R. And if GI was in R, I would know that. And so would you. But this is not a good argument to give to students.<br />
<br />
I think it's a good idea and it might work for our purposes (which may be a topic of a later blog). GASARCHhttp://www.blogger.com/profile/06134382469361359081noreply@blogger.com7tag:blogger.com,1999:blog-3722233.post-52385143533936702442014-06-13T11:17:00.000-05:002014-06-13T11:17:01.945-05:00CCC 2014I'm returning from the <a href="http://www2.cs.sfu.ca/~kabanets/CCC14/CCC%202014%20%20Local%20Arrangements.htm">2014 IEEE Conference on Computational Complexity</a> in Vancouver, unfortunately missing the last day. Several very nice talks including a wonderful survey by Shubhangi Saraf on recent problems on lower bounds in algebraic circuits (addition and multiplication gates). In short, the interesting case is depth-4 algebraic circuits with bottom fan-in √n computing homogeneous polynomials. Kayal, Saha and Saptharishi <a href="http://eccc.hpi-web.de/report/2013/091/">showed</a> a 2<sup>Ω(√n log n)</sup> lower bound. Any improvement (changing the Ω to a ω) would separate VNP from VP (the algebraic version of the P v NP problem), or equivalently show the permanent cannot be computed by poly-size algebraic circuits.<br />
<br />
At the business meeting, we discussed the <a href="http://computationalcomplexity.org/forum/">plans for independence</a> from IEEE. 60% of 189 voted for independence with open access as the main reason stated. There has been great support from the community both in volunteers to help with an independent conference and money ($42K) raised to get started. Future proceedings will likely be in the <a href="http://www.dagstuhl.de/en/publications/lipics">Dagstuhl LIPIcs</a> series. Timing and a new conference name are a few of the many issues still to be worked out. Whether or not you support independence, the conference organizers, particularly steering committee chair (and my former student) Dieter van Melkebeek, are doing a strong job coordinating the efforts.<br />
<br />
Other highlights from the business meeting.<br />
<br />
<ul>
<li>29 accepted papers out of 94 submissions. A good number.</li>
<li>There was no best paper. The best student paper went to Aleksandrs Belovs for Quantum Algorithms for Learning Symmetric Juntas.</li>
<li>66 registered participants including 29 students. A bit low probably due to the proximity to STOC in time but not distance.</li>
<li>2015 CCC will be June 17-19 as part of the <a href="http://fcrc.acm.org/">FCRC conference</a> in Portland, Oregon.</li>
<li>There were three good bids for the 2016 CCC from Tokyo, Saarbrücken and Riga. Tokyo was the overwhelming winner.</li>
<li>CCC 2017 may be part of a federated theory conference including STOC, EC and SPAA.</li>
</ul>
Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com1tag:blogger.com,1999:blog-3722233.post-40286630378377637492014-06-12T10:08:00.000-05:002014-06-12T15:53:46.954-05:00Wassily Hoeffding 1914-1991In honor of the 100th anniversary of the birth of <a href="http://www-groups.dcs.st-and.ac.uk/~history/Biographies/Hoeffding.html">Wassily Hoeffding</a> today, let's recall his incredibly useful <a href="http://en.wikipedia.org/wiki/Hoeffding%27s_inequality">inequality</a>.<br>
<div>
<center>
Prob(|X-pn|>εn) < 2e<sup>-2ε<sup>2</sup>n</sup></center>
<center>
<sup><br></sup></center>
<center style="text-align: left;">
where X is the random variable representing the number of heads from n independent coin flips each with probability p of being heads. Informally, the sum of many independent events will, with extremely high probability, be very close to its expected value. Notice that as long as ε = ω(1/√n) the probability goes to zero, and this is essentially tight as one expects X to be about √n away from pn for constant p.</center>
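As a quick sanity check, one can simulate coin flips and compare the empirical tail probability against the bound (a sketch; the parameters are my own arbitrary choices):

```python
import math
import random

def hoeffding_check(n=500, p=0.3, eps=0.05, trials=2000, seed=1):
    # Empirically estimate Prob(|X - pn| > eps*n) for X ~ Binomial(n, p)
    # and return it alongside the Hoeffding bound 2*exp(-2*eps^2*n).
    rng = random.Random(seed)
    bad = sum(
        abs(sum(rng.random() < p for _ in range(n)) - p * n) > eps * n
        for _ in range(trials))
    return bad / trials, 2 * math.exp(-2 * eps ** 2 * n)
```

The empirical tail typically comes out well under the bound, which is the point: the inequality is a clean, always-safe overestimate you can invoke inside a proof.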
<center style="text-align: left;">
<br></center>
<center style="text-align: left;">
Back in the 80's, probabilistic computation started to play a major role in algorithms, cryptography, learning theory and interactive proofs. To analyze these computations one needed to deal with <a href="http://en.wikipedia.org/wiki/Chernoff_bound">Chernoff bounds</a> and it was difficult to find clean and easy statements of the bounds one could use in proofs. One of my officemates in grad school, Bob Sloan, took it upon himself to write up a short document giving Hoeffding's inequality and a few generalizations. Having that write-up was like holding Excalibur in my hands, ready to do battle with probabilistic complexity.</center>
<center style="text-align: left;">
<br></center>
<center style="text-align: left;">
Later Alon and Spencer gave a nice treatment in their <a href="http://www.amazon.com/gp/product/0470170204?ie=UTF8&camp=213733&creative=393185&creativeASIN=0470170204&linkCode=shr&tag=computation09-20&qid=1402584592&sr=8-1&keywords=probabilistic+methods">Probabilistic Methods</a> text and these days you can get all the formulas you want on Wikipedia.</center>
</div>
Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com5tag:blogger.com,1999:blog-3722233.post-71735418632177819432014-06-09T08:52:00.000-05:002014-06-09T08:52:21.577-05:00Fair question? Trick question?The following problem was on my final for Formal Lang theory (Reg, P, NP, Dec, c.e.)<br />
<br />
For this problem you may assume P\ne NP and NP\ne coNP. For each of the following<br />
sets say if it is (1) REG, (2) in P but NOT REG, (3) in NP but not P, (4) Decidable but not in NP,<br />
(5) c.e. but not decidable, or (6) Not c.e.<br />
<br />
No explanation needed; however, you get +4 if it's right and -2 if you give an answer and it's wrong.<br />
You get 0 for a blank. (Hint- DO NOT GUESS!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!)<br />
<br />
<ol>
<li> { G : G is 2-colorable}</li>
<li> { G : G has an ind set of size 12}</li>
<li> { a^nb^ma^{n+m} : n,m\in N }</li>
<li> { (G,rho) : rho IS a proper 3-coloring of G}</li>
<li> { a^{n^2} : n is not a square }</li>
</ol>
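For instance, problem 1 is in P: a graph is 2-colorable exactly when it is bipartite, which breadth-first search decides in linear time. A quick sketch of the bipartiteness check (my illustration, not part of the exam):

```python
from collections import deque

def is_two_colorable(n, edges):
    """Decide whether a graph on vertices 0..n-1 is 2-colorable (bipartite) via BFS."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [None] * n
    for start in range(n):          # handle each connected component
        if color[start] is not None:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False    # odd cycle found: not 2-colorable
    return True

# A 4-cycle is 2-colorable; a triangle is not.
print(is_two_colorable(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True
print(is_two_colorable(3, [(0, 1), (1, 2), (2, 0)]))          # False
```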
Some thoughts on this:<br />
<br />
<ol>
<li>A reader once inquired of Marilyn Vos Savant <i>my teacher said we would be penalized for guessing. How will our teacher know that we guessed?</i></li>
<li>The above is funny but it's a real issue- I really want to give -2 if they GUESSED but give 0 if they honestly thought (say) that a problem was REG when it was P but not REG. Alas, we cannot put electrodes into their brains to tell.</li>
<li>It was a good question in that how well they did on it correlated pretty well with how they did on the rest of the exam and the rest of the course.</li>
<li>Problem 4 most people got wrong-- they thought it was NP but not P. One of the best students in the class who got it wrong said that just SEEING the phrase ``3-col'', and not yet having had a choice that was NP-but-not-P, made him leap to the NP-but-not-P choice.</li>
<li>One student complained to me that having them ALL be ``P but not REG'' was unfair. When I asked him why, he told me that since he had 3 of them as ``P but not REG'' his guesses for the rest were NOT that. I reminded him that I didn't just say DO NOT GUESS, I also put LOTS of exclamation points after it.</li>
<li>(This point was blogged about before <a href="http://blog.computationalcomplexity.org/2010/04/is-guessing-good-idea.html">here</a>.) I announced that if a student gets a negative score on the problem I'll give a 0. Drawback- if they are clueless they SHOULD guess on one question.</li>
</ol>
<br />
SO what do you think- is it a fair question? Is it a good question? GASARCHhttp://www.blogger.com/profile/06134382469361359081noreply@blogger.com10tag:blogger.com,1999:blog-3722233.post-49924809482213732422014-06-06T15:44:00.000-05:002014-06-06T15:44:31.792-05:00Do you support CCC's declaration of Ind? If so...The steering committee for the CCC recently announced its decision to be independent of IEEE. <a href="http://computationalcomplexity.org/forum/open-letter/">Here</a> is their open letter to that effect.<br />
<ol>
<li>If you agree with this declaration of independence then you should read the petition <a href="http://computationalcomplexity.org/letter-of-support.php">here</a> and consider signing it. (I have signed it. Lance didn't.)</li>
<li>If you want to read a cleverer blog post on the topic, see Scott's <a href="http://www.scottaaronson.com/blog/?p=1853">here</a>.</li>
</ol>
GASARCHhttp://www.blogger.com/profile/06134382469361359081noreply@blogger.com12tag:blogger.com,1999:blog-3722233.post-191107072155446192014-06-04T16:18:00.000-05:002014-06-04T16:18:37.274-05:00STOC 2014At the just completed <a href="http://www.columbia.edu/~cs2035/stoc/stoc2014/">STOC conference</a> I received the 2014 <a href="http://www.sigact.org/Prizes/Service/">SIGACT Distinguished Service Prize</a>. Part of this citation reads: "His blog, and many others that followed, changes the way our field communicates, both with itself and with the outside world." I owe this award to you, my readers and commenters. May the conversations always continue.<br />
<br />
The true highlight of the conference happened a day earlier with the Turing award lectures of Silvio Micali and Shafi Goldwasser, two of the better Turing award lectures I've seen in a while. Silvio talked about the nature of proofs and Shafi showed the connections between cryptographic principles and their applications in the broader community. For example, the Goldreich-Levin hard-core bit led to list-decodable codes. The ACM taped both lectures and they should be on-line soon.<br />
<br />
Notes from the business meeting<br />
<br />
<ul>
<li>372 conference attendees, 46% students. A great attendance.</li>
<li>322 submissions, 91 accepts.</li>
<li>Best Paper: "The matching polytope has exponential extension complexity" by Thomas Rothvoss</li>
<li>Student Paper: "Online learning of local structure via semidefinite programming" by Paul Christiano</li>
<li>Gödel Prize (to be awarded at ICALP): "Optimal Aggregation Algorithms for Middleware" by Ronald Fagin, Amnon Lotem and Moni Naor</li>
<li>Future Conferences:</li>
<ul>
<li>FOCS 2014 October 18-21 in Philadelphia. There will be a workshop to discuss ideas for STOC and FOCS moving forward.</li>
<li>ITCS 2015 January 11-13 at the Weizmann Institute in Israel (<a href="http://people.csail.mit.edu/shafi/itcs15/itcs-cfp.pdf">CFP</a>)</li>
<li>STOC 2015 as part of <a href="http://fcrc.acm.org/">FCRC</a> in Portland, Oregon</li>
<li>STOC 2016 in Boston likely co-located with SoCG</li>
<li>Potential federated theory conference in 2017</li>
</ul>
<li>NSF</li>
<ul>
<li>2012 Funding: Algorithmic Foundations 52 Smalls, 9 Mediums, 2 Large, 11 CAREER</li>
<li>New programs (theorists wanted): <a href="http://www.nsf.gov/brain">Brain Initiative</a>, <a href="http://www.nsf.gov/pubs/2014/nsf14562/nsf14562.htm">CISE Research Initiation Initiative</a>, <a href="http://www.nsf.gov/pubs/2014/nsf14516/nsf14516.htm">XPS</a></li>
<li>Keep abreast of new opportunities at <a href="http://thmatters.wordpress.com/">Theory Matters</a></li>
</ul>
<li>New <a href="http://books.acm.org/">ACM Book Series</a></li>
</ul>
Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com3tag:blogger.com,1999:blog-3722233.post-72062246592063107742014-06-01T20:22:00.000-05:002014-06-02T15:49:04.223-05:00Law of small numbers and lettersThe law of small numbers is that there are not enough small numbers for all of the tasks that are assigned to them. This leads some math cranks to find patterns that are really just artifacts of there not being enough small numbers to go around. One such crank kept finding the number 57 in events related to the American Revolutionary War. The fallacy is that if you look hard enough, ANY number would work.<br />
<br />
At the LLL workshop it was noted that LLL stands for both the Lovasz Local Lemma (my wife asked ``an entire workshop about a lemma?'') and the Lenstra-Lenstra-Lovasz algorithm. Here we see that there aren't quite enough sequences of letters, hence some get used twice. Of course in this case it helps that Lovasz is brilliant.<br />
<br />
The only other example I know of two well-known theorems whose authors have the same initials is AKS:<br />
<br />
AKS primality testing: the Agrawal-Kayal-Saxena test of primality in poly time<br />
<br />
and<br />
<br />
AKS sorting network: the Ajtai-Komlos-Szemeredi O(log n)-depth sorting network.<br />
<br />
AKS primality gets 13,500 hits<br />
<br />
AKS sorting network gets 39,100 hits<br />
<br />
Are there other pairs of well known results where the initials are the same?<br />
(This may depend on what you call ``well known'')<br />
<br />
When someone says AKS do you think primality or sorting network?<br />
It may depend on your age and your field. I think of the Ajtai-Komlos-Szemeredi upper bound R(3,k) \le O(k^2/log k). I tend to NOT count that as another collision of initials since it's the exact same authors as the other paper.<br />
<br />
<br />GASARCHhttp://www.blogger.com/profile/06134382469361359081noreply@blogger.com13tag:blogger.com,1999:blog-3722233.post-65767847968944793692014-05-29T08:28:00.000-05:002014-05-29T08:28:15.001-05:00Forgotten but not Lost in a Mapless WorldMassimo Vignelli <a href="http://www.nytimes.com/2014/05/28/business/massimo-vignelli-a-modernist-graphic-designer-dies-at-83.html">passed away</a> on Tuesday, a visionary designer known for many things in particularly the <div class="separator" style="clear: both; text-align: center;">
<a href="http://www.nycsubway.org/perl/caption.pl?/img/maps/system_1972.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://static01.nyt.com/images/2014/05/28/business/vignelliobit2/vignelliobit2-blog427.jpg" height="320" width="268" /></a></div>
1970s <a href="http://www.nycsubway.org/perl/caption.pl?/img/maps/system_1972.jpg">NYC Subway Map</a>. I used to pore over this map as a kid, finding the shortest route from my father's apartment on the Upper East Side to exotic places like Coney Island. The subway map was not at all close to scale (Central Park certainly is not wider than it is tall), but that didn't matter to me. The map did a great job showing how all the lines interconnected and that's all I needed.<div>
<br /><div>
I had a great skill in my younger days getting places using maps. I abhorred getting directions or following people--just tell me where you want me to be and I'll find it with a good (or at least topologically correct) map. If you make a mistake with directions you get hopelessly lost. If you take a wrong turn, a map is self-correcting.</div>
<div>
<br /></div>
<div>
My daughter was surprised to hear I used to carry maps in my car. Now I just use Google Maps to show me the way. I completely lost my map finding skills and often my sense of direction. Last week I ate at a restaurant managed by a relative in San Antonio. I couldn't really tell you where in San Antonio it was, Google Maps just took me on a series of highways to get there and then a series of highways to get to my hotel near the airport afterwards. </div>
</div>
<div>
<br /></div>
<div>
In the future when we just get into our <a href="http://www.nytimes.com/2014/05/28/technology/googles-next-phase-in-driverless-cars-no-brakes-or-steering-wheel.html">steering wheel-less self-driving cars</a> and enter our destination on a smartphone, we may all lose our sense of direction. Just like we no longer remember phone numbers, we just type in our friend's name to be taken to their place of residence. Location may not matter any more, we'll leave it all to the machines.</div>
Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com2tag:blogger.com,1999:blog-3722233.post-65796775559373980332014-05-26T08:36:00.000-05:002014-06-03T22:43:45.666-05:00Review of People, Problems and Proofs By Lipton and Regan (A version of this either already has appeared or will appear in the SIGACT News book review column.)<br />
<br />
A review of the blog-book PEOPLE, PROBLEMS, AND PROOFS by Richard Lipton and Ken Regan in the form of a blog. <br />
<br />
FIRST POST: OVERVIEW OF PEOPLE, PROBLEMS, PROOFS.<br />
<br />
This is the second book to be written based on the blog GODEL'S LOST LETTER AND P=NP. The first one was THE P=NP QUESTION AND GODEL'S LOST LETTER. Both books are available on Amazon: <a href="http://www.amazon.com/People-Problems-Proofs-Essays-G%C3%B6dels/dp/3642414214/ref=sr_1_1?s=books&ie=UTF8&qid=1400361398&sr=1-1&keywords=Regan+Lipton">here</a> and <a href="http://www.amazon.com/The-Question-G%C3%B6dels-Lost-Letter/dp/1441971548/ref=pd_sim_sbs_b_1?ie=UTF8&refRID=0WPB3RCRNQMWF6NP61N0">here.</a><br />
<br />
When I read the GLL blog I often start a post, get interested, but then something comes up so I can't finish it. THE BLOG WILL GET READ... TOMORROW! BET YOUR BOTTOM DOLLAR THAT TOMORROW, I WILL READ! Yet having it in book form makes it easier to read. It seems that the chapters in this book are shorter than the blog posts. If so that's a bit odd since one would think the book version could afford to be longer.<br />
<br />
The upshot is positive--- I read the entire book (rare for a math book), understood most of it (rarer still), and am inspired to read more about the topics introduced, and to try to prove things for myself. I'LL PROVE THE THER-EMS TOMORROW! BET YOUR BOTTOM DOLLAR THAT TOMORROW, I WILL PROVE.<br />
<br />
SECOND POST: ARE LIPTON AND REGAN REALLY THAT NICE?<br />
<br />
Richard Lipton and Ken Regan are nice people. Or their blog-selves are nice people. Most chapters have as their title a person and a subtitle that is more about the content. The link between the person and the content varies. Their descriptions of the people in the chapter titles are always quite positive.<br />
<br />
In Chapter 34, titled ``Sam Buss: Bounded Logic'', Lipton talks about two courses he had in logic. In one the professor was often confused. The other course was taught in an unmotivated way. He said they were both great courses. That's being too nice.<br />
<br />
The chapter itself was also nice. It involved ways to write quantifiers and refers to a result (which I will look up tomorrow) about inexpressibility.<br />
<br />
Chapter 38 is about definitions. The title is<br />
ALFONSO BEDOYA: DEFINITIONS, DEFINITIONS, and DEFINITIONS.<br />
You may be wondering WHO IS THAT? If you are under 35 you've probably already Googled it on some device you carry around and found out that he was the character Gold Hat in THE TREASURE OF THE SIERRA MADRE who uttered the famous line:<br />
<br />
``Badges? We ain't got no badges. We don't need no badges! I don't have to show you any stinking badges!'' (see <a href="https://www.youtube.com/watch?v=VI6KvXmytUs">here</a> for the original and see <a href="https://www.youtube.com/watch?v=gx6TBrfCW54">here</a> for a satire from the classic movie UHF).<br />
<br />
(Number 36 in the American Film Institute's 100 best movie quotes. The version from Sierra Madre, NOT the version from UHF, alas.)<br />
<br />
Their point is that you don't need definitions--- they can always be removed. I thought they would say:<br />
<br />
Definitions? We ain't got no definitions. We don't need no definitions. I don't have to show you any stinking definitions!<br />
<br />
But they cleaned it up (I didn't even know it was dirty!) to<br />
<br />
Definitions, definitions, we don't need no definitions.<br />
<br />
I'm surprised they didn't also remove the double negative, in case children are reading, to obtain<br />
<br />
Definitions, definitions, we don't need any definitions.<br />
<br />
The chapter itself was nice. It was about what makes a nice definition, and had some nice history. It was material I already knew but nice to have it all laid out.<br />
<br />
COMMENTS<br />
<br />
TOWN FALCONER: You forgot to mention the one time they really were nice to a fault. They were even nice to that guy who rudely wasted their time with a false proof that P NE NP.<br />
<br />
GLACIAL WARMISH: Gee Town, if we ever turned our blog into a book nobody would accuse you of being too nice. Anyway, they wisely spend Chapter One, THE CLAIMANT, THE READERS, AND THE CROWD, NOT replicating their blog, but telling the whole story from start to finish. This is really good since now that the end is known it's good to see how it all began. However, Town, you are right, Lipton and Regan do not have any unkind words about him at all. Not even a tepid HE SHOULD AT SOME POINT HAVE MADE A FORMAL RETRACTION.<br />
<br />
KEN REGAN: Too nice! How nice of you to say! However, in the first paragraph of Section 1.12 of Chapter One I do note that Deolalikar has not addressed the issues raised. So there!<br />
<br />
SONATA CONSORT: Ken, you call that not-being-nice? You use words like ``Unfortunate'' and phrases like ``we glean that (the revised version) did not increase appreciably in content''. Only nice people write like that. If I had spent a month poring over an obviously flawed paper I would have written REST OF THIS COMMENT DELETED BY THE MODERATOR<br />
<br />
ONE-CAT TREE: The chapter was more about crowd sourcing math than about the proof itself. This is an interesting topic; however, I think Terry Tao's polymath problems are a better and more productive example. And I also think that Lipton-Regan are too nice to say they disagree with me.<br />
<br />
H.K. DONNUT: Wow, it sounds like Chapter One is awesome. Would you say it's worth the price of the book?<br />
<br />
BILL G: Well... I got my copy for free. However, yes, Chapter 1 was, as the kids say, jawesome!<br />
<br />
NOT PORN: I find your posts very nice. For something even nicer click HERE for what I promise is NOT a porn site. At least not in most states.<br />
<br />
THIRD POST: FOURTEEN LIGHT CHAPTERS<br />
<br />
Not every post can be about an interesting piece of mathematics. It would just be too hard (though Terry Tao seems to manage it). And in fact Lipton-Regan do not attempt this. Of the 63 chapters in the book, 14 of them are more ABOUT MATH than having hard math in them: things like how to guess what a result will be, how to try to prove it, the importance of good notation, and how to write up a result. These chapters were light reading but still informative.<br />
<br />
COMMENTS:<br />
<br />
COLBERT NATION: HEY, it can either be light reading OR informative, but it can't be both. We're at war here, pick a side!<br />
<br />
BILL G: Here is an example. Chapter 2, titled KENNETH IVERSON: NOTATION AND THINKING didn't have any real math in it but it did tell me the following:<br />
<br />
Descartes is the first person to use x^4 instead of xxxx.<br />
<br />
Euler is the first person to use Sigma for summation and also the first to use the notation f(x) for a function.<br />
<br />
There is some debate about whether pi was the right constant to single out, since 2pi comes up more often.<br />
<br />
ALMA RHO-GRANT: Hey, YOU blogged on the pi thing yourself. So that can't be news to you.<br />
<br />
BILL G: Yeah, but I FORGOT! As I was reading it I thought it was a neat issue to raise BEFORE I saw that I raised it. Awkward! More to the point--- this book is good at reminding you of things you once knew.<br />
<br />
FOURTH POST: FORTY NINE HEAVY CHAPTERS<br />
<br />
If there are 63 chapters and 14 are light then 49 must be heavy. Actually the world is not quite so binary. However, there are 49 chapters that have math in them, much of it new and interesting. Since blogs are themselves summaries, if I summarized all of them we might end up with some Quantum effects (Chapter 63---Charles Bennett: Quantum Protocols). Hence I summarize just a few.<br />
<br />
Chapter 37 is titled THOMAS JECH: THE AXIOM OF CHOICE. Recall the usual axiom of choice:<br />
<br />
Let I be a set (it could be of any cardinality). Assume that for every i in I there is a nonempty set X_i. Then there exists a function f (a choice function) from I to bigcup_{i in I} X_i such that f(i) in X_i.<br />
<br />
Look at the following restrictions of this. Let C_n be the following statement:<br />
<br />
Let I be a set (it could be of any cardinality). Assume that for every i in I there is a set X_i such that |X_i|=n. Then there exists a function f (a choice function) from I to bigcup_{i in I} X_i such that f(i) in X_i.<br />
<br />
For which n,m does C_n imply C_m? This chapter gives a clear statement of when this is true and gives some proofs of the easy direction (showing that C_n implies C_m). The hard direction, showing that C_n does not imply C_m, is really hard--- it involves constructing models of set theory where C_n is true but C_m is not. These proofs are wisely omitted.<br />
<br />
Chapter 43 is titled DENIS THERIEN: SOLVABLE GROUPS. Recall that the complex numbers are algebraically closed. Let us put that a different way: If you want to solve a polynomial p(x)=0 over the rationals then there will be a large enough field extension of the rationals where it has a root.<br />
<br />
What if you are trying to solve a group equation over a group, such as ax=bx over S_5 x S_7 where a=(1 2 3 4) and b=(1 3)(1 5). (I honestly do not know if that has a solution). If there is no solution then is there some group that contains S_5 x S_7 as a subgroup where there is a solution? For this equation I do not know. But more to the point, is there some general theorem that says you can always find a larger group with a solution? NO. In this chapter they give an example where you cannot find a larger group (and they prove it works--- it's not that hard) and state some theorems and conjectures about this issue.<br />
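A side note: as literally written, ax=bx always cancels to a=b, so that particular equation has a solution only in the trivial case; the interesting equations are ones where the unknown sits between constants. As a toy illustration (my sketch, not the book's example), here is a brute-force search for a solution of x·a·x = b over S_4, with permutations in one-line notation:

```python
from itertools import permutations

def compose(p, q):
    """Return p∘q, i.e. the permutation i -> p[q[i]] (tuples in one-line notation)."""
    return tuple(p[q[i]] for i in range(len(p)))

def solve_equation(a, b, n):
    """Brute-force search S_n for an x satisfying x(a(x(i))) = b(i) for all i."""
    for x in permutations(range(n)):
        if compose(x, compose(a, x)) == b:
            return x
    return None

a = (1, 2, 3, 0)  # the 4-cycle (0 1 2 3)
print(solve_equation(a, (3, 0, 1, 2), 4))  # solvable: e.g. x = a^{-1} works
print(solve_equation(a, (0, 1, 2, 3), 4))  # prints None: no x in S_4 squares to a 4-cycle
```

Searching all |S_5 x S_7| = 604800 elements for a larger example would work the same way, just slower.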
<br />
Both Chapter 37 and Chapter 43 were excellent since they told me about a problem I had not thought of but once introduced were very interesting. In both cases they gave me a simple proof that I could follow. I plan to read more on these topics. Tomorrow.<br />
<br />
COMMENTS:<br />
<br />
TIM ANDRER GRAN: Is the math serious or recreational?<br />
<br />
BILL G: To understand the statements of the theorems you need to know some math, say that of a sophomore math major. Even then, some terms would have to be explained to you. So you might say the statements (not the proofs) are recreational to people who read Martin Gardner's or Ian Stewart's columns AND went on to learn some more math.<br />
<br />
ANA WRITSET: Why are the names of the comments so odd? Even mine!<br />
<br />
BILL G: Aside from ``Bill G'', ``Not Porn'', and ``Colbert Nation'', all of the names are anagrams. Some of the anagrams are from the April 1, 2014 <a href="http://rjlipton.wordpress.com/2014/04/01/wait-wait-dont-fool-me/">post</a> on the Godel's Lost Letter blog (Lipton-Regan blog) and some are from an April 1, 2013 <a href="http://blog.computationalcomplexity.org/2013/04/a-nice-case-of-interdisciplinary.html">post</a> (more properly, a paper pointed to from that post) of the computationalcomplexity blog (Fortnow-Gasarch blog).<br />
<br />
<br />
FIFTH POST: WHO SHOULD BUY THIS BOOK?<br />
<br />
If you are an undergraduate in math or CS (and you like theory) then this book will give you some tips on how to get started in research and some nice topics to read up on, though some may be hard. As you go from ugrad to grad student to professional the light chapters will get less interesting and the heavy chapters will get more interesting. I think Lipton and Regan have struck exactly the right balance, so the book is good for anyone.<br />
<br />
<br />GASARCHhttp://www.blogger.com/profile/06134382469361359081noreply@blogger.com2tag:blogger.com,1999:blog-3722233.post-88950889589131288892014-05-22T08:14:00.000-05:002014-05-29T07:40:14.944-05:00Theory Jobs 2014In the fall we <a href="http://blog.computationalcomplexity.org/2013/10/2013-fall-jobs-post.html">list theory jobs</a>, in the spring we see who got them. Similar to <a href="http://blog.computationalcomplexity.org/2013/05/theory-jobs-2013.html">last year</a>, I created a fully editable <a href="https://docs.google.com/spreadsheet/ccc?key=0AnYf--R4oxczdDNnMVF2cVFzUjJLRTl1Q2FJdkNnNWc&usp=sharing">Google Spreadsheet</a> to crowd source who is going where. Same rules as last year:<br />
<ul>
<li>I set up separate sheets for faculty, industry, postdoc/visitors and students.</li>
<li>People should be connected to theoretical computer science, broadly defined.</li>
<li>Only add jobs that you are absolutely sure have been offered and accepted. This is not the place for speculation and rumors.</li>
<li>You are welcome to add yourself, or people your department has hired.</li>
</ul>
This document will continue to grow as more jobs settle. So check it often.<br />
<br />
<iframe frameborder="0" height="750" src="https://docs.google.com/spreadsheet/pub?key=0AnYf--R4oxczdDNnMVF2cVFzUjJLRTl1Q2FJdkNnNWc&output=html&widget=true" width="575"></iframe>
<a href="https://docs.google.com/spreadsheet/ccc?key=0AnYf--R4oxczdDNnMVF2cVFzUjJLRTl1Q2FJdkNnNWc&usp=sharing">Edit</a>Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com14tag:blogger.com,1999:blog-3722233.post-64276050456972479112014-05-18T20:39:00.000-05:002014-05-21T20:19:56.886-05:00''4col \le 3col is counter-intuitive and makes me sad'' NOT ANYMORE! (ADDED LATER- THE PROOF BELOW SEEMS TO BE WRONG- A COMMENTER POINTED OUT A MISTAKE. IF YOU CAN FIX IT PLEASE<br />
LEAVE COMMENT. I WILL POINT OUT MISTAKE WHEN IT COMES.)<br />
<br />
FINAL ADD: I DID THE PROOF IN LATEX <a href="http://www.cs.umd.edu/~gasarch/BLOGPAPERS/3col.pdf">HERE</a><br />
<br />
A COMMENTER POINTED OUT THAT LOVASZ HAD A SANE PROOF A LONG TIME AGO.<br />
IT GOES THROUGH HYPERGRAPHS. <a href="http://www.cs.elte.hu/~lovasz/scans/covercolor.pdf">Here</a> is a link to his paper.<br />
<br />
<br />
In my ugrad theory class (reg, P, NP, Dec, c.e.) I proved that SAT is NP-complete. In the next lecture I proved that SAT \le 3-col, so 3-col is NP-complete. I then proved 3-col \le 4-col. I was planning on raising the converse question myself, but a student beat me to it (I love when that happens!). A student asked:<br />
<br />
Is 4-col \le 3-col ?<br />
<br />
So I did the obvious thing--- we discussed it and we VOTED on it. After the vote I asked one of the naysayers to say why NO. He said that he really couldn't see how you could do anything reasonable to get 4-col\le 3-col. I told him he was RIGHT but he was WRONG. Say what?<br />
<br />
I told him that 4-col \le 3-col but the reduction is INSANE as it goes 4-col \le SAT (since SAT is NP-complete) and then SAT \le 3-col, as I just showed. So the student is RIGHT in that there is no `easy way to do it' (that statement is informal) but was WRONG since YES 4-col \le 3-col. He WAS onto something and I wanted to make sure he knew that, even though he was wrong.<br />
<br />
He understood all of this, but later when I had students tell me their FAV and LEAST FAV things in the course he wrote <i>Least FAV: 4-col \le 3-col: it's counter-intuitive and makes me sad.</i> (Side note: The woman who sits next to him in class wrote that 4-col \le 3-col was her FAV part of the course.)<br />
<br />
I have asked myself and some people if there is a SANE reduction 4-col \le 3-col. I must not have asked that many people or thought about it myself that hard, since now that I was inspired to make this student happy I came up with an UTTERLY SANE reduction:<br />
<br />
Given G on vertices v(1),...,v(n) and edge set E we construct G': G is 4-col iff G' is 3-col.<br />
<br />
Vertices of G': For all 1\le i \le n vertices v(i,1), v(i,2), v(i,3), v(i,4). Also vertices T, F, and R. The colors will be T, F, R. We think of coloring v(i,3) T as meaning that in G node i is colored 3. (I will add more vertices later.)<br />
<br />
Edges of G': The vertices T, F, R are all connected, so they form a triangle. Every v(i,j) node is connected by an edge to R. To make sure that every vertex of G gets at most one color we have for all 1\le i\le n the edges<br />
<br />
UPDATE- THE REST OF THIS POST IS STUFF I ADDED LATER WHICH I BELIEVE IS NOW A CORRECT PROOF.<br />
<br />
KEY- there is a gadget GADF(x,y) such that if x and y are both colored F then its<br />
output node must be colored F. AND there is a gadget GADT(x,y) such that if both x,y are colored T then its output node must be colored T. GADF is used in the usual proof that SAT \le 3-COL. GADT I have never seen used but it is a trivial variant of GADF.<br />
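The post leaves the gadgets unspecified, but the standard 5-node OR-gadget from the SAT \le 3-COL proof (inputs a, b; internal nodes c, d; output e) does the job, and in fact the very same gadget has both the GADF and the GADT property. Here is a brute-force Python check (my sketch, under that assumption about which gadget is meant):

```python
from itertools import product

T, F, R = "T", "F", "R"
COLORS = (T, F, R)

# Standard OR-gadget from the SAT <= 3-COL proof: inputs a, b;
# internal nodes c, d; output e.
EDGES = [("a", "c"), ("b", "d"), ("c", "d"), ("c", "e"), ("d", "e")]

def forced_output(a_color, b_color):
    """Return the set of colors the output e can take over all proper
    3-colorings of the gadget with the inputs fixed as given."""
    outputs = set()
    for c, d, e in product(COLORS, repeat=3):
        col = {"a": a_color, "b": b_color, "c": c, "d": d, "e": e}
        if all(col[u] != col[v] for u, v in EDGES):
            outputs.add(e)
    return outputs

if __name__ == "__main__":
    print("inputs F,F ->", forced_output(F, F))  # only F: the GADF property
    print("inputs T,T ->", forced_output(T, T))  # only T: the GADT property
    print("inputs T,F ->", forced_output(T, F))  # mixed inputs: output unconstrained
```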
<br />
To make sure that at least one of v11, v12, v13, v14 is T, use GADF the same way as in the usual SAT \le 3-COL proof: GADF(GADF(v11,v12),GADF(v13,v14))<br />
and have the output be connected to both F and R.<br />
<br />
To make sure that at most one of v11, v12, v13, v14 is T have<br />
<br />
GADT(v11,v12) connect output to both T and R<br />
<br />
GADT(v11,v13) connect output to both T and R<br />
<br />
GADT(v11,v14) ...<br />
<br />
GADT(v12,v13) ...<br />
<br />
GADT(v12,v14)...<br />
<br />
GADT(v13,v14)...<br />
<br />
How to deal with the edges of G? My original proof was wrong here as well,<br />
but it is easily fixed with the gadgets. For each edge (vi,vj) in G we need to make<br />
sure that vi1 and vj1 are NOT both T, vi2 and vj2 are NOT both T, vi3 and vj3 are NOT both T, and vi4 and vj4 are NOT both T. Use GADT and connect the output to T,R<br />
for all four of these.<br />
<br />
THANKS to all who pointed out mistakes in the original and I HOPE that the current one is correct.<br />
<br />
I LEAVE the mess I had before so that comments on it below still make sense,<br />
though if you are reading this for the first time you can skip it.<br />
(v(i,1),v(i,2)),<br />
(v(i,1),v(i,3)),<br />
(v(i,1),v(i,4)).<br />
(v(i,2),v(i,3)),<br />
(v(i,2),v(i,4)),<br />
(v(i,3),v(i,4)),<br />
<br />
THIS IS THE PROBLEM- I WANTED TO MAKE SURE THAT ONLY ONE OF<br />
v(i,1), v(i,2), v(i,3), v(i,4) WAS TRUE. BUT THE WAY THAT I DO IT I<br />
CREATE A 5-CLIQUE WITH THESE VERTICES AND R.<br />
SO NEED A WAY TO MAKE SURE THAT ONLY ONE OF THESE IS TRUE<br />
WHILE MAKING SURE GRAPH IS STILL 3-COL. GADGET SIMILAR TO THE<br />
ONE TO MAKE SURE THAT AT LEAST ONE IS T (USED IN SAT\le 3-COL<br />
AND LATER IN THIS PROOF) MIGHT WORK.<br />
<br />
To make sure that at least one of v(i,1), v(i,2), v(i,3), v(i,4) is T, use the same gadget used in the proof that SAT \le 3-col. (This will introduce some more vertices and edges.)<br />
THAT'S THE ENTIRE CONSTRUCTION!<br />
<br />
Now he is happy and I am happy that he is happy!<br />
<br />
Note that this generalizes to c-col \le 3-col.<br />
<br />GASARCHhttp://www.blogger.com/profile/06134382469361359081noreply@blogger.com10tag:blogger.com,1999:blog-3722233.post-91421062898411634342014-05-15T13:33:00.000-05:002014-05-15T13:33:01.066-05:00Losing the Middle<div class="separator" style="clear: both; text-align: center;">
</div>
Growing up in the 70's, I listened to music on a turntable with vinyl records. The quality of the music went up almost continuously with the amount you were willing to spend. In the mid-80's digital compact discs brought a big change: you could get pretty good quality music at a relatively low cost. If you wanted great quality you would still use vinyl with high-priced needles and audio systems. The middle disappeared: either you were fine with CD quality or you went for a high-end sound.<br />
<br />
You see the same with photography. Back in the 70's I had a small darkroom and a mid-level camera. Again the more you spent the better quality you got. Now most people are satisfied with their smartphones for taking pictures, or you spend significantly more money for a high-end camera. There is a continuous improvement from high-end to very high end but it makes little sense to buy a $300 camera these days.<br />
<br />
For video games, you either play games on your phone or tablet, or splurge on a system like Xbox or PlayStation. Same for watching television shows and movies. You even see this phenomenon in cars, where even the low-end models have a solid baseline of digital systems and safety features. There are still some things you can't digitize, like wine, where there is still a continuous improvement spectrum.<br />
<br />
So what's the problem? After all, everyone now gets better quality at lower cost, whatever the price point. Let's go back to the music example. You'd start off with a cheap system and then, as you earned some money, you'd splurge on a better needle and get an improvement in quality. And you'd keep improving until you reached a high-level sound.<br />
<br />
Now many people would never take that step from an iPhone to vinyl because it requires a large investment to get any improvement in sound. They'll never find out what they're missing.Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com10tag:blogger.com,1999:blog-3722233.post-82709303559187700392014-05-12T09:37:00.000-05:002014-05-12T09:37:10.980-05:00How should qualifying exams and courses workIn my last <a href="http://blog.computationalcomplexity.org/2014/05/every-theory-grad-student-should-know.html">post </a>about what Every Theory Grad Student Should know some more general questions were raised:<br />
<br />
Qualifying exams vs Course Requirements.<br />
Why do we have either?<br />
<br />
1) To correct mistakes that admissions made. That is, some students who were admitted are not going to finish, and it is better to get them out of the system sooner rather than later.<br />
<br />
2) To make sure students are well rounded.<br />
<br />
When do exams work:<br />
In Math or Physics there is a well-defined notion of the basic material that all grad students should know. That's why, for the most part, every math prof can teach every ugrad math course. That a logician teaches Linear Algebra does not surprise anyone.<br />
<br />
CON for CS exam:<br />
In CS... we have not settled on an `every CS grad student must know...' canon. I suspect we haven't because the field changes fast AND our field is just so diverse. Hence NOT every CS prof could PASS every ugrad CS course. In short, having a qualifying exam may be problematic since we don't quite know what the basics are.<br />
<br />
CON for exams:<br />
You are getting rid of people. Are they the right people?<br />
<br />
Would you believe that someone we kicked out of the program had four papers in STOC! Would you believe it! Four papers in STOC! No. Oh. Would you believe 2 papers in ICALP? No. How about a failed proof that P=NP?<br />
<br />
Where do you draw the line for `she's so good we should let her get a PhD even though she failed the qualifying exams'? This doesn't happen in math so much since math students don't start research in their first year. Again, the difference is the age of the field and the prerequisites needed to get up to speed.<br />
<br />
PRO for exams:<br />
You get rid of people early before they waste too much time. <br />
<br />
PRO for courses: If you take blah courses in blah-blah areas then, even if you try to game the system some, you do KNOW something.<br />
<br />
CAVEAT about courses: If the course grades actually matter for the requirements then the professor has to give out real HW and real exams. This forces people to learn the material -- even grad students need some encouragement. But it may be bad for the learning environment.<br />
<br />
<br />
<br />
GASARCHhttp://www.blogger.com/profile/06134382469361359081noreply@blogger.com4tag:blogger.com,1999:blog-3722233.post-30326762748953287342014-05-08T15:46:00.002-05:002014-05-08T15:46:56.798-05:001984 + 30Back in 7th grade we had a discussion in history class, long since forgotten, that ended with the statement "Remember, 1984 is only eight years away". While rumor had it that George Orwell picked the title <a href="http://en.wikipedia.org/wiki/Nineteen_Eighty-Four">1984</a> by switching the last two digits of 1948, the year he wrote the book, the year 1984 loomed large ahead in that cold war world we lived in.<br />
<br />
1984 the year itself came and went without much hoopla beyond an unmemorable <a href="http://www.imdb.com/title/tt0087803">movie adaptation</a>.<br />
<br />
I was reminded of 1984 by a recent play adapted from the book at Brandeis, which my daughter now attends. The play told the story quite effectively in flashback during the torture of Winston Smith.<br />
<br />
And now we sit three decades later with revelations that the NSA tracks whom we call and contact, and Google knows nearly everything about me, including where I am at all times. Dave Eggers's recent novel <a href="http://www.amazon.com/gp/product/0385351399?ie=UTF8&camp=213733&creative=393185&creativeASIN=0385351399&linkCode=shr&tag=computation09-20">The Circle</a> draws a rather direct line from the Internet giants of today to the world of 1984.<br />
<br />
The Internet also offers the greatest deterrent to 1984, encouraging free-flowing quick dissemination of information and ideas, at least in the countries that allow it.<br />
<br />
We've hit the technical capabilities both to enable and to prevent 1984. So what happens next?Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com6tag:blogger.com,1999:blog-3722233.post-62334687180814213502014-05-07T07:03:00.001-05:002014-05-07T07:03:21.658-05:00Every Theory Grad Student should know...This semester my Graduate Complexity Course has nine students in it.<br />
It had not been offered for two years (when it last ran it had around 20 students), so the demand,<br />
such as it is, should have built up. Of my nine students only about four are theorists.<br />
<br />
Why did the UMCP Grad Students in theory NOT take grad complexity?<br />
(I am regarded as a good teacher so I'm not the reason.)<br />
The reason is that there were two competing grad theory courses in terms of the requirements: Algorithmic Game Theory and Comp Bio.<br />
<br />
This raises a question, which I ask, as always, non-rhetorically. Is a theory grad student in ALGORITHMS, who is NOT working in Alg Game Theory, better off taking Grad Complexity or Alg Game Theory? Alg Game Theory (or some other<br />
type of algorithms course that is not quite what she does) is perhaps more in her interest and perhaps more the kind of thing she might switch to later. But<br />
`every theory grad student should know complexity theory', or more precisely<br />
`every theory grad student in algorithms should know about lower bounds on algorithms, e.g., PCP, approximation lower bounds, maybe a few other things like the Unique Games Conjecture and some concrete lower bounds.' This is true philosophically, but time is short.<br />
<br />
My course has in it (1) the complexity classes L, NL, P, NP, PSPACE, EXPSPACE, some problems complete for each, and all known relations between them, (2) all of the machinery to show that GI being NP-complete implies PH collapses, (3) PCP and lower bounds on approximation<br />
(I do NOT prove the full PCP theorem). If time permits I may do some concrete lower bounds (PARITY not in constant depth, or some Proof Theory lower bounds). OH, I may do SHARP P and Toda's theorem also, but this semester we had some snow days and that ended up not being done, though I did do<br />
Valiant-Vazirani and told them ABOUT SHARP P without proofs.<br />
<br />
When Jon Katz teaches it I think he does some derandomization and no concrete lower bounds (JON -- if you are reading this you can comment and correct me).<br />
<br />
The question of Comp Theory VS Alg Game Theory is really asking how well rounded one should be, and in what sense. Someone who works on Approx Algs who takes Alg Game Theory could say that this DOES make them well rounded. Complexity theory is further away from their interests and from anything they may work on, but `every theory grad student should know it.' Or should they?<br />
<br />
I am not complaining about having only nine students (I don't have a TA, so I<br />
do all of the grading -- in that sense it's great!) but I ask YOU the question,<br />
how important is Complexity Theory (my course or perhaps some other course) for a grad student in algorithms to know?GASARCHhttp://www.blogger.com/profile/06134382469361359081noreply@blogger.com15