Computational Complexity<br />
Computational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch (<a href="http://blog.computationalcomplexity.org/">blog.computationalcomplexity.org</a>)<br />
<br />
<b>Ken Arrow and Oscars Voting</b> (Thu, 23 Feb 2017, by Lance Fortnow)<br />
<br />
Kenneth Arrow, the Nobel Prize-winning economist known for his work on social choice and general equilibrium, <a href="https://www.nytimes.com/2017/02/21/business/economy/kenneth-arrow-dead-nobel-laureate-in-economics.html">passed away Tuesday</a> at the age of 95.<br />
<br />
I couldn't cover Arrow's broad, influential work in this blog post even if I were an economist, but I would like to talk about Ken Arrow's perhaps best-known work, his <a href="https://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem">impossibility theorem</a> for voting schemes. If you have at least three candidates, there is no perfect voting method.<br />
<br />
Suppose a group of voters give their full rankings of three candidates, say "La La Land", "Moonlight" and "Manchester by the Sea" and you have some mechanism that aggregates these votes and chooses a winner.<br />
<br />
Now suppose we want a mechanism to have two fairness properties (for every pair of movies):<br />
<ul>
<li>If every voter prefers "Moonlight" to "La La Land" then the winner should not be "La La Land". </li>
<li>If the winner is "Moonlight" and some voters change their ordering between "La La Land" and "Manchester by the Sea" then "Moonlight" is still the winner (independence of irrelevant alternatives).</li>
</ul>
Here's one mechanism that fulfills these properties: We throw out every ballot except Emma Watson's, and whichever movie she chooses wins.<br />
<br />
Arrow showed that dictatorships like this are the only mechanisms that fulfill the properties: There is no non-dictatorial voting system that has the fairness properties above.<br />
<br />
Most <a href="https://en.wikipedia.org/wiki/Arrow's_impossibility_theorem#Informal_proof">proofs</a> of Arrow's theorem are combinatorial in nature. In 2002 Gil Kalai <a href="http://dx.doi.org/10.1016/S0196-8858(02)00023-4">gave a clever proof</a> based on Boolean Fourier analysis. Noam Nisan goes over this proof in a 2009 <a href="https://agtb.wordpress.com/2009/03/31/from-arrow-to-fourier/">blog post</a>.<br />
<br />
Arrow's theorem that no system is perfect doesn't mean that some systems aren't better than others. The Oscars use a reasonably good system known as Single Transferable Voting. Here is a short version updated from a <a href="http://abcnews.go.com/Entertainment/academy-awards-voting-process-works-steps/story?id=37067797">2016 article</a>.<br />
<blockquote class="tr_bq">
For the past 83 years, the accounting firm PricewaterhouseCoopers has been responsible for tallying the votes, and again this year partners Martha Ruiz and Brian Cullinan head up the operation. The process of counting votes for Best Picture isn't as simple as one might think. According to Cullinan, each voter is asked to rank the nine nominated films 1-9, one being their top choice. After determining which film garnered the least number of votes, PWC employees take that title out of contention and look to see which movie each of those voters selected as their second favorite. That redistribution process continues until there are only two films remaining. The one with the biggest pile wins. "It doesn’t necessarily mean that who has the most number one votes from the beginning is ensured they win," he added. "It’s not necessarily the case, because going through this process of preferential voting, it could be that the one who started in the lead, doesn’t finish in the lead."</blockquote>
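The preferential count described above (single transferable voting with a single winner amounts to instant-runoff voting) is easy to sketch in code. A minimal sketch, with made-up ballots for illustration:

```python
from collections import Counter

def instant_runoff(ballots):
    """Winner under the preferential count described above: repeatedly drop
    the film with the fewest first-place votes and transfer those ballots
    to their next surviving choice."""
    ballots = [list(b) for b in ballots]
    while True:
        # Each ballot counts toward its highest-ranked surviving film.
        tally = Counter(b[0] for b in ballots if b)
        if len(tally) == 1:
            return tally.most_common(1)[0][0]
        leader, votes = tally.most_common(1)[0]
        if 2 * votes > sum(tally.values()):
            return leader   # a majority pile can no longer be overtaken
        loser = min(tally, key=tally.get)
        ballots = [[f for f in b if f != loser] for b in ballots]

# Made-up ballots: "Manchester" is eliminated first and its two ballots
# transfer to "Moonlight", which then beats "La La Land" 6 to 3.
ballots = ([["Moonlight", "La La Land"]] * 4 +
           [["La La Land", "Moonlight"]] * 3 +
           [["Manchester by the Sea", "Moonlight", "La La Land"]] * 2)
print(instant_runoff(ballots))  # Moonlight
```

Note that a ballot's lower choices come into play only after its top surviving choice is eliminated, which is exactly why ranking a rival last does nothing for you.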
Another article <a href="http://www.thewrap.com/how-oscar-best-picture-winner-chosen-moonlight-la-la-land/">explicitly asks</a> about strategic voting.<br />
<blockquote class="tr_bq">
<b>So if you’re a big fan of “Moonlight” but you’re scared that “La La Land” could win, you can help your cause by ranking “Moonlight” first and “La La Land” ninth, right?</b></blockquote>
<blockquote class="tr_bq">
Wrong. That won’t do a damn thing to help your cause. Once you rank “Moonlight” first, your vote will go in the “Moonlight” stack and stay there unless “Moonlight” is eliminated from contention. Nothing else on your ballot matters as long as your film still has a chance to win. There is absolutely no strategic reason to rank your film’s biggest rival last, unless you honestly think it’s the worst of the nominees.</blockquote>
The Gibbard-Satterthwaite theorem, a close cousin of Arrow's, says there must be scenarios where you can gain by voting strategically. It might make sense for this fan to put "Fences" as their first choice to potentially knock out "La La Land" in an early round. A similar situation <a href="https://en.wikipedia.org/wiki/Bids_for_the_2016_Summer_Olympics#Election">knocked out</a> Chicago from hosting the 2016 Olympics.<br />
<br />
Maybe the Oscars should just let Emma Watson choose the winner.<br />
<br />
<b>The Benefits of Recreational Mathematics</b> (Mon, 20 Feb 2017, by GASARCH)<br />
<br />
Why study Recreational Mathematics?<br />
<br />
Why do recreational Mathematics?<br />
<br />
1) The line between recreational and serious mathematics is thin. Some of the problems in so-called recreational math are harder than they look.<br />
<br />
2) Inspiring. Both Lance and I were inspired by books by Martin Gardner, Ray Smullyan, Brian Hayes, and others.<br />
<br />
3) Pedagogical: Understanding Gödel's Incompleteness Theorem via the Liar's Paradox (Ray Smullyan popularized that approach) is a nice way to teach the theorem to the layperson (and even to non-laypeople).<br />
<br />
4) Rec math can be the starting point for so-called serious math. The Königsberg bridge problem was the starting point for graph theory, and the fault diagnosis problem is a generalization of the truth-tellers and normals problem. See <a href="https://www.math.iupui.edu/~bleher/Papers/1983_On_a_logical_problem.pdf">here</a> for a nice paper by Bleher on the ``recreational'' problem of, given N people of which more than half are truth-tellers and the rest are normals, asking questions of the type ``is that guy a normal?'' to determine who's who. See <a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/normals.pdf">here</a> for my writeup of the algorithm for a slightly more general problem. See William Hurwood's thesis <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.75.6597&rep=rep1&type=pdf">here</a> for a review of the fault diagnosis literature, which includes Bleher's paper.<br />
<br />
I am sure there are many other examples, and I invite readers to mention them in the comments.<br />
<br />
5) Rec math can be used to inspire HS students who don't quite have enough background to do so-called serious mathematics.<br />
<br />
This post is a bit odd since I cannot imagine a serious counter-argument; however, if you disagree, leave an intelligent, thoughtful comment with a contrary point of view.<br />
<br />
<b>Libratus Wins at Poker</b> (Thu, 16 Feb 2017, by Lance Fortnow)<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://www.cmu.edu/news/stories/archives/2017/january/images/poker_853x480-min.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="180" src="https://www.cmu.edu/news/stories/archives/2017/january/images/poker_853x480-min.jpg" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Tuomas Sandholm (center) and Ph.D. student Noam Brown (via <a href="https://www.cmu.edu/news/stories/archives/2017/january/AI-beats-poker-pros.html">CMU</a>)</td></tr>
</tbody></table>
Congratulations to Libratus, the <a href="https://www.wired.com/2017/02/libratus/">new poker champ</a>. Libratus, an AI program, beat several top-ranked players in heads-up (two-player) no-limit Texas hold 'em.<br />
<br />
For those unfamiliar with <a href="https://en.wikipedia.org/wiki/Texas_hold_%27em">Texas hold 'em</a>:<br />
<blockquote class="tr_bq">
Two cards, known as the hole cards or hold cards, are dealt face down to each player, and then five community cards are dealt face up in three stages. The stages consist of a series of three cards ("the flop"), later an additional single card ("the turn") and a final card ("the river"). Each player seeks the <a href="https://en.wikipedia.org/wiki/List_of_poker_hands">best five card poker hand</a> from the combination of the community cards and their own hole cards. Players have betting options to check, call, raise or fold. Rounds of betting take place before the flop is dealt, and after each subsequent deal.</blockquote>
Unlike the computers that defeated the best humans in chess, Jeopardy! and Go, Libratus comes directly from academia, from Tuomas Sandholm and his student Noam Brown at Carnegie Mellon.<br />
<br />
Unlike chess and Go, poker is a game of incomplete information in many forms.<br />
<ul>
<li>Information both players have: the community cards already played.</li>
<li>Information only one player has: their own hole cards.</li>
<li>Information neither player has: the community cards yet to be played.</li>
</ul>
<div>
Betting in poker primarily serves to raise the stakes, but it can also signal what hole cards you have. Players can bluff (betting large amounts without a correspondingly strong hand), trying to cause other players to misread the signal. There is no perfect play in poker, just a mixed equilibrium. We still don't know how to compute that equilibrium, and even if we could, we might deviate from it to gain an advantage, though deviating also makes you vulnerable.</div>
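The phrase "just a mixed equilibrium" can be made concrete with a toy example (this is only an illustration; Libratus itself builds on counterfactual regret minimization at vastly larger scale). In the made-up zero-sum bluffing game below, neither player has a good deterministic strategy, and fictitious play, where each side repeatedly best-responds to the other's observed frequencies, homes in on the mixed equilibrium: bluff one third of the time, call one third of the time.

```python
from collections import Counter

# A made-up zero-sum "bluffing" game; entries are payoffs to the bettor.
# Rows: the bettor Bluffs or plays Honest. Columns: the caller Calls or Folds.
PAYOFF = {("Bluff", "Call"): -1, ("Bluff", "Fold"): 1,
          ("Honest", "Call"): 1, ("Honest", "Fold"): 0}

def bettor_best(col_counts):
    # Best response to the caller's empirical mix so far.
    return max(["Bluff", "Honest"],
               key=lambda r: sum(PAYOFF[(r, c)] * n for c, n in col_counts.items()))

def caller_best(row_counts):
    # The caller's payoff is the negation of the bettor's (zero-sum).
    return max(["Call", "Fold"],
               key=lambda c: -sum(PAYOFF[(r, c)] * n for r, n in row_counts.items()))

row_counts, col_counts = Counter({"Bluff": 1}), Counter({"Call": 1})
for _ in range(200000):
    r, c = bettor_best(col_counts), caller_best(row_counts)
    row_counts[r] += 1
    col_counts[c] += 1

bluff_freq = row_counts["Bluff"] / sum(row_counts.values())
call_freq = col_counts["Call"] / sum(col_counts.values())
print(bluff_freq, call_freq)  # both tend toward 1/3, the unique mixed equilibrium
```

Neither pure strategy survives: always bluffing gets called, and never bluffing lets the caller fold safely, which is the sense in which poker has no perfect deterministic play.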
<div>
<br /></div>
<div>
All of this makes poker a far more complicated game for computers to tackle. But through persistence and new tools in machine learning, Sandholm and Brown have found success.</div>
<div>
<br /></div>
<div>
If history holds up, it won't be long before we have champion-caliber poker apps on our phones. Will we see cheating like what has been <a href="http://www.uschess.org/content/view/12677/763">happening in chess</a>? Will online poker sites just disappear?</div>
<div>
<br /></div>
<div>
What is the next great game to fall to computers? I'm guessing NASCAR. </div>
<b>Raymond Smullyan: Logician, Recreational Math Writer, Philosopher, Passed Away</b> (Mon, 13 Feb 2017, by GASARCH)<br />
<br />
Raymond Smullyan was born on May 25, 1919 and passed away recently at the age of 97. He was a logician (PhD from Princeton under Alonzo Church in 1959) who did serious, recreational, and philosophical work. I doubt he invented the truth-teller/liar/normals and knight/knave/Knormal problems, but he popularized them and (I suspect) pushed them further than anyone before him.<br />
<br />
He was active all of his life:<br />
<br />
His last book on logic puzzles, <i>The Magic Garden of George B and Other Logic Puzzles</i>, was published in 2015. Wikipedia lists 14 books on logical puzzles, from 1978 until 2015.<br />
<br />
His last book classified (on Wikipedia) as Philosophy/Memoir, <i>A Mixed Bag: Jokes, Riddles, Puzzles, and Memorabilia</i>, was published in 2016. Wikipedia lists 8 books in this category, from 1977 until 2016.<br />
<br />
His last academic book, <i>A Beginner's Further Guide to Mathematical Logic</i>, was published in 2016. (It was a sequel to his 2014 <i>A Beginner's Guide to Mathematical Logic</i>.) Wikipedia lists 8 books in this category, from 1961 to 2016.<br />
<br />
<br />
<b>Recreational Work:</b><br />
<br />
His recreational work was of the Knights/Knaves/Knormals/Sane/Insane variety.<br />
<br />
Knights always tell the truth.<br />
<br />
Knaves always lie.<br />
<br />
Knormals may tell the truth or lie.<br />
<br />
Insane people believe only false things.<br />
<br />
Sane people believe only true things.<br />
<br />
He also added a complication: a species that says ALPHA and BETA for YES and NO, but<br />
you don't know which of ALPHA and BETA means YES and which means NO.<br />
<br />
<br />
Note that an insane knight (he tells the truth about what he believes, but believes only false things) will answer YES to "1+1=3".<br />
<br />
He invented (discovered?) the so-called <a href="https://en.wikipedia.org/wiki/The_Hardest_Logic_Puzzle_Ever">hardest logic puzzle ever</a>. He wrote many books on recreational math. We mention four of them that show the line between recreational and serious mathematics is thin.<br />
<div>
<div>
<br /></div>
<div>
<i>To Mock a Mockingbird</i>. This book has logic puzzles based on combinatory logic. Is that really recreational?</div>
<div>
<br /></div>
<div>
<i>Forever Undecided.</i> This book introduces the layperson to Godel's theorem.</div>
<div>
<br /></div>
<div>
<i>Logical Labyrinths</i>. This is a textbook for a serious logic course that uses puzzles to teach somewhat serious logic. It was published in 2009, when he was only 89 years old.</div>
<div>
<br /></div>
<div>
A Personal Note: I read the following, from his <i>The Lady or the Tiger?</i>, when I was in high school, and I still don't know the answer:</div>
<div>
<br /></div>
<div>
<i>My brother told me he would fool me on April Fools' Day. I lay awake that night wondering how he would fool me. All day I was worried about how he would fool me. At midnight I asked him, "Hey, you said you would fool me, but you didn't." He replied, "April Fools!" To this day I don't know if I was fooled or not.</i></div>
</div>
<div>
<i><br />
</i></div>
<div>
<i><br />
</i></div>
<div>
<div>
<b>Serious Math Work.</b> His serious work included the Double Recursion Theorem (you can write two programs that each know both their own index and the other's index) and other theorems in logic. (ADDED LATER: Lipton and Regan have a blog post with lots of great information about Smullyan's serious math work <a href="https://rjlipton.wordpress.com/">here</a>.)</div>
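To get the flavor of the Double Recursion Theorem in code: the quine-style sketch below (my own illustration, not Smullyan's construction) builds two programs, each of which can reconstruct both its own source and its partner's.

```python
# Quine-style template: a program built from it rebuilds its own source and
# its partner's, informally illustrating the Double Recursion Theorem.
TEMPLATE = ('TEMPLATE = {t!r}\n'
            'me, other = {me!r}, {other!r}\n'
            'source = TEMPLATE.format(t=TEMPLATE, me=me, other=other)\n'
            'other_source = TEMPLATE.format(t=TEMPLATE, me=other, other=me)\n')

def build(me, other):
    return TEMPLATE.format(t=TEMPLATE, me=me, other=other)

prog_A, prog_B = build("A", "B"), build("B", "A")

# Running program A defines `source` (A's own text) and `other_source` (B's).
ns = {}
exec(prog_A, ns)
print(ns["source"] == prog_A)        # True
print(ns["other_source"] == prog_B)  # True
```

The same template, instantiated with the roles swapped, yields program B, which reconstructs B's source and A's.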
<div>
<br /></div>
<div>
<b>Philosophy.</b> I'm not qualified to comment on this; however, it looks like he did incorporate his knowledge of logic.</div>
<div>
<br /></div>
<div>
Looking over his books and these points, the classification seems odd: the recreational books had some serious logic in them, and the academic books had some math that a layperson could understand.<br />
<br />
I think it's rarer now to do both recreational and serious mathematics, though I welcome intelligent debate on this point.<br />
<br />
Before he died, was Smullyan the oldest living mathematician? No: Richard Guy is 100 years old, and Wikipedia claims that Guy is still an active mathematician. Is Guy the oldest living mathematician? The oldest living active mathematician? It was hard to find out on the web, so I ask you.</div>
</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
<b>The Dichotomy Conjecture</b> (Thu, 09 Feb 2017, by Lance Fortnow)<br />
<br />
Arash Rafiey, Jeff Kinne and Tomás Feder settle the Feder-Vardi dichotomy conjecture in their paper <a href="https://arxiv.org/abs/1701.02409">Dichotomy for Digraph Homomorphism Problems</a>. Jeff Kinne is my academic grandchild--how quickly they grow up.<br />
<br />
Keep in mind the usual caveat that this work has not yet been fully refereed and vetted by the community; there is no reason to think it won't be, though note some skepticism in the <a href="http://blog.computationalcomplexity.org/2017/02/the-dichotomy-conjecture.html#comments">comments</a>.<br />
<br />
A homomorphism from a graph G = (V,E) to H=(V',E') is a function f:V→V' such that if (u,v) is in E then (f(u),f(v)) is in E'. For a fixed graph H, define L(H) as the set of graphs G such that there is a homomorphism from G to H.<br />
<br />
If H is just a single edge then L(H) is the set of bipartite graphs. If H is a triangle then L(H) is the set of 3-colorable graphs. If H has a self-loop then L(H) is the set of all graphs.<br />
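These examples are easy to verify by brute force on small graphs. A minimal sketch (the graph encoding is mine, for illustration):

```python
from itertools import product

def has_homomorphism(G, H):
    """Brute force: try every map f from V(G) to V(H) and check that every
    edge of G lands on an edge of H. Exponential, but fine for tiny graphs."""
    (GV, GE), (HV, HE) = G, H
    for image in product(HV, repeat=len(GV)):
        f = dict(zip(GV, image))
        if all((f[u], f[v]) in HE for (u, v) in GE):
            return True
    return False

def undirected(pairs):
    # Store each undirected edge in both directions.
    return {(u, v) for (a, b) in pairs for (u, v) in ((a, b), (b, a))}

K2 = ([0, 1], undirected([(0, 1)]))                     # a single edge
K3 = ([0, 1, 2], undirected([(0, 1), (1, 2), (0, 2)]))  # a triangle
C4 = ([0, 1, 2, 3], undirected([(0, 1), (1, 2), (2, 3), (3, 0)]))

print(has_homomorphism(C4, K2))  # True:  C4 is bipartite
print(has_homomorphism(K3, K2))  # False: a triangle is not bipartite
print(has_homomorphism(K3, K3))  # True:  K3 is 3-colorable
```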
<br />
L(H) is always in NP by guessing the homomorphism. In 1990 Pavol Hell and Jaroslav Nešetřil <a href="http://dx.doi.org/10.1016/0095-8956(90)90132-J">showed</a> the following dichotomy result: if H is bipartite or has a self-loop then L(H) is computable in polynomial time; otherwise L(H) is NP-complete. There is no undirected graph H for which L(H) is neither in P nor NP-complete.<br />
<br />
In 1998 Tomás Feder and Moshe Vardi <a href="http://dx.doi.org/10.1137/S0097539794266766">conjectured</a> that even for all directed graphs H, L(H) is either in P or NP-complete. Rafiey, Kinne and Feder settle the conjecture by showing a polynomial-time algorithm for a certain class of digraphs H.<br />
<br />
Details in the <a href="https://arxiv.org/abs/1701.02409">paper</a>. The authors recommend first watching their <a href="https://youtu.be/PBX51Qv5wtw">videos</a> <a href="https://youtu.be/vD68km24Rh0">below</a> though I would suggest reading at least the introduction of the paper before tackling the videos.<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen="" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/PBX51Qv5wtw/0.jpg" frameborder="0" height="266" src="https://www.youtube.com/embed/PBX51Qv5wtw?feature=player_embedded" width="320"></iframe></div>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen="" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/vD68km24Rh0/0.jpg" frameborder="0" height="266" src="https://www.youtube.com/embed/vD68km24Rh0?feature=player_embedded" width="320"></iframe></div>
<br />
<br />
<br />
<br />
<b>The Hardness of Reals Hierarchy</b> (Mon, 06 Feb 2017, by GASARCH)<br />
<br />
In my last post (<a href="http://blog.computationalcomplexity.org/2017/01/what-was-first-result-in-complexity.html">here</a>) I defined the following hierarchy (which I am sure is not original; if someone has a source please leave a comment):<br />
<br />
Z_d[x] is the set of polys of degree d over Z (the integers)<br />
<br />
ALG_d is the set of roots of these polys.<br />
<br />
ALG_1 = Q (The rationals)<br />
<br />
Given a real alpha I think of its complexity as being the least d such that alpha is in ALG_d. This is perhaps a hierarchy of hardness of reals (though there are an uncountable number of reals that are NOT in any ALG_d.)<br />
<br />
I then said something like<br />
<br />
Clearly<br />
<br />
ALG_1 PROPER SUBSET ALG_2 PROPER SUBSET ALG_3 etc.<br />
<br />
But is that obvious? <br />
<br />
``Clearly'' 2^{1/d} is not in ALG_{d-1}. And this is not a trick: YES, 2^{1/d} is NOT in ALG_{d-1}. But how would you prove that? My first thought was ``I'll just use Galois theory,'' and I am sure I could dust off my Galois theory notes (I had the course in 1978, so the notes are VERY dusty) and prove it. But is there an elementary proof? A proof a high school student could understand?<br />
<br />
How to find out? Ask a bright high school student to prove it! Actually, I asked a very good freshman math major, Erik Metz. I thought he would find a proof, I would post about it asking if it was known, and I would get comments telling me that OF COURSE it's known but not giving me a reference. (Have I become cynical from years of blogging? Yes.)<br />
<br />
But that's not what happened. Erik had a book, <i>Problems from the Book</i> by Dospinescu and Andreescu, that contains a lovely elementary proof (it uses linear algebra) that 2^{1/d} is NOT in ALG_{d-1}.<br />
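I won't reproduce the book's linear-algebra argument here, but for readers who have seen a bit of algebra, here is one standard route to the same fact (my sketch, not the book's proof):

```latex
% One standard argument (not the book's linear-algebra proof) that
% 2^{1/d} is not in ALG_{d-1}.
Let $\alpha = 2^{1/d}$. The polynomial $x^d - 2$ is irreducible over
$\mathbb{Q}$ by Eisenstein's criterion at the prime $2$, so it is (up to
scaling) the minimal polynomial of $\alpha$. If $\alpha$ were a root of
some nonzero integer polynomial $p$ of degree at most $d-1$, the minimal
polynomial of $\alpha$ would divide $p$, forcing $d \le d-1$, a
contradiction. Hence $\alpha \in \mathrm{ALG}_d \setminus \mathrm{ALG}_{d-1}$.
```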
<br />
Hence the hardness of reals hierarchy is proper. For a write-up of just the proof that 7^{1/3} is not in ALG_2 (which has most of the ideas) see <a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/hardreals.pdf">this writeup</a>.<br />
<br />
<b>We Are All Iranians</b> (Thu, 02 Feb 2017, by Lance Fortnow)<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/--xxGCHPAfmg/WJOEdzGG-BI/AAAAAAABZWY/Oui3E65mVQEBnO3JV_nBtagFpdOO50ObwCKgB/s1600/00001IMG_00001_BURST20170202114143.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="300" src="https://2.bp.blogspot.com/--xxGCHPAfmg/WJOEdzGG-BI/AAAAAAABZWY/Oui3E65mVQEBnO3JV_nBtagFpdOO50ObwCKgB/s400/00001IMG_00001_BURST20170202114143.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">A <a href="https://www.facebook.com/events/615927695273045/?active_tab=discussion">solidarity rally</a> held at Georgia Tech today</td></tr>
</tbody></table>
There are ten Iranian members of my department, the School of Computer Science at Georgia Tech, all of whom face a very uncertain future in America. Luckily none of them were outside the US when the executive order was signed last Friday.<br />
<br />
We have nine Iranian Ph.D. students. It was already difficult for them to leave the US and return; with the new executive order it is essentially impossible, even for family emergencies. One expressed disappointment: “Why did we even bother to come to the United States to study?”<br />
<br />
We also have a young Iranian professor, a very successful computer architect and my first hire as chair, whose green-card application, in its final stage, is now on hold. If things don’t change, he and his wife may be forced to leave the country they now call home. That would be a huge loss for Georgia Tech and the United States.<br />
<br />
This is not the America I believe in.<br />
<br />
<b>What was the first result in complexity theory?</b> (Mon, 30 Jan 2017, by GASARCH)<br />
<br />
Let Z_d[x] be the set of polynomials of degree d over the integers.<br />
<br />
Let ALG_d be the set of roots of polys in Z_d.<br />
<br />
One can easily show that<br />
<br />
ALG_1 is a proper subset of ALG_2, which is a proper subset of ALG_3, and so on,<br />
<br />
and that there are numbers not in any of the ALG_i (by a countability argument).<br />
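For completeness, the countability argument runs as follows (standard, in my phrasing):

```latex
% The standard countability argument, in outline.
Each $\mathbb{Z}_d[x]$ is countable: a polynomial of degree $d$ is
determined by its $d+1$ integer coefficients, and $\mathbb{Z}^{d+1}$ is
countable. A nonzero polynomial of degree $d$ has at most $d$ roots, so
$ALG_d$ is a countable union of finite sets, hence countable. Therefore
$\bigcup_{d \ge 1} ALG_d$ is countable, while $\mathbb{R}$ is uncountable;
so some (in fact almost all) reals lie in no $ALG_d$.
```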
<br />
I began my ugrad automata theory course with this theorem (also a good review of countability: I found out, NOT to my surprise, that they never really understood it as freshmen taking Discrete Math, even the good students).<br />
<br />
I presented this as the<i> first theorem in complexity.</i><br />
<i><br /></i>
But is it? I suspect that the question <i>What was the first result in complexity?</i><br />
<i><br /></i>
has no real answer, but there are thoughts:<br />
<br />
Complexity theory is about proving that you can't do BLAH with resource bound BLAHBLAH. We will distinguish it from computability theory by insisting that the things we want to compute are computable; however, if someone else wants to argue that (say) <i>HALT is undecidable</i> is the first result in complexity, I would not agree but I would not argue against it.<br />
<br />
Here are other candidates:<br />
<br />
1) sqrt(2) is irrational. This could be considered a result in descriptive complexity theory.<br />
<br />
2) The number of primes is infinite. If you view `finite' as a complexity class then this takes PRIMES out of that class.<br />
<br />
3) You cannot, with ruler and compass, trisect an angle, square a circle, or double a cube. This seems very close to the mark--- one can view ruler-and-compass as a well defined model of computation and these are lower bounds in that model.<br />
<br />
4) There is no formula in radicals for the roots of the general quintic equation. Also close to the mark, as this is a well-defined lower bound.<br />
<br />
5) In the early 60s, the definition of P (Cobham) and of DTIME, etc. (Hartmanis-Stearns). The result I would point to is the time hierarchy theorem. While these results are much later than those above, they are also far closer to our current notion of complexity.<br />
<br />
6) I'm not sure which paper to point to, but Knuth's observation that one can analyse algorithms without running them. This is more algorithms than complexity, but at this level the distinction may be minor.<br />
<br />
7) Cook-Levin Theorem. Probably not the first theorem in Complexity, but certainly a big turning point.<br />
<br />
I urge you to comment with other candidates!<br />
<br />
<br />
<br />
<b>60 years of Eric and Mike</b> (Fri, 27 Jan 2017, by Lance Fortnow)<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-_eVxHDeClSA/WIqmJdBx1BI/AAAAAAABZTM/rCeoAt6gSvsgiL08Q3gl8CrLPz8NEbFFQCLcB/s1600/mike.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="200" src="https://2.bp.blogspot.com/-_eVxHDeClSA/WIqmJdBx1BI/AAAAAAABZTM/rCeoAt6gSvsgiL08Q3gl8CrLPz8NEbFFQCLcB/s200/mike.jpg" width="163" /></a></div>
<a href="https://1.bp.blogspot.com/-lf0ZbUa9Wic/WIqmGCpw08I/AAAAAAABZTI/WelazE_HdYweCEpJvo953teOmcXcuczuQCLcB/s1600/eric.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="https://1.bp.blogspot.com/-lf0ZbUa9Wic/WIqmGCpw08I/AAAAAAABZTI/WelazE_HdYweCEpJvo953teOmcXcuczuQCLcB/s200/eric.jpg" width="136" /></a>As I checked in at the Holiday Inn in New Brunswick Wednesday night, they asked me if I had stayed there before. I said it has been a while and they looked in their computer: October 2007. I was last at DIMACS for the <a href="http://dimacs.rutgers.edu/Workshops/EconomicTheory/">Workshop on the Boundary between Economic Theory and Computer Science</a>, the closing workshop of the <a href="http://dimacs.rutgers.edu/SpecialYears/2004_CSEC/">Special Focus on Computation and the Socio-Economic Sciences</a> which Rakesh Vohra and I had organized. DIMACS, the Center for Discrete Math and Computer Science at Rutgers, started as an NSF center and I went often, even serving on the executive committee when I worked at NEC in the early 2000's. So an odd feeling to be back after ten years.<br />
<br />
But back for a good reason, the DIMACS <a href="http://dimacs.rutgers.edu/Workshops/EMC/">Workshop on E+M=C<sup>2</sup></a>, the joint 60th birthday celebration for Rutgers Professors Eric Allender and Michael Saks. A great turnout of Eric and Mike's former colleagues and students. I've known both Eric and Mike for many years but it is Eric I've had the closest connections to. Eric and I share many research interests, especially in Kolmogorov complexity. I took over from Eric as chair of the Computational Complexity conference and Eric took over from me as editor-in-chief of ACM Transactions on Computation Theory. Combined we've attended 61 of the 31 complexity conferences so far (I missed 2012 in Porto) and many Dagstuhl workshops and other meetings together on four continents. But we've oddly enough never co-authored.<br />
<br />
Congrats to Eric and Mike on careers worth savoring.<br />
<br />
<b>My once-every-four-years Presidential Quiz / how should quizzes work in the e-era?</b> (Mon, 23 Jan 2017, by GASARCH)<br />
<br />
Every four years I post a PRESIDENTIAL QUIZ, which I must update based on new information since we have a new prez and veep. The questions are <a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/prezquiz.pdf">here</a>.<br />
<br />
I will post a link to the answers next week. The answers will also contain more information of interest beyond the questions, so they are worth reading whether or not you got the questions right.<br />
<br />
The quiz has 43 questions (hmmm, that is long for a quiz).<br />
Start with 14 points, and the questions are 2 points each. No penalty for wrong answers.<br />
<br />
Why take a quiz when you can find most (maybe all) answers on the web?<br />
<br />
Well- there are two ways to view this:<br />
<br />
1) Take the quiz without the aid of the web and give yourself 3 hours. You are on your scouts honor. Doesn't matter much since you truly are just testing yourself.<br />
<br />
2) USE the web. This is NOT cheating. But TIME how long it takes you. Stop when you want but your score is (120 times the number you got right)/(number of minutes it took you) + 14. So if using the web you get them all right in an hour, you get a 100. There may be other ways to do this.<br />
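The web-assisted scoring rule from option (2), transcribed directly (the function name is mine):

```python
def quiz_score(correct, minutes):
    # Option (2): (120 * number right) / (minutes taken) + 14.
    return 120 * correct / minutes + 14

print(quiz_score(43, 60))  # all 43 right in an hour gives exactly 100.0
```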
<br />
<br />
REQUEST: Do not post answers to quiz questions, since that may deprive others of the joy.<br />
You CAN (and I would like this) post how well you did using either criteria (1) or (2).<br />
<br />
Is this the future of trivia contests? Rather than ban the use of the web, embrace it!<br />
<br />
bill g.<br />
<b>Infinite Series and Markov Chains</b> (Thu, 19 Jan 2017, by Lance Fortnow)<br />
<br />
There's a wonderful new series of math videos, <a href="https://www.youtube.com/channel/UCs4aHmggTfFrpkPcWSaBN9g/">PBS Infinite Series</a>, hosted by Cornell math PhD student Kelsey Houston-Edwards. Check out this latest <a href="https://youtu.be/63HHmjlh794">video</a> on Markov chains.<br />
<div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen="" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/63HHmjlh794/0.jpg" frameborder="0" height="266" src="https://www.youtube.com/embed/63HHmjlh794?feature=player_embedded" width="320"></iframe></div>
<div>
<br />
<div>
She gives an amazingly clear description of why, on a random walk on an undirected graph, the stationary distribution puts probability on each node proportional to its degree.</div>
</div>
</div>
<div>
<br /></div>
<div>
Houston-Edwards also relies without proof on the following fact: The expected time to start and return to a vertex v is 1/p where p is the probability given to v in the stationary distribution. Why should that be true?</div>
<div>
<br /></div>
<div>
Didn't seem so obvious to me, so I <a href="http://www.columbia.edu/~ks20/stochastic-I/stochastic-I-MCII.pdf">looked it up</a>. Here is an informal proof:</div>
<div>
<br /></div>
<div>
Let V1 be the random variable giving the time to travel from v back to v. Take that walk again and let V2 be the time to return to v the second time, and so on. Let An = V1+V2+...+Vn, the random variable representing the number of transitions taken to return to v n times. In a Markov chain the Vi are identically distributed, so E(An) = n E(V1). </div>
<div>
<br /></div>
<div>
As n gets large the Markov chain approaches the stationary distribution and will be in state v about a p fraction of the time. After E(An) transitions we should return to v about p E(An) times, whereas we actually return n times. So p E(An) approaches n, that is, p n E(V1) approaches n, and the only way this can happen is if E(V1)=1/p.</div>
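Both facts are easy to check by simulation. Here is a minimal sketch (the four-node graph is an arbitrary example, not from the video): run a long random walk on an undirected graph and compare the fraction of time spent at a vertex v, which should approach p = deg(v)/2m, with the observed mean return time, which should approach 1/p.

```python
import random
from collections import defaultdict

# A small undirected graph as an adjacency list (arbitrary example).
# Edges: 0-1, 0-2, 1-2, 1-3, so the degree sum is 2m = 8.
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}

def simulate(start, steps, seed=0):
    rng = random.Random(seed)
    visits = defaultdict(int)   # how often the walk is at each node
    returns = 0                 # how often the walk comes back to start
    v = start
    for _ in range(steps):
        v = rng.choice(graph[v])  # uniform step to a random neighbor
        visits[v] += 1
        if v == start:
            returns += 1
    return visits, returns

steps = 200_000
visits, returns = simulate(0, steps)

deg_sum = sum(len(nbrs) for nbrs in graph.values())  # 2 * number of edges = 8
p = len(graph[0]) / deg_sum                          # stationary prob of node 0 = 2/8

print(visits[0] / steps)  # fraction of time at node 0, close to p = 0.25
print(steps / returns)    # empirical mean return time, close to 1/p = 4
```

With 200,000 steps both estimates land within a percent or so of the predicted values.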
http://blog.computationalcomplexity.org/2017/01/infinite-series-and-markov-chains.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-1536965085783366798Mon, 16 Jan 2017 03:24:00 +00002017-01-15T22:25:55.708-05:00My REU program/REU's in general/Flyers? Why do some Transcripts...I run an REU program (Research Experience for Undergraduates) and I would normally urge you to urge undergrads who would benefit to apply to it and present both this link: <a href="http://www.cs.umd.edu/projects/reucaar/index.html">here</a> and this flyer: <a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/caarflyers2017.pdf">here</a>.<br />
<br />
I just did that. But I want to talk about REU programs, not just mine. A few points which can be construed as advice, though it's more questions and what I do.<br />
<br />
1) Do flyers still have any effect? If there is a bulletin board in a dept that is known as the place to look for announcements of summer opportunities, then YES. Otherwise, I'm not sure. When I asked this question of a professor I had emailed the flyer to, she said that at her school they stick flyers in the bathroom stalls, where they are sure to get a captive audience.<br />
<br />
2) Should you visit schools directly? I have done this; however, most of the applicants find out about it from the NSF REU website.<br />
<br />
3) Should students pick projects ahead of time or in the first week? When students apply they list a set of projects they want to work on (unranked but the Statement of Purpose can say which one(s) they REALLY want) so we can start on Day 1. Some of the mentors are in contact with students ahead of time. Other programs have them decide the first week. There are PROS and CONS to both.<br />
<br />
<br />
4) How to do admissions? I do them myself since there are few enough of them (about 180 for 10 slots, though see next item) and since there are so many criteria to balance, I'd rather not get into arguments with other committee members. I will sometimes (for example) say<br />
<br />
``John, here are two people who want to work on your project. What do you think of them?''<br />
<br />
5) Of the 180 applicants about 50 do not have a statement of purpose. For me this is a deal-breaker. Either they were in the middle of applying and got another offer and took it, which is fine but no reason for me to even look at the application, OR they have a hard time completing things, and again, not worth my time. A prof once asked me ``But what if they are really good?''--- there are plenty of really good applicants who do fill out the statement of purpose.<br />
<br />
6) The main activity is research but we have some social activities as well.<br />
<br />
7) How big should teams be? We try to avoid teams of 1. We usually have teams of 2 or 3.<br />
<br />
8) What about students from your own school? My REU grant tends to urge them to go elsewhere since the NSF likes to have people from NON-research schools, and because I personally think it's better for broadening to go elsewhere. Other programs are set up differently.<br />
<br />
9) Why do some transcripts not have the name of the school on them? It drives me nuts.<br />
<br />
In the Summer of 2017 I will be running this program for the 5th time. Feel free to leave questions about how to run one, OR your own experiences, in the comments.<br />
<br />
<br />
http://blog.computationalcomplexity.org/2017/01/my-reu-programreus-in-generalflyers-why.htmlnoreply@blogger.com (GASARCH)2tag:blogger.com,1999:blog-3722233.post-2677209759857653054Thu, 12 Jan 2017 17:33:00 +00002017-01-12T12:33:19.617-05:00Guest Post about the first Women in Computational Topology (WinCompTop) WorkshopThe first Women in Computational Topology <a href="https://www.ima.umn.edu/2015-2016/SW8.15-19.16">WinCompTop</a> workshop was held in August at the Institute for Mathematics and its Applications (IMA) in Minneapolis, MN. In total, 27 women participated, ranging from undergraduates to full professors; in addition, five future topologists (children of the participants) attended the various social events scattered throughout the week.<br />
<br />
The central goal of this workshop was to establish research collaborations among both junior and senior women in the field, as well as to provide an opportunity for mentoring at all levels. There were four working groups, each led by a more senior member of the community:<br />
<br />
<a href="https://dentistry.sitecore.ualberta.ca/AboutUs/FacultyStaff/FacultyStaffCollection/GiseonHeo.aspx">Giseon Heo</a> (University of Alberta) outlined a project that extends one dimensional scale-space persistent homology (a fundamental tool in computational topology) to a pseudo-multidimensional persistence tool that can be applied to a variety of applications.<br />
<br />
<a href="http://web.cs.ucdavis.edu/~amenta/">Nina Amenta</a> (University of California, Davis) posed a problem of producing an explicit representation of a surface S from an input point cloud P drawn from a distribution over S (possibly with some noise).<br />
<br />
<a href="http://web.cse.ohio-state.edu/~yusu/">Yusu Wang</a> (The Ohio State University) discussed a new method of persistence-based profiles to compare metric graphs and outlined that further exploration of what information is captured by persistence-based profiles and understanding their discriminative power would be the focus of their working group.<br />
<br />
<a href="http://www.cs.tulane.edu/~carola/">Carola Wenk</a> (Tulane University) and <a href="https://www.cs.montana.edu/brittany/">Brittany Terese Fasy</a> investigated the use of topology in map construction and comparison, particularly understanding directed graphs with multiple lanes and overpasses.<br />
<br />
The workshop began with each of the senior researchers presenting an overview of their working group’s topic. After the overview of each project, working groups began to explore their topic; over the course of the week, substantial progress was made in each group. Each working group will contribute an article to a special AWM/IMA Springer journal, co-edited by the organizers of WinCompTop 2016 (Erin Chambers, Brittany Terese Fasy, and Lori Ziegelmeier). In addition, many of the participants who attended WinCompTop will meet once again at a special session of the AWM Research Symposium in April (see <a href="https://sites.google.com/site/awmmath/home/RS17">here</a>).<br />
<br />
The workshop also had several outings and other social events, including a poster session where over a dozen posters were presented, a panel on work-life balance, an open problem session, and several receptions or banquets. These events let participants come together as a group, establish future collaborations, and connect with one another. In addition to formally scheduled outings, informal activities such as a marshmallow roast one evening, group dinners, and many other gatherings happened throughout the week.<br />
<br />
<i>What we (the organizers) have learned, and some questions for the community:</i><br />
How many women in Field X does it take to justify creating a “Women in X” network? (Or, more generally, an &lt;insert subpopulation&gt; in X network?) This question was brought to our attention by Bill Gasarch (thanks for letting us post on your blog, BTW). We started this community as a listserv over two years ago (if you'd like to join, visit <a href="https://groups.google.com/forum/#!forum/wincomptop/join">here</a>). Today, we have over 100 subscribers, several of whom are not women. Regularly, opportunities are posted through this listserv, and lively discussions sometimes ensue (for example, we recently had a lengthy thread listing all of the researchers under whom one might be able to pursue a Ph.D. in computational topology). This network was started by just a handful of us who decided that there needed to be a more formal way for junior researchers to seek advice and for organizers of events to find diverse speakers. So, perhaps the answer to Bill’s question is: just a handful of people, and you’ll be surprised how quickly things grow.<br />
<br />
Acknowledgements<br />
<br />
Finally, we want to end this post with a note of gratitude. We thank NSF,<br />
which funded the travel and local expenses for most of the participants (NSF DMS grant #1619908). We thank Microsoft Research for providing a generous donation, which funded the social events and travel for one of our international group leaders. Thanks also to AWM, which provided logistical support and advice for the event, and financial support for the upcoming follow-up events. Most enthusiastically, we thank the IMA for all of their support of both time and resources. The IMA donated the meeting spaces, breakfasts, as well as poster-printing, all in-kind. Last but not least, we thank every participant of WinCompTop 2016. We’ll see you again in 2018!<br />
<br />
If you have any questions or comments about our experience organizing WinCompTop, we encourage you to contact us:<br />
<br />
Erin Chambers erin.chambers@gmail.com<br />
<br />
Brittany Terese Fasy brittany@fasy.us<br />
<br />
Lori Ziegelmeier lziegel1@macalester.edu<br />
<div>
<br /></div>
<br />
<br />
<br />
<div>
<br /></div>
http://blog.computationalcomplexity.org/2017/01/guest-post-about-first-women-in.htmlnoreply@blogger.com (GASARCH)0tag:blogger.com,1999:blog-3722233.post-2740454628817287885Tue, 10 Jan 2017 19:27:00 +00002017-01-10T14:37:54.211-05:00Babai Strikes Back<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-pWBQD_8C61I/WHTYVOsZeWI/AAAAAAABZNw/leOuDhK9pas3R9AprOwq_27BppfapGNpgCPcB/s1600/IMG_20170109_172658.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="240" src="https://2.bp.blogspot.com/-pWBQD_8C61I/WHTYVOsZeWI/AAAAAAABZNw/leOuDhK9pas3R9AprOwq_27BppfapGNpgCPcB/s320/IMG_20170109_172658.jpg" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">"So it is quasipolynomial time again"</td></tr>
</tbody></table>
Short Version: Babai <a href="http://people.cs.uchicago.edu/~laci/update.html">fixed his proof</a>.<br />
<br />
In November 2015 László Babai announced a talk "Graph Isomorphism in Quasipolynomial Time" at the University of Chicago, an incredible breakthrough for this important problem. The <a href="https://arxiv.org/abs/1512.03547">algorithm</a> is a tour-de-force masterpiece combining combinatorics and group theory. The result sent shock waves through the community; we <a href="http://blog.computationalcomplexity.org/2015/11/a-primer-on-graph-isomorphism.html">covered</a> <a href="http://blog.computationalcomplexity.org/2015/11/looking-forward-to-gi-result.html">the result</a>, as did most everyone else, despite Babai's disclaimer that the result was not yet peer reviewed.<br />
<br />
Robin Thomas at Georgia Tech immediately suggested we get Babai down to Atlanta to talk on the paper. Babai's dance card filled up quickly but he eventually agreed to give two talks at Georgia Tech on January 9-10, 2017, right after the Joint Mathematics Meeting in Atlanta. Robin scheduled the <a href="http://aco25.gatech.edu/">25th anniversary celebration</a> of our Algorithms, Combinatorics and Optimization PhD program around Babai's talks.<br />
<br />
Around 4 AM last Wednesday we got a lengthy email from Babai with the subject "new development" and the first line "The GI algorithm does not run in quasipolynomial time." The email explained the new running time, still a huge breakthrough, being the first subexponential-time algorithm for graph isomorphism, but not as strong as before. Harald Helfgott had <a href="https://valuevar.wordpress.com/2017/01/04/graph-isomorphism-in-subexponential-time/">discovered the problem</a> while carefully going through the proof in preparation for a Bourbaki seminar talk on the result. The email asked whether we still wanted him to give the talk (of course we did) and gave instructions for removing "quasipolynomial" from his <a href="http://aco25.gatech.edu/abstracts#babai1">talk abstracts</a>. That email must have been very hard for Babai to write.<br />
<br />
Babai posted an <a href="http://people.cs.uchicago.edu/~laci/update.html">update</a> on his home page which I too quickly <a href="https://twitter.com/fortnow/status/816628539608932356">tweeted</a>. News spread quickly in <a href="https://rjlipton.wordpress.com/2017/01/04/babais-result-still-a-breakthrough/">blog</a> <a href="https://windowsontheory.org/2017/01/05/on-expexpsqrtlog-n-algorithms/">posts</a> and by Thursday an online article in Quanta titled <a href="https://www.quantamagazine.org/20170105-graph-isomorphism-retraction/">Complexity Theory Strikes Back.</a> Scott Aaronson picked a lousy day to <a href="http://www.scottaaronson.com/blog/?p=3095">release</a> his new book-length survey on the P v NP problem.<br />
<br />
On top of all this snow in Atlanta threatened to cancel the Monday talk. But the snow never came and Babai started his talk at 4:30 PM to a packed room. Without telling any of us beforehand, an hour earlier he had posted an <a href="http://people.cs.uchicago.edu/~laci/update.html">update</a> to his update and announced it in his talk.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-GoBOzMmjBZ8/WHTYVAdPtsI/AAAAAAABZNw/uBVV61KR5FwGbG0STGD_yfVg9C8pI7CcwCPcB/s1600/IMG_20170109_164434.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="201" src="https://1.bp.blogspot.com/-GoBOzMmjBZ8/WHTYVAdPtsI/AAAAAAABZNw/uBVV61KR5FwGbG0STGD_yfVg9C8pI7CcwCPcB/s320/IMG_20170109_164434.jpg" width="320" /></a></div>
<br />
We were watching history. From the talk I <a href="https://twitter.com/fortnow/status/818575040048463872">tweeted</a> the new news though Bill Cook, also in the audience, <a href="https://twitter.com/wjcook/status/818574848016478208">beat me to the punch</a>. Babai went on to describe the issue, an error in the analysis of the running time in the recursion, and the fix, basically a way to avoid that recursive step, but I can't do it justice here. At the end he proclaimed "So it is quasipolynomial time again". And so it was.<br />
<br />
Today Babai talked about the group theory behind the algorithm as though there was never an issue in the first place.http://blog.computationalcomplexity.org/2017/01/babai-strikes-back.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-5146589104526336803Thu, 05 Jan 2017 15:09:00 +00002017-01-05T10:09:14.000-05:00Learning About Learning<div>
Yesterday Scott Aaronson released his sweeping new <a href="http://www.scottaaronson.com/papers/pnp.pdf">P v NP survey</a>. Babai gave an <a href="http://people.cs.uchicago.edu/~laci/update.html">update on graph isomorphism</a>: in short, while he still has the first subexponential-time algorithm for GI, he no longer claims quasipolynomial time. We'll have more on graph isomorphism next week.<br />
<br />
Having seen the power of neural networks for learning, how do they actually work? Over the break I decided to catch myself up. I worked through Michael Nielsen's online book <a href="http://neuralnetworksanddeeplearning.com/">Neural Networks and Deep Learning</a> chosen mainly because he co-authored one of the <a href="http://amzn.to/2iAikk5">great books on quantum computing</a> followed by the beginning of the <a href="https://www.tensorflow.org/tutorials/">TensorFlow tutorial</a>. To try out code I downloaded <a href="https://www.python.org/downloads/">Python</a> and <a href="https://www.tensorflow.org/get_started/">TensorFlow</a> and used <a href="https://www.jetbrains.com/pycharm/">PyCharm</a> as my IDE (free for academics). I should really learn <a href="https://github.com/">GitHub</a> since that holds all the code samples. Where were all these cool tools when I was a kid?</div>
<div>
<br /></div>
<div>
So what did I learn? Here is my high-level understanding, but read the above materials to get a fuller picture. A neural network is just a weighted threshold circuit, though instead of sharp threshold functions you use continuous and differentiable functions like the <a href="https://en.wikipedia.org/wiki/Sigmoid_function">sigmoid</a> or a <a href="https://en.wikipedia.org/wiki/Rectifier_(neural_networks)">rectifier</a>. If you fix the weights and biases (the negation of the threshold value), the network computes a function from input to output, for example from images of digits to the digits themselves. Typically you have a sample of labelled data and you can create an error function to see how well your network computes the solution. Fix the labelled data and now you have a new function from the weights and biases to the error.</div>
<div>
<br /></div>
<div>
You want to choose weights and biases to minimize the error. The general approach is gradient descent: improve a given solution by moving a small distance in the direction opposite the gradient, and repeat. I had naively thought you estimated the gradient numerically, but in fact the gradient is computed symbolically using a dynamic programming-like algorithm called backpropagation, based on the chain rule for partial derivatives.</div>
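The chain-rule computation behind backpropagation can be seen in miniature on a single sigmoid neuron. The sketch below is not Nielsen's code: it trains one neuron on the OR function using cross-entropy loss, for which the chain rule collapses the error-to-input derivative to dE/dz = a - y, and then updates the weights by gradient descent. The data, learning rate, and iteration count are all illustrative choices.

```python
import math

# Toy training set: the OR function on two bits.
data = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
        ((1.0, 0.0), 1.0), ((1.0, 1.0), 1.0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w = [0.1, -0.1]  # weights (arbitrary small starting values)
b = 0.0          # bias (negation of the threshold)
lr = 1.0         # learning rate

for _ in range(2000):
    gw, gb = [0.0, 0.0], 0.0
    for (x1, x2), y in data:
        a = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # Chain rule with cross-entropy loss: dE/dz = a - y.
        delta = a - y
        gw[0] += delta * x1
        gw[1] += delta * x2
        gb += delta
    # Gradient descent: step opposite the gradient.
    w[0] -= lr * gw[0]
    w[1] -= lr * gw[1]
    b -= lr * gb

preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
print(preds)  # the trained neuron recovers OR: [0, 1, 1, 1]
```

A real network repeats this delta computation layer by layer from output back to input, which is where the dynamic-programming flavor comes from.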
<div>
<br /></div>
<div>
Convolutional nets have a special first layer that captures features of pieces of the image. Recurrent neural networks allow feedback loops.</div>
<div>
<br /></div>
<div>
Nielsen shows how to build and train a neural net from scratch. In TensorFlow you just design the network, and it automatically computes the gradient and runs a highly optimized algorithm, on GPUs or specialized hardware, to minimize the error.</div>
<div>
<br /></div>
<div>
What caught my eye is how much of an art machine learning still is. How many nodes should you have in your network? How many layers? Too many may take too long to train and could cause overfitting. Too few and you don't have enough parameters to create the function you need.</div>
<div>
<br /></div>
<div>
What threshold function do you use and how do you aggregate results? There are lots of tricks to avoid overfitting and to speed up optimization. There's no fixed procedure for choosing the parameters, though you can test how well you do. So learning involves lots of trial and error, and experience (which I don't have) certainly helps.</div>
<div>
<br /></div>
<div>
So we have amazingly powerful learning tools but for now we still need humans to learn. For now.</div>
http://blog.computationalcomplexity.org/2017/01/learning-about-learning.htmlnoreply@blogger.com (Lance Fortnow)9tag:blogger.com,1999:blog-3722233.post-1335949062942238647Mon, 02 Jan 2017 19:32:00 +00002017-01-02T14:32:14.467-05:00Predictions for 2017Lance's Complexity Year in Review, posted the last week of the year (lets hope that P vs NP is not resolved on Dec 31) is a tradition that goes back to 2002. Bill posting predictions for the coming year is a tradition that goes back to 2016. Here is the last one: <a href="http://blog.computationalcomplexity.org/2016/01/predictions-of-new-year.html">here.</a> My predictions were not very good- all of those that came true were obvious (P vs NP will not be resolved, CS enrollment will go up). My biggest goof- that Hillary Clinton would beat .. Ted Cruz, by accusing him of being born in a foreign country. <br />
<br />
However, being wrong never stopped me before, so here are my predictions for 2017<br />
<br />
1) Some people think Trump will be more presidential once he takes office. Some point to checks and balances in the system, but Congress is controlled by his party. Some point to the media as a watchdog, e.g., <a href="http://www.thedailybeast.com/articles/2016/12/31/our-murrow-moment.html">here</a>. A more accurate picture is due to John Oliver <a href="https://www.youtube.com/watch?v=bq2_wSsDwkQ">here</a>. Some say Trump is a closet liberal. But I doubt his true beliefs, if he has any, matter. It's going to be bad. I second Scott Aaronson's call to not normalize this: <a href="http://www.scottaaronson.com/blog/?p=2969">here</a>.<br />
<br />
2) Not a prediction but a thought: Prior arguments against Trump were countered with ``Well, Hillary is just as bad.'' They can't use that anymore. The following absurd conversation might happen:<br />
<br />
NEWS: Today some Democrats and John McCain questioned Donald Trump's nominee for Secretary of State, Rex Tillerson, about his past dealings with Russia and Putin.<br />
<br />
FOX NEWS: But Hillary was the worst Secretary of State ever! Benghazi!<br />
<br />
<br />
3) I will be asked to review at least one paper claiming P=NP (or P NE NP, or it won't be clear what they are saying) and at least one paper claiming to determine a Ramsey or VDW number. They will be garbage. Cranks are getting into more sophisticated areas, so others may be asked to look at ``solutions'' to open problems in harder areas of math. The Navier-Stokes equations (a Millennium Problem, see <a href="http://www.claymath.org/millennium-problems/navier%E2%80%93stokes-equation">here</a>) might be a good area for cranks since they might get out some numbers and think they've solved it. I'm glad I'm not in that area.<br />
<br />
4) Our Popular Posts links (which are determined by a program, not by us) will continue to include some of our most recent posts AND my very old post on Russell and Whitehead using 300 pages to prove 1+1=2. Why is that post seen as being so popular? I doubt it IS that popular. So, what's going on?<br />
<br />
5) Recall that Josh Alman and Ryan Williams showed that one method for lower bounds probably won't work: see <a href="http://blog.computationalcomplexity.org/2016/11/a-few-interesting-computer-science.html">here</a>. There will be more results that rule out techniques.<br />
<br />
6) n-person envy-free cake cutting can now be done with TOW(n) cuts. There will be better upper bounds, or some lower bounds, proven for this problem this year.<br />
<br />
7) There will be a big breakthrough on derandomization- possibly L=RL.<br />
<br />
8) There will be a big data breach.<br />
<br />
9) Some minor celebrity will die the last week of the year and hence make neither the `who died in 2017' lists nor the `who died in 2018' lists. In 2016 this happened to <a href="https://en.wikipedia.org/wiki/William_Christopher">William Christopher</a>. Why do people make the end-of-the-year lists before the year ends?<br />
<br />
10) Fake News will become worse and worse. After Pizzagate there was NO apology or regret from the people who spread the false news.<br />
<br />
11) Fake Journals, Schools, and accreditation agencies will continue to grow.http://blog.computationalcomplexity.org/2017/01/predictions-for-2017.htmlnoreply@blogger.com (GASARCH)5tag:blogger.com,1999:blog-3722233.post-5526704316780913018Thu, 29 Dec 2016 14:05:00 +00002016-12-29T09:05:11.342-05:00Complexity Year in Review 2016Paper of the year goes to <a href="http://ieee-focs.org/FOCS-2016-Papers/3933a416.pdf">A Discrete and Bounded Envy-Free Cake Cutting Protocol for Any Number of Agents</a> by Haris Aziz and Simon Mackenzie. Might not seem like a traditional complexity result but cake cutting is a computational process with a desired set of properties and this papers settles a long standing open question. Bill <a href="http://blog.computationalcomplexity.org/2016/06/there-is-now-bounded-discrete-envy-free_0.html">posted about the paper</a> back in June.<br />
<br />
Some other nice papers from 2016: <a href="https://arxiv.org/abs/1611.05558">Hadamard not so rigid after all</a> and <a href="http://blog.computationalcomplexity.org/2016/11/a-few-interesting-computer-science.html">not usable</a> for Valiant's lower-bound program, <a href="http://eccc.hpi-web.de/report/2015/119/">2-source extractors from low-entropy sources</a> which get <a href="http://cacm.acm.org/magazines/2017/1/211100-pure-randomness-extracted-from-two-poor-sources/fulltext">near perfect randomness from two weak sources</a>, <a href="https://arxiv.org/abs/1506.04719">query complexity</a> <a href="https://arxiv.org/abs/1511.01937">separations</a>, deterministic, randomized and quantum, in two wonderful constructions <a href="http://blog.computationalcomplexity.org/2016/06/stoc-2016.html">beautifully presented at STOC</a>, <a href="https://arxiv.org/abs/1612.02788">space-efficient subset-sum</a>, and <a href="https://www.cs.sfu.ca/~kabanets/Research/natural-learning.html">learning algorithms from natural proofs</a>.<br />
<br />
2016 will go down as a watershed year for machine learning with <a href="http://blog.computationalcomplexity.org/2016/02/go-google-go.html">AlphaGo</a>, <a href="http://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html">huge advances in translation</a>, <a href="https://www.tesla.com/autopilot">self-driving cars</a> and with tools like TensorFlow out in the open we'll truly see learning everywhere. Prediction didn't have such a great year with bad calls on Trump's nomination, Brexit and Trump's election. Machines learning humans still has a way to go.<br />
<br />
We thank our guest posters <a href="http://blog.computationalcomplexity.org/2016/09/the-letter.html">Molly Fortnow</a>, <a href="http://blog.computationalcomplexity.org/2016/03/mohammadtaghi-hajiaghayi-on-david.html">MohammadTaghi HajiAghayi</a>, <a href="http://blog.computationalcomplexity.org/2016/12/guest-post-by-samir-khuller-about.html">Samir</a> <a href="http://blog.computationalcomplexity.org/2016/12/guest-post-by-samir-khuller-on-humans.html">Khuller</a> and <a href="http://blog.computationalcomplexity.org/2016/06/the-relevance-of-tcs.html">Avi Wigderson</a>.<br />
<br />
We remember <a href="http://www.nytimes.com/2016/11/30/technology/erich-bloch-who-helped-develop-ibm-mainframe-dies-at-91.html">Erich Bloch</a>, <a href="http://www.tfs.tu-berlin.de/menue/home/team/ehrig_hartmut_prof/">Hartmut Ehrig</a>, <a href="http://www.nytimes.com/2016/12/27/movies/carrie-fisher-dead-star-wars-princess-leia.html">Carrie Fisher</a>, <a href="http://blog.computationalcomplexity.org/2016/01/rusins-freivalds-1942-2016.html">Rūsiņš Freivalds</a>, <a href="http://www.nytimes.com/2016/12/08/us/john-glenn-dies.html">John Glenn</a>, <a href="http://blog.computationalcomplexity.org/2016/03/david-johnson-1945-2016.html">David Johnson</a>, <a href="http://www.nytimes.com/2016/01/26/business/marvin-minsky-pioneer-in-artificial-intelligence-dies-at-88.html">Marvin Minsky</a>, <a href="http://cacm.acm.org/news/196208-in-memoriam-peter-naur-1928-2016/fulltext">Peter Naur</a>, <a href="http://blog.computationalcomplexity.org/2016/08/seymour-papert-1928-2016.html">Seymour Papert</a>, <a href="http://blog.computationalcomplexity.org/2016/03/hillary-putnam-passed-away-on-march-13.html">Hilary Putnam</a>, <a href="http://blog.computationalcomplexity.org/2016/03/the-value-of-shapley.html">Lloyd Shapley</a> and <a href="http://blog.computationalcomplexity.org/2016/09/boris-trakhtenbrot-1921-2016.html">Boris Trakhtenbrot</a>.<br />
<br />
As the surprises of 2016 lead us into an uncertain 2017 let me leave with my usual advice: When in a complex world, best to keep it simple.http://blog.computationalcomplexity.org/2016/12/complexity-year-in-review-2016.htmlnoreply@blogger.com (Lance Fortnow)0tag:blogger.com,1999:blog-3722233.post-1700676884503876487Mon, 26 Dec 2016 15:08:00 +00002016-12-26T10:08:35.994-05:00Hidden Figures<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-aY8h3AJ4Rb8/WGEpI75nJeI/AAAAAAABZCI/kkrJanZ-qrop6CBP59VbVe3saWuclUC_gCEw/s1600/Hidden%2BFigures.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="https://1.bp.blogspot.com/-aY8h3AJ4Rb8/WGEpI75nJeI/AAAAAAABZCI/kkrJanZ-qrop6CBP59VbVe3saWuclUC_gCEw/s200/Hidden%2BFigures.jpg" width="134" /></a></div>
Yesterday in our tradition of seeing math movies on Christmas, we saw <a href="http://www.imdb.com/title/tt4846340">Hidden Figures</a>, the story of African-American women who worked as "computers" at NASA in 1961 Virginia in the last vestiges of "separate but equal". The movie focuses on three women, Katherine Johnson, Dorothy Vaughan and Mary Jackson, as they dealt with and crossed racial and gender boundaries. But this isn't just a movie you should see; it's a movie you will enjoy watching, and I highly recommend it when it goes into wide release in the US on January 6. Some minor spoilers ahead.<br />
<br />
<div>
Beyond the struggle, Hidden Figures does work as a math movie. The major storyline follows the group of mathematicians who compute the trajectories of the Mercury flights with Katherine Johnson playing a main role in figuring out how to pull John Glenn out of an elliptical orbit into a parabolic safe entry back to Earth.<br />
<br />
The movie also serves up lessons about computers. The new IBM mainframe arrives and threatens the need for human computers, women (here segregated into two groups) who did the tedious manual calculations that NASA relied on. In the movie Dorothy Vaughan recognizes the threat and retrains her team to learn Fortran, perhaps a parable for how technology changes jobs today.<br />
<br />
The majority of the audience for the sold-out theater were African-American women and they laughed and cheered as the heroines of the movies succeeded. While we have made much progress in the last 50 years we still have far to go.</div>
http://blog.computationalcomplexity.org/2016/12/hidden-figures.htmlnoreply@blogger.com (Lance Fortnow)7tag:blogger.com,1999:blog-3722233.post-5946100179256928046Thu, 22 Dec 2016 15:18:00 +00002016-12-22T10:18:48.454-05:00Moneyball for AcademicsMIT and Tel Aviv business school professors Dimitris Bertsimas, Erik Brynjolfsson, Shachar Reichman and John Silberholz wrote an intriguing paper <a href="http://dx.doi.org/10.1287/opre.2015.1447" target="_blank">Tenure Analytics: Models for Predicting Research Impact</a> about using metrics in tenure decisions, <a href="https://en.wikipedia.org/wiki/Moneyball" target="_blank">Moneyball</a> for academics. The paper is behind a firewall but here is a <a href="https://papers.ssrn.com/sol3/Papers.cfm?abstract_id=2374581">preprint</a> (with the Moneyball title), an <a href="http://sloanreview.mit.edu/article/moneyball-for-professors/">essay</a> by some of the authors, and an <a href="https://www.insidehighered.com/news/2016/12/20/mit-professors-push-data-based-model-they-say-more-predictive-academics-future">article</a> from Inside Higher Ed.<br />
<br />
<a href="https://2.bp.blogspot.com/-P2A8p054hKE/WFvkGoKSZvI/AAAAAAABY90/tSnPE_GNM50HeOp8BDKJmRebDduTTHuIgCLcB/s1600/hippo.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="193" src="https://2.bp.blogspot.com/-P2A8p054hKE/WFvkGoKSZvI/AAAAAAABY90/tSnPE_GNM50HeOp8BDKJmRebDduTTHuIgCLcB/s200/hippo.jpg" width="200" /></a>Brynjolfsson works extensively on metric-based decision making and gave <a href="http://blog.computationalcomplexity.org/2013/04/technology-and-jobs.html">early warnings</a> on the loss of jobs to ML. I still have the HiPPO (Highest Paid Person's Opinion) from his EC 2010 invited talk. I like it for the complexity class on its back.<br />
<br />
In this article, Brynjolfsson and his co-authors use properties of the co-author and citation network graphs to predict future citation-based metrics, like the h-index, for professors at top-ranked OR departments. The paper claims that their metrics do a better prediction job than tenure committees.<br />
<br />
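For concreteness, the h-index the model tries to predict is the largest h such that the author has at least h papers with at least h citations each. A minimal sketch (my code, not the paper's):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    h = 0
    # Walk the citation counts from largest to smallest; the rank i is the
    # number of papers considered so far, so c >= i means h can be at least i.
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# e.g. h_index([10, 8, 5, 4, 3]) == 4: four papers have at least 4 citations each
```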
The IHE article captures an alternative view:<br />
<blockquote class="tr_bq">
Many professors oppose the use of bibliometrics in hiring, tenure and promotion decisions, saying that scholarly potential can’t be captured in a formula most often applied to pursuits with a bottom line, like winning games or playing the stock market. Such a system inevitably will be “gamed" by academics, critics say, and time-consuming but ground-breaking research will be sidelined in favor of “sure things” in terms of publishing -- the academic equivalent of clickbait.</blockquote>
I have a different issue--the use of bibliometrics to judge the current value of a professor. In baseball, the original Moneyball, there are measurable goals: runs, wins, championships. So you can use data analytics to measure both the current value and future potential of a player. In academics, what makes a great professor? I hope the sole answer isn't the h-index. And without a subjective method to measure a successful professor, how do you train a model to predict it?<br />
<br />
I'm not against using metrics to help with tenure and hiring decisions but I still put most of the weight on the letters.<br />
<br />
The article does talk about predicting INFORMS fellows from the network centrality model, predicting a subjective decision from objective statistics, though it doesn't compare that to tenure decisions. I wonder how well one can predict ACM Fellows as well. Another challenge here: as CS is a constantly changing field, can one use an algorithm that predicts today's fellows from 20-year-old data to predict future fellows from today's data?http://blog.computationalcomplexity.org/2016/12/moneyball-for-academics.htmlnoreply@blogger.com (Lance Fortnow)2tag:blogger.com,1999:blog-3722233.post-4438251809304202082Mon, 19 Dec 2016 15:54:00 +00002016-12-19T10:56:08.223-05:00The very first Ramseyian Theorem<div>
<br /></div>
<div>
Many years ago I noticed that several books on Ramsey Theory mention that Hilbert proved the first Ramseyian theorem. The theorem is the Hilbert Cube Lemma (HCL), which in modern language is:<br />
<br />
DEFINITION: Let x, y1, y2,...,yn be natural numbers. Then the n-cube on x, y1, y2, ..., yn is<br />
<br />
{ x + e1*y1 + ... + en*yn : e1,...,en \in {0,1} }<br />
<br />
HCL: for all n and c there exists H=H(n,c) such that for all c-colorings of {1,...,H} there exists a monochromatic n-cube.<br />
<br />
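For tiny parameters HCL (and the least such H) can be verified by brute force. A quick sketch, with function names of my own choosing:

```python
from itertools import product

def has_mono_cube(coloring, n):
    """coloring: dict mapping {1,...,H} to colors.
    Return True if some n-cube {x + e1*y1 + ... + en*yn : ei in {0,1}},
    with all yi >= 1, lies inside {1,...,H} and is monochromatic."""
    H = len(coloring)
    for x in range(1, H + 1):
        for ys in product(range(1, H + 1), repeat=n):
            cube = [x + sum(e * y for e, y in zip(es, ys))
                    for es in product((0, 1), repeat=n)]
            if max(cube) > H:
                continue
            if len({coloring[v] for v in cube}) == 1:
                return True
    return False

def hilbert_number(n, c):
    """Least H such that EVERY c-coloring of {1,...,H} contains a
    monochromatic n-cube (checks all c^H colorings -- tiny n, c only)."""
    H = 1
    while True:
        if all(has_mono_cube(dict(enumerate(col, start=1)), n)
               for col in product(range(c), repeat=H)):
            return H
        H += 1
```

For n=1 a cube is just a pair {x, x+y}, so any two same-colored numbers work, and pigeonhole gives H(1,2)=3.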
Here are some quotes about this theorem from texts on Ramsey Theory:<br />
<br />
Graham-Rothschild-Spencer's book <i>Ramsey Theory</i>: In the middle of Szemeredi's proof of a cube lemma with double exp bounds on H (Hilbert's proof gives tower-type bounds) they write:<br />
<br />
<i>Historical Note: In 1892 D. Hilbert proved that, for any k\ge 1, if N (the naturals) is</i><br />
<i>finitely colored then there exists in one color infinitely many translates of a k-cube.</i><br />
<br />
THATS IT! No mention of WHY Hilbert proved this. According to the index this is the only mention of Hilbert in the book.<br />
<br />
From Landman and Robertson <i>Ramsey Theory over the Integers</i>:<br />
<br />
<i>The results that are generally accepted to be the earliest Ramsey-type theorems are due,</i><br />
<i>in chronological order, to Hilbert, Schur, and van der Waerden.</i><br />
<br />
Later, in the exercises, they ask the reader to prove HCL from VDW's theorem.<br />
<br />
THATS IT! No mention of WHY Hilbert proved this. According to the index this is the only<br />
mention of Hilbert in the book.<br />
<br />
Alexander Soifer's <i>The Mathematical Coloring Book</i>. This is a very scholarly work about the history of coloring theorems. (For my review see <a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/soifer.pdf">here</a>.) I expected to get MORE on why Hilbert did it. Soifer does devote two pages to Hilbert. But as for WHY Hilbert did it, all he says is:<br />
<br />
<i>As far as we know today, the first Ramseyian-type results appeared in 1892 as a little noticed</i><br />
<i>assertion in [Hil]. Its author was the great David Hilbert. In this work Hilbert proved the </i><i>theorem of our interest merely as a tool for his study of irreducibility of rational functions.</i><br />
<i><br /></i>
I wanted to know what Hilbert proved (this was easy- I looked up the Hilbert Irreducibility<br />
theorem) and how he used his Cube Lemma to do it. I assumed this would be known and out there<br />
in English since the Hilbert Irreducibility Theorem is very important.<br />
<br />
But NO- I found the original German Version but THATS IT.<br />
<br />
Well, if you want something done, best to do it yourself. Except that I don't know German.<br />
<br />
YADDA YADDA YADDA<br />
<br />
<a href="https://arxiv.org/abs/1611.06303">Here</a> is a paper with Mark Villarino and Ken Regan, in English, that has the proof.<br />
In some later post I'll describe how the paper came about.<br />
<br />
For now I will state the Hilbert Irreducibility Theorem, two-variable case:<br />
<br />
<br />
<i>If f(x,t) is in Z[x,t] and is irreducible, then for infinitely many natural numbers t0, f(x,t0) is irreducible in Z[x].</i><br />
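As a toy illustration (my example, not Hilbert's): for f(x,t) = x^2 - t, the specialization x^2 - t0 is reducible over the rationals exactly when it has a rational root, i.e. exactly when t0 is a perfect square, so all but finitely many t0 below any bound give irreducible specializations:

```python
import math

def specialization_irreducible(t0):
    """True iff x^2 - t0 is irreducible over Q, for a positive integer t0.
    A quadratic is reducible over Q iff it has a rational root, here +/- sqrt(t0),
    which is rational iff t0 is a perfect square."""
    r = math.isqrt(t0)
    return r * r != t0

# All t0 in {1,...,20} except the squares 1, 4, 9, 16 give irreducible specializations.
irreducible_t0 = [t0 for t0 in range(1, 21) if specialization_irreducible(t0)]
```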
<div>
<br /></div>
<br />
<br /></div>
<div>
<br /></div>
http://blog.computationalcomplexity.org/2016/12/the-very-first-ramseyian-theorem.htmlnoreply@blogger.com (GASARCH)4tag:blogger.com,1999:blog-3722233.post-6874353303813050554Fri, 16 Dec 2016 14:49:00 +00002016-12-16T09:49:00.066-05:00Freedom of SpeechBill and I have always strongly believed in the principle of freedom of speech, guaranteed to us by the constitution of the United States. The government blocks speech only in extreme circumstances, such as fraud, libel, and threats to people and property, but allows people's opinions, no matter how abhorrent we find them. <div>
<br /></div>
<div>
Speech used to be tempered by complexity. You could only talk to a small number of people at once. You had to physically print and distribute materials spouting your points of view. So while people could say what they wanted, they faced barriers to distributing that information. We believed in free speech but speech wasn't free.</div>
<div>
<br /></div>
<div>
Now with the Internet, particularly through social media, speech flows quickly and cheaply. People excitedly found they could express their views easily, and discovered that others could do so as well.</div>
<div>
<br /></div>
<div>
We should welcome this diversity of opinion, never available at this level before. We should listen to what people have to say, challenge their views and even their claimed facts, but more importantly challenge ourselves and our own viewpoints with the arguments of others. Never trust anything you see on the Internet but never be afraid of it either.</div>
<div>
<br /></div>
<div>
Instead we see calls to protect people from ideas that fall significantly outside the mainstream and to decide for them which facts are true or false, whether by blocking accounts or making it more difficult to distribute information. Free speech may have helped elect a president whose views and actions many find abhorrent, but that's not a reason to restrict speech. One must fight words with words, not block the words of others.</div>
<div>
<br /></div>
<div>
We must always fight for the rights of people to express themselves and avoid the path of limiting the spread of ideas. Once we start restricting the distribution of ideas we take a walk down a dangerous path, and someday we may find we have to keep our own thoughts to ourselves. </div>
http://blog.computationalcomplexity.org/2016/12/freedom-of-speech.htmlnoreply@blogger.com (Lance Fortnow)11tag:blogger.com,1999:blog-3722233.post-4532920947114927319Mon, 12 Dec 2016 17:13:00 +00002016-12-12T13:14:36.298-05:00Guest post by Samir Khuller on Humans, Machines, and the Future of Work (a workshop)Guest Post from Samir Khuller on<br />
<br />
Humans, Machines, and the Future of Work<br />
(A Workshop)<br />
<br />
On Dec 5th and 6th I attended, at Rice University,<br />
a <a href="http://delange.rice.edu/conference_X/">workshop on Humans, Machines, and the Future of Work</a>.<br />
I had a wonderful lunch with John Markoff of NY Times,<br />
and Moshe Vardi of Rice University (Moshe is the brains<br />
behind this workshop).<br />
<br />
I have rarely attended such meetings that are so different<br />
from the usual conferences I attend, where the talks are all<br />
rather technical and half the audience is lost halfway through the talk.<br />
<br />
The main theme poses the speculative question about the evolving<br />
nature of work and how technology, and especially recent<br />
advances in AI and Robotics (self-driving cars, robots to look<br />
after the elderly, self-checkout machines that can simply<br />
recognize your shopping order) can render huge categories of<br />
jobs redundant. Why hire a human, when a robot will cook<br />
dinner, and clean your home every day? What will people spend their time doing?<br />
<div>
<div>
Will the work week reduce greatly? Yet, for many, their devices and</div>
<div>
24/7 connectivity mean we are working more than ever! The</div>
<div>
speakers had varied backgrounds, from Economics to Social Science</div>
<div>
to Robotics.</div>
<div>
<br /></div>
<div>
In the end, it's clear that the nature of work is constantly evolving.</div>
<div>
Our children could have job titles our parents could never dream of</div>
<div>
when they were selecting a profession. However, the rapidly growing</div>
<div>
interest in Tech and Computing is clear - these are powerful societal</div>
<div>
forces at work. New job titles - Robot fixer, Robot Programmer,</div>
<div>
Virtual Reality Expert, Lawyer for the protection of Robots. Demand</div>
<div>
for graduates with those skills will continue to increase, and the</div>
<div>
Universities that invest quickly and early will become the dominant</div>
<div>
players. CMU, Georgia Tech and others invested very early in colleges</div>
<div>
of computing, and this has paid off handsomely for them. The demand for</div>
<div>
CS degrees is going to increase even more in the next decade</div>
<div>
with research moving more closely to the impact that technology is</div>
<div>
having on society. We as researchers need to pay attention to</div>
<div>
societal impact, since it's very real and immediate.</div>
</div>
<div>
<br /></div>
<div>
<div>
The workshop speakers were pretty amazing - talks were accessible,</div>
<div>
and extremely well delivered. I suppose such talks are meant for a really</div>
<div>
broad audience (there actually were not many CS people in the audience)</div>
<div>
and thus the points being made are pretty high level.</div>
<div>
<br /></div>
<div>
I would recommend that we all become better acquainted with some of</div>
<div>
the social issues that interact with Computer Science, and going</div>
<div>
to workshops like this is a great start.</div>
</div>
http://blog.computationalcomplexity.org/2016/12/guest-post-by-samir-khuller-on-humans.htmlnoreply@blogger.com (GASARCH)4tag:blogger.com,1999:blog-3722233.post-852391936252108000Thu, 08 Dec 2016 19:39:00 +00002016-12-08T14:39:59.918-05:00Fixing the Academic Job MarketLast month I <a href="http://blog.computationalcomplexity.org/2016/11/computer-science-academic-hiring.html">posted</a> about the craziness of the computer science academic job market due mainly to the decentralized nature of our field. Here are some ideas of what we can do better. I've stolen some of these ideas from other fields such as math and economics.<br />
<br />
<b>Single Job Market System</b><br />
<br />
A single website that every CS department uses to advertise positions and accept applications. Easy for applicants to apply to multiple places and, if they wish, make their materials open for any department to see. Recommenders need only upload their letter once. We could add some additional controls, such as ways for applicants to indicate geographical preferences or two-body issues.<br />
<br />
We could have an opt-in site for candidates to list where and when they have interviews. It would make coordination of job interview dates and offer timing that much simpler.<br />
<br />
<b>Annual Meeting</b><br />
<b><br /></b>
A meeting in early January that brings together all the subfields of computing so we can work as one community instead of twenty. As part of this meeting, members of recruiting committees and job candidates come and schedule short interviews with each other to make preliminary choices for on-campus interviews. CS hasn't had annual meetings since the 80's but math and econ still do.<br />
<br />
<b>Virtual Meetings</b><br />
<b><br /></b>
Every time I bring up annual meetings, people complain that we already have too many conferences in computer science. So if no physical meeting, we can set aside days to have a virtual meeting with recruiting committees and candidates talking over Skype.<br />
<br />
<b>Common Dates</b><br />
<b><br /></b>
Have some common fixed dates, just a couple of times in the spring, when departments can present offers, and when candidates must make a decision. That should reduce how long departments have to hold a position before it settles.<br />
<b><br /></b>
These last two ideas require no centralization, just willing job candidates.<br />
<b><br /></b>
<b>Job Market Paper and Video</b><br />
<b><br /></b>
As the recent <a href="http://cra.org/resources/best-practice-memos/incentivizing-quality-and-impact-evaluating-scholarship-in-hiring-tenure-and-promotion/">CRA Best Practices Memo</a> suggests, candidates should choose a single paper to highlight their research. Each candidate should also post a short (3-5 minute) video where they describe this research at a level that any computer scientist could follow. The job talk should cover this paper only, instead of trying to wow with multiple works.<br />
<br />
<b>Candidate Web Page</b><br />
<b><br /></b>
If you are publicly looking for a job, set up a web page, linked from your home page, to your job materials: CV, research statement, teaching statement, list of references, pointers to all your papers with links to PDFs, with the aforementioned job market paper and video highlighted. Also give a pointer to your Google Scholar profile page and make sure that page is correct and up to date.http://blog.computationalcomplexity.org/2016/12/fixing-academic-job-market.htmlnoreply@blogger.com (Lance Fortnow)9tag:blogger.com,1999:blog-3722233.post-5234566692235534665Tue, 06 Dec 2016 14:56:00 +00002016-12-06T15:51:33.944-05:00A students unusual proof might be a better proof<br />
<br />
I asked a student to show that between any two rationals there is a rational.<br />
<br />
She did the following: if x < y are rational then let δ be a rational with 0 < δ < y - x; then x < x + δ < y and x + δ is rational.<br />
<br />
This is correct though more complicated than what I had in mind: (x+y)/2.<br />
<br />
I then asked her to prove that between any two irrationals there is an irrational.<br />
<br />
She did the following: if x < y are irrational then let δ be a rational with 0 < δ < y - x; then x < x + δ < y and x + δ is irrational (an irrational plus a nonzero rational is irrational).<br />
<br />
SAME PROOF!<br />
<br />
I had a different proof in mind: the number of reals in (x,y) is uncountable while the number of rationals is countable, so there must be at least one (in fact uncountably many) irrationals in (x,y).<br />
(NOTE- I originally had `the number of irrationals in (x,y) is ...' which, as the comment by stu below points out, assumes what I want to prove. Typo on my part.)<br />
<br />
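Written out once, her argument covers both theorems. A sketch (my phrasing):

```latex
\textbf{Claim.} Let $S$ be $\mathbb{Q}$ or $\mathbb{R}\setminus\mathbb{Q}$, and let $x<y$ with $x,y\in S$.
Choose a rational $\delta$ with $0<\delta<y-x$ (e.g.\ $\delta=1/n$ for any integer $n>1/(y-x)$).
Then $x<x+\delta<y$ and $x+\delta\in S$: a rational plus a rational is rational, and an
irrational plus a rational is irrational (otherwise $x=(x+\delta)-\delta$ would be rational).
```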
These proofs raise questions about our criteria for goodness-of-proof.<br />
<br />
1) Which proof for rationals is better: the δ-proof or the (x+y)/2 proof?<br />
<br />
The (x+y)/2 proof is simpler, but the δ-proof also works for irrationals.<br />
<br />
2) Which proof for irrationals is better: the δ-proof or the uncountability proof?<br />
<br />
Which one is simpler? The uncountability proof gives more information (the number of irrationals in (x,y) is uncountable) but is nonconstructive.<br />
<br />
3) Are there other proofs for either theorem?<br />
<br />
Which proof do you prefer? Why? What are your criteria?<br />
<br />
http://blog.computationalcomplexity.org/2016/12/a-students-unusual-proof-might-be.htmlnoreply@blogger.com (GASARCH)14tag:blogger.com,1999:blog-3722233.post-4093420466532584480Fri, 02 Dec 2016 21:22:00 +00002016-12-04T23:40:44.030-05:00Guest post by Samir Khuller about Visiting the Simons theory Inst for ComputingGuest post by Samir Khuller on his visit to the Simons Inst. of Computing<br />
<br />
Visiting the Simons Institute for Theory of Computing<br />
<br />
A few days back I had the good fortune to spend four days at the<br />
<a href="https://simons.berkeley.edu/">Simons Theory inst.</a> at UC Berkeley (for a workshop titled<br />
<a href="https://simons.berkeley.edu/workshops/uncertainty2016-2">Learning, Algorihtm Design, and Beyond Worst Case Analysis</a>,)<br />
Not only was the workshop extremely well run, the entire Institute is<br />
amazing - from wide open spaces for collaboration to an excellent staff<br />
and amenities for visitors (especially long term visitors who have<br />
shared office space). We were missing a <a href="https://www.dagstuhl.de/en/">Dagstuhl</a>-like venue in the US<br />
for a long time, and I think the Simons Theory Center partially makes<br />
up for this deficiency. Dagstuhl is of course a bit isolated, so people<br />
have nowhere to go but interact and work, and that is part<br />
of the charm. Berkeley isn’t isolated and has some of the most amazing<br />
cafes and restaurants right next to its charming campus. I ended up staying<br />
on campus at their <a href="http://www.berkeleyfacultyclub.com/">Faculty Club</a>.<br />
<br />
Simons workshops are over-subscribed and in high demand, but even on<br />
Thursday evening (my last day there) I noticed that scores of badges<br />
had not yet been picked up. Part of the problem lies in the fact that<br />
registration is free.<br />
<div>
<br /></div>
<div>
<div>
Perhaps charging $10/day for registration will ensure that people sign up for the days that they intend to be there,</div>
<div>
so others who want to attend are not locked out (to be fair, I don’t</div>
<div>
know how many were locked out, but I have the impression that due to</div>
<div>
high demand, workshops have closed registration in the past).</div>
<div>
<br /></div>
<div>
Kudos to both the Simons Institute <a href="https://iribe.cs.umd.edu/">organizers</a> as well as the ones</div>
<div>
for the special semester and workshop! I did visit Simons once in 2014,</div>
<div>
for just 15 minutes, when we were visiting places to get ideas for the design</div>
<div>
of our very own <a href="https://iribe.cs.umd.edu/">Brendan Iribe Center for Computer Science and Innovation</a>.</div>
<div>
Its construction launched in summer 2016, and we hope to be done</div>
<div>
by August 2018. Come and visit after that! This project has consumed the</div>
<div>
last 30 months of my life.</div>
<div>
<br /></div>
<div>
One of the main benefits of attending these workshops is to see sub-fields</div>
<div>
as they form - long before new courses emerge covering these topics - and</div>
<div>
it’s inspiring to meet the leaders in our field running the</div>
<div>
program on Algorithms and Uncertainty. Finally, if you missed it,</div>
<div>
videos of talks are available online <a href="https://simons.berkeley.edu/workshops/schedule/3421">here.</a></div>
<div>
<br /></div>
</div>
http://blog.computationalcomplexity.org/2016/12/guest-post-by-samir-khuller-about.htmlnoreply@blogger.com (GASARCH)1