tag:blogger.com,1999:blog-3722233Sun, 18 Mar 2018 11:17:53 +0000typecastfocs metacommentsComputational ComplexityComputational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarchhttps://blog.computationalcomplexity.org/noreply@blogger.com (Lance Fortnow)Blogger2565125tag:blogger.com,1999:blog-3722233.post-3791320685237445183Wed, 14 Mar 2018 13:08:00 +00002018-03-14T09:08:17.836-04:00Stephen Hawking (1942-2018)<div class="separator" style="clear: both; text-align: center;">
<a href="https://upload.wikimedia.org/wikipedia/commons/e/eb/Stephen_Hawking.StarChild.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="359" data-original-width="250" height="200" src="https://upload.wikimedia.org/wikipedia/commons/e/eb/Stephen_Hawking.StarChild.jpg" width="137" /></a></div>
<div style="text-align: left;">
Stephen Hawking <a href="https://www.nytimes.com/2018/03/14/obituaries/stephen-hawking-dead.html">passed away</a> earlier this morning in Cambridge, England. As a brilliant theoretical physicist and best-selling author all while dealing with a debilitating disease, Hawking rose to be the best-known scientist of his time. </div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
I'll leave it to the physics blogs to talk about his research. Hawking inspired me through his 1988 book, <a href="http://amzn.to/2Gqavr8"><i>A Brief History of Time</i></a>. Hawking told Time magazine before the book's publication "Someone told me that each equation I included in the book would halve the sales. In the end, however, I did put in Einstein's famous equation E = mc<sup>2</sup>. I hope that this will not scare off half my potential readers."<br />
<br />
So you read the book and watch him describe his highly mathematical view of the universe without resorting to mathematics. It is by far the best-written popular science book I have read.<br />
<br />
<i>A Brief History of Time</i> came out when I was in grad school, so it didn't play a role in my becoming an academic, but it did make me realize that science has a story to tell. From the preface of my own book:<br />
<blockquote class="tr_bq">
I took inspiration from Stephen Hawking's A Brief History of Time, which explains physics not through formulas and technicalities but with examples and stories. I attempt to do the same here to explore the spirit and importance of the P versus NP problem.</blockquote>
I am under no illusion that I came even close to Hawking's level of exposition.<br />
<br />
A <a href="https://www.researchamerica.org/sites/default/files/Americans'%20Attitudes%20about%20Science%20and%20Scientists%20in%202017.pdf">poll</a> taken last year showed most Americans could not name a single living scientist but among the 19% that could, the scientist they named most often was Stephen Hawking. We lost not only a brilliant physicist but one of the great symbols for science of our generation.</div>
https://blog.computationalcomplexity.org/2018/03/stephen-hawking-1942-2018.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-1029435025512674786Mon, 12 Mar 2018 21:59:00 +00002018-03-12T17:59:00.341-04:00Wanted: An Easy Proof of a Weak Theorem in NT so that I can get another VDW--> Primes infinite proof to be really easy.(All math in this article is <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/primesinfvdw2.pdf">here</a>)<br />
<div>
<br /></div>
<div>
A while back I posted about a proof that Van Der Waerden's theorem implies the number of primes</div>
<div>
is infinite (see the post <a href="https://blog.computationalcomplexity.org/2017/11/van-der-waerdens-theorem-implies.html">here</a>). That post brought up some logical issues.</div>
<div>
<br /></div>
<div>
More recently there is another proof that the primes are infinite, one that raises (for me at least) some questions about number theory results and proofs. The proof uses the following theorem:</div>
<div>
<br /></div>
<div>
<i>There are no arithmetic progressions of length 4 consisting of squares.</i></div>
<div>
<i><br /></i></div>
<div>
This is attributed to Fermat. All of the proofs I've seen of it look difficult.</div>
<div>
<br /></div>
<div>
Here is a sketch of the proof, from VDW and the theorem above, that the number of primes is infinite.</div>
<div>
Assume that {p1,...,pm} are all of the primes. Let vi(n) be the power of pi in the factorization of n.</div>
<div>
<br /></div>
<div>
We define the following 2<sup>m</sup>-coloring of N:</div>
<div>
<br /></div>
<div>
COL(n) = (v1(n) mod 2, ..., vm(n) mod 2)</div>
<div>
<br /></div>
<div>
By VDW there exists a monochromatic 4-AP. We leave it to the reader to show that you can use this to get a 4-AP of squares.</div>
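The reader exercise can be illustrated numerically (a toy sketch of my own, not part of the proof; the short prime list below plays the role of the assumed-finite list of all primes). A number with color (b1,...,bm) is its squarefree part (the product of the pi with bi = 1) times a square, so multiplying the terms of a monochromatic 4-AP by that common squarefree part would give a 4-AP of squares:

```python
from math import isqrt

PRIMES = [2, 3, 5]  # stand-in for the (assumed finite) list of all primes

def v(p, n):
    """Exponent of the prime p in the factorization of n."""
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

def color(n):
    """The 2^m coloring: parities of the prime exponents."""
    return tuple(v(p, n) % 2 for p in PRIMES)

def squarefree_part(col):
    """Product of the primes whose exponent is odd."""
    s = 1
    for p, b in zip(PRIMES, col):
        if b == 1:
            s *= p
    return s

def is_square(n):
    return isqrt(n) ** 2 == n

# Numbers sharing a color, times the common squarefree part, become squares:
same_color = [2, 8, 18, 50]   # each has color (1, 0, 0), squarefree part 2
s = squarefree_part(color(same_color[0]))
assert all(color(n) == (1, 0, 0) for n in same_color)
assert all(is_square(s * n) for n in same_color)
```

(These particular numbers are not an AP; by Fermat's theorem a monochromatic 4-AP among numbers built from the pi cannot exist, which is exactly the contradiction.)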
<div>
<br /></div>
<div>
Using Fermat's 4-AP theorem is hard! In the write-up I show how to use a stronger VDW theorem and a weaker (at least easier to prove, in my opinion) result in Number Theory to get a different proof.<br />
<br />
VDWPlus: for all k,c there exists W such that for all c-colorings of [W] there exists a,d such that<br />
<br />
a, a+d, a+2d, ..., a+(k-1)d AND d (that's the PLUS part) are all the same color.<br />
<br />
Now to prove primes are infinite. Assume not. p1,...,pm are all of the primes.<br />
<br />
Let vi(n) be as above<br />
<br />
COL(n) = (v1(n) mod 4, ..., vm(n) mod 4)<br />
<br />
Apply this with k=2, so we have<br />
<br />
a, a+d, and d all the same color. Say the color is (b1,...,bm) where each bi is in {0,1,2,3}.<br />
<br />
Multiply all three by p1^{4-b1} ... pm^{4-bm}; now every prime's exponent in each of the three numbers is divisible by 4.<br />
<br />
Now we have A, A+D, D all fourth powers. Call them x^4, z^4, y^4; since x^4 + y^4 = z^4, this contradicts the n=4 case of Fermat's Last Theorem (the easiest case, proven by Fermat himself).<br />
<br />
TO DO: find an even easier theorem in Number Theory to use with a perhaps more advanced form of VDW theorem to get a proof that primes are infinite.<br />
<br />
Coda: Cute that I replace one theorem by Fermat with another theorem by Fermat.</div>
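The multiply-by-pi^{4-bi} step can likewise be checked numerically (again a toy sketch of my own, with a short prime list standing in for the assumed-finite list of all primes): after multiplying, every exponent is divisible by 4, so the result is a perfect fourth power. In the proof, a, a+d, and d share one color, so the same multiplier works for all three and the arithmetic progression is preserved.

```python
PRIMES = [2, 3, 5]  # again playing the role of the assumed-finite list of all primes

def v(p, n):
    """Exponent of the prime p in the factorization of n."""
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

def color4(n):
    """The 4^m coloring used in the VDWPlus proof: exponents mod 4."""
    return tuple(v(p, n) % 4 for p in PRIMES)

def lift(n):
    """Multiply n by p1^(4-b1)...pm^(4-bm), making every exponent 0 mod 4."""
    m = n
    for p, b in zip(PRIMES, color4(n)):
        m *= p ** (4 - b)
    return m

def is_fourth_power(n):
    r = round(n ** 0.25)
    return any((r + d) ** 4 == n for d in (-1, 0, 1))

for n in [2, 6, 12, 135]:     # arbitrary numbers built from PRIMES
    assert is_fourth_power(lift(n))
```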
https://blog.computationalcomplexity.org/2018/03/wanted-easy-proof-of-weak-theorem-in-nt.htmlnoreply@blogger.com (GASARCH)0tag:blogger.com,1999:blog-3722233.post-1938579207856937343Fri, 09 Mar 2018 13:37:00 +00002018-03-09T08:37:54.172-05:00Data-Driven ProgrammingFive years ago I posted about <a href="http://blog.computationalcomplexity.org/2012/12/flash-fill.html">Flash Fill</a>, a then new Microsoft Excel feature that would reformat data based on examples. I just checked the latest version of Excel and Flash Fill is still there just working by making suggestions when appropriate. It uses a programming language approach finding the simplest string manipulation program consistent with the examples.<br />
<br />
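To make the "simplest consistent program" idea concrete, here is a toy sketch (my own illustration; the real Flash Fill uses a much richer language of string programs and a much cleverer search): enumerate compositions of a few primitive string operations, shortest first, and return the first one consistent with every example.

```python
from itertools import product

# A tiny, made-up language of string -> string primitives.
PRIMS = {
    "lower":      str.lower,
    "upper":      str.upper,
    "strip":      str.strip,
    "first_word": lambda s: s.split()[0] if s.split() else s,
    "last_word":  lambda s: s.split()[-1] if s.split() else s,
    "initial":    lambda s: s[:1],
}

def run(names, s):
    """Apply the primitives named in `names`, left to right."""
    for name in names:
        s = PRIMS[name](s)
    return s

def synthesize(examples, max_len=3):
    """Shortest composition of primitives consistent with all examples."""
    for n in range(1, max_len + 1):
        for names in product(PRIMS, repeat=n):
            if all(run(names, x) == y for x, y in examples):
                return names
    return None

prog = synthesize([("Lance Fortnow", "LANCE"), ("Bill Gasarch", "BILL")])
# prog is ("upper", "first_word"): uppercase, then take the first word
```

The synthesized program then generalizes to fresh inputs: run(prog, "Alan Turing") gives "ALAN".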
I loved the idea of example-based programming but it did seem pretty limited at the time. With the machine learning revolution we've gone full circle. We solve a number of problems with data and examples alone: voice recognition, face recognition, spam detection, playing Chess and Go from scratch in ways we wouldn't even know how to program.<br />
<br />
That's not completely true: designing a good machine learning program still requires programming to get the data into a good format, and creating the right network structure to learn remains something of an art. But the real magic happens in the deep learning process itself, which creates a neural net that works well in ways we can't fully understand, even when we look at the final network produced.<br />
<br />
Now "it's all about the data". This hit me when I got an email invitation for the <a href="https://www.uber.com/c/uber-credit-card/">Uber visa card</a>. Unlike other branded cards it gives you no particular discounts or rewards for Uber. Rather it gives you significant cash back for dining and travel, in other words for giving Uber the data about you that can make Uber itself more valuable to you.<br />
<br />
Nothing in this post should be new to you readers. But every now and then just take a breath and realize how much our notions of computing and data have changed over even the past five years.<br />
<br />
Programmers aren't going away. Machine learning works well for things humans do well instinctively. But for actual number crunching, simulations and manipulation, even what Flash Fill does, machine learning has much more to do.https://blog.computationalcomplexity.org/2018/03/data-driven-programming.htmlnoreply@blogger.com (Lance Fortnow)0tag:blogger.com,1999:blog-3722233.post-2782715569399365307Wed, 07 Mar 2018 15:43:00 +00002018-03-07T10:43:39.331-05:00When the Wrong People raise the issueOn my discrete math final in Spring 2017 I had a question:<br />
<br />
Prove that sqrt(2/3) is irrational.<br />
<br />
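For reference, one standard argument (a sketch of the intended solution; the graded rubric may have differed):

```latex
Suppose $\sqrt{2/3} = p/q$ with integers $p, q$, $q \neq 0$, $\gcd(p,q) = 1$.
Squaring gives $2q^2 = 3p^2$. Then $3 \mid 2q^2$, and since $\gcd(3,2) = 1$
we get $3 \mid q^2$, hence $3 \mid q$. Writing $q = 3r$ gives $18r^2 = 3p^2$,
so $6r^2 = p^2$, hence $3 \mid p^2$ and $3 \mid p$, contradicting $\gcd(p,q) = 1$.
```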
A student emailed me the following (I paraphrase, and am probably not as elegant or as long-winded as he was):<br />
<br />
Dr. Gasarch<br />
<br />
I received only 2 points out of 20 on Problem 3 of the final- the one that asked us to prove that sqrt(2/3) is irrational. This was graded rather harshly, and the reason is endemic to the entire course. It was unclear what we could and could not assume in this problem. That has been unclear all semester.<br />
<br />
I responded (I leave out the name for privacy)<br />
<br />
Mr. Student<br />
<br />
Come by my office and we will look over your final. At the same time, bring by your HWs and Midterms to look at. The deadline for regrading those is up; however, either you are correct and I will learn how to run a better course by looking over your HWs and midterms, or you are incorrect and you will be enlightened by our looking over them.<br />
<br />
We agreed to meet. The Student came all ready to argue not just points on the final but also about the course in general. I looked forward to this since either I would learn about how to run the course better OR I would enlighten a student!<br />
<br />
STUDENT: I used the obvious fact that the ratio of two irrationals is irrational. How was I supposed to know I had to prove something so obvious!!!!!!!! And only 2 points!!!!!!! Really!!!!!!<br />
<br />
BILL: What is the ratio of sqrt(8) to sqrt(2)?<br />
<br />
STUDENT: sqrt(8)/sqrt(2) = sqrt(8/2) = sqrt(4) = 2. Why is that relevant?<br />
<br />
BILL: You just showed that the ratio of two irrationals could be rational.<br />
<br />
STUDENT: Oh.<br />
<br />
BILL: Let's look over your HW and midterm and see if there were times when it was not clear what you could assume; if there weren't, then I can clear up some misconceptions you had.<br />
<br />
STUDENT: Uh, Uh. Nevermind.<br />
<br />
BILL: You are here with your HWs and midterms, so let's take a look.<br />
<br />
He really didn't want to. I think he really just wanted more points on the final. But since he phrased it as a philosophical debate about how the course was run, I took him at his word. Everything he showed me was either clearly wrong or clearly unclear. None of it fell into the category of `not clear what you can assume'.<br />
<br />
This disappointed me. I suspect someone could make a case that my course sometimes does not make it clear what you can assume. Other students with a similar story to the one above claim my course is too pedantic. But the examples they show me are usually just wrong, not marked wrong for being too pedantic.<br />
<br />
There is a more general problem here. The students who complain about a course <i>may well have a valid point to make! </i>But it's usually students who are not very good making the complaint, and hence they are not the ones who could make a good argument. One thing I have done when a student has a complaint about how I run the course is to then ask my TAs about it. This has been helpful sometimes, but they are on the other end -- the course is easy for them, so it's hard for them to see the problems.<br />
<br />
Having said all of this, I will own up to one flaw I've had, which the students complained about incoherently (see <a href="http://blog.computationalcomplexity.org/2017/05/students-try-to-memorize-rather-than.html">here</a>) and which my TA pointed out as a fair objection. I had taught ONE type of pigeonhole argument and tested them on a very closely related but different type -- as a way of testing whether they understood pigeonhole and were not just memorizing. It was a fair question, BUT my TA said<br />
(correctly) that I was more interested in getting a good test question than in, you know, TEACHING them the pigeonhole principle -- so in the future I should (and I have) teach them LOTS of versions, make sure they understand them, and then if on the exam I give them a variant it's more fair. But more importantly he pointed out that I (and this is correct, or was) have a QUESTION I really want to put on the midterm and then teach the course so that it makes sense to ask it. The question is fair, but that is NOT the point (which is why the students' objections were incoherent -- the question was fair). I am setting (some of them) up to fail. I have CHANGED my teaching style and exam style since then.<br />
<br />But my point is that the students could not possibly have raised that objection-- partly because the students complaining are not very good, but also because they do not see what goes on behind the scenes.<br />
<br />
UPSHOT- if your students have an incoherent complaint there may be something to it and you should ask your TAs.<br />
<br />
<br />https://blog.computationalcomplexity.org/2018/03/when-wrong-people-raise-issue.htmlnoreply@blogger.com (GASARCH)0tag:blogger.com,1999:blog-3722233.post-1717950974413246529Thu, 01 Mar 2018 12:33:00 +00002018-03-01T07:33:03.491-05:00Me and My Bolt<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-HF3agKD1Xco/WpXJ9aJxf2I/AAAAAAABfTc/wvZPMCv0b6UJ7iqhF7jtzrqHLGFzyjlnwCKgBGAs/s1600/IMG_20171215_134715.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="240" src="https://1.bp.blogspot.com/-HF3agKD1Xco/WpXJ9aJxf2I/AAAAAAABfTc/wvZPMCv0b6UJ7iqhF7jtzrqHLGFzyjlnwCKgBGAs/s320/IMG_20171215_134715.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
On Tuesday I got a knock on my car's window. Rolling it down, someone asked if I liked my car as he was thinking of buying one himself. Was I driving the latest Tesla? No, the Chevy Bolt, General Motors' newest fully electric car.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
On March 31, 2016 me and 180,000 of my closest friends put $1000 down sight unseen on the Tesla 3. As an east coast resident with no prior Tesla I am well down the list even though I made a reservation on the first day. I got disillusioned by early production problems and delays and bait-and-switch emails.</div>
<blockquote class="tr_bq" style="clear: both; text-align: left;">
Now that some more details regarding Model 3 came out, I wanted to gauge your interest in coming in for a test drive in a Model S. An incredibly well equipped Model S can be had for under $80k and with the $7500 federal tax credit and $1000 referral code from a current owner, you can get into a Model S for close to $70k, meanwhile a Model 3 can cost up to $60k. Model S is considerably quicker and has an incredible amount of space to seat 5 comfortably. It is our flagship vehicle and has 5 years of manufacturing behind it to perfect the build quality of the car. Not to mention a quick turnaround time on delivery. I would love to host you in our showroom so I can showcase some of the incredible features in the Model S.</blockquote>
Meanwhile a GM executive on the Georgia Tech College of Computing's advisory board suggested I take a look at the Bolt. I was skeptical but he arranged for a local dealer to bring down a car to test drive. I got sold and bought my own Bolt in December.<br />
<br />
It's an all-electric car with no transmission, so acceleration is quick and smooth. The Bolt has a one-pedal driving mode: the "gas" pedal also works as a brake, slowing the car by returning power to the battery. The car doesn't even pretend to be mechanical: a screen boots up when I turn the car on, and you shift into park by pressing a button. The rear view "mirror" is really a camera. When driving slowly there is a simulated view from above. A little freaky, but it helps with parking. With Android Auto I basically have Google Assistant and Maps in the car, which I prefer to the in-car navigation the Bolt doesn't have anyway.<br />
<br />
I get over 200 miles on a full charge and can plug in overnight to get enough power to get to work and back. The car has considerable safety features that warn you about cars in your blind spots, about getting too close to the car in front of you, and about drifting out of your lane. The Bolt lacks any autonomous features, even adaptive cruise control or parking assist. I suspect you could get significant autonomous behavior with a software upgrade, but I doubt GM would ever do so.<br />
<br />
The car is missing some basic features like power seats and a garage door button. No problem, I rigged up Google Assistant to open the garage door when I say the magic words. Far cooler than pressing a button.<br />
<br />
It's not as sleek looking as a Tesla and nobody will shoot it into space but I'm driving the future and it is amazing.https://blog.computationalcomplexity.org/2018/03/me-and-my-bolt.htmlnoreply@blogger.com (Lance Fortnow)6tag:blogger.com,1999:blog-3722233.post-3863982816377095778Sun, 25 Feb 2018 21:56:00 +00002018-02-26T00:11:41.862-05:00When students are wrong but have a valid point, encourage themI often have the class VOTE on a statement (the choices are usually TRUE/FALSE/UNKNOWN TO SCIENCE/Stewart-Colbert-2020)<br />
<br />
I ask the students who voted incorrectly (some say anything except Stewart-Colbert-2020 is an incorrect vote, though I'm partial to Bee-Oliver, though John Oliver was not born in this country so he's not eligible, though our current president might say that neither was Obama so that's not a bar to the presidency) why they voted the way they did, to get a good discussion going. I give some examples. I use real first names but decided not to give last names for reasons of security. The people involved all agreed. If you have examples of students being WRONG but HAVING A POINT, leave an intelligent comment about it!<br />
<br />
1) In Discrete Math they vote on: <i>is Q, the rationals, countable? </i>One of the NO votes was named Alice (really!).<br />
<br />
<i><b>Alice</b>: The rationals, like there are SO many of them. They are just so crammed in there. I mean really!</i><br />
<i><br /></i>
<i><b>Bill</b>: I think you want to say that between any two rationals is a rational.</i><br />
<i><br /></i>
<i><b>Alice: </b>That's what I said!!</i><br />
<i><br /></i>
<i><b>Bill:</b> Great! And realize that a set is countable if there is a bijection from the set to N. So what you are really saying is-----</i><br />
<i><br /></i>
<b style="font-style: italic;">Alice: </b><span style="font-style: italic;">There is a bijection between N and Q but not an order preserving Bijection!</span><br />
<span style="font-style: italic;"><br /></span>
2) In Formal Lang theory they vote on: <i>if alpha is a reg expression and L(alpha) is the lang generated by alpha, is there a reg exp beta such that L(beta) is the complement of L(alpha)? </i>One of the NO votes was named Andrea (nickname Andy).<br />
<br />
<b><i>Andy:</i> </b><i>Given alpha I just don't see an easy way, or for that matter any way, to create beta.</i><br />
<i><br /></i>
<i><b>Bill:</b> I will soon prove that regular expressions are equivalent to DFA's. So what one would do is take the L(alpha), form the DFA, complement the DFA, then convert back to a regular expression.</i><br />
<i><br /></i>
<i><b>Andy: </b>Oh, yes that works, but it seems complicated.</i><br />
<i><br /></i>
<b style="font-style: italic;">Bill: </b><span style="font-style: italic;">It is! You thought it would be complicated, so you guessed it was false. Even though it's true, your intuition that it's complicated is correct! In fact, there are reg expressions of length n such that the complement lang has a rather large reg expression.</span><br />
<span style="font-style: italic;"><br /></span>
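The construction Bill describes is easy once you have the DFA: complement a (total) DFA by swapping its accepting and rejecting states. A minimal sketch (the DFA below is my own toy example, accepting strings over {a, b} that end in a):

```python
# Toy DFA over {a, b} accepting exactly the strings that end in 'a'.
DFA = {
    "states": {0, 1},
    "start":  0,
    "accept": {1},
    "delta":  {(0, "a"): 1, (0, "b"): 0, (1, "a"): 1, (1, "b"): 0},
}

def accepts(dfa, w):
    """Run the DFA on the string w."""
    q = dfa["start"]
    for ch in w:
        q = dfa["delta"][(q, ch)]
    return q in dfa["accept"]

def complement(dfa):
    """Swap accepting and rejecting states (delta must be total, as here)."""
    return {**dfa, "accept": dfa["states"] - dfa["accept"]}

co = complement(DFA)
for w in ["", "a", "b", "ab", "ba", "aab"]:
    assert accepts(co, w) != accepts(DFA, w)
```

Converting the complemented DFA back to a regular expression is the step that can blow up the size, matching Andy's intuition.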
<span style="font-style: italic;">3) </span>Formal Lang theory again. Not a vote this time. I had on the board the regular expression<br />
<br />
(a UNION b)(a UNION b )^*<br />
<br />
A student Dolapo thought it could be simplified:<br />
<br />
<i><b>Dolapo: </b>Can't you write that as the regular expression (a union b)^+ ?</i><br />
<i><br /></i>
<b style="font-style: italic;">Bill: </b><span style="font-style: italic;">Yes and no. Yes the lang is (a UNION b)^+, but + is not allowed in regular expressions.</span><br />
<span style="font-style: italic;"><br /></span>
<span style="font-style: italic;"><b>Dolapo</b>: Why not?</span><br />
<span style="font-style: italic;"><br /></span>
<span style="font-style: italic;"><b>Bill: </b>First off, they really could have been defined that way and perhaps should have. So why weren't they? I go with historical accident, but some people disagree- see my blog post <a href="http://blog.computationalcomplexity.org/2017/03/why-are-regular-expressions-defined-way.html">here</a></span>https://blog.computationalcomplexity.org/2018/02/when-students-are-wrong-but-have-valid.htmlnoreply@blogger.com (GASARCH)5tag:blogger.com,1999:blog-3722233.post-8116578564622534031Thu, 22 Feb 2018 12:43:00 +00002018-02-22T07:43:46.336-05:00NP is HardYou don't get much press by stating conventional beliefs--there is no "round earth society". Nevertheless there are serious researchers out there trying to say that it is reasonably possible if not likely that P = NP. Don't believe it.<br />
<div>
<br /></div>
<div>
Emanuele Viola just outright states <a href="https://emanueleviola.wordpress.com/2018/02/16/i-believe-pnp/">I Believe P = NP</a>. He starts with a similar argument to one we've seen from <a href="https://rjlipton.wordpress.com/conventional-wisdom-and-pnp/">Richard Lipton</a>: We've had surprising results in the past, so why should the conventional wisdom that P ≠ NP be true? I have two major problems with this line of reasoning:</div>
<div>
<ul>
<li>These arguments cherry pick certain results and ignore the vast majority of theorems that went the way we expected. It's akin to saying that because there have been lottery winners in the past, the ticket I hold in my hand is likely to be a winner.</li>
<li>All the major surprises in complexity occurred in the first two decades of the P v NP era. We've seen no surprises at that level since the early 90's. We have seen surprising proofs of results like Primes in P, Undirected Connectivity in Log Space, and NEXP not in ACC<sup>0</sup>, but the theorems themselves surprised no one. </li>
</ul>
<div>
Viola also makes the argument that we've seen fewer significant lower bound results than upper bounds. This doesn't surprise me--an upper bound requires a single algorithm, a lower bound requires showing an infinite number of algorithms can't work. I fully agree that we're unlikely to see a proof that P ≠ NP in the near future. But this isn't a reason to think they are the same. And we've seen no non-trivial algorithms for general NP problems like circuit-SAT.</div>
<div>
<br /></div>
<div>
Which takes me to Ryan Williams' <a href="http://people.csail.mit.edu/rrw/likelihoods.pdf">Some Estimated Likelihoods For Computational Complexity</a>, where he gives a 20% probability (which I consider nearly 20% too high) that P = NP based on similar logic. Ryan surprises me even more by giving a whopping 80% chance that NEXP = coNEXP. That would imply every easily describable tautology has a short proof of being a tautology (for the appropriate definitions of describable and short), which I find hard to fathom.</div>
</div>
<div>
<br /></div>
<div>
If I had to guess which collapses might happen, I would suggest L = NL (directed connectivity in log space) or one that Ryan gives even odds: TC<sup>0</sup> = NC<sup>1</sup>. TC<sup>0</sup> captures constant-depth neural networks, and as we've seen from the ML community, those neural nets look awfully powerful. </div>
https://blog.computationalcomplexity.org/2018/02/np-is-hard.htmlnoreply@blogger.com (Lance Fortnow)17tag:blogger.com,1999:blog-3722233.post-3843782443995752343Mon, 19 Feb 2018 04:08:00 +00002018-02-18T23:08:44.223-05:00The web bad/People using it bad.I was going to write a post about how hard it was to find what grades mean at different schools (e.g., at UMCP W (for withdraw) means the student dropped the course, but at UT Austin it's Q (for Quit?)) but then I found that by Googling<br />
<br />
Name of school grade key<br />
<br />
I could find it. Okay, so grade keys are not that hard to find.<br />
<br />
WEB BAD:<br />
<br />
However, course descriptions are. Both questions (what grades mean and course desc) come up when I do admissions for my REU program. If a student has a course in<i> Discrete Mathematics</i> that could mean a course where you learn how to prove simple things OR it could mean a course where you prove the graph minor theorem (actually I doubt that) or something in between. Similar for <i>Principles of Programming languages</i> and <i>Software Engineering </i>in that they could mean a variety of things. I think math and physics are more standardized, but even there you have the problem that <i>Advanced calc </i>could be either multivar or a course with proofs or something else. There is a great related XKCD <a href="https://xkcd.com/773/">here</a>.<br />
<br />
PEOPLE USING THE WEB BADLY:<br />
<br />
Fellow blogger Scott Aaronson recently posted about Vivek Wadhwa's Washington Post column on quantum computing (the post is <a href="https://www.scottaaronson.com/blog/?p=3645">here</a>). The article by Vivek told the tired and utterly false story that QC can be used to solve TSP by looking at all possibilities. Should Vivek have known better by looking at the web? At first I thought YES, he should know better. But then I thought -- I should try to see what he might have done. Maybe the web has this one wrong. In fact, some of my students (who, I should add, are not writing articles on QC for the Washington Post) tell me that they don't need to learn the proof of the Cook-Levin theorem since Quantum Computers can solve SAT anyway. So I decided to pretend I didn't know anything about QC and went to the web to see what I could find. First stop: Wikipedia. I found the following quote in the QC article:<br />
<i><br /></i>
<i>BQP is suspected to be disjoint from NP-complete and a strict superset of P though this is not known.</i><br />
<br />
So the first place I looked, I found that BQP is suspected NOT to solve SAT or TSP or... Realize that Wikipedia is not some obscure source of hidden knowledge. It is literally the first place one would look. Hence there is no excuse: even if other experts told him otherwise (which seems to be the case, though `experts' should be in quotes), the Wikipedia quote should at least have given him pause.<br />
<br />
So even when Wikipedia gives the right information, not everyone looks there.<br />
<br />
<br />https://blog.computationalcomplexity.org/2018/02/the-web-badpeople-using-it-bad.htmlnoreply@blogger.com (GASARCH)5tag:blogger.com,1999:blog-3722233.post-8202444280934257678Thu, 15 Feb 2018 13:09:00 +00002018-02-15T08:09:36.775-05:00Corporate EducationFirst of all read the <a href="http://blog.geomblog.org/2018/02/a-metoo-testimonial-that-hits-close-to.html">#metoo testimonial</a> going around the TCS blogosphere. Our field is not immune.<br />
<br />
Last Sunday Frank Bruni wrote an op-ed column <a href="https://www.nytimes.com/2018/02/10/opinion/sunday/corporations-will-inherit-the-earth.html">Corporations will Inherit the Earth</a>, an article on how corporations have taken on what governments used to do. Quite a bit is focused on education.<br />
<blockquote class="tr_bq">
The nimbleness of corporations gives them an edge over hoary, complacent institutions, including those in higher education...In an effort to make sure that employees have up-to-the-minute technical skills--or are simply adept at critical thinking and creative problem solving -- more companies have developed academies of their own. That's likely to accelerate. "I think enterprises like Amazon and Google are going to build universities that teach coding and things the nation needs" said Margaret Spellings, former education secretary and current president of the University of North Carolina system.</blockquote>
Already in the universities we've seen a move towards more majors and minors in STEM and computer science in particular. The changing corporate workplace has already changed academia, putting more of an emphasis on technical areas and less on the liberal arts. Will companies, though, take the next step, running their own schools that focus on the needs of industry? If we see continued decreased government funding for universities and academic research, we may end up with a handful of rich universities and the rest of the population getting education on the job, a future that would be a loss for us all.<br />
<br />
<br />https://blog.computationalcomplexity.org/2018/02/corporate-education.htmlnoreply@blogger.com (Lance Fortnow)4tag:blogger.com,1999:blog-3722233.post-6728758538049236376Sat, 10 Feb 2018 15:10:00 +00002018-02-10T10:14:29.623-05:00A ``new'' ``application'' of Ramsey Theory/A Truly ``bad'' math songIan Parberry once told me (though I doubt he originated it- The first link I found says it was Mark Twain)<br />
<br />
<br />
<i>to a man with a hammer, everything looks like a nail</i><br />
<i><br /></i>
Indeed. Since I am teaching a grad course <i>Ramsey theory and its ``Applications'' </i>(I got 24 students, which is more than I thought I would get- including around 10 ugrads who are taking it because `all the cool kids are taking it' ) I have been taking the hammer of Ramsey Theory and looking for nails to apply it to. (I'm also using a website I found of applications of Ramsey Theory <a href="https://www.cs.umd.edu/users/gasarch/TOPICS/ramsey/ramsey.html">here</a> and a survey article on applications of Ramsey <a href="http://www.combinatorics.org/ojs/index.php/eljc/article/view/DS13/pdf">here</a>.)<br />
<br />
Thinking about the infinite Ramsey Theorem I came up with this ``application'' :<br />
<br />
<i>If p1, p2, p3, ... is an infinite sequence of points in R<sup>n</sup> then there exists a subsequence q1, q2, q3, ... such that for each coordinate 1 ≤ i ≤ n the projection onto that coordinate is either (a) strictly increasing, (b) strictly decreasing, or (c) constant.</i><br />
<br />
<b>Proof: </b>For i < j, color (i,j) with one of 3^n colors indicating, for each coordinate, whether the projection onto that coordinate is increasing, decreasing, or constant in going from pi to pj. The infinite homogeneous set gives you the sequence.<br />
<br />
<b>End of Proof</b><br />
<b><br /></b>
One can also prove this from the Bolzano-Weierstrass theorem (an infinite bounded sequence of points in R<sup>n</sup> has a convergent subsequence). We leave that proof to the reader; however, the proof of BW looks like the proof of the infinite Ramsey Theorem, so I'm not sure if my proof is new or not.<br />
<br />
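Here is a finite sketch of the statement (my own illustration, closer in spirit to the Bolzano-Weierstrass-style argument than to the Ramsey coloring): refine the sequence one coordinate at a time, each time passing to a subsequence whose projection on that coordinate is strictly increasing, strictly decreasing, or constant. For an infinite sequence this always succeeds; for a short finite sample the surviving subsequence may just be small.

```python
from collections import Counter

def monotone_subseq(vals):
    """Positions of a longest weakly monotone subsequence (O(n^2) DP)."""
    best = []
    for cmp in (lambda x, y: x <= y, lambda x, y: x >= y):
        n = len(vals)
        prev, length = [-1] * n, [1] * n
        for j in range(n):
            for i in range(j):
                if cmp(vals[i], vals[j]) and length[i] + 1 > length[j]:
                    length[j], prev[j] = length[i] + 1, i
        k = max(range(n), key=lambda j: length[j])
        seq = []
        while k != -1:
            seq.append(k)
            k = prev[k]
        seq.reverse()
        if len(seq) > len(best):
            best = seq
    return best

def refine(points):
    """Subsequence whose projection on each coordinate is strictly
    increasing, strictly decreasing, or constant."""
    idx = list(range(len(points)))
    for c in range(len(points[0])):
        vals = [points[i][c] for i in idx]
        sub = monotone_subseq(vals)              # weakly monotone in coord c
        val, cnt = Counter(vals[j] for j in sub).most_common(1)[0]
        const = [j for j in sub if vals[j] == val]
        strict, seen = [], set()                 # one index per distinct value
        for j in sub:
            if vals[j] not in seen:
                seen.add(vals[j])
                strict.append(j)
        keep = const if cnt >= len(strict) else strict
        idx = [idx[j] for j in keep]
    return [points[i] for i in idx]

pts = [(i % 3, i, 10 - i) for i in range(12)]
print(refine(pts))  # constant in coord 0, increasing in coord 1, decreasing in coord 2
```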
I wanted to look into the BW theorem so I googled "Bolzano-Weierstrass". I think Google knows me better than I know myself since the second hit was <a href="https://www.youtube.com/watch?v=dfO18klwKHg">https://www.youtube.com/watch?v=dfO18klwKHg</a>, which is a Rap Song about the BW theorem (I am a fan of novelty songs, and of math, so does it follow that I am a fan of math novelty songs? Not sure if it follows, but I AM!)<br />
<br />
One of the problems on the HW was <i>BW-rap- good, bad, or really bad?</i><br />
<i><br /></i>
Answers were:<br />
<br />
1) Made my ears bleed<br />
<br />
2) Lyrics good, singing really bad<br />
<br />
3) So bad it's good<br />
<br />
4) No, just bad.<br />
<br />
I'm in the <i>Lyrics good/singing is `so bad it's good'</i> camp. The class was okay with the lyrics, but mostly thought it was <i>so bad it's... bad.</i> One person thought it was awesome!<br />
<br />
I would like to see a real rapper perform it on YouTube. I doubt I will. Oh well.https://blog.computationalcomplexity.org/2018/02/a-new-application-of-ramsey-theorya.htmlnoreply@blogger.com (GASARCH)2tag:blogger.com,1999:blog-3722233.post-5853037051528031641Thu, 08 Feb 2018 12:42:00 +00002018-02-08T07:42:35.056-05:00For the Love of AlgorithmsWired magazine labelled 2017 as <a href="https://www.wired.com/story/2017-was-the-year-we-fell-out-of-love-with-algorithms/">The Year We Fell Out of Love with Algorithms</a>. The article goes on to talk about how algorithms give us filter bubbles, affect elections, propagate biases and eliminate privacy. The article at the end argues that we shouldn't blame the algorithms but the people and companies behind them.<br />
<br />
Every day algorithms decide what music we listen to, what posts we see on Facebook and Twitter, how we should drive from point A to point B, what products we should buy. Algorithms feed my nostalgia in Facebook and Google Photos, showing me what once was. Algorithms recognize my voice and my face. We've even created new currencies from algorithms. Algorithms form the backbone of the world's six largest public companies (Apple, Alphabet, Microsoft, Amazon, Facebook and Tencent). It's been a long time since I only trusted algorithms to format my papers.<br />
<br />
Algorithms have taken over nearly every aspect of our lives and our economy. With new machine learning techniques, no longer can the creator of an algorithm fully understand how it works. But we aren't in a Skynet world: algorithms are not sentient and not even inherently bad, any more than we would label a group of people inherently bad because of a few examples.<br />
<br />
This societal transition from human to algorithmic decision making will continue and will have great benefits, such as much safer cars and medical care. We must be vigilant, of course, about how algorithms will change society, but (in a serious sense) I welcome our new machine overlords.https://blog.computationalcomplexity.org/2018/02/for-love-of-algorithms.htmlnoreply@blogger.com (Lance Fortnow)7tag:blogger.com,1999:blog-3722233.post-478483437611165922Tue, 06 Feb 2018 03:30:00 +00002018-02-05T22:30:03.053-05:00"How can you use your research for ugrad projects?"- the wrong questionI was helping a math PhD who worked in computable Ramsey theory prepare his teaching and research statements for his job application. One of the questions various schools asked was<br />
<br />
<i>How can you use your research as the basis for undergraduate projects?</i><br />
<i><br /></i>
His answer was kind of a cheat: Ramsey Theory is combinatorics, which ugrads can work on without too much prior knowledge (true), computability theory can be picked up (not true), and the chance to combine the two should thrill ugrads (it doesn't even thrill most Ramsey Theorists!).<br />
<br />
I don't fault the PhD student, I fault the school. What they really want to know is:<br />
<br />
<i>Can you come up with ugrad projects?</i><br />
<br />
They should realize that some math requires too much background and is not appropriate for ugrads, AND that a math PhD will have picked up OTHER knowledge for projects. That knowledge may relate to the research (e.g., in the case above, just the Ramsey Theory) or may not even be close.<br />
<br />
In Comp Sci it is more common for an ugrad to work on research related to a prof's research, since Comp Sci is a younger field and needs less background knowledge. And students can often code something up of interest related to the prof's research.<br />
<br />
SO- do you have ugrads work on your research program OR on something else?<br />
<br />
<br />
<br />
<br />https://blog.computationalcomplexity.org/2018/02/how-can-you-use-your-research-for-ugrad.htmlnoreply@blogger.com (GASARCH)2tag:blogger.com,1999:blog-3722233.post-6426941553330413155Thu, 01 Feb 2018 18:06:00 +00002018-02-01T13:07:30.130-05:00Flying BlindMany computer science conferences have made a number of innovations, such as a rebuttal phase, multi-tiered program committees, and a hybrid journal/conference model with submission deadlines spread through the year. Not theoretical computer science, which hasn't significantly changed its review process for the major conferences since allowing electronic submissions in the early 90's, apart from an ever growing program committee <a href="http://www.diku.dk/~mthorup/FOCS18/FOCS18-cfp.htm">now at 30 for FOCS</a>.<br />
<br />
Suresh Venkatasubramanian learned this lesson when he ran a <a href="http://blog.geomblog.org/2018/01/report-on-double-blind-reviewing-in.html">double blind experiment for ALENEX</a> (Algorithmic Engineering and Experiments) and <a href="http://blog.geomblog.org/2018/01/double-blind-review-at-theory.html">laid out an argument</a> for double blind at broader theory conferences to limit the biases that go along with knowing the authors of a paper. The theory blogosphere responded with posts by <a href="http://mybiasedcoin.blogspot.com/2018/01/double-blind-alenex.html">Michael Mitzenmacher</a>, <a href="https://windowsontheory.org/2018/01/11/on-double-blind-reviews-in-theory-conferences/">Boaz Barak</a> and <a href="https://theorydish.blog/2018/01/25/just-defined-bias-not-sure/">Omer Reingold</a> and a response by <a href="http://blog.geomblog.org/2018/01/double-blind-review-continuing.html">Suresh</a>. I can't stay out of a good conference discussion so here goes.<br />
<br />
Today major orchestra auditions happen behind a screen with artists even told to remove footwear so sounds won't give away the gender of the musician. On the other extreme, the value of a piece of art can increase dramatically in price if it is discovered to be the work of a famous artist, even though it is the same piece of art. Where do research papers lie? It's more a work of art than a violinist in a symphony.<br />
<br />
Knowing the authors gives useful information, even beyond trusting them to have properly checked their proofs. Academics establish themselves as a brand in their research and you learn to trust that when certain people submit to a conference you know what you get, much the way you would more likely buy a new book from an author you love.<br />
<br />
Suresh rightly points out that having authors' names can and does produce biases, often against women and researchers at lesser known universities. But we should educate the program committee members and trust that they can mitigate their biases. We could completely eliminate biases by choosing papers at random, but nobody would expect that to produce a good program.<br />
<br />
Having said all that, we should experiment with double blind submissions. Because everything I said above could be wrong and we won't know unless we try.https://blog.computationalcomplexity.org/2018/02/flying-blind.htmlnoreply@blogger.com (Lance Fortnow)17tag:blogger.com,1999:blog-3722233.post-4853416171168759408Mon, 29 Jan 2018 13:58:00 +00002018-01-29T08:58:50.557-05:00Publishing: Why are we doing it right and other sciences aren't?(NOTE- this is NOT a `we hate Elsevier and the others' post- though I suspect the comments will be about that.)<br />
<br />
Alexandra Elbakyan has created a repository of science papers for scientists to share. She did this because too many papers were behind paywalls. An article about her (and the inspiration for this post) is <a href="https://www.thedailybeast.com/is-this-science-hacker-a-heroine-or-a-villain">here</a>. Note that she has had some legal problems.<br />
<br />
But it got me thinking: I have rarely had a problem getting an article. Between authors' websites and arXiv most of what I want is out there. Lance and others I've asked agree--- while there are problems with editors and publishers etc, access is NOT one of them.<br />
<br />
Why don't other scientists just routinely do this?<br />
<br />
Some speculation follows, but if someone actually knows, please comment:<br />
<br />
1) Some scientists purposely don't post online to avoid being scooped, or out of fear of being scooped.<br />
<br />
2) Some scientists don't post for fear of legal issues with publishing. I do not know of a single computer scientist or mathematician that has gotten in any kind of trouble for posting a paper on their website or arXiv.<br />
<br />
3) Some scientists are lazy.<br />
<br />
4) (This contrasts with Comp Sci) other fields do things the way they do since they ALWAYS did it that way. Comp Sci is a younger field and hence is less tied to tradition.<br />
<br />
5) Other scientists ARE posting their papers (gee, then why did Alexandra have to create the repository?).<br />
<br />
6) Alexandra's repository is where arXiv was X years ago. They'll catch up. But they are having legal problems-- did arXiv ever have legal problems?<br />
<br />
Looking over these I'm not sure I believe any of them. Any other ideas?<br />
<br />
Again, the question is: Why don't other fields routinely post to something like arXiv or their own websites?https://blog.computationalcomplexity.org/2018/01/publishing-why-are-we-doing-it-right.htmlnoreply@blogger.com (GASARCH)11tag:blogger.com,1999:blog-3722233.post-8481902663870246137Fri, 26 Jan 2018 14:18:00 +00002018-01-26T09:18:29.372-05:00From Art to ScienceQ: Why did the neural net cross the road?<br />
A: Who cares as long as it got to the other side.<br />
<br />
Whit Diffie and Martin Hellman in their <a href="https://doi.org/10.1109/TIT.1976.1055638">classic 1976 paper</a> talk about the transition of cryptography:<br />
<blockquote class="tr_bq">
Theoretical developments in information theory and computer science show promise of providing provably secure cryptosystems, changing this ancient art into a science.</blockquote>
Indeed we have some very strong theoretical foundations for cryptography, though they still need to rely on unproven hardness assumptions. Often we trade off formally proven techniques for efficiency, as in, for example, AES and Bitcoin. Here we use "science" in another sense: extensive testing of the hardness of these protocols. Creating and breaking cryptographic protocols used to be an art, having the right intuitions to make things work (for now). Today, breaking security rarely involves breaking a cryptographic protocol; rather it exploits bad implementations and easy-to-guess or phished passwords.<br />
<br />
For neural networks we still live in an art regime. We have great methods to minimize the error for a given network (though no formal proofs as such), but one still needs to design the right network with the appropriate topology and various tricks like regularization to prevent overfitting. Several members of the theoretical CS community have taken on the task of turning the art into a science, but getting us to where we sit in cryptography will take some time. Cryptography took thousands of years to get from an art to a science; hopefully we can get there faster for machine learning.<br />
<br />
This debate came to a head at the recent NIPS conference. Ali Rahimi in his <a href="https://www.youtube.com/watch?v=ORHFOnaEzPc">test-of-time talk</a> described the current state of ML research as "alchemy". Yann LeCun gave a <a href="https://www.facebook.com/yann.lecun/posts/10154938130592143">strong response</a><br />
<blockquote class="tr_bq">
Sticking to a set of methods just because you can do theory about it, while ignoring a set of methods that empirically work better just because you don't (yet) understand them theoretically is akin to looking for your lost car keys under the street light knowing you lost them someplace else.</blockquote>
Imagine if we never did any cryptography until we had the science in place.<br />
<br />
Proving theorems in mathematics is an art more than a science. Because of incompleteness in some sense it has to be. If mathematics ever became automated it would be a whole lot less fun.https://blog.computationalcomplexity.org/2018/01/from-art-to-science.htmlnoreply@blogger.com (Lance Fortnow)6tag:blogger.com,1999:blog-3722233.post-7113444534954281927Wed, 24 Jan 2018 14:38:00 +00002018-01-25T09:53:21.853-05:00How many degrees are in a Martian Year?James Tanton gave a great talk at the JMM (Joint Math Meeting) in San Diego on<br />
<br />
<i>how many degrees are in a Martian Year?</i><br />
<br />
but he didn't quite answer his title question. I found an excerpt of the talk on YouTube, but it doesn't quite answer the question either, though he could have.<br />
<br />
The talk was<br />
<br />
<i>How many degrees are in a Martian Year?</i><br />
<i><br /></i>
Here is an excerpt on YouTube: <a href="https://www.youtube.com/watch?v=V8_AcKoL2Fs">here</a><br />
<br />
I feel the urge to explain and complete the talk.<br />
<br />
Why do we earthlings have 360 degrees in a circle? Because there are 365.25 days in a year and nobody wants to work with that number, so they round off. They could have chosen 360 or 365 or 370, but 360 wins since it has the most divisors. (There is a different measure, <a href="https://en.wikipedia.org/wiki/Gradian">gradians</a>, of which there are 400 in a circle- also a nice round number, but 360 is rounder by the most-divisors criterion.)<br />
<br />
There are 668 Martian days in a Martian year. So you might think they would pick 670 or 660 for their degrees. But that's base 10 thinking! The answer depends on what base the Martians use. Let's assume that, like us, their base depends on the number of fingers they have. We also assume they have two hands with the same number of fingers on each (would aliens look like us or not? Here are some thoughts: <a href="http://www.independent.co.uk/news/science/aliens-what-look-like-more-human-humanoid-biped-scientists-living-image-size-physiology-a8030976.html">here</a> and <a href="http://www.popularmechanics.com/space/deep-space/g1592/we-asked-7-experts-what-would-aliens-actually-look-like/">here</a>). Two fingers on a hand seems like too few, and ten seems like too many, so let's assume their base is one of:<br />
<br />
3 fingers per hand: base 6. 668 base 10 is 3032 base 6. The answer is 3020, divisible by 2, 3, 5.<br />
<br />
4 fingers per hand: base 8. 668 base 10 is 1234 base 8. The answer is 1240, divisible by 2, 3, 4, 8.<br />
<br />
5 fingers per hand: base 10. 668. Let's go with 660, divisible by 2, 3, 5.<br />
<br />
6 fingers per hand: base 12. 668 base 10 is 478 base 12. The answer is 470, divisible by 2, 3, 4, 5.<br />
<br />
7 fingers per hand: base 14. 668 base 10 is 35A in base 14. 140 is the answer- we leave it to you.<br />
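The conversions above can be sanity-checked with a short Python sketch (my own; `roundest_near` is a simplified stand-in for the "ends in 0 in that base and has many divisors" criterion, checking candidates within a small window):

```python
def to_base(n, b):
    """Digits of n in base b, most significant first."""
    digits = []
    while n:
        digits.append(n % b)
        n //= b
    return digits[::-1] or [0]

def num_divisors(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def roundest_near(n, b, window=10):
    """Among numbers near n that end in 0 in base b (i.e. multiples of b),
    pick the one with the most divisors -- the '360 beats 365' criterion."""
    candidates = [m for m in range(n - window, n + window + 1) if m % b == 0]
    return max(candidates, key=num_divisors)

print(to_base(668, 8))         # [1, 2, 3, 4] -- "1234 base 8"
print(roundest_near(668, 8))   # 672, which is 1240 in base 8
print(roundest_near(668, 12))  # 660, which is 470 in base 12
```

Pleasingly, the greedy divisor count recovers the same round numbers picked above for bases 6, 8 and 12.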
<br />
Any other good candidates or methods for this?https://blog.computationalcomplexity.org/2018/01/how-many-degrees-are-in-martin-year.htmlnoreply@blogger.com (GASARCH)7tag:blogger.com,1999:blog-3722233.post-2698003514839080037Thu, 18 Jan 2018 17:47:00 +00002018-02-24T08:58:32.851-05:00A Sensitive GameLast month I <a href="http://blog.computationalcomplexity.org/2017/12/razors-edge.html">posted</a> about the sensitivity conjecture and today I would like to talk about an interesting game developed by <a href="https://doi.org/10.1145/2688073.2688096">Gilmer, Koucký and Saks</a> and independently by <a href="https://arxiv.org/abs/1706.07890">Drucker</a> that could yield a proof.<br />
<br />
Consider the following game on n bits among three players, Alice, Bob and Carol. The game works as follows: Carol picks a bit position and Alice sets that bit to 0 or 1. Carol then picks another unused bit position and Alice sets that bit as well. This repeats until the last bit which Carol gets to set herself. The bits are then revealed to Bob who has to give a set of bit positions that includes the bit Carol set at the end. Alice and Bob can work out a strategy ahead of time with the goal of minimizing the size of the set.<br />
<br />
If Carol can force Bob to give a set of n<sup>ε</sup> bits for some ε>0, this would prove the sensitivity conjecture! Basically a family of functions that give a counterexample to the sensitivity conjecture would give a strategy for Alice and Bob that yields a set of size n<sup>o(1)</sup>. You can find the full proof in the papers above.<br />
<br />
How well can Alice and Bob do? They can use the following strategy to achieve n<sup>1/2</sup>: Break up the input positions into n<sup>1/2</sup> groups of n<sup>1/2</sup> bits each. Whenever Carol picks a bit position, Alice sets that bit to zero unless it is the last unset bit in its group, in which case she sets it to one. Carol can set the last bit in some group any way she wants. Bob will either see every group having exactly one 1 and give the set of positions of all the 1's, or see a group of all zeros and give the positions in that group.<br />
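Here is a small Python simulation (my own sketch, just to illustrate the strategy) playing the group strategy against a Carol who reveals positions in random order and sets her final bit at random; Bob's set always contains Carol's bit and always has size exactly √n:

```python
import math
import random

def play_round(n, rng):
    """One round of the game: Alice/Bob use the sqrt(n) group strategy.
    Returns (did Bob's set contain Carol's bit, size of Bob's set)."""
    g = math.isqrt(n)                  # assume n is a perfect square
    bits = [None] * n
    order = rng.sample(range(n), n)    # the order Carol picks positions
    for pos in order[:-1]:             # Alice fills all but the last position
        group_members = range((pos // g) * g, (pos // g) * g + g)
        completes_group = all(bits[q] is not None for q in group_members if q != pos)
        bits[pos] = 1 if completes_group else 0   # 1 only on a group's last bit
    bits[order[-1]] = rng.randint(0, 1)           # Carol sets the final bit herself
    # Bob: output an all-zero group if one exists, else the positions of the 1s
    for group in range(g):
        members = list(range(group * g, group * g + g))
        if all(bits[q] == 0 for q in members):
            answer = members
            break
    else:
        answer = [q for q in range(n) if bits[q] == 1]
    return order[-1] in answer, len(answer)

rng = random.Random(0)
rounds = [play_round(16, rng) for _ in range(200)]
print(all(ok for ok, _ in rounds), max(size for _, size in rounds))  # True 4
```

Note the simulation only pits the strategy against a random Carol; the argument in the text shows it works against every Carol.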
<br />
Mario Szegedy <a href="https://arxiv.org/abs/1506.06456">uses error-correcting codes</a> to improve the upper bound to O(n<sup>0.4732</sup>). A Georgia Tech undergraduate DeVon Ingram <a href="https://arxiv.org/abs/1712.01149">improves</a> the upper bound to O(n<sup>0.4696</sup>) by generalizing Szegedy's techniques. Ingram's proof comes down to finding a one-to-one function mapping subsets of {1,...,15} of size 4 to perfect codes of length 15 with the property that the bits of the code are 1 for the indices of the subset. Perhaps one could find a clever mapping that has this property but Ingram wrote code using a matching algorithm. A true computer assisted proof. Longer code lengths alas won't yield direct improvements.<br />
<br />
The connection to sensitivity doesn't go both ways. An upper bound of n<sup>o(1)</sup> wouldn't necessarily yield a counterexample to the sensitivity conjecture, though it would tell us the limitations of this game approach. Nevertheless I find it cool that a game that doesn't talk about functions at all could help settle an important question about them.https://blog.computationalcomplexity.org/2018/01/a-sensitive-game.htmlnoreply@blogger.com (Lance Fortnow)2tag:blogger.com,1999:blog-3722233.post-4918349967339814884Tue, 16 Jan 2018 16:41:00 +00002018-01-16T11:41:01.124-05:00Donald Knuth Turns 80 years and 6 daysCelebrating Donald Knuth's 80th birthday, or 80 years + 6 days birthday, seems odd. Should we use powers of 2? Hmm- too few, just 32 and 64 really. And having a 32-year celebration for someone is unusual. How about numbers that end in 0 in base 8: 64 would be 100, 72 would be 110, 80 would be 120, so AH- we would be celebrating! So let's celebrate!<br />
<br />
LANCE: One of us should blog about Don Knuth turning 80.<br />
<br />
BILL: How about both of us, sep posts?<br />
<br />
Lance had his post <a href="http://blog.computationalcomplexity.org/2018/01/donald-knuth-turns-eighty.html">here</a>. Now it's my turn.<br />
<br />
Donald Knuth was the first person (roughly- history is not as definite as mathematics) to use mathematics to analyze algorithms. This may seem like an obvious thing to do but that's easy to say in retrospect. And he never lost focus: He may have to use some hard math to analyze algorithms but his goal always was to analyze algorithms. He didn't get fascinated with some obscure aspect of math like, say, surreal numbers, and go write a book on it. Oh wait, he did (see <a href="http://www.math.harvard.edu/~knill/teaching/mathe320_2015_fall/blog15/surreal1.pdf">here</a>). But, for the most part, everything he did was towards faster algorithms and faster typesetting.<br />
<i><br /></i>
In an early draft of a book review of <i>Companion to the Papers of Donald Knuth</i> I wrote at the end of it<br />
<br />
<i> Donald Knuth has been called the father of Theoretical Computer Science and hence his thoughts are worth knowing</i><br />
<br />
I usually email my reviews to the author so they can make sure I didn't say anything stupid or in some very rare cases have a rebuttal. In Knuth's case I postal mailed it (ask your grandparents what that is) since he does not use email. He mailed back the hardcopy with some corrections in pencil. One of them was to cross out<br />
<br />
<i>father of theoretical computer science</i><br />
<i><br /></i>
and replace it with<br />
<br />
<i>father of algorithmic analysis.</i><br />
<i><br /></i>
I made the correction. That is how he views himself and it would be odd to argue the point.<br />
<br />
As a tribute to him I have gathered up all of the book reviews in my column of books by him (between 10 and 12 depending on how you count) and books clearly inspired by him (1 book- A=B) into one file which I point to <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/knuth.pdf">here</a>. <br />
<br />
<br />
<br />
<br />https://blog.computationalcomplexity.org/2018/01/donald-knuth-turns-80-years-and-6-days.htmlnoreply@blogger.com (GASARCH)2tag:blogger.com,1999:blog-3722233.post-4889227120982533363Wed, 10 Jan 2018 13:26:00 +00002018-01-10T08:26:02.861-05:00Donald Knuth Turns EightyWe've kept this blog going long enough that we start repeating our celebrations. Ten years ago Bill <a href="http://blog.computationalcomplexity.org/2008/01/today-is-knuths-70th-birthday.html">celebrated Don Knuth's 70th Birthday</a> and today Donald Knuth turns 80. While he <a href="http://knuth80.elfbrink.se/">celebrates in Piteå, Sweden</a> with its less than 4.5 hours of daylight, we wish him a happy birthday from stateside.<br />
<div>
<br /></div>
<div>
Looking back in this blog, in 2015 I wrote about the <a href="http://blog.computationalcomplexity.org/2015/01/the-history-of-history-of-history-of.html">history of the history of computing</a> including this wonderful talk by Knuth, <a href="https://youtu.be/gAXdDEQveKw">Let's Not Dumb Down the History of Computer Science</a>.<br />
<br /></div>
<iframe allow="encrypted-media" allowfullscreen="" frameborder="0" gesture="media" height="315" src="https://www.youtube.com/embed/gAXdDEQveKw" width="560"></iframe>
<br />
<div>
<br />
Donald Knuth is known for many things, important algorithms, the brilliant book series The Art of Computer Programming, TeX, the many awards he's won and the <a href="https://www.sigact.org/Prizes/Knuth/">award named after him</a>. In my favorite story, back in the 70's when Knuth started working on Volume 4 (still in progress), he wanted a good name for those hard problems that Cook and Karp found. His candidates "Herculean", "Formidable" and "Arduous" didn't get much traction so he <a href="http://blog.computationalcomplexity.org/2010/11/by-any-other-name-would-be-just-as-hard.html">ran a contest</a> which had some interesting suggestions before reluctantly going with the name "NP-complete" that came out of a group from MIT.<br />
<br />
In his <a href="https://doi.org/10.1145/1811129.1811130">SIGACT News article</a> that described the contest, Donald Knuth was so sure that these hard problems would remain hard, he puts everything on the line, offering anyone who could prove P = NP a live turkey. May Don Knuth and his turkey continue to celebrate many more birthdays.</div>
https://blog.computationalcomplexity.org/2018/01/donald-knuth-turns-eighty.htmlnoreply@blogger.com (Lance Fortnow)5tag:blogger.com,1999:blog-3722233.post-5465544257965312428Mon, 08 Jan 2018 02:21:00 +00002018-01-07T22:20:19.733-05:00A new largest prime found!A new largest KNOWN prime has been discovered and it's 23 million digits long.<br />
<br />
Nate Silver's website had an article about it (written by Oliver Roeder) <a href="https://fivethirtyeight.com/features/we-have-a-new-prime-number-and-its-23-million-digits-long/">here</a><br />
<br />
An article about why people do this is <a href="http://primes.utm.edu/notes/faq/why.html">here</a><br />
<br />
Lance posted about finding large primes in 2006 <a href="http://blog.computationalcomplexity.org/2006/01/search-without-end.html">here</a><br />
<br />
I'll just make some random comments<br />
<br />
1) The prime is 2<sup>77,232,917</sup>-1<br />
<br />
2) The prime is not much bigger than the previous champion.<br />
<br />
3) More generally, the graph (in Oliver Roeder's article) shows that from 1600 to about 1951 there was slow progress, but since then there has been LOTS of progress. See the table in <a href="https://en.wikipedia.org/wiki/Largest_known_prime_number">this</a> article. I had wanted to say <i>every year a new prime was found</i> but, alas, not that simple a pattern. Even so, lots of new records.<br />
<br />
4) I'll list the reasons given for why people do this and my comments on them.<br />
<br />
a) Tradition! (Note to self: write a novelty song, to the tune of Fiddler on the Roof's Tradition, about why people work on various mathematical things.)<br />
<br />
b) For the by-product of the quest. This one does make sense and I am sure has driven and helped test out some research. Reminds me of the spinoffs from going to the moon (see <a href="https://en.wikipedia.org/wiki/NASA_spinoff_technologies">here</a>). Contrary to what I heard as a kid, the orange-powder-drink Tang was not a spinoff. But there were many others. Of course- would these spinoffs have happened anyway? Hard to know.<br />
<br />
c) People collect rare and beautiful items. I don't really see this one. Finding a large prime doesn't make it yours, it belongs to the ages! And I don't think people get their names on the prime either. The only prime that has someone's name on it is the famous <a href="https://en.wikipedia.org/wiki/57_(number)">Grothendieck prime</a> which is 57. Oh well. There are sets of primes with people's names on them: <a href="https://en.wikipedia.org/wiki/Mersenne_prime">Mersenne primes</a>, <a href="https://en.wikipedia.org/wiki/Gaussian_integer#Gaussian_primes">Gaussian primes</a> (which are subsets of Gaussian integers so maybe shouldn't count), <a href="https://en.wikipedia.org/wiki/Eisenstein_prime">Eisenstein primes</a>, and<br />
<a href="https://en.wikipedia.org/wiki/Sophie_Germain_prime">Sophie Germain primes</a>. If you know of any other primes or set of primes named after someone, leave a comment please.<br />
<br />
d) For the glory! Maybe, but given how briefly people hold the record, fame is fleeting.<br />
<br />
e) To test the hardware. This one I didn't know! I'll quote the article as to why primes are good for this<br />
<blockquote class="tr_bq">
Why are prime programs used this way? They are intensely CPU and bus bound. They are relatively short, give an easily checked answer (when run on a known prime they should output true after their billions of calculations). They can easily be run in the background while other "more important" tasks run, and they are usually easy to stop and restart.</blockquote>
f) To learn more about their distribution. The prime number theorem was conjectured from data. We have so many primes now that I wonder if a few more really help formulate conjectures.<br />
<br />
g) For the money. The first person to get a ten-million digit prime gets $100,000. The first person to get a one billion digit prime gets $250,000. Wow! Except that the article must be a bit old since the $100,000 prize was claimed in 2009 (see <a href="https://www.eff.org/press/archives/2009/10/14-0">here</a>). Still, there's that one billion digit prize out there!<br />
<br />
5) Mersenne primes are of the form 2^n-1. It is known that n must be prime for 2^n-1 to be prime (this is not hard). There are much faster primality testing algorithms for Mersenne primes than for arbitrary numbers. But see next item.<br />
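The much faster test alluded to is the Lucas-Lehmer test, which needs only p-2 modular squarings to decide whether 2^p-1 is prime. A minimal Python sketch (mine; only sensible for small exponents, not 23-million-digit record hunting):

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test: for an odd prime p, 2^p - 1 is prime iff
    s == 0 after p-2 iterations of s -> s^2 - 2 (mod 2^p - 1), with s = 4."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Exponents p (odd primes) for which 2^p - 1 is a Mersenne prime:
print([p for p in [3, 5, 7, 11, 13, 17, 19, 23, 31, 61] if lucas_lehmer(p)])
# [3, 5, 7, 13, 17, 19, 31, 61]
```

Note how 11 and 23 drop out: 2^11-1 = 2047 = 23 x 89 and 2^23-1 are composite even though the exponents are prime.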
<br />
6) While writing this blog post I looked up non-Mersenne primes. It seems like the largest one is<br />
<br />
10223*2^31172165 + 1 and was discovered in 2016.<br />
<br />
But of more interest- there is no Wikipedia page on non-Mersenne primes; there are some outdated pages that don't have the most recent information. As the kids say, it's not a thing.<br />
<br />
8) I'll add one more reason why people work on this, but it's more of a tautology: People work on finding large primes because they can! By contrast, finding VDW numbers is hard and unlikely to see much progress.<br />
<br />
9) I think that the recent advances have come mostly from computing power and not number theory. (If this is incorrect let me know with a polite comment.)<br />
<br />
10) Have there been spinoffs in either number theory OR computing power lately?<br />
<br />
11) I wonder if there will come a point where progress gets hard again and the graph of largest known primes flattens out. I tend to think yes, but hard to say when.https://blog.computationalcomplexity.org/2018/01/a-new-largest-prime-found.htmlnoreply@blogger.com (GASARCH)4tag:blogger.com,1999:blog-3722233.post-2653680872648610604Fri, 05 Jan 2018 16:20:00 +00002018-01-05T11:20:37.476-05:00Which of these Math acronyms are well known? The last time I taught Grad Ramsey Theory there were very good math grads and ugrads in it. They used some acronyms - some I knew, some I didn't know (but know now). I am sure some are well known and some are not. I don't know which is which. Here is the list and comments:<br />
<div>
<br /></div>
<div>
WLOG- Without Loss of Generality. This one I know and it seems well known-- when I Google it the first page is all this definition. (Maybe I shouldn't use the term ``Googled''- I've heard that brand names don't like it when they become generic terms like `Googled'. Kleenex, Photoshop, Xerox had this fate. There is a word for it- genericide.)</div>
<div>
<br /></div>
<div>
ITOT- It turns out that. An Applied Math Prof used this in a course I had in 1977. I have not seen it used since then. I don't even use it. </div>
<div>
<br /></div>
<div>
BWOC- By Way of Contradiction. I thought this was better known. When I googled it I got to <a href="https://acronyms.thefreedictionary.com/BWOC">this</a> page which tells me it stands for:</div>
<div>
<br /></div>
<div>
Big Wolf on Campus (TV Show)</div>
<div>
<br /></div>
<div>
By Weight of Cement (Oil and Gas industry)</div>
<div>
<br /></div>
<div>
Big Women on Campus (GOOD- the counterpart to BMOC)</div>
<div>
<br /></div>
<div>
By Way of Contradiction (Given the other items on the list I'm surprised it made the cut)</div>
<div>
<br /></div>
<div>
Bob Wayne's Oil company</div>
<div>
<br /></div>
<div>
FTSOC- For the Sake of Contradiction. NOT, as Google told me, Fuck this Shit O'Clock (reminds me of when I use FML for Formula and the class tells me it means Fuck My Life)</div>
<div>
<br /></div>
<div>
WTS- This was a new one for me. It took a while to get it from context but it was Want to Show. Google gives Women in Transportation. </div>
<div>
<br /></div>
<div>
NTS- Need to show. Also a radio station and the National Traffic System.</div>
<div>
<br /></div>
<div>
I think the only one of these that is standard is WLOG. The rest I think could be useful. But I ask you- are any of these standard? Are there ones that are standard that I missed? Of course, the great thing about standards is that there are so many of them.</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
https://blog.computationalcomplexity.org/2018/01/which-of-these-math-acronyms-are-well.htmlnoreply@blogger.com (GASARCH)13tag:blogger.com,1999:blog-3722233.post-1397406569442302463Wed, 27 Dec 2017 18:40:00 +00002017-12-29T12:12:49.643-05:00Complexity Year in Review 2017Theorem of the year goes to the resolution of the dichotomy conjecture. I wrote <a href="http://blog.computationalcomplexity.org/2017/02/the-dichotomy-conjecture.html">about the conjecture</a> in February and while the Feder et al. paper didn't hold up, two later papers seem to resolve the conjecture.<br />
<br />
<div style="text-align: center;">
<a href="https://doi.org/10.1109/FOCS.2017.37">A dichotomy theorem for nonuniform CSPs</a> by Andrei Bulatov</div>
<div style="text-align: center;">
<a href="https://doi.org/10.1109/FOCS.2017.38">A Proof of CSP Dichotomy Conjecture</a> by Dmitriy Zhuk<br />
<br />
<div style="text-align: left;">
I checked with experts in the field; at least one of these papers, and more likely both, ought to be correct.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Runners up include two matching papers I <a href="http://blog.computationalcomplexity.org/2017/11/matching-and-complexity.html">posted about last month</a>, Svensson and Tarnawski who give a <a href="https://doi.org/10.1109/FOCS.2017.70">quasi-NC algorithm for general graph matching</a> and Anari and Vazirani who give an <a href="https://arxiv.org/abs/1709.07822">NC algorithm for matching on planar graphs</a>. We also had the nice <a href="https://doi.org/10.1145/3055399.3055409">quasi-polynomial time algorithm for parity games</a> by Calude, Jain, Khoussainov, Li and Stephan that I <a href="http://blog.computationalcomplexity.org/2017/03/parity-games-in-quasipolynomial-time.html">posted on last March</a>.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
In <a href="http://blog.computationalcomplexity.org/2016/12/complexity-year-in-review-2016.html">last year's review</a> we said "2016 will go down as a watershed year for machine learning" yet somehow it paled against 2017 with breakthroughs in <a href="http://blog.computationalcomplexity.org/2017/12/our-ai-future-good-and-ugly.html">chess</a>, <a href="http://blog.computationalcomplexity.org/2017/02/liberatus-wins-at-poker.html">poker</a>, <a href="https://www.nasa.gov/press-release/artificial-intelligence-nasa-data-used-to-discover-eighth-planet-circling-distant-star">astronomy</a> not to mention continuous advances in machine translation, autonomous vehicles and everything else. Maybe next year ML can write the 2018 year in review.<br />
<br />
We had an awesome eclipse to remind us of the wonders of the world and almost made me forget about US politics. Computing keeps growing and how do we find the resources to train people from pre-school through college and throughout their lives? How much should we worry about the dominance of a handful of computing companies? </div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Thanks to our guest posters <a href="http://blog.computationalcomplexity.org/2017/01/guest-post-about-first-women-in.html">Erin Chambers, Brittany Terese Fasy, Lori Ziegelmeier</a>, <a href="http://blog.computationalcomplexity.org/2017/08/i-wanted-to-address-diversity-after.html">Molly Fortnow</a> and <a href="http://blog.computationalcomplexity.org/2017/05/how-to-solve-it.html">Periklis Papakonstantinou</a>. </div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
<span style="font-family: inherit;">We remember <a href="http://blog.computationalcomplexity.org/2017/02/ken-arrow-and-oscars-voting.html">Ken Arrow</a>, <a href="https://lucatrevisan.wordpress.com/2017/10/23/corrado-bohm/">Corrado Bohm</a>, <a href="https://lucatrevisan.wordpress.com/2017/09/26/3915/">Michael Cohen</a>, <a href="http://www.ustavinformatiky.cz/petrhajek/">Petr Hájek</a>, <a href="http://blog.computationalcomplexity.org/2017/10/monty-hall-1921-2017-and-his-problem.html">Monty Hall</a>, <a href="http://binaire.blog.lemonde.fr/2017/09/22/ciao-maurice/">Maurice Nivat</a>, <span style="font-size: 11pt; text-align: center; white-space: pre-wrap;"><a href="http://blog.computationalcomplexity.org/2017/02/raymond-smullyan-was-born-on-may-25.html">Raymond Smullyan</a>, <a href="http://blog.computationalcomplexity.org/2017/07/peter-wegner-1932-2017.html">Peter Wegner</a> and <a href="https://www.nytimes.com/2017/09/11/science/lotfi-zadeh-father-of-mathematical-fuzzy-logic-dies-at-96.html">Lotfi Zadeh</a>. </span></span></div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
2018 is just full of questions: What will the Internet look like post-net neutrality? How will the new tax code play out? Where will Amazon put HQ2? What will machine learning do next? What can quantum computers with 50 qubits accomplish? Will bitcoin move to $300K or 30 cents? And what great advances in complexity await us?</div>
</div>
https://blog.computationalcomplexity.org/2017/12/complexity-year-in-review-2017.htmlnoreply@blogger.com (Lance Fortnow)0tag:blogger.com,1999:blog-3722233.post-7435476721848448894Thu, 21 Dec 2017 15:14:00 +00002017-12-21T10:14:33.218-05:00Having Faith in ComplexityI believe P ≠ NP as much as anyone else. Nevertheless, should we worry about the trust we put in complexity?<br />
<br />
You don't need the full power of P = NP to break cryptography. I don't worry about quantum computers breaking RSA and related protocols. It won't sneak up on us--when (or if) quantum computing gets anywhere close to factoring large numbers, we'll figure out a strategy to change our protocols and to protect the information we already have. However, if someone comes up with an algorithm tomorrow that cracks AES, we'll have a crisis on our hands, as AES is so widely used that the algorithm is embedded into computer chips. Perhaps we can mitigate the damage before the algorithm spreads, or take our information off-line until we develop new solutions.<br />
<br />
But what about blockchains, the technology that underlies cryptocurrencies such as Bitcoin? A blockchain consists of a series of transactions collected into a sequence of blocks, where each block contains a hash of the previous block, with the transactions themselves hashed and often encrypted with public-key cryptography. One would hope that breaking the cryptography would be caught quickly and we'd still have a legit record of transactions saved somewhere. The transactions themselves might be compromised, especially if anonymity was built into the system.<br />
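The hash-chaining idea described above can be sketched in a few lines of Python. This is a toy illustration, not the actual Bitcoin block format; the names <code>make_block</code> and <code>verify_chain</code> and the JSON-plus-SHA-256 encoding are assumptions for the example:

```python
import hashlib
import json

def block_hash(block):
    """Deterministically hash a block's contents (toy encoding: sorted JSON + SHA-256)."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def make_block(transactions, prev_hash):
    """A block records its transactions plus the hash of the previous block."""
    return {"transactions": transactions, "prev_hash": prev_hash}

def verify_chain(chain):
    """Recompute each block's hash; tampering anywhere breaks every later link."""
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

# Build a tiny three-block chain.
genesis = make_block(["alice pays bob 5"], prev_hash="0" * 64)
b1 = make_block(["bob pays carol 2"], prev_hash=block_hash(genesis))
b2 = make_block(["carol pays dave 1"], prev_hash=block_hash(b1))
chain = [genesis, b1, b2]

print(verify_chain(chain))                         # True
genesis["transactions"][0] = "alice pays bob 500"  # tamper with history
print(verify_chain(chain))                         # False
```

The point of the construction is exactly the property the paragraph relies on: rewriting an old transaction changes that block's hash, which invalidates every block after it, so tampering is evident unless the hash function itself is broken.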
<br />
Bitcoin itself, as I write this, has a total market cap of over $250 billion based fully on cryptography. The cryptography will probably hold up; Bitcoin investors have more to worry about from bad implementations or the pop of a bubble of unrealistic expectations. But as we watch so many exciting advances in computing tackling challenges that we would never have expected to get solved, should we continue to build our economy on the hope that other advances won't happen?https://blog.computationalcomplexity.org/2017/12/having-faith-in-complexity.htmlnoreply@blogger.com (Lance Fortnow)2tag:blogger.com,1999:blog-3722233.post-3402935024911175741Mon, 18 Dec 2017 03:18:00 +00002017-12-17T22:18:15.766-05:00Monkey First!The following story is not true, nor has anyone claimed it's true, but it has a point:<br />
<br />
<i>A company gets a contract to do the following: train a monkey to sit on a 10-foot pedestal and recite some passages of Shakespeare. After a week they announce they have made progress! They invite their investors to see what progress they have made! They unveil a curtain and there is... a 10-foot pedestal.</i><br />
<i><br /></i>
This story was in an article about how Google does moonshots-- that is, high-risk, high-reward, innovative work. The article is <a href="https://www.theatlantic.com/magazine/archive/2017/11/x-google-moonshot-factory/540648/">here</a>. (How the Atlantic makes money when they have stuff online is a mystery to me. Perhaps they do it in a very innovative way.) The point is that it's BAD to have tangible results (like the pedestal) that are not getting at the heart of the problem. So Google has various incentives to do the important stuff. Their slogan is MONKEY FIRST.<br />
<br />
This also applies to our research. The following sequence of events is common:<br />
<br />
1) Prove some scattered results.<br />
<br />
2) Pedestal or Monkey? You could write up what you have, polish it, and write some nice LaTeX macros to make the writing of the paper easier, OR you could try to find the unifying principle, which would be hard and might not work, but if it worked would be, as the kids say, Jawesome (Jaw-dropping awesome). The sad answer is that which you do might depend on when the next conference deadline is.<br />
<br />
More generally, there is a tension between safe, doable research (Pedestal) and high-risk, high-reward research (Monkey). Is our incentive structure set up to encourage high-risk high-reward? The tenure system is supposed to do it and it DOES in some cases, but not as much as it could since there are other factors (salary, promotion to full prof, grants).<br />
<br />
Does the system encourage high-risk high-reward? Should it? Could we do better? What are your experiences? I have no answers (especially to the question of what are your experiences) so I welcome your comments.https://blog.computationalcomplexity.org/2017/12/monkey-first.htmlnoreply@blogger.com (GASARCH)3tag:blogger.com,1999:blog-3722233.post-3946711197336225173Wed, 13 Dec 2017 18:46:00 +00002017-12-14T10:43:37.871-05:00Our AI future: The Good and the UglyI don’t directly work in machine learning but one cannot deny the progress it has made and the effect it has on society. Who would have thought even a few years ago that ML would have basically solved face and voice recognition and would translate nearly as well as humans?<br />
<br />
The <a href="https://nips.cc/">Neural Information Processing Systems conference</a> held last week in Long Beach, California, sold out its 7500 registration slots in 12 days. NIPS, not long ago just another academic conference, has become a major machine learning job market <a href="https://www.bloomberg.com/news/articles/2017-12-06/demand-for-ai-talent-turns-once-staid-conference-into-draft-day">where newly minted Ph.D.s earn north of $300,000 and top-ranked senior academics command multimillion-dollar, multiyear contracts</a>.<br />
<br />
AlphaZero, an offshoot of Google’s Go programs, <a href="https://arxiv.org/abs/1712.01815">learned chess given only the rules</a> in just four hours (on 5000 tensor processing units) and easily beats the best human-designed chess programs. <a href="https://youtu.be/UcAfg9v_dDM">Check out</a> this match against Stockfish.<br />
<br />
<iframe allow="encrypted-media" allowfullscreen="" frameborder="0" gesture="media" height="315" src="https://www.youtube.com/embed/UcAfg9v_dDM" width="560"></iframe>
<br />
<br />
It's part of a trend: machine learning often works better when humans just get out of the way.<br />
<br />
The advances in machine learning and automation have a dark side. Earlier this week I attended the <a href="https://cra.org/events/summit-technology-jobs/">CRA Summit on Technology and Jobs</a>, one of a series of meetings organized by Moshe Vardi on how AI and other computing technology will affect the future job market. When we talk about ethics in computer science we usually talk about freedom of information, privacy and fairness but this may be the biggest challenge of them all.<br />
<br />
The most stark statistic: Contrary to what certain politicians may tell you, manufacturing output in the United States has never been higher, but <a href="http://www.acting-man.com/?p=46399">manufacturing jobs have declined dramatically</a> due to automation.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-LW_N4orfguc/WjF1T6qF2HI/AAAAAAABelU/AW3_WpuIESQ-iQRX-BZzxqLoRCL0Fl7aACLcBGAs/s1600/1-Manufacturing-output-vs.-employment.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="786" data-original-width="1600" height="196" src="https://2.bp.blogspot.com/-LW_N4orfguc/WjF1T6qF2HI/AAAAAAABelU/AW3_WpuIESQ-iQRX-BZzxqLoRCL0Fl7aACLcBGAs/s400/1-Manufacturing-output-vs.-employment.png" width="400" /></a></div>
<br />
The changes have hit hardest for white middle-class less educated males. While this group usually doesn’t get much attention from academics, they have been hit hard, often taking less rewarding jobs or dropping out of the job market entirely. We're seeing many young people living with their parents spending their days playing video games, and we see a spike in suicides and drug use. Drug overdose is now the leading cause of death for men under 50.<br />
<br />
There are no easy solutions. Universal basic income won’t solve the psychological need a job plays in being a part of something bigger than oneself. In the end we'll need to rethink the educate-work-retire cycle towards more life-long learning and find rewarding jobs that go around automation. This all starts by having a government that recognizes these real challenges.https://blog.computationalcomplexity.org/2017/12/our-ai-future-good-and-ugly.htmlnoreply@blogger.com (Lance Fortnow)17