tag:blogger.com,1999:blog-37222332018-02-21T19:45:21.955-05:00Computational ComplexityComputational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill GasarchLance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.comBlogger2558125tag:blogger.com,1999:blog-3722233.post-38437824439957523432018-02-18T23:08:00.002-05:002018-02-18T23:08:44.223-05:00The web bad/People using it bad.I was going to write a post about how hard it was to find what grades mean at different schools (e.g., at UMCP W (for withdraw) means the student dropped the course, but at UT Austin it's Q (for Quit?)) but then I found that by Googling<br />
<br />
Name of school grade key<br />
<br />
I could find it. Okay, so grade keys are not that hard to find.<br />
<br />
WEB BAD:<br />
<br />
However, course descriptions are. Both questions (what grades mean and course descriptions) come up when I do admissions for my REU program. If a student has a course in <i>Discrete Mathematics</i>, that could mean a course where you learn how to prove simple things OR a course where you prove the graph minor theorem (actually I doubt that) or something in between. The same goes for <i>Principles of Programming Languages</i> and <i>Software Engineering</i> in that they could mean a variety of things. I think math and physics are more standardized, but even there you have the problem that <i>Advanced Calc</i> could be either multivar or a course with proofs or something else. There is a great related XKCD <a href="https://xkcd.com/773/">here</a>.<br />
<br />
PEOPLE USING THE WEB BADLY:<br />
<br />
Fellow blogger Scott Aaronson recently posted about Vivek Wadhwa's Washington Post column on quantum computing (the post is <a href="https://www.scottaaronson.com/blog/?p=3645">here</a>). The column told the tired and utterly false story that QC can be used to solve TSP by looking at all possibilities. Should Vivek have known better by looking at the web? At first I thought YES, he should know better. But then I thought -- I should try to see what he might have done. Maybe the web has this one wrong. In fact, some of my students (who, I should add, are not writing articles on QC for the Washington Post) tell me that they don't need to learn the proof of the Cook-Levin theorem since Quantum Computers can solve SAT anyway. So I decided to pretend I didn't know anything about QC and went to the web to see what I could find. First stop: Wikipedia. I found the following quote in the QC article:<br />
<i><br /></i>
<i>BQP is suspected to be disjoint from NP-complete and a strict superset of P though this is not known.</i><br />
<br />
So, the first place I looked, I found that BQP is suspected to NOT solve SAT or TSP or... Realize that Wikipedia is not some obscure source of hidden knowledge. It is literally the first place one would look. Hence, no excuse: even if other experts told him otherwise (which seems to be the case, though `experts' should be in quotes), the Wikipedia quote should at least have given him pause.<br />
<br />
So even when Wikipedia gives the right information, not everyone looks there.<br />
<br />
<br />GASARCHhttp://www.blogger.com/profile/06134382469361359081noreply@blogger.com3tag:blogger.com,1999:blog-3722233.post-82024442809342576782018-02-15T08:09:00.000-05:002018-02-15T08:09:36.775-05:00Corporate EducationFirst of all read the <a href="http://blog.geomblog.org/2018/02/a-metoo-testimonial-that-hits-close-to.html">#metoo testimonial</a> going around the TCS blogosphere. Our field is not immune.<br />
<br />
Last Sunday Frank Bruni wrote an op-ed column <a href="https://www.nytimes.com/2018/02/10/opinion/sunday/corporations-will-inherit-the-earth.html">Corporations will Inherit the Earth</a>, an article on how corporations have taken on what governments used to do. Quite a bit is focused on education.<br />
<blockquote class="tr_bq">
The nimbleness of corporations gives them an edge over hoary, complacent institutions, including those in higher education...In an effort to make sure that employees have up-to-the-minute technical skills--or are simply adept at critical thinking and creative problem solving -- more companies have developed academies of their own. That's likely to accelerate. "I think enterprises like Amazon and Google are going to build universities that teach coding and things the nation needs" said Margaret Spellings, former education secretary and current president of the University of North Carolina system.</blockquote>
Already in the universities we've seen a move toward more majors and minors in STEM, and computer science in particular. The changing corporate workplace has already changed academia, putting more of an emphasis on technical areas and less on the liberal arts. Will companies, though, take the next step, running their own schools that focus on the needs of industry? If we see continued decreased government funding for universities and academic research, we may end up with a handful of rich universities and the rest of the population getting education on the job, a future that would be a loss for us all.<br />
<br />
<br />Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com4tag:blogger.com,1999:blog-3722233.post-67287585380492363762018-02-10T10:10:00.002-05:002018-02-10T10:14:29.623-05:00A ``new'' ``application'' of Ramsey Theory/A Truly ``bad'' math songIan Parberry once told me (though I doubt he originated it- The first link I found says it was Mark Twain)<br />
<br />
<br />
<i>to a man with a hammer, everything looks like a nail</i><br />
<i><br /></i>
Indeed. Since I am teaching a grad course, <i>Ramsey Theory and its ``Applications''</i> (I got 24 students, which is more than I thought I would, including around 10 ugrads who are taking it because `all the cool kids are taking it'), I have been taking the hammer of Ramsey Theory and looking for nails to apply it to. (I'm also using a website I found of applications of Ramsey Theory <a href="https://www.cs.umd.edu/users/gasarch/TOPICS/ramsey/ramsey.html">here</a> and a survey article on applications of Ramsey Theory <a href="http://www.combinatorics.org/ojs/index.php/eljc/article/view/DS13/pdf">here</a>.)<br />
<br />
Thinking about the infinite Ramsey Theorem, I came up with this ``application'':<br />
<br />
<i>If p1, p2, p3, ... is an infinite sequence of points in R<sup>n</sup> then there exists a subsequence q1, q2, q3, ... such that for each coordinate 1 ≤ i ≤ n the projection onto that coordinate is either (a) strictly increasing, (b) strictly decreasing, or (c) constant.</i><br />
<br />
<b>Proof: </b>For i &lt; j, color (i,j) with one of 3<sup>n</sup> colors, indicating for each coordinate k whether the kth projection is increasing, decreasing, or constant from the ith point to the jth point. An infinite homogeneous set gives you the subsequence.<br />
<br />
<b>End of Proof</b><br />
<b><br /></b>
One can also prove this from the Bolzano-Weierstrass theorem (an infinite bounded sequence of points in R<sup>n</sup> has a convergent subsequence). We leave that proof to the reader; however, the proof of BW looks like the proof of the infinite Ramsey Theorem, so I'm not sure if my proof is new or not.<br />
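To see the statement in action on a finite sequence (the theorem itself is about infinite ones, where the homogeneous set does the work), here is a Python sketch of the coordinate-by-coordinate refinement; the function names are my own, and for finite input the surviving subsequence can of course be short:

```python
def longest_monotone(seq, cmp):
    """Indices of a longest subsequence of seq in which cmp holds
    between every pair of consecutive chosen elements (O(n^2) DP)."""
    best = [[i] for i in range(len(seq))]
    for i in range(len(seq)):
        for j in range(i):
            if cmp(seq[j], seq[i]) and len(best[j]) + 1 > len(best[i]):
                best[i] = best[j] + [i]
    return max(best, key=len)

def monotone_refinement(points):
    """Refine coordinate by coordinate: keep a longest subsequence whose
    projection is strictly increasing, strictly decreasing, or constant."""
    idx = list(range(len(points)))
    for c in range(len(points[0])):
        proj = [points[i][c] for i in idx]
        keep = max((longest_monotone(proj, lambda a, b: a < b),
                    longest_monotone(proj, lambda a, b: a > b),
                    longest_monotone(proj, lambda a, b: a == b)), key=len)
        idx = [idx[k] for k in keep]
    return [points[i] for i in idx]

print(monotone_refinement([(1, 5), (2, 4), (3, 6), (0, 1), (4, 3)]))
# -> [(1, 5), (2, 4), (4, 3)]: coordinate 0 increasing, coordinate 1 decreasing
```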
<br />
I wanted to look into the BW theorem, so I googled "Bolzano-Weierstrass". I think Google knows me better than I know myself, since the second hit was <a href="https://www.youtube.com/watch?v=dfO18klwKHg">https://www.youtube.com/watch?v=dfO18klwKHg</a>, which is a rap song about the BW theorem. (I am a fan of novelty songs, and of math, so does it follow that I am a fan of math novelty songs? Not sure if it follows, but I AM!)<br />
<br />
One of the problems on the HW was <i>BW-rap- good, bad, or really bad?</i><br />
<i><br /></i>
Answers were:<br />
<br />
1) Made my ears bleed<br />
<br />
2) Lyrics good, singing really bad<br />
<br />
3) So bad it's good<br />
<br />
4) No, just bad.<br />
<br />
I'm in the <i>Lyrics good/singing is `so bad it's good'</i> camp. The class was okay with the lyrics, but mostly thought it was <i>so bad it's... bad.</i> One person thought it was awesome!<br />
<br />
I would like to see a real rapper perform it on YouTube. I doubt I will. Oh well.GASARCHhttp://www.blogger.com/profile/06134382469361359081noreply@blogger.com2tag:blogger.com,1999:blog-3722233.post-58530370515280316412018-02-08T07:42:00.000-05:002018-02-08T07:42:35.056-05:00For the Love of AlgorithmsWired magazine labelled 2017 as <a href="https://www.wired.com/story/2017-was-the-year-we-fell-out-of-love-with-algorithms/">The Year We Fell Out of Love with Algorithms</a>. The article goes on to talk about how algorithms give us filter bubbles, affect elections, propagate biases and eliminate privacy. At the end, the article argues that we shouldn't blame the algorithms but the people and companies behind them.<br />
<br />
Every day algorithms decide what music we listen to, what posts we see on Facebook and Twitter, how we should drive from point A to point B, what products we should buy. Algorithms feed my nostalgia in Facebook and Google Photos showing me what once was. Algorithms recognize my voice and my face. We've even created new currencies from algorithms. Algorithms form the backbone of the world's six largest public companies (Apple, Alphabet, Microsoft, Amazon, Facebook and Tencent). It's been a long time since I only trusted algorithms to format my papers.<br />
<br />
Algorithms have taken over nearly every aspect of our lives and our economy. With new machine learning techniques, the creator of an algorithm can no longer fully understand how it works. But we aren't in a Skynet world: algorithms are not sentient and not even inherently bad, any more than we would label a group of people inherently bad because of a few examples.<br />
<br />
This societal transition from human decision making to algorithmic decision making will continue and will have great benefits, such as much safer cars and medical care. We must be vigilant, of course, about how algorithms will change society, but (in a serious sense) I welcome our new machine overlords.Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com7tag:blogger.com,1999:blog-3722233.post-4784834376111659222018-02-05T22:30:00.001-05:002018-02-05T22:30:03.053-05:00"How can you use your research for ugrad projects?'- the wrong questionI was helping a math PhD student who worked in computable Ramsey theory prepare his teaching and research statements for his job applications. One of the things various schools wanted to know was<br />
<br />
<i>How can you use your research as the basis for undergraduate projects?</i><br />
<i><br /></i>
His answer was kind of a cheat: Ramsey Theory is combinatorics, which ugrads can work on without too much prior knowledge (true), computability theory could be picked up (not true), and the chance to combine the two should thrill ugrads (it doesn't even thrill most Ramsey theorists!).<br />
<br />
I don't fault the PhD student, I fault the school. What they really want to know is:<br />
<br />
<i>Can you come up with ugrad projects?</i><br />
<br />
They should realize that some math requires too much background and is not appropriate for ugrads, AND that a math PhD will have picked up OTHER knowledge suitable for projects. The projects may relate to the research (e.g., in the case above, just Ramsey Theory) or may not even be close.<br />
<br />
In Comp Sci it is more common for an ugrad to work on research related to a prof's research, since Comp Sci is a younger field and needs less background knowledge. And students can often code up something of interest related to the prof's research.<br />
<br />
SO- do you have ugrads work on your research program OR on something else?<br />
<br />
<br />
<br />
<br />GASARCHhttp://www.blogger.com/profile/06134382469361359081noreply@blogger.com2tag:blogger.com,1999:blog-3722233.post-64269415533304131552018-02-01T13:06:00.000-05:002018-02-01T13:07:30.130-05:00Flying BlindMany computer science conferences have made a number of innovations, such as a rebuttal phase, multi-tiered program committees, and a hybrid journal/conference model with submission deadlines spread through the year. Not theoretical computer science, which hasn't significantly changed its review process for the major conferences since allowing electronic submissions in the early 90's, apart from an ever-growing program committee, <a href="http://www.diku.dk/~mthorup/FOCS18/FOCS18-cfp.htm">now at 30 for FOCS</a>.<br />
<br />
Suresh Venkatasubramanian learned this lesson when he ran a <a href="http://blog.geomblog.org/2018/01/report-on-double-blind-reviewing-in.html">double blind experiment for ALENEX</a> (Algorithmic Engineering and Experiments) and <a href="http://blog.geomblog.org/2018/01/double-blind-review-at-theory.html">laid out an argument</a> for double blind at broader theory conferences to limit the biases that go along with knowing the authors of a paper. The theory blogosphere responded with posts by <a href="http://mybiasedcoin.blogspot.com/2018/01/double-blind-alenex.html">Michael Mitzenmacher</a>, <a href="https://windowsontheory.org/2018/01/11/on-double-blind-reviews-in-theory-conferences/">Boaz Barak</a> and <a href="https://theorydish.blog/2018/01/25/just-defined-bias-not-sure/">Omer Reingold</a> and a response by <a href="http://blog.geomblog.org/2018/01/double-blind-review-continuing.html">Suresh</a>. I can't stay out of a good conference discussion so here goes.<br />
<br />
Today major orchestra auditions happen behind a screen with artists even told to remove footwear so sounds won't give away the gender of the musician. On the other extreme, the value of a piece of art can increase dramatically in price if it is discovered to be the work of a famous artist, even though it is the same piece of art. Where do research papers lie? It's more a work of art than a violinist in a symphony.<br />
<br />
Knowing the authors gives useful information, even beyond trusting them to have properly checked their proofs. Academics establish themselves as a brand in their research and you learn to trust that when certain people submit to a conference you know what you get, much the way you would more likely buy a new book from an author you love.<br />
<br />
Suresh rightly points out that having authors' names can and does produce biases, often against women and researchers at lesser-known universities. But we should educate the program committee members and trust that they can mitigate their biases. We could completely eliminate biases by choosing papers at random, but nobody would expect that to produce a good program.<br />
<br />
Having said all that, we should experiment with double blind submissions. Because everything I said above could be wrong and we won't know unless we try.Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com17tag:blogger.com,1999:blog-3722233.post-48534161711687594082018-01-29T08:58:00.003-05:002018-01-29T08:58:50.557-05:00Publishing: Why are we doing it right and other sciences aren't?(NOTE- this is NOT a `we hate Elsevier and the others' post- though I suspect the comments will be about that.)<br />
<br />
Alexandra Elbakyan has created a repository of science papers for scientists to share. She did this because too many papers were behind paywalls. An article about her (and the inspiration for this post) is <a href="https://www.thedailybeast.com/is-this-science-hacker-a-heroine-or-a-villain">here</a>. Note that she has had some legal problems.<br />
<br />
But it got me thinking: I have rarely had a problem getting an article. Between authors' websites and arXiv, most of what I want is out there. Lance and others I've asked agree--- while there are problems with editors and publishers etc, access is NOT one of them.<br />
<br />
Why don't other scientists just routinely do this?<br />
<br />
Some speculation follows, but if someone actually knows, please comment:<br />
<br />
1) Some scientists purposely don't post online to avoid being scooped, or out of fear of being scooped.<br />
<br />
2) Some scientists don't post for fear of legal issues with publishing. I do not know of a single computer scientist or mathematician that has gotten in any kind of trouble for posting a paper on their website or arXiv.<br />
<br />
3) Some scientists are lazy.<br />
<br />
4) (This contrasts with Comp Sci) other fields do things the way they do since they ALWAYS did it that way. Comp Sci is a younger field and hence is less tied to tradition.<br />
<br />
5) Other scientists ARE posting their papers (gee, then why did Alexandra have to create the repository?).<br />
<br />
6) Alexandra's repository is where arXiv was X years ago. They'll catch up. But she is having legal problems-- did arXiv ever have legal problems?<br />
<br />
Looking over these I'm not sure I believe any of them. Any other ideas?<br />
<br />
Again, the question is: Why don't other fields routinely post to something like arXiv or their own websites?GASARCHhttp://www.blogger.com/profile/06134382469361359081noreply@blogger.com11tag:blogger.com,1999:blog-3722233.post-84819026638702461372018-01-26T09:18:00.001-05:002018-01-26T09:18:29.372-05:00From Art to ScienceQ: Why did the neural net cross the road?<br />
A: Who cares, as long as it got to the other side.<br />
<br />
Whit Diffie and Martin Hellman in their <a href="https://doi.org/10.1109/TIT.1976.1055638">classic 1976 paper</a> talk about the transition of cryptography:<br />
<blockquote class="tr_bq">
Theoretical developments in information theory and computer science show promise of providing provably secure cryptosystems, changing this ancient art into a science.</blockquote>
Indeed we have some very strong theoretical foundations for cryptography, though they still need to rely on unproven hardness assumptions. Often we trade off formally proven techniques for efficiency, for example in AES and Bitcoin. Here we use "science" in another sense: extensive testing of the hardness of these protocols. Creating and breaking cryptographic protocols used to be an art, having the right intuitions to make things work (for now). Today, breaking security rarely involves breaking a cryptographic protocol; it's more often due to bad implementations and easy-to-guess or phished passwords.<br />
<br />
For neural networks we still live in an art regime. We have great methods to minimize the error for a given network (though no formal proofs as such), but one still needs to design the right network with the appropriate topology and various tricks like regularization to prevent overfitting. Several members of the theoretical CS community have taken on the task of turning the art into a science, but getting us to where we sit in cryptography will take some time. Cryptography took thousands of years to get from an art to a science; hopefully we can get there faster for machine learning.<br />
<br />
This debate came to a head at the recent NIPS conference. Ali Rahimi in his <a href="https://www.youtube.com/watch?v=ORHFOnaEzPc">test-of-time talk</a> described the current state of ML research as "alchemy". Yann LeCun gave a <a href="https://www.facebook.com/yann.lecun/posts/10154938130592143">strong response</a><br />
<blockquote class="tr_bq">
Sticking to a set of methods just because you can do theory about it, while ignoring a set of methods that empirically work better just because you don't (yet) understand them theoretically is akin to looking for your lost car keys under the street light knowing you lost them someplace else.</blockquote>
Imagine if we never did any cryptography until we had the science in place.<br />
<br />
Proving theorems in mathematics is an art more than a science. Because of incompleteness in some sense it has to be. If mathematics ever became automated it would be a whole lot less fun.Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com6tag:blogger.com,1999:blog-3722233.post-71134445349542819272018-01-24T09:38:00.001-05:002018-01-25T09:53:21.853-05:00How many degrees are in a Martian Year?James Tanton gave a great talk at the JMM (Joint Math Meeting) in San Diego on<br />
<br />
<i>how many degrees are in a Martian Year?</i><br />
<br />
but he didn't quite answer his title question. I found an excerpt of the talk on YouTube (<a href="https://www.youtube.com/watch?v=V8_AcKoL2Fs">here</a>), but it still doesn't quite answer the question, though it could have.<br />
<br />
<br />
I feel the urge to explain and complete the talk.<br />
<br />
Why do we earthlings have 360 degrees in a circle? Because there are 365.25 days in a year, nobody wants to work with that number, so they round off. They could have chosen 360 or 365 or 370, but since 360 has the most divisors, it wins. (There is a different measure, <a href="https://en.wikipedia.org/wiki/Gradian">gradians</a>, of which there are 400 in a circle- also a nice round number, but 360 is rounder by our criterion: it has more divisors.)<br />
<br />
There are 668 Martian days in a Martian year. So you might think they would pick 670 or 660 for their degrees. But that's base-10 thinking! The answer depends on what base the Martians use. Let's assume that, like us, it depends on the number of fingers they have on their hands. We also assume they have two hands with the same number of fingers on them (would aliens look like us or not? Here are some thoughts: <a href="http://www.independent.co.uk/news/science/aliens-what-look-like-more-human-humanoid-biped-scientists-living-image-size-physiology-a8030976.html">here</a> and <a href="http://www.popularmechanics.com/space/deep-space/g1592/we-asked-7-experts-what-would-aliens-actually-look-like/">here</a>). Two fingers on a hand seems like too few, and ten seems like too many, so let's assume their base is one of:<br />
<br />
3 fingers per hand: base 6. 668 base 10 is 3032 base 6. Round to 3020 base 6 (which is 660), divisible by 2, 3, 5.<br />
<br />
4 fingers per hand: base 8. 668 base 10 is 1234 base 8. Round to 1240 base 8 (which is 672), divisible by 2, 3, 4, 8.<br />
<br />
5 fingers per hand: base 10. 668 base 10 is, well, 668. Let's go with 660, divisible by 2, 3, 5.<br />
<br />
6 fingers per hand: base 12. 668 base 10 is 478 base 12. Round to 470 base 12 (which is 660), divisible by 2, 3, 4, 5.<br />
<br />
7 fingers per hand: base 14. 668 base 10 is 35A base 14. 140 is the answer- we leave this one to you.<br />
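For those who want to check the arithmetic, a few lines of Python suffice (to_base and num_divisors are my own throwaway helpers):

```python
def to_base(n, b):
    """Digits of n written in base b (A = 10, B = 11, ...)."""
    digits = ""
    while n:
        n, r = divmod(n, b)
        digits = "0123456789ABCDEF"[r] + digits
    return digits or "0"

def num_divisors(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

print(to_base(668, 6), to_base(668, 8), to_base(668, 12), to_base(668, 14))
# -> 3032 1234 478 35A

# 360 earns its place on Earth: among 355..365 it has the most divisors.
print(max(range(355, 366), key=num_divisors))  # -> 360
```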
<br />
Any other good candidates or methods for this?GASARCHhttp://www.blogger.com/profile/06134382469361359081noreply@blogger.com7tag:blogger.com,1999:blog-3722233.post-26980035148390800372018-01-18T12:47:00.001-05:002018-01-18T14:36:34.477-05:00A Sensitive GameLast month I <a href="http://blog.computationalcomplexity.org/2017/12/razors-edge.html">posted</a> about the sensitivity conjecture and today I would like to talk about an interesting game developed by <a href="https://doi.org/10.1145/2688073.2688096">Gilmer, Koucký and Saks</a> and independently by <a href="https://arxiv.org/abs/1706.07890">Drucker</a> that could yield a proof.<br />
<br />
Consider the following game on n bits among three players, Alice, Bob and Carol. The game works as follows: Carol picks a bit position and Alice sets that bit to 0 or 1. Carol then picks another unused bit position and Alice sets that bit as well. This repeats until the last bit, which Carol gets to set herself. The bits are then revealed to Bob, who has to give a set of bit positions that includes the bit Carol set at the end. Alice and Bob can work out a strategy ahead of time with the goal of minimizing the size of the set.<br />
<br />
If Carol can force Bob to give a set of at least n<sup>ε</sup> bits for some ε>0, this would prove the sensitivity conjecture! Basically, a family of functions giving a counterexample to the sensitivity conjecture would give a strategy for Alice and Bob that yields a set of size n<sup>o(1)</sup>. You can find the full proof in the papers above.<br />
<br />
How well can Alice and Bob do? They can use the following strategy to achieve n<sup>1/2</sup>: Break the input positions into n<sup>1/2</sup> groups of n<sup>1/2</sup> bits each. Whenever Carol picks a bit position, Alice sets that bit to zero unless it is the last unset bit in its group, in which case she sets it to one. Carol can set the last bit in some group any way she wants. Bob will either see every group having exactly one 1, and will give the set of the positions of all the 1's, or Bob will see a group of all zeros and give the positions in that group.<br />
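Here is a small simulation of that n<sup>1/2</sup> strategy (my own sketch, with Carol playing randomly); Bob's set always has size at most n<sup>1/2</sup> and always contains Carol's final position:

```python
import math
import random

def play(n, rng):
    """One round of the game on n bits (n a perfect square), with Alice
    and Bob using the sqrt(n)-group strategy and Carol playing randomly.
    Returns (Bob's set of positions, Carol's final position)."""
    g = math.isqrt(n)
    bits = [None] * n
    order = list(range(n))
    rng.shuffle(order)
    for pos in order[:-1]:                  # Alice fills all but Carol's bit
        group = pos // g
        others_set = all(bits[q] is not None or q == pos
                         for q in range(group * g, (group + 1) * g))
        bits[pos] = 1 if others_set else 0  # 1 only if last unset in group
    carol = order[-1]
    bits[carol] = rng.randint(0, 1)         # Carol sets the final bit herself
    # Bob: an all-zero group if there is one, else the positions of the 1's
    for group in range(g):
        chunk = list(range(group * g, (group + 1) * g))
        if all(bits[q] == 0 for q in chunk):
            return chunk, carol
    return [q for q in range(n) if bits[q] == 1], carol

rng = random.Random(0)
assert all(carol in S and len(S) <= 4
           for S, carol in (play(16, rng) for _ in range(1000)))
```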
<br />
Mario Szegedy <a href="https://arxiv.org/abs/1706.07890">uses error-correcting codes</a> to improve the upper bound to O(n<sup>0.4732</sup>). A Georgia Tech undergraduate DeVon Ingram <a href="https://arxiv.org/abs/1712.01149">improves</a> the upper bound to O(n<sup>0.4696</sup>) by generalizing Szegedy's techniques. Ingram's proof comes down to finding a one-to-one function mapping subsets of {1,...,15} of size 4 to perfect codes of length 15 with the property that the bits of the code are 1 for the indices of the subset. Perhaps one could find a clever mapping that has this property but Ingram wrote code using a matching algorithm. A true computer assisted proof. Longer code lengths alas won't yield direct improvements.<br />
<br />
The connection to sensitivity doesn't go both ways. An upper bound of n<sup>o(1)</sup> wouldn't necessarily yield a counterexample to the sensitivity conjecture, though it would tell us the limitations of this game approach. Nevertheless I find it cool that a game that doesn't talk about functions at all could help settle an important question about them.Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com2tag:blogger.com,1999:blog-3722233.post-49183499673398148842018-01-16T11:41:00.000-05:002018-01-16T11:41:01.124-05:00Donald Knuth Turns 80 years and 6 daysCelebrating Donald Knuth's 80th birthday, or 80 years + 6 days birthday, seems odd. Should we use powers of 2? Hmm- too few, just 32 and 64 really. And having a 32-year celebration for someone is unusual. How about numbers that end in 0 in base 8? 64 would be 100, 72 would be 110, 80 would be 120, so AH- we would be celebrating! So let's celebrate!<br />
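The base-8 arithmetic above checks out in a couple of lines of Python:

```python
# Ages whose base-8 representation ends in 0 -- the "round" birthdays above.
for age in (64, 72, 80):
    print(age, oct(age))  # 64 0o100, 72 0o110, 80 0o120
```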
<br />
LANCE: One of us should blog about Don Knuth turning 80.<br />
<br />
BILL: How about both of us, sep posts?<br />
<br />
Lance had his post <a href="http://blog.computationalcomplexity.org/2018/01/donald-knuth-turns-eighty.html">here</a>. Now its my turn.<br />
<br />
Donald Knuth was the first person (roughly- history is not as definite as mathematics) to use mathematics to analyze algorithms. This may seem like an obvious thing to do, but that's easy to say in retrospect. And he never lost focus: he may have had to use some hard math to analyze algorithms, but his goal always was to analyze algorithms. He didn't get fascinated with some obscure aspect of math like, say, surreal numbers, and go write a book on it. Oh wait, he did (see <a href="http://www.math.harvard.edu/~knill/teaching/mathe320_2015_fall/blog15/surreal1.pdf">here</a>). But, for the most part, everything he did was towards faster algorithms and faster typesetting.<br />
<i><br /></i>
In an early draft of a book review of <i>Companion to the Papers of Donald Knuth</i> I wrote at the end of it<br />
<br />
<i> Donald Knuth has been called the father of Theoretical Computer Science and hence his thoughts are worth knowing</i><br />
<br />
I usually email my reviews to the author so they can make sure I didn't say anything stupid or, in some very rare cases, offer a rebuttal. In Knuth's case I postal mailed it (ask your grandparents what that is) since he does not use email. He mailed back the hardcopy with some corrections in pencil. One of them was to cross out<br />
<br />
<i>father of theoretical computer science</i><br />
<i><br /></i>
and replace it with<br />
<br />
<i>father of algorithmic analysis.</i><br />
<i><br /></i>
I made the correction. That is how he views himself and it would be odd to argue the point.<br />
<br />
As a tribute to him I have gathered up all of the book reviews in my column of books by him (between 10 and 12, depending on how you count) and books clearly inspired by him (one book: A=B) into one file, which I point to <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/knuth.pdf">here</a>. <br />
<br />
<br />
<br />
<br />GASARCHhttp://www.blogger.com/profile/06134382469361359081noreply@blogger.com2tag:blogger.com,1999:blog-3722233.post-48892271209825333632018-01-10T08:26:00.000-05:002018-01-10T08:26:02.861-05:00Donald Knuth Turns EightyWe've kept this blog going long enough that we start repeating our celebrations. Ten years ago Bill <a href="http://blog.computationalcomplexity.org/2008/01/today-is-knuths-70th-birthday.html">celebrated Don Knuth's 70th Birthday</a> and today Donald Knuth turns 80. While he <a href="http://knuth80.elfbrink.se/">celebrates in Piteå, Sweden</a> with its less than 4.5 hours of daylight, we wish him a happy birthday from stateside.<br />
<div>
<br /></div>
<div>
Looking back in this blog, in 2015 I wrote about the <a href="http://blog.computationalcomplexity.org/2015/01/the-history-of-history-of-history-of.html">history of the history of computing</a> including this wonderful talk by Knuth, <a href="https://youtu.be/gAXdDEQveKw">Let's Not Dumb Down the History of Computer Science</a>.<br />
<br /></div>
<iframe allow="encrypted-media" allowfullscreen="" frameborder="0" gesture="media" height="315" src="https://www.youtube.com/embed/gAXdDEQveKw" width="560"></iframe>
<br />
<div>
<br />
Donald Knuth is known for many things, important algorithms, the brilliant book series The Art of Computer Programming, TeX, the many awards he's won and the <a href="https://www.sigact.org/Prizes/Knuth/">award named after him</a>. In my favorite story, back in the 70's when Knuth started working on Volume 4 (still in progress), he wanted a good name for those hard problems that Cook and Karp found. His candidates "Herculean", "Formidable" and "Arduous" didn't get much traction so he <a href="http://blog.computationalcomplexity.org/2010/11/by-any-other-name-would-be-just-as-hard.html">ran a contest</a> which had some interesting suggestions before reluctantly going with the name "NP-complete" that came out of a group from MIT.<br />
<br />
In his <a href="https://doi.org/10.1145/1811129.1811130">SIGACT News article</a> that described the contest, Donald Knuth was so sure that these hard problems would remain hard that he put everything on the line, offering anyone who could prove P = NP a live turkey. May Don Knuth and his turkey continue to celebrate many more birthdays.</div>
Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com5tag:blogger.com,1999:blog-3722233.post-54655442579653124282018-01-07T21:21:00.002-05:002018-01-07T22:20:19.733-05:00A new largest prime found!A new largest KNOWN prime has been discovered, and it's 23 million digits long.<br />
<br />
Nate Silver's website had an article about it (written by Oliver Roeder) <a href="https://fivethirtyeight.com/features/we-have-a-new-prime-number-and-its-23-million-digits-long/">here</a><br />
<br />
An article about why people do this is <a href="http://primes.utm.edu/notes/faq/why.html">here</a><br />
<br />
Lance posted about finding large primes in 2006 <a href="http://blog.computationalcomplexity.org/2006/01/search-without-end.html">here</a><br />
<br />
I'll just make some random comments<br />
<br />
1) The prime is 2<sup>77,232,917</sup>-1<br />
<br />
2) The prime is not much bigger than the previous champion.<br />
<br />
3) More generally, the graph (in Oliver Roeder's article) shows that from 1600 to about 1951 there was slow progress, but since then there has been LOTS of progress. See the table in <a href="https://en.wikipedia.org/wiki/Largest_known_prime_number">this</a> article. I had wanted to say <i>every year a new prime was found</i> but, alas, the pattern is not that simple. Even so, lots of new records.<br />
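Two quick Python checks on the prime in point 1: the Lucas-Lehmer test that Mersenne-prime searches use (run here only on tiny exponents -- the record exponent 77,232,917 needs serious hardware), and the digit count:

```python
import math

def lucas_lehmer(p):
    """True iff the Mersenne number 2^p - 1 is prime, for odd prime p."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print([p for p in (3, 5, 7, 11, 13, 17, 19) if lucas_lehmer(p)])
# -> [3, 5, 7, 13, 17, 19]   (2^11 - 1 = 2047 = 23 * 89 is composite)

# Number of decimal digits of 2^77232917 - 1
print(int(77_232_917 * math.log10(2)) + 1)  # -> 23249425, about 23 million
```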
<br />
4) I'll list the reasons given for why people do this and my comments on them.<br />
<br />
a) Tradition! (Note to self- write a novelty song to the tune of Fiddler-on-the-roof's Tradition about why people work on various mathematical things)<br />
<br />
b) For the by-product of the quest. This one does make sense and I am sure has driven and helped test out some research. Reminds me of the spinoffs from going to the moon (see <a href="https://en.wikipedia.org/wiki/NASA_spinoff_technologies">here</a>). Contrary to what I heard as a kid, the orange-powder-drink Tang was not a spinoff. But there were many others. Of course- would these spinoffs have happened anyway? Hard to know.<br />
<br />
c) People collect rare and beautiful items. I don't really see this one. Finding a large prime doesn't make it yours, it belongs to the ages! And I don't think people get their names on the prime either. The only prime that has someone's name on it is the famous <a href="https://en.wikipedia.org/wiki/57_(number)">Grothendieck prime</a> which is 57. Oh well. There are sets of primes with people's names on them: <a href="https://en.wikipedia.org/wiki/Mersenne_prime">Mersenne primes</a>, <a href="https://en.wikipedia.org/wiki/Gaussian_integer#Gaussian_primes">Gaussian primes</a> (which are subsets of Gaussian integers so maybe shouldn't count), <a href="https://en.wikipedia.org/wiki/Eisenstein_prime">Eisenstein primes</a>, and<br />
<a href="https://en.wikipedia.org/wiki/Sophie_Germain_prime">Sophie Germain primes</a>. If you know of any other primes or set of primes named after someone, leave a comment please.<br />
<br />
d) For the glory! Maybe, but given how briefly people hold the record, fame is fleeting.<br />
<br />
e) To test the hardware. This one I didn't know! I'll quote the article as to why primes are good for this<br />
<blockquote class="tr_bq">
Why are prime programs used this way? They are intensely CPU and bus bound. They are relatively short, give an easily checked answer (when run on a known prime they should output true after their billions of calculations). They can easily be run in the background while other "more important" tasks run, and they are usually easy to stop and restart.</blockquote>
f) To learn more about their distribution. The prime number theorem was conjectured from data. We have so many primes now that I wonder if a few more really help formulate conjectures.<br />
<br />
g) For the money. The first person to find a ten-million-digit prime gets $100,000. The first person to find a one-billion-digit prime gets $250,000. Wow! Except that the article must be a bit old since the $100,000 prize was claimed in 2009 (see <a href="https://www.eff.org/press/archives/2009/10/14-0">here</a>). Still, there's that one-billion-digit prize out there!<br />
<br />
5) Mersenne primes are of the form 2<sup>n</sup>-1. It is known that n must be prime for 2<sup>n</sup>-1 to be prime (this is not hard). There are much faster primality testing algorithms for Mersenne primes than for arbitrary primes. But see next item.<br />
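Presumably the faster test alluded to here is the classic Lucas-Lehmer test. Here is a minimal sketch of it in Python (my own illustration, not from the post; the function name is made up):

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test: for an odd prime p, the Mersenne number
    2**p - 1 is prime iff s(p-2) == 0 (mod 2**p - 1), where
    s(0) = 4 and s(k+1) = s(k)**2 - 2."""
    m = (1 << p) - 1          # the Mersenne number 2^p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m   # reduce mod m each step to keep numbers small
    return s == 0

# 2^13 - 1 = 8191 is prime, while 2^11 - 1 = 2047 = 23 * 89 is not:
# a prime exponent is necessary but not sufficient.
```

One reason Mersenne candidates are so much cheaper to test than arbitrary numbers of the same size is that reduction mod 2<sup>p</sup>-1 can be done with shifts and adds rather than full division.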
<br />
6) While writing this blog post I looked up non-mersenne primes. It seems like the largest one is<br />
<br />
10223*2<sup>31172165</sup> + 1 and was discovered in 2016.<br />
<br />
But of more interest: there is no Wikipedia page on non-Mersenne primes, just some outdated pages that don't have the most recent information. As the kids say, it's not a thing.<br />
<br />
7) I'll add one more reason why people work on this, but it's more of a tautology: people work on finding large primes because they can! By contrast, finding VDW numbers is hard and unlikely to see much progress.<br />
<br />
8) I think the most recent advances have come from computing power and not number theory. (If this is incorrect let me know with a polite comment.)<br />
<br />
9) Have there been spinoffs in either number theory OR computing power lately?<br />
<br />
10) I wonder if there will come a point where progress gets hard again and the graph of largest known primes flattens out. I tend to think yes, but hard to say when.GASARCHhttp://www.blogger.com/profile/06134382469361359081noreply@blogger.com4tag:blogger.com,1999:blog-3722233.post-26536808726486106042018-01-05T11:20:00.000-05:002018-01-05T11:20:37.476-05:00Which of these Math acronyms are well known? The last time I taught Grad Ramsey Theory there were very good math grads and ugrads in it. They used some acronyms - some I knew, some I didn't know (but know now). I am sure some are well known and some are not. I don't know which is which. Here is the list and comments:<br />
<div>
<br /></div>
<div>
WLOG- Without Loss of Generality. This one I know and it seems well known-- when I Googled it, the first page is all this definition. (Maybe I shouldn't use the term ``Googled''- I've heard that brand names don't like it when they become generic terms like `Googled'. Kleenex, Photoshop, Xerox had this fate. There is a word for it- genericide)</div>
<div>
<br /></div>
<div>
ITOT- It turns out that. An Applied Math Prof used this in a course I had in 1977. I have not seen it used since then. I don't even use it. </div>
<div>
<br /></div>
<div>
BWOC- By Way of Contradiction. I thought this was better known. When I google it I got to <a href="https://acronyms.thefreedictionary.com/BWOC">this</a> page which tells me it stands for:</div>
<div>
<br /></div>
<div>
Big Wolf on Campus (TV Show)</div>
<div>
<br /></div>
<div>
By Weight of Cement (Oil and Gas industry)</div>
<div>
<br /></div>
<div>
Big Women on Campus (GOOD- the counterpart to BMOC)</div>
<div>
<br /></div>
<div>
By Way of Contradiction (Given the other items on the list I'm surprised it made the cut)</div>
<div>
<br /></div>
<div>
Bob Wayne's Oil company</div>
<div>
<br /></div>
<div>
FTSOC- For the Sake of Contradiction. NOT, as Google told me, Fuck this Shit O'Clock (reminds me of when I use FML for Formula and the class tells me it means Fuck My Life)</div>
<div>
<br /></div>
<div>
WTS- This was a new one for me. It took a while to get it from context but it was Want to Show. Google gives Women in Transportation. </div>
<div>
<br /></div>
<div>
NTS- Need to show. Also a radio station and the National Traffic System.</div>
<div>
<br /></div>
<div>
I think the only one of these that is standard is WLOG. The rest I think could be useful. But I ask you- are any of these standard? Are there ones that are standard that I missed? Of course, the great thing about standards is that there are so many of them.</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
GASARCHhttp://www.blogger.com/profile/06134382469361359081noreply@blogger.com13tag:blogger.com,1999:blog-3722233.post-13974065694423024632017-12-27T13:40:00.000-05:002017-12-29T12:12:49.643-05:00Complexity Year in Review 2017Theorem of the year goes to the resolution of the dichotomy conjecture. I wrote <a href="http://blog.computationalcomplexity.org/2017/02/the-dichotomy-conjecture.html">about the conjecture</a> in February and while the Feder et al. paper didn't hold up, two later papers seem to resolve the conjecture.<br />
<br />
<div style="text-align: center;">
<a href="https://doi.org/10.1109/FOCS.2017.37">A dichotomy theorem for nonuniform CSPs</a> by Andrei Bulatov</div>
<div style="text-align: center;">
<a href="https://doi.org/10.1109/FOCS.2017.38">A Proof of CSP Dichotomy Conjecture</a> by Dmitriy Zhuk<br />
<br />
<div style="text-align: left;">
I checked with experts in the field, and at least one of these papers, and more likely both, ought to be correct.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Runners up include two matching papers I <a href="http://blog.computationalcomplexity.org/2017/11/matching-and-complexity.html">posted about last month</a>, Svensson and Tarnawski who give a <a href="https://doi.org/10.1109/FOCS.2017.70">quasi-NC algorithm for general graph matching</a> and Anari and Vazirani who give an <a href="https://arxiv.org/abs/1709.07822">NC algorithm for matching on planar graphs</a>. We also had the nice <a href="https://doi.org/10.1145/3055399.3055409">quasi-polynomial time algorithm for parity games</a> by Calude, Jain, Khoussainov, Li and Stephan that I <a href="http://blog.computationalcomplexity.org/2017/03/parity-games-in-quasipolynomial-time.html">posted on last March</a>.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
In <a href="http://blog.computationalcomplexity.org/2016/12/complexity-year-in-review-2016.html">last year's review</a> we said "2016 will go down as a watershed year for machine learning" yet somehow it paled against 2017 with breakthroughs in <a href="http://blog.computationalcomplexity.org/2017/12/our-ai-future-good-and-ugly.html">chess</a>, <a href="http://blog.computationalcomplexity.org/2017/02/liberatus-wins-at-poker.html">poker</a>, <a href="https://www.nasa.gov/press-release/artificial-intelligence-nasa-data-used-to-discover-eighth-planet-circling-distant-star">astronomy</a> not to mention continuous advances in machine translation, autonomous vehicles and everything else. Maybe next year ML can write the 2018 year in review.<br />
<br />
We had an awesome eclipse to remind us of the wonders of the world and almost made me forget about US politics. Computing keeps growing and how do we find the resources to train people from pre-school through college and throughout their lives? How much should we worry about the dominance of a handful of computing companies? </div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Thanks to our guest posters <a href="http://blog.computationalcomplexity.org/2017/01/guest-post-about-first-women-in.html">Erin Chambers, Brittany Terese Fasy, Lori Ziegelmeier</a>, <a href="http://blog.computationalcomplexity.org/2017/08/i-wanted-to-address-diversity-after.html">Molly Fortnow</a> and <a href="http://blog.computationalcomplexity.org/2017/05/how-to-solve-it.html">Periklis Papakonstantinou</a>. </div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
<span style="font-family: inherit;">We remember <a href="http://blog.computationalcomplexity.org/2017/02/ken-arrow-and-oscars-voting.html">Ken Arrow</a>, <a href="https://lucatrevisan.wordpress.com/2017/10/23/corrado-bohm/">Corrado Bohm</a>, <a href="https://lucatrevisan.wordpress.com/2017/09/26/3915/">Michael Cohen</a>, <a href="http://www.ustavinformatiky.cz/petrhajek/">Petr Hájek</a>, <a href="http://blog.computationalcomplexity.org/2017/10/monty-hall-1921-2017-and-his-problem.html">Monty Hall</a>, <a href="http://binaire.blog.lemonde.fr/2017/09/22/ciao-maurice/">Maurice Nivat</a>, <span style="font-size: 11pt; text-align: center; white-space: pre-wrap;"><a href="http://blog.computationalcomplexity.org/2017/02/raymond-smullyan-was-born-on-may-25.html">Raymond Smullyan</a>, <a href="http://blog.computationalcomplexity.org/2017/07/peter-wegner-1932-2017.html">Peter Wegner</a> and <a href="https://www.nytimes.com/2017/09/11/science/lotfi-zadeh-father-of-mathematical-fuzzy-logic-dies-at-96.html">Lotfi Zadeh</a>. </span></span></div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
2018 is just full of questions: What will the Internet look like post-net neutrality? How will the new tax code play out? Where will Amazon put HQ2? What will machine learning do next? What can quantum computers with 50 qubits accomplish? Will bitcoin move to $300K or 30 cents? And what great advances in complexity await us?</div>
</div>
Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com0tag:blogger.com,1999:blog-3722233.post-74354767218484488942017-12-21T10:14:00.000-05:002017-12-21T10:14:33.218-05:00Having Faith in ComplexityI believe P ≠ NP as much as anyone else. Nevertheless, should we worry about the trust we put in complexity?<br />
<br />
You don't need the full power of P = NP to break cryptography. I don't worry about quantum computers breaking RSA and related protocols. It won't sneak up on us--when (or if) quantum computing gets anywhere close to factoring large numbers, we'll figure out a strategy to change our protocols and to protect the information we already have. However if someone comes up with an algorithm tomorrow that cracks AES, we'll have a crisis on our hands as AES is so widely used that the algorithm is embedded into computer chips. Perhaps we can mitigate the damage before the algorithm spreads or at least take our information off-line until we develop new solutions.<br />
<br />
But what about blockchains, the technology that underlies cryptocurrencies such as Bitcoin? A blockchain consists of a series of transactions collected into a sequence of blocks, where each block contains a hash of the previous block, with the transactions themselves hashed and often encrypted with public-key cryptography. One would hope that breaking the cryptography would be caught quickly and we'd still have a legit record of transactions saved somewhere. The transactions themselves might be compromised, especially if anonymity was built into the system.<br />
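Here is a toy sketch of the hash-chaining idea just described (my own illustration in Python, not how Bitcoin is actually implemented; real blockchains add proof-of-work, signatures and much more). Each block commits to the previous block's hash, so tampering with any recorded transaction invalidates every later block:

```python
import hashlib
import json

def block_hash(prev_hash, txs):
    """Hash of a block's contents: the previous block's hash plus transactions."""
    body = json.dumps({"prev": prev_hash, "txs": txs}, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def add_block(chain, txs):
    """Append a block committing to the current tip of the chain."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"prev": prev, "txs": txs, "hash": block_hash(prev, txs)})

def verify(chain):
    """Check every block's own hash and its link to the previous block."""
    for i, b in enumerate(chain):
        if b["hash"] != block_hash(b["prev"], b["txs"]):
            return False
        if i > 0 and b["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, ["alice pays bob 5"])
add_block(chain, ["bob pays carol 2"])
assert verify(chain)
chain[0]["txs"] = ["alice pays mallory 500"]   # tamper with history
assert not verify(chain)                        # detected: hash no longer matches
```

The point of the sketch: breaking the record requires breaking the hash function, which is exactly the kind of complexity assumption the post is worrying about.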
<br />
Bitcoin itself, as I write this, has a total market cap of over $250 billion based fully on cryptography. The cryptography will probably hold up; Bitcoin investors have more to worry about from bad implementations or the pop of a bubble of unrealistic expectations. But as we watch so many exciting advances in computing tackling challenges that we would never have expected to get solved, should we continue to build our economy on the hope that other advances won't happen?Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com2tag:blogger.com,1999:blog-3722233.post-34029350249111757412017-12-17T22:18:00.001-05:002017-12-17T22:18:15.766-05:00Monkey First!The following story is not true, nor has anyone claimed it's true, but it has a point:<br />
<br />
<i>A company gets a contract to do the following: train a monkey to sit on a 10-foot pedestal and recite some passages of Shakespeare. After a week they announce they have made progress! They invite their investors to see what progress they have made! They unveil a curtain and there is... a 10-foot pedestal.</i><br />
<i><br /></i>
This story was in an article about how Google does moonshots-- that is, high-risk, high-reward, innovative work. The article is <a href="https://www.theatlantic.com/magazine/archive/2017/11/x-google-moonshot-factory/540648/">here</a>. (How the Atlantic makes money when they have stuff online is a mystery to me. Perhaps they do it in a very innovative way.) The point is that it's BAD to have tangible results (like the pedestal) that are not getting at the heart of the problem. So Google has various incentives to do the important stuff. Their slogan is MONKEY FIRST.<br />
<br />
This also applies to our research. The following sequence of events is common:<br />
<br />
1) Prove some scattered results.<br />
<br />
2) Pedestal or Monkey? You could write up what you have, polish it, write up some nice LaTeX macros to make the writing of the paper easier OR you could try to find the unifying principle, which would be hard and might not work, but if it works that would be, as the kids say, Jawesome (Jaw-dropping awesome). The sad answer is that which one you do might depend on when the next conference deadline is.<br />
<br />
More generally there is a tension between safe, doable research (Pedestal) and high-risk, high-reward research (Monkey). Is our incentive structure set up to encourage high-risk, high-reward work? The tenure system is supposed to do it and it DOES in some cases, but not as much as it could since there are other factors (salary, promotion to full prof, grants).<br />
<br />
Does the system encourage high-risk high-reward? Should it? Could we do better? What are your experiences? I have no answers (especially to the question of what are your experiences) so I welcome your comments.GASARCHhttp://www.blogger.com/profile/06134382469361359081noreply@blogger.com3tag:blogger.com,1999:blog-3722233.post-39467111973362251732017-12-13T13:46:00.001-05:002017-12-14T10:43:37.871-05:00Our AI future: The Good and the UglyI don’t directly work in machine learning but one cannot deny the progress it has made and the effect it has on society. Who would have thought even a few years ago that ML would have basically solved face and voice recognition and would translate nearly as well as humans?<br />
<br />
The <a href="https://nips.cc/">Neural Information Processing Systems conference</a> held last week in Long Beach, California, sold out its 7500 registration slots in 12 days. NIPS, not long ago just another academic conference, has become a major machine learning job market <a href="https://www.bloomberg.com/news/articles/2017-12-06/demand-for-ai-talent-turns-once-staid-conference-into-draft-day">where newly minted Ph.D.s earn north of $300,000 and top-ranked senior academics command multimillion-dollar, multiyear contracts</a>.<br />
<br />
AlphaZero, an offshoot of Google’s Go programs, <a href="https://arxiv.org/abs/1712.01815">learned chess given only the rules</a> in just four hours (on 5000 tensor processing units) and easily beats the best human-designed chess programs. <a href="https://youtu.be/UcAfg9v_dDM">Check out</a> this match against Stockfish.<br />
<br />
<iframe allow="encrypted-media" allowfullscreen="" frameborder="0" gesture="media" height="315" src="https://www.youtube.com/embed/UcAfg9v_dDM" width="560"></iframe>
<br />
<br />
Just a trend that machine learning often works better when humans just get out of the way.<br />
<br />
The advances in machine learning and automation have a dark side. Earlier this week I attended the <a href="https://cra.org/events/summit-technology-jobs/">CRA Summit on Technology and Jobs</a>, one of a series of meetings organized by Moshe Vardi on how AI and other computing technology will affect the future job market. When we talk about ethics in computer science we usually talk about freedom of information, privacy and fairness but this may be the biggest challenge of them all.<br />
<br />
The most stark statistic: Contrary to what certain politicians may tell you, manufacturing output in the United States has never been higher, but <a href="http://www.acting-man.com/?p=46399">manufacturing jobs have declined dramatically</a> due to automation.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-LW_N4orfguc/WjF1T6qF2HI/AAAAAAABelU/AW3_WpuIESQ-iQRX-BZzxqLoRCL0Fl7aACLcBGAs/s1600/1-Manufacturing-output-vs.-employment.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="786" data-original-width="1600" height="196" src="https://2.bp.blogspot.com/-LW_N4orfguc/WjF1T6qF2HI/AAAAAAABelU/AW3_WpuIESQ-iQRX-BZzxqLoRCL0Fl7aACLcBGAs/s400/1-Manufacturing-output-vs.-employment.png" width="400" /></a></div>
<br />
The changes have hit hardest for white middle-class less educated males. While this group usually doesn’t get much attention from academics, they have been hit hard, often taking less rewarding jobs or dropping out of the job market entirely. We're seeing many young people living with their parents, spending their days playing video games, and we're seeing a spike in suicides and drug use. Drug overdose is now the leading cause of death of men under 50.<br />
<br />
There are no easy solutions. Universal basic income won’t solve the psychological need a job plays in being a part of something bigger than oneself. In the end we'll need to rethink the educate-work-retire cycle towards more life-long learning and find rewarding jobs that go around automation. This all starts by having a government that recognizes these real challenges.Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com17tag:blogger.com,1999:blog-3722233.post-56514082559442877592017-12-12T09:33:00.000-05:002017-12-13T07:26:24.321-05:00Interesting Probability on a VERY OLD TV showI have posted about things I see in TV or Movies that are math or CS related:<br />
<br />
Do TV shows overestimate how much a genius can help solve crimes or make really good crystal meth (which seems to be blue)? YES, see <a href="http://blog.computationalcomplexity.org/2013/09/crystal-math-what-numb3rs-and-breaking.html">here</a>.<br />
<br />
Do TV shows get math wrong? YES, see <a href="http://blog.computationalcomplexity.org/2007/09/math-on-tv.html">here</a> and about 90% of the episodes of Numb3rs.<br />
<br />
Closer to home- do TV shows say stupid things about P vs NP? Elementary (one of the two modern-day Sherlock Holmes shows) did; see <a href="http://blog.computationalcomplexity.org/2013/10/p-vs-np-is-elementary-no-p-vs-np-is-on.html">here</a>.<br />
<br />
Did Kirk and Spock really defeat a computer by a trick that wouldn't work now? Yes, see Lance's post on this <a href="http://blog.computationalcomplexity.org/2015/11/star-trek-computing.html">here</a>.<br />
<br />
Do TV shows use the word Quantum incorrectly? They do, but they are not alone in this; see <a href="http://blog.computationalcomplexity.org/2015/11/is-word-quantum-being-used-properly-by.html">here</a>.<br />
<br />
Do people writing Futurama get their math right? Yes- see <a href="http://blog.computationalcomplexity.org/2010/08/new-math-on-futurama_24.html">here</a>.<br />
<br />
Do people writing 24 get their math wrong? Yes- see <a href="http://blog.computationalcomplexity.org/2010/05/24-really-bad-game-theory-technology.html">here</a>.<br />
<br />
Does the Big Bang Theory mostly get things right? Yes! - see <a href="http://blog.computationalcomplexity.org/2014/04/the-big-bang-theory-hits-close-to-home.html">here</a><br />
<br />
There are more (Seinfeld thinks comedians should learn proofs! Really- see <a href="http://blog.computationalcomplexity.org/search?q=Seinfeld">here</a>) but I can make my point just with the ones above.<br />
<br />
ALL of the TV shows except Star Trek were from after 2000 (or so). So, with the exception of science fiction, math refs and sci refs in TV shows are relatively recent- or so I had thought.<br />
<br />
Which is why I was surprised and delighted to see, in an episode of the old western (anti-western? satire of a western?) Maverick, from 1958 (before I was born!), called Rope of Cards, a CORRECT and INTERESTING math reference. Maverick bets that a random 25 cards from a deck can be arranged into five 5-card pat hands (I had to look that up-- hands where you don't want to discard any cards, so a flush, a straight, or a full house would qualify; four of a kind would be pat if there were no wild cards). The sucker takes the bet and loses. Maverick later says the odds are high and calls the game Maverick Solitaire. <i>And that is now the name of the puzzle</i>- see <a href="http://www.solitairelaboratory.com/maverick.html">here</a>. The probability is around 0.98.<br />
<br />
I call this a mention of math since it has to do with probability- which may be a stretch. And I doubt the scene would encourage people to go into math. But it might encourage one to learn probability either to sucker others or to not be suckered.<br />
<br />
So the question now is- are there other non-science-fiction refs to math in older TV shows?<br />
I suspect yes - similar to the one above which is gambling and probability. What is the earliest mention of math on a TV show? The oldest that did not involve science fiction or gambling?<br />
<br />
<br />GASARCHhttp://www.blogger.com/profile/06134382469361359081noreply@blogger.com3tag:blogger.com,1999:blog-3722233.post-68086756702430828622017-12-07T10:21:00.000-05:002017-12-07T20:14:26.103-05:00Razor's EdgeInformally the sensitivity conjecture asks whether every hard Boolean function has a razor's edge input, where flipping a random bit has a reasonable chance of flipping the output.<br />
<div>
<br /></div>
<div>
Let's be more precise. We consider functions f mapping {0,1}<sup>n</sup> to {0,1}. For every input x, the decision tree complexity at x is the least number of bits of x you would need to query to decide whether the function outputs 0 or 1. The decision tree complexity of a function is the maximum decision tree complexity over all possible x. Most interesting functions have high decision tree complexity; even the lowly OR function requires querying every bit on the input of all zeroes. The decision tree complexity is polynomially equivalent to randomized complexity, quantum complexity, certificate complexity, and the degree of a polynomial that computes the function exactly or approximately. The <a href="https://doi.org/10.1145/2897518.2897644">recent</a> <a href="https://eccc.weizmann.ac.il/report/2015/175/">paper</a> by Aaronson, Ben-David and Kothari gives a nice chart showing the relationship between these measures and references to the various papers giving the bounds.<br />
<br /></div>
<div>
The sensitivity of f on an input x is the number of bit locations i such that f(x)≠f(x⊕i) where x⊕i is x with the ith bit flipped. The sensitivity of f is the maximum sensitivity over all inputs. The sensitivity conjecture states that there is some ε>0 such that the sensitivity of f is at least m<sup>ε</sup> if the decision tree complexity is at least m. If the conjecture were true then for any function with maximal decision tree complexity n (querying every input bit) there must be some razor's edge input x such that flipping a random bit of x has probability at least n<sup>-ε</sup> of flipping the output.<br />
<br />
I find it surprising that we have no proof or counterexample to this purely combinatorial question. There is a generalization of sensitivity known as block sensitivity, which is the largest number of disjoint blocks of bit positions such that flipping all the bits in any one block flips the output bit. Block sensitivity is known to be polynomially related to decision tree complexity.<br />
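These definitions are easy to check by brute force for small n. A quick sketch of my own (the function names are made up, and this is exponential-time illustration code, not a serious tool):

```python
from itertools import product

def sensitivity(f, n):
    """Maximum, over all inputs x in {0,1}^n, of the number of
    positions i where flipping bit i of x flips f's output."""
    best = 0
    for bits in product([0, 1], repeat=n):
        x = list(bits)
        s = sum(f(x[:i] + [1 - x[i]] + x[i + 1:]) != f(x) for i in range(n))
        best = max(best, s)
    return best

OR = lambda x: int(any(x))       # sensitive only at the all-zeroes input
PARITY = lambda x: sum(x) % 2    # sensitive at every input
```

OR has sensitivity n, witnessed only at the all-zeroes input, while parity has sensitivity n everywhere; both are razor's-edge functions in the sense above.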
<br />
In a future post I'll talk about some approaches towards resolving this conjecture.</div>
Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com4tag:blogger.com,1999:blog-3722233.post-48912896670151582972017-12-04T23:00:00.000-05:002017-12-04T23:00:20.553-05:00Fireside chat with Simons Inst Director Dick Karp<div>
<a href="https://simons.berkeley.edu/events/fireside-chat-simons-institute-director-dick-karp">Fireside chat with Dick Karp</a></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
The above link is Samir Khuller interviewing Dick Karp, though it's labelled as a fireside chat with Dick Karp.</div>
<div>
<br /></div>
<div>
Very interesting to hear how TCS has evolved. More generally, it's good to know where you've come from to have a better idea of where you're going.</div>
<div>
<br /></div>
<div>
bill g.</div>
GASARCHhttp://www.blogger.com/profile/06134382469361359081noreply@blogger.com0tag:blogger.com,1999:blog-3722233.post-8573734769057346692017-11-30T07:35:00.000-05:002017-11-30T07:35:36.711-05:00Kolmogorov Complexity and the PrimesBill's <a href="http://blog.computationalcomplexity.org/2017/11/van-der-waerdens-theorem-implies.html">post</a> on how to derive the non-finiteness of the primes from Van der Waerden's theorem reminds me of a nice proof using Kolmogorov complexity.<br />
<br />
A quick primer: Fix some universal programming language. Let C(x), the Kolmogorov complexity of x, be the length of the smallest program that outputs x. One can show by a simple counting argument that for every n there is an x of length n such that C(x) ≥ n: there are 2<sup>n</sup> strings of length n but fewer than 2<sup>n</sup> programs of length less than n. We call such x "random".<br />
<br />
Suppose we had a finite list of primes p<sub>1</sub>…p<sub>k</sub>. Then any number m can be expressed as p<sub>1</sub><sup>e<sub>1</sub></sup>···p<sub>k</sub><sup>e<sub>k</sub></sup>. Pick n large, a random x of length n and let m be the number x expresses in binary. We can compute m from e<sub>1</sub>,…,e<sub>k</sub> and a constant amount of other information, remembering that k is a constant. Each e<sub>i</sub> is at most log m and so we can describe all of them in O(log log m) bits and thus C(m) = O(log log m). But roughly C(m) = C(x) ≥ n = log m, a contradiction.<br />
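The counting intuition behind this proof can be seen numerically: if, say, 2, 3, 5 and 7 were the only primes, then the integers up to N expressible as products of them (the 7-smooth numbers) would be polylogarithmically few, far too few to account for all N integers. A quick sketch of my own (the function `smooth_count` is a made-up name, not part of the original proof):

```python
def smooth_count(limit, primes=(2, 3, 5, 7)):
    """Count the integers in [1, limit] all of whose prime factors
    lie in `primes` -- i.e. numbers of the form 2^a 3^b 5^c 7^d."""
    vals = {1}
    for p in primes:
        extended = set()
        for v in vals:
            while v <= limit:      # multiply in every power of p
                extended.add(v)
                v *= p
        vals = extended
    return len(vals)

# Every integer up to 10 is 7-smooth, but by 2^20 only a few hundred
# of the million-plus integers are: with k fixed primes the count
# grows like (log N)^k.
```

So for large N almost every integer, and in particular a random (incompressible) one, needs a prime outside any finite list.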
<br />
But we can do better. Again pick n large, a random x of length n and let m be the number x expresses in binary. Let p<sub>i</sub> be the largest prime that divides m where p<sub>i</sub> is the ith prime. We can describe m by p<sub>i</sub> and m/p<sub>i</sub>, or by i and m/p<sub>i</sub>. So we have C(m) ≤ C(i,m/p<sub>i</sub>) ≤ C(i) + C(m/p<sub>i</sub>) + 2 log C(i) ≤ log i + log m/p<sub>i</sub> + 2 log log i + c. The 2 log C(i) term is needed to specify the separation between the program for i and the program for m/p<sub>i</sub>.<br />
<br />
Since C(m) ≥ log m, we have<br />
log m ≤ log i + log (m/p<sub>i</sub>)+ 2 log log i + c<br />
log m ≤ log i + log m - log p<sub>i</sub> + 2 log log i + c<br />
log p<sub>i</sub> ≤ log i + 2 log log i + c<br />
p<sub>i</sub> ≤ O(i (log i)<sup>2</sup>)<br />
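As a numerical sanity check of this final bound (my own sketch, not from the post; the sieve size uses the standard estimate p<sub>n</sub> &lt; n(ln n + ln ln n) for n ≥ 6):

```python
import math

def first_primes(n):
    """Sieve of Eratosthenes returning the first n primes (n >= 6)."""
    limit = int(n * (math.log(n) + math.log(math.log(n)))) + 10
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, limit + 1, i):
                is_prime[j] = False
    return [i for i, ok in enumerate(is_prime) if ok][:n]

primes = first_primes(1000)
# The Kolmogorov bound p_i = O(i (log i)^2): here the constant 1 already
# suffices for 10 <= i <= 1000, while i log i alone tracks p_i closely.
assert all(primes[i - 1] <= i * math.log(i) ** 2 for i in range(10, 1001))
```

The slack in the bound is visible in the data: p<sub>i</sub>/(i (log i)<sup>2</sup>) drifts down toward 0, matching the "one log factor off" remark below.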
<br />
The prime number theorem has p<sub>i</sub> approximately i log i, so we get just a log factor off from optimal with simple Kolmogorov complexity.<br />
<br />
I wrote a <a href="http://bibbase.org/network/publication/fortnow-kolmogorovcomplexity-2001">short introduction</a> to Kolmogorov complexity with this proof. I originally got the proof from the <a href="http://amzn.to/2jxLOhx">great text</a> on Kolmogorov complexity from Li and Vitányi and they give credit to Piotr Berman and John Tromp.Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com3tag:blogger.com,1999:blog-3722233.post-45118929026275930932017-11-27T13:18:00.000-05:002017-11-28T15:08:38.059-05:00Van der Waerden's theorem implies the infinitude of the primes<br />
(Sam Buss and Denis Hirschfeld helped me on this post.)<br />
<br />
I was reading the table of contents of the American Math Monthly and saw an article by Levent Alpoge entitled<br />
<br />
Van der Waerden and the primes<br />
<br />
in which he showed from VDW's theorem that the set of primes is infinite. The article is <a href="http://www.cs.umd.edu/~gasarch/BLOGPAPERS/primesinfvdwamm.pdf">here</a> and <a href="http://www.jstore.org/stable/10.4169/amer.math.monthly.122.8.784">here</a>. My writeup of it <a href="http://www.cs.umd.edu/~gasarch/BLOGPAPERS/primesinfvdw.pdf">is here</a>. Prof K saw me reading the paper.<br />
<br />
K: I see you are interested in proving the set of primes is infinite from VDW's theorem.<br />
<br />
BILL: Yes, who wouldn't be!!!!<br />
<br />
K: Well, lots of people. Including me. Can't you just state VDW's theorem and then give the normal proof? Would that count? Besides, we already have an easy proof that the set of primes is infinite without using VDW's theorem.<br />
<br />
I turn K's comments into a valid question: What does it mean to <i>prove A from B </i>if A is already known?<br />
<br />
There are two issues here, informal and formal.<br />
<br />
Informally: If you look at the proof of <i>VDW-->primes infinite</i>, the steps in that proof look easier than the usual proof that the set of primes is infinite. And the proof is certainly different. If you read the paper you will see that I am certainly not smuggling in the usual proof. Also, the proof truly does use VDW's theorem.<br />
<br />
Formally one could (and people working in Reverse Mathematics do similar things- see the books <a href="http://www.personal.psu.edu/t20/sosoa/chapter1.pdf">Subsystems of Second Order Arithmetic by Simpson</a> and <a href="http://www.amazon.com/Slicing-Truth-Mathematics-Combinatorial-Mathematical-ebook/dp/B00Q5Y3YW6/ref=sr_1_2?ie=UTF8&qid=1445694504&sr=8-2&keywords=Slicing+the+truth">Slicing the Truth</a>, reviewed <a href="http://www.cs.umd.edu/~gasarch/BLOGPAPERS/slice.pdf">here</a>) devise a weak axiom system that itself cannot prove <i>the set of Primes is Infinite</i>, but can prove the implication <i>VDW-->Primes infinite</i>. Note that Reverse Mathematics does this sort of thing, but for proofs involving infinite objects, nothing like what I am proposing here.<br />
<br />
Open Problem 1: Find a proof system where the implication VDW-->primes infinite can be proven, but primes infinite cannot. Sam Buss pointed out to me that for the weak system IΔ<sub>0</sub> it is not known if it can prove the primes are infinite.<br />
<br />
Open Problem 2: Find a proof system where you can do both proofs, but the proof of the implication is much shorter. Perhaps look at (VDW--> there are at least n primes) and (there are at least n primes) and look at the length of proof as a function of n.<br />
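One way to make Open Problem 2 precise (the notation below is mine, not standard: P<sub>n</sub> abbreviates "there are at least n primes" and S is the candidate proof system):

```latex
% P_n abbreviates "there are at least n primes"; VDW is van der Waerden's theorem.
% Open Problem 2 asks for a proof system S in which the implication is much
% easier to prove than the conclusion itself:
\[
  \bigl|\text{shortest } S\text{-proof of } (\mathrm{VDW} \to P_n)\bigr|
  \;=\; o\Bigl(\bigl|\text{shortest } S\text{-proof of } P_n\bigr|\Bigr)
  \quad\text{as } n \to \infty .
\]
```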
<br />
Open Problem 3: The statement <i>there are no primes with n bits with leading bit 1</i> can be expressed as a propositional statement. Get lower bounds on its refutation in (say) resolution. (A commenter pointed out an error in a prior version of this one so be wary- there may be an error here as well.)<br />
<br />
I am suggesting work on the reverse mathematics of systems much weaker than RCA<sub>0</sub>. I do not know if this is a paper, a PhD thesis, a career, a dead end, or already pretty much done, but I am not aware of it.<br />
<br />
<br />GASARCHhttp://www.blogger.com/profile/06134382469361359081noreply@blogger.com6tag:blogger.com,1999:blog-3722233.post-27560967458800077682017-11-20T08:15:00.000-05:002017-11-20T08:15:00.365-05:00The Grad Student TaxBy now you've likely read, from <a href="https://lucatrevisan.wordpress.com/2017/11/07/against-a-61-tax-increase-on-berkeley-students/">Luca</a> or <a href="https://www.scottaaronson.com/blog/?p=3542">Scott</a> or <a href="http://www.phdcomics.com/comics.php?f=1985">PhD Comics</a> or a variety of other sources, about the dangerous changes to the tax code that passed the US House of Representatives last week. Among a number of university-unfriendly policies, the tax code will eliminate the tax exemption for graduate student tuition for students supported with teaching or research duties, which covers nearly every PhD student in STEM fields. The CRA, ACM, IEEE, AAAI, SIAM and Usenix put out a <a href="https://cra.org/govaffairs/wp-content/uploads/sites/6/2017/11/Statement-ComputingCommunity-on-HR1-Provision.pdf">joint statement</a> opposing this tax increase on graduate students. This is real.<br />
<div>
<br /></div>
<div>
Without other changes, a tax on tuition will make grad school unaffordable to most doctoral students. In computer science, where potential PhD students can typically get lucrative jobs in industry, we'll certainly see a precipitous drop in those who choose to continue their studies. Universities will have to adjust by lowering tuition, if finances and state law allow, and raising stipends. US government science funding will at best remain flat, so in almost any scenario we'll see far fewer students pursue PhD degrees, particularly in CS and other STEM fields. Keep in mind we already don't come close to producing enough CS PhD students entering academia to meet the dramatically growing demand, and these moves could frustrate faculty who also might head off to industry.<br />
<br /></div>
<div>
The current Senate proposal leaves the exemption in place, though no one can predict what will happen when the two bills get reconciled. In the best-case scenario this bill goes the same way as the failed health care reform, but Republicans seem desperate to pass something major this fall. So reach out to <a href="https://www.govtrack.us/congress/members">your representatives</a>, especially your senators, and express the need to leave in the exemption.</div>
Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com5tag:blogger.com,1999:blog-3722233.post-62730800565577379912017-11-16T08:42:00.000-05:002017-11-16T09:01:15.823-05:00A Tale of Three RankingsIn the spring of 2018, US News and World Report should release their latest rankings of US graduate science programs, including <a href="https://www.usnews.com/best-graduate-schools/top-science-schools/computer-science-rankings">computer science</a>. These are the most cited of the deluge of computer science rankings we see out there. The US News rankings have a long history, and since they are reputation based they roughly correspond to how we see CS departments, though some argue that reputation changes slowly with the quality of a department.<br />
<br />
US News and World Report also has a new <a href="https://www.usnews.com/education/best-global-universities/computer-science">global ranking</a> of CS departments. The US doesn't fare that well on the list, and the rankings of the US programs on the global list are wildly inconsistent with the US list. What's going on?<br />
<br />
75% of the global ranking is <a href="https://www.usnews.com/education/best-global-universities/articles/subject-rankings-methodology">based</a> on statistics from Web of Science. Web of Science captures mainly journal articles, while in computer science conferences typically have higher reputations and greater selectivity. In many European and Asian universities hiring and promotion often depend heavily on publications and citations in Web of Science, encouraging their professors to publish in journals and thus leading to higher-ranked international departments.<br />
<br />
The CRA rightly <a href="https://cra.org/cra-statement-us-news-world-report-rankings-computer-science-universities/">put out a statement</a> urging the CS community to ignore the global rankings, though I wish they had made a distinction between the two different US News rankings.<br />
<br />
I've <a href="http://blog.computationalcomplexity.org/2014/10/metrics-in-academics.html">never been a fan</a> of using metrics to rank CS departments but there is a relatively new site, Emery Berger's <a href="http://csrankings.org/#">Computer Science Rankings</a>, based on the number of publications in major venues. CS Rankings passes the smell test for both their US and global lists and is relatively consistent with the US News reputation-based CS graduate rankings.<br />
<br />
Nevertheless I hope CS Rankings will not become the main ranking system for CS departments. Departments that wish to raise their ranking would hire faculty based mainly on their ability to publish large numbers of papers in major conferences. Professors and students would then focus on quantity of papers, and this would in the long run discourage risk-taking long-range research, as well as innovations in improving diversity or in educating graduate students.<br />
<br />
As <a href="https://en.wikipedia.org/wiki/Goodhart%27s_law">Goodhart's Law</a> states, "when a measure becomes a target, it ceases to be a good measure". Paradoxically CS Rankings can lead to good rankings of CS departments as long as we don't treat it as such.Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com6