<b>Computational Complexity</b>: Computational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch (<a href="http://blog.computationalcomplexity.org/">blog.computationalcomplexity.org</a>)<br />
<br />
<b>Richard Lipton Wins Knuth Prize</b> (Mon, 15 Sep 2014, by Lance Fortnow)<br />
Georgia Tech professor and fellow <a href="http://rjlipton.wordpress.com/">blogger</a> Richard Lipton <a href="http://www.acm.org/press-room/news-releases/2014/knuth-prize-2014">will receive</a> the 2014 Knuth Prize at the upcoming <a href="http://www.boazbarak.org/focs14/">FOCS conference</a> in Philadelphia. The <a href="http://www.sigact.org/Prizes/Knuth/">Knuth Prize</a> is given jointly by ACM SIGACT and the IEEE TC-MFCS for major research accomplishments and contributions to the foundations of computer science over an extended period of time.<br />
<br />
Lipton's research has <a href="http://www.informatik.uni-trier.de/~ley/pers/hd/l/Lipton:Richard_J=">major results</a> across a large spectrum of theoretical computer science, from probabilistic algorithms to DNA computing to communication complexity. I'd like to highlight a couple of his papers in computational complexity that were hugely influential, including on much of my own research.<br />
<br />
Richard Karp and Lipton <a href="http://dx.doi.org/10.1145/800141.804678">showed</a> that if NP has non-uniform polynomial-size circuits then the polynomial-time hierarchy collapses. The result, and its successors, are a powerful tool, used to show that a number of interesting hypotheses are unlikely to hold, and the result itself plays an important role in circuit lower bounds and pseudorandomness. Most importantly, Karp-Lipton gave the strongest evidence that NP does not have small circuits, justifying the circuit lower bound approach to separating complexity classes.<br />
<br />
In lesser-known but perhaps even more influential work, Lipton <a href="http://books.google.com/books?id=_y-eBBIRBbUC&lpg=PA191&ots=n3Lb9iKsc1&dq=lipton%20new%20directions%20in%20testing&lr&pg=PA191#v=onepage&q=lipton%20new%20directions%20in%20testing&f=false">developed</a> a notion of program testing and showed how to test the permanent function, a result that directly led to the surprising power of interactive proofs. This algebraic characterization of hard problems led us to IP = PSPACE, MIP = NEXP and the PCP theorem.<br />
<br />
Again this just covers a sliver of his impressive canon of research. Congrats Dick!<br />
<i><a href="http://blog.computationalcomplexity.org/2014/09/richard-lipton-wins-knuth-prize.html">Permalink</a> (3 comments)</i><br />
<br />
<b>Beyond the Commodity</b> (Thu, 11 Sep 2014, by Lance Fortnow)<br />
Back in 2005 I <a href="http://blog.computationalcomplexity.org/2005/07/computer-science-in-high-school.html">lamented</a> the fact that students viewed computers as a commodity, a tool they use, like an automobile, but with no reason to understand how or why it works. In 2011 I <a href="http://blog.computationalcomplexity.org/2011/05/is-computer-science-cool-again.html">noticed</a> a change: computers like IBM's Watson were starting to make computer science cool again.<br />
<br />
Now we are in the midst of yet another major change, reflected in renewed interest in high school computer science and huge enrollment growth in universities, particularly among non-majors taking upper-level CS courses. Jobs certainly drive much of this enrollment, but for an important reason: basic computer science principles and reasoning have become a critical tool in almost any business. Every large company tries to glean knowledge from data, deal with security and privacy challenges, and solve big optimization questions in an ever more complex environment. I've been told that car companies will take as many Mechanical Engineering majors with CS minors as Georgia Tech can produce. For what are cars today but computers on wheels?<br />
<br />
We've been down this path before, and trends that seem to be with us forever can die out, leading to computer science disillusionment. Somehow this time seems different, but we'll just have to wait and see.<br />
<i><a href="http://blog.computationalcomplexity.org/2014/09/beyond-commodity.html">Permalink</a> (4 comments)</i><br />
<br />
<b>A Statistical Oddity?</b> (Tue, 09 Sep 2014, by GASARCH)<br />
I keep a list of people who are famous-to-me and old, so that if one of them dies I won't be surprised. When Lauren Bacall died recently I (1) knew who she was, AND (2) knew she wasn't already dead. I DO NOT look at lists of celebs. My list is organic: if I think of someone who seems old ("GEE, I wonder if that famous probabilist Monty Hall is still alive? He is! He's 92.") I look it up, and if they are over 80, they go on the list. Most people are surprised to know that Doris Day is still alive.<br />
<br />
<br />
Okay, so what of it? Bill has another weird hobby. (Add this to collecting satires, collecting papers that apply Ramsey Theory, and writing a satire of papers that apply Ramsey theory).<br />
<br />
I decided to see how many people on my list had the same birthday and see if it was reasonable with regard to probability (the birthday paradox and all that). The list currently has 70 people.<br />
<br />
What I found was probably reasonable in one respect and odd in another.<br />
<br />
REASONABLE: Nine pairs had the same birthday. One triple had the same birthday.<br />
<br />
ODD: There were NO pairs or triples of same birthdays in July, September, October, November, or December.<br />
<br />
I leave as an exercise: How reasonable is what I called reasonable and how odd is what I called odd?<br />
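For readers who want to attack the exercise numerically, here is a quick Monte Carlo sketch (my own illustration, not Bill's actual list; it assumes 365 equally likely birthdays, which real birthday data doesn't quite satisfy):

```python
import random
from collections import Counter

def birthday_simulation(n_people=70, trials=10_000, seed=1):
    """n_people get uniformly random birthdays; return the average number
    of same-birthday pairs and the fraction of trials with >= 9 pairs."""
    random.seed(seed)
    total_pairs = many_pairs = 0
    for _ in range(trials):
        days = Counter(random.randrange(365) for _ in range(n_people))
        # a birthday shared by c people contributes c*(c-1)/2 pairs
        pairs = sum(c * (c - 1) // 2 for c in days.values())
        total_pairs += pairs
        many_pairs += pairs >= 9
    return total_pairs / trials, many_pairs / trials

avg_pairs, frac_nine_or_more = birthday_simulation()
# the expected number of pairs is C(70,2)/365, about 6.6
```

The "odd" part can be checked the same way: count the fraction of trials in which five particular months contain no collision at all.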
<br />
<br /><i><a href="http://blog.computationalcomplexity.org/2014/09/a-statistical-oddity.html">Permalink</a> (1 comment)</i><br />
<br />
<b>Favorite Theorems: Quantum Interactive Proofs</b> (Thu, 04 Sep 2014, by Lance Fortnow)<br />
Practical quantum computing still has many hurdles to clear, but the quantum computing model does generate great complexity questions and often surprising answers.<br />
<blockquote class="tr_bq">
<a href="http://dx.doi.org/10.1145/2049697.2049704">QIP = PSPACE</a> by Rahul Jain, Zhengfeng Ji, Sarvagya Upadhyay and John Watrous. </blockquote>
QIP is the quantum analogue of interactive proof systems. Since IP = PSPACE we get the consequence QIP = IP: quantum gives no advantage over classical randomness in the interactive proof model. I wouldn't read too much into that interpretation; rather, we have a strange situation where IP is far more powerful than we initially suspected, QIP is weaker than expected, and the two collide at PSPACE.<br />
<br />
QIP ⊇ PSPACE follows from IP = PSPACE. In an <a href="http://dx.doi.org/10.1145/335305.335387">earlier paper</a>, Kitaev and Watrous show QIP ⊆ EXP by reducing QIP to an exponential-sized semi-definite program. This paper applies a clever matrix multiplicative weights algorithm to approximate a subclass of SDPs, achieving QIP ⊆ PSPACE.<br />
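The multiplicative weights idea is easiest to see in its scalar (experts) form, where a weight vector plays the role that the density matrix exp(-ε Σ L<sub>t</sub>)/Tr(·) plays in the matrix version. A minimal sketch of the scalar analogue (my illustration; the actual QIP algorithm runs this style of update over Hermitian loss matrices to approximate the SDP):

```python
import math

def multiplicative_weights(loss_rounds, eps=0.1):
    """Scalar multiplicative-weights update: each round, multiply every
    expert's weight by exp(-eps * loss) and play the normalized weights.
    (The matrix version replaces the weight vector by a density matrix.)"""
    n = len(loss_rounds[0])
    w = [1.0] * n
    for losses in loss_rounds:
        w = [wi * math.exp(-eps * li) for wi, li in zip(w, losses)]
    total = sum(w)
    return [wi / total for wi in w]   # final distribution over experts

# Expert 0 never suffers loss, so almost all the weight migrates to it:
dist = multiplicative_weights([[0.0, 1.0]] * 50)
```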
<br />
We've also seen progress on QMIP, quantum interactive proofs with multiple entangled provers who cannot otherwise communicate. Whether QMIP contains MIP = NEXP remained open for a long time because the provers might use their entanglement to cheat. Ito and Vidick <a href="http://dx.doi.org/10.1109/FOCS.2012.11">show</a> that entangled provers can't gain an advantage on the multilinear test used in the original MIP = NEXP paper, and thus QMIP does contain NEXP. Whether QMIP is contained in NEXP remains open.<br />
<br /><i><a href="http://blog.computationalcomplexity.org/2014/09/favorite-theorems-quantum-interactive.html">Permalink</a> (1 comment)</i><br />
<br />
<b>How hard is changing fields? Ask Sheldon!</b> (Tue, 02 Sep 2014, by GASARCH)<br />
In the last season of <i>The Big Bang Theory</i> Sheldon wants to change fields from string theory to something else. (I don't recall if he settled on what it would be, though standard model physics, loop quantum gravity, and calculation of nuclear matrix elements were mentioned negatively, and geology is, according to Sheldon, not a real science.)<br />
<br />
Sheldon faced opposition from his department. And since Physics is... hard... changing fields seems hard.<br />
How hard is it to change fields, both intellectually and in terms of your dept?<br />
<br />
<ol>
<li>If you are hired as a string theorist and you are one of the only ones in your dept, your dept may quite reasonably ask you to still teach string theory. I think this is fine.</li>
<li>Math and Physics are far older than CS so to change fields requires learning more background knowledge. In CS it was easier about 20 years ago, but CS has grown so much that I suspect it would be harder now.</li>
<li>There may be a time when you have fewer papers and grants while you are making the transition. Hence it may be unwise to do this before you get tenure.</li>
<li>If your dept hired you to do string theory and you change to calculation of nuclear matrix elements, should they mind? I would think that as long as it's still good work they wouldn't, and they should give you enough time to get grants and papers in the new field. If you change to a field they don't care about, or to a field outside their area entirely, they might not like that. If Sheldon went into computational complexity, would his dept (physics) be justified in not liking that? If he solved P vs NP then all would be forgiven.</li>
<li>Perhaps the further away you change from your field the better your work has to be before your dept doesn't mind. This could be modelled by a formula. Maybe Sheldon will change to computational dept dynamics and derive it for us.</li>
<li>The obvious thing to say is <i>Depts should allow their professors to wander free as a bird and not erect arbitrary walls since the best work comes from people not being constrained.</i> I would like to think this is true but I wonder--- how many people have changed fields and ended up doing great work? good work? totally failed? </li>
</ol>
<i><a href="http://blog.computationalcomplexity.org/2014/09/how-hard-is-changing-fields-ask-sheldon.html">Permalink</a> (12 comments)</i><br />
<br />
<b>Sixteen Years in the Making</b> (Thu, 28 Aug 2014, by Lance Fortnow)<br />
Every paper has a story, but Sunny Daniels' <a href="http://arxiv.org/abs/1408.6334">arXiv paper</a> from yesterday deserves a blog post.<br />
<br />
We begin in 1982 when Ravi Kannan proved that Σ<sub>2</sub> (the problems computable in NP with an NP oracle) cannot have n<sup>2</sup>-size circuits. Kannan's result holds for n<sup>k</sup>-size circuits for any fixed k, but for this story we'll keep it simple.<br />
<br />
Kannan had an ingeniously simple proof. By diagonalization you can create a language L in Σ<sub>4 </sub>that does not have n<sup>2</sup>-size circuits. Now there are two cases:<br />
<ol>
<li>SAT doesn't have n<sup>2</sup>-size circuits. Since SAT is in Σ<sub>2</sub> we are done.</li>
<li>SAT has n<sup>2</sup>-size circuits. Then by <a href="http://dx.doi.org/10.1145/800141.804678">Karp-Lipton</a> Σ<sub>4</sub> = Σ<sub>2</sub> so L is in Σ<sub>2</sub> and we are done.</li>
</ol>
<div>
Kannan's proof is non-constructive and doesn't give an explicit Σ<sub>2</sub> language that we can show doesn't have n<sup>2</sup>-size circuits: the language is either SAT or L, but one can't be sure which.</div>
<div>
<br /></div>
<div>
In 1998, Sunny Daniels, a PhD student at Rutgers, took Eric Allender's complexity course. Eric offered up finding a constructive version of Kannan's theorem as an open problem. Sunny came up with a solution. He wrote up a draft in LaTeX but for personal reasons dropped out of academia and never published the paper.</div>
<div>
<br /></div>
<div>
In 2003, Jin-Yi Cai and Osamu Watanabe, not aware of Daniels' paper, came up with their own independent construction and presented <a href="http://dx.doi.org/10.1007/3-540-45071-8_22">their paper</a> at the COCOON conference in Montana. Word got back to Sunny but he thought he had lost the LaTeX file and didn't want to retypeset the whole proof.</div>
<div>
<br /></div>
<div>
Sunny had <a href="http://en.wikipedia.org/wiki/Zip_drive">Iomega Zip Drive</a> cartridges from his Rutgers days. Recently he found someone who had a Zip Drive reader and managed to recover the files. In there he discovered the original LaTeX, cleaned the paper up, and sixteen years after his proof put <a href="http://arxiv.org/abs/1408.6334">the paper</a> on ArXiv. Even if you don't care about the math, read the introduction for the complete version of this story.</div>
<div>
<br /></div>
<div>
Kannan's proof actually shows Σ<sub>2</sub>∩Π<sub>2</sub> does not have n<sup>2</sup>-size circuits, and this was later <a href="http://dx.doi.org/10.1016/j.jcss.2003.07.015">improved</a> to S<sub>2</sub><sup>P</sup>. Whether we have an explicit language in Σ<sub>2</sub>∩Π<sub>2</sub> or S<sub>2</sub><sup>P</sup> without n<sup>2</sup>-size circuits still remains open. </div>
<i><a href="http://blog.computationalcomplexity.org/2014/08/sixteen-years-in-making.html">Permalink</a> (2 comments)</i><br />
<br />
<b>A Deadly Side of Complexity</b> (Mon, 25 Aug 2014, by Lance Fortnow)<br />
Better algorithms can lead to better medicine and save lives. Just today Tim Gowers <a href="http://gowers.wordpress.com/2014/08/25/icm2014-emmanuel-cands-plenary-lecture/">discusses</a> Emmanuel Candès' ICM <a href="https://www.youtube.com/watch?v=W-b4aDGsbJk">Plenary Lecture</a>, which among other things describes how Candès' work on compressed sensing leads to shorter MRI scans for children, greatly reducing the risk of oxygen deprivation. Prove P = NP with a practical algorithm, and you'll conquer the worst of our diseases. Sounds great until you realize what we can't do.<div>
<br /></div>
<div>
I was talking to a cancer researcher recently and he pointed out that many of their challenges are indeed algorithmic. But he also brought up the contrapositive: since we don't have great algorithms now, we don't know how to make sense of DNA sequences and in particular don't know how to map genetic markers to an appropriate cancer treatment. He works with cancer patients knowing he can't give them the best possible treatment, not for lack of data, but for lack of ways to analyze that data. People die because we don't have the ability to break through the complexity of these algorithmic challenges.</div>
<i><a href="http://blog.computationalcomplexity.org/2014/08/a-deadly-side-of-complexity.html">Permalink</a> (2 comments)</i><br />
<br />
<b>Turing's Oracle</b> (Thu, 21 Aug 2014, by Lance Fortnow)<br />
<a href="http://www.newscientist.com/data/images/ns/covers/20140719.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://www.newscientist.com/data/images/ns/covers/20140719.jpg" height="200" width="152" /></a>My daughter had a summer project to read and summarize some popular science articles. Having heard me talk about Alan Turing more than a few times, she picked a <a href="http://www.newscientist.com/article/mg22329780.400-turings-oracle-the-computer-that-goes-beyond-logic.html">cover story</a> from a recent New Scientist. The cover copy, "Turing's Oracle: Will the universe let us build the ultimate thinking machine?", sounds like an AI story, but the article is in fact more of an attack on the Church-Turing thesis. The story is behind a paywall but here is an excerpt:<br />
<blockquote class="tr_bq">
He called it the "oracle". But in his PhD thesis of 1938, Alan Turing specified no further what shape it might take...Turing has shown with his universal machine that any regular computer would have inescapable limitations. With the oracle, he showed how you might smash through them. </blockquote>
This is a fundamental misinterpretation of Turing's oracle model. Here is what Turing said in his paper <a href="http://www.turingarchive.org/viewer/?id=469&title=12">Systems of Logic Based on Ordinals</a>, Section 4.<br />
<blockquote class="tr_bq">
Let us suppose we are supplied with some unspecified means of solving number-theoretic problems; a kind of oracle as it were. We shall not go any further into the nature of the oracle <i>apart from saying it cannot be a machine.</i> (emphasis mine)</blockquote>
The rest of the section defines the oracle model and basically argues that for any oracle O, the halting problem relative to O is not computable relative to O. Turing is arguing here that there is no single hardest problem, there is always something harder.<br />
<br />
If you take O to be the usual halting problem then a Turing machine equipped with O can solve the halting problem, just by querying the oracle. But that doesn't mean that you have some machine that solves the halting problem for, as Turing has so eloquently argued in Section 9 of his <a href="http://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf">On Computable Numbers</a>, no machine can compute such an O. Turing created the oracle model, not because he thought it would lead to a process that would solve the halting problem, but because it allowed him to show there are problems even more difficult.<br />
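Turing's argument that no machine computes such an O is a diagonalization, and the same construction relativizes to show the halting problem for O-machines is not O-computable. A toy sketch of the diagonal trick (illustrative only; a Python function stands in for a Turing machine):

```python
def diagonal(halts):
    """Given any claimed halting decider halts(f) -> bool, construct a
    program g that halts exactly when the decider says it doesn't."""
    def g():
        if halts(g):
            while True:      # decider said "halts", so loop forever
                pass
        # decider said "doesn't halt", so return immediately
    return g

# A decider claiming nothing halts is refuted by just running its g:
g = diagonal(lambda f: False)
g()   # returns, so the decider was wrong about g
```

The other branch is the mirror image: a decider claiming g halts forces g to loop forever. Either way the claimed decider fails on its own diagonal program.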
<br />
Turing's oracle model, like so much of his work, has played a major role in both computability and computational complexity theory. But one shouldn't twist this model to think the oracle could lead to machines that solve non-computable problems, and it is sacrilege to suggest that Turing himself would think that.<br />
<i><a href="http://blog.computationalcomplexity.org/2014/08/turings-oracle.html">Permalink</a> (3 comments)</i><br />
<br />
<b>Complexity versus Algorithms: The FOCS Challenge</b> (Mon, 18 Aug 2014, by Lance Fortnow)<br />
In recent years I've heard complaints from my complexity colleagues that FOCS and STOC are mostly algorithms, and from my algorithms buddies that STOC and FOCS are mostly complexity. What exactly counts as a complexity or an algorithms paper has become quite blurred. So let's try an experiment. Below is a poll I've created using titles from the upcoming <a href="http://www.boazbarak.org/focs14/">FOCS conference</a>. Which of these papers do you consider complexity? Does "complexity" in the title make them a complexity paper?<br />
<br />
If you are interested, you can find the manuscripts for most of these papers on the <a href="http://www.boazbarak.org/focs14/accepted.html">FOCS accepted papers list</a>.<br />
<br />
<script type="text/javascript" src="http://www.easypolls.net/ext/scripts/emPoll.js?p=53f2696be4b0ecddd1ca4b67"></script>
<br />
Disclaimer: This is a completely non-scientific poll solely for the interest of the readers of this blog. The results will have no effect on future conference papers.<br />
<i><a href="http://blog.computationalcomplexity.org/2014/08/complexity-versus-algorithms-focs.html">Permalink</a> (5 comments)</i><br />
<br />
<b>Favorite Theorems: Limited Independence</b> (Thu, 14 Aug 2014, by Lance Fortnow)<br />
When can limited randomness act as well as true random bits? <br />
<blockquote class="tr_bq">
<a href="http://dx.doi.org/10.1145/1754399.1754401">Polylogarithmic independence fools AC<sup>0</sup> circuits</a> by Mark Braverman (JACM 2010)</blockquote>
To explain this result consider choosing uniformly from among the following four strings:<br />
<div style="text-align: center;">
000 110 101 011</div>
If we look at any two of the bits, say the first and third, all four possibilities 00 10 11 01 occur equally often. The sequence is thus 2-wise independent. We can get 2-wise independence using only two random bits to choose one of the four strings. In general one can get k-wise independence in n-bit strings using O(k<sup>2</sup> log n) random bits.<br />
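A quick sketch verifying the 2-wise independence claim for these four strings (my illustration):

```python
from itertools import combinations

strings = ["000", "110", "101", "011"]

def pairwise_independent(strings):
    """Check that for every pair of positions, each of the four bit
    patterns 00, 01, 10, 11 appears equally often among the strings."""
    n = len(strings[0])
    for i, j in combinations(range(n), 2):
        counts = {}
        for s in strings:
            counts[s[i] + s[j]] = counts.get(s[i] + s[j], 0) + 1
        if sorted(counts.values()) != [len(strings) // 4] * 4:
            return False
    return True

# True: the four strings are 2-wise independent, yet only two random
# bits are needed to pick one of them.
ok = pairwise_independent(strings)
```

Note the distribution is not 3-wise independent: the three bits together take only four of the eight possible values.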
<br />
Braverman shows that any polylogarithmically independent distribution fools polynomial-size constant-depth circuits (AC<sup>0</sup>). More precisely, for a size-s, depth-d circuit C and k=(log (s/ε))<sup>O(d<sup>2</sup>)</sup>, the probability that C outputs 1 on uniformly random inputs differs by at most ε from the probability that C outputs 1 on inputs chosen from a k-wise independent distribution. Braverman's result followed after <a href="http://dx.doi.org/10.1137/070691954">Bazzi</a> and <a href="http://dx.doi.org/10.1145/1490270.1490273">Razborov</a> proved a similar result for depth-2 circuits (CNF formulas).
<br />
<br />
Another nice result along these lines: Ilias Diakonikolas, Parikshit Gopalan, Ragesh Jaiswal, Rocco Servedio and Emanuele Viola show that <a href="http://dx.doi.org/10.1137/100783030">Bounded Independence Fools Halfspaces</a>. A halfspace is just a weighted threshold function (is the sum of w<sub>i</sub>x<sub>i</sub> at most some given θ?). Diakonikolas et al. show that one can fool halfspaces with k=O(ε<sup>-2</sup>log<sup>2</sup>(1/ε)); in particular, for constant ε, k is a constant independent of the number of variables.<br />
<i><a href="http://blog.computationalcomplexity.org/2014/08/favorite-theorems-limited-independence.html">Permalink</a> (5 comments)</i><br />
<br />
<b>Subhash Khot wins Nevanlinna</b> (Tue, 12 Aug 2014, by Lance Fortnow)<br />
At the opening ceremonies of the <a href="http://www.icm2014.org/">International Congress of Mathematicians</a> in 2014, Subhash Khot was awarded the <a href="http://www.mathunion.org/general/prizes/nevanlinna/details/">Rolf Nevanlinna Prize</a>, given every four years to an under-40 researcher for outstanding contributions in Mathematical Aspects of Information Sciences. Subhash's citation reads:<br />
<blockquote class="tr_bq">
Subhash Khot is awarded the Nevanlinna Prize for his prescient definition of the “Unique Games” problem, and leading the effort to understand its complexity and its pivotal role in the study of efficient approximation of optimization problems; his work has led to breakthroughs in algorithmic design and approximation hardness, and to new exciting interactions between computational complexity, analysis and geometry.</blockquote>
Khot's work has indeed generated a large research agenda over the last decade. I highlighted his work in March's <a href="http://blog.computationalcomplexity.org/2014/03/favorite-theorems-unique-games.html">favorite theorems post</a>.<br />
<br />
In other big news, we have our first female Fields Medalist: Maryam Mirzakhani, for contributions to the dynamics and geometry of Riemann surfaces and their moduli spaces. There are still no female winners among the nine Nevanlinna laureates. Artur Avila, Manjul Bhargava and Martin Hairer also received Fields Medals. Stanley Osher won the Gauss Prize, Phillip Griffiths the Chern Medal and Adrián Paenza the Leelavati Prize.<br />
<br />
<a href="http://www.mathunion.org/general/prizes/2014/">Pictures and press releases</a> and <a href="http://www.mathunion.org/general/prizes/2014/prize-citations/">citations</a> of all the prize winners.<br />
<i><a href="http://blog.computationalcomplexity.org/2014/08/subhash-khot-wins-nevanlinna.html">Permalink</a> (4 comments)</i><br />
<br />
<b>Questions that arose teaching High School students crypto</b> (Mon, 11 Aug 2014, by GASARCH)<br />
I taught a 3-week, 3-hours-a-day course to High School students titled<br />
<br />
<i>Computer Science: A Hands-Off Approach</i>.<br />
<br />
Given the time constraints and the fact that some students already knew (say) Java while others didn't know any language, this seemed like a good choice.<br />
<br />
I decided to teach mostly pre-RSA crypto with the following theme: Alice and Bob want to pass secret messages. How do they do it? I covered shift, affine, general substitution, Vigenère, matrix, and one-time pad ciphers, plus Diffie-Hellman (a high point of the course, since Alice and Bob don't have to meet in a dark alley). I also did secret sharing with polynomials, error-correcting codes (elementary), Huffman codes, and some applications of mod arithmetic.<br />
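Since Diffie-Hellman was the high point, here is roughly what the classroom demo boils down to (a toy sketch of my own with a deliberately tiny prime p = 8191; real parameters must be far larger, which is exactly the baby-step/giant-step point later in this post):

```python
import random

def diffie_hellman_demo(p=8191, g=17, seed=0):
    """Toy Diffie-Hellman mod a small prime (8191 = 2^13 - 1; real use
    needs a vastly larger prime). Only g^a and g^b are ever transmitted."""
    rng = random.Random(seed)
    a = rng.randrange(2, p - 1)      # Alice's secret exponent
    b = rng.randrange(2, p - 1)      # Bob's secret exponent
    A = pow(g, a, p)                 # Alice sends A to Bob
    B = pow(g, b, p)                 # Bob sends B to Alice
    return pow(B, a, p), pow(A, b, p)   # (g^b)^a and (g^a)^b

key_alice, key_bob = diffie_hellman_demo()
# key_alice == key_bob: a shared secret with no dark-alley meeting
```

An eavesdropper sees p, g, g^a and g^b, and recovering the key appears to require computing a discrete log.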
<br />
While teaching this course some points of interest came up. I suspect most are known, and I'd appreciate polite comments telling me so.<br />
<br />
<ol>
<li>A student suggested this cipher: code a,b,c,...,z into a 100-symbol alphabet and map each letter to a set of symbols whose size matches the letter's frequency. For example, if e occurs 9% of the time then map e to 9 symbols. Then use those symbols at random. This would seem to foil frequency analysis. Does it? Has it been used? What are the downsides?</li>
<li>Many students suggested using Vigenère but, instead of having every xth letter encoded by a different shift, having it encoded by a different affine or general substitution. Of course this can be cracked the same way Vigenère is cracked. But it does raise an interesting point: which ciphers were and were not used can depend on when things were discovered. Martians may very well have used some kind of Vigenère where every xth letter is encoded by a different general substitution cipher.</li>
<li>Wikipedia and other sources say that the Vigenère cipher was unbroken for 300 years. A student pointed out that it might have been broken by someone who never said so. Jon Katz (crypto prof at UMCP) can't believe it wasn't broken immediately, but of course hindsight is 20-20.</li>
<li>(I have commented on this before) A matrix cipher with a 10x10 matrix seems uncrackable using plaintext only. I have made this question rigorous <a href="http://www.cs.umd.edu/~gasarch/BLOGPAPERS/matrix.pdf">here</a>.</li>
<li>I made the comment that one-time pads are not used much (is this even true? It might depend on the definition of ``much'') because getting a perfect source of randomness is hard. During WW II they would also have been hard to use because it's hard to carry around a billion bits, but now that would not be a problem. Again: if history had been different we might use one-time pads, or quasi-random ones, today!</li>
<li>I told the students about arithmetic mod n. One of the students really, really didn't like that (say) mod 7 we have 1/3 = 5 (since 3·5 = 15 ≡ 1 mod 7). He REALLY wants 1/3 to be between 0 and 1. I suspect he didn't care much for discrete logs either. This was a GOOD student, so his objection was not that it was hard.</li>
<li>For some of the students their first exposure to matrices was matrix codes over mod 26. I hope they can recover from that.</li>
<li>Most of the students know what logs were, but weren't that comfortable with them. And here I go and teach them discrete logs! I hope they can recover from that.</li>
<li>I showed the students that there are quadratic equations mod 12 with more than 2 roots and challenged them to see how many roots they could get for other moduli. One of my students ran massive computer tests, found stuff, and in the end had a result that didn't need all of his computations: x<sup>2</sup> ≡ 0 mod n<sup>2</sup> has n roots. I later put x<sup>a</sup> ≡ 0 mod n<sup>a</sup> on a HW. I am sure none of this is new, but it was new to him when he discovered it and of course new to the class when I taught it.</li>
<li>I taught the class the Baby-Step/Giant-Step discrete log algorithm, which has sqrt(p) preprocessing and sqrt(p) running time. It's not used in practice because it also takes sqrt(p) space; however, it was good to show them that discrete log can be done in sqrt(p) time, much better than p time, hence Alice and Bob need to pick their parameters larger than they may have thought when doing Diffie-Hellman. That night I easily worked out that it can be modified to do p<sup>2/3</sup> preprocessing (and space) but just p<sup>1/3</sup> time. The HW was p<sup>a</sup> preprocessing, p<sup>1-a</sup> time. One of the students inquired if this algorithm has a name. I looked over the web but couldn't find it anywhere, so I told them to call it <i>The Gasarch Variant of Baby-Step, Giant-Step</i>. I also quickly told them NOT to be impressed, and this helped me make a point I had made often: that CS is a NEW field, so NEW that one can present new or newish results to HS students. I also made the point that I am sure this variant is KNOWN to anyone who would care, but (1) they may not care, since it takes even more space if a is larger than 1/2 and more time if a is less than 1/2, and (2) not everything is on the web. That last point freaked them out more than the quadratic equation mod 12 with more than two roots.</li>
</ol>
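The baby-step/giant-step tradeoff in the last item can be sketched as follows (my own sketch, not the class notes): a table of m baby steps costs about m preprocessing and space, and the giant-step phase then costs about p/m time, with m = sqrt(p) the classic balance.

```python
import math

def bsgs_discrete_log(g, h, p, m=None):
    """Baby-step/giant-step for g^x = h (mod p), p prime.
    m controls the tradeoff: the table costs ~m preprocessing and space,
    the giant-step loop costs ~p/m time; m = ceil(sqrt(p)) balances them."""
    if m is None:
        m = math.isqrt(p - 1) + 1
    table = {pow(g, j, p): j for j in range(m)}   # baby steps g^j
    factor = pow(g, -m, p)                        # g^(-m) mod p (Python 3.8+)
    gamma = h
    for i in range((p - 1) // m + 1):             # giant steps h * g^(-im)
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * factor % p
    return None   # no discrete log exists

# classic balanced version:
x = bsgs_discrete_log(2, pow(2, 123, 10007), 10007)
# skewed tradeoff (bigger table, fewer giant steps): pass m=1000 instead
```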
UPSHOT: teaching HS students CS can bring up issues that you had not thought of. Or at least that I had not thought of. <br />
<br />
<br />
<br /><i><a href="http://blog.computationalcomplexity.org/2014/08/quetions-that-arose-teaching-hs.html">Permalink</a> (10 comments)</i><br />
<br />
<b>The n<sup>1/3</sup> barrier for 2-server PIRs broken! A lesson for us all!</b> (Fri, 08 Aug 2014, by GASARCH)<br />
Recall: PIR stands for Private Information Retrieval. Here is the model: a database is an n-bit string (my wife tells me this is not true). The user wants to find the ith bit without the database learning which bit the user wants. The user COULD just request ALL n bits. Can the user use less communication? See <a href="http://www.cs.umd.edu/~gasarch/TOPICS/pir/pir.html">here</a> for a website of many papers on PIR.<br />
<br />
<ol>
<li>If there is just one copy of the DB and there are no constraints on the computational power of the DB, then ANY protocol requires n bits of communication.</li>
<li>If there is one copy of the DB then, under some computational assumptions, you can do better. I state one result: if quadratic residuosity is hard then there is a protocol using n<sup>ε</sup> bits of communication (Kushilevitz and Ostrovsky). The rest of this post is about the information-theoretic case, where the DB has no computational constraints.</li>
<li>If there are two copies of the DB then there is a protocol that uses n<sup>1/3</sup> bits (Chor, Goldreich, Kushilevitz, Sudan).</li>
<li>If there are three copies of the DB then there is a protocol that uses n<sup>1/32582658</sup> bits. Really! (Yekhanin)</li>
<li>If a 2-server protocol is a bilinear-group protocol (which all prior constructions were) then it must use n<sup>1/3</sup> bits (Razborov and Yekhanin).</li>
<li>Most known constructions were put into a common geometric framework (Woodruff and Yekhanin).</li>
</ol>
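To see why two non-colluding servers help at all, here is the standard warm-up scheme (a sketch of my own, NOT the n<sup>1/3</sup> protocol): its communication is O(n), but each server individually learns nothing about i.

```python
import random

def pir_two_servers(db, i, seed=0):
    """Information-theoretic 2-server PIR warm-up: the user sends a random
    subset S to server 1 and S xor {i} to server 2; each server returns
    the XOR of its requested bits. Each server alone sees a uniformly
    random subset, so it learns nothing about i."""
    n = len(db)
    rng = random.Random(seed)
    S1 = {j for j in range(n) if rng.random() < 0.5}
    S2 = S1 ^ {i}                      # differs from S1 exactly at i

    def server_answer(subset):         # what each server computes
        ans = 0
        for j in subset:
            ans ^= db[j]
        return ans

    # XOR of the two answers cancels every bit except db[i]
    return server_answer(S1) ^ server_answer(S2)

db = [1, 0, 1, 1, 0, 0, 1, 0]
recovered = pir_two_servers(db, 3)    # equals db[3]
```

The clever schemes above keep this XOR-cancellation idea but encode the database so that subsets can be described with far fewer bits.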
Recently Dvir and Gopi posted a paper <a href="http://eccc.hpi-web.de/report/2014/094/">here</a> which gives a 2-server PIR protocol that uses n<sup>P</sup> bits where P = O(sqrt((log log n)/log n)). That breaks the barrier proven by Razborov and Yekhanin! How is that possible?<br />
<br />
The barrier result was only for a certain type of PIR. It DID cover all the known PIR schemes at the time. But it does not cover this one.<br />
<br />
This is how barriers SHOULD work: they should not discourage, but should point to difficulties so that someone can overcome them.<br />
<br />
Also note, the new result used some of the framework of Woodruff and Yekhanin. So it is good to unify and obtain new proofs of old results.<br />
<i><a href="http://blog.computationalcomplexity.org/2014/08/the-n13-barrier-for-2-server-pirs.html">Permalink</a> (0 comments)</i><br />
<br />
<b>How I know Moneyball is a true story. Do we clean up theorems and life too much?</b> (Tue, 05 Aug 2014)<br />
A few years ago I saw the movie Moneyball, about how the Oakland A's used intelligent statistics to... win?<br />
No, but to do better-than-expected. Even if I didn't know it was a true story I would have assumed it was because the following are very rare or odd in a fictional story:<br />
<br />
<ol>
<li>At the end the team doesn't win- it just does better than expected. In the typical sports movie the underdog pulls it all together and wins. In some, the underdog loses but the players are now better people or something. In an episode of Cheers where they were the underdog to Gary's Tavern in a bloody-mary-making contest, the Cheers gang cheats and wins. But in Moneyball, and in NO other sports (or contest) movie that I know of, does the underdog do better-than-expected in an undramatic manner. This is NOT a complaint- just a note that it's real life.</li>
<li>In Moneyball the General Manager wants to try out mathematical methods and the Manager resists. In most movies it's the suits that are wrong and the people on the ground that are right. This is even a theme of many articles about business that I read in magazines on airplanes. So this inversion is odd - but again, you can't argue that a true story is unrealistic or undramatic. </li>
<li>Billy Beane, the one who wants to use math techniques, thinks that what baseball scouts look for is the wrong thing. In fact, they misjudged him when he was a young player. But in what direction?--- they thought he was BETTER than he was. If this were a fictional story surely the scouts would have thought he was WORSE than he was.</li>
</ol>
I am glad that (as far as I know) neither the book nor the movie tried to make the story more satisfying or dramatic.<br />
<br />
In academia we do clean things up for a better story line. If the true motivation for working on a problem doesn't really make sense once you see the final paper, we change our motivation. Our original proof is intuitive but ugly, so we change it to be polished but less clear about where it came from. Historians often simplify to make sense of things. I am NOT complaining- merely asking, do we do it too much?<br />
<br />
When I was in ninth grade and was told that you could solve a quadratic equation (I rederived the quadratic formula once a month to make sure I could), a cubic, and a quartic, but not a quintic, I immediately said "I want to go to college to learn why you can't solve a quintic." That sparked my interest in math.<br />
<br />
Is the above story true? I am sure that in ninth grade I did learn that the quintic was unsolvable, that this was of great interest to me, and that I really did rederive the quadratic formula once a month. But I really doubt the story is as clean as presented above. Even so, the story is true in spirit, though I would not want to push the point.<br />
<br />
How about you? Do you tell stories about yourself or about others that are just a little too polished? Not so much false, and not even to put yourself in a better light, but just a little too clean to have really happened.http://blog.computationalcomplexity.org/2014/08/how-i-know-moneyball-is-true-story-do.htmlnoreply@blogger.com (GASARCH)8tag:blogger.com,1999:blog-3722233.post-2872554879184431476Thu, 31 Jul 2014 21:46:00 +00002014-07-31T16:46:12.379-05:00How NOT to bet(True Story, but names are changed for no good reason.)<br />
<br />
Alice, Bob, and Carol are betting on who will be the Republican Nominee in 2016.<br />
<br />
<ol>
<li>Alice bets on Jeb Bush. Alice's reasoning is that in the past the Republicans always end up with a moderate who they think can win. NOTE: From Alice's prediction you can tell NOTHING about her politics.</li>
<li>Bob bets on Paul Ryan. Bob reasons that in the past the Republicans have always ended up with someone they are familiar with, quite often someone who has run before. Normally this might be someone who ran in the primaries four years earlier; however, none of Herman Cain, Michele Bachmann, Rick Santorum, or Newt Gingrich seem likely. Rick Perry is possible but seems unlikely. That leaves Paul Ryan. (Curiosity- people who lose, even if it's close, do not run again. Not sure why. So it won't be Romney.) NOTE: From Bob's prediction you can tell NOTHING about his politics.</li>
<li>Carol bets on Rand Paul. Carol's reasoning is that the American Public is tired of Big Government and is ready for the Rand Paul Revolution! NOTE: From Carol's prediction you can tell EVERYTHING about her politics.</li>
</ol>
Carol is letting her opinion drive her betting. Can someone take advantage of this? YES- if someone is betting from sentiment or emotion or conviction rather than from facts, that gives you an advantage in a bet. This does not always work, but it's a good bet. Now watch, I'll lose my bet to Carol (NOTE- I am Bob).<br />
One full dollar!<br />
<br />
I think I read that you are better off betting AGAINST a Triple Crown if a horse has won the first two legs, for the same reason- there will be emotion driving the bets FOR, and that increases the payoff for those betting AGAINST. But of course nothing is guaranteed. And there are some cases where it's just ridiculous to bet against (Secretariat in 1973); however, to use my system you can't just skip one because you ``just know'' that there will be a Triple Crown. It's a long-term strategy. And one could be wrong. That's why it's called gambling!<br />
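A toy parimutuel calculation (all numbers here are assumed, purely for illustration) shows the mechanism: when sentiment overloads the FOR side of the pool, the AGAINST side pays more than fair odds, so a bet against can have positive expected value even though it sometimes loses.

```python
# Hypothetical numbers: a horse has won the first two legs.
p_win = 0.35                         # assumed true chance it wins the third leg
pool_for, pool_against = 80.0, 20.0  # sentiment loads the FOR side of the pool

# Parimutuel payout per $1 bet AGAINST (ignoring the track's cut):
payout = (pool_for + pool_against) / pool_against   # 5.0
ev_against = (1 - p_win) * payout - 1               # stake is lost if the horse wins
print(ev_against)  # 2.25: positive expected value per dollar bet against
```

If bettors priced the race at its true odds (65% of the pool on AGAINST), the expected value would be zero; the edge exists only because of the sentiment-driven imbalance.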
<br />
Are there other cases where long term betting with facts and against sentiment can give you an edge? http://blog.computationalcomplexity.org/2014/07/how-not-to-bet.htmlnoreply@blogger.com (GASARCH)5tag:blogger.com,1999:blog-3722233.post-8432646827751980224Mon, 28 Jul 2014 01:50:00 +00002014-07-29T18:31:51.939-05:00The Change Problem and the Gap between Recreational and Serious MathematicsIn a prior post (<a href="http://blog.computationalcomplexity.org/2014/06/how-many-ways-can-you-make-change-of-n.html">here</a>) I discussed the change problem: given a dollar, how many ways can you make change using pennies, nickels, dimes, and quarters (and generalize to n cents). Scouring the web I found programs for it, the answer (242), and answers like `it is the coefficient of x^100 in ...'. What I didn't find is a formula for n cents. So I derived one using recurrences and wrote this up with some other things about the change problem, and I posted it on arXiv. (I have updated the paper many times after comments I got. It is still where it was, but updated, <a href="http://arxiv.org/abs/1406.5213">here</a>.)<br />
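The count itself is easy to reproduce with the standard dynamic-programming recurrence (a sketch of mine, not the closed formula from the paper): process the denominations one at a time, letting ways[j] be the number of ways to make j cents with the coins considered so far.

```python
def ways_to_change(n, coins=(1, 5, 10, 25)):
    # ways[j] = number of ways to make j cents using the coins processed so far
    ways = [1] + [0] * n
    for c in coins:
        for j in range(c, n + 1):
            ways[j] += ways[j - c]   # use at least one coin of denomination c
    return ways[n]

print(ways_to_change(100))  # 242 ways to change a dollar
```

Processing one denomination at a time (rather than looping over coins inside the inner loop) is what counts each multiset of coins exactly once instead of counting ordered sequences.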
<br />
I then got some very helpful emails pointing me to the vast math literature on this problem. I was not surprised there was such a literature, though I was surprised I hadn't found any of it searching the web.<br />
The literature didn't have my formula, though it had several ways to derive it (some faster than mine).<br />
<br />
Why isn't the formula out there? Why couldn't I find the literature?<br />
<br />
<ol>
<li>The formula falls into a space right between recreational and serious math. I use a recurrence but not a generating function. Recreational math just wants to know the answer for 100, so a program suffices (or some clever by-hand recurrences, which are also in my paper). Serious math people are happy to show that a formula CAN be derived and to refine that in terms of how quickly the coefficients can be found.</li>
<li>There is a gap between recreational and serious math. In particular- the serious people aren't talking to (or posting on websites to) the rec people.</li>
<li>Different terminologies. I was looking for ``change problems'' and ``coin problems'' and things like that when I should have been looking for ``Sylvester denumerant'' or ``Frobenius problem''.</li>
</ol>
For some areas Google (and other tools) is still not as good as finding someone who knows about your area. My posting on the blog got the attention of some people who know stuff, and my posting on arXiv got one more person (who knew A LOT!). I'm glad they emailed me comments and references so I improved the paper and cited the literature properly. But this seems so haphazard! Google didn't work, Math StackExchange and other similar sites didn't work (that is where I saw the Rec people post but not get a formula). Is it the case that no matter how good Google gets we'll still need to find people who know stuff? That's fine if you can, but sometimes you can't.http://blog.computationalcomplexity.org/2014/07/the-change-problem-and-gap-between.htmlnoreply@blogger.com (GASARCH)11tag:blogger.com,1999:blog-3722233.post-3995406809693423827Thu, 24 Jul 2014 23:54:00 +00002014-07-25T14:57:51.881-05:00Need some book reviewed- faster than usualI have been the SIGACT NEWS book review editor since 1997. I have tried to get about 10 books reviewed per column. I have succeeded- perhaps too well! I have gotten so many reviews out that I only have six reviews left.<br />
<br />
I DO have many books that could be reviewed, and that is where YOU come in!<br />
<br />
List of books available for review: <a href="http://www.cs.umd.edu/~gasarch/BLOGPAPERS/booklist.pdf">Here</a><br />
<br />
Advice for reviewers: <a href="http://www.cs.umd.edu/~gasarch/bookrev/advice.txt">Here</a><br />
<br />
LaTeX Template for reviews: <a href="http://www.cs.umd.edu/~gasarch/bookrev/revtemplate.txt">Here</a><br />
<br />
ALL of my prior columns: <a href="http://www.cs.umd.edu/~gasarch/bookrev/bookrev.html">Here</a><br />
<br />
IF you want to review a book DO NOT leave a comment- just email me (ADDED LATER- EMAIL ME at gasarch@cs.umd.edu. Someone emailed Lance instead, which you shouldn't do.) a set of books you want to review. I will pick out one for you--- it may be based on what I've already got requests for.<br />
<br />
Since it is summer you are not as busy as during the regular year (hmmm- I am running an REU program AND teaching an intense high school class, AND going to a security conference, AND mentoring three high school students hoping for some more free lunches, so I am actually MORE busy), so summer is a GOOD time to review a book.<br />
<br />
Deadline to ASK for a book: Please make your request BEFORE July 28 so that when I email in my column with the list of books I need reviewed, it is accurate.<br />
<br />
Deadline for Review: About two months after you get it, though this can be flexible. http://blog.computationalcomplexity.org/2014/07/need-some-book-reviewed-faster-than.htmlnoreply@blogger.com (GASARCH)0tag:blogger.com,1999:blog-3722233.post-3526021470172290747Tue, 22 Jul 2014 15:42:00 +00002014-07-22T10:42:43.879-05:00The Burden of Large EnrollmentsThis week I'm at the <a href="http://cra.org/events/snowbird-2014/">CRA Snowbird conference</a>, the biennial meeting of CS chairs and other leaders in the field. In <a href="http://blog.computationalcomplexity.org/2012/08/moocs.html">2012</a> much of the discussion focused on MOOCs. This year the challenge facing most CS chairs is booming enrollments in CS courses. A nice problem to have, but a problem nevertheless.<br />
<br />
Last night we had a broad discussion about the burgeoning number of students. Ed Lazowska showed his <a href="http://lazowska.cs.washington.edu/NCWIT.pdf">NCWIT slides</a> giving anecdotal evidence. It's too early to get a complete survey of CS departments but hardly anyone in the audience felt that enrollments were not going up by double (or triple) digit percentages. There is not only an increase in majors, but also a number of non-majors, minors and others who take upper-level courses.<br />
At Georgia Tech it seems every engineering student wants to minor in CS.<br />
<br />
We all have to deal with the increase in the short term. Depending on the institution, we have more classes, larger classes, enrollment caps, increases in instructors, PhD lecturers, teaching postdocs, automated grading, undergrad and MS TAs and having other departments take up some of the load. All of this could hurt the undergrad experience but with careful planning we can mitigate those effects.<br />
<br />
What's driving the increase? Some suggested a change in the ubiquitous view of computing and the more positive opinion of geeks in society. More likely though, the driver is jobs, the great need for computer scientists and not just from computing companies, and the limited jobs for those graduating with non-technical majors.<br />
<br />
Is the big shift in enrollment a permanent change or just another part of the boom-and-bust cycle in CS? More than a theoretical question, a permanent change makes for a better argument with deans and provosts to increase faculty sizes in computer science. There are some signs of "this time is different," such as the increase in non-majors in upper-level courses, but it's an argument that will be a hard sell unless the increase is sustained for several years. One good argument: if you draw a trend line through enrollments since the 70's you get a consistent 10% yearly increase averaged over booms and busts.http://blog.computationalcomplexity.org/2014/07/the-burden-of-large-enrollments.htmlnoreply@blogger.com (Lance Fortnow)0tag:blogger.com,1999:blog-3722233.post-2836217438939792671Thu, 17 Jul 2014 13:24:00 +00002014-07-17T08:24:42.504-05:00Elfdrive<div>
New York Times, dateline June 11, 2019</div>
<div>
<blockquote>
With a near record-setting investment announced last week, the self-driving car service Elfdrive is the hottest, most valuable technology start-up on the planet. It is also one of the most controversial.<br />
The company, which has been the target of protests across Europe this week, has been accused of a reckless attitude toward safety, of price-gouging its customers, of putting existing cabbies out of work and of evading regulation. And it has been called trivial. In The New Yorker last year, George Packer huffed that Elfdrive typified Silicon Valley’s newfound focus on “solving all the problems of being 20 years old, with cash on hand.”<br />
It is impossible to say whether Elfdrive is worth the $117 billion its investors believe it to be; like any start-up, it could fail. But for all its flaws, Elfdrive is anything but trivial. It could well transform transportation the way Amazon has altered shopping — by using slick, user-friendly software and mountains of data to completely reshape an existing market, ultimately making many modes of urban transportation cheaper, more flexible and more widely accessible to people across the income spectrum.<br />
Elfdrive could pull this off by accomplishing something that has long been seen as a pipe dream among transportation scholars: It has the potential to decrease private car ownership.<br />
In its long-established markets, like San Francisco, using Elfdrive every day is already arguably cheaper than owning a private car. Elfdrive says that despite dust-ups about “elfpricing” at busy times, its cheapest service is usually 30 percent less expensive than taxis.<br />
The competition between Elfdrive and its rivals is likely to result in more areas of the country in which self-driving becomes both cheaper and more convenient than owning a car, a shift that could profoundly alter how people navigate American cities.<br />
Over the next few years, if Elfdrive does reduce the need for private vehicle ownership, it could help lower the cost of living in urban areas, reduce the environmental toll exacted by privately owned automobiles (like the emissions we spew while cruising for parking), and reallocate space now being wasted on parking lots to more valuable uses, like housing.<br />
Paradoxically, some experts say, the increased use of self-driving cars could also spawn renewed interest in and funding for public transportation, because people generally use taxis in conjunction with many other forms of transportation.<br />
In other words, if Elfdrive and its competitors succeed, it wouldn’t be a stretch to see many small and midsize cities become transportation nirvanas on the order of Manhattan — places where forgoing car ownership isn’t just an outré lifestyle choice, but the preferred way to live.</blockquote>
If you haven't guessed, all I did was make minor edits to a Farhad Manjoo <a href="http://www.nytimes.com/2014/06/12/technology/personaltech/with-ubers-cars-maybe-we-dont-need-our-own.html">piece</a> on Uber. My point is that it all fits: there really isn't that much difference between a car driven by a magical AI elf and one driven by a real person, if it's all controlled by an app. You can start to see the effects of self-driving cars, the good and the bad, from what's going on now with ride-sharing services. The big difference with self-driving cars will be that the complaints won't come just from the taxi drivers, but from the truckers and delivery people as well.</div>
http://blog.computationalcomplexity.org/2014/07/elfdrive.htmlnoreply@blogger.com (Lance Fortnow)5tag:blogger.com,1999:blog-3722233.post-8053688063665145317Mon, 14 Jul 2014 03:34:00 +00002014-07-14T12:38:08.663-05:00What to call the top and bottom part of (n choose k)In my last post I asked for candidates for names for the top and bottom part of (n choose k) . Here are the candidates and my comments on them and ... the winner!<br />
<br />
<ol>
<li>Top part: Degree, Bottom part: Index. </li>
<li>Top part: Bino, Bottom part: Mial</li>
<li>Top part: Numerator, Bottom part: Denominator</li>
<li>Top part: Outcomes, Bottom part: Possibilities</li>
<li>Top part: Binomerator, Bottom part: I've got nothing</li>
<li>Top part: *, Bottom part: *</li>
<li>Top part: Biponendo, Bottom part: Bividendo</li>
<li>Top part: Choosand, Bottom part: choosee</li>
<li>Top part: Set size, Bottom part: Subset size.</li>
</ol>
I leave out the explanations for these since one criterion is that they be self-explanatory.<br />
While choices 4,8,9 are tempting along those lines, the winner is<br />
<br />
Numerator/Denominator<br />
<br />
Why? One of the people who suggested it gave a pointer. The pointer actually went to calling the top and bottom parts of the Legendre symbol Numerator and Denominator. And that's just it- there are several other math things that have a top and bottom part. We could try to find a name for each one, OR just use Numerator and Denominator for all of them. That seems to work. SO- next time you write a paper and need to refer to the top part of a binomial coefficient or a Legendre symbol or something else, use the terms Numerator and Denominator.<br />
<br />
CODA: `The Denominator' could be an Arnold Schwarzenegger character. A Math teacher by day, a crimefighter by night. Catchphrase: I'm going to put you under!http://blog.computationalcomplexity.org/2014/07/what-to-call-top-and-bottom-part-of-n.htmlnoreply@blogger.com (GASARCH)6tag:blogger.com,1999:blog-3722233.post-4776568932442331868Thu, 10 Jul 2014 02:22:00 +00002014-07-09T21:22:03.646-05:00Is there a word for the top part of a binomial coefficient?Consider the sequence:<br />
<br />
x/y, (x+1)/y, (x+2)/y, ..., (x+z)/y<br />
<br />
one could say <i>in this sequence of fractions the numerators go through z+1 consecutive numbers.</i><br />
<br />
Consider the sequence<br />
<br />
(x choose y), (x+1 choose y), (x+2 choose y),...,(x+z choose y)<br />
<br />
one could say <i>in this sequence of binomial coefficients the top-part goes through z+1 consecutive numbers.</i><br />
<br />
Is there a better way to say this? That is, is there a term for the top-part of a binomial coefficient? Or for that matter the bottom part? I have not been able to find one on the web. Hence I propose a contest:<br />
<i> </i><br />
<ol>
<li>Leave as a comment a proposed name for the top-part and for the bottom-part of a binomial coefficient.</li>
<li>If you find a website that has some official name, leave that as a comment.</li>
</ol>
I am not sure if there will be a winner or what he or she will get. But if we can agree upon a term then we are all winners!<br />
<br />http://blog.computationalcomplexity.org/2014/07/is-there-word-for-top-part-of-binomial.htmlnoreply@blogger.com (GASARCH)11tag:blogger.com,1999:blog-3722233.post-7045592992279454470Mon, 07 Jul 2014 12:48:00 +00002014-07-07T07:48:38.456-05:00Favorite Theorems: Compressing the Local LemmaNot only did Moser give a near optimal construction, but he did so in my favorite STOC talk of the decade.<br />
<br />
<div style="text-align: center;">
<a href="http://dx.doi.org/10.1145/1536414.1536462">A Constructive Proof of the Lovász Local Lemma</a> by Robin Moser</div>
<div style="text-align: center;">
<br /></div>
<div style="text-align: left;">
The <a href="http://en.wikipedia.org/wiki/Lov%C3%A1sz_local_lemma">Lovász local lemma</a> informally states that if you have a large set of events with limited dependence and they individually have a reasonable chance of happening, then there is a positive probability that they all happen. Moser focused on the special case of Boolean formula satisfiability: If we have a k-CNF formula φ where each clause shares a variable with at most r other clauses with r<2<sup>k</sup>/8 then φ is satisfiable. Note the result does not depend on the number of variables or clauses.<br />
<br />
Moser gave this simple algorithm and a proof using entropy that I recognized as a Kolmogorov complexity argument; it so excited me that I immediately wrote up the <a href="http://blog.computationalcomplexity.org/2009/06/kolmogorov-complexity-proof-of-lov.html">Kolmogorov proof</a> as a blog post.<br />
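The algorithm itself fits in a few lines; here is a minimal sketch in my own paraphrase (repeatedly pick a violated clause and resample all its variables), not Moser's exact presentation. Clauses are tuples of signed integers, with literal v meaning x_v is true and -v meaning x_v is false; the seed is there only to make the run reproducible.

```python
import random

def resample_solve(num_vars, clauses, seed=2009):
    rng = random.Random(seed)
    # variables are 1-indexed; assign[v] is the truth value of x_v
    assign = [None] + [rng.random() < 0.5 for _ in range(num_vars)]

    def satisfied(clause):
        return any(assign[abs(lit)] == (lit > 0) for lit in clause)

    while True:
        violated = [c for c in clauses if not satisfied(c)]
        if not violated:
            return assign[1:]
        # resample every variable of some violated clause
        for lit in violated[0]:
            assign[abs(lit)] = rng.random() < 0.5

clauses = [(1, 2, 3), (-1, -2, 4), (2, -3, -4)]
solution = resample_solve(4, clauses)
```

The remarkable part is not the loop but the analysis: when each clause shares variables with few others, the expected number of resamplings is small, which is exactly what the entropy/Kolmogorov argument shows.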
<br />
Moser's paper kickstarted a whole research agenda, with tight bounds using even simpler algorithms (though more complex proofs) and various generalizations. A whole <a href="http://www.columbia.edu/~cs2035/stoc/stoc2014/tutorials.shtml#lll">tutorial day</a> of STOC 2014 focused on the local lemma describing research that directly or indirectly came from Moser's most beautiful construction.</div>
http://blog.computationalcomplexity.org/2014/07/favorite-theorems-compressing-local.htmlnoreply@blogger.com (Lance Fortnow)3tag:blogger.com,1999:blog-3722233.post-1997870703861596871Wed, 02 Jul 2014 23:24:00 +00002014-07-02T18:25:05.316-05:00Four Centuries of Logarithms<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.nms.ac.uk/images/powerof10-book-348px.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://www.nms.ac.uk/images/powerof10-book-348px.jpg" height="320" width="236" /></a></div>
I just returned from visiting my former student Rahul Santhanam in Edinburgh. The National Museum of Scotland has an <a href="http://www.nms.ac.uk/our_museums/national_museum/exhibitions/power_of_ten.aspx">exhibit</a> on logarithms, first published in a book by John Napier in 1614.<br />
<br />
Napier invented logarithms to make calculations like multiplication, division and exponentiation easier, using identities like log(ab)=log(a)+log(b). Logarithmic tables and slide rules came soon thereafter. Slide rules became the standard way to do mathematical computations for your typical scientist/engineer until reasonably priced calculators appeared in the late 70's. My chemistry teacher in 1980 taught us how to use a slide rule, even though we all had calculators, and I even (foolishly) tried using my father's slide rule on a chemistry test.<br />
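Napier's identity is easy to check directly; this is the whole trick behind logarithm tables and slide rules, reducing a multiplication to an addition:

```python
import math

a, b = 37.0, 56.0
# log(ab) = log(a) + log(b), so exponentiating the sum recovers the product
product_via_logs = math.exp(math.log(a) + math.log(b))
assert abs(product_via_logs - a * b) < 1e-9
```

A slide rule does the same thing mechanically: its scales are marked logarithmically, so sliding one against the other adds the logarithms.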
<br />
The exhibit struggled to find current uses for logarithms, mentioning only the Richter scale. In theoretical computer science we use logarithms all the time. Here's an incomplete list off the top of my head.<br />
<ul>
<li>As part of a running time usually as a result of divide and conquer. Sorting, for example, takes Θ(n log n) comparisons.</li>
<li>The size of a number, pointer or counter. To count up to n requires log n bits of storage.</li>
<li>The representation of a small amount of space as in the complexity classes L and NL.</li>
<li>To capture entropy, coupon collector and other topics in probability and information.</li>
<li>Roughly captures the sum of 1/i and exactly captures the integral of 1/x.</li>
<li>The inverse of exponential growth.</li>
</ul>
<div>
Thanks John Napier for the logarithm and making our lives just a little less linear.</div>
http://blog.computationalcomplexity.org/2014/07/four-centuries-of-logarithms.htmlnoreply@blogger.com (Lance Fortnow)4tag:blogger.com,1999:blog-3722233.post-2458132183639587258Sun, 29 Jun 2014 20:05:00 +00002014-06-29T15:05:03.583-05:003000 lunchesLast week fortune smiled on two of my friends:<br />
<br />
<ol>
<li>Ken Regan made the cover of Chess Life for his work on catching chess cheaters. <a href="http://rjlipton.wordpress.com/2014/06/18/the-problem-of-catching-chess-cheaters/">(See here.)</a></li>
<li>Jacob Lurie, whom I mentored when he was a high school student 1993-1995, won $3,000,000 for doing pure math. I am not kidding. See <a href="https://breakthroughprize.org/?controller=Page&action=news&news_id=18">here</a>, here, and <a href="http://www.scientificamerican.com/article/with-awards-like-the-breakthrough-prize-who-needs-a-nobel/">here</a>. This is a relatively new award called the <a href="http://en.wikipedia.org/wiki/Breakthrough_Prize">Breakthrough Prize.</a></li>
</ol>
The Breakthrough Prize started a few years ago in Life Sciences and Fundamental Physics (not sure what they mean-possibly Mathematical Physics or Theoretical Physics). This is the first year it is given for Mathematics. The other winners of the Breakthrough Prize in math are Simon Donaldson, Maxim Kontsevich, Terence Tao, and Richard Taylor.<br />
<br />
$3,000,000 is the largest amount of money for prizes of this sort (the Nobel is a paltry $1,200,000). Even so, I had not heard of the award until there was a math one. I wonder if it will become a household name like the Nobel has.<br />
<br />
A few weeks ago I told my student Tucker, who is going to DigiPen- a 4-year college with a BIG emphasis on programming computer games- that he is more likely to become a millionaire than my student Sam, who is going to CMU to study pure math. I could be wrong.<br />
<br />
I taught Jacob lots of recursion theory and he won the Westinghouse Award with a paper on Recursive Surreal Numbers; however, the work was all his own and didn't use much of what I taught him. After that I made a policy that for every $1000 a high school student of mine wins I get a free lunch. I emailed Jacob- and while he's not sure he'll give me 3000 lunches, he will treat me to lunch next time I am in Boston.<br />
<br />
To quote the article, he won it for <br />
<br />
Foundations of higher category theory and derived algebraic geometry for the classification of fully extended topological quantum field theories; and for providing a moduli-theoretic interpretation of elliptic co-homology.<br />
<br />
I know what some of those words mean. <br />
<br />
<br />
<br />http://blog.computationalcomplexity.org/2014/06/3000-lunches.htmlnoreply@blogger.com (GASARCH)1tag:blogger.com,1999:blog-3722233.post-1202423851520226460Thu, 26 Jun 2014 08:36:00 +00002014-06-26T03:43:00.370-05:00Computer DatingMy brother went through his old stuff cleaning out his apartment in New York and stumbled upon the 1981 computer dating questionnaire I wrote in high school. Forgive the dated and adolescent nature of the questions.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-TgmZitxOQ4I/U6vUfusmCQI/AAAAAAAAvjw/JziTwdznkH4/s1600/IMG_1134.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-TgmZitxOQ4I/U6vUfusmCQI/AAAAAAAAvjw/JziTwdznkH4/s1600/IMG_1134.JPG" height="360" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
The computer science teacher showed us an example of a computer dating form created by some company, a new idea at the time. I decided to run computer dating at the high school, created a questionnaire and passed it around. I did this for a few years; the 1981 form must have been the last, as that's the year I graduated. In those ancient history days, my fellow students filled out these forms on paper and then I would type the answers into the big teletype machines in the computer lab. I wrote a program to create a matching; I have no idea what algorithm I used, but I'm sure it was quite simplistic. I always got matched to the same person, but the pickup line "my computer dating program matched us up" didn't go very far. I did hear of one couple having one (and only one) date because of the program. There's a reason I became a theorist.</div>
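I can only guess at what such a simplistic program looked like; a plausible reconstruction is a greedy pairing by raw questionnaire agreement (everything here, names included, is made up for illustration):

```python
from itertools import combinations

def greedy_match(answers):
    # answers: name -> list of answer codes, all lists the same length
    def agreement(a, b):
        return sum(x == y for x, y in zip(answers[a], answers[b]))

    # consider the most-agreeing pairs first, pairing each person at most once
    pairs = sorted(combinations(answers, 2),
                   key=lambda p: agreement(*p), reverse=True)
    matched, result = set(), []
    for a, b in pairs:
        if a not in matched and b not in matched:
            matched |= {a, b}
            result.append((a, b))
    return result

forms = {"al": [1, 1, 1], "bo": [1, 1, 1], "cy": [2, 2, 2], "di": [2, 2, 1]}
print(greedy_match(forms))  # [('al', 'bo'), ('cy', 'di')]
```

Greedy matching like this is far from optimal (a stable-matching or weighted-matching algorithm would do better), which fits the "quite simplistic" description.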
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
My daughters tell me the whole scene has changed with traditional dating quite rare these days. "First you become boyfriend/girlfriend and then you date" which seems so backwards to me.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
Now we have websites that use sophisticated machine learning algorithms to connect people. I met my wife the old fashioned way--at a wine and cheese party sponsored by a Boston-area Jewish singles group.</div>
http://blog.computationalcomplexity.org/2014/06/computer-dating.htmlnoreply@blogger.com (Lance Fortnow)3