<b>Computational Complexity</b><br />
Computational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch<br />
<br />
<b>And the winner is again, Harambe: A pre-election poll of my class that was truly a referendum on the Prez</b> (2018-11-12)<br />
<br />
I had meant to post this before the election but I didn't quite time it right. Oh well.<br />
<br />
It has been said that this midterm election (more than others) was a referendum on the prez. Every prez election year I have my students (or someone's students) vote (Secret Ballot of course) for president (see <a href="https://blog.computationalcomplexity.org/2016/11/and-winner-is-harambe.html">2016</a>, <a href="https://blog.computationalcomplexity.org/2012/11/random-thoughts-on-election.html">2012</a>, <a href="https://blog.computationalcomplexity.org/2012/11/random-thoughts-on-election.html">2008</a>). This year, for the first time, I had a midterm election, though rather than ask them which Maryland people they would vote for, I made it truly a referendum on Trump by asking two questions:<br />
<br />
1) Who did you vote for in 2016?<br />
<br />
2) Knowing what you know now, who would you have voted for in 2016?<br />
<br />
Full Disclosure: I am in the Clinton/Clinton Camp. However, this is an information-post not an opinion-post, so my vote is not relevant nor was it counted.<br />
<br />
I give the full results at the end of the post; however, I will summarize the most interesting data: the change-mind people and my thoughts.<br />
<br />
4 voted Trump and now wish they voted differently: Harambe, Clinton, Nobody, Gasarch<br />
<br />
Only 12 people voted for Trump in 2016, and of those, 4
regret it. While I can see wanting Clinton, Nobody, or Gasarch,
I'm surprised someone wanted Harambe. Is he even a citizen?<br />
<br />
5 voted Clinton and now wish they voted differently: Johnson (2), Trump, Kanye, Gasarch.<br />
<div>
<br /></div>
<div>
<div>
<br /></div>
<div>
Since Clinton hasn't done anything to merit rejection since the election,</div>
<div>
I deduce these people are more going TOWARDS the one they picked rather</div>
<div>
than AWAY from her. Trump has been Prez and has done stuff, so Clinton/Trump</div>
<div>
makes sense -- the student's opinion is that Trump is doing better</div>
<div>
than expected. Gary Johnson hasn't done anything to merit a change towards</div>
<div>
him, so that's a puzzler. Kanye, similar. As for Gasarch, the reasoning</div>
<div>
`he's doing a good job teaching crypto, let's make him president' doesn't</div>
<div>
really work, since he's not doing THAT good a job. If it were Ramsey theory</div>
<div>
then I could see it.</div>
<div>
<br /></div>
<div>
Misc:</div>
<div>
1 Underwood(House of Cards)/Satan (For those tired of picking the LESSER of two evils)</div>
<div>
1 Stein/Clinton</div>
<div>
1 Johnson/Clinton</div>
<div>
1 Kruskal/Gasarch (some people can't tell them apart)</div>
<div>
<br /></div>
<div>
The two who went to Clinton I interpret as thinking Trump is worse</div>
<div>
than expected.<br />
<br />
ALSO: Clinton had 28 Clinton/Clinton. Hence Trump had the largest percentage of voters who regret voting for him, but still only 1/3. And the numbers are too small to make much of them.</div>
<div>
<br /></div>
<div>
MY OPINION: FORTNOW in 2020!</div>
<div>
<br /></div>
<div>
------------------------------------------------------</div>
<div>
<br /></div>
<div>
61 students did the poll.</div>
<div>
<br /></div>
</div>
<div>
<br /></div>
<div>
<div>
-----------------------------</div>
<div>
Stayed the same:</div>
<div>
<br /></div>
<div>
28 Clinton/Clinton</div>
<div>
8 Trump/Trump</div>
<div>
4 Stein/Stein</div>
<div>
3 Johnson/Johnson</div>
<div>
1 Kasich/Kasich</div>
<div>
1 Sanders/Sanders</div>
<div>
1 Jerry White/Jerry White (Socialist Party)</div>
<div>
1 Bofa/Bofa (this is a joke)</div>
<div>
1 Thomas Adam Kirkman/Thomas Adam Kirkman (prez on TV show Designated Survivor)</div>
<div>
<br /></div>
<div>
48 do not regret their vote.</div>
<div>
<br /></div>
<div>
-------------------------</div>
<div>
Trump/XXX</div>
<div>
<br /></div>
<div>
1 Trump/Harambe (Harambe is the gorilla who was shot at a zoo.)</div>
<div>
1 Trump/Clinton</div>
<div>
1 Trump/Nobody</div>
<div>
1 Trump/Gasarch (Gasarch would prove the problems of government are unsolvable rather than solve them)</div>
<div>
<br /></div>
<div>
FOUR people voted Trump and now regret it.</div>
<div>
<br /></div>
<div>
------------</div>
<div>
Clinton/XXX</div>
<div>
<br /></div>
<div>
2 Clinton/Johnson</div>
<div>
1 Clinton/Trump</div>
<div>
1 Clinton/Kanye</div>
<div>
1 Clinton/Gasarch</div>
</div>
<div>
<br /></div>
<div>
<div>
<br /></div>
<div>
FIVE people voted Clinton and now regret it.</div>
<div>
<br /></div>
<div>
-------------</div>
<div>
MISC changed</div>
<div>
<br /></div>
<div>
1 Underwood(House of Cards)/Satan (For those tired of picking the LESSER of two evils)</div>
<div>
1 Stein/Clinton</div>
<div>
1 Johnson/Clinton</div>
<div>
1 Kruskal/Gasarch (some people can't tell them apart)</div>
<div>
<br /></div>
<div>
TWO people who voted third party now would have voted Clinton.</div>
<div>
I interpret this as not-liking-Trump since I don't think Clinton</div>
<div>
has done anything since the election to make anyone think better</div>
<div>
or worse of her, whereas Trump as president has done enough</div>
<div>
to change people's opinions of him.</div>
</div>
<div>
<br /></div>
Posted by GASARCH<br />
<br />
<b>The Data Transformation of Universities</b> (2018-11-08)<br />
<br />
With all the news about elections, caravans, shootings and attorneys general, maybe you missed the various stories about real transformations at our top universities.<br />
<br />
On October 15 MIT announced the <a href="http://news.mit.edu/2018/mit-reshapes-itself-stephen-schwarzman-college-of-computing-1015">Stephen A. Schwarzman College of Computing</a>. Schwarzman donated $350 million to the effort as part of an expected billion-dollar commitment that will pay for a new building and fifty new faculty.<br />
<blockquote class="tr_bq">
“As computing reshapes our world, MIT intends to help make sure it does so for the good of all,” says MIT President L. Rafael Reif. “In keeping with the scope of this challenge, we are reshaping MIT. The MIT Schwarzman College of Computing will constitute both a global center for computing research and education, and an intellectual foundry for powerful new AI tools. Just as important, the College will equip students and researchers in any discipline to use computing and AI to advance their disciplines and vice-versa, as well as to think critically about the human impact of their work.”</blockquote>
Two weeks later the University of California at Berkeley <a href="https://news.berkeley.edu/2018/11/01/berkeley-inaugurates-division-of-data-science-and-information-connecting-teaching-and-research-from-all-corners-of-campus/">announced a Division of Data Science</a> to be led by an associate provost reporting directly to the provost (like a dean).<br />
<blockquote class="tr_bq">
“Berkeley’s powerful research engine, coupled with its deep commitment to equity and diversity, creates a strong bedrock from which to build the future foundations of this fast-changing field while ensuring that its applications and impacts serve to benefit society as a whole,” said Paul Alivisatos, executive vice chancellor and provost. “The division’s broad scope and its facilitation of new cross-disciplinary work will uniquely position Berkeley to lead in our data-rich age. It will simultaneously allow a new discipline to emerge and strengthen existing ones.”</blockquote>
Sorry Berkeley, you are anything but unique. Every major research university is trying to build up expertise in computing and data science given the demands of students, industry and researchers across nearly all academic disciplines who need help and guidance in collecting, managing, analyzing and interpreting data.<br />
<br />
Here at Georgia Tech, where we've had a College of Computing since 1990, we recently started an Interdisciplinary Research Institute in Data Science and Engineering and an Interdisciplinary Research Center in Machine Learning, both to be housed in a twenty-one story <a href="https://codatechsquare.com/">CODA building</a> that will open next year in Midtown Atlanta (sounds impressive when I write it down).<br />
<br />
I could go on for pages on how other universities are rethinking and transforming themselves. Earlier this year Columbia (who hired Jeannette Wing to run their data science institute) <a href="https://datascience.columbia.edu/data-science-week-brought-together-nation%E2%80%99s-leading-data-scientists">held a summit</a> of academic data science leadership. The <a href="https://datascience.columbia.edu/files/seasdepts/idse/Data_Science_Leadership_Summit_Summary_Report.pdf">report</a> shows we have much to do.<br />
<br />
The real secret is that none of us has really figured out how to meet the escalating needs for computing and data, and how to integrate them across campus. We are aiming at a moving target, sitting as we do at the beginning of the data revolution that will reshape society. The future looks rocky, scary and fun.<br />
<br />
Posted by Lance Fortnow<br />
<br />
<b>Is Fuzzy Sets Common Knowledge? How about Napier as inventing (or something) logs?</b> (2018-11-05)<br />
<br />
<b>Darling:</b> Bill, help me with this crossword puzzle. Six-letter word that begins with N; the clue is "log man."<br />
<br />
<b>Bill:</b> Napier<br />
<br />
<b>Darling</b>: Who was that?<br />
<br />
<b>Bill: </b>A famous lumberjack<br />
<br />
<b>Darling:</b> Give me a serious answer<br />
<br />
<b>Bill:</b> Okay. He is said to have invented logarithms. Like most things in math it's not quite clear who gets the credit, but he was involved with logarithms early on.<br />
<br />
<b>Darling</b>: I said give me a serious answer. That is rather obscure and would not be in a crossword puzzle.<br />
<br />
While darling is wrong about the answer being wrong, she is right about it being obscure. How would your typical crossword puzzler know the answer? Perhaps Napier appears in a lot of crosswords since it has lots of vowels, so they just word-associate `log man' with `Napier'. But in any case this does seem odd for a crossword puzzle.<br />
<br />
On the quiz show Jeopardy, in round two, the following was an $800 question under Philosophy (the questions are worth 400, 800, 1200, 1600, 2000, so 800 is supposed to be easy):<br />
<br />
<i>In 1965 Lotfi Zadeh introduced this type of set with no clear boundaries leading to the same type of ``logic''</i><br />
<i><br /></i>
1) I suspect that all my readers know that the correct response is `Fuzzy' (or formally `What is Fuzzy')<br />
<br />
2) Why is this in Philosophy? There IS an entry for it in the Stanford Enc of Phil (see <a href="https://plato.stanford.edu/entries/logic-fuzzy/">here</a>). Some philosophers work on logic, but do they work on Fuzzy Logic? The Wikipedia page for Fuzzy Logic (see <a href="https://en.wikipedia.org/wiki/Fuzzy_logic">here</a>) has only one mention of philosophy, and it's to the Stanford Enc of Phil entry.<br />
<br />
3) (more my point) Is Fuzzy Logic so well known as to be an easy question on Jeop? See next point<br />
<br />
4) Can someone get this one right WITHOUT ever having heard of Fuzzy Logic? I suspect yes, and, indeed, the contestant DID get it right and I think she had never heard of Fuzzy Logic. While I can't be sure, one tell is that when a contestant says `what is fuzzy logic' and it actually sounds like a question, they are partially guessing.<br />
<br />
Anyway, I am surprised that this obscure bit of math made it into an easy question on Jeopardy. But since the contestant got it right, it was appropriate.<br />
<br />
Posted by GASARCH<br />
<br />
<b>P = NP Need Not Give a SAT Algorithm</b> (2018-11-01)<br />
<br />
In <a href="https://blog.computationalcomplexity.org/2018/10/if-pnp-then-we-have-alg-for-sat.html">Bill's post</a> earlier this week, he asks if there is a fixed algorithm such that, if P = NP, this algorithm will correctly compute satisfiability on all inputs. While I believe this statement is true because P ≠ NP, I would like to argue we won't prove it anytime soon.<br />
<br />
Bill links to a TCS Stack Exchange <a href="https://cstheory.stackexchange.com/a/41753/550">answer</a> by Marzio De Biasi giving a fixed algorithm that would get SAT correct except on a finite number of inputs if P = NP. Here is Marzio's algorithm.<br />
<br />
<pre style="background-color: #eff0f1; border: 0px; box-sizing: inherit; color: #242729; font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px; font-stretch: inherit; font-variant-east-asian: inherit; font-variant-numeric: inherit; line-height: inherit; margin-bottom: 1em; max-height: 600px; overflow-wrap: normal; overflow: auto; padding: 5px; vertical-align: baseline; width: auto;"><code style="border: 0px; box-sizing: inherit; font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-stretch: inherit; font-style: inherit; font-variant: inherit; font-weight: inherit; line-height: inherit; margin: 0px; padding: 0px; vertical-align: baseline; white-space: inherit;">Input: x (boolean formula)
Find the minimum i such that
1) |M_i| < log(log(|x|)) [ M_1,M_2,... is a standard fixed TM enumeration]
2) and M_i solves SAT correctly
on all formulas |y| < log(log(|x|))
halting in no more than |y|^|M_i| steps
[ checkable in polynomial time w.r.t. |x| ]
if such i exists simulate M_i on input x
until it stops and accept/reject according to its output
or until it reaches 2^|x| steps and in this case reject;
if such i doesn't exist loop for 2^|x| steps and reject.</code></pre>
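The pattern behind this pseudocode — trust a candidate machine only after checking it against brute force on all short inputs — can be illustrated in a runnable toy form. In this hedged sketch, the names `brute_sat`, `always_false`, and `marzio_style` and the fixed test set are inventions of this illustration, not part of Marzio's actual construction: ordinary Python functions stand in for the TM enumeration, and the log log |x| length bound and step bounds are abstracted into a fixed list of short test formulas.

```python
# Toy, hedged illustration of the "verify on short inputs, then trust" idea.
from itertools import product

def brute_sat(clauses, nvars):
    """Exponential-time ground truth. Clauses are lists of nonzero ints,
    DIMACS-style: literal k means variable k, -k means its negation."""
    return any(
        all(any((lit > 0) == asg[abs(lit) - 1] for lit in c) for c in clauses)
        for asg in product([False, True], repeat=nvars)
    )

always_false = lambda clauses, nvars: False   # a wrong candidate "machine"

def marzio_style(clauses, nvars, machines, tests):
    # Mimic the enumeration: pick the first machine that is correct on
    # all short test instances (the |y| < log log |x| check), then run
    # it on the real input. Step bounds are abstracted away here.
    for M in machines:
        if all(M(c, v) == brute_sat(c, v) for c, v in tests):
            return M(clauses, nvars)
    return False  # no verified machine found: reject

tests = [([[1]], 1), ([[1], [-1]], 1)]        # one SAT, one UNSAT formula
print(marzio_style([[1, 2], [-1, 2]], 2, [always_false, brute_sat], tests))
# prints True
```

Here `always_false` fails the satisfiable test formula, so the verified candidate `brute_sat` is used instead; in the real construction the verified machine would be a genuine polynomial-time SAT solver whenever P = NP.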
<br />
This proof relativizes: there is a fixed relativizing Turing machine M such that for all A, if P<sup>A</sup> = NP<sup>A</sup> then M<sup>A</sup> runs in polynomial time and L(M<sup>A</sup>) differs from SAT<sup>A</sup> on only a finite number of strings. SAT<sup>A</sup> is a relativizable form of SAT with built-in relations that answer whether a sequence of variables encodes a string in A. SAT<sup>A</sup> is NP<sup>A</sup>-complete for all A.<br />
<br />
The following shows any proof that a fixed algorithm can compute all of SAT correctly if P = NP cannot relativize.<br />
<br />
<b>Theorem: </b>For every deterministic Turing machine M, there is an A such that P<sup>A</sup> = NP<sup>A</sup> and either M<sup>A</sup> does not compute SAT<sup>A</sup> correctly on all inputs or M<sup>A</sup> takes at least exponential time.<br />
<br />
<b>Proof:</b><br />
<br />
Define B = {0x | x in TQBF}. TQBF is the PSPACE-complete problem containing all the true quantified Boolean formula. P<sup>TQBF</sup> = PSPACE = NP<sup>TQBF</sup> and thus P<sup>B</sup> = NP<sup>B</sup>.
<br />
<br />
Let φ<sub>n</sub> be the formula that is true if there exists a string z of length n such that 1z is in A. Let n be the smallest n such that M<sup>B</sup>(φ<sub>n</sub>) takes less than 2<sup>n</sup> computational steps. If no such n exists let A = B and we are done.
<br />
<br />
If M<sup>B</sup>(φ<sub>n</sub>) accepts we are done by letting A=B since B has no string beginning with 1.
<br />
<br />
If M<sup>B</sup>(φ<sub>n</sub>) rejects and uses less than 2<sup>n</sup> steps there must be some z such that M<sup>B</sup>(φ<sub>n</sub>) did not query 1z. Let A = B ∪ {1z}.<br />
<br />
M<sup>A</sup>(φ<sub>n</sub>) still rejects but now φ<sub>n</sub> is in SAT<sup>A</sup>. Also P<sup>A</sup> = NP<sup>A</sup>, since adding a single string to an oracle won't affect whether the two classes collapse.<br />
<br />
Posted by Lance Fortnow<br />
<br />
<b>If P=NP then we HAVE an alg for SAT</b> (2018-10-29)<br />
<br />
I am writing up the results of my survey of people's opinions on P vs NP (it will appear in SIGACT News, in Lane's Complexity Column, in 2019). Some people wrote:<br />
<br />
<i> P=NP but the proof will be nonconstructive and have a large constant.</i><br />
<br />
Large constant could happen.<br />
<br />
If by nonconstructive they mean not practical, then yes, that could happen.<br />
<br />
The following does not quite show it can't happen but it does give one pause: an old result of Levin's shows that there is a program you could write NOW such that if P=NP then this program decides SAT <b>except for a finite number of formulas</b> (all of which are NOT in SAT) and can be proven to work in poly time (I will later give three pointers to proofs). The <b>finite number of formulas </b>is why the people above may still be right. But only a finite number- seems like a weak kind of nonconstructive.<br />
<br />
Since I am teaching crypto I wondered about Factoring. An old result of Gasarch (I proved it this morning -- I am sure it is already known) shows that there is a program you could write NOW such that if Factoring is in P then this program factors a number ALWAYS (none of this <b>finite exception</b> crap) and can be proven to work in poly time. Even so, the algorithm is insane. If someone thought that factoring in P might be nonconstructive, my construction disproves it in such an absurd way that the notion that factoring could be in P nonconstructively should be taken seriously but not literally. There should be a way to say formally:<br />
<br />
<i>I believe that FACTORING is in P but any poly-time algorithm is insane (not even looking at the constants) and hence could never be implemented.</i><br />
<br />
<br />
Not sure how to define insane.<br />
<br />
Three pointers:<br />
<br />
Stack Exchange TCS: <a href="https://cstheory.stackexchange.com/questions/41751/algorithm-whose-running-time-depends-on-p-vs-np">here</a><br />
<br />
Wikipedia: <a href="https://en.wikipedia.org/wiki/P_versus_NP_problem#Polynomial-time_algorithms">here</a><br />
<br />
My slides (also include factoring result): <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/pnpconst.pdf">here</a><br />
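The factoring construction can be seen as Levin-style universal search. Here is a hedged toy version in Python; the names `trial_division`, `bogus`, and `universal_factor` are invented for this sketch, and the two candidate "programs" are stand-ins for a real enumeration of all programs. The key point matching the post: a claimed factorization is verified by multiplication before it is returned, so the search can never output a wrong answer — none of the finite-exceptions caveat that the SAT result has. (As written it loops forever on primes; a real use would first run a primality test.)

```python
# Hedged sketch of Levin-style universal search for factoring.

def trial_division(n):
    """A genuine factoring 'program'; each yield is one step."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            yield (d, n // d)   # found a nontrivial factorization
            return
        d += 1
        yield None              # one step, no answer yet
    yield None                  # n is prime: no factorization exists

def bogus(n):
    """A wrong 'program'; verification rejects its answers forever."""
    while True:
        yield (1, n)

def universal_factor(n, programs):
    """Dovetail: in phase t, restart each program with a 2**t step budget.
    Only verified factorizations are ever returned."""
    t = 0
    while True:
        t += 1
        for prog in programs:
            gen = prog(n)
            answer = None
            for _ in range(2 ** t):           # this phase's step budget
                try:
                    answer = next(gen)
                except StopIteration:
                    break
                if answer is not None:
                    break
            if answer is not None:
                p, q = answer
                if 1 < p < n and p * q == n:  # verify before trusting
                    return (p, q)

print(universal_factor(91, [bogus, trial_division]))  # prints (7, 13)
```

If factoring is in P, some program in the enumeration factors in polynomial time, and the dovetailing overhead keeps the whole search polynomial — even though we have no idea which program it is.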
<br />
<b>Question: </b>Can the SAT result be improved to an algorithm that is ALWAYS right? Is there a way to show that it can't be (unless, say, P=NP)?<br />
<br />
<b>Question</b>: What can be said about Graph Isomorphism in this context? The proof for SAT is easily adapted to this case (all we used about SAT was that it was self-reducible). But can we get our GI algorithm to never make a mistake?<br />
<b><br /></b>
Posted by GASARCH<br />
<br />
<b>Good Results Made Meaningless</b> (2018-10-25)<br />
<br />
Sometimes you see a beautiful theorem A that you love to talk about. Then another beautiful theorem B comes along, making the first one meaningless since B trivially implies A. B is not a mere extension of A; it has a completely different proof of something much stronger. People will forget all about A -- why bother when you have B? Too bad, because A was such a nice breakthrough in its time.<br />
<br />
Let me give two examples.<br />
<br />
In STOC 1995 Nisan and Ta-Shma showed that <a href="https://doi.org/10.1145/225058.225101">Symmetric logspace is closed under complement</a>. Their proof worked quite differently from the 1988 Immerman-Szelepcsényi <a href="https://blog.computationalcomplexity.org/2003/06/foundations-of-complexity-lesson-19.html">nondeterministic logspace closed under complement</a> construction. Nisan and Ta-Shma created monotone circuits out of undirected graphs and used these monotone circuits to create sorting networks to count the number of connected components of the graph.<br />
<br />
Ten years later Omer Reingold <a href="https://blog.computationalcomplexity.org/2014/02/favorite-theorems-connecting-in-log.html">showed</a> that symmetric logspace is the same as deterministic logspace, making the Nisan-Ta-Shma result a trivial corollary. Reingold's proof used walks on expander graphs, and the Nisan-Ta-Shma construction was lost to history.<br />
<br />
In the late 80's we had several randomized algorithms for testing primality but they didn't usually give a proof that the number was prime. A nice result of Goldwasser and Kilian gave a way to <a href="https://doi.org/10.1145/12130.12162">randomly generate certified primes</a>, primes with proofs of primeness. Adleman and Huang <a href="https://doi.org/10.1145/28395.28445">later showed</a> that one can randomly find a proof of primeness for any prime.<br />
<br />
In 2002, Agrawal, Kayal and Saxena showed <a href="https://www.jstor.org/stable/3597229">Primes in P</a>, i.e., primes no longer needed a proof of primeness. As Joe Kilian said to me at the time, "there goes my best chance at a Gödel Prize".<br />
<br />
Posted by Lance Fortnow<br />
<br />
<b>Please Don't Call Them Guidance Counselors</b> (2018-10-22)<br />
<br />
As I mentor many HS students, I was recently in email contact with the HS contact for projects, and I noticed that the sign-off was<br />
<br />
Allie Downey<br />
<strike>Guidance</strike> School Counselor<br />
<br />
This piqued my interest, so I emailed her asking about the cross-out.<br />
<br />
She was delighted to answer! Here is her email:<br />
<br />
<i>Thank you, but “Guidance Counselor” is an outdated term. It is from a time before there was the American School Counselor Association, before the profession required at least a Master's degree, and before there was a nationally recognized comprehensive school counseling program. Guidance is a service; school counseling is a program. This is a great website that explains it even better: <a href="http://sccounselor.blogspot.com/2011/10/school-counselor-yes-vs-guidance.html">here</a></i><br />
<i><br /></i>
<i>Thank you for taking the time to ask. Anyone in the profession today prefers the term School Counselor and I always appreciate when people inquire.</i><br />
<br />
I asked her further about the difference and here is what she said:<br />
<br />
<i>Everyone in a young person’s life offers guidance in some fashion. School counselors still provide guidance classroom lessons (such as bully prevention and character development), but we do so much more. We help students develop their academic abilities and study skills. We assist them and their families in college and career planning. We teach coping skills so students can guide themselves. We ask questions to help these young adults discover the answers on their own. We help students learn how to advocate for themselves. We console. We mediate. We learn and adapt with changing climates. We work with families, faculty, and community members to make sure school is a safe place for students to learn and grow. And this is all before the lunch bell. It is an amazing profession and I am proud to call myself a school counselor.</i><br />
<br />
Posted by GASARCH<br />
<br />
<b>A New Cold War?</b> (2018-10-18)<br />
<br />
Imagine the following not-so-unrealistic scenario: The US-China trade war deepens, <a href="https://www.wsj.com/articles/mike-pence-announces-cold-war-ii-1539039480">leading to a cold war</a>. The US <a href="https://www.businessinsider.com/trump-admin-considered-banning-chinese-students-to-keep-out-spies-2018-10">blocks all Chinese citizens</a> from graduate school in the US. Visas between the two countries <a href="https://thehill.com/policy/international/389809-trump-administration-to-tighten-restrictions-on-some-chinese-visas">become hard to get</a>. The Chinese <a href="https://www.nytimes.com/2018/10/15/opinion/internet-google-china-balkanization.html?rref=collection%2Fsectioncollection%2Fopinion-editorials">close off their Internet</a> to the West.<br />
<blockquote class="tr_bq">
If things continue along this path, the next decade may see the internet relegated to little more than just another front on the new cold war.</blockquote>
I wouldn't have thought it in our hyperconnected age but we are in spitting distance of going back to the 60's. What would this all mean for science and computing?<br />
<br />
Let's go back to the original cold war between the Soviet Union and the US, roughly from 1947-1991. We didn't have a significant internet back then (though we can <a href="https://www.history.com/topics/inventions/invention-of-the-internet">thank the cold war for the internet</a>). One had to assume that the government read mail to/from the USSR. Travel between the USSR and the Eastern bloc and the West was difficult. Academic research did cross over, but only in dribs and drabs, and we saw two almost distinct academic cultures emerge, often with duplication of effort (<a href="https://blog.computationalcomplexity.org/2005/04/favorite-theorems-np-completeness.html">Cook/Levin</a>, <a href="https://blog.computationalcomplexity.org/2016/09/boris-trakhtenbrot-1921-2016.html">Borodin/Trakhtenbrot</a>, <a href="https://blog.computationalcomplexity.org/2003/06/foundations-of-complexity-lesson-19.html">Immerman/Szelepcsényi</a>).<br />
<br />
Beyond the limited communication came the lack of collaboration. Science works best with open discussion, sharing of ideas and collaborating with each other. It took a Russian and an American <a href="https://doi.org/10.1016/S0169-7552(98)00110-X">working together</a> to give us Google.<br />
<br />
No cold war in this age can completely cut off ideas flowing between countries but it can truly hamper knowledge growth. We can't count on US superiority with China already ahead of us in areas like high-speed trains, renewable energy and mobile payments.<br />
<br />
The cold war did have an upside to science: The competition between the East and the West pushed research growth and funding on both sides. We already see arguments for <a href="https://www.cnn.com/2018/09/30/tech/quantum-computing-china/index.html">quantum computing</a> and <a href="https://www.nytimes.com/2018/09/22/opinion/sunday/ai-china-united-states.html">machine learning</a> by the necessity of staying ahead of the Chinese. But science funding should not be driven by fear but by curiosity and its indisputable long-term effects on our economy.<br />
<br />
We must keep the flow of information and people open, from students through senior researchers, if we want science and technology to flourish to their fullest. Don't let a new cold war bring us back to the days of separate communities, which will fail to produce the breakthroughs we will never know about.<br />
<br />
Posted by Lance Fortnow<br />
<br />
<b>Practical consequences of RH?</b> (2018-10-14)<br />
<br />
When it seemed like the Riemann Hypothesis (RH) might be solved (see the Lipton-Regan blog entry on RH <a href="https://rjlipton.wordpress.com/2018/09/26/reading-into-atiyahs-proof/">here</a>, and what it points to, for more info), I had the following email exchange with Ajeet Gary (not Gary Ajeet, though I will keep his name in mind for when I want to string together names like George Washington, Washington Irving, Irving Berlin, with the goal of getting back to the beginning), who is an awesome ugrad at UMCP majoring in Math and CS.<br />
<br />
<i>Ajeet:</i> So Bill, now that RH has been solved should I take my credit cards off of Amazon?<br />
<br />
<i>Bill:</i> I doubt RH has been solved. And I think you are thinking that from RH you can prove that factoring is in P. That is not known and likely not true.<br />
<br />
<i>Ajeet</i>: What are my thoughts and why are they wrong?<br />
<br />
<i>Bill</i>: What am I a mind-reader?<br />
<br />
<i>Ajeet</i>: Aren't you?<br />
<br />
<i>Bill: </i>Oh, Yes, you are right, I am. Here is what you are confusing this with and why, even if you were right you would be wrong.<br />
<br />
<i>Ajeet:</i> It just isn't my day.<br />
<br />
<i>Bill: </i>Any day you are enlightened is your day. Okay, here are the thoughts you have<br />
<br />
a) From the Extended RH (ERH, a generalization of RH) you can prove that PRIMES is in P. (This algorithm is slow and not used. PRIMES has a fast algorithm in RP that people do use. PRIMES was eventually proven to be in P anyway, though again that is a slow algorithm.) Note: even though we do not know if ERH is true, one could still RUN the algorithm that it depends on. ERH is only used to prove that the algorithm runs in polynomial time.<br />
<br />
b) There was an episode of Numb3rs where they claimed (1) RH implies Factoring in P-- not likely but not absurd (2) from the proof of RH you could get a FAST algorithm for factoring in a few hours (absurd). I say absurd for two reasons: (i) Going from basic research to application takes a long time, and (ii) See next thought<br />
<br />
c) If (RH --> factoring easy) then almost surely the proof would present an algorithm (that can be run even if RH has not been proven) and then a proof that RH --> the algorithm's run time is poly. But I wonder -- is it possible that:<br />
<br />
RH--> factoring easy, and<br />
<br />
The proof does not give you the algorithm, and<br />
<br />
if you had a proof of RH then you COULD get the algorithm (though not in a few hours).<br />
<br />
I doubt this is the case.<br />
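As an aside on item (a): the fast randomized primality test people actually run in practice is Miller-Rabin. Here is a minimal sketch (the name `is_probable_prime` and the round count are choices of this sketch; note the error is one-sided, so strictly the test certifies compositeness, placing PRIMES in coRP).

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin test (hedged sketch). A 'composite' verdict is always
    correct; a 'prime' verdict is wrong with probability <= 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):    # quick check against small primes
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:                 # write n-1 = d * 2**s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)              # fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False              # a witnesses that n is composite
    return True                       # probably prime

print(is_probable_prime(2**61 - 1))   # prints True (a Mersenne prime)
```

Contrast with factoring: knowing that n is composite via Miller-Rabin gives no hint of its factors, which is why fast primality testing has no known bearing on breaking RSA.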
<br />
<i>Ajeet: </i>So are there any practical consequences of RH?<br />
<br />
<i>Bill</i>: Would you call better bounds on the error term of the prime number theorem practical?<br />
<br />
<i>Ajeet</i>: YES!<br />
<br />
<i>Bill:</i> GREAT! For more on RH see <a href="http://empslocal.ex.ac.uk/people/staff/mrwatkin/zeta/riemannhyp.htm#q8">here</a>.<br />
<br />
Posted by GASARCH<br />
<br />
<b>2018 Fall Jobs Post</b> (2018-10-11)<br />
<br />
As we do every year at this time, we help you find that perfect academic job. So who's hiring in CS this year? Perhaps we should instead follow the advice of John Regehr.<br />
<blockquote class="twitter-tweet" data-lang="en">
<div dir="ltr" lang="en">
I said this last year and it's time to say it again:<br />
to save time and energy, will the CS departments NOT hiring faculty this year please speak up?</div>
— John Regehr (@johnregehr) <a href="https://twitter.com/johnregehr/status/1045481991456534528?ref_src=twsrc%5Etfw">September 28, 2018</a></blockquote>
For computer science faculty positions best to look at the ads from the <a href="http://www.cra.org/ads/">CRA</a> and the <a href="http://jobs.acm.org/">ACM</a>. For theoretical computer science specific postdoc and faculty positions check out <a href="https://cstheory-jobs.org/">TCS Jobs</a> and <a href="http://dmatheorynet.blogspot.com/">Theory Announcements</a>. If you have jobs to announce, please post to the above and/or feel free to leave a comment on this post.<br />
<br />
Even if you don't see an ad, almost surely your favorite university is looking to hire computer scientists. Check out their website or email someone at the department.<br />
<br />
And (selfish plug) Georgia Tech is <a href="https://www.scs.gatech.edu/content/cs-faculty-hiring">looking to hire in theory this year</a>.<br />
<br />
Some generally good advice: Make sure you have strong letter writers who know your research well; best if one or two of them come from outside your university. Put all your papers and materials on your website and make sure your Google Scholar page is accurate. Put effort into your job talk and remember you need to sell your research to non-theorists. Good if you can connect to other areas, especially machine learning, data science or cybersecurity. Quantum seems hot this year.<br />
<br />
Above all have fun! In this computational and data driven world we live in, there is a great job out there for you.<br />
<script async="" charset="utf-8" src="https://platform.twitter.com/widgets.js"></script>
Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com0tag:blogger.com,1999:blog-3722233.post-91805289215668121152018-10-08T19:55:00.000-04:002018-10-08T19:55:48.558-04:00A New ACO Center (guest post by Vijay Vazirani)Guest Post by Vijay Vazirani<br />
<br />
A New ACO Center!<br />
<br />
Last week, I helped launch an <a href="http://sites.uci.edu/acoi/">ACO Center</a> (Algorithms, Combinatorics and Optimization) at my wonderful new home, UC Irvine. There are only two other such centers, at <a href="http://aco.math.cmu.edu/">CMU</a> and <a href="http://www.aco.gatech.edu/">Georgia Tech</a> (29 and 27 years old, respectively). My personal belief is that there will be more in the future. Let me justify.<br />
<br />
When I joined Georgia Tech in 1995, my research was centered around approximation algorithms, a topic that resonated with its ACO Center. I was able to build on this interest in numerous ways: by offering new versions of courses on this topic as new results emerged, and by attracting to GT, for the first time, a large number of top theory PhD students who went on to produce stellar results and start impressive careers of their own. Course notes accumulated over the years eventually led to <a href="https://www.springer.com/us/book/9783540653677">my book on the topic</a> in 2001. Right after that, I switched to algorithmic game theory, and again ACO became the center of that activity, this time resulting in a <a href="https://www.cambridge.org/us/academic/subjects/computer-science/algorithmics-complexity-computer-algebra-and-computational-g/algorithmic-game-theory?format=HB&isbn=9780521872829">co-edited book</a> which had a huge impact on the growth of this area. In short, ACO gave me a lot! In turn, I believed in it and worked for it wholeheartedly.<br />
<div>
<br /></div>
<div>
<div>
I still believe in ACO and I feel it is very much relevant in today’s research world. Similar to the other two ACOs, our Center at UCI also exploits the natural synergies among TCS researchers from the CS Department, probability and combinatorics researchers from the Math Department, and optimization researchers from the Business School. Additionally, our Center has grown well beyond these boundaries to include a highly diverse collection of faculty (e.g., from the prestigious <a href="https://www.imbs.uci.edu/">Institute for Mathematical Behavioral Sciences</a>) whose common agenda is to utilize the “algorithmic way of thinking”, which is set to revolutionize the sciences and engineering over the course of this century, just as mathematics did in the last. The <a href="http://sites.uci.edu/acoi/">Center website</a> has further details about its vision and activities.</div>
<div>
<br /></div>
<div>
<div>
Many universities are in a massive hiring mode today (especially in CS), e.g., UCI <a href="https://strategicplan.uci.edu/pillar-1.php">plans to hire 250</a> new faculty over the next five years. Centers such as ours present the opportunity of hiring in a more meaningful manner around big themes. They can also be instrumental in attracting not only top students but also top faculty.</div>
<div>
<br /></div>
<div>
A center of excellence such as GT&#8217;s ACO does not simply spring up by itself; it requires massive planning, hard work, good taste and able leadership. For the last, I will forever be indebted to Robin Thomas for his highly academic, big-vision, classy leadership style, which was the main reason ACO remained such a high-quality program for so long. Moving forward, will we stay with three ACO Centers or will there be more? I believe the latter, but only time will tell.</div>
</div>
</div>
GASARCHhttp://www.blogger.com/profile/03615736448441925334noreply@blogger.com1tag:blogger.com,1999:blog-3722233.post-25314341751171125012018-10-06T14:14:00.000-04:002018-10-06T14:14:19.722-04:00John Sidles, Mike Roman, Matt Howell, please email me/hard to get emails of peopleJohn Sidles, Mike Roman, Matt Howell : please email me. at gasarch@cs.umd.edu (my usual email)<br />
<br />
I need to ask you about some comments you left on the blog a while back (or emailed me -- I forget which, but I can't find your emails if you did email me). I need you to contact me SOON!<br />
<br />
When you do I will tell you what's up and why I decline to say here what this is about.<br />
<br />
For my other readers -- it is nothing controversial.<br />
<br />
<br />
How hard is it to find people's emails on the web?<br />
<br />
Sometimes it takes less than 5 minutes<br />
<br />
Sometimes it is impossible.<br />
<br />
Sometimes I get it by asking someone else who knows, or knows who to ask... etc.<br />
<br />
It is rare that more time on the web helps. I do not think I have ever spent more than 5 minutes and then found it. I have sometimes used LinkedIn. I don't have a Facebook account (I was going to put in a link to the latest Facebook privacy breach, but (1) by the time you read this they may have had another one, (2) you all know about it, and (3) when I typed `Facebook Scandal' into Google I got the Facebook page for the TV show Scandal.)<br />
<br />
Should people make their emails public? I can see why one does not want to. The old saying is that if you owe people money you want to be hard to find, but if people owe you money you want to be easy to find.<br />
<br />
Contrast email to what people DO put online. A few years ago I needed someone's email address. I found his website. From his website I found out the exact day he lost his virginity. Okay... Didn't need to know that. But I still could not find his email address. I later asked someone who asked someone etc. and got it. But I was struck by what was private and what was public. This is NOT a complaint (though I wish it were easier to find email addresses), just an observation.GASARCHhttp://www.blogger.com/profile/03615736448441925334noreply@blogger.com0tag:blogger.com,1999:blog-3722233.post-74297613126327905682018-10-04T21:47:00.000-04:002018-10-04T21:47:12.808-04:00Google added years to my lifeIf you google<br />
<br />
gasarch<br />
<br />
you used to get the following: <a href="http://www.cs.umd.edu/~gasarch/BLOGPAPERS/bill68.png">here</a><br />
<br />
Please go there and notice how old they say I am.<br />
<br />
Okay, you are back. You may have noticed that they say I am 68. Gee, I don't feel 68 (I feel younger).<br />
<br />
I have no idea how Google got my age wrong.<br />
<br />
0) I found out about this when I saw my age in an article about the Muffin problem. The article is <a href="https://www.nrc.nl/nieuws/2018/09/14/hoe-vijf-muffins-tot-een-diep-wiskundig-vraagstuk-leidden-a1616534">here</a>. I had been in contact with the author earlier so it was easy to contact him, assure him that I appreciated his throwing scammers and spammers off my trail by giving me the wrong age, but I wondered why he chose 68. He then apologized (which was not needed) and pointed me to the Google thing.<br />
<br />
1) My age was not on my Wikipedia page. Nor was my birth year.<br />
<br />
2) I do not recall ever telling Google my age -- but then again, Google knows what I search for and perhaps deduced an incorrect age from that (I've been watching a very old Western, Maverick, lately, which may have fooled them. So my plan is working!)<br />
<br />
3) Google thinks I published with Hilbert (see <a href="https://scholar.google.com/citations?user=SVRgMwEAAAAJ&hl=en">here</a> or <a href="https://blog.computationalcomplexity.org/2017/05/google-scholar-thinks-my-hilbert-number.html">here</a>), so perhaps that made them think I am 68 years old. Hmm, still too young. If I had been a 10-year-old math genius in 1938 (Hilbert died in 1943, but since I am not a 68-year-old math genius I chose numbers to make it easy) and published with<br />
him then, then I would now be 90. Not 68. So that is not the answer.<br />
(Side question- are any of Hilbert's co-authors still alive?)<br />
<br />
Seriously, if anyone has any ideas why Google had it wrong, let me know.<br />
<br />
4) Lance was outraged at this and hence put my birth year on my Wikipedia page thinking that<br />
would fix it. (I was not outraged, just amused.)<br />
<br />
5) It was taken off my page since Lance couldn't prove it.<br />
<br />
6) Lance and I mentioned my age in a blog post and that was proof enough. So our blog is a primary<br />
source. We should use this sparingly -- with great power comes great responsibility. (See <a href="https://www.youtube.com/watch?v=fj7c3vBZ7jA">here</a> for more on that theme)<br />
<br />
7) Other weird internet stuff: What does it take to get a Wikipedia Page? A Nobel Prize in Physics<br />
helps. See: <a href="https://www.thedailybeast.com/nobel-laureate-donna-strickland-judged-not-famous-enough-for-wikipedia-page-before-win-report">here</a>.<br />
<br />
<br />GASARCHhttp://www.blogger.com/profile/03615736448441925334noreply@blogger.com4tag:blogger.com,1999:blog-3722233.post-44914187631823346862018-10-01T17:06:00.000-04:002018-10-02T09:52:39.676-04:00Muffin Math<b>Lance: </b>It's Friday afternoon and the Dagstuhl workshop has ended. We have some time before we need take off so how about one final typecast.<br />
<br />
<b>Bill: </b>Always a good idea.<br />
<br />
<b>Lance: </b>First, the exciting news: Nitin Saxena won the <a href="http://ssbprize.gov.in/">Shanti Swarup Bhatnagar prize</a> for 2018, <br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody>
<tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-SaGHaelCUQU/W7KK8Yy3mtI/AAAAAAABjjQ/6Qw1qL8YAz4gXTfMv9qvqD-EU9sXhDMEgCKgBGAs/s1600/IMG_20180926_180801.jpg" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1600" data-original-width="1200" height="200" src="https://1.bp.blogspot.com/-SaGHaelCUQU/W7KK8Yy3mtI/AAAAAAABjjQ/6Qw1qL8YAz4gXTfMv9qvqD-EU9sXhDMEgCKgBGAs/s200/IMG_20180926_180801.jpg" width="150" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Nitin Saxena</td></tr>
</tbody></table>
according to the many Indians at Dagstuhl, the most prestigious science prize in the country. The awards were announced on Wednesday during the workshop. He's the S in AKS.<br />
<br />
<b>Bill: </b>That's really impressive. He was only two years old when AKS had log-depth sorting networks.<br />
<br />
<b>Lance: </b>Bill, you moron. You're thinking of <a href="https://doi.org/10.1145/800061.808726">Ajtai-Komlós-Szemerédi</a>. I'm talking Agrawal-Kayal-Saxena <a href="http://annals.math.princeton.edu/2004/160-2/p12">Primes in P</a>, topic of my <a href="https://blog.computationalcomplexity.org/2002/08/this-has-been-exciting-summer-for.html">second ever blog post</a>. Nitin, an undergrad at that time, didn't just sit on his laurels--he has had awesome results on algebraic circuit complexity that truly justify this prize.<br />
<br />
<b>Bill: </b>Well congrats to Nitin. Moving on, let's talk math.<br />
<br />
<b>Lance: </b>We're at Dagstuhl so we have to call it computer science.<br />
<br />
<b>Bill: </b>Ronen Shaltiel gave a great but depressing talk, <a href="https://eccc.weizmann.ac.il/report/2018/061/">Indistinguishability by adaptive procedures with advice, and lower bounds on hardness amplification proofs</a>.<br />
<br />
<b>Lance: </b>Nobody sings the <a href="https://www.youtube.com/watch?v=aDunXIMNE7A">Dagstuhl Blues</a>.<br />
<br />
<b>Bill: </b>Suppose you have a hard function and want to convert it into a harder function, known in the biz as hardness amplification. For constant-depth circuits we have hardness results but no known process for amplification. For larger classes, like constant-depth circuits with threshold gates, we do know ways to amplify.<br />
<br />
<b>Lance: </b>But we have no hardness results there? Where are you going with this?<br />
<br />
<b>Bill: </b>Ronen put it nicely: "We can only amplify hardness where we don't have it." Ronen and his colleagues proved results along those lines. Lance, does that depress you?<br />
<br />
<b>Lance: </b>Not as much as the sprained ankle that made me miss that talk. My turn to pick a favorite talk. I loved Michael Forbes' talk <a href="https://doi.org/10.1145/3188745.3188792">Hitting Sets for the Closure of Small Circuits</a>. You take algebraic circuits with a parameter epsilon and take the limit as epsilon goes to zero. Forbes and Amir Shpilka show a PSPACE algorithm to find small sets containing non-zeros of these functions. These kinds of functions are studied in the <a href="https://arxiv.org/abs/1304.6333">GCT approach</a> to lower bounds.<br />
<br />
<b>Bill: </b>What lower bounds is this aiming to solve?<br />
<br />
<b>Lance: </b>Showing the computational difference between the determinant and the permanent.<br />
<br />
<b>Josh Alman: </b>You've left out the most exciting part of the conference.<br />
<br />
<b>Bill and Lance: </b>So Josh, what was that?<br />
<br />
<b>Josh Alman: </b>The world debut performance of Stud Muffin and Smilin' Sam singing "<a href="https://www.youtube.com/watch?v=4xQFlsK7jKg">Do You Work on Muffin Math?</a>"<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen="" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/4xQFlsK7jKg/0.jpg" frameborder="0" height="266" src="https://www.youtube.com/embed/4xQFlsK7jKg?feature=player_embedded" width="320"></iframe></div>
<br />
<b>Lance: </b>That awesome duo looks familiar Bill. Where I have seen them before?<br />
<br />
<b>Bill: </b>That Sam can really tickle the ivories.<br />
<br />
<b>Lance: </b>And Stud was definitely in the room.<br />
<br />
<b>Bill: </b>On that note, take us out.<br />
<br />
<b>Lance: </b>In a complex world, keep it simple.Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com1tag:blogger.com,1999:blog-3722233.post-32289662703792720282018-09-27T02:44:00.001-04:002018-09-27T07:16:59.382-04:00Still Typecasting from Dagstuhl<b>Lance: </b>Bill, in our <a href="https://blog.computationalcomplexity.org/2018/09/lance-and-bill-go-to-dagstuhl-riemann.html">typecast</a> earlier this week I said you were older than me. But 68? You don't look day over 66.<br />
<br />
<b>Bill: </b>Neither do you. But seriously, why do you think I'm 68?<br />
<a href="https://4.bp.blogspot.com/-10m0C7mFATw/W6txToo4KEI/AAAAAAABjfs/SbBYpREG8cgjgrBVykJ9DOCf_Qh7_jl3ACLcBGAs/s1600/Bill%2B68.1.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="1025" data-original-width="1600" height="203" src="https://4.bp.blogspot.com/-10m0C7mFATw/W6txToo4KEI/AAAAAAABjfs/SbBYpREG8cgjgrBVykJ9DOCf_Qh7_jl3ACLcBGAs/s320/Bill%2B68.1.png" width="320" /></a><br />
<b>Lance: </b>I just Google'd "How old is Bill Gasarch?"<br />
<br />
<b>Bill: </b>Don't believe everything you read on the Internet. I'm really 58.<br />
<br />
<b>Lance: </b>Prove it.<br />
<br />
<b>Bill: </b>Here's my driver's license.<br />
<br />
<b>Lance: </b>Bill you don't drive. And it literally says "NOT A DRIVER'S LICENSE" on the back. But it is an official State of Maryland Identification card stating that you were born in 1959. Are you saying I should trust the state of Maryland over Google?<br />
<br />
<a href="https://1.bp.blogspot.com/-aDqlcNz7t_c/W6t0rVdIZcI/AAAAAAABjgI/qILWfsOs0TQwiLCLncADxRvLpXYfWGquwCKgBGAs/s1600/IMG_20180925_120039.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="1600" data-original-width="1115" height="200" src="https://1.bp.blogspot.com/-aDqlcNz7t_c/W6t0rVdIZcI/AAAAAAABjgI/qILWfsOs0TQwiLCLncADxRvLpXYfWGquwCKgBGAs/s200/IMG_20180925_120039.jpg" width="138" /></a><b>Bill: </b>Yes, because they pay my salary. Back to Dagstuhl. Let's talk about <a href="https://docs.google.com/document/d/1CPf_ocNIQJuhsnwG_6HKqaUDP8U-Pt7V8pJRLq6vvRw/edit?usp=sharing">the talks</a>. William Hoza gave a nice talk about hitting sets for L (deterministic small space) vs RL (randomized small space) but when I asked him when will we prove L = RL he said not for fifty years. Grad students are not supposed to be that pessimistic.<br />
<br />
<b>Lance: </b>You mean realistic. Though I'd guess more like 10-20 years. I wouldn't even be surprised if NL (nondeterministic log space) = L.<br />
<br />
<b>Arpita Korwar: </b>I say 10-15 years.<br />
<br />
<b>Bill: </b>Can we put that in the blog?<br />
<br />
<b>Lance: </b>Too late. Bill I heard you were the stud muffin this week.<br />
<br />
<b>Bill: </b>Yes, I talked about the <a href="https://blog.computationalcomplexity.org/2018/06/the-muffin-problem.html">muffin problem</a>. Got a problem with that?<br />
<br />
<b>Lance: </b>Needed milk. I saw this talk two years ago and now you have cool theorems. Who would've thought that with 24 muffins and 11 people you can give everyone 24/11 muffins with smallest piece 19/44, and that's the best possible for maximizing the smallest piece.<br />
<br />
<b>Bill: </b>I can't believe you actually listened to the talk and didn't fall asleep.<br />
<br />
<b>Lance: </b>zzzzzz. Did you say something?<br />
<br />
<b>Bill: </b>Never mind. Eric Allender talked about the <a href="http://dx.doi.org/10.4230/LIPIcs.ITCS.2018.20">minimum circuit-size problem</a> (MCSP): given the truth table of a function f, is there a circuit for f of less than a given size w? The problem is frustrating; just consider the following theorem: if MCSP is NP-complete then EXP does not equal ZPP (exponential time vs. zero-error probabilistic polynomial time).<br />
<br />
<b>Lance: </b>Do you think EXP = ZPP?<br />
<br />
<b>Bill: </b>No, the result only tells us it will be hard to prove MCSP is NP-complete without informing us whether or not it is NP-complete. Allender did show that under projections it isn't NP-complete (Editor's Note: I should have said log-time projections; see Eric's <a href="https://blog.computationalcomplexity.org/2018/09/still-typecasting-from-dagstuhl.html?showComment=1538032720983#c1629526917930788998">comment</a>. SAT and all your favorite NP-complete problems are complete under log-time projections). MCSP might be complete under poly-time reductions but not under weaker reductions.<br />
<br />
<b>Lance: </b>Reminds me of the Kolmogorov random strings that are hard for the halting problem under Turing reductions but not under many-one reductions.<br />
<br />
<b>Bill: </b>Everything reminds you of the Kolmogorov strings.<br />
<br />
<b>Lance: </b>As they should.<br />
<br />
<b>Bill: </b>I liked Michal Koucký's talk on <a href="http://dx.doi.org/10.4230/LIPIcs.ESA.2018.12">Gray codes</a>.<br />
<br />
<b>Lance: </b>Shouldn't that be grey codes? We're not in the UK.<br />
<br />
<b>Bill: </b>It's not the color, you moron. It's named after Frank Gray.<br />
<br />
<b>Lance: </b>You are smarter than you look; not bad for a 68-year-old. I missed Koucký's talk due to a sports injury, but he did catch me up later.<br />
<br />
<b>Bill: </b>I never put Lance and sports in the same sentence before.<br />
<br />
<b>Lance: </b>And I never put Bill and driving together. It's a new day for everything. Koucký showed how to easily compute the next element in the Gray code querying few bits, as long as the alphabet is of size 3.<br />
<br />
<b>Bill: </b>Which contrasts Raskin's <a href="http://dx.doi.org/10.4230/LIPIcs.ICALP.2017.88">2017 paper</a> that shows with a binary alphabet you need to query at least half the bits.<br />
<br />
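(Editor's aside, not part of the exchange: for readers meeting Gray codes for the first time, the standard binary reflected code, which is not Koucký's ternary construction, is the classic <code>i ^ (i &gt;&gt; 1)</code> trick; successive codewords differ in exactly one bit.)

```python
def gray(i: int) -> int:
    """The i-th codeword of the binary reflected Gray code."""
    return i ^ (i >> 1)

def flipped_bit(i: int) -> int:
    """Index of the single bit that changes between gray(i) and gray(i + 1):
    it is the position of the lowest set bit of i + 1."""
    return ((i + 1) & -(i + 1)).bit_length() - 1

# First eight codewords: 000, 001, 011, 010, 110, 111, 101, 100
codes = [gray(i) for i in range(8)]
```

Note these helpers use the counter i; computing the successor from the codeword alone is the regime where the binary lower bound Bill mentions bites.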
<b>Lance: </b>Hey you stole my line.<br />
<br />
<b>Bill: </b>That's not possible. You are editing this. I think this typecast has gone long enough. Take us out.<br />
<br />
<b>Lance: </b>In a complex world, best to keep it simple.Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com4tag:blogger.com,1999:blog-3722233.post-45469947611184524292018-09-25T02:32:00.002-04:002018-09-27T00:39:13.199-04:00Lance and Bill go to Dagstuhl: The Riemann Edition<b>Lance: </b>Welcome to our typecast directly from Dagstuhl in Southwestern Germany for the 2018 edition of the seminar on <a href="https://www.dagstuhl.de/en/program/calendar/semhp/?semnr=18391">Algebraic Methods in Computation Complexity</a>. Say hi Bill.<br />
<br />
<b>Bill: </b>Hi Bill. So Lance, are you disappointed we didn't go to Heisenberg for the Michael Atiyah talk claiming a solution to the Riemann Hypothesis?<br />
<br />
<b>Lance: </b>I knew how fast I was going but I got lost going to Heisenberg. I think you mean the Heidelberg Laureate Forum, about 100 miles from here. From what I heard we didn't miss much. For those who care, here are the <a href="https://www.youtube.com/watch?v=jXugkzFW5qY">video</a>, some <a href="https://twitter.com/mpoessel/status/1044131977950109696">twitter</a> <a href="https://twitter.com/aperiodical/status/1044132699605274624">threads</a> and the <a href="https://drive.google.com/file/d/17NBICP6OcUSucrXKNWvzLmrQpfUrEKuY/view">paper</a>.<br />
<br />
<b>Bill: </b>Too bad. When I first heard about the claim I was optimistic because (1) László Babai <a href="http://people.cs.uchicago.edu/~laci/update.html">proved</a> that graph isomorphism is in quasipolynomial-time at the age of 65 and (2) since Atiyah was retired he had all this time to work on it. Imagine Lance if you were retired and didn't have to teach or do administration, could you solve P vs NP? (This gets an LOL from Nutan Limaye)<br />
<br />
<b>Lance: </b>I'll be too busy writing the great American novel. Before we leave this topic, don't forget about the rest of the <a href="https://www.heidelberg-laureate-forum.org/event_2018/">Laureate Forum</a>, loads of great talks from famous mathematicians and computer scientists. Why didn't they invite you Bill?<br />
<br />
<b>Bill: </b>They did but I'd rather be at Dagstuhl with you to hear about lower bounds on matrix multiplication from Josh Alman. Oh, hi Josh, I didn't see you there.<br />
<br />
<b>Josh: </b>Happy to be here, it's my first Dagstuhl. I'm flying around the world from Boston via China to get here. Though my friends say it's not around the world if you stay in the Northern hemisphere. They are a lot of fun at parties. But not as much fun as matrix multiplication.<br />
<br />
<b>Bill: </b>So Josh, what do you have to say about matrix multiplication? Is it quadratic time yet?<br />
<br />
<b>Josh: </b>Not yet, and <a href="https://arxiv.org/abs/1712.07246">we show</a> all the current techniques will fail.<br />
<br />
<b>Bill: </b>Wouldn't Chris Umans disagree?<br />
<br />
<b>Kathryn Fenner: </b>You shouldn't pick on Canadians [Ed note: Josh is from Toronto]. Pick on students from your own country.<br />
<br />
<b>Josh: </b>(diplomatically) I think Chris Umans has a broader notion of what counts as known methods. There are some groups that aren't ruled out but we don't know how to use them.<br />
<br />
<b>Chris: </b>Very well put. The distinction is between powers of a fixed group versus families of groups like symmetric groups. The latter seems like the best place to look.<br />
<br />
<b>Lance: </b>Thanks Chris. Josh, what are your impressions of Dagstuhl so far?<br />
<br />
<b>Josh: </b>I like the sun and grass. I wish it was easier to get here.<br />
<br />
<b>Lance: </b>This is only the first day. You haven't even found the music room yet, past the white room, past the billiard room where Mr. Green was murdered with the candlestick. Oh hi Fred Green. Luckily Dr. Green is still alive. I remember my <a href="https://www.dagstuhl.de/en/program/calendar/semhp/?semnr=9206">first Dagstuhl</a> back in February of 1992.<br />
<br />
<b>Josh: </b>Two months before I was born.<br />
<br />
<b>Lance: </b>Way to make me feel old.<br />
<br />
<b>Bill: </b>You are old.<br />
<br />
<b>Lance: </b>You are older. Believe it or not, six from that original 1992 meeting are here again this week: the two of us, Eric Allender, Vikraman Arvind, Uwe Schöning and Jacobo Torán. Amazing how accents show up as we talk.<br />
<br />
<b>Bill: </b>What did I sleep through this morning before Josh's talk?<br />
<br />
<b>Lance: </b>Amnon Ta-Shma talked about his <a href="http://www.cs.tau.ac.il/~amnon/Papers/T.STOC17.pdf">STOC 2017 best paper</a> and Noga Ron-Zewi showed some <a href="https://sites.google.com/site/nogaronzewi1/main.pdf">new results</a> on constructive list-decoding.<br />
<br />
<b>Bill: </b>Let's do this again later in the week. Lance, takes us out.<br />
<br />
<b>Lance: </b>In a complex world, best to keep it simple.Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com3tag:blogger.com,1999:blog-3722233.post-14437160797884937322018-09-20T08:19:00.000-04:002018-09-20T08:19:28.031-04:00Why wasn't email built securely?Recently I talked with Ehsan Hoque, one of the authors of the <a href="https://acm-fca.org/2018/03/29/negativeimpacts/">ACM Future of Computing Academy</a> report that suggested "Peer reviewers should require that papers and proposals rigorously consider all reasonable broader impacts, both positive and negative." which I had <a href="https://blog.computationalcomplexity.org/2018/05/broader-impacts-redefined.html">satirized</a> last May.<br />
<br />
Ehsan said that "if email had sender authentication built in from the beginning then we wouldn't have the phishing problems we have today". Leaving aside whether this statement is fully true, why didn't we put sender authentication and encryption in the first email systems?<br />
<br />
Email goes back to the 60's but I did get involved on the early side when I <a href="https://blog.computationalcomplexity.org/2011/06/creating-email-system-at-cornell.html">wrote</a> an email system for Cornell in the early 80's. So let me take a crack at answering that question.<br />
<br />
Of course there are the technical reasons. RSA was invented just a few years earlier and there were no production systems and the digital signatures needed for authentication were just a theory back then. The amount of overhead needed for encryption in time and bandwidth would have stopped email in its tracks back then.<br />
<br />
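To make "digital signatures were just a theory" concrete, here is a toy textbook-RSA signing sketch of what sender authentication asks for. This is my illustration, not how any mail system actually worked or works: it is unpadded, uses publicly known Mersenne primes as the "secret" key, and is insecure by design; real schemes such as DKIM use properly generated keys and padded signatures.

```python
import hashlib

# Toy textbook-RSA signature, for illustration only. Never reuse:
# these primes are famous, and unpadded RSA is malleable.
P = 2**31 - 1                       # 2147483647, a Mersenne prime
Q = 2**61 - 1                       # 2305843009213693951, also prime
N, E = P * Q, 65537                 # public key
D = pow(E, -1, (P - 1) * (Q - 1))   # private exponent (Python 3.8+)

def _digest(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

def sign(msg: bytes) -> int:
    """Only the private-key holder can produce this."""
    return pow(_digest(msg), D, N)

def verify(msg: bytes, sig: int) -> bool:
    """Anyone can check the signature against the public key (N, E)."""
    return pow(sig, E, N) == _digest(msg)
```

The overhead point above, in miniature: every single message would have cost a multi-hundred-bit modular exponentiation, painful on early-80s machines and networks.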
But it's not like we said we wish we could have added encryption to email if we had the resources. BITNET which Cornell used and the ARPANET gateway only connected with other universities, government agencies and maybe some industrial research labs. We generally trusted each other and didn't expect anyone to fake email for the purpose of getting passwords. It's not like these emails could have links to fake login pages. We had no web back then.<br />
<br />
But we did all receive an email from a law firm offering green card help. My first spam message. We had a mild panic but little did we guess that spam would nearly take down email at the turn of the century. Nor would we have guessed the solution would come from machine learning which kills nearly all spam and much of the phishing emails today.<br />
<br />
I don't disagree with the report that we should think about negative broader impacts, but the true impacts, negative and positive, are nearly impossible to predict. Computer Science works best when we experiment with ideas, get things working and fix problems as they arise. We can't let the fear of the future prevent us from getting there.Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com7tag:blogger.com,1999:blog-3722233.post-14009509147494067772018-09-17T00:23:00.000-04:002018-09-18T20:16:12.198-04:00What is a Physicist? A Mathematician? A Computer Scientist? Scott Aaronson recently won the Tomassoni-Chisesi Prize in Physics (yeah Scott!).<br />
In his post (<a href="https://www.scottaaronson.com/blog/?p=3955">here</a>) about it he makes a passing comment:<br />
<div>
<br /></div>
<div>
<i>I'm of course not a physicist</i></div>
<div>
<i><br /></i></div>
<div>
I won't disagree (does that mean I agree? Darn Logic!) but it raises the question of how we identify ourselves. How to answer the question:<br />
<br />
Is X a Y?<br />
<br />
(We will also consider why we care, if we do.)<br />
<br />
Some criteria below. Note that I may say things like `Dijkstra is obviously a computer scientist'<br />
but this is cheating since my point is that it may be hard to tell these things (though I think he is).</div>
<div>
<br /></div>
<div>
1) If X is in a Y-dept then X is a Y. While often true, there are some problems: MIT CS is housed in Mathematics, and some people change fields. Readers: if you know someone who is in dept X but really does Y, leave a comment. (CORRECTION: I really don't know how MIT is structured. I do know that the Math Dept has several people who I think of as computer scientists: Bonnie Berger, Michael Goemans, Tom Leighton, Peter Shor, Michael Sipser. There may be others as well. The point being that I would not say `Sipser is a mathematician because he is in the MIT Math Dept')<br />
<br />
2) If X got their degree in Y then they are a Y. Again, people can change fields. Also, some of the older people in our field got degrees in Physics or Math since there was no CS (I am thinking Dijkstra-Physics, Knuth-Math). Even more recently there are cases: Andrew Childs' degree is in Physics, but he did quantum computing. Readers: if you know someone who got their degree in X but is now doing Y, leave a comment.<br />
<br />
3) Look at X's motivation. If Donald Knuth does hard math but he does it to better analyze algorithms, then he is a computer scientist. One problem -- some people don't know their own motivations, or it can be hard to tell. And people can get distracted into another field.<br />
<br />
4) What does X call himself? Of course people can be wrong. The cranks who email me their proofs that R(5) is 40 (it's not) think they are mathematicians. They are not -- or are they? See next point.<br />
<br />
5) What X is interested in, independent of whether they are good at it or even know any. Not quite right: if an 8-year-old Bill Gasarch is interested in <a href="https://blog.computationalcomplexity.org/2012/06/ketchup-problem.html#comment-form">the Ketchup problem</a> that does not make him a mathematician.<br />
<br />
6) What X is working on right now. Fine but might change. And some work is hard to classify.<br />
<br />
7) If you win an award in X, then you are an X. Some exceptions:<br />
<br />
Scott is a computer scientist who won the Tomassoni-Chisesi Physics Prize<br />
<br />
Ed Witten is a Physicist who won the Fields Medal (Math)<br />
<br />
John Nash is a mathematician who won a Nobel prize in Economics.<br />
<br />
I want to make a full circle -- so if you know another X who won a prize in Y then leave a comment and<br />
we'll see what kind of graph we get: bipartite with people on one side and fields on the other.<br />
<br />
8) What can they teach? Helpful in terms of hiring when you want to fill teaching needs.<br />
<br />
Does any of this matter? We use terms like `mathematician', `physicist', and `computer scientist' as shorthand for what someone is working on, so it's good to know we have it right.<br />
<br />
<br /></div>
GASARCHhttp://www.blogger.com/profile/03615736448441925334noreply@blogger.com9tag:blogger.com,1999:blog-3722233.post-11455747814566626692018-09-13T08:08:00.000-04:002018-09-13T08:08:01.012-04:00P = NP and CancerOften when the question comes up of what would happen if P = NP, one typically hears the response that it kills public-key cryptography. And it does. But that gives the impression that, given the choice, we would rather not have P = NP. Quite the opposite: P = NP would greatly benefit humanity, from solving AI (by finding the smallest circuit consistent with the data) to curing cancer. I've said this before but never explained why.<br />
<br />
Of course I don't have a mathematical proof that P = NP cures cancer. Nor would an efficient algorithm for SAT immediately give a cancer cure. But it could work as follows:<br />
<ol>
<li>We need an appropriately shaped protein that would inhibit the cancer cells for a specific individual without harming the healthy cells. P = NP would help find these shapes, perhaps given just the DNA of the person and the type of cancer.</li>
<li>At this point we don't understand the process that takes a protein's amino-acid sequence (encoded in DNA as ACGT) and determines the shape that it folds into. But it must be a simple process because it happens quickly. So we can use P = NP to find a small circuit that describes this process.</li>
<li>Use P = NP to find a protein sequence on which the circuit from #2 outputs the shape from #1.</li>
</ol>
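Step 3 above is circuit inversion: given the circuit from #2 and the target shape from #1, find an input the circuit maps to that shape — exactly the kind of search an efficient SAT algorithm would make fast. A toy sketch in Python, where `fold_circuit` is a made-up stand-in for the learned circuit and brute force stands in for the hypothetical P = NP solver:

```python
from itertools import product

def fold_circuit(seq):
    """Toy stand-in for step 2's learned circuit: maps a 'sequence'
    (tuple of bits) to a 'shape' (tuple of bits)."""
    a, b, c, d = seq
    return (a ^ b, b & c, c | d)

def invert(circuit, target_shape, n_inputs):
    """Step 3: find a sequence the circuit maps to the desired shape.
    With P = NP this search would be efficient; here we brute-force it."""
    for seq in product((0, 1), repeat=n_inputs):
        if circuit(seq) == target_shape:
            return seq
    return None

print(invert(fold_circuit, (1, 0, 1), 4))  # → (0, 1, 0, 1)
```

On n-bit sequences the brute force takes 2^n steps; the whole point of P = NP would be that this inversion takes polynomial time instead.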
We'll need a truly efficient algorithm for NP problems for this to work. An n<sup>50</sup> algorithm for SAT won't do the job. All these steps may happen whether or not P = NP, but we'll need some new smart algorithmic ideas.<div>
<br /><div>
Please note this is just a thought exercise since I strongly believe that P ≠ NP. I do not want to give false hope to those with friends and loved ones with the disease. If you want to cure cancer your first step should not be "Prove P = NP". <ul>
</ul>
</div>
</div>
Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com17tag:blogger.com,1999:blog-3722233.post-51810585594265621772018-09-11T12:11:00.000-04:002018-09-11T13:48:35.347-04:00The Tenure system is broken but not in the way that you think (Anon Guest Post)<br />
This is an ANON guest post. Even I don't know who it is! They emailed me asking if they<br />
could post on this topic; I said I would need to see the post first. I did, and it was fine.<br />
<br />
-----------------------------------------------------------------------------------------------------------------<br />
I have written many tenure/promotion letters before. But this summer, I was especially inundated with requests. Thinking about my past experiences with such letters, I started to question their value.<br />
<br />
For those unfamiliar with the process, let me explain. When someone is applying for a research job, they typically need to have recommendation letters sent on their behalf. Once someone is hired in<br />
a tenure-track position, they then need to get additional letters each time they are promoted (in the US, this will typically occur when someone is granted tenure and again when they are promoted to full<br />
professor).<br />
<br />
Now, I know from experience that recommendation letters are scrutinized very carefully, and often contain useful nuggets of information. I am not questioning the value of such letters (though<br />
they may have other problems). I am focusing here only on tenure/promotion letters.<br />
<br />
Let me fill in a bit more detail about the tenure/promotion process, since it was a mystery to me before I started an academic position. (I should note that everything I say here is based only on how things are<br />
done at my institution; I expect it does not differ much at other US universities, but it may be different in other countries.) First, the department makes a decision as to whether to put forward someone's<br />
case for promotion. If they do, then a package is prepared that includes, among other things, the external recommendation letters I am talking about. After reviewing the candidate's package, the department holds an official vote; if positive, then the package is reviewed and<br />
voted on by higher levels of administration until it is approved by the president of the university.<br />
<br />
The external letters appear very important, and they are certainly discussed when the department votes on the candidate's case. However, I am not aware of any cases (in computer science) where someone who was put forward for tenure was denied tenure. (In contrast, I am aware of a very small number of cases where a department declined to put someone forward for tenure. In such cases, no letters are ever<br />
requested.) Perhaps more frustrating, this seems to be the case even when there are negative letters. In fact, I have written what I consider to be "negative" letters in the past only to see the candidate still get tenure. (To be clear, by academic standards a negative letter does not mean saying anything bad; it just means not effusively praising the candidate.) This makes me believe that these letters are simply being used as "checkboxes" rather than real sources of information to take into account during the decision-making process. Essentially, once a department has decided to put someone forward for promotion, they have effectively also decided to vote in favor of their promotion.<br />
<br />
Letters take a long time to write, especially tenure/promotion letters, and especially when you are not intimately familiar with someone's work (even if they are in the same field). But if they are<br />
basically ignored, maybe we can all save ourselves some time and just write boilerplate letters (in favor of tenure) instead?<br />
GASARCHhttp://www.blogger.com/profile/03615736448441925334noreply@blogger.com5tag:blogger.com,1999:blog-3722233.post-81905355928553532852018-09-06T09:01:00.000-04:002018-09-06T09:01:09.223-04:00Are Conferences Discriminatory? Glencora Borradaile wrote a <a href="http://blogs.oregonstate.edu/glencora/2018/07/02/discrimination-and-the-conference-publication-system/">blog post</a> in June about how conferences discriminate.<br />
<blockquote class="tr_bq">
Let me spell it out. In order to really succeed in most areas of computer science, you need to publish conference papers and this, for the most part, means attendance at those conferences. But because of the institutional discrimination of border control laws and the individual discrimination that individuals face and the structural discrimination that others face, computer science discriminates based on nationality, gender identity, disability, and family status, just to name a few aspects of identity.</blockquote>
Suresh Venkatasubramanian follows up with a <a href="https://twitter.com/geomblog/status/1015267584432721921">tweet storm</a> (his words) echoing Glencora's points.<br />
<blockquote class="twitter-tweet" data-conversation="none" data-lang="en">
<div dir="ltr" lang="en">
Is there structural (i.e not intentional or institutional) bias in how conferences operate? I.e is there a systematic and persistent disadvantage to certain groups from how conferences are structured? If we consider location, and groups = non-US people, then yes.</div>
— Suresh Venkatasubramanian (@geomblog) <a href="https://twitter.com/geomblog/status/1015267585460334592?ref_src=twsrc%5Etfw">July 6, 2018</a></blockquote>
<script async="" charset="utf-8" src="https://platform.twitter.com/widgets.js"></script>
Ryan Williams had a <a href="https://twitter.com/rrwilliams/status/1015642178243256322">twitter thread</a> defending conferences.<br />
<blockquote class="twitter-tweet" data-conversation="none" data-lang="en">
<div dir="ltr" lang="en">
Because of where I live and work, I can collaborate with and see talks by many more people than the average person in my field. To me, conferences serve as a way of *leveling* that field, giving a venue where people from all over can benefit similarly.</div>
— R. Ryan Williams (@rrwilliams) <a href="https://twitter.com/rrwilliams/status/1015642195733303298?ref_src=twsrc%5Etfw">July 7, 2018</a></blockquote>
There's not much difference these days between blog posts, tweet storms, and twitter threads, and I recommend you read through them all.<br />
<br />
Much as I think conferences <a href="https://cacm.acm.org/magazines/2009/8/34492/fulltext">should not serve as publication venues</a>, they do, and they should play a major role in connecting people within the community. We should do our best to mitigate the real concerns of Glencora and Suresh: create an environment in which everyone feels comfortable, provide travel support and child care to make attending easier, and hold meetings in different countries so those with visa issues can still attend at times. But we cannot eliminate the conference without eliminating the community. Personal interactions matter.<br />
<script async="" charset="utf-8" src="https://platform.twitter.com/widgets.js"></script>
Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com8tag:blogger.com,1999:blog-3722233.post-30393145678148026362018-09-03T22:46:00.002-04:002018-09-04T08:35:06.843-04:00The Rule of Threes/AstrologyOn Aug 16, 2018 Aretha Franklin died. A famous singer.<br />
<br />
On Aug 18, 2018 Kofi Annan died. A famous diplomat.<br />
<br />
On Aug 25, 2018 John McCain died. A famous politician.<br />
<br />
On Aug 26, 2018 Neil Simon died, a famous playwright.<br />
<br />
For 12 famous people who died between Aug 5 and Aug 26 see <a href="http://www.tributes.com/celebrity/deaths/today">here</a> (be careful- there are a few more on the list who died in August but a different year).<br />
<br />
One could group those 12 into four sets of three and claim the<i> rule of threes</i> that celebrities die in threes. There was an episode of <i>30 Rock</i> where two celebrities had died and Tracy Jordan (a celeb) tried to kill a third one so he would not be a victim of the<i> rule of threes</i>. (see the short video clip: <a href="https://vimeo.com/151801153">here</a>.)<br />
<br />
How would one actually test the rule of threes? We would need to define the rule carefully. Below I give a well-defined rule, with parameters you can set, from which you could do data collection (this could be a project for a student, though you would surely prove there is no such rule).<br />
<br />
<ol>
<li>Decide on a definite time frame: T days. The deaths only count if they are within T days.</li>
<li>Define celebrity. This may be the hardest part. I'll start with: they must have a Wikipedia page of length at least W and must have over H hits on Google. This may be hard to discern for people with common names or alternative spellings. You might also look at Friends on Facebook and Followers on Twitter. A big problem with all of this is that if you want to study old data, from before there was Google, Wikipedia, Facebook, and Twitter, you will need other criteria (ask your grandparents what it was like in those days).</li>
<li>Decide whether or not to have a cutoff on age. You may decide that when Katherine Hepburn, Bob Hope, and Strom Thurmond died less than a month apart, at the ages of 96, 100, 100 this doesn't qualify. Hence you may say that the celebrities who die must be younger than Y years.</li>
</ol>
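To see why the window parameter T in step 1 matters, here is a sketch of the null model the experiment would have to beat: if celebrity deaths fall independently and uniformly through the year, disjoint triples within a week already show up by chance. (The counting convention and the 150-deaths-per-year rate are my own assumptions, not data.)

```python
import random

def clusters_of_three(death_days, T):
    """Count disjoint triples of consecutive deaths all within T days."""
    days = sorted(death_days)
    count, i = 0, 0
    while i + 2 < len(days):
        if days[i + 2] - days[i] <= T:
            count += 1
            i += 3      # the "rule" groups deaths into disjoint triples
        else:
            i += 1
    return count

# Null model: deaths occur independently and uniformly over a year.
random.seed(0)
year = [random.uniform(0, 365) for _ in range(150)]
print(clusters_of_three(year, 7))   # triples within a week arise by chance alone
```

Comparing this count against the count on real celebrity-death data, for the same T, W, H, and Y, is the experiment.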
<br />
I doubt anybody will ever do the experiment--- those who believe it's true (are there really such people?) have no interest in defining it carefully or testing it. And people who don't believe would not bother, partially because so few people believe it that it's not worth debunking. But I wonder if a well thought out experiment might reveal something interesting. Also contrast the data to all deaths and see if there is a difference. For example, you might find that more celebs die in August than would be expected based on when all people die. Or that celebs live longer. Or shorter. Actually, with enough <a href="https://en.wikipedia.org/wiki/Data_dredging">p-hacking</a> I am sure you could find something. But would you find something meaningful?<br />
<br />
Astrology is in the same category- people who believe (there ARE such people!) could do well-defined experiments but have no interest in doing so. I doubt they would find anything of interest if they did. Here there are enough people who believe in it to make it worth debunking, but would a well-designed science experiment convince them that astrology does not have predictive powers? Has such an experiment been done?<br />
<br />
<br />
I once DID do such an experiment to disprove a wild theory. In 2003 a cab driver told me (1) there is no Gold in Fort Knox, and Ian Fleming was trying to tell us this in the book Goldfinger, (2) Reagan was shot since he was going to tell, and (3) a small cohort of billionaires runs the world. I challenged him-- if that is the case, then how come in 1992 Bill Clinton beat George Bush, who was surely the billionaires' pick? He responded that Bill Clinton was a Rhodes Scholar and hence in-the-club. I challenged him again- OKAY, predict who will get the Democratic Nomination in 2004. This was a well-defined experiment (though only one data point): he would give me a prediction and I could test it. He smiled and said <i>Wesley Clark was a Rhodes Scholar.</i> Oh well.<br />
<br />GASARCHnoreply@blogger.com8tag:blogger.com,1999:blog-3722233.post-14981400123701965672018-08-30T09:25:00.001-04:002018-08-30T09:25:44.291-04:00What is Data Science?The <a href="https://simons.berkeley.edu/">Simons Institute</a> at Berkeley has two semester long programs this fall, <a href="https://simons.berkeley.edu/programs/complexity2018">Lower Bounds on Computational Complexity</a> and <a href="https://simons.berkeley.edu/programs/datascience2018">Foundations of Data Science</a>. The beginning of each program features a "boot camp" to get people up to speed in the field, <a href="https://simons.berkeley.edu/workshops/schedule/6596">complexity</a> last week and <a href="https://simons.berkeley.edu/workshops/schedule/6680">data science</a> this week. Check out the links for great videos on the current state of the art.<br />
<div>
<br /></div>
<div>
Data Science is one of those terms you see everywhere but that is not well understood. Is it the same as machine learning? Data analytics? Those pieces play only a part of the field.<br />
<br /></div>
<div>
Emmanuel Candès, a Stanford statistician, gave a great description during his keynote talk at the recent <a href="http://acm-stoc.org/stoc2018/">STOC theoryfest</a>. I'll try to paraphrase.</div>
<div>
<br /></div>
<div>
The basic scientific method works as follows: You make a hypothesis consistent with the world as you know it. Design an experiment that would distinguish your hypothesis from the current models we have. Run the experiment and accept, reject, or refine your hypothesis as appropriate. Repeat.</div>
<div>
The discovery of the <a href="https://home.cern/topics/higgs-boson">Higgs Boson</a> is a recent example of this model.<br />
<br />
Technological Advances have given us a different paradigm.<br />
<ol>
<li>Our ability to generate data has greatly increased whether it be from sensors, DNA, telescopes, computer simulations, social media and oh so many other sources.</li>
<li>Our ability to store, communicate and compress this data saves us from having to throw most of it away.</li>
<li>Our ability to analyze data through machine learning, streaming and other analysis tools has greatly increased with new algorithms, faster computers and specialized hardware.</li>
</ol>
</div>
<div>
All this data does not lend itself well to manually creating hypotheses to test. So we use automated analysis tools, like ML, to create models of the data, and use other, held-out data to test those hypotheses. Data science is this process writ large.<br />
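The model-on-one-piece, test-on-another idea can be sketched in a few lines: fit a model on part of the data and judge it only on data it never saw. (The synthetic data, the one-parameter model, and the 150/50 split are all illustrative choices of mine, not anything from Candès's talk.)

```python
import random

random.seed(1)
# Stand-in for "generated" data: y = 3x + noise
xs = [random.uniform(0, 10) for _ in range(200)]
data = [(x, 3 * x + random.gauss(0, 0.5)) for x in xs]

# Hold out part of the data: fit on one piece, test on the other.
random.shuffle(data)
train, test = data[:150], data[150:]

# "Learn" a model y = w*x on the training data (least squares, closed form).
w = sum(x * y for x, y in train) / sum(x * x for x, y in train)

# Judge the model only on the held-out data.
mse = sum((y - w * x) ** 2 for x, y in test) / len(test)
print(round(w, 2), round(mse, 2))
```

Much of the statistical subtlety Candès discussed is about what can go wrong when this separation is not maintained honestly — e.g., when the same data is reused many times.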
<br />
We are in the very early stages of data science and face many challenges. Candès talked about one challenge: how to prevent false claims that arise from the data, a problem not unrelated to the current <a href="https://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970">reproducibility crisis</a> in science.<br />
<br />
We have other scientific issues. How can we vouch for the data itself, and what about errors in the data? Many of the tools remain ad hoc; how can we get theoretical guarantees? Not to mention the various ethical, legal, security, privacy, and fairness issues that vary across disciplines and nations.<br />
<br />
We sit at a time of exciting change in the very nature of research itself, but how can we get it right when we still don't know all the ways we can get it wrong?</div>
Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com0tag:blogger.com,1999:blog-3722233.post-55992237056540027652018-08-27T18:36:00.000-04:002018-08-27T18:36:34.800-04:00Is Trivium (the Stream Cipher) used?This Fall I am teaching the senior course in Crypto at UMCP. It's a nice change of pace for me since REAL people REALLY use this stuff! Contrast that to last Spring, when I taught<br />
<br />
<i>Ramsey Theory and its `Applications'</i><br />
<br />
There is one topic in the Crypto course that LOOKS really useful but I can't tell if it IS being used, so I inquire of my readers. (I will probably come across other topics like that in the future.)<br />
<br />
A Secure Stream Cipher is (informally) a way to, given a seed and optionally an Init Vector (IV), generate bits that look random. Alice and Bob communicate the seed either in person, over a private channel, or perhaps by using RSA (or some other public-key system), and they then both effectively have a very long string of random bits. They send the IV in the clear. They can then do a one-time pad (really a pseudo-one-time pad). There are other uses for random-looking bits as well.<br />
<br />
So what is needed is a Secure Stream Cipher. <i>Trivium</i> seems to be one such. According to <a href="http://www.thefullwiki.org/Trivium_(cipher)">the Trivium wiki</a><br />
<br />
<i>It was submitted to the Profile II (hardware) of the eSTREAM competition by its authors Christophe De Canniere and Bart Preneel, and has been selected as part of the portfolio for low area hardware ciphers (Profile 2) by the eSTREAM project. It is not patented.</i><br />
<i><br /></i>
According to these papers: <a href="http://www.ecrypt.eu.org/stream/p3ciphers/trivium/trivium_p3.pdf">here</a> and <a href="http://www.ecrypt.eu.org/stream/papersdir/2006/021.pdf">here</a>, and the Wikipedia entry, <a href="https://en.wikipedia.org/wiki/Trivium_(cipher)">here</a> the following are true:<br />
<br />
1) Trivium takes an 80-bit seed and an 80-bit IV<br />
<br />
2) The implementation is simple and is already in hardware. Around 3000 logic gates.<br />
<br />
3) There are reasons to think it's random-looking, but no rigorous proof.<br />
<br />
4) So far it has not been broken, though it's not clear how many people have tried. That goes to my question-- how widely used is it?<br />
<br />
5) Trivium needs 1152 steps in the init phase. If it does only 799, then <a href="https://en.wikipedia.org/wiki/Cube_attack">The Cube Attack</a> can break it in 2^68 steps, which is better than the naive algorithm of trying every key and IV (2^160) but still not feasible.<br />
<br />
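For readers who want to see how simple Trivium is, here is a bit-level sketch in Python following the eSTREAM specification: a 288-bit state split into registers of 93, 84, and 111 bits, with 1152 initialization rounds. This is an unoptimized illustration (and the bit ordering within key and IV is my simplifying choice, so it won't match official test vectors byte-for-byte); don't use it for real cryptography.

```python
def trivium_keystream(key, iv, n):
    """Generate n keystream bits from an 80-bit key and an 80-bit IV,
    each given as a list of 0/1 bits."""
    assert len(key) == 80 and len(iv) == 80
    # 288-bit state: registers A (s1..s93), B (s94..s177), C (s178..s288)
    a = key + [0] * 13
    b = iv + [0] * 4
    c = [0] * 108 + [1, 1, 1]
    out = []
    for i in range(4 * 288 + n):        # 1152 init rounds, then n output rounds
        t1 = a[65] ^ a[92]              # s66 + s93
        t2 = b[68] ^ b[83]              # s162 + s177
        t3 = c[65] ^ c[110]             # s243 + s288
        if i >= 4 * 288:                # init rounds produce no output
            out.append(t1 ^ t2 ^ t3)
        t1 ^= (a[90] & a[91]) ^ b[77]   # + s91*s92 + s171
        t2 ^= (b[81] & b[82]) ^ c[86]   # + s175*s176 + s264
        t3 ^= (c[108] & c[109]) ^ a[68] # + s286*s287 + s69
        a = [t3] + a[:92]               # shift the three registers
        b = [t1] + b[:83]
        c = [t2] + c[:110]
    return out

# Pseudo-one-time pad: XOR the message bits with the keystream.
key = [1] + [0] * 79
iv = [0] * 80
msg = [i % 2 for i in range(32)]
ks = trivium_keystream(key, iv, 32)
ct = [m ^ k for m, k in zip(msg, ks)]
pt = [c ^ k for c, k in zip(ct, ks)]
```

The whole update is three XOR taps, three AND gates, and three shifts per bit — which is why the hardware footprint is only a few thousand gates.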
6) Trivium is also <a href="https://en.wikipedia.org/wiki/Trivium_(band)">An American Metal Band</a> and a <a href="https://en.wikipedia.org/wiki/Trivium">Medieval theory of education</a>. It's a good name for a band. See my post <a href="https://blog.computationalcomplexity.org/search?q=red+cliques">What Rock Band Name Would you Choose</a>? for fictional good names for bands with a math or theoretical CS connection.<br />
<br />
OKAY, back to the main topic:<br />
<br />
SO my questions:<br />
<br />
Is Trivium used?<br />
<br />
If so, then by whom and for what (for the pseudo-one-time pad?)?<br />
<br />
If not, then why not (e.g., are some of my points above incorrect)? And should it be used instead<br />
of what is being used?<br />
<div>
<br /></div>
<br />
<br />
<br />GASARCHhttp://www.blogger.com/profile/03615736448441925334noreply@blogger.com5tag:blogger.com,1999:blog-3722233.post-59296139584803452982018-08-26T08:57:00.000-04:002018-08-26T08:57:06.961-04:00Katherine Johnson (1918-)<a href="https://www.nasa.gov/sites/default/files/styles/side_image/public/thumbnails/image/26646856911_ca242812ee_o_1.jpg?itok=NQu4uW0H" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="483" data-original-width="320" height="320" src="https://www.nasa.gov/sites/default/files/styles/side_image/public/thumbnails/image/26646856911_ca242812ee_o_1.jpg?itok=NQu4uW0H" width="211" /></a><a href="https://www.nasa.gov/content/katherine-johnson-biography">Katherine Johnson</a> is celebrating her 100th birthday today. This is the first centenary post we've done for a living person.<br />
<br />
The movie <a href="https://www.imdb.com/title/tt4846340/">Hidden Figures</a> made her story famous: In 1952, she joined NACA, the predecessor of NASA, in the all-black West Area Computing section of the Langley lab in Virginia. During the "space race" of the 50's and 60's she worked on trajectory analysis for the early human spaceflights. In 1960, she was the first woman to co-author a <a href="https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19980227091.pdf">technical report</a> for NASA on placing satellites over a specific latitude and longitude.<br />
<br />
The West Area Computing section had human computers working on the critical calculations for air and space travel. Soon NASA started moving that work to IBM machines, but, much as we don't fully trust machine learning today, humans didn't initially trust these computers. John Glenn's first orbital mission required complex calculations to track his flight. He insisted on Katherine Johnson working out the computations herself, which she did. "If she says they're good, then I'm ready to go."<br />
<br />
In 2015, then President Obama awarded Katherine Johnson the highest US civilian honor, the Presidential Medal of Freedom.Lance Fortnowhttps://plus.google.com/101693130490639305932noreply@blogger.com0