Computational Complexity — Computational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch (https://blog.computationalcomplexity.org/)<br />
<br />
<b>Third Poll on P vs NP and related Questions is out now! And the winner is Harambe!</b> (GASARCH, Sun, 17 Mar 2019)<br />
I took a poll of the theory community (and others) about P vs NP and related issues in 2002, 2012, and 2019 (sorry it's not an arithmetic sequence --- read the third poll article to see why). The 2002 and 2012 polls are on the web (see <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/pollpaper1.pdf">here</a> and <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/pollpaper2.pdf">here</a>), and now the third poll is out and it's <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/pollpaper3.pdf">here</a> or <a href="https://dl.acm.org/citation.cfm?doid=3319627.3319636">here</a>. All three appear in Lane's Complexity Column in SIGACT News.<br />
<br />
I'll give some highlights and thoughts about the third poll, but for more read the article. Or read all three and see how opinions have changed over time.<br />
<br />
0) How many respondents: 100 the first time, 152 the second, and 124 the third.<br />
<br />
1) P≠NP is at about 80%. When restricted to people who have thought about the problem a lot (my judgement) it goes to 99%. The strongest statement for P≠NP is from Lance, who says<br />
<br />
People who think P=NP are like people who believe Elvis is alive.<br />
<br />
I disagree. I think Elvis is alive and is trying to prove P=NP.<br />
<br />
2) When will it be solved? 66% thought it would be solved before 2100, a bit more than in prior years. But also more thought it would never be solved. Some commented that a computer might solve it. I doubt that, but I do think this poll will be done by a computer in 10 years.<br />
<br />
3) Because I used SurveyMonkey, (1) each question got more answers, and (2) most questions got a forced YES or NO, so there were fewer funny comments and less room for creative answers. People could still leave comments, and some did.<br />
<br />
4) Related to point (3): The last time I did the poll, everyone <i>who answered the question</i> thought P=BPP. This year far more people answered it, and quite a few thought P≠BPP. This may be because SurveyMonkey had a psychological effect of making people vote even if they didn't really know (people who have thought a lot about P vs NP all thought P=BPP). Is there evidence for P≠BPP that I am unaware of? Or, since there has not been much progress toward proving P=BPP, maybe people now suspect they are unequal. 10 years ago I would have thought we would have L=RL by now.<br />
<br />
5) The last time I did the poll 10 people thought factoring IS in P and the NSA or the KGB knows that. This time around nobody thought that. Why? Those 10 people have vanished mysteriously.<br />
<br />
6) Five people thought P vs NP will be resolved by Harambe. That is more than any person got.<br />
<br />
7) Is this poll worth doing? I would say yes (gee, I have to, having done this poll three times) because it is good to have an objective record of subjective opinion. <br />
<br />
8) I'll give some of my answers: P≠NP, Elvis is dead, and both will be proven in the year 2525. For more about the future see <a href="https://www.youtube.com/watch?v=yesyhQkYrQM">this</a>.<br />
<br />https://blog.computationalcomplexity.org/2019/03/third-poll-on-p-vs-np-and-related.html (GASARCH)<br />
<br />
<b>Captain Einstein</b> (Lance Fortnow, Thu, 14 Mar 2019)<br />
If the president of the United States uses "complexity" in a tweet, I can't leave it alone.<br />
<blockquote class="twitter-tweet" data-lang="en">
<div dir="ltr" lang="en">
Airplanes are becoming far too complex to fly. Pilots are no longer needed, but rather computer scientists from MIT. I see it all the time in many products. Always seeking to go one unnecessary step further, when often old and simpler is far better. Split second decisions are....</div>
— Donald J. Trump (@realDonaldTrump) <a href="https://twitter.com/realDonaldTrump/status/1105468569800839169?ref_src=twsrc%5Etfw">March 12, 2019</a></blockquote>
<script async="" charset="utf-8" src="https://platform.twitter.com/widgets.js"></script>
<br />
<blockquote class="twitter-tweet" data-lang="en">
<div dir="ltr" lang="en">
....needed, and the complexity creates danger. All of this for great cost yet very little gain. I don’t know about you, but I don’t want Albert Einstein to be my pilot. I want great flying professionals that are allowed to easily and quickly take control of a plane!</div>
— Donald J. Trump (@realDonaldTrump) <a href="https://twitter.com/realDonaldTrump/status/1105471621672960000?ref_src=twsrc%5Etfw">March 12, 2019</a></blockquote>
I got my MIT degree in Applied Mathematics so I don't count, but I know plenty of MIT computer scientists and the one undeniable truth is that I don't want any of them flying my plane.<br />
<br />
I also agree with the Donald that a complex environment can often lead to confusion, especially in an emergency. HCI matters.<br />
<br />
But that's where it stops. Better technology in the plane and on the ground, combined with better procedures, have all but eliminated deadly major commercial airplane crashes in the US and made them quite rare in the world at large. I'll call that more than a "little gain". I remember the "old and simpler" days when we had deadly airline crashes every few months. No thanks.<br />
<br />
I've had great discussions with some of the aerospace engineering professors at Georgia Tech. Getting the "front-of-the-plane" software right via formal methods and rigorous testing has become harder than engineering the plane itself. The "back-of-the-plane" software that controls the video screens and WiFi gets less attention. I can live with safety over movies.<br />
<br />
When technology makes things much safer, whether it be planes or self-driving cars, the rare accidents become far more noticeable. While we should use these opportunities to learn and make things safer, the rare accidents also help us realize just how much safer technology can make us.<br />
https://blog.computationalcomplexity.org/2019/03/captain-einstein.html (Lance Fortnow)<br />
<br />
<b>Richard Karp: His influence and how to honor him</b> (GASARCH, Sun, 10 Mar 2019)<br />
When I first saw the definition of NP-Complete I thought<i> if there are NP-complete problems I suspect they are contrived. </i>When I saw the proof that SAT is NP-complete I thought <i>okay, so there is one natural problem that is NP-complete by a trick of encoding, but I suspect there aren't any more.</i> I then read Karp's article, which had 21 NP-complete problems. I thought <i>okay, there are 21, but I suspect there are no more. </i>No I didn't think that. But the point is that Cook and Karp together birthed modern complexity theory by introducing NP-completeness and showing that there are MANY natural problems that are NP-complete.<br />
<br />
How many people has Richard Karp inspired? I don't know but I suspect it's a cardinal between countable and the reals so it may depend on your model of set theory. Later in this post I will present Samir Khuller's story of how Richard Karp inspired him.<br />
<br />
How to honor him? The Simons Institute is currently welcoming contributions to the Richard M. Karp Fund, which honors the scientific contributions of Founding Director Dick Karp. The fund will provide vital support for the mission and activities of the Simons Institute. They have a deadline of Pi Day. You can go <a href="https://simons.berkeley.edu/support/donate">here</a> to contribute and/or <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/goldwasserkarp.txt">here</a> for a letter from Shafi Goldwasser, Prabhakar Raghavan, and Umesh Vazirani about the fund.<br />
<br />
OKAY, Samir's story:<br />
<br />
<div>
<b>Mentoring and Influence via a Train Journey...</b></div>
<div>
<br /></div>
<div>
Almost to the day, 33 years ago, I was running from my dorm room to catch an unreliable bus to the train station as I headed home for a mid-semester break. On the way, I stopped by the mailroom and, not knowing where to put the CACM that had just arrived, stuffed it into the side of my bag, dropping my notebook in the process. On the train, when I looked for my notebook to go over some material, I realized it was missing! All I had was a CACM and a very long train ride. Skimming through the journal, I found Karp’s Turing Award article “Combinatorics, Complexity, and Randomness“. I started reading this article, and was riveted! It was one of those life-changing moments for me, when my immediate future became clear - I wanted to study Theoretical Computer Science and specifically the complexity of combinatorial problems. Without ever having met Dick Karp, he changed my life forever.</div>
<div>
<br /></div>
<div>
I met Dick long after he influenced me via his Turing Award Lecture and it was a real privilege to have had the opportunity to interview him via a fireside chat at Simons. See <a href="https://www.youtube.com/watch?v=g1DT_UeJwps">here</a>.</div>
<div>
<br /></div>
<div>
I am delighted about the fund Bill mentions above as the money that goes to it will help inspire others as Karp inspired me.</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
<br />
<br />https://blog.computationalcomplexity.org/2019/03/richard-karp-his-influence-and-how-to.htmlnoreply@blogger.com (GASARCH)2tag:blogger.com,1999:blog-3722233.post-5011417931001574576Fri, 08 Mar 2019 15:05:00 +00002019-03-08T10:05:52.288-05:00QEDVia <a href="https://www.scottaaronson.com/blog/?p=4133">Scott</a>, John Horgan wrote a <a href="https://blogs.scientificamerican.com/cross-check/the-horgan-surface-and-the-death-of-proof/">blog post</a> following on his 1993 Scientific American article <a href="https://blogs.scientificamerican.com/cross-check/the-horgan-surface-and-the-death-of-proof/">The Death of Proof</a>. The article talked about computer-generated proofs, experimental mathematics and even mentioned some of our work on <a href="https://doi.org/10.1145/103418.103428">transparent proofs</a> (basically time-efficient probabilistically checkable proofs). I got (sarcastically I think) accused of killing mathematics after that article but of course we had traditional proofs about probabilistically check proofs. Andrew Wiles got a sidebar titled "A Splendid Anachronism?" for merely solving the most famous open problem in all of mathematics. Somehow traditional mathematical proofs survived the article.<div>
<br /></div>
<div>
Meanwhile in CACM, Moshe Vardi writes <a href="https://cacm.acm.org/magazines/2019/3/234913-lost-in-math/fulltext">Lost in Math?</a> </div>
<blockquote class="tr_bq">
TCS is surely blessed with mathematical beauty. As a graduate student a long time ago, it was mathematical beauty that attracted me to TCS, and continued to lead my research for many years. I find computational complexity theory (or complexity theory, for short), with its theorems (for example, the time-hierarchy and space-hierarchy theorems) and its open questions (for example, P vs NP), to be hauntingly beautiful. Beauty, yes; but what about truth?</blockquote>
Hear, hear on the beauty of complexity. So what about truth? I cover much of this in my <a href="https://cacm.acm.org/magazines/2009/9/38904-the-status-of-the-p-versus-np-problem/fulltext">P v NP survey</a>. In short, the P v NP question captures the most critical aspects of what is efficiently computable, but it is not the endpoint. Nevertheless P v NP captures both truth and beauty and that's why the problem has so captivated and guided me throughout my research career.<br />
<br />
https://blog.computationalcomplexity.org/2019/03/qed.html (Lance Fortnow)<br />
<br />
<b>How are there 20 copies of my new book through other booksellers and why are they so highly priced?</b> (GASARCH, Mon, 04 Mar 2019)<br />
(This is about my book PROBLEMS WITH A POINT: Exploring Math and Computer Science<br />
by Gasarch and Kruskal, <a href="https://www.amazon.com/Problems-Point-Exploring-Computer-Science/dp/9813279974/ref=sr_1_1?keywords=gasarch&qid=1551731760&s=gateway&sr=8-1">here</a>. This post is NOT a plug.)<br />
<br />
I often see weird pricing on amazon but don't have enough inside information to explain it. This time I have seen some weird pricing on amazon but DO have enough information to be truly baffled.<br />
<br />
My book is on amazon. You can buy it<br />
<br />
Softcover new: $38.00<br />
<br />
Hardcover new: $68.00<br />
<br />
Okay, that all makes sense.<br />
<br />
OR you can buy it from some other source.<br />
<br />
There are 10 copies of the softcover, they claim new, ranging from $38.00 to $57.00.<br />
<br />
There are 2 copies of the softcover, they claim used (how used could it be?), $49 and $53<br />
<br />
There are 9 copies of the hardcover, they claim new, ranging from $68.00 to $91.00<br />
<br />
There are 2 copies of the hardcover, they claim used, $84.00 and $88.00<br />
<br />
This raises two questions<br />
<br />
1) I have a copy, Clyde got a copy, and two copies were sent to reviewers. Neither Clyde nor I have sold our copy. So how did 20 or so copies get out there?<br />
<br />
2) Why do they cost MORE than list price?<br />
<br />
I've asked around and got some answers, but I INVITE you to give more possible answers. If your answer is not just speculation (e.g., `I used to be in the black market book business and here's the real scoop') then that would be great; however, I will take speculation.<br />
<br />
POSSIBLE EXPLANATIONS FOR HOW SOMEONE GOT COPIES<br />
<br />
1) Clyde and I haven't gotten our free copies yet. Hijacked? Here's hoping the hijackers read it and enjoyed it before posting it. However, this theory is unlikely since I inquired with World Scientific (our publisher) and found out we'll be getting our free copies in 2 weeks. So this theory will either be confirmed or denied in 2 weeks. I would bet against it, OR those are some really bad hijackers.<br />
<br />
2) Complete ripoff. You pay the money and get NOTHING. I think this was more likely in the early days of amazon (or e-bay or....) but now things are pretty well monitored. I have sold books on amazon and am very impressed with how foolproof it is. Still --- could be.<br />
<br />
3) Someone hacked my computer. Really? Where did they get the orange covers? Why would they bother? This is not Harry Potter. Plus I asked our staff and NO, this could not happen.<br />
<br />
4) Some of the shipment dates are fairly far in the future, so they plan to get an order, then order the book themselves, and have it sent to the buyer, at a profit. Seems like too much sugar for a satoshi, BUT could be profitable if it's a bot automated to do this with lots of books.<br />
<br />
5) My publisher also sends the book to other sellers. That does not explain the higher prices, though.<br />
<br />
WHY THE HIGHER PRICES?<br />
<br />
1) They are counting on people assuming that other sellers are cheaper without checking. Would people do that? People still fall for the Nigerian billionaire scam, so maybe yes.<br />
<br />
2) The sellers themselves, possibly an army of bots, didn't bother checking but priced it similarly to other books. This is plausible since $38.00 is fairly cheap, so looking at similar books may yield a higher price.<br />
<br />
-----<br />
Any speculation is welcome so long as it does not involve space aliens or astrology.<br />
<br />
<br />https://blog.computationalcomplexity.org/2019/03/how-are-there-20-copies-of-my-new-book.htmlnoreply@blogger.com (GASARCH)6tag:blogger.com,1999:blog-3722233.post-4197054057663794908Thu, 28 Feb 2019 13:50:00 +00002019-02-28T08:50:40.970-05:00Flying Pigs Unsafe at Any SpeedTake a moment and imagine a flying pig. Do you see a pig with tiny wings lazily guiding along. But pigs are not particularly slow animals as anyone has seen a <a href="https://youtu.be/-eMV1jxfl40?t=73">pig race</a> at a county fair can attest to that. So why not in the air shouldn't a pig go faster than an unladen African swallow?<br />
<br />
I have a point in discussing the speed of a flying pig. Consider my favorite flying pig, P = NP. I would put more probability on an actual flying pig (thanks, CRISPR) than on polynomial-time algorithms for traveling salesman.<br />
<br />
Sometimes I get in the following conversation:<br />
<br />
<b>AI person</b>: The P v NP problem is irrelevant as we have machine learning and SAT solvers and we can for all practical purposes solve NP-complete problems.<br />
<b>Me</b>: If P = NP we can solve AI!<br />
<b>AI</b>: If P = NP it's likely to have a very slow poly-time algorithm with high exponents and constants and be for all practical purposes useless.<br />
<br />
Let's break that down. The claim is E(running time of SAT | running time of SAT is poly) = cn<sup>k</sup> for some large c and k. First of all, you are conditioning on a probability-zero event. Beyond that, nearly all the problems we know to sit in polynomial time have practical efficient solutions. So under the assumption that SAT is one of these problems, why shouldn't the same rule apply? You've already made the assumption we can reduce its running time from exponential to polynomial, so why would it stop at some high polynomial?<br />
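Written out as an equation (the notation here is mine, not from the original conversation), the AI person's claim and the objection to it look like this:

```latex
% The AI person's claim: conditioned on SAT having a polynomial-time
% algorithm, its expected running time is a large polynomial.
% T_{SAT}(n) denotes the (hypothetical) running time of the best SAT
% algorithm on inputs of size n.
\mathbb{E}\bigl[\, T_{\mathrm{SAT}}(n) \;\big|\; \mathrm{SAT} \in \mathsf{P} \,\bigr]
  \;=\; c\, n^{k} \qquad \text{for some large } c \text{ and } k.
% The objection: if one simultaneously asserts
%   \Pr[\mathrm{SAT} \in \mathsf{P}] = 0,
% then the expectation above conditions on a probability-zero event
% and is not even well defined.
```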
<br />
If we are going to talk about the hypothetical flying-pig P = NP world, let my algorithms run fast.<br />
<br />
https://blog.computationalcomplexity.org/2019/02/flying-pigs-unsafe-at-any-speed.html (Lance Fortnow)<br />
<br />
<b>Problems with a Point: Exploring Math and Computer Science</b> (GASARCH, Tue, 26 Feb 2019)<br />
As you can see from Lance's tweet<br />
<br />
<br />
Problems with a Point: Exploring Math and Computer Science<br />
by Gasarch and Kruskal<br />
<br />
(ADDED LATER- the World Scientific website has more info than Amazon, including a table of contents, so here it is: <a href="https://www.worldscientific.com/worldscibooks/10.1142/11261#t=toc">here</a>)<br />
<br />
is now available! The tweet says it's $68.00, but that's hardcover; paperback is $38.00 (are covers that expensive?) and if you like your books already broken in, there are some used copies for $89.00. Makes sense to me (no it doesn't!). Could be the topic of a blog post (probably already was).<br />
<br />
Okay, so what's in the book? One of my favorite types of blog posts is when I make a point ABOUT math and then do some math to underscore that point. I went through all of my blogs (all? No, I doubt I did that) and picked out posts of that type. With Clyde's help we EXPANDED and POLISHED and GOT THE MATH RIGHT (in some cases I didn't have any math so we had to supply it).<br />
<br />
When I first got a copy (about a month ago) I just couldn't stop reading it. I really like it! This is a non-trivial remark -- often authors get tired of their book after a while and wonder things like ``why did I write 300 pages on the muffin problem? What was I thinking?'' So the fact that I am very pleased with it is not obvious. Does it mean you will be?<br />
<br />
If you ever thought `I wish Bill would clean up his posts' spelling and grammar AND expand on the math AND make it a more cohesive whole' then buy the book!<br />
<br />
I will in future posts describe more about writing the book, but this is probably my last post where I plug the book.<br />
<br />
bill g.<br />
<br />
https://blog.computationalcomplexity.org/2019/02/problems-with-point-exploring-math-and.html (GASARCH)<br />
<br />
<b>Extra! Extra! Read all about it!</b> (Lance Fortnow, Thu, 21 Feb 2019)<br />
Last weekend I saw the documentary <a href="https://www.imdb.com/title/tt7428030/">Joseph Pulitzer: Voice of the People</a>. Pulitzer, as you probably know from the prize named after him, was a major newspaper publisher in the late 19th and early 20th century. He ran two papers, the St. Louis Post-Dispatch and The New York World. The World at one point took on massive proportions, including sheet music of the latest tunes and dress patterns of new fashions that one could make at home. The World was the Internet of the turn of the 20th century.<br />
<br />
The movie mentioned the many editions of the paper during the day, including the extra edition. An extra edition came out because of some major breaking news story. Back then newspapers would drum up minor stories to sell extra editions, but extras tended to disappear over time.<br />
<br />
Which brings me to Monday, August 19, 1991. Hard-line members of the communist party of the USSR <a href="https://en.wikipedia.org/wiki/1991_Soviet_coup_d%27%C3%A9tat_attempt">attempted a coup</a> to take over the government from Mikhail Gorbachev. To us in the US, it seemed the cold war, which had appeared to be coming to an end, might rekindle. At the time I lived in Chicago, and on that Monday the Chicago Tribune ran an extra afternoon edition about the coup. The return to the cold war didn't happen. Within a couple of days the coup failed and if anything hastened the dissolution of the Soviet Union.<br />
<br />
That was probably the last of the extra editions. By the time of the next major historical event, ten years and twenty-three days later, we had the Internet and cell phones, and one no longer needed a newspaper to tell you the world had changed.<br />
<br />
https://blog.computationalcomplexity.org/2019/02/extra-extra-read-all-about-it.html (Lance Fortnow)<br />
<br />
<b>Using `who will be the dem VP choice' article in class</b> (GASARCH, Tue, 19 Feb 2019)<br />
I recently read an absurd article that speculated on who the Democratic VICE prez nominee will be. Yes, you read that right, VICE Prez. Gee, wouldn't knowing who the Prez nominee is help first? But leave it to Nate Silver and company to have a fascinating yet... premature article on this. Here is the link: <a href="https://fivethirtyeight.com/features/our-very-first-2020-vice-presidential-draft/">article on who will be the Dem VP nominee</a>. It's hard enough to pick the VP when the Prez is known! I did a blog post on how badly intrade would do on predicting the VP, <a href="https://blog.computationalcomplexity.org/2008/09/i-would-be-on-intrade-that-intrade-will.html">here</a>. I didn't predict that intrade would go out of business (`intrade' was shorthand for `betting markets' just like `Uber' is shorthand for `hailing a car with your phone'- I wonder if the term `Uber' will still be used if they go out of business?)<br />
<br />
After reading I thought<br />
<br />
<i>Wow- there are so many If-Then clauses in this that I could make half a lecture about it for my Discrete Math class where we are talking about Prop. Logic.</i><br />
<i><br /></i>
I emailed the students to read it, and we had a lively discussion. I made sure it was a non-partisan discussion. Realize that a statement like:<br />
<br />
<i>If the Prez is a female then the VP will likely be a male because it is thought, rightly or wrongly, that an all-female ticket can't win</i><br />
<br />
is not partisan or sexist; it is stating an opinion about how voters would go. It may be incorrect, but it's not offensive.<br />
<br />
Here is what the class thought the article was saying (and I think they are right), expressed in logic.<br />
<br />
Note one other thing-- the Prez nominee might not be the choice of the party (Trump wasn't the choice of the Rep party in 2016, though it's not clear who was, see <a href="https://blog.computationalcomplexity.org/2015/05/the-law-of-excluded-middle-of-road.html">here</a>) but the VP really will be, since (I think) the party has more control over that.<br />
<br />
If the Prez nominee is female then the Vice Prez nominee will be male. (I agree)<br />
<br />
If the Prez nominee is a white male then the Vice Prez nominee will be either female or non-white but not both. (A great exercise to express that as symbols. Not sure what I think- the thought is that two white males would be odd since so many of the candidates are female or non-white. Having said that, in recent years the VP has NOT been someone else who ran: Clinton-Kaine, Kaine had not been a candidate; Trump-Pence, Pence had not been a candidate; Romney-Ryan, Ryan had not been a candidate; Obama-Biden, Biden had been a candidate briefly in 2008, but this was not one of those `let's unite the party by picking my OPPONENT to be VP' choices; McCain-Palin, Palin had not been a candidate; Kerry-Edwards, YES- Edwards had been a serious candidate in 2004, though some think he was running for VP all along since he never said an unkind word about his opponents.)<br />
<br />
Prez from one of the coasts IFF VP from the center. (I won't go over the list again, but this has been less of a thing since Clinton-Gore, who were both from flyover states.)<br />
<br />
Ideology- If Prez is establishment then VP is leftist. (Not sure these terms are well enough defined to say this, and there are other ideologies as well; not sure how they fit in.)<br />
<br />
If Prez lacks Fed Experience then VP should have Fed Exp. (Quite common: Obama-Biden, Romney-Ryan, Clinton-Gore, Bush-Cheney)<br />
<br />If Prez has Fed Experience then nothing can be deduced about VP Fed Exp.<br />
<br />
If Prez lacks gravitas then VP should have gravitas (tricky! Don't want the VP to outshine the Prez!)<br />
<br />
Absolute statement: the VP should not outshine the Prez. A Kangaroo ticket is when the people prefer the VP to the Prez (Dukakis-Bentsen had this problem; Kerry-Edwards might have).<br />
<br />
If Prez is boring then VP should be exciting. But again, tricky! (I think that was why McCain picked Palin. And she was exciting, but not really in a good way. Couldn't he have picked someone exciting, and perhaps female, who was, you know, qualified?)<br />
<br />
VPs are like doctors: do no harm.<br />
<br />
VP from a swing state is helpful but not necessary.<br />
<br />
Bad to have a VP who is a senator from a state where the governor is Republican and picks the replacement senator.<br />
<br />
VP should be someone whom the country can picture being president without laughing. I am not talking ideology, I am talking about seriousness and experience. Palin was the only one in recent memory where even voters of her party worried that she was not up to being president. One pundit defending the choice talked about how McCain was healthy and hence Palin wouldn't become prez, so don't worry about it. NOT reassuring! Quayle was also not seen as serious, though not as much as Palin.<br />
<br />
If Prez is old then VP matters more(?)<br />
<br />
Many of the above depend on who the Prez is. And that is one of the points: one can write down a long propositional formula with many IF-THENs to determine what properties the VP will have, and then<br />
<br />
1) When the Prez is picked, many of the variables are set and hence the formula becomes much easier, and<br />
<br />
2) Could STILL be wrong!<br />
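As a sketch of the class exercise (all variable names and the specific encoding here are hypothetical, not from the article), a few of the if-then rules above can be written as propositional formulas and checked mechanically:

```python
# A sketch (not from the article) of encoding a few of the VP rules above
# as propositional formulas. Variable names are hypothetical.

def implies(p, q):
    """Material implication: p -> q is equivalent to (not p) or q."""
    return (not p) or q

def ticket_ok(prez, vp):
    """Check a hypothetical ticket against three of the rules above."""
    rules = [
        # If the Prez nominee is female then the VP nominee is male.
        implies(prez["female"], not vp["female"]),
        # If the Prez lacks federal experience then the VP should have it.
        implies(not prez["fed_exp"], vp["fed_exp"]),
        # Prez from one of the coasts IFF VP from the center.
        prez["coast"] == vp["center"],
    ]
    return all(rules)

# Setting the Prez's variables collapses the formula, as point (1) says:
prez = {"female": True, "fed_exp": False, "coast": True}
vp = {"female": False, "fed_exp": True, "center": True}
print(ticket_ok(prez, vp))  # prints True
```

Fixing the Prez's variables, as in point (1), collapses the formula to a much simpler condition on the VP alone; and, as point (2) says, a satisfying assignment can still be the wrong prediction.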
<br />
MY prediction: my last two predictions have been wrong, so I am reluctant to predict anything. But I will tell you my WRONG predictions and then my current one.<br />
<br />
I predicted Paul Ryan would be the Prez Nominee in 2016. He didn't even run.<br />
<br />
I predicted that Al Franken would be the Prez Nominee in 2020- he understands TV and was an entertainer, so he could match Trump. Whoops.<br />
<br />
So with that sterling record I predict: Prez: Cory Booker, VP: Beto<br />
<br />
I am NOT a pundit- so what I predict is not what I hope happens. What do I hope? In the interest of full disclosure (gee, shouldn't that have come at the beginning?) I admit that I want to see Trump lose, but I have no idea what makes someone `electable' nowadays.<br />
<br />
<br />https://blog.computationalcomplexity.org/2019/02/using-who-will-be-dem-vp-choice-article.html (GASARCH)<br />
<br />
<b>The iPhonification of Everything</b> (Lance Fortnow, Thu, 14 Feb 2019)<br />
So you've got an iPhone XS in Space Grey. Congrats, so do 20 million other people. Maybe you have different cases, but otherwise the hardware in all these phones is virtually identical. Yet you can tell with a glance that this is your phone. You can personalize apps and other elements of the home screen. It's your calendar and email and music.<br />
<br />
What? You've dropped your phone over Niagara Falls. Luckily you've backed up your data. So you go back to Apple and buy another Space Grey iPhone XS and restore your data. Physically it's a completely different phone but for all practical purposes it's as though you still had the original phone. Your phone is not defined by the device but by the data that resides on it.<br />
<br />
It's not just phones. I can log into Google on anyone's Chrome browser and it will feel like my machine.<br />
<br />
Now we've all heard about a future world where nobody owns cars and we get driven around in self-driving Ubers, Lyfts and Waymos. One argument against this world is that people feel connected to their cars and are unwilling to commute in some generic vehicle. But one can also imagine that the car knows who you are, knows how you like your music, your lighting, how you adjust your seats, even how your car drives. It becomes your car. Maybe it even has electronic bumper stickers that change to support your political party.<br />
<br />
You can imagine the same for hotel rooms, your office, maybe even your apartment. It won't replicate your dog (or will it?) but as we are defined more by our data than by our things, do our things matter at all?<br />
<br />
https://blog.computationalcomplexity.org/2019/02/the-iphonification-of-everything.html (Lance Fortnow)<br />
<br />
<b>I think ze was confused -- in favor of genderless pronouns</b> (GASARCH, Mon, 11 Feb 2019)<br />
You've probably heard the following:<br />
<br />
<br />
At first I didn't want to get an X but now that I have it, I can't imagine life without one.<br />
<br />
X could be telegraph, radio, TV, color TV, VCR, CD player, streaming, Netflix, Amazon Prime, an Uber account, washer and dryer, car phones (remember those), cell phones. If you go back in history, wrist watches or sun dials (or wrist-sun-dials!).<br />
<br />
This has happened to me recently, though not with an object. I read an article someplace saying that ze can be used instead of he or she. It was referring to nonbinary people (using `they' never quite sounded right) but actually it would be great if this were a general genderless pronoun. I am not making a political statement here (although I doubt I have any readers who are against genderless pronouns).<br />
<br />
Once I came across the term ze I found places to use it and now I can't imagine not using it.<br />
<br />
In a recent article I wrote, I needed to say that someone was probably confused, but I did not know their gender. I used<br />
<br />
Ze was probably confused<br />
<br />
which is much better than<br />
<br />
S/he was probably confused<br />
<br />
He or she was probably confused<br />
<br />
The student was probably confused<br />
<br />
They were probably confused.<br />
<br />
Note that the first two leave out nonbinaries.<br />
<br />
0) In the article I put in a footnote saying what ze meant. In the future I may not have to.<br />
<br />
1) Will ze catch on? This blog post is an attempt to hasten the practice.<br />
<br />
2) Is there a term for his/her that is non-gendered? If not then maybe zer.<br />
<br />
3) Will there be political pushback on this usage? If it's phrased as a way to include nonbinaries then unfortunately yes. If it's phrased as above, as what you do when you don't know the gender, then no.<br />
<br />
4) Is <i> nonbinary </i>the correct term? If not then please politely correct me in the comments.<br />
<br />
5) Has Ms replaced Miss and Mrs?<br />
<br />
I have used the term ze several times since then, often when I get email from a student whose gender I can't tell from the first name, and I need to forward the email, such as<br />
<br />
Ze wants to take honors discrete math but does not have the prerequisite, but<br />
since ze placed in the top five in the math olympiad, we'll let zer take it.<br />
<br />
<br />
<br />
<br />https://blog.computationalcomplexity.org/2019/02/i-think-ze-was-confused-in-favor-of.htmlnoreply@blogger.com (GASARCH)13tag:blogger.com,1999:blog-3722233.post-971154734717743655Thu, 07 Feb 2019 12:47:00 +00002019-02-07T07:47:26.739-05:00An Immerman-Szelepcsényi StoryAs a grad student in the late 80's I had the opportunity to witness many great and often surprising theorems in computational complexity. Let me tell you about one of them, the Immerman-Szelepcsényi result that <a href="https://blog.computationalcomplexity.org/2003/06/foundations-of-complexity-lesson-19.html">nondeterministic space is closed under complement</a>. I wish I had the original emails for this story but instead I'm working from memory and apologies if I get some of the details wrong. I'm expanding from a <a href="https://blog.computationalcomplexity.org/2002/08/last-spring-i-saw-copenhagen-great.html">short version</a> from the early days of this blog.<br />
<br />
I started my graduate work at UC Berkeley in 1985 and then moved to MIT in the summer of '86, following my advisor Michael Sipser. In the summer of 1987, Neil Immerman, then at Yale, proved his famous result building on his work in <a href="https://en.wikipedia.org/wiki/Descriptive_complexity_theory">descriptive complexity</a>. In those days you didn't email papers; he made copies and sent them by US postal mail to several major researchers in complexity including Sipser. But Sipser was away for the summer, I believe in Russia, and the paper sat in his office.<br />
<br />
Immerman also sent the paper to a Berkeley professor, probably Manuel Blum, who gave it to one of his students who decided to speak about the result in a student-led seminar. I forget who the student was, maybe Moni Naor. I was still on the Berkeley email list so I got the talk announcement and went into complexity ecstasy over the news. I asked Moni (or whoever was giving the talk) if he could tell me details and he sent me a nice write-up of the proof. Given the importance of the result, I sent the proof write-up out to the MIT theory email list.<br />
<br />
Guess who was on the MIT theory list? Neil Immerman. Neil wrote back with his own explanation of the proof. Neil explained how it came out of descriptive complexity but as a pure write-up of a proof of the theorem, Moni did an excellent job.<br />
<br />
We found out about Robert Szelepcsényi when his paper showed up a few months later in the Bulletin of the European Association for Theoretical Computer Science. Szelepcsényi came to the problem from formal languages, namely whether the context-sensitive languages (nondeterministic linear space) were closed under complement. Szelepcsényi, an undergrad in Slovakia at the time, heard about the problem in a class he took. Szelepcsényi's proof was very similar to Immerman's. Szelepcsényi's paper took longer to get to US researchers but was likely proven and written up around the same time as Immerman's.<br />
<br />
Even though both papers were <a href="https://doi.org/10.1137/0217058">published</a> <a href="https://doi.org/10.1007/BF00299636">separately</a>, we refer to the result as Immerman-Szelepcsényi and it is now just some old important theorem you see in introductory theory classes.https://blog.computationalcomplexity.org/2019/02/an-immerman-szelepcsenyi-story.htmlnoreply@blogger.com (Lance Fortnow)5tag:blogger.com,1999:blog-3722233.post-9219479121065296157Mon, 04 Feb 2019 02:36:00 +00002019-02-04T14:04:53.691-05:00Don't know Football but still want bet on the Superb Owl? <div>
(Superb Owl is not a typo. I've heard (and it could be wrong) that the NFL guards its trademark so you can't even say `Buy Beer here for the YOU KNOW WHAT' but instead `Buy Beer here for the big game'. Stephen Colbert a long time ago got around this by calling the game the Superb Owl.)</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
If I knew more about football I might place a bet related to the Superb Owl. What kind of bets can I place?</div>
<div>
<br /></div>
<div>
1) Bet the point spread: Last time I looked the Patriots were a 2.5 point favorite. So either bet that Patriots will win by more than 2.5 or the Rams will lose by less than 2.5 or just win.</div>
<div>
<br /></div>
<div>
2) Over-Under: bet that either the total score will be over 56.5 or under it.</div>
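Settling these two kinds of bets is simple arithmetic; here is a minimal sketch (the function names and interfaces are made up for illustration, not any sportsbook's):

```python
def settle_spread(fav_score: int, dog_score: int, spread: float) -> str:
    """Point spread: the favorite must win by MORE than the spread.
    A half-point line like 2.5 means there can be no push (tie)."""
    return "favorite covers" if fav_score - dog_score > spread else "underdog covers"

def settle_over_under(total_line: float, score_a: int, score_b: int) -> str:
    """Over/under: did the combined score beat the posted line?"""
    return "over" if score_a + score_b > total_line else "under"
```

With the actual final score of Patriots 13, Rams 3, the favorite covered the 2.5-point spread and the total of 16 went well under 56.5.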
<div>
<br /></div>
<div>
There are prop-bets-- bets that are ABOUT the game but not related to the final score.</div>
<div>
<br /></div>
<div>
I've seen the following</div>
<div>
<br /></div>
<div>
1) Tom Brady will retire after the game. I wonder if Tom Brady (or a friend of his) could bet on this one knowing some inside information. Not in any state in America, but off-shore...</div>
<div>
<br /></div>
<div>
2) James White will score the first touchdown.</div>
<div>
<br /></div>
<div>
3) Will Gladys Knight's National Anthem go longer than 1 minute, 50 seconds (it was 1:47 a few days ago but it shifted to 1:50).</div>
<div>
<br /></div>
<div>
Amazingly, this last one is what Josh Hermsmeyer (on Nate Silver's Webpage) chose to focus on: <a href="https://fivethirtyeight.com/features/the-super-bowls-best-matchup-is-gladys-knight-vs-the-clock/">here</a>. Note that:</div>
<div>
<br /></div>
<div>
1) The people who picked 1 minute 50 seconds as the over-under probably didn't do much research. They might have set it to get the same number of people on both sides, which may explain the shift; however, I can't imagine this bet got that much action. Then again, I'm not that imaginative.</div>
<div>
<br /></div>
<div>
2) Josh DID. He did an analysis of what is likely (he thinks it will go longer).</div>
<div>
<br /></div>
<div>
3) So, can Josh bet on this and clean up? Can you bet on this and clean up?</div>
<div>
<br /></div>
<div>
4) There is an issue: Some kinds of bets are legal in some places (betting on who will WIN or beat a point spread is legal in Las Vegas -- the Supreme Court struck down a federal anti-betting rule). Some prop bets are legal. The Gladys Knight one is not. Why not? Someone could have inside information! Gladys Knight would!</div>
<div>
<br /></div>
<div>
So you CAN bet Rams+2.5 beats the Patriots LEGALLY</div>
<div>
<br /></div>
<div>
but to bet Gladys Knight's National Anthem will take more than 1 minute 50 seconds you might need to use BITCOIN, and go to some offshore account. Too much sugar for a <a href="https://en.bitcoin.it/wiki/Satoshi_(unit)">satoshi</a>.</div>
<div>
<br /></div>
<div>
5) There is another issue- there is no such thing as a sure thing (I blogged on that <a href="https://blog.computationalcomplexity.org/2008/02/there-is-no-such-thing-as-sure-thing.html">here</a>). People who bet on sports for a living (I know one such person and will blog about that later) play THE LONG GAME. So to say</div>
<div>
<br /></div>
<div>
<i> I will withdraw X dollars (for large X) from my investments and bet it on </i></div>
<div>
<i> Gladys Knight's</i><i> Star Spangled Banner to go more than 1 minute 50 seconds</i></div>
<div>
<i> because its a sure thing</i></div>
<div>
<br /></div>
<div>
Would be... a very bad idea.<br />
<br />
The above was all written the day before the Superb Owl. Now it's the next day and Gladys Knight has sung the National Anthem. So who won the Gladys Knight Bowl? The answer is not as straightforward as it could be, see <a href="https://www.usatoday.com/story/sports/nfl/super-bowl/2019/02/03/super-bowl-2019-gladys-knight-causes-prop-bet-controversy-anthem/2764778002/">here</a>.</div>
<div>
<br /></div>
https://blog.computationalcomplexity.org/2019/02/dont-know-football-but-still-want-bet.htmlnoreply@blogger.com (GASARCH)2tag:blogger.com,1999:blog-3722233.post-5538403199779655367Thu, 31 Jan 2019 16:46:00 +00002019-01-31T11:46:01.053-05:00Phish Before TurkeyThe Chronicle of Higher Education recently published a story <a href="https://www.chronicle.com/article/Phishing-Scheme-Targets/245535">Phishing Scheme Targets Professors’ Desire to Please Their Deans — All for $500 in Gift Cards</a>. The same thing happened to me last fall.<br />
<br />
Twas the day before Thanksgiving and an email went out to most of the faculty in my department.<br />
<blockquote class="tr_bq" style="mso-outline-level: 1;">
<b>From: </b>Lance Fortnow <lancefortnow@yahoo.com><br />
<b>Sent: </b>Wednesday, November 21, 2018 1:45 PM<br />
<b>To: </b>[name deleted]<br />
<b>Subject:</b> </blockquote>
<blockquote class="tr_bq" style="mso-outline-level: 1;">
Hello,are you available?</blockquote>
At the time I was in New Jersey visiting family. lancefortnow@yahoo.com is not my email. I do own fortnow@yahoo.com but don't email from there; I rarely check it.<br />
<br />
Some faculty checked with me to see if the email was real. One faculty member called me to see what I wanted. Once I found out what was happening I sent a message to the entire faculty to ignore those emails.<br />
<br />
Some faculty did reply to see what I wanted. The response:<br />
<blockquote class="tr_bq">
i need you to help me get an Amazon gifts card from the store,i will reimburse you back when i get to the office.</blockquote>
One of our security faculty decided to follow up and replied "Sure! Let me get them for you. Could you provide more more information? e.g., amount and #cards. I can bring them on Monday." The reply:<br />
<blockquote class="tr_bq">
The amount i want is $100 each in two (2) piece so that will make it a total of $200 l'll be reimbursing back to you.i need physical cards which you are going to get from the store. When you get them,just scratch it and take a picture of them and attach it to the email then send it to me here ok</blockquote>
<div>
He went a few more rounds before the phisher just stopped responding.</div>
<div>
<br /></div>
<div>
A week later, a different faculty member came to my office and said I had wanted to see him but he had been out of town. I said it was nice to see him but I hadn't asked to talk to him, and we figured out the confusion came from the phishing email.</div>
<div>
<br /></div>
<div>
Someone went to the trouble of creating a fake email address in my name, looking up the email addresses of the faculty in the department and individually emailing each of them, without realizing computer science professors won't fall for a gift-card phishing attack. Or at least none of them admitted falling for it.</div>
https://blog.computationalcomplexity.org/2019/01/phish-before-turkey.htmlnoreply@blogger.com (Lance Fortnow)0tag:blogger.com,1999:blog-3722233.post-7382539649899855346Fri, 25 Jan 2019 16:44:00 +00002019-01-28T10:44:40.650-05:00The Paradigm Shift in FinTech Computation and the need for a Computational Toolkit (Guest Post by Evangelos Georgiadis)The Paradigm Shift in FinTech Computation and the need for a Computational Toolkit<br />
<br />
(Guest Post by <a href="http://www.mathcognify.com/">Evangelos Georgiadis</a>)<br />
<br />
We are experiencing a paradigm shift in finance as we are entering the era of algorithmic FinTech computation. (**And another yet to come. See **Future** below.) This era is marked by a shift in the role played by the theoretical computer scientist. In the not so distant past, the (financial) economist had the ultimate stamp of approval for how to study financial models, pricing models, mechanism design, etc. The economist was the ultimate gatekeeper of ideas and models, whereas the main role of the computer scientist was to turn these ideas or models into working code; in a sense, an obedient beaver/engineer. (In finance, the theoretical computer scientist more often than not wears the hat of the quant.)<br />
<br />
In today's era, the role of the theoretical computer scientist has been elevated from the obedient engineer to the creative architect not only of models and mechanism designs but also of entire ecosystems. One example is blockchain-based ecosystems. In the light of this promotion from obedient engineer to architect, we might need to re-hash the notion of 'sharing blame', as originally and elegantly voiced in <a href="https://www.technologyreview.com/s/408851/on-quants/">On Quants</a> by Professor Daniel W. Stroock, when things go wrong.<br />
<br />
The role change is also coupled with a shift in the emphasis of computation that in turn necessitates a deeper understanding of (what this author would refer to as) 'distributed yet pragmatic complexity-based crypto systems' that attempt to redefine 'trust' in terms of distributed computation.<br />
<br />
This change necessitates an ability to think in terms of approximation (and lower/upper bounds) or other good-enough solutions that work on all inputs, rather than merely easy instances of problem types that usually lead to clean, exact formulas or solutions. Additionally, looking through the lens of approximation algorithms enables a different and often more insightful metric for dealing with intrinsically hard problems (for which often no exact or clean solutions exist.) Computer Scientists are trained in this way; however, financial economists are not. Might the economists actually get in the way?<br />
<br />
Our tentative response: The economists are valuable and the solution to the dilemma is to equip them with the right 'computational toolkit'. Ideally, such a toolkit comprises computational tools and algorithms that enable automation of certain computational tasks which otherwise would necessitate more granular understanding at the level of a theoretical computer scientist (or mathematician)<br />
OR be too cumbersome to perform by hand even for the expert.<br />
<br />
Essentially, a toolkit even for the theoretical computer scientist that frees her from clerical work and enables computation to scale from clean cases, such as n=1, to pathological (yet far more realistic) cases, such as n=100000, all the way to the advanced and rather important (agnostic case or) symbolic case when n=k -- without much pain or agony.<br />
<br />
The existence of such a toolkit would in turn do justice to the definition of FinTech Computation, which entails applying advanced computational techniques (not necessarily information techniques) to financial computation. In fact, this author is part of building such an infrastructure solution which<br />
necessitates the underlying programming language [R-E-CAS-T] to have intrinsic hybrid capabilities -- symbolic as well as numeric.<br />
<br />
One step towards this "automation" conquest is shown in <a href="https://arxiv.org/pdf/1808.05255v2.pdf">A combinatorial-probabilistic analysis of bitcoin attacks</a> with Doron Zeilberger. The work illustrates an algorithmic risk analysis of the bitcoin protocol via symbolic computation, as opposed to the meticulous, yet more laborious by-hand conquest shown by the European duo in <a href="https://arxiv.org/abs/1702.02867">Double spend races</a>. Heavy usage of the "Wilf-Zeilberger algorithmic proof theory", one of the cornerstones in applied symbolic computation, enabled automated recurrence discovery and algorithmic derivation of higher-order asymptotics. For example, in terms of asymptotics tools: the ability to internalize a very dense body of mathematics, such as the G.D. Birkhoff and W.J. Trjitzinsky method, symbolically, automates the process of computing asymptotics of solutions of recurrence equations; a Swiss Army knife for any user.<br />
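To give non-experts a taste of the underlying computation: the closed-form attacker-success estimate from the original bitcoin paper, which the analyses above refine and generalize, can be evaluated numerically in a few lines. This is a plain numeric sketch, not the symbolic Wilf-Zeilberger machinery the post describes:

```python
from math import exp, factorial

def attacker_success(q: float, z: int) -> float:
    """Nakamoto's estimate: probability an attacker controlling a fraction q
    of the hash power ever catches up from z blocks behind."""
    p = 1.0 - q
    lam = z * q / p  # expected attacker blocks while the honest chain mines z
    prob = 1.0
    for k in range(z + 1):
        poisson = lam ** k * exp(-lam) / factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob
```

For q = 0.1 and z = 6 confirmations this reproduces the roughly 0.0002 success probability tabulated in the bitcoin paper.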
<br />
<**Future**><br />
<br />
What does the future entail for FinTech Computation ?<br />
<br />
[My two <a href="https://en.bitcoin.it/wiki/Satoshi_(unit)">satoshis</a> on this]<br />
<br />
Where are we headed in terms of type of computation ?<br />
<br />
Blockchain based systems, even though some of us (including this author) have noticed fundamental flaws, seem to still have momentum, at least judging from recent news articles about companies becoming blockchain-technology friendly. These range from (of course) exchanges such as our friends at <a href="https://www.binance.com/en">Binance</a> and <a href="https://www.bitmex.com/">BitMEX</a> to major smartphone makers such as <a href="https://www.bloomberg.com/news/articles/2018-04-15/samsung-jumps-on-blockchain-bandwagon-to-manage-its-supply-chain">Samsung</a>, <a href="https://www.bloomberg.com/news/articles/2018-03-21/huawei-said-to-be-in-talks-to-build-blockchain-ready-smartphone">Huawei</a>, and <a href="https://www.cnbc.com/2018/10/23/htc-launches-blockchain-phone-exodus-1-to-be-sold-in-cryptocurrency.html">HTC</a>. The favorable sentiment towards blockchain technology is shared even amongst top tier U.S. banks.<br />
Can one deduce success or failure momentum from the citation count distribution of the paper that laid the groundwork for this technology? (<a href="https://bitcoin.org/bitcoin.pdf">Bitcoin: A Peer-to-Peer Electronic Cash System</a>)<br />
<br />
If we look at crypto(currencies), one of many challenges for these blockchain based systems is the high maintenance cost. Certainly in terms of energy consumption when it comes to the process of mining -- whether Proof-of-Work (PoW) is replaced by Proof-of-Stake (PoS) or some other more energy efficient consensus variant. (This author is aware of various types of optimizations that have been used.)<br />
A few questions that have bugged this author ever since ...<br />
<br />
a) Is there a natural way to formalize the notion of energy consumption for consensus mechanisms?<br />
<br />
b) What about formalizing an energy-efficient mechanism design?<br />
<br />
(The idea of savings when PoW is replaced by PoS as intended by our friends at the <a href="https://spectrum.ieee.org/computing/networks/ethereum-plans-to-cut-its-absurd-energy-consumption-by-99-percent">Ethereum Foundation</a> has been around for some time, but this author's point is that the remaining 0.01*X (where X is a <a href="https://link.springer.com/chapter/10.1007/978-1-4684-6686-7_28">supernatural number</a> [a la Don E. Knuth style]) is still a big quantity; too big for an environmentalist?)<br />
<br />
So, what comes next ?<br />
<br />
[... the satoshis are still on the table.]<br />
<br />
Daniel Kane has brought to my attention that quantum computation -- the seemingly next paradigm shift in which again the role of TCS seems inextricably interwoven -- may lead to blockchain-based systems being replaced by less expensive (at least in terms of energy consumption) quantum-based systems. (Crypto might get replaced by Quantum (money). :-)) One such pioneering approach is masterfully articulated by Daniel Kane in "<a href="https://arxiv.org/abs/1809.05925">Quantum Money from Modular Forms</a>".https://blog.computationalcomplexity.org/2019/01/the-paradigm-shift-in-fintech.htmlnoreply@blogger.com (GASARCH)0tag:blogger.com,1999:blog-3722233.post-7530001663397385709Thu, 24 Jan 2019 13:58:00 +00002019-01-24T08:58:43.801-05:00Machine Learning and Wind Turbines<div class="separator" style="clear: both; text-align: center;">
<a href="https://3.bp.blogspot.com/-IwXAr_ptgYo/XEYxZWFiAzI/AAAAAAABmPw/6t_lphNZigA3sXSix1MpcmKNANZFiGkawCLcBGAs/s1600/Turbines.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="1280" height="180" src="https://3.bp.blogspot.com/-IwXAr_ptgYo/XEYxZWFiAzI/AAAAAAABmPw/6t_lphNZigA3sXSix1MpcmKNANZFiGkawCLcBGAs/s320/Turbines.png" width="320" /></a></div>
<br />
My daughter Molly spent six weeks on an environmental program in China last summer. When she got back she had to do a report on machine learning and the wind turbines used for clean energy generation. What does machine learning have to do with wind turbines? Plenty, it turns out, and it tells us a lot about the future of programming.<br />
<br />
Sudden changes in wind can cause damage to the blades of the turbine. Maintenance is very expensive, especially for turbines at sea, and a broken turbine generates no electricity. To catch these changes ahead of time you can mount a Lidar on top of the turbine.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://4.bp.blogspot.com/-9LFv8FGZNG8/XEYxYFyCYDI/AAAAAAABmPs/TLY7-bfPWhMWjRLl8Jn9ixXrTBa_sAk7ACLcBGAs/s1600/Turbine%2BLidar.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="1280" height="180" src="https://4.bp.blogspot.com/-9LFv8FGZNG8/XEYxYFyCYDI/AAAAAAABmPs/TLY7-bfPWhMWjRLl8Jn9ixXrTBa_sAk7ACLcBGAs/s320/Turbine%2BLidar.jpg" width="320" /></a></div>
<br />
The Lidar can detect wind gusts from about 100 meters ahead, giving about 10 seconds to react. In that time you can rotate the blades, or the whole turbine itself to minimize any damage. Here's a <a href="https://www.youtube.com/watch?v=j5zLp6UuC70">video</a> describing the situation.<br />
<br />
<center>
<iframe allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/j5zLp6UuC70" width="560"></iframe>
</center>
<br />
How do you do the computations to convert the Lidar data into accurate representations of wind gusts and then decide how best to adjust for them? You could imagine some complex fluid dynamics computation, which gets even more complex when you have several wind turbines in front of each other. Instead you can use the massive amount of data collected by sensors on the turbine, together with the Lidar information, and train a neural network. Training takes a long time but a trained network can quickly determine a good course of action. Now neural networks can always make mistakes, but unlike self-driving cars, a mistake won't kill anyone, just possibly cause more damage. Since on average you can save considerable maintenance costs, using ML here is a big win.<br />
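As a purely illustrative sketch of the train-on-collected-data idea (the data below is synthetic and the network is a toy; real turbine controllers are of course far more elaborate):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in data: 8 simulated Lidar wind readings per sample, and
# an idealized pitch-adjustment target for the network to learn.
X = rng.uniform(-1, 1, size=(500, 8))
y = np.tanh(X.sum(axis=1, keepdims=True))

# One hidden layer, trained by plain full-batch gradient descent.
W1 = rng.normal(0, 0.5, (8, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr, losses = 0.5, []
for step in range(2000):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    pred = h @ W2 + b2                  # predicted pitch adjustment
    err = pred - y
    losses.append(float((err ** 2).mean()))
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)    # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
```

The point of the sketch is the workflow: training is the slow part, but once trained, a forward pass is a handful of matrix multiplies, fast enough to run inside the ten-second reaction window.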
<br />
I've obviously over simplified the above but I really like this example. This is not an ML solution to a standard AI question like image recognition or playing chess. Rather we are using ML to make a difficult computation tractable mostly by using ML on available data and that changes how we think about programming complex tasks.https://blog.computationalcomplexity.org/2019/01/machine-learning-and-wind-turbines.htmlnoreply@blogger.com (Lance Fortnow)0tag:blogger.com,1999:blog-3722233.post-800609330387332388Sun, 20 Jan 2019 22:44:00 +00002019-01-28T10:44:22.565-05:00ACM prize and some thoughts on the Godel Prize(ADDED LATER: deadlines for some of these awards have passed but here are ones<br />
coming up soon:<br />
<br />
Godel Prize: Feb 15<br />
<br />
Knuth Prize: Feb 15<br />
<br />
SIGACT News Dist Service: March 1<br />
<br />
)<br />
<br />
As Lance Tweeted, and I will re-iterate, nominations for the following prizes<br />
are due soon and you can nominate people <a href="https://sigact.org/articles/prizes.html">here</a><br />
<br />
Godel Prize for outstanding paper in TCS. (Godel mentioned P vs NP in a letter to von Neumann. I've heard it said that it's too bad they didn't work on it -- either it would be solved or we'd know it's hard. Frankly, I think enough smart people have worked on it that we already know it's hard.)<br />
<br />
Knuth Prize for outstanding contributions to foundations of Computer Science. (It's a greater honor to have a prize named after you in your lifetime than to win a prize!)<br />
<br />
Dijkstra Prize (I wonder if having `ijk' in his name inspired him to work in algorithms)<br />
<br />
Kanellakis Theory and Practice Award.<br />
<br />
Lawler Award for Humanitarian Contributions within CS and Informatics.<br />
<br />
ACM Distinguished Service Award<br />
<br />
Danny Lewin Best Student Paper Award (best student paper at STOC)<br />
<br />
The Best Paper award (Best paper at STOC, Best paper at FOCS)<br />
<br />
(The last two I doubt you can nominate someone for.)<br />
<br />
A few thoughts on the Godel Prize:<br />
<div>
<br /></div>
<div>
<div>
<br /></div>
<div>
1) You can win the Godel Prize twice and some people have: Goldwasser, Hastad, Arora, Szegedy, Spielman, Teng. Spielman-Teng have won it as a team twice.</div>
<div>
<br /></div>
<div>
2) GLAD there is no limit to how many can win. If a paper has a lot of people on it (and this has happened) then FINE, they're all winners! According to The Big Bang Theory (the TV show, not the theory) in Physics at most 3 can win a Nobel Prize in Physics for the same breakthrough in a given year. The show itself shows how stupid the policy is.</div>
<div>
<br /></div>
<div>
3) I either knew and forgot or never knew that DPDA Equivalence is decidable! Glad to know it just in time for teaching Automata theory this spring.</div>
<div>
<br /></div>
<div>
4) Looking over the list reminded me that there are some papers in the intersection of those I want to read and those I am able to read! Though not many. Most I want to read but they seem hard.</div>
<div>
<br /></div>
<div>
5) The Kanellakis award is for theory that is PRACTICAL. Could someone win a Godel AND a Kanellakis award for the same paper (or set of papers)? I found one sort-of case ((a) below) and one definite case ((b) below).</div>
<div>
<br /></div>
<div>
a) Vardi and Wolper won the 2000 Godel Prize for Temporal Logic and Finite Automata (I should also read that before my class starts)</div>
<div>
<br /></div>
<div>
Holzmann, Kurshan, Vardi, and Wolper won the 2005 Kanellakis prize for Formal Verification Tools.</div>
<div>
<br /></div>
<div>
I assume the two works are related.</div>
<div>
<br /></div>
<div>
b) Freund and Schapire won the 2003 Godel Prize and the 2004 Kanellakis Award, both for their work on boosting in Machine Learning.</div>
<div>
<br /></div>
<div>
6) Why is it the Godel <i>Prize </i>and the Kanellakis <i>Award? </i>What is the diff between a prize and an award? A quick Google Search says that an Award is a token of effort and merit, while a Prize is something you win in a competition. I doubt that applies. I suspect they are called Prize and Award from historical accident. Does anyone know?</div>
<div>
<br /></div>
</div>
https://blog.computationalcomplexity.org/2019/01/acm-prize-and-some-thoughts-on-godel.htmlnoreply@blogger.com (GASARCH)5tag:blogger.com,1999:blog-3722233.post-8539994327270054973Thu, 17 Jan 2019 17:45:00 +00002019-01-17T12:45:43.150-05:00The Cost of Privacy<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-8HqZkaSkvdw/XD82UgwmqHI/AAAAAAABmMQ/6zYMalG9fXUS5OF28uajL8MOiXo6igmaACLcBGAs/s1600/iphone.jpeg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="732" data-original-width="802" height="182" src="https://2.bp.blogspot.com/-8HqZkaSkvdw/XD82UgwmqHI/AAAAAAABmMQ/6zYMalG9fXUS5OF28uajL8MOiXo6igmaACLcBGAs/s200/iphone.jpeg" width="200" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Billboard at 2019 CES</td></tr>
</tbody></table>
<br />Computer scientists tend to obsess about privacy and we've had a privacy/security debate for decades now. But now machine learning has really given us a whole new spin on what privacy protects and takes away.<br />
<div>
<br />
I take an open approach and basically allow Google to know everything about my life. Google knows where I've been--sometimes my Pixel asks me which store in a shopping center I visited and I give up that info. Google knows who I communicate with, what websites I visit, what music and movies I listen to and watch, all my photos, what temperature makes me comfortable and so on.<br />
<br />
What do I get? A Google ecosystem that knows me sometimes better than I know myself. Google works best when it learns and integrates. I get asked to download maps for trips Google knows I'm about to take. I have Google Assistant throughout my house, in my phone, and in my car, and it tailors answers and sometimes even the questions that I need answers to. If anything I wish there were further integration, like Google Voice should ring my office phone only when I'm in the office.<br />
<br />
Georgia Tech now forces us to use Microsoft Exchange for email. Outlook is not a bad email program but its capabilities, especially for search, do not work as well, and think of all that unused knowledge.<br />
<br />
I trust Google to keep my information safe, with a random password and two-factor authentication, and even if someone managed to break in they would find I'm a pretty boring person with an unhealthy obsession with opera (the musical form, not the browser).<br />
<br />
Doesn't work for everyone and companies should make it easy to keep your info secure. But I say go use your machine learning on me and find ways to make my life easier and more fun, and sure send me some targeted ads as payment. The Internets will find a way to discover you anyway, might as well take advantage. </div>
https://blog.computationalcomplexity.org/2019/01/the-cost-of-privacy.htmlnoreply@blogger.com (Lance Fortnow)2tag:blogger.com,1999:blog-3722233.post-4806966094683791166Tue, 15 Jan 2019 16:21:00 +00002019-01-15T18:27:17.378-05:00do we ever only care about the decision problem? I know of only one case of that (I had been thinking of this for a post, then Lance's post on <a href="https://blog.computationalcomplexity.org/2019/01/search-versus-decision.html">search versus decision</a> inspired me to write up these thoughts.)<br />
<br />
When teaching NP-completeness we often say<br />
<br />
<i>The problem we really care about is, for example, given a weighted graph and two vertices s and t, find the optimal way to go from s to t while hitting every node. But it's cleaner mathematically to look at the decision problem:</i><br />
<i><br /></i>
<i>{ (G,s,t,C) : there is a Ham Path from s to t that costs \le C }</i><br />
<i><br /></i>
<i>The search and decision versions are poly time equivalent, so it's fine to just look at the decision. Indeed, if our interest is in lower bounds then clearly if Decision is hard then Find is hard.</i><br />
<br />
But here are some questions about search vs decision in general, not just with regard to P vs NP.<br />
<br />
1) Is there ever a case where the real world actually cares about the decision version? I can think of just one: given a number, is it PRIME, which is used in crypto. The real world does not need a witness that it's prime (or similar). They just want a prime. Any other cases?<br />
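That one case can be sketched: a probabilistic primality test answers only the yes/no decision question (no witness or factorization comes out), and crypto code calls it repeatedly just to obtain a prime. A minimal sketch (Miller-Rabin; the helper names are mine):

```python
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    """The decision problem 'is n prime?', answered (probabilistically) by
    Miller-Rabin. Note: a 'yes' comes with no witness or factorization."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses compositeness
    return True

def random_prime(bits: int) -> int:
    """The crypto use case: we only need *a* prime, found by calling the
    decision procedure on random candidates."""
    while True:
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1  # odd, full length
        if is_probable_prime(n):
            return n
```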
<br />
2) How far apart can search and decision be? NP-decision and NP-search are poly equivalent. In other domains can they be very far apart? For example, does FINDING a k-clique or k-independent set in a graph on 2^{2k} vertices require roughly n^k steps (go through all k-sets) or can we do much better? I suspect this is unknown but would be delighted if a commenter tells me otherwise.<br />
<br />
<br />https://blog.computationalcomplexity.org/2019/01/do-we-ever-only-care-about-decision.htmlnoreply@blogger.com (GASARCH)11tag:blogger.com,1999:blog-3722233.post-2727898493587029341Wed, 09 Jan 2019 13:10:00 +00002019-01-09T16:43:30.052-05:00Search versus DecisionShockingly I've never done a post on search versus decision, one of the more interesting dualities in complexity. In short: Decision: Is there a needle in the haystack? Search: Find the needle.<br />
<br />
In Satisfiability, or any other NP-complete problem, the two problems are essentially equivalent. If you can decide SAT you can find a solution (good homework problem) or even the best solution. People often mix up the two, saying that finding the shortest Traveling Salesman tour is NP-complete, <a href="https://blog.computationalcomplexity.org/2014/01/is-traveling-salesman-np-complete.html">usually</a> without getting into too much trouble.<br />
<br />
Decision is always at least as easy as search: If you have a solution you know there is one. What about the other direction? We can't actually prove search is hard without separating P and NP, but we have our conjectures.<br />
<br />
Sometimes both are easy. We can easily find the maximum weighted matching.<br />
<br />
Sometimes decision is easy and search is supposedly hard: Composite Numbers. The search version is factoring.<br />
<br />
Sometimes decision is trivial (i.e. they always exist) and search is still hard. Nash Equilibria. <a href="https://blog.computationalcomplexity.org/2006/05/dispersing-ramsey-graphs.html">Ramsey Graphs</a>.<br />
<br />
Often we ask: does search reduce to decision? If you have some oracle (magic black box) that answers decision questions, can you solve the search problem efficiently? SAT has this property, as does Matching (for trivial reasons). Nash Equilibrium and Composite Numbers likely don't.<br />
<br />
Graph Isomorphism does, i.e., given an oracle for graph isomorphism you can find the isomorphism (another good homework problem).<br />
<br />
There's also an interesting non-adaptive version. Given a SAT formula can you find an assignment with questions to a SAT oracle that all have to be asked at the same time?<br />
<br />
Here we get a probable yes. If the formula has exactly one solution, you can find it by asking for each bit of the solution. <a href="https://blog.computationalcomplexity.org/2006/09/favorite-theorems-unique-witnesses.html">Randomly you can reduce SAT to several formulas</a>, one of which is likely to have a single satisfying assignment that is also an assignment of the original formula. With standard hardness assumptions <a href="https://blog.computationalcomplexity.org/2006/07/full-derandomization.html">you can eliminate the randomness</a>.<br />
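A toy version of the one-solution case (my sketch, with brute force again standing in for the oracle): all n queries are written down before any answer arrives, and when the formula has a unique satisfying assignment, the yes/no answers spell out its bits.

```python
from itertools import product

def satisfiable(clauses, n):
    """Brute-force stand-in for the SAT oracle.
    CNF clauses as lists of signed ints: literal k means x_{|k|}, negated if k < 0."""
    return any(
        all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        for bits in product([False, True], repeat=n)
    )

def bits_of_unique_solution(clauses, n):
    """Non-adaptive extraction: if the formula has exactly ONE satisfying
    assignment, the n queries 'is (phi AND x_i) satisfiable?' -- all
    prepared up front -- answer True exactly when bit i of the solution is 1."""
    queries = [clauses + [[i + 1]] for i in range(n)]
    return [satisfiable(q, n) for q in queries]
```

With more than one solution the answers need not describe any single assignment, which is why the randomized reduction to a uniquely satisfiable formula is needed.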
<br />
Is the same true for graph isomorphism? I think that's still open.https://blog.computationalcomplexity.org/2019/01/search-versus-decision.htmlnoreply@blogger.com (Lance Fortnow)10tag:blogger.com,1999:blog-3722233.post-4355005625360509962Sun, 06 Jan 2019 21:35:00 +00002019-01-06T16:50:02.401-05:00When is a kilogram not a kilogram?A long long time ago the standards for meters, kilograms, etc. were actual physical objects.<br />
<br />
Those days are long gone, of course. For example, the meter is defined as the length of the path traveled by light in 1/299,792,458th of a second. Why such an odd number (can fractions be odd?)? Because they retrofitted it to what the meter already was. Rather than go to France and compare my stick to the one under a glass case, I can just measure the speed of light. Oh. That sounds hard!<br />
<br />
It matters a bit, since the weight of what was the standard kilogram did increase over time, though of course not by much. When did measurements STOP being based on physical objects and become based entirely on constants of the universe?<br />
<br />
The answer surprised me:<br />
<br />
On Nov 16, 2018 (yes, you read that right) they decided that by May 20, 2019, the kilogram will be defined in terms of Planck's constant. I have not been able to find out how they will use Planck's constant; maybe they don't know yet (they do and it's known -- see the first comment). With that, there are no more standards based on physical objects. Read about it <a href="https://www.wired.com/story/new-kilogram-definition-based-on-planck-constant/">here</a>.<br />
<br />
Why did it take so long? I honestly don't know and I am tossing that question out to my readers. You can leave serious or funny answers, and best if I can't tell which is which!<br />
<br />
<br />
<br />
<br />https://blog.computationalcomplexity.org/2019/01/when-is-kilogram-not-kilogram.htmlnoreply@blogger.com (GASARCH)7tag:blogger.com,1999:blog-3722233.post-3027987398928428578Thu, 03 Jan 2019 05:04:00 +00002019-01-03T12:16:50.805-05:00Today is Thirdsday! Enjoy it while you can!Fellow Blogger James Propp has come up with a new Math holiday:<br />
<br />
<b>Thirdsday!</b><br />
<br />
The day is Jan 3 (1-3 in America, though 3-1 in ... Everywhere else?) but only when Jan 3 is a Thursday.<br />
<br />
It is a day where we celebrate the magic of the number 1/3.<br />
<br />
0) For other math days to celebrate see <a href="https://stemjobs.com/math-holidays/">here</a><br />
<br />
1/3) James Propp blogged about Thirdsday on Monday, Dec 31. Really??? <a href="https://mathenchant.wordpress.com/2018/12/31/introducing-thirdsday/#more-2632">here</a><br />
<br />
2/3) Evelyn Lamb blogged about Thirdsday on Tuesday, Jan 1. Really??? <a href="https://blogs.scientificamerican.com/roots-of-unity/how-to-celebrate-thirdsday/">here</a><br />
<br />
3/3) Ben Orlin blogged about Thirdsday on Wednesday, Jan 2. Really??? <a href="https://mathwithbaddrawings.com/2019/01/02/thirdsday-the-holiday-thats-33-33-better-than-any-other/">here</a><br />
<br />
(Added ON Thirdsday: Matt Foreman has a video about Thirdsday: <a href="https://www.youtube.com/watch?v=NinrTW1Bx2Y&feature=youtu.be">here</a> and a blog post <a href="https://www.think-maths.co.uk/celebrating-thirdsday">here</a>)<br />
<br />
How come I'm the only one who blogged about Thirdsday on Thursday, Jan 3??? (Added later: not quite true anymore, Matt Foreman also waited until Thirdsday to post on Thirdsday.)<br />
I asked Jim Propp about this. He said that he wanted to help prepare teachers and other educators for the excitement of Thirdsday! If they already know the wonders of 1/3 they can prepare and lecture on it! Kudos to him! I assume that Evelyn and Ben are similar! Kudos to them! And Ben blogged ON Thirdsday so Kudos to him!<br />
<br />
2) Darling asked me `<i>is it a real day like Pi-Day?'</i> Is Pi-Day real? Is any holiday real? All holidays are made up until they are accepted and become real. The distinction between <i>real holidays</i> and <i>made-up holidays</i> is ... nonexistent. One can talk of <i>accepted</i> and <i>not-accepted</i> holidays. How long did Pi-Day take to be accepted? This is probably not a well-defined question.<br />
<br />
3) James Propp's and Evelyn Lamb's blogs have many math properties of 1/3. One educational property: I think it is the first number students see whose decimal expansion is infinite. My favorite unbiased use of 1/3: the Cantor set, an uncountable subset of [0,1] that has measure 0. Really!!! My favorite biased use: it's important in Muffin Math. If m>s and you want to divide and distribute m muffins to s students, there is always a way to do this with smallest piece at least 1/3. (Usually you can do better, but this is sometimes the best you can do.)<br />
<br />
4) When will the next Thirdsday come?<br />
<br />
2019: Jan 3 is a Thursday, so YES<br />
<br />
2020: Jan 3 is a Friday, so NO<br />
<br />
2021: Jan 3 is a Sunday (why no Saturday? Leap year. Great- it will come sooner!) so NO<br />
<br />
2022: Jan 3 is a Monday, so NO<br />
<br />
2023: Jan 3 is a Tuesday so NO<br />
<br />
2024: Jan 3 is a Wednesday so NO<br />
<br />
2025: Jan 3 is a Friday. WHAT! Why no Thirdsday? Darn leap year! So NO.<br />
<br />
2026: Jan 3 is a Saturday, so NO<br />
<br />
2027: Jan 3 is a Sunday so NO<br />
<br />
2028: Jan 3 is a Monday so NO<br />
<br />
2029: Jan 3 is a Wednesday (why no Tuesday? Leap year), so NO<br />
<br />
2030: Jan 3 is a Thursday (Leap Year helped!), so YES FINALLY!<br />
<br />
(Exercise: find a formula: if 2019 was the first Thirdsday, find the year for TD(i), the ith Thirdsday.)<br />
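Short of a closed formula, here is a quick empirical check (a sketch using Python's datetime module, where weekday() == 3 means Thursday) that reproduces the hand count above:

```python
from datetime import date

def thirdsdays(start, end):
    """Years y in [start, end) for which Jan 3 falls on a Thursday."""
    return [y for y in range(start, end) if date(y, 1, 3).weekday() == 3]

print(thirdsdays(2019, 2031))  # matches the list above: [2019, 2030]
```

Extending the range shows the familiar calendar behavior: day-of-week patterns repeat with period 28 when no century rule intervenes.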
<br />
So enjoy Thirdsday in 2019 when spellcheck still flags it.<br />
<br />
In 2030 it will be an accepted holiday and spellcheck will think it's fine.<br />
<br />
<br />https://blog.computationalcomplexity.org/2019/01/today-is-thirdsday-enjoy-it-while-you.htmlnoreply@blogger.com (GASARCH)3tag:blogger.com,1999:blog-3722233.post-2187908441651073696Mon, 31 Dec 2018 12:25:00 +00002018-12-31T07:25:47.068-05:00Complexity Year in Review 2018Result of the year goes to<br />
<blockquote class="tr_bq" style="text-align: center;">
<a href="https://eccc.weizmann.ac.il/report/2018/107/">Oracle Separation of BQP and PH</a> by Ran Raz and Avishay Tal</blockquote>
<div style="text-align: left;">
which we <a href="https://blog.computationalcomplexity.org/2018/06/bqp-not-in-polynomial-time-hierarchy-in.html">wrote about in June</a>. This work solves one of the original open questions in quantum complexity using tools from both quantum and classical circuit complexity. How often do we see oracle results with popular articles in <a href="https://www.quantamagazine.org/finally-a-problem-that-only-quantum-computers-will-ever-be-able-to-solve-20180621/">Quanta</a> (ignore the hyperbolic title), <a href="https://www.thehindu.com/sci-tech/science/quantum-computers-have-an-edge-over-classical-ones-says-the-oracle/article24420375.ece">The Hindu</a> and <a href="https://cacm.acm.org/magazines/2019/1/233514-quantum-leap/fulltext">CACM</a>?</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Runner up goes to the <a href="https://eccc.weizmann.ac.il/report/2018/006/">solution</a> of the 2-to-2 Games Conjecture by Subhash Khot, Dor Minzer and Muli Safra early in 2018. Boaz Barak gave a nice <a href="https://windowsontheory.org/2018/01/10/unique-games-conjecture-halfway-there/">two</a> <a href="https://windowsontheory.org/2018/02/26/on-the-recent-proof-of-the-2-to-2-conjecture/">post</a> overview.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
In last year's <a href="https://blog.computationalcomplexity.org/2017/12/complexity-year-in-review-2017.html">review</a> we talked about the magical breakthroughs of machine learning. This year we seemed to have moved beyond the magic to where machine learning has become a thing. We see the immense value of data and continue to struggle with the ethical challenges of collecting and acting on data, dominance of the big tech companies, training all these students who want to gain expertise in the area and trying to understand why ML works as well as it does. </div>
<div style="text-align: left;">
<br />
The big X-factor is <a href="https://blog.computationalcomplexity.org/2018/10/a-new-cold-war.html">China</a>. Will competition with China spur science literacy and funding in the US like the cold war with the Soviets did? Or will isolation from China limit scientific collaboration like the cold war with the Soviets did? </div>
<div style="text-align: left;">
<br />
The big tech surprise was the rise of electric scooters. Georgia Tech has embraced them and it is a quick way to get around campus.<br />
<br />
Some of the other questions I asked last year didn't have interesting answers: What will the Internet look like post-net neutrality? (too early to tell) How will the new tax code play out? (too early to tell) Where will Amazon put HQ2? (New York and DC--only surprise was picking two cities) What can quantum computers with 50 qubits accomplish? (still a good question) Will bitcoin move to $300K or 30 cents? (it dropped but still has real value)<br />
<br />
Thanks to our guest posters <a href="https://blog.computationalcomplexity.org/2018/10/a-new-aco-center-guest-post-by-vijay.html">Vijay Vazirani</a>, <a href="https://blog.computationalcomplexity.org/2018/12/guest-post-join-sigact.html">Samir Khuller and Robert Kleinberg</a>, and <a href="https://blog.computationalcomplexity.org/2018/09/the-tenure-system-is-broken-but-not-in.html">anonymous</a>.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
We remember <a href="https://terrytao.wordpress.com/2018/12/29/jean-bourgain/">Jean Bourgain</a>, <a href="https://blog.computationalcomplexity.org/2018/12/remembering-george-h-w-bush.html">George H. W. Bush</a>, <a href="https://www.tehrantimes.com/news/431247/Iranian-scientist-Babak-Farzad-dies-of-terminal-cancer">Babak Farzad</a>, <a href="https://blog.computationalcomplexity.org/2018/03/stephen-hawking-1942-2018.html">Stephen Hawking</a>, <a href="https://blog.computationalcomplexity.org/2018/12/ker-i-ko-1950-2018.html">Ker-I Ko</a> and Stan Lee.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
We end the year with craziness: the stock market is going through wild gyrations, we have a partial government shutdown (including all of NSF) and an uncertain political landscape with different parties leading the two houses of Congress. We're still in the midst of a technological revolution as governments around the world try to figure out how to regulate it. I find it hard to predict 2019, but it will not be quiet.</div>
https://blog.computationalcomplexity.org/2018/12/complexity-year-in-review-2018.htmlnoreply@blogger.com (Lance Fortnow)0tag:blogger.com,1999:blog-3722233.post-7278887012031275805Wed, 26 Dec 2018 13:31:00 +00002018-12-27T12:07:50.339-05:00Ker-I Ko (1950-2018)<div class="separator" style="clear: both; text-align: center;">
<a href="https://4.bp.blogspot.com/-WnVnzZNnJH8/XB6H3HQ9MdI/AAAAAAABlvo/-h_Y6N7q7Ls93g-5KRqxY2c7Bv_laZ-1gCLcBGAs/s1600/Ko.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="174" data-original-width="140" height="200" src="https://4.bp.blogspot.com/-WnVnzZNnJH8/XB6H3HQ9MdI/AAAAAAABlvo/-h_Y6N7q7Ls93g-5KRqxY2c7Bv_laZ-1gCLcBGAs/s200/Ko.jpg" width="160" /></a></div>
A few weeks ago as Bill and I prepared for our upcoming year in review post, we noted that we hadn't lost any theoretical computer scientists this year, at least none that we were aware of. Unfortunately we didn't make it through all of 2018 unscathed.<br />
<br />
Computational complexity theorist Ker-I Ko passed away peacefully from lung failure on December 13. <a href="https://people.cs.nctu.edu.tw/~kyko/" target="_blank">Ker-I Ko</a> spent most of his career at Stony Brook, from which he had recently retired to take a professorship at National Chiao Tung University in Taiwan.<br />
<br />
I had only had a few brief meetings with Ko but I knew his work quite well. In his <a href="https://doi.org/10.1137/0218027" target="_blank">best known work</a>, Ko, in a solo paper, created an infinite series of oracles A<sub>1</sub>, A<sub>2</sub>, … such that relative to A<sub>k</sub>, the <a href="https://blog.computationalcomplexity.org/2005/06/favorite-theorems-polynomial-time.html" target="_blank">polynomial-time hierarchy</a> collapses to exactly the kth level, that is Σ<sub>k-1</sub> ≠ Σ<sub>k</sub> = Σ<sub>k+1</sub> = PH. Ko wielded the <a href="https://en.wikipedia.org/wiki/Switching_lemma" target="_blank">switching lemma</a> like a scalpel, pulling apart the k-1st and kth levels while leaving enough room to encode the k+1st level. He actually gives two sets of oracles, one which collapses PH to PSPACE while collapsing the hierarchy to the kth level and one that separates PH from PSPACE. Even his oracle showing P=NP≠PSPACE wasn't trivial, and I used it as <a href="https://blog.computationalcomplexity.org/2009/05/oracle-results-are-good-for-you.html" target="_blank">an example</a> of a hard-to-settle complexity question.<br />
<br />
Ko, with Tim Long and Ding-Zhu Du, <a href="https://doi.org/10.1016/0304-3975(86)90152-0" target="_blank">showed that</a> P ≠ UP if and only if there exist two sets that are one-to-one, length-increasing, polynomial-time reducible to each other but not polynomial-time isomorphic. This paper played a large role in helping us understand the <a href="https://blog.computationalcomplexity.org/2003/03/berman-hartmanis-isomorphism.html" target="_blank">isomorphism conjecture</a>.<br />
<br />
Ko, with Pekka Orponen, Uwe Schöning and Osamu Watanabe, used Kolmogorov complexity to <a href="https://doi.org/10.1007/3-540-16486-3_99" target="_blank">measure the complexity of an instance of a problem</a>. The instance complexity of x in a set A is the size of the smallest program that correctly answers whether x is in A, and never gives an incorrect answer for any other y, though it can answer "I don't know" for y ≠ x.<br />
<br />
Ko also had several papers on complexity of real-valued functions and wrote several textbooks and manuscripts. A big loss for all of us in the complexity world.https://blog.computationalcomplexity.org/2018/12/ker-i-ko-1950-2018.htmlnoreply@blogger.com (Lance Fortnow)6tag:blogger.com,1999:blog-3722233.post-2349102010055677243Sun, 16 Dec 2018 21:24:00 +00002018-12-16T16:24:54.857-05:00Guest post: Join SIGACT!This is a guest post by Samir Khuller and Robert Kleinberg.<br />
<br />
Dear friends,<br />
<br />
As our research community continues to grow and thrive, SIGACT membership has not grown apace. We respectfully urge you to join SIGACT! Membership is very cheap (and does not require ACM membership) – only $15 a year – and by joining you will be lending your support to the many activities that SIGACT undertakes on behalf of the theoretical computer science research community. These include:<br />
<br />
<ul>
<li>sponsoring STOC and other theory conferences such as SPAA and PODC, as well as co-sponsoring SODA;</li>
<li>awards such as the Knuth, Gödel, and Kanellakis Prizes, the SIGACT Distinguished Service Award, and the best student paper awards at STOC and SODA;</li>
<li>supporting the Women in Theory workshop;</li>
<li>representing the theoretical computer science community to the ACM and beyond.</li>
</ul>
<br />
In addition to these community benefits, membership comes with individual benefits including voting rights in SIGACT elections, reduced rate for membership in EATCS, reduced conference registration rates at SIGACT-sponsored conferences, access to SIGACT News and announcements sent on the SIGACT email list.<br />
<br />
SIG membership does not automatically renew when you renew your ACM membership, and we suspect this may be one reason for the decline in SIGACT membership. So the next time you renew your ACM membership, remember to also join SIGACT or renew your SIG membership! Better yet, why wait? If you’re not a SIGACT member, join right now. You can use this link: <a href="https://www.acm.org/special-interest-groups/join">here</a><br />
<br />
Please do your part to nurture this important resource for our community.<br />
<br />
Respectfully,<br />
<br />
The SIGACT Executive Committeehttps://blog.computationalcomplexity.org/2018/12/guest-post-join-sigact.htmlnoreply@blogger.com (GASARCH)0