tag:blogger.com,1999:blog-3722233Sat, 21 Sep 2019 14:58:30 +0000typecastfocs metacommentsComputational ComplexityComputational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarchhttps://blog.computationalcomplexity.org/noreply@blogger.com (Lance Fortnow)Blogger2710125tag:blogger.com,1999:blog-3722233.post-6037506695705789893Mon, 16 Sep 2019 13:10:00 +00002019-09-16T09:10:52.374-04:00this paper from 2015 cracks Diffie-Hellman. What to tell the students?I am teaching cryptography this semester (Fall 2019) for the second time and will soon tell the students about the paper from 2015:<br />
<a href="https://weakdh.org/imperfect-forward-secrecy.pdf">Imperfect Forward Secrecy: How Diffie-Hellman Fails in Practice</a>. There are 14 authors.<br />
<br />
The upshot is that, as Diffie-Hellman was implemented in 2015, many deployed instances were crackable. In summary (and probably oversimplified):<br />
<br />
DH in a 512-bit group can be cracked by the authors<br />
<br />
DH in a 1024-bit group, they speculate, can be cracked with nation-state resources. <br />
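For concreteness, the protocol being attacked is plain finite-field Diffie-Hellman, which is just modular exponentiation. Below is a minimal Python sketch of the exchange. The prime and base here are tiny illustrative choices of mine (a ~31-bit prime, far below even the breakable 512-bit size), not parameters from the paper or from any real deployment:

```python
# Toy Diffie-Hellman key exchange (illustrative only -- real deployments
# use 2048-bit or larger groups; the paper's point is that 512-bit groups
# are breakable and 1024-bit groups are within nation-state reach).
import secrets

p = 2**31 - 1          # a small prime, trivially breakable -- for illustration
g = 7                  # an arbitrary base (not a vetted generator)

def keypair():
    x = secrets.randbelow(p - 2) + 1   # private exponent in [1, p-2]
    return x, pow(g, x, p)             # (private, public = g^x mod p)

a_priv, a_pub = keypair()              # Alice
b_priv, b_pub = keypair()              # Bob

# Both sides derive the same shared secret g^(ab) mod p.
assert pow(b_pub, a_priv, p) == pow(a_pub, b_priv, p)
```

As I understand the paper, a key part of the attack is that most of the discrete-log computation can be done once per prime p, which is devastating because many servers shared the same few standardized primes.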
<br />
<br />
<br />
Is this a big deal? If YES then what is being done, and if NOT then why not?<br />
<br />
I have come up with some statements that I DO NOT KNOW are true, but I am ASKING you to shed some light on the BIG DEAL or NO BIG DEAL question. (Note- Idea for a game show: BIG DEAL or NO BIG DEAL, where contestants are asked if a news story is a BIG DEAL or not.)<br />
<br />
So, please comment on the following statements:<br />
<br />
1) Since 2015 the people who use DH have upped their game and are now using bigger parameters. (I doubt this is true)<br />
<br />
2) DH is mostly used on things that hackers are not interested in, so this is not a big deal.<br />
<br />
3) The expertise required to crack DH via this paper is considerable, so hackers don't have the skills.<br />
<br />
4) This paper is not a problem for a bad reason: Hackers don't need to use the number field sieve DL algorithm when all they need to do is (1) guess that the PIN number is 1234 or the year the user was born (or close to it), (2) put on a uniform from Geek-Squad or some such organization and claim they are here to help, or (3) exploit a known security flaw that the company has not bothered fixing. <br />
<br />
5) The 14 authors have mysteriously disappeared. (I doubt this is true.)<br />
<br />
<br />
(Misc: My spell checker thinks that Diffie and crackable are not words, but Hellman is.)https://blog.computationalcomplexity.org/2019/09/this-paper-from-2015-cracks-diffie.htmlnoreply@blogger.com (GASARCH)7tag:blogger.com,1999:blog-3722233.post-5655789602676044507Mon, 09 Sep 2019 14:55:00 +00002019-09-09T11:03:03.353-04:00Are there any natural problems complete for NP INTER TALLY? NP INTER SPARSE?<br />
Recall:<br />
<br />
A is a <i>tally set</i> if A ⊆ 1<sup>*</sup>. <br />
<br />
<br />
A is a <i>sparse set</i> if there is a polynomial p such that the number of strings in A of length n is ≤ p(n). <br />
<br />
<br />
If there exists a sparse set A that is NP-hard under m-reductions (even btt-reductions) then P=NP. (See <a href="https://blog.computationalcomplexity.org/2006/04/favorite-theorems-small-sets.html">this post</a>.)<br />
<br />
If there exists a sparse set A that is NP-hard under T-reductions then PH collapses. (See <a href="https://blog.computationalcomplexity.org/2006/11/favorite-theorems-nonuniform.html">this post</a>.)<br />
<br />
Okay then!<br />
<br />
I have sometimes had a tally set or a sparse set that is in NP and I think that it's not in P. I would like to prove, or at least conjecture, that it's NP-complete. But alas, I cannot, since then P=NP. (Clarification: If my set is NP-complete then P=NP. I do not mean that the very act of conjecturing it would make P=NP. That would be an awesome superpower.)<br />
<br />
So what to do? <br />
<br />
A is <i>NPSPARSE-complete </i> if A is in NP, A is sparse, and for all B that are in NP and sparse, B ≤<sub>m</sub> A.<br />
<br />
Similar for NPTALLY and one can also look at other types of reductions.<br />
<br />
So, can I show that my set is NPSPARSE-complete? Are there any NPSPARSE-complete sets? Are there NATURAL ones? (Natural is a slippery notion- see this <a href="https://blog.computationalcomplexity.org/2004/03/unnatural-post.html">post by Lance</a>.)<br />
<br />
Here is what I was able to find out (if more is known then please leave comments with pointers.)<br />
<br />
1) It was observed by <a href="https://lance.fortnow.com/papers/files/npsparse.pdf">Buhrman, Fenner, Fortnow, van Melkebeek</a> that the following set is NPTALLY-complete:<br />
<br />
Let M<sub>1</sub>, M<sub>2</sub>, ... be a standard list of NP-machines. Let<br />
<br />
A = { 1<sup>(i,n,t)</sup> : M<sub>i</sub>(1<sup>n</sup>) accepts on some path within t steps }<br />
<br />
The set involves Turing Machines, so it's not quite what I want.<br />
<br />
<br />
2) <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/messnertoran.pdf">Messner and Toran</a> show that, under an unlikely assumption about proof systems, there exists an NPSPARSE-complete set. The set involves Turing Machines. Plus it uses an unlikely assumption. Interesting, but not quite what I want.<br />
<br />
<br />
3) Buhrman, Fenner, Fortnow, van Melkebeek also showed that there are relativized worlds where there are no NPSPARSE-complete sets (this was their main result). Interesting, but not quite what I want. <br />
<br />
4) If A is NE-complete then the tally version: { 1<sup>x</sup> : x is in A } is likely NPTALLY-complete. This may help me get what I want.<br />
<br />
Okay then!<br />
<br />
Are there any other sets that are NPTALLY-complete or NPSPARSE-complete? The obnoxious answer is to take finite variants of A. What I really want is a set of such problems so that we can prove other problems NPTALLY-complete or NPSPARSE-complete with the ease with which we now prove problems NP-complete.<br />
<br />
<br />
<br />
https://blog.computationalcomplexity.org/2019/09/are-there-any-natural-problems-complete.htmlnoreply@blogger.com (GASARCH)0tag:blogger.com,1999:blog-3722233.post-1092995341963635215Thu, 05 Sep 2019 16:03:00 +00002019-09-05T12:04:44.526-04:00TransitioningYou may have noticed, or not, that I haven't posted or tweeted much in the last month. I've had a busy time moving back to Chicago and starting my new position as Dean of the College of Science at Illinois Tech.<br />
<div>
<br /></div>
<div>
Part of that trip involved driving my electric Chevy Bolt from Atlanta to Chicago. You can do it, but it got a bit nerve-wracking. There is only one high-speed charging station between Indianapolis and Chicago, and you pray the charger outside the Lafayette Walmart actually works (it did). We passed many Tesla charging superstations; I have to admit they have the better network. </div>
<div>
<br /></div>
<div>
Theoremwise, Ryan Alweiss, Shachar Lovett, Kewen Wu and Jiapeng Zhang had <a href="https://gilkalai.wordpress.com/2019/08/23/amazing-ryan-alweiss-shachar-lovett-kewen-wu-jiapeng-zhang-made-dramatic-progress-on-the-sunflower-conjecture/">significant improvements on the sunflower conjecture</a>. I posted on the sunflower theorem for Richard Rado's <a href="https://blog.computationalcomplexity.org/2006/04/richard-rado-1906-1989.html">centenary</a>. Nice to see there is still some give in it.</div>
<div>
<br /></div>
<div>
I probably will post less often in this new position. Bill asked me "Why does being a dean (or maybe it's just the move) take more time than being a chair? Were you this busy when you moved and first became chair?"<br />
<br />
When I moved to Atlanta, I moved a year ahead of the rest of the family and rented a condo. We had plenty of time to search for a house in Atlanta and plan the move. Here it all happened in a much more compressed time schedule and, since we've bought a condo, into a much more compressed space.</div>
<div>
<br /></div>
<div>
Being a chair certainly ate up plenty of time but it feels different as dean with a more external focus. I won't give up the blog but you'll probably hear a lot more from Bill than from me at least in the near future.</div>
https://blog.computationalcomplexity.org/2019/09/transitioning.htmlnoreply@blogger.com (Lance Fortnow)1tag:blogger.com,1999:blog-3722233.post-1820902627345115895Tue, 03 Sep 2019 14:56:00 +00002019-09-03T10:56:38.565-04:00Can Mathematicians Fix Iphones? Can anyone?<br />
In my last post I noted that if I am asked (since I am a CS prof)<br />
<br />
<i>Can you fix my iphone</i><br />
<br />
is<br />
<br />
<i>No, I work on the math side of CS</i><br />
<br />
Some readers emailed me (I told them to comment instead but they were worried that other readers would argue with them) that NO, this is a tired and incorrect stereotype. Here are some samples:<br />
<br />
1) People in mathematics are no better or worse at fixing iPhones, fixing cars, programming their VCRs, etc. than the public.<br />
<br />
2) For that matter, people in academia, even in practical-sounding fields like Systems, are no better.<br />
<br />
3) Is your nephew Jason, who used to fix forklifts for a living, better at these things than the general public? I asked him. Answer: No, though he is better at fixing forklifts.<br />
<br />
I think something else is going on here. Let's look at fixing your own car. I think this is the sort of thing that some people used to be able to do, but now very few can, since cars have gotten more complicated. <br />
<br />
iPhones are not quite there yet, but they're getting that way. <br />
<br />
Of course, some things have gotten easier--- programming a DVR is much easier than programming a VCR. And people can easily use WORD or write programs without having to know any hardware.<br />
<br />
OKAY, after all these random thoughts, here is the question: What do you think? <br />
<br />
Are people in CS or Math or CS theory better at X than the general public where X is NOT CS, Math or CS theory, but something like fixing their cars?<br />
<br />
And<br />
<br />
What has gotten harder? What has gotten easier?<br />
<br />
<br />
<br />
https://blog.computationalcomplexity.org/2019/09/can-mathematicians-fix-iphones-can.htmlnoreply@blogger.com (GASARCH)3tag:blogger.com,1999:blog-3722233.post-8615276220589503260Sun, 25 Aug 2019 17:39:00 +00002019-08-25T14:58:55.179-04:00`Are you a math genius?' `Can you fix my Iphone?' `What do you think about Facebook and Privacy?'<br />
When I meet non-math people and tell them I am a computer science professor I get a range of responses. Here are some and my responses.<br />
<br />
<br />
1) <i>Are you a math genius?</i><br />
<br />
Here is the answer I give:<br />
<br />
<i>I know some things in math, frankly obscure things, that most people in math don't know. On the other hand, I probably can't help your teenage daughter with her trigonometry homework. Academics, like most people, forget what they don't use, and there are some things in math I rarely use, so I've forgotten them.</i><br />
<br />
I wonder how a Fields Medal winner would answer the question.<br />
<br />
Although the above is the answer I give, I really think it's an ill-defined and pointless question. It's very hard to measure genius or even define what it means (there are exceptions). After a certain age, what you've done is more important than how smart you allegedly are. <br />
<br />
2)<i> Can you fix my iPhone?</i><br />
<br />
That's an easy one: <i>No. I work on the math end of computer science.</i> I don't elaborate.<br />
<br />
3) <i>What do you think of Facebook and privacy?</i><br />
<br />
This is an odd one. I DO have opinions on this but they are NOT better informed just because I'm a comp sci professor. Since I don't have a Facebook account my opinions might be worse informed. So how do I respond? I show them<br />
<br />
<a href="https://www.youtube.com/watch?v=cqggW08BWO0">this Onion News Network Video about how the CIA created Facebook </a><br />
<br />
Some think it's real. They may be right.<br />
<br />
https://blog.computationalcomplexity.org/2019/08/are-you-math-genius-can-you-fix-my.htmlnoreply@blogger.com (GASARCH)9tag:blogger.com,1999:blog-3722233.post-337501791513204418Wed, 07 Aug 2019 19:27:00 +00002019-08-07T15:27:14.876-04:00Obstacles to improving Classical Factoring AlgorithmsIn Samuel Wagstaff's excellent book The Joy of Factoring (see <a href="https://mathcs.clarku.edu/~fgreen/SIGACTReviews/bookrev/47-2.pdf">here</a> for a review) there is a discussion towards the end about why factoring algorithms have not made much progress recently. I<br />
paraphrase it:<br />
<br />
<br />
--------------------------------------------------------<br />
<br />
The time complexities of the fastest known algorithms can be expressed as a formula of the following form (where N is the number to be factored):<br />
<br />
(*) exp(c(ln N)^t (ln(ln N))^{1-t})<br />
<br />
for some constant c (depending on the algorithm) and some 0 &lt; t &lt; 1. For the Quadratic Sieve (QS) and the Elliptic Curve Method (ECM), t=1/2. For the Number Field Sieve (NFS), t=1/3. The reason for this shape of the time complexity is the requirement of finding one or more smooth numbers (numbers that have only small primes as factors).<br />
<br />
----------------------------------------------------------<br />
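Smoothness, which drives the analysis above, is easy to state in code. Here is a naive trial-division check in Python (purely illustrative; real sieve implementations detect smooth numbers very differently, and never by factoring each candidate like this):

```python
# A number is B-smooth if all of its prime factors are < B.
# Naive trial-division check, fine for small illustrative inputs.
def is_smooth(n, B):
    d = 2
    while d * d <= n:
        while n % d == 0:       # d is a prime factor of n here
            n //= d
            if d >= B:
                return False
        d += 1
    return n == 1 or n < B      # any leftover prime factor must be < B

assert is_smooth(2**5 * 3**2 * 7, 11)    # 2016 = 2^5 * 3^2 * 7 is 11-smooth
assert not is_smooth(2 * 101, 100)       # 202 has the large factor 101
```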
<br />
This got me thinking: Okay, there may not be a drastic improvement anytime soon but what about just improving t? Is there a mathematical reason<br />
why an algorithm with (say) t=1/4 has not been discovered? In an earlier era I would have had to write a letter to Dr. Wagstaff to ask him. Buy an envelope, buy a stamp, find his address, the whole nine yards (my younger readers should ask their grandparents what envelopes and stamps were). In the current era I emailed him. And got a response.<br />
<br />
<br />
Samuel Wagstaff:<br />
<br />
The fastest known factoring algorithms find smooth numbers subject to parameter choice(s). In all these algorithms, one parameter choice is the smoothness bound B: a number is smooth if all its prime factors are < B. The NFS has the degree of a polynomial as an additional parameter.<br />
<br />
One analyzes the complexity of these algorithms by estimating the total work required (to find enough smooth numbers) for an arbitrary parameter choice using Dickman's function to predict the density of smooth numbers. Then one uses calculus to find the parameter choice(s) that minimize the total work function. Calculus also yields the optimal values for the parameter(s).<br />
<br />
If you have k parameters to choose, you will get the time complexity (*) with t = 1/(1+k). If you have no parameters (k = 0), you get (*) with t = 1, basically exponential time N^c. With one parameter to optimize, as in CFRAC (continued fractions algorithm) and QS, you get t = 1/2. NFS has two parameters, so t = 1/3. ECM also has t = 1/2 because it uses only one parameter, the smoothness bound B. If you want to get t = 1/4, you have to find a third parameter to optimize. No one has found one yet. That is the answer to your question.<br />
<br />
Note that some choices made in some factoring algorithms don't count as parameters. For example, the number of polynomials used in the multiple-polynomial quadratic sieve, and the upper bound on large primes kept, don't affect t. They affect the running time only in making c smaller. So you have to find a third parameter that matters in order to get (*) with t = 1/4. Or find three completely different new parameters.<br />
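To get a feel for why lowering t matters so much, one can plug a 1024-bit N into (*). The constants below (c ≈ 1 for QS, c ≈ 1.923 for NFS) are commonly quoted values, and this is a heuristic back-of-the-envelope comparison of asymptotic formulas, not a running-time estimate:

```python
# Compare the subexponential formula (*) for a 1024-bit N at t=1/2 (QS)
# vs t=1/3 (NFS), using commonly quoted constants c.
import math

def L(N_bits, t, c):
    """exp(c * (ln N)^t * (ln ln N)^(1-t)) for an N_bits-bit N."""
    lnN = N_bits * math.log(2)
    return math.exp(c * lnN**t * math.log(lnN)**(1 - t))

qs  = L(1024, 1/2, 1.0)     # Quadratic Sieve: t = 1/2
nfs = L(1024, 1/3, 1.923)   # Number Field Sieve: t = 1/3

print(f"QS  ~ 2^{math.log2(qs):.0f}")
print(f"NFS ~ 2^{math.log2(nfs):.0f}")
```

Dropping t from 1/2 to 1/3 shrinks the exponent substantially, which is why a hypothetical t = 1/4 algorithm (i.e., a third parameter that matters) would be such a big deal.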
<br />
<br />
https://blog.computationalcomplexity.org/2019/08/obstacles-to-improving-classical.htmlnoreply@blogger.com (GASARCH)2tag:blogger.com,1999:blog-3722233.post-5104220537083628124Mon, 29 Jul 2019 00:46:00 +00002019-07-28T20:46:08.367-04:00Turing to be on the Bank of England 50 pound note, giving me an excuse to talk about TuringBILL: Darling, guess who is soon going to be on the Bank of England 50 pound note?<br />
<br />
DARLING: Alan Turing. <br />
<br />
BILL: How did you deduce that? (She is right, see <a href="https://www.bbc.com/news/business-48962557">here</a>.)<br />
<br />
DARLING: Since you asked it, it couldn't be a member of the Royal Family (you don't care about that) or some British Politician (you don't care about that either). It had to be a mathematician or computer scientist.<br />
<br />
BILL: It could have been Hardy. I wonder if Ramanujan could qualify---do they need to be British? At <a href="https://www.bankofengland.co.uk/banknotes/banknote-characters">this website</a> it says<br />
<br />
<br />
<br />
<i>Of course, banknotes need to be universally accepted. We therefore look for UK characters who have made an important contribution to our society and culture through their innovation, leadership or values. We do not include fictional characters, or people who are still living (except the monarch on the front of the note). Finally, we need to have a suitable portrait of the person which will be easy to recognise.</i><br />
<br />
(They spell <i>recognise</i> with an s instead of a z, so spellcheck flagged it, but I won't change it.) <br />
<br />
Note that people on the banknotes have to be <i>UK characters</i>. I honestly don't know if that means they must be citizens.<br />
<br />
OKAY, so here are a few thoughts on Turing.<br />
<br />
1) When I visited Bletchley Park there was a booklet that bragged about the fact that Bletchley Park was much better at cracking codes than Germany because it allowed people to work there based only on ability (unlike Germany): women worked there, and Turing, who was gay, worked there. I think this is simplistic. Did any Jews work there (antisemitism was widespread in England, and the world, at the time)? I doubt any blacks worked there, since if they did that would be well known by now (if I am wrong let me know). Women DID work there, but was their work respected and used? (I honestly don't know.) Did Germany also use women at their codebreaking centers? Was Turing known to be gay (if not, then Bletchley gets no points for tolerating him)? Was JUST having Turing the reason they could crack codes? Plus I am sure there were other factors beyond merit alone.<br />
<br />
2) Turing was given a pardon for his ``crimes'' in August 2014. When I see things like this I wonder who was against it, why, and whether they were an obstacle.<br />
<br />
a) Human rights advocate Peter Tatchell noted that it's wrong to just single out Turing: other people prosecuted under that law who did not help beat the Germans in WW II should also be pardoned. The government later DID issue such a pardon, in 2017.<br />
<br />
b) Justice Minister Lord McNally objected to the pardon:<br />
<br />
<i>A posthumous pardon was not considered appropriate as Alan Turing was properly convicted of what at the time was a criminal offence. He would have known that his offence was against the law and that he would be prosecuted. It is tragic that Alan Turing was convicted of an offence that now seems both cruel and absurd—particularly poignant given his outstanding contribution to the war effort. However, the law at the time required a prosecution and, as such, long-standing policy has been to accept that such convictions took place and, rather than trying to alter the historical context and to put right what cannot be put right, ensure instead that we never again return to those times.<br />
</i><br />
<br />
While I disagree with him, I do note that, based on what he wrote and his general record, I think he is not saying this out of being anti-gay. There is a hard general question here: how does a society right past wrongs? I think pardoning and apologizing is certainly fine, but frankly it seems too weak. What else could a society do? Financial remuneration to living relatives? I don't think giving money to Inagh Payne (Turing's niece, who I think is still alive) would really help here.<br />
<br />
c) <i>At the bill's second reading in the House of Commons on 29 November 2013, Conservative MP Christopher Chope objected to the bill, delaying its passage<br />
</i><br />
<br />
I couldn't find Chope's reasons. On the one hand, they may be similar to McNally's. On the other hand, he is against same-sex marriage, so it's possible (though I do not know this) that he is anti-gay and that is why he was against the pardon. If someone can find his explanation for blocking the Turing bill, or other evidence that he is anti-gay, please leave it in the comments.<br />
<br />
3) Did the delay matter? I was surprised to find out---Yes. Here is the full passage from Wikipedia:<br />
<br />
<br />
<i>At the bill's second reading in the House of Commons on 29 November 2013, Conservative MP Christopher Chope objected to the bill, delaying its passage. The bill was due to return to the House of Commons on 28 February 2014,[175] but before the bill could be debated in the House of Commons,[176] the government elected to proceed under the royal prerogative of mercy. On 24 December 2013, Queen Elizabeth II signed a pardon for Turing's conviction for "gross indecency", with immediate effect.[17] Announcing the pardon, Lord Chancellor Chris Grayling said Turing deserved to be "remembered and recognised for his fantastic contribution to the war effort" and not for his later criminal conviction.[16][18] The Queen officially pronounced Turing pardoned in August 2014.[177] The Queen's action is only the fourth royal pardon granted since the conclusion of the Second World War.[178] Pardons are normally granted only when the person is technically innocent, and a request has been made by the family or other interested party; neither condition was met in regard to Turing's conviction.[179]</i><br />
<br />
This amazed me! I thought the Queen had NO power (too bad--- I wish she could just say NO BREXIT). Or that she formally has power but if she ever used it, it might be blocked somehow and taken away. So I am surprised she has a power she can use at all.<br />
<br />
4) I wonder if the pardon had to happen before they put him on the banknote. I have been told that this is a very American question--- the UK has no written constitution and operates more on custom and tradition than on written rules. <br />
<br />
5) I had always assumed that Turing committed suicide. Without going into detail, the Wikipedia site on Turing does give intelligent counterarguments to this. See <a href="https://en.wikipedia.org/wiki/Alan_Turing#Death">here</a>.<br />
https://blog.computationalcomplexity.org/2019/07/turing-to-be-on-bank-of-england-50.htmlnoreply@blogger.com (GASARCH)10tag:blogger.com,1999:blog-3722233.post-7683541918979097623Thu, 25 Jul 2019 20:39:00 +00002019-07-25T16:39:24.106-04:00The Advisor/Advisee RelationshipI've always felt a strong advisor/advisee relationship is the single most important factor in a successful PhD career. At its best, the advisor works closely with the student to develop a successful research agenda and helps mentor them through their graduate career and beyond. The advisor/advisee relationship can feel like a parent/child relationship that lasts an entire career. Nothing gives me more pleasure as an academic than to see the success of my current and former students.<br />
<br />
Take your time when picking an advisor. Don't choose an advisor based solely on research area or because they are "famous". Pick the advisor that will best guide you to a successful academic career.<br />
<br />
At its worst, a bad advisor/advisee relationship will destroy your graduate career, making you miserable and perhaps leading you to drop out of graduate school, or worse, particularly if you don't feel you are being treated fairly.<br />
<br />
Two incidents prompted this post. On TCS-Stack Exchange, a student has <a href="https://cstheory.stackexchange.com/questions/42704/single-author-papers-against-my-advisors-will/42711">authorship issues</a> with their advisor. Unfortunately these kinds of incidents happen more often than one suspects. If you can't work it out with the advisor, go talk to someone about it: another faculty member, the graduate or department chair, or a grad student ombudsperson if your institution has one. We care about our students and will work hard to resolve problems.<br />
<br />
In a much more <a href="https://medium.com/@huixiangvoice/the-hidden-story-behind-the-suicide-phd-candidate-huixiang-chen-236cd39f79d3">tragic event</a>, a student felt it was easier to take his own life than to cover up potential academic misconduct. Again, if you ever find yourself in such a situation, please reach out. Giving up is never the answer.https://blog.computationalcomplexity.org/2019/07/the-advisoradvisee-relationship.htmlnoreply@blogger.com (Lance Fortnow)10tag:blogger.com,1999:blog-3722233.post-7828284719883166611Mon, 22 Jul 2019 03:00:00 +00002019-07-21T23:00:05.358-04:00Answer to both Infinite Hats Problems from the last post<br />
(This is a joint post with David Marcus. You'll see why later.)<br />
<br />
In a prior post I posed two infinite hat problems. Today I post the solutions. Actually, this is a copy of my last post with the solutions added, so it is self-contained.<br />
<br />
A Hat Problem that you have probably seen:<br />
<br />
1) There are an infinite number of people, numbered 1,2,3,... There are 2 colors of hats. They can all see everyone's hat but their own. <br />
<br />
2) The adversary is going to put hats on all the people. They will guess their own hat color<i> at the same time</i>. <br />
<br />
3) The people can discuss strategy ahead of time, but must use a deterministic strategy and the adversary knows the strategy.<br />
<br />
4) The people want to minimize how many they get wrong. <br />
<br />
5) The adversary puts on hats to maximize how many they get wrong.<br />
<br />
I ask two questions (the answers are in a document I point to) and one meta-question:<br />
<br />
Q1: Is there a solution where they get all but a finite number of the guesses right? (If you have read my prior post on hat puzzles, <a href="https://blog.computationalcomplexity.org/2017/07/two-hat-problems-you-may-or-may-not.html">here</a> then you can do this one.) <br />
<br />
Q2: Is there a solution where they get all but at most (say) 18 wrong?<br />
<br />
<br />
Answers to Q1 and Q2 are <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/infinitehats.pdf">here</a>.<br />
<br />
How did I get into this problem? I was looking at hat problems a while back. Then I began discussing Q1 and Q2 by email (Does the term <i>discussing</i> have as a default that it is by email?) with David Marcus, who had just read the chapter of <a href="https://www.amazon.com/Problems-Point-Exploring-Computer-Science/dp/9813279974">Problems with a Point</a> on hat puzzles. After a few emails back and forth, he began looking on the web for answers. He found one. There is a website of hat puzzles! It was MY website of papers on Hat Puzzles! It is <a href="https://www.cs.umd.edu/users/gasarch/TOPICS/hats/hats.html">here</a>. And on it was a relevant paper <a href="https://www.cs.umd.edu/users/gasarch/TOPICS/hats/infinite-hats-and-ac.pdf">here</a>. We did not find any other source of the problem or its solution. <br />
<br />
Q3: How well known is problem Q2 and the solution? I've seen Q1 around but the only source on Q2 that I know of is that paper, and now this blog post. So, please leave a comment telling me if you have seen Q2 and/or the solution someplace else, and if so where.<br />
<br />
The responses to my last post indicated that YES the problem was out there, but the proof that you could not get all-but-18 was not well known. <br />
<br />
I THINK that all of the proofs that you can't do all-but-18 in the comments of the last post were essentially the same as the solution I pointed to in this blog. I would be interested if there is an alternative proof. <br />
https://blog.computationalcomplexity.org/2019/07/answer-to-both-infinite-hats-problems.htmlnoreply@blogger.com (GASARCH)0tag:blogger.com,1999:blog-3722233.post-6069637759837834972Tue, 16 Jul 2019 23:38:00 +00002019-07-16T19:38:00.557-04:00Guest post by Samir Khuller on attending The TCS Women 2019 meeting<br />
(I will post the solution to the problem in the last blog later in the week---probably Thursday. Meanwhile, enjoy these thoughts from Samir Khuller on the TCS Women 2019 meeting.)<br />
<br />
Guest Post by Samir Khuller:<br />
<br />
“Am I even allowed here?” was the first thought that crossed my mind when I entered the room. It was packed with women (over 95%); however, a few minutes later several men had trickled in. I was at the TCS Women spotlight workshop on the day before STOC. Kudos to Barna Saha, Sofya Raskhodnikova, and Virginia Vassilevska Williams for putting this grand (and long-needed) event together, which serves as a role model and showcases some of the recent work by rising stars. The Sunday afternoon workshop was followed by both an all-women panel and a poster session (which I sadly did not attend).<br />
<br />
<br />
The rising stars talks were given by Naama Ben-David (CMU), Andrea Lincoln (MIT), Debarati Das (Charles University) and Oxana Poburinnaya (Boston U). After a short break, the inspirational talk was given by Ronitt Rubinfeld from MIT. Ronitt’s talk was on the topic of program checking, but she made it inspirational by putting us in her shoes as a young graduate student, three decades back, trying to make a dent in research by working on something that her advisor Manuel Blum and his senior graduate student Sampath Kannan had been working on--- and I must say she made a pretty big dent in the process! She also related those ideas in a really elegant manner to other pieces of work done since, and showed how they led to work on property testing.</div>
<br />
<br />
I am delighted to say that NSF supported the workshop along with companies such as Amazon, Akamai, Google and Microsoft. SIGACT plans to be a major sponsor next year.<br />
<br />
<br />
The full program for the workshop is available <a href="https://sigact.org/tcswomen/tcs-women-2019/">here</a>.<br />
<br />
https://blog.computationalcomplexity.org/2019/07/guest-post-by-samir-khuller-on.htmlnoreply@blogger.com (GASARCH)12tag:blogger.com,1999:blog-3722233.post-4254868758435665114Mon, 15 Jul 2019 03:12:00 +00002019-07-14T23:12:25.307-04:00Two infinite hat problem and a question about what is ``well known''<br />
This is a joint post with David Marcus. You will see how he is involved in my next post.<br />
<br />
Two infinite hat problems based on one scenario. I am also curious if they are well known.<br />
<br />
1) There are an infinite number of people, numbered 1,2,3,... There are 2 colors of hats. They can all see everyone's hat but their own. <br />
<br />
2) The adversary is going to put hats on all the people. They will guess their own hat color<i> at the same time</i>. <br />
<br />
3) The people can discuss strategy ahead of time, but must use a deterministic strategy and the adversary knows the strategy.<br />
<br />
4) The people want to minimize how many they get wrong. <br />
<br />
5) The adversary puts on hats to maximize how many they get wrong.<br />
<br />
I ask two questions and one meta-question:<br />
<br />
Q1: Is there a solution where they get all but a finite number of the guesses right? (I have blogged about a variant of this one a while back.)<br />
<br />
Q2: Is there a solution where they get all but at most (say) 18 wrong? (My students would say <i>the answer has to be YES or he</i> <i>wouldn't ask it</i>. They don't realize that I work on upper AND lower bounds!)<br />
<br />
Q3: How well known is problem Q1 and the solution? Q2 and the solution? I've seen Q1 and its solution around (not sure where), but the only source on Q2 that I know of is CAN'T TELL YOU IN THIS POST, WILL IN THE NEXT POST. So, please leave a comment telling me if you have seen Q1 or Q2 and solutions. And if so then where.<br />
<br />
Feel free to leave any comments you want; however, I warn readers who want to solve it themselves to not look at the comments, or at my next post.<br />
https://blog.computationalcomplexity.org/2019/07/two-infinite-hat-problem-and-question.htmlnoreply@blogger.com (GASARCH)10tag:blogger.com,1999:blog-3722233.post-1705625191398821823Thu, 11 Jul 2019 17:54:00 +00002019-07-11T15:37:18.780-04:00Degree and SensitivityHao Huang's <a href="https://arxiv.org/abs/1907.00847">proof of the sensitivity conjecture</a> that I <a href="https://blog.computationalcomplexity.org/2019/07/local-kid-makes-history.html">posted on last week</a> relied on a 1992 <a href="https://doi.org/10.1016/0097-3165(92)90060-8">result of Gotsman and Linial</a>. Let's talk about that result.<br />
<br />
Consider the set S={-1,1}<sup>n</sup>. The hypercube of dimension n is the graph with vertex set S and an edge between x = (x<sub>1</sub>,…,x<sub>n</sub>) and y = (y<sub>1</sub>,…,y<sub>n</sub>) in S if there is exactly one i such that x<sub>i</sub> ≠ y<sub>i</sub>. Every vertex has degree n.<br />
<br />
We say a vertex x is odd if x has an odd number of -1 coordinates, even otherwise. Every edge joins an odd and even vertex.<br />
<br />
Let f be a function mapping S to {-1,1}. The sensitivity of f on x is the number of i such that f(x<sub>1</sub>,…,x<sub>i</sub>,…,x<sub>n</sub>) ≠ f(x<sub>1</sub>,…,-x<sub>i</sub>,…,x<sub>n</sub>). The sensitivity of f is the maximum over all x in S of the sensitivity of f on x.<br />
<br />
Let g be the same function as f except that we flip the value on all odd vertices. Notice now that the sensitivity of f on x is the number of i such that g(x<sub>1</sub>,…,x<sub>i</sub>,…,x<sub>n</sub>) = g(x<sub>1</sub>,…,-x<sub>i</sub>,…,x<sub>n</sub>).<br />
<br />
Let G be the subgraph induced by the set of x such that g(x)=-1 and H the subgraph induced by the set of x such that g(x)=1. The sensitivity of f is the maximum number of neighbors of any vertex in G or H.<br />
<br />
Consider f as a multilinear polynomial over the reals. The sensitivity conjecture states there is some α>0 such that if f has degree n then f has sensitivity at least n<sup>α</sup>.<br />
<br />
Note g(x<sub>1</sub>,…,x<sub>n</sub>)=f(x<sub>1</sub>,…,x<sub>n</sub>)x<sub>1</sub>⋯x<sub>n</sub>. If f has a degree n term, the variables in that term cancel out on S (since x<sub>i</sub><sup>2</sup>=1) and the constant of the degree n term of f becomes the constant term of g. The constant term is just the expected value, so f has full degree iff g is unbalanced.<br />
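Both quantities are easy to check by brute force on small examples. Here is a minimal Python sketch (my illustration, not from the post) that computes the sensitivity and the top Fourier coefficient E[f(x)·x<sub>1</sub>⋯x<sub>n</sub>], which is exactly the constant term of g:

```python
from itertools import product
from math import prod

def sensitivity(f, n):
    # max over x of the number of coordinates whose flip changes f(x)
    return max(
        sum(1 for i in range(n)
            if f(x) != f(x[:i] + (-x[i],) + x[i + 1:]))
        for x in product([-1, 1], repeat=n))

def top_coefficient(f, n):
    # coefficient of x1*...*xn in the multilinear expansion of f,
    # i.e. the expected value of g(x) = f(x)*x1*...*xn
    return sum(f(x) * prod(x)
               for x in product([-1, 1], repeat=n)) / 2 ** n

parity = lambda x: prod(x)                # degree n, sensitivity n
maj3 = lambda x: 1 if sum(x) > 0 else -1  # majority on 3 bits

print(sensitivity(parity, 3), top_coefficient(parity, 3))  # 3 1.0
print(sensitivity(maj3, 3), top_coefficient(maj3, 3))      # 2 -0.5
```

Majority on three bits has a nonzero top coefficient, so full degree 3, but sensitivity only 2; that is consistent with (and well above) the conjectured n<sup>α</sup> lower bound.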
<br />
GL Assumption: There is some constant α>0 such that for any partition of the hypercube into sets A and B with |A| ≠ |B|, with G and H the subgraphs induced by A and B, some node of G or H has at least n<sup>α</sup> neighbors.<br />
<br />
The above argument, due to Gotsman and Linial, shows that the GL assumption is equivalent to the sensitivity conjecture.<br />
<br />
Huang proved that given any subset A of the vertices of a hypercube with |A|>2<sup>n</sup>/2 the induced subgraph has a node of degree at least n<sup>1/2</sup>. Since either A or B in the GL assumption has size greater than 2<sup>n</sup>/2, Huang's result gives the sensitivity conjecture. https://blog.computationalcomplexity.org/2019/07/degree-and-sensitivity.htmlnoreply@blogger.com (Lance Fortnow)0tag:blogger.com,1999:blog-3722233.post-8041836663315088806Mon, 08 Jul 2019 03:55:00 +00002019-07-07T23:55:55.092-04:00Fortran is underrated! (Joint Post with David Marcus who was a classmate of mine at SUNY Stony Brook [now called Stony Brook University]. I was class of 1980, he was class of 1979. We were both math majors.)<br />
<br />
David has been reading <a href="https://www.amazon.com/Problems-Point-Exploring-Computer-Science/dp/9813279974">Problems with a POINT</a> (I'm glad someone is reading it) and emailed me a comment on the following passage which was essentially <a href="https://blog.computationalcomplexity.org/2012/02/dusting-off-my-bookshelf-i-find-book-on.html">this post</a>. I paraphrase what I wrote:<br />
<br />
PASSAGE IN BOOK:<br />
I dusted off my book shelves and found a book on Fortran. On the back it said:<br />
<br />
FORTRAN is one of the oldest high-level languages and remains the premier language for writing code for science and engineering applications. (NOTE- The back of the book uses Fortran but the spell checker I am using insists on FORTRAN. As a fan of capital letters, I don't mind going along.)<br />
<br />
When was the book written?<br />
<br />
The answer was surprising in that it was 2012 (the Chapter title was <i>Trick Question or Stupid Question</i>. This was a Trick Question.) I would have thought that FORTRAN was no longer the premier language by then. I also need to dust my bookshelves more often.<br />
END OF PASSAGE IN BOOK<br />
<br />
David Marcus emailed me the following:<br />
<br />
DAVID'S EMAIL<br />
Page 201. Fortran. One clue is that it said "Fortran" rather than"FORTRAN". Fortran 90 changed the name from all upper case. Whether it is the "premier language" depends on what you mean by "premier". It is probably the best language for scientific computing. I used it pretty much exclusively (by choice) in my previous job that I left in 2006. The handling of arrays is better than any other language I've used. Maybe there are some better languages that I'm not familiar with, but the huge number of high-quality scientific libraries available for Fortran makes it hard to beat. On the other hand, I never wrote a GUI app with it (Delphi is best for that).<br />
END OF DAVID'S EMAIL<br />
<br />
In later emails we agreed that Fortran is not used that much (there are lists of most-used languages and neither Fortran nor FORTRAN is ever in the top 10). But what intrigued me was the following contrast:<br />
<br />
1) David says that it's the BEST language for Scientific Computing. I will assume he is right.<br />
<br />
2) I doubt much NEW code is being written in it. I will assume I am right.<br />
<br />
So---what's up with that? Some options:<br />
<br />
OPTION 1) People SHOULD use Fortran but DON'T. If so, why is that? Fortran is not taught in schools. People are used to what they already know. Perhaps people who pick up new things easily would rather use NEW things than NEW-TO-THEM-BUT-NOT-TO-THEIR-GRANDMOTHER things. Could be a coolness factor. Do the advantages of Fortran outweigh the disadvantages? Is what they are using good enough?<br />
<br />
OPTION 2) The amount of Scientific computing software being written is small since we already have these great Fortran packages. So it may be a victim of its own success.<br />
<br />
CAVEAT: When I emailed David a first draft of the post he pointed out the following which has to do with the lists of most-used programming languages:<br />
<br />
DAVID'S EMAIL:<br />
The problem with the lists you were looking at is that most people in the world are not scientists, so most software being written is not for scientists. Scientists and technical people are writing lots of new code. If you look at a list of scientific languages, you will see Fortran, e.g., <a href="https://en.wikipedia.org/wiki/Scientific_programming_language">here</a> and <a href="https://en.wikipedia.org/wiki/Fortran#Science_and_engineering">here</a>.<br />
<br />
<br />
There are several Fortran compilers available. One of the best was bought by Intel some time back and they still sell it. I doubt they would do that if no one was using it. Actually, I think Intel had a compiler, but bought the Compaq compiler (which used to be the Digital Equipment compiler) and merged the Compaq team with their team. Something like that. I was using the Compaq compiler around that time.<br />
END OF DAVID'S EMAIL<br />
<br />
One quote from the second pointer I find intriguing. (Second use of the word <i>intriguing</i>. It was my word-of-the-day on my word-calendar).<br />
<br />
<i>... facilities for inter-operation with C were added to Fortran 2003 and enhanced by ISO/IEC technical specification 29113, which will be incorporated into Fortran 2018. </i><br />
<br />
I (Bill) don't know what some of that means; however, it does mean that Fortran is still active.<br />
<br />
<br />
One fear: with its not being taught that much, will knowledge of it die out? We'll be like Star Trek aliens:<br />
<br />
<i>The old ones built these machines, but then died and we can't fix them!<br />
<br />
<br />
<br />
</i>https://blog.computationalcomplexity.org/2019/07/fortran-is-underated.htmlnoreply@blogger.com (GASARCH)7tag:blogger.com,1999:blog-3722233.post-8952301722851173857Tue, 02 Jul 2019 17:05:00 +00002019-07-02T21:11:31.933-04:00Local Kid Makes History<div class="separator" style="clear: both; text-align: center;">
</div>
<a href="https://1.bp.blogspot.com/-OpUgtDyqMf0/XRuOhVHWTFI/AAAAAAABplw/Tz0jTV3EQCo9w0aZ1bsI9xsJiec3OfytgCLcBGAs/s1600/Huang.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="324" data-original-width="266" height="200" src="https://1.bp.blogspot.com/-OpUgtDyqMf0/XRuOhVHWTFI/AAAAAAABplw/Tz0jTV3EQCo9w0aZ1bsI9xsJiec3OfytgCLcBGAs/s200/Huang.jpg" width="163" /></a>The <a href="https://www.scottaaronson.com/blog/?p=4229">blogosphere</a> is <a href="https://gilkalai.wordpress.com/2019/07/02/amazing-hao-huang-proved-the-sensitivity-conjecture/">blowing</a> <a href="https://windowsontheory.org/2019/07/02/sensitivity-conjecture-proved/">up</a> over Hao Huang's just <a href="https://arxiv.org/abs/1907.00847">posted proof</a> of the sensitivity conjecture, what was one of the more <a href="https://blog.computationalcomplexity.org/2017/12/razors-edge.html">frustrating open questions</a> in complexity.<br />
<br />
Huang, an assistant professor in the math department at Emory, settled an open question about the hypercube. The hypercube is a graph on N=2<sup>n</sup> vertices where each vertex corresponds to an n-bit string and there are edges between vertices corresponding to strings that differ in a single bit. Think of the set of the strings of odd parity: N/2 vertices with no edges between them. Add any other vertex and it would have n neighbors. Huang showed that no matter how you place N/2+1 vertices in the hypercube, some vertex will have at least n<sup>1/2</sup> neighbors. By an <a href="https://doi.org/10.1016/0097-3165(92)90060-8">old result</a> of Gotsman and Linial, Huang's theorem implies the sensitivity conjecture.<br />
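For tiny n you can confirm Huang's bound exhaustively. A small Python sketch (my illustration, not from the paper) checks every subset of 2<sup>n-1</sup>+1 vertices:

```python
from itertools import combinations
from math import sqrt

def induced_max_degree(A, n):
    # largest number of hypercube neighbors a vertex of A has inside A
    S = set(A)
    return max(sum(1 for i in range(n) if v ^ (1 << i) in S) for v in A)

def worst_case_degree(n):
    # minimize the induced max degree over all subsets of size 2^(n-1)+1
    return min(induced_max_degree(A, n)
               for A in combinations(range(2 ** n), 2 ** (n - 1) + 1))

for n in (2, 3, 4):
    assert worst_case_degree(n) >= sqrt(n)  # Huang's theorem
```

Vertices are encoded as n-bit integers, so `v ^ (1 << i)` is the neighbor of v across coordinate i; the brute force is only feasible for n up to about 4.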
<br />
I won't go through the shockingly simple proof, the <a href="https://arxiv.org/abs/1907.00847">paper</a> is well written, or you can read the blogs I linked to above or even just Ryan O'Donnell's <a href="https://twitter.com/BooleanAnalysis/status/1145837576487612416">tweet</a>.<br />
<br />
I have nothing more to say than wow, just wow.https://blog.computationalcomplexity.org/2019/07/local-kid-makes-history.htmlnoreply@blogger.com (Lance Fortnow)3tag:blogger.com,1999:blog-3722233.post-7054757178293557182Mon, 01 Jul 2019 02:34:00 +00002019-07-08T15:46:03.126-04:00A proof that 22/7 - pi > 0 and more<br />
My father was a High School English teacher who did not know much math. As I was going off to college, intending to major in math, he gave me the following sage advice:<br />
<br />
1) <i>Take Physics as well as Math since Physics and Math go well together.</i> This was good advice. I took the first year of Physics for Physics Majors, and I later took a senior course in Mechanics since that was my favorite part of the first year course. Kudos to Dad!<br />
<br />
2) π <i>is exactly </i>22/7. I knew this was not true, but I also knew that I had no easy way to show him this. In fact, I wonder if I could have proven it myself back then.<br />
<br />
I had not thought about this in many years when I came across the following:<br />
<br />
Problem A-1 on the 1968 Putnam exam:<br />
<br />
Prove 22/7 - π = ∫<sub>0</sub><sup>1</sup> (x<sup>4</sup>(1-x)<sup>4</sup>)/(1+ x<sup>2</sup> )dx<br />
<br />
(I can easily do this by partial fractions and remembering that ∫ 1/(1+x^2) dx = tan<sup>-1</sup>x, which is tan inverse, not 1/tan. See <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/pi.pdf">here</a>.)<br />
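The identity is easy to sanity-check numerically. A quick Python sketch (my addition, not part of the Putnam solution) using composite Simpson's rule:

```python
from math import pi

def f(x):
    return x**4 * (1 - x)**4 / (1 + x**2)

def simpson(f, a, b, m=2000):
    # composite Simpson's rule with m (even) subintervals
    h = (b - a) / m
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, m, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, m, 2))
    return s * h / 3

assert abs(simpson(f, 0, 1) - (22 / 7 - pi)) < 1e-10
```

Note that the integrand is strictly positive on (0,1), so the identity by itself already proves 22/7 - π &gt; 0.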
<br />
(ADDED LATER---I have added conjectures on getting integrals of the form above except with 4 replaced by any natural number. Be the first on your block to solve my conjectures! It has to be easier than the Sensitivity Conjecture!)<br />
<br />
<br />
<br />
Let n ∈ N, which we will choose later. Inscribe a circle of radius 1 in a regular n-gon (n even); the perimeter of the polygon, 2n tan(π/n), exceeds the circumference of the circle, 2π, so <br />
<br />
<br />
n tan(π/n) > π <br />
<br />
<br />
So we seek an even value of n such that<br />
<br />
<br />
n tan(π/n) < 22/7<br />
<br />
<br />
Using Wolfram Alpha, the smallest such n is 92.<br />
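That search is easy to reproduce without Wolfram Alpha (a sketch I added): n·tan(π/n) decreases toward π as n grows, so just scan the even values upward.

```python
from math import pi, tan

# n * tan(pi/n) decreases toward pi, so the first even n that works
# is found by scanning upward
n = 2
while n * tan(pi / n) >= 22 / 7:
    n += 2
print(n)  # 92
```

(A quick check shows that if odd n were allowed, n = 91 would already work.)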
<br />
Would that convince Dad? Would he understand it? Probably not. Oh well.<br />
<br />
Some misc points.<br />
<br />
<br />
1) While working on this post I originally wanted to find tan(π/2<sup>7</sup>) by using the half-angle formula many times, and get an exact answer in terms of radicals, rather than using Wolfram Alpha. <br />
<br />
a) While I have lots of combinatorics books, theory of comp books, and more Ramsey Theory books than one person should own in my house, I didn't have a SINGLE book with any trig in it.<br />
<br />
b) I easily found it on the web: <br />
<br />
tan(x/2) = sqrt( (1-cos x)/(1+cos x) ) = sin x/(1+cos x) = (1-cos x)/(sin x).<br />
<br />
None of these seems like it would get me a nice expression for tan(π/2<sup>7</sup>). But I don't know. Is there a nice expression for tan(π/2<sup>k</sup>) ? If you know of one then leave a polite comment.<br />
<br />
2) I assumed that there was a more clever and faster way to do the integral. I could not find old Putnam exams and their solutions on the web (I'm sure they are there someplace! --- if you know then comment politely with a pointer). So I got a book out of the library <i>The William Lowell Putnam Mathematical Competition Problems and Solutions 1965--1984</i> by Alexanderson, Klosinski, and Larson. Here is the clever solution:<br />
<br />
<i>The standard approach from Elementary Calculus applies.<br />
<br />
</i><br />
<br />
Not as clever as I was hoping for.<br />
<br />
3) I also looked at the integral with 4 replaced by 1,2,3,4,...,16. The results are in the writeup I pointed to before. It looks like I can use this sequence to get upper and lower bounds on pi, ln(2), pi+2ln(2), and pi-2ln(2). I have not proven any of this. But take a look! And as noted above I have conjectures!<br />
<br />
<br />
4) When I looked up INSCRIBING a circle in a regular n-polygon, Google kept giving me CIRCUMSCRIBING. Why? I do not know but I can speculate. Archimedes had a very nice way of using circumscribed polygons to approximate pi. It's on YouTube <a href="https://www.youtube.com/watch?v=_rJdkhlWZVQ">here</a>. Hence people are used to using circumscribed rather than inscribed circles.<br />
<br />
https://blog.computationalcomplexity.org/2019/06/a-proof-that-227-pi-0-and-more.htmlnoreply@blogger.com (GASARCH)7tag:blogger.com,1999:blog-3722233.post-5703612485495687112Thu, 27 Jun 2019 20:34:00 +00002019-06-28T11:27:36.082-04:00FCRC 2019<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-1zksIfUU-7U/XRToEil7J7I/AAAAAAABpiI/nR4seqK1l08FfLoalQyM2OzdK3iZTwL2ACKgBGAs/s1600/MVIMG_20190626_112006_1.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1201" data-original-width="1600" height="240" src="https://1.bp.blogspot.com/-1zksIfUU-7U/XRToEil7J7I/AAAAAAABpiI/nR4seqK1l08FfLoalQyM2OzdK3iZTwL2ACKgBGAs/s320/MVIMG_20190626_112006_1.jpg" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Georgia Tech FCRC Participants</td></tr>
</tbody></table>
I'm heading home from the <a href="https://fcrc.acm.org/">2019 ACM Federated Computing Research Conference</a> in Phoenix, a collection of computer science meetings including <a href="http://acm-stoc.org/stoc2019/">STOC</a>, <a href="http://learningtheory.org/colt2019/">COLT</a> and <a href="http://www.sigecom.org/ec19/">EC</a>.<br />
<br />
Geoff Hinton and Yann LeCun gave their Turing award lectures, their co-winner Yoshua Bengio not in attendance. Hinton talked about how machine learning triumphed over symbolic AI. LeCun argued that under uncertainty, one should learn the distribution instead of just the average. If you want more, just <a href="https://fcrc.acm.org/turing-lecture-at-fcrc-2019">watch it yourself</a>.<br />
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
To get to the STOC lectures you go up and down escalators and pass through ISCA (Computer Architecture) and PLDI (Programming Languages). It's like you are going up the computing stack until you reach algorithms and complexity. </div>
<div style="text-align: left;">
<br />
The conference center was just two blocks from Chase Field so we could take in a Diamondbacks baseball game. They opened the roof because the temperature dropped into the double digits. Last night, Paul McCartney played at an arena just a block from the conference center, but instead I hung out at an Uber reception for the EC conference.<br />
<br /></div>
<div style="text-align: left;">
Let me mention a best paper awardee, <a href="https://doi.org/10.1145/3313276.3316369">The Reachability Problem for Petri Nets is Not Elementary</a> by Wojciech Czerwinski, Slawomir Lasota, Ranko Lazic, Jerome Leroux and Filip Mazowiecki. In a Petri net you have a list of vectors of integers and an initial and final vector. You start with the initial vector and can add any of the other vectors nondeterministically as often as you like as long as no coordinate goes negative. Can you get to the final vector? This problem was known to be computable in "Ackermannian" time and EXPSPACE-hard. This paper shows the problem is not elementary, i.e. not solvable in time bounded by any fixed-height tower of 2's. A <a href="https://arxiv.org/abs/1903.08575">recent result</a> shows Petri Nets reachability is primitive recursive for fixed dimensions.<br />
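The reachability question is easy to state in code. Here is a brute-force breadth-first search sketch (my illustration; it only explores a bounded portion of the state space, so it is not a decision procedure — the paper's point is that any true algorithm needs non-elementary time):

```python
from collections import deque

def reachable(start, target, moves, limit=10000):
    # BFS over reachable vectors; every coordinate must stay nonnegative
    start, target = tuple(start), tuple(target)
    seen, queue = {start}, deque([start])
    while queue and len(seen) < limit:
        v = queue.popleft()
        if v == target:
            return True
        for m in moves:
            w = tuple(a + b for a, b in zip(v, m))
            if min(w) >= 0 and w not in seen:
                seen.add(w)
                queue.append(w)
    return False  # not found within the explored portion

# one application of the move (-1,2) takes (1,0) to (0,2)
print(reachable((1, 0), (0, 2), [(-1, 2)]))  # True
```

The nonnegativity guard `min(w) >= 0` is what makes this a Petri net rather than an unrestricted vector-addition walk.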
<br />
Avi Wigderson gave the Knuth Prize lecture exploring deep connections between mathematics and algorithms. Hopefully the video will be online soon.<br />
<br />
STOC next year in Chicago, EC as part of the Game Theory Congress in Budapest. </div>
https://blog.computationalcomplexity.org/2019/06/fcrc-2019.htmlnoreply@blogger.com (Lance Fortnow)3tag:blogger.com,1999:blog-3722233.post-8803689043688761239Mon, 24 Jun 2019 05:03:00 +00002019-06-24T01:03:12.211-04:00Are you smarter than a 5th grade amoeba?(title of this blog is due to Henry Baker who posted an article about this elsewhere)<br />
<br />
An amoeba finds approximate solutions to TSP in linear time: <a href="https://phys.org/news/2018-12-amoeba-approximate-solutions-np-hard-problem.html">here</a>.<br />
<br />
Over the years we have seen models of computation that claim to solve NPC or other hard problems quickly. I ask non-rhetorically and with an open mind how they have panned out.<br />
<br />
In no particular order:<br />
<br />
1) Parallelism. For solving problems faster YES. For speeding up solutions of NPC problems in practice, I think YES. For making P=NP somehow NO. Even so, parallel computers have been a definite practical success.<br />
<br />
2) Quantum Computing. Will they factor large numbers anytime soon? Ever? Should we view the effort to build them as an awesome and insightful Physics experiment? Are there any problems that they are NOW doing faster? Is Quantum Crypto (I know, not the same thing) actually used? Will other things of interest come out of the study of quantum computing? It already has, see <a href="https://theoryofcomputing.org/articles/gs002/gs002.pdf">here</a>.<br />
<br />
3) DNA computing. Did that lead to practical solutions to NPC problems? I do not think it did. Did that lead to interesting math? Interesting biology? Interesting science? I do not know.<br />
<br />
4) Autistic computing for finding primes: see <a href="https://blog.computationalcomplexity.org/2009/09/possibly-recruits-for-polymath-primes.html">here</a>. Oliver Sacks, the neurologist, claimed that two autistic twin brothers could generate large primes quickly. This story was never put to a rigorous test and may not be quite right.<br />
<br />
5) Amoeba computing: Too early to tell. The article seems to say it succeeded on 8 cities.<br />
<br />
The problem with all of these non-standard models of computing is SCALE. And the more powerful classical computers get, the harder it is for these nonstandard models to compete.<br />
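On the SCALE point: the amoeba handled 8 cities, but at that size a classical brute force is instantaneous, since fixing a start city leaves only (n-1)! = 5040 tours to try. A throwaway Python sketch (my illustration):

```python
from itertools import permutations

def tsp_brute_force(dist):
    # exact TSP: fix city 0 and try every ordering of the other cities;
    # for n = 8 that is only 7! = 5040 tours
    n = len(dist)
    best = float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        best = min(best, length)
    return best

# 4 cities at the corners of a unit square, Manhattan distances;
# the optimal tour is the perimeter, length 4
sq = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[abs(a - c) + abs(b - d) for (c, d) in sq] for (a, b) in sq]
print(tsp_brute_force(dist))  # 4
```

Of course this scales as (n-1)!, so it says nothing about large instances; the point is only that 8 cities is far below the scale where any model can compete with a laptop.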
<br />
Are these models interesting even if they don't end up getting us fast algorithms? They can be:<br />
<br />
1) Do they lead to mathematics of interest? (Quantum- Yes, Parallelism- Yes)<br />
<br />
2) Did they inspire algorithms for classical computers? (Quantum- Yes)<br />
<br />
3) Do they give insight into other fields? (Quantum for Physics yes, DNA-computing for bio-??)<br />
<br />
4) Have they ACTUALLY sped up computations in meaningful ways for problems we care about? (Parallelism has.)<br />
<br />
If you know of any result which I missed<br />
<br />
(e.g.,<br />
<br />
Amoeba-computing giving insight into evolution,<br />
<br />
Autistic computing being used by the NSA to find primes,<br />
<br />
DNA computing leading to interesting mathematics)<br />
<br />
then leave polite comments!<br />
<br />
<br />
<br />
<br />
<br />
<br />https://blog.computationalcomplexity.org/2019/06/are-you-smarter-than-5th-grade-amoeba.htmlnoreply@blogger.com (GASARCH)4tag:blogger.com,1999:blog-3722233.post-8718880846893877693Thu, 20 Jun 2019 13:18:00 +00002019-06-20T09:18:07.629-04:00The Urban/Rural Collegiality Divide<div>
Just a reminder that Grigory Yaroslavtsev has <a href="http://grigory.us/blog/theory-jobs-2019/">taken over</a> the <a href="https://docs.google.com/spreadsheets/d/1Oegc0quwv2PqoR_pzZlUIrPw4rFsZ4FKoKkUvmLBTHM/edit?usp=sharing">Theory Jobs Spreadsheet</a>. Check out who is moving where and let everyone know where you will continue your research career.</div>
<div>
<br /></div>
In 1999 when I considered leaving the University of Chicago for NEC Research I had a conversation with Bob Zimmer, then VP of Research and current U of C President. Zimmer said it was a shame I didn't live in Hyde Park, the Chicago South Side neighborhood where the university resides, and thus not fully connected with the school. At the time I didn't fully understand his point and did leave for NEC, only to return in 2003 and leave again in 2008. I never did live in Hyde Park.<br />
<div>
<br /></div>
<div>
Recently I served on a review committee for a computer science department in a rural college town. You couldn't help but notice the great collegiality among the faculty. Someone told me their theory that you generally get more faculty collegiality in rural versus urban locations. Why?</div>
<div>
<br /></div>
<div>
In urban locations faculty tend to live further from campus, to get bigger homes and better schools, and face more traffic. They are likely to have more connections with people unconnected with the university. There are more consulting opportunities in large cities, a larger variety of coffee houses to hang out in and better connected airports make leaving town easier. Particularly in computer science, where you can do most of your research remotely, faculty will find themselves less likely to come in every day and you lose those constant informal connections with the rest of the faculty. </div>
<div>
<br /></div>
<div>
This is a recent phenomenon, even going back to when I was a young faculty you needed to come to the office for access to research papers, better computers to write papers, good printers. Interactions with students and colleagues is always better in person but in the past the methods of electronic communication proved more limiting.</div>
<div>
<br /></div>
<div>
The University of Chicago helped create and promote its own neighborhood and ran a very strong private school with reduced tuition for faculty children. Maybe my life would have been different had I immersed myself in that lifestyle. </div>
<div>
<br /></div>
<div>
Or maybe we should go the other extreme. If we can find great ways to do on-line meetings and teaching, why do we need the physical university at all?</div>
https://blog.computationalcomplexity.org/2019/06/the-urbanrural-collegiality-divide.htmlnoreply@blogger.com (Lance Fortnow)5tag:blogger.com,1999:blog-3722233.post-2592965474602859939Tue, 18 Jun 2019 00:59:00 +00002019-06-17T20:59:41.365-04:00Why does the Nevanlinna Prize (now Abacus) go to Algorithms/Complexity peopleIn my post about the Nevanlinna prize name change (see <a href="https://blog.computationalcomplexity.org/2019/06/imus-non-controversial-changing-name-of.html">here</a>) one of my readers raised a different question about the prize:<br />
<br />
BEGIN QUOTE<br />
<br />
<div style="text-align: start;">
<br /></div>
<span style="background-color: #ccff99; color: #191919; font-family: "georgia" , "utopia" , "palatino linotype" , "palatino" , serif; font-size: 14px; text-align: justify;">So there's one of my main questions about the prize answered (or at least resolved). The second remains. The IMU's website(which still refers to the Nevanlinna Prize) says that it is awarded "for outstanding contributions in Mathematical Aspects of Information Sciences including:"</span><br />
<br style="background-color: #ccff99; color: #191919; font-family: Georgia, Utopia, "Palatino Linotype", Palatino, serif; font-size: 14px; text-align: justify;" />
<span style="background-color: #ccff99; color: #191919; font-family: "georgia" , "utopia" , "palatino linotype" , "palatino" , serif; font-size: 14px; text-align: justify;">1)All mathematical aspects of computer science, including complexity theory, logic of programming languages, analysis of algorithms, cryptography, computer vision, pattern recognition, information processing and modelling of intelligence.</span><br />
<br style="background-color: #ccff99; color: #191919; font-family: Georgia, Utopia, "Palatino Linotype", Palatino, serif; font-size: 14px; text-align: justify;" />
<span style="background-color: #ccff99; color: #191919; font-family: "georgia" , "utopia" , "palatino linotype" , "palatino" , serif; font-size: 14px; text-align: justify;">2)Scientific computing and numerical analysis. Computational aspects of optimization and control theory. Computer algebra.</span><br />
<br style="background-color: #ccff99; color: #191919; font-family: Georgia, Utopia, "Palatino Linotype", Palatino, serif; font-size: 14px; text-align: justify;" />
<span style="background-color: #ccff99; color: #191919; font-family: "georgia" , "utopia" , "palatino linotype" , "palatino" , serif; font-size: 14px; text-align: justify;">Correct me if I'm wrong, but it seems that the recognized work of the ten winners of the award all fits into two or three of the possible research areas for which the prize may be rewarded. Why do people think that this is the case?</span><br />
<br />
<br />
<div style="text-align: justify;">
<span style="color: #191919; font-family: "georgia" , "utopia" , "palatino linotype" , "palatino" , serif;"><span style="background-color: #ccff99; font-size: 14px;">END QUOTE</span></span></div>
<div style="text-align: justify;">
<span style="color: #191919; font-family: "georgia" , "utopia" , "palatino linotype" , "palatino" , serif;"><span style="background-color: #ccff99; font-size: 14px;"><br /></span></span></div>
<div style="text-align: justify;">
<span style="color: #191919; font-family: "georgia" , "utopia" , "palatino linotype" , "palatino" , serif;"><span style="background-color: #ccff99; font-size: 14px;">First off, lets see if this is true. Here is a list of all the winners:</span></span></div>
<div style="text-align: justify;">
<span style="color: #191919; font-family: "georgia" , "utopia" , "palatino linotype" , "palatino" , serif;"><span style="background-color: #ccff99; font-size: 14px;"><br /></span></span></div>
<div style="text-align: justify;">
<span style="color: #191919; font-family: "georgia" , "utopia" , "palatino linotype" , "palatino" , serif;"><span style="background-color: #ccff99; font-size: 14px;">Tarjan, Valiant, Razborov, Wigderson, Shor, Sudan, Kleinberg, Spielman, Khot, Daskalakis </span></span><br />
<span style="color: #191919; font-family: "georgia" , "utopia" , "palatino linotype" , "palatino" , serif;"><span style="background-color: #ccff99; font-size: 14px;"><br /></span></span>
<span style="color: #191919; font-family: "georgia" , "utopia" , "palatino linotype" , "palatino" , serif;"><span style="background-color: #ccff99; font-size: 14px;">Yup, they all seem to be in Algorithms or Complexity.</span></span><br />
<span style="color: #191919; font-family: "georgia" , "utopia" , "palatino linotype" , "palatino" , serif;"><span style="background-color: #ccff99; font-size: 14px;"><br /></span></span>
<span style="color: #191919; font-family: "georgia" , "utopia" , "palatino linotype" , "palatino" , serif;"><span style="background-color: #ccff99; font-size: 14px;">Speculation as to why:</span></span><br />
<span style="color: #191919; font-family: "georgia" , "utopia" , "palatino linotype" , "palatino" , serif;"><span style="background-color: #ccff99; font-size: 14px;"><br /></span></span>
<span style="color: #191919; font-family: "georgia" , "utopia" , "palatino linotype" , "palatino" , serif;"><span style="background-color: #ccff99; font-size: 14px;">1) Algorithms and Complexity have problems with short descriptions that can easily be understood: Tarjan proved Planarity was in O(n) time. Valiant defined Sharp-P and showed the Permanent was Sharp-P complete. Hence it is easy to see what they have done. In many of the fields listed, while people have done great work, it may be harder to tell since the questions are not as clean. If my way to do Vision is better than your way to do Vision, that may be hard to prove. And the proof may need to be non-rigorous.</span></span><br />
<span style="color: #191919; font-family: "georgia" , "utopia" , "palatino linotype" , "palatino" , serif;"><span style="background-color: #ccff99; font-size: 14px;"><br /></span></span>
<span style="color: #191919; font-family: "georgia" , "utopia" , "palatino linotype" , "palatino" , serif;"><span style="background-color: #ccff99; font-size: 14px;">2) If someone does great work in (for example) Logic of Programming Languages, it may not be recognized until she is past 40 years old. </span></span><br />
<span style="color: #191919; font-family: "georgia" , "utopia" , "palatino linotype" , "palatino" , serif;"><span style="background-color: #ccff99; font-size: 14px;"><br /></span></span>
<span style="color: #191919; font-family: "georgia" , "utopia" , "palatino linotype" , "palatino" , serif;"><span style="background-color: #ccff99; font-size: 14px;">3) This one I am less sure of (frankly I'm not that sure of any of these and invite polite commentary): areas that are more practical may take longer to build and get to work.</span></span><br />
<span style="color: #191919; font-family: "georgia" , "utopia" , "palatino linotype" , "palatino" , serif;"><span style="background-color: #ccff99; font-size: 14px;"><br /></span></span>
<span style="color: #191919; font-family: "georgia" , "utopia" , "palatino linotype" , "palatino" , serif;"><span style="background-color: #ccff99; font-size: 14px;">But there is still a problem with this. Numerical Analysis and Cryptography would seem to not have these problems. </span></span><br />
<span style="color: #191919; font-family: "georgia" , "utopia" , "palatino linotype" , "palatino" , serif;"><span style="background-color: #ccff99; font-size: 14px;"><br /></span></span>
<span style="color: #191919; font-family: "georgia" , "utopia" , "palatino linotype" , "palatino" , serif;"><span style="background-color: #ccff99; font-size: 14px;">I invite the reader to speculate</span></span></div>
https://blog.computationalcomplexity.org/2019/06/why-does-nevalina-prize-now-abacus-got.htmlnoreply@blogger.com (GASARCH)4tag:blogger.com,1999:blog-3722233.post-8019500166163846173Wed, 12 Jun 2019 16:27:00 +00002019-06-12T12:27:25.435-04:00Compressing in Moscow<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-Bisdr756vxM/XQDylk3YrYI/AAAAAAABozI/M1FgKQZyX08bscqjoy9wmF8BxtXG86RLACLcBGAs/s1600/Vereshchagin.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="190" data-original-width="190" src="https://1.bp.blogspot.com/-Bisdr756vxM/XQDylk3YrYI/AAAAAAABozI/M1FgKQZyX08bscqjoy9wmF8BxtXG86RLACLcBGAs/s1600/Vereshchagin.jpg" /></a><a href="https://1.bp.blogspot.com/-G6yjxH8R1pU/XQDylvxG2aI/AAAAAAABozE/P54lQPWWEa46pTgyDEi6QWQOsaWLxP5zwCLcBGAs/s1600/Shen.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="190" data-original-width="190" src="https://1.bp.blogspot.com/-G6yjxH8R1pU/XQDylvxG2aI/AAAAAAABozE/P54lQPWWEa46pTgyDEi6QWQOsaWLxP5zwCLcBGAs/s1600/Shen.jpg" /></a></div>
<br />
This week finds me in Moscow for a pair of workshops, the <a href="https://mipt.ru/education/chairs/dm/conferences/workshop-june-9-11-moscow-2019.php">Russian Workshop on Complexity and Model Theory</a> and a workshop on <a href="https://www.poncelet.ru/conference/ric">Randomness, Information and Complexity</a>. The latter celebrates the lives of Alexander Shen and Nikolay Vereshchagin on their 60th birthdays.<br />
<br />
Alexander Shen might be best known in computational complexity for his <a href="https://doi.org/10.1145/146585.146613">alternate proof</a> of IP = PSPACE. In 1989, Lund, Fortnow, Karloff and Nisan gave an interactive proof for the permanent, which captured the entire polynomial-time hierarchy by Toda's theorem. But we didn't know how to push the protocol to PSPACE: we had a problem keeping the degrees of the polynomials low. Shamir had the first proof by looking at a specific protocol for PSPACE. Shen had the brilliant but simple idea of using a degree-reducing operator, taking the polynomial modulo x<sup>2</sup>-x. The three papers appeared <a href="https://dl.acm.org/citation.cfm?id=146585#prox">back-to-back-to-back</a> in JACM.<br />
<br />
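To see why this degree-reducing trick is sound, here is a toy sketch of my own (not code from any of the three papers): since x<sup>2</sup>-x vanishes at x = 0 and x = 1, reducing a univariate polynomial modulo x<sup>2</sup>-x collapses every power x<sup>k</sup> with k &gt;= 1 down to x, cutting the degree to at most 1 while preserving the polynomial's values on Boolean inputs.

```python
def reduce_mod_x2_minus_x(coeffs):
    """Reduce a polynomial modulo x^2 - x.

    coeffs[k] is the coefficient of x^k; since x^k == x (mod x^2 - x)
    for every k >= 1, the result has degree at most 1: [c0, c1].
    """
    c0 = coeffs[0] if coeffs else 0
    c1 = sum(coeffs[1:])
    return [c0, c1]

def evaluate(coeffs, x):
    """Evaluate the polynomial with the given coefficients at x."""
    return sum(c * x**k for k, c in enumerate(coeffs))

p = [3, -1, 4, 0, 2]          # 3 - x + 4x^2 + 2x^4
q = reduce_mod_x2_minus_x(p)  # degree <= 1
# The reduction agrees with the original at the Boolean points 0 and 1.
assert evaluate(p, 0) == evaluate(q, 0)
assert evaluate(p, 1) == evaluate(q, 1)
```

This is only the one-variable arithmetic behind the operator, not the interactive protocol itself, which applies such reductions variable by variable to keep degrees low throughout.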
Shen and Vereshchagin, though, made their careers with their extensive work on Kolmogorov complexity and entropy, often together. Vereshchagin and I have co-authored some papers during our mutual trips to Amsterdam, including on <a href="http://doi.org/10.1007/11672142_10">Kolmogorov Complexity with Errors</a> and how to <a href="http://doi.org/10.1007/b106485">increase Kolmogorov Complexity</a>. My <a href="https://doi.org/10.1006/jcss.1999.1677">favorite work</a> of Shen and Vereshchagin, which they did with Daniel Hammer and Andrei Romashchenko, showed that every linear inequality that holds for entropy also holds for Kolmogorov complexity and vice versa, the best argument that the two notions of information, one based on distributions, the other based on strings, share strong connections.<br />
<br />
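As an illustration of the kind of linear inequality the theorem covers (my own example, not taken from their paper): subadditivity, H(X,Y) &le; H(X) + H(Y), holds for entropy, so by their result the analogous inequality K(x,y) &le; K(x) + K(y) + O(log n) holds for Kolmogorov complexity. Here is a quick numerical check of the entropy side for an arbitrarily chosen joint distribution:

```python
import math

# Joint distribution of (X, Y) as a dict {(x, y): probability}.
# Any distribution works; this one is chosen arbitrarily for illustration.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

def entropy(dist):
    """Shannon entropy in bits of a distribution {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def marginal(joint, index):
    """Marginal distribution of the index-th coordinate of the outcomes."""
    m = {}
    for outcome, p in joint.items():
        m[outcome[index]] = m.get(outcome[index], 0) + p
    return m

h_xy = entropy(joint)
h_x = entropy(marginal(joint, 0))
h_y = entropy(marginal(joint, 1))
assert h_xy <= h_x + h_y + 1e-12  # subadditivity: H(X,Y) <= H(X) + H(Y)
```

The remarkable content of the theorem is that checks like this one, valid for all distributions, transfer exactly to the string-based world of Kolmogorov complexity (up to logarithmic terms), and conversely.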
Today is <a href="https://en.wikipedia.org/wiki/Russia_Day">Russia Day</a> that celebrates the reestablishment of Russia out of the Soviet Union in 1990. Just like how the British celebrate their succession from the US in 1776 I guess. But I'm celebrating Russia day by honoring these two great Russians. Congrats Sasha and Kolya!https://blog.computationalcomplexity.org/2019/06/compressing-in-moscow.htmlnoreply@blogger.com (Lance Fortnow)0tag:blogger.com,1999:blog-3722233.post-8872438004336745499Sat, 08 Jun 2019 18:35:00 +00002019-06-08T14:35:45.755-04:00Ray Miller, one of our founders, Passes away at the age of 90Ray Miller, one of the founders of our field, passed away recently at the age of 90.<br />
<br />
He had associations with both Georgia Tech and the University of Maryland, so both Lance and I have a connection to him, as does Dick Lipton, who has posted about him <a href="https://rjlipton.wordpress.com/2019/06/08/raymond-edward-miller-just-passed-away/">here</a>.<br />
<br />
I present two guest blog posts about him.<br />
<br />
<br />
<b>Post One</b>: From<br />
<br />
<a href="https://www.cs.umd.edu/people/lin">Ming C Lin</a><br />
<br />
Elizabeth Stevinson Chair of Computer Science<br />
<br />
University of Maryland at College Park<br />
<br />
<br />
Dear CS Alumni and Friends,<br />
<br />
We are deeply saddened to learn that Professor Emeritus Ray Miller passed away two nights ago around 9 pm.<br />
<br />
A Memorial Service at St. Andrews Lutheran Church (15300 New Hampshire Ave., Silver Spring MD 20905) for Dr. Miller will be held on Saturday, June 15th at 10:30 am.<br />
<br />
Dr. Ray Miller received his Ph.D. from the University of Illinois in 1957. He was a professor and the former Director of the School of Information and Computer Science at the Georgia Institute of Technology before joining our department in 1988 as the Director of the Center of Excellence in Space Data and Information Science (CESDIS). Dr. Miller was well known for his research on communication protocols, networks, distributed systems, parallel computation, and theory.<br />
<br />
In 1970, he became a Fellow of the IEEE for the advancement of the theoretical understanding of computation through work in switching theory and theoretical models.<br />
<br />
In 1997, he was elevated to Fellow of the ACM for research contributions to the theory of parallel computation and for his distinguished service to the Computer Science community as an educator and leader.<br />
<br />
In 2003, Dr. Miller was designated as a Fellow by the Computer Science Accreditation Board<br />
<br />
<i>"in recognition of his outstanding professional volunteer contributions to computing sciences </i><i>and accreditation”.</i><br />
<br />
Dr. Miller was also an AAAS Fellow, and a Charter Member of the IEEE Computer Society Golden Core; he received the IEEE Third Millennium Medal in 2000 and the ACM Distinguished Service Award in 2002.<br />
<br />
Beyond his outstanding research contribution and devotion to education, Dr. Ray Miller has been known for his kindness as a colleague, supportiveness as a mentor, and effectiveness as a leader. Dr. Miller will be forever remembered warmly by his friends, colleagues and students for his dedication and service to our department, the University, and the field of computing at large.<br />
<br />
<br />
<b>Post Two</b>: From<br />
<br />
<a href="http://www.cs.umd.edu/users/ben/">Ben Shneiderman</a><br />
<br />
Emeritus Professor, University of Maryland at College Park.<br />
<br />
I was saddened to hear about the death of Ray Miller at age 90. He was a dear colleague who contributed a great deal to computer science and to our department. You can read his 84-page personal memoir at the IEEE Computer Society History Committee website: <a href="https://history.computer.org/pubs/ray-miller.pdf">here</a>.<br />
<br />
His memoir tells his story in detail, describing his research collaborations in computational complexity, parallel algorithms, and program optimization and his leadership roles. You can see more about Ray’s work on his ACM author page: <a href="https://dl.acm.org/author_page.cfm?id=81332515760">here</a>.<br />
<br />
<div>
This is the best source as he had no Google Scholar page or Wikipedia article that I could find. Ray’s quiet and modest style was a valuable asset, but his contributions come through in his memoir. He describes working with Turing Awardees John Cocke, Fran Allen, John Backus, Dick Karp, and other key figures, so maybe Ray should have received that award too. Ray was also an excellent administrator and leader, who contributed to building the institutions (conferences, ACM, IEEE, etc.) that supported the growth of computer science.</div>
<div>
<div>
<br /></div>
<div>
Ray was especially kind to me in the early 1970s, when I was working on my Ph.D., developing a graph theoretic model of data structures. Ray was then Assistant Director of the IBM Mathematical Science Department at the T. J. Watson Labs in Yorktown Heights, NY. This legendary IBM Research Lab was the equal of Bell Labs and filled with computing pioneers in hardware, software, and applications.<br />
<br /></div>
<div>
Ray invited me to give a talk about my work, drawing interest from Arnold Rosenberg, who had been developing related ideas. With Ray’s support I returned for monthly visits with Arnie and Ray to refine my esoteric ideas, leading to my May 1973 Ph.D.</div>
<div>
<br /></div>
<div>
Ray’s kindness as a colleague and supportiveness as a mentor will always be remembered warmly. Here are a few photos of Ray giving a talk in the CS Department, probably in 1985: <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/ray1.jpg">here,</a> <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/ray2.jpg">here</a>, and</div>
</div>
<div>
<a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/ray3.jpg">here</a></div>
https://blog.computationalcomplexity.org/2019/06/ray-miller-one-of-our-founders-passes.htmlnoreply@blogger.com (GASARCH)0tag:blogger.com,1999:blog-3722233.post-1381501092894849452Thu, 06 Jun 2019 14:46:00 +00002019-06-06T10:46:29.214-04:00What Happened to the Surprising Theorems?Twenty-five years ago Peter Shor presented a polynomial-time factoring algorithm for quantum computers. For Peter, it was a simple translation of a <a href="https://en.wikipedia.org/wiki/Simon%27s_problem">quantum algorithm</a> due to Dan Simon. For the rest of us, it was a shock: while we knew quantum computers could solve some seemingly artificial problems exponentially faster, no one expected a natural problem like factoring to fall so quickly. I remember remarking at the time that Shor bought quantum computing twenty years; now I would say fifty.<br />
<div>
<br /></div>
<div>
That may have been the last time I was truly shocked by a theorem in theoretical computer science. I've been shocked by proofs: that Primes are in P, Undirected Connectivity in log space, NEXP not in ACC<sup>0</sup>, Graph Isomorphism in quasi-polynomial time. But the theorems themselves all went in the directions we expected.<br />
<br />
In the ten years before Shor we had plenty of surprises: interactive proofs, zero-knowledge proofs, probabilistically checkable proofs, nondeterministic space closed under complementation, hardness versus randomness, the permanent hard for the polynomial-time hierarchy. It seemed to come to a hard stop after Shor.<br />
<br />
There have been some mild surprises: the Hadamard matrix isn't rigid, holographic algorithms, the complexity of Nash equilibrium, QIP = PSPACE, and many others. But nothing that has made us rethink the complexity world.<br />
<br />
This reflects the maturity of our field. How many shocking theorems have we seen recently in math in general? We're shocked by proofs of the Poincaré conjecture and Fermat's last theorem but both went in the expected direction.<br />
<br />
We will have some shocking theorem in the future, maybe Factoring in P or L = NL. To be truly shocked it would have to be something I can't even imagine being true today.</div>
https://blog.computationalcomplexity.org/2019/06/what-happened-to-surprising-theorems.htmlnoreply@blogger.com (Lance Fortnow)13tag:blogger.com,1999:blog-3722233.post-3116995407343145604Tue, 04 Jun 2019 15:53:00 +00002019-06-04T11:53:51.084-04:00IMU's non-controversial changing the name of the Nevanlinna Prize(I want to thank Alexander Soifer for supplying me with some of the documents I point to in this post. We should all thank him for getting the ball rolling on changing the name of the Nevanlinna Prize.)<br />
<br />
The <i>Nevanlinna Prize </i>was essentially a Fields Medal for Theoretical Computer Science. I do not know why it is a<i> Prize </i>instead of a <i>Medal.</i><br />
<div>
<br /></div>
<div>
It has been renamed <i>The Abacus Medal. </i>If you want to know why the IMU (International Mathematics Union) thinks the new name is good <i>but do not </i><i>care even a little about why the original name was bad</i> then see this article: <a href="https://www.heidelberg-laureate-forum.org/blog/imu-abacus-medal/">here</a>.</div>
<div>
<br /></div>
<div>
So why is <i>The Nevanlinna Prize</i> a bad name? In brief, Rolf Nevanlinna was an enthusiastic Nazi sympathizer. How enthused? He served as the chair of the Finnish SS recruitment committee.<br />
<br />
That would seem like enough to get the name changed. In fact, it makes one wonder why the prize originally had the name.<br />
<br />
1) Why the change now? It began when Alexander Soifer came across this information about Nevanlinna while working on his book<br />
<br />
<i>The Scholar and the State: In Search of Van der Waerden</i> (see <a href="https://amzn.to/2WnfDYh">here</a> to buy it, see <a href="https://mathcs.clarku.edu/~fgreen/SIGACTReviews/bookrev/47-1.pdf">here</a> for a book review column that includes my review of it).<br />
<br />
He then wrote a letter to the IMU which sponsors the <i>Nevanlinna Prize</i>. The letter is <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/letterToImu.pdf">here</a>. Note that Alexander offered to pay for the prize ($15,000 every four years) if that will help get the name changed.<br />
<br />
After a response that lamely said (I paraphrase): <i>Gee, we didn't know. Oh well</i>, Alex wrote another letter, which is <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/letterToImu2.pdf">here</a>.<br />
<br />
The story has a happy ending: the name was changed. (No, Alexander is not paying for the award.)<br />
<br />
2) For a full summary of why the award was originally named Nevanlinna and why it was changed, see the article <i>Yes We Can</i>, by Alexander Soifer, in an issue of the journal <i>Mathematical Competitions</i>: <a href="https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/yeswecan.pdf">here</a>.</div>
<div>
<br /></div>
<div>
3) When is change possible?<br />
<br /></div>
<div>
Assume Y did X and X is awful (for most of my readers, I assume, believing and spreading Nazi propaganda qualifies). Assume there is a Y-prize. What does it take to have the name changed?<br />
<br /></div>
<div>
<br /></div>
<div>
a) You need someone pushing hard for it. Kudos to Alexander Soifer who started this.</div>
<div>
<br /></div>
<div>
b) There is no really good reason to use that name in the first place. </div>
<div>
<br /></div>
<div>
What was Nevanlinna's contribution to mathematical aspects of computer science? The IMU (International Mathematics Union) internet page answers:</div>
<div>
<br /></div>
<div>
<i>The prize was named in honor of Rolf Nevanlinna ... who in the 1950's had taken the initiative to the computer organization at Finnish Universities. </i></div>
<div>
<i><br /></i></div>
<div>
That's all. If there were a Gauss Prize (actually there IS a Gauss Prize) and we later found out that Gauss was X, I doubt we would change the name of the award. Gauss's name is on it since he was a great mathematician. </div>
<div>
<br /></div>
<div>
c) The person on the award is not the one giving the money. If we found out that Nobel was an X, I doubt the name would change since he paid for it. </div>
<div>
<br /></div>
<div>
d) If the award name is well known then it might not change. Nobel is a good example. I think the Nevanlinna Prize is mostly unknown to the public. The Fields Medal is better known, though still not that well known. The general public became briefly aware of the Fields Medal twice: when it was mentioned in the movie <i>Good Will Hunting</i>, and when Perelman turned it down. Fame is fleeting for both prizes and people.</div>
<div>
<br /></div>
<div>
e) Organizations don't like to change things. Hence X would need to be particularly bad to warrant a name change. </div>
<div>
<br /></div>
<div>
OTHER THOUGHTS</div>
<div>
<br /></div>
<div>
1) Why <i>The Abacus Medal</i>? Perhaps they are worried that if they name it after someone and that someone turns out to be an X they'll have to change it again. I find the explanation given <a href="https://www.heidelberg-laureate-forum.org/blog/imu-abacus-medal/">here</a> to be unsatisfying. I find the fact that they make <b>NO MENTION</b> of why they are no longer naming it <i>The</i> <i>Nevanlinna prize </i>appalling and insulting.</div>
<div>
<br /></div>
<div>
2) Let's turn to people who get the awards. If someone solved two Millennium problems and clearly deserved a Fields Medal, but was an X, should they be denied the prize on that basis? I would tend to think no (that is, they should get the prize), but it does trouble me. What would happen? I honestly don't know. </div>
<div>
<br /></div>
<div>
3) X will change over time.</div>
<div>
<br /></div>
https://blog.computationalcomplexity.org/2019/06/imus-non-controversial-changing-name-of.htmlnoreply@blogger.com (GASARCH)9tag:blogger.com,1999:blog-3722233.post-1040381893569171546Wed, 29 May 2019 20:09:00 +00002019-05-29T16:09:33.165-04:00NSF PanelsThe government shut down in January led to delays at the National Science Foundation and only recently announcing decisions on grants submitted last fall. For those who successfully received awards, congratulations! For those who didn't, don't take it personally, buckle up and try again.<br />
<br />
For those who don't know how the process works, for each grant program the program directors organize one or more panels, which typically meet in person at NSF headquarters in Alexandria, Virginia. A typical panel has about a dozen panelists and twenty or so proposals. Before the panel meets, each proposal gets at least three reviews from panelists. Discussions ensue over a day or two, proposals get sorted into categories: Highly Competitive, Competitive, Low Competitive and Not Competitive, and then rank ordered in the top categories.<br />
<br />
There are tight rules for Conflict-of-Interest, and those who are conflicted have to leave the room during the discussions of those proposals.<br />
<br />
If you do get asked to serve on a panel, you should definitely do so. You get to see how the process works and help influence funding and research directions in your field. You can't reveal when you serve on a particular panel but you can say "Served on NSF Panels" on your CV.<br />
<br />
Panels tend to take proposals that will likely make progress and shy away from riskier ones. Funding risky proposals is specifically mentioned to the panel, but when push comes to shove and there is less funding than worthy proposals, panelists gravitate towards proposals that don't take chances.<br />
<br />
Panels are not unlike conference program committees. It didn't always work this way; it used to be more like journal publications. I remember when the program director would send out proposals for outside reviews and then make funding decisions. That gave the program director more discretion to fund a wider variety of proposals.<br />
<br />
The NSF budget for computing goes up slowly while the number of academic computer scientists grows at a much larger clip. Until this changes, we'll have more and more worthy proposals unfunded, particularly proposals of bold risky projects. That's the saddest part of all.https://blog.computationalcomplexity.org/2019/05/nsf-panels.htmlnoreply@blogger.com (Lance Fortnow)0tag:blogger.com,1999:blog-3722233.post-3247584017741087776Mon, 27 May 2019 15:12:00 +00002019-05-27T17:53:18.254-04:00separating fact from fiction with the 56% of Americans say Arabic Numerals should not be taught in school<br />
On the excellent TV show Veep there was a subplot about a political candidate (who himself had failed algebra in HS) objecting to Algebra since it was invented by the Muslims. I don't recall the exact line, but he said something like `Math teachers are terrorists.'
This was, of course, fiction.<br />
<br />
The same week I read that 56% of survey respondents say <i><u>`Arabic Numerals' shouldn't be taught in schools</u></i>. Obviously also a fiction. Perhaps a headline from <i>The Onion</i>.<br />
<br />
No. The story is true.<br />
<br />
See the Snopes entry on this: <a href="https://www.snopes.com/fact-check/teaching-arabic-numerals/">here</a>.<br />
<br />
but also see many FALSE but FUNNY websites:<br />
<br />
Sarah Palin wants Arabic Numerals out of the schools: <a href="http://nationalreport.net/sarah-palin-wants-arabic-numerals-banned-americas-schools/">here</a>. Funny but false.<br />
<br />
Jerry Brown is forcing students in California to learn Arabic Numerals as part of multiculturalism. False but funny: <a href="https://me.me/i/sharia-law-must-be-stopped-under-gov-brown-students-in-20990368">here</a><br />
<br />
A website urging us to use Roman Numerals (which Jesus used!). False but funny: <a href="http://freedomnumerals.com/">here</a><br />
<br />
OKAY, what to make of the truth that really, really, 56% of Americans are against Arabic Numerals?<br />
<br />
1) Bigotry combined with ignorance.<br />
<br />
2) Some of the articles I read about this say it's a problem with polls and people. There may be some of that, but it still worries me.<br />
<br />
3) In Nazi Germany (WOW- Godwin's law popped up rather early!) they stopped teaching relativity because Albert Einstein was Jewish (the story is more complicated than that, see <a href="https://www.scientificamerican.com/article/how-2-pro-nazi-nobelists-attacked-einstein-s-jewish-science-excerpt1/">here</a>). That could of course never happen in America now (or could it, see <a href="https://www.tabletmag.com/jewish-news-and-politics/50097/time-warp">here</a> and <a href="https://www.conservapedia.com/index.php?title=Counterexamples_to_Relativity">here</a>).<br />
<br />
4) There is no danger that we will dump Arabic Numerals. I wonder if we will change their name to Freedom Numerals.<br />
<br />
5) Ignorance of science is a more immediate problem with the anti-vax people. See <a href="https://www.thedailybeast.com/measles-outbreak-grows-with-60-new-cases-across-26-states?ref=home">here</a><br />
<br />
<br />https://blog.computationalcomplexity.org/2019/05/separating-fact-from-fiction-with-56-of.htmlnoreply@blogger.com (GASARCH)4