Monday, November 12, 2018

And the winner is again, Harambe: A pre-election poll of my class that was truly a referendum on the Prez

I had meant to post this before the election but I didn't quite time it right. Oh well.

It has been said that this midterm election (more than others) was a referendum on the prez.  Every prez election year I have my students (or someone's students) vote (Secret Ballot of course) for president (see 2016, 2012, 2008). This year, for the first time, I had a midterm election, though rather than ask them which Maryland people they would vote for, I made it truly a referendum on Trump by asking two questions:

1) Who did you vote for in 2016?

2) Knowing what you know now, who would you have voted for in 2016?

Full Disclosure: I am in the Clinton/Clinton Camp. However, this is an information-post not an opinion-post, so my vote is not relevant nor was it counted.

I give the full results at the end of the post; however, I will summarize the most interesting data: the change-mind people and my thoughts.

4 voted Trump and now wish they voted differently: Harambe, Clinton, Nobody, Gasarch

Only 12 people had voted for Trump in 2016 and of those 4
regret it. While I can see wanting Clinton, Nobody, or Gasarch,
I'm surprised someone wanted Harambe. Is he even a citizen?

5 voted Clinton and now wish they voted differently: 2-Johnson, Trump, Kanye, Gasarch.


Since Clinton hasn't done anything to merit rejection since the election,
I deduce these people are more going TOWARDS the one they picked rather
than AWAY from her. Trump has been Prez and has done stuff, so Clinton/Trump
makes sense -- the student's opinion is that Trump is doing better
than expected. Gary Johnson hasn't done anything to merit a change towards
him, so that's a puzzler. Kanye, similar. As for Gasarch, the reasoning
`he's doing a good job teaching crypto, let's make him president' doesn't
really work, since he's not doing THAT good a job. If it was Ramsey theory
then I could see it.

Misc:
1 Underwood(House of Cards)/Satan (For those tired of picking the LESSER of two evils)
1 Stein/Clinton
1 Johnson/Clinton
1 Kruskal/Gasarch (some people can't tell them apart)

The two who went to Clinton I interpret as thinking Trump is worse
than expected.

ALSO: Clinton had 28 Clinton/Clinton. Hence Trump had the largest percentage of people who regret voting for him, but still only 1/3. And the numbers are too small to make much of them.

MY OPINION: FORTNOW in 2020!

------------------------------------------------------

61 students did the poll.

-----------------------------
Stayed the same:

28 Clinton/Clinton
8 Trump/Trump
4 Stein/Stein
3 Johnson/Johnson
1 Kasich/Kasich
1 Sanders/Sanders
1 Jerry White/Jerry White (Socialist Party)
1 Bofa/Bofa (this is a joke)
1 Thomas Adam Kirkman/Thomas Adam Kirkman (prez on TV show Designated Survivor)

48 do not regret their vote.

-------------------------
Trump/XXX

1 Trump/Harambe (Harambe is the gorilla who was shot at a zoo.)
1 Trump/Clinton
1 Trump/Nobody
1 Trump/Gasarch (Gasarch would prove the problems of government are unsolvable rather than solve them)

FOUR people voted Trump and now regret it.

------------
Clinton/XXX

2 Clinton/Johnson
1 Clinton/Trump
1 Clinton/Kanye
1 Clinton/Gasarch


FIVE people voted Clinton and now regret it.

-------------
MISC changed

1 Underwood(House of Cards)/Satan (For those tired of picking the LESSER of two evils)
1 Stein/Clinton
1 Johnson/Clinton
1 Kruskal/Gasarch (some people can't tell them apart)

TWO people who voted third party now would have voted Clinton.
I interpret this as not-liking-Trump since I don't think Clinton
has done anything since the election to make anyone think better
or worse of her, whereas Trump as president has done enough
to change people's opinions of him.

Thursday, November 08, 2018

The Data Transformation of Universities

With all the news about elections, caravans, shootings and attorneys general, maybe you missed the various stories about real transformations at our top universities.

On October 15 MIT announced the Stephen A. Schwarzman College of Computing. Schwarzman donated $350 million to the effort as part of an expected billion-dollar commitment that will pay for a new building and fifty new faculty.
“As computing reshapes our world, MIT intends to help make sure it does so for the good of all,” says MIT President L. Rafael Reif. “In keeping with the scope of this challenge, we are reshaping MIT. The MIT Schwarzman College of Computing will constitute both a global center for computing research and education, and an intellectual foundry for powerful new AI tools. Just as important, the College will equip students and researchers in any discipline to use computing and AI to advance their disciplines and vice-versa, as well as to think critically about the human impact of their work.”
Two weeks later the University of California at Berkeley announced a Division of Data Science to be led by an associate provost reporting directly to the provost (like a dean).
“Berkeley’s powerful research engine, coupled with its deep commitment to equity and diversity, creates a strong bedrock from which to build the future foundations of this fast-changing field while ensuring that its applications and impacts serve to benefit society as a whole,” said Paul Alivisatos, executive vice chancellor and provost. “The division’s broad scope and its facilitation of new cross-disciplinary work will uniquely position Berkeley to lead in our data-rich age. It will simultaneously allow a new discipline to emerge and strengthen existing ones.”
Sorry Berkeley, you are anything but unique. Every major research university is trying to build up expertise in computing and data science given the demands of students, industry and researchers across nearly all academic disciplines who need help and guidance in collecting, managing, analyzing and interpreting data.

Here at Georgia Tech, where we've had a College of Computing since 1990, we recently started an Interdisciplinary Research Institute in Data Science and Engineering and an Interdisciplinary Research Center in Machine Learning, both to be housed in a twenty-one story CODA building that will open next year in Midtown Atlanta (sounds impressive when I write it down).

I could go on for pages on how other universities are rethinking and transforming themselves. Earlier this year Columbia (which hired Jeannette Wing to run its data science institute) held a summit of academic data science leadership. The report shows we have much to do.

The real secret is that none of us has really figured out how to meet the escalating needs for computing and data and how to integrate them across campus. We are aiming at a moving target as we sit just at the beginning of the data revolution that will reshape society. The future looks rocky, scary and fun.

Monday, November 05, 2018

Are Fuzzy Sets Common Knowledge? How about Napier as inventing (or something) logs?

Darling: Bill, help me with this crossword puzzle. 6 letter word that begins with N, clue is log man

Bill: Napier

Darling: Who was that?

Bill: A famous lumberjack

Darling: Give me a serious answer

Bill: Okay. He is said to have invented logarithms. Like most things in math it's not quite clear who gets the credit, but he was involved with logarithms early on.

Darling: I said give me a serious answer. That is rather obscure and would not be in a crossword puzzle.

While darling is wrong about the answer being wrong she is right about it being obscure. How would your typical crossword puzzler know the answer? Perhaps Napier appears in a lot of crosswords since it has lots of vowels, so puzzlers just word-associate `log man' to `Napier'. But in any case this does seem odd for a crossword puzzle.

On the quiz show Jeopardy, in round two, the following was an $800 question under philosophy (the questions are worth 400, 800, 1200, 1600, 2000, so 800 is supposed to be easy):

In 1965 Lotfi Zadeh introduced this type of set with no clear boundaries leading to the same type of ``logic''

1) I suspect that all my readers know that the correct response is `Fuzzy' (or formally `What is Fuzzy')

2) Why is this in Philosophy? There IS an entry on it in the Stanford Enc of Phil (see here). Some philosophers work on Logic, but do they work on Fuzzy Logic? The Wikipedia page for Fuzzy Logic (see here) has only one mention of philosophy, and it's to the Stanford Enc of Phil entry.

3) (more my point) Is Fuzzy Logic so well known as to be an easy question on Jeop? See next point

4) Can someone get this one right WITHOUT ever having heard of Fuzzy Logic? I suspect yes and, indeed, the contestant DID get it right and I think she had never heard of Fuzzy Logic. While I can't be sure, one tell is that when a contestant says `what is fuzzy logic' and it actually sounds like a question, they are partially guessing.

Anyway, I am surprised that this obscure bit of math made it into an easy question on Jeop. But since the contestant got it right, it was appropriate.

Thursday, November 01, 2018

P = NP Need Not Give a SAT Algorithm

In Bill's post earlier this week, he asks if there is a fixed algorithm such that if P = NP, this algorithm will correctly compute satisfiability on all inputs. While I believe this statement is true because P ≠ NP, I would like to argue we won't prove it anytime soon.

Bill links to a TCS Stack Exchange answer by Marzio De Biasi giving a fixed algorithm that would get SAT correct except on a finite number of inputs if P = NP. Here is Marzio's algorithm.

Input: x   (boolean formula)
Find the minimum i such that
  1) |M_i| < log(log(|x|))  [ M_1,M_2,... is a standard fixed TM enumeration] 
  2) and  M_i solves SAT correctly 
       on all formulas |y| < log(log(|x|))
          halting in no more than |y|^|M_i| steps
          [ checkable in polynomial time w.r.t. |x| ]
  if such i exists simulate M_i on input x 
      until it stops and accept/reject according to its output
      or until it reaches 2^|x| steps and in this case reject;
  if such i doesn't exist loop for 2^|x| steps and reject.
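
Here is a minimal runnable sketch of the same idea (a few hand-made candidate procedures stand in for the Turing machine enumeration, and brute force stands in for the time-bounded audit on formulas of length < log(log(|x|))); it illustrates the construction rather than implementing it:

from itertools import product

def brute_force_sat(clauses, nvars):
    # Exhaustive check, used only to audit candidates on tiny formulas.
    # A formula is a list of clauses; a clause is a list of nonzero ints,
    # where literal k means variable k and -k means its negation.
    return any(
        all(any(a[abs(lit) - 1] == (lit > 0) for lit in clause) for clause in clauses)
        for a in product([False, True], repeat=nvars)
    )

# Stand-in for the enumeration M_1, M_2, ... of all Turing machines.
CANDIDATES = [
    lambda clauses, nvars: False,   # a wrong candidate: always says "unsatisfiable"
    brute_force_sat,                # a correct (but slow) candidate
]

def audited_sat(clauses, nvars, small_instances):
    # Find the first candidate that agrees with brute force on all the small
    # instances, then trust its answer on the real input.  In the real
    # construction "small" means length < log(log(|x|)) and the candidate is
    # also time-bounded, which is what keeps the whole procedure polynomial.
    for cand in CANDIDATES:
        if all(cand(c, n) == brute_force_sat(c, n) for c, n in small_instances):
            return cand(clauses, nvars)
    return False  # no candidate passed the audit: reject, as in the pseudocode

The point is the same as in the pseudocode: if P = NP then some correct polynomial-time machine sits somewhere in the enumeration, it eventually passes the audit, and the audit itself is cheap because it only involves tiny formulas.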

This proof relativizes: There is a fixed relativizing Turing machine M such that for all A, if P^A = NP^A then M^A runs in polynomial time and L(M^A) differs from SAT^A on only a finite number of strings. Here SAT^A is a relativizable form of SAT with built-in relations that answer whether a sequence of variables encodes a string in A. SAT^A is NP^A-complete for all A.

The following shows any proof that a fixed algorithm can compute all of SAT correctly if P = NP cannot relativize.

Theorem: For every deterministic Turing machine M, there is an A such that P^A = NP^A and either M^A does not compute SAT^A correctly on all inputs or M^A takes at least exponential time.

Proof:

Define B = {0x | x in TQBF}. TQBF is the PSPACE-complete problem containing all the true quantified Boolean formulas. P^TQBF = PSPACE = NP^TQBF and thus P^B = NP^B.

Let φ_n be the formula that is true if there exists a string z of length n such that 1z is in A. Let n be the smallest n such that M^B(φ_n) takes less than 2^n computational steps. If no such n exists let A = B and we are done.

If M^B(φ_n) accepts we are done by letting A = B since B has no string beginning with 1.

If M^B(φ_n) rejects and uses less than 2^n steps there must be some z such that M^B(φ_n) did not query 1z. Let A = B ∪ {1z}.

M^A(φ_n) still rejects but now φ_n is in SAT^A. Also P^A = NP^A since adding a single string to an oracle won't affect whether the two classes collapse.

Monday, October 29, 2018

If P=NP then we HAVE an alg for SAT.

I am writing up the results of my survey of people's opinions of P vs NP (it will appear in SIGACT News, in Lane's Complexity Column, in 2019). Some people wrote:

                          P=NP but the proof will be nonconstructive and have a large constant.

Large constant could happen.

If by nonconstructive they mean not practical, then yes, that could happen.

The following does not quite show it can't happen but it does give one pause: an old result of Levin's shows that there is a program you could write NOW such that, if P=NP, then this program decides SAT except for a finite number of formulas (all of which are NOT in SAT) and can be proven to work in poly time (I will later give three pointers to proofs). The finite number of formulas is why the people above may still be right. But only a finite number -- that seems like a weak kind of nonconstructiveness.

Since I am teaching crypto I wondered about Factoring. An old result of Gasarch (I proved it this morning -- I am sure it is already known) shows that there is a program you could write NOW such that if Factoring is in P then this program factors a number ALWAYS (none of this finite exception crap) and can be proven to work in poly time. Even so, the algorithm is insane. If someone thought that factoring in P might be nonconstructive, my construction disproves it in such an absurd  way that  the notion that factoring could be in P nonconstructively should be taken seriously but not literally. There should be a way to say formally:

I believe that FACTORING is in P but  any poly-time algorithm is insane (not even looking at the constants) and hence could never be implemented.


Not sure how to define insane.
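
To make the factoring construction concrete, here is a minimal sketch. The candidate list is a hypothetical stand-in for an enumeration of all programs (the real construction dovetails them with increasing time budgets); the key point is that a claimed factor is checked with a single division, so a bad candidate can never cause a wrong answer:

def trial_division(n):
    # Correct but slow candidate; it stands in for the poly-time factoring
    # algorithm that exists if Factoring is in P.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None  # n is prime (or 1)

def bogus(n):
    return 17    # a wrong candidate: usually not a divisor of n

CANDIDATES = [bogus, trial_division]   # stand-in for an enumeration of all programs

def universal_factor(n):
    # Accept a candidate's answer only if it really is a nontrivial factor.
    # In the real construction the candidates are run with increasing time
    # budgets, so if any one of them factors in poly time then so does the
    # whole procedure (up to a constant-factor blowup).
    for cand in CANDIDATES:
        d = cand(n)
        if d is not None and 1 < d < n and n % d == 0:
            return d
    return None  # no verified factor found (n may be prime)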

Three pointers:

Stack Exchange TCS:  here

Wikipedia: here

My slides (also include factoring result): here

Question: Can the SAT result be improved to be an algorithm that is ALWAYS right? Is there a way to show that it can't be (unless, say, P=NP)?

Question: What can be said about Graph Isomorphism in this context? The proof for SAT is easily adapted to this case (all we used about SAT was that it was self-reducible). But can we get our GI algorithm to never make a mistake?
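
For readers who haven't seen it, here is a minimal sketch of the self-reducibility of SAT mentioned above: any procedure that merely decides satisfiability can be used to build a satisfying assignment (and hence to verify a `yes' answer) by fixing variables one at a time. The brute-force decider below is just a stand-in for whatever algorithm is being audited:

from itertools import product

def brute_force_decider(clauses, nvars):
    # Stand-in decision procedure.  Clauses are lists of nonzero ints:
    # literal k is variable k, -k is its negation.
    return any(
        all(any(a[abs(lit) - 1] == (lit > 0) for lit in clause) for clause in clauses)
        for a in product([False, True], repeat=nvars)
    )

def fix_variable(clauses, var, value):
    # Substitute value for variable var: drop satisfied clauses, shrink the rest.
    return [
        [lit for lit in clause if abs(lit) != var]
        for clause in clauses
        if not ((var in clause and value) or (-var in clause and not value))
    ]

def find_assignment(clauses, nvars, decide=brute_force_decider):
    # Self-reduction: turn a SAT decider into a SAT solver, so a claimed
    # `satisfiable' answer can be checked by exhibiting an assignment.
    if not decide(clauses, nvars):
        return None
    assignment = []
    for var in range(1, nvars + 1):
        for value in (True, False):
            reduced = fix_variable(clauses, var, value)
            if decide(reduced, nvars):
                assignment.append(value)
                clauses = reduced
                break
    return assignment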

Thursday, October 25, 2018

Good Results Made Meaningless

Sometimes you see a beautiful theorem A that you love to talk about. Then another beautiful theorem B comes around, making the first one meaningless since B trivially implies A. Not just a mere extension of A but B had a completely different proof of something much stronger. People will forget all about A--why bother when you have B? Too bad because A was such a nice breakthrough in its time.

Let me give two examples.

In STOC 1995 Nisan and Ta-Shma showed that symmetric logspace is closed under complement. Their proof worked quite differently from the 1988 Immerman-Szelepcsényi nondeterministic logspace closed under complement construction. Nisan and Ta-Shma created monotone circuits out of undirected graphs and used these monotone circuits to create sorting networks to count the number of connected components of the graph.

Ten years later Omer Reingold showed that symmetric logspace was the same as deterministic logspace, making the Nisan-Ta-Shma result a trivial corollary. Reingold's proof used walks on expander graphs and the Nisan-Ta-Shma construction was lost to history.

In the late 80's we had several randomized algorithms for testing primality but they didn't usually give a proof that the number was prime. A nice result of Goldwasser and Kilian gave a way to randomly generate certified primes, primes with proofs of primeness. Adleman and Huang later showed that one can randomly find a proof of primeness for any prime.

In 2002, Agrawal, Kayal and Saxena showed Primes in P, i.e., primes no longer needed a proof of primeness. As Joe Kilian said to me at the time, "there goes my best chance at a Gödel Prize".

Monday, October 22, 2018

Please Don't call them Guidance Counselors

As I mentor many HS students, I was recently in email contact with a high school's contact person for projects, and I noticed that the sign-off was

Allie Downey
Guidance School Counselor (with "Guidance" crossed out)

This piqued my interest so I emailed her asking why the cross out.

She was delighted to answer! Here is her email:

Thank you, but “Guidance Counselor” is an outdated term.  It is from a time before there was the American School Counselor Association, before the profession required at least a  Masters degree, and before there was a nationally recognized comprehensive school counseling program. Guidance is a service; school counseling is a program.  This is a great website that explains it even better here

Thank you for taking the time to ask.  Anyone in the profession today prefers the term School Counselor and I always appreciate when people inquire.

I asked her further about the difference and here is what she said:

Everyone  in a young person’s life offers guidance in some fashion.  School counselors still provide guidance classroom lessons (such as bully prevention and character development), but we do so much more.  We help students develop their academic abilities and study skills.  We assist them and their families in college and career planning.  We teach coping skills so students can guide themselves.  We ask questions to help these young adults discover the answers on their own.  We help students learn how to advocate for themselves.  We console.  We mediate.  We learn and adapt with changing climates.  We work with families, faculty, and community members to make sure school is a safe place for students to learn and grow. And this is all before the lunch bell. It is an amazing profession and I am proud to call myself a school counselor.

Thursday, October 18, 2018

A New Cold War?

Imagine the following not-so-unrealistic scenario: The US-China trade war deepens leading to a cold war. The US blocks all Chinese citizens from graduate school in the US. Visas between the two countries become hard to get. The Chinese close off their Internet to the West.
If things continue along this path, the next decade may see the internet relegated to little more than just another front on the new cold war.
I wouldn't have thought it possible in our hyperconnected age, but we are within spitting distance of going back to the 60's. What would this all mean for science and computing?

Let's go back to the original cold war between the Soviet Union and the US, roughly from 1947-1991. We didn't have a significant internet back then (though we can thank the cold war for the internet). One had to assume that the government read mail to/from the USSR. Travel to and from the USSR and the Eastern bloc to the West was difficult. Academic research did cross over but only in dribs and drabs and we saw two almost distinct academic cultures emerge, often with duplication of effort (Cook/Levin, Borodin/Trakhtenbrot, Immerman/Szelepcsényi).

Beyond the limited communication came the lack of collaboration. Science works best with open discussion, sharing of ideas and collaborating with each other. It took a Russian and an American working together to give us Google.

No cold war in this age can completely cut off ideas flowing between countries but it can truly hamper knowledge growth. We can't count on US superiority with China already ahead of us in areas like high-speed trains, renewable energy and mobile payments.

The cold war did have an upside to science: The competition between the East and the West pushed research growth and funding on both sides. We already see arguments for quantum computing and machine learning by the necessity of staying ahead of the Chinese. But science funding should not be driven by fear but by curiosity and its indisputable long-term effects on our economy.

We must keep the flow of information and people open if we want science and technology to flourish to its fullest, from students through senior researchers. Don't let a new cold war bring us back to the days of separate communities, which will fail to produce the breakthroughs we will never know about.

Sunday, October 14, 2018

Practical consequences of RH ?

When it seemed like the Riemann Hypothesis (RH) might be solved (see the Lipton-Regan blog entry on RH here and what it points to for more info) I had the following email exchange with Ajeet Gary (not Gary Ajeet, though I will keep his name in mind for when I want to string together names like George Washington, Washington Irving, Irving Berlin, with the goal of getting back to the beginning) who is an awesome ugrad at UMCP majoring in Math and CS.

Ajeet: So Bill, now that RH has been solved should I take my credit cards off of Amazon?

Bill: I doubt RH has been solved. And I think you are thinking that from RH you can prove that factoring is in P. That is not known and likely not true.

Ajeet: What are my thoughts and why are they wrong?

Bill: What am I, a mind-reader?

Ajeet: Aren't you?

Bill: Oh, Yes, you are right, I am. Here is what you are confusing this with and why, even if you were right you would be wrong.

Ajeet: It just isn't my day.

Bill: Any day you are enlightened is your day. Okay, here are the thoughts you have

a) From the Extended RH (a generalization of RH) you can prove that PRIMES is in P. (This algorithm is slow and not used. PRIMES has a fast algorithm in RP that people do use. Primes was eventually proven to be in P anyway, though again that is a slow algorithm.) Note -- even though we do not know if ERH is true, one could still RUN the algorithm that it depends on. ERH is only used in the proof that the algorithm is correct and runs in poly time. (A sketch of this sort of test appears at the end of this post.)

b) There was an episode of Numb3rs where they claimed (1) RH implies Factoring in P-- not likely but not absurd (2) from the proof of RH you could get a FAST algorithm for factoring in a few hours (absurd). I say absurd for two reasons: (i) Going from basic research to application takes a long time, and (ii) See next thought

c) If (RH --> factoring easy) then almost surely the proof would present an algorithm (that can be run even if RH has not been proven) and then a proof that RH --> the algorithm's run time is poly. But I wonder -- is it possible that:

RH--> factoring easy, and

The proof does not give you the algorithm, and

if you had a proof of RH then you COULD get the algorithm (though not in a few hours).

I doubt this is the case.

Ajeet: So are there any practical consequences of RH?

Bill: Would you call better bounds on the error term in the prime number theorem practical?

Ajeet: YES!

Bill: GREAT! For more on RH see here
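
(Added for the curious: here is a minimal sketch of the kind of algorithm meant in (a) above, the deterministic Miller test. It runs in polynomial time unconditionally; ERH enters only in the proof that checking bases up to about 2(ln n)^2 suffices for correctness. A sketch, not production code.)

import math

def is_prime_erh(n):
    # Deterministic Miller test.  Polynomial time unconditionally; its
    # correctness for all n relies on the Extended Riemann Hypothesis,
    # which bounds the smallest witness by roughly 2*(ln n)^2.
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:            # write n-1 = 2^s * d with d odd
        d //= 2
        s += 1
    bound = min(n - 2, int(2 * math.log(n) ** 2))
    for a in range(2, bound + 1):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False         # a witnesses that n is composite
    return True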

Thursday, October 11, 2018

2018 Fall Jobs Post

As we do every year at this time, we help you find that perfect academic job. So who's hiring in CS this year? Perhaps we should instead follow the advice of John Regehr.
For computer science faculty positions it's best to look at the ads from the CRA and the ACM. For theoretical computer science specific postdoc and faculty positions check out TCS Jobs and Theory Announcements. If you have jobs to announce, please post to the above and/or feel free to leave a comment on this post.

Even if you don't see an ad, almost surely your favorite university is looking to hire computer scientists. Check out their website or email someone at the department.

And (selfish plug) Georgia Tech is looking to hire in theory this year.

Some generally good advice: Make sure you have strong letter writers who know your research well, best if one or two of them come from outside your university. Put all your papers and materials on your website and make sure your Google Scholar page is accurate. Put effort into your job talk and remember you need to sell your research to non-theorists. Good if you can connect to other areas, especially machine learning, data science or cybersecurity. Quantum seems hot this year.

Above all have fun! In this computational and data driven world we live in, there is a great job out there for you.

Monday, October 08, 2018

A New ACO Center (guest post by Vijay Vazirani)

Guest Post by Vijay Vazirani

                                                          A New ACO Center!

Last week, I helped launch an ACO Center (Algorithms, Combinatorics and Optimization) at my wonderful new home, UC Irvine. There are only two other such centers, at CMU and Georgia Tech (29 and 27 years old, respectively). My personal belief is that there will be more in the future. Let me justify.

When I joined Georgia Tech in 1995, my research was centered around approximation algorithms, a topic that resonated with its ACO Center. I was able to build on this interest in numerous ways: by offering new versions of courses on this topic as new results emerged, and by attracting to GT, for the first time, a large number of top theory PhD students who went on to produce stellar results and start impressive careers of their own. Course notes accumulated over the years eventually led to my book on the topic in 2001. Right after that, I switched to algorithmic game theory, and again ACO became the center of that activity, this time resulting in a co-edited book which had a huge impact on the growth of this area. In short, ACO gave me a lot!  In turn, I believed in it and I worked for it wholeheartedly.

I still believe in ACO and I feel it is very much relevant in today’s research world. Similar to the other two ACOs, our Center at UCI also exploits the natural synergies among TCS researchers from the CS Department, probability and combinatorics researchers from the Math Department, and optimization researchers from the Business School. Additionally, our Center has grown well beyond these boundaries to include a highly diverse collection of faculty (e.g., from the prestigious  Institute for Mathematical Behavioral Sciences) whose common agenda is to utilize the “algorithmic way of thinking”, which is set to revolutionize the sciences and engineering over the course of this century, just as mathematics did in the last. The Center website has further details about its vision and activities.

Many universities are in a massive hiring mode today (especially in CS), e.g., UCI  plans to hire 250 new faculty over the next five years. Centers such as ours present the opportunity of hiring in a more meaningful manner around big themes. They can also be instrumental in attracting not only top students but also top faculty.

A center of excellence such as GT’s ACO does not simply spring up by itself; it requires massive planning, hard work, good taste and able leadership. For the last, I will forever be indebted to Robin Thomas for his highly academic, big vision, classy leadership style which was the main reason ACO remained such a high quality program for so long. Moving forward, will we stay with three ACO Centers or will there be more? I believe the latter, but only time will tell.

Saturday, October 06, 2018

John Sidles, Mike Roman, Matt Howell, please email me/hard to get emails of people

John Sidles, Mike Roman, Matt Howell: please email me at gasarch@cs.umd.edu (my usual email).

I need to ask you about some comments you left on the blog a while back (or emailed me -- I forget which, but I can't find your emails if you did email me). I need you to contact me SOON!

When you do I will tell you what's up and why I decline to say here what this is about.

For my other readers -- it is nothing controversial.


How hard is it to find people's emails on the web?

Sometimes it takes less than 5 minutes

Sometimes it is impossible.

Sometimes I get it by asking someone else who knows, or knows who to ask... etc.

It is rare that more time on the web helps. I do not think I have ever spent more than 5 minutes and then found it. I have sometimes used linked-in. I don't have a Facebook account (I was going to put in a link to the latest Facebook privacy breach, but (1) by the time you read this they may have had another one, (2) you all know about it, and (3) when I typed `Facebook Scandal' into Google I got the Facebook page for the TV show Scandal.)

Should people make their emails public? I can see why one does not want to. The old saying is that if you owe people money you want to be hard to find, but if people owe you money you want to be easy to find.

Contrast email to what people DO put online. A few years ago I needed someone's email address. I found his website. From his website I found out the exact day he lost his virginity. Okay... Didn't need to know that. But I still could not find his email address.  I later asked someone who asked someone etc. and got it. But I was struck by what was private and public. This is NOT a complaint (though I wish it was easier to find email addresses), just an observation.

Thursday, October 04, 2018

Google added years to my life

If you google

gasarch

you used to  get the following:   here

Please go there and notice how old they say I am.

Okay, you are back. You may have noticed that they say I am 68. Gee, I don't feel 68 (I feel younger).

I have no idea how Google got my age wrong.

0) I found out about this when I saw my age in an article about the Muffin problem. The article is here. I had been in contact with the author earlier so it was easy to contact him, assure him that I appreciate his throwing scammers and spammers off of my trail by giving me the wrong age, but I wondered why he chose 68. He then apologized (which was not needed) and pointed me to the google thing.

1) My age was  not on my Wikipedia page. Nor was my birth year.

2) I do not recall ever telling Google my age -- but then again, Google knows what I search for and perhaps deduced an incorrect age from that (I've been watching a very old Western, Maverick, lately, which may have fooled them. So my plan is working!)

3) Google thinks I published with Hilbert (see here or here) so that would make them think I am old -- but 68? Hmm, still too young. If I was a 10-year-old math genius in 1938 (Hilbert died in 1943, but since I am not a 68-year-old math genius I chose numbers to make it easy) and published with him then, then I would now be 90. Not 68. So that is not the answer.
(Side question- are any of Hilbert's co-authors still alive?)

Seriously, if anyone has any ideas why Google had it wrong, let me know.

4) Lance was outraged at this and hence put my birth year on my Wikipedia page thinking that
would fix it. (I was not outraged, just amused.)

5) It was taken off my page since Lance couldn't prove it.

6) Lance and I  mentioned my age in a blog post and that was proof enough. So our blog is a primary
source. We should use this sparingly -- with great power comes great responsibility. (See here for more on that theme)

7) Other weird internet stuff: What does it take to get a Wikipedia Page? A Nobel Prize in Physics
helps. See: here.


Monday, October 01, 2018

Muffin Math

Lance: It's Friday afternoon and the Dagstuhl workshop has ended. We have some time before we need to take off so how about one final typecast.

Bill: Always a good idea.

Lance: First the exciting news: Nitin Saxena won the Shanti Swarup Bhatnagar Prize for 2018, according to the many Indians at Dagstuhl the most prestigious science prize in the country. The awards were announced on Wednesday during the workshop. He's the S in AKS.

Bill: That's really impressive. He was only two years old when AKS had log-depth sorting networks.

Lance: Bill, you moron. You're thinking of Ajtai-Komlós-Szemerédi. I'm talking Agrawal-Kayal-Saxena Primes in P, topic of my second ever blog post. Nitin, an undergrad at that time, didn't just sit on his laurels--he has had awesome results on algebraic circuit complexity that truly justify this prize.

Bill: Well congrats to Nitin. Moving on, let's talk math.

Lance: We're at Dagstuhl so we have to call it computer science.

Bill: Ronen Shaltiel gave a great but depressing talk, Indistinguishability by adaptive procedures with advice, and lower bounds on hardness amplification proofs.

Lance: Nobody sings the Dagstuhl Blues.

Bill: Suppose you had a hard function and want to convert it to a harder function, known in the biz as hardness amplification. For constant depth circuits we have hardness results but no known process for amplification. For larger classes, like constant depth circuits with threshold gates, we do know ways to amplify.

Lance: But we have no hardness results there? Where are you going with this?

Bill: Ronen put it nicely, "We can only amplify hardness where we don't have it". Ronen and his colleagues proved results along those lines. Lance, does that depress you?

Lance: Not as much as the sprained ankle that made me miss that talk. My turn to pick a favorite talk. I loved Michael Forbes' Hitting Sets for the Closure of Small Circuits. You take algebraic circuits with a parameter epsilon and take the limit as epsilon goes to zero. Forbes and Amir Shpilka show a PSPACE algorithm to find small sets containing non-zeros of these functions. These kinds of functions are studied in the GCT approach to lower bounds.

Bill: What lower bounds is this aiming to solve?

Lance: Showing the computational difference between the determinant and the permanent.

Josh Alman: You've left out the most exciting part of the conference.

Bill and Lance: So Josh, what was that?

Josh Alman: The world debut performance of Stud Muffin and Smilin' Sam singing "Do You Work on Muffin Math?"



Lance: That awesome duo looks familiar, Bill. Where have I seen them before?

Bill: That Sam can really tickle the ivories.

Lance: And Stud was definitely in the room.

Bill: On that note, take us out.

Lance: In a complex world, keep it simple.

Thursday, September 27, 2018

Still Typecasting from Dagstuhl

Lance: Bill, in our typecast earlier this week I said you were older than me. But 68? You don't look a day over 66.

Bill: Neither do you. But seriously, why do you think I'm 68?

Lance: I just Google'd "How old is Bill Gasarch?"

Bill: Don't believe everything you read on the Internet. I'm really 58.

Lance: Prove it.

Bill: Here's my driver's license.

Lance: Bill you don't drive. And it literally says "NOT A DRIVER'S LICENSE" on the back. But it is an official State of Maryland Identification card stating that you were born in 1959. Are you saying I should trust the state of Maryland over Google?

Bill: Yes, because they pay my salary. Back to Dagstuhl. Let's talk about the talks. William Hoza gave a nice talk about hitting sets for L (deterministic small space) vs RL (randomized small space), but when I asked him when we will prove L = RL he said not for fifty years. Grad students are not supposed to be that pessimistic.

Lance: You mean realistic. Though I'd guess more like 10-20 years. I wouldn't even be surprised if NL (nondeterministic log space) = L.

Arpita Korwar: I say 10-15 years.

Bill: Can we put that in the blog?

Lance: Too late. Bill I heard you were the stud muffin this week.

Bill: Yes, I talked about the muffin problem. Got a problem with that?

Lance: Needed milk. I saw this talk two years ago and now you have cool theorems. Who would've thought that with 24 muffins and 11 people you can give everyone 24/11 muffins with the smallest piece being 19/44, and that's the best possible for maximizing the smallest piece.

Bill: I can't believe you actually listened to the talk and didn't fall asleep.

Lance: zzzzzz. Did you say something?

Bill: Never mind. Eric Allender talked about the minimum circuit-size problem: Given the truth table of a function f, is there a circuit for f of size less than a given bound w? The problem is frustrating; just consider the following theorem: if MCSP is NP-complete then EXP does not equal ZPP (zero-error probabilistic polynomial time).

Lance: Do you think EXP = ZPP?

Bill: No, the result only tells us it will be hard to prove MCSP is NP-complete without informing us whether or not it is NP-complete. Allender did show that under projections it isn't NP-complete (Editor's Note: I should have said log-time projections, see Eric's comment. SAT and all your favorite NP-complete problems are complete under log-time projections). MCSP might be complete under poly-time reductions but not under weaker reductions.

Lance: Reminds me of the Kolmogorov random strings that are hard for the halting problem under Turing reductions but not under many-one reductions.

Bill: Everything reminds you of the Kolmogorov strings.

Lance: As they should.

Bill: I liked Michal Koucký's talk on Gray codes.

Lance: Shouldn't that be grey codes? We're not in the UK.

Bill: It's not the color, you moron. It's named after Frank Gray.

Lance: You are smarter than you look, not bad for a 68-year-old. I missed Koucký's talk due to a sports injury, but he did catch me up later.

Bill: I never put Lance and sports in the same sentence before.

Lance: And I never put Bill and driving together. It's a new day for everything. Koucký showed how to easily compute the next element in the Gray code, querying few bits, as long as the alphabet has size 3.

Bill: Which contrasts with Raskin's 2017 paper that shows with a binary alphabet you need to query at least half the bits.

Lance: Hey you stole my line.

Bill: That's not possible. You are editing this. I think this typecast has gone long enough. Take us out.

Lance: In a complex world, best to keep it simple.

Tuesday, September 25, 2018

Lance and Bill go to Dagstuhl: The Riemann Edition

Lance: Welcome to our typecast directly from Dagstuhl in Southwestern Germany for the 2018 edition of the seminar on Algebraic Methods in Computation Complexity. Say hi Bill.

Bill: Hi Bill. So Lance are you disappointed we didn't go to Heisenberg for the Michael Atiyah talk claiming a solution to the Riemann Hypothesis.

Lance: I knew how fast I was going but I got lost going to Heisenberg. I think you mean the Heidelberg Laureate Forum, about 100 miles from here. From what I heard we didn't miss much. For those who care here is the video, some twitter threads and the paper.

Bill: Too bad. When I first heard about the claim I was optimistic because (1) László Babai proved that graph isomorphism is in quasipolynomial-time at the age of 65 and (2) since Atiyah was retired he had all this time to work on it. Imagine Lance if you were retired and didn't have to teach or do administration, could you solve P vs NP? (This gets an LOL from Nutan Limaye)

Lance: I'll be too busy writing the great American novel. Before we leave this topic, don't forget about the rest of the Laureate Forum, loads of great talks from famous mathematicians and computer scientists. Why didn't they invite you Bill?

Bill: They did but I'd rather be at Dagstuhl with you to hear about lower bounds on matrix multiplication from Josh Alman. Oh, hi Josh, I didn't see you there.

Josh: Happy to be here, it's my first Dagstuhl. I'm flying around the world from Boston via China to get here. Though my friends say it's not around the world if you stay in the Northern hemisphere. They are a lot of fun at parties. But not as much fun as matrix multiplication.

Bill: So Josh, what do you have to say about matrix multiplication? Is it quadratic time yet?

Josh: Not yet, and we show all the current techniques will fail.

Bill: Wouldn't Chris Umans disagree?

Kathryn Fenner: You shouldn't pick on Canadians [Ed note: Josh is from Toronto]. Pick on students from your own country.

Josh: (diplomatically) I think Chris Umans has a broader notion of what counts as known methods. There are some groups that aren't ruled out but we don't know how to use them.

Chris: Very well put. The distinction is between powers of a fixed group versus families of groups like symmetric groups. The latter seems like the best place to look.

Lance: Thanks Chris. Josh, what are your impressions of Dagstuhl so far?

Josh: I like the sun and grass. I wish it was easier to get here.

Lance: This is only the first day. You haven't even found the music room yet, past the white room, past the billiard room where Mr. Green was murdered with the candlestick. Oh hi Fred Green. Luckily Dr. Green is still alive. I remember my first Dagstuhl back in February of 1992.

Josh: Two months before I was born.

Lance: Way to make me feel old.

Bill: You are old.

Lance: You are older. Believe it or not six from that original 1992 meeting are here again this week: The two of us, Eric Allender, Vikraman Arvind, Uwe Schöning and Jacobo Torán. Amazing how accents show up as we talk.

Bill: What did I sleep through this morning before Josh's talk?

Lance: Amnon Ta-Shma talked about his STOC 2017 best paper and Noga Ron-Zewi showed some new results on constructive list-decoding.

Bill: Let's do this again later in the week. Lance, takes us out.

Lance: In a complex world, best to keep it simple.

Thursday, September 20, 2018

Why wasn't email built securely?

Recently I talked with Ehsan Hoque, one of the authors of the ACM Future of Computing Academy report that suggested "Peer reviewers should require that papers and proposals rigorously consider all reasonable broader impacts, both positive and negative." which I had satirized last May.

Ehsan said that "if email had sender authentication built in from the beginning then we wouldn't have the phishing problems we have today". Leaving aside whether this statement is fully true, why didn't we put sender authentication and encryption in the first email systems?

Email goes back to the 60's but I did get involved on the early side when I wrote an email system for Cornell in the early 80's. So let me take a crack at answering that question.

Of course there are the technical reasons. RSA was invented just a few years earlier and there were no production systems and the digital signatures needed for authentication were just a theory back then. The amount of overhead needed for encryption in time and bandwidth would have stopped email in its tracks back then.

But it's not like we said we wish we could have added encryption to email if we had the resources. BITNET which Cornell used and the ARPANET gateway only connected with other universities, government agencies and maybe some industrial research labs. We generally trusted each other and didn't expect anyone to fake email for the purpose of getting passwords. It's not like these emails could have links to fake login pages. We had no web back then.

But we did all receive an email from a law firm offering green card help. My first spam message. We had a mild panic but little did we guess that spam would nearly take down email at the turn of the century. Nor would we have guessed the solution would come from machine learning which kills nearly all spam and much of the phishing emails today.

I don't disagree with the report that we should think about the negative broader impacts, but the true impacts, negative and positive, are nearly impossible to predict. Computer Science works best when we experiment with ideas, get things working and fix problems as they arise. We can't let the fear of the future prevent us from getting there.

Monday, September 17, 2018

What is a Physicist? A Mathematician? A Computer Scientist?

 Scott Aaronson recently won the Tomassoni-Chisesi Prize in Physics (yeah Scott!).
In his post (here) about it he makes a passing comment:

I'm of course not a physicist

I won't disagree (does that mean I agree? Darn Logic!) but it raises the question of how we identify ourselves. How to answer the question:

Is X a Y?

(We will also consider why we care, if we do.)

Some criteria below. Note that I may say things like `Dijkstra is obviously a computer scientist'
but this is cheating since my point is that it may be hard to tell these things (though I think he is).

1) If X is in a Y-dept then X is a Y. While often true, there are some problems: MIT CS is housed in Mathematics, some people change fields. Readers- if you know someone who is in dept X but really does Y, leave a comment. (CORRECTION- I really don't know how MIT is structured. I do know that the Math Dept has several people who I think of as Computer Scientists: Bonnie Berger, Michael Goemans, Tom Leighton, Peter Shor, Michael Sipser. There may be others as well. The point being that I would not say `Sipser is a mathematician because he is in the MIT Math Dept')

2) If X got their degree in Y then they are a Y. Again, people can change fields. Also, some of the older people in our field got degrees in Physics or Math since there was no CS (I am thinking Dijkstra-Physics, Knuth-Math). Even more recently there are cases. Andrew Childs' degree is in Physics, but he did quantum computing. Readers- if you know someone who got their degree in X but is now doing Y, leave a comment.

3) Look at X's motivation. If Donald Knuth does hard math but he does it to better analyze algorithms, then he is a computer scientist. One problem -- some people don't know their own motivations, or it can be hard to tell. And people can get distracted into another field.

4) What does X call himself? Of course people can be wrong. The cranks who email me their proofs that R(5) is 40 (it's not) think they are mathematicians. They are not -- or are they? See next point.

5) What X is interested in, independent of whether they are good at it or even know any. Not quite right -- if an 8-year-old Bill Gasarch is interested in the Ketchup problem, that does not make him a mathematician.

6) What X is working on right now. Fine but might change. And some work is hard to classify.

7) If you win an award in X, then you are an X. Some exceptions:

Scott is a computer scientist who won the Tomassoni-Chisesi Physics Prize

Ed Witten is a Physicist who won the Fields Medal (Math)

John Nash is a mathematician who won a Nobel prize in Economics.

I want to make this a full circle -- so if you know of another X who won a prize in Y, leave a comment and
we'll see what kind of graph we get. Bipartite with people on one side and fields on the other.

8) What they can teach? Helpful in terms of hiring when you want to fill teaching needs.

Does any of this matter? We use terms like `mathematician', `physicist', `computer scientist' as shorthand for what someone is working on, so it's good to know we have it right.


Thursday, September 13, 2018

P = NP and Cancer

Often when the question comes up of what happens if P = NP, one typically hears the response that it kills public-key cryptography. And it does. But that gives the impression that given the choice we would rather not have P = NP. Quite the opposite: P = NP would greatly benefit humanity, from solving AI (by finding the smallest circuit consistent with the data) to curing cancer. I've said this before but never explained why.

Of course I don't have a mathematical proof that P = NP cures cancer. Nor would an efficient algorithm for SAT immediately give a cancer cure. But it could work as follows:
  1. We need an appropriately shaped protein that would inhibit the cancer cells for a specific individual without harming the healthy cells. P = NP would help find these shapes, perhaps from just the DNA of the person and the type of cancer.
  2. At this point we don't understand the process that takes an ACGT sequence and describes the shape of the protein it forms. But it must be a simple process because it happens quickly. So we can use P = NP to find a small circuit that describes this process.
  3. Use P = NP to find the protein sequence that the circuit from #2 will output the shape from #1.
We'll need a truly efficient algorithm for NP problems for this to work. An n^50 algorithm for SAT won't do the job. All these steps may happen whether or not P = NP but we'll need some new smart algorithmic ideas.

Please note this is just a thought exercise since I strongly believe that P ≠ NP. I do not want to give false hope to those with friends and loved ones with the disease. If you want to cure cancer your first step should not be "Prove P = NP". 

Tuesday, September 11, 2018

The Tenure system is broken but not in the way that you think (Anon Guest Post)


This is an ANON guest post. Even I don't know who it is! They emailed me asking if they
could post on this topic, I said I would need to see the post. I did and it was fine.

-----------------------------------------------------------------------------------------------------------------
I have written many tenure/promotion letters before. But this summer, I was especially inundated with requests. Thinking about my past experiences with such letters, I started to question their value.

For those unfamiliar with the process, let me explain. When someone is applying for a research job, they typically need to have recommendation letters sent on their behalf. Once someone is hired in
a tenure-track position, they then need to get additional letters each time they are promoted (in the US, this will typically occur when someone is granted tenure and again when they are promoted to full
professor).

Now, I know from experience that recommendation letters are scrutinized very carefully, and often contain useful nuggets of information. I am not questioning the value of such letters (though
they may have other problems). I am focusing here only on tenure/promotion letters.

Let me fill in a bit more detail about the tenure/promotion process,since it was a mystery to me before I started an academic position. (I should note that everything I say here is based only on how things are
done at my institution; I expect it does not differ much at other US universities, but it may be different in other countries.) First, the department makes a decision as to whether to put forward someone's
case for promotion. If they do, then a package is prepared that includes, among other things, the external recommendation letters I am talking about. After reviewing the candidate's package, the department holds an official vote; if positive, then the package is reviewed and
voted on by higher levels of administration until it is approved by the president of the university.

The external letters appear very important, and they are certainly discussed when the department votes on the candidate's case. However, I am not aware of any cases (in computer science) where someone who was put forward for tenure was denied tenure. (In contrast, I am aware of a very small number of cases where a department declined to put someone forward for tenure. In such cases, no letters are ever requested.) Perhaps more frustrating, this seems to be the case even when there are negative letters. In fact, I have written what I consider to be "negative" letters in the past only to see the candidate still get tenure. (To be clear, by academic standards a negative letter does not mean saying anything bad, it just means not effusively praising the candidate.) This makes me believe that these letters are simply being used as "checkboxes" rather than real sources of information to take into account during the decision-making process. Essentially, once a department has decided to put someone forward for promotion, they have effectively also decided to vote in favor of their promotion.

Letters take a long time to write, especially tenure/promotion letters, and especially when you are not intimately familiar with someone's work (even if they are in the same field). But if they are
basically ignored, maybe we can all save ourselves some time and just write boilerplate letters (in favor of tenure) instead?


Thursday, September 06, 2018

Are Conferences Discriminatory?

Glencora Borradaile wrote a blog post in June about how conferences discriminate.
Let me spell it out. In order to really succeed in most areas of computer science, you need to publish conference papers and this, for the most part, means attendance at those conferences. But because of the institutional discrimination of border control laws and the individual discrimination that individuals face and the structural discrimination that others face, computer science discriminates based on nationality, gender identity, disability, and family status, just to name a few aspects of identity.
Suresh Venkatasubramanian follows up with a tweet storm (his words) echoing Glencora's points.
Ryan Williams had a twitter thread defending conferences.
Not much difference these day between blog posts, tweet storms and twitter threads and I recommend you read through them all.

Much as I think conferences should not serve as publication venues, they do and should play a major role in connecting people within the community. We should do our best to mitigate the real concerns of Glencora and Suresh: create an environment in which everyone feels comfortable, have travel support and child care to make attending easier, and hold meetings in different countries so those with visa issues can still attend at times. But we cannot eliminate the conference without eliminating the community. Personal interactions matter.

Monday, September 03, 2018

The Rule of Threes/Astrology

On Aug 16, 2018 Aretha Franklin died. A famous singer.

On Aug 18, 2018 Kofi Annan died. A famous politician.

On Aug 25, 2018 John McCain died. A famous politician.

On Aug 26, 2018 Neil Simon died, a famous playwright.

For 12 famous people who died between Aug 5 and Aug 26 see here (be careful- there are a few more on the list who died in August but a different year).

One could group those 12 into four sets of three and claim the rule of threes: that celebrities die in threes. There was an episode of 30 Rock where two celebrities had died and Tracy Jordan (a celeb) tried to kill a third one so he would not be a victim of the rule of threes. (See the short video clip here.)

How would one actually test the rule of threes? We would need to define the rule carefully. I have below a well defined rule, with parameters you can set, and from that you could do data collection (this could be a project for a student though you would surely prove there is no such rule).

  1. Decide on a definite time frame: T days. The deaths only count if they are within T days.
  2. Define celebrity. This may be the hardest part. I'll start with they must have a Wikipedia page of length W and they must have over H  hits on Google. This may be hard to discern for people with common names or alternative spellings. You might also look into Friends on Facebook and  Followers on Twitter. A big problem with all of this is that if you want to do a study of old data, before there was Google, Wikipedia, Facebook, and Twitter, you will need other criteria (ask your grandparents what it was like in those days).
  3. Decide whether or not to have a cutoff on age. You may decide that when Katherine Hepburn, Bob Hope, and Strom Thurmond died less than a month apart, at the ages of 96, 100, 100 this doesn't qualify. Hence you may say that the celebrities who die must be younger than Y  years.

I doubt anybody will ever do the experiment -- those that believe it's true (are there really such people?) have no interest in defining it carefully or testing it. And people who don't believe would not bother, partially because so few people believe it that it's not worth debunking. But I wonder if a well thought out experiment might reveal something interesting. Also contrast the data to all deaths and see if there is a difference (see the sketch below). For example, you might find that more celebs die in August than would be expected based on when all people die. Or that celebs live longer. Or shorter. Actually with enough p-hacking I am sure you could find something. But would you find something meaningful?
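
For the contrast-with-chance part, here is a minimal sketch of such a baseline (with made-up parameters, and assuming celebrity deaths fall independently and uniformly over the year, which is itself debatable):

import random

def clusters_of_three(death_days, T):
    # Count non-overlapping groups of three consecutive deaths within T days.
    days = sorted(death_days)
    count, i = 0, 0
    while i + 2 < len(days):
        if days[i + 2] - days[i] <= T:
            count += 1
            i += 3
        else:
            i += 1
    return count

def simulate(n_celebs=120, T=10, trials=10000):
    # Fraction of simulated years containing at least one "rule of threes"
    # cluster under the uniform null model.
    hits = 0
    for _ in range(trials):
        days = [random.uniform(0, 365) for _ in range(n_celebs)]
        if clusters_of_three(days, T) > 0:
            hits += 1
    return hits / trials

print(simulate())   # with 120 celebrity deaths a year, clusters are nearly certain

If clusters of three show up almost every year under the null model, then observing one in August tells you nothing; the interesting comparison is between the real death data and this baseline.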

Astrology is in the same category -- people who believe (there ARE such people!) could do well defined experiments but have no interest in doing so. I doubt they would find anything of interest if they did. Here there are enough people who believe in it to be worth debunking, but would a well designed science experiment convince them that astrology does not have predictive powers? Has such an experiment been done?


I once DID do such an experiment to disprove a wild theory. In 2003 a cab driver told me (1) there is no gold in Fort Knox, and Ian Fleming was trying to tell us this in the book Goldfinger, (2) Reagan was shot since he was going to tell, (3) a small cohort of billionaires runs the world. I challenged him -- if that is the case then how come in 1992 Bill Clinton beat George Bush, who was surely the billionaires' pick. He responded that Bill Clinton was a Rhodes Scholar and hence he is in-the-club. I challenged him: OKAY, predict who will get the Democratic Nomination in 2004. This was a well defined experiment (though only one data point): he would give me a prediction and I could test it. He smiled and said Wesley Clark was a Rhodes Scholar. Oh well.

Thursday, August 30, 2018

What is Data Science?

The Simons Institute at Berkeley has two semester long programs this fall, Lower Bounds on Computational Complexity and Foundations of Data Science. The beginning of each program features a "boot camp" to get people up to speed in the field, complexity last week and data science this week. Check out the links for great videos on the current state of the art.

Data Science is one of those terms you see everywhere but that is not well understood. Is it the same as machine learning? Data analytics? Those pieces only play a part of the field.

Emmanuel Candès, a Stanford statistician, gave a great description during his keynote talk at the recent STOC theoryfest. I'll try to paraphrase.

The basic scientific method works as follows: You make a hypothesis consistent with the world as you know it. Design an experiment that would distinguish your hypothesis from the current models we have. Run the experiment and accept, reject or refine your hypothesis as appropriate. Repeat.
The Higgs Boson is a recent example of this model.

Technological Advances have given us a different paradigm.
  1. Our ability to generate data has greatly increased whether it be from sensors, DNA, telescopes, computer simulations, social media and oh so many other sources.
  2. Our ability to store, communicate and compress this data saves us from having to throw most of it away.
  3. Our ability to analyze data through machine learning, streaming and other analysis tools has greatly increased with new algorithms, faster computers and specialized hardware.
All this data does not lend itself well to manually creating hypotheses to test. So we use the automated analysis tools, like ML, to create models of the data and use other data for testing those hypotheses. Data science is this process writ large.
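As a toy illustration of that paradigm (a minimal sketch, assuming scikit-learn and numpy are installed, with synthetic data standing in for a real source): let an automated tool build a model from one slice of the data and then test it on data held out from the fitting step.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                      # 1000 "observations", 5 features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Fit ("generate a hypothesis") on one slice, test it on data the model never saw.
X_fit, X_test, y_fit, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_fit, y_fit)
print(model.score(X_test, y_test))                  # accuracy on the held-out data
```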

We are in the very early stages of data science and face many challenges. Candès talked about one challenge: how to prevent false claims that arise from the data, an issue not unrelated to the current reproducibility crisis in science.

We have other scientific issues. How can we vouch for the data itself, and what about errors in the data? Many of the tools remain ad hoc; how can we get theoretical guarantees? Not to mention the various ethical, legal, security, privacy and fairness issues that vary across disciplines and nations.

We sit at a time of exciting change in the very nature of research itself, but how can we get it right when we still don't know all the ways we get it wrong?

Monday, August 27, 2018

Is Trivium (the Stream Cipher) used?

This Fall I am teaching the senior course in Crypto at UMCP. It's a nice change of pace for me since REAL people REALLY use this stuff! Contrast to last Spring when I taught

                   Ramsey Theory and its `Applications'

There is one topic in the Crypto course that LOOKS really useful but I can't tell if it IS being used, so I inquire of my readers. (I will probably come across other topics like that in the future.)

A Secure Stream Cipher is (informally) a way, given a seed and optionally an Init Vector (IV), to generate bits that look random. Alice and Bob communicate the seed either in person, over a private channel, or perhaps by using RSA (or some other public-key system), and they then both effectively have a very long string of random bits. They send the IV in the clear. They can then do a one-time pad (really a pseudo-one-time-pad). There are other uses for random-looking bits as well.
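Here is a minimal sketch of the pseudo-one-time-pad idea in Python. The keystream generator below is just SHA-256 in counter mode used as a toy expander -- it is NOT Trivium and NOT a vetted stream cipher; it only illustrates how the shared seed, public IV, and XOR fit together.

```python
import hashlib

def keystream(seed: bytes, iv: bytes, nbytes: int) -> bytes:
    """Toy expander: stretch (seed, iv) into nbytes of random-looking bytes."""
    out = b""
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + iv + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:nbytes]

def xor_with_keystream(data: bytes, seed: bytes, iv: bytes) -> bytes:
    """Encrypt and decrypt are the same operation: XOR with the keystream."""
    ks = keystream(seed, iv, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

ct = xor_with_keystream(b"attack at dawn", seed=b"shared-secret-seed", iv=b"public-iv")
pt = xor_with_keystream(ct, seed=b"shared-secret-seed", iv=b"public-iv")
print(pt)   # b'attack at dawn'
```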

So what is needed is a Secure Stream Cipher.  Trivium seems to be one such. According to the Trivium wiki

It was submitted to the Profile II (hardware) of the eSTREAM competition by its authors Christophe De Canniere and Bart Preneel, and has been selected as part of the portfolio for low area hardware ciphers (Profile 2) by the eSTREAM project. It is not patented.

According to these papers (here and here) and the Wikipedia entry (here), the following are true:

1) Trivium takes an 80-bit seed (the key) and an 80-bit IV.

2) The implementation is simple and is already in hardware. Around 3000 logic gates.

3) There are reasons to think it's random-looking, but no rigorous proof.

4) So far it has not been broken, though it's not clear how many people have tried. That goes to my question: how widely used is it?

5) Trivium needs 1152 steps in the init phase. If it only does 799 then the Cube Attack can break it in 2^68, which is better than the naive algorithm of trying every key and IV (2^160) but still not feasible. (A rough sketch of the init-and-keystream loop appears after this list.)

6) Trivium is also an American metal band and a medieval theory of education. It's a good name for a band. See my post What Rock Band Name Would you Choose? for fictional good names for bands with a math or theoretical CS connection.
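For the curious, here is a rough Python sketch of Trivium's state update, following my reading of the published eSTREAM specification (a 288-bit state in three shift registers, 1152 initialization rounds, then one keystream bit per step). Bit-ordering conventions vary between write-ups and I have not checked this against the official test vectors, so treat it as illustrative only; the function name and list-based representation are my own.

```python
def trivium_keystream(key_bits, iv_bits, nbits):
    """key_bits, iv_bits: lists of 80 bits (0/1). Returns nbits of keystream."""
    assert len(key_bits) == 80 and len(iv_bits) == 80
    # 288-bit state: three registers of 93, 84 and 111 bits (s[0] plays the role of s_1).
    s = key_bits + [0] * 13          # s_1 .. s_93
    s += iv_bits + [0] * 4           # s_94 .. s_177
    s += [0] * 108 + [1, 1, 1]       # s_178 .. s_288
    out = []

    def step(produce):
        nonlocal s
        t1 = s[65] ^ s[92]
        t2 = s[161] ^ s[176]
        t3 = s[242] ^ s[287]
        if produce:
            out.append(t1 ^ t2 ^ t3)
        t1 ^= (s[90] & s[91]) ^ s[170]
        t2 ^= (s[174] & s[175]) ^ s[263]
        t3 ^= (s[285] & s[286]) ^ s[68]
        s = [t3] + s[0:92] + [t1] + s[93:176] + [t2] + s[177:287]

    for _ in range(4 * 288):         # 1152 init steps, no output produced
        step(produce=False)
    for _ in range(nbits):
        step(produce=True)
    return out

print(trivium_keystream([0] * 80, [0] * 80, 16))
```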

OKAY, back to the main topic:

SO my questions:

Is Trivium used?

If so, then by whom and for what (for the pseudo-one-time pad?)?

If not, then why not (e.g., some of my points above are incorrect)? And should it be used instead of what is currently being used?




Sunday, August 26, 2018

Katherine Johnson (1918-)

Katherine Johnson is celebrating her 100th birthday today. This is the first centenary post we've done for a living person.

The movie Hidden Figures made her story famous: In 1952, she joined NACA, the predecessor of NASA, in the all-black West Area Computing section of the Langley lab in Virginia. During the "space race" of the 50's and 60's she worked on trajectory analysis for the early human spaceflights. In 1960, she was the first woman to co-author a technical report for NASA on placing satellites over a specific latitude and longitude.

The West Area Computing section had human computers working on the critical calculations for air and space travel. Soon NASA started moving that work to IBM machines but much as we don't fully trust machine learning today, humans didn't initially trust these computers. John Glenn's first orbital mission required complex calculations to track his flight. He insisted on Katherine Johnson working out the computations herself, which she did. "If she says they're good then I'm ready to go".

In 2015, then-President Obama awarded Katherine Johnson the highest US civilian honor, the Presidential Medal of Freedom.

Thursday, August 23, 2018

The Zero-One Law for Random Oracles

A couple of years ago, Rossman, Servedio and Tan showed that the polynomial-time hierarchy is infinite relative to a random oracle. That is, if you choose each string independently, with probability one-half, to be in an oracle R, then the polynomial-time hierarchy will be infinite relative to R with probability one. This is probability one in the measure-theory sense: there are oracles for which it is false, it is just that those oracles occur with probability zero.

There are still a few open questions for random oracles, such as whether P = BQP, that is, whether quantum and classical computing can solve the same problems efficiently. We suspect that P is different from BQP relative to a random oracle, because otherwise BQP would be the same as BPP unrelativized (and thus factoring would be easy), but we have no proof. Could it be possible that this problem has no simple resolution, that P = BQP holds with probability 1/2 relative to a random oracle, or some other probability strictly between 0 and 1? As it turns out, no.

Some statements do hold with intermediate probabilities. The sentence "0101 is in R" holds with probability 1/2. Even for a fixed machine M, questions like "M^R accepts an infinite language" could hold with probability, say, 3/8.

But statements like P = BQP relative to R can't hold with intermediate probability. That's due to the Kolmogorov zero-one law. If you have a set of oracles that is closed under finite differences, that set must occur with probability zero or one. Every statement about complexity classes has that property because we can hard-wire finite differences of the oracle into the machine description without increasing the running time. It will change the machine but not the complexity class. So P = BQP holds with probability zero or one, even though we can't tell which one yet.
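For the record, here is the statement being invoked, in my own phrasing (think of a random oracle R as one independent fair coin flip per string):

```latex
\textbf{Kolmogorov zero-one law (oracle form).}
Let $S$ be a set of oracles closed under finite differences, i.e., if $A \in S$
and $A \,\triangle\, B$ is finite then $B \in S$. Then for a random oracle $R$,
\[
  \Pr[\, R \in S \,] \in \{0, 1\}.
\]
In particular, $S = \{A : \mathrm{P}^A = \mathrm{BQP}^A\}$ is closed under finite
differences (hard-wire the finitely many changed strings into the machines), so
$\Pr[\mathrm{P}^R = \mathrm{BQP}^R]$ is either $0$ or $1$.
```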

The Kolmogorov zero-one law gives us a consistent look at complexity classes. Since a countable union of probability-zero events still has probability zero, all the finitely-described statements about complexity classes that individually hold with probability one also hold simultaneously with probability one. While this random world does not match the unrelativized one, it does give us a relativized world where we can explore the different possible relationships between complexity classes.

Monday, August 20, 2018

Fractional Problems: 2.1-colorable, 2.8-SAT

Some graphs are 2-colorable, some graphs are 3-colorable, some graphs are... Does it make sense to say that a graph is 2.1-colorable? It does! (Source: Fractional Graph Theory by Scheinerman and Ullman -- I saw a talk on this by Jim Propp a long time ago.)

Def 1: A graph is (a,b)-colorable (with a \ge b) if you can assign to every vertex a set of b numbers from {1,...,a} such that if u and v are adjacent then their sets are disjoint. Note that k-colorable is the same as (k,1)-colorable. Let chi_b(G) be the least a such that G is (a,b)-colorable.
The fractional chromatic number of G is lim_{b-->infinity} chi_b(G)/b.

Def 2: We restate the ordinary chromatic number problem as an integer program (and NOT by using Chrom Num \le SAT \le IP). In fact, our integer program will be LARGE. For every independent set I of G we have a 0-1 valued variable x_I, which will be 1 iff I is used as a color class (all of I gets one color). We want to minimize \sum_I x_I subject to the constraint that, for every vertex v in the graph, \sum_{I : v in I} x_I \ge 1, so every vertex is colored. The fractional chromatic number is what you get if you relax the above IP to an LP with x_I in [0,1] instead of {0,1}.

Defs 1 and 2 turn out to be equivalent. The Wikipedia entry on Fractional Chromatic Number (see here) is pretty good and has some applications to real-world things.
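As a sanity check on Def 2, here is a minimal sketch (assuming scipy is installed) that builds the LP over all independent sets of a small graph and solves it; for the 5-cycle the answer should come out to 5/2. The function names are mine, and the brute-force enumeration of independent sets only makes sense for tiny graphs.

```python
from itertools import combinations
from scipy.optimize import linprog

def independent_sets(n, edges):
    """All nonempty independent sets of the graph on vertices 0..n-1."""
    edge_set = {frozenset(e) for e in edges}
    sets = []
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            if all(frozenset((u, v)) not in edge_set for u, v in combinations(S, 2)):
                sets.append(set(S))
    return sets

def fractional_chromatic_number(n, edges):
    I = independent_sets(n, edges)
    c = [1.0] * len(I)                                  # minimize sum_I x_I
    # for each vertex v: sum over independent sets containing v of x_I >= 1
    A_ub = [[-1.0 if v in S else 0.0 for S in I] for v in range(n)]
    b_ub = [-1.0] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    return res.fun

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]        # the 5-cycle C_5
print(fractional_chromatic_number(5, edges))            # approximately 2.5
```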

QUESTION: 2-col is in P, 3-col is NPC. What about, say, 2.1-col? It turns out that, for every c > 2, c-col is NPC.

Open question (which Jim Propp used to begin his lecture): That every planar graph is 5-colorable has an EASY proof. That every planar graph is 4-colorable has a HARD (or at least tedious) proof. Is there a nice proof that every planar graph is (say) 4.5-colorable? The answer is yes: every planar graph is 4.5-colorable. I blogged on it here.

Are there other fractional problems related to NPC problems? YES -- at a Dagstuhl workshop there was a paper on (2+epsilon)-SAT by Austrin, Guruswami, and Hastad (see here).

What is fractional SAT? Let's recall ordinary k-SAT: every clause has k literals and you need to make at least one of them true. What if you wanted to make at least 2 of them true? (a/b)-SAT is the problem where every clause has exactly b literals and you want an assignment that makes at least a literals in each clause true.
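To pin the definition down, here is a tiny checker in my own (assumed) notation: a literal is a (variable, sign) pair and a clause is a list of b literals.

```python
def satisfies_ab_sat(clauses, assignment, a):
    """Does the assignment make at least a literals true in every clause?"""
    def true_literals(clause):
        return sum(1 for var, positive in clause if assignment[var] == positive)
    return all(true_literals(c) >= a for c in clauses)

# One clause (x0 OR x1 OR NOT x2); require at least 2 true literals per clause.
clauses = [[(0, True), (1, True), (2, False)]]
print(satisfies_ab_sat(clauses, {0: True, 1: True, 2: True}, a=2))   # True
print(satisfies_ab_sat(clauses, {0: True, 1: False, 2: True}, a=2))  # False
```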

GOOD NEWS: for all epsilon > 0, (2+epsilon)-SAT is NP-complete. It's not so much good that it's true, but it's good that it's known.

BAD NEWS: The proof is hard, uses lots of tools.

ODD NEWS: The speaker said that they PROVED there was no easy proof.

I think it's worth having your students try to DEFINE these terms on their own. The NPC proofs may be over their heads (they may be over my head), but the definitions are nice and the students might be able to derive them.

QUESTION: Do other NPC problems have Fractional versions? I would think yes. This could lead to a host of open problems OR perhaps they have already been asked. If you know of any, please comment.

Thursday, August 16, 2018

How valuable is a Fields Medal?

(Johan Hastad won the Knuth Prize! The below post was written before I knew that but has a mild connection to it. See here for more info on Hastad winning it, or see Lance's tweet, or see Boaz's blog post here. There will probably be other blog posts about it as well. ADDED LATER: Lipton and Regan have a post on this here.)




The Fields Medal was recently awarded to

Caucher Birkar

Alessio Figalli

Peter Scholze

Akshay Venkatesh

I was going to try to give one sentence about what they did, but Wikipedia does a better job than I ever could so I point there: here. Terry Tao also has some comments on the Fields Medal here. So does Doron Zeilberger here.

How much is a Fields medal worth?

1) The winners get $15,000 each.

2) Winning a Fields Medal gets one a higher salary and the ability to change schools, so the $15,000 might not be the main monetary part. All Fields Medalists are under 40, so the salary increases and such last for a much longer time than (say) a Nobel Prize given for life achievements to someone much older. So you may rather win a Fields Medal when you are 39 than a Nobel when you are 70. The Abel Prize is around 740,000 dollars and (I think) given for lifetime achievement, so again, a Fields Medal may be better. (See here for more on the Abel Prize.) Which would I prefer to win? I would be delighted if that were my dilemma.

3) I am sure that none of the four winners went into math because of the allure of the $15,000 Fields Medal.

4) The title of this post is ambiguous. It can also be read as

how valuable is the actual medal?

The answer is $4000, much more than I would have thought. I only know this since it was recently stolen, see here.

This raises a linguistic question. The four people above can say

                                                          I WON a Fields Medal

The thief can say

                                                          I HAVE a Fields Medal

and hope that people don't quite realize that he didn't earn it.

(The article about the theft says the Fields Medal is worth $11,500. Do they deduct the cost of the medal itself? Or is the article wrong?)


Tuesday, August 14, 2018

While I Was Away

After the Oxford Workshop I enjoyed a two-week family vacation in Spain, where there was no rain in the plain, just very hot weather, up to 106℉. The old Spanish cities knew how to optimize for shade and breeze, more than I can say for Oxford.

Meanwhile in a more moderate Brazilian climate, the International Congress of Mathematicians awarded their medals, including the Rolf Nevanlinna Prize to Constantinos Daskalakis in a year with several very strong candidates. The Nevanlinna Prize gets awarded every four years to a researcher under 40 for contributions to mathematical aspects of information sciences. Costis was the then-student author of the 2004 result that finding a Nash Equilibrium is PPAD-complete and has gone on to be a leader in the algorithmic game theory community.

The ICM also distributes the Fields Medal, the highest honor in mathematics. Much ado is given to Peter Scholze, who received the award this year at the age of thirty, though remember that Alexander Razborov received his Nevanlinna Prize at the age of 27 in 1990. Caucher Birkar also received the Fields Medal, at the more standard age of 40, but had it for only a few minutes before it was literally stolen away.

I didn't realize how much I appreciate the convenience of Uber and Lyft until I had to get around cities where they don't exist. Meanwhile New York started to limit ride-sharing vehicles and I arrived in Madrid to a taxi strike protesting Uber in that city. The Yin and Yang of technology.

Tuesday, August 07, 2018

The Future of TCS Workshop, celebrating V Vazirani's 60th, now online


On June 29, 2018, a workshop was held, in conjunction with STOC 2018, to celebrate the accomplishments of Vijay Vazirani on the occasion of his 60th birthday, organized by his PhD students, Aranyak Mehta, Naveen Garg and Samir Khuller. The workshop was called "TCS: Looking into the Future" and, true to the title, it was precisely that! In front of a large, enthusiastic audience, left over from STOC, the star-studded lineup of speakers outlined some of the most avant-garde, far-out ideas on the future of computing. Fortunately, this exciting and highly thought-provoking set of talks was recorded for posterity and is available for all to view here.
THE LAST WORD `here' IS THE LINK to the website which has links to the four talks.
The speakers were:
Len Adleman, Manuel Blum, Richard Karp, Leonard Schulman, Umesh Vazirani.

1) I URGE you to all WATCH those talks!

2) I really like it when talks are available online after the fact, so even if you didn't go (I didn't) you can still see the talks later.

3) So many talks to watch, so little time, alas!

4) Sorry for the white background for this post- that happens sometimes. NO comments on it please.




Wednesday, August 01, 2018

Three trick questions in Formal Lang Theory

There are three questions I ask in my Formal Lang Theory class that even the very best students get wrong. Two I knew were trick questions; the third one surprised me.

1) If w is a string then SUBSEQ(w) is the set of all strings you can form by replacing some symbols in w with the empty string. SUBSEQ(L) is defined in the obvious way.
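For a single short string w you can simply enumerate SUBSEQ(w); a quick sketch:

```python
from itertools import combinations

def SUBSEQ(w):
    """All strings obtained from w by deleting any subset of its symbols."""
    return {"".join(w[i] for i in idx)
            for r in range(len(w) + 1)
            for idx in combinations(range(len(w)), r)}

print(sorted(SUBSEQ("aba")))   # ['', 'a', 'aa', 'ab', 'aba', 'b', 'ba']
```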

I ask the following in class (not on an exam): TRUE or FALSE and WHY, and we'll discuss.
1) If L is regular then SUBSEQ(L) is regular.
2) If L is context free then SUBSEQ(L) is context free.
3) If L is decidable then SUBSEQ(L) is decidable.
4) If L is c.e. (used to be called r.e.) then SUBSEQ(L) is c.e.

The students pretty much get and prove that 1, 2, and 4 are TRUE. They all think 3 is false. But it is true, for a strange reason:

If L is ANY language whatsoever then SUBSEQ(L) is regular. This comes from wqo (well-quasi-ordering) theory. For more on this see a blog post I did when I was a guest blogger (it shows- the typeface is terrible) here

2) How many states does an NFA need for { a^n : n \ne 1000 } (or similar large numbers)? ALL of the students think it takes about 1000 states. They are wrong: here

The two above I knew people would get wrong. The third one surprised me, yet every year the good students get it wrong.

3) BILL: We showed that
a) 2-colorability is in P, hence of course planar 2-colorability is in P
b) 3-colorability is NP-complete
c) 4-colorability of planar graphs is in P

SO what about 3-colorability of planar graphs?

My very best student said the following last spring:

Planar 2-col is in P

Planar 4-col is in P

so I would guess that Planar 3-col is in P.

In prior years others made the same mistake. My opinion of these students is NOT lowered, but I am surprised they make that guess. Of course, once you KNOW something you have a hard time recovering the state of mind of NOT knowing it, so my being surprised says more about my having drunk the Kool-Aid than about their thought patterns.


Friday, July 27, 2018

Complexity in Oxford

Oxford, England is in the middle of a heat wave and it handles high temperatures about as well as Atlanta handles snow. But that couldn't stop the complexity, with a wonderful workshop this past week. It's my first trip to Oxford since I came five years ago to celebrate the opening of the Andrew Wiles building, a building that hosted this week's workshop as well.

We also got a chance to see old math and physics texts. Here's Euclid's algorithm from an old printing of Euclid's Elements.



Unlike a research conference, this workshop had several talks that gave a broader overview of several directions in complexity with a different theme each day.

A few highlights of the many great talks.

Sasha Razborov gave a nice discussion of proof systems that help us understand what makes circuit bounds hard to prove.

Tuesday was a day for pseudorandomness, finding simple distributions that certain structures can't distinguish from random. Ryan O'Donnell talked about fooling polytopes (ANDs of weighted threshold functions). Avishay Tal talked about his new oracle with Ran Raz, viewing it through this lens as a distribution that low-depth circuits can't distinguish from random but quantum can. I talked about some simple extensions to Raz-Tal and the possibilities of using their techniques to show that you can't pull out quantumness in relativized worlds.

Toni Pitassi talked about lifting--creating a tight connection between decision tree and communication complexity bounds to export lower bounds from one model to the other. Yuval Ishai talked about the continued symbiosis between complexity and theoretical cryptography.

Ryan Williams talked about his approach of using circuit satisfiability algorithms to prove lower bounds, which led to his famed NEXP not in ACC0 result. He has had considerable recent progress, including his recent work with Cody Murray reducing NEXP to nondeterministic quasipolynomial time.

Great to get away and just think complexity for a week. Seeing my former students Rahul Santhanam and Josh Grochow all grown up. And realizing I've become that old professor who regales (or bores) people with complexity stories from long ago.