Sunday, August 02, 2015

17 candidates, only 10 in the debate- what to do?


On Thursday, Aug 6, there will be a Republican debate among 10 of the 17 (yes, 17) candidates for the Republican nomination.

1) There are 17 candidates. Here is how I remember them: I think of the map of the US and go down the east coast, then over to Texas, then up. That only works for the candidates who are or were Senators or Governors. I THEN list the outsiders. Hence my order is (listing their last job in government): George Pataki (Gov-NY), Chris Christie (Gov-NJ), Rick Santorum (Sen-PA), Rand Paul (Sen-KY), Jim Gilmore (Gov-VA), Lindsey Graham (Sen-SC), Jeb Bush (Gov-FL), Marco Rubio (Sen-FL), Bobby Jindal (Gov-LA), Ted Cruz (Sen-TX), Rick Perry (Gov-TX), Mike Huckabee (Gov-AR), Scott Walker (Gov-WI), John Kasich (Gov-OH), Donald Trump (Businessman), Ben Carson (Neurosurgeon), Carly Fiorina (Businesswoman).
9 Govs, 5 Sens, 3 outsiders.

2) Having a debate with 17 candidates would be insane. Hence they decided a while back to have the main debate with the top 10 in the average of 5 polls, and also have a debate with everyone else. There are several problems with this: (a) candidates hovering around slots 9, 10, 11, 12 are closer together than the margin of error, (b) the polls are supposed to measure what the public wants, not dictate things, (c) the polls were likely supposed to determine who the serious candidates are, but note that Trump is leading the polls, so that's not quite right. QUESTION: Let's say that Chris Christie is at 2% with a margin of error of plus or minus 3%. Could he really be at -1%?

3) A better idea might be to RANDOMLY partition the field into two groups, one of size 8 and one of size 9, and have two debates that way. What randomizer would they use? The field is small enough that they really could just put slips of paper in a hat and draw them (see the sketch below). If they had many more candidates we might use Nisan-Wigderson.
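For concreteness, here is a minimal sketch of the slips-of-paper randomizer in Python (my own, certainly not an official RNC procedure):

```python
# A minimal sketch (mine, not an official procedure): shuffle the 17
# candidates and split them into debate groups of 8 and 9.
import random

candidates = ["Pataki", "Christie", "Santorum", "Paul", "Gilmore", "Graham",
              "Bush", "Rubio", "Jindal", "Cruz", "Perry", "Huckabee",
              "Walker", "Kasich", "Trump", "Carson", "Fiorina"]

random.shuffle(candidates)
print("Debate 1 (8):", sorted(candidates[:8]))
print("Debate 2 (9):", sorted(candidates[8:]))
```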

4) How did they end up with 17 candidates?

a) Being a candidate is not a well-defined notion. What is the criterion for being a candidate? Could Lance Fortnow declare that he is a candidate for the Republican nomination (or, for that matter, the Democratic nomination)? YES. He's even written some papers on economics, so I'd vote for him over... actually, any of the 17. RUN LANCE RUN! So ANYONE who wants to run can! And they DO! I'm not sure what they can do about this---it would be hard to define ``serious candidate'' rigorously.

b) The debate is in August but the Iowa Caucus isn't until Feb 1. So why have the debate now? I speculate that they wanted to thin out the field early, but this has had the opposite effect--- LOTS of candidates now want to get into the debates.

c) (I've heard this) Campaign finance laws have been gutted by the Supreme Court, so if you have just ONE mega-wealthy donor you have enough money to run. Or you can fund yourself (NOTE- while Trump could fund himself, so far he hasn't had to, as the media is covering him so much).

d) Because there are so many, and no dominating front runner, they all think they have a shot at it. So nobody is dropping out. Having a lot of people running makes more people want to run. (All the cool kids are doing it!)


Tuesday, July 28, 2015

Explain this Scenario in Jeopardy and some more thoughts


In the last post I had the following scenario:

Larry, Moe, and Curly are on Jeopardy.

Going into Final Jeopardy:

Larry has $50,000, Moe has $10,000, Curly has $10,000

Larry bets $29,999, Moe bets $10,000, Curly bets $10,000

These bets are ALL RATIONAL and ALL MATTER independent of what the category is. For example, these bets make sense whether the category is THE THREE STOOGES or CIRCUIT LOWER BOUNDS.

Explain why this is.

EXPLANATION: You were probably thinking of ordinary Jeopardy, where the winner keeps whatever he wins and the losers' take-home is based ONLY on their rank ($2,000 for second place, $1,000 for third place). Hence Larry's bet seems risky since he may lose $29,999, and Moe's and Curly's bets seem irrelevant (or barely relevant- they both want to finish in second).

BUT- these are Larry, Moe, and Curly, The Three Stooges. This is CELEBRITY JEOPARDY! The rules for money are different. First place gets the MAX of what he wins and $50,000, so Larry has NOTHING TO LOSE by betting $29,999. Second and third place BOTH get the MAX of what they win and $10,000, so Moe and Curly have NOTHING TO LOSE by betting $10,000. (I suspect they do this because the money goes to a charity chosen by the celebrity.)
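To make the nothing-to-lose arithmetic concrete, here is a small sketch in Python (my own, assuming the payout rules as I've described them above):

```python
# Take-home pay under the celebrity rules described above (my assumption):
# first place is floored at 50,000; second and third are floored at 10,000.
def take_home(score, rank):
    return max(score, 50_000 if rank == 1 else 10_000)

# Larry bets 29,999: right -> 79,999; wrong -> 20,001, which still beats
# Moe and Curly (at most 20,000 each), so the 50,000 floor applies anyway.
print(take_home(79_999, 1), take_home(20_001, 1))  # 79999 50000

# Moe (or Curly) bets 10,000: right -> 20,000; wrong -> 0, floored at 10,000.
print(take_home(20_000, 2), take_home(0, 3))       # 20000 10000
```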

SIDE NOTE: I saw Celebrity Jeopardy and wanted to verify the above before posting. So I looked on the web for the rules for Celebrity Jeopardy. THEY WERE NOWHERE TO BE FOUND! A friend of mine finally found a very brief YouTube clip of Penn Jillette winning Celebrity Jeopardy, with a VERY BRIEF look at the final scores and how much money everyone actually got. That's how I verified what I thought were the rules for Celebrity Jeopardy.

If I am looking up a theorem in recursive Ramsey theory and can't find it on the web I am NOT surprised at all, since that would be somewhat obscure (9 times out of 10 when I look up something in Ramsey theory it points to one of the Ramsey theory websites that I maintain. Usually it is there!). But the rules for Celebrity Jeopardy -- I would think those are not so obscure. Rather surprised they were not on the web.

Monday, July 27, 2015

Explain this Scenario on Jeopardy


Ponder the following:

Larry, Moe, and Curly are on Jeopardy.

Going into Final Jeopardy:

Larry has $50,000, Moe has $10,000, Curly has $10,000

Larry bets $29,999, Moe bets $10,000, Curly bets $10,000

These bets are ALL RATIONAL and ALL MATTER independent of what the category is. For example, these bets make sense whether the category is THE THREE STOOGES or CIRCUIT LOWER BOUNDS.

Explain why this is.

I'll answer in my next post or in the comments of this one
depending on... not sure what it depends on.

Thursday, July 23, 2015

New Proof of the Isolation Lemma

The isolation lemma of Mulmuley, Vazirani and Vazirani says that if we take random weights for elements in a set system, with high probability there will be a unique set of minimum weight. Mulmuley et al. use the isolation lemma to randomly reduce matching to computing the determinant. The isolation lemma also gives an alternative proof of Valiant-Vazirani, which shows how to randomly reduce NP-complete problems to ones with a unique solution.

Noam Ta-Shma, an Israeli high school student (and son of Amnon), recently posted a new proof of the isolation lemma. The MVV proof is not particularly complicated but it does require feeling very comfortable with independent random variables. Ta-Shma's proof is a more straightforward combinatorial argument.

Suppose you have a set system over a universe of n elements. Give each element i a weight w_i chosen uniformly between 1 and m. The weight of a set is the sum of the weights of its elements. Ta-Shma shows that there is a unique minimum-weight set with probability at least (1-1/m)^n, which beats out the bound of 1-n/m given by MVV.

Here is a sketch of his proof: Suppose all the weights w_i were between 2 and m. Let S be the lexicographically first minimum-weight set under these weights. Define the function φ(w), on weight vectors with every w_i at least 2, by
  • φ(w)_i = w_i - 1 if i is in S
  • φ(w)_i = w_i if i is not in S
Note that S is the unique minimum-weight set under the weights φ(w). Moreover, φ is 1-1, since we can recover w from φ(w) by taking the unique minimum-weight set under φ(w) and adding one to the weight of each of its elements.

So the probability that random weights yield a unique minimum-weight set is at least
|range(φ)|/m^n = |domain(φ)|/m^n = (m-1)^n/m^n = (1-1/m)^n.
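The bound is easy to check empirically. Here is a small simulation (my own sketch, not from the paper) that estimates the probability of a unique minimum-weight set for a random set system:

```python
# Empirical check of the isolation lemma (my sketch): estimate the
# probability that random weights give a unique minimum-weight set.
import random

def prob_unique_min(sets, n, m, trials=20_000):
    hits = 0
    for _ in range(trials):
        w = [random.randint(1, m) for _ in range(n)]
        totals = [sum(w[i] for i in s) for s in sets]
        hits += totals.count(min(totals)) == 1
    return hits / trials

n, m = 8, 50
# a random family of distinct nonempty subsets of {0, ..., n-1}
family = {frozenset(i for i in range(n) if random.random() < 0.5) for _ in range(20)}
family = [s for s in family if s]
print(prob_unique_min(family, n, m))  # should be at least (1 - 1/m)^n ~ 0.85
```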

Read all the details in Ta-Shma's paper.

Tuesday, July 21, 2015

Hartley Rogers, Author of the first Textbook on Recursion Theory, passes away

Hartley Rogers Jr. passed away on July 17, 2015 (last week Friday as I write this). He was 89 and passed peacefully.

For our community Rogers is probably best known for his textbook on recursion theory, which I discuss below. He did many other things, for which I refer you to his Wikipedia page here.

His book was:

The theory of recursive functions and effective computability.

It was first published in 1967 but a paperback version came out in 1987.

It was probably the first textbook in recursion theory. It was fairly broad. Here are the chapter headings and some comments.
Recursive functions,

Unsolvable problems (The first edition came out before Hilbert's tenth problem was solved),

Purpose: Summary,

Recursive invariants,

Recursive and recursively enumerable sets,

Reducibilities,

One-One Reducibilities; Many-one Reducibilities, (Maybe it's just me, but I can't imagine caring whether the reduction is 1-1 or many-one.)

Truth-Table Reducibilities; simple sets, (``Simple sets are not simple'' was a quote from Herbert Gelernter, who taught me my first course in recursion theory.)

Turing Reducibilities; hypersimple sets,

Post's Problem; incomplete sets. (Post's problem was to find an r.e. set that is neither recursive nor Turing-complete. When I tell people there is such a set they often say `Oh, like Ladner's Theorem.' That's true but backwards. It's still open to find a NATURAL set that is incomplete, though such sets probably don't exist and it's hard to pin that down.)

The Recursion Theorem,

Recursively enumerable sets as a lattice,

Degrees of unsolvability,

The Arithmetic Hierarchy (Part 1),

The Arithmetic Hierarchy (Part 2),

The Analytic Hierarchy.

Looking over his book I notice the following

1) He thanks Noam Chomsky (a linguist) and Burton Dreben (a philosopher). I think we are more specialized now. Would it be surprising if a text in recursion theory written now thanked people who are not in math?

2) He thanks his typist. I think that people who write math books now type them themselves. I wonder if novelists also now type their own.

3) I think that Soare's book replaced it as THE book that young recursion theorists read. (Are there young recursion theorists?) Soare's book is chiefly on r.e. degree theory; Rogers's book is broader. When Rogers wrote his book much less was known (no 0'''-arguments, very little on random sets), so it was possible to fit most of what was known into one book. That would be hard now, though Odifreddi's book comes close. Note that Odifreddi's book is in two volumes, with a third one to be finished... probably never.

One personal note- I had a course on recursion theory taught by Herbert Gelernter at Stony Brook (my undergrad school) in the Fall of 1979. We covered the first six chapters of Rogers's text. It was a great course from a great book taught by a great teacher, and it set me on the path to work in recursion-theoretic complexity theory.

Thursday, July 16, 2015

Microsoft Faculty Summit

Last week I participated in my first Microsoft Faculty Summit, an annual soiree where Microsoft brings about a hundred faculty to Redmond to see the latest in Microsoft Research. I love these kinds of meetings because I enjoy getting the chance to talk to computer scientists across the broad spectrum of research. Unlike other fields, CS hasn't had a true annual meeting since the 80's, so it takes events like this to bring subareas together. "Unlike other fields" is an expression we say far too often in computer science.

This was the first summit since the closing of the Silicon Valley lab and the reorganization of MSR into NExT (New Experiences and Technologies), led by Peter Lee, and MSR Labs, led by Jeannette Wing. Labs focuses on long-term research while NExT tries to put research into Microsoft products. Peter gave the example of real-time translation in Skype, already available for public preview. Everyone in MSR emphasized that Microsoft will remain committed to open long-term research and said the latest round of cuts (announced while the summit was happening) will not affect research.

HoloLens, a way to manipulate virtual three-dimensional images, generated the most excitement. Unfortunately the summit didn't have HoloLenses for us to try out, but I did get a cool HoloLens T-shirt. While one expects the most interest in HoloLens to come from gaming, Microsoft emphasized the educational aspects. Microsoft has a call for proposals for research and education uses of HoloLens.

I didn't go to many of the parallel sessions, instead spending the time networking with colleagues old and new. I did really enjoy the research showcase, which highlighted many of the research projects. I tried out the Skype translator, failing a reverse Turing test because I thought I was talking to a computer but it was really a Spanish-speaking human. My colleagues at MSR NYC were showing off their wisdom-of-the-crowds work. Microsoft is moving their defunct academic search directly into Bing and Cortana. I tried Binging myself on the prototype and it did indeed list my research papers, but not my homepage or this blog. They said they'll fix that in future updates.

Monica Lam showed off her latest social messaging system, Omlet, which aims to improve privacy by keeping data on the Omlet server for no longer than two weeks, though I was more excited by their open API. Feel free to Omlet me.

While the meeting had its share of hype (quantum computers to solve world hunger), I really enjoyed the couple of days in Redmond. Despite the SVC closing, Microsoft is still one of the few companies that has labs focused on true basic research.

Monday, July 13, 2015

Is there an easier proof? A less messy proof?


Consider the following statement:

BEGIN STATEMENT:

For all a,b,c, the equations

x + y + z = a

x^2 + y^2 + z^2 = b

x^3 + y^3 + z^3 = c

have a unique solution (up to permutations of x, y, z).

END STATEMENT

One can also look at this with k equations, k variables, and powers 1,2,...,k.

The STATEMENT is true. One can use Newton's identities (see here) to obtain from the sums of powers all of the symmetric functions of x, y, z (uniquely). One can then form a polynomial which, in the k=3 case, is

W^3 -(x+y+z)W^2 + (xy+xz+yz)W - xyz = 0

whose roots are what we seek.
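As a sanity check, here is the k=3 recovery in code (my own sketch, using numpy; not from any of the sources above). Newton's identities turn (a, b, c) into the elementary symmetric functions, and the roots of the resulting cubic are x, y, z:

```python
# Recover x, y, z from a, b, c via Newton's identities (my sketch).
import numpy as np

def recover(a, b, c):
    e1 = a                       # x + y + z
    e2 = (a**2 - b) / 2          # xy + xz + yz, from p2 = e1*p1 - 2*e2
    e3 = (c - e1*b + e2*a) / 3   # xyz, from p3 = e1*p2 - e2*p1 + 3*e3
    # x, y, z are the roots of W^3 - e1*W^2 + e2*W - e3 = 0
    return np.roots([1, -e1, e2, -e3])

# Example: (x, y, z) = (1, 2, 3) gives (a, b, c) = (6, 14, 36).
print(sorted(recover(6, 14, 36).real))  # ~ [1.0, 2.0, 3.0]
```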

I want to prove an easier theorem in an easier way that avoids using Newton's identities. Here is what I want to prove:

Given those equations above (or the version with k powers), and told that a, b, c are nonzero natural numbers, I want to prove that there is at most one natural-number solution for (x, y, z) (OR for x_1, ..., x_k in the k-power case).

It's hard to say `I want an easier proof' when the proof at hand really isn't that hard. And I don't want to say I want an `elementary' proof- I just want to avoid the messiness of Newton's identities. I doubt I can formalize what I want but, as Potter Stewart said, I'll know it when I see it.
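In the meantime, a brute-force check (my own, over small naturals) at least confirms the weaker statement computationally:

```python
# Brute-force check of the weaker claim (my sketch): among natural-number
# triples with entries up to a bound, no two sorted triples share (a, b, c).
from collections import defaultdict
from itertools import combinations_with_replacement

seen = defaultdict(list)
for x, y, z in combinations_with_replacement(range(1, 40), 3):
    seen[(x + y + z, x*x + y*y + z*z, x**3 + y**3 + z**3)].append((x, y, z))

assert all(len(v) == 1 for v in seen.values())
print("no collisions among triples with entries up to 39")
```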

Thursday, July 09, 2015

Will Our Understanding of Math Deteriorate Over Time?

Scientific American writes about rescuing the enormous theorem (the classification of finite simple groups) before the proof vanishes. How can a proof vanish?

In mathematics and theoretical computer science, we read research papers primarily to find research questions to work on, or find techniques we can use to prove new theorems. What happens to a research area then when researchers go elsewhere?

In a response to a question about how can one contribute to mathematics, Bill Thurston notes that our knowledge of mathematics can deteriorate over time.
Mathematical understanding does not expand in a monotone direction. Our understanding frequently deteriorates as well. There are several obvious mechanisms of decay. The experts in a subject retire and die, or simply move on to other subjects and forget. Mathematics is commonly explained and recorded in symbolic and concrete forms that are easy to communicate, rather than in conceptual forms that are easy to understand once communicated. Translation in the direction conceptual -> concrete and symbolic is much easier than translation in the reverse direction, and symbolic forms often replaces the conceptual forms of understanding. And mathematical conventions and taken-for-granted knowledge change, so older texts may become hard to understand. In short, mathematics only exists in a living community of mathematicians that spreads understanding and breaths life into ideas both old and new.
Once a research area fills out, researchers tend to move on to new and different ideas. Much of the research in the theoretical CS community in the 50's, 60's and 70's has been lost to journal articles, now nicely digitized but rarely downloaded.

What will happen with complexity classes once people stop studying them? You already don't see that many recent papers on complexity classes, even in the Computational Complexity Conference. A victim of our own success and failures: We settled most of the easy questions and the rest are very hard.  As my generation retires, the classes may retire as well, outside of a couple of the biggies like P and NP. The old papers will still be out there, and you can always look up the classes in the zoo or on Wikipedia, but the understanding that goes with people studying these classes, and why we cared about them, may deteriorate just like computer programs that go unattended.