Computational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch
Monday, February 29, 2016
It works in practice, but does it work in theory (Pollard's Factorization algorithm)
Throughout this post I ignore polylog factors.
It is trivial to factor N in time N^{1/2}. Pollard's rho-algorithm (see my write up here or the Wikipedia entry) for factoring does better: expected time N^{1/4}. Or does it? It works well in practice but has not been proven to work well in theory. (If I missed some paper that DID prove it works well in theory, please leave a polite comment.)
Here we state a conjecture that, if true, would show that Pollard's algorithm runs in randomized time N^{1/4}. Let p be a prime. Let c be in {2,...,p-1}.
Let f_c(x) = x^2 + c mod p.
x_1 will be specified in the conjecture. For i ≥ 2, x_i = f_c(x_{i-1}).
For at least half of the elements x_1 in {2,...,p-1} and at least half of the elements c in {2,...,p-1}, the sequence x_1, x_2, ... will have a repeat within the first O(p^{1/2}) terms.
This is thought to be true since the sequence is believed to be random enough that the birthday paradox applies. Still... no proof.
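To make the connection to factoring concrete: Pollard's algorithm iterates f_c mod N (where N is the number to factor) rather than mod p, and a repeat mod p, for p an unknown prime factor of N, reveals itself as gcd(|x-y|, N) > 1. Here is a minimal Python sketch using Floyd cycle-finding; it assumes N is composite and is meant for intuition rather than production use.

from math import gcd
import random

def pollard_rho(n):
    # Minimal sketch of Pollard's rho: Floyd cycle detection on the
    # walk x -> x^2 + c mod n. Assumes n is composite.
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(2, n - 1)      # random c, as in the conjecture
        x = y = random.randrange(2, n - 1)  # random starting point x_1
        d = 1
        while d == 1:
            x = (x * x + c) % n             # tortoise: one step
            y = (y * y + c) % n             # hare: two steps
            y = (y * y + c) % n
            d = gcd(abs(x - y), n)          # collision mod some prime p | n
        if d != n:                          # d == n: the walk cycled mod n itself;
            return d                        # retry with a new c

print(pollard_rho(10403))  # 10403 = 101*103; prints 101 or 103

If the conjecture holds then, for the smallest prime factor p of N (so p ≤ N^{1/2}), the inner loop finds a collision mod p within O(p^{1/2}) = O(N^{1/4}) iterations, which is exactly the claimed expected run time.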
When reading some algorithms papers, the interest is in getting an algorithm whose run time you can PROVE. By contrast, in papers on factoring, discrete log, and other problems used to break cryptosystems, the interest is more in getting something that actually works. I have to learn to stop thinking ``but they haven't proven that!'' and start thinking ``Oh, yes, that would work''. And to be fair, for Pollard's algorithm and others (e.g., the quadratic sieve and the number field sieve, which do better in practice than Pollard for large enough numbers) there are REASONS to think they will work well.
More generally, theoretical and applied work may need different mentalities.
Thursday, February 25, 2016
Primary Game Theory
[Nominations open for the SIGACT Distinguished Service Prize. Deadline: April 1]
The US presidential primaries have not gone as expected, as you can see from the crazy shifts in the prediction markets. This year, besides the usual Democratic/Republican split, we have an establishment/non-establishment split in both parties. Back in my day, outside candidates like Trump, Cruz and Sanders would have run as independents, like Ross Perot and John Anderson.
Despite the split, the establishment candidates focus more on themselves than on the establishment. Christie's attack on Rubio in New Hampshire may have handed Trump the election and it certainly didn't save Christie's campaign. Kasich should just drop out now if he cares about keeping the nomination for an establishment candidate--it's just not his year, though maybe he's playing some game theory of his own.
The Democratic side does not offer such interesting game theory, since we have a two-horse race. Mostly a one-horse race, because the delegate math doesn't work well for Sanders.
Let's look at the election from the point of view of a hypothetical Georgia voter voting on Super Tuesday next week. Such a voter can choose which primary to vote in on election day.
Clinton will easily win Georgia, but as long as Bernie gets at least 15% of the vote (likely), delegates will be allocated proportionally. So a vote in the Democratic primary could affect a delegate, but is less likely to affect who will be the nominee than a vote on the Republican side. Unless Bernie surprises in South Carolina, the hypothetical voter may opt to vote in the Republican primary instead.
The Republican delegate allocation rules most likely mean that the candidates receiving at least 20% of the vote will get a proportional allocation of the 31 at-large delegates, and the winner in each of the 14 congressional districts gets two delegates while the runner-up gets one. Looking at the polls, Trump will easily win the state with Cruz and Rubio hovering around 20%. A single vote could affect 6 delegates (20% of Georgia's 31 at-large delegates). A vote for Kasich or Carson would not net Kasich or Carson any delegates but could bolster Trump by pushing Rubio's vote percentage down towards that 20% mark.
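To see how sensitive this is to the 20% threshold, here is a toy model of the at-large allocation described above, with made-up vote shares (the real RNC rules handle rounding and reallocation in more detail):

def at_large(votes, delegates=31, threshold=0.20):
    # Toy model: only candidates at or above the threshold share the
    # at-large delegates, in proportion to the qualifying vote.
    # Rounding is simplified; the real rules differ in detail.
    total = sum(votes.values())
    qualified = {c: v for c, v in votes.items() if v / total >= threshold}
    qtotal = sum(qualified.values())
    return {c: round(delegates * v / qtotal) for c, v in qualified.items()}

# Made-up shares: if Rubio slips from 21% to 19%, his at-large delegates
# vanish and flow mostly to Trump.
print(at_large({'Trump': 40, 'Cruz': 24, 'Rubio': 21, 'Kasich': 9, 'Carson': 6}))
print(at_large({'Trump': 42, 'Cruz': 24, 'Rubio': 19, 'Kasich': 9, 'Carson': 6}))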
This scenario plays out across the Super Tuesday primaries. Trump is favored to win in every state voting that day except Cruz's Texas. If Rubio can get at least 20% of the vote in those states, he keeps the race alive and could make up ground in winner-take-all states coming up later. Kasich doesn't draw many voters, but enough that by not dropping out he may help close out this election on Tuesday. Game theory indeed.
In an early primary season already full of surprises we may see many more. It would be a lot more fun to watch if the fate of the US and the entire world didn't depend on the outcome.
Monday, February 22, 2016
What is a `previous publication'?
Here are the guidelines about submission to STOC 2015 with regard to submitting a previously published paper. I assume that most of the theory conferences have a similar policy.
Prior and Simultaneous Submissions: The conference will follow SIGACT's policy on prior publication and simultaneous submissions. Abstract material which has been previously published in another conference proceedings or journal, or which is scheduled for publication prior to July 2015, will not be considered for acceptance at STOC 2015. The only exception to this policy are prior or simultaneous publications appearing in the Science and Nature journals. SIGACT policy does not allow simultaneous submissions of the same (or essentially the same) abstract material to another conference with published proceedings. The program committee may consult with program chairs of other (past or future) conferences to find out about closely related submissions.
Here is a question that I ask non-rhetorically. What if Alice posts a paper on arXiv in 2010 and submits it to STOC in 2012? Technically it has not been published before. However, it certainly is not new.
Should this be allowed? Under the current rules, of course YES. Should the rules be changed? A paper can be out there without being published; should the rules reflect this? I think NOT, since ``out there'' might be hard to define carefully and I don't want to discourage people from posting on arXiv.
Should the committee be allowed to take its not-newness into account when judging it? Do they already? And the notions of ``well known'' and ``out there'' are subjective.
BOB: This paper has been known about for years.
EVE: Well, I didn't know about it, so for ME it's new!
There might be a newness/quality trade-off. If Donna posted her proof that P=NP in 2020 but submitted it to STOC 2030, I think it would still get in. By contrast, if Bob posts a good-but-not-great STOC-worthy paper in 2020, and then submits it in 2030, I think it would not get in.
Then again, by 2030 maybe we will have changed the prestige-conference model we currently use.
Thursday, February 18, 2016
Posting Papers
In the ancient days of the 80's, if someone wanted a paper from you, they would ask and you would mail it via post. Sometimes I would get a self-addressed envelope asking for a certain paper. Departments would maintain collections of local technical reports: someone could request a paper, an admin would make a copy, slap on a cover, and send it out.
In the 90's, we started distributing papers by email, but then who you sent papers to started to matter. As soon as we had a browser in 1993 I put together, in the name of fairness (though more because I got tired of responding to paper requests), a page that had electronic copies of all my papers. Over the years those files have gone from postscript to pdf, and the page started as html; later I used bib2html, which I kept going on my old Chicago CS account that nobody bothered turning off. When bib2html failed to work for me last week, I asked the twitterverse for an alternative and they answered. I went with bibbase and now can reveal my new paper page. It's pretty easy to tell from the page when I started as a department chair. I kept the old page active just in case, but it will no longer be updated.
Sometimes I wonder why I bother and just let people use Google Scholar or DBLP to find my papers. I guess I'm just not ready to give up this record of my research life.
Sunday, February 14, 2016
∑_{p ≤ n} 1/p = ln(ln(n)) + O(1). read it here because....
(Last April Fools' Day I posted four links to stories that seemed absurd and asked which one was false. They all were true. I recently came across five stories that all seem absurd but are all real; three of them can't wait until April 1 since they are about current political events. Here they are:
Amazon to open many brick-and-mortar stores.
Why John Kasich got second place in New Hampshire (which was better than expected).
Jim Gilmore's (who?) low expectations.
Why Ben Carson left Iowa.
Airpnp - not about P vs NP.
Donald Trump defends.... (I added this one on March 14)
And now back to our regularly scheduled blog)
A while back Larry Washington (number theorist at UMCP) showed me a simple proof that
∑_{p ≤ n} 1/p = ln(ln(n)) + O(1)
(p goes through all the primes ≤ n.)
Recently I tried to remember it and could not, so I looked on the web and... I could not find it! I found proofs that the sum is at least ln(ln(n)) as part of a proof that the series diverges, but could not find a simple proof of the estimate.
I asked Larry Washington to email me the proof, and then (and this happens often) while waiting for the response I came up with it.
Anyway, to try to avoid what Lance pondered, the deterioration of math over time, I post the proof here. Read it here, since you probably can't find this proof elsewhere.
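For readers who don't want to chase the links, here is one standard elementary route to the O(1) version (which may or may not match the linked write-up). Let A(t) = ∑_{p ≤ t} (ln p)/p; Mertens' first theorem (provable from ln(n!) = n ln n + O(n)) gives A(t) = ln t + O(1). Abel summation then yields

\[
\sum_{p\le n}\frac{1}{p}
 = \sum_{p\le n}\frac{\ln p}{p}\cdot\frac{1}{\ln p}
 = \frac{A(n)}{\ln n} + \int_{2}^{n}\frac{A(t)}{t\ln^{2}t}\,dt
 = O(1) + \int_{2}^{n}\frac{dt}{t\ln t} + O(1)
 = \ln\ln n + O(1),
\]

where the last step uses A(t) = ln t + O(1) and the convergence of ∫_2^∞ dt/(t ln^2 t). The sharper form, ∑_{p ≤ n} 1/p = ln(ln(n)) + M + o(1) with M the Mertens constant, takes more work.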
I continue to be amazed at both what IS and IS NOT on the web.
(ADDED LATER- one of the comments pointed to links on the web that DO contain the proofs I could not find.)
Thursday, February 11, 2016
Test of Time Award- a good idea but...
The ESA Conference (European Symposium on Algorithms) has a test-of-time award which
recognizes outstanding papers in algorithms research that were published in the ESA proceedings 19-21 years ago and which are still influential and stimulating the field today.
This sounds like a great idea- some papers are more influential than people might have thought when they first got into ESA, and some papers are less influential than people might have thought. And I am happy that Samir Khuller (my chair) and Sudipto Guha (a grad student when the paper was written) won it for their paper Approximation Algorithms for Connected Dominating Sets.
But there are two things that are not quite right.
1) 19-21 years. That seems like a very small window.
2) The paper has to have been published in ESA.
Together these make the job of the panel that decides the award easier, as they only have to look at three years of conferences. But there are many fine papers in algorithms that are not in ESA, and there may be an awesome three-year period and then a drought, so the window seems short.
But rather than complain let me ask some well defined questions:
Are there any other awards with a lower limit (in this case 19 years) on how long the paper has to be out, so that its influence can be better appreciated? This is a good idea, though 19 seems high. Awards for a lifetime of work are often similar in that they are given after the work's influence is known.
Are there any other awards that restrict themselves to ONE conference or journal? Of course best-paper and best-student-paper awards do that, but I don't know of any others.
ADDED LATER: A commenter says that there are LOTS of test-of-time awards associated with conferences:
see here
STOC, FOCS, SODA, and CCC don't have them, so I foolishly thought that was representative.
That raises another question - why do some conferences have one and some don't?
Monday, February 08, 2016
The Moral Hazard of Avoiding Complexity Assumptions
Moshe Vardi's CACM editor letter The Moral Hazard of Complexity-Theoretic Assumptions practically begs a response from this blog. I also encourage you to read the discussion between Moshe and Piotr Indyk and a twitter discussion between Moshe and Ryan Williams.
I take issue mostly with the title of Vardi's letter. Unfortunately we don't have the tools to prove strong unconditional lower bounds on solving problems. In computational complexity we rely on hardness assumptions, like P ≠ NP, to show that various problems are difficult to solve. Some of these assumptions, like the strong exponential-time hypothesis, the Unique Games Conjecture, and the circuit lower bounds needed for full derandomization, are quite strong, and we can't be completely confident that they are true. Nevertheless, computer scientists are unlikely to disprove these assumptions in the near future, so they do point to the extreme hardness of problems like getting a better-than-quadratic upper bound for edit distance or a better than 2-ε approximation for vertex cover.
If you read Vardi's letter, he doesn't disagree with the above paragraph. His piece focuses instead on the press that oversells theory results, claiming the efficiency of Babai's new graph isomorphism algorithm or the impossibility of improving edit distance. A science writer friend once told me that scientists always want an article to be fully and technically correct, but he doesn't write for scientists; he writes for readers who want to be excited by science. Scientists rarely mislead the press, and we shouldn't, but do we really want to force science writers to downplay the story? These stories might not be completely accurate, but if we can get the public interested in theoretical computer science, we all win. To paraphrase Oscar Wilde, as I tweeted, the only thing worse than the press talking about theoretical computer science results is the press not talking about theoretical computer science results.
Thursday, February 04, 2016
Go Google Go
In 2009 I posted about a surprising new approach that moved computer Go from programs that lose to beginners to where it could beat good amateurs. That approach, now called Monte Carlo Tree Search, involves evaluating a position using random game play and doing a bounded-depth tree search to maximize the evaluation.
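For the curious, here is a minimal UCT-style MCTS sketch in Python. The game-state interface (legal_moves, play, is_terminal, winner, to_move) is hypothetical, just to make the four phases (selection, expansion, simulation, backpropagation) concrete; AlphaGo, described below, replaces the random play with learned networks.

import math, random

# Assumed (hypothetical) immutable game-state interface:
#   state.legal_moves() -> iterable of moves
#   state.play(move)    -> new state
#   state.is_terminal() -> bool
#   state.winner()      -> the winning player, or None for a draw
#   state.to_move       -> the player whose turn it is

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children = []
        self.untried = list(state.legal_moves())
        self.wins = 0.0   # from the perspective of the player who made self.move
        self.visits = 0

def uct_select(node, c=1.4):
    # UCB1 score: exploitation term plus exploration bonus.
    return max(node.children, key=lambda ch:
               ch.wins / ch.visits + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(state):
    # Random playout to the end of the game.
    while not state.is_terminal():
        state = state.play(random.choice(list(state.legal_moves())))
    return state.winner()

def mcts(root_state, iterations=10000):
    # Assumes root_state is not terminal.
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCB1.
        while not node.untried and node.children:
            node = uct_select(node)
        # 2. Expansion: add one child for an untried move.
        if node.untried:
            move = node.untried.pop()
            node.children.append(Node(node.state.play(move), node, move))
            node = node.children[-1]
        # 3. Simulation: random playout from the new node.
        winner = rollout(node.state)
        # 4. Backpropagation: credit each node from its mover's perspective.
        while node:
            node.visits += 1
            if node.parent is not None:
                mover = node.parent.state.to_move
                node.wins += 1.0 if winner == mover else 0.5 if winner is None else 0.0
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move  # most-visited move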
Google last week announced AlphaGo, a program that uses machine learning techniques to optimize MCTS. This program beat the European Go champion five games to none, a huge advance over beating amateurs. In March AlphaGo will play the world Go champion, Lee Sedol, in a five-game match. The world will follow the match closely (on YouTube, naturally). For now we should keep our expectations in check: Deep Blue failed to beat Kasparov in its first attempt.
Google researchers describe AlphaGo in detail in a readable Nature article. To oversimplify, they train deep neural nets to learn two functions: the probability of a win from a given position, and the probability distribution used to choose the next move in the random game play in MCTS. First they train with supervised learning on historical game data between expert players, and then with reinforcement learning, basically having the program play itself. AlphaGo uses these functions to guide the Monte Carlo Tree Search.
AlphaGo differs quite a bit from chess algorithms.
- AlphaGo uses no built-in strategy for Go. The same approach could be used for most other two-player games. I guess this approach would fail miserably for chess, but I would love to see it tried.
- Machine learning has the nice property that you can train offline slowly and then apply the resulting neural nets quickly during gameplay. While we can refine chess algorithms offline, all the computation generally happens during the game.
- If a computer chess program makes a surprising move, good or bad, one can work through the code and figure out why the program made that particular move. If AlphaGo makes a surprising move, we'll have no clue why.
- I wonder if the same applies to human play. A chess player can explain the reasoning behind a particular move. Can Go players do the same or do great Go players rely more on intuition?
Machine learning applications like AlphaGo seemingly tackle difficult computational problems with virtually no built-in domain knowledge. Except for generating the game data for the supervised learning, humans play little role in how AlphaGo decides what to do. ML uses the same techniques to translate languages, to weed out spam, to set the temperature in my house, and in the near future to drive my car. Will they prove our theorems? Time will tell.
Monday, February 01, 2016
Math questions that come out of the Iowa Caucus
Link for the info I refer to: here
Jim Gilmore, Republican, has 0% of the vote. Does that mean that literally NOBODY voted for him?
Hillary beat Bernie, but BOTH get 21 delegates. I hardly call that a win. I call that a tie.
Why do we refer to Hillary and Bernie by their first names, but most of the Republicans by their last names? The only exception is Jeb!, whom we call Jeb to distinguish him from his brother. Also, I think he wants to play down his relation to his brother. Not that it will matter.
Cruz/Trump/Rubio will get 8, 7, and 6 delegates. (This might change, but not by a lot.) I'd call that a tie. Later states will be winner-take-all, which makes no sense in a race with this many people (though the field may shrink soon: Huckabee has already suspended his campaign, which seems like an odd way of saying I quit). But in winner-take-all states there will be real winners. Why are there winner-take-all states? It was a deal made so that those states wouldn't move their primaries up.
This is a terrible way to pick a president. I don't mean democracy, which is fine; I mean the confusing combination of caucuses and primaries, with some states winner-take-all, some proportional, and Iowa and NH having... more power than they should. This was NOT a planned system; it just evolved that way. But it's hard to change.
If you are a registered Republican but want Hillary to win, then do you (1) vote for the Republican you like best, or (2) vote for the Republican that Hillary can most easily beat? The problem with (2) is that you could end up with President Trump.
Statistical analysis of polls and looking at past trends have their limits, for two reasons:
1) The amount of data is small. The modern primary system has only been in place since 1972. Some nominations are incumbents which are very different from a free-for-all. The only times both parties had free-for-alls were 1988, 2000, 2008, and 2016.
2) Whatever trends you do find, even if they are long term (e.g., the tallest candidate wins) might just change. The old Machine Learning warning: Trends hold until they don't.
Most sciences get BETTER over time. Polling is a mixed bag. On the one hand, using modern technology you can poll more people. On the other hand, people have so many different ways to be contacted that it's hard to know what to do. For example, it's no longer the case that everyone has a landline.
Is this headline satire?: here
AI project: write a program that distinguishes political satire from political fact. Might be hard.
My wife pointed out that
Hillary WINS since she didn't lose!
Bernie WINS since, for an insurgent, TYING the favorite is a win
Cruz WINS since... well, he actually DID win
Trump WINS since his support is mostly real and he didn't collapse. And he was surprisingly gracious in his concession speech. (This is the weakest `he WINS' argument on this list)
Rubio might be the BIG WINNER since he did way better than expected. That's a rather odd criterion, and it makes people want to set their expectations low.
SO--- like a little league game where they all tried hard, THEY'RE ALL WINNERS!
The American Public--- not so much.