It has been noted that while there are 17 Republican candidates for the nomination, of which 10 have been declared serious by FOX News via the first debate, there are far fewer Democratic candidates for the nomination, and only one has been declared serious by the powers that be. This may change if Biden runs.
I want to suggest two Democrats who I think should run. They have not made ANY moves in that direction, so it won't happen... until they see this blog post endorsing them and get inspired!
Rush Holt. Was a US Congressman from NJ (NOTE- an earlier version of this post incorrectly had him as a US Senator from WV, which his father was. Thanks to Dave MB's comment for pointing that out.) He has a PhD in Physics. I want a president who wins a Nobel Prize in something OTHER THAN Peace (T. Roosevelt, Wilson, Carter, and Obama have won it for Peace; the fictional Bartlet on The West Wing won it for Economics). I can imagine one winning for Literature. But Physics- that would be awesome! While I doubt Dr. Holt will win one, it would be good to have someone as Prez who knows SOME science so he won't say stupid things about global warming. (Counter question- do the climate change deniers in the Senate really believe what they are saying or not?) He's also a Quaker, not sure how that plays into all of this. (I had a longer and incorrect passage here, but thanks to Andy P's comment below I changed it and it is now correct--- I hope.)
Would having a scientist in the White House be good for Science Funding? I would guess a marginal yes. Would it improve my chance of getting my REU grant renewed? I would guess no.
Sheldon Whitehouse. US Senator from Rhode Island. Look at his name- he was born to be Prez!
You may think these are stupid criteria. You may be right. But are they any worse than ``I voted for X in the primary since X has a better chance of winning in the general'' (one of Clinton's arguments against Obama in 2008 was that Obama couldn't win since... well, you know), or ``he looks just like a prez'' (Warren Harding's main qualification), or ``he's rich and obnoxious'' (I would say who I am thinking of, but he's been known to sue people)?
Computational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch
Sunday, August 30, 2015
Thursday, August 27, 2015
PACM
I serve on the conference committee of the ACM publications board and we've had extensive discussions on the question of the role of journals in publication venues. A number of CS conferences, though notably not in TCS, are moving to a hybrid publication model where their conference presentations make their way into refereed journal papers. One of our proposals is the creation of a specific venue for these activities, a new Proceedings of the ACM. In the September CACM, Joseph Konstan and Jack Davidson lay out this proposal, with pros and cons by Kathryn McKinley and David Rosenblum respectively. The community (that means you) is being asked to give its input.
The theory model has not significantly changed since I was a student. Papers submitted to a conference get reviewed but not refereed; the proofs are read over usually just enough to feel confident that the theorem is likely correct. Once authors started submitting electronically they could submit entire proofs, though often in an appendix the program committee is not required to read.
The papers appear in a proceedings and, to quote from the STOC proceedings preface:
The submissions were not refereed, and many of these papers represent reports of continuing research. It is expected that most of them will appear in a more polished and complete form in scientific journals.

A small select number of papers from a conference are invited to a special issue of a journal, where they do go through the full referee process. Some, but not most, of the other papers get submitted to journals directly. We don't have the proper incentives for authors to produce a journal version with full and complete proofs.
Should theory conferences move towards a more hybrid or PACM type of model? I've had several debates with my fellow theorists, many of whom feel the advantages of requiring journal-level papers get outweighed by the extra effort and time required of the authors and the reviewers.
Sunday, August 23, 2015
Interesting properties of the number 24 on someone's 24th wedding anniversary
The story you are about to read is true. Only the names have been changed to protect the innocent. The Alice and Bob below are not the crypto Alice and Bob.
------------------------------------------------------
BOB (to ALICE): It's our 24th anniversary! Last year when it was our 23rd anniversary we celebrated by having you tell me that 23 is one of only two numbers (the other is 239) that require 9 cubes to sum to them, and that it's open how many cubes you need for large n, though it's between 4 and 7. Oh that was fun! What do you have planned for our 24th anniversary!
ALICE (to BOB): I've prepared FIVE facts about 24! Oh, I mean 24, not 24 factorial! We'll see which one you want to discuss. Here they are:
1) 24 is the largest nat number n such that all nat numbers m ≤ √n divide n.
2) 24 is the least nat number that has exactly 8 distinct divisors (1, 2, 3, 4, 6, 8, 12, 24).
3) 24 is the ONLY number m ≥ 2 such that 1^2 + 2^2 + ... + m^2 is a square. The sum is 70^2, so if we are married 70 years, I'll have an interesting fact about 70.
4) Let S be an n-sphere. How many spheres of the same size as S can kiss S? That's the kissing number, called kiss(n). kiss(2)=6 (so given a circle you can position 6 identical circles that kiss it), kiss(3)=12 (that's 3-dim), and kiss(4)=24.
5) It's one of the few numbers that is the title of a TV show.
BOB: Since it's our anniversary I'll go with the kissing number! Mathematically I'd go with the square thing.
---------------------------------------------------
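For readers who want to check Alice (and last year's cube fact), here is a quick sketch in Python; the helper functions are my own, not from any library:

```python
def min_cubes(n):
    """Fewest positive cubes summing to n, by dynamic programming."""
    INF = float('inf')
    best = [0] + [INF] * n
    for m in range(1, n + 1):
        c = 1
        while c ** 3 <= m:
            best[m] = min(best[m], best[m - c ** 3] + 1)
            c += 1
    return best[n]

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def all_small_divide(n):
    """Does every m <= sqrt(n) divide n?"""
    m = 1
    while m * m <= n:
        if n % m != 0:
            return False
        m += 1
    return True

# Last year's fact: 23 needs 9 cubes.
assert min_cubes(23) == 9

# Fact 1: every m <= sqrt(24) divides 24, and no larger n has this
# property (checked here up to 10000).
assert all_small_divide(24)
assert all(not all_small_divide(n) for n in range(25, 10000))

# Fact 2: 24 is the least number with exactly 8 divisors.
assert len(divisors(24)) == 8
assert all(len(divisors(n)) != 8 for n in range(1, 24))

# Fact 3: 1^2 + 2^2 + ... + 24^2 = 4900 = 70^2.
assert sum(i * i for i in range(1, 25)) == 70 ** 2
```

(The ONLY-m claims in facts 1 and 3 hold for all m, not just the ranges a script checks; the code just spot-checks them.)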
I am sure that for all numbers ≤ 94 one can come up with some facts of interest. The book Those Fascinating Numbers has an interesting fact about many numbers. The least number that it has no interesting fact about is 95, but I suspect Alice and Bob won't be married that long.
What are the coolest numbers (mathematically)? See here for a possible answer.
Thursday, August 20, 2015
Crowdsourcing the Truth
A new project Augur aims to create a decentralized prediction market. If this post so moves you, Augur is in the midst of a reputation sale. Don't miss out if you would like to be an Augur reporter.
A prediction market takes a future event, such as Hillary Clinton winning the 2016 Presidential Election, and creates a security that pays off $100 if Hillary wins and $0 otherwise. The market allows buying, selling and short selling the security, and the price of the security represents the probability that the event will happen. Predictwise, which aggregates prediction markets, has the probability of Hillary winning at 47% as I write this. But there are a limited number of markets out there for Predictwise to draw from.
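The price-to-probability reading can be sketched in a few lines of Python (the function names are mine, for illustration only):

```python
# A binary security pays $100 if the event happens, $0 otherwise.
PAYOFF = 100.0

def implied_probability(price):
    """The market price, as a fraction of the payoff, is the
    market's implied probability of the event."""
    return price / PAYOFF

def expected_profit(price, believed_probability):
    """Expected gain per contract for a trader who buys at `price`
    while believing the event happens with `believed_probability`."""
    return believed_probability * PAYOFF - price

# Predictwise-style reading: a $47 price means a 47% chance.
assert implied_probability(47.0) == 0.47

# A trader who believes the true chance is 50% expects $3 of profit
# per contract at that price, so she buys, pushing the price up.
assert expected_profit(47.0, 0.5) == 3.0
```

This buying-and-selling pressure is what keeps the price near the traders' aggregate belief.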
Intrade, which shut down due to financial improprieties in 2013, used to run markets on all aspects of elections and other current events. Many other prediction markets have disappeared over time. The Augur team put out a white paper describing their fully decentralized prediction market immune to individuals bringing it down. They build on cryptocurrencies for buying and selling and a group of "reporters" financially incentivized to announce the correct answer for each market.
It's that last part I find the most interesting: instead of having an authority that reports the results, Augur will crowdsource the truth.
A key feature of Augur is tradeable Reputation. The total amount of Reputation is a fixed quantity, determined upon the launch of Augur. Holding Reputation entitles its owner to report on the outcomes of events, after the events occur... Reputation tokens are gained and lost depending on how reliably their owner votes with the consensus.

Consensus may not be "the truth". Reporters are not incentivized to report the truth but what the other reporters will report as the consensus. Those who buy and sell on the market are not betting on the truth but on the outcome as it is decided by the consensus of reporters. We have an ungrounded market.
The purchasers of reputation tokens likely won't represent the public at large and biases may come to play. Would this consensus have agreed that Bush won the 2000 election?
What would have the consensus done on the North Korea controversy?
[The Intrade security] allowed speculation on whether North Korea would, by 31 July 2006, successfully fire ballistic missiles that would land outside its airspace. On 5 July 2006 the North Korean government claimed a successful test launch that would have satisfied the prediction, a launch widely reported by world media. [Intrade] declared that the contract's conditions had not been met, because the US Department of Defense had not confirmed the action, and this confirmation was specifically required by the contract. (Other government sources had confirmed the claim, but these were not the sources referenced in the contract.) Traders considered this to be in strict compliance with the stated rule but contrary to the intention of the market (which was to predict the launch event, and not whether the US Defense Department would confirm it).

Since Augur will allow arbitrarily worded contracts on topics such as proofs of P v NP, the consensus reporting may lead to some interesting future blog posts.
Despite my reservations, I'm quite excited to see Augur set up a prediction market system that can't be shut down and wish them all the luck. You have until October 1 to buy reputation if you want to be deciding the truth instead of just predicting it.
Sunday, August 16, 2015
Have we made Progress on P vs NP?
While teaching P vs NP in my class Elementary Theory of Computation (Finite Automata, CFGs, P-NP, Dec-undecid) I was asked: What progress has been made on P vs NP?
I have heard respectable theorists answer this question in several ways:
1) There has been no progress whatsoever- but the problem is only 40 years old, a drop in the mathematical bucket sort.
2) There has been no progress whatsoever- this is terrible since 40 years of 20th and 21st century mathematics is a lot, and we already had so much to draw on. We are in it for the long haul.
3) We have made progress on showing some techniques will not suffice, which is progress--- of a sort.
4) We have made progress on showing P=NP: Barrington's result, FPT, Holographic algorithms, SAT in PCP with O(1) queries. Too bad- since P NE NP we've made no progress.
5) We have made progress on showing P=NP: Barrington's result, FPT, Holographic algorithms, SAT in PCP with O(1) queries. We should not be closed minded to the possibility that P=NP. (NOTE- other theorists say YES WE SHOULD BE CLOSED MINDED.)
6) Note that:
a) We have pathetic lower bounds on real models of computation.
b) We have Meaningful lower bounds on pathetic models of computation.
c) We DO NOT have meaningful lower bounds on real models of computation.
7) Have we made progress? Once the problem is solved we'll be able to look back and say what was progress.
I told the students my view: We have made progress on showing some techniques will not suffice, which is progress--- of a sort, which my class found funny. Then again, students find the fact that there are undecidable sets--- and sets even harder than that!--- funny too.
Thursday, August 13, 2015
What is Theoretical Computer Science?
Moshe Vardi asks a provocative question in Windows on Theory and CACM: "Why doesn't ACM have a SIG for Theoretical Computer Science?" My reaction, and that of many of my fellow Americans, is that the question has a false premise. SIGACT, the ACM special interest group for algorithms and computation theory, plays this role, as stated in its mission:
SIGACT is an international organization that fosters and promotes the discovery and dissemination of high quality research in theoretical computer science (TCS), the formal analysis of efficient computation and computational processes. TCS covers a wide variety of topics including algorithms, data structures, computational complexity, parallel and distributed computation, probabilistic computation, quantum computation, automata theory, information theory, cryptography, program semantics and verification, machine learning, computational biology, computational economics, computational geometry, and computational number theory and algebra. Work in this field is often distinguished by its emphasis on mathematical technique and rigor.

Theoretical computer science in Europe has a much different balance, putting as much or even more emphasis on automata and logic as it does on algorithms and complexity. So from that point of view (a view shared by Moshe, who has strong ties to CS logic) SIGACT does not cover the full range of TCS.
The term "theoretical computer science" just doesn't have a universal meaning. Neither definition is right or wrong, though we all have our biases.
Why does TCS have such a different meaning in Europe and the US? A different research culture and relatively little mixing. Very few North Americans go to grad school, take postdocs or faculty positions in Europe, very few Europeans go the other way, and those that did tended to do algorithms and complexity. Until we started to see web-based archives in the mid 90's, distribution of research between Europe and the US went quite slowly. Things have changed since, but by then the different notions of TCS had been set.
SIGACT fully acknowledges that it doesn't do a good job covering the logic community and it has always strongly supported SIGLOG, the special interest group for logic and computation.
I would love to see joint events between SIGACT and SIGLOG. LICS should be part of FCRC with STOC and Complexity or hold a co-located meeting in other years. But SIGACT does do a great job representing the theoretical computer science community in North America.
Monday, August 10, 2015
Ways to deal with the growing number of CS majors.
(Univ of MD at College Park is looking to hire a Comp Sci Lecturer. Here is the link: HERE)
Univ of MD at College Park will have 2100 students in the CS program next year. That's... a lot! CS is up across the country, which is mostly a good thing but does raise some logistical questions. How are your schools handling the increase in the number of CS students? Here are some options I've heard people use:
1) Hiring adjuncts who come in, teach a course, and leave is good economically (they don't get paid much) but bad for the long term. Better to have people that are integrated into the dept (lecturers are, see next point). Also, CS is a changing field, so you can't just give someone some notes and slides and say TEACH THIS. For Calculus you probably can, and they can even do a good job. Calculus doesn't change as fast, or maybe even at all. I envy my friends in math departments who can count on a stable first year course that everyone in the dept can teach. They envy CS profs who have the freedom to have diff versions of CS1 in diff schools. Even in diff semesters!
2) Hiring lecturers who are full time and have a high teaching load is good in terms of them being fully integrated into the dept and being involved with course syllabus changes. Also, if they stay long term there is stability. If the first year courses are only taught by lecturers this may be bad abstractly, as profs should be in touch with all levels of the dept. I once tried to make this point over lunch at a Dagstuhl meeting and before I could even finish they shouted me down and asked if I wanted to teach Intro Programming. I do not, so I'll just shut up now.
3) Don't let profs buy out. I've heard of this at some schools as a way to at least have the profs that are there teaching. Alternatively, only let profs buy out of grad courses. I don't know if either is a good idea. For one thing, the money used to buy out can be used to hire someone else. But that goes back to point 1- not really good to have part timers. If other profs teach overloads with the money, that sounds okay. If half of the profs are paying the other half to teach for them, that sounds odd, but I'm not sure it's bad. Also, it would never get that extreme.
4) Postdocs sometimes teach a course for extra money. This is good for them for teaching experience and resume, and if they are teaching a course with someone else this can work well. However, if a postdoc's point is to get more research done, this will of course cut into that.
5) Grad students sometimes teach a course. I knew a grad student in math who was teaching the junior course in number theory. I asked her if this was an honor or exploitation. She just said YES.
6) Increase class size. Going from lecturing to 80 to lecturing to 300 might be okay, though (a) you NEED to use powerpoint or similar and have resources on the web, maybe also Piazza, (b) you NEED to have LOTS of recitations so they at least are small, and (c) you NEED to have high quality TAs for the recitations. In some schools it's even a problem getting ROOMS of that size!
7) HIRE MORE PROFS! Profs have lower teaching loads than lecturers, and in CS it's harder to teach outside of your area than in (say) Math. (How is it for other fields? If you know, please comment.) Should you hire based on research needs or teaching needs? If a recursive model theorist can teach graphics, that might be a real win if you NEED research in recursive model theory but also NEED someone to teach graphics.
8) Not a suggestion but a thought- IF many of the new majors aren't very good then many might flunk out of the major in the first year, so this is not a problem for Sophomore, Junior, Senior courses. At least at UMCP this does NOT seem to be the case. To rephrase: many of the new majors are good and do not flunk out. I call that good news! But what about at your school?
9) What does your school do? Does it work?
Thursday, August 06, 2015
How hard would this cipher be for Eve to crack?
I've been looking at old ciphers since I am teaching a HS course on Crypto. We've done shift, affine, matrix, Playfair, 1-time pad, Vigenère, and then, noting that in all of the above Alice and Bob need to meet, we did Diffie-Hellman.
The Playfair cipher is interesting in that it gives a compact way to obtain a mapping from pairs of letters to pairs of letters that (informally) LOOKS random. Of course NOW it would not be hard for Alice and Bob to AGREE on a random mapping of {a,b,...,z}^2 to {a,b,...,z}^2 and use that. For that matter, Alice and Bob could agree on a random mapping from {a,b,...,z}^k to {a,b,...,z}^k for some reasonable values of k.
So consider the following cipher with parameter k: Alice and Bob generate a RANDOM 1-1, onto mapping of {a,...,z}^k to {a,...,z}^k. To send a message Alice breaks it into blocks of k and encodes each block using the mapping they agree on.
Question One: How big does k have to be before this is impractical for Alice and Bob?
Question two: If k=1 then Eve can easily break this with freq analysis of single letters. For k=2 Eve can easily break this with freq analysis of digraphs (pairs of letters). Is there a value of k such that the distribution of k-grams is not useful anymore? As k goes up does it get easier or harder to crack? I originally thought harder, which is why I thought this would be a good code, but frankly I don't really know. Looking over the web it is EASY to find freq of letters and digraphs, but very hard to find any source for freq of (say) 10-grams. So at the very least Eve would need to build up her own statistics--- but of course she could do this.
SO, here is the question: Is there a value of k such that this code is easy for Alice and Bob but hard for Eve TODAY? How about X years ago?
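The cipher itself, and some arithmetic for Question One, can be sketched in Python (the function names and the test message are mine, for illustration):

```python
import itertools
import math
import random
import string

def make_key(k, seed=0):
    """A random bijection on blocks of k letters (feasible only for tiny k:
    the table has 26^k entries)."""
    blocks = [''.join(t) for t in
              itertools.product(string.ascii_lowercase, repeat=k)]
    shuffled = blocks[:]
    random.Random(seed).shuffle(shuffled)
    return dict(zip(blocks, shuffled))

def encrypt(msg, key, k):
    assert len(msg) % k == 0
    return ''.join(key[msg[i:i + k]] for i in range(0, len(msg), k))

def decrypt(ct, key, k):
    inverse = {v: c for c, v in key.items()}
    return ''.join(inverse[ct[i:i + k]] for i in range(0, len(ct), k))

k = 2
key = make_key(k)
assert decrypt(encrypt('attackatdawn', key, k), key, k) == 'attackatdawn'

# Question One, quantified: there are (26^k)! possible keys, and Alice
# and Bob must share one whole table.  Even k = 3 is a 17,576-entry
# table, and log2((26^3)!) is already over 200,000 bits of key material.
bits = math.lgamma(26 ** 3 + 1) / math.log(2)  # log2((26^3)!)
assert bits > 200_000
```

So the key itself becomes the bottleneck well before Eve's frequency statistics do.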
Sunday, August 02, 2015
17 candidates, only 10 in the debate- what to do?
On Thursday Aug 6 there will be a Republican debate among 10 of the 17 (yes 17) candidates for the Republican nomination.
1) There are 17 candidates. Here is how I remember them: I think of the map of the US and go down the east coast, then over to Texas, then up. That only works for the candidates that are or were Senators or Governors. I THEN list the outsiders. Hence my order is (listing their last job in government): George Pataki (Gov-NY), Chris Christie (Gov-NJ), Rick Santorum (Sen-PA), Rand Paul (Sen-KY), Jim Gilmore (Gov-VA), Lindsey Graham (Sen-SC), Jeb Bush (Gov-FL), Marco Rubio (Sen-FL), Bobby Jindal (Gov-LA), Ted Cruz (Sen-TX), Rick Perry (Gov-TX), Mike Huckabee (Gov-AR), Scott Walker (Gov-WI), John Kasich (Gov-OH), Donald Trump (Businessman), Ben Carson (Neurosurgeon), Carly Fiorina (Businesswoman).
9 Govs, 5 Sens, 3 outsiders.
2) Having a debate with 17 candidates would be insane. Hence they decided a while back to have the main debate with the top 10 in the average of 5 polls, and also have a debate with everyone else. There are several problems with this: (a) candidates hovering around slots 9, 10, 11, 12 are closer together than the margin of error, (b) the polls are supposed to measure what the public wants, not dictate things, (c) the polls were likely supposed to determine who the serious candidates are, but note that Trump is leading the polls, so that's not quite right. QUESTION: Let's say that Chris Christie is at 2% with a margin of + or - 3%. Could he really be at -1%?
3) A better idea might be to RANDOMLY partition the field into two groups, one of size 8 and one of size 9, and have two debates that way. What randomizer would they use? The field is small enough that they really could just put slips of paper in a hat and draw them. If they had many more candidates we might use Nisan-Wigderson.
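The slips-in-a-hat randomizer is a few lines of Python (a sketch; the helper name is mine):

```python
import random

CANDIDATES = [
    "Pataki", "Christie", "Santorum", "Paul", "Gilmore", "Graham",
    "Bush", "Rubio", "Jindal", "Cruz", "Perry", "Huckabee",
    "Walker", "Kasich", "Trump", "Carson", "Fiorina",
]

def random_debates(field, seed=None):
    """Shuffle the field and split it into groups of 8 and 9."""
    hat = field[:]
    random.Random(seed).shuffle(hat)
    return hat[:8], hat[8:]

debate_a, debate_b = random_debates(CANDIDATES)
assert len(debate_a) == 8 and len(debate_b) == 9
assert sorted(debate_a + debate_b) == sorted(CANDIDATES)
```

No poll margins of error, and nobody gets declared serious or unserious by a 1% difference.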
4) How did they end up with 17 candidates?
a) Being a Candidate is not a well defined notion. What are the criteria to be a candidate? Could Lance Fortnow declare that he is a candidate for the Republican Nomination (or for that matter the Democratic nomination)? YES. He's even written some papers on Economics, so I'd vote for him over... actually, any of the 17. RUN LANCE RUN! So ANYONE who wants to run can! And they Do! I'm not sure what they can do about this--- it would be hard to define ``serious candidate'' rigorously.
b) The debate is in August but the Iowa Caucus isn't until Feb 1. So why have the debate now? I speculate that they wanted to thin out the field early, but this has had the opposite effect--- LOTS of candidates now want to get into the debates.
c) (I've heard this) Campaign Finance laws have been gutted by the Supreme Court, so if you just have ONE mega-wealthy donor you have enough money to run. Or you can fund yourself (NOTE- while Trump could fund himself, so far he hasn't had to, as the media is covering him so much).
d) Because there are so many, and no dominating front runner, they all think they have a shot at it. So nobody is dropping out. Having a lot of people running makes more people want to run. (All the cool kids are doing it!)