Thursday, December 27, 2012

2012 Complexity Year in Review

As we turn our thoughts from Turing to Erdős, a look back at the complexity year that was. Written with help from co-blogger Bill Gasarch.

The complexity result of the year goes to Samuel Fiorini, Serge Massar, Sebastian Pokutta, Hans Raj Tiwary and Ronald de Wolf for their paper Linear vs. Semidefinite Extended Formulations: Exponential Separation and Strong Lower Bounds (ArXiv version). It is easy to show that TSP can be expressed as an exponentially sized Linear Program (LP). In 1987 Swart tried to show that TSP could be solved with a poly sized LP. While his attempt was not successful, it did inspire Yannakakis to look at the issue of how large an LP for TSP has to be. He showed in 1988 that any symmetric LP for TSP has to be of exponential size. (Swart had used symmetric LPs.)

What about asymmetric LPs? This has been open UNTIL NOW! The paper above proves that any LP formulation of TSP requires an exponentially sized LP. They use communication complexity and techniques that were inspired by quantum computing.

Runners Up
And don't forget the solution to Bill's 17x17 problem.

News and trends: The new Simons Institute for the Theory of Computing at Berkeley, the great exodus of Yahoo! researchers mostly to Google and Microsoft, the near death of Florida computer science (anyone want to be chair?), and the rise of the MOOCs. 

We remember Dick de Bruijn, Tom Cover, Mihai Pătraşcu, Ernst Specker and David Waltz, not to mention Neil Armstrong and Ray Bradbury.

Thanks to our guest posters Bernard Chazelle, Yoav Freund, Andrew Goldberg, Mohammad Taghi Hajiaghayi, William Heisel, Lane Hemaspaandra, John Purtilo, Janos Simon and Vijay Vazirani.

Enjoy 2013 and remember that when living in a complex world, best to keep it simple. And try not to fall off that fiscal cliff.

Thursday, December 20, 2012

Flash Fill

Sometimes it just takes a simple new feature in a popular piece of software to remind us how computer science just does cool stuff.

Excel 2013 has a new feature, Flash Fill, where you can reformat data by giving an example or two. If you have a column of names like

Manuel Blum
Steve Cook
Juris Hartmanis
Richard Karp
Donald Knuth

You can start a column to the right and type
Blum, M.
Cook, S.
and the rest of the table gets filled in automatically.

Flash Fill is based on a 2011 POPL paper by Sumit Gulwani (later a CACM highlight). It's been explained to me as applying machine learning to binary decision diagrams.

Flash Fill allows a user to manipulate data without having to write macros or Perl scripts. Someone with no technical background can use Flash Fill and enjoy the CS goodness inside without even knowing it is there.
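To get a feel for what is going on under the hood, here is a toy sketch of programming-by-example. This is NOT Gulwani's actual algorithm (which searches a rich string-transformation DSL and ranks candidate programs); it is just a greedy one-example learner over whole tokens, first initials, and literal characters, and it is easily fooled by ambiguous examples:

```python
def learn(example_in, example_out):
    """Parse the output left to right: at each position, try each input token
    (whole token first, then its first letter); otherwise copy the literal char."""
    tokens = example_in.split()
    prog, j = [], 0
    while j < len(example_out):
        for i, tok in enumerate(tokens):
            if example_out.startswith(tok, j):
                prog.append(('tok', i))
                j += len(tok)
                break
            if example_out[j] == tok[0]:
                prog.append(('init', i))
                j += 1
                break
        else:
            prog.append(('lit', example_out[j]))
            j += 1
    return prog

def apply_prog(prog, s):
    """Replay the learned program on a new input row."""
    tokens = s.split()
    return ''.join(tokens[x] if op == 'tok'
                   else tokens[x][0] if op == 'init'
                   else x
                   for op, x in prog)

prog = learn('Manuel Blum', 'Blum, M.')
# apply_prog(prog, 'Steve Cook') gives 'Cook, S.', and so on down the column
```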

Monday, December 17, 2012

Goodstein Sequences (It's his 100th birthday!)

LANCE: Bill, on Dec 15, 2012 it will be
Reuben Goodstein's
100th birthday.

BILL: Is he still alive? Will there be a conference in his honor? Free Food? Cake?

LANCE: No, no, and no. But you could blog about Goodstein sequences. (thinking: or they could just look it up here).

BILL: That is a Goodstein idea. You have a Goodstein sequence of good ideas.

I first define a sequence that is not a Goodstein sequence but will be good for education. Let n be a number, say n = 42 in base 10. We then decrease the number by 1 but increase the base by 1. We keep doing this. We get

  1. 42 base 10 = 42 base 10
  2. 41 base 11 = 45 base 10
  3. 40 base 12 = 48 base 10
  4. 3E base 13 = 50 base 10 (E stands for Eleven)
  5. 3T base 14 = 52 base 10 (T stands for Ten)
  6. 39 base 15 = 54 base 10
The sequence looks like it's increasing. But note that it eventually gets to
  1. 30 base 24 = 72 base 10
  2. 2(23) base 25 = 73 base 10 (the ``units digit'' is 23)
  3. 2(22) base 26 = 74 base 10
OH MY. It's still increasing. But note that it eventually gets to
  1. 20 base 48 = 96 base 10
  2. 1(47) base 49 = 96 base 10
  3. 1(46) base 50 = 96 base 10
  4. 1(45) base 51 = 96 base 10
It seems to stay at 96 for a while. Indeed we eventually get to
  1. 11 base 95 = 96 base 10
  2. 10 base 96 = 96 base 10
  3. 95 base 97 = 95 base 10 (now a single digit)
And from there on it goes down to 0. Given n, how many iterations do you need to get to 0? This function grows rather fast, but not THAT fast. To prove that it goes to 0 you need (I think) an induction on an omega-squared ordering. The true Goodstein sequences initially write the number in base 10 but also write the exponents in base 10, and do that as far as they can, for example: 4 x 10^(2 x 10^4 + 8 x 10^3 + 7 x 10^2) + 8 x 10^(2 x 10^1) + 3 x 10^0

(This is called hereditary base 10 notation.) At each iteration we subtract 1 and then turn all of the 10's into 11's. More generally, we write the number in base b, write the exponents in base b, and so on as far as possible; then we subtract 1 and turn all of the b's into b+1's. This sequence also goes down to 0. NOW how long does it take to go to 0? The resulting function is not primitive recursive. Also, the theorem that the sequence eventually goes to 0 cannot be proven in Peano Arithmetic.
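The first (non-hereditary) sequence is easy to experiment with. A minimal sketch that subtracts 1, reinterprets the base-b digits in base b+1, and records each value in base 10 (extending it to true Goodstein sequences with hereditary notation is a nice exercise):

```python
def weak_goodstein(n, base=10):
    """The toy sequence above: subtract 1, then reinterpret the
    base-b digit string in base b+1; record each value in base 10."""
    seq = [n]
    v, b = n, base
    while v > 0:
        v -= 1
        # digits of v in base b, least significant first
        digits = []
        x = v
        while x:
            digits.append(x % b)
            x //= b
        # reinterpret those digits in base b+1
        v = 0
        for d in reversed(digits):
            v = v * (b + 1) + d
        b += 1
        seq.append(v)
    return seq

seq = weak_goodstein(42)  # starts 42, 45, 48, 50, 52, 54, ... and ends at 0
```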

I use Goodstein sequences as an example of a natural (one can debate that) function that is not primitive recursive. Ackermann's function comes up more often (lots more often) but is harder to initially motivate.

So Reuben Goodstein, wherever you are, I salute you!

Thursday, December 13, 2012

The Fiscal Cliff

Unless the Republicans and Democrats reach a deal before year's end, the country will head over the so-called "Fiscal Cliff," including a budget sequestration that will cause automatic cuts to most federal agencies. This would be a disaster for science: grants will be delayed or not awarded, and there may even be spending freezes on current funds. University presidents have banded together in my old state and new to drive this point home.

Don't panic too much about the fiscal cliff itself, which will be short-lived if it happens at all. But the aftermath may lead to deep short- or long-term budget cuts in science. If charitable deductions are eliminated, that could be a large hit on university endowments. Pell grants might also be in play.

On the other hand, science and education have their friends in Washington, starting with Barack Obama. So maybe the whole fiscal mess will work out just fine for us and we can go back to worrying about whether universities will be decimated by MOOCs.

Tuesday, December 11, 2012

Book review column

(I am a bit behind on posting links to my book review column. This post is about the third (of four) for the year 2012; the fourth of four has already appeared, and I'll post about that one later.)

My second-to-last book review column can be found here. The file pointed to does NOT include the BOOKS I NEED REVIEWED since that has changed. For that go here.

The column has an editorial! In a book review column for CS theory books?! It is an editorial against the high prices of books and the fact that many books are NOT online. These are well-known problems, so what of it? I could say why the publishers are at fault, but frankly, I don't know the economics of the book market. So instead I recommend the community do the following:

  1. Only buy books used or at a cheaper than list price (you probably already do this).
  2. If you write a book then insist that the contract allow it to be online for free. This is not as outlandish as it sounds: (1) Blown to Bits by Abelson, Ledeen, Lewis (which I reviewed in this column) is a book for a wide audience--- it is a popular book about computers and society. Even so, it is available online for free here. (2) Some publishers of math and science books allow the author to have the book free online.
  3. When assigning a course textbook, allow students to get earlier editions, which are usually far cheaper. There is no reason to insist they get the most recent edition.
Also of interest with regard to all of this- the Kirtsaeng v. Wiley case: here

Thursday, December 06, 2012

Where is the youth of TCS/Market Eq/Guest Post by Vijay

(Guest post by Vijay Vazirani.)

Theory Day on Nov 30, 2012
Vijay Vazirani gave a talk:
New (Practical) Complementary Pivot Algorithms for Market Equilibrium.
He was inspired by the reaction to the talk to write a guest blog
which I present here!

Where is the Youth of TCS?

by Vijay Vazirani

I have always been impressed by the researchers of our community, especially the young researchers -- highly competent, motivated, creative, open-minded ... and yet cool! So it has been disconcerting to note that over the last couple of years, each time I have met Mihalis Yannakakis, I have lamented over the lack of progress on some fundamental problems, and each time the same thought has crossed my mind, ``Where is the youth of TCS? Will us old folks have to keep doing all the work?''

Is the problem lack of information? I decided to test this hypothesis during my talk at NYTD. To my dismay, I found out that there is a lot of confusion out there! By a show of hands, about 90% of the audience said they believed that Nash Equilibrium is PPAD-complete and 3% believed that it is FIXP-complete! I would be doing a disservice to the community by not setting things right, hence this blog post.

First a quick primer on PPAD and FIXP, and then the questions. Ever since Nimrod Megiddo, 1988, observed that proving Nash Equilibrium NP-complete is tantamount to proving NP = co-NP, we have known that the intractability of equilibrium problems will not be established via the usual complexity classes. Two brilliant pieces of work gave the complexity classes of PPAD (Papadimitriou, 1990) and FIXP (Etessami and Yannakakis, 2007), and they have sufficed so far. A problem in PPAD must have rational solutions and this class fully characterizes the complexity of 2-Nash, which has rational equilibria if both payoff matrices have rational entries. On the other hand, 3-Nash, which may have only irrational equilibria, is PPAD-hard; however, its epsilon-relaxation is PPAD-complete. That leaves the question, ``Exactly how hard is 3-Nash?''

Now it turns out that 3-Nash always has an equilibrium consisting of algebraic numbers. So one may wonder if there is an algebraic extension of PPAD that captures the complexity of 3-Nash, perhaps in the style of Adler and Beling, 1994, who considered an extension of linear programs in which parameters could be set to algebraic numbers rather than simply rationals. The class FIXP accomplishes precisely this: it captures the complexity of finding a fixed point of a function that uses the standard algebraic operations and max. Furthermore, Etessami and Yannakakis prove that 3-Nash is FIXP-complete. The classes PPAD and FIXP appear to be quite disparate: whereas the first is contained in NP INTERSECT co-NP, the second lies somewhere between P and PSPACE (and closer to the harder end of PSPACE, according to Yannakakis).

Now the questions (I am sure there are more):

  1. Computing an equilibrium for an Arrow-Debreu market under separable, piecewise-linear concave utilities is PPAD-complete (there is always a rational equilibrium). On the other hand, if the utility functions are non-separable, equilibrium consists of algebraic numbers. Is this problem FIXP-complete? What about the special case of Leontief utilities? If the answer to the latter question is ``yes,'' we will have an interesting demarcation with Fisher markets under Leontief utilities, since they admit a convex program.
  2. An Arrow-Debreu market with CES utilities has algebraic equilibrium if the exponents in the CES utility functions are rational. Is computing its equilibrium FIXP-complete? Again, its epsilon-relaxation is PPAD-complete.
  3. A linear Fisher or Arrow-Debreu market with piecewise-linear concave production has rational equilibria if each firm uses only one raw good in its production, and computing it is PPAD-complete. If firms use two or more raw goods, equilibria are algebraic numbers. Is this problem FIXP-complete?

Monday, December 03, 2012

Too Much Competition?

On Friday the New York Times ran an article on how online retailers constantly adjust prices to match their competitors. Their ability builds on many tools from computer science, from networks to algorithms, not unlike airlines and hedge funds. But is this good for the consumer?

Suppose I work for an online retailer and have a television priced at $500, which matches the price on Amazon. If I lower my price to $450, then I can expect Amazon to do the same. I've gained little from lowering the price, and now make $50 less per sale than I did before. Likewise, Amazon has little incentive to lower their price. This hypercompetition can actually lead to collusion without colluding. Hal Varian talked a similar theme of price guarantees in a 2007 Times viewpoint.
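The collusion-without-colluding effect shows up already in a toy 2x2 pricing game with hypothetical numbers (a $500 vs. $450 choice over a 100-unit market): undercutting looks great in isolation, but if any cut is instantly matched, both sellers just end up at the lower price earning less, so neither has an incentive to cut:

```python
def profit(p_mine, p_theirs, market=100):
    """Toy demand model: the lower price takes all 100 units; ties split evenly."""
    if p_mine < p_theirs:
        return p_mine * market
    if p_mine > p_theirs:
        return 0
    return p_mine * market // 2

# One-shot, undercutting wins the whole market...
assert profit(450, 500) > profit(500, 500)
# ...but with instant price matching the cut is mirrored, and both earn less:
assert profit(450, 450) < profit(500, 500)
```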

In practice, companies still lower their prices but as networks get faster, algorithms get smarter and more people shop online, we might actually see higher prices and less competition, which I believe is already happening with the airlines.

Wednesday, November 28, 2012

App Love

First a shout out to our friends up north on the 30th anniversary of the New York Theory Day this Friday.

Just two years ago I wrote a post Gadget Love, but now I don't use many gadgets any more; it's all built into my iPhone and iPad. No wonder a company like Best Buy is having problems. Not only do they have to compete against Amazon, they also compete against the Apple App Store.

Some of my favorite apps:

Goodreader - Manage, view and mark-up PDF files. Syncs with Dropbox and nearly every other file sharing service.

Evernote - Manages short notes and photos. I often just take pictures of a whiteboard and save them to Evernote.

JotNot - The iPhone becomes a scanner.

Those three apps let me lead a nearly paperless life.

WolframAlpha - Whatever you think of Stephen Wolfram, this is still a very useful tool.

TripIt - Forward your emails from airlines, rental cars and hotels to TripIt and it organizes all your info. Invaluable when traveling.

ComplexityZoo - OK, I rarely use it, but it's pretty cool that there's a free app that lets you look up complexity classes.

There are many many great and not-so-great apps. I have seven pages of apps on my iPhone. But I can always download more so tell me some of your favorites.

Monday, November 26, 2012

Inverse Closure Problem

In the undergraduate complexity course we spend some time on closure properties such as (1) REG closed under UNION, INTER, COMP and (2) R.E. closed under UNION and INTER but NOT COMP.

I propose the following inverse problem: For all possible assignments of T and F to UNION, INTER, COMP, find (if it is possible) a class of sets that is closed exactly under those that are assigned T. I would like the sets to be natural. I would also like to have some examples from complexity theory, which I denote CT (e.g., Reg, P, R.E.), and some that are not, which I denote HS since a High School student could come up with them (e.g., the set of all infinite sets). If I want other examples I will say OTHERS? We assume our universe is {a,b}*. Note that by De Morgan's laws, a class closed under COMP and one of UNION or INTER is also closed under the other, so the assignments (T,F,T) and (F,T,T) are impossible; that leaves exactly the six cases below.

    UNION: T, INTER: T, COMP: T
    1. CT: Reg, P, Poly Hier, PSPACE, Prim Rec, Decidable, Arithmetic Hier.
    2. HS: Set of all subsets of {a,b}*. OTHERS?
    UNION: T, INTER: T, COMP: F
    1. CT: NP (probably), R.E. OTHERS?
    2. HS: Set of all finite subsets of {a,b}*. OTHERS?
    UNION: T, INTER: F, COMP: F
    1. CT: Context Free Langs. OTHERS?
    2. HS: Set of all infinite subsets of {a,b}*. OTHERS?
    UNION: F, INTER: F, COMP: T
    1. CT: The set of all regular langs that are accepted by a DFA with ≤ 2 states. (Can replace 2 with any n ≥ 2.) OTHERS?
    2. HS: Set of all subsets of {a,b}* that are infinite AND their complements are infinite. OTHERS?
    UNION: F, INTER: T, COMP: F
    1. CT: I can't think of any- OTHERS?
    2. HS: { emptyset, {a}, {b} }
    UNION: F, INTER: F, COMP: F
    1. CT: The set of all reg langs accepted by a DFA with ≤ 3 states and ≤ 2 accepting states. (Can replace (3,2) with other pairs.) OTHERS?
    2. HS: I can't think of any- OTHERS?
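The state-counting intuition behind the DFA examples can be checked mechanically. A minimal sketch (the automata dA and dB below are my own illustrative examples, for "even number of a's" and "even number of b's"): complementing a DFA just flips the accepting states and keeps the state count, while the standard product construction for union may multiply the counts.

```python
def complement(dfa):
    """Flip the accepting states; the number of states is unchanged."""
    states, alphabet, delta, start, accept = dfa
    return (states, alphabet, delta, start, states - accept)

def product_union(d1, d2):
    """Standard product construction for union; may need |Q1| * |Q2| states."""
    s1, alph, t1, q1, a1 = d1
    s2, _, t2, q2, a2 = d2
    states = {(p, q) for p in s1 for q in s2}
    delta = {((p, q), c): (t1[p, c], t2[q, c])
             for p in s1 for q in s2 for c in alph}
    accept = {(p, q) for p in s1 for q in s2 if p in a1 or q in a2}
    return (states, alph, delta, (q1, q2), accept)

def accepts(dfa, w):
    states, alph, delta, q, acc = dfa
    for c in w:
        q = delta[q, c]
    return q in acc

# Two 2-state DFAs: even number of a's (dA), even number of b's (dB)
dA = ({0, 1}, {'a', 'b'},
      {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}, 0, {0})
dB = ({0, 1}, {'a', 'b'},
      {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 1, (1, 'b'): 0}, 0, {0})
```

Here complement(dA) still has 2 states, while product_union(dA, dB) has 4 (and 4 are in fact necessary for this union), matching the closure pattern claimed for the ≤ 2-state class.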

Tuesday, November 20, 2012

Recursion Theorem follows from Church-Turing

During a homework assignment in a graduate complexity course I took at Cornell back in 1985, I used the following reasoning: since a program's code sits in RAM where the program can read it, by the Church-Turing thesis we can assume a program has access to its own code.

The TA marked me wrong on the problem because I assumed the recursion theorem that we hadn't yet covered. I wasn't assuming the recursion theorem, I was assuming the Church-Turing thesis and concluding the recursion theorem.

I did deserve to lose points: the Church-Turing thesis is not a mathematical theorem, or even a mathematical statement, and not something to use in a mathematical proof. Nevertheless I still believe that if you accept the Church-Turing thesis then you have to accept the recursion theorem.

Now the recursion theorem does not have a trivial proof. So the Church-Turing thesis has real meat on it, in ways that Turing himself didn't anticipate. Since the recursion theorem does have a proof, this only adds to my faith in, and the importance of, the Church-Turing thesis.
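A concrete toy instance of this self-reference is a classic Python quine: a two-line program whose output is exactly its own source. The recursion theorem guarantees that such self-referential programs exist for any computable transformation, not just "print yourself":

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints the two lines above verbatim: the %r conversion inserts a quoted copy of the template into itself.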

Thursday, November 15, 2012

A Simple PSPACE-Complete Problem

Back in the typecast last month I promised a simple PSPACE-complete game in a future post. Here it is:


Given: A collection of finite sets S1,...,Sk.

The Game: Players take turns picking a non-empty set Si; the elements of Si are then removed from all the sets Sj. The player who empties all the sets wins.
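Determining the winner is PSPACE-complete in general, but tiny instances can be solved by brute-force game-tree search. A minimal sketch (exponential time, naturally):

```python
from functools import lru_cache

def winner(sets):
    """True iff the player to move wins the set game (brute force)."""
    start = tuple(frozenset(s) for s in sets)

    @lru_cache(maxsize=None)
    def to_move_wins(pos):
        moves = [s for s in pos if s]
        if not moves:
            # everything is empty: the previous player made the emptying move and won
            return False
        # you win if some move leaves the opponent in a losing position
        return any(not to_move_wins(tuple(t - s for t in pos))
                   for s in set(moves))

    return to_move_wins(start)

# e.g. with three disjoint singletons the first player wins by parity
```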

This game came up in a discussion I had with Steve Fenner trying to extend his student's work that Poset Games are PSPACE-complete. The PSPACE-completeness of determining the winner of the SET GAME is an easy reduction from Poset Games.

An open question: Do we still get PSPACE-completeness if the size of the Si are bounded? I don't even know the answer if the sets have size at most two.

Tuesday, November 13, 2012

If GASARCH Tweeted what would he tweet?

If I tweeted this is what I would tweet:
  1. The problem with trying to predict who will win the election.
  2. Prez election thought: if you run as yourself and lose (Stevenson, Goldwater, McGovern) then you have your integrity and got some discussions started. If you run as someone else (George W Bush ran as a moderate) and win then you have the presidency. If you run as someone else and lose you have nothing. Now that he's lost, will the real Mitt Romney please stand up?
  3. My bet on Ryan being the Republican nominee in 2016 (if I am right I get $10.00, if I am wrong I lose $3.00) is close to the odds here. Are you surprised they already have this up? I'm surprised INTRADE doesn't have this bet up yet.
  4. An APP based on my 17x17 challenge: here
  5. Most bloggers don't last: see here
  6. An Origami proof of the Pythagorean theorem: here
  7. I got three emails in two minutes about a talk on how to avoid spam.
  8. It's official! Physics is hard
  9. A paper is retracted because it has no scientific content: here.
  10. Is this a real question or a joke or both?
  11. Julia Child was born Aug 15, 1912 and died Aug 13, 2004. Almost born and died the same day. What is the probability that someone is born and dies on the same day? This is not hard, but might make a good problem.
  12. The most common PIN number is 1234 with 11% of all PIN numbers. This is alarming but not surprising. The least common PIN number is 8068. Here is the article on it.
  13. Do e-books make censorship easier or harder? I still don't know; however, here is an example where it was easier, though calling it censorship isn't quite right.
  14. Movies have low Kolmogorov complexity: here
  15. One sign that you've been working on a paper too long- right before submitting it you realize that most of the authors have changed their affiliations and emails. (This happened to me recently.)
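Item 11 invites a quick calculation. Under a deliberately crude model (birth and death dates independent and uniform over a 365-day year, ignoring leap days and seasonal effects), the answer is simply 1/365; a sketch with a Monte Carlo sanity check of the same model:

```python
import random
from fractions import Fraction

# P(death date equals birth date) under the uniform-independence model
p_exact = Fraction(1, 365)

# Monte Carlo check: draw independent birth and death days and count matches
random.seed(0)
trials = 200_000
hits = sum(random.randrange(365) == random.randrange(365) for _ in range(trials))
p_hat = hits / trials  # should be close to 1/365, about 0.00274
```

Real birth and death dates are not uniform, so this is a back-of-envelope answer, not a demographic one.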

Sunday, November 11, 2012

Fifty Years of Computational Complexity

From Juris Hartmanis’ Observations About the Development of Theoretical Computer Science on the research leading to his seminal 1965 paper On the Computational Complexity of Algorithms with Richard Stearns.
Only in early November 1962 did we start an intensive investigation of time-bounded computations and realized that a rich theory about computation complexity could be developed. The first explicit mention of classifying function by their computation time occurs in my logbook on November 11. 
And so today we celebrate the fiftieth anniversary of the conception of computational complexity. We have learned much since then and yet we still know so little. Here’s to another fifty years of trying to figure it out.

Thursday, November 08, 2012

Andrew Goldberg guest blog on new max flow result

Guest Blog by Andrew Goldberg on the recent Max Flow in O(nm) time algorithm.

Maximum Flow in O(nm) Time

Recently, Jim Orlin published an O(nm) maximum flow algorithm. This solves a long-open problem. In this blog entry, we assume that the reader is familiar with the maximum flow problem, which is a classical combinatorial optimization problem with numerous applications. (If not then see the Wikipedia entry here.)

Let n and m denote the number of vertices and arcs in the input graph, respectively; if the input capacities are integral, U denotes the value of the biggest one. The running time of a strongly polynomial algorithm is a polynomial function of n and m; the running time of a (weakly) polynomial algorithm may depend on log(U) as well.

Maximum flow algorithms have been studied since the 1950s, with the first strongly polynomial-time algorithm developed in 1972 by Edmonds and Karp. This was followed by faster and faster algorithms. In 1980, Galil and Naamad developed an O(nm log^2 n) algorithm, coming close to the nm bound. The latter bound is a natural target for a maximum flow algorithm because a flow decomposition can have size THETA(nm). In 1983, Sleator and Tarjan developed the dynamic tree data structure to improve the bound to O(nm log(n)); in 1986, Goldberg and Tarjan developed the push-relabel method to get an O(nm log(n^2/m)) bound. The race towards O(nm) continued, with improvements being made every few years, until King, Rao and Tarjan developed O(nm + n^(2+ε)) and O(nm log_{m/(n log n)} n) algorithms in 1992 (SODA) and 1994 (Journal of Algorithms, the paper pointed to), respectively. These bounds match O(nm) except for sparse graphs.

No better strongly polynomial algorithm for sparse graphs had been developed for 18 years, until Orlin's recent result. Orlin not only gets the O(nm) bound, but also an O(n^2/log n) bound for very sparse graphs with m = O(n). His result is deep and sophisticated. It uses not only some of the most powerful ideas behind the efficient maximum and minimum-cost flow algorithms, but a dynamic transitive closure algorithm of Italiano as well. Orlin closes the O(nm) maximum flow problem, which has been open for 32 years.

Sometimes solving an old problem opens a new one, and this is the case for maximum flow. The solution to the maximum flow problem is a flow, and flows have linear size, even when they have decompositions of size THETA(nm). For the unit-capacity problem, an O(min(n^(2/3), m^(1/2)) m) algorithm was developed by Karzanov (and independently by Even and Tarjan) in the early 1970s. This bound is polynomially better than nm. In 1997, Goldberg and Rao developed a weakly polynomial algorithm that comes within a factor of O(log(U) log(n^2/m)) of the unit-capacity bound, and Orlin's algorithm uses this result. A natural question to ask at this point is whether there is an O(nm/n^ε) maximum flow algorithm for some constant ε > 0.
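None of the algorithms in this race is simple to implement, but for contrast, here is a minimal sketch of the classical Edmonds-Karp algorithm (shortest augmenting paths, O(nm^2)), the 1972 strongly polynomial baseline the story starts from; the adjacency-matrix representation is chosen for brevity, not efficiency:

```python
from collections import deque

def edmonds_karp(n, edges, s, t):
    """Max flow via shortest augmenting paths (Edmonds-Karp).
    edges: list of (u, v, capacity) with vertices numbered 0..n-1."""
    cap = [[0] * n for _ in range(n)]  # residual capacities
    for u, v, c in edges:
        cap[u][v] += c
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow  # no augmenting path left; flow is maximum
        # bottleneck capacity along the path found
        b = float('inf')
        v = t
        while v != s:
            b = min(b, cap[parent[v]][v])
            v = parent[v]
        # push b units along the path, updating residual capacities
        v = t
        while v != s:
            cap[parent[v]][v] -= b
            cap[v][parent[v]] += b
            v = parent[v]
        flow += b
```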

Monday, November 05, 2012

Produce or Perish

 Every year or so the National Science Foundation releases a new version of the holy bible of grant submission procedures, the Grant Proposal Guide. Last month's update (which applies to grants due in 2013) has this tidbit in the Summary of Significant Changes.
Chapter II.C.2.f(i)(c), Biographical Sketch(es), has been revised to rename the “Publications” section to “Products” and amend terminology and instructions accordingly. This change makes clear that products may include, but are not limited to, publications, data sets, software, patents, and copyrights.
So you can list your patents or open-source software as a "product" right up there with the same status as an academic publication. This seems like a harmless change; we theorists can continue to just list our publications. But I worry about the slippery slope. Does this signal a future change to the NSF Mission that supports "basic scientific research"? Will we be expected in the future to have "products" other than research publications?

Or is the NSF just saying that while they fund our research, it's not the research, but the manifestations of that research in whatever form they take, that gets judged for future grants?

Thursday, November 01, 2012

Random thoughts on the election

Neither Lance nor I have commented much on the Prez election. I only found one post from 2012 that mentioned Romney: Romney vs Aaronson. A few mentioned Obama but not with regard to the election. I give you some random thoughts on the election before it's over. They are nonpartisan unless they are not.

  1. I polled the sophomore discrete math class (secret ballot) and got the following: of the 99 students in the class, (1) 64 for Obama, (2) 17 for Romney, (3) 8 for Gary Johnson (Libertarian), (4) 2 for Jill Stein (Green). Those were the only ones on the ballot; however, there were some write-ins: (5) 2 for Ron Paul, (6) 1 each for Newt Gingrich, John the Baptist, Mickey Mouse, Gumby, and two names I did not recognize but may have been the students themselves. You know what they say: As goes discrete math, so goes the nation. Hence Obama now has it in the bag.
  2. I predicted it would be Romney vs Obama on Feb 15, 2012. I also predicted that Obama would win. I never wavered from that prediction, so you can't call me a flip-flopper. You can read it here. The first part (Obama vs Romney) has already come true; we will see if the second one does.
  3. Assuming Obama wins I have a bet on the Republican nominee in 2016: I have bet Lance Fortnow, Chris Umans, and Amol Deshpande (DB guy in my dept) that it will be Paul Ryan.
    (ADDED LATER- I misunderstood Amol- he wants to bet WITH me, that Ryan will win,
    with the odds I got.) If I win I get $1.00; if they win they get 30 cents. I have a bet with Mike Barron (a friend of mine, not a theorist--- yes, I have non-theorist friends), who is more of a risk-taker, where if I win I get $10.00 and if he wins he gets $3.00. Are these good odds? I ask this non-rhetorically.
    1. Why Ryan? 1972 is the beginning of the modern political era. That's when Prez candidates had to compete in primaries to win the nomination. (Humphrey got the nomination in 1968 without entering a single primary; then the McGovern Commission changed the rules so that a lot more primaries were included. And the man who understood the rules, McGovern, got the nomination in 1972.) Since 1972 the Republicans have almost always nominated a KNOWN person, someone you had heard of four years earlier. Not including incumbents, here is the list:
      1. 1980 Reagan. Known- Had run in 1976.
      2. 1988 Bush Sr. Known- Was VP under Reagan.
      3. 1996 Dole. Known- Had run with Ford as VP, had run for Prez before.
      4. 2000 Bush Jr. Unknown- One can argue he was known via his dad, but I'll just say Unknown.
      5. 2008 McCain. Known- Had run before in 2000.
      6. 2012 Romney. Known- Had run before in 2008.
      By contrast the Dems have sometimes nominated someone you had not heard of. Here is their record:
      1. 1976 Carter. An Unknown Former Gov of Georgia.
      2. 1984 Mondale. Known, Former VP.
      3. 1988 Dukakis. An Unknown Gov of Mass.
      4. 1992 Clinton. An Unknown Gov of Arkansas.
      5. 2000 Gore. Known. Was VP.
      6. 2004 Kerry. An Unknown Senator.
      7. 2008 Obama. An Unknown Senator.
      (One could debate how unknown some of these were.) Note that whenever the Dems nominated a known person they lost- perhaps a cautionary note to those who want Biden or H. Clinton in 2016, and an encouraging note to Andrew Cuomo, current gov of NY. (If you say who's that? you've proven my point.) But ANYWAY, the Republicans have ALMOST ALWAYS given it to a KNOWN person. None of the people who ran for the nomination in 2012 seem plausible to get the nomination in 2016, though The Daily Show is doing a segment on the fictional Cain Presidency. Some sort-of-known people who didn't run in 2012 but may in 2016: Chris Christie (Gov of NJ), Jeb Bush (Gov of Florida), Tim Pawlenty (Gov of Minnesota), Mitch Daniels (Gov of Indiana), Marco Rubio (Senator from Florida), Bobby Jindal (Gov of Louisiana). The last two are more known for being talked about as Prez or V Prez candidates than for anything they've actually done. I grant that any of these people are possible. However, they are not quite as well known as Ryan. Also, I predict that if Romney loses it will be blamed on ``we were not true to our principles'' and they will go further rightwing, with Ryan.
    2. Why it might not be Ryan: The above argument sounds convincing but the problem with predictions in politics (and elsewhere) is that, to quote a friend in Machine Learning, Trends hold until they don't. Anything could happen! Things may change drastically! As an example see this XKCD.
  4. Another prediction, though harder to quantify. When Gore, Kerry, and McCain lost, they or people around them said things like ``I let my handlers handle me too much--- if I had run as myself I would have won.'' I predict that Romney will think the same thing. I doubt he'll say it.
  5. There is an issue on the Maryland ballot that involves game theory and gaming. Roughly speaking the issue is: should we allow more gambling in our state? PRO: People are going to adjacent states to gamble and we should get that money. CON: Gambling is a regressive tax and bad for the economy in the long run. The more states that have gambling (or build baseball stadiums or give businesses who move there tax breaks) the more other states have to go along to compete. A classic Prisoner's Dilemma--- except that West Virginia and Delaware have already defected so we have no choice. Or do we? There is a rumor that the anti-gambling ads in Maryland are being paid for by the West Virginia casinos. The anti-gambling ads are not anti-gambling; they are just against this bill- they claim, for example, that the money won't really go to education. I admire the honesty--- if a West VA casino had an ad in MD saying how bad gambling was morally, that would look rather odd. Even so, should I vote FOR gambling just to spite the out-of-state casinos running ads in my state? Should I vote FOR it since the ads against it are not giving MY arguments against it? Should I vote FOR IT and tell people I voted against it?
  6. There is a marriage-equality referendum on the ballot- Question 6. There have been almost no ads or talk about it. Why? One speculation--- the people against it know they will be on the wrong side of history, and the people for it don't quite know how to sell it. It's ahead in the polls so maybe they don't want to rock the boat.
  7. If you ask a pro-Obama pundit who will win, he might say Obama, because people know Romney is a liar. If you ask a pro-Romney pundit who will win, he might say Romney, because Obama has not fixed the economy and Mitt can. Either may use poll data as window dressing, but they tell you what they want to happen rather than what an honest scientific study would show. Nate Silver, a scientific pollster, says in his book The Signal and the Noise: Why So Many Predictions Fail--- but Some Don't that pundits are right about half the time. Not surprising.
  8. George McGovern died recently at the age of 90. The 1972 prez election, McGovern vs Nixon, was the first Prez campaign I paid attention to. I passed out McGovern pamphlets in my precinct of Brooklyn and McGovern DID win that Precinct. I regard that as a Moral Victory.

Monday, October 29, 2012

Fall Jobs Post

Time again for the annual fall jobs post. As always the best places to look for academic CS positions are the job sites at the CRA and the ACM. Also check out the postdoc and other opportunities on the Theory Announcements site. It never hurts to check out the webpages of departments you might want to be at or to contact people to see if positions are available.

I encourage everyone who has a job to offer in theoretical computer science at any level to post links in the comments.

With computer science enrollments expanding and the economy slowly recovering, I'm expecting quite an increase in the number of tenure-track jobs in computer science this year. On the other hand I'm expecting a decrease in the number of new postdoc positions, though maybe more overseas.

Good luck to everyone in the market.

Thursday, October 25, 2012

Planarizing Gadgets for Perfect Matching do not Exist!

At Dagstuhl I was delighted when I saw the title of a talk to be given, Planarizing Gadgets for Perfect Matching do not Exist, because I had asked a question about a gadget for planar Ham Cycle here. I was hoping to ask the authors if their techniques could be used to show that there was no planarizing gadget for Ham Cycle. (NOTE- I had either forgotten or never knew that this was already known and was posted as an answer to my query on cstheory stackexchange, here.)

The paper Planarizing Gadgets for Perfect Matching do not Exist (or if you can get to it the MFCS 2012 version here) is by Rohit Gurjar, Arpita Korwar, Jochen Messner, Simon Straub, and Thomas Thierauf. The talk was given by Jochen and was excellent.

Perfect matching is in P (Edmonds 1965) but is it in NC? Not known--- however it is in RNC (Mulmuley, Vazirani, Vazirani 1987). What about planar graphs? They are different--- counting the number of perfect matchings in a graph is #P-complete (Valiant 1979), but counting the number of perfect matchings in a planar graph is in NC (Vazirani 1989). So of course planar graph matching is in NC. Can we use this to get graph matching in NC, perhaps by a reduction? This would be neat since we would be using a reduction to prove a problem EASY rather than to prove a problem HARD. (I think this has been done before but is rare-- readers, if you know of a case, comment on it.) Perhaps there is some planarizing gadget: given a graph G, use gadgets to get rid of crossings and produce a planar graph G' such that G has a perfect matching iff G' has a perfect matching. That would be AWESOME! However, from the very title of the paper, we can guess this is not true. This paper shows that something AWESOME is not possible. A downer, but worth knowing.

Jochen proved this and then went on to say that they had done the same thing for HAM CYCLE! That is, there is no planarizing gadget for Ham Cycle! (He also acknowledged that this was already known independently.) So I didn't get to ask my question since they had already answered it. Great!

  1. Their interest in planarization was related to an OPEN problem--- is graph matching in NC? By contrast, my interest in planarizing gadgets for Ham Cycle was pedagogical--- I was in search of a better proof that planar Ham Cycle is NPC, though there is no new theorem here.
  2. I am delighted to know the result!
  3. Their result says that a certain type of reduction won't work. Might some other reduction work? My sense is that this is unlikely.
  4. So--- is Graph Matching in NC? Since I believe NC=RNC I think yes. Will it be proven by showing NC=RNC or will it be proven directly (leaving NC=RNC open)? Or will the ideas that lead to Graph Matching in NC help to show NC=RNC? This is one of those questions that might be solved within a decade, as opposed to P vs NP which won't be resolved for quite some time.

Monday, October 22, 2012

Song of the Complexity Classes

I tweeted the audio of this song last week and here is the video. Recorded at Dagstuhl on October 18th. Written by Fred Green who also plays piano. Performed by David Barrington with Steve Fenner on chorus.

Fred gives apologies to Gilbert and Sullivan, the Complexity Zoo, and Tom Lehrer.

Lyrics by Fred Green, copyright 2012

To the tune of "I Am the Very Model of a Modern Major General"

There's P and NP, BPP and ZPP and coNP,
And TC0 and AC0 and NC1 and ACC,
And LIN and L and Q and R, and E, EE and E-E-E.
There's SPARSE and TALLY, PL, P/Poly, NP/poly,
There's PromiseP and PromiseBPP and PromiseBQP,
There's FewP, UP, QP, UE, N-E-E, N-E-E-E,
And EXP and NEXP, FewEXP, and NE-EXP, and also Max-N-P.
  And EXP and NEXP, FewEXP, and NE-EXP, and also Max-N-P
  And EXP and NEXP, FewEXP, and NE-EXP, and also Max-N-P
  And EXP and NEXP, FewEXP, and NE-EXP, and also Max-N, Max-N-P.
There's Sigma_nP, Delta_nP, Theta_nP, Pi_nP,
We know BPP's in Sigma_2P intersection Pi_2P.
And NP to the NP to the NP to the NP
To the NP to the NP, that's the pol-y-nom-yal hierarchy!

There's #P, gapP, PP, coC=P and MidBitP,
And ModP, Mod_kP, Mod_kL, ParityP, MPC,
There's FNP, NPSV, NPMV, and SAC,
SAC0, SAC1, SZKn and SPP.
There's BQP and DQP and EQP and NQP,
And RQP and VQP and YQP and ZQP,
QAC0, QNC0, QNC1, Q-A-C-C.
   QAC0, QNC0, QNC1, Q-A-C-C
   QAC0, QNC0, QNC1, Q-A-C-C
   QAC0, QNC0, QNC1, Q-A-C, A-C-C
There's QSZK, QMA and QAM and QIP,
And IP, MIP, QMIP and also PCP,
And PPPad and PPcc, PSK and PQUERY,
And PP to the PP, PExp, PPA and PPP.

These complexity classes are
the ones that come to mind,
And there may be many others but they
haven't been defined.

Saturday, October 20, 2012

Short Announcements

With a shout out to the friendly folks attending FOCS this week, some short announcements.

Read the STOC CFP before you submit the paper. There are significant changes to the submission format and procedure. Deadline is November 2.

Complexity will be co-located with STOC in 2013. Submission deadline is November 30.

The new Simons Institute for the Theory of Computing has a call for workshop proposals and research fellowships.

There will be a symposium to celebrate a new professorship named after SIGACT and STOC founder Patrick Fischer at Michigan on November 5 and a celebration of the 80th birthday of Joe Traub on November 9th at Columbia.

Wednesday, October 17, 2012

Dagstuhl Typecast

Nerd Shot from Dagstuhl Seminar 12421

Lance: Welcome to another Typecast from beautiful Schloss Dagstuhl. I’m here with Bill for the Workshop on Algebraic and Combinatorial Methods in Computational Complexity.

Bill: Beautiful? I thought this place was designed to be ugly so that we actually get work done.

Lance: So what work did you get done today, Bill?

Bill: I watched the debate. And you?

Lance: Steve Fenner and I came up with the easiest to describe PSPACE-complete problem ever!

Bill: Was it one of those poset things that you and Steve’s students work on?

Lance: A generalization of poset games but easier to describe. But we are getting off topic...

Bill: as did Obama and Mitt.

Lance: Bill my two minutes aren’t up yet. Anyway you’ll have to read about this new PSPACE-complete problem in a future post.

Bill: Since you didn’t ask, let me tell you about my favorite talk, Rank bounds for design matrices and applications by new Rutgers professor Shubhangi Saraf (Powerpoint). Despite the awful title

Lance: which is why I skipped that talk

Bill: it used complexity theory techniques to prove new things in math, a generalization of the Sylvester-Gallai theorem. You have n points on the plane...

Lance: Wait Bill, it will take longer to state the S-G theorem than it would to explain the new PSPACE-complete problem!

[Steve Fenner shows up with beer in hand. He goes off to get Lance one too.]

Bill: OK, I’ll leave this for a later post. What was your favorite talk?

Lance: Believe it or not it was an algorithms talk. Atri Rudra gave a very simple algorithm to do a join operation motivated by reconstructing 3-d collections of points from projections. [Powerpoint]

Bill: Yes, and it may have applications to complexity as most real world algorithms do.

[Steve arrives with Lance’s Beer. There is much happiness.]
Steve: My favorite talk so far was Rahul Santhanam’s [abstract]. Reminded me of the good old days of complexity.

Lance: Let the guy give me beer and he thinks he can weasel his way into our typecast.

Bill: Lance, that’s how I got started in this business.

Lance: Rahul had some clever co-author, didn’t he?

Steve: No one important. Lance something?

Harry Buhrman: I like the GCT talk by Josh Grochow. [abstract]

Bill: In the future we’ll all have to learn GCT to get started in this field. I’m glad I’m living in the past. Lance, you paid me the highest compliment in my talk. You didn’t fall asleep and you even picked a fight with me.

Lance: Only because I had to stay awake to help the audience understand your confusing presentation.

Bill: It was only one slide.

Lance: It was only one fight.

Bill: I still feel as complimented as a bit that’s just been toggled.

Lance: I’m happy for you. Actually, it was not that bad a result. Now that’s my highest compliment.

Harry: Hey, this isn’t fair, we haven’t heard all the talks yet.

[Both Harry and Steve are talking later tonight]

Lance: Life isn’t fair, get over it.

Bill: Let’s call it a day.

Lance: Watch my twitter feed later this week for a special musical complexity tribute.

Bill: I can’t wait.

Lance: So until next time, remember that in a complex world, best to keep it simple.

Monday, October 15, 2012

Matching Nobel Prizes

This week Bill and I have traveled to Germany for the Dagstuhl Seminar on Algebraic and Combinatorial Methods in Computational Complexity. Plenty of newly minted Nobel laureates here, winners of the Peace Prize last Friday. But this post celebrates today's winners of the Economics Prize, Al Roth and Lloyd Shapley for their work in matching theory that has made a difference in the real world.

In 1962, Shapley and David Gale created the first algorithm that finds stable marriages. David Gale would surely have shared this award had he not passed away in 2008. Nicole Immorlica's guest obit of Gale nicely describes this work and its applications including matching medical students with residencies.
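The Gale-Shapley deferred-acceptance idea fits in a few lines of Python. This is a minimal sketch, not their original presentation, and the example preference lists are invented:

```python
# Gale-Shapley deferred acceptance. Proposers propose in preference
# order; acceptors hold their best offer so far and may trade up.
# The result is always a stable matching.
def stable_matching(proposer_prefs, acceptor_prefs):
    # Precompute each acceptor's ranking of proposers for O(1) comparisons.
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)              # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                             # acceptor -> current proposer
    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]  # p's best not-yet-tried choice
        next_choice[p] += 1
        if a not in engaged:
            engaged[a] = p
        elif rank[a][p] < rank[a][engaged[a]]:  # a prefers p: trade up
            free.append(engaged[a])
            engaged[a] = p
        else:
            free.append(p)                      # a rejects p
    return {p: a for a, p in engaged.items()}

# Hypothetical preference lists, just for illustration.
men = {"al": ["ann", "bea"], "bob": ["ann", "bea"]}
women = {"ann": ["bob", "al"], "bea": ["bob", "al"]}
print(stable_matching(men, women))  # {'bob': 'ann', 'al': 'bea'}
```

The same deferred-acceptance skeleton underlies the residency-match and school-choice mechanisms Roth worked on, with preference lists replaced by real-world rankings.
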

Al Roth uses matching algorithms for a variety of projects, most notably creating large scale kidney exchanges, saving lives with algorithmic mechanism design. Doesn't get cooler than that.

Thursday, October 11, 2012

Why Does College Cost So Much?

When John Hennessy gave his talk on MOOCs at the CRA Snowbird meeting he recommended the book Why Does College Cost So Much? by Robert Archibald and David Feldman, both economics professors at William and Mary. I've never seen a good answer to the title question so I read through the book. To overly simplify their main thesis: it's not that college has gotten more expensive, it's that most everything else has gotten cheaper. Technological advances in manufacturing and shipping have greatly lessened the cost of goods, and the rate of inflation is calculated based on a basket of goods. So service industries, particularly those that require highly educated people and don't benefit directly from technology, look expensive in comparison. College costs closely track medical and dental expenses, and closely followed broker expenses until technology made brokerages cheaper.

Archibald and Feldman even argue that there isn't a college affordability crisis for the majority of Americans: they are still better off than 30 years ago even after college expenses. Hardly the doom and gloom scenario that Hennessy was portraying.

Another main point of theirs is that one cannot increase the ratio of students to faculty without decreasing the quality of education. That's where MOOCs come in, supposedly the solution that allows faculty to be far more efficient in the number of students they teach without reducing quality. That might help control college costs, but it could harm research at top-tier universities, and many other universities might cease to exist.

Alas perception is reality and the public sees college expenses growing dramatically compared to the general cost of living and blames wasteful spending at universities. Curing this "disease" might kill the patient.

Tuesday, October 09, 2012

Theory Day. How to get the word out on this and other events?

Aravind asked me to post on this again. (NOTE- the register-for-free deadline is TOMORROW!!!!!)

The University of Maryland at College Park is having a Theory Day on Wed Oct 24! Come hear
  1. Distinguished talks by Julia Chuzhoy and Venkatesan Guruswami!
  2. Short talks (is that code for NOT distinguished?) by Bill Gasarch, MohammadTaghi Hajiaghayi, Jonathan Katz, Samir Khuller, David Mount, Elaine (Runting) Shi, and Aravind Srinivasan. (I never realized I was first alphabetically until now.)
  3. Discussions in hallways for those that learn more that way!
For more info see the link above, but note one thing: it's FREE! NOTE: This is purposely after FOCS in New Jersey and is an easy Amtrak ride from there. I like theory days in general and often go to the NY theory days. They are free and only one day. I recommend going to any theory day that is an Amtrak ride away. (Might depend on how long the trip is- there is a 13-hour Amtrak from Atlanta, Georgia to Maryland, though I doubt I'll see Lance there.) I get a lot out of theory days, as noted in this post about NY theory day. What are good ways to get the word out about events?
  1. The major conferences and also the NY Theory Days have a long enough tradition that they don't need much advertising.
  2. Email is not as useful as it used to be since we all get too much of it.
  3. There IS a website for theory announcements here, and also one of our links, but more people need to post there and read there. A chicken and egg problem.
  4. Twitter. No central authority. If Aravind had a Twitter account (I doubt he does) then he could tweet to his followers, but that would not be that many people.
  5. Any ideas?

Monday, October 08, 2012

Cruel XOR unusual Punishment

(This post was done with the help of Lane Hemaspaandra and John Purtilo.)

The 8th amendment of the US Constitution states

Excessive bail shall not be required, nor excessive fines imposed, nor cruel and unusual punishments inflicted.
There is an ambiguity here. Let C be cruel and U be unusual. They are saying NOT(C AND U), which by De Morgan's law equals NOT(C) OR NOT(U). Common sense would dictate that they meant NOT(C) AND NOT(U).
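A quick truth table (a toy Python check, with nothing assumed beyond the two formulas above) shows the readings really do differ: the literal form permits a punishment that is cruel as long as it is not also unusual.

```python
from itertools import product

# Compare the literal reading NOT(C AND U) with the common-sense
# reading NOT(C) AND NOT(U) over all four truth assignments.
for c, u in product([False, True], repeat=2):
    literal = not (c and u)          # forbids only cruel-AND-unusual
    intended = (not c) and (not u)   # forbids cruel, forbids unusual
    print(c, u, literal, intended)

# The two readings disagree exactly when one of C, U holds: a cruel
# but not unusual punishment passes the literal reading.
```
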

  1. (This article was emailed to me by Lane H. along with the idea for this post.) This article (see also this Wikipedia article) is an example where the CRUEL but NOT UNUSUAL argument seems to have been explicit. The case was about a MANDATORY life sentence in prison for possessing over 650 grams of cocaine, in Michigan. Is that a lot? (I never could figure out that metric system.) In terms of numbers or getting high I really don't know if 650 grams is a lot, but legally it's NOT A LOT--- the only other state that comes close to this kind of penalty is Alabama, with a life sentence for 6500 grams---that is not a typo. (See the Wikipedia article's section on White's criticism of Kennedy's argument.) I quote the syllabus of the decision, which is not written by the members of the Supreme Court and is not part of the decision, but is rather prepared by the Office of the Clerk (of the Supreme Court)---who, one assumes, is pretty darned good at extracting the key points of the ruling, so the syllabi are very useful.
    Severe, mandatory penalties may be cruel, but they are not unusual in the
    constitutional sense, having been employed in various forms throughout
    the Nation's history.
    Some past rulings HAVE indicated that a sentence that is out of proportion with the crime MAY be considered Cruel and Unusual. But, alas, unlike mathematics, definitions can change over time. (Well- in math that happens sometimes, but not often, and usually not with dire consequences.)
  2. One could argue that Capital Punishment is C but NOT(U). And indeed, the courts have often upheld it. Did they use the argument that capital punishment is C but NOT(U), hence it does not violate the 8th amendment? This article (emailed to me by Lane) makes that line of reasoning explicit and argues against it.
  3. If someone commits anti-Semitic vandalism and the courts decide that he or she is forced to read Anne Frank's diary, that would be U but NOT(C). Not sure how they would enforce this- give a quiz? Are Cliff notes okay? What if the vandal saw the movie instead? Would this really work? (I honestly don't know.) Is this hypothetical? In America, YES (though John found a case in Italy and I found a case in Germany). If this gets to be a common punishment for anti-Semitic crimes then it may no longer be unusual. I could find no other real cases where people convicted of crimes had, as part of their sentence, to read something (though IANAL, so there could be some I don't know about).
  4. If an Occupy Wall Street guy vandalizes a Financial Institution's offices and is forced to read Atlas Shrugged that would be unusual. But is it cruel? (My opinion: YES) How about the Cliff notes? (My opinion: NO) Is this hypothetical? (My opinion: YES.)
  5. What if a teenage girl was in juvenile court for cutting off the hair of a 3-year-old (against the 3-year-old's will) and the judge agreed to reduce the sentence if the teen's mother cut off the teen's pony tail in court? This would be considered unusual. But is it cruel? Is it hypothetical? No.
It is most likely that the phrase Cruel and Unusual was not meant to be broken down into its component parts.

So what logic did the founders use? Thomas Jefferson knew more math than any of the founding fathers. But alas, he was off in France when the constitution was written.

Wednesday, October 03, 2012

Close to Genius

The MacArthur Foundation announced their 2012 Fellows, also known as the genius awards. Among the list are two names of interest to my readers: Maria Chudnovsky and Daniel Spielman.

My long-time readers first heard of Maria back in 2003 when I posted about a great talk she gave as a graduate student on a polynomial-time algorithm to test whether a graph is perfect. That was just the start of her incredible career as a graph theorist.

Dan is a regular in the blog for the various awards he's won, most notably (before the MacArthur) the Nevanlinna Prize. I believe Dan is my first genius co-author, though alas not for one of the papers that caused him to win awards.

I've seen many cases where researchers get fantastic results early in their career and can never live up to the hype. Dan and Maria exceeded it. Congrats to both of them.

Tuesday, October 02, 2012

Quantum Workshop

I went to the QIS workshop on quantum computing which was on the College Park Campus. I went Thursday (reception- free food!) and Friday (free lunch!) but had to miss the Friday free dinner and the Saturday session.

  1. Going to a conference that is ON your campus usually makes it FURTHER away for you. If I were from out of town I would have gotten a hotel room in the same hotel as the conference. As it was, I walked from my office- a 45-minute walk. It would have been shorter but it was a quantum random walk.
  2. Scott Aaronson was there. We were talking about teaching class while being taped. He said that being taped changes what he does. I cleverly pointed out that the act of measuring Scott, changes Scott. He cleverly replied that the search for a NEW and FUNNY quantum joke has not ended yet.
  3. Frank Gaitan gave a talk on using quantum annealing to find Ramsey numbers. FINALLY, a real application of quantum computing! (The downside- I was going to use Quantum Computers Find Ramsey Numbers! for an April Fools' Day post.)
  4. Umesh Vazirani's talk on CLASSICAL results proven using QUANTUM techniques was great. This notion seems to be for real. It's looking more and more like even if you don't like quantum you will have to learn it. A particular example of this is the paper Linear vs Semidefinite Extended Formulations: Exponential Separation and Strong Lower Bounds by Fiorini, Massar, Pokutta, Tiwary, and de Wolf.
  5. Yi-Kai Liu gave a talk on Quantum Information in Machine Learning and Cryptography. We discuss a small part of his talk, a result by Oded Regev. (Daniel Apon gave a full talk on this small part at the UMCP Complexity Seminar; his slides are here.) GAPSVP(γ) is the following problem: given an n-dim lattice L and a number d, output YES if the shortest vector in L is ≤ d, and output NO if the shortest vector in L is > γd (if it's neither, we don't care what you output). This is NP-hard to solve exactly or within O(1) approximation (though it is believed to be hard even for poly approximation factors), and it's a good problem for crypto to use. LWE is the Learning With Errors problem. There is a quantum reduction that shows GAPSVP ≤ LWE, so if GAPSVP is hard then LWE is hard. So there are now four scenarios:
    1. Quantum computers are not built. Factoring is still hard classically. Crypto goes on as it is now. (Or maybe not- there is a classical reduction from GapSVP to LWE, but with weaker parameters- so maybe you can base crypto on LWE anyway.)
    2. Quantum computers are not built. Factoring is easy classically. GAPSVP is hard. Do Crypto based on GAPSVP.
    3. Quantum computers are built. Factoring is now easy. GAPSVP is hard. Do Crypto based on LWE. THIS is what the result allows us to do!
    4. Quantum computers are built. Factoring is now easy. GAPSVP is easy. Now you are in trouble.
  6. New word: Stoquastic. Not sure what it means.
  7. Isaac Chuang spoke about the difficulty of teaching quantum computing, since the students have different backgrounds. He has devised (or helped devise) online tutoring systems for it that seem to be working very well. I didn't know that quantum computing was at the level where one worries about how to teach it. Then again, any course has these concerns, so it's good to see that he did something about it. (Even so, I doubt I'll invest a lot of time and effort into an online tutoring system for my Ramsey Theory course next spring.)
  8. There were some talks on or touching on quantum programming languages, quantum CAD, and quantum architecture. I suspect that if quantum computers are ever built we will find that some of the assumptions of this work were wrong; however, I also suspect that having people who have thought about these issues will be valuable.
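The LWE problem from item 5 is concrete enough to sketch in code. Here is a toy Python generator of LWE samples (a, ⟨a,s⟩ + e mod q); the parameters and the ±1 error distribution are made up for illustration and offer nothing like real security.

```python
import random

# Toy Learning-with-Errors sample generator. Real schemes use a
# discrete Gaussian error and much larger dimensions; these
# parameters are invented for illustration only.
q, n = 97, 8            # modulus and secret dimension
secret = [random.randrange(q) for _ in range(n)]

def lwe_sample():
    """Return (a, b) with b = <a, secret> + e mod q for small error e."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice([-1, 0, 1])   # small "error" term
    b = (sum(ai * si for ai, si in zip(a, secret)) + e) % q
    return a, b

# With the error, recovering `secret` from many samples is believed
# hard; with e = 0 it would be trivial Gaussian elimination.
a, b = lwe_sample()
print(len(a), 0 <= b < q)  # 8 True
```
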

Thursday, September 27, 2012

Things a Complexity Theorist Should Do At Least Once

A few weeks ago, Suresh wrote a post Things a TCSer should have done at least once with the caveat
This list is necessarily algorithms-biased. I doubt you'll need many of these if you're doing (say) structural complexity. 
Basically begging a response. So here is what every structural complexity theorist should have done.
  • Define a new complexity class that has some reason for being.
  • To keep balance in the world, you should also collapse two other complexity classes. 
  • While you are at it, separate two complexity classes that weren't separable before. 
  • Create a new relativized world. Extra points if in this work you collapse two complexity classes while separating two others. 
  • Use Kolmogorov complexity, information theory or the probabilistic method as a proof technique. They are really all the same technique in disguise.
  • Use the Sunflower Lemma, the Lovász Local Lemma, or some other weird probabilistic or combinatorial lemma just for the fun of it.
  • Invoke VC dimension to solve a problem. I should have at least one item in common with Suresh, and Sauer's lemma is sometimes useful in complexity. 
  • Have a theorem that starts "Assuming the extended Riemann hypothesis..."
  • Give a "simpler" proof of a nasty theorem. 
  • [Advice given by Noam Nisan many moons ago] Try to settle P v NP. In both ways. Only by really trying and failing can you understand why the easy stuff doesn't work. 

Tuesday, September 25, 2012

A new kind of Spam

(In this post I quote attempted comments to the blog. I transcribe them as they are, so if you see a missing period or awkward language, it's not me (this time), it's them.)

We moderate comments but do so very lightly. There are no hard and fast rules, but roughly speaking we block comments that are BOTH offensive AND off-topic. There may be exceptions- like if a comment is on topic but REALLY REALLY offensive and adds nothing to the discussion. There may be more benign exceptions- like if I post a question and block the answers so that when I reveal the answer the next day it's more dramatic. Or if someone posts information that is not public yet. In these cases we hope they post non-anonymously so we can email them and tell them why they were blocked. There are other isolated cases as well. All of these are very rare.

Recent attempted comments do not fall under these rules and we had to decide on them. The following was an attempted comment on my post STOC 2012- Workshops and honored talks:

Hi there STOC 2012- workshop and honors talks Loved every second! Great views on that!
That actually breaks the mold! Great thinking!

This comment is NEITHER offensive NOR off-topic. It's a bit odd- I can't tell if the great view is of my post or of the talks I was writing about. It does sound awkward. Why is that? IT WAS GENERATED BY A SPAMBOT!!! How do I know this? Because if you click on the author you are directed to a site that sells you paint for your living room. Hence we block such posts.

So, they think the readers of our blog are into interior decorating. I am sure that some are, but I don't think our readers are a particularly good market for this. Technology is good enough to find our blogs and try to use spambots on them, but not good enough (or there is no incentive) to figure out which blogs are worth targeting. This is part of a bigger problem I blogged about here, where I noted that technology is good enough to know that I am a book review editor for SIGACT NEWS but not good enough (or there is no incentive) to figure out that I only review computer science and math books, and not books on (say) politics.

A borderline case: an attempted comment on the blog post A natural function with very odd properties was

Awesome logic. You truly have some expert skills and enhanced my knowledge on Cantor Set Construction Agreements.

This one did not link to any product so it might be legit, except that it is awkward sounding and the same person tried to submit, as a comment to Six Questions about natural and unnatural mathematical objects:

This truly enhanced my skills. very helpful Job Proposal.

Clearly spam, though I'm not sure why, since there is no link to a product.

These comments are trying to pass a Turing Test- but so far they are not succeeding.

Sometimes the only positive comments I get are from spambots. Oh well.

Thursday, September 20, 2012

Poset Games are PSPACE-complete

Consider the following game on a poset: players alternate picking an element x of a finite poset and removing all y ≥ x. The first one to empty the poset wins. I posted last March about a high school student, Adam Kalinich, who showed how to flip the winner of a poset game.

Finding the winner of a poset game is in PSPACE by searching the game tree. A corollary of Adam's work showed that poset games are hard for Boolean formulas, leaving a huge gap in the complexity of finding the winner.
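That game-tree search fits in a few lines; here is a memoized Python sketch (the function names and the tiny example poset are my own) that decides the winner of a small poset game.

```python
from functools import lru_cache

# A poset given by a reflexive order predicate: leq(x, y) means x <= y.
# A move picks x and removes every y >= x; the player who takes the
# last element (empties the poset) wins.
def first_player_wins(elements, leq):
    elements = frozenset(elements)

    @lru_cache(maxsize=None)
    def wins(remaining):
        # The player to move wins iff some move leaves a losing position.
        return any(not wins(remaining - {y for y in remaining if leq(x, y)})
                   for x in remaining)

    return wins(elements)

# Example: the 2-element chain a <= b. Taking a empties the poset,
# so the first player wins.
order = {("a", "a"), ("a", "b"), ("b", "b")}
leq = lambda x, y: (x, y) in order
print(first_player_wins({"a", "b"}, leq))  # True
```

On a 2-element antichain the same search reports a first-player loss, matching the obvious strategy of mirroring the opponent's move.
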

Daniel Grier, an undergrad at the University of South Carolina, has settled the problem and shows that determining the winner of a poset game is PSPACE-complete. His reduction is ridiculously simple (though not obvious) and the proof is not that complicated either.

Grier starts from Node Kayles, a game on an undirected graph where each player takes turns removing a vertex and all its neighbors. Whoever empties the graph first wins. Thomas Schaefer showed the PSPACE-completeness of Node Kayles back in 1978.

Grier's reduction from Node Kayles to posets is very simple: let G be the graph. Have one element in the poset for each vertex of G, all incomparable. For each edge e=(u,v) we add two more elements: one above the vertex elements corresponding to u and v, and one below every vertex element other than those for u and v. That's the whole construction.
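As a sanity check, the construction is easy to code up. Here is a small Python sketch (my own rendering of the reduction as just described; it emits the generating order relations, not their transitive closure):

```python
# Build the poset of Grier's reduction from an undirected graph.
# Returns order relations as pairs (lo, hi) meaning lo < hi.
def kayles_to_poset(vertices, edges):
    relations = set()
    for i, (u, v) in enumerate(edges):
        top, bot = ("top", i), ("bot", i)   # the two extra elements for edge i
        relations.add((u, top))             # top element sits above u and v
        relations.add((v, top))
        for w in vertices:                  # bottom element sits below every
            if w not in (u, v):             # vertex element except u and v
                relations.add((bot, w))
    return relations

# Triangle graph: each edge contributes 2 "above" pairs and 1 "below"
# pair, so 3 edges give 9 generating relations.
rels = kayles_to_poset({1, 2, 3}, [(1, 2), (2, 3), (1, 3)])
print(len(rels))  # 9
```
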

Grier shows that if G has an odd number of edges and no immediate win for the first player then the first player wins the Node Kayles game if and only if the first player wins the corresponding poset game.

You can read more details in Grier's short paper. It's really neat seeing high school students and undergrads solving interesting open problems. We need more problems like poset games.

Wednesday, September 19, 2012

Theory Conferences Galore

Early registration for the FOCS conference in New Jersey is September 27th. There is some travel support available for students and postdocs, deadline is this Friday the 21st.

STOC and Complexity will be co-located in Palo Alto in early June. STOC CFP (deadline November 2), Complexity CFP (deadline November 30).

SODA comes back to the US and New Orleans January 6-8. Accepted Papers.

Monday, September 17, 2012

Imagining Imaginary Probabilities

In the year 4000 BC my great-great-...-great grandmother tried to solve (in today's terms) the equation
x² + 2x + 2 = 0
She discovered that if it had a solution then there would be a number a (namely x+1) such that a² = -1. Since there clearly was no such number, the equation had no solution. She missed her chance to (depending on your viewpoint) discover or invent complex numbers.

Fast Forward 6012 years.

In the year 2012 I wondered: is there a probability p such that if you flip a coin that has prob(H)=p twice, the probability that you get HT is 1/2? This leads to
p(1-p) = 1/2, that is, p² - p + 1/2 = 0.
If you solve this you get p=(1+i)/2. Hence there is no such coin. WAIT A MINUTE! I don't want to miss the chance that my great-...-great grandmother missed! In the real world you can't have a coin with prob(H) = (1+i)/2. But is there some meaning to this?

More generally, for any 0 ≤ d ≤ 1 there is a p ∈ C (the complex numbers) such that ``prob(HT)=d.'' The oddest case (IMHO) was to take d=1. You then get that if a coin has prob(H)=(1+i√3)/2 then prob(HT)=1. Does that mean HT always happens? No, since prob(TH)=1 as well. Do the probs of HH, HT, TH, TT add up to 1? Yes they do, since some of them are complex with negative real part.
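Python's built-in complex numbers make it easy to verify this arithmetic (a toy check of the claims above; the helper name is mine):

```python
# Check the "imaginary coin" arithmetic: with prob(H) = p, the four
# two-flip outcomes HH, HT, TH, TT have "probabilities"
# p*p, p*(1-p), (1-p)*p, (1-p)*(1-p).
def two_flip_probs(p):
    return [p * p, p * (1 - p), (1 - p) * p, (1 - p) * (1 - p)]

p = (1 + 1j) / 2                   # the "coin" with prob(HT) = 1/2
print(p * (1 - p))                 # (0.5+0j)

p = (1 + 3 ** 0.5 * 1j) / 2        # the d = 1 case
probs = two_flip_probs(p)
print(probs[1])                    # prob(HT): (close to) 1, up to rounding
print(sum(probs))                  # the four outcomes still sum to 1
```
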

Is there an interpretation or use for this? I know that quantum mechanics uses stuff like this. Could examples like this be good for education? Are there non-quantum examples of uses of this that could be taught in a discrete math course?

Thursday, September 13, 2012

Max Flow

A couple of weeks ago Suresh tweeted the following result of James Orlin: max flow can be solved in O(nm) time. I'm thinking, wow, max flow is one of the major standard algorithmic problems, and O(nm) time (n = number of vertices, m = number of edges) seems like a great clean bound. But there hasn't been much chatter about this result beyond Suresh's tweet.

Reading Orlin's paper gives some clues. The previous best bound, due to King, Rao and Tarjan, has a running time of O(nm log_{m/(n log n)} n) = O(nm log n), just a logarithm off from O(nm). Orlin doesn't directly give an O(nm) algorithm; his takes time O(nm + m^{31/16} log² n). It's the minimum of the running times of the King-Rao-Tarjan and Orlin algorithms that yields O(nm). Nor is O(nm) tight: Orlin also gives an algorithm with a running time of O(n²/log n) when m=O(n).

I don't mean to knock Orlin's work, he makes real progress on a classical algorithmic problem. But somehow I think of O(nm) as a magical bound when it is really just another bound. I'm just fooled by simplicity.
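For contrast with these finely tuned bounds, the textbook O(nm²) Edmonds-Karp algorithm (not Orlin's algorithm; the example network is invented) fits in a page of Python:

```python
from collections import deque

# Edmonds-Karp: repeatedly augment along a shortest path found by BFS.
# capacity is a dict-of-dicts mapping u -> {v: capacity of edge (u, v)}.
def max_flow(capacity, s, t):
    # Copy into a residual graph, adding reverse edges of capacity 0.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:      # BFS for an augmenting path
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow                        # no augmenting path left
        path, v = [], t                        # recover the s-t path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:                      # push flow along the path
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

cap = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}, "t": {}}
print(max_flow(cap, "s", "t"))  # 4
```
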

Tuesday, September 11, 2012

Some quantum Stuff

Two quantum announcements (emailed to me by Umesh Vazirani, and reproduced here almost exactly) and then some thoughts of mine on quantum computing.

Announcement one: The NSF has a new initiative to try to address the lack of tenured faculty (particularly in computer science departments) involved in quantum computation research.  CISE-MPS Interdisciplinary Faculty Program in Quantum Information

The initiative provides a paid sabbatical year to interested tenured faculty to visit a strong quantum computing group, so that they can reposition their research interests. The rationale behind the solicitation is to increase the number of tenured researchers in quantum computation, but also to break through the "quantum skepticism" in faculty hiring decisions in those departments where there are no faculty actively involved in quantum computing research.

Announcement two: In support of this program, Carl Williams (a physicist working in Quantum Information who put together the US Vision for Quantum Information Science for the Office of the President) and Umesh have put together a workshop where interested individuals can learn about the initiative, the field and make contacts with people from the major quantum computing centers: see here.

The initiative comes at a particularly opportune moment for researchers in complexity theory, given the increasing relevance of quantum techniques in complexity theory --- the 2-4 norm paper of Barak et al. (SDPs, Lasserre), exponential lower bounds for the TSP polytope via quantum communication complexity arguments (see the Drucker and de Wolf paper Quantum proofs for classical theorems for several applications of quantum techniques to complexity, and see here for the TSP polytope result), quantum Hamiltonian complexity as a generalization of CSPs, lattice-based cryptography whose security is based on quantum arguments, etc.

MY COMMENTS: Umesh gives, as a reason quantum is important, its uses in other parts of complexity theory. While that is certainly a good reason, there are other intellectual reasons why quantum is worth studying.
  1. Factoring is in Quantum P! There are MANY problems (maybe 10) where quantum seems to be faster than classical. I wouldn't really want to push this point since quantum computers aren't built yet. More generally, if one claims a field is valid for real practical value, those arguments may become less believable over time.
  2. Quantum computing can be used to simulate quantum systems- I think this was one of the original motivations.
  3. Quantum computing is valuable for a better understanding of Physics.
    This was first told to me by Fred Green (a Physics PhD who went into computer science) and I made it the subject of this blog entry.
    I like his quote so much that I will quote it here

    Learning quantum computing helped me understand quantum mechanics better. As a physicist I never thought about measurement theory or entanglement, which were foundational issues, irrelevant to what I was doing. In quantum computing, we reason about these things all the time.

    Over the years others have told me similar things.

  • Side note: The word Quantum is mostly misused in popular culture. Quantum Leap meant a big leap when actually a quantum is small. The James Bond movie Quantum of Solace used it correctly but was an awful movie. Oh well.