Thursday, December 27, 2012

2012 Complexity Year in Review

As we turn our thoughts from Turing to Erdős, a look back at the complexity year that was. Written with help from co-blogger Bill Gasarch.

The complexity result of the year goes to Samuel Fiorini, Serge Massar, Sebastian Pokutta, Hans Raj Tiwary and Ronald de Wolf for their paper Linear vs. Semidefinite Extended Formulations: Exponential Separation and Strong Lower Bounds (ArXiv version). It is easy to show that TSP can be expressed as an exponentially sized Linear Program (LP). In 1987 Swart tried to show that TSP could be solved with a polynomial-sized LP. While his attempt was not successful, it did inspire Yannakakis to look at the issue of how large an LP for TSP has to be. He showed in 1988 that any symmetric LP for TSP had to be of exponential size. (Swart had used symmetric LPs.)

What about asymmetric LPs? This has been open UNTIL NOW! The paper above proves that any LP formulation of TSP must have exponential size. They use communication complexity and techniques that were inspired by quantum computing.

Runners Up
And don't forget the solution to Bill's 17x17 problem.

News and trends: The new Simons Institute for the Theory of Computing at Berkeley, the great exodus of Yahoo! researchers mostly to Google and Microsoft, the near death of Florida computer science (anyone want to be chair?), and the rise of the MOOCs. 

We remember Dick de Bruijn, Tom Cover, Mihai Pătraşcu, Ernst Specker and David Waltz, not to mention Neil Armstrong and Ray Bradbury.

Thanks to our guest posters Bernard Chazelle, Yoav Freund, Andrew Goldberg, Mohammad Taghi Hajiaghayi, William Heisel, Lane Hemaspaandra, John Purtilo, Janos Simon and Vijay Vazirani.

Enjoy 2013 and remember that when living in a complex world, it's best to keep it simple. And try not to fall off that fiscal cliff.

Thursday, December 20, 2012

Flash Fill

Sometimes it just takes a simple new feature in a popular piece of software to remind us that computer science does cool stuff.

Excel 2013 has a new feature, Flash Fill, where you can reformat data by giving an example or two. If you have a column of names like

Manuel Blum
Steve Cook
Juris Hartmanis
Richard Karp
Donald Knuth


You can start a column to the right and type
Blum, M.
Cook, S.
and the rest of the table gets filled in automatically.

Flash Fill is based on a 2011 POPL paper by Sumit Gulwani (later a CACM highlight). It's been explained to me as applying machine learning to binary decision diagrams.
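To make the task concrete, here is a toy Python sketch of the transformation Flash Fill would have to come up with from those two examples. The function name abbreviate is mine, and this is just the target program, not Gulwani's synthesis algorithm:

```python
def abbreviate(name):
    # The program to be inferred from "Blum, M." and "Cook, S.":
    # split "First Last" and rewrite it as "Last, F."
    first, last = name.split()
    return f"{last}, {first[0]}."

for name in ["Manuel Blum", "Steve Cook", "Juris Hartmanis",
             "Richard Karp", "Donald Knuth"]:
    print(abbreviate(name))
# Blum, M.
# Cook, S.
# Hartmanis, J.
# Karp, R.
# Knuth, D.
```

The hard part, and the point of the paper, is inferring a program like this automatically from only a couple of input-output examples.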

Flash Fill allows a user to manipulate data without having to write macros or Perl scripts. Someone with no technical background can use Flash Fill and enjoy the CS goodness inside without even knowing it is there.

Monday, December 17, 2012

Goodstein Sequences (It's his 100th birthday!)

LANCE: Bill, on Dec 15, 2012 it will be Reuben Goodstein's 100th birthday.

BILL: Is he still alive? Will there be a conference in his honor? Free Food? Cake?

LANCE: No, no, and no. But you could blog about Goodstein sequences. (thinking: or they could just look it up here).

BILL: That is a Goodstein idea. You have a Goodstein sequence of good ideas.

I first define a sequence that is not a Goodstein sequence but will be good for building intuition. Let n be a number, say n = 42 in base 10. At each step we subtract 1 from the numeral in the current base and then reinterpret its digits in the next base up. We keep doing this. We get

  1. 42 base 10 = 42 base 10
  2. 41 base 11 = 45 base 10
  3. 40 base 12 = 48 base 10
  4. 3E base 13 = 50 base 10 (E stands for Eleven)
  5. 3T base 14 = 52 base 10 (T stands for Ten)
  6. 39 base 15 = 54 base 10
The sequence looks like it's increasing. But note that it eventually gets to
  1. 30 base 24 = 72 base 10
  2. 2(23) base 25 = 73 base 10 (the "units digit" is 23)
  3. 2(22) base 26 = 74 base 10
OH MY. It's still increasing. But note that it eventually gets to
  1. 20 base 48 = 96 base 10
  2. 1(47) base 49 = 96 base 10
  3. 1(46) base 50 = 96 base 10
  4. 1(45) base 51 = 96 base 10
It seems to be stuck at 96 for a while. Indeed we eventually get to
  1. 11 base 95 = 96 base 10
  2. 10 base 96 = 96 base 10
  3. 0(95) base 97 = 95 base 10
And from there on it goes down to 0. Given n, how many iterations do you need to get to 0? This function grows rather fast, but not THAT fast. To prove that the sequence always reaches 0 you need (I think) an induction on an ordering of order type omega squared.
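Here is a minimal Python sketch of this toy sequence (my code, assuming the subtract-then-rebase rule described above):

```python
def toy_goodstein(n, base=10):
    """Yield the toy sequence: subtract 1 from the numeral in the
    current base, then reinterpret its digits in the next base up."""
    digits = []                      # digits of n, most significant first
    while n:
        digits.insert(0, n % base)
        n //= base
    value = sum(d * base**i for i, d in enumerate(reversed(digits)))
    while value > 0:
        yield value
        i = len(digits) - 1          # subtract 1 in the current base...
        while digits[i] == 0:
            digits[i] = base - 1     # ...borrowing where needed
            i -= 1
        digits[i] -= 1
        base += 1                    # reinterpret in the next base
        value = sum(d * base**i for i, d in enumerate(reversed(digits)))

seq = list(toy_goodstein(42))
print(seq[:6])   # [42, 45, 48, 50, 52, 54]
print(len(seq))  # 182: the number of nonzero terms for n = 42
```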

The true Goodstein sequences write the number in base 10 with the exponents themselves also written in base 10, and so on as far as possible, for example 4 x 10^(2 x 10^4) + 8 x 10^3 + 7 x 10^2 + 8 x 10^(2 x 10^1) + 3 x 10^0. (This is called hereditary base-10 notation.) At each iteration we subtract 1 and then turn all of the 10's into 11's. More generally we write the number in base b, with the exponents (and their exponents, and so on) also in base b, subtract 1, and turn all of the b's into (b+1)'s. This sequence also goes down to 0. NOW how long does it take to get to 0? The function is not primitive recursive. Also, the theorem that states the sequence eventually goes to 0 cannot be proven in Peano Arithmetic (this is the Kirby-Paris theorem).
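And a sketch of the real thing (again my code, not from the post): write n - 1 in hereditary base-b notation and bump every b to b + 1. For a two-digit start like 42 the hereditary structure is trivial, so this reproduces the toy sequence above; the explosive growth only kicks in once exponents appear.

```python
def rebase(n, b):
    """Value of n after writing it in hereditary base-b notation
    and replacing every occurrence of b by b + 1."""
    if n == 0:
        return 0
    e = 0
    while b ** (e + 1) <= n:         # largest exponent appearing in n
        e += 1
    result = 0
    while n > 0:
        d = n // b ** e
        if d > 0:
            # the exponent e is itself rewritten hereditarily
            result += d * (b + 1) ** rebase(e, b)
            n -= d * b ** e
        e -= 1
    return result

def goodstein(n, b=10):
    """Yield the Goodstein sequence: at each step subtract 1, then
    turn every b in the hereditary base-b notation into b + 1."""
    while n > 0:
        yield n
        n = rebase(n - 1, b)
        b += 1

g = goodstein(42)
print([next(g) for _ in range(6)])   # [42, 45, 48, 50, 52, 54]
```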

I use Goodstein sequences as an example of a natural (one can debate that) function that is not primitive recursive. Ackermann's function comes up more often (lots more often) but is harder to initially motivate.

So Reuben Goodstein, wherever you are, I salute you!

Thursday, December 13, 2012

The Fiscal Cliff

Unless the Republicans and Democrats get a deal before year's end, the country will head over the so-called "Fiscal Cliff," including a budget sequestration that will cause automatic cuts to most federal agencies. This would be a disaster for science: grants will be delayed or not awarded, and there may even be spending freezes on current funds. University presidents have banded together in my old state and my new one to drive this point home.

Don't panic too much about the fiscal cliff itself, which will be short-lived if it happens at all. But the aftermath may bring deep short- or long-term budget cuts in science. If charitable deductions are eliminated, that could be a large hit to university endowments. Pell grants might also be in play.

On the other hand, science and education have their friends in Washington, starting with Barack Obama. So maybe the whole fiscal mess will work out just fine for us and we can go back to worrying about whether universities will be decimated by MOOCs.

Tuesday, December 11, 2012

Book review column

(I am a bit behind on posting links to my book review column. This post is about the third of four for 2012; the fourth has already appeared. I'll post that one later.)

My second-to-last book review column can be found here. The file pointed to does NOT include the BOOKS I NEED REVIEWED since that has changed. For that go here.

The column has an editorial! In a book review column for CS theory books?! It is an editorial against the high prices of books and the fact that many books are NOT available online. These are well-known problems, so what of it? I could say why the publishers are at fault, but frankly, I don't know the economics of the book market. So instead I recommend that the community do the following:

  1. Only buy books used or at below list price (you probably already do this).
  2. If you write a book then insist that the contract allow it to be online for free. This is not as outlandish as it sounds: (1) Blown to Bits by Abelson, Ledeen and Lewis (which I reviewed in this column) is a book for a wide audience, a popular book about computers and society. Even so, it is available online for free here. (2) Some publishers of math and science books allow the author to post the book online for free.
  3. When assigning a course textbook, allow students to get earlier editions, which are usually far, far cheaper. There is no reason to insist they get the most recent edition.
Also of interest with regard to all of this is the Kirtsaeng v. Wiley case: here

Thursday, December 06, 2012

Where is the youth of TCS/Market Eq/Guest Post by Vijay

(Guest post by Vijay Vazirani.)

At Theory Day on Nov 30, 2012, Vijay Vazirani gave the talk New (Practical) Complementary Pivot Algorithms for Market Equilibrium. The reaction to the talk inspired him to write a guest post, which I present here!

Where is the Youth of TCS?

by Vijay Vazirani

I have always been impressed by the researchers of our community, especially the young researchers -- highly competent, motivated, creative, open-minded ... and yet cool! So it has been disconcerting to note that over the last couple of years, each time I have met Mihalis Yannakakis, I have lamented the lack of progress on some fundamental problems, and each time the same thought has crossed my mind: "Where is the youth of TCS? Will we old folks have to keep doing all the work?"

Is the problem lack of information? I decided to test this hypothesis during my talk at NYTD. To my dismay, I found out that there is a lot of confusion out there! By a show of hands, about 90% of the audience said they believed that Nash Equilibrium is PPAD-complete and 3% believed that it is FIXP-complete! I would be doing a disservice to the community by not setting things right, hence this blog post.

First a quick primer on PPAD and FIXP, and then the questions. Ever since Nimrod Megiddo, 1988, observed that proving Nash Equilibrium NP-complete is tantamount to proving NP = co-NP, we have known that the intractability of equilibrium problems will not be established via the usual complexity classes. Two brilliant pieces of work gave the complexity classes of PPAD (Papadimitriou, 1990) and FIXP (Etessami and Yannakakis, 2007), and they have sufficed so far. A problem in PPAD must have rational solutions and this class fully characterizes the complexity of 2-Nash, which has rational equilibria if both payoff matrices have rational entries. On the other hand, 3-Nash, which may have only irrational equilibria, is PPAD-hard; however, its epsilon-relaxation is PPAD-complete. That leaves the question, ``Exactly how hard is 3-Nash?''
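A tiny worked illustration of why 2-Nash stays rational (my example, not from the talk): on a fixed support, a mixed equilibrium solves linear indifference equations with rational coefficients, so the solution is rational. For matching pennies:

```python
from fractions import Fraction

# Matching pennies: row player's payoffs; the column player gets the negation.
A = [[Fraction(1), Fraction(-1)],
     [Fraction(-1), Fraction(1)]]

# In a full-support equilibrium the column player's mix (q, 1-q) makes the
# row player indifferent between rows:
#   A[0][0]*q + A[0][1]*(1-q) = A[1][0]*q + A[1][1]*(1-q),
# one linear equation over the rationals; solve it for q.
num = A[1][1] - A[0][1]
den = (A[0][0] - A[0][1]) - (A[1][0] - A[1][1])
q = num / den
print(q)  # 1/2 -- exact rational arithmetic; no irrational numbers appear
```

With three or more players the indifference conditions involve products of the other players' probabilities, so the system becomes polynomial rather than linear, and irrational equilibria can appear.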

Now it turns out that 3-Nash always has an equilibrium consisting of algebraic numbers. So one may wonder if there is an algebraic extension of PPAD that captures the complexity of 3-Nash, perhaps in the style of Adler and Beling, 1994, who considered an extension of linear programs in which parameters could be set to algebraic numbers rather than simply rationals. The class FIXP accomplishes precisely this: it captures the complexity of finding a fixed point of a function that uses the standard algebraic operations and max. Furthermore, Etessami and Yannakakis prove that 3-Nash is FIXP-complete. The classes PPAD and FIXP appear to be quite disparate: whereas the first is contained in NP INTERSECT co-NP, the second lies somewhere between P and PSPACE (and closer to the harder end of PSPACE, according to Yannakakis).

Now the questions (I am sure there are more):

  1. Computing an equilibrium for an Arrow-Debreu market under separable, piecewise-linear concave utilities is PPAD-complete (there is always a rational equilibrium). On the other hand, if the utility functions are non-separable, equilibria consist of algebraic numbers. Is this problem FIXP-complete? What about the special case of Leontief utilities? If the answer to the latter question is "yes," we will have an interesting demarcation with Fisher markets under Leontief utilities, since they admit a convex program.
  2. An Arrow-Debreu market with CES utilities has algebraic equilibrium if the exponents in the CES utility functions are rational. Is computing its equilibrium FIXP-complete? Again, its epsilon-relaxation is PPAD-complete.
  3. A linear Fisher or Arrow-Debreu market with piecewise-linear concave production has rational equilibria if each firm uses only one raw good in its production, and computing it is PPAD-complete. If firms use two or more raw goods, equilibria are algebraic numbers. Is this problem FIXP-complete?

Monday, December 03, 2012

Too Much Competition?

On Friday the New York Times ran an article on how online retailers constantly adjust prices to match their competitors. This ability builds on many tools from computer science, from networks to algorithms, not unlike what airlines and hedge funds do. But is this good for the consumer?

Suppose I work for bestbuy.com and have a television priced at $500, which matches the price on amazon.com. If I lower my price to $450, then I can expect Amazon to do the same. I've gained little from lowering the price; I just make $50 less per television than I did before. Likewise Amazon has little incentive to lower their price. This hypercompetition can actually lead to collusion without colluding. Hal Varian explored a similar theme of price guarantees in a 2007 Times viewpoint.
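A toy model (my numbers, nothing from the article) makes the incentive concrete: if a price cut is mirrored instantly, undercutting never wins market share, it only shrinks margins.

```python
def profit(my_price, their_price, cost=300, buyers=100):
    """Toy market: the cheaper store takes every buyer; ties split evenly."""
    if my_price < their_price:
        share = 1.0
    elif my_price == their_price:
        share = 0.5
    else:
        share = 0.0
    return (my_price - cost) * buyers * share

print(profit(450, 500))  # 15000.0: undercutting wins if Amazon doesn't react
print(profit(500, 500))  # 10000.0: both hold at $500
print(profit(450, 450))  # 7500.0: the cut is matched, so everyone earns less
```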

In practice companies still lower their prices, but as networks get faster, algorithms get smarter, and more people shop online, we might actually see higher prices and less competition, which I believe is already happening with the airlines.