Monday, December 28, 2009

2009 Complexity Year in Review

We go all the way back to January for the paper of the year, Mark Braverman's Poly-logarithmic independence fools AC0 circuits. Runners up include the Moser-Tardos Constructive Proof of the Lovász Local Lemma (mostly for Robin Moser's great STOC talk) and Rahul Jain, Zhengfeng Ji, Sarvagya Upadhyay and John Watrous for QIP=PSPACE. Yet another great year for complexity.

We welcomed new bloggers Glencora Borradaile, Jon Katz, Harry Lewis, Richard Lipton, and Noam Nisan. And we have a new president who promises to restore science to its rightful place.

US Science funding in general and CS funding at the NSF in particular got a strong boost from the stimulus package and continues to be well funded in the new budget. Microsoft Research New England gobbles up Madhu Sudan and Boaz Barak. A new innovations conference starts next week and we explored barriers in August. Lots of postdoc jobs out there for theorists. If you are looking for a faculty job, well you might want to consider another postdoc. 

This year we started vidcasts and typecasts with special guests Molly Fortnow, Meena Mahajan, John Rogers, Rahul Santhanam, Chris Umans and Ryan Williams.

I had a fun year. My journal ACM Transactions on Computation Theory had its first two issues (submit your papers). I discovered Twitter. I became Theory Czar (SIGACT chair) with fellow blogger Michael Mitzenmacher as vice-chair. It's hard to complain about the powers-that-be when you are a power-that-is. I had two CACM articles: Time for Computer Science to Grow Up and The Status of the P versus NP Problem (another year and the problem is still open). I just started a 3-month blog sabbatical to turn the latter CACM article into a book for the masses, though I can't resist the Year in Review post. Thanks to Bill for keeping the blog going.

We remember Rajeev Motwani, Amir Pnueli, Jack Schwartz, Imre Simon and Ray Solomonoff.

Thanks to our guest posters and contributors: Michele Budinich, Hans Courant, Dave Doty, Sorelle Friedler, Samir Khuller, Clyde Kruskal, Joe Kruskal, Bill Kahn, Michael Lucas, Lucy Moser, Ryan O'Donnell, Tal Rabin, Rahul Santhanam, Janos Simon, Aaron Sterling, Vijay Vazirani and Lenore Zuck.

Have a great New Year's. Bill will be back next week and I'll be posting regularly again in March starting in Columbus.

Tuesday, December 22, 2009

How to tell how good a TV show is

(This is my last blog of the year. Lance will interrupt his blog sabbatical to do an END OF THE YEAR blog later.)

The TV show MONK recently finished its 8th and final season. My wife and I are big fans and have seasons 1-7 on DVD (and we will get 8). But this post is not about Monk. It's about the question: how do you determine how good a TV show is? I am sure that whatever I say here may apply to other problems.

First assign to each episode a number between 1 and 10 depending on how much you liked it. (This could be the hardest part of the method.) Let t be a parameter to be picked later. t stands for threshold. If your criterion is How likely is it that an episode is OUTSTANDING? then you would pick t large, perhaps 9. If your criterion is How likely is it that an episode DOES NOT SUCK? then you would pick t small, perhaps 2. Some of the methods use t, some do not.

There are many different ways to do this. We give a few of them:
  1. The mean or median of all of the episodes.
  2. The probability that a randomly chosen episode is rated above t. (One could also consider the probability that it is within one standard deviation of t.)
  3. The probability that a randomly chosen disc has an episode rated above t.
  4. The probability that a randomly chosen disc has fraction f of its episodes rated above t.
  5. Rate each disc in the DVD set for the entire season. The mean or median of all of these ratings.
  6. The mean or median of the best season.
  7. The mean or median of the worst season.
There are others as well. But the question really is: given a set of numbers grouped in a natural way (in this case roughly 8 sets of 16 numbers, and each set of 16 in groups of 4), how do you judge the quality? (A short sketch computing a few of these methods follows.)
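Here is a minimal sketch in Python of how a few of the methods above could be computed once the ratings are grouped by disc. The ratings are made up for illustration:

    from statistics import mean, median

    # Hypothetical ratings for one season: 4 discs of 4 episodes each.
    season = [[8, 6, 9, 7], [5, 6, 6, 8], [9, 9, 7, 6], [4, 7, 8, 8]]
    episodes = [r for disc in season for r in disc]
    t = 8   # the threshold

    print(mean(episodes), median(episodes))                 # method 1
    print(sum(r > t for r in episodes) / len(episodes))     # method 2
    print(sum(any(r > t for r in disc) for disc in season)
          / len(season))                                    # method 3
    # Method 5, using each disc's mean rating as a stand-in for
    # rating the disc itself:
    print(median(mean(disc) for disc in season))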
For those who are fans of the show MONK, here are my choices for OUTSTANDING and UNWATCHABLE episodes: here.

Friday, December 18, 2009

What is an explicit construction?

The Prob method (usually credited to Erdos) was once considered quite novel: you show something exists but you don't show how to construct it! An early example was lower bounds on the Ramsey numbers: you can prove that there exists a 2-coloring of the edges of K_n with no monochromatic k-cliques where n = 2^{k/2}, and the proof does not give a way to construct the coloring! (The proof: color randomly; the expected number of monochromatic k-cliques is C(n,k)·2^{1-C(k,2)}, which is less than 1 for n = 2^{k/2}, so some coloring has none.) When Joel Spencer wrote his first book on The Probabilistic Method (appeared in 1974) this kind of argument was still considered novel. Now they are quite common. Such arguments were (and still are) called Nonconstructive. However, at the time, the term was not defined rigorously.

Of course, once you know that such a coloring exists, you can find one by just looking at all possible colorings. Hence this use of nonconstructive proofs would not be an issue for logicians.

Many of the early prob method arguments actually showed that most colorings have the property you want. Hence they would lead to randomized polynomial time algorithms. With this in mind, here is one possible definition that seems to cover what Erdos and company were thinking informally:

Definition: A coloring of an object X is constructive if it can be obtained in time polynomial in the size of X.

Is this a good definition? Now that we think P=BPP, or at least that BPP is a good notion of feasible, perhaps we should call randomized algorithms constructive. Moser and Tardos think so since their paper entitled A constructive proof of the General Lovasz Local Lemma has a randomized algorithm.

Here is a history of the prob method as applied to lower bounds on VDW numbers (see here for definitions and some of the proofs). Let W(k) be the least number such that, no matter how you 2-color {1,...,W(k)}, there will be a monochromatic arithmetic sequence of length k.
  1. W(k) ≥ sqrt(k)·2^{(k-1)/2} is an easy application of the Prob Method. The proof is so easy that I could not find it written down anywhere, so I wrote it down in the paper pointed to above.
  2. W(k) ≥ sqrt(k)·2^{(k-1)/2} can be proved constructively by the method of conditional expectations (a sketch appears after this list). This is also so easy that I could not find it written down anywhere. So I wrote it down myself in the paper pointed to above.
  3. Berlekamp showed that, if p is prime, then W(p) ≥ p·2^p constructively. (I prove a slightly weaker result in the paper linked to above.) (Berlekamp's original paper can be found on my website of VDW papers: here.)
  4. Using the Lovász Local Lemma one can show that W(k) ≥ 2^k/(ek). This proof is nonconstructive and does not yield an almost-all result, so it cannot be turned into a randomized algorithm.
  5. Moser and Tardos (above link) (and later Beigel with a different proof) showed that the LLL can always be made to yield a probabilistic algorithm. Hence they have a randomized algorithm to obtain a 2-coloring of {1,...,2^k/(ek)} with no mono k-APs. This is not a constructive algorithm in the way that Erdos or Spencer might accept, though it does yield a feasible algorithm.
  6. Chandrasekaran, Goyal, Haeupler in the paper Deterministic algorithms for the Lovasz Local Lemma have a deterministic version of the LLL with slightly worse bounds. From their paper one can obtain W(k) ≥ 2^{k/(1+ε)} via a polynomial-time algorithm.
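To illustrate item 2, here is a minimal sketch in Python (written for clarity, not speed) of the method of conditional expectations: greedily color 1,...,n, always picking the color that minimizes the expected number of monochromatic k-APs if the remaining numbers were colored at random. Since that expectation starts below 1 when n ≤ sqrt(k)·2^{(k-1)/2} and never increases, the final coloring has no monochromatic k-AP:

    def mono_prob(ap, coloring):
        # Probability the AP ends up monochromatic when the
        # not-yet-colored numbers get uniformly random colors.
        colors = {coloring[x] for x in ap if x in coloring}
        if len(colors) > 1:
            return 0.0
        uncolored = sum(1 for x in ap if x not in coloring)
        return (2 if not colors else 1) * 0.5 ** uncolored

    def vdw_coloring(n, k):
        # All k-term arithmetic progressions inside {1,...,n}.
        aps = [[a + i * d for i in range(k)]
               for d in range(1, (n - 1) // (k - 1) + 1)
               for a in range(1, n - (k - 1) * d + 1)]
        coloring = {}
        for x in range(1, n + 1):
            # Pick the color minimizing the conditional expectation.
            coloring[x] = min((0, 1), key=lambda c: sum(
                mono_prob(ap, {**coloring, x: c}) for ap in aps if x in ap))
        return coloring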

Thursday, December 17, 2009

A hard problem inspired by an easy problem

The following problem was problem 1 (the easy one) on the Maryland Math Competition 2009 (I will later report on how the students did on it).
  1. Show that for every set of three integers we can find two of them whose average is an integer.
  2. Show that for every set of five integers we can find three of them whose average is an integer.
I am sure that most of my readers can do this problem. Here is a generalization. I do not know if it is true.
(Conjecture) For all k, every set of 2k-1 integers contains k of them whose average is an integer.
  1. The UMCP competition asked for the k=2 and k=3 cases of the conjecture. They are true and easy.
  2. I have done the k=4 case. It was tedious but not hard.
  3. I think I have done the k=5 case but it was a lot of cases so I may have missed one.
SO, is the conjecture true or not? I have asked around and people who should know don't seem to know, so I throw it open to the blogosphere.

What I hope happens: Someone posts a nice proof.

What would also be okay: Someone posts a counterexample.
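In the meantime, here is a minimal sketch in Python for checking the conjecture exhaustively for small k. Whether k numbers have an integer average depends only on their residues mod k, so it suffices to try every multiset of 2k-1 residues:

    from itertools import combinations, combinations_with_replacement

    def has_k_subset_with_integer_average(nums, k):
        # Do some k of the numbers sum to a multiple of k?
        return any(sum(s) % k == 0 for s in combinations(nums, k))

    def check_conjecture(k):
        # Returns a counterexample (as residues mod k), or None
        # if the conjecture holds for this value of k.
        for nums in combinations_with_replacement(range(k), 2 * k - 1):
            if not has_k_subset_with_integer_average(nums, k):
                return nums
        return None

    # check_conjecture(4) redoes the tedious k=4 case in under a second.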

Wednesday, December 16, 2009

Guest Post- Women in Theory Workshop

(Tal Rabin requested that I post this, so I am doing so. This post is essentially her email, so call it a guest post.)

There will be a Women In Theory workshop for female graduate students in theoretical computer science in Princeton on June 19-23, 2010 (see here). Tell all of the female students in your department about this workshop.

The workshop will have first-rate technical content and we also hope that it will help encourage female researchers and perhaps in the long run increase the unfortunately low number of women in our field. We will supply rooms and food for the participants, and will probably also be able to cover at least part if not all of their travel expenses - please let us know if that will be an issue. There may be a nominal registration fee.

The format will be similar to the WIT 2008 workshop. You can view information on that workshop at: here and view a video of WIT08 at: here.

For any questions, please email WomenInTheory2010@gmail.com . The deadline to apply is February 1st, 2010.

Tuesday, December 15, 2009

Mild Request for Guest Posters.

(Deadline to submit a paper to CCC is Dec 15. Depending on when you read this that could be today or in the past.)

As you all know from Lance's last post, Lance is taking a Blog Sabbatical. I will try to have 4-posts-a-week, but not on holidays (and I may define what a holiday is: in a week when I only have 3 posts I may declare Friday to be a holiday, a no-post-day).

This blog has always been generous about Guest Posts. Mostly people who wanted to guest post did so. More common is the following kind of exchange:

READER: You should do a post on Davenport-Schinzel Sequences!

LANCE OR BILL: I've got a better idea, why don't you write one and we'll edit it and post it.

READER: Uh. Hmmm. Uh.... never mind.

Now that Lance is on Blog Sabbatical and I still WANT to get something up 4-times-a-week, I offer the following: IF you want to guest post then let me know and we'll see what makes sense. Rather than merely accepting Guest Posts, I am now ASKING for them. If I don't get any guest bloggers then I'll be singing this song:

Monday, December 14, 2009

CCC deadline Dec 15, 2009! (not factorial)

Submissions to the 25th CCC are due TOMORROW! (Actually it could be TOMORROW, TODAY, or IN THE PAST depending on when you read this.) Should you submit?
  1. If you have a paper all set to go, with proofs complete, that is in scope and interesting, then you certainly COULD submit. Whether you should depends on other factors.
  2. If this is the first you've heard of the deadline then you probably shouldn't RUSH to get it done in time. However, some people work well that way. Not only am I not one of those people, I don't understand those people.
  3. This year STOC and CCC are back-to-back in Boston. That's my favorite arrangement- the advantage of FCRC (STOC and CCC in the same place) without the disadvantage (too big!). Hence, submit, accept, or not, I think you should GOTO STOC and CCC if circumstances permit.
  4. Should you look at who is on the committee and, based on that, decide if your paper will get in? I tend to ignore the committee. So many papers are subrefereed that trying to predict your chances based on the committee is too hard. Better to go by SCOPE: is your paper a complexity paper or not? Also, if for some reason a STOC or FOCS paper is better for your resume you can submit there; however, you can then end up with NO conference for a while and resubmit later and and and...
  5. If you submit, make sure your submission is spellchecked and proofread first.

Friday, December 11, 2009

A Blog Sabbatical

With the end of the fall quarter I will take a break from the blog for a few months. This is not another End, just a chance to move my creative juices to another endeavor, to expand my CACM article into a book to spread the gospel of P v. NP to an even larger audience.

Bill will continue to post though not as often as we did together. I will post as needed for "really important stuff" and will continue to feed my Twitter with the less important but occasionally interesting/insightful/humorous stuff. 

I will return to blogging in mid-March with the start of Northwestern's spring quarter. I need the blog as I teach to keep my sanity. By then I should have written enough of the book to get past the point of no return. Maybe I'll even share some of it with you.

P.S. If any of you have good suggestions for publishers, let me know.

Thursday, December 10, 2009

What's your Game, Mr. Bond? Nim?

BILL: Clyde is teaching a graduate course titled Games, Game Theory, and the Theory of Games. He tells me that there are basically eight kinds of games governed by three 2-valued parameters: (1) 2-player or multi-player, (2) complete information or incomplete information, (3) luck or no luck. For example, Chess and Nim are 2-player, no hidden information, no luck (I won't quibble about the coin toss to see who goes first). There are natural examples of most of the combinations. The one I have the most trouble with is multiplayer with no hidden information and no luck. (Clyde is NOT as dogmatic as the above indicates. One other aspect I've left out for this post is that some games are impartial and some are not.)

DARLING: Here is a multiplayer game with no luck and no hidden information: pick-up sticks.

BILL: For Clyde's class that's not a game.

DARLING: Why not? It's a lot more fun than NIM!

Help me answer my darling. What makes something a game rather than a sport? I'm not sure pick-up sticks is a sport either.

Let's try to get NATURAL examples of each category. One issue: the terms are not that well defined, so you may disagree with some of these. We break it into two basic categories (for a better picture of the results go here).

1) Complete Information
  1. 2-player, no luck: Go, Chess, Nim. Nim has had some nice math associated to it (see the sketch after these lists). Chess and Go have inspired very nice computer techniques (was alpha-beta pruning developed for chess originally?). There are some recent techniques for GO which I will talk about in a later posting.
  2. 2-player, some luck: Backgammon, Can't Stop (a natural game but not that well known). Sorry! (that's not me apologizing, that's the game Sorry!). Parcheesi.
  3. Multiplayer, no luck: Chinese Checkers. I do not know of ANY other examples.
  4. Multiplayer, luck: Monopoly (and similar games).
2) Incomplete Information
  1. 2-player, no luck: Stratego, Battleship. These are debatable. In fact, it may be that, if defined carefully, this combination is impossible.
  2. 2-player, luck: Gin and most 2-player card games.
  3. Multiplayer, no luck: Clue, where each player moves only 5 squares per turn, so no dice are needed. One of Clyde's students' families plays it this way, hence it is a natural game.
  4. Multiplayer, luck: Poker. Most multiplayer card games.
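About the nice math associated to Nim (item 1 of the first list, as promised): Bouton's classical theorem says the player to move wins iff the XOR (the nim-sum) of the pile sizes is nonzero, and a winning move is one that restores a zero XOR. A minimal sketch in Python:

    from functools import reduce
    from operator import xor

    def nim_winning_move(piles):
        # Returns (pile index, number to remove) for a winning move,
        # or None if the position is already lost (XOR of piles is 0).
        x = reduce(xor, piles, 0)
        if x == 0:
            return None
        for i, p in enumerate(piles):
            if p ^ x < p:             # this pile can shrink to p ^ x
                return i, p - (p ^ x)

    print(nim_winning_move([3, 4, 5]))   # (0, 2): leaves 1,4,5 with XOR 0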


QUESTION: Many games are hard via complexity theory. For example, GO and CHESS are EXPTIME-complete (see here). Do these results tell us things about real games?

Wednesday, December 09, 2009

Is posting about the 17x17 problem BAD FOR ACADEMIA?

(The 17x17 problem has gotten far wider attention than I imagined--- Brian Hayes posted it on his website: here, and it's also here and here. The last website is odd in that it mentions my co-authors as also putting up the money, which is not true.)

(UPDATE- 17x17 and even 18x18 were solved. See my arXiv paper on grid coloring and/or my Feb 8, 2012 post.)


(Reminder: NYU-IBM-COLUMBIA-THEORY DAY: see here.)

Anon 9, in the comments on my 17x17 post, writes:
This is just like when teachers ask their students to model or code parts of a system that will be used in the teachers own research eventually. this is really bad for academia in general. Never again propose such things, please.
I will take this comment as the starting point of an intellectually honest discussion. I reply to it after this paragraph. I then URGE the poster to reply with one of: (1) Yes, GASARCH you are correct, or (2) Yes, GASARCH you are correct, but it's still bad for academia because ..., or (3) No, GASARCH you are wrong because ... . Doing one of these would be intellectually honest. Doing none of them would be intellectually dishonest. You need not use capitals or use my phrases, but you get the idea.
  1. When a teacher assigns students to write code for a system the students are forced to do it in order to get a good grade. I am not forcing anyone to work on the 17x17 challenge.
  2. When a teacher assigns students to write code for a system and the students do not get a co-authorship then this could be bad (this might depend on the situation). I stated explicitly that whoever cracks 17x17 can publish it themselves, or, if they do enough other stuff, with me. In any case the terms are out in the open. Also note that whatever I do will get much scrutiny because I posted it on a public blog rather than a private classroom.
  3. When a teacher assigns students to write code for a system the students might get nothing for their efforts. If a student cracks my problem they will get $289.00. If they try and fail they may learn things of interest (to be fair, this may be true for the student-code-writer as well).
  4. The 17x17 challenge has already stimulated discussion on three blogs and might get some people interested on the math end (Ramsey Theory) or the computer end. How is this bad for academia?
  5. As Michael pointed out, what I am doing is similar to what Erdos did, though my problem is not deep mathematics. Do you think that when Erdos offered money for problems, this was bad?
  6. If I had posted about the problem WITHOUT the cash prize would you still object to it? (One point: if I offered it without a cash prize I would have asked to RESOLVE the problem rather than to GIVE ME a verifiable coloring.) If I had offered A LOT MORE money would you object? Is it a U-shaped curve: offer either less than 10 or more than 2000 and you are okay with it? I am not being funny--- I look forward to your intelligent comment on this.

Tuesday, December 08, 2009

Dequantification

After a talk on derandomization at Midwest Theory Day, someone asked if those techniques could also be used in quantum computing. 

For classical randomness, under reasonable hardness assumptions we can create a set S of strings of length m, with |S| polynomial in m, such that every circuit of size m will accept with roughly the same probability whether we draw the input uniformly from S or uniformly from all strings of length m. Can we have an analog for quantum?

There are many good parallels between probabilistic computing and quantum computing. Both seem to allow an exponential number of possible computations in parallel combined in a limited way. Probabilistic computation is essentially multiplying exponentially large stochastic matrices and quantum computing is multiplying exponentially large unitary (or just orthogonal) matrices.

If a quantum algorithm uses classical randomness, one can replace that randomness with measurements of appropriate qubits. So classical randomness does not give any additional power to quantum computation.

But could one use some hardness assumptions to remove the quantumness from a quantum computation, leaving a deterministic algorithm? It's not clear we can pull out the quantumness, and even if we could, quantumness and randomness just work differently. Quantum algorithms get their power from interference: positive and negative amplitudes can add and cancel, something very difficult to simulate probabilistically or with any form of pseudorandom generator.
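A toy illustration of that cancellation, as a Python sketch: apply a Hadamard gate once and a measurement is a fair coin flip; apply it twice and the negative amplitude cancels the |1> paths, returning |0> with certainty. A classical coin flip can never be undone this way.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

    state = np.array([1.0, 0.0])   # the state |0>
    once = H @ state               # amplitudes (0.707..., 0.707...)
    twice = H @ once               # amplitudes (1, 0): |1> paths cancelled

    print(np.abs(once) ** 2)       # [0.5 0.5]: measuring now is a coin flip
    print(np.abs(twice) ** 2)      # [1. 0.]: back to |0> with certainty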

In short no hope for dequantification. You'll just have to build that quantum computer if you want to run those quantum algorithms.

Monday, December 07, 2009

Complexity Vidcast 3

Quick announcement: If you are a student who wants to go to SODA but doesn't have the funds, click here.

I had forgotten we did this. Here's a video of my daughter Molly interviewing Bill and me about complexity, our third vidcast, filmed last October before our Dagstuhl trip.



Friday, December 04, 2009

The Probability of P=NP

Dean Foster asked me for a probability that P=NP. Now P=NP is not a probabilistic event: either P=NP or P≠NP (if it's independent it's still equal or unequal in whatever model of set theory we happen to live in). So I responded that I was highly confident that Prob(P=NP)=0.

Not good enough for Dean, a professor in the Wharton statistics department, who said "But you aren't willing to give a number? Good Bayesians / Game Theorists have to bet on everything!"

A good Bayesian puts a probability p on every future event E where one would be indifferent to taking a small bet that pays off 1-p if the event is true and loses p if the event is false. 

As computational complexity theorists we tend not to be Bayesians; rather we look at worst-case performance under the assumption that everyone is working against us. But I've been talking with economists a bit recently so let's take the Bayesian approach.

Richard Lipton asked if one would bet their life that P≠NP. In some sense I have, since the vast majority of my research becomes trivial or uninteresting if P=NP. But how much one bets isn't really the right question since that involves taking risk into account as well as beliefs.

So what odds do I give? The problem is that I could only bet conditional on P v. NP being solved in some reasonable amount of time, which would alter my beliefs since a proof that P=NP requires only an algorithm but a proof that P≠NP requires showing no algorithm can work. And the no trade theorem comes into play: If someone were to offer me a bet on P v. NP, I'd secretly suspect that they knew an algorithm or a proof I hadn't seen yet. But suppose that I could make a bet against a magical oracle that would reveal the correct answer once the bet is made.

Still I can't in my heart give a positive probability to P=NP. So the probability of P=NP is zero, but in the measure theory sense: that an event has probability zero doesn't mean it can't happen, only that it won't.

Thursday, December 03, 2009

Congrats to new ACM fellows

Congrats to ALL of the ACM Fellows, who were announced here.

There are several theorists among them. I could try to list them or count them; however, the term theorist is not that well defined so I'll pass.

I wonder if any of them could use an additional $289.00...

Wednesday, December 02, 2009

17x17: Comments on your comments

One of the comments on my last post, the 17x17 post, inquired if I am also interested in the other unknown grids (NOW just 17x18, 18x18, 21x12, 22x10). I AM. In fact, Brad Larson emailed me a 4-coloring of 21x10. Here it is. He does not get $289.00 but he does get my eternal gratitude. (ADDED LATER: BRAD LATER GOT A 4-COL OF 21x11. I ADDED IT TO THE LINK THAT GAVE THE 21x10 COLORING. I WILL SOON UPDATE MY LAST POST ABOUT POSSIBLE OBSTRUCTION SETS, AND MY CHART.) (ADDED EVEN LATER: BRAD GOT A 4-COL OF 22x10 ALSO.)

(UPDATE- 17x17 WAS solved, as was 18x18. See my arXiv paper on grid coloring and/or my post on Feb 8, 2012).


One of my former high school students emailed me because he was excited that my 17x17 posting was picked up by another blogger: here. Is that worth being excited about? Will it make me a celebrity? Will it get me on THE COLBERT REPORT or THE DAILY SHOW? I doubt it.

Many readers commented on my last post. These comments have led to two corrections: my chart had typos, and the paper had a typo in an equation. The other objections raised were (I think) incorrect; however, I could be wrong. In THIS post I will answer all of the questions in the last post. If there are remaining questions, please post.
  1. The typos in the chart were fixed. THANKS for the corrections.
  2. The equation that Aaron Sterling pointed out as incorrect had a typo. THANKS Aaron. That should be fixed soon.
  3. The Obstruction set raised a lot of questions. OBS_c is NOT the set of all non-c-colorable grids. It is the set of all MINIMAL non-c-colorable grids. More precisely: if nxm is in OBS_c then nxm is NOT c-colorable but both (n-1)xm and nx(m-1) are c-colorable.
  4. Sky thinks 9,10,11 are in conflict. I corrected the chart so that may no longer be an issue, but if it is, please comment.
  5. Grid coloring is not related to coloring planar graphs. Grid coloring problems can be rephrased as hypergraph coloring problems. I doubt this is helpful, but I could be wrong.
  6. Why $289.00? Because 17x17=289. I originally thought I would offer $2.89 but Lance pointed out that that might not be enough to generate interest. Then I thought $28.90, but Lance again thought that would not get people interested. $289.00 seems just about right to get people interested but not break the bank of Gasarch. The person who won't give away the answer for $289.00--- will you take $2.89 instead? How about $28.90?
  7. dut asks if the 74-sized Rect Free Subset is the smallest there is, or could there be one of size 73. Actually we could just take out one element and get a 73-sized Rect Free Subset. I think dut meant to ask: in a 4-coloring, do we know a number x such that no color appears less than x times? YES. Let's say (and this is true but probably can be improved) that there is no Rect Free set of size 80 in 17x17. Then the smallest possible number of times a color can appear is easily lower bounded: each color class in a 4-coloring is rectangle free, so each color appears at most 79 times, and hence the least-used color appears at least 289 - 3(79) = 52 times. One of the Anons posits that it's impossible since this person thinks you can't have a 73-sized Rect Free Set. Alas, my original post had a 74-sized Rect Free Set. In fact, that's why I think you CAN 4-color 17x17. I look at it as a barrier result: all of the techniques to show a grid can't be c-colored used rect free sets. Hence 17x17 (and the others) cannot be proven non-4-colorable using our techniques. Hence new techniques are needed. Not quite as impressive as Oracles, Natural Proofs, or Algebrization.
  8. Scott claims that my claims are riddled with errors. Scott- please either point out an error or post that you are mistaken. Clearly the CHART was riddled with errors, but has been corrected.
  9. I agree with the commenter after David Benson who would like to know what David Benson did. David Benson- please elaborate so we can see what you did and if it's right.

Tuesday, December 01, 2009

Who Pays for Trips?

If Professor Alice at Faber College visits Dr. Bob at the University of Southern North Dakota, who should cover Alice's expenses? 

It depends on who does the asking. If Bob invites Alice to visit then Bob's school or grant should pay. If Alice invites herself she should offer to cover her expenses via her funds. Whether or not she gives a talk doesn't really matter though a distinguished lecturer should always get reimbursed.

If Alice goes to a conference, she should cover the expenses even if she gives a talk. Plenary lecturers often get free conference registration and/or travel reimbursement but this seems to vary dramatically by the conference and discipline.

What if Alice is going to a wedding in North Dakota and offers to stop by USND? Certainly Alice should not expect money from Bob. How much of her grant money versus personal money can Alice use for such trips? That's up to her own ethics (and various laws).

All of the above goes out the window if just one of Alice or Bob has travel funds.

Typically Alice books her own air travel. Local travel and lodging are either booked by Bob, or Bob makes suggestions to Alice.

Sometimes to save money or just be friendly, Bob is willing to host Alice at his house (in the guest room). If Alice feels more comfortable in a hotel, she should carefully suggest it and offer to cover the extra cost.

Alice shouldn't expect an honorarium since part of Alice's job is to promote her research, and thus her university, to the broad community. But Alice need not turn down an honorarium if offered, though she needs to declare it as income on her tax returns. If Alice works for a company or the government there might be policies against receiving honorariums. 

Above all, Alice and Bob should work out who pays ahead of time. One also needs to know the rules on funding, which differ across universities and granting agencies (NSF requires use of US airlines, for example), and what level of hotel one should use. And it is always best to avoid the bad feelings that come from disagreements on reimbursements after the fact.

Monday, November 30, 2009

The 17x17 challenge. Worth $289.00. This is not a joke.

The 17x17 challenge: worth $289.00. I am not kidding.

Definition: The n x m grid is c-colorable if there is a way to c-color the vertices of the n x m grid so that there is no rectangle with all four corners the same color. (The rectangles I care about have sides parallel to the x and y axes.)

The 17x17 challenge: The first person to email me a 4-coloring of 17x17 in LaTeX will win $289.00. (I have a LaTeX template below.)

UPDATE: THE RESULTS ABOUT OBS4 HAVE CHANGED SINCE THE ORIGINAL POST SINCE BRAD LARSON EMAILED ME A 4-COL OF 21x10 and 21x11. UPDATE: BRAD LARSON EMAILED ME A 4-COL OF 22x10.

UPDATE- PROBLEM WAS SOLVED. See my arXiv paper on grid colorings OR my Feb 8, 2012 post.

Why 17x17? Are there other grids I care about? We (We = Stephen Fenner, Charles Glover, Semmy Purewal, and myself) will soon be submitting a PAPER (see TALK for a superset of the slide-talk I've given on this paper) on coloring grids. Here are the results and comments:
  1. For all c there is a finite set of grids a_1 x b_1, ..., a_k x b_k such that a grid is c-colorable iff it does not contain any of the a_i x b_i (n x m contains a x b if a ≤ n and b ≤ m). We call this set of grids OBS_c (OBS for Obstruction Set).
  2. OBS_2 = {3x7, 5x5, 7x3}
  3. OBS_3 = {19x4, 16x5, 13x7, 11x10, 10x11, 7x13, 5x16, 4x19}
  4. OBS_4 contains 41x5, 31x6, 29x7, 25x9, 23x10, 10x23, 9x25, 7x29, 6x31, 5x41
  5. Exactly one of the following grids is in OBS_4: 21x13, 21x12.
  6. Exactly one of the following grids is in OBS_4: 19x17, 18x17, 17x17.
  7. If 19x17 is in OBS_4 then 18x18 might be in OBS_4. If 19x17 is not in OBS_4 then 18x18 is not in OBS_4.
  8. A chart of what we do and do not know about 4-colorings of grids is here.
  9. A Rectangle Free Subset of a x b is a subset of a x b that has no rectangles. Note that if a x b is 4-colorable then there must be a rectangle free subset of a x b of size at least Ceil(ab/4). We actually have a rectangle free subset of 17x17 of size Ceil(289/4)+1. Hence we think it is 4-colorable. For the rectangle-free subset see here. For the original LaTeX (which you can use as a template when you submit yours) see here. THIS IS WHY I AM FOCUSING ON 17x17--- BECAUSE I REALLY REALLY THINK THAT IT IS 4-COLORABLE. I could still be wrong. (A sketch of a verifier follows this list.)
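If you do find a coloring, here is a minimal sketch in Python of the check that makes a submission verifiable: no axis-parallel rectangle may have all four corners the same color. (Two rows contain a monochromatic rectangle iff they agree, in the same color, in two different columns.)

    from itertools import combinations

    def is_valid_coloring(grid):
        # grid: a list of equal-length rows of color labels (e.g. 0-3).
        # Returns True iff no axis-parallel rectangle is monochromatic.
        for r1, r2 in combinations(range(len(grid)), 2):
            seen = {}   # color -> first column where rows r1, r2 agree
            for col, (a, b) in enumerate(zip(grid[r1], grid[r2])):
                if a == b:
                    if a in seen:    # second such column: a rectangle
                        return False
                    seen[a] = col
        return True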


What if 17x17 is not 4-colorable? Then nobody will collect the $289.00. Even so, if you find this out I would very much like to hear about it. I don't want to offer money for that since if your proof is a computer proof then it's not clear how I can verify it. I also don't want to offer money for a reasonable proof since this may lead to having the Supreme Court decide what is a reasonable proof.

Can I get a paper out of this? FACT: If you get me the coloring and want to use it in a publication then that is fine. OPINION: It would not be worth publishing unless you get all of OBS_4. See next question.

What do you hope to get out of this? My most optimistic scenario is that someone solves the 17x17 problem and that the math or software that they use can be used to crack the entire OBS_4 problem. If this happens and my paper has not been accepted yet then we could talk about a merge, though this is tentative on both ends.

Has there been any work done on this problem that is not in your paper?
  1. A Rutgers grad student named Beth Kupkin has worked on the problem: here.
  2. A high school student (member of my VDW gang) Rohan Puttagunta has obtained a 4-coloring of 17x17 EXCEPT for one square: here.
  3. SAT solvers and integer programs have been used but have not worked--- however, I don't think they were tried that seriously.
  4. Here are some thoughts I have had on the problem lately: here.
  5. There is a paper on the 3-d version of this problem: here.


Is Lance involved with this at all? When I gave a talk on grid colorings at Northwestern, Lance fell asleep.

Wednesday, November 25, 2009

Birthday Paradox Variance

First a message from David Johnson asking for proposals on locations for SODA 2012, both in and outside the US.

Here's an interesting approach to the birthday paradox using variances.

Suppose we have m people who have birthdays spread uniformly over n days. We want to bound m such that the probability that there are at least two people with the same birthday is about one-half.

For 1 ≤ i < j ≤ m, let A_{i,j} be a random variable taking value 1 if the ith and jth person share the same birthday and zero otherwise. Let A be the sum of the A_{i,j}. At least two people have the same birthday if A ≥ 1.

E(A_{i,j}) = 1/n so by linearity of expectations, E(A) = m(m-1)/(2n). By Markov's inequality, Prob(A ≥ 1) ≤ E(A), so if m(m-1)/(2n) ≤ 1/2 (approximately m ≤ n^{1/2}), the probability that two people have the same birthday is less than 1/2.

How about the other direction? For that we need to compute the variance. Var(A_{i,j}) = E(A_{i,j}^2) - E^2(A_{i,j}) = 1/n - 1/n^2 = (n-1)/n^2.

A_{i,j} and A_{u,v} are independent random variables, obvious if {i,j} ∩ {u,v} = ∅ but still true even if they share an index: Prob(A_{i,j}A_{i,v} = 1) = Prob(the ith, jth and vth person all share the same birthday) = 1/n^2 = Prob(A_{i,j}=1)Prob(A_{i,v}=1).

The variance of a sum of pairwise independent random variables is the sum of the variances so we have Var(A) = m(m-1)(n-1)/(2n^2).

Since A is integral we have Prob(A < 1) = Prob(A = 0) ≤ Prob(|A-E(A)| ≥ E(A)) ≤ Var(A)/E^2(A) by Chebyshev's inequality. After simplifying we get Prob(A < 1) ≤ 2(n-1)/(m(m-1)), or approximately 2n/m^2. Setting that equal to 1/2 says that if m ≥ 2n^{1/2} the probability that everyone has different birthdays is at most 1/2.

If m is the value that gives probability one-half that we have at least two people with the same birthday, we get n^{1/2} ≤ m ≤ 2n^{1/2}, a factor of 2 difference. Not bad for a simple variance calculation.

Plugging in n = 365 into the exact formulas we get 19.612 ≤ m ≤ 38.661 where the real answer is about m = 23.
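A quick numerical check, as a Python sketch, of how these bounds compare with the exact collision probability:

    def collision_prob(m, n=365):
        # Exact probability that m people on n days include a repeat.
        p_all_distinct = 1.0
        for i in range(m):
            p_all_distinct *= (n - i) / n
        return 1 - p_all_distinct

    # The variance argument brackets the break-even m roughly between
    # sqrt(365) ~ 19.1 and 2*sqrt(365) ~ 38.2; the truth sits in between.
    for m in (19, 23, 39):
        print(m, round(collision_prob(m), 3))   # 0.379, 0.507, 0.878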

Enjoy the Thanksgiving holiday. We'll be back on Monday.

Tuesday, November 24, 2009

DIMACS at 20

Last Friday DIMACS celebrated its 20th anniversary. Muthu summarizes the event.

DIMACS has served the theoretical computer science community well over these two decades. They have hosted a number of postdocs and visitors, usually around a Special Focus (originally Special Years, but one year is usually not enough). DIMACS runs a large number of educational and research activities but, most importantly, has given us great workshops over the years. DIMACS's reputation for having strong workshops allows it to continue to attract strong workshops and has helped make New Jersey (my home state) a major center of theory.

The first DIMACS workshop I attended, Structural Complexity and Cryptography, is where I first heard about the first Gödel Prize-winning research relating interactive proofs to hardness of approximation that was done primarily during that Special Year on Complexity Theory and Interactive Computation.

When I moved to the NEC Research Institute in 1999, I became quite active in DIMACS which had then started a Special Year on Computational Intractability including a great workshop on the Intrinsic Complexity of Computations with many great talks and discussions on the hardness of proving lower bounds. My talk on Diagonalization later became an article in the BEATCS complexity column.

I then joined the DIMACS executive board as the NEC representative just as the center was ending its 11 years of funding as an NSF Science and Technology Center. Amazing that DIMACS survived that transition and has lasted these twenty years and beyond. Most of the credit goes to DIMACS director Fred Roberts, who has often hustled for funding for specific special foci, projects and workshops, as well as finding people to run those foci and workshops.

The 2001 Workshop on Computational Issues in Game Theory and Mechanism Design truly established a new discipline in connecting computer science and economic theory. Based on the excitement from that workshop, DIMACS started a Special Focus on Computation and the Socio-Economic Sciences which Fred talked me into co-organizing with Rakesh Vohra, from Northwestern's Kellogg business school. After I moved back to Chicago in 2003, I met with Rakesh to plan the focus which led to collaboration and eventually my moving to Northwestern. 

The special focus had a number of exciting workshops, particularly Information Markets, which restarted that research area a couple years after the PAM disaster, and our closing workshop on the Boundary between Economic Theory and Computer Science, one of the few meetings that truly attracted both strong computer scientists and economists.

That's just a few of my DIMACS memories. Many others have similar stories for a center that helped shape the professional lives of myself and many other CS researchers. Congrats for 20 years and here's hoping for many more.

Monday, November 23, 2009

An undervalued Math Problem

As most of you know there are 7 problems worth $1,000,000 (see here). It may be just 6 since Poincare's conjecture has probably been solved. Why are these problems worth that much money? There are other open problems that are worth far less money. What determines how much money a problem is worth?

When Erdos offered money for a problem (from 10 to 3000 dollars) I suspect that the amount of money depended on (1) how hard Erdos thought the problem was, (2) how much Erdos cared about the problem, (3) how much money Erdos had when he offered the prize, and (4) inflation. (If anyone can find a pointer to the list of open Erdos Problems please comment and I'll add it here.)

Here is a problem that I have heard is hard and deep, yet it is only worth $3000 (Erdos proposed it). I think that it should be worth more.

BACKGROUND: Szemeredi's theorem: Let A ⊆ N. If the limit as n goes to infinity of size(A ∩ {1,...,n})/n is bounded below by a positive constant then A has arbitrarily long arithmetic progressions. Intuition: if a set is large then it has arb long arith progressions. The CONJECTURE below uses a different notion of large.

CONJECTURE: Let A ⊆ N. If Σ_{a ∈ A} 1/a diverges then A has arbitrarily long arithmetic progressions.

KNOWN: It's known that if A is the set of all primes (note that Σ_{a ∈ A} 1/a diverges) then A has arbitrarily long arithmetic progressions. Nothing else is known! The conjecture for 3-APs isn't even known!

Is this a good problem? If it is solved quickly (very unlikely) then NO. If absolutely no progress is made on it and no interesting mathematics comes out of the attempts then NO. It has to be just right.

Friday, November 20, 2009

Citing Papers

A student asked me which version of a research paper to cite: the journal version (the last reviewed version) or the conference version (the first reviewed version). I generally cite papers using this precedence list.
  1. The fully refereed journal version, even if it is "to appear".
  2. The reviewed, though not usually refereed, conference proceedings version, again even if it is "to appear".
  3. On an electronic archive, like arXiv or ECCC.
  4. As a departmental technical report.
  5. On a generic web page, like a personal page.
  6. As a "Manuscript", if I have seen the paper but it's not publicly available.
  7. As "Personal Communication" if the paper doesn't exist.
If the original paper is not in English I'll cite both the paper and an English translation if there is one.

The journal version can distort precedence. Paper A that depends on paper B can have a much earlier journal publication date. If precedence is a real issue, say when I am trying to give an historical overview, then I will cite both the journal and conference versions.

What if you use a theorem that appears in a page-limited conference paper but whose proof only appears in the longer tech report? Then I cite the conference paper for the theorem and the tech report for the proof. Even if a proof exists in a paper, I'll often cite another paper or book if it has a cleaner or simpler proof.

What if you cite a paper for a theorem whose proof doesn't exist (the infamous "will appear in a later version of the paper")? If your paper critically needs that theorem, you really should give the proof for it yourself. At the very least later papers will cite your paper for the proof.

What if the conference or journal version is not on-line or is behind a pay wall? I still cite the latest version, figuring that if someone wants to read the paper they can use a service like Google Scholar to find an accessible version.

I try to use the same rules for links to papers on this blog because it's more important to give out the citation. If someone wants to download the paper again it's usually easy enough to find it.

In my .bib file (of bibtex entries), I replace the entries of papers as they get updated, keeping the same citation-key. That way when I go back to latex older papers they get the latest references. We have too many people in our field who don't bother updating references, pointing to a tech report when the conference or journal version has been published.
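A minimal sketch in Python of that replace-under-the-same-key workflow (the function name is mine, and it assumes entries have balanced braces):

    import re

    def replace_bib_entry(bib_text, key, new_entry):
        # Swap the entry with the given citation-key for its updated
        # version, keeping the key stable so older papers still compile.
        m = re.search(r'@\w+\s*\{\s*' + re.escape(key) + r'\s*,', bib_text)
        if m is None:
            return bib_text + '\n\n' + new_entry    # new key: just append
        start = m.start()
        depth = 0
        for i in range(bib_text.index('{', start), len(bib_text)):
            if bib_text[i] == '{':
                depth += 1
            elif bib_text[i] == '}':
                depth -= 1
                if depth == 0:                      # end of this entry
                    return bib_text[:start] + new_entry + bib_text[i + 1:]
        return bib_text                             # unbalanced: leave as-is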

Should you add hyperlinks in your bibliography to other papers? Nice if you do so and probably good if you are young and get into the habit now. But I haven't found the impetus to add links to papers in my now quite large .bib file.

In my ideal world, each research paper would have a web location which has human and machine readable descriptions of and pointers to all versions of that paper. We would just input that location into bibtex and it would automatically pull the information from the web and make the appropriate entry in your references. Then we would all cite correctly, or at least consistently.

Thursday, November 19, 2009

Laptops in Church?

There are now bibles online where you can click for different versions, different translations, different interpretations, historical context, etc. The same is true, or will be soon, for other faiths' holy books as well.

Will there come a day when people bring their laptops (or smaller devices) to church? They can claim that they are looking up things in their online bible. Some will indeed be looking up bible passages. Some will be balancing their checkbook. Some will be reading blogs. Some will be looking at porn. Will the church need to deal with this? If it does not distract others (and smaller devices won't) then perhaps not. But the notion of looking at porn while you are in Church is troublesome.

How does this compare to the laptops-in-classroom debate that we had here?

Nothing I have said here is particular to Christianity--- all faiths will have these problems. I wonder when it will come up and how they will deal with it.

Wednesday, November 18, 2009

FOCS Videos

As I tweeted yesterday, the videos of talks from the 2009 FOCS conference are now online. Thanks to FOCS PC chair Daniel Spielman and Georgia Tech's ARC Center for making it happen.

Let me be a bit Billish (or is it Gasarchian) and make my comments as questions. 
  1. Which talks are most worth watching?
  2. How many of these videos do you plan on watching?
  3. Do you get value from these talks over reading the papers? The papers aren't in the IEEE DL yet but Shiva has collected some links.
  4. If STOC/FOCS talks were generally available on-line shortly after the conference, would this affect your attendance?
  5. Are the videos useful even if you did attend FOCS?
  6. Would you prefer videos on an established site, say a YouTube channel, so the talks will always be available and you can use YouTube features like embedding?
  7. Should STOC/FOCS and other conferences continue to make videos of their talks and post them freely on-line? How much would you be willing to pay in increased registration fees to make it happen? This would be a subsidy from those attending the conference to those who don't.

Tuesday, November 17, 2009

A Prayer Book for the Internet Generation

As a young kid in the Reform Jewish community we used the Union Prayer Book, a traditional book with Hebrew on the right and English on the left with lots of instructions for page jumping, standing, when to sing and whether everyone should speak. When I became a Bar Mitzvah in 1976, the temple sisterhood gave me a copy of the new prayer book, Gates of Prayer. The Gates of Prayer had services in a linear fashion using different fonts to denote when to sing and who should speak and with instructions on when to stand and sit. One could run an entire service giving no instructions from the Bima.

Gates of Prayer lasted more than three decades with only some rewording mostly to make the prayers gender neutral.

But in this Internet age people no longer read linearly. Last Friday I had my first taste of the new Reform Jewish prayer book, the Mishkan T'filiah.  No instructions or any indication when to stand or who should talk or sing. Here is a sample page (via the New York Times).


Each prayer gets its own two-page spread with Hebrew, a faithful transliteration and translation, and a couple of alternative interpretations. No instructions or indications on when to stand, who should talk or sing, and even when to turn the page. Our temple put a notecard in each prayer book to explain the new book, with some clues like whenever we hear "Baruch Atah (Blessed are you..)" we should turn the page.

One can easily see the Internet's influence. Each prayer gets the equivalent of a web page (or maybe a blog post). Lists on the sides are like hyperlinks to other prayers. At the bottom are Twitter-length commentaries on the prayer.

I expect this will be the last paper prayer book in the Reform movement. In the future we'll all download the prayers on our e-readers or smart phones with links to the next and other relevant prayers, all preset by the rabbis for that day's service. 

Monday, November 16, 2009

Rules about Blogs because Blogs RULE!

There are now laws about blogging and twittering that Lance and I (and all the bloggers) will need to be aware of. Here is a short summary:
  1. If a blogger posts about a product that she got for free then she must disclose that she got it for free. (Applies to guys also.)
  2. There are no such laws for traditional media.


Some questions:
  1. What problem is this trying to solve? Did Lance get a free copy of some textbook, twitter about it, and it sold 1,000,000 copies? Or did Alaska Nebraska twitter about (say) a car she got for free and it sold a lot? And if she did, so what? If the book or car is terrible then Lance's and Alaska's credibility will go down and people will stop following them. That's how the free market is supposed to work. But does it?
  2. Let's say that the evil trying to be prevented here is that Alaska Nebraska takes the car as a bribe and gives it a good review. Should that be illegal? And if so, why is that okay in old media? Because nobody pays attention to old media anymore?
  3. If Lance just took straight-up-cash to write a good review, is that illegal?
  4. If I get something for free, do not acknowledge that I got it for free, and TRASH IT on the blog, is that illegal?
What will Lance and I do? Well, in my SIGACT NEWS book review column I will put at the top of every column that the books were given to me for free. And if I ever comment about something I got for free (again, likely a book) then I will note that fact.

The wallet that Lance blogged about here is AWESOME!!!!!!!!!!!

Friday, November 13, 2009

Jobs

As many universities still feel the effects of the financial crisis, many have limited or no positions to hire new tenure-track faculty, so I expect the academic job market to be difficult again this year. But a few people have asked me to post announcements. Here are some places that are likely planning to hire tenure-track faculty in theory-related areas.


Feel free to add other announcements in the comments.

Also watch the DMANet and TheoryNet lists, now collected in a new blog, and the listings at the CRA and the ACM as well as individual departmental home pages. Even if a place doesn't specifically advertise for theory, or at all, they still may hire so it never hurts to ask or apply.

Nevertheless finding a tenure-track job will be difficult this year. It should be a bit easier to find a postdoc position, and no one will count it against you if you take a second or third postdoc to tide you over until the market improves.

Thursday, November 12, 2009

Breaking Walls

Many of you readers don't remember a time when there were two Germanys or when we didn't think IP = PSPACE. Two walls collapsed in November of 1989 that changed all that. One marked the end of the Cold War. The other marked the start of the most exciting seven weeks of my research career.

It all happened during my rookie year as an assistant professor at the University of Chicago. On November 27, 1989, Noam Nisan emailed a paper draft showing that verifying the permanent (and thus co-NP) has multiple prover interactive proofs. Besides being a major breakthrough, this was the first time someone had proven a theorem where there was a previously published oracle result that went the other way. In my very first theorem in grad school (with Mike Sipser) we had an oracle relative to which co-NP did not have single prover interactive proofs (IP). With John Rompel, we extended the result to an oracle where co-NP did not have multiple prover interactive proofs (MIP). Noam showed co-NP did have multiple prover proof systems in the non-relativized world. This led me to thinking about whether we can find a single prover proof. I worked on this problem with then student Carsten Lund and Howard Karloff who had the office across from me. After a few weeks of hacking we finally did get such a protocol. Noam agreed to merge his now weaker result into ours and we quickly wrote it up and on December 13 emailed it out to the masses (these were the days before web archives, or even a web).

It took Adi Shamir less than two weeks to find the twist to add to our proof to get the full result, IP = PSPACE, which he emailed out the day after Christmas. Babai (jokingly I think) chastised me afterwards for going away on a planned ski trip in mid-December.

In Fortnow-Rompel-Sipser we had also shown that MIP was contained in NEXP, nondeterministic exponential time. After IP=PSPACE I then embarked on showing that you could put all of NEXP in MIP. At one point someone asked me why I didn't first try EXP in MIP and I mumbled something about how one ought to get nondeterminism for free in this model, but deep down I didn't want another partial result that someone else could usurp. My approach (not the one anyone would teach today) involved relativizing the LFKN co-NP in IP protocol using the provers to simulate the oracle. But we needed a low-degree test to make the relativization work, and working with Carsten and Laszlo Babai, we finally came up with a proof of the test. We sent out that paper on January 17, 1990.

I remember most the breakthrough moments: when Carsten walked into my office asking me why his protocol for the permanent didn't work (it did), and when Babai, Lund and I discovered that the expansion properties of the hypercube were the last piece we needed to make our linear test work.

After MIP=NEXP, I thought we now knew nearly everything that matters about interactive proofs and moved on to other areas of complexity. Thus I missed the entire PCP craze that started about a year later.

At the 1990 Structures in Complexity Conference in Barcelona, Babai gave a cool talk E-mail and the Unexpected Power of Interaction describing those then recent results and the process behind them. I used that write-up to recover the dates above.

Wednesday, November 11, 2009

New York Theory Day

IBM-NYU-COLUMBIA theory day on Dec 11! Here are pointers to more information: here and here and here.

My advice: If you are able to go (distance, time, money all okay) then you should go. If you are unable to go then you shouldn't go.

Will this be available on video later? Also, if it is, will people actually watch it? There is something about BEING at a talk given at a particular time that seems more compelling than being able to look at it whenever you want, and hence never getting around to looking at it.

Tuesday, November 10, 2009

Multi-Agent Biological Systems and Nat Algorithms Workshop (Guest Post)

Guest Post from Aaron Sterling

Multi-Agent Biological Systems and the Natural Algorithms Workshop

I attended the Natural Algorithms Workshop on November 2nd and 3rd. This was an event held by the Center for Computational Intractability in Princeton (which brought us the Barriers in Complexity Workshop). Bernard Chazelle was the organizer.

Chazelle is interested in applying complexity-theoretic techniques to the mathematical modeling of biological systems. Recently, Chazelle applied spectral graph theory, combinatorics of nondeterministic computation paths, and other techniques, to obtain upper and lower time bounds on two representative models of bird flocking. (Control-theoretic methods, e.g., Lyapunov functions, had obtained only an existence theorem, namely that the models converge, without providing time bounds.) Chazelle's Natural Algorithms won the Best Paper Award at SODA 2009. (I found the SODA paper hard to read, as it moves through multiple difficult techniques in a compressed fashion. I recommend reading the first thirteen pages of this version instead, to get a sense of what he's doing.)

While computational complexity's entry into this field may be new, other areas of computer science have much longer-standing connection to multi-agent biological systems. A notable example is the work of Marco Dorigo, whose ant colony optimization algorithm has been influential. Dorigo's research group won the AAAI video contest in 2007; their video is a lot of fun to watch.

Back to the Workshop! The speakers were from diverse backgrounds (control theory, robotics, biology, distributed computing, mechanical engineering), and the talks were great. For reasons of space, I'll limit myself to summarizing one talk I found particularly exciting.

Magnus Egerstedt, a roboticist at Georgia Tech, talked about dolphin-inspired robot algorithms. He explained how dolphins feed. They use three basic strategies. In the first, a group of dolphins swims circles around a school of fish, in a formation called a carousel. Once the fish are well trapped by the carousel, the dolphins swim through the school one at a time to eat. The second strategy involves swimming on either side of the fish to herd them in a particular direction. The third involves physically pushing the fish from behind, either into a waiting group of dolphins, or onto dry land. Egerstedt showed a video of dolphins beaching themselves so they could push fish onto the sand. These feeding strategies demonstrate team effort, despite (presumably) limited ability to communicate. As one example, the dolphins have to take turns to feed during the carousel, or the entire strategy will be ruined. Egerstedt's group has used the behavior of dolphins to inspire several multi-agent algorithms. Here is a recent technical paper of theirs that was dolphin-inspired.

All the talks were videotaped, and are supposed to be on the Center's web page soon.

As a final note, Chazelle has written a followup paper to Natural Algorithms, and will be presenting it at ICS in January. There's no online version yet, but the abstract says the paper fits within a larger effort to develop an algorithmic calculus for self-organizing systems. With a battery of tools available, we might see quite a bit of TCS activity in this area over the next couple years.

Monday, November 09, 2009

Is FACEBOOK the new email?

Back in 1993 I had the following conversation with one of my relatives:

BILL: Just give me your email address and I'll email it to you.

RELATIVE: I don't have an email account. Why would I? Only you techno-geeks need email.

In 1994 I had the following conversation with the same relative:

BILL: Just give me your postal address and I'll mail it to you.

RELATIVE: Better- I'll give you my email address.

BILL: I thought you didn't have an email account.

RELATIVE: Only a techno-geek like you wouldn't know that everyone has email now.


At one time most people (outside of academia) did not have email. Now everyone has email. It is quite standard to have email on business cards and to ask for someone's email address (are business cards still standard?).

I have run into people who are surprised that I don't have a FACEBOOK page. But here is the question: will having a FACEBOOK page be on the same level as having email, in that everyone is expected to have it, and if you don't then you are just so 2005? My question is not tied to FACEBOOK per se--- I really mean: will it be standard to be in some social network? Is having a webpage already standard?

(NOTE: My spell program tells me that it's FACEBOOK, not Facebook or facebook. So at least this time the capital letters are not my idea.)

Friday, November 06, 2009

Button Button

Here is the offer: If you press the button you will receive $200,000. The caveat: Someone you don't know will die.


I was born during the run of the original Twilight Zone but growing up watched the episodes over and over again in reruns: people dealing with some small change in reality, often with a clever twist at the ending. But they made you think.

CBS had a remake of the Twilight Zone series in the mid-80's, but nearly all the episodes were bland and highly predictable. One episode in that new series, "Button, Button", which made the offer above, caught me by surprise and was worthy of the best of the original series. I won't spoil anything else about the episode.

A new movie, The Box, a remake of Button, Button, opens today. I thank the commercials for The Box for reminding me of that sole great Twilight Zone episode of the 80's. Can a full-length movie do justice to that short and powerful episode, even with the writer and director of Donnie Darko? I doubt it, so skip the movie and rent the original (disk 5) instead.

Thursday, November 05, 2009

Innovation

The new Innovations in Computer Science conference announced its accepted papers earlier this week, including my paper with Rahul Santhanam "Bounding Rationality by Discounting Time". Shiva is collecting PDF pointers (hope to get ours posted and on that list soon).

According to Noam, "this is the most interesting list of accepted papers that I've seen in years". Suresh seems happy too but some of his commenters were less impressed. I view ICS as playing a role orthogonal to STOC/FOCS, trying to present papers with potentially interesting ideas that wouldn't normally be accepted into STOC or FOCS.

Why do STOC and FOCS not accept many innovative papers, even after adding the line "Papers that broaden the reach of theory, or raise important problems that can benefit from theoretical investigation and analysis, are encouraged" to the CFP? Since I can remember, STOC has focused more on solving well-defined problems with a deference towards technical proofs. A more innovative paper, say a new model with a few simple results, tends to be harder to sell, as one needs strong motivation and a model that really captures that motivation. When we do have more innovative papers they tend to be selected based on the authors more than the content.


But this problem has gotten worse in recent years as our field has grown and PCs seem unwilling to trade a solid technical paper for a risky innovative one. When the PC meets, an innovative paper faces a higher standard: the need to justify a new model or problem. Papers that answer old open questions or extend earlier work can justify their models simply because those models have appeared in earlier conferences. But no theoretical model can exactly capture a real-world scenario, so one can always find negative things to say about any innovative model, causing more traditional papers to often win the fight for those last few acceptance slots.

ICS did not accept as wide a spectrum of papers as I would have liked, probably due more to the lack of a broad submission base. The models of most of these papers will likely go nowhere, but the hope is that some will go on to instigate new research areas.

Given the accepted papers, can we tell yet if ICS is a success or a failure? Not yet, and that's the point. Wait five years and see what fraction of papers from ICS (compared to STOC/FOCS) are still relevant and then we'll know.


Wednesday, November 04, 2009

Amir Pnueli (1941-2009)

Amir Pnueli passed away Monday from a brain hemorrhage. Amir was an expert in program verification who won the 1996 Turing Award primarily for his 1977 FOCS paper The Temporal Logic of Programs. Lenore Zuck shares her thoughts.

I'm reading through what people wrote about Amir in the past 30 or so hours since we received the sad news of his passing. I'm leafing through the citations of his numerous awards and honorary degrees, the list of academies of which he was a member, and the lists of his unique accomplishments. Throughout today I was helping with NYU's press release about him. Everybody mentions how brilliant he was, how modest he was, how pleasant he was, how gracious he was. Some mention his patience with those of us who are less talented than him. What is mentioned less is how much fun he was to work with, how he put the people around him at ease, how he managed to find good ideas in seemingly random thoughts people brought to him.

I recall the hours I've spent with Amir as a graduate student: joking, our regular Friday afternoon meetings solving the acrostic puzzle of "Koteret Rashit", discussing books, news, politics, and taking work breaks. It's a wonder we got anything done. It's a wonder we got so much done. This was Amir's style -- alternating between work and play, and making large strides in research while fostering the most pleasant work environment imaginable. Years later, when we were working together at NYU, it was the same with our joint PhD students. Long sessions consisting mainly of laughter, with some breaks for coffee and sweets, at the end of which I always found myself stunned how we made so much progress during all the hilarity.

I'm trying to recall Amir's funniest lines. Some don't translate, some require too much background. But I thought of a few that may illustrate Amir's special humor. A student of his was called to military reserve duty whenever his oral exam for the completion of his MSc was scheduled (temporal logic), until he found a note from Amir saying "you had your oral exam in absentia and you passed with flying colors." Or the time I interchanged N and M in a paper, on which, upon proofreading, Amir wrote the note "even for N=M" where I had claimed to prove something for any N. Or... after long arguments with me that the past operators in temporal logic are not needed, he gave a lecture in which he basically admitted they were useful, with slides whose titles were "Admitting the Past" and "Why Should We be Ashamed of the Past" that I felt were for me. And the list goes on.

I've collaborated with Amir since I was a graduate student. I've had several joint students with him. I've worked with his students. I've held regular meetings with him; the next one is scheduled for this Friday -- I still cannot bring myself to remove it from my calendar. I cannot imagine life without him. He was a genius, a friend, a colleague, one of a kind. I feel fortunate to have had the opportunity to know him, to have worked with him, to have learnt so much from him. May his memory, legacy, and example stay with us.

Lenore D. Zuck, Arlington, Virginia, November 3rd, 2009

Update 11/15: NYT Obit for Pnueli

Tuesday, November 03, 2009

Really Liked Bertinoro- Ramsey Thing

(Reminder: STOC Deadline Thursday Nov 5, 7:00PM, Eastern: link.)

After yesterday's post about RaTLoCC 2009 (Ramsey Theory in Logic, Combinatorics, and Complexity) at Bertinoro, some people emailed me and others commented asking Did you like it? AH- my post was about what I learned there, not whether I liked it. IT WAS GREAT! I was at Dagstuhl Complexity a few weeks earlier and that was also GREAT! Some comparisons:
  1. Both meetings had the same format. Guests are invited, the area is specialized, there are talks in the morning, then a long lunch break (12:00-3:00 or so) then more talks. Also an excursion-- a Hike in Germany and a Guided Tour of a city nearby in Italy.
  2. Dagstuhl had 50 people or so, of which I probably knew 30 beforehand. Bertinoro had 28 people or so, of which I probably knew 2 beforehand. Both are good- I got to meet NEW people at both.
  3. I got more out of Bertinoro. There were talks at Bertinoro where stuff I learned about Ramsey Theory 10 years ago was all I needed. At Dagstuhl I needed more recent background knowledge that I often didn't have.
  4. I got more out of both than I do out of conferences which, by their nature, ONLY have the latest results.
  5. At Dagstuhl our last meal was served late, which was annoying since we had a plane to catch. At Bertinoro all meals were served on time. While some may use this to support the tired stereotype that Italians are better at organization than the Germans, I stand by my prior postings about stereotypes (I'm against them).

Monday, November 02, 2009

Report From Ramsey-Logic-Complexity Workshop

Back from RaTLoCC 2009 which is Ramsey Theory in Logic, Combinatorics, and Complexity.
  1. Here is the list of talks: here.
  2. Reverse Mathematics tries to classify theorems by the strength of the axioms needed to prove them. There are five levels that almost all theorems in mathematics fit into. One curious outlier: the infinite Ramsey Theorem for pairs (for all c, for all c-colorings of the unordered pairs of naturals, there exists an infinite homogeneous set--- that is, a set of naturals where every pair has the same color; a formal statement appears after this list). It does not fit in nicely. Ramsey for triples, etc., does.
  3. One of the first natural theorems to be proven independent of PA was a Ramsey-type theorem. (One can debate if it's natural.) See here.
  4. Van der Waerden's Theorem: In 1985 and earlier, people thought that one of two things would happen with VDW's theorem. At that time the only proof led to INSANE bounds on the VDW numbers. EITHER a combinatorist would obtain a proof with smaller bounds OR a logician would prove that the bounds were inherent. What happened: a logician (Shelah) came up with a proof that yielded primitive recursive bounds on VDW numbers. (Later Gowers got much better bounds.)
  5. The talks on Complexity were all on Proof Complexity- things like blah-blah logical system needs blah-blah space to prove blah-blah theorem. Some of the theorems considered were Ramsey Theorems.
  6. A while back I did two guest posts on Luca's blog on the poly VDW theorem here and here. At the conference I found out two things that I said or implied that are INCORRECT.
    1. I said that the first proof of Poly VDW, using Ergodic theory, did not give bounds. Philipp Gerhardy gave a talk at RaTLoCC that claims that the ergodic proofs either do yield bounds, or can easily be made to yield bounds. See his webpage (above pointer) for his paper. (When I later asked him if he prefers the classical proof of VDW's theorem or the Ergodic proof he said he can't tell them apart anymore.)
    2. I said that the only bounds known for PolyVDW numbers are the ones coming from an omega^omega induction. It turns out that Shelah has a paper that gives primitive recursive bounds. See either here or entry 679 here. (The entry 679 version looks like it's better typeset.) The paper is from 2002. I am not surprised that this is true or that Shelah proved it. I am surprised that I didn't know about it since I am on the lookout for such things.
  7. The conference was about 1/3 logicians, 1/3 combinatorists and 1/3 complexity theorists. There were only 27 people there, partially because it conflicted with FOCS and partially because there is some institute having the entire Fall devoted to Combinatorics (I forget which inst. but they are in California).
  8. They plan to run it again, and if so I will so be there. Or I will be so there. Or something.
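
For reference, here is the infinite Ramsey Theorem for pairs from item 2 above, written out formally (the rendering is mine; [S]^2 denotes the set of unordered pairs from S):

  \[
  \forall c \ge 1,\ \forall f\colon [\mathbb{N}]^2 \to \{1,\dots,c\},\
  \exists \text{ infinite } H \subseteq \mathbb{N} \text{ such that } f \text{ is constant on } [H]^2.
  \]

Such an H is called homogeneous. Ramsey for triples is the same statement with $[\mathbb{N}]^3$ and $[H]^3$.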

Friday, October 30, 2009

FOCS News

Last month I suggested that students should go to FOCS even if they didn't have a paper there. I doubt my post had much to do with it, but I heard FOCS did attract many students this year. However there seemed to be fewer of the older participants and in particular very little news from the blogosphere.

Some news beyond my student reports earlier this week: Lipton posted about the 50th FOCS day of talks. Shaddin Dughmi guest posts for Noam. Joe Fitzsimons "pretends" to be a computer scientist and tweeted from FOCS. Shiva Kintali promises a post soon on open problems from FOCS talks.

Still sad to see the 50th FOCS with such limited coverage. So if you went, let me know how you enjoyed the conference. What were your favorite talks? Any controversies in the business meeting? What's the gossip? Inquiring minds want to know.

Thursday, October 29, 2009

Proud to be a Monomath

A girl told her father that history was much easier when he was in school. "Why?" he asked. She responded "Because there was so much less of it."

An old joke, but one that ran through my mind as I read the article The Last Days of the Polymath in the Autumn issue of Intelligent Life. That's polymath, not in the Tim Gowers sense of a massive collaborative effort to solve math problems, but the more traditional sense of people who know a lot about a lot, people like Leonardo da Vinci and Ben Franklin. But the article reminisces about an earlier time, when one could learn "a lot" about an area, such as physics, without needing to know all that much, basically what's covered in a first-year college sequence today. As Bill pointed out, we don't even have many polymaths in the sense of knowing a lot about a lot of math either.

Advances in knowledge have made it impossible to know a lot about a lot. Advances in communication and travel have made polymaths less important since we can now better pool our talents. I might only know a lot about a little, but there isn't much I can't find a lot about quickly.

Richard Posner's quote in the article caught my eye.
“Even in relatively soft fields, specialists tend to develop a specialized vocabulary which creates barriers to entry,” Posner says with his economic hat pulled down over his head. “Specialists want to fend off the generalists. They may also want to convince themselves that what they are doing is really very difficult and challenging. One of the ways they do that is to develop what they regard as a rigorous methodology—often mathematical.

“The specialist will always be able to nail the generalists by pointing out that they don’t use the vocabulary quite right and they make mistakes that an insider would never make. It’s a defense mechanism. They don’t like people invading their turf, especially outsiders criticising insiders. So if I make mistakes about this economic situation, it doesn’t really bother me tremendously. It’s not my field. I can make mistakes. On the other hand for me to be criticizing someone whose whole career is committed to a particular outlook and method and so on, that is very painful.” [Spelling Americanized]
We monomaths develop specialized vocabularies and mathematical tools and models because it helps us deeply understand an area. But much of what Posner says rings true. For example, I've seen these "defense mechanisms" kick in quite often from both computer scientists and economists towards those trying to cross into each other's fields.

Wednesday, October 28, 2009

FOCS Last Day

Last (but surely not least) day at FOCS 09!

Jan Vondrak talked about some nice results that unify previous query-complexity approaches to proving lower bounds for submodular function maximization. For a general class of functions he proves that good approximations require an exponential number of queries to the value oracle. This duplicates known inapproximability results, and also implies new bounds for two particular problems.

Another interesting work was presented by Heiko Roeglin, who proved bounds on the size of Pareto-optimal sets in the context of multiobjective optimization. In the general case it is almost always possible to find instances of problems with exponentially large Pareto sets. This renders any approach based on analyzing Pareto sets infeasible. However, in many real-world applications, it turns out that these sets are typically small. Roeglin and Shang-Hua Teng give a possible theoretical explanation for this, in the framework of smoothed analysis, proving that the expected number of Pareto-optimal solutions is polynomially bounded.
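
For intuition about what a Pareto-optimal set is, here is a brute-force sketch for a toy biobjective problem: over subsets of items, minimize total weight while maximizing total value. The setup and names are mine, for illustration only; real instances are where the exponential blowup, and the smoothed-analysis rescue, kick in.

  from itertools import chain, combinations

  # A point is Pareto-optimal if no other point is at least as good in
  # both objectives and strictly better in one. Brute-force enumeration:
  # exponential time, fine only for tiny examples.
  def pareto_points(items):  # items: list of (weight, value) pairs
      subsets = chain.from_iterable(
          combinations(items, r) for r in range(len(items) + 1))
      points = [(sum(w for w, v in s), sum(v for w, v in s)) for s in subsets]
      return [p for p in points
              if not any(q[0] <= p[0] and q[1] >= p[1] and q != p
                         for q in points)]

In the worst case the returned list has exponentially many points; the Roeglin-Teng result says that after small random perturbations of the inputs its expected size is polynomial.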

To conclude the 50th FOCS, Ryan Williams gave a much-appreciated talk on new approaches to Boolean matrix multiplication. The aim of the paper was to come up with a more efficient combinatorial algorithm for Boolean matrix multiplication. Ryan and Nikhil Bansal succeed in this direction by connecting Boolean matrix multiplication and the triangle removal lemma. Their approach gives the first improvement on general upper bounds for this problem in 40 years!
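
For readers outside the area, here is the naive cubic combinatorial algorithm that such results improve upon, as a quick Python sketch. This is only the textbook baseline; the Bansal-Williams algorithm built on the triangle removal lemma is far more involved and not reproduced here.

  # Naive O(n^3) Boolean matrix multiplication:
  # (A*B)[i][j] = OR over k of (A[i][k] AND B[k][j]).
  def bool_mat_mult(A, B):
      n = len(A)
      C = [[False] * n for _ in range(n)]
      for i in range(n):
          for k in range(n):
              if A[i][k]:  # only then can row k of B contribute to row i
                  for j in range(n):
                      if B[k][j]:
                          C[i][j] = True
      return C

Combinatorial improvements, from the Four Russians method onward, shave logarithmic factors off the n^3 count, as opposed to the algebraic Strassen-style route taken by fast matrix multiplication.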

Tuesday, October 27, 2009

FOCS Day 2

After an early morning injection of much-needed caffeine, Team Northwestern was ready for the next round of talks. Today's program consisted of talks more related to our interests than those of the previous days.

In the morning, two papers related to the computational complexity of equilibria were presented. In the first, Xi Chen, Decheng Dai, Ye Du and Shang-Hua Teng consider Arrow-Debreu markets in which agents' utilities are additively separable and piecewise-linear concave. In this context, via an interesting reduction from 2-player games, they were able to show that the problem is PPAD-complete, confirming that even additively separable concave utilities are not easy to deal with. In another talk, Laura J. Poplawski et al. introduced a simple PPAD-complete game, claiming it is a natural candidate for PPAD-completeness reductions. To support their claim they show how it naturally reduces to known (and also to many unknown) PPAD-complete problems.

Also in the morning session, an interesting paper by Andrea Montanari and Amin Saberi talked about coordination games and their relation to the spread of news, technologies and other social phenomena on networks. They were able to show how convergence times directly depend on certain graph properties.

A bit later in the morning, the winning student papers were presented. Alexander Sherstov showed his work on strongly improved bounds on the degree of intersections of halfspaces. Jonah Sherman was next, presenting improvements on sparsest-cut algorithms.

Bringing the day to a close, session 8B covered three of our favorite topics: social network formation, revenue maximization, and mechanism design. What more could we ask for in one session?

Monday, October 26, 2009

FOCS Day 1

Michele and Michael continue their guest posts from Atlanta.

The first day of FOCS is officially over. 8 sessions, 26 talks and 270 proceedings pages have already gone by.

Following this blog we have learned many useful things. One of them is not to rank talks; another is not to limit posts to dry descriptions of the presentations we attended.

So you're stuck with our (admittedly not so interesting) impressions and thoughts. The quality and level of the work presented is amazing, and often wanders above our heads. However, it has been a very pleasant surprise that the talks are often so good and entertaining that they become manageable even when the topics are not in our background. So only positive impressions came from Day 1.

As a colorful aside, we will mention another conference that is taking place in the next room, aimed at soon-to-be-extremely-wealthy people. The mingling of the two communities during coffee breaks is often entertaining and amusing, and we have to say we spotted a prominent computer scientist carrying around the "proceedings" of this other meeting... Is this some kind of sign? And if so, how should we interpret it?

Sunday, October 25, 2009

FOCS Day 0

Alas, I'm not in Atlanta but in New York City for another meeting. Something different this time--FOCS through the eyes of two Northwestern students, Michele Budinich and Michael Lucas, both attending their first major theory conference.


The organization on day 0 was fantastic. Fox had even reserved a parking lot for us!!

Wait a second.

Are we sure this is where we're supposed to be?

Luckily enough we met more conscious computer scientists who pointed us in the right direction: the FOCS conference. What a novice mistake.

We finally made it to Georgia Tech's LeCraw Auditorium for Day 0 of the FOCS conference. Also called Theory Day, it was a dual celebration marking the 50th anniversary of FOCS and the 20th anniversary of Georgia Tech's ACO program.

Theory Day started off in the most appropriate way with a talk by Richard Karp who shared his favorite algorithms, pointing out the many different qualities that can make an algorithm "great."

After that came two slightly more technical talks, by Mihalis Yannakakis and Noga Alon, on the computational complexity of equilibria and path disjointness in graphs.

As a very fitting conclusion for a 50th anniversary, the last talk gave some fascinating views and possible directions for computer science: Manuel Blum talked about getting to grips with consciousness, with some new neuroscience models that could shed some light (in a very literal sense) on how computers could eventually prove their own theorems.

Day 0 ended with the first official FOCS 2009 event: the reception, where much food and camaraderie could be found. Afterwards, the attendees of FOCS '09 returned to their hotel rooms to dream of algorithms and optimization.