Thursday, December 28, 2006

2006 Year in Review

The paper of the year goes to Settling the Complexity of 2-Player Nash-Equilibrium by Xi Chen and Xiaotie Deng, which finished characterizing the complexity of one of the few problems between P and NP-complete. The paper won the Best Paper Award at FOCS.

The story of the year goes to Grigori Perelman: his proof of the Poincaré Conjecture, his declining the Fields Medal, Shing-Tung Yau's portrayal in the New Yorker and the lawsuit threat that followed. Science magazine named Perelman's proof the Breakthrough of the Year.

Meanwhile the theory-friendly number theorist Terence Tao accepted his Fields Medal, and CMU cryptographer Luis von Ahn and Tao won MacArthur "genius" awards.

My post FOCS Accepts and a Movie received over 200 comments mostly about the state of the theory conferences. Sadly the movie and the rest of the site have disappeared. I also finished my favorite complexity theorems, at least for now.

In 2006 we celebrated the centennials of the births of Kurt Gödel and Richard Rado and mourned the early death of Mikhail Alekhnovich.

Thanks to guest blogger Claire Kenyon, guest posters Eldar Fischer, Jérémy Barbay, Janos Simon and podcast guests Luis Antunes, Eldar Fischer, Troy Lee and his opponents. The biggest thanks to Bill Gasarch who did all of the above several times.

Happy New Year to all my readers, and let's look forward to an exciting FCRC and a renewed US commitment to increased funding for the basic sciences.

Wednesday, December 27, 2006

Foundations and Impacts

As we start thinking about our Theoretical Foundations proposals, a few related items from the theory community.

Joan Feigenbaum and Michael Mitzenmacher have written a report "Towards a Theory of Networked Computation" available on the group's web page. The report is still under revision and Joan welcomes your comments.

SIGACT Chair Richard Ladner writes in the latest SIGACT News about the importance of the required Broader Impacts criteria in NSF proposals.

One way to think about the Broader Impacts criterion is that when we receive money from the people of the United States through NSF, the people would like to know ahead of time of what benefit the research may or will be to society. If there is little or no benefit then why should the people continue to support NSF? When NSF goes to Congress to ask for money, it is going to the people's representatives, who ask for justification to spend the people's money on scientific research. Basically, NSF's funding, and ours indirectly, depend on the belief by the public that broader impacts come from our research. Some people have said to me that a focus on Broader Impacts is a move away from basic research to more mission oriented research, or research with strings attached. If we look at the ways that we can satisfy the Broader Impacts criterion, they are very general, and relate to education, broadening participation by underrepresented groups, and other benefits to society. Please read the representative activities for concrete ideas for how to include Broader Impacts in our proposals.

As SIGACT Chair, I am trying to help increase the funding for computer science theory research. The best way to increase funding for research is to convince people it is important to them and the people around them. There is a difference between "important" and "useful". Artists are able to convince people to buy art, not because it is useful, but because it inspires them. Astronomers convince people to pay them to study the stars, not because they are useful (except for our own star, the sun), but because the stars are fascinating in their own right. Understanding the birth and possible death of the universe is of no practical value, but is just a fundamental question.

All this said, I am a firm believer in serendipity. Often, research leads to unexpected results and unanticipated applications. Unfortunately, this phenomenon is quite rare and probably not common enough to convince people to provide large amounts of research money. The best approach is to have a great story about the benefits of theoretical computer science research and its promise for the future. This will generate enough money for all of us so that rare serendipitous events will happen naturally in the course of doing our research.

Tuesday, December 26, 2006

An Internet-Free Week

From about early December to late January the academic world takes a little breather as many universities end their fall terms and start their spring ones. Many students and faculty are away, and the universities are ghost towns. A time for rest, a chance to catch up on some of those tasks we've been putting off all fall. A chance to get ready for the next quarter or semester. This week between Christmas and New Year's marks the nadir of activity: absolutely nothing interesting should happen this week.

But in recent years this season has seemed far less quiet. We work in a more global society, and many countries, like Israel and India, treat this week not much differently than any other. We can access the internet from anywhere and, more importantly, we know everyone else can access the internet from anywhere. For many, taking time to visit relatives and friends, or even going on vacation, does not mean a break from email. Yesterday, Christmas Day, I received several actionable emails, almost at the level of a typical workday.

We need an internet-free week. We should just shut down the whole network for seven days. Some people would use the time to relax and take a break, knowing they will not be missing anything important. Others would continue to work, finding themselves surprisingly more productive than usual.

Friday, December 22, 2006

A Recommendation Letter

December 22, 2006

Dear Recruiting Committee:

George Henry is among the top fifty computational complexity theorists on the market this year and you should consider him for any faculty, postdoc or janitorial position in your department.

Computational Complexity compares complexity classes representing different amounts of complexity resources among different computational models. There are hundreds of complexity classes and thousands of variations on those classes. Henry's best result shows that for two of these classes, one of them is not contained in the other assuming that some third class is not contained in some fourth class. This work appeared in a theoretical computer science conference you've never heard of.

For service, George Henry has wasted his money joining the ACM, IEEE, AMS, MAA, ASL and SIAM. He's even (under duress) refereed a paper or two.

Henry gave a seminar once and nobody ran out screaming, probably because they were too busy sleeping. Henry also taught a course once. He was not actually convicted on any harassment charges.

George Henry has no two body problem since he's never had a relationship last more than three days.

In short, there are several great complexity theorists on the market this year but since your department has no chance of hiring any of them you might as well look at Henry.


Lance Fortnow
Professor of Computer Science

Thursday, December 21, 2006

The Necessity of Engineering for Science

Last month the University of Chicago faculty received an email from new president Robert Zimmer and soon-to-be-provost Thomas Rosenbaum about discussions on creating a program in Molecular Engineering.
The boundary between science (as the study of natural phenomena) and engineering (as the development and study of man-made artifacts) has become much more porous and in certain areas has virtually vanished. Historically, the University of Chicago has had a major international presence in science, but with a few special exceptions, has not systematically developed programs in engineering. With this important and evolving paradigm shift in the relationship between science and engineering, there are important questions regarding how the University should respond. These questions arise both because of exciting and important new areas of investigation at the science/engineering interface and because a lack of an explicit investment in engineering may hamper the development of our science.
Does science need engineering because engineering problems lead to important intellectual scientific questions or because engineering provides the tools needed by the scientists to carry on their research? Perhaps a bit of both.

Wednesday, December 20, 2006

Entertainment Tidbits

Can a CS degree propel you to a major acting role on a popular new TV series? Worked for this person.

I heard a complaint that in the movie Deja Vu they used face-recognition algorithms to find a suitcase in a large city in a matter of seconds. Because it's important to keep the computer science accurate in a time-travel movie.

In the last Numb3rs, Charlie the mathematician was seen carrying a copy of the SIAM Journal on Computing, a prominent TCS journal. Was he reading my paper or yours? At the end of the episode Larry the physicist left on the space shuttle to spend six months on the International Space Station while the actor, Peter MacNicol, moves over temporarily to the show 24. Couldn't Larry just have gone on a normal sabbatical?

On a more serious note, we finally got around to watching Al Gore's documentary An Inconvenient Truth. Gore seriously impressed me with how he laid out the arguments and effects of global warming. The movie really affected my daughters, leading to some interesting family discussions about warming and what we can do. I highly recommend the movie for those who haven't yet seen it.

Tuesday, December 19, 2006

Show Us Your Research

Now that most of the FCRC Deadlines have passed, I would again suggest that you post your papers on a public archive like the Electronic Colloquium on Computational Complexity or the Computing Research Repository. The world wants to know about your research.

Which one should you choose? You don't have to choose; you can freely submit to both ECCC and CoRR. But how do they compare? [Disclosure: I am on the ECCC Scientific Board.]

  • ECCC focuses on computational complexity though often contains papers across theoretical computer science. CoRR broadly covers all of computer science (with tags for subfields) and is part of the arXiv project covering physics and math as well.
  • An article has to be approved by an ECCC board member to meet a minimum standard before it can appear. CoRR only checks for relatedness to the topic area.
  • Both plan to have papers posted forever; arXiv, currently run by the Cornell Library, gives stronger backing to this promise. In any case, every paper on ECCC or CoRR should later appear in a conference proceedings and/or journal.
  • ECCC takes PostScript submissions. CoRR prefers LaTeX submissions and processes them with HyperTeX.
  • Both systems allow revisions and all versions remain available.
  • ECCC has a (not-so-user-friendly) discussion system and email announcements of new papers. CoRR has RSS feeds for each subject class. Both systems plan to continually update their interfaces and features.

Monday, December 18, 2006

The Mega-Conferences

Chicago will be invaded by economists in early January, coming to the American Economic Association's Annual Meeting. At the same time the mathematicians meet in New Orleans. The physicists meet in March and April. We computer scientists all get together…never.

Most fields have their big annual get-togethers with plenary talks and many parallel sessions. New Ph.D.s meet with potential employers, often in a very organized way. Most importantly, the entire community comes together to discuss the fundamental scientific and political issues of the discipline.

We don't have those meetings in computer science. The ACM has an annual get-together where they give out awards, but that is relatively small. Every four years we have the Federated Conference, a joint meeting of several conferences, but it doesn't span the field, usually lacking a major AI presence.

So why don't we have a CS Annual Meeting drawing tens of thousands from across the discipline? Many of the other fields' annual meetings started at a time when travel was more difficult and a single large general meeting, or a small number of them, made sense. We are a much more conference-oriented field and few of us would like to take yet another trip to a larger conference.

We lose something by not having a single regular meeting across computer science. We rarely meet computer scientists outside our own subfield except for those in our own departments. Different subfields in CS have developed different cultures. We lack the cohesiveness of other fields. When someone says "I am a Physicist" we know what that means. When someone says "I am a Computer Scientist", do we?

Thursday, December 14, 2006


You can tell a lot about a field by how researchers motivate their results in papers and talks. Pure mathematics often gives little or no motivation, starting a paper or talk with something like "Let λ be an inaccessible cardinal…" In economics, even in theoretical papers, considerable time is spent coming up with stories to justify a given model. More discussion is spent in economics talks on the model than on the particular proofs and results that derive from that model.

In theoretical computer science and in particular computational complexity we straddle between these two worlds. Our goal is to understand the power of efficient computation so we have complexity classes like NC, P, P/poly, BPP and BQP that try to approximate real-world models of computation. We have classes like NP, #P and PSPACE that capture a large number of interesting problems that we would like to solve. We have models like Probabilistically Checkable Proof Systems (PCPs) whose parameters help us understand the limitations of approximation. We have combinatorial tools like expanders and extractors that have wide applications in many areas of complexity and beyond.

But all these classes, models and tools have very nice theoretical properties as well. We tend to focus more on the theoretical foundations, judging papers more for their theorems and the difficulty of the proofs than for the importance of the underlying problem. In the end we often reduce the motivation in a paper to a single sentence of the introduction, and a theory audience only rarely questions the importance of a model during a talk.

Once we deemphasize the motivation of a model, then others, in an attempt to find open problems, look at variations of the model. Often these variations are motivated solely by the importance of the original model, even if they have little relevance to its original motivation. Researchers then consider variations on the variations, deviating quite far from the original model and its motivations.

Other computer scientists often complain, rightly or wrongly, that theoretical computer science and computational complexity have lost touch with real computational issues. We must be careful not to focus too much on results that fail to express, or simply lack, a reasonable motivation beyond the difficulty of the proofs.

Wednesday, December 13, 2006

Science a Victim of Politics Again

The NSF has a new Theoretical Foundations solicitation. Due date is February 19. Theory of Computing has its own component within this cluster.

But not all NSF news is good. Remember how Bush announced the American Competitiveness Initiative in his State of the Union address back in February? The ACI promised to double the NSF budget over ten years, and the president's proposed budget included an NSF increase of 7.8% for FY 2007, which started October 1. The ACI had good support among both political parties in Congress. So what happened?

Congress couldn't pass most of the budget resolutions before the elections. On Monday congressional Democrats announced they would not finish the spending bills left unfinished by the current Congress, leaving budgets at last year's levels until the beginning of FY 2008 next October.

In a joint statement, the incoming Democratic chairmen of the House and Senate Appropriations Committees said the urgency of new business and the administration's next spending request for the war in Iraq gave them little choice but to abandon efforts to pass the overdue bills.
The increases for NSF and other scientific agencies weren't singled out but science was one of the few programs slated for a long-needed budget increase this year.

More from the CRA.

Tuesday, December 12, 2006

You Ask, We Answer

In the ninth Complexitycast, Bill Gasarch and I answer readers' questions. MP3 (25 minutes, 4.3MB)

In the podcast we mentioned posts on finding jobs and the tradeoff between working on reasonable versus difficult problems.

Monday, December 11, 2006

Favorite Theorems: Second Decade Recap

This past year I listed my favorite computational complexity theorems from 1975-1984. I have now completed my favorite theorems cycle for the first four decades of complexity including 1965-74, 1985-94 and 1995-2004.

Next favorite theorems in 2014. Will your name be on that list?

Sunday, December 10, 2006

Reductions To SAT

Standa Zivny asks
I'd like to ask you about CLIQUE→SAT reduction. The reduction 3-SAT→CLIQUE is a standard one from undergrad course. Since SAT is NP-Complete, every problem from NP, i.e., CLIQUE as well, is reducible to SAT. Is this reduction "known", i.e. defined by some "natural way" as the reduction 3-SAT→CLIQUE is? Is that true for all NP-C problems?
The reduction in Cook's paper creates formulas with variables that represent the entire computation of an NP machine accepting CLIQUE. We can often do much better. Consider the problem of determining whether a graph G on n vertices has a clique of size k.

Variables: y_{i,r} (true if node i is the rth node of the clique) for 1 ≤ i ≤ n, 1 ≤ r ≤ k.

Clauses:

  • For each r: y_{1,r} ∨ y_{2,r} ∨ … ∨ y_{n,r} (some node is the rth node of the clique).
  • For each i and r < s: ¬y_{i,r} ∨ ¬y_{i,s} (no node is both the rth and sth node of the clique).
  • For each r ≠ s and i < j such that (i,j) is not an edge of G: ¬y_{i,r} ∨ ¬y_{j,s} (if there's no edge from i to j then nodes i and j cannot both be in the clique).
That's the entire formula, which is satisfiable if and only if G has a clique of size k.
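The three clause families above are easy to generate mechanically. Here is a minimal Python sketch (my own illustration, not from the original post) that builds the formula and, for tiny graphs, checks satisfiability by brute force:

```python
from itertools import combinations, product

def clique_to_sat(n, k, edges):
    """Reduce k-CLIQUE on an n-vertex graph to CNF-SAT.

    Variable (i, r) is true when vertex i is the r-th vertex of the
    clique.  Returns a list of clauses, each clause a list of literals
    (i, r, polarity) with polarity True for a positive literal.
    """
    edge_set = {frozenset(e) for e in edges}
    clauses = []
    # For each r: some vertex is the r-th vertex of the clique.
    for r in range(k):
        clauses.append([(i, r, True) for i in range(n)])
    # For each i and r < s: vertex i is not in two clique slots.
    for i in range(n):
        for r, s in combinations(range(k), 2):
            clauses.append([(i, r, False), (i, s, False)])
    # Non-adjacent vertices i < j cannot occupy two different slots.
    for i, j in combinations(range(n), 2):
        if frozenset((i, j)) not in edge_set:
            for r in range(k):
                for s in range(k):
                    if r != s:
                        clauses.append([(i, r, False), (j, s, False)])
    return clauses

def brute_force_sat(n, k, clauses):
    """Try all 2^(nk) assignments -- only sensible for tiny inputs."""
    for bits in product([False, True], repeat=n * k):
        assign = {(i, r): bits[i * k + r] for i in range(n) for r in range(k)}
        if all(any(assign[(i, r)] == pol for i, r, pol in cl) for cl in clauses):
            return True
    return False

# A triangle contains a 3-clique; a path on 3 vertices does not.
print(brute_force_sat(3, 3, clique_to_sat(3, 3, [(0, 1), (1, 2), (0, 2)])))  # True
print(brute_force_sat(3, 3, clique_to_sat(3, 3, [(0, 1), (1, 2)])))          # False
```

The variable and helper names here are hypothetical; any real use would feed the clauses to a SAT solver rather than enumerate assignments.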

While simple, this reduction uses Θ(n²k²) clauses, so an optimized Cook-Levin-style reduction can produce smaller formulas for large k. Using Robson's reduction one can create formulas of size O(n² (log n)^{O(1)}).

We can get similarly nice reductions for many other NP-complete problems like 3-COLORING and HAMILTONIAN CYCLE. But there is no general procedure for producing simple formulas, especially for problems that involve arithmetic, like SUBSET SUM.

Friday, December 08, 2006

Save the Mathlete, Save the World

An Off-Broadway tale of beauty and the geeks.
Vickie Martin is über-popular. She's also wicked smart. Victoria Martin: Math Team Queen demonstrates that chaos theory rules when the third most popular sophomore is roped into joining the all-male, all-nerd Longwood High School math team, upsetting the axis of symmetry of boys becoming men. Will Vickie Martin invert the curve or become the coefficient for her team winning the state math championship? Can this goddess of π possibly become the common denominator that makes the mathletes victorious? Totally.

Thursday, December 07, 2006

The Efficient Church-Turing Thesis

The Church-Turing thesis roughly states that everything computable is computable by a Turing machine. I strongly believe the Church-Turing thesis and have vigorously defended the thesis in this weblog.

Sometimes we hear about an efficient or strong version of the thesis, for example in the Bernstein-Vazirani paper Quantum Complexity Theory.

Just as the theory of computability has its foundations in the Church-Turing thesis, computational complexity theory rests upon a modern strengthening of this thesis, which asserts that any "reasonable" model of computation can be efficiently simulated on a probabilistic Turing machine (an efficient simulation is one whose running time is bounded by some polynomial in the running time of the simulated machine). Here, we take reasonable to mean in principle physically realizable.
You mostly hear about the thesis from the quantum computing folks as they use the thesis to justify why quantum computing changes everything we believe about efficient computation. But did anyone actually state such a strong version of the Church-Turing thesis before quantum computing came along? The closest I can find comes from Steve Cook's 1982 Turing Award Lecture.
The identification of P with the tractable (or feasible) problems has been generally accepted in the field since the early 1970's…The most notable practical algorithm that has an exponential worst-case running time is the simplex algorithm for linear programming…but it is important to note that Khachiyan showed that linear programming is in P using another algorithm. Thus, our general thesis, that P equals the feasible problems, is not violated.
But not even Cook in 1982 seemed to believe that P captured all the "physically realizable" efficient algorithms as he goes on to describe probabilistic and parallel algorithms.

By no means does computational complexity "rest upon" a strong Church-Turing thesis. The goal of computational complexity is to consider different notions of efficient computation and compare the relative strengths of these models. Quantum computing does not break the computational complexity paradigm but rather fits nicely within it.

Having said all that, as of today the strong version of the Church-Turing thesis as described by Bernstein and Vazirani seems true, as we are not even close to physically realizable quantum computers. We don't even need "probabilistic" since we believe we have strong pseudorandom generators. Maybe someday we will build those quantum machines, or create other physical devices, that will efficiently compute problems beyond polynomial time. But we are not there yet.

Wednesday, December 06, 2006

Shifting Time

At a community concert last Sunday the host said he was pleased with the attendance given the competition with the Bears game. He also said someone had instructed him not to reveal the score. Why not? Because some people in the audience were saving the game and would want to watch it after the concert.

I flew during the last game of the World Series. After we landed I checked the final score on my mobile phone (cool enough that I can do that) and told it to the St. Louis fan sitting across from me. But the person behind me got upset since he was planning to watch the game later.

In my youth you had to watch a sporting event live or you couldn't watch it at all. Everyone watched TV shows at the same time. We even all saw movies at roughly the same time: if you missed a movie during the few weeks it played in the theater, you would have to wait years until it showed in a retrospective or came to TV. You could safely have a discussion about the latest game, TV show or movie knowing that everyone had either seen it or never would get the chance.

Technology has changed the notion of time itself. Events, particularly entertainment, don't happen at the same time for everyone. They just have an earliest time. Events happen when we want, or have time, for them to happen. This freedom of time makes life more convenient but makes it harder to talk about events that have occurred for some people and not for others. Even email causes time shifting. How often have we tried to talk to someone who can't because he "hasn't read that email yet"?

The local cable company taped the concert for later broadcast. People could have watched the football game when it happened and the concert later. Their choice.

Monday, December 04, 2006

On Paper Titles

How do you take a twenty-page research paper and condense its essence into a few words? Some title don'ts, with (made-up) examples.
  • Starting a title with "On", "Towards", "New" or "Improved" or ending with "?"
    You are announcing that you have failed to solve the problem you really care about and this is the best you can do. Nobody would title a paper proving P≠NP "On an Open Problem of Cook".
  • "Breaking the … Barrier"
    Either it wasn't a barrier after all or you cheated and changed the model.
  • Cutesy Titles
    "A slice of π"
  • Ending with a Roman Numeral
    "Pig Complexity I". Does the world need more than one paper on pig complexity?
  • Out of Context Titles
    "Three rounds suffice"
  • Technical Terms
    Don't use highly technical terms or complexity classes in the title. Any computer scientist should at least understand the words in the title.
  • Formal Statement of Result
    "An O(n^{3/2} log n log log n/log* n)-time one-sided randomized algorithm giving an O(n^{1/3}/(log n)^{4/3}) approximation to 12-sided 3-way 20-dimensional hypergraph coloring."
  • Long Titles
What makes a good title? Short and to the point. Some titles I liked from the last FOCS: "Cryptography from Anonymity", "Fault-Tolerant Distributed Computing in Full-Information Networks", "Planar Point Location in Sublogarithmic Time". Enough to give you the idea of the paper and the desire to read more.

I went through my bibtex file trying to find great papers with lousy titles. Except for a few "On"s (On computable numbers, with an application to the Entscheidungsproblem), great papers seem to have at least reasonable titles. A lesson for all of us paper titlers.

Sunday, December 03, 2006

My International Day

Friday morning I IM'd a co-author in India; in the evening I Skyped with Hong Kong. A Dutch professor emailed me about a paper we have with co-authors in Russia and the Czech Republic. A New Zealand professor asked me a complexity question. On the train ride home I worked on a paper with Portuguese co-authors.

What amazes me is not just the international connections but that I usually don't even realize when I am dealing with someone overseas. Academic research has gone truly global: I can call, instant message, email and send papers to colleagues across oceans as quickly and easily as across campus. And the more we get used to this technology, the smaller the world becomes; at some point we will stop connecting people to countries and instead connect them to points on the internet.

On the other hand these colleagues didn't get to share my experience with Chicago's first blizzard of the season. Too bad I couldn't IM the snow to India.