Wednesday, December 29, 2010

Complexity Year in Review 2010

Complexity Theorem of the year goes to Ryan Williams for his exciting separation of NEXP from ACC0. The runner up is Arora, Barak and Steurer for their algorithm for unique games. Also some great progress on some of Bill's favorite questions including Arithmetic Progressions and the Erdos Distance Problem.

None of these papers got mentioned in the New York Times so the most notable paper of the year goes to Deolalikar's P ≠ NP. Many of you got upset that I didn't give this paper the respect it didn't deserve. I did appreciate the publicity the paper generated for our great open problem but the status of the P versus NP question remains: still open.

Last year we highlighted several new blogs. This year the trend is reversing as several theory bloggers have slowed down or stopped blogging. A few of our commenters got very ugly on our blog this year and finally we have given in to comment moderation, though we rarely block.

But social networking in the theory community continues on in other ways highlighted by the Theoretical Computer Science Q&A site. The SIGACT Facebook and Twitter pages have well over a hundred followers each.

The jury is still out on how a near complete change of NSF personnel and the fall elections will affect funding for theoretical computer science. We can always hope.

This year I started a discussion on remaking STOC. The most popular thing I ever wrote is now this tweet. And don't forget my daughter Molly and her friend Danielle as NP and P.

Gone but not forgotten: Martin Gardner, Joseph Kruskal, Avner Magen, Benoît Mandelbrot, Robin Milner, Partha Niyogi, Sam Roweis and Numb3rs.

Thanks much to our guest posters: Daniel Apon, Paul Beame, Rance Cleaveland, Ben Fulton, Josh Grochow, M.T. Hajiaghayi, Bernhard Haeupler, Nicole Immorlica, Subrahmanyam Kalyanasundaram, Clyde Kruskal, Michael Mitzenmacher, Rahul Santhanam, Aaron Sterling, Richard Taylor and Vijay Vazirani. We also thank guest photographer Evan Golub.

Looking forward to 2011 with the big FCRC meeting, learning the location for the Simons Institute for the Theory of Computing and just maybe I'll finish writing my P versus NP book (a bit more than half finished).

Wednesday, December 22, 2010

America's Most Important Algorithm

Yesterday the Census Bureau announced the new apportionment of the 435 representatives to states based on the 2010 census. Illinois loses one representative. Texas gains four. Not only does this affect the makeup of the House of Representatives but also the Electoral College that chooses the president.

Since 1940 the apportionment has been done not by a specific formula but by an algorithm.
  • Input: Pop, a population array for the 50 states.
  • Output: Rep, a representatives array for the 50 states.
  • Let Rep[i] = 1 for each state i.
  • For j = 51 to 435
    • Let i be the state maximizing Pop[i]/sqrt(Rep[i]*(Rep[i]+1))
    • Rep[i] = Rep[i]+1
This algorithm, called the Huntington-Hill method or the Method of Equal Proportions, minimizes the relative difference between sizes of congressional districts.
Check out the Census Bureau video The Amazing Apportionment Machine.
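Here is a minimal Python sketch of the loop above (a sketch for illustration, not the Census Bureau's actual code; the three-state populations in the example are made up just to show the mechanics):

    import math

    def apportion(pop, seats=435):
        # Huntington-Hill / Method of Equal Proportions, straight from the loop above.
        # pop: list of state populations. Returns a parallel list of seat counts.
        n = len(pop)
        rep = [1] * n                      # every state starts with one seat
        for _ in range(seats - n):         # hand out the remaining seats one at a time
            # the state with the largest priority Pop[i]/sqrt(Rep[i]*(Rep[i]+1)) gets the next seat
            i = max(range(n), key=lambda s: pop[s] / math.sqrt(rep[s] * (rep[s] + 1)))
            rep[i] += 1
        return rep

    # toy example: three "states" with made-up populations and 10 seats to hand out
    print(apportion([900, 600, 300], seats=10))   # prints [5, 3, 2]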


Implemented naively, the running time is O(rn) for n the number of states and r the number of representatives. I'll leave it to you readers to find better implementations. Can I compute how many representatives New Jersey gets from Pop in o(n) space?

Of course with n=50 and r=435 the numbers don't get big enough to cause any problem at all for today's computers. I wonder how long it took in 1940?
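For readers who want to experiment, here is one possible answer to the implementation exercise (again a sketch, not any official code): replacing the linear scan with a priority queue brings the time down to O(r log n).

    import heapq, math

    def apportion_heap(pop, seats=435):
        # Same Huntington-Hill loop, but a heap replaces the argmax scan, so each of
        # the r - n seat assignments costs O(log n). Priorities are negated because
        # heapq is a min-heap.
        n = len(pop)
        rep = [1] * n
        heap = [(-p / math.sqrt(2), i) for i, p in enumerate(pop)]   # priority when Rep[i] = 1
        heapq.heapify(heap)
        for _ in range(seats - n):
            _, i = heapq.heappop(heap)
            rep[i] += 1
            heapq.heappush(heap, (-pop[i] / math.sqrt(rep[i] * (rep[i] + 1)), i))
        return rep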

Monday, December 20, 2010

BREAKTHROUGH in algorithms: Improved algorithm for Metric TSP!!!!!!!!


(Guest Blog by Mohammad Hajiaghayi)

We all recently heard about the breakthrough complexity result by Ryan Williams on non-uniform circuit lower bounds. Here is a breakthrough from the algorithms side.

All of us may have heard of Christofides' algorithm from 1976 for the metric Traveling Salesman Problem (TSP), in which, given an undirected graph G with nonnegative edge weights satisfying the triangle inequality, the goal is to find a shortest possible tour that visits each city once. The algorithm and its analysis are very simple: create the minimum spanning tree (MST) T of G, find a minimum-weight perfect matching M (T-join) in the complete graph over the vertices with odd degree in T, combine the edges of M and T to form an Eulerian circuit, and shortcut the circuit by skipping visited nodes. This simple algorithm is a 3/2-approximation algorithm for metric TSP since both the weight of T and twice the weight of M are lower bounds for the optimum. You can write a natural LP known as the Held-Karp relaxation for the problem, for which one can show an integrality gap of at most 3/2, essentially via Christofides' algorithm. For lower bounds, we know that the integrality gap is at least 4/3 for this LP. (ADDED BY BILL: According to Wikipedia it is known that you can never get an approximation better than 220/219=1.00456... times opt, unless P = NP. ADDED LATER: Here is the link to the Wikipedia post and here is the paper by Papadimitriou and Vempala that contains the result. The post led me to find it.)
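(To make the steps concrete, here is a small self-contained Python sketch of Christofides' algorithm. This is only an illustration: the matching step below is brute force rather than a true minimum-weight perfect matching algorithm, so it is only usable on tiny instances.)

    import itertools, math
    from collections import defaultdict

    def christofides(n, w):
        # n vertices; w[i][j] = symmetric nonnegative weights satisfying the triangle inequality.
        # Step 1: minimum spanning tree T (Prim), stored as an adjacency multigraph.
        in_tree, adj = {0}, defaultdict(list)
        while len(in_tree) < n:
            i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                       key=lambda e: w[e[0]][e[1]])
            adj[i].append(j); adj[j].append(i); in_tree.add(j)
        # Step 2: minimum-weight perfect matching M on the odd-degree vertices of T
        # (brute force over pairings; fine only for a handful of odd vertices).
        odd = [v for v in range(n) if len(adj[v]) % 2 == 1]
        best_cost, best_pairs = float('inf'), []
        for perm in itertools.permutations(odd):
            pairs = list(zip(perm[::2], perm[1::2]))
            cost = sum(w[a][b] for a, b in pairs)
            if cost < best_cost:
                best_cost, best_pairs = cost, pairs
        for a, b in best_pairs:
            adj[a].append(b); adj[b].append(a)
        # Step 3: Eulerian circuit of T + M (Hierholzer), then shortcut repeated vertices.
        stack, circuit = [0], []
        while stack:
            v = stack[-1]
            if adj[v]:
                u = adj[v].pop(); adj[u].remove(v); stack.append(u)
            else:
                circuit.append(stack.pop())
        seen, tour = set(), []
        for v in circuit:
            if v not in seen:
                seen.add(v); tour.append(v)
        return tour + [tour[0]]

    # toy example: four points in the plane, weights = Euclidean distances (so the triangle inequality holds)
    pts = [(0, 0), (0, 2), (3, 0), (3, 2)]
    w = [[math.dist(p, q) for q in pts] for p in pts]
    print(christofides(4, w))   # e.g. [0, 1, 3, 2, 0]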

Though Christofides' algorithm seems very simple and easy to improve, so far there has been no progress in this regard despite consistent efforts since 1976. Today I'm happy to announce a breakthrough for this problem by Shayan Oveis-Gharan, Amin Saberi and Mohit Singh.

They obtain a (3/2-c)-approximation algorithm, for some positive constant c, for the graphical TSP where the metric is the shortest path distance in an unweighted graph (a very natural case considered by many so far). Their approach is also natural: they find the tour by sampling a random spanning tree from a maximum-entropy distribution defined by the linear programming relaxation of the problem (similar to the approach of Asadpour, Goemans, Madry, Oveis-Gharan, and Saberi from SODA'10 on improved algorithms for asymmetric TSP). Since, again, the cost of this tree is upper-bounded by the optimum solution, it suffices to show that the cost of its Eulerian augmentation (or T-join) is strictly less than half of the optimum. Of course the analysis is not simple at all. According to the authors:

``The first ingredient of the proof is a deeper study of random spanning trees and their corresponding generating functions. We build on a recent and very interesting study of real-stable polynomials and their applications in discrete probability. In particular, we use these techniques to bound the probability of events defined on the parity of the number of edges chosen from a certain subset to be in the random spanning tree. This is crucial for bounding the cost of the T-join.

The other essential ingredient of our proof is a more detailed analysis of the near-minimum cuts of the LP solution. We study the cross graph of (1+d)-near minimum cuts for some d < 1/100 and observe several interesting properties. In particular, we show that the cross graph corresponding to these cuts has a structure very similar to cactuses. Our analysis generalizes to all graphs and it could be of independent interest.''

Amin told me that they should post the paper on the arXiv by the end of this month.

Thursday, December 16, 2010

Low, Superlow, supersuperlow sets, and Paywalls

Recall the following:
  1. If A is a set then A' (pronounced 'A jump') is the halting set relative to A. Formally it is:
    { e | M_e^A(e) converges }
  2. A set A is low if A' ≤T HALT.
  3. A set A is superlow if A' ≤tt HALT.
There is a well known construction of an undecidable low c.e. set. (What I call c.e., computably enumerable, used to be called r.e., recursively enumerable. See Soare's article for why the change makes sense.) This is one way to get an intermediary Turing degree. However, it turns out that the set is not just low, it's superlow. Consider the following definition of supersuperlow which I made up:
A set A is supersuperlow if A' ≤btt HALT.
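(For readers who have not seen these reducibilities, here is one standard way to state them; the phrasing is a paraphrase, not a quote from the writeup below.)

    \begin{itemize}
      \item $A \le_T B$ (Turing): $A$ is decidable by an algorithm that may ask oracle queries to $B$
            adaptively, with each query allowed to depend on earlier answers.
      \item $A \le_{tt} B$ (truth-table): there is a computable function that, on input $x$, produces
            queries $q_1,\dots,q_k$ and a Boolean function $\sigma$ before seeing any answers, such that
            $x \in A$ iff $\sigma(B(q_1),\dots,B(q_k)) = 1$.
      \item $A \le_{btt} B$ (bounded truth-table): a truth-table reduction in which the number of
            queries $k$ is bounded by a constant independent of $x$.
    \end{itemize}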
I wondered if there exists an undecidable supersuperlow set. I asked four prominent computability theorists (spellcheck wanted me to write computable theorists). They all (1) thought it was a good question, (2) thought the answer was NO and KNOWN, and (3) didn't have a proof or reference. I then asked Carl Jockusch and he (1) thought it was a good question, (2) knew the answer was NO and KNOWN, and (3) had both a proof and a reference.

Since they all thought it was a good question, and because papers that contain the needed results are behind paywalls or unpublished, and hence lost to humanity forever, I did an exposition, which you can get free online here where I present the following:
  1. The classic proof that there exists an undecidable c.e. low set.
  2. Comments on this classic proof that show how it really resulted in an undecidable c.e. superlow set.
  3. A complete unpublished proof that if A is supersuperlow then A is decidable.
Should I add more to it and make it a real survey for a real journal?
  1. PRO: The referees comments may help improve it.
  2. PRO: If the journal is free online then it will be better archived.
  3. PRO: I will get to learn all of this material.
  4. PRO: A paper on my resume.
  5. CON: Referees can be a pain in the neck.
  6. CON: Math arXiv is just as good as, perhaps better than, a journal for archiving. (I will certainly polish it a bit and submit it to math arXiv at some point.)
  7. CON: I will have to learn all this stuff. Do I care beyond what I have already found out? I stopped doing recursion theory back when it was still called recursion theory. Now I'd rather spend my time and energy on Ramsey Theory and other things. (I could, of course, do a survey of Computable Ramsey Theory. Sample theorems by Carl Jockusch: (1) If COL is a COMPUTABLE 2-coloring of the edges of the complete graph on N then there exists a Π_2 homogeneous set. (2) There exists a COMPUTABLE 2-coloring of the edges of the complete graph on N such that no homogeneous set is Σ_2. There is one problem with that: I already did! It's part of my survey of recursive combinatorics.)
  8. CAVEAT: Is having a survey on my resume that valuable? I ask this non-rhetorically which is why this is a CAVEAT rather than a CON.

Monday, December 13, 2010

Math- Old School

In the last month we have reported on NEW RESULTS by Williams, Katz and Guth, Sanders, and Pinkerton and Setra. For a change of pace let's look at some really OLD math from a really OLD book- the Bible. (NOTE- this has nothing to do with whether the Bible is true, just that it's old.)

In Genesis 18 God wants to destroy Sodom and Gomorrah. Abraham wants to argue against this. Here is a paraphrase using modern terms.

GOD: If there are 50 righteous people in Sodom then I will spare the city.

ABRAHAM: What if there are 45? 45 is pretty close to 50.

GOD: Okay. If there are 45 righteous people then I will spare the city.

ABRAHAM: What if there are 40? 40 is pretty close to 45.

GOD: Okay. If there are 40 righteous people then I will spare the city.

ABRAHAM: You know, uh, 30 is pretty close to 40.

GOD: Okay. If there are 30 righteous people then I will spare the city.

ABRAHAM: 20 is only 10 less than 30 so how about....

GOD: Okay. If there are 20 righteous people then I will spare the city.

ABRAHAM: 10 is only 10 less than 20 so how about....

GOD: Okay. If there are 10 righteous people then I will spare the city.

ABRAHAM: Good. (He stops bargaining. Perhaps he shouldn't have--- They couldn't find 10 and the city was destroyed.)

I think Abraham should have used smaller increments as he went down since going from 20 to 10 is cutting it in half which sounds like a lot. But who am I to argue--- he got God down to 10 which is impressive even though it didn't work.

This reminds me of two paradoxes:
  1. The Small Number Paradox. All numbers are small. Proof by induction. Clearly 0 is small. If n is small then adding 1 can't make it much bigger, so n+1 is small. Hence all numbers are small.
  2. The Sorites Paradox, also called The Heap Paradox. 1,000,000 grains of sand make a heap. If n grains of sand make a heap then so do n-1 grains of sand. Hence 0 grains of sand make a heap.
Is the passage in Genesis 18 really math? In a very primitive form I would say yes. Is there any older source for anything that resembles math?

Thursday, December 09, 2010

46 free lunches!

(CONGRADS to all the new ACM fellows. Among them are theorists Jennifer Chayes, Anne Condon, Phil Klein, S. Muthu, and Dan Spielman.)



In my post about mentoring High School Students I noted that for every $1000 my HS mentees win in math and science research competitions I get a free lunch. Two of my students, James Pinkerton and Rafael Setra entered the Team Siemens Competition and won $23,000 each! (The article says $20,000 but James and Rafael tell me it's $23,000.) Looks like I will be eating well for a while. If you visit UMCP and they are free to have lunch with us then you can get a free lunch from them.
  1. Here is the announcement of the winners. They came in third place in the team category.
  2. Their project was on DUPLICATOR-SPOILER games. (Like most games defined in math papers this game is not fun.) Recall that these games provide a way to prove certain properties are not expressible in certain languages (e.g., well-foundedness for linear orderings is not first-order expressible). These games go for a FINITE number of moves that is specified before the game begins. They looked at these games when the number of moves can be an ordinal. Here is how it works: (1) at the beginning of the game there is a counter that holds an ordinal α, and (2) after SPOILER makes his move he decrements the counter to some ordinal β < α of his choice. Here is one result: for any ordinal α there exist two linear orderings L1 and L2 such that if the game is played with these two orderings and both DUPLICATOR and SPOILER play perfectly then SPOILER wins the game in exactly α moves. This is not just an existence proof- they actually say what the orderings are. They are natural in that they were not constructed just for the purpose of having this property.
  3. Here is their paper.
  4. The money actually goes towards tuition and other college-related expenses.
  5. What is the secret of their (and my) success? There are two things that are needed here (this applies to ugrad projects, Masters theses, PhD theses, and, if properly generalized, to life):
    1. Students who are hard working, intelligent, and care about the material. (James actually thinks Dup-SPOILER games are fun!)
    2. A project that is doable.
  6. Note that if a project doesn't go very far it might be that the PROJECT wasn't good or that the STUDENTS weren't good. This happens quite a bit, though ALWAYS something can be salvaged for a high school project.
  7. I usually get to re-use projects since some students don't do a good job or give up. This was the FIRST TIME I had done the DUP-SPOILER GAMES WITH ORDINAL NUMBER OF MOVES project. Now, alas, I can't use it again.

Monday, December 06, 2010

Do Uniform Lower Bounds Matter?

From Ryan Williams' paper:
Non-uniform lower bounds establish impossibility results for computation in the physical world: it could be that P ≠ NP, yet NP-complete problems can still be efficiently solved using a “bloated” program with sufficiently many lines of code. Non-uniform circuit size lower bounds for NP would rule out this possibility.
A language L is in P if we have a single fixed program that efficiently solves L for all inputs at all lengths. L is in P/poly if for every length n there is a program of size polynomial in n that efficiently solves L for all inputs of length n. The programs for two different lengths may have no relation to each other.

Karp and Lipton showed that if NP is in P/poly then the polynomial-time hierarchy collapses so we generally don't believe that NP is in P/poly. Suppose though we lived in a world where NP is in P/poly but still P ≠ NP.

Here is one argument: even though we study complexity in asymptotics, we generally live at one input length, the size of the problems we want to solve. Once we find the program that works at this length then P = NP for us, even though P ≠ NP in general.

But the only constant over time is time itself. An hour now is the same as an hour in 1971, but the amount we can compute in that hour has grown dramatically. We don't care about fixed problem sizes, rather we care about the largest problem we can solve in a given amount of time. As our technology gets better, those input sizes also grow. For problems in P, like linear programming, the algorithms we have for smaller inputs also work on larger inputs by just increasing the computation time. However, if NP is in P/poly the code for NP-complete problems that worked a few years ago may fail miserably now as we may have to find completely different code for the larger inputs we care about now. If P ≠ NP we can't have an efficient process to find that code even though we know it exists.

Thursday, December 02, 2010

A BREAKTHROUGH result on density and 3-AP's

We use the following terminology: [n] means the set {1,...,n}. k-AP means an arithmetic progression of length k. A 3-free set is one with no 3-APs in it. We use the following conventions: (1) all inequalities have a big-O or big-Ω which we do not include, and (2) we put the pointers to the papers at the authors' names when possible.

The following result was posted Oct 30, 2010 by Sanders.
For large n, for all A ⊆ [n], if |A| ≥ n(log log n)^5/log n then A has a 3-AP.
I give the history and some misc information about the result.
  1. In 1927 van der Waerden published the following theorem which is now known as van der Waerden's theorem:
    For all k, for all c, there exists W=W(k,c) such that for all c-colorings of [W] there exists a monochromatic k-AP.
    Online sources: (1) Wikipedia, (2) blog by Girish Varma, and (3) preprint of a book by Gasarch et al. The upper bounds on W(k,c) from the original proof and the links (which are essentially the same) are BIG. In particular they are not primitive recursive. (NOTE- if you know of an online source for van der Waerden's original article and/or a translation into English, let me know.)
  2. Erdos and Turan (you'll need to scroll down some) wanted an alternative proof of this theorem with smaller bounds. To this end they made the following conjectures:
    1. (ER1) For all k, for all ε > 0, for large enough n, for all A ⊆ {1,...,n} such that |A| ≥ εn, A has a k-AP.
    2. (ER2) Let A be a set of natural numbers. If Σ_{x ∈ A} 1/x diverges then A has arbitrarily long arithmetic progressions.
    Both of these imply VDW's theorem. The hope was that they would provide new proofs with better bounds.
  3. Roth showed ER1 for k=3. In fact, he showed the following: for large enough n, for all A ⊆ {1,...,n} such that |A| ≥ n/log log n, A has a 3-AP. The proof used non-combinatorial methods. For online proofs see either Terry Tao's blog or Alex Iosevich's notes.
  4. Szemeredi proved ER1 for k=4 and then for general k. The proof for general k used VDW's theorem and hence did not provide better bounds for W(k,c). Szemeredi's Regularity Lemma, which he developed to prove it, has found many, many applications. His proof for k=4 was purely combinatorial. A scaled down version of it for k=3 was presented in Graham-Rothschild-Spencer in about 2 pages. For an online exposition of this see pages 121-130 of Gasarch et al. For an online exposition of the proof for the general case see this exposition by Tao.
  5. Furstenberg obtained a different proof of ER1 using ergodic theory. This proof was nonconstructive and hence yielded no bounds on the VDW numbers. His technique was later used by Bergelson and Leibman to prove the poly VDW theorem and Poly HJ Theorem. Later Walters found combinatorial proofs for both. See also the exposition by Gasarch et al.
  6. Shelah obtained primitive recursive bounds on the VDW numbers. His proof was purely combinatorial.
  7. Gowers proved ER1 in a way that led to much better bounds on the VDW numbers.
  8. A purely combinatorial proof of the Density Hales-Jewett theorem was done by the polymath group. This is likely the easiest proof of ER1.
  9. Green and Tao showed that the set of primes has arbitrarily long arithmetic progressions. Aside from this, there seems to have been little progress on ER2. Some people think ER2 should replace the Poincare conjecture on the list of Millennium Prizes.
That would seem to be the end of the story for now. ER1 was proven in a way that led to better bounds on VDW numbers. But notice Roth's theorem: the density condition that guarantees a 3-AP is much weaker than the one in ER1. There have been improvements on Roth's theorem.
  1. Szemeredi and Heath-Brown obtained the following result independently: There exists a constant d such that, for large enough n, for all A ⊆ [n], |A| ≥ n/(log n)^d, A has a 3-AP. There is a nice exposition by Green (NOTE- if you know of an online source for the original papers let me know.)
  2. Bourgain obtained the following result: for large enough n, for all A ⊆ [n], |A| ≥ n((log log n)/log n)^{0.5}, A has a 3-AP.
  3. Bourgain improved his result to: |A| ≥ n/(log n)^{2/3-o(1)}.
  4. Sanders improved Bourgain's result to |A| ≥ n/(log n)^{3/4-o(1)}.
  5. Sanders NEW result is that we only need |A| ≥ n(log log n)^5/log n.
OKAY- why is this important? It breaks a barrier that seemed very stubborn and it also leads to better bounds on W(3,c):

Let f be a function such that for large enough n, for all A ⊆ [n] with |A| ≥ f(n), A has a 3-AP. Now suppose [n] is c-colored. Some color class has at least n/c elements. If n/c ≥ f(n), that is, if n/f(n) ≥ c, then that color class contains a 3-AP. Hence W(3,c) ≤ n whenever n/f(n) ≥ c.
  1. Roth's result yields that W(3,c) ≤ 2^{2^{O(c)}}. (Graham and Solymosi obtained this result purely combinatorially.)
  2. Szemeredi's and Heath-Brown's result yields W(3,c) ≤ 2^{c^{O(1)}}.
  3. Bourgain's first result yields W(3,c) ≤ 2^{c^2 log c}.
  4. Bourgain's second result improves the exponent of c from 2 to 3/2-o(1).
  5. Sanders's earlier result improves the exponent of c from 3/2-o(1) to 4/3-o(1).
  6. Sanders NEW RESULT yields W(3,c) ≤ 2^{c(log c)^5}.


The theorems above are of the form: if A is big enough then A has a 3-AP. What about the other side of the ledger: how large can 3-free sets be? There has been an empirical study for small n by Gasarch et al. and an unpolished (not yet submitted) survey by Gasarch et al. (ADDED LATER- SOMEONE POSTED A WEBSITE WHERE YOU CAN FIND ACTUAL LARGE 3-FREE SETS. Here it is.) Behrend had the largest 3-free sets for about 50 years before Elkin obtained an improvement. A shorter proof of Elkin's result was given by Green and Wolf. The current state of affairs on this was summarized in a nice blog entry by Gil Kalai.
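As a small concrete companion (a sketch, not taken from any of the papers above): Behrend-type sets are the record holders, but even the much simpler construction "keep the numbers whose base-3 representation uses only digits 0 and 1" gives a 3-free subset of [n] of size about n^{log_3 2} ≈ n^{0.63}, and it is easy to verify by brute force.

    def no_digit_two(n):
        # numbers in {1,...,n} whose base-3 representation uses only digits 0 and 1;
        # a classical 3-free construction (if x + z = 2y with all three in the set,
        # then comparing base-3 digits, with no carries on either side, forces x = z)
        out = []
        for x in range(1, n + 1):
            y = x
            while y:
                if y % 3 == 2:
                    break
                y //= 3
            else:                # no digit 2 found
                out.append(x)
        return out

    def has_3ap(A):
        # brute-force check for x < y with 2y - x also in A
        S = set(A)
        return any(2 * y - x in S for x in A for y in A if y > x)

    A = no_digit_two(3**7 - 1)
    print(len(A), has_3ap(A))    # 127 elements out of 2186, and no 3-AP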

Do these results help find lower bounds on W(3,c)? YES! Chandra-Furst-Lipton in their paper on multiparty protocols (of all things!) showed that if A ⊆ [n] and A is k-free then there is a coloring of [n] with n log n/|A| colors without any k-AP's. It's an easy probabilistic argument.

SO- how far are we from showing ER2 in the case of k=3? If Sanders' NEW result could be improved to |A| ≥ n/(log n)^{1+δ}, or even |A| ≥ n/(log n (log log n)^{1+δ}), or any density function f such that Σ f(n)/n^2 converges, then ER2 for k=3 would be proven. How far away is this result? A few weeks ago I would have told you that getting NEXP not in ACC0 was not likely for at least 20 years. Ryan Williams proved me WRONG! Two different knowledgeable sources told me personally that the Erdos Distance Problem had reached an impasse at n^{0.864}. Guth and Katz proved them WRONG and me WRONG for believing them. Hence, for this one, I do not venture a guess.

What about k-AP's and k-free sets?
  1. Gowers has shown that there is a constant c such that if |A| ≥ n/(log log n)^c then A has a 4-AP. Gowers also proved that there is a function c(k) such that if |A| ≥ n/(log log n)^{c(k)} then A has a k-AP. In the paper cited he takes c(k) to be 2^{-2^{k+9}}; however, I have heard this has been improved.
  2. Green and Tao have shown that there is a constant c such that if |A| ≥ n/(log n)^c then A has a 4-AP. Tao's website also indicates they have improved this and a paper is in preparation.
  3. Laba and Lacey have a construction of k-free sets which it turns out was already known.


This post has 35 distinct links (UPDATE- now it's 37) and mentions four Fields medalists: Roth (1958), Bourgain (1994), Gowers (1998), and Tao (2006). These are probably both complexityblog records.

For more information on this NEW result see also a nice blog posting by Gil Kalai.

Monday, November 29, 2010

Complexity as Friction

What advantages can we get from computational hardness? Cryptography and pseudo-random number generators come to mind. But perhaps the universe made computation difficult for a reason, not unlike friction.

In the physical world friction usually costs us energy to overcome but on the other hand we can't walk without it. In the computational world, complexity can often slow progress but if it didn't exist we could have many other problems.

We've seen a taste of this as computers have gotten faster, information has become more accessible, and algorithmic tools, particularly in learning and search, have gotten better.
  • Privacy: We can now store more information about each person and even the old information is much easier to access. Perhaps as important we can now learn more from less with much better algorithms. We're almost at the point where computers know more about us than we know about ourselves.
  • Markets: Two words: Flash Crash 
  • Airlines: We can easily find the cheapest airline flights and so airlines have put considerable effort into competing on this single dimension of price at the expense of the whole flying experience. This has also reduced competition: airlines have now cut capacity so they can keep prices higher.
  • Politics: Politicians act as a proxy for the people in a democracy and we entrusted them to make difficult decisions because we didn't have the tools to help in these processes ourselves. But now we have much more access, giving less flexibility to politicians, making them focus more on the short term instead of the big picture and polarizing our politics.
These are just a few of many such stories. As computer scientists our goal is to make computation as efficient and simple as possible but we must keep in mind the costs of reducing the friction too much.

Wednesday, November 24, 2010

Game Theory, Terrorism, Hardness and SAT Solving

Last week I attended the Army Research Office sponsored Workshop on Reasoning in Adversarial and Noncooperative Environments related to Topic #22 of the 2011 MURI. Most of the attendees were AI types, many of whom look at games with incomplete information against opponents you can't trust. A group at USC talked about their work using game theory to deploy federal air marshals on planes and road-side checkpoints at LAX. Game theory that might save lives.

I talked in a panel on the complexity of computing equilibria about why we should turn the question around and use computational complexity to redefine equilibrium. Bart Selman, also on the panel, talked about his limited success in using SAT-solving heuristics for QBF. During the Q&A we got into an interesting debate on the importance of computational complexity in real-world scenarios. The word "irrelevant" came up to note that NP-completeness is too blunt a tool to capture hardness in practice. Bart didn't think average-case was quite the right approach either; some other structural property of problems was needed.

Though we both care deeply about satisfiability, Bart and I don't hang out much in the same circles and this workshop was the first time we really had a chance to chat. Despite the panel, Bart later showed quite an interest in computational complexity. I found myself explaining Ryan's new result to him, which gets complexity results from better circuit SAT algorithms.

The PCP theorem says it is as hard to approximate max-SAT as it is to solve it exactly. In practice, according to Bart, this separation makes SAT easier to solve heuristically. So Bart suggested one might use the PCP theorem as a method to preprocess a formula to improve current heuristics. The PCP theorem still has large constants in the size of the reduction, making a direct application impractical. But perhaps parts of the theorem, for example using good error-correcting codes, can help. That would take the PCP theorem, normally used for showing problems hard, and use it to help make SAT easier. Cool if it could be made to work.

Monday, November 22, 2010

Erdos Distance Problem SOLVED!

(All papers referred to in this post can be accessed from my website on the Erdos Distance Problem.)

In 1946 Erdos raised the following question: Given n points in the plane how many distinct distances between points are you guaranteed? Let g(n) denote the number. How does g(n) grow? This problem is now called the Erdos Distance Problem.

A while back I created a website of all(?) of the papers on this problem. Today I happily added one more paper to it which comes close to just about settling it. See also Tao's Blog on the recent result.

Julia Garibaldi, Alex Iosevich, and Steven Senger have a book due out in Jan 2011 entitled The Erdos Distance Problem. I proofread it for them so, without giving away the ending, I'll just say it is AWESOME.

I have seen it written that Erdos conjectured either g(n) ≥ n/sqrt(log n) or g(n) ≥ n^{1-ε}. These papers reference Erdos's 1946 paper. No such conjecture is there. I suspect Erdos did state the conjecture in talks.

I give a brief history:

  1. Erdos (1946) showed O(n/sqrt(log n)) ≥ g(n) ≥ Ω(n^{0.5}). These proofs are easy by today's standards.
  2. Leo Moser (1952) showed g(n) ≥ Ω(n^{0.66...}). (Actually n^{2/3}.)
  3. Chung (1984) showed g(n) ≥ Ω(n^{0.7143...}). (Actually n^{5/7}.)
  4. Chung, Szemeredi, Trotter (1992) showed g(n) ≥ Ω(n^{0.8}/log n).
  5. Szekely (1993) showed g(n) ≥ Ω(n^{0.8}).
  6. Solymosi and Toth (2004) showed g(n) ≥ Ω(n^{0.8571}). (Actually n^{6/7}.)
  7. On page 116 of Combinatorial Geometry and its Algorithmic Applications by Pach and Sharir there is a prediction about how far these techniques might go: A close inspection of the general Solymosi-Toth approach shows that, without any additional geometric idea, it can never lead to a lower bound greater than Ω(n^{8/9}). The exponent is 0.888... .
  8. Gabor Tardos (2003) showed g(n) ≥ Ω(n^{0.8634...}). (Actually n^{4e/(5e-1) - ε}.)
  9. Nets Hawk Katz and Gabor Tardos (2004) showed g(n) ≥ Ω(n^{0.864...}). (Actually n^{(48-14e)/(55-16e) - ε}.)
  10. (ADDED LATER) Elekes and Sharir (2010) have a paper that sets up a framework which Guth and Katz used to get their result.
  11. (THE NEW RESULT) Guth and Katz (2010) showed g(n) ≥ Ω(n/log n).
Some thoughts
  1. There is a big gap between 1952 and 1984. I'd be curious why this is.
  2. I have seen it written that Erdos conjectured either g(n) ≥ n/sqrt(log n) or g(n) ≥ n^{1-ε}. These papers reference Erdos's 1946 paper. No such conjecture is there. I am sure that Erdos made the conjecture in talks.
  3. The paper by Szekely is a very good read and is elementary. Solymosi and Toth is still elementary but is getting harder. It is not clear when these papers cross the line from Could be presented to a bright high school student to Could not be.
  4. While there is a casual statement about needing new techniques this field did not go in the barriers-direction of formally proving certain techniques could not work.
  5. Did the new result use additional geometric ideas? Not sure yet- though I have emailed some people to ask.
  6. The first result, by Erdos, really gives sqrt(n) - O(1), not just Ω(sqrt(n)). The rest are asymptotic.
  7. If there are 289 points in the plane then there are at least 17 distinct distances. Are there more? I would be curious what happens for actual numbers like this (see the quick experiment after this list). Exact results are likely very hard; however, some improvement may be possible.
  8. Gabor Tardos is Eva Tardos's brother. Nets Hawk Katz is not related to Jon Katz (Cryptographer and Blogger). (Nets Hawk Katz is a great stage name!) I do not know if Leo Moser and Robin Moser are related.
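Here is the quick experiment promised in item 7 (a sketch; squared distances are used to avoid floating point, which does not change the count): put the 289 points on a 17x17 integer grid, the standard candidate for few distinct distances, and count.

    def distinct_distances_in_grid(k):
        # number of distinct (squared) distances among the k*k points of a k x k integer grid
        points = [(x, y) for x in range(k) for y in range(k)]
        dists = set()
        for a in range(len(points)):
            x1, y1 = points[a]
            for b in range(a + 1, len(points)):
                x2, y2 = points[b]
                dists.add((x1 - x2) ** 2 + (y1 - y2) ** 2)
        return len(dists)

    print(distinct_distances_in_grid(17))   # compare with the guaranteed 17 from item 7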

Thursday, November 18, 2010

Eleven Tweets from GASARCH

I don't do tweets-I don't think I have enough interesting 140-char notes to tell people (though that doesn't stop Lance). However, several tweet-worthy notes did come up recently so I'll post them here. Hope they are of interest. Some are more than 140 characters. Oh well.
  1. Godel Prize nominations due soon.
  2. Knuth Prize nominations due soon.
  3. Would you rather win a Godel Prize or a Knuth Prize?
  4. Howard Karloff (no relation to Boris), Phil Gibbons, and Sergei Vassilvitskii are organizing a DIMACS workshop on Parallelism: A 2020 Vision on March 14-16, 2011. They will try to predict the future!
  5. This looks like its from THE ONION, but it is real.
  6. This looks like it is real, but its from THE ONION.
  7. NO progress on my 17x17 problem BUT someone was inspired by it to make a game out of grid colorings. See Brian Hayes's post about it.
  8. UPDATE: 17x17 was solved. See Feb 8, 2012 post. Also see my paper on grid coloring on arXiv.
  9. My colleague Hal Daume has a Blog on NLP (Nat Lang Processing). One of his recent entries is of interest to all academics so I link to it here.
  10. Lance is impressed with Ryan Williams' result, as am I. However, Lance also has some Family Pride in seeing his Uncle Ryan get the result. (I'll let you figure out what that means.)
  11. My brush with fame: I know this guy. He is Phil Resnick: (1) he does NLP in the Linguistics Dept at UMCP, and (2) I graded his papers in Automata Theory when he was an ugrad and I was a grad student (at Harvard).
  12. Private Information Retrieval: A Dilbert Strip and a Wikipedia Entry.
ADDED LATER SINCE IT WAS SENT TO ME LATER: Midwest theory day, Local Arrangements.

Monday, November 15, 2010

Is Ryan's Theorem Only Interesting to Complexity Theorists?

Last week, the complexity theory bloggers Dick, Scott, Luca and myself all heaped praise on Ryan Williams' new circuit lower bound, NEXP not in ACC0. Outside the computational complexity community the reaction has been something like "Wow, those complexity theorists are excited by Ryan's paper."  So let me try to explain why we are excited by the result and perhaps why you should be too.


In the 1980's complexity theorists started looking at circuit complexity. Circuits are ANDs, ORs and NOT gates that feed into each other (no cycles) with input bits on the bottom and an output bit at the top. Every function mapping n bits to one bit can be expressed as a circuit but may require an exponential (2^{O(n)}) number of gates. Every polynomial-time computable function can be expressed by a circuit with a polynomial number of gates. If one could show some problem in NP cannot be computed by polynomial-size circuits then one would have proven P ≠ NP.

Furst, Saxe and Sipser made the first breakthrough in 1981. They showed a simple function (parity) cannot be computed by circuits of polynomial size and constant depth, where depth is the length of the longest chain of gates from the input bits to the output. Yao and Hastad would soon show that you need an exponential number of gates for a constant-depth circuit to compute parity.

I started graduate school in 1985 with Mike Sipser as my advisor. During that first year we saw Hastad's paper and then a new result by a Russian, Alexander Razborov. Razborov showed that no polynomial-size monotone circuit (just AND and OR gates, no NOTs) can compute the clique function. At this point Sipser was sure a proof of P ≠ NP using circuit complexity was just around the corner.

In 1987, Razborov and Smolensky gave a nice extension of the constant-depth bounds. They showed that one cannot compute the parity function with constant-depth polynomial-size circuits even if we augment them with mod3 gates. A mod3 gate outputs 1 if the number of its inputs that are 1 is not a multiple of 3. More generally they showed that for any primes p ≠ q, one cannot compute the modp function with ANDs, ORs, NOTs and modq gates. Their proof broke down if q is not a power of a prime.

That's six major circuit results in six years. We had every expectation that the results would keep coming. But nothing in 1988. Nothing in 1989. Nothing in the 90's. Nothing in the first decade of the 2000's.

"Nothing" is a bit strong: there were several nice lower bounds for uniform and restricted models, and some new characterizations of circuit classes. We also saw arguments that further progress would be difficult: Razborov showed that his techniques for his clique result break down completely with NOT gates, and then, of course, there were natural proofs.

But for 23 years we had no progress on general circuits beyond Razborov-Smolensky. Forget NP, we couldn't even show there were problems in NEXP (problems verifiable in exponential time) that didn't have polynomial-size constant depth circuits with only Mod6 gates. (Nothing special about 6, same holds for any number that is a multiple of at least two distinct primes).

That is until last week when Ryan showed that NEXP cannot have polynomial-size constant-depth circuits with AND, OR, NOT and Modm gates for any m, prime or not.

And that's why we are so excited about this paper.

Thursday, November 11, 2010

A Problem with Wikipedia and the Next Best Thing to Winning a Godel Prize

(There are TWO theory day events in NY this semester: Nov 11, 2010 and Nov 12, 2010. The first one has likely already started as you read this.)

I searched for myself on Wikipedia and I found out something scary. Not about me. About Wikipedia.
  1. One of the hits was to the page Godel Prize Winners. Since I don't recall winning a Godel prize this surprised me. The links to the Godel-prize winning papers Proof Verification and the Hardness of Approximation Problems by Arora, Lund, Motwani, Sudan, Szegedy and Probabilistic checking of proofs: a new characterization of NP by Arora and Safra both point to the copies of these papers on MY PCP website. This is truly frightening. I plan to keep the link up (I suppose now I have to); however, Wikipedia has such a problem with Link Rot that they even have an entry on it! For now. Why did they use my links for those papers? Were they the only copies of the papers that were free online? Doubtful- the authors surely have copies on their websites. My PCP website is NOT well known: I whipped it up for a seminar. I never intended it to be stable or referenced by anyone else. (NOTE to Ryan Williams- I'll be running a seminar on circuit lower bounds next semester. Please put your recent AWESOME paper someplace free and easy to find so that they point to your copy, not mine, when you win a Godel Prize.)
  2. One of the hits was on the page Recursively inseparable sets. Two sets A and B are rec. insep. if they are disjoint and there is no decidable set C that contains A and is disjoint from B. Here is a quote from the entry:
    Let φ be the standard indexing of the partial computable functions. Then the sets {e | φ_e(0) converges and equals 0} and {e | φ_e(0) converges and equals 1} are recursively inseparable (Gasarch 1998, p. 1047).
    This is NOT my result- I think it's folklore. They reference my Survey of Recursive Combinatorics which states the result without proof or reference (my bad). They don't even point to the (free online) paper-they point to the math review of it. (If you know who proved that result then let me know. I plan on changing this entry by correcting the mis-attribution, giving a correct attribution if I have one (else attribute folklore), and including a proof. If you don't know who proved it then you can always look it up on Wikipedia. Oh. Don't!)
  3. One of the hits was on the page The Coppersmith-Winograd algorithm for matrix multiplication in O(n^{2.376}) time. The link to their paper points to my applications of Ramsey Theory website (the algorithm uses 3-free sets which is a concept from Ramsey Theory). This pointer makes some sense since my applications-of-Ramsey website is well known and it seems to be the only free online copy. Even so, seems fragile.
  4. One of the hits was on the page Graham's Number. The link to the paper Ramsey's Theorem for n-Parameter Sets by Graham and Rothschild points to my van der Waerden's Theorem website (not to be confused with my applications of Ramsey Theory website). This is very odd since (1) there is a well known website of Ronald Graham's papers and (2) my vdw website is NOT well known.
I will at some point FIX some of these entries.

The above points to both the strength and weakness of Wikipedia. On the one hand, some of these links are fragile and there are likely better ones out there. On the other hand, for some of them they found a link that worked and went with it. If they had waited around for a better or less fragile link then the entry would not have been made.

One lesson: If you find something online that you want an ecopy of, download it! It may not be there next time. (I made the same point here.)

Wednesday, November 10, 2010

Dr. Scott Goes to Washington

Fellow blogger Scott Aaronson was chosen as one of 19 NSF PECASE awardees and wins a trip to the White House. Congrats to Scott! He received the award "for pushing the frontiers of quantum complexity theory, and for making the insights of theoretical computer science accessible to the general public."

For all those of you who wonder how having a blog could affect your academic career, Scott won the PECASE not despite his having a blog but because of it.

Tuesday, November 09, 2010

A Breakthrough Circuit Lower Bound

On October 8th, when in my graduate complexity course we discussed the 1987 Razborov-Smolensky result that one can't compute the Modp function by constant-depth polynomial-size circuits with AND, OR, NOT and Modq gates for primes p≠q, I noted the limits of our knowledge:
We can't even prove that NEXP (the exponential time version of NP) can't be solved by constant-depth polynomial-size circuits with just Mod6 gates, even without AND and ORs.
Now we can. And by "we" I mean Ryan Williams who just yesterday posted his proof that NEXP cannot be solved in the class ACC0, the class of constant-depth poly-size circuits with AND, OR, NOT and Modm gates for any fixed m. For the larger class EXP^NP, Ryan gets exponential lower bounds. Dick, Scott and Luca already heaped their praise on these new results. No doubt, this is perhaps the most interesting computational complexity result since Reingold and the best progress in circuit lower bounds in nearly a quarter century.

It's not so much the result itself (in class I'll now just say "We don't even know how to prove that NEXP isn't in TC0 (poly-size constant-depth circuits with majority gates)") but an outright example that the proof techniques described in Ryan's STOC paper can be used to get new lower bounds, really opening the door to the first fresh approach to circuits in a long time. This approach converts weak algorithms for solving circuit satisfiability questions into circuit lower bounds. Ryan's proof doesn't use deep mathematical techniques but rather puts together a series of known tools in amazingly clever ways.

Ryan breaks through the natural proofs barrier in an interesting way. We didn't know whether one could have one-way functions described by ACC0 circuits, so it wasn't clear whether natural proofs applied to lower bounds for ACC0. Ryan's paper doesn't break these one-way functions; rather, he avoids the issue by using diagonalization, and so his proof does not fulfill the constructivity requirement of natural proofs. Modifying his proof to make it "natural" would imply showing there aren't ACC0 one-way functions, but that's still open. So there is no inherent reason Ryan's techniques couldn't work on TC0 or larger classes without violating natural proofs.

We're still a long long long way from showing NP does not have arbitrary polynomial-size circuits and P ≠ NP but Ryan has taken the first real baby step in decades.

Monday, November 08, 2010

The FOCS Experience

You can now watch videos of the recent FOCS talks online. Glencora, who I had the pleasure of meeting at the conference, has her favorites. If you watch any, check out the talk of Dan Spielman for no reason other than to see how a talk should be given.

I don't have other recommendations for I go to very few talks at STOC and FOCS. Why should I when the talks are online? 

I expect most of you, whether or not you attended the meeting, will watch few to none of the videos. If you do it will be on in the background while you check email or are busy with other things. You would feel guilty spending a day just watching videos. If you went to the conference you might feel guilty not going to the talks. Though some people spend the time during the talks checking email or being busy with other things.

The transportation revolution before I was born makes it possible for everyone to meet up at multiple conferences every year. The communication revolution since I was born has brought us to the point where there is no more reason to go. We can replicate the activities of a conference from our desktops, a point Bill brought up last week. Many of you go just because you are supposed to give a talk, but you could just record the talk back home.

Going to a conference is an excuse to attend the conference, a justification for missing meetings and office hours and getting others to cover your classes. We pay many dollars for hotel, registration, airfare not because it allows us to attend the conference but because it doesn't allow us not to. 

Thursday, November 04, 2010

A non rigorous proof technique

As a grad student (1980's) I found a book in the library that claimed to solve Fermat's Last Theorem using elementary methods. (This was before Wiles had shown it true using advanced methods.) The book claimed that their proof was likely what Fermat had in mind. I KNEW that it was wrong. Why? Because
If FLT had really been solved then I would know that. (Note- I am not bragging about how much math I knew; I am bragging about how much math I knew of.)
Is that a rigorous proof technique? No, but it is useful and often correct.

(All of the math that I refer to below is done rigorously in my writeup here.) As a Grad student I noticed the following:
  1. Let C(x) be the length of the shortest description of x (the usual Kolmogorov complexity).
  2. It is easy to show that C ≤T HALT.
  3. The only proof I had seen that C was not computable was by contradiction: assume C is computable and then use that to create a string x that had no short description along with a short description of x. This proof did not use HALT at all! It was the only proof I knew of that a set was undecidable that did not use HALT (are there others?)
  4. Hence, from what I knew, it was possible that C was of intermediary Turing degree- neither decidable nor Turing-complete. This would be a natural intermediary Turing degree!! (As opposed to sets that are constructed for the sole purpose of being of intermediary Turing degree.)
But- I knew this could not be the case. How?
If there was a natural intermediary Turing degree then I would know that. (Note- I am not bragging about how much computability theory I knew; I am bragging about how much computability theory I knew of.)
Was my intuition correct? Unfortunately yes- one can show that HALT ≤T C. Too bad- I would have preferred to be wrong and have seen a natural intermediary Turing degree. (The writeup I pointed to above has the classic proof that C is undecidable, the proof that C ≤T HALT and the proof that HALT ≤T C. The proof of the latter I got from Peter Gacs.) (NOTE- I had it as Gac but a commenter pointed out that it is Gacs.)

Tuesday, November 02, 2010

The revolution will be DVRed

(There are TWO theory day events in NY this semester: Thu Nov 11. and Fri Nov 12.)

BILL: Will you be going to the RALLY TO RESTORE SANITY AND/OR FEAR? (See also here.) It will be the event that defines this generation!!!! Just look at all of these pictures that will be taken there! Here are some pictures by Evan Golub who went, took pictures, then went home early to actually watch some of it. And here is a picture that is more suited to the readers of this blog.

DONNA: I'll DVR it.

Is this the wave of the future?
  1. Even though watching a sports event on TV gives a better view and reasonably priced food (as opposed to the stadium which gives a terrible view and has overpriced food-like substances) most people say that being AT the game has a certain je ne sais quoi. At least people who think they know French say that. Will technology get SO good that being at home is preferred? Give the viewer the ability to pick what parts of the field he wants to see? Give him the ability to rewind in real time (already can via DVR)? 3-D? (not for a while). They might pay us to go to the game since an empty stadium looks bad for TV.
  2. Museums: Why go to the Louvre when you can do a virtual reality tour of it which saves you the cost of a trip to France? I think this is possible with today's technology. If not, then certainly soon. There already is a partial virtual tour.
  3. The rally was really crowded and I couldn't see much. Even so, being there had a certain I know not what which made it worth going. However, I'm glad I DVRed it so I can see what really happened. When I tell my great nieces that I was there at the dawn of a new age it will be partially fiction since I'll tell them of what I saw on DVR. Or I'll just show them the recording. You can also currently see it here.
  4. Movie Theaters are facing a problem with people waiting for the DVD to come out rather than go to the theater. For families the economics are insane: buying the movie on DVD (and thus OWNING it) costs less than taking the family to see it once. COUNTERPOINTS: (1) Some people want to get out of the house. (2) Lance told me that I really should see AVATAR in a theater.
  5. With conference talks online will people still go to conferences? (This was already discussed here but it fits today's theme also.)


Will technology make it easier and easier to stay home? I think yes. When that happens you may have some surprises- some Rally has far fewer people than anticipated because people worry it will be too crowded, but then for the next Rally people think it won't be that crowded so it gets more people than anticipated. Rather than count how many people were actually there you might calculate some weighted sum of how many were there, downloaded it, DVRed it, signed up on the Facebook page for it, etc.

Monday, November 01, 2010

By Any Other Name Would Be Just As Hard

In 1973 Donald Knuth searched for a name for the hardest problems in NP. Steve Cook didn't give a name in his paper and Karp called them P-complete. Knuth suggested three names "Herculean", "Formidable" and "Arduous". He sent out a ballot to people in the theory community and also allowed write-in votes.

The results he put in a SIGACT News Article, a fascinating and hilarious read. Of course Knuth's suggestions didn't fly and he got many quite inventive counter-proposals. My favorite: Hard-Ass Problems (Hard As satisfiability).

The winning solution came from a group at Bell Labs (likely including one of their new hires, David Johnson), which you readers should all know as NP-hard and NP-complete. These names did have the advantage that they could generalize to other notions of hardness (like PSPACE-complete).

Knuth muses at the end
NP-hard problems hits lots of people, and that's why I began searching for a special term. In other words, I don't consider it a major goal to invent descriptive terminology for every conceivably interesting question of the type considered here; the major goal is to have a good term to use for the masses, in the one case which experience shows is almost omnipresent. To say NP-hard actually smacks of being a little too technical for a mass audience, but it's not so bad as to be unusable. 
The importance of the P versus NP problem has reached well beyond the theory community despite its name, but I wish Knuth had been more successful in finding a descriptive term. P, NP, NP-hard and NP-complete are technical names that tend to isolate us and confuse others.

Friday, October 29, 2010

Advice on getting connected to the community (guest post by Daniel Apon)

(Guest Post by Daniel Apon.)

As a follow-up to the last post on Do Conferences Build Community? I offer some advice for people to get connected to the community.
  1. Be Confident (And Smile!). It's absolutely important to give a positive first impression when introducing yourself to someone. Making eye contact, smiling, and speaking clearly go a long way toward making yourself an inviting person to have a conversation with. On the other hand, one of the best ways to guarantee an awkward social experience is to act creepy; avoid that at all costs. And most importantly, meet as many people as you can!
  2. Great Researchers Are People Too! Sure, the person you're talking to could have a history of landmark publications dating back to before you even knew what NP-completeness was, but don't forget that everyone you're meeting is a human being (really). Be respectful of course, but remember that they're probably in a similar situation to you -- meeting new people during a coffee break!
  3. In the words of the immortal Fonz: Be Cool. One of the biggest turn-offs will be giving the impression that you want something from them. Go ahead and throw those types of ideas away immediately. If you're interested in the same things, conversation will flow naturally. Don't try to force anything; just let it happen.
(Comment from Bill G: The same advice applies to meeting people in bars, though you probably won't be discussing Fourier transforms over finite fields until your third drink.)

Thursday, October 28, 2010

Do conferences build community? (joint post)

(Joint Post by Daniel Apon and Bill Gasarch. Does doing joint posts build community?)

In GASARCH's post on Will there be a 50th CCC he mentioned that conferences help to build community. One of the comments challenged that. Daniel Apon is very new to the community and William GASARCH is very old to the community, so we decided to do a joint post on the topic: Do Conferences Help Build the Community?
  1. The talks do not build community. The coffee hours and meals should. Do they?
  2. Having people with similar interests all in the same place should build community. Does it?
  3. Does size matter? CCC is only about 100 people so they can all sort-of know each other. MATHFEST has over 1000 so I never saw the same person twice.
  4. I've heard the complaint that older established professors do not bother to talk to younger people in the field. This is bogus in its generality: it varies tremendously from person to person. Also, if an older professor ignores you it might not be that he thinks he is better than you, it could just be that he has no social skills. (YES- there are academic computer scientists who have no social skills!) (I used `he' instead of `he or she' since I have never heard this said about a female established professor.)
  5. Is it worth the money the individual spends going to the conference to build the community? Are there other more cost-efficient ways to build community? I do not know; however, I just want to know, for now, is the current system helping to build community albeit inefficiently.
  6. Bill Gasarch a long time ago and Daniel Apon recently have had very positive conference experiences in terms of getting into the community (see below). We DO NOT claim these experiences are typical. We do not know. But we urge you to share your stories, positive and negative, so that we can get a sense of it. Please be brief, to the point, and not nasty.
    1. Bill Gasarch: I went to a workshop on complexity in 1984 (There was no CCC then) where I met Stuart Kurtz and Ron Book, both of whom were friendly to me. My advisor Harry Lewis introduced me to Alan Selman at STOC (probably 1985). Steve Homer (who I had worked with) introduced me to other people at CCC, including Juris Hartmanis. I met long-time collaborator Richard Beigel at the 1986 CCC. All very positive for getting me into the community.
    2. Daniel Apon: My first conference was STOC 2010, and I bumped into Bill during a coffee break between talks. We had previously been in email contact since he was in charge of handling the travel awards for STOC that year, so it made for an easier ice-breaker. He introduced me to Lance and others. Later, when I went to the Barriers II workshop, I met Aaron Sterling (a regular participant, along with me, on the TCS StackExchange site), and we had dinner together one night. I also had the pleasure of meeting a number of other people between the two trips and talking with them some. Here's a non-exhaustive list (just the first few who come to mind): Eric Allender, Dana Moshkovitz, Scott Aaronson, Andy Drucker, Paul Beame, Anup Rao, and Russell Impagliazzo. My impression of everyone was that they were friendly, open, fun (and smart!) people. It was definitely a positive experience getting the chance to interact with those that I did.

Tuesday, October 26, 2010

FOCS Part II

I'm heading back to Chicago this morning. Dan Spielman gave a special talk in honor of his recent Nevanlinna prize. He gave an amazing talk (as always) about solving the Laplacian systems that come from graphs: basically you put springs at every edge, nail down some vertices and see where the other vertices end up. Dan's and other talks were filmed; be sure to look for them on the FOCS page in the future.
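To make the springs picture concrete, here is a minimal sketch (my own illustration, not code from the talk, using a made-up four-vertex graph and made-up pinned positions): treat each edge as a unit spring, nail a couple of vertices in place, and solve a small Laplacian linear system to find where the free vertices settle.

import numpy as np

# Hypothetical example graph: the path 0-1-2-3 plus the edge 1-3.
edges = [(0, 1), (1, 2), (2, 3), (1, 3)]
n = 4

# Build the graph Laplacian L = D - A.
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1
    L[v, v] += 1
    L[u, v] -= 1
    L[v, u] -= 1

# Nail vertex 0 at position 0.0 and vertex 3 at position 1.0; vertices 1 and 2 are free.
nailed = {0: 0.0, 3: 1.0}
free = [i for i in range(n) if i not in nailed]
b_idx = list(nailed)
x_b = np.array([nailed[i] for i in b_idx])

# At equilibrium the free positions satisfy L_ff x_f = -L_fb x_b.
L_ff = L[np.ix_(free, free)]
L_fb = L[np.ix_(free, b_idx)]
x_f = np.linalg.solve(L_ff, -L_fb @ x_b)
print(dict(zip(free, x_f)))  # where the free vertices end up

The point of the line of work Dan spoke about is, of course, solving such systems in time nearly linear in the size of the graph, rather than by the dense solve used in this toy sketch.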

I spent most of the conference in the hallways talking to people but, as someone pointed out to me, I talked almost entirely to people I already knew. I've heard complaints before that young people feel they can't talk to senior researchers at STOC/FOCS. We don't do that on purpose; we just like to catch up with people we've known for years. But I should try harder to meet the younger crowd.

I had one of those interesting discussions with Ketan Mulmuley on his views on the P versus NP problem (yes, we are from the same city but somehow it's easier to talk at these meetings). Ketan talks about his algebraic geometry approach as a very lengthy process towards a solution. Complexity theorists need to give up ownership of the P v NP problem (can anyone "own" a mathematical problem?) and realize that we need the algebraic geometers to help or even lead us on this journey. Ketan also views the journey as more important than the eventual resolution of P versus NP. The search for a proof of the Riemann Hypothesis has yet to produce one, but no one would say that the effort has been a failure, as great math has come from that line of work. The algebraic geometry path to P v NP will yield exciting work as well.

Monday, October 25, 2010

FOCS Part I

Some thoughts from the FOCS conference in Las Vegas. One result I hadn't seen before that people were excited about: Determinant Sums for Undirected Hamiltonicity by Andreas Björklund, giving an O(1.657^n) algorithm for testing whether a graph is Hamiltonian, beating the O(2^n) bound of Bellman and Held-Karp from the 60's.
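For contrast with Björklund's bound, here is a minimal sketch of the classical O(2^n poly(n)) dynamic program in the Bellman / Held-Karp spirit (my own illustration, not Björklund's algorithm): dp[S][v] records whether some path starts at vertex 0, visits exactly the vertex set S, and ends at v.

def has_hamiltonian_cycle(adj):
    # adj: adjacency matrix (list of lists of 0/1) of an undirected graph
    n = len(adj)
    full = 1 << n
    dp = [[False] * n for _ in range(full)]
    dp[1][0] = True  # the path consisting of vertex 0 alone
    for S in range(full):
        if not (S & 1):        # every subset considered must contain the start vertex 0
            continue
        for v in range(n):
            if not dp[S][v]:
                continue
            for w in range(n):
                if adj[v][w] and not (S >> w) & 1:
                    dp[S | (1 << w)][w] = True
    # A Hamiltonian cycle exists iff some path covering all vertices ends at a neighbor of 0.
    return any(dp[full - 1][v] and adj[v][0] for v in range(1, n))

# Example: the 4-cycle 0-1-2-3-0.
C4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
print(has_hamiltonian_cycle(C4))  # True

Björklund gets below 2^n by entirely different means, roughly by expressing the detection problem as a sum of determinants rather than by enumerating subsets as above.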

I spent much more time hanging out in the halls than attending talks. Much of the hall discussion focused on the Simons Institute, whose letters of intent are due Wednesday. Most major theory groups seem to be putting together a letter; the interest is in which consortiums are forming, i.e., how the theory groups are being partitioned. There were various conjectures about what the Simons Foundation is looking for. For example, will it help or hurt to already have a strong center for theory? What is the right balance of "core theory" and connections to other fields? Should be interesting to see which groups make it to the second round.

Wherever this institute ends up, if run well it will immediately become a major influence in our community. Luca made a good point: even this process, in which we all talked to our deans and other administrators about the proposal, helps to sell the importance of theoretical computer science to academic leaders.

I won't give a blow-by-blow of the business meeting; Suresh already did so on his Twitter feed. I watched Suresh type furiously into his phone two rows ahead of me and saw what he typed appear immediately on my iPhone. Cool.

This will be the last FOCS with paper proceedings. Luca Trevisan asked some questions about moving to a larger PC or allowing PC members to submit but no one took the bait for a discussion.

The biggest issue came from Dmitry Maslov from the NSF. The Algorithmic Foundations program that funds core theory has one of the highest acceptance rates at the NSF. With the complete turnover of NSF leadership, there is a concern that some might think theory is over-funded. The CATCS committee is working to educate the new NSF directors that theory funding is going to strong proposals, but it would still help to submit more proposals to bring the acceptance rate down. Large and small proposals are due soon so submit early and often. If you don't get funding, thanks for taking one for the team.

Friday, October 22, 2010

New York Area Theory Day- Nov 12, 2010

New York Area Theory Day, Organized by: IBM/NYU/Columbia, External sponsorship by: Google, Friday, November 12, 2010

The Theory Day will be held at Courant Institute of Mathematical Sciences, New York University, 251 Mercer Street, Auditorium 109, New York.

For the program see here. For directions, please see here or here (building 46). (ADDED LATER- THE ORGANIZERS SENT ME A NICER PAGE TO POINT TO: here)

My advice: If you CAN GO then GO! If you CAN'T GO then DO NOT GO!

Thursday, October 21, 2010

Will there be a 50th CCC? Will you be there?

At the 25th CCC Juris Hartmanis gave a great talk to celebrate having a 25th. Will there be a 50th? I asked people at the conference. What did they say? Watch the video!

So what do you think? Will there be a 50th CCC? Will you be there?

Will there be conferences?
  1. Comp Sci may grow up.
  2. We may videotape the talks more. Then the NSF will stop paying people to go since you can see the talks anyway, so fewer people will go. This is part of a more general trend in which, as a society, we are sacrificing community for efficiency.
Will there be Complexity theory?
  1. The field will still be vibrant, P vs NP will still be unsolved.
  2. L=RL may be proven.
  3. Some form of the Unique Game Conjecture may be proven or disproven.
  4. If GI is in P we may see that proven within 25 years. If GI is not in P then I doubt we'll see that proven within 25 years.
  5. If Ketan's Geometric Complexity approach begins to work we may change the name to Conference on Applied Algebraic Geometry and Representation Theory.
  6. If we find more and more about what we cannot do and less and less about what we can do, we may change the name to The Barriers Conference.

Wednesday, October 20, 2010

A Note from the Trenches (Guest Post from Michael Mitzenmacher)

Former blogger Michael Mitzenmacher talks about being an area dean and not blogging. In a day for guest posts, over at Geomblog David Johnson wants to know about practical applications of approximation algorithms.

It's been about seven weeks since I've given up blogging, and I have to say, I do miss it. There have certainly been plenty of things I could have written about, from large-scale CS issues (the Simons foundation call for a new institute for the theory of computing, the NRC rankings and the CRA reaction, new people at the NSF, and the movie the Social Network), to more Harvard-centric issues (our intro CS class jumping to over 500 students -- about 200 more than last year, Harvard CS is hiring this year, the Harvard endowment performance (+11%), and the movie the Social Network), to more personal issues (my class for this semester, my take on the CoNEXT PC meeting, my very fun trips to Princeton and UCLA, and why I still haven't seen the movie the Social Network). Each of these could easily have been a post, I'm sure. And I've apparently become terribly accustomed to being able to just announce what I'm thinking (as though everyone should care).

On the other hand, I've been busy. Quite busy. Lots of meetings, lots of answering people's e-mail, lots of solving minor problems, lots of pointing people to the right other people to get problems solved. The start of the academic year is always busy anyway, and my graduate class takes up a fair bit of time even if a good chunk of it is material from previous years, because a good chunk of it is also always new. But the new job as "Area Dean" really sucks up a good bit of time. For the first month, I really don't think I got any research done. This month is better, though much of the research time has been going to revising and finishing off old work rather than new work. Next month I hope it will get better still.

I don't want to say the job sucks. (Well, maybe sometimes it does.) But it does take time, and I'm glad there's a planned exit. As I tell my colleagues, not that I'm unhappy with the job, but only 2.7 more years to go.

On the plus side, there's a lot of positive things going on that I feel like I'm pushing forward. To a large extent, that really seems to be the job: just pushing projects (and the corresponding people) forward, so something gets done. I find things like organizing the class schedule for the next X years and getting a slot to hire don't just happen by themselves, but with the right prodding, they do happen. I'm also blessed with incredibly collegial colleagues, many of whom are going extra miles to make my job easier.

My main management technique is to figure out (or get told) what needs to get done, tell people about it, and then see that it gets done, by me or, hopefully, someone else. I'm getting more used to fixing tasks and delegating them to other people. I also use affirmations constantly. I figure if I tell enough people enough times that something is going to happen, they'll all believe it, and so it will happen. For example, for various reasons for several years we haven't hired new junior faculty. One thing I keep telling people is that after my stint, the question every year at Harvard will not be, "Is CS doing a search this year?", but rather, "What areas is CS focusing its search in this year?" We are doing a search this year; I'm already working to get buy-in on next year's planned search, and it looks very promising; and while it's a bit early to start asking the powers-that-be about year three, I'll start laying the foundation there soon. And see, by telling all of you about it, I'm just in my own positive-thinking (or, alternatively, manipulative) way helping ensure it will happen. Once people accept your basic plan, after that it's just (a lot of) paperwork.

So while I miss blogging, it's been good for me to stop. Besides (desperately) needing the time, it's clear that blogging would give me too much opportunity to speak my mind about things brewing before the plans are all fully baked (to mix cooking metaphors). Quietly building consensus doesn't go well with writing a provocative blog. The payoff, I hope, is that you'll be hearing lots of good things about Harvard CS over the next couple of years. After all, we have to get ready for the next round of NRC rankings. I'll try to fire off the occasional update from the trenches, and as for returning to blogging, we'll see how things look in about 2.7 years.

Tuesday, October 19, 2010

My Trip to Denmark

Last week I had a pleasant short trip to Aarhus, Denmark for the inauguration of the new Center for Research in the Foundations of Electronic Markets. Kevin Leyton-Brown had a more interesting trip to the center.

The Center focuses on real-world pricing mechanisms aided by secure computation, building on the research of Aarhus cryptographer Ivan Damgård. We heard several talks from government agencies and energy companies, seeing how electricity flows through the networks of Scandinavia and Northern Europe. Denmark, with its many windmills, is often an electricity exporter, and messy pricing mechanisms come into play.

Dale Mortensen, the Northwestern Economics professor who won the Nobel prize last Monday, spends his falls in Aarhus, so the Nobel prize was already big news there. When the local dean officially inaugurated the center, he noted that the center had international partners at Northwestern (Nicole Immorlica and myself) and used that fact to brag about Dale being at Aarhus not once but twice. He never mentioned the other international partners. Take that Harvard. Dale himself was nowhere to be found.

At the workshop someone ran one of those weird-rules auctions that at large scale is essentially a lottery. It costs 5 DKK (about US$1) to place each bid, and each bid must be a multiple of 5 DKK. The player with the lowest unique bid pays that bid and gets an iPod Touch. I tried with a random bid. Game theorist Hervé Moulin won the auction. His strategy: bid on every number from 5 to 75 DKK. A seemingly crazy strategy, but with 15 DKK as his winning bid he paid for 15 bids at 5 DKK each plus the 15 DKK bid itself, about 90 DKK, and got the iPod for all of $18.
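Here is a small sketch of the rule with made-up bids (only the 5 DKK entry fee, the multiples-of-5 constraint and Moulin's winning 15 DKK bid come from the story above): it finds the lowest bid that nobody else made and tallies what the cover-everything strategy cost.

from collections import Counter

def lowest_unique_bid(bids):
    # bids: list of (player, amount); returns (amount, player) of the lowest unique bid, or None
    counts = Counter(amount for _, amount in bids)
    unique = [(amount, player) for player, amount in bids if counts[amount] == 1]
    return min(unique, default=None)

# Hypothetical other bidders collide at 5 and 10 DKK; Moulin covers 5, 10, ..., 75.
bids = [("alice", 5), ("bob", 5), ("alice", 10), ("bob", 10)]
bids += [("moulin", b) for b in range(5, 80, 5)]
print(lowest_unique_bid(bids))              # (15, 'moulin')

entry_fee_dkk = 5                           # cost to place each bid
moulin_total = entry_fee_dkk * 15 + 15      # 15 bids placed plus the winning bid itself
print(moulin_total)                         # 90 DKK, roughly US$18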

Monday, October 18, 2010

Benoît Mandelbrot (1924-2010)

Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line.—Mandelbrot, in The Fractal Geometry of Nature.

Benoît Mandelbrot, famous for his study of fractals including the one named after him, passed away on Thursday from pancreatic cancer.

I first heard about Mandelbrot as an undergrad in the early 80's as fractals became the mathematical curiosity du jour. One of the popular screen savers on the Sun computers in the late 80's zoomed in on the Mandelbrot set, revealing copies of the same set inside. One could spend hours watching--this and Tetris wasted many grad student hours in those years before Facebook and Twitter.
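For anyone who never saw those screen savers, here is a minimal escape-time sketch (the standard textbook iteration, nothing from the post itself): a point c is in the Mandelbrot set when the iteration z -> z^2 + c, started at z = 0, stays bounded, and zooming in just means sampling the same test on a finer grid near the boundary.

def escape_time(c, max_iter=50):
    z = 0j
    for k in range(max_iter):
        z = z * z + c
        if abs(z) > 2:      # once |z| > 2 the orbit is guaranteed to escape
            return k
    return max_iter         # treated as "in the set" at this resolution

# Crude ASCII rendering of the region [-2, 0.5] x [-1, 1].
for im in range(10, -11, -2):
    print("".join("#" if escape_time(complex(re / 10, im / 10)) == 50 else " "
                  for re in range(-20, 6)))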

I saw Mandelbrot speak just once in 1996 at a 50th Anniversary of CWI celebration, also the only time I've seen Knuth give a talk. All I remember from the talk were pretty pictures of imaginary fractal mountains on other planets generated by Mandelbrot for some movie--shades of Avatar.

I always thought of fractals as mostly descriptive--yes, fractals appear almost everywhere, but so what? Nevertheless here's to a very uncommon mathematician who literally showed us the beauty of mathematics.

Thursday, October 14, 2010

How hard is this problem?- I ask this non-rhetorically

I recently saw the following puzzle:

Which two numbers come at the end of this sequence? (That is, what are x and y?)
2,4,6,30,32,34,36,40,42,44,46,50,52,54,56,60,62,64,x,y
(NOTE ADDED LATER- I had a typo in this post, the worst kind of typo one could have- I had a 5 instead of a 6. I looked up the original source and they had the same typo. SORRY!)

I could not figure it out. I went to the sequence website, which gave me the answer. Before the web I would not have been able to do this; I might have had to wait until there was a web to look it up on in order to solve it. Or maybe I could have solved it, though having seen the solution I doubt that.

How hard is this problem? How to tell how hard it is? How well known is this puzzle? YOU can help me!
  1. Try to solve it without using any other resources.
  2. Leave a comment saying either I solved it without any help OR I was unable to solve it OR I knew how to solve it since I had already seen it.
  3. Please do not include the solution. I will not post a solution--- if you are curious just type it into the sequence website.
  4. Please do not lie. I want to use this to judge how hard this problem is.

Wednesday, October 13, 2010

Two Candidates for an Ig Nobel in Mathematics

(This is a sequel to my post on the Ig Nobel Prize.)

Two candidates for an Ig Nobel prize in Mathematics. They deserve it for opposite reasons. (They are old results and hence, I assume, do not qualify.) (ADDED LATER- I checked with the Ig Nobel Committee- they DO allow old results to be submitted and have even given the prize out to people who are dead.)
  1. The Banach Tarski paradox deserves an Ig Nobel since it is obviously false.
  2. The Jordan Curve Theorem deserves an Ig Nobel since it is obviously true.

Monday, October 11, 2010

The NSF Takes Shape

As I've mentioned before, the NSF is turning over from top to bottom, especially in computer science. Most of the pieces are now in place, so let's check out the new personnel at the NSF who affect theoretical computer science.

The Senate has now confirmed MIT Dean Subra Suresh as the new NSF Director. Suresh replaces Arden L. Bement, Jr.

Michigan CSE Chair Farnam Jahanian takes over as Assistant Director of CISE (as in assistant to Director Suresh) starting in February. Jeannette Wing went back to Carnegie-Mellon last summer. CISE covers computer science funding at the NSF.

Purdue professor Susanne Hambrusch heads the Division of Computing and Communication Foundations (CCF) which includes theoretical computer science. Sampath Kannan returned to Penn.

The other divisions at CISE have new leadership as well. UCSD Chair Keith Marzullo heads Computer and Network Systems (CNS) and CMU Vice Provost for Research Computing Howard Wactlar heads Information and Intelligent Systems (IIS).

The CCF program directors handling the Algorithmic Foundations proposals from last month are Mitra Basu, Tracy Kimbrel and Dmitry Maslov. The NSF is still working to replace Richard Beigel who specialized in computational complexity and has recently returned to Temple.

What do all these changes mean for US funding for computer science and theory in particular? We'll just have to wait and see.

Thursday, October 07, 2010

Nobel and Ig Nobel Prizes

What is the Ig Nobel prize? To quote the website:
The Ig Nobel Prizes honor achievements that first make people laugh, and then make them think. The prizes are intended to celebrate the unusual, honor the imaginative --- and spur people's interest in science, medicine, and technology.
Some science seems real, though odd: In 2006 the Ig Nobel in Mathematics went to (quoting the website)
Nic Svenson and Piers Barnes of the Australian Commonwealth Scientific and Research Organization, for calculating the number of photographs you must take to (almost) ensure that nobody in a group photo will have their eyes closed.
Some science does not seem real: In 1993 the Ig Nobel in Mathematics went to (quoting the website)
Robert Faid of Greenville, South Carolina, farsighted and faithful seer of statistics, for calculating the exact odds (710,609,175,188,282,000 to 1) that Mikhail Gorbachev is the Antichrist.
There is no specific Ig Nobel prize in computer science. What computer science work deserves an Ig Nobel? What complexity work deserves an Ig Nobel?

Has anyone ever won BOTH an IG NOBEL PRIZE and a NOBEL PRIZE? YES! It just happened! Andre Geim won the 2010 Nobel Prize for Physics and had previously won the 2000 Ig Nobel Prize for Physics. I describe the equipment used in both, but I will not say which one won the Nobel prize and which one won the Ig Nobel prize.
  1. One used Magnets and Frogs.
  2. One used Scotch Tape and Pencils.

Tuesday, October 05, 2010

Partha Niyogi (1967-2010)

University of Chicago Computer Science and Statistics Professor Partha Niyogi passed away on Friday after a battle with brain cancer. Partha worked in machine learning in a number of theoretical and applied areas, particularly memorable for his use of manifold theory in semi-supervised learning.

I knew Partha well from my years at Chicago. It's hard to lose someone who was a close colleague and friend, especially when they die so young. A great loss.

Monday, October 04, 2010

The Annual Fall Jobs Post

The CRA is working on setting guidelines for job deadlines to help out with some of the gridlock in the job market. Many of the top departments have already moved their deadlines for full consideration to early December or November. Keep an eye out and remember to apply early this year.

Possible theory tenure-track faculty jobs at Harvard, Rutgers, TTI-C and Federal University of Rio Grande do Sul (Brazil). For more faculty and postdoc listings: CRA, ACM, Theory Announcements and CCI. Also check out web pages of departments that interest you.

A little early to tell, but this year will likely be similar to last year: a small number of tenure-track positions in TCS and a large number of postdoc positions. Out of necessity almost everyone does a postdoc now, and many people do a second or third as well.

Has the theoretical computer science community actually moved to a postdoc culture, where people are now expected to do a postdoc (or multiple postdocs) before taking a tenure-track position, as in physics, chemistry and biology? When did the field make that jump?

Wednesday, September 29, 2010

NRC Rankings

The NRC "rankings" of Graduate programs was released yesterday. I put up a Google spreadsheet of the CS rankings. Phds.org will also generate rankings using various criteria.

You are not getting an actual ranking from the NRC but rather two ranking ranges: statistics-based (R-ranking for "Regression") and reputation-based (S-ranking for "Survey"). For example, Northwestern CS has a 90% chance of being ranked between 42nd and 73rd (R-ranking) and between 26th and 69th (S-ranking).

Even with these wide ranges there are a number of problems with the CS rankings: no citation information is used, the data is five years old, and much of the information is inaccurate (as the Pontiff complains). A simple sanity check on any ranking of CS departments would list the top four as some permutation of MIT, Berkeley, Stanford and Carnegie-Mellon. Congrats to Princeton for breaking this check.

Why no citation information? The NRC originally used the ISI data, which didn't cover most of the conference proceedings that computer scientists consider their main venues of publication. After some discussions with the CRA, the NRC admitted this was a problem and decided to ignore CS publication data, citing lack of time to find a different approach.

Yesterday the CRA released a statement on the reliability of the CS rankings.
CRA is pleased that the NRC acknowledges there are errors in the data used to evaluate computer science departments and that, in the words of NRC Study Director Charlotte Kuh, “There’s lots more we need to look at for computer science before we really get it right.”
Schools are taking the opportunity to promote their rankings. Northwestern Engineering boasts about how well they did, with some nice charts. We're certainly not the only ones.

Did the NRC fulfill their main mission of giving a valuable alternative to the US News rankings? US News uses a "wisdom of crowds" approach by just surveying department and graduate program chairs. The NRC tried a scientific approach which led to years of delays, many complaints about methodologies and accuracy, and a lack of a true ranking. After all the measures we can feed into a formula, the one thing that draws students and faculty to a department is its reputation. Complain as you will about US News, they do try to capture that one statistic.

Meanwhile what do I say to prospective graduate students who cite the low Northwestern CS numbers? I could mention the problems in the methodology, point to the CRA statement, say the numbers are based on data that predates most of the theory group. More likely I will fall back to that trite but true statement: You shouldn't choose a graduate program for their numbers, but for their people.

Tuesday, September 28, 2010

Review of Lipton's Book-Blog/Review of Goldreich and Arora-Barak/New Book Review Column/Do you want to review?

  1. My review of Lipton's new blog-book is here. It will appear in my SIGACT NEWS at some later time.
  2. Daniel Apon's joint review of Computational Complexity: A Conceptual Perspective by Oded Goldreich and Computational Complexity: A Modern Approach by Sanjeev Arora and Boaz Barak is here. It will appear in my SIGACT NEWS at some later time.
  3. My latest SIGACT NEWS book review column (along with all of my other SIGACT NEWS book review columns) can be obtained from here (Ignore the list of books I want reviewed. See next item.)
  4. HERE is a link to the list of books I want reviewed. If you see a book you want then email me at gasarch@cs.umd.edu with the name of the book and your postal address. We will then work out the details over email: when the review is due, and whether I or the publisher will send you the book. Before volunteering you should read my advice for reviewers. Here is the LaTeX Template.