Thursday, December 29, 2005

Year in Review

Paper of the year goes to Irit Dinur's PCP Theorem by Gap Amplification. We will never teach the PCP theorem the same way again. Concept of the year goes to the Unique Games Conjecture, with its applications to hardness of approximation and metric embeddings. We've also seen the settling of the complexity of Nash equilibrium in matrix games, list decoding better than we had hoped for, and much more.

This was the year that Theory Matters and we welcomed several new bloggers to our community, including Scott Aaronson and D. Sivakumar.

A good year for math in the media. The year started with a series about a crime-solving mathematician and ended with a game show about probability, with both shows continuing into 2006. Walk into any bookstore and you'll see a number of popular books on mathematics like Mario Livio's The Equation That Couldn't be Solved and Stephen Hawking's God Created the Integers. Perhaps the biggest disappointment came from the movie Proof which never had enough traction to get a wide release in the US.

Did I mention the White Sox won the world series?

Thanks to guest bloggers Rahul Santhanam and Ryan O'Donnell, guest posters Boaz Barak, Ron Fagin, Bill Gasarch, Michael Mitzenmacher, Rocco Servedio and Rakesh Vohra, and my first two podcast guests Bill Gasarch and Scott Aaronson. Thanks to all of you for your comments, emails, support and just reading my rambling thoughts.

In Our Memory: George Dantzig, Seymour Ginsburg, Frank Harary, Leonid Khachiyan and Clemens Lautemann.

Have a great New Year's. More fun in 2006.

Tuesday, December 27, 2005

Start Your Engines

Many fields, like mathematics and economics, have a civilized recruiting process. They hold their annual conferences in January, with organized meetings between graduate students and postdocs looking for academic positions and members of faculty recruiting committees from many departments. Some serious weeding is done in both directions, and then only a small number of candidates are considered for positions in each department. The whole hiring season is mostly over in a month or two.

Computer science has no such civilized process. We have no annual meeting that can serve to bring job candidates and recruiters together. So we have a much more haphazard process that starts in earnest in January and doesn't wind down until May or June. We need a better process.

Some advice to the job seekers.

  • Apply early and often. Get your applications out by the end of December even if there is a later stated deadline. Faculty start looking at applications in January and you want your name to be there. Don't let the lack of an announcement, or of any mention of a theory position, deter you from applying to an institution.
  • If you are not sure whether to apply then apply. You don't have any decisions to make until you have two offers in hand.
  • Make a web page that sells you. Make the page visually appealing. Link to all your recruiting material (CV, research and teaching statements) as well as electronic versions of all of your papers. Just as important, remove the embarrassing pictures, at least until you have your offers.
  • Use personal contacts. Contact professors you know and let them know you are job hunting and ask if they know of positions at their school or others.
  • Start working on your job talk now. Make it accessible to a general computer science audience while convincing the theorists you have some meat in your results. Practice the talk with your fellow graduate students and faculty in your department.
  • Be patient. Many positions are tied up for a few months until the top few candidates make some decisions. The market will shake out, just give it time.

Monday, December 26, 2005

Deal or No Deal

A new US game show started last week, Deal or No Deal, hosted by Howie Mandel (the same Howie from this post). A New York Times article describes the game as an exercise in probability.
Twenty-six briefcases are distributed to 26 models before the show begins. Each case contains a paper listing a different dollar amount, from one penny to $1 million. At the start of the game, a contestant chooses one case, which becomes his; he is then allowed to see the sums in six of the remaining cases. After these have been disclosed, a mysterious figure known as the Banker calls the set, offering to buy the contestant's case for a sum derived, somehow, from the cash amounts that are still unrevealed.

The contestant can take the offer and cash out, or move on to the next round, during which he's allowed to open five more briefcases before the Banker's next offer. The second offer might exceed or fall short of the first offer, but it clearly reflects the newly adjusted odds about what the contestant is holding. If the contestant refuses it, he requests to see the contents of four, three, two, and then one more case, with offers from the Banker coming at the end of each round. Each time the contestant can accept and end the game, or proceed to the next round. If he doesn't accept any of the offers, he is left with the sum in his own case.

Is it wise to take a bank offer when it's below the mathematical expectation, as it always seems to be? As the game goes on, the offers asymptotically approach the mathematical expectation; maybe contestants should wait.

If the contestant just wanted to maximize the expected value of his winnings, he should always turn down the Banker, yet many contestants accept the Banker's offer. Are they acting rationally?
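The expected value the contestant is maximizing is easy to track as the game unfolds: it is just the average of the still-unrevealed amounts. A minimal simulation sketch; the dollar list below is illustrative, and the Banker's undisclosed offer formula is deliberately not modeled:

```python
import random

def expected_value(unopened):
    """Expected value of the contestant's case: the average of the
    amounts that could still be inside it."""
    return sum(unopened) / len(unopened)

# Twenty-six amounts from a penny to $1 million (an illustrative list,
# not necessarily the show's exact values).
amounts = [0.01, 1, 5, 10, 25, 50, 75, 100, 200, 300, 400, 500, 750,
           1000, 5000, 10000, 25000, 50000, 75000, 100000, 200000,
           300000, 400000, 500000, 750000, 1000000]

random.seed(0)
cases = amounts[:]
random.shuffle(cases)
my_case, rest = cases[0], cases[1:]   # the contestant keeps one case

# Rounds reveal 6, 5, 4, 3, then 2 of the remaining cases; after each
# round the Banker makes an offer (by an undisclosed formula).
for n_open in (6, 5, 4, 3, 2):
    for _ in range(n_open):
        rest.pop()                    # open (reveal) one case
    unopened = [my_case] + rest
    print(f"{len(unopened)} cases left, EV = {expected_value(unopened):,.2f}")
```

As cases are opened, the expectation swings toward whichever large or small amounts remain unrevealed, which is exactly the information the Banker's offers respond to.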

When the amount of money involved becomes a significant fraction of the contestant's net worth, a contestant becomes risk averse and is often willing to accept a sure amount rather than an expected higher amount.

Economists model this phenomenon using utility theory. A person has a utility function of his net worth, usually with positive first derivative and negative second derivative (think square root or logarithm). He aims to maximize not expected net worth but expected utility, which leads to risk aversion. For example, with a square-root utility you would be indifferent between a guaranteed net worth of 360,000 and a gamble giving you a net worth of 1,000,000 or 40,000 with equal probability.
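The square-root example can be checked directly; a quick sketch of the arithmetic:

```python
import math

def utility(wealth):
    # Square-root utility: concave, so expected utility < utility of expectation.
    return math.sqrt(wealth)

# Guaranteed net worth of 360,000...
sure_thing = utility(360_000)                              # sqrt(360000) = 600
# ...versus a fair coin flip between net worths 1,000,000 and 40,000.
gamble = 0.5 * utility(1_000_000) + 0.5 * utility(40_000)  # 500 + 100 = 600

assert abs(sure_thing - gamble) < 1e-9   # indifferent: both have utility 600

# Yet the gamble's expected monetary value, 520,000, far exceeds 360,000.
ev_of_gamble = 0.5 * 1_000_000 + 0.5 * 40_000
```

The gap between 520,000 and 360,000 is the premium a risk-averse contestant will pay for certainty.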

Economists can't afford to run these experiments at their universities; they can't offer enough money for serious risk averse effects to kick in. But television game shows like this do give us a chance to see risk aversion in action.

Friday, December 23, 2005

All I Want for Christmas is a Proof that P≠NP

For the first time in my lifetime, Christmas and Chanukah land on the same day. So to all my readers, Happy Holidays!

And what do we find under our theory Christmas tree? Our first present is a new randomized polynomial-time algorithm for linear programming from Kelner and Spielman. The card reads

In this paper, we present the first randomized polynomial-time simplex method. Like the other known polynomial-time algorithms for linear programming, the running time of our algorithm depends polynomially on the bit-length of the input. We do not prove an upper bound on the diameter of polytopes. Rather we reduce the linear programming problem to the problem of determining whether a set of linear constraints defines an unbounded polyhedron. We then randomly perturb the right-hand sides of these constraints, observing that this does not change the answer, and use a shadow-vertex simplex method to try to solve the perturbed problem. When the shadow-vertex method fails, it suggests a way to alter the distributions of the perturbations, after which we apply the method again. We prove that the number of iterations of this loop is polynomial with high probability.
Our next present is a list-decodable code from Guruswami and Rudra. Back in October I posted about the Parvaresh-Vardy codes that list-decode a 1-ε fraction of errors using a code of rate O(ε/log(1/ε)). Guruswami and Rudra, for any constant δ>0, achieve rate ε-δ.

Many more presents under the tree, presumably many STOC and Complexity submissions. No proofs (at least correct ones) that P≠NP this year but maybe Santa will come through for us in time for next Christmas.

Wednesday, December 21, 2005

A Second Helping of Numb3rs

I've been catching up, via TiVo, on the second season of Numb3rs, the CBS television series about math professor Charlie Eppes, who uses mathematics to help his brother at the FBI.

Most of the episodes this season do a nice job explaining mathematical concepts including several relating to theoretical computer science (see below), though the connections between the math and the plot get more and more tenuous.

A couple of Numb3rs related web sites give more details on the mathematics described in the show. Texas Instruments created a site giving high school level descriptions and activities on the topics discussed in the show including Voronoi Diagrams, Entropy, Eulerian Tours, Steiner Trees, Error-Correcting Codes, The Art Gallery Problem and Pseudorandom Numbers. Also, a Northeastern professor writes a weblog giving more detailed mathematical explanations of the topics on the show.

In my favorite episode of the season, Convergence, a rival mathematician gives a talk exposing a hole in the proof of the Eppes Convergence, Charlie's best work. This sends Charlie into an introspective phase, questioning whether his FBI work keeps him from doing his real research as a mathematician. He weighs the importance of being a mathematician against being part of a larger world, philosophical issues that many real mathematicians also contemplate.

Tuesday, December 20, 2005

Pulling Out The Quantumness

A recent comment made the mistaken assumption that if NP is in BQP (efficient quantum algorithms) then the whole polynomial-time hierarchy lies in BQP. We do know that if NP is in BPP (efficient probabilistic algorithms) then PH is in BPP. Why the proof doesn't carry over to BQP illustrates an interesting difference between probabilistic and quantum computation.

How do we prove that NP in BPP implies PH in BPP? Let me just show NP^NP ⊆ BPP; the rest is an easy induction. We use the following inclusions.

NP^NP ⊆ NP^BPP ⊆ BPP^NP ⊆ BPP^BPP = BPP

The first and third "⊆" are by assumption, the "=" is by simulation. To get NP^BPP ⊆ BPP^NP we first make the error of the BPP algorithm so small that a randomly chosen string will, with high probability, give the correct answer for every query made on every computation path of the NP algorithm. We can then safely pick a random string r first and then make an NP query that simulates the NP algorithm, using r to answer the BPP queries.
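The error-reduction step relies on standard amplification: repeating a BPP algorithm and taking a majority vote drives the error down exponentially, so a union bound over every query on every path still leaves a tiny total failure probability. A sketch of the arithmetic (the 1/3 base error, 601 repetitions and 2^20 queries are illustrative numbers, not taken from the proof):

```python
from math import comb

def majority_error(p, k):
    """Probability that the majority of k independent runs errs,
    when each run errs independently with probability p."""
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(k // 2 + 1, k + 1))

base = 1 / 3        # standard BPP error bound
runs = 601          # odd number of repetitions, decided by majority vote
amplified = majority_error(base, runs)

# Union bound over, say, 2^20 oracle queries across all NP paths:
total_failure = (2 ** 20) * amplified

assert amplified < 1e-10      # exponentially small after amplification
assert total_failure < 1e-3   # still tiny after the union bound
```

With the failure probability this small, a single random string r works simultaneously for every query, which is exactly what lets us move the BPP oracle outside the NP computation.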

Now suppose we wanted to show that NP in BQP implies PH in BQP. We would just need NP^BQP ⊆ BQP^NP; the rest works as before. As in the BPP case we can make the error of the BQP algorithm very small, but there is no quantum string we can choose ahead of time. In probabilistic algorithms we can pull out the randomness and leave a deterministic algorithm with a random input, but we don't know any way to pull the quantumness, even as quantum bits, out of a quantum algorithm.

Whether NP^BQP ⊆ BQP^NP remains an open question. Either proving it or finding a relativized world where NP^BQP is not contained in BQP^NP would give us a better understanding of the mysterious nature of quantum computation.

Monday, December 19, 2005

Favorite Theorems: First Decade Recap

After I chose my ten favorite theorems in the decades 1985-1994 and 1995-2004, I went back to the first decade in complexity. Here is a recap of my ten favorite theorems in computational complexity from 1965-1974. I wrote the first two lists at the end of their respective decades but for this last one we have over thirty years of perspective. For example, at the time one might have included some of the difficult results that related the complexity of different Turing machine models (number of tapes, number of heads, number of dimensions of tapes and others) that seem less important today.

We have one missing decade, so some more list making next year.

Friday, December 16, 2005

SIGACT News

In the chair's letter of the December SIGACT News, Richard Ladner points to this and other weblogs for discussions about the FOCS Business meeting panel discussion. In case you have come here for that reason, you can find those posts here, here, here, here, here and here.

Also in this month's SIGACT News, Neil Immerman writes a guest complexity column on Progress in Descriptive Complexity (which defines complexity classes using logical structures), Karp publishes his committee report on TCS funding, Tim Roughgarden interviews the STOC best student paper award winner Vladimir Trifonov and much more.

For a mere $18 ($9 for students), you can become a SIGACT member and get SIGACT News in print and online, online access to some other ACM theory publications, reduced conference rates and the good feeling of helping out the theory community. (Full disclosure: I'm currently vice-chair of SIGACT. You should join anyway.)

I'll end with the Quarterly Quote from the latest issue.

I'd like a large order of FiboNachos.

Okay sir, that'll cost as much as a small order and a medium order combined.

Thursday, December 15, 2005

What is PPAD?

Christos Papadimitriou gave an overview talk at the NIPS conference last week and mentioned the new result by Chen and Deng showing that, given the payoff matrix, the complexity of computing a 2-Player Nash Equilibrium is PPAD-complete.

NIPS is mainly an AI conference and some of my Machine Learning friends, who have a good understanding of NP-completeness, went around asking who knew what PPAD-complete meant. They got many blank stares and a few confusing answers.

So what is PPAD? Let's start with TFNP (Total Functions in NP). Consider a polynomial-time computable predicate P(x,y) such that for every x there is at least one y with P(x,y) true. A TFNP function is the problem of, given an input x, finding a y such that P(x,y) holds.

The interesting TFNP functions come from combinatorial theorems that guarantee solutions. One such theorem, behind the Polynomial Parity Argument (PPA), states that in a finite graph consisting of paths and cycles (every node has degree at most 2), the number of endpoints is even.

The class PPAD is defined using directed graphs based on PPA. Formally PPAD is defined by its complete problem (from an earlier post): Given an exponential-size directed graph with every node having in-degree and out-degree at most one, described by a polynomial-time computable function f(v) that outputs the predecessor and successor of v, and a vertex s with a successor but no predecessor, find a t≠s that has no successor or no predecessor.
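This complete problem, often called End-of-Line, is easy to state as code. The sketch below naively follows successors from s; on real instances the graph is exponentially large, so this walk can take exponential time, and PPAD asks whether we can do better. The tiny instance is made up for illustration:

```python
def end_of_line(f, s):
    """Given f(v) -> (predecessor, successor), with None for a missing
    neighbor, defining a graph of in- and out-degree at most 1, and a
    source s with a successor but no predecessor, follow successors
    until reaching a node with no successor.  That node is a valid
    answer t != s (a sink; a second source would also qualify)."""
    v = s
    while True:
        _, succ = f(v)
        if succ is None:
            return v
        v = succ

# Hypothetical tiny instance: the path 0 -> 1 -> 2 -> 3 plus an
# irrelevant 2-cycle on {4, 5}.  Node 0 is the distinguished source s.
edges = {0: (None, 1), 1: (0, 2), 2: (1, 3), 3: (2, None),
         4: (5, 5), 5: (4, 4)}
assert end_of_line(lambda v: edges[v], 0) == 3
```

The parity argument guarantees a solution exists: the path leaving s must end somewhere, so the search problem is total even though no efficient algorithm is known.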

Besides two player Nash, arbitrary k-player Nash and a discrete version of finding Brouwer fixed points are also complete for PPAD.

So how do we explain this Nash equilibrium result to economists and AI folk? Just say that Nash equilibrium for 2-player games has the same complexity as k-player Nash and as finding fixed points, and that we have some evidence that no efficient algorithms exist for such problems.

Wednesday, December 14, 2005

Physicists and Complexity

In my second Complexitycast, Scott Aaronson and I try to answer the question "What should physicists know about computational complexity?" MP3 (21:52, 3.8MB)

Tuesday, December 13, 2005

Intellectuals and Science

Nicholas Kristof writes in a New York Times op-ed column The Hubris of the Humanities (paid subscription required) that the lack of appreciation for science comes not only from the masses but even from the intellectual elite.
The problem isn't just inadequate science (and math) teaching in the schools, however. A larger problem is the arrogance of the liberal arts, the cultural snootiness of, of … well, of people like me – and probably you.

What do I mean by that? In the U.S. and most of the Western world, it's considered barbaric in educated circles to be unfamiliar with Plato or Monet or Dickens, but quite natural to be oblivious of quarks and chi-squares. A century ago, Einstein published his first paper on relativity – making 1905 as important a milestone for world history as 1066 or 1789 – but relativity has yet to filter into the consciousness of otherwise educated people.

Most of the intellectuals I know are well-versed in the sciences, but then that is the company I keep. Still, we expect the well-read computer scientist to have read his Shakespeare, while we don't expect English professors to know diddly about NP-completeness (or calculus, for that matter).

After giving the usual statistics, like that 40% of Americans don't believe in evolution, Kristof ends with the following.

But there's an even larger challenge than anti-intellectualism. And that's the skewed intellectualism of those who believe that a person can become sophisticated on a diet of poetry, philosophy and history, unleavened by statistics or chromosomes. That's the hubris of the humanities.

Sunday, December 11, 2005

UG-Hard

We have seen many exciting papers based on the unique games conjecture, for example that improving the Goemans-Williamson approximation algorithm for Max-Cut would imply P=NP, assuming the conjecture is true.

Many complexity theorists have told me they have doubts about the unique games conjecture, and some outright disbelieve it. What good are results based on a conjecture that complexity theorists won't stand behind?

The unique games conjecture states that a certain unique games problem is NP-complete. Even if unique games is not NP-complete, it still might be difficult to solve, a situation we believe holds for some other famous problems like factoring, graph isomorphism and Nash equilibrium. We've seen hardness results based on reductions from these latter problems, so perhaps we can do the same for unique games.

Most, if not all, of the theorems based on the unique games conjecture actually prove something stronger: they reduce the unique games problem to the approximation problem. For example, improving the Goemans-Williamson bound would yield an algorithm that solves the unique games problem.

Let me suggest the term UG-Hard for the class of problems to which we can reduce the unique games problem. Theorem: Improving the Goemans-Williamson algorithm for Max-cut is UG-Hard. No conjectures necessary.

If the unique games conjecture is true, then UG-hard is the same as NP-hard. If unique games are not solvable in polynomial time, then no UG-hard problem has a polynomial-time algorithm. And as long as unique games lacks a polynomial-time algorithm, finding an efficient algorithm for any UG-hard problem would settle the now major open question of the complexity of unique games.

Thursday, December 08, 2005

Texas Ice and Ribs

This week I am visiting the University of Texas in Austin. Yes, another football school, but they also have a boffo complexity group with Anna Gál, Adam Klivans and David Zuckerman as well as theorists Greg Plaxton, Vijaya Ramachandran and Tandy Warnow.

But what the school can't handle is the weather. Yesterday's high temperature in Chicago was 15°F, the low was 0°F (-18°C). People in Chicago bundled up and went to work as usual. Never in my memory has the University of Chicago closed due to weather.

In Austin, the temperature was a balmy 30°F with a little rain. So they shut down the university and much of the rest of the town. Drivers in Austin cannot handle a little ice. Pathetic.

But the night was not a loss. Adam Klivans and his fiancée Jean took me to their favorite ribs place in Austin, The County Line. When we got there the restaurant had also shut down due to the weather. But Adam made such a good plea to the manager ("My friend came all the way from Chicago for your ribs"), that they gave us, free of charge, a tray full of beef and pork ribs, chicken and sausage (pictured below). We took them back to Adam's place and a good time was had by all.


Photo by Jean Kwon

Wednesday, December 07, 2005

Surprising Results

Guest Post by Bill Gasarch with help from Harry Lewis and Richard Ladner.

What are the surprising results in theory? By surprising we DO NOT mean surprising that they were PROVEN (e.g., PRIMES in P) but surprising that they were TRUE. Some results are surprising because people thought the opposite was true, but some are surprising because people didn't even think in those terms. The latter we call BIG surprises, and I will note which ones are BIG. Here is a list of results that were surprising on some level.

While compiling this list and talking to people one (obvious) thing really struck me: what people consider surprising is very much a function of WHEN they were in graduate school. For example, Median Finding in Linear time was a big surprise at the time, but we now teach it to undergrads who are not impressed. And neither are we.

  1. Gödel's Incompleteness Theorem (1931). REALLY surprised Hilbert and others. BIG
  2. HALT is undecidable. (1931–though in a different form). BIG
  3. Multiplication in subquadratic time (Toom 1963, Cook 1966). Later improved to O(n log n log log n) (Schönhage-Strassen 1971). People did it in O(n^2) for so long! BIG
  4. Matching in Polynomial Time. (Edmonds 1965, Canadian Journal of Mathematics) Poly time seen as important. BIG.
  5. Matrix Multiplication in O(n^2.81) time (less than n^3!!) (Strassen 1969).
  6. The Speed Up Theorem–There are decidable sets that cannot be assigned a complexity intelligently. (M. Blum STOC69, JACM71)
  7. Savitch's Theorem (JCSS 1970): NSPACE(T(n)) ⊆ DSPACE(T(n)^2).
  8. Cook's Theorem. (1971 STOC) EVERY NP-problem is reducible to SAT! (Cook also had CLIQUE–so I've left off Karp's 22 problems). BIG.
  9. Equivalence problem for Regular Expressions with Squaring is EXPSPACE-complete (Meyer, Stockmeyer FOCS72). A 'natural' problem proven NOT to be in P!
  10. Median in Linear time (Blum,Floyd,Pratt,Rivest,Tarjan STOC72, JCSS73).
  11. Super-Exponential Complexity of Presburger Arithmetic (Fischer-Rabin, 1974). A new way of looking at Hilbert's program and its difficulties.
  12. Amortized Union-Find in Θ(n·α(n)), where α is the inverse Ackermann function (Tarjan 1975). Inverse Ackermann comes up naturally! First time?
  13. P vs NP cannot be solved by techniques that relativize (Baker, Gill, Solovay 1975). BIG
  14. Public Key Crypto (Diffie-Hellman 1976, IEEE Trans. Inform. Theory; Rivest-Shamir-Adleman 1978). Establishing a shared key without meeting. In retrospect solved a 2000-year-old problem. BIG.
  15. Permanent is #P complete. (Valiant 1979) BIG–Introduced #P.
  16. Linear Programming in P. (Khachiyan 1979).
  17. Integer Programming with fixed number of variables in P (H.W. Lenstra 1983) Note that the range of the variables is unbounded.
  18. Shortest Path problem on planar graphs BETTER THAN O(n log n) (Frederickson 1983, FOCS). You don't need to sort!
  19. Bit commitment ⇒ SAT in Zero Knowledge (Brassard, Crépeau FOCS86; Goldreich, Micali, Wigderson CRYPTO86, JACM91). If φ ∈ SAT, the prover can convince you that φ ∈ SAT without giving you a clue how to satisfy it. Result did not relativize.
  20. NSPACE(n) = coNSPACE(n) (Szelepcsényi 1987, BEATCS; Immerman 1988, IEEE CCC, SICOMP).
  21. Fixed Parameter Tractability: hard to pin down ONE result. I was surprised at Vertex Cover for fixed k in O(f(k)n^3) via the Graph Minor Theorem. Other candidates are solid evidence that Dominating Set and Clique are NOT fixed parameter tractable, more progress on Vertex Cover (down to something like O(n + (1.34)^k)) and Subgraph Homeomorphism for fixed graphs in O(n^3) (Robertson and Seymour).
  22. Under assumptions, P=BPP (Nisan, Wigderson FOCS 1988, JCSS 1994). People had thought P ≠ BPP. BIG.
  23. PH ⊆ P^#P (Toda FOCS89, SICOMP91). PH is the polynomial hierarchy. #P is REALLY powerful!
  24. IP=PSPACE (Lund-Fortnow-Karloff-Nisan, Shamir FOCS90 JACM 92). Bonus: Proof did not relativize. BIG.
  25. Connection between PCP and Approximation. (Feige,Goldwasser,Lovasz,Safra,Szegedy FOCS91, JACM96). BIG.
  26. Factoring is in QUANTUM P (Shor 1994 FOCS). Natural candidate for QP-P ! BIG.
  27. Limits on what we can prove about circuits (Razborov and Rudich STOC94, JCSS97).
  28. Randomized approximation algorithm for MAX CUT achieving .87856·OPT (Goemans-Williamson 1995).
  29. Grover's quantum O(n^0.5) search algorithm (STOC 1996, Physical Review Letters 1997).
  30. HALT is truth-table reducible to RAND, where RAND is Kolmogorov rand strings (Kummer STACS 96). All the people working on this problem (granted, not many) thought it was false.
  31. Planar TSP has a PTAS. (Arora FOCS97, JACM98). Most people thought false.
  32. Assuming the Unique Games Conjecture, the Max Cut algorithm of GW is optimal unless P=NP (Mossel, O'Donnell, Oleszkiewicz FOCS 2005).
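Item 10, linear-time median finding, remains a classic exercise. Here is a sketch of the Blum-Floyd-Pratt-Rivest-Tarjan median-of-medians selection algorithm, deterministic O(n) comparisons, if with a healthy constant:

```python
def select(a, k):
    """Return the k-th smallest element (0-indexed) of list a in
    deterministic linear time via median-of-medians pivoting."""
    if len(a) <= 25:
        return sorted(a)[k]            # small base case: just sort
    # Median of each group of 5, then the median of those medians as pivot.
    medians = [sorted(a[i:i + 5])[len(a[i:i + 5]) // 2]
               for i in range(0, len(a), 5)]
    pivot = select(medians, len(medians) // 2)
    # Three-way partition around the pivot (handles duplicates).
    lo = [x for x in a if x < pivot]
    hi = [x for x in a if x > pivot]
    n_eq = len(a) - len(lo) - len(hi)
    if k < len(lo):
        return select(lo, k)
    if k < len(lo) + n_eq:
        return pivot
    return select(hi, k - len(lo) - n_eq)

import random
xs = list(range(101))
random.shuffle(xs)
assert select(xs, 50) == 50            # the median of 0..100 is 50
```

The pivot is guaranteed to land in the middle 40% or so of the data, which is what makes the recursion bottom out in linear total work.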

Tuesday, December 06, 2005

How I Became a Theorist

Someone recently asked me how I became a complexity theorist. After all, most high school students don't say they want to be theoretical computer scientists when they grow up. Every one of us has a story. Here's mine.

In high school I wanted to be an engineer. "Using science to help society" read one brochure I saw in junior high. I applied to several engineering programs and matriculated at Cornell.

During my first semester, in an engineering math class, the professor wrote an equation on the board. I raised my hand: "Why is that equation true?"

"We don't do that in this class" the professor responded.

I then dropped out of engineering and transferred within Cornell to the College of Arts and Sciences and started my life as a math major. My mother, convinced I would become an unemployed mathematician, talked me into a double major in math and computer science, which was a relatively easy double major at Cornell.

In my junior year, as part of the CS requirements, I took an introductory theoretical computer science class with Juris Hartmanis. I had found my calling and the rest, as they say, is history.

Monday, December 05, 2005

The Day After

Many of you submitted papers to the Complexity conference by yesterday's deadline; now you should let the world see them. Submit your papers to an archive center like CoRR and/or ECCC. It's never too late to post your STOC submissions as well.

Speaking of the ECCC, Xi Chen and Xiaotie Deng have posted a new paper that apparently shows that 2-player Nash equilibrium has the same complexity as finding fixed points (PPAD-complete), extending the 4-player result I mentioned in October. A bit surprising; most of us thought that 2-player Nash would be easier to solve.

Friday, December 02, 2005

Complexity in the Mid-80's

Luca asked about the topics in the complexity courses I took in 1985 and 1986. I dug up my old course notes and wrote down what was covered. Some more in the intersection than I remembered.

Cornell University CS 682–Theory of Computation
Spring 1985
Juris Hartmanis

Recursion Theorem, Rice's Theorem, Myhill's theorem that all r.e.-complete sets are recursively isomorphic, Kolmogorov complexity, Relativization, arithmetic hierarchy leading to the polynomial-time hierarchy, Ladner's theorem, oracles for P=NP and P≠NP and some other hypotheses, Blum Speed-Up Theorem, Cook's theorem, isomorphism conjecture for NP-complete sets, Mahaney's theorem (Sparse NP-complete sets imply P=NP), Blum's axioms, probabilistic classes, random oracle hypothesis, E=NE iff no sparse sets in NP-P, Karp-Lipton, Union Theorem, Elementary and Primitive recursive functions.

Much of this course revolved around the isomorphism conjecture that Hartmanis felt at the time could lead to a proof that P ≠ NP.

On 2/18/85 Hartmanis stated that whether there is an oracle where PH is infinite was an open question, and on 3/13/85 Hartmanis announced that Yao had found such an oracle, but with no mention of how Yao's proof worked, i.e., no circuit complexity.

UC Berkeley CS 278–Machine-Based Complexity
Spring 1986
Michael Sipser

Turing machines, Cook's Theorem, Savitch's theorem, Path is NL-complete, alternation and PSPACE-completeness, Circuit value is P-complete, P/poly, Oracles and random oracles, Probabilistic classes, Undirected path in RL (now known to be in L), Graph Isomorphism in co-AM, Kolmogorov complexity, using Kolmogorov complexity to show BPP in the hierarchy, Karp-Lipton, Mahaney's theorem, Parallel computation (NC and AC), CFL in NC^2, Parity not in AC^0 (Furst-Saxe-Sipser, 4 lectures with mention of Yao and Håstad's improvements), Connections to oracles and bounds on depth-k circuits, Razborov's proof that CLIQUE does not have poly-size monotone circuits (6 lectures), Barrington's Theorem (bounded-width branching programs = NC^1).

Sipser ended the course talking about his approach to showing NP ≠ co-NP by trying to find a finite analogue of a combinatorial proof that the analytic sets are not closed under complement.

Many of the theorems in Sipser's course were proven after the start of Hartmanis's course, such as graph isomorphism in co-AM and Razborov's theorem.

Thursday, December 01, 2005

The First Saturday in December

This Saturday brings the annual William Lowell Putnam Mathematical Competition. The contest is open to undergraduates at American universities. Any number of students can compete, but each university picks three ahead of time to serve as its team.

As a freshman I did well on practice exams and made the 1981 Cornell University Putnam team. I rewarded their confidence by scoring the median score that year, a zero. Needless to say they didn't put me on the team again and I didn't even bother taking the exam my senior year.

Some members of our community have done very well on national and international math competitions. Some have done not so well but have turned into strong scientists nevertheless. Other people who have excelled at these competitions have not had success as mathematical researchers.

What conclusion can we draw? It takes more than mathematical problem solving to make a good mathematician.