Thursday, December 29, 2005

Year in Review

Paper of the year goes to Irit Dinur's PCP Theorem by Gap Amplification. We will never teach the PCP theorem the same way again. Concept of the year goes to the Unique Games Conjecture, with its applications to hardness of approximation and metric embeddings. We've also seen the settling of the complexity of Nash Equilibrium in matrix games, list decoding better than we had hoped for and much more.

This was the year that Theory Matters and we welcomed several new bloggers in our community including Scott Aaronson and D. Sivakumar.

A good year for math in the media. The year started with a series about a crime-solving mathematician and ended with a game show about probability, with both shows continuing into 2006. Walk into any bookstore and you'll see a number of popular books on mathematics like Mario Livio's The Equation That Couldn't be Solved and Stephen Hawking's God Created the Integers. Perhaps the biggest disappointment came from the movie Proof which never had enough traction to get a wide release in the US.

Did I mention the White Sox won the world series?

Thanks to guest bloggers Rahul Santhanam and Ryan O'Donnell, guest posters Boaz Barak, Ron Fagin, Bill Gasarch, Michael Mitzenmacher, Rocco Servedio, Rakesh Vohra and my first two podcast guests Bill Gasarch and Scott Aaronson. Thanks to all of you for your comments, emails, support and just reading my rambling thoughts.

In Our Memory: George Dantzig, Seymour Ginsburg, Frank Harary, Leonid Khachiyan and Clemens Lautemann.

Have a great New Year's. More fun in 2006.

Tuesday, December 27, 2005

Start Your Engines

Many fields, like mathematics and economics, have a civilized recruiting process. They have their annual conferences in January with organized meetings between graduate students and postdocs looking for academic positions and members of the faculty recruiting committees from many departments. Some serious weeding is done in both directions and then only a small number of candidates are considered for positions in each department. The whole hiring season is mostly over in a month or two.

Computer science has no such civilized process. We have no annual meeting that can serve to bring job candidates and recruiters together. So we have a much more haphazard process that starts in earnest in January and doesn't wind down until May or June. We need a better process.

Some advice to the job seekers.

  • Apply early and often. Get your applications out by the end of December even if there is a later stated deadline. Faculty start looking at applications in January and you want your name to be there. Don't let the lack of an announcement, or of any mention of a theory position, deter you from applying to an institution.
  • If you are not sure whether to apply then apply. You don't have any decisions to make until you have two offers in hand.
  • Make a web page that sells you. Make the page visually appealing. Put links to all your recruiting material (CV, Research and Teaching Statements) as well as electronic versions of all of your papers. Just as important, remove the embarrassing pictures, at least until you have your offers.
  • Use personal contacts. Contact professors you know and let them know you are job hunting and ask if they know of positions at their school or others.
  • Start working on your job talk now. Make it accessible to a general computer science audience while convincing the theorists you have some meat in your results. Practice the talk with your fellow graduate students and faculty in your department.
  • Be patient. Many positions are tied up for a few months until the top few candidates make some decisions. The market will shake out, just give it time.

Monday, December 26, 2005

Deal or No Deal

A new US game show started last week, Deal or No Deal hosted by Howie Mandel (the same Howie from this post). A New York Times article describes the game as an exercise in probability.
Twenty-six briefcases are distributed to 26 models before the show begins. Each case contains a paper listing a different dollar amount, from one penny to $1 million. At the start of the game, a contestant chooses one case, which becomes his; he is then allowed to see the sums in six of the remaining cases. After these have been disclosed, a mysterious figure known as the Banker calls the set, offering to buy the contestant's case for a sum derived, somehow, from the cash amounts that are still unrevealed.
The contestant can take the offer and cash out, or move on to the next round, during which he's allowed to open five more briefcases before the Banker's next offer. The second offer might exceed or fall short of the first offer, but it clearly reflects the newly adjusted odds about what the contestant is holding. If the contestant refuses it, he requests to see the contents of four, three, two, and then one more case, with offers from the Banker coming at the end of each round. Each time the contestant can accept and end the game, or proceed to the next round. If he doesn't accept any of the offers, he is left with the sum in his own case.
Is it wise to take a bank offer when it's below the mathematical expectation, as it always seems to be? As the game goes on, the offers asymptotically approach mathematical expectation; maybe contestants should wait.
If contestants just wanted to maximize the expected value of their winnings they would always turn down the Banker, yet many accept the Banker's offer. Are they acting rationally?
When the amount of money involved becomes a significant fraction of the contestant's net worth, a contestant becomes risk averse and is often willing to accept a sure amount rather than an expected higher amount.
Economists model this phenomenon using utility theory. A person has a utility function of their net worth, usually with first derivative positive and second derivative negative (think square root or logarithm). They aim not to maximize expected net worth, but expected utility which leads to risk aversion. For example, if you had a square root utility then you would be indifferent to having a guaranteed net worth of 360,000 and playing a game that would give you a net worth 1,000,000 or 40,000 with equal chance.
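For the curious, here's a quick back-of-the-envelope script (my own illustration, using the square-root utility and the numbers from the example above, nothing from the show itself):

    import math

    def expected_utility(lottery, utility=math.sqrt):
        # lottery is a list of (net_worth, probability) pairs
        return sum(p * utility(w) for w, p in lottery)

    def certainty_equivalent(lottery):
        # the guaranteed net worth with the same square-root utility as the lottery
        return expected_utility(lottery) ** 2

    lottery = [(1000000, 0.5), (40000, 0.5)]
    print(expected_utility(lottery))      # 600.0, the same utility as a sure 360,000
    print(certainty_equivalent(lottery))  # 360000.0, well below the expectation of 520,000

So a square-root-utility contestant would happily sell the 50-50 shot at a million for anything above 360,000.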
Economists can't afford to run these experiments at their universities; they can't offer enough money for serious risk averse effects to kick in. But television game shows like this do give us a chance to see risk aversion in action.

Friday, December 23, 2005

All I Want for Christmas is a Proof that P≠NP

For the first time in my lifetime, Christmas and Chanukah land on the same day. So to all my readers, Happy Holidays!

And what do we find under our theory Christmas tree? Our first present is a new randomized polynomial-time algorithm for linear programming from Kelner and Spielman. The card reads

In this paper, we present the first randomized polynomial-time simplex method. Like the other known polynomial-time algorithms for linear programming, the running time of our algorithm depends polynomially on the bit-length of the input. We do not prove an upper bound on the diameter of polytopes. Rather we reduce the linear programming problem to the problem of determining whether a set of linear constraints defines an unbounded polyhedron. We then randomly perturb the right-hand sides of these constraints, observing that this does not change the answer, and use a shadow-vertex simplex method to try to solve the perturbed problem. When the shadow-vertex method fails, it suggests a way to alter the distributions of the perturbations, after which we apply the method again. We prove that the number of iterations of this loop is polynomial with high probability.
Our next present is a list-decodable code from Guruswami and Rudra. Back in October I posted about the Parvaresh-Vardy code that can list decode a 1-ε fraction of errors using a code of rate O(ε/log(1/ε)). Guruswami and Rudra, for any constant δ>0, create a code with rate ε-δ.

Many more presents under the tree, presumably many STOC and Complexity submissions. No proofs (at least correct ones) that P≠NP this year but maybe Santa will come through for us in time for next Christmas.

Wednesday, December 21, 2005

A Second Helping of Numb3rs

I've been catching up on my Tivo on the second season of Numb3rs, the CBS television series about math professor Charlie Epps, who uses math to help his brother at the FBI.

Most of the episodes this season do a nice job explaining mathematical concepts including several relating to theoretical computer science (see below), though the connections between the math and the plot get more and more tenuous.

A couple of Numb3rs related web sites give more details on the mathematics described in the show. Texas Instruments created a site giving high school level descriptions and activities on the topics discussed in the show including Voronoi Diagrams, Entropy, Eulerian Tours, Steiner Trees, Error-Correcting Codes, The Art Gallery Problem and Pseudorandom Numbers. Also, a Northeastern professor writes a weblog giving more detailed mathematical explanations of the topics on the show.

In my favorite episode of the season Convergence, a rival mathematician gives a talk finding a hole in the proof of the Epps Convergence, Charlie's best work. This causes Charlie to go through an introspective phase questioning whether his FBI work keeps him away from doing his real research as a mathematician. He considers the importance of being a mathematician while being part of a larger world, philosophical issues that many real mathematicians also contemplate.

Tuesday, December 20, 2005

Pulling Out The Quantumness

A recent comment made the mistaken assumption that if NP is in BQP (efficient quantum algorithms) then the whole polynomial-time hierarchy lies in BQP. We do know if NP is in BPP (efficient probabilistic algorithms) then PH is in BPP. Why the proof doesn't work for BQP illustrates an interesting difference between probabilistic and quantum computation.

How do we prove NP in BPP implies PH in BPP? Let me just do NP^NP ⊆ BPP, the rest is an easy induction. We use the following inclusions.

NP^NP ⊆ NP^BPP ⊆ BPP^NP ⊆ BPP^BPP = BPP

The first and third "⊆" are by assumption, the "=" is by simulation. To get NP^BPP ⊆ BPP^NP we first make the error of the BPP algorithm so small that a randomly chosen string will give, with high probability, the correct answer for every query made on every computation path of the NP algorithm. We can then safely pick a random string r first and then make an NP query that simulates the NP algorithm which will use r to simulate the BPP queries.
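Concretely (my own rough accounting, with the constants chosen loosely): if the NP machine runs in time n^c, then over all its computation paths it asks at most 2^(n^c)·n^c ≤ 2^(n^(c+1)) distinct queries, and repeating the BPP algorithm O(n^(c+2)) times with a majority vote pushes the error on any single query below 2^(-n^(c+2)). A union bound then gives

    \Pr_r\bigl[\text{some query is answered incorrectly using randomness } r\bigr]
        \;\le\; 2^{n^{c+1}} \cdot 2^{-n^{c+2}} \;=\; o(1),

so most strings r work simultaneously for every query.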

Now suppose we wanted to show NP in BQP implies PH in BQP. We just need to show NP^BQP ⊆ BQP^NP, the rest works as before. Like the BPP case we can make the error of the BQP algorithm very small, but we have no quantum string that we can choose ahead of time. In probabilistic algorithms we can pull out randomness and leave a deterministic algorithm with a random input but we don't know any way to pull the quantumness, even with quantum bits, out of a quantum algorithm.

Whether NP^BQP ⊆ BQP^NP remains an open question. Showing NP^BQP ⊆ BQP^NP or finding a relativized world where NP^BQP is not contained in BQP^NP would give us a better understanding of the mysterious nature of quantum computation.

Monday, December 19, 2005

Favorite Theorems: First Decade Recap

After I chose my ten favorite theorems in the decades 1985-1994 and 1995-2004, I went back to the first decade in complexity. Here is a recap of my ten favorite theorems in computational complexity from 1965-1974. I wrote the first two lists at the end of their respective decades but for this last one we have over thirty years of perspective. For example, at the time one might have included some of the difficult results that related the complexity of different Turing machine models (number of tapes, number of heads, number of dimensions of tapes and others) that seem less important today.

We have one missing decade, so some more list making next year.

Friday, December 16, 2005

SIGACT News

In the chair's letter of the December SIGACT News, Richard Ladner points to this and other weblogs for discussions about the FOCS Business meeting panel discussion. In case you have come here for that reason, you can find those posts here, here, here, here, here and here.

Also in this month's SIGACT News, Neil Immerman writes a guest complexity column on Progress in Descriptive Complexity (which defines complexity classes using logical structures), Karp publishes his committee report on TCS funding, Tim Roughgarden interviews the STOC best student paper award winner Vladimir Trifonov and much more.

For a mere $18 ($9 for students), you can become a SIGACT member and get SIGACT News in print and online, online access to some other ACM theory publications, reduced rates at conferences and the good feeling of helping out the theory community. (Full disclosure: I'm currently vice-chair of SIGACT. You should join anyway.)

I'll end with the Quarterly Quote from the latest issue.

I'd like a large order of FiboNachos.

Okay sir, that'll cost as much as a small order and a medium order combined.

Thursday, December 15, 2005

What is PPAD?

Christos Papadimitriou gave an overview talk at the NIPS conference last week and mentioned the new result by Chen and Deng showing that, given the payoff matrix, the complexity of computing a 2-Player Nash Equilibrium is PPAD-complete.

NIPS is mainly an AI conference and some of my Machine Learning friends, who have a good understanding of NP-completeness, went around asking who knew what PPAD-complete meant. They got many blank stares and a few confusing answers.

So what is PPAD? Let's start with TFNP (Total Functions in NP). Consider a polynomial-time computable predicate P(x,y) where for every x there is at least one y such that P(x,y) is true. A TFNP function is the problem of, given an input x, finding a y such that P(x,y) holds.

The interesting TFNP functions come from combinatorial theorems that guarantee solutions. One such theorem, called the Polynomial Parity Argument (PPA), says that any finite graph consisting of lines and cycles (every node has degree at most 2) has an even number of endpoints.

The class PPAD is defined using directed graphs based on PPA. Formally PPAD is defined by its complete problem (from an earlier post): Given an exponential-size directed graph with every node having in-degree and out-degree at most one, described by a polynomial-time computable function f(v) that outputs the predecessor and successor of v, and a vertex s with a successor but no predecessors, find a t≠s that either has no successors or no predecessors.
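To make the complete problem concrete, here is a toy brute-force search for it (the dictionary-based graph interface is my own hypothetical encoding, not the one from any particular paper):

    def end_of_line(succ, pred, s, num_vertices):
        # succ(v) and pred(v) return the successor/predecessor of v, or None.
        # s is a given source: it has a successor but no predecessor.
        # Return some t != s that is missing a successor or a predecessor.
        for t in range(num_vertices):
            if t != s and (succ(t) is None or pred(t) is None):
                return t
        return None  # cannot happen: the source s guarantees another endpoint exists

    # Tiny example: a path 0 -> 1 -> 2 plus a separate cycle 3 -> 4 -> 3.
    succ = {0: 1, 1: 2, 2: None, 3: 4, 4: 3}.get
    pred = {0: None, 1: 0, 2: 1, 3: 4, 4: 3}.get
    print(end_of_line(succ, pred, s=0, num_vertices=5))  # prints 2

On an exponential-size graph this enumeration is hopeless, and nothing essentially better is known; that is exactly what PPAD-hardness is meant to capture.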

Besides two player Nash, arbitrary k-player Nash and a discrete version of finding Brouwer fixed points are also complete for PPAD.

So how do we explain this Nash Equilibrium result to economists and AI folk? Just say that Nash Equilibrium for 2-player games has the same complexity as k-player Nash and as finding fixed points, and that we have some evidence that no efficient solutions exist for such problems.

Wednesday, December 14, 2005

Physicists and Complexity

In my second Complexitycast, Scott Aaronson and I try to answer the question "What should physicists know about computational complexity?" MP3 (21:52, 3.8MB)

Tuesday, December 13, 2005

Intellectuals and Science

Nicholas Kristof writes in a New York Times op-ed column The Hubris of the Humanities (paid subscription required) that the lack of appreciation for science comes not only from the masses but even from the intellectual elite.
The problem isn't just inadequate science (and math) teaching in the schools, however. A larger problem is the arrogance of the liberal arts, the cultural snootiness of, of … well, of people like me – and probably you.

What do I mean by that? In the U.S. and most of the Western world, it's considered barbaric in educated circles to be unfamiliar with Plato or Monet or Dickens, but quite natural to be oblivious of quarks and chi-squares. A century ago, Einstein published his first paper on relativity – making 1905 as important a milestone for world history as 1066 or 1789 – but relativity has yet to filter into the consciousness of otherwise educated people.

Most of the intellectuals I know are well-versed in the sciences, but that is the company I keep. Still, we expect the well-learned computer scientist to have read their Shakespeare while we don't expect English professors to know diddly about NP-completeness (or calculus for that matter).

After giving the usual statistics like 40% of Americans don't believe in evolution Kristof ends with the following.

But there's an even larger challenge than anti-intellectualism. And that's the skewed intellectualism of those who believe that a person can become sophisticated on a diet of poetry, philosophy and history, unleavened by statistics or chromosomes. That's the hubris of the humanities.

Sunday, December 11, 2005

UG-Hard

We have seen many exciting papers based on the unique games conjecture, for example that, if the unique games conjecture is true, improving the Goemans-Williamson approximation algorithm for Max-Cut would imply P=NP.

Many complexity theorists have told me that they have doubts and in some cases outright don't believe the unique games conjecture. What good are results based on a conjecture that complexity theorists won't stand behind?

The unique games conjecture states that a certain unique games problem is NP-complete. Even if unique games is not NP-complete, it still might be difficult to solve, a situation we believe for some other famous problems like factoring, graph isomorphism and Nash equilibrium. We've seen hardness results based on reductions from these latter problems so perhaps we can do the same for unique games.

Most, if not all, of the theorems based on the unique games conjecture actually prove something stronger: they reduce the unique games problem to the approximation problem, e.g., improving the Goemans-Williamson bound would yield an algorithm that solves the unique games problem.

Let me suggest the term UG-Hard for the class of problems to which we can reduce the unique games problem. Theorem: Improving the Goemans-Williamson algorithm for Max-Cut is UG-Hard. No conjectures necessary.

If the unique games conjecture is true, then UG-hard is the same as NP-hard. If unique games are not solvable in polynomial time, then UG-hard problems have no polynomial-time algorithms. And as long as we don't have a polynomial-time algorithm for unique games, finding an efficient algorithm for any UG-hard problem would settle what is now a major open question: the complexity of unique games.

Thursday, December 08, 2005

Texas Ice and Ribs

This week I am visiting the University of Texas in Austin. Yes, another football school, but they also have a boffo complexity group with Anna Gál, Adam Klivans and David Zuckerman as well as theorists Greg Plaxton, Vijaya Ramachandran and Tandy Warnow.

But what the school can't handle is the weather. Yesterday's high temperature in Chicago was 15ºF, low was 0 (-18ºC). People in Chicago bundled up and went to work as usual. Never in my memory has the University of Chicago closed due to weather.

In Austin, the temperature was a balmy 30ºF with a little rain. So they shut down the university and much of the rest of the town. Drivers in Austin cannot handle a little ice. Pathetic.

But the night was not a loss. Adam Klivans and his fiancée Jean took me to their favorite ribs place in Austin, The County Line. When we got there the restaurant had also shut down due to the weather. But Adam made such a good plea to the manager ("My friend came all the way from Chicago for your ribs"), that they gave us, free of charge, a tray full of beef and pork ribs, chicken and sausage (pictured below). We took them back to Adam's place and a good time was had by all.


Photo by Jean Kwon

Wednesday, December 07, 2005

Surprising Results

Guest Post by Bill Gasarch with help from Harry Lewis and Richard Ladner.

What are the surprising results in theory? By surprising we DO NOT mean surprising that they were PROVEN (e.g., PRIMES in P) but surprising that they were TRUE. Some results are surprising since people thought the opposite was true, but some results are surprising because people didn't even think in those terms. The latter we call BIG surprises. I will note which ones are BIG. Here is a list of results that were surprising on some level.

While compiling this list and talking to people one (obvious) thing really struck me: what people consider surprising is very much a function of WHEN they were in graduate school. For example, Median Finding in Linear time was a big surprise at the time, but we now teach it to undergrads who are not impressed. And neither are we.

  1. Gödel's Incompleteness Theorem. (1931) REALLY surprised Hilbert and others. BIG
  2. HALT is undecidable. (1931–though in a different form). BIG
  3. Multiplication in subquadratic time (Toom 1963, Cook 1966). Later improved to O(n log n log log n) (Schonhage-Strassen 1971). People did it in O(n^2) for so long! BIG
  4. Matching in Polynomial Time. (Edmonds 1965, Canadian Journal of Mathematics) Poly time seen as important. BIG.
  5. Matrix Multiplication in O(n^2.81) time (less than n^3!!) (Strassen 1969).
  6. The Speed Up Theorem–There are decidable sets that cannot be assigned a complexity intelligently. (M. Blum STOC69, JACM71)
  7. Savitch's Theorem (JCSS 1970): NSPACE(T(n)) ⊆ DSPACE(T^2(n)).
  8. Cook's Theorem. (1971 STOC) EVERY NP-problem is reducible to SAT! (Cook also had CLIQUE–so I've left off Karp's 22 problems). BIG.
  9. Equivalence problem for Regular Expressions with Squaring is EXP-SPACE complete (Meyer, Stockmeyer FOCS72). A `natural' problem proven to NOT be in P!
  10. Median in Linear time (Blum,Floyd,Pratt,Rivest,Tarjan STOC72, JCSS73).
  11. Super-Exponential Complexity of Presburger Arithmetic (Fischer-Rabin, 1974). A new way of looking at Hilbert's program and its difficulties.
  12. Amortized Union-Find in Θ(n·α(n)), where α is the inverse Ackermann function (Tarjan 1975). Inverse Ackermann comes up naturally! First time?
  13. P vs NP cannot be solved by techniques that relativize (Baker, Gill, Solovay 1975). BIG
  14. Public Key Crypto (Diffie-Hellman, 1976, IEEE Trans Inform Theory; Rivest-Shamir-Adleman, 1978). Establishing a shared key without meeting. In retrospect it solved a 2000-year-old problem. BIG.
  15. Permanent is #P complete. (Valiant 1979) BIG–Introduced #P.
  16. Linear Programming in P. (Khachiyan 1979).
  17. Integer Programming with fixed number of variables in P (H.W. Lenstra 1983) Note that the range of the variables is unbounded.
  18. Shortest Path problem on planar graphs BETTER THAN O(n log n) (Frederickson, 1983, FOCS). You don't need to sort!
  19. Bit commitment⇒SAT in Zero Knowledge (Brassard,Crepeau FOCS86, Goldreich,Micali,Wigderson CRYPTO86, JACM91) if φ in SAT, can convince you that φ in SAT without giving you a clue on how to satisfy it. Result did not relativize.
  20. NSPACE(n) = coNSPACE(n). (1987-Szelepcsenyi-BEATCS, Immerman-1988-IEEECCC, SICOMP).
  21. Fixed Parameter Tractability material—Hard to pin down ONE result. I was surprised at Vertex Cover for fixed k in O(f(k)n^3) via the Graph Minor Theorem. Other candidates are the solid evidence that Dominating Set and Clique are NOT Fixed Parameter Tractable. More progress on Vertex Cover (down to something like n + (1.34)^k + O(1)) and Subgraph Homeomorphism for fixed graphs in O(n^3) (Robertson and Seymour).
  22. Under assumptions, P=BPP (Nisan, Wigderson FOCS 1988, JCSS 1994). People had thought P ≠ BPP. BIG.
  23. PH in P^#P. (Toda FOCS89, SICOMP91) PH is the Polynomial Hierarchy. #P is REALLY powerful!
  24. IP=PSPACE (Lund-Fortnow-Karloff-Nisan, Shamir FOCS90 JACM 92). Bonus: Proof did not relativize. BIG.
  25. Connection between PCP and Approximation. (Feige,Goldwasser,Lovasz,Safra,Szegedy FOCS91, JACM96). BIG.
  26. Factoring is in QUANTUM P (Shor 1994 FOCS). Natural candidate for QP-P ! BIG.
  27. Limits on what we can prove about circuits (Razborov and Rudich STOC94, JCSS97).
  28. Randomized approximation algorithm for MAX CUT achieving 0.87856·OPT (Goemans-Williamson, 1995).
  29. Grover's quantum O(√n) search algorithm (STOC 1996, Physical Review Letters 1997)
  30. HALT is truth-table reducible to RAND, where RAND is Kolmogorov rand strings (Kummer STACS 96). All the people working on this problem (granted, not many) thought it was false.
  31. Planar TSP has a PTAS. (Arora FOCS97, JACM98). Most people thought false.
  32. Assuming the Unique Games Conjecture, the Max Cut algorithm of GW is optimal unless P=NP (Mossel, O'Donnell, Oleszkiewicz, FOCS 2005)

Tuesday, December 06, 2005

How I Became a Theorist

Someone asked me recently how I became a complexity theorist. After all, most high school students don't say they want to be theoretical computer scientists when they grow up. Every one of us has a story. Here's mine.

In high school I wanted to be an engineer. "Using science to help society" read one brochure I saw in junior high. I applied to several engineering programs and matriculated at Cornell.

During my first semester, in an engineering math class, the professor wrote an equation on the board. I raised my hand: "Why is that equation true?"

"We don't do that in this class" the professor responded.

I then dropped out of engineering and transferred within Cornell to the College of Arts and Sciences and started my life as a math major. My mother, convinced I would become an unemployed mathematician, talked me into a double major in math and computer science, which was a relatively easy double major at Cornell.

In my junior year, as part of the CS requirements, I took an introductory theoretical computer science class with Juris Hartmanis. I had found my calling and the rest, as they say, is history.

Monday, December 05, 2005

The Day After

Many of you submitted papers to the Complexity conference by yesterday's deadline; now you should let the world see them. Submit your papers to an archive center like CoRR and/or ECCC. It's never too late to post your STOC submissions as well.

Speaking of the ECCC, Xi Chen and Xiaotie Deng have posted a new paper that apparently shows that 2-player Nash Equilibrium has the same complexity as finding fixed points (PPAD-complete), extending the 4-player result I mentioned in October. A bit surprising; most of us thought that 2-player Nash would be easier to solve.

Friday, December 02, 2005

Complexity in the Mid-80's

Luca asked about the topics in the complexity courses I took in 1985 and 1986. I dug up my old course notes and wrote down what was covered. There was more in the intersection than I remembered.

Cornell University CS 682–Theory of Computation
Spring 1985
Juris Hartmanis

Recursion Theorem, Rice's Theorem, Myhill's theorem that all r.e.-complete sets are recursively isomorphic, Kolmogorov complexity, Relativization, arithmetic hierarchy leading to the polynomial-time hierarchy, Ladner's theorem, oracles for P=NP and P≠NP and some other hypotheses, Blum Speed-Up Theorem, Cook's theorem, isomorphism conjecture for NP-complete sets, Mahaney's theorem (Sparse NP-complete sets imply P=NP), Blum's axioms, probabilistic classes, random oracle hypothesis, E=NE iff no sparse sets in NP-P, Karp-Lipton, Union Theorem, Elementary and Primitive recursive functions.

Much of this course revolved around the isomorphism conjecture that Hartmanis felt at the time could lead to a proof that P ≠ NP.

On 2/18/85 Hartmanis stated that whether there is an oracle where PH is infinite is an open question and on 3/13/85 Hartmanis announced that Yao found such an oracle but no mention of how Yao's proof worked, i.e., no circuit complexity.

UC Berkeley CS 278–Machine-Based Complexity
Spring 1986
Michael Sipser

Turing machines, Cook's Theorem, Savitch's theorem, Path is NL-complete, alternation and PSPACE-completeness, Circuit valuation is P-complete and P/poly, Oracles and Random Oracles, Probabilistic classes, Undirected path in RL (now known to be in L), Graph Isomorphism in co-AM, Kolmogorov complexity, using Kolmogorov complexity to show BPP in the hierarchy, Karp-Lipton, Mahaney's theorem, Parallel Computation (NC and AC), CFL in NC^2, Parity not in AC^0 (Furst-Saxe-Sipser, 4 lectures with mention of Yao and Hastad's improvements), Connections to oracles and bounds on depth-k circuits, Razborov's proof that CLIQUE does not have poly-size monotone circuits (6 lectures), Barrington's Theorem (bounded-width branching programs=NC^1).

Sipser ended the course talking about his approach to showing NP ≠ co-NP by trying to find a finite analogue of a combinatorial proof that the analytic sets are not closed under complement.

Many of the theorems in Sipser's course were proven after the start of Hartmanis' course such as graph isomorphism in co-AM and Razborov's theorem.

Thursday, December 01, 2005

The First Saturday in December

This Saturday comes the annual William Lowell Putnam Mathematical Competition. The contest is open to undergraduates at American universities. Any number of students can compete, but each university picks three ahead of time to serve as that school's team.

As a freshman I did well on practice exams and made the 1981 Cornell University Putnam team. I rewarded their confidence by scoring the median score that year, a zero. Needless to say they didn't put me on the team again and I didn't even bother taking the exam my senior year.

Some members of our community have done very well on national and international math competitions. Some have done not so well but have turned into strong scientists nevertheless. Other people who have excelled at these competitions have not had success as mathematical researchers.

What conclusion can we draw? It takes more than mathematical problem solving to make a good mathematician.

Wednesday, November 30, 2005

The Graduate Complexity Course

After my post on teaching PCPs, a reader questioned the wisdom of spending 6-8 lectures on PCP and asked what topics should be taught in an introductory graduate complexity course. This is not a new issue. In the spring of 1985 I took a graduate complexity course with Juris Hartmanis at Cornell and in the spring of 1986 took a graduate complexity course with Michael Sipser at Berkeley with only the polynomial-time hierarchy in the intersection of the two courses.

Here is my list of the topics and theorems that should be covered in a graduate complexity course. Depending on the school, some of this material is covered in undergraduate courses. More material should be added based on the interests of the instructor.

  • DTIME(f(n)) ⊆ NTIME(f(n)) ⊆ DSPACE(f(n)) ⊆ NSPACE(f(n)) ⊆ ∪_c DTIME(c^f(n)).
  • Basic time and space hierarchies.
  • NSPACE(s(n)) ⊆ DSPACE(s^2(n)) and NSPACE(s(n)) = co-NSPACE(s(n)).
  • The P versus NP problem and NP-completeness. SAT is NP-complete.
  • The polynomial-time hierarchy.
  • Alternation (Alternating polynomial-time = PSPACE).
  • Probabilistic Complexity Classes (BPP, RP, ZPP). BPP ⊆ PH.
  • Counting (PH ⊆ P^#P).
  • The PCP theorem. Even if you don't do the proof, state the theorem and the implications for hardness of approximation.

Tuesday, November 29, 2005

Game Theory Finds Computer Science

Dear Game Theorist/Computer Scientist:

In keeping with our mission "to facilitate cross-fertilization between theories and applications of game theoretic reasoning," Games and Economic Behavior has decided to expand its publications of papers in the interface of game theory and computer science. To this end, we are pleased to announce that the following individuals have accepted our invitation to join the editorial board:

  • Joseph Halpern, Cornell University
  • Michael Kearns, University of Pennsylvania
  • Noam Nisan, Hebrew University
  • Christos Papadimitriou, University of California at Berkeley
  • Moshe Tennenholtz, Technion - Israel Institute of Technology
Submitted papers will be subject to the journal's high standards of evaluation. In particular, they must be of interest and must be readable to researchers in other fields. Given the differences in the publication processes of different fields, GEB is not opposed to publishing expanded versions of papers that were originally published in conference proceedings (provided they satisfy copyright law).

We hope that you see GEB as a venue to reach a broad group of readers for your publications.

Sincerely yours,
Ehud Kalai
Editor GEB

Sunday, November 27, 2005

Chess and Poker

Two very different articles in today's New York Times about battling the decline of interest in chess. In the Op-Ed section, Jennifer Shahade, a recent US women's chess champion, argues that chess should learn lessons from poker.
How can chess save itself? No doubt it would make purists protest, but chess should steal a few moves from poker. After all, in the past few years, poker has lured away many chess masters who realized that the analytical skills they've learned from chess would pay off in online card rooms.

And that's a shame. There are plenty of smart people playing poker (and I love playing it myself), but there's no denying that when it comes to developing mental acuity, chess wins hands down, so to speak. Dan Harrington, a former world poker champion who quit chess because there wasn't enough money in it, laments that poker is thin and ephemeral in comparison.

Meanwhile in the Style section, Dylan Loeb McClain discusses the World Chess Beauty Contest which has the stated goal of raising interest in the game.

Why has chess been undergoing a decline in interest in recent years? Perhaps after Deep Blue defeated Kasparov in 1997 the world views the best chess player as a machine, reducing interest in the game for humans.

And we shouldn't lament poker so. Prime-time coverage of a game that uses probabilities, Bayesian analysis and complex strategies can't be a completely bad thing.

Friday, November 25, 2005

Goodbye Computing Chris

Chris Leonard, Elsevier editor of the CS theory journals, is leaving Elsevier to become head of communities at the digital music service Digimpro. Theory loses a friend in Chris, who talked with many of us about our Elsevier concerns and ran his own weblog at Elsevier (and will continue blogging at a new site). Chris helped push through the free year of Information and Computation.

As he leaves he presents his list of important aspects of the "perfect" CS journal. Many good ideas to think about, though we need to ensure a proper peer review of every journal article.

Thanks for your work for our community Chris and good luck in your new career.

Wednesday, November 23, 2005

Teaching PCPs

Two years ago for the first time, I gave the proof of the PCP (Probabilistically Checkable Proof) theorem in my graduate complexity course. The result, first proved by Arora, Lund, Motwani, Sudan and Szegedy, shows that every language in NP has a proof where a randomized verifier requires only a logarithmic number of coins and a constant number of queries to the proof. The result has helped show hardness of approximation for a large number of optimization problems.

I worked off of Madhu Sudan's lecture notes and spent 8 fifty-minute lectures and still required a considerable amount of hand waving.

This year I used slightly fewer than 6 fifty-minute lectures with much less hand-waving, based mostly on Irit Dinur's new proof of the PCP theorem. The six included a lecture by Jaikumar Radhakrishnan on expander graphs on a day I was out of town. I departed from Dinur's writeup in two ways:

  • I used Radhakrishnan's version of Dinur's gap amplification argument.
  • I used the proof that NP has a PCP using a polynomial number of coins and a constant number of queries in Sudan's notes (Lecture 3, Part II) instead of the long-code test in the appendix of Dinur's paper.
With experience I should be able to cut the number of lectures to five and the expander graphs lecture will help with many other theorems in complexity, not the least of which is Reingold's construction putting undirected graph connectivity in logarithmic space.

If the students already know expander graphs, the proof takes half as much time as before. Thanks Irit. But still that one lecture proof of the PCP theorem remains elusive.

Tuesday, November 22, 2005

Complexity Deadline Approaching

The December 4th paper submission deadline for the Computational Complexity Conference in Prague is fast approaching. Get your papers ready.

Other deadlines: Computational Geometry (Abstracts Nov. 23), Electronic Commerce (Dec. 6), COLT (Jan. 21), ICALP (Feb. 10), SPAA (Mar. 6). Leave a comment if I left out your favorite conference.

The accepted papers for STACS and LATIN have been posted.

Monday, November 21, 2005

Introducing a Speaker

When a scientist visits another university to give a seminar, someone gets assigned as host, who introduces the speaker, makes sure the talk doesn't go too long and keeps the post-talk questions from getting out of hand.

So how do you introduce a speaker? I've seen everything from "Let's start" to a reading of the speaker's CV. A few memorable ones (names have been changed):

  • Jane Smith needs no introduction so I'll introduce myself instead…
  • Last week we had a terrible talk on ABC, today we will learn whether it was the speaker or the area.
  • John Doe is famous for proving the XYZ theorem. Today he will talk about something far less interesting.
Often the host gets much more anxious about the introduction than the speaker does about the talk. The host worries the speaker will get insulted if given the wrong introduction. Just find a couple of nice things to say and you'll be fine.

Don't ask the speaker how he would like to be introduced, as it puts the speaker in an awkward position. Last time my host asked me, I suggested he introduce me as "The person who put the 'W' in AWPP," but he didn't bite.

Saturday, November 19, 2005

Another Satisfied Customer

You attend the University of Chicago for three years, take a few years off and come back to finish your Bachelor's degree in Chemistry. You worked really hard to get that degree but you have trouble finding a job afterwards. What do you do? How about setting small fires in a number of University of Chicago buildings including the elevator in Ryerson Hall, home of computer science.

Luckily no one was hurt and there was very little property damage. Thanks to Nita Yack, our department administrator, and others who quickly put out the fire before it caused any real damage. Thanks also to the university and Chicago police who quickly caught the culprit.

Thursday, November 17, 2005

Relativized Collapse of P and NP

When we teach relativization, we often ask the class for a set A such that PA=NPA. The usual answers we get are an NP-complete A (which doesn't work unless the polynomial-time hierarchy collapses) or a PSPACE-complete A which does work:
PSPACE ⊆ P^A ⊆ NP^A ⊆ NPSPACE = PSPACE
the last equality by Savitch's theorem.

According to Steven Rudich, a student at Carnegie-Mellon suggested using A=K, the halting problem. Does this work? This led to an interesting email discussion between Rudich, Richard Beigel, Lane Hemaspaandra and a few more of us getting cc'd on the messages.

We need to use a reasonably dense encoding of K, such as the set of Java programs with no input that halt. Then in fact P^K does equal NP^K, but the proof is not as obvious as one would imagine. Think about it.

And if anyone knows the original reference to P^K=NP^K, let us know.

Wednesday, November 16, 2005

Cornell versus Intelligent Design

As a Cornell University alum I get the occasional email from the president talking about the great things going on on the Ithaca campus. Today's email came from Hunter Rawlings, the former president, who's minding the store while the university finds a replacement for Jeff Lehman.
This strength and stability of purpose allowed me to use this year's state of the university speech to address a matter I believe is of great significance to Cornell and to the country as a whole, a matter with fundamental educational, intellectual, and political implications. The issue in question is the challenge to science posed by religiously-based opposition to evolution, described, in its current form, as "intelligent design."

This controversy raises profound questions about the nature of public discourse and what we teach in universities, and it has a profound effect on public policy.

I believe the time has come for universities like Cornell to contribute to the nation's cultural and intellectual discourse. We must be willing to take on a broader role as defenders of rational thought and framers of discourse about culture and society. In this spirit, I have asked our three academic task forces, on life in the age of the genome, wisdom in the age of digital information, and sustainability, to consider means of confronting the following questions: how to separate information from knowledge and knowledge from ideology; how to understand and address the ethical dilemmas and anxieties that scientific discovery has produced; and how to assess the influence of secular humanism on culture and society.

Makes me proud to see my alma mater taking a proactive approach to this important debate.

Tuesday, November 15, 2005

The History of RAM

Janos Simon gives a history of RAMs expanding on my recent Favorite Theorems post.

A single paper is like a snapshot of the state of research at one point in time. Research itself is a dynamic, changing, live stream of results and ideas, and a single snapshot often cannot do justice to the richness of the topic. The RAM paper is an excellent summary of definitions and problems, and worth reading today, but, at the time, it seemed to me more like a crystallization of concepts that were "in the air" and a clear and precise summary of important known questions rather than trailblazing exposition of new ideas. The theory of RAMs is fascinating, and I'll try to summarize some of the relevant work that preceded Cook's.

The RAM formulation dates back to von Neumann (after all the "von Neumann architecture" IS a RAM). von Neumann uses the RAM formulation to derive instruction counts for some programs for his first computer. So "unit cost RAMs" were well known from the beginning of computers, and counting the number of operations was known to be important. Knuth was a very strong advocate of the idea of analyzing the running time of algorithms using instruction counts: the first edition of the first volume of The Art of Computer Programming is from 1968.

Theoreticians interested in computability theory have published extensively on RAMs: an example of an early paper is Shepherdson and Sturgis, JACM 1963. It has a bibliography of earlier work. These papers came from two different motivations: one was to find further examples of formalisms equivalent to Turing machines, as a kind of experimental evidence for Church's Thesis (see the book by Brainerd and Landweber for an exposition of a few dozen formalisms—Markov Algorithms, Post Systems, λ-calculus, and so on). The other was to find "better", more realistic theoretical models for real computers.

For example, one of the novel features of the ENIAC was that the program actually resided in the computer's memory (as opposed to outside fixed set of instructions as in the earlier Harvard Mark machines). Much was made of this feature of "stored program" that allows for the program to use itself as data and modify itself on the run, something that was judged to be "good for AI." Of course, the existence of a two-state universal Turing machine is a clear illustration that at a fundamental level of computability there is no difference between "hardware" and "software". Still, there was a great deal of interest to model such "ease of programming" features at a theoretical level. For example, Juris Hartmanis has an elegant result showing that there is a function that can be computed faster on a RASP (random access stored program machine) than on a RAM (Math Systems Theory, 1971).

So "RAM complexity" was alive and well. What made things confusing was that fixed length register RAMs are uninteresting, but if one allows unbounded length registers, it is unclear whether unit cost is a realistic measure, and, if not, what would be reasonable. A natural option is to charge for the length of the register that is effectively used, the log of the value stored. Of course, there is the problem that determining the complexity of an algorithm becomes even harder. Even peskier questions appear if one asks whether addition and multiplication should have the same cost, and if not, should one use the schoolbook (quadratic) cost, or perhaps the Sconhage-Strassen cost? Most researchers opted to use the unit cost, and avoid all these complications.

To make things worse, many algorithms in optimization are expressed naturally in terms of RAMs with real-number registers. Note that fundamental questions about this latter model are still very much open.

To summarize, measuring number of RAM steps as a complexity measure was not a novel idea. What made the Cook paper relevant was exactly the proliferation of RAM measure results. In particular the Stockmeyer-Pratt-Rabin vector machine paper (and the later Hartmanis-Simon multiplication RAMs) as well as RAMs holding reals used in the OR community made it important to be precise about the exact meaning of "number of RAM instructions" measure. The community was quite aware that logarithmic cost was polynomially equivalent to Turing machines, and these papers showed that unit cost likely wasn't. Cook and Reckhow wrote down precisely what was likely a kind of unwritten consensus among the researchers in the area. This was necessary and useful, but it did not "set the stage to make RAMs the gold standard". The stage was already set, people were using RAMs to analyze algorithms, and Cook and Reckhow was a precise and meticulous description of what this meant.

In short, if one wants a picture of what great things got started in the period, the paper is an excellent choice, but, as usual, the actual history is complicated, dynamic, and, I hope, interesting.

Monday, November 14, 2005

Acceptance Rates versus Competitiveness

The acceptance rates at conferences for theoretical computer scientists tend to run higher than acceptance rates at conferences in other areas of computer science. Does this mean that theory conferences are less competitive than their counterparts in other areas? In a word, no.

Researchers look at their papers and, for a given conference, feel they have either a very good chance of acceptance, a possibility of acceptance or only a long shot at acceptance, and they tend to submit only if their paper falls into one of the first two categories.

In non-theory areas like artificial intelligence, the committee must take a subjective look at the papers which means many more papers fall into the second "possibility of acceptance" category. Many more people therefore take the risk and submit their paper because they can't immediately put the paper in that third "long-shot" category. This leads to more submissions and a low acceptance rate.

For theory we do a much better job putting our papers into these categories, as we can self-judge the hardness of the results and have a good feel for the importance of the results to the conference. Theorists can tell when their papers won't have much of a chance of acceptance and will, usually, not waste their own and the program committee's time by submitting these papers to the conference. This leads to a relatively higher acceptance rate at theory conferences.

A similar analysis would also explain why the funding rate for the NSF theory program also tends to run higher than the funding rates of many other programs in CISE.

Saturday, November 12, 2005

The Enemy of the Good

Alice (not her real name) has a STOC submission with Bob and wanted to put the paper on a public archive. Bob insists that the paper not go public until the "exposition is perfect", which if taken literally means never. I told Alice about a phrase my wife liked to use

Don't let perfection be the enemy of the good.

I hesitate to write this post because we far too often have the opposite problem: authors who take their hastily written, deadline-driven conference submissions and just put them out on the web in their messy state. But aiming for that impossible perfection in the exposition means spending considerable time making tiny changes to an abstract that, in most cases, no one would have noticed. That time is better spent in other ways, like doing research for the next paper.

So take a little time to clean up the conference submission but then don't worry about every little detail. As soon as possible make it available for all to see. If there are problems in the exposition, people will let you know (especially if you fail to cite their research) and you can fix the paper accordingly. Psychologically you will feel better getting that conference submission, the one you spent all that hard, concentrated effort on, out of your mind until it (hopefully) gets accepted and you have to work on the proceedings version.

Thursday, November 10, 2005

Favorite Theorems: Defining the Future

October Edition

We end the list of favorite theorems from 1965-74 with two seminal papers by Cook and Reckhow.

Stephen Cook and Robert Reckhow, Time-bounded random access machines, STOC 1972, JCSS 1973.

Stephen Cook and Robert Reckhow, On the lengths of proofs in the propositional calculus, STOC 1974, JSL 1979.

The first paper developed the random-access machine (RAM) for measuring the time complexity of algorithms. "Random" here has nothing to do with probability, it just means we can access the memory in an arbitrary order.

Up to that time the multitape Turing machine was considered the model of choice for complexity, but it only allows memory to be accessed in a sequential fashion. The RAM model allowed quick access to all memory and much more accurately captured how real-world computers operate than previous models did. Most algorithms papers give their time bounds in the RAM model, and the RAM is the gold standard for proving lower bounds.

The second paper had a more delayed effect on complexity. Cook and Reckhow formalize a proof system for tautologies as a polynomial-time computable function f whose range is exactly the set of tautologies. If f(p)=φ, p is said to be a proof of φ. They show that NP=co-NP iff there is some proof system in which every tautology φ has a proof of length polynomial in the length of φ.

This suggests an approach to separating NP from co-NP (which would imply P≠NP): show that no proof system has polynomial-size proofs of all tautologies. In 1985, Haken showed that the pigeonhole principle, encoded as a tautology, does not have polynomial-size resolution proofs. This led to a very active research area on proof complexity that finds lower bounds for various classes of tautologies in different proof systems, though we still remain quite far from proving NP≠co-NP.
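For concreteness, the pigeonhole lower bound is usually stated for refuting the unsatisfiable CNF saying that n+1 pigeons fit into n holes (refuting that CNF is the same as proving the tautology). Here is a quick sketch of how those clauses are generated; the encoding is my own illustration rather than anything from the papers:

    from itertools import combinations

    def php_clauses(n):
        # Clauses of the unsatisfiable CNF PHP_n: n+1 pigeons into n holes.
        # Variable i*n + j + 1 means "pigeon i sits in hole j"; a negative
        # number denotes a negated variable (DIMACS style).
        def var(i, j):
            return i * n + j + 1
        clauses = []
        # every pigeon sits in some hole
        for i in range(n + 1):
            clauses.append([var(i, j) for j in range(n)])
        # no two pigeons share a hole
        for j in range(n):
            for i1, i2 in combinations(range(n + 1), 2):
                clauses.append([-var(i1, j), -var(i2, j)])
        return clauses

    print(len(php_clauses(3)))  # 4 pigeon clauses + 3*6 hole clauses = 22

Haken's theorem says any resolution refutation of these clauses must have size exponential in n.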

Paul Beame and Toni Pitassi have a nice though slightly dated 1998 survey on propositional proof complexity.

Tuesday, November 08, 2005

Negotiating Your Job

An excellent Chronicle article on negotiating your job offer. I fully agree with the article that you lose out considerably by not negotiating and that you should focus on salary more than anything else. An extra $5000 now will grow through the years as salary increases are usually a percentage of the current salary. Universities try to hold down the salary for similar reasons.

Once a department makes an offer, even if it was a tough decision, they then try as hard as they can to get the candidate to accept. Losing a candidate looks bad within the university and in the CS community. Take advantage of this and negotiate hard. Remember that once you agree to a contract your negotiating power goes to zero and remains zero for years to come.

For computer science, the Faculty Salary section of the Taulbee Survey can give you some idea what kind of income to aim for.

After salary, try to negotiate a larger startup package (or, individually, money for summer salary, students and travel), course reductions and any special needs you might have. As a rule, anything is negotiable.

If you have two offers, even if one offer dominates the other you should keep both offers open to be in a better negotiating position. This isn't good for the community as it keeps two positions tied up, but you need to think about yourself. Try to wrap up the negotiations as quickly as possible though so you don't keep jobs away from other people.

And, of course, if you get an offer from Chicago, ignore all of the above and just accept our initial offer immediately.

Monday, November 07, 2005

Games for Girls

This notice came into my inbox last week.
ChicTech, an outreach program of the University of Illinois Department of Computer Science, extends an open invitation for college women to participate in the second annual Games for Girls Programming Competition (G4G).

G4G was conceived in response to research indicating that boys enjoy a relatively greater degree of confidence with computers because they spend more time as children playing computer games. The research suggests that this difference in confidence contributes to the gender imbalance seen in the field of Computer Science.

So the gender imbalance in CS is due to the fact that girls don't play enough computer games. Guess that makes me a bad father for limiting the amount of time my daughters spend on the computer.

I do see at least one of my daughters more confident talking anonymously in a virtual world than in her real classroom. So suppose we had some virtual classrooms where students were represented by avatars (cartoon characters) and nobody knew the mapping from real students to avatar. Then the women (and the men) might be more willing to participate if they felt they had no risk from speaking up. But is this the environment we really want to teach in?

Saturday, November 05, 2005

Bringing Complexity to Your iPod

Welcome to ComplexityCast, the first of a very occasional podcast, an audio version of this weblog. First up, Bill Gasarch and I talk about the P versus NP problem. Download the MP3 (3.4MB, 20 minutes) or subscribe to the Podcast Feed.

Friday, November 04, 2005

Whither to Compete

A guest post by Rakesh Vohra.

Fortnow's post on competitive ratios has prompted a number of people to speculate on the `right' number of people who should engage in competitive analysis. I took Fortnow's post as an invitation to revisit the arguments in favor of doing this kind of analysis. As I see it there are four arguments:

  1. The argument from beauty: Truth is beauty and beauty is truth. The first direction I buy; the second, except in the case of Grecian urns, requires a proof. In any case, I am prepared to accept the aesthetics (or wit, cunning) of competitive analysis as sufficient justification for engaging in competitive analysis. Perhaps competitive analysis is to CS what number theory is to Maths. There are, however, shared aesthetic standards (otherwise the criterion has no bite), so only some kinds of work can be justified in this way. My guess is that the analysis of problems that can be stated with a minimum of fuss, have an immediate appeal and seem simple in the first telling (pretty much what makes for a good number theory problem) meets these criteria. So, TSP, SAT, clique, coloring, max-cut are in. However, the stochastic, dynamic traveling dog and pony problem with piecewise linear costs is out.
  2. The argument of fine distinctions: Not all NP-complete problems are alike. Some really are easier than others. Therefore one would like a device to make distinctions between them. The bounds from competitive ratios appear to do a nice job in this regard. Knowing that a problem can be approximated to within a log factor but not within a constant factor does tell us about its difficulty relative to others. Notice that order of magnitude of the ratio suffices for this purpose and not the best possible ratio.
  3. The argument from ignorance: If we do not know what the distribution of problem instances looks like what choice do we have? The assumption of complete ignorance is an extreme one and can be justified in some but not all cases. If one adopts this justification for engaging in a competitive analysis one must first argue that the assumption of complete ignorance is reasonable to make. One can imagine natural situations where partial information is available. In these cases it seems reasonable to expect error bounds that incorporate such information. Bounds that are data dependent are not as pretty as ones that are independent of the data, but may be more useful.
  4. The beneficial spin-off argument: This is the argument that Vazirani makes. Putting a man on the moon is a quixotic task but we received non-stick frying pans as a by-product. However, we might have had non-stick frying pans by focusing on non-stick frying pans and saved ourselves the trouble of putting a man on the moon. The point is that this argument does not rule out other ways of achieving the same benefits.
The arguments do not provide a universal defense of a competitive analysis but justifications to be used on a case by case basis. I think the question to be asked is this: if an exercise in competitive analysis cannot be defended on aesthetic grounds, reveals/verifies no new and novel distinction, inhabits an environment in which complete ignorance is an unreasonable assumption and has no identifiable beneficial spin-off, why is it being done?

Competitive analysis is now an export business. One of the areas it is being exported to is game theory. This raises a new problem not present in the clean optimization framework. That is, does the notion even make sense when what one is comparing are preferences rather than objective function values?

Thursday, November 03, 2005

Losing the Office Phone

About a month ago I had the phone in my office removed. The number was one digit off from both maternity and a nurse's station at the U of C hospitals and if you Googled "Chicago Ballroom Dancing" my office phone number came up. About 90% of the phone calls were wrong numbers and I had gotten to the point of not answering the phone unless I recognized the Caller ID.

I could have had the number changed but I spend enough time outside the office (at TTI, other universities, working at home) that calling me at the office was not a reliable way to reach me. So I just eliminated the office phone and put my mobile number on my home page and in the university directory.

I don't rack up lots of minutes; we are primarily an email-based community and I get on average about one work-related phone call a week. I made the change not because I want to use the phone less but rather to make myself more accessible. Our community relies on email too much; there are times when the old telephone still comes in handy.

  • Calendar coordination.
  • Convincing someone to do something. It's much harder to say "no" on the phone than on email.
  • Sensitive information. Email leaves an electronic trail and one little typo in the email address can send your scathing comments who knows where.
  • Some people like to use the phone for research. I prefer email because it forces you to think about what to write and you get a record of the discussion. But if there is a technical point you disagree on, a phone call can often quickly resolve the issue.
  • Handling a disagreement, particularly when one or both sides are emotional over the issue. This situation is even better handled in person, if possible.
  • Catching up. At the end of a phone call we often talk about other things going on in our lives. Happens far less in email.
More and more of our community are beginning to use instant messaging for many of these purposes. Also with VOIP services like Skype becoming more popular, the rest of you might lose your office phones sooner than you expect.

Tuesday, November 01, 2005

Competitive Analysis

When you cannot achieve the optimum solution of a problem, how do you measure the performance of an algorithm? If you know the distribution of instances, you can see how well the algorithm performs on average. Most theoretical computer scientists, though, prefer a worst-case analysis that tries to minimize the ratio of the optimal solution to the algorithmic solution. But many algorithms only achieve large ratios that don't seem practical.

Vijay Vazirani defends this competitive analysis of algorithms in the preface of his 2001 book Approximation Algorithms.

With practitioners looking for high performance algorithms having error within 2% or 5% of the optimal, what good are algorithms that come within a factor of 2, or even worse, O(log n) of the optimal? Further, by this token, what is the usefulness of improving the approximation guarantee from, say, factor 2 to 3/2?

Let us address both issues and point out some fallacies in these assertions. The approximation guarantee only reflects the performance of the algorithm on the most pathological instances. Perhaps it is more appropriate to view the approximation guarantee as a measure that forces us to explore deeper into the combinatorial structure of the problem and discover more powerful tools for exploiting this structure. It has been observed that the difficulty of constructing tight examples increases considerably as one obtains algorithms with better guarantees. Indeed, for some recent algorithms, obtaining a tight example has been a paper in itself. Experiments have confirmed that these and other sophisticated algorithms do have error bounds of the desired magnitude, 2% to 5%, on typical instances, even though their worst case error bounds are much higher. Additionally, the theoretically proven algorithm should be viewed as a core algorithmic idea that needs to be fine tuned to the types of instances arising in specific applications.

But still why should a practitioner prefer such an algorithm to a heuristic that does as well on "typical" instances but doesn't have a worst case bound? Our usual argument says that we don't really know what "typical" means and we can promise you something no matter what happens.
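As a concrete illustration (my own sketch, not from the post), here is the textbook factor-2 algorithm for vertex cover: repeatedly grab both endpoints of an uncovered edge. Any cover must contain at least one endpoint of each such edge, so we are never more than twice optimal, no matter what the instance looks like.

    # A minimal sketch of the classic 2-approximation for vertex cover.
    def vertex_cover_2approx(edges):
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                # Neither endpoint is covered yet, so take both.
                # Any cover must contain at least one of u, v.
                cover.add(u)
                cover.add(v)
        return cover

    # Example: a 4-cycle with a chord; the optimum cover {1, 3} has size 2.
    print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]))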

Besides approximation algorithms, theoreticians have taken competitive analysis into other arenas, like comparing on-line versus off-line handling of job requests and auctions that achieve a constant factor of the optimal revenue, where a competitive factor of 2 can mean a serious loss of income.

Sometimes these algorithms will allow themselves to do poorly when the optimum is bad in order to achieve the best ratio. If we truly want to sell these algorithms to the practitioners, should we focus on doing well on the situations they care about and then, only secondarily, worry about the performance in the worst case?

Monday, October 31, 2005

Making Yourself Known

An assistant professor asks
How do I get on program committees and editorial boards?
PC chairs and editors-in-chief usually have several excellent candidates to choose from so you really have to make yourself stand above the crowd. How do you do this?

Prove. Easier said than done, but there is no better way to make yourself known than by proving great theorems. Quality counts more than quantity. Be sure to make your results public, by submitting them to sites like ECCC as well as putting them on your own homepage.

Talk. When you give a talk, take the time to prepare, sell your work, make the talk understandable and audience-appropriate. Someone told me recently they treated every talk like a job talk. Not bad advice.

Meet. Go to workshops and conferences not for the talks but to meet people. Don't just hang out with people from your own university. Skip some sessions, hang out in the hallways and talk to whomever is around. Reconnect with your old colleagues from graduate school and make an effort to meet new people. Have lunch and dinner with people you don't know.

Write. Write up your research well so people enjoy rather than suffer when reading your papers. Put some effort into your introductions and really sell the importance of your work. In addition write a survey paper, write a book, write a weblog. Get others to view you as an expert in the field.

Organize. Organize a workshop, do local arrangements for a conference or other service to the community. I don't recommend this route for assistant professors as it takes considerable time and won't help your tenure case much.

Wait. Be patient. Your time will come.

Sunday, October 30, 2005

List Decoding Beyond Expectations

The recent FOCS conference had two best paper award winners, the Khot-Vishnoi paper I had mentioned in my post on unique games and Correcting Errors Beyond the Guruswami-Sudan Radius in Polynomial Time by Parvaresh and Vardy.

List decoding considers codes where there are too many errors to uniquely decode a string. These algorithms create a short list of all of the possible decodings. Last year I had discussed list decoding in my Favorite Theorems series and said

Guruswami and Sudan give a polynomial-time algorithm that handles what is believed to be the best possible amount of error.
Guruswami and Sudan's algorithm works for the standard Reed-Solomon codes and is likely the best possible for that code. Parvaresh and Vardy develop a new but related encoding to get an improvement in the rate of the code, the ratio of the original message length to the code length. Guruswami and Sudan show how to list decode a 1-ε fraction of errors using Reed-Solomon codes with a rate of O(ε²) while Parvaresh and Vardy achieve the same errors with their code with rate O(ε/log(1/ε)).
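To get a feel for the gap, here is my own back-of-the-envelope comparison, with all hidden constants dropped:

    # Rough comparison of achievable rates when list decoding a 1-eps
    # fraction of errors: about eps^2 for Guruswami-Sudan on Reed-Solomon
    # codes versus about eps/log(1/eps) for the Parvaresh-Vardy codes.
    import math

    for eps in (0.1, 0.01, 0.001):
        gs = eps ** 2
        pv = eps / math.log(1 / eps)
        print(f"eps={eps}: GS rate ~ {gs:.1e}, PV rate ~ {pv:.1e}")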

Friday, October 28, 2005

Algorithms for a Ten-Year Old

Yesterday my fifth-grade daughter, doing her math homework, asked
Is there a faster way to find greatest common factors other than with factor trees?
I live for these days. After I impressed her with Euclid's algorithm she asked
Is there a faster way to factor than with factor trees?
I thought for a while and then just answered "No."
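For anyone who wants to try this at home, a minimal sketch of Euclid's algorithm, which finds greatest common factors without ever factoring:

    def gcd(a, b):
        # Euclid's observation: gcd(a, b) = gcd(b, a mod b).
        while b:
            a, b = b, a % b
        return a

    print(gcd(1071, 462))  # 21, and no factor trees required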

Wednesday, October 26, 2005

Selling Theory

Thanks to Rocco for bringing us the news from FOCS, particularly a comprehensive summary of the business meeting. I am glad to have watched my White Sox in the World Series live and read a recap of the business meeting rather than the other way around.

A number of bloggers including Scott, Suresh, Sariel and physicist David Bacon have weighed in on the big panel discussion on how to generate interest in theoretical computer science. There was a big push for our community to publish in Science and Nature. I have seen more than a couple of rather mediocre CS papers in Science. We need to do more than just send our papers to these journals; we need members of our community on the editorial boards.

The most interesting comments came from science writer Sara Robinson.

There is a perception among science editors that TCS is not what people want to read about: they want stories about health, things that cost a lot of taxpayer dollars, etc.
The New York Times, which Sara Robinson has written for in the past, used to give good coverage to research in theoretical computer science. But its Tuesday section Science Times has moved over the last couple of years from general coverage of science to a focus on medicine, the environment and astronomy. Not just computer science but physics and chemistry get far less coverage than they once did.

What scares me most is what I see from incoming undergraduates. Far more high school students use computers now than, say, ten years ago but far fewer of them know how to program. Computer science is a victim of its own success: by making computers powerful, easy to use and well-connected, we have turned computers into a commodity like cars, with the mystique of computation and complexity lost on the users.

Page Six

Final guest post from FOCS attendee Rocco Servedio.

Well, another FOCS has come and gone. In 50 years -- 100, tops -- we will know which papers from the conference stood the test of time. But meanwhile on to more pressing matters. Last week Lance promised that I would provide "all the gossip from the conference;" I'd hate to disappoint, so here goes. All names have been changed to protect the guilty, and pronouns should not be used to infer gender. Presenting...

"THE THEORY TATTLER"

  • POTTED PROFESSOR: WHICH thirsty theorist drank so much beer at the business meeting that his PhD students had to help him back to his room? The greedy guzzler was next spotted Monday afternoon nursing a mug of black coffee in the back row and wincing at microphone feedback.

  • DINNER DILEMMA: WHICH graph theory guru created a scheduling snafu when he separately told two rival gangs of theorists that he'd "meet you in the lobby in 15 minutes?" Let's hope his administrative assistant at Famous University manages things better when he's on his home turf.

  • ENOUGH ALREADY! WHICH logorrheic logician went so far over time that he "practically had to be dragged off the podium?" Our sources say the session chair was scant seconds from pulling the projector plug when the babbling bore finally zipped it.

  • HEARTBREAKER: WHICH complexity theorist Casanova has a love life that's more complicated than the proof of the Parallel Repetition Theorem? It seems there's no lower bound on this cheating cad's bad behavior.

Sadly, as you've probably guessed, none of these things actually took place (or if they did I didn't know about it; if so that's even sadder). In all seriousness, thanks to Lance for letting me post these last few days; it was fun, especially the chance to branch out into fiction-writing at the end.

Tuesday, October 25, 2005

Knuth Prize

The 2005 Knuth Prize was awarded to Mihalis Yannakakis of Columbia University. Mihalis's Knuth prize lecture was on "Probability and Recursion."

Monday, October 24, 2005

FOCS Business Meeting

Notes from the FOCS 2005 business meeting, reported by Rocco Servedio.

  • Yuengling, Sam Adams, Aspen Edge (low-carb).

  • Local arrangements: There were 144 non-student registrants and 141 students for a total of 285 registered attendees. Thanks to Avrim Blum and Anupam Gupta for a job well done on local arrangements. An interesting fact: doing local arrangements ourselves saves about $100/person on registration fees.

  • PC report: There were 268 submissions. 7 papers were retracted (an all-time high?); two of these were because of bugs found by the PC and five were initiated by the authors. Authors are encouraged to submit correct papers. There are 62 papers in the proceedings; 3 pairs of papers were merged (these papers got extra space in the proceedings). Distribution of authors (multiple papers means you get counted multiple times): 121 from the U.S.A., 23 from Israel, 6 from Canada, 3 each from Denmark and Italy, 2 each from Germany, India, the Czech Republic and Hungary, and 1 each from Poland, the Netherlands and Japan. The average number of authors per paper is 2.64 (or 2.48 depending on how you count merged papers). There were 7 single-author papers.

  • The two papers that were selected for Best Paper awards are "The Unique Games Conjecture, Integrality Gap for Cut Problems and Embeddability of Negative Type Metrics into \ell_1" by Subhash Khot and Nisheeth Vishnoi, and "Correcting Errors Beyond the Guruswami-Sudan Radius in Polynomial Time" by Farzad Parvaresh and Alexander Vardy.

  • Two papers shared the Machtey Award for the best paper authored solely by students. These were "On the Complexity of Real Functions" by Mark Braverman and "On the Complexity of Two-Player Win-Lose Games" by Tim Abbott, Daniel Kane, and Paul Valiant.

  • As all readers of this weblog know, the next CCC (Computational Complexity Conference) will be held in Prague from July 16-20, 2006. The submission deadline is December 4.

  • LICS 2006 will be held in Seattle in August 2006 as part of the Federated Logic Conference.

  • FOCS 2006 will be held in Berkeley; Sanjeev Arora will be PC chair.

  • Following an entertaining "Star Wars" themed bid, it was decided that FOCS 2007 will be held in Providence.

  • STOC 2006 will be held May 20-23, 2006 in Seattle. The submission deadline is November 3 (so stop reading this weblog and get back to work).

  • SPAA 2006 will be held July 30-August 2, 2006 in Cambridge, MA.

  • There was a panel discussion on "Exciting the public about (theoretical) computer science." The panelists were Bernard Chazelle, Dick Lipton, Ivars Peterson (writes about math and computer science for Science News), Sara Robinson (freelance writer in math and CS), Steven Rudich, and Eva Tardos; Avrim Blum moderated the discussion. A few fragmentary snapshots from the discussion:

      Chazelle: Computing has never been more important, and never been more misunderstood. We are not doing as good a job of getting our work into the public eye as other fields such as physics. If the broader scientific community comes to use algorithms as a conceptual framework for the problems they are dealing with, we will have done well.

      Lipton: We have lots of really profound and interesting intellectual challenges; one way to excite the public is to talk to them about these fundamental questions.

      Rudich: How do we take what we are doing and translate it into problems that people can relate to and care about? We have a million forms of encoding and should be able to do this; everyone can relate to the problem of trying to pack items into a suitcase of limited size.

      Tardos: Whatever you do, it is probably possible to explain it to the public. There is an awful lot of stuff we do that is really not that hard to explain. A straw poll of the audience showed that very few people in our community have ever published in Science or Nature; it would be good if this could change.

      Peterson: Publicity takes effort. The American Chemistry Council is spending twenty million dollars on advertising to sell the importance of research in chemistry. Astronomy often gets the front page of The New York Times; this is because of carefully orchestrated arrangements behind the scenes. The ACM, SIAM, IEEE do no publicity that I (Peterson) am aware of as a journalist. To get into the media: publish in Science and Nature. Lay language summaries and images are provided to the media a week in advance of each issue. There is always a Nature story in the newspaper on Thursday and a Science story on Friday. For newspaper coverage, one writer or a very small group can make a difference.

      Robinson: Even all the approaches suggested above will have only a limited effect. Two reasons for this: (1) Theoretical computer science is hard to understand for the lay public and for reporters (and, as one audience member shouted out, for us). It is easier to write about global warming or why the coyotes are multiplying. (2) There is a perception among science editors that TCS is not what people want to read about: they want stories about health, things that cost a lot of taxpayer dollars, etc. Perhaps we should explore new models such as a dedicated math/science news agency?

      (anonymous science writer audience member): "People like dinosaurs, asteroids, and things coming out of the ground...very little of what you guys have is concrete."

  • Finally, Dick Karp gave a report on behalf of the SIGACT committee on funding for theoretical computer science. The main goal of the committee is to improve stature and support of TCS within NSF. Based on a sample of 90 TCS investigators receiving funding between 2000 and 2004, 23% of funding came from TCS Foundations of Theoretical Computer Science and 55% came from ITR grants (now terminating). The average number of grants/year received per investigator was 2.4, and the median grant size per investigator per year was $70K. The 2005 TCS budget is about $6M. Some concrete items on the agenda for the future are to make a well-documented case that TCS is underfunded and to move TCS up a level in the CCF hierarchy.
Sunday, October 23, 2005

    First day of FOCS

    A guest post from FOCS attendee Rocco Servedio.

    Thanks to Lance for giving me this opportunity to fill in. I'm in Pittsburgh for FOCS, which started today and runs through Tuesday. Unfortunately because of travel constraints I couldn't make it to the Frieze Fest or FOCS tutorials that took place yesterday.

    Pittsburgh is a city which sometimes gets a bad rap, but I've always enjoyed coming here. Besides the obvious -- at least for this blog's readers -- attraction of CMU, there are lots of neat things that you can't find anywhere else. I personally like the National Aviary, and the Carnegie museums are fun too. At the Warhol museum you can watch movies that make even the driest FOCS talk seem like a Jerry Bruckheimer production.

    No time for museums today, though; the FOCS schedule is full of cool-looking talks, and tonight there is the business meeting and a panel discussion on "Exciting the public about (theoretical) Computer Science." I'll give a report on the business meeting and panel discussion later.

    Thursday, October 20, 2005

    Math in Complexity

    Another guest post by Bill Gasarch
    Combinatorics is a branch of Computer Science
    First episode, Second Season of NUMB3RS
    I could write a post on how stupid this statement is, but I'd rather ask the following better question:
    How much do we use Combinatorics in Complexity Theory? Do we use anything else?
    Towards this end I looked through the COMPLEXITY 2005 proceedings and assigned to each paper the sort of math I thought it used. I'm sure some of these can be argued. But here are the results:
    • Probability: 8
    • Combinatorics: 6
    • Linear Algebra: 6
    • Abstract Algebra: 4
    • Number Theory: 2
    • Diagonalization and Simulation: 2
    • Calculus: 1
    • Dynamic Programming: 1
    • Philosophy: 1
    OBSERVATIONS:
    1. Very little continuous math. Even the Linear Algebra is usually over a finite field.
    2. Very little Diagonalization/Simulation. This is part of the general trend away from the methods of logic. I suspect there was a lot more of that at the first STRUCTURES back in 1986.
    3. More abstract algebra than I would have thought, but I don't know if this is unusual or not.

    Wednesday, October 19, 2005

    Football Schools

    I am spending most of this week at the University of Nebraska for a talk and a workshop. What does Nebraska have to do with Notre Dame, where I visited last month? Both are traditional football powerhouses, places where the sport dominates the school and more Americans know these universities for their teams than for their academics. I've heard most universities actually lose money on their football programs (though Notre Dame is a noted exception). Still schools use football to attract students, raise school spirit and bring back alumni and their money. In many states the highest paid public employee is the football coach. Notre Dame attracted a star professor by promising him season tickets "between the 45s" and Nebraska smartly has an admissions office inside the stadium.

    Many foreigners find the level of US college athletics surprising but having grown up in this country I was shocked to find out European universities, for the most part, do not play each other in any sport, not even soccer. Where's the fun in that?

    My next university trip will be to the University of Rochester, not a football powerhouse and in the same Division III wannabe-ivy league as the University of Chicago. Chicago used to be a football powerhouse, part of the Big Ten, and had the first Heisman trophy winner, Jay Berwanger, in 1935. But then the new president Robert Maynard Hutchins, who is claimed to have said "Whenever I feel like exercising, I lie down until that feeling goes away," eliminated the athletic programs and focused the university on academics. Only in the past few decades have they even had Division III teams.

    With all this traveling, I won't be going to FOCS. But don't worry, I have lined up a special guest blogger to bring us all the gossip from the conference.

    Tuesday, October 18, 2005

    Finding Nash has Same Complexity as Finding Fixed Points

    In a new paper, Daskalakis, Goldberg and Papadimitriou show that finding a Nash Equilibrium in matrix games with four or more players is complete for the search class PPAD. PPAD is best described by the problem: Given an exponential-size directed graph with every node having in-degree and out-degree at most one, described by a polynomial-time computable function f(v) that outputs the predecessor and successor of v, and a vertex s with a successor but no predecessor, find a t≠s that has either no successor or no predecessor. The underlying combinatorial statement that such a t exists is used to prove Sperner's Lemma, which can be used to prove the Brouwer fixed point theorem, which in turn can be used to prove the existence of Nash Equilibrium.
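    To make the search problem concrete, here is a toy sketch (my illustration, not from the paper). The graph is given only implicitly through f, so the obvious approach of walking forward from s does find a solution but may take exponentially many steps:

        # The PPAD search problem, naively: follow successors from the
        # source s until we hit a node with no successor.  Correct, but
        # the implicit graph can have exponentially many nodes, so this
        # walk can take exponential time.
        def end_of_line(f, s):
            v = s
            while True:
                _, succ = f(v)
                if succ is None:
                    return v  # a sink, which is a valid t != s
                v = succ

        # Toy instance: a single path 0 -> 1 -> ... -> 7.
        def f(v):
            return (v - 1 if v > 0 else None, v + 1 if v < 7 else None)

        print(end_of_line(f, 0))  # 7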

    The authors leave open the complexity of finding Nash Equilibria for two and three players. They conjecture that for three players the problem remains complete for PPAD but that two-player Nash Equilibria can be found in polynomial time.

    Monday, October 17, 2005

    True Impact

    How do you measure your impact as a computer scientist? You can try measures like the Citeseer rank or the h-index, but the only scientifically valid test would compare the world today with the world where you were never born.

    We can never actually run such a test but we can try the thought experiment. Even if you are one of the "greats," most of your theorems, even the best and most surprising, would have been eventually proved a few months or a few years later. Other theorems would never have been proved because no one, other than the non-existent you, would have cared. Other than speeding up science a little bit, you cannot get a long-term individual impact on the field solely by proving theorems.

    But proving those theorems builds your reputation, and with that reputation you can, for better or for worse, help shape the direction of the field and set the research agenda for a generation of young graduate students. You also have lasting influence through your graduate students and the undergrads you convince to study computer science.

    We can run this thought experiment the other way. Suppose many years ago a sperm darted right instead of left and fertilized an egg that hatched a true genius in our field. How much difference could that one person have made on our research and our lives?

    Sunday, October 16, 2005

    Blogging and Academics

    The University of Chicago denying tenure to an assistant professor is rarely a breaking news story. Yet political scientist Daniel Drezner's case received considerable press including a Chicago Tribune story. Why? Because he had a popular blog.

    I doubt the content of the weblog or its existence or popularity played negatively towards his tenure case. Perhaps some feel his time would have been better spent on "real academics" but most likely they considered his more traditional academic writings and, frankly, it's very difficult to get tenure at the U of C, particularly in the social sciences.

    Will Drezner's weblog help him in his future job hunt? Ivan Tribble argued that weblogs can hurt a candidate for an academic position.

    The content of the blog may be less worrisome than the fact of the blog itself. Several committee members expressed concern that a blogger who joined our staff might air departmental dirty laundry (real or imagined) on the cyber clothesline for the world to see. Past good behavior is no guarantee against future lapses of professional decorum.
    I disagree with Tribble. Most non-anonymous academic webloggers know better than to discuss departmental politics in their blogs and departmental hiring committees should or will realize they have nothing to fear. A popular weblog raises one's visibility in and out of the field—far more people read this weblog than download my research papers, for example. A weblog like Daniel Drezner's (much more read than this one) gives him an edge over his peers, a popularity that will open some doors that others will have to fight harder for.

    Thursday, October 13, 2005

    Fonts

    Fonts are the last thing I want to worry about when I write a research paper. Unfortunately fonts have often become the last thing I need to worry about when I write a research paper.

    In the olden days (circa 1990), we all wrote our LaTeX papers using the Computer Modern font. When we sent a paper to a proceedings we printed up a clean copy and sent it via Federal Express.

    Now we have choices of fonts. Fonts are surprisingly complicated. A good font is a work of art and a scalable font is actually a computer program for each letter. If you think intellectual property issues for digital music are complicated, IP for typefaces is nearly impossible to implement well.

    When some societies like the IEEE first started taking electronic uploads for their proceedings we would get occasional disastrous effects because the IEEE fonts didn't match the fonts people used to create a paper. For example the "<" would appear as a "⇒", making some of the papers unreadable. Most of these organizations have become more aware of this issue but now require us to jump through some hoops (using the right fonts and style files and putting the paper in the appropriate format with the right program). Makes me wish for the old days when I could send a paper and they would scan it, which the IEEE will still do but charge extra for.

    Sometimes you'll see "¿From" in older papers. This is not a font problem but an artifact of sending text files through email: mail programs would add a ">" to a line beginning with "From", which would come out as "¿From" after LaTeX processed the file. You see it less now as files get sent via attachments instead of directly in the mail body.

    Worrying about something as minor as fonts really keeps us from focusing on research and other important activities. Remember, no one was ever denied tenure for bad font selection.

    Wednesday, October 12, 2005

    Early or Late

    As you can see from the timestamp of this post, I came into work quite early this morning. I had to drive and needed an early start (about 6 AM) to beat Chicago traffic and get a good parking spot. When I arrived I saw another professor in his office. I knew he wasn't the early type and likely spent the night here. When I said "Hello," he replied "Deadline."

    Which one of us is keeping the crazier hours?

    Monday, October 10, 2005

    Favorite Theorems: Logical Characterization of NP

    September Edition

    Usually we think of the class NP as either languages accepted in polynomial-time by a nondeterministic Turing machine or languages with polynomial-time verifiable witnesses. Ronald Fagin gives a characterization of NP based on logic without references to Turing machines or polynomial time.

    Ronald Fagin, Generalized first-order spectra and polynomial-time recognizable sets. Complexity of Computation, ed. R. Karp, SIAM-AMS Proceedings 7, 1974, pp. 43-73.

    In this paper Fagin shows that NP consists of exactly the languages expressible with existential second-order formulas. For example, consider a graph G described by an edge relation E(i,j); we can express that G is k-colorable by

    ∃C ∀i,j (1≤C(i)≤k ∧ (E(i,j)→C(i)≠C(j)))

    With some more work you can use binary predicates. In general every language in NP has an existential second-order characterization with binary predicates and a universal first-order part.
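    A brute-force check (my own sketch) makes the structure of the formula concrete: the existential quantifier over C becomes a search over all colorings, and the first-order part becomes a simple loop over the edges.

        # Mirror of the formula: "there exists a coloring C such that
        # for all i,j, E(i,j) implies C(i) != C(j)".  The search over C
        # is the nondeterministic guess; the inner check is the
        # polynomial-time first-order part.
        from itertools import product

        def k_colorable(n, edges, k):
            for C in product(range(k), repeat=n):
                if all(C[i] != C[j] for i, j in edges):
                    return True
            return False

        print(k_colorable(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 2))  # True (4-cycle)
        print(k_colorable(3, [(0, 1), (1, 2), (2, 0)], 2))          # False (triangle)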

    Stockmeyer generalizes Fagin's result to characterize the polynomial-time hierarchy with second-order formulas.

    Fagin's result started the area of descriptive complexity that characterized many common complexity classes in various logics and has connections to the complexity of database queries. Neil Immerman's work in descriptive complexity led him to his proof that nondeterministic space is closed under complement. Róbert Szelepcsényi independently came up with a similar proof through a different approach.

    Papadimitriou and Yannakakis use Fagin's result to characterize the class MAX-SNP of optimization problems. One of the first corollaries of the PCP Theorem is to show the MAX-SNP hard problems cannot be approximated within an arbitrary constant unless P=NP. In fact the concept of probabilistically checkable proof itself originally comes from a second-order view of NP that originated from Fagin's paper.

    Update 10/12: Fagin adds an addendum.

    Thanks to Lance and Siva for the kind words about my theorem. Let me clarify the story on the arity of the existentially quantified relations.

    An existential second-order formula about, say, graphs, is a formula of the form

    ∃ Q1 ... ∃ Qk S( E, Q1, ..., Qk)

    where E represents the edge relation, Q1, ..., Qk are existentially quantified predicates of arbitrary arity, and S(E, Q1, ..., Qk) is a first-order formula that involves only E, Q1, ..., Qk. As an example, 3-colorability of the graph can be expressed by an existential second-order formula

    ∃ Q1 ∃ Q2 ∃ Q3 S(E, Q1, Q2, Q3),

    where Q1, Q2, and Q3 are unary predicates (that represent the 3 colors), and S(E, Q1, Q2, Q3) is a first-order formula that says "Each point has exactly one color, and no two points with the same color are connected by an edge".

    In the case of graphs, my theorem says that if T is a class of graphs that is closed under isomorphism (that is, whenever T contains a graph G, then it contains every graph isomorphic to G), then T is in NP if and only if T is defined by an existential second-order formula. In the case of graphs, it is an open problem as to whether we really need to allow existentially quantified predicates of arbitrary arity. On the one hand, it is conceivable that there are NP properties of graphs that require existentially quantified predicates of arbitrarily large arity. On the other hand, it is conceivable that we can capture every NP property of graphs by allowing only a single existentially quantified binary predicate.

    If we consider NP properties not just of graphs, but of arbitrary structures (such as structures with, say, two ternary relations and five 7-ary relations), then the characterization of NP in my theorem continues to hold, but in this case, it is known that existentially quantified binary predicates do not suffice. In particular, Ajtai proved (in the same amazingly rich 1983 paper [Σ^1_1-formulae on finite structures, Annals of Pure and Applied Logic 24, 1983, pp. 1-48] where, among other things, he proved the Furst-Saxe-Sipser theorem independently of Furst, Saxe and Sipser) that if we consider structures consisting of a single m-ary relation, then the statement "The number of m-tuples in the relation is even" cannot be captured in existential second-order logic with any number of existentially quantified predicates of arity less than m.

    A gentle introduction to the work I've mentioned in this note and to some other work in finite model theory appears in my survey paper Finite model theory-a personal perspective.