Wednesday, August 31, 2005

Theory Still Thrives at IBM

Ron Fagin writes in response to my post The New Research Labs.

Lance has invited us to give an update on the state of theory at the IBM Almaden Research Center, and I am happy to do so. The Theory Group at IBM Almaden has a long and distinguished history. I first formed the group in 1979. Over the years, many leading theoretical computer scientists have been members of the group, either as regular Research Staff Members or as "temporary members" (including Visiting Scientists, Postdoctoral Fellows, and Summer Interns). Former members of the theory group have held and still hold a variety of positions in premier institutions, including deanships and endowed chairs in top universities.

The primary mission of Theory Group members has always been to do first-rate research in theoretical computer science. Our primary mission remains the same today: to do world-class science. This is our raison d'etre. In addition, group members spend a portion of their time interacting with other research teams at IBM. This has led to a number of successes. Many times we have been in the happy scenario where such work has led not only to impact on IBM products, but also to exciting, leading-edge theory. This balance of activities has led to a very stable environment where Theory Group members enjoy the strong support of the organization.

Every research group, including the Theory Group, experiences turnover. In recent months, there has been more turnover than usual in the Theory Group, because of new opportunities in Silicon Valley. The Theory Group has a number of open slots to hire outstanding theoretical computer scientists, for both Research Staff Member and Postdoctoral positions. The availability of these positions is an affirmation that the Theory Group will continue its mission, and maintain its long tradition of excellence.

Tuesday, August 30, 2005

Back From Vacation

I am back from our family vacation to the Black Hills of South Dakota. Thanks to Ryan O'Donnell for guest blogging. I have donated $26 to hurricane relief in honor of all of the winners of Ryan's Game.

I used to keep off the internet completely over vacation but the web has become such a useful resource (for directions, hotels, site information) that we brought along my wife's laptop. All of the hotels we stayed in (as well as the highway rest areas in Iowa) have free internet so we had good access. Still, I avoided checking my email and Ryan's weblog entries; I didn't want to worry about anything work related during the week. Of course that meant I came back to a mountain of email, and if you sent me some I will get back to you soon.

Some of these hotels also gave us a free USA Today, which ran an article on the recent popularity of Sudoku books in the US. The article had the line "Sudoku involves no math." What did they mean? Probably that Sudoku involves no arithmetic. But can you really be logical without being mathematical?

Sunday, August 28, 2005

Saturday, August 27, 2005

Blogger.com -- stinks (by Ryan O'Donnell)

Man, blogger.com screwed that last post up in at least 3 different ways! Or maybe it was my fault. Anyway, hopefully the rest of the pictures appear below...

PS: My PS from the last post also got cut off; it said, "PS: Thanks very much to Lance for the opportunity to guest blog; it was a lot of fun. Thanks also for all the comments from readers."

1. (Correctly identified as Dimitris Achlioptas.)

2. (Correctly identified as Joan Boyar.)

3. (Correctly identified as Artur Czumaj.)

4. 5. 6. 7. 8. 9. (Correctly identified as Pino Italiano.)

10. (Correctly identified as Mike Jordan.)

11. 12. 13. 14. 15. (Correctly identified as Tatsuaki Okamoto.)

16. (Correctly identified as René Peralta.)

17. (Correctly identified as Jean-Jacques Quisquater.)

18. (Correctly identified as Eric Ruppert.)

19. (Correctly identified as Adam Smith.)

20. (Correctly identified as Adam Tauman Kalai.)

21. (Correctly identified as Alasdair Urquhart.)

22. 23. (Correctly identified as Rebecca Wright.)

24. 25. 26.

Friday, August 26, 2005

The Game (by Ryan O'Donnell)

Adam Klivans and Rocco Servedio and I used to play what we called The Game. To play The Game, you email the others a picture of a person in theoretical computer science. The others' job is to identify that person. I was reminded of The Game recently when looking at Homin Lee's look sharp, think sharp, act sharp page. No one gets any points for identifying the people on that page -- they're way too easy. (Also the pictures are links to home pages.)

For my last blog entry, I thought we could all play The Game. Lance has generously offered $26 in prizes; the first commenter to identify any given person wins a buck (you need to post non-anonymously, obviously). One rule: no poster is allowed to identify him- or herself. Hint: the number 26 has its obvious significance. Caveat: some of this was done with Google Image Search; if I got the wrong person, I apologise.

1. 2. (Person on the left.) 3. 4. 5. 6. (In the green sweater.)

Thursday, August 25, 2005

Depth-two TC^0 (by Ryan O'Donnell)

For today's post, some hard-core, old-school, low-level structural complexity, coupled with a woe-is-me tale.

A couple of summers ago a friend and I decided to gird ourselves and take a stab at proving a circuit lower bound. To dial down the quixotism as much as possible, we picked the absolutely least ambitious circuit class we could think of to work on: depth-two TC^0.

I should clarify what I mean here: TC^0 is usually defined as being the class of poly-size constant-depth circuits with constants, negations, and unlimited fan-in majority gates. But here when I say "depth-two TC^0", I want to allow arbitrary threshold gates instead of just majority gates. By a threshold gate I mean a function, determined by integer constants a_1, ..., a_m and c, which on input (x_1, ..., x_m) outputs 1 when a_1 x_1 + a_2 x_2 + ... + a_m x_m > c, and 0 otherwise. Siu and Bruck in 1991 showed that an arbitrary threshold gate can be simulated by a poly-size depth-three majority circuit (this was improved to depth two in a very nice paper by Goldmann, Håstad and Razborov, and the construction simplified a few times, most elegantly by Hofmeister). So if you only care about constant depth and poly size, it doesn't matter what sort of threshold gates you allow for TC^0. But we were interested in depth-two arbitrary thresholds of arbitrary thresholds.
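To make the definitions concrete, here is a minimal sketch in Python (my own toy example, not from any of the papers above) of a threshold gate and of a depth-two circuit built as a threshold of thresholds; the particular weights are made up purely for illustration.

```python
# A threshold gate with integer weights a_1..a_m and constant c:
# on input (x_1,...,x_m) it outputs 1 when a_1 x_1 + ... + a_m x_m > c, and 0 otherwise.
def threshold_gate(weights, c, x):
    return 1 if sum(a * xi for a, xi in zip(weights, x)) > c else 0

# A depth-two threshold circuit: a top threshold gate fed by the outputs of bottom threshold gates.
def depth_two_circuit(bottom_gates, top_weights, top_c, x):
    ys = [threshold_gate(w, c, x) for (w, c) in bottom_gates]
    return threshold_gate(top_weights, top_c, ys)

# Toy example (made-up gates): a majority of three "more than k ones" gates on 5 input bits.
bottom = [([1] * 5, 0), ([1] * 5, 2), ([1] * 5, 4)]
print(depth_two_circuit(bottom, [1, 1, 1], 1, [1, 1, 0, 1, 0]))  # prints 1
```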

[Side note: One might try to argue that small-depth threshold circuits are interesting from a physical point of view, due to resemblance to neural nets. Since allowing arbitrary threshold gates seems a bit dubious, physically -- Circuits Of The Mind people, what do you think? -- I won't try to argue this. I'll merely say that the class of thresholds of polynomially many thresholds is a pretty natural "circuit class".]

Now when people like to exclaim over how bad we are at circuit lower bounds, the usual trope is that "As far as we know, NP might be contained in AC^0 with mod 6 gates!" (Or do they say EXP?) But you can take it one step further: correct me if I'm wrong, readers, but I think that as far as we know NP might be in depth-two TC^0. To think: solving the travelling salesperson problem, say, with a threshold of polynomially many thresholds of inputs.

Also as far as I know, there would be no amazing consequence of a lower bound against depth-two TC^0; this is one reason why my friend and I tried to tackle it. The other reason was that it seems like we already practically have it:

  • If you restrict the top threshold gate to be a majority, then a neat 1993 result of Hajnal, Maass, Pudlák, Szegedy and Turán shows that Inner Product Mod 2 is not in the class. The proof uses the classical randomized communication complexity lower bound for IP2 by Chor and Goldreich.
  • If you instead restrict the lower level thresholds to be majorities (or any gates with logarithmic deterministic communication complexity), a superb result of Jürgen Forster -- published originally in NeuroCOLT (!) 2000 -- again excludes IP2 from the class. Forster showed this by proving IP2 has linear randomized communication complexity even when the players need only succeed with any probability exceeding 50%.
So. IP2 cannot be expressed as a majority of poly many thresholds nor as a threshold of poly many majorities. You'd think it would only be a small step from there to show that it cannot be expressed as a threshold of poly many thresholds...

Long story short, after a summer of banging our heads over it, we came up with absolutely nothing to say about whether or not IP2 is in depth-two TC^0.

SAT, computable as a threshold of thresholds...? Man.

Wednesday, August 24, 2005

Explicit expander exaggerations (by Ryan O'Donnell)

Until pretty recently, when it came to expanders, I was a bit of a naif and -- I'll admit it -- a bit of a faker. Sure, when randomness was precious I could say, "Hey, maybe we should take a walk on an expander graph," as well as the next guy. Thing was, I was always hoping the next guy would be able to take it from there. So I finally got around to understanding them properly and I found out that one aspect of the expander story I had in my head was greatly exaggerated...

Now don't get me wrong, the Zig-Zag Product is swell, it's Annals of Math-worthy, and it's inspired a lot of recent work, both in expander constructions and elsewhere (e.g., Reingold's SL = L theorem). However, the story I sometimes heard as to why it was cool was that it was the first explicit expander construction that was actually understandable; that you didn't have to know lots of deep number theory to verify the proof; that all other previously known constructions used zaniness like Weil sums, Ramanujan conjectures, Kazhdan constants and whatnot.

But as you probably know, this is an exaggeration; it's really not so. The first explicit expander construction was given by Margulis in 1973 and its expansion was explicitly determined by Gabber and Galil, 26 FOCSes ago. Although the first proofs used some deep stuff, by STOC '85 Jimbo and Maruoka had made the proof completely elementary. Here's a simple version of the construction: Take the graph on Z_m x Z_m and connect (x,y) to (x+2y,y), (x+2y+1,y), (x,x+2y), (x,x+2y+1). Also put in the reverse edges. The degree is 8 and the second largest eigenvalue is at most 5 sqrt(2) ≈ 7.07 < 8. You're done.
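If you want to see the construction in action, here's a throwaway Python sketch (mine, not from Gabber-Galil or Jimbo-Maruoka) that builds the degree-8 graph for a small m and numerically checks its second-largest adjacency eigenvalue against the 5 sqrt(2) ≈ 7.07 bound.

```python
import numpy as np

def gabber_galil(m):
    """Adjacency matrix of the degree-8 graph on Z_m x Z_m described above
    (the four maps plus their reverse edges; self-loops and multi-edges allowed)."""
    n = m * m
    A = np.zeros((n, n))
    idx = lambda x, y: x * m + y
    for x in range(m):
        for y in range(m):
            for (u, v) in [((x + 2 * y) % m, y), ((x + 2 * y + 1) % m, y),
                           (x, (x + 2 * y) % m), (x, (x + 2 * y + 1) % m)]:
                A[idx(x, y), idx(u, v)] += 1   # forward edge
                A[idx(u, v), idx(x, y)] += 1   # reverse edge
    return A

A = gabber_galil(10)
eigs = np.sort(np.linalg.eigvalsh(A))[::-1]
# Prints: largest eigenvalue (the degree, 8), second eigenvalue, and the claimed bound;
# the theorem says the middle number should be at most the last one.
print(eigs[0], eigs[1], 5 * np.sqrt(2))
```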

The proof? Three pages of elementary fiddling around. Zig-Zag's proof? A bit of a tricky definition followed by three pages of elementary fiddling around. (If you prefer a more leisurely treatment, both proofs take about five pages in David Xiao's senior thesis.) Zig-Zag is maybe easier to follow if you like linear algebra, Gabber-Galil if you're down with a little Fourier analysis. The Zig-Zag construction wins out in terms of intuition -- the GG proof has a very clever trick in it that would be hard to come up with on your own. On the other hand, it's not like one should find it baffling that the Gabber-Galil construction might work -- indeed, Jin-Yi Cai shows that pretty much any similar construction on Z_m x Z_m is an expander.

Finally, the Gabber-Galil construction is hands-down better when it comes to explicitness of construction. (It takes a bit of work and thought to show that Zig-Zagging gives you good explicitness.) And GG's extreme explicitness can certainly come in handy -- just ask our old friend Lance Fortnow and former guest blogger Adam Klivans who used it (via a result of Gutfreund and Viola) to show that RL is contained in L with linear advice.

Tuesday, August 23, 2005

Additive combinatorics (by Ryan O'Donnell)

For a while now I've taken a dilettantish interest in the subject of Additive Combinatorics. If A is a subset of the integers {1, ..., N}, let A + A denote the set {a + b : a, b in A}. Additive Combinatorics studies such questions as, "If A + A is small compared to |A|^2, what can be said about A?" and "What is the size of the largest A that contains no nontrivial length-3 arithmetic progression?"
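As a toy illustration of the kind of contrast these questions probe, here's a quick Python snippet (my own, just for intuition): an arithmetic progression has the smallest possible sumset, while a random set of the same size has a nearly maximal one.

```python
import random

def sumset(A):
    return {a + b for a in A for b in A}

k = 1000
AP = set(range(1, 7 * k, 7))                  # arithmetic progression of size k
R = set(random.sample(range(1, 10**6), k))    # random k-element subset of {1,...,10^6}

print(len(sumset(AP)))   # exactly 2k - 1 = 1999: as small as a sumset of k integers can be
print(len(sumset(R)))    # typically close to k(k+1)/2 = 500500, nearly maximal
```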

My interest in this topic was piqued a little over a year ago by the confluence of three events. First, the Green-Tao theorem was announced: The primes contain arbitrarily long arithmetic progressions. Second, the Barak-Impagliazzo-Wigderson paper on extracting randomness from independent sources came out, which crucially used a 2003 result from additive combinatorics by Bourgain, Katz, and Tao. Third, I was reading some of the course notes and expositional papers on Ben Green's webpage (mainly because he's a masterfully clear writer) and I realized that many of the favourite techniques used in additive combinatorics are also favourite techniques in theoretical computer science -- the probabilistic method, graph theory, discrete Fourier analysis, finite fields and low-degree polynomials therein...

So what are the applications to theoretical computer science? There are already at least a few:

  • More recent work on multisource extractors; e.g., the paper of Dvir and Shpilka or the recent work of Jean Bourgain (mentioned here).
  • Szemerédi's Regularity Lemma, used nonstop in the area of property testing, was developed originally for "Szemerédi's Theorem" from additive combinatorics. Ben Green also has a version of the regularity lemma for boolean functions, which I used in a paper on hardness of approximation for "Grothendieck problems".
  • Tao and Vu's recent papers on the probability that a random boolean matrix has determinant zero.
  • Chandra, Furst, and Lipton, inventors of the "number on the forehead" model in communication complexity, gave surprisingly efficient communication upper bounds based on arithmetic progressions in sets.
  • The most-cited work in TCS history (maybe): Coppersmith and Winograd's n^{2.376} matrix multiplication algorithm. The title: Matrix multiplication via arithmetic progressions.
I like to think that many more applications and connections to TCS are on the way. Here are a few scattershot ideas... anyone want to chime in with others?
  • Property Testing: Since Szemerédi's Regularity Lemma is so useful for property testing, there ought to be some use in the recently discovered hypergraph versions.
  • Low-degree testing: given n-variate functions f over small fields, both communities like to look at quantities like E[f(a)f(b)f(c)f(a+b)f(a+c)f(b+c)f(a+b+c)] -- see this paper on low-degree testing by Alon, Kaufman, Krivelevich, Litsyn, and Ron but also the many works on the "Gowers norm" such as this one by Green and Tao.
  • Plünnecke's Theorem: The proof of this theorem, which is used constantly in additive combinatorics, is pure graph theory -- the main tool is the max-flow/min-cut theorem. It'd be great to see it written in TCS language.
  • Lattices: The topic is screaming out for a connection to lattice problems; see Chapter 3 of this book for some background.
For an introduction to additive combinatorics, one might look here.

Monday, August 22, 2005

More on parallel repetition (by Ryan O'Donnell)

Ryan O'Donnell here, guest blogging for Lance while he's on vacation. As a matter of fact I'm on vacation myself, in sunny but recently tornadoed Toronto.

The parallel repetition problem has come up in this blog more than once before (perhaps because the topic of interactive proofs is near and dear to Lance's heart). I've been thinking about it lately because Venkat Guruswami and I will be teaching a course on PCPs and hardness of approximation this term.

The dirty little secret about PCP courses is that everyone always skips the proof of Ran's Parallel Repetition Theorem, even though it's an essential component of very many optimal hardness of approximation results. I used to think Dinur's new proof of the basic PCP theorem promised an opportunity to get all the way from soup to nuts in one course on PCPs and hardness of approximation. However it really only promises the following: instead of sweating for 10 lectures to get through the basic PCP theorem and then waving your hands over parallel repetition, you get to proceed happily for 3 lectures through the basic PCP theorem and then sweat over how to deal with parallel repetition.

So how to describe getting from a basic PCP with soundness, say, .9, to a PCP with soundness .0001? Here are three possible strategies I've considered:

1. Take a stab at explaining Ran's paper. This is no mean feat -- I've tried reading it maybe six times in my life, and only on my last attempt did I make any sort of headway. That included completely disregarding the 20 pages of calculations at the end. Still, I've absorbed enough intuition from Avi Wigderson and Uri Feige (check out the latter's survey, the only explanatory writing on the theorem I've ever found) that maybe I could say something in a couple of lectures that would vaguely make sense.

2. Attempt a proof or sketch of Feige and Kilian's parallel repetition theorem. This underappreciated result is definitely easier to follow than Ran's, but it would still probably be challenging to cover in a small number of lectures. It has weaker parameters than Raz's result -- to get soundness down to epsilon you need poly(1/epsilon) parallel repetitions, rather than log(1/epsilon). But come on: if the new proof size is going to be n^{2^2^10000} anyway, does it really matter?

3. Teach the Lovász-Feige method of parallel repetition from STOC '92. Unless I'm mistaken, this is a neat and simple way to do a kind of parallel repetition, using multilinear extensions and a sumcheck-protocol-style analysis. The only downside: the proof gets blown up to quasipolynomial size (although, as a compensatory bonus, you get quasipolynomially small soundness). But again, does it really matter? The hypotheses NP not in P and NP not in TIME(n^{polylog(n)}) are pretty much equally good to me.

I'll probably go with either #1 or #3. Any opinions?

Friday, August 19, 2005

Conference Crashers

In the popular summer movie Wedding Crashers, two friends go to weddings and receptions uninvited for food, drink, entertainment and to pick up single women. We have a similar problem at conferences: people who come, not really for the above reasons, but to see talks and visit with their friends while avoiding the registration fee.

While small conferences don't have the manpower to check for registered participants, the vast majority of participants do register. But as conference costs go up (for reasons like extra proceedings sales dropping dramatically) and grants get smaller and harder to obtain, there has been a mild increase in people skipping out on registration. If this trend continues, the problem feeds back onto itself, as those who do pay feel foolish, and we have a serious concern on our hands.

If you don't get a copy of the proceedings or eat at the conference meals you might think that you are not costing the conference anything by attending, and so don't feel guilty about not paying. But conferences have some fixed expenses, and most of the other expenses are cheaper per participant when there are more participants, so you are costing your fellow researchers real dollars by not paying your fair share.

Sometimes the registration fee can make the difference in a decision on whether to attend a conference. If so, talk to the conference organizers; if you explain the situation, sometimes arrangements can be made. Better to attend at a reduced rate than not attend at all.

On a side note, I'm off the web next week. Microsoft postdoc Ryan O'Donnell will guest blog. Enjoy.

Thursday, August 18, 2005

Research Annealing

Simulated Annealing is a heuristic technique for optimization problems. Think of an optimization problem as hills and valleys where you want to find the lowest point, and imagine a ball on that landscape. First the ball starts "hot" and bounces around randomly. As you cool the ball down, it doesn't bounce as much, and gravity causes it to go down more often than up. When it cools completely it falls into a local minimum. Hopefully you've reduced the temperature at such a rate that the local minimum the ball finds is close to the true minimum. Simulated annealing doesn't solve NP-complete problems quickly in general, but in some cases it does reasonably well in practice.
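Here's a bare-bones Python sketch of the heuristic itself on a made-up one-dimensional landscape; the energy function, proposal distribution and cooling schedule are all arbitrary choices of mine, just to show the shape of the algorithm.

```python
import math, random

def energy(x):
    # A bumpy one-dimensional landscape with many local minima (arbitrary example).
    return x * x + 10 * math.sin(3 * x)

def simulated_annealing(steps=100000, temp=10.0, cooling=0.9999):
    x = random.uniform(-10, 10)
    for _ in range(steps):
        y = x + random.gauss(0, 0.5)                    # propose a random nearby move
        dE = energy(y) - energy(x)
        if dE < 0 or random.random() < math.exp(-dE / temp):
            x = y                                       # always go downhill; go uphill with prob e^(-dE/temp)
        temp *= cooling                                 # slowly lower the temperature
    return x, energy(x)

print(simulated_annealing())
```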

I used an analogy of simulated annealing to describe to a student how one chooses a research area. First you bounce around for a while looking at many topics through talks, classes and reading some broad papers finding the right fit for your strengths and interests. Then you start to focus, reading more specific papers until you find the right place to start drilling for results.

Once you start drilling you will hopefully find some oil but eventually the well will just output sludge. At this point you should start bouncing around again, slowly at first, to find a new topic. The trick is to know when to start bouncing again. Leave too early and you might miss a new oil supply right under the old one. Leave too late and you will find yourself knee-deep in sludge, forgotten and unable to escape.

Tuesday, August 16, 2005

US Visa Limit Reached

The US has reached its limit for H-1B visas not just for this fiscal year but for the next. A foreign technical worker wanting to work in the US wouldn't be able to start until October 1, 2006.

First some background. Here is the official description of the H-1B.

Established by the Immigration Act of 1990 (IMMACT), the H-1B nonimmigrant visa category allows U.S. employers to augment the existing labor force with highly skilled temporary workers. H-1B workers are admitted to the United States for an initial period of three years, which may be extended for an additional three years. The H-1B visa program is utilized by some U.S. businesses and other organizations to employ foreign workers in specialty occupations that require theoretical or technical expertise in a specialized field. Typical H-1B occupations include architects, engineers, computer programmers, accountants, doctors and college professors. The current annual cap on the H-1B category is 65,000.
Of those 65,000, 6,800 are set aside for free trade agreements with Chile and Singapore, effectively leaving 58,200 visas. During the dot-com boom the number of H-1B visas available was raised to 195,000 but lowered again in a misguided attempt to save American jobs. Congress did allow an additional 20,000 visas for foreigners who have received advanced degrees in the US and there are still a few of those visas available.

These low visa limits will just cause large corporations to continue to develop and grow their overseas R&D labs. Instead of having these workers in the US helping our economy and advancing science and technology in the US, these limits will add to the erosion of the US dominance in these areas.

Do a Google News search on this topic and you get mostly foreign articles on the topic. While the visa limit does not directly affect US citizens, we should all be concerned about its effect on our country's ability to lead in S&T.

Monday, August 15, 2005

Extreme Oracles

Let's look at some relativized worlds which really push the limits of what we don't know how to prove. Once again I refer you to the zoo for definitions of these classes.
  • P=PSPACE (Baker-Gill-Solovay)
    One of the original oracles collapses everything. P=NP=co-NP=PH=⊕P=PP=BQP=P^#P=PSPACE=NPSPACE.
  • Generic Oracles (Blum-Impagliazzo or see our toolkit paper)
    These oracles separate as much as possible. P≠NP, the polynomial-time hierarchy is infinite, PP is not in P^NP, NP is not in ⊕P and much more. Oddly enough they diagonalize against machines that need to fulfill a promise condition and so with the appropriate construction one also gets P=NP∩co-NP=UP=BPP=BQP.
  • P=⊕P and NP=EXP (Beigel-Buhrman-Fortnow)
    Relative to this oracle ZPP=⊕EXP and the isomorphism conjecture holds (all NP-complete problems are reducible to each other via invertible bijections).
  • P=NP and ⊕P=EXP (Beigel-Maciel)
    The polynomial-time hierarchy collapses to P and yet the exponential hierarchy sits inside ⊕P.
  • P=⊕P and BPP=EXP^NP (Buhrman-Torenvliet [Corollary 4.8])
    Not even very weak derandomization for BPP. Valiant-Vazirani puts NP in RP^⊕P but in this oracle NP is not even in co-NP^⊕P.
  • P^RP=NEXP (Buhrman-Fenner-Fortnow-Torenvliet)
    Not even very weak derandomization for RP. Implies P^NP=P^NEXP (also implied by the next oracle).
  • P^NP=⊕P=P^EXP (Aaronson)
    A strong version of Beigel's oracle where P^NP is not in PP (though the entire polynomial-time hierarchy is in P^PP) and PP is not closed under Turing-reductions.
You can replace ⊕P with Mod_kP for any prime k in any of the above. We don't believe any of the statements to be true in the "real world" but all of them remain open and would require nonrelativizing techniques to disprove.

Friday, August 12, 2005

Information and Computation

Elsevier is opening up I&C for the rest of the year.
The Publisher and Editorial Board of Information and Computation are pleased to announce that for one year, effective immediately, online access to all journal issues back to 1995 will be available without charge. This includes unrestricted downloading of articles in pdf format. Journal articles may be obtained through the journal's web site or Elsevier's ScienceDirect.

At the end of the year, the retrieval traffic during the open access period will be evaluated as future subscription policies are considered.

Albert R. Meyer, Editor-in-Chief, MIT Computer Science & AI Lab
Chris Leonard, Publishing Editor, Elsevier
Moshe Vardi, Associate Editor, Rice University

I should note I am a member of the I&C editorial board.

A good place to start reading is the top 25 downloaded articles.

Thursday, August 11, 2005

Sudoku Revisited

Robin Houston writes
Although I'm not a complexity theorist, I very much enjoy reading your weblog. I also enjoy solving the Sudoku puzzles published in the British press, so it was doubly nice to see your May 25 post about the complexity of Sudoku!

As far as I can tell, it follows from Yato's work that the problem:

  1. Given a partially completed grid, find a valid completion if there is one; otherwise report that there isn't one.

    is solvable in polynomial time iff P=NP.

    That's interesting of course, and it's a problem that faces those who set the puzzles; but the problem that we solvers are faced with is not quite (1). It's:

  2. Given a partially completed grid that has a unique valid completion, find that completion.
Can anything be said about problem (2)? If there were a polynomial- time algorithm for (2), would it follow that P=NP? If not, would there be any other significant consequences for complexity theory?
Good question. Since Yato's reductions preserve the number of solutions, the problem is equivalent to finding a satisfying assignment of a Boolean formula that has exactly one satisfying assignment (Unique SAT).

We don't know if Unique SAT is NP-complete in the traditional sense. However Valiant and Vazirani have a nice paper that shows how to randomly reduce SAT to Unique SAT. Putting it together we get the following equivalence:

  • Given a partially completed grid that has a unique valid completion, probabilistically find that completion in polynomial time.
  • NP=RP (i.e. all NP problems have efficient probabilistic solutions).
Since we don't believe that NP has fast probabilistic algorithms, we expect that there is no efficient procedure for completing a generalized Sudoku grid, even if there is only one such completion.
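To give the flavor of the Valiant-Vazirani step mentioned above, here's a rough Python sketch (a simplification of mine, not their exact parameters): conjoin the instance with a few random parity (XOR) constraints, and with noticeable probability exactly one satisfying assignment survives.

```python
import random

def random_parity_constraints(n, k):
    """k random XOR constraints over variables x_1..x_n; each is (S, b), meaning
    the parity of the variables indexed by S must equal the bit b."""
    return [([i for i in range(1, n + 1) if random.random() < 0.5], random.randint(0, 1))
            for _ in range(k)]

def satisfies_parities(assignment, constraints):
    """assignment maps each variable index to 0 or 1."""
    return all(sum(assignment[i] for i in S) % 2 == b for S, b in constraints)

# Isolation idea: guess k (roughly the log of the number of solutions), add k random
# parity constraints, and keep only the original formula's satisfying assignments that
# also satisfy them; Valiant-Vazirani show that for a suitable random k exactly one
# assignment survives with probability about 1/poly(n).
n = 9
k = random.randint(1, n)
print(k, random_parity_constraints(n, k))
```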

Wednesday, August 10, 2005

Is the Thrill Gone?

Sanjeev Arora and Bernard Chazelle write the Viewpoint Column Is the Thrill Gone? in this month's Communications of the ACM.
One wonders if the failure of computer scientists to articulate the intellectual excitement of their field is not one of the causes of their current funding crisis in the US. Too often policymakers, and hence funding agencies, treat computer science as a provider of services and infrastructure rather than an exciting discipline worth studying on its own. Our promises of future technological innovations and scientific advances will be more credible to them if they actually understand that past and current breakthroughs arose from an underlying science rather than a one-time investment in "infrastructure."

We think it is high time that the computer science community should reveal to the public our best kept secret: our work is exciting science—and indispensable to the nation.

You would think that Arora and Chazelle are preaching to the choir by publishing in the communications of the main computer science academic society. But the ACM also tries to represent the broader computer professional and the CACM reflects these mixed priorities. Each month CACM takes some current topic (Spyware this month) and runs a collection of academic papers on that topic that look to be of little direct interest to practitioners. (Compare this approach with Technology Review, which hires science writers to explain the work of academic and industrial researchers.)

So we need people like Arora and Chazelle to remind the ACM about the science in computer science. We need a separate Computing Research Association to push the computer science research agenda. And most importantly, as Arora and Chazelle say, we need to make the broader public aware of the excitement and importance of computer science.

Monday, August 08, 2005

Favorite Theorems: Abstract Complexity

July Edition

As a graduate student, Manuel Blum wanted to study computational complexity freed from any specific machine model. His paper set the tone for much of complexity theory in the late 60's.

Manuel Blum, A Machine-Independent Theory of the Complexity of Recursive Functions, JACM 1967.

Blum defined a resource measure as any partially computable function Φ_M (think time) of a machine M that fulfilled some simple axioms.

  1. For all x, M(x) halts iff Φ_M(x) halts.
  2. The language {<M,x,m> | Φ_M(x)=m} is computable.
Blum argues that time complexity on any reasonable model of Turing machine would fulfill these axioms. He also noted that space complexity fulfills the axioms. Though the axioms are very general, Blum shows their power with the speed-up theorem.

Speed-Up Theorem: For every computable function r, there is a language L such that if M accepts L there is an N accepting L with r(x,Φ_N(x))≤Φ_M(x) for almost all x.

For example there is some language L such that if any algorithm computes L in t(n) steps there is another algorithm computing L in log t(n) steps (take r(x,y)=2^y in the theorem). This might seem to violate the time hierarchy but t(n) does not have the time constructibility needed for the hierarchy. Blum also showed there was a language that couldn't be sped up much and a computably-bounded relationship between any two abstract resource measures.

Borodin and Trakhtenbrot independently proved the gap theorem for abstract complexity measures.

Gap Theorem: Given any computable function g(x)≥x there is a recursive function t(x) such that if Φ_M(x)≤g(t(x)) for all x then there is an N accepting the same language as M with Φ_N(x)≤t(x) for almost all x.

In particular there is a function t(n) such that DTIME(t(n))=DTIME(2^t(n)) and thus DTIME(t(n))=DSPACE(t(n)).

McCreight and Meyer proved the union theorem.

Union Theorem: Let t_1, t_2, … be a computable enumeration of increasing resource bounds. There is a computable function t such that the following are equivalent for all M.

  • For some i and almost all x, Φ_M(x)≤t_i(x).
  • For almost all x, Φ_M(x)≤t(x).
For example there are computable functions t_1 and t_2 such that DTIME(t_1(n)) is equal to P and DTIME(t_2(n)) is exactly the primitive recursive languages.

McCreight and Meyer also give an honesty theorem showing that for every computable t there is (in a weak sense) a time-constructible t' such that the languages computable with resource bound t are the same as the languages computable with resource bound t'.

After the P versus NP problem was popularized by Cook and Karp in 1971, the focus of complexity went to polynomial-time (which also was machine independent) and away from abstract complexity.

Sunday, August 07, 2005

The New Research Labs

I am just finishing the last of three west coast trips this summer. I went to the Complexity conference, two universities (U. Wash and Caltech), three ballparks, a Bat Mitzvah and a film shoot. I also visited the Holy Trinity of internet companies: Microsoft Research (both in Silicon Valley and Redmond), Yahoo! Research (Pasadena) and Google (Mountain View), the last of which seemed more like a summer camp than a corporation.

Microsoft, Yahoo and Google all deal with large amounts of data and need to look at a number of CS-related issues, often requiring good theoretical techniques, in areas like search, auctions on search words, recommender systems, spam filtering and much more. While the research labs of the 80's and 90's (AT&T, Bell Labs, IBM, Bellcore/Telcordia, NEC and others) have pared down their research groups, Microsoft, Yahoo and Google are currently hiring many computer scientists, from programming positions to pure theory researchers. For example take the two IBM theorists who organized the last Complexity conference: Sivakumar just joined Google and Ravi Kumar went to Yahoo Research, which, by the way, is now headed by theorist Prabhakar Raghavan. You can see how important researchers are to these companies in the recent fight between Microsoft and Google over Kai-Fu Lee (whom I best know because his Othello program beat my Othello program in a 1989 tournament).

Corporate research labs go in cycles: they need new ideas in a developing field and build up strong research groups, until what they sell becomes a basic commodity (think long-distance phone calls) and they need to cut back research groups to remain competitive. Hartmanis and Stearns developed complexity at the GE Research Labs in Schenectady; soon after, both left for academic positions, and a few years after that IBM and AT&T built up their theory groups. Will Microsoft, Yahoo and Google eventually find less need for theoretical research? Probably, but for now we once again see theoretical computer scientists needed by the companies setting the future of computing.

Thursday, August 04, 2005

Screenplays of Science

An email from Rocco Servedio.
Did you see this New York Times article? Thought you might be interested in pointing this out on the weblog, maybe a CS theory screenplay will come out of it… :)
Thanks Rocco, though Suresh beat me to it. Reminds me of this old (and just updated) post from the early days of this blog.

In a similar vein, my daughters are excited about Disney's new Virtual Magic Kingdom. Perhaps we need a Virtual Science Kingdom to get kids excited about math and science: Help Captain Complexity three-color the map to save the Traveling Salesperson.

Wednesday, August 03, 2005

Moons and Planets

When I was in grade school we learned that Jupiter had twelve moons. We had a test. "How many moons does Jupiter have?" I wrote "12" and it was marked correct. In 1974 a thirteenth moon was discovered. The moon didn't just pop into existence in 1974, it was always there (at least when I took my test). Now we know Jupiter has at least 63 moons. The answer of 12 wasn't even close; what I was taught to be a fact was simply not correct.

Now we have ten planets (or is it eight?) circling our sun. Is nothing sacred? What about "My very educated mother just served us nine pies"? What other "facts" from my childhood were incorrect? Are we sure we just have one sun in our solar system?

Maybe that's why I like mathematics and theoretical computer science. Eight plus seven will always be fifteen; nondeterministic space will always be closed under complement. We know what we know; we know what we don't know; sometimes what we didn't know we now know, but nothing we knew later becomes false.

Tuesday, August 02, 2005

Two New Blogs

Chris Leonard who edits the Elsevier journals in theoretical computer science has started a weblog Computing Chris. He plans to address some of the concerns of the community to commercial publishers and Elsevier in particular. Feel free to suggest topics to Chris.

Jeff and Suresh point to David Eppstein's new weblog 0xDE. Eppstein has always had lots of fun and useful stuff on his website, including a catalog of the complexity of various games and puzzles.

Monday, August 01, 2005

Relativizing Space

We normally define relativization to an oracle A with a special Turing machine that has an extra tape where the machine can write down a string x and move to a special query state q_?; the machine then magically goes to a state q_y if x is in A and to a state q_n otherwise.

Scott Aaronson asked me about relativization for space classes noting the difficulty of the above definition. Does the oracle tape count as space? You want a log-space bounded machine to at least write its own input on the oracle tape but you don't want the tape to be used as auxiliary storage.

Ruzzo, Simon and Tompa developed the best model for oracle relativization. They allow a long oracle tape with the following restrictions:

  • The oracle tape is one-way write only.
  • A nondeterministic (or probabilistic or quantum) machine must act deterministically while writing on the oracle tape.
  • Once the query is made the tape is magically erased.
In this model a log-space machine can ask polynomial-size queries but these queries can be computed in log-space from the input and an extra O(log n) bits (the configuration of the machine right before writing the oracle query).

The Ruzzo-Simon-Tompa definition works well for defining Turing reductions for space-bounded machines but not so much for relativizing theorems, for example showing that alternating polynomial-time equals polynomial space for all oracles. Jonathan Buss gave a model to handle this case but we don't really get limitations on techniques for relativization results on space classes like we do for time. Best not even to bother looking for relativized space class oracles and just to think of PSPACE as alternating polynomial-time in results like: there is an A such that P^A=PSPACE^A.