Tuesday, July 12, 2005

Favorite Theorems: P = NP for Space

June Edition

In 1970 Walter Savitch proved one of the truly classic results in complexity, showing that one can simulate nondeterministic space in deterministic space with only a quadratic blowup in the space used.

Walter Savitch, Relationships Between Nondeterministic and Deterministic Tape Complexities, Journal of Computer and System Sciences, 1970.

In complexity terms, for any space constructible function s(n) ≥ log n,

NSPACE(s(n)) ⊆ DSPACE(s(n)²)
You can find the proof in an earlier post.
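The recursive midpoint idea behind the proof fits in a few lines. Here is an illustrative Python sketch, not the post's own code; the dictionary-based graph representation and the name `reach` are my choices:

```python
def reach(G, u, v, k):
    """Is there a path of length at most k from u to v in digraph G?
    Guess a midpoint and check both halves, reusing the same space:
    recursion depth O(log k), with O(log n) bits stored per frame."""
    if k == 0:
        return u == v
    if k == 1:
        return u == v or v in G.get(u, ())
    half = k // 2
    for w in G:  # deterministically try every candidate midpoint
        if reach(G, u, w, half) and reach(G, w, v, k - half):
            return True
    return False

G = {0: [1], 1: [2], 2: [3], 3: []}
print(reach(G, 0, 3, 4))  # True: the path 0 -> 1 -> 2 -> 3 has length 3
print(reach(G, 0, 3, 2))  # False: no path of length at most 2
```

On an n-vertex configuration graph one calls reach with k = n; the O(log n) recursion depth times O(log n) bits per frame gives the quadratic space bound.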

As a consequence you get PSPACE=NPSPACE, which is why you don't see NPSPACE in the zoo.

Chandra, Kozen and Stockmeyer used a modification of Savitch's algorithm to show that polynomial space can be simulated in alternating polynomial time. This relationship led to showing that many games are PSPACE-complete and played a critical role in the proof that IP = PSPACE.

Circuit-wise, Savitch's algorithm puts NL (nondeterministic log-space) in SAC1 (polynomial-size, log-depth circuits with bounded fan-in ANDs and unbounded fan-in ORs).

For a directed graph G=(V,E), let G_k=(V,E_k) where (u,v)∈E_k if there is a path of length at most k from u to v in G. One way to view Savitch's theorem is as computing G_{2k} from G_k using O(log n) additional space. Applying this graph powering log n times takes you from G=G_1 to G_n, where (u,v)∈E_n if there is a path from u to v in G.
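The powering step just described can be viewed as boolean matrix squaring. Below is an illustrative Python sketch of the iteration (the representation and names are mine, and it makes no attempt at the O(log n) additional-space bound, which is the whole point of the actual proof):

```python
def square(A):
    """From the adjacency matrix of G_k (paths of length <= k),
    compute that of G_{2k}: (u,v) is in E_{2k} iff some midpoint w
    has (u,w) in E_k and (w,v) in E_k."""
    n = len(A)
    return [[any(A[u][w] and A[w][v] for w in range(n))
             for v in range(n)] for u in range(n)]

n = 4
edges = {(0, 1), (1, 2), (2, 3)}
# G_1: paths of length at most 1 (the edges plus self-loops)
A = [[u == v or (u, v) in edges for v in range(n)] for u in range(n)]
k = 1
while k < n:  # ceil(log2 n) squarings take G_1 up to G_n
    A = square(A)
    k *= 2
print(A[0][3])  # True: 0 -> 1 -> 2 -> 3
```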

Reingold uses a similar paradigm in his result that SL = L. He shows that for undirected graphs one can do a weaker form of graph powering (using expander graphs) to similar effect, using only O(1) additional space at each step.

But after 35 years we still have no better upper bound on the deterministic space complexity of NL than the O(log² n) of Savitch's Theorem.


  1. Actually, NPSPACE is in the Zoo: http://www.complexityzoo.com/#npspace

  2. This is a good example of how STOC/FOCS have grown significantly in quality over the years. Savitch's theorem was in STOC 1969, and the proof is trivial.

  3. Ah, the benefits of hindsight. I don't think the proof of Savitch's theorem is trivial at all. Next time you teach a complexity course, ask your students to prove it and see how they complain.

  4. Funny -- I would think this is a good example of how STOC's quality has decreased over the years. Rare is the paper in STOC/FOCS that's both extremely readable and fundamentally mind-altering.

  5. (1) It's not mind-altering. What reasons, a priori, did you have for believing that PSPACE != NPSPACE? In order for the theorem to alter your mind, you must have had some incorrect preconceived notion.

    (2) Yes, graduate complexity students have proved it for themselves in the past.

  6. I am confused. Do you really like the many long, unreadable proofs currently in STOC/FOCS, whose details I'm pretty sure even the referees didn't check exactly? Can't we have a simple proof for an important theorem? For example, compare the new O(\log n) result of Fakcharoenphol, Rao, and Talwar with the very complicated O(\log n\log\log n) result of Bartal.

  7. If anything it is an example of the difference between the trivial and the seemingly trivial. As our knowledge base expands, the forefront of the past becomes the undergrad exercise of the present. But being the first to find the proof in its time is not the same as rediscovering it after several courses have put you in the mindset of related and similar proofs.

  8. First of all, STOC/FOCS is not a seriously reviewed publication. It is essentially a popularity contest (this isn't meant as an insult). You shouldn't take the fact that something is in STOC/FOCS even as evidence that it's correct, as the PC's responsibility is not to ensure such things. People should send their revised and carefully written full versions to reputable journals as a matter of course.

    No one said that simple proofs were not preferred. The non-trivial nature of a result and proof is in the insight that it provides, not necessarily the difficulty or length of the proof. Despite having a one page proof, the F-R-T result of which you speak follows and incorporates insights of Leighton-Rao, Seymour, Alon-Karp-Peleg-West, the first attempt of Bartal, an approach of Calinescu-Karloff-Rabani, and others. The very nice proof of FRT would not be nearly as useful without the insight gained from this entire body of work.

    The reason one might think of Savitch's result as "trivial" is the same reason you would call a linear-space algorithm for SAT "trivial." Because the proof is essentially: Enumerate everything in the obvious way while minimizing space as your resource. State the bound.

  9. I don't think Savitch's algorithm is "enumerating in the obvious way." On the other hand, he points out in his paper that this divide-and-conquer approach to reducing memory usage is not new (Lewis, Stearns, Hartmanis 1965).

  10. Yes, graduate complexity students have proved it for themselves in the past.

    Of course they have, but they didn't do it in five minutes as your "trivial" comment suggests. The proof now looks trivial because we know the way there.

    To be clear, the proof was never a tour de force, but claiming it was trivial disregards the historical context.

  11. First of all, STOC/FOCS is not a seriously reviewed publication. It is essentially a popularity contest (this isn't meant as an insult).

    Over the years this has been more the case at some times than at others. In the early 90s it seemed like the best way to get a paper into STOC/FOCS was to submit an incremental technical improvement on the topic du jour, whereas this seems less so nowadays. For one thing, it is harder to single out a particular subject that dominates the conference the way some once did.

    Also, while the results are not reviewed for correctness, a very talented group of friends of the Chair (a.k.a. the program committee) are asked for their opinion about the general credibility of the line of attack, the outcome, and its relevance. This judgement is quite often bang on the money (read Gladwell's Blink to see why).

  12. It is a mistake that graduate students often make to equate complexity or difficulty of a proof with its importance.

    The field has become highly competitive and STOC/FOCS papers carry an unhealthy burden. There seems to be resentment out there from people who feel that their papers are being rejected while weaker papers are being accepted for other reasons. I wish we could move to more inclusive forums. I am convinced that it would not harm the quantity or quality of results that come from our community. A number of areas such as math, physics, and OR do it successfully.

  13. It is a mistake that graduate students often make to equate complexity or difficulty of a proof with its importance.

    You are being too kind. Over the years I've seen seasoned program committee members make this mistake.

    Even today, STOC and FOCS give too much value to technical difficulty and not enough to simplicity and elegance. This fact was mentioned by others a few weeks back.

  14. Yes, that's correct. A paper essentially gets accepted to FOCS/STOC if the referee cannot understand it completely and feels that there is something complicated going on in the proofs. Of course, this is not always the case (for some very good results), but it is usually the case. Usually, if your proofs are not complicated enough, you should not send your work to these conferences, unless it improves on some previous FOCS/STOC papers (whose authors missed the simple proofs).

    In addition, the community does not have much respect for the journal versions of conference papers, and only a small fraction of SODA/STOC/FOCS papers ever appear in journals. This causes the problem of references to papers that were never properly refereed.

  15. On the topic of the misplaced appeal of technical difficulty (not only in FOCS/STOC but in journals as well) I can recount a personal rant I had a couple of times upon reading a referee report: "Trivial they say? Don't they know how hard I worked to make the proof this readable?"

    But to be truthful, I am not sure that I can always separate the trivially simple from the nontrivially simple. This pitfall can be reduced but probably not avoided altogether.

  16. Coming from a pure math background, where simplicity is highly valued, I had to teach myself to stop making my CS proofs simpler, as this only reduced the chances of the paper being published. Now I settle for the clearest presentation of the initial messy proof.

  17. I hope people know not to take comments on this weblog too seriously. Everyone who has ever had a rejection for the wrong reasons remembers it for a long time; it doesn't mean that such things happen all the time.

    From my experience on a S/F committee, I would certainly not accept a paper because of a complicated proof (although a proof that introduces a new idea, preferably simple but possibly complicated, is certainly a plus). Furthermore, I wasn't aware of any other paper that got in for such a reason.

    It's certainly scientifically very bad not to simplify proofs as much as possible. I think it's also not good career advice.

    The excitement about Dinur's new proof of the PCP theorem shows that this community does place a high value on simplicity.

  18. Well then let's address some of the bizarre statements made here:

    1. On the difficulty of coming up with Savitch's algorithm: It is on par with some of the harder problems in Chapter 1 of Structure and Interpretation of Computer Programs (SICP), a text taught to undergrads across the country.

    2. On the judgements of STOC/FOCS program committees and the relevance of the "blink" phenomenon: This is naive. Any expert in any field could submit a paper with an incorrect proof which would be OK'd by the S/F review process even though it is fundamentally flawed.

    The reason is that we know the expected answer (or the lack of one), and we usually have reasonable guesses about what the proof "should look like." Authors fool themselves into believing their own incorrect proofs easily enough; a program committee with a huge stack of papers and limited time is not going to fare any better (especially not with a two second "gut check").

    Fortunately, this doesn't seem to be too much of a problem, partially because FOCS/STOC _is_ a popularity contest. When a topic is popular, people care about it enough to read and understand the results written about it (and, in the process, the proofs are naturally checked). Most papers about significant results are circulated amongst the community long before they appear in F/S proceedings.

    Of course this also means that F/S is involved in determining the course of the field, i.e. in guiding researchers to come together to focus on particular topics. With this service comes a responsibility that shouldn't be taken lightly.

    3. On the need for more "inclusive forums": There was a vote at SODA'05 about whether certain steps should be taken so that the PC could increase the number of papers accepted. The vote was overwhelmingly against such measures, and I overheard the sentiment "I don't feel like I'm missing something that should be here."

    Yes, convincing arguments can be made that after the top 30-40 papers, there isn't a distinction (at least that the PC can make) between the next 100. But I always thought the goal was to accept just enough papers so that you don't miss those 30-40.

    For papers in my own field, I don't care what forum they are published in; I will figure out which ones are important enough to read. Where the F/S PC really helps is in distilling out the papers from _other fields_ that I should be paying attention to, since I have only limited time myself.

    4. On simplicity being "highly valued" in pure math: This is a funny thing to say. Certainly in all branches of mathematics, and in TCS, simplicity is highly valued on the level of principle. (In fact, one could argue that "progress" and "simplification" are almost synonymous in science.)

    But the discussion here is more about how research papers are judged for merit _at the time of their submission_. (Since we can't wait 10-100 years to judge their impact.) And yes, it's certainly the case that two papers proving the same theorem can be judged quite differently based on the difficulty of the proof, and that the more difficult proof will often make a theorem seem "deeper" to the referee.

    But the situation in pure mathematics is actually far worse than in TCS. In fact, this is one of the reasons why I like the TCS community so much. In general, the community is not afraid of (i.e. embarrassed by) simplicity.

    It is often the case that voluminous mathematics papers can, with enough effort, be distilled down to 5-10 pages of interesting and novel material. Some of the best mathematicians have a brilliant way of relating their intuition with remarkable clarity and familiarity. But many, many papers are rife with level upon level of purposeless abstraction meant to disguise routine proofs and trivial results. Even papers submitted to the top journals are sometimes padded to make them seem more complicated.

    5. On rewarding technical difficulty: I actually don't think that it's the "technicalities" we are trying to reward. Certainly complicated case analyses are frowned upon as being opaque and uninteresting.

    Rather, we are trying to reward theorems for which the proof seems to require more than one non-trivial step, proofs which embody a certain level of fundamental insight.

    We've all started trying to prove something only to give up later, feeling that we aren't getting anywhere. It requires a certain amount of passion and, for lack of a better word, faith, to pursue an approach even when the problem is fighting back just as fiercely.

    So I would offer that, in thinking highly of "difficult" proofs, we are actually _not_ trying to reward technical brilliance, but instead trying to honor less tangible qualities like creativity and drive. We like the idea that, although computers can churn out technical arguments, it is still only us humans that can produce the brilliant ones.

  19. It's certainly scientifically very bad not to simplify proofs as much as possible. I think it's also not good career advice.

    Disagree. The extra time one might spend simplifying the proof of a not-terribly-groundbreaking theorem past a certain point will not be rewarded and, on the contrary, will only increase the chances that the paper is rejected as trivial. You might wish to believe this is not the case, but here we have an actual example in the dismissal of Savitch's theorem as trivial.

    Some claim that the field values simplicity, failing to observe that Dinur's paper is the exception. I know first-hand of three substantial simplifications of known results by others that had a very hard time finding a forum. In fact, two of them were never accepted for publication, and the authors eventually gave up.

  20. --- we are actually _not_ trying to reward technical brilliance, but instead trying to honor less tangible qualities like creativity and drive.

    What I call the "pain factor" of the paper. Papers in conferences (this is by no means restricted to STOC/FOCS) must meet a certain pain factor to be accepted, almost independently of the relevance of the proof. Dinur's proof, while simple, certainly meets the pain factor criterion.

    I sometimes joke about having a two-line proof of P=NP that keeps getting rejected due to its low pain factor.

    Alex Lopez-Ortiz

  21. "Disagree. The extra amount of time one might spend simplifying the proof of a not-terribly-groundbreaking theorem past a certain point will not be rewarded and, to the contrary, will only increase the chances that the paper be rejected as trivial."

    The key phrase is "not-terribly-groundbreaking". It is useful to have a forum for these, but it is not clear that conferences are the right place.