Wednesday, February 03, 2010

Time Space Tradeoffs for SAT: good results but...

This is a real conversation between BILL and STUDENT (a Software Engineering Masters Student who knows some theory). As such there are likely BETTER arguments BILL or STUDENT could give for their point of view. I invite you to post such arguments as comments. (ADDED LATER: Some of the comments below DO give VERY GOOD arguments for why the results are interesting. I URGE anyone reading this now to read the comments. They are an integral part of this post, more than usual.)

BILL: In 2007 the best student paper award at COMPLEXITY went to Ryan Williams for proving that SAT cannot be solved in time O(n^1.801...) and space O(n^o(1))!! The constant 1.801... is actually 2cos(π/7). (The paper is Time-Space Tradeoffs for Counting NP Solutions Modulo Integers. Note that the space bound n^o(1) includes log space.)
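
(Aside: the constant is just arithmetic; here is a two-line Python check, if you want to verify it yourself:)

    import math

    # 2*cos(pi/7): the exponent in Williams' time lower bound
    print(2 * math.cos(math.pi / 7))   # 1.8019377358048383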

STUDENT: Doesn't everyone think SAT is not in P?

BILL: YES.

STUDENT: So what's the big deal in proving that SAT is not in time O(n^1.801...) and not much space?

BILL: It's a stepping stone towards better lower bounds!

STUDENT: Are better lower bounds using these techniques likely?

BILL: Well, it is thought that these techniques cannot possibly get beyond showing that SAT is not in time O(n^2).

STUDENT: So... why is it such a good result?

BILL: Actually he proved more than that. He proved that, for all but one prime p, finding the number of satisfying assignments mod p requires O(n^1.801) time if you insist on O(n^o(1)) space.
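
(Aside: to pin down the problem BILL is describing, here is a minimal brute-force sketch in Python. It is my own illustration, not anything from Williams' paper; it takes 2^n time, and the theorem says you cannot get dramatically below that while also using only n^o(1) space:)

    from itertools import product

    def count_sat_mod_p(clauses, n, p):
        """Count satisfying assignments of a CNF formula, mod p, by brute force.

        clauses: list of clauses, each a list of nonzero ints (DIMACS-style):
        literal i means "variable i is true", -i means "variable i is false".
        """
        count = 0
        for assignment in product([False, True], repeat=n):
            # an assignment satisfies the formula if every clause has a true literal
            if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in clauses):
                count = (count + 1) % p
        return count

    # (x1 OR x2) AND (NOT x1 OR x2) has 2 satisfying assignments; 2 mod 7 = 2
    print(count_sat_mod_p([[1, 2], [-1, 2]], n=2, p=7))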

STUDENT: Is the number of satisfying assignments mod p problem a problem people care about?

BILL: Complexity Theorists care about it!

STUDENT: I meant real people!

BILL: Um, ur...

STUDENT: Are there any interesting upper bounds on the number of satisfying assignments mod p problem so that the lower bounds are a counterpoint to them?

BILL: YES! Polynomial-time upper bounds are known for counting the satisfying assignments of planar read-twice monotone formulas mod 7! (See this paper by Valiant.)

STUDENT: Do people care about the number of solutions of planar read-twice monotone formulas mod 7?

BILL: Complexity Theorists care about it!

STUDENT: I mean real people! Oh. Are we entering an infinite loop?

BILL: The planar read-twice-monotone-formulas-mod-7 result is an example of a very powerful framework for algorithms.

STUDENT: So that paper may be important, but why is the SAT time-space tradeoff paper important?

I like these results, but STUDENT raises a good question: Are Time Space Tradeoffs for SAT important? Is SAT mod p important? I would argue YES, since they are absolute lower bounds, and their techniques, combined with something else, may yield more interesting results. However, if you have better arguments please leave comments.

Should Time Space Tradeoffs for SAT be taught in a grad theory course? There is a proof in Arora-Barak that is not hard. Hence it likely will be taught. If you teach it I hope you have students challenge you as STUDENT challenged me. It will mean they are awake and care. However, be prepared to answer them.

When this was taught in a seminar recently the following question arose. In the paper Time-Space Lower Bounds for Satisfiability, Fortnow, Lipton, van Melkebeek, and Viglas showed the following: for all c < φ (the golden ratio) there exists d such that SAT cannot be solved in time O(n^c) and space O(n^d). The techniques are such that the result could have been proven in the 1970's. So why wasn't it? The seminar speculated that back then people were not pessimistic enough about lower bounds to consider a result this weak worth looking at. Now people are.
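
(In the TISP notation of later papers, where TISP(t, s) denotes the problems solvable simultaneously in time t and space s, their statement reads:)

    \[ \forall\, c < \phi = \tfrac{1 + \sqrt{5}}{2} \approx 1.618 \;\; \exists\, d > 0 : \quad \mathrm{SAT} \notin \mathsf{TISP}(n^{c}, n^{d}) \]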

18 comments:

  1. I think the overall point is that the progress in lower bounds has been extremely pathetic by any measure.

    (Almost) 40 years of research into NP-Completeness and we can't prove that solving SAT needs super-linear time.

    This is bound to shock students when they learn it for the first time.

  2. i dont even understand lowerbounds.

  3. Don't make me turn around in my grave. I've been sleeping well all these years. But ur ignorance makes me wanna wake up an slap u.

  4. Well, I think Paul is right. Your comment is making me thirsty. I haven't had my Coca-Cola for quite some time.

    Paul, I regret never having co-authored with you. If you do get up, make sure to wake me up too ... I'd like to rectify my Erdoes number.

  5. @Gian-Carlo, your arrogant attitude hasn't changed, has it? All these years, you were striving to imitate me.
    By the way, I was the one who introduced you to the notion of lower bounds, remember?

  6. One minor typo I believe:

    BILL: "...finding the number of satisfying assignments mod p requires O(n^1.801) *space* if you insist on O(n^o(1)) space."

    I think the first 'space' is supposed to be 'time'.

  7. If for nothing else, these lower bounds are good for humor purposes.

  8. Deskin Miller-

    Thanks- I fixed it.

  9. could give for there point of view

    s/there/their/

  10. also

    s/resuls/results/

  11. Stating the result as something explicit about SAT is technically stronger but obscures the larger context. The result immediately implies that NTIME(n) is not contained in TIME-SPACE(n^c, n^d) for suitable c, d, even for a random-access TM (and indeed that is what is proved directly).

    The natural line of work that this relates to was that of Kannan (and Karp-Lipton) around 1978-1980. Interestingly, time-space tradeoff lower bounds were a focus of study at this time for non-uniform and structured computational models. The first strong general time-space tradeoff lower bounds were shown by Borodin-Cook for multi-output problems in 1980-82.

    I attribute the fact that these sorts of tradeoff lower bounds for SAT were not found until Fortnow developed the key technique in the late 1990's to the impact of Baker, Gill, and Solovay's result that the P vs NP question does not relativize. This meant that the sorts of TM reductions used in proofs of these results were unlikely to answer the big questions about the complexity of SAT, so a large part of the community shifted to combinatorial methods for non-uniform computation. The focus of work in uniform computation after BGS and Karp-Lipton shifted to understanding connections to circuit complexity and randomization, and hence to problems such as the NP-completeness of sparse sets considered by Mahaney et al. and the BPP in Sigma_2^P ∩ Pi_2^P result of Sipser et al.

    Lance made the point in the talks on his time-space tradeoff paper that people had over-reacted to the BGS result and as a result had dropped a potentially fruitful line of research. It is pretty clear to me that had his paper appeared in 1981 or 1982, it still would have been considered a major contribution.

  12. Let's define "real people" to be "software engineers", since that is who was asking. (It's not the best possible definition, but whatever...)

    Of course "real people" do care about SAT; it is used in the real world all the time. There's a whole conference on it.

    Real people should also care about counting SAT assignments modulo a prime, since this (coupled with randomness and Valiant-Vazirani) would already be enough to do approximate model counting, which can be used to simulate different kinds of probabilistic reasoning with SAT.

    Linear time, tiny-space algorithms for these two problems would totally revolutionize applications. Why would "tiny-space" be nice? Because sometimes the instances you get from a reduction of your huge processor verification problem to SAT are so big, the instance can't fit in main memory all at once! The "tiny-space" requirement means you can still carry out the algorithm without running out of memory, without even holding the whole instance in memory at any given time.
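
    (To illustrate the flavor with a toy one-pass sketch — my own illustration, not from any of the papers: even just evaluating a fixed assignment against a streamed formula needs only the n-bit assignment plus O(1) extra state, never the whole instance.)

        def streaming_eval(clause_stream, assignment):
            """One-pass CNF evaluation: clauses arrive one at a time
            (DIMACS-style literal lists), so only the n-bit assignment and
            O(1) extra state ever reside in memory -- never the formula."""
            for clause in clause_stream:
                if not any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause):
                    return False  # one falsified clause falsifies the conjunction
            return True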

    The time-space lower bound work shows that those algorithms just aren't out there.

    You don't like the results? Think they're too weak? Read the papers! Find the weaknesses! Prove something/anything better! Let's make the world of lower bounds a little less pathetic! Go, team!

  13. NP_Mod_P_Equals_CoNP_Mod_P_Equals_Zero (1:43 PM, February 03, 2010)

    Mr. Gasarch, your conversation started going south the moment you said "Actually, he proved more than that..." and introduced Mod_k_SAT and things of that ilk.

    It is a cardinal mistake to assume that what we, within a narrow subfield of CS theory, driven by our internal logic, consider important and interesting is also important or interesting to science or society at large.

    A better response would be to delineate what we believe and what we know. For example, as far as I know, we don't know how to solve Graph s-t connectivity by an algorithm that runs in time O(n polylog(n)) and uses O(polylog(n)) space. A reasonable belief is that such an algorithm doesn't exist. Yet, we've just realized, thanks to Fortnow's result and its follow-ups, that the NP-Complete problem SAT doesn't admit such an algorithm. Before this, we did have TS = Omega(n^2) lower bounds, but it was a strange "internal state" of the field that we didn't know how to prove such a result for SAT.

    Ranting on a tangential note, in my humble opinion, the entire body of work in complexity theory research that mentions "Mod something", for something > 2, has been extremely unproductive. I sincerely believe that we tend to get carried away by the mathematical curiosity and some of the beautiful maths we can do, and forget the bigger picture. I will go out on a limb and say that the field would be no poorer without the (beautiful) results of Razborov and Smolensky on bounded-depth circuits with Mod gates. Similarly, complexity theory would've done just fine without all the follow-ups for Toda's theorem for various ModClasses and ModProblems.

    Another, not entirely dissimilar, red herring for the field is proof complexity. While motivated by an extremely important question (NP vs. co-NP), this field has been the analogue of "bounded-depth-circuit-complexity-with-Mod-gates": the classes of "algorithms" for which lower bounds are proved are very specialized in nature, to the point that the proofs tell us less about proof complexity than about the artificial limitations of the models. Moreover, this field seems to be even less of a generator of new ideas, relying instead on transferring ideas from circuit complexity and other fields.

    A pity not enough people work on communication complexity: this is a field that takes the opposite view to circuit- and proof-complexity. Instead of crimping the model in highly artificial ways, it avoids looking at the minute details of the model, instead focusing on a very broad class of resources (bits exchanged). To me, there is greater hope that this field will connect better with the rest of mathematics (topology, information theory, etc.) and get us somewhere useful. From its inception, despite being a very "relaxed" model of resource consumption in computing, this field has shown itself to be rather useful and closely relevant to more pragmatic computational models such as VLSI, Sketching and Streaming algorithms.

    I might, of course, be entirely myopic and totally wrong, but I'd bet that none of the major complexity questions will be answered by techniques however remotely inspired by Mod-something (for something > 2).

  14. the entire body of work in complexity theory research that mentions "Mod something", for something > 2, has been extremely unproductive.

    Even if your entire rant is correct, that doesn't mean the work was not worth doing. How are we supposed to know that a particular area fails to hold the Answer, unless some researchers explore the area? Intuition alone? Nobody's intuition has been good enough to get past the major barriers, so I think it's better to rely on concrete results than anybody's feelings about such matters.

    Besides, there's something weird going on with modulo, if you consider the work of Cai and others that has come out of Valiant's holographic algorithms. (Number of solutions mod "Mersenne" integers, for example.) Maybe not the notion of mod you meant, but you were wielding an awfully broad brush, so it was hard for me to know where your notion of mod ended. I certainly believe holographic algorithms/signature theory might tell us significant things about complexity that we don't already know.

  15. I don't know much about CC so corrections are welcome to the following.

    It seems to me that CC is not going anywhere right now. BNS stopped at log n, and generally I don't expect the discrepancy method alone to achieve much, by Razborov-Rudich (unless we get something that crosses the barrier in general), since discrepancy can be approximated well in polytime (cf. the last chapter of Arora-Barak).

    CC is a very interesting model, but it seems that we are facing the very same problem we had in circuit complexity with mod m gates, i.e. crossing the log n degree/depth barrier.

    Finally, proving results in proof complexity has its own extra problems. AFAIK, we don't even have lower bounds for the proof system corresponding to AC^0 with mod 2 gates.

  16. Even if your entire rant is correct, that doesn't mean the work was not worth doing. How are we supposed to know that a particular area fails to hold the Answer, unless some researchers explore the area? Intuition alone? Nobody's intuition has been good enough to get past the major barriers, so I think it's better to rely on concrete results than anybody's feelings about such matters.

    This. Mathematicians have pursued things which were intellectually satisfying for a long time, with little regard for immediate practical consequences. The end result of course is lots of stuff that's very useful. Sure, there's lots of stuff that's (so far) useless too, but I doubt anyone could have predicted which was which years ago. Maybe I'm being a bit too idealistic, but "What have you done for me lately?" is what business is for, not academia. (Nothing against business -- it's more effective and efficient than academia for a lot of things.)

  17. Do you think that Rota would have co-authored with Erdoes .. if he had another chance?

  18. Before this, we did have TS = Omega(n^2) lower bounds, but it was a strange "internal state" of the field that we didn't know how to prove such a result for SAT.

    This is not really true. We only knew such things for multi-output problems, where the comparison is not fair, not for decision problems like SAT.
