Comments on Computational Complexity: Nondeterministic Parallelism
Lance Fortnow (http://www.blogger.com/profile/06752030912874378610)

Anonymous, 2009-06-23 15:40:
<i>are the reductions from {NP problems} to SAT trivially NC?</i>

Yes. It even works in restricted subclasses of NC. The tableau in the proof of the Cook-Levin Theorem is incredibly regular, so the reduction is very easy to compute.

Drew, 2009-06-22 22:15:
I'm still surprised that such an easy conceptual proof can establish NNC = NP.

Doesn't the proof gloss over whether all of the reductions from {NP problems} to SAT are in NC?

I mean, just because SAT is solvable in NNC doesn't mean that {NP problems} are solvable in NNC, unless the reductions to SAT are in NC.

I'm assuming the reductions from SAT to 3SAT are trivial, but are the reductions from {NP problems} to SAT trivially NC?

Anyway, I'm no expert; I just want to understand. It seems like a deep result to someone who hasn't followed the field closely.

With the community of readers here, I'm hoping someone will be kind enough to explain :)

Anonymous, 2009-06-20 23:45:
<i>The checking step is quite obviously easily parallelizable, but the guessing step is not (at least not obviously so). Does this have any implications for P=NP? It seems odd to me that we can so easily prove a result about the difficulty of the checking vs. the guessing step.</i>

As Lance mentioned (as a corollary of what he mentioned), log-space machines with two-way access to a polynomially long nondeterministic tape can decide all of NP.

When you say "guess" and "check," I don't see much of a conceptual difference. For example, when a log-space nondeterministic verifier is used, the "guessed bits" can simply be the computation of a nondeterministic machine (in log-space, for example, we can verify the validity of the computation). When you say "check," you really refer to a deterministic log-space computation. From this perspective you are really comparing two different types of computation, and I don't think we have any clue about (or tools to understand) what a machine can do within log-space.

On a different note, things are more interesting in the case of random bits (instead of nondeterministic ones) when we consider parallel (or space-bounded) machines, where the random bits can be revisited.

I believe that Nisan's paper, which is amazingly elegant, takes an interesting step toward understanding this issue.

Drew, 2009-06-20 06:04:
This is quite interesting. It's been a while since I learned these concepts, but doesn't this mean that the "checking" step of NP is vastly easier than the "guessing" step?

The checking step is quite obviously easily parallelizable, but the guessing step is not (at least not obviously so).

Does this have any implications for P=NP? It seems odd to me that we can so easily prove a result about the difficulty of the checking vs. the guessing step.

Aaron Sterling, 2009-06-18 18:36:
I have a question that is somewhat related to this.

Biomolecular computation (based on reaction-diffusion) is many orders of magnitude slower than electron-based computation. However, there might be 10^23 different processors, each working on a separate piece of a problem, so potentially parallelism that is orders of magnitude greater than what is achievable through multicore silicon chips.

Is there a rigorous problem class that captures "massively parallelizable problems"? Meaning: problems that are solvable faster with a gazillion very slow processors than with just a lot of very fast processors.

These biomolecular processors would be inherently nondeterministic, or random-ish, or both. Perhaps that matters.
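The recurring point in the thread, that the "check" step of SAT parallelizes easily, can be made concrete with a small sketch. Evaluating a CNF on a guessed assignment corresponds to a bounded fan-in circuit of logarithmic depth: one OR tree per clause, followed by a single AND tree over the clauses. The Python below is an illustration of that depth bound, not a circuit simulator; the signed-integer clause encoding and the function names are assumptions made for the example.

```python
import math

def eval_clause(clause, assignment):
    # A clause is a list of nonzero ints: +i means variable i, -i its negation.
    # All literals can be tested simultaneously; an OR tree combines them.
    return any(assignment[abs(lit)] == (lit > 0) for lit in clause)

def check_certificate(cnf, assignment):
    # Clause evaluations are independent of one another, so a parallel
    # machine computes them all at once; a final AND tree combines them.
    return all(eval_clause(clause, assignment) for clause in cnf)

def circuit_depth(cnf):
    # Depth of the corresponding bounded fan-in circuit:
    # log2(max clause width) for the OR trees, plus log2(#clauses) for the
    # AND tree -- logarithmic in the formula size, hence NC.
    width = max(len(clause) for clause in cnf)
    return (math.ceil(math.log2(max(width, 2)))
            + math.ceil(math.log2(max(len(cnf), 2))))

# (x1 or not x2) and (x2 or x3) and (not x1 or x3)
cnf = [[1, -2], [2, 3], [-1, 3]]
assignment = {1: True, 2: True, 3: True}  # the nondeterministic "guess"
print(check_certificate(cnf, assignment))  # True: the guess satisfies the CNF
print(circuit_depth(cnf))                  # 3: log-depth check
```

The log-depth bound is exactly what places certificate checking in NC; the "guess" step has no analogous shallow circuit, which is the asymmetry the commenters are discussing.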