## Thursday, June 18, 2009

### Nondeterministic Parallelism

Can we define a class NNC such that NC ⊆ RNC ⊆ NNC? Is NNC known by some other name? It is clear that NNC ⊆ NP, but the reverse is not obvious.
NC is Nick's Class (named after Nick Pippenger) and can be characterized either by a PRAM model with polynomially many processors and polylogarithmic time, or by polylogarithmic-depth, polynomial-size circuits¹.
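As a toy illustration of the PRAM characterization (my own sketch, not part of the original post): summing n numbers, with one simulated processor per pair, takes only logarithmically many rounds, the kind of polylog-time, poly-processor computation NC captures.

```python
def parallel_sum(xs):
    """Simulate a PRAM summing n values: in each round, 'processor' i adds
    elements 2i and 2i+1, halving the list. Total depth is ceil(log2 n),
    i.e. polylogarithmic time with polynomially many processors."""
    vals = list(xs)
    rounds = 0
    while len(vals) > 1:
        # One parallel round: all pairwise additions happen simultaneously.
        vals = [vals[i] + (vals[i + 1] if i + 1 < len(vals) else 0)
                for i in range(0, len(vals), 2)]
        rounds += 1
    return vals[0], rounds
```

The sequential loop here only simulates the rounds; on an actual PRAM each round's additions run in parallel.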

So what about nondeterministic NC or NNC? NNC is just the same as NP.

First let's consider randomized NC, RNC. Here each processor has a limited number of random bits, but they can share their random bits through memory, a technique used, for example, to find maximal matchings.

NNC, particularly if you want to have RNC ⊆ NNC, needs to have a similar definition. Suppose all the processors could act nondeterministically. Here is an NNC algorithm for 3SAT with n variables and m clauses.
1. Processor i (1 ≤ i ≤ n) guesses the ith bit of a potential satisfying assignment and writes it in memory location i.
2. Processor j (1 ≤ j ≤ m) checks whether this assignment makes clause j true.
3. If all processors in the second round accept then accept (which takes another log m rounds).
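The verification stage of the algorithm above can be simulated sequentially. A minimal Python sketch (the nondeterministic guessing of step 1 is replaced by a supplied assignment, and the signed 1-indexed clause encoding is my own convention):

```python
def verify_parallel(clauses, assignment):
    """Simulate steps 2-3: 'processor' j checks clause j in one round, then
    the m answers are ANDed pairwise in a binary tree, ceil(log2 m) rounds.

    A clause is a list of nonzero ints: +k is variable k, -k its negation
    (1-indexed); assignment is a list of 0/1 values.
    """
    # Round 1: each processor evaluates its clause independently.
    bits = [any((assignment[abs(lit) - 1] == 1) == (lit > 0) for lit in clause)
            for clause in clauses]
    rounds = 1
    # AND tree: each round halves the number of answers.
    while len(bits) > 1:
        bits = [bits[i] and (bits[i + 1] if i + 1 < len(bits) else True)
                for i in range(0, len(bits), 2)]
        rounds += 1
    return bits[0], rounds
```

With m clauses this uses 1 + ceil(log2 m) rounds, matching the log m term in step 3.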

Since 3SAT is NP-complete under reductions computable easily in parallel, we have NNC = NP. The reverse direction is a simple simulation.

A simple observation but a very important point: Every nondeterministic computation has an easy nondeterministic parallelization.

Alternatively, you can define RNC or NNC with an additional random or nondeterministic input (in either the circuit or the parallel model) and get the same classes.

RL and NL, on the other hand, handle randomness and nondeterminism differently. Here you have one processor using polynomial time and logarithmic space. While the processor can use a polynomial number of random/nondeterministic bits, it cannot remember them all. So RL and NL are considerably weaker; in particular, both are contained in P.
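To make the contrast concrete, here is a toy sketch (my own illustration, not from the post) of an NL-style algorithm for s-t connectivity: the machine's entire memory is a current vertex and a step counter, O(log n) bits, and each nondeterministic guess is consumed the moment it is read. The Python simulation explores the guesses by recursion; only the machine being simulated is log-space.

```python
def nl_reachable(adj, s, t, n):
    """Simulate a nondeterministic log-space machine deciding s-t
    connectivity in a directed graph on n vertices. The configuration is
    just (current vertex, step count); each guessed next vertex is used
    immediately and never stored, mirroring read-once access to the
    nondeterministic bits."""
    def accept(v, steps):
        if v == t:
            return True
        if steps >= n:        # a simple path never needs more than n steps
            return False
        # Existential branching over the single guessed symbol.
        return any(accept(w, steps + 1) for w in adj.get(v, ()))
    return accept(s, 0)
```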

Noam Nisan wrote the paper "On Read-once vs. Multiple Access to Randomness in Logspace," which explores these definitions of randomness in more depth.

¹ I'm ignoring uniformity issues in this post.

1. I have a question that is somewhat related to this.

Biomolecular computation (based on reaction diffusion) is many orders of magnitude slower than electron-based computation. However, there might be 10^23 different processors, each working on a separate piece of a problem, giving possible parallelism that is orders of magnitude greater than what is achievable through multicore silicon chips.

Is there a rigorous problem class that captures "massively parallelizable problems?" Meaning: problems that are solvable faster with a gazillion very slow processors, instead of just a lot of very fast processors.

These biomolecular processors would be inherently nondeterministic, or random-ish, or both. Perhaps that matters.

2. This is quite interesting. It's been a while since I learned these concepts, but doesn't this mean that the "checking" step of NP is vastly easier than the "guessing" step of NP?

The checking step is quite obviously easily parallelizable, but the guessing step is not (at least not obviously so).

Does this have any implications for P=NP? It seems odd to me that we can so easily prove a result about the difficulty of the checking v. the guessing step.

3. > The checking step is quite obviously easily parallelizable, but the guessing step is not (at least not obviously so).
>
> Does this have any implications for P=NP? It seems odd to me that we can so easily prove a result about the difficulty of the checking v. the guessing step.

As Lance mentioned (a corollary of what he mentioned), log-space machines with two-way access to a polynomially long nondeterministic tape can do the whole of NP.

When you say "guess" and "check," conceptually I don't see much of a difference. For example, when a log-space nondeterministic verifier is used, the "guessed bits" can simply be the computation of a nondeterministic machine (in, e.g., logspace we can verify the validity of the computation). When you say "check," you really refer to a deterministic log-space computation. Under this perspective you are really comparing two different types of computation. And I don't think we have any clue about (or the tools to understand) what a machine can do within log-space.

On a different note, things are more interesting in the case of random bits (instead of nondeterministic ones) when we consider parallel or space-bounded machines, where the random bits can be revisited.

I believe that Nisan's paper, which is amazingly elegant, takes an interesting step toward understanding this issue.

4. I'm still surprised that such an easy conceptual proof can establish NNC=NP.

Doesn't the proof gloss over whether all of the reductions from {NP problems} to SAT are in NC?

I mean, just because SAT is solvable in NNC doesn't mean that {NP problems} are solvable in NNC, unless the reductions to SAT are in NC.

I'm assuming the reductions from SAT to 3SAT are trivial, but are the reductions from {NP problems} to SAT trivially NC?

Anyway, I'm no expert, but just want to understand. It seems like a deep result to someone who hasn't followed the field closely.

With the community of readers here, I'm hoping someone will be kind enough to explain :)

5. > are the reductions from {NP problems} to SAT trivially NC?

Yes. It even works in restricted subclasses of NC. The tableau in the proof of the Cook-Levin Theorem is incredibly regular, so it is very easy to compute.

6. "If all processors in the second round accept then accept (which takes another log m rounds)." Why is it log(number of clauses) rounds?