Wednesday, October 30, 2002

Complexity Class of the Week: SPP, Part I

Previous CCW

With the new FOCS paper by Arvind and Kurur, "Graph Isomorphism in SPP", people have asked me why they should be interested in SPP, a class first defined in a paper by Steve Fenner, Stuart Kurtz and myself. I thought I would discuss how this class was developed and why we feel it is important.

Gill, in his seminal paper on probabilistic complexity classes, defined the class PP and asked whether the class was closed under intersection. In 1990, Fenner and Kurtz, later joined by me, decided to try a new approach to the question: consider a class defined like PP but with additional restrictions, show that this restricted class is closed under intersection, and then show that it is really the same as PP. Kurtz had a philosophical approach to the problem and defined three variants of PP: Epicurean-PP, Cynical-PP and Stoic-PP.

Recall that PP is the set of languages L accepted by probabilistic machines such that x is in L exactly when the probability of accepting is greater than the probability of rejecting. Epicurean-PP machines were happy to accept but would only barely reject--having one more rejecting path than accepting paths. Cynical-PP machines were the opposite, willing to reject in any way but would only barely accept. Stoic-PP machines stood their ground and would just barely accept or barely reject. Cynical-PP turned out to be the same as the well-studied class C=P and Epicurean-PP was co-C=P. Stoic-PP, or SPP, was new, and thus a complexity class was born.

While it was easy to show that SPP is closed under intersection, SPP is unlikely to be the same as PP, so this attempt to show PP closed under intersection failed. While we were having this discussion, sitting on the printer was a paper Richard Beigel had emailed me earlier: his paper with Nick Reingold and Daniel Spielman entitled "PP is Closed Under Intersection". Their successful approach was completely different from ours: they used rational functions to approximate the sign function.

Not to be deterred, we started studying SPP and related classes, which also led to GapP functions. Valiant had defined the class #P of functions f such that for some nondeterministic polynomial-time Turing machine M, f(x) is the number of accepting paths of M(x). GapP functions are the closure of #P functions under subtraction, or equivalently the difference, or gap, between the number of accepting and rejecting computation paths of an NP machine.
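
For the record, here are the gap formulations of these classes, where acc_M(x) and rej_M(x) count the accepting and rejecting paths of M on input x:

f ∈ GapP ⟺ f(x) = acc_M(x) - rej_M(x) for some nondeterministic polynomial-time M
L ∈ PP ⟺ for some f in GapP: x ∈ L ⇔ f(x) > 0
L ∈ C=P ⟺ for some f in GapP: x ∈ L ⇔ f(x) = 0
L ∈ SPP ⟺ for some f in GapP: f(x) = 1 if x ∈ L, and f(x) = 0 if x ∉ L

The three classes differ only in how the gap is required to behave.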

GapP functions enjoy many of the same closure properties as #P functions, such as polynomial products and exponential sums--as well as subtraction, of course. The power of subtraction made GapP a much cleaner approach to studying counting classes, and the study of GapP showed the great importance of the class SPP.

Independently of us, Ogihara and Hemachandra defined a class XP and Gupta defined a class ZUP, both of which were equivalent to SPP.

I will stop here and in a later post describe the actual properties of SPP that make it such an interesting class.

Tuesday, October 29, 2002

Talks by Manindra Agrawal

I don't usually give talk announcements in this web log, but if you are in the Boston area this week you can see Manindra Agrawal give talks about prime numbers and his new algorithm with Kayal and Saxena. This new algorithm, giving the first algorithm proven to check primality deterministically in polynomial time, will go down as one of the classic results in theoretical computer science.

Manindra is giving a non-technical talk on the history of primes at the Clay Mathematics Institute on Wednesday and technical talks on the primality algorithm at MIT on Thursday and Harvard on Friday.

Monday, October 28, 2002

Foundations of Complexity
Lesson 5: Reductions

Previous Lesson | Next Lesson

In the previous lesson we gave examples of two noncomputable sets. The set

LA = { <M> | Machine M does accept input <M>}
is computably enumerable but not computable while the set
LD = { <M> | Machine M does not accept input <M>}
is not even computably enumerable.

We also defined computable functions. In this lesson we will use computable functions to show that other languages are not computable. To do so we use the notion of a reduction. Informally, a reduction takes one decision problem and reduces it to another problem. For example, to know your current longitude you only need to know the time of day at a fixed location: the problem of computing longitude reduces to the problem of proper time-keeping. This is one way the longitude problem was dealt with before the days of GPS.

Formally we say a language A reduces to language B if there is a computable function f such that for all x in Σ*, x is in A if and only if f(x) is in B.

The power of reductions comes from the following lemma.

Lemma: Let A and B be sets such that A is reducible to B.

  1. If B is computable then A is computable.
  2. If B is computably enumerable then A is computably enumerable.
  3. If A is not computable then B is not computable.
  4. If A is not computably enumerable then B is not computably enumerable.

Lines 1 and 2 are easy to see: just compute f(x) and simulate the program for B on f(x). Lines 3 and 4 are just the contrapositives of 1 and 2 and turn out to be especially useful.
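
Line 1 of the lemma is just a one-line wrapper around the decider for B. Here is a minimal runnable sketch on a toy instance (my own example: A = even-length strings, B = even numbers, and f = length is the reduction):

def f(x):
    # the computable reduction: x is in A if and only if f(x) is in B
    return len(x)

def decide_B(n):
    # a total decider for B, the even natural numbers
    return n % 2 == 0

def decide_A(x):
    # deciding A: compute f(x) and run the decider for B on it
    return decide_B(f(x))

print(decide_A("0110"), decide_A("011"))   # True False

For line 2, replace decide_B by a machine that merely halts and accepts the members of B; the same wrapper then accepts exactly the members of A.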

For example, consider the universal Turing machine language,

LU = { (<M>,x) | Machine M does accept input x}
We have seen LU is computably enumerable. Let f(<M>)=(<M>,<M>). The function f is easily seen to be computable and reduces LA to LU. Thus by our Lemma, line 3, we have that LU is not computable.

Reductions play a major role in computational complexity so it is very instructive to see them in this context. Next lesson we will use more complicated reductions to show other simple languages are not computable.

Friday, October 25, 2002

Hard Problems

Occasionally someone comes to me and says, "I have a new algorithm for blah and it seems to work on all of the cases I can think of. Can you give me some real instances to test it on?" Now this is not my speciality--I would much rather see a proof of correctness of an algorithm. But for all those who think they have the next great algorithm for satisfiability, let me give you some useful links.

The first place to look is the problem instance collection of INFORMS, the Institute for Operations Research and the Management Sciences. They have links to all sorts of interesting problems like graph coloring, traveling salesman and combinatorial auctions.

For factoring, you can make some real money by solving the RSA Factoring Challenges.

For graph isomorphism, check out The Graph Database. It only seems to contain pairs of isomorphic graphs, but a good isomorphism tester should produce the actual isomorphism. I haven't been able to find a collection of nonisomorphic graphs that fool many isomorphism-testing algorithms. See also the Nauty algorithm.

For satisfiability, there are algorithms that seem to generate hard instances. You can also take problems like graph coloring from the INFORMS site and convert them to satisfiability questions.
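
The conversion is straightforward. Here is a minimal Python sketch (my own illustration, not something from the INFORMS site) encoding k-coloring as CNF in the usual way: a variable for "vertex v gets color c", clauses saying every vertex gets a color, and clauses forbidding adjacent vertices from sharing one.

def coloring_to_cnf(n, edges, k):
    # DIMACS-style variables: vertex v with color c -> v*k + c + 1
    var = lambda v, c: v * k + c + 1
    clauses = []
    for v in range(n):                   # each vertex gets at least one color
        clauses.append([var(v, c) for c in range(k)])
    for (u, v) in edges:                 # adjacent vertices never share a color
        for c in range(k):
            clauses.append([-var(u, c), -var(v, c)])
    return clauses

# a triangle needs 3 colors, so this 2-coloring instance is unsatisfiable
print(coloring_to_cnf(3, [(0, 1), (1, 2), (0, 2)], 2))

There is no need for "at most one color" clauses: from any satisfying assignment you can pick one color per vertex and still have a proper coloring.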

Tuesday, October 22, 2002

Infinity and the Supreme Court

Don't worry, loyal readers. Circumstances are making it difficult for me to write posts this week, but I hope to catch up soon.

The U.S. Supreme Court is taking on a case that deals with an interesting mathematical issue. The case concerns the extension of copyright terms--often called the Mickey Mouse rule since, just when Disney's copyright on Mickey is about to expire, the length of copyright gets extended.

The U.S. Constitution prohibits unlimited copyrights. The lower courts have said that a finite extension of a finite term is still finite. So lawmakers are getting around the Constitution by approximating the unconstitutional infinite term by longer and longer finite terms. This is basically a mathematical trick--creating infinity while always locally looking finite.

Will the Supreme Court put a stop to this? This will be an interesting case to watch.

Friday, October 18, 2002

Quantum Parody of Shakespeare's Hamlet III.i

To end the week I give you the following presented by Ken Regan at the Dagstuhl meeting.
To evolve, or not to evolve: that is the question:
Whether 'tis nobler in the mind to suffer
The slings and arrows of coherent waves,
Or to take arms against superpositions,
And by observing end them? To observe; to evolve---
No more; and by a measurement to say we end
The mixed states and the thousand natural shocks
NMR is heir to, 'tis a computation
Devoutly to be wish'd. To evolve, to run;
Perchance to decohere---ay, there's the rub;
For in too long a run what waves may come
When they have shuffled off magnetic coil,
Must give us pause: there's the respect
That makes calamity of so long runs;
For who would bear the whips and scorns of time,
Qubits flipped wrong, the proud man's "quantumly",
But that the dread of something after halting,
The undiscover'd light cone from whose bourn
No classical info returns, puzzles the will
And makes us rather study those models we know
Than fly to others that we know not of?
Thus deadlines do make cowards of us all;
And thus the native hue of evolution
Is sicklied o'er with the pale cast of thought,
And enterprises of great pith and moment
With this regard their funding turns awry,
To lose the name of action.---Soft you now!
The fair Ophelia! Nymph, in thy bra and kets
Be all my states remember'd.

Thursday, October 17, 2002

Scooping the Loop Snooper

Thanks to Jose Balcazar for showing me this poem by Geoffrey K. Pullum giving a proof, in verse, that the halting problem is undecidable.

Wednesday, October 16, 2002

More from Dagstuhl

The highlight of Tuesday was seeing Manindra Agrawal present the new primality algorithm. History in the making.

Graph Isomorphism is a common topic in the conference. Next to factoring, graph isomorphism is the most well-studied problem in NP not known to be in P or NP-complete. Graph isomorphism sits in co-AM, i.e. there is a two-round interactive proof system for showing that two graphs are not isomorphic. The best deterministic algorithm uses time exponential in (n log n)^0.5.

Jacobo Toran today gave a talk on the hardness of graph isomorphism. He showed that given a black box for GI, one can compute the number of paths between two nodes in a directed graph--the class known as #L functions, or equivalently the determinant of an integer matrix. He also showed that matching reduces to graph isomorphism. Whether every language in P is log-space reducible to graph isomorphism remains an interesting open question.

Later in the conference V. Arvind will present his result that GI is in SPP, or more specifically that one can determine graph isomorphism in P^SAT where every query made to the SAT oracle has at most one satisfying assignment.

Graph Isomorphism is the current algorithmic challenge for quantum computers. It is an example of the hidden subgroup problem, a special case of which was used for Shor's factoring algorithm. Perhaps the interaction between the classical and quantum theorists at this conference will help find an efficient quantum algorithm for GI.

Monday, October 14, 2002

Howdy from Dagstuhl

Howdy from Schloss Dagstuhl, a conference center in Germany that hosts weekly computer science workshops. This week I'm here for the Seminar on Algebraic Methods in Quantum and Classical Models of Computation.

The interesting open problem of the day comes from Pierre McKenzie. Consider a circuit that works on sets of nonnegative integers. Inputs are the sets {0} and {1}. The gates consist of union, intersection, complement, addition and multiplication. Addition of two sets A and B is the set consisting of x+y for x in A and y in B. Multiplication is similar.
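
Here is a minimal Python sketch of the gate semantics on finite sets. Complement is omitted: the complement of a finite set of nonnegative integers is infinite, which is part of what makes these circuits interesting and the question hard.

def set_add(A, B):
    # { x + y : x in A, y in B }
    return {x + y for x in A for y in B}

def set_mul(A, B):
    # { x * y : x in A, y in B }
    return {x * y for x in A for y in B}

# union and intersection are Python's | and &. From the inputs {0} and {1}:
one = {1}
two = set_add(one, one)                  # {2}
print(set_mul(two, {0, 1, 2, 3}))        # {0, 2, 4, 6}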

Given such a circuit with specified input sets and an integer w, is it decidable whether w is in the set generated by the output gate?

A decision algorithm for this problem would yield a way to settle Goldbach's conjecture that every even number greater than 2 is the sum of two primes. I'll leave this implication as an exercise.

Thursday, October 10, 2002

Complexity Class of the Week: P

Previous CCW

Edmonds, in his 1965 paper on matching, suggested defining efficient computation as that which runs in time polynomial in the length of the input. This became the class P, the most basic of all complexity classes.

The class P has nice properties. For example, it is model-independent, i.e., P is the same whether one uses a single-tape, multi-tape or random-access Turing machine. P is closed under subroutines: polynomial-time machines with access to an oracle for P accept languages in P. Perhaps a running time like n^150 is not efficient, but one needs the polynomial-time definition to keep the robustness of the model. In nearly every case, natural problems shown to be in P have also been shown to have algorithms with relatively low exponents.

Some have argued that P as efficient computation reflects old technology. Perhaps efficient computation should be classes like BPP (probabilistic) or even BQP (quantum). I don't know about you but the computer on my desk doesn't produce truly random bits or quantum entanglement.

P is equal to alternating log-space. Using this result, we get complete problems for P like the circuit value problem: the set of AND-OR circuits, with given inputs, that evaluate to true. For P-completeness we require the reductions to be computable in logarithmic space. P has many other natural complete sets including variations on depth-first search.
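
To make the circuit value problem concrete, here is a minimal evaluator in Python (the list-of-gates encoding is my own toy format). Evaluating the gates in order takes linear time, so the problem is easily in P; the content of completeness is that it is as hard as anything in P under log-space reductions.

def circuit_value(gates):
    # gates: tuples ('const', b), ('and', i, j), ('or', i, j) or ('not', i),
    # listed in topological order, with i and j indices of earlier gates
    vals = []
    for g in gates:
        if g[0] == 'const':
            vals.append(g[1])
        elif g[0] == 'and':
            vals.append(vals[g[1]] and vals[g[2]])
        elif g[0] == 'or':
            vals.append(vals[g[1]] or vals[g[2]])
        else:                                # 'not'
            vals.append(not vals[g[1]])
    return vals[-1]                          # value of the output gate

# (x AND y) OR (NOT x) with x = True, y = False evaluates to False
print(circuit_value([('const', True), ('const', False),
                     ('and', 0, 1), ('not', 0), ('or', 2, 3)]))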

There are many examples of problems with nontrivial polynomial-time algorithms such as matching, linear programming and primality.

Every language in P can be expressed as a first-order formula with ordering and a least-fixed-point operator.
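
For example, graph reachability--there is a path from u to v--is the classic such formula, with E the edge relation:

REACH(u, v) ≡ [lfp_{R,x,y} (x = y) ∨ ∃z (E(x, z) ∧ R(z, y))](u, v)

The least-fixed-point operator builds up the relation R in stages, much as a polynomial-time algorithm would.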

Many of the major open questions in complexity ask about the power of P, for example P = BPP, P = BQP, P = PSPACE, P = L and of course P = NP. Note that we cannot have both P = L and P = PSPACE, since L = PSPACE would violate the space hierarchy theorem.

Wednesday, October 09, 2002

Foundations of Complexity
Lesson 4: Noncomputable Computably Enumerable Languages

Previous Lesson | Next Lesson

In the last lesson we showed that the set

LD = { <M> | Machine M does not accept input <M>}
is not computably enumerable. In this lesson we show there are languages that are computably enumerable but not computable.

For a set A, let Ā be the complement of A, i.e., all of the strings of Σ* not in A. If A is computable then Ā is also computable--at the end, if we accepted then we reject and vice versa. Note this does not work in general for computably enumerable sets; switching reject and accept does not affect whether a machine halts on a given input.
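
In code the trick is literally one line. A minimal runnable sketch, with a toy decidable A (any program that always halts and decides A would do):

def decide_A(x):
    # a total decider for a toy set A: the strings containing a "0"
    return "0" in x

def decide_complement(x):
    # halts whenever decide_A halts--which is always--and flips the answer
    return not decide_A(x)

print(decide_A("101"), decide_complement("101"))   # True False

If decide_A were only guaranteed to halt on members of A, flipping its answer would tell us nothing on the other inputs, which is exactly why the trick fails for computably enumerable sets.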

Now consider the set

LA = { <M> | Machine M does accept input <M>}
LA is just the complement of LD.

Note that LA is computably enumerable, as we can just simulate M on <M>. If LA were computable then LD would be computable as well, but we know LD is not even computably enumerable. Thus we have LA as our first example of a noncomputable computably enumerable set.

Besides languages we can also consider computable functions. Consider a Turing machine with an output tape. We can view it as computing a function f from Σ* to Σ* where the value of f is the contents of the output tape after the machine enters a specified halting state. We say a total function f is computable if there is such a Turing machine computing f. We can also consider partially computable functions where f(x) may not be defined for some inputs x. On these inputs the corresponding machine does not halt.
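
In Python terms (a toy illustration of the definitions, not part of them): a total computable function halts with output on every input, while a partial computable function may run forever on inputs where it is undefined.

def double(x):
    # total: halts on every input with xx on the output tape
    return x + x

def stubborn(x):
    # partial: defined only on strings starting with "1"
    while not x.startswith("1"):
        pass                        # runs forever, so stubborn(x) is undefined
    return x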

In the next lesson we will use computable functions to show that other languages are not computable or computably enumerable.

Monday, October 07, 2002

Wigderson Paper on Work of Sudan

Avi Wigderson has posted on his publications page an article about the research of Madhu Sudan, the recent Nevanlinna Prize recipient. The paper, written for a broad mathematical audience, gives a nice description of Madhu's work on probabilistically checkable proofs and error-correcting codes.

Friday, October 04, 2002

Chaitin's Omega - The Halting Probability

Chaitin's Omega is the most compact way possible to encode the halting problem. Fix a prefix-free universal Turing machine U; that is, if U(p) and U(q) both halt then p is not a proper prefix of q, and vice versa. We define Chaitin's Omega by
Ω = Σ_{p : U(p) halts} 2^{-|p|}.
By Kraft's Inequality, Ω ≤ 1. Since U halts on some p but not on others, we have 0 < Ω < 1. Sometimes Ω is called the halting probability because it is the probability that U halts when run on an infinite sequence of random bits.

One can determine whether U(p) halts from only the first |p| bits of Ω. Let n = |p| and let Ω_n be Ω truncated to its first n bits. We have Ω_n < Ω < Ω_n + 2^{-n}. Define Ω_s just as Ω above, except that we sum only over those p of length at most s such that U(p) halts within s steps. Note we have

lim_{s→∞} Ω_s = Ω,

and Ω_s is computable from s. So we find the smallest s ≥ n such that Ω_s > Ω_n. Now U(p) halts if and only if it halts within s steps: otherwise Ω ≥ Ω_s + 2^{-n} > Ω_n + 2^{-n} > Ω, a contradiction.
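
Here is the procedure as a Python sketch. The helpers omega_s(s) (computing Ω_s) and halts_within(p, s) (running U on p for s steps) are hypothetical stand-ins for an actual simulator of the fixed machine U; only the outer logic is the point.

def halts(p, omega_prefix):
    # decide whether U(p) halts, given the first n = len(p) bits of Omega
    n = len(p)
    omega_n = sum(int(b) * 2 ** -(i + 1) for i, b in enumerate(omega_prefix))
    s = n
    while omega_s(s) <= omega_n:    # terminates since Omega_s -> Omega > Omega_n
        s += 1
    return halts_within(p, s)       # U(p) halts iff it halts within s steps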

Consider Ω^A, which has the same definition as Ω except that U now has access to an oracle for A. Rod Downey asks whether this is well defined, i.e., machine independent. Is it degree invariant, that is, if A and B have the same Turing degree does Ω^A have the same degree as Ω^B?

If the answer is yes then, according to Rod, it solves a long-standing question of Martin. Note that you cannot necessarily even compute A from Ω^A.

Wednesday, October 02, 2002

Foundations of Complexity
Lesson 3: Universal Turing Machines and Diagonalization

Previous Lesson | Next Lesson

A Universal Turing Machine is a machine so powerful that it can simulate any other Turing machine. Initially it seems amazing that such a machine can exist, but think about the microprocessor that sits in the computer you are now using. Every program that you use--your word processor, the spreadsheet, the browser, the mp3 player--runs as code on this one processor. The processor acts like a universal Turing machine. Another example is an interpreter for a language like Java. Suppose we had a program written in C++. The Java interpreter can run code that lets it interpret C++ and thus run any C++ program. This works for any other language, and thus a Java interpreter is also a universal Turing machine.

What we have done is to consider programs as data themselves. Fix a programming language. For a machine M let <M> be the binary encoding of the program describing M. Let LU be the set of pairs (<M>,x) such that machine M accepts input x. LU is a computably enumerable set as we can create a machine U that simulates M on input x. The machine U is a universal Turing machine.
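
A minimal Python sketch of the same idea, treating programs as data: here a "machine" is Python source text defining a function accept, and Python itself plays the universal machine. This is an illustration, of course, not a formal Turing machine encoding.

def U(program_source, x):
    # parse the program text, then run it on x; like a true simulation,
    # this may never return if the simulated machine loops on x
    env = {}
    exec(program_source, env)
    return env["accept"](x)

# a trivial machine that accepts exactly the string "ab"
print(U('def accept(x): return x == "ab"', "ab"))   # True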

We now show that there is a language that is not computably enumerable. Let LD be the set of <M> such that machine M does not accept <M>. Suppose LD is computably enumerable. Then there must be some machine N such that N(<M>) accepts if and only if <M> is in LD. We have two cases:

  1. N(<N>) accepts: <N> is in LD so by definition of LD, N does not accept <N>, a contradiction.
  2. N(<N>) does not accept: <N> is not in LD so by definition of LD, N accepts <N>, a contradiction.
This kind of argument is called diagonalization. It is the main technique to show that problems cannot be computed.
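
The self-reference is easy to mimic in code, treating acceptance as a boolean for readability. A sketch of the argument, where accepts(m, x) is the hypothetical function--returning True exactly when the machine encoded by m accepts x--whose impossibility we are proving:

def N(m):
    # the machine recognizing LD, built from the supposed accepts
    return not accepts(m, m)

# Feed N its own encoding <N>:
#   if N(<N>) returns True, then accepts(<N>, <N>) was False,
#     i.e. N does not accept <N>--but it just did;
#   if N(<N>) returns False, the contradiction runs the other way.
# So no such accepts can exist, and LD is not computably enumerable.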

Step back for a second. We have shown that the language LD cannot be computed by a computer. Any computer. Ever.

Tuesday, October 01, 2002

SIGACT News

Lane Hemaspaandra's Complexity Column in the September SIGACT News has an interesting article by Marcus Schaefer and Chris Umans on problems complete in higher levels of the polynomial-time hierarchy. Also of interest for complexity theorists, Bill Gasarch's Book Review Column has a joint review of Computability and Complexity Theory by Homer and Selman and The Complexity Theory Companion by Hemaspaandra and Ogihara.