For many years the standard example of a probabilistic algorithm checked whether a number was prime.

A Fast Monte-Carlo Test for Primality, Robert Solovay and Volker Strassen, SICOMP 1977.

Probabilistic Algorithm for Testing Primality, Michael Rabin, Journal of Number Theory, 1980.

Let n be an odd number with n≠r^{q} for r,q>1. Fermat's little theorem shows that for any a, 1<a<n, if a^{n}≠a (mod n) then n is composite. But for a special set of composites called the Carmichael numbers, this test fails for every a. Miller adds the condition that a^{(n-1)/2^{k}}-1 and n are relatively prime for every 2^{k} dividing n-1 and shows that, assuming the Extended Riemann Hypothesis, every composite n has a witness a = O(log^{2} n) failing the test. This gives a polynomial-time (in the number of bits to express n) algorithm for primality assuming ERH. Rabin showed that one could instead choose random a's, giving a probabilistic algorithm with no assumption. The resulting test is now called the Miller-Rabin primality test. Solovay and Strassen give a different test based on the Jacobi symbol.
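The Miller-Rabin test described above can be sketched in a few lines (this is an illustrative implementation, not code from either paper; the round count 20 is an arbitrary choice giving error probability at most 4^{-20}):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin sketch: returns False only on proven composites;
    returns True on primes always, and on composites with probability
    at most 4**(-rounds)."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)  # random base, 1 < a < n - 1
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is certainly composite
    return True  # no witness found: n is probably prime
```

Note that it correctly rejects Carmichael numbers such as 561 = 3·11·17, which fool the plain Fermat test for every base a coprime to n.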

Of course now we know that primality has a polynomial-time algorithm with no unproven assumptions. So why did I choose these obsolete algorithms for favorite theorems? They are not so much obsolete as considerably more efficient than Agrawal-Kayal-Saxena. But more importantly they served as the motivation for the study of probabilistic computation, much like Shor's Quantum Factoring Algorithm motivates quantum computing today. Without studying probabilistic computation we would have had no modern cryptography, no derandomization results, no Toda's theorem, no interactive proofs and no probabilistically checkable proofs with their hardness of approximation consequences.

So what's the best current example of something in BPP (or RP, or coRP), not known to be in P?

Identity testing for low-degree multivariate polynomials (given as formulas or as circuits): it's in coRP, since inequivalent low-degree polynomials disagree on 'most' points.
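That coRP test can be sketched as follows (names and the size of the sample set are my own illustration; the guarantee is the Schwartz-Zippel lemma: distinct polynomials of total degree at most d agree at a random point of S^n with probability at most d/|S|):

```python
import random

def identity_test(p, q, nvars, degree_bound, trials=20):
    """One-sided identity test for black-box polynomials p and q.
    Returns False only when p and q are certainly different;
    on equal polynomials it always returns True."""
    # Sample set much larger than the degree, so each trial catches
    # a genuine difference with probability >= 1 - degree_bound/|S|.
    S = range(100 * degree_bound)
    for _ in range(trials):
        point = [random.choice(S) for _ in range(nvars)]
        if p(*point) != q(*point):
            return False  # a disagreement proves p != q
    return True  # probably identical

# Example: (x + y)^2 versus its expansion, as two-variable black boxes.
same = identity_test(lambda x, y: (x + y) ** 2,
                     lambda x, y: x * x + 2 * x * y + y * y,
                     nvars=2, degree_bound=2)
```

The one-sided error is what places the problem in coRP: equal polynomials are never rejected, and unequal ones slip through only with exponentially small probability in the number of trials.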

One important reason to try to derandomize this problem is shown by Impagliazzo and Kabanets: if it's in P then at least one of the following is true:

i) NEXP doesn't have polynomial sized boolean circuits;

ii) PERMANENT doesn't have *algebraic* poly-sized circuits.

The proof is very enjoyable as it uses the Theorems of Valiant, Toda, and Shamir as well as Nisan-Wigderson pseudorandom generators.

The use of identity testing is this: if PERMANENT has small algebraic circuits and we're presented with them for all matrix sizes 1 through n, then we can verify them probabilistically by the downward self-reducibility of the permanent (so PERM is in MA); with deterministic algebraic circuit equivalence testing we can strengthen this to 'PERM is in NP'. Then, using the assumption that NEXP has small circuits, it's shown that NEXP must be in the polynomial hierarchy (NEXP = MA), and since PERM is hard for the PH, NP = NEXP, which is impossible.

Paper at http://www-cse.ucsd.edu/users/kabanets/Research/poly.html

Has the Agrawal-Kayal-Saxena algorithm become more efficient since it was published? As far as I know, they proved log^{12} n complexity under no assumptions and log^{6} n under the Sophie Germain conjecture.

There's a new undergrad algorithms text to come out soon. A draft is now at http://www.cse.ucsd.edu/users/dasgupta/mcgrawhill/ and appears to cover some quantum algorithms.

As far as I am aware, the best running time without any number-theoretic assumptions, and even without any deeper number theory involved, is log^{10.5} n.

With Fouvry's theorem, they show an exponent of 7.5 in the Annals of Mathematics paper. Finally, AKS mention that Lenstra and Pomerance came up with a modified version of the AKS algorithm that achieves a running time with a 6 in the exponent.