Wednesday, May 10, 2006

The Importance of Natural Proofs

Razborov and Rudich's paper Natural Proofs gives a combinatorial framework for proofs that show circuits in a given class cannot achieve a given computational task. The paper explains the difficulty of extending these proofs to show, for example, that NP does not have polynomial-size circuits (and thus P≠NP). But do natural proofs really present a barrier to proving important circuit lower bounds?

One approach to showing NP does not have polynomial-size circuits: find some property C of functions such that SAT is in C. We then show, using some sort of inductive argument, that no function computable by polynomial-size circuits can have property C. This would imply that SAT cannot have polynomial-size circuits.

Briefly, a natural proof is such a proof where C has two properties.

  1. Largeness: C contains many functions.
  2. Constructivity: One can efficiently verify that a function f is in C.
Razborov and Rudich show that such a proof against polynomial-size circuits would break pseudorandom generators and in particular imply that the discrete logarithm is not hard. So under reasonable hardness assumptions, natural proofs cannot be used to prove lower bounds against polynomial-size circuits. See the paper for more details.
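To make the two conditions concrete, here is a toy sketch in Python (my own illustration, not from the paper). A Boolean function on n variables is represented by its truth table of length N = 2^n, and C is taken to be a deliberately simple property, being roughly balanced, that is both large and constructive (though useless for proving lower bounds):

```python
from itertools import product

def in_property_C(truth_table):
    # Constructivity: membership in C is decided by a single linear
    # scan, i.e., in time polynomial in the truth-table size N = 2^n.
    # Toy property (illustrative only, not from the paper): the
    # function is roughly balanced, with at least N/4 ones and N/4 zeros.
    N = len(truth_table)
    ones = sum(truth_table)
    return ones >= N // 4 and N - ones >= N // 4

# Largeness: most functions satisfy C. Exhaustive check for n = 3,
# where there are 2^(2^3) = 256 Boolean functions on 3 variables.
n = 3
N = 2 ** n
count = sum(1 for tt in product((0, 1), repeat=N) if in_property_C(tt))
print(count, "of", 2 ** N, "functions are in C")
```

A random function is roughly balanced with high probability (largeness), and membership is checked in time polynomial in the truth-table size (constructivity); a natural property must additionally be "useful," i.e., avoided by all functions computable with small circuits, which this toy property is not.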

Sounds bad for proofs against circuits. But let's consider the two properties. The authors give a good argument why the largeness condition should hold; for constructivity, however, they write:

We do not have any similar formal evidence for constructivity, but from experience it is plausible to say that we do not yet understand the mathematics of C_n outside exponential time (as a function of n) well enough to use them effectively in a combinatorial style proof. We make this point in Section 3, where we argue that all known lower bound proofs against nonmonotone circuits are natural by our definition.
Indeed they do show that all known proofs are natural, but in some cases they go through considerable effort to "naturalize" these proofs (as opposed to relativization, where the fact that a theorem relativizes follows immediately from the proof).

Consider what I call quasinatural proofs, where we only require the largeness condition. One might say that if the discrete logarithm is hard then a quasinatural proof must prove the nonconstructivity of C. But really you get a conditional: if there is a quasinatural proof against polynomial-size circuits then

If C is constructive then the discrete logarithm is easy
which is just a "pigs can fly" theorem that we see often in complexity.

Avi Wigderson points out that, unconditionally, you cannot have a natural proof showing that the discrete logarithm problem is hard. If we unravel this statement, we get that giving a quasinatural proof that the discrete logarithm is hard would require proving that the discrete logarithm is hard, hardly a surprise.

I don't have an example of a quasinatural proof not known to be natural as we have very limited techniques for proving circuit lower bounds. Natural proofs do give us some insight into what kind of proof techniques we need for stronger lower bounds, but they do not, in and of themselves, present a major barrier to finding such proofs.


  1. One possible critique -- it's true that a "proof" that C is hard is superfluous. However, the vast majority of hard predicates, including all known NP-complete (or even complete-on-average) problems, still have large easy subsets, which is all that is required for natural proofs. Thus, your predicate might be hard to compute in general, but if it even has a sub-polynomial fraction of easy instances, that can still be enough for natural proof limitations to kick in. I think it is fair to say that right now, we are a very long way from building any useful predicates that have no non-negligible easy subsets ("useful" meaning something beyond being useful because of its hardness, as in cryptographic primitives or diagonalization).

    I think this is a minority opinion (or at least one that Razborov-Rudich disagree with), but because of the preceding I think attacking the largeness constraint is more promising right now. There is a long way to go in either case, but I think we are closer to building highly specialized/rare function properties than we are to finding useful predicates with no easy subsets.

  2. ====================
    In their paper, Razborov and Rudich simply give evidence of a general limitation on the ability of highly combinatorial techniques, and their recursion-theoretic predecessors, to resolve P =/= NP and other hard problems.

    This should not be very surprising, since all such techniques are, essentially, dependent on algorithmic methods of determining satisfiability.

    Note that Tarski's theorem states that the set of true arithmetical propositions is not arithmetical.

    Similarly, P =/= NP is the assertion that the set of true arithmetical propositions is not algorithmic.

    In other words, if we can define an arithmetical tautology that is not algorithmically decidable, then P=/=NP.

    The question, therefore, is whether there is a way of defining an arithmetical relation f that is instantiationally provable as true for any given set of values to the variables of f, but f is not algorithmically provable as true for any given set of values to the variables of f.

    The reason the PvNP problem 'seems' hard is that the distinction between instantiational provability and algorithmic provability, and its consequences, remains implicit so far.

    To place the distinction in perspective, note that the PvNP issue can be expressed conveniently in terms of Goedel's interpretively true, but formally unprovable, arithmetic proposition, say [(Ax)R(x)].

    Is Goedel's meta-mathematically proven arithmetical tautology, R(x), algorithmically provable?

    If not, then P=/=NP.

    However, if it is algorithmically provable, then it follows that there is some non-standard domain of values satisfying the Peano Axioms, PA, over which R(x) is not tautologically true, even though it remains tautologically true over the subset of natural numbers in the domain of every interpretation.

    The question arises: Does any such domain exist?

    Now, if it exists, then it follows that each of the PA-unprovable formulas, [(Ax)R(x)] and [~(Ax)R(x)], can be added as an axiom to PA without affecting the recursive definability of PA, and without inviting inconsistency.

    Now, it can be argued that the theorems of any recursively definable arithmetic - such as PA, PA+[(Ax)R(x)], PA+[~(Ax)R(x)] - are algorithmically provable as true over every domain of an interpretation (since the axioms of PA are algorithmically provable in any interpretation, and the rules of inference preserve algorithmic provability).

    However, it would then follow that Goedel's arithmetical relation, R(x), is both algorithmically provable as true, and not algorithmically provable as true, over the natural numbers.

    It follows that no non-standard domain, such as the above, can be defined consistently, and so P=/=NP.


  3. Bhup,
    the post and the Razborov-Rudich paper deal with the non-uniform setting of proving circuit lower bounds, and in particular, separating NP from P/poly.
    How does your post relate to circuit lower bounds?

  4. My apologies if my observations appear off-post.

    I was merely commenting on a foundational aspect of the significance of, and a possible cause of the perceived constraints on, Razborov-Rudich's 'imprecise' concept of 'natural proofs', to which the authors draw attention.

    The authors' argument seemed to be that if natural proofs were capable of precise, algorithmic definition, they would "break pseudorandom generators and in particular imply that the discrete logarithm is not hard".

    My intention was to suggest the possibility that natural proofs may be precisely definable instantiationally.