The class P contains a language L if we have a single fixed program that efficiently solves L for all inputs at all lengths. L is in P/poly if for every length n there is a program of size polynomial in n that efficiently solves L for all inputs of length n. The programs for two different lengths may have no relation to each other.

Non-uniform lower bounds establish impossibility results for computation in the physical world: it could be that P ≠ NP, yet NP-complete problems can still be efficiently solved using a “bloated” program with sufficiently many lines of code. Non-uniform circuit size lower bounds for NP would rule out this possibility.
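For concreteness, one standard way to make the distinction precise is via advice strings (equivalent, up to polynomial factors, to families of polynomial-size circuits):

\[
L \in \mathrm{P} \;\iff\; \exists\,\text{poly-time }M\ \ \forall x:\ x \in L \Leftrightarrow M(x)=1
\]
\[
L \in \mathrm{P/poly} \;\iff\; \exists\,\text{poly-time }M,\ \{a_n\},\ |a_n|\le \mathrm{poly}(n)\ \ \forall x:\ x \in L \Leftrightarrow M(x,a_{|x|})=1
\]

The advice string a_n may depend on the length n in an arbitrary, even uncomputable, way; that is exactly the "no relation between lengths" above.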
Karp and Lipton showed that if NP is in P/poly then the polynomial-time hierarchy collapses, so we generally don't believe that NP is in P/poly. Suppose, though, that we lived in a world where NP is in P/poly but still P ≠ NP.
One argument goes as follows: even though we study complexity asymptotically, we generally live at one input length, the size of the problems we want to solve. Once we find the program that works at this length, then P = NP for us, even though P ≠ NP in general.
But the only constant over time is time itself. An hour now is the same as an hour in 1971, but the amount we can compute in that hour has grown dramatically. We don't care about fixed problem sizes; rather, we care about the largest problem we can solve in a given amount of time. As our technology gets better, those input sizes also grow. For problems in P, like linear programming, the algorithms we have for smaller inputs also work on larger inputs by just increasing the computation time. However, if NP is in P/poly, the code for NP-complete problems that worked a few years ago may fail miserably today, since we may have to find completely different code for the larger inputs we now care about. If P ≠ NP, we can't have an efficient process to find that code, even though we know it exists.
These broad considerations regarding uniformity arise specifically in the study of half-exponential functions, that is, functions satisfying the composition relation f(f(x)) ∝ exp(x) (a fractional iteration relation).
For complexity-theoretic purposes, it would be very convenient if (1) the coefficient of proportionality could be trivially renormalized to unity, (2) the normalized equation had a unique "naturally best" solution, and (3) that natural solution had a uniformly convergent power series.
The set of exponential functions has all of these properties of naturality and uniformity, yet counter-intuitively (and somewhat unfortunately for complexity theory) the set of half-exponential functions has none of them.
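Here is one concrete sketch of why uniqueness fails, under the convenient normalization f(f(x)) = e^x on [0, ∞) (the normalization and the construction below are illustrative, not drawn from any particular reference): fix any a in (0,1) and any continuous increasing bijection φ from [0,a] onto [a,1], and set

\[
f(x)=\varphi(x)\ \text{on }[0,a],\qquad f(x)=e^{\varphi^{-1}(x)}\ \text{on }[a,1],\qquad f(x)=e^{f(\ln x)}\ \text{for }x>1 .
\]

A short induction over the intervals [1, e^a], [e^a, e], ... gives f(f(x)) = e^x everywhere, and since φ was arbitrary there are uncountably many continuous solutions; singling out a "naturally best" one requires extra smoothness or analyticity conditions.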
Of course, most of the special functions of mathematical physics lack uniform series expansions; entire functions like polynomials and exp are rare exceptions, not the generic case.
Thus, in practical functional computations one enthusiastically embraces non-uniform algorithms, and this can be accomplished systematically, for example, by exploiting Kummer relations among hypergeometric functions.
Does the practical computation of half-exponential functions convey broader lessons relating to the roles of naturality and uniformity in complexity theory and quantum information theory? Under Scott Aaronson's recent topic "Anti-Complexitism" I've been arguing that the answer is "yes".
Is the class P that you talk about in this post the same "P" as in P versus NP, or just a poorly chosen name for a new class that you are talking about?
If I have a non-uniform upper bound, I want to know whether there's a uniform upper bound. If I have a uniform lower bound, I still want to know if there's a non-uniform lower bound. Both models are important; it's not that one makes the other irrelevant.
Russell Impagliazzo
To the best of my belief, anonymous, it is precisely the class P as defined in (for example) Oded Goldreich's text P, NP, and NP-completeness. Alternatively, it is the canonical definition of P in Stephen Cook's formal statement of the Millennium Prize problem.
The point (informally) is that a language belongs to P if a Turing machine exists that recognizes it in polynomial time. But there is no requirement that there exist a proof that the machine runs in polynomial time.
Indeed (AFAICT), our present understanding of the complexity class P is entirely consistent with the hypothesis that for most languages in P, there exists *no* (finite length, finite number of axioms) proof—even in principle—that verifies their membership in P.
Supposing that this be the case, then by a natural verification criterion the class P is not uniform (and thus, neither is the class NP). Some algorithms in P have proofs of that membership; others (perhaps the majority?) don't.
One wonders whether more rapid progress on the vexing question of separating P from NP might be achieved by "splitting" these classes?
We engineers care very deeply about these concrete verification and validation issues ... because for us, oracles that validate membership in P cannot be ordered from any catalog.
In the event I am completely misguided with regard to the complexity (or even undecidability) of verifying membership in P, perhaps one of the big-name complexity theorists will blog about it.
Such a discussion doubtless would be very interesting and illuminating to me and many! :)
Well, there always is a proof of any true arithmetic statement; it's just a question of how strong your axioms need to be to reach it.
I believe it's true that for any recursive axiom system, there are infinitely many languages in P without a proof that they're in P (Diophantine equations with only finitely many solutions seem like candidates). If you want to put a probability measure on that, you might be able to say whether it's a small or a large percentage.
Luke: Well, there always is a proof of any true arithmetic statement; it's just a question of how strong your axioms need to be to reach it.
To say the same thing in stronger language:
-------------
P is a nonuniform class; given any fixed, finite axiom set (ZFC for example) P partitions naturally into languages whose membership is provable in that axiom set, versus languages that are in P, but not provably so.
-------------
One wonders, for which axiom sets can members of *both* natural subsets of P be concretely instantiated (via Gödel-type methods, say)?
One wonders also, does progress on the vexing question of P versus NP require separate proof technologies for these two (hugely different) subsets of P?
One possibility is that not only do uniform lower bounds matter, but that even uniformity is not sufficiently restrictive and still allows a much-too-wide class of algorithms when modelling "humanly feasible" algorithms. So it would be interesting to ask whether there is an even more restrictive, or more uniform, notion of "P".
This thought motivated this question: http://cstheory.stackexchange.com/questions/1617/stronger-notions-of-uniformizations
John Sidles, P is a uniform class, and therefore is not non-uniform. This is not up for debate; it follows from the definitions of "P", "uniform", and "non-uniform". Non-uniformity of a class has nothing to do with proofs of running time or correctness.
Anonymous, your points are absolutely correct ... and similarly, that the criteria for verifying membership in P are not uniform—even in principle—also is not up for debate!
Both observations are interesting ... both suggest useful avenues of research in complexity theory ... and this I took to be the broad theme of Lance's thought-provoking and highly enjoyable post. :)
John, no offense, but you write so many long comments on so many blogs that I think you should start your own blog and publish them there, rather than as comments on other people's blogs; your comments are not really comments.
Anonymous, the world is abundantly blessed with outstanding weblogs ... the world is less abundantly blessed with thoughtful comments on those weblogs.
So you and I (at least) can both try to do our best in this regard. As "anonymous", your responsibility is particularly strong ... after all, "anonymous" posts more than everyone else combined.
In regard to practical applications of complexity theory, I am old enough to remember, in the 1970s, how agitated some S-matrix theorists were by the new-fangled concept of quarks and gluons.
Quantum fields that are never instantiated in Nature as free particles? ... fields that nonetheless are fundamental? This postulate was viewed as preposterous by committed S-matrix theorists.
From a systems engineering point of view, the class P is similarly a dynamical blend of "P-quarks" and "P-gluons" ... the P-quarks being languages in P that are provably in P, the P-gluons being their complement. The dynamical blending comes when we take unions and intersections of languages ... isolated "quarks" thus naturally become dressed by "gluons"!
AFAICT, P-gluon algorithms are known to exist, but none have ever been concretely isolated. Thus, to wonder whether P-gluon isolation is feasible is a very natural question—for engineers at least!
And if a concrete P-gluon algorithm recognized some natural or even practically useful language ... why ... wouldn't that constitute a tremendous advance, in both our fundamental understanding of complexity theory, and in our capability to apply that understanding?
I agree with the previous anonymous. John, these comments are so long that one has to scroll through pages to get to anyone else's comments, especially on Scott's blog where a few other people, such as Tim Gowers, Gil Kalai, John Preskill, and Geordie Rose, also sometimes make nice comments too. So, I think your comments should be blog posts instead.
Anonymous asserts: Tim Gowers, Gil Kalai, John Preskill, and Geordie Rose also sometimes make nice comments too.
Surely, it is "a truth universally acknowledged" that names like Michael Nielsen, Peter Shor, Leonid Gurvitz, Charles Bennett, and Barbara Terhal (among many) might credibly be added to that list. Perhaps our confluent expression of appreciation will encourage these people to post more frequently.
When one contemplates the size of our planet's population, the scale of the challenges it faces, and the crucial roles of complexity theory in meeting those challenges ... well ... it is sobering to reflect that this list is rather short.
Yes, I might perhaps acknowledge that Peter Shor sometimes also makes comments nearly as interesting as those of John Sidles. But perhaps if John Sidles had his own weblog, it would allow a little more room for Peter and co-luminaries to post on existing blogs, rather than having every discussion sidetracked into P-gluons, compressed sensing, and such. It is sobering to reflect that many of the best give short, terse comments.
Anonymous, one nice thing about our exchange is the many opportunities it is affording for mentioning classic references relating to uniformity.
These references are full of useful information and (sometimes) delightful historical surprises too.
With regard to special functions, the polynomial and exponential functions have uniformly convergent power series, and so (naively) we might expect that the Lambert function defined by W(z) exp(W(z)) = z will have a uniformly convergent power series too. But as the classic 1996 article (>1400 citations to date) On the Lambert W Function by Corless, Gonnet, Hare, Jeffrey, and Knuth reminds us, that's not so ... W(z) has a branch point at z = -1/e.
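As a small numerical sketch of both facts (assuming SciPy is available; scipy.special.lambertw computes the principal branch, and the snippet is my own illustration, not drawn from the article):
---------
import math
import numpy as np
from scipy.special import lambertw

# Principal branch W_0 satisfies W(z) * exp(W(z)) = z.
z = 0.3
w = lambertw(z, k=0)
print(w * np.exp(w))          # ~ (0.3 + 0j): the defining relation holds

# Maclaurin series W(z) = sum_{n>=1} (-n)^(n-1)/n! * z^n has radius of
# convergence 1/e, because of the branch point at z = -1/e.
def w_series(z, terms=30):
    return sum((-float(n)) ** (n - 1) / math.factorial(n) * z ** n
               for n in range(1, terms + 1))

print(w_series(0.30), lambertw(0.30).real)   # |z| < 1/e: partial sum is close
print(w_series(0.50), lambertw(0.50).real)   # |z| > 1/e: partial sums go astray
---------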
As a bonus, Knuth and his colleagues take the trouble to include an historical appendix briefly describing the life of Johann Heinrich Lambert ... the acutissimi ingenii Lambert (Lambert of the keenest intellect), as Euler called him:
——-
Lambert had dreams [in the 1760s] of building a machine to do symbolic calculation … Thus his wishes anticipated those of Lady Ada Lovelace by roughly one century, and the actuality of symbolic computation systems by roughly two.
——-
Such history-of-computation is interesting (and fun!) to me and many folks ... I'm grateful that Knuth and his colleagues included it.
And there is a mathematical lesson too: contrary to our instincts, naturality and uniformity do not always go hand-in-hand; thus the naturality of the definition of W(z) non-intuitively requires non-uniformity in its power series representations.
By consensus, special functions are defined as "any function about which one can find information in many books on special functions", and it seems plausible (to me) that future books on special functions will include the informatically natural half-exponential function h(h(z)) = γβ^z.
As with the Lambert function, the half-exponential function is constructed from the uniform (entire) functions z and exp(z), and yet h(z) itself has no uniform power series expansion; rather the analytic structure of h(z) turns out to be wholly controlled by the analytic structure of (what else?) W(z). Thus again we see that respecting mathematical naturality in regard to half-exponential functions requires that we give up some notions of uniformity.
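One concrete point of contact, as a sketch using the normalization h(h(z)) = γβ^z above: the fixed point z* of the map z ↦ γβ^z, around which half-iterates are typically constructed, is itself a Lambert-W expression,

\[
z^*=\gamma\,\beta^{z^*}
\;\iff\; (-z^*\ln\beta)\,e^{-z^*\ln\beta}=-\gamma\ln\beta
\;\iff\; z^*=-\frac{W(-\gamma\ln\beta)}{\ln\beta},
\]

so (for β > 1, γ > 0) real fixed points exist only when γ ln β ≤ 1/e, which is one way the branch point of W at -1/e enters the analysis.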
Particle theorists too have learned—the hard way—that sometimes naturality requires the sacrifice of uniformity. An instructive document in this regard is Geoffrey Chew's 1970 Hadron bootstrap: triumph or frustration?, which makes a passionate argument for (in essence) the priority of uniformity:
---------
Chew: I would find it a crushing disappointment if in 1980 all of hadron physics could be explained in terms of a few arbitrary entities [meaning, quarks and gluons].
---------
For better or for worse, though, what Chew feared is (uncannily) what happened.
These considerations lead us naturally to enquire: is present-day complexity theory making a mistake similar to that of the S-matrix theorists in the 1960s and 1970s? That is, by not distinguishing among algorithms in P on the basis of verifiability, is complexity theory lumping together algorithms that are as different ... and as hard to separate ... as quarks and gluons?
With regard to these questions, relating broadly to the proper balance of naturality versus uniformity, it seems good to me that there is (presently) *not* a narrow consensus, but rather a broad spectrum of opinions ... because this encourages everyone to read (and synthesize) wonderful articles in math, science, engineering, and history.
Happy holidays, and happy STEM reading to all! :)