Sometimes I get a flood of requests about a certain P versus NP attempt, such as the upcoming Berkeley math department Workshop on P vs NP. I don't have much to say: the workshop does not involve any of the Berkeley theoretical computer scientists, and in the past we've seen other, sometimes famous, mathematicians who believed they had an approach to the P versus NP problem that in the end didn't pan out. To the organizers' credit, at least they acknowledge they don't yet have a proof.
Friday, March 10, 2006
On P versus NP
I receive several requests to comment on various papers claiming to prove P = NP, P ≠ NP, or the independence of the P versus NP question on this weblog. I have a healthy skepticism about all such claims and simply don't have time to carefully read and evaluate those papers. If there is verified significant progress towards the P versus NP problem, you will read about it on this weblog (though the converse is not true).
Lance, the linked-to workshop isn't in the same ballpark as those unrefereed preprints with titles like "P!=NP and P!=coNP" that float around the internet.
Unless John Rhodes has gone wacky as the years have progressed, this is a serious attempt at recasting PvsNP as a problem in the "realm of" semigroups - an area that crops up a lot when studying low-level complexity classes like the insides of NC^1.
Rather than mock them for a vague and ambitious workshop title, we should talk to them, to see if they have any new ideas and to prevent them from reinventing wheels.
So what is a bi-machine?
Professor... If it's not asking too much, could you post these claims you receive somewhere? You have no time to read them, but I would be glad to give them a try...
>>So what is a bi-machine?
Maybe it's a machine with a non-deterministic part and a deterministic one? A hybrid alien-human? A cyborg? :D
===================
Such a position certainly seems justified by the difficulty of even getting an intuitive grasp of the PvsNP problem.
Current efforts still seem to labour under the parameters within which the issue is expressed in existing, standard, interpretations of classical theory.
So, perhaps, getting an intuitive grasp of the PvsNP problem may involve an 'out-of-the-box' approach, where we may need to re-examine the constricting aspects of some of our implicit assumptions.
For instance, the current forms of the (implicitly accepted as practically - if not logically - irrefutable) Church-Turing Theses postulate identities between well-defined, effectively computable (decidable), number-theoretic functions (and relations) that are used to, for instance, express physical phenomena mathematically, and recursive / Turing-computable functions (and relations - treated as Boolean functions where appropriate).
However, this constricts us to postulating that every well-defined, effectively computable, number-theoretic function (or relation), which expresses a physical phenomenon mathematically, is necessarily algorithmically computable (decidable).
Is such a stringent thesis really necessitated by our experience?
There are, certainly, no intuitive reasons to support such an extreme assertion.
Prima facie, we can achieve the same ends if we weaken both the Theses to postulate only instantiational equivalences between such well-defined, effectively computable (decidable), number-theoretic functions (and relations), and recursive / Turing-computable functions (and relations - treated as Boolean functions where appropriate).
There is no reason to believe that such equivalences must also imply identities.
The question arises:
Do such instantiationally equivalent, but not identical, functions / relations exist, and can such a weakening really help illuminate the PvsNP issue more intuitively?
Well, consider the following scenario.
Let us define Complexity Theory explicitly as the study that investigates, and differentiates between, the characteristics of those algorithms in Computability Theory (which is simply another name for classical recursive arithmetic, or its more modern cousin, recursion theory) that can be termed as 'regular', and of those that can be termed as 'chaotic'.
Let us, further, define 'regular algorithms' as those that halt in 'polynomial time', and 'chaotic' as those that require 'non-polynomial time'.
Loosely speaking, the PvsNP question, then, can be expressed as:
Can every chaotic algorithm be simulated by a regular algorithm?
However, although expressible in mathematically more precise terms, such a 'PvsNP' distinction is not, at the moment, very illuminating intuitively.
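(For reference only, the standard time-bounded formulation, against which the 'regular/chaotic' recasting above can be compared:

P = the union, over all constants k, of DTIME(n^k);
NP = the union, over all constants k, of NTIME(n^k);

and the question is whether P = NP.)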
So, let us consider the following alternative - and, possibly equivalent - description / definition and see if it could be considered as intuitively more illuminating.
Now, an ideal 'human computer' (in Turing's original sense of what we today term a Turing machine) can comfortably 'handle' both 'chaotic' and 'regular' algorithms mechanically (given, of course, an infinite tape and infinite time).
However, our experience is that the same 'human computer' is driven - by its 'intelligence' - to (preferably) express that which it is able to compute mechanically in finitely axiomatisable, symbolic, languages.
Languages, moreover, that can be interpreted, and their interpretations grasped, more 'intuitively'.
These are the recursively defined languages, such as first order Peano Arithmetic, PA, and set theory, ZF.
The reason that these are 'more' intuitive than Recursive Arithmetic (the 'natural' language of Turing-machines), is that RA is not recursively definable.
Both PA and ZF, however, can be expressed completely, within a finite alphabet, with a finite number of primitive symbols, definitions, axioms, and rules of inference.
Since all well-formed formulas and logical operations of PA and ZF are defined in terms of such a finite set, we only need to agree on the self-evident nature of the elements of this set, under a common, standard, interpretation, once.
(Of course, with ZF, we still have to arrive at such a standard, common, interpretation. Although mathematically interesting, and of significance later, consideration of this detail would distract from the main issue here.)
We can then construct complex expressions of, and operations within, the language without bothering about whether their standard interpretations can be intuitively grasped or not.
This is the essence of inductive, human, scientific creativity, which impels a 'human computer's intelligence towards the development of such languages.
The payoff lies, of course, in the increased ability to construct abstractions - based on formal symbolisms only - that drive the natural sciences into seeking physical interpretations that match their abstract mathematical models ex post facto.
(The mathematical language needed by Einstein to express, and communicate, his Theories of Relativity was, apparently, developed before any physical interpretation of it was conceived.)
Given this appeal of the human intelligence to express its mechanical actions in intuitively graspable, abstract, symbolism that can be completely expressed - and effectively communicated - in finite terms, the interesting question arises:
Does this description help us describe / define the terms 'regular / polynomial' and 'chaos / non-polynomial' in a more intuitively illuminating manner?
Well, we can prove that a function is partial recursive if, and only if, it is Turing-computable.
We can also prove that every recursive function (and relation) can be represented (expressed) by an arithmetical expression that is the standard interpretation of a well-formed formula of PA.
What this means is that any recursive function, say f(x), is instantiationally equal to some (at least one), constructively definable, arithmetical function, say A(x).
In other words, if f(m) = n for any two natural numbers m, n, then the formula [A(m) = n] is (finitely) provable from the axioms and logical rules of PA.
Similarly, every recursive relation, say q(x), is instantiationally equivalent to some (at least one), constructively definable, arithmetical relation, say R(x).
In other words, if q(m) is true for any natural number m, then the formula [R(m)] is (finitely) provable from the axioms and logical rules of PA.
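(In symbols, the standard representability theorem being invoked, stated here only as a reminder:

if f(m) = n, then PA proves [A(m) = n];
if q(m) holds, then PA proves [R(m)]; and if q(m) fails, then PA proves [~R(m)],

where the m and n appearing inside the formulas stand for the corresponding PA numerals.)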
Now, I have argued elsewhere that we can, indeed, interpret Goedel's reasoning - in his seminal 1931 paper on undecidable arithmetical propositions - constructively.
We simply postulate an arithmetical provability thesis that, if R(x) is a total arithmetical relation, then the formula, [(Ax)R(x)], is PA-provable if, and only if, the arithmetical relation R(x) is Turing-computable (treated as a Boolean function).
In view of Turing's Halting argument, this thesis is, essentially, both, unverifiable and irrefutable.
It is also consistent with the weakened Church-Turing Theses, where we weaken the identities to equivalences.
Now, Goedel has defined a total recursive relation (in his 1931 paper), say q(x), which is Turing-computable.
However, he has also constructed an instantiationally equivalent arithmetical relation, say R(x), such that the formula [(Ax)R(x)] is not provable in PA.
By the preceding thesis, it would follow that R(x) is not Turing-computable.
One could, reasonably, interpret the above to mean that both the arithmetical relation R(x) and its instantiationally equivalent (in this case primitive, hence effectively decidable) recursive relation q(x) cannot be computed in polynomial time.
(Of course, if there were a finite proof sequence for [(Ax)R(x)] in PA, then it can be shown that R(x) would, necessarily, be Turing-computable in polynomial time.)
Returning to the original question, this would give an intuitively illuminating description / definition of, for instance, 'chaotic / non-polynomial' Turing-computable functions as those whose arithmetic representations, when quantified, are not provable from the axioms and logical rules of PA.
The 'regular / polynomial' Turing-computable functions would, then, be those whose arithmetic representations, when quantified, are provable from the axioms and logical rules of PA.
The question, of course, is whether the constructive constraints implied by the weakened forms of the CT Theses, and the arithmetical provability thesis, are acceptable to standard interpretations of current theory.
The constructive interpretation in which these appear intuitively self-evident implies that axiomatic set theories such as ZF, which have an axiom of separation or its equivalent, are inconsistent.
At the moment, standard interpretations of classical theory are deeply committed to expressing all their mathematical concepts in the non-constructive language of a set theory, such as ZF, where we still have to arrive at an, acceptably intuitive, interpretation of the axioms that yields an intuitionistically unobjectionable model.
At the moment, the chances of abandoning such a deep commitment appear bleak, and seem only possible with the kind of a paradigm shift that some succeeding generation will, no doubt, provide.
I hope Osias is happy now
In general, if a lengthy, obscure philosophical discussion occurs as the introduction to a P vs. NP approach, then that approach is bogus. Just get to the point, people. It took the above guy 14 paragraphs to finally say that NP = non-polynomial time.
As for the Berkeley math workshop: At the very least, the organizers did not take time to write a coherent description of their approach. We all know that the P vs. NP problem can be cast in many different languages (because the notion of NP-completeness is so pervasive), but it seems that this is all they are doing--their approach will still need to analyze the "rate of convergence" for _every_ deterministic Turing machine that decides a particular NP-hard language. They are simply hoping that by associating some (very unwieldy) matrix representations, they can reduce this analysis to a spectral analysis that might be easier to handle. I sincerely doubt it.
Buried in a recent blog entry by Scott Aaronson is the following question and Scott's response, which I think is great:
As an aside, how can you tell whether or not someone's P!=NP paper is a crank?
It's much easier than you'd think.
If the paper claims to show that 3SAT takes exponential time, but the same argument would also imply that 2SAT and Maximum Matching take exponential time, the paper is crap.
If there's no explanation for how the proof gets around the Razborov-Rudich and Baker-Gill-Solovay obstacles, the paper is crap.
If there are plenty of low-level encoding details but no striking mathematical idea, the paper is crap.
Any one of these three criteria, by itself, can already dispose of every P!=NP paper I've encountered.
So far the even simpler criterion, that if the paper claims either P=NP or P!=NP then the paper is crap, has performed perfectly too.
Well, if you want to be pedantic then "either P=NP or P!=NP" is a tautology.
ReplyDeleteIt must be said:
Please, let no one come away from Anand's screed under the illusion that 'NP' stands for 'non-polynomial time' (it doesn't), or that it's an open question whether there are decidable problems requiring super-polynomial time (we know there are).
Referring to all non-polynomial algorithms as 'chaotic' is a conceptual mishmash without any justification. You can't make bold unifications of different theories just by proposing that their vocabularies be conflated. Complexity and computability are also being brazenly confused here.
One thing is certainly clear from all this: Mr. Bhupinder Singh Anand has way too much time on his hands :-)
====================
The criticism that my lengthy discourse may promote illusions that it shouldn't is fair.
However, to make finer distinctions that avoid all illusions might just obscure the point - about 'out-of-the-box' thinking - that I intended.
Perhaps some clarifications may help place my looser comments in better perspective.
We know that a non-deterministic Turing machine can always be simulated by a deterministic Turing machine.
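(The standard bound behind that simulation, noted here only for reference: a non-deterministic Turing machine running in time t(n) >= n can be simulated by a deterministic one in time 2^O(t(n)), i.e. NTIME(t(n)) is contained in DTIME(2^O(t(n))).)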
I was only attempting to distinguish between those computations of an NDTM, if any, that increase exponentially in time, but not in any discernible manner, when simulated on a DTM, calling these 'chaotic', and those that do not, calling these 'regular'.
There was no intent to claim that the class NP consists precisely of the 'chaotic' computations, or that the class P consists precisely of the 'regular' computations.
If the overall direction of my scenario is, indeed, significant to the PvsNP issue, then provable relations between these concepts would, of course, require to be established.
Since I have not attempted this, I do not know whether it is even possible.
There was also no intent to suggest that the term 'chaotic', as used here, should be understood as a precisely defined term of chaos theory.
My apologies if it is found misleading. The term 'irregular' would do as well.
My simplistic remarks on complexity and computability are, again, probably not germane to my overall thesis, and can be dismissed as suggested. Any confusion is regretted.
Bhup: as author of several published proofs on these non-polynomial space/time/Piano/Zorn issues it is indeed sad to see you out there wasting your wisdom commenting on random blogs.
You should seek out more appropriate avenues ... I'm sure a position for a pseudo-mathematician will be opening up real soon.
Bhup: as author of several published proofs on these non-polynomial space/time/Piano/Zorn issues ...
Where, pray, may I ask are these articles? I could not find any of his articles on the web, nor does he have a DBLP entry.
I could not understand much after reading his lengthy discourses...I was hoping his papers would shed more light on the seminal issues that he discussed.
http://alixcomsi.com/index01.htm
Most of the articles here are cutting-edge articles in the exciting area of philosophical complexity theory.
>difficulty of even getting an intuitive grasp of the PvsNP problem
What? Here is an intuitive "grasp": Is there an efficient method to find a solution for a problem whose solution is efficiently (poly) verifiable?
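To make that find-versus-verify gap concrete, here is a minimal sketch in Python, using SAT as the example problem (the clause encoding and helper names are purely illustrative, not taken from any paper under discussion):

from itertools import product

# A CNF formula is a list of clauses; each clause is a list of signed variable
# indices, e.g. [[1, -2], [2, 3]] means (x1 OR NOT x2) AND (x2 OR x3).

def verify(clauses, assignment):
    # Checking a proposed assignment takes time polynomial in the formula size.
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

def find(clauses, num_vars):
    # Brute-force search: exponential in num_vars, the only generic method known.
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if verify(clauses, assignment):
            return assignment
    return None

print(find([[1, -2], [2, 3], [-1, -3]], num_vars=3))  # verifying is easy; finding may take 2^n tries

The open question is whether that exponential search can always be replaced by a polynomial-time one.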
>>It's much easier than you'd think.
ReplyDeleteAs a "I-don't-understand-most-logic-but-I-know-how-to-program"-guy, I could add:
If the paper claims P=NP, have the authors implemented and run their algorithm to see "oh, how fast!!!"?
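Something like the following toy harness would do, assuming a hypothetical claimed_fast_sat(clauses, num_vars) solver supplied with the paper (everything here is illustrative only):

import random, time

def random_3sat(num_vars, num_clauses):
    # A random 3-CNF instance as a list of clauses of signed variable indices.
    return [
        [random.choice([v, -v]) for v in random.sample(range(1, num_vars + 1), 3)]
        for _ in range(num_clauses)
    ]

def benchmark(claimed_fast_sat, sizes=(20, 40, 80, 160)):
    # Time the claimed polynomial-time solver on instances of growing size.
    # If the running time explodes instead of growing polynomially, that is
    # already evidence against the claim.
    for n in sizes:
        instance = random_3sat(n, 4 * n)
        start = time.perf_counter()
        claimed_fast_sat(instance, n)  # hypothetical solver from the paper
        print("n =", n, "time =", round(time.perf_counter() - start, 3), "s")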
http://alixcomsi.com/index01.htm
This one looks interesting enough to make Kurt Gödel stir in his grave: Gödel's Incompleteness Theorems hold vacuously
I am surprised at these comments. We as a community should be happy about such a workshop. The fact that our problems draw the interest of mathematicians is a good thing. Whether or not there is anything particularly promising in their current approach is beside the point (by the way, do we have any particularly promising direction?)
The P /= NP problem is ill-posed.
Opposite to a one-way function is a two-way function. But has anyone defined exactly what a two-way function is? Between a one-way function and a two-way function there is a gradation of one-wayness which must be measured. We lack such a measure. If a one-way function has no absolute existence, then how do we prove its existence? HuenYK
Sigh
As misguided as it is, the previous comment reminds me of the issue that there are two very different uses of the term 'one-way function' in the complexity literature:
The first has the requirement that the inversion problem be 'hard on average' when given the function applied to a pre-image chosen uniformly at random. This is the really important definition for cryptographic applications.
The second, which has some applications in structural complexity (cf. the Berman-Hartmanis conjecture), requires that the function be hard to invert in the worst case.
It often seems that people with a minimal understanding of P vs NP confuse the two definitions. A very nice way to understand how the former fits is Russell Impagliazzo's personal view of average case complexity.
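(For reference, the average-case definition meant here: a polynomial-time computable f is cryptographically one-way if, for every probabilistic polynomial-time adversary A, the probability over a uniformly random x in {0,1}^n that A(f(x), 1^n) outputs some preimage of f(x) is negligible in n. The worst-case variant used in structural complexity only asks that no polynomial-time algorithm inverts f on every input.)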
Yes, there is, but I didn't understand them when I read...
I was wondering if there have been any attempts to prove the impossibility of asserting either P=NP or P!=NP.
The most interesting result I've seen along these lines is that you can pick a Turing machine T such that P^(L(T)) vs. NP^(L(T)) has no proofs of either equality or inequality, and L(T) is empty. (L(T) is the language computed by T.)
The proof is a straightforward diagonalization against proofs argument and the result itself doesn't say anything at all about P vs. NP, but it's rather interesting.
David Klempner wrote: The most interesting result I've seen along these lines is that you can pick a Turing machine T such that P^(L(T)) vs. NP^(L(T)) has no proofs of either equality or inequality, and L(T) is empty. (L(T) is the language computed by T.)
David, do you recall the authors/title of that result?
I got it out of Du and Ko, but it came from a 1976 paper by Hartmanis and Hopcroft. ("Hartmanis, J. and Hopcroft, J. [1976], Independence results in computer science, ACM SIGACT News 8(4), 13-23")
====================
Osias said...
>>What? Here is an intuitive "grasp": Is there an efficient method to find a solution for a problem whose solution is efficiently (poly) verifiable?<<
Well, you have simply re-defined the problem, again at a high level of abstraction.
This is far removed from the consensus of a minimum, common, intuition upon which axiomatic languages are attempted to be founded.
What I would argue, however, is that a thesis such as, for instance:
A true, total, arithmetical formula is Turing-computable if, and only if, it is PA-provable.
is, like the Church-Turing Theses, much nearer our intuition.
The concepts of 'true', 'total', 'arithmetical', 'Turing-computable', 'PA-provable' are, mathematically, far more elementary than those that occur in the standard, Cook's, formulation of the PvsNP problem.
Now, it is elementary to conclude from the above thesis that, if R(x) is a true arithmetical relation that is not provable from the axioms of first-order Peano Arithmetic, then R(x) cannot, itself, be Turing computable.
Hence, it is not in the class P.
However, since it is true for all natural numbers, then, assuming Church's Thesis, it follows that there is an instantiationally equivalent recursive relation which is Turing-computable.
Since Goedel has shown that we can actually construct an expression for R(x), we thus have, under the assumed thesis, a well-defined, constructible, relation, R(x), that must be in NP.
Whether this argument is eventually validated or not, I would consider it as providing a more intuitive 'grasp' of the PvsNP problem mathematically.
Anonymous said...
>> ...it is indeed sad to see you out there wasting your wisdom commenting on random blogs.
You should seek out more appropriate avenues...<<
Easier said than done, if you are not in an academic environment. I have, indeed, occasionally sent a paper to a refereed journal.
In all of them, my explicit thesis is that, if we introduce effective methods of determining the satisfiability, and truth, of the formulas of a formal system, under a given interpretation, into Tarski's (commonly accepted) definitions, then the consequences contradict standard interpretations of, for instance, Goedel's formal reasoning, the Church-Turing theses, Turing's Halting Theorem, Cantor's Theorem, etc.
The referees' remarks have generally included a phrase such as, "It is known that ...", where the reference is to some standard interpretation of the formal reasoning which, actually, is my starting point!
That is why I concluded that my iconoclastic interpretations of classical theory may require a paradigm shift that, by Kuhn's definition, is essentially unpredictable.
Hence my attempt to reach out to similar viewpoints through discussion forums, and what you see as 'random' blogs.
I see these, however, as the harbingers of an environment that nurtures the initial steps of iconoclastic thought, without the danger of attempting to harness it prematurely under the unavoidable pressures of academic refereeing.
Regards,
Bhup
Bhup: Your comments prove that you do not even have a basic knowledge of the classes P and NP. The following is one of many gems:
Since Goedel has shown that we can actually construct an expression for R(x), we thus have, under the assumed thesis, a well-defined, constructible, relation, R(x), that must be in NP.
Have you even bothered to read a textbook on complexity? Maybe you did and you're still spouting this nonsense ... that's even more disturbing.
I think you ought to know that academia welcomes iconoclasts ... they have a strong objection to charlatans, though.
I sincerely hope you realize how ridiculous your posts are. I suggest you spend some time reading up these topics (actually understanding them) before you try to be the new Goedel incarnate. It is very easy to come up with BS theories especially about things you have no knowledge of.
This is going to be my last response to you because this discussion is a waste of time and space.
The most interesting result I've seen along these lines is that you can pick a Turing machine T such that P^(L(T)) vs. NP^(L(T)) has no proofs of either equality or inequality, and L(T) is empty.
Can someone better explain this result? If L(T) is empty, then why isn't P^{L(T)} = P (and similarly for NP)?
Can someone better explain this result? If L(T) is empty, then why isn't P^{L(T)} = P (and similarly for NP)?
The problem is that, assuming there exists a proof of either direction of P vs. NP, there's no way to actually prove that L(T) is the empty set; that L(T) is empty (and thus P^{L(T)} = P) would be an unprovably true statement in the model (the type that Gödel's theorem says exists).
Note, also, that this is a "fix a formal model and..." theorem, so it makes sense to say that L(T) actually *is* empty, in the same way it makes sense to say that the unprovably true statement constructed in Gödel's theorem is actually true.
Anonymous said ...
>>Your comments prove that you do not even have a basic knowledge of the classes P and NP. <<
A strong indictment, indeed. Undoubtedly justified to a large extent.
However, even though this poster has decided to close shop on the issue here, in view of his observations, some may want to take a glance at this, foundational, analysis of the validity of the PvsNP problem, when expressed in set-theoretical terms?
http://alixcomsi.com/CTG_06_Consequences.htm#Endnote_7_3_P_vs_NP
It is Para III(3.6) on p70 of the associated pdf file.
Bhup
>>Well, you have simply re-defined the problem, again at a high level of abstraction.
Redefined? Check this link:
http://weblog.fortnow.com/2002/12/foundations-of-complexitylesson-10-p.html
Solving P?=NP is actually trivial.
Get yourself a computer, get that package that generates random research papers, feed it all of the known stuff in complexity and computability. Run it iteratively until you get the answer.
I think there are plenty of people on the internet that have demonstrated a head-start in this approach.
But Scott, he/she "generates random research papers". Doesn't that put the suggestion in the territory of randomized complexity classes?