In 1988 I was down to 110% sure that P ≠ NP. Why? Because the Graph Minor Theorem showed that many problems had faster algorithms than previously thought. Example:
For all g, determining if a graph G is of genus g can be solved in O(n^3) time (the constant depends on g). Note that the Graph Minor Theorem involves some very deep math. It took Robertson and Seymour many years to get the result. The papers are called Graph Minors I, Graph Minors II, etc., and in there someplace (perhaps around Graph Minors XVII) is the Graph Minor Theorem. I do not think that P = NP will be shown by using the Graph Minor Theorem; however, the fact that some very deep math led to some problems having low complexity means that it could happen again, perhaps to SAT. Hence my confidence in P ≠ NP went from 120% to 110%.
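(As a toy illustration of the genus-0 special case only: planarity is testable in linear time. This is a sketch assuming Python's third-party networkx library; general genus-g testing is not exposed by common libraries, so this just hints at the flavor of the result.)

    import networkx as nx

    K5 = nx.complete_graph(5)      # the classic non-planar graph
    grid = nx.grid_2d_graph(4, 4)  # planar

    for G in (K5, grid):
        is_planar, _ = nx.check_planarity(G)  # linear-time planarity test
        print(is_planar)                      # False, then True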
In 2007 I was down to 100% sure that P ≠ NP. Why? Because Valiant used some strange techniques to solve the following problem in polynomial time.
Given a monotone planar boolean formula in 3-CNF form, determine if the number of satisfying assignments is a multiple of 7. (NOTE: the multiple-of-2 version is Parity-P-complete and hence NP-hard under randomized reductions.) Again, a surprising algorithmic technique leads to problems being easier than we thought. To be fair, this is not a problem people looked at much (if at all). But the techniques employed are truly new, and my concern is that other truly new approaches may prove powerful enough to get SAT into P.
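(For concreteness, here is a brute-force sketch of the counting question itself, not of Valiant's polynomial-time algorithm; the tiny monotone 3-CNF instance below is hypothetical, chosen only to pin down what is being counted.)

    from itertools import product

    # Monotone 3-CNF: each clause is a set of (unnegated) variable indices.
    clauses = [{0, 1, 2}, {1, 2, 3}, {0, 2, 3}]
    num_vars = 4

    count = 0
    for assignment in product([False, True], repeat=num_vars):
        # A monotone clause is satisfied iff some variable in it is True.
        if all(any(assignment[v] for v in clause) for clause in clauses):
            count += 1

    print(count, count % 7 == 0)  # number of satisfying assignments; multiple of 7?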
Neither NL being closed under complementation nor factoring being in QP has made me lower my percent belief that P ≠ NP. But they were surprising results, and I can see someone else lowering theirs because of them.
So I'm down to 100% sure that P ≠ NP. It will take a truly remarkable result for me to go lower than that. Like a polynomial time algorithm for SAT.
If there were a polynomial time algorithm for SAT, wouldn't this show that P = NP since any problem could be reduced to the SAT problem?
That being his point ;)
What is this, the Maury Povich Show? "I'm 120% sure that I'm not the father of that polynomial-time algorithm for SAT." TCS sure does use some deep math... ;)
Wow, Captain Obvious with the first post!
Suppose we assign a probability based on what odds we're willing to take in a wager on the outcome. OK, just for fun, here goes.
I am 99% sure that P != NP. Why not 100%? I am confident that there are many more interesting problems in P that are yet to be discovered, so it's still possible (though very unlikely) that all of NP could be among those remaining. For this reason I am also only 80% confident in the truth of the unique games conjecture. (Are SDPs really the end of the line for approximating such problems?)
Of course there are many heuristic reasons to believe that P != NP. But I have never found any of these reasons rigorous enough to be completely convinced that P != NP is a foregone conclusion and all we lack is its proof. For example, if P = NP then all this other pig-flying and horse-whistling stuff happens. But when is the level of surprise in a proposition correlated with its truth? I think there is only a correlation when we already have a deep understanding of the area in which the proposition is stated. This goes back to my belief that I'm just not sure we know as much about P as we think we do. I wouldn't wager my life on P != NP, so I won't say 100%.
My probability estimate gets much higher with the DQL = NQL question: can SAT be solved in n*poly(log n) time? This question is intriguing in that
(1) if it is true in any reasonable sense then the implications would be far beyond our most prosaic imagination, and
(2) if it is false then it's very plausible we can prove it within our lifetimes.
I am 99.9999% confident that DQL != NQL.
The significance of such percentages will, possibly, remain uncertain until we refine the statement of the PvNP problem further.
For instance, we can show that Goedel's formally undecidable arithmetical proposition, say, [(Ax)R(x)], is such that R(x) is a tautology which is not Turing-computable as always TRUE, where:
Definition 1: A total number-theoretical relation, R'(x1, x2, ..., xn), when treated as a Boolean function, is Turing-computable if, and only if, there is a Turing-machine T such that, for any given natural number sequence, (a1, a2, ..., an), T will compute R'(a1, a2, ..., an) as either TRUE, or as FALSE.
So, mathematically, one would conclude that SAT has been decided as P =/= NP.
However, the argument for the above has little significance from the point of view of computational complexity.
The reason: It also follows from Goedel's reasoning in his seminal 1931 paper (where he introduced his formally undecidable arithmetical proposition) that R(x) is Turing-decidable as always TRUE, where:
Definition 2: A total number-theoretical relation, say R'(x1, x2, ..., xn), when treated as a Boolean function, is Turing-decidable if, and only if, it is instantiationally equivalent to a number-theoretic relation, S(x1, x2, ..., xn), and there is a Turing-machine T such that, for any given natural number sequence, (a1, a2, ..., an), T will compute S(a1, a2, ..., an) as either TRUE, or as FALSE.
So, the PvNP problem may need to be refined to acknowledge the distinction between the questions:
(a) Is there a polynomial-time algorithm for deciding whether a tautology is Turing-decidable as always TRUE?
and:
(b) Is there a polynomial-time algorithm for deciding whether a tautology is Turing-computable as always TRUE?
Regards,
Bhup
http://alixcomsi.com/An_elementary_proof.pdf
The paper you linked was bogus. "Is there a poly-time algorithm A that gets as input a formula f and decides whether it is a tautology?" How is f given? A truth table? Then we can afford to loop through all inputs. If you come up with some other input format, you must keep in mind that, for it to have any implication for P != NP, the fact that f is a tautology has to be poly-time verifiable.
Probably a stupid question, but... how is percent sure defined? I guess it should be defined in terms of odds in betting? If you are 99% sure that P != NP, does that mean you are 1% sure that P = NP, or is the relationship more complicated?
To anonymous two up: By definition, a "formula" is a specific way of specifying a function, as a tree of nested gates with no sharing of subexpressions (allowing a gate's output to feed several other gates, i.e. a DAG, gives you a "circuit"). So saying "formula f" implies it is not just given as a truth table. Sorry, haven't read the rest of your comment.
Although I have not read his paper, Bhup's question makes sense, if for any given n, a statement in the Peano arithmetic with numbers of at most n bits is translated into a proposition. For instance, you could write out a tautology that states that x^4+y^4\neq z^4 where x,y,z<1024 (think of circuits that square each input twice, and reduce outdegree in these circuits to one). There are chapters in books by Krajicek and by Cook and Nguyen on the topic of such "propositional translations" (which apply to proofs as well).
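(To make the bounded statement concrete, here is a brute-force check of the finite claim itself -- a sketch using only the Python standard library. The propositional translation would encode exactly this finite assertion as one large tautology; Fermat's own proof of the n = 4 case guarantees the check passes.)

    # Check x^4 + y^4 != z^4 for all 0 < x, y, z < 1024.
    fourth_powers = {z**4 for z in range(1, 1024)}
    assert not any(x**4 + y**4 in fourth_powers
                   for x in range(1, 1024) for y in range(1, 1024))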
The question NP=?coNP can be restated, given that bounded halting is NP-complete, and the complement of bounded halting is therefore coNP-complete, as "does the bounded behavior of the recursively enumerable language HALTING differ from the bounded behavior of the nonrecursive language coHALTING?"
Lots of interesting stuff here! First, this STOC'07 paper by Jin-Yi Cai and Pinyan Lu takes some of the "accident" out of Valiant's algorithm by showing that for all moduli other than 7 the problem remains NP-hard under randomized reductions. For k-SAT generalizations of Valiant's problem, counting is easy modulo divisors of 2^k - 1, randomly NP-hard otherwise. This demystifies the mechanism, although the original surprise remains.
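(A quick enumeration of the "easy" moduli that this characterization yields for small k -- the divisors of 2^k - 1; this just restates the fact above in code.)

    # For Valiant-style counting on planar monotone k-CNF, counting mod m
    # is easy exactly when m divides 2^k - 1; other moduli are randomly NP-hard.
    for k in range(2, 6):
        n = 2**k - 1
        print(k, n, [m for m in range(2, n + 1) if n % m == 0])
    # k = 3 gives 2^3 - 1 = 7, recovering Valiant's modulus.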
It is interesting to ask if the Razborov-Rudich "Natural Proofs" mechanism can be sharpened into an obstacle against proving SAT \notin DQL/poly, or for an easier task, against proving SAT \notin DTIME[p(n)]/poly for some specific polynomial p. Is there a padding trick I'm missing that makes this easy?
Mr. Anand, have you seen Lance's "Non-Standard post", and most important, the paper by Attila Ma'te' mentioned in comments there? Your paper (alternate link here) has an elementary misunderstanding of how induction holds in nonstandard models which Lance's post already fixes---and you need to demonstrate understanding at the level of Ma'te's paper, and cite it.
On the question of assigning "epistemic" probabilities to universal propositions, I'll have more to say---tomorrow---but first let me ask the "betting men" here what their odds are on FACTORING \notin P. Or stronger, and relevant to Razborov-Rudich, is there a fixed e > 0 such that SAT requires circuit size \Omega(2^{n^e})? We must have e <= 1/3, while some think e <= 1/5 is needed.
ROFL. When I saw the title of the article I conjectured Bhup would make a colorful post. I must have reloaded the article 20 times during the day, and finally at 5:30 he delivered. Then I had to see how many replies he got. Only 5 so far, but since he is 5/5 of the last comments I would say he has performed a successful thread jacking.
On another note, I agree with the previous poster. The big wager right now is FACTORING \notin P.
KWRegan
========
Thanks for your comments.
However, I'm not clear on your reference to "... how induction holds in nonstandard models which Lance's post already fixes...".
Surely any model of PA - whether standard or non-standard - must satisfy the induction axiom, S9, that I have specified in my paper (which is a standard formulation of the induction axiom for a first-order Peano Arithmetic).
Now, my argument shows that the non-standard models considered by Mate, such as, for instance, the one "... obtained by taking a new constant symbol c, and adding the infinitely many sentences c > 1, c > 1 + 1, c > 1 + 1 + 1, ... to the axioms of PA", do not satisfy the Induction Axiom S9 of my paper.
One can, of course, weaken the axiom S9 so that it admits non-standard "integers".
Mate's paper is, in fact, based on one such weakening, namely, on an "...axiom scheme of induction, which, in one formulation, says that every nonempty set defined by a formula of PA has a least element".
The question is: Why on earth would one weaken an axiom system that can formalise Dedekind's formulation of the Peano Axioms for the natural numbers categorically, in order to admit non-isomorphic models that contain non-standard integers which are anything but natural?
In fact, my overall impression of Lance's post was that it was a plaintive plea for a perspective that would help interpret the putative properties of non-standard models within classical, pre-Cantorian, theory which, from a computational point of view, is comfortably constructive.
Perhaps the significance of this is not brought out clearly in my paper, since I have not elaborated on the underlying investigation, which leads naturally to a resolution of the SAT formulation of the P=/=NP problem.
This investigation is contained in the paper:
http://alixcomsi.com/Two_Presumptions.pdf
I argue, there, that one of Turing’s remarkable, albeit unremarked, achievements in his 1936 paper is to enable a formal, and constructive, definition of what is meant, intuitively, by the assertion that the infinity of arithmetical assertions encapsulated in the axioms and rules of inference of a standard, first-order, Peano Arithmetic are intuitively true in the standard model.
I show that it follows from Turing’s 1936 paper that, if the formula, say, [R'(x1, x2, x3 …, xn)], is either an axiom, or a theorem, of a first-order Peano Arithmetic (such as the system S1 to S9), then, there is always a Turing-machine, T, such that, given any set of natural numbers (a1, a2, a3 …, an) as input, T will compute the arithmetical proposition R'(a1, a2, a3 …, an) as TRUE in a finite number of steps.
So, to say that a PA axiom, say [A], is intuitively true under the standard interpretation is equivalent to the assertion that the relation A is Turing-computable as TRUE for any given natural number input to the free variables contained in A.
This is the ‘constructive’ characteristic that differentiates a Peano Arithmetic - or a Recursive Arithmetic - from a ‘non-constructive’ language such as ZF!
Another way of looking at this is to say that Turing-Arithmetic is equivalent to Peano Arithmetic, in the sense that, first, a total arithmetical formula is PA-provable if, and only if, it is Turing-computable as always TRUE.
(Although the ‘if’ condition follows straightforwardly from Turing’s paper, the ‘only if’ condition of the equivalence, however, requires a formal proof, since it holds if, and only if, my argument, that first-order PA - as defined by the axioms S1 to S9 - has no non-standard models, holds.)
Second, a total arithmetical formula is PA-true if, and only if, it is Turing-decidable as always TRUE, where Turing-decidability is as defined in my earlier post.
Now, Lance's post (possibly) and Mate's paper (obviously) reflect set-theoretic formulations of the Peano Axioms where such equivalences may not hold.
Thus, the Induction Axiom S9, which disallows non-standard integers, is sought to be replaced by one that admits them - the Well-ordering axiom of set theory (i.e. Mate's "...axiom scheme of induction, which, in one formulation, says that every nonempty set defined by a formula of PA has a least element").
Perhaps, for some mathematical problems, there are advantages to defining an "Arithmetic" that includes axioms which are not Turing-computable as always TRUE (or Turing-decidable as always TRUE), and to considering models that are only conceivable Platonically (as Lance appears to lament in his post).
My paper suggests that the P=/=NP problem is not one of them.
Regards,
Bhup
This is a comment for Bill Gasarch:
You should seriously consider adding Bhup's weblog (or at least his website) along with the links to the other weblogs. That way we can all easily check updates and seminal papers linked off his webpage.
Note to Bhupy:
Lance stopped posting in March. Idiot.
http://weblog.fortnow.com/2007/03/end.html
Dear Bhupinder (if I may),
I meant in particular Tim Chow's clarification of what Lance and Mike O'Donnell were driving at in the third and last comment to his blog item. The presence of infinite descending chains in non-standard models of PA vitiates your conclusion at the end of Section 4 that in every model, every element is a successor of 0.
Every nonstandard model of PA has N as an initial subset, but there is no formula G(.) of PA such that in every nonstandard model M, {x \in M : G(x) holds} equals N. This known fact simply contradicts what you are trying to do. Here is a reference to something stronger, in a book you should read and cite.
In general you are confusing
(a) defining subsets of N, as interpreted by N,
with
(b) defining subsets in nonstandard models M, as interpreted by M.
Ideas of Turing-computability and the arithmetical hierarchy nicely apply only to (a). An indication of how slippery nonstandard models can be for trying to carry these ideas into (b) is Tennenbaum's theorem, referenced in Wikipedia's article on the Peano axioms: Even though countable nonstandard models have domains that can be encoded as (decidable subsets of) {0,1}*, there is no way to make the + and * operations computable in such an encoding!
Anyway, until you can demonstrate command of concepts like overspill, let alone concrete knowledge of Ma'te's paper, no further discussion in your line here can be fruitful.
Sincerely, ---Ken Regan
Ken
===
You're right. The argument for my conclusion is expressed poorly. Thanks for taking the trouble to point it out at some length, and for your constructive criticisms and suggestions.
I have revised the offending paragraph appropriately on my web-page.
After showing that the formula [(Ax)G(x)] is provable in standard, first-order PA, where [G(x)] is the formula [x=0 v ¬(Ay)¬(x=y')], I conclude that, except 0, every element in the domain of any interpretation of PA is a ‘successor’.
I then introduce the caveat:
Now, it follows, first, that, since there are no infinitely descending sequences of ordinals, every set-theoretical interpretation of PA must be restricted to the domain that consists only of the ordinal 0, and the ordinals that are the ‘successors’ of the ordinal 0.
By definition, this domain corresponds to the natural numbers only.
I further note that, if we share Saharon Shelah's conviction, namely:
... ZFC exhausts our intuition except for things like consistency statements, so a proof means a proof in ZFC ... all of us are actually proving theorems in ZFC. (cf. The Future of Set Theory)
then we must conclude that all models of PA are isomorphic, and that there are no non-standard models of a standard, first-order, Peano Arithmetic.
So the question is one of whether we can, indeed, have a non-standard model in which 0 is the only non-successor.
Now, on page 122 of Logic and Structure, Van Dalen writes:
"Our technique of constructing models yields various non-standard models of Peano's arithmetic. We have at this stage no means to decide if all models of PA are elementarily equivalent or not. The answer to this question is provided by Goedel's incompleteness theorem, which states that there is a sentence [L] such that PA does not prove [L] and PA does not prove [¬L]."
Prima facie, the only ground suggested by Van Dalen for believing in a non-standard model for PA (one that is not "elementarily equivalent" to the standard model of PA) is Goedel's Incompleteness Theorem.
This, actually, states that, if PA is omega-consistent, then there is a sentence [L] such that PA does not prove [L] and PA does not prove [¬L].
However, as I have argued in various investigations on my web-page (particularly in http://alixcomsi.com/Two_Presumptions.pdf), PA can be shown to be omega-inconsistent.
I have also shown that this conclusion does not contradict what is known - it contradicts only the accepted interpretations of what is known.
This point is illustrated by Rosser's proof, which is interpreted as showing that there is an undecidable sentence in a simply consistent PA.
However, a closer analysis of the proof shows that it, too, implicitly assumes that PA is omega-consistent.
In a separate investigation, I show that this, essentially, is the assumption that implicitly underlies all current interpretations of classical logic, and which was objected to by Brouwer.
http://alixcomsi.com/Why_Brouwer_was_right.pdf
My argument that all models of first-order PA are isomorphic would, of course, fail if we could give a constructive method, or argument, for the existence of a non-standard model for PA in which 0 is the only non-successor.
(Despite your remark that "The presence of infinite descending chains in non-standard models of PA vitiates your conclusion ...", I am unable to locate an instance of a non-standard model of PA which tolerates the presence of infinite descending chains of integers. I shall be grateful if you could suggest one.)
The argument does, of course, fail in Robinson's system Q of Arithmetic, which accepts only the axioms S1 to S8. It fails also in Gentzen's system of Arithmetic where the Induction axiom S9 is replaced by an axiom of transfinite induction.
Where PA is concerned, however, the argument ought not to be dismissed simply because it contradicts non-constructive interpretations of the existence proofs of general first-order theorems that may not hold when the theory-specific axioms of PA, i.e., S1 to S9, are added to the logical axioms of a first-order theory.
Which is not to say that your criticisms are unwarranted or mis-directed - more than a passing acquaintance with the concepts and areas that you highlight is, indeed, desirable.
Regards,
Bhup
It looks like Bhup's standard response is a huge email, with a link to yet another one of his papers in each response.
Bill, I share your open-minded attitude, and am also merely 100% sure that P≠NP.
ReplyDeleteBhup,
Whereas po-mo obfuscation of simple concepts may appear deep in other disciplines, it is simply out of place in mathematics.
Bhup,
You mention on your webpage that your articles are not peer reviewed. Why don't you get them peer reviewed?
Submission to a journal or a conference is free-of-cost, and I am sure that with breakthroughs of the magnitude that you claim, you will have no trouble finding sponsors if the paper does get in.
If there is a God and if He has a sense of humor, then, for sure, P = NP.
ReplyDeleteSaile
Bhupinder, all nonstandard models of first-order Peano arithmetic have infinite descending chains---indeed, consist of N followed by an infinite dense endpointless order of such chains---as explicated by J.R. Lucas here. He references Boolos and Jeffrey, Computability and Logic for the basic construction of countable nonstandard models of (true) arithmetic. Then see Andrey Bovykin, especially his 2001 paper with Richard Kaye, for PA particulars.
Second-order arithmetics have properties you seek---the rub, however, is that one loses much of the nice structure of first-order theories.
Computational complexity has many avenues tailored for "independent scholars"---here are 3 of mine:
(1) Construct linear-time computable operations a, m, lt on a subset S of {0,1}^* such that for some mapping d: S --> Z and all x, y \in S, d(a(x,y)) = d(x)+d(y), d(m(x,y)) = d(x)*d(y), and lt(x,y) <==> d(x) < d(y). And you need a 1-1 "encoding" map e: Z -> S such that d(e(n)) = n and |e(n)| = O(|n|) (i.e., = O(log n)). But neither e nor d needs to be linear-time computable, and d need not be 1-1, i.e., S can use "redundant representations". Note that (Z,+,*,<) itself doesn't (yet?) deliver, since * is not known to be linear-time computable (recent best word here)---but such an S might be incredibly useful for long intensive discrete calculations! (A toy sketch of the redundant-representation idea appears after this list.)
(2) Study the operation x @ y = log_c(c^x + c^y), for various fixed bases c. It differs from "smash" x # y = c^(log_c(x) * log_c(y)), and importantly, addition distributes over it: (a+x)@(a+y) = log_c(c^a c^x + c^a c^y) = log_c(c^a(c^x + c^y)) = a + (x @ y). One can round to nearest integers, or leave things continuous but study countable models of arithmetics that use @. (A numerical check of this distributive law also appears after this list.)
(3) Study ways to lower-bound the number of + gates in arithmetical formulas or circuits, perhaps aided by programming concrete instances along lines of Scott Aaronson's current item.
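(Returning to (1): a minimal sketch of the redundant-representation idea, under simplifying assumptions -- nonnegative integers only, with Python ints standing in for bit-strings. Carry-save addition uses only bitwise operations, hence linear time in the bit length; the multiplication m and comparison lt operations are exactly where this toy encoding falls short.)

    def csa(x, y, z):
        # Carry-save step: three addends -> two with the same total,
        # since x + y + z = (x ^ y ^ z) + 2 * majority(x, y, z).
        return x ^ y ^ z, ((x & y) | (x & z) | (y & z)) << 1

    def a(p, q):
        # d((u, v)) = u + v; add two pairs via two carry-save steps.
        s, c = csa(p[0], p[1], q[0])
        return csa(s, c, q[1])

    d = lambda p: p[0] + p[1]   # decoding map (not 1-1: that's the redundancy)
    e = lambda n: (n, 0)        # encoding map, with |e(n)| = O(|n|)

    assert d(a(e(12345), e(67890))) == 12345 + 67890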
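(And a quick numerical check of the distributive law in (2), for the base c = e; this @ is the familiar "log-sum-exp".)

    import math

    def at(x, y):
        # x @ y = log(e^x + e^y) for base c = e
        return math.log(math.exp(x) + math.exp(y))

    a_, x_, y_ = 2.0, 0.5, 1.25
    assert abs(at(a_ + x_, a_ + y_) - (a_ + at(x_, y_))) < 1e-9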
Thing is, I think all 3 of these ideas are related. Rather than get wrapped up in "Big Concepts" everyone's thought of, follow Nassim Taleb's "Grey Swans" law as expounded in his new book reviewed here, namely that the surprising advances come from concepts nobody's thought of. Not that this is just a truism: stuff that is hard to put on grants gives a comparative advantage to the independent scholar. The bird-flu rub is that these concrete low-level problems can become a "disease", as attested by a distinguished one-time colleague who confessed to me he "wasted a year" trying to prove a nonlinear time lower bound on deciding whether an undirected graph has a triangle...
Ken
===
There seems to be a genuine misunderstanding here. The non-standard models with infinite descending chains, which you refer to, seem to be those of the standard interpretation of Peano Arithmetic (which is not axiomatisable), not those of first-order Peano Arithmetic.
(See Boolos and Jeffrey, Computability and Logic, 4th ed., Section 25.1, Non-standard Models, p306.)
My paper, however, refers only to arguments that postulate such non-standard models for standard, first-order PA.
For instance, in a Corollary (ibid. p306), Boolos and Jeffrey argue that it follows from the Lowenheim-Skolem Theorem that:
" As in the proof of the existence of non-standard models of arithmetic, add a constant c to the language of arithmetic and apply the compactness theorem to the theory:
PA + {c=/=n; n=0, 1, 2, ...}
to conclude that it has a model (necessarily infinite, since all models of PA are). The denotation of c in any such model will be a non-standard number, guaranteeing that the model is non-standard."
Now, the Compactness Theorem, referred to above, has only been proved previously (along with the Skolem-Lowenheim Theorem and the Overspill Corollary) as holding for a theory that consists of only the standard first-order axioms of formal logic.
Boolos and Jeffrey's above Corollary assumes that the Compactness Theorem also holds when the number-theoretic axioms of Peano Arithmetic are added to the axioms of formal logic.
This, of course, need not be so, since, unless a theory is complete, a model of the theory need not be a model of an extension of the theory.
Thus, I have simply shown that, in the particular case of standard first-order Peano Arithmetic, the Compactness Theorem cannot be appealed to, as above, for concluding that the extended theory obtained by adding an undefined symbol c to PA, as defined above, yields a non-standard model of PA.
Moreover, if we cannot add such a c to the language of PA and maintain consistency, then we cannot construct the equivalence class {... c-2, c-1, c, c+1, c+2, ...}.
Now, for both B&J and Andrey Bovykin/Richard Kaye, postulation of a non-standard model of PA with a dense, linear, ordering of such equivalence classes of non-standard numbers, without end-points, depends on assuming the validity of the above argument for the existence of a model with at least one non-standard number for PA.
Thanks for your suggestions for possible subjects of investigation. Unfortunately, these lie outside my immediate area of interest, which is primarily studying accepted interpretations of Cantor’s, Gödel’s, Tarski’s, and Turing’s reasoning, and addressing some grey areas in the foundations of mathematics, logic and computability.
Thanks also for the various links, particularly the one to the fascinating, and passionate review of Nassim Taleb's book.
Regards,
Bhup
In comment #11 I queried whether it ever makes sense to put a probability estimate p strictly between 0 and 100% on a "universal" proposition S, such as a statement of pure math. To be scientifically relevant the probability has to be extensional (earlier I said "epistemic"), i.e. more than just the speaker's attempt to quantify his/her ignorance. Here is my sketch argument for a no answer:
The only way I see it possibly making sense is if there is a physical event E such that (a) if E happens, then S must hold, and (b) a probability estimate 0 < p < 1 can be assigned to E. Here I'm thinking of p as a "reasonably close" lower bound on "Pr(E)" and hence on "Pr(S)", one that might follow from a theoretical analysis of conditions that could produce E.
Now (a), that certain theorems can be inferred from physical events, may not be far-fetched. Fred Hoyle's prediction of a high-energy resonance in carbon-12 nuclei, with E = "stars produce carbon (else we wouldn't exist)", is arguably an example, with S asserting existence of a certain eigenstate in the mathematical model of the nucleus. Perhaps observations in fluid dynamics may entail there being certain solutions to the Navier-Stokes equations or the like. A similar situation may hold with Yang-Mills theory and high-energy physics, where E can be results from accelerator collisions or cosmological observations that constrain conditions shortly after the Big Bang. P =? NP itself may be "too asymptotic", but maybe there are cases involving "self-organizing structures" that can happen only if Nature can find efficient solutions to concrete small formulas/constraint-satisfaction problems.
However, I think the advent of multiverse cosmology (including the "Landscape") puts a definite kibosh on 0 < p < 1 in (b). E must get at least one trial in the part of our Universe we can observe---else it didn't make sense to mention E as an estimator for S. But then in the Multiverse E gets infinitely many trials---not least from Tegmark's bland assertion that there exist infinitely many copies of our Universe in any of its states---and the probability analysis in the underlying technical papers assumes independence. Hence if Pr(E) >= p > 0, then E is statistically certain to occur somewhere. But E occurring anywhere was said to imply S, and as a "universal", S applies across the Multiverse. Hence Pr(S) = 1, so S must be true.
My point relating to Bill's item is that this conclusion actually defends his saying "at least 100%" in relation to S = "P != NP". Saying (at least) any smaller nonzero probability, be it 99% or 0.001%, would be scientifically fatuous!
I'm interested in what people think of this argument as a piece of philosophy-of-science. Summed up in one sentence, I'm trying to demonstrate rigorously that it never makes scientific sense to ascribe a nonzero, non-unit probability to a universal proposition. If this "multiverse argument" holds water, it may have other applications...
Ken: I think you're wrong, as we can never really know something is true, just that it best answers our questions about the structure of the cosmos.
There are many historical incidents where theory T fit the known experimental data better than theory T-prime, but T-prime became the adopted theory, because it provided a better answer to the question, "Why?"
Take a look at Arthur Koestler's _The Sleepwalkers_, which is a social history of the Copernican revolution in astronomy. The Ptolemaic astronomers made better predictions about exactly where heavenly bodies would be than did ol' Nick Koppernik, who was sloppy and used outmoded telescopes. But his explanation was so *simple* that the why-concerned people realized it had to be closer to the truth.
We are in the middle of an analogous scientific debate today, though the verdict is still out as to which side is more correct. Feynman, in his QED, provides a breathtaking explanation of quantum electrodynamics without using anything past high school math. Throughout, though, he says that he can only explain what photons do, and nobody has any idea why they do them. This is a famous example of an empirical/positivistic approach to what science "means."
Contrast this to the work of string theorists today. Critics say that string theory is pretty and all, but lacks minor details, like predictive power, or experimental verifiability or falsifiability, and instead of equations provides the papier mache and duct tape version of formulas: perturbation theory.
String theorists respond, well, yes, everything you say may be true, but we can answer *why things are the way they are.*
Your stated position does not address how much *that* should count for.
One commenter in this thread has been accused of postmodernism already, so let me quickly say that although it would be easy to go all Derrida meets Fritjof Capra on this, we do have a saving hard science: computational complexity. The study of our relationship to the intractability of problems provides (still rudimentary but rigorous) answers to what we are capable of learning, knowing, even guessing with nonzero percentage of being wrong.
People often forget that computational complexity is fundamentally a *human* science. Feasible means feasible *to us*, etc. Complexity/TCS is the study of the ability of sentient beings to control our destiny in the universe. (Oooh, cue lights and cool music.)
Hmmm... I got carried away by the philosophical thread, although I do think I'm right. I'll be more computery tomorrow.
I'm the commenter who accused another commenter of postmodernism but I now realize I was giving this commenter more credit than he deserved. Given the seriousness with which he has been pushing his "work" on this blog and his imperviousness to any notion of scholarship I think the word "delusional" would be more appropriate.
Bhup: Sorry paaji, but that needed to be said. A bit more knowledge and intellectual rigour is expected even from an independent "scholar."
Aaron, in what exactly am I wrong? My thesis is that a probabilistic framework for a universal proposition can never have extensional meaning. Its meaning must stop at trying to quantify human ignorance and "taking bets", which are intensional.
I can wend back to CS by using Li-Vitanyi et al.'s Kolmogorov-complexity (KC) setting for your point about scientific inference and likelihood-as-probability. Fix a universal TM U for KC, let D be the planetary data, and let me represent initial conditions I as an explicit not implicit argument to U. Then the mathematical propositions S1 = "U(Koppernik,I) = D" (modulo precision and "bad telescopes":) and S2 = "U(epicycles,I) = D" are both true. The proposition H = "our planets follow Koppernik" should indeed be held more likely since |Koppernik| << |epicycles|, but it is not a math universal---it's specific to our solar system.
My argument may be falsifiable if you can find events E1,E2 such that E1 ==> S and E2 ==> not-S, plus reasoning that gives both E1 and E2 non-zero probabilities p1 and p2. Here E1 and E2 cannot come from the same theory T, since this would show T inconsistent, so they must arise from competing theories T1 and T2. But then I'd reply that even if E1,E2 concern the Big Bang radiation and hence get only one trial in our Universe, in the (Tegmark) Multiverse they get infinitely-many trials. Thus the interpretation of p1 as "lim N->infty of expected (num of N trials in which E1 happens)/N" morphs from a term of analysis into actuality, ditto E2, and hence a contradiction. I would root this contradiction by saying that whatever considerations gave you grounds for assigning p1 to E1 and p2 to E2 must have come from such a common theory T (thus exposed as inconsistent).
To clarify "come from" and "arise" in my last paragraph: theory T1 says Pr(E1) >= p1 > 0 but outcome E2 cannot happen, while theory T2 says E1 cannot happen and Pr(E2) >= p2 > 0.
It also may be refuted by a case that knocks out the assumption of independence of trials in different "pocket Universes" in the Multiverse. But (1) one seems to need not just strong correlation but full entanglement between pockets to negate the conclusion that "E is statistically certain to occur somewhere", and (2) this not only seems to contradict premises in Tegmark's technical papers but might also require altering our understanding of quantum theory itself.
My point is that it is possible for genuine scientists to disagree over whether a theory is the correct one, because empirical evidence is not the only criterion with which to evaluate.
One sees this even in "completely pure" disciplines, such as set theory. Set theorists propound positions that axiom X should be added to ZFC because of its beauty, utility, etc.
Therefore, it is sensible to ascribe less than 100% certainty to statements of scientific theory. The difference here is the one between facts and truth. Facts are 100%. Either the sun rises in the east or it doesn't. But truth comes from analysis, i.e., *truth is the relation between facts.* And equally rigorous scientists can disagree on which theory to propound.
I think you are conflating individual data points with a "body of knowledge." They are not the same.
No disagreement from me on the human factors---I just regard them as "intensional". On the extensional side, I'm saying that you can't design an experiment for which one possible outcome E implies S, and argue that the probability of E carries over to a "probability of S". Instead, if your theory giving Pr(E) > 0 is sound, then S must be true---so a "thought experiment" suffices! :-)
Your set-theory example causes me to break a vow not to talk about "weird physics" in this blog, only in Scott's :-)---and I kept this vow during the Paul Cohen posts! Back in his Oxford office in 1985, philosopher Harvey R. Brown sketched to me an experiment involving field wavefront boundaries on a sphere that he thought might influence the Continuum Hypothesis (for aleph-1 only). I don't know what became of this idea---I guess now I'll have to contact him, to see if it's an example of what I'm saying...
Could this be evidence for $\#P\subseteq FL^{Mod_7P}$, and hence $\#P\subseteq FL^{DET}$ by Valiant's result? The standard obstructions for the $Mod_7P$ result should also apply to the $Mod_2P$ result and vice versa, but $Mod_2P$ does not have a shortcut through Valiant's result. Is there an obstruction to $\#P\subseteq FL^{Mod_2P}$? I am confused. Any oracle which gives $\#P\subseteq FL^{Mod_7P}$ might provide $\#P\subseteq FL^{DET}$. Conversely, any oracle for $\#P\not\subseteq FL^{DET}$ should provide an oracle for $\#P\not\subseteq FL^{Mod_7P}$, which is not much different from $\#P\not\subseteq FL^{Mod_kP}$ for any integer $k>1$.
On Valiant's result that mod-7 counting for a certain problem is in P: you are saying the mod-2 version is Parity-P-complete. Is the decision version NP-complete? If so, can Valiant-Vazirani get to a unique-solution instance, and from there do we get US in P and NP in RP? Is this possible? Or is there something I miss?
Essentially I am wondering if the reduction from 3-CNF to monotone planar 3-CNF is parsimonious, or at least behaves in a controlled way, such as mapping 0 solutions to 0 solutions and 1 solution to some non-multiple-of-7 number of solutions.