Sunday, August 16, 2015

Have we made Progress on P vs NP?

While teaching P vs NP in my class Elementary Theory of Computation (Finite Automata, CFGs, P-NP, decidability/undecidability) I was asked: What progress has been made on P vs NP?

I have heard respectable theorists answer this question in several ways:

1) There has been no progress whatsoever, but the problem is only 40 years old--- a drop in the mathematical bucket sort.

2) There has been no progress whatsoever, and this is terrible, since 40 years of 20th and 21st century mathematics is a lot, and we already had so much to draw on. We are in it for the long haul.

3) We have made progress on showing some techniques will not suffice, which is progress--- of a sort.

4) We have made progress on showing P=NP: Barrington's result, FPT, holographic algorithms, SAT in PCP with O(1) queries. Too bad--- since P NE NP, we've made no progress.

5) We have made progress on showing P=NP: Barrington's result, FPT, holographic algorithms, SAT in PCP with O(1) queries. We should not be closed-minded to the possibility that P=NP. (NOTE- other theorists say YES WE SHOULD BE CLOSED-MINDED.)

6) Note that:

a) We have pathetic lower bounds on real models of computation.

b) We have Meaningful lower bounds on pathetic models of computation.

c) We DO NOT have meaningful lower bounds on real models of computation.

7) Have we made progress? Once the problem is solved we'll be able to look back and say what was progress.

I told the students my view: We have made progress on showing some techniques will not suffice, which is progress--- of a sort. My class found this funny. Then again, students find the fact that there are sets that are undecidable (and sets even harder than that!) funny too.


  1. Systematic inequality and hierarchy in faculty hiring networks

  2. I think that circuit and graph complexity over the last few decades are nice/useful/significant reformulations of the problem. I think the problem should be attacked empirically to some degree for analysis/insight, and that theoreticians avoiding this is part of the mental block going on. There are many empirical approaches; (now anticipating objections here) the idea that there aren't is nearly a canard/excuse/denial or copout. More POVs/recent developments here

  3. There hasn't been much progress, but there is no shame in that. It is a difficult problem, for which we developed some novel and seemingly powerful tools; we gave them a go and retreated when they failed to resolve the issue. So now we regroup and try to develop even more tools until we feel ready to give it another go. Likely we'll do several iterations of this before we succeed. This is no different from many (most?) other big open problems in science and math.

  4. Two other thoughts (some near devil's advocacy going on here):

    a) Incentives. Funding. Where are they? Simons grants are one model. Nielsen covers this topic in his book on networked science.
    b) Major teamwork/collaboration! Physics has "big projects" like the LHC for "big questions" like the Higgs, etc. Why not (T)CS? There may be too much of a "lone genius" tendency in (T)CS.

  5. What do folks think about the Bringsjord & Taylor argument for P = NP? I don't know modal logic well enough to even begin to have an opinion. It's possible the argument is correct but too hard to accept, just as (IIRC) Cantor's diagonalization was not immediately accepted.

    I'd love to know what GASARCH thinks about it in particular (since I can still remember being at his dissertation defense when he asked the audience what his work meant, which was the cue for undergraduate Adam Cotsen to give the answer our coterie had been prepped with: "We don't know, and we don't care!")

    But seriously, what do serious computational theorists think of Bringsjord & Taylor?

    1. The B&T paper claims that Steiner Tree (an NP-complete problem) can be solved quickly with an analog device. To their credit they DO NOT leap to `therefore P=NP'. They do realize that analog devices are NOT equivalent to digital ones. They try to bridge the gap, but I can't really see that they do.

      Question for the audience: Can analog devices be used in the real world to get fast real-world algorithms for problems that real world people actually care about?
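For context on what the digital side is up against, here is a minimal sketch (my own illustration, not from the B&T paper) of an exact Steiner-tree solver. It uses the fact that, for nonnegative edge weights, the optimal Steiner tree is the cheapest minimum spanning tree over terminals plus some subset of optional vertices--- so it tries every subset, which is exactly the exponential blow-up that makes the problem NP-complete. Function names are mine.

```python
# Exact minimum Steiner tree by brute force: for every subset W of
# non-terminal vertices, compute the MST of the subgraph induced on
# terminals ∪ W, and keep the cheapest one that connects all terminals.
# Exponential in the number of non-terminals.
from itertools import combinations

def mst_weight(vertices, edges):
    """Kruskal's algorithm restricted to `vertices` (union-find with
    path halving).  Returns MST weight, or None if not connected."""
    vs = set(vertices)
    parent = {v: v for v in vs}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    total, merges = 0, 0
    for w, u, v in sorted((w, u, v) for u, v, w in edges
                          if u in vs and v in vs):
        ru, rv = find(u), find(v)
        if ru != rv:                 # edge joins two components
            parent[ru] = rv
            total += w
            merges += 1
    return total if merges == len(vs) - 1 else None

def steiner_tree_weight(vertices, edges, terminals):
    """Weight of an optimal Steiner tree connecting `terminals`."""
    optional = [v for v in vertices if v not in terminals]
    best = None
    for k in range(len(optional) + 1):
        for extra in combinations(optional, k):
            w = mst_weight(set(terminals) | set(extra), edges)
            if w is not None and (best is None or w < best):
                best = w
    return best
```

On a star graph with three terminals a, b, c joined pairwise by weight-2 edges and all attached to a hub s by weight-1 edges, routing through the hub costs 3 while connecting the terminals directly costs 4--- the hub is the "Steiner point" a soap film finds for free, while the digital solver pays for the subset search.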

    2. Scott Aaronson disagrees with the Steiner-tree-solving soap film mentioned in Bringsjord & Taylor's paper:

    3. There are error-correcting codes called "turbo codes", which can be decoded using "loopy belief propagation." The decoding can be done in analog circuitry, possibly allowing higher speed and lower power. See

      There are also analog implementations of support vector machines:

      These all seem to be custom chips; it sounds like no one is making general-purpose analog chips. Also, I don't know how widely used these all are.
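The reason analog circuits suit this kind of decoding is that it is built on combining continuous "soft" evidence rather than hard bits. As a toy illustration (my own sketch--- a length-3 repetition code, not an actual turbo decoder), decoding reduces to summing per-symbol log-likelihood ratios, the same soft-information combining that loopy belief propagation iterates over a code's factor graph:

```python
# Soft-decision decoding of a length-3 repetition code over an AWGN
# channel with BPSK (bit 0 -> +1.0, bit 1 -> -1.0): sum the per-symbol
# log-likelihood ratios and take the sign.  Turbo decoders iterate this
# kind of LLR combining, which maps naturally onto analog circuitry.

def llr(y, noise_var=1.0):
    """LLR log[P(y|bit=0)/P(y|bit=1)] for BPSK in Gaussian noise:
    equals 2*y/noise_var."""
    return 2.0 * y / noise_var

def decode_repetition(ys, noise_var=1.0):
    """Decode one bit from its repeated noisy observations."""
    total = sum(llr(y, noise_var) for y in ys)
    return 0 if total >= 0 else 1
```

Note that on received samples like [0.1, 0.1, -0.5] the soft decoder outputs 1, whereas hard-decision majority voting on the signs (+, +, -) would output 0: the confident third sample outweighs two marginal ones. That gain from soft information is what turbo decoding exploits.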

  6. Hi,

    I believe I have proved that NP = PSPACE by finding a problem in NP that is also PSPACE-complete. I submitted it to a journal and received the following answer:

    (Unfortunately we have judged not to accept your note.

    We believe that the result is erroneous.

    In general, when one claims an equality of P, or NP and PSPACE, one must first convince some known experts individually and put the proof on the web before submitting.

    Many journals receive many wrong proofs with respect to the above. Many reviewers simply refuse to look at such claims if the above conditions are not met...)

    Despite that decision, I believe my proof is correct. I shared the paper as a preprint:

    Could you give your opinion about it, please?

  7. I guess unlocking P=NP means unlocking the answer to every riddle there is and, to a large extent, easily unlocking the secrets of the whole galaxy. To some extent it might sound mathematical, but religiously it means being God.

  8. I wish to announce the genuine/definitive solution to the P vs NP problem: P does not equal NP. The solution, by solid proof, is included, among other solutions, in my published number theory, freely accessible at:

    Having proven what a (pure) number is, the essential criterion has come to the surface: the true numeric function of addition/subtraction, and the fact that multiplication/division has to absolutely be interpreted as addition/subtraction (a unit-by-unit elementary function) so that there actually exists a function of multiplication or division. Simply put: the computer counts 1 2 3 4 5, but it doesn’t count 12.345, which constitutes a division, thus a factorial deed. The numeric nature is that of addition, not multiplication. Multiplication is the basis for any factorial counting. Yet, e.g., 2×3=6 cannot be considered as existent unless there exists the statement 2+2+2=6. The latter holds all the essence of 6. The former bears no meaning at all when it is about pure numbers.

    Responsible critique is always welcome!

    Let me give an explanation of the problem with reference to my proof:

    The computer counts, i.e. thinks, sequentially, i.e. step-by-step, moving as if in a completely straight line. That is the dual counting 0 – 1. In duality there cannot be any factorial operation, for 0 and 1 alone cannot be multiplied or divided. This is why the computer is the criterion for absolutely accurate counting (computing). We humans know that 6 springs from 2×3, but this is because we have experienced that, or we have an experiential way to count, i.e. produce, 2×3, and this is mainly by means of geometry, and geometry is not pure counting (numbers) but contains empirical/descriptional (visual) data. The computer cannot produce, i.e. count, a×b when there is a×b=c and c is what the computer is given. That is unless the a×b is already stored in the computer’s memory, in which case the computer doesn’t actually count (create countably) but rather presents the already checked (memorized) datum (result). When any descriptive way of counting is taken out of the process, then, in essence, any factorial operation (multiplication/division) is taken out of the process. So we face the challenge of completely disregarding description and empiricality (unlike the intuitive practice of a chef or a perfumer) and contemplating pure numbers. That is called computing.

    The issue that finally arises is why factorial numeric operations are not accurate. If they are not accurate, they don’t exist in the absolute (absolutely accurate) manner, so they have to be referred back to sequential operation (unit-by-unit addition) in order to gain their existence, that is, to be verified or checked.

    So, here is my contribution, and solution, to the P vs. NP Problem; having proven what a (pure) number is, how it exists, and how it exists together with another number, the truth is that only the linear operation is in accordance with the numeric nature; a number only exists by means of addition, and in no case by means of multiplication (factorisation). Factoring, like a×b, cannot actually exist, so it cannot be counted, i.e. generated as a count. It can of course be verified by linear counting, but in this case the addition is the existent one.

    Let me not be misunderstood: I’m only referring to absolutely accurate counting, for this is the case in P vs. NP. This problem is a theoretical one; not an empirical one. Empiricism allows for approximation while the P vs. NP problem doesn’t allow that.

    Here again is the link for my published work:

    You may also find it impressive to see my quote on the circle hosted by:
    (so far the 9th quote in the list)

    As well as this Professor’s opinion of my work:
    (nearly at the end of the list)