Monday, August 17, 2020

Mathematics is not commutative

In my poll of P vs NP and other issues, one of the questions was

                             Is factoring in P?

One of the most interesting answers was

                             I don't really see why it shouldn't be. - Peter Shor

Recall that Peter Shor proved that Factoring is in Quantum-P, which led to intense interest in Quantum Computing.

1) What if factoring were in P and this had been shown before Shor's algorithm? Would Shor or someone else ever have proven factoring in quantum P? Would there be as much intense interest in quantum computing as there is now? Perhaps by physicists more than CS people?

2) What if factoring were in P and this had been shown before RSA? Where would crypto be now? Zip drives with a googolplex random (or nearly random) bits and more one-time pads? More lattice-based crypto? Or RSA but with larger numbers? This may depend on how good the factoring algorithm is.

3) More generally, how much does the order of events matter for science?

a) If the Banach-Tarski paradox had been discovered early on, would we have just tossed out the Axiom of Choice before so much more was built on it? Darling thinks we should toss out AC NOW because of Banach-Tarski.

b) In the model of set theory L you can do ALL of math except some parts of set theory and maybe a few other things (not quite: Harvey Friedman has found some combinatorial statements that need large cardinals to prove). Had L been discovered earlier, could we all now be working in L (except a few people who look at other models, but they would not be in the mainstream)? We might know more about L and less about forcing. We would KNOW that AC and CH are true. Or we would think we know.

c) If Engineers had been the first ones to look at SAT and reductions, might they have been content to know that if SAT \le A then A is probably hard? No need for the Cook-Levin Theorem! And then when someone proved Cook-Levin, would the Engineers not really care, since they already knew SAT was hard?

d) I can imagine Ramsey's Theorem being discovered much later for some application, or perhaps never being discovered at all.

e) VDW's theorem has so few applications that I can imagine it never being discovered.

4) There are cases where if A is discovered before B then B has an easy proof, whereas if B is discovered before A then B has a hard proof. I'll give one example:

Given that HALT is undecidable, Godel's theorem is easy.

Assume HALT is undecidable.

Let STAT(e) be the statement "M_e(0) does not halt."

CLAIM: There is some e such that M_e(0) does not halt but ZFC cannot prove this.

PROOF: Assume, by way of contradiction, that for all e such that M_e(0) does not halt,
ZFC can prove this. Then HALT is DECIDABLE:

Given e, run M_e(0) and at the same time enumerate all proofs in ZFC. By the assumption, you are
guaranteed to either find that M_e(0) halts or find a proof that M_e(0) does not halt. Hence you will,
in finite time, know whether or not M_e(0) halts.
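The dovetailing procedure above can be sketched in code. This is only a toy illustration: `step_machine` and `proves_non_halting` are hypothetical stubs standing in for a real Turing-machine simulator and a real ZFC proof enumerator, with toy semantics (even-numbered machines halt, odd-numbered ones have non-halting proofs) chosen so the assumption of the proof holds.

```python
def step_machine(e, t):
    """Stub simulator: True iff machine M_e on input 0 halts within t steps.
    Toy semantics: even-numbered machines halt after e steps; odd ones never halt."""
    return e % 2 == 0 and t >= e

def proves_non_halting(e, t):
    """Stub proof enumerator: True iff one of the first t ZFC proofs
    proves 'M_e(0) does not halt'.
    Toy semantics mirroring the assumption: every non-halting (odd) machine
    eventually gets a proof."""
    return e % 2 == 1 and t >= e

def decides_halt(e):
    """Dovetail: simulate M_e(0) while enumerating ZFC proofs.
    Under the assumption, exactly one branch eventually succeeds,
    so the loop always terminates -- contradicting undecidability of HALT."""
    t = 0
    while True:
        if step_machine(e, t):
            return True          # found: M_e(0) halts
        if proves_non_halting(e, t):
            return False         # found: a ZFC proof that M_e(0) does not halt
        t += 1
```

Of course, the real content of the contradiction is that no such always-terminating `decides_halt` can exist; the toy stubs are exactly what the assumption (falsely) grants us.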


Is the sequence of events where HALT is proven undecidable before Godel's theorem plausible?
I think so.

I INVITE my readers to give their own examples of when Math is not commutative, meaning that
the order of events matters.


  1. My favorite example is Primes in P. If it had been proven in the 70's instead of the 2000's we wouldn't have had all those partial results (Primes in RP, co-RP, UP, and in P if the generalized Riemann Hypothesis holds).

  2. Evidence that integer factorization is in P:

  3. I disagree somewhat that if it had been known first that HALT is undecidable then Godel's theorem would have been easy. The proof you gave is implicitly using the fact that statements about Turing machines are equivalent to statements in the language of arithmetic. This may seem obvious now, but I think it was genuinely novel at the time and was considered a major part of Godel's breakthrough. It may seem more obvious to us now because working with computers has given us lots of practice translating statements between different languages (e.g. writing compilers and interpreters or even just various kinds of parsers).

    1. AH- excellent point.
      I wonder- is coding `M_e(0) does not halt' into an arithmetic
      statement hard? So could we NOW give an easier proof of Godel?
      I think so.

  4. @Lance: Good point. I had almost wanted to say that had we just come up directly with the deterministic version (for Primes), we would not have all these probabilistic results hinging on 'randomness'. This didn't fly quite as well :)

  5. I have wondered if the mathematical community had early on adopted dependent choice (DC) rather than the full-blown axiom of choice, and conjectured that "all sets of reals are Lebesgue measurable" (LM), would we now regard ZF + DC + LM (rather than ZFC) as the standard foundation of mathematics? See this answer by Andreas Blass on MathOverflow:

  6. Here's another potential example of non-commutativity that I just learned about. In a remarkable lecture, Brendan Guilfoyle speculates that if Simon Donaldson's work on smooth 4-manifolds had come out before Michael Freedman's paper on the 4-dimensional Poincare conjecture, then Freedman's paper in its current form would never have been accepted for publication, because its consequences would have seemed so absurd that referees would have demanded a much more detailed and careful argument.

  7. If Turing had come before Cantor, then modern mathematics would not use a set theory like ZFC as its foundation. As in, imagine the two were reversed: Cantor comes up with the theory of computation in the late 1800s (perhaps inspired by machines like Babbage's) and it is Turing who later comes up with infinite sets, "Turing" normal forms, ordinals, etc.

    I contend that in that universe modern math would've adopted a very different foundation, one based around concepts of computability and complexity. Cantor's set theory was only adopted as the foundation of 'pure' math because it lacked any better competitor in the early 1900s, despite already growing awareness of the ambiguities and paradoxes it contains. In the alternate world Hilbert and co. would not only not have Cantor's set theory (Turing wouldn't invent it for another 10-20 years), but would arguably have a superior alternative right in front of them.

    Such a change would have profound implications for the last 100+ years of mathematics. Pure math would be far less comfortable with infinities and non-constructive (or overly abstract) constructs than it really is. Putting computation and algorithms at the center would've been a big boost to things like combinatorics and complexity theory itself, but also potentially forced things like real analysis and topology to either A) come after Turing or B) adopt a very different foundation. I think the upshot would be a universe where a lot more of "pure" math is something you can do on a physical computer, but a fair amount of OUR pure math either never exists or looks very different.