Before FLT was solved, did people prove theorems like:

FLT cannot be proven using techniques BLAH. This is important since all current proofs use BLAH.

I do not believe so.

Replace FLT with Goldbach's conjecture or other open problems and I do not believe there were ever such papers.

I have sometimes seen a passing reference like `the techniques of this paper cannot get past BLAH' but it was not dwelled on. The most striking example of this (and what got me to write this post) was the

Erdos Distance Problem (see here)--- when the lower bound Omega(n^{(48-14e)/(55-16e) - epsilon}) was shown, I heard it said that this was as far as current techniques could push it. And then 11 years later the bound Omega(n/log n) was proven. I asked around and YES the new paper DID use new techniques. But there was not the same kind of excitement I hear when someone in TCS uses new techniques (e.g., IP=PSPACE used techniques that do not relativize!!!!)
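For concreteness, the exponent in that earlier bound can be evaluated numerically (assuming, as in the Katz-Tardos bound, that e denotes Euler's number):

```python
import math

# Exponent in the pre-2010 lower bound Omega(n^((48-14e)/(55-16e) - eps)),
# where e is Euler's number.
exponent = (48 - 14 * math.e) / (55 - 16 * math.e)
print(round(exponent, 4))  # roughly 0.864, still well short of the
                           # near-linear Omega(n/log n) bound
```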

With P vs NP and other results we in TCS DO prove theorems and have papers like that. I am NOT being critical-- I am curious WHY we do this and other fields don't. Some options:

1) Bill is WRONG- other fields DO do this- see BLAH. Actually proof theory, and both the recursive math program and the reverse math program, DID look into `does this theorem require this technique?' but this was done for theorems that were already proven.

2) Bill is WRONG- we are not that obsessed with barrier results.

3) P vs NP is SO HARD that we are forced into considering why it's hard. By contrast there has been progress on FLT and Goldbach over time. Rather than ponder that they NEED new techniques they went out and FOUND new techniques. Our inability to do that with P vs NP might be because it's a harder problem- though we'll know more about that once it's solved (in the year 3000).

4) P vs NP is closer to logic, so the notion of seeing techniques as an object worth studying is more natural to us.

What do you think?

I believe that Mathematical Logic has such examples.

1. Friedberg-Muchnik Theorem. There is the observation that diagonalization techniques using computable witnesses will not work. I believe this was explicitly known.

2. Continuum hypothesis. Both forcing and Boolean valued models were radically new techniques. Not sure whether the *need* for new techniques was explicitly mentioned in previous papers.

Finally, one could argue that many of the *counterexamples* in Mathematics are a kind of statement about the insufficiency of certain techniques. (Carmichael numbers show that Fermat's Little Theorem is not sufficient to exclude ALL nonprimes, continuity arguments do not suffice to prove differentiability, etc.)
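The Carmichael example is easy to see concretely: 561 = 3 * 11 * 17 is composite, yet it passes the Fermat test a^(n-1) == 1 (mod n) for every base a coprime to n, so that test alone cannot expose it. A minimal check:

```python
import math

n = 561
assert n == 3 * 11 * 17  # composite

# Every base coprime to 561 satisfies Fermat's Little Theorem anyway.
fooled = all(pow(a, n - 1, n) == 1
             for a in range(2, n) if math.gcd(a, n) == 1)
print(fooled)  # True: the Fermat test fails to detect 561 as composite
```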

Arguably, the genesis of modern mathematics is the Abel-Ruffini theorem, which states the impossibility of a general method for solution of quintic equations by radicals.

Is this different from what you had in mind?

Positive results are hard, yet papers need to be written (for graduation, jobs, promotions, etc)?

All great examples- let's see why they are not QUITE what I had in mind.

1) FM Theorem- did they show you could not have computable witnesses BEFORE proving it, and hence this was considered a barrier to a proof, OR was this theorem about computable witnesses done after it was proven? I suspect after-the-fact, but if not then YES it would qualify.

2) CH, counterexamples, and the unsolvability of the quintic-- these tend to show that what you want to prove is FALSE (though in CH that might not be quite right-- at least they show that ZFC --> CH can't be done), and indeed being FALSE is a barrier to showing something is true.

I am looking for a theorem T which is open, which people think IS true, and where they prove certain techniques won't work to prove T, but do not prove that T is false.

Thanks for clarifying my question!

Parity barrier in sieve theory?

"Entropy cannot decrease under symplectomorphic dynamical flows. This is important because all successful physical theories (both classical and quantum) are dynamically symplectomorphic."

It's interesting that physicists and complexity theorists alike profess articles of faith in respect to oracles.

Physicist: No measurement device can "oracularly" determine the position and momentum of a Newtonian particle.

Complexity theorist: P can be separated from NP even when class membership is "oracularly" determined.

Physicists assume that (unphysical) oracles don't exist, whereas complexity theorists assume (undecidable) algorithmic oracles do exist. Vive la différence? Hmmmmmm …

Open question: Might a refrigerator whose working fluid comprised Chaplygin Sleigh molecules violate the Second Law? To my surprise — and delight — an arXiv search shows that work in this area is getting underway (arXiv:1603.00369)! :)

I don't know if this is exactly what you are looking for, but before Borel determinacy was proved, Harvey Friedman showed that any proof of it would require the axiom of replacement. I'm not sure how surprising people found that at the time though.

Before FLT was proved, I think it was understood why an infinite descent approach wouldn't work. Something connected with the structure of the Tate-Shafarevich group.

There are also comments on Tao's blog and the Polymath project indicating that the Zhang/Maynard methods probably can't reach the full twin prime conjecture.

Great comments!

I rephrase my question in light of them:

While other fields have some barrier results, they don't seem to dwell on them. They don't have workshops on barrier methods. When a barrier to a technique is proven they don't try to apply it to all other open problems.

In TCS we seem to do this (This is not a criticism).

1) Is Bill right- is there a DIFFERENCE in mentalities about barrier methods in TCS vs other parts of math?

2) If so, then why is this true?

I think there is a difference between TCS and other fields. Maybe it's because complexity theorists are already used to proving that things are hard, so it's natural to also try to prove that certain proofs are hard. Computer scientists also like to explore specific models of computation, and a proof technique can be considered a very specific model of computation. So proving the limitations of a proof technique is very close to what many complexity theorists are already doing.

TL;DR: TCS likes lower bounds, so it's natural that they dwell on "lower bounds" for proof techniques.

Terry Tao's weblog discusses barrier results in essays tagged 'Navier-Stokes equations', 'Euler equations', and 'finite time blowup'. These essays inspire consideration of:

(1) What are the barriers to solving the Millennium Problem "Navier-Stokes Equation"?

(2) What are the barriers to demonstrating (what John Preskill calls) "Quantum Supremacy"?

(3) What are the barriers to solving the (Millennium Problem) "PvsNP"?

Considering first the problem of Navier-Stokes blowup, Tao opines (in a comment on "Finite time blowup for Lagrangian modifications of the three-dimensional Euler equation", 29 June 2016):

---

"It is widely believed that for the Euler equations (in which the viscosity is zero), finite time blowup should be possible, and perhaps even quite generic. For positive viscosity (Navier-Stokes), there is less consensus, but personally I believe it should be possible to construct some very special solutions which blow up in finite time, even if 'generic' solutions will continue to exhibit global regularity (for some vaguely defined notion of 'generic')."

---

Considering second, the exhibition of Quantum Supremacy, Tao's language adapts as follows:

---

"It is widely believed that for closed quantum systems (in which the evolution is unitary), Quantum Supremacy should be possible, and perhaps even quite generic. For open quantum systems (e.g. quantum electrodynamical systems), there is less consensus, but many researchers believe it should be possible to construct some very special experiments which exhibit Quantum Supremacy, even if 'generic' electrodynamical systems can be simulated with resources in P (for some vaguely defined notion of 'generic')."

---

Here QED decoherence plays a role in demonstrating Quantum Supremacy analogous to the role viscosity plays in demonstrating Navier-Stokes blowup.

Professor Tao is (of course) open to the possibility that Navier-Stokes equations do not blow up. E.g., Tao's "Why global regularity for Navier-Stokes is hard" (18 March, 2007) considers "Strategy 3: Discover a new method which yields global smooth solutions."

Transposing to Quantum Supremacy, "Strategy 3" becomes "Discover general quantum simulation methods that for evolution promised to be QED, require only polynomially many state-space dimensions." Garnet Chan's well-attended plenary lecture at last month's QIP 2017 "Simulating quantum chemistry on a classical computer" (YouTube yYwUQe2Aorw) made such a strong (albeit empirical) case for "Strategy Three", that even the most ardent Quantum Supremacists were observed to play hooky from subsequent QIP lectures, instead engaging in animated conversations with Professor Chan. QIP 2017 was fun! :)

Considering third, separating P from NP, Tao's language adapts as follows:

---

"It is widely believed that the class of languages in P is a proper subset of the class of languages in NP, promised that formal proofs of complexity class-membership are provided. For the broader problem in which class-membership is decided oracularly, there is less consensus, but many complexity theorists believe it should be possible to construct some very special proof techniques that separate P from NP, even if complexity class-membership is undecidable for 'generic' languages (for some vaguely defined notion of 'generic')."

---

In a nutshell, Navier-Stokes blowup is incompatible with viscosity, Quantum Supremacy is incompatible with QED, and separating P from NP is incompatible with class-oracles. We thus conceive a 21st century in which:

• Navier-Stokes flows cannot blow up, and

• Quantum Supremacy cannot be demonstrated, and

• PvsNP (with oracular promises) cannot be proved

Effectively we already live in this world, and (as was seen at QIP 2017) it's not so bad! :)

I think there might be a barrier mentality in TCS because of the different type of laws of nature. The success of engineering in many sciences is based on laws of nature which allow for in-silico experiments: with more time and hardware you get more reliability (i.e. exactness and probability) of simulations. The laws of nature of CS (Goedel, Turing, Rice, ...) imply that there are no in-silico experiments for CS.

"other fields DO do this" - some examples above. I would like to mention the Polymath project about bounded gaps between primes. The results are (a) 246 unconditionally, (b) 6 using the generalized Elliott-Halberstam conjecture, and (c) a proof that, assuming the latter, this bound is the limit of the sieve-theoretic method.

However, I agree - there are MORE barrier results about the P=NP question than in number theory, for no obvious reason: there is no reason to expect that this problem is harder than, for example, the twin prime conjecture, which waited centuries for the first substantial progress.

First, it is not TCS in general, it is complexity theory. And complexity theory is about proving impossibility results; whether it is computation that is impossible or a proof that is impossible, the two are not that different from each other.

Why did BGS prove their result? I don't know. Was it considered a barrier? I am not sure.

Why did RR prove theirs? Because people spent 10 years and got nowhere despite initial optimism, and it was natural for them to ask why. Also, Razborov was working on proof complexity and formalization of circuit lower bounds, if you look at his publications from that area. And that is when we started calling them barriers, right?

Algorithms is about showing what we can, complexity theory is about showing what we cannot, and crypto is the field of making impossibility results useful.

Why don't other fields do it? Some do: logic, algebra, combinatorics, ... Those who can prove limitations of techniques do prove them. They don't refer to them as impossibility results, maybe because it is not part of their culture, whereas it is part of the culture of complexity theory, which has its roots in mathematical logic. In logic there are so many results of the form: you can prove this and that by such and such, you cannot express this and that in such and such a language, ... Think back to Tarski's undefinability of truth theorem, Goedel's theorem on the unprovability of consistency, etc.