## Wednesday, August 13, 2008

### Ridiculously hard proof of easy theorem

Justin Kruskal is a high school student working with me on VDW stuff (of course). The following conversation happened recently.

BILL: Justin, you've seen a proof of VDW's theorem, but there are easier things you haven't seen and should. Can you prove that the number of primes is infinite?

JUSTIN: That's easy.

BILL: Good. How does it go?

JUSTIN: By the Green-Tao Theorem there are arbitrarily long arithmetic sequences of primes. Hence there are an infinite number of primes.

What to make of this?
1. Justin does not know how to prove the Green-Tao theorem (neither do I). However, if the proof does not use the fact that there are an infinite number of primes, then Justin's proof is valid. READERS: does anyone know, does it use the infinitude of the primes?
2. Justin now knows the standard proof. However, he should also learn that, at his current level of math, you should be able to prove everything you use.
3. When does one start using theorems whose proofs one does not know? In research this is common. For basic coursework it should be rare.

1. There is actually a deep philosophical question here.

Take, for example, the classification of finite simple groups. I doubt there are 10 living people who can claim they have verified the whole thing. So when basing further results on that result, what are we to make of it? (For that matter, one can ask how much faith we should put in the classification theorem itself.)

2. In fact, there is a great incentive in mathematics to find simpler proofs of known theorems. For instance, if one were able to give an algebraic proof of the Poincaré conjecture (one not using Ricci curvature), it would be considered a great advance. I would think the same would be true for any result whose only known proof uses the classification theorem of finite simple groups.

But mathematicians are philosophically accustomed to such things, since mathematical theorems say nothing about the physical world; they are just statements about what you can deduce from a set of axioms, and the choice of the initial axioms is driven by rather subjective considerations (e.g., how strong an axiom of choice, if any, one should assume).

For disciplines which claim (at least in principle) to say something about the physical world, such as Engineering or Computer Science, the situation is a bit different.

3. "However, if the proof does not use the fact that there are an infinite number of primes, then Justin's proof is valid. READERS: does anyone know, does it use the infinitude of the primes?"

The proof uses the prime number theorem, which is a much stronger assertion.

"When does one start using theorems whose proofs one does not know? In research this is common."

I'd argue that it should be (and hopefully is) rare in research. There are occasional cases, like the classification of finite simple groups, where it's unreasonable to expect everyone to read the proof and at the same time unreasonable to declare that certain mathematical areas are unworthy of investigation because they are beyond the capabilities of a single person. Still, at least in relatively accessible areas like theoretical computer science or combinatorics, as a general rule you should understand almost everything your work directly relies on, and when you don't, you should be able to give a good excuse.

For example, I'd consider it OK to use the graph minor theorem without understanding the proof. I wouldn't consider it OK to use the PCP theorem without understanding the proof (although it would of course be OK to refer to it in a side comment). There's no principled distinction, just a judgement of what constitutes an unreasonable burden on a researcher.

The general moral principle is that, when proving theorems, you must take intellectual responsibility for everything that goes into your result. Using previous results amounts to an implicit endorsement that they have been correctly proved. It's foolhardy to do this for results that haven't been widely checked unless you have personally checked them, and it's uncurious as well as lazy not to read them yourself even if they have been widely checked.

4. Re: using theorems you don't understand the proofs of:

If nothing else, it's a good idea to understand the proofs of results you're using because they may well use techniques that you've overlooked. At a lower level, for example: A couple years ago, there was a problem on the Putnam competition that asked you to prove that there were an infinite number of integers satisfying some condition. The easiest proof was essentially a "tweaked" version of Euclid's proof of the infinitude of the primes. If you didn't know that proof, chances are you wouldn't have solved the problem.
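For readers who haven't seen it, the Euclid argument the comment refers to can be sketched as a short computation. This is just an illustration of the idea (the function names are my own, not from any source): from any finite list of primes, the number one more than their product must have a prime factor outside the list.

```python
# Euclid's argument: given any finite list of primes, the number
# p1*p2*...*pk + 1 leaves remainder 1 when divided by each pi,
# so any prime factor of it lies outside the list.

def smallest_prime_factor(n):
    """Return the smallest prime factor of n >= 2 by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

def new_prime_outside(primes):
    """Given a finite list of primes, produce a prime not in the list."""
    product_plus_one = 1
    for p in primes:
        product_plus_one *= p
    product_plus_one += 1
    return smallest_prime_factor(product_plus_one)

print(new_prime_outside([2, 3, 5]))            # 2*3*5 + 1 = 31, itself prime
print(new_prime_outside([2, 3, 5, 7, 11, 13])) # 30031 = 59 * 509, so 59
```

Note that the product plus one need not itself be prime (30031 is not); the proof only needs some prime factor outside the list.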

5. This is perhaps an example of the negative side of outreach. Explaining the leading results in a field is generally viewed as a good thing (especially to the NSF), but if instead you get more of an "explanation" than a real explanation, this can lead to trouble.

6. "It's foolhardy to do this for results that haven't been widely checked unless you have personally checked them, and it's uncurious as well as lazy not to read them yourself even if they have been widely checked."

I am not sure I agree with this rather strong assertion. Reading proofs of results one relies on is beneficial, but hardly obligatory because of its impracticality in most cases.

It is beneficial because it will probably lead one to a better understanding of one's own new result that relies on it.

It is not obligatory because one can claim that it is not the best use of the researcher's time.

Is it obligatory to check all of the code in a software library before writing code that builds upon it? How about before building a more complicated physical machine out of simpler components; should we verify and understand all the components?

7. "Reading proofs of results one relies on is beneficial, but hardly obligatory because of its impracticality in most cases."

I deny that it is often impractical in theoretical computer science. Sure, there may be occasional cases where it is difficult, but a large majority of theoretical computer scientists have never written a single paper in which there's any reasonable excuse for not understanding the background.

Here's a thought experiment to clarify how the community feels. Imagine if, during a STOC talk, someone announced that he did not understand the previous work his paper relied on, and in fact had never tried seriously to read it. Unless that work was exceptionally technical, this would be considered a humiliating admission. Similarly, if someone remarked after a talk "I bet X doesn't even understand Y's work that was used in X's paper", it would be considered an insult. Right or wrong, the expectation is that researchers understand the results their work uses (and anyone who routinely violates this expectation without publicly admitting it is effectively faking their academic reputation).

8. http://www.math.utah.edu/~pa/math/a23.html

--
V

9. "When does one start using theorems whose proofs one does not know? In research this is common. For basic coursework it should be rare."

In math, the existence and uniqueness of determinants is often used by undergrads without knowing a proof. I am aware of this because of a friend of mine (a math major) who was skeptical of every proof he saw which used determinants until he finally saw a proof in a graduate course.
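One can at least check the determinant's characteristic behavior on small cases, even without a proof of existence and uniqueness. This is only an illustrative sketch (the cofactor-expansion `det` below is my own toy implementation, not from any course):

```python
# Determinant by cofactor expansion along the first row, for a square
# matrix given as a list of rows. Exponential time; fine for tiny cases.

def det(m):
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]  # swaps the columns of A when multiplied on the right
AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]

print(det(A))                       # 1*4 - 2*3 = -2
print(det(B))                       # -1 (a column swap flips the sign)
print(det(AB) == det(A) * det(B))   # multiplicativity holds here: True
```

Of course, spot checks like this are exactly the kind of evidence the skeptical math major would (rightly) refuse to accept as a proof.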

10. Georges Gonthier along with another guy did a formal proof in Coq of the four-color theorem. Gonthier is now apparently working on formalizing the proof of the classification of finite simple groups.

11. Justin is mistaken in asserting that Ben Green and Terence Tao's theorem implies that the set of prime numbers is infinite. Note the rather subtle interpretation of the theorem's main assertion, i.e., that the primes contain arbitrarily large arithmetic progressions. Please note the difference between this statement and the fact that the set of primes has cardinality aleph-zero (proved by Euclid quite some time ago).

http://en.wikipedia.org/wiki/Arbitrarily_large

12. Hi mihai,

"Justin is mistaken in asserting that Ben Green and Terence Tao's theorem implies that the set of prime numbers is infinite."

It seems you confused yourself. Suppose there are a finite number, n, of primes. Green-Tao says there's an arithmetic progression of primes of length N > n. Contradiction.
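Spelled out in full, the contradiction in the comment above is a two-line derivation, using nothing beyond the statement that the primes contain arithmetic progressions of every finite length:

```latex
Suppose the set of primes were finite, of size $n$. By the Green--Tao
theorem the primes contain an arithmetic progression of length $n+1$:
primes $p,\ p+d,\ p+2d,\ \dots,\ p+nd$ with common difference $d \ge 1$.
These $n+1$ numbers are pairwise distinct, so there are at least $n+1$
distinct primes, contradicting the assumption that there are only $n$.
\qed
```

So "arbitrarily long" progressions do force infinitude, even though they fall far short of infinitely long progressions.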

13. I'm not really sure if the Green-Tao theorem does assume infinitely many primes or not. It does, however, assume the following theorem.

Theorem 1.1. The prime numbers contain infinitely many arithmetic progressions of length k for all k.

This seems to be based on van der Corput's discovery that there are infinitely many triples of primes in arithmetic progression. That work was based on Vinogradov's method of prime number sums. So if Vinogradov assumed that there are infinitely many primes, then Justin's theorem is a circular argument with a few detours.

They also assume Szemerédi's theorem, which may or may not include an assumption that there are infinitely many primes as well. The only proof of Szemerédi's theorem that I could find was written by Terence Tao and is available at arXiv:0405251. However, this is not the original proof; it is just Tao's new way of proving the theorem.

14. Mihai seems to think Justin believed there were infinitely long arithmetic progressions in the primes (although I doubt he did).

"I'm not really sure if the Green-Tao theorem does assume infinitely many primes or not."

Footnote 2, page 4: it assumes the prime number theorem, Dirichlet's theorem on primes in arithmetic progressions, etc. (which are much stronger than the mere infinitude of the primes).

"It does however assume the following theorem."

"Theorem 1.1. The prime numbers contain infinitely many arithmetic progressions of length k for all k."

This is the statement of the main theorem proved in the paper, not an assumption!