Wednesday, October 19, 2011

Theorems that are impressive at first but then....

Mission Impossible was my favorite show as a kid. As an adult it would not make my top 20, and I wonder why I liked it so much as a kid. (Actually, I do know: at the beginning they are given a well-defined problem and they solve it, which is how math works.)

This inspired this post: is there some theorem that you were initially impressed with but are now far less impressed by? I list the examples I have heard of in this category, and I invite you to submit your own.
  1. Every number is the sum of 4 squares. This was impressive at first and still is. Number theorists must use this all the time! Alas, aside from its use in Hilbert's 10th problem, and maybe a few other places, I have never seen it used, and I am now less impressed. However, this may be unwarranted. Some theorems in math are impressive for the achievement, others for their later use. This one IS impressive for its achievement, but, as far as I can tell (and I could be wrong), not for its uses.
  2. Every group is a group of permutations. Group theorists must use this all the time! Alas, the proof makes you realize it's more of a tautology. It is rarely used by group theorists; it appears in some proofs of Sylow's theorems, but I do not know of any other uses. And this one is not impressive for its achievement either.
  3. The Prime Number Theorem. Since results that are very very close to it can be gotten with much much much less effort, getting the actual constant down to 1 seems like too much sugar for a cent. (For more on PNT and a link to an easy proof of a weaker version see an old post of mine here.) However, this one is an achievement certainly. And it inspired other great mathematics.
  4. Poincare's Conjecture says that if something looks, feels, and smells like a sphere, then it's a sphere. Is that really worth $1,000,000? Perhaps Perelman didn't think so either.
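Item 1 can at least be played with directly: Lagrange's theorem promises a decomposition for every natural number, and for small numbers a brute-force search finds one. A throwaway Python sketch (the function name is mine):

```python
from itertools import product

def four_squares(n):
    """Return (a, b, c, d) with a^2 + b^2 + c^2 + d^2 == n, by brute force.

    Lagrange's four-square theorem guarantees a solution for every n >= 0.
    """
    limit = int(n ** 0.5) + 1
    for a, b, c, d in product(range(limit), repeat=4):
        if a * a + b * b + c * c + d * d == n:
            return (a, b, c, d)
    return None  # unreachable for n >= 0, by the theorem

print(four_squares(310))
```

This is of course exponentially far from how one would prove the theorem, but it makes the statement concrete.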


  1. Brouwer fixed-point theorem?

    "Take an ordinary map of a country, and suppose that that map is laid out on a table inside that country. There will always be a "You are Here" point on the map which represents that same point in the country"

    I had to walk to a friend's house one day and stayed there the night. The next day, on the walk home, I realised the theorem was obvious when I found myself at exactly the same point on the road as I had been at that time the day before.

  2. Brouwer fixed-point theorem can provide a fun linkage between graph colouring and topology. I gave a talk at the University of Calgary on this a few years back and was pleasantly surprised at how many difficult combinatorial proofs are resolved trivially with the "hairy ball" theorem.

  3. On the other hand, you continue to remain a bozo

  4. Re: Sums of Four Squares:

    The way I see it, this theorem's main purpose is to serve as a "proof of novelty" of quaternions. Quaternions are introduced, and then BOOM, sums of four squares. The theorem is not particularly useful for anything, but its unexpected nature forces the reader to perk up: "Oh, these quaternion things have some kick!"

    I wish that there were more of this in math. If, in the middle of a complicated, obscure math paper, the author finds a way to throw in a simple corollary about the "real world" (something which doesn't seem, at first glance, to have anything to do with the obscure paper), it makes me much more interested. Even if it's just something like an alternate proof of the infinitude of primes.

  5. The best example, at least for me personally, is e^(i*pi)+1=0. It seems deep and mysterious right up to the point one takes complex analysis.
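For what it's worth, the identity in comment 5 is a one-line numeric check with Python's cmath (exact equality becomes ~1e-16 in floating point):

```python
import cmath

# e^(i*pi) + 1 should be exactly 0; floating point leaves a tiny residue
value = cmath.exp(1j * cmath.pi) + 1
print(abs(value))  # a number on the order of 1e-16
```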

  6. For me the jury is still out on Onsager's regression hypothesis and reciprocity theorem. But Onsager's reciprocity theorem *was* impressive enough to win him a Nobel Prize (in chemistry). One wonders, what other mathematical theorems/insights/relations were impressive enough to play a key role in Nobel awards? Perhaps Dan Shechtman's award this year for observing quasicrystalline symmetries (also in chemistry) is an example?

  7. I have applied the Sum of Four Squares result! When I worked at a math tutoring lab, on slow nights the other tutors and I occasionally played the Sum of Four Squares game. Someone calls out a "biggish" number, and everyone scrambles to present it as a sum of four squares first. The winner then calls the next number. It was an unspoken rule that you're supposed to do it in your head.

    We also played Goldbach's game (an analogue of the above). To be safe, we only used numbers less than 10^18.
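The Goldbach game above is a sketchable exercise too; the 10^18 joke works because the conjecture has been verified computationally well past that range, so small even numbers are safe. A minimal Python version (function names are mine):

```python
def is_prime(n):
    """Trial division; fine for 'biggish' tutoring-lab numbers."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach(n):
    """Return a pair of primes (p, q) with p + q == n, for even n >= 4."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None  # no even counterexample is known below ~4 * 10^18

print(goldbach(100))  # → (3, 97)
```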

  8. One thing you said deserves emphasis: There are different reasons why one might be impressed by a theorem, e.g., (1) considerable technical virtuosity is needed to prove it; (2) the theorem statement itself is amazing and unexpected; (3) the theorem, or the techniques developed in order to prove the theorem, is very useful for proving other theorems. It doesn't seem to be "fair" to the theorem if you are impressed with it for one reason, then expect it to be impressive for another reason, and are disappointed when it isn't.

    More interesting would be cases where one is impressed with a theorem for one of the reasons, then later decides that it is not impressive for the same reason. I'm having a hard time coming up with examples of this for myself actually.

    Regarding your examples, I think you're right that the four-squares theorem is like Fermat's Last Theorem: It's not "useful," but it's a benchmark. "Every group is a group of permutations"—this is used all the time if you rephrase it as, "There is a representation of a group called the regular representation." The Prime Number Theorem is again a kind of benchmark. The surprise here is, given that it's easy to get close, why does it seem so difficult to get the exact answer? Phrased this way, it is still somewhat mysterious why it's so hard to prove. Finally, I think Poincare's conjecture may be more impressive (in the sense of a surprising theorem statement) if one knows that in four dimensions and higher, there are compact simply connected manifolds without boundary that are not homeomorphic to a sphere.

  9. After seeing this post I asked myself: what theorem has actually impressed me? First answer: most of them. Next answer: almost none of them. Being "impressed" is very personal. If you worked on a thing for 10 years, and finally see that somebody has done it, you are really impressed. If not, then it depends on whether you need this "impressive thing" later. Has anybody ever needed to know that x^n+y^n=z^n has no integer solutions for n>2? Has anybody ever needed to know that solving Diophantine equations is undecidable?

    Perhaps Perelman has had similar feelings. Anyway, I am often most impressed by "trivial" theorems, like the Sunflower Lemma of Erdos and Rado, and the like. They are just nice. Sorry, "being impressed" cannot be applied to an intellectual product. It is just a product, free of feelings.

  10. Pythagoras' theorem. No wait, I am still amazed by it. Apparently there are around 365 known proofs.

  11. No, no: what I mentioned is Fermat's Last Theorem (n>2); what Pythagoras proved (as you know) is that the case n=2 IS true. And that has been used much more than 365 times in geometry. But somehow people are less impressed by Pythagoras' "trivial thing" than by a 300+ year "problem". People are often more impressed by difficulty than by beauty...

  12. Once I got a referee report saying "While at first sight I was quite impressed by the results, my enthusiasm decreased upon reading the paper."

  13. The generators of the Ramanujan graphs constructed by Margulis and by Lubotzky, Phillips and Sarnak come from the solutions to a^2 + b^2 + c^2 + d^2 = p, for a prime p.

  14. This is impressive and still is. Number theorists must use this all the time!

    What on earth would they do with it? It's a beautiful theorem, but it doesn't sound useful at all. From my perspective, the surprising thing is that the theorem statement itself is ever useful (for example, in Hilbert's 10th problem).

    The Prime Number Theorem. Since results that are very very close to it can be gotten with much much much less effort, getting the actual constant down to 1 seems like too much sugar for a cent.

    This is silly: "very very close to it" might be good enough if your only interest is in proving that certain number-theoretic algorithms run in polynomial time, but there's a huge gap conceptually. This is like asking why we should bother to prove that 1-1/3+1/5-1/7+... equals pi/4, when any idiot can prove that it equals 0.7854 plus or minus 0.0001, which is very very close to the exact answer.

    Poincare's Conjecture says that if something looks, feels, and smells like a sphere, then it's a sphere. Is that really worth $1,000,000?

    If a typical professor at a research university earns $100,000 and devotes half of his/her efforts to research, then the university thinks that research is worth $50,000 per year. If one year of half-time work by an average researcher is worth $50,000, why shouldn't seven years of full-time work by a genius be worth $1,000,000? That's barely a 40% premium over average work.
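Incidentally, the pi/4 series in that comment converges painfully slowly, which is part of why the exact value feels different in kind from a numerical estimate. A quick check (the term count is arbitrary):

```python
import math

# Partial sums of 1 - 1/3 + 1/5 - 1/7 + ..., which converge to pi/4
def leibniz_partial(terms):
    return sum((-1) ** k / (2 * k + 1) for k in range(terms))

approx = 4 * leibniz_partial(1000)
print(approx)   # ~3.1406: a thousand terms, and still off in the 3rd decimal
print(math.pi)
```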

  15. Following up on Dan's comment, I believe that LPS's arguments proving expansion of these graphs (at least for the SU(2) version) are related to the fact that p^n not only can be written as a sum of 4 squares, but can be so written in many ways (and indeed, the ways of doing so are nearly uniformly distributed over the 3-sphere).

    I used this (sort of) in my undergrad thesis, although not for a particularly useful result.

  16. That every group is a group of permutations and the Poincaré theorem may not have immediate corollaries or applications, but in both cases the proofs have had substantial effect on mathematics.

    To prove that every group is a group of permutations one lets the group act on the set of its elements. This at least means that we are considering group representations, and the idea itself represents a non-trivial step in abstraction. This abstraction can perhaps be seen as culminating in the functorial approach to algebraic geometry.

    And the Poincaré theorem was finally proved by inventing the Ricci-flow, which has been used to impressive effect in Riemannian geometry, for example in the study of Gromov-Hausdorff convergence of manifolds.

    So the theorems might not have immediate applications, but the techniques used to prove them represent important advances in our understanding of the world.
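The left-multiplication action in the comment above is short enough to write down explicitly. A minimal sketch for Z/5 under addition (the setup and names are mine):

```python
# Cayley's theorem concretely: each group element g yields the permutation
# x -> g + x of the group's own elements. Group: Z/5 under addition.
n = 5
elements = list(range(n))

def left_action(g):
    """The permutation of Z/5 given by left addition of g."""
    return [(g + x) % n for x in elements]

perms = {g: left_action(g) for g in elements}
for p in perms.values():
    assert sorted(p) == elements  # each row really is a permutation
print(perms[2])  # → [2, 3, 4, 0, 1]
```

Written this way, the "tautology" complaint in the post is visible: the proof is just this construction.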

  17. Poincare's Conjecture says that if something looks, feels, and smells like a sphere, then its a sphere. Is that really worth $1,000,000?


  18. I would say Godel's incompleteness theorem is not as impressive once you know about computability theory. Initially it seems to be an amazing meta-mathematical result but it really boils down to showing that arithmetic is sufficiently expressive to describe the evolution of Turing machines.

    That is, once you understand that some things must be undecidable, it's not as impressive that arithmetic turns out to be one of them.

  19. Can a complex and inelegant proof of a theorem be seen as unimpressive where an elegant proof of the same thing would be impressive?

  20. Another application of sum of four squares: it is used in an efficient zero-knowledge argument that a committed number is positive. Two such arguments composed give you a range proof (committed number belongs to any fixed range [L, H]).

  21. Re: David said:

    Yes, once you become familiar with the consequences of Godel's theorem, it is not impressive that Godel's theorem looks like one of them.

    I am unfair: Turing's achievement of formalizing the notion of computation is enormous, perhaps even greater than Godel's.

  22. Anonymous, what you say about the PNT is stupid. No one spoke about the running time of number-theoretic algorithms here. Even if all you care about is the distribution of the primes per se, there seems to be no reason why it should be important to chase down the constant so it becomes 1. All the conceptual importance of the result is there anyway. It is nice that it can be done, but that's just about it.

    Regarding your example: I don't know how it relates to the topic at hand, but anyway if you were faced with the sum out of nowhere, it probably wouldn't matter to you whether the sum is precisely pi/4 or not as long as you can bound it. I don't think you can say it is a particularly interesting or important identity.

  23. Anonymous, what you say about the PNT is stupid.

    Indeed, it probably was stupid to waste time commenting here, but it's hard to resist a Gasarch post.

    Even if all you care about is the distribution of the primes per se, there seems to be no reason why it should be important to chase down the constant so it becomes 1. All the conceptual importance of the result is there anyway. It is nice that it can be done, but that's just about it.

    If you restrict yourself to a rather limited notion of "conceptual importance", then I agree. But then why should you care about the log factor at all? Surely saying there are n^(1-o(1)) primes up to n is close enough, and that's even easier to prove. Is that the conceptually important part? Or you could fall back on Euclid's proof that there are infinitely many primes, and declare that this is the real heart of the matter. The rest is all details.

    Computational complexity theory generally neglects constant factors. This is not because they are unimportant or uninteresting - plenty of people care deeply about 10% improvements in running times, and there are genuine ideas (sometimes even beautiful ideas) involved in these achievements. Instead, complexity theory neglects constants because they are too hard, and we can only handle crude approximations at best. That's no excuse for a sour grapes attitude of "Aw, who really cares about constant factors anyway? That's not the conceptually important part." Knuth would be appalled. :P

  24. I've noticed that some mathematicians are guilty of a certain kind of circularity when it comes to this kind of discussion. They will say, implicitly or explicitly, that a result is interesting or important only if it has applications to something else. But surely, this cannot be the only reason a mathematical result is interesting, or else we get into an infinite regress. Somewhere along the line, some results had better be intrinsically interesting or else we're just beavering away proving results in order to prove other results in order to prove other results...none of which are of any significance.

    If something like the prime number theorem or the Poincare conjecture is not spared the tyranny of applications, then there's really no room for interesting pure mathematics in the world.

    Having said that, let me try to say why nailing down the value of the coefficient in the PNT is of intrinsic number-theoretic interest. The error term in the PNT is (conjecturally) governed by the Riemann hypothesis. If you can't even nail down the coefficient of the main term, then you basically have no control at all over the error term, and the Riemann hypothesis will be totally inaccessible to you. That all the zeros lie exactly on the line has always been highly suggestive that they are eigenvalues of some self-adjoint operator, even before the empirical evidence relating the distribution of zeros to the spectrum of the Gaussian Unitary Ensemble. If this suggestion pans out, then it will be a stunning conceptual explanation of the distribution of the primes, and I don't see how you would possibly have the vision to see this if you couldn't even be bothered to show that the zeros have real part strictly less than one (i.e., the PNT).

    If the constant for something as basic as the PNT is really one, then that's probably a sign that there is some deep structure lurking somewhere in the vicinity. Having the taste to recognize this sort of thing is crucial to gaining access to the deeper secrets of mathematics. If you view theorems as isolated facts whose only property is that they may or may not be more interesting than their nearest neighbors then I don't think you will have any chance of doing anything more than scratch at surface phenomena all the time.
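Since much of the thread is about how close "close" is for the PNT, it is worth watching the ratio pi(x) / (x / ln x) creep toward 1; it does so very slowly, which is a small hint of how delicate the error term is. A throwaway sieve in Python:

```python
import math

def prime_count(limit):
    """Count primes up to limit with a simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, limit + 1, i):
                sieve[j] = False
    return sum(sieve)

for x in (10**3, 10**5, 10**6):
    ratio = prime_count(x) / (x / math.log(x))
    print(x, prime_count(x), round(ratio, 3))  # ratio drifts down toward 1
```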