Thursday, July 09, 2015

Will Our Understanding of Math Deteriorate Over Time?

Scientific American writes about rescuing the enormous theorem (classification of finite simple groups) before the proof vanishes. How can a proof vanish?

In mathematics and theoretical computer science, we read research papers primarily to find research questions to work on, or to find techniques we can use to prove new theorems. What happens to a research area, then, when researchers go elsewhere?

In response to a question about how one can contribute to mathematics, Bill Thurston notes that our knowledge of mathematics can deteriorate over time.
Mathematical understanding does not expand in a monotone direction. Our understanding frequently deteriorates as well. There are several obvious mechanisms of decay. The experts in a subject retire and die, or simply move on to other subjects and forget. Mathematics is commonly explained and recorded in symbolic and concrete forms that are easy to communicate, rather than in conceptual forms that are easy to understand once communicated. Translation in the direction conceptual -> concrete and symbolic is much easier than translation in the reverse direction, and symbolic forms often replace the conceptual forms of understanding. And mathematical conventions and taken-for-granted knowledge change, so older texts may become hard to understand. In short, mathematics only exists in a living community of mathematicians that spreads understanding and breathes life into ideas both old and new.
Once a research area fills out, researchers tend to move on to new and different ideas. Much of the research in the theoretical CS community in the '50s, '60s and '70s has been lost in journal articles, now nicely digitized but rarely downloaded.

What will happen with complexity classes once people stop studying them? You already don't see that many recent papers on complexity classes, even in the Computational Complexity Conference. A victim of our own success and failures: We settled most of the easy questions and the rest are very hard.  As my generation retires, the classes may retire as well, outside of a couple of the biggies like P and NP. The old papers will still be out there, and you can always look up the classes in the zoo or on Wikipedia, but the understanding that goes with people studying these classes, and why we cared about them, may deteriorate just like computer programs that go unattended.


  1. This is all true of science in general. It's a human social endeavor; it has short-term and long-term trends; it evolves. It's like the relationship of memes with written artifacts: science is a special class of memes. Some of this is noted by historians and philosophers of science, e.g. Kuhn and Popper. A nice book on math as a social experience is "The Mathematical Experience" by Davis and Hersh. Also, abstruse/theoretical areas can be revived when they find new applications; there are dramatic/extraordinary cases of this in the history of math, e.g. RSA encryption. There is some improvement here in "open science". See also the great new book "Reinventing Discovery: The New Era of Networked Science" by Nielsen.

  2. Further thought: actually, something like the opposite of what you are stating may be in play. The "algorithmic lens/age" is upon us, and CS research overall has increased dramatically in scope, scale, and resources over the last few decades, and (it seems to me) that trend is likely to continue for decades. There are some other big waves/trends feeding into it, such as big data and AI (e.g. deep learning). There is some case that some of this may come at the expense of, or decrease, pure math research. Or rather, there seems to be a new emerging/growing/thriving field, "algorithmics", that is a fusion of deep CS/math theory, is cutting edge, and will be here long into the future.

  3. I frequently find myself needing some of the integrals and series from Gradshteyn and Ryzhik, a massive book containing lists of (mostly) true identities. Every now and then I find an integral that is *related* to, but not exactly the same as, one in the book, and I feel that if I knew how Gradshteyn and Ryzhik proved theirs, I might be able to modify it.

    But following up some of the chains of references (sometimes GR references something else, like Erdelyi's table of integrals, which references something earlier still, and so on) leads to dead ends. I have no idea how some are proved, nor do I know where to find out. (Though some mathematicians, like Victor Moll, dedicate an enormous amount of time to reproving large segments of GR for reasons like this.)

    It would not surprise me if many of these identities are forgotten, or if they have in fact truly been forgotten already.

  4. The solution is that science will be carried out by computer expert systems.
    They already do this, but over time those programs will become more advanced.
    Think of IBM Watson-like systems.

    It's time to change our paradigm of thinking that we would understand math.
    Maybe you would understand a specific area in great detail, but a system like Watson would understand all areas, and would be better at finding solutions by combining them.

  5. About 20 years ago, while doing research in algorithms for robot motion planning, we managed to reduce the problem to a certain property of quadratic curves. It was clear from the examples both that (1) the statement held, and (2) it was likely known in the XIX century. Lo and behold, no modern book had the theorem. Eventually we took the plunge and proved the result from scratch.

    A year later, while browsing in a used bookstore, I found the result in an 1880s high school textbook. I still have the book on my shelves as a reminder.

  6. There's something poetic about the loss of knowledge. Humanity may forget mathematics and the sciences, but these 'once discoveries' will always be immutable universal truths.

  7. Quite the opposite. Works like McIlroy's breathe computational/conceptual relevance into stuffy subjects like generating functions.

  8. Alex Lopez-Ortiz: this story is haunting me. I'm only an armchair mathematician, but worry that in computer science we are creating the same problem. We're letting the lower rungs of the ladder of abstraction decay away, leaving new students no path to follow except by using the tools we created along the way.

    It seems the only means to prevent this is education for education's sake, which is hardly a popular notion in modern neoliberal cultures.

  9. When, hopefully, all papers ever written are online, will it make this problem better or worse? Alex Lopez-Ortiz's story is a good touchstone: even if that book had been online, would they have found it?

    One reason to write monographs is so the knowledge that is a bit obscure does not get lost. Even if the books are online this might not help.

    The big question will be how easy these things are to search and find.

    A bigger issue, which might be closer to Lance's original point, is that even books and papers do not capture the intuitions the author had. Would YouTube videos, Khan Academy lectures, etc., help?

  10. Alex Lopez-Ortiz: can you tell us what the result was? I'm very curious about this.

    1. It is a simple property of the hyperbola, which is why we believed from the onset that it had to be known, but it wasn't listed in any of the many books we consulted. This was back in 1995, mind you; today you will find it listed on Wikipedia and/or Math Stack Exchange.

      I'm traveling, so this is going from memory: the property was that the hyperbola is the locus of points that keep the difference between the two angles to two fixed points constant (in particular, the difference is zero if the points in question are the foci).
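
      One hedged way to formalize the recalled statement (the angle convention below, measuring each angle from the tangent line at the point, is an assumption on my part; with it, the zero-difference case for the foci is just the classical reflection property of the hyperbola):

      ```latex
      % Assumed reading of the recalled property. Fix two points A_1, A_2
      % on the transverse axis. For a point P on the curve, let \theta_i
      % denote the angle between the tangent line at P and the segment
      % from P to A_i. The claim, as recalled, is that the difference is
      % constant along the curve:
      \[
        \theta_1 - \theta_2 = c
        \quad \text{for every point } P \text{ on the hyperbola.}
      \]
      % In the special case A_1 = F_1, A_2 = F_2 (the foci), c = 0:
      % the tangent at P bisects the angle \angle F_1 P F_2, which is
      % the well-known reflection property of the hyperbola.
      ```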