Monday, June 06, 2011

A Valiant Weekend

In my role as SIGACT chair, I got to attend the ACM Awards Banquet held at the beginning of FCRC in San Jose. I shared a table with Mitzenmacher, who posted about the banquet earlier. Theory did well among the award winners, but none so notably as the Turing Award, the "Nobel Prize of Computer Science", which went to Leslie Valiant. In his acceptance speech, Valiant gave a wonderful shout-out to the STOC conference for helping him shape his research, or at least to the STOC conferences back in the 70's, when the theory community was still figuring out the right questions and models. A nice contrast to the ACM press release, which focused on the connections to AI.

Last night, Valiant gave the Turing Award lecture at FCRC. His main thesis is that evolution is just an example of computational learning, and that we need to understand the computational process that led to our development in a relatively short period of time. I heard that a similar talk he gave at Berkeley caused some consternation among biologists, who don't want to give anyone fodder for the idea that evolution might not be possible because it is too complex.

My take: Evolution has a significant random element and we need to condition that randomness on the fact that we exist. The biologists and computer scientists on Mars aren't asking these same questions about why they never came to be.

5 comments:

  1. Regarding mathematical questions and inspirations that arise naturally in the context of biology, Misha Gromov's recent Bulletin of the AMS survey article "Crystals, proteins, stability and isoperimetry" (2011) provides a very readable perspective with dozens of concrete suggestions … certainly biology nowadays provides abundant mathematical inspiration (especially if your name is "Gromov" or "Valiant").

  2. One more example to add to John's comment:

    http://www.math.tau.ac.il/~nogaa/PDFS/misbios.pdf

  3. Something went wrong, trying again ...

    Here is yet another example to add to John's comment:

    A biological solution to a fundamental distributed computing problem

  4. I was at Les's talk and thought it was great! I walked out of the room with renewed enthusiasm that complexity theory can and should colonize every other part of science. :-)

    The way Les put his question was basically this: given that we know from biology, geology, etc. that the evolution of very complicated life took place in "merely" ~3 billion years (rather than, say, 10^100 years), what computational models of "evolvability" can we find, which would render evolution's ability to cut through the exponentially-large search space in that amount of time less surprising? That strikes me as pretty clearly a great scientific question, even if you dislike Valiant's own proposals for how to answer it.

    Having said that, I would add that it's partly a matter of taste which numerical facts we choose to be surprised by. So for example, it's also a worthy scientific problem to explain why evolution apparently needed a leisurely 3 billion years, rather than producing humans from RNA soup in a few million years or less!
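
    As a toy illustration of the gap being described here (a minimal sketch with a made-up fitness function, namely the number of bits matching a fixed target genome; this is not Valiant's evolvability model or anything from the talk): blind search over n-bit genomes needs roughly 2^n tries to hit a specific target, while a hill climber that flips one bit at a time and keeps non-worsening mutants needs only about n log n steps.

        import random

        def hill_climb(target):
            """(1+1)-style search: flip one random bit and keep the mutant
            if it matches the target in at least as many positions."""
            n = len(target)
            genome = [random.randint(0, 1) for _ in range(n)]
            fitness = sum(g == t for g, t in zip(genome, target))
            steps = 0
            while fitness < n:
                i = random.randrange(n)
                child = genome[:]
                child[i] ^= 1                      # point mutation
                child_fitness = fitness + (1 if child[i] == target[i] else -1)
                if child_fitness >= fitness:       # selection: keep non-worsening mutants
                    genome, fitness = child, child_fitness
                steps += 1
            return steps

        if __name__ == "__main__":
            n = 64                                 # blind search would need ~2^64 tries
            target = [random.randint(0, 1) for _ in range(n)]
            trials = [hill_climb(target) for _ in range(20)]
            print(f"n = {n}, mean steps with mutation+selection: {sum(trials)/len(trials):.0f}")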

  5. Gromov's article speaks to the topics that Scott's post raises, as follows:
    --------------
    At first sight, Nature does not appear exceptionally clever: her evolutionary strategy is not sophisticated, to say the least. But she was selecting from billions upon billions of candidates and her selection criterion, “fit to survive”, may look simple only for a lack of mathematical imagination on our part: an enormous amount of structure goes into this “fit”. Besides, Nature does not run in a structural vacuum—all of physics and chemistry is at her disposal, and she excels in molecular dynamics and in catalysis.

    Yet, a mathematician might think that Nature is dumb: the primitive mutation/selection mechanism of evolution could not produce anything we, mathematicians, could not divine ourselves.

    But if so, we inevitably conclude that the human brain, which was cooked up by Nature in the last couple of million years, cannot be especially smart either: all our mathematics, or rather the mathematics-building mechanisms in the brain, must be confined to the rules that evolution had stumbled upon in this relatively short stretch of time and had installed into us.

    On the other hand, Nature had spent a much longer time (measured by the number of tries involved) in inventing such structural entities as the cell and the ribosome.

    One may conjecture that neither cell nor brain would be possible if not for profound mathematical “somethings” behind these, Nature’s inventions. But what are these “somethings”? Why do we, mathematicians, remain unaware of them?

    … Notwithstanding our much-glorified successes, we are, tautologically, blind to what we do not see. (Nature systematically hides from our mind what we are not supposed to know, such as the blind spot in our retina, for instance. The neurological mechanism of this hiding is far from clear.)

    Also, the history of mathematics shows how slow we are when it comes to inventing/recognizing new structures even if they are spread before our eyes, such as hyperbolic space, for instance.
    --------------

    Gromov's paragraphs remind us that all communication channels are two-way … which is of course a valid principle of quantum information theory too.
