Tuesday, November 10, 2009

Multi-Agent Biological Systems and Natural Algorithms Workshop (Guest Post)

Guest Post from Aaron Sterling

Multi-Agent Biological Systems and the Natural Algorithms Workshop

I attended the Natural Algorithms Workshop on November 2nd and 3rd. This was an event held by the Center for Computational Intractability in Princeton (which brought us the Barriers in Complexity Workshop). Bernard Chazelle was the organizer.

Chazelle is interested in applying complexity-theoretic techniques to the mathematical modeling of biological systems. Recently, Chazelle applied spectral graph theory, the combinatorics of nondeterministic computation paths, and other techniques to obtain upper and lower time bounds on two representative models of bird flocking. (Control-theoretic methods, e.g., Lyapunov functions, had yielded only an existence theorem showing that the models converge, without providing time bounds.) Chazelle's Natural Algorithms won the Best Paper Award at SODA 2009. (I found the SODA paper hard to read, as it moves through multiple difficult techniques in a compressed fashion. I recommend reading the first thirteen pages of this version instead, to get a sense of what he's doing.)
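
To give a concrete feel for what "a model of bird flocking" means here, below is a minimal Python sketch of neighbor-averaging flocking dynamics in the general spirit of the models studied in this line of work: each bird repeatedly replaces its velocity with the average velocity of the birds within some interaction radius, then moves. The function name, radius, and toy run are my own illustrative choices, not the exact model analyzed in the paper.

```python
# A minimal sketch (not the paper's exact model) of neighbor-averaging flocking:
# each bird adopts the average velocity of its neighbors, then takes a step.
# The radius, step count, and names below are illustrative only.

import numpy as np

def flocking_step(positions, velocities, radius=1.0):
    """One synchronous round: every bird averages the velocities of all birds
    (itself included) within `radius`, then moves by its new velocity."""
    new_velocities = np.empty_like(velocities)
    for i in range(len(positions)):
        dists = np.linalg.norm(positions - positions[i], axis=1)
        neighbors = dists <= radius          # includes bird i itself
        new_velocities[i] = velocities[neighbors].mean(axis=0)
    return positions + new_velocities, new_velocities

# Toy run: 20 birds in the plane with random starting headings.
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 5.0, size=(20, 2))
vel = rng.normal(0.0, 1.0, size=(20, 2))
for _ in range(50):
    pos, vel = flocking_step(pos, vel)
# Velocities eventually agree within each connected group of birds; the question
# the SODA paper addresses is how many rounds that convergence can take.
```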

While computational complexity's entry into this field may be new, other areas of computer science have a much longer-standing connection to multi-agent biological systems. A notable example is the work of Marco Dorigo, whose ant colony optimization algorithm has been influential. Dorigo's research group won the AAAI video contest in 2007; their video is a lot of fun to watch.
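
For readers who haven't seen ant colony optimization before, here is a short, textbook-style sketch on a tiny traveling-salesman instance; the parameter names and values are generic defaults for illustration, not Dorigo's specific settings.

```python
# A minimal, textbook-style sketch of ant colony optimization on a tiny
# traveling-salesman instance. Parameters are generic illustrative defaults.

import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def ant_colony_tsp(dist, n_ants=10, n_iters=100, alpha=1.0, beta=2.0,
                   evaporation=0.5, deposit=1.0):
    n = len(dist)
    pheromone = [[1.0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")

    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            # Each ant builds a tour city by city, preferring edges that carry
            # more pheromone and are shorter.
            start = random.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                cur = tour[-1]
                candidates = list(unvisited)
                weights = [pheromone[cur][j] ** alpha * (1.0 / dist[cur][j]) ** beta
                           for j in candidates]
                nxt = random.choices(candidates, weights=weights)[0]
                tour.append(nxt)
                unvisited.remove(nxt)
            tours.append(tour)

        # Evaporate old pheromone, then let each ant reinforce its own tour in
        # proportion to its quality; this indirect feedback (stigmergy) drives the search.
        for i in range(n):
            for j in range(n):
                pheromone[i][j] *= 1.0 - evaporation
        for tour in tours:
            length = tour_length(tour, dist)
            if length < best_len:
                best_tour, best_len = tour, length
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                pheromone[a][b] += deposit / length
                pheromone[b][a] += deposit / length

    return best_tour, best_len

# Toy instance: six random points in the unit square.
random.seed(1)
pts = [(random.random(), random.random()) for _ in range(6)]
dist = [[math.hypot(xa - xb, ya - yb) for (xb, yb) in pts] for (xa, ya) in pts]
print(ant_colony_tsp(dist))
```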

Back to the Workshop! The speakers were from diverse backgrounds (control theory, robotics, biology, distributed computing, mechanical engineering), and the talks were great. For reasons of space, I'll limit myself to summarizing one talk I found particularly exciting.

Magnus Egerstedt, a roboticist at Georgia Tech, talked about dolphin-inspired robot algorithms. He explained how dolphins feed. They use three basic strategies. In the first, a group of dolphins swims in circles around a school of fish, in a formation called a carousel. Once the fish are well trapped by the carousel, the dolphins swim through the school one at a time to eat. The second strategy involves swimming on either side of the fish to herd them in a particular direction. The third involves physically pushing the fish from behind, either into a waiting group of dolphins, or onto dry land. Egerstedt showed a video of dolphins beaching themselves so they could push fish onto the sand. These feeding strategies demonstrate team effort, despite (presumably) limited ability to communicate. As one example, the dolphins have to take turns to feed during the carousel, or the entire strategy will be ruined. Egerstedt's group has used the behavior of dolphins to inspire several multi-agent algorithms. Here is a recent technical paper of theirs that was dolphin-inspired.
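
Purely as an illustration (this is not Egerstedt's controller, and the gains and radius below are made up), here is a toy sketch of a carousel-like behavior: each simulated agent orbits a fixed target point by blending a tangential motion with a correction toward a desired radius, so the group settles into a circling formation.

```python
# A toy carousel-like formation: agents orbit a fixed target while converging
# to a desired radius. Illustrative only; not the controller from the talk.

import numpy as np

def carousel_step(positions, target, desired_radius=3.0, speed=0.1):
    """Move each agent one step: a tangential component makes it orbit the
    target, and a radial component pulls it toward the desired circle."""
    new_positions = positions.copy()
    for i, p in enumerate(positions):
        offset = p - target
        r = np.linalg.norm(offset)
        radial = offset / r                           # unit vector away from target
        tangent = np.array([-radial[1], radial[0]])   # radial rotated 90 degrees
        # Blend: orbit the target while nudging the radius toward desired_radius.
        velocity = tangent + (desired_radius - r) * radial
        new_positions[i] = p + speed * velocity / np.linalg.norm(velocity)
    return new_positions

# Toy run: five agents scattered around a "school of fish" at the origin.
rng = np.random.default_rng(2)
agents = rng.uniform(-5.0, 5.0, size=(5, 2))
for _ in range(200):
    agents = carousel_step(agents, target=np.zeros(2))
# After enough steps, every agent circles the target at roughly the same radius,
# tracing out the carousel.
```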

All the talks were videotaped, and are supposed to be on the Center's web page soon.

As a final note, Chazelle has written a follow-up paper to Natural Algorithms, and will be presenting it at ICS in January. There's no online version yet, but the abstract says the paper fits within a larger effort to develop an algorithmic calculus for self-organizing systems. With a battery of tools available, we might see quite a bit of TCS activity in this area over the next couple of years.

5 comments:

  1. I think the "algorithmic lens" is a great way to look at many natural processes. But to play devil's advocate, I think most theory-type results about these processes miss the point.

    Natural algorithms work in nature precisely because they only have to work most of the time, and only on typical instances. Adversarial models of instances have little or no relevance in biology.

    For example, protein folding is NP-complete. So, proteins have evolved so that they correspond to easy instances of the protein-folding problem, without many local optima close to the real optimum.

    The challenge for theorists, then, is to explain why many algorithms that fail in the worst case, or take exponential time, perform so well in practice. We have very few results like this; the obvious standout is smoothed analysis of the simplex method.

    But the problem is worse than that. Natural instances are neither worst-case nor random. They have rich statistical structure, to which organisms can adapt their strategies over time. At the end of the day, proving theorems about these instances, and about nature's approaches to them, seems difficult, especially if we want these theorems to actually be relevant to the biology, as opposed to contrived toy models of it. To be blunt, do theorems about flocking actually tell us anything about birds?

    ReplyDelete
  2. "These feeding strategies demonstrate team effort, despite (presumably) limited ability to communicate."

    I don't understand this. Taking turns doesn't take much communication at all, and dolphins seem to have a very good ability to communicate. Communication is not a bottleneck.

    ReplyDelete
  3. Thanks to Aaron for the nice post.

    "Anonymous" makes several interesting points I'd like to elaborate on just a little if I may.

    I agree that the worst-case outlook on natural algorithms is not the correct one. To answer his/her question, "What do worst-case convergence results for bird flocking say about birds?", the honest answer is: nothing at all.

    Then why do it?

    The glib answer is: because everything else seems too hard. But there's a serious point behind this I'd like to discuss briefly. First, modeling (i.e., getting the natural algorithm in written form) is itself a huge open problem. So, yes, worst case is unrealistic, but at this juncture so is any other model. (I am not talking about smoothed analysis here, a wonderful success story, but only about algorithms from nature.) This is not an excuse: just evidence that it's premature to hope for definitive results in the near future (not to mention that speed and convergence are not terribly interesting: prediction and evolutionary optimality are much more important). So what should we do? Wait until the biologists give us the "proper" models? No! The reason we shouldn't wait is that even if we had the right model, we wouldn't know what to do with it. If there's one thing my work in this area has convinced me of, it is that we are coming to the battle completely unarmed.

    So the priority should be to build tools. The measure of success should not be how cool your asymptotic complexity bound is in such-and-such a model (since it will most likely be unrealistic anyway), but what new conceptual techniques you developed along the way.

    Complexity theory and, in fact, all of mathematics are very much used to this line of thinking. But in algorithms, somehow, we tend to prefer the problem-solving mode: give me a problem and I'll give you an algorithm for it, with success measured by a consensus metric (speed/space/parallelism, etc.). This is not a criticism. I think this approach has done a lot of good. But I am also advocating making room for "structural" work in algorithms. Now if worst-case or average-case or whatever your favorite model is gives you a good platform on which to build tools, then fine. In the end, though, the measure must be what you add to the analytical toolkit. Before we can hope to make meaningful statements about natural algorithms we need to build a sort of "calculus" for them.
    Algebraic geometry was invented when someone said: OK, I know everything there is to know about any single polynomial. Now what happens if I throw a bunch of them into the mix? Now, say, you know all about algorithm A, and you have a good grasp of algorithm B: now compose A and B. What do you get? Answer: not a clue.

    A small nitpicking point. I think it's much more than an "algorithmic lens." It's a whole language, in fact the only language expressive enough to model the complex dynamics of self-organization. Classical dynamics is not expressive enough as a language; in other words, collective behavior cannot be reduced to a set of PDEs. So the algorithmic view is not "one" view: it's the only game in town. The trouble is that it's a language in which no one is yet fluent.

    One final comment about Anonymous' excellent points: natural selection is the ultimate software optimizer. Now why is it that what should be hard is actually easy? What's the landscape of hard instances? That's one of the most exciting questions in all of science and, miracle of miracles, oh-so computer-sciency.

    ReplyDelete
  4. I found the ant-colony-inspired video beautiful and entertaining.

    But if there is one thing the theory of distributed computing has taught us, it is that doing anything in a distributed fashion is much harder than doing it in a centralized manner. The distributed computing and distributed systems communities have also given us excellent primitives, such as leader election.

    So what, then, is the motivation for not using a centralized algorithm? Couldn't fault tolerance be achieved by electing a leader to do the relevant computation?

    (I know there are good reasons, and I can think of some. Just want to hear more from people in the know.)

    ReplyDelete
  5. Anonymous (btw, can people just post their names?),

    I don't think distributed systems algorithms are very good analogues of "natural" algorithms. The focus is usually on a very pessimistic world-view of behavior (e.g., Byzantine processes), for the purpose of proving correctness by some absolute measure. Somehow it seems like natural systems should have a little more "play" between perfect behavior and catastrophic failure. Or, to riff off of earlier complaints, distributed protocols are usually focused on worst-case behavior.

    That said, if you extend natural algorithms up to the sociological/behavioral setting, maybe there is some relevance to be found in leader election (alpha males?).

    ReplyDelete