Thursday, June 25, 2009

Complexity Report from DNA 15

Guest post by Aaron Sterling

I attended DNA 15, the 15th International Meeting on DNA Computing and Molecular Programming, held June 8-11, 2009, at the University of Arkansas. The co-chairs of the organizing committee were Russell Deaton and Jin-Woo Kim, who I thought did a great job. The research presented, and the backgrounds of the attendees, ranged from experimental nanochemistry to theoretical computer science. I'll highlight a few elements of the program that had a TCS and complexity flavor.

Jim Aspnes and Kenichi Morita gave TCS plenary talks. Aspnes introduced the audience to the computational theory of "population protocols" (see here for a survey), a theory of computation over a large network of agents, each of which possesses very limited computational power. This theory can model ad hoc wireless networks and (potentially) smart molecules interacting in solution. Morita surveyed results on reversible cellular automata and presented animations that showed the inner workings of several such devices. A reversible computing device is one in which each noninitial configuration has exactly one preceding configuration. Many models of natural computing are reversible models.
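To give the flavor of the model, here is a minimal simulation sketch of one of the simplest population protocols, computing the OR of the agents' input bits; the protocol choice and all names in the code are my own illustration, not taken from the talk.

    import random

    # Toy population protocol computing OR: did any agent start with input 1?
    # When the scheduler picks a pair and either agent holds a 1, both adopt 1,
    # so the answer spreads through the population epidemically.
    def simulate_or_protocol(inputs, interactions=10000, seed=0):
        rng = random.Random(seed)
        states = list(inputs)
        n = len(states)
        for _ in range(interactions):
            i, j = rng.sample(range(n), 2)  # random pairwise interaction
            if states[i] == 1 or states[j] == 1:
                states[i] = states[j] = 1
        return states

    final = simulate_or_protocol([0] * 49 + [1])
    print(all(s == 1 for s in final))  # True: all 50 agents converge to the answer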

NP-completeness of the Direct Energy Barrier Problem Without Pseudoknots, due to Manuch et al., was presented by Anne Condon. The Energy Barrier Problem is a decision problem about molecular folding: given a structure in some initial configuration, does there exist a pathway through subsequent configurations to a final configuration, such that the step from each configuration to the next requires energy at most k, for some fixed k? The paper proves a simplified version of the Energy Barrier Problem NP-complete by reduction from 3-Partition, a known NP-complete problem.
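To make the decision problem concrete, here is a brute-force sketch that searches for a pathway whose individual steps each cost at most k; the configuration graph and energies below are invented for illustration, and in general the configuration space is exponentially large, which is consistent with the hardness result.

    from collections import deque

    # Toy search for an energy-bounded folding pathway.  Configurations are
    # abstract nodes, energy[x] is the energy of configuration x, and a step
    # x -> y is allowed when it requires energy at most k.
    def barrier_reachable(start, goal, energy, neighbors, k):
        seen, queue = {start}, deque([start])
        while queue:
            cur = queue.popleft()
            if cur == goal:
                return True
            for nxt in neighbors[cur]:
                if energy[nxt] - energy[cur] <= k and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False

    # Tiny made-up instance: a single energy bump of height 3 between s and t.
    energy = {"s": 0, "a": 3, "b": 1, "t": 0}
    neighbors = {"s": ["a"], "a": ["s", "b"], "b": ["a", "t"], "t": ["b"]}
    print(barrier_reachable("s", "t", energy, neighbors, k=2))  # False
    print(barrier_reachable("s", "t", energy, neighbors, k=3))  # True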

Several papers discussed the theory and practice of building DNA-based circuits. I would like to focus on Signal Propagation and Propagation Delays in Molecular Circuits by Seelig and Soloveichik, as it puts forth the thesis that circuit depth is not the correct measure of time complexity for chemical circuits (in contrast to electronic circuits, or classical circuit complexity). They present a theoretical model to capture something that has been observed in the lab: when only a few molecules of a certain species are in solution, the reactions that depend on that species will be slow, because encounters with it are rare. As a result, stepping through a circuit is not a linear process: approach to completion becomes increasingly slow the deeper the circuit is, because the later layers require the output of all the earlier layers. One possible fix for this is to catalyze the reactions in question, so the reactions occur quickly even if there is only a small amount of species present. The drawback to this is leakage: if a chemical species is not fully consumed at an earlier stage, its presence can build up exponentially at later stages, leading the circuit to produce incorrect answers. All in all, an intriguing model that raises a lot of questions.
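To see the slowdown numerically, here is a small Gillespie-style simulation of my own devising (not the paper's model) of a depth-d signal cascade, treating each layer's gate reaction as pseudo-first-order on the assumption that gate species are abundant; completion time grows with depth because the last straggler molecules must pass through every layer.

    import random

    # Toy Gillespie simulation of a cascade X0 -> X1 -> ... -> Xd.  Completion
    # means all n signal molecules reach layer d; each conversion fires with
    # propensity proportional to the number of molecules waiting at that layer.
    def cascade_completion_time(n, depth, rate=1.0, seed=1):
        rng = random.Random(seed)
        counts = [n] + [0] * depth
        t = 0.0
        while counts[-1] < n:
            props = [rate * c for c in counts[:-1]]
            t += rng.expovariate(sum(props))            # exponential waiting time
            layer = rng.choices(range(depth), weights=props)[0]
            counts[layer] -= 1                          # one molecule advances a layer
            counts[layer + 1] += 1
        return t

    for d in (1, 4, 16):
        print(d, round(cascade_completion_time(100, d), 2))  # time grows with depth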

DNA 16 will take place in Hong Kong, and DNA 17 will be organized by Erik Winfree at Caltech. I'm looking forward to them both!

3 comments:

  1. Great post, thanks!

  2. Speaking of conferences--is anyone attending CCC09 in Paris still looking for a hotel room? If so, want to share a double? Thanks!

    adrucker at mit dot edu

  3. "One possible fix for this is to catalyze the reactions in question, so the reactions occur quickly even if there is only a small amount of species present."

    Catalysis is not a fix that can be applied to arbitrary reactions; in particular, the problem of rare collisions that you mention is not fixed by catalyzing that reaction. Consider the bimolecular reaction 2A -> B. Catalyzed, it looks like 2A + C -> B + C. Even with lots of C's, after all but two A's have reacted, the two lone A's need to find their way together to react, which will take at least as long with a catalyst as without (see the sketch after this comment).

    What Soloveichik did in his earlier paper with Winfree, Cook, and Bruck (Computation with Finite Stochastic Chemical Reaction Networks, which showed such networks constitute an efficient universal model of computation) was to ensure that every reaction has at most one reactant A that will be present in a small amount; the abundance of the other reactant ensures that the reaction proceeds quickly even when A is down to a single molecule. It does not matter whether the reaction is catalyzed or not. If a unimolecular reaction A -> B is too slow, it can be accelerated by adding a catalyst to make a bimolecular reaction, A + C -> B + C, whose reaction rate is proportional to the amount of C, a quantity easily controlled by the experimenter (and not otherwise affecting the behavior of the system). But this "fix" only works for reactions that are unimolecular to start with, and it fixes a different problem from the rare-collision problem.

    The Seelig/Soloveichik DNA paper broadly classifies reactions as catalytic and non-catalytic, but all the catalytic reactions they study happen also to be bimolecular (counting the catalyst as a reactant). That paper therefore uses the term "catalytic" to mean "bimolecular where one reactant is a catalyst assumed to be present in abundance". But they do not state that the slow speed of non-catalytic bimolecular reactions can be fixed through catalysis, only that "systematic use of [bimolecular] catalysis allows one to build faster circuits with considerably improved asymptotic behaviors". The fact that not all useful non-catalytic reactions are unimolecular makes such systematic use of bimolecular catalysis a non-trivial goal, and it is a major focus of the earlier paper and the reason for the incredible depth of its main construction.

    A general-purpose fix for accelerating non-catalytic bimolecular reactions would constitute a breakthrough in the study of stochastic chemical reaction networks. A good example to try out ideas on is this: set up a cascade of reactions that divides the number of A's by two, i.e., implements the reaction 2A -> B, but such that by increasing the concentration of one or more catalysts, the reaction can be made to run to completion arbitrarily fast, without the rare-collision problem that plagues the final two reacting A's.

    Dave Doty

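A quick numerical illustration of the two kinetic points in the comment above, under standard stochastic mass-action kinetics (the rate constant and molecule counts are invented for the sketch):

    # 2A -> B fires with propensity k * a*(a-1)/2 when a copies of A remain,
    # so the expected wait before the next firing is the reciprocal of that.
    # The final pair (a = 2) waits 1/k in expectation, which is longer than
    # all the earlier steps combined; no catalyst changes this collision rate.
    k, n = 1.0, 100
    waits = [2.0 / (k * a * (a - 1)) for a in range(n, 1, -2)]
    print(sum(waits[:-1]), waits[-1])  # ~0.39 for the first 49 steps vs. 1.0 for the last

    # By contrast, the catalyzed unimolecular step A + C -> B + C has propensity
    # k * a * c, so its expected completion time shrinks linearly in the
    # catalyst count c, exactly the experimenter-controlled speedup described.
    for c in (10, 100, 1000):
        expected_total = sum(1.0 / (k * a * c) for a in range(n, 0, -1))
        print(c, expected_total)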