Wednesday, February 06, 2008

This Year's Turing Award: Model Checking

Guest Post by Rance Cleaveland, a professor at the University of Maryland, College Park, who works in model checking.

Earlier this week, the ACM announced the winners of the 2007 ACM Turing Award (awarded in 2008, for reasons that elude me). They are Edmund Clarke (CMU), Allen Emerson (U. Texas) and Joseph Sifakis (Verimag, in France). The award statement honors the contributions of these three to the theory and practice of model checking, which refers to an array of techniques for automatically determining whether models of system behavior satisfy properties typically given in temporal logic.

After first blanching at the application of an adjective ("temporal") to a term ("logic") that is usually left unqualified, a Gentle Reader may wonder what all the fuss is about. Is model checking really so interesting and important that its discoverers and popularizers deserve a Turing Award? The glib answer is "of course," because the selection committee must have a fine sense of judgment. My aim is to convince you of a less glib response, which is that model checking is the most fundamental advance in formal methods for program verification since Hoare coined the term in the 60s.

What is "model checking"? In mathematical logic, a model is a structure (more terminology) that makes a logical formula true. So "model checking" would refer to checking whether a structure is indeed a model for a given formula. In fact, this is exactly what model checking is, although in the Clarke-Emerson-Sifakis meaning of the term, the structures - models - are finite-state Kripke structures (= finite-state machines, except with labels on state rather than transitions and no accepting states), and the logical formulas are drawn from propositional temporal logic (= proposition logic extended with modalities for expressing always in the future and eventually in the future").

The Clarke-Emerson-Sifakis algorithmic innovation was to notice that for certain flavors of temporal logic (pure branching time), model checking could be decided in polynomial time; this is the gist of the papers written independently in 1981 by Clarke and Emerson on the one hand, and Sifakis and Queille on the other. Subsequently, these results were improved to show that the time needed for model checking pure branching-time logic is proportional to the product of the size of the Kripke structure and the size of the formula (often, maybe misleadingly, called "linear time" in the model-checking community, since the size of the model dominates the product).
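
To give the flavor of these algorithms, here is a sketch of one such check, run on a copy of the toy structure above: the set of states satisfying "along some path, eventually done" (EF done, in CTL notation) is computed as a least fixpoint, and since each state is added to the answer at most once, the iteration is cheap - the germ of the polynomial bound.

    # Sketch: the states satisfying "EF p" (some path eventually reaches
    # a p-state) form the least fixpoint of  X = p-states U pre(X),
    # computed by iterating until nothing new is added.
    transitions = {"s0": {"s1"}, "s1": {"s0", "s2"}, "s2": {"s2"}}
    labels = {"s0": {"ready"}, "s1": {"busy"}, "s2": {"done"}}

    def check_EF(p):
        sat = {s for s, props in labels.items() if p in props}  # base case
        while True:
            # add any state with a successor already known to satisfy EF p
            new = {s for s, succ in transitions.items() if succ & sat}
            if new <= sat:
                return sat
            sat |= new

    print(check_EF("done"))   # all three states: every state can reach s2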

Of course, a linear-time algorithm (OK, I'm in the model-checking community!) is only of passing interest unless it has real application. This observation raises two questions.
  1. Is the general problem one people want solved?
  2. Can the algorithm produce results on the instances of the problem people want solved?
The answer to 1 is "YES YES YES". The ability to automatically check the correctness of a program/hardware design/communications protocol would offer incalculable benefits to developers of these systems. Early on, however, the answer to 2 for model checking was in doubt, for the simple reason that the size of a Kripke structure is typically exponential in the size of the program used to define it. (State = assignment of values to variables, so num-of-states is exponential in num-of-variables, etc.)

Throughout the 80s and 90s, the three winners worked on many techniques for overcoming this state-explosion problem: compositional techniques, symmetry reductions, etc. One of the most successful was symbolic model checking: the use of logic formulas, rather than linked lists, etc., to represent large sets of states in the model-checking process. While none of these techniques proved uniformly applicable, symbolic model checking found a home in the hardware-design community, and model checkers are now standard parts of the microprocessor design flow, incorporated routinely in the design tools produced by companies like Cadence, Synopsys and The MathWorks.
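
To see both the explosion and the symbolic escape hatch in miniature, consider the following sketch (the numbers and the state set are mine; real symbolic model checkers use canonical formula representations such as BDDs, not the bare predicate below).

    # State explosion in miniature: n Boolean variables give 2**n states.
    # An explicit set grows with the number of states it contains; a
    # symbolic description -- here a predicate standing in for the logic
    # formulas real tools use -- stays small however many states it denotes.
    from itertools import product

    n = 16                                    # 16 variables -> 65,536 states

    # Explicit: enumerate every state in which variable x0 is true.
    explicit = {s for s in product((0, 1), repeat=n) if s[0] == 1}
    print(len(explicit))                      # 32768 states, stored one by one

    # Symbolic: the same set of states, as a constant-size description.
    symbolic = lambda s: s[0] == 1
    print(symbolic((1,) + (0,) * (n - 1)))    # True, at O(1) storage cost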

So what to make of the Turing Award? I would say that the algorithmic innovation was deep and insightful, but not the source of the award. Rather, it is the combination of the initial insight with the award winners' persistence in identifying engineering advances to further the state of the practice that earned them their prize - one that, in my view, is well deserved and maybe even overdue.

3 comments:

  1. Thank you for an instructive guest post!

    It sounds like symbolic model checking is used for hardware verification; is it practical for software verification as well?

  2. Model checking is widely used in hardware, but model-checking techniques are also considered very valuable in practice on aspects of software. (One example is the SLAM project at Microsoft for verifying device drivers.) A great aspect is how much theory has been involved in making this successful. (One theorem about automata due to Büchi that I mention below deserves to be more widely known; it allows one to verify pushdown systems and not just finite automata!)

    The award winners deserve a huge amount of credit for realizing that model checking was the way to go and for championing it over more than a decade before it really caught on. It is hard to get a sense of how far from practical this approach was considered to be in the early 1980's. After all, the problem they wanted to solve is a generalization of deciding graph reachability in an exponential-size but polynomially-specified graph.

    The success of model checking owes much to theoretical ideas that have advanced the state of the art in a big way. These include the ideas of Vardi and Wolper (Gödel Prize a few years ago), who derived new efficient algorithms for automata-theoretic verification that were later incorporated and extended by Kurshan and Holzmann (along with refinements by others) in the SPIN model checker (for which the four won the Kanellakis Prize). See Kurshan's 1994 STOC invited talk for an overview of the approach.

    A huge advance was symbolic model checking using BDDs (oblivious read-once branching programs, championed as a data structure for Boolean functions by Randy Bryant), developed in Ken McMillan's ACM Doctoral Dissertation Award-winning thesis in the early 1990's. (Kanellakis Award for Bryant, Clarke, Emerson, and McMillan.)

    In the late 1990's improvements in SAT solvers made them efficient enough to replace BDDs for bounded model checking.
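
    As a minimal sketch of the bounded-model-checking idea (my own toy example, assuming the third-party python-sat package; the 2-bit counter and its encoding are invented for illustration): unroll the transition relation for k steps into a propositional formula and ask a SAT solver whether a bad state is reachable at step k.

        # Toy bounded model checking: a 2-bit counter x that increments
        # mod 4 starting from x = 0; the "bad" state is x = 3.  Assumes
        # the third-party python-sat package (pip install python-sat).
        from pysat.solvers import Glucose3

        def var(bit, t):                 # DIMACS variable id for bit at time t
            return 2 * t + bit + 1

        def bmc(k):
            s = Glucose3()
            s.add_clause([-var(0, 0)])   # init: x0 = 0
            s.add_clause([-var(1, 0)])   # init: x1 = 0
            for t in range(k):
                a, b = var(0, t), var(1, t)          # current bits (x0 = LSB)
                c, d = var(0, t + 1), var(1, t + 1)  # next bits
                # x0' = not x0
                s.add_clause([c, a]); s.add_clause([-c, -a])
                # x1' = x1 xor x0  (Tseitin-style clauses for d = a xor b)
                s.add_clause([-d, a, b]); s.add_clause([-d, -a, -b])
                s.add_clause([d, -a, b]); s.add_clause([d, a, -b])
            s.add_clause([var(0, k)])    # bad at step k: x0 = 1
            s.add_clause([var(1, k)])    # bad at step k: x1 = 1
            return s.solve()

        for k in range(4):
            print(k, bmc(k))             # False, False, False, True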

    All three of these approaches have domains in which they are preferred.

    Subsequent to this line of work there has been a huge flowering of different ideas for verifying aspects of software. One major idea has been abstraction-refinement in which one abstracts away most of the detail of the software in the initial model and then adds detail automatically based on how verification fails and then tries again. (Interpolation results about resolution proofs actually can be useful here.)
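
    To make that loop concrete, here is a toy version over an explicit 6-state graph (the graph, the crude initial partition, and the refinement rule are all my own inventions for illustration, not any particular tool's algorithm): model check the quotient graph, replay the abstract counterexample on the concrete states, and split an abstract block whenever the replay gets stuck.

        # Toy abstraction-refinement for reachability of a bad state.
        from collections import deque

        succ = {0: {1}, 1: {2}, 2: {0}, 3: {4}, 4: {5}, 5: set()}
        init, bad = 0, 5                 # state 5 is actually unreachable

        def abstract_path(partition):
            """BFS in the quotient graph; returns a list of block indices
            from init's block to bad's block, or None if unreachable."""
            block_of = {s: i for i, blk in enumerate(partition) for s in blk}
            start, target = block_of[init], block_of[bad]
            parent, queue = {start: None}, deque([start])
            while queue:
                blk = queue.popleft()
                if blk == target:
                    path = []
                    while blk is not None:
                        path.append(blk); blk = parent[blk]
                    return path[::-1]
                for s in partition[blk]:
                    for t in succ[s]:
                        if block_of[t] not in parent:
                            parent[block_of[t]] = blk
                            queue.append(block_of[t])
            return None

        partition = [{0, 1, 2, 3, 4}, {5}]       # crude initial abstraction
        while True:
            path = abstract_path(partition)
            if path is None:
                print("safe: bad state unreachable"); break
            reach = {init}                       # replay the abstract trace
            for i, blk in enumerate(path[1:], 1):
                reach = {t for s in reach for t in succ[s]
                         if t in partition[blk]}
                if not reach:                    # spurious: split previous block
                    prev = partition[path[i - 1]]
                    exits = {s for s in prev
                             if any(t in partition[blk] for t in succ[s])}
                    partition[path[i - 1]] = prev - exits
                    partition.append(exits)
                    break
            else:
                print("real counterexample:", path); break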

    The most surprising aspect from my point of view, though maybe not the most important, has been the ability to verify pushdown systems. Verifying these is obviously necessary to handle the call stack. How can one possibly use finite state verification to determine properties of pushdown automata?

    Büchi's theorem that I mentioned above does this:
    For any PDA M with stack alphabet Gamma, the language S(M) over Gamma^* consisting of all possible stack configurations of M is regular! Moreover, given M, one can build an NFA for S(M) fairly efficiently. (Actually one works with a language in Q Gamma^* that includes the state in the configuration.) So, for example, in order to verify a safety property (one that is always true), one can work with a verification that involves finite automata only!

    This property of stack languages seems so natural that one would expect it to be described in standard textbooks. (Consider how simple the possible stacks are for any of the familiar PDAs you know.) What is the simplest proof that anyone can come up with?
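
    As a quick sanity check of the claim on one familiar machine, here is a sketch (my own, and a bounded search, so an illustration rather than a proof): the standard PDA for { a^n b^n : n >= 1 } pushes an A for each 'a' and pops one for each 'b', and every stack it can reach matches the regular expression A*Z.

        # Enumerate the (state, stack) configurations reachable in a
        # bounded number of steps and check each stack against A*Z.
        from collections import deque
        import re

        # (state, input symbol, top of stack) -> {(new state, pushed string)}
        delta = {
            ("q0", "a", "Z"): {("q0", "AZ")},   # push the first A
            ("q0", "a", "A"): {("q0", "AA")},   # push another A
            ("q0", "b", "A"): {("q1", "")},     # start popping
            ("q1", "b", "A"): {("q1", "")},     # keep popping
        }

        seen, frontier = set(), deque([("q0", "Z")])
        for _ in range(12):                     # bounded exploration
            nxt = deque()
            while frontier:
                conf = frontier.popleft()
                if conf in seen:
                    continue
                seen.add(conf)
                state, stack = conf
                # Z is never popped here, so the stack stays nonempty.
                for sym in "ab":
                    for q, push in delta.get((state, sym, stack[0]), ()):
                        nxt.append((q, push + stack[1:]))
            frontier = nxt

        stacks = {stack for _, stack in seen}
        print(all(re.fullmatch(r"A*Z", s) for s in stacks))   # True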

  3. I'm very happy to see the Turing go to model checking. That was one of the areas I felt should have been given preference over the previous two Yet Another Compiler Writer awards (of which I consider the Naur one to be an obvious mistake at this point in time -- he should have gotten it (very) early on or not at all, because so many others have contributed more since, including some in his own department).

    I thought that Gerard Holzmann would be one of the recipients if the Turing were ever awarded for model checking, but he was left out. That surprises me.

    (Others that I think would have been preferable to Naur and possibly Allen: Scheifler and Gettys for X; Miller, Clifford, and Kohl for Kerberos.)
