The Lovász Local Lemma (LLL), slightly simplified, states the following: Given a set of “bad” events, if every event A is mutually independent of all other events except for a subset whose total probability mass is at most 1/e, then the probability that all the “bad” events are avoided simultaneously is nonzero. The LLL is used in combination with the probabilistic method to (nonconstructively) prove the existence of, e.g., a satisfying assignment for any k-CNF formula in which no clause shares variables with more than 2^k/e other clauses, optimal routing strategies, and many other (seemingly unrelated) structures of interest to TCS.
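For concreteness, here is the standard symmetric form of this condition (with p the maximum probability of a bad event and d the maximum number of other events any one event depends on), together with the calculation behind the 2^k/e bound for k-CNF; this is a textbook restatement, not a quote from the papers discussed here:

```latex
% Symmetric LLL: if every bad event A has \Pr[A] \le p and depends on
% at most d other events, then
\[
  e \, p \, (d+1) \le 1
  \;\Longrightarrow\;
  \Pr\Bigl[\, \bigcap_{A} \overline{A} \,\Bigr] > 0 .
\]
% For a k-CNF formula under a uniformly random assignment, each clause is
% violated with probability p = 2^{-k}, so the condition becomes
\[
  e \, 2^{-k} (d+1) \le 1
  \;\Longleftrightarrow\;
  d \le 2^k/e - 1 ,
\]
% i.e., each clause may share variables with roughly 2^k/e other clauses.
```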
At the last STOC, Robin Moser gave his celebrated talk on how to find such structures efficiently; later, he and Gábor Tardos simplified his algorithm and generalized it to the full asymmetric LLL. They assume that the events are determined by a collection of independent random variables and define two events as dependent iff they share a variable. This is the basic setting in most applications of the LLL.
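In the k-CNF setting this dependency structure is easy to make explicit: two bad events (violated clauses) are neighbors exactly when the clauses share a variable. A minimal sketch, with my own illustrative names and a list-of-literals clause encoding (+i / -i meaning variable i must be True / False):

```python
from itertools import combinations

def dependency_graph(clauses):
    """Adjacency of bad events: clause i depends on clause j iff
    the two clauses share a variable."""
    vars_of = [set(abs(l) for l in c) for c in clauses]
    adj = {i: set() for i in range(len(clauses))}
    for i, j in combinations(range(len(clauses)), 2):
        if vars_of[i] & vars_of[j]:
            adj[i].add(j)
            adj[j].add(i)
    return adj
```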
The Moser-Tardos algorithm (henceforth MT-algorithm) is incredibly simple: start with an arbitrary assignment, and repeatedly pick any one of the given bad events that currently holds and assign new random values to its variables. Of course, attempting to “fix” an event in this way can cause neighboring events to become violated, but the beautiful argument of Moser and Tardos shows that this branching process dies out quickly whenever the original LLL-conditions (even with the optimal constants) are fulfilled. This yields a randomized algorithm whose expected running time is linear in the number of bad events.
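To make the resampling loop concrete, here is a minimal sketch for the k-CNF setting, using the same clause encoding as above; the function name and representation are my own illustration, not code from the paper:

```python
import random

def moser_tardos_sat(num_vars, clauses):
    """Find a satisfying assignment by resampling violated clauses.
    Assumes the LLL-conditions hold, so the loop is expected to
    terminate after linearly many resamplings."""
    # Start with an arbitrary (here: uniformly random) assignment.
    assignment = {v: random.random() < 0.5 for v in range(1, num_vars + 1)}

    def violated(clause):
        # A clause is violated iff every one of its literals is false.
        return all(assignment[abs(l)] != (l > 0) for l in clause)

    while True:
        bad = next((c for c in clauses if violated(c)), None)
        if bad is None:
            return assignment  # all bad events are avoided
        # Resample only the variables of the violated clause.
        for l in bad:
            assignment[abs(l)] = random.random() < 0.5

# Example: (x1 or x2) and (not x1 or x3)
print(moser_tardos_sat(3, [[1, 2], [-1, 3]]))
```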
Recently, further important properties of the MT-algorithm have been shown:
1.) It can be derandomized using approximately log-wise independent probability spaces (or logspace-fooling PRGs), if one allows an arbitrarily small ε-slack in the LLL-conditions. This leads to deterministic (NC) algorithms for most LLL-applications with polynomially many bad events (a toy sketch follows this list).
2.) It outputs a solution that not only avoids all the bad events, but also has further randomness properties: just like the conditional LLL-distribution (i.e., the distribution obtained by conditioning on all bad events being avoided), the output distribution of the MT-algorithm approximately preserves the probability of any event that is sparsely dependent on the bad events. While interesting in its own right, this can also be used to develop efficient algorithms for several LLL-applications in which the number of events is superpolynomial in the number of variables, which in turn is the “size” of the desired assignment. It can be shown that even in these cases the number of resamplings done by the algorithm remains small (e.g., nearly linear in the number of variables), but often just finding a bad event that currently holds, or verifying that the current solution avoids all the bad events – the two basic steps of the MT-algorithm – is (NP-)hard. The solution to this dilemma is to run the algorithm only on a suitably chosen polynomial-sized core-subset of the events (see the second sketch below). A union bound over the non-core events in the conditional LLL-distribution then shows that a good assignment is produced with high probability for essentially any LLL-application. This resolves, e.g., the curious situation of the Santa Claus problem, for which two completely different non-constructive proofs of a constant LP-integrality gap had been developed without leading to a polynomial-time constant-factor approximation algorithm.
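To illustrate the flavor of 1.): in the sketch below, brute-force enumeration over short bit strings stands in for an explicit log-wise independent space or logspace-fooling PRG, and the function name and resampling budget are my own illustration. The point is only the structure: by the ε-slack analysis, some seed must make the resampling loop succeed within the budget, so trying all seeds is deterministic.

```python
from itertools import product

def deterministic_mt(num_vars, clauses, seed_bits, budget):
    """Toy derandomization by seed enumeration: some seed of a small
    sample space makes the MT resampling loop succeed within `budget`
    resamplings, so trying them all finds a good assignment."""
    for seed in product([False, True], repeat=seed_bits):
        bits, pos = list(seed), num_vars
        if pos > len(bits):
            return None  # seed space too small to even initialize
        assignment = {v: bits[v - 1] for v in range(1, num_vars + 1)}
        for _ in range(budget):
            bad = next((c for c in clauses
                        if all(assignment[abs(l)] != (l > 0) for l in c)),
                       None)
            if bad is None:
                return assignment  # this seed works
            if pos + len(bad) > len(bits):
                break  # ran out of seed bits; try the next seed
            for l in bad:  # resample from the seed, deterministically
                assignment[abs(l)] = bits[pos]
                pos += 1
    return None

# Usage on a tiny instance:
# deterministic_mt(3, [[1, 2], [-1, 3]], seed_bits=9, budget=3)
```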
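And a second, even more skeletal sketch of the core-subset idea from 2.), with an arbitrary probability threshold standing in for the careful choice made in the actual analysis; all names here are illustrative:

```python
def solve_with_core(events, run_mt, threshold):
    """Run the resampling algorithm only on a polynomial-sized 'core'
    of the bad events; a union bound over the remaining events (whose
    probabilities are only slightly boosted under the conditional
    LLL-distribution) shows the output avoids them too, whp.

    events    : list of (probability, event) pairs
    run_mt    : routine running Moser-Tardos on a list of events
    threshold : events at least this likely go into the core
    """
    core = [e for p, e in events if p >= threshold]
    non_core_mass = sum(p for p, e in events if p < threshold)
    # The analysis needs the total (boosted) non-core mass to be small,
    # so that the union bound over non-core events goes through.
    assert non_core_mass < 1, "union bound over non-core events must be < 1"
    return run_mt(core)
```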