If you have two complexity classes that each have complete sets, their intersection might not; NP ∩ co-NP, for example, is not known to have complete sets. The world of total-function classes behaves differently.

Christos Papadimitriou and others defined a number of classes based on finding solutions to problems where a solution is known to exist for some combinatorial reason. While TFNP, the set of all such problems, might not have complete sets, each of the other classes is essentially defined by a complete search problem, such as finding a local minimum for PLS. If you have two such classes **A** and **B** with complete search problems A and B, define the search problem D as

D(x,y): Find a solution to either A(x) or B(y)

D is complete for the intersection of **A** and **B**. First, D is in **A**, since finding a solution to either A(x) or B(y) reduces to finding a solution to A(x) alone. Likewise D is in **B**.

Now suppose you have a problem Z in **A** ∩ **B**. Since Z is in **A** and A is complete for **A**, finding a solution to Z(u) reduces to finding a solution of A(x), where x is easily computed from u. Likewise Z(u) reduces to finding a solution of B(y). So whatever solution D(x,y) gives you, it lets you recover a solution to Z(u). Thus D is complete for **A** ∩ **B**.
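The composition of reductions in this argument can be sketched in code. Everything named here is a hypothetical placeholder: z_to_a, z_to_b, and the solution-translation maps stand in for the reductions the proof assumes, and the toy instantiation at the bottom just exercises the plumbing.

```python
# Illustrative sketch of the completeness argument. The reduction
# functions are hypothetical stand-ins for the maps the proof assumes.

def solve_Z(u, solve_D, z_to_a, z_to_b, sol_a_to_z, sol_b_to_z):
    x = z_to_a(u)              # reduce Z(u) to an A-instance x
    y = z_to_b(u)              # reduce Z(u) to a B-instance y
    which, s = solve_D(x, y)   # D returns a solution to A(x) or to B(y)
    if which == "A":
        return sol_a_to_z(s, u)   # translate an A-solution back to Z
    return sol_b_to_z(s, u)       # translate a B-solution back to Z

# Toy instantiation: every problem is "find an index holding 0".
z_to_a = lambda u: u
z_to_b = lambda u: list(reversed(u))

def solve_D(x, y):
    for i, v in enumerate(x):
        if v == 0:
            return ("A", i)
    for j, v in enumerate(y):
        if v == 0:
            return ("B", j)

sol_a_to_z = lambda s, u: s
sol_b_to_z = lambda s, u: len(u) - 1 - s

u = [3, 1, 0, 2]
i = solve_Z(u, solve_D, z_to_a, z_to_b, sol_a_to_z, sol_b_to_z)
assert u[i] == 0   # whatever D answered, we got a Z-solution
```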

Some people nerd out to helicopters on mars. I nerd out to the complexity of complete sets.

I learned about complete sets for intersections of total-function classes from the talk by one of last week's STOC best paper awardees: The Complexity of Gradient Descent by John Fearnley, Paul W. Goldberg, Alexandros Hollender and Rahul Savani. The part above was well known, but the paper goes much further.

Consider PPAD, famous for having Nash equilibrium as a complete problem, and PLS. PPAD ∩ PLS has complete sets by the argument above. But we can go further.
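For concreteness, PLS's complete search problem has the flavor of local search: follow cost-improving moves until none exists. The sketch below is a toy illustration with an invented instance, not the formal circuit-based definition; a solution is any local minimum, which must exist because the cost strictly decreases along the walk and is bounded.

```python
# Illustrative PLS-style search: each candidate solution has a cost and
# a neighborhood; a valid "solution" is any local minimum.

def local_minimum(start, cost, neighbors):
    """Follow improving moves until no neighbor is cheaper."""
    current = start
    while True:
        better = [n for n in neighbors(current) if cost(n) < cost(current)]
        if not better:
            return current          # local minimum: a valid solution
        current = better[0]         # take any improving neighbor

# Toy instance: candidates 0..15 on a cycle, with a bumpy cost function.
cost = lambda v: (v * 7) % 16
neighbors = lambda v: [(v - 1) % 16, (v + 1) % 16]

sol = local_minimum(5, cost, neighbors)
assert all(cost(n) >= cost(sol) for n in neighbors(sol))
```

The point of PLS-hardness is that while each improving step is cheap, the walk itself can be exponentially long in the instance size, and no known algorithm shortcuts it in general.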

The class CLS is a variation of PLS where you find a local minimum in a continuous domain under some Lipschitz conditions; it was known to sit in the intersection of PPAD and PLS. Fearnley et al. look at finding a local minimum using gradient descent (the main tool behind deep learning) and show it is not only CLS-complete but complete for PPAD ∩ PLS. As a consequence, CLS = PPAD ∩ PLS. Pretty cool stuff.
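Gradient descent itself is simple to state. Here is a minimal one-dimensional sketch; the quadratic objective, step size, and stopping tolerance are arbitrary illustrative choices, far simpler than the Lipschitz-smooth setting the paper analyzes.

```python
# Minimal gradient descent sketch (illustrative, not the paper's model):
# minimize f(x) = (x - 3)^2 + 1 by following the negative gradient.

def gradient_descent(grad, x0, step=0.1, tol=1e-8, max_iters=10_000):
    x = x0
    for _ in range(max_iters):
        g = grad(x)
        if abs(g) < tol:        # approximate stationary point found
            break
        x -= step * g
    return x

f = lambda x: (x - 3) ** 2 + 1
grad_f = lambda x: 2 * (x - 3)

x_star = gradient_descent(grad_f, x0=0.0)
assert abs(x_star - 3.0) < 1e-4   # converges to the unique minimum
```

On a nice convex function like this one the iteration converges quickly; the hardness result concerns general objectives, where reaching even an approximate stationary point can be intractable.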

Thanks Lance. Let me admit that the paper is a fine example of the humorous stereotype of complexity theorists proving a problem "hard" while that very problem is routinely being solved in practice in the real world. (In this case, the problem of finding continuous local optima.) But of course, our hope is that understanding the worst-case hardness will lead to a better understanding of solvability in practice.

Thank you. Essentially the solution is there, but it could take an exponential number of gradient iterations, correct? Are there P-time cases for the class CLS?

Lance, nice fonts ... sets a and B rendered nicely on my screen.

@Unknown -- yes, solutions are there, and if you can find one, it's easy to recognize as a solution. Note that an "exponential number of gradient iterations" result is known from https://arxiv.org/abs/1807.01280 (On the Computational Power of Online Gradient Descent, by Vaggos Chatziafratis, Tim Roughgarden, Joshua R. Wang). Our paper shows that the solution concept (a local optimum of a continuous function) is "hard" to find by any algorithm, even one allowed to inspect the circuit that computes the objective function, looking for short-cut alternatives to gradient iterations.
