Sorry, completeness results are algorithms. They are proved by algorithms people using algorithmic techniques. There is a simple-minded view that upper bounds are algorithms and lower bounds are complexity. But that doesn't reflect reality: upper bound results like the PCP theorem or SL=L are complexity. It's not even clear whether a result like "circuit lower bounds imply derandomization" is an upper bound or a lower bound.
Since algorithmicists, like complexity theorists, lack techniques to prove general lower bounds, they instead use reductions to show that problems are hard. This gives them a two-pronged attack on a problem, where the failure to find a reduction might lead to an algorithm, or vice versa. In structural complexity, we see a similar dichotomy between inclusions and relativized separations.
There is no absolute dividing line between algorithms and complexity, but loosely, algorithms deals with specific problems while complexity studies classes of problems based on some computation model with certain resource bounds. So the definition of PPAD and its relationship to other classes is complexity, but the reduction from PPAD to Nash Equilibrium is algorithmic.
I find myself feeling slightly offended by the post, although I completely agree with it in spirit. (I think it's because something about the post makes "algorithmists" or "algorithms" sound second-class, although I'm sure that's unintentional.)
One of the things I try to spell out clearly in my undergraduate algorithms class is that perhaps the most important idea we cover is the notion of reduction. Usually, we use it in a "positive" sense -- take a new problem, and turn it into an old problem we already know how to solve. But that can be turned around to use it in a "negative" sense to prove a problem is hard. (Even if they've seen reductions in their undergraduate complexity class, I don't think this dual role comes out until the algorithms class.)
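To make that dual role concrete, here is a toy sketch; the problems and the closest_pair_distance solver are my own hypothetical illustrations, not anything from the post.

# Toy illustration of the two uses of a reduction.
#
# "Positive" sense: ELEMENT-DISTINCTNESS reduces to SORTING, so a sorting
# routine we already trust solves the new problem with a little glue.
def all_distinct(xs):
    ys = sorted(xs)                                    # the known algorithm
    return all(a != b for a, b in zip(ys, ys[1:]))     # O(n) glue

# "Negative" sense: ELEMENT-DISTINCTNESS also reduces to CLOSEST-PAIR on a
# line (the closest pair is at distance 0 iff some element repeats), so the
# known Omega(n log n) comparison-model lower bound for distinctness
# transfers to closest pair: here the reduction certifies hardness instead
# of yielding an algorithm.  closest_pair_distance is a hypothetical solver.
def has_duplicate(xs, closest_pair_distance):
    return closest_pair_distance(xs) == 0

if __name__ == "__main__":
    print(all_distinct([3, 1, 4, 1, 5]))   # False
    print(all_distinct([3, 1, 4, 5]))      # True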
My experience is that reduction is a powerful idea not appreciated nearly as much elsewhere. For instance, while I think reductions occasionally do appear in the information theory community, it doesn't really seem to be part of the mindset; they don't generally refer to reductions as reductions, and they don't seem to pursue them consciously (or as vigorously) as I think the CS theory community does. Although that seems to be changing -- it's a great framework for us to export.
As an algorithms person, I've heard discussions in PC meetings, in job interviews, or from someone trying to talk me into attending a seminar: Jane has an algorithms result; she shows that simple polygons can be triangulated in linear time.
Sorry, upper bound theorems are complexity results. This particular result was motivated by a line of structural results showing a variety of decision problems to be in O(n)^{T[1]}, i.e., solvable in time O(n) given a single call to an oracle for simple polygon triangulation. It has implications for the transdichotomous reduction hierarchy, regarding languages that may use calls to an oracle for what is sometimes informally described as the "floor" function. While such a result tells us something quite fundamental about computation, human existence, and the nature of the universe, to call it an "algorithms" result, as though it had some bearing on our everyday, tedious, humdrum computing concerns, is to misunderstand its nature. It will never be a work-horse like the zig-zag product, built into every computing device we own.
We have complexity theorists telling us that their complexity classes are important because they characterize the complexity of certain algorithms. And we have algorithmists telling us that their algorithms are important because they can be applied to solve certain practical problems. That last step, "they can be applied to solve", is exactly where reduction fits in. It is taking a known algorithm and applying it to solve something else that's similar enough that the algorithm still mostly works, but different enough that some glue is needed.
For this reason our graduate-level theory-for-nontheoreticians course is centered around the idea of reduction and leads up to the theory of NP-completeness. They don't necessarily need to be able to design and analyze algorithms de novo themselves, but they do need to be able to recognize when whatever other CS problem they're working on has a known algorithm that can be tweaked to fit.
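A minimal sketch of that "glue" idea, using my own toy example (the reliability problem and graph encoding are just illustrations): a textbook shortest-path routine is reused, unchanged, to answer a different question.

import heapq
import math

def dijkstra(graph, source):
    """Textbook Dijkstra; graph[u] is a list of (v, weight) pairs, weights >= 0."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def most_reliable(prob_graph, source):
    """Most-reliable paths (maximize the product of edge probabilities in (0,1]).
    The "glue" is mapping each probability p to weight -log p, running the old
    algorithm, and translating the answer back."""
    graph = {u: [(v, -math.log(p)) for v, p in nbrs] for u, nbrs in prob_graph.items()}
    return {v: math.exp(-d) for v, d in dijkstra(graph, source).items()}

if __name__ == "__main__":
    g = {"s": [("a", 0.9), ("b", 0.5)], "a": [("t", 0.8)], "b": [("t", 0.99)]}
    print(most_reliable(g, "s"))   # best achievable reliability for each reachable node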
"Sorry, completeness results are algorithms."
This sounds overly simplistic. Say I have a proof right next to me that GI is NP-complete; would it not be appropriate for publication at CCC?
I'd say that straightforward completeness results are neither algorithms nor complexity, just like a simple arithmetic operation (3+11=14) is neither modern algebra nor topology.
Advanced, complicated completeness results can be either algorithms or complexity depending on their structural implications.
Say, Mulzer and Rote's "Minimum weight triangulation is NP-hard", which is a breakthrough on a long-standing open problem, has algorithmic implications, as the most important applications are reductions from other problems.
A result such as GI being NP-complete, while also having many reductions associated with it, has many more structural implications, and I'd say it would be mostly a complexity result.
"loosely algorithms deals with specific problems while complexity studies classes of problems based on some computation model with certain resource bounds."
While this is currently true, I don't think there is any reason to assume these are intrinsic properties of either algorithms or complexity.
The fact that we can now test all minor-closed properties in polynomial time feels algorithmic, since the result readily leads to actual algorithms. On the other hand, the "algorithms" in "approximation algorithms" are highly problem-specific, yet they feel complexity-like, as they are more interested in the structural properties of the problems they are solving than in actual real-life algorithms that one would use in practice.
This post is very important.
The following fact has long been overlooked: when God created the world, she created the problems that the computer scientists work on today, and tagged each one with a subfield name.
Using the wrong sub-field name is simply blasphemy!
I would also like to warn that if we do not stop the madness here we might find mathematicians studying computer-science subjects and vice versa.
God help us all!
The issue in this that interests me most is: what complexity results do we have that are NOT purely reductions?
Shannon's counting argument is one, we'd like to say. But you could also say it's proved by a reduction from the (unsolvable) 'problem' of injectively mapping a large domain into a small range. Similarly, lower bounds with an information-theory flavor can often be seen as arguing that, if the problem were solvable, you'd be able to transmit n bits in a message of fewer than n bits.
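For concreteness, a back-of-the-envelope version of that counting argument (my own sketch; the constants are deliberately loose):

% Shannon-style counting, stated loosely.
% There are $2^{2^n}$ Boolean functions $f\colon \{0,1\}^n \to \{0,1\}$,
% but at most $2^{O(s \log s)}$ circuits with $s \ge n$ gates, so
\[
  2^{O(s \log s)} \;<\; 2^{2^n}
  \qquad\text{whenever}\qquad s = o\!\left(\frac{2^n}{n}\right),
\]
% and hence some function on $n$ inputs requires circuits of size
\[
  s \;=\; \Omega\!\left(\frac{2^n}{n}\right).
\]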
So where do we draw the line? Also, is progress in complexity theory being slowed by the fact that we don't have many techniques that are not reductions?
Andy: Ladner's theorem.
"There is no absolute dividing line between algorithms and complexity, but loosely algorithms deals with specific problems while complexity studies classes of problems based on some computation model with certain resource bounds."
ReplyDeleteThis seems to be well wide of the mark; at least it seems to get the implication in the wrong direction. For example, research on lower bounds typically focuses on very specific problems often without particular focus on classes of them.
One of my favorite terms is due to Nancy Lynch: "counterexample algorithm". This is an algorithm whose only purpose is to refute some impossibility proof or complexity lower bound. Such an algorithm might be impractical or ugly. Counterexample algorithms are a complexity theorist's tool.
Maybe the difference is one of intent.
There's a trivial sense in which every mathematical proof is a reduction from one question to another question to a third question, etc. I'd say a CS result is "not a reduction" if the reductions take place mostly at that level (which is common to all of mathematics), rather than at the specific level of reducing one well-defined computational problem to another one. By this criterion, we certainly have plenty of non-reductions in CS (for example: most query complexity and communication complexity lower bounds).
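As one concrete instance of such a non-reduction argument, here is a standard textbook-style sketch (added purely for illustration): the fooling-set lower bound for the deterministic communication complexity of EQUALITY.

% Fooling-set argument for EQUALITY on $n$-bit inputs.
% Take $S = \{(x,x) : x \in \{0,1\}^n\}$.  If $(x,x)$ and $(y,y)$ with
% $x \neq y$ landed in the same monochromatic rectangle of a protocol's
% partition, then $(x,y)$ would lie in that rectangle too, contradicting
% $\mathrm{EQ}(x,x) = 1$ and $\mathrm{EQ}(x,y) = 0$.  So any protocol induces
% at least $|S| = 2^n$ rectangles, giving
\[
  D(\mathrm{EQ}_n) \;\ge\; \log_2 |S| \;=\; n .
\]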