Fortnow's post on competitive ratios has prompted a number of people to speculate on the 'right' number of people who should engage in competitive analysis. I took Fortnow's post as an invitation to revisit the arguments in favor of doing this kind of analysis. As I see it, there are four arguments:
- The argument from beauty: Truth is beauty and beauty is truth. The first direction I buy; the second, except in the case of Grecian urns, requires a proof. In any case, I am prepared to accept the aesthetics (or wit, or cunning) of competitive analysis as sufficient justification for engaging in it. Perhaps competitive analysis is to CS what number theory is to maths. There must, however, be shared aesthetic standards (otherwise the criterion has no bite), so only some kinds of work can be justified in this way. My guess is that the analysis of problems that can be stated with a minimum of fuss, have an immediate appeal, and seem simple in the first telling (pretty much what makes for a good number theory problem) meets this criterion. So TSP, SAT, clique, coloring, and max-cut are in. However, the stochastic, dynamic traveling dog and pony problem with piecewise linear costs is out.
- The argument from fine distinctions: Not all NP-complete problems are alike. Some really are easier than others, so one would like a device for making distinctions between them. The bounds from competitive ratios appear to do a nice job in this regard. Knowing that a problem can be approximated to within a log factor but not within a constant factor does tell us about its difficulty relative to others. Set cover, for example, admits a roughly ln n approximation via the greedy algorithm, while no constant factor is believed achievable in polynomial time (a minimal sketch of the greedy algorithm follows this list). Notice that the order of magnitude of the ratio suffices for this purpose, not the best possible ratio.
- The argument from ignorance: If we do not know what the distribution of problem instances looks like, what choice do we have? The assumption of complete ignorance is an extreme one and can be justified in some, but not all, cases. If one adopts this justification for engaging in a competitive analysis, one must first argue that the assumption of complete ignorance is reasonable to make. One can imagine natural situations where partial information is available; in these cases it seems reasonable to expect error bounds that incorporate such information. Bounds that are data dependent are not as pretty as ones that are independent of the data, but they may be more useful.
- The beneficial spin-off argument: This is the argument that Vazirani makes. Putting a man on the moon is a quixotic task, but we received non-stick frying pans as a by-product. However, we might have had non-stick frying pans by focusing on non-stick frying pans, and saved ourselves the trouble of putting a man on the moon. The point is that this argument does not rule out other ways of achieving the same benefits.
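As promised in the second bullet, here is a minimal sketch of the greedy set cover algorithm, the textbook example of a log-factor guarantee. The code is in Python, and the toy instance at the bottom is made up purely for illustration.

```python
# Greedy set cover: repeatedly pick the set covering the most
# uncovered elements. Classical result: the number of sets used is
# within a factor H_n <= 1 + ln(n) of optimal.

def greedy_set_cover(universe, sets):
    """universe: iterable of elements; sets: list of sets whose union covers universe."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        # Pick the set covering the largest number of still-uncovered elements.
        best = max(sets, key=lambda s: len(s & uncovered))
        if not best & uncovered:
            raise ValueError("the given sets do not cover the universe")
        cover.append(best)
        uncovered -= best
    return cover

# Hypothetical toy instance (made up): the optimum here uses 2 sets.
universe = range(1, 7)
sets = [{1, 2, 3, 4}, {3, 4, 5, 6}, {1, 2}, {5, 6}]
print(greedy_set_cover(universe, sets))
```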
Competitive analysis is now an export business, and one of the areas it is being exported to is game theory. This raises a new problem not present in the clean optimization framework: does the notion even make sense when what one is comparing is preferences rather than objective function values?
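For concreteness, the standard notion being exported, stated for minimization problems: an algorithm A is c-competitive if, for every input σ,

A(σ) ≤ c · OPT(σ) + b

for some constant b ≥ 0. The definition presupposes that A(σ) and OPT(σ) are numbers that can be compared by a ratio; with ordinal preferences there is no canonical objective value to put in their place, which is exactly the difficulty raised above.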
Yes, a nice post. There are several people who engage in competitive analysis, myself included, who are sceptical of the export business to game theory. One often feels that known results from the past are dressed up as new high-impact results.
"Stochastic" is out when mathematical beauty is the criterion? I am not so sure. It all depends on what is proved and how, and on what is revealed by the work.
Isn't it strange that mathematical beauty is so elusive to define, yet so easy to agree on case by case? We all recognise it when we see it, and I can't remember ever having an argument about it in any specific instance.
Yes, nice post.
I would like to mention another criterion that is often used in this context.
5. Historical argument. Many central problems in approximation algorithms (clique, coloring, max-cut) have been investigated for quite a while in TCS/CS. These are often the same problems that score high on criterion (1), so one can conjecture some causality here.
However, there is also the "algorithmic" question: how to test whether a problem satisfies (1)-(5).
1. Beauty: It has been argued here that this one is easy to test case by case.
2. Classification/fine distinction: either a classification is known or it is not; a literature search should resolve the issue.
3. Real need to deal with ignorance about the input: not as easy due to our ignorance ;) Still, if more detailed models are widely accepted, one should be able to find them.
5. Problem age: lower bounds on age are easy to get.
Now we are left with:
4. Beneficial spin-off.
This is, in my opinion, a heck of a problem. It essentially asks about the potential for indirect future impact, and that is not an easy thing to estimate.
One can use heuristics, such as:
- beauty/originality (as in (1))
- does the method overcome a significant obstacle
- past record of this line of research
- connections: the more, the merrier
and probably more. But in the end I think it is a judgement call.
Piotr
Darn! I guess my submission on the stochastic dynamic traveling dog and pony problem with piecewise linear costs isn't getting into STOC.
Most of the arguments here are about using worst-case analysis to evaluate approximation algorithms. I think another issue with the field of approximation algorithms is the fixation on the notion of "approximation factor". One example is spectral algorithms for finding a cut: they find cuts that are "close" to optimal in the worst case (Cheeger's inequality) and are known to perform well in practice, yet they are often ignored in approximation algorithms courses and textbooks just because they lack a good approximation ratio, while at the same time algorithms with approximation ratio sqrt(n) are sometimes taught in classes. (A minimal sketch of the spectral approach appears below.)
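Here is a minimal sketch of the spectral sweep cut the comment refers to, assuming only numpy and a connected graph; the toy graph at the bottom is made up. The algorithm orders vertices by the eigenvector for the second-smallest eigenvalue of the normalized Laplacian and returns the best prefix cut in that ordering; Cheeger's inequality bounds the conductance of this cut in terms of the optimum.

```python
# Spectral partitioning (the algorithm behind Cheeger's inequality):
# order vertices by the second eigenvector of the normalized Laplacian,
# then take the prefix cut of smallest conductance.
import numpy as np

def spectral_sweep_cut(A):
    """A: symmetric 0/1 adjacency matrix of a connected graph (numpy array).
    Returns (vertex set S, conductance of S) for the best sweep cut found."""
    d = A.sum(axis=1)                      # vertex degrees (assumed nonzero)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    # Normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}.
    L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt
    eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    # Fiedler-type vector: second-smallest eigenvalue, mapped back by D^{-1/2}.
    x = D_inv_sqrt @ eigvecs[:, 1]
    order = np.argsort(x)
    total_vol = d.sum()
    best_set, best_cond = None, np.inf
    for k in range(1, len(A)):             # sweep over prefixes of the ordering
        S = order[:k]
        complement = np.setdiff1d(order, S)
        vol = d[S].sum()
        cut = A[np.ix_(S, complement)].sum()
        cond = cut / min(vol, total_vol - vol)
        if cond < best_cond:
            best_set, best_cond = S, cond
    return best_set, best_cond

# Hypothetical toy graph (made up): two triangles joined by one edge;
# the best cut separates the triangles, with conductance 1/7.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
print(spectral_sweep_cut(A))
```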