The greatest gift of computational complexity to society is the ability to use NP-completeness to show that a large variety of problems are likely hard. The field has also developed many tools to help easily show such problems are NP-complete. We shouldn't penalize people for using these tools.

This is one of those cases where having a "simple proof" hurts the paper. If the paper used PCP technology in the proof, it would without question be published. And if the paper relied on the unique games conjecture (and thus proved a weaker result), it would likely get accepted into STOC.

I couldn't agree more with you Lance.

I agree completely. Furthermore, if the result were reported at a theory conference (perhaps as a poster), theoreticians could then go have a look and see if there are any other problems we can help them with. Some of those problems might in turn prove nontrivial, resulting in a new branch of theory of independent interest, just as was the case with network game theory results.

When I use a simple result about concentration of random variables in a CS paper, do I try to publish it in a probability journal? No! That wouldn't make any sense.

If I made a significant contribution to probability in the course of solving a CS problem, would I publish it in a probability journal? Of course!

What's the problem?

I think the problem is the trade-off: an interesting result vs. a technical result.

In partial defense of the status quo:

i) The fact that a first published proof uses advanced techniques at least suggests that a simple proof is hard to find. Suppressing an easier proof to make yourself look smart is possible, but seems to risk embarrassment.

ii) Results alone justify mention but not sustained attention. It's charming to know that Tetris is NP-hard, but I'm not dying to read why because I suspect it'll involve very app-specific gadgetry. Hence,

iii) Methods matter. Even if a method hasn't shown much incremental value over known ones, seeing it at work in simple settings helps build facility and improves the prospects of eventually pushing the envelope with it.

Personally speaking, a large part of my current study consists of reading papers that make use of arithmetization concepts, without requiring that they make large contributions to our knowledge of the complexity hierarchy. The desire to understand how the information and complexity properties of a function are transmitted or transmuted in its polynomial extension has taken over.

I think there is a legitimate place for adherence to a method; if it is rich enough, it will generate its own questions and gain a degree of autonomy from the original demands placed on it. Granted, as an undergrad I'm far from having a mature research perspective, but hopefully someone can back me up here.

As far as I'm aware, the only papers with hardness results based on the unique games conjecture that appeared in STOC/FOCS were for problems for which people had long been trying, without success, to get such approximation-hardness results.

A paper that obtained the same hardness results for these problems without relying on unique games (for example, showing that the Goemans-Williamson max-cut algorithm is optimal unless P=NP) would have been enthusiastically accepted to STOC, no matter how simple the proof. In fact, I believe a simpler proof would only help the paper (e.g., the main difference between Reingold's algorithm for s-t connectivity, which won the best paper award, and Trifonov's algorithm is not efficiency but the simplicity of the algorithm and its proof).

I seriously doubt that a paper showing unique-games hardness for a problem no one in the TCS community had thought of before would be accepted to STOC. Of course, I have no clue whether such a paper would have an easier time getting accepted into a conference in the appropriate non-CS area than a paper using a simpler reduction from a known NP-complete problem.

Yes, in fact the comment about unique games is an overstatement. I think there are only four papers in FOCS/STOC that have dealt with it: (1) the original paper of Khot introducing the unique games conjecture, (2) the paper showing that the UGC implies optimality of the GW algorithm for MAX-CUT, (3) the upcoming paper on sparsest cut, which gives an _unconditional_ disproof of the Goemans-Linial conjecture, and (4) Trevisan's algorithm for approximating the value of a unique game.

Depending on your viewpoint, some of these papers aren't very hard or technical, but as a group they have inspired a lot of beautiful research, e.g., the Majority Is Stablest theorem and the embedding lower bounds. So this is a case where FOCS/STOC got it exactly right.

Indeed, it is HIGHLY relevant to know the computational complexity of problems occurring in real-world applications (or at least in other areas). Think of problems in scheduling or routing, which are of limited interest in themselves from a purely theoretical point of view; knowing that they are NP-complete is important anyway, because one actually wants to solve these problems, and if there is no easy polynomial-time algorithm, one needs to come up with branch-and-bound, ILP, approximation, and other kinds of solutions.

The same holds for a bunch of problems in bioinformatics (protein folding, protein threading, ...) and numerous other fields.