- What is the problem being studied? Is it motivated in a compelling manner, by either practical or theoretical considerations? New problems warrant more careful scrutiny.
- What is the result? If there are several results, I usually look only at the main one and evaluate the paper based solely on its best result.
- Is the proof clever or difficult? Can I identify the key idea, and is it novel?
- Finally, I give the paper a bonus if it is particularly well written and a big malus if it is particularly poorly written, especially if the authors are all well beyond their student years; I get impatient with them and think to myself that they should know better.
At this point in my career, this is usually all fairly routine. However, as Daniel Dennett suggests: "Perhaps our approximation of a perfect Kantian faculty of practical reason falls so far short that our proud self-identification as moral agents is a delusion of grandeur." I was recently given a paper to review for a conference, and something strange happened. Based on my usual criteria outlined above, I sent to the program committee a mild recommendation for rejection and promptly put the submission in the trash. But instead of instantly forgetting about it, I kept remembering bits and pieces of it and found myself trying to reconstruct parts of the proofs. After a couple of days, it dawned on me that, even though the submission did not pass the filter of my "objective" criteria, still, I was interested in it and actually liked it!
I wonder what one should do in that case. Should you trust your instinct, or obey your evaluation rules? Go with your Guts, or Let Reason Rule?
Comments
Claire
If you can articulate the REASONS why you like it, then YES, you can articulate them and argue for it. If you JUST like it in your gut but cannot articulate why, then do not argue for it; you can't argue for it since you can't articulate it.
Do not be afraid of taking an unusual or odd stand; that's why program committees have lots of people on them.
WILLIAM GASARCH
(Last time I posted a comment, a later comment referred to my comment as `anonymous said...', which was odd since I definitely used my name. THIS time, refer to me by name please.)
I normally apply evaluation criteria before I have made up my mind, not afterwards, and the main purpose of that exercise is to help me reach a conclusion. It seems a bit hypocritical to develop an a posteriori argument whose sole purpose would be to fake a rational line of thought leading to a conclusion which was already foregone anyway. I think that it is more honest to just state that I like it. (Of course I can try to understand why, and update my criteria accordingly for later use.)
I don't understand why you only look at the main result. Are you saying that a paper with one good result is better than a paper with three average-to-good results? Or that a paper with one good result is equivalent to a paper with three equally good results?
PS: I find it really silly that anonymous comments are not allowed...
That's right. With a few exceptions, I don't like papers that accumulate not-so-good results and then try to argue that quantity can substitute for quality. Conference submissions usually need to be evaluated really quickly, so focusing on a single result in the paper saves me time, and I find it to be an excellent proxy in general.
Anonymity: I don't have permission to change the settings. Maybe Lance did this on purpose to protect me from anonymous attacks??
Hi Claire ;)
I received some comments from a referee which presented both "reason" and "guts" arguments: the referee said that he *wanted* to accept the paper because he felt that this direction should be encouraged, and then proceeded to point out the paper's weak points (basically, bad writing in this case).
As a referee, I don't see any problem in giving both opinions (reason and guts): the PC member can pick from them. As a PC member, I guess that I would present both arguments but vote with my guts: after all, the readers of the proceedings might have similar guts ;)
I think in many cases quantity can make up for quality. Here's one example: consider a medium-complexity lower bound put together with a matching medium-complexity upper bound. Neither of them alone would have been a STOC/FOCS/SODA paper, but together they close the problem, and it is now a high-quality result.
I guess the problem with the "best result criterion" is exemplified by papers like Karp's NP-completeness paper. No one result from that paper really conveys its main message that NP-completeness is the rule rather than the exception. One could say that in this case the whole is more than the sum of the parts, and much more than the maximum of the parts.
Another example along the lines of Piotr's is Anna Gál and Peter Bro Miltersen's "The cell probe complexity of succinct data structures", ICALP'03 (http://www.daimi.au.dk/~bromille/Papers/succinct.pdf).
This is an excellent paper; in fact, it won the best paper award at ICALP. The paper itself consists of a long list of rather weak lower bounds. What makes it so exceptional is that (i) no bounds were known before, and (ii) all the bounds use the same general technique, illustrating the power of the line of attack.
I don't think any of the papers mentioned here are good examples of achieving quality through quantity. The right question isn't how many theorem statements there are or what the most impressive one looks like in isolation. Rather, it's whether one can give a short, compelling summary without using the word "and" very often.
Karp's paper shows that many well-known problems are NP-complete. The Gál-Miltersen paper introduces a new technique that proves unprecedented lower bounds. In each case, a detailed summary of the paper might be lengthy, but there's a real unity to the paper.
On the other hand, some papers are laundry lists of minor results. In the worst case, the results are only loosely connected.
Anyway, there's a simple test. If someone submits a paper containing three results, ask yourself if you would publish all three as short papers if they were submitted at the same time by different authors. Sometimes you might: a matching upper and lower bound are of interest regardless of whether they were proved by the same person. Other times, it's clear that each one individually would fail to meet minimal standards. If you wouldn't print them back to back, why should bundling them into one paper make them any more interesting to readers/conference attendees?
The only objection I can think of to this criterion is the observation that an author who proves three results looks smarter than someone who just proves one of them. That may be true, but submissions should be judged on their own merits, not by how smart the author appears.