Computational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch
Sunday, March 19, 2006
An Interview with Vardi
Alex Lopez-Ortiz points out that this month's SIGMOD Record has an interview with Moshe Vardi from Rice University that touches on several topics of interest to the readers of this weblog. Lopez-Ortiz picked out some highlights of Vardi's views.
- On relevance of work and best-paper awards:
We now have interesting tools to evaluate the success of work in the long term. Things like Citeseer and Google Scholar suddenly give us a view that we could not have had before. One thing that we discovered from these tools is that we are actually very poor in predicting the long term impact of work. There is very little correlation, for example, between the best paper awards that we give and the test-of-time awards that we give. Sometimes, miraculously, we have the same paper win the best paper and the test-of-time awards. But that is the exception rather than the rule. So I think people should focus on just doing the best work they can.
- On highly selective conferences and low acceptance rates:
I think the low acceptance rate is a terrible problem. I think that the idea that we are raising the quality is nonsense. I think that actually the quality goes down. I think we are very good at selecting about one third to one fourth of the papers; we do a pretty good job there. As we become more selective, the program committee gets larger, the whole discussion gets more balkanized, and the whole process gets more random. Nobody gets a global view. … Conferences are not scalable. They work nicely with up to roughly 300 submissions and a certain size of program committee. When you try to scale it up, very good papers get lost. It becomes more political. I think we are being held back by our own success.
- On conference publication vs journal publication (Here I did some serious cut and pasting—Alex):
We are very unique among all the sciences in how we go about publication. We have these selective conferences. (People have stopped calling them refereed conferences. They are not really refereed. You don't get good referee reports.) Our conferences worked well in the world we used to inhabit… I think we had a model that worked successfully for about 30 years, but now we see cracks in the foundations… I don't have a good solution to this problem. We don't even have a good forum in which to discuss the problem… We ought to rethink how we do it. Right now, people try to fix things incrementally by having a larger conference with a bigger program committee, a two-level PC, a three-level PC. Maybe we need to rethink the way we do scholarly communication in our discipline… How can computer science go about changing its publication culture? Are there areas that move just as fast as we do, and have journal papers and conferences, but conferences are not the primary vehicle? I have questions about the basic model of scholarly publications. And I find it fascinating that it is difficult to have a conversation about this on a big scale, and make changes on a big scale. We are very conservative. It is interesting that computer science has been one of the slowest disciplines to move to open access publications. Other disciplines are way ahead of us in using online publications.
wow, saying such things out loud takes a lot of balls. and tenure.
On the best paper awards problem: probably the big problem is the completely arbitrary schedule on which such things are handed out. Perhaps reserving special honors for the occasional obvious shoo-in would improve the situation, because those are the only papers that really deserve them.
It's unfortunate that theory is influenced by such industry fashions as object databases and XML, which are ... how to put this ... obviously destined to be no more than fleeting fancies.
Re: the recall problem in conferences. Ken Church has a recent squib in the Computational Linguistics journal titled Reviewing the Reviewers (sorry, it requires ACM). His argument is essentially that we have a significant recall problem. He poses several solutions, but they basically boil down to "accept more papers." This can become logistically difficult and expensive, but it's hard to think of a better incremental solution.
That said, I think we're seeing a natural evolution toward people just putting papers on their web sites for others to find. We need improved search/streaming techniques for finding ones we want to read, but this is just a technology problem. Perhaps this is the solution.
Ironically, 'Reviewing the Reviewers' isn't a freely available download.
While agreeing with a lot of what Vardi says, I feel that Citeseer and Google Scholar are poor measures for importance and quality of a paper.
One of the advantages of our conference system is that bright young researchers can get wide exposure for their work. In some other disciplines it is a lot harder for newcomers to draw attention.
Best paper awards are a two-edged sword. They may be used to impress hiring and promotion committees, but then you have to make excuses for the candidates that didn't get them.
The problem with reviews is that they are anonymous and there is no incentive to do them well.
Why not provide three grades of review, "thorough", "standard", and "quick check", and then let the PC decide which reviewers did what kind of review? Rather than listing a big "thank you" to all of the reviewers, say something like "Anonymous did two thorough reviews and four quick checks." (A toy sketch of this tallying appears below.)
Presumably peer pressure and the ethos of competition-in-all-things would then goad people into producing at least an average reviewing record.
The question is whether or not PC members would have the guts to call people out on passing off a quick-check review as a more thorough one.
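To make the bookkeeping in that proposal concrete, here is a minimal sketch of the tallying step in Python. The reviewer IDs, grade counts, and data format are all hypothetical, invented for illustration; nothing here comes from an actual conference system.

from collections import Counter, defaultdict

# Hypothetical records: (reviewer id, grade assigned by the PC afterwards).
# The grades follow the three levels proposed above.
reviews = [
    ("reviewer-07", "thorough"),
    ("reviewer-07", "quick check"),
    ("reviewer-07", "quick check"),
    ("reviewer-23", "standard"),
    ("reviewer-23", "thorough"),
]

def acknowledgment_lines(reviews):
    """Summarize each reviewer's record without revealing identities."""
    per_reviewer = defaultdict(Counter)
    for reviewer, grade in reviews:
        per_reviewer[reviewer][grade] += 1
    lines = []
    for i, (_, counts) in enumerate(sorted(per_reviewer.items()), start=1):
        summary = ", ".join(f"{n} {grade} review(s)" for grade, n in counts.most_common())
        lines.append(f"Anonymous reviewer #{i} did {summary}.")
    return lines

for line in acknowledgment_lines(reviews):
    print(line)
# e.g. "Anonymous reviewer #1 did 2 quick check review(s), 1 thorough review(s)."

The bookkeeping is trivial, of course; as the comment above notes, the hard part is whether PC members would grade the reviews honestly.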
Maybe conferences should stop having proceedings. Certainly conferences are a fantastic place to meet people, learn new results, and disseminate your own work quickly to the rest of the field. But no one should trust any theorem in a conference paper, precisely because conference papers are not carefully refereed in any meaningful sense.
TCS is not like other branches of CS in this respect, and we need to start indoctrinating our students with the philosophy that everything significant should be sent to a journal. In the last two months alone, I have come across two papers (in recent STOC/FOCS/SODA proceedings) which are completely wrong, essentially unfixable, and for which the authors acknowledge the fatal mistake. These errors would have been found during any reasonable journal review.
Can you list the two papers, so that the rest of us who stumble upon them need not go through (at least) the frustration that you must have gone through? (Also, maybe posting them here would incite people to be more careful about the work they submit and would expose the two committees. It is, in my opinion, shameful for a committee to accept a wrong paper.)
anonymous 1:01 responding to anonymous 1:14:
I trust that the authors of these papers will eventually disclose the errors, and as long as they do so in a reasonable amount of time, I don't feel that it is my place to expose them.
It is, in my opinion, shameful for a committee to accept a wrong paper.
You have clearly never served on a program committee then. It is impossible for a single person to carefully check even 20 papers in a few months, and most committee members are responsible for many, many more.
I also think there is a problem with attaching best paper awards to conferences (as opposed to, say, a body like SIGACT). I remember as a grad student thinking "Yeah, I would have won the best student paper award in 2nd-tier conference ____, but I chose to publish in FOCS!"
Despite the complaints people have about conference PCs, I note that both Vardi and Church complain about conferences that are much larger and accept a lower percentage than STOC/FOCS (and suggest that the field does pretty well with conferences of the size and selectivity of STOC/FOCS).
Vardi: I think we are very good at selecting about one third to one fourth of the papers; we do a pretty good job there. As we become more selective, the program committee gets larger, the whole discussion gets more balkanized, and the whole process gets more random. Nobody gets a global view. … Conferences are not scalable. They work nicely with up to roughly 300 submissions and a certain size of program committee. When you try to scale it up, very good papers get lost. It becomes more political.
Church's simple model for improved recall (i.e., getting all the good stuff) shows a graph with a steep slope for improvement in recall up to a 33%-40% acceptance rate, but he goes on to say: In any case acceptance rates should never be allowed to fall below 20%. Even though I cannot use the above model to justify the magic threshold of 20%, it has been my experience that whenever acceptance rates fall below that magic threshold, it becomes too obvious to too many people just how low the recall is. Magic thresholds like 20% change the tone of grumbling in the halls from an ugly swap meet ... exchanging tales ... about inappropriate rejections.
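To make the recall argument concrete, here is a minimal simulation sketch in Python. This is my own toy model of noisy reviewing, not Church's actual model: every paper has a true quality, reviewers observe it with Gaussian noise, the conference accepts the top-scoring fraction, and recall is the fraction of the truly best papers that get accepted. All parameters are invented.

import random

def recall(acceptance_rate, n_papers=1000, n_best=100, noise=1.0, trials=50, seed=0):
    """Toy model: fraction of the truly best n_best papers that are accepted
    when the top acceptance_rate fraction of noisy review scores is taken."""
    rng = random.Random(seed)
    n_accept = int(acceptance_rate * n_papers)
    total = 0.0
    for _ in range(trials):
        quality = [rng.gauss(0, 1) for _ in range(n_papers)]    # true merit
        score = [q + rng.gauss(0, noise) for q in quality]      # noisy review score
        best = set(sorted(range(n_papers), key=lambda i: -quality[i])[:n_best])
        accepted = set(sorted(range(n_papers), key=lambda i: -score[i])[:n_accept])
        total += len(best & accepted) / n_best
    return total / trials

for rate in (0.10, 0.20, 0.33, 0.50):
    print(f"acceptance rate {rate:.0%}: recall ~ {recall(rate):.2f}")

In this toy model, recall rises quickly as the acceptance rate grows from 10% toward the one-third range, with diminishing returns after that. The specific numbers mean nothing, but the shape matches the qualitative point Vardi and Church are making.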
How about the comment from Church that the "Page ranking" paper (of Google fame) was rejected from SIGIR? Quite possibly the most influential paper in IR of the last two decades or so (along with "Authoritative sources" by J. Kleinberg), and it gets rejected.
I would like to think that if the theory community ever erred that badly, immediate reforms to conference reviewing would follow.
In the last two months alone, I have come across two papers ... which are completely wrong, essentially unfixable, and for which the authors acknowledge the fatal mistake.
Interesting. Unfortunately, in contrast to Science/Nature/etc., we don't seem to have any established method for issuing corrections. (I know of only two corrections to published papers, and they both involve Lance.) Does STOC/FOCS publish retractions/corrections of results? They should.
I have seen errata issued for both STOC and SODA, but certainly not in numbers commensurate with the number of bugs that I've heard about. I don't know if it's the conferences or the authors who are wary about publicizing mistakes.
While agreeing with a lot of what Vardi says, I feel that Citeseer and Google Scholar are poor measures for importance and quality of a paper.
There is no single metric that determines the quality of a paper (in fact, some time ago there was a long discussion on this blog about what paper quality really is). A large number of citations does not make a paper great (and neither does acceptance to a prestigious conference, for that matter). There are many counterexamples to both theories.
That said, I think that if one acknowledges the natural caveats and limitations of citation indices, they can be pretty useful tools, even though they are not error-free. For example:
1) A large number of citations is a reasonable indicator of a paper's "high impact". Although one can argue that this is an important aspect of paper quality, it does not tell the whole story.
In addition, citations measure only "positive" impact. If a paper closes the last important problem in a field and people move on, it is probably not going to be widely cited. Still, it is an important paper that had an impact, a "negative" one.
2) Different areas/conferences have different citation patterns. They depend on area size, conference size, citation habits (people cite everyone/no one/in between), etc. Comparing apples and oranges might not give meaningful results (a toy normalization sketch follows this comment).
Altogether, I don't think we should make a choice between the "dictatorship of prestigious conferences" and the "dictatorship of citation indices". Both have strengths and limitations and both offer different and useful perspectives on paper quality. But we still need to use our brains, which is probably a good thing.
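As a toy illustration of point 2, here is a short Python sketch with entirely invented numbers: comparing raw citation counts across areas conflates a paper's impact with the citation habits of its area, and even a crude normalization by the area's typical citation count can flip the comparison.

# Invented numbers for illustration only.
papers = [
    {"title": "paper A", "area": "databases", "citations": 120},
    {"title": "paper B", "area": "theory",    "citations": 45},
]
# Hypothetical "typical" (median) citation count of a paper in each area.
area_median = {"databases": 60, "theory": 15}

for p in papers:
    ratio = p["citations"] / area_median[p["area"]]
    print(f"{p['title']}: {p['citations']} citations, {ratio:.1f}x the {p['area']} median")
# Raw counts favor paper A (120 vs 45); relative to each area's habits,
# paper B stands out more (3.0x vs 2.0x the median).

This is only a crude correction, of course; it tempers one incomparability while leaving the other caveats above untouched, which is why we still need to use our brains.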