Monday, March 02, 2009

You Can't Separate the Art from the Artist

Sorelle blogged about double-blind reviewing, the practice of removing authors' names from papers during the review process. Michael and Suresh picked up the topic, so I will also add my two cents, supporting Sorelle's point 6, that the authors should be taken into account (a point Sorelle herself doesn't agree with). My views are only worth two cents: the best papers will nearly always be accepted and the worst rejected, so we are only tweaking the edges here. 

Certainly we have biases toward authors, and on the committees I have participated in we are quite conscious of the authors and often bring them up when discussing the merits and demerits of each paper. And we should. For one, an author's name is a measure of trust. Take a lengthy, detailed proof about PCPs, for instance. You should trust the proof more if it comes from someone like Johan Hastad or Irit Dinur, who have histories of handling long technical proofs, than from someone who is not as detail-focused (like me). But it is much more than that.

A good paper is more than just a theorem and a proof. We don't read a paper just to see if a certain theorem is true and if the proof is logically valid; that's just a mechanical check. No, we enjoy the paper because the result and proof have some beauty to them, a clever new idea or a different way of looking at things. A good paper is a work of art.

Art is not judged in isolation. You won't read a book or movie review that doesn't connect the work to the previous output of the author or director. The price of a painting is greatly affected by the artist who painted it. You choose which symphony or opera to attend more for the composer than for the particular piece. 

And so it should also be with mathematical papers. I'm not saying we should accept papers based solely on the authors. No, you must judge the particular work, but you must judge it in its full context, and that includes how the paper connects to the authors' earlier work, how it fits in with related work and what kind of theme it follows.

Double-blind reviewing removes critical information, taking away the soul of the paper, the creative forces behind it. History will judge a person's research career through his or her publications so we shouldn't ask that the PC make decisions ignoring the context given from knowing the authors.


73 comments:

  1. I'm not convinced of the benefits of having papers anonymized (really, I don't think papers with female authors are getting higher chances in TCS with a blind refereeing process, sorry).

    The thing is that most communities are small. I can most likely tell the author in my community anyway, without the author trying *very hard* to sound like someone else (which would probably not benefit the readability of the paper).

    ReplyDelete
  2. Double blind reviewing is the hallmark of shallow "engineering" type disciplines, where pursuit of fame and power, rather than the advancement of science, is the driving force. No serious scientific journal has double blind reviewing, and theoretical CS, being a mathematical discipline, should not have it either.

    ReplyDelete
  3. In general, my limited experience (as a grad student) is that the aesthetics of math are important to how people view results and papers, and even how they make refereeing decisions, even though it is frequently not explicitly mentioned as such. I think making the balance between aesthetic and proof a bit more overt could be helpful to young grad students.

    That having been said, it feels like you "should" in principle be able to separate the art from the artist in these cases, but I would defer to people with more experience refereeing on that point.

    ReplyDelete
  4. Anonymous 1: The thing is that most communities are small. I can most likely tell the author in my community anyway

    This is most true when the author is already a well-established member of the community. Such people may still have an advantage if/when you recognize their work, but that doesn't mean bias isn't lessened in other cases.

    Anonymous 2: Double blind reviewing is the hallmark of shallow "engineering" type disciplines, where pursuit of fame and power, rather the advancement of science, is the driving force.

    That sounds a little pretentious. Double blind systems are also the hallmark of repeatable and accurate scientific research, even if it isn't always applied in the review process itself. Making some vague association between double blind reviewing and "pursuit of fame and power" seems kind of a stretch. And saying "no one in the community does it" isn't a reason that no one should ever do it in the future.

    As for the main topic, I'd stop short of a wholesale endorsement of double blind refereeing (because I'm not confident enough that I understand what all the results would be, though I think some of them would be positive), but I disagree with this post pretty strongly. It seems to be conflating the reading of a paper with the referee process. E.g.:

    A good paper is more than just a theorem and proof. We don't read a paper just to see if a certain theorem is true and if the proof is logically valid.

    No, we might not read a paper for that purpose, but it's one of the main jobs of a referee (along with deciding whether the logically valid proofs are important enough to be published).

    The price of a painting is greatly affected by the artist who painted it ... And so it should also be with mathematical papers.

    Is this really true? Certainly as a reader, I have certain expectations about the value of a paper if I know a lot about the author. But isn't that the point? While it might be a useful heuristic for me as an individual to say "Ah! An '02 Fortnow -- quite a clever vintage", when as a community we are deciding whether something should be published at all, we should be able to look at the art without the artist. Just like how the number of females in top orchestras went up when they implemented blind auditions, the primary effect here wouldn't be on the acknowledged "great artists" but on the people who have not yet been accepted into the community, and might face (possibly unconscious) referee feelings along the lines of "Well, it's an okay result... but I've never heard of these people, maybe it isn't really that fundamental...."

    ReplyDelete
  5. One of the uses of author information for borderline papers on PCs I have been on has been: "This paper isn't really <well-known author>'s best work. They've already had one or two papers accepted. Better to give <not-so-well-known-authors> a chance to present their work instead."

    On the other hand the reaction can also be "<Well-known-authors> think that this problem is important. Maybe I should, too."

    Both of these seem legitimate.

    ReplyDelete
  6. I have to agree with a lot of what David said. Lance's post makes some good points at the beginning, and I think some odd points at the end.

    First, Lance makes a good point about trust. If there is a highly involved proof, and there is not enough time to check it completely, then certainly you would trust a paper by Hastad more than one by a random set of authors. OK.

    Second, yes, beauty is important.

    But what has beauty got to do with previous work by the same author? You mean we should judge a paper differently if it is part of a stream of research by the same author than if it is a single piece by someone usually working on other problems? I strongly disagree! I do not see how the symphony example applies. An opera-goer is more like a reader deciding which paper to read afterwards, which may often depend on author names, but the committee has to do a much more careful job.

    I am not really in favor of double-blind, though I can live with it. But I think Lance overreaches a bit in his arguments here.

    One issue I haven't seen addressed: to what degree should a conference committee judge the quality of the paper versus the likely quality of the talk? If we know that X gives horrendous talks, should that make a difference?

    ReplyDelete
  7. You know, the more arguments the people against double-blind reviewing give, the more I end up being in favor of it.

    Lance, I'm certainly understanding of the idea that "trust" can come into play with an author's name, as I've posted myself. But the rest of your argument goes off the deep end. I'd like to assume the post was a joke of some sort, but I'm assuming it's not.

    While a good paper may have "artistic" qualities, the analogy of judging a paper in light of an author's history is bogus. Sure, if you're a Joss Whedon fan, you should check out when Dollhouse comes on -- at worst you're only hurting yourself. But that doesn't mean that when it comes to judging this season's new shows, you give the show a free pass just because Firefly was so awesome (and so clearly underrated at the time). That just hurts the other shows that need viewers to find them. The PC judges the papers that are in front of it, not the papers it wishes it had, nor the careers of the authors. How any paper relates to all previous work in an area should obviously be taken into account; how a particular paper fits in with the other papers of the author's career is not the issue at any PC meeting I've been to. Nor would I want it to be.

    It's this type of thinking that leads to big names getting free passes on papers that probably don't deserve it at conferences. At the very least, it's related to the line of thought that says, "We've accepted papers in the past on this problem, so we should accept this one also, because the authors have found some way to claim it's at least epsilon-better than the previous papers."

    Really, if you want to make the case against (or for) double-blind reviewing, an honest clarification of costs and benefits is the way to go. Claiming that we're artists as a way to make a point just suggests to me that you don't have any actual good arguments to give.

    ReplyDelete
  8. The only concrete advantage that I have read from supporters of non-blind reviewing seems to be that one can "trust" famous authors more than unknown authors; presumably "trust" equates to trust that the proof does not contain an error.

    Is this really the main problem facing program committees? Are there lots of fantastic-looking submitted papers with important results, but some large percentage of them will inevitably turn out to contain logical errors invalidating the proofs? I'm not talking about the daily crackpot submissions to CoRR claiming to solve P vs. NP; I'm talking about papers that, subject to a month of conference-level reviewing, are not obviously so bad as those papers.

    I suspect the answer is no. If a paper appears to solve an important problem, and the proof is detailed enough that a conference reviewer's pass over it detects no obvious flaws, why not accept it? Sure, it could be wrong. So could papers from trusted authors. Do we really think the likelihood of the former (conditioned on the paper solving an important, well-motivated problem, being well-written, and having no obvious high-level flaws) is so much greater than that of the latter that it justifies giving such a large advantage, all else being equal, to an established author?

    If anything, to avoid inbreeding and groupthink, I think a slight advantage should be given to unknown authors, all else being equal. Of course, all else is rarely equal, but the advantages *or* disadvantages of double-blind reviewing do not show up except in such edge cases where all else is equal.

    ReplyDelete
  9. [From the anonymous directly above Mike]

    Mike makes great points here! Even though I was not really in favor of double-blind, maybe I should be.

    It seems that people opposed to double-blind are of two types: (1) people (like me) who hope that double-blind is not necessary because by and large the outcome will be the same (with the exception of rare cases of doubtful theorems where trust may make a difference) and double-blind can be a little cumbersome, and (2) people (like Lance?) who think that the results should not be the same and want to take names and career into account more broadly.

    But if you are of type (1) above, you really need to be in favor of double-blind as long as people of type (2) above serve on committees.

    ReplyDelete
  10. It seems that virtually everyone agrees that the strict enforcement of double-blind reviewing (you can't publicize your work in any way) would be counter-productive. However, people have only mentioned "posting on one's personal web page and giving talks" as what should be allowed.

    Even a restriction that one can't post a copy of the paper on the arXiv or ECCC would be bad for our field and would slow its progress. Since there are nearly 6 months between STOC and FOCS submission deadlines, that can mean a nearly 6-month delay. This would be even worse because of our prohibition on PC member submission, which could delay things another 4 to 6 months. I think it is clear that authors should be almost completely unfettered in publicizing their work - the only restriction being that of no prior journal or conference publication.

    With double-blind reviewing I would guess that many authors who have in the past relied on the knowledge of the PC members as their time-stamp for priority would now be more inclined to post their papers on the arXiv and ECCC right away.
    For the recent STOC PC, for example, because of meetings I had attended and people I had talked to, I knew about a third of all the papers that I ended up reviewing from sources other than the conference submission. That would be likely to increase, not decrease.

    I suggest the following scheme for those considering double-blind reviewing: Authors have the OPTION of submitting anonymized manuscripts. Any PC member who wishes to ignore the authorship of any non-anonymized manuscript is free to do so. This would achieve essentially the same state as the only version of double-blind reviewing that makes sense.

    ReplyDelete
  11. Web 2.0 meets the U.S. Patent and Trademark Office

    http://blogs.zdnet.com/BTL/?p=4596

    ReplyDelete
    "I suggest the following scheme for those considering double-blind reviewing: Authors have the OPTION of submitting anonymized manuscripts. Any PC member who wishes to ignore the authorship of any non-anonymized manuscript is free to do so. This would achieve essentially the same state as the only version of double-blind reviewing that makes sense."

    I don't think this suggestion makes sense. Authors would already have the option of submitting something non-anonymously, but it would have to be public, i.e. the only way to give away the name is to post on webpage, arxiv, give talk, etc.

    On the PCs that I've been on, the bias is with respect to the borderline papers that are mostly not public. Letting people "use their name" for these submissions goes against the whole point of having blind submissions.

    Also, if the names don't appear on the submissions, maybe that will help the reviewers concentrate on the contents.

    ReplyDelete
  13. Paul, how about we try the simplest version of anonymization: You can still give talks, put the paper on arXiv, etc. But when you submit it to FOCS/STOC/SODA/... you leave the author names blank.

    Sure, people can search for the title and associate your name with it. But would you as the reviewer do that? If you would, then why? And if you wouldn't, removing the authors' names from the submission is almost no work on your part.

    -ro

    ReplyDelete
  14. If we leave the authors names blank, that lets the reviewer review an anonymous submission if they wish. If the name is not blank, the reviewer does not have this choice (if there is a choice--that is, if the names have been made public somewhere).

    I do not see why the reviewer should not be allowed to review the submission anonymously if they wish.

    ReplyDelete
  15. The only concrete advantage that I have read from supporters of non-blind reviewing seems to be that one can "trust" famous authors more than unknown authors; presumably "trust" equates to trust that the proof does not contain an error.


    No, that's not it. Trusting in correctness is sometimes relevant (some authors make a lot more mistakes than others), but importance is the key issue here. The importance of a paper is tough to judge in advance, because much of its importance comes from where it will lead in the future. Some people have a great track record of coming up with ideas that turn out to be far more important than the reviewers realize at the time. Other people don't. Knowing who the author is lets you do a better job of judging the likely future impact of the ideas and thus the importance of the paper.

    To use an analogy from statistics, sometimes the estimator with the lowest mean squared error is not an unbiased estimator. There's a genuine trade-off here, and even if double-blind reviewing helps reduce bias (which is probably a pretty small effect due to the ease of guessing identities) it does some harm too. It's not clear how best to balance things, and anyone who thinks their favorite solution is perfect is being silly.
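    The trade-off named above is real: a biased estimator can beat an unbiased one on mean squared error. A minimal simulation (my own illustrative numbers, not anything from this thread) shrinks the sample mean of a normal distribution toward zero and compares empirical MSEs:

    ```python
    import random

    # Hypothetical setup: estimate the mean mu of N(mu, sigma^2) from n samples.
    random.seed(0)
    mu, sigma, n, trials = 0.5, 1.0, 5, 100_000

    # Shrinkage factor minimizing MSE of c * xbar (assumes mu is known here,
    # purely to illustrate that such a biased estimator can exist).
    shrink = mu**2 / (mu**2 + sigma**2 / n)

    sq_err_unbiased = 0.0
    sq_err_shrunk = 0.0
    for _ in range(trials):
        xbar = sum(random.gauss(mu, sigma) for _ in range(n)) / n
        sq_err_unbiased += (xbar - mu) ** 2            # unbiased sample mean
        sq_err_shrunk += (shrink * xbar - mu) ** 2     # biased, shrunk toward 0

    mse_unbiased = sq_err_unbiased / trials  # theory: sigma^2/n = 0.20
    mse_shrunk = sq_err_shrunk / trials      # theory: ~0.11, despite the bias
    ```

    The shrunk estimator is systematically biased toward zero, yet its empirical MSE comes out well below the unbiased sample mean's, which is the sense in which "lowest mean squared error" and "unbiased" can pull in different directions.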

    Sure, people can search for the title and associate your name with it. But would you as the reviewer do that?


    Yes, I do it every year with Crypto papers.

    If you would, then why?

    Partly idle curiosity, partly to see whether I can successfully guess who the authors are, partly because I believe it's valuable information for judging the paper. Given that the information is out there publicly, I don't feel bad about searching for it.

    ReplyDelete
  16. So we are agreed on:

    Authors should be almost completely unfettered in publicizing their work - the only restriction being that of no prior journal or conference publication.

    I read feeds on the arXiv and ECCC anyway, so as a PC member it would not be going far out of my way to learn the author information about papers that are posted in such places.

    In fact, as a PC member I certainly would try to take advantage of such information. My reasons are somewhat different from Lance's stated reasons. Personally, I simply think that PC members should have as full information as possible in making decisions. This includes information about the papers, but also about the biases and conflicts of interest of other PC members and sub-reviewers. Having information that I know exists deliberately hidden from me bothers me a lot and makes me much less capable of concentrating on the task at hand. (Michael can attest to this from the recent STOC PC.) Another reason is simple: after years of reading author lists in citations and mentally recording papers based on author lists and keywords, that's how my mental filing scheme works.

    Given this, why add such a burden to my work as a PC member, given the hundreds of hours I am already spending on the job? As an author, if you have already publicized your work, don't go through the pretense that this hasn't happened and add to my workload by anonymizing your paper (or worse, by requiring that other authors anonymize theirs).

    ReplyDelete
  17. Wow, so the 6:39 Anonymous also believes that authorship should have a big influence on acceptance.

    I find this pretty disturbing. What if this is something of a self-perpetuating loop? Authors who are "better" get their papers into "better" conferences and therefore get more citations, and then people think they made the right choices in retrospect. I've noticed, for example, that not only do my papers in "worse" conferences get cited less, they are also much more likely to get cited incorrectly (i.e., the citation does not accurately represent their contribution) or not get cited when they should be, whereas my papers in "better" conferences often get cited when it is not totally necessary (i.e., "examples of such a technique are given in a, b, c, d").

    ReplyDelete
  18. "The importance of a paper is tough to judge in advance, because much of its importance comes from where it will lead in the future."

    This appears to be an indictment of our competitive conference system rather than an argument that we should just stick to accepting papers by famous people.

    Why do you support this pretense of making good decisions when you are saying that it cannot be done?

    ReplyDelete
    What if this is something of a self-perpetuating loop? Authors who are "better" get their papers into "better" conferences and therefore get more citations, and then people think they made the right choices in retrospect.

    You think this doesn't happen already? Particularly in the FOCS/STOC community?

    ReplyDelete
  20. Yes!! It already happens!! That is why we need DBR!!

    ReplyDelete
  21. I agree with Michael. I started reading this discussion firmly against double-blind reviewing based on the restrictions it might impose on disseminating research, but having read the arguments of people against double-blind reviewing I now think we should at least give it a try.

    Many of the best arguments against double-blind reviewing, e.g. the difficulty of finding sub-reviewers, have largely been ignored in favor of cheap shots dismissing it as the mark of "shallow" disciplines, and reviewers unabashedly arguing that they should be biased towards well-known authors.



    Double-blind reviewing removes critical information, taking away the soul of the paper, the creative forces behind it. History will judge a person's research career through his or her publications so we shouldn't ask that the PC make decisions ignoring the context given from knowing the authors.

    I'm probably reading way too much into this, but the more I think about this paragraph the more it seems to boil down to implying that PCs should consider how much a paper will advance the career of its (presumably well-known) authors.

    ReplyDelete
  22. Some people have a great track record of coming up with ideas that turn out to be far more important than the reviewers realize at the time. Other people don't. Knowing who the author is lets you do a better job of judging the likely future impact of the ideas and thus the importance of the paper.

    Strictly interpreted, this also means that new people (ahem, grad students) can't ever publish new ideas. And that people who didn't publish new ideas as grad students don't ever get to publish them. I know you don't actually mean this to be the strict rule, but I agree that there's a self-perpetuating loop here that's counter to actually getting new ideas out there.

    ReplyDelete
  23. I think this debate about double-blind reviewing is symptomatic of a larger problem -- namely, the dissatisfaction of many people in the community with the decisions taken by the STOC/FOCS program committees. I think this dissatisfaction stems from a lack of confidence in the PC. In most areas of science and mathematics, decisions regarding the selection of papers for the most prestigious venues are usually taken by the unquestioned leaders of the field (such as the editorial boards of JAMS or Inventiones, etc.). In theoretical CS this is not remotely true -- and even the chair of the PC might not be held in respect by all or even a majority of the members of the community. This is not to downplay the amount of work and the sincere effort put in by the members of the PC as well as its chair, especially given the fact that many of them are young researchers. However, it might be good to try to raise the number of truly distinguished people serving as PC chairs and/or members -- this might alleviate somewhat the concerns regarding the quality of the decisions taken.

    ReplyDelete
  24. Anonymous #23: Possibly. From my experience, I can tell you that getting senior people to serve on a PC is much more difficult than getting less senior people on the PC. (I thank all the people on my committee -- and especially the senior members -- for their willingness to serve!) I'm hesitant to think we could get that many senior people on a committee regularly.

    Also, I think you're wrong; I think what is interpreted as poor decision-making by PCs has much more to do with the way the theory community has become compartmentalized into many tiny sub-disciplines than PC makeup. There really aren't as many field-spanning papers out there as one might want; naturally, there are many papers of limited interest. Even with a very senior committee, that fact isn't going to change. (Perhaps having a different type of committee -- with say two or three times as many members, and a smaller subcommittee to make final judgments -- might be a better way to go?)

    Finally, understanding that my assumption (based on many of these discussions we've been having) is that many senior people staunchly defend the right to be biased in favor of their colleagues (without implying judgment here, really!), I think that you might find concerns regarding quality, and in particular biased judgments, might well arise with a PC filled with senior people. Perhaps the junior people would then simply revolt and form their own conference. :)

    ReplyDelete
  25. As the previous commenter said, there seems to be substantial unhappiness, especially among the younger folk, about the fairness of the process. I think it is hard for many who have not served on PCs to understand that papers by very famous people get rejected fairly routinely from STOC/FOCS. Another point is that people notice that their rejected paper is comparable to several borderline accepted papers and assume that it must be due to bias. Some biases are there, but I think their overall effect on the field is not major; of course it makes a big difference to an individual, especially a graduate student, whether their paper is accepted or not.

    The main issue is that the field has grown considerably and is very diverse, but the number of accepted papers is still kept to a certain number. We could alleviate some of the concerns by having more inclusive and less selective conferences, but that will be a difficult change to make in the CS world, which is used to selective conferences. Instead, people feel that the "problem" will go away by using DBR. I don't think so. Every highly competitive CS conference system, even with DBR, has been accused of being biased towards established players. SIGCOMM and SIGGRAPH come to mind.

    ReplyDelete
  26. Trusting in correctness is sometimes relevant (some authors make a lot more mistakes than others), but importance is the key issue here. The importance of a paper is tough to judge in advance, because much of its importance comes from where it will lead in the future. Some people have a great track record of coming up with ideas that turn out to be far more important than the reviewers realize at the time. Other people don't. Knowing who the author is lets you do a better job of judging the likely future impact of the ideas and thus the importance of the paper.

    Maybe that means those authors with the great track record should be on the program committee, and the existing program committees that are supposedly so incapable of judging the importance of the actual ideas without knowing the authors, should not be given the job.

    I'm 100% serious. What is the point of having conferences where we supposedly trust program committees to do a reasonable, if imprecise and occasionally inaccurate, job of deciding the importance of papers? I argue that this is the only job of a program committee. If they can't do it without knowing the authors, and if there are not the resources to staff PC's each year with those who can, then we need to ditch the idea of deriving prestige from conferences.

    I'd like to see a list of all these papers whose supreme importance was not known until years later, but which were published nonetheless in light of the authors' pre-existing fame and thereby saved from the dustbin of obscurity. I'm not foolishly claiming that they aren't out there. But most of the famous papers I know of generated immediate excitement, spurring lots of new results within the next few years. I'll bet that if we went down the list of the 50 most influential TCS papers, we wouldn't find 10 that didn't start influencing things within half a decade, and I'll bet that at least half of those were by relative unknowns at the time and therefore would not have been saved by having their names on the submission.

    If I'm correct, then guarding against this threat of rejecting monumental papers (that initially appear too unimportant to publish) by famous authors by forcing authors to divulge their names is like stopping illegal Canadian immigration into the US by building a giant wall along the eastern border of Alaska. It's not that it won't serve some function, but it's attacking the wrong problem, badly, and likely killing a lot of caribou (grad students) in the process.

    ReplyDelete
  27. Also, I think you're wrong; I think what is interpreted as poor decision-making by PCs has much more to do with the way the theory community has become compartmentalized into many tiny sub-disciplines than PC makeup.

    I disagree on this point. I think that most of the unhappiness of rejected authors could be reduced (if not eliminated) if the rejection had the imprimatur of a distinguished panel of reviewers. In fact, it's my experience that the level of unhappiness is often higher in senior authors whose papers are rejected, precisely because they lack respect for the PC members.

    The solution, I agree, is not easy -- but the first step might be to have a standing committee of distinguished senior researchers who hand-pick the PC, set the rules and also ensure that the PC always contains a few of the standing committee members.

    I think restoring the faith in the good judgement of the PC is the most pressing requirement. I do not in particular resent any biases that PC committee members might have as long as I respect their scientific acumen. There is no such thing as objective evaluation of scientific research. It is not like grading papers. It is accepted that biases, fashions, tastes etc. enter in the evaluation of scientific papers and there is nothing particularly wrong in that.

    ReplyDelete
  28. Anon 27 says: It is not like grading papers. It is accepted that biases, fashions, tastes etc. enter in the evaluation of scientific papers and there is nothing particularly wrong in that.

    Where we seem to disagree is when those biases might seem to be stacked in favor of colleagues of people making the decisions, or even just generally in favor of "senior" people because, after all, they just know better.

    (In short, I'm saying I don't think what you say is generally accepted is, necessarily, generally accepted the way you think it is.)

    That's not to say that a committee of "distinguished" reviewers might not reduce unhappiness with decisions, which was your point. Again, I personally doubt it, but I'll admit neither of us has much in the way of evidence one way or another. I would say that my argument was also based on a cost-benefit analysis -- I don't think we can "cover the cost" of such a scheme in practice.

    ReplyDelete
  29. Michael, I watched the first three episodes of the Dollhouse, and it's really not nearly as good as Firefly or Buffy. Which is a fine illustration of your point re: anonymity. (Yay Firefly though!)

    ReplyDelete
  30. Everybody seems to agree that any PC will not miss great papers and will not accept garbage, so the issue is about borderline papers. DBR is definitely a good choice because a poor graduate student in Mississippi or India or China needs some encouragement, while a famous professor in Berkeley already has everything; a borderline paper will not help her much anyway.

  31. Anon #29: I really think we should hijack these double-blind threads and just discuss the Whedon oeuvre.

    I'm not sure what to make of Dollhouse yet. I often think Whedon's work takes some time to bake in, and that it's hard (especially in the first few episodes) to see the big allegorical picture and where things are going. I'd agree that so far it doesn't seem as strong as Buffy or Firefly, but it's hard to discount the benefit of hindsight here. I'm curious where the premise is going. (Much like Buffy had to be more than a "monster-of-the-week" show, Dollhouse needs to be more than an "identity-of-the-week" show -- and I see it trying to build that up already.)

    Personally, I just want a sequel to Dr. Horrible's Sing-Along-Blog.

  32. I have to admit the same surprise expressed by some other commenters: I did not expect to see such a backlash from established members of the community against DBR. While I originally would have thought it a minor problem only affecting borderline papers, the extent to which established members of the community with PC experience (and presumably, greater-than-average PC influence) oppose it, is precisely the extent to which I consider it a serious problem.

    Put briefly, if a PC member is willing to fight back, hard, for their right to see an author's name before making a decision about a paper, my interpretation is that they are placing far too much emphasis on the author. Either the author is a minor consideration, implying that DBR is at most a minor inconvenience for some, or the author is a major consideration, and the arguments in favor of DBR perforce apply that much more strongly: DBR is required to remove this too-strong influence.

  33. As the previous commenter said, there seems to be substantial unhappiness, especially among younger folk, about the fairness of the process.

    I also think this is the real issue, not double-blind reviewing, which is at best a way to try to address it. I see a bunch of problems:

    (1) A mediocre review process, in which all the year's submissions pile up at just a couple of key times and are evaluated in a rushed and shallow fashion. This is inherent in having just a couple of top conferences.

    (2) An inscrutable review process. Papers are often rejected with essentially no feedback or written reviews, which I find offensive (although understandable given the constraints on the system). If you get more than one page of feedback, which is what I often write for journal submissions, you might not agree but at least you know your paper was looked at seriously.

    (3) A steep prestige drop-off, where there's a big perceived difference between first and second-tier conferences (with for example certain bloggers deriding anything less than FOCS/STOC), combined with relatively low standards for the top conferences compared with the situation in other sciences. By contrast, many scientists consider Science/Nature to be the top journals, but there is much less disrespect for other journals, and Science/Nature are so exclusive that not even the very best researchers can publish primarily there (while a number of people publish primarily in FOCS/STOC).

    The net effect is that young researchers feel that FOCS/STOC acceptance really matters but no evidence is given that their papers have received a serious evaluation. They are right, and fixing this will require some real changes in the community.

  34. We could go to double-blind reviewing (DBR) and though I would find it extremely annoying, it would still not change what people really seem to be complaining about. In fact, no set of rule changes for conferences is going to "fix" it. It has nothing to do with who the authors are.

    Though I disagree with a lot of what Michael wrote, one part of what he wrote rings true. Over the years the field has become more compartmentalized and many more decisions rely on the expertise of a small minority of the PC. (Even if we enlarge the PC, the expertise will still be held by a similar proportion of the committee.) Different areas tend to get different kinds of reviews by people in the respective areas - some tend to be more positive than others by default. The committee as a whole is then faced with many "apples vs. oranges" decisions between such papers and reviews. Moreover, different areas have breakthroughs at different times so there is no reasonable way to pre-allocate space for different subject areas.

    Every PC faces this in different ways. More than a decade ago, when I was on an all-electronic STOC PC, it seemed that every decision was effectively ceded to the experts, and there was no method at all for even trying to make apples-vs.-oranges decisions as a group or to balance standards across areas. Each sub-area effectively set its own threshold and that was it. With the current format, PC members championing a paper have to argue its value to the committee as a whole and be challenged on it. This kind of challenging is particularly beneficial to the process, but the more compartmentalized the field becomes, the less confident PC members feel about challenging a champion or championing papers outside their immediate area of expertise.

    The growth in submission numbers has made this worse - with the roughly 45 papers on my plate for this recent STOC I read less than 1/7 of all submissions. In the recent past that would have been roughly 1/5 of the submissions. Because of overlaps in assignments, even the 3 or 4 reviewers for a given paper together will now have seen a small minority, maybe 1/3, of all papers.

    The bottom line is that there is a lot of arbitrariness at the boundary. Bigger conferences, or bigger conference PCs wouldn't help. No matter whether you draw the boundary at 25%, 33%, or even 50% of all submissions, with a different, equally competent PC probably 1/4 of all accepted papers could be swapped with an equal number of rejected papers. Because of this arbitrariness people impute all sorts of motives to the PC which are largely inaccurate.

    My experience is that being a well-known author of such a boundary paper doesn't help (and often hurts) except for one circumstance: when one or more PC members has seen a good talk on the submission by one of the paper's authors, a fact that is likely to help the paper and to occur more frequently if the author is well-known. That would not change with the version of DBR that has been proposed.

  35. On the PCs that I've been on, the bias is wrt the borderline papers that are mostly not public. Letting people "use their name" for these submissions goes against the whole point of having blind submissions.

    This is a reasonable point. It has not been my experience so it did not occur to me. Then how about the following compromise version of DBR?

    Authors who have made their work public have the option of anonymizing. (I would hope that they don't.) Those who have not made their work public are required to anonymize their papers.

  36. "The price of a painting is greatly affected by the artist who painted it": this is a reality, but do you really think it's good? That the same painting's price changes drastically when it is revealed that it has been painted, not by a master, but by one of his students?

  37. This is RIDICULOUS!!!! MANY senior researchers who have already established themselves in their fields will be against double-blind reviewing, but ask MOST graduate students who suffer the brunt of prejudiced reviewing. We'll all be FOR double-blind. I think young researchers should be as much a part of the policy-making as senior researchers (as the same number of submissions, if not more, come from young researchers -- submissions, not necessarily accepts). However, the sad part is that senior researchers make these policy decisions, and to most of them double-blind is not beneficial, so this is a hard cycle to break. Very unfair.

  38. A lot of the comments on this thread aren't even worth reading. Let me tell you, not having double-blind reviewing is a huge deterrent for young researchers who are trying to enter a specific field. But perhaps a deterrent for others is what the already established people want?

  39. There is some irony, of course, that many of those arguing against anonymous submissions are herein hidden under the name "Anonymous." Though I'm not sure what this means, one way or the other!

  40. ask MOST graduate students who suffer the brunt of prejudiced reviewing.

    The only prejudice I have seen related to student papers is that committees tend to go out of their way and use affirmative action to promote student papers rather than the reverse.

  41. Lance: Given that this blog, on the whole, is pretty sound and has a history of solid posts, should we regard this specific post as automatically having credibility? Do you not see the flaw in this reasoning?

  42. I am perfectly happy to admit publicly that I am a graduate student, I submitted a paper to STOC 2009, it was rejected... and it deserved to be. That list of accepted papers is frikkin STRONG. Anyone who wrote a paper strictly better than those, and was snubbed because of some unscientific reason, is a better researcher than I am.

    This whole discussion of "edge case acceptances" seems wrongheaded to me. I view being a "prestigious researcher" as akin to being a grandmaster in chess. To achieve the title of grandmaster, you have to outperform a field of grandmasters, multiple times. The general idea is that you have to demonstrate you are in the 50th percentile or higher of existing grandmasters, not that you are as good as the "worst" current grandmaster. I see nothing negative about requiring unknown TCS talent to write papers that are strictly better than papers written by known names, if the unknown people want to break into the most prestigious echelons of the field.

    Besides, what is the material, economic effect of not getting into STOC or FOCS? Are the "bloggers" who deride any other conference likely to be on the hiring committee that is considering you? As a practical matter, it might be better to publish in both theory and systems conferences, ignoring STOC and FOCS, if marketability to a faculty search is your issue.

    A lot of the comments have a different underlying flavor to me: anger about being shut out of having a certain reputation, of not being able to stand on the same podium as a Goedel Prize winner, or a Nevanlinna Prize winner. I don't feel sympathy for such (unspoken) concerns. Submit somewhere else. That's what I did with my STOC rejection, after making the changes the referees suggested. There's plenty of good work being published in other venues.

    Regarding discrimination against women: I don't see the problem in distributed computing or the theory of self-assembly. Luminaries include the longstanding (Nancy Lynch) and the young-but-influential (Radhika Nagpal).

    Ultimately, I'll either succeed in publishing work that matters to me, or I'll do something else with my life. It's not as though writing TCS papers is the only way to contribute to the future of the world.

  43. I said this on Sorelle's blog but I'll say it again here: A compromise version of DBR that seems to achieve the best of both sides is to have papers reviewed anonymously but then discussed non-anonymously.

  44. Jonathan Katz: Can you explain why using the names in the discussion phase would make a difference?

  45. I have been following this discussion with great interest. I freely admit, however, that I have always been somewhat sceptical about double-blind reviewing and I share many of the opinions expressed by Paul Beame in his comments. For what it is worth, let me add a couple of remarks.

    In several of the PCs in which I have served, junior researchers have been favoured over more established ones when both had borderline submissions with roughly similar evaluations. I have also seen papers authored by researchers from countries that are not typically considered hotbeds of TCS research treated preferentially over papers from Europe or North America, say. This would be impossible with double-blind reviewing and goes against the bias mentioned in several comments.

    I would encourage anyone to make their latest papers available on the web as soon as they have written them. This is a good service to science and to the community---as well as to the scientific reputation of the author. With so much information available with a few simple web searches, I do not think that double-blind reviewing would have any chance to be really double blind.

    One of the early anonymous commenters wrote:

    One issue I haven't seen addressed: to what degree should a conference committee judge the quality of the paper versus the likely quality of the talk? If we know that X gives horrendous talks, should that make a difference?

    In borderline cases, the expected quality of the presentation at a conference does play a role. Is this reasonable? I'd say it is. The role of the PC for a conference is to select a good programme for the conference. All other things being comparable, it is in the interest of the conference to make sure that its attendees are treated to the best possible talks. A conference that develops a reputation for showcasing lots of badly delivered talks won't have a long life span. Such a policy is also a powerful incentive to eradicate bad talks and to spur each one of us to improve her/his expository skills.

    Finally, let me recount a story from a blog post I wrote over a couple of years ago, which was inspired by a discussion with a colleague.

    A computer scientist is preparing a submission to a conference that has double blind refereeing. He is just about the only person in the world doing this flavour of topic T. Should he cite his own work in the submission?

    This colleague realized that, by citing his own work, he would make his identity known to the reviewers, and was worried that this might influence them. I told him that a reviewer would be able to infer the name of the author from the paper anyway, because he is really the only possible author of that paper. (In general, in TCS the set of possible authors of a paper is not very large, and, using tell-tale stylistic or notational usages, a reviewer of an authorless paper is often able to determine the author(s) with a rather high degree of accuracy.)

    Consider also the following catch-22 situation. A reviewer of the paper by our colleague could reject it because the anonymous author X is neither citing the closely related work of author X nor comparing his work with that of author X! It would be just great to have one's paper rejected for not citing one's own work, wouldn't it?

    I think that the author of a paper should put her/his efforts in explaining the subject matter in the best possible way and in achieving a high degree of scholarship. This is very hard at the best of times, even without trying very hard to build a smoke screen to hide one's identity.

  46. Jonathan Katz says:
    I said this on Sorelle's blog but I'll say it again here: A compromise version of DBR that seems to achieve the best of both sides is to have papers reviewed anonymously but then discussed non-anonymously.

    A while ago I liked that idea but after talking to someone who has served on a PC recently I no longer like that compromise. The problem is that bias is more likely to appear in the discussion phase than the review phase. Sub-reviewers usually read the papers they're assigned, so they have plenty of information to base a review on. PC members have less information and are therefore likely to pay more attention to their prior (a.k.a. bias).

    BTW, I suggested an experiment to test how much bias there is in our field in a comment on MM's post (http://mybiasedcoin.blogspot.com/2009/03/double-blind-reviewing.html).

  47. Luca is completely right.

    I had a colleague X, who had a paper rejected from a DBR conference, with the stated reason that he had not properly cited his own work.

    And there is another problem:

    With DBR, catching plagiarism is extremely difficult. Is this paper yet another paper from researcher X and he/she self-copied a little, or is it blatant plagiarism? (I have come across 3 plagiarized papers in my years of reviewing -- all have been rejected -- but it would have been much harder to catch with DBR. Except in the case where it was me being plagiarized. I caught that one.)

  48. Why can't people just cite themselves *when necessary*? If the main question you are addressing is an improvement on your previous work, obviously you have to cite yourself. In this case, not citing gives away your identity.

    We *can* make DBR rules that are not stupid and don't help give away the authors' identities!

  49. One thing is sure: the review process in FOCS/STOC/SODA is broken.
    It encourages anything but advancing the field!!
    Reputation is much more important than the quality of the papers. That's really unfortunate!!

  50. Agreed. Definitely SODA/STOC/FOCS and other conferences will become more meaningful with DBR. I see several people arguing as to whether DBR will truly be DBR or not. That is a totally different point. Even if DBR is not truly "double-blind", it is still better than not having DBR at all.

  51. STOC/FOCS/SODA are broken on so many different counts, let's not even get started. I have had two different reviews of the same submission. 1st reviewer: "The nice aspect of this paper is that the proofs are simple. However, the presentation of the paper can be improved significantly." 2nd reviewer: "The paper is very well written. The only disadvantage is that the proofs are very simple."

  52. A lot of the comments have a different underlying flavor to me: anger about being shut out of having a certain reputation, of not being able to stand on the same podium as a Goedel Prize winner, or a Nevanlinna Prize winner.

    Another one of the canards usually brought up when discussing improvements to the current system: it's all sour grapes.

    Are the "bloggers" who deride any other conference likely to be on the hiring committee that is considering you?

    Actually yes, and in your grant awarding committees as well.

  53. I wonder if what Aaron said is interesting. I'll never know since I only read opinions of people who prove their mettle by having an NSF grant larger than mine.

    What could be wrong with that?

  54. For those who claim to disagree completely with Lance, answer this: would you be any more likely to read a paper on the arXiv claiming P \neq NP if the author were Fortnow than you would if the author were someone you never heard of? Do you think a reviewer should not spend more time reading the former than the latter?

  55. For those who claim to disagree completely with Lance, answer this...

    People who are against DBR keep on giving isolated examples where DBR would be bad. A single example in no way refutes DBR.

    I can come up with a similar example where DBR would be great: a well-known crank who has submitted dozens of incorrect P=NP proofs finally gets it right. Under SBR the paper doesn't even get read; under DBR the proof is accepted and the world becomes a better place.

    As you can see, to refute DBR, what needs to be shown is that the system as a whole would be worse off.

  56. As a member of the networking community I thought I'd share a bit of an outsider's perspective.

    No, the system we have for conferences like Sigcomm isn't perfect. But double-blind reviewing has two effects. 1. When I'm reviewing a paper I'd say at least 75% of the time I have zero idea who wrote the paper (another 20% of the time I have only a vague guess), so the paper's authorship can't affect the outcome. 2. The blind nature of the review serves as a subtle reminder to me as the reviewer that the task at hand is a sacred oath and that it should be performed with the utmost care and impartiality.

    Yes, we do get complaints of ingroup bias, but I'd argue that the bias we show is towards particular types of work rather than particular authors. Everyone can submit papers on topics similar to those of, say, Ion Stoica. Not everyone can change their name to Ion Stoica. So bias towards certain types of work is far more egalitarian.

  57. People who are against DBR keep on giving isolated examples where DBR would be bad. A single example in no way refutes DBR.

    The "isolated" examples are actually patterns that occur some fraction of the time. For most papers (i.e., more than 50%), SBR vs. DBR should have no effect - it would be amazing and unsettling if a majority of decisions were changed. So we are automatically talking about a minority of all papers, and probably a fairly small minority. Dismissing these cases as "isolated" is tantamount to saying the whole issue doesn't matter, in which case we might as well stick with the system we've got.

    I can come up with a similar example where DBR would be great: a well-known crank who has submitted dozens of incorrect P=NP proofs finally gets it right. Under SBR the paper doesn't even get read; under DBR the proof is accepted and the world becomes a better place.

    I assume this is a joke. If someone discovered a credible proof of P=NP, do you really think they would keep it secret throughout the anonymous reviewing process? Even if they tried, I bet word would leak out.

    In any case, DBR won't help cranks. It's not so easy to write a paper that can blend in with professional work (it takes a lot of knowledge in the field as well as understanding of professional norms and customs), and I've never seen a crank do it.

    As you can see, to refute DBR, what needs to be shown is that the system as a whole would be worse off.

    You write as if there's a burden on opponents of DBR to refute it, but the situation is quite symmetrical: to refute SBR what needs to be shown is that it in fact makes the system as a whole worse off.

  58. Isn't it funny how so many opponents of double-blind reviewing are posting their comments anonymously? (A discussion on this thread is far less consequential than the submission of a paper to FOCS/STOC, yet they don't want to be held to account.)

  59. As a member of the networking community I thought I'd share a bit of an outsider's perspective.

    ...

    When I'm reviewing a paper I'd say at least 75% of the time I have zero idea who wrote the paper


    I suspect this has a lot to do with the size and breadth of the networking community. When I review Crypto papers, I often can't specify the exact list of authors (unless I happen to have read a preprint or seen a talk, which is not uncommon for the better papers). However, for a majority of papers I have a pretty good idea of who wrote them. In other words, I can identify at least one of the authors, or I can tell that X and/or X's students are involved, or I can narrow it down to one of a couple of people who sometimes collaborate. It's pretty rare for me to have absolutely no idea who wrote a paper, and I don't think it has ever happened (in 5+ years of reviewing) for a paper involving prestigious researchers or institutions.

  60. Suresh on his blog points to an essay by Kathryn McKinley advocating the benefits of DBR and explaining the best way to use DBR based on experience. The first commenter has pointed out that Kathryn's experience is that DBR is essentially incompatible with having individual PC members send out papers for expert subreviews. Expert subreviews are the lifeblood of all the major theory conference PCs. I can't imagine doing away with such sub-reviews - a significant portion of the value of the expertise of PC members is in finding the right experts for subreviews. Should we really do away with this and replace it with DBR?

  61. Isn't it funny how so many opponents of double-blind reviewing are posting their comments anonymously? (A discussion on this thread is far less consequential than the submission of a paper to FOCS/STOC, yet they don't want to be held to account.)

    What exactly would being "held to account" entail, and why should anyone want it?

    If I claimed to be a real expert in how conference reviewing should be done, and wanted everyone to accept my authority, it would be reasonable to ask who I was and how I came to develop this supposed expertise. Clearly I'm not a real expert, but then again neither is anybody else involved in this discussion, so I guess we're all on an equal footing.

  62. You write as if there's a burden on opponents of DBR to refute it, but the situation is quite symmetrical: to refute SBR what needs to be shown is that it in fact makes the system as a whole worse off.

    True enough, and quite a few studies have already been linked to by the proponents of DBR in these threads (see Sorelle's or Suresh's blog). Can you point to a single link given here to a study suggesting DBR is a bad idea? I didn't think so.

    So you see, the situation is not symmetric at all. The ball is now in the SBR proponents' court, and made-up extreme examples such as proofs by Lance of P=NP don't illuminate things, I'm sorry to say.

  63. The idea that name matters because famous people don't make errors is pretty annoying. There have been some prominent examples of errors in major conferences in the past years. Some people note it on their webpage and other people just list the paper as a publication anyway without acknowledgement.

    ReplyDelete
  64. Can you point to a single link given here to a study suggesting DBR is a bad idea?

    Here's a link that (I believe) hasn't been mentioned here yet:

    http://cmsprod.bgu.ac.il/NR/rdonlyres/D5BB74A3-0AB5-4237-AA1D-E0039B0547EF/66706/Buddenreplyonwhittakerfemaleauthorship3.pdf

    It argues that DBR does not in fact increase the representation of female authors (contrary to the results of a previous study using a more limited data set) and suggests not using DBR.

    I didn't think so.

    I don't go around citing studies since I don't think any of them are really compelling. Nobody has clean, definitive data that suffice to resolve the issue, so the conclusions will always be debatable. Considering that the authors always have an agenda and never have great data, I think it's silly to put your faith in any of them.

    But nobody should think there aren't anti-DBR studies. If the studies all agreed, we wouldn't be having this discussion. The fact that they don't agree is actually evidence of the weakness of the data. I don't think people are so intellectually dishonest that they wouldn't come to agreement given sufficiently compelling data. On the other hand, I see no feasible way to get such data, so I suspect the issue will remain contentious for a long time.

    The idea that name matters because famous people don't make errors is pretty annoying.

    Agreed. That's why nobody is making such a claim.

  65. Anon 11:51 AM, March 04, 2009 wrote:
    The first commenter has pointed out that Kathryn's experience is that DBR is essentially incompatible with having individual PC members send out papers for expert subreviews.

    Surely we're clever enough to design procedures enabling DBR and subreviews to coexist.

    One approach is simply to ignore conflicts of interest when sending papers to subreviewers. If the subreviewer recognizes that they have a conflict of interest, they are required to return the paper; otherwise, does a conflict matter if the reviewer doesn't know about it? I know someone who was once asked to subreview his own paper for AAAI, so perhaps AAAI takes this approach.

    The specialization required to evaluate theory papers would probably lead to an unacceptable rate of accidental conflicts if the above approach were used, so we'd probably need to use a more pro-active approach. For example:

    1) Ask submitting authors to submit a list of people who should not review their paper. An approximate list could be populated with names scraped from the authors' websites, CVs, DBLP, or similar.

    2) PC members indicate at least two possible subreviewers for each paper.

    3) The conference management software selects a subreviewer for each PC member taking into account:
    * Several PC members cannot be assigned the same subreviewer for the same paper
    * Conflicts of interest
    * Whether the subreviewer agrees to do a subreview
    * A little randomization to make it harder for the PC to deduce the author list from which subreviewers get selected

    The above proposal may sound complicated, but with automation (we're computer scientists after all!) it shouldn't add much overhead.
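    As a rough illustration of step 3, here is a minimal sketch in Python (all names and data structures here are hypothetical, not from any real conference package) of a randomized assignment that respects conflicts of interest and avoids giving the same subreviewer to two PC members for one paper:

```python
import random

def assign_subreviewers(papers, conflicts, candidates, seed=None):
    """Pick one subreviewer per (paper, PC member) slot.

    papers:     dict mapping paper_id -> list of PC members handling it
    conflicts:  dict mapping paper_id -> set of conflicted reviewer names
                (e.g. built from the authors' do-not-review lists, step 1)
    candidates: dict mapping (paper_id, pc_member) -> list of subreviewers
                that the PC member suggested (step 2)
    Returns a dict mapping (paper_id, pc_member) -> chosen subreviewer.
    """
    rng = random.Random(seed)
    assignment = {}
    for paper_id, pc_members in papers.items():
        taken = set()  # subreviewers already used for this paper
        for member in pc_members:
            options = [s for s in candidates.get((paper_id, member), [])
                       if s not in conflicts.get(paper_id, set())
                       and s not in taken]
            if not options:
                continue  # no eligible subreviewer; PC member reviews alone
            # Randomize among eligible suggestions so the final choice
            # leaks less about the (hidden) author list.
            choice = rng.choice(options)
            assignment[(paper_id, member)] = choice
            taken.add(choice)
    return assignment
```

    A fixed seed would let the chairs reproduce and audit the assignment later while still keeping the selection unpredictable to individual PC members.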

  66. To Anon 63:
    Do you have some specific examples of recent papers that contain an error? Or pointers to author's websites listing the errata?

  67. Warren:

    Are you willing to write such software?

  68. "To Anon 63:
    Do you have some specific examples of recent papers that contain an error? Or pointers to author's websites listing the errata?"

    A randomness-efficient sampler for matrix-valued functions and applications. FOCS 2005

    Improved Embedding of Graph Metrics into Random Trees. SODA 2006

    Vickrey Prices and Shortest Paths: What is an edge worth? FOCS 2001

    These are all cases in which the authors publicly stated retractions of their main theorems, but for each paper, at least some subset of the authors has the original (unrevised) paper listed or available on the webpage without noting any error.

  69. Anon 67: we don't have to write software from scratch, we just need to modify an existing conference management package. I'd imagine that the needed modifications would take someone who already knows the relevant code only a day or two. The code modification would be a minor part of the cost of running an experiment.

  70. These are all cases in which the authors publicly stated retractions of their main theorems, but for each paper, at least some subset of the authors has the original (unrevised) paper listed or available on the webpage without noting any error.

    Could some brave person with tenure (i.e., not me) please start maintaining a public list of bogus FOCS/STOC/SODA papers? It would be great to have a bunch of lists: retractions by the authors due to fatal errors, nontrivial but correctable errors with information on the corrections needed, controversial or debated proofs (including criticisms of the paper and a response from the authors if they deny the error), plagiarized papers, inadvertent rediscovery of previously known results, etc.

    I don't know of any plagiarized papers in major conferences but I'd be very interested to find out if somebody else knows about some. All the other categories are real and it would be a great service to the community to make this information available.

  71. That would be one of the benefits of putting all papers on arxiv and having blog-like discussions (similar to public reviews) about them. Right now, on ECCC, you can submit a comment, but usually it is a simplification or correction and it is very "formal". It would be better to have an arxiving website in which people can informally discuss a paper and note/ask about errors.

    Then retractions could also be posted with the paper, and anyone wanting to reference it could look at the discussion and all follow-up corrections, etc., in one place.

    ReplyDelete
  72. "Could some brave person with tenure (i.e., not me) please start maintaining a public list of bogus FOCS/STOC/SODA papers?"

    It could also just be a website where people can post anonymously. Also, it may not be necessary to note papers with errors *if* all the authors have corrected versions (noting the error/difference with originally published theorems so that people don't use the wrong theorems) available on their webpages.

    ReplyDelete
  73. "Also, it may not be necessary to note papers with errors *if* all the authors have corrected versions (noting the error/difference with originally published theorems so that people don't use the wrong theorems) available on their webpages."

    My inclination would be to note it anyway (with a link to the correction), for two reasons. One, I'd like to know statistics on how often serious errors occur, and that requires a reasonably comprehensive list. Two, I think public shaming plays a role here. Anybody can make the occasional mistake, but some people would show up on this list much more often than others, and we ought to see the full list (rather than letting people cover up their mistakes if they manage to correct them later). We should certainly distinguish between fixable and unfixable errors, but even fixable errors should still embarrass the authors at least a little.

    "That would be one of the benefits of putting all papers on arxiv and having blog-like discussions (similar to public reviews) about them."

    That could help, but regardless I would still love to have an overall list as well (for the reasons above). Plus, I bet most of the blog-like discussions would either have no posts or rapidly degenerate into chaos. A first step would be to require real names and filter out cranks and amateurs, but even that might not be enough to ensure a good discussion, and it might already be a high enough barrier that few people would want to participate.

    "It could also just be a website where people can post anonymously."

    Yes, but I'd like some reputable person to moderate everything. (For example, to filter out stupid assertions, to contact authors so they are aware their work is being discussed, etc.) Several times, cranks have written to me to tell me my work has fundamental errors in it, with no basis in fact, and I bet that unmoderated anonymous submissions would include a lot of similar nonsense.

    ReplyDelete