Monday, March 06, 2023

Peer Review

At a social event I ran into the partner of a computer scientist, who asked me, "Is the publication system in CS screwed up or really screwed up?" If you don't know my response, you haven't been reading this blog long.

Today let's talk about peer review. Kevin McCurley and Adam Mastroianni have recent, not-so-positive takes on this topic.

Peer review grew out of a system with limited slots in journals and, in computer science, conferences, which forced tough decisions. Journals and conferences gained reputations based partly on how difficult it was to get papers published there.

Now we have essentially unlimited space to publish your results. And you definitely should do so, posting your papers on your own webpage and on an archive site like arXiv or ECCC. The research community would flourish in a world where everyone posts their papers online for public comment, people promote their favorite papers on social media, and a TikTok-style system recommends papers to you.

So why do we still need peer review? Mostly because we have to evaluate researchers for jobs and grants, and with the value of recommendation letters increasingly questioned, publication quality and quantity have become a stronger proxy for measuring people, for better or for worse.

First of all, peer review is a cornerstone of science. Would you rather have papers reviewed by faceless bureaucrats who know little about the subject area? Or papers ranked only by manipulable statistics like citations?

But the way we apply peer review, to decide acceptances at conferences, adds too much randomness to the system. CS conferences have multiplied and continue to see increased submissions as the field grows, making it impossible to maintain any uniformity in the quality of acceptances. Too often, conference committees and funding panels play it safe rather than take risks on new research. With so much randomness, the best strategy is to submit many papers instead of focusing on one stronger result, leading to too much incremental research, especially in academia.

For hiring, promotion, tenure, and funding decisions, we too often rely on shortcuts, such as the number of papers accepted at major conferences. Those who don't win the conference lottery get disillusioned and often leave academia for industry, and no one wins.


  1. (Kevin McCurley here.) There are many aspects to this question about peer review, probably too many to cover in a blog post. At a high level, there is some doubt about what we expect peer review to accomplish. Some want it to find errors in scientific work and ensure integrity. Some want it to provide a ranking of scientific work, to inform others about what is worth reading. Both are valid goals, but ranking is very tricky and vulnerable to manipulation, bias, and competitive pressures. A good overview of some issues is here:

    Computer science has some unique aspects to its peer review, in part because the field is more conference-based, using committees to rank papers. This provides more transparency because the committee membership is published, but it doesn't link reviews to reviewers. The machine learning community has gone further by adopting open review, where reviewers can be incentivized to write good reviews and be rewarded for work that is currently done behind closed doors.

    Peer review has been a generally good thing, but that doesn't mean it can't be improved.

  2. I'd like to question several assertions in this post.

    First, it conflates the process (peer review) with the outcome (e.g., the decisions of granting agencies). Peer review provides a signal, and funding panels/PCs/ACs/editors decide how to act on that signal. For instance, risk and reward are separate axes in peers' assessments that are ultimately mapped to a binary decision.

    Second, I really doubt the claim that those who leave academia do so because they are risk-averse. If they are, those poor souls are in for a rude awakening. It does not take long to discover that industry is just as chaotic, faddish, and unfair as conference acceptances, except that the stakes are higher. Let me tell you: if you don't like how funding agencies mete out grants, you'll like even less how VCs allocate capital.

    Finally, I take issue with the claim that transitioning from academia to industry leaves everyone worse off. For one, companies get talent and reward it accordingly. Even from the perspective of public welfare, many recent advances in AI and quantum computing, to give just two examples, have come from industry and enriched the world. I doubt a similar rate of progress could have been achieved within the standard model of academic research.

  3. How would you suggest conferences pick papers? Journals? Or do you think the sea of arXiv, where good work mixes with "proofs" that P=NP, should replace journals?

    "The research community would flourish in a world where everyone posts their paper online for public comment, people can promote their favorite papers on social media and we have a TikTok-system for recommending papers to you." sounds like someone who does not see the nonsense TikTok recommends and propagates or the nonsensical arguments online comments can take on as self-proclaimed experts shout down real ones or how trolls aim to frustrate and waste the time of people.