Tuesday, January 19, 2021

The Ethics Board

Back in 2014 I recommended a TCS ethics board, mainly to deal with plagiarism and with who should get credit for a result. It went the way of most suggestions I make on this blog: a quick road to nowhere. Bill asked, "Does any other branch of CS have an ethics board? How have they worked?"

The NeurIPS conference created such a board for its reviewing process, though with a broader mission.

We appointed an ethics advisor and invited a pool of 22 ethics reviewers (listed here) with expertise in fields such as AI policy, fairness and transparency, and ethics and machine learning. Reviewers could flag papers for ethical concerns, such as submissions with undue risk of harm or methods that might increase unfair bias through improper use of data, etc. Papers that received strong technical reviews yet were flagged for ethical reasons were assessed by the pool of ethics reviewers.

Thirteen papers met these criteria and received ethics reviews. Only four papers were rejected because of ethical considerations, after a thorough assessment that included the original technical reviewers, the area chair, the senior area chair and also the program chairs. Seven papers flagged for ethical concerns were conditionally accepted, meaning that the final decision is pending the assessment of the area chair once the camera ready version is submitted. Some of these papers require a thorough revision of the broader impact section to include a clearer discussion of potential risks and mitigations, and others require changes to the submission such as the removal of problematic datasets. Overall, we believe that the ethics review was a successful and important addition to the review process. Though only a small fraction of papers received detailed ethical assessments, the issues they presented were important and complex and deserved the extended consideration. In addition, we were very happy with the high quality of the assessments offered by the ethics reviewers, and the area chairs and senior area chairs also appreciated the additional feedback.

Without seeing the process I cannot say what criteria were used to reject the four papers. There could be legitimate reasons if the authors violated professional ethics or even inadvertently based their results on biased data sets. But there's a fine line between rejecting papers for such reasons and rejecting them because you don't like the outcome or the questions they ask. No easy answers here.


  1. Also, see https://statmodeling.stat.columbia.edu/2020/12/21/the-neurips-2020-broader-impacts-experiment/

  2. Are their criteria transparent? I ask non-rhetorically.
    If not, then that is an irony.

    bill g.