Thursday, February 02, 2023

Responsibility

Nature laid out its ground rules for large language models like ChatGPT, including:

No LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.

Let's focus on the last word: "responsibility". What does that mean for an author? It means we can hold an author, or set of authors, responsible for any issues in the paper, such as:

  • The proofs, calculations, code, formulas, measurements, statistics, and other details of the research.
  • Any interpretation or conclusions made in the article.
  • Properly citing related work, especially work that calls into question the novelty of this research.
  • Ensuring the article does not contain text identical or very similar to previous work.
  • Anything else described in the article.

The authors should take reasonable measures to ensure that a paper is free from any of the issues above. Nobody is perfect, and if you make a mistake in a paper you should, as with all mistakes, take responsibility: acknowledge the problems, do everything you can to rectify them, such as publishing a corrigendum if needed, and work to ensure you won't make similar mistakes in the future.

Mistakes can arise outside of an author's actions. Perhaps a computer chip made faulty calculations, you relied on a flawed theorem in another paper, your main result appeared fifteen years ago in an obscure journal, a LaTeX package for the journal introduced errors in the formulas, a student who helped with the research or exposition took a lazy way out, or you put too much trust in AI-generated text. Nevertheless, the responsibility remains with the authors.

Could an AI ever take responsibility for an academic paper? Would a toaster ever take responsibility for burning my breakfast?



2 comments:

  1. You wrote: "Could an AI ever take responsibility for an academic paper? Would a toaster ever take responsibility for burning my breakfast?"

    You aren't getting any comments because you called this one exactly right.


  2. Current pseudo-AI is merely making the program (statistical prediction of sentence completion) initiated in the 1950s by Shannon and others more robust and on a grander scale. No one even has a proper working definition of AGI, let alone whether there are barriers via the Church-Turing hypothesis. The comparison to internet technology is not correct. But it is true that theory people, then as now, tried to BS, publish papers, and get tenure.
