A chatbot that can answer the biggest, and smallest, questions we can think of. Since its release last November, ChatGPT has been a phenomenal hit. But it has also become apparent that this type of conversational AI will have "huge implications on the way researchers work," writes Claudi Bockting, Professor of Clinical Psychology in Psychiatry at Amsterdam UMC and co-director of the Centre for Urban Mental Health, today in Nature, where she and her co-authors present five priorities for research.

These implications are wide-ranging. Such technology may be used not only to write papers but also to identify research gaps, write programs, review and improve existing texts, and even serve as a search tool. How should researchers respond to this potentially disruptive technology? That is the central question of the Nature comment article written by Claudi Bockting and Evi-Anne van Dis, together with colleagues in computer science from the University of Amsterdam and Indiana University. "We think that banning this technology will not be an option, as it will likely be ubiquitous and, before long, even integrated into word-processing programs," says Bockting.

Concerns

The authors believe that using technology like ChatGPT may pose several threats to scientific practice, with accuracy among the most evident concerns. Bockting and her co-authors presented ChatGPT with numerous questions and tasks, and its answers did not always hold up. "We found that it often came back with incorrect or misleading text. For example, in one case, we asked how many patients with depression experience relapse after treatment. According to ChatGPT, the effect of treatment was generally long-lasting, basically meaning that depressive relapse was uncommon. However, studies show that relapse rates can be as high as 51% in the first year after treatment," says Evi-Anne van Dis, a post-doctoral researcher at Amsterdam UMC's department of Psychiatry.

Five Priorities

The above experience is one of the reasons that van Dis, Bockting and their co-authors recommend that scientists "hold on to human verification" when using ChatGPT. This is one of five priorities that the authors believe all scientists should adhere to. "Even though ChatGPT may generate high-quality text, we give several examples of how it may also introduce bias and inaccuracies. It may even lead to plagiarism. Thorough human fact-checking is essential," says van Dis. Another priority is to ensure that researchers remain accountable for their work and are transparent about their use of ChatGPT. "If you think of researchers who may use ChatGPT to write large parts of their manuscripts without acknowledging it, this feels like cheating," Bockting adds.

Rules for Accountability

"There are tools that can detect whether text comes from a machine or from a human hand but AI-chatbots will probably become smarter than these AI-tools. We should skip the pointless arms race and come together to make some rules,” says Bockting. These rules would centre on "transparency, integrity and honesty,” and should involve the use of author contribution statements where the extent, and the nature, of the use of AI technology can be acknowledged. "We also need to work out who owns the rights,” adds van Dis, 'is it the person who trained the AI, those who produced the AI or the scientists who used it to aid their writing?”

Investing and Adopting

Currently, AI technology like ChatGPT is predominantly owned by large tech companies, which are, for example, not transparent about the training sets used for ChatGPT. This is a great concern for van Dis, Bockting and their co-authors. They believe it is crucial to develop reliable and more transparent AI technology, owned by independent non-profit organizations with no conflicts of interest.

If technology like ChatGPT becomes more reliable and transparent, it may offer ample opportunities for science. "AI may give us the chance to increase diversity and may reduce the ever-increasing workload. However, we do need to discuss whether the trade-off between the benefits of AI and the loss of autonomy is worth it. How much room do we give to AI without undermining ourselves?" says Bockting. The authors are clear that while it is important to embrace the technology, this cannot be done without considering where the boundaries lie.

Opening the Debate

Van Dis, Bockting and their co-authors finally recommend that all research groups begin to discuss ChatGPT internally. This will help determine the best way to use the tool "honestly, transparently and with integrity," says van Dis. Furthermore, the scientific community needs to discuss this with all relevant parties, from publishers to ethicists, ideally in the form of an international summit. In their Nature article, the authors propose ten questions to guide such a summit. "We hope this article can be a starting point for an immediate and ongoing worldwide debate on how to use such AI technology for research purposes. Ultimately, this should lead to the development of concrete and practical guidelines for researchers," says Bockting.

ChatGPT

ChatGPT relies on a large language model (LLM), a machine-learning system that autonomously learns from data and can produce sophisticated and seemingly intelligent writing after training on a massive data set of text.
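To make this next-word-prediction principle concrete, below is a minimal, illustrative Python sketch. It assumes the open-source Hugging Face transformers library and the small, publicly available GPT-2 model (far smaller than the model behind ChatGPT, which is not publicly available); the prompt is an invented example.

```python
# Illustrative sketch of the autoregressive text generation that underlies
# LLMs such as the one powering ChatGPT. Assumes: pip install transformers torch.
# GPT-2 is used here only because it is small and openly available.
from transformers import pipeline

# Load a pretrained text-generation model (downloads the weights on first run).
generator = pipeline("text-generation", model="gpt2")

# The model extends the prompt one predicted token at a time, sampling from
# the statistical patterns it learned during training.
prompt = "The main risks of using chatbots in scientific research are"
result = generator(prompt, max_new_tokens=40, do_sample=True, top_k=50)
print(result[0]["generated_text"])
```

Because such a model samples from learned statistical patterns rather than checking claims against sources, it can produce fluent but inaccurate text, which is precisely the accuracy concern the authors raise.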

The article about ChatGPT and five priorities for research, published today in Nature, can be read here.

Photography: Shutterstock