Scientific Journal Publishes Paper With AI-Generated Introduction

A computer with ChatAI on the screen.
Credit: iStock.

A peer-reviewed scientific paper has garnered significant attention on the social media platform X (formerly Twitter), albeit for unfavorable reasons.


Published in Elsevier’s Surfaces and Interfaces journal, the first line of said paper’s introduction reads: “Certainly, here is a possible introduction for your topic”.


If you’ve had any interaction with ChatGPT or other large language models (LLMs), you’re likely well acquainted with this phrasing.


It suggests that AI-assisted technologies have been utilized in writing the manuscript, titled: The three-dimensional porous mesh structure of Cu-based metal-organic-framework - aramid cellulose separator enhances the electrochemical performance of lithium metal anode batteries.


Though Elsevier does not prohibit the use of AI and AI-assisted technologies in manuscript writing, its policies do require that authors disclose this information.


“We ask authors who have used AI or AI-assisted tools to insert a statement at the end of their manuscript immediately above the references or bibliography entitled ‘Declaration of AI and AI-assisted technologies in the writing process’. In that statement, we ask authors to specify the tool that was used and the reason for using the tool,” the journal’s website reads.


Despite the clear-cut evidence that LLMs have been used, the authors – Zhang et al. – failed to include such a statement, which raises the question: how did this paper survive the processes preceding peer review, peer review itself and, finally, publication?


Elsevier has responded via X, stating, “Our policies are clear that LLMs can be used in the drafting of papers as long as it is declared by the authors on submission. We are investigating this paper and are in discussion with the editorial team and the authors.” The journal has not yet released any further information.


The viral X thread features members of the scientific community expressing their frustration and disappointment with the situation. It also highlights other instances of LLM-generated text appearing in peer-reviewed journals, some several months old and apparently not yet addressed.

AI’s future in scientific publishing

The controversial Elsevier paper comes at a time when the value, risks and ethical implications of using AI in scientific publishing are under constant evaluation.


While some tout the benefits of AI-assisted plagiarism detection, text optimization and article tailoring, others urge caution over the technologies’ potential to introduce factual inaccuracies and biases into the literature, and to spark disputes over authorship.


At Technology Networks, we want to hear from you – what do you think about the use of AI and AI-assisted tools in scientific publication?