Artificial intelligence researchers are grappling with a problem core to their field: how to stop so-called “AI slop” from damaging confidence in the industry’s scientific work.
AI conferences have rushed in recent months to restrict the use of large language models for writing and reviewing papers after being flooded with poor-quality AI-written submissions.
Scientists have warned that the surge of low-quality AI-generated material risks eroding trust in the sector’s research, and its integrity, by introducing false claims and made-up content.
“There is a little bit of irony to the fact that there’s so much enthusiasm for AI shaping other fields when, in reality, our field has gone through this chaotic experience because of the widespread use of AI,” said Inioluwa Deborah Raji, an AI researcher at the University of California, Berkeley.
Recent studies have highlighted the prevalence of the technology in AI research. In August, a study by Stanford University found that up to 22 per cent of computer science papers contained LLM usage.
https://www.ft.com/content/54e274c5-de86-4b3e-96a9-95a46b5e48a0