AI being used to churn out deluge of dodgy scientific research


Researchers say that since the advent of AI, there has been a surge in papers that look scientific but don’t hold up to scrutiny. — Photo: Leonie Asendorpf/dpa

LONDON: Easy access to artificial intelligence (AI) has made medical and health research less scientifically rigorous and has facilitated a "flood" of shoddy journal papers full of superficial analyses based on "cherry-picked" data, a new study reports.

According to the University of Surrey and Aberystwyth University, leaning on AI leads to the "production of large numbers of formulaic single-factor analyses" when a broader approach would likely better assess the range of possible causes of diseases.

Resorting to AI for a leg-up or head-start often ends up with researchers "relating single predictors to specific health conditions," the team said in a paper published by the science journal PLOS Biology.

"We’ve seen a surge in papers that look scientific but don’t hold up to scrutiny," said Matt Spick of the University of Surrey, who described such output as "science fiction."

The growing reliance on, and hyping-up of, AI is making so-called paper mills, operations that churn out high volumes of quantity-over-quality medical or scientific journal papers, more proficient. Such would-be researchers can try to "exploit AI-ready datasets" to ensure "end-to-end generation of very large numbers of manuscripts."

According to the University of Surrey, some of the papers assessed were found to have featured "cherry-picked narrow data subsets without justification," practices that are "raising concerns about poor research practice, including data dredging or changing research questions after seeing the results."

Having thorough peer reviews and getting statisticians more involved with medical research that is based on large health datasets can help stem the tide, according to the team.

Researchers also need to work harder and "use the full datasets available to them unless there’s a clear and well-explained reason to do otherwise."

The growing use of AI means science publishing needs "better guardrails," according to the University of Surrey's Anietie Aliu. – dpa

