Study: AI-generated undergraduate exam answers undetected 94% of the time
It can be hard for teachers to tell the difference between answers written by generative AI and those written by real students. — AFP Relaxnews

With the rise of artificial intelligence, university educators overwhelmingly fail to tell the difference between a student's work and an answer produced by ChatGPT, a recent British study shows.

Researchers at the UK's University of Reading have demonstrated that experienced exam markers find it extremely difficult to distinguish between their students' actual work and answers generated by artificial intelligence. Their findings are published in the scientific journal PLOS One.

