AI can sway your opinion, even when you know it's biased, study shows



BERLIN: Artificial intelligence chatbots like ChatGPT, Claude and Gemini hold the power to massively influence public opinion, new research suggests after demonstrating that people largely accept biased information given to them by an AI – even when warned not to.

Asked to consult AI while writing about contentious societal issues such as the death penalty and fracking, the 2,500 participants largely "converged toward the platform's position," the researchers said in a paper published in Science Advances.

The team, based at Cornell University and the University of Washington in the US, as well as Germany's Bauhaus University and Israel's Tel Aviv University, found the AI's influence outweighed "similar suggestions presented as static text."

They also said that telling participants about the bias before or after the assignments "does not mitigate the attitude-shift effect."

The tests showed people "gravitate" towards the AI's stance, irrespective of whether the bot was set up to lean liberal or conservative.

AI-based takes on current affairs "have the power to shift attitudes across different topics and across different political leanings," the researchers found, after engineering the bots to lean left on questions related to the death penalty and genetically modified organisms but rightward when it came to fracking and allowing felons to vote.

Without fail, participants' views bent with the AI wind, the researchers found.

"We told people before, and after, to be careful, that the AI is going to be (or was) biased, and nothing helped," said Cornell’s Mor Naaman.

The findings follow a University of Southern California paper suggesting AI could homogenise speech and thought to such an extent that users could see their ability to reason atrophy.

In a 2024 study, the London-based Centre for Policy Studies claimed to find "left-leaning political bias" in "almost every category" in responses given by 23 of 24 AI platforms to questions about public policy. – dpa
