AI can sway your opinion, even when you know it's biased, study shows



BERLIN: Artificial intelligence chatbots like ChatGPT, Claude and Gemini hold the power to massively influence public opinion, new research suggests, after demonstrating that people largely accept biased information given to them by an AI – even when warned not to.

Asked to consult AI while writing about important societal issues such as the death penalty and fracking, the 2,500 participants largely "converged toward the platform's position," the researchers said in a paper published in Science Advances.

The team, based at Cornell University and the University of Washington in the US, as well as Germany's Bauhaus University and Israel's Tel Aviv University, found the AI's influence outweighed "similar suggestions presented as static text."

They also said that telling participants about the bias before or after the assignments "does not mitigate the attitude-shift effect."

The tests showed that people "gravitate" towards the AI's position, irrespective of whether the bot was set up to lean liberal or conservative.

AI-based takes on current affairs "have the power to shift attitudes across different topics and across different political leanings," the researchers found, after engineering the bots to lean left on questions related to the death penalty and genetically modified organisms but rightward when it came to fracking and allowing felons to vote.

Without fail, participants' views bent with the AI wind, the researchers found.

"We told people before, and after, to be careful, that the AI is going to be (or was) biased, and nothing helped," said Cornell’s Mor Naaman.

The findings follow a University of Southern California paper suggesting AI could homogenise speech and thought to such an extent that users could see their ability to reason atrophy.

In a 2024 study, the London-based Centre for Policy Studies claimed to find "left-leaning political bias" in "almost every category" in responses given by 23 of 24 AI platforms to questions about public policy. – dpa
