BERLIN: Artificial intelligence chatbots like ChatGPT, Claude and Gemini hold the power to massively influence public opinion, new research suggests, after demonstrating that people largely accept biased information given to them by an AI – even when warned of the bias.
Asked to consult AI while writing about important societal issues such as the death penalty and fracking, the study's 2,500 participants largely "converged toward the platform's position," the researchers said in a paper published in Science Advances.
The team, based at Cornell University and the University of Washington in the US, as well as Germany's Bauhaus University and Israel's Tel Aviv University, found the AI's influence outweighed "similar suggestions presented as static text."
They also said that telling participants about the bias before or after the assignments "does not mitigate the attitude-shift effect."
The tests showed people "gravitate" towards the AI's stance irrespective of whether the bot was set up to lean liberal or conservative.
AI-based takes on current affairs "have the power to shift attitudes across different topics and across different political leanings," the researchers found, after engineering the bots to lean left on questions related to the death penalty and genetically modified organisms but rightward when it came to fracking and allowing felons to vote.
Without fail, participants' views bent with the AI wind, the researchers found.
"We told people before, and after, to be careful, that the AI is going to be (or was) biased, and nothing helped," said Cornell’s Mor Naaman.
The findings follow a University of Southern California paper suggesting AI could homogenise speech and thought to such an extent that users could see their ability to reason atrophy.
In a 2024 study, the London-based Centre for Policy Studies claimed to find "left-leaning political bias" in "almost every category" in responses given by 23 of 24 AI platforms to questions about public policy. – dpa
