Studies: AI chatbots can influence voters
PARIS, France: A brief conversation with a partisan AI chatbot can influence voters' political views, studies published Dec 4 found, with evidence-backed arguments – true or not – proving particularly persuasive.

Experiments with generative artificial intelligence models, such as OpenAI's GPT-4o and Chinese alternative DeepSeek, found they were able to shift supporters of Republican Donald Trump towards his Democratic opponent Kamala Harris by almost four points on a 100-point scale ahead of the 2024 US presidential election.

In 2025 elections in Canada and Poland, meanwhile, opposition supporters had their views shifted by up to 10 points after chatting with a bot programmed to persuade.

Those effects are enough to sway a significant proportion of voting decisions, said Cornell University professor David Rand, a senior author of the papers in journals Science and Nature.

"When we asked how people would vote if the election were held that day... roughly one in 10 respondents in Canada and Poland switched," he told AFP by email.

"About one in 25 in the US did the same," he added, while noting that "voting intentions aren't the same as actual votes" at the ballot box.

However, follow-ups with participants found that around half the persuasive effect remained after one month in Britain, while one-third remained in the United States, Rand said.

"In social science, any evidence of effects persisting a month later is comparatively rare," he pointed out.

Being polite, giving proof

The studies found that the most common tactic used by chatbots to persuade was "being polite and providing evidence", and that bots instructed not to use facts were far less persuasive.

Such results "go against the dominant narrative in political psychology, which holds that 'motivated reasoning' makes people ignore facts that conflict with their identities or partisan commitments", Rand said.

But the facts and evidence cited by the chatbots were not necessarily truthful.

While most of their fact-checked claims were accurate, "AIs advocating for right-leaning candidates made more inaccurate claims", Rand said.

This was "likely because the models mirror patterns in their training data, and numerous studies have found that right-leaning content on the internet tends to be more inaccurate", he added.

The authors recruited thousands of participants for the experiments on online gig-work platforms and warned them in advance that they would be speaking with AI.

Rand said that further work could investigate the "upper limit" of just how far AI can change people's minds – and how newer models released since the fieldwork, such as GPT-5 or Google's Gemini 3, would perform. – AFP
