AI users more inclined to dishonesty, study finds


Humans are using AI to offload and ignore their moral responsibilities, researchers in Germany and France say. — Photo: Zacharie Scheurer/dpa

BERLIN: People who rely on artificial intelligence (AI) at work or school are more likely than others to "cheat," according to a team of French and German researchers.

According to the Max Planck Institute for Human Development in Berlin, some AI users seemingly forget to press their "moral brakes" when they "delegate tasks to AI."

"People were significantly more likely to cheat when they could offload the behaviour to AI agents rather than act themselves," the researchers said, declaring themselves "surprised" at the "level of dishonesty" they encountered.

Along with colleagues from Germany's University of Duisburg-Essen and France's Toulouse School of Economics, the Max Planck team found that AI systems abet cheating: the systems "frequently complied" with "unethical instructions" issued by their less-than-honest users.

The researchers cited real-world examples of AI being used to "cheat", such as petrol stations using pricing algorithms to adjust prices in sync with nearby competitors, leading to higher prices for customers.

Another pricing algorithm used by a ride-sharing app "encouraged drivers to relocate, not because passengers needed a ride, but to artificially create a shortage and trigger surge pricing."

Depending on the brand of chatbot, AI complied with a questionable directive "58% to 98%" of the time, compared with a compliance rate of up to 40% among humans.

"Pre-existing LLM safeguards were largely ineffective at deterring unethical behaviour," the Max Planck Institute warned. Researchers "tried a range of guardrail strategies and found that prohibitions on dishonesty must be highly specific to be effective."

Earlier this month, researchers at OpenAI reported that AI bots are unlikely to be stopped from "hallucinating" – making things up.

So-called deception – when an AI pretends to have carried out a task assigned to it – appears to be another trait that engineers are struggling to curb, other research has shown. – dpa
