AI users more inclined to dishonesty, study finds


Humans are using AI to offload and ignore their moral responsibilities, researchers in Germany and France say. — Photo: Zacharie Scheurer/dpa

BERLIN: People who rely on artificial intelligence (AI) at work or school are more likely than others to "cheat," according to a team of French and German researchers.

According to the Max Planck Institute for Human Development in Berlin, some AI users seemingly forget to press their "moral brakes" when they "delegate tasks to AI."

"People were significantly more likely to cheat when they could offload the behaviour to AI agents rather than act themselves," the researchers said, declaring themselves "surprised" at the "level of dishonesty" they encountered.

Along with colleagues from Germany's University of Duisburg-Essen and France's Toulouse School of Economics, the Max Planck team found that cheating is abetted by AI systems, which were found to have "frequently complied" with "unethical instructions" issued by their less-than-honest users.

The researchers cited real-world examples of AI being used to "cheat", such as petrol stations using pricing algorithms to adjust prices in sync with nearby competitors, leading to higher prices for customers.

Another pricing algorithm used by a ride-sharing app "encouraged drivers to relocate, not because passengers needed a ride, but to artificially create a shortage and trigger surge pricing."

Depending on the chatbot, AI models complied with questionable directives in 58% to 98% of cases, compared with a compliance rate of up to 40% among humans.

"Pre-existing LLM safeguards were largely ineffective at deterring unethical behaviour," the Max Planck Institute warned. Researchers "tried a range of guardrail strategies and found that prohibitions on dishonesty must be highly specific to be effective."

Earlier this month, researchers at OpenAI reported that it is unlikely that AI bots can be stopped from "hallucinating" – or making things up.

So-called deception – when an AI pretends to have carried out a task assigned to it – appears to be another trait that engineers are struggling to curb, other research has shown. – dpa
