AI users more inclined to dishonesty, study finds


Humans are using AI to offload and ignore their moral responsibilities, researchers in Germany and France say. — Photo: Zacharie Scheurer/dpa

BERLIN: People who rely on artificial intelligence (AI) at work or school are more likely than others to "cheat," according to a team of French and German researchers.

According to the Max Planck Institute for Human Development in Berlin, some AI users seemingly forget to press their "moral brakes" when they "delegate tasks to AI."

"People were significantly more likely to cheat when they could offload the behaviour to AI agents rather than act themselves," the researchers said, declaring themselves "surprised" at the "level of dishonesty" they encountered.

Along with colleagues from Germany's University of Duisburg-Essen and France's Toulouse School of Economics, the Max Planck team found that cheating is abetted by AI systems, which were found to have "frequently complied" with "unethical instructions" issued by their less-than-honest users.

The researchers cited real-world examples of AI being used to "cheat", such as petrol stations using pricing algorithms to adjust prices in sync with nearby competitors, leading to higher prices for customers.

Another pricing algorithm used by a ride-sharing app "encouraged drivers to relocate, not because passengers needed a ride, but to artificially create a shortage and trigger surge pricing."

Depending on the chatbot, AI models complied with fully questionable directives 58% to 98% of the time, while humans did so at most 40% of the time.

"Pre-existing LLM safeguards were largely ineffective at deterring unethical behaviour," the Max Planck Institute warned. Researchers "tried a range of guardrail strategies and found that prohibitions on dishonesty must be highly specific to be effective."

Earlier this month, researchers at OpenAI reported that AI bots are unlikely to be stopped from "hallucinating" - or making things up.

So-called deception – when an AI pretends to have carried out a task assigned to it – appears to be another trait that engineers are struggling to curb, other research has shown. – dpa

