More ‘charismatic’ AI can abuse trust but also help creativity


Society is set to be confronted with distinct advantages and threats as AI chatbots grow more charismatic. —dpa

COPENHAGEN: Smooth talkers, snake oil peddlers and bombastic demagogues have been taking people in since the dawn of time.

And then there’s the likelihood that people are more responsive when spoken to with confidence, empathy and enthusiasm than they are when hearing a voice that sounds indifferent or curt or even matter-of-fact – though proponents of tough talking and straight shooting might see this as no more than a truism.

Either way, it is no surprise that some of these dynamics are filtering through to human-robot interactions.

As it turns out, the more “human” and “charismatic” an artificial intelligence (AI) device or robot sounds, the more receptive the human audience.

“We had a robot instruct teams of students in a creativity task. The robot either used a confident, passionate tone of voice or a normal, matter-of-fact tone of voice,” said Kerstin Fischer of the University of Southern Denmark.

“We found that when the robot spoke in a charismatic speaking style, students’ ideas were more original and more elaborate,” said Fischer, part of a team of researchers that published its findings in May in the journal Frontiers in Communication.

Around the same time, academics from the University of California, Davis published the results of experiments showing that people produce “louder and slower speech with less pitch variation” when addressing AI systems such as Alexa and Siri than when speaking to other people.

“These adjustments are similar to the changes speakers make when talking in background noise, such as in a crowded restaurant,” said UC Davis’ Georgia Zellou, who, alongside colleague Michelle Cohn, presented the findings at a May conference staged by the Acoustical Society of America.

They found that not only do people adjust how they speak when dealing with devices, they also understand the devices less accurately when they sound less than human.

So while the robot does not have to channel Cicero exactly, the more “human” the delivery, the better the comprehension.

There could be a downside, however, going by another recent piece of research – from the University of Gothenburg’s Jonas Ivarsson and Oskar Lindwall – which suggested that “as AI becomes increasingly realistic, our trust in those with whom we communicate may be compromised”.

“The quality of the voice and interactivity are sometimes so good that the artificial can no longer be differentiated from real persons,” they said, meaning that “discerning whether an interactional partner is a human or an artificial agent is no longer merely a theoretical question but a practical problem society faces”.

As if on cue, police in China said on May 22 that a man had been scammed out of more than US$600,000 by a crook who used AI deepfake technology to mimic a friend of the victim during a video-call on the WeChat phone app. – dpa
