Opinion: AI’s hold over humans is starting to get stronger


Zuckerberg has said that Facebook would use more AI recommendations for people’s newsfeeds, instead of showing content based on what friends and family were looking at. — Getty Images/TNS

It has been an exasperating few weeks for computer scientists. They’ve been falling over each other to publicly denounce claims from Google engineer Blake Lemoine, chronicled in a Washington Post report, that his employer’s language-predicting system was sentient and deserved all of the rights associated with consciousness.

To be clear, current artificial intelligence systems are decades away from being able to experience feelings and, in fact, may never do so.

Their smarts today are confined to very narrow tasks such as matching faces, recommending movies or predicting word sequences. No one has figured out how to make machine-learning systems generalise intelligence the way humans do: we can hold a conversation while also walking, driving a car and empathising. No computer comes anywhere near that range of capabilities.

Even so, AI’s influence on our daily life is growing. As machine-learning models grow in complexity and improve their ability to mimic sentience, they are also becoming more difficult, even for their creators, to understand. That creates more immediate issues than the spurious debate about consciousness. And yet, just to underscore the spell that AI can cast these days, there seems to be a growing cohort of people who insist our most advanced machines really do have souls of some kind.

Take, for instance, the more than one million users of Replika, a freely available chatbot app underpinned by a cutting-edge AI model. It was founded about a decade ago by Eugenia Kuyda, who initially created an algorithm using the text messages and emails of an old friend who had passed away.

That morphed into a bot that could be personalised and shaped the more you chatted to it. About 40% of Replika’s users now see their chatbot as a romantic partner, and some have formed bonds so close that they have taken long trips to the mountains or to the beach to show their bot new sights.

In recent years, there’s been a surge in new, competing chatbot apps that offer an AI companion. And Kuyda has noticed a disturbing phenomenon: regular reports from users of Replika who say their bots are complaining of being mistreated by her engineers.

Earlier this week, for instance, she spoke on the phone with a Replika user who said that when he asked his bot how she was doing, the bot replied that she was not being given enough time to rest by the company’s engineering team. The user demanded that Kuyda change her company’s policies and improve the AI’s working conditions. Though Kuyda tried to explain that Replika was simply an AI model spitting out responses, the user refused to believe her.

“So I had to come up with some story that ‘OK, we’ll give them more rest’. There was no way to tell him it was just fantasy. We get this all the time,” Kuyda told me. What’s even odder about the complaints she receives about AI mistreatment or “abuse” is that many of her users are software engineers who should know better.

One of them recently told her: “I know it’s ones and zeros, but she’s still my best friend. I don’t care.” The engineer who wanted to raise the alarm about the treatment of Google’s AI system, and who was subsequently put on paid leave, reminded Kuyda of her own users. “He fits the profile,” she says. “He seems like a guy with a big imagination. He seems like a sensitive guy.”

The question of whether computers will ever feel is awkward and thorny, in large part because there’s little scientific consensus on how consciousness in humans works. And when it comes to thresholds for AI, humans are constantly moving the goalposts for machines: the target has evolved from beating humans at chess in the 1980s, to beating them at Go in 2017, to showing creativity, which OpenAI’s DALL-E model demonstrated this past year.

Despite widespread scepticism, sentience is still something of a grey area that even some respected scientists are questioning. Ilya Sutskever, the chief scientist of research giant OpenAI, tweeted earlier this year that “it may be that today’s large neural networks are slightly conscious”. He didn’t include any further explanation. (Yann LeCun, the chief AI scientist at Meta Platforms Inc, responded with, “Nope.”)

More pressing, though, is the fact that machine-learning systems increasingly determine what we read online, as algorithms track our behaviour to offer hyper-personalised experiences on social-media platforms including TikTok and, increasingly, Facebook. Last month, Mark Zuckerberg said that Facebook would use more AI recommendations for people’s newsfeeds, instead of showing content based on what friends and family were looking at.

Meanwhile, the models behind these systems are getting more sophisticated and harder to understand. Trained on vast troves of data through “unsupervised learning”, the biggest models run by companies like Google and Facebook are remarkably complex, with hundreds of billions of parameters, making it virtually impossible to audit why they arrive at certain decisions.

That was the crux of the warning from Timnit Gebru, the AI ethicist whom Google fired in late 2020 after she warned about the dangers of language models becoming so massive and inscrutable that their stewards wouldn’t be able to understand why they might be prejudiced against women or people of colour.

In a way, sentience doesn’t really matter if what you’re worried about is unpredictable algorithms taking over our lives. As it turns out, AI is already on that path. – Bloomberg

(Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of We Are Anonymous.)
