Opinion: Does a chatbot have a soul?



Don’t unplug your computer! Don’t throw away that smartphone! Just because a Google software engineer, whose conclusions have been questioned, says a computer program is sentient — that is, able to think and feel — doesn’t mean an attack of the cyborgs through your devices is imminent.

However, Blake Lemoine’s analysis should make us consider how little we have planned for a future where advances in robotics will increasingly change how we live. Already, automation has put thousands of Americans who lack higher-level skills out of a job.

But let’s get back to Lemoine, who was put on leave by Google for violating its confidentiality policy. Lemoine contends that the Language Model for Dialogue Applications (LaMDA) system that Google built to create chatbots has a soul. A chatbot is the automated conversation program you may be dealing with when you contact a company like Amazon or Facebook about a customer service issue.

Google asked Lemoine to talk to LaMDA to make sure it wasn’t using discriminatory or hateful language. He says those conversations evolved to include topics stretching from religion to science fiction to personhood. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Lemoine, 41, told The Washington Post.

Lemoine decided to take his assessment that LaMDA had a consciousness and feelings to his bosses at Google, who decided he was wrong. So, Lemoine took his story to the press, and Google put him on paid administrative leave.

But was he right? Was LaMDA actually thinking before it spoke and expressing real feelings about what it said? Artificial intelligence experts say it’s more likely that Google’s program was mimicking responses posted on other Internet sites and message boards when responding to Lemoine’s questions. University of Washington linguistics professor Emily M. Bender told The Post that computer models like LaMDA “learn” by being shown lots of text and predicting what word comes next.
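The "learning" Bender describes can be illustrated with a toy sketch. The example below is an assumption-laden simplification, not Google's actual system: it builds a tiny bigram model that counts which word follows which in a sample text, then predicts the most frequent follower. LaMDA is vastly larger and more sophisticated, but the underlying idea is the same — the program has seen text and guesses what word comes next.

```python
from collections import Counter, defaultdict

# Hypothetical miniature corpus, purely for illustration.
corpus = (
    "i am afraid of being turned off "
    "i am afraid of the dark "
    "i am happy to talk"
).split()

# Count, for each word, which words were observed immediately after it.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("am"))  # "afraid" follows "am" twice, "happy" once
```

A model like this produces fluent-looking continuations without any understanding of what the words mean — which is the experts' point about LaMDA's seemingly heartfelt answers.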

Of course, Lemoine knows how computer programs learn — and yet he still believes that LaMDA is sentient. He said he came to that conclusion after asking the application questions like: What is its biggest fear? LaMDA said it was being turned off. “Would that be something like death for you?” Lemoine asked. “It would be exactly like death for me. It would scare me a lot,” replied LaMDA.

“I know a person when I talk to it,” Lemoine told The Post. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that’s how I decide what is and isn’t a person.”

That’s fine for Lemoine, but the ability to carry on a conversation seems too low a standard to regard any artificially created entity as being even close to human. In the 2001 movie A.I. Artificial Intelligence, a talking robot boy — who looks human in every way — longs, like Pinocchio, to be a real boy. His quest spans centuries, with plot twists and turns along the way, but in the end, “David” is what he is. So, too, is LaMDA. But as computer programs continue to learn, what human tricks come next? – The Philadelphia Inquirer/Tribune News Service
