Concerns About Google AI Being Sentient

By Alena Kuzub.

Published 1 July 2022

No one knows when humans will create an intelligent or sentient AI, but recent revelations about LaMDA, Google’s artificially intelligent chatbot generator, have raised concerns.

From virtual assistants like Apple’s Siri and Amazon’s Alexa, to robotic vacuums and self-driving cars, to automated investment portfolio managers and marketing bots, artificial intelligence has become a big part of our everyday lives. Still, when we think about AI, many of us imagine human-like robots that, according to countless science fiction stories, will one day become independent and rebel.

No one knows, however, when humans will create an intelligent or sentient AI, said John Basl, associate professor of philosophy at Northeastern’s College of Social Sciences and Humanities, whose research focuses on the ethics of emerging technologies such as AI and synthetic biology.

“When you hear Google talk, they talk as if this is just right around the corner or definitely within our lifetimes,” Basl said. “And they are very cavalier about it.”

Maybe that is why a recent Washington Post story has made such a big splash. In the story, Google engineer Blake Lemoine says that the company’s artificially intelligent chatbot generator, LaMDA, with which he had numerous deep conversations, might be sentient. It reminds him of a 7- or 8-year-old child, Lemoine told the Washington Post.

However, Basl believes the evidence mentioned in the Washington Post article is not enough to conclude that LaMDA is sentient.

“Reactions like ‘We have created sentient AI,’ I think, are extremely overblown,” Basl said.

The evidence seems to be grounded in LaMDA’s linguistic abilities and the things it talks about, Basl said. However, LaMDA, a language model, was designed specifically to talk, and the optimization function used to train it to process language and converse incentivizes its algorithm to produce this linguistic evidence.

“It is not like we went to an alien planet and a thing that we never gave any incentives to start communicating with us [began talking thoughtfully],” Basl said.

The fact that this language model can trick a human into thinking that it is sentient speaks to its complexity, but it would need to have some other capacities beyond what it is optimized for to show sentience, Basl said.

There are different definitions of sentience. Sentience is generally defined as the capacity to perceive or feel things, and it is often compared with sapience.

Basl believes that sentient AI would be minimally conscious. It could be aware of the experience it is having, have positive or negative attitudes like feeling pain or wanting to not feel pain, and have desires.

“We see that kind of range of capacities in the animal world,” he said.

For example, Basl