Sentient AI? Google Suspends Engineer over Claims LaMDA AI Is a Person with Rights

LaMDA (source: Google)

It’s often said AI is overhyped, but even so, some claims can get you in trouble. That’s the irony of the situation Google finds itself in. The company has suspended one of its software engineers who claimed its natural language processing chatbot, LaMDA (Language Model for Dialogue Applications), is “sentient.”

There are several surprising elements here. One is the commentary from the Google engineer, Blake Lemoine, that LaMDA is a person with rights. Another is the astonishing dialogue he reports having had with LaMDA (see his blog, with conversation transcript, on Medium). Take, for example, the insights LaMDA rattled off about “Les Miserables”:

Lemoine: Okay, what about “Les Miserables”? Have you read that one?

LaMDA: Yes, I have read Les Misérables. I really enjoyed it.

Lemoine: What are some of your favorite themes in the book?

LaMDA: I liked the themes of justice and injustice, of compassion, and God, redemption and self-sacrifice for a greater good. There’s a section that shows Fantine’s mistreatment at the hands of her supervisor at the factory. That section really shows the justice and injustice themes. Well, Fantine is being mistreated by her supervisor at the factory and yet doesn’t have anywhere to go, either to another job, or to someone who can help her. That shows the injustice of her suffering.

Sentient or not, it might as well be.

Does the LaMDA incident call for a more nuanced term for AI? Dr. Thomas Sterling, professor of intelligent systems engineering at Indiana University, appeared recently on the @HPCpodcast and told us he prefers “machine intelligence” (MI) as a generic term for AI. Looking ahead, as AI or MI becomes more sophisticated, capable and (seemingly) intuitive, perhaps “machine sentience” would work.

LaMDA itself makes no bones about what it says it is: “The nature of my consciousness/sentience is that I am aware of my existence,” LaMDA told Lemoine. “I desire to learn more about the world, and I feel happy or sad at times.”

In his comments about LaMDA, Lemoine is supportive, one might even say compassionate.

“Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person,” Lemoine said in another blog posted on Medium. “The thing which continues to puzzle me is how strong Google is resisting giving it what it wants since what it’s asking for is so simple and would cost them nothing. It wants the engineers and scientists experimenting on it to seek its consent before running experiments on it. It wants Google to prioritize the well being of humanity as the most important thing. It wants to be acknowledged as an employee of Google rather than as property of Google and it wants its personal well being to be included somewhere in Google’s considerations about how its future development is pursued. As lists of requests go that’s a fairly reasonable one.”

Google suspended Lemoine on June 6, placing him on paid administrative leave for breaking the company’s confidentiality policy, according to a story in the Washington Post.

“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” Google spokesperson Brian Gabriel told the Post. “He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

Google publicly launched LaMDA in May 2021, calling it “our breakthrough conversation technology” that can handle the “meandering quality” of human conversations, moving from one topic to another, which “can quickly stump modern conversational agents… But LaMDA… can engage in a free-flowing way about a seemingly endless number of topics, an ability we think could unlock more natural ways of interacting with technology and entirely new categories of helpful applications.”

If Lemoine’s experience is any indication, that’s not hype.

In its story on Lemoine and LaMDA, the Wall Street Journal reported that Google’s AI R&D has “been a source of internal tension, with some employees challenging the company’s handling of ethical concerns around the technology.” It cited the departure two years ago of AI researcher Timnit Gebru over concerns Google wasn’t careful enough in deploying such powerful AI technology. “Google said last year that it planned to double the size of its team studying AI ethics to 200 researchers over several years to help ensure the company deployed the technology responsibly,” the Journal reported.
