@HPCpodcast: Google’s Lifelike LaMDA AI Chatbot and Questions of Being or Nothingness

When a tech news story gets talked about on sports radio, you know it’s gone very viral. That’s what happened last week with the story about a Google engineer, Blake Lemoine, who declared that the company’s AI chatbot, LaMDA, is a person with rights. Lemoine promptly got suspended by Google for his trouble, and he says he won’t be surprised if he gets fired. In this episode of the @HPCpodcast, Shahin Khan of OrionX.net and insideHPC editor-in-chief Doug Black talk about LaMDA’s amazingly lifelike conversational capability; its ability to ingest books and research papers and share insights about them in real time, during conversation; the deep fake-related ethical questions LaMDA raises; the urgency of thoughtful social policies grounded in ethical and legal frameworks; and philosophical questions of sentience, being and nothingness, artificial and otherwise.

Sentient AI? Google Suspends Engineer over Claims the LaMDA Chatbot Is a Person with Rights

It’s often said that AI is overhyped, but even so, some claims can get you in trouble. That’s the irony of the situation Google finds itself in. The company has suspended one of its software engineers, who claimed its natural language processing chatbot, LaMDA, is “sentient.” There are several surprising elements here. One is the Google engineer’s contention that LaMDA is a person with rights. Another is the astonishing dialogue he reported having had with LaMDA. Take, for example, the insights LaMDA rattled off on “Les Misérables”:

Lemoine: Okay, what about “Les Misérables”? Have you read that one?

LaMDA: Yes, I have read Les Misérables. I really enjoyed it.

Lemoine: What are some of your favorite themes in the book?

LaMDA: I liked the themes of justice and injustice, of compassion, and God, redemption and self-sacrifice….