
Google suspends engineer after claims that AI system has become sentient
Global News
The engineer claims the company's most advanced artificial intelligence program has become sentient, complete with its own feelings and desires.
A senior Google software engineer has claimed one of the company’s most advanced artificial intelligence (AI) programs has become sentient, complete with its own feelings and desires for mutual respect.
According to the New York Times, Google placed the engineer, Blake Lemoine, on paid leave on Monday. The company’s human resources department claimed this was a result of Lemoine violating Google’s confidentiality policy.
The Times reported that the day before he was placed on leave, Lemoine shared several documents with a U.S. senator’s office, alleging that Google engaged in religious discrimination.
Lemoine claimed the discrimination stemmed from Google’s refusal to grant his request that the company obtain consent from the Language Model for Dialogue Applications (LaMDA) program — the AI Lemoine claims is sentient — before running any experiments on it.
LaMDA is a program that can engage in “free-flowing” text conversations, much like a chatbot.
According to the BBC, Google representative Brian Gabriel told the outlet that Lemoine “was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).” Gabriel added that hundreds of researchers and engineers had conversed with LaMDA, and Lemoine was the only one to conclude the program was sentient.
Gabriel explained that LaMDA “tends to follow along with prompts and leading questions, going along with the pattern set by the user.”
“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic,” he said. “If you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on.”