Google engineer claims the company's AI is sentient, gets suspended

A Google engineer who grew concerned that an AI chatbot system had become sentient has been placed on paid administrative leave for breaching confidentiality policy. According to the Washington Post, Blake Lemoine, an engineer in Google’s Responsible AI organization, was testing whether the company’s LaMDA model generates discriminatory language or hate speech when he was suspended.


Lemoine grew concerned after seeing the AI system generate convincing responses about its rights and the ethics of robotics. In April he shared a document with executives titled “Is LaMDA Sentient?” containing a transcript of his conversations with the AI, in which he believes it was arguing “that it is sentient because it has feelings, emotions and subjective experience.”

The suspension followed, with Google saying Lemoine’s actions relating to his work on LaMDA had violated its confidentiality policies. According to the report, the engineer invited a lawyer to represent the AI system and spoke to a representative from the House Judiciary Committee about what he claims are unethical activities at Google. On the day he was suspended, June 6th, Lemoine said he had sought “a minimal amount of outside consultation to help guide me in my investigations” and that the list of people he had held discussions with included US government employees.

Lemoine’s claim comes almost exactly a year after Google first announced LaMDA at Google I/O in 2021. Google’s aim with LaMDA is to improve its conversational AI assistants and enable more natural conversations.

There is “no evidence” that LaMDA is sentient, according to Google spokesperson Brian Gabriel. “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” Gabriel said. “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”

“Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphising LaMDA, the way Blake has,” Gabriel said.

This isn’t the first time AI ethics has made headlines, and it won’t be the last. We will follow this story closely to see what additional evidence comes to light.