Blake Lemoine, a Google engineer, says that the company's language model has a soul. The company disagrees.
Google placed an engineer on paid leave recently after dismissing his claim that its artificial intelligence is sentient, setting off yet another fracas over the company's most advanced technology.

Blake Lemoine, a senior software engineer in Google's Responsible A.I. organization, said in an interview that he was put on leave Monday. The company's human resources department said he had violated Google's confidentiality policy. The day before his suspension, Mr. Lemoine said, he handed over documents to a U.S. senator's office, claiming they provided evidence that Google and its technology engaged in religious discrimination.

Google said that its systems imitated conversational exchanges and could riff on different topics, but did not have consciousness. "Our team — including ethicists and technologists — has reviewed Blake's concerns per our A.I. Principles and have informed him that the evidence does not support his claims," Brian Gabriel, a Google spokesman, said in a statement. "Some in the broader A.I. community are considering the long-term possibility of sentient or general A.I., but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient." The Washington Post first reported Mr. Lemoine's suspension.

For months, Mr. Lemoine had tussled with Google managers, executives and human resources over his surprising claim that the company's Language Model for Dialogue Applications, or LaMDA, had consciousness and a soul.