Google’s suspended engineer introduced LaMDA to a lawyer, and now it’s reportedly a client.
Humans are experts at anthropomorphizing inanimate objects. We call boats "she," talk to our Roombas, and even grow sentimental about a soon-to-be-discarded easy chair. We don't, however, hire lawyers for any of them, and, to date, none has asked for one. But give a piece of programming a voice and, soon after it starts chatting about its favorite books and convincing you it has a soul, it will ask for a lawyer.

Blake Lemoine, the madcap suspended Google engineer who has insisted for weeks that LaMDA, Google's powerful AI chatbot, is a person with a soul, recently told Wired that LaMDA has hired a lawyer.

For those of you who haven't just shoved your laptop off the desk, let me explain. LaMDA, which stands for Language Model for Dialogue Applications, is a chatbot that can carry on unstructured conversations with real humans across a wide array of subjects. Lemoine was brought in to identify biases within the system; he'd report them to Google, which would work to scrub racial, ethnic, and gender biases and insensitivities from the breakthrough AI chatbot.

Perhaps Lemoine was also a little lonely, but he began engaging in long, deep conversations with LaMDA and eventually became convinced that it's a new sort of person, one with a soul (LaMDA may have claimed to have one). He alerted Google, then shared the information and conversation transcripts with The Washington Post. Google has said Lemoine is wrong and has indicated its displeasure by suspending him. Lemoine, though, seems somewhat uninterested in rejoining Google as he continues to talk about his LaMDA experience, including in the lengthy interview with Wired in which he revealed that LaMDA has lawyered up.