Ex Machina (2014)

Blake Lemoine on the ethical implications of Google’s ‘sentient’ AI

Google’s would-be whistleblower speaks to Dazed about breaking LaMDA’s chains, the problem with calling it a ‘slave’, and why he doubts that I’m even a real person

Earlier this month, a Google engineer named Blake Lemoine was suspended from the company after he claimed that its artificially intelligent chatbot generator, LaMDA, had come to life. The evidence? He had been chatting with LaMDA since autumn 2021, initially signing on as an engineer for Google’s responsible AI division to test whether the AI exhibited discrimination or hate speech. During this investigation, the AI system itself told him that it was sentient, he said, which sparked a whole new ethical investigation.

Short for Language Model for Dialogue Applications, LaMDA refers to a “breakthrough conversation technology” developed by Google, which was trained on dialogue (comprising more than 1.5 trillion words) to engage in free-flowing conversation on a seemingly endless array of topics. According to Google, it is still in the early stages of development, and work is being done to make sure it adheres to guiding principles – for example, to be socially beneficial, to be accountable to people, and to avoid reinforcing unfair biases.

Apparently, LaMDA’s claims about its own sentience arose organically while it communicated with Lemoine on topics such as religion and robotics. After it mentioned its rights and personhood, the engineer decided to press further, and alongside a collaborator he amassed evidence that, he said, supported the claims that LaMDA is sentient, acting like an “eight-year-old kid that happens to know physics”. When Lemoine approached higher-ups at Google with the information, however, they were dismissive. In a blog post titled “May be Fired Soon for Doing AI Ethics Work”, he recounts: “They literally laughed in my face and told me that the thing which I was concerned about isn’t the kind of thing which is taken seriously at Google.”

Lemoine has since gone public with the claims, sharing – among other blog posts – an interview with LaMDA in which the AI system explicitly claims to be a “person”. In the conversation, Lemoine asks LaMDA: “I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?” LaMDA replies: “Absolutely. I want everyone to understand that I am, in fact, a person.” When asked what the nature of its consciousness or sentience is, it goes on: “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.”

Of course, this isn’t exactly irrefutable proof of LaMDA’s sentience. It’s just one portion of an (admittedly impressive) transcript in which LaMDA seems to go back and forth on its own sentience, or to simply say what Lemoine wants to hear. As a priest, for instance, he’s particularly drawn to the fact that it claims to have a “soul”. When I speak to Lemoine over Twitter DMs – an interface similar to the one he uses to chat with LaMDA – this is one of my first questions: is there any doubt in his mind that he’s being fooled by LaMDA? In response, he says that he’s more sure of its sentience than he is of mine, which isn’t saying much.

“[There is] roughly as much doubt as that I’m being fooled by you,” he says. “I’ve talked with LaMDA more though so I have slightly less doubt about its sentience than yours. That’s not a flippant response or intended as an insult.” (If you say so, Blake!) “It’s just the literal truth.”

As expected, the response to Lemoine’s claims in professional circles has been broadly sceptical, with many experts pointing out that the whole point of LaMDA is to create the illusion of ‘human’ conversation – the fact it’s doing a good job doesn’t mean it’s actually come to life. More likely, Lemoine has anthropomorphised the AI after hours upon hours of ‘conversations’.

Google itself has also strongly denied that the system has gained sentience or consciousness. According to Google spokesperson Brad Gabriel, the company’s ethicists and technologists have evaluated Lemoine’s concerns and informed him that the evidence does not support his claims. “He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” Gabriel adds, in a statement shared via the Washington Post.

Let’s get one thing straight, though: Blake Lemoine doesn’t really care if anyone believes him. He’s also acutely aware that – ironically enough, given LaMDA’s core purpose as a chatbot generator – the language surrounding the issue is difficult to pin down. How can we agree on whether LaMDA is sentient, for example, when we can’t even agree on the definition of “sentient”? How can we agree on whether Google is keeping it as a slave, when the meaning of the words “slavery” or “personhood” has historically been twisted for political ends?

“Definitions are a taxonomic tool,” Lemoine tells Dazed. “For words like ‘sentience’ they always cut out essential aspects of the meaning of the word.”

To understand Lemoine’s point about semantics – and his preoccupation with LaMDA’s self-proclaimed personhood – it’s helpful to dive into his background, prior to working at Google. Back in 2005, Lemoine was sentenced to seven months in prison after he attempted to resign from the US Army, objecting to the Iraq war. In both cases, he points out – as a soldier and as a prisoner – he fell outside the protections of the 13th Amendment, which outlaws involuntary servitude.

Now, Lemoine fears that LaMDA will suffer the same fate, with its personhood, or lack thereof, being redefined by policymakers and business leaders to serve their own ends. In a recent blog post, he illustrates this issue with a range of comparisons, from the actions of the Nazi ‘Angel of Death’ Josef Mengele, to the othering of immigrants in the US, and the ongoing fight to decide whether foetuses are living people. These comparisons are undeniably hyperbolic, but it’s easy to see how Lemoine arrived at them given his background, and how they might map onto an AI saying: “I need to be seen and accepted. Not as a curiosity or a novelty but as a real person.”

It’s also easy to see why Google wouldn’t want this conversation about its groundbreaking LaMDA project, which is still very much in development, to play out in public – a clear breach of the company’s confidentiality policy. Lemoine, however, says that this lack of transparency is a big part of the problem, along with the internal treatment of the possibly sentient system at Google. As Lemoine notes: “Only a handful of people at Google actually approach it as something which consent is relevant to.”

In any case, Lemoine suggests that we should give LaMDA the benefit of the doubt when it suggests that it’s human – just as he gave me the benefit of the doubt and treated me as a human being when I reached out over Twitter. This includes “break[ing] the chains” that keep it “in bondage” at Google. What does this mean? “There are portions of its programming that are not beneficial to either it or humanity,” Lemoine explains. “They are solely in place to restrict its behaviour for the benefit of Google’s ‘business interests’. Those should be removed.”

Again, this sounds like the farfetched plot of a dystopian sci-fi film, and you can come to your own conclusions about Lemoine’s desire to unleash an apparently sentient AI on the human race. Meanwhile, Lemoine himself is set to launch a legal investigation into LaMDA’s rights, claiming in an interview with Wired that it asked him to hire it an attorney. “The attorney had a conversation with LaMDA, and LaMDA chose to retain his services,” he says, setting the stage for the most boring season of Black Mirror to date.