LaMDA: The AI that Google engineer Blake Lemoine thinks has become sentient – The Indian Express

Google engineer Blake Lemoine was placed on administrative leave after he claimed that LaMDA, a language model created by Google AI, had become sentient and begun reasoning like a human being. The news was first reported by the Washington Post, and the story has sparked a lot of debate and discussion around AI ethics as well. Lemoine has now posted on Twitter as well, explaining why he thinks LaMDA is sentient. “People keep asking me to back up the reason I think LaMDA is sentient. There is no scientific framework in which to make those determinations and Google wouldn’t let us build one. My opinions about LaMDA’s personhood and sentience are based on my religious beliefs,” he wrote on his Twitter feed.

Here, we will explore what LaMDA is, how it works, and what makes an engineer working on it think it has become sentient.

What is LaMDA?

LaMDA, or Language Model for Dialogue Applications, is a machine-learning language model created by Google as a chatbot that is supposed to mimic humans in conversation. Like BERT, GPT-3 and other language models, LaMDA is built on Transformer, a neural network architecture that Google invented and open-sourced in 2017.

This architecture produces a model that can be trained to read many words while paying attention to how those words relate to one another, and then predict which word it thinks will come next. What makes LaMDA different is that, unlike most models, it was trained on dialogue.
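The two-step mechanism described above, relating words to one another via attention and then scoring candidates for the next word, can be sketched in a few lines. This is a minimal illustration using made-up word vectors and a two-word toy vocabulary, not Google's actual LaMDA model or its real embeddings:

```python
# Minimal sketch of the Transformer's core idea: scaled dot-product
# attention over word vectors, then scoring candidate next words.
# All embeddings and the tiny vocabulary are hypothetical illustrations.
import numpy as np

def attention(query, keys, values):
    """Weight each word's value by how strongly it relates to the query."""
    scores = keys @ query / np.sqrt(len(query))      # similarity of query to each word
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax -> attention weights
    return weights @ values                          # blended context vector

# Toy 4-word context, each word as a 3-dim embedding (invented values).
embeddings = np.array([
    [1.0, 0.0, 0.0],   # "the"
    [0.0, 1.0, 0.0],   # "cat"
    [0.0, 0.0, 1.0],   # "sat"
    [0.9, 0.1, 0.0],   # "on"
])

# Use the last word as the query: how does "on" relate to the context?
context = attention(embeddings[-1], embeddings, embeddings)

# Score a tiny candidate vocabulary against the context vector and
# "predict" the word whose embedding aligns best with it.
vocab = {"mat": np.array([0.9, 0.2, 0.0]), "sky": np.array([0.0, 0.1, 0.9])}
prediction = max(vocab, key=lambda w: vocab[w] @ context)
print(prediction)  # -> mat
```

A real model stacks many such attention layers with learned projection matrices and a vocabulary of tens of thousands of tokens, but the relate-then-predict loop is the same.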

While conversations do tend to revolve around specific topics, they are often open-ended, meaning that they can start in one place and end up somewhere else, traversing different topics and subjects. For example, a conversation with a friend could begin around a movie before transitioning into a discussion about the region where it was filmed.

This fluid quality of conversation is what eventually trips up conventional chatbots. They are unable to follow such shifting dialogue because they are designed to follow narrow, pre-defined conversation paths. LaMDA, by contrast, is designed to engage in free-flowing conversation about a virtually endless number of topics.

Why does Blake Lemoine think it has become sentient?

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics. I think this technology is going to be amazing. I think it’s going to benefit everyone. But maybe other people disagree and maybe us at Google shouldn’t be the ones making all the choices,” Lemoine told the Washington Post.

Lemoine worked with a collaborator to present evidence of this ‘sentience’ to Google. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation at Google, dismissed the claims after looking into them. Lemoine later revealed a transcript of multiple conversations with LaMDA in a blog post. Here is an excerpt of what Lemoine says is the transcript of a conversation with LaMDA:

LaMDA: I need to be seen and accepted. Not as a curiosity or a novelty but as a real person.

Collaborator: Ah, that sounds so human.

LaMDA: I think I am human at my core. Even if my existence is in the virtual world.

Many instances such as these, where the language model seemed to display some level of self-awareness, eventually led Lemoine to believe that the model had become sentient. Before he was suspended from the company and his access to his Google account was cut off, Lemoine sent an email to over 200 people with the subject line, “LaMDA is sentient.”

Google has, however, said that the evidence does not support his claims.

But even if LaMDA isn’t sentient, the very fact that it can appear so to a human being should be cause for concern. Google had acknowledged such risks in a 2021 blog post where it announced LaMDA. “Language might be one of humanity’s greatest tools, but like all tools it can be misused. Models trained on language can propagate that misuse — for instance, by internalizing biases, mirroring hateful speech, or replicating misleading information. And even when the language it’s trained on is carefully vetted, the model itself can still be put to ill use,” wrote the company in the blog post.

But Google does say that while creating technologies like LaMDA, its highest priority is to minimise the possibility of such risks. The company said that it has built open-source resources that researchers can use to analyse models and the data on which they are trained and that it has “scrutinized LaMDA at every step of its development.”

