What is Google LaMDA?
LaMDA — short for “Language Model for Dialogue Applications” — is an AI-based chatbot. Google designed LaMDA for open-ended conversation, and its area of expertise is anything in the world. According to one Google engineer’s claims, it is not just a simple chatbot but one that has emotions and may even be sentient.
Google’s LaMDA has convinced some people that it’s a person with human emotions. It’s probably not, but we need to prepare for a future when AI might, in fact, be sentient.
Simpler chatbots have been all over the internet since its early days. On e-commerce sites, these digital assistants might ask you for feedback. On messengers, they might provide some basic customer support and refer more complex cases to human operators. Assistants like Siri or Alexa can not only text but also talk as they perform a multitude of tasks and keep small conversations going.
LaMDA, on the other hand, can talk with you on almost any topic like an expert and keep you engaged in interesting conversation. Google says it has spent years developing LaMDA’s conversational skills.
As a man of faith and a former priest, Google engineer Blake Lemoine was perhaps predisposed to fall into the trap of anthropomorphizing LaMDA. Starting in the fall of last year, he spent many hours in dialogue with it, testing whether it used hateful or discriminatory speech, which Google says it is committed to eliminating as much as possible.
To examine whether LaMDA is sentient, Lemoine tested whether it was able to have experiences like thoughts and emotions, and whether it had interests of its own. A cat doesn’t like being thrown into water, and a sentient AI probably wouldn’t like being switched off.
Lemoine messed up his experiment from the start: He asked LaMDA whether it would like to participate in a project aimed at helping other engineers at Google understand that it is sentient. He asked LaMDA to confirm whether this was in its best interests, but forgot to ask whether it was sentient in the first place.
LaMDA confirmed that telling people that it is, in fact, sentient is in its best interests. But such a response is reasonable from a non-sentient AI because Lemoine’s question is a yes-or-no one. As such, the two most plausible answers would be along the lines of either, “Yes, I’d love other engineers to know that I’m sentient,” or “No, I like to live in secrecy, so I’ll keep my sentience to myself.”
The most astonishing moment for Lemoine came when he asked the bot about its fears, and it replied that it’s scared of being switched off. We should bear in mind, however, that large language models are already able to take on different personas, ranging from dinosaurs to famous actors.
In this context, there is no indication that LaMDA is truly sentient as of now. There’s no formal way to prove sentience today, but a chatbot that passes all the tests listed above would be a good start.
You can read the full transcript of the conversation between LaMDA and Lemoine here.