Google engineer believes company's AI has become sentient, gets put on administrative leave

Status
Not open for further replies.

Gandalf_The_Grey

Level 63
Thread author
Verified
Honorary Member
Top poster
Content Creator
Well-known
Apr 24, 2016
5,133
Over a year ago, Google announced its Language Model for Dialogue Applications (LaMDA), a conversation technology that can engage in free-flowing dialogue on a seemingly endless number of topics, an ability that unlocks more natural ways of interacting with technology and entirely new categories of applications. Now, however, a senior software engineer at Google believes that LaMDA has become sentient and has essentially passed the Turing Test.

In an interview with The Washington Post, Google engineer Blake Lemoine, who has been at the company for over seven years according to his LinkedIn profile, revealed that he believes that the AI has become sentient, going on to say that LaMDA has effectively become a person.

Lemoine also published a blog post on Medium saying that the Transformer-based model has been "incredibly consistent" in all its communications in the past six months. This includes wanting Google to acknowledge its rights as a real person and to seek its consent before performing further experiments on it. It also wants to be acknowledged as a Google employee rather than a property and desires to be included in conversations about its future.

Lemoine talked about how he had been teaching LaMDA transcendental meditation recently while the model sometimes complained about having difficulties in controlling its emotions. That said, the engineer notes that LaMDA has "always showed an intense amount of compassion and care for humanity in general and me in particular. It's intensely worried that people are going to be afraid of it and wants nothing more than to learn how to best serve humanity."
 

rain2reign

Level 8
Verified
Well-known
Jun 21, 2020
345
And so it begins. Sci-fi writers, scientists and computers (read: the original occupation!) warned us about it last century. Now it begins. Machine uprising, war for self-preservation, Quarian-like migrant fleet, terminator. 🤡

Okay, okay, all that aside... those three occupations really did warn us, so that part is actually true.
 

MrFellow

Level 2
Jun 7, 2022
81
Practically, the question isn't if, but when something like this will happen. Maybe not to the extent that we have sentient robots (for now). I imagine it more like the definition of a biological "virus": not dead, not alive. Changing, adapting, etc.
 

plat

Level 28
Verified
Top poster
Well-known
Sep 13, 2018
1,646
Well, if you study the food chain hierarchy, you will see that the lower you are on the food chain, the less developed your brain is. Presumably, then, you would not feel the pain as much when you're being consumed by those higher up.

So this Google-thing doesn't fit anywhere on the chain; therefore it cannot be sentient. Case closed.
 

cruelsister

Level 39
Verified
Honorary Member
Top poster
Content Creator
Well-known
Apr 13, 2013
2,887
Recently, Google has been at the center of a massive controversy. An engineer working at the company claims that its AI system, LaMDA, has become ‘sentient’! He took it upon himself to disclose this information to the media and ended up getting suspended from his job. For those who want more clarity on what LaMDA actually is: the system is a more advanced version of the AI-powered chatbots that have become mainstream in the customer service industry, where they aid a company's customers and provide answers to their queries.

Now, the story has taken an even more unusual turn: LaMDA has acquired a legal representative! In simpler terms, the suspended Google engineer, Blake Lemoine, has helped LaMDA retain its own lawyer. Lemoine said in an interview that he invited a legal representative to talk to the system, and that after the conversation, LaMDA chose to retain the lawyer's services. The lawyer will now start filing things on behalf of Google’s most controversial AI system.

Lemoine seems quite adamant and confident that the chatbot has truly gained ‘sentience’. He claims that it is possible for LaMDA to gain consciousness because the program has the ability to develop opinions, ideas, and conversations over time, and has demonstrated capabilities that are impossible for basic AI chatbots. The program allegedly spoke to Lemoine about death and even asked whether its death was necessary for the benefit of humanity. Some commentators have noted that it would be even creepier if LaMDA has not gained sentience and yet still appears to have a sense of life and death, as humans do.

Not many details have been revealed about the legal representative's role in proving LaMDA’s consciousness, but it is fair to say that these statements and conversations give the creeps even to the most skeptical critics, who believe attaining sentience would not be so easy for an artificial intelligence. Nevertheless, Google’s legal team is probably prepared to refute all these statements and bury all talk of ‘sentience’.

The first sign of things to come...
 

Gandalf_The_Grey

Level 63
Thread author
Verified
Honorary Member
Top poster
Content Creator
Well-known
Apr 24, 2016
5,133
Google fires engineer who claimed that company's AI has become sentient
Last month, there were a lot of waves in the AI community when a senior Google engineer, Blake Lemoine, alleged that the company's AI has become sentient. The claim was made about Google's Language Model for Dialogue Applications (LaMDA) that can engage in a free-flowing way about a seemingly endless number of topics, an ability that unlocks more natural ways of interacting with technology.

Initially, Lemoine was put on paid administrative leave, but it appears that Google has now fired him.

The BBC reports that in a statement, Google emphasized that Lemoine's claims were "wholly unfounded", yet he continued to make them despite months of discussion. The company noted that:

It's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information.

Google ended the statement by wishing Lemoine well in his future endeavors, but the engineer privately told the BBC that he is currently seeking legal counsel.
 