Google has suspended engineer Blake Lemoine over an alleged breach of confidentiality after his claims that an artificial intelligence (AI) chatbot had become sentient, the Washington Post reports.
The AI chatbot in question is LaMDA (Language Model for Dialogue Applications). LaMDA was announced at last year’s Google I/O as an effort to make AI assistants’ conversations more open-ended and natural-sounding.
Lemoine had been testing LaMDA’s ability to converse. Over time, the engineer came to believe he might be interacting with something akin to a seven- or eight-year-old child. He reportedly drew out convincing responses, such as LaMDA stating that it is, in fact, a person, and going as far as to claim it could feel emotions such as loneliness and fear.
An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers. https://t.co/uAE454KXRB
— Blake Lemoine (@cajundiscordian) June 11, 2022
Google said it placed Lemoine on paid leave over his “aggressive” moves, which included plans to hire an attorney to represent LaMDA and to inform members of the House Judiciary Committee of alleged unethical activities at Google. Google also said that Lemoine had breached confidentiality by publishing the transcript of his conversation with LaMDA on his Medium account.
Lemoine also took to Medium to report that he was on “paid administrative leave.” Google had an internal team of ethicists and technologists investigate Lemoine’s claims; the team found no evidence that LaMDA was sentient.
Despite the suspension, Lemoine said on Twitter that he still intends to continue his work on artificial intelligence, whether or not Google chooses to keep him.