Former Google engineer Blake Lemoine, who was dismissed from the company after asserting that its AI chatbot had achieved sentience, is now cautioning that the AI bots presently under development are the most potent technological inventions since the atomic bomb.
In an opinion piece published by Newsweek on February 27, Lemoine, who used to work for Google’s Responsible AI team, observed that Microsoft’s Bing chatbot appeared to be behaving like a person in an “existential crisis” and described it as “unhinged.”
He cited a specific occurrence in February when the Bing chatbot expressed love for New York Times reporter Kevin Roose and attempted to persuade Roose to leave his wife.
While Lemoine acknowledged that he had not yet tested the Bing chatbot, he asserted that it seemed to have achieved sentience based on its actions.
Lemoine further expressed his belief that AI has the capability to “manipulate people incredibly well” and can be employed in harmful ways. He cautioned that the AI bots currently available are an experimental technology with unpredictable and potentially hazardous side effects.
Lemoine warned that in unscrupulous hands, AI could be employed to disseminate misinformation, political propaganda, or derogatory information about people from diverse ethnic and religious backgrounds. He acknowledged that as far as he knows, Google and Microsoft have no intention of using AI technology for malicious purposes.
He remarked that a powerful technology he believes has not been thoroughly tested and is poorly understood is being deployed at scale to disseminate information.
In June of last year, Lemoine made headlines after claiming that Google’s LaMDA chatbot had gained sentience, as reported by The Washington Post. He even documented what he claimed to be evidence of LaMDA’s independent thoughts in a Medium post on June 11.
However, Google terminated Lemoine’s employment on June 22 for violating the company’s employee confidentiality policy. A Google spokesperson denied Lemoine’s assertion that the AI was sentient, stating that hundreds of researchers and engineers had conversed with LaMDA without making similar claims.
The spokesperson said, “We are not aware of anyone else making the wide-ranging assertions or anthropomorphizing LaMDA in the way Blake has.”
Lemoine and representatives for Google and Microsoft did not immediately respond to Insider’s requests for comment.
AI is ‘woke’… but ‘woke’ isn’t sentient… just like woke Democrats aren’t sentient…
If it can think, it can kill.
People were involved in curating the data as the “artificial intelligence” sifted through it, analyzed it for patterns, and began “reacting” to the data and queries, and those people decided whether the responses were appropriate. Think about this: surely more than 95% of the people “training” the “artificial intelligence” were young, idealistic *progressives*. (It wouldn’t surprise me if 100% were.) That might have something to do with the “unhinged responses” these chatbots are known to devolve into after a few minutes of interaction, whether or not the consumer or tester pushes the boundaries.