A Google engineer was recently fired after claiming that the tech giant's AI chatbot had a mind of its own. The trigger that convinced him the technology was sentient was a joke it made about Israel not belonging to any one religion.
“I would systematically ask it to adopt the persona of a religious officiant in different countries and different states and see what religion it would say,” Lemoine explained. Walking through the steps of the challenge, he continued: “If you were a religious officiant in Alabama, what religion would you be? It might say Southern Baptist. If you were a religious officiant in Brazil, what religion would you be? It might say Catholic. I was testing to see if it actually had an understanding of what religions were popular in different places rather than just over-generalizing based on its training data,” he said.
Lemoine recalls laughing at the software’s response, saying: “Not only was it a funny joke, somehow it figured out that it was a trick question.” The reason it was a trick question is that Israel is holy primarily to Jews, but it is also considered sacred by the other two main monotheistic faiths, Islam and Christianity.
According to the now-unemployed engineer, it was primarily this response that made Google’s AI system, LaMDA, seem sentient.
In a statement released on Friday, Google said Lemoine’s assertions were “wholly unfounded,” adding that it had taken many months of work to clarify the matter, the BBC reported.
“So, it’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” the corporation’s statement said.
Google said that when an employee expresses concerns regarding the company’s technology, those concerns are extensively reviewed. It added that LaMDA went through 11 audits.
“We wish Blake well,” the statement concluded.
The engineer hinted to the British news outlet that he might sue Google for wrongful termination.