According to a Google engineer, the tech giant's AI has become fully sentient. According to Google and most of the AI industry, it hasn't.
The claim was made in the Washington Post by Blake Lemoine, who has since been placed on leave by Google. Lemoine published transcripts of conversations he'd had with LaMDA (short for Language Model for Dialogue Applications), a sophisticated chatbot development system.
According to Lemoine, the system he'd been working on was as aware and as smart as a human child, capable of experiencing and expressing emotions and thoughts. "I'd think it was a seven year old, eight year old that happens to know physics," he told the newspaper.
Google begs to differ. Here's the firm's spokesperson Brad Gabriel: "Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)". Lemoine has been suspended for breaching Google's confidentiality policies and making "aggressive" moves against the firm.
So what's going on?
There's more to AI than the Turing test
The Turing test is well known in AI circles: in Turing's formulation, greatly simplified, a computer can be considered artificially intelligent if, under very specific conditions, a human questioner believes they're talking to a human across multiple test runs. Lemoine clearly believes, or says he believes, that LaMDA more than passes the Turing test. But that wouldn't make LaMDA sentient; the Turing test is deeply flawed and is not a reliable benchmark for intelligence.
What's much more likely than Google making a sentient AI is that a program designed to fool humans into thinking they're talking to more than just a chatbot has fooled this person into thinking it's more than just a chatbot. The fact that Lemoine's chat transcripts have been edited from multiple conversations has also raised concerns.
Writing on Substack, scientist and machine learning expert Gary Marcus calls the claims "nonsense on stilts". "Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent," he writes. "All they do is match patterns, draw from massive statistical databases of human language. The patterns might be cool, but language these systems utter doesn’t actually mean anything at all. And it sure as hell doesn’t mean that these systems are sentient."
But, he adds: "Which doesn’t mean that human beings can’t be taken in. In our book Rebooting AI, Ernie Davis and I called this human tendency to be suckered The Gullibility Gap — a pernicious, modern version of pareidolia, the anthropomorphic bias that allows humans to see Mother Theresa in an image of a cinnamon bun."
The idea that Google has created artificial life and is somehow trying to keep it a secret, or is holding it to ransom, makes for a good story. But while LaMDA seems to be a very, very clever chatbot, it's really just connecting words together. "We in the AI community have our differences, but pretty much all of us find the notion that LaMDA might be sentient completely ridiculous," Marcus writes. "In truth, literally everything that the system says is bullshit."
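To see what "just connecting words together" means in practice, here's a deliberately toy sketch of statistical text generation. This is not how LaMDA works internally (real systems use neural networks trained on billions of words, not a hand-built word-pair table, and the tiny corpus here is invented for illustration), but the underlying idea is similar in spirit: each next word is chosen because it statistically tends to follow the previous one, not because anything is meant by it.

```python
import random

# Toy bigram "language model": record which words have followed each
# word in a training text, then generate by repeatedly sampling a
# plausible next word. Output can sound fluent while meaning nothing.

corpus = (
    "i feel happy today and i feel that i am aware of my feelings "
    "and i am happy to talk about my feelings today"
).split()

# Build a table of observed next-words for each word.
followers = {}
for word, nxt in zip(corpus, corpus[1:]):
    followers.setdefault(word, []).append(nxt)

def generate(start, length=10, seed=0):
    """Chain words together purely from observed word-pair statistics."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = followers.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("i"))
```

Every word pair in the output appears somewhere in the training text, so the result reads like plausible English about feelings; nothing in the program "has" feelings, it's doing table lookups and coin flips.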
That doesn't mean it has a soul. But it sounds like it's got a good chance of getting lots of followers on Twitter.