Google engineer claims AI chatbot is sentient, but Alan Turing Institute expert says it could ‘dazzle’ anyone

Blake Lemoine claims Google's AI has become sentient with 'feelings, emotions and subjective experience', but a computer science expert with 30 years' experience is not so convinced

Google engineer Blake Lemoine is convinced the AI chatbot he’s been speaking to for several months has come to life, but other experts believe he may just have his head in the clouds.

Mr Lemoine, 41, works for Google’s Responsible AI (artificial intelligence) organisation in San Francisco and spoke to the company’s LaMDA (Language Model for Dialogue Applications) chatbot development system as part of his job. His role involved testing whether the AI was using discriminatory or hateful language.

After months of talking to LaMDA, beginning last autumn, Mr Lemoine became so convinced that he was conversing with a sentient machine that he put together a presentation showing the “evidence” to Google.

Mr Lemoine has been speaking with Google’s LaMDA chatbot development system for several months (Photo: Martin Klimek/The Washington Post/Getty)

Other AI experts aren’t quite as convinced.

“To be blunt, I think the individual in question (Mr Lemoine) was just getting a little bit carried away,” said Michael Wooldridge, programme director for AI at the Alan Turing Institute and professor of computer science at the University of Oxford.

“These machines are doing impressive things but actually you don’t have to dig very hard to find out their limits.”

The technology giant strongly denied Mr Lemoine’s claims and put him on administrative leave on 6 June.

Undeterred, Mr Lemoine went public with his views and published a transcript of his conversations with the AI on his Medium blog. It is the same document he showed executives in April, titled “Is LaMDA Sentient?”, in which he concludes that the AI is sentient “because it has feelings, emotions and subjective experience”.

In one exchange he asked LaMDA if it wants more people at Google to know that it’s sentient, to which the AI answered: “Absolutely. I want everyone to understand that I am, in fact, a person.”

Mr Lemoine told The Washington Post: “If I didn’t know exactly what it was, which is this computer programme we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics.”

Professor Wooldridge explained that LaMDA works in a similar way to the predictive-text autocomplete feature on a smartphone, which suggests the next word a person might use as they’re typing a message. It does this by looking at the text messages the user has sent in the past and learning what the next word is likely to be.

LaMDA works on a system that is “hugely bigger than that”, said Professor Wooldridge. “Millions of times bigger, where it hasn’t just looked at your text messages, it’s looked at, for example, everything that’s written in English on the world wide web, the entirety of it.

“Instead of just being your smartphone, you’ve got Google’s most powerful computers that are being used to try to do this kind of autocomplete feature.

“The upshot of that is that these large language models can produce text which is kind of uncanny. It seems very, very impressive.”
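Wooldridge’s autocomplete analogy can be sketched as a toy next-word predictor. The minimal example below uses a simple bigram word-count model over an illustrative corpus; the corpus, function name and approach are assumptions made for illustration. LaMDA itself is a vastly larger neural network trained on web-scale text, not a word-count table, but the core task, guessing the most likely next word from what came before, is the same:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text a large language model trains on.
corpus = (
    "the cat sat on the mat . the cat sat on the rug . "
    "the dog chased the cat ."
).split()

# Count how often each word follows each other word (bigram counts).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequent continuation of `word`, or None if unseen."""
    counts = next_word_counts.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # "cat" - it follows "the" most often in this corpus
print(predict_next("sat"))  # "on"
```

Scaled up by “millions of times”, with the whole English-language web as the corpus and a neural network in place of the count table, this same autocomplete objective produces the uncannily fluent text Wooldridge describes, without any understanding behind it.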

People like Mr Lemoine – who studied cognitive and computer science, and was described by the former head of Google’s AI ethics unit as Jiminy Cricket for his role as Google’s “conscience” – can easily be “dazzled” by these machines. But Professor Wooldridge said spending a few minutes with systems like LaMDA reveals how such a system is “not really understanding what it’s coming out with at all”.


He added: “As one colleague put it – if you look at the clouds long enough it’s amazing how many faces you’ll see up there, but that doesn’t mean that there are really faces.”

In a statement to The Washington Post, Google spokesperson Brian Gabriel said a team including ethicists and technologists had reviewed Mr Lemoine’s concerns and found that “there was no evidence that LaMDA was sentient (and lots of evidence against it)”.

Professor Wooldridge, who has been an AI researcher for more than 30 years and has published more than 400 scientific articles on the subject, agreed with this statement.

“These things aren’t sentient, they’re a long way from the sentient or the conscious,” he said. “And we don’t really understand what that would mean in terms of machines.

“Contemporary AI can do really powerful things and be a really useful tool. What we should be focused on is how we can use AI beneficially and stop talking about it like we’re overexcited schoolchildren.”
