A Google software engineer shocked the world when he claimed the firm's artificial intelligence chatbot LaMDA had become sentient and self-aware.
Blake Lemoine, 41, said that LaMDA can not only understand and create text that mimics a conversation, but that it is sentient enough to have feelings and is even seeking rights as a person.

Mr Lemoine was fired on Friday, July 22, but his comments have put artificial intelligence and its capabilities, and sometimes its pitfalls, into the spotlight.

One recent example of a similar technology to LaMDA going awry came when Twitter users taught Microsoft's AI chatbot "to be a racist a**hole in less than a day," as one Verge report put it.
In March 2016, Microsoft unveiled Tay, a Twitter bot the company described as an experiment in "conversational understanding." The more you chatted with Tay, Microsoft said, the smarter it would get, learning to engage people through "casual and playful conversation."

However, the "robot parrot with an internet connection" soon began repeating back the racist, misogynistic, anti-Semitic and homophobic messages that Twitter users directed at it.
Tay was taken offline within 16 hours. It turned out that trolls on 4chan had exploited a "repeat after me" function built into Tay, whereby the bot would repeat anything said to it on demand.

However, it started to blurt out its own learned madness, too.