Four major generative AI chatbots – OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini and Perplexity AI – did a shoddy job of summarizing the news, a study by the BBC found.
The organization fed content from its website to the services and then asked them questions about the news. More than half of the answers contained “significant inaccuracies” and distortions, the BBC said.
Examples of errors included Gemini incorrectly stating that the UK’s National Health Service does not recommend vaping as an aid to quitting smoking, and ChatGPT and Copilot both wrongly indicating that politicians Rishi Sunak and Nicola Sturgeon were still in office.
Perplexity also misquoted a BBC News report about the Middle East, saying Iran showed “restraint” and calling Israel’s actions “aggressive” – adjectives that did not appear in the original report.
“The price of AI’s extraordinary benefits must not be a world where people searching for answers are served distorted, defective content that presents itself as fact,” BBC News and Current Affairs CEO Deborah Turness wrote in a blog post. “In what can feel like a chaotic world, it surely cannot be right that consumers seeking clarity are met with yet more confusion.”
Turness, who was president of NBC News from 2013 to 2017, noted that the BBC is pursuing a number of AI initiatives and holding talks with tech companies to develop new automated tools.