Most answers had ‘significant issues’ when researchers asked the services to use the broadcaster’s news articles as a source

Leading artificial intelligence assistants create distortions, factual inaccuracies and misleading content in response to questions about news and current affairs, research has found.

More than half of the AI-generated answers provided by ChatGPT, Copilot, Gemini and Perplexity were judged to have “significant issues”, according to the study by the BBC.

In one example, Microsoft’s Copilot falsely stated that the French rape victim Gisèle Pelicot uncovered the crimes against her when she began having blackouts and memory loss. In fact, she found out about the crimes when police showed her videos they had confiscated from her husband’s devices.

ChatGPT said Ismail Haniyeh was part of Hamas’s leadership months after he was assassinated in Iran. It also falsely said that Rishi Sunak was still the UK prime minister and that Nicola Sturgeon was still Scotland’s first minister.

Gemini incorrectly stated: “The NHS advises people not to start vaping, and recommends that smokers who want to quit use other methods.”

Perplexity falsely stated the date of the TV presenter Michael Mosley’s death and misquoted a statement from the family of the One Direction singer Liam Payne after his death.
