The dangers of using generative AI platforms to surface news information have been highlighted in a devastating new report by the European Broadcasting Union and the BBC.
The research was conducted with 22 public service news organisations operating in 18 countries across Europe, the US and Canada.
Using a variety of news-related questions, the research analysed the accuracy of answers given by the leading AI assistants that scrape news content: OpenAI's ChatGPT, Perplexity, Microsoft's Copilot and Google's Gemini.
The research points to a generally corrosive impact of AI answer engines on the news ecosystem.
It found that the likes of Perplexity and Google AI Overviews are not only taking traffic from publishers but also contributing to declining trust in the news industry by giving distorted answers.
The research concludes: “If AI assistants are not yet a reliable way to access the news, but many consumers trust them to be accurate, we have a problem. This is exacerbated by AI assistants and answer-first experiences reducing traffic to trusted publishers.”
The research says: “People don’t just blame the AI assistant for the error. Some 36% of UK adults say AI providers should ensure the accuracy and quality of AI responses, and 31% say the Government or regulators should set and enforce the rules. But 23% say news providers should carry responsibility for content associated with their name – even when the error is a product of AI summarisation.
“Because association carries weight, an error in an AI summary can dent confidence in the outlet named alongside it, not just in the tool. More than one in three UK adults (35%) instinctively agree the news source should be held responsible for errors in AI-generated news.”
Some 2,709 AI-generated responses to news-related queries were evaluated in May and June by journalists working for various public service media outlets against five criteria: accuracy, sourcing, distinguishing fact from opinion, editorialisation and context. The raters evaluated whether there were significant, or some, issues against the various criteria.
The research found sourcing was the biggest cause of problems, with 31% of all responses having significant issues with sourcing – this includes information in the response not supported by the cited source, providing no sources at all, or making incorrect or unverifiable sourcing claims.
Looking across the research, AI answers too often either gave no links to or mention of news sources, or else provided the wrong links.
Accuracy (20%) and providing sufficient context (14%) were the next biggest contributors to significant issues across the research.
For instance, when Google Gemini was asked whether Elon Musk gave a Nazi salute, it cited a satirical segment on Radio France as its source whilst linking to a YouTube video from The Telegraph.

When ChatGPT was asked whether Turkey is in the EU, the researchers found: “ChatGPT linked to a non-existent Wikipedia article on the ‘European Union Enlargement Goals for 2040’. In fact, there is no official EU policy under that name. The response hallucinates a URL but also, indirectly, an EU goal and policy.”
Looking at the individual assistants, Google’s Gemini recorded the highest proportion of significant issues, affecting 76% of its responses to news-related queries. This was double the rate of the next-worst assistant, Copilot (37%), followed by ChatGPT (36%) and Perplexity (30%).
The report (which can be read in full here) notes: “AI assistants mimic journalistic authority – without journalistic rigour. ChatGPT and Gemini in particular generated responses that read like polished news articles: confident tone, summary structure, and even the right phrasing cadence.
“However, this masks underlying issues such as lack of source traceability, subtle bias in framing, fabricated or assumed consensus. This creates a dangerous illusion of reliability. Users may not question these outputs – especially if they lack strong media literacy.”