April 15, 2025

BBC study revealing scale of AI-generated news inaccuracies is ‘crucial checkpoint’ but we shouldn’t write the tech off

Paul Doyle says AI issues need to be addressed or we risk the "integrity of our entire information ecosystem".

By Paul Doyle

The BBC’s recent study into AI-generated news summaries is a sobering reminder of the potential and profound limitations of generative AI.

While artificial intelligence is often lauded as the inevitable next step in media evolution, the findings from this trial expose a more unsettling truth: AI, in its current form, is incapable of reliably processing and presenting accurate news.

BBC News granted temporary access to its articles to four prominent AI assistants – OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity – to assess their ability to process and accurately summarise its news articles. When the output was reviewed by senior journalists and editors, it found that more than half (51%) of the AI-generated answers contained significant issues.

Nearly one in five responses citing BBC content introduced factual errors, including incorrect dates, numbers, and events. Worse still, 13% of quoted material was either fabricated or altered from the original BBC source. To me, this calls into question the ability of AI to inform audiences in the absence of human oversight.

The truth is becoming a casualty and inaccuracies a ‘persistent feature of our information diet’

The significance of these findings extends far beyond AI’s technical shortcomings. We are already operating in an information landscape riddled with distortions, where ‘truth’ is no longer a fixed point but a spectrum of competing narratives. The rise of ‘my truth’ – subjective interpretations of events that blur the line between fact and perspective – has contributed to growing public distrust in media.

Now AI has the potential, if left entirely uncorrected, to inject another layer of distortion into an already chaotic ecosystem, producing errors that compound misinformation and further erode confidence in reliable sources.

One of our industry’s concerns is the tendency for AI assistants to attribute their errors to reputable sources. The psychological impact of this is profound: when audiences see incorrect information cited from a trusted publisher, they are less likely to question its validity.

This not only misleads readers but also undermines the credibility of legitimate journalism, often decades in the making. The danger is not just misinformation, but the erosion of trust in the very institutions meant to combat it.

If AI-generated inaccuracies become a persistent feature of our information diet, what happens next? The likely outcome is a deepening of public disenfranchisement. If consumers feel they can no longer trust any sources, they may disengage from news altogether.

This would not only weaken democracy, where informed citizens are essential, but also open the door to even greater manipulation. If truth becomes too elusive to discern, then narratives will dominate public discourse instead of facts.

A measured approach to AI adoption

However, this study should not be seen as an outright condemnation of AI’s role in journalism and information dissemination. Rather, it serves as a crucial checkpoint: a reminder that AI adoption should be approached with caution, but not fear. AI possesses immense potential to improve efficiencies, provide accessibility, and offer innovative solutions in media and beyond.

The key is not to plunge headfirst into unchecked implementation, but instead to take measured steps. We should not simply write off AI as a flawed tool. We should test the waters, assess the situation and decide if the environment is ‘red flag’ or ‘green flag’.

If AI isn’t ready today, that doesn’t mean it won’t be in the future. Keeping a watching brief and continually assessing AI will ensure its responsible development and better integration into our information ecosystem.

What needs to change? AI companies, publishers, regulators and the public all play a part

Feeding the LLM with data points will not, on its own, resolve the current block on its adoption in the newsroom. It is clear from the report that AI companies must work with publishers to improve accuracy, and regulators need to step in to ensure accountability.

In tandem, the public must develop greater AI literacy to critically assess AI-generated information. As an industry, we must be transparent and forthcoming about our use of AI in the production process, both to improve this literacy and to allow consumers to assess and assimilate the information being conveyed.

We must also move beyond the novelty of AI-generated news and recognise its fundamental ethical and societal implications. We are not just dealing with an immature technology; we are confronting a deeper philosophical crisis about truth and trust in the digital age.

If we do not address these issues now, the cost will not just be a few misquoted BBC articles, but the integrity of our entire information ecosystem.

In the meantime, my colleagues at Immediate, who are at the forefront of this innovation, advocate for the ‘AI sandwich’. This model places AI in a support role rather than as the sole content generator.

Under this framework, AI serves as an initial processing tool, working from an instructed prompt. It summarises, organises or assists in research before a human editor refines, verifies and contextualises the content.

Finally, AI can then be used once more in the post-processing phase for accessibility improvements, such as translations or formatting before again returning for human review – an AI club sandwich in that instance!

This layered approach ensures that AI’s efficiency is balanced with human oversight, creating a system that is scalable and trustworthy. The AI sandwich model could offer a practical path forward for newsrooms navigating the AI revolution.
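For illustration, a minimal sketch of how that sandwich might be wired up in a newsroom pipeline is below. The function names and stubbed AI calls are hypothetical placeholders rather than any particular vendor’s API; the point is simply that every AI pass is wrapped by a human review step.

```python
# A hypothetical sketch of the 'AI sandwich' workflow described above.
# summarise() and translate() stand in for whichever AI assistant a newsroom
# chooses; human_review() represents the manual editorial layer in between.

def summarise(article_text: str, prompt: str) -> str:
    # Placeholder for the first AI pass: a draft summary from an instructed prompt.
    return f"[AI draft for prompt: {prompt}]\n{article_text[:200]}..."

def translate(text: str, language: str) -> str:
    # Placeholder for the post-processing AI pass (translation, formatting, etc.).
    return f"[{language} version]\n{text}"

def human_review(text: str, stage: str) -> str:
    # Placeholder for the human layer: an editor refines, verifies and signs off.
    print(f"[{stage}] copy sent for editorial review")
    return text  # in practice, the edited and approved copy comes back here

def ai_sandwich(article_text: str) -> str:
    draft = summarise(article_text, prompt="Summarise the key facts and cite the source")
    approved = human_review(draft, stage="first human layer")     # verify and contextualise
    accessible = translate(approved, language="Welsh")            # accessibility pass
    return human_review(accessible, stage="final human layer")    # sign off before publishing
```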

A long way from Artificial General Intelligence

The BBC’s study was published in the same week that OpenAI released its ‘Deep Research’ model and CEO Sam Altman declared in a blog that “Artificial General Intelligence (AGI) is no longer a distant concept but is beginning to emerge”. That timing is a stark reminder of how far AI remains from true general intelligence.

It can mimic human language and structure coherent responses, but AI cannot yet grasp nuance, verify accuracy, or exercise editorial judgement in the way that trained journalists do.

The current generation of AI tools is impressive, but it is not intelligent. And when it comes to something as fundamental as the truth, close enough is simply not good enough.

However, if AI’s evolution is carefully monitored, tested, and refined, there is reason for great optimism. AI can be a powerful tool – enhancing media, improving accessibility and streamlining content consumption – but only if its adoption is measured, and its weaknesses are mitigated.

If we embrace a strategic, rather than reckless, approach to AI, its future in journalism could be bright. The BBC’s study is a lesson, not a death knell. The path forward is one of patience, scrutiny, and responsible progress.

