
Publishers delete and amend stories based on dubious experts

News UK, Reach and Yahoo News remove stories from archive after Press Gazette investigation.

By Rob Waugh

Dozens of stories have been deleted and amended by leading publishers after a Press Gazette investigation into fake and dubious experts being widely quoted in the UK and US media.

The story exposed a troubling trend of fake profiles commenting in response to online journalists’ requests, targeting lifestyle features and possibly using AI to generate both profiles and comments.

Companies selling CBD oil, sex toys, vapes and essay writing services are apparently seeking to game the Google algorithm to achieve higher rankings in search by associating themselves with ‘expert’ voices and gaining links from bona fide news outlets.

ResponseSource, the journalist request service which one alleged fake profile used to send comments to dozens of journalists in the UK, has instructed staff to be “vigilant”. The network also explained the “blind spot” which allowed a fake profile to be featured dozens of times in national newspapers over a number of years.

The Sun has confirmed that it will be searching for and removing all articles involving one apparently fake expert who had commented on issues for the paper ranging from what the colour of one’s underwear says about your personality to the intricacies of the “Italian chandelier” sex position.

Publisher Reach is now removing a number of articles across its publications, which include the Mirror and Express titles.

The Telegraph is understood to have warned freelances about the pitfalls of using journalist-request services. Yahoo News has either deleted or amended more than a dozen articles.

Press Gazette has compiled a dossier of more than 100 articles which have appeared in Mail Online, The Sun, Mirror, Express, HuffPost UK, Yahoo News, Metro, The Independent and the Telegraph based partly, or entirely, on input from apparently fake experts.

A spokesperson for Reach said: “As a responsible news publisher, we take the validity of our sources extremely seriously. We have therefore taken action by finding and removing articles across our newsbrands.”

Reach said that third-party databases like ResponseSource are widely used across the industry, and that it is natural that journalists should have a degree of trust in these sources.

The spokesperson said: “Our journalists trust that third-party databases, PR agencies and brands check and verify their voices before they are featured in press releases or expert listings. It is clear that in this example, those checks have fallen short and we will be seeking assurances from external providers that this will not be repeated.”

Fake media experts are an old issue worsened by AI

Tom Ritchie, president of Pulsar Group, which owns ResponseSource, said: “That individual’s response to you was a direct violation of what we consider is acceptable use under what we do, and hence we took immediate action.”

He said that the expert Press Gazette highlighted was not spotted because they were part of a larger organisation that uses ResponseSource services legitimately, with real spokespeople.

Ritchie said: “What we do when we sign up customers is we do background checks. We make sure that they’re legitimate organisations, and are who they say they are. The person was connected to us through an organisation that procures our services very legitimately. They have other businesses and spokespeople that interact with other journalists’ requests very legitimately, and there’s no cause for concern around those other activities.”

Ritchie declined to name the organisation in question. He says the problem of fake or dishonest commenters has been around for years, but has recently been “exacerbated” by how quickly AI can create fake profiles and fake content. For websites, having ‘experts’ associated with them can offer a boost in Google’s search rankings.

In the wake of the issue, ResponseSource has circulated an FAQ instructing staff to remain vigilant for similar fake profiles.

Rebecca Leigh. A widely quoted media expert who Academized admitted does not exist.

The challenge for organisations such as ResponseSource in dealing with fakers is scale and visibility, Ritchie says. The organisation deals with thousands of requests a day, and of the responses sent to journalists through ResponseSource, just one in 20 makes it into a story or piece of online content. Once customers have connected with journalists via email, ResponseSource also cannot see the messages, making it harder to track “bad actors”.

Ritchie says that ResponseSource is already working to revamp the way the site operates.

He said: “Our big focus really is about reimagining this, the model that we have today. Candidly, it’s a 25-year-old model. It’s worked very well in the past. But is there an opportunity to reinvent this and create something that’s more valuable to both sides of the network? Yeah, there definitely is, and we’re actually building that right now.”

The ‘next generation’ of ResponseSource will work on a ‘network’ model and will include ratings for sources – so that reporters can “thumbs down” dubious sources or highlight sources which have delivered copy reliably. Ritchie suggested that sources could be down-ranked if they behave in untrustworthy ways, and could have “credibility scores” based on factors such as whether they have authenticated their education, for example.

“Journalists should be able to look at profiles and think: ‘My peers are saying: don’t trust this source. This person is questionable.’”

ResponseSource founder says ‘lazy journalists’ are part of the problem

Darryl Willcox, who founded ResponseSource in 1997 and sold it in 2018, says that the simplicity and speed of platforms like ResponseSource are key to their appeal and that attempts to add authentication risk slowing down the system.

Willcox said: “The other factor which complicates things a little bit is that these platforms are quite an open system. Once a journalist makes a request they can be forwarded around organisations, and sometimes between them, and often PR agencies are acting for multiple parties, and they will be forwarded onto their many clients.”

Willcox believes tools like ResponseSource can be a force for good, but can also be abused. He says, however, that the real “multiplier” which has allowed fake commentary to spread so far is “lazy journalists” who fail to check whether experts are real.

The number of services offering responses to journalists has boomed in recent years, with US-based rival Qwoted entering the market, and other rivals such as Dot Star Media, MediaMatchmaker, Editorielle and PressPlugs.

The weakness of many such networks is that they rely on email, says Matt Kneller, co-founder of Qwoted, which sends messages through its own servers.

Kneller said: “The real issue is that most of the activity in this industry still happens over email. That’s where the risk is because every pitch hits your inbox looking the same and there’s no way to easily differentiate what’s credible from what’s not. At Qwoted, you have to log in. You have to be verified. We’ve put guardrails in place specifically to identify and act on suspicious behavior.”

Press regulator warns that quoting fake experts is breach of Editors’ Code

Regulator IPSO points out that publishers have an obligation under the Editors’ Code to ensure that sources quoted are real and not AI fakes.

A spokesperson said: “IPSO has noted with interest and concern the reporting of Press Gazette on unverified experts. The rise of real or created profiles purporting to be expert and using AI to generate comments is a serious challenge to journalists as they go about their work. But publishers and editors have an important responsibility to their readers and, through regulation, are accountable for their content.

“Clause 1 of the Editors’ Code does require publications to ensure that care is taken to ensure accuracy. Publishers should be alert to this new challenge and the potential of artificial intelligence to spread misinformation and have systems or policies in place to manage the risk. We are monitoring concerns and standards issues in relation to the use of AI.”


