Press regulator Impress has launched its new Standards Code, adding revisions that hold publishers to stricter standards on discrimination and prepare for the rollout of artificial intelligence in newsrooms.
The new code was published on Thursday morning following a two-year consultation period.
It is the first revision to Impress’ Standards Code, whose original version launched in 2017 and distinguished itself from the Editors’ Code – the guidelines upheld by Britain’s other press regulator, IPSO – with stronger rules on identity-based discrimination.
Thursday’s revision strengthens those rules further by lowering the threshold for what Impress regards as discriminatory coverage.
Previously, point 4.3 of the code said: “Publishers must not incite hatred against any group… [on any] characteristic that makes that group vulnerable to discrimination.”
In the new code, the same point reads: “Publishers must not encourage hatred or abuse against any group” based on those characteristics.
Lexie Kirkconnell-Kawana, Impress’ deputy chief executive (and, starting in April, its chief executive), said: “The previous threshold was that a publisher would be in breach of the Code if they incited hatred.
“And inciting someone to commit hatred or violence against someone else because they’re different is, obviously, an incredibly important thing to be preventing. It’s the legal standard – although it’s not a very well enforced legal standard, particularly not online.
“And if we think about the fact that the narrative that the media spins and the stories, how they’re presented, how that’s incredibly important to shaping people’s kind of interactions with others – we saw that journalists should bear responsibility for an appropriate and significant weight of that effect.”
The new discrimination standard in the code, Kirkconnell-Kawana said, “accounts for prejudice that could be more insidious and be more cumulative or more thematic, and not a direct call to action or violence against a group of people – because that’s an incredibly high threshold, and it’s not often how news is carried. You don’t see headlines saying, you know, ‘Take up arms against x group’.”
Kirkconnell-Kawana said Impress’ new standard was the result of feedback the regulator had received during its consultation period, which ran from 2020 to 2022.
“Every single community that we talked to about the news narrative told us they feel like, now more than ever, the news narrative is really problematic.
“We asked for words to describe how communities felt about how they were represented, and what we got back was feedback like ‘it’s toxic’, ‘it’s hateful’, ‘it’s polarising’.
“And so we have to be really alive to that and what the effect is not just on the communities, but on society as a whole. And we know that there’s significant alienation going on right now, because of that toxic news environment.”
IPSO’s most recent Editors’ Code review, last conducted in 2020, ultimately rejected calls to add a clause explicitly banning discriminatory content. IPSO regulates much of the more established UK press – for example the Mail newspapers, Reach plc’s outlets and the News UK titles – whereas Impress’ publishers tend to be newer, digitally-focused publishers such as Bellingcat, Gal-dem and The Canary.
The other major change to the code, Kirkconnell-Kawana said, concerns Impress’ transparency requirements, particularly as they apply to AI-generated content.
“We’ve all probably been spending this last winter messing around with AI art generators and ChatGPT,” she said, “and recognising that those technology advances are really fun and they may unlock creativity and efficiency for newsrooms.
“They can also have downsides, which are that new innovation, new tech, typically doesn’t incorporate ethics by design.
“These are businesses creating tools and they have a function – so the ethics really need to be retrofitted to them and applied to them by the users, the human agents using them.”
Impress’ guidance on the Standards Code’s accuracy requirements now says publishers need to “be aware of the use of artificial intelligence (AI) and other technology to create and circulate false content (for example, deepfakes), and exercise human editorial oversight to reduce the risk of publishing such content”.
The advice comes shortly after technology site CNET was found to have repeatedly and inadvertently published false information after allowing AI to write articles.
Kirkconnell-Kawana said Impress’ guidance on AI reflected less on how AI is currently used in newsrooms and more on the working assumption that it will soon be rolled out much more widely.
“We all know that as these tools become more accessible, newsrooms are just going to naturally acquire them…
“So it’s making sure that we’re being really future-focused, that we’re being proactive, we’re looking at what’s coming down the line in terms of evolving content practices, and that, again, those first principles still apply: editorial responsibility, oversight and accountability.”