It took approximately ten minutes for generative AI to receive its first mention at the European Broadcasting Union’s NewsXchange conference in Dublin this week, and, as at most journalism events in 2023, what the technology may or may not do to the news industry remained the leading topic of discussion across the two days.
Copyright infringement, the risk of misinformation, how AI tools might change the economics of the industry and what plans news organisations have right now were among the main AI themes of the broadcast-focused conference. Experts from the likes of Reuters, ITN, Storyful and ABC News chipped in.
[Also from NewsXchange 2023: Liz Truss calls Daily Star’s Lizzy Lettuce stunt ‘puerile’ and unfunny and Laura Kuenssberg says it is not the job of broadcasters to sway public opinion]
Should AI companies be paying news organisations for use of their content?
Three separate panels discussed the possibility of AI businesses being made to pay news organisations for the content on which they have trained their large language models.
In a session focused specifically on the impacts generative AI might have on the news industry, Storyful founder Mark Little said: “I think we all have PTSD from the early part of the web – ‘information wants to be free’ and we were all going to let the information flow, there were no paywalls – and I think we got burned by that.
“So this time around I’m very impressed by – inspired by – the speed at which we have seen large news publishers challenge the ability, first of all, for these models to be trained without any kind of scrutiny or oversight from existing content… and secondly, then opening up a collective bargaining [process] with these tech startups to say: ‘If you want access to this, you pay for licensing.’”
Little added, however, that rather than ad hoc collective bargaining, he would prefer systems be put in place for news publishers to be paid continuously for the use of their content.
[Read more: Journalists, ChatGPT is coming for your jobs (but not in the way you might think)]
The Financial Times reported last week that executives from companies including ChatGPT creator OpenAI, Google, Microsoft and Adobe had been meeting with counterparts from publishers including News Corp, Axel Springer, Guardian Media Group and The New York Times to discuss copyright issues around AI training.
Press Gazette has previously reported the warnings of executives at the FT, Guardian and Le Monde about the need for news organisations to “get control of their IP” soon.
In a keynote interview at NewsXchange, Reuters president Paul Bascobert said he thought “all content creators are in some form of discussion” with AI companies about licensing their content – but stopped short of confirming whether his company was striking any deals.
“I wouldn’t say doing deals – I think ‘in discussions’ was the term used, and I think we’re all trying to work with those companies, and there’s many of them out there today.
“There are certainly some cases out there in courts right now that we’re watching closely, some we’re involved in, to make sure that our content can’t be taken without our permission. And so there is a legal pathway there. But there’s also a negotiation approach that says maybe there’s a way we can work together in a productive way.”
And in an interview about challenges faced by commercial news broadcasters, ITN chief executive Rachel Corp said: “I know there’s some talk at the moment… on trying to get big tech to pay for the use of our data to help train and educate different AI tools.
“I think that’s only one part of it. But I think if big tech said ‘We can’t solve this, come in!’ – that’s quite good, isn’t it?”
[Read more: News execs fear ‘end of our business model’ from AI unless publishers ‘get control’ of their IP]
How big is the risk of misinformation from generative AI?
Emmanuelle Saliba, a senior reporter with the American broadcaster ABC News, said that ahead of the US general election in November 2024 her organisation expects to see “a lot of synthetic media” entering the information ecosystem.
“We’re starting to see that trickle in, sort of in a playful way, which I think is a great opportunity for us to cover it and to start educating and helping our audiences understand that these tools exist, that they can be used to create these fake images of Trump getting arrested.”
Saliba said AI-generated images like that of the Pope in a large white puffer jacket were “entertaining and pretty easy to spot – but our worry is what happens on the local level, where there’s less attention, fewer cameras, where voice is more difficult to detect”.
She added that ABC News was wary that political candidates or public figures might take advantage of the existence of synthetic content to cast aspersions on real footage or images that might be inconvenient to them.
“I think also it’s a very trendy thing to use AI, and so I think we’re going to see candidates… using it as a way to get attention and to be covered. And so we have to think really strategically – is [any given AI-generated piece of content] worth our audience knowing about?”
Storyful’s Little, who also founded Kinzen, a start-up that aims to curb the spread of harmful content, advised news publishers: “If you’re using comment sections, if you’re using user-generated content in any way, [if your] commissioning editors are going out there looking for people to send in submissions – make sure you can detect in all of that the synthetic media, the people who try to pitch you a scam…
“And make sure that you are transparent about the ways in which you treat synthetic media coming in and the way that you might create synthetic media yourself.”
The editor of the Irish Times apologised last month after the publication unknowingly published a comment piece that had been created with generative AI.
Little cited the recently launched on-air verification project BBC Verify as a promising approach to dealing with shifts in the trustworthiness of content, as well as Project Origin, a collaborative project that aims to make the provenance of any given piece of content easily checkable.
Reuters president Bascobert was relatively upbeat about the effect of AI on the information ecosystem, however: “For anybody who wants, there are 40 hours of misinformation every 24 hours to consume. So, you know, if we go from 40 to 70… There’s [already] more than people need today if they want to swim in a place that feels good but may not be true.”
Bascobert said he was more worried about a risk to cybersecurity from “generative AI’s capability to generate its own code”. The vulnerability of news organisations to cyberattack made headlines in December when The Guardian’s systems were compromised so badly that staff were barred from the offices for months.
[Read more: BBC unveils Verify team of 60 journalists it says will be ‘transparency in action’]
The effect of AI on journalists’ jobs
Panellists across the two days at NewsXchange were broadly optimistic that AI would not displace the need for journalists – although that did not necessarily mean they thought no one would lose their job.
Storyful’s Little said: “I think there’s a possibility that, actually, generative AI will produce – not a new set of jobs necessarily – but a rebalancing, potentially very positively, so that we can actually do what we should be doing.”
Citing the closure of Buzzfeed News and Vice’s bankruptcy, Kirsten Dewar, of real-time data analysis company Dataminr, said she thought “there are far greater threats in the shorter term to journalism jobs” than AI.
“There are a lot of things that are happening in the news industry that scream more ‘diversify your revenue streams’ and ‘run a better, stronger business’ than ‘AI is coming for the jobs’.
“In our case we’ve actually seen the opposite effect, with newsrooms that use tools like ours actually needing more journalists to come in and do the fact-checking and the curation.”
And Ariane Bernard, a former chief digital officer of Le Parisien who now works as a data and analytics product consultant, added: “We have to be mindful that our businesses survive as businesses so that we can still have journalists.
“The thirst for knowledge is fairly stable! The fact that we might even have more ‘sludge’, to use Mark [Little]’s word [for low-quality information on which AIs might be trained], might actually increase the need.
“But fundamentally, as the population rises, as the world is more complex and more intertwined, and therefore in a way that the interest of the news to the public is actually wider than it used to be because we’re supposed to know of the world – there should actually be more of a need for journalists. But they probably won’t do the same thing.”
However, hours after this discussion the news broke that Axel Springer-owned German tabloid Bild would cut hundreds of roles and replace some editors with AI.
How AI might change the journalism business model
Beyond jobs, the executives and experts who spoke about generative AI at NewsXchange advised publishers there were commercial opportunities they were well placed to seize.
Storyful’s Little said “the end of the world is not coming” and “the really dangerous stuff that’s happening we know about: it’s accelerating a flood of misinformation into our ecosystem.
“I think it’s obviously diminishing the value when you create a piece of content because it’s now commoditised – that’s a huge challenge for the news business.”
But he said he thought there were some “very clear things” that AI can help the news business with.
“Think about the assets that the news business has right now that the AI generators don’t. They’ve got tons of venture capital – but they have no data. They’ve run out of data. They’re using really crappy data and they’re looking for more data.
“Big cheques are being written on the capacity of these big companies, these startups, to find data. You have data, you have content, so you’re not powerless – that’s the one thing I want to articulate.”
Bascobert made a similar suggestion later in the conference: “The threat of generative AI to create a lot more misinformation, I think, will in some ways help strong brands to rise. I think brands will become more important as a signalling device for what is true – what is provably factual or [possible] to document.”
And Bernard emphasised that beyond business disruption, generative AI could offer newsrooms the opportunity to reach new audiences.
“I think you want to cautiously advance with an open heart,” she said. “Because it’s not like the way we run our news organisations is the most accomplished that we could ever do. There’s plenty we don’t cover, there are plenty of audiences we’ve either never addressed or have abandoned over time. And they know this.
“So even if we can’t solve all these problems all at once, all new technologies give us a chance to fix some of the wrongs or address things that are not addressed.
“And in this way, the optimism is to say we’re going to gain ground either where we lost it or we never had it, and we’re going to do it in a way that is rational in this news organisation – because we don’t necessarily want to abandon other things we’ve been doing.”
What are those newsrooms doing with generative AI now?
Both Bascobert at Reuters and ITN’s Corp said their organisations are making moves to experiment with generative AI.
Corp said ITN “put out guidelines to our staff in the last week really talking to them about – let’s start experimenting off-air or non-live with the things that we can do while protecting fact-checking and eyewitness journalism”.
And Bascobert said: “We’re part of Thomson Reuters, which is a technology company. And so for us, technology is really more core to who we are as a business, and for us to commit [to] long-term capital projects that involve technology and AI is a very natural thing for our board of directors…
“That doesn’t mean that we don’t need to move quickly and be agile. We do want to spend more time with our clients, and understand what the problems are as they are making their migration.
“But you have, and will continue to see, us investing long term in building tools for discovery, for manipulation, for editing, to get into the workflows as well as commercials – the selling of content. We’re going to continue to evolve our tech platform.”
[Read more: ChatGPT six months on – Insight from 12 news leaders on generative AI and journalism]