Four in ten news organisations have not greatly changed their approach to AI in the newsroom since 2019, according to a new LSE survey.
The lack of development on AI in many newsrooms comes despite the arrival of accessible generative AI tools such as ChatGPT, which launched last November.
However, 80% expect an increase in the use of AI in their newsrooms, and 73% believe generative AI tools such as ChatGPT and Google’s Bard present new opportunities for journalism.
Some 85% have experimented with generative AI technology to varying degrees so far.
The new survey from LSE’s Journalism AI initiative was carried out between April and June and follows on from its previous research in 2019. It involved journalists, technologists and managers from 105 news and media organisations in 46 countries. UK and US companies surveyed included the Associated Press, Reuters, The Economist, McClatchy, NPR, Semafor and the Texas Tribune.
The report said that, of the 40% of organisations whose approach to AI technologies has not changed much since 2019, many are still in the early stages of implementation. Meanwhile, others still limit AI use to one department or a small number of staff, meaning their institutional approach has not shifted.
One respondent said they were still in the pilot stage, while another said their use of AI so far has “mainly involved the IT department and just a handful of journalists dedicated to testing and validating them in limited contexts”.
However, around a quarter said their organisation’s approach has evolved, with the report highlighting one response that pointed to the change in attitudes following the launch of ChatGPT: “I’ve been working on little AI projects as part of my innovations remit for a few years now, but they have just been curiosities really.
“But the moment ChatGPT was launched, the upper management suddenly [became] really enthusiastic about AI.”
However, concerns about AI's potential impact on editorial quality, and about its ethical implications, remained for 60% of respondents.
And on widespread calls for transparency, the report noted: “It is important to note that today it is almost impossible to perform journalistic duties without using AI technologies in some way, however minor. So, it is not clear where the line is drawn between an AI-assisted production process that requires disclosure and one that does not.
“Most of our respondents seemed to refer to the explicit use of AI in content production, i.e., using ChatGPT or other gen AI technologies to summarise or author pieces, as areas where disclosure was needed.”
Newspapers were most heavily represented (28%) among the news organisations that completed the survey, followed by publishing groups (20%), broadcasters and other miscellaneous companies (16% each), news agencies (13%) and magazines (7%).
The report defines AI as a “collection of ideas, technologies, and techniques that relate to a computer system’s capacity to perform tasks normally requiring human intelligence” and generative AI as a subfield within machine learning “that involves the generation of new data, such as text, images, or code, based on a given set of input data”.
The impact of AI on news organisations over the past five years has been “systemic and accelerating”, the report said. “The most successful organisations were those that took a strategic, holistic approach and who recognised that these technologies required fundamental self-analysis of the organisation’s capabilities and future planning.”
Generative AI in newsrooms sees ‘more strategic approach’
The arrival of accessible generative AI in the past year has led to most publishers “taking a more strategic approach to gen AI, often based on the lessons from dealing with AI and other technology beforehand”.
Of the 85% of respondents who have experimented with generative AI, examples of their use of the technology include suggesting headlines, generating images, creating summaries, search engine optimisation and writing code. Others have experimented with it for data analysis but remain wary of involving it in the editorial process.
One newsroom said: “We are encouraging everyone to experiment with these. For example, our social media team uses ChatGPT to summarise articles. Our newsletter team creates infoboxes to use in newsletters, etc.”
Another said: “We use Bing Co-Pilot for suggesting headlines and sublines for topics, gathering background information and generating unique images for an article.”
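For readers curious what such a summarisation workflow looks like in practice, the sketch below shows how an article summary might be requested from a chat-style AI model. It is a minimal illustration assuming the OpenAI Python library; the model name, word limit and prompt wording are illustrative choices, not details drawn from the survey or from any newsroom quoted above.

```python
# Illustrative sketch only: how a social media team might summarise an
# article with an LLM API. Model choice and prompt are assumptions,
# not drawn from the survey.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable


def summarise(article_text: str, max_words: int = 60) -> str:
    """Ask the model for a short, plain-language summary of an article."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model would do
        messages=[
            {
                "role": "system",
                "content": f"Summarise the article in at most {max_words} words.",
            },
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    with open("article.txt") as f:
        print(summarise(f.read()))
```

Workflows like this are typically a thin wrapper around a single API call, which is why newsrooms in the survey could roll them out to non-technical teams such as social media and newsletter staff.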
Almost three-quarters (73%) believed generative AI technologies present new opportunities, with just 1% saying no and 26% unsure.
One respondent said generative AI is helpful because it is “democratic”. “I don’t need an intermediary, a developer to make me the application that I need, it’s like a Chrome extension. I make my life easier. The ease with which today, in 2023, you can do artificial intelligence compared to 2020 is impressive.”
As well as the opportunities, 40% said generative AI creates new challenges, especially in exacerbating the problem of misinformation, while 52% were not sure and 8% said no.
The report’s co-author, Professor Charlie Beckett, the director of LSE’s Journalism AI, said: “Our survey shows that the new generative AI tools are a potential threat to the integrity of information and the news media. But they also offer an incredible opportunity to make journalism more efficient, effective and trustworthy. This survey is a fascinating snapshot of the news media at a critical juncture in its history.”
AI in newsrooms: Threat to editorial quality?
Overall, 90% of respondents said AI of some kind was being used in their newsroom for news production (such as fact-checking, proofreading and writing summaries), 80% for news distribution (for example, content personalisation and recommendation, text-to-speech tools and social media posting), and 75% for newsgathering (trend detection and news discovery or tools like transcription and extracting text from images).
More than half said their core objectives for using AI are around increasing efficiency and productivity.
However, the survey found widespread concerns around the potential for AI technologies to heighten competitive pressures on newsrooms, leading to the mass production of lower-quality journalism and ultimately having an impact on audience trust.
Some 82% were concerned about the potential hit to editorial quality, with 40% concerned about the impact on readers’ perceptions.
One respondent said: “I think it’s going to result in a lot of mass-produced clickbait as news organisations compete for clicks. We will not be participating in that contest. Most of the public already have a very poor opinion of journalism, and that seems unlikely to change either way as a result of this technology.”
Another said: “If the industry seeks to only maximise revenue, then it could have a negative impact on editorial standards and ethics at large.”
Beckett said in the report that the “good news” from those who responded to the survey was that “they are aware of the opportunities and risks and are beginning to address them”.
“The best organisations have set up structures to investigate gen AI and processes to include all their staff in its adoption. They have written new guidelines and started to experiment with caution.
“This is a critical phase (again!) for news media around the world. Journalists have never been under so much pressure economically, politically and personally. Gen AI will not solve those problems and it might well add some, too.”