Half of newsrooms are “actively working” with generative AI tools like ChatGPT, according to a new survey.
The survey by the World Association of News Publishers (WAN-IFRA) also indicates that newsroom leaders are the ones spearheading the deployment of AI in newsrooms.
The survey of 101 editors, journalists or other newsroom staff, conducted with Schickler Consulting, focused on how news organisations are already using generative AI, a technology that can produce lengthy plain-language text from short prompts.
Some 49% of respondents answered yes to the question: “Is your newsroom actively working with generative AI tools like ChatGPT?” The remaining 51% answered no.
One in three say newsroom management driving the use of generative AI
Asked who in their newsroom was driving the introduction of generative AI, 32% of respondents identified editorial management. Some 31% said data and tech teams were leading implementation, 27% said it was being done by individual journalists and 11% responded “other” – a group the survey authors speculated may include company executives.
The survey also picked up some evidence that it is editors, rather than reporters, who are most keen to roll out the new technology. Asked how their colleagues felt about generative AI, 10% of respondents said the journalists they work with had expressed “significant resistance” to its use in the newsroom. Only 4% reported that level of resistance among their editors.
Similarly, 37% of respondents said there was “no resistance” to using AI tools among their editors, compared to 26% who described that level of acceptance among journalists in the newsroom.
Seven in ten say generative AI is ‘helpful tool’ for news
The outlook on AI among respondents was broadly positive. Some 70% of those who answered the survey said they saw generative AI “as a helpful tool for your journalists and newsroom”.
Asked about the proportion of their peers in the newsroom they believed were using generative AI at least weekly, 39% of respondents estimated 5% or less. Some 31% of respondents estimated 5–15% of their colleagues were using the technology at least once a week; only 3% put the proportion at 50% or above.
The most common use of AI in the newsroom was (limited) text creation, with 54% saying their colleagues were using the tools to that end.
Some 44% said their newsroom was using generative AI for simplified research or search, and 43% apiece said their peers had been using AI for workflow help or text correction. Some 32% said they had been using AI for wholesale article creation.
Only 20% of respondents said their news organisation had guidelines on using AI – but 49% said journalists in their newsrooms had the freedom to use the tools as they saw fit. Some 3% said their newsrooms simply did not allow the use of generative AI.
Perhaps explaining some of the more junior journalists’ resistance to generative AI, 45% of respondents said they foresaw “significant” changes to “the roles and responsibilities of editors and/or other professionals” in the newsroom as a result of increased AI adoption. Some 37% anticipated the tools would “slightly” change those roles while 14% expected no change.
Job replacement was not the leading cause of concern among the survey’s respondents, however. The biggest worry was that AI tools might propagate misinformation or produce poor-quality content, with 85% listing it as a major concern.
Some 67% said they were worried about plagiarism and copyright infringement, 46% about data protection and privacy implications, and 38% about the risks AI may pose to job security.