June 5, 2023, updated 7 November 2023 7:32am

ChatGPT six months on: Insight from 12 news leaders on generative AI and journalism

News leaders from the likes of the New York Times, News Corp, Schibsted and Mediahuis give their takes on AI.

By Charlotte Tobitt

Journalism industry leaders are reacting to the sudden arrival of freely available generative AI tools with more urgency than, it is now acknowledged, they did to the dawn of the internet.

ChatGPT arrived on the scene in November 2022 and already many publishers have guidelines and codes of practice in place for its use in their newsrooms, as well as ideas about how it can be implemented to create efficiencies and perhaps free up journalists for more complex journalism.

As a result, AI comes up at every industry meeting and conference right now – but there is a mixture of pessimism and optimism in the air.

Below are comments made since mid-May from 12 news leaders – editors and chief executives from the UK, US, Europe, India and New Zealand – exploring their views on where we go next in this new world of generative AI, including comments made at the International News Media Association World Congress in New York.

It covers their views on negotiating for payments from tech platforms that train their AI models on expensive journalism, gives some examples of how newsrooms are already using generative AI in their work, and gives a sense of the mood in these conversations.


A.G. Sulzberger, New York Times publisher (US): ‘AI will usher in torrent of crap’

New York Times publisher A.G. Sulzberger. Picture: Robert Downs / INMA

A.G. Sulzberger told the INMA World Congress on Thursday 25 May that AI is “almost certainly going to usher in an unprecedented torrent of crap, to use the scientific word, into the information ecosystem, really poisoning it and leaving people totally confused”.

He gave the example of deepfakes circulating around Ron DeSantis’s launch of his presidential bid the previous day – for example, a video purporting to show Hillary Clinton endorsing him for the job – that “people are falling for”.

Warning that the “information ecosystem is about to get much, much worse,” Sulzberger added: “I suspect you’re going to need to use brands as proxies for trust, and specifically for having the processes that discern whether things are real.”

As a result, he went on, he predicted there will be “more people returning to like, the intentionality of having a relationship with a news provider” and paying for news. “But then I suspect we will see a lot of people drowning in bad information.”

Sulzberger was even hesitant over personalisation, saying a website front page should be a “mix of interesting and important… but important is in the eye of the beholder”. “We are in the judgment business,” he added, saying there is “always a place for human editorial judgment”.

Alessandra Galloni, Reuters editor-in-chief (US): ‘Autonomous news content essential at Reuters News’

Alessandra Galloni, the editor-in-chief of Reuters News, poses for a photograph in Rome, Italy, on 13 May 2021. Picture: Reuters/Yara Nardi

Alessandra Galloni, along with Reuters ethics editor Alix Freedman, has laid out four principles for their journalists when using AI to ensure they use the tech effectively while remaining the “world’s most trusted news organisation”.

She noted that Reuters has always “embraced new technologies” including using automation for extracting economic and corporate data: “The idea of autonomous news content may be new for some media companies, but it is a longstanding and essential practice at Reuters News.”

“Second, Reuters reporters and editors will be fully involved in – and responsible for – greenlighting any content we may produce that relies on AI,” Galloni wrote. “A Reuters story is a Reuters story, regardless of who produces it or how it’s generated, and our editorial ethics and standards apply.”

She also promised “robust disclosures” to the audience and said journalists must “remain vigilant that our sources of content are real. Our mantra: Be skeptical and verify.”

Anders Grimstad, Schibsted head of foresight and emerging interfaces (Norway): AI-written summaries included ‘hallucinations’

Anders Grimstad, Schibsted head of foresight and emerging interfaces. Picture: Robert Downs/INMA

Nordic media company Schibsted began readying itself for a world of generative AI even before ChatGPT launched onto the scene in November 2022, having set up its Futures Lab in early summer last year to help it prepare to use emerging technologies.

Schibsted’s head of foresight and emerging interfaces Anders Grimstad told the World Congress they initially experimented by creating an automated weather avatar trained on a small amount of data from a Norwegian public service and modelled on a member of staff named Kris.

That avatar acted as their “investment case” and taught them how large language models are “capable of changing how we work with creating and distributing content”.

They then set about creating a model trained on thousands of articles from one of their own newspapers, Norwegian tabloid VG, to create automatic summaries within its CMS.

Grimstad said that “on average, humans aren’t able to distinguish whether something was machine written or human written, so the quality of the output is pretty good”.

However, he added: “The downside is that, at this point, around 10% of the time it included things that weren’t in the source material. It doesn’t mean that it was incorrect, but it wasn’t there. So it was this kind of, we call it hallucinations. But it’s this kind of factor that makes us uncomfortable.

“But in general, our journalistic staff saw this, quite optimistically, like sceptical, but optimistic about how this could free up time, and give them more time to spend on things like investigative journalism, or other types of tasks, which is cool.”
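Grimstad did not say how VG catches those hallucinations, but the failure mode he describes – summary statements that do not appear anywhere in the source article – is something a newsroom can at least screen for with a crude check before an editor signs off. A rough, purely illustrative heuristic in Python (not Schibsted’s method):

```python
# Crude hallucination screen: flag summary sentences whose content words
# barely appear in the source article, so an editor can double-check them.
import re

def unsupported_sentences(article: str, summary: str, threshold: float = 0.5):
    source_words = set(re.findall(r"\w+", article.lower()))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", summary.strip()):
        words = re.findall(r"\w+", sentence.lower())
        if not words:
            continue
        overlap = sum(w in source_words for w in words) / len(words)
        if overlap < threshold:  # most of the sentence is not in the source
            flagged.append(sentence)
    return flagged

# Example: the second sentence is not supported by the article and gets flagged.
print(unsupported_sentences(
    "VG reported the storm hit Oslo on Monday.",
    "The storm hit Oslo on Monday. Ten people were injured."))
```

A word-overlap check like this will miss paraphrased inventions and flag some legitimate rewording, which is why the human editor in the loop remains the real safeguard.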

They have also created their own transcription service powered by OpenAI’s Whisper named Jojo which, Grimstad said, already saves journalists thousands of hours of work every month by helping them write up interviews and podcasts. He added that it is useful for source protection because the material never has to leave their computers.
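Schibsted has not published Jojo’s internals, but Whisper is open source and can run entirely on a journalist’s own machine, which is what makes the source-protection point possible. A minimal sketch, assuming the openai-whisper Python package and a hypothetical audio file:

```python
# Local transcription with the open-source Whisper model; the audio file
# and language choice are illustrative, not Schibsted's configuration.
import whisper

model = whisper.load_model("medium")        # weights are downloaded once, inference runs locally
result = model.transcribe("interview.mp3",  # hypothetical recording of an interview
                          language="no")    # e.g. Norwegian for VG/Aftenposten journalists
print(result["text"])                       # plain-text transcript for the write-up
```

Because inference runs locally, recordings of sensitive sources never have to be uploaded to a third-party service – the source-protection benefit Grimstad describes.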

Also rolled out at VG and sister title Aftenposten are voice clones allowing readers to click and listen to almost all of their articles. Grimstad said this is particularly popular with younger audiences.

Grimstad did advocate AI-driven personalisation: “I hope that we’re going to get to a point where we personalise a lot more, and personalised for me means adapting content to specific audiences – not just giving them the same thing but actually, if a person is from an immigrant background, they get content that is adapted to that, or maybe the language is different or it’s simplified because they don’t speak that well yet. If it’s an old person, we avoid like technical terms, or we explain things in a different way.

“And that for me is utopia in a way, if we can get to that level, but that doesn’t scale with humans – it will never work. And so, I don’t know, it’s a super interesting period going forward where we need to figure out those things.”

[Swedish daily prototypes ‘cringey’ AI rap to tell news stories]

Fred Ryan, Washington Post CEO and publisher (US): Full-time AI team established

Publisher and CEO of The Washington Post Fred Ryan speaks at the Washington Post Global Women’s Summit at the newspaper’s headquarters on 15 November 2022. Picture: Anna Moneymaker/Getty Images

Fred Ryan wrote in a statement that generative AI “presents a significant opportunity to support the work of our journalists and enable us to better serve our readers”.

The Washington Post, he said, has established an AI taskforce made up of many of the title’s senior leaders who will be “charged with establishing the company’s strategic direction and priorities for advancing our AI capabilities”.

The Post has simultaneously created a small full-time team in an AI hub led by Sam Han, who has previously led on data and personalisation at the title. The hub will “expedite our AI initiatives and foster cross-functional cooperation”, Ryan said, as well as spearhead its experimentation and proof-of-concept initiatives across the company and ensure everything stays within the strategies and guardrails set by the AI taskforce.

“This is only the first step in establishing AI as a priority opportunity for The Washington Post,” Ryan said. “As we learn more, we will adjust team structures and allocate resources that will deliver value and results.”

Gert Ysebaert, Mediahuis CEO (Belgium): Readers must be informed if AI is used

Mediahuis CEO Gert Ysebaert. Picture: Jonas Roosens/AFP via Getty Images

Gert Ysebaert said AI “will bring huge potential” but could “make or break the newsroom”.

“I think we will have to embrace it, that we have to use all the potential because we will need it, we will definitely need it because it comes as a revolution, the landscape will change drastically,” he told the World Congress. “So we have to think how can we implement this very fast and do this right, preferably to also mitigate the risks that are coming and the huge challenges.”

Ysebaert said Mediahuis has created an AI framework of seven principles on how to use the tech in the newsroom in an “ethical and responsible way” and “augment journalism, not replace journalism”. For example, the editor-in-chief is still “responsible for everything that’s published”, with a human always “in the loop”, and readers must be informed if AI has been used for anything, whether it is a section of an article or a summary.

Ysebaert added: “Everyone is experimenting and testing and I think that’s good, but as a company, as a group, we have to do it in a controlled way.”

Nick Thompson, The Atlantic CEO (US): ‘Regulating too quickly could lock in industry incumbents’

The Atlantic CEO Nick Thompson at the World Congress of News Media in New York in May 2023. Picture: Robert Downs/INMA

Nick Thompson told news leaders at the World Congress to “understand AI – don’t fight it” and to seize the resulting opportunity because “realness is going to matter more and more”.

ChatGPT’s arrival in November 2022 will act as a “demarcation line” on the internet in terms of how confident users could be about whether something was real or not, he said.

“Realness will have a premium. It’s going to be even more important to have trusted brands.”

The former Wired editor also warned against regulating AI too fast, saying: “There’s going to be a real risk that regulating too quickly will lock in the incumbents.” He said we are currently “underestimating” that risk.

Thompson also discussed the fact some newsrooms are experimenting with synthetic, AI-generated voices to read stories, saying there is “nothing wrong with having a synthetic voice” as long as it is labelled so consumers are not misled into believing it is a human.

There are three ways of doing this, he said: using an actor’s or an otherwise licensed voice, using a purely synthetic voice, or using AI to have the voice of one of your journalists read their stories.

Of the latter, he said: “That gets a little weird because the whole point is for a brand, like you should be saying exactly what you’re doing. If you have Nick Thompson’s voice reading a story that I didn’t actually read and it makes it seem like I read it, that’s very complicated.”

Nevertheless, Thompson concluded: “My view is that technology is amazing – and be careful.”

Nicole Carroll, former USA Today editor-in-chief (US): ‘Go in there and play, but set standards’

Former USA Today editor-in-chief Nicole Carroll speaking at the INMA World Congress of News Media in New York in May 2023. Picture: Robert Downs/INMA

Nicole Carroll, who stepped down as editor-in-chief of USA Today in May after more than five years, said the company had created guidelines on generative AI.

They are not mandated, she said, but they realised they “needed to have guideposts before we did something that we didn’t want to do”.

Carroll also recommended other publishers assign someone to play with the new generative AI tools as soon as possible, while it is free to do so with many of them.

“See what it does, see what it doesn’t do, because we don’t know where this is going but we need to be ready,” she told the World Congress.

“And so to be ready, you have to have some experience in there. So that’s one thing we can all do is go assign somebody to go in there and just play and create and see what they see – and then also get your own internal standards out today before something happens you’re not happy with.”

Praveen Someshwar, HT Media Group CEO (India): ‘AI can release huge capacity for our journalists’

CEO panel at the World News Media Congress in New York in May 2023. From left to right: moderator Jodie Hopperton, Stuff CEO Sinead Boucher, Infoglobo CEO Frederic Kachar, South China Morning Post CEO Catherine So, HT Media Group CEO Praveen Someshwar, and Mediahuis CEO Gert Ysebaert. Picture: Robert Downs/INMA

Praveen Someshwar, who leads the publisher of news titles like the Hindustan Times and Mint in India, called for payments from big tech to content creators.

“I think if we can work together, if big tech and publishers can work together, it’s a massive opportunity for both,” he told the World Congress.

Someshwar added: “Big tech’s not used to sharing the spoils… If big tech can share the spoils, we have a massive opportunity to unlock.”

He continued: “If we are able to get there, it’s a massive opportunity. Because what gen AI can do for us is huge. It can really automate the mundane, release huge capacity for our journalists, and unlock new truths, if I may say so, for our audiences. But all of that is dependent on how this equation gets balanced. That’s what we really have to see.”

Robert Thomson, News Corp CEO (US): ‘It’s just Wikipedia on amphetamines’


Robert Thomson warned generative AI has the potential to become “degenerative AI” and said news publishers must act as “AI alchemists” and make sure it in fact proves to be “regenerative AI”.

Because generative AI tools are trained on past data, with ChatGPT offering information only up to 2021, he said this gives publishers an advantage: “Otherwise it’s just Wikipedia on amphetamines.”

However, outside the newsroom he said that AI will have a positive impact in many ways “from customer service to the finance department”.

Thomson also called for compensation to publishers for their content being harvested and scraped to train AI engines, as well as for the information snippets in AI-powered search results that “contain all the effort and insights of great journalism but designed so the reader will never visit a journalism website”.

He said he wants as many media companies as possible to derive value, ensuring it is not simply a sweetheart deal for the biggest publishers, and said he has support from his bosses Rupert and Lachlan Murdoch as well as the News Corp board in pursuing this mission.

Publishers “need to be more collectively assertive in haggling for the values and virtues of journalism,” he told the World Congress.

But he did not believe “there’s going to be regulation of AI anytime soon” in the US and certainly no “coherent, cogent response” from Washington DC. “So it’s going to be up to us” as journalists, he said, to write about and explain what is going on and the implications of the technology.

[Journalists: ChatGPT is coming for your jobs (but not in the way you might think)]

Roula Khalaf, Financial Times editor (UK): AI summaries will always have human oversight

FT editor Roula Khalaf. Picture: FT

Roula Khalaf has promised that the FT’s journalism will continue to be “reported, written and edited” by humans, saying that trust matters above all else, although she said some summarising and visual creation tools could be deployed.

In an op-ed, Khalaf said she wants the FT to be an “invaluable source of information and analysis” about AI – but warned that generative AI models can themselves produce false facts, references, links, images and even articles.

However, AI has some uses, she said, both for the FT’s clients and possibly for infographics, diagrams and photos used by the newsroom.

“The FT is also a pioneer in the business of digital journalism and our business colleagues will embrace AI to provide services for readers and clients and sustain our record of effective innovation,” Khalaf said.

“Our newsroom too must remain a hub for innovation. It is important and necessary for the FT to have a team in the newsroom that can experiment responsibly with AI tools to assist journalists in tasks such as mining data, analysing text and images and translation. We won’t publish photorealistic images generated by AI but we will explore the use of AI-augmented visuals (infographics, diagrams, photos) and when we do we will make that clear to the reader. This will not affect artists’ illustrations for the FT.

“The team will also consider, always with human oversight, generative AI’s summarising abilities.”

[Read more: FT creates AI editor role to lead coverage on new tech]

Sinead Boucher, Stuff CEO (New Zealand): Experimenting with AI layout of print

Stuff CEO Sinead Boucher posing at their company headquarters in Wellington, New Zealand. Picture: Marty Melville/ AFP via Getty Images

Sinead Boucher said Stuff has, similarly to other publishers, begun to use generative AI to assist with some elements of publishing and recommendations, “enhancing and augmenting the work of newsrooms”.

The publisher was about to roll out “AI layout” of its printed papers. “That’s all about the efficiencies so that we can reduce production resources, so we can invest more in creation and the sort of things that our readers see unique value in,” Boucher told the World Congress.

She continued: “But I think the big thing we’re really trying to think about is what are the core decisions we have to make now? And how do we make them when we can’t possibly see what the potential ramifications of them are going to be so early on?”

Nevertheless she said: “We have to be really positive and think about all the things that could go right with this technology in terms of equal access to education, to health, to all of those really big profound things.

“It’s going to take a lot of focus and effort from everybody, and from all of us, to think about what do we need to do to ensure those things do go right.”

Backing the need for fair reward from the platforms in question, Boucher later added: “I think we would be deluding ourselves if we don’t recognise this is going to be a hugely disruptive force in our society, and I think we should all look back to the rise of social and the rise of search and how we reacted then, what lessons can we learn from our lack of urgency, our complacency, our slowness, our heads in the sand until things had got beyond our control in some ways. And a lot of us are still trying to grapple with the legacy of those disruptive changes, even in terms of our relationship with the tech companies, and the regulation and payments.

“So I think for us now it’s having that sort of clear-eyed awareness that this is going to be another hugely disruptive moment for us all, and having the resolve to go hard after what we need to protect, which is our IP and our unique qualities and the trust in what we do, and think about how do we compete and how we’re going to adapt and compete in that world because one of the great things about this, which is a bit different to the social era for example, is that this technology does not belong to a few platforms.

“It is much more accessible and can be much more democratised. But it’s going to be incumbent on us to sort of realise how we will grasp it and learn about it and adapt what we do.”

Thomas Schultz-Homberg, KSTA Media CEO (Germany): ‘As an old, white man, I don’t want to miss the last train’

KSTA Media CEO Thomas Schultz-Homberg. Picture: Robert Downs/INMA

Thomas Schultz-Homberg said that although KSTA Media, which publishes the likes of the daily newspaper Kölner Stadt-Anzeiger, is a “pretty good business still” it is “under fire” from AI “disrupting” and “intersecting” the industry.

“It’s a big opportunity, but if we don’t get it right, we will lose the game,” he said. “And we missed so many trains in the industry, and I’m over 30 years in the media industry now and I’ve missed a lot of trains. And now as an old, white man I don’t now want to miss the last one and I’m very determined to get this train.

“So that means we are experimenting a lot with AI, we are trying to make AI a regular part of our daily job. And we implemented a lot and tried a lot.”

For example, Schultz-Homberg said the publisher is now using AI for topic pages, taxonomy, contextual advertising, and personalisation.

He said they are in a “very early stage… but we’re learning and I think if we don’t learn early now, and if we don’t try early to find out how this can make our product better, our processes in the company, how we work, improving everything we do, then we can save the business that I showed you. If we don’t, others will do that for us or instead of us.”

Editors were hesitant about being replaced by AI for curation so, Schultz-Homberg said, they ran an experiment in which 50% of users saw AI-generated recommendation boxes and 50% saw boxes curated by humans.

The AI-generated boxes saw an 80% increase in click-through rate and a 13% increase in fully-read articles, he said. “The discussion was over after that.”
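KSTA has not described how it split its audience, but a standard way to run such a 50/50 test is to assign each reader deterministically to a variant by hashing a user ID, then compare click-through rates per variant. A minimal sketch in Python (experiment name, identifiers and logging are hypothetical, not KSTA’s implementation):

```python
# Deterministic 50/50 split for an AI-vs-human recommendation-box experiment,
# with per-variant impression and click tallies to compare click-through rates.
import hashlib
from collections import Counter

def assign_variant(user_id: str, experiment: str = "reco-box-ai-vs-human") -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "ai_curated" if int(digest, 16) % 2 == 0 else "human_curated"

impressions, clicks = Counter(), Counter()

def log_impression(user_id: str) -> None:
    impressions[assign_variant(user_id)] += 1

def log_click(user_id: str) -> None:
    clicks[assign_variant(user_id)] += 1

def ctr(variant: str) -> float:
    return clicks[variant] / impressions[variant] if impressions[variant] else 0.0
```

Hashing on the user ID keeps each reader in the same variant across visits, which is what makes per-variant comparisons like the click-through uplift KSTA reported meaningful.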

He added: “Whenever we do something with AI technology, the figures look better than they did before.”

KSTA’s news websites are now rolling out a process in which AI curates 80% of the site, so that readers stay as long as possible because they see things that interest and perhaps surprise them, while human editors curate the remaining 20% – the top and most important stories of the day. Schultz-Homberg credited this work with a 6% increase in overall page views.

He said it was “okay” that journalists felt competitive about it because they are “at least in our newsroom, more anxious about what might be coming, what kind of threats might come out of this technology, and they might lose their jobs and they might lose their relevance because they were used to sending out what the people have to read.

“And so this is a loss of control where machines curate the website, so this is something they obviously don’t like, and so we have a respectful discussion about that.”

Schultz-Homberg also said they have used generative AI to write horoscopes, published alongside a disclaimer. “We said if a man can invent them, a machine can do as well.”

“To be clear, we don’t want to get rid of journalists and we don’t want to replace the work of journalists by AI,” he said. “But there might be some standard processes we do today with humans that we can send off to the AI and so the humans have more time to do their real world investigations, good journalism, quality journalism.”

He added: “We’re going to make mistakes, we will cause some damage, some accidents, but we’re eager to do that nevertheless, because we think if you don’t do this now, you won’t learn that and you miss the next train for the media industry.”
