With the advent of AI in media – generative AI in particular – the journalism industry is at the beginning of a new wave of digital transformation, with implications for everything from content to brand, ethics, business and labour practices. So it’s no surprise that the recent Nordic AI in Media Summit in Copenhagen was a sell-out. After a spring of intense AI discussions, the summit was a chance for some much-needed collective stock-taking.
The summit* drew 250+ AI practitioners from journalism, tech and academia – from well beyond the Nordic region – for a 360° look at the implications of AI for the media industry: learnings and cases so far, as well as what lies ahead, not least in terms of what generative AI will mean for journalism.
First, a reflection on how much the ground has shifted in this space since the release of ChatGPT late last year. United Robots had been asked to the summit to share insights from scaling automated journalism in newsrooms across the world.
The AI we deploy to do this is rules-based – a type that’s been in use in media for various tasks for years. Nevertheless, robot journalism has always been a challenging sell.
I get the sense, though, that in the past six months we’ve gone from being perceived as the crazy new kids on the block to, essentially, a safe pair of hands. What we offer, once seen by many in the news industry as challenging, has now become relatively unproblematic. This shift was apparent at workshops we hosted at the PPA Festival in London recently and at the INMA Subscription Summit in Stockholm in March, as well as in Copenhagen.
I’ll focus here on a few key aspects from the keynotes and panel discussions. Most of the summit presentations are now available to view on YouTube.
1) We are at peak hype curve
In his keynote, Nick Diakopoulos, associate professor at Northwestern University in Illinois looked at the hype curve triggered by AI. He provided historical context, pointing out that AI has been around in news media since IBM used it to produce news summaries as early as 1958.
“Hype comes and goes. With generative AI, we’re now at peak hype again. We need to quickly move into the next stage of the curve, start to sort out hype from reality and figure out what this tech can really offer our industry.” To this end, Diakopoulos has set up the Generative AI in Newsrooms project, a collaborative resource to explore the responsible use of generative AI in news production.
Speaking during the panel Spurring AI innovation via collaboration?, Charlie Beckett, director of the Journalism AI project at the London School of Economics, also talked about the need to move beyond the hype. “There’s no doubt, though, that generative AI is a game changer and will have a massive impact on both journalists and readers. There are already people in every newsroom who are trying these tools to solve problems.”
He added that he’s convinced generative AI will have a similar effect on the industry as did the first wave of digital transformation. “There will likely be a lot of talk of the coming slaughter of news brands. But we are here today! With some fantastic Danish brands who are innovating brilliantly. We should focus on using this tech to empower us. We face massive opportunities as well as some real risks and it’s key to have a bigger discussion around the ecosystem of journalism, and how it’s affected.”
2) Trusted news brands are at an advantage
Ezra Eeman, Change Director at Belgian/Dutch publishing group Mediahuis, provided strategic and practical context in his keynote on how AI is shaping publishing today and tomorrow. Like Beckett, Eeman emphasised that AI is a tool and that the starting point should be identifying what problems you need to solve. “Look for any friction you can remove in your processes or service. Start with the why – it’s not because it’s possible that it’s valuable. And make sure you have a clear editorial and ethical framework in place,” he said.
Ezra Eeman also laid out the secondary effects he foresees in a world where anyone can use generative AI to gain “creative superpowers”. “Generative AI has the potential to level the professional playing field in terms of content creation. Which in turn raises the bar for real journalistic output. Differentiation and uniqueness [are] more important than ever.”
Eeman pointed out that the differentiation opportunity includes trust specifically. “Trusted news brands have a huge advantage because there will be only more disinformation out there. Make sure that you bank on the trust you have and become a destination in a way that delivers value. We shouldn’t be afraid of more disinformation, it’s a challenge we’ll have to work around. We are in a good position because we have trusted brands.”
3) Design ethics into products and processes
During the Ethics of generative AI panel, Nick Diakopoulos laid out a number of suggested guidelines in the context of generative AI and journalistic values. A key guideline is to always check the work done. Never take output as evidence of truth, including any sources listed.
“It’s troubling that people even use AI to generate questions to ask an interviewee – there’s no way to guarantee that they will be relevant.” Furthermore, he stressed that we should not anthropomorphise these systems. “Don’t interview bots. It’s an improper way to use them and disingenuous when writing about them. Treat them as the bland technology they are.”
4) AI can fuel recommender systems
Christoph Schmitz delivered insights from Schibsted’s Curate – personalisation in production. Johannes Kruse from Danish Ekstra-Bladet talked about how they are creating the next generation news experience with recommenders.
Mikkel Flyverbom, professor at Copenhagen Business School, gave the keynote Feed me right – the Ethics and Politics of Recommender systems, in which he noted how the Chinese authorities recognise the addictive nature of TikTok and limit its use among their own population: “The Chinese keep the spinach version for domestic use, while they ship the opium version to the rest of the world.”
5) AI can be a tool for inclusivity and accessibility
Tove Mylläri and Samuli Sillanpää from Yle News Lab (Finnish public service) described how they use AI for things like measuring and improving diversity in news content. At Aftenposten (Schibsted, Norway) they clone human voices to make journalism more accessible to groups of people who struggle to read due to issues like dyslexia or ADHD – Product Manager Lena Beate Hamborg Pedersen described the work and results.
6) Large Language Models are too resource-intensive for smaller companies to build
Ezra Eeman covered LLMs in his keynote: “Even OpenAI’s [CEO] Sam Altman has said that the future might not lie in even bigger models. The competitive models will be the ones which can iterate even more quickly [and] adapt faster. Size is not necessarily an advantage here.” Mynewsdesk’s Daniel Jonsson and Ola Gustafsson shared their experience and learnings from building products based on large language models – they decided two years ago that LLMs are too big and resource-intensive for smaller companies to build.
Finally, do check out Agnes Stenbom’s talk on the need for imagining possible futures for media. Stenbom, co-founder of the Nordic AI Journalism Network, runs Schibsted’s Inclusion Lab, where her team develops AI solutions aimed at news outsiders.
We are only seeing the beginning of how AI will transform the way we work in media. But good journalism is about people – those who produce it and those who consume it – and in Copenhagen, it was encouraging to see how much good and proactive work is being done in this space.
* The Nordic AI in Media Summit was coordinated by Head of Research and Innovation Kasper Lindskow at Ekstra Bladet as part of the Platform Intelligence in News project (PIN) and Agnes Stenbom (Head of IN/LAB at Schibsted) and Olle Zachrison (News Commissioner at Swedish Radio) as representatives of the Nordic AI Journalism Network.
The summit was not-for-profit and supported by Innovation Fund Denmark and Politiken-Fonden and hosted by Ekstra Bladet / JP/Politikens Hus.