Within days of the UK election campaign starting, videos began circulating on a number of platforms ‘investigating’ Rishi Sunak’s ‘real heritage’, following patterns similar to those Barack Obama experienced during his US presidential campaign.
In today’s digital society, the demarcation between truth and falsehood is increasingly obscured. ‘Misinformation’, the unintentional spread of false information, has become the genericised term for the whole phenomenon. It is, in fact, just one pillar in a triad of information disorder.
Peer beyond the viral, baseless claims and you may spot ‘disinformation’: the deliberate creation and dissemination of falsehoods, a sinister tactic used to sway public opinion or tarnish reputations.
Then there’s ‘malinformation’: the strategic use of true information to inflict harm, such as leaking genuine but private information to discredit an individual or entity.
It’s crucial to get to grips with these nuances in order to identify and avoid information disorder: know the source of the information, understand the context and terms of reference, and don’t readily accept anything at face value.
Information disorder boosted by tech revolution
Information disorder is nothing new, but technological advancements, particularly in social media and AI, have revolutionised the way information is created, shared and consumed.
On the flip side of this technological revolution, algorithms designed to engage users often inadvertently prioritise sensational or divisive content, regardless of its veracity.
The ability of anyone to publish, algorithmic bias and the human tendency to engage with content that resonates with pre-existing beliefs have together created fertile ground for the spread of misinformation.
Information is not always intelligent
AI has introduced new complexities to the information landscape. Its ability to create convincing yet false content, from deepfake audio, imagery and even video to fabricated news articles, poses a formidable challenge to discerning truth from fiction.
These technologies, while remarkable when used for good, also offer powerful tools to those intent on manipulating public opinion. This underscores the need for a vigilant and informed populace.
Within the first week of the UK election campaign, there were many videos, beyond those of parody or satire, making factually incorrect assertions about ‘National Service’ and an unfounded claim that Keir Starmer was involved in the decision not to prosecute Jimmy Savile.
Information runs deep
A 2023 report by Home Security Heroes exposed the chilling ease of manipulating reality with deepfakes, AI-generated fabrications that exploit our trust in visuals.
Beyond the technical prowess, it’s the psychological manipulation that’s concerning. We inherently trust familiar faces and voices, making us susceptible to deepfakes’ potent mix of misinformation, disinformation, and malinformation.
The consequences are far-reaching, impacting individual reputations, public opinion and even societal stability. High-profile targets such as Taylor Swift, whose deepfake nudes were widely circulated, highlight the potential for mass deception. The platforms’ quick takedown in that case underscores the urgency of a comprehensive response, though arguably only Swift’s stardom prompted such rapid, universal action. We need education, regulation and advanced detection to safeguard online discourse.
[Read more: ITN sounds alarm over fake online content featuring Robert Peston, Mary Nightingale and others]
Deepfakes aren’t the only threat. ‘Shallowfakes’, created with traditional editing techniques, exploit similar vulnerabilities. These can be malicious, but often stem from misinformation, such as quotes taken out of context or memes masquerading as news headlines.
In recent days, videos combining shallowfake editing with AI-generated voiceover have circulated around supposed hustings, in which a Conservative MP’s speech has been layered over separate footage to imply the audience in the background is uninterested in and sceptical of the speaker. The two scenes are entirely different but have been merged into one misleading piece of video.
Information creators need to be responsible
In previous roles in current affairs and journalism, I worked with great people who methodically processed the many examples of misinformation we encountered. I’ve seen first-hand the relative ease with which it can disseminate, bringing to life the old proverb that ‘a lie will go round the world while truth is pulling its boots on’.
So how can we mitigate the impact of misinformation? Education plays a pivotal role. Enhancing media literacy is not just about people being able to distinguish true from false information but about cultivating a critical mindset that questions and analyses the source, context and purpose of the information they’re consuming.
Journalists and content creators also play a crucial role in this ecosystem. They must adhere to rigorous verification processes that go beyond a simple fact-check, delving into the context and framing of information to ensure it is accurate and unbiased, and promoting transparency and accountability along the way.
Technology firms, too, bear a significant responsibility. Implementing more transparent algorithms, enhancing fact-checking mechanisms, and fostering collaborations with fact-checkers and academia can contribute to a more informed and discerning public.
It’s vital to support creators and publishers with the skills they desperately need to accurately interpret, inspect and investigate the information at hand, informing their response and, ultimately, their output.
Information sharing is built on trust
In an age where misinformation thrives, the cornerstone of countering this tide is rebuilding public trust. The Edelman Trust Barometer offers insightful revelations here: its latest report indicates that trust across institutions varies significantly, shaping public receptivity to information.
[Trust in media: UK drops to last place in Edelman survey of 28 nations]
The European Broadcasting Union (EBU) charts public trust in media across this complex landscape. It reveals that trust in traditional media remains relatively robust, but we are only as trustworthy as our last headline, citation, video edit or social post.
Engendering trust isn’t an overnight task but a sustained effort. Media institutions have established trust over decades, yet it takes only seconds to eradicate. Media brands need to continue to uphold honest, wholesome, ethical values. That is by no means a rallying cry to return to notebook and quill.
Content creation in 2024 is exciting and innovative; it fuels so much aspiration to inspire and bring joy to audiences, made possible by the same technological advances that threaten it.
Paul Doyle has over 20 years of experience in the media industry, specialising in video production and content strategy. At Immediate, Paul oversees the strategic direction of video output, managing content creation, production, engagement, distribution, and monetisation for platform brands. Immediate’s portfolio includes Good Food, Radio Times and BBC Gardeners’ World.
Previously, at TikTok EU, he led content strategy and development, localising content for five European markets. Before that, Paul held technical and production roles at media companies including the BBC, Sky, RTÉ and ITV, while also launching various formats for broadcast and digital platforms.
In his role as head of programme delivery at First Draft, Paul addressed digital misinformation across three continents through strategic initiatives, promoting digital content integrity for media companies, tech firms, NGOs, and non-profits.