Facebook, Twitter and Google have pledged to speed up their response to anti-vaccine Covid-19 disinformation flagged to them by the UK government after concerns they can do “so much more” to tackle the infodemic.
Culture Secretary Oliver Dowden and Health Secretary Matt Hancock held a virtual roundtable with the tech giants, which also agreed to commit to the principle that no user or company should directly profit from Covid-19 misinformation or disinformation.
A study published in August exposed how Covid-19 disinformation websites were able to use Google’s search engine and advertising platform to make money.
The government said the new commitment would remove an incentive for this type of content to be produced, promoted or circulated.
The Center for Countering Digital Hate, which published a recent report finding that just 5% of anti-vaccine misinformation posts are removed or labelled by platforms after being reported by users, said the government needed to go further and introduce sanctions where standards are not met.
CCDH chief executive Imran Ahmed said: “The online audience for anti-vaxxers is worth $1bn to the major social media companies. It’s all very well them agreeing in principle that no company should profit from anti-vaccine content, but it’s meaningless while they themselves make such huge sums from hosting these lies and conspiracy theories.
“We know as well that 95% of the content flagged to these platforms by users is not acted on and stays online, so it’s not just reports from government that need to be responded to in a more timely manner.
“Until these words are backed up with sanctions for inaction, social media companies simply will not enforce these pledges. The government must force Big Tech to remove anti-vaccine propaganda in the Online Harms Bill, which must be passed as soon as possible.”
The agreement was made public on Sunday, a day before pharmaceutical giant Pfizer said its coronavirus vaccine is more than 90% effective in preventing Covid-19.
It also came just days after Twitter removed David Icke’s account, having been accused of allowing the conspiracy theorist to spread “dangerous Covid misinformation for months”. Facebook and YouTube had taken action against him six months earlier.
The platforms also agreed to continue to work with public health bodies to make sure authoritative messages about vaccine safety reach as many people as possible – they have already been flagging official information in relevant searches.
They will also join policy forums with the government, public health bodies and academia in the next few months to improve responses to disinformation and prepare for future threats with better co-operation.
Dowden said: “Covid disinformation is dangerous and could cost lives. While social media companies are taking steps to stop it spreading on their platforms there is much more that can be done.
“So I welcome this new commitment from social media giants not to profit from or promote flagged anti-vax content, given that making money from this dangerous content would be wrong.”
In a rare glimpse into the work of the government’s counter disinformation unit, it was revealed the team has observed “a range of false narratives about coronavirus vaccines across multiple platforms, including widespread misuse of scientific findings and baseless claims challenging the safety of vaccines or plans for their deployment”.
Press Gazette’s Fight the Infodemic campaign aims to stop tech giants from promoting Covid-19 misinformation and instead favour evidence-based journalism from bona fide outlets.
Google UK’s managing director Ronan Harris said: “Since the beginning of the Covid-19 epidemic, we have worked relentlessly to promote authoritative content from the NHS and to fight misinformation.
“In the last few months, we have continued to update our policies to make sure that content contradicting scientific consensus about the virus is swiftly removed and demonetised.
“Today, we are redoubling our commitment to take effective action against Covid vaccine misinformation and to continue to work with partners across government and industry to make sure people in the UK have easy access to helpful and accurate information.”
Facebook’s latest attempt to slow the spread of misinformation, announced last week, will see moderators of groups required to manually approve all posts in a “probation” period if too many are deemed to have breached content standards.
The company’s head of UK public policy Rebecca Stimson said: “We’re working closely with governments and health authorities to stop harmful misinformation from spreading on our platforms.
“Ads that include vaccine hoaxes or discourage people from getting a vaccine are banned, we remove harmful misinformation about Covid-19 and put warning labels over posts marked as false by third-party fact checkers.
“We’re also connecting people to accurate information about vaccines and Covid-19 whenever they search for these topics. In the first months of the pandemic, we directed more than 3.5m visits to official advice from the NHS and UK government and we’re pleased to continue to support public health efforts.”
Twitter’s head of UK public policy Katy Minshall said: “Since introducing Covid misinformation policies in March, and as we’ve doubled down on tech, our automated systems have challenged millions of accounts which were targeting discussions around Covid-19 with spammy or manipulative behaviours.
“We remain committed to combating misinformation about Covid-19, and continue to take action on accounts that violate our rules. We look forward to continued collaboration with government and industry partners in our work towards improving the health of the public conversation.”