Facebook has rejected a claim, made in a Guardian article quoting a former fact-checking editor, that it prioritises the debunking of false content affecting its advertisers over other misinformation.
The social network said in a blog post that the story, which raises concerns from current and former fact-checkers in Facebook's third-party programme, contained “several inaccuracies”.
The Guardian article, published yesterday, largely quotes Brooke Binkowski, former managing editor of US fact-checking website Snopes, which has partnered with Facebook to help it tackle fake news.
Binkowski, who left Snopes earlier this year, was critical of its two-year agreement with Facebook, telling the Guardian: “They’ve essentially used us for crisis PR.”
The paper reported her as saying that on at least one occasion Facebook appeared to push reporters to prioritise debunking misinformation that affected its advertisers.
“You’re not doing journalism any more,” she said. “You’re doing propaganda.”
Responding directly to this in a blog post, also published yesterday, Facebook said: “Contrary to a claim in the story, we absolutely do not ask fact-checkers to prioritise debunking content about our advertisers.”
Binkowski said that when Snopes became an official fact-checking partner with Facebook in December 2016, its journalists experienced a rise in online harassment and attacks.
She told the Guardian that when facing these attacks Facebook “threw us under the bus at every opportunity”.
Kim LaCapria, a former content manager and fact-checker at Snopes, also told the paper: “We were just collateral damage.”
She left the company over her frustrations with the agreement between Facebook and Snopes, saying the social media giant wanted the “appearance of trying to prevent damage without actually doing anything”.
The Guardian article also anonymously quoted fact-checkers currently working with Facebook, who said the platform’s collaboration with outside reporters had produced minimal results and dented trust.
In its rebuttal, Facebook said the article was “based primarily on the account of a single fact-checker who hasn’t been involved with the Facebook fact-checking program for six months”.
It added: “We provided information to The Guardian, but they chose not to include all of it.”
Facebook said it had been committed to fighting misinformation “for years now” and has “strong relationships” with its fact-checking partners, which number 35 across 24 countries.
Other US partners include ABC News, FactCheck.org, the Associated Press and PolitiFact.
Facebook said fact-checking was “highly effective in fighting misinformation”. It said future impressions of content drop by an average of 80 per cent once it is rated “false” by a fact-checker on Facebook.
These ratings are also used to take action on pages and websites that are repeat offenders when it comes to the spread of misinformation.
“We de-prioritise all content from actors who repeatedly get ‘false’ ratings on content they share, and we remove their advertising and monetization rights,” it said.
Facebook said fact-checking relied on machine learning to surface potentially false news to its partner fact-checkers. This in turn “relies on a number of signals like feedback from people who use Facebook and the number of comments expressing disbelief”.
It added: “Fact-checkers then go through a list of this potentially false content and choose for themselves what to fact-check – they are under no obligation to fact-check anything from the list, and if they’d like, they can rate stories that Facebook hasn’t added to the list (which they often do).
“As soon as something is rated ‘false’ it is automatically de-prioritised in News Feed, and where it does appear, we’ll show Related Articles including the fact-checker’s article below it. These processes are automated.”
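For readers curious how such a pipeline might fit together, the sketch below is a purely illustrative Python toy based on the signals Facebook’s blog post mentions (user feedback and disbelief comments) and the automatic demotion it describes. Facebook has not published its system; every name, weight and threshold here is hypothetical.

```python
# Hypothetical sketch only: not Facebook's actual code. Signal weights,
# the surfacing threshold and the demotion factor are invented for
# illustration, loosely following the blog post's description.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Post:
    post_id: str
    user_reports: int = 0          # "feedback from people who use Facebook"
    disbelief_comments: int = 0    # "comments expressing disbelief"
    rating: Optional[str] = None   # set by a human fact-checker, e.g. "false"
    base_rank_score: float = 1.0   # stand-in for a normal News Feed score


def suspicion_score(post: Post) -> float:
    """Combine crowd signals into one score (weights are made up)."""
    return 0.6 * post.user_reports + 0.4 * post.disbelief_comments


def surface_for_fact_checkers(posts, threshold: float = 5.0):
    """Build the queue fact-checkers may, but need not, review."""
    flagged = [p for p in posts if suspicion_score(p) >= threshold]
    return sorted(flagged, key=suspicion_score, reverse=True)


def feed_rank_score(post: Post, demotion_factor: float = 0.2) -> float:
    """Automatically de-prioritise content once rated 'false'."""
    if post.rating == "false":
        return post.base_rank_score * demotion_factor
    return post.base_rank_score


posts = [
    Post("a", user_reports=12, disbelief_comments=30),
    Post("b", user_reports=1, disbelief_comments=2),
]
queue = surface_for_fact_checkers(posts)
queue[0].rating = "false"  # a fact-checker chooses to rate this item
print([(p.post_id, feed_rank_score(p)) for p in posts])
```

The key design point the blog post stresses is that the human judgement (the rating) sits between two automated steps: machine learning surfaces candidates, fact-checkers choose freely what to rate, and demotion then happens without further human intervention.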
On the concerns around journalists’ safety, Facebook said it had started to provide safety training when bringing in new partners and was working to expand this to existing partners as well.
“We take the safety of journalists seriously,” Facebook said, pointing out that it provides online safety resources for journalists through its Facebook Journalism Project.
“Our community standards on credible violence aim to protect journalists and other vulnerable people or groups,” the social media giant added.
“We remove content, disable accounts, and work with local authorities when we become aware of content that we believe poses a genuine risk of physical harm or direct threats to safety.”
It added: “Misinformation is an ever-evolving problem that we’re committed to fighting globally, and the work that third-party fact-checkers do to help review content on Facebook is a valued and important piece of this effort.”
Picture: Reuters/Dado Ruvic/Illustration/File Photo