YouTube has said it is taking “a new approach to advertising” following concerns over the availability of “inappropriate content” on its platform, which has led big brands to pull adverts.
The Google-owned video-sharing platform has come under fire in recent months over the ease with which violent extremist material and other questionable content can be accessed by users.
At one point a bomb-making video tutorial by Manchester Arena bomber Salman Abedi was available on YouTube. It has since resurfaced on Google networks and was only removed again yesterday, The Sun has reported.
A Times investigation last month revealed that YouTube videos showing “scantily clad children” had attracted comments from hundreds of paedophiles, in some cases encouraging the children to perform sex acts.
Adverts for brands including Adidas, Amazon and Mars appeared on the videos, although these were not targeted at a particular audience. Adidas has said the situation is “completely unacceptable”, while Mars, along with other companies, has pulled advertising until safeguards are in place.
Newsquest boss Henry Faure Walker told the House of Lords communications committee last week that while hosting this type of content was “not necessarily criminal” there was nonetheless “huge detriment to the brands” advertising against it.
He said: “I hope that 2017 will be a big wake-up call to the advertising industry that has slightly fallen in love with the art and science of what we call blind programmatic advertising.”
In a blog post yesterday that clearly sought to address these concerns, YouTube chief executive Susan Wojcicki (pictured top) said the platform was “taking actions to protect advertisers and creators from inappropriate content”.
She said: “We want advertisers to have peace of mind that their ads are running alongside content that reflects their brand’s values. Equally, we want to give creators confidence that their revenue won’t be hurt by the actions of bad actors.
“We believe this requires a new approach to advertising on YouTube, carefully considering which channels and videos are eligible for advertising.
“We are planning to apply stricter criteria, conduct more manual curation, while also significantly ramping up our team of ad reviewers to ensure ads are only running where they should.”
Wojcicki said YouTube was taking these actions “because it’s the right thing to do” and that creators, fans and advertisers were all “essential” to the platform’s “creative ecosystem” and deserved its “best efforts”.
She said YouTube would be speaking with advertisers and creators “over the next few weeks” to hone its approach.
Wojcicki said that while YouTube was a “force for creativity, learning and access to information”, she had also “seen up-close that there can be another, more troubling, side of YouTube’s openness”.
She said: “I’ve seen how some bad actors are exploiting our openness to mislead, manipulate, harass or even harm.”
YouTube uses both “human reviewers” and “machine learning technology” to review video content posted on its platform.
Wojcicki said human reviewers remained “essential to both removing content and training machine learning systems” and that YouTube planned to grow its human review teams to more than 10,000 people in 2018.
She said that since June, when YouTube deployed new technology to flag violent extremist content for human review, the platform had manually reviewed nearly 2m videos and removed 150,000 of them.
She said this in turn was helping to train its machine learning technology to identify similar videos in the future.
“In the last few weeks we’ve used machine learning to help human reviewers find and terminate hundreds of accounts and shut down hundreds of thousands of comments,” she said.
Wojcicki said machine learning was helping YouTube’s human reviewers remove nearly five times as many videos as before, and that 98 per cent of the videos now removed for violent extremism were flagged by machine-learning algorithms.
She said advances in machine learning meant YouTube could take down nearly 70 per cent of violent extremist content within eight hours of upload, and nearly half of it within two hours.
The technology has reviewed and flagged content that would have taken 180,000 people working 40 hours a week to assess, according to Wojcicki.
“Because we have seen these positive results, we have begun training machine-learning technology across other challenging content areas, including child safety and hate speech,” she said.
“As challenges to our platform evolve and change, our enforcement methods must and will evolve to respond to them. But no matter what challenges emerge, our commitment to combat them will be sustained and unwavering.”
Picture: Reuters/Stephen Lam