So-called “deep fake” technology will be producing audio and video clips that are indistinguishable from authentic ones as soon as one year from now, a media panel audience heard today.
“We are about to get to a world where the fact we have seen something on a video is no longer a statement that it is true,” data scientist John Gibson told Mindshare’s Huddle event.
Telegraph technology special correspondent Harry De Quetteville added: “Video for a long time has been the watermark of credibility – it is the media that conveys veracity…
“Now that is up for questioning in the future, we have to begin to re-evaluate our equivalence of video with truth.”
Deep fake videos use algorithms to create a facsimile of a person’s voice and their appearance, making them appear to be saying whatever the video’s creator desires.
According to Gibson, the audio side of the technology is the result of years of research by Google. The tech giant owns London start-up DeepMind, which developed the speech synthesis model WaveNet.
The video side of deep fake technology appeared on Reddit, seemingly from nowhere, a year ago, said Gibson. It is understood to have originated in the superimposing of famous faces onto bodies in pornographic videos.
“Audio is just as susceptible as video and when we put the two together it’s very deceptive,” said De Quetteville.
“Deep fakes are competing with CGI [computer-generated imagery] now after a year. In a year’s time it will be better.
“Artificial intelligence will overcome the ‘uncanny valley’ that means you automatically distinguish that something is real or not and when it’s done it will be able to be done by a kid in their bedroom.
“It will suddenly matter an awful lot where you look at the content you look at. That has quite big implications for brands and platforms.”
Already the technology is in use.
Earlier this year US comedian Jordan Peele teamed up with Buzzfeed to create a fake video of former US president Barack Obama speaking, with Peele providing the impression and a script of his own.
Said Gibson: “Human impersonators are unlikely to get any better anytime soon. Algorithms will get better really very fast.”
He said that by this time next year it was “completely reasonable to assume” that algorithmically generated fakes would be imperceptible in many cases.
De Quetteville said deep fake videos could be a good thing for trusted newsbrands because “all that trust that used to reside in the medium, i.e. video, will reside in the brand”.
He added: “The result will be that people will trust brands. They will trust the conduit rather than the message itself.”
Gibson said using detection tools to spot fake videos could become an everyday part of a journalist’s professional life within a matter of years. Tools such as digital watermarking on authentic videos could also become more prevalent, the ASI Data Science consultant said.
He said one upside could be that because deep fake videos were “quite striking and people like to talk about it” it could raise the profile of disinformation and people’s awareness of it.
He said it took people about 30 years to realise that models and celebrities on magazine covers had been airbrushed using Photoshop, which was first released in 1990.
The BBC published a video on social media yesterday using deep fake technology to show broadcaster Matthew Amroliwala presenting the news in languages he doesn’t actually know how to speak.
Real or deepfake? See how many languages BBC presenter @AmroliwalaBBC can really speak! See how technology could impact what you see and hear. #BeyondFakeNews pic.twitter.com/JV94diOU0F
— BBC Click (@BBCClick) November 14, 2018
Picture: Buzzfeed