AI-Generated ‘Deep Video Portraits’ Could Make the Fake News Crisis Even Worse

The fake news crisis has sparked a lot of discussion and some good ideas about how to address it. Under the spotlight, both Facebook and Google reacted by banning fake news sites from their advertising networks. Facebook went a step further by introducing new measures to limit the spread of fake news on its platform, such as letting users report dubious content, which is then shown with a “disputed” warning label next to it.

While those are promising first steps, I am afraid they won’t be enough. I believe our current misinformation problem is only the tip of a massive iceberg coming our way, and it starts with AI.

The Age of Artificial Intelligence

2016 was the year AI went mainstream. After a long period of disappointment, AI is making a comeback thanks to recent breakthroughs such as deep learning. Rather than hand-coding the solution to a problem, it is now possible to teach a computer to solve the problem on its own by learning from examples. This game-changing approach is enabling products that would have seemed impossible just a few years ago, such as voice-controlled assistants like Amazon Echo and self-driving cars.

While this is great, AI is also enabling some impressive but downright scary new tools for manipulating media. These tools have the power to forever change how we perceive and consume information.

Deep Video Portraits

Deep Video Portraits is the title of a paper submitted for consideration at SIGGRAPH this August; it describes an improved technique for reproducing the motions, facial expressions, and speech movements of one person using the face of another. Here’s a scary example of what this kind of technology already makes possible:

A few months ago, a video of Gal Gadot having sex with her stepbrother surfaced on the internet. But it’s not really Gadot’s body, and it’s barely her own face. It’s an approximation, face-swapped to make it look like she’s performing in an existing incest-themed porn video.

The video was created with a machine-learning algorithm, using easily accessible materials and open-source code that anyone with a working knowledge of deep learning could put together.

It’s not going to fool anyone who looks closely. Sometimes the face doesn’t track correctly and there’s an uncanny valley effect at play, but at a glance it seems believable. It’s especially striking considering that it’s allegedly the work of one person—a Redditor who goes by the name ‘deepfakes’—not a big special effects studio that can digitally recreate a young Princess Leia in Rogue One using CGI. Instead, deepfakes uses open-source machine learning tools like TensorFlow, which Google makes freely available to researchers, graduate students, and anyone with an interest in machine learning.
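To make concrete how accessible the underlying technique is, here is a minimal sketch of the shared-encoder/dual-decoder autoencoder idea commonly used for this kind of face swap, written in TensorFlow/Keras. This is an illustration of the general approach, not the ‘deepfakes’ author’s actual code; the image size, layer widths, and model names (such as decoder_person_a and decoder_person_b) are assumptions made for the example.

```python
# A minimal sketch of the shared-encoder / dual-decoder autoencoder idea behind
# simple face swaps. Illustrative only: image size, layer widths, and model names
# are assumptions, not the actual 'deepfakes' code.
from tensorflow.keras import layers, Model

def build_encoder():
    # Encodes small aligned face crops into a compact latent vector.
    inputs = layers.Input(shape=(64, 64, 3))
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(inputs)
    x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2D(256, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    latent = layers.Dense(512, activation="relu")(x)
    return Model(inputs, latent, name="encoder")

def build_decoder(name):
    # Reconstructs a face image from the shared latent vector.
    latent = layers.Input(shape=(512,))
    x = layers.Dense(8 * 8 * 256, activation="relu")(latent)
    x = layers.Reshape((8, 8, 256))(x)
    x = layers.Conv2DTranspose(128, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu")(x)
    outputs = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
    return Model(latent, outputs, name=name)

encoder = build_encoder()
decoder_a = build_decoder("decoder_person_a")  # trained only on person A's face crops
decoder_b = build_decoder("decoder_person_b")  # trained only on person B's face crops

# Each autoencoder learns to reconstruct its own person's faces,
# but both share the same encoder.
autoencoder_a = Model(encoder.input, decoder_a(encoder.output))
autoencoder_b = Model(encoder.input, decoder_b(encoder.output))
autoencoder_a.compile(optimizer="adam", loss="mae")
autoencoder_b.compile(optimizer="adam", loss="mae")

# After training, the swap is simply: encode a frame of person A,
# then decode it with person B's decoder, e.g.
#   swapped = decoder_b(encoder(frames_of_person_a))
```

The trick is the shared encoder: both decoders are trained against the same latent representation, so encoding a frame of person A and decoding it with person B’s decoder yields person B’s face wearing person A’s expression and pose.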

It doesn’t stop there. AI is enabling many other ways to impersonate you. For instance, researchers have created an AI-powered tool that can imitate any handwriting, potentially making it easy to manipulate legal and historical documents or fabricate evidence for use in court. Even creepier, a startup built an AI-powered memorial chatbot: software that learns everything about you from your chat logs and then lets your friends chat with your digital self after you die.

A Photoshop for Everything

Remember the first time you realized you’d been had? When a picture you thought was real turned out to be photoshopped? Well, here we go again.

Back in the day, people used to say that the camera cannot lie. Thanks to the invention of the camera, it was possible, for the first time, to capture reality as it was. Consequently, it wasn’t long before photographs became some of the most trusted evidence one could rely on. Phrases like ‘photographic memory’ are a testament to that. Granted, people have manipulated photos throughout history, but those edits were rare and required the tedious work of experts. That is no longer the case.

Today’s generation knows very well that the camera lies all the time. With the widespread adoption of photo-editing tools such as Photoshop, manipulating and sharing photos has become one of the Internet’s favorite hobbies. By making it so easy to manipulate photos, these tools have also made it much harder to tell fake photos from real ones. Today, when we see a picture that seems very unlikely, we naturally assume it has been photoshopped, even if it looks completely real.

Entering the Post-Truth World

With AI, we are heading toward a world where this will be the case with every other form of media: text, voice, video, etc. To be fair, these AI tools aren’t entirely revolutionary. Hollywood has been doing voice and face replacement for many years. However, what is new is that you no longer need professionals and powerful computers to do it. With these new tools, anyone will be able to achieve the same results effortlessly using a regular computer.

Given that fake news tends to perform well online and that our trust in the media industry is at an all-time low, this is troubling news.

Join the battle against fake news: share this article to let your friends know about this emerging technology.