In a Battle of AI versus AI, Researchers Are Preparing for the Coming Wave of Deepfake Propaganda

By John Sohrawardi and Matthew Wright

Published 9 October 2020

Deepfakes are here to stay. As artificial intelligence grows more powerful, managing disinformation and protecting the public will be more challenging than ever. People may soon be able to run the videos they watch through a specialized tool that tells them whether those videos are what they seem – or whether they are “deepfakes,” videos made using artificial intelligence with deep learning. We are part of a growing research community that is taking on the deepfake threat, in which detection is just the first step.

An investigative journalist receives a video from an anonymous whistleblower. It shows a candidate for president admitting to illegal activity. But is this video real? If so, it would be huge news – the scoop of a lifetime – and could completely turn around the upcoming elections. But the journalist runs the video through a specialized tool, which tells her that the video isn’t what it seems. In fact, it’s a “deepfake,” a video made using artificial intelligence with deep learning.

Journalists all over the world could soon be using a tool like this. In a few years, a tool like this could even be used by everyone to root out fake content in their social media feeds.

As researchers who have been studying deepfake detection and developing a tool for journalists, we see a future for these tools. They won’t solve all our problems, though, and they will be just one part of the arsenal in the broader fight against disinformation.

The Problem with Deepfakes
Most people know that you can’t believe everything you see. Over the last couple of decades, savvy news consumers have gotten used to seeing images manipulated with photo-editing software. Videos, though, are another story. Hollywood directors can spend millions of dollars on special effects to fake a realistic scene. But using deepfakes, amateurs with a few thousand dollars of computer equipment and a few weeks of effort can make something almost as true to life.

Deepfakes make it possible to put people into movie scenes they were never in – think Tom Cruise playing Iron Man – which makes for entertaining videos. Unfortunately, it also makes it possible to create pornography without the consent of the people depicted. So far, those people, nearly all women, are the biggest victims when deepfake technology is misused.

Deepfakes can also be used to create videos of political leaders saying things they never said. The Belgian Socialist Party released a low-quality, non-deepfake but still phony video of President Trump insulting Belgium, which got enough of a reaction to show the potential risks of higher-quality deepfakes.

Perhaps scariest of all, they can be used to create doubt about the content of real videos, by suggesting that they could be deepfakes.

Given these risks, it would be extremely valuable to be able to detect deepfakes and label them clearly. This would ensure that fake videos do not fool the public, and that real videos can be received as authentic.
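Conceptually, many detection tools work by scoring each frame of a video with a trained classifier and then aggregating those per-frame scores into a single label. The sketch below illustrates only that aggregation step in Python; the `frame_fakeness` stub is a hypothetical stand-in for a real deep-learning model, not the API of any actual tool, and the threshold value is an arbitrary assumption for illustration.

```python
# Illustrative sketch of video-level deepfake labeling.
# Assumption: frame_fakeness is a hypothetical stub standing in for a
# trained deep-learning classifier; here each "frame" is simply a
# precomputed score in [0, 1].

def frame_fakeness(frame) -> float:
    """Stub classifier: higher score = frame looks more synthetic."""
    return float(frame)

def classify_video(frames, threshold: float = 0.5):
    """Score every frame, average the scores, and label the video.

    Averaging smooths over frames where the classifier is uncertain,
    so a few ambiguous frames don't flip the overall verdict.
    """
    scores = [frame_fakeness(f) for f in frames]
    mean_score = sum(scores) / len(scores)
    label = "likely deepfake" if mean_score >= threshold else "likely authentic"
    return label, mean_score

# A clip whose frames mostly score high would be flagged for review.
label, score = classify_video([0.9, 0.8, 0.7, 0.95])
```

A real system would add face detection, per-frame preprocessing, and calibrated thresholds, but the core shape – per-frame scores rolled up into one clearly labeled verdict – is what lets journalists receive a simple "authentic or not" answer.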