The challenges of Deepfakes to national security

Published 17 June 2019

Last Thursday, 13 June 2019, Clint Watts testified before the House Intelligence Committee about the growing dangers of Deepfakes – that is, false audio and video content. Deepfakes grow in sophistication each day, and they are disseminated far and wide via social media platforms. Watts said: “I’d estimate Russia, as an enduring purveyor of disinformation, is and will continue to pursue the acquisition of synthetic media capabilities and employ the outputs against its adversaries around the world. I suspect they’ll be joined and outpaced potentially by China.” He added: “These two countries along with other authoritarian adversaries and their proxies will likely use Deepfakes as part of disinformation campaigns seeking to 1) discredit domestic dissidents and foreign detractors, 2) incite fear and promote conflict inside Western-style democracies, and 3) distort the reality of American audiences and the audiences of America’s allies.”


Here are Watts’s opening remarks:

All advanced nations recognize the power of artificial intelligence to revolutionize economies and empower militaries. But those countries with the most advanced artificial intelligence (AI) capabilities and unlimited access to large data troves will gain enormous advantages in information warfare. AI provides purveyors of disinformation the ability to rapidly recon American social media audiences to identify psychological vulnerabilities. AI-powered systems can quickly generate modified content and digital forgeries, advancing false narratives against Americans and American interests.

“Deepfakes,” false audio and video content, grow in sophistication each day and their dissemination via social media platforms is far and wide. Historically, each advancement in media, from text to speech to video to virtual reality, more deeply engages information consumers, enriching the context of experiences and shaping user reality. The falsification of audio and video allows manipulators to dupe audience members in highly convincing ways, provoking emotional responses that can lead to widespread mistrust and, at times, physical mobilizations. False video and audio, once consumed and believed, can be extremely difficult to refute and counter.

Before the Kremlin’s Internet Research Agency pushed bogus social media advertisements and manipulated content heading into the Presidential election of 2016 (1), the Soviet Union authored and placed forged documents seeding conspiracies abroad. The most notable and possibly prolific claimed the U.S. created and proliferated the AIDS virus. (2) Last decade, manipulated video was disseminated to mainstream media outlets in an attempt to disparage an American diplomat serving in Russia. (3)