Perspective: Deepfakes | CISA, DARPA Offer Look Into Their Dealings with Deepfakes

Published 5 November 2019

Agency and industry officials last week offered details of their efforts to improve public resiliency, streamline communication, and accelerate technical solutions to counter the threats posed by deepfakes and other disinformation techniques ahead of next year’s election. “Essentially, if you generalize a bit, these are attacks on knowledge, right, which underpins everything that we do,” Matt Turek, program manager for the Defense Advanced Research Projects Agency’s (DARPA) Information Innovation Office, said Wednesday on a panel at the Center for Strategic and International Studies (CSIS). “It underpins our trust in institutions and organizations.”

Brandi Vincent writes in Defense One that Turek runs a media forensics program inside DARPA that uses technology to automatically assess whether images and video have been manipulated. More specifically, he said the program also produces a quantitative score of the content’s integrity, which enables researchers to “do a triage of data at scale.” Though the program launched in 2016, Turek said that deepfaking as it was originally understood—as a specific automated manipulation technique for swapping faces in video—originated in 2017, when source code for creating the media was released on the social media site Reddit. And over the past year, politicians and major figures in popular culture have increasingly fallen victim to deepfakes that falsely depict them doing or saying things they never did.

“And now, that term has been broadly adopted to essentially be any automated, or somewhat automated, manipulation technique primarily in video—but the term in broad use is starting to apply to other media types as well,” Turek said. 
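The triage workflow Turek describes—scoring media for integrity, then ranking items so analysts review the most suspect content first—can be sketched as follows. This is purely illustrative: DARPA’s actual detection models are not public, so the `integrity_score` function here is a hypothetical stand-in for a real manipulation detector.

```python
# Illustrative sketch only. A real detector would be a learned model; here,
# integrity_score() is a hypothetical placeholder returning a score between
# 0.0 (likely manipulated) and 1.0 (likely authentic).

def integrity_score(media_item: dict) -> float:
    """Stand-in for a manipulation detector's quantitative integrity score."""
    return media_item["score"]

def triage(media_items: list, review_threshold: float = 0.5) -> list:
    """Flag items below the threshold and rank them, lowest integrity first,
    so human reviewers see the most suspect media at the top of the queue."""
    flagged = [m for m in media_items if integrity_score(m) < review_threshold]
    return sorted(flagged, key=integrity_score)

items = [
    {"name": "clip_a.mp4", "score": 0.92},
    {"name": "clip_b.mp4", "score": 0.11},
    {"name": "img_c.png", "score": 0.43},
]
print([m["name"] for m in triage(items)])  # ['clip_b.mp4', 'img_c.png']
```

The point of the quantitative score, as Turek notes, is scale: rather than asking analysts to inspect every item, a scored queue lets them focus only on what the detector flags.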

Vincent adds:

DARPA’s Turek also emphasized that disinformation campaigns are not limited to the political sector—they also target the realms of finance, commerce, insurance and the scientific process. He said the agency’s researchers are also working directly with the Health and Human Services Department to address allegations around scientific fraud. The team is now creating technological tools that can rip apart scientific publications and understand if images included have been manipulated. 

“So, there is really broad-based opportunities for these sort of manipulation tools, and again, it’s not just sort of elections, military decisions and politics,” he said. “But I think they can essentially touch us in our everyday lives.”