Determining the Who, Why, and How Behind Manipulated Media

Published 19 September 2019

The threat of manipulated multi-modal media – which includes audio, images, video, and text – is increasing as automated manipulation technologies become more accessible, and social media continues to provide a ripe environment for viral content sharing. The creators of convincing media manipulations are no longer limited to groups with significant resources and expertise. Today, an individual content creator has access to capabilities that could enable the development of an altered media asset that creates a believable, but falsified, interaction or scene. A new program seeks to develop technologies capable of automating the detection, attribution, and characterization of falsified media assets.

“At the intersection of media manipulation and social media lies the threat of disinformation designed to negatively influence viewers and stir unrest,” said Dr. Matt Turek, a program manager in DARPA’s Information Innovation Office (I2O). “While this sounds like a scary proposition, the truth is that not all media manipulations have the same real-world impact. The film industry has used sophisticated computer-generated editing techniques for years to create compelling imagery and videos for entertainment purposes. More nefarious manipulated media has also been used to target reputations, the political process, and other key aspects of society. Determining how media content was created or altered, what reaction it’s trying to achieve, and who was responsible for it could help quickly determine if it should be deemed a serious threat or something more benign.”

DARPA notes that while statistical detection techniques have been successful in uncovering some media manipulations, purely statistical methods are insufficient to address the rapid advancement of media generation and manipulation technologies. Fortunately, automated manipulation capabilities used to create falsified content often rely on data-driven approaches that require thousands of training examples, or more, and are prone to making semantic errors. These semantic failures provide an opportunity for the defenders to gain an advantage.
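
The announcement does not spell out what such a semantic failure looks like, so the toy sketch below (in Python; the field names, threshold, and check are all hypothetical) illustrates the idea: an asset whose claimed capture time contradicts its pixel statistics is semantically suspect even if it is statistically clean.

```python
# Toy semantic-consistency check (illustrative only, not a SemaFor algorithm).
# A manipulated asset can be statistically clean yet semantically inconsistent,
# e.g. metadata claiming a night-time capture paired with daylight-bright pixels.

from dataclasses import dataclass

@dataclass
class MediaAsset:
    claimed_hour: int      # capture hour claimed in metadata (0-23)
    mean_luminance: float  # average pixel brightness, 0.0 (dark) to 1.0 (bright)

def semantically_inconsistent(asset: MediaAsset, bright: float = 0.6) -> bool:
    """Flag assets whose claimed capture time contradicts their brightness."""
    claimed_night = asset.claimed_hour < 6 or asset.claimed_hour > 20
    looks_like_daylight = asset.mean_luminance > bright
    return claimed_night and looks_like_daylight

# Metadata says 23:00 but the image is bright: a semantic red flag that a
# purely statistical detector might never raise.
print(semantically_inconsistent(MediaAsset(claimed_hour=23, mean_luminance=0.8)))
```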

The Semantic Forensics (SemaFor) program seeks to develop technologies that make the automatic detection, attribution, and characterization of falsified media assets a reality. The goal of SemaFor is to develop a suite of semantic analysis algorithms that dramatically increase the burden on the creators of falsified media, making it exceedingly difficult for them to create compelling manipulated content that goes undetected.

To develop analysis algorithms for use across media modalities and at scale, the SemaFor program will create tools that, when used in conjunction, can help identify, deter, and understand falsified multi-modal media. SemaFor will focus on three specific types of algorithms: semantic detection, attribution, and characterization.

Semantic detection algorithms will determine if multi-modal media assets were generated or manipulated, while attribution algorithms will infer if the media originated from a purported organization or individual. Determining how the media was created, and by whom, could help reveal the broader motivations or rationale for its creation, as well as the skill sets at the falsifier’s disposal. Finally, characterization algorithms will reason about whether multi-modal media was generated or manipulated for malicious purposes.
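
The announcement describes these three roles only at a high level. The minimal sketch below shows one way they could be composed into a single analysis; every name, signature, and score is a hypothetical stand-in, not a SemaFor interface.

```python
# Hypothetical composition of the three SemaFor algorithm roles.
# The stub models return fixed scores purely for illustration.

from dataclasses import dataclass

@dataclass
class SemanticAnalysis:
    manipulated: float   # detection: evidence the asset was generated/manipulated
    source_match: float  # attribution: evidence it came from the purported source
    malicious: float     # characterization: evidence of malicious purpose

def detect(asset: bytes) -> float:
    return 0.9           # stub: strong evidence of manipulation

def attribute(asset: bytes) -> float:
    return 0.2           # stub: weak match to the purported source

def characterize(asset: bytes) -> float:
    return 0.7           # stub: likely malicious intent

def analyze(asset: bytes) -> SemanticAnalysis:
    """Run detection, attribution, and characterization on one media asset."""
    return SemanticAnalysis(detect(asset), attribute(asset), characterize(asset))

print(analyze(b"example media bytes"))
```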

“There is a difference between manipulations that alter media for entertainment or artistic purposes and those that alter media to generate a negative real-world impact. The algorithms developed on the SemaFor program will help analysts automatically identify and understand media that was falsified for malicious purposes,” said Turek.

SemaFor will also develop technologies to enable human analysts to more efficiently review and prioritize manipulated media assets. This includes methods to integrate the quantitative assessments provided by the detection, attribution, and characterization algorithms to automatically prioritize media for review and response. To help provide an understandable explanation to analysts, SemaFor will also develop technologies for automatically assembling and curating the evidence provided by the detection, attribution, and characterization algorithms. Throughout the life of the program, the SemaFor technologies will be evaluated against a set of increasingly difficult challenge problems that are representative of new or emerging threat scenarios.
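
As a hedged illustration of that integration step, the following sketch fuses three such scores into a single triage priority; the linear rule and weights are assumptions chosen for illustration, not SemaFor’s method.

```python
# Hypothetical triage: fuse detection, attribution, and characterization
# scores so analysts review the most concerning assets first.

from dataclasses import dataclass

@dataclass
class SemanticAnalysis:
    manipulated: float   # detection score in [0, 1]
    source_match: float  # attribution score in [0, 1]; low values are suspicious
    malicious: float     # characterization score in [0, 1]

def priority(a: SemanticAnalysis) -> float:
    """Higher value = review sooner; the weights here are illustrative."""
    return 0.4 * a.manipulated + 0.2 * (1.0 - a.source_match) + 0.4 * a.malicious

batch = [
    SemanticAnalysis(0.9, 0.2, 0.7),  # likely falsified, likely malicious
    SemanticAnalysis(0.1, 0.9, 0.1),  # probably benign
]
# Highest-priority assets come first in the analyst's review queue.
for a in sorted(batch, key=priority, reverse=True):
    print(f"{priority(a):.2f}", a)
```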

See more information about the program in the Broad Agency Announcement posted on FedBizOpps.gov: https://go.usa.gov/xVkNd