Deepfake Defense Tech Ready for Commercialization

Published 6 April 2024

The threat of manipulated media has steadily increased as automated manipulation technologies become more accessible, and social media continues to provide a ripe environment for viral content sharing.

The speed, scale, and breadth at which massive disinformation campaigns can unfold require computational defenses and automated algorithms to help humans discern what content is real and what’s been manipulated or synthesized, why, and how.

Through the Semantic Forensics (SemaFor) program, and previously the Media Forensics (MediFor) program, DARPA's research investments in detecting, attributing, and characterizing manipulated and synthesized media, known as deepfakes, have resulted in hundreds of analytics and methods that can help organizations and individuals protect themselves against the multitude of threats posed by manipulated media.

With SemaFor in its final phase, DARPA's investments have systematically driven down developmental risks, paving the way for a new era of defenses against the mounting threat of deepfakes. Now, the agency is calling on the broader community, including commercial industry and academia doing research in this space, to leverage these investments.

To support this transition, the agency is launching two new efforts to help the broader community continue the momentum of defense against manipulated media.

The first is an analytic catalog containing open-source resources developed under SemaFor for use by researchers and industry. As capabilities mature and become available, they will be added to this repository.

The second is an open community research effort called AI Forensics Open Research Challenge Evaluation (AI FORCE), which aims to develop innovative and robust machine learning, or deep learning, models that can accurately detect synthetic AI-generated images. Via a series of mini challenges, AI FORCE asks participants to build models that can discern between authentic images, including ones that may have been manipulated or edited using non-AI methods, and fully synthetic AI-generated images. The effort launched the week of March 18 and is linked from the SemaFor program page. Those seeking a notification may sign up for the Information Innovation Office newsletter.
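To make the task concrete, the sketch below shows the general shape of such a detector: a binary classifier that maps an image to a probability that it is AI-generated. This is a minimal, illustrative PyTorch example; the architecture, input size, class name, and training step are assumptions for demonstration only and do not reflect the AI FORCE challenge interface or any SemaFor analytic.

```python
# Illustrative sketch only: a tiny binary detector for real vs. AI-generated
# images. Architecture and hyperparameters are hypothetical, not from AI FORCE.
import torch
import torch.nn as nn

class SyntheticImageDetector(nn.Module):
    """Small CNN that scores an RGB image: 0 = authentic, 1 = AI-generated."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 224 -> 112
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 112 -> 56
            nn.AdaptiveAvgPool2d(1),               # global average pool -> 64 features
        )
        self.classifier = nn.Linear(64, 1)         # single logit

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SyntheticImageDetector()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on random tensors standing in for labeled challenge data.
images = torch.randn(8, 3, 224, 224)               # batch of 8 RGB images
labels = torch.randint(0, 2, (8, 1)).float()       # 0 = authentic, 1 = synthetic
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()

# Inference: probability that a single image is AI-generated.
model.eval()
with torch.no_grad():
    prob_synthetic = torch.sigmoid(model(images[:1])).item()
```

The single-logit head with a binary cross-entropy loss reflects the two-way decision the challenge describes (authentic, possibly conventionally edited, versus fully synthetic); a competitive entry would of course train a far stronger model on actual challenge data.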

According to DARPA and SemaFor researchers, a concerted effort across the commercial sector, media organizations, external researchers and developers, and policymakers is needed to develop and deploy solutions that combat the threats of manipulated media. SemaFor is providing the tools and methods necessary to help people working in this problem space.

“Our investments have seeded an opportunity space that is timely, necessary, and poised to grow,” said Dr. Wil Corvey, DARPA’s Semantic Forensics program manager. “With the help of industry and academia around the world, the Semantic Forensics program is ready to share what we’ve started to bolster the ecosystem needed to defend authenticity in a digital world.”

To learn more about the SemaFor program and for an overview of the resulting technologies, check out the Voices from DARPA episode "Demystifying Deepfakes" at https://www.darpa.mil/news-events/2023-06-16