Perspective: Fighting Deepfakes When Detection Fails

Published 18 November 2019

Deepfakes intended to spread misinformation are already a threat to online discourse, and there is every reason to believe this problem will become more significant in the future. Alex Engler writes for the Brookings Institution that most research and mitigation efforts so far have focused on automated deepfake detection, which will aid deepfake discovery for the next few years.

However, the situation is worse than cybersecurity's perpetual cat-and-mouse game: automated deepfake detection is likely to become impossible in the relatively near future, as the approaches that generate fake digital content improve considerably. In addition to supporting the near-term creation and responsible dissemination of deepfake detection technology, policymakers should therefore invest in discovering and developing longer-term solutions.

Engler writes that policymakers should take actions that:

·  Support ongoing deepfake detection efforts with continued funding through DARPA’s MediFor program, and add new grants that foster collaboration among detection efforts and train journalists and fact-checkers to use these tools.

·  Create an additional stream of funding awards for the development of new tools, such as reverse video search or blockchain-based verification systems, that may hold up better in the face of undetectable deepfakes (a rough sketch of the reverse-video-search idea follows this list).

·  Encourage the release of large social media datasets so that social science researchers can study viral misinformation and disinformation campaigns and explore solutions.
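
To make the reverse-video-search idea concrete: such tools typically compare perceptual fingerprints of sampled frames rather than exact file hashes, so a re-encoded, cropped, or lightly edited copy of known footage can still be traced back to its source. The sketch below is a minimal illustration of that general approach, not a description of any tool Engler discusses; the function names, the 30-frame sampling interval, and the `max_dist` matching threshold are all hypothetical choices.

```python
import cv2  # OpenCV for video decoding and image resizing


def average_hash(frame, hash_size=8):
    """Compute a 64-bit average hash: grayscale, shrink to 8x8,
    then set each bit by comparing the pixel to the mean brightness."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (hash_size, hash_size), interpolation=cv2.INTER_AREA)
    bits = (small > small.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)


def video_fingerprint(path, every_n=30):
    """Sample roughly one frame per second (assuming ~30 fps) and hash each;
    the resulting list of hashes is the video's perceptual fingerprint."""
    cap = cv2.VideoCapture(path)
    hashes, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            hashes.append(average_hash(frame))
        i += 1
    cap.release()
    return hashes


def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


def matches(query_hashes, index, max_dist=10):
    """Return the IDs of indexed videos that share at least one
    near-duplicate frame with the query (a linear scan, for clarity)."""
    hits = set()
    for qh in query_hashes:
        for video_id, indexed_hashes in index.items():
            if any(hamming(qh, ih) <= max_dist for ih in indexed_hashes):
                hits.add(video_id)
    return hits


# Hypothetical usage: look up a suspect clip against a prebuilt index.
# index = {"speech_2019_original.mp4": video_fingerprint("speech_2019_original.mp4")}
# print(matches(video_fingerprint("suspect_clip.mp4"), index))
```

A production system would index billions of hashes with approximate nearest-neighbor search rather than this linear scan, but the core idea is the same: fingerprints that survive compression and minor edits let investigators link a suspect clip to earlier appearances of the footage even when no detector can classify the clip itself as fake.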