Deep Fakes: A looming crisis for national security, democracy and privacy?

By Robert Chesney and Danielle Citron

Published 26 February 2018

Events in the last few years, such as Russia’s broad disinformation campaign to undermine Western democracies, including the American democratic system, have offered a compelling demonstration of truth decay: how false claims — even preposterous ones — can be disseminated with unprecedented effectiveness today thanks to a combination of social media’s ubiquitous presence and virality, cognitive biases, filter bubbles, and group polarization.

Robert Chesney and Danielle Citron write in Lawfare that the resulting harms are significant for individuals, businesses, and democracy. Belated recognition of the problem has spurred a variety of efforts to address this most recent example of truth decay, and, at least at this early stage, there seems to be reason for optimism. “Alas, the problem may soon take a significant turn for the worse thanks to deep fakes,” they write.

They urge us to get used to hearing that phrase. “It refers to digital manipulation of sound, images, or video to impersonate someone or make it appear that a person did something—and to do so in a manner that is increasingly realistic, to the point that the unaided observer cannot detect the fake. Think of it as a destructive variation of the Turing test: imitation designed to mislead and deceive rather than to emulate and iterate.”

They continue:

Fueled by artificial intelligence, digital impersonation is on the rise. Machine-learning algorithms (often neural networks) combined with facial-mapping software enable the cheap and easy fabrication of content that hijacks one’s identity—voice, face, body. Deep fake technology inserts individuals’ faces into videos without their permission. The result is “believable videos of people doing and saying things they never did.”

Not surprisingly, this concept has been quickly leveraged to sleazy ends.

….

We can expect to see deep fakes used in other abusive, individually targeted ways, such as undermining a rival’s relationship with fake evidence of an affair or an enemy’s career with fake evidence of a racist comment.

Blackmailers might use fake videos to extract money or confidential information from individuals who have reason to believe that disproving the videos would be hard (an abuse that will include sextortion but won’t be limited to it). Reputations could be decimated, even if the videos are ultimately exposed as fakes; salacious harms will spread rapidly, technical rebuttals and corrections not so much.

….