Perspective

All’s Clear for Deep Fakes: Think Again

Published 11 May 2020

A few analysts are claiming that the bark of deep fakes is worse than their bite. Robert Chesney, Danielle Citron, and Hany Farid disagree, arguing that now is not the time to declare victory over deep fakes or to dismiss concern about them as overblown.

The verdict was in, and it was a comforting one: Deep fakes are the “dog that never barked.” So said Keir Giles, a Russia specialist with the Conflict Studies Research Centre in the United Kingdom. Giles reasoned that the threat posed by deep fakes had become so entrenched in the public’s imagination that no one would be fooled should they appear. Simply put, deep fakes “no longer have the power to shock.” Tim Hwang agreed, but for different reasons, some technical and some practical. Hwang asserted that the more deep fakes are made, the better machine learning becomes at detecting them. Better still, the major platforms are marshalling their efforts to remove deep fakes, leaving them “relegated to sites with too few users to have a major effect.”

Robert Chesney, Danielle Citron, and Hany Farid write in Lawfare that they disagree with each of these claims. Deep fakes have indeed been “barking,” though so far their bite has most often been felt in ways that many of us never see. Deep fakes have in fact taken a serious toll on people’s lives, especially the lives of women. As is often the case with early uses of digital technologies, women are the canaries in the coal mine. According to DeepTrace Labs, of the approximately 15,000 deep-fake videos appearing online, 96 percent are sex videos, and 99 percent of those insert women’s faces into porn without consent. And even those who have heard a great deal about the potential harms of deep fakes can still be shocked when they encounter one.

Is this really any different from the threat posed by familiar, lower-tech forms of fraud? Yes. Human cognition predisposes us to be persuaded by visual and audio evidence, especially when the video or audio in question is of such quality that our eyes and ears cannot readily detect that something artificial is at work.

Making matters worse, growing awareness of the deep fake threat is itself potentially harmful. It increases the chances that people will fall prey to a phenomenon that two of the authors (Chesney and Citron) call the “liar’s dividend”: as the public becomes more aware that audio and video can be convincingly faked, wrongdoers find it easier to dismiss authentic recordings of their misconduct as deep fakes.

The authors conclude:

Now is not the time to sit back and claim victory over deep fakes or to suggest that concern about them is overblown. The coronavirus has underscored the deadly impact of believable falsehoods, and the election of a lifetime looms ahead. More than ever we need to trust what our eyes and ears are telling us.