Deep Fakes: A Looming Crisis for National Security, Democracy and Privacy?

But there’s more to the problem than these individual harms. Deep fakes also have the potential to cause harm on a much broader scale—including harms that will impact national security and the very fabric of our democracy.

Deep fakes raise the stakes for the “fake news” phenomenon in dramatic fashion (quite literally). We have already seen trolls try to create panic over fake environmental disasters, and the recent Saudi-Qatar crisis may have been fueled by a hack in which someone injected fake stories (with fake quotes by Qatar’s emir) into a Qatari news site. Now, let’s throw in realistic-looking videos and audio clips to bolster the lies. Consider these terrifying possibilities:

— Fake videos could feature public officials taking bribes, uttering racial epithets, or engaging in adultery.

— Politicians and other government officials could appear in locations where they were not, saying or doing horrific things that they did not.

— Fake videos could place them in meetings with spies or criminals, launching public outrage, criminal investigations, or both.

— Soldiers could be shown murdering innocent civilians in a war zone, precipitating waves of violence and even strategic harms to a war effort.

— A deep fake might falsely depict a white police officer shooting an unarmed black man while shouting racial epithets.

— A fake audio clip might “reveal” criminal behavior by a candidate on the eve of an election.

— A fake video might portray an Israeli official doing or saying something so inflammatory as to cause riots in neighboring countries, potentially disrupting diplomatic ties or even motivating a wave of violence.

— False audio might convincingly depict U.S. officials privately “admitting” a plan to commit this or that outrage overseas, exquisitely timed to disrupt an important diplomatic initiative.

— A fake video might depict emergency officials “announcing” an impending missile strike on Los Angeles or an emergent pandemic in New York, provoking panic and worse.

….

The spread of deep fakes will threaten to erode the trust necessary for democracy to function effectively, for two reasons. First, and most obviously, the marketplace of ideas will be injected with a particularly dangerous form of falsehood. Second, and more subtly, the public may become more willing to disbelieve true but uncomfortable facts. Cognitive biases already encourage resistance to such facts, but awareness of ubiquitous deep fakes may enhance that tendency, providing a ready excuse to disregard unwelcome evidence. At a minimum, as fake videos become widespread, the public may have difficulty believing what their eyes (or ears) are telling them—even when the information is quite real.

….

Plainly, the capacity to generate hyper-realistic deep fakes will open up remarkable opportunities for covert action. Hostile foreign intelligence agencies will already be quite aware of this (as will our own intelligence agencies, no doubt), and if 2016 taught us nothing else, it should have taught us to expect that at least some of those foreign intelligence agencies will make sophisticated efforts to exploit such possibilities.

But there is no reason at all to think that the capacity to generate persuasive deep fakes would stay with governments.

….

At any rate, the capacity to generate persuasive deep fakes (and, critically, user-friendly software enabling almost anyone to exploit that capacity) will diffuse rapidly and globally (in keeping with the dynamics Benjamin Wittes and Gabriella Blum explore in their compelling book The Future of Violence). Thus, even if it does not start that way, the technology will end up in the hands of a vast range of actors willing to use deep fakes in harmful ways.

….

Consider the worst-case scenario: We enter a world in which it becomes child’s play to portray people as having done or said things they did not say or do; we lack the technology to reliably expose the fakes; and we lack the legal and practical capacity to punish and deter use of deep fakes to inflict individual and large-scale harms. In that case, it is not hard to imagine the rise of a profitable new service: immutable authentication trails.

The idea is this: A person who is sufficiently interested in protecting against a targeted deep fake (or whose employer feels this way) may prove willing to pay for a service that comprehensively tracks some or all of the following—their movements, electronic communications, in-person communications, and surrounding visual circumstances. To be successful, the vendor providing the service would have to develop a sufficient reputation for the immutability and comprehensiveness of its data. It might then have its own arrangements with media platforms allowing it to debunk—perhaps quite rapidly—emergent deep fakes impacting its clients. It is not hard to imagine such a service proving popular (especially with employers, who might require assent to it as a term of employment, barring legal obstacles to doing so).
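
Chesney and Citron do not specify how such a trail would be engineered, but the property the vendor is selling—records that cannot be quietly rewritten after the fact—is the classic use case for a hash chain, in which each log entry commits to the hash of the entry before it. The Python sketch below is purely illustrative under that assumption; the AuthenticationTrail class, its fields, and the sample events are hypothetical, and a real service would add digital signatures, trusted timestamps, and external anchoring of periodic digests.

```python
import hashlib
import json
import time

# Hypothetical sketch of an append-only "authentication trail": every entry
# commits to the hash of the entry before it, so altering or deleting any
# past record breaks the chain for every entry that follows.

GENESIS_HASH = "0" * 64  # sentinel "previous hash" for the first entry


def _entry_hash(prev_hash: str, timestamp: float, payload: dict) -> str:
    # Canonical serialization (sorted keys) so the same record always
    # produces the same digest.
    record = json.dumps(
        {"prev": prev_hash, "ts": timestamp, "payload": payload},
        sort_keys=True,
    )
    return hashlib.sha256(record.encode("utf-8")).hexdigest()


class AuthenticationTrail:
    """Append-only, hash-chained record of a client's logged activity."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def append(self, payload: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else GENESIS_HASH
        ts = time.time()
        entry = {
            "prev": prev_hash,
            "ts": ts,
            "payload": payload,
            "hash": _entry_hash(prev_hash, ts, payload),
        }
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Recompute every digest; tampering with an earlier entry changes
        # its hash and severs the link to all subsequent entries.
        prev_hash = GENESIS_HASH
        for e in self.entries:
            if e["prev"] != prev_hash:
                return False
            if _entry_hash(e["prev"], e["ts"], e["payload"]) != e["hash"]:
                return False
            prev_hash = e["hash"]
        return True


if __name__ == "__main__":
    trail = AuthenticationTrail()
    trail.append({"event": "badge-in", "location": "HQ lobby"})
    trail.append({"event": "video-call", "counterparty": "press office"})
    assert trail.verify()

    # Simulate a forger rewriting history: verification now fails.
    trail.entries[0]["payload"]["location"] = "undisclosed"
    assert not trail.verify()
    print("tamper detected:", not trail.verify())
```

Note that "immutability" in this sketch means detectable tampering, not prevention: anyone holding a copy of the chain's head hash can check whether the history the vendor publishes is the history it recorded, which is precisely the reputation for immutability and comprehensiveness the paragraph above describes.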

Whatever the benefits, the social cost of such a service, should it emerge and prove popular, would be profound. It risks the unraveling of privacy — that is, the collapse of privacy by social consent, regardless of what legal protections for privacy there may be.

Chesney and Citron conclude:

Perhaps such a system would yield more good than harm on the whole and over time (particularly if there is legislation well-tailored to regulate access to such a new state of affairs). Perhaps time will tell. For now, our aim is no more and no less than to identify the possibility that the rise of deep fakes will in turn give birth to such a service, and to flag the implications this will have for privacy. Enterprising businesses may seek to meet the pressing demand to counter deep fakes in this way, but it does not follow that society should welcome—or wholly accept—that development. Careful reflection is essential now, before either deep fakes or responsive services get too far ahead of us.

Read the article: Robert Chesney and Danielle Citron, “Deep Fakes: A Looming Crisis for National Security, Democracy and Privacy?” Lawfare (21 February 2018)