Weapons of mass distraction

The false crucifixion story was but one example of Kremlin-backed disinformation deployed during Russia’s annexation of Crimea.(1) In subsequent years, the Kremlin would unleash similar tactics against other foreign adversaries, including the United States in the lead-up to the 2016 presidential election.

Yet the use of modern-day disinformation does not start and end with Russia. A growing number of states, in the pursuit of geopolitical ends, are leveraging digital tools and social media networks to spread narratives, distortions, and falsehoods to shape public perceptions and undermine trust in the truth.

If there is one word that has come to define the technology giants and their impact on the world, it is “disruption.” The major technology and social media companies have disrupted industries ranging from media to advertising to retail. However, it is not just the traditional sectors that these technologies have upended. They have also disrupted another, more insidious trade – disinformation and propaganda.

The proliferation of social media platforms has democratized the dissemination and consumption of information, eroding traditional media hierarchies and undercutting claims of authority. The resulting environment is ripe for exploitation by bad actors. Today, states and individuals can easily spread disinformation at lightning speed, with potentially serious impact.

There are significant vulnerabilities in the information ecosystem that foreign state-sponsored actors can exploit, and they revolve around three primary, interconnected elements:

1. The medium – the platforms on which disinformation flourishes;

2. The message – what is being conveyed through disinformation; and

3. The audience – the consumers of such content.

The first two elements, the medium and the message, operate hand in hand. Social media and news platforms are designed to deliver information to mass audiences quickly, optimizing for viral content that generates clicks and thus revenue. As a consequence, they are inherently vulnerable to sensationalist disinformation that seeks to catch the eye and be shared.(2)

The messages conveyed through disinformation range from biased half-truths to conspiracy theories to outright lies. The intent is to manipulate popular opinion to sway policy or inhibit action, by sowing division and blurring the truth within the target population.

Unfortunately, the emotions most useful for creating such conditions – uncertainty, fear, and anger – are precisely the ones that increase the likelihood a message will go viral. Even when disinformation first appears on fringe sites outside the mainstream media, coordinated mass action that takes advantage of platform business models reliant upon clicks and views helps ensure greater audience penetration.(3) Bot networks consisting of fake profiles amplify the message and create the illusion of high activity and popularity across multiple platforms at once, gaming recommendation and rating algorithms.
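To make that amplification mechanism concrete, here is a minimal sketch of how an engagement-weighted ranking score – a simplified stand-in for the recommendation algorithms described above, not any platform’s actual formula – can be gamed by a coordinated bot network. The scoring weights, the bot_amplify helper, and the sample posts are all illustrative assumptions.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    likes: int = 0
    shares: int = 0
    engaged_accounts: set = field(default_factory=set)

    def engagement_score(self) -> float:
        # Toy ranking heuristic: reshares weigh more than likes because
        # they directly drive further distribution. Illustrative weights,
        # not drawn from any real platform.
        return self.likes + 5 * self.shares

def bot_amplify(post: Post, n_bots: int) -> None:
    """Simulate a hypothetical coordinated bot network: many fake
    profiles like and reshare the same post in a short window,
    inflating its score to fake organic popularity."""
    for i in range(n_bots):
        post.engaged_accounts.add(f"bot_{i}")
        post.likes += 1
        if random.random() < 0.6:   # most bots also reshare
            post.shares += 1

organic = Post("Local council approves road repair budget", likes=120, shares=8)
falsehood = Post("Outrage-bait falsehood engineered to anger", likes=30, shares=2)

bot_amplify(falsehood, n_bots=500)  # coordinated amplification

# A naive recommender that ranks purely on engagement now surfaces the
# amplified falsehood far above the organically popular story.
for post in sorted([organic, falsehood],
                   key=Post.engagement_score, reverse=True):
    print(f"{post.engagement_score():6.0f}  {post.text}")
```

This is also why bot-detection efforts tend to examine account-level signals – account age, posting synchrony, network overlap – rather than raw engagement counts, which counterfeit activity of this kind renders meaningless.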

Research shows that these techniques for spreading fake news are effective. On average, a false story reaches 1,500 people six times more quickly than a factual story.(4) This is true of false stories about any topic, but stories about politics are the most likely to go viral.(5)

For all that has changed about disinformation and the ability to disseminate it, arguably the most important element has remained the same: the audience. No number of social media bots would be effective in spreading disinformation if the messages did not exploit fundamental human biases and behavior. People are not rational consumers of information. They seek swift, reassuring answers and messages that give them a sense of identity and belonging.(6) The truth can be compromised when people believe and share information that conforms to their worldview.

The problem of disinformation therefore cannot be solved through any single psychological or technological remedy. An effective response to this challenge requires understanding the converging factors of technology, media, and human behavior.

The following interdisciplinary review attempts to shed light on these converging factors, and the challenges and opportunities moving forward.

(1) See “State-Run News Station Accused of Making Up Child Crucifixion,” The Moscow Times, 14 July 2014, https://themoscowtimes.com/news/state-run-news-station-accused-of-making-up-child-crucifixion-37289; Arkady Ostrovsky, “Putin’s Ukraine Unreality Show,” Wall Street Journal, 28 July 2014, https://www.wsj.com/articles/arkady-ostrovsky-putins-ukraine-unreality-s…; and Andrew Higgins, “Fake News, Fake Ukrainians: How a Group of Russians Tilted a Dutch Vote,” New York Times, 16 February 2017, https://www.nytimes.com/2017/02/16/world/europe/russia-ukraine-fake-news….

(2) Information Society Project at Yale Law School and the Floyd Abrams Institute for Freedom of Expression, “Fighting Fake News (Workshop Report),” 2017, https://law.yale.edu/system/files/area/center/isp/documents/fighting_fak….

(3) “Connecting the bots: Researchers uncover invisible influence on social media,” University of Georgia, 30 May 2017, https://www.sciencedaily.com/releases/2017/05/170530095910.htm.

(4) Robinson Meyer, “The Grim Conclusions of the Largest-Ever Study of Fake News,” The Atlantic, 8 March 2018, https://www.theatlantic.com/technology/archive/2018/03/largest-study-eve….

(5) Meyer, “The Grim Conclusions,” The Atlantic.

(6) Daniele Anastasion, “The Price of Certainty,” New York Times, 1 November 2016, https://www.nytimes.com/2016/11/01/opinion/the-price-of-certainty.html.

— Read more in Christina Nemr and William Gangware, Weapons of Mass Distraction: Foreign State-Sponsored Disinformation in the Digital Age (Park Advisors for the Global Engagement Center, U.S. Department of State, March 2019)