How Extremism Operates Online

The decision, one of a host of new measures targeting online extremist activity that have been enacted or are reported to be under review by the Biden administration, exemplified U.S. policymakers’ recognition of the important role that the internet plays in mobilizing, sustaining, and propagating extremist activity.(3) Since the mid-1980s, extremist movements across the ideological spectrum have demonstrated their intent and ability to exploit digital communication, networking, and commerce tools and to transition some of their operations online.(4) These activities began to capture policy attention in the early 2000s, but the challenge has gained new urgency in recent years as groups and movements such as the Islamic State in Iraq and Syria (ISIS), the QAnon conspiracy theory, and the #StopTheSteal political campaign have harnessed social media and other virtual platforms to generate major real-world effects.(5)

The purpose of this Perspective is to synthesize existing research on how the internet influences the activities of extremist groups and movements and how exposure to or consumption of extremist content online influences the behavior of internet users. We surveyed studies and analyses produced over the past two decades by academics, nongovernmental organizations, and other civil sector entities that have sought to better understand whether new technologies have changed how radical ideas spread, how they gain a hold, and how they motivate people to act on their grievances. The second in a series of RAND Corporation primers on the far-right virtual extremist ecosystem,(6) this Perspective is intended to promote a general understanding of trends in the current literature and to identify areas of emerging consensus, as well as ongoing disagreement and outstanding questions. The information collected here also may be of interest to those looking to improve their ability to recognize, avoid, or resist hateful, violent, and other manipulative online activity.

We have organized this Perspective into four sections. The first provides brief definitions of core terms and notes areas of conceptual disagreement. The second focuses on how the internet enables extremist organizations and movements by facilitating such basic operational functions as fundraising, recruiting, and knowledge transfer. The third focuses on how individuals receive extremist material online, and how the dynamics of the virtual world can facilitate receptivity to extremist ideas and, possibly, offline violence. We conclude with a discussion of research that addresses how the internet can be leveraged as a tool to counter extremism, before outlining avenues for further research that could contribute to the prevention, intervention, and monitoring of harmful activity.

….

Countering Virtual Extremism

The challenge of combating online extremist activity—and managing its offline consequences—likely will preoccupy governments, community organizations, and major technology companies around the world for years to come. Despite continued methodological and definitional differences, researchers agree that the internet plays an important role in enabling extremists to perform critical operational functions, to promote their ideas, and to encourage harmful online and offline behaviors.

Numerous governmental, educational, and civil sector entities seek to disrupt extremists’ attempts to exploit the internet and to impede the indoctrination of individuals online. Such initiatives include using automated tools to remove or refute violent, hateful, or otherwise harmful content, in the hope of inhibiting the spread of such material online.(91)

There are also efforts to deny extremists access to virtual platforms that can be used to generate revenue, amplify their messages, or coordinate their activities.(92) In addition, the U.S. government has endorsed proactive measures to promote individual and community resiliency and to improve internet users’ ability to identify manipulative information.(93)

However, researchers have not yet reached consensus on the relative effectiveness of these various strategies, and a RAND analysis of proposed frameworks to evaluate counterextremism programming found that most had significant methodological shortfalls.(94)

Nonetheless, the literature suggests that disrupting extremists’ use of the internet will require two types of action: content moderation and removal (commonly described as deplatforming) and tailored counternarrative and strategic communication campaigns to prevent radicalization, promote community resiliency, and aid the deradicalization and reintegration of extremist adherents. Studies analyzing the effects of mass content removals on extremist activity found that they reduced the size of the audience exposed to extremist messages, degraded the effectiveness of some extremist propaganda, and forced extremist groups to divert resources to rebuilding their networks.(95) One influential study of Reddit’s 2015 decision to close subreddits that violated its terms of use found that this action contributed to an 80-percent decrease in hate speech usage across the entire platform.(96)

But technological solutions alone are imperfect because extremists can still disseminate their messages to smaller audiences on alternative platforms, where the conviction of remaining followers may harden, or alter their language to circumvent restrictions on major platforms.(97) Researchers have cautioned that the sheer number of far-right groups, their co-option of popular memes and internet jargon, and their tendency to avoid using the explicit branding seen in ISIS and other Islamist propaganda make them particularly resilient to content-filtering and content-removal programs.(98) Disagreements over how to define hate speech also present barriers to designing effective tools to detect and disrupt extremist behavior online.(99)
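To illustrate one reason these detection tools are brittle, the short sketch below implements a toy lexicon-based filter, loosely in the spirit of the lexicon-based detection research cited in note 99. It is a minimal sketch under assumed conditions: the blocklist entries, function name, and example posts are hypothetical placeholders rather than any platform’s actual moderation logic, and real systems rely on far richer signals. The point it demonstrates is that exact keyword matching is easily defeated by trivial spelling changes, one way in which extremist communities adapt their language to circumvent restrictions.

import re

# Minimal sketch of a lexicon-based content filter. The blocklisted terms and
# the example posts below are hypothetical placeholders, not real data.
BLOCKLIST = {"badword1", "badword2"}

def flag_post(text):
    # Lowercase the post, split it into alphanumeric tokens, and flag the post
    # if any token exactly matches a blocklisted term.
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return any(token in BLOCKLIST for token in tokens)

print(flag_post("a post containing badword1"))  # True: exact match is caught
print(flag_post("a post containing b@dword1"))  # False: light obfuscation evades the filter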

Moreover, researchers generally agree that addressing the underlying drivers of extremism requires effective countermessaging and community programming.(100) To date, however, the majority of the research that evaluates the efficacy of prevention and deradicalization programs has focused on religiously motivated extremism, and more research is needed to assess their applicability to far-right and white-supremacist movements.(101)

Disagreements over who should produce and disseminate counternarratives also present an impediment to designing and implementing new programs.(102) Who should be responsible for producing counterextremism material: technology platforms, federal or local government entities, or public interest groups? These debates raise fundamental and divisive questions about the importance of free speech, the appropriate role of government regulation, and the balance between individual rights and community welfare. While some have called for the federal government to regulate online content or to compel technology companies to strengthen their moderation policies, others have argued that stricter action would amount to an undue restriction or burden on constitutionally protected activities.(103) Likewise, policymakers, technology companies, and activists have struggled to reconcile the need to minimize the social harms associated with extremism, on the one hand, with the principles of a free and open internet, on the other.(104) Any effort to disrupt extremists’ use of the internet requires consideration of these trade-offs, as well as attention to who is responsible for executing these initiatives, which techniques offer the most-promising outcomes, and where scarce resources should be directed.

1. Christchurch Call, “The Christchurch Call to Action to Eliminate Terrorist and Violent Extremist Content Online,” undated. The announcement coincided with the two-year anniversary of the first Christchurch Call to Action Summit, which was hosted in Paris on May 7, 2019.

2. White House, “Statement by Press Secretary Jen Psaki on the Occasion of the United States Joining the Christchurch Call to Action to Eliminate Terrorist and Violent Extremist Content Online,” press statement, May 7, 2021.

3. Executive Office of the President, National Strategy for Countering Domestic Terrorism, Washington, D.C.: National Security Council and White House, June 2021, pp. 20–22; Zachary Cohen and Katie Bo Williams, “Biden Team May Partner with Private Firms to Monitor Extremist Chatter Online,” CNN, May 3, 2021; Nomaan Merchant, “US to Ramp Up Tracking of Domestic Extremism on Social Media,” Associated Press, May 20, 2021.

4. For a brief description of online extremist activity over the 1980s and 1990s, see Maura Conway, Ryan Scrivens, and Logan Macnair, Right-Wing Extremists’ Persistent Online Presence: History and Contemporary Trends, The Hague: International Centre for Counter-Terrorism, October 2019, pp. 3–4; Joseph A. Schafer, “Spinning the Web of Hate: Web-Based Hate Propagation by Extremist Organizations,” Journal of Criminal Justice and Popular Culture, Vol. 9, No. 2, 2002, pp. 69–70.

5. Heather J. Williams, Alexandra T. Evans, Jamie Ryan, Erik E. Mueller, and Bryce Downing, The Online Extremist Ecosystem: Its Evolution and a Framework for Separating Extreme from Mainstream, Santa Monica, Calif.: RAND Corporation, PE-A1458-1, 2021. For a discussion of the role of social media in promoting conspiracy theories, see William Marcellino, Todd C. Helmus, Joshua Kerrigan, Hilary Reininger, Rouslan I. Karimov, and Rebecca Ann Lawrence, Detecting Conspiracy Theories on Social Media: Improving Machine Learning to Detect and Understand Online Conspiracy Theories, Santa Monica, Calif.: RAND Corporation, RR-A676-1, 2021.

6. Stephane J. Baele, Lewys Brace, and Travis G. Coan coined the use of the term ecosystem to describe virtual networks of far-right activity in “Uncovering the Far-Right Online Ecosystem: An Analytical Framework and Research Agenda,” Studies in Conflict & Terrorism, ahead-of-print version, December 30, 2020, pp. 1–21.

….

91. For a single-volume overview of major counterextremism initiatives, see Spandana Singh, Everything in Moderation: An Analysis of How Internet Platforms Are Using Artificial Intelligence to Moderate User-Generated Content, Washington, D.C.: New America, July 15, 2019. For RAND research, see Todd C. Helmus and Elizabeth Bodine-Baron, Empowering ISIS Opponents on Twitter, Santa Monica, Calif.: RAND Corporation, PE-227-RC, 2017; and William Marcellino, Madeline Magnuson, Anne Stickells, Benjamin Boudreaux, Todd C. Helmus, Edward Geist, and Zev Winkelman, Counter-Radicalization Bot Research: Using Social Bots to Fight Violent Extremism, Santa Monica, Calif.: RAND Corporation, RR-2705-DOS, 2020b.

92. For a discussion of the strengths and challenges of this approach, see Ethan Zuckerman and Chand Rajendra-Nicolucci, “Deplatforming Our Way to the Alt-Tech Ecosystem,” Knight First Amendment Institute at Columbia University, January 11, 2021; Shiza Ali, Mohammad Hammas Saeed, Esraa Aldreabi, Jeremy Blackburn, Emiliano De Cristofaro, Savvas Zannettou, and Gianluca Stringhini, “Understanding the Effect of Deplatforming on Social Networks,” WebSci ’21: 13th ACM Web Science Conference 2021, June 2021; and Neil F. Johnson, Rhys Leahy, Nicholas Johnson Restrepo, Nicholas Velásquez, Minzhang Zheng, Pedro Manrique, Prajwal Devkota, and Stefan Wuchty, “Hidden Resilience and Adaptive Dynamics of the Global Online Hate Ecology,” Nature, Vol. 573, 2019.

93. Executive Office of the President, 2021, p. 22. For examples of these measures, see Ashley L. Rhoades, Todd C. Helmus, James V. Marrone, Victoria Smith, and Elizabeth Bodine-Baron, Promoting Peace as the Antidote to Violent Extremism: Evaluation of a Philippines-Based Tech Camp and Peace Promotion Fellowship, Santa Monica, Calif.: RAND Corporation, RR-A233-3, 2020; and Alice Huguet, John F. Pane, Garrett Baker, Laura S. Hamilton, and Susannah Faxon-Mills, Media Literacy Education to Counter Truth Decay: An Implementation and Evaluation Framework, Santa Monica, Calif.: RAND Corporation, RR-A112-18, 2021.

94. Sina Beaghley, Todd C. Helmus, Miriam Matthews, Rajeev Ramchand, David Stebbins, Amanda Kadlec, and Michael A. Brown, Development and Pilot Test of the RAND Program Evaluation Toolkit, Santa Monica, Calif.: RAND Corporation, RR-1799-DHS, 2017, pp. 5–6; Jacopo Bellasio, Joanna Hofman, Antonia Ward, Fook Nederveen, Anna Knack, Arya Sofia Meranto, and Stijn Hoorens, Counterterrorism Evaluation: Taking Stock and Looking Ahead, Santa Monica, Calif., and Cambridge, United Kingdom: RAND Corporation, RR-2628-WODC, 2018, pp. 76–77. See also Amy-Jane Gielen, “Countering Violent Extremism: A Realist Review for Assessing What Works, for Whom, in What Circumstances, and How?” Terrorism and Political Violence, Vol. 31, No. 6, 2019, pp. 1149–1150.

95. Rogers, 2020, p. 215; J. M. Berger and Jonathon Morgan, “The ISIS Twitter Census: Defining and Describing the Population of ISIS Supporters on Twitter,” Washington, D.C.: Brookings Institution, Analysis Paper No. 20, March 2015, p. 56; Lella Nouri, Nuria Lorenzo-Dus, and Amy-Louise Watkin, “Following the Whack-a-Mole: Britain First’s Visual Strategy from Facebook to Gab,” London: Royal United Services Institute for Defence and Security Studies, Global Research Network on Terrorism and Technology Paper No. 4, July 4, 2019.

96. Eshwar Chandrasekharan, Umashanthi Pavalanathan, Anirudh Srinivasan, Adam Glynn, Jacob Eisenstein, and Eric Gilbert, “You Can’t Stay Here: The Efficacy of Reddit’s 2015 Ban Examined Through Hate Speech,” Proceedings of the ACM on Human-Computer Interaction, Vol. 1, No. CSCW, November 2017.

97. Rogers, 2020, p. 215; Paris Peace Forum, “Digital Platforms and Extremism: Are Content Controls Effective?” in Insights from the 2018 Paris Peace Forum Debate Sessions, November 13, 2018; Sheera Frenkel and Davey Alba, “In India, Facebook Grapples with an Amplified Version of Its Problems,” New York Times, October 23, 2021.

98. Conway, 2020, pp. 108–110; Paris Peace Forum, 2018.

99. For illustrative studies on developing tools to detect hate speech, see Mainack Mondal, Leandro Araújo Silva, and Fabrício Benevenuto, “A Measurement Study of Hate Speech in Social Media,” HT ’17: Proceedings of the 28th ACM Conference on Hypertext and Social Media, July 2017; and Njagi Dennis Gitari, Zhang Zuping, Hanyurwimfura Damien, and Jun Long, “A Lexicon-Based Approach for Hate Speech Detection,” International Journal of Multimedia and Ubiquitous Engineering, Vol. 10, No. 4, 2015.

100. Bharath Ganesh and Jonathan Bright, “Countering Extremists on Social Media: Challenges for Strategic Communication and Content Moderation,” Policy & Internet, Vol. 12, No. 1, March 2020, p. 8; Rachel Briggs and Sebastien Feve, Review of Programs to Counter Narratives of Violent Extremism: What Works and What Are the Implications for Government? London: Institute for Strategic Dialogue, 2013, p. 25.

101. For helpful reviews of the literature on preventing extremism and countering extremist narratives, see Joshua Sinai with Jeffrey Fuller and Tiffany Seal, “Research Note: Effectiveness in Counter-Terrorism and Countering Violent Extremism: A Literature Review,” Perspectives on Terrorism, Vol. 13, No. 6, December 2019; and William Stephens, Stijn Sieckelinck, and Hans Boutellier, “Preventing Violent Extremism: A Review of the Literature,” Studies in Conflict & Terrorism, Vol. 44, No. 4, 2021. For an illustration of the emphasis on religious extremism to date, see the summary of evaluation studies in Beaghley et al., 2017, pp. 22–23.

102. For a discussion of the “contested role between civil society, government, and the private sector,” see Ganesh and Bright, 2020; and Anne Aly, Anne-Marie Balbi, and Carmen Jacques, “Rethinking Countering Violent Extremism: Implementing the Role of Civil Society,” Journal of Policing, Intelligence and Counter Terrorism, Vol. 10, No. 1, 2015.

103. For a balanced discussion of the legal considerations when implementing counterextremist measures, see Victoria L. Killion, Terrorism, Violent Extremism, and the Internet: Free Speech Considerations, Washington, D.C.: Congressional Research Service, R45713, May 6, 2019. For an illustrative argument against government monitoring of social media on these grounds, see Rachel Levinson-Waldman and Sahil Singhvi, “Law Enforcement Social Media Monitoring Is Invasive and Opaque,” Brennan Center for Justice, November 6, 2019.