TERRORIST CONTENT ONLINE

Problems in Regulating Social Media Companies’ Extremist, Terrorist Content Removal Policies

Published 20 December 2021

The U.S. government’s ability to meaningfully regulate major social media companies’ terrorist and extremist content removal policies is limited.

The proliferation of online terrorist and violent extremist content, particularly on social media platforms, is one of the major policy issues facing U.S. counterterrorism authorities and digital communications technology providers.

The George Washington University’s Program on Extremism (POE) has released a report, Moderating Extremism: The State of Online Terrorist Content Removal Policy in the United States, which notes that the advent of massive online social media services led a range of terrorist and violent extremist groups to exploit these platforms for propaganda, recruitment, radicalization, and operational planning.

“Initially, terrorist content was most plentiful on platforms operated by exponentially growing American companies, sparking society-wide debates about the role of these platforms, their approaches to harmful content, and industry regulation,” the report says. “From government officials to company shareholders, civil society organizations to media reporting, societal pressure to regulate digital communications service providers usually involves the question: ‘Why is your company not doing more to stop terrorist content on your platform?’”

The report says that when the public perceives that major social media companies are failing to address terrorist content, many call for direct governmental regulation, that is, externally imposed laws that attempt to shape the behavior of the company in question. “While government regulation can take an incentivizing form, pushes for regulation against major social media companies in the wake of violent extremist activity online almost always involve punitive action,” the report says. For example, American lawmakers have threatened to fine companies, to strip their immunity for hosting third-party content, to charge them with providing material support to terrorists, and to break them up.

“Other debates on social media content moderation policies, particularly regarding hate speech, disinformation, and content harmful to children, have also influenced a more vocal call for the U.S. government to crack down on major social media companies,” the report notes.

The report adds:

Calls for increased governmental regulation are understandably attractive in theory for lawmakers and the public, but if put into practice, the imposition of more stringent regulations on major service providers may not deliver the intended results. As this paper argues, direct U.S. government regulation of major social media companies’ content removal efforts may not have a meaningful effect on either the amount of extremist content on those platforms or broader issues of online extremism and radicalization.