Problems in Regulating Social Media Companies’ Extremist and Terrorist Content Removal Policies

While the landscape of extremist use of the internet has evolved in its architecture, major players, tools, and tactics, the public debate about content removal policy has largely stagnated, relying on the same tropes, axioms, and solutions as it did ten years ago. Some proposed regulations still fail to account for how terrorist and extremist content spreads online today and are therefore unlikely to be effective.

Here is the POE report’s Executive Summary:

Executive Summary
Alongside a host of platform governance issues facing technology companies, the exploitation of social media platforms by terrorist and extremist groups is a major controversy in debates about how companies can combat harmful content online. In the United States and around the world, the shortcomings of social media providers in removing terrorist content have increased the frequency and intensity of calls by lawmakers and the public for governments to directly regulate social media companies’ policies against terrorist and extremist content.

Advocates of direct governmental regulation present a straightforward narrative: companies fail to meet their responsibility to police terrorist content on their platforms, and governments intervene with strict parameters, hefty fines, and legal penalties to force them into compliance.1 To push the U.S. government to act, advocates of government regulation cite examples of such measures adopted by governments around the world. Yet these arguments often omit thorough evaluations of the state of terrorist and extremist content online, as well as historical assessments of the interplay between governments and social media providers over how to manage online terrorist content.

By reviewing studies of how today’s terrorist and extremist groups operate on social media in conjunction with an overview of U.S. government regulation of terrorist content online, this report finds that stricter U.S. regulation of social media providers may not be the most effective method of combating online terrorist and extremist content. Specifically:

• Direct governmental regulations that ignore other sources of regulation on content removal policies could disrupt growing intra-industry collaboration on countering terrorist content online.

• In many regards, the U.S. government defers to and depends on the private sector to conduct counterterrorism online. Many factors contribute to this arrangement, including limits on the government’s authorities, expertise, staffing, dexterity, and political will to manage online terrorist content with the same efficacy as major social media companies.

• Attempts by other governments to strictly regulate social media companies’ terrorist content removal policies hurt small companies, created double standards and redundancies, and raised concerns about censorship and free speech.

• Proposed regulations may only affect major U.S. social media providers; smaller and non-U.S. companies may be unable, unwilling, or not required to comply. Due to the proliferation of social media platforms exploited by terrorists and extremists, platforms that may be unaffected by U.S. government regulation currently host a large proportion of terrorist content online.

• In certain regards, major social media companies have more flexibility than the U.S. government in adapting their content removal policies to account for new terrorist and extremist groups and actors and their respective tactics, techniques, and procedures online.