Extremists & social media
Can taking down websites really stop terrorists and hate groups?

By Thomas Holt, Joshua D. Freilich, and Steven Chermak

Published 18 September 2017

Racists and terrorists, and many other extremists, have used the internet for decades and adapted as technology evolved, shifting from text-only discussion forums to elaborate and interactive websites, custom-built secure messaging systems and even entire social media platforms. Recent efforts to deny these groups online platforms will not kick hate groups, or hate speech, off the web. In fact, some scholars theorize that attempts to shut down hate speech online may cause a backlash, worsening the problem and making hate groups more attractive to marginalized and stigmatized people, groups, and movements. The tech industry, law enforcement, and policymakers must develop a more measured and coordinated approach to the removal of extremist and terrorist content online. The only way to really eliminate this kind of online content is to decrease the number of people who support it.

In the wake of an explosion in London on September 15, President Trump called for cutting off extremists’ access to the internet.

Racists and terrorists, and many other extremists, have used the internet for decades and adapted as technology evolved, shifting from text-only discussion forums to elaborate and interactive websites, custom-built secure messaging systems and even entire social media platforms.

Our research has examined various online communities populated by radical and extremist groups. And two of us were on the team that created the U.S. Extremist Crime Database, an open-source database helping scholars better understand the criminal behaviors of jihadi, far-right and far-left extremists. Analysis of that data demonstrates that having an online presence appears to help hate groups stay active over time. (One of the oldest far-right group forums, Stormfront, has been online in some form since the early 1990s.)

But recent efforts to deny these groups online platforms will not kick hate groups, or hate speech, off the web. In fact, some scholars theorize that attempts to shut down hate speech online may cause a backlash, worsening the problem and making hate groups more attractive to marginalized and stigmatized people, groups and movements.

Fighting an impossible battle
Like ordinary individuals and corporations, extremist groups use social media and the internet. But there have been few concerted efforts to eliminate their presence from online spaces. For years, Cloudflare, a company that provides technical services and protection against online attacks, has been a key provider for far-right groups and jihadists, withstanding harsh criticism for doing so.

The company refused to act until a few days after the violence in Charlottesville. As outrage built around the events and groups involved, pressure mounted on companies providing internet services to the Daily Stormer, a major hate site whose members helped organize the demonstrations that turned fatal. As other service providers stopped working with the site, Cloudflare CEO Matthew Prince emailed his staff that he “woke up … in a bad mood and decided to kick them off the internet.”

It may seem like a good first step to limit hate groups’ online activity – thereby keeping potential supporters from learning about them and deciding to participate. And a company’s decision to cut ties may demonstrate to other customers its willingness to take hard stances against hate speech.