The Great Replacement, White Genocide Theories: Prevalence, Scale, Proliferation

Our findings draw on analysis using social listening tools to examine online behavior, as well as over four years of digital ethnographic work observing extreme-right communities online. Following the attack in Christchurch, we investigated the prevalence, scale and nature of the ideologies and narratives that motivated the perpetrator, using a combination of quantitative and qualitative analysis across mainstream and alternative social media channels. In our quantitative analysis we assessed over two million social media and media mentions of the Great Replacement conspiracy theory, which was at the heart of the attacker’s manifesto, and related terms such as ‘remigration’ and ‘white genocide’. We complemented this approach by creating case studies drawn from analysis of conversations on forums and encrypted chat rooms frequented by the extreme-right.

Key Findings
The so-called ‘Great Replacement’ theory originated in France; its main proponents include the Identitarian group Generation Identity, an organization that seeks to preserve ‘ethnocultural identity’ globally. Our 2019 Generation Identity Europe Census identified 70,000 followers of official GI accounts on Twitter, 11,000 members of Facebook groups, 30,000 members of Telegram groups and 140,000 subscribers on YouTube. Although these numbers will inevitably include researchers and journalists, our assessment suggests that a majority of these individuals are supporters of the Identitarian Movement.

We identified around 1.5 million tweets referencing the Great Replacement theory between April 2012 and April 2019 in English, French and German. The volume of tweets rose steadily in the seven years leading up to the Christchurch attack, with the number of tweets mentioning the theory nearly tripling in four years, from just over 120,000 in 2014 to just over 330,000 in 2018.

French accounts dominate online conversation around the Great Replacement theory, perhaps unsurprisingly given that the concept originated in France. However, the theory is becoming more prevalent internationally, with English-speaking countries accounting for 32.76 percent of online discussion around it.

Extreme-right communities use a range of methods to broadcast the Great Replacement theory, including dehumanizing racist memes, distorting and misrepresenting demographic data, and using debunked science. Great Replacement propagandists have found ways to co-opt the grievances of different fringe communities on the internet by connecting anti-migration, anti-lesbian, gay, bisexual and transgender (LGBT), anti-abortion and anti-establishment narratives.

The Great Replacement theory inspires calls for extreme action from its adherents, ranging from the supposedly non-violent ethnic cleansing of ‘remigration’ to genocide. This is in part because the theory creates a sense of urgency by invoking crisis narratives.

We found over 540,000 tweets using the term ‘remigration’ between April 2012 and April 2019. This concept calls for forced deportations of minority communities and essentially represents a soft form of ethnic cleansing. Since 2014, the volume of tweets about remigration has surged and reached broader audiences, rising from 66,000 tweets in 2014 to 150,000 tweets in 2018. The first stark increase in conversation around the theory occurred in November 2014, coinciding with the first Assises de la Remigration (Annual Meeting on Remigration) organized by Generation Identity in Paris.

Politicians and political commentators have been key in mainstreaming the Great Replacement narrative by making explicit and implicit references to the conspiracy theory in their speeches, social media posts and policies. We identified four leading politicians from across Europe explicitly advocating the Great Replacement concept, and five others using related language and conspiracy theories in their campaigns.

Alternative far-right media outlets have played an important role in spreading the idea of remigration on a global level in the last year: 10 of the top 15 sources, which together accounted for roughly 50 percent of total coverage of the term ‘remigration’ between April 2018 and April 2019, can be classified as far-right alternative news outlets.

Conclusions
Recent attacks demonstrate the potential for the Great Replacement theory to drive extreme-right mobilization and terrorist acts. By examining the narratives that its proponents employ, it is clear that the theory lends itself to calls for radical action against minority communities – including ethnic cleansing, violence and terrorism.

These narratives are increasingly influential in the promotion of violent extremism, but there are also concerted efforts to normalize the underlying ideology through a range of communications tactics that have enabled groups on the fringes to influence mainstream political and public discourse. Populist and far-right parties including the AfD in Germany, the Austrian Freedom Party, Lega in Italy and UKIP are championing virulent anti-migrant and anti-Muslim rhetoric and policies, in part shaped by a desire to appeal to voters who are sympathetic to these narratives. Several more traditional conservative and centre-right parties have also nodded to these narratives: Austrian chancellor Sebastian Kurz, for example, has been criticized by far-right leader Heinz-Christian Strache of the Austrian Freedom Party for copying his agenda on immigration. But such rhetoric can also be identified on the political left, as evidenced by the Social Democrats in Denmark, who have been noted for their tough anti-immigration policies.

Further research is required to understand the interplay between extreme fringe movements and the political mainstream in relation to the instigation and dissemination of extremist ideologies and conspiracy theories. Such research could include an examination of the points of convergence between mainstream and fringe discourse. Perhaps most importantly, there needs to be a facility that provides real-time, ongoing analysis of extremist information operations that target specific groups such as minorities, LGBT communities and political opponents, and that leverage key wedge issues like migration or integration. Such a facility could help inform rapid responses to this growing challenge, providing up-to-date data to civil society organizations, policymakers and front-line services working to prevent community polarization as well as incidents of hate crime, violence and terrorism.

More generally, however, we need to explore and trial innovative mechanisms that allow for a frank airing of, and authentic engagement with, legitimate grievances and policy concerns around migration. This is often best done at a local level, where municipal authorities, if properly informed, can play an instrumental role in engaging with local grievances and divisive local dynamics. Innovative approaches need to be explored and tested, both offline and online, that enable ‘people to people’ contact across dividing lines, and engagement with marginalized or fringe voices and with extremists themselves. Some of the most promising examples of such work have not been invested in at scale, leaving extremists wide-open and fertile terrain in which to exploit grievances and poison the public debate.

From a policy response perspective, the fact that extremist messaging and content often skirts the boundaries of both acceptability and legality represents a serious challenge. These groups often operate within a ‘grey area’ between legitimate free speech, illegal hate speech and speech that contravenes the community standards of different social media platforms.

Where content and messaging online transgresses neither national laws nor the policies of the tech companies, one has to look beyond the content moderation and removal approaches that have dominated government and company digital policy discussions to date. Indeed, perhaps the greater challenge lies not in individual pieces of hateful content, but in the flourishing, noxious communities that propagate and normalize their hateful ideas among ever wider constituencies of users. It has become increasingly apparent that the technological architecture of the major platforms inorganically amplifies extreme messaging. Algorithms designed to maximize time spent on platforms, and thereby enhance advertising revenue, inadvertently aid extremist communications strategies by channeling sympathetic users towards ever more sensationalist or borderline content. Individuals may be unaware of the extent to which algorithms shape what they see online and of the distortive effect this can have on their digital experience.

The potential impact of online architecture on radicalization requires further research and attention from policymakers as well as greater public awareness. Improved transparency around the development and outcomes of these algorithms has the potential to shed light on the impact of malicious co-option of platform architecture, better inform policy responses, and encourage more responsible platform design on behalf of the technology sector.

In addition to third-party review or regulation of the outcomes of platform architecture, there are a number of crucial policy gaps relating specifically to far-right and extreme-right movements. In recent years, governments have been pushing social media companies to restrict the exploitation of their platforms by extremist and terrorist groups. For example, following pressure from governments, the Global Internet Forum to Counter Terrorism (GIFCT) was launched by Facebook, Microsoft, Twitter, and YouTube. The GIFCT website outlines its record in relation to the removal of terrorist content. However, the main focus of these efforts to date has been the removal of Islamist extremist content. Far-right extremist propaganda has only very recently come into focus as a priority concern, with new policies launched by both Facebook and YouTube to limit access to white supremacist and white nationalist content. It should be noted, however, that the tech companies currently rely on national and international lists of proscribed terrorist groups, such as the UN Designated Terror Groups list, to direct their GIFCT enforcement efforts, and these tend to prioritize the identification of Islamist groups. As a result, far-right and extreme-right violent extremist and terrorist material remains readily accessible online.

Overall, policymakers have been slow to recognize the threat posed by the extreme-right, and have only very recently begun to address it in response to the recent surge in far-right inspired acts of violence. It is crucial that this gap is closed and that more is done by governments, tech platforms and practitioners to understand the dynamics of these movements. This, however, requires an increase in expertise. While the UK, Canadian and German governments have taken steps to proscribe extreme-right groups, other countries are lagging behind. Crucially, the United States is currently limited in its domestic response to extreme-right terror: a policy failure with far-reaching consequences, including shaping the agenda of the predominantly US-based social media platforms. Although major platforms have introduced some voluntary measures to counter white nationalist and white supremacist content, many of the fringe platforms frequented by the extreme-right use free speech and libertarian arguments as the baseline for their policies. This wider technological ecosystem must be addressed by policymakers if the challenges are to be met successfully. Raising greater awareness of the nature, scale and tactics of online extremist networks is an essential first step in helping practitioners and governments respond to the threat in an informed, proportional and consistent fashion.

— Read more in Jacob Davey and Julia Ebner, “The Great Replacement”: The Violent Consequences of Mainstreamed Extremism (Institute for Strategic Dialogue, 2019)