Understanding Antisemitism on Twitter After Musk

The Musk Effect: Instant Uptick and Long-Term Impact
Elon Musk’s tumultuous takeover of Twitter brought dramatic changes to the platform’s approach to tackling online harms. Within days, fundamental changes were made to policies and enforcement, including the reinstatement of accounts that had previously been permanently banned, the dissolution of Twitter’s independent Trust and Safety Council, which advised on decisions around tackling harmful activity on the platform, and the laying off of more than half of Twitter’s staff, including many of those responsible for content moderation, online safety and conversational health.

The effects of these changes are reflected in the data analysis outlined in this report, which demonstrates a major increase in the number of antisemitic Tweets posted in the immediate aftermath of the takeover, a volume that has crucially remained elevated in subsequent months.

We also identified a surge in the creation of new accounts posting hate speech that correlated with Musk’s takeover. In total, 3,855 accounts that posted at least one antisemitic Tweet were created between October 27 and November 6, more than triple the rate of potentially hateful account creation for the equivalent period prior to the takeover. Closer assessment of these accounts showed that many displayed characteristics of overt racism and ethnonationalism. This correlates with a rise in coordinated harassment and even pro-ISIS activity on the platform around Musk’s takeover, suggesting that harmful online communities felt empowered by Musk’s widely publicized shifts in Twitter’s management.
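For readers who want a sense of how such a comparison might be made, the following is a minimal sketch rather than the report’s actual pipeline. It assumes a table of collected Tweets with hypothetical columns author_id, account_created_at and is_antisemitic, and simply counts distinct newly created accounts that posted at least one flagged Tweet in each window.

```python
import pandas as pd

# Hypothetical input: one row per collected Tweet, with the posting account's
# creation timestamp and a classifier flag for antisemitic content.
tweets = pd.read_csv("tweets.csv", parse_dates=["account_created_at"])

def new_hateful_accounts(df, start, end):
    """Count distinct accounts created in [start, end) that posted
    at least one Tweet flagged as antisemitic."""
    window = df[
        (df["account_created_at"] >= start)
        & (df["account_created_at"] < end)
        & (df["is_antisemitic"])
    ]
    return window["author_id"].nunique()

# Takeover window (Oct 27 - Nov 6) versus an equally long window just before it.
takeover = new_hateful_accounts(tweets, "2022-10-27", "2022-11-07")
baseline = new_hateful_accounts(tweets, "2022-10-16", "2022-10-27")

print(f"post-takeover: {takeover}, baseline: {baseline}, "
      f"ratio: {takeover / max(baseline, 1):.1f}x")
```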

Despite Musk’s claims that “hate Tweets will be max deboosted & demonetized” – meaning they will not be algorithmically recommended to users in their feeds (deboosted) and cannot be promoted as adverts or generate revenue (demonetized) – and that “New Twitter policy is freedom of speech, but not freedom of reach”, the research showed no appreciable change in the average levels of engagement or interaction with antisemitic Tweets before and after the takeover. There is no clear evidence that ‘deboosting’ has had any impact, as the platform’s algorithmic architecture seemingly continues to prioritize engagement over quality content. However, Twitter’s lack of algorithmic transparency means this hypothesis is not easy to test at scale, making it difficult to hold Musk accountable for his promises.
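As an illustration only, one crude way to probe the “freedom of reach” claim at small scale is to compare engagement distributions on flagged Tweets posted before and after the takeover date. The file name and columns below (created_at, retweet_count, like_count, reply_count) are assumptions, not the report’s data format.

```python
import pandas as pd
from scipy.stats import mannwhitneyu

# Hypothetical per-Tweet data: posting time plus public engagement counts.
# Timestamps are assumed to be naive UTC.
tweets = pd.read_csv("antisemitic_tweets.csv", parse_dates=["created_at"])
tweets["engagement"] = tweets[["retweet_count", "like_count", "reply_count"]].sum(axis=1)

takeover_date = pd.Timestamp("2022-10-27")
before = tweets.loc[tweets["created_at"] < takeover_date, "engagement"]
after = tweets.loc[tweets["created_at"] >= takeover_date, "engagement"]

# Engagement counts are heavily skewed, so compare distributions rather than means.
stat, p_value = mannwhitneyu(before, after, alternative="two-sided")
print(f"median before: {before.median()}, median after: {after.median()}, p={p_value:.3f}")
```

A null result in such a comparison does not prove that deboosting is absent, only that any effect is not visible in aggregate engagement, which is why algorithmic transparency matters for testing the claim properly.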

A New Regulatory Paradigm
Twitter’s policy on hateful conduct claims to prohibit the incitement of harm against people based on race, ethnicity or religious affiliation; the harassment of individuals with reference to the Holocaust; and the use of slurs and racist epithets. However, our research surfaced a broad spectrum of antisemitic content on Twitter, ranging from harmful conspiracy theories about Jewish control of finance, media and politics, to overt support for antisemitic comments made by public figures such as Kanye West, to the promotion of profoundly racist white supremacy.

Much of this falls into a grey area: it does not contravene legal thresholds for hate speech, but nonetheless likely violates the platform’s terms of service. Twitter purports to take a variety of actions against violating material, including removing content and down-ranking and de-amplifying Tweets, but there is little clarity around how such interventions are enforced.

Significantly, our research did find that after Musk’s takeover of the platform around 12% of the plausibly antisemitic messages we identified are now inaccessible, compared to roughly 6% before the takeover [1]. Whilst there are multiple possible reasons for a Tweet not being retrievable, one cause would be the platform’s own content moderation practices. Crucially, however, our research suggests that these moderation efforts are not keeping up with the increased volume of hateful content on the platform, and accordingly are having a limited impact on the increasingly hateful environment on Twitter under Musk, a finding affirmed by recent research from the ADL showing the low removal rate of antisemitic Tweets flagged to the platform.
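As the footnote explains, inaccessibility is only a rough proxy for takedowns. A minimal sketch of how Tweet availability could be re-checked, assuming bearer-token access to the public v2 Tweet-lookup endpoint (the kind of access Twitter has since restricted), might look like this:

```python
import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder; requires valid API access
LOOKUP_URL = "https://api.twitter.com/2/tweets"

def inaccessible_fraction(tweet_ids):
    """Return the fraction of previously collected Tweet IDs (as strings)
    that can no longer be retrieved. The v2 lookup endpoint reports deleted,
    suspended or protected Tweets in an 'errors' array rather than in 'data'."""
    inaccessible = 0
    for i in range(0, len(tweet_ids), 100):  # the endpoint accepts up to 100 IDs per call
        batch = tweet_ids[i : i + 100]
        resp = requests.get(
            LOOKUP_URL,
            params={"ids": ",".join(batch)},
            headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
            timeout=30,
        )
        resp.raise_for_status()
        inaccessible += len(resp.json().get("errors", []))
    return inaccessible / len(tweet_ids)

# Example usage on a previously collected sample:
# rate = inaccessible_fraction(list_of_collected_tweet_ids)
```

An entry in the errors array does not distinguish platform removals from user deletions or account suspensions, which is why inaccessibility can only be treated as a potential, not a definitive, measure of takedown efforts.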

Beyond a sustained increase in hate speech, and evidence suggesting that counter-measures to deboost harmful content are having limited impact, Twitter’s commitment to transparency also appears to be moving in the opposite direction, with the platform revoking the free API access that makes a substantial amount of this research possible. This poses a significant risk of limiting third-party efforts to assess the scale of harmful content on the platform, or the impact of its moderation efforts. Incoming regulations from the European Union (in particular the Digital Services Act) will mandate much greater transparency from social media platforms about the actions being taken to prevent the proliferation of harmful material online.

The Rising Threat of Antisemitism
These findings come amidst wider concerns around the proliferation of online antisemitism, with weaponised hate manifesting in rising real-world violence targeting Jewish communities. In 2021 the ADL tracked the highest number of antisemitic incidents, including harassment, vandalism and assaults, in the US since it began recording in 1979. This is not just a US phenomenon: in the UK the Community Security Trust recorded a similar spike in this concerning activity, whilst Germany’s Interior Ministry also registered record highs in antisemitic crimes following the Covid-19 pandemic.

These offline hate incidents should be viewed in the context of surges in online hate, with digital platforms facilitating the radicalisation of individuals towards antisemitic worldviews and the mass proliferation of narratives that seek to hold Jews responsible for the world’s ills. If we are to limit the spread of antisemitism and other forms of hate, it is essential to find policy solutions to its proliferation online.

This includes emerging regulatory regimes such as the EU’s newly introduced Digital Services Act, which seeks to enshrine a systemic approach to platform governance, addressing platforms’ business models and the underpinning algorithmic architectures that promote hate. Our research suggests that Twitter is failing in its duties under this regime, amid calls from regulators for an increased commitment to meaningful transparency, sophisticated detection and proportionate enforcement by the platform.

The full report can be found on the ISD website, here.

[1] Although there are several explanations for Tweets being inaccessible on the platform, in the body of the report we explain how this can provide a potential measure of Twitter’s takedown efforts.