EXTREMISM

Political Violence Offers Extremists “Trigger Events” for Recruiting Supporters

Published 30 October 2025

Extremists are exploiting political violence by using online platforms to recruit new people to their causes and amplify the use of violence for political goals. High-profile incidents of political violence are useful trigger events for justifying extremist ideologies and calls for retaliation.

A new report that monitored social platforms in the aftermath of recent attacks finds that extremists are exploiting political violence in exactly this way: using online platforms to recruit new people to their causes and to amplify the use of violence for political goals.

Researchers at New York University’s Stern Center for Business and Human Rights tracked social media feeds for several months this year, including in the aftermath of Charlie Kirk’s assassination.

“Violent extremist groups systematically exploit trigger events – high-profile incidents of violence – to recruit supporters, justify their ideologies and call for retaliatory action,” the findings say.

Here is the report’s Executive Summary:

Political violence in the United States has increased in recent years and shows no signs of declining. This trend was underscored in September 2025 by the assassination of conservative activist Charlie Kirk at Utah Valley University. In the two weeks before and after Kirk’s killing, shooting incidents in Colorado, Minneapolis, and Dallas seized public attention.

Amid growing concern about the relationship between online rhetoric and real-world violence, this report examines how violent extremist actors across the ideological spectrum use digital platforms to respond to, amplify, and exploit acts of political violence in the United States. Drawing on open-source intelligence (OSINT) gathered initially from March 24 to June 6, 2025, and then extended to include a period following Kirk’s assassination, this analysis reveals sophisticated cross-platform strategies employed by far-right, far-left, violent Islamist, and nihilistic violent extremist (NVE) actors.

This report uses “violent extremist” to refer to individuals who support or commit ideologically motivated violence to further political goals, as well as those who commit violence driven by generalized hatred rather than a coherent ideology.

Key Findings

·  Violent extremist groups systematically exploit trigger events—high-profile incidents of violence—to recruit supporters, justify their ideologies, and call for retaliatory action.

·  These groups employ multi-platform strategies, using mainstream sites like X for visibility and recruitment while maintaining a presence on private or semi-private platforms for coordination and more extreme content.

·  Far-right groups capitalized on cases like the Austin Metcalf stabbing and the Iryna Zarutska killing to advance narratives of White victimhood and justify threats against perceived enemies.

·  Activities of both far-left and far-right networks revealed a troubling convergence around antisemitic targeting.

·  Violent Islamist groups are monitored more aggressively than domestic groups espousing similar levels of violence.

·  Violent Islamist groups, facing stricter moderation than domestic extremists, have migrated to decentralized platforms like Rocket.Chat while disseminating symbolic propaganda elsewhere.

·  Nihilistic Violent Extremist (NVE) communities glorify violence across ideological lines for shock value and digital notoriety, making their threats harder to predict based on political triggers.

This report aims to bring clarity to a conversation clouded by vagueness and partisanship. It first maps the domestic threat landscape, offering timely examples of online violent discourse from across the ideological spectrum targeting US individuals or institutions, and sets out a clear definitional framework for types of speech that carry legal significance under US constitutional doctrine. It closes with practical recommendations for online service providers and policymakers.

Recommendations In Brief
For Online Platforms

1. Adopt precise policies on threats and incitement and demonstrate willingness and capacity to enforce those policies. Clearly define prohibited conduct involving threats of violence and incitement, and report publicly on enforcement actions and outcomes.

2. Implement user-friendly reporting tools compatible with encryption. Any platform enabling user communication should allow users to flag illegal conduct and content they believe violates platform policies. Those reports should be examined swiftly and escalated as appropriate.

3. Use metadata responsibly to disrupt networks. When collecting metadata to detect abusive behavior, limit collection to what is necessary for specific safety purposes, be transparent about its use, and delete it after a set period.

4. Cooperate with other services to monitor and combat dangerous cross-platform activity. Participate actively in cross-industry initiatives to identify migration patterns and disrupt attempts by dangerous actors to exploit harder-to-monitor encrypted environments.

For US Legislators and Policymakers

5. Recognize the limits of legal remedies. When setting out platform obligations, distinguish between harmful speech that remains constitutionally protected and speech that falls outside First Amendment protection.

6. Clarify protocols for platform-law enforcement cooperation. Establish clear standards for when and how platforms should share information related to threats or incitement with law enforcement.

7. Revisit extremist and terrorist designation frameworks. Re-examine the criteria used to designate terrorist organizations and apply them consistently across ideologies.

8. Mandate transparency, design, and procedural standards without undermining encryption. Require platforms to publish transparency reports that explain their abuse-detection and reporting goals, processes, and outcomes.

9. Support research on effective counter-speech initiatives. Explore partnerships with civil society to counteract violent narratives through counter-speech campaigns.