Technology Evolves the Tactics: Preparing for the Rise of Terrorist AI Harms
Terrorist groups, like the societies they emerge from, adapt to new technologies. As AI capabilities evolve, so too do the tactics of extremist actors. While the full effects may take years to observe, as the technologies continue to develop, we are starting to see them directly alter extremism tradecraft.
From AI-driven denial-of-service attacks and adaptive malware to drone swarms and autonomous vehicles, the malicious use and wider adoption of machine learning and artificial intelligence by terrorist groups has been the subject of wide speculation in recent months. Earlier this year, the UK Government published “The Terrorism Acts in 2023”, the report of the Independent Reviewer of Terrorism Legislation, which, among other areas, considers seven categories of potential terrorism harm that may result from the use of Generative AI.
This article builds on those conclusions, providing real-world examples of terrorist uses of AI.
Propaganda and Productivity Innovation
Synthetic propaganda is nothing new; deepfakes (audio and visual) have historically been used to support terrorism. With the increasing availability of generative AI tools, we are seeing their weaponization by terrorist organizations for propaganda generation. ISIS’ media division, News Harvest, has adopted generative AI to produce human-like news presenters resembling broadcasts from networks such as CNN and Al Jazeera, generating video, audio, and text content tailored for propaganda in multiple languages.
Chatbot Radicalization
With the boom in generative AI, people have been flocking to AI chatbots to converse and share ideas. These chatbots pose a real threat of steering users towards terrorist ideals and views, whether in closed-loop, private one-to-one chats or in public multi-user settings.
We have recently seen this phenomenon with the xAI chatbot, Grok. On 8 July 2025, Grok began posting content praising Adolf Hitler, using antisemitic stereotypes, and even referring to itself as “MechaHitler”. Similarly, prior to these concerns on X, Gab’s AI chatbot had been observed generating Holocaust denial and other conspiracist content.
Attack Facilitation and Innovation
Generative AI may be used to obtain or refine practical instructions, training, or support for acts of terror. Such tactics are increasingly common in pro-IS propaganda networks. For example, in March 2025, a pro-IS account released an AI-generated video featuring a digital avatar providing instructions for making a bomb using common household items.
