How 'Islamic State' Uses AI to Spread Extremist Propaganda

“What we know about AI use today is that it works as a complement to official propaganda by both al-Qaeda and IS,” says Moustafa Ayad, executive director for Africa, the Middle East and Asia at the London-based Institute for Strategic Dialogue (ISD), which investigates extremism of all kinds. “It allows supporters and unofficial support groups to create emotive content specifically used to galvanize the base of supporters around core concepts.”

The way this kind of content looks also means it may slip past content moderators on popular social media platforms.

In fact, Ayad told DW that even the more ridiculous and unrealistic IS content is often enough of a novelty for followers to share it among themselves.

Extremists Are ‘Early Adopters’
None of this is surprising to longtime observers of the IS group. When the extremist group first came to prominence around 2014, it was already making propaganda videos with fairly high production values to intimidate enemies and recruit followers.

“All this speaks to something the ISD has continually noted,” Ayad explained. “Terrorist groups and their supporters continue to be early adopters of technology to serve their interests.”

But how dangerous is this sort of content really? After all, the fake news broadcast about the Moscow attack looks fake, and the Peter Griffin song isn’t hurting anybody. Or is it?

Monitoring groups have listed a variety of ways in which extremist groups could use AI. Besides propaganda, they could also use chatbots built on large language models, such as ChatGPT, to converse with potential new recruits, experts suggest. Once the chatbot has aroused interest, a human recruiter might take over, they say.

AI models like ChatGPT also have safety rules built into their systems that prevent them from helping users with dangerous requests, such as getting away with murder. However, these rules have proven unreliable in the past, and would-be terrorists might be able to override them to obtain dangerous information.

There are also fears that extremists could use AI tools to undertake digital or cyberattacks or to help them plan terror attacks in real life.

Deep Fakes vs. Real Bombs
Experts argue that while AI has worrying potential in the hands of extremists, real life is still more dangerous.

In a 2019 paper in the journal Perspectives on Terrorism, researchers examined the connection between how much propaganda the IS group put out and its actual physical attacks. There was “no strong and predictable correlation,” they concluded.

“It’s similar to the discussion we were having about cyberweapons and cyber bombs around 10 years ago,” says Lilly Pijnenburg Muller, a research associate and expert on cybersecurity at the Department of War Studies at King’s College London.

Today even rumors and old videos can have a destabilizing impact and lead to a flurry of disinformation on social media, she told DW. “And states have conventional bombs that can be dropped, if that is their intention.”

“I don’t know if, at this stage, the use of AI by foreign terrorist organizations and their supporters is more dangerous than their very real and graphic propaganda involving the wanton murders of civilians and attacks on security forces,” the ISD’s Ayad says.

“Right now, the bigger threat is from these groups actually conducting attacks, inspiring lone actors or successfully recruiting new members because of their responses to the geopolitical landscape, namely the Israeli war on Gaza in response to October 7,” he continued. “They are using the civilian deaths and Israel’s actions as a rhetorical device for recruitment and to build out campaigns.”

Cathrin Schaer is a freelance journalist based in Berlin. This article was edited by Davis VanOpdorp and is published courtesy of Deutsche Welle (DW).