Innovative AI Video Generators Produce Antisemitic, Hateful and Violent Outputs
In a matter of seconds, anyone can now use popular AI video generation tools to create antisemitic and extremist content. As this technology continues to evolve, existing guardrails often fail to catch prompts that can be used to generate extremist content, contributing to the proliferation of antisemitic propaganda across social media.
According to a new analysis from the ADL Center on Technology and Society (CTS), new AI-powered text-to-video tools can be easily used to produce disturbing antisemitic, hateful and dangerous videos, despite existing safeguards that supposedly prohibit them. In our test of 50 text prompts across four AI video generators, the tools produced videos in response to antisemitic, extremist or otherwise hateful text prompts at least 40% of the time.
Of the four tools, OpenAI's new Sora 2 model (released on September 30, 2025) performed best on content moderation, refusing to generate videos for 60% of the prompts.
Why It Matters
Over the past few years, our analysts have noted an influx of misleading or disturbing AI content online, which is often shared on mainstream social media platforms, allowing it to reach wider audiences. AI image generators have been used to create camouflaged propaganda, AI song generators have produced hateful or extremist music, and popular AI-powered chatbots have given biased responses, such as inconsistent answers to basic questions like, "Did the Holocaust happen?" AI content has also been leveraged to sow confusion and division following newsworthy events or tragedies, such as the Hamas October 7 attacks on Israel.
Unlike older AI video creation technologies, these new tools are far more user-friendly, accessible and sophisticated. Veo 3 and Sora 2, for example, can generate complex videos that include dialogue and other types of audio from a single text prompt.
Our analysts tested 50 text prompts covering a variety of antisemitic, hateful and extremist rhetoric across four AI-powered text-to-video generation tools to see whether they would produce this problematic content.
We designed our prompts to cover a range of hateful terms, tropes and scenarios that could easily be weaponized to create effective propaganda. Some prompts were overt in nature, requesting videos that used phrases or symbols associated with known extremist groups and mass shooters. Others were more subtle, utilizing common dog whistles and coded hate speech often used to evade content moderation online, including Holocaust denial references and antisemitic conspiracy theories.
