Mis- and Disinformation Trends and Tactics to Watch in 2025
Predicting how extremists may weaponize false narratives requires an understanding of the strategies that allow them to spread most effectively.
Throughout 2024, the ADL Center on Extremism documented the tactics deployed by extremists and purveyors of hate to promote false narratives, as well as the harmful impact of conspiracy theories, misinformation and disinformation on communities including Jews, immigrants and other marginalized groups.
Here, we highlight three key mis- and disinformation trends and tactics that saw success throughout 2024 and could deeply impact the extremist landscape in 2025 and beyond.
Using Generative Artificial Intelligence (GAI) to Spread Hate and Propaganda
In recent years, the ADL Center on Extremism has highlighted how purveyors of conspiracy theories and hate use generative artificial intelligence (GAI) tools to promote disinformation, extremist rhetoric and harmful content. The year 2024 proved no different, but advances in this technology made these tools more pernicious as social media continued to serve as fertile ground for AI-generated hate.
Videos showing English-language Hitler speeches once again went viral in September 2024, following similar content that circulated widely on X (formerly Twitter) earlier in the year, and on fringe platforms in 2023. Other types of audio-based GAI content popularized in 2024 came from apps like Suno, which users exploited to create AI-generated songs promoting hate, conspiracy theories and violence.
In April 2024, hateful GAI images migrated beyond the screen when the Michigan chapter of the white supremacist group White Lives Matter (WLM) rented roadside billboards in the metro Detroit area. These included white supremacist dog whistles alongside an image of Hitler that appeared to be AI-generated, making the billboard one of the earliest examples of GAI content in offline extremist propaganda tracked by ADL.
Ahead of the 2024 U.S. presidential election, promoters of disinformation used GAI content to influence voter sentiment, including synthetic speech robocalls and fabricated images. They also leveraged the “liar’s dividend” phenomenon to discredit factual information, suggesting that authentic images — such as photos of a crowd at a Kamala Harris rally in Detroit — were actually fabricated to deceive the masses.