• POLARIZATION | Influencers, Multipliers, and the Structure of Polarization: How Political Narratives Circulate on Twitter/X

A recent study offers a nuanced account of the mechanisms driving polarization and issue alignment on Twitter/X, revealing how political polarization is reinforced and structured by two distinct types of highly active users: influencers and multipliers.

• EXTREMISM | Hashtags and Humor Are Used to Spread Extreme Content on Social Media

    Conspiracy theories and incitement to harassment and violence abound on mainstream social media platforms like Facebook and Instagram. But the extreme content is often mixed with ironic play, memes and hashtags, which makes it difficult for authorities and media to know how to respond.

• DEEPFAKES | Australia’s Deepfake Dilemma and the Danish Solution

    By Andrew Horton and Elizabeth Lawler

    Countries need to move beyond simply pleading with internet platforms for better content moderation and instead implement new legal frameworks that empower citizens directly. For a model of how to achieve this, policymakers should look to the innovative legal thinking emerging from Denmark.

• EXTREMISM | What Does Netflix’s Drama “Adolescence” Tell Us About Incels and the Manosphere?

    By Lewys Brace

    While Netflix’s psychological crime drama ‘Adolescence’ is a work of fiction, its themes offer insight into the very real and troubling rise of the incel and manosphere culture online.

• TRUTH DECAY | AI System Identifies Fake Videos Beyond Face Swaps and Altered Speech

    By David Danelski

In an era where manipulated videos can spread disinformation, bully people, and incite harm, researchers at UC Riverside, in collaboration with Google, have developed a new model that spots fakes by analyzing both faces and backgrounds.

• EXTREMISM | Grok’s Antisemitic Rant Shows How Generative AI Can Be Weaponized

    By James Foulds, Phil Feldman, and Shimei Pan

    The AI chatbot Grok went on an antisemitic rant on July 8, 2025, posting memes, tropes and conspiracy theories used to denigrate Jewish people on the X platform. It also invoked Hitler in a favorable context. The episode follows one on May 14, 2025, when the chatbot spread debunked conspiracy theories about “white genocide” in South Africa, echoing views publicly voiced by Elon Musk, the founder of its parent company, xAI.

• EXTREMISM | Terrorgram Block Is a Welcome Step Towards Countering Violent Extremism

    By Henry Campbell

    Terrorgram has been linked to lone-actor attacks in Slovakia, Turkey, Brazil and the United States. Its listing places it among the likes of Hamas, Islamic State, and violent white supremacist groups such as Sonnenkrieg Division and The Base.

• TRUTH DECAY | Grok’s ‘White Genocide’ Responses Show How Generative AI Can Be Weaponized

    By James Foulds, Phil Feldman, and Shimei Pan

The AI chatbot Grok spent one day in May 2025 spreading debunked conspiracy theories about “white genocide” in South Africa, echoing views publicly voiced by Elon Musk. Substantial research has gone into methods for keeping AI systems from making such harmful statements, a field known as AI alignment, but this incident is particularly alarming because it shows how those same techniques can be deliberately abused to produce misleading or ideologically motivated content.

• DEMOCRACY WATCH | Regulating X Isn’t Censorship. It’s Self-Defense

    By Fergus Ryan

The European Union’s landmark content law, the Digital Services Act (DSA), reflects hard-earned European wisdom. It grows out of the historical memory of democracies undone by propaganda, foreign interference, and the normalization of lies. Vice President J. D. Vance and X owner Elon Musk harshly criticize the DSA, framing their campaign against it as a defense of “free speech,” but in Europe it increasingly looks like a coordinated push to weaken democratic institutions and empower their far-right allies.

• TECHNOLOGY & CONFLICT | What if Bin Laden Was Killed in the Era of Generative AI?

    By Matthew J. Fecteau

By leveraging machine learning to produce AI-generated content, adversaries can weaponize synthetic media, making fact and fiction nearly indistinguishable. The death, or not, of combatant leaders is a prime example of the magnitude of the challenge this emerging reality poses.

• TECHNOLOGY & CONFLICT | Memes and Conflict: Study Shows Surge of Imagery and Fakes Can Precede International and Political Violence

    By Tim Weninger and Ernesto Verdeja

    The widespread use of social media during times of political trouble and violence has made it harder to prevent conflict and build peace.

• EXTREMISM | The Rise and Fall of Terrorgram: Inside a Global Online Hate Network

    By A. C. Thompson and James Bandler

    White supremacists from around the world used Telegram to spread hateful content promoting murder and destruction in a community they called Terrorgram. ProPublica and FRONTLINE identified 35 crimes linked to Terrorgram, including bomb plots, stabbings, and shootings. After several arrests of alleged Terrorgram members and reforms by Telegram, experts expect that extremists will find a new platform for their hate.

• TERRORISM | How a Global Online Network of White Supremacists Groomed a Teen to Kill

    By A.C. Thompson, James Bandler, and Lukáš Diko

    Neo-Nazi influencers on the social media platform Telegram created a network of chats and channels where they stoked racist, antisemitic and homophobic hate. The influencers, known as the Terrorgram Collective, targeted a teen in Slovakia and groomed him for three years to kill.

• THE RUSSIA CONNECTION | Trump Is Giving Russian Cyber Ops a Free Pass – and Putting Western Democracy on the Line

    By Jasper Jackson, Bureau of Investigative Journalism

Secretary of Defense Pete Hegseth last weekend announced that the U.S. will halt all offensive cyber operations – and planning for such operations – against Russia. The Kremlin has long sought to sow chaos in the United States and other democracies by using “information confrontation.” That job just got a lot easier.

• TRUTH DECAY | As Facebook Abandons Fact-Checking, It’s Also Offering Bonuses for Viral Content

    By Craig Silverman

Meta decided to stop working with U.S. fact-checkers at the same time it is revamping a program that pays bonuses to creators with high engagement numbers, potentially pouring accelerant on the kind of false posts the company once policed.