  • Just 12 People Are Behind Most Vaccine Hoaxes on Social Media

    Researchers have found that just twelve individuals are responsible for the bulk of the misleading claims and outright lies about COVID-19 vaccines that proliferate on Facebook, Instagram, and Twitter. Many of the messages about COVID-19 vaccines now spreading widely online echo the falsehoods that peddlers of health misinformation have long spread about other vaccines, such as those against measles, mumps, and rubella.

  • How Truth Decay Is Fueling Vaccine Hesitancy

    A recent poll found that more than a quarter of Americans say they will not try to get vaccinated. Why are so many people opting out? The reasons vary, but some simply don’t trust the public health and government officials who are urging them to get the vaccine. The spread of misinformation and disinformation, which is rampant on social media, is one of the factors fueling vaccine hesitancy. And in turn, it’s threatening our ability to end the pandemic for good.

  • Detecting Conspiracy Theories on Social Media

    Conspiracy theories circulated on social media shift public discourse away from facts and analysis and can contribute to direct public harm. Social media platforms face a difficult technical and policy challenge in trying to mitigate the harm caused by online conspiracy-theory language. Researchers are working to improve machine learning methods for detecting and understanding online conspiracy theories.

  • New AI Tool Tracks Evolution of COVID-19 Conspiracy Theories on Social Media

    A new machine-learning program accurately identifies COVID-19-related conspiracy theories on social media and models how they evolved over time—a tool that could someday help public health officials combat misinformation online.

  • Superspreaders of Malign and Subversive Information on COVID-19

    The global spread of coronavirus disease 2019 (COVID-19) created a fertile ground for attempts to influence and destabilize different populations and countries. Both Russia and China have employed information manipulation during the COVID-19 pandemic to tarnish the reputation of the United States by emphasizing challenges with its pandemic response and characterizing U.S. systems as inadequate, and both countries falsely accused the United States of developing and intentionally spreading the virus.

  • Breakthrough Technology a Game Changer for Deepfake Detection

    Army researchers developed a deepfake detection method that will allow for the creation of state-of-the-art soldier technology to support mission-essential tasks such as adversarial threat detection and recognition. This work specifically focuses on a lightweight, low-training-complexity, high-performance face biometrics technique that meets the size, weight, and power requirements of the devices soldiers will need in combat.

  • QAnon Hasn’t Gone Away – It’s Alive and Kicking in States Across the Country

    By this point, almost everyone has heard of QAnon, the conspiracy spawned by an anonymous online poster of enigmatic prophecies. Perhaps the greatest success of the conspiracy is its ability to create a shared alternate reality, a reality that can dismiss everything from a decisive election to a deadly pandemic. The QAnon universe lives on – now largely through involvement in local, not national, politics. Having moved on from contesting the election, the movement now focuses on vaccines and pandemic denialism.

  • New Tool Helps Spot False Information on Social Media

    University of Nebraska students have developed a tool called Info Window, which aims to help audiences look at their social feeds more critically, learn how to spot false information online, and understand how these platforms can be used for malicious purposes.

  • Legislation Introduced to Ban TikTok from Government Devices

    U.S. Senators Marco Rubio (R-FL), Josh Hawley (R-MO), and Rick Scott (R-FL) have introduced legislation that would ban all federal employees from using TikTok on government devices. The U.S. State Department, the Department of Homeland Security, the Department of Defense, and TSA have already banned TikTok on federal devices due to cybersecurity concerns and the potential for spying by the Chinese government.

  • Messaging Authoritarianism: China’s Four Messaging Pillars and How ‘Wolf Warrior’ Tactics Undermine Them

    A messaging strategy is only as good as the goal it serves; as Xi Jinping has made clear, China is seeking to make the world safer for its brand of authoritarianism by reshaping the world order. An analysis of messaging from China’s diplomats, state-backed media, and leaders of the Chinese Communist Party (CCP) demonstrates that Beijing repeatedly uses narratives, angles, and comparisons that serve to change perceptions about China’s autocracy and the United States’ democracy—to China’s advantage.

  • After the Islamic State: Social Media and Armed Groups

    The Islamic State is often credited with pioneering the use of social media in conflict, having created a global brand that drew between 20,000 and 40,000 volunteers from at least 85 countries. Social media served as a key recruiting tool, source of fundraising, and platform for disseminating graphic propaganda to a global audience. Laura Courchesne and Brian McQuinn write that the Islamic State perfected tactics and strategies already widely used by hundreds of other armed groups.

  • Capitol Riot Exposed QAnon’s Violent Potential

    Many followers of the QAnon conspiracy theory see themselves as digital warriors battling, from the convenience of their keyboards, an imaginary cabal of Satan-worshipping pedophiles who rule the world. But the January 6 U.S. Capitol riot by supporters of former President Donald Trump exposed the potential for violence in a movement that reared its head on the fringes of the internet in 2018 and now boasts millions of adherents around the world.

  • An AI-Based Counter-Disinformation Framework

    There are different roles that AI can play in counter-disinformation efforts, but the current shortfalls of AI-based counter-disinformation tools must be addressed first. Such an effort faces technical, governance, and regulatory barriers, but there are ways these obstacles could be effectively addressed to allow AI-based solutions to play a bigger role in countering disinformation.

  • Many QAnon Followers Report Having Mental Health Diagnoses

    QAnon followers, who may number in the millions, are often viewed as a group associated with baseless and debunked conspiracy theories, terrorism, and radical action, such as the January 6 Capitol insurrection. But radical extremism and terror may not be the real concern with this group. In research for their forthcoming book, Pastels and Pedophiles: Inside the Mind of QAnon, a social psychologist who studies terrorists and a security scholar noticed that QAnon followers differ from the radicals they usually study in one key way: they are far more likely to have serious mental illnesses.

  • A Dozen Experts with Questions Congress Should Ask the Tech CEOs — On Disinformation and Extremism

    On Thursday, 25 March, two subcommittees of the House Energy & Commerce Committee will hold a joint hearing on “the misinformation and disinformation plaguing online platforms.” Yaël Eisenstat and Justin Hendrix write that Thursday’s hearing will be the first time the tech CEOs face Congress since the January 6 siege on the U.S. Capitol, where groups of individuals, led by Donald Trump to believe the lie that the election was stolen, sought to prevent the certification of the presidential election. “Should social media companies continue their pattern of negligence, governments must use every power – including new legislation, fines and criminal prosecutions – to stop the harms being created,” says one expert. “Lies cost lives.”