  • Why Do the Russian and Chinese Governments Want Americans to Dislike Immigrants?

    The Internet Research Agency (IRA), the Kremlin’s propaganda and disinformation arm, employs fake social media accounts, media properties, memes, and bots to conduct what the Russians call “active measures” campaigns to influence U.S. public opinion. The IRA’s goal is to intensify political opinions on every issue, and one of its prime aims is to deepen nativist sentiments among Americans.

  • Is There Such a Thing as a Safe Algorithm? Talk of Regulation Gathers Momentum

    There is now wide agreement among experts and politicians that regulatory changes are needed to protect users, particularly young children and girls, who are vulnerable to mental health problems and body image issues tied to social media platforms’ algorithms.

  • More Violent Pro-ISIS, Extreme Right Content on Facebook & Instagram

    The Counter Extremism Project (CEP), which monitors the methods used by extremists to exploit the Internet and social media platforms to recruit followers and incite violence, reports that violent Islamist and extreme right content continues to be available on social media platforms.

  • Only Playing: Extreme-Right Gamification

    Extremist ideas inspire violence in a few, but for the many, participation increasingly resembles a consequence-free game separate from reality. “As technologies develop further, either in the form of Facebook’s metaverse or other forms of mixed or augmented reality that blur the line between online and offline, the potential disconnect between play and real-world violence is only going to grow more acute,” experts say.

  • Nearing the Tipping Point Needed to Reform Facebook, Other Social Media?

    The recent series of five articles from the Wall Street Journal exposed Facebook’s complicity in spreading toxic content. Yet, social media platforms continue to enjoy free rein despite playing what many consider to be an outsized and destabilizing role in delivering content to billions of individuals worldwide. No one said reining in social media was going to be easy. But the harm caused by social media is simply too great for us to fail.

  • Video Fake News: Believed More, Shared More Than Text, Audio Versions

    People are more likely to believe fake news in a video format compared to text and audio forms of the same story. People are also more willing to share these videos with people in their network.

  • Fact-Checking Works Across the Globe

    Researchers found that fact-checking aimed at reducing false beliefs worked with little variation in Argentina, Nigeria, South Africa, and the U.K., and the positive effects were still detectable two weeks later.

  • Model Predicts COVID-19 Outbreak Two Weeks Ahead of Time

    People’s social behavior, reflected in their mobility data, is providing scientists with a way to forecast the spread of COVID-19 nationwide at the county level. The data-driven deep learning model that FAU researchers developed has important implications for managing the current pandemic as well as future pandemics.

  • Social Media Platforms Do Little to Limit Online Anti-Semitic Content

    A new report shows how social media companies fail to act on anti-Jewish hate on their platforms. As a result of their failure to enforce their own rules, social media platforms like Facebook have become safe places to spread racism and propaganda against Jews.

  • New Book Helps Readers Spot Online Health Scams

    UBC’s Dr. Bernie Garrett, the author of a new book on health scams, misinformation, and disinformation, says that “Scam marketers are well-versed in modern advertising techniques and the psychology of persuasion. They know all the triggers that can help sell a product.” He adds that, during the COVID period, such scams “definitely have proliferated, and this has been aided by social media… Unfortunately, people can post misinformation on social media with no real consequences.”

  • Combating Foreign Disinformation on Social Media

    How are other nations using disinformation on social media to advance their interests? What is the U.S. response to these campaigns, and how has it evolved? What does the Joint Force—and the U.S. Air Force in particular—need to be prepared to do in response?

  • On the Internet, Nobody Knows You’re a Dog – or a Fake Russian Twitter Account

    Legacy media outlets played an unwitting role in the growth of the four most successful fake Twitter accounts hosted by the Russian Internet Research Agency (IRA), which were created to spread disinformation during the 2016 U.S. presidential campaign.

  • The Storywrangler: Exploring Social Media Messages for Signs of Coming Turmoil

    Scientists have invented an instrument to peer deeply into the billions and billions of posts made on Twitter since 2008, and have begun to uncover the vast galaxy of stories they contain, looking for patterns that could help predict political and financial turmoil.

  • Surgeon General Urges ‘Whole-of-Society’ Effort to Fight Health Misinformation

    “Misinformation is worse than an epidemic: It spreads at the speed of light throughout the globe, and can prove deadly when it reinforces misplaced personal bias against all trustworthy evidence,” said National Academy of Sciences President Marcia McNutt. “Research is helping us combat this ‘misinfodemic’ through understanding its origins and the aspects of human nature that make it so transmittable.”

  • Holding the Line: Chinese Cyber Influence Campaigns After the Pandemic

    While the American public became more aware of Chinese cyber influence campaigns during the 2020 COVID-19 outbreak, they did not start there – and they will not end there, either. Maggie Baughman writes that as the world’s attention returns to the origins of the global pandemic and recommits to its containment, the United States must prepare for inevitable shifts in Chinese methods and goals in its cyber influence activities – “likely beyond what Western countries have previously experienced in dealing with China.”