• On the Internet, Nobody Knows You’re a Dog – or a Fake Russian Twitter Account

    Legacy media outlets played an unwitting role in the growth of the four most successful fake Twitter accounts created by the Russian Internet Research Agency (IRA) to spread disinformation during the 2016 U.S. presidential campaign.

  • The Storywrangler: Exploring Social Media Messages for Signs of Coming Turmoil

    Scientists have invented an instrument to peer deeply into the billions of posts made on Twitter since 2008 and have begun to uncover the vast galaxy of stories they contain, looking for patterns that could help predict political and financial turmoil.

  • Surgeon General Urges ‘Whole-of-Society’ Effort to Fight Health Misinformation

    By Molly Galvin

    “Misinformation is worse than an epidemic: It spreads at the speed of light throughout the globe, and can prove deadly when it reinforces misplaced personal bias against all trustworthy evidence,” said National Academy of Sciences President Marcia McNutt. “Research is helping us combat this ‘misinfodemic’ through understanding its origins and the aspects of human nature that make it so transmittable.”

  • Holding the Line: Chinese Cyber Influence Campaigns After the Pandemic

    While the American public became more aware of Chinese cyber influence campaigns during the 2020 COVID-19 outbreak, they did not start there – and they will not end there, either. Maggie Baughman writes that as the world’s attention returns to the origins of the global pandemic and recommits to its containment, the United States must prepare for inevitable shifts in the methods and goals of Chinese cyber influence activities – “likely beyond what Western countries have previously experienced in dealing with China.”

  • Social Media Use One of Four Factors Related to Higher COVID-19 Spread Rates Early On

    Researchers showed that, in the early stages of the pandemic, there was a correlation between social media use and a higher rate of COVID-19 spread. Comparing 58 countries, the researchers found that higher social media use was among four factors driving a faster and broader spread. Accounting for pre-existing, intrinsic differences among countries and regions would help facilitate better management strategies going forward.

  • Developing Research Model to Fight Deepfakes

    Detecting “deepfakes” – images or videos in which a person’s likeness has been manipulated and replaced with someone else’s – presents a massive cybersecurity challenge: What could happen when deepfakes are created with malicious intent? Artificial intelligence experts are working on a new reverse-engineering research method to detect and attribute deepfakes.

  • China's Internet Trolls Go Global

    By Ryan Fedasiuk

    Chinese trolls are beginning to pose serious threats to economic security, political stability, and personal safety worldwide. The CCP-backed trolls have become more than a nuisance, and the magnitude and frequency of their attacks will likely continue to increase. Formulating an effective response will require understanding their size, tactics, and mission as the CCP widens the scope of its public opinion war to include foreign audiences.

  • Ghosts in the Machine: Malicious Bots Spread COVID Untruths

    By Mary Van Beusekom

    Malicious bots, or automated software that simulates human activity on social media platforms, are the primary drivers of COVID-19 misinformation, spreading myths and seeding public health distrust exponentially faster than human users could, suggests a new study.

  • Overconfidence in Identifying False News Makes One More Susceptible to It

    A new study finds that individuals who falsely believe they are able to identify false news are more likely to fall victim to it. “Though Americans believe confusion caused by false news is extensive, relatively few indicate having seen or shared it,” said one researcher. “If people incorrectly see themselves as highly skilled at identifying false news, they may unwittingly be more likely to consume, believe and share it, especially if it conforms to their worldview.”

  • Antisemitism on TikTok

    Over the last few years, TikTok—the social media app that allows users to create and share short videos—has gained immense popularity. While much of the content on TikTok is lighthearted and fun, extremists have exploited the platform to share hateful content and recruit new adherents.

  • Evil Eye Gazes Beyond China’s Borders: Troubling Trends in Chinese Cyber Campaigns

    By Eli Clemens

    On March 24, 2021, Facebook announced that it had taken action against an advanced persistent threat (APT) group located in China, previously known as Evil Eye. Evil Eye’s campaign was clearly motivated by a political goal that China frequently pursues through a blend of information operations (IO) and cyber means: the disruption of dissidents, especially those who raise awareness of China’s human rights violations against its ethnic minorities.

  • The Case for a “Disinformation CERN”

    By Anastasia Kapetas

    Democracies around the world are struggling with various forms of disinformation afflictions. But the current suite of policy prescriptions will fail because governments simply don’t know enough about the emerging digital information environment.

  • On Christchurch Call Anniversary, a Step Closer to Eradicating Terrorism Online?

    Is it possible to eradicate terrorism and violent extremism from the internet? To prevent videos and livestreams of terrorist attacks from going viral, and maybe even prevent them from being shared or uploaded in the first place? Courtney C. Radsch writes that the governments and tech companies involved in the Christchurch Call are grappling with a difficult issue: “The big question is whether the twin imperatives of eradicating TVEC while protecting the internet’s openness and freedom of expression are compatible.”

  • Does Correcting Online Falsehoods Make Matters Worse?

    By Peter Dizikes

    So, you thought the problem of false information on social media could not be any worse? Well, there is evidence it can. A new study shows Twitter users post even more misinformation after other users correct them.

  • Just 12 People Are Behind Most Vaccine Hoaxes on Social Media

    Researchers have found that just twelve individuals are responsible for the bulk of the misleading claims and outright lies about COVID-19 vaccines that proliferate on Facebook, Instagram, and Twitter. Many of the messages about COVID-19 vaccines being widely spread online echo lies that peddlers of health misinformation have long spread about other vaccines, such as those against measles, mumps, and rubella.