Ghosts in the Machine: Malicious Bots Spread COVID Untruths

By Mary Van Beusekom

Published 9 June 2021


Malicious bots, or automated software that simulates human activity on social media platforms, are the primary drivers of COVID-19 misinformation, spreading myths and seeding public health distrust exponentially faster than human users could, suggests a study published yesterday in JAMA Internal Medicine.

A team led by University of California San Diego (UCSD) researchers analyzed a sample of roughly 300,000 posts made to public Facebook groups, measuring how quickly identical links were shared in order to gauge each group’s level of bot influence.

When multiple accounts share the same link within seconds of one another, it is a sign that the accounts are bots controlled by a computer program coordinating their activity. The researchers found that the most heavily bot-influenced Facebook groups shared identical links an average of 4.28 seconds apart, versus 4.35 hours apart in the least-influenced groups.

Groups were classified as heavily influenced if the same link was posted to them at least five times, with at least half of those shares made within 10 seconds of one another.
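To make that classification concrete, here is a minimal sketch in Python, assuming each group’s activity is available as (link, timestamp) pairs. The thresholds come from the article’s description, but the function name, the data shape, and the exact reading of “within 10 seconds” (here: within 10 seconds of the preceding share of the same link) are illustrative assumptions, not the study’s published method.

```python
from collections import defaultdict

# Thresholds as described in the article; the study's exact
# operationalization may differ.
MIN_SHARES = 5            # same link posted at least five times
FAST_GAP_SECONDS = 10.0   # a "fast" share follows the previous one within 10 s

def is_heavily_bot_influenced(posts):
    """Classify one Facebook group.

    posts: iterable of (link_url, unix_timestamp) pairs for the group.
    Returns True if any single link was shared at least MIN_SHARES times
    and at least half of those shares followed the preceding share of the
    same link within FAST_GAP_SECONDS.
    """
    timestamps_by_link = defaultdict(list)
    for link, ts in posts:
        timestamps_by_link[link].append(ts)

    for stamps in timestamps_by_link.values():
        if len(stamps) < MIN_SHARES:
            continue
        stamps.sort()
        # Gaps between consecutive shares of the same link
        gaps = [later - earlier for earlier, later in zip(stamps, stamps[1:])]
        fast = sum(1 for g in gaps if g <= FAST_GAP_SECONDS)
        if fast >= len(stamps) / 2:
            return True
    return False
```

The same per-link timestamp gaps, averaged rather than thresholded, would produce the 4.28-second versus 4.35-hour contrast the researchers report.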

Scientific Journals “Easy Targets”
The researchers focused on posts sharing a link to the Danish Study to Assess Face Masks for the Protection Against COVID-19 Infection (DANMASK-19), a randomized clinical trial published in the Annals of Internal Medicine on Nov 18, 2020, that found no statistically significant protective benefit for mask wearers.

Study coauthor Davey Smith, MD, of UCSD, said in an Elevated Science Communications news release that the team chose the DANMASK-19 study “because masks are an important public health measure to potentially control the pandemic and are a source of popular debate.” The study was the fifth most shared research article of all time as of March 2021, the researchers said.

The team identified 712 posts to 563 Facebook groups that shared a link to the DANMASK-19 study, then downloaded all 299,925 available posts made to those groups in the 5 days after the study’s publication, when media interest is usually greatest. Of all posts sharing a link to the trial, 39% appeared in the Facebook groups most influenced by bots, while only 9% were made to the least-influenced groups.

“Scientific journals are easy targets of automated software,” the authors wrote. “Possible approaches to prevent misinformation due to dissemination of articles by automated software include legislation that penalizes those behind automation; greater enforcement of rules by social media companies to prohibit automation; and counter-campaigns by health experts.”

Twenty percent of posts to the most heavily bot-influenced groups claimed, contrary to scientific evidence, that face coverings harm the people who wear them, with statements such as “Danish study proves…the harmfulness of wearing a mask.” And 51% promoted conspiracy theories such as “corporate fact checkers are lying to you! All this to serve their Dystopian #Agenda2030 propaganda,” while 44% made neither claim (some posts made both, so the categories overlap).

Among posts to groups least heavily influenced by bots, however, only 9% claimed that masks harm the wearer, 20% promoted conspiracy theories about the trial, and 73% made neither claim.

DANMASK-19 posts made to the most heavily bot-influenced Facebook groups were 2.3 times more likely to state that face coverings harm the wearer and 2.5 times more likely to promote conspiracy theories than posts to the least heavily influenced groups.

Onus Is on Social Media Platforms
Lead study author John Ayers, PhD, of UCSD, said in the release that the COVID-19 pandemic spawned what the World Health Organization termed an “infodemic” of bad coronavirus information. “Bots—like those used by Russian agents during the 2016 American presidential election—have been overlooked as a source of COVID-19 misinformation,” he said.

In fact, malicious bots make up at least a quarter of all internet traffic, according to the 2021 Bad Bot Report by the security company Imperva.

Coauthor Brian Chu of the University of Pennsylvania in Philadelphia noted that if bots can misrepresent a prominent study in a prestigious medical journal to transmit bad information, “no content is safe from the dangers of weaponized misinformation.”

The amount of propaganda spread by bots suggests that their influence goes far beyond that identified in the study, Smith added. “Could bots be fostering vaccine hesitancy or amplifying anti-Asian discrimination, too?” he asked.

Coauthor David Broniatowski, PhD, of The George Washington University, said that the misinformation spread by bots can cascade. “For example, bots may make platforms’ algorithms think that automated content is more popular than it actually is, which can then lead to platforms actually prioritizing deceptive information and disseminating it to an even larger audience,” he said.

But Broniatowski also pointed out that social media platforms have the power to identify and remove the offending bots. “Efforts to purge deceptive bots from social media platforms must become a priority among legislators, regulators, and social media companies who have instead been focused on targeting individual pieces of misinformation from ordinary users,” he said.

“Unlike controversial strategies to censor actual people, silencing automated propaganda is something everyone can and should support.”

Mary Van Beusekom is an editorial consultant and content manager at CIDRAP. This article is published courtesy of the University of Minnesota’s Center for Infectious Disease Research and Policy (CIDRAP).