Twitter bots played disproportionate role spreading misinformation during 2016 election

The study identified low-credibility sources using lists of outlets that regularly share false or misleading information, compiled by independent third-party organizations. These sources, such as websites with misleading names like “USAToday.com.co,” include outlets with both right- and left-leaning points of view.

The researchers also identified other tactics for spreading misinformation with Twitter bots. These included amplifying a single tweet — potentially controlled by a human operator — across hundreds of automated retweets; repeating links in recurring posts; and targeting highly influential accounts.

For instance, the study cites a case in which a single account mentioned @realDonaldTrump in 19 separate messages about millions of illegal immigrants casting votes in the presidential election — a false claim that was also a major administration talking point.

The researchers also ran an experiment inside a simulated version of Twitter and found that deleting 10 percent of the accounts in the system, selected by their likelihood of being bots, resulted in a major drop in the number of stories from low-credibility sources circulating in the network.

“This experiment suggests that the elimination of bots from social networks would significantly reduce the amount of misinformation on these networks,” Menczer said.
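The article does not reproduce the researchers’ simulation, but the logic of such a removal experiment can be sketched in a few lines of Python. Everything in the snippet below is a stand-in: the toy follower network, the random bot scores, and the assumption that bot accounts reshare low-credibility stories far more often than humans.

```python
# A minimal sketch, not the authors' actual model: a toy follower network in
# which "bot" accounts reshare low-credibility stories far more aggressively
# than humans. We measure how far one story spreads, then repeat after removing
# the 10 percent of accounts with the highest bot scores.
import random

random.seed(7)

N = 1000               # toy network size (assumption)
REMOVE_FRACTION = 0.10 # fraction of accounts deleted, as in the study
HUMAN_RESHARE_P = 0.03 # hypothetical reshare probabilities
BOT_RESHARE_P = 0.60

# followers[i] = accounts that follow account i and therefore see its posts
followers = {i: random.sample(range(N), 20) for i in range(N)}
bot_score = {i: random.random() for i in range(N)}  # stand-in for a bot classifier
is_bot = {i: bot_score[i] > 1 - REMOVE_FRACTION for i in range(N)}

def spread_size(active):
    """Number of active accounts that end up sharing one low-credibility story."""
    seeds = random.sample(sorted(active), 5)
    shared, frontier = set(seeds), list(seeds)
    while frontier:
        poster = frontier.pop()
        for f in followers[poster]:
            if f in active and f not in shared:
                p = BOT_RESHARE_P if is_bot[f] else HUMAN_RESHARE_P
                if random.random() < p:
                    shared.add(f)
                    frontier.append(f)
    return len(shared)

everyone = set(range(N))
baseline = spread_size(everyone)

# Delete the accounts most likely to be bots and rerun the same cascade.
cutoff = int(N * REMOVE_FRACTION)
likely_bots = set(sorted(everyone, key=bot_score.get, reverse=True)[:cutoff])
after = spread_size(everyone - likely_bots)

print(f"story shares, full network: {baseline}")
print(f"story shares, {cutoff} likely bots removed: {after}")
```

Removing the highest-scoring tenth of accounts before rerunning the cascade mirrors the 10 percent deletion performed in the researchers’ simulated network.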

The study also suggests steps that companies could take to slow the spread of misinformation on their networks. These include improving algorithms that automatically detect bots and requiring a “human in the loop” to reduce the volume of automated messages in the system. For example, users might be required to complete a CAPTCHA before sending a message.
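As an illustration of what such a gate could look like, the sketch below holds a post and asks for a CAPTCHA once an account’s posting rate within a short window exceeds a plausible human pace; the limit, the window, and the function itself are hypothetical, not any platform’s real API.

```python
# A minimal sketch, not any platform's actual implementation, of the kind of
# "human in the loop" gate described above: hold a post and require a CAPTCHA
# whenever an account's recent posting rate looks automated. The threshold and
# time window are hypothetical values chosen for illustration.
from collections import deque

POST_LIMIT = 10        # hypothetical maximum "human-like" posts per window
WINDOW_SECONDS = 60.0  # hypothetical sliding window

recent_posts = {}      # account id -> deque of recent post timestamps

def requires_captcha(account_id, now):
    """Return True if this post should be held until a CAPTCHA is solved."""
    timestamps = recent_posts.setdefault(account_id, deque())
    # Discard timestamps that have aged out of the sliding window.
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()
    timestamps.append(now)
    return len(timestamps) > POST_LIMIT

# Example: the 11th post within one minute triggers the challenge.
for i in range(12):
    held = requires_captcha("suspect_account", now=1000.0 + i)
    print(f"post {i + 1}: {'CAPTCHA required' if held else 'accepted'}")
```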

Although their analysis focused on Twitter, the study’s authors added that other social networks are also vulnerable to manipulation. For example, platforms such as Snapchat and WhatsApp may struggle to control misinformation because their use of encryption and self-destructing messages makes it difficult to study how their users share information.

“As people across the globe increasingly turn to social networks as their primary source of news and information, the fight against misinformation requires a grounded assessment of the relative impact of the different ways in which it spreads,” Menczer said. “This work confirms that bots play a role in the problem — and suggests their reduction might improve the situation.”

To explore election messages currently shared on Twitter, Menczer’s research group has also recently launched a tool to measure “Bot Electioneering Volume.” Created by IU Ph.D. students, the program displays the level of bot activity around specific election-related conversations, as well as the topics, user names and hashtags they’re currently pushing.
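The article does not explain how the tool computes its volume figure, but one plausible reading is a bot-weighted count of election-related posts. The sketch below aggregates a score of that kind per hashtag; the function, the weighting, and the sample data are all assumptions made for illustration.

```python
# A speculative sketch of how a "Bot Electioneering Volume"-style score might
# be aggregated; the article does not describe the tool's actual method, so the
# weighting below is an assumption. Each tweet adds its author's bot-likelihood
# score to every election hashtag it mentions, so hashtags pushed mainly by
# likely bots rise to the top of the ranking.
from collections import Counter

def bot_volume_by_hashtag(tweets, bot_scores):
    """tweets: iterable of (author, hashtags); bot_scores: author -> score in [0, 1]."""
    volume = Counter()
    for author, hashtags in tweets:
        weight = bot_scores.get(author, 0.0)
        for tag in hashtags:
            volume[tag] += weight
    return volume.most_common()

# Tiny made-up example.
tweets = [
    ("acct_a", ["#Election2018", "#VoteNow"]),
    ("acct_b", ["#Election2018"]),
    ("acct_c", ["#VoteNow"]),
]
bot_scores = {"acct_a": 0.95, "acct_b": 0.10, "acct_c": 0.85}
print(bot_volume_by_hashtag(tweets, bot_scores))
```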

Additional authors on the study are Alessandro Flammini, a professor in the IU School of Informatics, Computing and Engineering; Kai-Cheng “Kevin” Yang, an IU Ph.D. student; Chengcheng Shao of the National University of Defense Technology in China, who was a visiting professor at IU at the time of the study; and Onur Varol of Northeastern University, who was a Ph.D. student at IU at the time of the study. Ciampaglia is now an assistant professor at the University of South Florida.

This work was supported in part by the National Science Foundation, the James S. McDonnell Foundation and the Democracy Fund.

— Read more in Chengcheng Shao et al., “The spread of low-credibility content by social bots,” Nature Communications 9, Article number: 4787 (2018) (DOI: 10.1038/s41467-018-06930-7)