The Russia connection
Twitter bots played disproportionate role spreading misinformation during 2016 election
An analysis of information shared on Twitter during the 2016 U.S. presidential election has found that automated accounts — or “bots” — played a disproportionate role in spreading misinformation online.
The study, conducted by Indiana University researchers and published in Nature Communications, analyzed 14 million messages and 400,000 articles shared on Twitter between May 2016 and March 2017 — a period that spans the end of the 2016 presidential primaries and the presidential inauguration on Jan. 20, 2017.
Among the findings: A mere 6 percent of Twitter accounts that the study identified as bots were enough to spread 31 percent of the “low-credibility” information on the network. These accounts were also responsible for 34 percent of all articles shared from “low-credibility” sources.
The study also found that bots played a major role in promoting low-credibility content during the first few moments before a story went viral.
The brevity of this window — two to 10 seconds — highlights the challenges of countering the spread of misinformation online. Similar issues arise in other complex environments such as the stock market, where high-frequency trading can cause serious problems in mere moments.
“This study finds that bots significantly contribute to the spread of misinformation online — as well as shows how quickly these messages can spread,” said Filippo Menczer, a professor in the IU School of Informatics, Computing and Engineering, who led the study.
The analysis also revealed that bots amplify a message’s volume and visibility until it’s more likely to be shared broadly, despite only representing a small fraction of the accounts that spread viral messages.
“People tend to put greater trust in messages that appear to originate from many people,” said co-author Giovanni Luca Ciampaglia, an assistant research scientist with the IU Network Science Institute at the time of the study. “Bots prey upon this trust by making messages seem so popular that real people are tricked into spreading their messages for them.”
Information sources labeled as low-credibility in the study were identified based upon their appearance on lists