Disinformation and fake news on Twitter

Just a few fake and conspiracy outlets dominated during the election—and nearly all of them continue to dominate today.
Sixty-five percent of fake and conspiracy news links during the election period went to just the 10 largest sites, a statistic unchanged six months later. The top 50 fake news sites received 89 percent of links during the election and (coincidentally) 89 percent in the 30-day period five months later. Critically, and contrary to some previous reports, these top fake and conspiracy news outlets on Twitter are largely stable: nine of the top 10 fake news sites during the month before the election were still in or near the top 10 six months later.

Our methods find much more fake and conspiracy news activity on Twitter than several recent high-profile studies—though fake news still receives significantly fewer links than mainstream media sources.
Our study finds much more fake news activity than several recent studies, largely because it examines a larger corpus of fake and conspiracy news sites. Fake and conspiracy news sites received about 13 percent as many Twitter links as a comparison set of national news outlets did, and 37 percent as many as a set of regional newspapers.

Most accounts spreading fake or conspiracy news in our maps are estimated to be bots or semi-automated accounts.
Machine learning models estimate that 33 percent of the 100 most-followed accounts in our postelection map, and 63 percent of a random sample of all accounts in it, are “bots,” or automated accounts. Because roughly 15 percent of accounts in the postelection map have since been suspended, and suspended accounts are especially likely to have been automated, the true proportion of automated accounts may have exceeded 70 percent.
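As an illustration (not the report's actual estimation method), combining the two figures above gives a rough upper bound: if every suspended account had been a bot and 63 percent of the surviving accounts are bots, the overall automated share would be roughly 69 percent; slightly higher bot prevalence among the suspended, highest-volume accounts would push the figure past 70 percent. A minimal sketch of that arithmetic:

```python
# Back-of-envelope bound on the overall bot share, assuming (hypothetically)
# that every suspended account was automated. Figures come from the text above.
bot_share_active = 0.63   # estimated bot share among still-classifiable accounts
suspended_share = 0.15    # share of map accounts since suspended

# Suspended accounts can no longer be classified; counting all of them as
# bots yields an upper-bound estimate for the whole map.
upper_bound = suspended_share + bot_share_active * (1 - suspended_share)
print(f"{upper_bound:.1%}")  # roughly 69%, approaching the 70% figure
```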

Our maps show that accounts that spread fake news are extremely densely connected.
In both the election-eve and postelection maps, our methods identify an ultra-dense core of heavily followed accounts that repeatedly link to fake or conspiracy news sites. Accounts in the core are typically not the highest-volume tweeters of fake news. However, the popularity of these accounts, and heavy co-followership among top accounts, means that fake news stories that reach the core (or start there) are likely to spread widely. The pre-election fake news network is one of the densest Graphika has ever analyzed, necessitating unusual map-drawing procedures.
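To make “ultra-dense” concrete: network density is the fraction of possible ties between accounts that actually exist. A stdlib-only sketch, using a hypothetical toy follower graph rather than the report's data, comparing a fully connected core against the map as a whole:

```python
from itertools import combinations

def density(nodes, edges):
    """Fraction of possible undirected ties among `nodes` that are present."""
    n = len(nodes)
    possible = n * (n - 1) // 2
    present = sum(1 for a, b in edges if a in nodes and b in nodes)
    return present / possible if possible else 0.0

# Hypothetical toy graph: accounts a-d form a fully connected core;
# e and f hang off the periphery with one tie each.
core = {"a", "b", "c", "d"}
periphery = {"e", "f"}
edges = list(combinations(sorted(core), 2)) + [("a", "e"), ("b", "f")]

print(density(core, edges))              # 1.0: every core pair is connected
print(density(core | periphery, edges))  # 8 of 15 possible ties overall
```

A core with density near 1.0 is what makes stories that reach it likely to cascade: nearly every core account sees what the others share.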

Fake news during the election did not adopt only conservative or Republican-leaning frames, though it has become more ostensibly Republican since.
While a large majority of fake news came from ostensibly pro-Republican and pro-Donald Trump accounts in the month before the election, smaller but still substantial amounts were passed on by liberal or Democratic-identified accounts. After the election, though, left-leaning fake news declined much more sharply than right-leaning fake news.

The structural role of Russian-aligned clusters of accounts changed postelection.
In the pre-election map, clusters of accounts affiliated with Russia play a brokerage role, serving as a cultural and political bridge between liberal U.S. accounts and European far-right accounts. Postelection, however, accounts in the Russia cluster have become more peripheral, while the International Conspiracy | Activist cluster (which similarly spreads pro-Russia content) is distributed broadly through the map. This structure suggests that international conspiracy-focused accounts have become more important brokers of fake news postelection.

Most of the accounts that linked repeatedly to fake and conspiracy news during the election are still active.
Twitter has claimed repeatedly that it has cracked down on automated accounts that spread fake news and engage in “spammy behavior.” Yet of the 100 accounts that were most active in spreading fake news in the months before the election—the large majority clearly engaged in “spammy behavior” that violates Twitter’s rules—more than 90 were still active as of spring 2018. Overall, 89 percent of accounts in our fake and conspiracy news map remained active as of mid-April 2018. The persistence of so many easily identified abusive accounts is difficult to square with any effective crackdown.

A few dozen accounts controlled by Russia’s Internet Research Agency appear in our maps—but hundreds of other accounts were likely more important in spreading fake news.
Of the more than 2,700 IRA accounts named publicly as of this writing, 65 are included in at least one of our maps. The IRA accounts in our maps include several that were widely quoted in U.S. media, such as @WarfareWW, @TEN_GOP and @Jenn_Abrams. Most of the publicly known IRA accounts are filtered from our maps because they have relatively few followers and little measurable influence. Plenty of other accounts, though, do tweet in lockstep with the Kremlin’s message, including hundreds with more followers than the top IRA trolls.

There is evidence of coordinated campaigns to push fake news stories and other types of disinformation.
Most news stories on Twitter follow a statistically regular pattern: The rate of new links ramps up quickly (but not instantly), peaks within an hour or two, and then decays exponentially. Many fake news stories, however, do not follow this nearly universal pattern. Organized blocks of accounts appear to coordinate to extend the life cycle of selected news stories and hashtags. Segments of our maps associated with Russian propaganda are key participants in these campaigns, and many of these efforts align strongly with Russian goals and interests.
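The life cycle described above can be modeled, purely illustratively, as exponential decay after the peak: a story whose link rate stays flat long past its peak is a candidate for coordinated amplification. A hedged sketch with synthetic hourly counts (a hypothetical heuristic, not the report's detection method):

```python
import math

def flags_sustained(hourly_links, half_life_hours=2.0, tolerance=3.0):
    """Flag a story whose post-peak link rate decays far more slowly than
    an exponential with the given half-life (illustrative heuristic)."""
    peak_hour = hourly_links.index(max(hourly_links))
    peak = hourly_links[peak_hour]
    decay = math.log(2) / half_life_hours
    for hours_after, observed in enumerate(hourly_links[peak_hour:]):
        expected = peak * math.exp(-decay * hours_after)
        if observed > tolerance * expected:
            return True   # rate is far above the expected decay curve
    return False

organic = [50, 400, 900, 450, 230, 110, 60, 25]    # peaks, then halves ~every 2h
boosted = [50, 400, 900, 850, 800, 780, 760, 750]  # stays artificially alive

print(flags_sustained(organic))   # False
print(flags_sustained(boosted))   # True
```

The tolerance factor keeps ordinary statistical noise from triggering the flag; only sustained, well-above-curve activity does.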

Coordinated campaigns seem to opportunistically amplify content they did not create.
Public discussion has often vacillated between portraying fake news as an organic, small-scale phenomenon driven by ad dollars, and characterizing it as the product of massive coordinated efforts by state actors. Our data tell a more complicated story, in which some narratives are carefully crafted, but others are amplified because they fit with the agenda of those running these campaigns. This is the information warfare equivalent of giving air cover to a rebel group, using outside technology and resources to augment otherwise-weak organic efforts.

One case study suggests that concerted action against noncredible outlets can drastically reduce their audience.
The Real Strategy was referenced by more than 700,000 tweets in our election sample, making it the second-most-linked fake or conspiracy news outlet overall. After it was tied to a large-scale harassment campaign and the “Pizzagate” falsehood, though, The Real Strategy’s Twitter account was deleted, the site was blacklisted on online forums such as Reddit, and a network of supportive bot accounts was partially disrupted. The postelection sample contained only 1,534 tweets linking to The Real Strategy, a drop of 99.8 percent. This example suggests that aggressive action against fake news outlets can effectively contain their spread.
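As a sanity check, the decline can be computed directly from the two counts in the text (the election-period figure is stated as “more than” 700,000, so this is a lower bound on the drop):

```python
# Counts from the text: >700,000 election-sample tweets vs. 1,534 postelection.
before, after = 700_000, 1_534
drop = 1 - after / before
print(f"{drop:.1%}")  # 99.8%
```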