Russian-operated bots posted millions of social media posts and fake stories during Brexit referendum

The Times reports that the researchers found that the Russian propaganda and disinformation campaign employed automated software agents, or “bots,” to spread pro-“Leave” (that is, leave the EU) social media stories during and after the Brexit referendum. The Russian operatives also targeted a few pro-“Remain” social media messages to specific constituencies in order to push the two sides of the debate further apart.

Importantly, the researchers found that human Twitter users (the British citizens who received the Russian bots’ disinformation) were more likely to retweet pro-Leave bot content than pro-Remain content, thus amplifying the potential impact of the pro-Leave messages.

The research paper says:

    On Referendum day, there are signs that bots attempted to spread more leave messages with positive sentiment, as the number of leave tweets with positive sentiment increased dramatically on that day.

    More specifically, for every 100 bots’ tweets that were retweeted, about 80-90 retweets were made by humans. Furthermore, before Referendum day, among those human retweets of bots’ tweets, pro-leave tweets accounted for about 50 percent of retweets while only about 20 percent of retweets had pro-remain content. In other words, there are signs that during the pre-event period, humans tended to spread the leave messages that were originally generated by bots. A similar trend is observed for the US Election sample. Before Election Day, about 80 percent of retweets were in favor of Trump while only 20 percent of retweets supported Clinton.
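The shares quoted above amount to a simple count over labelled retweet records. The sketch below illustrates the arithmetic only: the `bot`/`human` account labels, the stance labels, and the toy rows are all invented for illustration, not the researchers’ actual dataset or classification method.

```python
from collections import Counter

# Illustrative toy records: (retweeter_type, original_author_type, stance).
# A real study would first classify accounts as bot or human and label each
# tweet's stance; these rows are invented for the sketch.
retweets = [
    ("human", "bot", "leave"),
    ("human", "bot", "leave"),
    ("human", "bot", "remain"),
    ("human", "human", "leave"),
    ("bot", "bot", "remain"),
]

# Keep only human retweets of bot-authored tweets, then count by stance.
human_of_bot = [stance for rt_by, orig_by, stance in retweets
                if rt_by == "human" and orig_by == "bot"]
counts = Counter(human_of_bot)
shares = {s: c / len(human_of_bot) for s, c in counts.items()}
print(shares)
```

On the toy rows, two of the three human retweets of bot content are pro-leave, giving a leave share of about 67 percent; with the paper’s real data the same tally yields the roughly 50-versus-20-percent split reported above.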

Research by Professor Sasha Talavera and Ph.D. student Tho Pham from the School of Management, in collaboration with Yuriy Gorodnichenko, Associate Professor at University of California, Berkeley, focused on information diffusion on Twitter in the run-up to the EU Referendum.

Professor Talavera said: “With the development of technology, social media sites like Twitter and Facebook are often used as tools to express and spread feelings and opinions. During high-impact events like Brexit, public engagement through social media platforms quickly becomes overwhelming. However, not all social media users are real. Some, if not many, are actually automated agents, so-called bots. And more often, real users, or humans, are deceived by bots.”

Twitter analysis
Swansea says that the researchers, using a sample of 28.6 million #Brexit-related tweets collected from 24 May 2016 to 17 August 2016, observed the presence of Twitter bots that accounted for approximately 20 percent of total users in the sample. Given the preponderance of retweets from bots by humans, a key question is whether human users’ opinions about Brexit were manipulated by bots.

Empirical analysis shows that information about Brexit is spread quickly among users. Most of the reaction happened within ten minutes, suggesting that for issues critically important to people or issues widely covered in the media, informational rigidity is very small. Beyond information spread, an important finding is that bots seem to affect humans.
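One way to quantify “most of the reaction happened within ten minutes” is to measure, for each retweet, its lag from the original tweet and compute the share falling inside a ten-minute window. This is a minimal sketch with invented timestamps; the pairing of reactions to originals is assumed, not taken from the paper’s data.

```python
from datetime import datetime, timedelta

# Invented (original_time, reaction_time) pairs for illustration; the
# study's dataset would supply real tweet and retweet timestamps.
pairs = [
    (datetime(2016, 6, 23, 9, 0), datetime(2016, 6, 23, 9, 4)),
    (datetime(2016, 6, 23, 9, 0), datetime(2016, 6, 23, 9, 8)),
    (datetime(2016, 6, 23, 9, 0), datetime(2016, 6, 23, 10, 30)),
    (datetime(2016, 6, 23, 12, 0), datetime(2016, 6, 23, 12, 3)),
]

# Count reactions that arrive within ten minutes of the original tweet.
window = timedelta(minutes=10)
fast = sum(1 for orig, reaction in pairs if reaction - orig <= window)
share_fast = fast / len(pairs)
print(f"{share_fast:.0%} of reactions arrived within ten minutes")
```

A high value of `share_fast` on real data would indicate the small informational rigidity the researchers describe.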

However, the degree of influence depends on whether a bot provides information consistent with that provided by a human. More specifically, a bot supporting leaving the EU has a stronger effect on a “leaver” human than a “remain” human.

“Echo chamber”
Further investigation shows that “leavers” were more likely to be influenced by bots than “remainers.” These results are consistent with what is frequently referred to as an echo chamber: a situation in which information, ideas, or beliefs are amplified or reinforced by communication and repetition inside a defined system. The outcome is that information becomes more fragmented rather than uniform across people.

From the paper:

These results lend support to the echo chambers view that Twitter creates networks of individuals sharing similar political beliefs. As a result, they tend to interact with others from the same communities and thus their beliefs are reinforced. By contrast, information from outsiders is more likely to be ignored. This, coupled with the aggressive use of Twitter bots during high-impact events, makes it likely that bots are used to provide humans with information that closely matches their political views. Consequently, ideological polarization in social media like Twitter is enhanced. More interestingly, we observe that the influence of pro-leave bots is stronger than the influence of pro-remain bots. Similarly, pro-Trump bots are more influential than pro-Clinton bots. Thus, to some degree, the use of social bots might have driven the outcomes of Brexit and the US Election.

In summary, social media could indeed affect public opinion in new ways. Specifically, social bots could spread and amplify misinformation, thus influencing what humans think about a given issue. Moreover, social media users are more likely to believe (or even embrace) fake news or unreliable information that is in line with their opinions. At the same time, these users distance themselves from reliable information sources reporting news that contradicts their beliefs. As a result, information polarization is increased, which makes reaching consensus on important public issues more difficult.

The researchers highlight some of the positive contributions social media have made, but also warn of the risks of “lies and manipulations” being dumped onto these platforms in a deliberate attempt to misinform the public and skew opinions and democratic outcomes — suggesting regulation to prevent abuse of bots may be necessary.

They conclude:

    Recent political events (the Brexit Referendum and the US Presidential Election) have seen the use of social bots in spreading fake news and misinformation. This, coupled with the echo-chamber nature of social media, might lead to bots shaping public opinion in negative ways. If so, policy-makers should consider mechanisms to prevent abuse of bots in the future.

Professor Talavera said: “Social bots spread and amplify misinformation, thus influencing what humans think about a given issue. Moreover, social media users are more likely to believe (or even embrace) fake news that is in line with their opinions. At the same time, these users distance themselves from reliable information sources reporting news that contradicts their beliefs. As a result, information polarization is increased, which makes reaching consensus on important public issues more difficult.”

“It is now vital that policy makers and social media platforms seriously consider mechanisms to discourage the use of social bots to manipulate public opinion.”

Twitter, Facebook, and Google told Congress they are now revising their policies and taking steps to prevent future exploitation of their platforms by Russia. TechCrunch is not overly impressed:

And while it’s great that tech platforms finally appear to be waking up to the disinformation problem their technology has been enabling, in the case of these two major political events — Brexit and the 2016 US election — any action they have since taken to try to mitigate bot-fueled disinformation obviously comes too late.

Citizens in the U.S. and the U.K., meanwhile, are left to live with the results of votes that appear to have been directly influenced by Russian agents using U.S. tech tools.

— See the research video here