Does Correcting Online Falsehoods Make Matters Worse?

The research team then created a series of Twitter bot accounts, all of which had existed for at least three months, had gained at least 1,000 followers, and appeared to be genuine human accounts. Upon finding any of the 11 false claims being tweeted out, the bots would send a reply message along the lines of, “I’m uncertain about this article — it might not be true. I found a link on Snopes that says this headline is false.” That reply also linked to the correct information.

Among other findings, the researchers observed that the accuracy of the news sources the Twitter users retweeted declined by roughly 1 percent in the 24 hours after being corrected. Similarly, evaluating over 7,000 retweets with links to political content made by those accounts in the same 24-hour window, the scholars found an increase of over 1 percent in the partisan lean of the content, and an increase of about 3 percent in the “toxicity” of the retweets, based on an analysis of the language being used.

In all these areas — accuracy, partisan lean, and the language being used — there was a distinction between retweets and the primary tweets written by the Twitter users. Retweets, specifically, degraded in quality, while tweets original to the accounts being studied did not.

“Our observation that the effect only happens to retweets suggests that the effect is operating through the channel of attention,” says Rand, noting that on Twitter, people seem to spend a relatively long time crafting primary tweets, and little time making decisions about retweets.

He adds: “We might have expected that being corrected would shift one’s attention to accuracy. But instead, it seems that getting publicly corrected by another user shifted people’s attention away from accuracy — perhaps to other social factors such as embarrassment.” The effects were slightly larger when people were corrected by an account identified with the same political party as their own, suggesting that the negative response was not driven by partisan animosity.

Ready for Prime Time

As Rand observes, the current result seems at odds with some of the previous findings that he and other colleagues have made, such as a study published in Nature in March showing that neutral, nonconfrontational reminders about the concept of accuracy can increase the quality of the news people share on social media.

“The difference between these results and our prior work on subtle accuracy nudges highlights how complicated the relevant psychology is,” Rand says. 

As the current paper notes, there is a big difference between privately reading online reminders and having the accuracy of one’s own tweet publicly questioned. And as Rand notes, when it comes to issuing corrections, “it is possible for users to post about the importance of accuracy in general without debunking or attacking specific posts, and this should help to prime accuracy and increase the quality of news shared by others.”

It is also possible that highly argumentative corrections could produce even worse results. Rand suggests the style of corrections and the nature of the source material used in corrections could both be the subject of additional research.

“Future work should explore how to word corrections in order to maximize their impact, and how the source of the correction affects its impact,” he says.

Peter Dizikes is the social sciences, business, and humanities writer at the MIT News Office. The article is reprinted with permission of MIT News.