Homeland Security Newswire

War on fake news could be won with the help of behavioral science

By Gleb Tsipursky

Published 16 May 2018

Facebook CEO Mark Zuckerberg recently acknowledged his company’s responsibility in helping create the enormous amount of fake news that plagued the 2016 election – after earlier denials. Yet he offered no concrete details on what Facebook could do about it. Fortunately, there’s a way to fight fake news that already exists and has behavioral science on its side: the Pro-Truth Pledge project. I was part of a team of behavioral scientists that came up with the idea of a pledge as a way to limit the spread of misinformation online. Two studies that tried to evaluate its effectiveness suggest it actually works.


Fighting fake news
A growing number of American lawmakers and ordinary citizens believe social media companies like Facebook and Twitter need to do more to fight the spread of fake news – even if it results in censorship.

A recent survey, for example, showed that 56 percent of respondents say tech companies “should take steps to restrict false info online even if it limits freedom of information.”

But what steps they could take – short of censorship and government control – is a big question.

Before answering that, let’s consider how fake news spreads. In the 2016 election, for example, much of the misinformation came from Russian bots that used falsehoods to try to exacerbate American religious and political divides.

Yet the posts made by bots wouldn’t mean much unless millions of regular social media users chose to share the information. And it turns out ordinary people spread misinformation on social media much faster and farther than true stories.

In part, this problem results from people sharing stories without reading them: they didn’t realize they were spreading falsehoods.