Truth decay: Exploring solutions for the problem of "fake news"

Published 28 February 2018

A new report from Data & Society, titled Dead Reckoning: Navigating Content Moderation after "Fake News," analyzes nascent solutions to the problem of identifying, handling, and mitigating fake news that have recently been proposed by platform corporations, governments, news media industry coalitions, and civil society organizations. The report then explores potential approaches to containing fake news, including trust and verification, disrupting economic incentives, de-prioritizing content and banning accounts, and limited regulatory approaches.

Data & Society says the report is intended to inform platforms, news organizations, media makers, and others who ask not whether standards for media content should be set, but rather who should set them, who should enforce them, and what entity should hold platforms, the media industry, states, and users accountable. "Fake news is thus not only about defining what content is problematic or false, but what constitutes credible and legitimate news in the social media era," the authors write.

Among the report's findings:

— Fake news has become a politicized and controversial term, used both to extend critiques of mainstream media and to refer to the growing spread of propaganda and problematic content online.

— Definitions that point to the spread of problematic content rely on assessing the intent of producers and sharers of news, separating content into clear and well-defined categories, and/or identifying features that machines or human reviewers can use to detect fake news content.

— Strategies for limiting the spread of fake news include trust and verification, disrupting economic incentives, de-prioritizing content and banning accounts, and limited regulatory approaches.

— Content producers learn quickly and adapt to new standards set by platforms, using tactics such as adding satire or parody disclaimers to bypass standards enforced by content moderators and automated systems.

— Moderating fake news well requires understanding the context of an article and its source. Automated technologies and artificial intelligence (AI) are not yet advanced enough for this task, which still requires human-led interventions.

— Third-party fact-checking and media literacy organizations are expected to close the gap between platforms and the public interest, but they are currently under-resourced to meet this challenge.

— Read more in Robyn Caplan, Lauren Hanson, and Joan Donovan, Dead Reckoning: Navigating Content Moderation After "Fake News" (Data & Society, 21 February 2018). This report is the most recent in a series from Data & Society's Media Manipulation initiative. See also: Monica Bulger and Patrick Davison, The Promises, Challenges, and Futures of Media Literacy (Data & Society, 21 February 2018); Alice Marwick and Rebecca Lewis, Media Manipulation and Disinformation Online (Data & Society, 15 May 2017); and Caroline Jack, Lexicon of Lies: Terms for Problematic Information (Data & Society, 9 August 2017).