TRUTH DECAY
Fact-Checking Found to Influence Recommender Algorithms

By Tom Fleischman

Published 3 August 2023

A Cornell researcher has shown that urging individuals to actively participate in the news they consume can reduce the spread of falsehoods. “We don’t have to think of ourselves as captive to tech platforms and algorithms,” the researcher said.

In January 2017, Reddit users read about an alleged case of terrorism in a Spanish supermarket. What they didn’t know was that nearly every detail of the stories, taken from several tabloid publications and amplified by Reddit’s popularity algorithms, was false.

A Cornell researcher has shown that urging individuals to actively participate in the news they consume can reduce the spread of these kinds of falsehoods.

J. Nathan Matias, assistant professor of communication in the College of Agriculture and Life Sciences, conducted an experiment with a Reddit community of 14 million members and found that encouraging people to participate in knowledge-gathering could, in fact, move an algorithm’s needle.

Suggesting that community members fact-check suspect stories, he found, caused those stories to drop in Reddit’s rankings.

“One of the lessons here is that we don’t have to think of ourselves as captive to tech platforms and algorithms,” said Matias, the author of “Influencing Recommendation Algorithms to Reduce the Spread of Unreliable News by Encouraging Humans to Fact-check Articles, in a Field Experiment,” published July 20 in Scientific Reports.

Matias, who leads the Citizens and Technology Lab at Cornell, had the idea for this work while pursuing his Ph.D. a decade ago at the Massachusetts Institute of Technology.

“I was spending time with these extraordinary groups of people who organize large-scale conversations about collective knowledge,” he said, describing his fieldwork with the moderators of r/science, which now delivers science news to 30 million subscribers on Reddit.

“It was at that time that I realized that those communities were already collecting data, and using the tools of science to make decisions about how to manage these online conversations, even though they weren’t employees of Reddit,” Matias said. “They were doing citizen science for the social web.”

Working with the volunteer community leaders of r/worldnews, Reddit’s world news group, and focusing on news websites with a reputation for publishing inaccurate claims, Matias and an MIT master’s student developed a software program that observed when a community member submitted a link for discussion. The software would then randomly assign the discussion to one of three conditions (a rough sketch of this assignment step follows the list):

·  Readers were shown a persistent message encouraging them to fact-check the article, and comment with links to further evidence refuting the article’s claims;

·  Readers were shown the message and encouraged to consider down-voting the article. Reddit articles are voted up or down by community members, known as “redditors,” and ranked accordingly; and

·  A control group, in which no message was shown and no action was taken.
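The article describes the software only at this level of detail; as a rough illustration, the following Python sketch shows how a bot of this kind might randomly assign each newly observed submission to one of the three conditions. Every name, message, and the post_sticky_comment placeholder is hypothetical, not Matias’s actual software or a real Reddit API call.

    import random

    # The three conditions described above. Names are illustrative only;
    # the study's actual software is not reproduced here.
    CONDITIONS = ["fact_check_message", "fact_check_plus_vote_message", "control"]

    FACT_CHECK_TEXT = (
        "This article may contain unreliable claims. Please fact-check it "
        "and comment with links to further evidence."
    )
    VOTE_TEXT = FACT_CHECK_TEXT + " If it proves inaccurate, consider down-voting it."

    def post_sticky_comment(submission_id: str, text: str) -> None:
        # Placeholder: a real bot would call the Reddit API (e.g., via a
        # client library) to post and pin a moderator comment.
        print(f"[{submission_id}] pinned message: {text}")

    def assign_condition(submission_id: str) -> str:
        """Randomly assign one newly submitted discussion to a condition."""
        condition = random.choice(CONDITIONS)
        if condition == "fact_check_message":
            post_sticky_comment(submission_id, FACT_CHECK_TEXT)
        elif condition == "fact_check_plus_vote_message":
            post_sticky_comment(submission_id, VOTE_TEXT)
        # "control": take no action at all
        return condition

    if __name__ == "__main__":
        print(assign_condition("t3_example"))

Randomizing at the level of individual submissions is what makes the untouched control discussions a fair baseline for the two message conditions.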

Matias said he expected that fact-checking could backfire: if it focused more attention on an unreliable story, the algorithm might interpret that attention as positive reinforcement and rank the article higher on average.

“There’s the concern that if you repeat a falsehood often enough, is that going to anchor it in someone’s mind?” he said. “And is there something similar with these recommendation algorithms that can’t necessarily distinguish right from wrong? Even if you’re fact-checking, the algorithm might see more engagement and show it to more people.”

That turned out not to be the case. Across 1,104 news discussions observed from December 2016 to February 2017, Matias found that merely encouraging fact-checking – even without the down-voting prompt – lowered a story’s rank by an average of 25 positions. On Reddit, a drop that size would push a story off the front page, where it would likely be missed by a significant number of readers.

Though the experiment was narrow in scale, Matias sees it as evidence that people can collectively take control of the information they are fed, rather than just accepting a steady diet of falsehoods.

“There’s a lot of talk about this idea of determinism – that the decisions of an engineer somewhere in Silicon Valley influence our minds, and lock us into certain patterns of behavior,” he said. “And while they do have influence, these systems are designed to react to humans. So when people work together to improve our information environments, the algorithms can respond accordingly.

“This is a really powerful example of something that was very practical for this community,” he said, “and is making fundamental contributions to scientific knowledge.”

Tom Fleischman is senior writer/editor at the Cornell Chronicle. This article is published courtesy of the Cornell Chronicle.