Tracking Misinformation Campaigns in Real-Time is Possible: Study

Published 22 July 2020


A research team has developed a technique for tracking online foreign misinformation campaigns in real time, which could help mitigate outside interference in the 2020 American election.

The researchers developed a method for using machine learning to identify malicious Internet accounts, or trolls, based on their past behavior. Appearing in Science Advances, the model investigated past misinformation campaigns from China, Russia, and Venezuela that were waged against the United States before and after the 2016 election.

The team, which included researchers from New York University, Princeton University, and New Jersey Institute of Technology, identified the patterns these campaigns followed by analyzing posts to Twitter and Reddit and the hyperlinks or URLs they included. After running a series of tests, they found their model was effective in identifying posts and accounts that were part of a foreign influence campaign, including those by accounts that had never been used before.

They hope that software engineers will be able to build on their work to create a real-time monitoring system for exposing foreign influence in American politics.

“What our research means is that you could estimate in real time how much of it is out there and what they’re talking about,” says Jacob N. Shapiro, professor of politics and international affairs at the Princeton School of Public and International Affairs. “It’s not perfect, but it would force these actors to get more creative and possibly stop their efforts. You can only imagine how much better this could be if someone puts in the engineering efforts to optimize it.”

Shapiro conducted the study with Joshua Tucker, professor of politics at NYU; Cody Buntain, an assistant professor in informatics at New Jersey Institute of Technology; and Meysam Alizadeh, a Princeton research scholar.

The team began with a simple question: Using only content-based features and examples of known influence campaign activity, could you look at other content and tell whether a given post was part of an influence campaign?

They chose to investigate a unit known as a “post-URL pair,” which is simply a post containing a hyperlink. To have real influence, coordinated operations require intense human- and bot-driven information sharing, so the team theorized that similar post-URL pairs would appear frequently across platforms over time.
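To make the idea concrete, here is a minimal illustrative sketch (not the authors' model): a toy classifier over post-URL pairs that uses only content-based features, the words in a post plus the domain of its linked URL, and a simple Naive Bayes scorer built from Python's standard library. The training examples and domains are hypothetical.

```python
# Toy sketch of content-based classification of "post-URL pairs".
# Features and training data are illustrative, not from the study.
from collections import Counter
from urllib.parse import urlparse
import math

def features(post_text, url):
    # Content-based features only: lowercase words plus the linked domain.
    words = post_text.lower().split()
    domain = urlparse(url).netloc
    return words + [f"domain:{domain}"]

class NaiveBayes:
    def __init__(self):
        self.counts = {"troll": Counter(), "organic": Counter()}
        self.totals = {"troll": 0, "organic": 0}

    def fit(self, pairs, labels):
        for (text, url), label in zip(pairs, labels):
            for f in features(text, url):
                self.counts[label][f] += 1
                self.totals[label] += 1

    def predict(self, text, url):
        # Log-probabilities with add-one smoothing; highest score wins.
        vocab = len(set(self.counts["troll"]) | set(self.counts["organic"]))
        scores = {}
        for label in self.counts:
            s = 0.0
            for f in features(text, url):
                s += math.log((self.counts[label][f] + 1)
                              / (self.totals[label] + vocab))
            scores[label] = s
        return max(scores, key=scores.get)

# Hypothetical examples of known campaign activity vs. organic posts.
train = [
    ("election rigged share now", "http://fakenews.example/story"),
    ("breaking: candidate scandal exposed", "http://fakenews.example/expose"),
    ("photos from my weekend hike", "http://blog.example/hike"),
    ("new recipe for sourdough bread", "http://blog.example/bread"),
]
labels = ["troll", "troll", "organic", "organic"]

model = NaiveBayes()
model.fit(train, labels)
print(model.predict("share this election scandal", "http://fakenews.example/new"))
# → troll
```

A production system, as the article notes, would need far more engineering: richer features, cross-platform data, and real-time ingestion rather than a handful of toy examples.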