Fighting Against Disinformation with Lessons from Cybersecurity

Getting Ahead of Influence Operations
Research in counteracting online influence operations is still young. Three years ago, Lincoln Laboratory initiated a study on the topic to understand its implications for national security. The field has since ballooned, especially after dangerous, misleading Covid-19 claims spread online, perpetuated in some cases by China and Russia, as one RAND study found. There is now dedicated funding through the laboratory’s Technology Office toward developing influence operations countermeasures.

“It’s important for us to strengthen our democracy and make all our citizens resilient to the kinds of disinformation campaigns targeted at them by international adversaries, who seek to disrupt our internal processes,” Zurko says.

Like cyberattacks, influence operations often follow a multistep path, called a kill chain, to exploit predictable weaknesses. Studying and shoring up those weak points can work in fighting influence operations, just as it does in cyber defense. Lincoln Laboratory’s efforts focus on developing technology to support “source tending,” or reinforcing the early stages of the kill chain, when adversaries begin to find opportunities for a divisive or misleading narrative and build accounts to amplify it. Source tending helps cue U.S. information-operations personnel to a brewing disinformation campaign.
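As a rough, illustrative sketch of the kill-chain framing, the snippet below enumerates hypothetical stages of an influence-operation kill chain, loosely adapted from the cyber kill chain; the stage names are an assumption for illustration, not Lincoln Laboratory’s model. It flags the early stages, finding a narrative and building accounts, that source tending is meant to watch.

```python
# Illustrative influence-operation kill chain. The stage names are an
# assumption loosely adapted from the cyber kill chain, not Lincoln
# Laboratory's actual model.
from enum import Enum, auto

class InfluenceStage(Enum):
    FIND_NARRATIVE = auto()   # scan for divisive or misleading narratives
    BUILD_ACCOUNTS = auto()   # create and warm up personas to amplify them
    SEED_CONTENT = auto()     # post the narrative into target communities
    AMPLIFY = auto()          # boost reach with coordinated engagement
    ORGANIC_SPREAD = auto()   # unwitting users carry the narrative onward

# "Source tending" targets the earliest stages, before a narrative
# gains organic momentum.
EARLY_STAGES = {InfluenceStage.FIND_NARRATIVE, InfluenceStage.BUILD_ACCOUNTS}

def cue_analysts(stage: InfluenceStage) -> bool:
    """Return True when observed activity falls in an early, tendable stage."""
    return stage in EARLY_STAGES

print(cue_analysts(InfluenceStage.BUILD_ACCOUNTS))  # True
```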

A couple of approaches at the laboratory are aimed at source tending. One leverages machine learning to study digital personas, with the intent of identifying when the same person is behind multiple malicious accounts. Another focuses on building computational models that can identify deepfakes, or AI-generated videos and photos created to mislead viewers. Researchers are also developing tools to automatically identify which accounts hold the most influence over a narrative. First, the tools identify a narrative (in one paper, the researchers studied the disinformation campaign against French presidential candidate Emmanuel Macron) and gather data related to that narrative, such as keywords, retweets, and likes. Then, they apply an analytical technique called causal network analysis to define and rank the influence of specific accounts: which accounts often generate posts that go viral?
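To make the account-ranking step concrete, here is a minimal sketch of the general idea, assuming a simple retweet graph: it ranks accounts by how heavily their posts are amplified, using networkx’s weighted PageRank as a stand-in for the more sophisticated causal network analysis described above. The account names and counts are invented for illustration.

```python
# Toy example: rank accounts by influence over a narrative using a
# retweet graph. Weighted PageRank is a simplified stand-in for the
# causal network analysis mentioned in the article; all account names
# and retweet counts are invented.
import networkx as nx

# Each tuple (u, v, w) means account u retweeted account v w times
# while posting about the narrative of interest.
retweets = [
    ("amplifier_01", "persona_A", 40),
    ("amplifier_02", "persona_A", 35),
    ("amplifier_03", "persona_B", 12),
    ("casual_user_1", "amplifier_01", 3),
    ("casual_user_2", "persona_A", 1),
]

G = nx.DiGraph()
for src, dst, w in retweets:
    G.add_edge(src, dst, weight=w)

# Accounts whose posts are heavily and repeatedly amplified score higher.
scores = nx.pagerank(G, weight="weight")
for account, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{account:15s} {score:.3f}")
```

In this toy graph, persona_A, the account whose posts are retweeted most, ranks first, which is the kind of signal an analyst would want surfaced automatically.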

These technologies are feeding into the work that Zurko is leading to develop a counter-influence operations test bed. The goal is to create a safe space to simulate social media environments and test counter-technologies. Most importantly, the test bed will put human operators in the loop to see how well new technologies help them do their jobs.

“Our military’s information-operations personnel are lacking a way to measure impact,” Zurko says. “By standing up a test bed, we can use multiple different technologies, in a repeatable fashion, to grow metrics that let us see if these technologies actually make operators more effective in identifying a disinformation campaign and the actors behind it.”

This vision is still aspirational as the team builds up the test bed environment. Simulating social media users and what Zurko calls the “grey cell,” the unwitting participants in online influence, is one of the greatest challenges in emulating real-world conditions. Reconstructing social media platforms is also a challenge; each platform has its own policies for dealing with disinformation and proprietary algorithms that influence disinformation’s reach. For example, The Washington Post reported that Facebook’s algorithm gave “extra value” to news that received anger reactions, making it five times more likely to appear on a user’s news feed; such content is disproportionately likely to include misinformation. These often-hidden dynamics are important to replicate in a test bed, both to study the spread of fake news and to understand the impact of interventions.
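To illustrate why such ranking rules matter in a test bed, here is a minimal sketch of a configurable feed-scoring function that weights anger reactions five times more than likes, mirroring the reported behavior; the field names, the share weight, and the function itself are illustrative assumptions, not any platform’s actual algorithm.

```python
# Minimal sketch of a configurable feed-ranking rule for a test bed.
# The 5x weight on anger reactions mirrors the behavior reported by
# The Washington Post; everything else (field names, share weight) is
# an illustrative assumption, not a real platform's algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    anger_reactions: int
    shares: int

def rank_feed(posts, like_weight=1.0, anger_weight=5.0, share_weight=2.0):
    """Return posts sorted by a weighted engagement score, highest first."""
    def score(p: Post) -> float:
        return (like_weight * p.likes
                + anger_weight * p.anger_reactions
                + share_weight * p.shares)
    return sorted(posts, key=score, reverse=True)

feed = rank_feed([
    Post("calm news summary", likes=120, anger_reactions=2, shares=10),
    Post("misleading outrage bait", likes=30, anger_reactions=45, shares=25),
])
print([p.text for p in feed])
```

A test bed can swap in different weights and re-run the same simulated campaign, which is exactly the kind of repeatable measurement the project is after.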

Taking a Full-System Approach
In addition to building a test bed for combining new ideas, Zurko is advocating for a unified space that disinformation researchers can call their own. Such a space would allow researchers in sociology, psychology, policy, and law to come together and share cross-cutting aspects of their work alongside cybersecurity experts. The best defenses against disinformation will require this diversity of expertise, Zurko says, and “a full-system approach of both human-centered and technical defenses.”

Though this space doesn’t yet exist, it’s likely on the horizon as the field continues to grow. Influence operations research is gaining traction in the cybersecurity world. “Just recently, the top conferences have begun putting disinformation research in their call for papers, which is a real indicator of where things are going,” Zurko says. “But, some people still hold on to the old-school idea that messy humans don’t have anything to do with cybersecurity.”

Despite those sentiments, Zurko still trusts her early observation as a researcher: what cyber technology can do effectively is moderated by how people use it. She wants to continue to design technology, and approach problem-solving, in a way that places humans center-frame. “From the very start, what I loved about cybersecurity is that it’s partly mathematical rigor and partly sitting around the ‘campfire’ telling stories and learning from one another,” Zurko reflects. Disinformation gets its power from humans’ ability to influence each other; that ability may also be the most powerful defense we have.

Kylie Foy writes about science, technology, and community at MIT Lincoln Laboratory. This story is reprinted with permission of MIT News.