An AI-Based Counter-Disinformation Framework

By Linda Slapakova

Published 31 March 2021

AI can play several roles in counter-disinformation efforts, but the current shortfalls of AI-based counter-disinformation tools must first be overcome. Doing so means tackling technical, governance and regulatory barriers, and there are ways to do this effectively so that AI-based solutions can play a bigger role in countering disinformation.

Disinformation has become a defining feature of the COVID-19 crisis. With social media bots (i.e. automated agents engaging on social networks) nearly twice as active during COVID-19 as during past crises and national elections, the public and private sectors have struggled to address the rapid spread of false information about the pandemic. This has highlighted the need for effective, innovative tools to detect disinformation and to strengthen institutional and societal resilience against it. Leveraging Artificial Intelligence (AI) is one avenue for developing and using such tools.
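To make concrete one signal such a detection tool might use, the sketch below flags accounts whose posting rate far exceeds a plausible human cadence. It is a minimal, hypothetical illustration only: the sample data, account names and threshold are invented, and real bot-detection systems combine many more signals (content, network structure, coordination patterns).

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical sample data: (account, ISO timestamp) pairs for observed posts.
posts = [
    ("acct_a", "2020-04-01T10:00:00"), ("acct_a", "2020-04-01T10:00:40"),
    ("acct_a", "2020-04-01T10:01:15"), ("acct_a", "2020-04-01T10:01:50"),
    ("acct_b", "2020-04-01T09:00:00"), ("acct_b", "2020-04-01T14:30:00"),
]

MAX_HUMAN_POSTS_PER_HOUR = 30  # illustrative threshold, not an established cutoff


def flag_bot_like(posts):
    """Flag accounts whose average posting rate exceeds the threshold."""
    times = defaultdict(list)
    for account, ts in posts:
        times[account].append(datetime.fromisoformat(ts))
    flagged = set()
    for account, stamps in times.items():
        stamps.sort()
        span_hours = (stamps[-1] - stamps[0]).total_seconds() / 3600
        rate = len(stamps) / max(span_hours, 1 / 60)  # guard against tiny spans
        if rate > MAX_HUMAN_POSTS_PER_HOUR:
            flagged.add(account)
    return flagged


print(flag_bot_like(posts))  # expected: {'acct_a'}
```

A rate threshold alone is easy for bot operators to evade, which is one reason more sophisticated, AI-based classifiers are attractive for this task.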

To provide a holistic assessment of the opportunities offered by an AI-based counter-disinformation framework, this blog first discusses the various roles that AI can play in counter-disinformation efforts. It then examines the prevailing shortfalls of AI-based counter-disinformation tools and the technical, governance and regulatory barriers to their uptake, and considers how these could be addressed to foster the adoption of AI-based solutions for countering disinformation.

The Double-Edged Sword of Emerging Technologies and Disinformation
Emerging technologies, including AI, are often described as a double-edged sword in relation to information threats. On the one hand, emerging technologies can enable more sophisticated online information threats and often lower the barriers to entry for malign actors. On the other hand, they can provide significant opportunities for countering such threats. This has been no less true in the case of AI and disinformation.

Though the majority of malign information on social media is spread by relatively simple bot technology, existing evidence suggests that AI is being leveraged for more sophisticated online manipulation techniques. The extent of AI's use in this context is difficult to measure, but many information security experts believe that malign actors are already leveraging AI, for example to better determine attack parameters (e.g. ‘what to attack, who to attack, [and] when to attack’). This enables more targeted attacks and thus more effective information threats, including disinformation campaigns. Recent advances in AI techniques such as Natural Language Processing (NLP) have also given rise to concerns that AI may be used to create more authentic synthetic text (e.g. fake social media posts, articles and documents). Moreover, deepfakes (i.e. the use of AI to create highly authentic, realistic manipulated audio-visual material) are a prominent example of an image-based, AI-enabled information threat.
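As a rough illustration of how low the barrier to generating synthetic text has become, the sketch below uses an off-the-shelf pretrained language model to produce short, social-media-style continuations of a prompt. This is a hedged example, not a description of any specific actor's tooling: it assumes the open-source Hugging Face transformers library and the publicly released GPT-2 model, and the prompt is invented.

```python
# A minimal sketch of NLP-based text generation, assuming the open-source
# Hugging Face `transformers` library and the publicly released GPT-2 model.
# The prompt is hypothetical; output quality will vary from run to run.
from transformers import pipeline, set_seed

set_seed(42)  # make the illustration reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = "Health officials confirmed today that"
samples = generator(prompt, max_length=40, num_return_sequences=3)

for i, sample in enumerate(samples, start=1):
    print(f"--- synthetic post {i} ---")
    print(sample["generated_text"])
```

Even this small, freely available model can produce plausible-sounding sentences in seconds; larger and more recent models sharpen the concern about authentic-seeming synthetic text noted above.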