Using AI to Monitor the Internet for Terror Content Is Inescapable – but Also Fraught with Pitfalls

Perceptual hashing, on the other hand, focuses on similarity. It overlooks minor changes like pixel color adjustments, while still identifying images with the same core content. This makes perceptual hashing more resilient to small alterations to a piece of content. But it also means that the hashes are not entirely random, and so could potentially be used to try to recreate the original image.
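To make the idea concrete, below is a minimal sketch of one of the simplest perceptual hashing schemes, an "average hash", written in Python with the Pillow imaging library. It is illustrative only; deployed systems typically use more robust algorithms such as pHash or Meta's PDQ, and the 8x8 hash size here is an assumption made for brevity.

```python
# A minimal "average hash" (aHash) sketch using the Pillow imaging library.
# Illustrative only: deployed systems use more robust perceptual hashes.
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    """Return a 64-bit perceptual hash of the image at `path`."""
    # Shrink to a tiny greyscale thumbnail so fine detail (and small edits) is discarded.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    average = sum(pixels) / len(pixels)
    # Each bit records whether a pixel is brighter than the average.
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > average else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits: a small distance means visually similar images."""
    return bin(a ^ b).count("1")

# Two copies of the same image with minor pixel-level edits will usually differ
# by only a few bits; unrelated images typically differ by far more.
```

Because the image is shrunk to a tiny thumbnail before hashing, small pixel-level edits rarely change more than a few bits, which is why near-duplicates can be matched by comparing hash distances rather than exact values. It is also why the hash leaks some coarse information about the original image.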

2. Classification
The second approach relies on classifying content. It uses machine learning and other forms of AI, such as natural language processing. To achieve this, the AI needs a large number of examples, such as texts that human content moderators have labelled as terrorist content or not. By analyzing these examples, the AI learns which features distinguish different types of content, allowing it to categorize new content on its own.

Once trained, the algorithms are then able to predict whether a new item of content belongs to one of the specified categories. These items may then be removed or flagged for human review.
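As an illustration of this train-then-predict workflow, here is a minimal sketch using the scikit-learn library. The placeholder training texts, the labels and the probability thresholds for removal and review are all hypothetical; real systems are trained on large, curated datasets labelled by human moderators.

```python
# A minimal sketch of the train-then-predict workflow using scikit-learn.
# The placeholder texts, labels and thresholds below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Texts labelled by human moderators: 1 = terrorist content, 0 = benign.
texts = [
    "placeholder post labelled as terrorist content by a moderator",
    "another placeholder post labelled as terrorist content",
    "placeholder post labelled as benign by a moderator",
    "another placeholder post labelled as benign",
]
labels = [1, 1, 0, 0]

# The pipeline turns each text into word-frequency features (TF-IDF) and fits
# a classifier that learns which features distinguish the two classes.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def triage(new_text: str) -> str:
    """Route a new item: remove, flag for human review, or take no action."""
    prob = model.predict_proba([new_text])[0][1]  # estimated probability of class 1
    if prob > 0.95:
        return "remove"
    if prob > 0.80:
        return "flag_for_human_review"
    return "no_action"

print(triage("a new, previously unseen post"))
```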

This approach also faces challenges, however. Collecting and preparing a large dataset of terrorist content to train the algorithms is time-consuming and resource-intensive.

The training data may also become dated quickly, as terrorists make use of new terms and discuss new world events and current affairs. Algorithms also struggle to understand context, including subtlety and irony, and they lack cultural sensitivity, including variations in dialect and language use across different groups.

These limitations can have important offline effects. There have been documented failures to remove hate speech in countries such as Ethiopia and Romania, while free speech activists in countries such as Egypt, Syria and Tunisia have reported having their content removed.

We Still Need Human Moderators
So, in spite of advances in AI, human input remains essential. It is important for maintaining databases and datasets, assessing content flagged for review and operating appeals processes for when decisions are challenged.

But this is demanding and draining work, and there have been damning reports regarding the working conditions of moderators, with many tech companies such as Meta outsourcing this work to third-party vendors.

To address this, we recommend the development of a set of minimum standards for those employing content moderators, including mental health provision. There is also potential to develop AI tools to safeguard the wellbeing of moderators. This would work, for example, by blurring out areas of images so that moderators can reach a decision without viewing disturbing content directly.
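As an illustration of the blurring idea, the following sketch, assuming the Pillow imaging library, blurs a specified region of an image before it is shown to a moderator. The file name, region coordinates and blur radius are hypothetical; a real tool might locate sensitive regions with a detection model or let moderators reveal them selectively.

```python
# A sketch, assuming the Pillow imaging library, of blurring a region of an
# image before showing it to a moderator. The coordinates and radius are
# hypothetical choices for illustration.
from PIL import Image, ImageFilter

def blur_region(path: str, box: tuple[int, int, int, int], radius: int = 25) -> Image.Image:
    """Return a copy of the image with the (left, upper, right, lower) box blurred."""
    img = Image.open(path).convert("RGB")
    region = img.crop(box)  # cut out the sensitive area
    img.paste(region.filter(ImageFilter.GaussianBlur(radius)), box)  # paste it back, blurred
    return img

# Example usage (file name and coordinates are placeholders):
# blurred = blur_region("flagged_image.jpg", box=(100, 100, 400, 400))
# blurred.save("flagged_image_blurred.jpg")
```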

But at the same time, few, if any, platforms have the resources needed to develop automated content moderation tools and employ a sufficient number of human reviewers with the required expertise.

Many platforms have turned to off-the-shelf products. It is estimated that the content moderation solutions market will be worth $32bn by 2031.

But caution is needed here. Third-party providers are not currently subject to the same level of oversight as tech platforms themselves. They may rely disproportionately on automated tools, with insufficient human input and a lack of transparency regarding the datasets used to train their algorithms.

So, collaborative initiatives between governments and the private sector are essential. For example, the EU-funded Tech Against Terrorism Europe project has developed valuable resources for tech companies. There are also examples of automated content moderation tools being made openly available, such as Meta’s Hasher-Matcher-Actioner, which companies can use to build their own database of hashed terrorist content.
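The core pattern such tools implement can be sketched generically: hash incoming content, compare it against a shared database of known hashes, and act on close matches. The function below is illustrative logic rather than Hasher-Matcher-Actioner's actual API, and the 10-bit distance threshold is an assumption.

```python
# A generic sketch of the hasher-matcher pattern: hash incoming content and
# compare it against a database of known hashes. This is illustrative logic,
# not Hasher-Matcher-Actioner's actual API; the 10-bit threshold is an assumption.
from typing import Iterable

def matches_known_content(candidate_hash: int,
                          known_hashes: Iterable[int],
                          max_distance: int = 10) -> bool:
    """Return True if the candidate is within `max_distance` bits of any known hash."""
    return any(bin(candidate_hash ^ known).count("1") <= max_distance
               for known in known_hashes)

# On upload: compute a perceptual hash of the new item (see the earlier sketch),
# check it against the shared database, and remove or flag the item on a match.
```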

International organizations, governments and tech platforms must prioritize the development of such collaborative resources. Without this, effectively addressing online terror content will remain elusive.

Stuart Macdonald is Professor of Law, Swansea University. Ashley A. Mattheis is Postdoctoral Researcher, School of Law and Government, Dublin City University. David Wells is Honorary Research Associate at the Cyber Threats Research Centre, Swansea University. This article is published courtesy of The Conversation.