Intelligence Agencies Have Used AI Since the Cold War – but Now Face New Security Challenges

Intelligence agencies can also use AI to spot potential threats to the technology used to communicate across the internet, respond to cyber-attacks and identify unusual behaviour on networks. It can act against possible malware and contribute to a more secure digital environment.

AI Brings Security Threats
AI creates both opportunities and challenges for intelligence agencies. While it can help protect networks from cyber-attacks, it can also be used by hostile individuals or agencies to exploit vulnerabilities, install malware, steal information, or disrupt and deny the use of digital systems.

AI cyber-attacks have become a “critical threat”, according to Alberto Domingo, technical director of cyberspace at NATO Allied Command Transformation, who has called for international regulation to slow the number of attacks, which he says is “increasing exponentially”.

AI that analyses surveillance data can also reflect human biases. Research into facial recognition programs has shown they are often worse at identifying women and people with darker skin tones because they have predominantly been trained using data on white men. This has led to police being banned from using facial recognition in cities including Boston and San Francisco.

Such is the concern about AI-driven surveillance that researchers have designed counter-surveillance software aimed at fooling AI analysis of sounds, using a combination of predictive learning and data analysis.

Truth or Lie?
Online misinformation (incorrect information) and disinformation (deliberately false information) represent another major AI-related concern for intelligence agencies.

AI can generate false but believable “deepfake” images, videos and audio recordings, as well as text, as in the case of ChatGPT. Gordon Crovitz of the online misinformation research company NewsGuard has said that ChatGPT could evolve into “the most powerful tool for spreading misinformation that has ever been on the internet”.

Some intelligence agencies are tasked with stopping the spread of online falsehoods from affecting democratic processes. But it is almost impossible to identify AI-generated mis- or disinformation before it goes viral. And once fake stories are widely believed, they are very difficult to counter.

Agencies themselves are also at increased risk of mistaking false information for the real thing, as the AI tools used to analyse online data may not be able to tell the difference.

Privacy Concerns
The vast amounts of surveillance data that AI analyses are also creating concerns about privacy and civil liberties.

The World Economic Forum has declared that AI must place privacy before efficiency when governments use it in surveillance programs, while some scholars and commentators are calling for regulation to limit AI’s impact on society.

Governments must ensure that agencies using AI to conduct surveillance do so within the law. Such oversight requires clear guidelines to be set, regulations to be enforced and transgressors to be punished. Early indications are that governments have been slow to keep up, even in the United States.

The vulnerabilities of AI mean that, despite the technological advances of the post-Cold War world, there is still a need for human agents and intelligence officers.

As Zegart states, AI will take on most of the time-consuming, menial analysis roles that humans currently perform. While AI will allow intelligence agencies to identify the objects in a photograph, for example, human intelligence officers will be able to say why those objects are there.

This should lead to greater efficiency within intelligence agencies. But to overcome the fears of many citizens, legislation may need to catch up with the way the AI world works.

Dafydd Townley is Teaching Fellow in International Security, University of Portsmouth. This article is published courtesy of The Conversation.