AI & CRIME
Four Ways Criminals Could Use AI to Target More Victims

By Daniel Prince

Published 23 June 2023

Warnings about artificial intelligence (AI) are ubiquitous right now, but we have been using AI tools for a long time. AI is a tool to increase efficiency, process and sort large volumes of data, and offload decision making – and these tools are open to everyone, including criminals. Observing how criminals have adapted to, and adopted, technological advances in the past can provide some clues as to how they might use AI.

Warnings about artificial intelligence (AI) are ubiquitous right now. They have included fearful messages about AI’s potential to cause the extinction of humans, invoking images of the Terminator movies. The UK Prime Minister Rishi Sunak has even set up a summit to discuss AI safety.

However, we have been using AI tools for a long time – from the algorithms used to recommend relevant products on shopping websites, to cars with technology that recognizes traffic signs and provides lane positioning. AI is a tool to increase efficiency, process and sort large volumes of data, and offload decision making.

Nevertheless, these tools are open to everyone, including criminals. And we’re already seeing the early-stage adoption of AI by criminals. Deepfake technology has been used to generate revenge pornography, for example.

Technology enhances the efficiency of criminal activity. It allows lawbreakers to target a greater number of people and helps them appear more plausible. Observing how criminals have adapted to, and adopted, technological advances in the past can provide some clues as to how they might use AI.

1. A better phishing hook
AI tools like ChatGPT and Google’s Bard provide writing support, allowing inexperienced writers to craft effective marketing messages, for example. However, this technology could also help criminals sound more believable when contacting potential victims.

Think about all those spam phishing emails and texts that are badly written and easily detected. Being plausible is key to being able to elicit information from a victim.

Phishing is a numbers game: an estimated 3.4 billion spam emails are sent every day. My own calculations show that if criminals were able to improve their messages so that as little as an additional 0.0005% of them now convinced someone to reveal information, it would result in around 6.2 million more phishing victims each year.
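The arithmetic behind that estimate is easy to check. A minimal sketch, using the 3.4 billion emails per day figure from the text and noting that a success-rate improvement of 0.0005% (as a fraction, 5 in a million) is what yields roughly 6.2 million extra victims per year:

```python
# Back-of-the-envelope check of the phishing "numbers game" estimate.
DAILY_SPAM_EMAILS = 3.4e9          # estimated spam emails sent per day (from the text)
DAYS_PER_YEAR = 365
RATE_IMPROVEMENT = 0.0005 / 100    # 0.0005% expressed as a fraction (5e-6)

extra_victims_per_year = DAILY_SPAM_EMAILS * DAYS_PER_YEAR * RATE_IMPROVEMENT
print(f"{extra_victims_per_year:,.0f}")  # prints 6,205,000 — about 6.2 million
```

Even a tiny per-message improvement in plausibility compounds enormously at this volume, which is the core of the argument.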

2. Automated interactions
One of the early uses for AI tools was to automate interactions between customers and services over text, chat messages and the phone. This enabled a faster response to customers and optimized business efficiency. Your first contact with an organization is likely to be with an AI system, before you get to speak to a human.

Criminals can use the same tools to create automated interactions with large numbers of potential victims, at a scale not possible if it were carried out by humans alone. They can impersonate legitimate services like banks over the phone and by email, in an attempt to elicit information that would allow them to steal your money.