Get Ready for AI-supercharged Hacking

By Niusha Shafiabady and Mamoun Alazab

Published 16 July 2024

Artificial intelligence can supercharge the effect of hacking attacks. As use of AI widens, people and organizations will have to become much more careful in guarding against its malicious use.

One aspect of the hacking problem is that malicious actors, having succeeded in hacking a system, such as a database or phone, can apply AI to the information they have stolen to create phishing messages that are much more persuasive and effective.

Another challenge is that an AI program loaded on to a phone or other computer must have access to far more information than a normal app. So a hacker may target the AI tool itself, seeing it as a wide door to more information that in turn can be used to execute more and stronger attacks.

Cybercrime is causing significant disruption to the Australian economy. According to the Australian Institute of Criminology, cybercrime cost Australia $3.5 billion in 2019. Around $1.9 billion was lost directly by victims; the rest was the cost of recovering from attacks and of measures to protect systems.

To guard against AI-supercharged hacking, we’ll need to try harder to protect ourselves and the organizations we’re affiliated with. We’ll need even more vigilance when receiving emails and text messages, more diligence in reporting suspicious ones and more reluctance to share information in response to them.

Spear-phishing is the sending of emails and text messages that are closely tailored to the individuals they’re addressed to. For example, suppose you visited a bakery yesterday, bought a tiramisu cake and later received a text message asking you to follow a link to rate the cake and your shopping experience. Mistakenly assuming that such a message could have come only from the innocent local bakery, you may click through and provide personal information, when in fact you’re dealing with a hacker who has found out just a little about you: your phone number, the name of the shop and what you bought.

But that example is mild compared with the spear-phishing that might be done with generative AI, the type that can create text, music, voice or images. It’s quite conceivable that a hacker using generative AI could send a detailed email purporting to come from your friend, written in the friend’s style and discussing things that you’d expect to hear only from that friend.
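When the text of a message can no longer be trusted as a cue, one of the few checks that still works is asking where its links actually lead. The sketch below is a minimal illustration of that idea, not a real filtering product; the function names and the example addresses are made up for illustration.

```python
# A minimal sketch of the kind of check a cautious reader (or a simple mail
# filter) can still apply when the message text itself is AI-written and
# convincing: does every link in the message point to the domain the sender
# claims to be writing from? All names here are illustrative.
import re
from urllib.parse import urlparse


def extract_links(body: str) -> list[str]:
    """Pull http(s) URLs out of a message body."""
    return re.findall(r"https?://[^\s\"'<>]+", body)


def link_matches_sender(url: str, sender_domain: str) -> bool:
    """True if the link's host is the sender's domain or a subdomain of it."""
    host = (urlparse(url).hostname or "").lower()
    return host == sender_domain or host.endswith("." + sender_domain)


def flag_suspicious(sender_address: str, body: str) -> list[str]:
    """Return links whose destination does not match the sender's domain."""
    sender_domain = sender_address.split("@")[-1].lower()
    return [u for u in extract_links(body) if not link_matches_sender(u, sender_domain)]


if __name__ == "__main__":
    message = "Hi! Rate your tiramisu here: https://rate-my-cake.example-survey.top/x1"
    print(flag_suspicious("orders@localbakery.com.au", message))
    # -> ['https://rate-my-cake.example-survey.top/x1']  — worth pausing before clicking.
```

A check like this is far from foolproof (legitimate businesses often use third-party survey links), but it shifts attention from how the message reads, which AI can now fake convincingly, to where it is trying to send you.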

Next, there’s the problem that the AI tools that are or soon will be on our phones and other computers must have permission to access a great deal of information held by other apps. Although these tools are mostly pre-trained, they need access to our data to provide personalized answers or recommendations. For example, to compose that persuasive message from your friend, such a tool would learn from records in your messages, email and contacts apps, and maybe the photos app, too.

This means that if attackers can get into the system, perhaps using methods hackers already rely on, and then gain access to the AI tool, they may be able to harvest whatever information the AI can reach, without ever having to break directly into the stores of information to which the AI has access.

Imagine that you and two friends are planning a birthday party for your brother and discussing gift ideas by email. A hacker who can read the contents of your email app, because your AI tool has access to it, can then send an extremely persuasive spear-phishing email. It might purport to come from one of the friends, offering links to gifts of the type you were discussing. With today’s usual level of guardedness, you are not likely to be at all suspicious. But the links are in fact malicious, possibly designed to give access to your organization’s computer network.

The AI tool that Apple announced in June, for example, requires access to your contacts and other personal information on your phone or other computer.
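One design response to this "wide door" problem is least privilege: rather than the assistant holding blanket access to contacts, mail and photos, every read goes through a broker that checks a narrow, per-purpose allow-list and records the access. The sketch below illustrates that idea only; the class, scope and function names are hypothetical and do not describe any vendor's actual API.

```python
# A minimal sketch of least-privilege, audited access for an on-device
# assistant. The broker only allows reads that are on the allow-list for a
# given purpose, and logs every decision so unusual access patterns show up.
from dataclasses import dataclass, field
from datetime import datetime, timezone


def fetch(source: str, query: str) -> str:
    """Stand-in for the real, scoped data-store lookup."""
    return f"<{source} results for '{query}'>"


@dataclass
class DataBroker:
    # purpose (scope) -> data sources the assistant may read for that purpose
    allowed: dict[str, set[str]]
    audit_log: list[str] = field(default_factory=list)

    def read(self, scope: str, source: str, query: str) -> str:
        stamp = datetime.now(timezone.utc).isoformat()
        if source not in self.allowed.get(scope, set()):
            self.audit_log.append(f"{stamp} DENIED {scope} -> {source}")
            raise PermissionError(f"assistant may not read {source} for purpose '{scope}'")
        self.audit_log.append(f"{stamp} ALLOWED {scope} -> {source}: {query}")
        return fetch(source, query)


if __name__ == "__main__":
    broker = DataBroker(allowed={"draft_reply": {"mail"}})
    print(broker.read("draft_reply", "mail", "latest thread with Alex"))  # allowed, logged
    try:
        broker.read("draft_reply", "photos", "birthday pictures")         # denied, logged
    except PermissionError as err:
        print(err)
```

Scoping and logging like this do not stop a hacker from stealing whatever the assistant is genuinely allowed to see, but they shrink the door from "everything on the device" to "only what this feature needs", and they leave a trail when something goes wrong.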

So far, the only answer to all this is increased vigilance, by individuals and their employers. Governments can help by publicizing the problem. They should.

Niusha Shafiabady is an associate professor at the Department of Information Technology at the Australian Catholic University and an adjunct associate professor at Charles Darwin University. Mamoun Alazab is a professor at the Faculty of Science and Technology and the director of the Northern Territory Academic Centre for Cyber Security and Innovation at Charles Darwin University, Australia. This article is published courtesy of the Australian Strategic Policy Institute (ASPI).