Artificial Intelligence Isn’t That Intelligent

By Harriet Farlow

Published 8 August 2022


Late last month, Australia’s leading scientists, researchers and businesspeople came together for the inaugural Australian Defence Science, Technology and Research Summit (ADSTAR), hosted by the Defence Department’s Science and Technology Group. In a demonstration of Australia’s commitment to partnerships that would make our non-allied adversaries flinch, Chief Defence Scientist Tanya Monro was joined by representatives from each of the Five Eyes partners, as well as Japan, Singapore and South Korea. Two streams focusing on artificial intelligence were dedicated to research and applications in the defence context.

‘At the end of the day, isn’t hacking an AI a bit like social engineering?’

A friend who works in cybersecurity asked me this. In the world of information security, social engineering is the game of manipulating people into divulging information that can be used in a cyberattack or scam. Cyber experts may therefore be excused for assuming that AI might display some human-like level of intelligence that makes it difficult to hack.

Unfortunately, it’s not. It’s actually very easy.

The man who coined the term ‘artificial intelligence’ in the 1950s, computer scientist John McCarthy, also observed that once we know how it works, we stop calling it AI. This explains why AI means different things to different people. It also explains why trust in and assurance of AI is so challenging.

AI is not some all-powerful capability that, for all its ability to mimic humans, also thinks like humans. Most implementations, specifically machine-learning models, are just very complicated applications of the statistical methods we’re familiar with from high school. That doesn’t make them smart, merely complex and opaque, and that opacity leads to problems in AI safety and security.
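To see how ordinary the underlying machinery is, consider that a simple classifier is little more than a weighted sum pushed through a squashing curve. The sketch below, in Python with NumPy, uses made-up weights and inputs purely for illustration; it is not drawn from any system discussed in this article.

```python
# A minimal sketch, assuming NumPy is available: a basic 'model' is just a
# weighted sum of inputs passed through the logistic function taught in
# introductory statistics. Weights, bias and inputs are illustrative.
import numpy as np

def logistic_model(features, weights, bias):
    """Weighted sum of inputs, squashed to a probability between 0 and 1."""
    score = np.dot(features, weights) + bias      # plain linear algebra
    return 1.0 / (1.0 + np.exp(-score))           # the logistic curve

probability = logistic_model(np.array([1.2, 0.4]), np.array([0.8, -0.5]), 0.1)
print(probability)  # roughly 0.70: a 'prediction', but really just statistics
```

Modern deep-learning models stack millions of such weighted sums, which makes them harder to inspect, not fundamentally more intelligent.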

Bias in AI has long been known to cause problems. For example, AI-driven recruitment systems in tech companies have been shown to filter out applications from women, and re-offence prediction systems in US prisons exhibit consistent biases against black inmates. Fortunately, bias and fairness concerns in AI are now well known and actively investigated by researchers, practitioners and policymakers.

AI security is different, however. While AI safety deals with the impact of the decisions an AI might make, AI security looks at the inherent characteristics of a model and whether it could be exploited. AI systems are vulnerable to attackers and adversaries just as cyber systems are.

A known challenge is adversarial machine learning, where ‘adversarial perturbations’, small changes typically imperceptible to a human viewer, are added to an image and cause a model to predictably misclassify it.
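To make the idea concrete, here is a minimal sketch of one widely studied attack of this kind, the fast gradient sign method, written in Python with PyTorch. The model, image tensor, label and epsilon value are assumptions introduced for illustration, not details from ADSTAR or this article.

```python
# A minimal sketch of the fast gradient sign method (FGSM), assuming PyTorch
# and some pre-trained classifier `model` are available. `image` is a batched
# image tensor with values in [0, 1]; `label` holds the true class indices.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return a copy of `image` nudged so the model is likely to misclassify it."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by +/- epsilon in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

The perturbation is bounded by epsilon, so the altered image looks essentially unchanged to a person, yet the model's prediction can flip, which is precisely why attacking an AI is often easier than ‘socially engineering’ it.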