3 Questions: Modeling Adversarial Intelligence to Exploit AI’s Security Vulnerabilities

I should also note that cyber defenses are pretty complicated. They've grown more complex in response to escalating attack capabilities. These defense systems involve designing detectors, processing system logs, triggering appropriate alerts, and then triaging the alerts into incident response systems. Defenders have to stay constantly vigilant to protect a very large attack surface that is hard to track and highly dynamic. On the other side of the attacker-versus-defender competition, my team and I also invent AI in the service of these different defensive fronts.
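
To make that pipeline concrete, here is a minimal sketch of the detect-alert-triage flow in Python; the log format, the detector rule, and the severity scale are hypothetical stand-ins rather than real detection content:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: int  # 1 (low) through 5 (critical)
    detail: str

def detect_failed_logins(log_lines: list[str]) -> list[Alert]:
    # Hypothetical detector rule: three or more failed logins is suspicious.
    failures = [line for line in log_lines if "LOGIN_FAILED" in line]
    if len(failures) >= 3:
        return [Alert(source="auth-detector", severity=4,
                      detail=f"{len(failures)} failed login attempts")]
    return []

def triage(alerts: list[Alert]) -> list[Alert]:
    # Route the highest-severity alerts to incident response first.
    return sorted(alerts, key=lambda a: a.severity, reverse=True)

logs = [
    "LOGIN_FAILED user=admin src=10.0.0.7",
    "LOGIN_OK user=bob src=10.0.0.12",
    "LOGIN_FAILED user=admin src=10.0.0.7",
    "LOGIN_FAILED user=admin src=10.0.0.7",
]
for alert in triage(detect_failed_logins(logs)):
    print(alert)
```

Real systems run many detectors like this in parallel and feed the triaged alerts into incident response workflows.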

Another thing stands out about adversarial intelligence: Both Tom and Jerry learn from competing with one another! Their skills sharpen and they lock into an arms race. One gets better; then the other, to save his skin, gets better too. This tit-for-tat improvement goes onward and upward! We work to replicate cyber versions of these arms races.

Q: What are some examples in our everyday lives where artificial adversarial intelligence has kept us safe? How can we use adversarial intelligence agents to stay ahead of threat actors?
A: Machine learning is used in many ways to strengthen cybersecurity. There are all kinds of detectors that filter out threats. They are tuned to anomalous behavior and to recognizable kinds of malware, for example. There are AI-enabled triage systems. Some of the spam protection tools right there on your cell phone are AI-enabled!
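
As one concrete illustration of an anomaly-tuned detector, here is a minimal sketch built on scikit-learn's IsolationForest. The traffic features below (bytes sent, requests per minute, failed logins) are hypothetical choices for illustration; real detectors use far richer signals:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy "normal" traffic: columns are [bytes_sent, requests_per_min, login_failures].
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[500, 30, 0.2], scale=[100, 5, 0.5], size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# predict() returns 1 for "looks normal" and -1 for "anomalous".
new_events = np.array([
    [520, 31, 0],       # a typical session
    [50_000, 600, 25],  # a traffic burst with many failed logins
])
print(detector.predict(new_events))  # expected: [ 1 -1 ]
```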

With my team, I design AI-enabled cyber attackers that can do what threat actors do. We invent AI to give our cyber agents expert computer skills and programming knowledge, making them capable of processing all sorts of cyber knowledge, planning attack steps, and making informed decisions within a campaign.
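
As a toy picture of the planning part, here is a sketch of a planner that repeatedly executes whichever attack step has its preconditions met. The step names and dependencies are hypothetical illustrations, not a real attack playbook:

```python
# Each step needs certain capabilities and grants a new one when done.
steps = [
    {"name": "scan_network",       "needs": set(),         "gives": "host_list"},
    {"name": "exploit_service",    "needs": {"host_list"}, "gives": "foothold"},
    {"name": "escalate_privilege", "needs": {"foothold"},  "gives": "admin"},
    {"name": "exfiltrate_data",    "needs": {"admin"},     "gives": "loot"},
]

achieved: set[str] = set()
plan: list[str] = []
while len(plan) < len(steps):
    # Greedily take the first step that is unplanned and ready to run.
    ready = next(s for s in steps
                 if s["name"] not in plan and s["needs"] <= achieved)
    plan.append(ready["name"])
    achieved.add(ready["gives"])

print(plan)
# ['scan_network', 'exploit_service', 'escalate_privilege', 'exfiltrate_data']
```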

Adversarially intelligent agents (like our AI cyber attackers) can be used as practice opponents when testing network defenses. A lot of effort goes into checking a network's robustness to attack, and AI can help with that. Additionally, when we add machine learning to our agents and to our defenses, they play out an arms race we can inspect, analyze, and use to anticipate the countermeasures attackers may take when we put defensive measures in place.
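
Here is a deliberately simplified sketch of such an arms race, reduced to a toy numeric game; the "strength" scores and the fixed improvement step are illustrative assumptions, not our actual training procedure:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    strength: float  # abstract skill level

def attack_succeeds(attacker: Agent, defender: Agent) -> bool:
    return attacker.strength > defender.strength

attacker = Agent("attacker", strength=1.5)
defender = Agent("defender", strength=1.0)

for round_no in range(1, 6):
    breached = attack_succeeds(attacker, defender)
    # Tit-for-tat improvement: whoever lost the round adapts.
    if breached:
        defender.strength += 0.5  # patch the gap the attack exposed
    else:
        attacker.strength += 0.5  # develop a new technique
    print(f"round {round_no}: breach={breached} "
          f"attacker={attacker.strength:.1f} defender={defender.strength:.1f}")
```

Even this toy version shows the pattern worth inspecting: each round's outcome tells you which side adapted and where the defense gap was.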

Q: What new risks are these adversarial agents adapting to, and how do they do so?
A: There never seems to be an end to new software being released and new system configurations being engineered. With every release there are vulnerabilities an attacker can target. These may be weaknesses in code that are already documented, or they may be novel.

New configurations pose the risk of errors and new avenues of attack. We didn't imagine ransomware when we were dealing with denial-of-service attacks. Now we're juggling cyber espionage and ransomware with IP [intellectual property] theft. All our critical infrastructure, including telecom networks and financial, health care, municipal, energy, and water systems, is a target.

Fortunately, a lot of effort is being devoted to defending critical infrastructure. We will need to translate that effort into AI-based products and services that automate some of it. And, of course, we need to keep designing smarter and smarter adversarial agents to keep us on our toes, or to help us practice defending our cyber assets.

Alex Shipps is Digital Strategy Coordinator, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). The article is reprinted with permission of MIT News.