Generative AI Speeds up Cybersecurity Defenses

By Tom Rickey

Published 13 January 2026

Faster adversary emulation helps defenders stop cyberattacks: Scientists are using generative AI to accelerate a key step in the defense against cyberattacks, performing complex operations in minutes instead of weeks.


The team led by Loc Truong at the Department of Energy’s Pacific Northwest National Laboratory is using generative AI to reconstruct complex cyberattacks. These reconstructions are a crucial component of digital defense: Cybersecurity professionals need to understand exactly how an attack occurred to be sure they can stop it.

“To really protect against an attack, you need to replicate it,” said Truong, a data scientist. “When an attack happens, usually a defender simply receives a text document explaining the attack, but someone needs to re-implement the entire attack. That can be a lengthy process and cost a lot of money. We hope to change that.”

The work comes at a time when hackers and other bad actors have unfettered access to advanced generative AI tools, muddying the cyber landscape. PNNL cybersecurity researcher Kristopher Willis, who works on the project with Truong, noted that AI is part of the approach of some of the best hackers across industry, academia and government. 

“At the most recent DEF CON, the largest hacker conference in the world, every team competing at the DEF CON Capture the Flag finals was using AI to assist with their attacks,” said Willis, who was a participant in the finals.

Meanwhile, defenders like Truong and Willis are expanding the use of autonomous defense to stay ahead.

ALOHA and Claude
The PNNL team created an adaptive generative AI agent called ALOHA (Agentic LLMs for Offensive Heuristic Automation) using Claude, a popular large language model developed by Anthropic. The partnership allows laboratory researchers to benefit from an advanced LLM, and it allows Anthropic to have its technology subjected to rigorous testing to prevent misuse.
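ALOHA's internals are not public, but the "agentic LLM" pattern it names is well established: a loop that feeds the latest observation to a language model, executes the action the model proposes, and repeats until a goal or budget is reached. The sketch below illustrates that loop with a stubbed model function standing in for a real Claude API call; all names and the toy playbook are illustrative, not ALOHA's actual design.

```python
# Minimal sketch of an agentic-LLM loop for adversary emulation.
# ALOHA's actual design is not public; the names and the toy playbook
# here are illustrative. A real agent would replace stub_model with a
# call to an LLM such as Claude via Anthropic's API.

def stub_model(observation: str) -> str:
    """Stand-in for an LLM call: maps the last observation to a next action."""
    playbook = {
        "start": "recon: enumerate hosts",
        "recon: enumerate hosts": "exploit: test credential reuse",
        "exploit: test credential reuse": "report: summarize findings",
    }
    return playbook.get(observation, "stop")

def run_agent(max_steps: int = 10) -> list:
    """Observe -> decide -> act loop, halting on 'stop' or the step budget."""
    observation, trace = "start", []
    for _ in range(max_steps):
        action = stub_model(observation)
        if action == "stop":
            break
        trace.append(action)
        observation = action  # in practice, the environment's response
    return trace

print(run_agent())
# ['recon: enumerate hosts', 'exploit: test credential reuse', 'report: summarize findings']
```

The step budget matters in practice: it bounds an autonomous loop so a misbehaving model cannot run actions indefinitely.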

“PNNL’s work using large language models to simulate attacks on critical infrastructure is crucial for understanding the national security implications of increasingly capable AI,” said Marina Favaro, national security policy lead at Anthropic. “We’re proud to have helped augment and accelerate the cyber defenders who need it most. This kind of collaboration helps us better understand the national security landscape and feeds directly into our safety processes and how we build Claude.”

The PNNL technology works in concert with MITRE’s open-source “Caldera” software, which helps defenders prepare for and defend against cyberattacks.
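Caldera models individual attack steps as "abilities," each tagged with a MITRE ATT&CK technique, and chains them into an "adversary profile" that an operation then executes in order. The sketch below shows that data model in plain Python dicts; the field names echo Caldera's ability and adversary YAML files, but this is a simplified illustration, not Caldera's API.

```python
# Illustrative sketch of Caldera-style data: an adversary profile chains
# abilities, each mapped to a MITRE ATT&CK technique. Field names echo
# Caldera's YAML format, but this is a simplified illustration, not its API.

abilities = {
    "discover-hosts": {
        "name": "Discover hosts",
        "tactic": "discovery",
        "technique": {"attack_id": "T1018", "name": "Remote System Discovery"},
    },
    "dump-creds": {
        "name": "Dump credentials",
        "tactic": "credential-access",
        "technique": {"attack_id": "T1003", "name": "OS Credential Dumping"},
    },
}

adversary = {
    "name": "Example emulated adversary",
    "atomic_ordering": ["discover-hosts", "dump-creds"],  # execution order
}

def attack_ids(profile: dict, catalog: dict) -> list:
    """List the ATT&CK technique IDs an adversary profile exercises."""
    return [catalog[a]["technique"]["attack_id"] for a in profile["atomic_ordering"]]

print(attack_ids(adversary, abilities))  # ['T1018', 'T1003']
```

Mapping every step to an ATT&CK ID is what lets defenders check a reconstructed attack against the coverage of their existing detections.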