Leveraging AI to Enhance U.S. Cybersecurity
“CISA understands that AI is the future and intends to move further in that direction,” said S&T Program Manager Benson Macon. “We can provide them with the contract vehicle to support very specific R&D activities. We get them the experts. Contract support is critical at this time, especially when it comes to AI. There is a shortage within this field of expertise. We must bring in the experts that are more specialized and have the knowledge depth, coupled with the technology experience gained in the IT industry.”
Future-Looking AI Exploration
A large part of S&T’s current work on AI cybersecurity applications involves conducting the research needed to chart a future course, anticipating how AI will evolve and may be applied. S&T funded a series of Emerging Technology and Risk Analysis reports, and one, co-authored by former S&T Acting Under Secretary Dan Gerstein, looked specifically at risks and scenarios relating to AI use affecting critical infrastructure. While assessing that both challenges and opportunities exist, researchers pointed to the arrival of commercially available generative AI in March 2023 as an instructive case study for how AI technologies, in this case large language models, are likely to mature and be integrated into society. This disruptive generative AI “bot” was capable of analyzing large quantities of data and generating content, performing a human-like function never before seen. According to the report, the initial rollout illustrated a cycle (development, deployment, identification of shortcomings and other areas of potential use, and rapid updating of AI systems) that will likely be a feature of future AI evolutions.
S&T also partnered with the National Science Foundation (NSF) on the launch of the AI Institute for Agent-Based Cyber Threat Intelligence and Operation (ACTION). Although the institute is only a year old, S&T hopes that the R&D it produces will ultimately inform S&T’s AI Roadmap and help push its development programs forward into the future. “We will take the knowledge learned and preliminary technologies developed to inform our approach to operationalizing AI for cybersecurity,” Coulter said.
The ACTION Institute, a federally funded university consortium, seeks to change the way mission-critical systems are protected against sophisticated, ever-changing security threats. The ultimate goal is to design AI-based intelligent agents for security operations experts that apply complex knowledge representation and logical reasoning, learning to identify flaws, detect attacks, perform attribution, and respond to breaches in a timely, scalable fashion.
“We are using AI to increase the effectiveness of our cybersecurity mechanisms and researching ways we can use distributed learning and automated intelligent agents to monitor the network for anomalies. Can we apply this? How do we make our detection and mitigation techniques automated and identify indicators of compromise?” Coulter said.
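The kind of anomaly monitoring Coulter describes can be illustrated with a deliberately simple sketch. The code below is not an S&T or ACTION system; the function name, the per-host event counts, and the z-score threshold are all hypothetical, standing in for the far more sophisticated distributed-learning agents the article refers to.

```python
# Illustrative sketch only: a toy monitoring "agent" that flags hosts whose
# network activity deviates sharply from the baseline of their peers, a
# naive stand-in for an indicator of compromise. All names and thresholds
# here are hypothetical, not drawn from any S&T or ACTION tooling.
from statistics import mean, stdev

def flag_anomalies(event_counts, threshold=1.5):
    """Return hosts whose event counts sit more than `threshold`
    standard deviations away from the mean across all hosts."""
    counts = list(event_counts.values())
    if len(counts) < 2:
        return []  # no baseline to compare against
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # all hosts identical; nothing stands out
    return [host for host, n in event_counts.items()
            if abs(n - mu) / sigma > threshold]

# Example: one host generating far more connections than its peers.
observed = {"10.0.0.1": 120, "10.0.0.2": 130, "10.0.0.3": 125,
            "10.0.0.4": 118, "10.0.0.5": 9500}
print(flag_anomalies(observed))  # the outlying host is flagged
```

A real deployment would replace this single statistic with learned models and would aggregate signals across many sensors, but the core loop (baseline, deviation, alert) is the same shape.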
Human Teaming Is the Key
The robots won’t be taking over at S&T anytime soon, as it is committed to a human-machine teaming approach, especially when it comes to cybersecurity, with the understanding that keeping humans in the loop offers unique advantages that lend greater control and oversight of quality. The Center for Accelerating Operational Efficiency (CAOE), one of the many DHS Centers of Excellence (COE), has a project in progress on Combining Human Intelligence with Artificial Intelligence for a Usable, Adaptable [Software Bill of Materials], more commonly known as CHIAUS. Also focused on software resilience, this project aims to integrate human-centered interactions with Software Bill of Materials (SBOM) data to give developers and consumers actionable, understandable risk information and foster greater trust in automated decision-making systems. This increased confidence will come from more detailed information on each software component and its chain of custody, as well as the human factors that could ultimately influence results.
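To make the SBOM idea concrete, the sketch below walks a minimal SBOM (its shape loosely follows CycloneDX-style JSON) and surfaces components that match a known advisory. The advisory list, function name, and component names are all hypothetical placeholders, not part of CHIAUS or any real tooling.

```python
# Minimal illustrative sketch: turning SBOM data into human-readable risk
# information. The SBOM layout loosely follows CycloneDX JSON; the advisory
# table and all component names are hypothetical placeholders.
import json

# Hypothetical advisory table mapping (name, version) to a risk note.
ADVISORIES = {
    ("libexample", "1.2.0"): "ADVISORY-0001 (illustrative placeholder)",
}

def risky_components(sbom_json):
    """Return (name, version, advisory) tuples for components in the
    SBOM that match a known advisory."""
    sbom = json.loads(sbom_json)
    hits = []
    for comp in sbom.get("components", []):
        key = (comp.get("name"), comp.get("version"))
        if key in ADVISORIES:
            hits.append((*key, ADVISORIES[key]))
    return hits

# A tiny two-component SBOM; only one component matches an advisory.
sample = json.dumps({
    "bomFormat": "CycloneDX",
    "components": [
        {"name": "libexample", "version": "1.2.0"},
        {"name": "libsafe", "version": "2.0.1"},
    ],
})
for name, version, advisory in risky_components(sample):
    print(f"{name} {version}: {advisory}")
```

The human-centered part of a project like CHIAUS would sit on top of output like this: presenting the match, its chain of custody, and its context so a person can judge the risk rather than rubber-stamping an automated verdict.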
It is S&T’s belief that human-machine teaming will ultimately enhance and increase the effectiveness of cybersecurity, Coulter said, and this belief undergirds S&T’s approach. While the value of human-in-the-loop is generally recognized as a risk-mitigating feature, S&T is mapping out future research to dig deeper into ways to extract maximum value from AI with minimal risk from human error. This future research will seek to identify the most effective human-machine teaming models for homeland security applications, determine ways to increase the effectiveness of teams individually and at scale, and increase trust in both the AI model and the human’s competence as they apply to cybersecurity use cases.
ACTION will look at different components of AI and think through how to build them and how to shape the human interaction with an intelligent agent that is specifically focused on cybersecurity. “How do they interact with each other? How do we pull in thoughts from game theory, social behavioral analysis,” Coulter asked. “The outcome will be that we as an organization will be able to use this tech autonomously to respond to an incident and mitigate it, leading to improved resilience. AI will be used as a tool to create more secure components as part of the design and analyze systems while in operation to identify where potential weak points might be.”
This article is the first in a new feature article series dedicated to S&T’s AI/ML research and development efforts. Additional information can also be found on the Artificial Intelligence and Autonomous Systems webpage.