Leveraging AI to Enhance U.S. Cybersecurity

Published 18 October 2024

Artificial intelligence (AI) and machine learning (ML) offer the homeland an unprecedented opportunity to enhance its cybersecurity posture. The Science and Technology Directorate (S&T) is exploring how new advances in this technology can quickly process large amounts of data and deploy models that detect threats, increase resilience, and strengthen supply chain oversight.

When you hear about AI in the news, it can sound as if the robots of science fiction will be taking over soon. What isn’t as commonly covered is how much potential the technology has to make things safer from a cybersecurity perspective. The sheer volume of data AI can process in a compressed period of time, combined with its ability to contextualize that data using ML, has vast implications for those working to make the nation more cyber secure.
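To make that concrete, consider a minimal sketch of the kind of ML-based threat detection described here: a model fitted to routine network telemetry that flags statistical outliers for analyst review. The use of scikit-learn, and the flow features and values below, are illustrative assumptions, not a description of any S&T or CISA system.

```python
# Illustrative sketch only: a minimal ML-based anomaly detector over network
# flow records, using scikit-learn's IsolationForest. The feature layout and
# values are hypothetical; a real deployment would ingest live telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Hypothetical per-flow features: [bytes sent, packet count, duration (s)].
normal_flows = rng.normal(loc=[500.0, 10.0, 2.0],
                          scale=[100.0, 3.0, 0.5],
                          size=(10_000, 3))

# A burst that looks nothing like the baseline, e.g., possible exfiltration.
suspicious_flow = np.array([[50_000.0, 400.0, 0.1]])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_flows)

# predict() returns 1 for inliers and -1 for anomalies.
print(detector.predict(suspicious_flow))  # [-1]: flagged for analyst review
```

The appeal of an approach like this is scale: once fitted, the model can score millions of flow records far faster than human review, surfacing only the anomalies that merit an analyst’s attention.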

S&T is exploring the many ways this technology and its newer applications can support the national security mission, in line with the DHS AI Roadmap and its “Protect AI systems from cybersecurity threats and guard against AI-enabled cyberattacks” workstream. According to Donald Coulter, S&T’s Senior Science Advisor on Cybersecurity, AI can provide never-before-imagined solutions and protections for the most complex cybersecurity problems.

S&T is working on several initiatives intended to help inform the Cybersecurity and Infrastructure Security Agency’s (CISA) AI strategy. For instance, S&T has a project underway to research advanced methods for enabling real-time management of cyber threats to critical infrastructure. Another project is increasing the resilience of software analysis tools by helping to identify and mitigate weaknesses in ML-based reverse engineering tools, part of an overarching strategy to assess and mitigate the risks of adversarial attacks on AI-based systems. This effort involves determining whether certain ML algorithms are susceptible to subversion by sophisticated adversaries, which could make it difficult to understand and mitigate attacks on our models; a simplified sketch of that kind of evasion follows below.

There is also work underway to help CISA launch a testbed that provides a secure, connected multi-cloud environment to support AI development and testing. Since AI systems are software systems, ensuring that they are designed and deployed securely is an extension of CISA’s cybersecurity mission.
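As a simplified illustration of the subversion risk mentioned above, the sketch below runs a fast-gradient-sign-style evasion test against a toy classifier. The pure-NumPy model, its weights, and the features are hypothetical stand-ins for an ML-based analysis tool; they are assumptions for illustration, not S&T’s actual tooling or methods.

```python
# Illustrative sketch only: a fast-gradient-sign-style evasion test against a
# toy logistic-regression "detector". The weights and features are hypothetical
# stand-ins for an ML-based reverse engineering or malware analysis tool.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Pretend these weights came from a trained "malicious binary" classifier
# operating on normalized static features of an executable.
w = np.array([1.5, -0.7, 2.1, 0.3])
b = -0.5

x = np.array([0.6, 0.3, 0.5, 0.2])            # sample the model flags as malicious
p = sigmoid(w @ x + b)
print(f"score before perturbation: {p:.3f}")  # ~0.786 -> classified malicious

# The gradient of the score with respect to the input is p * (1 - p) * w, so
# stepping against its sign (an FGSM-style move) lowers the malicious score.
x_adv = np.clip(x - 0.35 * np.sign(p * (1 - p) * w), 0.0, 1.0)
print(f"score after perturbation:  {sigmoid(w @ x_adv + b):.3f}")  # ~0.434 -> evades
```

Probing a model with perturbations like this is one way an assessment can reveal whether small, deliberate input changes flip its decisions, which is precisely the kind of weakness the resilience work aims to identify and mitigate.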