ARGUMENT: AI & NATIONAL SECURITY

Preparing National Security Officials for the Challenges of AI

Published 20 June 2022

Artificial intelligence (AI) is one of several rapidly emerging technologies that promise to disrupt not only multiple sectors of the U.S. economy but also the manner in which the U.S. government carries out its foundational responsibility to protect national security consistent with the rule of law and constitutional values. Steve Bunnell writes that “The United States’ national security apparatus is not known for nimbleness, nor is the law that governs it. When it comes to AI, the risk is not just that our generals will fight tomorrow’s war with yesterday’s strategy but also that the United States will lack the legal and policy guardrails that are essential to a lawful, accountable, and ethical protection of the nation’s security.”

This disruption presents an important challenge: the officials and lawyers charged with protecting national security must understand AI well enough to govern its use lawfully, accountably, and ethically.

Steve Bunnell, reviewing James E. Baker's The Centaur's Dilemma: National Security Law for the Coming AI Revolution (Brookings Institution Press, 2020), writes in Lawfare that the hard legal and ethical questions raised by national security uses of AI are already myriad, and they are constantly evolving and expanding. How should the U.S. integrate tools like neural language models and facial and image recognition into its intelligence collection and analysis efforts? How much faith should be placed in machine predictions and identifications that no human can fully understand? What sort of oversight is needed to control for bias, protect privacy, and promote public trust? How can the U.S. combat deepfakes by foreign adversaries without running afoul of the First Amendment and free speech values? What level of AI-based predication is sufficient to warrant what types of intelligence, investigative, or military actions? When a decision is made to launch a drone attack against a terrorist target based on AI-based data and image analyses, are humans in the loop, on the loop, or out of the loop?

Bunnell adds:

James E. Baker’s “The Centaur’s Dilemma” is an excellent place to start for any national security policy official or lawyer looking to understand not only what AI can do in a security context but also the current legal and ethical frameworks (or lack thereof) that guide its use in the fast-moving world of national security threats and military operations. “The Centaur’s Dilemma” is a thoughtful and crisply written exploration of the implications of AI and the legal, ethical, and normative frameworks that govern and channel the use of AI in the national security realm.

The topic is of fundamental importance to global security in the 21st century. As the war in Ukraine is demonstrating, AI can play a critical role in both kinetic and nonkinetic domains.

….

AI is also being used extensively in the information war. For example, Ukrainian officials, working with citizen volunteers, are reportedly using facial recognition software and social media data to identify the bodies of Russian soldiers killed in Ukraine, notify their families, and provide real-time information about the tragic costs of the war in an effort to counter Russian government censorship and internal propaganda. 

The role of AI in the cyber domain is less public. But it is safe to assume that AI-powered cyberattacks and countermeasures—such as malware that mutates to try to avoid detection by anti-virus software, or the automated creation of highly personalized (and, hence, hard to detect) spear phishing attacks—are critical factors not just in the jockeying for advantage on the battlefield but also as a means to degrade or protect critical infrastructure and, more generally, to create (or defend against) economic and political pressure, confusion, and chaos.

Bunnell concludes:

The United States’ national security apparatus is not known for nimbleness, nor is the law that governs it. When it comes to AI, the risk is not just that our generals will fight tomorrow’s war with yesterday’s strategy but also that the United States will lack the legal and policy guardrails that are essential to a lawful, accountable, and ethical protection of the nation’s security. There is also the further risk that policymakers and operational decision-makers will find themselves making recommendations and decisions involving technologies they barely understand. A basic level of tech literacy among policymakers and operational officials is a precondition for those officials being able to sensibly develop and implement the new laws and new policies that AI requires. “The Centaur’s Dilemma” is not just an important contribution to the scholarly thinking around national security and AI. It is a practical reference book, intended, first and foremost, to empower those in the arena. National security officials and lawyers would be well advised to read it carefully and to keep a copy close at hand.