ARGUMENT: REGULATING AI

Regulate National Security AI Like Covert Action

Published 31 July 2023

Congress is trying to roll up its sleeves and get to work on artificial intelligence (AI) regulation. Ashley Deeks writes that only a few of these proposed provisions, however, implicate national security-related AI, and none creates any kind of framework regulation for such tools. She proposes a law, modeled on statutes like the covert action statute and the War Powers Resolution, to govern U.S. intelligence and military agencies' use of AI tools.

Congress is trying to roll up its sleeves and get to work on artificial intelligence (AI) regulation. In June, Sen. Chuck Schumer (D-N.Y.) launched a framework to regulate AI, a plan that offered high-level objectives and a commitment to convene nine panels to discuss hard questions, but no specific legislative language. Sen. Michael Bennet (D-Colo.) has advocated for a new federal agency to regulate AI. With others, Rep. Ted Lieu (D-Calif.) is proposing to create a National Commission on Artificial Intelligence. And at a more granular level, Sen. Gary Peters (D-Mich.) has proposed three AI bills that focus on the government as a major purchaser and user of AI, requiring agencies to be transparent about their use of AI, to create an appeals process for citizens wronged by automated government decision-making, and to appoint chief AI officers.

Only a few of these proposed provisions, however, implicate national security-related AI, Ashley Deeks writes in Lawfare, and none creates any kind of framework regulation for such tools.

Deeks continues:

Yet AI systems developed and used by U.S. intelligence and military agencies seem just as likely to create significant risks as publicly available AI does. These risks will likely fall on the U.S. government itself, not on consumers, who are the focus of most of the current legislative proposals. If a national security agency deploys an ill-conceived or unsafe AI system, it could derail U.S. military and foreign policy goals, destabilize interstate relations, and invite other states to retaliate in kind. Both the Defense Department and the intelligence community have issued policy documents reflecting their interest in ensuring that they deploy only reliable AI, but history suggests that it is still important to establish a basic statutory framework within which these agencies must work.

This challenge—trying to ensure that the risks that the U.S. national security bureaucracy takes are sensible, deliberate, and manageable—is not entirely novel. Congress has enacted a number of laws that create formalized processes by which the president must notify it of certain high-risk national security measures that he chooses to take. Some of these statutes create a baseline standard for presidential action, such as the covert action statute’s requirement that the president find that a particular action is “necessary to support identifiable foreign policy objectives of the United States and is important to the national security of the United States.” That statute also requires the president to share covert action findings with congressional leadership and intelligence committees, generally before the action takes place. The War Powers Resolution is another example: It requires the president to notify Congress within 48 hours when he introduces U.S. forces into hostilities without underlying congressional authorization to do so, and it requires him to remove those forces from hostilities within 60 or 90 days if Congress does not subsequently authorize their deployment.

Deeks notes that as with the War Powers Resolution, the covert action statute helps ensure that the president's use of a high-risk tool is legal and carefully evaluated, and it holds the president directly accountable for the decision to use it.

Several elements in the covert action statute, including a baseline standard, presidential authorization, and congressional reporting rules, would translate well into a new law that addresses high-risk uses of national security AI not otherwise covered by existing reporting statutes. The purpose of an AI framework statute would be to ensure that the president himself approves the deployment of such high-risk uses, that senior policymakers and lawyers in the executive branch have the opportunity to debate those uses, and that Congress is aware that the United States is using such tools.

Deeks concludes:

Congress could modify this proposal along various axes. For example, if Congress and the executive believed that requiring presidential signoff was too onerous or time-consuming, an alternative would be to require Cabinet-level officials to sign AI determinations for any high-risk AI that their agencies deploy and submit those determinations to their relevant committees. Likewise, Congress could make the list of what AI tools would be covered more or less capacious. The Defense Department and the intelligence community might require different framework statutes, given that they have different missions, underlying statutory authorities, and oversight committees.

This type of framework statute would likely prompt the executive to establish an interagency process to draft the AI determinations and review the legality of their contents. In the covert action setting, various administrations have established an interagency lawyers’ group to review draft findings, which helps ensure that the proposed actions do not violate U.S. law. A framework statute for national security AI would likewise ensure that both Congress and the president know when U.S. national security agencies are deploying the most sensitive, powerful, and risky AI tools to make battlefield and intelligence decisions.