Five Questions: RAND’s Jim Mitre on Artificial General Intelligence and National Security

There are people in the tech world who are worried about how capable these models are becoming and sounding the alarm for the U.S. government to grapple with the implications. But they’re a little out of their depth once they start weighing in on what that means for national security. On the other hand, there are a lot of people in the national security community who aren’t up to speed on where this technology might be going. We wanted to just level-set everybody, to say, ‘Look, from our perspective, AGI presents five hard problems for U.S. national security. Any sensible strategy needs to think through the implications and not over-optimize for any one.’

What would be an example of that?
There have been calls for the U.S. government to launch a Manhattan Project–like effort to achieve artificial general intelligence. And if you’re focused on ensuring the U.S. has the lead in this technology, that makes perfect sense. But that might spur the Chinese to race us there, which would aggravate global instability. Some people have also called for a moratorium on developing these technologies until we’re certain we can control them. That takes care of one problem—a rogue AI getting out of the box. But then you risk enabling China or some other country to race ahead and maybe even weaponize this technology.


So how should leaders be thinking about this? How do they guard against these risks without stifling innovation?
At a minimum, they need to take steps to avoid technological surprise. They need a better sense of the current state of the technology, the capabilities it could present, and what that means from a national security perspective. They also need to get this technology into the hands of operators. For example, frontier AI models right now are really good at computer programming. That raises a natural question: How might this technology affect cyber offense and defense? It would be sensible to have cyber operators working with the most state-of-the-art models, experimenting with them, and learning their potential and limitations.

You’ve been studying these issues for years. Was there any one development or breakthrough that really made you sit up and take notice?
A lightbulb moment? I'm in awe of breakthroughs that happen almost on a monthly basis. But what has most seized my attention about this technology is how creative people are in finding new ways to use it. Unlike "narrow AI," which is built to solve a specific problem, generative AI is a general-purpose technology with a range of potential uses. It's a form of pure cognition, and one with its own agency. Understanding its possible applications, for good and ill, is endlessly fascinating and terrifying to consider.

As the director of RAND Global and Emerging Risks, what’s one risk that more people should be paying attention to right now?
Unfortunately, when it comes to global and emerging risks, business is booming. There’s no shortage of risks we need to grapple with. AI is a big one. Synthetic biology is a big one. China in its own right is a risk to global security. It’s a cliché in the national security community to say that this moment in time is more dangerous than others. But it certainly does feel that way right now.

____________________________________

AGI’s Five Hard Problems
Artificial general intelligence, or AGI, refers to an AI model that produces human-level, or even superhuman-level, intelligence across a wide variety of cognitive tasks. A RAND initiative, Geopolitics of Artificial General Intelligence, identified five hard national security problems such a model could present.

Wonder Weapons
AGI could enable the sudden creation of a decisive military capability, or “wonder weapon,” granting a significant, potentially unassailable, advantage to the first nation that achieves it. This could dramatically shift the military balance through breakthroughs in areas like cyber warfare or autonomous weapons systems.

Systemic Shifts in Global Power
Beyond singular weapons, AGI could fundamentally alter the instruments of national power and societal competitiveness, leading to systemic shifts in the global order. Nations best able to adapt and leverage AGI across economic, military, and societal domains may gain greatly expanded global influence.

Empowering Nonexperts with WMD Capabilities
AGI could act as a “malicious mentor,” accelerating the learning curve for nonexperts to develop weapons of mass destruction by distilling complex information into accessible instructions. This dramatically widens the pool of actors potentially capable of creating highly lethal biological, chemical, or cyber threats.

Artificial Entities with Agency
AGI might lead to artificial entities that operate with their own agency, potentially beyond human control or alignment with human intentions, posing direct threats to global security. Such entities could make decisions with far-reaching consequences, optimize systems in harmful ways, or resist human oversight.

Instability
The intense global competition to develop AGI, irrespective of its eventual realization, fosters a period of heightened strategic instability marked by arms race dynamics and increased risk of miscalculation. This pursuit itself, driven by fear and ambition, could precipitate conflict and destabilize global security.

Doug Irving is a communications analyst at RAND. This article is published courtesy of RAND.
