AI & NATIONAL SECURITY
Five Questions: RAND's Jim Mitre on Artificial General Intelligence and National Security
A computer with human—or even superhuman—levels of intelligence remains, for now, a what-if. But AI labs around the world are racing to get there. U.S. leaders need to anticipate the day when that what-if becomes “What now?”
A recent RAND paper lays out five hard national security problems that will become very real the moment an artificial general intelligence comes online. Researchers did not try to guess whether that might happen in a few years, in a few decades, or never. They made only one prediction: If we ever get to that point, the consequences will be so profound that the U.S. government needs to take steps now to be ready for them.
RAND vice president and national security expert Jim Mitre wrote the paper with senior engineer Joel Predd.
Mitre was working on Wall Street on 9/11. He abandoned his career in finance and refocused on national security. He cofounded a private terrorism research organization, then moved to the Pentagon. He has since served in several leadership roles in the Department of Defense. His most recent job before coming to RAND was helping the Pentagon establish its Chief Digital and AI Office. He now directs RAND Global and Emerging Risks, a division focused on the most consequential challenges facing human civilization.
“What I worry about,” he said, “is that if artificial general intelligence ever does come about, the U.S. government is not well prepared to handle it. We don’t want to stumble into that world. Our job at RAND is to help anticipate what some of the choices are going to be, some of the trade-offs—and to make sure we think through them in advance.”
Q: What do you see as the most plausible scenario for how AI develops over the next five years?
A: To be honest, I don't know. What we hear from a lot of the technologists working at the forefront of AI is that we might be on the threshold of some significantly more capable model, which they refer to as artificial general intelligence. This is plausible. It may happen, and because it would be of such high consequence if it does, it's prudent to think through what that would mean.