-
De-Risking Authoritarian AI
You may not be interested in artificial intelligence, but it is interested in you. AI-enabled systems make many invisible decisions affecting our health, safety and wealth. They shape what we see, think, feel and choose, and they calculate our access to financial benefits as well as our transgressions. In a technology-enabled world, opportunities for remote, large-scale foreign interference, espionage and sabotage — via the internet and software updates — exist at a “scale and reach that is unprecedented.”
-
-
Regulate National Security AI Like Covert Action
Congress is trying to roll up its sleeves and get to work on artificial intelligence (AI) regulation. Ashley Deeks writes that only a few of the proposed provisions, however, implicate national security-related AI, and none create any kind of framework regulation for such tools. She proposes crafting a law similar to the War Powers Act to govern U.S. intelligence and military agencies’ use of AI tools.
-
-
Oppenheimer and the Pursuit of Nuclear Disarmament
Stanford scholar and political scientist Scott Sagan talks about what the film “Oppenheimer” got right – and missed – about creating the world’s first atomic bomb: the politics of nuclear proliferation, Oppenheimer’s attempts after World War II to constrain the new military technology, and the frightening role nuclear weapons play today. “I think there’s a broader tragedy that came out less clearly: the political tragedy of the nuclear arms race,” he says.
-
-
U.S. Voluntary AI Code of Conduct and Implications for Military Use
Seven technology companies with major artificial intelligence (AI) products, including Microsoft, OpenAI, Anthropic and Meta, made voluntary commitments regarding the regulation of AI. These commitments are non-binding and unenforceable, but they may form the basis for a future Executive Order on AI, which will become critical given the increasing military use of AI.
-
-
Geoscientists Aim to Improve Human Security Through Planet-Scale POI Modeling
Geoinformatics engineering researchers developed MapSpace, a publicly available, scalable land-use modeling framework. By providing data characteristics broader and deeper than satellite imagery alone, MapSpace can generate population analytics invaluable for urban planning and disaster response.
-
-
Closer Look at “Father of Atomic Bomb”
Robert Oppenheimer is often referred to as the “father of the atomic bomb.” But he also had his federal security clearance revoked during the McCarthy era, a disputed decision that was only posthumously reversed last year. A Harvard historian unwinds the complexities of J. Robert Oppenheimer as scientist and legend.
-
-
The Mythical Tie Between Immigration and Crime
Opponents of immigration often argue that immigrants drive up crime rates. Research by Stanford’s Ran Abramitzky and co-authors uncovers the most extensive evidence to date that immigrants are less likely to be imprisoned than U.S.-born individuals.
-
-
Researching the Future of Emergency Management
The research will consider emerging innovations in areas such as artificial intelligence, geospatial intelligence, machine learning, data analytics, and decision aids, to equip and support emergency managers for the future.
-
-
Roles and Implications of AI in the Russian-Ukrainian Conflict
Artificial intelligence (AI) is emerging as a significant asset in the ongoing Russian-Ukrainian conflict. Specifically, it has become a key data analysis tool that helps operators and warfighters make sense of the growing volume of information generated by the numerous systems, weapons and soldiers in the field.
-
-
Proposed U.S. Missile Defense Plan a Source of White House, Congress Disagreement
The $874 billion budget passed by the House on Friday calls for the military to maintain a “credible nuclear capability” to deter adversaries, while developing and deploying layered defense systems that can defeat complex missile threats “in all phases of flight.” The White House argues that this contradicts the balance of terror on which the nuclear relations of the great powers have been based since the 1960s, and which is embedded in nuclear arms control treaties.
-
-
The Uncertain Future of the U.S. Military’s All-Volunteer Force
This year marks the fiftieth anniversary of the U.S. all-volunteer military force. It also coincides with one of the worst recruiting years for the U.S. military since 1973. The U.S. military has a manpower problem, and it is not just due to today’s recruiting shortages. It is time for a comprehensive plan to solve the personnel shortfalls.
-
-
Satellite Security Lags Decades Behind the State of the Art
Thousands of satellites are currently orbiting the Earth, and there will be many more in the future. Researchers analyzed three current low-earth orbit satellites and found that, from a technical point of view, hardly any modern security concepts were implemented. Various security mechanisms that are standard in modern mobile phones and laptops were not to be found.
-
-
Americans in Former Confederate States More Likely to Say Violent Protest against Government Is Justified, 160 Years After Gettysburg
Americans living in the Confederate states that violently rebelled against the United States during the Civil War express significantly greater support for the notion that violent protest against the government can be justified. Residents of the Border States, the slave states that did not secede from the Union, are also more likely than residents of Union states to say such protest can be justified.
-
-
How an “AI-tocracy” Emerges
Many scholars, analysts, and other observers have suggested that resistance to innovation is an Achilles’ heel of authoritarian regimes. But in China, the use of AI-driven facial recognition helps the regime repress dissent while enhancing the technology, researchers report.
-
-
Preparing for Great Power Conflict
How has the military experience gained by both the U.S. military and the PLA since 2001 shaped the way both militaries train? What effect do these experiences and training trends have on readiness for major power conflict?
-
More headlines
The long view
Tantalizing Method to Study Cyberdeterrence
Tantalus is unlike most war games because it is experimental rather than experiential: the immersive game combines scientific rigor and quantitative assessment methods drawn from the experimental sciences, and this experimental approach to war gaming yields data with real-world relevance for understanding cyberattacks.
Using Drone Swarms to Fight Forest Fires
Forest fires are becoming increasingly catastrophic across the world, accelerated by climate change. Researchers are using multiple swarms of drones to tackle natural disasters like forest fires.
Testing Cutting-Edge Counter-Drone Technology
Drones have many positive applications, but bad actors can use them for nefarious purposes. Two recent field demonstrations brought government, academia, and industry together to evaluate innovative counter-unmanned aircraft systems.
European Arms Imports Nearly Double, U.S. and French Exports Rise, and Russian Exports Fall Sharply
States in Europe almost doubled their imports of major arms (+94 per cent) between 2014–18 and 2019–23. The United States increased its arms exports by 17 per cent between 2014–18 and 2019–23, while Russia’s arms exports halved. For the first time, Russia fell to third place among arms exporters, just behind France.
How Climate Change Will Affect Conflict and U.S. Military Operations
“People talk about climate change as a threat multiplier,” said Karen Sudkamp, an associate director of the Infrastructure, Immigration, and Security Operations Program within the RAND Homeland Security Research Division. “But at what point do we need to start talking about the threat multiplier actually becoming a significant threat all its own?”
The Tech Apocalypse Panic is Driven by AI Boosters, Military Tacticians, and Movies
From popular films like War Games and The Terminator to a U.S. State Department-commissioned report on the security risks of weaponized AI, there has been a tremendous amount of hand-wringing and nervousness about how so-called artificial intelligence might end up destroying the world. There is one easy way to avoid much of this and prevent a self-inflicted doomsday: don’t give computers the capability to launch devastating weapons.