• The Danger of AI in War: It Doesn’t Care About Self-Preservation

    Recent wargames using artificial-intelligence models from OpenAI, Meta and Anthropic revealed a troubling trend: AI models are more likely than humans to escalate conflicts to kinetic, even nuclear, war.

  • Four Fallacies of AI Cybersecurity

    To date, most AI cybersecurity efforts do not reflect the accumulated knowledge and modern approaches of the cybersecurity field, instead tending toward concepts that have repeatedly been shown not to deliver the desired security outcomes.

  • To Get Off Fossil Fuels, America Is Going to Need a Lot More Electricians

    To cut greenhouse gas emissions on pace with the best available science, the United States must prepare for a monumental increase in electricity use. Burning fossil fuels to heat homes and get around isn’t compatible with keeping the planet at a livable temperature. Appliances that can be powered by clean electricity already exist to meet all of these needs. The problem is, most houses aren’t wired to handle the load from electric heating, cooking, and clothes dryers, along with solar panels and vehicle chargers. And a shortage of skilled labor could derail efforts to “electrify everything.”

  • Floating Piers and Sinking Hopes: China’s Logistics Challenge in Invading Taiwan

    Last month the United States disassembled and removed the floating pier it had assembled at a Gaza beach to take aid deliveries. The pier took almost a month to assemble; waves damaged and nearly destroyed it, and drove ashore the boats that serviced it. And all that was nothing compared with the challenges that China’s armed forces would face in trying to deliver a mountain of personnel, equipment and supplies in an invasion of Taiwan.

  • Space Militarization Could Pose a Challenge to Global Security

    Killer satellites, space nukes, and orbital debris fields that could trigger global collapse are not things we typically think about. But maybe we should. In May 2024, Russia launched a satellite that some observers believe is a weapon system capable of the targeted destruction of other satellites in orbit.

  • China May Be Putting the Great Firewall into Orbit

    The first satellites for China’s ambitious G60 mega-constellation are in orbit in preparation for offering global satellite internet services—and we should worry about how this will help Beijing export its model of digital authoritarianism around the world.

  • How Smart Toys May Be Spying on Kids: What Parents Need to Know

    Toniebox, Tiptoi, and Tamagotchi are smart toys, offering interactive play through software and internet access. However, many of these toys raise privacy concerns, and some even collect extensive behavioral data about children.

  • An Electric Grid that Thinks Ahead

    The reliability of the power grid depends on utility operators who have developed control systems and fail-safes to keep the power flowing. PNNL researchers point toward a smart grid that incorporates machine learning and artificial intelligence inputs, but with human expertise in the loop.

  • Artificial Intelligence at War

    The Gaza war has shown that the use of AI in tactical targeting can drive military strategy by encouraging decision-making bias. At the start of the conflict, an Israel Defense Forces AI system called Lavender apparently identified 37,000 people linked to Hamas. Its function quickly shifted from gathering long-term intelligence to rapidly identifying individual operatives to target.

  • To Win the AI Race, China Aims for a Controlled Intelligence Explosion

    China’s leader Xi Jinping has his eye on the transformative forces of artificial intelligence to revolutionize the country’s economy and society in the coming decades. But the disruptive, and potentially unforeseen, consequences of this technology may be more than the party-state can stomach.

  • What Is ‘Model Collapse’? An Expert Explains the Rumors About an Impending AI Doom

    Artificial intelligence (AI) prophets and newsmongers are forecasting the end of the generative AI hype, with talk of an impending catastrophic “model collapse”. But how realistic are these predictions? And what is model collapse anyway?

  • AI Poses No Existential Threat to Humanity, New Study Finds

    Large language models like ChatGPT cannot learn independently or acquire new skills, meaning they pose no existential threat to humanity.

  • Innovating Firefighting Technology with Smart Solutions to Enhance Urban Resilience

    The increase in high-rise and densely populated urban development has heightened the demand for safety and resilience solutions against emergencies, such as fires. Researchers have created advanced technological solutions to enhance firefighting and urban resilience.

  • No Power, No Operator, No Problem: Simulating Nuclear Reactors to Explore Next-Generation Nuclear Safety Systems

    To create safe and efficient nuclear reactors, designers and regulators need reliable data consistent with real-world observation. Data generated at the facility validates computational models and guides the design of nuclear reactors.

  • Could We Use Volcanoes to Make Electricity?

    It is challenging, but tapping into the Earth’s natural heat can create a renewable, reliable and clean source of energy. As technology improves, more places around the world will turn to geothermal energy to light up people’s lives. Volcanoes are reminders of a great powerhouse deep underground that’s waiting to be harnessed.