-
Microsoft Failed to Disclose Key Details About Use of China-Based Engineers in U.S. Defense Work, Record Shows
The tech giant is required to regularly provide U.S. officials with its plan for keeping government data safe from hacking. Yet a copy of Microsoft’s security plan obtained by ProPublica makes no reference to the company’s China-based operations.
-
-
President Trump’s War on “Woke AI” Is a Civil Liberties Nightmare
The White House’s recently unveiled “AI Action Plan” wages war on so-called “woke AI”—including large language models (LLMs) that provide information inconsistent with the administration’s views on climate change, gender, and other issues. The plan would force developers to roll back efforts to reduce biases—making the models much less accurate, and far more likely to cause harm, especially in the hands of the government.
-
-
Bookshelf: Smartphones Shape War in Hyperconnected World
The smartphone is helping to shape the conduct and representation of contemporary war. A new book argues that as an operative device, the smartphone is now “being used as a central weapon of war.”
-
-
Nuclear Waste Could Be a Source of Fuel in Future Reactors
In theory, nuclear fusion—a process that fuses atoms together, releasing heat to turn generators—could provide vast energy supplies with minimal emissions. But nuclear fusion is an expensive prospect because one of its main fuels is a rare version of hydrogen called tritium. Now, researchers are developing new systems to use nuclear waste to make tritium.
-
-
New Approach Detects Adversarial Attacks in Multimodal AI Systems
New vulnerabilities have emerged with the rapid advancement and adoption of multimodal foundational AI models, significantly expanding the potential for cybersecurity attacks. Topological signatures are key to revealing attacks and identifying the origins of threats.
-
-
How Poisoned Data Can Trick AI − and How to Stop It
The quality of the information that the AI offers depends on the quality of the data it learns from. If someone tries to interfere with those systems by tampering with their training data—either the initial data used to build the system or data the system collects as it’s operating to improve—trouble could ensue.
-
-
Filtered Data Stops Openly Available AI Models from Performing Dangerous Tasks
Researchers have reported a major advance in safeguarding open-weight language models. By filtering out potentially harmful knowledge during training, the researchers were able to build models that resist subsequent malicious updates – especially valuable in sensitive domains such as biothreat research.
-
-
Asteroid Hunting Using Heliostats?
Most planetary defense efforts use observatory-grade telescopes to produce images of the stars. Within those images, computational methods identify streaks, which are asteroids. This process is precise but time-consuming, and building new observatories is expensive. A researcher says that heliostats, which typically turn solar energy into electricity, could help find asteroids at night.
-
-
Risk Assessment with Machine Learning
Researchers use geological survey data and machine learning algorithms to accurately predict liquefaction risk in earthquake-prone areas.
-
-
HHS Scraps Further Work on Life-Saving mRNA Vaccine Platform
In what experts say will hobble pandemic preparedness, HHS Secretary Robert F. Kennedy Jr. announced the dismantling of the country’s mRNA vaccine-development programs—the same innovation that allowed rapid scale-up of COVID-19 vaccines during the public health emergency.
-
-
Incentives for U.S.-China Conflict, Competition, and Cooperation Across Artificial General Intelligence’s Five Hard National Security Problems
The prospect of either the United States or the People’s Republic of China—or both—achieving artificial general intelligence (AGI) is likely to heighten tensions and could even increase the risk of competition spiraling into conflict. But the emergence of AGI could also create incentives for risk reduction and cooperation. We argue that both will be not only possible but essential.
-
-
Hundreds of Old EV Batteries Have New Jobs in Texas: Stabilizing the Power Grid
After reaching the end of their automotive lives, the batteries are being reused to provide lower-cost grid energy storage.
-
-
To Better Detect Chemical Weapons, Materials Scientists Are Exploring New Technologies
Chemical warfare is one of the most devastating forms of conflict. It leverages toxic chemicals to disable, harm or kill without any physical confrontation. Across various conflicts, it has caused tens of thousands of deaths and affected over a million people through injury and long-term health consequences.
-
-
DHS S&T Launches Third Phase of Industry Competition to Develop Remote Identity Verification Tech
DHS S&T has announced the third phase of the Remote Identity Validation Rally (RIVR), which challenges the private sector to deliver secure, accurate, and user-friendly technologies that can combat identity fraud when users apply for government services, open bank accounts, or verify social media accounts.
-
-
Cybersecurity Education in the Age of AI: Rethinking the Need for Human Capital in National Cyber Defense
Just five years ago, headlines were filled with urgent calls for the United States to drastically increase its output of cybersecurity professionals. Fast forward to 2025, and the proliferation of AI—especially generative and autonomous models—has transformed both the threats we face and the tools we use to defend against them. AI-driven cybersecurity software now automates many of the functions that once required a skilled human analyst, and the argument is made that AI may soon render many human cybersecurity roles obsolete.
-