-
Humans Unable to Detect Over a Quarter of Deepfake Speech Samples
New research has found that humans were only able to detect artificially generated speech 73% of the time, with the same accuracy in both English and Mandarin.
-
-
Reached: Milestone in Power Grid Optimization on World’s First Exascale Supercomputer
Ensuring the nation’s electrical power grid can function with limited disruption in the event of a natural disaster, catastrophic weather or a man-made attack is a key national security challenge. Grid management is further complicated by the growing number of renewable energy sources, such as solar and wind, continually being added to the grid, and by the fact that solar panels and other forms of distributed power generation are hidden from grid operators.
-
-
Aging Bridge Detection Through Digital Image Correlation
Researchers have developed a novel and practical method of assessing the mechanical properties of structures, with potential application to structural health monitoring of large structures such as bridges and viaducts.
-
-
Using Artificial Mussels to Monitor Radioactivity in the Ocean
Amid global concern over the pollution of the ocean by radioactive waste, researchers have conducted a study which found that “artificial mussels” (AMs) can effectively measure low concentrations of radionuclides in the sea. It is believed that this technology can be applied as a reliable and effective means of monitoring radioactive contamination around the world.
-
-
Denying Denial-of-Service: Strengthening Defenses Against Common Cyberattack
A Denial-of-Service attack is a cyberattack that makes a computer or other device unavailable to its intended users. This is usually accomplished by overwhelming the targeted machine with requests until normal traffic can no longer be processed. Scientists have developed a better way to recognize a common internet attack, improving detection by 90 percent compared to current methods.
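The flooding mechanism described above can be sketched as a simple request-rate check. This is an illustrative toy detector, not the improved method the scientists developed; the class name, thresholds, and addresses are invented for the example.

```python
from collections import defaultdict, deque

# Toy sliding-window detector: flag a source as suspicious when it sends
# more than `threshold` requests within `window` seconds.
class RateDetector:
    def __init__(self, threshold=100, window=10.0):
        self.threshold = threshold
        self.window = window
        self.history = defaultdict(deque)  # source address -> request timestamps

    def record(self, source, timestamp):
        q = self.history[source]
        q.append(timestamp)
        # Discard timestamps that have fallen out of the window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.threshold  # True = looks like flooding

det = RateDetector(threshold=5, window=1.0)
# One source firing 10 requests in one second trips the detector.
flags = [det.record("10.0.0.1", t / 10) for t in range(10)]
print(flags[-1])  # True
```

Real detection systems must distinguish flash crowds of legitimate users from coordinated floods, which is why simple per-source thresholds like this one fall far short of research-grade detectors.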
-
-
Fighting Fake “Facts” with Two Little Words: Grounding a Large Language Model's Answers in Reality
Asking ChatGPT for answers comes with a risk—it may offer you entirely made-up “facts” that sound legitimate. Despite having been trained on vast amounts of factual data, large language models, or LLMs, are prone to generating false information called hallucinations. Inspired by a phrase commonly used in journalism, the researchers conducted a study on the impact of incorporating the words “according to” in LLM queries.
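The grounding trick above amounts to prefixing a query so the model is nudged toward recalling its training sources rather than improvising. A minimal sketch, assuming a plain text prompt; the function name and exact wording are hypothetical, not the study's actual prompts:

```python
# Illustrative "according to" prompt construction: prepend a grounding
# phrase naming a source before the user's question.
def grounded_query(question, source="Wikipedia"):
    return f"According to {source}, {question}"

plain = "what year was the transistor invented?"
print(grounded_query(plain))
# According to Wikipedia, what year was the transistor invented?
```

The appeal of the technique is that it requires no retraining or retrieval infrastructure, only a change to the query itself.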
-
-
Fact-Checking Found to Influence Recommender Algorithms
Researchers have shown that urging individuals to actively participate in the news they consume can reduce the spread of misinformation. “We don’t have to think of ourselves as captive to tech platforms and algorithms,” said a researcher.
-
-
Fighting Fake News: Using Machine Learning, Blockchain to Counter Misinformation
False information can lead to harmful consequences. How can content creators focus their efforts on areas where the misinformation is likely to do the most public harm? Research offers possible solutions through a proposed machine learning framework, as well as expanded use of blockchain technology.
-
-
Using AI to Protect Against AI Image Manipulation
As we enter a new era where technologies powered by artificial intelligence can craft and manipulate images with a precision that blurs the line between reality and fabrication, the specter of misuse looms large. “PhotoGuard,” developed by MIT CSAIL researchers, prevents unauthorized image manipulation, safeguarding authenticity in the era of advanced generative models.
-
-
New Cipher System Protects Computers Against Spy Programs
Researchers have achieved a breakthrough in computer security with the development of a new and highly efficient cipher for cache randomization. The innovative cipher addresses the threat of cache side-channel attacks, offering enhanced security and exceptional performance.
-
-
De-Risking Authoritarian AI
You may not be interested in artificial intelligence, but it is interested in you. AI-enabled systems make many invisible decisions affecting our health, safety and wealth. They shape what we see, think, feel and choose; they calculate our access to financial benefits as well as our transgressions. In a technology-enabled world, opportunities for remote, large-scale foreign interference, espionage and sabotage, via the internet and software updates, exist at a ‘scale and reach that is unprecedented’.
-
-
Regulate National Security AI Like Covert Action
Congress is trying to roll up its sleeves and get to work on artificial intelligence (AI) regulation. Ashley Deeks writes that only a few of the proposed provisions, however, implicate national security-related AI, and none create any kind of framework regulation for such tools. She proposes crafting a law similar to the War Powers Act to govern U.S. intelligence and military agencies’ use of AI tools.
-
-
Bringing Resilience to Small-Town Hydropower
Using newly developed technologies, researchers demonstrated how hydropower with advanced controls, combined with a mobile microgrid, can enable small communities to maintain critical services during emergencies.
-
-
U.S. Voluntary AI Code of Conduct and Implications for Military Use
Seven technology companies with major artificial intelligence (AI) products, including Microsoft, OpenAI, Anthropic and Meta, made voluntary commitments regarding the regulation of AI. These commitments are non-binding, unenforceable and voluntary, but they may form the basis for a future Executive Order on AI, which would become critical given the increasing military use of AI.
-
-
Geoscientists Aim to Improve Human Security Through Planet-Scale POI Modeling
Geoinformatics engineering researchers developed MapSpace, a publicly available, scalable land-use modeling framework. By providing data broader and deeper than satellite imagery alone can offer, MapSpace can generate population analytics invaluable for urban planning and disaster response.
-
The long view
Encryption Breakthrough Lays Groundwork for Privacy-Preserving AI Models
In an era where data privacy concerns loom large, a new approach in artificial intelligence (AI) could reshape how sensitive information is processed. New AI framework enables secure neural network computation without sacrificing accuracy.
AI-Controlled Fighter Jets May Be Closer Than We Think — and Would Change the Face of Warfare
Could we be on the verge of an era where fighter jets take flight without pilots – and are controlled by artificial intelligence (AI)? US Rear Admiral Michael Donnelly recently said that an upcoming combat jet could be the navy’s last one with a pilot in the cockpit.
AI and the Future of the U.S. Electric Grid
Despite its age, the U.S. electric grid remains one of the great workhorses of modern life. Whether it can maintain that performance over the next few years may determine how well the U.S. competes in an AI-driven world.
Using Liquid Air for Grid-Scale Energy Storage
New research finds liquid air energy storage could be the lowest-cost option for ensuring a continuous power supply on a future grid dominated by carbon-free but intermittent sources of electricity.
Enhanced Geothermal Systems: A Promising Source of Round-the-Clock Energy
With its capacity to provide 24/7 power, many are warming up to the prospect of geothermal energy. Scientists are currently working to advance human-made reservoirs in Earth’s deep subsurface to stimulate the activity that exists within natural geothermal systems.
Experts Discuss Geothermal Potential
Geothermal energy harnesses the heat from within Earth—the term comes from the Greek words geo (earth) and therme (heat). It is an energy source that has the potential to power all our energy needs for billions of years.
Autonomous Weapon Systems: No Human-in-the-Loop Required, and Other Myths Dispelled
“The United States has a strong policy on autonomy in weapon systems that simultaneously enables their development and deployment and ensures they could be used in an effective manner, meaning the systems work as intended, with the same minimal risk of accidents or errors that all weapon systems have,” Michael Horowitz writes.
Are We Ready for a ‘DeepSeek for Bioweapons’?
Anthropic’s Claude 4 is a warning sign: AI that can help build bioweapons is coming, and could be widely available soon. Steven Adler writes that we need to be prepared for the consequences: “like a freely downloadable ‘DeepSeek for bioweapons,’ available across the internet, loadable to the computer of any amateur scientist who wishes to cause mass harm. With Anthropic’s Claude Opus 4 having finally triggered this level of safety risk, the clock is now ticking.”