Not the Usual Suspects: New Interactive Lineup Boosts Eyewitness Accuracy
Allowing eyewitnesses to dynamically explore digital faces using a new interactive procedure can significantly improve identification accuracy compared to the video lineup and photo array procedures used by police worldwide.
Securing the Food Pipeline from Cyberattacks
Sensors detect how much food herds of cattle are eating. Machines take thousands of photos of fruit per second to spot defects and sort produce by quality. Robots pack fruit and vegetables into bags and boxes for grocery-store shelves. As the sector grows more automated, researchers are working to protect food and agriculture from cyberattacks.
Scorpius Images to Test Nuclear Stockpile Simulations
One thousand feet below the ground, three national defense labs and a remote test site are building Scorpius — a machine as long as a football field — to create images of plutonium as it is compressed with high explosives, creating conditions that exist just prior to a nuclear explosion. The Sandia injector is key to validating plutonium pit performance.
Chi-Nu Experiment Concludes with Data to Support Nuclear Security, Energy Reactors
The Chi-Nu project, a years-long experiment measuring the energy spectrum of neutrons emitted from neutron-induced fission, recently concluded the most detailed and extensive uncertainty analysis of the three major actinide elements — uranium-238, uranium-235 and plutonium-239.
AI-Driven Earthquake Forecasting Shows Promise in Trials
A new attempt to predict earthquakes with the aid of artificial intelligence has raised hopes that the technology could one day be used to limit earthquakes’ impact on lives and economies. During a seven-month trial in China, researchers used an AI algorithm to correctly predict 70% of earthquakes a week before they happened.
Computer Scientists Awarded $3M to Bolster Cybersecurity
A $3 million grant from DARPA, the research and development arm of the U.S. Department of Defense, aims to leverage reinforcement learning to make computer networks stronger, more dynamic, and more secure.
Using Petroleum Reservoirs to Store Carbon
Oil and gas produced from reservoirs are traditionally thought of as sources of carbon dioxide and other greenhouse gases. In recent years, scientists in government and industry have been looking more at oil and gas reservoirs as places to store the very carbon that was previously taken out of the reservoirs. Injecting carbon dioxide into oil reservoirs also increases oil production in areas that have already produced a lot of oil.
NSF Backs Processor Design, Chip Security Research
Rice University computer scientists have won two grants from the National Science Foundation to explore new information-processing technologies and applications that pair seamlessly co-designed hardware and software, enabling more effective and efficient pattern matching over data streams.
AI Risks to the Financial Sector
In a world where AI algorithms can already analyze real-time financial information and make high-stakes trading decisions with little or no human oversight, our financial regulations are failing to keep up. A professor of computer science and engineering identifies new concerns that recent AI advances pose for financial markets.
AI Disinformation Is a Threat to Elections − Learning to Spot Russian, Chinese and Iranian Meddling in Other Countries Can Help the U.S. Prepare for 2024
Elections around the world are facing an evolving threat from foreign actors, one that involves artificial intelligence. Countries trying to influence each other’s elections entered a new era in 2016, when the Russians launched a series of social media disinformation campaigns targeting the U.S. presidential election. But there is a new element: generative AI and large language models. These can quickly and easily produce endless reams of text on any topic, in any tone, and from any perspective, making them tools uniquely suited to internet-era propaganda. The sooner we know what to expect, the better we can deal with what comes.
U.S.-China “Tech War”: AI Sparks First Battle in Middle East
The U.S. has restricted exports of some computer chips to the Middle East, to stop AI-enabling chips from getting to China. But there’s no information on which countries are affected, or how chips would get to China. What is becoming clear is that AI could well become a new source of friction between democratic and autocratic states.
It's Easier to Get Valuable Metals from Battery Waste If You “Flash” It
Demand for valuable metals needed in batteries is poised to grow over the coming decades in step with the growth of clean energy technologies, and the best place to source them may be by recycling spent batteries.
Desalination System Could Produce Freshwater That Is Cheaper Than Tap Water
Researchers are aiming to turn seawater into drinking water with a completely passive device that is inspired by the ocean, and powered by the sun. They developed a solar-powered device that avoids salt-clogging issues of other designs.
Assessing the Risks of Existential Terrorism and AI
Professor Gary Ackerman, associate dean at the College of Emergency Preparedness, Homeland Security and Cybersecurity (CEHC) at SUNY-Albany, recently published an article, “Existential Terrorism: Can Terrorists Destroy Humanity?” which he co-authored with Zachary Kallenborn of CSIS. The article explores the plausibility of terrorist organizations using emerging technologies such as AI to enact existential harm, including human extinction.
New Center for AI Security Research to Study AI’s Impacts on Society, Security
The Department of Energy’s Oak Ridge National Laboratory announced the establishment of the Center for AI Security Research, or CAISER, to address threats already present as governments and industries around the world adopt artificial intelligence and take advantage of the benefits it promises in data processing, operational efficiencies and decision-making.
More headlines
The long view
Autonomous Vehicle Technology Vulnerable to Road Object Spoofing and Vanishing Attacks
Researchers have demonstrated the potentially hazardous vulnerabilities associated with the technology called LiDAR, or Light Detection and Ranging, many autonomous vehicles use to navigate streets, roads and highways. The researchers have shown how to use lasers to fool LiDAR into “seeing” objects that are not present and missing those that are – deficiencies that can cause unwarranted and unsafe braking or collisions.
Tantalizing Method to Study Cyberdeterrence
Tantalus is unlike most war games because it is experimental rather than experiential: the immersive game pairs scientific rigor and quantitative assessment methods with the experimental sciences, and this experimental approach to war gaming yields insightful data about real-world cyberattacks.
Prototype Self-Service Screening System Unveiled
TSA and DHS S&T unveiled a prototype checkpoint technology, the self-service screening system, at Harry Reid International Airport (LAS) in Las Vegas, NV. The aim is to provide a near self-sufficient passenger screening process by letting passengers directly receive on-person alarm information and resolve those alarms themselves.
Falling Space Debris: How High Is the Risk I'll Get Hit?
An International Space Station battery fell back to Earth and, luckily, splashed down harmlessly in the Atlantic. Should we have worried? Space debris reenters our atmosphere every week.
Testing Cutting-Edge Counter-Drone Technology
Drones have many positive applications, but bad actors can use them for nefarious purposes. Two recent field demonstrations brought government, academia, and industry together to evaluate innovative counter-unmanned aircraft systems.
Strengthening the Grid’s ‘Backbone’ with Hydropower
Argonne-led studies investigate how hydropower could help add more clean energy to the grid, how it generates value as grids add more renewable energy, and how liner technology can improve hydropower efficiency.
The Tech Apocalypse Panic is Driven by AI Boosters, Military Tacticians, and Movies
From popular films like WarGames or The Terminator to a U.S. State Department-commissioned report on the security risk of weaponized AI, there has been a tremendous amount of hand-wringing and nervousness about how so-called artificial intelligence might end up destroying the world. There is one easy way to avoid a lot of this and prevent a self-inflicted doomsday: don’t give computers the capability to launch devastating weapons.