-
Aging Bridge Detection Through Digital Image Correlation
Researchers have developed a novel, practical method for assessing the mechanical properties of structures, with potential application to structural health monitoring of large structures such as bridges and viaducts.
-
-
Using Artificial Mussels to Monitor Radioactivity in the Ocean
Amid global concern over radioactive waste pollution in the ocean, researchers have conducted a study which found that “artificial mussels” (AMs) can effectively measure low concentrations of radionuclides in the sea. It is believed that this technology can be applied as a reliable and effective solution for monitoring radioactive contamination around the world.
-
-
Denying Denial-of-Service: Strengthening Defenses Against Common Cyberattack
A Denial-of-Service attack is a cyberattack that makes a computer or other device unavailable to its intended users. This is usually accomplished by overwhelming the targeted machine with requests until normal traffic can no longer be processed. Scientists have developed a better way to recognize a common internet attack, improving detection by 90 percent compared to current methods.
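The flooding behavior described here can be illustrated with a simple sliding-window rate counter. This is a generic sketch for intuition only; the summary does not detail the researchers' actual detection method, and the class name and thresholds below are illustrative assumptions.

```python
from collections import deque
import time


class RateDetector:
    """Flag a source as suspicious when its request rate exceeds a threshold.

    Illustrative sliding-window counter only; not the improved detection
    method developed by the researchers.
    """

    def __init__(self, max_requests=100, window_seconds=1.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = {}  # source_ip -> deque of request timestamps

    def is_suspicious(self, source_ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.history.setdefault(source_ip, deque())
        q.append(now)
        # Drop timestamps that have fallen out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_requests
```

A flood of requests from one address within the window trips the flag, while ordinary traffic stays below the threshold.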
-
-
Fighting Fake “Facts” with Two Little Words: Grounding a Large Language Model's Answers in Reality
Asking ChatGPT for answers comes with a risk—it may offer you entirely made-up “facts” that sound legitimate. Despite having been trained on vast amounts of factual data, large language models, or LLMs, are prone to generating false information called hallucinations. Inspired by a phrase commonly used in journalism, the researchers conducted a study on the impact of incorporating the words “according to” in LLM queries.
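The prompting tweak is simple enough to sketch: append a grounding phrase naming a trusted source to the question. The helper below is an illustrative assumption; the exact phrasings tested in the study may differ.

```python
def ground_query(question: str, source: str = "Wikipedia") -> str:
    """Append an "according to" grounding phrase to an LLM query.

    Illustrative sketch of the technique described above: naming a trusted
    source nudges the model toward text it memorized from that source.
    """
    # Strip a trailing "?" or "." so the phrase attaches cleanly.
    return f"{question.rstrip('?.')} according to {source}?"
```

For example, `ground_query("What causes auroras?")` produces "What causes auroras according to Wikipedia?".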
-
-
Fact-Checking Found to Influence Recommender Algorithms
Researchers have shown that urging individuals to actively engage with the news they consume can reduce the spread of online falsehoods. “We don’t have to think of ourselves as captive to tech platforms and algorithms,” said a researcher.
-
-
Fighting Fake News: Using Machine Learning, Blockchain to Counter Misinformation
False information can lead to harmful consequences. How can content creators focus their efforts on areas where the misinformation is likely to do the most public harm? Research offers possible solutions through a proposed machine learning framework, as well as expanded use of blockchain technology.
-
-
Using AI to Protect Against AI Image Manipulation
As we enter a new era where technologies powered by artificial intelligence can craft and manipulate images with a precision that blurs the line between reality and fabrication, the specter of misuse looms large. “PhotoGuard,” developed by MIT CSAIL researchers, prevents unauthorized image manipulation, safeguarding authenticity in the era of advanced generative models.
-
-
New Cipher System Protects Computers Against Spy Programs
Researchers have achieved a breakthrough in computer security with the development of a new and highly efficient cipher for cache randomization. The innovative cipher addresses the threat of cache side-channel attacks, offering enhanced security and exceptional performance.
-
-
De-Risking Authoritarian AI
You may not be interested in artificial intelligence, but it is interested in you. AI-enabled systems make many invisible decisions affecting our health, safety and wealth. They shape what we see, think, feel and choose, and they calculate our access to financial benefits as well as our transgressions. In a technology-enabled world, opportunities for remote, large-scale foreign interference, espionage and sabotage, via internet and software updates, exist at a ‘scale and reach that is unprecedented’.
-
-
Regulate National Security AI Like Covert Action
Congress is trying to roll up its sleeves and get to work on artificial intelligence (AI) regulation. Ashley Deeks writes that only a few of these proposed provisions, however, implicate national security-related AI, and none create any kind of framework regulation for such tools. She proposes crafting a law similar to the War Powers Act to govern U.S. intelligence and military agencies’ use of AI tools.
-
-
Bringing Resilience to Small-Town Hydropower
Using newly developed technologies, researchers demonstrated how hydropower with advanced controls, combined with a mobile microgrid, can enable small communities to maintain critical services during emergencies.
-
-
U.S. Voluntary AI Code of Conduct and Implications for Military Use
Seven technology companies with major artificial intelligence (AI) products, including Microsoft, OpenAI, Anthropic and Meta, made voluntary commitments regarding the regulation of AI. These commitments are non-binding, unenforceable and voluntary, but they may form the basis for a future Executive Order on AI, which will become critical given the increasing military use of AI.
-
-
Geoscientists Aim to Improve Human Security Through Planet-Scale POI Modeling
Geoinformatics engineering researchers developed MapSpace, a publicly available, scalable land-use modeling framework. By providing data broader and deeper than satellite imagery alone can offer, MapSpace can generate population analytics invaluable for urban planning and disaster response.
-
-
Sandia Helps Develop Digital Tool to Track Cloud Hackers
Sandia programmers are helping the federal Cybersecurity and Infrastructure Security Agency (CISA) through an innovative program that enlists Microsoft cloud users everywhere to track down hackers and cyberterrorists.
-
-
Closer Look at “Father of Atomic Bomb”
Robert Oppenheimer is often referred to as the “father of the atomic bomb.” But he also had his federal security clearance revoked during the McCarthy era, a disputed decision that was reversed posthumously only last year. A Harvard historian unwinds the complexities of J. Robert Oppenheimer as scientist and legend.
-
The long view
Autonomous Vehicle Technology Vulnerable to Road Object Spoofing and Vanishing Attacks
Researchers have demonstrated the potentially hazardous vulnerabilities of LiDAR, or Light Detection and Ranging, the technology many autonomous vehicles use to navigate streets, roads and highways. The researchers have shown how to use lasers to fool LiDAR into “seeing” objects that are not present and missing those that are – deficiencies that can cause unwarranted and unsafe braking or collisions.
Tantalizing Method to Study Cyberdeterrence
Tantalus is unlike most war games because it is experimental rather than experiential: the immersive game pairs scientific rigor and quantitative assessment methods with the experimental sciences, and such experimental war gaming provides insightful data about real-world cyberattacks.
Prototype Self-Service Screening System Unveiled
TSA and DHS S&T unveiled a prototype checkpoint technology, the self-service screening system, at Harry Reid International Airport (LAS) in Las Vegas, NV. The aim is to provide a near self-sufficient passenger screening process while enabling passengers to directly receive on-person alarm information and allow for the passenger self-resolution of those alarms.
Falling Space Debris: How High Is the Risk I'll Get Hit?
An International Space Station battery fell back to Earth and, luckily, splashed down harmlessly in the Atlantic. Should we have worried? Space debris reenters our atmosphere every week.
Testing Cutting-Edge Counter-Drone Technology
Drones have many positive applications, but bad actors can use them for nefarious purposes. Two recent field demonstrations brought government, academia, and industry together to evaluate innovative counter-unmanned aircraft systems.
Strengthening the Grid’s ‘Backbone’ with Hydropower
Argonne-led studies investigate how hydropower could help add more clean energy to the grid, how it generates value as grids add more renewable energy, and how liner technology can improve hydropower efficiency.
The Tech Apocalypse Panic is Driven by AI Boosters, Military Tacticians, and Movies
From popular films like WarGames or The Terminator to a U.S. State Department-commissioned report on the security risk of weaponized AI, there has been a tremendous amount of hand-wringing and nervousness about how so-called artificial intelligence might end up destroying the world. There is one easy way to avoid a lot of this and prevent a self-inflicted doomsday: don’t give computers the capability to launch devastating weapons.