  • Better Resources to Mitigate Explosive Threats

    Every second counts when responders encounter an explosive device, and critical decisions must be made quickly in order to neutralize the threat while also ensuring the security of civilians, property, and the responders themselves.

  • FEMA Maps Said They Weren’t in a Flood Zone. Then Came the Rain.

    The most common reference for flood risk is the set of flood insurance rate maps, also known as 100-year floodplain maps, that the Federal Emergency Management Agency, or FEMA, produces. They designate so-called special flood hazard areas that have a roughly 1 percent chance of inundation in any given year. Properties within those zones are subject to more stringent building codes and regulations that, among other things, require anyone with a government-backed mortgage to carry flood insurance. Flaws in federal flood maps leave millions unprepared. Some are trying to fix that.
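    That "1 percent in any given year" figure understates how likely flooding is over the life of a loan. A minimal sketch of the arithmetic, assuming independent years and a constant annual probability (the function name is ours):

```python
def cumulative_flood_risk(annual_prob: float, years: int) -> float:
    """Probability of at least one flood in `years` years,
    assuming independent years with constant annual probability."""
    return 1.0 - (1.0 - annual_prob) ** years

# A "1 percent per year" special flood hazard area, over a 30-year mortgage:
print(round(cumulative_flood_risk(0.01, 30), 2))  # → 0.26
```

    In other words, a property at the special-flood-hazard threshold has roughly a one-in-four chance of flooding at least once during a standard 30-year mortgage.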

  • The New Technology That Is Making Cars Easier for Criminals to Steal or Crash

    There is much talk in the automotive industry about the “internet of vehicles” (IoV). This describes a network of cars and other vehicles that could exchange data over the internet in an effort to make transportation more autonomous, safe and efficient. There are many benefits to IoV, but some of these systems might also make our vehicles prone to theft and malicious attack, as criminals identify and then exploit vulnerabilities in this new technology. In fact, this is already happening.

  • The Impending Privacy Threat of Self-Driving Cars

    With innovations often come unintended consequences, one of which is the massive collection of data required for an autonomous vehicle to function. The sheer volume of visual and other information collected by a fleet of cars traveling down public streets raises the possibility that people’s movements could be tracked, aggregated, and retained by companies, law enforcement, or bad actors, including vendor employees.

  • Safeguarding U.S. Laws and Legal Information Against Cyberattacks and Malicious Actors

    NYU Tandon School of Engineering researchers will develop new technologies to secure the “digital legal supply chain” — the processes by which official laws and legal information are recorded, stored, updated and distributed electronically.

  • Randomized Data Can Improve Our Security

    Huge streams of data pass through our computers and smartphones every day. In simple terms, technical devices contain two essential units to process this data: a processor, which acts as a kind of control center, and RAM, which serves as memory. Modern processors use a cache as a bridge between the two, since memory is much slower at providing data than the processor is at processing it. This cache often contains private data that could be an attractive target for attackers.
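    The article does not spell out the randomization scheme, but the general idea behind randomized caches can be sketched as a keyed address-to-set mapping: with a secret key, an attacker can no longer predict which cache set a given address lands in, which frustrates eviction-based cache attacks. The 64-set geometry and all names below are illustrative assumptions, not the researchers' design:

```python
import hashlib

NUM_SETS = 64  # hypothetical cache with 64 sets

def set_index_plain(addr: int) -> int:
    # Conventional mapping: the address alone determines the set,
    # so an attacker can predict which set a victim's data occupies.
    return addr % NUM_SETS

def set_index_randomized(addr: int, key: bytes) -> int:
    # Keyed mapping: without the secret key, the address-to-set
    # relationship looks random to an attacker, while remaining
    # deterministic for the hardware that holds the key.
    digest = hashlib.blake2b(addr.to_bytes(8, "little"),
                             key=key, digest_size=2).digest()
    return int.from_bytes(digest, "little") % NUM_SETS
```

    Real designs use lightweight hardware ciphers rather than a software hash, and periodically re-key to limit how much an attacker can learn; this sketch only illustrates the mapping idea.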

  • Major Update to NIST’s Widely Used Cybersecurity Framework

    The world’s leading cybersecurity guidance is getting its first complete makeover since its release nearly a decade ago. NIST has revised the framework to help benefit all sectors, not just critical infrastructure.

  • Bipartisan Texan Push in Congress to Boost Semiconductors, a Crucial Industry in the State

    Republicans like Sen. Ted Cruz and Democrats like Rep. Colin Allred — opponents in the 2024 election — propose streamlining environmental reviews to promote investment and expansion by chipmakers.

  • Beaver-Like Dams Can Enhance Existing Flood Management Strategies for At-Risk Communities

    River barriers made up of natural materials like trees, branches, logs and leaves can reduce flooding in at-risk communities. These leaky barriers are effective at slowing a river’s flow during periods of rainfall and storing up vast quantities of water that would otherwise rush through, causing damage to communities downstream.

  • Humans Unable to Detect Over a Quarter of Deepfake Speech Samples

    New research has found that humans were able to detect artificially generated speech only 73% of the time, with the same accuracy in both English and Mandarin.

  • Milestone Reached in Power Grid Optimization on World’s First Exascale Supercomputer

    Ensuring the nation’s electrical power grid can function with limited disruptions in the event of a natural disaster, catastrophic weather or a man-made attack is a key national security challenge. Compounding the challenge of grid management is the increasing number of renewable energy sources, such as solar and wind, being continually added to the grid, and the fact that solar panels and other means of distributed power generation are hidden from grid operators.

  • Aging Bridge Detection Through Digital Image Correlation

    Researchers have developed a novel and practical method of assessing the mechanical properties of structures, with potential application to structural health monitoring of large structures such as bridges and viaducts.

  • Using Artificial Mussels to Monitor Radioactivity in the Ocean

    Amid global concern over radioactive waste pollution in the ocean, researchers have conducted a study which found that “artificial mussels” (AMs) can effectively measure low concentrations of radionuclides in the sea. It is believed that this technology can be applied as a reliable and effective solution for monitoring radioactive contamination around the world.

  • Denying Denial-of-Service: Strengthening Defenses Against Common Cyberattack

    A Denial-of-Service attack is a cyberattack that makes a computer or other device unavailable to its intended users. This is usually accomplished by overwhelming the targeted machine with requests until normal traffic can no longer be processed. Scientists have developed a better way to recognize a common internet attack, improving detection by 90 percent compared to current methods.
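    The piece does not describe the scientists' detection method, but the flooding behavior it mentions can be sketched with a simple threshold rule: count requests per source within a time window and flag sources that exceed a limit. Function and parameter names below are ours, a toy baseline rather than the improved detector the article reports:

```python
from collections import Counter

def flag_flood_sources(events, window_start, window_end, threshold):
    """Flag source IPs whose request count within the time window
    exceeds `threshold`. `events` is an iterable of
    (timestamp, source_ip) pairs."""
    counts = Counter(
        ip for ts, ip in events if window_start <= ts < window_end
    )
    return {ip for ip, n in counts.items() if n > threshold}
```

    Real detectors must cope with attacks distributed across many sources, each staying under any per-source threshold, which is one reason simple rules like this fall short and better recognition methods matter.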

  • Fighting Fake “Facts” with Two Little Words: Grounding a Large Language Model's Answers in Reality

    Asking ChatGPT for answers comes with a risk—it may offer you entirely made-up “facts” that sound legitimate. Despite having been trained on vast amounts of factual data, large language models, or LLMs, are prone to generating false information called hallucinations. Inspired by a phrase commonly used in journalism, the researchers conducted a study on the impact of incorporating the words “according to” in LLM queries.
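    The intervention itself is just prompt wording. A toy illustration of prepending the grounding cue to a question (the helper function is hypothetical, and production prompts would need more care than naive string manipulation):

```python
def with_grounding(question: str, source: str = "Wikipedia") -> str:
    # Hypothetical helper: rephrase "X?" as "According to <source>, x?"
    # to nudge the model toward quoting memorized text from the named
    # source rather than improvising an answer.
    return f"According to {source}, {question[0].lower()}{question[1:]}"

print(with_grounding("Where was Ada Lovelace born?"))
# → According to Wikipedia, where was Ada Lovelace born?
```

    The underlying intuition is that the phrase steers the model to ground its completion in text it has seen attributed to that source, which the study found reduced hallucinated answers.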