• Security Solution Traps Cybercriminals in a Virtual Network

    Researchers are developing a new cyber-security deception solution that uses artificial intelligence to lure hackers away from real targets and prevent breaches of network systems. The “Lupovis” solution under development by the team at the University of Strathclyde’s Centre for Intelligent and Dynamic Communications makes the hunter become the hunted.

  • Climate Engineering: Modelling Projections Oversimplify Risks

    Climate change is gaining prominence as a political and public priority. But many ambitious climate action plans foresee the use of climate engineering technologies whose risks are insufficiently understood. Researchers warn that over-optimistic expectations of climate engineering may reinforce the inertia with which industry and politics have been addressing decarbonization. In order to forestall this trend, they recommend more stakeholder input and clearer communication of the premises and limitations of model results.

  • Combatting Potential Electromagnetic Pulse (EMP) Attack

    Electromagnetic Pulse (EMP) weapons have the potential to disrupt unprotected critical infrastructure within the United States and could impact millions over large parts of the country. DHS says it continues to prepare for evolving threats against the American homeland, most recently highlighting efforts to combat an EMP attack.

  • Algorithm Could Quash Abuse of Women on Twitter

    Online abuse targeting women, including threats of harm or sexual violence, has proliferated across all social media platforms, but researchers have developed a statistical model to help drum it out of the Twittersphere.

  • Would You Fall for a Fake Video? Research Suggests You Might

    Deepfakes are videos that have been manipulated in some way using algorithms. As concerns about election interference around the globe continue to rise, the phenomenon of deepfakes and their possible impact on democratic processes remains surprisingly understudied.

  • Chemical Fingerprint for Explosives in Forensic Research

    The police frequently encounter explosives in their forensic investigations related to criminal and terrorist activities. Chemical analysis of explosives can yield valuable tactical information for police and counterterrorist units.

  • Ultrasensitive Measurements Detect Nuclear Explosions

    Imagine being able to detect the faintest of radionuclide signals from hundreds of miles away. Scientists have developed a system which constantly collects and analyzes air samples for signals that would indicate a nuclear explosion, perhaps conducted secretly underground. The system can detect just a small number of atoms from nuclear activity anywhere on the planet. In terms of sensitivity, the capability – in place for decades – is analogous to the ability to detect coronavirus from a single cough anywhere on Earth.

  • New Technique to Prevent Medical Imaging Cyberthreats

    Complex medical devices such as CT (computed tomography), MRI (magnetic resonance imaging) and ultrasound machines are controlled by instructions sent from a host PC. Anomalous instructions introduce many potentially harmful threats to patients, such as radiation overexposure, manipulation of device components or functional manipulation of medical images. Researchers at Ben-Gurion University of the Negev have developed a new artificial intelligence technique that will protect medical devices from malicious operating instructions in a cyberattack as well as from other human and system errors.
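    The Ben-Gurion researchers’ actual technique is not detailed here, but the general idea of flagging instructions that deviate from a device’s normal operating profile can be sketched with a minimal, hypothetical example: learn a baseline from historically safe CT dose values, then flag any incoming instruction whose dose falls far outside it. The function names and dose figures below are illustrative assumptions, not the published method.

    ```python
    from statistics import mean, stdev

    def fit_baseline(doses):
        """Learn a simple baseline (mean and standard deviation)
        from historically safe dose values."""
        return mean(doses), stdev(doses)

    def is_anomalous(dose, baseline, z_threshold=3.0):
        """Flag an instruction whose dose deviates more than
        z_threshold standard deviations from the baseline."""
        mu, sigma = baseline
        return abs(dose - mu) > z_threshold * sigma

    # Hypothetical historical CT dose values (arbitrary units):
    history = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.7, 5.1]
    baseline = fit_baseline(history)

    print(is_anomalous(5.0, baseline))   # normal instruction: False
    print(is_anomalous(50.0, baseline))  # overexposure attempt: True
    ```

    A real system would model far richer context (scan type, patient profile, instruction sequences), but the core contract is the same: reject instructions that the learned model considers implausible.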

  • Next-Generation Explosives Trace Detection Technology

    Explosive materials pose a threat whether they are used by domestic bad actors or in a theater of war. Staying ahead of our adversaries is a job that DHS and DOD share, and the two departments’ research and development work reflects that shared mission.

  • Thwarting Illicit Cryptocurrency Mining with Artificial Intelligence

    Cryptocurrencies, such as Bitcoin, are forms of digital money. Instead of being minted like coins or printed like paper bills, cryptocurrency is created by miners who digitally dig for it by performing computationally intense calculations. A new artificial intelligence algorithm is designed to detect cryptocurrency miners in the act of stealing computing power from research supercomputers.
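    The “computationally intense calculations” behind Bitcoin-style mining are a brute-force search for a nonce whose hash meets a difficulty target, which is why illicit miners crave supercomputer time. A minimal proof-of-work sketch (illustrative only; real networks use vastly higher difficulty and different block formats):

    ```python
    import hashlib

    def mine(block_data: str, difficulty: int) -> int:
        """Search for a nonce whose SHA-256 hash of block_data + nonce
        begins with `difficulty` zero hex digits. This trial-and-error
        loop is the work that consumes mining compute."""
        target = "0" * difficulty
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce
            nonce += 1

    # Low difficulty so the demo finishes quickly:
    nonce = mine("example block", 4)
    print(nonce)
    ```

    Detection algorithms like the one described exploit the fact that this workload has a recognizable signature (sustained, repetitive hashing) that differs from legitimate scientific computing.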

  • New Detection Method to Protect Army Networks

    U.S. Army researchers developed a novel algorithm to protect networks by allowing for the detection of adversarial actions that can be missed by current analytical methods. The main idea of this research is to build a higher-order network to look for subtle changes in a stream of data that could point to suspicious activity.
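    A higher-order network conditions each transition in a data stream on more than just the current state, which can surface sequential patterns a standard (first-order) network misses. The following is a minimal sketch of that idea, not the Army researchers’ algorithm; the event stream and function names are assumptions for illustration.

    ```python
    from collections import Counter

    def first_order(stream):
        """Count node-to-node transitions: the standard network view."""
        return Counter(zip(stream, stream[1:]))

    def second_order(stream):
        """Condition each transition on the previous two steps:
        the higher-order view that preserves path memory."""
        return Counter(zip(zip(stream, stream[1:]), stream[2:]))

    # Hypothetical event stream, e.g. hosts contacted in order:
    stream = list("ABCABCABD")

    print(first_order(stream))   # A->B looks routine (seen 3 times)
    print(second_order(stream))  # but (A,B)->D stands out as a rare path
    ```

    In the first-order view the transition A to B is unremarkable, while the second-order counts reveal that the path (A, B) to D occurs only once, the kind of subtle deviation a higher-order model can flag as suspicious.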

  • One Step Closer to Bomb-Sniffing Cyborg Locusts

    Researchers found that they could direct locust swarms toward areas where suspected explosives are located, and that the locusts’ brain reaction to the smell of explosives can be read remotely. Moreover, a study found locusts can quickly discriminate between different smells or different explosives. “This is not that different from in the old days, when coal miners used canaries,” says a researcher. “People use pigs for finding truffles. It’s a similar approach — using a biological organism — this is just a bit more sophisticated.”

  • China Embraces Bigger Internet with Virtually Unlimited IP Addresses

    By John Xie

    China is pushing for the adoption of a new worldwide Internet Protocol that could make the internet bigger and faster, but also potentially less anonymous. The technology, called IPv6, is an upgrade of the internet’s architecture that would allow trillions more electronic devices to have unique addresses online.
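    The scale difference is easy to quantify: IPv4’s 32-bit addresses allow about 4.3 billion unique endpoints, while IPv6’s 128-bit addresses allow 2^96 times more. Python’s standard `ipaddress` module can illustrate this (the sample address below is from the reserved documentation range):

    ```python
    import ipaddress

    # IPv4 offers 2**32 addresses; IPv6 offers 2**128.
    print((2 ** 128) // (2 ** 32))  # 2**96 times more address space

    # Parsing a sample IPv6 address from the documentation range:
    addr = ipaddress.ip_address("2001:db8::1")
    print(addr.version)   # 6
    print(addr.exploded)  # 2001:0db8:0000:0000:0000:0000:0000:0001
    ```

    That enormous space is what allows every device a unique, persistent address, which is also the property that raises the anonymity concerns the article notes.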

  • Lethal Autonomous Weapons May Soon Make Life-and-Death Decisions – on Their Own

    With drone technology, surveillance software, and threat-predicting algorithms, future conflicts could computerize life and death. “It’s a big question – what does it mean to hand over some of the decision making around violence to machines, and everybody on the planet will have a stake in what happens on this front,” says one expert.

  • Schools’ Facial Recognition Technology Problematic, Should Be Banned: Experts

    Facial recognition technology should be banned for use in schools, according to a new study. The research reveals that inaccuracy, racial inequity, and increased surveillance are the hallmarks of a flawed technology.