  • Quiet and Green: Why Hydrogen Planes Could Be the Future of Aviation

    Today, aviation is responsible for 3.6 percent of EU greenhouse gas emissions. Modern planes burn kerosene as fuel, releasing harmful carbon dioxide into the atmosphere. But what if there were another way? One possible solution is to power planes with a fuel that doesn’t produce harmful emissions – hydrogen. Long touted as a sustainable fuel, hydrogen is now gaining serious traction as an option for aviation, and tests are already under way to prove its effectiveness.

  • Accurately Pinpointing Malicious Drone Operators

    Researchers have determined how to pinpoint the location of a potentially malicious drone operator flying near airports or protected airspace by analyzing the drone’s flight path.

  • Privacy Risks of Home Security Cameras

    Researchers have used data from a major home Internet Protocol (IP) security camera provider to evaluate potential privacy risks for users. The researchers found that the traffic generated by the cameras could be monitored by attackers and used to predict when a house is occupied or not.
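    The occupancy inference works even against encrypted streams, because motion-triggered recording inflates the camera’s upload bitrate. The following is a toy sketch of that idea (not the researchers’ actual method); the threshold value and traffic numbers are invented for illustration:

    ```python
    # Toy illustration: an eavesdropper who sees only encrypted traffic
    # volumes from an IP camera can still guess occupancy, because
    # motion-triggered recording inflates the upload bitrate.

    def classify_occupancy(kbps_per_minute, threshold_kbps=200):
        """Label each minute 'active' (motion, likely someone home) or
        'idle' based solely on observed traffic volume -- no decryption."""
        return ["active" if rate > threshold_kbps else "idle"
                for rate in kbps_per_minute]

    # Hypothetical captured upload rates (kbps) over six minutes;
    # the spike corresponds to the camera streaming motion clips.
    observed = [40, 55, 480, 510, 60, 45]
    print(classify_occupancy(observed))
    # → ['idle', 'idle', 'active', 'active', 'idle', 'idle']
    ```

    A real attacker would look at rate patterns over days rather than a fixed threshold, but the principle is the same: the traffic shape itself leaks behavioral information.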

  • Improving Ethical Models for Autonomous Vehicles

    There’s a fairly large flaw in the way that programmers are currently addressing ethical concerns related to artificial intelligence (AI) and autonomous vehicles (AVs). Namely, existing approaches don’t account for the fact that people might try to use the AVs to do something bad.

  • Reverse Engineering of 3D-Printed Parts by Machine Learning Reveals Security Vulnerabilities

    Over the past thirty years, the use of glass- and carbon-fiber reinforced composites in aerospace and other high-performance applications has soared alongside the broad industrial adoption of composite materials. Machine learning can make reverse engineering of complex composite material parts easy.

  • AI Could Help Solve the Privacy Problems It Has Created

    The stunning successes of artificial intelligence would not have happened without the availability of massive amounts of data, whether it’s smart speakers in the home or personalized book recommendations. These large databases amass a wide variety of information, some of it sensitive and personally identifiable. All that data in one place makes such databases tempting targets, ratcheting up the risk of privacy breaches. We believe that the relationship between AI and data privacy is more nuanced. The spread of AI raises a number of privacy concerns, most of which people may not even be aware of. But in a twist, AI can also help mitigate many of these privacy problems.

  • How Much Control Would People Be Willing to Grant to a Personal Privacy Assistant?

    CyLab’s Jessica Colnago believes that in the future, the simple act of walking down the street is going to be a little weird. “You know how every time you enter a website, and it says: ‘We use cookies. Do you consent?’ Imagine that same thing walking down the street, but for a light pole, or a surveillance camera, or an energy sensor on a house,” Colnago says.

  • Sound Beacons Support Safer Tunnel Evacuation

    Research conducted as part of the project EvacSound demonstrates that auditory guidance using sound beacons is an effective aid during the evacuation of smoke-filled road tunnels. This is good news, not least because vehicle drivers and passengers cannot normally expect to be rescued by the emergency services during such accidents.

  • Searching the Universe for Signs of Technological Civilizations

    Scientists are collaborating on a project to search the universe for signs of life via technosignatures. Researchers believe that although life appears in many forms, the scientific principles remain the same, and that the technosignatures identifiable on Earth will also be identifiable in some fashion outside of the solar system.

  • COVID-19 Sparks Technology Innovation

    Researchers say the swift development of wearable sensors tailored to a pandemic reinforces how a major crisis can accelerate innovation, Kane Farabaugh writes in VOA News. “I think it’s really opened people’s eyes to what’s possible, in terms of modern technology in that context,” said John Rogers of Northwestern University Technological Institute.

  • The Dangers of Tech-Driven Solutions to COVID-19

    Although few sensible people have anything good to say about the federal government’s response, reactions to tools for managing the pandemic designed by tech firms have been more mixed, with many concluding that such tools can minimize the privacy and human rights risks posed by tight coordination between governments and tech firms. Julie E. Cohen, Woodrow Hartzog, and Laura Moy write for Brookings that contact tracing done wrong threatens privacy and invites mission creep into adjacent fields, including policing. Government actors might (and do) distort and corrupt public-health messaging to serve their own interests. Automated policing and content control raise the prospect of a slide into authoritarianism.

  • The Coronavirus App Was Always Doomed to Fail

    For months now, the British public has been told there’s only one way to resume normal life: a successful virus-tracing scheme. The public was prepped to download it as soon as it was made available UK-wide. Kate Andrews writes in The Spectator that months later, there is still no NHSX app to download. Today we learn there never will be. Far from being an exception to the rule, the app now joins a long line of government IT projects that have glitched and failed, even before arrival.

  • The Answer to Groundwater Resources Comes from High in the Sky

    Groundwater makes up 30 to 50 percent of California’s water supply, but until recently there were few restrictions placed on its retrieval. Then in 2014 California became the last Western state to require regulation of its groundwater, and water managers in the state’s premier agricultural region – the Central Valley – are tasked with estimating available groundwater. It’s a daunting technological challenge – but scientists can help by pairing satellite data with high-resolution monitoring to estimate groundwater depletion.
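    The satellite approach rests on a simple water balance: the change in total water storage a gravity-sensing mission observes, minus the changes in the stores that can be measured or modeled directly (snow, soil moisture, surface water), leaves the groundwater change. A minimal sketch, with made-up numbers for illustration:

    ```python
    # Water-balance sketch: groundwater change is the satellite-observed
    # total storage change minus the other, separately estimated stores.
    # All figures are hypothetical, in cubic kilometers over one period.

    def groundwater_change(d_total_storage, d_snow, d_soil_moisture,
                           d_surface_water):
        """Return the change in groundwater storage (km^3); a negative
        result indicates depletion."""
        return d_total_storage - (d_snow + d_soil_moisture + d_surface_water)

    # Invented example for one dry year in a basin:
    delta_gw = groundwater_change(-12.0, d_snow=-3.0,
                                  d_soil_moisture=-2.5,
                                  d_surface_water=-0.5)
    print(delta_gw)  # → -6.0, i.e. 6 km^3 of groundwater lost
    ```

    In practice each term carries substantial uncertainty, which is why pairing the coarse satellite signal with high-resolution ground monitoring matters.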

  • Bans on Facial Recognition Are Naïve — Hold Law Enforcement Accountable for Its Abuse

    The use of facial recognition technology has become a new target in the fight against racism and brutality in law enforcement. The current controversy over facial recognition purports to be about bias – inaccurate results related to race or gender. Osonde A. Osoba and Douglas Yeung write that “that could be fixed in the near future, but it wouldn’t repair the underlying dilemma: the imbalance of power between citizens and law enforcement. On this, facial recognition ups the ante. These tools can strip individuals of their privacy and enable mass surveillance.”
