• Protecting Yourself against Facial Recognition Software

    The rapid rise of facial recognition systems has placed the technology into many facets of our daily lives, whether we know it or not. What might seem innocuous when Facebook identifies a friend in an uploaded photo grows more ominous in enterprises such as Clearview AI, a private company that trained its facial recognition system on billions of images scraped without consent from social media and the internet. A new research project from the University of Chicago provides a powerful new protection mechanism.

  • Face Masks’ Effect on Face Recognition Software

    Now that so many of us are covering our faces to help reduce the spread of COVID-19, how well do face recognition algorithms identify people wearing masks? The answer, according to a preliminary NIST study, is with great difficulty. Algorithms created before the pandemic generally perform less accurately with digitally masked faces.

  • No Laughing Matter: Laughter Signature as New Biometrics

    The popular view of biometric security often invokes fingerprint readers, iris or retinal scans, and voice-activated systems. Researchers have now demonstrated how the way a person laughs might be used in biometrics. Initial tests of the approach show that a prototype laughter recognition algorithm can be 90 percent accurate.

  • Large-Scale Facial Recognition Is Incompatible with a Free Society

    In the U.S., tireless opposition to state use of facial recognition algorithms has recently won some victories. Outside the U.S., however, the tide is heading in the other direction. To decide whether to expand or limit the use of facial recognition technology, nations will need to answer fundamental questions about the kind of people, and the kind of society, they want to be. Face surveillance is based on morally compromised research, violates our rights, is harmful, and exacerbates structural injustice, both when it works and when it fails. Its adoption harms individuals and makes our society as a whole more unjust and less free. A moratorium on its use is the least we should demand.

  • Tightening Up Facial Biometrics

    Facial biometrics for security applications is an important modern technology. Unfortunately, there is the possibility of “spoofing” a person’s face to the sensor or detection system through the use of a photograph or even video presented to the security system. Researchers have now developed a way to thwart spoofing.

  • Lend Me Your Ears: Securing Smart-Home Entry with Earprints

    Fingerprints and DNA are well-known forms of biometrics, thanks to crime dramas on television and at the movies. But as technology drives us toward the Internet of Things—the interconnection of computing devices in common objects—other forms of biometrics, beyond forensic staples such as face recognition and retina, vein, and palm-print scans, are sure to enter the cultural consciousness. Researchers say that “earprints” could one day be used to identify individuals and secure smart homes via smartphones.

  • A Face-Recognition Tech that Works Even for Masked Faces

    In these corona days, face-recognition technologies — used for a variety of security purposes — are severely challenged by the fact that everyone’s wearing protective masks. The Israeli company Corsight says it has solved that problem with autonomous artificial intelligence.

  • New Privacy Threat Combines Device Identification with Biometric Information

    A study by computer scientists has revealed a new privacy threat from devices such as smartphones, smart doorbells, and voice assistants: cyber attackers can access and combine device identification and biometric information.

  • Lawmaker Presses Clearview AI on Foreign Sales of Facial Recognition

    Senator Edward J. Markey (D-Massachusetts) earlier this week raised new concerns about Clearview AI’s facial recognition app. Markey initially wrote to Clearview in January 2020 with concerns about how the company’s app might violate Americans’ civil liberties and privacy. Clearview is marketing its product to users in foreign countries with authoritarian regimes, such as Saudi Arabia. The company might also be collecting and processing images of children from social media sites.

  • Forensic Proteomics: Going Beyond DNA Profiling

    A new book details an emerging forensic method that could become as widespread and trustworthy as DNA profiling. The method is called mass-spectrometry-based proteomics, which examines the proteins that make up many parts of living things. These proteins exist in unique combinations in everything from blood cells and clothing fibers to certain types of medicine and the diseases they fight. Because proteomics analyzes these proteins directly, forensic proteomics can fill in when DNA is missing, ambiguous, or was never present to begin with.

  • U.S. Plans to Collect DNA from Nearly a Million Immigrants Despite Charges It Violates Privacy

    The Trump administration is pushing ahead with a project that could lead to the government collecting DNA from hundreds of thousands of detained immigrants, some as young as 14 years old, alarming civil rights advocates. Once fully underway, the DNA program could become the largest U.S. law enforcement effort to systematically collect genetic material from people not accused of a crime.

  • Evaluating Effects of Race, Age, Sex on Face Recognition Software

    How accurately do face recognition software tools identify people of varied sex, age and racial background? According to a new study by the National Institute of Standards and Technology (NIST), the answer depends on the algorithm at the heart of the system, the application that uses it and the data it’s fed — but the majority of face recognition algorithms exhibit demographic differentials. A differential means that an algorithm’s ability to match two images of the same person varies from one demographic group to another.
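    The differential NIST describes can be made concrete with a small illustration. The sketch below computes a false non-match rate (the fraction of genuine same-person pairs a matcher rejects) separately for each demographic group; the group names, similarity scores, and threshold are invented for illustration and do not come from the NIST study.

```python
# Hypothetical sketch: measuring a "demographic differential" in a face
# matcher by comparing false non-match rates (FNMR) across groups.
# All scores and group labels below are invented illustration data.

from collections import defaultdict

# (group, similarity_score, is_same_person) for mated comparison trials
trials = [
    ("group_a", 0.91, True), ("group_a", 0.55, True), ("group_a", 0.88, True),
    ("group_b", 0.72, True), ("group_b", 0.40, True), ("group_b", 0.45, True),
]

THRESHOLD = 0.6  # matcher declares "same person" at or above this score


def fnmr_by_group(trials, threshold):
    """False non-match rate per group: share of genuine pairs rejected."""
    totals, misses = defaultdict(int), defaultdict(int)
    for group, score, same in trials:
        if same:
            totals[group] += 1
            if score < threshold:
                misses[group] += 1
    return {g: misses[g] / totals[g] for g in totals}


rates = fnmr_by_group(trials, THRESHOLD)
# A persistent gap between groups at the same threshold is the kind of
# differential the NIST report documents.
```

In this toy data, group_b's error rate is double group_a's at the same threshold, which is exactly the pattern a demographic differential describes: the algorithm's reliability depends on who is in front of the camera.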

  • Inside America’s First All-Biometric Airline Terminal

    People still need more than their faces to enter and exit America on international flights, but Brandi Vincent writes that a growing number of early-stage facial recognition deployments that aim to screen passengers with little human intervention are rolling out at airports across the country.

  • Chinese Communist Party’s Media Influence Expands Worldwide

    Over the past decade, Chinese Communist Party (CCP) leaders have overseen a dramatic expansion in the regime’s ability to shape media content and narratives about China around the world, affecting every region and multiple languages, according to a new report. This trend has accelerated since 2017, with the emergence of new and more brazen tactics by Chinese diplomats, state-owned news outlets, and CCP proxies.
