  • Schools’ Facial Recognition Technology Problematic, Should Be Banned: Experts

    Facial recognition technology should be banned for use in schools, according to a new study. The research reveals that inaccuracy, racial inequity, and increased surveillance are the hallmarks of a flawed technology.

  • Artificial Intelligence Is a Totalitarian’s Dream – Here’s How to Take Power Back

    Individualistic Western societies are built on the idea that no one knows our thoughts, desires or joys better than we do. And so we put ourselves, rather than the government, in charge of our lives. We tend to agree with the philosopher Immanuel Kant’s claim that no one has the right to force their idea of the good life on us. Artificial intelligence (AI) will change this.

  • Cyberspace Is Critical Infrastructure – It Will Take Effective Government Oversight to Make It Safe

    A famous 1990s New Yorker cartoon showed two dogs at a computer and a caption that read “On the Internet, nobody knows you’re a dog.” That caption no longer holds. Not only do your browser, service provider and apps know you’re a dog, they know what breed you are, what kind of dog food you eat, who your owner is and where your doghouse is. Cyberspace can function as critical infrastructure only when it’s safe for everyone, but legal and regulatory protections in cyberspace have not kept up with the times.

  • Protecting Yourself against Facial Recognition Software

    The rapid rise of facial recognition systems has placed the technology into many facets of our daily lives, whether we know it or not. What might seem innocuous when Facebook identifies a friend in an uploaded photo grows more ominous in enterprises such as Clearview AI, a private company that trained its facial recognition system on billions of images scraped without consent from social media and the internet. A new research project from the University of Chicago provides a powerful new protection mechanism.

  • Trust in Data Privacy Increases During Pandemic

    Australians have become more trusting of organizations and governments with their personal data and privacy during the COVID-19 pandemic, according to new research. “Our findings provide strong support for the notion that trust and confidence in different aspects of policy design and delivery interact with each other, creating vicious or virtuous circles,” says the study’s lead author.

  • How to Hide from a Drone – the Subtle Art of “Ghosting” in the Age of Surveillance

    Drones of all sizes are being used by environmental advocates to monitor deforestation, by conservationists to track poachers, and by journalists and activists to document large protests. But when the Department of Homeland Security redirects large, fixed-wing drones from the U.S.-Mexico border to monitor protests, and when towns experiment with using drones to test people for fevers, it’s time to think about how many eyes are in the sky and how to avoid unwanted aerial surveillance. One way that’s within reach of nearly everyone is learning how to simply disappear from view.

  • Contact Tracing’s Long, Turbulent History Holds Lessons for COVID-19

    To get the COVID-19 pandemic under control and keep it from flaring up again, contact tracing is critical, but persuading everyone who tests positive to share where they’ve been and with whom relies on trust and cooperation. Amy Lauren Fairchild, Lawrence O. Gostin, and Ronald Bayer write in The Conversation that contact tracing’s long, contested history shows how easily both can be shattered. Looking back at the reasons for resistance to contact tracing as the U.S. struggled to contain epidemics in the past can help us understand the first signs of pushback against contact tracing in the COVID-19 response, as well as the public health consequences.

  • Personal Data Can Easily Be Extracted from Zoom, Other Video Conference Screenshots

    Video conference users should not post screen images of Zoom and other video conference sessions on social media, according to Ben-Gurion University of the Negev (BGU) researchers, who easily identified people from public screenshots of video meetings on Zoom, Microsoft Teams and Google Meet.
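
    As a rough illustration of how little effort such identification can take — a hypothetical sketch, not the BGU team’s actual pipeline — the open-source face_recognition library can locate and encode every face in a posted screenshot and match those encodings against a reference photo. The file names below are made up for the example.

        # Illustrative sketch only; not the BGU researchers' pipeline.
        # Assumes the open-source face_recognition library is installed
        # (pip install face_recognition) and two hypothetical images on disk.
        import face_recognition

        # A public meeting screenshot and a reference photo of a known person.
        screenshot = face_recognition.load_image_file("zoom_screenshot.png")
        reference_img = face_recognition.load_image_file("public_profile_photo.jpg")

        # Locate every face in the screenshot's participant tiles, then encode each one.
        locations = face_recognition.face_locations(screenshot)
        encodings = face_recognition.face_encodings(screenshot, locations)
        reference = face_recognition.face_encodings(reference_img)[0]

        # Compare each face in the screenshot against the reference photo.
        for (top, right, bottom, left), enc in zip(locations, encodings):
            match = face_recognition.compare_faces([reference], enc)[0]
            print(f"Face at ({left}, {top}): match={match}")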

  • EFF Launches Searchable Database of Police Use of Surveillance Technologies

    The Electronic Frontier Foundation (EFF), in partnership with the Reynolds School of Journalism at the University of Nevada, Reno, recently launched what the EFF describes as “the largest-ever collection of searchable data on police use of surveillance technologies,” created as a tool for the public to learn about facial recognition, drones, license plate readers, and other devices law enforcement agencies are acquiring to spy on our communities.

  • Large-Scale Facial Recognition Is Incompatible with a Free Society

    In the U.S., tireless opposition to state use of facial recognition algorithms has recently won some victories. Outside the U.S., however, the tide is heading in the other direction. To decide whether to expand or limit the use of facial recognition technology, nations will need to answer fundamental questions about the kind of people, and the kind of society, they want to be. Face surveillance is based on morally compromised research, violates our rights, is harmful, and exacerbates structural injustice, both when it works and when it fails. Its adoption harms individuals and makes our society as a whole more unjust and less free. A moratorium on its use is the least we should demand.

  • Privacy Risks of Home Security Cameras

    Researchers have used data from a major home Internet Protocol (IP) security camera provider to evaluate potential privacy risks for users. The researchers found that the traffic generated by the cameras could be monitored by attackers and used to predict whether a house is occupied.
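
    A minimal sketch of the underlying idea — a toy heuristic, not the researchers’ method: motion-triggered cameras upload more, and burstier, data when something moves in view, so an attacker who can observe per-minute upload volumes (for instance, with a packet sniffer on the same network) could flag bursty windows as likely activity. The function name, window size, and threshold below are all hypothetical.

        # Toy traffic-analysis heuristic; illustrative only, not the study's method.
        from statistics import mean, stdev

        def occupancy_guess(bytes_per_minute, window=30, threshold=0.5):
            """Flag fixed-size windows whose upload traffic is unusually bursty.

            High variance relative to the mean hints at motion-triggered
            uploads, and therefore at activity in the home.
            """
            flags = []
            for i in range(0, len(bytes_per_minute) - window + 1, window):
                chunk = bytes_per_minute[i:i + window]
                burstiness = stdev(chunk) / (mean(chunk) + 1)  # +1 avoids division by zero
                flags.append((i, burstiness > threshold))      # arbitrary threshold
            return flags

        # Fabricated example traffic: a steady keep-alive stream vs. periodic upload bursts.
        quiet = [1000 + (i % 3) for i in range(60)]
        active = [6000 if i % 7 == 0 else 1000 for i in range(60)]
        print(occupancy_guess(quiet + active))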

  • Coronavirus Opens Door to Company Surveillance of Workers

    Employers are rushing to use digital tracking technology to reduce virus transmission in the workplace. Mohana Ravindranath writes in Politico that privacy experts worry that businesses will start using their newfound surveillance capabilities for purposes far beyond public health. The data could be used to evaluate workers’ productivity, see which colleagues are holding meetings or even flag an employee who unexpectedly ducks out of the office during work hours.

  • Protecting Children's Online Privacy

    A University of Texas at Dallas study of 100 mobile apps for kids found that 72 violated a federal law aimed at protecting children’s online privacy. Researchers developed a tool that can determine whether an Android game or other mobile app complies with the federal Children’s Online Privacy Protection Act (COPPA).

  • AI Could Help Solve the Privacy Problems It Has Created

    The stunning successes of artificial intelligence would not have happened without the availability of massive amounts of data, whether it’s smart speakers in the home or personalized book recommendations. These large databases amass a wide variety of information, some of it sensitive and personally identifiable. All that data in one place makes such databases tempting targets, ratcheting up the risk of privacy breaches. We believe that the relationship between AI and data privacy is more nuanced than this threat narrative suggests. The spread of AI raises a number of privacy concerns, most of which people may not even be aware of. But in a twist, AI can also help mitigate many of these privacy problems.

  • How Much Control Would People Be Willing to Grant to a Personal Privacy Assistant?

    CyLab’s Jessica Colnago believes that in the future, the simple act of walking down the street is going to be a little weird. “You know how every time you enter a website, it says: ‘We use cookies. Do you consent?’ Imagine that same thing walking down the street, but for a light pole, or a surveillance camera, or an energy sensor on a house,” Colnago says.