Calls for New Federal Authority to Regulate Facial Recognition Tech
A group of artificial intelligence experts, including computer vision researcher and lead author Erik Learned-Miller of the University of Massachusetts Amherst’s College of Information and Computer Sciences, recently proposed a new model for managing facial recognition technologies at the federal level. Citing profiling, breaches of privacy and surveillance as potential societal risks, the group calls for a new federal authority to oversee the technology.
In a white paper titled “Facial Recognition Technologies in the Wild: A Call for a Federal Office,” the authors propose an FDA-inspired model that categorizes these technologies by degrees of risk and would institute corresponding controls.
Learned-Miller explains, “There are a lot of problems with face recognition, like breach of privacy, surveillance, unequal performance across sub-groups and profiling. Due to the high-stakes situations in which this technology is being deployed, such as in police work, financial decision-making and analysis of job applicants, harms from inaccuracies or misuse are a real and growing problem.”
He adds, “People have proposed a variety of possible solutions, but we argue that they are not enough. We are proposing a new federal office for regulating the technology. We model it after some of the offices in the Food and Drug Administration for regulating medical devices and pharmaceuticals.”
He says the FDA provides a precedent for centralized regulation of complex technologies with major societal implications, and that such an independent agency would encourage addressing the facial recognition technology ecosystem as a whole. The white paper describing the researchers’ proposal is accompanied by a primer offering a basic introduction to the terminology, applications and difficulties of evaluating this complex set of technologies.
Learned-Miller’s co-authors are Joy Buolamwini of MIT’s Media Lab, founder of the Algorithmic Justice League; computer scientist Vicente Ordóñez of the University of Virginia; and Jamie Morgenstern of the University of Washington. The project was supported by a grant from the MacArthur Foundation.
The authors write that while various cities and states have begun to pass laws providing oversight of facial recognition technologies, these individual measures are not enough to guarantee consistent protection of people’s rights or to set shared expectations for the organizations that buy and sell in this market. Although the task is complex, the white paper provides actionable recommendations.
Buolamwini adds, “This paper is a starting point for how we as a society might establish redlines and guidelines for a complex range of facial recognition technologies.” “Left unchecked, they threaten to propagate discrimination and intensify the risks for eroding civil liberties and human rights,” she and her co-authors write.
Last year, Learned-Miller and two colleagues received an award from the International Conference on Computer Vision for their work on one of the most influential face datasets in the world, Labeled Faces in the Wild, which has been used by companies such as Google and Facebook to test the accuracy of their facial recognition systems.