Democratic Policing and Facial Recognition Technology: Who Gets to Decide?

This post was written by Policing Project interns David Drew, Kathleen Lewis and Alexia Ramirez.

Over the past year, employees at several tech companies have made headlines for publicly urging that their work not be used for government surveillance. Employees at Google and Microsoft demanded that their companies not compete for a lucrative cloud-computing contract with the military, citing a government spokesperson’s claim that the project would “increase the lethality” of the military. Now, Amazon employees have followed in their footsteps, except this time they don’t want their technology to be used by local police.

In an open letter published in October, an Amazon employee claims that Rekognition, a powerful facial recognition product that supposedly can “store and search tens of millions of faces at a time,” will lead to ethical abuses and mass surveillance if its use by police goes unchecked. The letter writer argues that these tools are “flawed” and “reinforce existing bias.” A recent test of Rekognition, which ran pictures of every member of Congress against a collection of mugshots, produced 28 false matches, and those false matches disproportionately involved people of color. (Amazon’s response acknowledged that its technology is sensitive and critiqued the test’s methodology, noting that the test used a lower match-confidence threshold than Amazon recommends for law enforcement.)
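To make the threshold point concrete, here is a minimal sketch, using the AWS SDK for Python (boto3), of how a face search against a Rekognition collection works. The collection ID and image file below are hypothetical, and the snippet is an illustration of the API rather than a description of any deployed police system.

```python
# Minimal sketch: searching a face collection with Amazon Rekognition
# via boto3. The collection ID and probe image are hypothetical.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Hypothetical probe photo to search for.
with open("probe_photo.jpg", "rb") as f:
    image_bytes = f.read()

response = rekognition.search_faces_by_image(
    CollectionId="example-mugshot-collection",  # hypothetical collection
    Image={"Bytes": image_bytes},
    MaxFaces=5,
    # This parameter is the crux of the dispute: the service default is 80,
    # but Amazon recommends 99 for law enforcement use. A lower threshold
    # returns more candidate matches, and therefore more false positives.
    FaceMatchThreshold=99,
)

for match in response["FaceMatches"]:
    face = match["Face"]
    print(f"Candidate {face['FaceId']} similarity={match['Similarity']:.1f}%")
```

The takeaway is that “a match” is not a binary fact but a tunable cutoff, and the choice of cutoff is exactly the kind of consequential policy decision the rest of this post argues communities should have a voice in.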

However, police use of Rekognition has already begun. The City of Orlando is testing Rekognition’s real-time facial recognition system with live video feeds from cameras around the city. While Orlando is the only city to use the technology in real time, police in Washington County, Oregon, have also been using Rekognition, allowing officers in the field to compare photos against a database of mugshots.

As striking as it is to witness employees of a tech company calling for people not to use their own product, this episode underscores the unique ethical issues posed by facial recognition technology.

Front-End Accountability in Tech

Although much of the debate focuses on whether such a new technology should be used at all, the Policing Project approaches questions like this differently, asking first, what role the community played in deciding whether a particular policing technology is adopted, and second, whether the benefits the technology delivers justify the costs it imposes.

Although the Orlando police claim that their Rekognition program is being used “in accordance with current and applicable law,” and that they are “not utilizing any images of members of the public for testing,” many in the civil liberties community are skeptical that civilians’ rights are being sufficiently protected. This skepticism is understandable, given that Orlando residents were not informed or consulted about the implementation of a program that could potentially result in real-time monitoring of their daily lives.

This lack of transparency and community engagement touches on one of the key principles underlying the Policing Project’s work: democratic accountability. We believe there must be robust engagement between police departments and the communities they serve around the policies and priorities of policing. That engagement must extend to the rollout of new technologies that police can use in ways that affect privacy interests.

Another of our key principles is the use of a decisionmaking tool, such as cost-benefit analysis, to evaluate policing tactics. The question for any policing technology should be whether there are real benefits to be had, and if so, whether those benefits can be obtained while minimizing or eliminating the costs.

These two principles work together. The community needs to understand the upsides and risks of new technologies, and then express its views on the appropriate course of action.

When it comes to the adoption of policing technologies, meaningful community voice cannot be an afterthought. It must drive the conversation.