The Risks of Public Safety AI
It is essential to understand the potential risks of AI systems. Otherwise, it is impossible to use them responsibly or regulate them in appropriate ways. This explainer provides a high-level overview of some of these risks. It is the first in a two-part series and covers inaccuracy, incursions upon privacy, equity concerns, and misuse.
To be clear, public safety AI systems can also offer real benefits. But these risks, most of which are borne out by real-world examples, underscore the need for sound governance to ensure that the benefits of these tools are captured while the harms are minimized.
Inaccuracy
Accuracy is paramount. Systems must provide valid results and must function reliably and consistently. When they fail to do so, and when human operators fail to catch the error, the results can be devastating, ranging from wrongful arrests to violations of civil rights and liberties to the erosion of public trust. Consider, for example, the weapons detection system deployed at a school that reported 250 false alarms for every verified hit; the same system failed to detect a student’s hunting knife, which was later used in a stabbing.
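That ratio is worth pausing on. Using only the 250-to-1 figure reported above, the implied precision of the system’s alerts, that is, the share of alerts corresponding to a real threat, works out to:

```latex
\text{precision}
  = \frac{\text{verified hits}}{\text{verified hits} + \text{false alarms}}
  = \frac{1}{1 + 250}
  \approx 0.4\%
```

In other words, roughly 99.6 percent of the system’s alerts were false.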
There are many reasons an AI system might perform poorly. AI models are trained on data, and if that data is outdated, incomplete, or otherwise flawed, errors can result. Problems can also arise when a system lacks the computing resources needed to perform a task correctly. And human factors play a role as well, such as overreliance on a system’s outputs even when those outputs are wrong.
Regardless of the cause, inaccuracy can inflict real-world harm. Incorrect face recognition matches have led to wrongful enforcement actions, with three individuals arrested in the city of Detroit alone.
Incursions Upon Privacy
Privacy is essential to human existence. We may, for good reason, wish to keep private the medical care we receive or the religious services we attend. We may wish to shield from others our attendance at a support group or our membership in a private organization. Or we may wish to share our political beliefs with friends and neighbors, free from government scrutiny. Without privacy, we may be deterred from exercising our most essential rights and liberties; in law, this deterrence is known as a “chilling effect.”
Consider, for example, police use of vehicle surveillance systems (“VSS”), also known as automated license plate readers (“ALPRs”). These systems capture and store information about passing vehicles, including their license plate numbers, make, model, and color. Today, police have access to databases that aggregate hundreds of millions of records on vehicle locations and movements.
Doubtless, there are cases in which ALPR data has proven useful to investigators. Yet these systems, by their nature, also record the movements of millions of individuals not suspected of any wrongdoing, often with little oversight of how that data is used. Some of these systems can even identify vehicles that travel together (known as “convoy analysis”), raising the prospect that police could use VSS data to identify an individual’s friends, family members, romantic partners, or other associates.
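To make “convoy analysis” concrete, here is a minimal sketch of how co-travel detection could work in principle. The record layout, time window, and threshold below are illustrative assumptions, not a description of any vendor’s actual product.

```python
from collections import Counter
from itertools import combinations

# Each detection: (plate, camera_id, timestamp in seconds).
# Illustrative layout only; real ALPR exports differ by vendor.
detections = [
    ("ABC123", "cam_04", 1000),
    ("XYZ789", "cam_04", 1015),
    ("ABC123", "cam_11", 2200),
    ("XYZ789", "cam_11", 2190),
]

WINDOW = 60              # seconds within which two plates at one camera "co-occur"
MIN_CO_OCCURRENCES = 2   # shared sightings required before a pair is flagged

def convoy_pairs(detections, window=WINDOW, threshold=MIN_CO_OCCURRENCES):
    """Flag plate pairs that repeatedly pass the same camera at nearly the same time."""
    by_camera = {}
    for plate, cam, ts in detections:
        by_camera.setdefault(cam, []).append((ts, plate))

    pair_counts = Counter()
    for sightings in by_camera.values():
        sightings.sort()
        for (t1, p1), (t2, p2) in combinations(sightings, 2):
            if p1 != p2 and abs(t1 - t2) <= window:
                pair_counts[tuple(sorted((p1, p2)))] += 1

    return [pair for pair, count in pair_counts.items() if count >= threshold]

print(convoy_pairs(detections))  # [('ABC123', 'XYZ789')]
```

Even this toy version shows why the capability is sensitive: nothing in the logic restricts it to vehicles connected to suspected wrongdoing.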
AI tools have been used to surveil activities that are First Amendment-protected. During the 2008 presidential election, for example, the Virginia State Police used license plate readers to record people attending political rallies. And in 2020, face recognition technology was used to identify Black Lives Matter protesters in Florida.
Equity Concerns
The adoption of new policing technologies often has a disproportionate impact on certain communities, in particular Black and brown communities and individuals with lower socioeconomic status.
In some cases, this occurs because of the way a system is designed. Some predictive policing systems, for example, forecast where future crimes might occur, shaping where officers direct their patrols. This can entrench past inequities: data labeling an area as “high crime” may itself be the product of unjust targeting of that area for enforcement, yet that same data is then used to justify increased surveillance of the very same location.
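This feedback loop can be seen in a toy simulation. The numbers below are invented purely for illustration: two neighborhoods with identical true rates of crime, where patrols (and therefore recorded incidents) are allocated in proportion to past recorded crime.

```python
import random

random.seed(0)

TRUE_CRIME_RATE = 0.05   # identical in both neighborhoods (an assumption for illustration)
PATROLS_PER_DAY = 100

# Historical records start out skewed toward neighborhood A
# (for example, because of past over-enforcement there).
recorded = {"A": 60, "B": 40}

for _day in range(365):
    total = sum(recorded.values())
    for hood in recorded:
        # Patrols are allocated in proportion to *recorded* crime, not true crime.
        patrols = round(PATROLS_PER_DAY * recorded[hood] / total)
        # More patrols in a neighborhood means more incidents observed and recorded there.
        recorded[hood] += sum(random.random() < TRUE_CRIME_RATE for _ in range(patrols))

print(recorded)  # the recorded disparity persists even though the true rates are identical
```

In this model, the skewed starting point never corrects itself, because patrols follow the records rather than the underlying reality.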
In other cases, disparities emerge from the manner in which a system is deployed. The concentration of surveillance technologies in Black and brown communities, for example, has been well documented. In fairness, this often occurs because policing agencies seek to deploy technology in areas with the highest recorded crime. But disproportionate placement today will inevitably lead to disproportionate enforcement in the future.
Misuse
AI technologies also carry real potential for misuse. There are several documented cases in which police have used surveillance tools for personal reasons, such as monitoring romantic partners. In one case, an officer used a VSS to track his estranged wife; in another, an officer improperly accessed VSS data to determine whether he was under investigation.
There are also cases in which police have used surveillance tools in violation of applicable law, for example, by sharing surveillance data with immigration enforcement agencies contrary to state and local sanctuary laws.