Public Safety AI: Assessing the Benefits

AI holds great promise for transforming public safety. Agencies are deploying new AI-powered tools that promise to deter crime and close more cases. They are using robots and drones to reach places humans cannot, in a range of operations such as search and rescue. Some agencies, operating under serious resource constraints, hope that AI might help them automate time-consuming manual tasks, such as writing reports.

Sound regulation of these tools requires policymakers to define with care the intended benefits, in addition to addressing potential risks. A thorough examination of benefits is a necessary first step — after all, one cannot know whether a tool should be deployed without defining its purpose and assessing whether the tool is fit for that purpose.

The goal of this explainer is to help policymakers assess the potential benefits of AI systems as part of a comprehensive evaluative approach. This approach includes:

  • Defining the benefits. Public safety agencies should define the problem they are trying to solve through use of AI and establish a clear vision for what success would look like and how to measure it.

  • Assessing the evidence. Policymakers then should assess the evidence supporting the use of the tool for that purpose. When there is an absence of evidence, pilot programs and independent testing/study, with safeguards in place to protect civil rights and civil liberties, can provide policymakers with crucial information.

To help policymakers understand how to define and assess the benefits of AI in the public safety context, we highlight different uses of AI by policing agencies and propose a set of questions aimed at establishing the benefits.

Fighting Crime

Perhaps the most high-profile use of AI by policing agencies is to fight crime. These tools could prove useful to police in a variety of ways, provided they are validated and risks are mitigated appropriately. In our explainer on the capabilities of AI in public safety, we explore the various ways that public safety agencies can use AI systems for crime-fighting. These include:

  • Identification: Tools such as facial recognition, gait recognition, and AI-assisted DNA analysis can help police identify suspects or victims much more quickly, and with much less information than before.

  • Tracking: Tools such as vehicle surveillance systems (also called automated license plate readers) can detect and store information about passing vehicles. When this information is aggregated over time and location, it allows police to reconstruct a vehicle’s path wherever the system operates.

  • Detection: Policing agencies use AI to detect crime or provide an early warning of anomalies or suspicious behavior. For example, some tools use audio sensors to detect and locate gunfire; other systems alert police to conduct such as dangerous driving or shoplifting.

  • Prediction: Policing agencies also may use AI to try to predict where crime is more likely to occur in the future or identify individuals at greater risk of committing or falling victim to crime.

As discussed, policymakers should take care not only to define the intended benefits, but also to assess the evidence in support. In assessing the crime-fighting potential of an AI system, policymakers should consider a range of evidence, including:

  • Evidence of deterrence: Is there evidence that use of the system decreases the incidence of crime, or a certain type/form of crime?

  • Evidence of more efficient enforcement: Is there evidence that use of the system leads to more efficient enforcement (i.e., enforcement against a higher proportion of offenders, and/or with fewer officer hours)?

  • Evidence of investigative benefits: Is there evidence that use of the system increases leads and/or case closure rates, or successful prosecutions or diversions?

  • Evidence of precision: Is there evidence that the system is more precise than conventional techniques in targeting suspects or victims, mitigating the burden on the general population?

Emergency Response

AI can play a crucial role in helping first responders act during public emergencies. Robots, for example, can perform search and rescue operations in places that would be difficult for a human to reach, or under conditions that otherwise would be too dangerous. A fleet of drones potentially could cover a wider area than humans alone during a search and rescue mission. Video analytics can enhance the ability of responders to find victims during natural disasters. As policymakers consider these AI systems, they should look to the evidence of benefits, including:

  • Evidence of decreased response times: Is there evidence that use of the system enables first responders to reach the scene or locate individuals more quickly?

  • Evidence of efficiency: Is there evidence that the system could conduct operations more efficiently — for example, cover more ground with the same number of officers?

  • Evidence of benefits to responder safety: Is there evidence that the tool would enhance responder safety, without compromising the safety of others?

Administrative Tools

AI could prove a useful tool in connection with a range of office functions. For example, agencies are using AI tools to help schedule shifts by identifying periods of high demand and factoring in scheduling preferences. Evidence management software can tag and organize files automatically, reducing the burden on officers.

Notably, generative AI (AI systems that generate new content such as text or images) now is being tested at some policing agencies — for example, to write police reports automatically. Generative AI systems are far from perfect — for example, they can produce false or misleading outputs — but they are seeing increasing use in the private sector and likely will be adopted by public safety agencies in the coming months and years.

The benefits of these tools might be indicated by:

  • Evidence of productivity gains: Is there evidence that tasks can be completed more quickly or with fewer people, or that resources can be allocated more efficiently?

  • Evidence of increased accuracy: Is there evidence that AI tools perform office functions more accurately and/or consistently than human officers?

  • Evidence of cost savings: Is there evidence that the system could help address financial constraints by performing tasks that previously were handled by officers?

Enhancing Accountability

AI systems also have the potential to enhance accountability. Some vendors, for example, are developing tools that analyze body-worn camera footage, flagging notable incidents (including misconduct) for supervisors. Properly validated and implemented, this sort of technology has the potential to bridge a critical accountability gap: although body-worn cameras have been widely touted as an accountability tool, the vast majority of footage goes unreviewed, meaning that crucial encounters or moments can be missed. The benefits of such tools might be indicated by:

  • Evidence of better police-community interactions: Is there evidence of better interactions between police and community members — for example, a decrease in complaints filed or use-of-force incidents?

  • Evidence of increased compliance: Is there evidence of better compliance with applicable laws and policies by officers?

  • Evidence of better oversight: Is there evidence that oversight functions — either internal to an agency or external to it — can be performed more efficiently, thoroughly, or accurately through use of the system?

To find out more about how the Policing Project can help you craft policy on the responsible use of AI in public safety, please email us: tech@policingproject.org.