Six Steps Policymakers Can Take Now for Safer and More Effective AI

Artificial intelligence (AI) has heralded a radical transformation in policing and public safety. New AI-enabled tools promise an abundance of novel capabilities: predicting where crime will occur, detecting suspicious activity, triaging non-emergency calls to 911, and beyond. These technologies may deliver real benefits for public safety, for example, by helping law enforcement solve crimes more quickly or aiding 911 dispatchers in providing life-saving support more efficiently. But the use of AI systems in public safety also carries real risks, including inaccuracy, racial bias, misuse, and invasions of privacy.

Key to determining the benefits and risks of any AI system is knowing how well or poorly the system works in the real world. But by and large, when it comes to public agency use of AI, these systems remain untested in real-world conditions due to challenges ranging from a lack of consensus standards for evaluation to a lack of agency capacity to conduct testing.

Yet state and local public safety agencies are using AI systems now. So what steps can policymakers take today to help ensure the AI systems used by public safety agencies are safe and effective? Here are six recommendations.

Procurement

1) Establish statewide AI procurement standards to ensure vendors provide proof their systems are safe and effective for intended deployment contexts.

There are numerous steps vendors should take pre-deployment to ensure the systems they sell are fit for their intended purpose(s). Procurement is government’s lever to ensure vendors actually take these steps. The federal Office of Management and Budget’s recent policy on federal agency use of AI provides a helpful list of procurement requirements that agencies can use to hold vendors to account, such as requiring documentation of a system’s capabilities and known limitations and contracting for post-award monitoring of system efficacy. Because vendors are incentivized to promote their best metrics, agencies also should use procurement to require proof of independent, third-party evaluation of vendors’ AI systems.

Unlike the federal government, most individual state and local agencies don’t have sufficient market power to influence vendors on their own. This is why we recommend that policymakers establish statewide AI system procurement standards that incorporate requirements like the ones articulated by OMB to consolidate agencies’ leverage.

Transparency

2) Require public safety agencies to inventory the AI systems they use.

State and local policymakers should follow the lead of the federal government and require public safety agencies to inventory and disclose, in an easily accessible manner, any AI systems they use that are likely to impact people’s safety, civil rights, or civil liberties, such as automated license plate readers that track people’s locations or predictive policing systems that purport to anticipate where crime will occur. After all, we can’t evaluate what we don’t know exists.

These disclosures should state in simple, intelligible language:

  • Each AI system’s intended objective(s),

  • How the system achieves these aims, and

  • A basic description of intended use cases.

The public should be able to access these disclosures easily, such as on an agency’s website homepage.

For this information to be truly useful, disclosures should be standardized across agencies as much as possible. To enable this, states should provide guidance on the information agencies must disclose for their qualifying AI systems (i.e., those likely to impact rights and safety) and provide a reporting template to reduce administrative burden and enable comparison across agencies.
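Purely as an illustrative sketch, a standardized inventory entry of the kind such a template might capture could look roughly like the following. The field names and example values here are hypothetical assumptions, not drawn from any existing state guidance:

```python
from dataclasses import dataclass

@dataclass
class AIInventoryEntry:
    """One entry in an agency's public AI system inventory (illustrative only)."""
    system_name: str                  # e.g., "Automated license plate reader network"
    vendor: str                       # supplier of the system
    intended_objectives: list[str]    # the system's intended objective(s), in plain language
    how_it_works: str                 # how the system achieves these aims, in simple terms
    intended_use_cases: list[str]     # basic description of intended use cases
    impacts_rights_or_safety: bool    # whether the system is likely to impact rights or safety

# An example of the kind of entry an agency might publish on its website
example_entry = AIInventoryEntry(
    system_name="Gunshot detection system",
    vendor="Example Vendor, Inc.",
    intended_objectives=["Detect and locate probable gunfire in covered areas"],
    how_it_works="Acoustic sensors flag candidate sounds; software classifies them as likely gunfire.",
    intended_use_cases=["Dispatching officers to the estimated location of detected gunfire"],
    impacts_rights_or_safety=True,
)
print(example_entry.system_name, "-", ", ".join(example_entry.intended_objectives))
```

A shared structure along these lines is what would make entries comparable across agencies rather than a patchwork of one-off disclosures.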

Requiring public safety agencies to publish inventories of their AI systems is an essential first step toward ensuring public accountability over these systems.

3) Develop standardized procedures for agency reporting on use and outcomes of AI systems.

Evaluating the efficacy of AI systems used by public safety agencies is not merely a technical question; it is also a human question. In other words, to know how well or poorly an AI system works, we also have to know how an agency is using it and to what effect. For example, for a gunshot detection system, it is useful to know not just whether the system accurately detects gunfire but also whether its use leads to more arrests for, or reductions in, gun crime.

To meaningfully evaluate the safety and efficacy of AI systems used by public agencies, agencies must track their use and report on outcomes. Information that agencies should track includes when they use an AI system, for what purposes (e.g., crime type), on what demographics, and to what result (e.g., for police, whether use led to or supported an enforcement action; for other responders, whether services were rendered).

Even in the absence of information on technical accuracy of an AI system, this kind of use and outcome information can start to give agency actors, policymakers, and the public a picture of system efficacy.

And as with the system inventories discussed above, states should encourage and facilitate standardized reporting of use and outcome statistics through guidance and simple templates that minimize administrative burden.
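As a hedged sketch of what standardized use-and-outcome reporting might capture, the record structure below mirrors the categories mentioned above (when the system was used, for what purpose, involving whom, and to what result); the field names and example values are assumptions for illustration only, not a prescribed standard:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class AIUseRecord:
    """One logged use of an AI system and its outcome (illustrative only)."""
    system_name: str                     # which inventoried AI system was used
    used_at: datetime                    # when the system was used
    purpose: str                         # for what purpose, e.g., crime or call type
    subject_demographics: Optional[str]  # demographics involved, where known and applicable
    outcome: str                         # e.g., "enforcement action", "services rendered", "no action"

records = [
    AIUseRecord(
        system_name="Gunshot detection system",
        used_at=datetime(2024, 3, 14, 22, 5),
        purpose="Possible gunfire",
        subject_demographics=None,
        outcome="Officers dispatched; no evidence of gunfire found",
    ),
]

# A simple aggregate that a standardized template could support across agencies
enforcement_uses = sum(1 for r in records if "enforcement" in r.outcome.lower())
print(f"Logged uses: {len(records)}; uses leading to enforcement action: {enforcement_uses}")
```

Even simple aggregates like this, computed the same way everywhere, would let agency leaders, policymakers, and the public compare outcomes across jurisdictions.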

4) Require law enforcement agencies to disclose the use of AI systems to defendants in criminal cases.

When it comes to law enforcement use of AI systems, one essential way to ensure that use is safe and effective is to require agencies to disclose whether and how an AI system was used in a defendant’s case, including any information about the system’s validity. In an adversarial justice system like ours, robust disclosure to defendants is another way to test these systems.

Assessment

5) Require pilots and limited deployments, with monitoring and safeguards in place, prior to large-scale use of AI systems.

Sometimes real-world trials are needed to assess and confirm the benefits of a new system. When deploying an AI system with anticipated (though uncertain) public safety benefits that must be validated in the real world, it makes sense for public agencies to proceed cautiously. Pilots, or limited deployments of a system with safeguards and monitoring in place, can help ensure that any potential harms from use are minimized before broader deployment. Of course, in designing pilots, agencies must be careful not to disproportionately burden certain communities and to ensure safeguards are in place to protect people’s civil rights and liberties.

6) Fund public safety agency partnerships with academic and independent research institutions to develop evaluation methods for AI systems.

As of now, there are no standardized testing protocols or evaluation methods for AI systems in general, much less in the public safety context specifically. National and international standards-setting bodies like NIST and ISO are working to develop this measurement science for AI, but they are not the only experts who can be useful here. Academic and independent research institutions, including public research universities, also employ relevant subject matter experts, including data scientists, computer scientists, legal scholars, and social scientists, who can help agencies develop evaluation protocols and best practices. And they can do so more quickly than the typical multi-year formal standards development process. These kinds of interdisciplinary evaluation partnerships have long existed and proven successful in other technology domains, such as network infrastructure testing. Evaluation procedures and protocols developed through these research partnerships can provide agencies with actionable testing guidance, serving as placeholders until national and international consensus standards can be developed. State legislative grant funding should encourage these partnerships.

At the Policing Project, we know that every jurisdiction has unique challenges when it comes to policing. We are happy to work with local lawmakers to adapt principles, best practices, and model policies to ensure legislation governing public safety AI reflects local realities. To find out more about how the Policing Project can help you craft policy on the responsible use of AI in public safety, please email us: tech@policingproject.org.