Principles for Sound Governance of High-Risk AI in Public Safety

Human Decisionmakers: Agencies should use AI systems to inform their decisions, not to make decisions for them. Agency personnel should independently confirm system outputs and/or conduct follow-on investigation, as appropriate, and should remain accountable for any decision made.

  1. Safe, Reliable, Effective, and Equitable Systems: Agencies should contractually obligate vendors to ensure that AI systems are safe, reliable, effective, and equitable, with penalty provisions for failure to meet that obligation.

  2. Training: Agencies should provide sufficient training on the operation and limitations of AI systems to all personnel who will use the system or make decisions based upon its outputs.

  3. Minimizing Intrusiveness: Agencies should be judicious in their use of AI. Potentially intrusive applications (for example, the identification, tracking, or monitoring of individuals, particularly those for whom there is no suspicion of wrongdoing) should be reserved for serious offenses.

  4. Privacy: In their use of AI systems, agencies should take affirmative steps to respect the privacy of individuals, including safeguarding personally identifiable information (PII) and limiting the data collected and retained to what is strictly necessary for law enforcement purposes.

  5. Transparency: Agencies should be transparent about their use of any AI system that may impact civil rights or civil liberties. This includes disclosing what tools they are using, what these tools do, their purpose, and the policies governing their use.

  6. Disclosure: Agencies should disclose any role that an AI system played in prosecuting or taking enforcement action against an individual.

  7. Auditing: Agencies should have systems in place to detect and deter misuse of and/or unauthorized access to AI systems.