
Our Framework

We are developing an AI Governance Framework to give actionable guidance on the regulation of AI-enabled technologies in the public safety domain. The goal is to help policymakers enact sound regulation that enables society to capture the benefits of these tools while mitigating or eliminating risks to civil rights, civil liberties, and racial justice. What is posted here is provisional, and we welcome input while we develop it further.

The framework has four key components:

  • Categorize: Our framework categorizes AI systems based on the nature and magnitude of potential harms. This matters because over-regulation can chill innovation, while under-regulation can lead to the harms we identify in our document on the risks of AI in policing. The goal is to ensure that regulation is proportionate to the risk.

  • Assess: Our framework offers guidance to policymakers on how best to assess the benefits and harms of public safety AI systems.

  • Regulate: Our framework proposes a set of guardrails to ensure the responsible use of AI in public safety. It considers a range of possible regulatory targets (e.g., the vendor, the agency using the tool) and approaches for accomplishing sound regulation.

  • Evaluate: Our framework includes guidance on how AI systems should be evaluated on an ongoing basis.

Categorize

Our framework takes a “risk-based” approach, categorizing AI systems based on their potential harm. 

  • High-risk systems are those that pose a substantial risk of affecting civil rights or civil liberties, or of increasing the likelihood that a person will come into contact with the criminal legal system. These systems are subject to stronger regulatory requirements.

  • All other systems are classified as low-risk and should be subject to meaningful but light-touch regulation.
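
To make the two tiers concrete, here is a minimal sketch in Python of how the categorization rule could be expressed. The class names, fields, and the example system are hypothetical illustrations, not part of the framework itself.

    from dataclasses import dataclass
    from enum import Enum


    class RiskTier(Enum):
        HIGH = "high-risk"   # stronger regulatory requirements apply
        LOW = "low-risk"     # meaningful but light-touch regulation


    @dataclass
    class AISystem:
        name: str
        affects_civil_rights_or_liberties: bool  # hypothetical field
        increases_criminal_legal_contact: bool   # hypothetical field


    def categorize(system: AISystem) -> RiskTier:
        """Assign a tier: either trigger is enough to make a system high-risk."""
        if (system.affects_civil_rights_or_liberties
                or system.increases_criminal_legal_contact):
            return RiskTier.HIGH
        return RiskTier.LOW


    # A face recognition tool used to generate investigative leads would
    # plausibly trip both triggers and land in the high-risk tier.
    assert categorize(AISystem("face recognition", True, True)) is RiskTier.HIGH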

We also support the creation of “regulatory sandboxes.” A regulatory sandbox is a special regulatory regime under which developers can create new AI tools with supervision and guidance from regulators, with the goal of supporting responsible innovation.

Assess

Guided by our technology evaluation framework, the “Assess” component of our framework aims to help policymakers evaluate the benefits and costs of AI systems in public safety. This includes assessing how well a system performs, whether it is fit for the purpose for which it will be used, its potential costs (both financial and social), and other factors.

Because the benefits and risks of a system will not always be readily apparent, our framework surveys a range of testing and evaluation methods that could be implemented as part of a governance strategy.
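
As one illustration, a common evaluation method is to disaggregate a system's error rates across demographic groups rather than report a single aggregate score. The sketch below uses made-up records and field names; it shows the basic computation, not a method the framework prescribes.

    from collections import defaultdict

    # Hypothetical evaluation records: (group, predicted_positive, actually_positive).
    # A real assessment would draw on audited field data, not toy tuples.
    records = [
        ("group_a", True, False),
        ("group_a", False, False),
        ("group_b", True, True),
        ("group_b", True, False),
    ]


    def false_positive_rates(rows):
        """False positive rate per group: FP / (FP + TN), computed over true negatives."""
        fp = defaultdict(int)
        negatives = defaultdict(int)
        for group, predicted, actual in rows:
            if not actual:
                negatives[group] += 1
                if predicted:
                    fp[group] += 1
        return {g: fp[g] / negatives[g] for g in negatives}


    print(false_positive_rates(records))  # {'group_a': 0.5, 'group_b': 1.0}
    # Large gaps between groups can signal that a system is not fit for
    # purpose even when its aggregate accuracy looks acceptable.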

Regulate

Our framework will propose a set of guardrails aimed at ensuring the responsible development, deployment, and use of AI in public safety — from transparency and disclosure requirements to guardrails around data use and sharing.

Our framework will also assess a range of different regulatory targets (from vendors and model developers to policing agencies) and approaches (including private governance), with the goal of ensuring that regulation is both targeted and adaptable.
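
One way to picture a transparency and disclosure requirement is as a structured record that a vendor or agency must publish. The fields below are assumptions chosen for illustration, not a schema the framework mandates.

    from dataclasses import dataclass, field


    @dataclass
    class DisclosureRecord:
        """Hypothetical fields a disclosure requirement might mandate."""
        system_name: str
        vendor: str
        intended_use: str
        training_data_sources: list[str] = field(default_factory=list)
        known_limitations: list[str] = field(default_factory=list)
        data_sharing_partners: list[str] = field(default_factory=list)


    # Example record for a hypothetical tool.
    record = DisclosureRecord(
        system_name="example gunshot detection system",
        vendor="Example Vendor, Inc.",
        intended_use="alerting officers to possible gunfire",
        known_limitations=["elevated false-positive rate near fireworks"],
    )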

Evaluate

Our framework includes guidance on how regulation can support the ongoing evaluation of AI systems to ensure their responsible deployment. This guidance covers:

  • Monitoring: Ensuring meaningful human oversight of a system's operation and performance.

  • Reporting: Disclosure of information about AI systems to regulators and the public to ensure public accountability.

  • Re-evaluation: Periodic and routine re-evaluation to account for changes to the system or the environment in which it is used.
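
As a concrete illustration of what routine re-evaluation can catch, the sketch below flags a deployed system whose measured performance has drifted from its deployment baseline. The metric, threshold, and numbers are assumptions made for the sake of the example, not requirements of the framework.

    import statistics

    # Hypothetical accuracy measurements from routine re-testing.
    baseline_scores = [0.91, 0.90, 0.92, 0.91]  # at deployment
    recent_scores = [0.84, 0.83, 0.85]          # most recent review period

    DRIFT_THRESHOLD = 0.05  # tolerated drop before escalation (assumed value)


    def needs_reevaluation(baseline, recent, threshold):
        """Flag the system when mean performance drops by more than the
        threshold, e.g. because the system or its environment has changed."""
        return statistics.mean(baseline) - statistics.mean(recent) > threshold


    if needs_reevaluation(baseline_scores, recent_scores, DRIFT_THRESHOLD):
        print("Performance drift detected: trigger a full re-evaluation.")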