The goal of our AI Governance Framework is to give policymakers actionable guidance on the regulation of AI-enabled technologies in the public safety domain. This framing document sets forth the key components of our framework:

  • Categorize: Our framework takes a risk-based approach, differentiating AI systems based on their potential harm to ensure proportionate regulation.

  • Assess: Our framework incorporates benefit-cost analysis and includes appropriate measures to facilitate such analysis, including testing and evaluation protocols.

  • Regulate: Our framework includes a menu of approaches aimed at different regulatory targets, with the goal of providing flexibility and adaptability.

  • Evaluate: Our framework includes guidance on evaluating AI systems on an ongoing basis to ensure their sound use.

  • Categorize: Our framework takes a risk-based approach, differentiating AI systems based on their potential harm to ensure proportionate regulation.

    Low-risk systems should be subject to light-touch regulation -- for example, to ensure transparency or guard against blatant misuse. Systems that pose no risk under our framework fall outside its scope altogether.

    High-risk systems are those that pose a substantial risk of impacting civil rights or civil liberties, and/or of increasing the likelihood that a person will come into contact with the criminal legal system.

    Regulatory sandboxes are designed to enable the development of new public safety AI technologies in a controlled and supervised setting, with developers working collaboratively with regulators to ensure responsible innovation.

    Only high-risk systems are subject to the other three main components of our framework -- Assess, Regulate, and Evaluate; low-risk systems and systems developed within a regulatory sandbox are instead subject to specialized governance regimes.

  • Assess: Our framework incorporates benefit-cost analysis and includes appropriate measures to facilitate such analysis, including testing and evaluation protocols. AI systems should be evaluated on the basis of system performance and efficacy, financial costs, social costs, the availability of safeguards, and other factors described in the Policing Project's Technology Evaluative Framework.

    In cases in which the benefits and risks cannot be ascertained because reliability and validity have not been established, our framework offers various testing and evaluation modalities, including, but not limited to:

    Monitored pilot programs to enable testing and study of outcomes.

    Certification that the proposed use is relevantly similar to an already-evaluated use in another context.

    Full-scale operational testing and/or evaluation of efficacy by an independent agency or researchers.

  • Regulate: Our framework includes a menu of approaches aimed at different regulatory targets, with the goal of providing flexibility and adaptability.

    The menu covers the substance of regulation, the targets of regulation, and the approaches to regulation.

  • Evaluate: Our framework includes guidance on evaluating AI systems on an ongoing basis to ensure their sound use.