Our Framework
-
An AI system used in the public safety domain should be subject to regulation if it poses risks to civil rights, civil liberties, or racial justice, or if it would increase the likelihood that a person will come into contact with the criminal legal system.
A. General scope. This framework applies to an AI system:
i. That is used by a law enforcement agency;
ii. For the purpose of investigating, preventing, and/or responding to criminal activity or public safety incidents; and
iii. That has one or more rights or safety-impacting capabilities.
B. Definition of “rights or safety-impacting.” The following capabilities are rights or safety-impacting:
i. Determining or assisting in the determination of the identity, location, movements, activities, associations, and/or sentiments of an individual or group;
ii. Scoring risk or forming predictions about an individual, group, or location;
iii. Detecting unlawful activity or deception, including detecting any object, behavior, or event associated with unlawful activity or deception;
iv. Creating, summarizing, locating, and/or analyzing evidence or data in the context of a criminal investigation or proceeding;
v. Gaining unauthorized access to data, a computer, or an electronic system; and
vi. Deploying, monitoring, and/or controlling a robotic system, including ground robots and aerial drones.
C. Definition of “use.” For purposes of this framework, a law enforcement agency “uses” an AI system when it operates or accesses an AI system or data derived therefrom, regardless of whether such system or data is owned by a law enforcement agency, another public agency, or a private individual or entity.
D. Exception for certain accountability tools. This framework does not apply to any AI system or component of such system used solely for the purpose of meeting auditing, compliance, and/or transparency requirements imposed by law.
Editor’s Note
The purpose of this section is to clarify the scope of the framework.
AI technologies vary widely in the extent to which they may impact individual rights. Some tools, such as face recognition and predictive policing systems, raise serious concerns about privacy intrusions, inaccuracy, bias, and more. Other tools, such as those that automate administrative tasks like scheduling, pose few such risks.
Accordingly, this Section is intended to ensure a proper regulatory focus on rights and safety-impacting uses of AI. Subsection A limits the scope of the framework to the subset of AI systems used by police for crime-fighting and incident response applications. Subsection B then enumerates specific operations or functions that are considered “rights or safety-impacting,” and thus should be subject to regulation. Subsection D clarifies that the framework does not apply to certain accountability-enhancing tools or features. For example, a component of an AI system that analyzes user activity to detect potential misuse would not be subject to framework requirements.
Notably, the framework includes within its scope law enforcement use of privately-owned AI tools. Police reliance on privately-owned surveillance is a growing trend — many agencies, for example, have access to vehicle surveillance systems owned and operated by entities such as homeowner associations. Subsection C is meant to ensure that such access is covered by regulation, regardless of system ownership.
-
This section describes the regulatory authorities needed to implement a public safety AI law.
A. Creation of Regulatory Authorities. Legislation should create a set of regulatory authorities to implement the Public Safety AI Law, including authority to adopt rules, administer programs, and conduct enforcement activities, in accordance with the State Administrative Procedure Act.
B. PSAIA. The agency or agencies to which this authority is delegated are referred to throughout this framework as the “Public Safety AI Authority,” or “PSAIA.” This may take the form of expanding the authority of an existing agency or agencies (such as the Office of the State Attorney General) or establishing a new agency.
C. Establishment of New Agency. If the PSAIA is to be housed in a new agency, legislation should provide:
a. That, in order to ensure a diversity of experience and viewpoints, the membership of the PSAIA include:
i. The State Attorney General or their designee;
ii. The Commissioner of the State Police or their designee;
iii. The State Public Defender or equivalent, or their designee;
iv. The State Chief Information/Technical Officer or their designee; and
v. Members of civil rights and civil liberties organizations.
b. That official action by the PSAIA requires the approval of a majority of its members.
c. That the PSAIA be designated a budget adequate to employ and fix the compensation of such professional and clerical assistants and other employees as deemed necessary for the effective conduct of its work.
D. Advisory Council. Legislation should establish an advisory council consisting of experts specializing in law, criminology, ethics, and computer science to advise the PSAIA.
E. Rulemaking. Legislation should empower the PSAIA to adopt, amend, or rescind rules to implement the State AI Law, pursuant to the procedures set forth in the State Administrative Procedure Act.
F. Program Administration. Legislation should empower the PSAIA to administer the programs created by the State AI Law, including the Pilot Support Program and Regulatory Sandbox (see Section 14: Responsible Innovation) and the External Audit Program (see Section 13: Enforcement).
G. Enforcement. Legislation should give the PSAIA the power to enforce the State AI Law. For example, legislation should provide that:
a. The PSAIA shall establish procedures for the collection and processing of complaints alleging potential violations of the State AI Law;
b. The PSAIA may, at its discretion, initiate and conduct investigations to determine whether any person or entity has violated, is violating, or is about to violate the State AI Law;
c. In the course of such investigations, the PSAIA has the power to administer oaths and affirmations; subpoena witnesses and compel their attendance; and subpoena any other material and relevant records and other evidence;
d. The PSAIA has authority to conduct hearings in accordance with the State Administrative Procedure Act to determine whether the State AI Law has been violated;
e. The PSAIA shall have the authority to issue final orders that are binding upon the parties and enforceable as provided by law, and that in circumstances in which serious imminent or ongoing harm has been demonstrated, the PSAIA may issue orders granting injunctive relief prior to a final disposition;
f. The PSAIA is authorized to order remedies, including but not limited to monetary damages (as described in Section 13: Enforcement), against parties found in violation of the provisions of the State AI Law; and
g. Any party aggrieved by a final decision of the PSAIA may seek appellate review in the intermediate court of the State.
Editor’s Note
This Section proposes the creation of a set of regulatory authorities to implement the Public Safety AI Law, including authority to adopt rules, administer programs, and conduct enforcement activities, in accordance with the State Administrative Procedure Act. These authorities may be vested in a new agency or an existing agency, or delegated amongst a set of agencies.
There are significant advantages to delegating certain policy decisions to an administrative agency. First, although the Framework sets forth general rules, many of the details would benefit from further development at an agency level. Detailed testing protocols, for example, might best be fleshed out in rulemaking, with input from computer scientists and others with technical expertise. Second, delegation may enable regulation to be more responsive to changes in technologies and use contexts over time.
Accordingly, this Section envisions the creation of a Public Safety AI Authority, tasked with rulemaking and the enforcement of the state AI law. This entity or entities also would administer the various programs proposed in this framework, including external auditing and programs to encourage responsible innovation.
-
This section requires that decisions about whether to deploy rights- or safety-impacting AI systems be made in a manner that ensures accountability to the public.
A. Authorization Required. Legislation should require authorization for any use of Covered AI Systems and should provide that any unauthorized use is unlawful.
B. Forms of Authorization. AI systems can be authorized in a number of ways, including:
i. State-Level Authorization. AI systems and/or capabilities could be authorized through state legislation, for all law enforcement agencies within the state. This could serve as a regulatory floor, with localities having the ability to add additional requirements or restrictions provided they are consistent with state law.
ii. Local Authorization. In the absence of state-level authorization, local governments (e.g., a city council) could authorize the use of AI systems and/or capabilities. (Under preemption principles, any state prohibitions on AI systems and/or capabilities would override local authorization.)
iii. Regulatory Authorization. A state might authorize the use of a system or capability in general terms, but delegate more specific authorization decisions to the PSAIA.
a. For example, enabling legislation might authorize general categories of AI systems (e.g., “biometric technologies” or “predictive technologies”), and delegate to the PSAIA the decision whether to authorize specific tools (such as gait recognition or person-based predictive policing).
b. As new categories of AI systems emerge, legislators could supplement this initial set of delegations (i.e., enable the PSAIA to authorize specific tools within these new categories).
iv. Agency “Self-Authorization.” Similarly, a state might authorize general categories of AI systems and delegate to individual law enforcement agencies the decision whether to deploy specific tools. Procedures could be implemented to enhance democratic accountability around these decisions, such as requiring agencies to provide notice and solicit public comment.
C. Benefit-Cost Analysis. Regardless of which authorization process is chosen, the decision whether to authorize an AI system or capability always should be based on a holistic evaluation of benefits and costs, including proof of efficacy and assessment of whether a system is fit for its intended purpose, as well as any risks to privacy, equity, and other values, as discussed in Section 4: Assessment.
Editor’s Note
It is a foundational principle of American governance that executive agencies, including law enforcement agencies, must have legislative authorization for their activities. Whether to deploy technologies that could impact individual rights and liberties is a substantial policy question for the people’s elected representatives to decide. Accordingly, the purpose of this section is to ensure that these decisions are made in a democratically accountable manner.

Subsection A codifies the constitutional rule requiring legislative authorization for agency action. See INS v. Chadha, 462 U.S. 919, 953 n.16 (1983) (“[T]he Executive’s administration of the laws . . . cannot reach beyond the limits of the statute that created it.”).
Subsection B provides a menu of options for how systems and/or capabilities might be authorized democratically. Two of these options entail legislative authorization of specific systems and/or capabilities, while the other two entail legislative authorization in general terms, with specific decisions being delegated to the PSAIA or law enforcement agencies themselves.
Finally, Subsection C sets forth a general standard for an authorizing entity’s decision, including the evaluation of benefits and risks discussed in Section 4: Assessment.
-
This Section sets forth requirements for assessment of Covered AI both prior to approval and during deployment to ensure it is safe, reliable, effective, and equitable. It aims to balance the need for meaningful evaluation with a process that is not so onerous as to be unworkable.
A. Assessment Requirements. Legislation should require that prior to approval, agencies seeking to deploy Covered AI complete an assessment demonstrating that it is safe, reliable, effective, and equitable. Existing systems also should be assessed within a reasonable period of time after the law is enacted. At a minimum, legislation should require the deploying agency to:
i. Describe the intended legitimate law enforcement purpose(s) and deployment conditions for use.
ii. Demonstrate that the system is fit for its intended purpose(s).
a. At a minimum, this should include a showing that the Covered AI performs as expected according to generally accepted performance metrics for the current state of the art for the system.
iii. Articulate the anticipated benefits of use, including but not limited to how the AI will promote public safety and/or improve law enforcement mission effectiveness, and any positive impacts on transparency, public trust, and protection of fundamental rights.
iv. Articulate the anticipated costs, as well as the means of mitigating any significant potential risks. Costs should include, but not be limited to:
a. Actual costs of use, such as the monetary costs of acquisition, operation, training, and use; and
b. Risks to privacy and other constitutionally-protected interests, including any potential for unlawful discrimination or harmful bias, and negative impacts on transparency and public trust, increased criminalization, and indiscriminate surveillance.
B. Waivers.
i. Legislation should provide or direct the PSAIA to develop a process and standards by which, in limited circumstances, an agency may receive a waiver from one or more of the requirements in this section, including, but not limited to, when:
a. The Covered AI is substantially similar in design, intended use, and deployment conditions to other AI that previously has been authorized by legislation or approved by the PSAIA; or
b. Completing the assessment would increase the risk to rights or safety or would create an insurmountable impediment to an agency’s critical public safety operations. 1
ii. Waiver requests and approvals should be documented publicly.
C. Facilitating Assessment and Reducing Administrative Burden. To facilitate assessment and ensure it is not overly burdensome, legislation should provide for necessary or helpful resources, including:
i. Establishing or directing the PSAIA to establish procurement standards that require vendors to provide adequate documentation of system capabilities and limitations, including test results demonstrating accuracy across demographic groups present in the deployment environment under intended real-world deployment conditions, and requirements for vendors to include transparency and monitoring features to assist agencies with their compliance obligations.
ii. Charging the PSAIA with developing simple digital templates for assessment, including guidance on how to determine benefits and risks, sample approval profiles, common use case descriptions, and best practices for measuring performance.
D. Standard for Approving Use of Covered AI. Where decisions to approve Covered AI are made by the PSAIA or by law enforcement agencies themselves (see Section 3: Approving AI Systems), legislation should require that the approval entity ascertain that the anticipated benefits justify the potential risks, bearing in mind that certain risks or potential harms may not be justifiable, such as when an agency is unable to adequately mitigate any associated risk of unlawful discrimination against protected classes.
E. Promoting Pilots, Limited Releases. Legislation should address or direct the PSAIA to address how limited releases or pilot deployments may be used to inform the assessment process (see Section 15: Responsible Innovation).
F. Ongoing Monitoring. In addition to pre-deployment assessment, legislation should require that agencies continue to assess Covered AI periodically throughout deployment to monitor its performance. Monitoring should review whether the intended purposes, deployment conditions, and benefits and costs of use have changed materially since the initial assessment (see Subsections A(i)-(iv)). Agencies should document the results of this monitoring and share them with the legislative body or the PSAIA to demonstrate compliance.
i. Post-deployment monitoring should incorporate reporting on the outputs and effects of actual use as required by Section 11: Documentation and Reporting.
ii. Legislation should balance the need for periodic monitoring with the administrative burdens of conducting this monitoring and implement practices intended to reduce this burden.
iii. Where approval decisions are made by the PSAIA or by law enforcement agencies themselves, legislation should require the PSAIA to ensure that agencies halt the use of any AI for which post-deployment assessment and monitoring demonstrates that the benefits of use do not justify the costs.
G. Supporting Research for Evaluation. Legislation should support independent research and evaluation of public safety AI by funding public safety agency partnerships with qualified academic and research institutions to develop evaluation methods and protocols for these systems.
1 See OMB, Advancing Governance, Innovation, and Risk Management for Agency Use of AI, https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf.
Editor’s Note
This section provides a framework to ensure that any approved rights- and safety-impacting AI is safe, reliable, effective, and equitable.
Subsection A establishes the basic assessment requirements that agencies must meet to deploy rights- and safety- impacting systems. Jurisdictions may mandate these requirements through legislation or may choose to direct the PSAIA to issue regulations effecting these goals. The legislature or PSAIA will have to define what constitutes a “reasonable period of time” for completion of assessments for existing systems, though this period should not exceed one year.
Subsection A(i) requires agencies to create a record of the intended purpose(s) and deployment conditions of the Covered AI, which should include when, where, by whom, and for whom the AI will be used, what decisions it will make or support, and any prohibited uses.
Subsection A(ii) addresses the issue of efficacy by requiring a demonstration that the Covered AI is fit for its intended use. Measuring performance of Covered AI Systems is an open sociotechnical problem and there is no single metric to prove efficacy. Cf. NIST AI RMF (“The current lack of consensus on robust and verifiable measurement methods for risk and trustworthiness, and applicability to different AI use cases, is an AI risk measurement challenge.”). Various different metrics may be relevant to this inquiry and their relevance may depend on the particular system in use. See NAIAC Field Testing (discussing various metrics for measuring operational performance of AI tools used by law enforcement). Rather than privileging a single metric, such as accuracy (which itself does not have a consensus definition), this subsection relies on a flexible standard of proof that the AI “performs as expected” to ensure performance measurement is tied to general expectations and the current “state of the art” regarding evaluating efficacy and preventing harmful bias. See NIST Special Publication 1270.
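To make this concrete, an assessment might report several standard measures side by side rather than a single headline number. The sketch below is purely illustrative (the function, the metric set, and the toy data are assumptions, not anything prescribed by this framework) and assumes a binary-output system, such as an alert/no-alert detector, evaluated against ground truth gathered under intended deployment conditions.

```python
# Illustrative only: computes several common performance measures for a
# hypothetical binary-output AI system (e.g., an alert/no-alert detector),
# given ground-truth labels collected under intended deployment conditions.

def performance_summary(y_true, y_pred):
    """Return several common metrics; none is singled out as 'the' measure of efficacy."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    total = tp + tn + fp + fn
    return {
        "accuracy": (tp + tn) / total if total else None,
        "precision": tp / (tp + fp) if (tp + fp) else None,           # how often alerts are correct
        "recall": tp / (tp + fn) if (tp + fn) else None,              # how often true events are caught
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else None,
    }

# Example with a small, made-up evaluation set.
print(performance_summary([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 1, 0]))
```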
Subsections A(iii)-(iv) contain nonexclusive lists of anticipated benefits and costs. Typically, these accountings should include quantitative or qualitative data—such as crime statistics, measured community sentiment, calls for service, and contract costs. In some cases, the anticipated benefit of deploying a covered AI may be related to organizational effectiveness rather than the mitigation of a public safety problem—for example, AI used to review body-worn camera footage to determine if officers engaged respectfully during citizen encounters.
Subsection C recognizes the need for standardizing assessment to ensure consistency and feasibility and promotes reliance on model documents and procurement standards. Existing templates and guidance documents can serve as useful models, including the GovAI Coalition’s AI Fact Sheet and Use Case Template, OMB’s AI Use Case Inventory template and Responsible Acquisition of AI Procurement Standards, and the National AI Advisory Council’s field testing checklist for law enforcement AI tools.
Subsection D establishes requirements for assessment approval when the authorizing entity is not the legislature. It requires a demonstration that the anticipated benefits justify the costs and acknowledges there may be circumstances in which certain costs cannot be justified because they impermissibly infringe on fundamental rights. Because there is no consensus methodology for conducting holistic cost-benefit analysis of public safety AI, this section does not prescribe any particular method, but rather requires that the approving entity’s determination be transparent and explainable.
Subsection E recognizes that limited deployments sometimes may be necessary to obtain the information required for meaningful assessment. Accordingly, it instructs lawmakers to facilitate pathways for pilots or limited releases in accordance with Section 15: Responsible Innovation.
Subsection F recognizes the need for ongoing assessment of Covered AI as these systems and the conditions on the ground change over time in ways that may affect performance. Legislation may directly mandate these monitoring requirements or may choose to direct the PSAIA to issue regulations effecting these goals.
-
This Section provides that legislation should (a) generally require law enforcement agencies to conduct reasonable human oversight of AI systems and (b) incentivize vendors to develop oversight-enhancing features.
Salon Note: The purpose of requiring human oversight of AI systems is to ensure their sound operation — both in their technical performance and in ensuring that their use comports with individual rights and liberties. This Section calls for a human check or confirmation of AI outputs where appropriate, and for other ways of monitoring the system when such review is not appropriate.
A. Meaningful Human Oversight Required. Legislation should require that agencies conduct meaningful human oversight of AI systems to ensure they work properly and in a manner that minimizes potential harms.
B. Forms of Human Oversight. Legislation should task the PSAIA with determining the appropriate form of human oversight for each type of AI system, based on what is practicable. These forms of human oversight may include:
i. Officer Review. Officer review of a system output before action is taken based on that output (for example, confirming a license plate number before conducting a stop based on a license plate reader alert).
ii. Expert Review. Third-party corroboration of a system output before action is taken based on that output (for example, for face recognition, employing a forensic examiner to see if they independently would reach the same result as a facial recognition system).
iii. System Monitoring. Routine monitoring of systems to detect anomalies that may indicate system faults (for example, large, unexpected deviations in the number of gunshots detected by a city’s gunshot detection system).
C. Exceptions. When human oversight is neither practicable nor useful (for example, when an AI system exceeds human capabilities, including when human review would decrease reliability), an AI system should be permitted to operate only if testing establishes that the system meets an exceptionally high performance threshold, to be determined by the PSAIA.
D. Role of Vendors. Legislation should direct the PSAIA to develop procurement rules that incentivize the creation of features that improve oversight and minimize “automation bias” (the tendency of human decisionmakers to disregard contradictory information when offered a computer-generated solution).
E. Training. Legislation should require training for users on how to interpret system outputs and on possible sources of error.
Editor’s Note
The purpose of human oversight is to ensure the sound operation of AI systems — both in their technical performance and in ensuring that their use comports with individual rights and liberties. This Section imposes those sorts of “human in the loop” protections.
There is no “one-size-fits-all” form of human oversight. Subsection B tasks the PSAIA with determining what form of oversight is appropriate for each type of AI system.
For some systems, such as license plate readers, oversight may be as simple as an officer confirming plate and vehicle information before conducting a stop — i.e., “officer review” (see Subsection B(i)).
Other, more complex, systems such as face recognition may require independent third-party examination — i.e., “expert review” (see Subsection B(ii)). Even when these are used, care must be taken to avoid “automation bias” (the tendency of human decisionmakers to disregard contradictory information when offered a computer-generated solution).
In some cases, problems may become evident only when looking at a system’s aggregate performance over time — i.e., “system monitoring” (see Subsection B(iii)).
This is not an exhaustive list, and the PSAIA might consider the propriety of other approaches, such as “red-teaming,” in which an individual or group challenges a system output to integrate critical and contrarian thinking into the decision-making process.
The oversight measures to be promulgated by the PSAIA should make clear when oversight must take place. For example, an alert generated by crime-detection software should be reviewed before an officer is dispatched; an AI-generated police report should be reviewed within a relatively short timeframe after an incident has taken place.
Subsection C recognizes that human oversight is not always practical and/or useful. For example:
A system may be designed in a way that makes it impossible for humans to understand how the system reached its conclusion.
Human review might technically be possible but prove burdensome or impracticable.
When an AI system is more capable than humans, human review could decrease accuracy, validity, and/or reliability.
In such cases, legislation should permit an AI system to operate only if testing establishes that the system performs at a very high level. The framework tasks the PSAIA with determining the appropriate threshold.
Subsection D concerns the way that AI systems can be designed to facilitate oversight. Procurement rules set by the PSAIA should incentivize vendors to develop oversight-enhancing features, which might include:
“Confidence levels,” which show how certain a model is in its output (for example, whether a vehicle surveillance system has a high or low level of confidence in a license plate scan).
“Feature maps,” which highlight to the user exactly which information or data a system relied on to reach its conclusion (for example, highlighting the sections of a video an AI system relied on to determine that certain activity was “suspicious”).
“Out-of-distribution detection,” which detects when an input was not in the model’s training data (for example, a gun-detection system recognizing that it has encountered something it has never seen before, such as a gun-shaped pipe, and flagging this for human overseers).
Explanation of reasoning (for example, body-worn camera analytics describing the specific reasons an officer interaction was flagged as problematic).
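To illustrate how features like those above might be surfaced to a human reviewer, the sketch below shows a hypothetical output record carrying a confidence level, the evidence relied upon, and an out-of-distribution flag. The field names, the threshold, and the example values are assumptions for illustration only, not any vendor's actual interface.

```python
# Hypothetical sketch: how oversight-enhancing features might be surfaced to a
# human reviewer alongside a system output. Field names and thresholds are
# illustrative assumptions, not any vendor's actual interface.
from dataclasses import dataclass, field

@dataclass
class ReviewableOutput:
    conclusion: str                                        # e.g., "possible firearm detected"
    confidence: float                                      # model confidence in the conclusion (0.0-1.0)
    evidence_regions: list = field(default_factory=list)   # data relied upon ("feature map")
    out_of_distribution: bool = False                      # input unlike anything in training data
    rationale: str = ""                                    # plain-language explanation of reasoning

    def needs_heightened_review(self, min_confidence: float = 0.90) -> bool:
        # Flag outputs for closer human scrutiny when confidence is low or the
        # input appears unlike the model's training data.
        return self.confidence < min_confidence or self.out_of_distribution

output = ReviewableOutput(
    conclusion="possible firearm detected",
    confidence=0.62,
    evidence_regions=["frame 1042, upper-right quadrant"],
    out_of_distribution=True,
    rationale="elongated metallic object held at waist level",
)
print(output.needs_heightened_review())  # True: route to a human reviewer before any action
```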
-
This Section provides a regulatory structure for protecting privacy by creating three categories of uses of AI systems: those that are prohibited entirely; those that are permitted pursuant to a judicially issued warrant; and those that do not require a warrant, but for which other safeguards are in place.
A. Privacy-Critical Data. For purposes of this Section, “Privacy-Critical Data” means data regarding an individual’s activities, associations, beliefs, communications, locations, medical history, or movements.
B. Scope. This Section applies to any Covered AI System that processes, produces, analyzes, or otherwise uses Privacy-Critical Data in relation to an “identifiable” person. This applies:
i. When a person is identifiable based on personally identifiable information, or “PII,” or
ii. When a person is identifiable because their identity reasonably may be inferred from non-PII data (for example, location data showing a person’s travels between work and home).
C. Prohibited Uses. Legislation should define, or delegate to the PSAIA to define, the set of systems and/or capabilities that are prohibited because (a) they lack a legitimate law enforcement use and (b) they are inherently intrusive (i.e., by their nature expose sensitive information). These might include, for example, the use of AI to infer religious affiliations or sexual characteristics. See EU AI Act § 5.
D. Uses Permitted with Warrant. Legislation should define, or delegate to the PSAIA to define, the set of systems and/or capabilities that are permitted pursuant to a warrant issued upon probable cause because (a) they do have a legitimate law enforcement use but (b) they are inherently intrusive. These might include the use of forensic tools to extract and analyze cell phone data, the use of location-tracking tools to monitor a person’s movements over a particular course of time, and the use of aerial drones or ground robots to surveil the interior of a private residence.
E. Uses Permitted Without a Warrant. For systems or capabilities that are not prohibited or subject to a warrant requirement, legislation should institute privacy-protecting guardrails, including:
i. Guardrails regarding sensitive locations and activities. Special rules, such as documentation and supervisor approval requirements, should be implemented before Privacy-Critical Data can be collected in or around sensitive locations (such as houses of worship) or sensitive activities (such as protests).
ii. Minimization requirements. Regulation should require that agencies access only the data strictly necessary for a specific investigation. Users should be required to justify the scope of any data query with a brief, documented explanation, automatically logged in the system’s audit trails.
iii. Retention. Maximum data retention periods should be established, after which the data would be rendered inaccessible unless specific procedural requirements are met. The PSAIA should be tasked with determining the shortest possible retention period for each type of data (e.g., data used for location tracking, for identification, for predictive systems, and so forth).
Editor’s Note
AI systems can have a profound impact on individual privacy, exposing sensitive details about a person’s life — from their daily habits to their movements and locations over time. Such data can reveal the “privacies of life” — the Supreme Court has held, for example, that location data derived from cell phones “provides an intimate window into a person’s life, revealing not only his particular movements, but through them his familial, political, professional, religious, and sexual associations.” See Carpenter v. United States, 138 S. Ct. 2206 (2018).

Protecting privacy is fundamental to sound AI regulation, and existing frameworks emphasize the importance of privacy in safeguarding “human autonomy, identity, and dignity.” See NIST AI RMF § 3.6. This Section provides guidance to lawmakers on how best to minimize the potential privacy risks posed by AI tools in the public safety domain.

The Section begins by defining as “privacy critical” data regarding individuals’ activities, associations, beliefs, communications, locations, medical history, or movements. Systems that do not handle privacy-critical data fall outside of the scope of this Section. Likewise, systems that do not relate privacy-critical data to an identifiable individual are not covered. For example, although data regarding “activities” and “movements” are considered privacy-critical, a system that automatically detects people entering a park for purposes of estimating the number of people in a crowd, without identifying those individuals, would fall outside of the scope of this Section.
Subsections C and D discuss restrictions on uses that are “inherently intrusive” — that is, uses that by their nature expose sensitive information. Examples of inherently intrusive uses include the use of tools to extract and analyze data from cell phones, or the use of tools to monitor an individual’s activities over time. Cf. Riley v. California, 573 U.S. 373 (2014) (noting that cell phones “hold for many . . . the privacies of life” (cleaned up)); Carpenter v. United States, 138 S. Ct. 2206 (2018).
Subsection C prohibits certain inherently intrusive uses, while Subsection D requires a warrant for others. The difference between the two is whether there is a legitimate law enforcement need for the use in question. When there is no legitimate use — for example, the use of AI to infer religious affiliations or sexual characteristics — the practice should be prohibited. Cf. EU AI Act § 5. When there may be a legitimate use, legislation should require police to obtain a warrant issued upon probable cause.
Subsection E then offers a set of privacy-protecting guardrails for lawmakers to consider for uses that are not inherently intrusive, but that pose a risk to individual privacy. Subsection E(i) addresses the need for special rules around sensitive locations and activities. Certain settings present heightened concerns around “chilling effects” — i.e., that individuals seeking to engage in activity protected by the First Amendment might be deterred from doing so by government surveillance. To address this, legislation might include special rules, such as documentation and supervisor approval requirements, for the use of AI systems to collect privacy-critical data in or around sensitive locations or activities.
Subsection E(ii) proposes a rule that agency users be permitted to access only the data necessary for a specific investigation. For example, if an officer is seeking to confirm a suspect’s location at a specific date and time using license plate reader data, they should be precluded from conducting a broader query for the suspect’s locations at other dates and times. To enforce this rule, users could be required to give a brief explanation for each query of data.
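One way such a rule could be operationalized is sketched below, under assumed names and structure that are not drawn from the framework itself: a data-access layer refuses to run a query unless the user supplies a case reference and a brief justification, and it records both in an audit log.

```python
# Hypothetical sketch: a query wrapper that requires a case reference and a brief
# justification before any data access, and records both in an audit log.
# All names and structures here are illustrative assumptions.
import datetime
import json

AUDIT_LOG = []  # in practice, an append-only, tamper-evident store

def run_query(user_id, case_number, justification, query_params, execute_fn):
    """Run a narrowly scoped query only if it is justified and documented."""
    if not case_number or not justification.strip():
        raise PermissionError("Query denied: case number and justification are required.")
    AUDIT_LOG.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user_id,
        "case": case_number,
        "justification": justification,
        "query": query_params,
    })
    return execute_fn(query_params)

# Example: a license plate lookup limited to the specific time window at issue.
run_query(
    user_id="officer-1234",
    case_number="2025-00417",
    justification="Confirm suspect vehicle location at the time of the reported burglary.",
    query_params={"plate": "ABC1234", "window": ("2025-03-01T20:00", "2025-03-01T22:00")},
    execute_fn=lambda params: [],  # stand-in for the actual data store
)
print(json.dumps(AUDIT_LOG, indent=2))
```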
Finally, Subsection E(iii) discusses retention periods, which limit how long police may access data. Shorter retention periods reduce the amount of data police can access, protecting privacy. Yet it also is the case that premature deletion of data can hinder investigations and result in exculpatory evidence being lost. To balance these concerns, legislation might set a retention period after which data is not destroyed, but “logically deleted” — that is, rendered inaccessible to users unless certain procedural requirements (such as issuance of a warrant) are met. Along similar lines, legislation might provide for the creation of a data trust — an entity and system independent of law enforcement responsible for controlling access to data, to ensure that police access such data only when the procedural requirements are met.
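The concept of logical deletion can likewise be sketched in code. In the hypothetical example below, records past an assumed 30-day retention period are not destroyed but become inaccessible unless a warrant or equivalent legal process is documented; the names and the retention period are illustrative assumptions only.

```python
# Hypothetical sketch of "logical deletion": data past the retention period is
# retained but inaccessible absent documented legal process (e.g., a warrant).
# The 30-day period and all names are illustrative assumptions.
import datetime

RETENTION_DAYS = 30  # the PSAIA would set the actual period per data type

def fetch_record(record, warrant_id=None, now=None):
    now = now or datetime.datetime.now(datetime.timezone.utc)
    age_days = (now - record["collected_at"]).days
    if age_days <= RETENTION_DAYS:
        return record["data"]  # within the retention period: routine access
    if warrant_id:
        return record["data"]  # past retention: access only with documented legal process
    raise PermissionError("Record is past retention; access requires a warrant or equivalent process.")

# Example: a 90-day-old license plate scan is inaccessible without a warrant reference.
old_record = {
    "collected_at": datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=90),
    "data": {"plate": "ABC1234", "location": "Main St & 5th Ave"},
}
try:
    fetch_record(old_record)
except PermissionError as err:
    print(err)
print(fetch_record(old_record, warrant_id="2025-W-0087"))
```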
-
This Section addresses the equitable development and deployment of AI systems.
Salon Note: Decisions about where to deploy policing technologies involve difficult and complex trade-offs. Understandably, agencies often want to deploy a tool in the areas they believe have the highest rates of crime. But this can result in pushback, especially if it means concentrating surveillance in marginalized communities, including Black and brown neighborhoods. This Section tries to tackle the issue of equity in where AI systems are deployed, and also the problem of potential bias in how those systems are designed.
A. Enforcement of Existing Law. To enable better enforcement of existing constitutional protections against discrimination, legislation should authorize the PSAIA to investigate and immediately suspend any use of a Covered AI System that targets an individual or group on the basis of race, religion, or another protected characteristic unless that characteristic is part of a specific suspect description.
B. Enhancing Accountability for Deployment Decisions. Legislation should require agencies to publish a list or map disclosing the general locations of Covered AI Systems. The PSAIA should be tasked with setting flexible and practicable disclosure rules and with determining what the relevant “location” of a system is, based on the type of system. For example:
i. For a system such as a dedicated license plate reader, the location is where that device is located physically.
ii. For software that is not physically integrated with hardware — for example, face recognition software that uses existing CCTV cameras — the location is where the cameras/sensors used in conjunction with the system are located.
iii. For a place-based predictive policing system, the location is where enforcement patterns have changed in response to the system.
C. Nondiscrimination in Deployment Decisions. Legislation should require that any decision to deploy an AI system in a particular targeted area (as opposed to evenly across the jurisdiction) be justified by a sound basis in fact (such as higher crime rates in the targeted area). The PSAIA should develop best practices to assist agencies in accounting for potential bias in the crime data upon which deployment decisions are made.
D. Algorithmic Fairness. Legislation should task the PSAIA, or another appropriate entity, with developing procurement rules to advance algorithmic fairness. These should include:
i. Prohibiting acquisition of a Covered AI System that uses any protected characteristic as a factor or weight, except when necessary to advance a non-discriminatory purpose (for example, the analysis of facial features for purposes of face recognition to ensure accuracy across demographic groups). This should include the use of any “proxies” for a protected characteristic (for example, a head covering serving as a proxy for members of a religious group).
ii. Requiring vendors to address proactively any racial or other disparities in the performance of Covered AI Systems as a condition of procurement. This requirement should include addressing “feedback loops,” in which AI systems trained on biased historical data reinforce that bias. 1
a. These measures may include record-keeping requirements regarding the data used to train systems, the implementation of technical measures to reduce bias, and pre-deployment algorithmic auditing.
Editor’s Note
Decisions about where to deploy policing technologies involve difficult and complex trade-offs. Agencies often deploy technology in the areas they believe to have the highest rates of reported crime. Although this is understandable, the practical consequence often is a concentration of surveillance in marginalized communities, including in Black and brown neighborhoods and areas with lower socioeconomic status. These communities may stand to benefit disproportionately from the technology, but they also disproportionately bear the costs — from privacy intrusions to overenforcement.
Striking the right balance is a significant policy challenge as to which there is no one uniform solution. An equity-focused approach requires accountability around deployment decisions, with attentiveness to the unique needs, circumstances, and desires of different communities, and the imperative not to reinforce existing biases in the criminal legal system.
This accountability can take place only if there is at least some degree of transparency around deployment decisions. For this reason, Subsection B requires agencies to disclose where AI systems are deployed or used. The framework calls for such disclosure in general terms, recognizing that precise disclosure may undermine enforcement efforts (such as disclosing the exact location of a traffic enforcement system).
The framework tasks the PSAIA with developing rules regarding disclosure, including the relevant “location” of a system, which may vary depending on the type of system. It may make sense, for example, to disclose the physical location of a dedicated license plate reader; but in the case of a predictive policing system, it may make more sense to disclose where enforcement patterns have changed in response to the system. The framework’s reference to the practicability of these rules is meant to acknowledge that agencies may find it difficult to comply with disclosure requirements and that the PSAIA should afford them significant flexibility.
Subsection C requires that, in addition to transparency, agencies must have a sound factual basis for any targeting of a particular area in their deployment of an AI system. Cf. ALI Principles of the Law: Policing, § 5.04. Notably, however, this factual basis often will be crime data, which itself may be biased. Historically, low-income neighborhoods and communities of color have often been subject to more policing — and with more policing comes more enforcement and a perception of more crime. For this reason, the PSAIA should develop best practices to assist agencies in accounting for potential bias in the crime data upon which deployment decisions are made. For example, data based on reported crimes may be more reliable than data based on proactive enforcement activities.
Subsection D calls for the PSAIA or another entity to develop procurement rules that advance algorithmic fairness. First, the framework would prohibit the acquisition of any system that expressly factors or weighs a protected characteristic or a proxy for a protected characteristic, unless necessary to advance a non-discriminatory purpose. Legislation, or the PSAIA, should define “protected characteristic” carefully to achieve an appropriate scope — for example, age may be an appropriate characteristic to consider for a system that assesses the risk of recidivism, but race is not.
Second, the framework calls for the PSAIA to develop rules requiring vendors to implement procedural and technical measures to address unintended disparities. Subsection D(ii)(a) tasks the PSAIA with developing record-keeping requirements for training data to help identify potential disparities in system performance (such as the racial disparities found in the performance of face recognition technology, caused by flawed training data). These requirements might include recording information about the data used to train AI systems, why that data was chosen, and how the vendor plans to remedy any bias caused by shortcomings in the dataset.
Subsection D(ii)(b) tasks the PSAIA with setting forth technical requirements for data and system design that promote equity and fairness. These measures may come from the branch of computer science now referred to as Fairness, Accountability, and Transparency, or “FAccT.” Potential techniques include methods to strip out express or proxy references to race in models while preserving accuracy (such as removing racial demographic information from a social media analysis algorithm) and modifying datasets and/or algorithms to account for biased data in predictive policing systems. Resources like NIST and academic publications provide overviews of existing fairness metrics and existing methods to remove bias and feedback loops from predictive policing algorithms. This is a developing field of science, and lawmakers might reference the implementation of “reasonable technical measures given the current state of the art.”
Subsection D(ii)(c) refers to algorithmic auditing for the purpose of identifying risks to equity. Although there are many different kinds of audits, this refers to black-box or empirical auditing, which entails systematically querying or entering inputs, measuring the related outputs, and systematically comparing the results to detect bias. The PSAIA should determine the exact requirements for an effective algorithmic audit for AI systems.
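As a rough illustration of what such black-box auditing involves, the sketch below queries a system with a labeled test set, records which inputs are flagged, and compares flag rates across demographic groups. The group labels, the query function, and the disparity threshold are assumptions an auditor would have to define; the PSAIA would set the actual audit requirements.

```python
# Hypothetical sketch of a black-box (empirical) audit: query the system on a test
# set, measure its outputs, and compare outcome rates across demographic groups.
# Group labels, the query function, and the threshold are illustrative assumptions.
from collections import defaultdict

def audit_flag_rates(test_cases, query_system):
    """test_cases: iterable of (input, group_label); query_system(input) -> True if flagged."""
    flagged = defaultdict(int)
    totals = defaultdict(int)
    for x, group in test_cases:
        totals[group] += 1
        if query_system(x):
            flagged[group] += 1
    return {group: flagged[group] / totals[group] for group in totals}

def max_disparity_ratio(rates):
    # Ratio of the highest to the lowest group flag rate; an auditor might treat
    # ratios above an agreed threshold (e.g., 1.25) as requiring investigation.
    lowest, highest = min(rates.values()), max(rates.values())
    return float("inf") if lowest == 0 else highest / lowest
```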
—
1 See Rashida Richardson, Jason Schultz, and Kate Crawford, Dirty Data, Bad Predictions, NYU L Rev Online (2019), on the operation of feedback loops in predictive systems. For the science behind how feedback loops function in predictive algorithms, see Arvind Narayanan & Sayash Kapoor, AI Snake Oil (2024); Tian An Wong, The Mathematics of Policing; Ensign et al., Runaway Feedback Loops in Predictive Policing.
2 Barton Gellman & Sam Adler-Bell, The Disparate Impact of Surveillance,
Century Foundation (2017). -
This Section sets forth an agency’s obligation to engage with the public around the use of covered AI technologies.
A. Community Engagement Required. Legislation should require law enforcement agencies to take affirmative steps to engage and partner with the communities they serve around the use of Covered AI Systems to promote public safety and community well-being.
i. This Section assumes that community engagement will be undertaken by law enforcement agencies, but legislation could instead task lawmakers or the PSAIA with conducting community engagement if that is more feasible.
B. Standards for Engagement. The appropriate form of engagement may vary greatly depending on various local factors, such as an agency’s size, resources, and relationship with the community. For this reason, the legislative entity with regulatory authority over an agency (for example, a city council) should set minimum standards regarding the means by which that agency conducts community engagement.
C. Form of Engagement. In determining these minimum standards for public engagement, a legislative body should define clearly:
i. When, and how often, community engagement is required;
ii. The form of community engagement; and
iii. How agencies must respond to community feedback.
A legislative entity also should ensure that the chosen methods are inclusive of all populations that may be impacted, especially traditionally underrepresented populations. If resources permit, a standing task force or advisory board could be created to advise on agency use of technology and incorporate community feedback.
Editor’s Note
It is important for policing agencies to engage the public around the use of AI technologies. Agency legitimacy depends on forging ties between the community and police through shared goals and visions of public safety. See American Law Institute, Principles of the Law: Policing (“ALI”) § 1.08. Members of the community, moreover, may have valuable insights about local challenges; they may have important feedback about which practices are likely to be successful and which might have adverse effects or be ineffective. See ALI § 1.08.

Duty to engage. Subsection A sets forth a general duty to engage the public; Subsection B delegates to local lawmakers how this duty is to be satisfied. This accounts for the great variance among agencies and jurisdictions: a variety of factors makes a one-size-fits-all solution untenable, and local lawmakers are best equipped to determine how to ensure adequate community engagement in their communities. Subsection C then details some of the questions that lawmakers should answer about public engagement:
When, and how often, should community engagement be required?
Under one approach, agencies might be required to solicit public feedback prior to seeking approval for use of a tool. This feedback can help agencies address concerns from the outset and help lawmakers assess potential benefits and costs.
Alternatively, or additionally, agencies could be required to seek feedback on their use of Covered AI Systems on a recurring basis. For example, in some jurisdictions it may be feasible for an agency to solicit feedback about its use of Covered AI Systems annually or biannually.
Agencies might also be required to solicit feedback before making major changes in how a technology is used — for example, using a system for a purpose not envisioned when its use was authorized.
What form should community engagement take?
Public meetings might provide an opportunity for back-and-forth dialogue with community members and can serve as an opportunity for agencies to address questions and comments directly. Lawmakers also may wish to consider hybrid (in-person/virtual) meetings, as some individuals may find it difficult to attend in-person meetings due to family or work obligations, or the lack of reliable transportation. Efforts should be made to conduct meetings within impacted communities (as opposed to a model in which community members must come to the seat of power to be heard). And clear and effective notice should be given prior to meetings regarding what is being discussed and how to participate.
Online surveys can be employed to reach a broader audience than would be possible with a public meeting or hearing. Surveys may be better suited to gauging public attitudes in a general way, however, than facilitating a dialogue between police and community.
In some communities, resources might permit the creation of a standing task force or advisory board to advise on agency use of technology and incorporate community feedback. These entities might be structured in a way to enhance community voice, either in ensuring diverse membership or in their serving as a conduit for community perspectives. The creation of such a body could prove valuable, although not all jurisdictions will be equipped to stand one up. And if one is stood up, care should be taken to ensure the body does not become a rubber stamp for the policing agency. See Julian Clark & Barry Friedman, Community Advisory Boards: What Works and What Doesn’t: Lessons from a National Study, 47 Am. J. Crim. L. 2 (2021).
How must agencies respond to community feedback?
Similar to notice-and-comment procedures, agencies could be required to respond to community feedback in a substantive way, such as providing brief responses as to why suggestions from the public were or were not incorporated. This may be especially valuable when an agency is considering deploying a new technology or making a major policy change.
Alternatively, agencies might not be required to respond substantively, but instead be required to disclose the feedback received to lawmakers and/or the public. This transparency can facilitate accountability and lead to legislative changes.
-
The purpose of this Section is to require appropriate disclosure of the use and operation of AI systems to those arrested or prosecuted.
A. Disclosure of Use. Legislation should require disclosure to arrestees and, in the event of prosecution, to defense counsel, of any Covered AI System that was used in the course of an investigation or prosecution of that individual.
i. Pursuant to this obligation, prosecutors should be required to make a diligent, good faith effort to ascertain the existence of any information discoverable under this Section and to disclose such information as soon as practicable after arraignment. Policing agencies should be required to include sufficient information about the use of AI Systems in the case files they provide to prosecutors.
B. Disclosure of System Information. Legislation should ensure appropriate disclosure of information regarding how a Covered AI System works whenever such disclosure would meaningfully assist in an individual’s defense. For example, for purposes of state judicial rules governing the admissibility of evidence, legislation might provide as follows:
i. “Notwithstanding any contrary provision of the State Trade Secrets Law, a Covered AI Vendor shall produce under protective order any source code, training data, algorithms, system outputs, documentation, and/or any other relevant materials to defense counsel, upon a showing that such materials would be relevant and material to the court’s determination of whether to admit evidence under [Daubert or Frye].”
Editor’s Note
Although there are existing rules governing the admissibility of evidence in criminal proceedings, see, e.g., Federal Rule of Evidence (“FRE”) 403 (permitting exclusion of relevant evidence for unfair prejudice, misleading the jury, and other dangers); FRE 901 (requiring authentication of evidence); FRE 705 (requiring disclosure of facts and data underlying an expert’s opinion), these rules may be inadequate when it comes to the use of AI systems by law enforcement, for two reasons.
First, AI tools often are employed early in investigations to generate leads. The outputs of these tools are not necessarily “evidence” to be introduced at trial. These outputs might, however, influence the evidence that is introduced at trial in consequential ways.
For example, facial recognition technology, or “FRT,” is used in some criminal investigations to identify suspects. There is a risk that an FRT system could misidentify an innocent individual who looks very similar to, but is not, the person of interest. Because the innocent individual identified by the FRT system has very similar features to the person of interest, this increases the risk that witnesses to whom the individual is shown will also misidentify them. In other words, even if the results of an AI system are not admitted as evidence directly, they still could influence the evidence that is admitted, such as a witness’s identification.
For this reason, Subsection A requires disclosure of AI system usage as it pertains to discovery and admissibility of evidence in trial, so long as it reasonably would further an investigation or prosecution. This might include using Covered AI Systems to identify suspects or weapons, track the locations or movements of individuals, or predict criminality. Subsection A also requires disclosure to arrestees who are not prosecuted, as a means of ensuring accountability and transparency around wrongful arrests caused by reliance on AI systems.
Second, even when AI system use is disclosed, defense counsel may not have the means to understand how the system works, preventing them from challenging the validity of the tool under the applicable evidentiary standards. Companies that develop AI tools often refuse to disclose their systems’ algorithms and testing processes, claiming they are trade secrets. This opacity prevents courts and defendants from understanding potential flaws in the technology, and thus from adequately challenging its accuracy.
For example, in State v. Loomis, the risk assessment tool COMPAS was used at sentencing to deny a defendant probation. See 371 Wis.2d 235 (2016). The defendant challenged this use, claiming the algorithm violated his right to be sentenced using accurate information since the methodology was deemed a trade secret. To ensure due process, Subsection B requires disclosure of system details under an appropriate protective order. See, e.g., Virginia v. Watson, No. FE-2019-279 (Va. Cir. Ct. 2020) (requiring probabilistic genotyping software vendor to disclose source code to defense experts under protective order).
-
This Section requires each law enforcement agency to have in place a policy, or set of policies, governing the use of Covered AI Systems.
Salon Note: The purpose of this section is to create a process through which a uniform floor for AI use policies within a state may be set. It is not intended to create a new set of substantive obligations around how technologies are used; rather, it is meant to ensure that internal agency policies are consistent across the state, and in conformance to the state’s AI law.
A. Development of Baseline Policy. Legislation should require the PSAIA, in consultation with the State Attorney General, State Police Officer Standards and Training Board, and other stakeholders, to develop a Baseline Policy governing the use of covered AI-enabled technologies and to update the Baseline Policy on an annual basis.
B. Details of Policy. The Baseline Policy should include the information, rules, procedures, and standards necessary to ensure compliance with the State AI Law, including, but not limited to:
a. A description of authorized, and prohibited, systems and capabilities, as well as any restrictions on how systems may be used (see Section 3: Approving AI Systems);
b. A description of the types of data collected (e.g., video, audio, biometric), and the rules and procedures related to the collection, sharing, analysis, retention, and other use of this data (see Section 6: Protecting Privacy);
c. Internal auditing requirements (see Section 11: Documentation and Reporting), including who is responsible for conducting audits, how often they occur, and applicable guidelines and standards;
d. Procurement standards, including vendor selection criteria, needs assessment, and processes for community consultation (see Section 4: Assessment and Monitoring and Section 8: Community Engagement);
e. Standards for the equitable deployment of technology, including that race, color, ethnicity, national origin, or any other protected characteristic may not be used as the sole factor for initiating enforcement action (see Section 7: Equity);
f. Training requirements, including training on the operation of an AI system and on the policies governing its use; and
g. Penalties for violations and a description of the agency’s internal process through which incidents are reported and adjudicated.
C. Policy Adoption. A law enforcement agency in the State using covered AI-enabled technologies shall adopt and publish the Baseline Policy and any updates made thereto. The Baseline Policy shall be published on the agency’s website and should be:
a. Easily navigable through a logical structure or table of contents; and
b. Fully searchable through machine-encoded text.
D. Additional Policies. A law enforcement agency in the State may adopt and publish additional policies not inconsistent with the State AI Law. Public engagement around the development of these policies may be required (see Section 8: Community Engagement). Significant policy changes may have an effect on the benefits and/or costs of operating a system, which may warrant reassessment of those benefits and costs (see Section 4: Assessment and Monitoring).
Editor’s Note
The purpose of this section is to create a process through which a uniform floor for use policies within a state may be set. It is not intended to create a new set of substantive obligations around how technologies are used; rather, it is meant to ensure that internal agency policies are in compliance with the state’s AI law.
In the interest of uniformity, and because some agencies may not be equipped to develop full use policies, Subsection A requires the creation of a Baseline Policy by the PSAIA. Depending on what tools are authorized in a state, this Baseline Policy may be a single document or a set of documents specific to a capability, risk, or tool. Adopting the Baseline Policy satisfies an agency’s policy obligations unless the agency uses procedures set by the PSAIA to modify the Baseline.
Subsection B outlines the specific requirements the Baseline Policy must cover to ensure responsible and transparent use.
Subsection C requires each agency within the state to adopt the Baseline Policy and make it easily accessible to the public. Public access to agency policies promotes accountability and informed dialogue among policymakers and the public.
Subsection D allows flexibility for agencies to develop their own additional policies, provided they remain consistent with other legal requirements.
-
This Section sets forth requirements for documentation and reporting of agency use of Covered AI to facilitate assessment and ensure public accountability.
A. Documentation. Legislation should require or direct the PSAIA to establish requirements for documenting sufficient detail regarding each use of Covered AI in order to support assessment of system efficacy (as required by Section 4: Assessment and Monitoring) and public accounting of use.
i. Documentation should include, at a minimum, who used or accessed the system; when it was accessed; for what purpose (e.g., incident or crime type); data inputs and outputs; when applicable, the location of deployment and demographic information if used in connection with a suspect, witness, or victim; and the outcome of use. Many of these details could be recorded automatically by the system, with no need for manual human entry.
B. Reporting. Legislation should require that agencies publish a report summarizing information about their use of Covered AI sufficient to enable assessment (see Section 4: Assessment and Monitoring) and meaningful public scrutiny of use. This information should be easily accessible and interpretable by the public.
C. Facilitating Documentation and Reporting and Reducing Administrative Burden. Legislation should require or direct the PSAIA to require:
i. Procured Covered AI to be designed to the greatest extent possible to be “self-auditing” and interoperable with existing records and data management systems (i.e., built with sufficient capabilities such that law enforcement agencies can fulfill the requirements in Subsections A and B); and
ii. The development of simple, digital templates for documentation and reporting and guidance to assist agencies in effectively meeting these requirements.
Editor’s Note
This section establishes basic requirements for agency documentation and reporting to facilitate assessment and public scrutiny. See ALI, Principles of the Law: Policing § 3.04.
Subsection A establishes basic internal documentation requirements that agencies should complete any time a covered AI system is used. The information tracked may vary by system, but it should always answer basic questions about the manner of use (i.e., who/what/when/where/why/how) and the outcomes. For example, for a vehicle surveillance system, agencies should document which hotlists are in use, a record of each hotlist alert, and any action taken on account of the alert, among other things. For a face recognition search, by contrast, the information documented should include details such as image source, quality, and demographics, and whether or not the search generated an investigative lead.
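To make this concrete, the sketch below shows one way a per-use documentation record might be structured so that it captures the basic who/what/when/where/why/how of use and can be written automatically by the system at the time of use. It is illustrative only: the field names, types, and JSON serialization are assumptions, and the authoritative schema would be set by the PSAIA or the implementing legislation.

# Illustrative only: a minimal, hypothetical schema for the per-use
# documentation described in Subsection A. Field names and types are
# assumptions; the PSAIA would define the authoritative schema.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional
import json

@dataclass
class CoveredAIUseRecord:
    system_id: str                                # which Covered AI System was used
    operator_id: str                              # who used or accessed the system
    accessed_at: datetime                         # when it was accessed
    purpose: str                                  # e.g., incident or crime type
    inputs_summary: str                           # description of data inputs
    outputs_summary: str                          # description of system outputs
    outcome: str                                  # e.g., whether a lead was generated
    deployment_location: Optional[str] = None     # when applicable
    subject_demographics: Optional[dict] = None   # suspect, witness, or victim, when applicable

    def to_json(self) -> str:
        """Serialize the record for storage in a records management system."""
        d = asdict(self)
        d["accessed_at"] = self.accessed_at.isoformat()
        return json.dumps(d)

# Example: recording that a face recognition search generated an investigative lead.
record = CoveredAIUseRecord(
    system_id="frt-vendor-x",
    operator_id="badge-1234",
    accessed_at=datetime.now(timezone.utc),
    purpose="robbery investigation",
    inputs_summary="single probe image, low resolution",
    outputs_summary="candidate list of 5; top match reviewed by analyst",
    outcome="investigative lead generated",
)
print(record.to_json())

A record of this kind could be logged by the system itself at the moment of use and later aggregated for the public reporting described in Subsection B.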
Subsection B promotes public scrutiny via a routine, public reporting requirement that summarizes system-level use information. Although the public generally does not require individual case information, aggregate information on system use and outcomes enables public assessment of efficacy and utility. See ALI, Principles of the Law: Policing § 3.04.
Agencies may satisfy the public reporting requirement in different forms or formats depending on the system(s). For example, agencies may produce yearly narrative reports that present basic statistics (in machine-readable formats) and facts of use for some or all Covered AI. Alternatively, agencies may work with their IT departments or vendors to create digital transparency portals that give a live picture of use for particular systems. Several agencies have set up such portals, including for license plate readers and aerial drones. Ideally, lawmakers or the PSAIA would establish reporting standards to ensure consistency across agencies.
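As a minimal sketch of what machine-readable aggregate statistics could look like in practice, the example below rolls hypothetical per-use records up into system-level counts and emits a flat CSV suitable for a public report or transparency portal. The record fields and the choice of CSV are assumptions made for illustration, not a prescribed format.

# Illustrative only: aggregating hypothetical per-use records into the kind of
# system-level, machine-readable statistics a public report or transparency
# portal might publish. No individual case details are included in the output.
from collections import Counter
import csv
import io

def summarize_uses(records: list[dict]) -> dict:
    """Aggregate per-use records into system-level counts."""
    return {
        "total_uses": len(records),
        "uses_by_purpose": dict(Counter(r["purpose"] for r in records)),
        "uses_by_outcome": dict(Counter(r["outcome"] for r in records)),
    }

def to_csv(summary: dict) -> str:
    """Emit a flat, machine-readable CSV for publication."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["metric", "value"])
    writer.writerow(["total_uses", summary["total_uses"]])
    for purpose, n in summary["uses_by_purpose"].items():
        writer.writerow([f"uses_by_purpose:{purpose}", n])
    for outcome, n in summary["uses_by_outcome"].items():
        writer.writerow([f"uses_by_outcome:{outcome}", n])
    return buf.getvalue()

records = [
    {"purpose": "robbery investigation", "outcome": "investigative lead generated"},
    {"purpose": "missing person", "outcome": "no lead"},
    {"purpose": "robbery investigation", "outcome": "no lead"},
]
print(to_csv(summarize_uses(records)))

The same aggregation logic could feed either a yearly report or a live transparency portal; the underlying per-use documentation is what makes both possible.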
It is also key that reporting be easily accessible to and navigable by the public. Here again there are options for how to achieve this end goal. For example, legislation could require agencies to post their reporting in a legible format displayed prominently on the agency’s website. Or, more helpfully, legislation could establish a unified reporting repository that standardizes and centralizes all agency reporting in a single, easily searchable website. A useful model for such a repository exists at the federal level, where a central website consolidates all federal agency reporting on required AI use case inventories. See AI.gov, Federal AI Use Cases.
Subsection C minimizes the administrative burdens of documentation and reporting in two key ways: (1) by giving procurement preference to vendors that design systems to automatically track use and create automated reports to the extent possible and that solve for interoperability challenges, see ALI, Principles of the Law: Policing § 3.04, and (2) by standardizing documentation and reporting through the creation of templates.
-
The purpose of this Section is to support agencies in complying with the rules in this Framework regarding data.
A. Compliance Support Program. Data is crucial to the functioning of AI systems, and this Framework includes a variety of rules regulating data — such as the data minimization and retention requirements (see Section 6: Protecting Privacy) and the reporting and auditing requirements (see Section 11: Documentation and Reporting). To help agencies achieve compliance with these requirements while minimizing administrative burden, legislation should task the PSAIA with administering a Compliance Support Program.
B. Form of Program. The purpose of the Compliance Support Program is to help agencies implement the Framework’s data requirements in one of the following ways:
i. Regional Data Custodians. Legislation could task the PSAIA with administering the creation of Regional Data Custodians. These Regional Data Custodians would help law enforcement agencies within a particular region ensure the safe storage, management, and protection of data in conformance with the State AI Law.
ii. Private Certifier. Legislation could task the PSAIA with setting up a certification scheme pursuant to which private entities, accredited by the PSAIA, could evaluate data management practices at law enforcement agencies and help agencies remedy any violations.
C. Responsibilities. A Regional Data Custodian or Private Certifier should work with agencies on the following data-related issues:
i. Ensuring the implementation of any privacy-preserving techniques, query documentation requirements, retention periods, logical deletion procedures, and/or other safeguards described elsewhere in this Framework;
ii. Ensuring compliance with the interagency data-sharing requirements described in Section __ [forthcoming], including the execution and enforcement of data-sharing agreements, as well as any requirements related to the sharing of data with non-governmental third parties (such as researchers);
iii. Ensuring that those with access to data have received sufficient training on the requirements of the State AI Law, and assisting agencies in such training and the drafting and implementation of internal policies; and
iv. Advising law enforcement agencies on the impact, from a compliance perspective, of (a) any new system, or modification or update thereof, (b) any new integration, or (c) any new use case.
D. Disclosure. Legislation also should provide that agencies must promptly disclose to the PSAIA any data breach or unauthorized access to or use of data. In the event of such a breach or unauthorized access or use, the PSAIA should be required to disclose information about the event to the public in a manner that reasonably protects privacy, operational security, and the confidentiality of law enforcement sensitive information.
Editor’s Note
Agencies may need support in implementing the Framework’s requirements, especially those agencies with limited technical staff. Throughout the Framework there are rules governing data — from how it is retained to what data must be disclosed, and in what manner. To help agencies achieve compliance and to minimize administrative burden, Subsection A tasks the PSAIA with administering a Compliance Support Program.
Subsection B sets out two possibilities for how legislation might structure a Compliance Support Program. First, the PSAIA (or another appropriate agency) could create regional data custodians — individuals or entities tasked with protecting and managing data. A regional data custodian could oversee data for all agencies within a region, which may be a more practical model than requiring each agency (including very small ones) to have its own data custodian.
Another possibility is the creation of a certification scheme in which private entities could certify agency compliance with regulation and help remediate any violations. Today, many entities exist to assist agencies in complying with Criminal Justice Information Services requirements; private compliance firms could assist agencies in complying with AI regulation in a similar manner.
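As a purely illustrative sketch of the kind of routine compliance check a Regional Data Custodian or certified entity might run when helping agencies implement retention periods and logical deletion procedures (see Subsection C(i) above), the example below flags records past a hypothetical 30-day retention period and marks them as logically deleted. The retention period, the legal-hold exception, and the record fields are all assumptions; actual rules would come from the State AI Law and PSAIA guidance.

# Illustrative only: a hypothetical retention check. The 30-day period, the
# legal-hold exception, and the record fields are assumptions; real retention
# rules and deletion procedures would be set by the State AI Law and the PSAIA.
from datetime import datetime, timedelta, timezone
from typing import Optional

RETENTION_PERIOD = timedelta(days=30)  # hypothetical retention period

def flag_expired(records: list[dict], now: Optional[datetime] = None) -> list[dict]:
    """Return records past the retention period that are not under a legal hold."""
    now = now or datetime.now(timezone.utc)
    return [
        r for r in records
        if not r.get("legal_hold", False)
        and now - r["collected_at"] > RETENTION_PERIOD
    ]

def logically_delete(record: dict) -> dict:
    """Mark a record as deleted (logical deletion) and timestamp the action."""
    record["deleted"] = True
    record["deleted_at"] = datetime.now(timezone.utc).isoformat()
    return record

records = [
    {"id": "plate-001", "collected_at": datetime.now(timezone.utc) - timedelta(days=45)},
    {"id": "plate-002", "collected_at": datetime.now(timezone.utc) - timedelta(days=5)},
    {"id": "plate-003", "collected_at": datetime.now(timezone.utc) - timedelta(days=90),
     "legal_hold": True},
]
for r in flag_expired(records):
    logically_delete(r)
print([r["id"] for r in records if r.get("deleted")])  # prints ['plate-001']

A check of this kind could be run on a schedule, with its results documented to support internal and external auditing.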
-
The purpose of this section is to ensure that any violations of a state’s AI law are identified and remedied promptly.
A. External Audits. To facilitate enforcement of the State AI Law, legislation should require routine external audits of Covered AI Systems. Depending on what is feasible, audits could be performed:
i. By the PSAIA or another agency;
ii. By a private entity certified by the PSAIA; or
iii. By existing entities with oversight authority, where applicable.
The results of these audits, including any violations discovered, should be released publicly.
B. Requirements to Facilitate Auditing. To facilitate this auditing, legislation should require:
i. That agencies being audited provide access to any system, data, records, or personnel necessary for the audit; and
ii. That procurement rules give preference to systems that include reasonable technical measures to enable auditing, such as audit trails that create a record of system use.
C. Remedial Action. Legislation should empower the PSAIA (or another appropriate entity, see Editor’s Note) to order remedial action to correct any instances of noncompliance with the State AI Law, including, in appropriate cases, the power to suspend an agency’s use of a Covered AI System until corrective action has been taken.
D. Financial Penalties. Legislation should include financial penalties, to be imposed by the PSAIA (or another appropriate entity) (a) on agencies found to be in noncompliance with the State AI Law, and (b) on individuals found to use a Covered AI System for personal purposes outside of the scope of their employment.
E. Whistleblower Protections. Legislation should include whistleblower protections for individuals who report suspected violations of the State AI Law to the PSAIA or another appropriate authority.
Editor’s Note
Subsection A calls for external auditing of AI systems to ensure that any violations of applicable law are detected and reported to regulators. Audits could be conducted by the PSAIA or, if it is more feasible, by private certified entities or existing oversight bodies. To facilitate these audits and reduce administrative burden, agencies and/or vendors should be required to implement reasonable technical measures such as the creation of automated audit trails that record system use. Auditing results should be made public, with appropriate redactions in place, if applicable, to protect privacy and sensitive information.
Although external auditing is a best practice to ensure audits are robust and independent, the framework also includes internal auditing procedures and encourages the development of “self-auditing” systems, see Section 11: Documentation and Reporting, Subsection C, in recognition that some jurisdictions may choose not to create an external auditing regime due to capacity or budget concerns.
Subsection C gives the PSAIA the power to order any necessary action to remedy violations, including suspending an agency’s use of an AI system. (The procedures governing the PSAIA’s adjudicative powers are described in Section 2: Regulatory Authorities.) Should legislators prefer not to give the PSAIA such powers, legislation might authorize the PSAIA, the State Attorney General, or another appropriate entity to bring suit in a court of competent jurisdiction in response to violations.
Subsection D calls for the imposition of financial penalties where appropriate. In addition to imposing penalties on agencies, legislation should authorize the imposition of penalties on individuals who misuse an AI system for purely personal purposes.
Legislation should include criteria to guide the PSAIA in assessing the amount of a penalty, with the goal of providing adequate deterrence and redressing harm. These criteria may include (a) the nature and severity of the violation, (b) whether the violation was isolated or part of a pattern of conduct, (c) whether the violation was negligent, reckless, or willful, (d) any damage or harm caused by the violation, and (e) the size and budget of the violating agency.
Legislation might also create a private cause of action, to the extent that existing remedies under state privacy and tort law are insufficient to redress the harm caused by violations. Additionally, legislation potentially could provide for the exclusion from any criminal trial of evidence obtained as a result of a substantial violation of a state’s AI law if lawmakers deem the other remedies provided in this Section insufficient to deter misuse.
-
This Section proposes the development of programs to encourage responsible innovation, including a pilot support program, a pre-development steering program, and regulatory sandboxes.
A. Pilot Support Program. To encourage the responsible development and assessment of novel public safety technologies, legislation should create and fund a pilot support program.
i. The PSAIA should be tasked with selecting technologies that address a public safety need but that have not been widely deployed or studied. Funding should be made available for the study of the system’s benefits and risks by qualified academic or non-profit research institutions.
ii. The conduct of pilot programs should be governed by memoranda of understanding entered into by the PSAIA, a developer, a law enforcement agency, and a qualified research institution. Each agreement would set forth the terms and conditions of the pilot program, including any regulatory requirements from which parties to the agreement are exempt.
iii. The PSAIA should monitor the pilot on an ongoing basis and should be empowered to modify or terminate the pilot, at its discretion, in the event of serious risks or harms.
iv. The pilot should conclude with the issuance of a public report, which should include the findings of the research entities and an independent assessment by the PSAIA as to whether the system, under the conditions in which it operated, complies with applicable law.
B. Pre-Development Steering Program. Legislation should create a steering program in which the PSAIA offers early-stage guidance to developers before they initiate the design of a Covered AI System. This proactive support would help ensure that systems comply with legal standards and are designed with risk mitigation in mind. The program should include the provision of non-binding assessments by the PSAIA, which would offer guidance on whether the proposed design meets legal and ethical requirements and would include recommendations.
C. Regulatory Sandbox. Legislation should establish a regulatory sandbox to be administered by the PSAIA. The regulatory sandbox should be made available to qualified vendors as they develop a new public safety technology, and should have the following features:
i. Regulatory supervision of a product as it is developed, including the provision of guidance to ensure compliance with regulations and to proactively address potential risks;
ii. The furnishing of development resources to the vendor, at the PSAIA’s discretion, including access to testbeds, datasets, and/or technical expertise;
iii. Limited exemptions from certain regulatory requirements, including the waiver of financial penalties for inadvertent violations of the State AI Law; and
iv. A requirement that a system currently participating in the sandbox may not be used in any criminal investigation or prosecution.
Editor’s Note
This section aims to support responsible innovation through a pilot support program, steering program, and regulatory sandbox. It seeks to encourage interdisciplinary collaboration between vendors, law enforcement, research institutions, and regulators.
Subsection A proposes a program to support pilot studies for novel public safety technologies. Pilots offer a number of benefits — they can demonstrate a product’s feasibility, identify problems early before a product is scaled, and help both vendors and regulators understand how a product functions in the real world.
The framework envisions pilot studies operating as a partnership between a developer, a law enforcement agency, a research institution, and the PSAIA. The developer would provide the system, and the law enforcement agency would be permitted to use the system on a time-limited basis. The research institution would conduct a study of the system’s benefits and risks. The PSAIA would provide funding, monitor the study, and determine any appropriate regulatory exemptions. The PSAIA would retain the discretion to modify or terminate the pilot if necessary to avoid risks or harms. The support provided by the PSAIA should be sufficient to ensure robust evaluation and the development of appropriate metrics.
The post-pilot public report described in Subsection A(iv) should include an evaluation of feasibility, benefits and risks, and regulatory compliance. The goal of this report is to provide initial guidance (a) to vendors on any modifications that need to be made to ensure responsible operation of the system and (b) to policymakers on whether the benefits of the system outweigh the costs, and under what policies.
Subsection B proposes a pre-development steering program to encourage developers to ingrain essential values — like transparency, equity, and privacy — from the very beginning of the product lifecycle. The goal is to address potential harms proactively, before doing so becomes more financially and technically challenging. The framework envisions regulators giving vendors feedback that can guide early product designs.
Subsection C proposes the creation of a regulatory sandbox in which vendors develop new tools under close regulatory supervision, and in return receive access to development resources and limited regulatory exemptions. This helps promote responsible development of new products and informs future policymaking.
Regulatory sandboxes have been used successfully to test new police technologies. For example, the UK’s Information Commissioner’s Office (ICO) conducted a sandbox with the Thames Valley Police (TVP) to test “Thames Valley Together”: a centralized, cloud-based data-sharing service that predicts and identifies underlying causes of serious violence.1 Through several rounds of project proposals and revisions, the TVP regularly updated their privacy notice, held public engagement meetings, and informed relevant individuals that their personal data had been obtained. With the ICO’s guidance, the TVP refined their product to enhance public safety and uphold civil liberties.
The regulatory sandbox envisioned in the framework would entail regulatory supervision and guidance to vendors, the furnishing of development resources, and limited regulatory exemptions. Because the purpose of the sandbox is to support innovation only at the development and testing stage, Subsection C(iv) provides that systems participating in the sandbox cannot be used in criminal investigations or prosecutions.
1 “Regulatory Sandbox Final Report: Thames Valley Police,” Information Commissioner’s Office, November 2023, https://ico.org.uk/media/for-organisations/documents/4027506/thames-valley-police-regulatory-sandbox-final-report.pdf.