Accountability


1.    Authorized uses: Regulation should specify with particularity how policing agencies may use robots.

Although some uses of robots are widely accepted, such as ground robots for bomb disposal or aerial drones for search and rescue operations, other uses have engendered much controversy. For example, proposals to weaponize drones and robots have provoked significant backlash in recent years. Jurisdictions also differ greatly as to whether, and to what extent, drones and robots may be used to conduct surveillance. (These particular issues are addressed in detail in Sections II and III of this framework, respectively.)

Decisions about how police robots may be used should be made through the regulatory process, not left to the discretion of individual policing agencies or officers. Policymakers should carefully define the permissible uses, specifying clearly, for example, whether robots may be used only for emergency response or whether they may also be used for purposes such as general surveillance. Regulation also should provide that any data collected for an authorized purpose cannot be used for a purpose that has not been authorized.

 

2.    Data governance: Regulation should specify the types of data robots may collect, how long data may be retained, and which analytics may be used.

Like other surveillance technologies, robots collect data through various cameras and sensors. This data can potentially power a range of analytics, from license plate and facial recognition to anomaly detection.

Left unchecked, this data collection and use gives rise to serious ethical concerns. Analytics that can identify individuals or track their movements present real privacy risks. Absent strong guardrails, collected data can be misused, including for illegitimate purposes. For these reasons and more, lawmakers should ensure that robots, like all surveillance technologies, are subject to sound data regulation. Although the topic of data governance is beyond the scope of this document, our AI Policy Hub explores these issues in detail, and our model Authorized Databases and Police Technology Act lays out minimum standards for data use policies that may be relevant.
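
To make these requirements concrete, the sketch below shows what a machine-readable data governance policy might look like, including the purpose limitation described in item 1. It is a minimal illustration in Python; the data categories, retention periods, and analytics names are hypothetical, not recommendations.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class DataCategoryPolicy:
    """Governance rules for one category of robot-collected data.
    All values used with this class below are hypothetical illustrations."""
    category: str                # e.g., "video", "audio", "lidar"
    retention_days: int          # maximum retention period
    allowed_analytics: set[str]  # analytics authorized for this category
    allowed_purposes: set[str]   # purposes for which the data may be used

# A hypothetical agency policy: short video retention, no facial recognition.
POLICY = {
    "video": DataCategoryPolicy(
        category="video",
        retention_days=30,
        allowed_analytics={"anomaly_detection"},  # facial recognition omitted
        allowed_purposes={"bomb_disposal", "search_and_rescue"},
    ),
}

def check_use(category: str, analytic: str, purpose: str,
              collected_on: date, today: date) -> list[str]:
    """Return the list of policy violations raised by a proposed data use."""
    policy = POLICY.get(category)
    if policy is None:
        return [f"no policy defined for data category '{category}'"]
    violations = []
    if today - collected_on > timedelta(days=policy.retention_days):
        violations.append("data held past its retention limit")
    if analytic not in policy.allowed_analytics:
        violations.append(f"analytic '{analytic}' is not authorized")
    if purpose not in policy.allowed_purposes:
        violations.append(f"purpose '{purpose}' is not authorized")
    return violations

# Example: an unauthorized analytic, an unauthorized purpose, and stale data.
print(check_use("video", "facial_recognition", "general_surveillance",
                date(2024, 1, 1), date(2024, 3, 1)))
```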

 

3.    Transparency, generally: Regulation should require policing agencies to disclose key information about how they use robots. At a minimum, agencies should track and disclose for each deployment:

a.       A description of how the robot was used in furtherance of an authorized purpose;

b.       The nature of the incident;

c.       A summary of the types of data collected and analytics used; and

d.       The disposition of the incident, including any enforcement action.

This information should be recorded in an agency’s records management system or another appropriate platform, and disclosed to oversight bodies, policymakers, and/or the public (for example, through a public website).
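
To illustrate, the minimal sketch below (in Python) shows how the per-deployment disclosures in items (a) through (d) might be structured as a record and serialized for a records management system or public website. The field names and sample values are hypothetical, not drawn from any actual agency system.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class DeploymentRecord:
    """One per-deployment disclosure entry, mirroring items (a)-(d) above.
    All field names and values here are hypothetical illustrations."""
    authorized_use: str      # (a) how the robot furthered an authorized purpose
    incident_nature: str     # (b) the nature of the incident
    data_and_analytics: str  # (c) types of data collected and analytics used
    disposition: str         # (d) disposition, including any enforcement action

record = DeploymentRecord(
    authorized_use="Ground robot used for suspected-explosive assessment",
    incident_nature="Unattended package reported at transit station",
    data_and_analytics="Video and still images; no biometric analytics",
    disposition="Package cleared; no enforcement action taken",
)

# Serialize for the agency's records management system or a public website.
print(json.dumps(asdict(record), indent=2))
```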

 

4.    Transparency around where patrol robots are deployed: In general, legislation should require policing agencies to disclose in advance the locations where a robot used for patrol purposes will be deployed, and the reasons why those locations were chosen.

Accountability around where patrol robots are deployed is important both to guard against the possibility of disparate impact (for example, a pattern of deploying patrol robots in some neighborhoods and human officers in others) and to avoid chilling constitutionally protected activities (for example, deploying patrol robots at a protest or place of worship).

For these reasons, regulation should require agencies to disclose in advance the location of any robot deployed for patrol purposes and the reasons why that location was chosen. An exception might be made where an agency documents significant reasons why a deployment location should not be disclosed publicly.


5.    Human accountability: Decisions about where a police robot navigates/patrols and how it interacts with its environment and with people should be made by human operators. These commands should be recorded automatically in audit logs.

Robots differ in the extent to which they can operate without human intervention. Some robots are teleoperated, while others can determine the best route to a location and avoid obstacles automatically. Still other robots can conduct patrols in specified areas without active human monitoring.

Regulation should require human operators to exercise a reasonable degree of control over decisions regarding where a robot navigates/patrols, and how it interacts with its environment and with people. [1] For example, the decision to send a robot to navigate or patrol a particular area should be made by a human operator, or with a human-in-the-loop if the decision is made with algorithmic assistance. Some navigation decisions might be made autonomously, however (e.g., decisions necessary to avoid obstacles or hazards). The goal is to ensure meaningful human oversight and accountability in decision-making.
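
As a hypothetical illustration of such automatic audit logging, the sketch below records every operator command before it is dispatched, including whether the command was suggested by software and confirmed by a human-in-the-loop. The names and fields are ours, not a description of any actual system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CommandLogEntry:
    """An append-only audit record of one operator command.
    All fields are hypothetical illustrations."""
    timestamp: str
    operator_id: str
    command: str               # e.g., "patrol_area", "approach_subject"
    parameters: dict
    algorithm_suggested: bool  # True if the command was proposed by software
    human_approved: bool       # human-in-the-loop confirmation

AUDIT_LOG: list[CommandLogEntry] = []

def issue_command(operator_id: str, command: str, parameters: dict,
                  algorithm_suggested: bool = False) -> None:
    """Record the command automatically before dispatching it to the robot."""
    AUDIT_LOG.append(CommandLogEntry(
        timestamp=datetime.now(timezone.utc).isoformat(),
        operator_id=operator_id,
        command=command,
        parameters=parameters,
        algorithm_suggested=algorithm_suggested,
        human_approved=True,  # every logged command requires operator action
    ))
    # ... dispatch the command to the robot here ...

issue_command("officer_123", "patrol_area",
              {"area": "transit_plaza"}, algorithm_suggested=True)
print(AUDIT_LOG[0])
```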


6.    Communications: Two-way communications made through use of a robot should be conducted only by human operators who, at the beginning of the interaction, identify themselves and the agency with which they are associated.

Some robots are equipped with speakers and microphones, enabling communication with individuals. When individuals communicate with a police robot, it should be clear that a human is on the other side of the interaction and who that person is. This provision is intended to apply only to two-way communications, as opposed to general announcements that police might make through a robot in the event of an emergency or hazard.
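
The sketch below is a hypothetical illustration of this requirement: a two-way communications session that refuses to relay messages until the operator has identified themself and their agency. One-way emergency announcements would be handled outside such a session. All names are invented.

```python
class TwoWayCommSession:
    """A robot communications channel that refuses two-way messages until
    the human operator has identified themself and their agency.
    This is a hypothetical sketch, not any vendor's actual API."""

    def __init__(self, operator_name: str, agency: str):
        self.operator_name = operator_name
        self.agency = agency
        self.identified = False

    def open(self) -> str:
        """Begin the interaction with the required identification."""
        self.identified = True
        return (f"This is {self.operator_name} of the {self.agency}, "
                "speaking through this robot.")

    def say(self, message: str) -> str:
        """Relay a message only after identification has occurred."""
        if not self.identified:
            raise RuntimeError("operator must identify before communicating")
        return message

session = TwoWayCommSession("Officer Rivera", "Springfield Police Department")
print(session.open())
print(session.say("Please step away from the cordoned area."))
```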

 

7.    Auditing: Regulation should task appropriate agencies or officials (for example, state attorneys general or inspectors general) with conducting routine audits of agency use of police robots to ensure compliance with all applicable laws and policies.


8.    Standards and Testing: An appropriate entity (such as a government agency or a private standards-setting body such as IEEE or A3) should establish minimum physical safety thresholds to ensure the safe operation of robots in close proximity to people (e.g., a robot on a public street).

Regulation also should require robust, independent testing for compliance with these standards. This testing might be performed by an appropriate federal agency or through an independent certification regime. Vendors also should be required to provide data validating any safety or efficacy claims made in marketing materials.
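
For illustration, the sketch below shows how recorded telemetry might be checked against minimum safety thresholds during testing. The threshold values and names are hypothetical and do not reflect any actual IEEE, A3, or regulatory standard.

```python
# Hypothetical thresholds for illustration only.
MAX_SPEED_NEAR_PEOPLE_MPS = 1.0  # max robot speed inside the standoff zone
MIN_STANDOFF_DISTANCE_M = 2.0    # required clearance from bystanders

def check_telemetry(speed_mps: float, nearest_person_m: float) -> list[str]:
    """Flag telemetry readings that would violate the safety thresholds."""
    violations = []
    if nearest_person_m < MIN_STANDOFF_DISTANCE_M:
        violations.append(
            f"standoff {nearest_person_m} m is below the "
            f"{MIN_STANDOFF_DISTANCE_M} m minimum")
        if speed_mps > MAX_SPEED_NEAR_PEOPLE_MPS:
            violations.append(
                f"speed {speed_mps} m/s exceeds the near-person limit of "
                f"{MAX_SPEED_NEAR_PEOPLE_MPS} m/s")
    return violations

# A test harness could replay recorded telemetry against these checks.
print(check_telemetry(speed_mps=1.8, nearest_person_m=1.2))
```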

 

9.    Clawback: Vendors should be required to have the means to disable and/or claw back robots or robot features from agencies using them in violation of applicable law or terms of service.
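
A minimal sketch of what such a clawback capability might look like appears below: the vendor maintains per-agency feature entitlements that can be revoked, and the robot checks its entitlements before activating a feature. All identifiers are hypothetical.

```python
# Vendor-side registry of feature entitlements, keyed by agency.
ENTITLEMENTS: dict[str, set[str]] = {
    "agency_42": {"teleoperation", "autonomous_patrol", "two_way_audio"},
}

def revoke_feature(agency_id: str, feature: str) -> None:
    """Vendor action: disable a feature for an agency found in violation."""
    ENTITLEMENTS.get(agency_id, set()).discard(feature)

def feature_enabled(agency_id: str, feature: str) -> bool:
    """Robot-side check performed before activating a feature."""
    return feature in ENTITLEMENTS.get(agency_id, set())

revoke_feature("agency_42", "autonomous_patrol")  # e.g., after a TOS breach
print(feature_enabled("agency_42", "autonomous_patrol"))  # False
print(feature_enabled("agency_42", "teleoperation"))      # True
```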


10.  Enforcement: Regulation should include enforcement mechanisms to ensure agency compliance. This could take the form of a private right of action for harms caused by violations, or the creation of an independent oversight body with the authority to redress such harms.


[1] Because this technology is still relatively new, courts likely have not yet had occasion to define this level of control with precision. In undertaking that task, policymakers and courts could look to the existing science of human-computer interaction and human-robot interaction for guidance.