FORCE

1. Moratorium on robotic use of force: Regulation should prohibit policing agencies from using robots to deploy force against a person. To this end, regulation should:

a. Prohibit agencies from equipping a robot with a weapon to be used against a person;

b. Prohibit agencies from using a robot, or any accessory or payload, in a manner reasonably likely to cause death or injury to a person; and

c. Prohibit agencies from using a robot, or any accessory or payload, to physically threaten a person.

The prospect of police using robots to deploy force has attracted significant attention and controversy. In 2016, police in Dallas used a bomb-equipped robot to kill an active shooter. Since that time, some in law enforcement have defended the use of robots to deploy lethal force in certain circumstances (although police in the United States have not used lethal force by robot since the Dallas incident).

Proponents typically point to a possible emergency in which a lethally equipped robot is needed to prevent death or grievous injury to officers or other individuals, for example in the case of an active shooter. They often disavow any present intention to deploy such robots.

In the view of others, lethal weapons should be off the table, but robots equipped with less-lethal weapons could potentially save lives. For example, by allowing officers to engage a suspect from a safe distance, force-equipped robots might enable police to use less-lethal weapons or irritants instead of conventional firearms, thereby ending incidents without taking life or causing serious injury.

Opponents of weaponization have many reasons for their opposition, among them that equipping robots with weapons could result in an increase in the use of force overall. One reason for this concern is the potential for “dehumanization”: police may be more prone to use force when a subject appears as a figure on a computer screen rather than as a person in a face-to-face encounter. There are also serious operational concerns: weapons could misfire and injure innocent people, and systems could be hacked by bad actors.

The absence of regulation has negative ramifications for those on all sides of the debate. Proponents of weaponization do not want to take what they perceive to be a valuable tool off the table, yet that is precisely what may happen if unregulated attempts at weaponization lead to harm and public backlash (including, potentially, backlash against non-weaponized robots, such as those used for search and rescue). Opponents of weaponization fear that regulation will permit agencies to deploy weaponized robots, yet under the status quo (i.e., no regulation), there is little standing in the way of agencies deploying such tools today.

With all this in mind, our policy framework calls for a moratorium on the use of robots to deploy force, both lethal and less-lethal, for two reasons. First, entrenched structural problems in policing, including the lack of effective oversight and the various ways in which police are shielded from liability for wrongdoing, make the use of robots to deploy force unreasonably risky at this time. Second, more research is needed to understand the impact of force-equipped robots on individuals and individual rights. Such research might come from the field of human-robot interaction, from science and technology studies, or from other disciplines that focus on socio-technical systems.

One approach would be for the moratorium to remain in effect until a task force or working group has studied the issue, enabling a fuller assessment. That task force might be charged to:

a. Conduct a landscape analysis of force-equipped drones and robots, including products currently on the market and those in development, domestically and internationally.

b. Establish appropriate criteria to assess force-equipped drones and robots (for example, around safety and accuracy, accountability and transparency features, and data security).

c. Explore public sentiment towards force-equipped robots, especially in the communities in which such robots are most likely to be used.

d. Identify what safeguards or regulations would need to be in place before a force-equipped robot could be used, including appropriate use-of-force standards.

e. Study the implications of force-equipped robots for long-term safety and existential risk.

f. Propose legal changes, including the removal of immunity from liability for reckless or negligent conduct that results in injury from force-equipped robots.

If such a task force were created, it should include members of civil society as well as community advocates, to ensure that diverse points of view and the input of those who would be impacted by this technology are considered.

2. Physical contact, outside of intentional use of force: A police robot should not intentionally and without consent make physical contact with a person, cause an object to make physical contact with a person, or obstruct a person’s movement, except when necessary to protect that person’s safety in the context of a search and rescue operation or to prevent individuals from coming into contact with a hazard.