Lost in Translation: A Critical Look at the E.U. AI Act from a U.S. Perspective
Key Takeaways
The E.U. AI Act introduces important measures to mitigate the potential harms of AI systems, including in the law enforcement context. In many ways, the Act could serve as a model for regulation in the U.S.
In some cases, however, the AI Act falls short, and some provisions do not translate well to the U.S. context.
These areas include the creation of an exemption for national security uses, potential accountability gaps related to how systems are classified as “high-risk,” a lack of transparency for law enforcement systems, and shortcomings in the impact assessment process.
On March 13, 2024, the European Parliament adopted the EU AI Act, a comprehensive legislative framework designed to regulate artificial intelligence technologies. The AI Act introduces several significant measures, including provisions that ensure human oversight of AI systems and that prohibit certain practices, such as the untargeted scraping of facial images from the internet.
Undoubtedly, the AI Act could serve as a model for regulation in the United States; yet there is also much to learn from where and how the Act falls short. The purpose of this explainer is to examine critically four key aspects of the AI Act as they relate to law enforcement, with a focus on how these provisions might translate to a U.S. context. This is part one of a two-part series, with the second part focusing on the regulation of biometric technologies.
National Security Exemption
Although the AI Act covers a wide range of AI systems, including those used by law enforcement, it wholly exempts from its coverage systems used “exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.” AI Act, ch. I, art. 2(3).
National security exemptions are not uncommon in regulatory regimes, yet the sweeping nature of the exemption in the EU AI Act would raise concerns were it adopted in U.S. regulation. First and foremost, U.S. persons have constitutional rights that prohibit certain government practices regardless of whether those practices serve a national security purpose. Regulation plays an important role in protecting these rights, including in the national security realm.
Moreover, the fact that an AI tool is used in the national security context does not necessarily make that tool less vulnerable to errors or misuse. As the Brennan Center has documented, national security agencies in the U.S. have in some cases adopted powerful surveillance tools without adequate testing or safeguards. And, importantly, the line between national security and “ordinary” law enforcement can sometimes be unclear — the Department of Homeland Security’s surveillance of protesters during the racial justice protests of 2020 is one example.
Classifying High-Risk Systems
The AI Act takes a risk-based approach, imposing greater requirements on AI systems that are deemed “high-risk.” Various systems used by law enforcement are classified as high-risk under the Act; but the Act then exempts those same systems from the high-risk classification if they are deemed not to “pose a significant risk of harm to the health, safety or fundamental rights of natural persons.” AI Act, ch. III, art. 6. The difficulty stems from the fact that it is up to vendors, not regulators, to determine in the first instance whether a system is high-risk. Whether this determination ought to be left to vendors is a complicated issue. On the one hand, some have expressed concern that vendors will misclassify their systems as low-risk, in effect opting themselves out of regulation. On the other hand, requiring regulators to assess the risk level of each new AI system coming onto the market could ultimately prove onerous or unworkable.
At bottom, the effectiveness of these classification provisions will depend in large part on whether regulators enforce the law rigorously, scrutinizing vendors’ determinations and penalizing those who skirt the rules. In the U.S. context, where policing technology vendors often make unsubstantiated claims about their products with few repercussions, stronger oversight of classification decisions will be required.
Limited Transparency for Law Enforcement
In general, the AI Act requires high-risk systems to be registered in a publicly accessible database maintained by the EU. See AI Act, ch. III, art. 49. This registration includes disclosure of important information such as the purpose of the AI system, its functions, and a description of any data to be used. See AI Act, Ann. VIII. Crucially, however, systems used by law enforcement are to be registered in a non-public section of the database and are subject to fewer reporting obligations. See AI Act, ch. III, art. 49(4).
Although policing agencies at times must keep some details about their operations secret, basic information about which AI systems are in use and how they operate should be a matter of public record. Transparency is fundamental to democratic governance — without it, the public cannot form informed opinions and lawmakers cannot make informed decisions. There may be occasions when the specific details of AI systems must be kept secret so as not to tip off individuals who would use that knowledge to avoid appropriate legal scrutiny. But those occasions are, in general, rare. Unjustified secrecy around the use of policing technology is hardly unheard of in the United States — countless times, agencies have failed to disclose their use of technologies, including those entailing significant risk to civil rights and liberties. What is needed are regulatory mechanisms that increase transparency around AI-powered surveillance, not ones that shield these systems from public scrutiny.
Concerns Around Impact Assessments
Finally, public agencies deploying high-risk AI systems are required by the AI Act to complete a fundamental rights impact assessment (“FRIA”). This assessment includes an evaluation of potential harms caused by the system, the groups of people likely to be affected, and the human oversight and governance measures to be implemented. See AI Act, ch. III, art. 27.
Impact assessments can play a valuable role in ensuring responsible use of AI, but the AI Act’s approach raises some concerns, especially if that approach ultimately serves as a model for U.S. regulation.
First, under the AI Act, the responsibility for conducting a FRIA is borne by policing agencies. Although this may make sense in the E.U. context, in the United States policing agencies are, in general, considerably smaller and more local — indeed, nearly half of U.S. agencies have ten or fewer full-time sworn officers. Many of these agencies may lack the capacity or skills to meaningfully assess the impact of novel AI systems on civil rights and civil liberties.
Second, the ability of agencies under the Act to reuse previously completed impact assessments from other jurisdictions gives us pause. The impact of an AI system depends on a number of fact-specific considerations, including an agency’s internal practices and personnel, its relationship to the community, its enforcement priorities, and other factors. In other words, the fact that a given system may be appropriate to use in one jurisdiction does not guarantee it is appropriate in another. Policymakers should consider whether there are ways to require tailored impact assessments without imposing a significant regulatory burden on agencies — for example, the use of templates or checklists that agencies can complete with support from vendors.
Finally, it remains to be seen whether regulators will take sufficient action with respect to systems that perform poorly on FRIAs, or with respect to agencies that fail to take the assessment process seriously. Again, this concern is more pronounced in the U.S. context — in the past, laws requiring U.S. policing agencies to evaluate the impact of new policing technologies have been skirted with few consequences.
Conclusion
Although there are many aspects of the AI Act that the United States would do well to emulate, some of the Act’s provisions, as detailed above, are worthy of scrutiny. As U.S. lawmakers prepare to enact laws regulating AI, both the strengths of the AI Act and its shortcomings will prove instructive.