University of Cincinnati Law Review

Abstract

Federal agencies have caught the artificial intelligence (AI) bug. A December 2023 report by the Government Accountability Office found that twenty of twenty-three federal agencies surveyed reported using some form of AI, with about two hundred current use cases for algorithms and about one thousand more in the planning phase. These agencies are using algorithms in all aspects of administration, including rulemaking, adjudication, and enforcement. The risks of AI are well-documented. Previous work has shown that algorithms can be, among other things, biased and prone to error. However, perhaps no problem poses a more serious threat to the use of algorithms by agencies than the fact that algorithms can be opaque, meaning it can be difficult to understand how an algorithm works and why it reaches certain results. Opacity compromises reason-giving, a basic pillar of administrative governance. Inadequate reason-giving poses legal problems for agencies because the reasons agencies provide for their decisions form the basis of judicial review. Without adequate reason-giving, agency action will fail arbitrary and capricious review under the Administrative Procedure Act. Inadequate reason-giving poses normative problems, too, since reason-giving promotes quality decision making, fosters accountability, and helps agencies respect parties’ dignitary interests.

This Article considers whether agencies can use algorithms without running afoul of standards, both legal and normative, for reason-giving. It begins by disaggregating algorithmic reason-giving, explaining that algorithmic reason-giving includes both the reasons an agency gives for an algorithm’s design (systemic reason-giving) and the reasons an agency gives for an individual decision when the decision-making process involves an algorithm (case-specific reason-giving). This Article then evaluates systemic reason-giving and case-specific reason-giving in turn. Once the normative assessment is complete, this Article considers its implications for arbitrary and capricious review, concluding that at least some algorithms should pass judicial muster. The Article closes by offering a framework that courts can use when evaluating whether the use of an algorithm is arbitrary and capricious, and that agencies can use to decide whether to create an algorithm in the first place.

Although understanding the relationship between algorithms and reason-giving is important, this Article’s true aim is broader. It seeks to reframe debates over agencies’ use of AI by emphasizing that the baseline against which these algorithms should be compared is not some idealized human decision maker, but rather the various kinds of policies—rules, internal procedures, and guidance—that agencies have used since their inception to promote core administrative values like consistency, accuracy, and efficiency. The comparison between algorithms and policies better captures the role algorithms currently play in administrative governance, gives proper weight to the reasons agencies have for turning to algorithms in the first place, and helps us see how algorithms do and do not fit within the existing structures of administrative law. At bottom, comparing algorithms to policies reminds us that the tension between individualized consideration and centralized bureaucratic management is endemic to agency administration. At most, algorithms have given this tension a new flavor. Make no mistake: this tension cannot be eliminated, only managed. Algorithmic reason-giving is a case in point.
