Many problems in the criminal justice system would be solved if we could accurately determine which offenders will commit offenses in the future. The likelihood that a person will reoffend is the single most important consideration influencing sentencing outcomes. It is relevant to the objectives of community protection, specific deterrence, and rehabilitation. The risk of future offending is also a cardinal consideration in bail and probation decisions. Empirical evidence establishes that judges are poor predictors of future offending—their decisions are barely more accurate than the toss of a coin. This undermines the efficacy and integrity of the criminal justice system.
Modern artificial intelligence systems are far more accurate in predicting whether a defendant will commit future crimes. Yet the move toward using artificial intelligence in the criminal justice system is slowing because of increasing concerns regarding the lack of transparency of algorithms and claims that the algorithms are embedded with biased and racist sentiments. Criticisms have also been leveled at the reliability of algorithmic determinations. In this Article, we examine the desirability of using algorithms to predict future offending and, in the process, analyze the innate resistance that humans have toward deferring decisions of this nature to computers. It emerges that most people have an irrational distrust of computer decision-making, a phenomenon termed "algorithmic aversion." We provide a number of recommendations regarding the steps necessary to surmount algorithmic aversion and lay the groundwork for the development of fairer and more efficient sentencing, bail, and probation systems.
Mirko Bagaric, Dan Hunter & Nigel Stobbs, Erasing the Bias Against Using Artificial Intelligence to Predict Future Criminality: Algorithms are Color Blind and Never Tire, 88 U. Cin. L. Rev., available at https://scholarship.law.uc.edu/uclr/vol88/iss4/3