What makes us really uncomfortable: the use of AI to assess likelihood of recidivism, or the policy of sentencing based on likelihood of recidivism? – Criminal law
In his recent New York Review article, "Sentenced by Algorithm," a review of former SDNY judge Katherine Forrest's book, "When Machines Can Be Judge, Jury and Executioner: Justice in the Age of Artificial Intelligence," current SDNY judge Jed Rakoff assesses the many shortcomings of existing AI products intended to predict recidivism rates of ex-offenders. These products are designed to help judges determine whether a defendant's sentence should be lengthened on a theory of "incapacitation" – essentially, to protect the general public from the possibility that the defendant will continue his pattern of crime in the future. As Rakoff succinctly explains, the products currently available have unacceptably high error rates, and their errors tend mainly to overpredict future crime. Additionally, their "black box" design raises concerns about the assumptions underlying the algorithm and about the defendant's ability to effectively challenge the algorithm's output.
The book review is informative and certainly an interesting introduction to the use of AI products in criminal sentencing. But perhaps Judge Rakoff's most salient point is raised at the very end of his article: "More broadly, the fundamental question remains: even if these algorithms could be made much more precise and less biased than they currently are, should they be used in the criminal justice system to determine who to lock up and for how long? My personal view is that increasing a defendant's jail term on the basis of predictions of future crimes is fundamentally unfair."
The idea of incarcerating someone for a crime they have not committed, and may never commit, is inherently troubling. And when the decision is divorced from human judgment and empathy, it feels even less fair, perhaps because of our inherent mistrust of what we cannot understand.
AI products designed to predict recidivism may not yet be well developed, but if the general trajectory of AI is any indicator, these products could very soon become more sophisticated and more accurate – and almost certainly more accurate, on the whole, than any individual human judgment. It is then that we will have to confront the real question, the difficult question, raised by Judge Rakoff.
Disclaimer: This alert has been prepared and published for informational purposes only and is not offered, nor should it be construed, as legal advice.