Artificial Intelligence and Criminal Justice: How to Use Algorithmic Sentencing Support in Real-Life (and Ethically Non-Ideal) Penal Systems?

Research output: Contribution to journal › Journal article › Research › peer-review

Abstract

The use of artificial intelligence as an instrument to assist judges in determining sentences in criminal cases is attracting increasing theoretical attention. While many theorists have argued that there may be important advantages to introducing algorithmic sentencing support in criminal cases, almost no one has considered how such systems should actually be implemented. The purpose of this article is to fill this void. First, it is argued that current penal practice is non-ideal in the sense that it is dominated by overpunishment of offenders (the overpunishment assumption), and that algorithmic sentencing support systems are unlikely to be introduced in a way that appears to disturb the existing penal order (the preservation assumption). Second, a model called the “Restricted Application Model” is presented for how such algorithms might be used by judges within a framework characterized by the two outlined assumptions. Third, three objections to the model are considered and ultimately rejected. The model thus serves as a first attempt at outlining a procedure for the use of sentencing advisory systems by judges within real-life, and ethically non-ideal, penal systems.
Original language: English
Journal: AI and Ethics
ISSN: 2730-5961
DOIs
Publication status: Published - 2025

Keywords

  • Artificial intelligence
  • Judges
  • Overpunishment
  • Penal practice
  • Punishment
  • Sentencing algorithms
