Workshop on the Ethics of Criminal Justice AI

About the Workshop

Courts and law enforcement institutions around the world are increasingly implementing AI systems to aid in sentencing decisions, predict locations of criminal activity, and identify possible future offenders. The promise of AI in these domains is that it improves efficiency, prevents crime, and mitigates or eliminates bias. But the use of AI in criminal justice also gives rise to pressing ethical challenges. How does the implementation of AI systems in policing affect officers’ understanding of their social role? Does the inscrutability of the algorithms in use in criminal justice pose a special problem for public trust and institutional legitimacy? Are risk assessment algorithms compatible with retributivist theories of punishment? And what problems arise in using algorithms to determine criminal sentences? This workshop brings together leading scholars of ethics in AI to explore these challenges. Abstracts are available below.

Program

  • March 12, 2021

  • 10:00am – 10:15am
    • Gathering, Opening Remarks
  • 10:15am – 11:15am
    • Luke Hunt (University of Alabama)
    • “Algorithms and Justice in Policing”
  • 11:15am – 11:30am
    • Break
  • 11:30am – 12:30pm
    • Renée Jorgensen Bolinger (Princeton University)
    • “How Predictive Policing Distributes Risks”
  • 12:30pm – 1:30pm
    • Lunch Break
  • 1:30pm – 2:30pm
    • Duncan Purves and Jeremy Davis (University of Florida)
    • “Public Trust, Legitimacy, and the Use of AI in Criminal Justice”
  • 2:30pm – 2:45pm
    • Break
  • 2:45pm – 3:45pm
    • Clinton Castro (Florida International University)
    • “Do Algorithmic Sentences Make Sense?”
  • 3:45pm – 4:00pm
    • Break
  • 4:00pm – 5:00pm
    • Toby Napoletano and Hanna Kiri Gunn (University of California, Merced)
    • “Can Retributivism and Risk Assessment be Reconciled?”
  • 5:00pm – 5:15pm
    • Closing Remarks

Abstracts

How Predictive Policing Distributes Risks
Renée Jorgensen Bolinger (Princeton University)

Authorizing an organization to harmfully interfere with or intimidate people is prima facie wrong. So what could justify it, in the case of (idealized) policing? The justifying goods of law-enforcement seem to be roughly these: preserving the rule of law, securing people from violent intrusions of their rights, and securing them from unjust domination by threats of force. These are risk-reduction benefits: the benefit received by the majority of people is not that a victimization they would otherwise have suffered is directly prevented, but that their risk of being victimized is reduced through the various activities of law-enforcement. I argue that if this is the right way to think about the justifying goods of policing activities, then rather than viewing police as prima facie morally authorized to use the means necessary to most efficiently prevent the greatest number of victimizations, we should instead frame the task as fairly managing ineliminable social risk. This implies that principles of distributive justice strongly constrain which methods may be used. I argue that because of the ways that they pool risk disproportionately on members of disadvantaged groups without providing compensating benefits to members of these groups, predictive tools cannot be permissibly used to control street crime.

Do Algorithmic Sentences Make Sense?
Clinton Castro (Florida International University)

There have been several instances of algorithmic sentencing, the intuitively acceptable practice of imposing more severe sanctions on persons convicted of crimes when they have higher, algorithmically generated forecasts of recidivism. While there has been much critical discussion of the technologies this practice relies on—questions of bias, opacity, and so on—there has been comparatively little discussion of whether the underlying practice makes sense. We argue that it does not by showing that there is no plausible theory of punishment that supports it.

Can Retributivism and Risk Assessment be Reconciled?
Toby Napoletano and Hanna Kiri Gunn (University of California, Merced)

In this paper, we explore whether the use of risk-assessment tools to determine the extent of one’s punishment can be made compatible with a retributivist justification of punishment. We argue that a retributivist approach on which the severity of punishment depends partly on one’s character in addition to one’s acts, such that character would be relevant to granting or denying a sentence reduction, offers some hope of reconciling retributivism with the use of risk-assessment tools in decisions about sentence reduction. Ultimately, however, we argue that the attempted reconciliation fails so long as risk-assessment tools fail to distinguish between risk that one is responsible for and risk that one is not.

Algorithms and Justice in Policing
Luke Hunt (University of Alabama)

This paper examines three models of policing and the extent to which each model is justified: First, the archetypal model: Police training promoting the tenets of a police ethos based upon individuated archetypes, such as the (just) police “warrior” or “guardian.” Second, the reallocation model: Reallocating societal resources such that the police are no longer needed in society (defunding and abolishing) because reform strategies (such as “community policing”) cannot fix the way societal problems become manifest in (archetypal) policing. Third, the algorithmic model: Subsuming policing into technocratic judgements encoded in algorithms through strategies such as predictive policing (mitigating human, archetypal bias). The paper concludes that these and other policing models should be augmented with a nonideal theory priority rule that provides a moral foundation—distinct from the values of law enforcement and crime reduction—for strategies such as procedurally just community policing.

Public Trust, Legitimacy, and the Use of AI in Criminal Justice
Duncan Purves and Jeremy Davis (University of Florida)

Moral concern about the use of predictive algorithms in criminal justice contexts has emerged in part from the fact that algorithmic methods of classification cannot be comprehended by the individuals subjected to classification. This opacity, as it is sometimes called, can be a result of at least three interacting factors: the need for specialist knowledge to understand their operation, intellectual property protections, and the innate complexity of machine learning methods. This paper attempts to identify a novel foundation of the moral concern about opaque algorithms. The central claim of the paper is that opaque algorithms are morally problematic when used in decision-making by state institutions such as law enforcement, because their opacity can threaten the trustworthiness of those institutions. To defend this claim, we first clarify what public trust in state institutions amounts to. We then identify several mechanisms through which opaque algorithms can undermine trustworthiness. Next, we argue that a failure of trustworthiness on the part of public institutions is morally problematic because (a) it threatens the institution’s ability to achieve its valuable social aims, and (b) it threatens the institution’s very legitimacy. We close by suggesting some remedies for the loss of institutional trustworthiness given the fact of algorithmic opacity.