Beyond Algorithmic Reform: Re-imagining the Role of Artificial Intelligence and Statistical Discourse in Criminal Law
Data-driven decision-making regimes, in the form of predictive tools like crime hotspotting maps and risk assessment instruments, are rapidly proliferating across the criminal justice system as a means of addressing accusations of discriminatory and harmful practices by police and court officials. These data regimes have recently come under increased scrutiny, as critics point out the myriad ways they can reproduce or even amplify pre-existing biases in the criminal justice system. These concerns have given rise to an influential community of researchers from both academia and industry who have formed a new regulatory science under the rubric of “fair, accountable, and transparent” algorithms, which seeks to optimize accuracy and minimize bias in algorithmic decision-making systems.
In this talk, I argue that the ethical, political, and epistemological stakes of criminal justice applications of AI cannot be understood simply as a question of bias and accuracy. Nor can we measure the impact of these tools if key outcome measures are left unexamined. I outline a more fundamental, abolitionist approach for excavating the ways that predictive tools reflect and reinforce the punitive practices that drive disparate outcomes in the criminal justice system. Finally, I illustrate a more transformational approach to re-imagining how data might be used to challenge penal ideology and de-naturalize carceral state practices.
04:00 PM - 06:00 PM
Centre for Ethics, University of Toronto