Proctor : February 2018
Criminal justice algorithms: AI in the courtroom

When one thinks of frontier areas of law, criminal law is hardly the first that springs to mind. However, criminal justice has recently seen the emergence of artificial intelligence (AI) in the courtroom, causing great controversy. Particularly in the United States, where at least 10 states are using this software,1 algorithms that assist in evidence gathering and risk assessment are being integrated in order to determine the likelihood of a defendant skipping bail or reoffending.2 Advocates argue in favour of such technology on the basis that “as these tools become more sophisticated, they have the potential to alleviate the massive congestion facing our state and federal justice systems, while improving fairness and safety”.3 On the other hand, opponents have significant concerns about transparency, oversight and agency.

COMPAS and Wisconsin v Loomis

The case of Wisconsin v Loomis4 (Loomis) placed the role of AI in criminal justice at the forefront of legislators’ and legal advocates’ minds. Loomis involved a defendant found guilty for his role in a drive-by shooting. As he was being processed, he responded to a series of questions, and his responses were then fed into an AI algorithm called ‘correctional offender management profiling for alternative sanctions’ (COMPAS). The software gave him a ‘high risk’ score, meaning that he was deemed highly likely to reoffend. The judge took the finding into account during sentencing, though the court noted that it would have imposed the same sentence even without the assessment. The creator of the COMPAS software, private enterprise Northpointe Inc., retains the right to protect its intellectual property and has not disclosed how the software makes its assessments.
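The opacity problem can be illustrated with a deliberately simplified and entirely hypothetical risk scorer. Nothing below reflects COMPAS itself, whose features, weights and thresholds remain undisclosed; the point is only that a defendant who sees the label cannot see, or contest, what produced it.

```python
# Hypothetical illustration only: a toy risk scorer with undisclosed weights.
# None of these features, weights or thresholds reflect COMPAS, whose
# internal workings Northpointe has not released.

# Proprietary parameters the defendant cannot inspect or challenge.
_SECRET_WEIGHTS = {"prior_offences": 0.6, "age_under_25": 0.3, "employment_gap": 0.1}
_HIGH_RISK_THRESHOLD = 0.5

def risk_score(answers: dict) -> str:
    """Return 'high risk' or 'low risk' from questionnaire answers (0 or 1)."""
    score = sum(_SECRET_WEIGHTS[k] * answers.get(k, 0) for k in _SECRET_WEIGHTS)
    return "high risk" if score >= _HIGH_RISK_THRESHOLD else "low risk"

# The defendant receives only the label, never the reasoning behind it.
print(risk_score({"prior_offences": 1}))  # -> high risk (0.6 >= 0.5)
```

Even in this toy form, the difficulty Loomis’ counsel faced is visible: without access to the weights and threshold, there is no way to test whether the ‘high risk’ label was soundly derived.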
It is this lack of disclosure that has caused the greatest concern within the legal fraternity, and it led Loomis’ counsel to launch an appeal on the basis that their client should have been allowed to assess the algorithm.5 The appeal was ultimately dismissed by the Wisconsin Supreme Court in a decision that has had far-reaching consequences for AI in the courtroom. The decision has compelled many to question the opaque nature of the software and its place in the criminal justice system, especially given two fundamental rights of an accused: the right to appeal and the right to due process.6

Allegations of racial bias

Opponents of criminal sentencing algorithms have been outspoken in the legal community over the past few years. The criticisms range from bias against ethnic groups to the simple fact that the algorithms do not do what they claim to do. A report by non-profit news website ProPublica found that the risk assessment “was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants”, and that “white defendants were mislabeled as low risk more often than black defendants”.7 The same investigation found that the algorithms used were not actually able to predict the likelihood of future offending with any accuracy.8 The results of the investigation are certainly concerning, particularly given that then US Attorney-General Eric Holder had cautioned against the rollout of such technologies before thorough testing was undertaken, and yet they were rolled out anyway.

The ethical arguments

Interestingly, those tasked with designing criminal sentencing algorithms are motivated by the thought that they can remedy the perceived faults of the current system. They are critical of the fallibility of human decision-making, believing that AI could be the saviour of justice by removing human error and delivering fairer and more reliable outcomes.
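ProPublica’s claim is, at bottom, a statement about error rates conditioned on group membership: among people who did not go on to reoffend, how often was each group wrongly flagged as high risk? A minimal sketch of that calculation, using invented records rather than ProPublica’s data, might look like this:

```python
# Sketch of a group-wise false-positive-rate check, using invented records.
# Each record: (group, flagged_high_risk, actually_reoffended).
records = [
    ("A", True, False), ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", False, False), ("B", False, False), ("B", True, True),
]

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` wrongly flagged as high risk."""
    non_reoffenders = [flagged for g, flagged, reoffended in records
                       if g == group and not reoffended]
    return sum(non_reoffenders) / len(non_reoffenders)

print(false_positive_rate(records, "A"))  # 2 of 3 non-reoffenders flagged
print(false_positive_rate(records, "B"))  # 1 of 3 non-reoffenders flagged
```

With this toy data, group A’s false-positive rate is double group B’s, which is the shape of the disparity ProPublica reported, though the figures here are illustrative only.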
The argument for change is somewhat vindicated by the evidence out of the US, which has for some time demonstrated that the criminal justice system is hardwired against black people.9 A 2011 paper on extraneous factors in judicial decisions reported fascinating and concerning findings: a person’s chances of being granted parole could depend on whether the judicial officer had eaten lunch, or even on how well their local college football team was doing.10 This software has therefore been promoted as capable of removing such elements of human error, and has been touted as able to cut crime by up to 24.8% with no change in jailing rates, or to reduce jail populations by 42% with no increase in crime rates.11

The case for Australia

While these secretive algorithms are yet to make their way into courtrooms in Australia, they are already being used in police targeting operations in New South Wales for both adult and juvenile offenders. The controversial technology has been criticised for the same reasons as COMPAS, in that it can learn gender and racial biases and target accordingly.12 It seems, therefore, that it is only a matter of time before this becomes widespread in the Australian criminal justice system. So what should be done to ensure that fairness and transparency are not compromised by the pursuit of easing the strain on the courts?