Proctor : September 2019
Computer says no ...but then what?

BY ANGUS MURRAY, THE LEGAL FORECAST

“What was once inconceivable, that a complex decision might be made without any requirement of human mental processes is, for better or worse, rapidly becoming unexceptional...

“The legal conception of what constitutes a decision cannot be static; it must comprehend that technology has altered how decisions are in fact made and that aspects of, or the entirety of, decision making, can occur independently of human mental input.”1

These are the insightful words of Justice Kerr in the recent decision of Pintarich v Deputy Commissioner of Taxation [2018] FCAFC 79. For some, the exercise of the discretion to remit general interest charges may not be an invigorating read; however, this recent decision raises important and interesting questions about the consequences of automated decision-making.

It is unquestionable that technology has become prevalent in most of our day-to-day lives. Indeed, you may have a smartphone screen staring at you as you read this article, or an Apple Watch that is about to remind you that you have a meeting in 15 minutes. The increased use of technology raises significant questions about the role of technology, as well as the ethical and legal parameters that could be applied to computerised or computer-assisted decision-making. This article does not intend to answer these questions; however, it briefly outlines the current administrative use of computer programs and provides a potential basis for the legal profession’s response to emerging technology.

Computerised decision-making

Computerised decision-making has made its way into many government agencies, in areas such as intellectual property and migration law. Broadly, there are two common themes in computerised decision-making. Firstly, the responsible human decision-maker may, under their control, arrange for the use of a computer program for any purpose that exists within their (delegable) mandate.2 Secondly, the human decision-maker may substitute a different decision.

In regard to the second point, it is interesting that a consistent approach has not been taken between citizenship decisions (which require specific notice that the computer program was not functioning correctly)3 and intellectual property decisions (which require the registrar to be satisfied that the decision by the computer program is incorrect).4 In either approach, there is an obligation that a human decision-maker retains control of the computer program and is sufficiently tech-literate to identify and ensure that the program is operating correctly.

The recent issues with Centrelink’s automated debt collection process highlight that these systems are not perfect5 and that errors do occur in computerised decision-making. The consequences that flow from automated decision-making have not been comprehensively, or even clearly, tested before the courts, and certainly the legislature and the courts have a difficult task ahead to properly address automation within decision-making. This task becomes even more complex as the decision-making process is enhanced by artificial intelligence.