Proctor | May 2018
Artificial intelligence and legal liability

What happens when AI goes wrong?

Benjamin Teng is a Queensland executive member of The Legal Forecast (TLF). Special thanks to Michael Bidwell of TLF for technical advice and editing. TLF (thelegalforecast.com) aims to advance legal practice through technology and innovation. TLF is a not-for-profit run by early career professionals passionate about disruptive thinking and access to justice.

Artificial intelligence (AI) is the ability of machines to imitate human decision-making and behaviour.1 AI can thus take many forms, with varying levels of sophistication. AI might imitate intelligent human behaviour by simply executing a series of preordained, prioritised protocols. More advanced AI might be self-learning, possessing the ability to rewrite its own code in response to its own experiences (self-learning AI). While we are yet to develop sentient AI (strong AI) that can pass the famous Turing test (AI that can exhibit intelligent behaviour indistinguishable from that of a human),2 self-learning AI still possesses fantastic and terrifying potential. The capacity to self-learn enables such AI to evolve beyond what it was initially programmed to be. In one documented case, the programmers of a self-learning AI admitted to not understanding the ‘mysterious mind’ of the machine that they themselves had created.3

These self-learning characteristics pose interesting and difficult questions for the law. In particular, who is legally liable when self-learning AI goes wrong? The programmers? The users? Or even the AI itself? This article introduces this AI liability conundrum, and then offers some solutions to it.

The AI liability conundrum

In one sense, the law is about attributing fault. The law attributes fault through legal mechanisms such as causation and foreseeability. In tort, for example, an individual is only liable in negligence if they caused foreseeable harm.4 With this in mind, can it really be said that a programmer has been negligent, or has caused foreseeable harm, when self-learning AI learns to act in a way that it was not intended to? When it becomes effectively autonomous? Problematically, what AI learns and becomes is unpredictable, because it is a function of what the AI experiences, which is not necessarily controlled by its programmers.5

Consider two identically programmed driverless cars, C1 and C2, that are released onto the road at T0. One week later, at T1, C1 has been involved in a wet weather accident and so now drives five kilometres per hour under the speed limit in wet weather, but C2, having experienced no such accident, does not.
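To make this divergence concrete, here is a minimal Python sketch of the scenario above. The DriverlessCar class, its single-rule update step and the five-kilometre margin are hypothetical illustrations of a self-learning policy, not a depiction of any real autonomous-vehicle software; the point is only that two copies of identical code can end up behaving differently through experience alone.

```python
# A minimal sketch of how two identically programmed self-learning agents
# can diverge purely through experience. The class and update rule are
# hypothetical illustrations, not any real autonomous-vehicle system.

class DriverlessCar:
    """Toy self-learning driver: adjusts its wet-weather speed policy
    in response to accidents it experiences."""

    def __init__(self, speed_limit_kmh: float):
        self.speed_limit_kmh = speed_limit_kmh
        self.wet_weather_margin_kmh = 0.0  # learned caution; starts at zero

    def experience(self, event: str) -> None:
        # The "learning" step: an accident in wet weather teaches the car
        # to leave a safety margin. Nothing in the original program fixes
        # this value; it emerges from what the car happens to encounter.
        if event == "wet_weather_accident":
            self.wet_weather_margin_kmh = 5.0

    def target_speed(self, raining: bool) -> float:
        margin = self.wet_weather_margin_kmh if raining else 0.0
        return self.speed_limit_kmh - margin


# T0: two cars leave the factory with identical code and identical state.
c1 = DriverlessCar(speed_limit_kmh=60)
c2 = DriverlessCar(speed_limit_kmh=60)

# T1: only C1 experiences a wet-weather accident in its first week.
c1.experience("wet_weather_accident")

print(c1.target_speed(raining=True))  # 55.0 -- C1 now drives 5 km/h slower
print(c2.target_speed(raining=True))  # 60.0 -- C2, identically coded, does not
```

Under this toy model, any difference in C1’s behaviour at T1 traces to its experience on the road, not to a defect the programmers could have identified at T0.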
What if, for example, AI-controlled traffic lights, programmed to ensure efficient traffic flow, learned that they could manage traffic more efficiently by changing to a green light one second, instead of three seconds, after the pedestrian crossing lights turned red, but this caused more accidents? The AI-controlled lights were not programmed to cause more accidents. It is therefore difficult to see how those accidents were foreseeable or caused by the programmers and, if there were no relevant identifiable faults in the programming of the AI, how those programmers could be said to have been negligent.6

So, what happens when AI goes wrong?

Some solutions

There are a number of possible solutions to the AI liability conundrum.

First, it has been suggested that AI programmers could be held liable on a novel agency basis.7 Current agency law would not apply where agent AI begins to make its own decisions and goes rogue; in doing so, it exceeds its authority or severs the agency relationship with its programmer principal.8 But perhaps the authority given by a programmer to AI could be construed as an authority to fully explore and utilise its deep learning algorithms, irrespective of the consequences?

Second, AI programmers could be held strictly liable. This is a simple solution, but it risks suppressing our exploration of the awesome potential of AI by deterring would-be programmers for fear of being held strictly liable.9 The law should be mindful of this. Strict liability makes sense in a paradigm manufacturer-and-consumer compensation case, but self-learning AI is unique. What is more, a strict liability model does not assist where self-learning AI commits a criminal offence, because a well-intentioned programmer will always lack mens rea.

Third, no one could be held liable. Instead, parties who suffer civil damage caused by AI could be compensated from a ‘claims pool’ maintained by the AI industry.10 AI programmers and manufacturers could be required to pay a levy to obtain a certificate from a ‘Turing Registry’,11 which allows them to sell their product on the market.12 This levy would essentially be anticipatory consideration for, and proportional to, the risk that the AI will cause harm. A similar no-fault system, albeit not in an AI context, already exists in New Zealand under the Accident Compensation Act 2001, which establishes a claims pool to settle all forms of personal injury accidents.13

Finally, could it be possible to hold the AI itself liable? In United States v Athlone Industries, Inc., the Court of Appeals for the Third Circuit stated that “robots cannot be sued”,14 but did the court countenance the self-learning AI that we have today? Practically, AI will (probably) never have currency of its own, so holding AI liable is unlikely to assist in the resolution of civil cases requiring the payment of compensation. In the context of AI crime, our criminal justice system currently seems irreconcilable with prosecuting AI, from incompatibilities associated with courtroom procedure15 and punishment to deterrence and mens rea. More generally, there are complicated philosophical issues associated with imbuing ‘machina sapiens’ with legal personhood and holding ‘them’ liable under the law.16

Conclusion

The exponential rate at which AI technologies are developing can be contrasted with the careful and gradual march of the law.17 When put in perspective, the liability conundrum, while significant, is just one of the legal issues engendered by AI.