- January 6, 2017
- Clayton Rice, K.C.
The developments in artificial intelligence technology during 2016 continued to raise profound ethical questions and dramatic implications for the legal system and the paradigm for the delivery of legal services. In an article titled 5 Ways Artificial Intelligence Freaked Us Out In 2016: WaveNet, AlphaGo, Interceptor And More, published by Tech Times on December 31, 2016, Athena Chan discussed Google’s launch of its Neural Machine Translation system, which can translate to or from English by looking at full sentences instead of individual words. What is striking is that the system created a form of interlingua, an internal artificial language, which it then used to bridge pairs of languages it had not previously been trained on.
Rapid innovations in technology far outstrip the ability of the world’s domestic and international legal systems to keep pace. Two years ago Professor Stephen Hawking, the pre-eminent theoretical physicist at the University of Cambridge, was quoted by the BBC as saying: “The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever increasing rate.” But during his more positive remarks at the opening of the Leverhulme Centre for the Future of Intelligence, Professor Hawking mused that the amplification of the human mind by artificial intelligence “could be the biggest event in the history of our civilization.” (See: Rory Cellan-Jones. Stephen Hawking warns artificial intelligence could end mankind. BBC. December 2, 2014; and, Alex Hern. Stephen Hawking: AI will be ‘either best or worst thing’ for humanity. The Guardian. October 19, 2016)
1. What is artificial intelligence?
The term artificial intelligence is generally applied to technology that imitates cognitive functions such as learning and problem solving. The capabilities of artificial intelligence include the generation of natural-sounding human speech, exemplified by Google’s WaveNet. Other capabilities are strategic game systems such as chess and Go, self-driving cars like the Tesla, and the interpretation of complex data. The subcategories of research include reasoning, knowledge and perception. Max Tegmark, a cosmologist at the Massachusetts Institute of Technology and co-founder of the Future of Life Institute, has described the domains of artificial intelligence this way: “Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. only facial recognition or only internet searches or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI or strong AI). While narrow AI may outperform humans at whatever its specific task is, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.” (See: Max Tegmark. Benefits & Risks of Artificial Intelligence. Future of Life Institute.org)
2. Ethics of Artificial Intelligence
The ethics of technology is divided into robot ethics and machine ethics. Robot ethics is generally concerned with the moral behaviour of the designers who construct artificially intelligent technology. Machine ethics is concerned with giving machines ethical principles that enable them to function responsibly through their own ethical decision making. Human dignity is at the core of the ethical question: Should technology be used to replace humans in fields that require empathy? Judges, therapists and soldiers are three examples of positions that require recognition and consideration of compassion and mercy. On this view, artificial intelligence threatens human dignity because machines replace empathy with alienation. [See: Anderson and Anderson (ed.). Machine Ethics (2011)]
The distinction between the ethics of the designer and the ethics of the machine brings me to the point here – the application of artificial intelligence to legal systems and the practice of law. Nick Bostrom and Eliezer Yudkowsky, in their article titled The Ethics of Artificial Intelligence, argue that the ethical issues relate both to ensuring that the machines do not harm humans and to the moral status of the machines themselves: “It is…important that AI algorithms taking over social functions be predictable to those they govern. To understand the importance of such predictability, consider an analogy. The legal principle of stare decisis binds judges to follow past precedent whenever possible. To an engineer, this preference for precedent may seem incomprehensible – why bind the future to the past, when technology is always improving? But one of the most important functions of the legal system is to be predictable, so that, e.g., contracts can be written knowing how they will be executed. The job of the legal system is not necessarily to optimize society, but to provide a predictable environment within which citizens can optimize their own lives.” [See: Frankish and Ramsey (ed.). Cambridge Handbook of Artificial Intelligence (2014)]
3. Artificial Intelligence Judge
Software that is able to weigh evidence, and questions of right and wrong, has been developed by computer scientists at University College London. The AI Judge reached the same verdicts as the judges of the European Court of Human Rights in a sample of cases involving torture, degrading treatment, fair trials and privacy. The AI Judge analyzed the data sets for 584 cases and reached the same verdict as the court in 79% of them. In an article titled Artificial intelligence ‘judge’ developed by UCL computer scientists published in The Guardian edition of October 24, 2016, Chris Johnston quoted Dr. Nikolaos Aletras, the lead researcher: “We don’t see AI replacing judges or lawyers, but we think they’d find it useful for rapidly identifying patterns in cases that lead to certain outcomes. It could also be a valuable tool for highlighting which cases are most likely to be violations of the European Convention on Human Rights.” (See also: Sarah Knapton. Artificially intelligent ‘judge’ developed which can predict verdicts with 79 per cent accuracy. The Telegraph. October 24, 2016)
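At bottom, the UCL system is an exercise in text classification: language features are extracted from the published judgments and a statistical model learns to predict whether the court found a violation. The sketch below illustrates the general technique with a simple bag-of-words perceptron. The case summaries, labels and wording are invented for illustration only; the UCL team’s actual model and training data were different and far richer.

```python
# Toy text-classification sketch in the spirit of the UCL study: represent each
# case summary as a bag of words and learn per-word weights with a perceptron.
# All case texts and labels below are invented for illustration.

from collections import defaultdict

TRAIN = [
    ("applicant detained without judicial review for months", 1),  # violation
    ("applicant subjected to degrading treatment in custody", 1),  # violation
    ("hearing held promptly before an independent tribunal", 0),   # no violation
    ("complaint examined fairly with full access to counsel", 0),  # no violation
]

def features(text):
    """Bag-of-words feature counts for a case summary."""
    bag = defaultdict(int)
    for word in text.split():
        bag[word] += 1
    return bag

def train(samples, epochs=10):
    """Perceptron training: adjust word weights only on misclassified cases."""
    weights, bias = defaultdict(float), 0.0
    for _ in range(epochs):
        for text, label in samples:
            bag = features(text)
            score = bias + sum(weights[t] * c for t, c in bag.items())
            pred = 1 if score > 0 else 0
            if pred != label:
                step = label - pred  # +1 or -1
                for t, c in bag.items():
                    weights[t] += step * c
                bias += step
    return weights, bias

def predict(weights, bias, text):
    """1 = predicted violation, 0 = predicted no violation."""
    score = bias + sum(weights[t] * c for t, c in features(text).items())
    return 1 if score > 0 else 0

weights, bias = train(TRAIN)
print(predict(weights, bias, "applicant held in degrading conditions without review"))  # 1
print(predict(weights, bias, "fair hearing before an independent tribunal"))            # 0
```

Even this crude model shows why the approach finds “patterns in cases that lead to certain outcomes”: the learned weights are simply word associations with past verdicts, which is also why such systems inherit any bias in the judgments they are trained on.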
4. What is ROSS?
The world’s first artificially intelligent lawyer was hired in 2016 by the law firm BakerHostetler in the United States. ROSS is software that uses the supercomputing power of IBM Watson to research court rulings and give opinions about their relevance and application. Andrew Arruda, the CEO and co-founder of ROSS Intelligence, said that the challenge in building ROSS was to find a way to make it as intuitive as an actual colleague. That meant programming it to respond to normal patterns of speech and not just keyword fragments. The company sees ROSS as levelling the playing field by reducing legal fees since lawyers will not have to pay humans to do research. Others see it as unfairly tilting the balance depending on who has the deepest pockets. (See: Chris Weller. The world’s first artificially intelligent lawyer was just hired at a law firm. Business Insider. May 16, 2016)
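The difference between matching keyword fragments and answering a plain-English question can be made concrete with a minimal retrieval sketch: score each ruling against the whole question rather than requiring exact search terms. The case names, holdings and the cosine-similarity scoring below are invented for illustration; ROSS’s actual Watson-based pipeline is proprietary and far more sophisticated.

```python
# Hypothetical sketch of natural-language retrieval over court rulings:
# rank each ruling by cosine similarity between its word counts and the
# question's word counts. All rulings and case names are invented.

import math
from collections import Counter

STOPWORDS = {"a", "an", "the", "is", "of", "to", "in", "for", "can", "be", "when"}

RULINGS = {
    "Smith v Jones": "bankruptcy filing does not discharge debts obtained by fraud",
    "Re Acme Corp": "a company may assign its lease with the landlord's consent",
    "R v Doe": "evidence obtained without a warrant was excluded at trial",
}

def terms(text):
    """Lowercase, strip punctuation, and drop common stopwords."""
    words = (w.strip(".,?!\"'") for w in text.lower().split())
    return [w for w in words if w and w not in STOPWORDS]

def cosine(a, b):
    """Cosine similarity between two term lists."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else 0.0

def search(question):
    """Return the name of the ruling most similar to the question."""
    q = terms(question)
    ranked = sorted(RULINGS.items(),
                    key=lambda kv: cosine(q, terms(kv[1])),
                    reverse=True)
    return ranked[0][0]

print(search("Can a debt obtained by fraud be discharged in bankruptcy?"))  # Smith v Jones
```

The question shares no exact phrase with the winning ruling, yet overlapping vocabulary still ranks it first; systems like ROSS pursue the same goal with semantic models rather than raw word counts.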
5. Delivery of Legal Services
ROSS is one example of the application of artificial intelligence to the practice of law. Lawgeex and Beagle are two others. Artificial intelligence is now performing tasks that were once assigned to junior lawyers. But robots are not expected to replace lawyers; they will work alongside them. In an article titled How Artificial Intelligence Will Transform The Delivery Of Legal Services published in the Forbes edition of September 6, 2016, Mark Cohen identified technology as the engine for a better, faster and cheaper delivery of services. The use of artificial intelligence for the review and standardization of documents, and its impact on efficiency, risk management and cost, is significant. “AI’s broader potential to streamline legal services,” he said, “is also evident in the retail market segment. The [in]ability of the vast majority of individuals and small businesses to secure legal representation due to lack of access and high cost is an acute problem often referred to as ‘the access to justice crisis’. It has profound implications for our society and its rule of law. AI is a game changer.” (See also: Zach Abramowitz. Do Robots Make Better Lawyers? A Conversation About Law And Artificial Intelligence. Above The Law. June 13, 2016)
We live in a world where artificial intelligence applications do many things better than we do. The social, economic and legal impact could be catastrophic. What will be the consequences to millions in the trucking industry caused by self-driving vehicles? How will humans of limited composure cope with machines of limitless patience? How is control maintained over a complex intelligent system and how will it be protected from adversaries? How are mistakes and unintended consequences to be avoided when, say, an artificial intelligence system designed to develop a cure for cancer kills everyone on the planet? The ethical questions persist. How will prejudice be eliminated? Artificial intelligence systems are designed by humans who can be purposefully or inadvertently biased. And, artificial intelligence is one thing – but artificial stupidity is quite another. As Professor Luciano Floridi of the University of Oxford recently quipped – “artificial intelligence is almost an oxymoron”. (See: Luciano Floridi. Charting Our AI Future. Project Syndicate. January 2, 2017; and, Julia Bossmann. Top 9 ethical issues in artificial intelligence. World Economic Forum. October 21, 2016)