Red de Servidores Públicos
Justice 4.0 – The Impact of Artificial Intelligence on the Law
Nicolás H. Varela
Introduction
The world is not facing a time of changes; it is going through a change of times. The development of new technologies is changing the way in which human beings interact, work, and live. Among these new technologies, one stands out for its unmatched transformative potential: artificial intelligence (hereinafter: A.I).
Among the innumerable applications of A.I today, the law is one of the areas being transformed by this technology. This essay intends to explore the history of A.I, compare artificial with human intelligence, and survey current applications of A.I systems in the field of law, focusing especially on the desirability of these developments in light of the benefits and risks this technology entails.
The ambiguity of intelligence
There is no agreed definition of A.I, but it could be defined as a set of algorithms, methods, theories, or techniques whose objective is to reproduce, by means of machines, the cognitive abilities of human beings, that is, to artificially imitate human intelligence.
The difficulty in finding a single definition of A.I stems, in part, from the difficulty of defining what human intelligence itself consists of. We have tried to measure human intelligence by the ability to retain information for an exam or by standardized IQ tests. Authors such as Howard Gardner, whose theory of multiple intelligences captured the diversity of abilities and domains in which human knowledge can be deployed (Gardner, 1983), and Daniel Goleman, who developed the concept of Emotional Intelligence, moved us away from treating intelligence as a merely cognitive function and toward understanding it as a complex phenomenon involving aspects such as the recognition and regulation of our own emotions (Goleman, 1995).
Although it is difficult to agree on a single concept, there is a common element running through the various definitions of human intelligence: the ability to process information to solve problems in order to achieve objectives (Kurzweil, 1990).
History of A.I
In 1950 Alan Turing, considered the father of computer science and famous for helping to break the Nazis' encrypted messages during the Second World War, wondered whether machines could think in the same way that humans use available information and reason to solve problems and make decisions. This was the premise of his article Computing Machinery and Intelligence (Turing, 1950), in which he proposed a concrete test to determine whether a machine is intelligent: the famous Turing Test, which asks whether a machine can imitate human intelligence so convincingly that whoever interacts with it cannot tell whether they are talking to a human being or a machine.
Five years later, Allen Newell, Cliff Shaw, and Herbert Simon provided the proof of concept. They designed Logic Theorist[1], a program built to mimic the problem-solving skills of a human being. It is considered by many to be the first artificial intelligence program and was featured at the 1956 Dartmouth Summer Research Project on A.I, widely regarded as the founding event of the field, organized by Marvin Minsky and John McCarthy (who coined the term Artificial Intelligence).
Since that project, the field of Artificial Intelligence has evolved significantly, enabled by the maturing of machine learning techniques, large data sets, and the growth of computational power in accordance with Moore's law, which states that about every two years the number of transistors in a microprocessor doubles[2].
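To make that doubling concrete, a back-of-the-envelope projection can be written in a few lines of Python (the 1971 Intel 4004, with roughly 2,300 transistors, is used here as a reference point):

```python
# Moore's law as a formula: N(t) = N0 * 2**(t / 2), i.e. the
# transistor count doubles every two years.

def projected_transistors(n0: int, years: float) -> float:
    """Project a starting transistor count `years` into the future."""
    return n0 * 2 ** (years / 2)

# Intel's 4004 (1971) had about 2,300 transistors. Fifty years of
# doubling predicts a count in the tens of billions, which is the
# right order of magnitude for today's largest chips.
print(f"{projected_transistors(2_300, 50):,.0f}")  # ~77,175,193,600
```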
This technological growth means that ever more of our lives unfold through data, from choosing what to watch on Netflix, listen to on Spotify, or buy on Amazon, to weightier decisions such as whom we might vote for in the next election, or whether we will be held in prison or granted freedom until an upcoming trial.
This is one of the reasons why some of the most valuable companies of this century such as Apple, Microsoft, Alphabet (Google), Amazon or Meta (Facebook) are technology companies that handle data, which is why it is often said that data is the oil of the 21st century. As Israeli author Yuval Harari points out, "Data is becoming the world's most important asset, and the most important political question of our time is who controls the data." (Harari, 2019).
A.I and the Law
Among the innumerable fields of society that A.I is disrupting, the law is no exception. In recent years, there has been a boom in legal assistance tech start-ups, which use data-mining technology and publicly available legal documents to create powerful legal bots.
A start-up called CaseText uses crowdsourcing to analyse thousands of state and federal legal cases. Some of the biggest law firms in the United States have already “hired” a robot lawyer called ROSS to assist them with cases. This A.I machine, powered by IBM’s Watson technology and marketed as “the world’s first artificially intelligent attorney”, serves as a legal researcher for the firms, sifting through thousands of legal documents to bolster their cases using machine-learning technology[3].
Moreover, an 18-year-old British coder developed a parking ticket bot called DoNotPay that quickly handles ticket appeals through a Q&A chat, as well as insurance claims. The bot, available for free online, has successfully appealed more than $4 million worth of tickets, saving drivers the cost of hiring a lawyer for the appeal, which can run between $400 and $900[4].
Likewise, software by Lex Machina mines public court documents using natural language processing to help predict how a judge will rule in a certain type of case. Some countries, however, are sceptical of this use of A.I. France, for example, has prohibited certain types of data analysis of court decisions, namely the application of statistics and machine learning to predict judicial behaviour, in order to limit “forum shopping” by litigants[5] (the practice of choosing the court most likely to deliver the most favourable judgment).
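Lex Machina’s actual pipeline is proprietary, but the general technique it illustrates, training a text classifier on past rulings, can be sketched in a few lines. The documents, labels, and outcome classes below are invented toy data:

```python
# Minimal sketch of outcome prediction from court texts: a TF-IDF
# bag-of-words representation feeding a logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

past_rulings = [  # toy corpus; real systems train on thousands of cases
    "patent claim construed narrowly, summary judgment granted",
    "motion to dismiss denied, infringement claim proceeds to trial",
    "infringement found, damages awarded to plaintiff",
    "claims invalidated as obvious, case dismissed",
]
outcomes = ["defendant", "plaintiff", "plaintiff", "defendant"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(past_rulings, outcomes)

new_filing = ["motion to dismiss the infringement claim"]
print(model.predict(new_filing))        # predicted prevailing side
print(model.predict_proba(new_filing))  # and the model's confidence
```

A real predictor would of course need judge-level features, far more data, and careful validation; the sketch only shows why such predictions are statistically possible at all.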
However, private companies are not the only ones using A.I in the field of law. Many initiatives in the public sector also seek to implement this technology in judicial cases.
The Estonian Government is implementing artificial intelligence in minor trials[6]. Latvia stated that it was exploring the possibilities of machine learning for the administration of justice[7]. China, which recently released millions of legal documents into the public domain to train artificial intelligence algorithms, has proposed to implement Smart Courts[8], which would promote the application of A.I to the collection of evidence, the analysis of cases, and the reading and analysis of legal documents, with the ability to pass judgment intelligently. Brazil also uses an A.I system called “Victor” to read and classify judicial claims that reach the Supreme Federal Court[9].
COMPAS vs. Prometea
There are two judicial A.I systems that I would like to focus on here. The first is a software package called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), which since 1998 has assessed criminal defendants’ likelihood of re-offending in many U.S. states.
When arrested (and usually without legal counsel), defendants have to fill in a questionnaire[10]; the system analyses the responses and calculates a risk of recidivism, and the judge then decides whether or not to grant conditional release while the judicial process is completed. A general critique of this software is that, since the algorithms it uses are trade secrets, they cannot be examined by the public.
An investigation by ProPublica[11], a non-profit that does investigative journalism, unleashed a strong controversy when it analysed the cases of 7,000 people arrested in the State of Florida over two years. The conclusion was that, comparing a black person and a white person of the same age, gender, criminal history, and other factors, the black person was 45% more likely to receive a high-risk score than the white person, even though the questionnaire answered by the accused does not ask about race.
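The statistical method behind this kind of audit can be sketched as follows: regress the high-risk label on race while controlling for the other factors, and read the race coefficient as an odds ratio. The data below is synthetic, with a bias deliberately injected; only the method mirrors the ProPublica analysis:

```python
# Fairness-audit sketch: logistic regression of "high risk" on race,
# controlling for age, sex and prior offences. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 7000
race = rng.integers(0, 2, n)       # 1 = group A, 0 = group B (toy coding)
age = rng.integers(18, 65, n)
sex = rng.integers(0, 2, n)
priors = rng.poisson(2, n)
# Synthetic scores that depend on priors and, unfairly, on group:
logit = -2 + 0.4 * priors - 0.02 * age + 0.37 * race
high_risk = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([race, age, sex, priors])
audit = LogisticRegression(C=1e6).fit(X, high_risk)  # ~unregularized fit
odds_ratio = np.exp(audit.coef_[0][0])
print(f"Odds of a high-risk score, group A vs. group B: {odds_ratio:.2f}x")
# An odds ratio near 1.45 corresponds to a reported 45% disparity.
```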
COMPAS rose to fame in 2013 with the Loomis case[12]. Eric Loomis received six years in prison and five years of probation after COMPAS estimated a high risk of recidivism. Loomis appealed, claiming that his defence could not contest COMPAS’s methods because the algorithm was not public. The case also alleged that COMPAS violates due process rights by taking gender and race into account[13]. The U.S. Supreme Court denied the writ of certiorari, thus declining to hear the case, on June 26, 2017[14][15].
The second program is Prometea, an A.I created in Argentina in 2017 within the Public Prosecutor’s Office of the City of Buenos Aires, which combines natural language recognition, automation, and prediction, using supervised machine learning techniques, to help resolve court cases. This intelligent system significantly reduces time and bureaucracy and increases efficiency, while improving quality standards for public processes and procedures.
Realizing that most of the time in the Prosecutor’s Office was spent verifying personal data and information that repeats from case to case, its designers grouped decisions into clear sets that were mechanical and predictable, and built a system that, from the answers to just five questions, automatically drafts a judicial opinion in 20 seconds, together with the relevant statistics for the case and links of interest to support the decision. A public official then only has to review and sign it, completing in a few minutes a job that used to take months.
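Prometea’s actual question set and templates are not public in detail; the underlying idea, a question-driven decision tree that selects and fills a template opinion, can be illustrated with a toy sketch in which the questions, case types, and templates are all invented:

```python
# Toy sketch of question-driven drafting: a few answers walk a small
# decision table that selects and fills a template opinion. All
# questions, case types and templates here are invented.
TEMPLATES = {
    ("housing", True): "Grant the measure: the claim concerns the right "
                       "to housing and the formal requirements are met.",
    ("housing", False): "Reject in limine: the formal requirements are not met.",
    ("health", True): "Grant the measure: urgent access to treatment is "
                      "at stake and the formal requirements are met.",
}

def draft_opinion(case_type: str, requirements_met: bool,
                  claimant: str, docket: str) -> str:
    """Return a draft opinion, or fail loudly so a human takes over."""
    body = TEMPLATES.get((case_type, requirements_met))
    if body is None:
        raise ValueError("No template fits: route this case to a human official.")
    return f"Docket {docket}, claimant {claimant}.\n{body}"

print(draft_opinion("housing", True, "J. Pérez", "EXP-1234/2021"))
```

The crucial property is that any unhandled combination fails explicitly instead of guessing, which is what keeps the human official in control.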
Prometea predicts the solution to a court case with a success rate of more than 96%. It improves productivity and simplifies the preparation of documents through automation, using a decision tree that contains the different documents that make up the process (Aberastury & Corvalán, 2018).
Unlike COMPAS, Prometea is traceable: it is possible to know how it reaches the results it obtains. In this way it avoids what is known as a “black box”, a system that produces a result without it being fully understood how that result was acquired. Its creators hold that the State cannot use a black-box system to resolve issues that impact fundamental human rights, because it has to be able to justify and explain its decisions.
Desirability of these developments
As the comparison between two different A.I judicial systems such as COMPAS and Prometea shows, the main difference between these systems is what we can refer to as “explicability”: the degree to which a human can understand the cause of a decision.
Due to the complexity of the algorithms, there is a risk that these systems reach conclusions and results that are inexplicable to users (Bostrom and Yudkowsky, 2018). This is the main problem of current deep-learning-based systems: they are black boxes whose decision-making process cannot be traced or explained.
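The contrast can be made concrete: an interpretable model can print the exact rule path behind every output, which is precisely what a deep network cannot do. A minimal sketch with a small decision tree (the features and labels are toy data, not any real risk instrument):

```python
# A "white-box" model whose reasoning can be printed rule by rule.
from sklearn.tree import DecisionTreeClassifier, export_text

# toy features: [prior_offences, age, employed (0/1)]
X = [[0, 45, 1], [5, 22, 0], [1, 30, 1], [7, 25, 0], [0, 60, 1], [4, 19, 0]]
y = [0, 1, 0, 1, 0, 1]  # 0 = low risk, 1 = high risk (synthetic labels)

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["priors", "age", "employed"]))
# The printed thresholds ARE the decision: every output can be traced
# to an explicit rule that a judge, lawyer or journalist can contest.
```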
The obligation to explain judicial decisions has two main purposes. First, it permits society to know the reasons for imprisoning or punishing someone, ensuring transparency and trust in the judicial system. Second, as a procedural instrument, it allows the defendant to challenge and appeal the conclusions of the judge so that other judges can review the decision. Therefore, the judge must fully express and justify his selective work, both in the apprehension and evaluation of the facts and evidence and in the application of the legal norms.
An A.I system that lacks mechanisms to clearly show its internal reasoning and operation risks affecting fundamental human rights. A judicial A.I must therefore be transparent, reliable, and explainable, so that the systems remain accountable and governments, regulators, lawyers, consumers, research centres, journalists, and citizens in general can permanently assess their fairness.
Explicability is also important because there is always the risk that, if the data used to train the algorithm is biased, the software will likely yield biased results; if the representative sample exhibits patterns of discrimination, for example, the system will likely reproduce them[16].
As Nobel laureate Daniel Kahneman puts it, a cognitive bias is a systematic yet flawed pattern of responses to judgment and decision problems (Kahneman & Tversky, 1972). Biases can lead us to decide unfairly, because our decisions rest on personal perceptions that may be wrong.
Since the data used to train algorithms comes from humans, it can be contaminated with biases, which are then transmitted to the algorithm. If the technology is biased, whether because of the biased data obtained or because of the way the data is analysed, the result can even be an amplification of that bias[17].
For example, in 2014 the company Amazon used an experimental hiring tool with artificial intelligence to give job candidates scores ranging from one to five stars (much like shoppers rate products on Amazon). Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry. In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word “women’s,” as in “women’s chess club captain,” and it downgraded graduates of two all-women’s colleges, according to people familiar with the matter[18].
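A toy reconstruction of this failure mode (not Amazon’s actual system, whose details are not public) shows how quickly a classifier trained on skewed historical decisions learns a negative weight for a gender-marked token:

```python
# A classifier trained on biased hiring labels learns the bias.
# Resumes and labels are invented; the labels encode historical
# discrimination, not merit.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "chess club captain, python developer",
    "women's chess club captain, python developer",
    "java engineer, hackathon winner",
    "women's coding society lead, java engineer",
]
hired = [1, 0, 1, 0]  # skewed historical decisions

vec = CountVectorizer()
X = vec.fit_transform(resumes)
clf = LogisticRegression().fit(X, hired)

weights = dict(zip(vec.get_feature_names_out(), clf.coef_[0]))
print(weights["women"])  # negative: the historical bias has been learned
```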
Therefore, we have to know how an A.I makes its decisions, because they could be based on criteria that affect individual rights. Hence the importance of A.I systems being explainable, so that we can understand how each decision was made.
Nonetheless, biases do not affect A.I systems alone. Although human judges often prefer that their behaviour not be subjected to external scrutiny, various investigations have found arbitrariness in judicial decisions[19]. For example, studies suggest that judges tend to rule more harshly when they are hungry or tired[20].
Therefore, since both an A.I and a human judge carry a risk of bias, it could be argued that the best system is one of hybrid intelligence, in which a judge and an A.I system work together to solve a case and, in a way, check each other, so that the public obtains the best of both worlds.
Hybrid intelligence
Emerging technologies such as A.I have significant potential to make the public sector smarter, more agile, efficient, user-friendly, reliable, and transparent. Their ability to analyse massive volumes of information and to organize, detect, correlate, classify, segment, or automate data allows judicial decisions to be made faster and more easily.
This is especially true in those cases in which judicial operators act as mere data handlers: they copy and paste information, perform repetitive tasks, or check whether certain requirements are met in order to resolve the case in a pre-established way. Automating the “easy” cases could help decongest the vast backlog of judicial files and let judicial operators focus on the truly complex tasks that a machine is not yet capable of solving (perhaps cases with particularities that have never been analysed before, or that require the analysis of a new law), thus providing a better service of justice for all.
Humans, on the other hand, are better at improvising and at solving new dilemmas that an A.I has not yet been trained to handle.
Therefore, I believe the best system is one where A.I is a complement to, and not a substitute for, human intelligence. In this sense, a judge should not have to “compete” against a machine to see which one does the job better, but rather work with it as an ally, so that the A.I can help the judge work more efficiently.
Algorithms also make it possible to standardize decisions, so that the same problem does not receive two different answers. This consistency of argument is a strong point in favour of the automation of judicial processes.
In my country, Argentina, it is not unusual in the same case for a first-instance judge to rule in a certain manner, for the losing side to appeal to a Second Instance where three judges review the case, and for one of them to rule as the first judge did while the other two (the majority) rule differently. The case might then go to the Supreme Court, where five judges decide, and perhaps three argue one way and the other two another. The same case, analysed by nine different judges (all of them experts in the matter), may thus be seen one way by five of them and in a completely different way by the other four. This provides very little legal certainty and could be remedied by the more standardized criteria of an A.I.
A legal system requires that the decisions made by judges be as predictable as possible. That is, if a judge has ruled in a certain way in one case, a similar conclusion should be expected in a similar case. Otherwise, there would be no legal certainty, and citizens could not operate in society without a clear definition of what can and cannot be done. This principle is known as stare decisis.
However, the judiciary is also governed by another principle, which requires analysing each particular case on its merits and on its specific conditions and facts. This implies finding the differences at play in each case and defining a unique solution for it. No two cases are ever exactly the same; something will always be different: the people involved, the place, the date, and so on. All these factors could mean that no case follows the same pattern, or, even more dangerously, that an A.I might fail to distinguish the important differences that separate one case from another and resolve all of them in the same fashion.
Since judges should always be able to inspect the reasoning behind the A.I, these systems should not seek to replace judges but to assist them and to prevent these types of errors from occurring. Forbidding them altogether would be like preventing a judge from using other tools that help him work more efficiently, such as a typewriter, a calculator, or a book.
By using A.I responsibly and developing white-box systems, that is, supervised, explainable, interpretable, and traceable ones, the implementation of A.I in judicial processes can take advantage of the technology’s benefits without facing most of its risks.
Citizen services can also improve with the incorporation of A.I: it can generate a more personalized, more available, and timelier public service, sensitive to the needs of citizens insofar as it has access to a broader database at the moment of interaction. This can mean greater closeness and trust, and a better use of public spending for the taxpayer.
With the increasing implementation of A.I in the judicial process, and knowing that these systems are new and that errors may still surface as they develop, it could be argued that citizens should have the right to be judged by a human. One way to achieve this is to give the parties in each case the choice of having their case analysed by a human or by an A.I system, knowing that choosing the A.I could make the process cheaper and the matter be resolved faster. If both parties agree, the case can be decided by an A.I; if at least one party prefers a human judge, a human decides the case. In this way, we could move gradually toward a smarter justice system without forcing people to accept these changes, while protecting individual rights.
Conclusion
The landscape of artificial intelligence has evolved significantly since 1950, when Alan Turing first raised the question of whether machines can think.
On the one hand, we can already see hundreds of millions of people using and benefiting from A.I today. Not only are private sector companies taking advantage of this technology, but the public sphere is also slowly realizing its transformative potential.
Every day, more organizations are beginning to explore how these new technologies can help them work more efficiently and effectively, and respond to the changes that this new industrial revolution has introduced in society.
Intelligent algorithms will be increasingly decisive in simplifying processes, optimizing human activities, and maximizing results, or in achieving outcomes that, given the limits of our cognitive capacities in the face of huge masses of data, would be impossible without A.I.
On the other hand, we cannot forget that an in-depth public debate on these tools before their implementation is essential, to provide a framework for the development of artificial intelligence algorithms that respects fundamental rights.
A.I can lead to applications that are harmful or beneficial for society. History itself provides several examples of such innovations: gunpowder was originally used in China to make fireworks, while its later introduction in Europe led to the production of firearms; knowledge of nuclear energy allowed the creation of weapons of mass destruction but also the development of medical treatments that use radioactive isotopes.
In this sense, regulating these technologies responsibly is a necessity that cannot be postponed. An unprecedented technological revolution such as this one requires a local and global legal framework for the protection of individuals’ data, privacy, and the common good, one that ensures both transparency and confidentiality. The challenge is not minor, and in this process it is important to listen to the different voices of society, citizens as well as the public, private, and third sectors, and to learn from the successes and failures of other countries and from the recommendations of international organizations.
Understanding that A.I will inevitably become more present in our daily lives, it is key to know its benefits, but also its risks. When thinking about artificial intelligence, as Einstein once suggested[21], we cannot forget the infinite potential of human stupidity; and as many Hollywood movies have shown us, not paying enough attention to the ethics involved in the development of A.I could have catastrophic consequences.
The risks are enormous, but so are the benefits. We must understand that this disruptive technology offers a new opportunity to become more intelligent and to solve problems better; yet along with optimism about the enormous potential of A.I, a prudent attitude is also needed to help design and use these systems in a just, inclusive, and responsible way.
A.I can enhance the capacities of the judicial sector by automating decision-making processes, allowing us to adopt a new paradigm of hybrid intelligence that combines human with artificial intelligence, and to successfully arrive at a 4.0 judicial system.
References
Aberastury, Pedro & Corvalán, Juan Gustavo (eds.). Administración pública digital. Revista Jurídica de Buenos Aires, year 43, no. 96, 2018. Facultad de Derecho, Universidad de Buenos Aires, Departamento de Publicaciones.
Amunátegui Perelló, Carlos. “Sesgo e inferencia en redes neuronales ante el derecho”. 2020.
Bostrom, Nick & Yudkowsky, Eliezer (2018). The Ethics of Artificial Intelligence. doi: 10.1201/9781351251389-4.
Estevez, Elsa; Linares Lejarraga, Sebastián & Fillottrani, Pablo. Prometea: Transformando la administración de justicia con herramientas de inteligencia artificial. BID, 2020, Washington.
European Commission for the Efficiency of Justice (CEPEJ) (2018). European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment. Available at: https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c
Gardner, Howard. Frames of Mind: The Theory of Multiple Intelligences. New York: Basic Books, 1983.
Goleman, Daniel. Emotional Intelligence: Why It Can Matter More Than IQ. New York: Bantam Books, 1995.
Harari, Yuval. 21 Lessons for the 21st Century. Vintage, 2019.
Kahneman, Daniel & Tversky, Amos. (1972). Subjective probability: A judgment of representativeness. Cognitive Psychology, 3(3), 430–454. https://doi.org/10.1016/0010-0285(72)90016-3
Kurzweil, Ray. The Age of Intelligent Machines. Cambridge, Mass.: MIT Press. 1990.
Turing, Alan. Computing Machinery and Intelligence. Mind, New Series, Vol. 59, No. 236 (Oct. 1950), pp. 433–460. Oxford University Press on behalf of the Mind Association.
[1] Logic Theorist Explained – Everything You Need To Know. Available at: https://history-computer.com/logic-theorist/
[2] The statement that the number of transistors that can be placed on an integrated circuit doubles every two years. It was first made by Gordon Moore, the president of Intel, in 1965 and has remained valid for the first fifty years of the existence of integrated circuits. However, there are various reasons for thinking that this will come to an end in the future; for example, as circuits become smaller, the quantum effects associated with individual atoms and electrons become more significant. Available at: https://www.oxfordreference.com/view/10.1093/oi/authority.20110803100208256
[3] Meet ‘Ross,’ the newly hired legal robot. Available at: https://www.washingtonpost.com/news/innovations/wp/2016/05/16/meet-ross-the-newly-hired-legal-robot/
[4] Chatbot lawyer overturns 160,000 parking tickets in London and New York. Available at: https://www.theguardian.com/technology/2016/jun/28/chatbot-ai-lawyer-donotpay-parking-tickets-london-new-york
[5] France Bans Analytics of Judges’ Decisions. Available at: https://www.lexology.com/library/detail.aspx?g=ff53dfbe-0fe6-4dee-8a1d-990bf8459020
[6] Can AI Be a Fair Judge in Court? Estonia Thinks So. Available at: https://www.wired.com/story/can-ai-be-fair-judge-court-estonia-thinks-so/
[7] European Commission for the Efficiency of Justice (CEPEJ). European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment. Available at: https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c
[8] Available at: https://www.iacajournal.org/articles/10.36745/ijca.367/
[9] Vieira de Carvalho Fernandes, R.; Barros Mendes, D.; Carvalho, G. A. & Honda Ferreira, H. (2021). "The VICTOR Project: Applying Artificial Intelligence to Brazil's Supreme Federal Court". In Research Handbook on Big Data Law. Cheltenham, UK: Edward Elgar Publishing. doi: https://doi.org/10.4337/9781788972826.00021
[10] The 137 questions of COMPAS are available at: https://www.documentcloud.org/documents/2702103-Sample-Risk-Assessment-COMPAS-CORE.html
[11] ProPublica, "Machine bias: There's software used across the country to predict future criminals. And it's biased against blacks". Available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
[12] Loomis v. Wisconsin, 881 N.W.2d 749 (Wis. 2016), cert. denied, 137 S. Ct. 2290 (2017).
[13] "FindLaw's Supreme Court of Wisconsin case and opinions". FindLaw.
[14] "Supreme Court Order List (06/26/2017)" (PDF). Supreme Court of the United States.
[15] "Docket for 16-6387". www.supremecourt.gov.
[16] The need to avoid unfair bias has been expressed by an independent high-level expert group on artificial intelligence set up by the European Commission in 2018. Available at: https://www.aepd.es/sites/default/files/2019-12/ai-definition.pdf
[17] Amunátegui Perelló, Carlos. "Sesgo e inferencia en redes neuronales ante el derecho". 2020, p. 32.
[18] Amazon scraps secret AI recruiting tool that showed bias against women (2018). Available at: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
[19] Judicature. Getting Explicit About Implicit Bias. Available at: https://judicature.duke.edu/articles/getting-explicit-about-implicit-bias/
[20] Corbyn, Zoë. Hungry judges dispense rough justice. Nature (2011). Available at: https://www.nature.com/articles/news.2011.227
[21] From the quote attributed to Albert Einstein: "Two things are infinite: the universe and human stupidity; and I'm not sure about the universe."