

July 11, 2018

Briony Harris, Senior Writer at Formative Content


Could an AI ever replace a judge in court?

Xiaofa stands in Beijing No 1 Intermediate People’s Court, offering legal advice and helping the public get to
grips with legal terminology. She knows the answer to more than 40,000 litigation questions and can deal with
30,000 legal issues. Xiaofa is a robot.
 
China already has more than 100 robots in courts across the country as it actively pursues a transition to smart
justice. These can retrieve case histories and past verdicts, reducing the workload of officials. Some of the robots
even have specialisms, such as commercial law or labour-related disputes. 
 
Chinese courts also use artificial intelligence to sift through private messages or comments on social media that
can be used as evidence in court. And traffic police are reportedly using facial recognition technology to identify
and convict offenders.
 
But these legal uses for AI are just the beginning of what may be possible in the future.  
 
An aide to judges
 
China has a civil law system that uses case law to determine the outcome of trials. With just 120,000 judges to
deal with 19 million cases a year, it is little wonder the legal system is turning to AI, law firm Norton Rose
Fulbright says.
 
The Supreme People’s Court has asked local courts to take advantage of big data, cloud computing, neural
networks and machine learning. It wants to build technology-friendly judicial systems and explore the use of big
data and AI to help judges and litigants resolve cases.
 
An application named Intelligent Trial 1.0 is already reducing judges’ workloads by helping sift through material
and producing electronic court files and case material.
But the emphasis is still on helping – rather than replacing – judges, barristers and lawyers.
 
“The application of artificial intelligence in the judicial realm can provide judges with splendid resources, but it
can’t take the place of the judges’ expertise,” said Zhou Qiang, the head of the Supreme People’s Court, who
advocates smart systems.
 
Eliminating bias?
But recent advances in AI mean the technology can do far more than sifting through vast quantities of data. It is
developing cognitive skills and learning from past events and cases.
This inevitably leads to questions as to whether AI will one day make better decisions than humans.
All human decisions are susceptible to prejudice and all judicial systems suffer from unconscious bias, despite
the best of intentions. 
Algorithms that can ignore factors that do not legally bear on individual cases, such as gender and race, could
remove some of those failings.
Among the most important decisions judges face are whether to grant bail and how long prison sentences should be. These decisions are usually guided by the likelihood of reoffending.
Algorithms are now able to make such decisions by giving an evidence-based analysis of the risks, rather than
relying on the subjective decision-making of individual judges.
Despite these obvious advantages, it is far from clear who would provide oversight of the AI and check that its decisions are not flawed. And more cautious observers warn that AIs may learn and mimic bias from their human inventors or from the data they have been trained on.
Making connections
But AI could also help solve crimes long before a judge is involved. VALCRI, for example, carries out the
labour-intensive aspects of a crime analyst’s job by wading through texts, lab reports and police documents to
highlight areas that warrant further investigation and possible connections that humans might miss.
AIs could also help to detect crimes before they happen. Meng Jianzhu, former head of legal and political affairs
at the Chinese Communist Party, said the Chinese government would start to use machine learning and data
modelling to predict where crime and disorder may occur.
“Artificial intelligence can complete tasks with a precision and speed unmatchable by humans, and will
drastically improve the predictability, accuracy and efficiency of social management,” Mr Meng said.
Setting a precedent
It is as yet uncertain which of these technologies may become widespread and how different governments and
judiciaries will choose to monitor their use.
The day when technology will become the judge of good and bad human behaviour and assign appropriate
punishments still lies some way in the future.  
However, legal systems often provide ideal examples of services that could be improved, while trials are likely
to benefit from better data analysis. 
The law often requires a trial to set a precedent – so watch out for the test case of AI as judge.

Jan 17, 2020

The Future of Lawyers: Legal Tech, AI, Big Data And Online Courts

Bernard Marr, Contributor
Enterprise Tech




In the future, is it conceivable that a firm would be charged with legal
malpractice if it didn't use artificial intelligence (AI)? It certainly is.
Today, artificial intelligence offers a way to solve, or at least ease, the
access-to-justice problem and to completely transform our traditional legal
system. Here's what you need to know about how AI, big data, and online
courts will change the legal system.


The Future of Law

When I sat down in conversation with Richard Susskind, OBE, the world's most-cited author on the future of legal services, to discuss the future of law and lawyers, it became apparent just how much change the legal system will face over the next decade thanks to innovation brought about by artificial intelligence and big data.
In Richard’s book The Future of Law, published in 1996, he predicted that
in the future, lawyers and clients would communicate via email. This
revelation was shocking at the time, especially to those working in the legal
system; however, transmitting communication via email is now
commonplace for lawyers and their clients. This story gives insight into the
challenges faced in bringing the traditionally conservative legal system into
the 21st century.


In his brand new book Online Courts and the Future of Justice, Richard argues that technology is going to bring about a fascinating decade of change in the legal sector and transform our court system. Although automating our old ways of working plays a part in this, even more critical is that artificial intelligence and technology will help give more individuals access to justice.

Our current access-to-justice problem, even in what are typically thought of as mature systems, is significant. In fact, only about 46% of people have access to the legal system. There are unimaginable backlogs in some court systems. For most of us, litigation takes too much time and money. We can use technology to help with this issue and make court a service rather than a place as we move legal resolution online.

Some of the technologies that would allow this transition are quite basic.

The first generation is the idea that people who use the court system submit evidence and arguments to the judge online or through some form of electronic communication; essentially, judgments move from the courtroom to online. In a digital society, we should certainly be able to institute extended courts that go beyond decisions made by judges to some kind of diagnostic system that guides people through their legal options, helps them assemble evidence, and provides alternative ways for dispute resolution.

The second generation of using technology to transform the legal system would be what Richard calls "outcome thinking": using technology to help resolve disputes without requiring lawyers or the traditional court system. It is entirely conceivable that within a relatively small number of years we will have systems that can predict the outcomes of court decisions based on past decisions by using predictive analytics. Imagine if people had the option, instead of waiting for a court date (and support from the traditional legal system), to use a machine-learning system to make a prediction about the likely outcome of a case and then accept that as a binding determination.

Some of the biggest obstacles to an online court system are the political will
to bring about such a transformation, the support of judges and lawyers,
funding, as well as the method we’d apply. For example, decisions will need
to be made whether the online system would be used for only certain cases
or situations.

Ultimately, we have a grave access-to-justice problem. Technology can help improve our outcomes and give people a way to resolve public disputes in ways that previously weren't possible. While this transformation might not solve all the struggles with the legal system or the access-to-justice issue, it can offer a dramatic improvement.

The Future of Lawyers

So far, the emphasis on technology in the legal system has been to support
lawyers and their staff in some of the work they do, such as email,
accounting systems, word processing, and more. Now, we're beginning to
see the merits of using technology to automate some tasks such as
document analysis or document drafting—essentially moving from the back
office to the front office.

One of our biggest struggles in the future of the law profession is law schools, because they are still producing 20th-century lawyers when what we need are 21st-century lawyers who can meet the demand of companies and individuals for a lower-cost legal option that is conveniently available and delivered electronically.

Some legal work can now be done by machines when in the past, this was
unthinkable. Large disputes often have a huge number of documents to
analyze. Typically, armies of young lawyers and paralegals are put to work
to review these documents. A properly trained machine can take over this
work. Document drafting by machines is also gaining traction. We also see
systems that can predict the outcome of disputes. We're beginning to see
machines take on many tasks that we used to think were the exclusive role
of lawyers.

Tomorrow's lawyers will be the people who develop the systems that will solve clients' problems. These legal professionals will be legal knowledge engineers, legal risk managers, system developers, experts in design thinking, and more. These people will develop new ways of solving legal problems with the support of technology. In many ways, the legal sector is undergoing the digitization that other industries have gone through, and because it's very document-intensive, it's actually an industry poised to benefit greatly from what technology can offer.

Richard believes that in the next decade, machines and lawyers will work alongside each other, with some jobs being taken over by machines entirely. Eventually, he believes that the legal system, and therefore a lawyer's job, will change because technology is allowing us to solve problems in a new way. For example, in the future he expects there will be far fewer cases tried in a traditional court, and therefore less need for lawyers who advocate on behalf of clients in a courtroom. Lawyers have a choice to either compete with these systems or help build them. Richard certainly counsels the latter.

What is Legal Artificial Intelligence (AI) and How Will It Affect the Next Generation of Legal Professionals?

By the SMU Social Media Team


Lawyers have long made for compelling protagonists in popular culture—it’s a job
that requires emotional intelligence and quick-witted oratory, at least if you go by
courtroom dramas. But in the real world, effective lawyering increasingly hinges on
the use of data analytics and artificial intelligence (AI).
Zaid Hamzah, founder of Future Law Academy and the CEO and General Counsel of Asia Law Exchange, traces the beginning of advanced legal analytics to around 2012, when the terms "Legal Analytics" and "Legal AI" started appearing on websites and in various legal publications. Since then, the use of natural language processing and machine learning—both aspects of AI—has become common in mature markets like the US and UK. In China, AI-assisted sentencing is even helping to speed up the processing of cases in the judicial system.
“Over the last seven to eight years, these technologies have made a profound impact
on the legal profession,” he notes. “Data forms the foundation of AI. The rise of
analytics tools and the ability to develop machine learning algorithms have given
birth to legal AI. Legal AI has transformed the legal industry by automating areas
such as knowledge management, contract drafting, due diligence, predicting court
outcomes, and workflow management through robotic process automation.”
Lawyers can now use legal AI to skip the grunt work of reviewing millions of pages of
legal documents, and still obtain a fairly accurate picture of the outputs of their legal
review. For instance, they can more easily gather data and gauge how a particular
judge thought in the past and make decisions based on cases that stretch over
decades. This area is called “judicial analytics” and the use of legal AI can show how
a judge reasoned over a particular issue.
 

New tools for a new generation 


With the way things are going, the next generation of lawyers naturally needs to be equipped with the right data and AI skills so they can harness these technologies to promote greater efficiency and greater access to justice. Against this background, Zaid has worked with Singapore Management University's (SMU) School of Law to design an SMU-X course called 'Legal Analytics and Artificial Intelligence in Law'.
This course focuses on integrating aspects of data analytics and AI in the field of
law, and trains students to be able to collaborate with data science teams to design
and develop legal products and services driven by data and AI. As the SMU-X
course involves sending students out into the real world, they will also have the
opportunity to learn how to manage and design legal AI projects for real-life law firms
or corporate legal departments.
“The idea is for them to understand the basics of how analytics and AI work in the
context of lawyering, and pick up practical skills for managing teams of coders,
programmers and data scientists, who will then translate legal content into software
programmes or legal processes,” Zaid explains. In addition, students will also be
taught how data and AI apply to business strategies and management processes,
and the ethics involved in a responsible legal AI practice, he adds.
 

AI for advanced legal reasoning


The potential of legal AI to transform the world of law is still largely untapped in Asia,
Zaid observes.
“Today, there are attempts to develop legal AI that can reason like a lawyer or
judge,” he points out. “While we’re still at the very early stages of computational
thinking in law, developing a legal AI engine that can understand nuances in human
behaviour like a real lawyer would be ground-breaking. The next stage is going to be
about using AI for advanced legal reasoning through pattern recognition, concept
modelling through the use of advanced natural language processing, and deep
learning based on neural networks.”
Does that mean that human lawyers may eventually become irrelevant? Zaid, for
one, chooses to see technology as an enabler that would complement and
strengthen the lawyering skills of human lawyers. By leaving “the drudgery of
mechanical finding, retrieval and basic analysis to the machines”, lawyers of the
future will be freed to focus on higher value and more strategic work, he reasons.
What’s critical is that lawyers have to reskill themselves and to continue to relearn
and upskill. They must learn about emerging technologies like AI but, more
importantly, learn how to apply these technologies to enhance the lawyering process.
Zaid himself had to go back to school to learn about data science. He studied at the Singapore Polytechnic over a three-month period to pick up new knowledge and skills related to AI and analytics. As part of his reskilling, Zaid engaged with global companies like IBM and Microsoft to learn how they are changing the nature of lawyering through data science.
Says the 60-year-old: "In my view, even the senior lawyers have to reskill, especially if they are overseeing teams of younger lawyers. One has to relearn and reskill, so continuous lifelong learning must go on. Can you imagine a lawyer today who never learnt to use Microsoft Word? He wouldn't be able to function. Likewise, in the future, all lawyers must understand legal AI and legal analytics to be able to function effectively."

Can AI Be More Efficient Than People in the Judicial System?
AI is quicker and more efficient than humans in many ways, but should it ever pass judgement on us?

By  Christopher McFadden

January 04, 2020

AI is set to replace many human jobs in the future, but should lawyers and judges be
among them? Here we explore where AI is already being used in judicial systems
around the world, and discuss if it should play a broader role.

In particular, could, or should, AI ever be developed that could pass judgment on a living, breathing human being?


How is AI currently being used in judicial systems?
Believe it or not, AI and other forms of advanced algorithms are already widely used
in many judicial systems around the world. In a number of states within the United
States, for example, predictive algorithms are currently being used to help reduce the
load on the judicial system.

"Under immense pressure to reduce prison numbers without risking a rise in crime,
courtrooms across the U.S. have turned to automated tools in attempts to shuffle
defendants through the legal system as efficiently and safely as possible."
- Technology Review. 
In order to achieve this, U.S. police departments are using predictive algorithms to develop strategies for deploying their forces most effectively. The hope is that this level of automation, built on analysis of historical crime statistics and technologies such as facial recognition, will help improve the effectiveness of their human resources.

The U.S. judicial service is also using other forms of algorithms, called risk
assessment algorithms, to help handle post-arrest cases, too. 

"Risk assessment tools are designed to do one thing: take in the details of a
defendant’s profile and spit out a recidivism score—a single number estimating the
likelihood that he or she will re-offend.

A judge then factors that score into a myriad of decisions that can determine what type
of rehabilitation services particular defendants should receive, whether they should be
held in jail before trial, and how severe their sentences should be. A low score paves
the way for a kinder fate. A high score does precisely the opposite." - Technology
Review. 
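To make that description concrete, here is a minimal sketch of how such a risk assessment tool might work internally, assuming a simple logistic-regression model trained on historical defendant records. The features, training data, and defendant profile below are invented for illustration and are not taken from any deployed tool.

```python
# Hypothetical sketch of a recidivism "risk score" model: take details of a
# defendant's profile and output a single number estimating the likelihood of
# re-offending. All features and data are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [age, prior_convictions, months_since_last_offense, charge_severity]
X_train = np.array([
    [19, 4, 2, 3],
    [45, 0, 120, 1],
    [31, 2, 14, 2],
    [27, 6, 5, 3],
    [52, 1, 60, 1],
])
y_train = np.array([1, 0, 1, 1, 0])  # historical label: 1 = re-offended within two years

model = LogisticRegression().fit(X_train, y_train)

defendant = np.array([[24, 3, 8, 2]])
risk_score = model.predict_proba(defendant)[0, 1]  # predicted probability of re-offending
print(f"Estimated recidivism risk: {risk_score:.2f}")
```

The output is just a predicted probability; as the article notes, a judge then factors that score into bail, rehabilitation, and sentencing decisions.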

In China, AI-powered judges are also becoming a reality. The city of Beijing has introduced an internet-based litigation service center, proclaimed as the "first of its kind in the world," that features an AI judge for certain types of casework.

The judge, called Xinhua, is an artificial female with a body, facial expressions, voice, and actions that are based on an existing, living and breathing human female judge in the Beijing Judicial Service.

This virtual judge is primarily being used for basic repetitive casework, the Beijing Internet Court has said in a statement. 'She' mostly deals with litigation reception and online guidance rather than final judgment.

The hope is that use of the AI-powered judge and the online court will make access to
the judicial process more effective and more wide-reaching for Beijing's citizens. 

"According to court president Zhang Wen, integrating AI and cloud computing with
the litigation service system will allow the public to better reap the benefits of
technological innovation in China." - Radii China.

AI is also being used in China to sift through social media messages, comments, and
other online activities to help build evidence against potential defendants. Traffic
police in China are also beginning to use facial recognition technology to identify and
convict offenders.

Other police forces around the world are also using similar tech. 

Could Artificial Intelligence ever make good decisions?
The answer to this question is not a simple one. While AI can make some types of
legal decisions, this doesn't mean it is necessarily a good idea. 

Many AI systems and predictive algorithms that use machine learning tend to be
trained by using existing data sets or existing historical information. 

While this sounds like a relatively logical approach, it relies heavily on the type and
quality of the data supplied.

"Junk in, junk out." as the saying goes. 

One major use of machine learning and big data is to identify correlations, or apparent correlations, within data sets. In the case of crime data, this could potentially lead to false positives while not actually being very useful for identifying the underlying causes of crime.

As another famous adage warns, "correlation is not causation."

Humans are often just as guilty of this logical fallacy as an artificial replica could
potentially be. One famous example is the correlation between low income and a
person's proclivity towards crime.

Poverty is not necessarily a direct cause of criminal behavior, but it can be an indirect
cause, creating conditions that make crime more likely.  
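A tiny synthetic example, with entirely made-up numbers, shows how a hidden confounder can produce exactly this kind of correlation without causation:

```python
# Synthetic illustration of "correlation is not causation": a hidden confounder
# (here, neighbourhood deprivation) drives both low income and recorded crime,
# so the two correlate even though neither causes the other directly.
import numpy as np

rng = np.random.default_rng(0)
deprivation = rng.normal(size=10_000)                     # hidden confounder
income = -0.8 * deprivation + rng.normal(size=10_000)     # income pushed down by deprivation
recorded_crime = 0.7 * deprivation + rng.normal(size=10_000)  # crime pushed up by deprivation

# Prints a clearly negative correlation, even though income never appears
# in the equation generating recorded_crime.
print(np.corrcoef(income, recorded_crime)[0, 1])
```

A model trained naively on data like this would treat low income as predictive of crime, which is precisely the failure mode the adage warns about.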

If similar errors of correlation are not handled correctly, an AI law-enforcement decision or judgment could quickly degenerate into a vicious cycle of imposing penalties that are too severe or too lenient.

As with everything in life, the situation is actually more nuanced than it appears.
Humans are not perfect decision-making machines either.
If studies from 2018 are correct, it seems that AI can be faster and more accurate than human beings at spotting potential legal issues. This supports the argument that AI should be used in legal support roles, or at least for reviewing legal precedent.

Could AI be used to replace human judges?


As we have already seen, AI and advanced algorithms are already in use around the
world for certain clerical and data gathering tasks. They are, in effect, doing some of
the "legwork" for human judges and lawyers.

But could they ever be used to completely replace humans in a judicial system? What
exactly would be the advantages and disadvantages of doing so?


Many would claim that an AI should be able to remove any bias in the final judgment-
making process. Their final decisions should, in theory, be based purely on the facts at
hand and existing legal precedent. 

This, of course, is supposed to already be the case with human judges. But any human is susceptible to incomplete knowledge, prejudice, and unconscious bias, despite the best of intentions.
But, probably more significantly, just because something is law doesn't necessarily mean it's just. "Good" and "bad" behavior is not black or white; it is a highly nuanced and completely human construction.

Such questions remain within the realm of philosophy, not computer science. Others would likely disagree, of course, and that might be seen as a "good" thing.

Judges also have the role of making decisions on the offender's punishment post-
conviction. These can range from the minor (small fines) to the life-changing, such as
imposing long-term imprisonment, or even the death penalty in areas where it is used. 

Such decisions are generally based on a set of sentencing guidelines that take into account factors such as the severity of a crime, its effect on the victims, previous convictions, and the convict's likelihood of re-offending. As we have seen, this is one
area where AI and predictive algorithms are already being used to help with the
decision-making process.

Judges can, of course, completely ignore the recommendation from the AI. But this
might not be possible if humans were completely removed from the process. 

Perhaps a case could be made here for panels of AI judges made up of a generative
adversarial network (GAN). 

But that is beyond the scope of this article. 

Would AI judges be unbiased?


One apparent benefit of using AI to make decisions is that algorithms can't really have
a bias. This should make AI almost perfect for legal decisions, as the process should
be evidence-based rather than subjective — as can be the case for human judges. 

Sounds perfect, doesn't it? But "the grass isn't always greener on the other side."

Algorithms and AI are not perfect in and of themselves in this regard. This is primarily because any algorithm or AI needs to first be coded by a human, which can introduce unintended bias from the outset.

AIs may even learn and mimic bias from their human counterparts and from the
specific data they have been trained with. Could this ever be mitigated against?
Another issue is who will oversee AI-judges? Could their decisions be challenged at a
later date? Would human judges take precedence over a decision by an AI, or vice
versa?

The World Government Summit, held in 2018, reached an interesting and poignant conclusion on this subject that bears repeating verbatim:

"It is as yet uncertain which of these technologies may become widespread and how
different governments and judiciaries will choose to monitor their use.

The day when technology will become the judge of good and bad human behavior and
assign appropriate punishments still lies some way in the future.  

However, legal systems often provide ideal examples of services that could be
improved, while trials are likely to benefit from better data analysis. The law often
requires a trial to set a precedent – so watch out for the test case of AI as a judge." 

In conclusion, could AI ever replace human legal professionals or be more efficient at legal decision-making? The answer, it seems, is both yes and no.

Yes, with regards to performing support or advisory roles such as gathering evidence
or estimating the likelihood of re-offending. No, with regards to making final
judgments and sentencing decisions. 

It is probably prudent to give human beings, rather than code, the last word when it
comes to sentencing. The law and legal systems can, after all, be legitimately labeled
as a human construction.

Existing legal systems are both beautifully jury-rigged and maddeningly illogical at times, and they have been adapted and upgraded as sense and sensibilities evolved over time — and that suits human beings just fine. Most legal systems are not set in stone for all time; they evolve as society does.

It is not likely that a machine could ever be trained to understand, empathize, or pass
judgment "in the spirit of the law."

Perhaps humans, with all our imperfections and logical inconsistencies, are the only
possible arbiters of justice on one another. For this reason, it could be argued that
"justice" should never be delegated to machines, as their "cold logic" could be seen as
being at odds with the "human condition". 

But we'll let you make up your own mind.

How can artificial intelligence affect courts?

12 March 2017

Research on artificial intelligence has increased greatly in recent years. This expansion is associated with the increasing availability of data shared by users, enabled by the expansion of internet access. This sheer amount of information is often used to train algorithms, as in machine learning. In these cases the algorithm is not programmed in advance to do just X or Y; it is constructed in a way that allows it to learn from a range of inputs.

Machine learning allows a program to analyze a set of data and then learn how to make predictions, or take decisions, based on what it has learned. This subfield of computer science is already a reality in our daily lives, from facial recognition programs, like the one used by Facebook, to areas like marketing, speech translation, the improvement of search algorithms, DNA research, and more.

Strong and Weak Artificial Intelligence (AI)

There are also researchers trying to apply these tools in law, for example by using AI in court rulings. But it is important to understand a little about how AI works in general in order to avoid exaggerated extrapolations, because news articles about the replacement of lawyers and judges by robots in the near future have been appearing frequently. An article in The Guardian states that:

‘Software that is able to weigh up legal evidence and moral questions of right and
wrong has been devised by computer scientists at University College London, and
used to accurately predict the result in hundreds of real life cases. The AI “judge” has
reached the same verdicts as judges at the European court of human rights in almost
four in five cases involving torture, degrading treatment and privacy[…] The algorithm
examined English language data sets for 584 cases […] In each case, the software
analysed the information and made its own judicial decision. In 79% of those
assessed, the AI verdict was the same as the one delivered by the court.”
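The general recipe reported in that study (turn the text of each case into features and train a supervised classifier to predict the outcome) can be sketched in a few lines. This is only an illustration of that category of method, not the UCL team's actual pipeline, and the toy training examples below are invented.

```python
# Rough sketch of NLP + supervised learning for predicting court outcomes:
# represent each case's text with simple n-gram features and train a
# classifier on past decisions (1 = violation found, 0 = no violation).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

cases = [
    ("applicant held in degrading conditions without review", 1),
    ("complaint about length of civil proceedings dismissed as manifestly ill-founded", 0),
    ("detainee denied access to medical treatment for months", 1),
    ("property dispute resolved domestically with adequate remedies", 0),
]
texts, labels = zip(*cases)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, labels)

# Predict the likely outcome of an unseen case summary.
print(model.predict(["applicant alleges inhuman treatment in pretrial detention"]))
```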

An inattentive reading could suggest that the program is the equivalent of a human conscience, capable of judging several cases based on the analysis of a large body of case law. However, current advances in artificial intelligence are not capable of simulating a human brain (what is referred to as strong AI), and there is still strong debate about whether this would even be possible.

In this respect, Professor Nikolaos Aletras, one of the project leaders, clarifies that researchers do not expect judges and lawyers to be replaced by AI in the future, but it is quite possible that AI tools could help them in their rulings. A judge analyzing a new case might use a similar program to compare related cases, showing which similarities and differences were found, or even how an AI would rule based on previous rulings.

So the possibilities of artificial intelligence today are at the level of weak AI, a category in which the algorithm is able to perform only specific tasks and has no general learning capacity. Although they are not on the same level as a broad intelligence, as in the case of human beings, such programs are quite sophisticated, creating opportunities for diverse applications.

Machine Learning and preventive detention

Another important study was conducted by the National Bureau of Economic Research in the USA. Economists and computer scientists have developed software to measure the likelihood of defendants fleeing or committing new crimes while awaiting trial at liberty. The algorithm assigns a risk score based on information from the case (which offense the person is suspected of, and where and when the person was detained), the defendant's rap sheet, and their age.

The program has been trained on information from hundreds of thousands of New York criminal cases and tested on hundreds of thousands of other cases, proving to be more effective at assessing risk than judges.

“They estimate that for New York City, their algorithm’s advice could cut crime by
defendants awaiting trial by as much as 25 percent without changing the numbers of
people waiting in jail. Alternatively, it could be used to reduce the jail population
awaiting trial by more than 40 percent, while leaving the crime rate by defendants
unchanged.”

These results suggest that such a tool could bring benefits to the Brazilian penal system, since one in three prisoners is in preventive detention awaiting trial, probably unnecessarily in a significant number of cases. These avoidable detentions only worsen the overcrowding of Brazilian prisons, which hold 659,020 people.

Transparency and prejudice replicated by the machine


However, there is a fundamental need for accountability and transparency in these algorithms, because they may reproduce human prejudices if the data provided for the program's training is biased. This was detected in software developed by a private US company to calculate the likelihood of criminal recidivism: the algorithm was wrong in a significant number of predictions involving African Americans, showing racial disparities in its risk scores.

It is therefore important that such tools be auditable, in order to avoid unfair and non-transparent decision-making criteria.

The algorithm cited above, developed by the National Bureau of Economic Research,
sought to avoid this problem by using only the demographic data of the defendants.

In the near future it is very likely that this type of artificial intelligence will assist justice systems daily, increasing their efficiency and ensuring a better application of justice. But we, as a society, must always ensure accountability of these tools in order to avoid prejudice and undesirable biases which can deny justice.

AI Goes to Court: The Growing Landscape of AI for Access to Justice

Jonah Wu

Aug 6, 2019

1. Can AI help improve access to civil courts?
Civil court leaders have a newly strong interest in how artificial
intelligence can improve the quality and efficiency of legal
services in the justice system, especially for problems that self-
represented litigants face [1, 2, 3, 4, 5]. The promise is that
artificial intelligence can address the fundamental crises in
courts: that ordinary people are not able to use the system
clearly or efficiently; that courts struggle to manage vast
amounts of information; and that litigants and judicial officials
often have to make complex decisions with little support.

If AI is able to gather and sift through vast troves of information, identify patterns, predict optimal strategies, detect anomalies, classify issues, and draft documents, the promise is that these capabilities could be harnessed for making the civil court system more accessible to people.
The question then, is how real these promises are, and how they
are being implemented and evaluated. Now that early
experimentation and agenda-setting have begun, the study of AI
as a means for enhancing the quality of justice in the civil court
system deserves greater definition. This paper surveys current
applications of AI in the civil court context. It aims to lay a
foundation for further case studies, observational studies, and
shared documentation of AI for access to justice development
research. It catalogues current projects, reflects on the
constraints and infrastructure issues, and proposes an agenda
for future development and research.

2. Background to the Rise of AI in the Legal System
When I use the term Artificial Intelligence, I distinguish it from
general software applications that are used to input, track, and
manage court information. Our basic criterion for AI-oriented projects is that the technology has the capacity to perceive knowledge, make sense of data, generate predictions or decisions, translate information, or otherwise simulate intelligent behavior. AI does not include all court technology
innovations. For example, I am not considering websites that
broadcast information to the public; case or customer
management systems that store information; or kiosks, apps, or
mobile messages that communicate case information to
litigants.

The discussion of AI in criminal courts is currently more robust than in civil courts. It has been proposed as a means to monitor
and recognize defendants; support sentencing and bail
decisions; and better assess evidence [3]. Because of the rapid
rise of risk assessment AI in the setting of bail or sentencing,
there has been more description and debate on AI [6]. There has
been less focus on AI’s potential, or its concerns, in the civil
justice system, including for family, housing, debt, employment,
and consumer litigation. That said, there has been a robust
discourse over the past 15 years of what technology applications
and websites could be used by courts and legal aid groups to
improve access to justice [7].

The current interest in AI for civil court improvements is in sync with a new abundance of data. As more courts have gathered data about administration, pleadings, litigant behavior, and decisions [1], this presents powerful opportunities for research and analytics in the courts that can lead to greater efficiency and better design of services. Some groups have managed to use data
to bring enormous new volumes of cases into the court system —
like debt collection agencies, which have automated filings of
cases against people for debt [8], often resulting in complaints
that have missing or incorrect information and minimal,
ineffective notice to defendants. If litigants like these can
harness AI strategies to flood the court with cases, could the
courts use their own AI strategies to manage and evaluate these
cases and others — especially to better protect unwitting
defendants against low-quality lawsuits?

The rise in interest in AI coincides with state courts experiencing economic pressure: budgets are cut, hours are reduced, and even
some locations are closed [9]. Despite financial constraints,
courts are expected to provide modern, digital, responsive services like those in other consumer sectors. This presents a challenging expectation for the courts. How can they provide judicial services in sync with other, rapidly modernizing service sectors — in finance, medicine, and other government bodies — within significant cost constraints? The promise of AI is that it can scale up quality services and improve efficiency, raising performance and saving costs [10].

A final background factor to consider is the growing concern over public perceptions of the judicial system. Yearly surveys indicate that communities find courts out of touch with the public, with calls for greater empathy and engagement with "everyday people" [11]. Given that the mission of the courts is to provide an avenue to lawful justice for their constituents, if AI can help the courts better achieve that mission without adding adverse risks, it would help them establish greater procedural and distributive justice for their litigants, and hopefully then bolster their legitimacy with, and engagement from, the public.

3. What could be? Proposals in the Literature for AI for access to justice
What has the literature proposed on how AI techniques can
address the access to justice crisis in civil courts? Over the past
several decades, distinct use cases have been proposed for
development. There is a mix of litigant-focused use cases (to
help them understand the system and make stronger claims),
and court-focused use cases (to help it improve its efficiency,
consistency, transparency, and quality of services).

 Answer a litigant's questions about how the law applies to them. Computational law experts have proposed automated legal reasoning as a way to understand whether a given case is in accordance with the law or not [12]. Court leaders also envision AI helping litigants conduct effective, direct research into how the law would apply to them [4, 5]. Questions of how the law would apply to a given case lie on a spectrum of complexity. Questions that are more straightforwardly algorithmic (e.g., whether a person exceeded a speed limit, or whether a quantity or date is in an acceptable range) can be automated with little technical challenge [13]. Questions that turn on more qualitative standards, like whether something was reasonable, unconscionable, foreseeable, or done in good faith, are not as easily automated — but they might be with greater work in deep learning and neural networks. Many propose that expert systems, or AI-powered chatbots, might help litigants know their rights and make claims [14].

 Analyze the quality of a legal claim and evidence. Several proposals are around making it easier to understand what has been submitted to court, and how a case has proceeded. Some exploratory work has pointed towards how AI could automatically classify a case docket, the chronological events in a case, so that it could be understood computationally [15]. Machine learning could find patterns in claims and other legal filings, to indicate whether something has been argued well, whether the law supports it, and to evaluate it against competing claims [16].

 Provide coordinated guidance for a person without a lawyer. Many have proposed focusing on developing a holistic AI-based system to guide people without lawyers through the choices and procedure of a civil court case. One vision is of an advisory system that would help a person understand available forms of relief, help them understand whether they can meet the requirements, inform them of procedural requirements, and help them to draft court documents [17, 18].

 Predict and automate decision-making. Another proposal, discussed within the topic of online dispute resolution, is around how AI could either predict how a case will be decided (and thus give litigants a stronger understanding of their chances), or actually generate a proposal for how a dispute should be settled [19, 20]. In this way, prediction of judicial decisions could be useful to access to justice. It could be integrated into online court platforms where people are exploring their legal options, or where they are entering and exchanging information in their case. The AI would help litigants to make better choices regarding how they file, and it would help courts expedite decision-making by either supporting or replacing human judges' rulings.

4. What is happening so far? AI in action for access
With many proposals circulating about how AI might be applied
for access to justice, where can we see these possibilities being
developed and piloted with courts? Our initial survey identifies a
handful of applications in action.

4.1. Predicting settlement arrangements, judicial decisions, and other outcomes of claims
One of the most robust areas of AI in access to justice work has
been in developing applications to predict how a claim, case, or
settlement will be resolved by a court. This area of predictive
analytics has been demonstrated in many research projects, and in some cases has been integrated into court workflows.

In Australian Family Law courts, a team of artificial intelligence experts and lawyers has begun to develop the Split-Up system, which uses rule-based reasoning in concert with neural networks to predict outcomes for property disputes in divorce and other family law cases [21]. The Split-Up system is used by judges to support their decision-making, by helping them identify the assets of the marriage that should be included in a settlement and then establishing what percentage of the common pool each party should receive — a discretionary judicial choice based on factors including contributions, amount of resources, and future needs. The system incorporates 94 relevant factors into its analysis, which uses neural network statistical techniques. The judge can then propose a final property order based on the system's analysis. The system also seeks to make its decisions transparent, so it uses Toulmin argument structures to represent how it reached its predictions.
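As a toy sketch of the statistical core described here, a small neural-network regressor can map case factors to a predicted share of the common pool. The real Split-Up system reportedly combines 94 factors with rule-based reasoning and Toulmin argument structures; the three features and training values below are invented.

```python
# Toy sketch of a neural-network predictor for the percentage split of marital
# assets. Features and data are invented; the real system uses 94 factors.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Features: [relative contribution, relative future needs, marriage length in years]
X = np.array([
    [0.5, 0.5, 10],
    [0.7, 0.3, 25],
    [0.3, 0.8, 5],
    [0.6, 0.6, 15],
    [0.4, 0.7, 8],
])
y = np.array([0.50, 0.55, 0.60, 0.52, 0.58])  # one party's predicted share of the pool

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=20_000, random_state=0).fit(X, y)
print(model.predict([[0.45, 0.65, 12]]))  # predicted share for a new case
```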

Researchers have created algorithms to predict Supreme Court and European Court of Human Rights decisions [22, 23, 24]. They use natural language processing and machine learning to construct models that predict the courts' decisions with strong accuracy. Their predictions draw on the formal facts submitted in the case to identify what a likely outcome, and potentially even individual justices' votes, will be. This judicial decision prediction research could possibly be used to offer predictive analytic tools to litigants, so they can better assess the strength of their claim and understand what outcomes they might face. Legal technology companies like Ravel and LexMachina [25, 26] claim that they can predict judges' decisions and case behavior, or the outcomes of an opposing party. These applications are mainly aimed at corporate-level litigation, rather than access to justice.

4.2. Detecting abuse and fraud against people the court oversees
Courts' role in overseeing guardians and conservators means that they should be reducing financial exploitation of vulnerable people by those appointed to protect them. With particular concern for financial abuse of the elderly by their conservators or guardians, a team in Utah began building an AI tool to identify likely fraud in the reported financial transactions that conservators or guardians submit to the court. The system, developed in concert with a Minnesota court system in a hackathon, would detect anomalies and fraud-related patterns and send flag notifications to courts to investigate further [28].
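One generic way to implement this kind of anomaly detection is an isolation forest fitted to routine reported transactions, with outliers flagged for human review. This is a sketch of the technique, not the Utah/Minnesota tool, and the transaction features are invented.

```python
# Generic anomaly-detection sketch for conservator/guardian financial reports:
# fit on routine transactions, flag unusual ones for court staff to review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Features per reported transaction: [amount, days since last report, payee seen before (0/1)]
normal = np.column_stack([
    rng.normal(200, 50, 500),   # routine expenses around $200
    rng.normal(30, 5, 500),     # roughly monthly reporting
    np.ones(500),               # familiar payees
])
suspicious = np.array([[9_000, 2, 0], [4_500, 1, 0]])  # large, rapid, unfamiliar payee

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = detector.predict(suspicious)   # -1 means "anomalous, flag for investigation"
print(flags)
```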

4.3. Preventative diagnosis of legal issues, matching to services, and automating relief
A robust branch of applications has been around using AI techniques to spot people's legal needs (that they potentially did not know they had), and then either match them to a service provider or automate a service for them, to help resolve their need. This approach has begun with the expungement use case — in which states have policies to help people clear their criminal record, but without widespread uptake. With this problem in mind, groups have developed AI programs to automatically flag who has a criminal record to clear and then to streamline the expungement process for their region. In Maryland, Matthew Stubenberg of the Maryland Volunteer Lawyers Service (now in Harvard's A2J Lab) built a suite of tools to spot their organization's clients' problems, including overdue bills and criminal records that could be expunged. This tool helped legal aid attorneys diagnose their clients' problems. Stubenberg also made the criminal record application public-facing, as MDExpungement, so that anyone can automatically find out whether they have a criminal record and submit a request to clear it [29].
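Much of the logic behind these expungement tools is rule-driven: statutory eligibility criteria applied charge by charge to a record. A minimal sketch follows, with entirely invented rules, since real eligibility criteria vary by jurisdiction.

```python
# Hypothetical rule-based eligibility check for expungement. The rules below
# are invented for illustration; real statutes differ by state.
from dataclasses import dataclass
from datetime import date

@dataclass
class Charge:
    offense: str
    disposition: str          # e.g. "acquitted", "nolle prosequi", "convicted"
    disposition_date: date
    is_violent: bool

def eligible_for_expungement(charge: Charge, today: date = date.today()) -> bool:
    """Very simplified, hypothetical eligibility rule."""
    years_elapsed = (today - charge.disposition_date).days / 365.25
    if charge.disposition in ("acquitted", "nolle prosequi"):
        return True
    return charge.disposition == "convicted" and not charge.is_violent and years_elapsed >= 10

record = [
    Charge("trespassing", "nolle prosequi", date(2015, 3, 1), False),
    Charge("assault", "convicted", date(2012, 6, 1), True),
]
print([eligible_for_expungement(c) for c in record])  # [True, False]
```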

Code for America is working inside courts to develop another AI application for expungement. They are working with the internal databases of California courts to automatically identify expungement-eligible records, eliminating the need for individuals to apply [30].
The authors, in partnership with researchers at the Suffolk LIT Lab, are working on an AI application to automatically detect legal issues in people's descriptions of their life problems, which they share in online forums, social media, and search queries [31]. This project involves labeling datasets of people's problem stories, taken from Reddit and online virtual legal clinics, and then training a classifier to automatically recognize what specific legal issue a person might have based on their story. This classifier could be used to power referral bots (that send people messages with local resources and agencies that could help them), or to translate people's problem stories into actionable legal triage and advisory systems, as has been envisioned in the literature.
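A minimal sketch of that kind of classifier, with invented labels and example stories (the actual project trains on labeled Reddit and legal-clinic data), might look like this:

```python
# Sketch of a legal-issue classifier: label short "problem stories" with an
# issue category and train a text model to tag new stories for triage/referral.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

stories = [
    ("my landlord changed the locks and my stuff is still inside", "eviction/housing"),
    ("a collector keeps calling about a credit card debt from 2014", "consumer debt"),
    ("my ex won't let me see my kids on the agreed weekends", "family/custody"),
    ("I got a notice to vacate with only three days' warning", "eviction/housing"),
]
texts, labels = zip(*stories)

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(texts, labels)

# A prediction like this could drive a referral bot that points the person
# to a local housing legal-aid resource.
print(clf.predict(["sheriff posted an eviction notice on my door today"]))
```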

4.4. Analyzing quality of claims and citations


Considering how to help courts be more efficient in their
analysis of claims and evidence, there are some applications —
like the product Clerk from the company Judicata — that can
read, analyze, and score submissions that people and lawyers
make to the court [32]. These applications can assess the quality
of a legal brief, to give clerks, judges, or litigants the ability to
identify the source of the arguments, cross check them against
the original, and possibly also find other related cases. In
addition to improving the efficiency of analysis, the tool could be
used for better drafting of submissions to the court — with
litigants checking the quality of their pleadings before
submitting them.
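The simplest building block of such a tool is pulling out the citations a filing relies on so they can be checked against their sources. A toy sketch using a regular expression for a few U.S. reporter-style citation formats follows; real products use far richer citation parsing and case databases.

```python
# Toy citation extractor for a legal brief: find reporter-style citations so
# they can be cross-checked. The regex covers only a few common formats.
import re

CITATION_RE = re.compile(r"\b\d{1,3}\s+(?:U\.S\.|F\.\d?d|Cal\.\s?App\.\s?\d?th)\s+\d{1,4}\b")

brief = """Plaintiff relies on Miranda v. Arizona, 384 U.S. 436, and on
Smith v. Jones, 123 F.3d 456, for the proposition that..."""

for citation in CITATION_RE.findall(brief):
    print("found citation:", citation)
```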

4.5. Active, intelligent case management


The Hebei High Court in China has reported the development of
a smart court management AI, termed Intelligent Trial 1.0
system [33]. It automatically scans in and digitizes filings; it
classifies documents into electronic files; it matches the parties
to existing case parties; it identifies relevant laws, cases, and
legal documents to be considered; it automatically generates all
necessary court procedural documents like notices and seals;
and it distributes cases to judges for them to be put on the right
track. The system coordinates various AI tasks together into a
workstream that can reduce court staff and judges’ workloads.

4.6. Online dispute resolution platforms and automated decision-making
Online dispute resolution platforms have grown around the
United States, some of them using AI techniques to sort claims
and propose settlements. Many ODR platforms do not use AI,
but rather act as a collaboration and streamlining platform for
litigants' tasks. ODR platforms like Rechtwijzer, MyLaw BC, and the British Columbia Civil Resolution Tribunal use some AI techniques to sort which people can use the platform to tackle a problem, and to automate decision-making and settlement or outcome proposals [34].

We also see new pilots of online dispute platforms in Australia, in the state of Victoria with its VCAT pilot for small claims (now on hiatus, awaiting future funding), and in Utah, for small claims in one location outside Salt Lake City.

These pilots are using platforms like Modria (part of Tyler Technologies), Modron, or Matterhorn from Court Innovations. How much AI is part of these systems is not clear — they seem to be mainly platforms for logging details and preferences, communicating between parties, and drafting and signing settlements (without any algorithm or AI tool making a decision proposal or crafting a strategy for the parties). If the pilots are successful and become ongoing projects, then we can expect future iterations to involve more AI-powered recommendations or decision tools.
5. Agenda for Development and Infrastructure of AI in access to justice
If an ecosystem of access to justice AI is to be accelerated, what
is the agenda to guide the growth of projects? There is work to
be done on the infrastructure of sharing data, defining ethics
standards, security standards, and privacy policies. In addition,
there is organizational and coalition-building work, to allow for
more open innovation and cross-organization initiatives to
grow.

5.1. Opening and standardizing datasets


Currently, the field of AI for access to justice is harmed by the
lack of open, labeled datasets. Courts do hold relatively small
datasets, but there are not standard protocols to make them
available to the public or to researchers, nor are there labeled
datasets to be used in training AI tools [35]. There are a few
examples of labelled court datasets, like from the Board of
Veterans Appeals [36]. A newly-announced US initiative, the
National Court Open Data Standards Project, will promote
standardization of existing court data, so that there can be more
seamless sharing and cross-jurisdiction projects [37].

5.2. Making Policies to Manage Risks


There should be multi-stakeholder design of the infrastructure, to define an evolving set of guidance around the following large risks that court administrators have identified as worries about new AI in courts [4, 5].
 Bias of possible training data sets. Can we better spot, rectify, and condition the inherent biases of the data sets we are using to train the new AI?

 Lack of transparency of AI tools. Can we create standard ways to communicate how an AI tool works, to ensure there is transparency to litigants, defendants, court staff, and others, so that there can be robust review of it?

 Privacy of court users. Can we have standard redaction and privacy policies that prevent individuals' sensitive information from being exposed [38]? There are several redaction software applications that use natural language processing to scan documents and automatically redact sensitive terms [39, 40]; a minimal redaction sketch follows this list.

 New concerns for fairness. Will courts and the legal profession have to change how they define what 'information versus advice' is, which currently guides regulations about what types of technological help can be given to litigants? Also, if AI exposes patterns of arbitrary or biased decision-making in the courts, how will the courts respond, by changing personnel, organizational structures, or court procedures, to better provide fairness?
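On the redaction point raised in the privacy item above, a minimal pattern-based sketch might look like the following. It only catches identifier-like patterns; production redaction tools layer named-entity recognition on top to also catch names, addresses, and other free-text identifiers.

```python
# Minimal sketch of automated redaction of sensitive terms in a court filing.
# Only pattern-like identifiers (SSNs, phone numbers) are handled here;
# real tools add NLP/NER to find names and addresses as well.
import re

PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    # Replace each matched identifier with a labeled placeholder.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

filing = "Defendant (SSN 123-45-6789) can be reached at 555-867-5309."
print(redact(filing))
```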

For many of these policy questions, there are government-focused ethics initiatives that the justice system can learn from, as they define best practices and guiding principles for how to integrate AI responsibly into public, powerful institutions [42, 43, 44].

6. Conclusion
This paper’s survey of proposals and applications for AI’s use for
access to justice demonstrates how technology might be
operationalized for social impact.
If more infrastructure-oriented work happens now, establishing how courts can share data responsibly and setting new standards for privacy, transparency, fairness, and due process with regard to AI applications, this nascent set of projects may blossom into many more pilots over the next several years.

In a decade, there may be a full ecosystem of AI-powered courts, in which a person who faces a problem with eviction, credit card debt, child custody, or employment discrimination could have clear, affordable, efficient ways to use the public civil justice system to resolve their problem. Especially with AI offering more preventative, holistic support to litigants, it might have anti-poverty effects as well, ensuring that the legal system resolves people's potential life crises rather than exacerbating them.


How could AI impact the justice system?


The use of artificial intelligence (AI) and machine learning is already driving changes in market practice and service delivery in some parts of the legal sector.
However, the role of efficiency driven solutions powered by AI—and the legal tech
space broadly—is still evolving within the legal industry.

In the justice system specifically, AI has the potential to radically influence the way criminal and civil proceedings are heard and decided—though there are many questions around its eventual application, and the need to consider the ethical implications of using such technology. Sylvie Delacroix, Professor in Law and Ethics at the University of Birmingham, spoke to Thomson Reuters Legal Insights Europe about her views on the subject, and the work that she has been doing in this area of the legal industry.

You are part of The Law Society’s new Public Policy Commission set up to look
at ‘Algorithms in the Justice System’. What has that work involved, and what
have the outcomes been so far?

The Commission was set up to examine the use of algorithms in the justice system in
England and Wales and what controls, if any, are needed to protect human rights
and trust in the justice system. Christina Blacklaws is chairing, and Sofia Olhede and I have been taking evidence from a range of experts (tech, government,
commercial and human rights) on whether algorithms and their use within the justice
system should be regulated, and if so, how. There are two more upcoming evidence
sessions (7 and 14 February 2019). We are keen to hear from a wide range of
stakeholders and there is still time to submit your evidence, which will be taken into
account when drafting the commission’s report (due this summer).

As the legal industry increasingly engages with efficiency-driven solutions, including AI and machine learning—what controls, if any, are needed to ensure that trust and basic human rights are protected in the justice system?

I think it’s helpful to distinguish between two kinds of issues.

Within customer-facing solutions, we are going to see an explosion of 'legal apps'. There will be cases (think parking fines) where there is little downside to the vital increase in affordability and accessibility that automation brings, provided transparency, accountability and privacy are safeguarded. Yet such clear-cut cases of unproblematic automation are not that common. Laudable as it may be, the drive to democratise legal expertise by distilling it into mass-market, problem solver apps can conceal issues that demand human input. As an example, an app that allows those who have recently been dismissed from their job to avail themselves of their right to severance pay (which may be opaque due to complex legislation) is commendable. Yet without a proactive referral system, such an app would fail its users. The vulnerability that is concomitant with finding oneself jobless cannot be addressed by algorithms, no matter how much empathy such apps may be able to display.

At a larger scale, there is a risk that a focus on efficiency, for instance through
increasingly performant prediction tools, will make us blind to the fact that
increased automation is changing the very nature of the legal system. Given their
impressive accuracy, it is highly likely that lawyers will increasingly refer to prediction
tools to advise clients on whether their claim is worth pursuing. This may seem like a
welcome innovation, except that it will insidiously contribute to a growing degree of
conservatism, since cases with a low success prediction are unlikely to be heard in
court. This in turn makes organic change within case law less likely. Shifts in case
law often depend upon an accumulation of previous, unsuccessful cases that trigger
a growing number of dissenting voices (both within and outside the judiciary). There
may be ways of developing tools that predict not only the chances of success in
court but also the likelihood that a particular case will eventually contribute to some
organic evolution within case law; however, the commercial incentives for both the
development and use of such tools will be low.
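
To make the feedback loop described above concrete, here is a minimal, hypothetical sketch (the data, feature names, and threshold are invented for illustration, not drawn from any real tool) of how a claim-success predictor used as an advice filter could systematically screen out novel claims, the very cases that might otherwise have shifted case law.

```python
# Hypothetical sketch of the conservatism feedback loop: a predictor trained
# only on past outcomes advises against pursuing claims that depart from
# precedent, so those claims never reach court and never change the data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented historical cases: two illustrative features
# (strength of evidence, closeness to existing precedent) and an outcome.
X = rng.normal(size=(500, 2))
y = (X @ np.array([1.5, 2.0]) + rng.normal(scale=1.0, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# A new claim with reasonable evidence but little supporting precedent.
novel_claim = np.array([[0.8, -1.5]])
p_success = model.predict_proba(novel_claim)[0, 1]

# Advice rule: only pursue claims above an assumed success threshold.
THRESHOLD = 0.5
advice = "pursue" if p_success >= THRESHOLD else "do not pursue"
print(f"Predicted chance of success: {p_success:.2f} -> {advice}")
# Because the model sees only past outcomes, low-precedent claims are filtered
# out before trial, and the case law they might have changed never develops.
```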

How do you envisage AI impacting the legal profession and the role of lawyers
in the next five years?

There is little doubt that advances in computer systems will play an essential
role within the legal profession, and that this could transform it for the better.
Automated document management (and discovery) is already becoming
commonplace, saving lawyers many dull hours of work, but we are still a long way
from harnessing the full potential of the data now available. Everything hangs on
exactly how we harness that potential: whether we allow an instrumentalist logic to
take over or whether the aims that preside over such data mining reflect what we
want law for.
In terms of future roles for lawyers, again, there is no doubt that the nature of that
role will change. In many areas we probably won’t need quite as many lawyers.
Nobody will be surprised to hear that. What few people realise, however, is just how
urgently we need lawyers trained in data governance. I believe that in the next five
years we will see an increasing need for lawyers acting as intermediaries between
data subjects and data controllers (both in GDPR countries and elsewhere). Law
schools need to get their act together and urgently train future lawyers in data
governance. This would ideally be within the context of inter-disciplinary degrees. We
do need lawyers with some minimal training in statistics and computer science.

Tell us about the paper that you published earlier this year ‘Computer Systems
Fit for the Legal Profession?’, and what inspired you to undertake this
research?

I was struck by how easy it is to adopt a bluntly consequentialist outlook, according
to which automation within the professions is both legitimate and desirable provided
it improves the quality, accountability and accessibility of professional services. That
this line of argument is so successful is partly our fault as legal theorists and
philosophers. I think we’ve failed to explain in a credible way what grounds
the particular responsibility of professionals, and what distinguishes it from that of
expert service providers in general. I tried to remedy that in an earlier paper I
published, ‘A Vulnerability-based Account of Professional Responsibility’,
explaining how, in many lay-professional encounters, it is our very commitment to
moral equality that is at stake.

I believe this turns the case for wholesale automation on its head. One can no longer
assume that, as a rule, wholesale automation is legitimate, provided it improves the
quality and accessibility of legal services. The assumption, instead, is firmly in favour
of designing systems that better enable legal professionals to live up to their specific
responsibility.

SUPREME COURT TO USE ARTIFICIAL INTELLIGENCE FOR BETTER JUDICIAL SYSTEM

by Smriti Srivastava, November 27, 2019


SA Bobde, the Chief Justice of India, said that the Supreme Court has proposed to
introduce a system of AI (artificial intelligence) that would help in the better
administration of justice delivery. However, he made clear that people should not
form the impression that AI would ever replace the judges.

The CJI was addressing the Constitution Day function organized by the Supreme
Court Bar Association (SCBA). He said – “We propose to introduce, if possible, a
system of artificial intelligence. There are many things which we need to look at
before we introduce ourselves. We do not want to give the impression that this is
ever going to substitute the judges.”

According to the CJI, machines cannot replace humans, and specifically the
knowledge and wisdom of judges. The deployment of the AI system will help reduce
pendency and expedite judicial adjudication.

The President of India, Ram Nath Kovind, was also present at the event. He launched
the Supreme Court mobile application. Justice Bobde, while talking about the
application, asserted that an artificial intelligence-fuelled law translation system will
facilitate quality translation and will further help improve the efficiency of the
Indian judicial system.
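
As a rough illustration only, and not the court's actual system, a neural machine translation pipeline for judgment text could be assembled from openly available components. The sketch below assumes the publicly available Helsinki-NLP/opus-mt-en-hi English-to-Hindi model used through the Hugging Face transformers library; both the model choice and the example sentence are assumptions for illustration.

```python
# Hypothetical sketch: translating a judgment excerpt into Hindi with an
# openly available neural MT model. Illustrative only; this is not the
# Supreme Court's own translation system.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-hi"  # assumed open English->Hindi model
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

excerpt = "The appeal is allowed and the order of the High Court is set aside."
batch = tokenizer([excerpt], return_tensors="pt", padding=True)
output_ids = model.generate(**batch)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```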

Reportedly, the newly released app will translate Supreme Court judgments into more
than nine regional languages. The CJI said that the Constitution is based on plurality
and popularity, adding: “Our Constitution is based on plurality, popularity. Through
our Constitution, a billion voices speak and articulate many things.”

Talking about the Constitution, he also recalled that the Indian Constitution, which is
popularly known as the ‘People’s Constitution’, was adopted on 26 November 1949.
According to him, the ‘sacrosanct document’ represents the high attributes of
harmony in the country’s history.

According to the CJI, the role of the Constitution is not limited to merely establishing
institutions for the governance of India; rather, it is truly transformative in character.
The adoption of the Constitution marked India’s transition from the culture of
authority of a colonial regime to the culture of justification of a democratic polity.

Justice SA Bobde also stated that the Constitution of India reflects a fine balance,
blending diversity with unity, plurality with stability, pragmatism with idealism,
formality with adaptability and liberty with security. It combines all of these within a
framework carefully designed to safeguard basic freedoms.

He said: “Over the years, the Indian judiciary, led by the Supreme Court of
India, has facilitated a social revolution, infusing it with renewed vigour and vitality at
a crucial juncture of our nation’s history, and I formed the same impression when I
attended an international conference. Somehow, our Constitution and our judiciary
are viewed very differently these days….”

The Union Law Minister was also present at the event, along with several other
sitting apex court judges. Attorney General KK Venugopal and SCBA president
Rakesh Khanna also attended the function.

The Union Minister said that India had commenced its “start-up movement” in 2015
and has today become the third-largest country in terms of start-ups. According to
him, more than 24,000 start-ups have come up since 2015, of which 10,000 are
IT-based.
