Man-Made Law, ML-Powered Lawyers and AI Judges


By Iffy Kukkoo

03 Dec, 2017

We’ve already written quite a few times about various aspects of computational law. We’re taking the discussion a step further: what about AI-powered programs and human law? Is AI capable of changing the way we think about legal procedures? Would it be able to make human law more just? If so – how? If not – should we abandon the idea altogether?

 

The Evolution of the Judge: From Humans to Computer Programs

Let’s think for a moment about what it takes for someone to be a good judge. First of all, one needs extensive knowledge of the existing law in a particular area, which means dedicating many years of one’s life to studying it. Studying the law is, of course, only part of the equation; the theoretical part, to be exact. Real experience comes with practice.

Both the theoretical and the practical part of a judge’s background have their foundation in legal history. Meaning: both are based on careful analysis of existing legal practice against the background of thousands and thousands of relevant cases. It’s a complex decision-making process which involves finding the common and recurring patterns in similar-looking and/or applicable cases. In a nutshell, that’s how judges train themselves to make the right choice.

For humans, recognizing patterns is a mostly unconscious process which takes into account many cues nobody is consciously aware of. We can learn something from almost anything, and we are able to apply the acquired knowledge to solving similar problems in the future. That’s why we don’t really have a problem differentiating cats from dogs even after being shown just a few examples. Computers do not work this way: whatever they are capable of learning is strictly defined by an algorithm. And this is only one of their limitations.

Because of this, a judge absolutely devoid of the so-called “human factor” would sound to many like a recipe for disaster. More often than not, the phrase has been used in a negative context: to say that some process lacked a human factor usually means to say that it lacked reliability. Is it really so? And if it is, can we expect something similar from a computer program?

 

Enter AI: Patterns and Law

All current limitations aside – the answer is probably yes.

Modern AI-powered programs are already capable of finding and understanding patterns as well as (and in some cases better than) humans. Super-smart computers have already mastered our games mostly by themselves. Or, to return to our problem with the dogs and cats from above: even though they need many more example sets, modern AI programs have learned to learn in a way similar to how a child does (just read the quote by Sebastian Thrun here). It’s a very human-like way of progress and self-development, one which may soon result in a highly objective program with a personality of its own.

We get it: in your world, “personality” and “program” are two words which contradict each other. Strictly speaking, that’s true; but, soon enough, it might not be.

You see, modern AI-based programs were practically built to recognize patterns. Until recently, they were able to recognize only what we told them to, only much, much faster than us. Nowadays, they are capable of doing something more: finding patterns all by themselves, even ones we are unaware of.
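As a toy illustration of this kind of unsupervised pattern-finding, consider the minimal sketch below (Python with scikit-learn; the data is synthetic and purely illustrative). The program is never told that there are two kinds of points, yet it separates them on its own:

```python
# Toy sketch of unsupervised pattern-finding: no labels are given,
# yet the algorithm discovers the groupings on its own.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two hidden "types" of points that we never label:
points = np.vstack([
    rng.normal(0, 1, size=(50, 2)),   # type A
    rng.normal(5, 1, size=(50, 2)),   # type B
])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
print(clusters)  # the program separates the two groups by itself
```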

Just think about what this may mean for law. AI is able to learn from the decisions made by thousands and thousands of different judges in similar cases. It is also able to analyse the evidence, to find connections between the facts of cases and the final decisions, and to build a whole picture of a new case.
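In machine-learning terms, this is a supervised-learning problem: past cases become feature vectors, past verdicts become labels, and a model is trained to map one to the other. Here is a minimal, hypothetical sketch of the idea; every feature, number and label in it is invented for illustration:

```python
# Hypothetical sketch: learning a verdict from features of past cases.
# Every feature, number and label below is invented for illustration.
from sklearn.ensemble import RandomForestClassifier

# Each past case: [severity of offence, prior convictions, evidence strength]
past_cases = [
    [2, 0, 0.3], [7, 3, 0.9], [5, 1, 0.6],
    [1, 0, 0.2], [8, 4, 0.8], [6, 2, 0.7],
]
past_verdicts = [0, 1, 1, 0, 1, 1]  # 0 = acquitted, 1 = convicted

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(past_cases, past_verdicts)

new_case = [[4, 1, 0.5]]
print(model.predict(new_case))        # the predicted verdict
print(model.predict_proba(new_case))  # and an estimated probability
```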

And, believe it or not, grasping the whole picture may be handled even more effectively by an AI program than by the human mind. This is where that coveted human factor works against objectivity: a high chance of racial or gender bias has existed for centuries and, though in a lesser form, exists today as well. Judges don’t even need to be aware of it: it’s deeply rooted in their cultural background. Computer programs don’t have a cultural background: they depend solely on the data they are fed.

Speed is another obvious advantage computer programs have over human judges. They take a lot less time to learn complex things and to apply this knowledge to new problems. Sure, you’ll need a lot of data to train and test a program, but this will still take much less time than acquiring theoretical knowledge at a university and practicing it at a law firm or in a court of law. Many of the AI programs developed today are able to take many more factors into consideration, and to predict risks and outcomes with higher accuracy, than any living human.

This all sounds quite optimistic. So much so that you may already be wondering why lawyers aren’t using AI-powered programs everywhere. Believe it or not, the main reason for this is the absence of pre-prepared data. In order to create objective software, it’s necessary to have large datasets before the software is put to use. Otherwise, you’ll end up with a flawed program and, consequently, flawed decisions.
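The effect of dataset size is easy to demonstrate, at least on synthetic data (a hedged sketch, not real legal data): the very same model becomes dramatically more reliable as it sees more training examples.

```python
# Toy demonstration: the same model, trained on little vs. plenty of data.
# Synthetic data stands in for (hypothetical) prepared legal datasets.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1000, random_state=0)

for n in (20, 200, 2000):
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:4d} examples -> test accuracy {acc:.2f}")
```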


 

AI-Powered Programs in Law

But, if you think that lawyers aren’t up-to-date with advancements in AI – think again.

Namely, despite current uncertainties, there’s an obvious interest among lawyers in implementing AI in their practice. Numbers don’t lie: AI-based legal research firm Casetext, which is behind CARA (Case Analysis Research Assistant), recently closed a successful 12-million-dollar funding round.

David Eiseman, partner at Quinn Emanuel, calls CARA “an invaluable, innovative research tool”:

 

With CARA, we can upload a brief and within seconds receive additional case law suggestions and relevant information on how cases have been used in the past, all in a user-friendly interface. This feature is unique to CARA, and a major step forward in how legal research is done.

David Eiseman

CARA may be great, but AI researchers have already tried tackling much more challenging tasks, such as assisting judges in reaching verdicts. For example, computer scientists at University College London developed an “AI judge” that managed to reach verdicts with a 79% success rate. Meaning: in 461 of the 584 cases examined, it arrived at the same verdict as the human judges had in reality. And if banks can use AI programs to predict whether giving a loan to a client is risky or not, why shouldn’t lawyers and judges use AI programs to make the same kind of predictions about defendants?
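The “success rate” here is nothing more exotic than agreement with the human court, which is easy to verify:

```python
# The reported "success rate" is simply the share of cases in which the
# program's verdict matched the human court's actual verdict:
matches, total = 461, 584
print(f"{matches / total:.1%}")  # 78.9%, reported as 79%
```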

As is usually the case, there’s one problem. We’ve already hinted at it above, and the more careful reader may have already guessed it: the lack of transparency or, more generally, the “black box problem”. Due to the human-like way AI programs have learned to learn, even their creators are incapable of explaining how exactly they reached a certain decision. AI programs find patterns all by themselves, and we don’t really know how they do this. Consequently, we can’t tell when they are wrong.
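To make the problem concrete: in the hedged sketch below, a trained model returns a verdict and even a confidence score, but no reasoning. Post-hoc tools (here, scikit-learn’s permutation importance, used purely as one example) can tell us which inputs mattered, but not why:

```python
# Hedged sketch of the "black box problem": the model answers,
# but cannot explain itself. Data is synthetic and illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

print(model.predict(X[:1]))        # a "verdict"...
print(model.predict_proba(X[:1]))  # ...with a confidence, but no rationale

# Post-hoc inspection reveals *which* inputs mattered, not *why*:
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
print(result.importances_mean)
```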

Does this mean that we’ll just have to learn to trust them? And are we willing to take that risk?

 

Conclusions

AI is an extremely promising tool which, in time, may help us create a more just society. It finds common and recurring patterns among example sets in a manner not unlike the one humans use, and it does this much, much faster. Moreover, since it is devoid of “the human factor”, its decisions are free from the burden of personal and cultural bias.

Gathering and preparing useful datasets is currently the main problem. Some companies have already started doing this, and their legal AI tools are already helping many lawyers around the world. What’s more, researchers have created “AI judges” which reach verdicts with human-like precision.

The other major problem at the moment is the lack of transparency. Since AI programs learn by themselves, even their creators don’t know what exactly they’ve learnt or how they’ve reached their conclusions. This is a more general AI problem called “the black box problem”, and it is one that scientists will have to solve before we gather enough courage to replace our human judges with AI judges and our human law with a more objective AI law.

Posted By: Iffy Kukkoo
Resident Editor-In-Chief

Iffy is our exclusive resident technology newshound editor, relentlessly exploring the beauties of the world from a 4th dimensional viewpoint. When not crafting, editing or publishing our IT content, she spends most of her time helping people understand life and its basic principles. You know, the little things around you, that you've failed to grasp each day.
