By Iffy Kukkoo
03 Dec, 2017
We’ve already written quite a few times about various aspects of computational law. We’re taking the discussion a step further: what about AI-powered programs and human law? Is AI capable of changing the way we think about legal procedures? Would it be able to make human law more just? If so – how? If not – should we abandon the idea altogether?
Let’s think for a moment about what it takes for someone to be a good judge. First of all, one needs extensive knowledge of the existing law in a particular area, which means dedicating many years of one’s life to studying it. Studying the law is, of course, only part of the equation; the theoretical part, to be exact. Real experience comes with practice.
Both the theoretical and the practical parts of a judge’s background have their foundation in legal history. Meaning: both are based on careful analysis of existing legal practice against the background of thousands and thousands of relevant cases. It’s a complex decision-making process which involves finding the common and recurring patterns across similar and applicable cases. In a nutshell, that’s how judges train themselves to make the right choice.
For humans, recognizing patterns is a mostly unconscious process which takes into account many things nobody is consciously aware of. We can learn something from almost anything, and we are able to apply the acquired knowledge to similar problems in the future. That’s why we don’t really have a problem telling cats from dogs even after being shown just a few examples. Computers do not work this way: whatever they are capable of learning is strictly defined by an algorithm. And this is only one of their limitations.
Because of this, a judge absolutely devoid of the so-called “human factor” would sound to many like a recipe for disaster. More often than not, the phrase has been used in a negative context: to say that some process lacked a human factor usually means to say that it lacked reliability. Is it really so? And if it is, can we expect something similar from a computer program?
All current limitations aside – the answer is probably yes.
Because modern AI-powered programs are already capable of finding and understanding patterns as well as (and in some cases better than!) humans. Super smart computers have already mastered our games mostly by themselves. Or, to return to the cats-and-dogs problem from above: even though they need many more examples, modern AI programs have learned to learn in a way similar to a child (just read the quote by Sebastian Thrun here)! It’s a very human-like way of progress and self-development which may soon result in a highly objective program with a personality of its own.
We get it: in your world, “personality” and “program” are two words which contradict each other. Strictly speaking, that’s true; but, soon enough, it might not be.
You see, modern AI-based programs were practically built to recognize patterns. Until recently, though, they could only recognize the patterns we told them to look for, just much, much faster than us. Nowadays, they are capable of something more: finding patterns all by themselves, even ones we are unaware of.
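To make that idea concrete, here is a minimal, purely illustrative sketch (in Python, using scikit-learn) of what “finding patterns by themselves” can look like: a clustering algorithm that is never told what the groups are, yet sorts hypothetical cases into them on its own. The feature names and numbers are invented for the example, not taken from any real legal dataset.

```python
# A minimal sketch of unsupervised pattern discovery: the algorithm is not told
# what the groups are; it finds clusters of similar cases on its own.
# The features and values below are purely illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical numeric features per case:
# [claim amount (thousands), days to verdict, prior offences]
cases = np.array([
    [5.0,    30, 0],
    [4.5,    25, 0],
    [120.0, 400, 2],
    [110.0, 380, 3],
    [6.0,    35, 1],
])

model = KMeans(n_clusters=2, random_state=0, n_init=10)
labels = model.fit_predict(cases)
print(labels)  # e.g. [0 0 1 1 0] -- small, quick cases vs. large, slow ones
```

Nobody told the program which cases belong together; the grouping emerged from the data alone.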
Just think about what this may mean for law. AI is able to learn from the decisions made by thousands and thousands of different judges in similar cases. It is also able to analyse the evidence, find connections between the cases and the final decisions, and build a whole picture of a new case.
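As a rough illustration of that learning-from-past-decisions idea, here is a toy sketch: a text classifier trained on a handful of invented case summaries and their (equally invented) outcomes, which then guesses the outcome of a new case. A real system would need vastly more data and far richer features, but the shape of the approach is the same.

```python
# A toy sketch of learning from past decisions: train on short case summaries
# labelled with their outcome, then predict the outcome of a new case.
# The summaries and labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

past_summaries = [
    "tenant withheld rent after landlord ignored repeated repair requests",
    "landlord failed to return deposit without documenting any damage",
    "tenant caused substantial damage and abandoned the property",
    "tenant missed rent for six months despite written warnings",
]
past_outcomes = ["tenant", "tenant", "landlord", "landlord"]  # who prevailed

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(past_summaries, past_outcomes)

new_case = ["landlord withheld the deposit although the flat was left undamaged"]
print(model.predict(new_case))        # predicted outcome, e.g. ['tenant']
print(model.predict_proba(new_case))  # the model's confidence in each outcome
```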
And, believe it or not, grasping the whole picture may be handled even more effectively by an AI program than by the human mind. This is where that coveted human factor works against objectivity: racial and gender bias has existed for centuries and, though in a lesser form, exists today as well. Judges don’t even need to be aware of it: it’s deeply rooted in their cultural background. Computer programs don’t have a cultural background: they depend solely on the data they are fed.
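That last point cuts both ways, of course: if the historical decisions a program learns from were biased, the program will happily reproduce the bias. Here is a minimal, entirely fabricated sketch of one basic way to check for that, comparing how often a trained model predicts a favourable ruling for each group.

```python
# A minimal sketch of the "data is destiny" point: a model trained on biased
# historical rulings tends to reproduce the bias. One basic audit is to compare
# favourable-prediction rates across groups. All values here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [evidence_strength (0-1), group (0 or 1)]; label: 1 = favourable ruling.
# In this fabricated history, group 1 received fewer favourable rulings
# even at the same evidence strength.
X = np.array([[0.9, 0], [0.8, 0], [0.7, 0], [0.9, 1], [0.8, 1], [0.7, 1]])
y = np.array([1, 1, 1, 1, 0, 0])

model = LogisticRegression().fit(X, y)
preds = model.predict(X)

for group in (0, 1):
    mask = X[:, 1] == group
    print(f"group {group}: favourable-prediction rate {preds[mask].mean():.2f}")
```

The point is not that programs are automatically fair, but that their bias lives in the data, where it can at least be measured and corrected.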
Speed is another obvious advantage computer programs have over human judges. They take far less time to learn complex things and to apply that knowledge to new problems. Sure, you’ll need a lot of data to train and test a program, but this will still take much less time than acquiring theoretical knowledge at a university and then practicing it at a law firm or in a court of law. Many of the AI programs developed today can take into consideration many more factors and predict risks and outcomes with higher accuracy than any living human.
This all sounds quite optimistic. So much so that, by this point, you may already be wondering why lawyers aren’t using AI-powered programs everywhere. Believe it or not, the only reason is the absence of well-prepared data. To create objective software, you need large, carefully prepared datasets before the software is ever used. Otherwise, you’ll end up with a flawed program and, consequently, flawed decisions.
But, if you think that lawyers aren’t up-to-date with advancements in AI – think again.
Namely, despite current uncertainties, there’s an obvious interest among lawyers in implementing AI in their practice. Numbers don’t lie. Casetext, the AI-based legal research firm behind CARA (Case Analysis Research Assistant), recently closed a $12 million funding round.
David Eiseman, partner at Quinn Emanuel, calls CARA “an invaluable, innovative research tool”: