Learn how ThoughtRiver’s AI is deliberately configured to err on the side of caution, and how that keeps the lawyer in control at all times.
Proceed with caution
While it is foolish to apply a broad-brush stereotype to GCs or senior in-house legal professionals, we can say with a fair degree of certainty that they are typically cautious by nature. Risk aversion is a trait common among lawyers, defining not just how they process their day-to-day work, but also how they approach new challenges.
Legal technology is a case in point. In contrast to other industries, which leapt at the opportunity to embrace the benefits of new technology and push the digital transformation agenda, the legal industry has been far more circumspect in its uptake.
That caution has slowly been eroded, and legal tech is now far more accepted among legal professionals. By 2024, Gartner estimates that legal departments will have automated 50% of legal work related to major corporate transactions.
The issue now is not whether legal technology will deliver ROI – GCs can see exactly how the right solutions will add value – but how to get the rest of the business on board. This is because it is rarely legal professionals who get to approve such investments.
Legal tech sits outside a typical in-house budget
According to Gartner research, legal technology spending increased from 2.6% of in-house budgets in 2017 to 3.9% in 2020. By 2025, Gartner predicts that legal technology spending will increase to approximately 12% of in-house budgets, a threefold increase from 2020 levels.
However, this is still a small percentage of what is already a tight budget. In-house legal teams typically operate on a shoestring in the first place, so carving out the investment needed for even the most modest legal tech solution is often beyond their means.
In other words, getting the financial backing needed for legal tech will require the GC to go cap in hand to the FD or CFO. It will also require presenting a watertight pitch to the board to illustrate exactly why legal tech should be seen as essential, and how it will deliver demonstrable ROI.
This is where a lawyer’s natural caution will resurface. Requesting funding means the GC staking their reputation within the business. Get it right and they become the hero; but if something goes wrong, or there is a delay or an unexpected turn, the GC becomes the scapegoat.
The GC is forced out of their comfort zone, which is possibly why so many simply won’t bother. The rewards of delivering a business-changing solution are not worth the risk of blowing a hole in the FD’s budget. The GC takes a safety-first approach and keeps quiet.
AI that mirrors your caution
ThoughtRiver was created by lawyers, which means we understand how you work. What makes you tick. We get it.
That is why the artificial intelligence (AI) embedded in the ThoughtRiver platform is specifically designed to think the same way you do. We train it to be cautious, applying the same safety-first, risk-averse approach that you do.
This is where it is important to be honest – and realistic – about what AI can and cannot do. AI is not infallible, and we will never claim it is. It can certainly work faster, and with greater consistency and accuracy for longer, than any human; but the principles applied to the work it is undertaking are exactly the same as a human would apply.
AI delivers predictions based on its understanding of the world around it and the examples it has been trained on. But a prediction is all it is, and as such it will never be correct 100% of the time – just as you are not right 100% of the time. This is an important point to bear in mind as you start to look at embedding AI into your workflows and begin reviewing the different AI solutions on the market. We see many potential users holding the AI to a much higher bar than they would hold a human lawyer; they seem to expect perfection.
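To make this concrete, here is a minimal sketch in Python of what “a prediction, not a certainty” means in practice. The function name, issue label and score are invented for illustration; this is not ThoughtRiver’s actual API.

```python
# A minimal, hypothetical sketch -- not ThoughtRiver's actual API.
# The point: a model returns a prediction plus a confidence score,
# never a guarantee.

def classify_clause(clause_text: str) -> tuple[str, float]:
    """Pretend clause classifier: returns (predicted_issue, confidence)."""
    # A real model would score the text; we hard-code a borderline case.
    return ("liability_cap_unacceptable", 0.74)

issue, confidence = classify_clause("Liability is capped at 12 months' fees.")
print(f"Prediction: {issue} (confidence {confidence:.0%})")
# -> Prediction: liability_cap_unacceptable (confidence 74%)
# A 74%-confident prediction will, on average, be wrong roughly 1 time in 4 --
# which is exactly why the final call stays with the lawyer.
```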
Let’s take a look at why that matters.
Why false positives are a positive sign
For the sake of clarity, a false positive in this context can be defined as an alert triggered by the AI to show a potential problem – in our case, an unacceptable clause or wording that needs to be reviewed by the lawyer – when in fact no such problem exists. It is a false alarm.
The other side of this coin is a false negative. This is when no alert is raised because no cause for concern has been detected, yet a potential problem does exist. It is telling you that everything is in order when it is not. This is far more concerning.
Our Contract Acceleration Platform is deliberately configured to deliver false positives over false negatives when it is unsure of its prediction. Just as you and your legal team do, we train our AI to err on the side of caution and to check, check, and check again.
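As a rough illustration of what that configuration means in practice, consider the decision rule below. It is a simplified sketch with an invented threshold, not our production logic: the bar for raising an alert is set deliberately low, so borderline clauses are flagged for the lawyer rather than silently waved through.

```python
# Simplified sketch of a cautious decision rule; the 0.3 threshold is
# invented for illustration. Because it sits well below 0.5, uncertain
# predictions land on the "flag it" side: more false positives,
# fewer false negatives.

FLAG_THRESHOLD = 0.3

def should_flag(risk_score: float) -> bool:
    """Raise an alert whenever the model's risk score clears a low bar."""
    return risk_score >= FLAG_THRESHOLD

for score in (0.9, 0.4, 0.1):
    action = "flag for lawyer review" if should_flag(score) else "pass"
    print(f"risk score {score:.1f} -> {action}")
# risk score 0.9 -> flag for lawyer review
# risk score 0.4 -> flag for lawyer review  (borderline, so we check)
# risk score 0.1 -> pass
```

The cost of setting the bar this low is a handful of extra checks; the benefit is that a genuine problem is far less likely to slip through unflagged.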
Let’s be clear: we do this deliberately, and we see it as a positive feature. The blunt truth is that AI, just like a human, will inevitably make mistakes because it deals in predictions; and when it does, a false positive is far better than a false negative. A false positive keeps you, the lawyer, in complete control at all times. It is raised for you to double-check and confirm that no further action is necessary.
Confidence and control
This configuration, which generates a small number of false positives, helps our users gain confidence in the automated contract review platform. A false positive is not “AI getting it wrong.” It is the AI being cautious, exactly as we have trained it to be.
Think about how you act when there is a small but undeniable element of doubt. You pause. You consider the issue for a moment. Perhaps you refer to previous contracts or instances where something similar has occurred. Perhaps you ask a colleague for their opinion.
This does not mean you are “wrong” or have made a mistake. It is how sensible humans act. We check and check again; and not just on the 50/50 calls, but on those that are 70/30 or even 80/20.
A false positive is ThoughtRiver’s way of asking you to check. The AI is deferring to you, giving you the final say.
We strongly believe it is wrong to consider this wasted time, or a sign that the technology is not working optimally. Remember the big picture here, and what AI is designed to do. Spending a few moments dismissing a handful of false positives is far better than spending hours or days reading the whole document. And it is certainly better than discovering a false negative weeks, months or even years down the line.
Combining AI and human intelligence for optimal results
It is easy to exaggerate the capabilities of AI. And it is easy to hold AI to a much higher bar than we hold mere humans.
But in both cases that is a mistake. We are wise to look at AI from a philosophical viewpoint as well as a technical one. Robots are not here to replace human lawyers; they offer extraordinary capabilities that allow humans to become better lawyers. Ultimately, it is the combination of artificial and human intelligence that is key to success for legal teams.
AI can crunch huge volumes of data at breakneck speed and be trained to make simple decisions so that lawyers do not have to spend their time on low-value tasks. The lawyer, however, is always in full control, making the final decisions and determining how the contracting process should proceed.
It is clear that forward-thinking in-house legal teams need AI to serve the business at the speed the business demands, but let us not forget that AI needs human lawyers to validate its work too.