Learn how ThoughtRiver’s AI is trained to manage risk in a human way.
What does accuracy mean to lawyers?
We all know that accuracy is essential for legal professionals. Assessing risk and determining what is and is not acceptable to the business both require precision. Misunderstanding or misinterpreting obligations can be disastrous to the reputation of both the business and the lawyer.
Accuracy also necessitates consistency. The whole purpose of a risk playbook is to allow the organisation to operate within strict parameters and thus retain as much control as possible over third-party obligations. If the playbook is not applied consistently to all contracts then it serves little purpose.
That is not to say that what the business deems acceptable will not change over time. As the business evolves, so will its assessment of risk; and as new suppliers and customers are brought in, its standard terms of operation may change to reflect new commercial agreements.
However, these changes will be part of a conscious evolution, carefully monitored and controlled by senior lawyers and management – and not left to the whims of individuals.
Accuracy and the human factor
Arguably the biggest risk when it comes to accuracy is the human factor. For example:
- Two lawyers have different interpretations – of the same wording, of the playbook, or even how to de-risk a particular issue.
- Different forms of legalese in a contract lead lawyers to different understandings of the terms actually being defined.
- One lawyer is more junior than a colleague and has less experience in a particular subject matter.
- The same lawyer interprets the same wording in different ways on different days. Perhaps they are overworked and pushed for time, or tired or stressed, meaning that something that would usually be spotted is missed on this occasion.
While the business sets a high bar for lawyers, we have to remember that they are only human and therefore subject to the same inconsistencies as the rest of us mere mortals.
Accuracy and artificial intelligence
Using artificial intelligence (AI) to automate part of the contracting process is one way to address and improve accuracy. AI is particularly useful when conducting an initial contract review, guiding human lawyers to the areas of a contract that need their involvement while ensuring that they do not waste time reading parts of the contract that pose no risk. Many users of ThoughtRiver, for example, describe the platform as a second pair of eyes.
This time-saving efficiency cannot be overstated. Until recently, there was no way for a lawyer to understand the risks contained within a contract without reading the entire document. With AI entrusted with the first pass, this time-consuming task is removed, or at least greatly reduced.
So how does the AI in ThoughtRiver’s contract acceleration platform ensure consistency and accuracy? Let’s take a deeper dive.
Accuracy can be measured as a combination of precision and recall.
- Precision: of all the risks the platform has identified and flagged as needing further investigation, how many of them are true (ie, pose actual risk in their current state)?
- Recall: of all the risks within the contract, how many has the platform identified? In other words, how many have been captured, and how many have been overlooked?
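To make these two measures concrete, here is a minimal sketch of how precision and recall are calculated for a batch of reviewed clauses. The clause IDs and function are illustrative only, not ThoughtRiver’s code or API.

```python
# Minimal sketch: precision and recall over flagged contract clauses.
# All names and data here are hypothetical, not ThoughtRiver's implementation.

def precision_recall(flagged: set[str], actual_risks: set[str]) -> tuple[float, float]:
    """flagged      -- clauses the platform marked for review
    actual_risks -- clauses a human review confirms as genuinely risky"""
    true_positives = len(flagged & actual_risks)
    precision = true_positives / len(flagged) if flagged else 1.0
    recall = true_positives / len(actual_risks) if actual_risks else 1.0
    return precision, recall

# The platform flags four clauses; three are real risks,
# and one real risk (clause_9) is missed entirely.
flagged = {"clause_2", "clause_5", "clause_7", "clause_8"}
actual = {"clause_2", "clause_5", "clause_7", "clause_9"}

p, r = precision_recall(flagged, actual)
print(f"precision={p:.2f}  recall={r:.2f}")  # precision=0.75  recall=0.75
```

Here the one spurious flag (clause_8) lowers precision, while the one missed risk (clause_9) lowers recall: the same counts viewed from two different directions.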
The crucial thing to consider when discussing the merits of AI is that it is not infallible – and we will never claim that it is. Anyone who does is selling snake oil. AI delivers predictions based on how it has been trained and how it interprets the world around it. In many ways this is no different to the factors that human lawyers weigh when coming to their own conclusions about the best course of action to take.
In ThoughtRiver’s case, this acknowledgement of imperfection – and the need for caution when reviewing a contract – manifests in a deliberate design choice: err on the side of delivering false positives rather than false negatives.
A false positive is when ThoughtRiver’s issue list – a digital list detailing all issues that need to be resolved before the contract is acceptable – advises a human lawyer that there may be a risk worthy of attention, but upon closer inspection the lawyer decides that there is in fact no risk to the business and simply dismisses the issue from the list. It is similar to predictive text guessing what you are typing: you ignore or dismiss a suggestion when it is wrong, but you do not turn the function off, because it is correct more often than not and thus saves you significant time.
Conversely, a false negative is when the lawyer is told erroneously that there are no risks to be found within the contract when, in fact, there are.
It is easy to see how false negatives are more worrisome. You cannot manage what you do not know, and you cannot mitigate risks that you do not know exist.
Attentive AI that replicates your human lawyers
Just like you and your legal team, we train our AI to err on the side of caution and to check, check, and check again. That is why, when we tune our AI, we prefer the system to flag something for the reviewer’s attention when it is not sure, rather than stay silent. Not only is this how your lawyers manage risk; it is also how your business wants you to manage risk. It is always better to be safe than sorry.
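In practice, this kind of cautious bias is often implemented as a deliberately low decision threshold on a model’s risk score. The sketch below is a simplified illustration of that general idea; the scores, threshold value, and clause names are assumptions, not ThoughtRiver’s actual tuning.

```python
# Sketch of a cautious flagging policy: prefer false positives over
# false negatives by setting a deliberately low risk threshold.
# Scores, threshold, and clauses are illustrative, not ThoughtRiver's.

RISK_THRESHOLD = 0.3  # low bar: anything the model is unsure about gets flagged

def should_flag(risk_score: float) -> bool:
    """Flag a clause for human review unless the model is confident
    it is safe (risk_score comfortably below the threshold)."""
    return risk_score >= RISK_THRESHOLD

clauses = {"liability cap": 0.85, "governing law": 0.10, "auto-renewal": 0.35}
for clause, score in clauses.items():
    print(f"{clause}: {'flag for review' if should_flag(score) else 'pass'}")
```

Lowering the threshold trades precision for recall: more clauses are flagged unnecessarily, but far fewer genuine risks slip through unflagged.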
And just like your lawyers, ThoughtRiver learns with experience. Our AI analyses each new contract and compares it against a digitised risk playbook as well as your existing signed contracts.
This means that where wording within a contract is technically classed as risky according to the playbook, the platform’s AI can also see that it has regularly been approved by the company’s lawyers. It will therefore still flag the wording as posing a potential risk, but will also surface this history so that the lawyer can make a quick and informed decision themselves. ThoughtRiver provides the right information at the right time, and delivers it with context.
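One way to picture a flag that carries its own context is sketched below. The data structure and wording are purely hypothetical – the platform’s internals are not public – but they show the principle of pairing a playbook breach with its approval history.

```python
# Purely illustrative: a risk flag that carries context from past decisions.
# The dataclass and field names are hypothetical, not ThoughtRiver's API.

from dataclasses import dataclass

@dataclass
class Flag:
    clause: str
    playbook_rule: str   # the playbook rule the wording breaches
    past_approvals: int  # how often lawyers previously accepted this wording
    note: str            # context surfaced alongside the flag

def flag_with_context(clause: str, rule: str, approvals: int) -> Flag:
    if approvals:
        note = (f"Technically risky under '{rule}', but approved "
                f"{approvals} time(s) before. Review and decide.")
    else:
        note = f"Breaches '{rule}'; no precedent of approval."
    return Flag(clause, rule, approvals, note)

print(flag_with_context("limitation of liability", "Liability must be capped", 12).note)
```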
So why use AI if it operates in the same way as human lawyers? Simply put, AI can undoubtedly work faster, with greater consistency and accuracy, for longer than any human. When you couple this rapid first-pass review, which creates a digital issues list, with the advanced decision-making capabilities of a human lawyer, you get a beautiful synergy: accelerated contracting. The legal team can complete necessary but tedious contract reviews at speed, improving deal velocity and ensuring that contracting momentum is not lost.
The principles it applies to the work it is undertaking are exactly the same as those a human would apply, and it completes that work in a fraction of the time.
This means that in-house lawyers have more time and freedom to focus efforts on higher-value tasks. Crucially, they always retain full control over risk management:
- A bias towards false positives ensures human lawyers always have the final say.
- The decisions taken by AI are determined by how it is trained, which means it will develop a bespoke understanding of risk according to your specific needs.
- AI constantly learns with each new contract, so is influenced by how human lawyers manage risk.
This also means that first-pass contract reviews can be delegated to non-lawyers, such as colleagues in sales or procurement. Theoretically anyone can initiate the review, safe in the knowledge that AI – trained to understand risk specific to your business – will act as a reassuring safety net.