🏆 We are proud to be the only Legal AI vendor that shares its accuracy statistics.
Our advanced AI technology and vast legal knowledge base have been trained on 2,000+ (and counting!) Lexible Fusion™ questions, created and continuously developed by our in-house lawyers and Professional Services team.
We test it multiple times a week against 750,000+ data points to ensure consistently precise, reliable contract analysis and risk assessment.
We are the only Legal AI vendor that publishes its accuracy statistics. We do this because we believe in transparency and quality when it comes to contract review, leaving no room for risky errors.
We publish three metrics: Accuracy (F1 score), Precision, and Recall. These statistics are updated once a week.
Accuracy (the F1 score) is a measure that combines precision and recall: it is their harmonic mean. There is a trade-off between precision and recall, so F1 measures how effectively our models make that trade-off.
One important feature of the F1 score is that the result is zero if either component (precision or recall) falls to zero; the harmonic mean thereby penalises extremely low values of either component.
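Written out, the standard textbook definition of the F1 score is:

```latex
F_1 = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}
```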
Precision is the proportion of true positives among all of the positive predictions the model makes. It answers the question:
“Out of all the positive predictions we made, how many were true?”
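In terms of true positives (TP) and false positives (FP), the standard formula is:

```latex
\text{precision} = \frac{TP}{TP + FP}
```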
Recall focuses on how good the model is at finding all the positives. Recall is also called the true positive rate, and it answers the question:
“Out of all the data points that should be predicted as true, how many did we correctly predict as true?”
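Using false negatives (FN) to count the positives the model missed, the standard formula is:

```latex
\text{recall} = \frac{TP}{TP + FN}
```

To show how the three published metrics fit together, here is a minimal Python sketch. It is illustrative only, not ThoughtRiver's evaluation pipeline, and the counts in the example are made up:

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Compute precision, recall and F1 from raw prediction counts.

    tp: true positives  (correct positive predictions)
    fp: false positives (incorrect positive predictions)
    fn: false negatives (positives the model missed)
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # F1 is the harmonic mean of precision and recall: it is zero
    # whenever either component is zero.
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Hypothetical counts: 90 correct positive predictions, 10 false alarms, 5 misses.
p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=5)
print(f"precision={p:.3f} recall={r:.3f} F1={f1:.3f}")
# precision=0.900 recall=0.947 F1=0.923
```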
"ThoughtRiver helps shorten review time without compromising high-quality standards. It also reduces operating expenses and gives general counsel offices greater ability to accurately budget legal costs."
Chanley Howell
Partner at Foley & Lardner LLP
Take the first step! 👇