How DISCO Review Helps Matters Large and Small


Just a few weeks ago, our team counseled a client about updating their workflows to use DISCO’s AI. We talked about statistics, reducing costs, improving accuracy, and all the other numbers that come up in conversations with lawyers about technology. The client then asked, “How big does my case have to be before using all this technology is worth it? Most of my cases aren’t that big. Shouldn’t I save DISCO for the big cases?”

Fortunately, we wrapped up a matter earlier this year that makes a perfect case study for using AI on even very small cases: Adams and Reese used our managed review services on a case with only 6,415 documents to review. What we saw directly answers the client’s question about DISCO AI.


We started with an estimation sample of the documents to understand how many were likely to be responsive. The sample came back with a 9.2% responsiveness rate, meaning that in a standard linear review, reviewers would expect roughly 9.2% of the documents in each of their batches to be responsive.

Instead, the estimation sample taught the AI what to look for. Our reviewers started the review with batches that had a 27.5% responsiveness rate. That meant that from the get-go, our reviewers were finding responsive documents roughly three times as often (27.5% versus 9.2%) as they would have with standard methods.

At first blush, it might seem like it doesn’t really matter when reviewers find the responsive documents; as long as you find them all, you’ve done your job.

However, the value in finding responsive documents earlier in the process comes from helping your review team understand what they’re looking for. If you’re able to teach your reviewers early in the case what is important and what is not, then your reviewers spend their time making accurate calls. This means less cleanup and fewer mistakes throughout the life of the case. 

Everybody performs better when they’re confident about what they’re doing. They’re faster and make better decisions. Bringing all of these responsive documents to the front of the review queue means that the reviewers are able to code documents more effectively as well. 

In this case study, our reviewers coded documents at a rate of 86 documents per hour, well ahead of the industry standard of 55. While it is true that they would have gone even faster on a larger case, it is impressive that our team coded that quickly without more time to get up to speed. The team’s confidence improved their accuracy too: fewer than 5% of the documents had their responsiveness calls overturned, and only 1% had their privilege calls changed.
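To put those rates in perspective, here is a back-of-the-envelope sketch of the review hours implied by the numbers above. This is illustrative arithmetic only, not DISCO’s methodology, and it assumes a constant coding rate across the whole document set:

```python
# Rough comparison of total review hours, using the figures reported
# in this case study. Illustrative only; assumes a constant coding
# rate over the full document set.

docs = 6_415          # documents in the Adams and Reese review
industry_rate = 55    # docs/hour, industry-standard review speed
observed_rate = 86    # docs/hour, observed in this case study

hours_standard = docs / industry_rate   # ~117 review hours
hours_observed = docs / observed_rate   # ~75 review hours
hours_saved = hours_standard - hours_observed

print(f"Standard pace: {hours_standard:.0f} hours")
print(f"Observed pace: {hours_observed:.0f} hours")
print(f"Saved: {hours_saved:.0f} hours (~{hours_saved / hours_standard:.0%})")
```

Roughly 42 fewer review hours on a 6,415-document case, about a 36% reduction, before counting the accuracy gains.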

While larger matters show more dramatic returns from AI managed review, even a small case can reap the benefits of empowering reviewers with technology to make smarter decisions: a faster, more accurate review that saves money.

Click here to read the full case study.
