Practical Advice on How to Run a Predictive Prioritized Review


In this post, we outline a simple review process using DISCO Artificial Intelligence (AI) that can be adapted to the needs of a particular case.

In summary, the process includes: conducting a macro review to determine which documents can be safely culled and/or mass tagged as nonresponsive, winnowing down the potential set of responsive documents; randomly sampling that set to obtain a prevalence estimate for particular tags and to support quality control (QC); performing the review using a combination of DISCO AI and more traditional keyword searching; and taking a final sample to confirm the results are acceptable. The following hypothetical case illustrates the process in more detail.

Predictive Prioritized Review

Assume a set of data has been collected from the client as potentially responsive to requests for production from an opposing party. After de-duping, de-NISTing, etc., the remaining data yields a corpus of 1.1 million documents. A cursory “macro” review (e.g., using document types, date ranges, or common email spam domains) identifies 100,000 documents as clearly non-responsive, and those are removed from the corpus. At this point, 1 million documents remain that are potentially responsive and need to be evaluated in more detail.

The next step is to randomly sample the documents to get a baseline of the number of documents that are responsive (that is, the “prevalence”). To achieve a 95% confidence level with a 2% margin of error, a random sample of 2,395 of the remaining 1 million documents would need to be reviewed (that number can be found using any one of many online sample calculators, or using DISCO’s software). After reviewing the 2,395-document random sample, the review manager would have a target range for the number of likely responsive documents in the 1 million document population. For example, if 17% of the sampled documents were found responsive, one could anticipate that between 15% and 19% (that is, between 150,000 and 190,000) of the underlying population would be responsive. In fact, one can be 95% certain that the true figure falls within that range, which is the “confidence” level provided by the sample.
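For those who want to check these figures, the short sketch below reproduces the sample-size math using the standard normal approximation with a finite population correction, then projects the observed rate onto the population. This is a generic calculation rather than DISCO’s implementation, and different online calculators can differ by a few documents depending on rounding and the z-value they use.

```python
from statistics import NormalDist

def sample_size(population: int, confidence: float = 0.95, margin: float = 0.02,
                prevalence: float = 0.5) -> float:
    """Sample size via the normal approximation with a finite population correction.

    prevalence=0.5 is the conservative (worst-case) assumption used by most calculators.
    """
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    n0 = z ** 2 * prevalence * (1 - prevalence) / margin ** 2   # infinite-population size
    return n0 / (1 + (n0 - 1) / population)                     # finite population correction

# First sample: 95% confidence, 2% margin of error, 1,000,000 documents
print(round(sample_size(1_000_000)))          # ~2,395, the figure used in the post

# Projecting the observed 17% responsiveness rate onto the population (+/- 2%)
low, high = (0.17 - 0.02) * 1_000_000, (0.17 + 0.02) * 1_000_000
print(f"{low:,.0f} to {high:,.0f} likely responsive documents")   # 150,000 to 190,000
```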

With those numbers in mind, one can begin the review, using machine learning or any other method. One suggestion is to start with “obvious” or “precise” keyword searches or search strings, such as the fairly unique name of the project, product, or contract at issue in the litigation, or a linear review of the most critical dates or custodians. Once the lawyer has exhausted the obvious methods, begin reviewing according to the DISCO AI predictions of responsiveness. DISCO AI provides a score for each document, so one can review the documents DISCO AI rates as most likely to be responsive, in ranked order according to the score. In a managed review, DISCO AI can be combined with DISCO’s just-in-time batching to ensure that each new batch checked out by a review team member contains the documents with the highest AI scores among those not yet reviewed.
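Conceptually, prioritized batching amounts to sorting the unreviewed documents by their predicted-responsiveness scores and carving each new batch off the top of that ranking. The sketch below is a generic illustration of that idea; the function, document IDs, and scores are hypothetical and are not DISCO’s API.

```python
from typing import Dict, List, Set

def next_batch(scores: Dict[str, float], reviewed: Set[str], batch_size: int = 50) -> List[str]:
    """Return the highest-scoring unreviewed documents as the next review batch.

    scores maps a document ID to a model's predicted probability of responsiveness;
    reviewed holds IDs that have already been checked out or reviewed.
    """
    unreviewed = (doc_id for doc_id in scores if doc_id not in reviewed)
    return sorted(unreviewed, key=lambda doc_id: scores[doc_id], reverse=True)[:batch_size]

# Hypothetical example: five documents, two already reviewed, batch size of two
scores = {"DOC-001": 0.92, "DOC-002": 0.15, "DOC-003": 0.78, "DOC-004": 0.55, "DOC-005": 0.31}
print(next_batch(scores, reviewed={"DOC-004", "DOC-005"}, batch_size=2))  # ['DOC-001', 'DOC-003']
```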

After the number of responsive documents reviewed reaches the target range (for example, 155,000 responsive documents have been found), and after the algorithm no longer recommends additional documents (that is, the predictive ranking suggests few or no responsive documents remain), consider taking a second random sample, this time of the remaining unreviewed documents. For round numbers, assume that in the course of finding the 155,000 responsive documents, the team also reviewed 115,000 non-responsive documents, leaving 730,000 documents that have not been reviewed at all.

For the random sample of the unreviewed set, the review manager would probably want a higher degree of confidence than in the initial sample, since this second sample may be needed to defend the work. An acceptable choice might be a 99% confidence level with a 2% margin of error, which in this case requires a random sample of 4,137 of the 730,000 unreviewed documents. Assume the lawyer found that approximately 1% of the sample was in fact responsive (that is, 41 documents in the sample were responsive). The math on sampling is a bit more difficult when the prevalence is very low or very high, as here, so it is not quite correct to simply take the 1% found and apply the 2% margin in both directions to estimate the number of responsive documents in the population. Here, the actual range is between 0.64% and 1.46% responsive (again, a range in which one can be 99% confident), meaning there are likely between 4,672 and 10,658 responsive documents in the unreviewed set of 730,000 documents.
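The asymmetric range quoted above comes from using a proper binomial confidence interval rather than a flat plus-or-minus 2%. As a rough check, an exact (Clopper-Pearson) interval, computed here with SciPy, lands very close to the 0.64%–1.46% range; the post’s precise endpoints may come from a slightly different interval method or calculator.

```python
from scipy.stats import beta

def clopper_pearson(hits: int, n: int, confidence: float = 0.99):
    """Exact (Clopper-Pearson) binomial confidence interval for a proportion."""
    alpha = 1 - confidence
    lower = 0.0 if hits == 0 else beta.ppf(alpha / 2, hits, n - hits + 1)
    upper = 1.0 if hits == n else beta.ppf(1 - alpha / 2, hits + 1, n - hits)
    return lower, upper

# 41 responsive documents found in a 4,137-document sample, 99% confidence
lo, hi = clopper_pearson(41, 4137)
print(f"{lo:.2%} to {hi:.2%} responsive")                  # roughly 0.64% to 1.46%
print(f"{lo * 730_000:,.0f} to {hi * 730_000:,.0f} documents in the unreviewed set")
```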

Proportionality and Defensibility

With those numbers in mind, the question is what to do next. Should the review continue? Can one defend a decision to stop reviewing? Of course, the answer is that it depends, and it cannot be overemphasized that this decision should be based on the legal judgment of the lawyer managing the review. The most basic analysis is that (with 99% confidence) no more than 10,658 of the 730,000 unreviewed documents are responsive. Using the metrics gathered during the review (that is, the documents-per-hour pace achieved in finding the 155,000 responsive documents), the approximate cost of reviewing the additional documents is fairly easy to quantify. For example, assume the review team averaged 50 documents per hour at an average billing rate of $150 per hour, or roughly $3 per document. Reviewing the remaining 730,000 documents would then cost approximately $2.2 million. Much harder to quantify, of course, is the potential “benefit” that the remaining review might yield (a benefit the opposition would, in all likelihood, argue for). If the entire amount in controversy is $100,000, the proportionality analysis is probably straightforward (and, in fact, the entire scope of this review would have been questionable).
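The cost side of the proportionality analysis is straightforward arithmetic once a review pace and billing rate are assumed; a minimal sketch using the hypothetical numbers above:

```python
def linear_review_cost(documents: int, docs_per_hour: float, hourly_rate: float) -> float:
    """Estimated cost of reviewing every remaining document at a given pace and billing rate."""
    return documents / docs_per_hour * hourly_rate

cost = linear_review_cost(730_000, docs_per_hour=50, hourly_rate=150)
print(f"${cost:,.0f}")   # $2,190,000 -- about $3 per document
```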

However, a proportionality analysis may not be appropriate until all avenues of review short of a full linear review have been exhausted. That is, if keyword, date, custodian, or other metadata searches could reasonably target some or all of the remaining 10,658 responsive documents, those efforts should also be evaluated. One easy method is to examine the 41 responsive documents found in the second sample and determine whether they suggest other avenues by which more responsive documents could be identified. Similarly, though with more effort, information learned during the review of the 155,000 responsive documents may provide additional clues for searching the remaining corpus of unreviewed documents. A defensible position needs to anticipate the argument that there is a “better” (and cheaper) alternative to a full linear review, namely that a targeted search would significantly reduce the cost component of a given proportionality analysis. Once those potential objections are addressed, counsel will at least have the ammunition necessary to defend the decision to stop the review.

And speaking of defensibility, it is important to document the decisions made during the workflow. Maintain records of any keywords, custodians, date ranges, and other criteria used for culling decisions; which sample calculators and calculations were used and their results; the prevalence estimates found for each measured issue (e.g., privilege, responsiveness, key issues); and the alternative search strategies tried and the results of each. With the combination of powerful technology such as DISCO AI and documented, statistically accepted methodologies, counsel will be able to maximize the effectiveness of the search while providing the client with the most cost-effective review.

Trevor Jefferies