CAN PREDICTIVE CODING SOLVE THE PROBLEM?
Many attorneys and judges have reached a consensus that
it is prohibitively expensive—and often impossible—for
human beings to review every page of evidence. Some
industry experts believe predictive coding technology
will solve this problem because it uses statistical analysis
techniques to help legal teams classify documents, for
example as responsive or non-responsive.
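The classification step can be illustrated with a toy sketch. The example below is not any vendor's actual engine; it uses a simple naive Bayes model (one common statistical technique for text classification), plain Python, and a hypothetical attorney-labeled seed set to score an unseen document as responsive or non-responsive:

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(seed_set):
    # seed_set: (document_text, label) pairs labeled by attorneys
    word_counts = {"responsive": Counter(), "non-responsive": Counter()}
    doc_counts = Counter()
    for text, label in seed_set:
        doc_counts[label] += 1
        word_counts[label].update(tokenize(text))
    return word_counts, doc_counts

def classify(text, word_counts, doc_counts):
    total_docs = sum(doc_counts.values())
    vocab = set()
    for counts in word_counts.values():
        vocab.update(counts)
    best_label, best_score = None, float("-inf")
    for label in doc_counts:
        # log prior plus log likelihood with add-one smoothing
        score = math.log(doc_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for word in tokenize(text):
            score += math.log((word_counts[label][word] + 1) /
                              (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical seed set of documents labeled during attorney review
seed = [
    ("contract breach damages payment", "responsive"),
    ("merger agreement payment terms", "responsive"),
    ("lunch menu birthday party", "non-responsive"),
    ("office picnic schedule", "non-responsive"),
]
wc, dc = train(seed)
print(classify("damages under the merger contract", wc, dc))  # responsive
```

Production systems use far more sophisticated models and features, but the workflow is the same: lawyers label a seed set, the engine learns term statistics from it, and unreviewed documents are scored against those statistics.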
In Da Silva Moore v. Publicis Groupe (2012), Judge Andrew Peck
of the U.S. District Court for the Southern District of New
York endorsed the use of predictive coding technology “in
appropriate cases.” However, the litigants in this case could not
agree on how to decide which documents were responsive.
Judges have approved the use of predictive coding as a way
to reduce review costs in cases such as Global Aerospace Inc. v.
Landow Aviation, L.P. (2012) and In Re: Biomet M2a Magnum Hip
Implant Products Liability Litigation (MDL 2391) (2013).
Predictive coding technology is likely to be uncontroversial in
most cases; however, there is potential for the parties to get
bogged down in procedural details, such as the seed set of
training documents, as happened in Da Silva Moore.
In addition, predictive coding can be at least as accurate as
human reviewers. One study that asked seven teams of lawyers
to classify the same set of 28,000 documents as responsive
or non-responsive found that the reviewers classified only
just over half (57%) of the document set consistently and
widely disagreed on which items were responsive.3
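The kind of consistency measure such a study reports is straightforward to compute. The sketch below uses hypothetical responsiveness calls (not the study's data) from three review teams over the same five documents, and measures both pairwise agreement and the share of documents classified identically by every team:

```python
from itertools import combinations

# Hypothetical responsiveness calls from three review teams over
# the same five documents (True = responsive); illustrative only.
calls = {
    "team_a": [True, True, False, False, True],
    "team_b": [True, False, False, True, True],
    "team_c": [True, True, False, False, False],
}

def pairwise_agreement(a, b):
    # Fraction of documents two teams classified the same way
    return sum(x == y for x, y in zip(a, b)) / len(a)

for (name1, c1), (name2, c2) in combinations(calls.items(), 2):
    print(name1, name2, pairwise_agreement(c1, c2))

# Documents on which every team made the same call
unanimous = sum(len(set(column)) == 1 for column in zip(*calls.values()))
print("consistently classified:", unanimous, "of", 5)
```

Real studies typically use chance-corrected statistics such as Cohen's kappa rather than raw overlap, but even this raw measure makes the point: independent human reviewers often diverge substantially on the same material.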
A BROADER APPROACH IS NECESSARY
Predictive coding has the potential to save countless hours
of work in document review. However, many attorneys are
still not comfortable handing such important decisions
over to computers. This is especially the case for “black box”
predictive coding solutions that do not clearly explain how
their engines classify documents. If lawyers can’t understand
how the technology works, they almost certainly can’t
explain it to a judge.
I believe predictive coding alone will not greatly reduce
discovery costs because it occurs at the wrong end of the
process—once a case has already progressed to full litigation.
eDiscovery requires a multifaceted approach that combines
technologies across the Electronic Discovery Reference
Model (EDRM) process. These technologies include:
• Light metadata scans
• Legal hold
• Targeted collection from multiple sources
• Efficient collection from email archives
• Rapid data indexing
• Investigative search
• Predictive coding
This approach gives attorneys access to all the facts sooner
and makes it possible to drive a proactive, winning strategy.
For example, I was involved in a major litigation in which the
responding party used these combined techniques to identify
a million documents that were not responsive. They provided
a sample of those documents to the receiving party, and both
legal teams agreed to set them aside. By eliminating one
million documents from the review process on both sides, the
parties saved a substantial amount of money.
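The validation step in that anecdote amounts to drawing a random sample from the set flagged non-responsive so the receiving party can spot-check it. A minimal sketch, with a hypothetical document-ID space and an illustrative (not negotiated) sample size:

```python
import random

# Hypothetical: 1,000,000 document IDs flagged non-responsive
# by the responding party's technology-assisted review.
non_responsive_ids = range(1_000_000)

# Draw a fixed-size random sample for the receiving party to review.
# The sample size here is illustrative; in practice it would be
# negotiated or derived from a target confidence level and margin
# of error. The seed makes the draw reproducible by both sides.
rng = random.Random(42)
sample = rng.sample(non_responsive_ids, k=1500)

print(len(sample), min(sample), max(sample))
```

If the receiving party finds few or no responsive documents in the sample, both sides can agree, with quantifiable confidence, to set the whole flagged set aside.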
REDUCING COSTS ACROSS THE DISCOVERY PROCESS
Attorneys and eDiscovery experts can minimize costs and
use their superior knowledge of the facts to guide their
strategy and maintain a leading edge by applying the right
technologies across all stages of litigation.
1 For example, Fulbright & Jaworski LLP's "8th Annual Litigation
Trends Report" (http://www.fulbright.com/litigationtrends01)
showed a large proportion of corporate counsel expecting to
see more litigation in the coming year, with a large majority
anticipating more litigation based on stricter regulations.
Similarly, in The Cowen Group's "Q2, 2012 Quarterly Critical
Trends Corporate Market Snapshot" (http://cowengroup.com/
documents/qsnap.2012.q2.corp.pdf), three-quarters of
respondents said they were dealing with a growing volume
of electronic evidence and 45% said their litigation workloads
and data volumes were increasing.