Data Analytics Comes to the Legal Profession

there are insights in there somewhere

Did “data analytics” ruin baseball? Depends on whom you ask: the cranky old man in a Staten Island bar or the nerd busy calculating Manny Machado’s wRC+ (it was 141 in 2018, if you cared to know).  

What is indisputable, though, is that the so-called “Sabermetrics revolution” rapidly and fundamentally changed how the game is played – this is not your grandpa’s outfield! 

And data is eating the whole world, not just baseball. Now it’s coming for the legal profession, of all places. The Financial Times recently published an article on how law analytics companies are using statistics on judges and courts to weigh how a lawsuit might play out in the real world. One such company does the following (per the article):  

The sort of information that might be analysed includes how many times the opposing lawyer has filed certain types of lawsuit, in which court, with what success rate, who they have represented, and which attorneys they have faced. Once a judge has been assigned to the case, legal research companies can provide statistics on his or her record as well.  

Another law analytics firm “shows the litigation history of judges, lawyers and law firms, including win/loss rates for trials that are benchmarked to competitors, the success rates of different types of motion in individual courts and a database of who sues and gets sued most often.” 

Proponents reportedly argue that this is a) a more efficient way to go about the business of law and b) a way to identify where the legal system is inconsistent.  

That being said, it’s not yet all sunshine and roses for legal system Sabermetricians. As the FT notes, most litigation is dropped or settled, which means there are no public court documents for those cases. Which means no data to be mined. How many cases get dropped or settled? Perhaps as many as 90 percent. Big data is hard when most of the data don’t exist.  

So that means doing things the old-fashioned way. One law firm identified by the FT fills in the data gaps by using (quelle horreur!) real human lawyers to assess how a case might fare during the legal process.

Another issue is whether anything useful can be gleaned from what little data there are. One gentleman quoted in the article put it thus: “The judge analytics demonstrations I have seen to date oscillate between the blindingly obvious and the statistically irrelevant.”  

Nonetheless, as the datasets grow, it seems likely that the ability to assess lawsuits will only improve. Which leads me to wonder: will judges change their behavior in response? The baseball data revolution didn’t just reveal information – it changed how players actually played in response to that information. Data isn’t passive, turns out. It remains to be seen how shining the light of data on the court system could change the court system itself.

I.I.I.’s CEO Testifies Before Congress on Technological Innovation in the Fight Against Insurance Fraud

Sean Kevelighan, the I.I.I.’s chief executive officer, told a U.S. Senate Subcommittee in Washington, D.C., today that U.S. auto, home and business insurers pay an estimated $30 billion annually, nearly 10 percent of their total claim payouts, in fraudulent claims. To combat fraud, insurers are increasingly turning to vendors who offer technological innovations stemming from big data and artificial intelligence. These vendors are allowing insurers to assess prospective customers, verify claims and identify suspicious activity in ways that were not previously possible.

In a report released last month, the Boston-based Aite (pronounced EYE-TAY) Group noted that insurers recognize their fraud-fighting efforts must adapt to this new era, and found reason for optimism. The Aite Group reports that insurers are retaining state-of-the-art vendors, such as data aggregators, producers, and receivers, and then analyzing the resulting data with artificial intelligence and predictive analytics. The result? Insurance companies are equipping themselves with the high-tech tools they need to assess a prospective customer, verify a claim, and identify suspicious activity.

Click here for the full testimony.

Predictive Modeling Seminar Ahead

Insurance Information Institute (I.I.I.) chief actuary James Lynch will be in San Diego at the Casualty Actuarial Society’s (CAS) annual Ratemaking and Product Management conference, March 27 to 29. Here’s a preview:

The I.I.I. partners with the CAS at its conferences. I generally write three or four articles based on conference sessions for the CAS Actuarial Review. These tend to be fairly meaty actuarial topics, but I try to make them digestible. Here is something I wrote about predictive models a while back.

At this meeting, I plan to write three more articles about predictive models. These are sophisticated models that draw on Big Data to help insurers serve their customers better.

Many, if not most, personal insurers use predictive models to price their products. Lately they’ve been developing models to help them settle claims quickly and accurately.
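For a rough sense of what one building block of such a model can look like, here is a minimal sketch of a claim-frequency GLM in Python. The data, column names, and rating factors are hypothetical, and real pricing models are far richer; this only illustrates the general shape of the approach.

```python
# A minimal, illustrative sketch (not any insurer's actual model) of a
# claim-frequency GLM, one common building block of predictive pricing.
# All data, column names, and rating factors here are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical policy-level data: observed claim counts, exposure in
# car-years, and two simple rating factors.
policies = pd.DataFrame({
    "claims":     [0, 1, 0, 2, 0, 1, 0, 0, 3, 1],
    "exposure":   [1.0, 0.5, 1.0, 1.0, 0.8, 1.0, 0.5, 1.0, 1.0, 0.7],
    "driver_age": [22, 45, 60, 19, 35, 28, 52, 40, 21, 33],
    "territory":  ["urban", "rural", "rural", "urban", "urban",
                   "urban", "rural", "rural", "urban", "urban"],
})

# Poisson GLM for claim frequency; the log of exposure enters as an
# offset so expected counts scale with how long each policy was in force.
freq_model = smf.glm(
    "claims ~ driver_age + C(territory)",
    data=policies,
    family=sm.families.Poisson(),
    offset=np.log(policies["exposure"]),
).fit()

print(freq_model.summary())

# Fitted expected claim counts for each policy; paired with a severity
# model, these feed the pure premium behind the quoted rate.
print(freq_model.fittedvalues)
```

In practice a frequency model like this is combined with a severity model and many more rating variables; the sketch is just meant to show the kind of statistical machinery the term “predictive model” refers to.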

It’s an important, growing area in property/casualty insurance, particularly among actuaries and other quantitative experts. The CAS is recognizing the emerging skill through the CAS Institute – iCAS for short – its subsidiary that awards credentials for quantitative professionals.

The Institute’s first designation is for Certified Specialist in Predictive Analytics, or CSPA, and it will be awarded in a formal ceremony at the conference. I’ll be live-tweeting that event.