
Tapping the insurance ecosystem for insights

I had the pleasure last week of attending “Data in the New: Transforming Insurance” – the third annual insurtech-related thought leadership event held by St. John’s University’s Tobin Center for Executive Education and School of Risk Management.

To distill the insights I collected would take far more than one blog post.  Speakers, panelists, and attendees spanned the insurance “ecosystem” (a word that came up a lot!) – from CEOs, consultants, and data scientists to academics, actuaries, and even a regulator or two to keep things real. I’m sure the presentations and conversations I participated in will feed several posts in weeks to come.

Herbert Chain, executive director of the Center for Executive Education of the Tobin College of Business, welcomes speakers and attendees.
Just getting started

Keynote speaker James Bramblet, Accenture’s North American insurance practice lead, “set the table” by discussing where the industry has been and where some of the greatest opportunities for success lie. He described an evolution from functional silos (data hiding in different formats and databases) through the emergence of function-specific platforms (more efficient, better organized silos) to today’s environment, characterized by “business intelligence and reporting overload”.

Accenture’s James Bramblet discusses the history and future of data in insurance.

“Investment in big data is just getting started,” Jim said, adding that he expects the next wave of competitive advantage to be “at the intersection of customization and real time” – facilitating service delivery in the manner and with the speed customers have come to expect from other industries.

Jim pointed to several areas in which insurers are making progress and flagged one – workforce effectiveness – that he considers a “largely untapped” area of opportunity. Panelists and audience members seemed to agree that, while insurers are getting better at aggregating and analyzing vast amounts of data, their operations still look much as they always have: paper-based and labor-intensive. While technology and process-improvement methodologies that could address this exist, several attendees said they found organizational culture to be the biggest obstacle, with one citing Peter Drucker’s observation that “culture eats strategy for breakfast.”

Lake or pond? Raw or cooked?

Paul Bailo, global head of digital strategy and innovation for Infosys Digital, threw some shade on big data and the currently popular idea of “data lakes” stocked with raw, unstructured data. Paul said he prefers “to fish in data ponds, where I have some idea what I can catch.”

Data lakes, he said, lack the context to deliver real business insights. Data ponds, by contrast, “contain critical data points that drive 80-90 percent of decisions.”

Stephen Mildenhall, assistant professor of risk management and insurance and director of insurance data analytics at the School of Risk Management, went so far as to say the term “raw data” is flawed.

“Deciding to collect a piece of data is part of a structuring process,” he said, adding that, to be useful, “all data should be thoroughly cooked.”

Innovation advice

Practical advice was available in abundance for the 80-plus attendees, as was recognition of technical and regulatory challenges to implementation. James Regalbuto, deputy superintendent for insurance with the New York State Department of Financial Services, explained – thoroughly and with good humor – that regulators really aren’t out to stifle innovation. He provided several examples of privacy and bias concerns inherent in some solutions intended to streamline underwriting and other functions.

Perhaps the most broadly applicable advice came from Accenture’s Jim Bramblet, who cautioned against overthinking the features and attributes of the many solutions available to insurers.

“Pick your platform and go,” Jim said. “Create a runway for your business and ‘use case’ your way to greatness.”

Data Analytics Comes to the Legal Profession

There are insights in there somewhere.

Did “data analytics” ruin baseball? Depends on whom you ask: the cranky old man in a Staten Island bar or the nerd busy calculating Manny Machado’s wRC+ (it was 141 in 2018, if you cared to know).  

What is indisputable, though, is that the so-called “Sabermetrics revolution” rapidly and fundamentally changed how the game is played – this is not your grandpa’s outfield! 

And data is eating the whole world, not just baseball. Now it’s coming for the legal profession, of all places. The Financial Times recently published an article on how law analytics companies are using statistics on judges and courts to weigh how a lawsuit might play out in the real world. One such company does the following (per the article):  

The sort of information that might be analysed includes how many times the opposing lawyer has filed certain types of lawsuit, in which court, with what success rate, who they have represented, and which attorneys they have faced. Once a judge has been assigned to the case, legal research companies can provide statistics on his or her record as well.  

Another law analytics firm “shows the litigation history of judges, lawyers and law firms, including win/loss rates for trials that are benchmarked to competitors, the success rates of different types of motion in individual courts and a database of who sues and gets sued most often.” 
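
For the curious, here is a rough sense of what such litigation analytics might look like under the hood – a minimal sketch in Python. The case records, lawyer names, courts, and numbers below are hypothetical illustrations, not data from either company’s actual product:

```python
from collections import defaultdict

# Hypothetical litigation records; real products mine court dockets for this.
cases = [
    {"lawyer": "A. Smith", "case_type": "patent",   "court": "S.D.N.Y.",  "motion": "dismiss",          "won": True},
    {"lawyer": "A. Smith", "case_type": "patent",   "court": "S.D.N.Y.",  "motion": "dismiss",          "won": False},
    {"lawyer": "A. Smith", "case_type": "contract", "court": "E.D. Tex.", "motion": "summary judgment", "won": True},
    {"lawyer": "B. Jones", "case_type": "patent",   "court": "S.D.N.Y.",  "motion": "dismiss",          "won": False},
]

# How often has a given lawyer filed each type of lawsuit, and with what success rate?
filings = defaultdict(lambda: {"filed": 0, "won": 0})
for c in cases:
    key = (c["lawyer"], c["case_type"])
    filings[key]["filed"] += 1
    filings[key]["won"] += c["won"]

for (lawyer, case_type), stats in filings.items():
    rate = stats["won"] / stats["filed"]
    print(f"{lawyer} | {case_type}: {stats['filed']} filed, {rate:.0%} success")

# Success rates of different types of motion in individual courts.
motions = defaultdict(lambda: {"heard": 0, "granted": 0})
for c in cases:
    key = (c["court"], c["motion"])
    motions[key]["heard"] += 1
    motions[key]["granted"] += c["won"]

for (court, motion), stats in motions.items():
    print(f"{court} | {motion} motions: {stats['granted']}/{stats['heard']} granted")
```

The real services presumably layer benchmarking, judge histories, and much larger datasets on top, but the core of it is counting outcomes and grouping them by lawyer, court, and case type.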

Proponents reportedly argue that this is a) a more efficient way to go about the business of law and b) a way to identify where the legal system is inconsistent.  

That being said, it’s not yet all sunshine and roses for legal system Sabermetricians. As the FT notes, most litigation is dropped or settled, which means there are no public court documents for those cases. Which means no data to be mined. How many cases get dropped or settled? Perhaps as many as 90 percent. Big data is hard when most of the data don’t exist.  

So that means doing things the old-fashioned way. One law firm identified by the FT supplements data gaps by using (quelle horreur!) real human lawyers to assess how a case might fare during the legal process.

Another issue is whether anything useful can be gleaned from what little data there are. One gentleman quoted in the article put it thus: “The judge analytics demonstrations I have seen to date oscillate between the blindingly obvious and the statistically irrelevant.”  

Nonetheless, as the datasets grow, it seems likely that the ability to assess lawsuits will only improve. Which leads me to wonder: will judges change their behavior in response? The baseball data revolution didn’t just reveal information – it changed how players actually played in response to that information. Data isn’t passive, it turns out. It remains to be seen how shining the light of data on the court system could change the court system itself.

I.I.I.’s CEO Testifies Before Congress on Technological Innovation in the Fight Against Insurance Fraud

Sean Kevelighan, the I.I.I.’s chief executive officer, told a U.S. Senate subcommittee in Washington, D.C., today that U.S. auto, home, and business insurers pay an estimated $30 billion annually – nearly 10 percent of their total claim payouts – in fraudulent auto, home, and business insurance claims. To combat fraud, insurers are increasingly turning to vendors who offer technological innovations stemming from big data and artificial intelligence. These vendors are allowing insurers to assess prospective customers, verify claims, and identify suspicious activity in ways that were not previously possible.

In a report released last month, the Boston-based Aite (pronounced EYE-TAY) Group noted that insurers recognize their fraud-fighting efforts must adapt to this new era, and it found reason for optimism. The Aite Group reports that insurers are retaining state-of-the-art vendors – data aggregators, producers, and receivers – and then analyzing that data with artificial intelligence and predictive analytics. The result? Insurance companies are equipping themselves with the high-tech tools they need to assess a prospective customer, verify a claim, and identify suspicious activity.
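
To make that concrete, here is a minimal sketch of the kind of claim-scoring model such predictive analytics can involve, built with scikit-learn on made-up data. The features, claim values, and labels are illustrative assumptions, not anything drawn from the Aite report or an actual insurer’s system:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical historical claims: [claim amount ($), days from policy start to claim,
# number of prior claims], labeled 1 if the claim was later confirmed fraudulent.
X_train = np.array([
    [1_200,  400, 0],
    [2_500,  350, 1],
    [9_800,   12, 3],   # large claim filed shortly after policy inception
    [15_000,   5, 4],
    [3_100,  200, 0],
    [12_500,  20, 2],
])
y_train = np.array([0, 0, 1, 1, 0, 1])

# Scale the features, then fit a simple classifier on past outcomes.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# Score an incoming claim; high scores get routed to human investigators.
new_claim = np.array([[11_000, 15, 3]])
fraud_prob = model.predict_proba(new_claim)[0, 1]
print(f"Fraud score: {fraud_prob:.2f}")
```

Production systems add far more data sources – network analysis, external databases, investigator feedback – but something along these lines, score each claim and triage the suspicious ones, is the basic pattern.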

Click here for the full testimony.

Predictive Modeling Seminar Ahead

Insurance Information Institute (I.I.I.) chief actuary James Lynch will be in San Diego at the Casualty Actuarial Society’s (CAS) annual Ratemaking and Product Management conference, March 27 to 29. Here’s a preview:

The I.I.I. partners with the CAS at its conferences. I generally write three or four articles based on conference sessions for the CAS Actuarial Review. These tend to be fairly meaty actuarial topics, but I try to make them digestible. Here is something I wrote about predictive models a while back.

At this meeting, I plan to write three more articles about predictive models. These are sophisticated models that draw on Big Data to help insurers serve their customers better.

Many, if not most, personal insurers use predictive models to price their products. Lately they’ve been developing models to help them settle claims quickly and accurately.
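
As a rough illustration of what using predictive models to price products can mean in practice, here is a minimal claim-frequency sketch using scikit-learn’s PoissonRegressor on invented data. The rating factors and figures are assumptions for illustration, not an actual insurer’s rating plan:

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

# Hypothetical policyholder rating factors:
# [driver age, vehicle age (years), annual mileage (thousands)].
X = np.array([
    [22,  1, 18],
    [45,  5, 10],
    [60, 10,  6],
    [35,  3, 12],
    [19,  2, 20],
    [50,  8,  8],
])
# Observed claim counts per policy year for each policyholder.
y = np.array([2, 0, 0, 1, 3, 0])

# A Poisson GLM is a standard actuarial choice for modeling claim frequency.
model = PoissonRegressor(alpha=1e-3, max_iter=1000)
model.fit(X, y)

# Expected claim frequency for a new applicant feeds into the indicated premium.
applicant = np.array([[28, 2, 15]])
expected_claims = model.predict(applicant)[0]
print(f"Expected claims per year: {expected_claims:.2f}")
```

Real rating models involve many more variables, credibility adjustments, and regulatory review, but the basic shape – fit a frequency (and severity) model to historical experience, then predict for a new risk – is the same.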

It’s an important, growing area in property/casualty insurance, particularly among actuaries and other quantitative experts. The CAS is recognizing the emerging skill through the CAS Institute – iCAS for short – its subsidiary that awards credentials for quantitative professionals.

The Institute’s first designation is for Certified Specialist in Predictive Analytics, or CSPA, and it will be awarded in a formal ceremony at the conference. I’ll be live-tweeting that event.