I had the pleasure last week of attending “Data in the New: Transforming Insurance” – the third annual insurtech-related thought leadership event held by St. John’s University’s Tobin Center for Executive Education and School of Risk Management.
To distill the insights I collected would take far more than one blog post. Speakers, panelists, and attendees spanned the insurance “ecosystem” (a word that came up a lot!) – from CEOs, consultants, and data scientists to academics, actuaries, and even a regulator or two to keep things real. I’m sure the presentations and conversations I participated in will feed several posts in weeks to come.
Just getting started
Keynote speaker James Bramblet, Accenture’s North American insurance practice lead, “set the table” by discussing where the industry has been and where some of the greatest opportunities for success lie. He described an evolution from functional silos (data hiding in different formats and databases) through the emergence of function-specific platforms (more efficient, better organized silos) to today’s environment, characterized by “business intelligence and reporting overload.”
“Investment in big data is just getting started,” Jim said, adding that he expects the next wave of competitive advantage to be “at the intersection of customization and real time” – facilitating service delivery in the manner and with the speed customers have come to expect from other industries.
Jim pointed to several areas in which insurers are making progress and flagged one – workforce effectiveness – that he considers a “largely untapped” area of opportunity. Panelists and audience members seemed to agree that, while insurers are getting better at aggregating and analyzing vast amounts of data, their operations still look much as they always have: paper-based and labor-intensive. While technology and process-improvement methodologies exist that could address this, several attendees said they found organizational culture to be the biggest obstacle, with one citing Peter Drucker’s observation that “culture eats strategy for breakfast.”
Lake or pond? Raw or cooked?
Paul Bailo, global head of digital strategy and innovation for Infosys Digital, threw some shade on big data and the currently popular idea of “data lakes” stocked with raw, unstructured data. Paul said he prefers “to fish in data ponds, where I have some idea what I can catch.”
Data lakes, he said, lack the context to deliver real business insights. Data ponds, by contrast, “contain critical data points that drive 80 to 90 percent of decisions.”
Stephen Mildenhall, assistant professor of risk management and insurance and director of insurance data analytics at the School of Risk Management, went so far as to say the term “raw data” is flawed.
“Deciding to collect a piece of data is part of a structuring process,” he said, adding that, to be useful, “all data should be thoroughly cooked.”
Practical advice was available in abundance for the 80-plus attendees, as was recognition of technical and regulatory challenges to implementation. James Regalbuto, deputy superintendent for insurance with the New York State Department of Financial Services, explained – thoroughly and with good humor – that regulators really aren’t out to stifle innovation. He provided several examples of privacy and bias concerns inherent in some solutions intended to streamline underwriting and other functions.
Perhaps the most broadly applicable advice came from Accenture’s Jim Bramblet, who cautioned against overthinking the features and attributes of the many solutions available to insurers.
“Pick your platform and go,” Jim said. “Create a runway for your business and ‘use case’ your way to greatness.”