Bridging the Cyber Insurance Data Gap

Cyber risks are opportunistic and indiscriminate, exploiting random system flaws and lapses in human judgment.

Underwriting cyber risk is beyond difficult. It’s a newer peril, and the nature of the threat is constantly changing – one day, the biggest worry is identity theft or compromise of personal data; then, suddenly it seems, everyone is concerned about ransomware bringing their businesses to a standstill.

Now it’s cryptojacking and voice hacking – and all I feel confident saying about the next new risk is that it will be scarier in its own way than everything that has come before.

This is because, unlike most insured risks, these threats are designed. They’re intentional, unconstrained by geography or cost. They’re opportunistic and indiscriminate, exploiting random system flaws and lapses in human judgment. Cheap to develop and deploy, they adapt quickly to our efforts to defend ourselves.

“The nature of cyberwarfare is that it is asymmetric,” wrote Tarah Wheeler last year in a chillingly titled Foreign Policy article, “In Cyber Wars, There Are No Rules.” “Single combatants can find and exploit small holes in the massive defenses of countries and country-sized companies. It won’t be cutting-edge cyberattacks that cause the much-feared cyber-Pearl Harbor in the United States or elsewhere. Instead, it will likely be mundane strikes against industrial control systems, transportation networks, and health care providers — because their infrastructure is out of date, poorly maintained, ill-understood, and often unpatchable.”

This is the world the cyber underwriter inhabits – the rare business case in which a military analogy isn’t hyperbole.

We all need data – you share first

In an asymmetric scenario – where the enemy could as easily be a government operative as a teenager in his parents’ basement – the primary challenge is to have enough data of sufficiently high quality to understand the threat you face. Catastrophe-modeling firm AIR aptly described the problem cyber insurers face in a 2017 paper that still rings true:

“Before a contract is signed, there is a delicate balance between collecting enough appropriate information on the potential insured’s risk profile and requesting too much information about cyber vulnerabilities that the insured is unwilling or unable to divulge…. Unlike property risk, there is still no standard set of exposure data that is collected at the point of underwriting.”

Everyone wants more, better data; no one wants to be the first to share it.

As a result, the AIR paper continues, “cyber underwriting and pricing today tend to be more art than science, relying on many subjective measures to differentiate risk.”

Anonymity is an incentive

To help bridge this data gap, Verisk – parent of both AIR and insurance data and analytics provider ISO – yesterday announced the launch of Verisk Cyber Data Exchange. Participating insurers contribute their data to the exchange, which ISO manages – aggregating, summarizing, and developing business intelligence that it provides to those companies via interactive dashboards.

Anonymity is designed into the exchange, Verisk says, with all data aggregated so it can’t be traced back to a specific insurer. The hope is that, by creating an incentive for cyber insurers to share data, Verisk can provide insights that will help them quantify this evolving risk for strategic, model calibration, and underwriting purposes.
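Verisk hasn’t published the mechanics of the exchange, but the underlying pattern – pool contributions, then release only aggregates that can’t be attributed to any one participant – is straightforward to sketch. Below is a minimal, hypothetical illustration in Python, assuming a simple minimum-contributor rule; the field names, threshold, and bucketing are invented for this post, not Verisk’s actual method.

```python
from collections import defaultdict

# Assumed suppression threshold; the exchange's real rules aren't public.
MIN_CONTRIBUTORS = 3

def aggregate_exchange_data(records):
    """records: an iterable of dicts such as
    {"insurer": "A", "industry": "retail", "loss": 120000.0}"""
    buckets = defaultdict(lambda: {"insurers": set(), "total_loss": 0.0, "claims": 0})
    for r in records:
        b = buckets[r["industry"]]
        b["insurers"].add(r["insurer"])
        b["total_loss"] += r["loss"]
        b["claims"] += 1

    # Publish a bucket only when enough distinct insurers contributed,
    # and strip insurer identities from the output entirely.
    return {
        industry: {"claims": b["claims"], "avg_loss": b["total_loss"] / b["claims"]}
        for industry, b in buckets.items()
        if len(b["insurers"]) >= MIN_CONTRIBUTORS
    }
```

The key design choice is suppression: a cell fed by too few insurers is never published at all, so no released figure can be reverse-engineered back to a single contributor.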

Tapping the insurance ecosystem for insights

I had the pleasure last week of attending “Data in the New: Transforming Insurance” – the third annual insurtech-related thought leadership event held by St. John’s University’s Tobin Center for Executive Education and School of Risk Management.

To distill the insights I collected would take far more than one blog post. Speakers, panelists, and attendees spanned the insurance “ecosystem” (a word that came up a lot!) – from CEOs, consultants, and data scientists to academics, actuaries, and even a regulator or two to keep things real. I’m sure the presentations and conversations I participated in will feed several posts in weeks to come.

Herbert Chain, executive director of the Center for Executive Education of the Tobin College of Business, welcomes speakers and attendees.

Just getting started

Keynote speaker James Bramblet, Accenture’s North American insurance practice lead, “set the table” by discussing where the industry has been and where some of the greatest opportunities for success lie. He described an evolution from functional silos (data hiding in different formats and databases) through the emergence of function-specific platforms (more efficient, better-organized silos) to today’s environment, characterized by “business intelligence and reporting overload.”

Accenture’s James Bramblet discusses the history and future of data in insurance.

“Investment in big data is just getting started,” Jim said, adding that he expects the next wave of competitive advantage to be “at the intersection of customization and real time” – facilitating service delivery in the manner and with the speed customers have come to expect from other industries.

Jim pointed to several areas in which insurers are making progress and flagged one – workforce effectiveness – that he considers a “largely untapped” area of opportunity. Panelists and audience members seemed to agree that, while insurers are getting better at aggregating and analyzing vast amounts of data, their operations still look much as they always have: paper-based and labor-intensive. While technology and process improvement methodologies that could address this exist, several attendees said they found organizational culture to be the biggest obstacle, with one citing Peter Drucker’s observation that “culture eats strategy for breakfast.”

Lake or pond? Raw or cooked?

Paul Bailo, global head of digital strategy and innovation for Infosys Digital, threw some shade on big data and the currently popular idea of “data lakes” stocked with raw, unstructured data. Paul said he prefers “to fish in data ponds, where I have some idea what I can catch.”

Data lakes, he said, lack the context to deliver real business insights. Data ponds, by contrast, “contain critical data points that drive 80-90 percent of decisions.”
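The distinction is easy to picture in code. Here is a minimal, hypothetical sketch in Python using pandas: the “lake” stands in for a wide, loosely structured feed, and the “pond” is the small, well-understood subset of columns that actually drives decisions. All field names are invented for illustration.

```python
import pandas as pd

# The "lake": a wide feed mixing decision-ready fields with raw,
# context-free material (blobs, free text). Rows and columns are invented.
lake = pd.DataFrame([
    {"policy_id": 1, "industry": "retail", "premium": 1200.0, "loss_ratio": 0.62,
     "clickstream_blob": "...", "free_text_notes": "called twice"},
    {"policy_id": 2, "industry": "health", "premium": 3400.0, "loss_ratio": 0.48,
     "clickstream_blob": "...", "free_text_notes": ""},
])

# The "pond": a curated subset with known context – the critical data
# points said to drive 80-90 percent of decisions.
POND_COLUMNS = ["policy_id", "industry", "premium", "loss_ratio"]
pond = lake[POND_COLUMNS]
print(pond)
```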

Stephen Mildenhall, assistant professor of risk management and insurance and director of insurance data analytics at the School of Risk Management, went as far as to say the term “raw data” is flawed.

“Deciding to collect a piece of data is part of a structuring process,” he said, adding that, to be useful, “all data should be thoroughly cooked.”

Innovation advice

Practical advice was available in abundance for the 80-plus attendees, as was recognition of technical and regulatory challenges to implementation. James Regalbuto, deputy superintendent for insurance with the New York State Department of Financial Services, explained – thoroughly and with good humor – that regulators really aren’t out to stifle innovation. He provided several examples of privacy and bias concerns inherent in some solutions intended to streamline underwriting and other functions.

Perhaps the most broadly applicable advice came from Accenture’s Jim Bramblet, who cautioned against overthinking the features and attributes of the many solutions available to insurers.

“Pick your platform and go,” Jim said. “Create a runway for your business and ‘use case’ your way to greatness.”

North American Insurers Lead In Tech Spending

North American insurers lead the way in IT spending globally and will invest $73 billion in tech areas such as data analytics, cloud, and insurtech in 2017.

Digital Insurance reports that global IT spending by insurers is slated to reach $185 billion by the end of this year, according to the Celent “IT Spending in Insurance 2017” report.

After North America, insurer technology spending by region is as follows: Europe ($69 billion); Asia ($33 billion); Latin America ($5 billion); then a group of territories comprising Africa, the Middle East and Eastern Europe (around $5 billion collectively).

Three overarching trends – digitalization, data analytics, and legacy and ecosystem transformation – still dominate investment, Celent said.

“In a few markets globally, we have seen a slight reduction in IT spending this year. Generally, the more mature markets remain under pressure to demonstrate value through efficiency,” said Celent senior vice president Jamie Macgregor.

Long Road To Better Data On Drowsy, Drunk, Drugged And Distracted Driving

States are underreporting critical data from crash scenes that could make a big difference in efforts to help prevent traffic fatalities and injuries.

A National Safety Council review of motor vehicle crash reports found that:

  • All 50 states lack fields or codes for law enforcement to record the level of driver fatigue at the time of a crash;
  • 26 states lack fields to capture texting;
  • 32 states lack fields to record hands-free cell phone use;
  • 32 states lack fields to identify specific types of drug use if drugs are detected, including marijuana.

States are also failing to capture teen driver restrictions (35 states), the use of advanced driver assistance technologies (50 states), and the use of infotainment systems (47 states).

Excluding these fields limits the ability to effectively address these problems, NSC said.
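To make the gap concrete, here is a hypothetical sketch in Python of what a crash-report record with the missing fields might look like. The names, codes, and scales are invented for illustration – no state’s actual report format is implied.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class DistractionType(Enum):
    NONE = 0
    TEXTING = 1           # per NSC, missing from 26 states' reports
    HANDS_FREE_PHONE = 2  # missing from 32 states' reports
    INFOTAINMENT = 3      # missing from 47 states' reports

@dataclass
class CrashReport:
    report_id: str
    # Hypothetical 0-4 fatigue scale; per NSC, no state report can record this today
    driver_fatigue_level: Optional[int] = None
    distraction: DistractionType = DistractionType.NONE
    # Specific substances detected, e.g. ["marijuana"]; 32 states can't record this
    drug_types_detected: List[str] = field(default_factory=list)
    teen_driver_restriction_violated: Optional[bool] = None  # missing in 35 states
    adas_equipped: Optional[bool] = None  # driver assistance tech; missing in 50
```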

“Collecting data from a crash scene may be seen as merely ‘filling out accident reports’ for violation and insurance purposes. Data collection efforts immediately following a crash provide a unique opportunity to help guide prevention strategies. Currently, some states are recording this type of data and others are not. When data of this kind is requested to be reported on a crash report and is entered, prevention professionals will have the data to better understand driver and non-motorist behaviors. When this data is not recorded, prevention professionals are left guessing.”

The call for better data collection follows the release of NSC figures showing that in 2016 there were more than 40,000 traffic fatalities in the U.S. for the first time in 10 years.

A recent I.I.I. white paper found that in the past two years both the accident rate and the size of insurance claims – the largest and most volatile components of auto insurance costs – have climbed dramatically.

Check out additional I.I.I. facts and statistics on highway safety.