
Assessing Cyber Risk

By Phil Norton, Vice Chairman, Midwest Region


Source, quantity, and quality of data are the obvious keys to assessing or estimating risk. Because the cyber claims world is quite new, the amount of available data is relatively sparse. At the turn of the century, statisticians relied on publicly available FBI data: information was especially scarce because corporations were not required to notify anyone of the breaches that were indeed occurring.

"To secure a highly predictive model, there are two more items to consider, namely the cost-per-record variability per industry and the difference in records maintained by industry."

Today, we have more than 10 years of data collected from a variety of sources and compiled from about 2,000 different companies. This information can be used to help assess a company’s cyber risk exposure in today’s business environment.

First, what factors are included in breach costs? And, based on actual breaches whose costs have been quantified, what sort of severity do those costs translate into?

Some key variables to consider in estimating the cost of a breach are costs from forensics, crisis management, public relations, and eDiscovery; PCI compliance fines and penalties; and other monies potentially owed to third parties, including banks.
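To make the composition concrete, here is a minimal sketch that itemizes those buckets; every dollar figure is a hypothetical placeholder, not data from an actual claim.

```python
# Illustrative itemization of the cost buckets listed above; the line
# items mirror the article's list, but all amounts are placeholders.
breach_costs = {
    "forensics":             750_000,
    "crisis_management":     400_000,
    "public_relations":      250_000,
    "ediscovery":            500_000,
    "pci_fines_penalties": 1_200_000,
    "third_parties_banks":   900_000,
}

total = sum(breach_costs.values())
print(f"estimated total breach cost: ${total:,.0f}")
```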

By dividing the total cost of each claim by the number of employees (which relates to the number of devices or network access points), we have established a typical breach cost per employee for a number of size categories. The following table reflects overall averages; note that dramatic adjustments must be made once we know the industry of the company being evaluated.
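As a sketch of that arithmetic (with made-up size bands and per-employee rates, since the table's actual values are not reproduced here), inverting a cost-per-employee rate recovers a typical total breach cost:

```python
# Hypothetical size bands and per-employee breach-cost rates; the real
# table's values are not shown here.
SIZE_BANDS = [
    # (max_employees, typical_cost_per_employee_usd)
    (500,           2_000),
    (5_000,         1_200),
    (50_000,          700),
    (float("inf"),    400),
]

def typical_breach_cost(employees: int) -> int:
    """Total claim cost / employees defines the per-employee rate, so
    multiplying the rate back by headcount recovers a typical total."""
    for max_employees, per_employee in SIZE_BANDS:
        if employees <= max_employees:
            return employees * per_employee

print(f"${typical_breach_cost(18_000):,.0f}")  # ~18,000-employee company
```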

Not surprisingly, larger companies can achieve greater efficiencies, reducing the cost per employee, cost per record breached, cost per network device, and so on. However, larger companies also experience larger breaches, so their total breach costs are greater in the aggregate. Here are some sample data points to consider:

Once we start thinking about modeling risk for the purpose of buying insurance limits, it is best to look beyond typical, or average, breach costs. The last column in the table above displays approximate 90th-percentile costs. To be truly conservative, we should also acknowledge that hacking breaches generally cost about 20 percent more than all other types of breaches. Thus, total breach expenses for a large company with around 18,000 employees would have a hacking 90th percentile closer to $33M. Note that for other lines of coverage, most companies select insurance limits that fall between the average and the 90th percentile.
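The arithmetic behind that $33M figure can be reconstructed as follows; the ~$27.5M base 90th percentile is inferred (since 1.2 × $27.5M ≈ $33M), and the average-cost figure is a placeholder.

```python
# Sketch of the limit-selection reasoning above. The base 90th-percentile
# figure is inferred from the article's $33M hacking number; the average
# breach cost is a placeholder.
HACKING_UPLIFT = 1.20   # hacking breaches run ~20% above other types

base_p90 = 27_500_000   # inferred all-breach 90th percentile, ~18k employees
hacking_p90 = base_p90 * HACKING_UPLIFT
print(f"hacking 90th percentile: ${hacking_p90 / 1e6:.0f}M")  # ~$33M

# Most buyers pick a limit between the average and the 90th percentile:
average_cost = 13_000_000   # placeholder average breach cost
print(f"candidate limit range: ${average_cost / 1e6:.0f}M to "
      f"${hacking_p90 / 1e6:.0f}M")
```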

While this process provides a primitive model, the next step in modeling cyber risk is to refine it with further exposure data that is readily available and highly correlated with the risk. To secure a highly predictive model, there are two more items to consider, namely the cost-per-record variability per industry and the difference in records maintained by industry. Thus, industry is a critical concern.

Our modeling does not use actual record counts for Personally Identifiable Information (PII) simply because such data is difficult for clients to determine, especially on a unique-individual basis. Further, we certainly don't have 5-10 years of accurate data on record counts. No one does! We solved this problem by using the number of employees, revenues, and industry jointly to assess cyber risk in a way that appears more successful than any new model relying only on record counts.
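A minimal sketch of that joint approach might look like the following, where the per-employee rate, revenue weight, and industry relativities are all illustrative assumptions standing in for the calibrated model:

```python
# Hypothetical joint severity model: employees, revenue, and industry
# stand in for hard-to-obtain PII record counts. All factors below are
# illustrative, not calibrated values.
INDUSTRY_FACTOR = {
    "healthcare":    1.8,
    "financial":     1.5,
    "retail":        1.2,
    "manufacturing": 0.8,
}

def expected_severity(employees, revenue_usd, industry,
                      per_employee=700, revenue_weight=0.0005):
    """Blend an employee-based and a revenue-based estimate, then scale
    by an industry relativity."""
    emp_estimate = employees * per_employee
    rev_estimate = revenue_usd * revenue_weight
    return 0.5 * (emp_estimate + rev_estimate) * INDUSTRY_FACTOR[industry]

print(f"${expected_severity(18_000, 5e9, 'retail'):,.0f}")
```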

To give an idea of how important industry is, consider the following chart, built from a recent Ponemon Institute study on breach costs. Though their costs per record seem high and may include some costs not necessarily covered by cyber insurance, we agree with the basic risk relationships the chart reflects. Using this information about costs per record allows us to refine the modeling tremendously by adjusting claim severity expectations according to industry. Similarly, claim frequency expectations should be adjusted by industry; we have done that as well, relying mostly on 10 years of data from the Identity Theft Resource Center.
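Schematically, the two adjustments combine as follows; the baseline and the relativities are placeholders in the spirit of the Ponemon and ITRC figures, not their published numbers:

```python
# Placeholder industry relativities for severity (cost per record, in the
# spirit of Ponemon data) and frequency (breach rate, in the spirit of
# Identity Theft Resource Center data).
BASELINE_SEVERITY = 10_000_000   # all-industry typical breach cost
BASELINE_FREQUENCY = 0.05        # all-industry annual breach probability

INDUSTRY_RELATIVITY = {
    # industry:   (severity, frequency)
    "healthcare": (1.9, 1.6),
    "retail":     (1.1, 1.4),
    "education":  (1.3, 1.2),
    "industrial": (0.7, 0.6),
}

def expected_annual_loss(industry):
    sev_rel, freq_rel = INDUSTRY_RELATIVITY[industry]
    return (BASELINE_SEVERITY * sev_rel) * (BASELINE_FREQUENCY * freq_rel)

for name in INDUSTRY_RELATIVITY:
    print(f"{name:11s} expected loss ${expected_annual_loss(name):,.0f}/yr")
```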

Although retail cyber claim severity has always had the potential for extreme cost (consider TJ Maxx from years ago, and Target, Home Depot, and others more recently), smaller retailers clearly do not have the same consistent history of more severe losses compared to their peers.

Once we have incorporated the various statistical correlations into a single model, we can create three curves for each of the 24 industries we track, similar to the sample graph for a single industry displayed below.

[Sample graph: three modeled risk curves for a single industry]
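One plausible way to generate such curves (the article does not specify its distributional choice, so the lognormal form and its parameters here are assumptions) is to model breach cost as lognormal around a size-driven median and trace three percentiles across company sizes:

```python
import math

SIGMA = 1.0  # assumed lognormal shape; not a calibrated value

# Standard normal quantiles for the three percentile curves.
Z = {50: 0.0, 75: 0.674, 90: 1.282}

def cost_curve(employees, percentile, per_employee=700):
    """Median breach cost grows with headcount; higher percentiles sit a
    fixed number of lognormal standard deviations above the median."""
    median = employees * per_employee
    return median * math.exp(SIGMA * Z[percentile])

for emp in (1_000, 10_000, 100_000):
    points = ", ".join(f"p{p}: ${cost_curve(emp, p) / 1e6:.1f}M"
                       for p in (50, 75, 90))
    print(f"{emp:>7,} employees -> {points}")
```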

The model we have built from all the foregoing data, and from the statistical techniques applied to fit that data (or, on rare occasions, to extrapolate from it), is flexible: it can handle blends of industry factors quite easily, and it also flexes on the use of employee counts. For example, large retailers with many locations (e.g., restaurants) may need to view the number of locations, rather than some function of the number of employees, as the best indicator of cyber risk.
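Both flex points can be sketched as follows; the blend weights, relativities, and per-location rate are hypothetical:

```python
# Sketch of two flex points: blending industry factors for a company that
# straddles industries, and swapping location count in for employee count
# as the exposure base. All numbers are illustrative.
FACTORS = {"retail": 1.2, "hospitality": 1.0}   # placeholder relativities

def blended_factor(industry_mix):
    """industry_mix maps industry -> share of the business (sums to 1)."""
    return sum(share * FACTORS[ind] for ind, share in industry_mix.items())

def severity_by_location(locations, per_location=15_000, factor=1.0):
    # e.g., a restaurant chain: each location is a point-of-sale exposure
    return locations * per_location * factor

f = blended_factor({"retail": 0.6, "hospitality": 0.4})
print(f"blended industry factor: {f:.2f}")
print(f"estimated severity: ${severity_by_location(800, factor=f):,.0f}")
```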

In summary, while this model represents a sophisticated technique for measuring cyber risk, this is a fast-moving area of liability subject to constant change, and what works impressively now is no guarantee of future forecasting capability. Regardless, modeling individual cyber risk with quality data and statistical methods does produce a key indicator of risk, and it provides a powerful alternative to simply benchmarking insurance purchasing patterns.
