Predicting Insurance Claim Severity

Introduction

In October 2016, Allstate launched a Kaggle competition challenging competitors to predict the severity of insurance claims on the basis of 131 different variables. Better understanding the future cost, or severity, of a claim is of utmost importance to an insurance company and would enable Allstate to price its plans more effectively. Additionally, knowing the relative importance of the different variables would allow the company to evaluate potential customers more efficiently.

For this competition, we applied various strategies, models, and algorithms to predict the severity of an insurance claim. As we will discuss, we used a variety of supervised machine learning methods, including multiple linear regression, ridge and lasso regression, random forests, gradient boosting machines (GBM), and neural nets. We then used ensembling to combine our models and arrive at more accurate predictions.

Exploring the Data

One of the challenges within the competition was that the 131 variables provided by Allstate were anonymized, meaning there was no explanation as to what the various columns described. In all, there were 72 binary categorical variables, 43 non-binary categorical variables (with 3 to 326 levels), 14 continuous variables, and one dependent variable, “loss”. The company provided a training dataset with 188,318 rows and a testing dataset with 125,546 rows.

We first visualized the loss variable, which ranged from 0.65 to 125,000. However, the histogram was hard to decipher because of a long tail of outliers with very high loss values. Plotting only the first 95% of the data made the distribution much easier to see, although it remained heavily skewed to the right.

[Figure: histograms of the loss variable, raw and trimmed to the 95th percentile]

To remove the skewness, we applied a log transformation to the loss variable, which made the distribution approximately normal, as seen below.

[Figure: histogram of the log-transformed loss variable]
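The trimming and log transform described above can be sketched as follows. This is an illustrative Python/NumPy version (the original work was done in R), with synthetic lognormal data standing in for the real loss column; the distribution parameters are made up for the example.

```python
import numpy as np

# Synthetic right-skewed data standing in for the "loss" column
# (the lognormal parameters below are made up for illustration).
rng = np.random.default_rng(0)
loss = rng.lognormal(mean=7.7, sigma=0.8, size=10_000)

# Trim to the 95th percentile so a histogram is readable.
trimmed = loss[loss <= np.quantile(loss, 0.95)]

# Log-transform to remove the right skew.
log_loss = np.log(loss)

# In right-skewed data the mean sits well above the median;
# after the log transform the two nearly coincide.
print(f"raw: mean={loss.mean():.0f}, median={np.median(loss):.0f}")
print(f"log: mean={log_loss.mean():.2f}, median={np.median(log_loss):.2f}")
```

Either `trimmed` or `log_loss` could then be passed to a histogram routine; the log version is what the models below are trained on.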

Preprocessing

To prepare the data for analysis, we first combined the training and test datasets, since several categorical levels appeared in test.csv but not in train.csv. Because of the many levels within the categorical variables, we created dummy columns for each level, with binary values of 0 or 1. To limit the number of new columns, however, we only created dummies for levels that made up at least 2% of a variable's observations. Lastly, we applied a log transformation to the response column in order to normalize its distribution.

[Figure: preprocessing workflow]

The resulting training dataset had a total of 280 columns consisting of 265 binary variables, the 14 original continuous variables, and the log-transformed loss variable.
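The dummy-encoding rule above can be sketched in a few lines. This is an illustrative pandas version, not the authors' R code; the column name and levels are invented for the example.

```python
import pandas as pd

# Hypothetical categorical column: "C" is rare in train, "D" appears
# only in test -- the situation that motivated combining the files.
train = pd.DataFrame({"cat1": ["A"] * 60 + ["B"] * 38 + ["C"] * 2})
test = pd.DataFrame({"cat1": ["A"] * 50 + ["B"] * 49 + ["D"] * 1})

combined = pd.concat([train, test], ignore_index=True)

for col in combined.select_dtypes(include="object"):
    freq = combined[col].value_counts(normalize=True)
    common = freq[freq >= 0.02].index  # keep levels covering >= 2%
    for level in common:
        combined[f"{col}_{level}"] = (combined[col] == level).astype(int)
    combined = combined.drop(columns=col)

print(sorted(combined.columns))  # rare "C" and "D" get no dummy column
```

Encoding on the combined frame guarantees the train and test matrices end up with identical columns, which is what makes prediction on the test set possible.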

Supervised methods

I. Multiple Linear Regression model

To get a sense of our data and obtain a baseline against which to compare our other models, we first ran a multiple linear regression using the caret package in R. However, because of the approach we took in preprocessing our data, the resulting matrix of predictors was rank deficient. This prompted us to try several linear regression models to address the multicollinearity in the data and the associated problems of matrix invertibility and unreliable confidence intervals.

Our original model included all the variables. To try to solve the rank deficiency problem, we ran a second model that excluded seven variables that had produced NA coefficients in the first model. Excluding these variables, however, did not fix the problem. Finally, a third model excluded all the variables dropped in the second model as well as all the variables that had failed to reach significance at the 90% confidence level in the first model.

This model produced an adjusted R^2 of approximately 0.52, a cross-validation RMSE of 0.51, and a mean absolute error (MAE) of 1249.45 on the test set. Inspecting the diagnostic plots, we found that the errors followed a roughly normal distribution. Likewise, we saw no distinctive pattern in the scatter plot of residuals against fitted values, suggesting the residuals had constant variance. These diagnostics gave us confidence in the validity of the model's F-test; however, given the modest accuracy of its predictions, we proceeded to investigate further models.

[Figure: linear regression diagnostic plots]
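The rank deficiency described above has a simple mechanical cause: when every level of a categorical variable gets its own dummy column alongside an intercept, the dummies sum to the intercept column. A small NumPy illustration (hypothetical three-level variable, not the actual Allstate data):

```python
import numpy as np

# Simulate a 3-level categorical and encode ALL of its levels as dummies.
n = 200
rng = np.random.default_rng(1)
levels = rng.integers(0, 3, size=n)
dummies = np.eye(3)[levels]                   # one dummy per level
X = np.column_stack([np.ones(n), dummies])    # intercept + all dummies

# The dummies sum to the intercept column, so the matrix loses a rank.
print(f"columns: {X.shape[1]}, rank: {np.linalg.matrix_rank(X)}")

# Dropping one reference level restores full rank -- effectively what
# lm signals with NA coefficients.
X_fixed = np.column_stack([np.ones(n), dummies[:, 1:]])
print(np.linalg.matrix_rank(X_fixed) == X_fixed.shape[1])
```

Because the 2% frequency cutoff in preprocessing drops some levels but keeps all levels of other variables, some of the 265 dummy columns remained linearly dependent, producing the NA coefficients the second model tried to remove.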

II. Ridge

We next ran a ridge regression, cross-validating over a wide grid of lambda values and then cross-validating again over a narrower range. However, the ridge model returned a very low lambda, on the order of one ten-thousandth. Such a low lambda implies a near-zero shrinkage penalty, yielding results very close to those of the linear model with all variables included. This model produced an RMSE of 0.507 and an MAE of 1232.
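The effect of a near-zero lambda can be seen directly from the ridge closed form, beta = (X'X + lambda*I)^(-1) X'y. A minimal NumPy sketch with synthetic data (the real models were fit in R with cross-validation; this just shows why a tiny penalty reproduces the linear model):

```python
import numpy as np

# Synthetic stand-ins for the preprocessed predictors and log-loss.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 20))
y = X @ rng.normal(size=20) + rng.normal(scale=0.5, size=500)

def ridge(X, y, lam):
    """Closed-form ridge solution: (X'X + lam*I)^(-1) X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

beta_ols = ridge(X, y, 0.0)    # lambda = 0 reduces to least squares
beta_tiny = ridge(X, y, 1e-4)  # near-zero lambda, as our CV selected
beta_big = ridge(X, y, 1e4)    # heavy shrinkage, for contrast

print(np.abs(beta_ols - beta_tiny).max())  # essentially identical fits
print(np.linalg.norm(beta_big) < np.linalg.norm(beta_ols))  # shrunk
```

With lambda around 1e-4 the coefficients are indistinguishable from the unpenalized fit, which is why the ridge scores landed so close to the multiple linear regression's.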

III. Lasso

We also ran a lasso model with 10-fold cross-validation. The model returned a similarly low shrinkage penalty of 0.0007140295, again suggesting that the lasso fit would yield predictions very close to the multiple linear regression. It produced an RMSE of 0.507 and an MAE of 1248, performing close to the multiple linear regression but somewhat worse than the ridge regression.
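Unlike ridge, the lasso can zero out coefficients entirely, which is what makes it interesting even at small penalties. A hypothetical scikit-learn sketch (the authors worked in R; the data here are synthetic, with only the first five of thirty predictors carrying signal):

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Synthetic data: sparse true coefficients, moderate noise.
rng = np.random.default_rng(3)
X = rng.normal(size=(400, 30))
true_beta = np.zeros(30)
true_beta[:5] = [2.0, -3.0, 1.5, 2.5, -1.0]
y = X @ true_beta + rng.normal(scale=0.5, size=400)

# 10-fold cross-validation over a grid of penalty values, as in the post.
lasso = LassoCV(cv=10).fit(X, y)

print(f"selected penalty: {lasso.alpha_:.6f}")
print(f"nonzero coefficients: {(lasso.coef_ != 0).sum()} of 30")
```

The cross-validated penalty keeps all five informative predictors while discarding most of the noise columns; on the Allstate data the selected penalty was so small that little of this pruning actually occurred.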

IV. GBM

Another stand-alone model we evaluated was a gradient boosting machine. With this method, we tried to get a more accurate model out of an ensemble of sequentially boosted decision trees by adjusting the following parameters:

  • The number of iterations: n.trees
  • The depth of each tree: interaction.depth
  • The learning rate: shrinkage
  • The minimum terminal node size: n.minobsinnode

The main challenge with GBM is finding the best mix of parameters, especially n.trees and shrinkage. As with all our previous models, we used the caret package to make parameter selection easier. The caret package enables parameter tuning through a tuning grid during training: the grid can take multiple values for each parameter and train a model over every combination of parameter values. We trained multiple models to eventually arrive at the best combination of parameters.

We found that with an interaction depth of 10 and 500 trees, MAE was minimized without overfitting the validation set. Interaction depths greater than ten caused the model to overfit early in the iteration process. The lowest MAE we achieved with the best tuning parameters was 1161.49.
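The tuning-grid idea can be sketched with scikit-learn in place of caret. The parameter names map roughly onto the gbm ones listed above (n_estimators ~ n.trees, max_depth ~ interaction.depth, learning_rate ~ shrinkage, min_samples_leaf ~ n.minobsinnode); the data and grid values here are illustrative, not the ones we used.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

# Small synthetic regression problem with a nonlinear signal.
rng = np.random.default_rng(4)
X = rng.normal(size=(300, 5))
y = X[:, 0] ** 2 + X[:, 1] * X[:, 2] + rng.normal(scale=0.1, size=300)

# Every combination in this grid is trained and cross-validated.
grid = {
    "n_estimators": [100, 300],
    "max_depth": [2, 4],
    "learning_rate": [0.05, 0.1],
    "min_samples_leaf": [5],
}
search = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    grid,
    scoring="neg_mean_absolute_error",  # matches the competition metric
    cv=3,
).fit(X, y)

print("best parameters:", search.best_params_)
print(f"cross-validated MAE: {-search.best_score_:.3f}")
```

Cross-validating every combination is what keeps the depth/tree-count trade-off honest: a deeper grid entry that overfits shows up as a worse held-out MAE rather than a better training fit.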

Ensembling

As we have seen, none of the individual models performed especially well at predicting loss. However, in working to reduce each model's MAE, we gained valuable insights. To achieve better predictive performance, we next combined these individual models in an ensemble.

To stack our different models, we used the H2O and h2oEnsemble packages. Stacking in H2O works by training multiple base learners on the original dataset, which can be referred to as the “level-zero” data. The base learners can be many different algorithms, each with its own parameters. Each base learner computes its own predictions from the level-zero data; column-binding those predictions produces the “level-one” data. Another learning algorithm, called the “meta-learner,” is then trained on the level-one data against the original response variable to produce a final prediction that improves on each of the individual models.
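The level-zero/level-one scheme can be sketched with scikit-learn's StackingRegressor in place of our H2O setup. The base learners, meta-learner, and data below are illustrative choices, not the configuration we actually ran:

```python
import numpy as np
from sklearn.ensemble import (
    GradientBoostingRegressor,
    RandomForestRegressor,
    StackingRegressor,
)
from sklearn.linear_model import Ridge

# Synthetic level-zero data with a nonlinear signal.
rng = np.random.default_rng(5)
X = rng.normal(size=(300, 5))
y = X[:, 0] ** 2 + X[:, 1] - X[:, 2] + rng.normal(scale=0.2, size=300)

stack = StackingRegressor(
    estimators=[  # the base learners
        ("gbm", GradientBoostingRegressor(n_estimators=100, random_state=0)),
        ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
    ],
    final_estimator=Ridge(),  # the meta-learner
    cv=5,  # out-of-fold base predictions form the level-one data
).fit(X, y)

preds = stack.predict(X[:5])
print("stacked predictions:", np.round(preds, 2))
```

The `cv` argument matters: the level-one data are built from out-of-fold predictions, so the meta-learner never sees a base learner's fit to its own training rows, which is what protects the stack from simply memorizing the strongest base learner.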

The best ensemble

Using this framework, we tried a range of combinations of base learners and meta-learners. We started by testing ensembles featuring the default linear model, random forest, gradient boosting machine, and neural network, coupling them in turn with linear, random forest, GBM, and neural network meta-learners. At this stage, the ensembles featuring a GBM meta-learner scored best, with an MAE of 1142.

Next, we began adjusting some of the parameters for the base and meta-learners and adding more base learners of the same type but with different parameter values. In this step, we obtained our best result with an ensemble that included three GBM models with different numbers of trees and one customized model of each of the other base learner types. This ensemble yielded an MAE of 1125.

Finally, we tried eliminating some of the weaker-performing base learners, such as the linear and random forest models, in favor of multiple GBM and neural net base learners. Our best-scoring ensemble used four gradient boosting machine base learners and five neural net base learners, with a ridge regression as the meta-learner, and yielded an MAE of 1118.

Conclusions

Looking back on the experience, one conclusion we reached was that building a high-scoring ensemble is as much an art as it is a science, and that parameter tuning is a central part of the enterprise. We also noticed that bigger ensembles tend to perform better, even when they include base learners of the same type with identical tuning parameters. For instance, one of our ensembles included three GBM and three neural net base learners, using the parameters that had yielded the best MAE among individual GBM and neural net models, along with the same ridge meta-learner as our best ensemble; it still scored a worse MAE than that best, larger ensemble, even though the GBM and ridge components had the same parameter tunings.

Finally, from our perspective, the way preprocessing is conducted appears to play a crucial role in whether a model can yield highly accurate predictions.

Cristina Andronescu
Cristina is a recent MIT graduate with a background in quantitative social science. Over the past five years, Cristina has been involved in various experimental and quasi-experimental research projects inside academia and as part of program evaluation work in the non-profit sector, most recently through the Abdul Latif Jameel Poverty Action Lab. During her studies at MIT, Cristina focused on political economy and quantitative methods, taking a strong interest in causal inference, statistical modeling, and a career in data science. At NYCDSA, she used supervised machine learning algorithms and ensemble methods to predict the magnitude of insurance claim losses and built a collaborative-filtering-based movie recommendation system using Python and Spark.
Oamar Gianan
Oamar Gianan has about 15 years of experience in the information technology industry, primarily in cloud computing. He developed a passion for data analysis by working on infrastructure where big data is processed. Before moving to New York, Oamar helped launch enterprise and consumer cloud computing services for a telecommunications company in Manila, Philippines. An avid surfer, his ultimate goal is to create a machine learning model to predict where the best and least crowded breaks will be.
James Lee
James Lee recently graduated from New York University with a B.A. in Economics and a minor in Mathematics. James diversified his interests by taking classes in various fields such as Analytical Statistics, Econometrics, Linear Algebra, Organic Chemistry, and Labor Economics. Eventually, these many interests crystallized into a hunger for the infinite possibilities of data science. James is an up-and-coming data scientist with a passion for research, analysis, and food.
Joseph van Bemmelen
Joseph van Bemmelen worked in equity research for Stifel Nicolaus, a mid-sized investment bank, for close to two years before joining NYCDSA. In his role, he wrote reports on publicly traded companies and worked extensively with financial models to project a company's earnings, which sparked his interest in data science methods. He currently works part-time at an early stage fintech startup and is on the board of the Literacy Program, which connects college student tutors with underprivileged high school students at local New York City public schools. He holds a BA in Economics from Yeshiva University and aims to apply analytics and data visualization to solving everyday problems.
