Data Integration and loan default risk analysis using Machine Learning

Posted by Jun Kui Chen

Updated: Jun 23, 2019

Github link to the project


Marketplace lending is an alternative to traditional financial institutions. It directly pairs borrowers with lenders through online lending platforms. The concept was originally called 'Peer-to-Peer' lending, meaning that individuals get financed by other 'peer' investors. As more institutional investors have joined the market, the term has evolved into 'marketplace lending'.

    The first company to offer marketplace lending was Zopa, a UK company founded in February 2005. In the US, Prosper, founded in February 2006, was the first marketplace lending player. LendingClub joined the competition right after. LendingClub was initially launched as a Facebook application. After receiving $10.26 million in Series A funding in August 2007, LendingClub became a full-scale company. LendingClub went public in 2014, and it is now the largest online lending platform.

    As more and more online lending platforms join the competition, one major issue for investors is how to integrate different formats of data so that they can compare the performance of loans across companies and project the risk of the pools they finance.

Project overview

    The goal of this project is to use machine learning to help integrate different sources of data and predict loan default risk. The first step in data integration is to match column names that share the same meaning. Usually, online lending platforms provide a data dictionary containing the column names and their respective definitions. Often, the column names contain abbreviations or jargon whose meaning is hard for people from other industries to tell without looking at the definitions. Therefore, matching columns based on their definitions is the best way to do it. In many companies, this is still a manual process, and it is time-consuming when there are many columns to pair. Fortunately, with the advancement of NLP algorithms, it is possible to let computers decide the closest match. With that in mind, we turned to BERT, an NLP model developed by Google. Unlike traditional NLP models, which read the text input sequentially, BERT performs bidirectional training for language modeling, and therefore produces better contextual representations for each word. With that capability, BERT is well suited to tasks that pair sentences with similar semantic meaning.

Project infrastructure setup

In this project, different services are containerized in separate Docker containers. The main entry point is a Flask API. The BERT encoding service and the Dash interactive plots are deployed as stand-alone services using Docker. I used bert-as-service to map sentences to fixed-length vectors, and then computed the normalized dot product as a score to rank the similarity of sentences. BERT provides pre-trained models that can be used directly for sentence encoding. In this project, the LendingClub data dictionary is read in first and used as the reference to find the best match for each Prosper column. As an example, when choosing the "Term" column of the Prosper data, which has the definition "The length of the loan expressed in months", the Flask app returns the top 10 closest matches. As expected, the "term" column of LendingClub, which has the definition "The number of payments on the loan. Values are in months and can be either 36 or 60", is ranked first.
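The scoring step can be sketched as follows. In the real pipeline the vectors come from bert-as-service (via `BertClient.encode`); here, small toy vectors stand in for the embeddings, and the column names are illustrative, just to show how the normalized dot product ranks candidate matches.

```python
import numpy as np

def rank_matches(query_vec, ref_vecs, ref_names, top_k=10):
    """Rank reference columns by normalized dot product (cosine similarity)
    between the query definition's embedding and each reference embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    r = ref_vecs / np.linalg.norm(ref_vecs, axis=1, keepdims=True)
    scores = r @ q                             # cosine similarity per row
    order = np.argsort(scores)[::-1][:top_k]   # highest score first
    return [(ref_names[i], float(scores[i])) for i in order]

# In the project, embeddings come from a running bert-as-service server:
#   from bert_serving.client import BertClient
#   ref_vecs = BertClient().encode(list_of_column_definitions)
# Toy stand-ins below, seeded for reproducibility.
rng = np.random.default_rng(0)
ref_vecs = rng.normal(size=(5, 8))
names = ["loan_amnt", "int_rate", "term", "grade", "dti"]
query = ref_vecs[2] + 0.01 * rng.normal(size=8)  # nearly identical to "term"

print(rank_matches(query, ref_vecs, names, top_k=3)[0][0])  # → term
```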

Top 10 match for Prosper column "Term"  

I also performed EDA to assess the origination and loan statistics for LendingClub loans. I used Dash with Plotly Express to create interactive dashboards.
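The summaries behind those dashboards amount to simple aggregations over the loan file. A minimal sketch, using a tiny stand-in table (real LendingClub files have ~150 columns; the column names below follow the LendingClub data dictionary):

```python
import pandas as pd

# Toy stand-in for a LendingClub loan file.
loans = pd.DataFrame({
    "issue_d":   ["Jan-2018", "Jan-2018", "Feb-2018", "Feb-2018", "Feb-2018"],
    "loan_amnt": [10000, 15000, 8000, 12000, 20000],
    "grade":     ["A", "B", "A", "C", "B"],
})

# Monthly origination volume and loan count -- the kind of summary fed
# into the Dash/Plotly Express charts, e.g.:
#   px.bar(monthly.reset_index(), x="issue_d", y="sum")
monthly = loans.groupby("issue_d")["loan_amnt"].agg(["sum", "count"])
print(monthly.loc["Feb-2018", "sum"])  # → 40000
```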

To predict the default risk of loans, I trained a random forest model on the data. The model also produced feature importances. As the results show, the top feature determining default risk is the outstanding principal, followed by the interest rate, the debt-to-income ratio, and the credit history of the borrower.
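Extracting feature importances from a random forest is straightforward with scikit-learn. A minimal sketch on synthetic data (the feature names are illustrative stand-ins for the cleaned LendingClub columns, and the toy target is built to depend mostly on the first feature):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the prepared loan features.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))
# Make the toy default label depend mostly on the first feature.
y = (X[:, 0] + 0.2 * rng.normal(size=500) > 0).astype(int)

features = ["out_prncp", "int_rate", "dti", "earliest_cr_line_age"]
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Importances sum to 1; sort to get the ranking shown in the figure.
ranking = sorted(zip(features, model.feature_importances_),
                 key=lambda t: t[1], reverse=True)
print(ranking[0][0])  # → out_prncp (the dominant signal in this toy data)
```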

Feature importance in random forest model


For the future direction of this project, I would like to fine-tune the BERT model to increase the matching accuracy. Currently, the Flask API allows uploading CSV files of LendingClub loans and returns loan status predictions. In the future, I want to integrate the dashboard generation and loan status prediction, so that when a CSV file is uploaded, the app generates a full report.
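The upload-and-predict flow can be sketched as a minimal Flask endpoint. The route name, form field name, and the simple threshold rule standing in for the trained random forest are all illustrative assumptions, not the project's actual implementation:

```python
import io

import pandas as pd
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # Read the uploaded CSV into a DataFrame (field name "file" is assumed).
    upload = request.files["file"]
    df = pd.read_csv(io.BytesIO(upload.read()))
    # Stand-in for the trained random forest: flag loans whose outstanding
    # principal is more than half the original loan amount as risky.
    risky = df["out_prncp"] / df["loan_amnt"] > 0.5
    preds = risky.map({True: "Charged Off", False: "Fully Paid"})
    return jsonify(predictions=preds.tolist())
```

A quick check with Flask's built-in test client: posting a two-row CSV returns one prediction per loan.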

Jun Kui Chen

Jun obtained his Ph.D. in Immunology from Columbia University. He is currently working at a fintech startup as a Data Analyst.


