An Exploration of Perkins Loan Default Rate Data

Contributed by Gordon Fleetwood. Gordon took the NYC Data Science Academy 12-week full-time Data Science Bootcamp from September 23 to December 18, 2015. This post is based on his first class project, due in the second week of the program.

Introduction

In the news-cycle-driven churn of modern society, we often get caught up in whatever is being discussed by the talking heads on whichever screen we're looking at. When these avatars cease speaking about an issue, it often disappears from our consciousness as well, especially if it doesn't particularly affect us. Out of sight, out of mind, as it were.

The student debt crisis is one of the issues caught in this revolving door, and it also happens to be one of the biggest problems facing a large portion of young Americans. Unofficial counters place the total student debt at well over one trillion dollars. One major subdivision of this burden is the split between private and federal loans.

Unsurprisingly, federal loan programs are generally more lenient than those provided by the private sector. Before this project, the only financial aid programs I knew about were TAP and FAFSA. It was through the data used in this project that I learned about the Perkins Loan.

Armed with data from 2011-2014, I began my analysis.

The End Goal

My goal in this first project was to explore the default rates associated with this loan through various visualizations.

Methodology

The data source has nine years of data, but only the three most recent years were available in something other than pdf form. Even so, the xlsx files provided were flush with superfluous trappings like conditional formatting and colors, all of which had to be removed before I could load the data into R. Once that manual labor was done, the real work began.
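As a minimal sketch of that loading step (the file names and the use of the readxl package are my assumptions, not details from the original post), the cleaned spreadsheets could be read in like this:

library(readxl)

# Hypothetical file names for the cleaned-up spreadsheets
perkins1112 = read_excel('perkins_2011_12.xlsx')
perkins1213 = read_excel('perkins_2012_13.xlsx')
perkins1314 = read_excel('perkins_2013_14.xlsx')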

My first round of data cleaning mostly involved merging the three yearly files into one data frame. That meant some sensible renaming of columns and adding a column to tag each row with its school year.

# Standardized column names shared by all three yearly files
rename.columns = c('Serial',
                   'OPEID',
                   'Institution.Name',
                   'Address',
                   'City',
                   'ST',
                   'Zip',
                   'Bwrs.Who.Started.Repayment.Previous.School.Year',
                   'Bwrs.In.Default.On.June30',
                   'Cohort.Default.Rate',
                   'Bwrs.In.Default.For.At.Least.240.Days',
                   'Principal.Outstanding.On.Loans.In.Default.For.At.Least.240.Days')

names(perkins1112) = names(perkins1213) = names(perkins1314) = rename.columns

# Tag each data frame with its school year before stacking them
perkins1112$year='11-12'
perkins1213$year='12-13'
perkins1314$year='13-14'

perkins.data = rbind(perkins1112, perkins1213, perkins1314)

My second round of cleaning introduced state-level granularity. It was important here to apply per capita scaling, since the analysis would involve many comparisons between states of varying populations. A quick visit to the US Census Bureau's website provided the necessary csv files. It is at this point that some error was knowingly accepted: the census data covers calendar years, but the Perkins loan data is based on school years. I averaged the populations of the consecutive years in question to match up the two sources as closely as possible, and then merged them. My working dataset was ready.

library(choroplethrMaps)  # provides the state.regions lookup table
library(dplyr)

data(state.regions)
# attach full state names by matching on the state abbreviation
perkins.data = merge(perkins.data, state.regions, by.x='ST', by.y='abb')
perkins.data = tbl_df(perkins.data)
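The per capita scaling itself is not shown in the snippet above. Here is a sketch of how it might look, assuming a census csv with one row per state and one population column per calendar year (the file and column names are hypothetical):

# Hypothetical census file: columns ST, pop2011, pop2012, pop2013, pop2014
census = read.csv('state_populations.csv')

# Average consecutive calendar years to approximate each school year
state.pop = data.frame(
  ST = rep(census$ST, 3),
  year = rep(c('11-12', '12-13', '13-14'), each = nrow(census)),
  population = c((census$pop2011 + census$pop2012) / 2,
                 (census$pop2012 + census$pop2013) / 2,
                 (census$pop2013 + census$pop2014) / 2))

perkins.data = merge(perkins.data, state.pop, by = c('ST', 'year'))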

Results

In structuring the flow of my visualizations, I decided to go from the least granular view to the most granular one. I started by looking at the yearly trend of money owed by those in severe default, which, unsurprisingly, increased year over year.
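The yearly totals behind this plot can be computed with a quick dplyr aggregation; this is my reconstruction of the step, not code from the original post:

# Sum the outstanding principal of severe defaulters for each school year
yearly.owed = perkins.data %>%
  group_by(year) %>%
  summarise(total.owed =
    sum(Principal.Outstanding.On.Loans.In.Default.For.At.Least.240.Days,
        na.rm = TRUE))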

[Figure 2: total principal owed by borrowers in severe default, by year]

A similar temporal visualization, this time of the number of borrowers in severe default, showed the same upward trend.

[Figure 3: number of borrowers in severe default, by year]

Next I looked at state-level data. Using the choroplethr package, I made a series of choropleth maps from this state-level data for the three years. The gif below shows the default rate in each state over that period.
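A single frame of such a map can be sketched as follows. choroplethr's state_choropleth function expects a data frame with a region column of lowercase state names (supplied here by the earlier state.regions merge) and a value column; the grouping below is my reconstruction:

library(choroplethr)

# Average default rate per state for a single school year
map.df = perkins.data %>%
  filter(year == '13-14') %>%
  group_by(region) %>%
  summarise(value = mean(Cohort.Default.Rate, na.rm = TRUE))

state_choropleth(map.df, title = 'Perkins Default Rates, 2013-14')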

[Figure 5: animated choropleth of state-level default rates, 2011-2014]

A second series of maps looked at the average amount of money owed by those in default for more than 240 days.

[Figure 4: average amount owed by borrowers in default for more than 240 days, by state]

To end my exploration I went to the finest level of granularity and looked at all the colleges across all three years as a whole. The highlighted colleges are the ones I thought were interesting, but special emphasis goes to those with a low number of borrowers but a high principal owed.
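A college-level scatter along these lines can be drawn with ggplot2; the axis pairing and log scales are my assumptions rather than a reproduction of the original chart:

library(ggplot2)

# Each point is a college; the interesting outliers sit in the top left,
# with few defaulted borrowers but a large outstanding principal
ggplot(perkins.data,
       aes(x = Bwrs.In.Default.For.At.Least.240.Days,
           y = Principal.Outstanding.On.Loans.In.Default.For.At.Least.240.Days)) +
  geom_point(alpha = 0.4) +
  scale_x_log10() +
  scale_y_log10() +
  labs(x = 'Borrowers in default for 240+ days',
       y = 'Outstanding principal ($)')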

[Figure 6: college-level view of borrowers in default vs. principal owed]

The CUNY system in New York and DeVry in Chicago stood out, as did Johnson & Wales University in Pennsylvania. In fact, the Philadelphia-based institution had the distinction of a high volume of loans for a comparatively low number of borrowers.

Looking at individual states, North Dakota had the most money owed scaled by population, so I took a look at its colleges.

[Figure 7: North Dakota colleges]

California was at the other extreme, with the least money owed per one million people.

[Figure 8: California colleges]

New York ranked forty-eighth by the same metric.

[Figure 9: New York colleges]

Comparing New York and California brings up an interesting observation. The data lists New York's city college system as a whole, while California's equivalent system has its colleges listed individually. This calls into question the way the data was reported by the colleges, and makes one wonder whether the government should establish a reporting standard across the board.

Conclusion

My analysis showed that the northwest of the United States is the most severely indebted to the Perkins loan program, with a few states like Maine and Delaware in a similar position. For the most part, states seem to have their loans under control when viewed from a wide perspective.

This only provides a snapshot of the loan crisis. For the analysis to be hard-hitting, I would need more data. Economic data for each state would be useful, as would tuition costs and cost of living estimates. I hope to expand this analysis in the future.

References

Here are the slides from my presentation: http://slides.com/gfleetwood/nycdsa-perkins-2

And the link to my code: http://bit.ly/1XMX0nw

Gordon Fleetwood
Gordon has a B.A. in Pure Mathematics and an M.A. in Applied Mathematics from CUNY Queens College. He briefly worked for an early-stage startup, where he was involved in building an algorithm to analyze financial data. However, most of his time has been spent working in various roles in academia, the latest being as an Adjunct Mathematics Lecturer. He is equally comfortable in both the Python and R data science stacks, but is strictly Python for software engineering. Outside of traditional data science, he is also interested in soccer analytics and the open data movement. With regard to the latter, he has recently become involved with Beta NYC.
