How We Can Remove Bias From Credit Data

Thanks to its speed and accuracy advantages over human underwriting, algorithmic lending is now used by 69% of lenders and all of the top banks in the U.S., according to MasterCard. Algorithms, not humans, decide the overwhelming majority of loans made today.

Biased Data Creates Biased Algorithms

AI has spread faster over the past 18 months than perhaps any technology before it, and one of the biggest misunderstandings about it is that it is inherently unbiased. In reality, AI learns from past data and past decision-makers. The algorithms lenders use today were trained, indirectly, on data produced by the human decision-makers of the 1990s and earlier.

A fundamental selection bias runs through all modern lending. Lenders only have outcome data for the loans they approve. Algorithms can be trained to accurately distinguish who will default within that dataset, but they never see data for the loans that were not approved. Any algorithm trained exclusively on approved-loan data will have the same selection bias baked into its decisions.
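The effect is easy to demonstrate in simulation. The sketch below is hypothetical: the feature names, thresholds and probabilities are all invented for illustration, not drawn from any real lender's data. It models a population whose true default risk depends on both a traditional score and an alternative signal the incumbent approval rule ignores, then shows that the denied pool — invisible to any model trained on approved loans — contains a low-risk subgroup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical features: a traditional credit score and an alternative
# repayment signal (e.g., on-time rent history) the incumbent rule ignores.
score = rng.normal(600, 80, n)
alt_signal = rng.random(n)  # 1.0 = strong alternative repayment history

# True default probability depends on BOTH signals (illustrative formula).
p_default = 1 / (1 + np.exp((score - 600) / 50 + 3 * (alt_signal - 0.5)))
defaulted = rng.random(n) < p_default

# The incumbent lender approves on the traditional score alone.
approved = score > 640
denied = ~approved

# Selection bias: outcome data exists ONLY for approved loans,
# so a model trained on it never observes the denied pool.
hidden_prime = denied & (alt_signal > 0.8)
print(f"approved default rate:     {defaulted[approved].mean():.1%}")
print(f"denied overall default:    {defaulted[denied].mean():.1%}")
print(f"denied, strong alt signal: {defaulted[hidden_prime].mean():.1%}")
```

In this toy setup, denied applicants with a strong alternative signal default far less often than the denied pool overall — but no model fit only to approved loans can learn that, because their outcomes were never recorded.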

Fair lending compliance aside, selection bias is a colossal financial concern for lenders who may be denying good loans based on biased data. Identifying this hidden creditworthy group of potential borrowers is impossible without dramatically changing the underwriting process.

Foundational Testing

Foundational testing is the primary way lenders can develop less biased datasets and improve their models. In foundational testing, lenders invest in their model’s future performance by making loans to applicants who would normally be denied under their existing system.

Start by setting aside a pool of funds to make loans to these borrowers, treating it as a long-term investment rather than a source of short-term profit. Ideally, the lender would extend loans to all applicants; at a minimum, it would have to loosen its underwriting thresholds.

Yes, it will result in high default rates within the experiment, but it will allow the model to be calibrated on a cohort that has been systematically, unjustly and incorrectly excluded from lending.
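One minimal way to structure such an experiment — a sketch under assumed parameters, not a description of any lender's actual process — is to approve everyone above the usual cutoff plus a randomly selected sample of normally denied applicants, funded from a capped test budget. The function name, threshold and budget below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def select_loans(scores, threshold=640, test_budget=0.05):
    """Approve applicants above the threshold, plus a random
    'foundational testing' sample of normally denied applicants.

    test_budget is the fraction of the applicant pool funded from
    the experimental pool (a hypothetical parameter).
    """
    scores = np.asarray(scores)
    approved = scores >= threshold
    denied_idx = np.flatnonzero(~approved)
    n_test = int(test_budget * scores.size)
    test_idx = rng.choice(denied_idx,
                          size=min(n_test, denied_idx.size),
                          replace=False)
    approved[test_idx] = True  # fund these loans to collect outcome data
    return approved, test_idx

scores = rng.normal(600, 80, 10_000)
approved, test_idx = select_loans(scores)
```

Sampling the test cohort at random within the denied pool matters: cherry-picking the "best" rejects would simply reintroduce the selection bias the experiment is meant to remove.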

What Happened in Real Life

Our hypothesis at Salus was that analyzing a loan applicant’s credit risk goes well beyond traditional credit scores. Effective analysis will identify low-risk borrowers deemed less than creditworthy by conventional credit scoring: the “hidden prime” group. Hidden prime borrowers would pay back loans at commercially feasible rates if lenders gave them the opportunity.

Salus used its foundational testing dataset to compare its proprietary alternative credit scoring model against conventional credit scores for microloans. The difference was immediately apparent: while a traditional model associated lower credit scores with higher risk within the applicant group, Salus’ scoring did not.

The alternative risk scoring model assigned risk differently and identified many more creditworthy borrowers. In fact, the model reduced default rates by 75% among the top 20% of applicants despite the borrowers’ deep subprime credit scores.

The Salus methodology also produced genuine risk tiers of borrowers. The testing showed that, using credit scores alone, lending to the top 80% of applicants yielded the same default rates as lending to the top 40% or top 20%; no cutoff produced commercially reasonable default rates.
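The tiering check described above can be expressed as a small generic routine. This is not Salus' method — just a sketch of how one might measure whether a score actually separates risk tiers: compute the default rate among the top q fraction of applicants for several cutoffs and see whether the rate falls as the cutoff tightens. The function name and cutoffs are assumptions.

```python
import numpy as np

def tier_default_rates(scores, defaulted, cutoffs=(0.2, 0.4, 0.8)):
    """Default rate among the top-q fraction of applicants by score,
    for each cutoff q. A score that genuinely tiers risk should show
    lower default rates at tighter cutoffs."""
    scores = np.asarray(scores)
    defaulted = np.asarray(defaulted)
    order = np.argsort(-scores)  # best scores first
    return {q: defaulted[order[: int(q * scores.size)]].mean()
            for q in cutoffs}

# Synthetic illustration: one informative score, one uninformative one.
rng = np.random.default_rng(2)
n = 50_000
defaulted = rng.random(n) < 0.3
informative = np.where(defaulted,
                       rng.normal(550, 60, n),   # defaulters score lower
                       rng.normal(650, 60, n))
uninformative = rng.normal(600, 60, n)           # unrelated to outcomes

good = tier_default_rates(informative, defaulted)
flat = tier_default_rates(uninformative, defaulted)
```

With the informative score, the top-20% tier defaults far less than the top-80% tier; with the uninformative score, every cutoff yields roughly the population default rate — the flat-tier pattern the article attributes to credit scores in this applicant pool.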

We strongly believe in credit unions’ mission of people helping people and serving those who cannot find affordable help elsewhere. Alternative credit scoring can better measure risk while also adding targeted services, like microloans paired with financial education, to your credit union’s services for members.

James Chemplavil is the Co-founder/CEO of the Charlotte, N.C.-based Salus, a fintech that helps lenders offer alternatives to predatory payday loans.
