Fair lending and AI

Evaluating consumers and making credit decisions holistically requires getting your arms around much more data.

May 27, 2022

We in the credit union industry recognize there is strength in diversity. We’ve learned to see that diverse viewpoints and life experiences bring wisdom to our organizations and lasting benefits to credit union members.

We’ve also recognized the importance of bringing greater diversity to those we serve, many of whom have historically been at best disadvantaged and at worst discriminated against by legacy models. But for many financial institutions, cost and technology barriers have made it difficult to upgrade to more accurate and equitable approaches.

Focus

  • Responsible and transparent artificial intelligence (AI) technologies permit fairer, more accurate, and more compliant lending.
  • Leveraging AI to increase lending can transform the lives of those shut out of the credit economy.
  • Board focus: Expanding lending to underserved groups demonstrates a commitment to promoting and empowering diversity.

On March 11, 2022, Bloomberg Businessweek reported that one of the largest banks in the U.S. approved fewer than half of the Black homeowners who applied for mortgage refinancing in 2020. The bank’s response, according to Bloomberg: “[We] treat all potential borrowers the same, [are] more selective than other lenders, and an internal review of the bank’s 2020 refinancing decisions confirmed that ‘additional, legitimate, credit-related factors’ were responsible for the differences.”

Of course, that’s the traditional response to alleged fair lending problems. But continuing to rely on this is getting riskier by the day. Setting the legal issues aside for a moment, we have a moral obligation to do better. 

What’s more, consumers demand it. A Harris Poll conducted in 2020 found that seven out of 10 Americans would be willing to switch to a financial institution they thought was fairer to women and people of color. Indeed, the Bloomberg article discusses numerous Black borrowers who won’t do business with the bank because of perceived bias.

Consumers are ready for what credit unions offer: a lender that wants to make the relationship work. They are tired of lenders that act like clipboard-wielding bureaucrats looking for ways to turn them down.

That kind of lender will find itself losing business to financial institutions that get to know their customers and make good decisions based on all of the relevant facts—and rightfully so. Enter artificial intelligence (AI) and machine learning (ML).

Evaluating consumers and making credit decisions holistically requires getting your arms around a lot more data. Legacy scoring methods like FICO judge consumers on a couple dozen factors. AI- and ML-powered methods, in contrast, allow financial institutions to instantly consider thousands of data points to build the most accurate picture of each borrower.

Financial institutions that invest in AI and ML will be able to approve more loans more quickly and accurately, all while reducing risk and improving member satisfaction.

The AI solution

The traditional response to potential fair lending issues is also becoming less legally defensible. The disparate impact test for fair lending violations has always had three parts.

Stated simply: (1) determine whether a practice has a disparate impact on protected groups, (2) evaluate whether that disparate impact is business-justified, and (3) make sure there is no less discriminatory way of achieving the same business objective.
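
In practice, the first part of the test is often quantified with an adverse impact ratio (AIR): the protected group’s approval rate divided by the control group’s. The following is a minimal sketch in Python; the column names and example data are hypothetical illustrations, not any regulator’s prescribed method.

    import pandas as pd

    def adverse_impact_ratio(decisions, group_col, protected, control,
                             approved_col="approved"):
        """AIR = protected-group approval rate divided by control-group
        approval rate. 1.0 is parity; lower values mean more disparity."""
        rates = decisions.groupby(group_col)[approved_col].mean()
        return rates[protected] / rates[control]

    # Hypothetical decision data, for illustration only:
    decisions = pd.DataFrame({
        "race":     ["Black"] * 10 + ["White"] * 10,
        "approved": [1] * 5 + [0] * 5 + [1] * 8 + [0] * 2,
    })
    print(adverse_impact_ratio(decisions, "race", "Black", "White"))
    # 0.5 / 0.8 = 0.625, well below parity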

Until recently, the traditional response has focused on the second part of the test, namely identifying a business justification for the disparate lending outcomes. In part, that’s because banks, regulators, and lawyers lacked a feasible way to implement the third part of the test by robustly searching for less discriminatory alternative (LDA) underwriting models.

It’s true that responsible lenders tried to use older methods to search for LDAs in the past. They often used a method called “drop one,” which involves removing one variable from the model at a time and recomputing the model’s disparate impact after each removal.
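
For concreteness, here is roughly what that search looks like in code. This is a minimal sketch assuming a scikit-learn-style estimator; X, y, and fairness_fn (which might wrap the AIR helper sketched above) are hypothetical stand-ins, not a production procedure.

    from sklearn.base import clone
    from sklearn.metrics import roc_auc_score

    def drop_one_search(model, X, y, fairness_fn, base_air, base_auc):
        """Refit the model once per feature with that feature removed,
        keeping candidates that reduce disparity without losing accuracy."""
        candidates = []
        for col in X.columns:
            reduced = X.drop(columns=[col])
            candidate = clone(model).fit(reduced, y)
            scores = candidate.predict_proba(reduced)[:, 1]
            auc = roc_auc_score(y, scores)  # in-sample for brevity;
                                            # use a holdout in practice
            air = fairness_fn(candidate, reduced)  # higher AIR = fairer
            if air > base_air and auc >= base_auc:
                candidates.append((col, air, auc))
        # One refit per variable, and only single-variable removals are
        # ever considered, hence "clunky and inefficient."
        return candidates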

But that method rarely uncovered LDAs because it was so clunky and inefficient. Continuing to rely on it, therefore, poses significant legal risk and doesn’t serve members well.

It’s a risky practice because, over the past few years, AI/ML-powered technologies have emerged that allow lenders to search for LDAs far more robustly.

Regulators are aware of these technologies. Civil rights leaders are aware of these technologies. And, increasingly, responsible lending institutions—including the likes of Freddie Mac—are starting to use them.

Studies conducted by Zest AI often find LDA models that approve Black applicants at 90% of the rate of white applicants (versus the roughly 50% seen at large lenders) while still assessing default risk more accurately than the credit union’s benchmark model.

A large credit union in the South increased its approval rate for women by 22% using an LDA. And a small credit union in the Midwest saw approval rates increase more for Black, Hispanic, and women borrowers than for white and male borrowers.

That means the legacy score provider hadn’t been doing enough to evaluate these groups fairly.

From a purely legal perspective, using the benchmark model when LDAs exist puts lenders squarely in violation of part three of the disparate impact test. Regulators are unlikely to bring an enforcement action under that theory tomorrow because they want to avoid disrupting financial markets by forcing immediate changes. 

But how long will they wait? And is prolonging legacy discrimination what’s best for credit union members?

LDAs and AI

Right about now you are probably thinking, “Hold on. Wait a minute. What about all the news about biased AI? What about all the warnings coming from Consumer Financial Protection Bureau Director Rohit Chopra?” 

You’re right to ask those questions. But there are answers. 

Before we get to them, a bit of context: AI/ML-based technologies are a substantial threat to incumbent credit scoring providers, which rely on outdated methods to compute risk scores. This has led to the emergence of bad—even desperate—arguments against the use of AI and ML.

Among them are the myths that AI/ML models are “black boxes” we can’t explain and that only “interpretable” models are safe to use in practice. Both of these arguments have been debunked.

It’s true that some of the earliest players to market AI-powered underwriting products used questionable data sources and substandard methods for explaining their models, and didn’t invest in a robust LDA search process. That led to high-profile situations in which AI and ML were blamed for problems caused by humans who didn’t think things through.

But throwing the baby out with the bathwater would be a mistake when AI/ML, done right, can be such a powerful force for good. AI/ML models are not “black boxes” when Shapley-based explainability techniques are applied.

Those methods allow models to be explained with mathematical certainty, often providing more insight into how the models behave than do traditional systems based on simple math. 
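
As a rough illustration, here is a minimal sketch of a Shapley-based explanation using the open-source shap package. The model, features, and data are synthetic stand-ins for illustration, not any particular vendor’s implementation.

    import numpy as np
    import pandas as pd
    import shap
    from sklearn.ensemble import GradientBoostingClassifier

    # Synthetic stand-in data; real underwriting uses far more inputs.
    rng = np.random.default_rng(0)
    features = ["utilization", "payment_history", "income", "tenure"]
    X = pd.DataFrame(rng.random((500, 4)), columns=features)
    y = (X["utilization"] - X["payment_history"] > 0).astype(int)

    model = GradientBoostingClassifier().fit(X, y)
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X.iloc[[0]])  # one applicant

    # Each value attributes part of this applicant's score to one input;
    # together with the base value, the attributions sum exactly to the
    # model's raw output. The largest negative contributors can supply
    # adverse action reason codes.
    for name, value in zip(features, shap_values[0]):
        print(f"{name}: {value:+.3f}")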

Similarly, when models are adequately documented, financial institutions know what to expect from the models and how to react if things aren’t going as planned.

Model due diligence

The bottom line of this forward-leaning analysis of the current and future state of consumer finance is that responsible, transparent AI/ML technologies permit fairer, more accurate, and more compliant lending.

Ensuring the technology is, in fact, responsible and transparent requires three due diligence items that are nonnegotiable: 

1. Demand transparency through proper documentation. Every model should come with both risk and fair lending documentation that explains exactly how it works, what data was used, and how fair lending issues have been addressed. If the model doesn’t have that, it’s junk and can’t be trusted.

2. Make sure you can trust the data used to train and run your model. Stick with tried-and-true data sources for the time being. Alternative data is great, but if it hasn’t been vetted for compliance it can present risks.

3. Demand fairness. Demand models that have been tested to ensure they are as fair as possible while achieving their business objectives, and that have been subject to proper LDA testing. If not, they might violate fair lending laws and harm your members.

Taking these proactive steps will allow your credit union to leverage AI/ML in a way that can transform the lives of countless Americans who have been unfairly shut out of the credit economy by legacy systems.

It can demonstrate to both your members and your employees a firm commitment to promoting and empowering diversity.

AI/ML presents a rare win-win for both consumers and credit unions: an opportunity to do well and do good at the same time.

THEODORE FLO is chief legal officer at Zest.ai, a CUNA Strategic Services alliance provider.

This article appeared in the Summer 2022 issue of Credit Union Magazine.