Right about now you are probably thinking, “Hold on. Wait a minute. What about all the news about biased AI? What about all the warnings coming from Consumer Financial Protection Bureau Director Rohit Chopra?”
You’re right to ask those questions. But there are answers.
Before we get to them, a bit of context: AI/ML-based technologies pose a substantial threat to incumbent credit scoring providers, which rely on outdated methods to compute risk scores. That threat has led to the emergence of bad, even desperate, arguments against the use of AI and ML.
Among them are the myths that AI/ML models are “black boxes” we can’t explain and that only “interpretable” models are safe to use in practice. Both of these arguments have been debunked.
It’s true that some of the earliest players to market AI-powered underwriting products used questionable data sources and substandard methods for explaining their models, and didn’t invest in a robust less discriminatory alternative (LDA) search process. That led to high-profile situations in which AI and ML were blamed for problems caused by humans who didn’t think things through.
But throwing the baby out with the bathwater would be a mistake when AI/ML, done right, can be such a powerful force for good. AI/ML models are not “black boxes” when Shapley-based explainability techniques are applied.
Those methods allow models to be explained with mathematical certainty, often providing more insight into model behavior than traditional scoring systems built on simpler math.
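To make that concrete, here is a minimal sketch of Shapley-based explanation using the open-source shap package and a toy scikit-learn model. The feature names, data, and model are illustrative assumptions, not the systems described in this article; the point is the mathematical guarantee that each score decomposes exactly into a baseline plus per-feature contributions.

```python
# Illustrative sketch only: hypothetical features and synthetic data,
# not a production credit model.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical applicant features (assumed names, for demonstration).
feature_names = ["income", "utilization", "months_of_history"]
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])  # per-feature contributions, one row per applicant

# The guarantee behind "mathematical certainty": the baseline plus the
# contributions reproduce the model's raw score for every applicant.
raw_scores = model.decision_function(X[:5])
assert np.allclose(explainer.expected_value + shap_values.sum(axis=1), raw_scores)

for i, contribs in enumerate(shap_values):
    print(f"applicant {i}:", dict(zip(feature_names, np.round(contribs, 3))))
```

Because the decomposition is exact rather than approximate, a lender can state precisely which factors raised or lowered any individual applicant’s score, which is what makes adverse action reasoning and model monitoring tractable.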
Similarly, when models are adequately documented, financial institutions know what to expect from the models and how to react if things aren’t going as planned.
The bottom line from this forward-leaning analysis of the current and future state of consumer finance is that responsible, transparent AI/ML technologies permit fairer, more accurate, and more compliant lending.
Ensuring the technology is, in fact, responsible and transparent requires three nonnegotiable due diligence steps:
1. Demand transparency through proper documentation. Every model should come with both risk and fair lending documentation that explains exactly how it works, what data was used, and how fair lending issues have been addressed. If the model doesn’t have that, it’s junk and can’t be trusted.
2. Make sure you can trust the data used to train and run your model. Stick with tried-and-true data sources for the time being. Alternative data is great, but if it hasn’t been vetted for compliance it can present risks.
3. Demand fairness. Insist on models that have been tested to ensure they are as fair as possible while still meeting their business objectives, and that have been subject to proper LDA testing (see the sketch after this list). Models that haven’t been tested this way may violate fair lending laws and harm your members.
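The article doesn’t prescribe a particular testing methodology, but as a purely illustrative sketch, here is one widely used fair lending disparity check: the adverse impact ratio (AIR), the protected group’s approval rate divided by the control group’s. The data and the 0.8 review threshold (the classic “four-fifths rule”) are assumptions for demonstration, not Zest.ai’s method.

```python
# Illustrative fairness check: adverse impact ratio on hypothetical decisions.
import numpy as np

def adverse_impact_ratio(approved: np.ndarray, protected: np.ndarray) -> float:
    """approved: 1/0 approval decisions; protected: 1 for the protected group."""
    rate_protected = approved[protected == 1].mean()
    rate_control = approved[protected == 0].mean()
    return rate_protected / rate_control

# Hypothetical decisions from a candidate model (made-up data).
approved = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])
protected = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

air = adverse_impact_ratio(approved, protected)
print(f"AIR = {air:.2f}")  # ratios well below ~0.8 typically trigger review

# An LDA search extends this idea: among candidate models with comparable
# predictive power, prefer the one with the least disparity.
```

A disparity metric like this is only the starting point; the point of LDA testing is to keep searching for alternative models that preserve accuracy while shrinking the gap.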
Taking these proactive steps will allow your credit union to leverage AI/ML in a way that can transform the lives of countless Americans who have been unfairly shut out of the credit economy by legacy systems.
It can also demonstrate, to both your members and your employees, a firm commitment to promoting diversity.
AI/ML presents a rare win-win for both consumers and credit unions: an opportunity to do well and do good at the same time.
THEODORE FLO is chief legal officer at Zest.ai, a CUNA Strategic Services alliance provider.
This article appeared in the Summer 2022 issue of Credit Union Magazine.