A case in point is Upgrade, a San Francisco-based challenger bank that provides mobile banking, personal loans, hybrid debit and credit cards, credit builder cards, auto loans and home improvement loans to 5 million consumers. Upgrade has partnered with FairPlay to build fairness into its credit decisions.
“What [the partnership with FairPlay] accomplishes for us is ensuring fairness and compliance and making the right credit decisions that don’t disparately impact any protected category,” Upgrade founder and CEO Renaud Laplanche said in an interview. Upgrade applies FairPlay to all of its credit products.
The banking, fintech and banking-as-a-service ecosystems have recently come under intense regulatory scrutiny, and fair lending is at the top of the list of concerns for boards of directors and supervisors.
These concerns are not new. Financial companies have been leveraging AI in their lending models for years, and regulators have made clear from the start that those models must comply with all applicable laws prohibiting discrimination based on characteristics such as race, including the Equal Credit Opportunity Act and the Fair Housing Act.
But proving that AI-based lending models are non-discriminatory is a new frontier.
“There is a growing consensus that if we want to take advantage of AI and big data, we need to take seriously the biases inherent in these systems,” FairPlay founder and CEO Kareem Saleh said in an interview. “We need to rigorously examine these biases and, where we find problems, address them with seriousness and purpose.”
Saleh said Upgrade is demonstrating tremendous leadership for both the company and the industry in strengthening compliance technology in this area.
Upgrade uses a machine learning technique called gradient boosting to make loan decisions. (Behind the scenes, the company’s personal loans and auto refinance loans are made by partners Cross River Bank and Blue Ridge Bank. Home improvement loans and personal lines of credit are also made by Cross River Bank, which issues the Upgrade Card.) Approximately 250 banks purchase Upgrade’s loans.
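To make the mechanics concrete, the sketch below shows what a gradient-boosted approve/decline model can look like in scikit-learn. It is not Upgrade’s actual system; the data file, feature names and the 0.80 approval cutoff are hypothetical.

```python
# Minimal sketch of a gradient-boosted credit decision model (hypothetical
# data, feature names and approval cutoff; not Upgrade's production system).
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

applications = pd.read_csv("historical_applications.csv")  # hypothetical file
features = ["income", "debt_to_income", "credit_score", "employment_years"]
X = applications[features]
y = applications["repaid"]  # 1 = loan repaid, 0 = defaulted

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)
model.fit(X_train, y_train)

# Score a new application and apply the (hypothetical) approval cutoff.
new_app = X_test.iloc[[0]]
prob_repay = model.predict_proba(new_app)[0, 1]
print("approve" if prob_repay >= 0.80 else "decline")
```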
Banks that buy loans from Upgrade and other fintechs look for evidence of compliance with the Equal Credit Opportunity Act and other laws regulating lending. On top of that, Upgrade has its own compliance requirements, as do the banks it partners with and the banks that purchase its loans. FairPlay’s API monitors all of this, backtesting and monitoring the model for any signs that it could be negatively impacting a protected group.
One aspect of the software that attracted Laplanche was its ability to monitor in real time.
“It’s much more effective and easier to use in this respect, as opposed to doing periodic audits and sending the data to a third party and getting the results back weeks or months later,” Laplanche said. “We have a continuous service here that is always running and can detect signals very quickly and can help us make adjustments very quickly. I like the fact that it’s not a batch process.”
FairPlay’s software is most commonly used to backtest lending models: the model is run on loan applications from two years ago to see how it would have performed had it been in operation at that time.
“Then it will be possible to make reasonable estimates about how the results of that model will affect different groups,” Saleh said.
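As a rough illustration of that kind of backtest (not FairPlay’s actual methodology), the sketch below replays the model from the earlier example on two-year-old applications and compares approval rates across groups, using the common four-fifths (80%) adverse impact ratio benchmark. The file name, the protected attribute and the thresholds are assumptions.

```python
# Backtest sketch: replay the fitted model on historical applications and
# compare group outcomes. Continues the earlier sketch (`model`, `features`).
historical = pd.read_csv("applications_2022.csv")  # hypothetical file
historical["approved"] = (
    model.predict_proba(historical[features])[:, 1] >= 0.80
)

# Approval rate for each group of a protected attribute (here, sex).
rates = historical.groupby("sex")["approved"].mean()

# Adverse impact ratio: each group's rate relative to the most-favored group.
air = rates / rates.max()
print(air[air < 0.80])  # groups falling below the four-fifths benchmark
```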
If backtesting reveals an issue, such as disproportionate lending to white men over women or minorities, the software can be used to determine which variables are driving the different outcomes for different groups.
Once they are identified, the question is, “Do we need to rely on those variables as much as we have been?” Saleh said. “Are there other variables that may be similarly predictive but have less influence in driving disparities? All of these questions can only be asked once you take the first step of testing the model and determining what the outcomes will be for all of these groups.”
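One common way to ask which variables are driving the gap (a generic technique, not necessarily what FairPlay does) is to compare per-feature attributions, such as SHAP values, across groups. The sketch below continues the earlier examples; the column names remain hypothetical.

```python
# Compare average SHAP attributions per feature between groups; large gaps
# point to the variables pushing one group's scores down relative to another's.
import shap  # third-party package: pip install shap

explainer = shap.TreeExplainer(model)  # model from the earlier sketch
shap_values = explainer.shap_values(historical[features])

attributions = pd.DataFrame(shap_values, columns=features)
attributions["sex"] = historical["sex"].values

group_means = attributions.groupby("sex").mean()
disparity_drivers = (group_means.max() - group_means.min()).sort_values(ascending=False)
print(disparity_drivers)
```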
For example, a woman who takes a few years off work to raise children may have inconsistent income, which can look like a big red flag to a loan underwriting model. But information about women’s credit performance can be used to adjust variable weights in ways that make the model more sensitive to women as a class, Saleh said.
Black people who grew up in areas without bank branches and relied primarily on check cashers are less likely to have high FICO scores and may not have a bank account. In such cases, Saleh said, models could be adjusted to reduce the influence of credit scores and give more weight to consistent employment.
Such adjustments “allow the model to capture these populations to which it was previously unable to respond due to overreliance on specific information,” Saleh said.
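A very simplified version of that kind of adjustment, continuing the earlier sketches, is to train a candidate model that leans less on the credit score and then compare predictive power and group outcomes side by side. This is only a sketch under the same hypothetical names; real remediation would be more careful about preserving predictive accuracy and documenting the change.

```python
# Candidate model that drops the credit score and leans on employment history,
# compared with the baseline on AUC and adverse impact ratio (AIR).
from sklearn.metrics import roc_auc_score

reduced_features = ["income", "debt_to_income", "employment_years"]
candidate = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)
candidate.fit(X_train[reduced_features], y_train)

def approval_air(m, feats):
    """Minimum group approval rate divided by the maximum, on the backtest set."""
    approved = m.predict_proba(historical[feats])[:, 1] >= 0.80
    rates = pd.Series(approved).groupby(historical["sex"].values).mean()
    return (rates / rates.max()).min()

for name, m, feats in [("baseline", model, features),
                       ("candidate", candidate, reduced_features)]:
    auc = roc_auc_score(y_test, m.predict_proba(X_test[feats])[:, 1])
    print(f"{name}: AUC={auc:.3f}  AIR={approval_air(m, feats):.2f}")
```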
FairPlay backtests can be run on all types of underwriting models, from linear regression and logistic regression to advanced machine learning models, Saleh said.
“These days, AI models are at the center of the action,” Saleh said. “More advanced AI models are harder to explain, which makes it harder to understand what variables are driving a decision. They can also consume more information, and that information can be cluttered, missing or incorrect. So fairness analysis is much more nuanced than in the world of relatively explainable models and data that are largely present and accurate.”
FairPlay monitors the results of a customer’s model, so it can detect unfair behavior and suggest changes or fixes.
“When equity starts to deteriorate, we try to understand why,” Saleh said. “How do we ensure underwriting fairness in a dynamically changing economic environment? These are questions that have never really been asked or addressed before.”
FairPlay started offering real-time monitoring relatively recently. With technology and economic conditions changing rapidly, “one-time tests are no longer sufficient,” Saleh said.
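In spirit, continuous monitoring replaces a batch audit with a check on every decision as it happens. The sketch below is a generic illustration of that idea, not FairPlay’s implementation; the window size, alert threshold and group labels are assumptions.

```python
# Rolling fairness monitor: keep a window of recent decisions and alert when
# the adverse impact ratio across groups drops below a chosen threshold.
from collections import deque
import pandas as pd

WINDOW = 5000           # hypothetical number of recent decisions to track
ALERT_THRESHOLD = 0.80  # hypothetical four-fifths-style alert level
recent = deque(maxlen=WINDOW)  # (group, approved) pairs

def record_decision(group: str, approved: bool) -> None:
    recent.append((group, approved))
    df = pd.DataFrame(recent, columns=["group", "approved"])
    rates = df.groupby("group")["approved"].mean()
    if len(rates) > 1:
        air = (rates / rates.max()).min()
        if air < ALERT_THRESHOLD:
            print(f"fairness alert: adverse impact ratio has fallen to {air:.2f}")

# Example: record each live decision as it is made.
record_decision("female", True)
record_decision("male", True)
```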
Patrick Hall, a George Washington University professor who has worked on the NIST AI risk management framework, said technologies like FairPlay’s are important, and he considers FairPlay’s software a reliable tool.
“People are definitely going to need better tools,” Hall said. “But to be truly effective, they have to align with your processes and culture.”
A good modeling culture and process includes having diversity on the team building the models.
“The more diverse your team is, the fewer blind spots you have,” Hall said. This doesn’t just mean demographic diversity; it also means having people with a wide range of skills, including economists, statisticians, and psychometricians.
A good process includes transparency, accountability, and documentation.
“It’s just old-fashioned governance,” Hall said. “If you train this model, you need to create documentation for that model. You need to sign off on that documentation. There can be real consequences if the system doesn’t work as intended.”