Bias can present itself in a variety of ways within lending. When assessing bias, two questions frequently arise: Was someone treated differently because of who they are? Or did a seemingly neutral practice end up harming one group?
A recent wave of technological innovation has enabled lenders to better understand how distinct subpopulations might perform differently from the kind of borrower they're used to seeing, or that's well represented in their data.
On this episode of The Lending Link, we sit down with Kareem Saleh, CEO and co-founder of FairPlay.ai, which helps lenders identify potential disparities in their decisioning systems, provides options to increase profitability and fairness, and helps them demonstrate to consumers, regulators, and the public that they are taking strong steps to be fair.
Kareem and Rich discuss how to assess algorithms for bias and optimize them for fairness, and cover a number of topics, including:
- The role of alternative data in credit underwriting and machine learning
- What is the adverse impact ratio? How is it determined? How does it function?
- How to identify variables for the protected groups that may not be in your initial model
- Opportunities for lenders to meet the requirements under the Community Reinvestment Act and more!
About Kareem Saleh
Kareem Saleh is the founder and CEO of FairPlay, the world's first Fairness-as-a-Service company. Financial institutions use FairPlay's APIs to embed fairness considerations into their marketing, underwriting, pricing, and loss mitigation algorithms and to automate fair lending testing and reporting. Previously, Kareem served as Executive Vice President at Zest.ai, where he led business development for the company's machine learning-powered credit underwriting platform.
Prior to Zest.ai, Kareem served as an executive at SoftCard, a mobile payments company that Google acquired. Kareem also served in the Obama Administration, first as Chief of Staff to the State Department’s Special Envoy for Climate Change, where he helped manage the 50-person team that negotiated the Paris Climate Agreement, then as Senior Advisor to the CEO of the Overseas Private Investment Corporation (OPIC) where he helped direct the U.S. Government’s $30B portfolio of emerging market investments with responsibility for transaction teams in Europe, Latin America, and the Middle East. Kareem is a Forbes contributor and a frequent speaker on the application of AI to financial services. He is a graduate of Georgetown University Law Center and an honors graduate of the University of Chicago.
Be sure to follow Kareem and our host Rich on LinkedIn, and for the latest GDS Link updates and news, follow us on Twitter and LinkedIn. You can subscribe to the Lending Link on Apple Podcasts, Spotify, Google Play, or wherever you prefer to listen to your podcasts!
Rich Alterman 00:04
You're syncing up and tuning in to The Lending Link Podcast, powered by GDS Link, where the modern-day lender can dive deeper into the future of data decisioning and credit risk solutions. Welcome to the show, everyone. I'm your host, Rich Alterman, and on this episode of The Lending Link, we're sitting down with Kareem Saleh, CEO and co-founder of FairPlay.ai, which helps lenders identify potential disparities in their decisioning systems, provides them with options to increase profitability and fairness, and helps prove to consumers, regulators, and the public that they're taking strong steps to be fair. Kareem is a Forbes contributor and a frequent speaker on the application of artificial intelligence in the financial services sector. He's a graduate of Georgetown University Law Center and an honors graduate of the University of Chicago. In this episode, Kareem and I are going to spend some time talking about fairness in lending, disparate impact and treatment, how FairPlay's offering can benefit lenders and consumers, and so much more. But first, please head over to GDS Link's LinkedIn and Twitter pages at GDS Link and hit those like and follow buttons. And please be sure to subscribe to The Lending Link on Apple Podcasts, Spotify, or wherever you prefer to listen to your podcasts. All right, now let's get synced with GDS Link. Good afternoon, Kareem, and welcome.
Kareem Saleh 01:29
Thanks for having me, Rich. It's great to be here. I'm a longtime admirer of GDS Link.
Rich Alterman 01:34
Well, we appreciate that. And thanks for joining me today. So Kareem, you co-founded FairPlay in 2020. Can you share a bit on your background before starting FairPlay?
Kareem Saleh 01:43
Yeah, my parents are immigrants from Egypt, and like so many immigrants to America, they needed a $12,000 loan to start a small business and couldn't get one. And my mom worked so hard to save up that money she just about died in the process. And I have been interested in this question of underwriting inherently hard-to-score borrowers my whole life: thin files, no files, underwriting under conditions of deep uncertainty. I got started doing that work in kind of frontier emerging markets, so Sub-Saharan Africa, Latin America, Eastern Europe, the Caribbean, and yeah, for the last decade I've been doing it in venture-backed startups.
Rich Alterman 02:25
Yeah, actually on one of our prior podcasts, when we were talking to Dan Quan, he had a somewhat similar story, I think, about coming to this country. And I know that his company, NevCaut Ventures, has invested in FairPlay, so nice to have you on today after we had talked about you a little bit a couple of weeks back. So can you share with our listeners today any special interests you have outside of the office? I know you work pretty hard, but I'm sure you take some time off for yourself and your family.
Kareem Saleh 02:52
Yeah, well, I'm a boxer and a poker player. And I find that both of those sharpen my concentration and keep me from looking at computer screens. But during the pandemic, I got kind of enamored with the lifestyle practices of a Dutch extreme athlete called Wim Hof; he's known to many as the Iceman. And the Wim Hof Method basically holds that regular breathwork and cold plunges make you happier, healthier, and stronger, and that by learning how to breathe and meditate under extremely cold conditions, you can learn to better control your nervous system, including your innate fight-or-flight mechanism, and relieve stress.
Rich Alterman 03:36
So have you taken the plunge?
Kareem Saleh 03:38
I swim almost every morning in the Pacific Ocean right off Venice Beach after I box. I find it improves my focus and reduces my stress. It's a nice way to start the day.
Rich Alterman 03:48
Any item on your bucket list to really test yourself with some real cold water?
Kareem Saleh 03:53
I can go about 45 minutes. But the really hardcore people can go like two hours.
Rich Alterman 04:00
So let's get down to business. On your website, you indicate that you founded FairPlay, and I quote, "in response to calls for greater action against systemic bias," unquote, with part of your mission to help any business that uses an algorithm to make high-stakes decisions about people's lives. You dub your firm the world's first Fairness-as-a-Service company. This is quite a tall order, a noble effort, and a strong claim. Obviously, with your background as you described it to us, I now understand better where some of that's coming from. Can you give us a little more on the backdrop for this calling you have?
Kareem Saleh 04:33
Yeah, well, so I was in the US government in the early part of, kind of, 2012 through 2015, 2016. And a big part of my responsibility there was underwriting development-friendly projects in countries that were foreign policy priorities. So, you know, think about solar and wind farms in Sub-Saharan Africa and small and medium-sized enterprise lending facilities for entrepreneurs in Southeast Asia. And what kind of shocked me was how rudimentary the underwriting was for those loans. And so when I was leaving the government, I went in search of people who were kind of taking a more modern approach to underwriting. And it was around that time that I met my co-founder, a guy named John Merrill, who had been at Google for a bunch of years and at Microsoft before that. And, you know, we started making a point of reading academic papers that would come out from places like Stanford and Carnegie Mellon, looking for new mathematical techniques that would give us an underwriting edge. And a few years ago, probably four or five years ago, we started hearing more and more about algorithmic fairness techniques, which, you know, have as their express purpose doing a better job of assessing the risk of populations that are not well represented in the data, or populations whose data is messy, missing, or wrong. And we happened to have the good fortune of working with a big mortgage originator at the time, and they were kind of willing to let us experiment. And we ran some experiments applying one of these algorithmic fairness techniques to their consumer loan underwriting model. And the results were, like, jaw-dropping. It was like, you could increase your approval rate of black applicants by like 10%, with no corresponding increase in risk. And so when we saw that, we had a light bulb moment. We were like, oh, maybe, you know, maybe this is going to be a promising area that can both allow folks to have an underwriting edge and also do more good in the world.
And then not too long after that, the country witnessed the murder of George Floyd, and there were protests sweeping across the country. And I think many Americans asked themselves at that time, you know, what can I do in my area of influence that might make a difference, and might help remediate some of the systemic inequities that we know exist in financial services? And so, you know, for me and John, that was underwriting. And we thought that maybe we could convince more lenders to use AI fairness techniques as second look underwriting tools, basically doing a check to make sure you didn't decline anybody whose riskiness might have been overstated. And if you get that right, you find more good loans, and you have a fairness benefit.
Rich Alterman 07:52
In that respect, you know, we know the CFPB pushed out a survey several years ago to look at the use of alternative data in credit underwriting as well as machine learning. Were you and John, you know, involved with any of those responses? Did you have any conversations with the CFPB that might be of interest to the audience today?
Kareem Saleh 08:13
We spend a lot of time with the federal financial regulators and state regulators too, and we're fortunate to have folks like David Silberman, who was the longtime number two at the CFPB, on our advisory board, as well as folks like Manny Alvarez, who spent a bunch of years in industry at Affirm, but was also, you know, the Commissioner of Financial Institutions in California. So, I'll tell you, you know, the deputy comptroller at the OCC gave a speech a few weeks ago in which she said that the federal financial regulators, the prudential regulators, are laser-focused on fair lending. And I think every day brings news of some new AI system that's gone off the rails. One of the things that the financial services industry has to contend with is a growing perception in the zeitgeist that, left to their own devices, algorithmic systems will pose a threat either to consumers or to the safety and soundness of the financial system.
Rich Alterman 09:14
You and John both worked over at Zest for many years before you made the decision to move away and start FairPlay. Any key observations that you noted during your career there that maybe were a catalyst for your move?
Kareem Saleh 09:30
Well, I'll tell you, we were among the first people to use complex machine learning algorithms, of the sort that Google uses in search, in consumer loan underwriting. And what we found was that machine learning algorithms are capable of learning the wrong things. Let me just give you one example. So when we first started out, we were lending off our own balance sheet. And we didn't have much data because we had a cold start problem. And so we went out and we acquired some data, and we built an underwriting model. And of course, the target of the underwriting model is to predict where my defaults will be the lowest. We train the model, the model comes back and says, hey, you should go make a bunch of loans in Arkansas. Now, it just so happens that my co-founder John is from Arkansas. And so he happened to know that the regulatory regime in Arkansas was extremely hostile to these kinds of loans. Okay, so we go and we start digging into the data. Why is the model telling us to make loans in Arkansas? Well, we told the model to predict default, but the data set didn't have any loans from Arkansas in it, which meant that it didn't have any defaults from Arkansas. And the algorithm concluded that that meant that loans never went bad in Arkansas. And so that was a big aha moment, in the sense that these systems have to be carefully governed, or they will run your business off a cliff, and potentially do great harm to consumers in the process. The great news is, if you harness them for the right purposes, it's like, you know, going from a Camry to a Formula One car.
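The Arkansas story boils down to a simple data pitfall: a model trained on data with no loans from a state also sees no defaults from that state, and a naive scoring rule reads the absence of data as the absence of risk. Here is a minimal sketch of that trap, using entirely hypothetical loan records (the states and outcomes below are made up for illustration):

```python
# Toy illustration of the cold-start pitfall: a state absent from the
# training data appears to have a 0% default rate.
from collections import Counter

loans = [  # (state, defaulted) -- note: no Arkansas ("AR") rows at all
    ("TX", True), ("TX", False), ("TX", False),
    ("CA", False), ("CA", True),
    ("NY", False),
]

counts = Counter(state for state, _ in loans)
defaults = Counter(state for state, defaulted in loans if defaulted)

def observed_default_rate(state):
    # 0 defaults over 0 loans silently becomes 0.0 -- the
    # "loans never go bad in Arkansas" conclusion.
    return defaults[state] / counts[state] if counts[state] else 0.0

print(observed_default_rate("TX"))  # 0.333... (1 default in 3 loans)
print(observed_default_rate("AR"))  # 0.0 -- no data read as no risk
```

A governed pipeline would instead flag states with zero observations as "unknown risk" rather than letting them score as the safest market.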
Rich Alterman 11:25
So you haven't been around that long. Can you kind of share what the reaction of the market has been, what the adoption rate has been? Clearly, we're going to talk a lot about financial services today, so let's kind of focus on that for the meantime.
Kareem Saleh 11:37
Yeah, we've been extremely fortunate to experience very fast adoption, including by the FinTech industry, which I think both uses more complex models and appreciates a technology solution to a compliance problem that, you know, many in the industry have historically thrown bodies and consultants at. So we're fortunate to work with some of the biggest names in FinTech, like Figure Technologies and Octane and Happy Money, and many others who want the economic, reputational, and regulatory benefits that arise from turning fairness into a competitive advantage.
Rich Alterman 12:19
So in the lending world, your solution is designed to identify disparities in lenders' decisioning systems and provide them with options to increase profitability and, as we said earlier, fairness, and to prove to customers, regulators, and the public that they have taken strong steps to be fair. So let's kind of break this down. When you reference disparities, is it fair to tie this to discussions around disparate impact, disparate treatment, redlining, predatory lending? Kind of like, where's your real focus?
Kareem Saleh 12:46
Yeah, I think those are certainly the common ways that bias manifests in lending. You can broadly think of how to assess bias in lending in one of two ways: Did you treat someone differently because of who they are? Or did you do something that appeared to be objective, but ended up in worse outcomes for one group?
Rich Alterman 13:08
It's interesting. I think back to conversations you'd have with lenders that did in-person lending, in the short-term lending world in particular, people walking into a store, and you would always talk about how part of the value of a custom model was that it would eliminate some of that bias that is there, whether we believe it's there or not, depending on who's standing in front of me. So would you say that models really have not come as far as they can to continue to eliminate that bias, and that's really the value proposition, or one of the value propositions, that you're trying to bring here to the lending community?
Kareem Saleh 13:41
That's right. I think that, you know, there's been a lot of technology advancement that permits you to understand how certain subpopulations that might differ from the kind of customer that you're used to seeing, or that's well represented in the data might perform. And so what's cool about these new developments is that they allow you to train models to be more sensitive to populations who might exhibit credit behaviors and credit characteristics that differ from those you are used to encountering. And it turns out that if you can find enough of those subpopulations and reach them with products that are appropriate, you can make money and increase access to credit.
Rich Alterman 14:31
I was looking at some of the reports on your website. You know, one of the things I came across was this key measure that you use called the adverse impact ratio, or AIR, and something called the related four-fifths rule, or 80% rule. Can you share at a high level how this is calculated and how it's leveraged in your evaluations? And in general, how does your solution work? Please take some time to walk us through how you analyze algorithms for bias and how you optimize algorithms for fairness.
Kareem Saleh 15:00
Yeah, so the adverse impact ratio is a measure of fairness that courts and regulators commonly use to understand if one group experiences a positive outcome, like approval for a loan, at a lower rate than another group. So, at what rate are women approved for mortgages relative to men? Regulators have never specified what thresholds they consider to be fair; so like, at what rate do you have to approve women relative to men to be, quote unquote, fair? In the employment context, courts have articulated something called the four-fifths rule, which basically says: if a protected group experiences the positive outcome, like approval for a job, at at least four-fifths, at least 80%, of the rate of the control group, we will not necessarily find a disparity that justifies regulatory sanction. Below four-fifths, we start to consider that unfair, and we're going to start inquiring into why is that disparity there? Is it there for a legitimate reason? The cool thing is that these new algorithmic fairness techniques I've been telling you about can expressly be programmed to try to minimize that AIR, that adverse impact ratio. And that's actually a technology innovation that we took from the world of self-driving cars. When John and I were thinking about launching our second look solution powered by AI fairness techniques, we looked at the world of self-driving cars. And if you think about it, all algorithms must be given a target, you know, an objective that they seek relentlessly to maximize. So for example, the Facebook algorithm has as its target, or as its objective, keeping the user engaged. And so the Facebook algorithm is going to do whatever it has to do to keep a user engaged, even if the stuff that it's showing them to keep them engaged is bad for their mental health, or for society, right?
And you have this problem in self-driving cars, too. If you told a self-driving car that its mere objective was to get you from point A to point B, it might do that while driving the wrong way down a one-way street, right? Driving on the sidewalk, causing mayhem to pedestrians. So what does Tesla do to make sure that the neural network that powers its self-driving cars doesn't behave that way? It has to give the algorithm two targets: get the passenger from point A to point B, while also respecting the rules of the road. We drew from that playbook at FairPlay to create algorithms which say, hey, predict who's going to default, but while also minimizing disparities for protected groups. And the cool thing is, it works really, really well. We just got done doing a second look engagement for a big installment lender that found that they were going to be able to increase their approval rates by 20% and increase their fairness to black applicants by something like 35%. For that particular lender, that meant something like an additional $150 million of credit originated and an additional $12 million in profit. So these techniques have the opportunity to yield an economic reward as well as regulatory and reputational rewards.
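The adverse impact ratio and four-fifths check Kareem describes are simple to compute. Here is a minimal sketch; the approval counts below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Adverse impact ratio (AIR): the protected group's approval rate
# divided by the control group's approval rate. The four-fifths rule
# treats an AIR below 0.8 as a disparity worth inquiring into.
def adverse_impact_ratio(protected_approved, protected_applicants,
                         control_approved, control_applicants):
    protected_rate = protected_approved / protected_applicants
    control_rate = control_approved / control_applicants
    return protected_rate / control_rate

# Hypothetical example: 300 of 500 women approved vs. 400 of 500 men.
air = adverse_impact_ratio(300, 500, 400, 500)
print(round(air, 2))   # 0.75
print(air >= 0.8)      # False -> below four-fifths, warrants inquiry
```

Note that, as Kareem says, 0.8 is a court-articulated screening threshold from the employment context, not a bright-line regulatory definition of fairness in lending.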
Rich Alterman 18:49
Well, that's quite a lift. We talked in prior podcasts about the benefit of open banking data, right, and how it's bringing more opportunity to people whose credit file may not be truly representative of their capability. So it's interesting to hear that. So as I understand it, when you guys are building these models, or the AI machine is building these models, you're identifying variables or attributes for the protected classes that need to be or should be looked at that are not necessarily in your initial model. So when you're working with lenders, do you then incorporate those new variables into the existing model? Or do you somehow try to have a segmentation tree that is splitting the populations and only applying those other attributes to the protected classes?
Kareem Saleh 19:36
Yeah, great question. So actually, we always start by using the data that the lender or our partner already has. We just do a very specific thing which lenders have been conditioned their whole lives never to do, which is we take the protected attribute into account when we're setting the weights on the variables. So we expose the models to the distribution of protected-class applicants during model training, so that the weights on those models are set to be more sensitive to those groups, but in a way that still preserves their predictive power. And let me give you an example of what I mean by that. So a variable that we often encounter in underwriting models is consistency of employment. And if you think about it, consistency of employment is a perfectly reasonable credit variable on which to assess the creditworthiness of a man. But all things being equal, consistency of employment will necessarily cause a disparity for women between the ages of 18 and 45 who take time out of the workforce to start a family. So maybe what you ought to do is tell the model: hey, you will sometimes encounter a population of people in the world called women. Women will sometimes exhibit inconsistent employment. And before you decline somebody for inconsistent employment, maybe you'd better do a check on all the other variables to see if they resemble good applicants on dimensions that you didn't heavily consider. And so what we find is that using these algorithmic fairness techniques as a second look on your declines allows you to find something like 25 to 33% of opportunities you may have overlooked, because the riskiness of certain populations is overstated by conventional data sources and conventional underwriting techniques.
Rich Alterman 21:39
When lenders are using or looking at your system, especially when we think about banks in particular, is there an opportunity, or have they discussed, your offering providing some opportunity to meet some of the requirements they have under the Community Reinvestment Act?
Kareem Saleh 21:54
Yeah, we're seeing a lot of interest on the part of institutions who have ESG requirements, want to meet their obligations under the Community Reinvestment Act, and want to establish special purpose credit programs designed to increase positive outcomes for historically disadvantaged groups. I think fairness has been a part of the regulatory and operational regime in financial services for 35 years. You know, fairness 1.0 was kind of like calling the lawyers to write clever statistical justifications for disparities. Fairness 2.0, where we're headed as an industry, is we take fairness seriously, we inquire into it rigorously, and when we find issues, we commit ourselves with seriousness of purpose to solving them, because there are increasingly good tools for doing so. And it makes us more money, in addition to allowing us to better serve our customers and the communities that, you know, form our customer base.
Rich Alterman 22:53
So, you know, throughout my career, Kareem, I've been in roles where, you know, I worked with different scorecard vendors. And actually, in one job, I would get the three-inch binder that came over from the scorecard vendors that had all the attributes. And you know, I don't recall ever seeing anything that talked about bias or discriminatory practices. Even today, would you say that when lenders are building models, whether they're building them themselves or contracting with third parties, are disparate impact and disparate treatment really even being considered by those lenders as part of that initial model build? Or is it really something more on the back end, when compliance tends to get involved?
Kareem Saleh 23:35
Yeah, historically, fair lending compliance has been done as a look-back: you put a model into production, and a year later you go back and ask, well, how did it do? But I think as these algorithms are taking over more and more high-stakes decisions in the customer journey, including the marketing decision and the fraud decisions, there is increasing concern that the use of complex machine learning systems higher up in the funnel could distort fairness, could create bias, in ways that you might not necessarily perceive if you're only focused on the underwriting and pricing decisions. And so as a result of that, you're starting to see regulators and examiners and folks who take AI governance and AI ethics seriously focus on debiasing these models on the front end, before you put them into production. You know, if you understand that these systems have a tendency toward bias, not because the people who make them are bad people, but because the bias is embedded in the data, then the reasonable expectation is that you're going to inquire into that bias, see if you can quantify it, and see if you can introduce alternatives to make it fairer. And the good news is the tools increasingly exist to allow you to do that. And there can be great economic benefits to doing it.
Rich Alterman 25:07
So kind of walk us through your sales process. When lenders are considering other data bureaus, they always try to do retro studies, right, take a look at their prior population and see how they behaved. Are you taking that same approach? I think you talked about a look-back. So kind of walk us through, you know, what you're doing to get opportunities of yours to kind of bite.
Kareem Saleh 25:26
Yeah, what we do is we say, hey, you know your core customer better than we do. What we specialize in are these populations that are far from the distribution of applicants that you normally encounter. And we do a very specific thing that you don't, which is we train our models with consciousness of those subpopulations, which allows our models to be more sensitive to those groups. And so keep whatever your incumbent underwriting model is in place, because you understand your core customer better than we do, but route all the declines from that model to a second look model that's been tuned to be fair on populations that are not well represented in the data. And in so doing, see if you can find more good applicants that also allow you to make investments in these communities that really need it.
Rich Alterman 26:21
So could you envision someday, you know, we think about in the electronics world, we think about the UL stamp of approval, right, Underwriters Laboratories. Is one of your goals that there'll be a FairPlay.ai logo on websites to let people know that they use you and that they're more fair than others?
Kareem Saleh 26:40
FairPlay inside? Yes. Yeah, we are in the process of rolling out a kind of, you know, Good Housekeeping seal of approval for algorithmic fairness.
Rich Alterman 26:54
So I know that you were recently featured in a Forbes magazine article about bias in mortgage lending. Without getting into a long discussion, any key findings that you think the audience would be interested in from that article?
Kareem Saleh 27:08
Yeah, look, I mean, there were some really sobering findings. The mortgage market has gotten fairer to women over the last 30 years. It's stayed about the same for Asian Americans, which is to say pretty good. Hispanic Americans do a little better than they did 30 years ago. But the fairness of the mortgage market to black homebuyers basically hasn't budged in 30 years. And for Native American homebuyers, the mortgage market's fairness has decreased by about 15 percentage points. So in 1990, Native Americans used to be approved at 95% of the rate of white Americans; today, they're approved at around 80% of the rate of white Americans. So we have made a lot of progress in some areas of financial services, but we have had stubborn resistance to progress in other areas. And our message to the world is, hey, look, for 30 years we tried to achieve fairness through blindness, this idea that we could just rely on variables that were neutral and objective predictors of credit risk. But it's probably time for us to admit that neutrality is a fallacy. There's bias in the data, and maybe that prohibition on using protected status in underwriting has outlived its usefulness. Maybe it's time for fairness through awareness, where we tune the models to be sensitive to these historically disadvantaged groups, recognizing that the data about them doesn't necessarily represent their true creditworthiness.
Rich Alterman 28:45
Your mission talks about helping any business that uses these algorithms, you know, in the area of fairness. You mentioned hiring, I think, and you just mentioned insurance. Are there other industries that you look at that you think also would fall under that umbrella?
Kareem Saleh 29:01
Yeah, we think there are many domains which require decisions to be made fairly. Financial services and insurance are obviously two, but healthcare decisions have to be made fairly, too. So if you're going to rely on an algorithmic system to make a clinical diagnosis about a patient, you have to prove that that algorithmic decision-making system has been properly validated and isn't discriminating. The employment sector has many decisions that must be made fairly. Government services, like benefits administration and predictive policing: all of those decisions have to be made fairly. And then, increasingly, the evidence suggests that even a low-stakes decision, if you make it a bunch of times, can add up to having a high-stakes impact. So for example, you know, if you think back to the Facebook example, maybe showing a young girl a kind of negative body image on Instagram one time, you might say that's a low-stakes harm. But if you repeatedly show negative body images to a young woman, that kind of low-stakes decision ends up having a high-stakes impact. So we think that just as Google built search infrastructure for the internet and Stripe built payments infrastructure for the internet, so too we need to build fairness infrastructure for the internet, to de-bias digital decisions in real time.
Rich Alterman 30:28
Maybe someday you'll be doing work with match.com, to look at fairness in their algorithms, right?
Kareem Saleh 30:35
That'd be great, yeah.
Rich Alterman 30:36
Before we wrap up, I'll throw one personal question at you. With the holidays coming up. What's your favorite holiday movie? And what does it say about you?
Kareem Saleh 30:45
Oh, Die Hard. And that I'm old school.
Rich Alterman 30:50
Die Hard! That's so funny. My wife keeps saying it's a Christmas movie, and other people say it's not a Christmas movie.
Kareem Saleh 30:56
I perceive it to be a Christmas movie. Right? At one point, I thought, is that right? Yeah.
Rich Alterman 31:04
It definitely is. Well, look, I really appreciate your taking time to chat with us today. I wish you and John much success in your new venture. You're early in your evolution, and I'm sure we're going to see a lot of good things coming out of your company. Once again, this is Rich Alterman, and we've been syncing up with Kareem Saleh, co-founder and CEO of FairPlay. We hope you've enjoyed this podcast and will stay connected with GDS Link's The Lending Link to listen to future podcasts and catch up on ones you missed.