Predicting Lending Risk with Machine Learning Models


This scenario asks students to provide recommendations on the use of algorithms and machine learning models to make lending decisions at a bank in Germany. Contextually, past decisions based on these techniques have had questionable results.



The objective of this case study discussion is to provide students with an understanding of ethical issues in the use of data and algorithms for banking and credit assessment. The case discussion is set up as a role-play in which students are assigned different roles as members of an expert committee that advises a loan manager. The scenario is slightly futuristic in that it asks students to think about alternative data sources and ways of analyzing data. This version of the case is broad, but there is potential to use it as an introduction to actual analysis of loan data. Many such datasets are available online through platforms such as Kaggle and are used regularly in upper-level or graduate-level courses (links below). 

Role-Play Scenario Narrative and Description of Roles 

A loan manager at Erstes Darlehen und Kredit (EDK) Bank (First Loan & Credit Bank) of Germany, Nina Pritchard, is requesting a statistical model to help her department determine which loan applicants are creditworthy, i.e., most likely to repay their loans. Typically, loan managers consider an applicant's demographic and socio-economic profile before making a decision. Nina's approach is to work with a team whose members can provide her with different kinds of information, and to use that information to minimize risk so that the bank's profit is maximized. 

The German and EU guidelines for credit risk analysis are quite stringent, but lately, there has been a move toward loosening the guidelines to be more inclusive in providing loans and toward utilizing new datasets and analysis techniques, including data mining and machine learning, to arrive at decisions. Since not all analytical approaches are allowed by regulators, the decision-making process can be complex.  EDK was founded to leverage these newer rules and regulations to provide credit and loans largely to first-time applicants who might not be served by other banks or who might take longer to get approved. 

As a relatively new and small bank, EDK outsources many of its services and much of its expertise to outside consultants from across the industry. This helps it keep costs low while serving consumers in small cities and towns, as well as those usually underserved by large banks because of their risk profiles. When it was formed, EDK undertook detailed analyses and determined that many applicants who had been denied credit by larger banks had decent risk profiles and were unlikely to default on their loans. Serving them required additional due diligence and the ability to use data in new ways that many traditional banks were unwilling to try. 

Nina has sent an urgent request to her team for information on one applicant, Murat Yilmaz, because the 30-day deadline for deciding on Yilmaz's application is fast approaching. There are many applicants with similar characteristics, and Nina hopes that working on this application closely will help her make better decisions on similar cases. In particular, she is interested in the probability that Yilmaz will repay the loan in full, and she wants to understand which parameters of an applicant's profile are the best predictors of repayment. If she can develop a good model of the driving factors (or driver variables) behind loan default, EDK can use that knowledge for its portfolio and risk assessment. Nina knows that she must be careful to prevent biases as she develops this model: although risk mitigation is important, it is equally crucial not to discriminate against applicants on the basis of demographic factors. She has already noticed that the system appears to favor male applicants over female ones: Murat was greenlighted, but another applicant with a similar profile, Sabrina Mann, was denied. 
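The kind of model Nina envisions can be illustrated with a short sketch. The code below is a hypothetical illustration using synthetic data and scikit-learn; the feature names (income, credit history, debt) and the logistic-regression approach are assumptions for demonstration, not EDK's actual variables or method. It fits a model to estimate repayment probability, reads the coefficients as candidate driver variables, and runs the kind of simple demographic audit of approval rates that surfaced the Murat/Sabrina disparity.

```python
# Hypothetical sketch: estimating repayment probability and auditing for
# demographic disparity. Feature names and data are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic applicant features: normalized income, credit-history length,
# debt ratio, plus a protected attribute (0 = female, 1 = male).
income = rng.normal(0, 1, n)
history = rng.normal(0, 1, n)
debt = rng.normal(0, 1, n)
gender = rng.integers(0, 2, n)

# In this simulation, repayment depends only on the financial features.
logit = 1.2 * income + 0.8 * history - 1.0 * debt
repaid = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([income, history, debt])
model = LogisticRegression().fit(X, repaid)

# Coefficients suggest the "driver variables" behind predicted repayment.
for name, coef in zip(["income", "history", "debt"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")

# Fairness audit: compare approval rates (P(repay) > 0.5) across genders.
approve = model.predict_proba(X)[:, 1] > 0.5
rate_m = approve[gender == 1].mean()
rate_f = approve[gender == 0].mean()
print(f"approval rate, male: {rate_m:.2f}, female: {rate_f:.2f}")
```

Even when a protected attribute is excluded from the model, as here, disparities can still arise through correlated proxy variables in real data, which is exactly the kind of audit the committee would need to discuss.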

To assist her with the decision on Yilmaz's application and to develop a better understanding of how to approach this task, Nina has asked the following people to provide her with their individual opinions on the topic, to discuss the issue, and to reach some form of consensus recommendation on whether Yilmaz's loan should be approved or declined. She has also shared with them a dataset she has created with the information they can use to make the decision. 

Here is the team of experts assisting Nina (the roles for discussion): 

Michael Rhode is a data analyst at EDK and works closely with Nina on data collection and cleaning. Although he has some experience and expertise with data analysis, it is a skill he has acquired by playing around with data in his own time. Primarily, he is a database person, and before joining EDK, he worked as a data administrator for a large bank for over a decade. His other expertise is in data security, and at EDK, he is admired and valued for ensuring that their data remains secure. He is a traditionalist who appreciates data quality over everything else – it’s not data but good data that matters. 

Claudia Müller is an underwriting specialist whose primary role is to ascertain whether all the documentation and paperwork submitted by an applicant is appropriate and comprehensive. After working in medium and larger credit organizations for over two decades, she now works as a consultant, and EDK is one of her largest clients. She is committed to their success, and having experienced the myriad of ways in which loan applicants are rejected, often because of loopholes in the system, she is also committed to finding new ways to be more inclusive. She knows credit data well and is always willing to learn about new ways to get more information from that data. 

Thomas Schmidt is the chief credit innovation officer at EDK. He is new to EDK and has been hired for his reputation for coming up with innovative financial products with huge profit margins. He is an aggressive marketer and believes in selling his ideas both internally and externally. He is on a mission to convince Nina, and others, to use Machine Learning techniques to get more out of data and to move toward collecting or buying other datasets of applicants’ digital footprints to make the best decisions for EDK. By best, he means most profitable and least risky. No matter what the issue, his agenda is clear – profits.

Anja Fischer is a research analyst at a financial technology company that focuses on using artificial intelligence to make banking decisions. The company is a stealth-mode start-up, and EDK is both a client and a partner organization hoping to put many of the start-up's ideas and algorithms to the test. The start-up hopes to use its seed funding to prove that it can increase profitability for credit organizations while also ensuring the process is fair and can easily pass scrutiny by regulatory agencies. Anja is especially interested in creating a process flow that allows for transparency in decision-making. She often finds herself having to balance profit-making tendencies with social justice goals. 

Stefani Meyer is a loan process regulatory officer in the central regulatory authority and works for a new office formed to better regulate algorithmic decision-making for lending. The regulatory authority serves as a third party that certifies technology and ensures it meets the criteria for being fair and just. Stefani has worked for almost a decade as a regulator but has been busy trying to keep up with all the new forms of data available to credit companies and their use of algorithms for decision-making. Originally trained as a mathematician, she holds a graduate degree in statistics; she has a strong foundation but finds it hard to keep pace with new techniques. Right now, her main goal is to avoid a major mistake: approving something that proves severely problematic, or denying something that would actually be effective.  

Kwame Alexander is the director of AI in finance at the Berlin Institute of AI Ethics (BIAIE) and previously worked at the Google offices in Amsterdam. He is an expert in machine learning and data mining, with a PhD from University College London. While living in the Netherlands, he volunteered at a refugee center to assist asylum seekers with their paperwork and realized that lack of access to credit was a major barrier for refugees settling in a new country. In his spare time while working at Google, he took courses to better understand the finance space, and he jumped at the opportunity to join BIAIE. He is currently working on methods to improve the transparency of lending algorithms.  


Cases in this collection are also available on the Mason Tech Ethics website.

Aditya Johri. Predicting Lending Risk with Machine Learning Models. Online Ethics Center. DOI:

Role-play Instructions

1. Each student is assigned a role a week before the discussion. 

2. Students assigned the role of Nina serve as moderators and lead the conversation based on the script below.

3. The script below is meant to guide the discussion, but you should leave room for the conversation to flow naturally and allow everyone to contribute.

Role-play script (for Nina)

1.    What role are you playing in the role-play group discussion? Please state the name and title, and describe the role in your own words (a couple of sentences). 
[to be answered by each group member individually and in sequence]

2.    From the perspective of your role, what do you consider the best approach to deciding on a loan application: what factors should be considered, and how should those factors be weighted (what should get more importance)? 
[to be answered by each group member individually and in sequence]

3.    What decision should Nina take on Yilmaz's loan: should it be approved or declined? What additional information would you recommend Nina try to obtain to make the decision, keeping in mind that there is not much time left to acquire it?
[to be answered by each group member individually and in sequence]

4.    What is your overall group recommendation to Nina?
[open discussion, anyone can chime in]

One way to ensure students are prepared for the discussion is to assign a few questions from the script as a pre-discussion assignment (short answers). Similarly, to ensure students reflect on the discussion, they can be assigned the last question from the script as a post-discussion exercise. They can also be asked specifically about the concepts or concerns considered in making a loan decision. 

Reflective Exercise 

[This can be individual or group]
-    What solution was reached following the discussion?
-    What criteria were considered to reach this solution?
-    Was the solution agreed to by all or did one person have more influence? Why?
-    Do you personally agree with the solution reached? Why/Why not?
-    Did playing a role help you or change your perspective (before vs. after the discussion)?

Dataset for Additional Analysis

Lee, M. S. A., & Floridi, L. (2021). Algorithmic fairness in mortgage lending: From absolute conditions to relational trade-offs. Minds and Machines, 31(1), 165–191. (Link to data used:)


Klein, A. (2020). Reducing bias in AI-based financial services. Brookings Institution.
World Bank's Credit Scoring Approaches Guidance

Background Resources

•    Susan Etlinger, "What do we do with all this big data?" TED Talk 

•    Cathy O'Neil, "The era of blind faith in big data must end," TED Talk 

•    Shivani Siroya, "A smart loan for people with no credit history (yet)," TED Talk

•    Michael Volpe, "Experts say artificial intelligence contributes to discrimination in lending," In The News, July 10, 2019

•    New York Times, "Is an algorithm less racist than a loan officer?"

•    Townson, S., "AI can make bank loans more fair," Harvard Business Review (2020) 

•    Berg et al., "The rise of FinTechs: Credit scoring using digital footprints," NBER Working Paper 

Authorship and Project Information and Acknowledgements

The scenarios and roles were conceptualized and written by Aditya Johri. Feedback was provided by Ashish Hingle, Huzefa Rangwala, and Alex Monea, who also collaborated on initial implementation and empirical research. This work is partly supported by U.S. National Science Foundation Awards #1937950, #2335636, and #1954556, and USDA/NIFA Award #2021-67021-35329. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funding agencies. The research study associated with the project was approved by the Institutional Review Board at George Mason University.