

How often should I retrain my model?
It’s a question we get from nearly every client with a credit model in production. Once you’ve launched a model, how often should you revisit it? Is there a fixed schedule you should follow? Or is it only necessary when something breaks? As with many modeling questions, the answer is: it depends. There are some clear principles we use to guide retraining cadence, and some concrete signs that it’s time to act. Let’s start by clarifying what we mean by retraining…

Leland Burns & Jim McGuire
Jan 12
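
A minimal sketch of one concrete "time to act" signal the retraining post alludes to: model performance decaying on recent vintages relative to the development benchmark. The benchmark AUC, tolerance, and synthetic data below are illustrative assumptions, not figures from the post.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

DEV_AUC = 0.74     # illustrative benchmark from model development
TOLERANCE = 0.02   # slippage beyond this triggers a retrain review

def needs_retrain_review(y_true, y_score):
    """Flag a vintage whose AUC has slipped past the tolerance."""
    auc = roc_auc_score(y_true, y_score)
    return auc < DEV_AUC - TOLERANCE, auc

# Synthetic recent vintage whose scores have lost separating power
rng = np.random.default_rng(1)
y = rng.binomial(1, 0.10, 5_000)            # observed defaults
score = 0.4 * y + rng.normal(0, 1, 5_000)   # degraded model score
flagged, auc = needs_retrain_review(y, score)
print(f"AUC {auc:.3f} -> {'review retraining' if flagged else 'hold'}")
```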


Do I Need to Monitor My Credit Model?
Do you want to accurately and consistently segment risk, thereby enabling your entire credit strategy? Then yes, you need to monitor your model! We see robust monitoring save our clients real money all the time: A shadow scoring test flagged PSI anomalies arising from a difference in a vendor's data at month-end (a quirk that wasn't visible in the development data set). We were able to make adjustments to the model in production. A live model suddenly received drastically…

Leland Burns & Jim McGuire
Aug 18, 2025
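
For readers unfamiliar with the PSI metric the monitoring excerpt mentions: the Population Stability Index compares a baseline score distribution to a recent one, bin by bin. A minimal sketch; the decile binning, clipping floor, and rule-of-thumb thresholds are common conventions rather than prescriptions from the post.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between baseline and recent scores.
    Rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate."""
    # Bin edges come from the baseline (development) distribution
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    # Clip so out-of-range production scores land in the edge bins
    e_pct = np.histogram(np.clip(expected, edges[0], edges[-1]), bins=edges)[0] / len(expected)
    a_pct = np.histogram(np.clip(actual, edges[0], edges[-1]), bins=edges)[0] / len(actual)
    # Small floor avoids log(0) on empty bins
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Example: production scores drift upward relative to development
rng = np.random.default_rng(0)
baseline = rng.normal(600, 50, 10_000)   # development scores
recent = rng.normal(615, 50, 10_000)     # month-end production scores
print(f"PSI = {psi(baseline, recent):.3f}")
```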


What Are Leaky Variables, and Why Do They Ruin Credit Models?
What Exactly Is a Leaky Variable? A leaky variable is any feature in your training data that contains information you won't have at decision time. The most extreme example would be using default status to predict default. That creates a perfect model in development with zero real-world utility, as the model simply learns to predict the outcome from itself. Of course, any serious data scientist would catch an error that massive. But leakage can be subtle: Post-application…

Leland Burns & Jim McGuire
Jul 7, 2025
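
One quick smoke test for the subtle leakage the excerpt describes is to screen each feature's standalone discriminatory power: a single feature that nearly perfectly separates outcomes on its own deserves a data-lineage review. A sketch of that idea; the feature names and threshold are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

def leakage_screen(df, target, threshold=0.95):
    """Flag features whose standalone AUC is suspiciously high.
    A smoke test, not a substitute for reviewing data lineage."""
    suspects = {}
    y = df[target]
    for col in df.columns.drop(target):
        auc = roc_auc_score(y, df[col])  # AUC only needs an ordering
        auc = max(auc, 1.0 - auc)        # direction-agnostic
        if auc >= threshold:
            suspects[col] = round(auc, 3)
    return suspects

# Toy data: "post_app_balance" is recorded after the decision -> leaky
rng = np.random.default_rng(2)
y = rng.binomial(1, 0.2, 2_000)
df = pd.DataFrame({
    "bureau_score": -y + rng.normal(0, 3, 2_000),        # honest, noisy signal
    "post_app_balance": y + rng.normal(0, 0.05, 2_000),  # near-perfect proxy
    "default": y,
})
print(leakage_screen(df, "default"))  # flags only post_app_balance
```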


Is Your Test Strategy Just Creating Noise?
Running tests has never been technically easier. With highly configurable back-end tech and powerful data analysis tools widely available, even early-stage lenders with small teams can run a sophisticated testing program. That ease is a double-edged sword, though: it has also become easy to drown your insights in noise with sloppy testing. To make sure testing brings meaningful results, follow these four principles. 1. You need a learning agenda. And a budget, too…

Brandon Homuth
Jul 7, 2025
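
One concrete guard against a test program that merely creates noise is to size every test for the smallest effect worth detecting before it launches. A minimal power-calculation sketch using statsmodels; the base rate, target lift, alpha, and power below are illustrative assumptions, not the post's four principles.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

base_rate = 0.08     # control conversion (or default) rate, assumed
target_rate = 0.09   # smallest lift worth detecting, assumed

# Cohen's h effect size for the two proportions
effect = proportion_effectsize(target_rate, base_rate)

# Applicants needed per arm for 80% power at a 5% significance level
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"Need roughly {n_per_arm:,.0f} applicants per arm")
```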


Gradient Boost Models: Hidden Risks and How to Avoid Them in Credit Modeling
Gradient Boost Models (GBMs) have become the go-to tool for many credit modelers, for good reason. GBMs can unlock meaningful lift in predictive accuracy, helping lenders better distinguish between high- and low-risk applicants, expand safe approvals, and reduce losses. But with great power comes great risk. At Ensemblex, we’ve spent years developing, testing, and monitoring GBM credit models. And while we remain strong advocates for their use in the right context, we also know…

Leland Burns & Jim McGuire
Jun 23, 2025
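
The excerpt cuts off before naming the risks, but one frequently cited GBM pitfall in credit is learning non-monotonic quirks from sparse regions of the data (for example, estimated risk dipping as utilization rises). A sketch of one common mitigation, monotonic constraints, using xgboost; the features and constraint signs are assumptions, not recommendations from the post.

```python
import numpy as np
import xgboost as xgb

# Synthetic applicants: risk rises with utilization, falls with tenure
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (5_000, 2))   # columns: [utilization, tenure]
logit = 2.5 * X[:, 0] - 1.5 * X[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = xgb.XGBClassifier(
    n_estimators=200,
    max_depth=3,
    learning_rate=0.1,
    # 1: risk may only rise with utilization; -1: only fall with tenure
    monotone_constraints="(1,-1)",
)
model.fit(X, y)

# Predicted risk is now guaranteed non-decreasing in utilization
grid = np.column_stack([np.linspace(0, 1, 5), np.full(5, 0.5)])
print(model.predict_proba(grid)[:, 1])
```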


New Research on ML Fairness and Explainability
FinRegLab, a nonprofit focused on innovations that advance responsibility and inclusiveness in the financial sector, has released the results of a broad research project entitled Explainability and Fairness in Machine Learning for Credit Underwriting (1). Noteworthy for its range of empirical testing and numerous collaborators, including many leading financial technology companies, the paper is a comprehensive review of the tools available to help lenders responsibly and compliantly…

Leland Burns & Jim McGuire
Sep 7, 2023


How To Build an Explainable ML Model
Make explainability the core of the process, not a feature of the model.

Leland Burns & Jim McGuire
Aug 16, 2022
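
As one illustration of tooling that supports that process-first view, SHAP attributions decompose an individual score into per-feature contributions that can be reviewed with risk and compliance teams. A minimal sketch, not necessarily the approach the post prescribes.

```python
import numpy as np
import shap
import xgboost as xgb

# Toy credit-style data with a known signal in the first two features
rng = np.random.default_rng(4)
X = rng.normal(size=(1_000, 3))
p = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))
y = rng.binomial(1, p)

model = xgb.XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)

# Per-applicant, per-feature contributions to the model's margin
explainer = shap.TreeExplainer(model)
print(explainer.shap_values(X[:5]))
```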