
My Model Works. Why Do I Need a New One?

  • Writer: Leland Burns & Jim McGuire
  • Jul 28
  • 3 min read

Updated: Aug 5

"If it ain't broke, don't fix it."


Lenders often push back when we suggest exploring a new model build. It's fair—model builds require resources, and it can feel silly to fiddle with an underwriting model that "works," especially if origination volumes are on track and losses seem manageable. But at Ensemblex, we know that "works" often means "leaves money on the table."


What Does It Mean for a Model to "Work"?


In technical terms, a credit model is effective if it "slopes risk"—that is, it successfully differentiates between riskier and safer applicants. It produces scores that correlate with performance outcomes and can be used to set product cutoffs, tier pricing, or power other decision policies.
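
As a rough illustration, here is a minimal sketch (Python with pandas, on made-up data and hypothetical column names, not any client's actual setup) of one way to check that a score slopes risk: bucket booked loans into score deciles and look at the observed bad rate in each.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)

    # Hypothetical booked-loan data: a credit score at origination and a
    # bad/default flag whose probability falls as the score rises.
    n = 5000
    score = rng.normal(700, 40, n)
    bad = rng.random(n) < 1 / (1 + np.exp((score - 660) / 25))
    loans = pd.DataFrame({"score": score, "bad": bad.astype(int)})

    # Bucket loans into score deciles and compute the observed bad rate per decile.
    loans["decile"] = pd.qcut(loans["score"], 10, labels=False)
    sloping = loans.groupby("decile")["bad"].agg(bad_rate="mean", n="size")

    # A score that "slopes risk" should show bad rates falling steadily
    # from the lowest-score decile (0) to the highest (9).
    print(sloping)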


That sounds straightforward. But when a client says their model "works," we always ask: Relative to what?


Often, models are paired with heavy rules-based overlays or generic scores. So, while the overall system might produce acceptable loss rates, it's not always clear how much of that is due to the model itself, or what gains might be left on the table.


Could a New Model Do Better?


The only definitive way to tell is to start testing new models. But there are signs that it's worth trying:


  • You're using generic scores or linear regression. Unless you're a very early-stage lender, you're likely leaving performance on the table.

  • New data is available. Have you added new data sources since your last build? Do you have more customer history?

  • Competitors are advancing. If your peers are adopting more sophisticated models, they may be pricing more precisely or converting better customers.

  • New products or populations. New segments need new models.

  • Macro shifts. If your current model was trained on an outdated population or if the macro environment has shifted, it might not be performing as well as you think (even if high-level outcomes look stable).


How Do You Know It's Worth It?


A well-designed model development process includes early checkpoints. At Ensemblex, we prioritize getting quick, directional results within the first month of a project. These early tests — AUC, marginal & cumulative loss rates, score stability, and swap-sets relative to other model versions — assess whether the improvements are worth the effort.
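
For a sense of what a couple of those checks look like, here is a minimal sketch (Python with scikit-learn, on simulated scores and outcomes rather than our actual methodology): it compares AUC between an incumbent and a challenger model, then sizes the swap-set, i.e. the applicants the two models would decision differently at a matched approval rate.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)

    # Simulated out-of-time sample: true outcomes plus scores from an incumbent
    # and a challenger model (higher score = safer applicant in this toy setup).
    n = 10_000
    bad = (rng.random(n) < 0.08).astype(int)
    incumbent = rng.normal(0.5, 0.15, n) - 0.30 * bad
    challenger = rng.normal(0.5, 0.15, n) - 0.45 * bad

    # AUC expects higher values for the positive (bad) class, so flip the sign.
    print("Incumbent AUC: ", roc_auc_score(bad, -incumbent))
    print("Challenger AUC:", roc_auc_score(bad, -challenger))

    # Swap-set at a matched 70% approval rate: applicants one model approves
    # and the other declines.
    cutoff_inc = np.quantile(incumbent, 0.30)
    cutoff_chl = np.quantile(challenger, 0.30)
    approve_inc = incumbent >= cutoff_inc
    approve_chl = challenger >= cutoff_chl

    swap_in = approve_chl & ~approve_inc   # newly approved by the challenger
    swap_out = approve_inc & ~approve_chl  # newly declined by the challenger
    print("Swap-in size: ", swap_in.sum(), " bad rate:", bad[swap_in].mean())
    print("Swap-out size:", swap_out.sum(), " bad rate:", bad[swap_out].mean())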


Determining that requires assessing the full business impact, not just statistical metrics like AUC. We look for gains like:


  • Higher approvals at the same loss rate. By better separating high- and low-risk applicants, you can approve more customers without increasing risk (see the sketch after this list).

  • Lower losses at the same approval rate. The flip side.

  • Improved product assignment and conversion. For clients with tiered products, a more precise model can help safely expand the top-tier offer, which often has higher conversion and better customer retention.

  • Simplified decision flow. Stronger models reduce the need for compensating policies and rules. That simplifies governance and makes future changes easier to implement.

  • Better stability over time. Advanced models, when well-developed, tend to degrade more slowly, delivering more consistent outcomes across vintages and macro cycles.
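
To make the first two gains concrete, here is a minimal sketch (Python, simulated data, purely illustrative): hold the portfolio loss rate fixed at what an incumbent model achieves with a 70% approval rate, then see how much further a better-separating challenger can push approvals before reaching that same loss rate.

    import numpy as np

    rng = np.random.default_rng(2)

    # Simulated applicant pool: true bad flags plus incumbent and challenger
    # scores (higher score = safer applicant in this toy setup).
    n = 20_000
    bad = (rng.random(n) < 0.10).astype(int)
    incumbent = rng.normal(0.5, 0.15, n) - 0.30 * bad
    challenger = rng.normal(0.5, 0.15, n) - 0.45 * bad

    def portfolio_loss_rate(score, approval_rate):
        """Bad rate among approved applicants if we approve the top X% by score."""
        cutoff = np.quantile(score, 1 - approval_rate)
        return bad[score >= cutoff].mean()

    # The incumbent's loss rate at a 70% approval rate sets the bar.
    target_loss = portfolio_loss_rate(incumbent, 0.70)
    print(f"Incumbent: 70% approvals at a {target_loss:.2%} loss rate")

    # Walk the challenger's approval rate upward until it hits the same loss rate.
    best = 0.70
    for rate in np.arange(0.70, 1.001, 0.01):
        if portfolio_loss_rate(challenger, rate) <= target_loss:
            best = rate
        else:
            break
    print(f"Challenger: {best:.0%} approvals at or below the same loss rate")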


Final Thoughts


It's good practice to check in with your model regularly, even when it seems to be humming along. Is it doing as much as it could be? At Ensemblex, we've helped clients across stages and sectors build models with improvements that surprised (and delighted) them: lowering losses by 17% while keeping approval rates steady, increasing conversion by 29%, increasing line size by 21%... Curious what we can do together? Let's talk.
