
Explainability and AI: What is it and what to do about it? – Gatenby

Written by: Pete Gatenby, AI partner at Novus Strategy
Posted: March 30, 2026
Updated: March 30, 2026

Lenders and brokers are making great progress bringing artificial intelligence (AI) to bear on mortgage decisions, but producing incorrect recommendations isn’t the only vulnerability they face.

When so much focus is placed on outcomes, the story of how they were reached is easily forgotten.

This blind spot is a key risk in financial services, because there are signs that AI is entering the mortgage journey faster than governance.

The temptation to move fast risks overshadowing firms’ wider obligations to the Financial Conduct Authority (FCA), which protect consumers from advice failures, mis-selling and prejudicial or unfair decisioning.


Explainability

This is where ‘explainability’ comes in.

In simple terms, it’s how you demonstrate to the regulator that you can justify a decision or recommendation where AI played a part.

It’s not new. Consumers have been entitled to know why decisions have been made about them for a long time. They also have the right to review and challenge after the fact. So, in a world where decisions like these are often insured, brokers and lenders can’t just point at the AI. They must be able to demonstrate that the steps taken and conclusions reached were justified at the time.

This makes explainability a significant regulatory pressure point, because off-the-shelf models such as Copilot operate like black boxes. With models being replaced roughly every six months, it is often impossible to retrieve an accurate account of how a decision was reached at a later date; a reconstructed explanation may sound plausible without being true.


What you need to do

Treating explainability as a documentary exercise after the fact is a mistake. You need to anticipate the need for justification in real time, building it into the process.

This will become more important as risk-based pricing, deep-learning credit scoring, complex fact finds and new affordability measures grow. Customer files must be updated with AI justifications when decisions are made.
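One way to build this in is to write a structured, timestamped justification record to the customer file at the moment the AI contributes to a decision, pinning the model version so the account survives model churn. A minimal sketch in Python; the record fields, function name and example values here are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDecisionRecord:
    """One audit entry capturing the AI's role in a decision, recorded at the time it was made."""
    case_id: str           # reference to the customer file
    decision: str          # what was recommended or decided
    model_name: str        # which AI system contributed
    model_version: str     # pin the version: models are replaced frequently
    inputs_summary: dict   # the facts the model was given
    justification: str     # human-readable reason, written at decision time
    timestamp: str         # UTC time the decision was made

def record_decision(case_id, decision, model_name, model_version,
                    inputs_summary, justification):
    """Build a timestamped record to append to the customer file."""
    return AIDecisionRecord(
        case_id=case_id,
        decision=decision,
        model_name=model_name,
        model_version=model_version,
        inputs_summary=inputs_summary,
        justification=justification,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

# Hypothetical example: log the justification alongside the recommendation.
entry = record_decision(
    case_id="CASE-1001",
    decision="recommend 5-year fixed rate product",
    model_name="copilot",
    model_version="2026-03",
    inputs_summary={"ltv": 0.75, "income_multiple": 4.2},
    justification="Applicant stated a need for payment certainty over the fixed period.",
)
print(json.dumps(asdict(entry), indent=2))
```

The point of the design is that the justification is captured contemporaneously and tied to a specific model version, rather than reconstructed after the fact.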

A lack of explanation at any stage could be misread as discrimination, which is another issue that senior managers will need to monitor proactively under the Senior Managers and Certification Regime (SM&CR). Explainability has become a board-level responsibility.


Futureproofing

The FCA hasn’t introduced specific regulations for AI, but this may change, as we know lenders are already pushing for explicit guidance for fear of getting it wrong.

Leaning on the EU’s General Purpose AI (GPAI) Code of Practice, as it’s more explicit about how AI should be deployed, will be helpful. It only applies to companies’ operations in the EU, but it’s likely UK authorities will seek greater alignment with it in the near future.


Structured data

It’s also worth being aware of horizontal digital integration (HDI), the framework that underpins the interoperability revolution unfolding in the home buying and selling journey. HDI operationalises structured data, which is the foundation of any effective AI deployment.

So taking a comprehensive approach to explainability now also helps futureproof firms’ ability to take part in a transformation that will cut time to offer, time to completion and the rate of fall-throughs.

Lenders can take a first step by recognising that the most important thing isn’t the model; it’s creating an environment founded on governance and structured data.