Better Business

AI isn’t ready to advise – and the FCA should stop pretending it is – Murphy

Written by: Sebastian Murphy, group director at JLM Mortgage Services
Posted: October 24, 2025
Updated: October 24, 2025

When I wrote in Mortgage Solutions last month that the Financial Conduct Authority (FCA) seemed increasingly disengaged from the reality of mortgage advice, my main concern was its growing enthusiasm for technology as a substitute for human judgment.

At the Westminster Business Forum, the regulator had spoken about “injecting innovation” into the market and hinted that artificial intelligence (AI) could one day shoulder a greater share of the advice process.

For anyone who works directly with clients, that idea already felt a long way removed from the real world. Now, with new data emerging on just how inaccurate AI currently is when offering financial ‘guidance’, it should give the regulator serious pause for thought.

 

AI’s shortcomings

According to recent research from Investing Insiders, when AI tools were asked 100 finance-related questions covering savings, housing, and retirement, only 56% of the answers were correct. A further 27% were misleading or deceptive, and 17% were outright wrong.

When asked about housing and major life purchases, AI gave incorrect responses 70% of the time. In any advice context, those results are catastrophic. As has been pointed out, if a human adviser were getting it right just over half the time, they wouldn’t merely be subject to retraining; they would be struck off.

Yet some within the regulator seem to believe this technology can already begin to play a front-line role in delivering or even replacing advice.

The FCA’s public utterances increasingly position AI as a way to make mortgage processes faster, easier, and cheaper. There is nothing wrong with wanting greater efficiency – we all do.

Advisers already use technology to verify ID, process documentation, and assess affordability. But none of those things are advice. They are tools that help us gather, check, and interpret information more effectively.

 

Advice is an ‘act of judgement’

Advice, by contrast, is an act of judgement; it depends on understanding nuance, balancing priorities, and applying experience to each client’s individual circumstances.

AI, at least in its current form, is a long way from replicating that.

The Consumer Duty makes this distinction clear. It requires firms to prevent foreseeable harm and deliver positive consumer outcomes.

An algorithm that can only give accurate guidance barely half the time cannot possibly meet that bar. Even if its accuracy improves, it cannot meet the deeper test of suitability: taking into account personal goals, financial resilience, family dynamics, and vulnerability.

The FCA itself has acknowledged that around half of UK adults exhibit one or more characteristics of vulnerability. These are precisely the clients who need human empathy, context, and challenge – the very qualities AI lacks.

Advocates will argue that AI will improve rapidly, and they are right. The technology will continue to learn, and advisers will continue to integrate it into their processes. But the pace of improvement doesn’t change the underlying issue.

We are not just talking about the accuracy of data; we are talking about the capacity to understand people. AI can be trained on millions of datasets, but it cannot understand why a couple may choose a longer mortgage term to preserve childcare affordability, or why an older borrower may want to retain flexibility even at a higher rate. These are not errors of information; they are matters of empathy, ethics, and lived experience.

This is where I believe the regulator needs to exercise considerable caution. By hinting that AI is capable of doing a serious share of the advice heavy lifting – or even suggesting it can replace, or materially reduce, the need for advice – it risks weakening the very protections it is supposed to uphold.

It also risks confusing firms into thinking that efficiency equates to suitability. Innovation and regulation should complement each other – one setting the standards, the other helping to meet them. When the FCA starts to talk as though it is leading innovation itself via a greater use of AI, that boundary becomes dangerously blurred.

As I mentioned last month, none of this is an argument against technology. Used responsibly, AI can enhance our work enormously. It can improve fact finding, automate paperwork, flag inconsistencies, and allow advisers to focus on conversations that matter most. But that is very different from delegating advice. Technology should support advisers, not substitute for them.

The Investing Insiders data ought to be required reading for anyone at the FCA who believes the future of mortgage advice lies in machine learning.

AI will get better – perhaps dramatically so – but not to the point where it can be relied upon to understand the complexities, vulnerabilities, and personal nuances that underpin real advice.

In the meantime, the regulator should be guided by evidence, not blind enthusiasm. Advice remains the safeguard that makes innovation safe.

Remove it, and the system does not become faster, easier, or cheaper; just riskier.