The committee found that 75% of UK financial services firms are now using AI, with the largest take-up being in insurers and international banks operating in the UK, adding that AI adoption in financial services “substantially” outpaces other sectors.
The report said: “AI and wider technological developments could bring considerable benefits to consumers. We encourage firms and the Financial Conduct Authority (FCA) to work together to ensure that the opportunities for consumers from AI are taken.
“However, the FCA, the Bank of England and HM Treasury are not doing enough to manage the risks presented by AI. By taking a wait-and-see approach to AI in financial services, the three authorities are exposing consumers and the financial system to potentially serious harm.”
The report noted that it had received a “significant volume of evidence” around AI’s risks to financial services consumers.
Examples mentioned include a lack of transparency around AI-driven decision-making in credit and insurance, potential financial exclusion of the most disadvantaged customers, unregulated financial advice from services like ChatGPT misleading or misinforming customers, and a potential increase in fraud.
There are also risks to financial stability: AI heightens cybersecurity vulnerabilities and increases the volume and scale of cyber attacks, and it fosters over-reliance on a small number of US technology firms.
AI-driven market trading could also “amplify herding behaviour”, potentially risking a financial crisis in the worst-case scenario.
The report added that the UK currently does not have AI-specific legislation or AI-specific financial regulations, with the FCA and the Bank of England relying on existing regulatory frameworks to supervise firms’ use of AI.
The regulators in their evidence said existing regulatory frameworks offered “sufficient protection for consumers and financial stability against the risks posed by AI”.
Both regulators have been monitoring AI through a periodic survey, and last year they brought together an AI consortium. The FCA has also launched an AI Live Testing Service, a voluntary scheme that allows firms to trial AI solutions in a live and safe environment, and Supercharged Sandbox, which allows companies without their own AI infrastructure to experiment with AI solutions.
Stakeholders said the FCA’s current approach to supervising AI implementation was “reactive” and left firms with “little practical clarity on how to apply existing rules to their AI usage”.
Greater clarity on rules, better resilience to market shocks and action on the Critical Third Parties regime among key recommendations
The committee recommended that the FCA offer financial services firms “greater clarity on the application of existing rules to the use of AI”.
It called on the FCA to publish, by the end of the year, “comprehensive, practical guidance for firms” on the application of existing consumer protection rules to their use of AI, as well as on accountability and the level of assurance expected from senior managers for harm caused by AI.
“We recognise that it is difficult to provide prescriptive regulation in the context of fast-moving technological change. However, the current approach gives firms little practical clarity as to how existing rules apply to the use of AI. This leads to uncertainty for firms and potentially increases risks to consumers and the integrity of the financial system,” the committee said.
The second recommendation was that the Bank of England and the FCA conduct AI-specific stress testing to “build firms’ readiness for AI-driven market shocks”. Neither currently conducts AI-specific cyber or market stress testing.
The committee added that by the end of 2026, HM Treasury must designate major AI and cloud providers as critical third parties in the Critical Third Parties regime.
This aims to tackle the issue that UK financial services firms are “overly reliant” on a small number of US technology firms for AI and cloud services, which could impact operational resilience.
The regulators and government launched the Critical Third Parties Regime in 2023, giving regulators new powers of investigation and enforcement over companies that offer critical services to the financial services sector.
HM Treasury is responsible for designating firms as critical third parties but must consult the Bank of England and the FCA before doing so. Typically, it will begin the designation process after a formal recommendation from the regulators, although it can also start the process unilaterally and consult them later on.
Evidence suggests that HM Treasury has not yet designated any companies within the scheme but would be in a position to do so in the next 12 months, with Amazon Web Services and Google Cloud expected to be brought in.
The report said: “Over a year since the regime was established, it is not clear to us why HM Treasury has been so slow to use the new powers at its disposal. The Bank of England’s Financial Policy Committee must monitor the regime’s progress and, if necessary, use its power of recommendation to HM Treasury to ensure swift implementation.”
Dame Meg Hillier, chair of the Treasury Select Committee, said: “Firms are understandably eager to try and gain an edge by embracing new technology, and that’s particularly true in our financial services sector, which must compete on the global stage.
“The use of AI in the City has quickly become widespread, and it is the responsibility of the Bank of England, the FCA and the government to ensure the safety mechanisms within the system keep pace.
“Based on the evidence I’ve seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident, and that is worrying. I want to see our public financial institutions take a more proactive approach to protecting us against that risk.”