Britain needs 'AI stress tests' for financial services, lawmakers say


LONDON, Jan 20 (Reuters) - Britain's financial watchdogs are not doing enough to stop AI from harming consumers or destabilising markets, a group of lawmakers said on Tuesday, urging regulators to move away from a "wait and see" approach as the technology is deployed widely.

The Financial Conduct Authority and the Bank of England should start running AI-specific stress tests to help firms prepare for market shocks triggered by automated systems, the Treasury Committee said in a report on AI in financial services.

The committee urged the FCA to publish guidance by the end of 2026 on how consumer protection rules apply to AI and how much senior managers need to understand the systems they oversee.

“Based on the evidence I've seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident and that is worrying,” committee chair Meg Hillier said in a statement.

TECHNOLOGY CARRIES 'SIGNIFICANT RISKS'

A race among banks to adopt agentic AI, which, unlike generative AI, can make decisions and take autonomous action, adds new risks for retail customers, the FCA told Reuters late last year.

About three-quarters of British financial firms now use AI across core functions, from processing insurance claims to credit assessments.

While AI brought benefits, the report warned of "significant risks" as well. These included opaque credit decisions, the potential exclusion of vulnerable consumers through algorithmic tailoring, fraud, and dissemination of unregulated financial advice through AI chatbots.

RISKS TO FINANCIAL STABILITY

Experts who gave evidence to the committee also highlighted threats to financial stability, due to reliance on a small group of U.S. tech giants for AI and cloud services.

Some noted that AI-driven trading systems may amplify herding behaviour, in which firms imitate one another's trading decisions, raising the risk of a financial crisis.

A spokesperson said the FCA would review the report. The regulator has previously indicated it does not favour AI‑specific rules due to the pace of technological change.

AI ALREADY INFLUENCING DECISIONS, HILLIER SAYS

A BoE spokesperson said the central bank had taken steps to assess AI-related risks and bolster the financial system. The bank would consider the committee's recommendations and respond in due course, the person added.

Hillier told Reuters that increasingly sophisticated forms of generative AI were influencing financial decisions.

“If something has gone wrong in the system, that could have a very big impact on the consumer,” she said.

Separately, Britain's finance ministry appointed Starling Bank CIO Harriet Rees and Lloyds Banking Group's Rohit Dhawan to help steer AI adoption in financial services.

(Reporting by Phoebe Seers; Editing by Tommy Reggiori Wilkes and Bernadette Baum)