Britain needs 'AI stress tests' for financial services, lawmakers say


FILE PHOTO: AI letters and robot hand are placed on computer motherboard in this illustration created on June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

LONDON, Jan 20 (Reuters) - Britain’s financial watchdogs are not doing enough to stop AI from harming consumers or destabilising markets, a group of lawmakers said on Tuesday, urging regulators to move away from a "wait and see" approach as the technology is deployed widely.

The Financial Conduct Authority and the Bank of England should start running AI‑specific stress tests to help firms prepare for market shocks triggered by automated systems, the Treasury Committee said in a report on AI in financial services.

The committee urged the FCA to publish guidance by the end of 2026 on how consumer protection rules apply to AI and how much senior managers need to understand the systems they oversee.

“Based on the evidence I've seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident and that is worrying,” committee chair Meg Hillier said in a statement.

TECHNOLOGY CARRIES 'SIGNIFICANT RISKS'

A race among banks to adopt agentic AI, which, unlike generative AI, can make decisions and take autonomous action, adds new risks for retail customers, the FCA told Reuters late last year.

About three-quarters of British financial firms now use AI across core functions, from processing insurance claims to credit assessments.

While AI brought benefits, the report warned of "significant risks" as well. These included opaque credit decisions, the potential exclusion of vulnerable consumers through algorithmic tailoring, fraud, and dissemination of unregulated financial advice through AI chatbots.

RISKS TO FINANCIAL STABILITY

Experts who gave evidence to the committee also highlighted threats to financial stability, due to reliance on a small group of U.S. tech giants for AI and cloud services.

Some noted that AI‑driven trading systems may amplify herding behaviour, or imitative trading decisions, in markets, raising the risks of a financial crisis.

A spokesperson said the FCA would review the report. The regulator has previously indicated it does not favour AI‑specific rules due to the pace of technological change.

AI ALREADY INFLUENCING DECISIONS, HILLIER SAYS

A BoE spokesperson said the central bank had taken steps to assess AI-related risks and bolster the financial system. The bank would consider the committee's recommendations and respond in due course, the person added.

Hillier told Reuters that increasingly sophisticated forms of generative AI were influencing financial decisions.

“If something has gone wrong in the system, that could have a very big impact on the consumer,” she said.

Separately, Britain's finance ministry appointed Starling Bank CIO Harriet Rees and Lloyds Banking Group’s Rohit Dhawan to help steer AI adoption in financial services.

(Reporting by Phoebe Seers; Editing by Tommy Reggiori Wilkes and Bernadette Baum)

