FCA may offer its market data to surveillance tech start-ups

The regulator is concerned that rapid AI adoption will favor incumbent vendors; it aims to launch a sandbox.

The UK Financial Conduct Authority is considering opening up its vast trove of market data to vendors of financial crime surveillance technology. The move is prompted by fears that the rapid growth of artificial intelligence could stop smaller firms and new entrants from competing with big tech incumbents that have access to large historical datasets.

The plans are part of a digital sandbox specifically for the use of AI in market surveillance, which the regulator hopes to launch later this year, according to Jamie Bell, head of secondary market oversight at the FCA. To preserve competition, however, the watchdog is working to ensure the largest vendors do not dominate the market.


“One of the problems that I see emerging for AI is competition, because who has access to sufficient training data becomes a really important point. And there’s a very real danger that AI will drive consolidation and anti-competitive behaviors in the industry—[and] as it happens, that’s particularly true in market surveillance,” said Bell, speaking at WatersTechnology’s sibling publication Risk.net’s OpRisk Europe conference on June 7.

“We ingest half a billion trading records a day—we have data going back to 2000 at least that’s very, very rich—and I’d like to make some of that data available to help smaller vendors and academics, frankly to [help them] compete in this space.”

For a number of years, US and European regulators have encouraged banks to harness the potential of advanced approaches to detecting financial crime and wrongdoing. They recognize the capacity of more complex machine learning techniques, such as neural networks, to spot patterns of behavior and deliver actionable intelligence to risk managers more quickly and effectively than legacy rules-based approaches.
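To make that contrast concrete, the sketch below sets a hand-written threshold rule against a simple learned anomaly detector over invented trade records. It is a minimal illustration, not any regulator's or vendor's actual system; the feature names, thresholds, and the choice of scikit-learn's IsolationForest are all assumptions for demonstration.

```python
# Minimal sketch only: invented data, not any real surveillance system.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Hypothetical features per trade: notional, price deviation from mid,
# and seconds elapsed since a news event.
trades = rng.normal(loc=[1e6, 0.0, 300.0], scale=[2e5, 0.5, 60.0],
                    size=(10_000, 3))

def rules_based_alerts(x: np.ndarray) -> np.ndarray:
    """Legacy approach: fixed, hand-written thresholds per condition."""
    return (x[:, 0] > 1.4e6) & (np.abs(x[:, 1]) > 1.0)

# ML approach: an isolation forest learns what "normal" looks like from
# the data itself and flags statistical outliers, with no hard-coded limits.
model = IsolationForest(contamination=0.01, random_state=0)
ml_alerts = model.fit_predict(trades) == -1  # -1 marks an outlier

print(f"rules-based alerts: {rules_based_alerts(trades).sum()}")
print(f"ML anomaly alerts:  {ml_alerts.sum()}")
```

The learned detector needs no per-scenario thresholds, but it does need rich training data, which is precisely why Bell argues that access to such data has become a competition issue.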

Given the FCA holds reams of highly sensitive trade data on the vast network of authorized firms it supervises, opening up access to its proprietary dataset poses “very significant confidentiality challenges,” Bell added. “What we absolutely don’t want to happen is for people to be able to reconstruct firms’ trading history going back 10 years, if we’ve done it wrong.”
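One common mitigation for that reconstruction risk is to release only aggregates and to suppress thinly populated buckets. The sketch below is a hypothetical illustration of that idea; the schema, the five-firm threshold, and the pandas-based approach are assumptions for demonstration, not a description of how the FCA would actually anonymize its data.

```python
# Hypothetical illustration of aggregation-with-suppression; not FCA policy.
import pandas as pd

MIN_CONTRIBUTING_FIRMS = 5  # assumed cut-off below which a bucket is withheld

def aggregate_for_release(trades: pd.DataFrame) -> pd.DataFrame:
    """Collapse firm-level records to instrument/day totals, dropping any
    bucket with so few contributing firms that a reader could plausibly
    reconstruct a single firm's trading history from it."""
    grouped = trades.groupby(["instrument", "trade_date"]).agg(
        n_firms=("firm_id", "nunique"),
        total_volume=("volume", "sum"),
        total_notional=("notional", "sum"),
    )
    safe = grouped[grouped["n_firms"] >= MIN_CONTRIBUTING_FIRMS]
    return safe.drop(columns="n_firms")
```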


Since 2016, the FCA has operated a broader regulatory sandbox that allows firms to road-test new technological approaches before going to market, including access to training datasets as well as private data from vendors. Last month, the regulator announced plans to make permanent its parallel Digital Sandbox, which has so far worked with private sector partners on a pilot basis to explore solutions in the arenas of consumer protection and sustainability.

Bell added that, from a supervisory perspective, the regulator was already starting to make use of similar techniques to spot instances of market abuse and misconduct on its watch.

“There are all kinds of tools that we’re deploying internally to solve problems that we couldn’t solve before,” he said, adding that this was already helping the authority’s roughly 100-strong market surveillance team identify new patterns of suspicious activity in markets.

“We’ve had, for a number of years, rules-based alerts to help us filter that data into something that’s actually actionable and usable. What we’re finding is we’re changing our approach and applying pattern logic and machine learning and AI to those same problems, and coming up with different—not more, different—issues.”
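Bell's "different, not more" point can be seen in a toy comparison: run a threshold rule and a learned model over the same records with the same alert budget, and the two alert sets come out similar in size but only partially overlapping. Everything below, from the lognormal order sizes to the 1% budget, is invented for illustration.

```python
# Toy demonstration of "different, not more": same data, same alert budget,
# largely different cases flagged. All values are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
order_sizes = rng.lognormal(mean=10.0, sigma=1.0, size=5_000)

# Threshold rule: flag the largest 1% of orders.
rule_hits = set(np.flatnonzero(order_sizes > np.quantile(order_sizes, 0.99)))

# Learned model with the same 1% budget, free to flag either tail.
forest = IsolationForest(contamination=0.01, random_state=0)
ml_hits = set(np.flatnonzero(
    forest.fit_predict(order_sizes.reshape(-1, 1)) == -1))

print(f"rule alerts: {len(rule_hits)}, ML alerts: {len(ml_hits)}, "
      f"overlap: {len(rule_hits & ml_hits)}")
```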

Joking that he was “probably more at the excited end than the Terminator end of the spectrum for AI,” Bell concluded by saying that part of regulators’ job should be to help, not hinder, the rollout of advanced approaches to modeling at the firms under their jurisdiction.

“Internally, we’re incredibly excited about it. It does pose some real challenges as a regulator… We want firms to be able to use AI; we want to be not an obstacle but an accelerator for the development of AI. That’s absolutely my mission. Regulatory accountability is a really important issue; so [we say] ‘firms can’t hide behind a black box’; but actually what that means for us as supervisors is a really interesting question; we don’t have all the answers to that yet. Part of the purpose of the sandbox will be to engage with the industry about how we can effectively supervise AI in the future.”
