Google urges regulators, market participants to clarify risk guidance for AI models

A new whitepaper from Google finds that existing guidance on the use of AI and ML models for risk management is a start, but leaves room for improvement.

A newly released whitepaper from Google Cloud urges financial sector regulators and market participants to re-evaluate the clarity and effectiveness of existing model risk management (MRM) guidance as it relates to AI and machine-learning models.

Traditional MRM guidance, which covers how to assess whether a financial institution is using risk models safely, was drawn up jointly by the Office of the Comptroller of the Currency (OCC) and the Federal Reserve Board of Governors in 2011 and has been updated periodically, most recently in 2021. But Google Cloud’s government affairs and policy team, which authored the new whitepaper, believes more work is needed to clarify and build on the existing guidance to surface better, more standardized risk-mitigation strategies.

The paper makes five observations and recommendations on the existing guidance, including asking regulators to publish further guidance on the materiality and risk ratings of AI and ML models in common use cases, to develop technical metrics and testing benchmarks beyond model explainability, and to advance the use of governance controls such as incremental rollouts and circuit breakers.

Behnaz Kibria, head of technical infrastructure and international cloud policy at Google Cloud, joined the tech behemoth six years ago, after serving as deputy chief of staff at the Office of the US Trade Representative and, before that, for the House Ways and Means Committee’s trade subcommittee. Her regulatory background means she has seen firsthand the fine line regulators walk between encouraging the use of efficient, innovative tools that make markets safer and mitigating the risks that new, still-opaque technologies can pose to those same markets.

“The question that people in the industry were asking was, ‘Does this regime, which was made before AI and ML were being used widely, continue to be useful?’ That’s one of the threshold questions that the paper tries to answer,” she tells WatersTechnology. “Or—and this is what we were getting asked by regulators—do we need to throw everything out and start again?”

To explore the question, the paper focused on models used for the detection of fraud, money laundering, and other financial crimes, such as trade manipulation—a fraught area of finance that, in addition to AI and ML models, has seen a slew of potential technology solutions thrown at it, including graph technology, blockchain, and centralized databases, such as the one envisioned in the failed Markit-Genpact deal, which would have standardized know-your-customer (KYC) data for financial institutions.

In all scenarios, bad actors still slip through the cracks.

As an example of where the current MRM guidance leaves market participants with open questions, Google Cloud’s whitepaper notes that AI models used for crime detection in the banking sector are frequently developed through collaborations between institutions and technology providers, and that these collaborative efforts allow banks to incorporate cutting-edge technologies into their businesses.

But the caveat is that such collaborations also implicate aspects of the MRM guidance related to third-party vendors, requiring banks, examiners, and vendors to address an additional set of issues, such as how contractual and compliance obligations and product-maintenance requirements should be divided between banks and vendors. They also require a reassessment of the degrees of, and approaches to, information-sharing needed to satisfy MRM requirements.

“It is still useful; it just needs to be adapted in some ways,” Kibria says, stressing that the OCC’s handbook is broad and principles-based enough to remain applicable for financial firms. But she and the paper’s other authors believe the guidance leaves enough gaps and open interpretive questions to warrant robust industry engagement, alerts, and training to resolve these sticking points.

“I really think there’s a significant amount of urgency to it,” she says. “Technology is a huge opportunity, and it’s also something the risks of which need to be properly assessed and mitigated. And I think there is definitely a very significant urgency on both those fronts.”

Another key point of the paper centers on model documentation, with explainability treated as the main consideration in model-driven risk management. The handbook defines explainability as “the extent to which AI decisioning processes and outcomes are reasonably understood by bank personnel,” but the paper argues that this may be insufficient for establishing whether a model itself is sound and fit for purpose.

Indeed, this has been a topic on regulators’ radar for some time. In 2021, the Federal Reserve signaled its interest in requiring senior management to explain how their AI models arrive at each decision. And anti-money laundering models, in particular, have faced questions about data gaps and potential biases from regulators, bank stakeholders, and internal auditors.

The promise that AI and ML models hold over classical linear-regression models in combating financial crime is vast, Kibria says, and these technologies can greatly help regulatory bodies such as the Financial Crimes Enforcement Network (FinCEN), a bureau of the US Treasury, better accomplish their own goals of detecting illicit activity.

But first, they must understand the technologies themselves—and demonstrate that understanding to the industry in their guidance.
