Fed could require senior management to explain AI models

A new era of accountability might see the Federal Reserve demand model explainability to keep the financial system safe.


The Federal Reserve has signaled that the banking industry will soon see agencies intervening in the use of artificial intelligence in finance. Such regulatory interventions will likely focus on explainability, the extent to which the internal workings of AI models can be explained and understood in human terms, says Mark Chorazak, a partner in the financial institutions advisory and financial regulatory practice at Shearman & Sterling in New York.

“I think the greater concern in the institutional space is modeling and explainability: understanding how the bank gets from A to B and underwrites a particular credit. Or say a bank has made a loan to a particular company: how did the credit committee make that decision?” he says. “If they are unable to articulate that—and just say, ‘Well, we punched all this data in and the AI tool said it was good’—I think that is classically where examiners get concerned. They want to be able to know that people can articulate an explanation on key decisions.”
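To make that concrete, consider the kind of per-decision explanation Chorazak describes. With a linear scorecard, each feature's contribution to a credit decision can be read off directly and ranked, giving a credit committee plain reasons rather than "the AI tool said it was good." The sketch below is purely illustrative: the feature names, weights, and approval threshold are hypothetical assumptions, not any bank's actual model.

```python
# Sketch of a per-decision "reason code" explanation for a credit model.
# With a linear scorecard, each feature's contribution to the score can be
# read off directly. All feature names and weights here are hypothetical.
import numpy as np

FEATURES = ["debt_to_income", "years_in_business", "prior_delinquencies"]
WEIGHTS = np.array([-2.1, 0.4, -1.5])  # e.g., fitted logistic-regression coefficients
INTERCEPT = 1.0

def explain_decision(x: np.ndarray) -> None:
    contributions = WEIGHTS * x
    score = INTERCEPT + contributions.sum()
    print(f"decision: {'approve' if score > 0 else 'decline'} (score={score:+.2f})")
    # Rank the drivers so a credit committee can articulate the decision.
    for i in np.argsort(-np.abs(contributions)):
        print(f"  {FEATURES[i]:>22}: {contributions[i]:+.2f}")

explain_decision(np.array([0.55, 3.0, 1.0]))
```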

At the start of this year, Federal Reserve Board governor Lael Brainard gave a speech opening a symposium held by the Fed devoted to understanding how AI is being used in banking. While the central bank is aware of the technology’s great potential for parsing huge amounts of unstructured data—helping banks to detect fraudulent activity, for instance—‘black box’ models pose major worries, she said.

In financial services, regulators have expressed particular concern about AI replicating discrimination against minorities in the underwriting of personal and business loans. And as machine learning takes over manual tasks in increasingly sensitive areas, these concerns are broadening from consumer protection to existential threats to the safety and soundness of the financial system itself, Brainard said.

To ensure that society benefits from the application of AI to financial services, we must understand the potential benefits and risks, and make clear our expectation for how the risks can be managed
Lael Brainard, Federal Reserve

“In the safety and soundness context, bank management needs to be able to rely on models’ predictions and classifications to manage risk. They need to have confidence that a model used for crucial tasks such as anticipating liquidity needs or trading opportunities is robust and will not suddenly become erratic. For example, they need to be sure that the model would not make grossly inaccurate predictions when it confronts inputs from the real world either that differ in some subtle way from the training data or that are based on a highly complex interaction of the data features.”
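One minimal sketch of the kind of check Brainard alludes to is a per-feature input-drift test: comparing the distribution of live inputs against the training data before the model scores them, for instance with a two-sample Kolmogorov–Smirnov test. The feature names and threshold below are hypothetical illustrations, not anything the Fed has prescribed.

```python
# Sketch of an input-drift check: flag live inputs whose distribution has
# shifted away from the training data before the model scores them.
# Feature names and the p-value threshold are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

def drift_report(train, live, names, p_threshold=0.01):
    """Per-feature two-sample Kolmogorov-Smirnov test.

    Returns {feature_name: drifted?}, so a model owner can escalate before
    the model makes predictions on out-of-distribution inputs.
    """
    report = {}
    for i, name in enumerate(names):
        _stat, p_value = ks_2samp(train[:, i], live[:, i])
        report[name] = p_value < p_threshold  # low p-value: distributions differ
    return report

# Hypothetical usage: two inputs to a liquidity or credit model.
rng = np.random.default_rng(0)
train = rng.normal(size=(5_000, 2))
live = np.column_stack([rng.normal(size=1_000),
                        rng.normal(loc=0.8, size=1_000)])  # second feature drifted
print(drift_report(train, live, ["deposit_outflow", "utilization"]))
```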

Brainard added that, in coordination with the other banking agencies, the Fed is collecting information with a view to deciding what its role should be regarding the potential perils presented by emerging technologies.

“To ensure that society benefits from the application of AI to financial services, we must understand the potential benefits and risks, and make clear our expectation for how the risks can be managed effectively by banks. Regulators must provide appropriate expectations and adjust those expectations as the use of AI in financial services and our understanding of its potential and risks evolve,” she said. “To that end, we are exploring whether additional supervisory clarity is needed to facilitate responsible adoption of AI.”

What will AI regulation look like? 

Chorazak says that regulators will first want to gauge how well bank officers and board members understand the development and adoption of these tools. “That is the classic way that the regulators approach something like this,” he says.

But he says the Fed is unlikely to roll out new rules around AI oversight, at least for now. “Brainard’s comments [quoted above] should be read as the Fed saying it is in the early stages of heightened scrutiny in the AI space.”

Chorazak says that while the Fed might come forward with a proposal “in the next few months,” it won’t be a formal one, but perhaps an advance notice of proposed rulemaking, a document that regulators issue for public comment so they can gather more information on a regulatory change before deciding to put forward actual regulations.

“There will be a lot more symposia, a lot more inquiry on the ground, examiners seeking to understand how AI is being used, and how they could apply existing legal requirements to the various uses of AI,” he adds.

The regulators are likely to look at applying current statutes and regulations to this issue rather than proposing new ones, he says. There are various existing rules and laws that the Fed could bring to bear on AI, many of them consumer protection rules on fair lending and credit reporting, as well as a host of anti-discrimination laws.

“But then there are the concerns about the general safety-and-soundness powers of the banking agencies,” Chorazak says. “How is the use of AI comporting with an institution’s obligation to run itself in a safe and sound manner, and the obligation for its board to have oversight of its management and its affairs?”  

When it comes to technology, some of these worries about safety and soundness are expressed in guidance on vendor relationships and third-party service providers. Although the Fed doesn’t regulate these relationships directly, it provides guidelines for them, and these are very clear that while firms can outsource their activities, they cannot outsource their responsibility for making sure those activities are conducted safely and legally.

The Fed could look to extend the approach of these guidelines to AI, Chorazak says. “Well before we even started talking about AI, for the last seven or eight years, the banking agencies have been very focused on the use of vendors and third-party service providers and what banks are doing to monitor their practices. And perhaps the initial frame for the Fed and other agencies to consider AI will be by asking them how the board is understanding these risks, what are the standards of review it is applying to AI, and what information is the board receiving periodically?”  

No need to open the black box

The Fed’s emphasis on explainability won’t necessarily mean regulators insisting that every AI model be fully transparent from start to finish.

John Bottega, president of the Enterprise Data Management Council, says the regulators could focus on making sure that a bank’s senior management has a grasp of what data goes into models, the integrity of that data, and the outcomes of those models.

“With artificial intelligence and machine learning, there are two sides: the integrity of the model and analytics—that is obvious, you have to have a good model—but then there is also the integrity of the data,” Bottega says. “Because of the strength of the models, the slightest nuance in the data could be misinterpreted.”

The second aspect of explainability, particularly in machine learning, involves revisiting results as they evolve. Bottega says the challenge of AI is that if it is fed skewed data, its power accelerates the chances of replicating errors at scale. This is especially true of machine learning because, as it learns, its outputs change without human intervention, in ways that humans may not anticipate.

I think the onus will be on financial institutions to demonstrate that they are doing their due diligence
John Bottega, Enterprise Data Management Council

“I think the onus will be on financial institutions to demonstrate that they are doing their due diligence—they have the right set-up, the right analysts and data scientists doing the work—so they can demonstrate from an auditability perspective the test script and that they are constantly monitoring outcomes,” Bottega says. But what happens in the “innards of the black box” will be less of a concern.
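A minimal sketch of what that due diligence could look like in practice, assuming a hypothetical append-only audit log and an approval-rate band established during model validation, might be:

```python
# Sketch of the constant outcome monitoring Bottega describes: keep an
# append-only audit trail of every decision and alert when the rolling
# approval rate leaves a band set at validation. All values are hypothetical.
import json
import time
from collections import deque

WINDOW = 1_000                     # decisions per monitoring window
EXPECTED_APPROVAL = (0.35, 0.55)   # band established on validation data

recent = deque(maxlen=WINDOW)

def record_decision(inputs: dict, approved: bool, log_path: str = "audit.log") -> None:
    # Append-only log so any individual decision can be revisited in an audit.
    with open(log_path, "a") as f:
        f.write(json.dumps({"ts": time.time(), "inputs": inputs,
                            "approved": approved}) + "\n")
    recent.append(approved)
    if len(recent) == WINDOW:
        rate = sum(recent) / WINDOW
        if not EXPECTED_APPROVAL[0] <= rate <= EXPECTED_APPROVAL[1]:
            print(f"ALERT: approval rate {rate:.1%} outside expected band")

record_decision({"debt_to_income": 0.55}, approved=True)
```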

Brainard has said before that opacity can be a positive feature of AI models, where information needs to be kept private. In her January speech, she said the Fed would take a contextual approach to explainability: requiring different levels of transparency according to who is using the model and for what purpose.

“The bank employees that interact with machine learning models will naturally have varying roles and varying levels of technical knowledge. An explanation that requires the knowledge of a PhD in math or computer science may be suitable for model developers, but may be of little use to a compliance officer, who is responsible for overseeing risk management across a wide swath of bank operations,” she said.

Artificial intelligence has been on regulators’ radar for years. But the fact that this is coming from Brainard and the Federal Reserve is what makes these statements notable, Chorazak says.

“The speech is important because it shows that these issues are garnering more attention at the most senior levels of the US banking agencies,” he says. “These technologies are only going to mushroom from here, and there has to be some type of supervisory clarity on responsible adoption and use of AI.”
