Lighting Up the Black Box: A Must for Investors?

Many quants contend that you must be able to interpret machine learning in order to use it.

  • More quant funds are using machine learning to help run some part of decision-making, but the workings can be opaque and the route to outcomes is often unknown.
  • Many funds are queasy about this lack of interpretability—the so-called black-box problem—and hesitate to use machine learning outcomes if they don’t know how they are derived.
  • Others place more emphasis on the accuracy of the model, and care less about understanding its outputs.
  • “We don’t care about interpretability,” says Ernest Chan, managing partner at QTS Capital.
  • There are plenty of shades of grey between these extremes, and even ML skeptics become enthusiasts when the models are applied to the ‘right’ problems. 

The enigma of the black box applies not just to its mystifying workings, but also to the origin of its name. One credible story is that World War II pilots coined the term to describe their new-fangled navigational equipment. Not knowing what radar wizardry took place inside the box—or how it performed its apparent miracles—didn’t deter the pilots from doing their jobs.

Machine learning in investing, however, is a different story. Here, the so-called black-box problem—whether to use a model without knowing quite how it reaches its conclusions—is putting the brakes on the technology. While a clutch of interpretability methods, designed to make sense of complex models, is gaining popularity, investment managers continue to be mistrustful of ML as part of their decision-making engine. And many investors think it can be applied only to certain types of problems—problems other than making investment decisions.

Too much is at stake, says Mike Chen, director of portfolio management at PanAgora Asset Management. “An e-commerce company recommending buying one pair of gloves versus another is one thing, but an algorithm that says you should buy stock x over stock y is another,” he says. “We’re putting literally billions of dollars of our clients’ money on the line. We have a fiduciary responsibility.”

Many other investors also see the lack of interpretability as a roadblock.

Joshua Livnat, head of research at the $127 billion QMA Asset Management, is among them. Machine learning models are generally better at making short-term predictions, and Livnat acknowledges that quants focused on the short term could use them to play shorter-term market conditions as they occur—though this is not QMA’s investment style.

“Economic intuition is extremely important to us. If we don’t understand the economic intuition behind something, we do not use it. If we do not understand the times when we expect to do well with a strategy or a factor or a model, and times that we don’t expect to do well, we simply don’t use it,” he says.

But while some grapple with the dilemma of trusting models they don’t fully understand, for others, it isn’t an issue.

“We don’t care about interpretability,” says Ernest Chan, managing partner at QTS Capital.

There is a difference, says Chan, between pure quants like QTS that take black-box model prediction as the output—and thus care more about its accuracy than its interpretability—versus managers who are more interested in understanding how the machine made its predictions as a way of assisting their discretionary decision-making.

A particular risk that quants run with black-box models is not knowing how they will behave in the future.

No investor would accept the risk of investing with a manager who said, ‘Well, I think the portfolio’s positioning is good today. But I don’t know if it’ll be good next week.’
Alexander Healy, AlphaSimplex Group

Alexander Healy, chief investment officer and portfolio manager at AlphaSimplex Group, says: “No investor would accept the risk of investing with a manager who said, ‘Well, I think the portfolio’s positioning is good today. But I don’t know if it’ll be good next week.’ They need a portfolio manager to be managing with an awareness of the risks and how the world changes going forward.” 

AlphaSimplex’s portfolios are managed using a systematic process, but all models, including AI-based ones, are thoroughly reviewed by the firm’s investment committee to vet the economic rationale that underlies them. “It’s not enough for a model to just appear to be profitable; you need to understand where the profitability is coming from,” adds Healy.

Michael Steliaros, global head of quant execution services for Goldman Sachs in London, is one quant investigating powerful black-box AI approaches such as neural networks and random forests. Many of these approaches have had significant success in fields outside finance—so “it’s extremely tempting”, he says, to take them and use them.

“The speed of getting to a result is quite significant,” Steliaros admits.

But most AI has been developed in fields where the outcome is already known—think about models that distinguish cats in photos. This isn’t the case in markets.

“In applications outside finance, the fact that it may be a black-box approach is not necessarily a problem, because you can eventually get the same outcome consistently. But in finance, if you try to predict returns, volumes, volatility and so on, that is something that is quite endogenous to the process. It is not an outcome that you can predefine and, ultimately, having a vague process for a vague outcome doesn’t lend itself to ease of use or stable results,” he says.

Layering up

The black-box approach does have its advocates. Petter Kolm, a professor at New York University’s Courant Institute of Mathematical Sciences, works at the heart of academic quantitative finance research. He is a believer in black-box AI because it can uncover complex relationships in markets in a streamlined way that more classical statistical techniques could not.

As long as quants follow a thorough model development process—starting with a simpler model that is easy to interpret, says Kolm, then extending it to a more complex one—users can test and verify that models will not behave in bizarre ways.

“Because we often build models by adding one layer of complexity at a time, we can infer any of the improvements we’re making to the process,” Kolm says. Constructing a model that is able to reproduce results in line with theory gives its builders comfort in what the model is doing, even if it’s a black box.
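Kolm’s one-layer-at-a-time approach can be sketched in a few lines of Python. The data, factors and coefficients below are entirely synthetic and illustrative; the point is only that each added layer of complexity can be checked in isolation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Entirely synthetic data: returns driven by two factors plus an interaction.
n = 500
x = rng.normal(size=(n, 2))
y = 0.5 * x[:, 0] - 0.3 * x[:, 1] + 0.2 * x[:, 0] * x[:, 1] + rng.normal(scale=0.1, size=n)

def r_squared(X, y):
    """Fit ordinary least squares and return the in-sample R-squared."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Layer 1: a plain linear model on the two factors, easy to interpret.
X1 = np.column_stack([np.ones(n), x])
r2_linear = r_squared(X1, y)

# Layer 2: add one interaction feature, a single auditable step up in complexity.
X2 = np.column_stack([X1, x[:, 0] * x[:, 1]])
r2_interact = r_squared(X2, y)

print(f"linear R^2:        {r2_linear:.3f}")
print(f"+ interaction R^2: {r2_interact:.3f}")
```

Here the interaction term is the single added layer; if the fit had not improved, or the simpler model’s coefficients had shifted inexplicably, that would be a flag to investigate before adding anything further.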

There’s also a body of research aimed at shedding light on the inner workings of black-box models. Much of the work focuses on understanding which factors are most important to the machine when it makes its predictions.

Because we often build models by adding one layer of complexity at a time, we can infer any of the improvements we’re making to the process
Petter Kolm, New York University
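One common technique of this kind is permutation importance: shuffle one input at a time and measure how much the model’s error worsens. A minimal sketch, assuming nothing about any firm’s actual models; the signals are synthetic and ordinary least squares stands in for the black box:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic signals: three candidates, only the first two actually drive returns.
n = 1000
X = rng.normal(size=(n, 3))
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.2, size=n)

# OLS stands in for the fitted "black box"; the recipe works for any model.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(M):
    return M @ beta

base_mse = np.mean((y - predict(X)) ** 2)

# Permutation importance: shuffle one feature at a time and record
# how much the model's error degrades without that information.
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(np.mean((y - predict(Xp)) ** 2) - base_mse)

for j, imp in enumerate(importance):
    print(f"feature {j}: importance {imp:.4f}")
```

The noise feature scores near zero, while the two real drivers rank in order of their true weight, the kind of sanity check that tells a manager which factors the machine is actually leaning on.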

But these approaches explain only isolated choices about single stocks, according to Michael Heldmann, who heads multi-factor equity investing for North America at Allianz Global Investors in San Francisco. They don’t explain a rationale behind portfolio-level positioning.

“In the end,” says Heldmann, “you have to revert to performance analysis: calculating hit ratios, calculating risk contributions for single instruments, doing scenario analysis, doing factor analysis, finding out how much value and how much momentum is in the portfolio, how sensitive it is to interest rate change, or GDP improvements in the US.”
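The first of those checks, the hit ratio, takes only a few lines to compute. The signals and returns below are simulated; in practice they would come from a live model’s history:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated history: 250 daily long/short calls and realized next-day returns.
days = 250
signals = rng.choice([-1.0, 1.0], size=days)
returns = 0.3 * signals * 0.01 + rng.normal(scale=0.01, size=days)

# Hit ratio: fraction of days the model called the direction correctly.
hit_ratio = np.mean(np.sign(returns) == signals)

# Strategy P&L from following the calls, and a rough annualized Sharpe ratio.
pnl = signals * returns
sharpe = pnl.mean() / pnl.std() * np.sqrt(252)

print(f"hit ratio: {hit_ratio:.1%}")
print(f"annualized Sharpe (toy data, unrealistically high): {sharpe:.2f}")
```

The same P&L series would then feed the rest of Heldmann’s list: risk contributions per instrument, scenario analysis and factor exposures.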

For quants like Heldmann, there ends up being a trade-off between the power of a model and how much of a black box it is.

The right stuff

That trade-off can be seen among machine learning skeptics more widely, many of whom become enthusiasts so long as the technology is pointed at the right sort of problem.

Spotting patterns is one such example. PanAgora has found black boxes can be useful for finding patterns previously unknown to them, particularly in large datasets where there may be too many variables for a human to consider.

“We’re very good at creativity and connecting the dots and doing something that’s new and innovative, but in terms of brute force, machines are just much quicker, and they’re more powerful, and they’re more thorough,” Chen says.

Leading machine-learning specialist Marcos Lopez de Prado has said that quant investors who fail to use machine learning in research—for example, to select features for more conventional models—will struggle.

In other areas, Man AHL has used machine learning to route orders to the best algorithm for executing a trade. At Goldman, quants have applied unsupervised models in non-critical areas that produce consistent, repeatable results.

“What we are not using AI/ML for is directional market or stock price predictions. We do not believe this is an area where our research capacity would be best spent. For some more nuanced, high-frequency, large datasets around liquidity and volatility, permeated by complex, non-linear relationships we have found some encouraging results and applications,” Steliaros says.

Back to basics

When it comes to interpretability more broadly, though, Goldman Sachs Asset Management, like others, is clear.

Osman Ali, managing director and senior portfolio manager at GSAM QIS says: “Much of our research is anchored on some sort of fundamental idea and intuition that we’re trying to use data to express and capture.”

Having a hypothesis and testing it leads to more powerful insights, he argues, whereas machine learning might get the answer right while teaching investors nothing.

“We empirically test to see whether data is helping us in the way we thought it would and when it does that’s a lot more powerful than just a statistical analysis. It’s analysis that is corroborating an economic prior, and so you can feel pretty good that your chances of data mining and overfitting are low.”

“We would absolutely not subscribe to using a black box,” Ali says.
