Artificial Intelligence: From Winter to Spring

AI is experiencing a renaissance, but some are concerned that it could carry hidden risks.

AI Winter
Gone are the long days of the "AI Winter", but the new spring is potentially rife with hazard.

  • While the industry marches headlong into the development and deployment of artificial intelligence-powered processes and systems, regulators—and indeed, market participants themselves—are concerned that widescale adoption could introduce elements of systemic risk.
  • This potential risk ranges from how models behave in periods of market stress, through to how easy it is for humans to unpick the decisions that these machines make.
  • Regulators, burned by their experiences with algorithmic trading, are hesitant to ask for windows into these systems, but say they must be governed by appropriate controls.

In July 2017, a spate of newspaper articles claimed that researchers at Facebook had pulled the plug on an experiment between two chatbots powered by artificial intelligence (AI), after the machines began to develop their own language in order to complete a trade. While the experiment was shut down because it wasn’t performing the work required—and not, as more salacious coverage suggested, because the researchers were frightened by the development—it raised interesting questions about how AI models interact, and the degree to which humans are capable of understanding not only the conclusions they arrive at, but how they get there in the first place.

It’s certainly an issue that has kept researchers at the Financial Stability Board (FSB) awake at night. As the capital markets increasingly embrace AI, and its various subsets, including machine learning, deep learning, natural-language processing (NLP) and robotic process automation (RPA), the FSB warned in a November 1 report that a headlong rush to implement these technologies could introduce unforeseen risks.

The areas of concern raised include a number of potential risk factors, such as how these models may react in stressed market conditions, how they interact with each other, and whether a handful of providers could gain a monopoly. Most centered on one key question, though: Can humans realistically tell what goes on inside these complex machines, and crucially, can anyone not intimately familiar with their construction work out how they arrive at their conclusions?

Alarmingly, even AI experts have tended to say no. “Algorithms created by AI are very hard to understand, especially when they go wrong, and I don’t think there are the tools to reverse engineer them,” says the New York-based CTO of a US investment bank. “By their nature they are not easily understood.”

The AI Winter

Before examining the current state of AI, it’s important to take a step back and understand that very little of this is actually new. Indeed, AI, as a scientific discipline, has formally been around for over 60 years, with the accepted history being that it was founded by a group of scientists at Dartmouth College in 1956. Elements of the field have their roots even further back, stretching back decades to Alan Turing’s theory of computation, and even hundreds of years to Schickard and Pascal’s rudimentary calculating machines, built in the 17th century.

During the 1960s and part of the way through the 1970s, governments invested heavily in AI, but the limited machinery of the time could not deliver on the eye-watering promises of progress that AI specialists had made. Funding was abruptly cut off, ushering in a period known as the “AI winter,” during which little substantive progress was made.

It wasn’t until the 1990s that AI and its disciplines began to be seriously studied—and employed—once more. Advancements in processing power, storage, and the amounts of data being created on a daily basis had finally made it possible to put the theory into practice. “Nothing changed in science except computational platforms,” says Michael Dobrovolsky, executive director, machine learning, AI and decision science at Morgan Stanley Global Wealth Management. “Everything that we’re doing right now, from a scientific point of view, has been known for 10, 20 years. What changed is that computation platforms and compute [power] became available.”

Now, the field is experiencing a renaissance, particularly in financial services. Nearly every major financial institution is engaged, to some degree, in the research and development of AI-powered platforms across a variety of use cases.

In the back office, trade reporting and settlement have been ripe areas of exploration for RPA, with mixed results, while in the middle office, firms have invested heavily in machine learning and cognitive computing for processes such as surveillance, to improve oversight capabilities. This isn’t just theory—in September 2017, Nasdaq rolled out machine learning in the surveillance departments of its Nordic exchanges, designed to more intelligently alert officers to potential compliance breaches.

In the front office, various initiatives have focused on generating trade ideas, interacting with customers through robo-advisory services and, increasingly, managing risk on a pre-trade and at-trade basis.

Potential Risks

The FSB’s list of potential risks was primarily concerned with the idea of AI taking an active role in the front office, executing trades mechanically. While these concerns are largely academic at present, a number of senior technologists on the buy and sell sides interviewed by Waters did not dismiss them out of hand. Some cited the growing range of AI packages available to developers as one cause for concern. Many modern AI applications are built on tools such as Google’s TensorFlow or scikit-learn, which a number of specialists posited may lead to a convergence in processes that could prove disastrous in abnormal conditions. A case of smart machines, ironically, acting dumb—in concert.

Elliot Noma, managing director at Garrett Asset Management, for instance, says that there is “major concern” that current AI models are being trained on quiet periods, and that this “could be serious, since we often don’t know what models will do outside these quiet periods, and the boundary locations between normal and extreme events are unknown.”

In addition, he says, these shared packages “encourage analysts to gravitate toward a common set of models. These models could act synchronously once market conditions are outside their trained conditions,” he continues. “Even if we do not get to the point of monopolies for AI providers, we may already be at the point of having de facto monopolies on the methods and approaches we use to create new models.”
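Noma’s point about de facto convergence can be sketched in a few lines of Python. In the hypothetical example below, two notionally independent desks fit the same scikit-learn estimator, with its default settings, to different halves of the same quiet-period history. The desks, features and data are all invented for illustration, but the exercise shows how readily off-the-shelf defaults produce near-interchangeable signals, including on shocked inputs neither model has seen.

    # Hypothetical sketch: two "independent" desks build signals with the same
    # off-the-shelf scikit-learn defaults on similar data. Desk names, features
    # and data are invented for illustration only.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)

    # Synthetic "quiet period" market features and next-day returns.
    X = rng.normal(0, 0.01, size=(2000, 5))
    y = X @ np.array([0.4, -0.2, 0.1, 0.0, 0.3]) + rng.normal(0, 0.002, 2000)

    # Desk A and desk B each take a different half of the history, but both
    # reach for the same library defaults.
    idx = rng.permutation(len(X))
    desk_a = GradientBoostingRegressor().fit(X[idx[:1000]], y[idx[:1000]])
    desk_b = GradientBoostingRegressor().fit(X[idx[1000:]], y[idx[1000:]])

    # On fresh quiet-period data their signals are nearly interchangeable...
    X_quiet = rng.normal(0, 0.01, size=(500, 5))
    print(np.corrcoef(desk_a.predict(X_quiet), desk_b.predict(X_quiet))[0, 1])

    # ...and they typically remain tightly correlated even on shocked inputs
    # far outside the training range, i.e. they would tend to act in concert.
    X_shock = rng.normal(0, 0.10, size=(500, 5))
    print(np.corrcoef(desk_a.predict(X_shock), desk_b.predict(X_shock))[0, 1])

None of this requires a badly built model; the convergence falls out of shared tooling and shared defaults, which is precisely the de facto monopoly on methods that Noma describes.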

Anomalous Conditions 

In some ways, this is an impossible situation to guard against. Anomalous market conditions are so dubbed precisely because they are rare events outside the norm. Historical data can only go so far in teaching models how to react during stressed conditions, because each period of market stress tends to have wildly different inputs.

The market strain of the credit crunch and the global financial crisis, for instance, could not be replicated verbatim today. That is because regulators—and the industry—have put in place mechanisms to defend against such conditions, including an expanded use of clearinghouses in derivatives markets, and stricter requirements on the amount of capital banks must hold against risky inventory.

Likewise, even less severe events, such as the Flash Crash of May 2010, are unlikely to play out the same way today, thanks to the widespread introduction of circuit breakers, and market-monitoring initiatives such as the Consolidated Audit Trail.

Yet this unknown factor remains a live concern. The investment bank CTO says that as models are data-driven, they can fall prey to “garbage in, garbage out” scenarios.

“I would not trust AI trained during quiet times,” the CTO says. “They do need to be trained during all different scenarios, but it is impossible to train them on all scenarios, so they are susceptible to erring. There needs to be a framework to define the minimum number of needed scenarios to reduce the risk of poorly trained AI.”
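The framework the CTO describes can be sketched, in miniature, as a pre-deployment check: a model trained only on a calm regime is re-scored against a battery of synthetic stress scenarios, and its error is tracked as conditions move away from the training data. The scenario names, parameters and toy market model below are illustrative assumptions, not an industry or regulatory standard.

    # A minimal sketch of a scenario-coverage check, assuming a simple
    # synthetic market; scenario definitions are invented for illustration.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(1)

    def make_market(n, vol, jump_prob=0.0):
        """Synthetic features and returns; jumps stand in for stressed conditions."""
        X = rng.normal(0, vol, size=(n, 4))
        jumps = rng.binomial(1, jump_prob, n) * rng.normal(0, 10 * vol, n)
        y = X @ np.array([0.5, -0.3, 0.2, 0.1]) + jumps + rng.normal(0, vol / 5, n)
        return X, y

    # Train only on a calm regime, as the CTO warns against.
    X_train, y_train = make_market(5000, vol=0.01)
    model = Ridge().fit(X_train, y_train)

    # Re-score the same model under progressively harsher, hypothetical regimes.
    scenarios = {
        "quiet":        dict(vol=0.01),
        "elevated vol": dict(vol=0.03),
        "crisis vol":   dict(vol=0.08),
        "vol + jumps":  dict(vol=0.08, jump_prob=0.05),
    }
    for name, params in scenarios.items():
        X_s, y_s = make_market(1000, **params)
        print(f"{name:>12}: MAE = {mean_absolute_error(y_s, model.predict(X_s)):.4f}")

The hard part, as the CTO notes, is deciding how many such scenarios constitute adequate coverage; a sketch like this only shows where a quiet-time model starts to degrade.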

Even so, all of this careful training may count for little if AIs do go haywire, and market participants and regulators are unable to go back and reconstruct the chain of events that led to the breakdown.

“Many of the models that result from the use of AI or machine-learning techniques are difficult or impossible to interpret,” the FSB report stated. “The lack of interpretability may be overlooked in various situations, including, for example, if the model’s performance exceeds that of more interpretable models. Yet the lack of interpretability will make it even more difficult to determine potential effects beyond the firms’ balance sheet, for example during a systemic shock.”

Such an emphasis, says Garrett Asset Management’s Noma, puts the industry at risk during systemic events. “This is compounded by the lack of science behind the art of creating stress scenarios for risk management and the unknown interactions among models in extreme situations,” he explains. “One additional concern I have is the lack of experience many in data science have in using a wide range of models, as companies have often put an emphasis on expertise in specific models and datasets, as opposed to a broad understanding of the wide range of models and the pros and cons for using each of them.”
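The interpretability gap the FSB report describes can be made concrete with a simple, hypothetical comparison: a linear model whose behavior is summarized by a handful of readable coefficients, set against a boosted ensemble of a hundred trees that offers only a coarse feature ranking rather than an account of any individual decision. The feature names and data below are synthetic.

    # Illustrative only: synthetic data, invented feature names.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(2)
    features = ["spread", "depth", "volatility", "flow"]
    X = rng.normal(size=(3000, 4))
    y = 0.6 * X[:, 0] - 0.4 * X[:, 2] + 0.1 * X[:, 0] * X[:, 3] + rng.normal(0, 0.1, 3000)

    linear = LinearRegression().fit(X, y)
    blackbox = GradientBoostingRegressor().fit(X, y)

    # The linear model's behavior is summarized by four readable numbers...
    print(dict(zip(features, linear.coef_.round(3))))

    # ...whereas the ensemble is a hundred trees with thousands of split rules;
    # feature_importances_ gives a coarse ranking, not an explanation of any
    # single prediction.
    print(dict(zip(features, blackbox.feature_importances_.round(3))))
    print("trees in ensemble:", blackbox.n_estimators_)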

AI Gone Rogue

If all of this sounds familiar—particularly the FSB’s concerns about AI-created algorithms acting synchronously in feedback loops—it should. Similar concerns were raised in recent years as algorithmic trading took off, first in equity and foreign-exchange (FX) markets, then futures, and increasingly, fixed-income and derivatives markets. Then, regulators cracked down hard, insisting on simulations and, in extreme cases, proposing legislation that would have required firms to provide regulators with a copy of their source code, a provision since largely abandoned.

The Flash Crash, the collapse of Knight Capital, and other such instances of “algos gone rogue” provided the fuel for these measures, but when it comes to AI, regulators are hesitant to advocate the same kind of nuts-and-bolts intervention they applied to algorithmic trading.

In a speech on December 6, Rob Gruppetta, the head of the financial crime department at the UK’s Financial Conduct Authority (FCA), cast doubt on the regulatory appetite to examine the guts of AI at banks and other financial institutions, while referencing the FSB report. “If regulators were to insist on a window into the machine’s inner workings, then this would, in effect, be a regulatory prohibition of the use of the more free form varieties of artificial intelligence where such a window is not possible,” Gruppetta said. “But what is it reasonable and proportionate for us to ask for? What we do expect to see is new technology implemented in a way you would any other—testing, governance and proper management. We as regulators clearly need to think more on this. We are encouraged that many firms out there are starting to develop their own code of ethics around data science, encouraging responsible innovation.”

Other regulators have made similar observations, often citing the fact that AI is still relatively immature, or that a case for intervention has yet to be made. Most, however, stress the importance of proper procedure around any deployment, and that AI should be as rigorously tested and bug-checked as any other algorithm or piece of software.

Steven Maijoor, chair of the European Securities and Markets Authority (Esma), tells Waters that while it “is good to give more attention to technology,” AI and its subsets are just “one part” of technological innovation in financial markets at present. “As Esma we have this dual perspective on technology,” he says. “On the one hand, we think it can improve financial services but on the other hand there can be stability and investor protection issues, there can be risk issues.” 

Maijoor pointed to provisions in the revised Markets in Financial Instruments Directive (Mifid II) around algorithmic trading as evidence that the regulator is active in overseeing technology, and willing to intervene when it becomes necessary.

Sky-not

Humans have always had a complicated relationship with machines. While the industrial revolution mechanized the world and gave rise to living standards that nobody thought possible even a few hundred years ago, it also brought a new age of warfare that resulted in two world wars, and weapons that can level cities. The internet has transformed modern society, but as it develops, it has also ushered in an era of criminal enterprise, identity theft, and risk on an apocalyptic scale.

As a species, we tend to experience a form of cognitive dissonance when it comes to technology. We celebrate development and encourage its forward march, sometimes even at the risk of our own lives, while simultaneously worrying that it may prove to be our undoing—even a cursory glance at the body of speculative fiction, with its killer robots and despotic machines, vividly demonstrates this concern when it comes to AI. Indeed, such is the wariness around new technology that fintech—a broad sector that tends to include emerging technologies such as AI—ranked as a major source of concern on the Depository Trust and Clearing Corp.’s semi-annual Risk Barometer for 2018, the first time it had been included in the survey.

Therefore, when it comes to AI and systemic risk—in particular, the FSB’s concerns—market participants don’t just agree, but say they are already alert to the issues. “While I think that the FSB’s concerns are certainly valid, I think the side of it that’s not represented is that there’s already a strong understanding of what needs to be done within that space,” says Brian Martin, director of technology in the AI practice at Publicis.Sapient. “If collective energy can be focused on methods of understanding, structures of understood accountability, and an aggressively active engagement on the part of regulators, then there is an opportunity to drive deeper efficiency in almost all aspects of financial market systems, while avoiding the pitfalls of reckless exuberance.”
