Waters Wrap: Market data spend and nice-to-have vs. need-to-have decisions

Cost is not the top factor driving the decision to switch data providers. Anthony looks at what’s behind the evolution of spending priorities.

There’s a phrase we use internally at WatersTechnology: “Nice to have or need to have?”

The premise is pretty straightforward: Is a given story merely nice to have, or is it something a subscriber needs to have? Some articles fall into the former category, but we hope the majority of what we publish qualifies as the latter.

Need-to-have is recession-proof. If you keep improving the product, you can grow your business with more users and even introduce incremental price increases without losing customers, which in turn funds investment in new products and resources.

This concept came to mind as I was reading a (fairly) recent report from Coalition Greenwich on market data spending in the capital markets. A survey of 79 respondents—41% of which were wealth managers/private banks, 39% asset managers, and 20% sell-side firms—found that 80% believe data budgets will rise over the next 12 months, with more than 25% anticipating a rise of at least 5%.

It’s an interesting study—you can access it here. While it hits on several different ideas, I want to drill down into one specific question from the report: What are the main drivers for changing market data providers? With most respondents anticipating price increases, I would have guessed that the cost of data licenses would be at the top of the list. Nope: it came in sixth out of 11. The clear winner was data accuracy, followed by feed availability/timeliness, data coverage, and data conformity, with provider reputation rounding out the top five.

“With budgets going up and relationships paramount, it is critical for vendors to execute or risk their clients changing providers,” the report states. Nice-to-have vs. need-to-have: cost can be a barrier, but if the information is accurate and of high quality, users will accept the price increases.

I asked David Easthope—a senior analyst at Coalition Greenwich and co-author of the report alongside Audrey Blater—whether the survey feedback reflects more firms increasing the number of people using that data, or those people simply using the data more. Perhaps unsurprisingly, he thinks it’s a combination of both: more data being made available and more users, and not just in the front office but in the middle and back offices as well.

“What we’re seeing more of is the data is increasingly well-used across the organization,” Easthope says. “Obviously, in these contracts they’ll limit the number of seats, but the more the data is used the more expensive it gets—it’s baked into these agreements, often.”

Easthope wasn’t surprised that quality was at the top of the list of reasons for changing data providers. While he notes that switching is rare—“they’d have to fall down and make a major mistake for people to go through the painful process of actually switching data providers”—cost isn’t necessarily the deal-breaker. “These [data vendors] need to put quality above all else,” he says, because while switching may be painful, inaccurate data leads to lost alpha, poor risk management, and regulatory fines.

It’s worth noting that Coalition Greenwich surveyed the end-users of the data. Quality and accessibility concerns topped the list, while cost, and the technical cost of integrating those services, landed about halfway down. But I can’t help but wonder whether the results would have yielded a different set of priorities if the respondents had been market data professionals and the people in charge of procuring new datasets.

When I posed that question to a market data manager at a regional bank—so not quite apples to apples—they had this to say: “I think that people believe that quality is the most important [priority]—and it should be and most of the time it is. But when [the C-suite gets] involved during the budgeting process, priorities change.”

As I wrote a few months ago, market data managers are not so enamored with shiny new tools and datasets, much less ChatGPT. It still comes down to data procurement, mapping/lineage, cleansing/quality, and cost control. But the various business units/desks will always drive decision-making.

Context is king

Finally, I’d like to point to an interesting LinkedIn post from Keiren Harris at consultancy MarketData.Guru (MDG).

“Prior to 2023, the growth in market data spend was incremental and budgeting. … However, [in] 2023, the cost/consumption relationship flipped. Increasing market data fees [are] not being matched by commensurate added value from data to the investment process,” Harris writes.

A post on MDG’s blog (are they still called blogs? What do the kids call them?) had this to say about the market data costs of one of its clients: “During the period under review starting 2018 and extrapolating to the end of 2023, MDG found the client’s market data spend increased by just over 28%, varying between 3% and 5% [per annum] between 2018 and 2022, but then suddenly accelerating away in 2023 to 9%.” (Side note: reference data costs saw sizable increases, too.)
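For what it’s worth, the arithmetic hangs together. Here’s a rough back-of-the-envelope check of that compounding claim; the year-by-year rates below are illustrative picks from within the ranges MDG quotes, not the client’s actual figures:

```python
# Rough check of the compounding in MDG's numbers. The per-year rates are
# illustrative assumptions within the quoted 3%-5% range for 2019-2022,
# plus the 9% jump cited for 2023; the actual year-by-year path isn't disclosed.
annual_increases = [0.04, 0.04, 0.04, 0.05, 0.09]  # 2019 through 2023

spend_index = 1.0  # index the 2018 spend to 1.0
for rate in annual_increases:
    spend_index *= 1 + rate

print(f"Cumulative increase since 2018: {spend_index - 1:.1%}")  # about 28.7%
```

Four years of mid-single-digit increases followed by a single 9% year lands just over 28%, which is how one year of acceleration shows up as a noticeably bigger cumulative bill.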

Let me add one more line here from the Coalition Greenwich report: “The vast majority of study participants select their data providers based on overall data quality, rather than price. This has allowed market data providers to keep premium prices intact, as long as they deliver the highest quality data with little or no down time and, in some cases, with increasing breadth and depth.”

Consider what MDG and Coalition Greenwich are saying. And here I’ll fly off the rails and make wild leaps of logic. If you know the Waters Wrap column well, you probably knew this sentence was coming: Data is great, but being able to provide context/analytics is king.

Based on conversations I’ve had with executives in recent months, I think this is at the heart of data spend. Cloud and stable APIs are becoming table stakes, and open source is becoming increasingly important in capital markets tech development.

A decade ago, contributing to the open-source community was considered controversial. Private—or at least hybrid—clouds were the standard. APIs, while a thing, weren’t the thing. So it is that we’re at a tipping point in tech evolution.

Technology is allowing for the creation of more data. It makes that data easier and cheaper to store and distribute. And it enables the analytics that make that sea of data easier to understand and contextualize. The race isn’t just to have data; it’s to be able to integrate it and make it interoperable—whether on a desktop, with internal systems, or with other major data platforms. Gone are the days of walled gardens.

It’s a free market, and if you’re just a nice-to-have provider, then, for better or worse, it’s easier to cut the cord.

The image accompanying this column is “Wolf and Fox Hunt” by Peter Paul Rubens, courtesy of The Met’s open-access program.


