Market data infrastructure’s transition to the cloud has gathered significant pace recently, driven by the growing recognition of benefits such as enhanced accessibility, flexible distribution, superior scalability, increased efficiency, and the potential for considerable cost reductions in both sourcing and managing data.
As use cases for market data in the cloud grow, notably in areas such as analytics, data science, and quantitative research, a group of industry experts came together to discuss these trends in depth during a recent A-Team webinar, “Market Data in the Cloud: Fuelling the next generation of data delivery solutions”, sponsored by Refinitiv, an LSEG business. The webinar featured panelists: Bill Bierds, President of BCC Group; Tim Edwards, Sales Engineer, Google Cloud Platform, Google; Anna Branch, Head of Strategic Partnership at TP Quilter; and Jason West, Head of Real-Time Refinitiv Managed Services at Refinitiv, an LSEG business. The discussion was moderated by Sarah Underwood, Editor at A-Team Group.
The discussion started with one panelist describing an evident shift in the market data landscape towards cloud-based solutions. Over the last 18 months, the adoption of cloud technology for the sourcing and distribution of market data has notably increased across diverse customer bases, from the front office to the back office, and an array of cloud-based solutions has been developed to meet this growing demand.
Major cloud services like Microsoft’s Synapse, Google’s BigQuery, Amazon’s Redshift, and Snowflake are stimulating this trend by offering innovative products through user-friendly interfaces. One panelist pointed out that Google’s BigQuery is adept at importing data and swiftly distributing it to multiple authorised users, enabling quick, low-cost access. By removing the need for substantial upfront infrastructure investment, such services accelerate the testing, iterative development, and production rollout of new products, or the rapid discontinuation of unsuccessful initiatives, and so encourage a culture of ‘fast failure.’
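The load-once, query-in-place pattern the panelist describes can be sketched in a few lines. This is a minimal illustration only: sqlite3 stands in for a cloud warehouse such as BigQuery, and the table and column names are invented for the example.

```python
import sqlite3

# sqlite3 stands in for a shared cloud data warehouse: data is loaded once,
# then many authorised consumers run queries against it in place.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (symbol TEXT, price REAL, qty INTEGER)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?, ?)",
    [("VOD.L", 72.5, 100), ("VOD.L", 72.6, 250), ("BARC.L", 151.2, 500)],
)

# Each consumer sends a query to the data rather than copying the dataset out.
vwap = conn.execute(
    "SELECT symbol, SUM(price * qty) / SUM(qty) AS vwap "
    "FROM trades GROUP BY symbol ORDER BY symbol"
).fetchall()
print(vwap)  # per-symbol volume-weighted average price
```

Because the computation travels to the data, each new idea can be tested with a query rather than a new data pipeline, which is what makes cheap iteration and ‘fast failure’ practical.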
In response to the first audience poll of the webinar, which asked: “How much progress has your organisation made in moving market data and infrastructure to the cloud?”, 50% of respondents said they had made either “significant progress” or “as much progress as possible,” while just 17% said they had made “no progress.” Panelists felt this was a positive indicator of cloud adoption, particularly alongside the results of the subsequent poll, which asked: “What types of data delivery solutions does your organisation use for market data distribution from the cloud?” Here, 100% of the audience cited real-time feeds as one of their delivery solutions. This came as encouraging news to one panelist, who suggested that until just a couple of years ago, firms would have used cloud providers only for historic data, not for real-time feeds.
The subject of cloud data delivery models was then discussed in more depth. Panelists suggested that firms should capitalise on the native capabilities that the various cloud platforms provide, such as data warehouses or pub/sub messaging, for example. They also suggested firms should look at what the various market vendors now offer in terms of cloud-hosted solutions, as they allow firms to access data from trusted vendors on platforms that are scalable, resilient, and can meet the needs of both those providing the data and those looking to consume it.
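The pub/sub messaging the panelists mention can be illustrated with a minimal in-process sketch. Managed services such as Google Cloud Pub/Sub provide this pattern natively; the broker, topic, and subscriber names below are invented for the example.

```python
from collections import defaultdict

class Broker:
    """Toy in-process broker illustrating the pub/sub delivery pattern."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of inboxes

    def subscribe(self, topic):
        # Each subscriber gets its own inbox for the topic.
        inbox = []
        self._subscribers[topic].append(inbox)
        return inbox

    def publish(self, topic, message):
        # A single publish fans out to every subscriber on the topic.
        for inbox in self._subscribers[topic]:
            inbox.append(message)

broker = Broker()
risk_feed = broker.subscribe("prices.equities")
analytics_feed = broker.subscribe("prices.equities")
broker.publish("prices.equities", {"symbol": "VOD.L", "price": 72.5})
print(risk_feed, analytics_feed)
```

The point of the pattern is that the publisher does not need to know who consumes the data: new teams subscribe to the topic without any change on the publishing side, which is what makes it attractive for distributing market data across an organisation.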
A debate then ensued on the merits of market data delivered directly into data lakes that allow users to send queries to the data rather than shifting large amounts of data around. One panelist suggested that such an approach can allow an entire organisation, including risk teams, compliance teams, post-trade teams, execution teams, data scientists, and others, to access the same type of data using a single API. However, it was pointed out that a challenge with this approach is around licensing and permissioning, i.e. understanding and managing how the data is being used within the organisation, and by whom.
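The licensing and permissioning challenge raised here can be sketched as an entitlement check sitting behind the single API. This is a simplified illustration; the team names, dataset names, and `fetch` function are all hypothetical.

```python
# Hypothetical entitlement map: which datasets each team is licensed to use.
ENTITLEMENTS = {
    "risk_team": {"eod_prices", "curves"},
    "data_science": {"eod_prices"},
}

def fetch(dataset, user):
    """Single entry point for market data, with a per-user licensing check."""
    if dataset not in ENTITLEMENTS.get(user, set()):
        raise PermissionError(f"{user} is not licensed for {dataset}")
    return {"dataset": dataset, "rows": []}  # placeholder payload

print(fetch("curves", "risk_team")["dataset"])
try:
    fetch("curves", "data_science")
except PermissionError as exc:
    print("denied:", exc)
```

Centralising access this way also gives the data office an audit point: every query passes through one place where usage can be logged against the vendor licence.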
Another topic of conversation was the use of data for Artificial Intelligence models. As one panelist noted, “AI is everywhere at the moment”, and cloud platforms are ideal for storing and processing the large corpuses of data needed to drive the AI models and to run the necessary computations against that data.
The session concluded with panelists offering practical advice on how to get the most out of the cloud when implementing market data delivery models.
First of all, an essential early step in any cloud strategy is to classify workloads from critical to non-critical, and to plan the migration journey accordingly. One thing not to do is attempt a “lift and shift” of existing legacy applications into the cloud. Instead, firms should start from the use cases they are actually trying to address and the problems they are trying to solve; break down how they do things today; work out how this can be better achieved with cloud services; and iterate from there.
Another sage word of advice was to engage InfoSec teams right from the outset, rather than building something and then trying to implement the necessary security later on.
Finally, implement FinOps at an early stage to provide full visibility into both implementation and running costs.