A-Team Insight Blogs

The Open Source Market Data Ecosystem – Open for Business

Andrew Miller of Net Effect, a consultant on the OpenMAMA project, takes a look at the state of play of open initiatives in the market data space.

A white paper published by A-Team last month went into some depth on the state of market data infrastructure, the pain points, and the benefits of adopting open source technologies. This article is a quick look at what the ecosystem might consist of, what already exists, what is still required and what might be closer than you think.

The concept is that by adopting open source technologies and, in particular, a common, full-function, high-performance API abstraction layer, multiple benefits accrue. Some of these are:

  • Full source code is available without download or usage fees, which helps remove vendor ‘lock-in’ and encourages experimentation, leading to innovation.
  • More usage and “eyes on the code” brings higher quality and faster issue resolution.
  • Applications written using the API can connect to any compliant middleware, giving freedom to change or mix and match infrastructure at low cost and low risk.
  • Compliant data sources will work “out of the box”, so end-users can choose the most appropriate sources without complex connectivity issues.
  • Competition and new entrants are encouraged, because newcomers can approach the market knowing that their value-added applications and data are easily integrated.
  • With competition, services and products must improve (promoting innovation) or be cheaper (driving down costs).
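
The abstraction-layer idea behind these benefits can be sketched in a few lines. OpenMAMA's real API is C-based and considerably richer; the Python below is purely illustrative, and all class and method names (`MiddlewareBridge`, `subscribe` and so on) are hypothetical, not OpenMAMA's actual interface. The point it demonstrates is that an application coded against the abstract API never touches middleware-specific code, so the bridge underneath can be swapped freely:

```python
from abc import ABC, abstractmethod

class MiddlewareBridge(ABC):
    """Hypothetical bridge contract: each middleware implements this."""
    @abstractmethod
    def subscribe(self, symbol, on_update):
        ...

class VendorBridge(MiddlewareBridge):
    """Stands in for a commercial platform's bridge."""
    def subscribe(self, symbol, on_update):
        on_update({"symbol": symbol, "bid": 1.0842, "source": "vendor"})

class OpenSourceBridge(MiddlewareBridge):
    """Stands in for an open source middleware bridge."""
    def subscribe(self, symbol, on_update):
        on_update({"symbol": symbol, "bid": 1.0842, "source": "open_source"})

class MarketDataApp:
    """Codes only against the abstract API, so the underlying
    middleware can change without touching this class."""
    def __init__(self, bridge: MiddlewareBridge):
        self.bridge = bridge
        self.updates = []

    def run(self, symbol):
        self.bridge.subscribe(symbol, self.updates.append)

# The same application runs unmodified over either bridge.
app_a = MarketDataApp(VendorBridge())
app_a.run("EURUSD")
app_b = MarketDataApp(OpenSourceBridge())
app_b.run("EURUSD")
```

Because only the constructor argument changes between the two runs, switching middleware becomes a deployment decision rather than a development project, which is precisely where the low-cost, low-risk claim comes from.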

It’s a win for suppliers, for end-users and for those charged with increasing profits whilst reducing costs. This is not a quick fix, though; strategic planning and commitment are needed to truly benefit in the longer term.

Several firms are already well down the path. JP Morgan are public supporters and are seeing many benefits. Other banks and brokers have adopted the approach but are not so public, which is a shame but nonetheless gives credibility. The more end-user firms adopt strategically, the more vendors are encouraged, if not forced, to support it, bringing true optionality closer.

So, what’s in the pot and what else do we need?

The abstraction layer is well covered by the Linux Foundation’s OpenMAMA project. Numerous bridges are available providing connectivity to common platforms such as TREP, Bloomberg, Solace and now even ZeroMQ, the open source middleware popular with banks and brokers.

ZeroMQ provides a suitable transport for both trading and market data distribution and so end-to-end messaging can be provided completely within the open source world.
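
What makes ZeroMQ a natural fit for market data distribution is its PUB/SUB pattern with topic-prefix filtering: a subscriber registers a prefix and receives only messages whose topic starts with it, while an empty prefix receives everything. The real binding (pyzmq in Python) handles sockets and wire transport; the dependency-free sketch below models only the prefix-matching semantics, with all names hypothetical:

```python
class PrefixPubSub:
    """Models ZeroMQ-style PUB/SUB semantics: a subscriber receives
    every message whose topic starts with its subscribed prefix."""
    def __init__(self):
        self.subscribers = []  # list of (prefix, inbox) pairs

    def subscribe(self, prefix):
        """Register a prefix filter; returns the subscriber's inbox."""
        inbox = []
        self.subscribers.append((prefix, inbox))
        return inbox

    def publish(self, topic, payload):
        """Deliver to every subscriber whose prefix matches the topic."""
        for prefix, inbox in self.subscribers:
            if topic.startswith(prefix):
                inbox.append((topic, payload))

bus = PrefixPubSub()
fx_only = bus.subscribe("FX.")   # FX instruments only
firehose = bus.subscribe("")     # empty prefix matches everything
bus.publish("FX.EURUSD", {"bid": 1.0842})
bus.publish("EQ.VOD.L", {"bid": 72.5})
```

In a real deployment the topic would typically encode source and symbol, so the same mechanism gives per-instrument subscriptions across both trading and market data flows.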

An essential part of many market data infrastructures is the ‘last-value’ cache. There are several examples with OpenMAMA interfaces in production from various suppliers. Feed handlers are also available.
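
The role of the last-value cache is easy to state but worth making concrete: it merges incremental field updates into a full per-symbol record, so a late-joining subscriber can be served a complete initial image rather than waiting for every field to tick. A minimal sketch of that behaviour (class and method names are illustrative, not any vendor's API):

```python
class LastValueCache:
    """Minimal last-value cache: merges field deltas per symbol and
    serves the full current image to late-joining subscribers."""
    def __init__(self):
        self.images = {}  # symbol -> {field: value}

    def on_update(self, symbol, delta):
        """Merge an incremental update into the symbol's record."""
        self.images.setdefault(symbol, {}).update(delta)

    def initial_image(self, symbol):
        """A new subscriber gets a copy of the full current record."""
        return dict(self.images.get(symbol, {}))

cache = LastValueCache()
cache.on_update("EURUSD", {"bid": 1.0841, "ask": 1.0843})
cache.on_update("EURUSD", {"bid": 1.0842})  # delta touches one field
image = cache.initial_image("EURUSD")       # both fields present
```

Note that the second update changes only the bid, yet the initial image still carries the ask from the earlier tick; that merge-and-replay behaviour is the whole point of the cache.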

However, more is needed.

The incumbent platforms from data vendors are excellent at dealing with their own data. This is not always the case for third-party or internally generated data. Today’s regulatory rigour requires full control, audit and accountability for this data, especially contributed rates and quotes. Other products satisfy this need, but there is room for more offerings in the market.

Simple connectivity to support choice of data source is great, but applications are often coded with assumptions about instrument and data field naming conventions baked in. Therefore, a switching ability at some level – server, application or API – to change symbology is necessary. Switching at the server level does not address platform lock-in unless the switching server itself has a common open interface.

Implementation at the API level, with an open interface to symbol look-up services and loadable bridges that allow dynamic source switching, truly supports choice. It also allows switching on other criteria, such as source failure, cost or latency. OpenMAMA has addressed, or is discussing, all of these capabilities.
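
Putting symbology look-up and source failover together, an API-level switch might behave roughly as follows. Everything here is a hypothetical sketch, not OpenMAMA's interface: the symbol map, source names and `connect` callback are invented for illustration. The shape of the idea is that the application asks for a common name, the resolver translates it into each source's native symbology, and subscription falls through to the next source when one fails:

```python
class SymbologyResolver:
    """Hypothetical look-up service mapping a common instrument name
    to each source's native symbology."""
    MAP = {
        "VOD": {"source_a": "VOD.L", "source_b": "VOD LN Equity"},
    }

    def resolve(self, common, source):
        return self.MAP[common][source]

def subscribe_with_failover(common, sources, resolver, connect):
    """Try sources in preference order, translating symbology per
    source and falling back when a source is unavailable."""
    for source in sources:
        native = resolver.resolve(common, source)
        try:
            return source, connect(source, native)
        except ConnectionError:
            continue  # this source is down: try the next one
    raise ConnectionError(f"no source available for {common}")

def connect(source, native):
    """Illustrative connector: pretend source_a is down."""
    if source == "source_a":
        raise ConnectionError("source_a unavailable")
    return f"subscribed:{native}"

used, handle = subscribe_with_failover(
    "VOD", ["source_a", "source_b"], SymbologyResolver(), connect)
```

The same fall-through loop could rank sources by cost or measured latency instead of a fixed preference order, which is how the switching-on-other-criteria point generalises.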

I believe we can expect to see the OpenMAMA project catalyse other open source initiatives, maybe even in competition. But competition and innovation are good, and surely the industry needs them.
