
Q&A: OneMarketData’s Louis Lovas on Deep Data


OneMarketData has been building technology to process big data since long before the term even existed.  Its customers need to process “deep data” – reliable and accurate data instances down to a granular level, stored over long periods of time.  We spoke to OMD’s director of solutions, Louis Lovas, about deep data and what trading firms need from it.

Q: What big data problems for the financial markets does OneMarketData address with OneTick?

A: There has been an explosion of hype surrounding big data that has led to confusion rather than clarity.  The term big data has largely been associated with loosely-structured content originating from web search companies and social media.

Social big data is about gleaning meaningful value from unstructured content.  It is judging the mood of the human psyche, inferring emotion from the voices of millions across Twitter, Facebook, click content collected from websites, and blogs.  In the hunt for business benefit, no single data point or group of data points can be valued as accurate or inaccurate, only as determinants of behavior.  The science of social data is the alchemy of behavioral targeting, involving not only what to keep but what to throw away.

Conversely, in capital markets the need for reliable and accurate data is the driving force.  Drilling down to the individual constituent is a mandate, be it a stock symbol, option strike, currency pair or futures contract.

A trading firm’s final goal is to win; they operate in a fiercely competitive industry.  Tighter spreads, thinner margins and a lower risk appetite evoke a wider hunt for alpha, as firms look to cross-border, cross-asset trading models and a re-energized focus on controlling costs.  This has exponentially increased demand for deep data over longer time periods across global markets: equities, futures, options and currencies.

OneTick’s big data capability is about capturing and storing this deep data and linking disparate data sets under some common thread to tease out an intelligible answer – a diamond from a mountain of coal.  It’s the quantitative research to find cross-asset correlations, or to understand how best to hedge a position to offset risk.
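
To make that “common thread” idea concrete, here is a minimal sketch of a cross-asset correlation check, assuming tick data has already been pulled into pandas DataFrames indexed by timestamp.  The column name, bar interval and window are illustrative assumptions; this is generic Python, not OneTick’s query language.

```python
# Sketch: align two tick streams on a common time grid, then measure
# their rolling correlation -- the "common thread" between the series.
# The "price" column and 1-minute bar are illustrative assumptions.
import pandas as pd

def rolling_cross_asset_corr(ticks_a: pd.DataFrame,
                             ticks_b: pd.DataFrame,
                             bar: str = "1min",
                             window: int = 60) -> pd.Series:
    """Resample two timestamp-indexed tick streams to bars and
    correlate their bar-to-bar returns over a rolling window."""
    # Last trade price per bar puts both assets on the same clock.
    px_a = ticks_a["price"].resample(bar).last().ffill()
    px_b = ticks_b["price"].resample(bar).last().ffill()
    rets = pd.DataFrame({"a": px_a.pct_change(),
                         "b": px_b.pct_change()}).dropna()
    return rets["a"].rolling(window).corr(rets["b"])
```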

Firms demand confidence in the resulting analytics derived from big data – those used in determining the profitability profile of new models, optimising parameters through backtesting of existing models and re-balancing portfolios.  For this they all depend on accurate, clean market data across all their tradable markets and the capture of order executions.  This big data dump is the fuel that drives the engine of the trade life cycle:

  • Alpha Discovery and Research
  • Strategy Development
  • Strategy Backtesting and Optimisation
  • Portfolio Valuation and VaR Management, Transaction Cost Analysis

OneTick provides the assurance of its accuracy, reliability and timeliness to power this entire process, the engine of the trading business.

Q: Why is aligning Complex Event Processing and Tick Data Management with Analytics important?

A: The best way to answer that is with an implementation story looking at a specific market: options.  We have a number of OneTick clients in the options business – making markets, trading, servicing retail flow – even one client about to launch a new options exchange.

If you look at the options industry, trading volume last year was four and a half billion contracts, and OPRA peaked at four million messages a second.  The options market is a quintessential example of big data’s defining volume and velocity.

On a human scale you cannot make sense of what’s inside that avalanche of data without the right technology to consume and analyse it.  OneTick is a scalable tick store and real-time analysis engine for options data.

Managing options means coping with a universe of 200,000 strikes streaming at a fire-hose rate.  OneTick can scale to handle this load for capture, storage and analysis while still maintaining its high-performance characteristics.  Furthermore, options trading is closely tied to the price action (trades/quotes) of the underlying asset, so that is also part of the big data dump.  This is needed for any of the typical options analyses – Greeks, implied vol, beta analysis, etc.
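
As an illustration of the per-strike analytics involved, the sketch below backs implied volatility out of a quoted call price using textbook Black-Scholes and bisection.  It is standalone Python with illustrative parameters, not OneTick’s built-in implied-vol machinery.

```python
# Sketch: implied volatility via bisection on the Black-Scholes call
# price, the kind of per-strike analytic run across an options chain.
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price: float, S: float, K: float, T: float,
                r: float, tol: float = 1e-8) -> float:
    """Bisection on sigma: the call price is monotone increasing in
    volatility, so the bracket [lo, hi] always converges."""
    lo, hi = 1e-4, 5.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: vol implied by a $2.50 at-the-money call, 30 days out, 1% rate.
print(implied_vol(2.50, S=100.0, K=100.0, T=30 / 365, r=0.01))
```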

This price data is vital to the science of quantitative trade modeling and market making.  Market surveillance and regulatory compliance are obvious requirements for an exchange.

One contract type gaining popularity is the weekly option – short-expiration contracts.  These products’ rapidly changing deltas can move them into the money at short notice, causing trade volume to spike dramatically and demanding the low latency achievable through CEP technology.  Options data requires very large storage capacity, but just as important is the ability to consume the high velocity of data in real time, and to do so in a fault-tolerant manner.
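
A rough sketch of why short-dated contracts are so latency-sensitive: the delta of an at-the-money weekly reacts far more to a small spot move than a longer-dated contract, because its gamma is much higher near the strike.  The comparison below uses textbook Black-Scholes with illustrative parameters.

```python
# Sketch: a 1% spot move shifts the delta of a 3-day at-the-money call
# far more than a 3-month one -- short-dated options carry higher gamma.
from math import erf, log, sqrt

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call_delta(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Black-Scholes delta of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * sqrt(T))
    return norm_cdf(d1)

S, K, r, sigma = 100.0, 100.0, 0.01, 0.20
for T, label in [(3 / 252, "3-day weekly"), (63 / 252, "3-month contract")]:
    jump = bs_call_delta(S * 1.01, K, T, r, sigma) - bs_call_delta(S, K, T, r, sigma)
    print(f"{label}: delta change on a 1% spot move = {jump:+.3f}")
```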

Q: How has OneMarketData grown its business in the last year?  How many OneTick customers do you now support directly or through partnerships?  Is there a commonality of requirements?

A: Our business has grown tremendously in the last year.  We have over 75 clients worldwide, ranging from tier 1 investment banks, hedge funds, prop shops, retail and institutional brokers, asset managers, market places and exchanges to technology providers.  Last year alone we had 12 client press releases – OneTick users willing to go public about the value OneTick has brought to their business – far more than any other vendor in our industry.

The common theme across these clients is their enduring desire to win, to outsmart the competition.  They all find a winning combination in OneTick matching their own goals with its seamless integration of historical and real-time data management, a focus on capital markets and its high performance characteristics.

Q: What have been some of the recent technology updates to OneTick, and what’s coming?

A: We continue to maintain an aggressive product release schedule, subscribing to an agile development and release methodology.  We have added new functionality in the form of new connectors, such as for RedLine, and visualization via a partnership with Panopticon.

We continue to add to and improve our overall analytical toolset – for example, improving the ease with which users’ own code libraries (written in C++, Java, Perl or Python) can be added in-line to a query model.

We have also enhanced the flexibility of the underlying data storage engine, providing the option of storing data in a column, row or even hybrid model.  The added flexibility offers the ability to fine-tune the storage architecture to the data model.  For wide ticks, row storage is more suitable, whereas for small ticks, or cases where only one or a few columns are frequently queried, column storage is the better choice for query performance.
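
That trade-off can be sketched in miniature, with NumPy structured arrays standing in for the storage engine; OneTick’s actual on-disk formats are not shown here, only the access-pattern argument.

```python
# Sketch: row versus column layout. NumPy stands in for the store.
import numpy as np

n = 1_000_000
# Row model: each tick is one record, all fields adjacent in memory.
row_store = np.zeros(n, dtype=[("ts", "i8"), ("price", "f8"),
                               ("size", "i4"), ("exch", "S4")])
# Column model: one contiguous array per field.
col_store = {name: row_store[name].copy() for name in row_store.dtype.names}

# A single-column scan (e.g. average price) strides across every record
# in the row layout, but reads only one contiguous array in the column
# layout -- hence column storage wins for one-or-few-column queries.
avg_row = row_store["price"].mean()
avg_col = col_store["price"].mean()

# Fetching a whole wide tick favours the row layout: one record read
# versus one lookup per column.
tick_row = row_store[42]
tick_col = {name: col_store[name][42] for name in col_store}
```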

We will soon be releasing a new graphical management console which gives OneTick administrators a simplified means to configure, control and monitor OneTick installations from a central site within an enterprise.

Q: And what is the technology/functionality/performance differentiator between OneTick and the analytics databases coming from more generalised big data infrastructure vendors?

A: Big data is messy.  When it comes to financial data, market data of all stripes comes in many shapes, sizes and encodings.  It continually changes and requires corrections and the occasional tweak.  Market practitioners’ worst fear is spending more time processing and cleaning data than analysing it.

Focusing on discovering new alpha and optimizing existing strategies – finding that diamond in the rough, so to speak – demands confidence in the resulting derived analytics.  It means dealing with:

  • the vagaries of multiple data sources
  • mapping ticker symbols across a global universe – that idea of symbol continuity
  • managing dynamic (rolling) symbologies (i.e. futures contracts)
  • tying indices to their constituents
  • tick-level granularity
  • ingesting cancellations and corrections
  • inserting corporate action price and symbol changes
  • detecting gaps (missing data) in history (see the sketch after this list)
  • exchange calendars
  • order book management

These are huge data management obstacles to overcome, unique to capital markets “tick” data.  Generalised data storage infrastructures have no understanding of these requirements; they are simple data repositories – nothing else.  For those vendors, these messy data problems are left as an “exercise for the user”.
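
To give a flavour of one item on that list, here is a minimal gap-detection sketch over a day’s ticks, assuming the data sits in a pandas DataFrame with a timestamp index.  The five-second threshold is an illustrative assumption; a real check would also consult the exchange calendar.

```python
# Sketch: flag suspicious gaps in a tick history, one of the "messy
# data" checks listed above. Threshold is an illustrative assumption.
import pandas as pd

def find_gaps(ticks: pd.DataFrame, max_gap: str = "5s") -> pd.DataFrame:
    """Return intervals where consecutive ticks are further apart
    than max_gap (timestamp-indexed input assumed)."""
    ts = ticks.index.to_series()
    delta = ts.diff()
    gaps = delta[delta > pd.Timedelta(max_gap)]
    return pd.DataFrame({
        "gap_start": ts.shift(1)[gaps.index],  # last tick before the gap
        "gap_end": gaps.index,                 # first tick after the gap
        "length": gaps,
    })
```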

OneTick, on the other hand, was built from the ground up as a platform for financial data.  The core technology easily handles all these “messy” data problems so quants, traders and other users can focus on the most important task – accurate data analysis.  Whether you’re doing …

  • Liquidity analysis across fragmented markets looking at order book dynamics
  • Strategy modeling for alpha discovery and backtesting
  • Correlations within and across asset classes
  • Devising arbitrage models such as index arb or ETFs to their constituents
  • Implied vol or beta analyses
  • Portfolio valuations
  • News sentiment to price action trade models

… you must have accurate prices adjusted for splits, dividends and currencies and have the symbol continuity across exchanges.
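
As a concrete example of the price adjustment mentioned above, the sketch below backward-adjusts a series for splits so pre- and post-split prices are comparable; dividend and currency adjustments follow the same cumulative-factor pattern.  Again generic pandas, not OneTick’s corporate-actions machinery.

```python
# Sketch: backward split adjustment. A 2-for-1 split halves every
# price recorded before the split date, making history comparable.
import pandas as pd

def adjust_for_splits(prices: pd.Series, splits: dict) -> pd.Series:
    """prices: date-indexed series; splits: {split_date: ratio},
    e.g. {pd.Timestamp("2012-06-01"): 2.0} for a 2-for-1 split."""
    adj = prices.copy()
    for split_date, ratio in splits.items():
        # Divide everything strictly before the split date by the ratio;
        # successive splits compound through repeated division.
        adj[adj.index < split_date] /= ratio
    return adj
```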

The OneTick platform can capture data in real time and bulk-load historic data, normalising and cleansing to ensure your big data store is an accurate one.

Q: What do you see as the next challenges to solve for big data, tick data and general financial markets data management?

A: Volumes will continue to grow.  Firms will continue to reach deeper into historical data to weather that perfect storm of thinner trade margins, regulatory actions and cost containment.

Disparate data sources will begin to play a greater role in trade analysis.  The analytics behind linking market data and social data will expand beyond news sentiment.  We will begin to see firms try to use that to their advantage.  Some are doing it today, but not in a low-latency manner.  I believe we will begin to see more of that.

We’ll begin to see non-financial vendors enter the space – primarily on the analytics side – as they try to capitalise on their knowledge and skills in social analytics.

Marketplace providers and exchanges will broaden their offering to include more than market data as social content becomes more relevant in trade decision modeling.  Just as in the retail industry where user opinion/product reviews are highly valued as a decision tool for buyers, the combination of market, news and social content will drive a new style of trade model.

As the hype subsides, the definition and understanding of big data will clarify.  But unlike technology – hardware or software, which always seems subject to commoditisation – the business value of data will remain and only get higher.

