
How to Maximise Datasets Created by MiFID II


MiFID II generates about three trillion new data points, raising the question of how financial institutions will maximise their use of the new data sources created by the regulation. But how useful is the data six months into MiFID II, what challenges does it present, and will there be winners and losers among firms that can and can’t grasp the data and run with it?

The answers to these questions and more were discussed during a recent A-Team Group webinar moderated by A-Team editor Sarah Underwood and joined by Gaurav Bansal, director at RCloud Consulting and former MiFID II programme manager; Alex Wolcough, director at Appsbroker; and John Mason, Global Head of Regulatory & Market Structure Strategic Response and Propositions at Thomson Reuters.

Setting the scene for the webinar, an audience poll asking for what purposes organisations are using datasets created by MiFID II showed some 47% of respondents using the data to develop business opportunities, 38% to identify business opportunities, 34% to gain competitive edge and 28% purely for compliance. A further 28% said they are still considering how to use the data.

The webinar speakers noted that, in their experience, firms are moving beyond compliance to consider MiFID II data, particularly pre-trade data, for business purposes, although they also pointed out that these are early days of MiFID II implementation and scepticism remains about the quality of the new data and how useful it is today.

Indeed, considering all the new data points, reference data fields, ISINs for OTC derivatives, and market data published by new trading venues and reporting mechanisms established by MiFID II, the data management challenges of using newly created data sources and datasets are many and varied. A second audience poll highlighted getting hold of the data and integrating it as the toughest tasks, ahead of poor data quality, poor data consistency and understanding the data.
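
By way of illustration, one small but concrete task in onboarding the new identifiers is checking that each ISIN, including those newly minted for OTC derivatives, is well formed before it is loaded. The Python sketch below validates the ISO 6166 check digit; it is a minimal example, and the sample identifier is a well-known equity ISIN rather than a derivative one.

```python
def isin_is_valid(isin: str) -> bool:
    """Validate a 12-character ISIN via its ISO 6166 (Luhn) check digit."""
    if len(isin) != 12 or not isin[:2].isalpha() or not isin.isalnum():
        return False
    # Expand letters to two-digit numbers (A=10 ... Z=35); digits stay as-is.
    expanded = "".join(str(int(c, 36)) for c in isin.upper())
    # Luhn: double every second digit from the right, then sum the digit sums.
    total = 0
    for i, ch in enumerate(reversed(expanded)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
        total += d // 10 + d % 10
    return total % 10 == 0

print(isin_is_valid("US0378331005"))  # True -- a valid equity ISIN
```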

Bansal noted problems of collecting, storing and managing the huge volumes of data generated by MiFID II, as well as reconciliation issues. Mason said challenges in the early days of fundamental change were not surprising and suggested firms struggling to source and manage new datasets could use aggregators such as Thomson Reuters.
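
The reconciliation problem Bansal alludes to can be pictured with a deliberately simplified sketch: an internal trade blotter compared against an external report feed, keyed on transaction reference, flagging trades missing from either side and field-level breaks on the rest. Every record and field name below is invented for illustration.

```python
# Hypothetical data: an internal blotter and an external report feed.
internal = {
    "TXN-001": {"isin": "EZXXXXXXXXXX", "qty": 100, "price": 101.25},
    "TXN-002": {"isin": "EZYYYYYYYYYY", "qty": 250, "price": 99.80},
}
reported = {
    "TXN-001": {"isin": "EZXXXXXXXXXX", "qty": 100, "price": 101.25},
    "TXN-003": {"isin": "EZZZZZZZZZZZ", "qty": 50, "price": 100.10},
}

# Trades missing from one side or the other.
missing_from_report = internal.keys() - reported.keys()  # {'TXN-002'}
unexpected_reports = reported.keys() - internal.keys()   # {'TXN-003'}

# Field-level breaks on trades present in both sources.
breaks = {}
for ref in internal.keys() & reported.keys():
    diffs = {f: (internal[ref][f], reported[ref][f])
             for f in internal[ref] if internal[ref][f] != reported[ref][f]}
    if diffs:
        breaks[ref] = diffs

print(missing_from_report, unexpected_reports, breaks)
```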

Wolcough discussed issues caused by Approved Publication Arrangements (APAs) charging different fees for the data they publish during the 15 minutes before it must be made available free of charge. He noted that large firms with deep pockets can afford the data, but small firms may not be able to, a problem that could create winners and losers in capital markets.

With the challenges mastered, the speakers discussed how firms could maximise use of MiFID II datasets. Bansal talked about how combining more client and product data with data from trade execution venues could provide a powerful source of information for purposes such as risk modelling and better client outcomes. Mason noted the need to take data out of silos and integrate it to maximise the potential of analytics across client, product and trade execution data, and link the data to other information such as news to develop more holistic trading strategies.
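
A toy example of the kind of cross-silo integration Mason describes might look like the following, using pandas to link execution records to client and product reference data so that a single analytic can run across all three. The tables, columns and values are entirely hypothetical.

```python
import pandas as pd

# Three hypothetical silos: client, product and execution data.
clients = pd.DataFrame({"client_id": ["C1", "C2"],
                        "segment": ["hedge fund", "pension fund"]})
products = pd.DataFrame({"isin": ["EZAAAAAAAAAA", "EZBBBBBBBBBB"],
                         "asset_class": ["IR swap", "equity option"]})
executions = pd.DataFrame({
    "client_id": ["C1", "C2", "C1"],
    "isin": ["EZAAAAAAAAAA", "EZBBBBBBBBBB", "EZBBBBBBBBBB"],
    "notional": [5_000_000, 1_200_000, 750_000],
})

# Join the silos into one integrated view.
view = executions.merge(clients, on="client_id").merge(products, on="isin")

# Example analytic: notional traded per client segment and asset class.
print(view.groupby(["segment", "asset_class"])["notional"].sum())
```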

The benefits of MiFID II datasets? Significant for both business and operations, according to a final audience poll. The speakers agreed, with the caveat that data quality must improve, noting clear operational benefits, improved customer service, and the ability to apply emerging technologies such as robotic process automation and artificial intelligence to the data to achieve greater efficiencies and deliver deeper insight into customer behaviour and market activity.

