How to Maximise Datasets Created by MiFID II

MiFID II generates about three trillion new data points, raising the question of how financial institutions will maximise their use of the new data sources created by the regulation. But how useful is the data six months into MiFID II, what challenges does it present, and will there be winners and losers among firms that can, and can't, grasp the data and run with it?

The answers to these questions and more were discussed during a recent A-Team Group webinar that was moderated by A-Team editor, Sarah Underwood, and joined by Gaurav Bansal, director at RCloud Consulting and former MiFID II programme manager; Alex Wolcough, director at Appsbroker; and John Mason, Global Head of Regulatory & Market Structure Strategic Response and Propositions at Thomson Reuters.

Setting the scene for the webinar, an audience poll asking for what purposes organisations are using datasets created by MiFID II showed some 47% of respondents using the data to develop business opportunities, 38% to identify business opportunities, 34% to gain competitive edge and 28% purely for compliance, with respondents able to select more than one option. A further 28% said they are considering how to use the data.

The webinar speakers noted that in their experience firms were moving beyond compliance to consider MiFID II data, particularly pre-trade data, for business purposes, although they also pointed out that these are early days in MiFID II implementation and scepticism remains about the quality of new data and how useful it is today.

Indeed, considering all the new data points, reference data fields, ISINs for OTC derivatives, and market data published by new trading venues and reporting mechanisms established by MiFID II, the data management challenges of using newly created data sources and datasets are many and varied. A second audience poll highlighted getting hold of the data and integrating it as the toughest tasks, ahead of poor data quality, poor data consistency and understanding the data.
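
To make the identifier challenge concrete: MiFID II extends ISINs to OTC derivatives, so firms ingesting the new reference data need at least a structural check before loading identifiers into their stores. The following minimal Python sketch, not taken from the webinar and with an illustrative function name, validates an ISIN's length, country prefix and ISO 6166 check digit (the Luhn algorithm applied to the letter-expanded code):

    def is_valid_isin(isin: str) -> bool:
        """Check an ISIN's structure and ISO 6166 check digit."""
        if len(isin) != 12 or not isin[:2].isalpha() or not isin[-1].isdigit():
            return False
        # Expand letters to numbers (A=10 ... Z=35); digits pass through.
        digits = "".join(str(int(c, 36)) for c in isin.upper())
        # Luhn: from the right, double every second digit and sum the digit sums.
        total = 0
        for i, d in enumerate(reversed(digits)):
            n = int(d)
            if i % 2 == 1:
                n *= 2
            total += n // 10 + n % 10
        return total % 10 == 0

    print(is_valid_isin("US0378331005"))  # True for a well-formed ISIN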

Bansal noted problems of collecting, storing and managing the huge volumes of data generated by MiFID II, as well as reconciliation issues. Mason said challenges in the early days of fundamental change were not surprising and suggested firms struggling to source and manage new datasets could use aggregators such as Thomson Reuters.
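
As an illustration of the reconciliation point, and assuming hypothetical file names and column layouts rather than any firm's actual feeds, a first-pass reconciliation between an internal trade store and an APA extract can be sketched in a few lines of Python with pandas:

    import pandas as pd

    # Hypothetical extracts keyed on a shared transaction reference.
    internal = pd.read_csv("internal_trades.csv")  # txn_ref, isin, qty, price
    apa = pd.read_csv("apa_trades.csv")            # txn_ref, isin, qty, price

    merged = internal.merge(apa, on="txn_ref", how="outer",
                            suffixes=("_int", "_apa"), indicator=True)

    # Breaks: records present on only one side ...
    missing = merged[merged["_merge"] != "both"]
    # ... and records that match on reference but disagree on quantity or price.
    mismatched = merged[(merged["_merge"] == "both") &
                        ((merged["qty_int"] != merged["qty_apa"]) |
                         (merged["price_int"] != merged["price_apa"]))]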

Wolcough discussed the issues caused by Approved Publication Arrangements (APAs) charging fees at different rates for the data they publish during the 15 minutes before it must be made available free of charge. He noted that large firms with deep pockets can afford the data, but small firms may not be able to, a problem that could create winners and losers in capital markets.
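
The 15-minute rule itself is mechanical enough to express directly. Below is a minimal sketch, assuming publication timestamps are available as timezone-aware UTC datetimes, of the test a consuming system might apply; is_still_chargeable is a hypothetical helper, not part of any APA's API:

    from datetime import datetime, timedelta, timezone

    # Under MiFID II, post-trade data must be made available free of charge
    # 15 minutes after publication; APAs may charge inside that window.
    FREE_AFTER = timedelta(minutes=15)

    def is_still_chargeable(published_at: datetime,
                            now: datetime | None = None) -> bool:
        """True while a record remains inside the paid window."""
        now = now or datetime.now(timezone.utc)
        return now - published_at < FREE_AFTER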

With the challenges mastered, the speakers discussed how firms could maximise use of MiFID II datasets. Bansal talked about how combining more client and product data with data from trade execution venues could provide a powerful source of information for purposes such as risk modelling and better client outcomes. Mason noted the need to take data out of silos and integrate it to maximise the potential of analytics across client, product and trade execution data, and to link the data to other information, such as news, to develop more holistic trading strategies.
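
As a sketch of what taking data out of silos can look like in practice, again with hypothetical file names and columns rather than any real schema, client, product and execution extracts can be joined on shared identifiers to support simple cross-cutting analytics:

    import pandas as pd

    # Hypothetical siloed extracts linked on shared identifiers.
    clients = pd.read_csv("clients.csv")       # client_id, segment
    products = pd.read_csv("products.csv")     # isin, asset_class
    trades = pd.read_csv("executions.csv")     # client_id, isin, venue, qty, price

    # One integrated view across the three silos.
    view = trades.merge(clients, on="client_id").merge(products, on="isin")

    # Example analytic: traded value by client segment and asset class.
    view["value"] = view["qty"] * view["price"]
    summary = view.groupby(["segment", "asset_class"])["value"].sum()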

The benefits of MiFID II datasets? Significant for both business and operations, according to a final audience poll. The speakers agreed, with the caveat that data quality must first improve, noting clear operational benefits, improved customer service, and the ability to apply emerging technologies such as robotic process automation and artificial intelligence to the data to achieve greater efficiencies and deliver deeper insight into customer behaviour and market activity.
