The leading knowledge platform for the financial technology industry

A-Team Insight Blogs

Markit Document Exchange Moving into Reference Data Extraction and Validation, Says Davenport

Since its launch back in 2008, Markit Document Exchange has finally come close to achieving critical mass, with the number of funds using the documentation library rising from 5,000 to 15,000 over the last year alone. Penny Davenport, managing director and head of Markit Document Exchange, speaks to Reference Data Review about the vendor’s plan to extract reference data from the documents stored on its platform and to validate key data sets, including entity data.

Markit Document Exchange is ultimately aiming to become a one-stop shop for all the data the industry needs from the buy side, providing real efficiency and risk management benefits, according to Davenport. The vendor is therefore keen to extract value from the documents stored on its platform, which was established to allow financial institutions to post, manage and share compliance, counterparty credit and regulatory documents securely.

“Over this year, we are focusing on extracting the reference data content of these documents for our customers, in particular customer and entity data,” elaborates Davenport. “This includes extracting the key terms and core reference data from the accounts documents and then validating and processing this data.”

For example, this week the vendor was able to extract the key reference data for a client’s fund launch during the onboarding process, including legal entity name, identifiers, registered address and key contacts. Eight of the fields for this data are mandatory and the rest are optional, explains Davenport.
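The kind of mandatory-field check described above can be sketched roughly as follows. The article names only four of the eight mandatory fields, so the field names below are illustrative assumptions rather than Markit’s actual schema:

```python
# Hypothetical sketch of a mandatory-field check on an extracted onboarding
# record. Only four of the eight mandatory fields are named in the article;
# the names below are assumptions for illustration.

MANDATORY_FIELDS = {
    "legal_entity_name",
    "identifier",
    "registered_address",
    "key_contacts",
    # ...the remaining mandatory fields are not specified in the article
}

def validate_onboarding_record(record: dict) -> list[str]:
    """Return the mandatory fields that are missing or empty in a record."""
    return sorted(f for f in MANDATORY_FIELDS if not record.get(f))

record = {
    "legal_entity_name": "Example Fund I LP",
    "identifier": "MEI-0001",
    "registered_address": "1 Example Street, London",
    "key_contacts": ["ops@example.com"],
}
missing = validate_onboarding_record(record)  # empty list when all present
```

A record failing this check would be routed back for re-extraction or manual review before onboarding completes.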

As well as the obvious efficiency gains, Davenport reckons a fund that stores all this data on one platform also benefits significantly in terms of reduced operational risk and increased control. Such funds can better control who has access to what data, for example, she contends. Regulators are also keen on the storage of this data on a centralised platform, as it increases the availability of the data and makes it easier to track via an audit trail.

“We think that the buy side is the best source for its own reference data and we have put in place proper validation processes for this data. We began with Markit Entity Identifiers (MEIs) in the loan space and are now rolling this out across the rest of the Markit product sets. The goal is to link the entity and instrument data across all of our products and systems,” she says.

The validation process for this entity data is similar to that employed for the vendor’s RED codes on the instrument side. The data is therefore validated against the original source documents, such as the fund prospectuses stored on the platform.

The next stage, which will take place over the summer, is to provide the data to banks via an application programming interface (API). This is likely to go live in May and will provide a direct interface between the data in the sell side systems and the counterparty data systems, according to Davenport. “This will therefore go beyond client onboarding systems into counterparty data systems and means we are working with heads of reference data within these banks,” she says. “The API will take the industry to the next level in terms of achieving STP in the trade processing lifecycle. It should have a major impact in the loans space where some market participants are experiencing a T+30 settlement cycle.”

In order to tackle a challenge of this size, the vendor has adopted a phased approach, and its biggest concern is maintaining data quality, she explains. The MEI platform has been built with entity data cross references in mind, rather than creating a whole new set of identifiers for the sake of it. “There are 30,000 MEIs on the platform at the moment and we hope to increase this to 100,000 by the year end. They are XML based and the MEI will be available as a premium data field for which we will charge a subscription. Ratings data will not yet be included in the feeds but when it is, this will be separately licensed. Eventually, we will be able to indicate whether certain underlying documents have been provided by the buy side,” she continues.
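As a rough illustration of what an XML-based, cross-reference-oriented entity record might look like, the sketch below builds a minimal record linking an MEI to other identifier schemes. All element and attribute names are assumptions for illustration; the actual MEI schema is not described in the article:

```python
# Illustrative only: a minimal XML entity record with identifier
# cross-references. Element names are assumptions, not Markit's real schema.
import xml.etree.ElementTree as ET

def build_mei_record(mei: str, legal_name: str, cross_refs: dict) -> bytes:
    """Serialise an entity record that cross-references other identifiers."""
    root = ET.Element("entity")
    ET.SubElement(root, "mei").text = mei
    ET.SubElement(root, "legalName").text = legal_name
    xrefs = ET.SubElement(root, "crossReferences")
    for scheme, value in cross_refs.items():
        # One <identifier> element per external scheme (e.g. RED codes)
        ET.SubElement(xrefs, "identifier", scheme=scheme).text = value
    return ET.tostring(root)

xml_bytes = build_mei_record(
    "MEI123456",
    "Example Fund I LP",
    {"RED": "ABC123", "internal": "CPTY-42"},
)
```

The cross-reference block is what would let a consumer link the same entity across the different Markit product sets, as Davenport describes.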

Entity identification is currently very important from the regulators’ perspective in the effort to better track systemic risk and monitor trade reporting across counterparties. There is a definite need for standard identifiers to track counterparty exposure at both an industry-wide and a firm level, and Markit’s efforts are aimed at meeting this requirement.

Last year, Markit Document Exchange signed a partnership agreement with Avox to work together to provide their mutual clients with a business entity data and documentation solution. The vendor has been engaged in work with Citi and Avox for some time and the official partnership announcement was the formalisation of that work, which was aimed at normalising and validating Citi’s customer data and documentation.

“Those in the entity identification space such as Avox and CounterpartyLink have a more industrial model, whereas we are developing from a smaller scale basis in this respect,” says Davenport. “They are working with individual customers such as Citi in the case of Avox to build identifiers within their system but it will be hard to commercialise that work. However, we have an advantage in that we have access to the underlying source documents from the buy side for validation purposes.”

Davenport reckons the advantage of the Markit approach is that the vendor did not set out to create an entity identifier for the sake of it. Instead it is building out the capabilities in order to support a service that has been requested by its clients: they are keen for the ability to link entity data across the Markit products.

She notes that structural and regulatory changes are potentially looming in the future, such as the National Institute of Finance (NIF) in the US and the European Central Bank’s (ECB) proposed reference data utility, but contends that providers will continue to have a place in the market. “I think with the NIF in the US and the ECB’s utility, they may look to outsource some of the services of these bodies to market experts such as Cusip, Swift or even Markit,” she says.

In terms of getting more funds onto the Markit Document Exchange platform, the vendor hopes to increase the current 15,000 to 30,000 over the course of this year, says Davenport. Its key strategic objective is therefore to win at least 10 major new buy side accounts and 10 new sell side firms this year. Markit is also planning to structure creative arrangements with existing sell side firms in order to build usage on the platform.

“We have around 50 banks on Markit Document Exchange at the moment but we are working closely on this entity data project with five design partners (Citi, Royal Bank of Scotland, JPMorgan, Bank of America Merrill and Deutsche Bank). We are drawing up the XML schema and how this data will be validated when changes are required in collaboration with heads of client onboarding and heads of reference data within these firms. We are also keeping a close eye on what is going on within the customer data standardisation area such as JWG’s CDMG work,” she concludes.
