
Markit Document Exchange Moving into Reference Data Extraction and Validation, Says Davenport


Since its launch back in 2008, Markit Document Exchange has finally come close to achieving critical mass, with the number of funds using the documentation library rising from 5,000 to 15,000 over the last year alone. Penny Davenport, managing director and head of Markit Document Exchange, speaks to Reference Data Review about the vendor’s plans to move into extracting reference data from the documents stored on its platform and validating key data sets, including entity data.

Markit Document Exchange ultimately aims to become a one-stop shop for all the data the industry needs from the buy side, delivering real efficiency and risk management benefits, according to Davenport. The vendor is therefore keen to extract value from the documents stored on its platform, which was established to allow financial institutions to post, manage and share compliance, counterparty credit and regulatory documents securely.

“Over this year, we are focusing on extracting the reference data content of these documents for our customers, in particular customer and entity data,” elaborates Davenport. “This includes extracting the key terms and core reference data from the accounts documents and then validating and processing this data.”

For example, for a client’s fund launch this week the vendor was able to extract the key reference data during the onboarding process, including legal entity name, identifiers, registered address and key contacts. Eight fields are mandatory for this data set and the rest are optional, explains Davenport.
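To make the shape of such a record concrete, the sketch below models the extracted fields as a simple data structure. The field names and the exact split between mandatory and optional attributes are illustrative assumptions; the article only confirms that legal entity name, identifiers, registered address and key contacts are captured, and that eight fields are mandatory.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class EntityOnboardingRecord:
    """Illustrative record for extracted onboarding reference data.

    The field names here are hypothetical, chosen only to show how a
    mandatory/optional split might be expressed.
    """
    # Assumed mandatory fields
    legal_entity_name: str
    entity_identifier: str          # e.g. an MEI-style identifier
    registered_address: str
    country_of_incorporation: str
    fund_manager: str
    primary_contact_name: str
    primary_contact_email: str
    regulatory_status: str

    # Assumed optional fields
    secondary_contacts: list[str] = field(default_factory=list)
    website: Optional[str] = None

    def missing_mandatory_fields(self) -> list[str]:
        """Return the names of mandatory fields left blank."""
        mandatory = [
            "legal_entity_name", "entity_identifier", "registered_address",
            "country_of_incorporation", "fund_manager",
            "primary_contact_name", "primary_contact_email",
            "regulatory_status",
        ]
        return [name for name in mandatory if not getattr(self, name)]
```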

As well as the obvious efficiency gains, Davenport reckons there is also a big benefit in terms of reducing operational risk and increasing control for a fund that stores all this data on one platform. These funds can better control who has access to what data, for example, she contends. Regulators are also keen on the storage of this data on a centralised platform, as it increases the availability of the data and makes it easier to track via an audit trail.

“We think that the buy side is the best source for its own reference data and we have put in place proper validation processes for this data. We began with Markit Entity Identifiers (MEIs) in the loan space and are now rolling this out across the rest of the Markit product sets. The goal is to link the entity and instrument data across all of our products and systems,” she says.

The validation process for this entity data is similar to that employed for the vendor’s RED codes on the instrument side. The vendor therefore validates the data against the original source documents, such as the fund prospectuses stored on the platform.
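Markit does not describe the mechanics of these checks, but conceptually the validation amounts to comparing each extracted field with the value found in the stored source document. The sketch below is a minimal illustration under that assumption, using simple string normalisation; it is not the vendor’s actual validation logic.

```python
def validate_against_source(extracted: dict[str, str],
                            source_document_values: dict[str, str]) -> dict[str, bool]:
    """Compare extracted reference data against values parsed from a
    source document (e.g. a fund prospectus stored on the platform).

    Both inputs are plain field-name -> value mappings; this is an
    illustrative sketch only.
    """
    def normalise(value: str) -> str:
        # Case- and whitespace-insensitive comparison
        return " ".join(value.lower().split())

    results = {}
    for field_name, extracted_value in extracted.items():
        source_value = source_document_values.get(field_name)
        results[field_name] = (
            source_value is not None
            and normalise(extracted_value) == normalise(source_value)
        )
    return results


# Example usage with made-up values
checks = validate_against_source(
    {"legal_entity_name": "Example Fund plc"},
    {"legal_entity_name": "EXAMPLE FUND PLC"},
)
print(checks)  # {'legal_entity_name': True}
```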

The next stage, which will take place over the summer, is to provide the data to banks via an application programming interface (API). This is likely to go live in May and will provide a direct interface between the data in the sell side systems and the counterparty data systems, according to Davenport. “This will therefore go beyond client onboarding systems into counterparty data systems and means we are working with heads of reference data within these banks,” she says. “The API will take the industry to the next level in terms of achieving STP in the trade processing lifecycle. It should have a major impact in the loans space, where some market participants are experiencing a T+30 settlement cycle.”
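No technical details of the interface have been published, so the sketch below is purely hypothetical: it assumes a REST-style endpoint, bearer-token authentication and a JSON payload, simply to illustrate how a bank’s counterparty data system might pull an entity record from such an API.

```python
import json
import urllib.request


def fetch_entity_record(base_url: str, mei: str, api_token: str) -> dict:
    """Fetch a single entity record from a hypothetical reference data API.

    The endpoint path, authentication scheme and payload shape are all
    assumptions for illustration; the article only says an API will link
    the platform's data to banks' counterparty data systems.
    """
    request = urllib.request.Request(
        f"{base_url}/entities/{mei}",
        headers={"Authorization": f"Bearer {api_token}"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))


# A downstream counterparty data system could then upsert the returned
# record into its own entity master, keyed on the identifier.
```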

In order to tackle a challenge of this size, the vendor has adopted a phased approach, and its biggest concern is maintaining data quality, she explains. The MEI platform has been built with entity data cross references in mind, rather than building a whole new set of identifiers for the sake of it. “There are 30,000 MEIs on the platform at the moment and we hope to increase this to 100,000 by the year end. They are XML based and the MEI will be available as a premium data field for which we will charge a subscription. Ratings data will not yet be included in the feeds but when it is, this will be separately licensed. Eventually, we will be able to indicate whether certain underlying documents have been provided by the buy side,” she continues.
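The MEI schema itself is not described in the article; the sketch below shows one plausible way an XML-based entity record with cross references to other identifier schemes might be assembled, with all element names being assumptions for illustration.

```python
import xml.etree.ElementTree as ET


def build_entity_record(mei: str, legal_name: str,
                        cross_references: dict[str, str]) -> str:
    """Build an illustrative XML entity record.

    The element names ('entity', 'mei', 'crossReference', ...) are
    hypothetical; the article only states that MEIs are XML based and
    built with cross references to other identifiers in mind.
    """
    entity = ET.Element("entity")
    ET.SubElement(entity, "mei").text = mei
    ET.SubElement(entity, "legalName").text = legal_name
    refs = ET.SubElement(entity, "crossReferences")
    for scheme, value in cross_references.items():
        ref = ET.SubElement(refs, "crossReference", scheme=scheme)
        ref.text = value
    return ET.tostring(entity, encoding="unicode")


# Example usage with made-up values
print(build_entity_record("MEI000123", "Example Fund plc",
                          {"RED": "ABC123XY9"}))
```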

Entity identification is currently a high priority from the regulators’ perspective in the effort to better track systemic risk and monitor trade reporting across counterparties. There is a definite need for standard identifiers to track counterparty exposure at both an industry-wide and firm level, and Markit’s efforts are aimed at meeting this requirement.

Last year, Markit Document Exchange signed a partnership agreement with Avox to work together to provide their mutual clients with a business entity data and documentation solution. The vendor has been engaged in work with Citi and Avox for some time and the official partnership announcement was the formalisation of that work, which was aimed at normalising and validating Citi’s customer data and documentation.

“Those in the entity identification space such as Avox and CounterpartyLink have a more industrial model, whereas we are developing from a smaller scale basis in this respect,” says Davenport. “They are working with individual customers such as Citi in the case of Avox to build identifiers within their system but it will be hard to commercialise that work. However, we have an advantage in that we have access to the underlying source documents from the buy side for validation purposes.”

Davenport reckons the advantage of the Markit approach is that the vendor did not set out to create an entity identifier for the sake of it. Instead, it is building out the capabilities to support a service requested by its clients, who are keen to be able to link entity data across the Markit products.

She notes that structural and regulatory changes are potentially looming in the future, such as the National Institute of Finance (NIF) in the US and the European Central Bank’s (ECB) proposed reference data utility, but contends that providers will continue to have a place in the market. “I think with the NIF in the US and the ECB’s utility, they may look to outsource some of the services of these bodies to market experts such as Cusip, Swift or even Markit,” she says.

In terms of getting more funds onto the Markit Document Exchange platform, the vendor is hoping to increase the current 15,000 to 30,000 over the course of this year, says Davenport. Its key strategic objective is therefore to win at least 10 major new buy side accounts and 10 new sell side firms this year. Markit is also planning to structure creative arrangements with existing sell side firms in order to build usage on the platform.

“We have around 50 banks on Markit Document Exchange at the moment but we are working closely on this entity data project with five design partners (Citi, Royal Bank of Scotland, JPMorgan, Bank of America Merrill and Deutsche Bank). We are drawing up the XML schema and how this data will be validated when changes are required in collaboration with heads of client onboarding and heads of reference data within these firms. We are also keeping a close eye on what is going on within the customer data standardisation area such as JWG’s CDMG work,” she concludes.

