The knowledge platform for the financial technology industry

A-Team Insight Blogs

CFTC’s Data Standards Subcommittee Chair Kirilenko Doles Out ‘To Do’ Lists for Each of its Four Working Groups


As well as a public meeting on the subject of swaps data repositories, last week saw the first meeting of the Commodity Futures Trading Commission’s (CFTC) data standardisation subcommittee, which is chaired by CFTC chief economist Andrei Kirilenko. Attendees discussed the current obstacles to entity and product identification in the market, how international agreement on standards can be achieved without introducing delays, and how to minimise implementation and adoption costs for industry participants.

For example, on the subject of product and entity identification, Kirilenko asked participants to look at what can be done within the industry beyond the CFTC and Securities and Exchange Commission’s (SEC) current proposals for swaps data repositories and reporting. He said: “Beyond these existing rulemakings, what role can/should government regulators take in setting the timetable for completion of a standard list and obtaining/mandating consensus on standardised product and entity identifiers in the swaps market?” In other words, how far should the regulators step beyond their current remit?

In his opening statement at the start of the meeting, commissioner Scott O’Malia, who chairs the Technology Advisory Committee (TAC), of which this subcommittee is an integral part, stressed the need for decisions to be made on “mandated solutions” in a cooperative manner between the industry and the regulatory community in a true “public/private partnership”. He explained that the idea behind this work is to find a “common goal of reaching a consensus as to how we can standardise the language we use to communicate within the new regulatory landscape.”

He noted that the “window of opportunity” for the industry to get involved in this work is “wide open” and the CFTC is willing to listen to the “advice” that industry participants can provide on data standardisation issues. O’Malia added that firms should also: “keep in mind the general regulatory goals of promoting market stability and efficiency, increasing transparency, removing barriers to entry, and avoiding unduly burdensome transition costs or requirements.”

The meeting, which took place on Friday 5 August, involved updates from all four of the subcommittee’s working groups: product and entity identification; semantic representation of financial instruments; machine readable legal documents; and storage and retrieval of financial data.

Kirilenko’s comments on entity and product IDs kicked off the first set of discussions and, in addition to his remarks about whether the current remit of regulators in this space should be extended, he asked for input on the mechanisms through which these identifiers will be issued. The Office of Financial Research’s (OFR) current plans for the introduction of a new legal entity identifier (LEI) standard were therefore top of the list in terms of discussion. Building on this utility concept and moving on to instrument identification is next on the group’s list of priorities.

He also asked: “How can standardisation be coordinated with international organisations and governmental practice without unduly delaying progress?” This is likely a response to industry concerns about ensuring the standards have a truly international flavour, given that they are being primarily driven by US legislation. The concern about a timely response also likely stems from previous efforts to establish such an ID as part of ISO’s International Business Entity Identifier (IBEI) initiative, which failed as a result of international disagreement on the tenets of the ID.

Kirilenko suggested that further analysis is needed of how standardisation in this area could be designed to minimise transition costs by not requiring pre-existing business practices to switch to different mark up languages. This is a project that the relevant entity and instrument working group will, no doubt, be getting on with over the coming months.

The machine readable legal documents group discussed the progress made thus far in researching the feasibility of the CFTC and the SEC mandating algorithmic descriptions for derivatives. Back in April, the regulators indicated that they believe current technology is capable of representing derivatives using a common set of computer readable descriptions, and that these descriptions are precise enough to identify “at least a broad cross section of derivatives,” but a few other items such as standardised entity identification must be tackled first.

During last week’s briefing, Kirilenko asked the working group to come up with recommendations on how and when standardised machine readable legal documents can be created for the most complex swaps, including International Swaps and Derivatives Association (ISDA) master agreements and collateral documents. He also asked for suggestions about how much of this data can be standardised for regulatory oversight purposes or for delivery to swap data repositories. In addition, the working group has been tasked with developing recommendations on whether, how and when data on lifecycle events can be captured by regulators to analyse net exposures in the market, as well as how to standardise bespoke products for machine readability.

On the subject of the semantic representation of financial instruments, Kirilenko said: “There is a need and desire to go beyond legal entity identifiers and lay the foundation for universal data terms to describe transactions.” This is in keeping with the work the regulator has been doing with the EDM Council (which is also represented on the working group by managing director Mike Atkin) around its proof of concept for interest rate swaps (IRS), which is seeking to prove the benefits of a common semantics repository for data across the industry.

As explained by Atkin in June, the proof of concept work has been carried out to demonstrate to the regulators that semantics is the right route to go down rather than point solutions. “It will allow for the better integration of standards and is not tied to a particular technology,” he said.

On this note, Kirilenko last week charged the working group with determining how to achieve consensus among varying private industry approaches to this issue. He asked the group to answer: “What is a proposed timeline for achieving semantic representation and taxonomy with realistic milestones? How to coordinate internationally with any domestic regulatory initiatives or partnerships?”

The working group on the storage and retrieval of financial data, of which Google’s engineering director Marc Donner is a member, has been tasked with determining how to deal with the large amount of data that will be delivered to the swaps data repositories as part of Dodd-Frank’s new data reporting requirements. Kirilenko said: “There is a tremendous need for efficiency in the design of data storage, transfer and analysis in a standardised format. Particularly for regulatory reporting and real-time reporting, but also for internal business use, standardisation in the treatment of this data is an important common goal of the public and private sectors.”

He asked the group to find a workable solution for achieving data standardisation while ensuring the privacy of that data and minimising costs to the industry within a “reasonable timeframe”. To this end, he asked the group to identify the “low hanging fruit” first and the longer term goals later, along with analysis of potential transition costs. He added: “What elements of the storage, transfer, and retrieval of financial data could or should be open sourced? Without discussion of specific software or hardware products, what types of designs and technological approaches will best integrate reporting to swaps data repositories with real-time use and analysis of market data by market participants and regulators?”

The new subcommittee is planning to conduct at least two more sessions this year, one in the autumn (on 30 September) and one in the winter (on 4 November), both of which will also be open to the public. Moreover, many of the subcommittee’s members are also likely to get involved in the OFR’s planned regulatory conference on data issues later this year (watch this space for more details).

