Serious concerns have been raised about the capacity of the US Office of Financial Research (OFR) to effectively process and crunch the high volumes of reference and economic data it will receive from the industry in order to monitor systemic risk. The influx of data could overwhelm the new Treasury agency and thus impede the tracking of systemic risk at a more granular level. The OFR (as well as the industry) could therefore be faced with death by data drowning.
Speakers from Sifma’s Operations and Technology Committee at the association’s annual operations conference in Florida this week highlighted, as a particular concern in this regard, the unstructured way in which reference data is currently provided to and assessed by the regulatory community. Morgan Stanley’s global head of operations, technology and data, Stephen Daffron, noted that the current state of the raw data is not conducive to being fed into an analytics engine in order to track granular details such as parent/child entity hierarchies.
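To make the parent/child point concrete, here is a minimal, purely hypothetical sketch (not the OFR’s data model, nor any vendor’s schema; the identifiers and field names are invented) of why inconsistently sourced entity records frustrate even the simplest hierarchy analytics: one missing or dangling parent reference stops the roll-up outright.

```python
# Illustrative only: hypothetical legal entity records, each pointing to a parent.
# Resolving an entity's ultimate parent breaks down as soon as records are
# missing, inconsistent, or refer to parents absent from the data set.

from dataclasses import dataclass
from typing import Optional

@dataclass
class EntityRecord:
    entity_id: str              # hypothetical legal entity identifier
    name: str
    parent_id: Optional[str]    # None for an ultimate parent

def ultimate_parent(entity_id: str, records: dict[str, EntityRecord]) -> str:
    """Walk up the parent chain; raise if the chain is broken or circular."""
    seen = set()
    current = entity_id
    while True:
        record = records.get(current)
        if record is None:
            raise ValueError(f"Broken hierarchy: no record for {current}")
        if current in seen:
            raise ValueError(f"Circular hierarchy detected at {current}")
        seen.add(current)
        if record.parent_id is None:
            return current
        current = record.parent_id

# A consistent chain resolves cleanly...
records = {
    "SUB-001": EntityRecord("SUB-001", "Example Broker LLC", "HOLD-001"),
    "HOLD-001": EntityRecord("HOLD-001", "Example Holdings Inc", None),
}
print(ultimate_parent("SUB-001", records))   # -> HOLD-001

# ...but one dangling parent reference, the everyday reality of unstructured
# reference data, halts the analysis.
records["SUB-002"] = EntityRecord("SUB-002", "Example Securities Ltd", "HOLD-999")
try:
    ultimate_parent("SUB-002", records)
except ValueError as err:
    print(err)
```

Scaling that cleansing and reconciliation problem from one firm’s books to an industry-wide feed is exactly the challenge the panel was describing.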
As the vendor community that has sprung up around the cleansing of reference data will attest, it is neither a cheap nor a simple process to track this data at a firm level, let alone on an industry-wide basis. Daffron identified three main points of concern: the sheer volume of unstructured data out there; the lack of consistency (of the data itself and of the OFR’s approach to that data); and the OFR’s intended approach to the challenge overall.
One of the fears raised frequently by industry participants over recent months is that the OFR will bite off more than it can chew at the outset. Daffron cautioned that rushing to collect vast quantities of data was the wrong track to go down, yet this seemed to be the direction the OFR was taking: doing “too much” in a space that is “too complex”.
His fellow panellists, JPMorgan managing director Jeffrey Bernstein, Raymond James Financial’s chief admin officer Angela Biever and the Depository Trust and Clearing Corporation’s (DTCC) COO Michael Bodson, all noted that the agency is at the start of a long journey, and that a lot of lessons need to be learned first with regard to dealing with reference data. Good job, then, that a practitioner and economist is at hand to assist.
It has been a few weeks since my last blog on the developments surrounding the OFR, during which time a new supervisor has been appointed in the form of ex-Morgan Stanley economist Richard Berner. It will now be up to Berner and his team to convince the industry that the agency will not go down the wrong path in dealing with this data.
For now, I’d recommend small bites and regular checkups (with the industry) to ensure the OFR isn’t choked by too much data at the outset.
It is also interesting to note that Sifma, which is leading the charge on providing feedback to the OFR on the legal entity identifier and other data standardisation concerns, chose a DTCC spokesperson to participate in its conference session on operations challenges and the OFR. Given that DTCC is pitching to be chosen as the technology provider to the new Treasury agency (alongside Swift as the registration authority for the new legal entity ID), is Bodson’s presence on the panel tacit confirmation of the association’s support for the DTCC’s bid? After all, DTCC is hotly tipped to be the frontrunner in the race…
Expect these topics and many more to be up for discussion at our upcoming Data Management for Risk, Analytics and Valuations conference (#DMRAV – for you twitterers out there) in NYC in a couple of weeks’ time.