The leading knowledge platform for the financial technology industry

A-Team Insight Blogs

A Rethink, Redesign and Retool is Needed for Systemic Risk Monitoring, Contends JWG’s Di Giammarino

In order to be able to properly monitor systemic risk, the industry first needs to understand the gaps in the current data framework and then rethink, redesign and retool the method of controlling “the system”, according to PJ Di Giammarino, CEO of JWG. Speaking at the FS Club event on systemic risk last week, Di Giammarino noted that before the regulatory community jumps into mandating data standards for systemic risk monitoring purposes, for example, it first needs to consult with the industry to define a specific roadmap for change.

Following on from the JWG event on funds transfer pricing earlier in the day, which also featured discussions on risk data challenges, the FS Club panel was charged with debating the tools needed to monitor systemic risk across the industry, including reference data standards for instrument and entity identification. Di Giammarino contended that the industry is only beginning to understand what is required to monitor systemic risk but may be getting ahead of itself without first considering the practicalities.

“There has been a lot of intellectual debate and policy points, but very little in the way of discussion about the practicalities of monitoring systemic risk. There are also significant disconnects across the G20 with regards to approaches to information management that could create further regulatory silos,” he explained. “There needs to be a complete rethink in the way we approach systemic risk.”

Di Giammarino believes there are three steps that need to be taken first before a roadmap for change can be drawn up: “rethink, redesign and retool”. A rethink is needed to be able to align macro and micro views, although this will be a difficult and costly process, he explained. Regulatory information silos are difficult to locate and to link today (just look at transaction reporting across Europe for an example) and therefore only a partial view of system data is available; hence a “redesign” is required. A retool would therefore involve a global approach to aggregating and interpreting the correct data, he added.

The current assumptions about how to manage risk data need to be fundamentally challenged, according to Di Giammarino, such as appreciating the consequences of getting it wrong. “Decisions based on garbage in, gospel out could be catastrophic and result in a high cost,” he warned. “It should also not be assumed that we have the right data at hand: existing data sets need to be supplemented.”

This, of course, will not come cheap. To this end, Di Giammarino underlined the need for investment in infrastructure by supervisors and suggested that tariff models are needed to be able to fund the establishment of new data structures. A part of this task will also be related to ensuring that the data is of good quality via standardisation and metrics being put in place, which could then form a part of an “agile and adaptive monitoring system”.

The barriers to achieving such a system are numerous and Di Giammarino grouped them into five main categories related to: disclosure; governance; cost and commercial issues; operating model objectives; and harmonisation. These concerns could also be applied to the push for a centralised and mandated reference data utility for the industry. Data privacy and the lack of agreed standards across the industry, for example, could potentially pose a significant problem for both systemic risk monitoring and the establishment of a utility.

Overall, he promoted the discussion and agreement by the industry and the regulatory community of the end to end design requirements for a systemic risk monitoring system. He noted that appropriate bodies need to be identified and made accountable for delivering the roadmap. The questions that need to be asked during this process are: “For what purpose should a control framework exist? Who will be accountable for its operation? What functionality is required? What metrics and linkages are needed to operate? What data requirements need to be met in terms of quantity and quality of data (including how much historical data should be retained)? And what are the people, processes and technologies required to make it all happen?”
