
Embracing the Known in FRTB: Why Banks Need to Step Away from the Data Pool and Start with the Familiar

By: Charlie Browne, Head of Market & Risk Data Solutions, GoldenSource.

The Fundamental Review of the Trading Book (FRTB) is coming, and it has sent firms into a spin over how to source the data required to prove risk factor modellability. It is the first time banks will be obligated to do this, a mammoth undertaking that has seen many look to data pooling, where banks, data vendors, exchanges and trade repositories combine their data to demonstrate that a sufficient number of transactions have previously taken place.
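For context, the modellability test itself is mechanical. Under the original BCBS text, a risk factor counts as modellable if it has at least 24 "real" price observations over the previous 12 months, with no more than one month between consecutive observations. A minimal sketch of that count-and-gap check (function and variable names are illustrative, and the 31-day gap is an approximation of the one-month rule):

```python
from datetime import date, timedelta

# Thresholds from the January 2016 BCBS FRTB text: at least 24
# real-price observations in the trailing 12 months, with no gap of
# more than one month between consecutive observations.
MIN_OBSERVATIONS = 24
MAX_GAP_DAYS = 31  # approximation of "one month"

def is_modellable(observation_dates: list[date], as_of: date) -> bool:
    """Return True if a risk factor passes the real-price observation test."""
    window_start = as_of - timedelta(days=365)
    in_window = sorted(d for d in observation_dates if window_start <= d <= as_of)

    if len(in_window) < MIN_OBSERVATIONS:
        return False

    # No pair of consecutive observations may be more than a month apart.
    return all(
        (later - earlier).days <= MAX_GAP_DAYS
        for earlier, later in zip(in_window, in_window[1:])
    )
```

The pooling debate exists precisely because few banks can clear that 24-observation bar from their own trades alone across every risk factor they model.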

It’s a convincing proposition; banks simply do not have enough of their own data. Add to this the fact that data is very expensive, and that the majority of firms are keen to consolidate costs after several heavy years of regulatory demands, and the initial attraction is clear.

The problem is that firms, at such an early stage of preparations, are getting bogged down in the many intricacies and unknowns of the data pool concept. Would a single vendor become a one-stop shop, or would banks be reluctant to rely on a single source and instead spread the risk by enlisting multiple vendors? Then there is the question of who would be responsible for determining whether a risk factor is modellable, and whether the data pool itself is prepared to face potential questioning from regulators down the line.

Instead of getting stuck on the unknowns of risk factor modellability and data pooling, firms need to take a step back and see the bigger picture of a much wider-reaching set of rules. FRTB was broadly designed to address the shortcomings of Basel 2.5, which failed to resolve many key structural deficiencies in the market risk framework. Ultimately, the base intention of the regulation is much bigger than risk factor modellability: firms need to undertake a fundamental review of their data strategy.

They can begin to approach this task by getting the right data processes in place at the outset of FRTB preparations. This means having accurate and accessible market and risk datasets, and the right systems in place to run and interpret all of the calculations. By beginning with such a data-centric approach, firms can ensure that they are ready to meet massive potential challenges around aspects such as time-series cleansing, instrument lineage and single identifiers, to name but a few. And they might be pleasantly surprised by the benefits that fall out of the right FRTB strategy.
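To make "time-series cleansing" concrete, a typical first pass flags stale runs and outsized day-on-day moves before a price history is fed into the risk calculations. The sketch below is purely illustrative; the thresholds and names are hypothetical, not drawn from any particular vendor toolkit:

```python
def flag_suspect_points(prices: list[float],
                        stale_run: int = 5,
                        max_abs_return: float = 0.25) -> list[str]:
    """Label each point "OK", "STALE" (price unchanged for `stale_run`
    consecutive days) or "OUTLIER" (one-day move beyond `max_abs_return`)."""
    flags = ["OK"] * len(prices)
    run = 1
    for i in range(1, len(prices)):
        prev, cur = prices[i - 1], prices[i]
        run = run + 1 if cur == prev else 1  # length of the current repeat run
        if run >= stale_run:
            flags[i] = "STALE"
        elif prev != 0 and abs(cur / prev - 1) > max_abs_return:
            flags[i] = "OUTLIER"
    return flags

# Example: a five-day stale run followed by a suspicious jump.
print(flag_suspect_points([100, 100, 100, 100, 100, 140]))
# ['OK', 'OK', 'OK', 'OK', 'STALE', 'OUTLIER']
```

Checks like this are the unglamorous groundwork; the point of a data-centric FRTB programme is that they are built once and reused everywhere.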

That is, if firms get their data strategy right for FRTB, they will automatically address the data requirements of a raft of other regulations, such as BCBS 239, Prudent Valuation and CCAR. This is a massive opportunity for firms to evaluate their entire data infrastructure and ensure they are taking a broader approach to regulation, rather than addressing different directives in silos.

As with any new regulation, the temptation with FRTB is for banks to focus largely on the aspects that are completely new and unknown. This is why the conversation around data pools as a solution to non-modellable risk factors has become so prominent. Firms that put too much time and resource into this single aspect could be missing a trick. In many ways, FRTB is a catalyst for compliance teams to take a step back, take stock, and put together a comprehensive data strategy that protects them against multiple regulatory requirements and future-proofs them for years to come.
