By Ken Lamar, Founder and Principal Partner, Lamar Associates LLC
(With Robert Lee, Executive Director, North America Client Engagement, AxiomSL).
Regulators are beginning a journey to design the next generation of data collection systems. A critical aspect of the design process is the need for new data capabilities that allow supervisors to conduct their activities effectively: ensuring the financial stability, safety, and soundness of individual institutions, and monitoring conditions in the financial markets.
As this journey evolves, stakeholders are considering whether the data collection architecture should continue to be based on a ‘push’ model in which reporting entities gather data from their internal systems and submit it to the authorities or, alternatively, begin to migrate to a new ‘pull’ model by which regulators would be able to retrieve the required data directly from the reporting entity’s system.
As regulators and financial institutions assess changes to the data collection model, they face impending structural changes, many of which are already in progress. It is becoming increasingly clear that RegTech and SupTech firms – often working collaboratively with market participants – will play key roles in creating and enabling rapid, successful adoption of these advancing data collection systems. To clarify these entities' points of view:
- SupTech providers offer supervisory agencies and regulatory bodies innovative technologies that will enable them to receive the increased data and information they require from reporting institutions to carry out their oversight obligations more efficiently and in more compressed timeframes.
- RegTech providers offer financial institutions technology innovations that enable them to transparently manage the expanding and more granular data requirements, and nimbly respond to the new risk and regulatory data collection/reporting regimes.
To facilitate consideration of this rapidly accelerating journey, this paper:
- Outlines the need for advancing data capabilities and discusses some initiatives that are under way
- Considers the necessary contributions of RegTech, SupTech and other market participants to the success of the endeavor
- Highlights the challenges in implementing the next generation of supervisory data collection systems
- Calls out actions that financial institutions should take to prepare for the shift to new data collection models
- Offers some examples of collaborative work in progress in various regions as experienced by this article’s RegTech contributor, AxiomSL
The need for improved data collection systems
The complexity and interconnectedness of firms has made data central to financial institution supervision. Data is crucial for understanding the activities of financial institutions and mitigating risks to financial markets and the economy as a whole. The market stresses of the past decade have shown that without timely data, policymakers and supervisors can only respond reactively rather than proactively identify and mitigate emerging risks. With this in mind, regulators are formulating models for the next generation of data collection systems and the associated capabilities.
Increasingly, they are requiring that data be dynamically available at the product and instrument level. More and more regulatory data will be needed on demand, almost in real time, which leaves financial institutions little lead time to meet the requirement and creates the need for data collections to be implemented using a pull approach.
This differs greatly from today’s environment in which firms aggregate and push data to regulators based on regular schedules. The current model is largely supported by data collection systems designed in terms of specific reporting templates that require significant lead time to complete.
Next generation data collection systems
To keep pace with technology and other innovations, regulatory bodies are establishing innovation offices. A number of these offices are conducting pilots, sprints, and competitions – or have already conducted them – to develop roadmaps to achieve greater data availability and increase the usefulness of the data. Moving forward in efforts to advance data collection systems requires close public and private sector collaboration, including involving third-party RegTech and SupTech vendors.
On the conceptual level, work on advancing the next generation of regulatory data collection systems is focused on several areas:
- Data standardization
- Communicating reporting requirements
- Report templates
- Granular data
- Transmitting data
Data standardization
Data standards are the foundation for evolving the regulatory data collection system. Currently, data items and attributes are report-specific and differ across regulators and jurisdictions. Without widely adopted data standards and clear data definitions, it will not be possible to improve the quality of granular data or establish more efficient methods for transmitting data to regulators. And, of course, without achieving those efficiencies, regulatory costs cannot be reduced.
Effective data standards require agreement among regulators on all aspects of the data elements and attributes. Globally, regulators have taken steps to standardize data. Regulatory initiatives are under way at a number of supervisory organizations, including the Bank of Italy (BdI), the European Banking Authority (EBA), the Financial Conduct Authority (FCA), and the Central Bank of Austria (OeNB).
In most cases, these efforts are technology related. However, before technology solutions can take hold, improved collaboration and processes are needed where regulators across agencies and jurisdictions agree on standards (conceptual and technical). Granted, there always will be circumstances in which regulators and jurisdictions have some differing data definitions. When this occurs, the differences should be explicitly explained, and the effort and costs associated with establishing and maintaining them should be justified.
Communicating reporting requirements
Most reporting requirements are provided through printed reporting instructions. For the most part, these instructions are translations of regulations, accounting standards, statistical standards, and other types of public policy guidance. The broad nature of these written requirements means firms must first interpret them and then communicate them throughout their organizations, identifying the applicability of the requirements to products and internal policies.
Regulators also use other methods to provide reporting guidance, including question-and-answer series and supplemental instructions issued outside the reporting instructions. Relying on multiple sources of instruction, however, often creates a fragmented experience for the financial institutions preparing to comply, which can lead to inconsistent reporting and decreased data quality and reliability.
Certainly, setting data standards is a critical first step to establishing a base that advances data capabilities. The next step is to give institutions the capabilities they need to implement regulatory data requirements effectively and efficiently. This would enable them to consistently interpret the reporting requirements and trace the requirements back to their authoritative regulatory source.
Several regulators are pursuing efforts aimed at improving the communication channel for reporting instructions by using innovative approaches to provide more comprehensive metadata about data requirements and making the metadata more accessible to the public, reporting institutions, vendors, and data users. These regulators have outlined a target state under which such new communication channels can provide regulations and reporting requirements in machine-readable formats. As a result, reporting instructions to implement requirements would be open to minimal interpretation, ensuring conformance with the requirements. Efforts in this area are under way at the BdI, the European Central Bank (ECB), and the FCA.
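The machine-readable target state described above can be illustrated with a small sketch. The requirement format below is entirely hypothetical – the field names and rule syntax are invented for illustration and do not follow the BdI, ECB, or FCA work – but it shows how a requirement published as structured metadata, rather than printed instructions, leaves minimal room for interpretation and can drive validation automatically:

```python
import json

# Hypothetical machine-readable reporting requirement. The report name,
# field names, and rule keywords are illustrative, not any regulator's
# actual schema.
REQUIREMENT = json.loads("""
{
  "report": "LOAN-PORTFOLIO-Q",
  "source_regulation": "Reg-XYZ Section 4.2",
  "fields": [
    {"name": "loan_id",       "type": "string",  "required": true},
    {"name": "outstanding",   "type": "decimal", "required": true, "min": 0},
    {"name": "maturity_date", "type": "date",    "required": true}
  ]
}
""")

def validate(record: dict, requirement: dict) -> list:
    """Return the rule violations for one record, derived directly
    from the published requirement rather than a human reading of it."""
    errors = []
    for field in requirement["fields"]:
        name = field["name"]
        if field.get("required") and name not in record:
            errors.append(f"missing required field: {name}")
            continue
        if "min" in field and record.get(name, 0) < field["min"]:
            errors.append(f"{name} below minimum {field['min']}")
    return errors

print(validate({"loan_id": "L-1", "outstanding": -50.0}, REQUIREMENT))
```

Because every consumer of the metadata derives the same checks, a firm, its vendors, and the regulator all apply one interpretation of the requirement.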
Report templates
Currently, report formats are tabular, with defined rows and columns that reporting institutions must populate. Standardizing and automating the formats and protocols used for regulatory data collections will enable regulators to receive data efficiently while minimizing the reporting burden on firms. Thus, choosing schemas such as eXtensible Markup Language (XML) and eXtensible Business Reporting Language (XBRL) across regulators and data collections is another important step forward. Settling on the best schema, however, requires collaboration with reporting institutions and RegTech/SupTech firms, since solutions are most effective when they align with how reporting institutions store and manage data in their internal systems.
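To make the schema idea concrete, the sketch below serializes one template row as an XML fragment. The element and attribute names are invented for this illustration and do not follow any actual XBRL taxonomy; a real filing would reference the taxonomy concepts defined by the regulator:

```python
import xml.etree.ElementTree as ET

# Illustrative only: the "report" and "item" element names are invented
# and do not correspond to a real XBRL taxonomy.
def row_to_xml(report: str, row: dict) -> str:
    """Serialize one tabular template row as a machine-readable XML fragment."""
    root = ET.Element("report", name=report)
    for item, value in row.items():
        cell = ET.SubElement(root, "item", ref=item)
        cell.text = str(value)
    return ET.tostring(root, encoding="unicode")

fragment = row_to_xml("CAPITAL-SUMMARY", {"tier1_capital": 1200, "rwa": 9800})
print(fragment)
```

The point of a shared schema is that the same fragment is unambiguous to every regulator and vendor that consumes it, removing the template-by-template mapping work firms do today.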
Granular data
The complexity of financial products and risk management practices has made product- and transaction-level data a regulatory necessity. Advancing technology is extending the capabilities to provide this data to regulators. These advances present the opportunity to significantly change the model for regulatory data collections to one that uses granular data to:
- Monitor the risk of portfolios across institutions;
- Aggregate data to create reports;
- Merge and transform data to gather insights;
- Eliminate duplicative regulatory reports.
This requires data standardization and technology capable of handling large volumes of data to be exchanged. In a limited way, this model exists today and is used in many countries including for capital planning and stress-testing data. However, for the most part, these emerging data collection models have been designed for specific purposes and are narrowly defined.
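The "collect once, use many times" advantage of granular data can be shown in miniature. The transaction records below are hypothetical stand-ins for data a regulator might obtain; the same records support any roll-up without a new collection:

```python
from collections import defaultdict

# Hypothetical transaction-level records; field names are illustrative.
transactions = [
    {"bank": "A", "product": "re_loan", "exposure": 500.0},
    {"bank": "A", "product": "c_loan",  "exposure": 300.0},
    {"bank": "B", "product": "re_loan", "exposure": 700.0},
]

def aggregate_by_product(records):
    """Roll granular records up into a cross-institution view per product."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec["product"]] += rec["exposure"]
    return dict(totals)

# One granular collection supports many derived reports: the same records
# could equally be rolled up by bank, region, or maturity bucket.
print(aggregate_by_product(transactions))
```

This is why granular collection can eliminate duplicative template reports: each template becomes just another aggregation over the same source records.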
A new solution to expand the availability and timely use of granular data is to develop capabilities for regulators to pull data directly from institutions' data repositories. To do this, data at the product and transaction level (i.e., the data source) must have the same degree of controls and data quality standards that exist today when institutions push data to regulators.
Several regulators have initiated projects to advance the capabilities for obtaining granular data through a pull mechanism, including the Australian Prudential Regulation Authority (APRA), the EBA, the ECB, the OeNB, and the Federal Deposit Insurance Corporation (FDIC).
Transmitting data to regulators
As regulators continue on the path toward requiring granular data, they need to create new ways to obtain and onboard it. Data transmission protocols need to accommodate ever-increasing data volumes and address the size limitations that exist today. These efforts should include creating the capabilities needed to implement the pull approach. Moving to a pull approach allows regulators to access data as needed – a capability that becomes especially important during times of stress, when quick access to data can be critical – and enables them to migrate away from issuing ad-hoc data requests.
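One common way to reconcile a pull model with transmission size limits is pagination: the regulator retrieves the dataset in size-bounded pages rather than one oversized transfer. The loop below is a minimal sketch; the `fetch_page` interface and its `offset`/`limit` parameters are hypothetical, not any regulator's actual API:

```python
# Minimal sketch of a paginated "pull" retrieval loop. The fetch_page
# interface and its parameters are invented for illustration.
def pull_all(fetch_page, page_size=1000):
    """Retrieve a full granular dataset in size-bounded pages."""
    records, cursor = [], 0
    while True:
        page = fetch_page(offset=cursor, limit=page_size)
        records.extend(page)
        if len(page) < page_size:   # short page signals the end of the data
            return records
        cursor += page_size

# Stand-in for a reporting institution's data repository.
REPOSITORY = [{"txn_id": i} for i in range(2500)]

def fetch_page(offset, limit):
    return REPOSITORY[offset:offset + limit]

print(len(pull_all(fetch_page)))  # 2500
```

A production design would add authentication, encryption in transit, and an audit trail of every pull, but the size-limitation problem itself is handled by the paging contract.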
APRA, the EBA, the FDIC, and the Monetary Authority of Singapore (MAS) are working on projects to assess the feasibility of this data collection model.
Challenges to implementation
The ongoing efforts to advance supervisory data collections are strategic initiatives needed to enable regulators to use data dynamically and apply advanced analytical techniques. However, there are significant challenges on the journey to realize regulators' vision. In the first place, establishing data standards requires collaboration and agreement among standard setters – such as the Financial Accounting Standards Board (FASB) and the International Accounting Standards Board (IASB) – and regulators across agencies and jurisdictions. Then, such standards must be adopted by international bodies such as the Bank for International Settlements (BIS).
The realization that data standards are essential to ensure data availability, improve data quality, and reduce regulatory burden is an important step toward achieving global standards. To date, however, data standards have yet to be adopted on a wide scale, largely because no governance process spans regulators. Separate data definitions continue to be used for internal risk management, public financial reporting, and regulatory reports.
As regulators create processes that more effectively standardize data definitions, translating the legacy definitions will be a difficult task. Using a “big bang” approach to define all data elements (numbering in the tens of thousands) is not practical – the scope is too large, the process is unmanageable, these projects take too long to complete, and they can be quite expensive.
A more effective and practical approach is to focus standard definitions on the product or transactions level (e.g., loans) or even on a subset of products (e.g., real-estate loans). Using a more targeted approach would make it much more possible to define data elements across data collections and regulators – and would enable the standard setters to focus on the attributes needed to identify the risks in these particular products and the markets these products impact.
Need for a collaborative model that embraces security
For standards to be adopted widely, a collaborative engagement model is needed – one that goes beyond the public sector to include private-sector data and technology providers. The related discussions should focus on how data providers use the requested data to manage risk and analyze business trends; grounding standards in that usage lowers costs and increases data quality.
To achieve the goal of collecting data once but using it for many purposes, appropriate data-sharing and security protocols must be put in place. The sensitive nature of more granular data naturally heightens privacy concerns across the board – with the attendant risk that firms' proprietary material non-public information (trading strategies, etc.) could be inadvertently disclosed.
Data quality challenges will also persist if a straight-through model is adopted whereby granular, product-level data is transmitted directly to regulators. The data quality risk could be even greater if regulators were to pull data directly from reporters' systems without proper quality assurance controls in place at the source of the data. Therefore, strengthening and increasing data controls at the point of origination requires a significant cultural transformation and investment.
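Controls at the point of origination are typically expressed as completeness, validity, and consistency checks that a record must pass before it is eligible to be pulled. The sketch below is illustrative; the rule names, fields, and thresholds are invented, not a regulator's actual control framework:

```python
# Sketch of quality-assurance checks applied at the data source before a
# record becomes eligible for regulatory pull. Rule names, fields, and
# thresholds are invented for illustration.
def quality_gate(record: dict) -> list:
    """Return the control checks a record fails (empty list = passes)."""
    failures = []
    if not record.get("loan_id"):
        failures.append("completeness: loan_id missing")
    if record.get("outstanding", 0) < 0:
        failures.append("validity: negative outstanding balance")
    if record.get("origination_date", "") > record.get("maturity_date", ""):
        failures.append("consistency: origination after maturity")
    return failures

clean = {"loan_id": "L-9", "outstanding": 100.0,
         "origination_date": "2020-01-15", "maturity_date": "2030-01-15"}
bad   = {"outstanding": -5.0,
         "origination_date": "2031-01-01", "maturity_date": "2030-01-01"}

print(quality_gate(clean))  # passes all checks
print(quality_gate(bad))    # fails all three checks
```

In a push model these checks run once, at report preparation; in a pull model they must run continuously at the source, which is what makes the control investment a cultural change rather than a reporting-cycle task.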
Some firms are in the process of establishing authorized data sources (ADS) to meet regulatory reporting requirements. Many ADS and derivations of them are often needed to conform to reporting requirements. But if there are too many of them, regulators will struggle to pull data from one source. Accordingly, firms should continue to transform multiple data sources into single repositories with all the data attributes, even though this endeavor will continue to require a significant technology and data investment.
How to Meet the Challenge
Since the financial crisis of 2007–2008, many firms have been on a data journey that involves improving data quality, enhancing controls, transforming data culture, and designing more effective data infrastructures. Much work in these areas remains to be done. But regulatory proposals to modernize supervisory data collection are now accelerating the pace of change.
Underlying many of the proposals now under discussion is the regulators' objective of obtaining standard, granular data directly from regulated institutions' systems. To achieve this, the data must conform to the data definitions and quality expectations set by the regulators. When data is pulled directly, firms need the capabilities to provide appropriate context around the target data and must have confidence in the security architectures (the regulators' and their own) that protect sensitive data.
To effectively use the new data collection models, firms need to carefully monitor the actions being taken by regulators. Collaborating with regulators early in the data collection system design is an important step to ensure that regulators understand the limitations firms encounter when migrating to the new data collections processes and the scope of investments required to transition to these new data collection models.
For a regulatory-driven pull-type data collection model to be successful, changes are required in firms’ data infrastructures, controls, and data governance processes at the source of data. Therefore, firms should begin to assess how these concepts apply to their organizations and to the maturity of their data capabilities. As data becomes more widely distributed to other regulatory data users and is increasingly shared with the public, firms must be able to explain how the data relates to their business models and risk profiles.
SupTech and RegTech firms working collaboratively with all participants will be instrumental in establishing efficient data collection mechanisms on the regulator side and in ensuring that reporting financial institutions are fully equipped to accommodate the new methods in a transparent, highly secure, and controlled environment.
It is clear that regulators’ data requirements are increasing and to meet them, both regulators and firms need more extensive capabilities. The data journey at firms and regulators will continue to evolve, requiring deeper collaborative public-private partnerships as the pull-push evolution accelerates. Several examples follow of interesting new regulatory data collection initiatives in progress around the world as experienced by AxiomSL, a leading RegTech provider.
To reiterate what was stated earlier, the pace of regulatory change is expected to continue to accelerate as regulators strive to leverage new technologies to modernize data collection activities aimed at gathering more detail and granularity. These are extremely exciting initiatives that deserve close attention and planning.