
A-Team Insight Blogs

Wolters Kluwer Proposes Solutions for the Challenges of Risk Data


Risk data needs to be identified, aggregated and centralised to deliver improved risk management. But the process entails significant challenges as risk data is growing exponentially, operational risk data related to people, processes and systems must be managed, and financial risk data remains inconsistent across institutions.

That’s the message from Ioannis Akkizidis, global product manager at Wolters Kluwer Financial Services, who addressed the problems around risk data and offered some solutions during a webinar last week entitled Risk Data Aggregation and Risk Data Management – Identifying and Understanding Risk Data.

He introduced the webinar with a brief explanation of the post-financial-crisis realisation that data architectures were inadequate to support the management of financial risks: risk exposures and concentrations could not be aggregated with any completeness, giving rise to inefficient risk reporting. Akkizidis noted the focus on global systemically important banks and the foundation of a legal entity identifier system as moves towards improvements in risk management, before turning to the more granular detail of risk data management.

He explained that risk data management and aggregation, coupled with improved infrastructure, offer gains including the ability to identify, monitor and manage risk; improved and faster decision making; a reduced probability and severity of losses; and greater ability to manage the risk of new products and services.

If these are some of the beneficial outcomes, the challenges of risk data management are equally substantial and come from factors such as business operations in global markets; the integration of many people, processes and systems; the integration of global financial analysis factors; and an increase in interconnections, concentrations and systemic relations among institutions. Reflecting on risk data itself, Akkizidis noted issues of missing data and inconsistent data, yet the need to distribute, audit and aggregate data that is complete, cleansed and quality assured.

Moving on, he discussed the complexity of bringing together operational risk data, including statistically driven and assumed behavioural data, ever-changing process data, and the large quantity of data emanating from systems integration and systems events. Added to this is financial and risk data that includes the input of both hypothetical data based on assumptions and real, observed data, and the output of estimated or actual data in governance and compliance, reporting and monitoring, risk analysis and profitability analysis systems.

On a more granular level, Akkizidis emphasised the need to identify the nature of risk analysis data, such as market data and assumptions that are observed and simulated, counterparty information that can be both steady and estimated, and reporting data that can be actual, calculated and simulated. He also noted the importance of understanding timing issues and the use of through-the-cycle and point-in-time data, and referenced the correlations and hierarchies of risk data that are used in analysis.

Moving back up the value chain, Akkizidis described an evolving data architecture and IT infrastructure that splits data into administrative data including rights management and preservation; structural data including the structure of data schemas; and descriptive data including relational risk, financial and IT systems data. The infrastructure must be flexible to accommodate change and support an aggregation process that harmonises inputs, both factual and assumed, to achieve harmonised aggregated outputs.
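The three-layer split Akkizidis describes can be sketched in code. This is a minimal illustration only; the class and field names are assumptions for the purpose of the example, not Wolters Kluwer's actual schema.

```python
from dataclasses import dataclass

@dataclass
class AdministrativeMetadata:
    owner: str             # rights management: who controls access
    retention_years: int   # preservation policy for the record

@dataclass
class StructuralMetadata:
    schema_name: str       # which data schema the record conforms to
    schema_version: int

@dataclass
class DescriptiveMetadata:
    risk_category: str     # relational risk / financial context
    source_system: str     # originating IT system

@dataclass
class RiskDataRecord:
    """A risk data item carrying all three metadata layers alongside its value."""
    admin: AdministrativeMetadata
    structure: StructuralMetadata
    description: DescriptiveMetadata
    value: float
```

Keeping the three layers as separate objects means rights and retention rules, schema evolution, and risk classification can each change without disturbing the others, which is one way to achieve the flexibility the infrastructure requires.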

With the ability to aggregate data in place, Akkizidis considered how data from different regions can be aggregated and used, concluding that a central database is preferable to many regional databases that often embed different data assumptions. He proposed that a single database can deliver consistent data to the various regions to meet their reporting requirements, and that consistent data from the regions can then be aggregated to produce liquidity, concentration and exposure risk reports on an enterprise scale.
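The central-database model described above can be sketched as follows. The records, field names and figures are invented for illustration; the point is that both the regional views and the enterprise-wide report are derived from one consistent store rather than from separately maintained regional databases.

```python
from collections import defaultdict

# Hypothetical central risk database: one consistent store for all regions.
CENTRAL_DB = [
    {"region": "EMEA", "counterparty": "A", "exposure": 120.0},
    {"region": "EMEA", "counterparty": "B", "exposure": 80.0},
    {"region": "APAC", "counterparty": "A", "exposure": 50.0},
    {"region": "AMER", "counterparty": "C", "exposure": 200.0},
]

def regional_view(db, region):
    """Deliver the consistent slice of central data a region needs for its own reporting."""
    return [record for record in db if record["region"] == region]

def enterprise_exposure(db):
    """Aggregate the same central data into enterprise-wide exposure per counterparty."""
    totals = defaultdict(float)
    for record in db:
        totals[record["counterparty"]] += record["exposure"]
    return dict(totals)
```

Because both functions read the same store, counterparty A's enterprise exposure (170.0 here) is guaranteed to reconcile with the sum of its regional figures, which is exactly the consistency that separate regional databases with differing assumptions cannot guarantee.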
