About A-Team Marketing Services
The knowledge platform for the financial technology industry

A-Team Insight Blogs

Wolters Kluwer Proposes Solutions for the Challenges of Risk Data

Risk data needs to be identified, aggregated and centralised to deliver improved risk management. But the process entails significant challenges as risk data is growing exponentially, operational risk data related to people, processes and systems must be managed, and financial risk data remains inconsistent across institutions.

That’s the message from Ioannis Akkizidis, global product manager at Wolters Kluwer Financial Services, who addressed the problems around risk data and offered some solutions during a webinar last week entitled Risk Data Aggregation and Risk Data Management – Identifying and Understanding Risk Data.

He introduced the webinar with a brief explanation of the post-financial-crisis realisation that data architectures were inadequate to support the management of financial risks, as risk exposures and concentrations could not be aggregated with any completeness, giving rise to inefficient risk reporting. Akkizidis noted a focus on global systemically important banks and the foundation of a legal entity identifier system as moves towards improvements in risk management, before turning to the more granular detail of risk data management.

He explained that risk data management and aggregation, coupled with improved infrastructure, offer gains including the ability to identify, monitor and manage risk; improved and faster decision making; a reduced probability and severity of losses; and greater ability to manage the risk of new products and services.

If these are some of the beneficial outcomes, the challenges of risk data management are equally substantial and come from factors such as business operations in global markets; the integration of many people, processes and systems; the integration of global financial analysis factors; and an increase in interconnections, concentrations and systemic relations among institutions. Reflecting on risk data itself, Akkizidis noted issues of missing data and inconsistent data, yet the need to distribute, audit and aggregate data that is complete, cleansed and quality assured.

Moving on, he discussed the complexity of bringing together operational risk data, including statistically driven and assumed behaviour data, ever-changing process data, and the large quantity of data emanating from systems integration and systems events. Added to this is financial and risk data that includes the input of both hypothetical data based on assumptions and real or observed data, and the output of estimated or actual data in governance and compliance, reporting and monitoring, risk analysis and profitability analysis systems.

On a more granular level, Akkizidis emphasised the need to identify the nature of risk analysis data, such as market data and assumptions that are observed or simulated, counterparty information that can be both steady and estimated, and reporting data that can be actual, calculated or simulated. He also noted the importance of understanding time issues and the use of through-the-cycle and point-in-time data, and referenced the correlations and hierarchies of risk data that are used in analysis.
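The classification Akkizidis outlines, data tagged by its origin (observed, simulated, estimated) and its time basis (point-in-time versus through-the-cycle), can be sketched as a simple tagging scheme. The following minimal Python illustration is our own assumption about how such tags might be modelled, not Wolters Kluwer's actual data model:

```python
from dataclasses import dataclass
from enum import Enum

class Origin(Enum):
    OBSERVED = "observed"    # e.g. quoted market prices
    SIMULATED = "simulated"  # e.g. scenario-generated paths
    ESTIMATED = "estimated"  # e.g. modelled counterparty exposure

class TimeBasis(Enum):
    POINT_IN_TIME = "PIT"      # reflects current conditions
    THROUGH_THE_CYCLE = "TTC"  # averaged over a full economic cycle

@dataclass
class RiskDataPoint:
    """A single risk data item tagged with its nature (hypothetical schema)."""
    name: str
    value: float
    origin: Origin
    time_basis: TimeBasis

# Tagging data this way lets downstream reports distinguish actual
# figures from calculated or simulated ones, as the webinar suggests.
pd_estimate = RiskDataPoint("counterparty_PD", 0.021,
                            Origin.ESTIMATED, TimeBasis.THROUGH_THE_CYCLE)
spot_price = RiskDataPoint("EURUSD_spot", 1.0843,
                           Origin.OBSERVED, TimeBasis.POINT_IN_TIME)
```

Carrying these tags alongside the values is one way an aggregation layer could audit whether a report mixes observed and assumed inputs.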

Moving back up the value chain, Akkizidis described an evolving data architecture and IT infrastructure that splits data into administrative data including rights management and preservation; structural data including the structure of data schemas; and descriptive data including relational risk, financial and IT systems data. The infrastructure must be flexible to accommodate change and support an aggregation process that harmonises inputs, both factual and assumed, to achieve harmonised aggregated outputs.
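The three-way split of administrative, structural and descriptive data can be pictured as a metadata record attached to each risk dataset. The sketch below is purely illustrative; the keys and values are our own assumptions rather than any Wolters Kluwer schema:

```python
# Hypothetical metadata record illustrating the administrative /
# structural / descriptive split described above.
risk_dataset_metadata = {
    "administrative": {          # rights management and preservation
        "owner": "group_risk",
        "access_rights": ["risk_analysts", "auditors"],
        "retention_years": 7,
    },
    "structural": {              # structure of the data schemas
        "schema_version": "2.3",
        "primary_key": ["entity_id", "as_of_date"],
        "partitioned_by": "region",
    },
    "descriptive": {             # relational risk, financial and IT context
        "domain": "credit_risk",
        "source_systems": ["loan_book", "collateral_mgmt"],
        "links_to": ["counterparty_master", "lei_registry"],
    },
}
```

Keeping the three layers distinct is what allows the infrastructure to change one (say, a new schema version) without disturbing the others.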

With the ability to aggregate data in place, Akkizidis considered how data from different regions can be aggregated and used, concluding that a central database is preferable to many regional databases that often embed different data assumptions. He proposed that a single database can be used to deliver consistent data to various regions to meet their reporting requirements, and consistent data from the regions can then be aggregated to produce liquidity, concentration and exposure risk reports on an enterprise scale.
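The advantage of the central-database approach is that regional feeds share one schema and one set of assumptions, so exposures roll up without reconciliation. A minimal sketch, with field names and figures invented for illustration:

```python
from collections import defaultdict

# Hypothetical regional feeds, all conforming to one central schema
# so that no per-region assumptions need reconciling on aggregation.
regional_feeds = [
    {"region": "EMEA", "counterparty": "Bank A", "exposure": 120.0},
    {"region": "APAC", "counterparty": "Bank A", "exposure": 80.0},
    {"region": "AMER", "counterparty": "Bank B", "exposure": 200.0},
]

def aggregate_exposures(feeds):
    """Roll regionally consistent records up into enterprise-level
    exposure per counterparty, as an input to concentration reporting."""
    totals = defaultdict(float)
    for record in feeds:
        totals[record["counterparty"]] += record["exposure"]
    return dict(totals)

enterprise_view = aggregate_exposures(regional_feeds)
# Bank A's concentration is now visible across regions rather than
# split invisibly between EMEA and APAC databases.
```

With inconsistent regional databases, the two Bank A records might use different netting or valuation assumptions and could not simply be summed; that is the inconsistency the single-database design avoids.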
