About a-team Marketing Services
The knowledge platform for the financial technology industry

A-Team Insight Blogs

Wolters Kluwer Proposes Solutions for the Challenges of Risk Data


Risk data needs to be identified, aggregated and centralised to deliver improved risk management. But the process entails significant challenges as risk data is growing exponentially, operational risk data related to people, processes and systems must be managed, and financial risk data remains inconsistent across institutions.

That’s the message from Ioannis Akkizidis, global product manager at Wolters Kluwer Financial Services, who addressed the problems around risk data and offered some solutions during a webinar last week entitled Risk Data Aggregation and Risk Data Management – Identifying and Understanding Risk Data.

He introduced the webinar with a brief explanation of the post-financial-crisis realisation that data architectures were inadequate to support the management of financial risks, as risk exposures and concentrations could not be aggregated with any completeness, giving rise to inefficient risk reporting. Akkizidis noted a focus on global systemically important banks and the foundation of a legal entity identifier system as moves towards improved risk management, before turning to the more granular detail of risk data management.

He explained that risk data management and aggregation, coupled with improved infrastructure, offer gains including the ability to identify, monitor and manage risk; improved and faster decision making; a reduced probability and severity of losses; and greater ability to manage the risk of new products and services.

If these are some of the beneficial outcomes, the challenges of risk data management are equally substantial and come from factors such as business operations in global markets; the integration of many people, processes and systems; the integration of global financial analysis factors; and an increase in interconnections, concentrations and systemic relations among institutions. Reflecting on risk data itself, Akkizidis noted issues of missing and inconsistent data, alongside the need to distribute, audit and aggregate data that is complete, cleansed and quality assured.
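The checks Akkizidis alludes to — catching missing or inconsistent values before data enters the aggregation pipeline — can be sketched in a few lines. This is an illustrative example only: the record fields, currency list and rules are assumptions, not taken from the webinar.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical risk record; field names are illustrative, not from the webinar.
@dataclass
class RiskRecord:
    counterparty_id: Optional[str]
    exposure: Optional[float]
    currency: Optional[str]

def quality_issues(record: RiskRecord) -> list[str]:
    """Flag missing or inconsistent fields before the record enters aggregation."""
    issues = []
    if not record.counterparty_id:
        issues.append("missing counterparty identifier")
    if record.exposure is None:
        issues.append("missing exposure value")
    elif record.exposure < 0:
        issues.append("negative exposure is inconsistent for this record type")
    if record.currency not in {"USD", "EUR", "GBP"}:  # assumed reference list
        issues.append("unrecognised currency code")
    return issues
```

A record that passes with an empty issue list can proceed to aggregation; anything else is routed to remediation — the "cleansed and quality assured" gate described above.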

Moving on, he discussed the complexity of bringing together operational risk data, including statistically driven and assumed behaviour data, ever-changing process data and the large quantity of data emanating from systems integration and systems events. Added to this is financial and risk data that includes the input of both fictional data based on assumptions and real or observed data, and the output of estimated or actual data in governance and compliance, reporting and monitoring, risk analysis and profitability analysis systems.

On a more granular level, Akkizidis emphasised the need to identify the nature of risk analysis data, such as market data and assumptions that are observed and simulated, counterparty information that can be both steady and estimated, and reporting data that can be actual, calculated or simulated. He also noted the importance of understanding time issues and the use of through-the-cycle and point-in-time data, and referenced the correlations and hierarchies of risk data that are used in analysis.
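The through-the-cycle versus point-in-time distinction can be made concrete with a common simplification: a through-the-cycle estimate taken as the long-run average of point-in-time observations over a full economic cycle. This is a minimal sketch of that general idea, not the specific method discussed in the webinar.

```python
def through_the_cycle(pit_values: list[float]) -> float:
    """Approximate a through-the-cycle estimate as the average of
    point-in-time observations spanning a full economic cycle."""
    return sum(pit_values) / len(pit_values)

# Point-in-time values swing with current conditions; the through-the-cycle
# figure smooths them into a single stable estimate.
ttc = through_the_cycle([1.0, 3.0, 2.0, 2.0])
```

The point-in-time inputs move with prevailing conditions, while the smoothed figure stays stable across the cycle — which is why the two serve different reporting and analysis purposes.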

Moving back up the value chain, Akkizidis described an evolving data architecture and IT infrastructure that splits data into administrative data including rights management and preservation; structural data including the structure of data schemas; and descriptive data including relational risk, financial and IT systems data. The infrastructure must be flexible to accommodate change and support an aggregation process that harmonises inputs, both factual and assumed, to achieve harmonised aggregated outputs.
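The three-way split Akkizidis describes can be pictured as a simple categorisation of metadata items. The example entries below are assumptions for illustration; only the three layer names come from the article.

```python
from typing import Optional

# Illustrative categorisation of the three metadata layers described above;
# the individual entries are hypothetical examples, not from the webinar.
DATA_ARCHITECTURE = {
    "administrative": ["access_rights", "retention_policy", "preservation_rules"],
    "structural": ["table_schemas", "field_types", "entity_relationships"],
    "descriptive": ["risk_factor_links", "financial_system_refs", "it_system_refs"],
}

def layer_of(item: str) -> Optional[str]:
    """Return which architectural layer a metadata item belongs to, if any."""
    for layer, items in DATA_ARCHITECTURE.items():
        if item in items:
            return layer
    return None
```

Keeping the layers explicit is what makes the infrastructure "flexible to accommodate change": new descriptive items can be added without touching administrative rules or structural schemas.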

With the ability to aggregate data in place, Akkizidis considered how data from different regions can be aggregated and used, concluding that a central database is preferable to many regional databases that often include different data assumptions. He proposed that a single database can be used to deliver consistent data to various regions to meet their reporting requirements, and consistent data from the regions can then be aggregated to produce liquidity, concentration and exposure risk reports on an enterprise scale.
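The central-database approach can be sketched as follows: once every region reports against the same shared schema and assumptions, enterprise-wide aggregation is a straightforward roll-up. Region and counterparty names below are illustrative assumptions.

```python
from collections import defaultdict

# Regional rows sharing one consistent schema, as the central-database
# model requires; the figures and names are hypothetical.
regional_exposures = [
    {"region": "EMEA", "counterparty": "Bank A", "exposure": 120.0},
    {"region": "APAC", "counterparty": "Bank A", "exposure": 80.0},
    {"region": "AMER", "counterparty": "Bank B", "exposure": 50.0},
]

def enterprise_exposure(rows: list[dict]) -> dict[str, float]:
    """Aggregate consistent regional rows into enterprise-wide totals
    per counterparty, e.g. for a concentration or exposure report."""
    totals: defaultdict[str, float] = defaultdict(float)
    for row in rows:
        totals[row["counterparty"]] += row["exposure"]
    return dict(totals)
```

The roll-up is only meaningful because the inputs are consistent; with separate regional databases holding different assumptions, the same sum would silently mix incompatible figures — which is the argument for the central database.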

