About a-team Marketing Services
The knowledge platform for the financial technology industry

A-Team Insight Blogs

Wolters Kluwer Proposes Solutions for the Challenges of Risk Data

Risk data needs to be identified, aggregated and centralised to deliver improved risk management. But the process entails significant challenges as risk data is growing exponentially, operational risk data related to people, processes and systems must be managed, and financial risk data remains inconsistent across institutions.

That’s the message from Ioannis Akkizidis, global product manager at Wolters Kluwer Financial Services, who addressed the problems around risk data and offered some solutions during a webinar last week entitled Risk Data Aggregation and Risk Data Management – Identifying and Understanding Risk Data.

He introduced the webinar with a brief explanation of the post-financial-crisis realisation that data architectures were inadequate to support the management of financial risks, as risk exposures and concentrations could not be aggregated with any completeness, giving rise to inefficient risk reporting. Akkizidis noted a focus on global systemically important banks and the foundation of a legal entity identifier system as moves towards improvements in risk management, before turning to the more granular detail of risk data management.

He explained that risk data management and aggregation, coupled with improved infrastructure, offers gains including the ability to identify, monitor and manage risk; improved and faster decision making; a reduced probability and severity of losses; and greater ability to manage the risk of new products and services.

If these are some of the beneficial outcomes, the challenges of risk data management are equally substantial and come from factors such as business operations in global markets; the integration of many people, processes and systems; the integration of global financial analysis factors; and an increase in interconnections, concentrations and systemic relations among institutions. Reflecting on risk data itself, Akkizidis noted issues of missing and inconsistent data, set against the need to distribute, audit and aggregate data that is complete, cleansed and quality assured.
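The completeness and consistency checks described here can be pictured as a simple cleansing stage ahead of aggregation. The sketch below is illustrative only, not Wolters Kluwer's implementation; the field names and validation rules are assumptions.

```python
# Hedged sketch: completeness and consistency checks a risk data pipeline
# might run before aggregation. Field names and rules are illustrative.

def validate_record(record, required_fields=("trade_id", "counterparty", "exposure", "currency")):
    """Return a list of quality issues found in a single risk record."""
    issues = []
    for field in required_fields:
        if record.get(field) in (None, ""):
            issues.append(f"missing field: {field}")
    exposure = record.get("exposure")
    if isinstance(exposure, (int, float)) and exposure < 0:
        issues.append("negative exposure")
    return issues

def cleanse(records):
    """Split records into quality-assured and rejected sets, keeping an audit trail."""
    clean, rejected = [], []
    for record in records:
        issues = validate_record(record)
        if issues:
            rejected.append((record, issues))
        else:
            clean.append(record)
    return clean, rejected
```

Keeping the rejected records together with the reasons for rejection supports the auditability Akkizidis mentions: nothing is silently dropped before aggregation.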

Moving on, he discussed the complexity of bringing together operational risk data including statistically driven and assumed behaviour data, ever-changing process data and the large quantity of data emanating from systems integration and systems events. Added to this is financial and risk data that includes the input of both fictional data based on assumptions and real or observed data, and the output of estimated or actual data in governance and compliance, reporting and monitoring, risk analysis and profitability analysis systems.

On a more granular level, Akkizidis emphasised the need to identify the nature of risk analysis data, for example market data and assumptions that are observed and simulated, counterparty information that can be both steady and estimated, and reporting data that can be actual, calculated and simulated. He also noted the importance of understanding time issues and the use of through-the-cycle and point-in-time data, and referenced the correlations and hierarchies of risk data that are used in analysis.
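The through-the-cycle versus point-in-time distinction can be made concrete with a small example. This is a hedged sketch, not from the webinar: it treats a point-in-time estimate as the latest observation and a through-the-cycle estimate as an average over the full historical cycle, using a hypothetical series of default probabilities.

```python
# Illustrative sketch of point-in-time (PIT) vs through-the-cycle (TTC)
# estimates; the default-probability series and definitions are assumptions
# for demonstration, not a regulatory methodology.

def pit_estimate(observations):
    """Point-in-time: the most recent observed value, reflecting current conditions."""
    return observations[-1]

def ttc_estimate(observations):
    """Through-the-cycle: the average over the whole observed cycle."""
    return sum(observations) / len(observations)
```

The two estimates diverge most at the peak or trough of a cycle, which is why identifying which kind of data feeds an analysis matters for interpreting its output.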

Moving back up the value chain, Akkizidis described an evolving data architecture and IT infrastructure that splits data into administrative data including rights management and preservation; structural data including the structure of data schemas; and descriptive data including relational risk, financial and IT systems data. The infrastructure must be flexible to accommodate change and support an aggregation process that harmonises inputs, both factual and assumed, to achieve harmonised aggregated outputs.
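The three-way split of administrative, structural and descriptive data can be sketched as a simple schema. The class and field names below are assumptions chosen to mirror the categories Akkizidis described, not Wolters Kluwer's actual data model.

```python
# Hedged sketch of the metadata taxonomy described above; field names
# are illustrative assumptions, not a vendor schema.

from dataclasses import dataclass, field

@dataclass
class AdministrativeMetadata:
    owner: str               # rights management
    retention_years: int     # preservation policy

@dataclass
class StructuralMetadata:
    schema_name: str         # structure of the data schema
    schema_version: str

@dataclass
class DescriptiveMetadata:
    risk_types: list = field(default_factory=list)  # relational risk/financial context
    source_system: str = ""                         # originating IT system

@dataclass
class RiskDataset:
    administrative: AdministrativeMetadata
    structural: StructuralMetadata
    descriptive: DescriptiveMetadata
```

Separating the three categories keeps change flexible in the way the article describes: a schema revision touches only structural metadata, while retention or rights changes touch only administrative metadata.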

With the ability to aggregate data in place, Akkizidis considered how data from different regions can be aggregated and used, concluding that a central database is preferable to many regional databases that often embed different data assumptions. He proposed that a single database can be used to deliver consistent data to various regions to meet their reporting requirements, and consistent data from the regions can then be aggregated to produce liquidity, concentration and exposure risk reports on an enterprise scale.
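The central-database model can be sketched as one consistent store that both serves regional reporting slices and rolls figures up to enterprise-level exposure. The data layout and keys below are illustrative assumptions, not the architecture presented in the webinar.

```python
# Hedged sketch of a central store feeding regional reports and an
# enterprise-level roll-up; record keys are illustrative assumptions.

from collections import defaultdict

def regional_report(central_db, region):
    """Consistent slice of the central database for one region's reporting."""
    return [row for row in central_db if row["region"] == region]

def enterprise_exposure(central_db):
    """Aggregate exposure per counterparty across all regions."""
    totals = defaultdict(float)
    for row in central_db:
        totals[row["counterparty"]] += row["exposure"]
    return dict(totals)
```

Because every regional report is a slice of the same store, the enterprise roll-up cannot disagree with the regional figures, which is the consistency argument for a single database over many regional ones.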
