About A-Team Marketing Services
The leading knowledge platform for the financial technology industry

A-Team Insight Blogs

Wolters Kluwer Proposes Solutions for the Challenges of Risk Data


Risk data needs to be identified, aggregated and centralised to deliver improved risk management. But the process entails significant challenges as risk data is growing exponentially, operational risk data related to people, processes and systems must be managed, and financial risk data remains inconsistent across institutions.

That’s the message from Ioannis Akkizidis, global product manager at Wolters Kluwer Financial Services, who addressed the problems around risk data and offered some solutions during a webinar last week entitled Risk Data Aggregation and Risk Data Management – Identifying and Understanding Risk Data.

He introduced the webinar with a brief explanation of the post-financial-crisis realisation that data architectures were inadequate to support the management of financial risks, as risk exposures and concentrations could not be aggregated with any completeness, giving rise to inefficient risk reporting. Akkizidis noted a focus on global systemically important banks and the foundation of a legal entity identifier system as moves towards improvements in risk management, before turning to the more granular detail of risk data management.

He explained that risk data management and aggregation, coupled with improved infrastructure, offers gains including the ability to identify, monitor and manage risk; improved and faster decision making; a reduced probability and severity of losses; and a greater ability to manage the risk of new products and services.

If these are some of the beneficial outcomes, the challenges of risk data management are equally substantial and come from factors such as business operations in global markets; the integration of many people, processes and systems; the integration of global financial analysis factors; and an increase in interconnections, concentrations and systemic relations among institutions. Reflecting on risk data itself, Akkizidis noted issues of missing data and inconsistent data, yet the need to distribute, audit and aggregate data that is complete, cleansed and quality assured.
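The completeness and cleansing requirements Akkizidis raises can be made concrete with a minimal sketch. The field names (`trade_id`, `exposure`, `currency`) and the specific checks are illustrative assumptions, not part of the webinar:

```python
# Illustrative sketch only: a minimal completeness/consistency check of the
# kind described above, run before risk data is distributed and aggregated.
# All field names and rules here are hypothetical.

REQUIRED_FIELDS = {"trade_id", "exposure", "currency"}

def validate_record(record: dict) -> list[str]:
    """Return a list of data-quality issues found in one risk record."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    exposure = record.get("exposure")
    if exposure is not None and exposure < 0:
        # inconsistent with the (assumed) convention of non-negative exposure
        issues.append("negative exposure")
    return issues

def cleanse(records: list[dict]) -> list[dict]:
    """Keep only records that pass all quality checks."""
    return [r for r in records if not validate_record(r)]
```

In this sketch, a record missing any required field, or carrying an inconsistent value, is rejected before aggregation, so only complete, quality-assured data flows downstream.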

Moving on, he discussed the complexity of bringing together operational risk data, including statistically driven and assumed behaviour data, ever-changing process data, and the large quantity of data emanating from systems integration and systems events. Added to this is financial and risk data that includes the input of both fictional data based on assumptions and real or observed data, and the output of estimated or actual data in governance and compliance, reporting and monitoring, risk analysis and profitability analysis systems.

On a more granular level, Akkizidis emphasised the need to identify the nature of risk analysis data: for example, market data and assumptions that are observed or simulated; counterparty information that can be steady or estimated; and reporting data that can be actual, calculated or simulated. He also noted the importance of understanding time issues and the use of through-the-cycle and point-in-time data, and referenced the correlations and hierarchies of risk data that are used in analysis.

Moving back up the value chain, Akkizidis described an evolving data architecture and IT infrastructure that splits data into administrative data including rights management and preservation; structural data including the structure of data schemas; and descriptive data including relational risk, financial and IT systems data. The infrastructure must be flexible to accommodate change and support an aggregation process that harmonises inputs, both factual and assumed, to achieve harmonised aggregated outputs.
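The three-way split Akkizidis describes can be sketched as a simple data model. The class and field names below are illustrative assumptions about what each category might hold, not a description of Wolters Kluwer's actual architecture:

```python
# A hypothetical sketch of the three-way data split described above:
# administrative, structural and descriptive data held alongside the
# risk data itself. All names and fields are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class AdministrativeData:
    owner: str             # rights management
    retention_years: int   # preservation policy

@dataclass
class StructuralData:
    schema_name: str       # structure of the data schema
    version: str           # allows the schema to evolve over time

@dataclass
class DescriptiveData:
    risk_types: list[str] = field(default_factory=list)       # relational risk data
    source_systems: list[str] = field(default_factory=list)   # financial and IT systems

@dataclass
class RiskDataSet:
    administrative: AdministrativeData
    structural: StructuralData
    descriptive: DescriptiveData
```

Separating these concerns is one way to keep the infrastructure flexible: schemas (structural data) can change without touching rights management (administrative data), supporting the harmonised aggregation the article describes.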

With the ability to aggregate data in place, Akkizidis considered how data from different regions can be aggregated and used, concluding that a central database is preferable to many regional databases that often include different data assumptions. He proposed that a single database can be used to deliver consistent data to various regions to meet their reporting requirements, and consistent data from the regions can then be aggregated to produce liquidity, concentration and exposure risk reports on an enterprise scale.
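The enterprise-scale roll-up described above can be sketched in a few lines. The region names, counterparties and figures are invented for illustration:

```python
# Illustrative only: rolling up consistent regional risk data held in a
# single central store into enterprise-level exposure totals, in the
# spirit of the central-database approach proposed in the webinar.
# All region names, counterparties and figures are invented.

from collections import defaultdict

def aggregate_exposures(central_db: list[dict]) -> dict[str, float]:
    """Sum exposures across all regions, grouped by counterparty."""
    totals: dict[str, float] = defaultdict(float)
    for row in central_db:
        totals[row["counterparty"]] += row["exposure"]
    return dict(totals)

central_db = [
    {"region": "EMEA", "counterparty": "Bank A", "exposure": 120.0},
    {"region": "APAC", "counterparty": "Bank A", "exposure": 80.0},
    {"region": "AMER", "counterparty": "Bank B", "exposure": 50.0},
]
```

Because every region writes to the same store under the same assumptions, `aggregate_exposures(central_db)` yields one consistent enterprise view (here, Bank A at 200.0 across two regions), rather than figures that differ by regional database.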
