

Decentralized Data May Cost Firms $600 Million Annually


Decentralized reference data operations and the effects of faulty data on operational risk and capital requirements could be costing financial services institutions an astonishing $600 million per firm each year, according to the findings of a recent research paper.

The paper – titled Operational Risk and Reference Data: Exploring Costs, Capital Requirements and Risk Mitigation – claims to be the first attempt to explore scientifically the costs of reference data and the effects of faulty data on operational risk and capital, especially under the upcoming Basel II requirements. The authors note that others have attempted to quantify these costs, but usually only in an anecdotally documented fashion.

The paper’s authors are: Allan Grody, president, Financial InterGroup; Fotios Harmantzis, PhD and assistant professor, School of Technology Management, Stevens Institute of Technology; and Gregory Kaple, senior partner, Integrated Management Services Inc.

Among the paper’s conclusions is that reference data systems are emerging as a financial application platform concept, distinct from the application logic that supports individual business processes across a financial enterprise. The authors also point out that being better at transactional reference data management offers no strategic value and assert that, since faulty reference data lies at the core of key operational losses, an industry-wide initiative is the only way forward. The paper suggests the industry explore the concept of an industry utility to match and “clear” reference data.

Although the paper includes detailed historical sections on both reference data and risk, oddly, it makes no mention of the failed Global Straight-Through Processing Association (GSTPA), which raised more than EUR 90 million to develop the Transaction Flow Manager (TFM). The central matching concept required a broad community of users for it to become a viable solution – something the GSTPA was unable to secure before it ceased operations just two months after launching the TFM.

Although central matching initially seemed a promising concept, it required a ‘big bang’ approach and a degree of cooperative engagement previously unseen and clearly not supported within the industry. Having had their collective fingers burned by the abandonment of that ambitious project, industry players generally run from any suggestion of revisiting this type of idea.

Weighing in at 88 pages, this paper provides an excellent primer on both reference data and the interaction of data in risk management under Basel II, including a useful historical context, albeit one with a U.S. listed securities bias.

With one of the principal authors a former member of BASIC (the Banking and Securities Industry Committee), formed in the aftermath of the New York Stock Exchange’s ‘paper crisis’ in 1968 and a champion of the CUSIP numbering system, it is little wonder that the paper has a U.S. perspective.

What is unique about the paper is its attempt to provide a business case for firms looking to justify investment in reference data systems. The authors feel the lack of a good business case is “thwarting the recognition of the pervasive nature of the issues and the cost and risk associated with reference data.”

A detailed methodology is used to estimate the direct cost of reference data (people, facilities, data, licenses, etc.), losses (fails and corporate actions) and capital costs.

In the case of capital costs, the authors created a “very preliminary calculation for operational risk capital associated with faulty reference data under the Basel guidelines” for the 15 largest U.S.-based firms.

And the big number is anywhere from $266 million to $600 million per firm per year. The paper details its methodology for companies that wish to do their own calculations.
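
For readers who want to experiment with the arithmetic before working through the full methodology, the sketch below is a hedged, back-of-the-envelope illustration only, not the paper’s model. It combines the three cost buckets described above (direct costs, loss events and a capital charge) using the Basel II Basic Indicator Approach, with the faulty-data attribution share and cost of capital as purely hypothetical parameters.

# Illustrative sketch only, not the paper's methodology: a back-of-the-envelope
# estimate combining direct reference data costs, loss events and a capital
# charge. The 15% factor is the Basel II Basic Indicator Approach alpha; the
# faulty-data share and cost-of-capital figures are purely hypothetical.

ALPHA = 0.15  # Basel II Basic Indicator Approach factor

def op_risk_capital(gross_income_3yr):
    """Basic Indicator Approach: 15% of average positive annual gross income."""
    positive = [gi for gi in gross_income_3yr if gi > 0]
    return ALPHA * sum(positive) / len(positive) if positive else 0.0

def reference_data_cost(gross_income_3yr, direct_cost, loss_events,
                        faulty_data_share=0.10, cost_of_capital=0.10):
    # direct_cost: people, facilities, data and licence fees
    # loss_events: trade fails and corporate action losses
    # faulty_data_share: hypothetical portion of op-risk capital attributed
    #                    to bad reference data
    capital_charge = faulty_data_share * op_risk_capital(gross_income_3yr) * cost_of_capital
    return direct_cost + loss_events + capital_charge

# All figures in USD millions, entirely hypothetical:
print(reference_data_cost([9_000, 10_000, 11_000], direct_cost=120, loss_events=80))

Swapping in a firm’s own gross income, loss history and attribution assumptions gives a first-order feel for the numbers before applying the paper’s more detailed method.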

Where the paper seems to stumble a bit is in its oversimplified approach to a solution for the industry. As noted above, a central matching utility was tried before and failed.

Although this may still be the best way forward for reference data, not examining why previous attempts failed, or addressing industry concerns about the approach (for example, who takes responsibility for financial losses?), unfairly paints the authors as slightly naïve.

Their technical argument supporting a central utility similarly seems to lack depth.

The authors assert that intelligent ‘content-enabled’ multicast networks using XML can be implemented to route reference data easily.
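
To make the idea concrete, here is a minimal, hypothetical sketch of content-based routing, purely illustrative and not the authors’ architecture: subscribers register predicates over message content, and the router forwards each XML reference data message only to the parties whose predicates match.

# Hypothetical illustration of content-based routing of XML reference data.
# Not the authors' design: subscribers register predicates over message
# content and the router delivers each message only where a predicate matches.
import xml.etree.ElementTree as ET

class ContentRouter:
    def __init__(self):
        self.subscriptions = []  # list of (predicate, callback) pairs

    def subscribe(self, predicate, callback):
        # predicate: function taking the parsed XML root and returning True/False
        self.subscriptions.append((predicate, callback))

    def publish(self, xml_message):
        root = ET.fromstring(xml_message)
        for predicate, callback in self.subscriptions:
            if predicate(root):
                callback(root)

# Hypothetical usage: route corporate action notices for a single ISIN
router = ContentRouter()
router.subscribe(
    lambda r: r.tag == "CorporateAction" and r.findtext("ISIN") == "US0378331005",
    lambda r: print("settlement desk notified:", r.findtext("EventType")),
)
router.publish(
    "<CorporateAction><ISIN>US0378331005</ISIN>"
    "<EventType>DIVIDEND</EventType></CorporateAction>"
)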

While these solutions may be appropriate for retail transactions processing ‘WalMart style’ – one example the authors provide – the reality of global financial processing is somewhat more complicated. The U.S. retail goods sector doesn’t necessarily have the same concerns about ever-increasing data volumes, network latency and a morass of securities identifiers or event data, for a start. Or does it? The authors don’t tell us.
Despite these minor points, the paper is a must-read for anyone with responsibilities encompassing reference data. The well-researched report facilitates a “read what you need” approach, with background information provided as separate chapters.

