The knowledge platform for the financial technology industry

A-Team Insight Blogs

Divisions Between Market and Reference Data Causing Issues for Transparency, Says GoldenSource


A major hurdle to achieving a consistent view of pricing, positions and exposure lies in the divide between market data feeds and reference data repositories, according to GoldenSource, a provider of enterprise data management (EDM) solutions.

“By definition, enterprise risk management and reporting should go front to back as well as across assets,” says Gert Raeves, vice president of strategic business development at GoldenSource. “A trusted data environment for interdependent front and back office processes now relies on the centralisation of market and reference data. With market data, there is a convergence point where a snapshot of the market can be captured and validated for downstream applications.”

Pricing mechanisms in the front office and valuations for P&L in the back office often depend on the same price and rate data, yet both areas derive this information from disparate sources. According to research from consultancy A-Team Group, 87% of risk managers are interested in centralising market and reference data to address issues including volatility, accuracy, consistency and the elimination of asset-based silos.

Unprecedented market movements are impacting portfolios, customers, counterparties and issuers faster than ever, meaning back office risk and P&L systems can no longer wait until end of day to access critical market data. Regulators are expecting much of the same data used in the trading decision process to be extended to valuation, P&L and risk management, thus bringing the same issues around accuracy and consistency to both data areas.

Regulatory and client demands for insight into how a value was derived mean that golden copy pricing needs to extend beyond static reference sets and include multiple market data sources, says the vendor. Furthermore, siloed desks have traditionally been blamed for the inability to get a complete view of risk and exposure; these silos are now being broken down, yet the divide between market and reference data persists.
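The idea of a golden copy price built from multiple market data sources, with lineage recording how the value was derived, can be sketched roughly as follows. This is a minimal illustration, not a GoldenSource implementation: the feed names, the median-based composition and the tolerance check are all hypothetical choices.

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class VendorQuote:
    source: str      # hypothetical vendor feed name
    price: float

@dataclass
class GoldenPrice:
    value: float
    lineage: list    # sources that contributed, for audit/derivation queries

def build_golden_price(quotes, tolerance=0.05):
    """Validate vendor quotes and compose a golden copy price.

    Quotes further than `tolerance` (as a fraction) from the cross-vendor
    median are excluded as outliers; the surviving sources are recorded as
    lineage so downstream users can see how the value was derived.
    """
    mid = median(q.price for q in quotes)
    valid = [q for q in quotes if abs(q.price - mid) / mid <= tolerance]
    return GoldenPrice(
        value=median(q.price for q in valid),
        lineage=[q.source for q in valid],
    )

quotes = [VendorQuote("FeedA", 100.2), VendorQuote("FeedB", 100.4),
          VendorQuote("FeedC", 115.0)]   # FeedC is a stale/outlier print
golden = build_golden_price(quotes)
```

In this sketch the outlying quote is dropped, the golden value is the median of the remaining quotes, and the lineage answers the regulatory question of which sources the value came from.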
