About a-team Marketing Services
The knowledge platform for the financial technology industry

A-Team Insight Blogs

PolarLake Launches Standalone Reference Data Reconciliation Solution
Provider of reference data distribution products PolarLake has launched a standalone reference data reconciliation solution aimed at fixing the problem of poor reference data quality in downstream systems. The vendor has created the new solution as part of its plans to develop modular solutions for the market and extend its client reach.

According to PolarLake, the new solution allows organisations to reconcile centralised golden copy with the distributed copies of reference data in downstream systems.

“The key driver for this solution is really the question of data quality once you start distributing reference data. A lot of the focus up until quite recently has been on creating golden copy and building data warehouses, but I think that as the industry has matured, people are realising that is only a part of the problem,” explains John Randles, CEO of PolarLake.

Once that reference data is integrated into downstream applications – fund accounting systems, order management systems and portfolio management systems – the golden copy can be corrupted in the process, he says. There is also a significant risk of intervention by other users and processes, which can mean that data you thought was clean ends up as dirty data.

“Our reconciliation solution is really targeting the control process around that data. Whether they use PolarLake for distribution or they use any other mechanism, our solution gives the business management the confidence that what they are distributing still retains its data integrity. This is critical to ensuring the success of enterprise data management,” claims Randles.
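The control process Randles describes amounts to comparing each downstream copy of a record against the golden copy and flagging any divergence. A minimal sketch of that idea (not PolarLake's implementation – the record layout, field names and values here are hypothetical):

```python
# Hypothetical sketch: reconciling a "golden copy" security record against
# copies held in downstream systems, flagging any field that has drifted.

golden_copy = {
    "US0378331005": {"issuer": "Apple Inc", "currency": "USD", "coupon": None},
}

downstream_copies = {
    "fund_accounting": {
        "US0378331005": {"issuer": "Apple Inc", "currency": "USD", "coupon": None},
    },
    "order_management": {
        # the issuer name has been truncated somewhere in this system
        "US0378331005": {"issuer": "Apple", "currency": "USD", "coupon": None},
    },
}

def reconcile(golden, copies):
    """Return a list of breaks: (system, identifier, field, golden_value, copy_value)."""
    breaks = []
    for system, records in copies.items():
        for identifier, golden_record in golden.items():
            copy_record = records.get(identifier)
            if copy_record is None:
                breaks.append((system, identifier, "<missing>", golden_record, None))
                continue
            for field, value in golden_record.items():
                if copy_record.get(field) != value:
                    breaks.append((system, identifier, field, value, copy_record.get(field)))
    return breaks

for b in reconcile(golden_copy, downstream_copies):
    print(b)
```

A real system would of course have to cope with millions of records, delivery latency and field-level tolerance rules, but the break report above is the essence of the control Randles is describing.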

PolarLake has been working on this project for over a year and has evolved the offering from its Reference Data Distribution (RDD) solution into a standalone product. “We engaged with a number of customers with particular problems around guaranteeing the quality of distributed data and we worked with them on the specifications. We went through a number of versions of the solution before we announced it as a standalone offering. We defined and implemented the solution and built enhancements on top of that after feedback from these clients. We then defined a set of business processes around that to help clients validate their distribution and ensure they have good quality data throughout the trade lifecycle.”

One of the key reasons the vendor chose to produce a standalone solution was to meet the needs of firms that have distribution problems but, having already invested heavily in their data infrastructure, want specific controls around data distribution rather than a wholesale replacement, explains Randles.

“The key challenge in producing a solution like this is the ability to deal with multiple formats and multiple ways of describing what may be the same piece of data, and putting them all into a common format where you can reconcile them. Reconciliation is a well-known process and the industry has been doing it for years, but the challenge with reference data is dealing with these irregular data structures and formats,” he says.

“There are 101 reconciliation systems out there but none of them are built in particular for the reference data problem. What we found was that a lot of firms were struggling with the more traditional reconciliation systems to get them to deal with the complexities of reference data. They were spending a long time reconfiguring non-purpose built solutions.”
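The normalisation step Randles describes – mapping each system's field names and value conventions onto a common model before comparing records – can be sketched roughly as follows. The field mappings and normalisation rules here are invented for illustration:

```python
# Hypothetical sketch of normalising records from two systems that describe
# the same reference data differently, so they can be reconciled.

FIELD_MAP = {
    "system_a": {"Ccy": "currency", "IssuerName": "issuer"},
    "system_b": {"currency_code": "currency", "issuer": "issuer"},
}

VALUE_NORMALISERS = {
    "currency": str.upper,          # e.g. "usd" -> "USD"
    "issuer": lambda v: v.strip(),  # trim stray whitespace
}

def to_common_format(system, record):
    """Map a system-specific record onto common field names and values."""
    common = {}
    for raw_field, value in record.items():
        field = FIELD_MAP[system].get(raw_field)
        if field is None:
            continue  # field not part of the common model
        normalise = VALUE_NORMALISERS.get(field, lambda v: v)
        common[field] = normalise(value)
    return common

a = to_common_format("system_a", {"Ccy": "usd", "IssuerName": "Apple Inc "})
b = to_common_format("system_b", {"currency_code": "USD", "issuer": "Apple Inc"})
print(a == b)  # the two records agree once both are in the common format
```

Generic reconciliation tools typically assume both sides already share a structure; Randles' point is that for reference data, this mapping layer is where most of the configuration effort goes.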

PolarLake has two customers currently using the system and, according to Randles, both are very happy with it. They are getting visibility reports on data quality and effectiveness, and on how data becomes corrupted in downstream systems, much more readily than before, he says.

“The problems that were caught before were caught late in the trade lifecycle but now they can be proactively caught early in the cycle. This differentiates us from a generic integration solution by providing a proactive solution to catch problems before they occur,” Randles contends.

He believes the standalone nature of the solution allows customers to take components of PolarLake’s data offering without having to replace their entire system. It is compatible with any other data distribution mechanism, so it can be used with home-grown solutions on an existing middleware platform, he adds.

“It depends on where people are in their EDM lifecycle. If they are at the start of the process they may look for an end-to-end data distribution mechanism, but if they have put in an infrastructure and finished the integration, they can use our modular solution. There is a huge appetite for a modular approach to different value-add components that people can use around data distribution,” he continues.

To this end, PolarLake has a pipeline of different announcements that will address various challenges around distribution and integration of reference data in a modular fashion, which it will be releasing throughout the rest of the year, says Randles.
