
Data Management: Time for a New Best Practice Model


By Nigel Pickering, Founder, www.ref-data.com

Firms have been pursuing the ‘data centralisation’ path for many years, yet few, if any, have truly delivered against the underlying drivers and objectives.

Many will have concluded that integrating their core data is the route to managing reference and transactional data across the many life-cycle events that pass through their systems and organisation. By consolidating that data they could make tactical progress towards the significant operational efficiencies and integrated control that are increasingly necessary, and that would deliver substantial benefits.

Experience has taught us that a new, non-intrusive technology approach now makes it viable (and arguably compelling) to achieve the significant improvements that many have already identified but discounted because of earlier cost estimates and risks.

The federated model provides the best route forward

In the social networking arena, the major providers have proven that empowering each end-user, and delivering results to them, within one centralised process (i.e. the ‘federated’ model) can deliver massive data integration results. None of these network models would exist if they had needed a centralised warehouse before being useful to their users.

If, for example, you need better control of entity or counterparty data across many locations and applications, don’t waste time trying to define a ‘golden copy’ repository or to reconcile every source. It will take too long, involve too many compromises, and each local team will have many valid objections that compound the overall delivery delay. Instead, give each local manager the means to view and manage any exceptions in their local data compared against all other sources, as the sketch below illustrates.
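To make that concrete, here is a minimal sketch, in Python, of exception-based comparison against other sources. Everything in it (CounterpartyRecord, find_exceptions, the field names) is a hypothetical illustration, not a description of any particular product:

```python
# Minimal sketch: compare one site's counterparty records against other
# sources and surface exceptions locally, instead of reconciling everything
# into a golden copy first. All names here are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class CounterpartyRecord:
    entity_id: str    # shared identifier across sources, e.g. an LEI
    legal_name: str
    country: str

def find_exceptions(local, other_sources):
    """Yield (entity_id, source, field, local_value, other_value) mismatches."""
    for entity_id, local_rec in local.items():
        for source_name, records in other_sources.items():
            other = records.get(entity_id)
            if other is None:
                yield (entity_id, source_name, "missing", local_rec, None)
                continue
            for field in ("legal_name", "country"):
                local_value = getattr(local_rec, field)
                other_value = getattr(other, field)
                if local_value != other_value:
                    yield (entity_id, source_name, field, local_value, other_value)

# Each local manager reviews only their own site's exceptions:
local = {"549300XYZ": CounterpartyRecord("549300XYZ", "Acme Ltd", "GB")}
others = {"paris": {"549300XYZ": CounterpartyRecord("549300XYZ", "Acme Limited", "GB")}}
for exception in find_exceptions(local, others):
    print(exception)
```

The design point is that each site runs the comparison over its own records, so adoption does not depend on any other site, or a central team, moving first.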

Because this approach improves local processes, it is more likely to win local support. Incentivise each local site and it is possible to envisage all sites adopting it in parallel, with globally integrated results emerging within weeks.

The federated model works for both Ops and IT

By incorporating your existing core operational data into a single, dynamic store you will be working with factual data, which encourages ‘straight line’ thinking from your Ops and IT teams. It will not put your production processes at risk, and your IT team will no longer need to spend time producing the contractual-level design specifications that were necessary in the past.

Your Ops teams will be able to keep working within their existing silos while operating in the more effective community model. One large bank that applied this federated approach delivered a single online warehouse, consolidating its instrument and counterparty reference data and all the different trade types from dozens of internal and external data sources, within a few weeks.

By simply adding further data sources, the bank can extend this to full lifecycle monitoring of all trades within a fully integrated data reconciliation process, at a fraction of the cost it would previously have incurred. A generic event store of the kind sketched below makes that incremental extension straightforward.
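The article does not describe the bank’s implementation, but the idea of a generic store to which sources are simply added can be sketched as follows; TradeEventStore and all other names are invented for illustration:

```python
# Illustrative sketch: a generic, source-agnostic event store. Every
# lifecycle event lands in one structure keyed by trade ID, so adding a
# new data source is just another feed, and unreconciled trades fall out
# as exceptions. Names are hypothetical.
from collections import defaultdict

class TradeEventStore:
    def __init__(self):
        # trade_id -> list of (source, event_type, payload)
        self._events = defaultdict(list)

    def ingest(self, source, trade_id, event_type, payload):
        self._events[trade_id].append((source, event_type, payload))

    def unreconciled(self, expected_sources):
        """Yield trades not yet reported by every expected source."""
        for trade_id, events in self._events.items():
            seen = {source for source, _, _ in events}
            missing = expected_sources - seen
            if missing:
                yield trade_id, missing

store = TradeEventStore()
store.ingest("front_office", "T1001", "NEW", {"qty": 500})
store.ingest("back_office", "T1001", "CONFIRMED", {"qty": 500})
store.ingest("front_office", "T1002", "NEW", {"qty": 250})

for trade_id, missing in store.unreconciled({"front_office", "back_office"}):
    print(trade_id, "not yet seen in", missing)   # T1002 not yet seen in {'back_office'}
```

Because the store does not care what the payload contains, monitoring a new trade type or lifecycle event becomes a data change rather than a schema project.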

There is no longer any reason NOT to start

The federated model is no longer constrained by IT processing capacity. Current network processing power and scalability far exceed what is required to record over 500,000 traded transactions per day, each with many events, plus thousands of client and reference data updates across a global day. Indeed, with the network processing power available it is viable to consider this model serving many trading partners at once.

Such a model is not constrained by data storage either: as the rough calculation below suggests, most smartphones today could hold many days of data even at massive transaction volumes. Nor is it constrained by the specific data elements, provided that the platform or tool used is generic and independent of the type of data gathered.
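A back-of-envelope calculation supports both the throughput and the storage claims. The per-event size and event count below are assumptions chosen for illustration, not figures from the article:

```python
# Rough capacity check; sizes are assumptions, not sourced figures.
trades_per_day = 500_000        # volume cited in the article
events_per_trade = 10           # assumption: lifecycle events per trade
bytes_per_event = 1_024         # assumption: ~1 KB per recorded event

events_per_day = trades_per_day * events_per_trade
daily_bytes = events_per_day * bytes_per_event

print(f"{events_per_day / 86_400:.0f} events/sec")   # ~58 events/sec on average
print(f"{daily_bytes / 1e9:.1f} GB/day")              # ~5.1 GB/day
```

At around 5 GB a day under these assumptions, a 128 GB device could hold several weeks of full history, which is the sense in which most smartphones could hold many days of data.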

If the model is constrained at all in the near term, it is only by a lack of company vision and by your IT team’s historically valid reluctance to empower the user. With the right mindset, you can begin moving towards radically improved data processes today, without any high-risk investment.

