
Data Management: Time for a New Best Practice Model


By Nigel Pickering, Founder, www.ref-data.com

Firms have been pursuing the ‘data centralisation’ path for many years, yet few, if any, have truly succeeded in delivering a solution that addresses the underlying drivers and objectives.

Many will have concluded that integrating their core data is the route to managing reference and transactional data across the many life-cycle events that pass through their systems and organisation. By consolidating their data they could move tactically towards the significant operational efficiencies and integrated control that are increasingly necessary and that would deliver massive benefits.

Experience has taught us that, by using a new non-intrusive technology approach, it is now viable (and arguably compelling) to achieve the significant improvements that many have already identified but discounted because of earlier cost estimates and risks.

The federated model provides the best route forward

In the social networking arena, the major providers have proven that empowering and providing results for each end-user within a centralised process (i.e. the ‘federated’ model) can deliver massive data integration results. None of the new network models would exist if they needed a centralised warehouse before they were useful to their users.

If, for example, you need better control of entity or counterparty data across many locations and applications, don’t waste time trying to define a ‘golden copy’ repository or reconciling them all. It will take too long, involve too many compromises, and each local team will have many valid objections that compound the overall delivery delay. Instead, provide the means for each local manager to view and manage any exceptions in their local data compared against all other sources.

As this is likely to improve local processes, it is more likely to gain local support. If you incentivise each local site, it is possible to envisage all sites adopting it in parallel and globally integrated results emerging within weeks.
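To make the federated exception view concrete, here is a minimal Python sketch. The record structure, field names and matching rules are purely illustrative assumptions, not any firm’s actual implementation. Each site compares its own records against every other source and sees only its own exceptions; no golden copy is ever built.

from dataclasses import dataclass
from typing import Optional

@dataclass
class CounterpartyRecord:
    # Hypothetical minimal record; real attributes will vary by firm
    entity_id: str
    legal_name: str
    country: str
    lei: Optional[str] = None

def local_exceptions(local, other_sources):
    """Compare one site's counterparty data against all other sources and
    return the discrepancies for that site's manager to review."""
    exceptions = []
    for entity_id, record in local.items():
        for source_name, records in other_sources.items():
            remote = records.get(entity_id)
            if remote is None:
                exceptions.append((entity_id, source_name, "missing in source"))
                continue
            for field in ("legal_name", "country", "lei"):
                mine, theirs = getattr(record, field), getattr(remote, field)
                if mine != theirs:
                    exceptions.append((entity_id, source_name,
                                       f"{field}: local={mine!r}, source={theirs!r}"))
    return exceptions

Because every site runs the same comparison against the shared pool of sources, globally consistent data emerges from many local clean-ups rather than from one central project.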

The federated model works for both Ops and IT

By incorporating your existing core operational data into a single, dynamic store you will be working with factual data, which will encourage ‘straight line’ thinking from your Ops and IT teams. It will not put your production processes at risk, and your IT team will no longer need to spend time producing the contractual-level design specifications that were necessary in the past.

Your Ops teams will be able to work within existing silos while operating in the more effective community model. One large bank that applied this federated approach delivered, within a few weeks, a single online warehouse consolidating its instrument and counterparty reference data and all the different trade types from dozens of internal and external data sources.

By simply adding further data sources, the bank can establish full data lifecycle monitoring of all trades within a fully integrated data reconciliation process, at a fraction of the cost it would previously have incurred.
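As a rough illustration of what that consolidation might look like, the sketch below is again hypothetical: the ‘trade_id’ and ‘status’ fields, the source names and the break rule are assumptions, not the bank’s actual design. It keeps every event as received, tagged with its source, so new feeds can be added without schema changes, and it flags trades whose latest status disagrees across sources.

from collections import defaultdict

class FederatedStore:
    """Minimal generic store: records are kept as received, tagged with their
    source, so further data sources can be added without changing any schema."""
    def __init__(self):
        self.records = defaultdict(list)          # trade_id -> [(source, event)]

    def ingest(self, source, events):
        for event in events:
            self.records[event["trade_id"]].append((source, event))

    def lifecycle_breaks(self):
        """Flag trades whose latest status differs between sources, e.g.
        confirmed in the front office but still pending in settlement."""
        breaks = {}
        for trade_id, tagged in self.records.items():
            latest = {}                           # source -> last status seen
            for source, event in tagged:
                latest[source] = event.get("status")
            if len(set(latest.values())) > 1:
                breaks[trade_id] = latest
        return breaks

store = FederatedStore()
store.ingest("front_office", [{"trade_id": "T1", "status": "confirmed"}])
store.ingest("settlement",   [{"trade_id": "T1", "status": "pending"}])
print(store.lifecycle_breaks())   # {'T1': {'front_office': 'confirmed', 'settlement': 'pending'}}

The point of the generic record is that reconciliation and lifecycle monitoring fall out of the same store, rather than requiring a separate project for each new trade type or feed.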

There is no longer any reason NOT to start

The federated model is no longer constrained by IT processing capacity. Current network processing power and scalability far exceed what is required to record over 500,000 traded transactions per day, each with many events, plus thousands of client and reference data updates per global business day. Indeed, with the network processing power now available, it is viable to consider this model serving many trading partners at once.

Such a model is also not constrained by data storage: most smartphones today could hold many days of data from massive transaction volumes. It is not constrained by the specific data elements either, provided that the platform or tool used is generic and independent of the type of data gathered.
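A back-of-the-envelope check makes the storage point clear. The figures below are illustrative assumptions rather than measurements, but even with generous event sizes a full trading day fits comfortably into a few gigabytes:

# Illustrative capacity arithmetic; all figures are assumptions, not measurements.
trades_per_day = 500_000
events_per_trade = 10              # lifecycle events recorded per trade
bytes_per_event = 1_024            # ~1 KB per recorded event
reference_updates_per_day = 50_000
bytes_per_update = 2_048

daily_bytes = (trades_per_day * events_per_trade * bytes_per_event
               + reference_updates_per_day * bytes_per_update)
print(f"~{daily_bytes / 1e9:.1f} GB per day")    # ~5.2 GB: many days fit on one phone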

If our model is constrained at all in the near term, it is only by a lack of company vision and your IT team’s historically valid reluctance to empower the user. With the right mindset, you can begin today to move towards radically improved data processes without any high-risk investment.

