The leading knowledge platform for the financial technology industry

A-Team Insight Blogs

New York Data Management Summit: Mapping Developments in Enterprise Architecture

Enterprise architecture for data management is moving on, leaving behind traditional enterprise data warehouses and embracing the potential of semantic data models, decoupling systems, building on open source platforms, and deploying hybrid solutions.

The limitations of existing architectures for data management and the possibilities of emerging ideas were discussed at the fall A-Team Group Data Management Summit in New York. A-Team chief content officer, Andrew Delaney, moderated the discussion, which included experts Amir Halfon from start-up ScalingData; Rohit Mathur, a partner at data management advisory Element22; and Thomas Mavroudis, Americas regional data manager at HSBC Technology and Services.

Delaney kicked off the discussion by describing the current state of enterprise architecture, which typically includes a large number of systems that create complexity and maintenance costs, but deliver limited value. Halfon picked up on this, saying: “There are lots of systems and lots of data, but a lot of data is left on the floor and its value is not used. The unified data model is no longer viable and we need to take a top-down approach to enterprise architecture for data management that includes an holistic view of data including non-relational data.”

Mathur concurred with the need for change, saying: “We need to look beyond simple solutions as data management problems can no longer be solved with just a large data warehouse. Data needs to be understood and be accessible to enterprise applications, so it is important to put it in context, in a semantic or metadata model rather than in a data model.”

Turning to the benefits of reducing complexity in enterprise architecture, Mavroudis said: “Reducing complexity can certainly save costs in terms of support and people. It can also make data more traceable and easier to manage. Most banks approach this by centralising data, reference data first and then data that is required by key applications.”

Halfon noted that reducing complexity can deliver not only cost savings, but also value as operational tasks are reduced and more value can be prised from data. He added: “Regulations are onerous, but they are also an opportunity to become more agile and get more value out of data from a marketing and new products perspective.”

Mathur said simplification can reduce the redundancy of systems doing the same things, reduce costs and give banks the ability to respond to changing requirements. He also pointed to the value of conformity underpinned by standards that dictate how data models are defined and interpreted, and introduced decoupling, saying: “The ability to decouple systems and applications needs to be built into enterprise architecture as without it there is significant dependency across layers that adds to the complexity.” Halfon agreed, echoing Mathur’s view that data also needs to be decoupled from schemas, with semantic and metadata models being used rather than traditional relational database schemas.

From a user’s standpoint, Mavroudis explained: “Semantic models are not useful for all data, but they are useful for key attributes. I would map 200 to 300 of the most important attributes to a model and use it consistently to identify data that is valuable.”
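The mapping Mavroudis describes can be sketched in code. The following is a minimal illustration, not anything presented at the summit: a small shared semantic model to which each source system maps its own field names, so data is decoupled from any one relational schema. All concept names, source systems and fields here are hypothetical.

```python
# Shared semantic model: concept name -> definition. In practice this
# would cover the 200 to 300 most important attributes, not three.
SEMANTIC_MODEL = {
    "legal_entity_id": "Unique identifier of the counterparty",
    "instrument_id": "Identifier of the traded instrument",
    "notional": "Notional amount of the trade",
}

# Per-source mappings: each source keeps its own schema; only this
# mapping layer knows how local fields relate to shared concepts.
SOURCE_MAPPINGS = {
    "trading_system": {
        "cpty_lei": "legal_entity_id",
        "isin": "instrument_id",
        "notional_usd": "notional",
    },
    "risk_system": {
        "counterparty": "legal_entity_id",
        "instr": "instrument_id",
        "amount": "notional",
    },
}

def to_semantic(source: str, record: dict) -> dict:
    """Translate a source-specific record into semantic-model terms,
    keeping only the mapped (i.e. identified as valuable) attributes."""
    mapping = SOURCE_MAPPINGS[source]
    return {concept: record[field]
            for field, concept in mapping.items() if field in record}

trade = {"cpty_lei": "5493001KJTIIGC8Y1R12", "isin": "US0378331005",
         "notional_usd": 1_000_000, "desk": "NY1"}
# Unmapped fields such as "desk" simply fall outside the semantic view.
print(to_semantic("trading_system", trade))
```

Consistent use of one mapping layer like this is what lets the same attribute be identified as valuable wherever it appears, regardless of how each system names it.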

If these are some of the technological concepts moving enterprise architecture forward, Delaney questioned whether they should be built or bought. Mathur responded: “There are more options than just building or buying. You can rent, use service providers, or borrow, perhaps open source tools that you can build on. But despite the number of options and the desires of architects, you must consider business requirements and skills within the organisation. When considering skills, it is important to think about their long-term interplay with new technologies and not just keep building what you are building today.”

Looking at where to start development, Halfon said: “A good start has already been made with an organisational focus on data management. The appointment of chief data officers with budget and the ability to make change is also important. I like the notion of borrowing from the open source space as there is much to learn from the open source community, there are tools and practices that can be adopted, and there is no vendor lock in. We need to reverse the traditional model of an enterprise data warehouse built with relational technology and sourced from a large vendor with a closed stack, and consider open source solutions, infrastructure that scales horizontally and holistic perspectives that are less tightly coupled to products and vendors.”

Moving on to data management delivery models, Delaney questioned whether outsourcing options, such as those provided by utilities and managed services, offer practical solutions. Halfon said these types of outsourcing are becoming practical, particularly in the reference data space, but are not suited to all types of data, making hybrid models for data management the most likely outcome. Mathur qualified the potential benefits of outsourcing, saying: “Once a managed service is working, it can reduce complexity, but in the short term, a managed service adds to integration challenges and will not reduce costs or provide benefits if it is plugged in just as it is. The need is to create a more generic integration layer that can support a number of managed services.”
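One common way to realise the generic integration layer Mathur describes is the adapter pattern: applications depend on a single contract, and each managed service is wrapped in a thin adapter that translates its responses into the firm’s canonical shape. The sketch below is an assumption about how such a layer might look; the vendor names and record fields are invented for illustration.

```python
from abc import ABC, abstractmethod

class ReferenceDataService(ABC):
    """Common contract every managed service adapter must satisfy."""
    @abstractmethod
    def get_entity(self, identifier: str) -> dict: ...

class VendorAAdapter(ReferenceDataService):
    def get_entity(self, identifier: str) -> dict:
        # In practice this would call Vendor A's API and translate
        # its response into the canonical record shape.
        return {"id": identifier, "name": "ACME CORP", "source": "vendor_a"}

class VendorBAdapter(ReferenceDataService):
    def get_entity(self, identifier: str) -> dict:
        return {"id": identifier, "name": "ACME CORP", "source": "vendor_b"}

class IntegrationLayer:
    """Routes requests to whichever service is configured, so
    applications never depend on a specific provider."""
    def __init__(self, service: ReferenceDataService):
        self.service = service

    def lookup(self, identifier: str) -> dict:
        return self.service.get_entity(identifier)

# Swapping Vendor A for Vendor B requires no changes to applications,
# only a different adapter behind the same integration layer.
layer = IntegrationLayer(VendorAAdapter())
print(layer.lookup("LEI123"))
```

This is what makes a managed service more than something "plugged in just as it is": the integration cost is paid once in the layer, not once per consuming application.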

Considering how to ensure flexibility within enterprise architecture, Mavroudis said the ability to decouple not just systems and applications, but also data sources, is key. Mathur picked up this point and concluded: “Building flexibility into an architecture that includes legacy systems is counterintuitive as more layers add to the complexity of the architecture, but the layers are needed for flexibility and to decouple systems and applications so that changes can be made quickly. Architects need to work with the project team on flexibility and not try to add it as an afterthought.”
