
New York Data Management Summit: Mapping Developments in Enterprise Architecture


Enterprise architecture for data management is moving on, leaving behind traditional enterprise data warehouses and embracing the potential of semantic data models, decoupling systems, building on open source platforms, and deploying hybrid solutions.

The limitations of existing architectures for data management and the possibilities of emerging ideas were discussed at the fall A-Team Group Data Management Summit in New York. A-Team chief content officer, Andrew Delaney, moderated the discussion, which included experts Amir Halfon from start-up ScalingData; Rohit Mathur, a partner at data management advisory Element22; and Thomas Mavroudis, Americas regional data manager at HSBC Technology and Services.

Delaney kicked off the discussion by describing the current state of enterprise architecture, which typically includes a large number of systems that create complexity and maintenance costs but delivers limited value. Halfon picked up on this, saying: “There are lots of systems and lots of data, but a lot of data is left on the floor and its value is not used. The unified data model is no longer viable and we need to take a top-down approach to enterprise architecture for data management that includes a holistic view of data, including non-relational data.”

Mathur concurred with the need for change, saying: “We need to look beyond simple solutions as data management problems can no longer be solved with just a large data warehouse. Data needs to be understood and be accessible to enterprise applications, so it is important to put it in context, in a semantic or metadata model rather than in a data model.”
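
To make the contrast concrete, the sketch below (purely illustrative and not drawn from the panel; identifiers and attribute names are hypothetical) carries a data point as subject-predicate-object statements with its business context attached, rather than as an anonymous column in a fixed relational schema.

```python
# Illustrative sketch only: a data point expressed as triples with context,
# rather than bound to a fixed relational schema. All names are hypothetical.

from dataclasses import dataclass


@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    obj: str


# The same identifier, carried with its business meaning and lineage.
facts = [
    Triple("instrument/XS0000000000", "hasIdentifierType", "ISIN"),
    Triple("instrument/XS0000000000", "identifies", "security/ExampleCorp5Y"),
    Triple("instrument/XS0000000000", "sourcedFrom", "system/legacy-refdata"),
]


def describe(subject: str) -> dict:
    """Answer questions about a subject without knowing the producing system's schema."""
    return {t.predicate: t.obj for t in facts if t.subject == subject}


print(describe("instrument/XS0000000000"))
```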

Turning to the benefits of reducing complexity in enterprise architecture, Mavroudis said: “Reducing complexity can certainly save costs in terms of support and people. It can also make data more traceable and easier to manage. Most banks approach this by centralising data, reference data first and then data that is required by key applications.”

Halfon noted that reducing complexity can deliver not only cost savings, but also value as operational tasks are reduced and more value can be prised from data. He added: “Regulations are onerous, but they are also an opportunity to become more agile and get more value out of data from a marketing and new products perspective.”

Mathur said simplification can reduce the redundancy of systems doing the same things, reduce costs and give banks the ability to respond to changing requirements. He also pointed to the value of conformity underpinned by standards that dictate how data models are defined and interpreted, and introduced decoupling, saying: “The ability to decouple systems and applications needs to be built into enterprise architecture as without it there is significant dependency across layers that adds to the complexity.” Halfon agreed and reflected Mathur’s view that data also needs to be decoupled from schemas, with semantic and metadata models being used rather than traditional relational database schemas.
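
As a rough illustration of that decoupling (a sketch under assumed names, not anything presented at the event), the snippet below lets downstream applications subscribe to a published topic and payload rather than depend directly on the internals of the system that produced the data; a real deployment would use a message broker or service bus rather than an in-process registry.

```python
# Illustrative sketch only: a tiny in-process publish/subscribe layer used to
# decouple a data producer from its consumers. Topic and field names are hypothetical.

from collections import defaultdict
from typing import Callable, Dict, List

Handler = Callable[[dict], None]


class EventBus:
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Handler]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Handler) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(payload)


bus = EventBus()

# A risk application consumes counterparty updates without knowing which
# upstream system produced them.
bus.subscribe("counterparty.updated", lambda p: print("risk app saw", p["entity"]))

# The producing system publishes to the topic and has no knowledge of its
# consumers, so either side can change independently.
bus.publish("counterparty.updated", {"entity": "ExampleCorp", "rating": "A-"})
```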

From a user’s standpoint, Mavroudis explained: “Semantic models are not useful for all data, but they are useful for key attributes. I would map 200 to 300 of the most important attributes to a model and use it consistently to identify data that is valuable.”
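
A hedged sketch of how such a mapping might look (attribute and source names are invented for illustration): a small dictionary maps system-specific field names onto canonical attributes, and a helper normalises incoming records against it so the same key attributes can be identified consistently across sources.

```python
# Illustrative sketch only: mapping source-system field names to a small set of
# canonical attributes, in the spirit of modelling a few hundred key attributes
# consistently. All names are hypothetical.

CANONICAL_MAP = {
    "legacy_trading": {"sec_id": "isin", "ccy": "currency", "cpty": "counterparty"},
    "vendor_feed": {"ISIN": "isin", "Currency": "currency", "Party": "counterparty"},
}


def normalise(source: str, record: dict) -> dict:
    """Return only the mapped key attributes, under their canonical names."""
    mapping = CANONICAL_MAP[source]
    return {canonical: record[field] for field, canonical in mapping.items() if field in record}


print(normalise("vendor_feed", {"ISIN": "XS0000000000", "Currency": "USD", "Party": "ExampleCorp"}))
# {'isin': 'XS0000000000', 'currency': 'USD', 'counterparty': 'ExampleCorp'}
```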

If these are some of the technological concepts moving enterprise architecture forward, Delaney questioned whether they should be built or bought. Mathur responded: “There are more options than just building or buying. You can rent, using service providers, or borrow, perhaps open source tools that you can build on. But despite the number of options and the desires of architects, you must consider business requirements and skills within the organisation. When considering skills, it is important to think about their long-term interplay with new technologies and not just keep building what you are building today.”

Looking at where to start development, Halfon said: “A good start has already been made with an organisational focus on data management. The appointment of chief data officers with budget and the ability to make change is also important. I like the notion of borrowing from the open source space as there is much to learn from the open source community, there are tools and practices that can be adopted, and there is no vendor lock-in. We need to reverse the traditional model of an enterprise data warehouse built with relational technology and sourced from a large vendor with a closed stack, and consider open source solutions, infrastructure that scales horizontally and holistic perspectives that are less tightly coupled to products and vendors.”

Moving on to data management delivery models, Delaney questioned whether outsourcing options, such as those provided by utilities and managed services, offer practical solutions. Halfon said these types of outsourcing are becoming practical, particularly in the reference data space, but are not suited to all types of data, making hybrid models for data management the most likely outcome. Mathur qualified the potential benefits of outsourcing, saying: “Once a managed service is working, it can reduce complexity, but in the short term, a managed service adds to integration challenges and will not reduce costs or provide benefits if it is plugged in just as it is. The need is to create a more generic integration layer that can support a number of managed services.”
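
One way to picture the generic integration layer Mathur describes (a sketch under assumed service and method names, not a description of any actual utility or vendor API) is a single interface that internal applications code against, with a thin adapter per managed service behind it; adding or swapping a provider then means writing another adapter rather than re-integrating every consuming application.

```python
# Illustrative sketch only: a generic integration layer for reference data.
# The interface and both adapters are hypothetical; real providers would be
# called over their own APIs inside the adapters.

from abc import ABC, abstractmethod


class ReferenceDataService(ABC):
    """The one interface internal applications integrate against."""

    @abstractmethod
    def get_security(self, isin: str) -> dict: ...


class UtilityAdapter(ReferenceDataService):
    def get_security(self, isin: str) -> dict:
        # Would call an industry utility here.
        return {"isin": isin, "source": "utility"}


class ManagedServiceAdapter(ReferenceDataService):
    def get_security(self, isin: str) -> dict:
        # Would call a vendor's managed service here.
        return {"isin": isin, "source": "managed-service"}


def enrich_trade(trade: dict, refdata: ReferenceDataService) -> dict:
    # The application neither knows nor cares which provider sits behind the layer.
    return {**trade, **refdata.get_security(trade["isin"])}


print(enrich_trade({"isin": "XS0000000000", "quantity": 100}, ManagedServiceAdapter()))
```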

Considering how to ensure flexibility within enterprise architecture, Mavroudis said the ability to decouple not just systems and applications, but also data sources, is key. Mathur picked up this point and concluded: “Building flexibility into an architecture that includes legacy systems is counterintuitive as more layers add to the complexity of the architecture, but the layers are needed for flexibility and to decouple systems and applications so that changes can be made quickly. Architects need to work with the project team on flexibility and not try to add it as an afterthought.”

