About a-team Marketing Services
The leading knowledge platform for the financial technology industry

A-Team Insight Blogs

New York Data Management Summit: Mapping Developments in Enterprise Architecture

Enterprise architecture for data management is moving on, leaving behind traditional enterprise data warehouses and embracing the potential of semantic data models, decoupling systems, building on open source platforms, and deploying hybrid solutions.

The limitations of existing architectures for data management and the possibilities of emerging ideas were discussed at the fall A-Team Group Data Management Summit in New York. A-Team chief content officer, Andrew Delaney, moderated the discussion, which included experts Amir Halfon from start-up ScalingData; Rohit Mathur, a partner at data management advisory Element22; and Thomas Mavroudis, Americas regional data manager at HSBC Technology and Services.

Delaney kicked off the discussion by describing the current state of enterprise architecture, which typically includes a large number of systems that create complexity and maintenance costs, but delivers limited value. Halfon picked up on this, saying: “There are lots of systems and lots of data, but a lot of data is left on the floor and its value is not used. The unified data model is no longer viable and we need to take a top-down approach to enterprise architecture for data management that includes a holistic view of data including non-relational data.”

Mathur concurred with the need for change, saying: “We need to look beyond simple solutions as data management problems can no longer be solved with just a large data warehouse. Data needs to be understood and be accessible to enterprise applications, so it is important to put it in context, in a semantic or metadata model rather than in a data model.”

Turning to the benefits of reducing complexity in enterprise architecture, Mavroudis said: “Reducing complexity can certainly save costs in terms of support and people. It can also make data more traceable and easier to manage. Most banks approach this by centralising data, reference data first and then data that is required by key applications.”

Halfon noted that reducing complexity can deliver not only cost savings, but also value as operational tasks are reduced and more value can be prised from data. He added: “Regulations are onerous, but they are also an opportunity to become more agile and get more value out of data from a marketing and new products perspective.”

Mathur said simplification can reduce the redundancy of systems doing the same things, reduce costs and give banks the ability to respond to changing requirements. He also pointed to the value of conformity underpinned by standards that dictate how data models are defined and interpreted, and introduced decoupling, saying: “The ability to decouple systems and applications needs to be built into enterprise architecture as without it there is significant dependency across layers that adds to the complexity.” Halfon agreed and reflected Mathur’s view that data also needs to be decoupled from schemas, with semantic and metadata models being used rather than traditional relational database schemas.
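The decoupling Mathur and Halfon describe can be pictured as an abstraction layer between applications and storage. A minimal sketch, using hypothetical names (`InstrumentStore`, `RelationalStore`, `enrich` are illustrative, not any vendor's API): applications code against an interface, so the underlying schema or technology can be swapped without touching callers.

```python
from abc import ABC, abstractmethod

class InstrumentStore(ABC):
    """Abstract interface: applications depend on this, not on any
    particular schema or storage technology."""
    @abstractmethod
    def get(self, identifier: str) -> dict: ...

class RelationalStore(InstrumentStore):
    """One concrete backend; could be replaced by a document or graph
    store without changing any consuming application."""
    def __init__(self, rows: dict):
        self._rows = rows

    def get(self, identifier: str) -> dict:
        return self._rows[identifier]

def enrich(store: InstrumentStore, identifier: str) -> dict:
    # The application sees only the interface, which is what keeps
    # the layers decoupled.
    record = dict(store.get(identifier))
    record["source"] = "abstracted"
    return record
```

The cost of the extra layer is exactly the trade-off Mathur raises later: more layers add complexity, but they are what allow changes to be made quickly.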

From a user’s standpoint, Mavroudis explained: “Semantic models are not useful for all data, but they are useful for key attributes. I would map 200 to 300 of the most important attributes to a model and use it consistently to identify data that is valuable.”
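Mavroudis's approach of mapping only key attributes can be sketched very simply: a translation table from physical column names in each system onto shared semantic concepts. The column and concept names below are hypothetical illustrations, not drawn from any real model.

```python
# Hypothetical mapping: physical column names from different systems
# resolve to the same shared semantic concept, so the key attributes
# can be identified consistently wherever the data lives.
SEMANTIC_MAP = {
    "cpty_id":      "counterparty.identifier",  # system A's column
    "counterparty": "counterparty.identifier",  # system B's column
    "ccy":          "trade.currency",
    "currency_cd":  "trade.currency",
}

def to_semantic(record: dict) -> dict:
    """Rename a raw record's keys to semantic names; attributes
    outside the mapped set are simply dropped."""
    return {SEMANTIC_MAP[k]: v for k, v in record.items() if k in SEMANTIC_MAP}
```

In Mavroudis's terms, the mapping would cover only the 200 to 300 most important attributes; everything else stays outside the model.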

If these are some of the technological concepts moving enterprise architecture forward, Delaney questioned whether they should be built or bought. Mathur responded: “There are more options than just building or buying. You can rent, use service providers, or borrow, perhaps open source tools that you can build on. But despite the number of options and the desires of architects, you must consider business requirements and skills within the organisation. When considering skills, it is important to think about their long-term interplay with new technologies and not just keep building what you are building today.”

Looking at where to start development, Halfon said: “A good start has already been made with an organisational focus on data management. The appointment of chief data officers with budget and the ability to make change is also important. I like the notion of borrowing from the open source space as there is much to learn from the open source community, there are tools and practices that can be adopted, and there is no vendor lock in. We need to reverse the traditional model of an enterprise data warehouse built with relational technology and sourced from a large vendor with a closed stack, and consider open source solutions, infrastructure that scales horizontally and holistic perspectives that are less tightly coupled to products and vendors.”

Moving on to data management delivery models, Delaney questioned whether outsourcing options, such as those provided by utilities and managed services, offer practical solutions. Halfon said these types of outsourcing are becoming practical, particularly in the reference data space, but are not suited to all types of data, making hybrid models for data management the most likely outcome. Mathur qualified the potential benefits of outsourcing, saying: “Once a managed service is working, it can reduce complexity, but in the short-term, a managed service adds to integration challenges and will not reduce costs or provide benefits if it is plugged in just as it is. The need is to create a more generic integration layer that can support a number of managed services.”
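Mathur's generic integration layer is essentially an adapter pattern: each managed service normalises its own payload into a common record format, so adding a new service does not disturb consuming applications. A minimal sketch with invented names (`VendorAAdapter` and its payload are stand-ins, not a real service):

```python
class ManagedServiceAdapter:
    """Base adapter: each managed service translates its native payload
    into the firm's common record format behind this interface."""
    def fetch(self, key: str) -> dict:
        raise NotImplementedError

class VendorAAdapter(ManagedServiceAdapter):
    def fetch(self, key: str) -> dict:
        # A real adapter would call the service's API; a stub payload
        # stands in here to show the normalisation step.
        raw = {"Id": key, "Ccy": "USD"}
        return {"identifier": raw["Id"], "currency": raw["Ccy"]}

class IntegrationLayer:
    """The generic layer: services are registered by name, so a new
    managed service plugs in without changing downstream code."""
    def __init__(self):
        self._adapters = {}

    def register(self, name: str, adapter: ManagedServiceAdapter):
        self._adapters[name] = adapter

    def fetch(self, service: str, key: str) -> dict:
        return self._adapters[service].fetch(key)
```

This is also where Mathur's short-term caveat bites: building the layer is an integration cost up front, paid before any managed service delivers savings.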

Considering how to ensure flexibility within enterprise architecture, Mavroudis said the ability to decouple not just systems and applications, but also data sources, is key. Mathur picked up this point and concluded: “Building flexibility into an architecture that includes legacy systems is counterintuitive as more layers add to the complexity of the architecture, but the layers are needed for flexibility and to decouple systems and applications so that changes can be made quickly. Architects need to work with the project team on flexibility and not try to add it as an afterthought.”
