The leading knowledge platform for the financial technology industry

A-Team Insight Blogs

New York Data Management Summit: Mapping Developments in Enterprise Architecture

Enterprise architecture for data management is moving on, leaving behind traditional enterprise data warehouses and embracing the potential of semantic data models, decoupling systems, building on open source platforms, and deploying hybrid solutions.

The limitations of existing architectures for data management and the possibilities of emerging ideas were discussed at the fall A-Team Group Data Management Summit in New York. A-Team chief content officer, Andrew Delaney, moderated the discussion, which included experts Amir Halfon from start-up ScalingData; Rohit Mathur, a partner at data management advisory Element22; and Thomas Mavroudis, Americas regional data manager at HSBC Technology and Services.

Delaney kicked off the discussion by describing the current state of enterprise architecture, which typically includes a large number of systems that create complexity and maintenance costs but deliver limited value. Halfon picked up on this, saying: “There are lots of systems and lots of data, but a lot of data is left on the floor and its value is not used. The unified data model is no longer viable and we need to take a top-down approach to enterprise architecture for data management that includes a holistic view of data, including non-relational data.”

Mathur concurred with the need for change, saying: “We need to look beyond simple solutions as data management problems can no longer be solved with just a large data warehouse. Data needs to be understood and be accessible to enterprise applications, so it is important to put it in context, in a semantic or metadata model rather than in a data model.”

Turning to the benefits of reducing complexity in enterprise architecture, Mavroudis said: “Reducing complexity can certainly save costs in terms of support and people. It can also make data more traceable and easier to manage. Most banks approach this by centralising data, reference data first and then data that is required by key applications.”

Halfon noted that reducing complexity can deliver not only cost savings, but also value as operational tasks are reduced and more value can be prised from data. He added: “Regulations are onerous, but they are also an opportunity to become more agile and get more value out of data from a marketing and new products perspective.”

Mathur said simplification can reduce the redundancy of systems doing the same things, reduce costs and give banks the ability to respond to changing requirements. He also pointed to the value of conformity underpinned by standards that dictate how data models are defined and interpreted, and introduced decoupling, saying: “The ability to decouple systems and applications needs to be built into enterprise architecture as without it there is significant dependency across layers that adds to the complexity.” Halfon agreed and reflected Mathur’s view that data also needs to be decoupled from schemas, with semantic and metadata models being used rather than traditional relational database schemas.
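To make the idea of decoupling data from schemas concrete, here is a minimal sketch (not from the panel; attribute names and the metadata entries are illustrative assumptions): records are held as plain key-value mappings, and a separate metadata layer tells consumers how to interpret each attribute, rather than baking that knowledge into a relational schema.

```python
# Minimal sketch: decouple data from a fixed schema by interpreting records
# through a metadata layer instead of hard-coded columns.
# ATTRIBUTE_METADATA and the field names below are hypothetical examples.

ATTRIBUTE_METADATA = {
    "isin": {"concept": "instrument identifier", "type": "string"},
    "cpn": {"concept": "coupon rate", "type": "decimal", "unit": "percent"},
}

def describe(record: dict) -> dict:
    """Interpret a raw record via the metadata layer, ignoring unmapped fields."""
    return {
        ATTRIBUTE_METADATA[key]["concept"]: value
        for key, value in record.items()
        if key in ATTRIBUTE_METADATA
    }

bond = {"isin": "US0000000000", "cpn": 4.25, "internal_flag": 1}
print(describe(bond))  # {'instrument identifier': 'US0000000000', 'coupon rate': 4.25}
```

Because consumers read the metadata rather than the raw field names, a source system can rename or add fields without breaking every downstream application, which is the dependency Mathur describes.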

From a user’s standpoint, Mavroudis explained: “Semantic models are not useful for all data, but they are useful for key attributes. I would map 200 to 300 of the most important attributes to a model and use it consistently to identify data that is valuable.”
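Mavroudis’s approach of mapping only the most important attributes can be sketched as a lookup table from local field names to shared concepts. The system and field names below are hypothetical, not taken from the talk; the point is that two systems naming the same concept differently resolve to one canonical term.

```python
# Sketch of a small semantic map: (source system, local field) -> shared concept.
# All names here are illustrative assumptions.

SEMANTIC_MAP = {
    ("trading", "instr_id"): "instrument.identifier",
    ("risk", "sec_code"): "instrument.identifier",
    ("trading", "ccy"): "instrument.currency",
}

def canonical(system, field):
    """Return the shared concept for a local field, or None if it is not one
    of the key attributes mapped into the semantic model."""
    return SEMANTIC_MAP.get((system, field))

# Two systems with different local names agree on the shared concept.
assert canonical("trading", "instr_id") == canonical("risk", "sec_code")
```

Only the few hundred key attributes get entries; everything else deliberately falls through to None, matching the view that semantic models are useful for key attributes rather than all data.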

If these are some of the technological concepts moving enterprise architecture forward, Delaney questioned whether they should be built or bought. Mathur responded: “There are more options than just building or buying. You can rent, use service providers, or borrow, perhaps open source tools that you can build on. But despite the number of options and the desires of architects, you must consider business requirements and skills within the organisation. When considering skills, it is important to think about their long-term interplay with new technologies and not just keep building what you are building today.”

Looking at where to start development, Halfon said: “A good start has already been made with an organisational focus on data management. The appointment of chief data officers with budget and the ability to make change is also important. I like the notion of borrowing from the open source space as there is much to learn from the open source community, there are tools and practices that can be adopted, and there is no vendor lock-in. We need to reverse the traditional model of an enterprise data warehouse built with relational technology and sourced from a large vendor with a closed stack, and consider open source solutions, infrastructure that scales horizontally and holistic perspectives that are less tightly coupled to products and vendors.”

Moving on to data management delivery models, Delaney questioned whether outsourcing options, such as those provided by utilities and managed services, offer practical solutions. Halfon said these types of outsourcing are becoming practical, particularly in the reference data space, but are not suited to all types of data, making hybrid models for data management the most likely outcome. Mathur qualified the potential benefits of outsourcing, saying: “Once a managed service is working, it can reduce complexity, but in the short-term, a managed service adds to integration challenges and will not reduce costs or provide benefits if it is plugged in just as it is. The need is to create a more generic integration layer that can support a number of managed services.”

Considering how to ensure flexibility within enterprise architecture, Mavroudis said the ability to decouple not just systems and applications, but also data sources, is key. Mathur picked up this point and concluded: “Building flexibility into an architecture that includes legacy systems is counterintuitive as more layers add to the complexity of the architecture, but the layers are needed for flexibility and to decouple systems and applications so that changes can be made quickly. Architects need to work with the project team on flexibility and not try to add it as an afterthought.”
