The leading knowledge platform for the financial technology industry

A-Team Insight Blogs

Data Management Summit: UBS’s Brown on Creating a Data Map of the Financial Enterprise

Rupert Brown, lead architect in the office of the CTO at UBS Investment Bank, opened the A-Team Group Data Management Summit with a keynote presentation covering the creation of a data map for a financial enterprise. Taking a traditional map as his starting point and later touching on the topological nature of the London Underground map, he asked questions along the lines of: have we got a map; where are we on the map; where do we want to go; how do we get there; how far do we have to go; and, inevitably, are we nearly there yet?

Outlining the drivers for an enterprise data map, Brown explained: “The need is to look at data across the enterprise, not only reference data, but also pricing and risk data. It is also necessary to look at the segregation of duties around data, systems development and operations. We don’t have permanent employees making up IT teams any more; instead we have consultants, outsourcing arrangements and so on, which means metadata is important in sustaining a single source of the truth.”

Brown went on to discuss the need for positional reference points and categorisation on any data map and noted the London Underground map, first drawn in the 1930s and looking very similar today, as an example of a map with a notion of latitude and longitude, orientation with north at the top and standard fonts and annotation. But he warned: “It is not enough to draw a map, people must believe in it and refer to it.”

Turning to an IT-intensive financial enterprise, Brown described a project he is leading at UBS, which maps interconnections as longitudinal bars and applications as latitudinal bars. The data map was generated by feeding the bank’s database into OpenStreetMap and includes nodes and ways, as well as relationships between them that can help put metadata into the map.
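
The OpenStreetMap data model Brown borrows here consists of nodes, ways and relations, each carrying free-form key/value tags. A minimal sketch of how applications and their interconnections might be expressed in that model follows; the class names, coordinates and example systems are invented for illustration, not drawn from the UBS project.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the OpenStreetMap data model (nodes, ways, tags)
# repurposed for an enterprise data map: applications become nodes placed
# on latitudinal bands, interconnections become ways between them, and
# free-form tags carry the metadata.

@dataclass
class Node:
    id: int
    lat: float                          # latitudinal band: application position
    lon: float                          # longitudinal band: connection position
    tags: dict = field(default_factory=dict)

@dataclass
class Way:
    id: int
    node_ids: list                      # ordered node references, as in OSM
    tags: dict = field(default_factory=dict)

# Two applications linked by a market-data feed (names are invented).
trading = Node(1, lat=10.0, lon=0.0, tags={"type": "application", "name": "TradingSystem"})
risk    = Node(2, lat=20.0, lon=0.0, tags={"type": "application", "name": "RiskEngine"})
feed    = Way(100, node_ids=[1, 2], tags={"type": "connection", "protocol": "FIX"})

print(feed.tags["protocol"])  # FIX
```

Because tags are schemaless key/value pairs, new kinds of metadata can be attached to existing nodes and ways without restructuring the map, which is what allows many data views without changing the underlying data.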

The initial mapping concept has been tested using several hundred applications and several thousand connections. The outcome is a good view of data transports and applications, and a highly queryable solution offering many data views without the need to change data and its relationships on the map. Brown commented: “The map is sufficiently simple and consistent to stand the test of zooming in and out.”

Summarising the project to date, he added: “We have learnt that 2D maps are best and can be used as layers or with links, that the OpenStreetMap approach is viable and that the database has to be NoSQL. We have also learnt that we need large format printers.”

As OpenStreetMap is about crowdsourcing, Brown plans to transfer the data map to a web page, allowing people to click on a node and add or edit data. He explained: “This is a key part of the project. We will ask people to push data into the map as they write software and we will need to map protocols with animation. We also need bitemporality to see what the map looked like and what it will look like. This requires versions of the map and animation over time.”
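
Bitemporality here means tracking two timelines for every map edit: when a fact was true in the real world (valid time) and when it was recorded in the map (transaction time). The following is a minimal sketch of that idea under assumed semantics; it is not a description of the UBS implementation, and the entity and attribute names are invented.

```python
from datetime import date

# Minimal bitemporal sketch: each map edit carries a valid-time interval
# (when the connection actually existed) and a transaction time (when it
# was recorded), so the map can be replayed "as it looked" at any point.

END_OF_TIME = date.max
versions = []

def record(entity, attrs, valid_from, valid_to=END_OF_TIME, recorded=None):
    """Append a new version of an entity; nothing is ever overwritten."""
    versions.append({
        "entity": entity, "attrs": attrs,
        "valid_from": valid_from, "valid_to": valid_to,
        "recorded": recorded or date.today(),
    })

def as_of(entity, valid_date, known_by):
    """What did we believe, as of `known_by`, was true on `valid_date`?"""
    rows = [v for v in versions
            if v["entity"] == entity
            and v["recorded"] <= known_by
            and v["valid_from"] <= valid_date < v["valid_to"]]
    # The most recently recorded matching version wins.
    return max(rows, key=lambda v: v["recorded"], default=None)

# A connection starts life on FIX, then is migrated to AMQP in mid-2012.
record("feed-100", {"protocol": "FIX"}, date(2010, 1, 1), recorded=date(2010, 1, 5))
record("feed-100", {"protocol": "AMQP"}, date(2012, 6, 1), recorded=date(2012, 6, 2))

v = as_of("feed-100", date(2011, 3, 1), known_by=date(2012, 12, 31))
print(v["attrs"]["protocol"])  # FIX
```

Keeping every version append-only is what makes the animation over time that Brown mentions possible: rendering the map for a sequence of dates is just a sequence of `as_of` queries.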

The map concept will be moved to a web server using the open source Mapnik toolkit for rendering maps, while a Java animation tool is being built to provide the ability to add or edit data. UBS is working with University College London to evolve this part of the project and consider aspects such as who should contribute data and how data mapping experts can be identified on the basis of their contributions.

Brown explained the genesis of the data map as a project he undertook in his former role at Merrill Lynch to find out the degrees of separation between reference data and its targets. Looking forward, he said: “Essentially, the map is a data structure that can be used to query dependencies, but it is also part of a bigger idea about how you get people to put data into systems and how you gather knowledge within the enterprise and bring it back into the software development lifecycle. We are building engines that will bring together and map systems and data models, and we are building processes to check the map. The end game is to derisk the enterprise.”
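
The degrees-of-separation question Brown describes is, in graph terms, a shortest-path query from a reference data source to its consumers. A sketch of that query using breadth-first search follows; the edge list of systems is invented for illustration.

```python
from collections import deque

# Treat the data map as a directed graph from reference data sources to
# consuming applications, and measure degrees of separation with a
# breadth-first search. System names below are hypothetical.

edges = {
    "RefDataMaster":     ["PricingService"],
    "PricingService":    ["RiskEngine", "TradingSystem"],
    "RiskEngine":        ["RegulatoryReports"],
    "TradingSystem":     [],
    "RegulatoryReports": [],
}

def degrees_of_separation(graph, source, target):
    """Shortest number of hops from source to target, or None if unreachable."""
    seen, queue = {source}, deque([(source, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == target:
            return dist
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

print(degrees_of_separation(edges, "RefDataMaster", "RegulatoryReports"))  # 3
```

The same traversal, run in reverse, answers the dependency question the other way round: which reference data a given report ultimately relies on.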
