About a-team Marketing Services
The leading knowledge platform for the financial technology industry

A-Team Insight Blogs

Data Management Summit: UBS’s Brown on Creating a Data Map of the Financial Enterprise


Rupert Brown, lead architect in the office of the CTO at UBS Investment Bank, opened the A-Team Group Data Management Summit with a keynote presentation covering the creation of a data map for a financial enterprise. Taking a traditional map as his starting point and later touching on the topological nature of the London Underground map, he asked questions along the lines of: have we got a map; where are we on the map; where do we want to go; how do we get there; how far do we have to go; and, inevitably, are we nearly there yet?

Outlining the drivers for an enterprise data map, Brown explained: “The need is to look at data across the enterprise, not only reference data, but also pricing and risk data. It is also necessary to look at the segregation of duties around data, systems development and operations. We don’t have permanent employees making up IT teams any more, instead we have consultants, outsourcing arrangements and so on, which means metadata is important in sustaining a single source of the truth.”

Brown went on to discuss the need for positional reference points and categorisation on any data map and noted the London Underground map, first drawn in the 1930s and looking very similar today, as an example of a map with a notion of latitude and longitude, orientation with north at the top and standard fonts and annotation. But he warned: “It is not enough to draw a map, people must believe in it and refer to it.”

Turning to an IT-intensive financial enterprise, Brown described a project he is leading at UBS, which maps interconnections as longitudinal bars and applications as latitudinal bars. The data map was generated by feeding the bank’s database into OpenStreetMap and includes nodes and ways, as well as relations between these that help put metadata into the map.
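To make the nodes-and-ways idea concrete, the sketch below shows, purely illustratively, how applications and their interconnections might be expressed in OpenStreetMap’s node/way/tag model so standard OSM tooling can render and query them. The application names, tag keys and coordinates are hypothetical, not details of the UBS project.

```python
import xml.etree.ElementTree as ET

# Illustrative inventory: applications become OSM nodes, interconnections
# become OSM ways referencing their endpoint nodes. All names are invented.
apps = {
    1: {"name": "TradeBooking", "tier": "front-office"},
    2: {"name": "RiskEngine", "tier": "middle-office"},
}
connections = [(1, 2, {"protocol": "MQ", "payload": "trades"})]

osm = ET.Element("osm", version="0.6")

for node_id, tags in apps.items():
    # lat/lon would come from the map layout (applications as latitudinal bars)
    node = ET.SubElement(osm, "node", id=str(node_id), lat="0.0", lon="0.0")
    for k, v in tags.items():
        ET.SubElement(node, "tag", k=k, v=v)

for way_id, (src, dst, tags) in enumerate(connections, start=1):
    # Each interconnection is a way whose <nd> members are the two applications;
    # tags carry the metadata (protocol, payload) Brown describes.
    way = ET.SubElement(osm, "way", id=str(way_id))
    ET.SubElement(way, "nd", ref=str(src))
    ET.SubElement(way, "nd", ref=str(dst))
    for k, v in tags.items():
        ET.SubElement(way, "tag", k=k, v=v)

xml_doc = ET.tostring(osm, encoding="unicode")
print(xml_doc)
```

Because the output is ordinary OSM XML, renderers and editors built for geographic data can display and annotate the enterprise map without modification.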

The initial mapping concept has been tested using several hundred applications and several thousand connections. The outcome is a good view of data transports and applications, and a highly queryable solution offering many data views without the need to change the data and its relationships on the map. Brown commented: “The map is sufficiently simple and consistent to stand the test of zooming in and out.”

Summarising the project to date, he added: “We have learnt that 2D maps are best and can be used as layers or with links, that the OpenStreetMap approach is viable and that the database has to be NoSQL. We have also learnt that we need large format printers.”

As OpenStreetMap is built on crowdsourcing, Brown plans to transfer the data map to a web page, allowing people to click on a node and add or edit data. He explained: “This is a key part of the project. We will ask people to push data into the map as they write software and we will need to map protocols with animation. We also need bitemporality to see what the map looked like and what it will look like. This requires versions of the map and animation over time.”
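The bitemporality Brown mentions can be sketched as follows: each fact on the map carries both a valid-time interval (when the connection actually existed in the estate) and a transaction time (when the map recorded it), so the map can be replayed as it looked, and as it was known, on any past date. The connection names, field layout and dates below are hypothetical.

```python
from datetime import date

# Each fact: (subject, attribute, value, valid_from, valid_to, recorded_at).
# An invented history: a connection moved from FTP to MQ in mid-2012.
facts = [
    ("TradeBooking->RiskEngine", "protocol", "FTP",
     date(2010, 1, 1), date(2012, 6, 1), date(2010, 1, 5)),
    ("TradeBooking->RiskEngine", "protocol", "MQ",
     date(2012, 6, 1), date(9999, 12, 31), date(2012, 6, 3)),
]

def as_of(facts, valid_on, known_by):
    """Facts that were true on `valid_on`, as the map knew them by `known_by`."""
    return [f for f in facts
            if f[3] <= valid_on < f[4]   # valid-time filter
            and f[5] <= known_by]        # transaction-time filter

# What protocol did the map show for 2011, viewed with today's knowledge?
result = as_of(facts, date(2011, 1, 1), date.today())
print(result)
```

Versioned map rendering then becomes a matter of animating the `as_of` query over a sliding date, which matches the “versions of the map and animation over time” Brown describes.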

The map concept will be moved to a web server using the open source Mapnik toolkit for rendering maps, while a Java animation tool is being built to provide the ability to add or edit data. UBS is working with University College London to evolve this part of the project and consider aspects such as who should contribute data and how data mapping experts can be identified on the basis of their contributions.

Brown explained the genesis of the data map as a project he undertook in his former role at Merrill Lynch to find out the degrees of separation between reference data and its targets. Looking forward, he said: “Essentially, the map is a data structure that can be used to query dependencies, but it is also part of a bigger idea about how you get people to put data into systems and how you gather knowledge within the enterprise and bring it back into the software development lifecycle. We are building engines that will bring together and map systems and data models, and we are building processes to check the map. The end game is to derisk the enterprise.”
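The “degrees of separation” query Brown traces back to his Merrill Lynch work amounts to a shortest-path search over the map treated as a graph. A minimal sketch, with an entirely invented set of applications and flows, might look like this:

```python
from collections import deque

# Hypothetical data flows: which application feeds which. Names are invented.
edges = {
    "RefDataMaster": ["PricingService"],
    "PricingService": ["RiskEngine", "TradeBooking"],
    "RiskEngine": ["RegulatoryReports"],
    "TradeBooking": [],
    "RegulatoryReports": [],
}

def degrees_of_separation(graph, source, target):
    """Breadth-first search: fewest hops from source to target, or None."""
    seen = {source}
    queue = deque([(source, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == target:
            return dist
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # target not reachable from source

print(degrees_of_separation(edges, "RefDataMaster", "RegulatoryReports"))
```

The same traversal, run in reverse, answers the de-risking question of which downstream systems depend on a given reference data source.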


Related content

WEBINAR

Recorded Webinar: Data Management – A Finance, Risk and Regulatory Perspective

This webinar has passed, but you can view the recording here. With financial institutions acquiring international businesses – whether through mergers-and-acquisitions activity or organic growth – cross-border trading and investment has become the norm. But with it comes a new level of complexity, as firms grapple with multiple regulatory regimes, market conventions...

BLOG

GoldenSource Integrates FactSet Truvalue Labs ESG Data into ESG Impact Product

GoldenSource has integrated FactSet content, including Truvalue Labs ESG data, into its ESG Impact product. The EDM solution provider will also be listed on the Open:FactSet Marketplace, a platform providing data applications and workflow solutions for investment professionals. GoldenSource ESG Impact is positioned to handle interlinked reference data, as well as fundamental and ESG data,...

EVENT

Data Management Summit London

Now in its 13th year, the Data Management Summit (DMS) in London brings together the European capital markets enterprise data management community, to explore the evolution of data strategy and how to leverage data to drive compliance and business insight.

GUIDE

Pricing and Valuations

This special report accompanies a webinar we held on the popular topic of Pricing and Valuations, discussing issues such as transparency of pricing and how to ensure data quality. You can register here to get immediate access to the Special Report.