Big data, cloud computing, the semantic web, big meta data, logical data models and in-memory analytics featured in a debate about emerging technologies for enterprise architecture at this week’s A-Team Group Data Management Summit.
Colin Gibson, head of data architecture, markets and international banking at the Royal Bank of Scotland, set the scene with a description of data management development work at the bank, before A-Team Group editor-in-chief Andrew Delaney stepped up to moderate a panel discussion including Gibson; Rupert Brown, IB CTO lead architect at UBS Investment Bank; Amir Halfon, chief technologist, financial services at MarkLogic; and Eyal Gutkind, senior manager, enterprise market development at Mellanox Technologies.
Gibson presented under the title Understanding Data – Analysis, Not Archaeology. He highlighted the need to understand data if maximum value is to be extracted from it and described the development of a data knowledge base at Royal Bank of Scotland. To avoid the data inconsistencies of running numerous data silos and the difficulties of spaghetti-style application architecture, Gibson selected a meta data model for the data knowledge base, which was then populated with content. The development was not without challenges, such as mapping legacy data to the logical model and maintaining momentum throughout the build, but the outcome is a data management solution that meets business needs.
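The article does not detail how the bank structured its knowledge base, but a rough sketch of the general technique, a logical model captured as meta data with legacy fields mapped onto it, might look like the following. All entity, attribute and system names here are illustrative, not drawn from the RBS implementation.

```python
# Minimal sketch of a metadata-driven data knowledge base.
# Entity, attribute and legacy system names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class LogicalAttribute:
    name: str
    description: str


@dataclass
class LogicalEntity:
    name: str
    attributes: dict[str, LogicalAttribute] = field(default_factory=dict)


@dataclass
class LegacyMapping:
    source_system: str   # e.g. a settlement or trading silo
    source_field: str    # physical column name in that silo
    entity: str          # logical entity it maps to
    attribute: str       # logical attribute it maps to


class DataKnowledgeBase:
    """Holds the logical model plus mappings from legacy silos onto it."""

    def __init__(self) -> None:
        self.entities: dict[str, LogicalEntity] = {}
        self.mappings: list[LegacyMapping] = []

    def add_entity(self, entity: LogicalEntity) -> None:
        self.entities[entity.name] = entity

    def map_legacy_field(self, mapping: LegacyMapping) -> None:
        # Refuse mappings to attributes outside the logical model, which is
        # where inconsistencies between silos tend to surface.
        attrs = self.entities[mapping.entity].attributes
        if mapping.attribute not in attrs:
            raise KeyError(f"{mapping.attribute} is not part of {mapping.entity}")
        self.mappings.append(mapping)

    def lineage(self, entity: str, attribute: str) -> list[LegacyMapping]:
        """All legacy fields that feed a given logical attribute."""
        return [m for m in self.mappings
                if m.entity == entity and m.attribute == attribute]


# Usage: register a Counterparty entity and map two silo fields onto it.
kb = DataKnowledgeBase()
counterparty = LogicalEntity("Counterparty")
counterparty.attributes["legal_name"] = LogicalAttribute("legal_name", "Registered legal name")
kb.add_entity(counterparty)
kb.map_legacy_field(LegacyMapping("settlement_db", "CPTY_NM", "Counterparty", "legal_name"))
kb.map_legacy_field(LegacyMapping("trading_db", "counterparty_name", "Counterparty", "legal_name"))
print(kb.lineage("Counterparty", "legal_name"))
```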
Panellists agreed that an enterprise approach to data management is essential in an increasingly regulated market that must report on both structured and unstructured data, manage internal risk and deliver agile solutions to demanding customers. Halfon commented: “If data can be made available without building a data warehouse, it is possible to be more agile and get to trading more quickly. Adding types of information other than trade data, perhaps political data or news analysis, can deliver better returns.”
It is these types of business benefits that win C-suite buy-in for data management programmes, but the benefits cannot be delivered without technology innovation. Halfon noted an industry move towards logical data warehouses with data schemas dictated by consumers rather than domain experts, as well as the emerging power of semantics in systems development. Brown, like Gibson, advocated the use of meta data models and described the criticality of data sequencing in the ‘forensic pathology’ of working out how something happened.
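Halfon’s point about consumer-dictated schemas is, in effect, schema on read. As a hypothetical sketch, with made-up record fields and consumer views, the idea is that the raw store keeps loosely structured records and each consumer projects its own schema at query time rather than depending on a warehouse schema fixed up front.

```python
# Minimal illustration of 'schema on read': the raw store keeps loosely
# structured records, and each consumer applies its own schema at query time.
# The record fields and consumer views below are hypothetical examples.
raw_store = [
    {"trade_id": "T1", "instrument": "XYZ 5Y CDS", "notional": 10_000_000,
     "counterparty": "ACME", "trader_notes": "rolled from T0"},
    {"trade_id": "T2", "instrument": "ABC equity", "quantity": 500,
     "counterparty": "ACME"},
]


def read_with_schema(records, schema):
    """Project each record onto a consumer-defined schema, tolerating gaps."""
    return [{col: rec.get(col) for col in schema} for rec in records]


# A risk consumer and a reporting consumer ask different questions of the
# same data, without domain experts fixing a single schema in advance.
risk_view = read_with_schema(raw_store, ["trade_id", "notional", "counterparty"])
reporting_view = read_with_schema(raw_store, ["trade_id", "instrument"])
print(risk_view)
print(reporting_view)
```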
In terms of specific technologies, big data got a thumbs up from the US contingent in the conference room and a thumbs down from the Europeans. The panellists agreed that, like the term or not, and despite the industry having dealt with big data’s three Vs of volume, velocity and variety for many years, its elements still have a part to play in data management.
Gutkind explained: “We see people wanting to do analysis on data as it flies, so velocity is important. It is not just about putting data into a system for analysis, but analysing it as it goes in.” This is the role of in-memory processing, a technology that is gaining ground but needs to be part of a wider velocity and volume solution that could also encompass solid-state local drives and cloud storage.
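As a rough illustration of analysing data ‘as it goes in’, the hypothetical sketch below updates statistics in memory on each arriving tick rather than after the data has been landed in a store. The tick stream, window size and threshold are invented for the example.

```python
# Sketch of in-memory analysis on ingest: state is updated per tick,
# not in a batch after the data has been stored. Values are hypothetical.
from collections import deque


class RollingAnalyser:
    def __init__(self, window: int = 100):
        self.window = deque(maxlen=window)   # recent ticks kept in memory

    def ingest(self, price: float):
        """Update state on arrival and report the move against the recent mean."""
        self.window.append(price)
        if len(self.window) < self.window.maxlen:
            return None
        mean = sum(self.window) / len(self.window)
        return price - mean                  # deviation computed on ingest


analyser = RollingAnalyser(window=5)
for tick in [100.0, 100.2, 99.9, 100.1, 100.0, 103.5]:
    deviation = analyser.ingest(tick)
    if deviation is not None and abs(deviation) > 1.0:
        print(f"Large move on arrival: {tick} (deviation {deviation:.2f})")
```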
On variety, Halfon said: “Variety is where the challenge and opportunity lies. If all data can be brought together in a way that has not been done before, it is possible to manage risk and regulation, and deliver revenue and returns.” He cited tools such as Hadoop and MarkLogic’s NoSQL and search capability as a means of managing and benefitting from big data.
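The sketch below is a generic, in-memory illustration of that variety point, not MarkLogic’s API: made-up structured trade records and unstructured news items sit in one document collection, and a naive search runs across both at once.

```python
# Generic illustration of 'variety': structured trade records and unstructured
# news items live in one document collection and can be searched together.
# This is not MarkLogic's API; the documents are invented for the example.
documents = [
    {"type": "trade", "trade_id": "T1", "counterparty": "ACME",
     "instrument": "XYZ 5Y CDS", "notional": 10_000_000},
    {"type": "news", "headline": "ACME downgraded by rating agency",
     "body": "The downgrade follows weak quarterly results at ACME..."},
    {"type": "news", "headline": "Central bank holds rates",
     "body": "Policy makers left rates unchanged."},
]


def search(docs, term: str):
    """Naive full-text search across all fields of every document type."""
    term = term.lower()
    return [d for d in docs
            if any(term in str(v).lower() for v in d.values())]


# One query surfaces both the exposure (trade) and the risk signal (news).
for doc in search(documents, "ACME"):
    print(doc["type"], "->", doc.get("trade_id") or doc.get("headline"))
```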
Although cloud computing is some years down the development road, the panellists voted in its favour. Gibson acknowledged its elasticity to deal with data spikes, while Brown suggested it can support better data management as the location of data, and where it is moving to and from, can be tracked. He also proposed that clouds include models of system topology and simulations to discover where data is best placed.