A-Team Insight Blogs

PolarLake Proposes SOA and Semantics for Big Data Management


PolarLake has responded to market – and regulator – demand for a real-time consolidated view of trade, position, reference and client data with a virtualised data warehouse solution based on service-oriented architecture (SOA) and semantic technologies. The consolidated view is designed to inform operational efficiency, risk management and compliance.

The virtualised data warehouse plays into not only risk and compliance issues, but also the Big Data debate as financial firms begin to look beyond relational databases at how best to access, manage and view vast quantities of data.

“We are talking to investment banks and large buy side firms about the virtualised data warehouse,” says John Randles, CEO of PolarLake. “We don’t have to explain the problem, they know it. The pressure from business, risk managers and regulators is to get a better handle on data and understand how it links together. Organisations need to be confident about the reliability of data they are using in operations and reporting.”

The software came to market early this week, but it has been running in a pilot project at a large investment bank in North America since the last quarter of 2011 and is expected to go live at the bank later this year. The pilot consolidated 10 silos of trade, position, reference and client data in five weeks.

Randles explains: “The bank was facing a situation where it had multiple systems and wanted a better consolidated view for operational purposes, risk management and compliance. It could have looked at a traditional data warehouse solution – what we call ‘yet another data warehouse strategy’ – but that would have meant a long development programme to build a new system. An alternative strategy was to use PolarLake technology that leaves data where it is and queries it, or depending on user requirements, loads data into an element of the product called PolarLake Data Store, where it can be queried. Real-time queries can be run against both data in silos and data in the data store.”

The virtualised data warehouse has four components: a data search application that allows users to query data across all repositories; a semantic policy and rules engine that supports the creation of business rules to build consolidated records, as well as the creation of virtual identifiers across all repositories; a data store for source data and index data used in virtual queries; and a connectivity subsystem that allows communications across multiple protocols and formats in batch, real time, request reply and ad hoc distributions.

The decision to query data in source repositories, build a temporary data warehouse style store in the data store, or combine these options, depends on operational considerations. For example, if the requirement is to run a large query across many systems, it may be best to load the necessary data into the data store and take it offline to run the query.
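The trade-off described above can be illustrated with a minimal sketch. This is not PolarLake's actual API; the names (`Silo`, `DataStore`, `run_query`, `materialise`) are hypothetical. It shows the two modes the article describes: federating a real-time query across source silos in place, or first loading the necessary data into a local store and running a large query against the offline copy.

```python
# Illustrative sketch only: choosing between querying source silos in
# place (federation) and materialising data into a store before running
# a large query offline. All names here are hypothetical.

from dataclasses import dataclass, field


@dataclass
class Silo:
    """A source repository that can be queried in place."""
    name: str
    records: list = field(default_factory=list)

    def query(self, predicate):
        return [r for r in self.records if predicate(r)]


@dataclass
class DataStore:
    """A local store holding materialised copies for offline queries."""
    records: list = field(default_factory=list)

    def load_from(self, silos):
        for silo in silos:
            self.records.extend(silo.records)

    def query(self, predicate):
        return [r for r in self.records if predicate(r)]


def run_query(silos, predicate, materialise=False):
    """Federate across silos, or snapshot into the store first."""
    if materialise:
        store = DataStore()
        store.load_from(silos)          # take the data "offline"
        return store.query(predicate)   # large query runs on the copy
    results = []
    for silo in silos:                  # real-time query per silo
        results.extend(silo.query(predicate))
    return results


trades = Silo("trades", [{"id": 1, "type": "trade"}])
positions = Silo("positions", [{"id": 2, "type": "position"}])
print(run_query([trades, positions], lambda r: True, materialise=True))
```

Either path returns the same consolidated result; materialising simply moves the query load off the operational systems, which is the operational consideration the article points to.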

The technologies supporting the virtual nature and performance of the data warehouse are an SOA layer and semantics. Randles explains: “We are at the point where the old approach of massive multi-year data warehousing projects is no longer tenable. The PolarLake approach of a search-based view of data with an integrated semantic policy engine has proved to deliver business requirements in weeks rather than years.”

The search functionality of the software is based on data tags and on linkages between data established using semantics. The data integration is based on XML pipeline technology that PolarLake patented in 2004, which treats all data, whatever its type, format or source, as XML without converting it into XML. For low-latency streaming data, PolarLake says these technologies allow its solution to outperform relational data models by a factor of 11.
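The tag-and-linkage idea above can be sketched in a few lines. This is a hypothetical illustration, not PolarLake's implementation: records from different silos are tagged, linked under a shared virtual identifier (here keyed on a common entity name), and then searched by tag rather than by each silo's local keys.

```python
# Hypothetical sketch of semantic tagging and linkage across silos.
# The structures (virtual_ids, tag_index) are illustrative only.

from collections import defaultdict

records = [
    {"source": "trades",    "local_id": "T-9",
     "entity": "ACME", "tags": {"counterparty", "equity"}},
    {"source": "positions", "local_id": "P-4",
     "entity": "ACME", "tags": {"counterparty", "position"}},
    {"source": "reference", "local_id": "R-12",
     "entity": "ACME", "tags": {"legal-entity"}},
]

# Build a virtual identifier linking the same entity across repositories.
virtual_ids = defaultdict(list)
for rec in records:
    virtual_ids[rec["entity"]].append((rec["source"], rec["local_id"]))

# Index records by semantic tag so one search spans all silos at once.
tag_index = defaultdict(list)
for rec in records:
    for tag in rec["tags"]:
        tag_index[tag].append(rec["local_id"])

print(virtual_ids["ACME"])                # consolidated view of 3 silos
print(sorted(tag_index["counterparty"]))  # ['P-4', 'T-9']
```

A query for the tag `counterparty` finds the trade and the position without knowing either system's native identifiers, which is the "best of both worlds" combination of search and semantics Randles describes.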

“We are all about innovation, our DNA is in integrating all types of data. As our data management platform has evolved, we have moved beyond integration, to link, manage, distribute and search financial and reference data with speed and control,” says Randles. “Other companies have tried to build data management solutions with SOA and messaging technologies, but this is not enough. The need is to understand the data and provide intelligence for searching. We are trying to give people the best of both worlds, SOA and semantics for meaningful searches.”

