
How Easy Access to Trusted Data Drives Operational Efficiency for Finance Firms

By Neil Sandle, Head of Product Management, Alveo.

Today, we’re seeing rapid growth in both the volume and the diversity of data within financial services firms. Several trends contribute to this growth across the sector. One driver is that firms need to disclose more to comply with the continuing push towards regulatory transparency: many firms, for example, have ongoing pre-trade or post-trade transparency requirements to fulfil.

We are also seeing much more data generated and collected through digitalisation, as a by-product of business activities (sometimes referred to as ‘digital exhaust’), and through newer techniques like natural language processing (NLP) to gauge market sentiment. Finance firms are putting this data to a wide range of uses, from regulatory compliance to deeper insight into potential investments.

The availability of all this data and the potential it provides, coupled with increasingly data-intensive jobs and reporting requirements, mean financial firms need to improve their market data access and analytics capabilities.

Making good use of this data is complex, however. To gather it in the first place, firms need to draw up a shopping list of the companies or financial products they want data on. After that, they need to decide exactly what information to collect. Once the data is sourced, they need to expose which data sets are available and show business users what the sources are, when the data was requested, what came back, and what quality checks were undertaken.

In other words, firms need to be transparent about what is available within the company and what its provenance is. They also need to know the contextual information: was the data disclosed directly; is it expert opinion or just sentiment from the public internet; and who has permission to use it? With all this in hand, it becomes much easier to decide which data to use.
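To make this concrete, each catalogue entry could capture the source, timestamps, quality checks and permissions described above. Here is a minimal Python sketch of such a record; all names and fields are illustrative assumptions rather than a reference to any particular product:

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum


class SourceType(Enum):
    """Broad provenance categories, as discussed above."""
    DIRECT_DISCLOSURE = "direct disclosure"
    EXPERT_OPINION = "expert opinion"
    PUBLIC_SENTIMENT = "public internet sentiment"


@dataclass
class DatasetRecord:
    """Catalogue entry describing one sourced data set."""
    name: str
    source: str                     # vendor or originating company
    source_type: SourceType         # provenance category
    requested_at: datetime          # when the data was requested
    received_at: datetime           # when it actually came back
    quality_checks: list[str] = field(default_factory=list)
    permitted_users: list[str] = field(default_factory=list)

    def is_usable_by(self, user_group: str) -> bool:
        """Lets a business user see at a glance whether they may use the data."""
        return user_group in self.permitted_users
```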

In addition, there are certain key processes data needs to go through before it can be fully trusted. If the data is for operational purposes, firms need a data set that is high quality and delivered reliably from a provider they can trust. As the data is going to be put into an automated, day-to-day recurring process, there needs to be predictability around the availability and quality of the data.

If, however, the data is for market exploration or research, the user might only want to use each data set once, but is nevertheless likely to be more adventurous in finding new data sets that give an edge in the market. The quality of the data and the ability to trust it implicitly are, however, still critically important.
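For operational, recurring use in particular, that predictability can be checked programmatically before a delivery enters the daily process. Below is a hedged sketch of such a gate; the thresholds, field names and cutoff time are all assumptions for illustration:

```python
from datetime import datetime, time, timezone


def check_operational_feed(records: list[dict],
                           expected_count: int,
                           cutoff: time = time(7, 0)) -> list[str]:
    """Return a list of issues in today's delivery; an empty list means
    the data can flow into the automated, day-to-day recurring process."""
    issues = []
    now = datetime.now(timezone.utc)

    # Predictable availability: the feed must arrive before the daily cutoff.
    if not records and now.time() > cutoff:
        issues.append("feed missing after cutoff")

    # Completeness: roughly the expected number of instruments came back.
    if len(records) < 0.95 * expected_count:
        issues.append(f"only {len(records)}/{expected_count} records received")

    # Basic quality: no missing or non-positive prices.
    bad = [r for r in records if r.get("price") is None or r["price"] <= 0]
    if bad:
        issues.append(f"{len(bad)} records with missing or invalid prices")

    return issues
```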

Where existing approaches fall short

Unfortunately, existing approaches to market data management and analytics suffer from a range of drawbacks. IT is typically used to automate processes quickly, but the downside is that the resulting workflows hardwire financial and market analysts to specific datasets and data formats.

Bringing in new data sets is often difficult with existing approaches because new data comes in different formats, and onboarding and operationalising it is typically very costly. Whether users want to bring in a new source or connect a new application or financial model, the exercise tends to be expensive and error prone.

Added to that, it is often hard for firms to ascertain the quality of the data they are dealing with, or even to make an educated guess about how much they can rely on it.

Market data collection and preparation on the one hand, and analytics on the other, have also historically been separate disciplines, separately managed and executed. So, when a data set comes in, somebody gets to work verifying, cross-referencing and integrating it. The data then has to be copied into another database before an analyst can run a risk or investment model against it.

In summary, it is hard to get the data in to begin with, and then cumbersome to put it into a shape, form and place where an analyst can get to work on it. The logistics simply don’t lend themselves to a fast turnaround or a streamlined process.

Finding a way forward

The latest big data management tools can help a great deal in this context. Today, they typically use cloud-native technology, making them easy to scale up and down with the intensity or volume of the data. Cloud-based platforms also give firms a more elastic pricing model, ensuring they only pay for the resources they use.

The latest tools are also able to facilitate the integration of data management and analytics, something that has proven difficult with legacy approaches. The use of underlying technologies like Cassandra and Spark makes it much easier to bring business logic or financial models to the data, streamlining the whole process and driving operational efficiencies.
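As an illustration of bringing the model to the data, the sketch below reads prices held in Cassandra into Spark and computes a simple 20-day moving average there, instead of copying the data into a separate analytics database first. It assumes the open-source Spark Cassandra Connector is on the classpath, and the keyspace, table and column names are purely illustrative:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

# Connect to a Spark cluster that can reach the Cassandra nodes.
spark = (SparkSession.builder
         .appName("price-analytics")
         .config("spark.cassandra.connection.host", "127.0.0.1")
         .getOrCreate())

# Read the price table in place via the Spark Cassandra Connector.
prices = (spark.read
          .format("org.apache.spark.sql.cassandra")
          .options(keyspace="marketdata", table="prices")
          .load())

# The business logic runs where the data lives: a 20-day moving average
# of the closing price, computed per instrument inside Spark.
window = (Window.partitionBy("instrument")
          .orderBy("trade_date")
          .rowsBetween(-19, 0))
result = prices.withColumn("ma_20d", F.avg("close").over(window))
result.show()
```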

In addition to all this, in-memory data grids can be used to deliver fast response times on queries, together with integrated feeds that streamline onboarding and deliver easy distribution. These feeds provide last-mile integration both to consuming systems and to users, enabling them to gain the critical business intelligence that supports faster, more informed decision making.
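The sketch below shows the in-memory data grid idea at its simplest, using the open-source Hazelcast Python client as one concrete example; the map name, key and values are hypothetical:

```python
import hazelcast

# Connects to a locally running Hazelcast cluster by default.
client = hazelcast.HazelcastClient()

# A distributed map acts as the shared, in-memory store.
ref_data = client.get_map("instrument-reference").blocking()

# An onboarding job populates the grid once...
ref_data.set("US0378331005", {"name": "Apple Inc.", "currency": "USD"})

# ...and consuming systems and users then get low-latency reads.
print(ref_data.get("US0378331005"))

client.shutdown()
```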

Driving data RoI

Ultimately, every firm operating in financial services should be looking to maximise its data return on investment (RoI). That means sourcing the right data and then ensuring it gets the most from it. The ‘know your data’ message is important here: finance firms need to know what they have, understand its lineage and track its distribution. That’s the essence of good data governance.

Just as important, the approach these firms take should drive business-user enablement. Finance businesses need to ensure all their stakeholders know what data is available and can easily access what they need. That is what will ultimately drive true competitive edge for these organisations, and the latest big data management tools certainly make it easier to achieve.
