How Easy Access to Trusted Data Drives Operational Efficiency for Finance Firms

By Neil Sandle, Head of Product Management, Alveo.

Today, we’re seeing rapid growth in data volumes and in data diversity within financial services firms. Several trends contribute to this growth of available data across the sector. One driver is that firms need to disclose more to comply with the continuing push towards regulatory transparency. For example, many firms have ongoing pre-trade or post-trade transparency requirements to fulfil.

We are also seeing much more data generated and collected through digitalisation, as a by-product of business activities (sometimes referred to as ‘digital exhaust’), and through innovative techniques such as natural language processing (NLP) to gauge market sentiment. Firms use this data for everything from regulatory compliance to deeper insight into potential investments.
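As a toy illustration of the sentiment-scoring idea, here is a minimal lexicon-based scorer in Python. It is a sketch only: the word lists and headlines are invented for the example, and real NLP pipelines use trained language models rather than word counts.

```python
# Minimal lexicon-based sentiment scoring for news headlines.
# Purely illustrative: production NLP uses trained models, not word lists.

POSITIVE = {"beat", "beats", "upgrade", "growth", "record", "strong"}
NEGATIVE = {"miss", "misses", "downgrade", "loss", "lawsuit", "weak"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: +1 if all sentiment words are positive."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("Acme beats forecasts with record growth"))   # 1.0
print(sentiment_score("Acme misses estimates amid weak demand"))    # -1.0
```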

The availability of all this data and the potential it offers, coupled with increasingly data-intensive jobs and reporting requirements, mean financial firms need to improve their market data access and analytics capabilities.

Making good use of this data is complex, however. To gather it in the first place, firms need to develop a shopping list of the companies or financial products they want data on. After that, they need to decide what information to collect. Once the data is sourced, they need to make clear to business users which data sets are available: what the sources are, when the data was requested, what came back, and what quality checks were undertaken.

In other words, firms need to be transparent about what is available within the company and what its provenance is. They also need to know the contextual information: was the data disclosed directly; is it expert opinion or just sentiment from the public internet; and who has permission to use it? With all this in hand, it becomes much easier to decide which data to use.
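To make that contextual information concrete, the sketch below shows what a minimal catalogue entry for a data set might look like. The field names are illustrative assumptions, not a standard schema or any particular vendor’s data model.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DataSetRecord:
    """Illustrative catalogue entry: what a firm might track per data set."""
    name: str
    source: str                    # vendor, exchange, or internal system
    requested_at: datetime         # when the data was requested
    received_at: datetime          # when it actually came back
    provenance: str                # e.g. "direct disclosure", "public internet"
    quality_checks: list = field(default_factory=list)
    permitted_users: list = field(default_factory=list)

record = DataSetRecord(
    name="eod_equity_prices",
    source="ExampleVendor",        # hypothetical provider
    requested_at=datetime(2022, 5, 2, 6, 0),
    received_at=datetime(2022, 5, 2, 6, 5),
    provenance="direct disclosure",
    quality_checks=["completeness", "stale-price check"],
    permitted_users=["risk", "research"],
)
```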

In addition, there are certain key processes data must go through before it can be fully trusted. If the data is for operational purposes, firms need a data set that is high quality and delivered reliably by a provider they can trust. Because the data will be put into an automated, day-to-day recurring process, there needs to be predictability around its availability and quality.
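As a minimal sketch of what such predictability checks might look like for a daily feed, the function below tests delivery time, completeness and missing values. The thresholds and field names are assumptions for illustration.

```python
from datetime import datetime

def check_feed(rows: list, expected_count: int,
               deadline: datetime, received_at: datetime) -> list:
    """Run basic availability and quality checks on an incoming data set.

    Returns a list of human-readable issues; an empty list means it passed.
    """
    issues = []
    if received_at > deadline:
        issues.append(f"late delivery: {received_at} after {deadline}")
    if len(rows) < 0.99 * expected_count:   # completeness threshold (assumed)
        issues.append(f"only {len(rows)}/{expected_count} records received")
    missing = [r for r in rows if r.get("price") is None]
    if missing:
        issues.append(f"{len(missing)} record(s) missing a price")
    return issues

rows = [{"ticker": "ABC", "price": 101.5}, {"ticker": "XYZ", "price": None}]
print(check_feed(rows, expected_count=2,
                 deadline=datetime(2022, 5, 2, 7, 0),
                 received_at=datetime(2022, 5, 2, 6, 55)))
# ['1 record(s) missing a price']
```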

If, however, the data is for market exploration or research, the user might only want to use each data set once, but is nevertheless likely to be more adventurous in finding new data sets that give an edge in the market. The quality of the data and the ability to trust it implicitly remain critically important nonetheless.

Where existing approaches fall short

Unfortunately, existing approaches to market data management and analytics have a range of drawbacks. IT is typically used to automate processes quickly, but the downside is that financial and market analysts end up hardwired to specific data sets and data formats.

It is often difficult with existing approaches to bring in new data sets, because new data arrives in different formats and onboarding and operationalising it is typically very costly. Whether users want to bring in a new source or connect a new application or financial model, the work tends to be expensive and error prone.

Added to that, it is often hard for firms to ascertain the quality of the data they are dealing with, or even to make an educated guess as to how much they can rely on it.

Market data collection and preparation, on the one hand, and analytics, on the other, have also historically been different disciplines, separately managed and executed. So, when a data set comes in, somebody gets to work verifying, cross-referencing and integrating it. Then the data has to be copied into another database before another analyst can run a risk or investment model against it.

In summary, it is hard to get the data in to begin with, and then cumbersome to put it into a shape, form and place where an analyst can get to work on it. The logistics simply don’t lend themselves to a fast turnaround.

Finding a way forward

The latest big data management tools can help a great deal in this context. They typically use cloud-native technology, making them easy to scale up and down with the intensity or volume of the data. Cloud-based platforms also give firms a more elastic, pay-for-what-you-use pricing model, ensuring they only pay for the resources they consume.

The latest tools also facilitate the integration of data management and analytics, something that has proven difficult with legacy approaches. Underlying technologies like Cassandra and Spark make it much easier to bring business logic or financial models to the data, streamlining the whole process and driving operational efficiencies.
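As a hedged sketch of what ‘bringing the model to the data’ can look like with Spark, the PySpark snippet below computes daily returns and flags implausible moves directly where the prices live, rather than copying the data out first. The file path, column names and threshold are assumptions for illustration.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("returns-check").getOrCreate()

# Hypothetical location and schema: date, ticker, close.
prices = spark.read.parquet("s3://example-bucket/eod_prices/")

# Compute each ticker's daily return in place, without moving the data.
w = Window.partitionBy("ticker").orderBy("date")
returns = (
    prices
    .withColumn("prev_close", F.lag("close").over(w))
    .withColumn("daily_return", F.col("close") / F.col("prev_close") - 1)
)

# Flag implausible one-day moves (threshold is an illustrative assumption).
returns.filter(F.abs(F.col("daily_return")) > 0.25).show()
```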

In addition to all this, in-memory data grids can be used to deliver fast response times on queries, together with integrated feeds that streamline onboarding and simplify distribution. These feeds provide last-mile integration to both consuming systems and users, giving them the critical business intelligence that supports faster, better-informed decision making.
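As a simple stand-in for an in-memory data grid, the sketch below caches the latest quote per ticker in Redis using the redis-py client. The host, key scheme and TTL are illustrative assumptions, not a prescribed architecture.

```python
import json
import redis

# A local Redis instance stands in for an in-memory data grid here.
grid = redis.Redis(host="localhost", port=6379, db=0)

def publish_quote(ticker: str, bid: float, ask: float) -> None:
    """Cache the latest quote under a per-ticker key with a short TTL."""
    grid.set(f"quote:{ticker}", json.dumps({"bid": bid, "ask": ask}), ex=60)

def latest_quote(ticker: str):
    """Fast read path for consuming systems and end users."""
    raw = grid.get(f"quote:{ticker}")
    return json.loads(raw) if raw else None

publish_quote("ABC", 101.4, 101.6)
print(latest_quote("ABC"))  # {'bid': 101.4, 'ask': 101.6}
```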

Driving data RoI

Ultimately, every firm operating in the finance or financial services space should be looking to maximise its data return on investment (RoI). That means sourcing the right data and then ensuring the firm gets the most from it. The ‘know your data’ message is important here: finance firms need to know what they have, understand its lineage and track its distribution. That is the essence of good data governance.

Just as important, the approach these firms take should drive business-user enablement. Finance businesses need to ensure all their stakeholders know what data is there and can easily access the data they need. That is what will ultimately drive true competitive edge for these organisations, and the latest big data management tools certainly make it easier to achieve.
