Bringing Alternative Data into the Mainstream

By Martijn Groot, Vice President of Marketing and Strategy at Asset Control.

Alternative and unstructured data is rapidly going mainstream. In a recent survey conducted by Adox Research for Asset Control, more than a third of respondents from financial institutions (36%) labelled these new types of data as ‘high importance’ drivers for investment. Alternative data inventory guides and consulting firms are growing in number too.

The appeal of alternative data to financial services organisations is clear: it can change the ‘data game’ within these firms, adding insight and information not available via traditional content products. The ability to ‘travel across’ different types of data and put them in a meaningful context, not only for generating alpha but also for understanding current operational status, is attractive for any financial services business. That, in a nutshell, explains the growing interest in alternative data. After all, one definition of intelligence and learning is the ability to make new connections.

We are seeing alternative data used to help shape investment decisions; many quant funds use it to gain an advantage in market insight. But while this kind of data has huge potential, there are big gaps between how it is being used today and how it could be used in the future. Alternative data is often not structured and organised in the same way as traditional data feeds, so part of the curation job that data vendors normally do is shifted in-house, as the sketch below illustrates.
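
As a minimal illustration of that in-house curation work, here is a short Python sketch that turns a raw text snippet into a structured record. The snippet, the ticker pattern and the field names are invented for illustration and do not reflect any real vendor feed:

```python
# Hypothetical sketch: curating a raw news snippet into a structured record.
import re
from datetime import date

raw_news = "2024-03-01: ACME.L guides Q1 sales up ~4%; GLBX.N unchanged."

record = {
    "date": date.fromisoformat(raw_news[:10]),   # parse the leading date stamp
    "tickers": re.findall(r"\b[A-Z]{2,5}\.[A-Z]\b", raw_news),  # crude ticker pattern
    "text": raw_news[12:],                        # keep the body for later NLP steps
}
print(record)
# {'date': datetime.date(2024, 3, 1), 'tickers': ['ACME.L', 'GLBX.N'], 'text': ...}
```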

Plotting the Route Forward

Currently, though, while more and more alternative or unstructured data is coming into financial services organisations, there is little activity focused on mastering and structuring it. In line with the industry shift from data warehouses to data lakes, the work effort has moved from the inbound side to the outbound side. In the data warehouse era, data was pre-processed to cast incoming data sets into a predefined schema, making querying very easy once the data was in. In data lakes, ingestion is fast and easy in the absence of schema constraints, but more effort has to be put into the querying side to connect and process the data and get to the answers hidden in the data sets.
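
To make that shift concrete, here is a minimal Python sketch, with invented field names and payloads, contrasting the two approaches: schema enforced on the way in (warehouse style) versus schema applied at query time (lake style):

```python
# Illustrative sketch (not any specific product's implementation): the same
# record handled schema-on-write versus schema-on-read.
from dataclasses import dataclass
import json

@dataclass
class Quote:                      # predefined schema, enforced on the way IN
    isin: str
    price: float
    currency: str

def warehouse_ingest(raw: dict) -> Quote:
    # Warehouse style: casting/validation happens up front at load time,
    # so every later query can rely on the schema.
    return Quote(isin=str(raw["isin"]), price=float(raw["price"]),
                 currency=str(raw.get("currency", "USD")))

# Lake style: ingestion is trivial -- store the raw payloads as-is ...
lake = [json.dumps({"isin": "US0378331005", "price": "189.30"}),
        json.dumps({"id": "US0378331005", "px": 188.9, "currency": "USD"})]

def lake_query_price(records: list[str], isin: str) -> list[float]:
    # ... and the schema work moves to the OUT side: every query must
    # reconcile field names and types before it can answer anything.
    prices = []
    for rec in records:
        doc = json.loads(rec)
        if doc.get("isin", doc.get("id")) == isin:
            prices.append(float(doc.get("price", doc.get("px"))))
    return prices

print(warehouse_ingest({"isin": "US0378331005", "price": "189.30"}))
print(lake_query_price(lake, "US0378331005"))
```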

That is limiting the insight firms can achieve when using data to address complex problems, and it is now starting to give them serious pause for thought. If they are looking at a particular company, for example, how do they bring together the structured data (traditional equity, CDS and corporate bond data sets) and the unstructured data (indirect estimates of sales figures, earnings call transcripts, news feeds, social media) they hold on it, join it up, and query it as a single source in order to make the most informed business decision? The answer lies in machine learning techniques that can systematically link data sets, detect outliers and help users get to the answers hidden in the data.
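
As a rough sketch of those two building blocks, the hypothetical example below links noisy company names to a clean reference name using fuzzy string matching (a stand-in for proper entity resolution) and flags outlying estimates with scikit-learn's IsolationForest. All names and figures are invented:

```python
# Hypothetical sketch of (1) linking records that lack a shared key and
# (2) flagging outliers among the linked observations.
import difflib
from sklearn.ensemble import IsolationForest

# Structured side: keyed by a clean legal name.
fundamentals = {"Acme Industries PLC": {"reported_sales": 120.0}}

# Unstructured side: noisy names scraped from transcripts and news feeds,
# each paired with an indirect sales estimate.
signals = [("Acme Industries", 118.5), ("Acme Inds. PLC", 131.0),
           ("Acme Industries plc", 119.2), ("Globex Corp", 54.0)]

def link(noisy_name: str, universe: list[str]) -> str | None:
    # Fuzzy string matching as a simple stand-in for real entity resolution.
    match = difflib.get_close_matches(noisy_name, universe, n=1, cutoff=0.6)
    return match[0] if match else None

linked = [(name, est) for name, est in signals
          if link(name, list(fundamentals)) == "Acme Industries PLC"]

# Outlier detection over the linked estimates: IsolationForest marks
# observations that sit far from the rest with -1.
X = [[est] for _, est in linked]
flags = IsolationForest(contamination=0.25, random_state=0).fit_predict(X)
for (name, est), flag in zip(linked, flags):
    print(f"{name}: {est} {'<-- outlier' if flag == -1 else ''}")
```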

Where we are seeing the most significant developments is in rapidly closing the gap in tools that integrate these data sources into day-to-day business workflows, overcoming the key challenge of bringing structured and unstructured data together. Crudely put, if you can’t put a data source to use, interest in it will rapidly fade.

Adoption areas range from compliance (early use cases studied behavioural patterns in large transaction data sets) to gaining an investment edge. In the latter case, an analogy can be drawn between using alternative data sets and building an increasingly accurate and complete model, map or representation of the financial world: it is rather like moving from a simple compass to the latest, most technologically advanced satnav. Alternative data sets provide additional detail, or even additional angles, for users to explore.

Furthermore, the tooling required to navigate and map this new data world is growing too, so that users do not get lost or fail to see the wood for the trees. Putting this kind of tooling in place matters: imposing structure, inferring connections between data sets in structured and unstructured environments, and detecting patterns across all these areas is the task of data management. It drives insight and makes the new data sets actionable. A further important area of use cases is risk assessment, where the data intensity of risk and reporting processes is likely to continue to grow.

While the amount of alternative data available to financial services firms is growing all the time – and the number of use cases is escalating – there is still a lot of work to do to harness and more efficiently manage alternative data, and to integrate, or at least align, it with more structured data to gain comprehensive insight into patterns and trends. It will take time for financial services to realise the potential that alternative data offers, but with tools and solutions becoming increasingly capable of aligning unstructured with structured data, the future of this new data class as a driver of business value looks very bright indeed.

2 Replies to “Bringing Alternative Data into the Mainstream”

  1. Hi Martijn, nice article. What is Asset Control’s strategy toward alt data? Are you letting your clients drive, with Asset Control acting as a data hub for all types of data including alternative, and/or do you have a proactive strategy with alt data vendors directly?

    1. Hi Steve, thanks for your comment. The answer is a bit of both. Proactive in our common use cases in valuation, middle office, data quality management and regulatory reporting where we look for data sets that provide additional colour and context. More client-led in other cases.
