By Tom McHugh, CEO and Co-founder, FINBOURNE Technology.
I recently headlined a session at this year’s InvestOps Connect event, where we talked at length about the data struggle. I’ve been in this industry for longer than I’d like to admit, and that we’re still talking about data challenges in 2021 is an unfortunate symptom of an industry steeped in legacy systems and an unhealthy love of bespoke Excel sheets that are neither connected to core data sources nor controlled in any way.
While asset managers are not in the business of data management, there is no denying that data is central to investment management. Accessing and understanding data in the investment chain is crucial. To do this you need a process that translates all the disparate data sets from across organisational silos, legacy systems and even data lakes if you’ve gotten that far.
The reality is that process is still very much a work in progress. In my session, over half of the asset management firms cited the joining of disparate data sets as their biggest data challenge today (56%), followed by ESG data (22%), which has rapidly proliferated in the last decade and is rightfully demanding its place in mainstream investment management.
Some have turned to disruptive technologies to solve the issue, but while artificial intelligence (AI), machine learning (ML) and robotic process automation (RPA) have their uses in the industry (unstructured data being one of them), the bottom line is, if the data isn’t reliable or accessible across your organisation, these technologies are unlikely to add value or resolve the data headache long term.
So what does it come down to? How can you tackle the data challenge once and for all?
One of the things I have come to realise from my engagement with the buy-side is that a firm can have the most advanced developers and API framework, yet there is still a chasm between those creating the programs and those using them, and that gap is widening.
Educating users to understand and correctly interact with the underlying data is critical. A lot of firms try to get around this with an API strategy, and while this provides an efficient toolkit, it is really a lack of education that is one of the root causes of a firm’s data impediments.
When I say education, I’m not talking about technical training but basic principles, for instance, understanding that data isn’t linear. You could have the most advanced data lake comprising all your investment, market and reference data, but you simply can’t apply linear logic to this data. Adding up your trades does not give you positions – it’s path dependent, subject to events like corporate actions and that’s just the start.
It is never static, changing status either as it moves along the investment workflow or as it moves from one system to the next. Understanding this empowers firms to essentially understand the implications for their systems, the interfaces sitting on top of this data, and the downstream teams relying on it.
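The point that positions are path dependent can be made concrete with a minimal Python sketch. The event types and figures below are made up for illustration; the idea is simply that once corporate actions enter the stream, the order of events matters, so summing trade quantities is not enough.

```python
from dataclasses import dataclass

# Hypothetical event types: trades alone do not determine a position
# once corporate actions enter the event stream.
@dataclass
class Trade:
    quantity: int

@dataclass
class StockSplit:      # corporate action: multiplies the existing holding
    ratio: int

def position(events):
    """Replay events in order; the result depends on the path taken,
    not just on the sum of trade quantities."""
    qty = 0
    for e in events:
        if isinstance(e, Trade):
            qty += e.quantity
        elif isinstance(e, StockSplit):
            qty *= e.ratio
    return qty

# The same two trades around a 2-for-1 split give different positions
# depending on where the split falls in the sequence.
a = position([Trade(100), StockSplit(2), Trade(50)])  # (100 * 2) + 50 = 250
b = position([Trade(100), Trade(50), StockSplit(2)])  # (100 + 50) * 2 = 300
```

Linear logic (summing the trades) would say 150 in both cases; replaying the path gives two different, and correct, answers.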
Ownership and Entitlement
Alongside education is ownership, which is often mistaken for the actions relating to a particular data set. A typical issue at many institutions is that portfolio managers don’t have access to the front office order management and execution management systems, so they create a subset of data, piecing together what they know with whatever data they can access, which produces a picture that diverges from the one in the front office. Managing the right access controls for the right audience is one of the biggest challenges operations teams face.
The second challenge in this area is working with different states of data and ensuring the right entitlements for the right user. Externally, end clients want real-time data at their fingertips, but the issue asset managers have is controlling what the end client sees, cue another round of arduous manual processes to make the data client friendly. Yet technology can do this intuitively, instantly and accurately, delivering access for end clients without erroneously including settlement failures that need internal attention.
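To illustrate the entitlement idea, here is a minimal Python sketch with made-up field names and figures: the same internal feed drives two views, and the rule that end clients see settled activity only, while operations see the failures, is expressed once in code rather than applied by hand.

```python
# Hypothetical internal transaction feed; statuses and amounts are
# invented for illustration only.
internal_feed = [
    {"trade_id": "T1", "status": "settled",           "amount": 1_000_000},
    {"trade_id": "T2", "status": "settlement_failed", "amount":   250_000},
    {"trade_id": "T3", "status": "settled",           "amount":   500_000},
]

def client_view(feed):
    """Entitlement rule: end clients see settled activity only."""
    return [row for row in feed if row["status"] == "settled"]

def ops_view(feed):
    """Operations see the settlement failures that need internal attention."""
    return [row for row in feed if row["status"] == "settlement_failed"]
```

The client view here contains T1 and T3; the failed trade T2 is routed to operations instead of leaking into the client-facing picture.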
The influx of data officers and data governance programs has gone some way to defining and addressing these problems, as well as ensuring a good flow of data throughout the organisation. To that end, understanding where the breaks in data occur and creating a trusted data layer across the organisation, particularly for mission-critical functions, is key to success.
Breaking through silos
Which brings us to silos. A tale as old as time, and still as valid today as it was before 2008. When we talk about silos, it’s important to know they exist in different forms. Not just as departments – front office, middle office and back office – but also in different formats and languages. There are a multitude of data identifiers, from ISINs and LEIs to transaction codes and security identifiers.
There are also different methodologies for deriving data, such as present value. Essentially, there is no one language for data, and that is where the challenge gets real. What the industry needs is a tool that can intuitively translate and connect front to back data sets so that the data can be used effectively and reliably across investment functions.
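The translation problem can be sketched in a few lines of Python. This is a toy example, not any actual product’s API: a security master maps each identifier scheme a system might use onto one internal id, so that data keyed by ISIN in one silo and by SEDOL in another can be joined.

```python
# Toy security master: one row per security, with its identifier under
# each scheme. The identifiers shown are illustrative examples.
security_master = [
    {"isin": "GB00B03MLX29", "sedol": "B03MLX2", "internal_id": "SEC-001"},
]

def resolve(scheme, value):
    """Translate an identifier in any known scheme to the firm's
    single internal id, or None if it is unrecognised."""
    for row in security_master:
        if row.get(scheme) == value:
            return row["internal_id"]
    return None

# Two systems, two identifier languages, one security:
resolve("isin", "GB00B03MLX29")   # -> "SEC-001"
resolve("sedol", "B03MLX2")       # -> "SEC-001"
```

In practice this mapping is the hard part, since identifiers change over time and coverage is never complete, but conceptually it is what a front-to-back translation layer provides.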
Where there are barriers, there are solutions. If we accept that silos, together with a lack of education and entitlement are part and parcel of the industry today, then a new standard for investment data management is needed. A benchmark for accessing, understanding and controlling organisational data across the system landscape.
Having witnessed first-hand the sheer pain that comes from broken data, inconsistent views and the manual reconciliation that follows, we decided it was time to create that standard. A cloud native and open source platform that ingests data from all systems and interfaces, such as portfolio holdings and transactions data, via open APIs, and virtually unifies it all into a single repository where you can own and control your data, without the need for spreadsheets or yet another non-differentiating in-house build.
The answer is a bitemporal Investment Book of Record (IBOR), which composes holdings along a non-linear timeline, where you can easily rewind and access your investment data, for anything from reconciliation, performance, risk and reporting to business intelligence and distribution. That frees your resources from daily manual processes and lets you leverage the skills and talent you hired in the first place.
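The bitemporal idea can be shown with a minimal Python sketch. The field names and dates are assumptions for illustration, not an actual IBOR schema: each holding carries both an effective (business) date and the system date it was recorded, so you can rewind on either axis and ask what you held on a given date, as the system knew it at a given time.

```python
from datetime import date

# Illustrative bitemporal holdings for one instrument:
# (instrument, quantity, effective_date, recorded_date)
holdings = [
    ("XYZ", 100, date(2021, 3, 1), date(2021, 3, 1)),
    ("XYZ", 120, date(2021, 3, 1), date(2021, 3, 5)),  # back-dated correction
]

def as_of(instrument, effective, recorded):
    """What did we hold on `effective`, as the system knew it
    on `recorded`? Takes the latest record known at that time."""
    candidates = [
        h for h in holdings
        if h[0] == instrument and h[2] <= effective and h[3] <= recorded
    ]
    return max(candidates, key=lambda h: h[3])[1] if candidates else 0

# Before the correction was recorded, the system said 100...
as_of("XYZ", date(2021, 3, 1), date(2021, 3, 2))   # -> 100
# ...afterwards, the same business date reads 120, and both answers
# remain reproducible forever.
as_of("XYZ", date(2021, 3, 1), date(2021, 3, 6))   # -> 120
```

That reproducibility is what makes bitemporality useful for reconciliation and audit: a report run last week can be regenerated exactly, even after corrections have arrived.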
As I said earlier, asset managers aren’t in the business of data, but there’s no escaping the fact that data underpins everything you do. It’s time to stop the data drag and fundamentally change the way the industry manages investment data. It’s time to liberate your resources, simplify your operations and connect your data to empower your organisation.