A recent industry periodical spoke of data as “ubiquitous… touching all corners of an enterprise and becoming the lifeblood of an organization.” Having spent my 25-year career at various securities data vendors, I can say there’s nothing new here.
Data has always driven investment decisions, transaction processing, back office operations, securities administration, accounting, compliance, reporting and so on. Securities identification has always been an essential factor, impacted by corporate affiliations and compounded by new instruments and cross-border trading. Difficult-to-find information on fixed income instruments and corporate actions has entailed ongoing risk. And, yes, the people dedicated to getting data right have long been around.
Many of us recognize that the evolution of financial services, coupled with technology, has taught us a lot about the importance and complexity of maintaining accurate, timely securities data.
High interest rates in the 1980s increased the need to access bond call and sinking fund schedules; the explosion of mortgage-backed securities necessitated easy access to pool factors. Both are still key issues.
Rapid expansion in mutual fund investment necessitated more timely pricing and ongoing, growing data demands.
The long-term drive to perfect security master files across the industry has given rise to new data and software requirements. Straight-through processing (STP) was a great concept, but a bit of the “cart without the data horse,” underestimating the role of automated data links.
With increasing focus on risk, attention returns to the issuer-issue link.
Lest we forget: the never-ending drive to determine a “correct” bond price or that of any complex instrument.
Now electronic trading – ECNs and Alternative Trading Systems – is upping the ante on timeliness.
The same problem data areas remain on management’s top-10 lists¹. The “reference data” umbrella ties it all together, but technology and infrastructure are still catching up.
So, why is the industry still struggling to cope with these data issues? Because they’re very hard to solve!
Complex instruments, global investing, legacy systems, and multiple locations make it even harder to be right, timely and consistent.
Data production remains complicated: back offices are still pulling data off various terminals manually and faxing it across locations. This lack of automation introduces error and inconsistency.
In spite of the industry’s drive to reduce redundant data costs and become operationally efficient, significant redundancy remains across firms. This does little to help a firm add value in its services.
Regulatory compliance and risk management are big and expanding, increasing the focus on data.
The confluence of all of the above makes it hard to set data improvement priorities.
What is new?
A significant factor, beyond automation and technology, is data awareness at all levels, across the firm and the industry. Yes, senior management is giving data managers deserved recognition, but just as important, senior managers are encouraging out-of-the-box thinking and supporting the search for solutions beyond the firm.
Data centralization across the enterprise could be just a beginning. With new technology options and a focus on integration and efficiency, manual intervention could diminish. A shift from intra-firm to inter-firm data sharing, especially for the consolidation, validation, and cleansing of commodity-like data, could decrease industry redundancy in an operationally efficient way and be a big next step toward a “paradigm shift” in data management.
Time will tell where the shift takes us. In the meantime, it’s good to see the industry empowering the people dedicated to getting data right.