By Evan Schnidman, President of Prattle, a Liquidnet company.
For decades, the asset management community has treated research and data as two distinct goods – separate commodities to be controlled by separate institutions. Data was the domain of Bloomberg, Thomson Reuters (now Refinitiv), FactSet, S&P, and others, while research was the domain of large banks and boutique providers whose analysts drafted copious long-form notes for human consumption.
But things have changed. The rapid spread of ‘alternative data’ has highlighted the inescapable fact that research and data are both crucial parts of the information slurry necessary to make sound investment decisions. Forward-thinking asset management firms have realised that research and data increasingly go hand-in-hand, and are leveraging this synergy to their advantage.
Data is an increasingly broad category, and research is increasingly quantitative. The current breadth of data can be traced back to the advent of what is now known as alternative data. Corporate exhaust data, purpose-built datasets, and the quantification of previously qualitative information have led to an explosion of datasets that can be used to both streamline the research process and provide quantitative rationale for decisions previously driven solely by qualitative information. This explosion in information has empowered portfolio managers and analysts to become increasingly ‘quantamental’, using a blend of data and research for optimal results.
With so many providers offering such a wide range of information in today’s market, defining alternative data is a nearly impossible task. That said, one fact remains clear: alternative data is much more similar to research than it is to market data. Where market data – such as price information – is purely a commodity necessary to conduct business in financial services, alternative data – and the investment signals it yields – remains far more open to interpretation and is thus more akin to research. In fact, much purpose-built alternative data is designed specifically to streamline the traditional research process. For instance, Prattle provides a tool to automatically extract the most salient remarks from company executives on each earnings call, essentially readymade quotations for a long-form research report.
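To make the idea of automated remark extraction concrete, here is a minimal sketch of classic extractive scoring: rank each sentence of a transcript by the average corpus frequency of its words and keep the top few. This is purely illustrative and is not Prattle's actual algorithm; the function names and the frequency heuristic are assumptions for the example.

```python
import re
from collections import Counter

def top_sentences(text: str, k: int = 2) -> list:
    """Return the k highest-scoring sentences from a transcript.

    Score = average frequency of a sentence's words across the whole
    text, a crude proxy for salience (illustrative heuristic only).
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    return sorted(sentences, key=score, reverse=True)[:k]

call = ("Revenue grew. Revenue grew again and revenue margins grew. "
        "Costs fell.")
print(top_sentences(call, 1))  # the sentence densest in frequent words
```

A production system would layer much more on top of this (domain lexicons, speaker attribution, machine-learned salience models), but even this toy version shows how ‘readymade quotations’ can fall out of a purely mechanical process.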
This Prattle Core Comments feature is just one example of the ways that alternative data can serve to streamline the research process, but it is worth noting that these types of tools can also assist quants and fundamentals in reconciling their respective trading strategies. While quants are known to be purely systematic, striving to be devoid of bias, they often operate with incomplete information. On the other hand, fundamental managers often argue that their ability to assimilate both quantitative and qualitative information allows them to get closer to making investment decisions based on complete information, but their interpretation of that information is fraught with bias.
That is where alternative data comes in. By applying Natural Language Processing (NLP) coupled with machine learning and AI tools, unstructured, qualitative information, such as corporate earnings calls, can now be analysed in an unbiased, quantitative way. In this way, content becomes data. This data can not only be used as a directly tradable signal by the quant, but can also serve as a check on the fundamental manager’s inherent cognitive bias. Applying technology in this manner can help strategies perform better, while also allowing quants and fundamentals to speak the same language.
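As a toy illustration of how content becomes data, the sketch below scores an earnings-call passage with a tiny hand-written sentiment lexicon, collapsing qualitative language into a single number between -1 and +1. The lexicon and scoring rule are assumptions for the example, not a real financial NLP model (practitioners would use purpose-built dictionaries and learned models):

```python
import re

# Tiny illustrative lexicon (an assumption for this sketch, not a
# real financial sentiment dictionary).
POSITIVE = {"growth", "strong", "record", "exceeded", "improved"}
NEGATIVE = {"decline", "weak", "headwinds", "missed", "impairment"}

def sentiment_score(transcript: str) -> float:
    """Score text in [-1, 1]: +1 all-positive hits, -1 all-negative."""
    words = re.findall(r"[a-z']+", transcript.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

call = ("We delivered record revenue and strong margin growth, "
        "despite currency headwinds in the quarter.")
print(sentiment_score(call))  # 3 positive vs 1 negative hit -> 0.5
```

The output is exactly the kind of signal described above: a quant can backtest it systematically, while a fundamental manager can compare it against their own reading of the call as a bias check.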
The blurry line between research and data prompted Prattle to begin using the term ‘research automation’. Although this term has not caught on yet, it remains the most accurate way to describe the very best forms of alternative data. Whereas 20 years ago investors would do research by driving around to count the number of cars in Home Depot parking lots, now they can simply purchase satellite data with those same counts. This means that precisely the same information, the number of cars in parking lots, has transformed from research produced manually by humans into data produced automatically by machines. Such fluidity further highlights the blurriness of the line between research and data.
It is apparent that research and data are increasingly a distinction without a difference. Portfolio managers and analysts need both kinds of information to make informed investment decisions. The confluence of evolving AI and NLP technology with the rapid explosion in available and easily stored data has empowered asset managers to rethink how they procure information. Widespread adoption of these tools has the potential to increase returns, lower operating costs and transform active asset management.