The knowledge platform for the financial technology industry

A-Team Insight Blogs

Alt Data’s New Competitive Edge: From Discovery to Synthesis

Has the alternative data industry crossed a maturity threshold?

The competitive advantage has migrated from simply having access to novel datasets to building superior frameworks for combining them, and AI is the engine driving that shift. But as a panel of senior practitioners made clear at the recent A-Team/Eagle Alpha Alternative Data Conference in New York, the industry is still working out where machine intelligence ends and human judgment begins.

The panel, moderated by Christos Koutsoyannis, Chief Investment Officer of Atlas Ridge Capital and adjunct professor at NYU Courant, brought together Norman Niemer, Head of Investment Research and Data Science (Real Estate) at UBS Asset Management; Eliza Raphael, Head of Data Strategy at Jump Trading; David X Martin, Chairman and CEO of Arctium Capital Management; Andrew Sprague, VP of the Investor Vertical at Sensor Tower; and Apurv Jain, Founder and CEO of MacroX. The discussion was held under the Chatham House Rule.

From table stakes to synthesis

An audience poll at the start of the session confirmed what most in the room already suspected: AI-driven data processing and modelling has had the biggest real impact on alternative data programmes over the past year, followed by budget pressure and the prioritisation of ROI. Notably, expansion of private market datasets received zero votes – a telling indicator of where practitioner attention is actually focused.

The panel’s opening theme was that much of what was once called alternative data is now simply a requirement for quantitative teams. The alpha no longer resides in access alone. Multiple vendors now offer comparable coverage across familiar categories – credit card data, web traffic, satellite imagery, etc. – and the edge has shifted to how firms synthesise those inputs with traditional data and with each other. Panellists described a competitive landscape in which the discovery phase has largely been commoditised and the winners are those investing in combination frameworks, integration pipelines, and the analytical layers that sit on top of raw data.

This shift is accelerating the long-discussed convergence between quantitative and discretionary trading. Panellists noted that AI and LLM tools are enabling both sides to work with data that was previously too narrow for systematic strategies or too complex for fundamental analysis, creating what several described as genuinely new and orthogonal sources of alpha.

Where the budgets are moving

The reallocation of resources tells its own story. Panellists described a broad pattern of budget moving away from active long-only equities and toward passive, automated, and data-intensive strategies. One described a striking before-and-after view at a major asset manager: the former head of active equities is now the head of AI. That single career trajectory captures the direction of institutional capital.

At the same time, firms are centralising their AI efforts. Where data science and machine learning capabilities were previously scattered across individual teams with ad hoc mandates, there is now a concerted push toward firmwide AI strategies. Some panellists predicted this centralisation phase would eventually give way to decentralisation again, as best practices mature and specialist product groups re-emerge. But for now, the institutional instinct is to consolidate.

On the data spend itself, several panellists pushed back against the assumption that budgets are being cut. The argument was that as firms automate more of their workflows using LLMs, they actually need more data, not less, in part because avoiding hallucinations and ensuring data quality requires richer, more validated inputs. The net effect may be that some legacy vendor relationships are being swapped out in favour of providers whose data is more amenable to LLM-based workflows, but overall spend on data has likely increased.

Speed, rejection, and quality

For firms working at scale – particularly in macro, where hundreds or thousands of data sources may be required to build a coherent signal – the must-have capabilities have shifted toward operational speed. Panellists described a world in which the ability to ingest new data sources rapidly and, crucially, to reject unpromising ones quickly is now more valuable than methodical, months-long evaluation cycles. Where it once took a well-resourced team six months to move from a new data source to an investable signal, AI is compressing that timeline considerably.

But speed creates its own risks. Panellists emphasised that continuous data quality monitoring has become non-negotiable. If a firm is ingesting data at scale, it needs to know immediately whether an anomalous signal reflects a genuine market move or a data quality failure. The cost of hesitation – pausing to wonder whether the data is broken rather than acting on a real signal – is itself a source of lost alpha.

The vendor moat problem

From the vendor side, the panel surfaced a pointed concern: AI is a double-edged sword for data providers. If a dataset is not built on proprietary first-party data – a unique panel, an exclusive collection methodology, or inputs that cannot be reverse-engineered – then AI makes it trivially easy to build a cheaper competitor. The only durable moat, panellists argued, is genuine proprietary ownership of the underlying data. Vendors relying on publicly available inputs processed through clever models face an existential challenge as those models become commoditised.

This extends to the LLM providers themselves. Several panellists observed that the major foundation models from OpenAI, Anthropic, and Google are increasingly commoditised at the consumer level, with users switching between them based on cost, brand sentiment, or even political signals rather than meaningful differences in output quality. Enterprise differentiation may persist for specific use cases, but the panel suggested that for many standard tasks the models are converging.

Intelligence, judgment, and the limits of automation

The sharpest exchanges on the panel concerned the boundary between what AI can and cannot do. One line of argument held that AI sits firmly in the “intelligence” bucket – it can gather, process, and surface information faster and perhaps better than humans – but that judgment and wisdom remain fundamentally human capabilities. The example offered was crisis pricing: when markets seize up, current data dries up, and comps become unreliable, the decision about how to value an illiquid position ultimately rests on human judgment and the willingness to live with consequences that no model can fully anticipate.

Others pushed back, noting that humans are hardly reliable repositories of judgment either, particularly when it comes to probabilistic reasoning and data interpretation. The consensus, to the extent one emerged, was pragmatic: AI is the right tool for some problems and the wrong tool for others, and the firms that will thrive are those with enough internal expertise to understand the difference and build appropriate frameworks around each.

The concern about AI-washing was raised repeatedly. Panellists described seeing investors act on LLM-generated research theses without proper validation, and foundation models confidently returning information drawn from the wrong company’s earnings transcript. The gap between claiming to use AI and using it well is substantial, and panellists argued it will ultimately show up in performance.

Looking forward

A closing audience poll asked where organisations are most likely to increase alternative data spend over the next twelve months. The result was a near-perfect three-way split between AI tools for analysing existing datasets, new alternative data sources, and data infrastructure and integration. Compliance and governance trailed well behind.

That three-way split may be the most telling data point of the session. It suggests an industry that has moved past the question of whether to invest in AI and alternative data and is now working through the operational reality of how, with no single dominant strategy and a recognition that infrastructure, new data, and analytical tooling all need investment simultaneously. As one panellist noted, innovation of this magnitude takes longer than anyone expects. The firms that treat this as a multi-year buildout rather than a single technology bet are likely to be the ones still standing when the dust settles.
