
Innovation in Data and Analytics: Turning Market Data into Strategic Advantage

At the A-Team Group’s recent TradingTech Summit London, a panel of leading industry practitioners explored the evolving role of data and analytics in trading, delving into questions such as: What types of data are truly useful in today’s market environment? How should firms think about the build-versus-buy decision when constructing data platforms? What makes data a real differentiator? And how are firms grappling with the challenges of quality, latency, and scale?

Moderated by Andrew Delaney, President & Chief Content Officer at A-Team Group, the discussion brought together insights from across the capital markets value chain, featuring Will Winzor-Saile, Partner, Execution Analytics & Architecture at Redburn Atlantic; Anna Branch, Head of Strategic Partnerships at TP Quilter; Dr. Elliot Banks, Chief Product Officer at BMLL; Peter Simpson, Product Owner at ONETICK; and Diana Stanescu, Director Finance and Capital Markets at Keysight.

The conversation underscored a core theme: the differentiator is not the volume of data, but the ability to transform data into actionable, timely insights.

From Access to Usability: The Shifting Data Challenge

Access to granular, high-quality market data has improved significantly in recent years, with panellists pointing to the availability of full-depth, tick-by-tick historical data now readily accessible off the shelf. This has levelled the playing field, particularly for firms outside the top tier, enabling them to build models, run robust TCA, and uncover new trading signals.
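To make that point concrete, the sketch below shows one of the simplest TCA measures that tick-level data enables: slippage of each fill against the mid price prevailing at order arrival. It is illustrative only; the column names (timestamp, bid, ask, arrival_time, fill_price, side) are assumptions rather than any particular vendor's schema.

```python
# Hypothetical sketch: arrival-price slippage (a basic TCA measure) computed
# from tick-level quote data and a list of fills. Column names are illustrative.
import pandas as pd

def arrival_slippage_bps(quotes: pd.DataFrame, fills: pd.DataFrame) -> pd.Series:
    """Slippage of each fill versus the mid price at order arrival, in basis points."""
    quotes = quotes.sort_values("timestamp")
    fills = fills.sort_values("arrival_time")

    # Mid price prevailing at each order's arrival time (backward as-of join).
    merged = pd.merge_asof(
        fills,
        quotes[["timestamp", "bid", "ask"]],
        left_on="arrival_time",
        right_on="timestamp",
        direction="backward",
    )
    arrival_mid = (merged["bid"] + merged["ask"]) / 2

    # Positive slippage = paid more (buys) or received less (sells) than arrival mid.
    signed = merged["side"].map({"buy": 1, "sell": -1})
    return signed * (merged["fill_price"] - arrival_mid) / arrival_mid * 1e4
```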

Yet greater access brings new challenges. As one panellist highlighted from a buy-side perspective, the emphasis has shifted toward “flexibility and accessibility,” not just in the data itself, but in the commercial, contractual, and integration frameworks surrounding it. The goal is a data architecture that supports a mix of internal and external sources across asset classes and use cases, while remaining interoperable and vendor-agnostic.

The quality and structure of metadata and reference data remain foundational. As one panellist observed: “It’s not just the pricing data. There’s a whole set of ancillary data – symbologies, calendars, constituents, corporate actions – that you need to make sense of your core analysis.”
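As an illustration of why that ancillary data matters, the hedged sketch below joins a symbology map and a corporate-actions table onto raw trade prices so that price history is comparable across a stock split. All field names are hypothetical, and it assumes at most one split per instrument.

```python
# Illustrative only: enriching raw tick prices with ancillary reference data.
# Field names (venue_symbol, isin, split_ratio, ...) are hypothetical.
import pandas as pd

def enrich_prices(prices: pd.DataFrame,
                  symbology: pd.DataFrame,
                  corporate_actions: pd.DataFrame) -> pd.DataFrame:
    # Map venue-specific tickers onto a common identifier (e.g. ISIN).
    enriched = prices.merge(
        symbology[["venue_symbol", "isin"]], on="venue_symbol", how="left"
    )

    # Apply split adjustments so prices are comparable across the split date.
    splits = corporate_actions.query("action == 'split'")[
        ["isin", "effective_date", "split_ratio"]
    ]
    enriched = enriched.merge(splits, on="isin", how="left")
    before_split = enriched["trade_date"] < enriched["effective_date"]
    enriched.loc[before_split, "price"] /= enriched.loc[before_split, "split_ratio"]
    return enriched.drop(columns=["effective_date", "split_ratio"])
```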

Build vs Buy: Where Firms Should Focus Their Resources

One of the most animated discussions centred on whether firms should build and maintain their own curated market data platforms. A live poll of the audience showed that around half had attempted to do so to some extent. This result was met with scepticism from several panellists, who warned of the engineering burden and risk of wasted effort.

“There’s a huge amount of effort in managing something like this yourself,” remarked one speaker. Another added, “Understanding the data alone is a big enough job, before even touching the technical challenges of storage, normalisation, and entitlements.”

The consensus was clear: unless a firm has a compelling reason, market data platform engineering should be outsourced or acquired from specialists. Instead, firms should focus their internal resources on value-adding tasks – such as quant research, back-testing strategies, or client-facing analytics – where their IP can generate real differentiation.

Quality Over Quantity: The Real Differentiator

Much of the panel discussion revisited the idea that data quality, rather than mere availability, is the true enabler of competitive edge. Inconsistent formats, gaps, latency spikes, and opaque trade flags across venues can quickly derail even the most sophisticated analytics. “You can have clean data that’s not usable,” said one panellist. “Consistency and normalisation – without losing signal – are the hardest parts.”
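Two of the hygiene steps alluded to here – translating venue-specific trade flags into a common vocabulary and flagging gaps in a tick stream – might look something like the following sketch. The flag mappings and the five-second gap threshold are illustrative, not any venue's actual codes.

```python
# A minimal sketch of two data-hygiene steps: mapping venue-specific trade
# condition flags onto a shared scheme, and flagging gaps in a tick stream.
# The mappings and threshold are illustrative placeholders.
import pandas as pd

VENUE_FLAG_MAP = {
    "XLON": {"AT": "auction", "OB": "lit", "NT": "off_book"},
    "XPAR": {"AUC": "auction", "CONT": "lit", "OTC": "off_book"},
}

def normalise_flags(trades: pd.DataFrame) -> pd.DataFrame:
    """Translate each venue's trade flags into one shared vocabulary."""
    trades = trades.copy()
    trades["flag_norm"] = [
        VENUE_FLAG_MAP.get(venue, {}).get(flag, "unknown")
        for venue, flag in zip(trades["venue"], trades["flag"])
    ]
    return trades

def find_gaps(trades: pd.DataFrame, max_gap: str = "5s") -> pd.DataFrame:
    """Return rows where the time since the previous tick exceeds max_gap."""
    trades = trades.sort_values("timestamp")
    gap = trades["timestamp"].diff() > pd.Timedelta(max_gap)
    return trades[gap]
```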

For firms engaged in high-frequency trading, this becomes even more pronounced. It was noted that “in high-frequency trading, speed and consistency are not advantages, they’re the game changer.” Infrastructure must not only support nanosecond-level processing but also maintain predictable throughput to avoid execution risk.
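One simple way to quantify that predictability is to report tail latency alongside the median, as in this sketch; the integer nanosecond send_ns and recv_ns columns are assumptions, not any specific feed handler's schema.

```python
# Hedged illustration: summarising latency and its variability (jitter) from
# paired send/receive timestamps, assuming nanosecond-resolution integer columns.
import numpy as np
import pandas as pd

def latency_profile(events: pd.DataFrame) -> pd.Series:
    """Median and tail latency in nanoseconds, plus a simple jitter measure."""
    latency_ns = events["recv_ns"] - events["send_ns"]
    return pd.Series({
        "p50_ns": np.percentile(latency_ns, 50),
        "p99_ns": np.percentile(latency_ns, 99),
        "p99.9_ns": np.percentile(latency_ns, 99.9),
        # Jitter here = spread between the tail and the median; for execution
        # risk this matters as much as the headline median latency.
        "jitter_ns": np.percentile(latency_ns, 99) - np.percentile(latency_ns, 50),
    })
```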

The message was echoed across the panel: good data enables firms to spend time on decision-making and insight, rather than on cleaning and reconciling inputs. “Low-quality data is not a differentiator, it’s an inhibitor,” said another speaker.

Leveraging Cloud and Marketplaces – With Caution

The shift to cloud-native data platforms, supported by technologies like Snowflake and Databricks, has brought new possibilities in scale, collaboration, and cost efficiency. However, panellists warned against assuming cloud adoption automatically resolves legacy complexity.

“There’s still a risk of everyone storing the same dataset over and over again,” said one panellist, advocating for selective storage and shared query access. The optimal model, it was argued, involves determining which datasets need to be physically owned versus those that can be accessed and queried as needed.

Cloud-based data marketplaces also show promise, but success depends on standardising ingestion pipelines, structuring unstructured formats, and maintaining governance around entitlements, usage, and cost attribution.

Unstructured Data and AI: Early Promise, Ongoing Caution

The discussion closed with a forward look at AI and unstructured data. While the use of large language models (LLMs) to extract signals from company reports, filings, and news was seen as a promising area, there was scepticism around overreliance on AI-generated outputs.

“LLMs should be assistants, not experts,” remarked one speaker. “They work well for parsing textual content and summarising inputs, but less so for precision tasks like generating code or building test cases.” Others shared concerns about explainability and trustworthiness, especially in regulated contexts.

Still, structured transformation of unstructured data, such as converting news flow into time series inputs or sentiment measures, was flagged as a key area of innovation. As one panellist aptly put it, “Unstructured data is just data that hasn’t been structured yet.”
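As a minimal illustration of that kind of structuring, the sketch below aggregates per-article sentiment scores – however they were produced – into fixed-interval bars that can sit alongside market data. The column names (timestamp, ticker, sentiment) and the five-minute bar width are assumptions.

```python
# A sketch, under assumed column names, of turning irregular news/sentiment
# events into a regular time series aligned with market data bars.
import pandas as pd

def sentiment_bars(news: pd.DataFrame, freq: str = "5min") -> pd.DataFrame:
    """Aggregate per-article sentiment scores into fixed-interval bars per ticker."""
    news = news.set_index("timestamp").sort_index()
    bars = (
        news.groupby("ticker")
        .resample(freq)["sentiment"]
        .agg(["mean", "count"])  # average tone and volume of news flow per bar
        .rename(columns={"mean": "mean_sentiment", "count": "article_count"})
    )
    return bars.reset_index()
```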
