The knowledge platform for the financial technology industry


Intelligent Trading Summit: Making the Most of Big Data


Big Data is growing in both volume and adoption, but it needs to be clean, synchronised, analysed and used sensibly in tackling business problems if it is to reach its full potential in the trading environment.

Discussing Big Data for algo development and analytics at last week’s A-Team Group Intelligent Trading Summit, Peter Farley, director at A-Team Group, questioned an expert panel about the development of Big Data, what it comprises and how it is being used. Setting the scene, Roman Chwyl, worldwide financial services and insurance sales leader, IBM Platform Computing, said: “To make money, traders need to take on more risk and comply with more regulations, which means more computing. Today, big banks are taking on more data and processing, and they are leveraging Big Data tools.”

Nick Idelson, technical director at TraderServe, a provider of real-time trading applications, added: “Big Data is the underlying real-time data used by financial institutions. It grows all the time, but it is important to understand the quality of the data and whether it is contemporaneous. Banks are tackling the issues of Big Data and pulling together vast amounts of structured and unstructured data, including news information. They need to get it all into a system, process it and then use it, perhaps for automated trading or decision support, but they must be sure it is clean, synchronised and used sensibly.”

Picking up on this point, Peter Simpson, senior vice president of research and development at Datawatch, a provider of data visualisation software, said: “Successful implementations of Big Data are around business problems and consider where the data is coming from, how clean it is, how it can be analysed and the required business outcomes. Projects should focus on end business requirements and then work back to the technology stack.”

Turning to the issue of social media as a contributor to Big Data, Idelson explained: “We need to take into account how low the signal-to-noise ratio is in financial markets, and the potential correlation problems of social media. Social media can be used to generate sentiment numbers, but these may change quickly, raising the question of whether the numbers had a real relationship to what was happening in the market when a trade was processed. In my view, this problem has not yet been adequately solved.”
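Idelson’s point about low signal-to-noise ratios can be made concrete. If an observed sentiment number is modelled as a genuine market-relevant signal plus independent noise, the best achievable correlation between the sentiment series and the underlying signal falls off as the square root of SNR/(1+SNR). The sketch below illustrates this standard attenuation result; the function name and SNR values are illustrative assumptions, not figures from the panel:

```python
import math

def max_correlation(snr: float) -> float:
    """Best-case Pearson correlation between an observed series
    (true signal + independent noise) and the true signal,
    where snr = var(signal) / var(noise)."""
    return math.sqrt(snr / (1.0 + snr))

# In noisy markets the usable signal is a small fraction of total
# variance, so even a 'perfect' sentiment extractor is heavily
# attenuated before any trading decision is made:
for snr in (0.01, 0.1, 1.0):
    print(f"SNR {snr:>5}: max correlation ~ {max_correlation(snr):.3f}")
```

At an SNR of 0.01, roughly what a weak sentiment factor might contribute, the ceiling on correlation is already under 0.1, which is why rapidly changing sentiment numbers are hard to tie to contemporaneous market moves.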

Simpson continued: “When we think about Big Data, we think about unstructured data. We might process the data, put algos on a chip to optimise performance and run a machine next to an exchange, but things are always moving, so someone needs to be looking at the data. When there is so much data that no one can look at every item, someone must look for what is not expected to happen. Competitive advantage is in the human element of looking for unusual activity, finding the problem behind it and acting quickly to resolve it.”

These are relatively early days in the development of Big Data applications and tools, a fact borne out by differences of opinion among panellists about the benefits Big Data is delivering. Simpson said Big Data is becoming part of life and is already in use for algo testing and back-testing against trading realities, while Chwyl said Big Data is providing benefits to the extent that IBM is considering producing multi-tenanted, multi-application shared infrastructure for Big Data. Idelson was more sceptical about the benefits, saying: “There is a lot of Big Data analysis going on and banks can make more money, but they seem to be gaining more advantage than their customers.”

