Exclusive Q&A with Thomas Bodenski, Chief Operating Officer, TS Imagine.
From trading strategies to risk management and client interactions, financial institutions are continuously exploring how AI can enhance their businesses. Yet despite the hype and noise, many real-world use cases are still more promise than practice. Firms often face challenges in operationalising AI at scale, handling vast amounts of data securely, and delivering clear financial or operational benefits.
For financial markets solutions vendor TS Imagine, however, the results are already visible. By leveraging Snowflake to manage large volumes of trading data and adopting AI-driven processes, the firm is transforming workflows, reducing manual effort, and delivering measurable efficiency gains. In this Q&A with TradingTech Insight, Thomas Bodenski, COO and Chief Data & Analytics Officer at TS Imagine, explores how the firm has approached AI practically and where it’s seeing real returns.
TTI: Welcome, Thomas. Can you explain how TS Imagine is using Snowflake to handle the large volumes of trading data you receive from multiple sources? How does this approach benefit your clients in terms of data access and usability?
TB: Thank you for having me. Snowflake has excellent capabilities to share data without moving it. When I joined TS Imagine in 2021, I quickly realised just how much trading data is generated across asset classes, and saw that clients often struggle to collect and normalise this information efficiently.
We work with direct dealer connectivity to integrate streaming indicative and tradable prices, TRACE prints, and pricing and liquidity information from key venues, as well as handling requests for quotes (RFQs) and dealer runs across a range of protocols. As trading across asset classes becomes increasingly electronic, having a robust system to process and persist data is critical for clients.
Snowflake plays a pivotal role in our data management strategy by acting as a scalable data access layer. It enables us to securely store and structure hundreds of millions of data points while keeping client data strictly segregated. This ensures that each client has access only to their own data, with no commingling across accounts. Using Snowflake’s capabilities, we can support seamless data sharing without physically moving the data. For example, if a client has a Snowflake account, they can query their data as if it were within their own environment, while it remains securely stored and managed in ours.
Importantly, we do not automatically redistribute or share all incoming data. This only happens at a client’s request and in line with their specific agreements. Snowflake’s architecture ensures we can meet these requirements while maintaining the highest levels of data security and compliance.
Our approach empowers clients to unlock the full potential of their data – whether for systematic trading, portfolio analysis, or other use cases – while providing the flexibility and control they need to stay compliant and secure. By leveraging Snowflake, we’ve built a solution that balances usability, scalability, and privacy, ensuring we meet the evolving demands of today’s fast-paced markets.
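For illustration only, the kind of zero-copy sharing Bodenski describes could be configured along the following lines, assuming one Snowflake database per client and the snowflake-connector-python library; the account, database, and share names here are hypothetical rather than TS Imagine’s actual setup.

```python
# Illustrative only: configuring zero-copy sharing of a single client's
# database via a Snowflake secure share. All names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="vendor_account",      # placeholder connection details
    user="data_admin",
    password="***",
    role="ACCOUNTADMIN",
)

statements = [
    # Keeping each client's data in its own database prevents any commingling.
    "CREATE SHARE IF NOT EXISTS client_abc_share",
    "GRANT USAGE ON DATABASE client_abc_db TO SHARE client_abc_share",
    "GRANT USAGE ON SCHEMA client_abc_db.trading TO SHARE client_abc_share",
    "GRANT SELECT ON ALL TABLES IN SCHEMA client_abc_db.trading TO SHARE client_abc_share",
    # Only the client's own Snowflake account is added as a consumer, so they
    # can query the data in place as if it lived in their own environment.
    "ALTER SHARE client_abc_share ADD ACCOUNTS = client_org.client_abc_account",
]

with conn.cursor() as cur:
    for stmt in statements:
        cur.execute(stmt)
```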
TTI: When you talk about normalising and persisting the data as it comes in, is there a latency cost to that? Does the time it takes to normalise and persist the data create any issues, particularly when dealing with real-time or streaming prices?
TB: There’s absolutely no latency issue with our system. Our EMS (Execution Management System) handles data processing in real time, taking streaming prices, normalising the data, and immediately passing it through to users—whether displayed on a screen, fed into an API, or used for back-end processes like limit checks or automated responses.
Let’s look at fixed income, for example. Latency in this asset class is often misunderstood. Yes, some bonds are illiquid and may not trade for days or weeks. But the reality is, for more liquid and heavily traded bonds, especially in the corporate credit and sovereign debt markets, pricing can move quickly, and traders need immediate access to actionable data. We’re talking about markets where price updates, dealer runs, and RFQs can come in fast, and decision-making needs to keep up. That’s where our technology excels. We assemble incoming data into micro-batches and load it directly into Snowflake with no meaningful delay. The process is seamless. Whether the data is being used for real-time trading decisions, portfolio analysis, or compliance checks, it’s there when and where our clients need it. And for the bonds that do trade infrequently, the same system ensures that when liquidity appears, our clients are ready to act without any data lag holding them back.
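As an illustration of the micro-batching pattern described above, a minimal sketch might look like the following, assuming the snowflake-connector-python library and its write_pandas helper; the table and column names are hypothetical, not TS Imagine’s schema.

```python
# Illustrative only: buffering normalised streaming prices and flushing them
# to Snowflake in frequent micro-batches. Schema and table names are
# hypothetical, and the target table is assumed to exist already.
import time
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

conn = snowflake.connector.connect(
    account="vendor_account", user="loader", password="***",
    warehouse="LOAD_WH", database="MARKET_DATA", schema="FIXED_INCOME",
)

_buffer: list[dict] = []
_last_flush = time.time()
FLUSH_INTERVAL_SECONDS = 1.0  # small batches, flushed often, keep the delay negligible

def on_normalised_update(update: dict) -> None:
    """Called by the EMS for each normalised quote, RFQ or dealer-run line."""
    global _last_flush
    _buffer.append(update)
    if _buffer and time.time() - _last_flush >= FLUSH_INTERVAL_SECONDS:
        write_pandas(conn, pd.DataFrame(_buffer), table_name="STREAMING_PRICES")
        _buffer.clear()
        _last_flush = time.time()
```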
TTI: And presumably all of that data is then made available on a historical basis?
TB: Exactly. You no longer lose data. Instead, you start building up a full history that you can analyse with Python, SQL, or even your own notebooks. You can even develop your own apps, essentially bringing the analytics to the data rather than moving the data to the analytics.
At first, we built a data access layer on Snowflake. But we liked it so much that we ended up building our entire data management platform on it. We source data from Refinitiv, SIX, and other providers—things like bond terms and conditions, end-of-day prices, and corporate actions. All of this data is loaded into Snowflake, where it is normalised and distributed to our applications, such as TradeSmart, RiskSmart, and WealthSmart. Snowflake has become our central hub for instrument management, price data, and more.
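For readers who want to picture “bringing the analytics to the data”, a minimal example of querying the accumulated history in place from Python might look like this; the table and column names are again hypothetical.

```python
# Illustrative only: analysing the accumulated price history in place,
# directly from a Python notebook. Table and column names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="client_account", user="quant", password="***",
    warehouse="ANALYTICS_WH", database="CLIENT_ABC_DB", schema="TRADING",
)

query = """
    SELECT instrument_id,
           DATE_TRUNC('day', quote_time) AS trade_date,
           AVG(mid_price)                AS avg_mid,
           COUNT(*)                      AS quote_count
    FROM   streaming_prices
    WHERE  quote_time >= DATEADD(month, -6, CURRENT_TIMESTAMP())
    GROUP  BY instrument_id, trade_date
    ORDER  BY instrument_id, trade_date
"""

# fetch_pandas_all() returns the result set as a pandas DataFrame.
history = conn.cursor().execute(query).fetch_pandas_all()
print(history.head())
```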
TTI: Do you make this data available purely for TS Imagine applications, or can clients access that data store using their own applications, via an API?
TB: From a technology perspective, everything and anything is possible, but much of it depends on licensing. If the data vendor is on Snowflake’s marketplace, the data licensing can be managed through Snowflake. However, not all vendors are—Refinitiv, for instance, isn’t yet there. In such cases, you’d need to establish a direct contract with Refinitiv, and depending on those terms, TS Imagine might be designated as a third-party processor. Technology-wise, there are no limitations, but our primary focus is on providing our data as part of our software suite. We’re not in the business of being a managed data provider. When you purchase our software, the data is included.
TTI: You recently published a case study outlining how Snowflake has helped TS Imagine adopt GenAI at scale, leading to improved efficiency and reduced costs. Can you give us a high-level overview of what you did, and the resulting benefits?
TB: Certainly. Once we started experimenting with AI, we realised there was one thing it was very good at—converting unstructured data into structured data. That’s been incredibly powerful for us. Each year, we purchase around $10 million worth of data, which we normalise and integrate into TradeSmart, RiskSmart, and WealthSmart for our clients to use.
On top of that, we receive about 100,000 emails from data vendors every year, many of which include critical notifications about upcoming changes to their data products. Properly actioning these changes is essential to avoid production outages. However, not all of these emails are relevant to us—perhaps 50% aren’t. AI has helped us sift through and manage this vast volume of information efficiently.
TTI: And you need to know which 50%.
TB: Exactly. Plus, some of those notifications are updates or enhancements to previous notifications. So, you need to handle duplicates, track changes, and classify them. Is it a fee increase, for example? A product change? A support issue? Then, you need to assign it to the right department or person. All of that used to take around 4,000 hours annually—equivalent to two and a half full-time staff. Now, we’ve automated it with AI.
We built a RAG-based (Retrieval-Augmented Generation) AI pipeline that handles these notifications in multiple steps. First, it converts the email notification into embeddings (vectors). Then it runs vector similarity searches against the embeddings of previous notifications. The AI identifies the most relevant examples and applies sophisticated prompt engineering to interpret the message. Importantly, this system was designed by business users—not data scientists—because they know the right questions to ask.
Today, the AI handles 100% of this data processing, leaving a human in the loop to make the final decision and action the vendor notification. The tedious, time-consuming work is gone.
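As an illustration of the retrieval-augmented pattern Bodenski outlines, a minimal sketch built on Snowflake Cortex functions (EMBED_TEXT_768, VECTOR_COSINE_SIMILARITY and COMPLETE) might look like the following; the table names, model choices and prompt wording are assumptions rather than TS Imagine’s production pipeline.

```python
# Illustrative only: a retrieval-augmented classification step using Snowflake
# Cortex functions. Table names, models and the prompt are assumptions.
import snowflake.connector

conn = snowflake.connector.connect(
    account="vendor_account", user="ops_ai", password="***",
    warehouse="AI_WH", database="VENDOR_OPS", schema="NOTIFICATIONS",
)
cur = conn.cursor()

def triage_notification(email_text: str) -> str:
    # Step 1: embed the new email and retrieve the most similar
    # previously classified notifications via vector search.
    cur.execute(
        """
        SELECT body, category
        FROM   classified_notifications
        ORDER  BY VECTOR_COSINE_SIMILARITY(
                    embedding,
                    SNOWFLAKE.CORTEX.EMBED_TEXT_768('snowflake-arctic-embed-m', %s)
                  ) DESC
        LIMIT  5
        """,
        (email_text,),
    )
    examples = "\n\n".join(f"[{category}] {body[:500]}" for body, category in cur.fetchall())

    # Step 2: ask an LLM to classify the new email, grounded in those examples.
    prompt = (
        "You triage data-vendor notifications. Categories: fee increase, product change, "
        "support issue, duplicate or update of an earlier notice.\n\n"
        f"Similar past notifications:\n{examples}\n\n"
        f"New notification:\n{email_text}\n\n"
        "Return the category and the team it should be routed to."
    )
    cur.execute("SELECT SNOWFLAKE.CORTEX.COMPLETE('llama3-8b', %s)", (prompt,))
    return cur.fetchone()[0]
```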
TTI: And you’ve operationalised this entirely on Snowflake?
TB: Yes, we leverage Snowflake in combination with large language models from Meta (Llama), Mistral or Snowflake (Arctic). We’re already experiencing significant efficiency gains.
TTI: How long has this AI use case been running?
TB: It has been in production for about a year now, and the approach has been highly successful. The 4,000 hours we’ve saved can now be spent on more meaningful work.
Another area where AI is boosting our productivity is customer service. We handle around 60,000 client inquiries annually, covering everything from configuration questions to troubleshooting technical issues. Responding to those inquiries requires quick access to relevant information, and that’s where AI comes in.
The AI pulls data from log files, documentation, and past tickets to surface the most relevant answers in real time. This helps our product specialists—who are highly knowledgeable in both financial markets and our platforms—to focus on solving client issues, rather than wasting time searching for information.
Snowflake acts as the central data layer for the AI. It brings together data from Salesforce, usage logs, client configuration files, and more. This seamless access ensures the AI can provide accurate, real-time insights.
We’ve also built a tool called the Customer Service Hot List, which uses sentiment analysis and complexity scoring to prioritise tickets. It flags inquiries that are urgent or likely to escalate, enabling managers to intervene before minor issues turn into major problems. For example, if a client’s tone suggests frustration, or if we see a pattern of similar inquiries, the system pushes that higher up the priority list.
This has been a huge productivity boost. Our analysts can now handle more inquiries with the same resources, while managers have a real-time view of potential hot spots, allowing for proactive intervention. As a result, resolution times have decreased, escalation rates have dropped, and overall client satisfaction has improved. By reducing manual effort and streamlining processes, we’ve transformed how we handle support inquiries, delivering a better client experience while doing more with the same resources.
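For illustration, a hot-list query of the kind described could be sketched with Snowflake Cortex’s SENTIMENT function and a simple complexity proxy; the table and column names are hypothetical, and the scoring is deliberately simplified.

```python
# Illustrative only: ranking open support tickets by Cortex sentiment and a
# crude complexity proxy. Table and column names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="vendor_account", user="support_ai", password="***",
    warehouse="AI_WH", database="SUPPORT", schema="TICKETS",
)

hot_list_sql = """
    WITH scored AS (
        SELECT ticket_id,
               client_name,
               SNOWFLAKE.CORTEX.SENTIMENT(latest_message) AS sentiment,     -- -1 (negative) .. 1 (positive)
               ARRAY_SIZE(SPLIT(latest_message, ' '))     AS message_words  -- crude complexity proxy
        FROM   open_tickets
    )
    SELECT ticket_id, client_name, sentiment, message_words
    FROM   scored
    ORDER  BY sentiment ASC, message_words DESC   -- most frustrated and most complex first
    LIMIT  20
"""

for ticket in conn.cursor().execute(hot_list_sql):
    print(ticket)
```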
TTI: What advice would you give to firms looking to use AI more effectively?
TB: The most important thing is to stay practical. Too many firms fall into the trap of trying to build big, complex AI solutions that take years to implement. That approach rarely works, and it often leads to frustration and wasted resources. Instead, focus on micro use cases—small, well-defined problems that you can solve with AI in a matter of months. You don’t need a revolutionary transformation to start seeing benefits. If you can identify actionable AI processes—things that directly save time, improve accuracy, or boost productivity—you’ll be very successful.
For example, take our email sorting use case for data vendor notifications. The problem was clear: we were receiving around 100,000 notifications a year from data vendors, many of which weren’t relevant. Instead of trying to overhaul our entire data management system, we focused on automating one specific process—classifying and triaging those notifications using AI. That one micro use case saved us thousands of hours of manual effort and delivered tangible, measurable results. Once that proved successful, we were able to build on it and explore additional AI applications.
So, my advice is to start small, prove value quickly, and then scale. Don’t get caught up in the hype of building large, futuristic AI systems. Focus on what’s actionable right now. The key is to ensure that your use case is solving a real business problem, not just something that looks good on paper.
TTI: And ensuring your data is in order is a key part of that?
TB: Absolutely. You can’t achieve meaningful results with AI if your underlying data isn’t reliable. That’s why we’ve built everything on Snowflake, which acts as our central data layer. Snowflake allows us to manage, access, and share data seamlessly without having to move it around, which is critical for data security and efficiency.
When it comes to AI, data privacy and security are major concerns, especially in financial markets. That’s why we’ve been careful to ensure that our use of large language models (LLMs) is safe and controlled. The LLM doesn’t retain any data we input, nor does it remember previous queries. It forgets everything after each prompt, which ensures that our intellectual property and client information stay protected. This is a key distinction between using public AI models and private, secure implementations. We would never run our processes on a public, commercial LLM because we can’t risk exposing sensitive data. By operationalising our AI entirely within Snowflake, we’ve ensured that everything stays within our secure environment.
In short, data security isn’t optional when working with AI. It needs to be baked into your AI strategy from the start.
TTI: Anything else you’ve learned through this process?
TB: One thing I’ve really learned is that English is the new programming language. When you’re working with LLMs, prompt engineering—how you phrase your questions—makes a huge difference in the quality of the AI’s response. The model will only perform as well as the instructions you give it. If you ask vague questions, you’ll get vague answers. But if you break things down step by step, the AI will deliver far more accurate and actionable results.
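As a small illustration of the point about step-by-step prompting, compare a vague prompt with a structured one for the vendor-notification use case; the wording here is purely illustrative.

```python
# Illustrative only: the same request phrased vaguely versus step by step.
# The structured version typically yields more accurate, actionable answers.

vague_prompt = "What should we do about this vendor email?"

structured_prompt = """You are triaging a data-vendor notification. Work through these steps:
1. Summarise the notification in one sentence.
2. State whether it affects a data product we consume, and why.
3. Classify it: fee increase, product change, support issue, or duplicate of an earlier notice.
4. Name the team that should action it and any stated deadline.
Answer with four numbered lines only."""

def build_prompt(email_text: str) -> str:
    """Attach the notification text to the step-by-step instructions."""
    return f"{structured_prompt}\n\nNotification:\n{email_text}"
```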
At the same time, I wouldn’t allow the AI to make final decisions, yet. It’s very good at handling 80-90% of the grunt work, but there still needs to be a human in the loop to make critical decisions. For now, AI is a tool to enhance human productivity, not a replacement for human judgment. It streamlines workflows, reduces manual effort, and uncovers insights, but for final approvals or high-stakes actions, a human needs to ensure accuracy and accountability.
Finally, I think the future of AI will involve smaller, more focused models. Right now, the large models are incredibly powerful, but they’re also resource intensive. I’m hoping we’ll see smaller language models that are customised for specific business contexts. We don’t need a model that understands everything about general language or consumer topics. We need a model that’s laser-focused on financial terminology and workflows. A smaller, more efficient model like that would be cheaper to run and could provide faster responses in real time.
TTI: Thank you, Thomas.