The financial industry has long been at the forefront of automation and data-driven decision-making, yet the introduction of AI Agents represents a fundamental shift in how firms approach complex tasks. Unlike traditional AI models that rely on predefined workflows, AI Agents bring a new level of adaptability, reasoning, and autonomy to financial operations. From investment research and trade execution to risk management and compliance, AI Agents have the potential to completely reshape workflows. However, they also introduce new regulatory, operational, and technological complexities.
So what exactly sets AI Agents apart from previous forms of automation? How are firms leveraging them to gain a competitive edge? What risks do they introduce, and how can financial institutions balance innovation with regulatory and operational safeguards to ensure their responsible deployment?

Understanding the Concept
Ask ten different people in financial markets what an AI Agent is, and you’re likely to get ten different answers. The term is often applied broadly, sometimes referring to little more than traditional automation layered with conversational interfaces. This variation reflects not just the diversity of applications emerging in the market, but also the pace at which these technologies are evolving. Yet, beneath this ambiguity lies a more precise definition: true AI Agents are distinct from conventional automation and even from other forms of AI, such as machine learning models or generative systems. They are designed to reason, plan, and act autonomously, adapting to context and responding dynamically to new inputs.
Joseph Lo, Head of Enterprise Platforms at Broadridge, provides a concise definition: “An AI Agent is a system that can take a task from a user, break it down into multiple subtasks, plan how to complete them, and execute them autonomously.”
This planning and decision-making capability is central to what makes AI Agents different. While traditional workflows are based on predefined steps and logic, AI Agents determine how to achieve their objectives by evaluating the context, identifying the tools they need, and adapting their approach in real time. This allows them to operate effectively in environments where variability and complexity make rigid processes impractical.
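The task-decomposition loop described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; all names (`plan_subtasks`, `TOOLS`, `run_agent`) are hypothetical, and a production agent would use an LLM for both planning and tool selection:

```python
# Minimal sketch of an agent loop: plan subtasks, pick a tool per step, execute.
# All names here are illustrative assumptions, not a real framework's API.

def plan_subtasks(task: str) -> list[str]:
    # A real agent would call an LLM to plan; we hard-code a plan for illustration.
    return [f"gather data for: {task}", f"analyse: {task}", f"report: {task}"]

TOOLS = {
    "gather": lambda step: f"data({step})",
    "analyse": lambda step: f"insight({step})",
    "report": lambda step: f"summary({step})",
}

def run_agent(task: str) -> list[str]:
    results = []
    for step in plan_subtasks(task):
        # The agent selects a tool based on the step's context rather than
        # following a single fixed pipeline.
        tool_name = step.split()[0].rstrip(":")
        tool = TOOLS.get(tool_name, TOOLS["report"])
        results.append(tool(step))
    return results
```

The key contrast with traditional automation is that the plan is produced at run time from the task itself, so the same loop can handle tasks that were never explicitly scripted.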
The development of AI Agents builds on earlier forms of automation such as chatbots, which are typically limited to handling simple, rule-based queries. As business requirements grow more complex and dynamic, however, such linear workflows cannot accommodate the range of decisions and actions required. AI Agents have evolved to fill that gap, capable not only of interpreting user intent but also of orchestrating entire processes that draw on multiple data sources and analytical models.

“I see an AI agent as a more interactive version of a chatbot, similar to ChatGPT, where the user engages in real-time interaction rather than simply triggering a predefined pipeline with no further input,” says Nick Wood, AI Product Manager at FINBOURNE Technology. “In our framework at FINBOURNE, we are developing AI apps (the agent is a type of app) that consist of both circuits and overarching system prompts. The system prompt defines the model’s global behaviour, while the circuits represent the logic for each individual step. The AI agent orchestrates this logic, determining where to go to complete specific tasks, while the complexity of this process remains abstracted from the user. For example, one of our circuits is designed for querying and creating equities within the system. While users interact through a chat interface, they are actually engaging with the agent, which then maps their requests to the appropriate circuit. The circuit defines the available functionality and directs the request to our underlying systems to either execute an action or return the relevant information.”
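The circuits-and-system-prompt pattern Wood describes can be sketched roughly as follows. This is a hedged illustration of the general idea only; `Circuit`, `route_request`, and the keyword-matching fallback are assumptions of this sketch, not FINBOURNE's actual design (which would use an LLM to map intent to a circuit):

```python
# Hedged sketch of the "circuits + overarching system prompt" pattern.
# Names and matching logic are illustrative, not a vendor's real API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Circuit:
    name: str
    description: str                # used to match user intent to the circuit
    handler: Callable[[str], str]   # logic for this individual step

SYSTEM_PROMPT = "You are a cautious financial-data assistant."  # global behaviour

CIRCUITS = [
    Circuit("equities_query", "query or create equities",
            lambda q: f"equities result for '{q}'"),
    Circuit("fx_rates", "look up fx rates",
            lambda q: f"fx result for '{q}'"),
]

def route_request(user_message: str) -> str:
    # A production agent would use an LLM to map intent to a circuit;
    # naive keyword matching stands in for that here.
    for circuit in CIRCUITS:
        if any(word in user_message.lower() for word in circuit.description.split()):
            return circuit.handler(user_message)
    return "No matching circuit found."
```

The point of the structure is that the chat interface stays simple while each circuit encapsulates what is allowed and where the request goes in the underlying systems.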
This orchestration of logic and execution reflects a broader trend in how AI is being deployed in financial markets. AI Agents don’t simply deliver answers; they interpret objectives, determine the best course of action, and iterate where necessary to improve outcomes. It’s this capability that makes them especially well-suited to financial environments where conditions can shift quickly, data may be fragmented, and decision-making needs to balance both speed and complexity.
Key Use Cases
AI Agents are beginning to demonstrate value across a variety of areas. While their adoption is still in early stages, firms are exploring how these systems can support more dynamic and responsive workflows, particularly where rigid processes have historically limited flexibility and speed.
“AI agents excel in situations with open-ended problems – those that don’t have a fixed set of steps for resolution,” observes Ivan Kunyankin, Data Science Team Lead at Devexperts. “For example, they’re useful for data analysis and portfolio management, where they can retrieve relevant data through multiple queries, significantly reducing the time analysts spend gathering information. They also help with operational efficiency because they don’t need a fixed process and can dynamically adjust to unexpected variations in data analysis or decision-making workflows.”
This adaptability makes AI Agents well suited to areas such as portfolio management and investment research. In environments where analysts must work with large volumes of structured and unstructured data, AI Agents can accelerate insight generation by pulling data from multiple sources, synthesising the findings, and even producing initial drafts of reports. This is especially valuable in areas like private markets, where data is less standardised and more fragmented, requiring a more flexible approach to aggregation and analysis.

Trade execution is another promising area for future AI Agent adoption. Traditional algorithmic trading relies on fixed rule sets and static strategies. AI Agents, by contrast, can be used to assess liquidity conditions in real time, optimise routing paths, and adapt execution strategies dynamically as market conditions shift. While these capabilities are still developing, the potential for AI Agents to support more nuanced execution decisions is drawing growing interest from trading desks.
Risk management and compliance also stand to benefit from greater autonomy and responsiveness. AI Agents can be tasked with monitoring exposures across portfolios, flagging potential regulatory breaches, and even initiating pre-approved remedial actions. In doing so, they support a more proactive approach to risk and control, reducing operational burden and allowing human teams to focus on more strategic analysis.
In the back office, AI Agents can help automate tasks such as trade reconciliation, exception handling, and settlement processing. Although these functions are typically more rules-based, the complexity of financial operations – and the need to manage exceptions in real time – makes them fertile ground for AI-enabled automation. Over time, AI Agents are likely to play a growing role in enhancing the scalability and resilience of post-trade infrastructure.
“Let’s look at operations as an example,” says Lo. “Typically, an operations professional receives a trade break alert and must investigate its cause. The issue could stem from incorrect settlement instructions, disagreements over settlement amounts, or discrepancies in trade components. While our initial release of BondGPT already raised the bar by helping users retrieve relevant information, what we’re rolling out soon is a true AI Agent that can do more than just fetch data. A user will be able to say, ‘Help me fix this break,’ and the AI Agent will autonomously research possible causes, compile relevant data, and present it in a structured way. This process isn’t deterministic – it doesn’t just follow a rigid set of rules. Instead, it dynamically assesses the trade in question, determines the most relevant factors, and over time, could even resolve the issue autonomously. Every trade and counterparty has unique nuances. AI Agents will help bridge that gap. They won’t just follow predefined workflows – they’ll interpret which procedures are most relevant to a specific trade and apply them dynamically. This is something we’re really excited about.”
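The trade-break triage Lo describes — dynamically assessing which factors are relevant to a specific break rather than following one rigid rule path — can be sketched as below. The candidate causes, field names, and thresholds are illustrative assumptions, not BondGPT's or Broadridge's actual logic:

```python
# Illustrative (non-vendor) sketch of trade-break triage: check several
# candidate causes and compile structured findings for the user.

POSSIBLE_CAUSES = {
    # Each check inspects the trade for one class of discrepancy.
    "settlement_instructions": lambda t: t.get("ssi_ours") != t.get("ssi_theirs"),
    "settlement_amount": lambda t: abs(t.get("amt_ours", 0) - t.get("amt_theirs", 0)) > 0.01,
    "trade_components": lambda t: t.get("qty_ours") != t.get("qty_theirs"),
}

def investigate_break(trade: dict) -> dict:
    # Assess which factors actually apply to this specific break, then
    # present the findings in a structured way for review (or, eventually,
    # autonomous resolution).
    findings = {cause: check(trade) for cause, check in POSSIBLE_CAUSES.items()}
    return {
        "trade_id": trade["id"],
        "likely_causes": [c for c, hit in findings.items() if hit],
    }
```

A real agent would discover and weight candidate causes dynamically per trade and counterparty; the fixed dictionary here only marks where that reasoning would sit.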
Architecting AI Agent Platforms
As firms expand their use of AI Agents, the focus is beginning to shift from discrete, task-specific deployments to more integrated, platform-based strategies. Instead of assigning individual agents to isolated functions, institutions are starting to build interconnected ecosystems where multiple agents work together, each performing a specialised role within a broader, coordinated workflow.
“From an AI Agent perspective, once everything is encoded in software, even if different business functions and systems are involved, the AI Agent can understand its environment, make context-aware decisions, and execute tasks in the appropriate places,” notes Andrew Morgan, President and Chief Revenue Officer of TS Imagine. “This seems to be the direction AI Agents are heading in. AI is most effective when embedded into the operational workflow.”
This move toward orchestration marks a critical step in the evolution of AI Agent architecture. Rather than functioning in isolation, agents are being designed to collaborate, requesting information from one another, sharing context, and contributing to a shared objective.
“People often talk about AI Agents in terms of specific use cases, but that can be an oversimplification,” points out Symon Garfield, Director, Capital Markets Advisory & Digital Strategy, Worldwide Financial Services at Microsoft. “The more forward-thinking and advanced implementations aren’t focused on creating AI Agents for individual use cases or processes. Instead, they’re building platforms composed of software components with specific skills that are reusable across different use cases and processes. The key idea is to develop reusable building blocks that can be quickly reassembled to create new capabilities as needed. For example, many firms need a system that can triage and process emails from a shared inbox, categorising them and taking appropriate actions. Instead of building a separate AI Agent for this each time, the approach is to develop a reusable skill that can be deployed across multiple workflows.”
He continues: “This is where orchestration comes in. Think of a ‘Chief of Staff’ AI Agent that coordinates all other AI Agents in a given process. Simply chaining together multiple Agents won’t result in a well-automated workflow; instead, firms need to atomise functionality and encapsulate software into modular components. This isn’t a new concept – we’re seeing the same principles with AI Agents and orchestration. The difference is that traditional software systems are deterministic, whereas agentic systems are increasingly autonomous.”
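The “Chief of Staff” pattern — an orchestrator sequencing modular, reusable skills — can be sketched as follows. The skill registry, the email-triage example, and all names here are hypothetical illustrations of the principle Garfield describes, not Microsoft's implementation:

```python
# Sketch of orchestration over reusable skills: functionality is atomised
# into registered components, then reassembled into workflows.
# All names and the triage logic are illustrative assumptions.
from typing import Callable

SKILLS: dict[str, Callable[[dict], dict]] = {}

def skill(name: str):
    """Register a reusable, modular skill component."""
    def register(fn):
        SKILLS[name] = fn
        return fn
    return register

@skill("triage_email")
def triage_email(ctx: dict) -> dict:
    # Categorise an inbound email (a real skill would use an LLM classifier).
    ctx["category"] = "settlement_query" if "settle" in ctx["email"].lower() else "other"
    return ctx

@skill("route_action")
def route_action(ctx: dict) -> dict:
    # Take the appropriate action for the category.
    ctx["action"] = {"settlement_query": "open_ops_ticket", "other": "archive"}[ctx["category"]]
    return ctx

def chief_of_staff(workflow: list[str], ctx: dict) -> dict:
    # The orchestrator sequences skills; each skill remains reusable
    # across other workflows.
    for step in workflow:
        ctx = SKILLS[step](ctx)
    return ctx
```

Because each skill is registered independently, a new workflow is just a new list of step names rather than a new bespoke agent, which is the reuse argument made above.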
Interoperability is another foundational requirement for these platforms. Many firms still operate legacy infrastructure that was never intended to support dynamic, AI-driven interaction. Ensuring that AI Agents can work seamlessly across internal systems and external data environments remains a significant challenge. At the same time, the success of any AI Agent architecture hinges on access to clean, well-structured data. Without it, the flexibility and intelligence of agents are quickly undermined.
“One trend we’ve seen, particularly since COVID, is the ubiquity of Microsoft Teams and M365 Copilot on the financial services desktop,” notes Garfield. “This is combining with the shift towards generative AI-driven conversational interfaces. Now, more firms are adopting natural language prompts or hybrid interfaces with buttons backed by conversational AI. The next step in this evolution is AI-first interfaces, where the application itself fades into the background, and users interact primarily with the AI. We’re seeing Teams and M365 Copilot become the UI for AI.
“We’re also seeing a shift towards Teams-based AI applications, where conversational AI Agents replace traditional Independent Software Vendor (ISV) apps. Imagine a Teams interface where an organisation’s internal AI Agent, a Microsoft AI Agent (e.g., Excel AI), and an ISV AI Agent can all interoperate within the same workflow. Right now, we’re not quite there yet, but that’s the direction things are heading. The ability for AI Agents to communicate directly rather than through rigid API structures could transform how financial services firms integrate and operate their technology stacks.”
Challenges
While the potential of AI Agents is significant, their deployment introduces a range of new risks and operational challenges that firms will need to address. One of the most immediate concerns is regulatory uncertainty. Existing governance frameworks were designed around deterministic systems and human oversight, not autonomous agents capable of reasoning and taking action. As AI Agents begin to influence functions such as trade execution, financial advice and client interactions, regulators will need to actively assess how best to monitor and control their use.
“Transparency is key,” says Lo. “Users must always know when an AI system is making or influencing a decision. That’s why disclosure, monitoring, oversight, and robust guardrails are essential for AI Agents operating in financial markets.”
Maintaining strict access controls is another core requirement, particularly in environments where data sensitivity is high and permissioning is tightly regulated. AI Agents must operate within well-defined entitlement frameworks to ensure they cannot retrieve or act on data beyond the user’s authorised scope. This is not only essential for compliance, but also for maintaining client trust in the integrity of AI-driven systems.
“Our system is designed so that all actions run under the user’s own entitlements,” states Wood. “If you lack the necessary entitlements to perform an action, you won’t receive any results. The system is strictly limited to what you could manually do yourself. Additionally, every action is logged, allowing you to trace inputs and outputs at every stage of the process. This means you can review your session history and clearly explain how each decision was made. Entitlements also determine data visibility. For example, if you and I have different access levels and run the same query to retrieve all equities in the system, we may see completely different results based on our individual scopes. To facilitate this, we’ve integrated an entitlement engine across the entire system, controlling access at various levels, including data, rows, columns, providers, and views. By placing our AI agents on top of this infrastructure, we ensure that users can only retrieve data and execute actions they are personally authorised to perform. This is a fundamental requirement in the financial industry, where security and traceability are critical to maintaining trust and compliance.”
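The two mechanisms Wood describes — row-level filtering under the user's own entitlements, and an audit trail of every action — can be sketched minimally as below. The data, scope names, and log shape are assumptions of this sketch, not FINBOURNE's entitlement engine:

```python
# Hedged sketch: agent actions run under the user's own entitlements, with
# every query logged for traceability. Names and data are illustrative.
import datetime

AUDIT_LOG: list[dict] = []

EQUITIES = [
    {"ticker": "AAPL", "desk": "us_equities"},
    {"ticker": "VOD",  "desk": "eu_equities"},
]

def query_equities(user_entitlements: set[str]) -> list[dict]:
    # Row-level filtering: two users running the same query may see
    # completely different results depending on their scopes.
    results = [row for row in EQUITIES if row["desk"] in user_entitlements]
    # Log inputs and outputs so the session can be reviewed end to end.
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "action": "query_equities",
        "scope": sorted(user_entitlements),
        "rows_returned": len(results),
    })
    return results
```

The essential property is that the agent sits on top of the entitlement check rather than beside it, so it can never widen a user's effective access.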
Explainability and auditability are also essential. Whether applied to trading decisions, portfolio construction, or risk controls, AI Agents must be able to justify their actions. Without a clear record of how and why decisions are made, firms risk exposure to regulatory breaches, reputational damage, or market instability. This places additional pressure on firms to build robust oversight into AI Agent deployments from the outset.
“AI systems in financial markets must operate within a Responsible AI framework,” states Yiannis Antoniou, CTO of Lydatum. “That means ensuring full transparency, traceability, and auditability from data input to decision output. It also means implementing safeguards against bias and ensuring compliance with regulatory requirements. Every AI Agent must be able to demonstrate how its decisions were made, with a clear basis in factual data. This is no different from the requirements for traditional LLM-based systems, but with AI Agents, it becomes even more critical because they interact with external data sources and make real decisions based on that data. The potential for unintended consequences is high, so firms must have robust frameworks in place to prove they are operating responsibly.”
Alongside governance, firms must also contend with the cost and complexity of deploying AI Agents, as they are inherently more sophisticated than single-model tools or traditional automation, often involving multiple components, ongoing coordination, and extensive testing. As a result, implementation tends to require both significant investment and specialised expertise.
“AI agents do come with disadvantages,” observes Kunyankin. “They have higher costs due to frequent model calls and increased context usage. They introduce greater latency because of iterative reasoning processes. They are more complex, creating more room for errors and unpredictability. They require risk-free environments or careful constraints to ensure they don’t negatively impact users. They also introduce new failure modes that are harder to evaluate compared to traditional software or chatbot assistants. This is why AI agent adoption should be gradual, starting with internal tools, then controlled, low-risk user interactions, and only eventually expanding to more autonomous roles in trading and investment.”
“Running AI Agents is typically more expensive than using a standalone LLM because they require multiple tools and models working together to orchestrate their actions and responses,” adds Antoniou. “However, with the rise of smaller, more efficient models, we can optimise for speed and cost-effectiveness by using lightweight models for routine tasks while reserving larger models for complex decisions. Right now, orchestrating these multi-model systems requires specialised technical expertise, but as the market matures, we’ll see more user-friendly tools making it easier to deploy AI Agents without deep technical knowledge.”
Measuring ROI
As financial institutions scale their investments in AI Agents, the question of how to measure ROI becomes increasingly important. For some, the appeal lies in the potential to streamline manual processes and reduce operational inefficiencies. For others, AI is viewed more strategically, as a long-term infrastructure play that enables broader transformation across the enterprise.
“Many firms attempt to justify AI investments through personal productivity metrics – for example, calculating that if AI saves 50 employees 10 minutes per day, the cost savings add up,” reflects Garfield. “But the reality is that unless you actually reduce headcount, those savings won’t appear on the bottom line. Some organisations have made substantial investments in generative AI based on these business cases, but many executives I talk to require something more substantive. An alternative approach is to tie AI investments to business processes or journeys where clear metrics can be applied – for example throughput, error reduction, cost savings, or quality improvements.”
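The arithmetic behind the two framings Garfield contrasts can be made concrete. In this sketch, all figures (headcount, workdays, error costs) are illustrative assumptions, not benchmarks; the point is only the difference between counting saved minutes and measuring a process outcome:

```python
# Worked example of two ROI framings: per-person time savings vs a
# process-level metric. All inputs are illustrative assumptions.

def time_saved_hours_per_year(employees: int = 50, minutes_per_day: int = 10,
                              workdays: int = 250) -> float:
    # The "50 employees x 10 minutes/day" framing: saved time that may
    # never reach the bottom line unless headcount actually changes.
    return employees * minutes_per_day * workdays / 60

def process_roi(baseline_error_rate: float, new_error_rate: float,
                cost_per_error: float, volume: int) -> float:
    # The process-journey framing: tie the investment to a measurable
    # outcome such as error reduction across a known volume.
    return (baseline_error_rate - new_error_rate) * cost_per_error * volume
```

The first function yields a large-sounding number of hours; the second yields a figure that maps directly to cost, which is closer to what the executives Garfield cites are asking for.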
This distinction highlights the limitations of viewing AI purely through a time-saved lens. While efficiency is often the initial goal, the more compelling case for AI Agents lies in their ability to influence outcomes that are harder to measure, such as improved decision-making, reduced risk, and the identification of new opportunities. These benefits tend to manifest over time and across multiple workflows, requiring firms to rethink how they assess value.
To capture the broader impact of AI Agents, firms may therefore need to adopt a more quantitative and model-driven approach to evaluation. Rather than focusing on individual tasks, they could instead look at how AI contributes to overall performance, be it through reduced error rates, more consistent investment outcomes, or enhanced client servicing.
“In finance, we’re dealing with a digital industry where everything is measured in numbers,” states Morgan. “Ultimately, we’re talking about a statistical optimisation process. If an AI model can demonstrably improve outcomes or enhance productivity, then it’s doing its job. The challenge comes when trying to explain every individual decision an AI system makes. Markets are incredibly complex and dynamic, with countless variables influencing outcomes. While we may not always be able to pinpoint the exact cause-and-effect relationship behind every decision, we can still measure overall improvements across a statistically significant sample.”
This perspective reinforces the need for financial firms to evolve their measurement frameworks. ROI in the AI Agent era won’t always be immediate or linear, but with the right metrics in place, firms can track progress over time and build a more credible case for continued investment.
The Next Frontier
Looking ahead, one of the most transformative developments in AI Agent technology is likely to be the emergence of direct AI-to-AI communication. Today, most workflows involving AI still require human orchestration: reviewing outputs, triggering actions, or passing information between systems. But as AI Agents evolve, they are expected to take on a more collaborative role, coordinating tasks autonomously across functions and domains.
Rather than operating in isolation, AI Agents could begin to form intelligent networks, requesting data from one another, verifying decisions, and dynamically adjusting strategies in real time. For example, a trade execution agent could draw on insights from a risk management agent to modify its behaviour as market conditions shift. A compliance agent might detect an anomaly and trigger a sequence of corrective actions by operational agents, without any human involvement. In this future state, financial workflows could become far more seamless, responsive, and self-sustaining.
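The execution-and-risk example above can be sketched as two agents with a direct link between them. This is a speculative illustration of the coordination pattern, with hypothetical names and a fixed limit standing in for live risk assessment:

```python
# Speculative sketch of direct agent-to-agent coordination: an execution
# agent consults a risk agent before sizing an order, with no human in
# the loop. All names and the fixed limit are illustrative assumptions.

class RiskAgent:
    def max_order_size(self, symbol: str) -> int:
        # A real risk agent would evaluate live exposures and market
        # conditions; a fixed limit stands in for that here.
        return 5000

class ExecutionAgent:
    def __init__(self, risk_agent: RiskAgent):
        self.risk_agent = risk_agent  # direct agent-to-agent link

    def place_order(self, symbol: str, desired_qty: int) -> int:
        # Modify execution behaviour based on another agent's assessment.
        limit = self.risk_agent.max_order_size(symbol)
        return min(desired_qty, limit)
```

In a fuller network, the link would be a message protocol rather than a direct object reference, letting agents from different vendors interoperate, which is the harder, still-open problem the article notes.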
“Overall, AI agents hold tremendous potential in financial markets, particularly in streamlining routine tasks, improving decision-making, and enhancing efficiency,” notes Kunyankin. “However, the barrier to entry in trading is high, meaning we should remain realistic about their short-term adoption.”
This realism is echoed by others in the industry, who recognise that while AI Agent technology is progressing quickly, development remains highly customised. Much of the current momentum depends on firms building bespoke solutions from scratch, leveraging a variety of emerging tools and frameworks, many of which are still maturing.
“I do foresee a future where the industry converges around two or three dominant frameworks for building AI Agents, rather than the current approach where most implementations are built from scratch or by experimenting with multiple frameworks with different levels of capabilities and maturity,” says Antoniou. “Some startups and frontier labs are working on ways to standardise the development of AI Agents, but for now, most solutions require custom programming. Interest in AI Agents is growing across financial services, and as the field matures, we’ll likely see more robust frameworks emerge. Until then, the ability to develop tailored solutions remains a competitive advantage.”
While adoption may be incremental, the direction of travel is clear. AI Agents represent a fundamental shift in how technology supports financial decision-making and operations. For firms that can successfully integrate them – while maintaining control, transparency, and alignment with regulatory expectations – they offer a significant opportunity to rethink how work gets done.
As the field matures, AI Agents are set to become more than just tools; they may soon be integral participants in the day-to-day functioning of financial markets.
- A-Team Group AI in Capital Markets Summit London 2025 will be held on May 22.