About a-team Marketing Services
The knowledge platform for the financial technology industry

A-Team Insight Blogs

Jefferies Streamlines OTC Derivatives Clearing with AWS for T+1 and More


Investment banking firm Jefferies has deployed AWS services to streamline OTC derivatives post-trade operations and meet the new T+1 settlement deadline. The implementation was presented as a case study at the recent AWS Summit in New York.

The session was presented by Jefferies’ executives Sudhakar Paladugu, SVP Corporate Technology, and Manish Mohite, SVP Global Head Public Cloud.

Despite the best efforts of the International Swaps and Derivatives Association (ISDA), the middle office OTC derivatives confirmations (confirm) process has remained largely manual due to variations in templates across counterparties. Under the original process, middle office staff had to read through each third-party confirm and manually check the details against the internal trade records.

Every counterparty used a slightly different format and presentation; some confirms arrived as scanned photocopies of screenshots. With email attachments the dominant communications channel, completing the confirm process manually was cumbersome, and the prospect of automation received an enthusiastic response from the middle-office team.

Jefferies’ journey with AWS began in 2022 with the goal of modernizing the firm’s infrastructure by migrating to the cloud. A CRM platform, data-driven investment advice and applications across front, middle and back offices have followed.

The part of the trade lifecycle in focus for this case study begins after the trade, when the trading desk and counterparty have agreed the terms and the middle office receives the counterparty’s trade confirmation. The manual step of reading, deciphering and checking has been automated through an orchestrated set of AWS tools.

Process Overview

This process begins when a user or an application uploads a confirmation image or PDF file to an Amazon S3 bucket. This initial upload action sets off a series of automated processes designed to analyse and extract data from the document accurately.

Once the document is uploaded to the S3 bucket, an Amazon S3 event notification is configured to trigger on detecting this action. This notification sends a message to an Amazon SQS (Simple Queue Service) queue. SQS acts as a decoupling agent that ensures the uploaded document is processed asynchronously. By placing the event notification in the queue, SQS helps manage the workload and ensures that the processing service is not overwhelmed by sudden spikes in uploads.
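The S3-to-SQS wiring described above can be sketched with boto3. The bucket name, queue ARN and suffix filter below are illustrative assumptions, not Jefferies’ actual configuration, and the queue would also need an access policy allowing S3 to send messages.

```python
import json


def build_notification_config(queue_arn: str, suffix: str = ".pdf") -> dict:
    """Build an S3 bucket notification that routes object-created events to SQS."""
    return {
        "QueueConfigurations": [
            {
                "QueueArn": queue_arn,
                "Events": ["s3:ObjectCreated:*"],
                # Only route documents of the expected type (assumed filter).
                "Filter": {
                    "Key": {"FilterRules": [{"Name": "suffix", "Value": suffix}]}
                },
            }
        ]
    }


def apply_notification_config(bucket: str, queue_arn: str) -> None:
    # Hypothetical wiring -- requires AWS credentials and a queue policy
    # that permits s3.amazonaws.com to send messages to the queue.
    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_notification_configuration(
        Bucket=bucket,
        NotificationConfiguration=build_notification_config(queue_arn),
    )


if __name__ == "__main__":
    arn = "arn:aws:sqs:us-east-1:111122223333:confirms-queue"  # placeholder ARN
    print(json.dumps(build_notification_config(arn), indent=2))
```

Decoupling through the queue, rather than triggering processing directly, is what absorbs the upload spikes mentioned above.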

Upon receiving the S3 event notification from the SQS queue, an application or an AWS Lambda function invokes Amazon Textract’s StartDocumentAnalysis API. This API call initiates the process of extracting text, tables, and forms from the uploaded document. Textract uses advanced machine learning powered OCR to accurately analyse and extract structured data from the document for later matching.
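A minimal sketch of that Textract invocation follows; the SNS topic and IAM role ARNs are placeholders we have introduced for illustration, not details from the presentation.

```python
def build_analysis_request(bucket: str, key: str,
                           sns_topic_arn: str, role_arn: str) -> dict:
    """Parameters for Textract's StartDocumentAnalysis call: extract both
    tables and form key/value pairs, and publish completion to an SNS topic."""
    return {
        "DocumentLocation": {"S3Object": {"Bucket": bucket, "Name": key}},
        "FeatureTypes": ["TABLES", "FORMS"],
        "NotificationChannel": {"SNSTopicArn": sns_topic_arn, "RoleArn": role_arn},
    }


def start_analysis(bucket: str, key: str,
                   sns_topic_arn: str, role_arn: str) -> str:
    # Hypothetical invocation -- requires AWS credentials and a role that
    # Textract can assume to publish to the topic.
    import boto3

    textract = boto3.client("textract")
    response = textract.start_document_analysis(
        **build_analysis_request(bucket, key, sns_topic_arn, role_arn)
    )
    return response["JobId"]  # tracked so results can be fetched later
```

Asking for both TABLES and FORMS matters here: confirm details typically arrive as labelled key/value pairs and tabular economics.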

After initiating the document analysis with Textract, the system saves the job ID and the S3 document key into an Amazon DynamoDB table. When Amazon Textract completes the document analysis, it sends a notification via an Amazon SNS (Simple Notification Service) topic. SNS ensures that the notification is delivered reliably and can trigger further actions in the processing pipeline.
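The job-tracking write can be sketched as below; the table name, key schema and status field are assumptions for illustration rather than Jefferies’ actual data model.

```python
import time


def build_job_item(job_id: str, bucket: str, key: str) -> dict:
    """DynamoDB item linking a Textract job to the confirm it analyses."""
    return {
        "JobId": job_id,                   # partition key (assumed schema)
        "DocumentKey": f"{bucket}/{key}",  # where the original confirm lives
        "Status": "IN_PROGRESS",
        "SubmittedAt": int(time.time()),   # epoch seconds, for auditing
    }


def record_job(table_name: str, job_id: str, bucket: str, key: str) -> None:
    # Hypothetical write -- requires AWS credentials and an existing table.
    import boto3

    table = boto3.resource("dynamodb").Table(table_name)
    table.put_item(Item=build_job_item(job_id, bucket, key))
```

Persisting the job ID against the document key is what lets the SNS-triggered consumer later find which confirm a completed analysis belongs to.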

Additionally, the extracted results from Textract are placed back into the designated S3 bucket. This structured data is now ready for further downstream processing.
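Retrieving a completed analysis means paging through Textract’s GetDocumentAnalysis results via NextToken. A minimal sketch, with helper names of our own choosing:

```python
def collect_blocks(textract_client, job_id: str) -> list:
    """Page through GetDocumentAnalysis results until NextToken runs out."""
    blocks, token = [], None
    while True:
        kwargs = {"JobId": job_id}
        if token:
            kwargs["NextToken"] = token
        page = textract_client.get_document_analysis(**kwargs)
        blocks.extend(page["Blocks"])
        token = page.get("NextToken")
        if not token:
            return blocks


def extract_lines(blocks: list) -> list:
    """Keep the detected text lines, in order, for downstream matching."""
    return [b["Text"] for b in blocks if b.get("BlockType") == "LINE"]
```

In practice the FORMS output (KEY_VALUE_SET blocks) would also be walked to recover labelled fields; the LINE filter above is the simplest useful slice.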

An AWS Lambda function is triggered by the SNS notification to perform a fuzzy Sørensen-Dice match. This function compares the extracted data from Textract with pre-configured mappings stored in DynamoDB. The Sørensen-Dice coefficient, a statistical measure of similarity, helps in identifying and matching the relevant data fields even if there are slight variations or errors in the extracted text. This step returns a confidence score for each extracted field to facilitate the human-in-the-loop process.
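The Sørensen-Dice coefficient itself is straightforward: twice the number of shared character bigrams divided by the total bigram count. A self-contained sketch, with a field-matching helper and threshold of our own invention to illustrate how it could feed the confidence-scored mapping step:

```python
from collections import Counter


def bigrams(text: str) -> list:
    """Character bigrams of a case-normalised string."""
    t = text.lower()
    return [t[i:i + 2] for i in range(len(t) - 1)]


def dice_coefficient(a: str, b: str) -> float:
    """Sørensen-Dice similarity: 1.0 for identical strings, 0.0 for no overlap."""
    ba, bb = bigrams(a), bigrams(b)
    if not ba or not bb:                     # strings too short for bigrams
        return 1.0 if a.lower() == b.lower() else 0.0
    overlap = sum((Counter(ba) & Counter(bb)).values())
    return 2.0 * overlap / (len(ba) + len(bb))


def best_match(extracted: str, candidates: list, threshold: float = 0.6):
    """Pick the candidate field most similar to an extracted label; returns
    (candidate, score), or (None, score) when nothing clears the threshold."""
    scored = [(c, dice_coefficient(extracted, c)) for c in candidates]
    cand, score = max(scored, key=lambda cs: cs[1])
    return (cand, score) if score >= threshold else (None, score)
```

For example, an OCR slip like "Notionl Amount" still matches the canonical field "Notional Amount" with a high score, while the score itself doubles as the per-field confidence surfaced to reviewers.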

After performing the fuzzy match, the Lambda function reads the merged JSON data from DynamoDB, which includes the mappings and matches identified in the previous step. It also accesses the original uploaded documents from Amazon S3 to cross-verify and ensure consistency. This integrated approach ensures that all data points are correctly aligned, and any discrepancies are resolved before the data is used in subsequent steps.

AWS API Gateway facilitates secure and efficient interactions between the web UI and the backend processes, allowing users to interact with the document processing pipeline seamlessly.

The final step involves a human-in-the-loop (HITL) interface where users can review the document processing results. This UI allows human operators to analyse the output, verify accuracy, and make any necessary adjustments to the mappings in DynamoDB. This step ensures that the system continuously improves and adapts to new document formats and variations, maintaining high accuracy and reliability in data extraction and processing.

Impact and Next Steps

The AWS-powered process passed the T+1 test and is delivering an 80-90% reduction in processing time, with further performance improvements expected as the solution is expanded to include additional asset classes. The goal is to convert the current build into a robust generic product API.

The Jefferies AWS roadmap includes leveraging Amazon Bedrock to build an Operations Assistant with AI/ML and Generative AI (GenAI), as well as applying GenAI to boost efficiencies and performance across post-trade operations more generally.
