
Jefferies Streamlines OTC Derivatives Clearing with AWS for T+1 and More

Investment banking firm Jefferies has deployed AWS services to streamline OTC derivatives post-trade operations and meet the new T+1 settlement deadline. The implementation was presented as a case study at the recent AWS Summit in New York.

The session was presented by Jefferies’ executives Sudhakar Paladugu, SVP Corporate Technology, and Manish Mohite, SVP Global Head of Public Cloud.

Despite the best efforts of the International Swaps and Derivatives Association (ISDA), the middle-office OTC derivatives confirmations (confirm) process has remained largely manual due to variations in templates across counterparties. Under the original process, middle-office staff had to read through each third-party confirm and manually check the details against the internal trade records.

Every counterparty had a slightly different format and presentation; some confirms were scanned photocopies of screenshots. With email attachments the dominant communications channel, completing the confirm process manually was cumbersome, and the prospect of automation received an enthusiastic response from the middle-office team.

Jefferies’ journey with AWS began in 2022 with the goal of modernizing the firm’s infrastructure by migrating to the cloud. A CRM platform, data-driven investment advice and applications across front, middle and back offices have followed.

The part of the trade lifecycle in focus for this case study begins after the trade, when the trading desk and counterparty have agreed the terms and the middle office receives the counterparty’s trade confirmation. The manual step of reading, deciphering and checking has been automated through an orchestrated set of AWS tools.

Process Overview

This process begins when a user or an application uploads a confirmation image or PDF file to an Amazon S3 bucket. This initial upload action sets off a series of automated processes designed to analyse and extract data from the document accurately.
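
For orientation, a minimal sketch of this upload step using boto3 is shown below; the bucket name and key convention are illustrative assumptions, not Jefferies’ actual configuration.

```python
# Minimal sketch of the upload step, assuming boto3 and a hypothetical bucket name;
# the real bucket and key conventions are not described in the case study.
import boto3

s3 = boto3.client("s3")

def upload_confirmation(local_path: str, bucket: str = "otc-confirms-inbox") -> str:
    """Upload a counterparty confirmation (PDF or image) and return its S3 key."""
    key = f"incoming/{local_path.rsplit('/', 1)[-1]}"
    s3.upload_file(local_path, bucket, key)
    return key

# Example: upload_confirmation("confirms/cpty_abc_irs_20240715.pdf")
```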

Once the document is uploaded to the S3 bucket, an Amazon S3 event notification is configured to trigger on detecting this action. This notification sends a message to an Amazon SQS (Simple Queue Service) queue. SQS acts as a decoupling agent that ensures the uploaded document is processed asynchronously. By placing the event notification in the queue, SQS helps manage the workload and ensures that the processing service is not overwhelmed by sudden spikes in uploads.
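
A hedged example of how such an S3-to-SQS notification might be configured with boto3 follows; the bucket name, queue ARN and key filters are placeholders, and the SQS queue policy permitting S3 to publish messages is assumed to be in place.

```python
# Illustrative configuration of the S3 -> SQS event notification.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="otc-confirms-inbox",
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                # Placeholder queue ARN; the queue policy must allow S3 to send messages.
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:confirm-uploads",
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {
                        "FilterRules": [
                            {"Name": "prefix", "Value": "incoming/"},
                            {"Name": "suffix", "Value": ".pdf"},
                        ]
                    }
                },
            }
        ]
    },
)
```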

Upon receiving the S3 event notification from the SQS queue, an application or an AWS Lambda function invokes Amazon Textract’s StartDocumentAnalysis API. This API call initiates the process of extracting text, tables, and forms from the uploaded document. Textract uses advanced machine learning powered OCR to accurately analyse and extract structured data from the document for later matching.
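
The sketch below shows one plausible shape for such a Lambda handler, assuming a standard S3-event-in-SQS payload; the function and names are illustrative rather than Jefferies’ actual code.

```python
# Hypothetical Lambda handler: read the S3 event from the SQS message body and
# start asynchronous analysis with Textract's StartDocumentAnalysis API.
import json
from urllib.parse import unquote_plus

import boto3

textract = boto3.client("textract")

def handler(event, context):
    for record in event["Records"]:                       # SQS batch of messages
        s3_event = json.loads(record["body"])             # original S3 notification
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = unquote_plus(s3_record["s3"]["object"]["key"])
            response = textract.start_document_analysis(
                DocumentLocation={"S3Object": {"Bucket": bucket, "Name": key}},
                FeatureTypes=["TABLES", "FORMS"],          # extract tables and key-value pairs
            )
            print(f"Started Textract job {response['JobId']} for s3://{bucket}/{key}")
```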

After initiating the document analysis with Textract, the system saves the job ID and the S3 document key into an Amazon DynamoDB table. When Amazon Textract completes the document analysis, it sends a notification via an Amazon SNS (Simple Notification Service) topic. SNS ensures that the notification is delivered reliably and can trigger further actions in the processing pipeline.
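
A possible sketch of this tracking step is shown below, combining the DynamoDB write with Textract’s NotificationChannel parameter for the SNS completion notice; the table, topic and IAM role names are hypothetical.

```python
# Illustrative job tracking: record the Textract JobId and document key in DynamoDB,
# and have Textract publish completion to an SNS topic via NotificationChannel.
import boto3

dynamodb = boto3.resource("dynamodb")
textract = boto3.client("textract")
jobs_table = dynamodb.Table("confirm-textract-jobs")      # placeholder table name

def start_tracked_analysis(bucket: str, key: str) -> str:
    response = textract.start_document_analysis(
        DocumentLocation={"S3Object": {"Bucket": bucket, "Name": key}},
        FeatureTypes=["TABLES", "FORMS"],
        NotificationChannel={
            "SNSTopicArn": "arn:aws:sns:us-east-1:123456789012:textract-complete",
            "RoleArn": "arn:aws:iam::123456789012:role/TextractPublishToSNS",
        },
    )
    job_id = response["JobId"]
    jobs_table.put_item(Item={"JobId": job_id, "DocumentKey": key, "Status": "IN_PROGRESS"})
    return job_id
```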

Additionally, the extracted results from Textract are placed back into the designated S3 bucket. This structured data is now ready for further downstream processing.
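
One plausible way to persist those results, paging through GetDocumentAnalysis and writing the combined output back to S3 as JSON, is sketched below; the output key naming is an assumption.

```python
# Hypothetical persistence step: collect all Textract result pages and store them in S3.
import json

import boto3

textract = boto3.client("textract")
s3 = boto3.client("s3")

def save_textract_output(job_id: str, bucket: str, document_key: str) -> str:
    blocks, next_token = [], None
    while True:
        kwargs = {"JobId": job_id}
        if next_token:
            kwargs["NextToken"] = next_token
        page = textract.get_document_analysis(**kwargs)
        blocks.extend(page["Blocks"])
        next_token = page.get("NextToken")
        if not next_token:
            break
    out_key = f"textract-output/{document_key}.json"       # assumed naming convention
    s3.put_object(Bucket=bucket, Key=out_key, Body=json.dumps(blocks).encode())
    return out_key
```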

An AWS Lambda function is triggered by the SNS notification to perform a fuzzy Sørensen-Dice match. This function compares the extracted data from Textract with pre-configured mappings stored in DynamoDB. The Sørensen-Dice coefficient, a statistical measure of similarity, helps identify and match the relevant data fields even if there are slight variations or errors in the extracted text. This step returns a confidence score for each extracted field to facilitate the human-in-the-loop process.
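
For illustration, a simple Sørensen-Dice implementation over character bigrams, of the kind such a matching function might use, is sketched below; the field-level scoring helper and the 0.8 auto-accept threshold are assumptions, not details from the case study.

```python
# Sørensen-Dice similarity over character bigrams, plus a field-scoring helper.
def dice_coefficient(a: str, b: str) -> float:
    """Return the Sørensen-Dice similarity of two strings (0.0 to 1.0)."""
    a, b = a.lower().strip(), b.lower().strip()
    if a == b:
        return 1.0
    if len(a) < 2 or len(b) < 2:
        return 0.0
    bigrams_a = [a[i:i + 2] for i in range(len(a) - 1)]
    bigrams_b = [b[i:i + 2] for i in range(len(b) - 1)]
    overlap, remaining = 0, bigrams_b.copy()
    for bg in bigrams_a:
        if bg in remaining:
            overlap += 1
            remaining.remove(bg)
    return 2.0 * overlap / (len(bigrams_a) + len(bigrams_b))

def match_fields(extracted: dict, expected: dict, threshold: float = 0.8) -> dict:
    """Score each extracted field against the internal trade record (threshold is assumed)."""
    results = {}
    for field, expected_value in expected.items():
        score = dice_coefficient(extracted.get(field, ""), expected_value)
        results[field] = {"score": score, "auto_matched": score >= threshold}
    return results

# Example: dice_coefficient("Morgan Stanly", "Morgan Stanley") == 0.88, so the
# match survives the OCR typo while a genuinely different value scores much lower.
```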

After performing the fuzzy match, the Lambda function reads the merged JSON data from DynamoDB, which includes the mappings and matches identified in the previous step. It also accesses the original uploaded documents from Amazon S3 to cross-verify and ensure consistency. This integrated approach ensures that all data points are correctly aligned, and any discrepancies are resolved before the data is used in subsequent steps.
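
A hedged sketch of this reconciliation step follows: it pulls the job record from DynamoDB, generates a pre-signed URL for the original document in S3, and collects low-scoring fields as exceptions for review. All table, attribute and key names are illustrative.

```python
# Hypothetical reconciliation step feeding the review UI.
import boto3

dynamodb = boto3.resource("dynamodb")
s3 = boto3.client("s3")
jobs_table = dynamodb.Table("confirm-textract-jobs")

def build_review_payload(job_id: str, bucket: str) -> dict:
    item = jobs_table.get_item(Key={"JobId": job_id})["Item"]
    # Pre-signed URL lets the review UI display the original confirmation
    # without granting direct access to the bucket.
    doc_url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": item["DocumentKey"]},
        ExpiresIn=3600,
    )
    exceptions = {
        field: result
        for field, result in item.get("Matches", {}).items()
        if not result.get("auto_matched", False)
    }
    return {"jobId": job_id, "documentUrl": doc_url, "exceptions": exceptions}
```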

Amazon API Gateway facilitates secure and efficient interactions between the web UI and the backend processes, allowing users to interact with the document processing pipeline seamlessly.

The final step involves a human-in-the-loop (HITL) interface where users can review the document processing results. This UI allows human operators to analyse the output, verify accuracy, and make any necessary adjustments to the mappings in DynamoDB. This step ensures that the system continuously improves and adapts to new document formats and variations, maintaining high accuracy and reliability in data extraction and processing.

Impact and Next Steps

The AWS-powered process passed the T+1 test and is delivering an 80-90% reduction in processing time, with further performance improvements expected as the solution is expanded to include additional asset classes. The goal is to convert the current build into a robust, generic product API.

The Jefferies AWS roadmap includes using Amazon Bedrock to build an Operations Assistant with AI/ML and Generative AI (GenAI), as well as applying GenAI to boost efficiency and performance across post-trade operations more generally.
