The leading knowledge platform for the financial technology industry

A-Team Insight Blogs

Q&A: George Salemie of RedDot Networks Talks Latency Monitoring and FPGAs

We came across RedDot Networks at June’s SIFMA show, and were pleasantly surprised that they are based here in Austin, doing interesting things in the worlds of latency management and FPGA development. So IntelligentTradingTechnology.com caught up with founder George Salemie to find out more about the company, and where it is heading.

Q: When did RedDot Networks get started, and what was the market need you were/are looking to fill?

A: RedDot Networks was founded in 2009. After working for a number of large corporations as well as a number of startups in the production network infrastructure, pre-deployment test/measurement and management/monitoring spaces, we identified the tremendous opportunity in the fragmented but rapidly growing space of network management/monitoring, specifically in network monitoring infrastructure.

Network monitoring infrastructure consists of TAPs (traffic access points), load balancers and aggregation switches … all of which are commodity products. To differentiate RedDot Networks and address the need for greater visibility, we leveraged FPGA technology to provide deep packet inspection, which RedDot Networks calls Pinpoint Packet Inspection. Pinpoint Packet Inspection overcomes the limits of existing L2-to-L4 filtering by identifying a character string anywhere in the packet.
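As a software analogue of that distinction (an illustrative sketch only, not RedDot's FPGA implementation), classic L2-to-L4 filtering can only match fixed header fields, while deep packet inspection can match a byte string at any offset in the frame:

```python
def l2_l4_filter(frame: bytes, dst_port: int) -> bool:
    """Classic L2-L4 filtering: can only inspect fixed header fields.
    Assumes an untagged IPv4/TCP frame (14-byte Ethernet header)."""
    ihl = (frame[14] & 0x0F) * 4            # IPv4 header length in bytes
    tcp_off = 14 + ihl                      # start of TCP header
    port = int.from_bytes(frame[tcp_off + 2:tcp_off + 4], "big")
    return port == dst_port

def pinpoint_match(frame: bytes, needle: bytes) -> int:
    """Deep packet inspection: find a byte string anywhere in the
    packet, headers or payload. Returns the offset, or -1."""
    return frame.find(needle)
```

A port filter cannot, for example, pick out all packets carrying a particular FIX Client Order ID; a content match anywhere in the packet can.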

One of our financial customers asked if we could leverage FPGA technology for a low-latency, pre-trade risk management system. So now we provide custom FPGA solutions for low-latency applications. In the financial markets, these low-latency applications include: pre-trade risk checks, market data integration, options valuation and algorithmic trading.

Q: One of your products is Chronos. What does it do?

A: Chronos is a precise latency measurement appliance. It was also developed based upon customer demand – primarily due to the lack of flexibility and high costs of existing solutions.

Q: How do you differentiate Chronos from other offerings in the latency measurement space?

A: Besides being a lower cost and more flexible solution, Chronos’ biggest differentiator is RedDot Networks’ software development kit called SAM (Securities Application Monitor). SAM provides the financial industry’s first correlated one-way latency measurement of pre-trade risk. Basically, customers can track a trade and its corresponding execution based upon Client Order ID.
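The correlation idea can be sketched in a few lines (a simplified illustration, not SAM itself; it assumes SOH-delimited FIX messages and capture timestamps in nanoseconds): pair each order with its execution report by Client Order ID (FIX tag 11) and subtract the timestamps.

```python
def fix_field(msg: str, tag: int) -> str:
    """Extract a tag's value from a SOH-delimited FIX message."""
    for field in msg.split("\x01"):
        if field.startswith(f"{tag}="):
            return field.split("=", 1)[1]
    return ""

def correlate(orders, executions):
    """orders/executions: lists of (timestamp_ns, fix_message).
    Returns {ClOrdID: one_way_latency_ns} for matched pairs."""
    sent = {fix_field(m, 11): ts for ts, m in orders}       # ClOrdID -> send time
    return {
        fix_field(m, 11): ts - sent[fix_field(m, 11)]       # exec time - send time
        for ts, m in executions
        if fix_field(m, 11) in sent
    }
```

The precision of the result is, of course, only as good as the timestamps fed in, which is where the hardware time-stamping discussed below comes in.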

Q: Where are you seeing demand for Chronos – what kinds of firms are adopting it and for what applications?

A: Since Chronos is a hardware-based time-stamping platform, it is applicable for any firm that requires precise (10 nanosecond resolution) latency measurement. So financial institutions (e.g. exchanges) are adopting the platform.

Q: Another product is Symphony. What is it, and how did it come about?

A: At RedDot Networks, we are very excited about the Symphony platform, which combines the strengths of an FPGA solution with the simplicity of an Intel-based processor running an RTOS or Linux. Symphony came about to provide a platform for rapid deployment of low-latency solutions. Symphony utilises a split-plane architecture: the data plane runs on the FPGA and the control and management run on Linux or an RTOS. Communications occur over the PCIe bus.
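A toy sketch of the split-plane idea (hypothetical register names and layout; in a real system the control plane would go through a PCIe driver to the FPGA's BAR registers, not Python): the control plane writes configuration into a memory-mapped register file, and the data plane reads it on every packet.

```python
class RegisterFile:
    """Stand-in for a PCIe BAR: word-addressed 32-bit registers."""
    def __init__(self, size_words: int):
        self.regs = [0] * size_words

    def write32(self, offset: int, value: int) -> None:
        self.regs[offset] = value & 0xFFFFFFFF

    def read32(self, offset: int) -> int:
        return self.regs[offset]

# Hypothetical register layout for a single risk limit
REG_ENABLE, REG_MAX_QTY = 0, 1

bar = RegisterFile(16)
bar.write32(REG_MAX_QTY, 5000)   # control plane sets a risk limit...
bar.write32(REG_ENABLE, 1)       # ...then arms the data plane

def data_plane_check(order_qty: int) -> bool:
    """What the FPGA data path would do per packet, at line rate."""
    return bar.read32(REG_ENABLE) == 1 and order_qty <= bar.read32(REG_MAX_QTY)
```

The point of the split is that the latency-critical path (the check) never leaves the FPGA, while the slow-changing configuration lives on the Linux/RTOS side.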

Q: What kind of applications is Symphony being used for today? Are there any other applications where you think it will be relevant?

A: Symphony is primarily used for pre-trade risk management today; however, it is also relevant for market data integration, options valuation and high frequency algorithmic trading.

Q: What kind of performance increases are you seeing using FPGAs, compared to a traditional x86 server?

A: So many others have answered this question by stating the performance increases are at least 100x over traditional servers. Perhaps the most accurate way to answer this question is that performance increases can vary from 10x to 100x depending upon the application and existing x86 server as well as how, where and when performance is measured.

As an aside, latency measurement can be a very confusing topic for many organisations. Some groups measure latency in the OS using tcpdump, which does not take into consideration the physical (PHY/MAC/Serialisation) latency and is non-deterministic. Other groups use FPGA clock cycles from simulation. Still other groups use hardware-based time stamps, which of course are the most precise. However, you still need to know what you are measuring when using a hardware-based time stamp.

As an example, some organisations have stated that their pre-trade risk check takes two microseconds but never define:

a) Speed (10G or 1G)
b) Media type (fiber or copper)
c) Time (under heavy load or no load)
d) What (a simple single order quantity risk check, or a complex risk check which requires memory access off the FPGA, modification and correction (length, TCP and FIX checksums and Ethernet CRC))
e) How (via tcpdump in the OS or port to port using a HW time-stamp)
f) Where (on the switch port, on the appliance port, in the OS or just FPGA clock cycles).
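A quick back-of-the-envelope calculation shows why item (a) alone matters: the serialisation (wire) time for the same frame differs by 10x between 1G and 10G, before any risk-check logic even runs.

```python
def serialization_ns(frame_bytes: int, line_rate_gbps: float) -> float:
    """Time to clock a frame onto the wire, in nanoseconds.
    bits / (Gbit/s) conveniently yields nanoseconds directly."""
    return frame_bytes * 8 / line_rate_gbps

# A 256-byte frame: 2048 ns at 1G vs 204.8 ns at 10G, so a quoted
# "two microsecond" figure means very different things per link speed.
```

Whether a published figure includes this wire time depends entirely on items (e) and (f), which is exactly the author's point.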

Q: And what about other benefits – such as power consumption, heat dissipation?

A: You are correct. Among the other benefits of FPGAs are low power consumption and less heat dissipation compared to the traditional servers we see in data centres.

However, the benefits go far beyond those. For a variety of reasons (including the decline of ASICs), we are seeing unprecedented growth in the use of FPGAs and the demand for FPGA engineers. FPGA technology is used by nearly all storage and network equipment manufacturers as well as government agencies around the world. One FPGA card manufacturer is funded by a foreign government agency.

One measure of the growth of FPGA technology is the demand for FPGA engineers. One website lists openings for over 3,500 FPGA positions.

Q: How are you overcoming the oft-cited challenges of developing applications for FPGAs? Where are you seeing improvements in this area? And where does the pain continue to be felt?

A: Let me respond by first listing three of the challenges, which are 1) battling with the vendor tools to generate builds, 2) accurate module and top-level (system) simulations and 3) timing closure.

The primary way to overcome the challenges with application development on FPGAs is by having experienced developers that are familiar with the tools and capable of implementing simulations. RedDot Networks is fortunate to have excellent senior, US-based, FPGA and low-level driver engineers that are experts in VHDL and Verilog as well as proficient with development tools and capable of implementing simulations to ensure the FPGA code is working before attempting a build.

You may have noticed I listed low-level driver engineers because at some point the FPGA needs to communicate with the outside world and with other software applications for management and control. Experienced low-level driver engineers are essential to make this occur. The best analogy for an FPGA-based product without a low-level driver engineer is a Bugatti Veyron Super Sport that cannot turn or brake. It is still the fastest car in the world; however, it cannot be controlled or managed.

While there are improvements in the tools used to build code (place and route the logic), the biggest pain is still felt in the area of tools … from the perspective of usage as well as the time it takes to place and route designs that use a significant amount of logic on the FPGA. Another area that causes many organisations great pain is finding and recruiting skilled FPGA developers.

Q: Are there any future plans/directions that you can share with us?

A: Sure. Our plan for RedDot Networks is to continue to grow in a challenging economy by expanding the applications implemented using FPGAs. To do this, we must create standards by which latency is measured (to eliminate the unsubstantiated claims made in the past) and continue to raise awareness of FPGA capabilities for being the lowest latency and most deterministic solution (as they continue to become more commonplace).
