
A-Team Insight Blogs

Jordan & Jordan Unveils Reference Data Testing Kit


Jordan & Jordan (J&J), a New York-headquartered management and information technology consulting firm, has introduced a Reference Data Testing Framework (RDTF) and will offer it to clients as part of its reference data consulting services.

Developed by J&J over the past two years, RDTF is a customizable framework and set of tools that address the need for automated data testing and verification within reference data projects. It allows for one-time creation of test data records along with an analysis specification that is used to analyze test results. Among the data products supported are Bloomberg Data License and Reuters DataScope.
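J&J has not published RDTF's internal formats, but the core idea of pairing each test data record with its own analysis specification can be sketched in a few lines. The structure, field names and vendor tags below are purely hypothetical, written in Python only to make the concept concrete.

```python
# Hypothetical sketch of a test data record paired with a per-record
# analysis specification; none of these field names are RDTF's actual schema.

test_record = {
    "record_id": "TEST-0001",
    "identifier": "US0378331005",      # test key, here an ISIN
    "source": "vendor_feed",           # e.g. a Bloomberg Data License or Reuters DataScope extract
    "fields": {"coupon": 2.5, "maturity": "2030-06-15", "currency": "USD"},
}

analysis_spec = {
    "record_id": "TEST-0001",
    "expected": {"coupon": 2.5, "maturity": "2030-06-15", "currency": "USD"},
    "tolerances": {"coupon": 0.0001},  # numeric fields may allow a small tolerance
    "mandatory": ["currency"],         # fields that must survive the processing lifecycle
}
```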

Although RDTF is primarily used during the development and integration test cycles of a reference data project, it can also help with vendor selection and offshore testing.

Essentially, the framework takes data from different points in the data-processing lifecycle, compares it to expected results, and reports on exceptions. The exceptions can then be analyzed to determine whether problems lie within the systems, the processes or the data itself.
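As a rough illustration of that compare-and-report step (a sketch built on the hypothetical record and specification structures above, not RDTF's actual code), a per-record analysis might look something like this:

```python
def analyze(record, spec):
    """Compare an observed record against its analysis specification and
    return a list of exceptions; an empty list means the record passed."""
    exceptions = []
    observed = record.get("fields", {})
    for field, expected in spec["expected"].items():
        if field not in observed:
            exceptions.append((spec["record_id"], field, "missing", expected, None))
            continue
        actual = observed[field]
        tolerance = spec.get("tolerances", {}).get(field)
        if tolerance is not None and isinstance(actual, (int, float)):
            if abs(actual - expected) > tolerance:
                exceptions.append((spec["record_id"], field, "out_of_tolerance", expected, actual))
        elif actual != expected:
            exceptions.append((spec["record_id"], field, "mismatch", expected, actual))
    return exceptions
```

Each exception identifies the record, the field and the nature of the discrepancy, which is what allows the later judgment on whether the fault lies with the system, the process or the source data.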

“When you want to validate or arbitrate your reference data, you need a test,” says Richard Shriver, managing director, technology services, at J&J. “We recognized that testing involved repeatable generic processing, and so we evolved some of the custom development software into a reusable toolkit. By using the toolkit within our consulting service, we provide clients a more efficient service – better, faster and cheaper – and give clients the confidence that they are dealing with people who understand the business processes enough to ask the right questions.”

Features of RDTF include:
Record count checks for raw records loaded into the system and reports on missed records.
User-friendly test data creation with accompanying test analysis specification on a per-record basis.
Customizable implementation-specific analysis for test results.
Excel-like reports on test results, details of exceptions and client specific report additions.

Components include a data reader/generator, data loader, analyzer and reporter. Test reports include summary pass/fail, exceptions and detail reports, as well as an array of custom reporting options.
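Purely as an illustration of how such components might fit together (a minimal sketch, not J&J's implementation), a reader-to-reporter loop could reuse the hypothetical analyze function above, perform the missed-record count check from the feature list and summarize pass/fail results:

```python
from collections import Counter

def run_test_cycle(records, specs_by_id, analyze_fn):
    """Minimal reader -> analyzer -> reporter loop: count loaded records,
    flag expected records that never arrived and summarise pass/fail."""
    loaded_ids = {r["record_id"] for r in records}
    missed = sorted(set(specs_by_id) - loaded_ids)   # record count / missed-record check

    exceptions = []
    for record in records:
        spec = specs_by_id.get(record["record_id"])
        if spec is not None:
            exceptions.extend(analyze_fn(record, spec))

    failed_ids = {exc[0] for exc in exceptions}
    summary = Counter("fail" if rec_id in failed_ids else "pass"
                      for rec_id in loaded_ids & set(specs_by_id))

    return {"loaded": len(records), "missed": missed,
            "summary": dict(summary), "exceptions": exceptions}
```

A call such as run_test_cycle([test_record], {"TEST-0001": analysis_spec}, analyze) would report one record loaded, no missed records and a single pass, with the exceptions list feeding the detail reports.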

The J&J Reference Data Consulting Service comprises domain experts covering a broad range of asset and data classes, technology experts who have implemented large-scale, multi-asset reference data systems for major investment banks and hedge funds, and a testing strategy that includes the use of RDTF. The firm also offers consulting for institutions looking to manage reference data test and support teams offshore.

According to Shriver, any reference data implementation project, regardless of whether the software is provided by third-party suppliers or developed in-house, involves a large number of business rules and data definitions specific to each client. These all need to be independently verified and tested.

He says: “If you want to change the reference data process, you need to verify data that comes into and out of the process.”
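One way to picture a client-specific business rule is as a small pluggable check applied to data on the way into and out of the process. The rule below is entirely hypothetical and is only meant to show the kind of implementation-specific validation that has to be independently verified:

```python
def check_settlement_currency(record):
    """Hypothetical client rule: instruments settling in Japan must carry JPY.
    Returns exceptions in the same shape as the analyzer sketch above."""
    fields = record.get("fields", {})
    if fields.get("country_of_settlement") == "JP" and fields.get("currency") != "JPY":
        return [(record["record_id"], "currency", "business_rule_violation",
                 "JPY", fields.get("currency"))]
    return []
```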

Although “there are lots of people selling software and solutions,” Shriver says each client still has a lot of work to do to ensure the system works as expected. That’s where J&J can help.

Shriver says J&J has a growing reference data consulting business. In the past two years, the company has consulted on three major enterprise reference data projects within North America, as well as smaller projects – such as content or software changes – including vendor evaluations and RFI development.

Although pricing for J&J services varies depending on the project, Shriver says a good rule of thumb is to budget 20% of the development and implementation costs for testing and verification.
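To put that rule of thumb in concrete terms, a hypothetical $1 million development and implementation budget would imply setting aside roughly $200,000 for testing and verification.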


