
The Single, Simple Goal of Every Data Operation (and the Three Roadblocks to Achieving It)

By Adam Devine, VP Product Marketing, WorkFusion

To compete with rivals, comply with regulations and please customers, financial institutions must achieve one solitary operational goal: better quality data through smarter, more agile and transparent processes that generate end-to-end audit trails while simultaneously using fewer human resources, rationalising technology point solutions and increasing automation without burdening IT resources or compromising data security. It’s simple, really.

As simple as this goal is, most large banks and financial services businesses face three complex challenges in their efforts to achieve it: Data Indigestion, Process Spaghettification, and Productivity Rigidity.

Defining the Three Challenges

Data Indigestion happens to any enterprise business (and especially financial businesses) when data enters the organisation rapidly, in massive volumes and in a wide variety of formats. Data Indigestion in a back office data operation is not unlike what happens to the human body at an all-you-can-eat buffet during a football season happy hour. Both generally result in operational blockages, bloating and lethargy.

Process Spaghettification is what happens as tools, process owners and output requirements change over time. It is exacerbated by two or more distinct data operations converging as a result of a merger or acquisition. The sudden mix of dozens of legacy systems and globally dispersed pools of full-time employees and offshore workforces further spaghettify processes such as client onboarding, trade settlements and Know Your Customer (KYC).

Productivity Rigidity is the most common, but also the most crippling, enterprise data operation ailment. It is a systemic failure of human and machine resources to adapt, optimise and scale, and it is endemic to any large organisation that faces elastic service demands and highly variable incoming data volumes. Peak demands exceed human capacity, while troughs leave an expensive excess of it. Worse, organisations cannot truly assess the productivity and performance of their human workers at an individual or process level, nor can they match the right task with the right worker at the right time.

One would think that adequate server space should prevent machine Productivity Rigidity, but the problem stems not from capacity but from intelligence. To adapt to process changes, new business needs and new data formats without the extensive IT maintenance that erodes ROI, machines must learn. Most data collection and extraction automation solutions have humans at the core, manually programming and tuning the automation. Solving machine Productivity Rigidity means flipping that paradigm and putting automation at the core of the process.

Machine Learning: The Solution to the Challenges

Machine learning is a branch of artificial intelligence and cognitive computing. It interacts with human data analysts, naturally learns from their patterns of work, and thrives when applied to the complex, high-volume processes of enterprise data operations.

Machine learning cures Data Indigestion by watching human data analysts perform day-to-day data categorisation, extraction and remediation work, and by training itself to perform the repetitive tasks. When the process or formats change, the system escalates the unfamiliar task to a human worker and thus retrains itself, avoiding the need for IT maintenance. This efficient cycle of human effort and machine learning exponentially increases the processing capacity of data operations and prevents Data Indigestion.
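
To make this escalate-and-retrain cycle concrete, the sketch below shows one minimal way it could work, assuming a confidence-thresholded text classifier that learns online. The document categories, the 0.85 threshold and the ask_human_analyst review queue are all hypothetical, invented for illustration; this is a sketch of the general technique, not a description of any particular vendor's implementation.

```python
# Minimal human-in-the-loop sketch: an online text classifier handles familiar
# documents itself and escalates unfamiliar ones to a human analyst, folding
# the analyst's answer straight back into the model. Illustrative only.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

CLASSES = ["invoice", "kyc_form", "trade_confirm"]  # hypothetical categories
CONFIDENCE_FLOOR = 0.85                             # below this, escalate

vectorizer = HashingVectorizer(n_features=2**16)
model = SGDClassifier(loss="log_loss")              # log loss -> predict_proba
model_is_fit = False

def ask_human_analyst(document: str) -> str:
    """Stand-in for a real review queue; a person supplies the label."""
    return input(f"Classify {document[:40]!r}: ")

def process(document: str) -> str:
    global model_is_fit
    features = vectorizer.transform([document])
    if model_is_fit:
        proba = model.predict_proba(features)[0]
        if proba.max() >= CONFIDENCE_FLOOR:
            return model.classes_[proba.argmax()]   # machine handles it
    # Unfamiliar input: escalate to a human, then retrain on the answer.
    label = ask_human_analyst(document)
    model.partial_fit(features, [label], classes=CLASSES)
    model_is_fit = True
    return label
```

In a loop like this, every escalation doubles as a training example, so the share of documents the machine can handle on its own grows over time without an IT change request.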

Spaghettified processes are untangled and prevented by pairing workflow tools with machine learning. Whereas typical business process management systems rely on human users to design workflows, workflow platforms enabled by machine learning can generate human-machine workflows automatically through pattern recognition. For example, there may be no defined process in a data operation for updating entity data, but a machine learning platform can watch an analyst visit a government registry, download SEC documents, extract business information from them and validate a company's website, then automatically design a process that completes the repetitive tasks and delegates judgment work to human data analysts. Combining human judgment with machine pattern recognition radically improves business processes.
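
As a toy illustration of the pattern-recognition idea, the sketch below mines a repeatable step sequence from hypothetical analyst activity logs and routes each step to automation or to a human. The session logs, step names and judgment-step set are all invented; a real platform would learn these from observed behaviour rather than a hard-coded list.

```python
# Toy illustration of mining a repeatable workflow from analyst activity logs.
# Each session is one analyst completing the same entity-update job; steps
# present in most sessions become the automated backbone of the process.
from collections import Counter

sessions = [  # hypothetical event logs
    ["visit_registry", "download_sec_filing", "extract_fields",
     "validate_website", "approve_record"],
    ["visit_registry", "download_sec_filing", "extract_fields",
     "validate_website", "approve_record"],
    ["visit_registry", "download_sec_filing", "extract_fields",
     "validate_website", "reject_record"],
]

JUDGMENT_STEPS = {"validate_website"}  # invented marker for human-only work

def mine_workflow(sessions, support=0.8):
    """Keep steps, in first-seen order, present in >= `support` of sessions."""
    presence = Counter(step for s in sessions for step in dict.fromkeys(s))
    threshold = support * len(sessions)
    ordered = dict.fromkeys(step for s in sessions for step in s)
    return [step for step in ordered if presence[step] >= threshold]

for step in mine_workflow(sessions):
    route = "human analyst" if step in JUDGMENT_STEPS else "automation"
    print(f"{step:22s} -> {route}")
```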

Machine learning solves Productivity Rigidity by enabling more agile workforce orchestration. It can assess the accuracy of human work, assign qualifications and generate detailed performance metrics for individual workers based on their output. These metrics can be used to delegate tasks to the right worker at the right time and to ensure the best workers are prioritised. Machine learning also makes automation more agile by enabling it to adapt to new formats and intelligently escalate errors – like optical character recognition misreading a letter or number – to human analysts within the process.
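
Here is a minimal sketch of what metric-driven routing might look like, assuming workers accumulate accuracy scores from audited ("gold") tasks and each new task goes to the most accurate qualified worker. The names, skills and audit results are invented for the example.

```python
# Sketch of metric-driven task routing: workers accumulate accuracy scores
# from audited ("gold") tasks, and each new task goes to the most accurate
# qualified worker. Names, skills and audit results are invented.
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    skills: set
    correct: int = 0
    audited: int = 0

    @property
    def accuracy(self) -> float:
        return self.correct / self.audited if self.audited else 0.0

    def record_audit(self, was_correct: bool) -> None:
        self.audited += 1
        self.correct += int(was_correct)

def route_task(task_skill: str, workers: list["Worker"]) -> "Worker":
    qualified = [w for w in workers if task_skill in w.skills]
    if not qualified:
        raise LookupError(f"no worker qualified for {task_skill!r}")
    return max(qualified, key=lambda w: w.accuracy)

workers = [Worker("ana", {"kyc", "onboarding"}), Worker("raj", {"kyc"})]
workers[0].record_audit(True); workers[0].record_audit(False)  # 50% accurate
workers[1].record_audit(True); workers[1].record_audit(True)   # 100% accurate
print(route_task("kyc", workers).name)  # -> raj
```

The same scores that drive routing also give managers the individual-level productivity view that, as noted above, most operations lack today.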

The Way Forward

Operations and technology leaders have two paths to achieve the aforementioned goal and overcome the three common challenges. They can commission massive internal technology projects aimed at building artificial intelligence and cognitive learning systems that automate work and optimise processes, or they can subscribe to the new breed of software systems that has taken years and millions of dollars of venture funding to develop. The cleverest of COOs, CIOs and CDOs will likely choose the latter path of least resistance and greatest ROI.

Adam Devine will be speaking at the forthcoming A-Team Group’s Data Management Summits in London and New York City.
