The leading knowledge platform for the financial technology industry

A-Team Insight Blogs

The Single, Simple Goal of Every Data Operation (and the Three Roadblocks to Achieving It)


By Adam Devine, VP Product Marketing, WorkFusion

To compete with rivals, comply with regulations and please customers, financial institutions must achieve one solitary operational goal: better quality data through smarter, more agile and transparent processes that generate end-to-end audit trails while simultaneously using fewer human resources, rationalising technology point solutions and increasing automation without burdening IT resources or compromising data security. It’s simple, really.

As simple as this goal is, most large banks and financial services businesses face three complex challenges in their efforts to achieve it: Data Indigestion, Process Spaghettification, and Productivity Rigidity.

Defining the Three Challenges

Data Indigestion happens to any enterprise business (and especially financial businesses) when data enters the organisation rapidly, in massive volumes and in a wide variety of formats. Data Indigestion in a back office data operation is not unlike what happens to the human body at an all-you-can-eat buffet during a football season happy hour. Both generally result in operational blockages, bloating and lethargy.

Process Spaghettification is what happens as tools, process owners and output requirements change over time. It is exacerbated by two or more distinct data operations converging as a result of a merger or acquisition. The sudden mix of dozens of legacy systems and globally dispersed pools of full-time employees and offshore workforces further spaghettify processes such as client onboarding, trade settlements and Know Your Customer (KYC).

Productivity Rigidity is the most common, and also the most crippling, enterprise data operation ailment. It is a systemic failure of human and machine resources to adapt, optimise and scale, and it is endemic to any large organisation that faces elastic service demands and highly variable incoming data volumes. Peak demand exceeds human capacity, while troughs leave an expensive excess of it. Worse, organisations cannot truly assess the productivity and performance of their human workers at an individual or process level, nor can they match the right task with the right worker at the right time.

One would think that adequate server space should prevent machine Productivity Rigidity, but the problem stems not from capacity but from intelligence. To adapt to process changes, new business needs and new data formats without the extensive IT maintenance that erodes ROI, machines must learn. Most data collection and extraction automation solutions have humans at the core who manually program and tune the automation. Solving machine Productivity Rigidity means flipping the paradigm and putting automation at the core of the process.

Machine Learning: The Solution for the Challenges

Machine learning is a branch of artificial intelligence and cognitive computing. It interacts with human data analysts, learns naturally from their patterns of work, and thrives when applied to the complex, high-volume data processes in enterprise data operations.

Machine learning cures Data Indigestion by watching human data analysts perform day-to-day data categorisation, extraction and remediation work, and by training itself to perform the repetitive tasks. When the process or formats change, the system escalates the unfamiliar task to a human worker and thus retrains itself, avoiding the need for IT maintenance. This efficient cycle of human effort and machine learning exponentially increases the processing capacity of data operations and prevents Data Indigestion.
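
This escalate-and-retrain cycle can be illustrated with a minimal, hypothetical Python sketch. None of these class or function names come from WorkFusion's product or any real library; the "model" is a deliberately crude keyword lookup, standing in for a trained classifier, to show the control flow only: familiar items are handled by the machine, unfamiliar ones are escalated to a human whose answer becomes new training data.

```python
# Minimal human-in-the-loop sketch (illustrative only; all names hypothetical).
# Items the "model" recognises are labelled by the machine; anything unfamiliar
# is escalated to a human analyst, and the human's answer retrains the model.

class EscalatingClassifier:
    def __init__(self):
        self.known = {}  # keyword -> category, learned from human labels

    def classify(self, text, human_analyst):
        for keyword, category in self.known.items():
            if keyword in text:
                return category, "machine"
        # Unfamiliar input: escalate to the human, then learn from the answer.
        category = human_analyst(text)
        self.known[text.split()[0]] = category  # crude stand-in for retraining
        return category, "human"

def analyst(text):
    # Stand-in for a human data analyst reviewing the escalated task.
    return "invoice" if "invoice" in text else "contract"

clf = EscalatingClassifier()
print(clf.classify("invoice 2041 from ACME", analyst))    # first one escalates
print(clf.classify("invoice 7716 from Globex", analyst))  # now machine-handled
```

The key property is that each escalation shrinks the set of inputs that need a human, which is what lets processing capacity grow without proportional headcount.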

Spaghettified processes are untangled and prevented by pairing workflow tools with machine learning. Whereas typical business process management systems are limited to letting human users design workflows, machine learning-enabled workflow platforms can generate human-machine workflows automatically through pattern recognition. For example, a data operation may have no defined process for updating entity data, but a machine learning platform can watch an analyst visit a government registry, download SEC documents, extract business information from those documents and validate a company’s website, then automatically design a distinct automated process for the repetitive tasks while delegating judgment work to human data analysts. Combining human judgment with machine pattern recognition radically improves business processes.
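
The end result of that pattern recognition might look like the sketch below: an observed trace of analyst actions is split into steps the machine can own and steps that stay with a human. This is a hypothetical illustration only; the step names and the hard-coded set of automatable actions are invented for the example, whereas a real platform would learn that split from observation.

```python
# Illustrative sketch: once repetitive steps have been identified in an
# analyst's observed action trace, the platform assigns each step an owner.
# Step names and the AUTOMATABLE set are hypothetical, not a real API.

AUTOMATABLE = {"visit_registry", "download_filing", "extract_fields"}

def build_workflow(observed_steps):
    # Repetitive, learnable steps go to the machine; judgment stays human.
    return [
        (step, "machine" if step in AUTOMATABLE else "human")
        for step in observed_steps
    ]

trace = ["visit_registry", "download_filing", "extract_fields", "validate_website"]
for step, owner in build_workflow(trace):
    print(f"{step}: {owner}")
```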

Machine learning solves Productivity Rigidity by enabling more agile workforce orchestration. Machine learning can assess the accuracy of human work, assign qualifications to workers and generate detailed performance metrics for individual workers based on their output. These metrics can be used to more effectively delegate tasks to the right worker at the right time and ensure the best workers are prioritised. Machine learning also makes automation more agile by enabling it to adapt to new formats and intelligently escalate errors – like optical character recognition misreading a letter or number – to human analysts within the process.
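
Metric-driven task routing of this kind can be sketched in a few lines of Python. This is an assumption-laden illustration, not any vendor's implementation: it simply keeps a per-worker, per-task-type accuracy record and routes each new task to the available worker with the best track record for it.

```python
# Hypothetical sketch of workforce orchestration: track each worker's observed
# accuracy per task type, then assign new tasks to the best available worker.

from collections import defaultdict

class Orchestrator:
    def __init__(self):
        # stats[worker][task_type] = [correct_count, total_count]
        self.stats = defaultdict(lambda: defaultdict(lambda: [0, 0]))

    def record(self, worker, task_type, correct):
        entry = self.stats[worker][task_type]
        entry[0] += int(correct)
        entry[1] += 1

    def accuracy(self, worker, task_type):
        correct, total = self.stats[worker][task_type]
        return correct / total if total else 0.0

    def assign(self, task_type, available_workers):
        # Route the task to the worker with the best track record for it.
        return max(available_workers, key=lambda w: self.accuracy(w, task_type))

orch = Orchestrator()
orch.record("alice", "kyc_review", True)
orch.record("alice", "kyc_review", True)
orch.record("bob", "kyc_review", True)
orch.record("bob", "kyc_review", False)
print(orch.assign("kyc_review", ["alice", "bob"]))  # alice: 100% vs bob: 50%
```

A production system would weight recency, task difficulty and worker availability, but the principle is the same: performance metrics, not static rosters, decide who gets the task.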

The Way Forward

Operations and technology leaders have two paths to achieve the aforementioned goal and overcome the three common challenges. They can commission massive internal technology projects aimed at building artificial intelligence and cognitive learning systems that automate work and optimise processes, or they can subscribe to the new breed of software systems that has taken years and millions of dollars of venture funding to develop. The cleverest of COOs, CIOs and CDOs will likely choose the latter path of least resistance and greatest ROI.

Adam Devine will be speaking at A-Team Group’s forthcoming Data Management Summits in London and New York City.

