The knowledge platform for the financial technology industry

A-Team Insight Blogs

The Single, Simple Goal of Every Data Operation (and the Three Roadblocks to Achieving It)


By Adam Devine, VP Product Marketing, WorkFusion

To compete with rivals, comply with regulations and please customers, financial institutions must achieve one solitary operational goal: better quality data through smarter, more agile and transparent processes that generate end-to-end audit trails while simultaneously using fewer human resources, rationalising technology point solutions and increasing automation without burdening IT resources or compromising data security. It’s simple, really.

As simple as this goal is, most large banks and financial services businesses face three complex challenges in their efforts to achieve it: Data Indigestion, Process Spaghettification, and Productivity Rigidity.

Defining the Three Challenges

Data Indigestion happens to any enterprise business (and especially to financial businesses) when data enters the organisation rapidly, in massive volumes and in a wide variety of formats. Data Indigestion in a back office data operation is not unlike what happens to the human body at an all-you-can-eat buffet during a football season happy hour. Both generally result in operational blockages, bloating, and lethargy.

Process Spaghettification is what happens as tools, process owners and output requirements change over time. It is exacerbated by two or more distinct data operations converging as a result of a merger or acquisition. The sudden mix of dozens of legacy systems and globally dispersed pools of full-time employees and offshore workforces further spaghettify processes such as client onboarding, trade settlements and Know Your Customer (KYC).

Productivity Rigidity is the most common, but also most crippling, enterprise data operation ailment. It is a systemic failure of human and machine resources to adapt, optimise and scale. It is endemic to any large organisation that faces elastic service demands and highly variable incoming data volumes. Peak demands exceed human capacity and there is an expensive excess of human capacity during troughs. Worse, organisations cannot truly assess the productivity and performance of their human workers at an individual or process level, nor can the right task be matched with the right worker at the right time.

One would think that adequate server space should prevent machine Productivity Rigidity, but the problem stems not from capacity but from intelligence. To adapt to process changes, new business needs and new data formats without the extensive IT maintenance that erodes ROI, machines must learn. Most data collection and extraction automation solutions have humans at the core, manually programming and tuning the automation. Solving machine Productivity Rigidity means flipping the paradigm and putting automation at the core of the process.

Machine Learning: The Solution for the Challenges

Machine learning is a branch of artificial intelligence and cognitive computing. Machine learning interacts with human data analysts and naturally learns from their patterns of work. It thrives when applied to the complex, high-volume data processes in enterprise data operations.

Machine learning cures Data Indigestion by watching human data analysts perform day-to-day data categorisation, extraction and remediation work, and by training itself to perform the repetitive tasks. When the process or formats change, the system escalates the unfamiliar task to a human worker and thus retrains itself, avoiding the need for IT maintenance. This efficient cycle of human effort and machine learning exponentially increases the processing capacity of data operations and prevents Data Indigestion.
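The escalate-and-retrain cycle described above can be sketched in a few lines of Python. This is a hedged illustration, not WorkFusion's implementation: the KeywordModel, the analyst callback and the confidence threshold are all invented stand-ins for a real trained classifier and a real human work queue.

```python
class KeywordModel:
    """Toy classifier that 'learns' category keywords from labelled examples."""

    def __init__(self):
        self.keywords = {}  # keyword -> category

    def predict(self, text):
        for word in text.lower().split():
            if word in self.keywords:
                return self.keywords[word], 1.0  # familiar input: high confidence
        return None, 0.0                         # unfamiliar input: no confidence

    def train(self, text, category):
        for word in text.lower().split():
            self.keywords[word] = category


def process(item, model, analyst, threshold=0.9):
    """Route an item: machine if confident, otherwise escalate and retrain."""
    label, confidence = model.predict(item)
    if confidence >= threshold:
        return label, "machine"
    label = analyst(item)        # human analyst resolves the unfamiliar item
    model.train(item, label)     # the model retrains from the human's answer
    return label, "human"
```

The first unfamiliar item goes to the analyst; once the model has seen the analyst's answer, similar items are handled by the machine without IT involvement, which is the cycle the paragraph describes.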

Spaghettified processes are untangled and prevented by pairing workflow tools with machine learning. Whereas typical business process management systems are limited to enabling human users to design workflows, machine learning-enabled workflow platforms can automatically generate human-machine workflows through pattern recognition. For example, a data operation may have no defined process for updating entity data, but a machine learning platform can watch an analyst visit a government registry, download SEC documents, extract business information from the documents and validate a company’s website, and then automatically design a distinct process that automates the repetitive tasks and delegates judgment work to human data analysts. Combining human judgment with machine pattern recognition radically improves business processes.
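One simple way to picture workflow generation through pattern recognition is to mine an analyst's activity log for repeated action sequences and promote a sequence to a candidate workflow once it recurs often enough. The sketch below assumes exact repetition and invented action names; a real platform would apply far richer pattern recognition than this.

```python
from collections import Counter


def mine_workflow(sessions, min_repeats=3):
    """Promote the most frequent exact action sequence to a candidate workflow."""
    if not sessions:
        return None
    counts = Counter(tuple(s) for s in sessions)
    sequence, repeats = counts.most_common(1)[0]
    return list(sequence) if repeats >= min_repeats else None


# Three observed sessions of the entity-data update work described above
sessions = [
    ["visit_registry", "download_filings", "extract_fields", "validate_website"],
] * 3
```

With three identical sessions, `mine_workflow(sessions)` returns the four-step sequence as a workflow; with fewer repetitions it returns nothing, mirroring the idea that the platform only codifies a process once it has observed a stable pattern.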

Machine learning solves Productivity Rigidity by enabling more agile workforce orchestration. Machine learning can assess the accuracy of human work, assign qualifications to workers and generate detailed performance metrics for individual workers based on their output. These metrics can be used to more effectively delegate tasks to the right worker at the right time and ensure the best workers are prioritised. Machine learning also makes automation more agile by enabling it to adapt to new formats and intelligently escalate errors – like optical character recognition misreading a letter or number – to human analysts within the process.
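Metrics-driven delegation can be illustrated with a small routing function: assign each task to the qualified worker with the best measured accuracy on that task type. The worker records and accuracy figures below are invented for the example; a production system would draw them from live, machine-generated performance metrics.

```python
def route_task(task_type, workers):
    """Assign a task to the qualified worker with the best accuracy on it."""
    qualified = [w for w in workers if task_type in w["qualifications"]]
    return max(qualified, key=lambda w: w["accuracy"].get(task_type, 0.0))


workers = [
    {"name": "analyst_a", "qualifications": {"kyc"},
     "accuracy": {"kyc": 0.97}},
    {"name": "analyst_b", "qualifications": {"kyc", "settlement"},
     "accuracy": {"kyc": 0.91, "settlement": 0.99}},
]
```

Here a KYC task goes to the stronger KYC analyst while a settlement task goes to the only qualified (and highly accurate) settlement analyst, which is the "right task to the right worker at the right time" matching the paragraph describes.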

The Way Forward

Operations and technology leaders have two paths to achieve the aforementioned goal and overcome the three common challenges. They can commission massive internal technology projects aimed at building artificial intelligence and cognitive learning systems that automate work and optimise processes, or they can subscribe to the new breed of software systems that has taken years and millions of dollars of venture funding to develop. The cleverest of COOs, CIOs and CDOs will likely choose the latter path of least resistance and greatest ROI.

Adam Devine will be speaking at the forthcoming A-Team Group’s Data Management Summits in London and New York City.

