
The Single, Simple Goal of Every Data Operation (and the Three Roadblocks to Achieving It)

By Adam Devine, VP Product Marketing, WorkFusion

To compete with rivals, comply with regulations and please customers, financial institutions must achieve one solitary operational goal: better quality data through smarter, more agile and transparent processes that generate end-to-end audit trails while simultaneously using fewer human resources, rationalising technology point solutions and increasing automation without burdening IT resources or compromising data security. It’s simple, really.

As simple as this goal is, most large banks and financial services businesses face three complex challenges in their efforts to achieve it: Data Indigestion, Process Spaghettification, and Productivity Rigidity.

Defining the Three Challenges

Data Indigestion afflicts any enterprise business (and financial businesses especially) when data enters the organisation rapidly, in massive volumes and in a wide variety of formats. Data Indigestion in a back office data operation is not unlike what happens to the human body at an all-you-can-eat buffet during a football season happy hour: both generally result in operational blockages, bloating and lethargy.

Process Spaghettification is what happens as tools, process owners and output requirements change over time. It is exacerbated when two or more distinct data operations converge as a result of a merger or acquisition. The sudden mix of dozens of legacy systems and globally dispersed pools of full-time employees and offshore workforces further spaghettifies processes such as client onboarding, trade settlements and Know Your Customer (KYC).

Productivity Rigidity is the most common, but also most crippling, enterprise data operation ailment. It is a systemic failure of human and machine resources to adapt, optimise and scale, and it is endemic to any large organisation that faces elastic service demands and highly variable incoming data volumes. Peak demand exceeds human capacity, while troughs leave an expensive excess of it. Worse, organisations cannot truly assess the productivity and performance of their human workers at an individual or process level, nor can the right task be matched with the right worker at the right time.

One would think that adequate server space should prevent machine Productivity Rigidity, but the problem stems not from capacity but from intelligence. To adapt to process changes, new business needs and new data formats without the extensive IT maintenance that erodes ROI, machines must learn. Most data collection and extraction automation solutions have humans at the core, manually programming and tuning the automation. Solving machine Productivity Rigidity means flipping that paradigm and putting automation at the core of the process.

Machine Learning: The Solution for the Challenges

Machine learning is a branch of artificial intelligence and cognitive computing. It interacts with human data analysts, learning naturally from their patterns of work, and it thrives when applied to the complex, high-volume processes of enterprise data operations.

Machine learning cures Data Indigestion by watching human data analysts perform day-to-day data categorisation, extraction and remediation work, and by training itself to perform the repetitive tasks. When the process or formats change, the system escalates the unfamiliar task to a human worker and retrains itself on the human's answer, avoiding the need for IT maintenance. This efficient cycle of human effort and machine learning dramatically increases the processing capacity of data operations and prevents Data Indigestion.
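To make that cycle concrete, here is a minimal sketch of an escalate-and-retrain loop in Python. The classifier, the confidence threshold and the ask_human interface are illustrative assumptions made for this article, not any vendor's actual API.

# A minimal sketch of the escalate-and-retrain cycle described above.
# Model choice, threshold and the ask_human callback are illustrative
# assumptions, not any particular product's API.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

CONFIDENCE_THRESHOLD = 0.9  # below this, the task goes to a human analyst

# Bootstrap the model from decisions human analysts have already made.
docs = ["invoice net 30 days", "trade confirm FX spot", "invoice due on receipt"]
labels = ["invoice", "trade_confirm", "invoice"]
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(docs, labels)

def classify(document, ask_human):
    # Routine documents are handled by the machine; unfamiliar ones are
    # escalated, and the human's answer becomes new training data.
    probs = model.predict_proba([document])[0]
    label, confidence = model.classes_[probs.argmax()], probs.max()
    if confidence >= CONFIDENCE_THRESHOLD:
        return label                       # routine case: fully automated
    human_label = ask_human(document)      # escalate the unfamiliar format
    docs.append(document)
    labels.append(human_label)
    model.fit(docs, labels)                # retrain without IT involvement
    return human_label

The point of the sketch is the shape of the loop: confident predictions never touch a human, and every escalation makes the next one less likely.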

Spaghettified processes are untangled, and prevented, by pairing workflow tools with machine learning. Whereas typical business process management systems merely let human users design workflows, machine learning-enabled workflow platforms can generate human-machine workflows automatically through pattern recognition. For example, a data operation may have no defined process for updating entity data, but a machine learning platform can watch an analyst visit a government registry, download SEC documents, extract business information from them and validate a company's website, then automatically design a distinct automated process for the repetitive tasks while delegating judgment work to human data analysts. Combining human judgment with machine pattern recognition radically improves business processes.
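As an illustration of what such a mined workflow might look like, the sketch below represents the entity-update example as an ordered mix of automated steps and human-judgment steps. The step names and the Step type are assumptions made for this example, not a real platform's schema.

# Illustrative sketch: a workflow mined from an analyst's observed steps,
# expressed as an ordered mix of automated and human-judgment tasks.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    automated: bool                # True: machine runs it; False: an analyst does
    run: Callable[[dict], dict]

def fetch_registry(ctx):
    ctx["registry_record"] = "...fetched from government registry..."
    return ctx

def extract_sec_fields(ctx):
    ctx["filing_fields"] = "...extracted from downloaded SEC documents..."
    return ctx

def validate_website(ctx):
    ctx["website_ok"] = None       # a judgment call, left to the analyst
    return ctx

entity_update = [
    Step("fetch government registry record", automated=True, run=fetch_registry),
    Step("extract business information from SEC filings", automated=True, run=extract_sec_fields),
    Step("validate the company website", automated=False, run=validate_website),
]

def execute(workflow, ctx, delegate_to_analyst):
    # Repetitive steps run automatically; judgment steps route to a human.
    for step in workflow:
        ctx = step.run(ctx) if step.automated else delegate_to_analyst(step, ctx)
    return ctx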

Machine learning solves Productivity Rigidity by enabling more agile workforce orchestration. It can assess the accuracy of human work, assign qualifications to workers and generate detailed performance metrics for each worker based on output. These metrics make it possible to delegate tasks to the right worker at the right time and to prioritise the best workers. Machine learning also makes automation itself more agile, enabling it to adapt to new formats and to intelligently escalate errors – like optical character recognition misreading a letter or number – to human analysts within the process.
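A sketch of that metrics-driven routing follows: among qualified workers, the task goes to the one with the best measured accuracy for that task type. The Worker fields, task types and accuracy numbers are illustrative assumptions, not real product data.

# Sketch of metrics-driven task routing; all names and numbers here are
# hypothetical stand-ins for platform-generated performance metrics.
from dataclasses import dataclass, field

@dataclass
class Worker:
    name: str
    qualifications: set
    accuracy: dict = field(default_factory=dict)  # task type -> measured accuracy

def route(task_type, workers):
    # Right task, right worker, right time: among qualified workers,
    # prefer the one with the best track record on this task type.
    qualified = [w for w in workers if task_type in w.qualifications]
    if not qualified:
        raise LookupError(f"no worker qualified for {task_type}")
    return max(qualified, key=lambda w: w.accuracy.get(task_type, 0.0))

workers = [
    Worker("ana", {"kyc_review", "ocr_exception"}, {"ocr_exception": 0.98}),
    Worker("ben", {"ocr_exception"}, {"ocr_exception": 0.91}),
]
print(route("ocr_exception", workers).name)   # -> ana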

The Way Forward

Operations and technology leaders have two paths to achieve the aforementioned goal and overcome the three common challenges. They can commission massive internal technology projects aimed at building artificial intelligence and cognitive learning systems that automate work and optimise processes, or they can subscribe to the new breed of software systems that have taken years and millions of dollars of venture funding to develop. The cleverest of COOs, CIOs and CDOs will likely choose the latter path of least resistance and greatest ROI.

Adam Devine will be speaking at the forthcoming A-Team Group’s Data Management Summits in London and New York City.
