Goldman Sachs has taken a step-by-step approach to developing a centralised instrument reference database to support its global operations, according to Jim Perry, vice president of product data quality at the investment bank. Speaking at FIMA in London earlier this month, in addition to his earlier panel slot on regulation, Perry elaborated on how the firm began with US listed equities and migrated each instrument database from the individual business lines to a centralised repository sitting under the operations function.
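For illustration only, a minimal sketch of what consolidating per-business-line instrument databases into a single golden copy might look like. The record fields, the choice of ISIN as the key and the conflict handling are assumptions made for the example, not the firm's actual schema or process.

```python
from dataclasses import dataclass

# Hypothetical, simplified shape of a centralised instrument record;
# field names are illustrative, not an actual production schema.
@dataclass(frozen=True)
class InstrumentRecord:
    isin: str          # identifier assumed here to key the golden copy
    ticker: str
    exchange_mic: str  # listing venue (ISO 10383 MIC)
    asset_class: str   # e.g. "equity", starting with US listed equities
    currency: str

def migrate_to_central(
    business_line_dbs: list[dict[str, InstrumentRecord]],
) -> dict[str, InstrumentRecord]:
    """Fold per-business-line instrument databases into one repository.

    Conflicting versions of the same instrument are not silently
    overwritten; they are flagged for the data management team to resolve.
    """
    central: dict[str, InstrumentRecord] = {}
    conflicts: list[str] = []
    for db in business_line_dbs:
        for isin, record in db.items():
            if isin in central and central[isin] != record:
                conflicts.append(isin)  # surface for manual reconciliation
            else:
                central[isin] = record
    if conflicts:
        print(f"{len(conflicts)} instruments need reconciliation, e.g. {conflicts[:5]}")
    return central
```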
The main driver behind the move was the exposure of Goldman’s reference data directly to end clients via the internet, said Perry. The firm began with the US, moved on to Europe and finally tackled its Asia-based operations. “We built upon each success story to tackle the next and tried to take into account the different uses of the data by different functions such as for client reporting or risk,” said Perry. This global footprint also complicated matters, however, given the different regulatory regimes in place in each country and the need to meet varying data requirements.
The rationale behind the move to centralise was that the data management function knew more about data quality issues than the front office and other functions, and was therefore better placed to deal with them. “If data is controlled too far downstream, then data quality can suffer,” he contended. “If you are serious about reference data, you need to ring-fence it and put it under the control of a team whose sole function is to ensure quality.”
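A rough sketch of the ring-fencing idea, assuming a simple validation layer that every change must clear before it reaches the golden copy. The rule names and checks are invented for the example and are not the bank's actual controls.

```python
from typing import Callable, Optional

# A validation rule returns an error message, or None if the record passes.
ValidationRule = Callable[[dict], Optional[str]]

def isin_looks_valid(record: dict) -> Optional[str]:
    # Length check only for the sketch; a real rule would verify the check digit.
    isin = record.get("isin", "")
    return None if len(isin) == 12 else "bad ISIN"

def currency_is_known(record: dict) -> Optional[str]:
    return None if record.get("currency") in {"USD", "EUR", "GBP", "JPY", "HKD"} else "unknown currency"

class RingFencedStore:
    """Only the data management function's update() can reach the golden copy."""

    def __init__(self, rules: list[ValidationRule]):
        self._rules = rules
        self._golden: dict[str, dict] = {}

    def update(self, record: dict) -> bool:
        errors = [e for rule in self._rules if (e := rule(record))]
        if errors:
            # Rejected upstream, before any downstream consumer sees bad data.
            print(f"rejected {record.get('isin')}: {errors}")
            return False
        self._golden[record["isin"]] = record
        return True
```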
The data management function currently has 24/6 coverage and is therefore spread over five locations, each with a technical presence, he explained. The focus was initially on supporting the clearing and settlement function, but is now increasingly on pre-trade data support, which makes the timeliness of data much more important, said Perry. “The time scale is no longer end of day, it is now before trading.”
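One way to picture the shift from an end-of-day to a pre-trade data SLA is a freshness check against a regional cutoff. The cutoff times and the 24-hour refresh window below are assumptions made for the sketch, and timestamps are assumed to be timezone-aware UTC.

```python
from datetime import datetime, time, timedelta, timezone

# Illustrative pre-trade cutoffs per region (UTC, ignoring daylight saving).
PRE_TRADE_CUTOFFS_UTC = {
    "US": time(13, 0),    # ahead of the New York open
    "EU": time(7, 30),    # ahead of the London open
    "ASIA": time(0, 30),  # ahead of the Tokyo/Hong Kong open
}

def is_fresh_for_trading(last_refreshed: datetime, region: str, now: datetime) -> bool:
    """True if the record was refreshed in the 24 hours up to today's pre-trade cutoff."""
    cutoff = datetime.combine(now.date(), PRE_TRADE_CUTOFFS_UTC[region], tzinfo=timezone.utc)
    window_start = cutoff - timedelta(hours=24)
    return window_start < last_refreshed <= cutoff
```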
Perry noted that the overall implementation “could have gone better”, as the team had to fill its central repository directly with downstream data without tackling data quality issues first. The downstream data errors took a while to resolve, and he suggested that a vendor solution, rather than an internal build, might have been an easier option overall, giving the team more time to tackle quality issues at the outset rather than carrying the impurities upstream.
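That lesson might be sketched as validating a downstream feed at the point of migration and quarantining failures for remediation, rather than loading the raw data and cleansing it afterwards. The rule shape and function names here are hypothetical.

```python
from typing import Callable, Optional

# A rule returns an error message, or None if the record passes.
Rule = Callable[[dict], Optional[str]]

def migrate_with_quarantine(
    feed: list[dict], rules: list[Rule]
) -> tuple[list[dict], list[tuple[dict, list[str]]]]:
    """Split a downstream feed into records fit for the golden copy and an
    exceptions queue for the data quality team to work through first."""
    clean: list[dict] = []
    quarantined: list[tuple[dict, list[str]]] = []
    for record in feed:
        errors = [e for rule in rules if (e := rule(record))]
        if errors:
            quarantined.append((record, errors))
        else:
            clean.append(record)
    return clean, quarantined
```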
As for ongoing challenges, Perry indicated that data completeness is key to achieving straight-through processing (STP), as is understanding the needs of downstream consumers of the data. The firm has set up a steering committee drawn from the data and IT functions to determine the resources needed for new projects, he explained. “Over time we have been able to turn off legacy systems and downstream consumers now recognise reference data as an asset,” he said.
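A completeness check of that kind could look something like the following sketch; the list of fields deemed mandatory for STP is an assumption made for the example.

```python
# Illustrative completeness check: a trade can only flow straight through if
# the instrument record carries every field the settlement chain needs.
MANDATORY_FOR_STP = ("isin", "currency", "exchange_mic", "settlement_cycle", "asset_class")

def stp_completeness(record: dict) -> tuple[bool, list[str]]:
    """Return whether the record is STP-ready and which fields are missing."""
    missing = [f for f in MANDATORY_FOR_STP if not record.get(f)]
    return (not missing, missing)

# Example: a record missing its settlement cycle would be routed to an
# exceptions queue rather than failing silently further downstream.
ready, gaps = stp_completeness({"isin": "US0378331005", "currency": "USD",
                                "exchange_mic": "XNAS", "asset_class": "equity"})
# ready == False, gaps == ["settlement_cycle"]
```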