A lot of rhetoric has been bandied about of late with regard to the paramount importance of data quality to the risk management function. While these discussions are important in raising data quality’s profile and keeping the issue on the agenda, they don’t provide tangible information on the impact that poor data quality actually has on risk modelling, which is where the International Monetary Fund’s (IMF) recent research paper comes into play…
Handily entitled “Into the Great Unknown: Stress Testing with Weak Data” (which meant I noticed its potential relevance to the reference data function in the first place), the paper provides examples of how data quality impacts regulatory stress testing, which it describes as the risk tool “du jour” in the post-crisis environment. The paper will, no doubt, serve as yet more ammunition for the regulatory community to use in its overall push to improve the underlying data quality of firms’ regulatory reports.
Although the authors aim to highlight the difficulties faced by “lower income” countries, where the data quality issue is deemed to be much more challenging, the findings are equally relevant to all other jurisdictions engaged in stress testing. Indeed, data quality is a serious issue that could cause “more harm than good if the flawed findings cause undue consternation or lead to inappropriate decisions and actions” in any case. The use of hypothetical, but numerical and thus tangible, examples of stress tests carried out on the basis of poor-quality data highlights this serious potential for error.
The paper demonstrates the impact of having to make assumptions around various macroeconomic variables when such data are unavailable, hence the addition of “ad hoc shocks”. For example, when individual bank data are missing or incomplete, “back of the envelope” stress tests are performed on the system as a whole using some form of aggregated data. The findings of such tests should therefore be “interpreted with caution”, note the authors.
“The information derived from shocks to the aggregate system could mask problems among individual banks,” says the paper. “In our example, a shock representing a 400% increase in non-performing loans across the board would result in the system’s capital adequacy ratio declining by 4.7 percentage points to 10.4%.” In this particular example, the overall capital adequacy ratio would therefore sit 1.6 percentage points below the required minimum of 12%, meaning the system fails the test overall.
The performance of individual banks could be obscured completely by the use of this aggregate data. In the example, one bank’s capital adequacy ratio declines by 9.3 percentage points, whereas another’s falls by a more moderate 3.4 points. “Thus, focusing on the aggregate outcome alone could obscure the possibility that a particular institution may be very vulnerable with potentially systemic consequences,” note the authors.
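To make the masking effect concrete, here is a minimal Python sketch of the same kind of “back of the envelope” calculation. To be clear, this is not the IMF’s methodology: the two banks, their balance sheet figures, the flat 12% minimum applied per bank and the assumption that new non-performing loans are written off directly against capital are all invented simplifications. The point is only that an aggregate capital adequacy ratio can clear the bar while an individual institution breaches it.

```python
# Illustrative sketch only: the bank figures below are hypothetical and the
# shock mechanics are deliberately simplified (new NPLs are assumed to be
# fully provisioned straight out of capital, with risk-weighted assets unchanged).

from dataclasses import dataclass

MIN_CAR = 0.12        # 12% regulatory minimum, as in the article's example
NPL_INCREASE = 4.0    # a 400% increase means NPLs rise by four times their current level


@dataclass
class Bank:
    name: str
    capital: float               # regulatory capital
    risk_weighted_assets: float
    npls: float                  # current non-performing loans

    def stressed_car(self) -> float:
        # The additional NPLs eat directly into capital under this simplified shock.
        extra_npls = self.npls * NPL_INCREASE
        stressed_capital = self.capital - extra_npls
        return stressed_capital / self.risk_weighted_assets


banks = [
    Bank("Bank A", capital=15.0, risk_weighted_assets=100.0, npls=1.5),  # hit hard
    Bank("Bank B", capital=45.0, risk_weighted_assets=300.0, npls=1.0),  # mild hit
]

# Aggregate ("back of the envelope") test on the system as a whole.
agg_capital = sum(b.capital - b.npls * NPL_INCREASE for b in banks)
agg_rwa = sum(b.risk_weighted_assets for b in banks)
system_car = agg_capital / agg_rwa
print(f"System CAR after shock: {system_car:.1%} "
      f"({'fails' if system_car < MIN_CAR else 'passes'} the {MIN_CAR:.0%} minimum)")

# Bank-by-bank test: the aggregate figure can hide an individual breach.
for b in banks:
    car = b.stressed_car()
    print(f"{b.name}: {car:.1%} ({'fails' if car < MIN_CAR else 'passes'})")
```

With these made-up numbers the aggregate test reports a system ratio of 12.5%, comfortably above the minimum, while Bank A’s own ratio falls to 9.0%, well below it, which is exactly the kind of institution-level vulnerability the authors warn an aggregate-only test can obscure.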
The paper goes on to suggest alternative methods to the traditional stress test for regulators to use when they are unable to ensure a sufficient level of data quality within financial institutions’ reports. Regardless of these proposals, however, the authors do a good job of highlighting the real dangers of forcing regulators to use workarounds because of poor data quality. This holds true outside of developing economies and could be used to support the regulatory community’s recent crusade to improve data standardisation across the markets.
Moreover, if this logic holds water on an industry-wide basis, it should also illustrate the dangers for an individual firm’s own stress testing practices: making assumptions based on aggregate or inaccurate data could result in fairly serious errors of judgement.