Reducing the regulatory burden

Former Federal Reserve Bank of New York senior vice-president Kenneth Lamar discusses risk‑based reporting, its challenges and whether fintech will help reduce the regulatory burden

How has the approach to risk-based reporting/supervision changed since the financial crisis?

Kenneth Lamar: The true transformational change that has occurred in supervisory data is the level of granularity and complexity of the data demanded from firms, particularly the largest ones. It has signalled a move away from thinking about financial institutions in broad categories, and the size of the files now received by regulators and the actual number of transactions they look at have increased significantly. This allows regulators to think in terms of systemic risk and the risk profile of individual firms.

Prior to the financial crisis, a lot of the information collected – particularly around risk – was dependent on the individual institution’s own management information system. The change in reporting has standardised this approach. Creating clear data definitions allows regulators to compare firms across the sector and develop meaningful aggregations.

Has the shift to a more risk-sensitive approach to supervision changed firms’ approach to reporting?

Kenneth Lamar: The change has meant firms are having to manage their data across the organisation on a global level, which has resulted in a better understanding of the regulatory expectations around data quality and data definitions.

For example, in capital planning, how much time did a retail credit card business line spend focusing on the data needs for regulatory capital? Now, a great deal of attention is given to the data in all material business lines and the data impact on regulatory capital. This has initiated a culture shift where data no longer just belongs in the realm of corporate finance; it is no longer a back-office process. Firms have to start thinking about their data practices strategically, and they have a long way to go. 

Has this move to ‘box-ticking’ helped or hindered regulators in understanding markets and risk?

Kenneth Lamar: It has absolutely helped regulators understand markets and risks. It has allowed them to focus and cut the data any way they want, depending on what is going on in the market locally and globally. It has also helped bring a discipline to viewing how risk is managed. A key challenge for regulators in using a data-driven approach is the risk associated with data quality. How do regulators know the data being supplied to them by firms is correct? If you are data-dependent, an assessment of data quality and data limitations must be available. 

Granular products and transactional data give regulators a view of the firm in detail that did not previously exist. Regulators can now manipulate and integrate data to understand the financial position and risk of a firm, market or sector. But, if the data being supplied is not of a high quality, has material inaccuracies or is incomplete, then regulators’ analyses and actions are at risk. This is one of the greatest risks every central banker or regulator faces. The growth and complexity of data makes this a very real challenge for regulators. 

How do regulators ensure the quality of the data they receive is correct and up to standard?

Kenneth Lamar: There are a couple of different ways to validate data quality. One of the most effective is through on-site validation programmes, where regulators go into firms and test the quality of their data. Firms can also use their internal and external audit functions to demonstrate to regulators the quality of their data and that the associated controls are where they should be. Regulators can also make data quality an institutional imperative by requiring accountability from principal officers; for example, chief financial officers, chief revenue officers and senior directors can attest to the data quality and controls when submitting data.

Regulatory authorities have also become very specific about their expectations from firms in terms of data quality. They have considered how to validate the data they receive in a disciplined way. With the emergence of innovative financial technology – referred to as ‘fintech’ – some are now asking how technology can be used for quality assurance. 
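
As an illustration of how technology might support that kind of quality assurance, the sketch below shows a minimal set of pre-submission checks a firm could automate – completeness, simple range validation and a reconciliation back to the general ledger. The field names, tolerance and record structure are hypothetical, not drawn from any particular regulatory return.

```python
# Minimal sketch of pre-submission data quality checks (hypothetical fields and rules).
from dataclasses import dataclass, field

@dataclass
class QualityReport:
    errors: list = field(default_factory=list)

    def ok(self) -> bool:
        return not self.errors

def validate_submission(records, ledger_total):
    """Run basic completeness, range and reconciliation checks before filing."""
    report = QualityReport()
    required = {"counterparty_id", "exposure_usd", "product_type"}

    for i, rec in enumerate(records):
        # Completeness: every required field must be present and non-empty.
        missing = [f for f in required if not rec.get(f)]
        if missing:
            report.errors.append(f"record {i}: missing fields {missing}")
        # Range check: exposures cannot be negative.
        if rec.get("exposure_usd", 0) < 0:
            report.errors.append(f"record {i}: negative exposure")

    # Reconciliation: the reported total should tie back to the general ledger
    # within a small tolerance (0.5% here, an illustrative threshold).
    reported_total = sum(r.get("exposure_usd", 0) for r in records)
    if ledger_total and abs(reported_total - ledger_total) / ledger_total > 0.005:
        report.errors.append(
            f"reported total {reported_total} does not reconcile to ledger {ledger_total}"
        )
    return report

if __name__ == "__main__":
    sample = [
        {"counterparty_id": "CP001", "exposure_usd": 1_500_000, "product_type": "loan"},
        {"counterparty_id": "", "exposure_usd": -200, "product_type": "derivative"},
    ]
    result = validate_submission(sample, ledger_total=1_500_000)
    print("clean" if result.ok() else result.errors)
```

In practice, checks of this kind would sit alongside the attestation and audit controls described above, giving principal officers evidence to support their sign-off.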

How should firms go about disseminating and aggregating the data required for this new way of regulatory reporting?

Kenneth Lamar: It is really important for financial firms to foster a firm‑wide culture whereby everybody is accountable in the data‑gathering process. Usually, the corporate finance department is responsible for actually aggregating the data and then providing it to the regulators. But to have high‑quality data, firms must ensure each department is accountable for the data the department owns. The best way to do this is to enforce strong accountability policies. 

The corporate finance department should act as a defence, but should not be the only party held accountable if the data is incorrect. It should be responsible for checking the data, validating the data and going back to the business lines if there are anomalies. Internal audit is the final point of control available for firms as the third line of defence. 

From an aggregation standpoint, both the corporate finance and internal audit departments should develop programmes to help the other departments understand what regulators require from them – the expectations and the impact. One of the key activities to ensure data quality is end-to-end transaction testing. Done correctly, transaction testing validates the controls around the data from the point it is onboarded to the point it is delivered to a regulator. So if something is wrong with the quality of the data, it can be inferred that something is failing in the controls. The testing uncovers the root cause of these problems.
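
To make the end-to-end transaction testing idea concrete, the minimal sketch below traces sample transactions from the point they are booked to the figure delivered in a report, and flags any stage where a record is dropped or its value changes. The stage names and amounts are invented for illustration.

```python
# Simplified sketch of end-to-end transaction testing: trace a sample of
# transactions from onboarding to the reported figure and flag breaks.
# Stage names and data are hypothetical.

def trace_transaction(txn_id, stages):
    """Compare the amount recorded for one transaction at each stage of the pipeline.

    `stages` maps a stage name (e.g. 'booking', 'aggregation', 'report')
    to a dict of {txn_id: amount}.
    """
    breaks = []
    amounts = {}
    for stage_name, data in stages.items():
        if txn_id not in data:
            breaks.append(f"{txn_id}: dropped at stage '{stage_name}'")
        else:
            amounts[stage_name] = data[txn_id]

    values = list(amounts.values())
    if values and any(v != values[0] for v in values):
        breaks.append(f"{txn_id}: amounts differ across stages {amounts}")
    return breaks

if __name__ == "__main__":
    # Illustrative pipeline snapshots keyed by transaction identifier.
    stages = {
        "booking":     {"T1": 100.0, "T2": 250.0},
        "aggregation": {"T1": 100.0, "T2": 250.0},
        "report":      {"T1": 100.0},  # T2 dropped before the report: a control failure
    }
    for txn in ("T1", "T2"):
        for issue in trace_transaction(txn, stages):
            print(issue)
```

A dropped or altered transaction points to the stage – and therefore the control – where the failure occurred, which is the root-cause insight transaction testing is meant to deliver.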

Do firms have the resources and capabilities to manage the data required of them?

Kenneth Lamar: They are working on it. I think we are at a point where firms are starting to understand the data circulating within their organisations. However, much of this data is redundant and, due to a lack of standardisation, remains in business lines’ subsystems. So one of the things firms should do is standardise data across the organisation. After this, reporting processes will become far easier. The next step would be for them to start looking at whether data can be organised or tagged in an automated fashion. 
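
As a hedged illustration of what standardising and tagging data across the organisation could look like, the sketch below maps the field names used by hypothetical business-line subsystems onto a single firm-wide data dictionary and records the source as a provenance tag. All names and mappings are invented.

```python
# Illustrative sketch: map business-line field names onto a common data dictionary
# so the same attribute is reported consistently firm-wide. Mappings are hypothetical.

STANDARD_FIELDS = {"counterparty_id", "notional_usd", "maturity_date"}

# Each business line's subsystem uses its own labels for the same concepts.
FIELD_MAPPINGS = {
    "retail_cards": {"cust_ref": "counterparty_id", "bal_usd": "notional_usd",
                     "expiry": "maturity_date"},
    "rates_desk":   {"cpty": "counterparty_id", "notional": "notional_usd",
                     "mat_dt": "maturity_date"},
}

def standardise(record, business_line):
    """Rename a business line's fields to the firm-wide standard and tag the source."""
    mapping = FIELD_MAPPINGS[business_line]
    out = {mapping[k]: v for k, v in record.items() if k in mapping}
    out["source_business_line"] = business_line  # provenance tag for data lineage
    missing = STANDARD_FIELDS - out.keys()
    if missing:
        raise ValueError(f"{business_line}: cannot populate standard fields {missing}")
    return out

if __name__ == "__main__":
    print(standardise({"cust_ref": "CP9", "bal_usd": 1200.0, "expiry": "2026-03-31"},
                      "retail_cards"))
    print(standardise({"cpty": "CP9", "notional": 5_000_000, "mat_dt": "2027-06-30"},
                      "rates_desk"))
```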

Currently, many firms take a siloed approach to data management. Business lines believe the data belongs to them and when the regulator asks for information they will give them what they can. Instead, firms should be looking at how to leverage the data across the entire organisation. A culture shift must occur for this to happen – not a regulatory change. Firms need to realise that their data is an asset, and once used can help businesses not only meet regulatory expectations but manage risk more effectively. 

Is the solution to the growing data burden more standardised regulation?

Kenneth Lamar: That is absolutely one of the keys to reducing the burden. Treating products, transactions and reference data consistently across data collections not only reduces the reporting burden and costs, but also increases data quality. Not all regulation lends itself to standardisation – sometimes there needs to be nuance for the benefit of the data. When this occurs, the nuances should be explicitly stated.

How do firms validate data? And can fintech solutions aid this process?

Kenneth Lamar: I think it is important for regulators and firms to first standardise their data. Fintech and regulatory technology – known as ‘regtech’ – can help this process, from building data management tools to data tagging.

One of the challenges in this space is managing and implementing strategic solutions that have long runways while conducting business as usual. Firms still have to file reports – on a daily, weekly or quarterly basis – using tactical solutions while moving forward on strategic investments. Regtech can help streamline and increase efficiencies in current processes, such as the use of robotics in the report creation process.

But firms also have to think about the long‑term strategic solution. What is their future data platform going to look like, and how can the firm migrate to it? Some have made a start in this direction, but the pace at which technology evolves means a solution designed at the outset of a long-term project can quickly become outdated. So the real challenge is dividing the project up in a way that is achievable.

The UK’s Financial Conduct Authority is working to create machine‑readable regulation, with the notion it could be machine‑executed in the future. Do you think this is a possibility? 

Kenneth Lamar: Yes, I think it is. Regulation has become so complex and there is so much of it: there are compliance rules, capital rules, liquidity rules, supervisory demands, and so on. If regulation were to become machine‑readable and then machine‑executable, firms could react to regulatory demands quickly, accurately and with fewer costs. 
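
As a purely illustrative sketch – not an actual FCA format – a machine-readable rule might be expressed as structured data that a firm’s systems evaluate automatically against its computed metrics, which is one way machine execution could eventually work. The rule identifiers, thresholds and metric names below are invented.

```python
# Illustrative sketch of machine-readable, machine-executable regulation:
# rules expressed as structured data, evaluated automatically against firm metrics.
# The rule format, thresholds and fields are invented for the example.

RULES = [
    {
        "rule_id": "LIQ-001",
        "description": "Liquidity coverage ratio must be at least 100%",
        "metric": "lcr",
        "operator": ">=",
        "threshold": 1.00,
    },
    {
        "rule_id": "CAP-002",
        "description": "CET1 ratio must be at least 4.5%",
        "metric": "cet1_ratio",
        "operator": ">=",
        "threshold": 0.045,
    },
]

OPERATORS = {">=": lambda a, b: a >= b, "<=": lambda a, b: a <= b}

def evaluate(firm_metrics, rules):
    """Execute each machine-readable rule against the firm's computed metrics."""
    results = []
    for rule in rules:
        value = firm_metrics.get(rule["metric"])
        passed = value is not None and OPERATORS[rule["operator"]](value, rule["threshold"])
        results.append((rule["rule_id"], passed, value))
    return results

if __name__ == "__main__":
    firm_metrics = {"lcr": 1.12, "cet1_ratio": 0.041}  # illustrative figures
    for rule_id, passed, value in evaluate(firm_metrics, RULES):
        print(f"{rule_id}: {'PASS' if passed else 'FAIL'} (value={value})")
```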

But, for this to happen, building up the relevant expertise is really important. You need people who understand the data and who can build a system that understands it to the same extent and can meet the firm’s needs. But firms do not always invest – whether financially or in manpower – if it is not certain the investment will have a positive impact on the bottom line. If it can be proven that the investment will have a direct impact on the firm’s profitability, senior management is far more likely to sign off on it.

How should central banks use regulatory data to maximum effect?

Kenneth Lamar: They must be very transparent. There is a lot of pressure on central bankers and regulators to publish aggregated results of regulatory data. Data users want that data very much – not just firms and academics, but the public and the media too. So it is really important for them to be able to publish what they can without putting a firm at risk by disclosing its proprietary information.

They need to continue to educate the public and firms on how regulatory data is being used, how it is being viewed and why it is important. For example, when individual country crises occur, data users will often analyse publicly available data on country risk, drawing conclusions about the size of exposure to a country that may differ from what the firm disclosed in its public financial statements. So it is really important for regulators to explain how data is used and what it represents, and it is just as crucial to explain what it is not.

When looking at data, what sort of analytical techniques should central banks use to get the most out of it?

Kenneth Lamar: The first is data operations – people who ensure data is accurate and attempt to understand the raw datasets that come in from firms. They look to employ tools that can process the data quickly, discover outliers and perform some comparative analysis. The second is how regulators use that data, either to assess risk or to include it in supervisory models. In these cases, tools that can handle the large volume of data now available are required.
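
As a simple example of the outlier-discovery tools mentioned above, the sketch below flags firms whose submissions sit far from the peer distribution using a basic z-score. Real supervisory analytics would be considerably more sophisticated; the figures and threshold here are invented.

```python
# Basic sketch of outlier discovery on submitted data using a z-score against peers.
# The figures are invented; real supervisory analytics would be more sophisticated.
import statistics

def find_outliers(submissions, threshold=3.0):
    """Flag firms whose reported value is more than `threshold` standard
    deviations from the cross-firm mean."""
    values = list(submissions.values())
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []
    return [firm for firm, v in submissions.items()
            if abs(v - mean) / stdev > threshold]

if __name__ == "__main__":
    # Hypothetical quarter-on-quarter growth rates reported by peer firms.
    reported_growth = {
        "Firm A": 0.02, "Firm B": 0.03, "Firm C": 0.025,
        "Firm D": 0.021, "Firm E": 0.45,  # likely a reporting error or genuine outlier
        "Firm F": 0.028, "Firm G": 0.019,
    }
    print(find_outliers(reported_growth, threshold=2.0))
```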

How can central banks and other financial supervisors find a balance between judgement and data-driven risk assessment?

Kenneth Lamar: It is all about the people; you need really good people who understand the data and the outcomes you are seeking. Individuals such as these are hard to find because they need to understand why the data was created, what it measures and what it all means in terms of risk to the organisation. Finding these people – who can build, design and collate data and then make judgements – is the key challenge for firms. Firms that do not seek out this expertise, and take a more formalistic approach when making judgements, tend to find themselves in trouble.

 

Kenneth Lamar

Kenneth Lamar, Former Senior Vice-President, Federal Reserve Bank of New York

Kenneth Lamar is the former senior vice-president, head of the statistics function and senior adviser to the director of research at the Federal Reserve Bank of New York, where he was responsible for most of the New York Fed’s data collection systems and data quality programmes. He has also held a number of leadership positions within the Federal Reserve System supporting the design of data collections, associated quality assurance programmes and the implementation of data collection programmes. He is the founder and principal partner at Lamar Associates and works as an independent senior adviser at the Deloitte Centre for Regulatory Strategies. Lamar is also a member of the AxiomSL advisory board.

This article forms part of the Central Banking Risk-based supervision focus report, published in association with Vizor
