Measurement Challenges in Macroprudential Policy Implementation: Essential Data Elements for Preserving Financial Stability

Good afternoon. I am delighted to be here with you.

Since the financial crisis, we have significantly improved the data and analytical tools we need to promote financial stability. Today, I want to share some of our insights on that progress and on the challenges that lie ahead.

Before I begin, I want again to thank President Loretta Mester, Greg Stefani, Stephen Ong, Joe Haubrich, and the staff of the Cleveland Fed for all of their hard work in putting on this conference with us and for their dedication to advancing financial stability analysis and measurement.

Financial stability monitoring, analysis, and research certainly are not new. But the financial crisis that began in 2007 changed the conversation by exposing critical gaps in our analysis and understanding of the financial system, in the data and metrics used to measure and monitor financial activities, and in the policy tools available to mitigate potential threats to financial stability. These gaps in analysis, data, and policy tools contributed to the crisis and hampered official efforts to contain it.

As I noted yesterday, this conference exemplifies the collaboration that is required to create the virtual research-and-data community essential to fill those gaps. We have learned a great deal about financial stability since the crisis. But the proliferation of the good research discussed yesterday and today signals that we still have a lot more to learn and a great deal more work to do.

You all know that interconnectedness can create financial system vulnerabilities. Likewise, think of these three gaps — in policy tools, analysis, and data — as being interconnected, as in a chain. A weak link in any of the three will undermine our overall ability to spot and address weaknesses in the financial system. The sequence matters: You need good analysis to make good policy, and you need good data to conduct good analysis. In other words, success in our work must begin with good data.

The Data Sharing Imperative

So that is the theme for my remarks this afternoon: Good policy and analysis depend on good data. For good data, we need data standards and we need to close data gaps — to fill in the missing pieces for a better picture of the financial system and its vulnerabilities. Those missing pieces do not all sit at any one agency, in any one industry segment, or in any one country. Filling out the full picture requires data sharing and collaboration on a global basis.

At the OFR, data sharing is critical to our mission, and collaboration is essential to make it happen. We are making progress on sharing data with members of the Financial Stability Oversight Council — or Council — by developing protocols and procedures for securely sharing data for monitoring and analysis. In addition, we are participating in international efforts to move forward. That progress is significant and encouraging — but it is dwarfed by the amount still to be done.

This brings me to my first imperative: We in the policy community need to — and we must — share data appropriately without compromising their confidentiality.

The Data Quality Imperative

Likewise, I cannot overemphasize the importance of quality in making financial data usable and transparent. When Lehman Brothers failed six years ago, its counterparties could not assess their total exposures to Lehman. Financial regulators were also in the dark. Why? Because there were no industry-wide standards for identifying and linking financial data representing entities or instruments. Standards are needed to produce high-quality data. And high-quality data are essential for effective risk management in financial companies, especially to assess their connections and exposures to other firms.

Nor can we take the scope of our financial data for granted. Even if we had ample, high-quality data, potential threats to financial stability would still be difficult to assess and monitor. But our work is harder because gaps persist in both the scope and the quality of financial data. We need solid, reliable, granular, timely, and comprehensive data for analysis and monitoring. We have made progress, but there are still significant gaps. The OFR’s job is to fill those gaps by prioritizing and meeting data needs.

We released our 2014 Annual Report this week; in it, we identify threats to financial stability and the tools needed to assess and mitigate them. We’re proud of the report’s analytical focus, and equally of the emphasis on “Advancing Data Standards” and “Addressing Data Gaps.” The chapter on data standards describes our work with the Commodity Futures Trading Commission — CFTC — on data quality in swap data repositories.

Financial reform sought to improve transparency in derivatives markets by requiring that data related to transactions in swaps be reported to swap data repositories. Swap data are critical to understand exposures and connections across the financial system, and the repositories are designed to be high-quality, low-cost data collection points.

The credit default swap data reported to these repositories exemplify new sources of data that supervisors are getting to know for the first time. Because centralizing and reporting these data are so new, some issues are arising in establishing consistent, well-understood data definitions — or semantics — and data structures — or schemas — so supervisors can reliably combine and analyze the information.
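To make the idea concrete: a shared schema is simply an agreement that every repository reports the same fields with the same names, types, and meanings. The sketch below, in Python, uses hypothetical field names rather than the CFTC's actual reporting format; it shows how straightforward pooling becomes once that agreement exists.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class CdsRecord:
    """One credit default swap report, in a shared (illustrative) schema.

    If every repository reports these fields with the same names,
    types, and meanings, supervisors can pool records directly.
    """
    trade_id: str          # unique transaction identifier
    buyer_lei: str         # LEI of the protection buyer
    seller_lei: str        # LEI of the protection seller
    reference_entity: str  # LEI of the entity whose credit is referenced
    notional_usd: float    # notional amount, normalized to U.S. dollars
    maturity: date         # scheduled termination date

def combine(*repositories: list[CdsRecord]) -> list[CdsRecord]:
    """Pooling data is trivial once every source uses one schema."""
    return [record for repo in repositories for record in repo]
```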

The OFR and the CFTC both want to promote the use of data standards in swap data reporting to assure data quality and utility. In March, we announced a memorandum of understanding for a joint project to enhance the quality, types, and formats of data collected from registered swap data repositories. Under a second agreement, members of the OFR staff are working on a detail at the CFTC. Together, we are moving forward aggressively.

You heard one example yesterday morning of why this initiative is so important, when Emil Siriwardane, a graduate intern at the OFR and a PhD student at NYU, discussed his paper on corporate credit risk in the credit default swap markets. Working with some of the new data, he was able to establish, for example, that a "tree" structure of risk transfer operates within this market and that the buyers and sellers of credit protection are highly concentrated. A typical pair of top reference entities will have a significant majority of both buyers and sellers of credit default swap protection in common, suggesting that the same small set of counterparties bears the credit risk for a large majority of the market.
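That concentration finding can be illustrated with a stylized overlap measure. The sketch below is not Siriwardane's code; the counterparty names and sets are invented, and the measure is a deliberately simple one: the share of counterparties that two reference entities have in common.

```python
def counterparty_overlap(buyers_a: set[str], buyers_b: set[str]) -> float:
    """Share of the smaller side's counterparties that two
    reference entities have in common (1.0 = complete overlap)."""
    if not buyers_a or not buyers_b:
        return 0.0
    return len(buyers_a & buyers_b) / min(len(buyers_a), len(buyers_b))

# Hypothetical sets of protection buyers on two top reference entities.
buyers_on_x = {"DEALER1", "DEALER2", "DEALER3", "FUND9"}
buyers_on_y = {"DEALER1", "DEALER2", "DEALER3", "BANK7"}
print(counterparty_overlap(buyers_on_x, buyers_on_y))  # 0.75
```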

The data standards discussion in our annual report also cites our work on the LEI, or Legal Entity Identifier. The LEI, which is like a bar code for precisely identifying parties to financial transactions, is an essential element in the standards toolkit. The OFR has led this global initiative from the start, and the LEI initiative has gone from conception to full-fledged operational system in just a few years.
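The bar-code analogy is apt because an LEI is machine-checkable. Under the ISO 17442 standard, a code is 20 alphanumeric characters, and the final two are check digits verified with the ISO 7064 MOD 97-10 scheme. A minimal validation sketch:

```python
def lei_is_valid(lei: str) -> bool:
    """Check an ISO 17442 Legal Entity Identifier.

    An LEI is 20 alphanumeric characters; the last two are check
    digits. Validation follows ISO 7064 (MOD 97-10): map each letter
    to a number (A=10 ... Z=35), read the result as one big integer,
    and require that it leave a remainder of 1 modulo 97.
    """
    lei = lei.strip().upper()
    if len(lei) != 20 or not lei.isalnum():
        return False
    digits = "".join(str(int(ch, 36)) for ch in lei)  # '0'-'9' unchanged, 'A' -> '10', ...
    return int(digits) % 97 == 1
```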

In 2014, the worldwide LEI system reached significant milestones as the final components of its governance framework were put in place.

To date, the LEI has been required only for some aspects of financial reporting in the United States and abroad. These requirements, together with voluntary implementation, have driven LEI adoption across the globe: 300,000 LEIs issued to entities in 180 countries. But greater — indeed, universal — adoption is necessary to bring efficiencies to reporting entities and useful information to the Council and other policymakers.

The case for ubiquitous adoption of this data standard is strong.

Had the LEI system been in place in 2008, the industry, regulators, and policymakers would have been better able to trace Lehman's exposures and connections across the financial system. The LEI system also generates efficiencies for financial companies in internal reporting and risk management, and in collecting, cleaning, and aggregating data. I expect it will reduce companies' regulatory reporting burdens by reducing — and eventually eliminating — overlap and duplication.
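A stylized example of that efficiency: once every internal system tags counterparties with the same LEI, firm-wide exposure to any one entity is a simple rollup. The records and LEI placeholders below are hypothetical.

```python
from collections import defaultdict

# Hypothetical exposure records from three internal systems, each
# tagged with the counterparty's LEI rather than a free-text name.
records = [
    {"system": "derivatives", "counterparty_lei": "LEI_OF_FIRM_X", "exposure_usd": 40e6},
    {"system": "repo",        "counterparty_lei": "LEI_OF_FIRM_X", "exposure_usd": 25e6},
    {"system": "lending",     "counterparty_lei": "LEI_OF_FIRM_Y", "exposure_usd": 10e6},
]

totals = defaultdict(float)
for r in records:
    totals[r["counterparty_lei"]] += r["exposure_usd"]

# With one identifier per entity, total exposure is a one-line rollup;
# with inconsistent free-text names, it is guesswork.
print(totals["LEI_OF_FIRM_X"])  # 65000000.0
```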

The financial services industry has strongly supported the LEI initiative. In fact, major trade groups have called for government regulators to mandate its use — a rare example of industry asking for more regulation.

The global LEI system is up, running, and growing. Like any network, the LEI system has benefits that will grow as the system grows. But ubiquity is needed to realize the full benefits of the LEI.

That brings me to my second imperative: Mandating use of the LEI for regulatory reporting is needed to overcome obstacles to adoption. That is why the OFR and the Council have been calling for regulators to require use of the LEI in regulatory reporting. Ditto for other data standards, as they become available.

We know that the LEI will help participants in industry, such as risk managers, to do their jobs. But how could the LEI help researchers, including the people in this room?

One area where the LEI will deliver clear benefits to researchers is in improving data quality for measuring and modeling counterparty networks, which is emerging as an important and rapidly growing field for financial research. Examples of this approach are Jessie Wang’s paper on “distress dispersion” that you heard about yesterday and Mikhail Oet’s paper on preventive policies for systemic risk that you will learn more about later this afternoon.

Jessie Wang’s paper developed a model of bilateral interactions in a financial market, and showed how network externalities can plague markets when the acquirers of distressed firms do not take into account the full network of financial exposures.

Mikhail Oet’s paper simulated networks of interconnected institutions to experiment with the effects of capital and reserve ratios on overall system fragility. Bringing models like these to the data requires accurate identification of contractual counterparties — exactly the sort of problem the LEI is intended to address.
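To illustrate the kind of exercise these papers perform (without reproducing either author's model), here is a toy default cascade on a network of bilateral exposures. The banks, exposures, and capital figures are invented; the point is that the dictionary keys are precisely the counterparty identifiers that the LEI standardizes.

```python
def cascade(exposures: dict[str, dict[str, float]],
            capital: dict[str, float],
            initially_failed: set[str]) -> set[str]:
    """Simulate a simple default cascade on bilateral exposures.

    exposures[a][b] is what b owes a. If b defaults, a writes off
    that amount against its capital, and a in turn defaults once
    its cumulative losses exceed its capital.
    """
    failed = set(initially_failed)
    changed = True
    while changed:
        changed = False
        for bank in capital:
            if bank in failed:
                continue
            losses = sum(amt for cpty, amt in exposures.get(bank, {}).items()
                         if cpty in failed)
            if losses > capital[bank]:
                failed.add(bank)
                changed = True
    return failed

# Hypothetical three-bank chain: A lends to B, B lends to C.
exposures = {"A": {"B": 8.0}, "B": {"C": 12.0}}
capital = {"A": 5.0, "B": 10.0, "C": 3.0}
print(cascade(exposures, capital, initially_failed={"C"}))  # all three banks fail
```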

The LEI project has been the foundation of our efforts on data standards at the OFR, and we're confident that our experience with it will serve as a springboard for us to pursue other data standards initiatives.

Let me explain. The LEI is focused on entities in the financial system and is designed to answer the question, "Who is who?" A follow-on question is, "Who owns whom?" To answer that, we need identifiers for hierarchies, or corporate structures. To answer, "Who owns what?" and "What is owned?" we need identifiers for instruments and products. The OFR is working on answers to all of those questions.
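Answering "Who owns whom?" amounts to maintaining a parent map over entity identifiers and walking it up to an ultimate parent. A minimal sketch, with hypothetical LEI placeholders:

```python
# Hypothetical parent map: child LEI -> direct-parent LEI.
parent = {
    "LEI_SUBSIDIARY": "LEI_HOLDING_CO",
    "LEI_HOLDING_CO": "LEI_ULTIMATE_PARENT",
}

def ultimate_parent(lei: str) -> str:
    """Walk the ownership chain until an entity with no recorded
    parent is reached (the 'seen' set guards against cycles)."""
    seen = set()
    while lei in parent and lei not in seen:
        seen.add(lei)
        lei = parent[lei]
    return lei

print(ultimate_parent("LEI_SUBSIDIARY"))  # LEI_ULTIMATE_PARENT
```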

For example, we developed plans in 2014 to prepare and publish reference databases for financial companies and for financial instruments, as required by the Dodd-Frank Act. The LEI system will provide all needed inputs to create and maintain the financial entity database. We have also begun to develop formats and standards for reporting financial transaction and position data and for identifying financial instrument types. In addition, the OFR will develop a prototype of the financial instrument reference database.

The Data Gaps Imperative

The chapter in our annual report about addressing data gaps discusses our partnership with the Federal Reserve to fill gaps in data about repurchase agreements, or repo. As you know, a repo is essentially a collateralized loan — when one party sells a security to another party with an agreement to repurchase it later at an agreed price.
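The mechanics are easy to make concrete. In the sketch below, the amounts, the 2 percent haircut, and the 360-day money-market convention are all illustrative; actual terms vary by market and contract.

```python
# Illustrative overnight repo: cash lent against securities,
# repurchased the next day at an agreed, slightly higher price.
collateral_value = 10_000_000.00  # market value of the securities sold
haircut = 0.02                    # lender keeps a 2% cushion
repo_rate = 0.0025                # agreed annualized rate
days = 1

purchase_price = collateral_value * (1 - haircut)                 # cash lent today
repurchase_price = purchase_price * (1 + repo_rate * days / 360)  # cash repaid tomorrow

print(round(purchase_price, 2))    # 9800000.0   cash the seller receives
print(round(repurchase_price, 2))  # 9800068.06  cash repaid the next day
```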

The project promises to improve our understanding of a short-term funding market that is instrumental in providing liquidity — the lubrication that helps to keep the global financial system operating.

Repos are an important source of short-term funding for the financial industry. The U.S. repo market provides more than $3 trillion in funding every day. However, the repo market can also contribute to risks to financial stability. Obtaining more information about these transactions will fill an important data gap.

The repo market is divided into three parts: (1) the triparty repo market, in which transactions are centrally settled by two large clearing banks, (2) the bilateral market, where repo transactions are conducted privately between two firms, and (3) the general collateral financing, or GCF, market, in which interdealer repo transactions are centrally cleared. Information and data on the triparty and GCF repo markets are published regularly, but information about bilateral repos is scant.

We announced this project in October, and today I can report that it is well underway. Outreach to industry participants has begun, and so far they have been very receptive to participating in the project. We expect to begin gathering data early next year.

The project marks the first time the OFR is going directly to industry to collect financial market information. Participation in the pilot project is voluntary, and participating companies will be asked for input on what data should be gathered. Aggregated data from the survey will be published to provide greater transparency into the bilateral repo market for participants and policymakers.

Egemen Eren's paper on intermediary funding liquidity, discussed yesterday, is a good example of why repo market data are so important. The paper presented a theoretical model of how dealer banks' demand for funding liquidity should affect haircuts and pricing in the interbank repo markets. The model generates testable empirical hypotheses; if it proves accurate, it would deepen our understanding of participant behavior in this key market. Reliable repo data promise to make such testing possible.

Our annual report chapter on data gaps also includes a discussion of OFR analysis of new data about hedge funds and other private funds collected by the Securities and Exchange Commission through Form PF. Collection of these confidential data began in 2012. The OFR is now evaluating leverage across different hedge fund strategies, with a particular interest in sources of leverage.

In another, related project highlighted in our annual report, the OFR is assisting the Federal Reserve Board in a long-term project to enhance the Financial Accounts of the United States, formerly known as the Flow of Funds Accounts. This project is aimed at understanding sources and uses of short-term funding and related markets. The project is linking quarterly, highly aggregated data to more detailed and frequent source data, where available. We are also increasing coverage of financial activity represented in the accounts to include off-balance-sheet and noncash activity. In addition, we are exploring new measures of the flow of collateral and the flow of risks across the financial system.

One final data gap that I’ll highlight from our 2014 Annual Report is in the activities of the asset management industry, particularly in separately managed accounts, also called separate accounts. This type of account is a customized investment product that asset management firms offer to large institutional investors under terms defined in an investment management agreement. Regulators are unable to gauge the potential risk to financial markets from activities in separate accounts because of the lack of publicly available and standardized information about them.

Summing Up

Let me sum up. Three imperatives are essential to improve the quality and scope of financial data. First, share data securely and appropriately. Second, require data standards in regulatory reporting. And third, fill the most important data gaps soon. That's a recipe for producing the good data that underpin good analysis and good policy. Collaboration is the critical ingredient in that mix. The job is massive and challenging. To achieve it, we need the help of people like you in this room, across this country, and across the world. On that score, we are deeply grateful for our collaborative partnership with the Cleveland Fed. As Loretta mentioned yesterday, it goes beyond organizing this conference; it extends to analysis and research, as well as to the quest for high-quality data.

Thank you again for being here for this conference. I would be glad to respond to your questions.