Remarks of Richard Berner at the conference on the Interplay Between Financial Regulations, Resilience, and Growth
Published: June 16, 2016
Keynote Remarks of Richard Berner, Director, Office of Financial Research, at a conference on The Interplay Between Financial Regulations, Resilience, and Growth sponsored by the Federal Reserve Bank of Philadelphia, the Wharton Financial Institutions Center, Imperial College Business School, and the Journal of Financial Services Research, June 16, 2016, Philadelphia, Pennsylvania
Thank you for that kind introduction, Bill. Thanks also to the Philadelphia Fed, the Wharton Financial Institutions Center, the Imperial College Business School, and the Journal of Financial Services Research for hosting this conference and inviting me to join you here today.
I’m delighted to be back in Philadelphia and to have the opportunity to speak about several of the themes we will explore during this two-day event.
I am glad to see the reference to economic growth in the title of this conference. A popular narrative puts financial regulation and financial resilience in conflict with growth. In my view, we pursue financial resilience as a policy goal because it supports economic growth. Improving our understanding of how we can achieve that goal and what tools we will need is at the heart of our mission at the Office of Financial Research.
This afternoon, I will frame some of the issues you will discuss at the conference by exploring three questions:
- How should we define, measure, and monitor financial resilience?
- What policies are most effective in strengthening financial resilience? and
- What data and analysis do we need to design and implement such policies?
How do we define, measure, and monitor financial resilience?
Financial stability, or resilience, occurs when the financial system can provide its basic functions, even under stress. We want to be sure that when shocks hit, the financial system will continue to provide those basic functions to facilitate economic activity. In the OFR’s first annual report, we identified six such basic functions: (1) credit allocation and leverage, (2) maturity transformation, (3) risk transfer, (4) price discovery, (5) liquidity provision, and (6) facilitation of payments.
Threats to financial stability arise from vulnerabilities in the financial system — failures in these functions that are exposed by shocks.
Resilience has two aspects:
- Does the system have enough shock-absorbing capacity so it can still function? and
- Are incentives, such as market discipline or transparent pricing of risk, aligned to limit excessive risk taking?
The shock absorbers buffer hits, while what I call guardrails — incentives that affect behavior — constrain the risk-taking that can create financial vulnerabilities.
Resilience and threats to financial stability are systemwide concepts. To measure and monitor them, we must look across the financial system, for example, at nonbanks as well as banks. We must examine both institutions and markets to appreciate how threats propagate and to evaluate ways to mitigate and manage them.
The financial crisis exposed critical gaps in our analysis and understanding. Gaps in analysis, data, and policy tools contributed to the crisis and hampered efforts to contain it. Filling them is crucial to assessing and monitoring threats, and to developing what we call the macroprudential toolkit to make the financial system resilient.
In the years since the crisis, federal financial regulators have taken important steps to make the financial system more resilient. They have put in place new capital requirements for banks and agreed on key components of liquidity regulation, including minimum requirements for firms’ holdings of liquid assets. In addition, stress testing and a new regime to resolve large, complex, and troubled financial institutions in an orderly way have dramatically changed the approach to increasing resilience. And regulators have improved the quality and scope of financial data — essential to making good policy decisions.
We at the OFR help promote financial stability by developing tools to assess and monitor threats to financial stability; by improving the scope, availability, and quality of financial data to measure threats; and by evaluating policies designed to mitigate risks.
For example, our Financial Stability Monitor depicts a framework with five categories of risk: macroeconomic, market, credit, funding and liquidity, and contagion. This risk-based approach aligns with the financial system’s basic functions and enables us to look across the financial system rather than focusing piecemeal on institutions or market segments. The monitor enables us to track and measure risks in banks, shadow banks, other nonbanks, and markets. We update it and its supporting data semiannually on our website.
The Financial Stability Monitor is part of a larger suite of OFR monitors and risk assessment tools we are creating for each of the five risk categories. Taken together, these tools help us examine the interplay among risks and analyze related developments across asset classes. In coming weeks, we plan to launch our online Money Market Fund Monitor, a set of interactive charts for exploring the portfolios of U.S. money market funds.
OFR judgment about financial stability also reflects ongoing market intelligence. Last year, we launched a Financial Markets Monitor that summarizes major developments and emerging trends in global capital markets. Like many of you, we derive enormous benefit from our conversations with market participants.
In our judgment, overall threats to financial stability today remain at a moderate level. Our Financial Stability Monitor shows that macroeconomic, market, credit, funding and liquidity, and contagion risks are not excessive. However, several risks within those categories have increased in the last six to twelve months.
Some risks are highly visible; for example, macro risks have risen. Persistently sluggish global growth and inflation have triggered aggressive monetary policy responses, keeping interest rates and volatility low by any historical standard. Low rates provide incentives for market participants to reach for yield, partly by taking on more leverage, duration, and credit risk. Increased leverage and slow growth have created vulnerabilities.
Cybersecurity is another key vulnerability. Work is underway to make the financial system more resilient to cyber threats, but challenges remain in how best to assess and monitor those threats, and how to measure and assure that resilience. I would only say that it is not enough to look at the cyber-resilience of individual entities. As with central counterparties, or CCPs, we need to assess resilience across the system.
Credit risks also continue to rise. Indicators of corporate credit risk in our monitoring tools have been flashing warning signs for some time. Exceptionally narrow corporate bond spreads for much of 2014 promoted increased risk-taking by investors and issuers. Corporate America broadly leveraged its capital structure by issuing debt to return cash to shareholders through dividends and share buybacks. Wider spreads have recently reduced those incentives and risks somewhat. Still, fueled by highly accommodative credit and underwriting standards, and a persistent reach for yield by investors, credit continues to grow at a rapid pace.
But it’s not just the quantity. The quality of new debt issued by companies has been weaker than in previous cycles. High-yield debt accounts for 24 percent of total corporate debt issued since 2008, compared with 14 percent during past cycles. Lending standards eroded during this cycle. Two-thirds of loans to companies have been covenant-lite (lacking strict legal covenants), compared with 33 percent during previous cycles.
Other risks and vulnerabilities that are neither immediately evident nor easily monitored in institutions or markets also make me less than sanguine. By and large, signals from financial markets today appear to be relatively benign, but appearances can be deceptive. Periods of relatively low volatility, low credit spreads in cash markets, low credit default swap spreads, and low repo haircuts are all traditionally viewed as signs of low financial market risks.
For the OFR and others who monitor financial stability, however, these developments often signal rising market vulnerabilities because they also give investors and risk managers incentives and wherewithal to take on leverage. Although analysts traditionally view such indicators as exogenous barometers of risk, they more likely are endogenous indicators of risk appetite and investor sentiment. They are co-dependent. The capacity of intermediaries to take on risk exposures depends on the volatility of asset returns. In turn, volatility depends on the ability of intermediaries to take on risky exposures.
Recognition of that dynamic in either academic or policy analysis is recent. A recent paper by Danielsson, Shin, and Zigrand argues that leverage and volatility are endogenously co-determined, and that low volatility promotes increased leverage and risk.1
Similarly, former Federal Reserve Governor Jeremy Stein observed in 2013 that low volatility gives market participants incentives to write deep, out-of-the-money puts to enhance returns, and in ways that hide risk.2 Our measurement systems generally don’t adequately capture the low-probability future risks that such strategies introduce.
This volatility paradox should change our thinking about early warning indicators, asset allocation, and our macroprudential toolkit. It should also change our thinking about risk management. As my former colleague Rick Bookstaber puts it, “[Treating such indicators as exogenous means that] higher leverage and risk taking in general will be apparently justified by the lower volatility of the market and by the greater ability to diversify as indicated by the lower correlations.”3
Let me touch on three implications of the volatility paradox.
First, leverage and volatility risk are procyclical. That’s because risk is often managed by looking to metrics like value at risk, or VAR. In other words, a risk manager gives a portfolio a risk budget by imposing a VAR constraint. When VAR rises, the manager must sell securities to reduce risk. But the security sales further depress prices, which amplifies incentives to sell.4 This phenomenon is true not only in banks, but also in asset managers and other portfolio managers.
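That amplification mechanism can be made concrete with a toy sketch. All numbers below are hypothetical, and the `max_position` function is an illustrative simplification of a VAR constraint, not any firm's actual risk model: when volatility rises, the largest position the same risk budget permits shrinks, forcing sales.

```python
def max_position(risk_budget, volatility, z=2.33):
    """Largest position a VAR constraint allows.

    A simple parametric VAR: VAR = z * volatility * position.
    The constraint VAR <= risk_budget caps the position size.
    z = 2.33 corresponds to a one-sided 99% confidence level.
    """
    return risk_budget / (z * volatility)

budget = 1_000_000  # hypothetical dollar VAR budget

calm = max_position(budget, volatility=0.01)      # 1% daily volatility
stressed = max_position(budget, volatility=0.03)  # volatility triples under stress

# The same budget supports only one-third the position, so two-thirds
# of the holdings must be sold -- sales that depress prices, raise
# volatility further, and tighten the constraint again.
forced_sale = calm - stressed
print(f"calm position:     ${calm:,.0f}")
print(f"stressed position: ${stressed:,.0f}")
print(f"forced sale:       ${forced_sale:,.0f}")
```

The point of the sketch is the feedback loop: the sale triggered by higher volatility is itself a source of higher volatility, which is what makes VAR-based risk budgets procyclical.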
Second, this leverage effect is stronger for indexes (the market) than for individual securities. That’s because the benefits of diversification vanish under stress. Under stress, correlations rise among equities and other asset classes, increasing the volatility of the portfolio and its beta-sensitivity. This “de-diversification” usually occurs when investors have taken on more risk and leverage, and investors find themselves “selling what they can,” rather than selling illiquid assets.
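How quickly diversification benefits vanish as correlations rise can be seen in a short numerical sketch. The portfolio below is hypothetical (ten assets, equal weights, 20 percent volatility each, a single common pairwise correlation); as that correlation climbs toward one, portfolio volatility climbs toward the single-asset level:

```python
import math

def portfolio_vol(weights, vols, rho):
    """Volatility of a portfolio whose assets share one pairwise correlation rho.

    Computes sqrt( sum_ij w_i * w_j * sigma_i * sigma_j * corr_ij ),
    with corr_ij = 1 on the diagonal and rho off the diagonal.
    """
    var = 0.0
    for i, (wi, si) in enumerate(zip(weights, vols)):
        for j, (wj, sj) in enumerate(zip(weights, vols)):
            corr = 1.0 if i == j else rho
            var += wi * wj * si * sj * corr
    return math.sqrt(var)

n = 10
w = [1 / n] * n   # equal weights
s = [0.20] * n    # each asset has 20% volatility

# As stress pushes the common correlation up, the portfolio loses
# its diversification benefit and behaves like a single 20%-vol asset.
for rho in (0.2, 0.5, 0.9):
    print(f"rho = {rho}: portfolio vol = {portfolio_vol(w, s, rho):.3f}")
```

A position sized against the calm-market portfolio volatility is therefore oversized once correlations spike, which is exactly when forced selling is most damaging.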
Finally, optionality means that the distribution of outcomes is asymmetric. The pricing of all securities with embedded options will be affected by volatility. That’s a natural consequence of option pricing. The inherent asymmetry in options means that uncovered put writing — selling insurance — may have limited upside, but unlimited downside.
Question two is more difficult to answer: What are the most effective policies for strengthening financial resilience?
I’ll start with a key challenge: Although policy changes since the crisis have made the banking system stronger, vulnerabilities remain outside the banking perimeter. Analyzing and measuring these emerging vulnerabilities is essential to developing tools to address them. That becomes increasingly important as financial activity migrates to more opaque and potentially less resilient parts of the financial system.
To identify risks in nonbanks, we focus on activities that can cause vulnerabilities. Use of derivatives, secured funding, illiquid asset concentrations, counterparty credit concentrations, and obligations of membership in CCPs may all contribute to the interconnectedness — and potential vulnerability — of these firms.
I think regular stress testing is one of the best tools for assessing potential sources of vulnerabilities and for calibrating microprudential requirements, such as for capital based on firms’ idiosyncratic risks. Stress tests might also help calibrate macroprudential tools, including those aimed at building resilience across the system.
For a coherent and effective approach to financial stability policy, we also need a financial stability policy framework.
The framework requires an ongoing assessment of potential threats to financial stability, and that assessment depends on high-quality, comprehensive, and detailed data. To mitigate threats, the framework also needs a comprehensive policy toolkit containing a targeted array of policy tools, such as requirements for haircuts, margins, capital, and liquidity. These requirements may vary by institution, by activity, or over time.
In addition, the framework needs a way to assess the effectiveness of the tools, drawing on experiences in the United States and other countries. Finally, the framework requires criteria for choosing the right macroprudential policy tool for the job. An important consideration in making that choice is finding the proper balance between macroprudential and other policies.
Macroprudential tools are hard to implement and even harder to evaluate. They are new and, in some cases, have not yet been used, let alone tested. Three considerations are important in deciding whether the tools may have the intended effect.
First, the goals of macroprudential policy tools should be clear. Are they intended as shock absorbers to limit the effects of shocks on financial system functioning? Are they guardrails to set incentives or controls for the activities of financial institutions and to help promote market discipline?
Second, a macroprudential policy should target the ultimate cause of the problem rather than its symptoms. For example, the expectation of redemption at par makes money market funds vulnerable to redemption runs, but the use of gates to limit redemptions when shocks hit may have the perverse effect of making money funds more vulnerable to run risk because investors have added incentive to be the first to exit.
Third, policymakers should select policies that offer the biggest bang for the buck. Targeted policies with clear, direct effects on a financial stability threat (such as minimum haircuts for securities financing transactions) are preferable to general policies with diffuse effects (such as activating a countercyclical capital buffer).
The OFR does not make policy; we inform it. Accordingly, we are required by statute to evaluate stress tests and similar tools, to conduct policy studies, and to provide advice on the impact of policies related to financial stability. We have obtained access to the data used to conduct stress tests, and we are suggesting ways to improve the tests as well as the data, including for nonbank institutions and systemwide risk assessment. For example, a recent OFR working paper found that the indirect effects of a counterparty failure, transmitted through the financial system, may exceed the direct impact. Other key areas of our research include finding better ways to gauge risk propagation, or contagion, in stress testing.
I’ll close by discussing the third question: What are the data, research, and analysis we need to support sound macroprudential policymaking?
The policy approach I just described is sound in theory, but choosing the right tool for the job is not simple. Some tools have unintended consequences, while others entail tradeoffs that may or may not be acceptable. Although some tools in the toolkit may complement one another, others may conflict. Some tools may be publicly unpopular, while others may lose their effectiveness when innovation or regulatory arbitrage prompts the migration of financial activity to less-regulated and darker corners of the financial system. Further, although our authorities as policymakers are national, markets and institutions are global. In achieving our shared mission, we must adopt an approach that is collaborative, cross-border, and global. We need to do more to spot vulnerabilities in the shadows, to understand how the financial system fails to function under stress, and to gather and standardize data for analysis and policymaking.
At the OFR, we are working to do just that. Last year, we developed an OFR-wide initiative that identifies core areas of concentration, or programs, that align our priorities to our mission. This programmatic approach initially encompasses eight programs for coordinating our work on data, research, and analysis. We expect that number to increase over time. Three of those programs aim at improving the quality, scope, and accessibility of financial data. Three signature projects will initiate these programs:
- We are improving the quality of financial data by promoting the use of data standards and developing a reference database for financial instruments. As required by statute, we are creating a master reference database that precisely defines and catalogs financial instruments. This work begins with data standardization through identifiers — the building blocks for these definitions. For example, the legal entity identifier, or LEI, is like a bar code for precisely identifying parties to financial transactions. If the LEI system had been in place in 2008, the industry, regulators, and policymakers would have been better able to trace the exposures and connections of Lehman Brothers and others across the financial system. We have led the initiative to establish the LEI on a global basis. Regulators are responding to our call to require using the LEI in regulatory reporting. We are working globally on transaction and product identifiers. For example, we are collaborating with the Committee on Payments and Market Infrastructures and the International Organization of Securities Commissions, known collectively as CPMI-IOSCO, and the Commodity Futures Trading Commission on swap data reporting.
- Data gaps persist in securities financing transactions, including repurchase agreements, or repo, and securities lending. The markets for these critical short-term funding instruments remain vulnerable to runs and asset fire sales. Yet comprehensive data on so-called bilateral repo transactions are scant. We have mapped the sources and uses of such funds and of collateral to better understand these markets, assess risks, and identify gaps in available data. We have also launched pilot projects with the Federal Reserve and Securities and Exchange Commission to understand how to fill the gaps in data for bilateral repo and securities lending transactions with permanent collections.
- We are working to improve data accessibility in the global regulatory community by creating and promoting the linking of metadata catalogs. Metadata are the “data about data.” Metadata catalogs will improve transparency by enabling policymakers and ultimately the public to know about data and use them appropriately, as well as to understand where the data need improvement. Linking them across regulatory boundaries and jurisdictions will help officials marshal the data to understand and monitor risks globally. We are also promoting the use of best practices for data sharing by adopting robust agreements with explicit governance and controls for data security.

We are also developing five programs that will direct and structure our monitoring of threats to financial stability and our research and analysis.
One program will identify, assess, measure, and monitor risks in central counterparties, or CCPs. Clearing financial transactions through CCPs promises significant benefits in reducing counterparty default risk, but central clearing also creates a single point of vulnerability for the failure of the system — the CCP — that must be managed carefully.
A second program will focus on developing and improving the monitoring tools I described a few minutes ago.
A third program will evaluate and assess how to improve stress tests for banks and nonbanks, and systemwide. This responds to our Dodd-Frank mandate to evaluate and report on stress tests.
A fourth program is assessing and measuring risks arising from changes in market structure, including from the spread of algorithmic and high frequency trading, and how those changes affect market function and liquidity. Finally, we have a program focused on risks in financial institutions, in which we are evaluating and assessing bank capital and liquidity regulation, including unintended consequences, conflicts, and complementarities. These programs hold great promise for informing how policymakers can enhance the resilience of the financial system. In pursuing them, we must recognize that financial innovation and the migration of financial activity across the system will add to our challenges.
Our goal to eliminate gaps in data and analysis will always be a moving target, but we will continue to fill the most important ones. I welcome your counsel, support, and collaboration in pursuing this essential mission. Thank you again for having me here today. I would be happy to answer your questions.
Jon Danielsson, Hyun Song Shin, and Jean-Pierre Zigrand, “Procyclical Leverage and Endogenous Risk,” October 2012. ↩
Jeremy C. Stein, “Overheating in Credit Markets: Origins, Measurement, and Policy Responses,” remarks at the “Restoring Household Financial Stability after the Great Recession: Why Household Balance Sheets Matter” research symposium sponsored by the Federal Reserve Bank of St. Louis, St. Louis, Missouri, February 7, 2013. ↩
Rick Bookstaber, “The Volatility Paradox,” December 12, 2011. ↩
See Tobias Adrian and Hyun Song Shin, “Procyclical Leverage and Value-at-Risk,” Federal Reserve Bank of New York Staff Reports, no. 338, July 2008 (revised February 2013). ↩