The Failure of VaR

Reactions from risk managers have varied. Before the crisis, Basel II capital requirements focused on value at risk (VaR) methodologies. Standard VaR assumes returns are normally distributed when, in fact, they generally are not. Moreover, during financial turmoil, the tails of return distributions are much fatter than normality indicates; hence, losses are underestimated during these periods. Countless empirical studies find the normality assumption applied to asset returns excessively restrictive. Heavy-tailed distributions (distributions with more mass in the tails, which attach higher likelihoods to extreme events; examples include the Student's t, Pareto, and lognormal) have been used as an alternative.
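To make the difference concrete, the following sketch (my illustration, not from the original analysis) compares the probability of a three-standard-deviation loss under a normal distribution and under a unit-variance Student's t with four degrees of freedom; the heavy-tailed model assigns roughly five times more probability to the event.

```python
# Tail probability of a 3-standard-deviation loss: normal vs. Student's t.
import numpy as np
import scipy.stats as st

p_normal = st.norm.cdf(-3.0)
# A t(4) has variance df/(df-2) = 2, so rescale it to unit variance.
p_t = st.t.cdf(-3.0, df=4, scale=np.sqrt(2.0 / 4.0))

print(f"P(return < -3 sd), normal: {p_normal:.5f}")  # ~0.0013
print(f"P(return < -3 sd), t(4):   {p_t:.5f}")       # ~0.0066
```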

VaR debuted as a risk measure by informing us of what proportion of the portfolio is at risk over a given time interval. If, for example, returns were assumed normally distributed with mean return 7 percent and volatility equal to 15 percent, then 95 percent of the time the portfolio return would exceed 7% – 1.645 × 15% = –17.675 percent. Depending on how we want to frame risk, an equivalent statement is that returns would fall below –17.675 percent, but only 5 percent of the time. That is a lower bound. Unfortunately, it leaves open the question of how large losses may be, which is precisely the question of interest during a financial crisis. Moreover, VaR is not a coherent risk measure; in particular, it is not subadditive, so one cannot simply weight and sum the VaRs of individual programs to arrive at the plan-level VaR. Expected tail losses (ETL) provide a more complete picture of what the loss distribution actually looks like: as the name suggests, losses beyond the VaR threshold are weighted by their likelihoods and summed. With respect to non-normality, we can use maximum likelihood procedures to determine the best candidate probability function describing returns, a point to which we return later. It should not be surprising that different asset classes call for different distributional assumptions.
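As a sketch of these calculations (my own, using the normality assumption from the example above), the snippet below computes the 95 percent parametric VaR and the corresponding ETL in closed form, then fits a Student's t to simulated returns by maximum likelihood; the data and parameters are illustrative only.

```python
# Parametric VaR and ETL (expected shortfall) under normality, plus an
# MLE fit of a Student's t -- a minimal sketch, not a production model.
import numpy as np
import scipy.stats as st

mu, sigma, alpha = 0.07, 0.15, 0.05      # mean, volatility, tail probability

# 95% VaR: the return exceeded 95% of the time.
z = st.norm.ppf(alpha)                   # ~ -1.645
var_95 = mu + z * sigma                  # ~ -0.17675, i.e., a 17.675% loss

# ETL: probability-weighted average loss beyond the VaR threshold.
# For a normal distribution this has the closed form mu - sigma*phi(z)/alpha.
etl_95 = mu - sigma * st.norm.pdf(z) / alpha   # ~ -0.2394

print(f"95% VaR: {var_95:.4f}, 95% ETL: {etl_95:.4f}")

# MLE: let the data choose the best-fitting candidate distribution.
rng = np.random.default_rng(0)
returns = st.t.rvs(df=4, loc=mu, scale=sigma, size=5000, random_state=rng)
df_hat, loc_hat, scale_hat = st.t.fit(returns)   # maximum likelihood estimates
print(f"fitted t: df={df_hat:.2f}, loc={loc_hat:.3f}, scale={scale_hat:.3f}")
```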

We care about downside risk. So, why not model the lower tail of the returns distribution independently? The literature on extreme value theory (EVT) does exactly this. Depending on the asset class, we would then have custom distributions informing us of return risks in general, and tail risks specifically. We could extend these insights to develop tools that identify portfolios that minimize downside risk, another point to which we shall return.
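The following is a sketch of the peaks-over-threshold approach from the EVT literature: losses beyond a high threshold are fit with a generalized Pareto distribution, from which tail probabilities follow. The returns here are simulated, and the threshold choice (the 95th percentile of losses) is an assumption for illustration.

```python
# Peaks-over-threshold EVT sketch: fit a generalized Pareto distribution
# (GPD) to losses beyond a high threshold, then estimate tail probabilities.
import numpy as np
import scipy.stats as st

rng = np.random.default_rng(1)
returns = st.t.rvs(df=4, loc=0.0005, scale=0.01, size=10_000, random_state=rng)

losses = -returns                          # work with losses, so the lower
threshold = np.quantile(losses, 0.95)      # tail becomes the upper tail
excesses = losses[losses > threshold] - threshold

# Fit the GPD to threshold excesses (location fixed at 0, as EVT prescribes).
shape, _, scale = st.genpareto.fit(excesses, floc=0.0)

# Tail estimate: P(loss > x) = P(loss > u) * P(excess > x - u | loss > u).
p_exceed = (losses > threshold).mean()
x = threshold + 0.02                       # a loss well beyond the threshold
p_tail = p_exceed * st.genpareto.sf(x - threshold, shape, loc=0.0, scale=scale)
print(f"shape={shape:.3f}, scale={scale:.4f}, P(loss > {x:.4f}) = {p_tail:.5f}")
```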

During crises, correlations often rise together, signaling highly contagious market risk as returns across asset classes begin to move in lockstep. Standard methods track correlations using rolling windows consisting, for example, of the trailing three months, one year, or three years of returns. The problem with these measures is that they always lag changes in volatility; when volatility shifts higher and correlations follow, they underestimate risk. A better response is to fit distributions to the returns data whose parameters, estimated by maximum likelihood, capture the prevailing volatility regime more precisely. Examples include GARCH, EGARCH (which captures asymmetric risk), and multivariate GARCH. The research is unequivocal: GARCH produces a more efficient and unbiased forecast of risk.
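For intuition, here is a minimal GARCH(1,1) variance filter with illustrative parameter values; in practice omega, alpha, and beta would be estimated by maximum likelihood, for example with the arch package. Unlike a rolling window, the conditional variance reacts immediately to the latest shock.

```python
# Minimal GARCH(1,1) conditional-variance filter with assumed (not
# estimated) parameters, purely to show how the recursion responds to shocks.
import numpy as np

def garch_variance(returns, omega=1e-6, alpha=0.08, beta=0.90):
    """Filter the conditional variance series sigma_t^2 through the data."""
    var = np.empty_like(returns)
    var[0] = returns.var()                 # initialize at the sample variance
    for t in range(1, len(returns)):
        # Today's variance = baseline + reaction to yesterday's shock
        # + persistence of yesterday's variance.
        var[t] = omega + alpha * returns[t - 1] ** 2 + beta * var[t - 1]
    return var

rng = np.random.default_rng(2)
r = 0.01 * rng.standard_normal(500)        # simulated daily returns
sigma = np.sqrt(garch_variance(r))
print(f"latest conditional volatility: {sigma[-1]:.4%}")
```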

These methods were either unavailable or impractical a generation ago. Advances in mathematics, computing power, and data availability have now put them on the front line of risk management tools. Still, many risk managers have failed to adopt them. Other tools include copulas, used to model and simulate the joint probability of risks across asset classes, and Monte Carlo techniques, used to simulate scenario analyses on various risks. Further on, I include an application that uses copulas to minimize downside risk.
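As a sketch of the copula-plus-Monte-Carlo idea (an illustration, not the application presented later), the snippet below joins two heavy-tailed t marginals with a Gaussian copula under an assumed 0.6 correlation and simulates joint return scenarios for an equally weighted portfolio.

```python
# Gaussian copula sketch: correlated normals -> uniforms -> assumed
# heavy-tailed marginals, yielding Monte Carlo scenarios across asset classes.
import numpy as np
import scipy.stats as st

rng = np.random.default_rng(3)
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])              # assumed cross-asset correlation

# 1. Draw correlated standard normals via the Cholesky factor.
z = rng.standard_normal((10_000, 2)) @ np.linalg.cholesky(corr).T

# 2. Map to uniforms: this step is the Gaussian copula.
u = st.norm.cdf(z)

# 3. Apply the inverse CDF of each (assumed) marginal distribution.
asset_a = st.t.ppf(u[:, 0], df=4, loc=0.0005, scale=0.010)
asset_b = st.t.ppf(u[:, 1], df=6, loc=0.0003, scale=0.008)

port = 0.5 * asset_a + 0.5 * asset_b       # equally weighted, for illustration
print(f"simulated 5% portfolio quantile: {np.quantile(port, 0.05):.4f}")
```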

Risks change dynamically, suggesting that exposures to tail losses are conditional through time. Sullivan, Peterson, and Waltenbaugh (2010) developed a set of models that estimate hazards: the probability that an asset class will experience a threshold event (a tail-loss event) conditional on the values of factors that influence loss events, such as default or liquidity risk. They extended this thinking to a systemic risk index that estimates the probability that two or more asset classes will simultaneously experience threshold losses. Systemic risks are the topic of the next chapter. A by-product of these analyses is a time-to-failure measure, which signals the number of days between threshold events.
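To fix ideas, here is a generic hazard-style sketch, emphatically not the Sullivan, Peterson, and Waltenbaugh specification: a logistic model estimates the probability of a threshold loss conditional on two hypothetical factors, a default spread and a liquidity proxy, using simulated data.

```python
# Generic hazard sketch: logistic regression of tail-loss events on risk
# factors. Data, factor names, and coefficients are all hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 2_000
default_spread = rng.gamma(2.0, 0.5, n)    # hypothetical credit factor
liquidity = rng.normal(0.0, 1.0, n)        # hypothetical liquidity factor

# Simulate threshold (tail-loss) events whose odds rise with the factors.
logit = -3.0 + 1.2 * default_spread - 0.8 * liquidity
event = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([default_spread, liquidity])
model = LogisticRegression().fit(X, event)

# Conditional hazard for a stressed scenario: wide spreads, thin liquidity.
p_hat = model.predict_proba([[3.0, -2.0]])[0, 1]
print(f"estimated hazard in the stressed scenario: {p_hat:.2%}")
```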

Hubris creates its own risk. No risk management process will succeed in fully anticipating what are essentially unsystematic risks. Because risk management is basically a systematic monitoring process, the tools we have developed and described herein serve that end. Therefore, while our risk programs may identify profiles that match historical systemic risk episodes, we must be vigilant in our efforts to recognize the signs that may signal uniquely different episodes. This is the challenge as I see it going forward. To meet this challenge, we need to focus our efforts on understanding the operation of the global economy and, especially, behavioral signs and patterns among agents that will signal distress that may lead to systemic events. Even during normal times, the complexion of risk is constantly changing with leverage, liquidity, market, operations, counterparty, credit, and political risks competing for attention. Moving forward, risk management needs to monitor a global market environment that will become increasingly competitive and more diversely regulated.
