CHAPTER 3
Risk Management

“ANTONIO:

… I thank my fortune for it,

My ventures are not in one bottom trusted,

Nor to one place; nor is my whole estate

Upon the fortune of this present year:

Therefore my merchandise makes me not sad.”

—Shakespeare, The Merchant of Venice, Act 1, Scene 1

“Risk–reward.”

—Response to the survey question: “How would you describe quantitative finance at a dinner party?” at wilmott.com

Investment advice used to be simplistic. Don't put all your eggs in one basket, or, as Mark Twain said, “Put all your eggs in one basket, and then watch that basket.” There was little in the way of quantification. In the 1950s, economists began to apply probability theory to the problem of asset allocation, and showed how to put numbers on concepts such as risk and reward. Asset managers now had a way of quantifying their strategies. The seeds were being sown for a dramatic shift from finance as art to finance as science – or at least, something that looked a lot like science. But can risk and reward be reduced to hard numbers?

It's a feeling in the pit of your stomach, or a light-headedness. Perhaps a sudden chill, or, worst of all, the three-o'clock-in-the-morning cold-sweats panic attack. The risk in your portfolio has just been realized and it's much worse than you feared.

Human beings are very poor at estimating probabilities. We tend to be optimistic about our investments, and even the most pessimistic of us is usually pessimistic about the wrong things. So shocks often seem to come out of nowhere. In contrast, investors put their money at risk because they want to earn a reward. And these two aspects – risk and reward – are somehow related, but not in an obvious way. Putting all your cash under the mattress is probably safe, unless your house burns down.

Financial risk management – the craft of balancing risk with reward – is a subject that has had a relatively recent quantitative makeover. One of its great advantages is that it takes the emotions out of estimating risks, and perhaps prepares you for when it hits the fan. But that advantage is only as good as the quant methods. If the methods are no good, then risk management is at best a trick for temporarily soothing the psyche. To understand whether it is a useful tool for navigating the choppy waters of finance, we need to step back into asset management history and look at the development of investment techniques.

If we go back to before the 1950s, an optimal investment was considered to be the one that had the best perceived prospects. Of course, people had an innate sense of risk, at a gut level. More cautious investors could mitigate danger by diversifying, in the same way that Antonio in Shakespeare's The Merchant of Venice split his shipments between a number of boats (or "bottoms"), so that if one sank he would not lose all his goods. But risk, unlike profit, wasn't something that you could easily quantify. So people usually went for the profit. This approach did have one advantage, that of simplicity. In deciding which investment to concentrate on, you only needed to analyze each one in isolation.

There are several ways in which to analyze stocks, the three most important being fundamental analysis, technical analysis, and what one might call quantitative analysis. We'll look at each in turn, and then describe the revolution in the 1950s that changed the face of investing.

Fundamentals

Fundamental analysis means studying the business of the company itself, reading balance sheets, income statements, and so on. It's easy to understand why it might be important to know about a company's sales, the quality of its management, whether it is involved in any legal battles over intellectual property, how its competitors are doing, demographics, and so on. However, while it's obvious that such matters are important to the wellbeing and future of the company, it's quite tricky to turn that into a share valuation. We don't know how many of you reading this book are accountants and understand balance sheets and income statements. We are both self-employed, running our own businesses, and we struggle. No, it's not easy to go into all the details of the business of a company, and to interpret them correctly.

To simplify the analysis, people commonly use "multiples" to turn basic accounting concepts into a share price. One such multiple that you'll see in the share pages of the newspapers is the price-to-earnings (P/E) ratio. This quantity is simply the current share price divided by the recent (last 12 months, say) or forecast future earnings per share. For example, if the current share price is $100, and the earnings per share over the last four quarters was $10, then the P/E ratio is 10.

We can try to use the P/E ratio to estimate a company's correct share price. Companies within the same sector may have broadly similar P/E ratios, but ratios vary from sector to sector. If you want to get a ballpark share price for a company you just need to google to find its earnings, number of shares, and what the typical P/E ratio is for that sector. Or conversely, you can see how a company's P/E ratio compares with others in the sector to figure out whether the company is perhaps undervalued by having a low P/E ratio, or overvalued if the P/E ratio is uncharacteristically high. The P/E ratio thus levels the playing field: the size of a company (its earnings), its share price, and how many shares there are, are all scaled out.
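
As a back-of-the-envelope sketch of that calculation (the company, its numbers, and the helper function are all made up for illustration, not taken from any real firm):

    # Rough share-price estimate from earnings, share count, and a typical sector P/E.
    def implied_share_price(annual_earnings, shares_outstanding, sector_pe):
        earnings_per_share = annual_earnings / shares_outstanding
        return earnings_per_share * sector_pe

    # A hypothetical company: $500m earnings, 100m shares, and a sector P/E of 15.
    print(implied_share_price(500e6, 100e6, 15))   # 75.0, i.e. a ballpark price of $75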

If only it were so simple! Unfortunately, for anyone wanting to get a hold on a company's share price there are a multitude of reasons why the P/E ratio doesn't quite fit the bill. Perhaps the last year's earnings don't reflect how well the company is going to do next year, and if you are buying the stock now it's the future you are concerned with, rather obviously. Perhaps the earnings that the company are quoting aren't quite as, ahem, accurate as they would like you to believe. In the UK the supermarket Tesco immediately springs to mind, it having overstated its profits by £263m in 2014. Or perhaps it's just that even within a sector there is an enormous range of P/E ratios (see Figure 3.1).1

Figure 3.1 P/E ratios by industry (consumer goods, financial, utilities, industrial goods, healthcare, and so on) and by sector

Other numbers can also be calculated for valuation purposes. A company's earnings before interest, taxes, depreciation, and amortization (EBITDA) is a common measure of profitability. It takes the earnings and adds back costs that really have nothing to do with the actual running or success of the business (the I, T, D, and A). And instead of the stock price you can use the company's enterprise value (EV). This is the theoretical price you would have to pay to buy the whole company, so it accounts for things like debt. The EV/EBITDA ratio is in some ways a better metric than P/E, because it strips out any dependence on the capital structure of a company. Whether a company has a lot of debt or not much is largely irrelevant, since interest payments are excluded from EBITDA while the debt itself is counted in the EV. It's another playing-field leveler.
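
In the same spirit, here is a sketch of how the two multiples compare, with invented numbers; a real enterprise-value calculation involves more line items (preferred shares, minority interests, and so on) than this stripped-down version.

    # Simplified P/E and EV/EBITDA multiples from made-up accounting figures.
    market_cap = 1000e6        # share price times number of shares
    net_debt   = 400e6         # debt minus cash
    earnings   = 80e6          # net earnings (after interest, tax, depreciation, amortization)
    ebitda     = 200e6         # earnings with the I, T, D, and A added back

    enterprise_value = market_cap + net_debt   # rough price to buy the whole company
    print("P/E:      ", market_cap / earnings)         # 12.5
    print("EV/EBITDA:", enterprise_value / ebitda)     # 7.0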

Even here, though, we don't get the whole story. Consider, for example, a small drug company with a handful of cancer drugs in its pipeline. Investors are attracted to such companies because, for the price of a small investment, they might just get rich, while simultaneously helping to cure cancer. But how do you value a company where the success will only be known after the drug has succeeded in a plethora of animal and human trials, been approved by drug regulatory agencies, and beaten its competitors in the marketplace? Getting a drug to market is a bit like selling a screenplay in Hollywood: the potential payoff is huge, but for a newbie your chances of receiving it are minuscule. Analyzing metrics such as EBITDA is not much use, because the company will consistently lose money until it has a hit. It is like looking at the screenwriter's dingy low-rent basement apartment and his depleted bank account and concluding that the screenplay has no chance, when it might be the next Citizen Kane.

These multiples also don't tell you anything about risk. They may give you a rough estimate of where the share price of a particular company theoretically ought to be, perhaps relative to its peers, at one point in time, but they don't give you much information about the probabilities of the value being higher or lower in the future, and so how your investment might turn out. Risk is about variation around an expected share price, and particularly how much it might fall. In practice, as an investor you might not be too bothered about whether that variation is due to changes in a company's profit, whether it's a sector thing, or even if it's just an irrational whim of the market. You just want to know how big your downside might be and how likely it is – and that will affect the price you are willing to pay.

Beauty Contest

But there is also a deeper problem with fundamental analysis, relating to the whole idea of value. You may be the greatest analyst of all time, able to calculate EV and EBITDA to the nth decimal place, but that ability might not amount to a hill of beans. The reason, unfortunately, is that there is no exact fundamental value. The price of a company in the market is determined by what other investors will pay for it. So the task of a stock investor is not to figure out the true worth of a company; it is to figure out what other investors think. In his General Theory of Employment, Interest and Money (1936), the economist John Maynard Keynes compared the stock market to a beauty contest: "It is not a case of choosing those that, to the best of one's judgment, are really the prettiest, nor even those that average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be. And there are some, I believe, who practice the fourth, fifth and higher degrees."2

Of course, you might believe that you have unique insight into the future, which means that the share price should converge over time to the calculated value, once everyone has come to their senses. But as Keynes reputedly pointed out, "The market can stay irrational longer than you can stay solvent." In fact, it can stay irrational forever. Furthermore, future prospects are even harder to divine than the actions of investors. Keynes again: "If we speak frankly, we have to admit that our basis of knowledge for estimating the yield 10 years hence of a railway, a copper mine, a textile factory, the goodwill of a patent medicine, an Atlantic liner, a building in the City of London amounts to little and sometimes to nothing."

The price of a stock therefore depends less on hard numbers than on inherently fuzzy and unquantifiable factors such as investor sentiment and ideas about where the company, and the rest of the world, is headed. It all rather makes a mockery of deep fundamental analysis. However, if you are really convinced that a company is seriously undervalued you could always just buy the whole thing, which takes the opinion of other investors out of the equation. But that's beyond the resources of most of us. (We do hope Warren Buffett is reading this book though.) The approach can also be applied to things such as houses – see Box 3.1.

Also, some people manage to get rich using a value approach – such as Keynes himself, who parlayed fairly modest savings into what would amount to about £10 million in today's money.3 His technique evolved over time, but after a couple of mishaps in which he was nearly wiped out, he settled on the quaint but effective notion of investing in good companies. In a 1934 letter to a business associate, Keynes wrote: “As time goes on, I get more and more convinced that the right method in investment is to put fairly large sums into enterprises which one thinks one knows something about and in the management of which one thoroughly believes. It is a mistake to think that one limits one's risk by spreading too much between enterprises about which one knows little and has no reason for special confidence… One's knowledge and experience are definitely limited and there are seldom more than two or three enterprises at any given time in which I personally feel myself entitled to put full confidence.”4 We will now stop quoting Keynes.

Technical Analysis

Good companies are hard to find, and not everyone has the same access to information as a Keynes or a Buffett. If fundamental analysis is difficult to do, and unreliable thanks to Keynes's clever observations, then we've got something much simpler for you. But sadly it's equally unreliable. Technical analysis (or “chartism”) means looking for patterns in stock prices in an effort to predict their future values. Figure 3.2 is a simple example.

Figure 3.2 General Motors share price, 29 September 2010 to 1 November 2014, with trendlines superimposed from around May 2012 onward

The figure shows the General Motors share price over a period of more than 3 years. Notice how we've superimposed a couple of straight lines on this. They are meant to represent the trend over an 18-month period. The chartist would look at this and conclude that General Motors is following a trend that will continue into the future. He would advise buying the stock and reaping the rewards. No need to stress about those boring accounting details.

This trendline is only one of the many patterns that chartists look for. Other patterns have names like "saucer bottoms" (a shallow U-shape), "head and shoulders" (a small hump, followed by a big hump, followed by another small hump), "flags" or "triangles" (a stock price that bounces up and down with decreasing amplitude so it looks like a child has badly colored in a flag with a crayon), and more. They also measure quantities such as moving averages, and plot these on top of the stock price graph. When two moving averages collide, it means something, apparently. Elliott waves, Bollinger (cheers!) bands, candlestick charts, and more are all supposedly important.

Sadly the evidence is very strong that there is little predictive power in such patterns.7 However, there is also very strong evidence that humans do tend to see patterns where there aren't any. To emphasize this point we have plotted a similar graph and associated trendlines in Figure 3.3. However, this share price is a fake: it was generated using random numbers in Excel – there is no trend here. Technical analysis often amounts to reading patterns into events that probably have no pattern. In his efficient markets paper, Fama made a similar point: "If the random walk model is a valid description of reality, the work of the chartist, like that of the astrologer, is of no real value in stock market analysis."8 We would agree, with the difference that we don't think markets follow a random walk either, even if they sometimes look like it.

Figure 3.3 A fake "share price" generated from random numbers, over the same 2010–2014 period, with similar trendlines superimposed

For our present discussion, the main point is that, like fundamental analysis, technical analysis says nothing about risk. The chartist is trying to tell us where the share price will be in the future (with a large supply of excuses in preparation for when the prediction goes wrong), not about the probabilities, the variation, and the risks. The main risk to the technical analysis believer is that it is all rubbish.

Now, as an aside, we do have some sympathy with technical analysts. And we believe that there could be a grain of truth in their ideas. There is a simple mechanism which they could exploit to make their predictions do much better. But this requires them to change their current thinking, in two ways.

At the moment, the technical analysts present their predictions with the sort of conviction used in UK weather forecasts: There won't be a hurricane tomorrow. Period. But when there is a hurricane, as famously happened in the UK in October 1987, then people tend to remember, and not trust future predictions. The exact quote by weatherman Michael Fish in 1987 was "Earlier on today, apparently, a woman rang the BBC and said she heard there was a hurricane on the way… well, if you're watching, don't worry, there isn't!" The following day was the worst storm to hit the UK for hundreds of years. In the USA the weather forecasters are more sophisticated, presenting their predictions with a probability, which is also useful as a get-out clause. Technical analysts are as emphatic as UK weather forecasters, perhaps because they rarely have the quants' training in probability theory. Instead, what technical analysts do when they are wrong is to blame it on the pattern: it wasn't a "head and shoulders," it was a "Mount Rushmore" (we made that one up!). So, Step 1: give percentages. After all, you only need to be a few percentage points above 50% accuracy to make a fortune.

This still doesn't make technical analysis work; it just makes it harder to disprove. The second step is to get together and decide on a single indicator that they are all going to use for prediction. Ideally nothing too simple, since they don't want everyone to be able to do it themselves. Once they are all using the same single indicator, and all making the same prediction, then a BUY alert from them would result in people buying the stock, followed, as a consequence, by the stock rising – thus making their prediction come true. This is a simple feedback effect – the power of suggestion and herding – of which psychologists are aware, but it has only recently been studied with respect to share prices. While chartists are all making different predictions, their buys and sells will cancel out, and there is no feedback. We will discuss the good and bad sides of feedback later, and will have more to say on more modern versions of market prediction.

Again we have mentioned probabilities. And this is key to the modern methodology for risk measurement and management.

Quant Analysis

In the 1950s some pretty straightforward ideas in probability were applied to this problem, and asset allocation suddenly became something that you could write about in respected journals. Fundamental analysis and technical analysis both concentrate on the possible rewards, while neglecting risk, and up until the 1950s risk didn't really get a look in as far as asset management was concerned. This changed with the work of the University of Chicago's Harry Markowitz, published in 1952, which became known as modern portfolio theory (MPT). His great insight was to quantify share price behavior in a probabilistic sense, and relate it to the idea of risk. Even today, if you have a robo-advisor running your investments, or for that matter a human one, the strategy is probably based on a version of MPT.

There is always an implicit trade-off between risk and price. Consider a simple game where you toss a coin, and get $10 if you call it correctly and nothing if you lose. Since you have a 50/50 chance of winning, the expected gain (i.e., the average over a large number of tosses) is $5. Therefore, the fair price to pay for the chance to play the game is also $5. But who would want to play at that price? Contrary to Bachelier's assumption of zero expected profit, most people would say that they'd play only if they had an edge – in mathematical language, a positive expectation. Maybe we could tempt you to play if the upfront premium was only $4. With an expected payoff of $5 that gives you an expected, but not guaranteed, profit of $1.
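
A quick simulation makes the arithmetic concrete; this is just our sketch of the game described above, with the $4 premium built in.

    import random

    # Pay a $4 premium to play; win $10 on a correct call, nothing otherwise.
    # Expected profit per game is 0.5 * 10 - 4 = $1.
    def play(premium=4.0, payoff=10.0):
        return (payoff if random.random() < 0.5 else 0.0) - premium

    trials = 100_000
    average_profit = sum(play() for _ in range(trials)) / trials
    print(average_profit)   # close to 1.0, though any single game can still lose you $4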

In this situation most people would be “risk averse.” In financial terms, this means that you want a positive expectation. More subtly, you might link your expectation to the degree of risk you are taking. If you are “risk neutral” then you don't need an expected profit. And if you are “risk seeking” then you are comfortable with negative expectations. This is lottery territory, where you expect to lose but the potential enormous payoff outweighs losing a few dollars now and then. It can be totally rational to play games with negative expectations. If winning the lottery is your only way of getting a life-saving operation, then you will play.

Markowitz expressed this trade-off by representing the behavior of an individual share over a set time horizon in terms of two parameters: the expected return and the standard deviation. The first measured reward, the second measured a kind of risk. A stock whose price tended to experience wild fluctuations was considered riskier than one which was more stable. Knowing these two parameters amounted to knowing the probabilities for stock price behavior in the future. One could answer questions such as: what is the probability of the stock doubling in value over the next year? How long before we can expect the stock to hit a key level?

The means and standard deviations for stocks were estimated using historical time series. As an example, let's say you have the daily closing prices for the stock going back 10 years. From this you can calculate daily returns. This is just the percentage change from one day to the next. So, if the stock was at $50 one day and the day after it is at $51, then that's a 2% return. If it was at $49 then that's a −2% return. You now have a time series of returns. You can calculate the expected return by averaging all the returns, and the standard deviation gives you the risk.9
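
As a minimal sketch of that calculation (the price series is made up and far too short to be meaningful; we also use the common convention of annualizing by a factor of roughly 252 trading days):

    import numpy as np

    # A made-up series of daily closing prices.
    prices = np.array([50.0, 51.0, 49.98, 50.75, 51.50, 50.90, 52.00])

    # Daily returns: the percentage change from one close to the next.
    returns = prices[1:] / prices[:-1] - 1

    # Annualize the mean and standard deviation (~252 trading days per year).
    expected_return = returns.mean() * 252
    volatility      = returns.std(ddof=1) * np.sqrt(252)

    print(f"expected return ~ {expected_return:.0%}, volatility ~ {volatility:.0%}")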

Of course, we can't guarantee that the future parameters will be the same as the historical ones. Using past data to estimate future returns looks suspiciously like a chartist using a trendline to predict the future. And there is no reason why volatility, as measured by the standard deviation, should be stable either. But this problem will crop up over and over in our book.

Markowitz would then plot individual shares on a risk/return chart, with the horizontal axis representing risk (the standard deviation, or volatility) and the vertical axis representing the expected return. Figure 3.4 shows an example.

Figure 3.4 Example risk/return chart, showing Apple, Coca Cola, Ford, IBM, Johnson & Johnson, and Procter & Gamble

In the figure we see six US stocks: Apple, Coca Cola, Ford, IBM, Johnson & Johnson, and Procter & Gamble. Take IBM for example. It has a horizontal coordinate of 0.27, meaning that its standard deviation, volatility, or risk is 27%. And it has an expected return, measured on the vertical axis, of 8%. These are both annualized numbers.

From plots like this we can immediately see which stocks are appealing and which are no-hopers. Ford (F) is clearly useless. It has quite a large risk, almost as much as Apple (AAPL), but without Apple's impressive return. Johnson & Johnson (JNJ) and Procter & Gamble (PG) are pretty similar; it's not worth distinguishing between them based on this data alone. Actually, all of Ford, Coca Cola (CCE), and IBM are worse than JNJ/PG on the grounds of having higher risk yet lower expected return. If we were to invest in a single stock then we'd have to rule out all three of those. That leaves JNJ/PG and AAPL as possibles. However, we can't decide between AAPL and JNJ/PG. Why not?

All things being equal (i.e., for the same risk, or volatility), the higher up this plot the better. See the large arrows in the plot. And then it's better to have lower risk than higher risk, again all things (now expected return) being equal. So you want a stock further to the left. But if you have a choice between bottom left and top right, it's not necessarily easy to make a choice. In our example here AAPL has a great expected return but that comes at a cost, high risk. Even with simple pictures like this, Markowitz is giving us an easy way of quantifying our investments, helping us to compare individual investments while taking both dimensions – risk and reward – into account. However, there's more to come, because Markowitz had another trick up his sleeve.

Right at the start of this chapter we talked about analyzing stocks in isolation, and so far that is all we've done. In his MPT, Markowitz also looked at how two stocks behave together, and then how entire portfolios of stocks behave. The key to this analysis is the concept of correlation.

Correlation

In statistics, a correlation is a number which measures how two quantities tend to vary together. For example, the purchase of umbrellas is highly correlated with rain storms; ice-cream sales with heat waves. Some correlations are spurious, nothing more than statistical flukes. As just one example, US spending on science, space, and technology in the period 1999–2009 had an uncannily exact (0.99) correlation with the number of suicides by hanging, strangulation, and suffocation, which is not very useful information.10 There are many ways to measure correlation, but the one most commonly used in quantitative finance is that developed by Karl Pearson in the late 19th century. To measure the Pearson correlation between two stocks you need two time series of their returns, measured at the same times. The calculation gives you a correlation coefficient that is between plus and minus 1. Loosely speaking, a positive number means that ups and downs are more or less in sync. If it's negative then an up in one stock tends to be associated with a down in the other, and vice versa. If the correlation is zero then we say that the stocks are uncorrelated.
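
Here is a sketch of that calculation, again with two made-up return series; numpy's corrcoef is one standard way of getting the Pearson coefficient.

    import numpy as np

    # Two made-up daily return series, measured at the same times.
    returns_a = np.array([0.011, -0.004, 0.007, -0.012, 0.003, 0.009])
    returns_b = np.array([0.008, -0.002, 0.005, -0.010, 0.001, 0.006])

    correlation = np.corrcoef(returns_a, returns_b)[0, 1]
    print(correlation)   # close to +1: these two series move almost in lockstep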

To see why correlation could be important, suppose you have a portfolio of two stocks which have the same expected return and are highly correlated. Then the prices of the individual stocks will tend to move up and down in unison, and the portfolio which includes both will therefore bounce along in time as well. However, if the stocks are uncorrelated, the volatility will be lower; and if the two stocks have negative correlation, it will be lower still, because when one stock zigs the other zags, and the fluctuations cancel each other out. The same idea can be extended to a larger portfolio of many stocks. By selecting stocks with the right mix of correlations, it is possible to reduce overall risk while retaining expected rewards.
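
To put some numbers on this, here is a sketch of the standard two-asset sum: the portfolio variance combines the two individual variances plus a cross term proportional to the correlation, so lowering the correlation lowers the risk while the expected return is unchanged. The volatilities below are invented.

    import numpy as np

    sigma = 0.25          # two made-up stocks, each with 25% volatility
    w1, w2 = 0.5, 0.5     # equal weights

    for rho in (1.0, 0.0, -0.5):
        # Two-asset portfolio variance: w1^2 s1^2 + w2^2 s2^2 + 2 w1 w2 rho s1 s2
        variance = (w1 * sigma)**2 + (w2 * sigma)**2 + 2 * w1 * w2 * rho * sigma**2
        print(f"correlation {rho:+.1f}: portfolio volatility {np.sqrt(variance):.1%}")

    # correlation +1.0 gives 25.0%, 0.0 gives about 17.7%, -0.5 gives 12.5%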

For example, in Figure 3.4 we can ask what would happen if we buy 1000 shares of AAPL and 1000 shares of JNJ. While we won't go into the sums, it's an obvious concept that this portfolio too will have an expected return and a risk. It's then a small step to asking, why buy 1000 of each stock? Why not different amounts for each? And that will give us more dots, more potential portfolios to invest in. And why just AAPL and JNJ? Why not throw the other stocks into the pot? And why just buy? Can we perhaps sell stocks short? We could also broaden the mix with other securities such as bonds (the price of bonds is usually supposed to be inversely correlated with stocks, although that relationship broke down after the recent financial crisis when assets of all types were pumped up by quantitative easing).

The end result is that we can get a much wider range of risks and expected returns if we allow our portfolio to have any possible combination of assets. See the dot in Figure 3.5 labeled “A portfolio.” Varying the constituents of the portfolio and their quantities we can move that dot up, down, and sideways.

Figure 3.5 A selection of stocks plotted according to their risk and expected return, and a portfolio

Given that we can generate all these different portfolios with varying risk/reward characteristics, the next question Markowitz tackled was: how can we choose a portfolio such that for a given amount of risk we maximize the expected return? That is, move the dot up as high as possible. It might even be that poor old F finds a role, either as a stock to sell short or because its correlation with other stocks decreases risk sufficiently to make it appealing.

This is a nice optimization problem. Markowitz answered it using the machinery of linear programming – a method for optimizing some quantity subject to constraints expressed as linear equations – although, because variance is a quadratic quantity, his version is strictly a quadratic programming problem. Linear programming was invented in the late 1930s by the mathematician Leonid Kantorovich who, while working for the Soviet government, used it to optimize the production of plywood. It was kept secret during World War II, when the Russians used it to optimize the war effort, but afterwards began to be adopted more widely in business. Markowitz had the good idea of adapting the technique to the problem of risk and reward. The result was a chart like the one in Figure 3.6.

Figure 3.6 Lines representing the efficient frontier and, when the risk-free investment is included, the capital market line and market portfolio

There's a lot going on here, so bear with us. The first thing to notice is the curve marked "Efficient Frontier." We get this by doing the above optimization: choose a level of risk, find the portfolio that maximizes the expected return for that risk, then move on to a different level of risk and repeat. All of the points on this curve can be attained by different portfolios. MPT says that there's no point in investing in any portfolio that is below this curve, since you can do better by optimizing your portfolio, increasing the expected return for a given amount of risk.

Of course, no portfolio analysis would be complete without including a risk-free asset, such as cash held in a bank account, in the mix. Because its risk is zero, it sits on the vertical axis; it's marked "Risk-free investment" in the figure. The bold line which joins this point to the efficient frontier, touching it at a single point, is called the capital market line, and that tangent point is the tangency portfolio. You can get to any point between the risk-free dot and the tangent point by holding a mix of cash and the tangency portfolio. The higher-risk portion of the line to the right can also be reached by employing leverage (i.e., borrowing cash to buy the shares in the tangency portfolio). Choices anywhere on this line therefore give the maximum reward for a given risk. All you need is the right mix of cash (or debt) and the tangency portfolio. It is an efficient frontier with bells on. We'll come back to this tangency portfolio shortly.
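
For the curious, here is a rough sketch of the kind of optimization involved, using Python's scipy library; the expected returns, volatilities, correlations, and risk-free rate are all invented for illustration, and a real implementation would need far more care over constraints and, as we discuss below, parameter estimation.

    import numpy as np
    from scipy.optimize import minimize

    # Made-up annualized expected returns, volatilities, and correlations for three assets.
    mu   = np.array([0.08, 0.12, 0.20])
    vol  = np.array([0.15, 0.22, 0.35])
    corr = np.array([[1.0, 0.3, 0.2],
                     [0.3, 1.0, 0.4],
                     [0.2, 0.4, 1.0]])
    cov  = np.outer(vol, vol) * corr
    rf   = 0.02                                  # risk-free rate

    def port_return(w): return w @ mu
    def port_vol(w):    return np.sqrt(w @ cov @ w)

    budget = [{"type": "eq", "fun": lambda w: w.sum() - 1}]   # weights add up to 1
    bounds = [(0, 1)] * len(mu)                               # long-only, for simplicity
    w0 = np.ones(len(mu)) / len(mu)

    # One point on the efficient frontier: minimize risk for a target expected return.
    target = 0.12
    frontier = minimize(port_vol, w0, bounds=bounds,
                        constraints=budget + [{"type": "eq",
                                               "fun": lambda w: port_return(w) - target}])

    # The tangency portfolio: maximize the Sharpe ratio (here, minimize its negative).
    tangency = minimize(lambda w: -(port_return(w) - rf) / port_vol(w),
                        w0, bounds=bounds, constraints=budget)

    print("frontier weights:", frontier.x.round(3))
    print("tangency weights:", tangency.x.round(3))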

Markowitz now has no more to say. Where you personally want to be on the straight line is entirely a matter for you. Markowitz cannot help. Feeling nervous? Hold 50% cash and 50% stocks. Want to take a flutter? Borrow cash and double up on the market.

This is clever stuff. We've got some statistics, some mathematics (nothing too complicated, but more than most people are comfortable with), and some great concepts including an optimization, and it's always nice when you can optimize something. We have mentioned the word “efficient” many times, which makes us sound like engineers. And it still leaves a little bit of room for personal preference in choosing your portfolio. Best of all is the way it reduces the task of choosing from an enormous array of securities, with their complex and intractable mix of risks and rewards, to a simple, straight, elegant line. No wonder Harry Markowitz was awarded an economics Nobel gong in 1990. What could go wrong?

Well…

Quite a lot, as it turns out. The problem with MPT – as with all quantitative methods in finance – is that it is only as good as its underlying assumptions. Some of these concern the basic properties of markets. Like the efficient market hypothesis, MPT assumes that investors act rationally to further their self-interest, make decisions independently, have access to similar levels of information, etc. As a result, stock prices follow a random walk, with an upward bias that corresponds to the average growth rate, and daily changes that follow a normal distribution.

So far, nothing new. But in addition, MPT assumes that we can measure meaningful correlations between different securities. If we have N stocks then we have N expected returns to calculate and N volatilities. But how many correlation parameters are there? If you ever did combinations and permutations at school you may have a vague memory of how to work this out. Each correlation is between two stocks. So the question is, how many combinations of two stocks are there if there are N stocks to choose from? First choose one of the stocks: there are N ways to do this. Now choose one of the remaining stocks: there are N−1 of these. Choosing the two together gives N(N−1) ways. But we don't care whether we choose stock A first and then B, or vice versa, so divide this number by 2. This leaves the number of combinations and thus the number of correlation parameters as N(N−1)/2.

Now that's a lot of parameters to measure! For example, if we had 500 stocks to choose from (say, the constituents of the S&P 500 index) then that would be 500 return parameters, 500 volatilities, and 500 × 499/2 = 124,750 correlations – a total of 125,750 parameters to estimate!
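
The counting itself is a one-liner, as a quick sanity check of that arithmetic:

    def parameter_count(n_stocks):
        # n expected returns + n volatilities + n(n-1)/2 correlations
        return n_stocks + n_stocks + n_stocks * (n_stocks - 1) // 2

    print(parameter_count(500))   # 125750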

The problem is not so much the number of parameters that need to be measured, because the method is relatively simple. No, it's more a question of the stability of the parameters. Some correlations will be completely spurious and will simply go away; others just fluctuate with time. This can particularly be a problem during a market crash, when asset price changes tend to be highly correlated because they are all falling together.

But perhaps the most important assumptions, whose problems go to the core of the theory, are that we can compute expected risk and reward for each stock in the first place. Astute, or skeptical, readers may have noticed that MPT asks us to input expected growth rates for individual stocks; but as seen above, it is not possible to accurately predict expected returns using either fundamental or technical analysis. This empirical fact was one of the main justifications for efficient market theory.

Just as concerning is the idea that we can measure risk using the standard deviation of past price changes. When we estimate returns, we are making a prediction about the future; but when we estimate risk, we are predicting the uncertainty in our forecast – a prediction about our prediction – which is even more difficult. The standard deviation tells us something about past fluctuations, but there is no reason why it should remain constant (there are ways around this, as discussed in Chapter 7, but none of them are very appealing). It also seems to be a slightly strange way of measuring risk, because it assumes that sudden price increases are as bad as sudden decreases, while in fact we only worry about the latter. You probably don't lose sleep or go into a blind panic if your portfolio suddenly surges overnight. Then there is the fact that risks might not express themselves through volatility. Consider the previous example of a drug company. Its share price might be quite stable, or not, but that says nothing about the probability of its drugs being successful. There is also a more subtle point, which is whether the standard deviation is even a meaningful concept for financial data in the first place. We return to that below.

Efficiency Squared

Some of these concerns were addressed by William Sharpe, who later shared the 1990 economics Nobel with his mentor Markowitz. When asked in a 1998 interview what had appealed to him about Markowitz's work, he replied: “I liked the parsimony, the beauty, of it… I loved the mathematics. It was simple but elegant. It had all of the aesthetic qualities that a model builder likes.”11 Searching for a way to simplify MPT even further, and make it even more beautiful, he asked what would happen if everyone in the market optimized their portfolio according to Markowitz's calculations. The answer was that the “market portfolio” – defined as a portfolio whose holdings of each security are proportional to that security's market capitalization – would be an efficient portfolio. In other words, the market itself would adjust prices to an equilibrium level that optimally balanced risk and reward.

Here at last we had a kind of synergy between efficient markets and efficient portfolios. Some economists and analysts took this a little too seriously. In their minds, they had found the unique, perfect, beautiful portfolio, and it was called the market. Blah, blah, gibberish, gibberish. Wild flights of fancy ensue. To the efficient frontier and beyond! As everyone followed MPT, and behaved perfectly rationally, so the market portfolio and the tangency portfolio would move toward each other, eventually becoming one. Everyone would own the one portfolio, and it would be perfectly efficient. No more annoying uncertainty, or risk, or irrationality. All would be well with the world.

If anyone can flog an already sick horse to death, it is an economist.

This unquestioning enthusiasm for elegant theory was dented somewhat on October 19, 1987, otherwise known as Black Monday, when stock prices mysteriously became completely correlated as they plunged by 22% in the USA and by similar amounts around the world. In surprise terms it was the economic equivalent of the UK hurricane, which had hit just days before. As discussed further below, the crash was later partly blamed on automated portfolio management – in particular the mechanical hedging strategy known as portfolio insurance, the very thing which was supposed to protect against such crashes – because, with beautifully choreographed synchronization, institutions using the same models were all managing their portfolios in the same direction by selling assets at the same time. Whenever a model becomes too popular, it influences the market and therefore tends to undermine the assumptions on which it was built.

Despite economists getting a bit carried away with unrealistic theories, a number of good and useful ideas came out of MPT, such as the Sharpe ratio. This is the ratio of a stock's, or a portfolio's, expected return in excess of the risk-free rate to the volatility. In our MPT plots you just take the line that joins the risk-free dot to the dot representing a specific investment and measure its slope. This can also be called the "market price of risk" (for the stock in question), because the greater the slope, the greater the compensation you get in terms of expected return above the risk-free rate for each unit of risk taken. Each financial instrument has its own market price of risk. This is another one of those nice quantities that level the playing field; in words it is just the risk-adjusted return, where volatility is a proxy for risk. Investments with a higher Sharpe ratio are essentially better than those with a lower one. And the capital market line is clearly the highest and therefore best you can achieve. The Sharpe ratio is also measured for hedge funds, and if you read the prospectus of a hedge fund they will invariably quote theirs, trying to entice new investors.
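
The calculation really is just a slope. A sketch, using the IBM-style numbers from Figure 3.4 and an assumed 2% risk-free rate:

    def sharpe_ratio(expected_return, risk_free_rate, volatility):
        # Excess return per unit of risk: the slope of the line joining the
        # risk-free dot to the investment on the risk/return chart.
        return (expected_return - risk_free_rate) / volatility

    print(sharpe_ratio(0.08, 0.02, 0.27))   # roughly 0.22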

Another useful invention was that of the index fund. If markets represent the optimal portfolio, then just buy the market index. Of course, this raises the question of which index. But cue the invention in the mid-1970s of funds such as the highly successful Vanguard 500 Index Fund. This does nothing more complicated than track the Standard & Poor's 500 Index, but still handily beats most fund managers once expenses are taken into account. Sharpe told the Wall Street Journal: “When I taught Investments at the M.B.A. level at Stanford, I started the first class by writing a phone number on the board. I then told the students that it was the most valuable information they would get from me. You probably guessed that it was the number for Vanguard.”12 Note that the index fund approach represents the exact opposite of Keynes's advice, which was to focus on a handful of companies.

The success of index funds is routinely supplied as evidence that markets are efficient. But a better way to look at it is that index funds are a very good business model that acts as a kind of parasite on the financial system. The market is made up of scores of funds and individual investors, who are making judgments about the value of companies. An index fund represents a kind of average of their decisions, a way of replicating their strategies, so by definition it should give average performance. However it can achieve this without any research or thought at all, so of course its expenses are minimal. That gives it an advantage over other funds. It also has the positive effect of keeping industry management fees in check. But if every fund adopted an index approach, the system would fall apart, since all a company would have to do to succeed is get into whatever index is the most popular. Indeed, index funds have grown so large that the heaviest trading of the year often occurs on the day when the Russell indices – a favorite among US fund managers – are updated.

Value at Risk

Perhaps the main contribution to come out of portfolio theory, though, was that asset managers were now quantifying their strategies: they were measuring expected returns and risk, and balancing them off as two sides of the same coin. Following Black Monday, the investment community started to take risk measurement and risk management even more seriously. A methodology called "value at risk" (VaR) began to gain traction. This was based on the portfolio risk measurement used in MPT and gave senior management a single number designed to give a sense of how much a bank might be expected to lose. In its basic form VaR has two key elements. The first is a degree of confidence: 95%, say. The second is a time horizon: 1 day, say. The risk manager might then say that the VaR is $2 million.

This is interpreted as meaning that 95 days out of 100, losses on the portfolio will be less than $2 million. If instead the degree of confidence is 97%, then this statement would change to 97 days out of 100. If the time horizon is 1 year then it would be adjusted to so many years out of 100. And so on. The manager would then decide whether the $2 million VaR was acceptable or not. If not then action would be taken to reduce the number by changing the portfolio or hedging.
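
As a sketch, here are the two most common ways of turning a history of daily profit-and-loss numbers into such a figure: the "historical" way (read off a percentile) and the "parametric" way (assume a normal distribution). The P&L series below is simulated, not real, and a bank's version would involve mapping thousands of positions onto risk factors first.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    daily_pnl = rng.normal(loc=0.0, scale=1e6, size=1000)   # simulated daily P&L in dollars

    confidence = 0.95

    # Historical VaR: the loss exceeded on only 5% of days in the sample.
    historical_var = -np.percentile(daily_pnl, 100 * (1 - confidence))

    # Parametric (normal) VaR: mean and standard deviation plus a normal quantile.
    parametric_var = -(daily_pnl.mean() + norm.ppf(1 - confidence) * daily_pnl.std(ddof=1))

    print(f"1-day 95% VaR: historical ${historical_var:,.0f}, parametric ${parametric_var:,.0f}")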

VaR has come under a great deal of criticism. In fact, since 2008 it is impossible to find anyone in the field who hasn't criticized it. Before 2008 it was a different story. The main criticisms are as follows.

  • It focuses on typical market movements, the frequent events. This is fine, but understand that it's not the frequent events that usually cause institutions to collapse.
  • It can lead to a false sense of security.
  • It usually assumes normal distributions. However, share price returns are not normally distributed.
  • It doesn't tell you how much you might lose on those days when the VaR number is exceeded.
  • It uses highly unstable parameters. During typical market movements there may be some correlation between assets, but come the big crash then all assets tend to be extremely highly correlated, and this totally destroys the VaR numbers.
  • It creates dangerous incentives.
  • It is easily abused.

All of these are fairly obvious criticisms, and mostly can be improved by different mathematics. The last two, however, are more subtle.

First, incentives. Let's modify the casino game of roulette to make it on average profitable (i.e., give it a positive expectation). We'll work with the European wheel, which has 37 numbers, 1 to 36 plus a single zero. We'll play the game where a $100 bet will get you an extra $3 if any nonzero number comes up. But if it's zero then you lose the $100. You expect to make a profit of 36/37 × $3 − 1/37 × $100 = 21.6 cents per spin. This is positive, so you expect to make money. (If only real roulette were like this.) However, one time in 37 – that is, about 3% of the time – you lose everything you've bet. If you look at VaR at the level of 95% with a time horizon of one spin of the wheel, then it will look like there is no risk. This is because the 3% chance of losing is within the 100% − 95% = 5%, and not seen. If you are senior management with little clue about the subtleties of this "investment," then you might be rather pleased with the trader who has found it. And since there seems to be no chance of a loss at the 95% level, you might be tempted to gamble rather a lot. If you do, then it won't be that long before you are wiped out.
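
A quick simulation of this made-up game shows the blind spot: the expected profit is positive, the 95% one-spin VaR registers no loss at all, and yet one spin in 37 wipes out the entire stake.

    import numpy as np

    rng = np.random.default_rng(1)
    spins = rng.integers(0, 37, size=100_000)          # 0 stands for the zero pocket
    pnl = np.where(spins == 0, -100.0, 3.0)            # lose $100 on zero, win $3 otherwise

    print("expected profit per spin:", pnl.mean())      # about +0.216
    print("95% one-spin VaR:", -np.percentile(pnl, 5))  # -3.0, i.e. no loss visible at this level
    print("worst case:", pnl.min())                     # -100.0, hiding in the other 5%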

Second, abuse. There follows a true story told to Paul by a risk manager. We shall tell it in the risk manager's own words. “I'm a quant on a trading desk. One of my jobs is to measure the risk in our traders' portfolios. Last week one of them gave me a breakdown of his portfolio and asked me to tell him his VaR so he could report the number back to his boss. I went away and did the numbers. I gave the trader my report, essentially just a single number as the conclusion, his VaR. The trader looked at this and then looked at me. He said to me ‘No, it's not. Go away and come back with the right answer.’ He said it in a way that made it clear what I had to do. I had to do something with the model, or the parameters, or anything that would produce a significantly lower number. If I couldn't then the trader would have to scale back his positions. He didn't want to do that. He could make my life very difficult.”

If the model is based on unreliable assumptions, and if the parameters are unstable, then it is easy to choose the model or the parameters to make the reported risk as low as possible. (Hey, there's another optimization problem here… albeit an evil one. We'll see this again in Chapter 9.) And this is all that the traders want. The lower the reported risk the greater the volume they can trade, and if all goes well the bigger their profit and bonus.

In 1994 the investment bank J.P. Morgan released its RiskMetrics™ methodology for measuring VaR. It involved a special way of measuring volatility, and some software. There was nothing particularly earth-shattering about what they were doing, but it did throw another spanner into the works… systemic risk. Once everyone is using the same (wrong) techniques, then the risk to the system increases. On a personal note, Paul had a meeting with the RiskMetrics™ team in the late 1990s with the goal of explaining to them the importance of extreme stock movements in the risk of portfolios. He and a student of his, Philip Hua, had recently developed a model for analyzing portfolios in anticipation of crashes – and cheekily called it CrashMetrics®. J.P. Morgan didn't seem to care. CrashMetrics didn't involve measuring any correlations or volatilities, so was obviously not going to be of interest to them.

The Edge of Chaos

One of the main advantages of using hard numbers to measure risk is that it is supposed to make decisions scientific and objective. But clearly, if a trader can adjust his VaR calculation in order to please his boss, something strange is going on with the mathematics itself. The process looks objective, but is actually subjective.

The reason for this flexibility can be traced back to the abovementioned fact that portfolio theory is based on the idea that price changes follow a normal distribution, with a stable and easily measured standard deviation. Real price data tend to follow something closer to a power-law distribution, and are characterized by extreme events and bursts of intense volatility, which as discussed earlier are typical of complex systems that are operating in a state known as self-organized criticality. This is also sometimes called the "edge of chaos," because such systems aren't fully random, or perfectly ordered, but instead operate in the interesting space between those extremes. In the case of financial data, one technical implication is that the measured volatility depends on the particular time period over which it is measured. Leave out those awkward moments like Black Monday and you get a very different result. Which can be convenient, if the aim is to tell a story.
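
A toy illustration of how fragile the measurement is: draw daily returns from a fat-tailed distribution (a Student-t with three degrees of freedom, standing in here for the power-law-like behavior of real returns), then see how much the measured volatility changes when just a handful of the most extreme days are left out.

    import numpy as np

    rng = np.random.default_rng(42)
    returns = 0.01 * rng.standard_t(df=3, size=2500)    # roughly 10 years of fat-tailed daily returns

    vol_all = returns.std(ddof=1) * np.sqrt(252)

    # Drop the five most extreme days (the "awkward moments").
    keep = np.argsort(np.abs(returns))[:-5]
    vol_trimmed = returns[keep].std(ddof=1) * np.sqrt(252)

    print(f"volatility, all days:        {vol_all:.1%}")
    print(f"volatility, 5 days removed:  {vol_trimmed:.1%}")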

Now, we have nothing against a degree of chaos, in moderation. A plot of the human heartbeat, for example, has chaotic qualities, though an overly rough or erratic pulse is a symptom of a heart condition known as atrial fibrillation. However, these properties are a powerful reminder that we are dealing with a complex, living system, rather than a deterministic, mechanical one. They also undercut the picture of calm rationality projected by MPT. Instead of operating on the efficient frontier, we are operating at the edge of chaos, which somehow doesn't have quite the same reassuring ring to it.

Theories such as MPT or VaR fail just when you need them most, in the moments when apparent stability breaks down to reveal the powerful forces beneath. The reason is that they model the financial system in terms of random perturbations to an underlying equilibrium, and can't handle the inherent wildness of markets, where storms can come out of nowhere. In particular, as discussed later, they ignore the nonlinear dynamics of money, contagion between institutions due to network effects, and the bad things that happen when credit suddenly dries up. "No investment strategy based on mainstream finance theory can… protect investors from market-wide crashes," according to a recent study by the CFA Institute.13 In other words, as risk-management techniques go, they aren't much good at managing risk – and in fact can create risks of their own. But that doesn't stop them from being taught in every business school. In the next chapter, we'll look at how probability theory was used not just to try to manage risk, but to eliminate it altogether – and how risk responded by mutating into new, and even more virulent, forms.
