CHAPTER 7
The Rewrite

“The truth is, the Science of Nature has been already too long made only a work of the Brain and the Fancy: It is now high time that it should return to the plainness and soundness of Observations on material and obvious things.”

—Robert Hooke, Micrographia (1665)

“During emission testing, the vehicles' ECM ran software which produced compliant emission results under an ECM calibration that VW referred to as the ‘dyno calibration’… at all other times during normal vehicle operation, the ‘switch’ was activated and the vehicle ECM software ran a separate ‘road calibration’ which reduces the effectiveness of the emission control system.”

—The US Environmental Protection Agency explains how Volkswagen calibrated the electronic control module (ECM) in its diesel cars to pass emission tests.

Quants put values on esoteric financial products by using sophisticated mathematical models to simulate their behavior. Once a suitable model has been selected, it is first calibrated or tuned to existing data. This typically involves modifying various settings within the model, a process which is rather like adjusting the control surfaces of a model airplane, or tweaking the storyline of a screenplay after a screening. The model is then ready for launch. The quant can let it go and stand back to admire its performance. But what if, instead of working as expected, it veers off course and crashes? This chapter looks at the process of calibration, and shows that model tuning is often as much about fixing appearances, or rewriting reality, as it is about performance.

Our focus here will be on one particularly worrying aspect of quant finance modeling. It's not that this is the worst problem in quant finance; it's just one out of many topics we could have addressed. But we pick on this one because it not only illustrates a confusion over modeling in finance, but also sheds light on how quants think, how regulators think, and on how similar, and yet how different, finance and proper science really are.

Let's suppose that your job as a quant is to value an up-and-out call option on the stock of a particular company called XYZ. As discussed in Chapter 5, this is like a regular call option, with the difference that if the stock rises so far as to hit some pre-set trigger level any time before expiration, then it “knocks out” and becomes worthless. This feature makes it a little cheaper, but also makes it very sensitive to volatility, since a volatile stock is more likely to exceed the trigger level.

The straightforward way to estimate volatility is to get a time series of past XYZ stock prices, and analyze these statistically in order to quantify the variability in the numbers. The statistics can be as simple or as complicated as you like – but whatever technique you use will have one fundamental problem, namely how do you know that the future is going to be like the past? The volatility you've just estimated is a number from the past. The future may be completely different. And it's the future value of volatility you need to know; since the contract expires in the future, its value depends on how much the underlying asset moves around from now until expiry.
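The straightforward approach can be sketched in a few lines: the standard estimator annualizes the standard deviation of daily log returns. The closing prices below are made-up numbers for the hypothetical stock XYZ.

```python
import math
import statistics

def historical_vol(prices, periods_per_year=252):
    """Annualized volatility from a price series: the standard deviation
    of daily log returns, scaled up from daily to yearly."""
    log_returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    return statistics.stdev(log_returns) * math.sqrt(periods_per_year)

# Made-up daily closes for the hypothetical stock XYZ
closes = [100.0, 101.5, 99.8, 102.2, 101.0, 103.1, 102.4]
print(f"annualized historical vol: {historical_vol(closes):.1%}")
```

However simple or sophisticated the estimator, the output is still a number computed entirely from the past.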

Another way to approach this problem is to try and infer the future volatility from market prices of simpler derivatives, the calls and puts which are traded in large volumes. These vanilla contracts also depend on estimates of volatility; and these are volatilities over the future, precisely where we need them. However, the prices depart from what you would calculate using Black–Scholes, because – as discussed above – traders adjust them to better account for things like extreme events, and because like everything else these options are subject to market opinion and the forces of supply and demand. One way to interpret this is to say that the model is wrong, or that the model is right and the traders are wrong. But another way, if you believe that markets are efficient, is to say that the prices of these contracts – which concern the future – are telling us something important about volatility in the future. There's information in them thar options! And just as we can use the Black–Scholes model to calculate the price of a vanilla option based on a known volatility, so we should be able to go the other way and infer the future volatility by knowing the price. Or if that fails, at least our estimate will be consistent with the other options being traded. It is a financial version of Auto-Tune, the audio processor which corrects singers who sound a bit off so they harmonize perfectly with the rest of the band (here again we see the role of models as a coordination device).

So, returning to our example, we still have to value our complex derivative – for which we need the volatility of the XYZ share price. Fortunately, there is a plain vanilla option on XYZ trading on an exchange for $10. We ask the question: “What value of volatility must be used in our derivatives-valuation model so that it gives an answer of $10 for this basic vanilla contract?” Suppose that the answer was simply that we need a volatility of 0.2, usually written as 20%. This is the implied volatility. Now we are all set to value the more complicated up-and-out call, we just use the same 20% value to calibrate our model for that contract. Job done. Or is it?
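Backing out an implied volatility is just one-dimensional root-finding: search for the volatility at which the Black–Scholes formula reproduces the quoted price. A minimal sketch follows; the spot, strike, rate, and expiry are made-up contract terms for XYZ, not figures from the text.

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=3.0):
    """Bisect on volatility until the model price matches the market price.
    Works because the call price is monotonically increasing in volatility."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Made-up terms: a one-year at-the-money call on XYZ quoted at $10
iv = implied_vol(price=10.0, S=100.0, K=100.0, T=1.0, r=0.02)
print(f"implied volatility: {iv:.1%}")
```

Running the formula "backwards" like this is the whole of implied volatility: no new information is created, the market price is simply re-expressed in volatility units.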

Blowing Smoke

Calibration is an example of what are called in mathematical circles “inverse problems.” In most physical problems, you are trying to figure out from a model how something might behave in the future. Weather forecasting would be a good example. But sometimes you want to go backwards. This would be like trying to figure out what the weather was last week, knowing what it is today. Solving such problems can be easy, as when you infer the stiffness of a spring from experiments with weights, or they can be like driving backwards down a highway using only the rearview mirror – the problems may be larger than they appear.

As an example: you walk into a classroom filled with students, the air is dense with smoke… who is the guilty smoker? Given the distribution of smoke in the room, can you go back mathematically to figure out the source of the smoke, the cigarette?

Smoke concentration obeys the laws of diffusion. This is relatively simple second-year undergraduate mathematics. An undergraduate exam question might ask about the distribution of smoke given the position of its source. But we are not asking that, we are asking the inverse: we want to know the source given its distribution. Superficially similar, they are actually very different. In fact, the smoke problem is what mathematicians call “ill posed,” meaning that the slightest disturbance to the distribution could make the backwards calculation impossible. The information has effectively been blurred out.
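The ill-posedness can be seen without solving any equations, just by looking at what diffusion does to individual Fourier modes of the smoke concentration. A short numerical illustration, with the diffusivity and elapsed time in arbitrary made-up units:

```python
import math

D, t = 1.0, 1.0  # diffusivity and elapsed time, in arbitrary units

# Under diffusion, a spatial Fourier mode sin(k*x) of the concentration
# decays by the factor exp(-D * k**2 * t). Running the clock backwards
# multiplies the mode by exp(+D * k**2 * t) instead, so any measurement
# noise in the fine-grained (high-k) modes is amplified astronomically.
for k in (1, 5, 10):
    damping = math.exp(-D * k * k * t)
    print(f"mode k={k:2d}: forward damping {damping:.2e}, "
          f"backward amplification {1.0 / damping:.2e}")
```

A one-percent measurement error in the k = 10 mode, pushed back through the diffusion, swamps the answer entirely; this is what it means to say the information has been blurred out.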

Talking of blurring reality, fans of CSI Miami will remember the episode in which the clue to the identity of a murderer was on a piece of fabric torn from the sail of a yacht, but the fabric had become damp and the ink or mark on the fabric had become diffused. H took the fabric back to CSI headquarters, and, using their clever computer wizardry, undiffused the writing. Well, there's a reason why the verb “to undiffuse” doesn't exist,1 and that's because IT IS IMPOSSIBLE, H. Our faith in CSI was destroyed at that very moment.2

Calibration in finance shares some of the problems of the diffusion problem. As described in Chapter 2, share prices can be modeled as diffusing in time as they are jostled around by random currents, rather like a particle of smoke. Option prices tell you something about what traders think the smoke pattern will look like after a certain time. The Black–Scholes model relies for its accuracy on a single key parameter, the volatility, which is assumed to sum up everything you need to know about a security's behavior. So, if the model were an accurate description of reality, then the inverse problem for any option on a single underlying would also always yield a single number. But as mentioned in Chapter 5, the implied volatility tends to vary with factors such as the strike price and exercise time, so that 20% volatility we calculated for one XYZ option might be 25% for another. And to fit the range of option prices with the model, it turns out (see Box 7.1) that we need to assume the volatility depends both on time and the security's current value. The result of the calibration is not a single constant number, but a lookup table. Furthermore, the values in the table are very sensitive, and jump around in a way that does not look natural. In a sense, the complexity of the real world has snuck back into the model by transforming volatility from a single number into something much more complicated.

This isn't the end of the world, as there are mathematical ways of “regularizing” or tidying things up – though it is certainly a clue that all is not well. Of more importance is whether calibration in finance has any grounds for justification – or is it just something that quants do because they can, not because it works? Information is the key. Just how much real information about the future is contained in today's option prices? Is it a lot? Do markets know the future? Or nothing? Traders have to trade, and that leads to prices, even if they have no clue what's going on.

Going backwards with the smoke problem is difficult, but at least it can be justified. Financial calibration cannot easily be justified – in fact, as seen next, it is potentially dangerous.

Calibrating the Crystal Ball

The complexity of this model meant there were now no nice formulas for the values of options. But that is no problem for the gifted mathematicians and computer scientists writing the code for our volatility model. More importantly, let's do a sanity check. Does it really make sense that future volatility – the amount of variability in a share price – is a function of asset price and time? Remember, this volatility is backed out from the prices of traded options. So does this mean that traders have access to a crystal ball that tells them the future of volatility? Do the market prices know about the next major earthquake, its date, location, and strength? Do options on the shares of agricultural companies, or ice-cream vendors, or umbrella makers, contain knowledge about next year's rainfall, when weather forecasters struggle with next week?

More subtly, note that the volatility that is backed out is given for all asset prices. For example, the calibration might say that volatility will be 23% in six months' time if the underlying share is $68, or 21% if it is $77, or 20.3% if the share is $83. But the calibration never tells us what the share price itself will be. Hang on a minute: your crystal ball can tell us what the volatility will be for all asset prices. Well, wouldn't it be better if it could instead tell us what the asset price will be in six months? Forget volatility. With that crystal ball we can make serious money.

Common sense says that this assumption of a non-constant volatility that we can somehow predict is extremely unrealistic. But that's just common sense. Can we show this scientifically?

There are two ways we could try to confirm that this calibrated model isn't going to work. The first is to wait six months and measure volatility on that date. Using the same numbers as above, if the asset price happens to be $68, then is volatility 23% as predicted, etc.? The problem is that measuring volatility on a specific day is itself a tricky statistical problem, because volatility is defined in terms of an average fluctuation over a reasonably long time period, not just one day. Also, we have to wait a frustrating six months.

A much, much easier method is to do the calibration today, then come back in one week and recalibrate (i.e., use the new market prices of traded options to back out the volatility function), compare the new function with last week's, and check if they are the same. Using the same numbers, does the new function still say that the volatility will be 23% if the asset is $68 in six months less one week? Note that we aren't asking whether the forecast volatility is actually correct. No, we are asking the simpler question of whether the forecast is stable.3
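The stability check itself is trivial to automate: take two calibrated lookup tables a week apart and compare them at shared grid points. A sketch with made-up numbers, where each table maps a future asset price and horizon to a calibrated volatility:

```python
# Hypothetical calibrated lookup tables: (asset price, months ahead) -> volatility
last_week = {(68, 6): 0.230, (77, 6): 0.210, (83, 6): 0.203}
this_week = {(68, 6): 0.260, (77, 6): 0.190, (83, 6): 0.215}  # same grid, new option prices

def max_drift(old, new):
    """Worst-case change between two calibrations at shared grid points."""
    return max(abs(new[p] - old[p]) for p in old.keys() & new.keys())

# A stable, trustworthy calibration would show a drift near zero.
print(f"worst-case drift between calibrations: {max_drift(last_week, this_week):.3f}")
```

Note again that this tests nothing about whether the forecast is right, only whether the model contradicts itself from one week to the next.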

The answer is almost certainly that no, they are not the same. The new forecast is different from the old forecast. This is a game you can all play at home, but using weather forecasts. Look at the weather forecast for a specific location in one week. Come back two days later and see what the forecast is now, for the same place and date. (Use a location like England, not Fuerteventura where the weather is always the same.) The forecast usually changes. A day later it may have changed again. And this is for the same date and place. You don't even have to wait until next week to see what the weather turns out to be. What does this make you think? The obvious conclusion is that forecasting isn't much good. To be fair, at least we know that weather forecasting isn't that accurate, and we've come to expect forecasts to change. In quant finance the calibrated function is assumed not to change, and a lot of money is bet on that assumption.

Is this what quants think when they do this recalibration? Surprisingly not. If this were a model of a physical process, most scientists would say it's time to go back to the drawing board. Not so in finance. The real purpose of calibration, it seems, is to fix the appearances of the model, and provide what looks like a mathematically consistent story.

In any case, this is where things start to get interesting. So far it's all been mathematics and models. Now we have to understand how the quant thinks and what motivates him, not to mention the thoughts and motivations of his bosses.

Sources of Confusion

The first point is that the average quant is sadly confused about a number of issues, such as randomness. Which is surprising, to say the least. The basic models that quants use assume that share prices are random. They also have models for random interest rates, random everything. There comes a point where they forget what's modeled as random and what's assumed to be fixed.

In the volatility calibration described above we had only one quantity as random, which was the underlying share price. Everything else was fixed. But fixed doesn't necessarily mean constant. We had a volatility function that depended on asset price and time, but it was meant to be a function that didn't change. Confusing? How can something be time dependent but not changing? Easy. Think of the TV guide. There you'll see that what's on the TV is time dependent: one hour there's a chat show, next there's a comedy, then a movie, and so on. But the schedule is fixed. Imagine sitting down to watch The Third Man and Dumb and Dumber comes on. That's a rescheduling, in finance a recalibration. In finance it is considered okay, in the world of home entertainment less so.

It is very common to hear quants say that because they always recalibrate it means that the model is always right. Yes, it does mean that the model for one fleeting instant gives the appearance that it gets traded option values right. But it's in appearance only. If you ever recalibrate it means either the model was wrong before, is wrong now, or was wrong both times. And if you happily recalibrate without a second's thought then we have to conclude that it's the last – i.e., the model is always wrong. It is like having to recalibrate a weighing scale every time you use it, instead of just once. Maybe the scale is broken.

Another relevant issue, which affects many quantitative finance models, is the question of price vs. value. The quant is called upon to find a value, a theoretical value, for new products. This value depends on the model. But it's not the same as the price. At least for his sake we hope not. No, the price that a contract is sold for ought to be higher than the theoretical value, because that represents profit. Yet traded prices are used for calibration, and you have no idea how much of that price represents the value that is really needed for the calibration.

Here's a simple example of this. Your car is worth $20k. Your annual insurance premium is $1k. Let's suppose this insurance is only for crashing, not third party, etc. The quant would deduce from this that the probability of crashing in one year was 1 in 20, or 5%. He would completely miss the point that the $1k premium includes a substantial profit margin for the insurance company. And barring loss leaders, etc., the probability of crashing for the average person in this situation would be a lot less than 5%. That's price vs. value for you – and another source of model error, and quant confusion. Ironically, the assumption of no arbitrage creates another opportunity for arbitrage (see Box 7.2).
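The arithmetic of the insurance example makes the point. The 40% margin below is a made-up figure, standing in for whatever profit and overhead the insurer loads onto the premium:

```python
car_value, premium = 20_000, 1_000

# Naive reading: treat the quoted price as the fair value,
# and back out an implied annual crash probability.
implied_p = premium / car_value
print(f"implied crash probability: {implied_p:.1%}")

# More realistic reading: some fraction of the premium is profit
# margin and overhead, so the actuarially fair value is lower.
margin_fraction = 0.40  # made-up figure
fair_p = premium * (1 - margin_fraction) / car_value
print(f"probability net of margin:  {fair_p:.1%}")
```

Calibrating to the quoted $1k would overstate the crash probability, just as calibrating to traded option prices bakes the dealers' margins into the "implied" volatility.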

David once worked in a small firm, listed on the stock market, which at its most minimalist point had only four full-time staff (not including part-time board members, etc.). But at the same time there were two fairly active online chat forums discussing the company. So it was possible to get a sense of how much information investors had about what was actually going on. The share price was very volatile, and would occasionally spike or fall because of some news announcement, or a sudden change in sentiment on the part of investors, or an obvious attempt to ramp or manipulate the share price; but in nearly all cases the relationship between the stock quote and what was actually going on at the company – i.e., between price and information – was tenuous or just wrong.

Here's one final example to illustrate just how much information is contained in a market price. Or rather, how little. Do you remember when oil first hit $100 a barrel? It was the beginning of 2008. Do you remember why? What were the economic circumstances? Okay, well make a guess. Something about the situation in the Middle East? Maybe. Demand in China? Could be. Actually no. It was reported that a lone trader bought 1000 barrels, and immediately sold them, at a $600 loss. His goal? To be able to tell his grandchildren that he was the person who first paid $100 a barrel. And how much information is contained here? Not a lot about oil, or the Middle East, or China, but quite a lot about one trader and his family.

Model Risk

Because the model's theoretical output is briefly the same as the market's option prices, people are fooled into thinking that the model is right. And in being right, there's no risk in the valuation. Unfortunately, this could not be further from the truth.

In quantitative finance there's always a question about the accuracy of models. This is termed “model risk.” There are many, many forms of risk, all of which the responsible bank will try to measure and if necessary reduce. However, if we are constantly recalibrating it means that we never get to see the risk in the volatility model. Certainly it is possible to see how bad the model is by seeing how much our table of future volatilities changes at each recalibration. But there's not much incentive to do this, as we'll see later. Worse, there are some models that can go straight to valuation of derivatives without ever going through the step of formally calculating the calibrated quantities.4 This means that you never get to see the model error, it remains hidden somewhere in the bowels of the model.

As discussed further in the next chapter, one reason why mathematical modelers in general prefer to use simple models is because the assumptions and their associated risks are more transparent. One of the great appeals of the Black–Scholes model is its parsimony. The tendency of an asset price to fluctuate is summed up by a single number, the volatility. But when we attempt to back out volatility from market data, and use it to value a derivative, the single number is replaced by a lookup table that mutates with time. It is no longer correct to say that the volatility is a single parameter – it is a whole series of separate parameters, which apply for different prices and dates. A simple concept – an asset's volatility – has been transformed into something that is highly complex, and the model risk has become intractable.

This goes to the heart of the danger that derivatives can pose to the financial system if used incorrectly. Engineers can build a complex system like an airplane out of hundreds of thousands of parts, because they understand the rules that describe the behavior of the parts within a certain regime, and they make sure that those parts operate within that regime. The motor that actuates the rudder is designed to be able to withstand the forces that it will experience; the landing gear can support the stress of a forced landing; the flaps can handle the stress when extended; and so on. As a result, the airplane responds in a predictable fashion to its controls. But financial derivatives, and therefore much of the financial system, are cobbled together from components such as implied volatility, which are highly unstable and unreliable – so you can bet the whole is as well.

Flying Blind

It might seem that these problems are reasonably obvious, and it is true that the more sophisticated banker is aware of them (though such people are sadly not as common as you'd expect for such a highly paid job). However, for institutional reasons his main aim is not to debunk the model – after all, there is no bonus for that. Instead, it is to justify the use of the calibrated model, to himself, to his boss, to risk managers, regulators, and investors. There are strong incentives to go with the flow. Two main justifications are commonly used.

The first is that the method may not be perfect, but it is always possible to hedge the derivatives using exchange-traded vanillas, which mitigates the risk. This isn't too bad a justification – as long as it's right. Unfortunately, it's not only hard to estimate the model risk from this sort of hedging, it's also something that people take on faith, and they rarely try to estimate the remaining model error in practice. (It's also a bit like saying we are using scales to weigh something rather than a spring. Scales will work whatever the force of gravity, because they measure weight directly against known quantities rather than indirectly via a spring. This may be the perfect justification, but can we have some more research on this please?)

The second, more scary but very common, justification is: what else can we do? The banker says: “We need to trade, we need a model, this is what we've got, there's nothing better, we use it.” Leaving aside the question of whether there is a better model, this justification makes you wonder about the morals here. Is it true that they “need” to trade? Isn't there the option of trading simpler products? It could even be counterproductive. If you want to trade but a risk manager says there's too much risk, that you can't, well, there goes your fee. Trading is much easier if you don't know the risks involved. Don't ever forget it's OPM – other people's money.

Without an understanding of model risk, the financial system is flying blind, the controls are not responding as expected, and we are headed for a crash. In the next chapter, we consider the fundamental cause of model risk, of which calibration problems are just a symptom – namely the category error of treating a human system as a mechanical one.

Notes
