CHAPTER 4
Market Makers

“There is a theory which states that if ever anybody discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable. There is another theory which states that this has already happened.”

—Douglas Adams, The Original Hitchhiker Radio Scripts

“Money is, to most people, a serious thing. They expect financial architecture to reflect this quality – to be somber and serious, never light or frivolous. The same, it may be added, is true of bankers. Doctors, though life itself is in their hand, may be amusing. In Decline and Fall Evelyn Waugh even has one who is deeply inebriated. A funny banker is inconceivable. Not even Waugh could make plausible a drunken banker.”

—John Kenneth Galbraith, Money: Whence It Came, Where It Went

One of the most basic of financial instruments is the option. This is a contract or agreement which gives you the option (but not the obligation) to buy or sell something in the future at a certain price. Even a coin can be considered as an option to purchase government services, or pay taxes – if we don't want to keep the option, then we can melt the coin down, which people sometimes do when the value of the metal exceeds the value of the option. Despite the fact that options have been around for millennia, it was only in the 1970s that traders began to use mathematical models to price them. In this chapter, we show how mathematicians developed formulas for valuing options – and in doing so completely changed the market for them.

“And in the financial markets today the Dow Jones rose by 123 points, a positive note on which to end an otherwise disappointing week. Yields on government bonds fell… Meanwhile, thanks to instability in the Middle East, the price of a barrel of crude oil rose to…” Something along these lines is commonly heard on TV news reports. However, it's only news of the simplest financial instruments that gets the publicity. You don't hear so much about the more complex financial products, the ones that only the mathematicians understand, the ones that add up to quadrillions of dollars. (A quadrillion, again, is a one followed by fifteen zeros. It's a thousand times a trillion, which is itself a thousand times a billion, which in turn is a thousand million.) These are amounts, in other words, that would put John Law to shame. After all, in a world of quadrillions, who wants to be a millionaire?

The simplest instruments are shares, indices, bonds, and futures. You can buy shares in individual companies. Indices, such as the Dow Jones, are the values of baskets of representative or important assets. Bonds are just loans, to governments or companies, giving you a fixed amount at a set date in the future. Yields are the interest rates that these bonds are effectively paying. And then there are the commodity futures, such as oil, whereby you promise to pay a set amount to receive the oil at a set date in the future.

And that's all you need to know, in one snappy paragraph, if you want to build your own portfolio using the basic asset classes. You don't need much mathematics, just a gut feel for what's going up and what's going down, or a dart and a copy of the Wall Street Journal. We could have expanded on the details, given you lots of examples, but there are enough such books around already. No, we are going to shift gear and introduce you to the more complicated financial contracts. These are the ones that stay under the radar, and that's not easy for a quadrillion dollars. Did we say that they're the ones that only the mathematicians understand? Well, maybe we were being a bit optimistic; even they have problems, as we'll see.

Options

We quite fancy an electric car. A really swish one, not a Prius, no, something a lot more glam. There's a company starting up that reckons it has new battery technology and a great design team (Italian, of course). But they haven't got the cash for development. To raise the cash, they have this deal going in which you give them $10,000 now, and they promise that when (if?) they produce the car you will be able to buy it at a cost of $40,000. This deal is just an option, an option to buy the car in the future.

In essence, buying the option is equivalent to making a bet on the future value of this electric car. Your downside is limited to the upfront premium, the $10k. If the car doesn't get made then you've lost the premium. You've also lost the premium if, when the car is finally unveiled, it turns out to cost only $39,995. After all, why pay the $40k when its showroom cost is less? But if the car is priced at $95,000 then you are laughing. It's only costing you $40,000, plus the $10,000 premium, meaning that if you decide to buy and then immediately sell the car there's a $45,000 profit, assuming you don't crash it on the way out of the lot. In fact, if the car costs more than $50,000 ($10k + $40k) you've made a profit; anything less and you've made a loss.
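For readers who like to see the arithmetic spelled out, here is the car deal as a few lines of Python. It is a minimal sketch using the numbers from the example above; the breakeven at $50,000 is simply the strike plus the premium.

```python
# The car option from the text: pay a $10,000 premium now for the right to buy at $40,000.
def call_profit(final_price, strike=40_000, premium=10_000):
    """Net profit from holding the option: exercise only if the car is worth more than the strike."""
    payoff = max(final_price - strike, 0)   # value of the right to buy at the strike price
    return payoff - premium                 # minus what we paid up front

for price in (39_995, 50_000, 95_000):
    print(f"car priced at ${price:,}: profit ${call_profit(price):,}")
# car priced at $39,995: profit $-10,000  (don't exercise; lose the premium)
# car priced at $50,000: profit $0        (the breakeven point)
# car priced at $95,000: profit $45,000   (exercise, then sell the car)
```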

Financial options work in a similar way. A call option is a contract that allows you to buy, at some date in the future, a specific share for a price set now. This is the same as the electric-car example above, except for some details. First, the call option has a set date on which you must make the decision whether or not to buy the share. Second, the person who sells you the financial option may have nothing to do with the company whose shares the call option is based on.

Before proceeding further, let's get some jargon out of the way. The strike or exercise price is the price at which you are allowed to buy the asset (the $40,000 in the example). The expiration is the date on which you have to exercise the option, if you so wish. The premium is the amount you pay upfront for the right to buy the share (the $10,000), and the underlying asset is the share on which the option is based.

There's another type of option that is very popular, known as the put option. This contract allows you to sell (rather than buy) a share for a specified price. To understand this contract you really need to understand how these option contracts are used and by whom.

What are Options for?

Call options are easy to understand. You would buy one if you think that the asset is going to rise by the expiration date, but you don't want to buy the asset itself just in case you are catastrophically wrong. The underlying share will cost you a lot more to buy than the call option's premium, so there's a lot less downside with the call option. This also means that there's more leverage. If the asset does rise significantly then your return, in percentage terms, will be that much greater with the call. The downside to the option is that if the asset doesn't move up much then you will have lost out.

Put options are a bit trickier. It's probably easiest if you imagine holding shares in XYZ, but are worried that there might be a fall in their value. You could sell the shares, but if you turn out to be wrong and instead the shares rise then you will have missed all that upside. Regret is a terrible thing. So you buy a put option which gives you the right to sell the shares for a set price. If the stock does fall then your downside is limited; if the stock falls from $50 to $10, but you have a put with a strike of $40, then you can sell the stock for $40 rather than for the $10 you'd get in the market. And who do you sell the shares to? Why, the person who sold you the put option, known as the writer.

With put options you don't even need to own the shares in the first place to buy this protection. In which case “protection” is totally the wrong word to use. Buying a put option without owning the underlying asset is then a way of betting on the share price falling, something which is otherwise not so simple.
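The put payoff is just the call's mirror image. Here it is in the same sketch form, using the XYZ numbers above; the premium shown is purely illustrative, since the example doesn't specify one.

```python
# A put is the right to SELL at the strike, so it pays off when the stock falls below it.
def put_profit(final_price, strike=40.0, premium=3.0):
    """Net profit from holding a put option (the premium here is a made-up figure)."""
    payoff = max(strike - final_price, 0.0)  # worth something only below the strike
    return payoff - premium

# XYZ falls from $50 to $10: the right to sell at $40 is worth $30, minus the premium.
print(put_profit(10.0))   # 27.0
# XYZ rises instead: the put expires worthless and all you lose is the premium.
print(put_profit(55.0))   # -3.0
```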

Options can therefore be used either for speculating on share prices, if you have a view on the direction, or for insurance, if your aim is to protect a portfolio.

Options have been around for a while. In Politics, Aristotle describes how the Greek philosopher Thales predicted, on the basis of astrology, that the coming olive harvest would be much larger than usual, so arranged an option with local olive pressers to guarantee the use of their presses at the usual rate. “Then the time of the olive-harvest came, and as there was a sudden and simultaneous demand for oil-presses he hired them out at any price he liked to ask. He made a lot of money, and so demonstrated that it is easy for philosophers to become rich, if they want to; but that is not their object in life.”1

In the 17th century, options were being sold at stock exchanges in financial centers including Amsterdam and London. However, they were generally viewed as a disreputable way of gambling on stock price movements, and regulators attempted to ban them from time to time. In the United States, they came close to being outlawed after the crash of 1929, and even in the 1960s were only traded on an ad hoc basis in a small New York market.2 But their unpopularity was due not just to their lack of respectability, but also to the fact that no one prior to 1973 knew how to price them. What is the correct premium to sell these options for?

Before explaining further, we ought to point out that such niceties didn't completely stop people trading options. Oh no. The traders didn't say “Sorry, we can't sell you that option because we don't yet have a sound theoretical foundation for our valuation. I know you really want to buy it, we really want to sell it to you, and we are both over twenty-one. But until we get the green light from the boffins…” For example, an ad hoc approach would be to sketch out the probabilities of a few different scenarios, and base the price on the average payoff over those scenarios. If you're unsure of your estimates you can always add a hefty profit margin before you sell the thing. Or you can look at what other people are charging. Or you can trade in small quantities, so a big mispricing doesn't lead to a big disaster. Or perhaps you can diversify, by trading options on many different stocks. But as we'll see, it was only when a model was discovered (or rediscovered) that options hit the big time.

Bachelier's Return

As discussed in the previous chapter, the first to apply formal mathematical theory to options pricing, at the very start of the 20th century, was the French mathematician Louis Bachelier. His random walk model described the behavior of a stock's price based only on its initial price, and the amount of randomness or standard deviation (Bachelier referred to it as the “nervousness” of the stock).3 From these assumptions, Bachelier derived the correct or fair price for an option, which accurately reflected the odds of it paying off. Problem (almost) solved! Unfortunately his thesis remained filed away for the next 60 years, until the economist Paul Samuelson found a copy “rotting in the library of the University of Paris” while chasing a reference for a friend. He found it so interesting that he arranged for a translation, which was published in Paul Cootner's 1964 book of finance papers (the one which included a paper by Mandelbrot, see Chapter 2).

As Samuelson later told the BBC: “After the discovery of Bachelier's work there suddenly came to the mind of all the eager workers the notion of what the Holy Grail was. There was the next step needed. It was to get the perfect formula to evaluate and to price options.”4 There were a couple of problems with Bachelier's model. For example, it allowed an asset's price to go negative – it didn't matter where it started, it could still random walk all the way down to zero and just keep going. This was corrected when Samuelson and the physicist M.F.M. Osborne suggested that it would make more sense to work with logarithms of prices. Logarithmic charts are often used in finance because they give a more realistic picture of price changes. For example, if a share price grows exponentially by 6% on average each year, then after 20 years it will have more than tripled from its initial value, and on an ordinary linear plot the recent fluctuations will seem disproportionately large. A logarithmic plot, in contrast, will show growth as a straight line, so recent fluctuations will have the same scale as those from earlier in the series.

The plot therefore conveys how large a change is relative to the current state, which is what we usually care about. Osborne quoted as support the Weber–Fechner law from psychology: “equal ratios of physical stimulus, for example, of sound frequency in vibrations/second, or of light or sound intensity in watts per unit area, correspond to equal intervals of subjective sensation, such as pitch, brightness, or noise.” A similar point was made by the mathematician Daniel Bernoulli in the 18th century when discussing the psychological effect of different rewards. Our reaction to stimuli such as noise depends not on the absolute change, in terms of decibels, but on relative change, and the same is true of stocks.

Economists therefore tweaked Bachelier's model by simply adapting it for logarithms of asset prices. In this model the asset prices themselves could never become negative, because negative logarithms still correspond to positive prices. This so-called lognormal random walk had the daily stock price return determined by a sophisticated version of dice rolling. The standard deviation could be estimated from the past variability of the stock.
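Here is a sketch of such a lognormal random walk in Python. The drift of 6% and volatility of 20% a year are illustrative values only, not estimates for any real stock; the point is simply that the logarithm of the price does the random walking, so the price itself can never go negative.

```python
# Simulate a lognormal random walk: daily log-returns are drawn from a normal distribution,
# a "sophisticated version of dice rolling." Drift and volatility are illustrative numbers.
import numpy as np

rng = np.random.default_rng(seed=1)

def lognormal_walk(s0=100.0, mu=0.06, sigma=0.20, years=20, steps_per_year=252):
    dt = 1.0 / steps_per_year
    n = years * steps_per_year
    # each day: a small drift plus a normally distributed shock, applied to log(price)
    log_returns = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    return s0 * np.exp(np.cumsum(log_returns))

path = lognormal_walk()
print(f"start $100.00, end ${path[-1]:,.2f}, lowest point ${path.min():,.2f} (never negative)")
# On a logarithmic axis this path wanders about a straight line, so early and late
# fluctuations appear on the same scale, which is the point made in the text above.
```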

This still left open the question of how to balance the risk and reward involved in purchasing the option, compared with just purchasing the stock, or for that matter holding the money risk free in cash. The parameters involved seemed impossible to estimate from empirical data. Bachelier had avoided the issue by assuming that expected profits were always zero, but that didn't seem very realistic. Probably the first person to crack this problem, and get within a gnat's whisker of a fully fledged option valuation theory, was the mathematician Ed Thorp.

The Ultimate Machine

Today, wearable technology is all the rage. In 1960, though, it was rather less common. So when a woman looked across a crowded room at an MIT professor called Ed Thorp, and noticed a wire dangling from his ear, she registered astonishment. It didn't help that they were in a Las Vegas casino, and Thorp was trying to beat the casino at roulette.

Thorp's partner was Claude Shannon, who is better known today as the father of information theory (he popularized the word “bit” for the 0s and 1s that make up computer language). Thorp had initially approached Shannon for advice on his research into blackjack. Shannon loved inventing machines, and his house was full of odd devices, such as automatons that could juggle, or toss coins. He didn't have a perpetual-motion machine, but he had something called the “ultimate machine,” which was in some sense the opposite. This consisted of a box with a lid and a single switch. When the switch was turned on, the lid would open and a hand would emerge and turn it off again. The science fiction writer Arthur C. Clarke saw it on Shannon's desk at Bell Labs and wrote: “There is something unspeakably sinister about a machine that does nothing – absolutely nothing – except switch itself off.”5 (This is somehow reminiscent of the efficient market theory, whose only prediction is that it cannot predict.)

The 17th-century French mathematician – and one of the founders of probability theory – Blaise Pascal may have invented both the mechanical calculator and an early version of the roulette wheel, but Shannon and Thorp were certainly the first to develop a toe-operated wearable computer that could predict the trajectory of a ball as it rolled around a roulette wheel. One person, whose shoes housed the computer, would give a toe tap when the wheel's zero passed a fixed point, another tap when it passed again. Wheel position and speed calibrated. Then, as the ball was sent in the opposite direction, two more taps to calibrate the ball position and speed. The computer would do its calculations and then transmit a tone to the earpiece of the person who was placing the bet (usually Thorp), telling him which octant of the wheel to bet on.

The project was plagued with technical difficulties, and was risky since being caught cheating at a casino, at a time when such venues were often run by organized crime, was likely to lead to a beating or worse. (It seems strange to think of two professors, one of whom was the father of information theory, involved in this kind of ruse – a bit like hearing that Newton and Law had joined forces to play a shell game on the streets of Paris.)

Blackjack was a somewhat safer bet.

Thorp's idea was that the odds favored the player at some times, and the dealer at others, depending on the composition of cards that were left in the deck. So by keeping track of which cards had already been dealt, the player would know when the odds were in his favor – so when to bet small, and when to go all in. The exact fraction of the bankroll to bet, as a function of the odds, was determined using a formula – known as the Kelly criterion – developed by one of Shannon's former colleagues, John Kelly Jr. from Bell Labs.6 In 1961, Kelly created one of the earliest demonstrations of computer speech synthesis, using an IBM computer to sing the song Daisy Bell. Arthur C. Clarke, on another of his visits to Bell Labs, witnessed the demonstration, and made it the swan song for the HAL computer in his novel and screenplay 2001: A Space Odyssey.
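In its simplest form the Kelly criterion is a one-line formula: for a bet paying b-to-1 that you win with probability p, stake the fraction p - (1 - p)/b of your bankroll, and nothing if that number is negative. The probabilities below are illustrative, not Thorp's actual blackjack edges.

```python
# The Kelly criterion in its simplest form: the optimal fraction of bankroll to stake.
def kelly_fraction(p, b):
    """p: probability of winning, b: payout odds (b-to-1). Never bet with a negative edge."""
    return max(p - (1 - p) / b, 0.0)

print(kelly_fraction(p=0.52, b=1.0))   # a 52% chance on an even-money bet: stake 4%
print(kelly_fraction(p=0.45, b=1.0))   # the odds are against you: stake nothing
```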

Thorp first published his research in an academic journal, but it was soon picked up by journalists. He was then contacted by a couple of high-rolling businessmen with an interest in gambling, who agreed to fund him to the tune of $10,000 to try out his method in Reno. The method worked, and Thorp managed to double his money after a few days. Even more successful, though, was his 1962 book Beat the Dealer, which sold several hundred thousand copies and disseminated his ideas to a wide audience. It also meant there was no need to travel to Las Vegas in person to play cards while wearing a disguise and dodging security (in one incident he was offered a free, but spiked, cup of coffee which nearly knocked him out).

The English humorist Douglas Adams joked about a theory “which states that if ever anybody discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable.” Of course, when Newton discovered the laws of gravity, the universe didn't suddenly change its rules just to annoy him. Casinos, however, do. After the publication of Thorp's book, they modified their procedures to make life much harder for card counters, for example by increasing the number of decks or the frequency of shuffles.

Thorp soon shifted his attention to a much larger casino. According to Fama, markets were efficient so could not be gamed. But as Thorp wrote, he “arrived on this scene with a unique perspective.” He had already demonstrated that “the blackjack ‘market’ was ‘inefficient’ ” and his work with Shannon showed that “the casino gambling ‘market’ had yet another ‘inefficiency.’ ” So, “by 1964 I began to consider the greatest gambling game of all time, the stock market. Whereas I thought of card counting in blackjack as a million dollar idea, my stock market explorations would lead to a hundred million dollar idea.”7 And just as card counting would change casinos, the mathematical ideas of Thorp and others would change the markets.

Beat the Market

Thorp attacked the problem by looking for empirical relationships between the current stock price and the price of a call option at a specified strike price. When the two are plotted against one another as in Figure 4.1, a basic feature of the curves is that the option price should never exceed the stock price, because otherwise it would make more sense just to buy the stock instead of the option. This defined an upper bound on the maximum option price, shown by the upper dotted line. Similarly, the option price should never fall below the difference between the stock price and the strike price, because if it did, then anyone could buy the option and exchange it for stock at a profit. (For example, if the stock price was $110, the strike price was $100, and the option price was $5, it would make sense to just buy the option, use it to purchase the stock at $100, and sell it at $110 for a quick profit of $5.) This minimum bound on price, shown by the lower dotted line, would be the actual value of the option at the expiry date.
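These two bounds can be written in one line: the call price C on a stock priced S with strike K must satisfy max(S - K, 0) ≤ C ≤ S. A quick sanity check, using the mispriced $5 option from the example above:

```python
# The no-arbitrage bounds on a call: max(S - K, 0) <= C <= S, otherwise free money appears.
def call_bound_violation(stock, strike, call):
    if call > stock:
        return "overpriced: just buy the stock instead of the option"
    if call < max(stock - strike, 0):
        return "underpriced: buy the call, exercise at the strike, sell the stock at once"
    return None  # within the bounds, no free lunch

# The example from the text: stock $110, strike $100, option going for only $5.
print(call_bound_violation(stock=110, strike=100, call=5))    # underpriced: pay 5 + 100, sell at 110
print(call_bound_violation(stock=110, strike=100, call=12))   # None: no obvious arbitrage
```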

Figure 4.1 Plot of theoretical call option price vs. stock price curves, for a strike price of $100. The upper and lower bounds are shown by dotted lines, and the empirical curves developed by Thorp and Kassouf are the solid lines between these extremes.

In practice, plots of option prices versus stock prices were somewhere between these two extremes. As the date moved closer to the expiration of the option, the price curve moved down toward the lower bound. Thorp used an equation first developed empirically by his collaborator, the economist Sheen Kassouf at Columbia, to define “normal price curves” that could then be used to identify pricing anomalies. If an option, when plotted in this way, appeared to be underpriced, they could buy the option and hedge the position by shorting the stock. More typically, they found the option was overpriced – investors were bullishly overestimating the probability that a stock would go up in price – so they would do the opposite: short the option and buy the stock. They didn't care much about the likely prospects for the underlying share, since they could profit if it went up, down, or sideways.

Either way the key idea was to hedge one contract, the option, with another, the stock. Since the option is a derivative of the stock (i.e., its value is derived from the stock), its value depends on the stock and the two are, at least theoretically, correlated with each other. See the example in Box 4.1.

Just as Thorp had used card counting to guide his betting at blackjack, so the discrepancy between theoretical and actual prices told them how much to bet in their hedging strategy. Thorp and Kassouf published their system in their 1967 book, Beat the Market. This method, which came to be known as convertible bond arbitrage, later spawned a number of copycat hedge funds.

Hedging your Bets

Thorp continued to puzzle over the relationship between option and stock prices, and to improve the hedging strategy, and he developed an equation that seemed to capture all of the relevant details. He later said: “I just happened to guess the right formula and put it to use some years before it was published. I was convinced it was right because all the tests that I applied to it worked. It did all the right things; it gave all the right values, and had all the right properties.”8

Thorp was using the formula for his hedge fund, making average gains of 20%+ a year, and didn't want to make too much of a song and dance about it. But it would later go on to be rather well known: according to one author, it might be “the most widely used formula, with embedded probabilities, in human history.”9 The reason that it is called the Black–Scholes equation, rather than the Thorp equation, is that the University of Chicago's Fischer Black and Myron Scholes, working with MIT's Robert C. Merton, came up with – and of course published – a convincing mathematical proof, based on the accepted economic principles of equilibrium, rationality, and efficiency.

The trick was a process known as “dynamic hedging,” which sounds like an advanced, and exhausting, gardening technique, but in finance actually refers to the practice of reducing or even removing risk by making trades whose risks cancel each other out as much as possible. It seems reasonable that the higher a share price is, the more valuable a call option will be. After all, the share is more likely to end up “in the money,” so there's a positive payoff. As the share rises in value, so does the call. As the share falls in value, so does the call. Here's a cunning idea: Why not buy a call option and simultaneously sell some stock, in such a ratio that as the stock moves about and the call moves about, this portfolio doesn't change in value?

In portfolio theory, the result of dynamic hedging is that option and stock portfolios collapse to that single point on the risk/reward diagram, the risk-free investment. But we are getting ahead of ourselves. Although Ed had found the right formula, and he knew a lot about hedging, he hadn't quite put two and two together to get 1.2 quadrillion.

Let's see an example of how hedging works. Suppose that our stock is priced at $100 and we do two things: buy a call option with a strike price of $100 and sell half a unit of stock (i.e., $50 worth) at the same time (even if we don't own the stock, we can still short it as discussed later). If the stock goes up to $101, the option pays $1, but we also lost out on $0.50 because we sold that stock before it appreciated. So the net outcome from our action is $0.50. In contrast, if the stock goes down to $99, the option pays zero, but here selling the depreciating stock saved $0.50. So the net outcome is the same, $0.50. In other words, buying the option allows us to make $0.50 with no risk. That means the price of the option has to be $0.50 as well, because if it weren't then it would open up an arbitrage opportunity (which in theory is not allowed). Note here that we haven't used the probability of a price change anywhere in the discussion. The price is still $0.50, whether we think the stock is going up in price or falling, and such independence is the point of hedging. The value does, however, depend on our assumption that the stock moves up or down $1. If instead it was up or down $2 we'd get a different answer. And that's why option values still do depend on a stock's range, its volatility, even if not its direction.
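Here is that two-outcome argument as code, with the hedge ratio computed rather than pulled out of thin air. It is the same toy setup as the paragraph above, with interest ignored; the ratio of one half appears because the option's payoff changes by $1 while the stock changes by $2.

```python
# The one-step hedging example from the text: stock at $100 moves to $101 or $99,
# we hold one call (strike $100) and have sold Delta shares. Interest is ignored.
def one_step_hedge(s0=100.0, up=101.0, down=99.0, strike=100.0):
    payoff_up = max(up - strike, 0.0)      # option worth $1 if the stock rises
    payoff_down = max(down - strike, 0.0)  # worth nothing if it falls
    # choose Delta so the hedged position is worth the same in both outcomes
    delta = (payoff_up - payoff_down) / (up - down)
    value_up = payoff_up - delta * (up - s0)      # option gain minus loss on the sold stock
    value_down = payoff_down - delta * (down - s0)
    assert abs(value_up - value_down) < 1e-12     # riskless: same result either way
    return delta, value_up                        # and that riskless value IS the option price

delta, price = one_step_hedge()
print(f"Delta = {delta}, option price = ${price}")   # Delta = 0.5, option price = $0.5
# No probability of an up or down move appears anywhere: only the size of the move matters.
```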

Ed Thorp knew about the idea of hedging the option with a short position in the stock, but he hadn't considered doing this dynamically. By “dynamically” we mean that every day, and at every stock move, we have to rebalance this portfolio by more buying or selling of the underlying asset to maintain the perfect hedge ratio. Technically, every day isn't fast enough. Hourly? Minute by minute? Still not often enough. In theory we really do mean continuously. In the jargon this perfect hedge ratio is called the “Delta.” In the example above, we sold half a unit of stock so Delta was 0.5.

It was Black, Scholes, and Merton who showed in an absolutely watertight mathematical framework that by maintaining this dynamic Delta hedge one could construct a portfolio that was entirely risk free. Its return should therefore be the same as a risk-free asset such as a bank account. In mathematical terms, this acted as a constraint on the equations and made it possible to solve for the option price. Dynamic hedging also pointed to a way for banks to construct any kind of option and make money from it. They could sell an option to a client with a built-in profit margin, and perform dynamic hedging so they carried no risk themselves.
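To see why the dynamic version works, here is a small simulation of the recipe: sell a call, pocket the premium, and hold Delta shares at every step, recomputing Delta as the stock moves. The parameters, including the stock's growth rate, are made-up numbers; the thing to notice is that the final hedging error is small compared with the premium, whatever growth rate you plug in.

```python
# Dynamic Delta hedging of a sold call, rebalanced at each step. Zero interest rate,
# illustrative parameters; the stock's growth rate (mu) barely affects the outcome.
import numpy as np
from math import log, sqrt
from statistics import NormalDist

N = NormalDist().cdf
rng = np.random.default_rng(seed=7)

def call_value_and_delta(S, K, T, sigma):
    d1 = (log(S / K) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
    return S * N(d1) - K * N(d1 - sigma * sqrt(T)), N(d1)   # zero-interest Black-Scholes

S, K, T, sigma, mu, steps = 100.0, 100.0, 0.25, 0.2, 0.30, 1000
dt = T / steps
premium, delta = call_value_and_delta(S, K, T, sigma)
cash = premium - delta * S                       # bank the premium, buy Delta shares
for i in range(1, steps):
    S *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * sqrt(dt) * rng.standard_normal())
    _, new_delta = call_value_and_delta(S, K, T - i * dt, sigma)
    cash -= (new_delta - delta) * S              # rebalance at the new price
    delta = new_delta
S *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * sqrt(dt) * rng.standard_normal())
error = cash + delta * S - max(S - K, 0.0)       # sell the shares, pay the option holder
print(f"premium ${premium:.2f}, hedging error ${error:+.3f}")   # error is cents, not dollars
```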

The model was again based on a lognormal random walk, with constant standard deviation, and assumed the other tenets of efficient market theory – for example, the hedging argument assumed that stocks were correctly priced, and that “speculators would try to profit by borrowing large amounts of money” to exploit any small anomaly that might appear. The solutions to the equation again look a lot like the curves in Figure 4.1, and can be computed numerically in a manner similar to that outlined in Box 4.1, by working backwards from the option's value at expiry. In the early 1970s Black and Scholes had difficulty getting their paper published, but the formula, or rather its rigorous derivation, later won Scholes and Merton a “Nobel Prize” (Black died before the award was made). See Box 4.2 for further details.

Mathematical Dynamite

At this point we should acknowledge that certain writers and critics have pointed out that the economics version of the Nobel Prize is properly called the “Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel.”10 The award was created in 1969, seven decades after Nobel's death, by the Bank of Sweden, so some consider it to be a glorified version of a bank prize. Peter Nobel said in 2004 that the bank had “infringed on the trademarked name of Nobel. Two thirds of the Bank's prizes in economics have gone to US economists of the Chicago School who create mathematical models to speculate in stock markets and options – the very opposite of the purposes of Alfred Nobel to improve the human condition.”

On the contrary, as the inventor of dynamite, Alfred Nobel was clearly into blowing stuff up. And given that derivatives such as options would later help to blow up much of the world financial system, we think that the Nobel association is appropriate (he may even have felt kudos were due). So, in the context of this book, “Nobel” it is!

As discussed above, aesthetic principles such as elegance and symmetry play an important role in science and in finance; and in aesthetic terms at least the great appeal of Black–Scholes (sometimes called Black–Scholes–Merton or BSM in recognition of Merton's contribution) was that, unlike earlier versions of option-pricing models, it needed – in true Newtonian fashion – only a single parameter to describe the stock, namely its volatility. The growth rate of the stock had just dropped out somewhere in their derivation. What this means in practice is that the value of a call option, or indeed any option, depends only on the volatility of the underlying asset, and not on how fast the stock is growing. Even though a stock that is growing rapidly is more likely to end up in the money, is more likely to end up a long way in the money, is more likely to make a profit, and that profit is more likely to be huge… this doesn't affect the theoretical value of an option. The option value would be the same if the underlying share price was falling off a cliff. Only the volatility mattered. Counterintuitive or what? But the reason again is that Black–Scholes showed how to hedge the option with the stock. As they wrote in their paper, “in equilibrium, the return on such a hedged position must be equal to the return on a riskless asset.” If you hold this hedged portfolio you just don't care whether the stock is rising or falling.
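Here is the formula itself, for a European call, written as code. Notice which inputs appear: the stock price, the strike, the time to expiry, the risk-free interest rate, and the volatility; the stock's growth rate is nowhere to be seen. The numbers in the example call are purely illustrative.

```python
# The Black-Scholes value of a European call option. No growth rate appears anywhere.
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf   # cumulative standard normal distribution

def black_scholes_call(S, K, T, r, sigma):
    """S: stock price, K: strike, T: years to expiry, r: risk-free rate, sigma: volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Illustrative numbers: a one-year at-the-money call, 5% interest, 20% volatility.
print(round(black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.20), 2))   # about 10.45
# Raise the volatility and the value jumps; change your view of the stock's growth
# and nothing happens, because that view has been hedged away.
print(round(black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.40), 2))   # about 18.02
```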

Note that, while the Black–Scholes formula might not be able to directly incorporate your view of the growth rate as a tunable parameter, the market view does appear indirectly through the current stock price. If you are buying a stock, then the price you are willing to pay will depend on your perception of its expected future value, balanced against a risk premium. Two people with different views of these factors will therefore arrive at different prices, and the market price will reflect a kind of consensus view. When pricing an option with Black–Scholes, the formula takes the stock price as a given, and assumes that the option is priced “correctly” in the sense that it reflects the risk–reward balance baked into the market stock price. It does this by shifting to a risk-neutral setting, which takes the risk premium out of the picture. When this is done, the only parameter left over is volatility – and again, using different values will give different results. For this reason, option prices are often interpreted as reflecting views on volatility, while stock prices are seen as reflecting views on growth, but in fact both represent a similar trade-off between risk and reward. And if you disagree with the market's assessment of a stock's growth potential, then yes you will disagree with the option price produced by Black–Scholes, but you will also disagree with the current stock price produced by the market.
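In practice this is often run in reverse: given the price at which an option actually trades, find the volatility that Black–Scholes would need to reproduce it, the so-called implied volatility. A sketch, with the pricer repeated so the snippet stands alone and the quoted price chosen to match the earlier example:

```python
# Implied volatility: invert Black-Scholes to find the sigma consistent with a market price.
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf

def bs_call(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return S * N(d1) - K * exp(-r * T) * N(d1 - sigma * sqrt(T))

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0):
    """Bisection works because the call value rises steadily with volatility."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) > price:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# A one-year at-the-money call quoted at $10.45 implies a volatility of roughly 20%.
print(round(implied_vol(price=10.45, S=100, K=100, T=1.0, r=0.05), 3))   # about 0.2
```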

No Risk

Traders now had a rigorous framework for valuing options. It was a framework based on a model for the underlying asset, with some important concepts such as dynamic hedging. Valuation even got reinterpreted abstractly in terms of imaginary worlds where imaginary people valued imaginary options with imaginary behavior. Valuation had shrunk down to that single point on Markowitz's risk–return diagram; the market price of risk for hedged options was zero; economists had no reason to worry about inconvenient human characteristics such as risk aversion… never ever.

Now, at this stage in the book the average reader may be thinking, yay, so now I know how to calculate the price of an option – what's the point of that? Unless you're a quant – in which case you will already have been exposed to this information, albeit without the interesting historical background and pithy asides. However, our aim is to demystify the topic, show the assumptions that are being made, and also give a sense of how some fairly basic mathematics could dramatically affect the world of finance.

For this simple formula did much more than simulate option prices – it changed them, by putting option trading on what appeared to be a sound mathematical basis. Recall that in the early 1970s, option trading was very small scale, in part because of its association with gambling. This all changed after Black–Scholes caught on. With the encouragement of University of Chicago economics professors including Milton Friedman, the Chicago Board Options Exchange opened for business in April 1973. As its counsel explained: “Black–Scholes was really what enabled the exchange to thrive… [I]t gave a lot of legitimacy to the whole notions of hedging and efficient pricing, whereas we were faced, in the late 60s–early 70s with the issue of gambling. That issue fell away, and I think Black–Scholes made it fall away. It wasn't speculation or gambling, it was efficient pricing. I think the SEC [Securities and Exchange Commission] very quickly thought of options as a useful mechanism in the securities markets and it's probably – that's my judgement – the effects of Black–Scholes. [Soon] I never heard the word ‘gambling’ again in relation to options.”11 It also helped that Texas Instruments and Hewlett Packard came out with handheld calculators that could easily handle the Black–Scholes formula.

The formula also contained within it the promise of a perfect, automated system for making money. By dynamically hedging their bets, those who understood the Black–Scholes formula could exploit anomalies in bond and stock markets to make what appeared to be risk-free profits, without needing to worry about the messy realities of the underlying company. Finance now existed on a higher mathematical plane, serenely detached from the rest of the world. As the derivatives trader Stan Jonas puts it: “The basic dynamic of the Black–Scholes model is the idea that through dynamic hedging we can eliminate risk, so we have a mathematical argument for trading a lot. What a wonderful thing for exchanges to hear. The more we trade, the better off the society is because the less risk there is. So we have to have more contracts, more futures exchanges, we have to be able to trade the Nikkei futures in Japan, we have to be able to trade options in Germany. Basically in order to reduce risk we have to trade everywhere and all the time.”12 In 2000, Alan Greenspan testified to Congress that this ability to hedge risk had made the financial system more robust: “I believe that the general growth in large institutions has occurred in the context of an underlying structure in markets in which many of the larger risks are dramatically – I should say fully – hedged.”13

Positive Feedback

So, did the formula make the markets more efficient? It certainly seemed that way. As traders began to adopt the formula, prices converged so that it was more difficult to arbitrage between stock and option prices. A rule of finance, known as the “law of one price,” says that the price of a security, commodity, or asset will be the same anywhere once things like exchange rates and expenses are taken into consideration, since otherwise an arbitrageur can buy cheap in one place and sell in another. However, as seen in the next chapter, the fact that markets agree on one price does not necessarily mean they have converged to the right price (whatever that is) or that the price will be stable. The Black–Scholes model is an elegant equation which is useful so long as its limitations are understood; but any formula which is based on the perfect, symmetrical, stable, rational, and normal world of abstract economics, where investors can effectively make predictions about the future of a stock based on nothing more than past volatility, will never be a realistic model.14

The disassociation from gambling was also not entirely positive. Gamblers are aware that they are dealing with risk and can lose their stake. The idea that in finance you could even come close to eliminating risk through the use of hedging strategies, in contrast, led some firms (not Thorp's) to a dangerous hubris.

As an example: in 1976, the three founders of the firm Leland, O'Brien, and Rubinstein (LOR) had a brilliant idea, which was to use the Black–Scholes option valuation model to protect stock portfolios against crashes. If you are worried about the possibility of a stock-market crash then there are several things you can do. You could sell some, or even all, of your portfolio. But then what if the market rises? Or you could buy put options to protect the downside. But put options are overpriced (most insurance is overpriced), and you'd be forever rolling over your options as they expire, and buying and selling as your portfolio changes. But Black and Scholes had shown how you can make options synthetically. We've explained that the Black–Scholes model shows how to hedge an option by dynamically buying and selling the underlying shares. Well, what if you go through the motions of buying and selling but without actually owning any options? If you do that then you've replicated a short position in the same contract. Change the signs, by buying when you would have sold and vice versa, and you've made a synthetic long position.

The result was a new form of portfolio insurance. On behalf of their clients, LOR would buy and sell index futures so as to replicate a put option, only more cheaply (again, futures are a contract which obliges you to buy or sell at a fixed price in the future, while puts give you the choice but at a price). And you could specify the maximum loss you were prepared to sustain, a bit like choosing the strike price of an option.

The technique amounted to something like the following. As the market fell, they'd start selling futures. As it rose, they'd buy them back. As the market fell further, the short position would grow so that beyond a certain point you'd stop caring anymore. As the market rose higher and higher, they'd buy back the futures so that you wouldn't lose out on the upside.
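In code, the recipe looks something like the following sketch. The put's Delta tells you how much of the portfolio to hold short in futures, and it is recomputed as the market moves; the floor, volatility, and horizon are illustrative, and real-world frictions such as transaction costs and margin are ignored.

```python
# Portfolio insurance as a synthetic put: hold a short futures position equal to the
# put's Delta, recomputed as the index moves. All parameters are illustrative.
from math import log, sqrt
from statistics import NormalDist

N = NormalDist().cdf

def put_delta(S, K, T, sigma, r=0.0):
    """Black-Scholes Delta of a put: near 0 when the market is far above the floor,
    approaching -1 (fully out of the market) as it falls through the floor."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return N(d1) - 1.0

floor, sigma, T = 90.0, 0.20, 0.5   # protect against falls below 90, over six months
for index_level in (110, 100, 95, 90, 80):
    d = put_delta(index_level, floor, T, sigma)
    print(f"index at {index_level}: futures position {d:+.2f} of the portfolio")
# As the market falls the program sells more and more futures into the fall;
# as it rises, it buys them back.
```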

Can you see the fatal flaw in the business model? Or perhaps it's not a flaw (we'll return to this later in Box 10.1).

As the market rises so the model, the Black–Scholes model, says buy more of the futures. And what happens when people buy en masse? And when the market falls, the same formula tells them to sell. And when a lot of people sell, what happens to the price? Yes, it's positive feedback. And we don't mean positive in a good way.

Positive feedback accentuates small perturbations, the famous (but rather misleading15) example being the butterfly effect. Mainstream economics, being all about stability, has little to say on the topic of positive feedback; it prefers to concentrate on negative feedback, which reduces fluctuations. An example is the invisible hand, where if prices depart too much from their “natural” level, suppliers enter or leave the markets and equilibrium is restored. But both types of feedback play a role in finance.

In the period leading up to Black Monday in 1987 there was about $60 billion worth of assets protected by portfolio insurance. That's $60 billion following, religiously, the same formula. It's the mathematical equivalent of everyone on one side of the world jumping in the air at the same time. Which is why, ironically, portfolio insurance has been cited as one of the factors behind the crash.

Another firm to experience the risky and fragile nature of risk management was Long-Term Capital Management (LTCM), whose partners included both Scholes and Merton. It used its expertise in option pricing to construct complicated and highly leveraged financial bets. As their October 1993 prospectus said, “The reduction in the Portfolio Company's volatility through hedging could permit the leveraging up of the resulting position to the same expected level of volatility as an unhedged position, but with a larger expected return.”16 The strategy was highly profitable right up until August 1998, when the Russian government decided to default on its bonds. Dynamic hedging doesn't work so well in a crisis, when no one wants to execute your orders. The company had to be rescued at a cost of $3.6 billion in order to avoid an even greater crisis.

LTCM had miscalculated the real risk levels because they didn't take model error into account. Of course, this did not stop people from using the same models to trade/gamble on derivatives, or prevent the market/casino from growing in size. The next chapter looks at how derivatives allowed the world money supply to blossom in a way that John Law could only have dreamt of; and how this came to an abrupt end only in September 2008, when the lid of the box creaked open, the invisible hand reached slowly out, and the financial system turned itself off.

Notes
