15


Quantitative analytics

Role of quantitative analytics

Classification of pricing models

Numerical methods

Analytical methods

ROLE OF QUANTITATIVE ANALYTICS

The quantitative analytics team (also known as ‘quants’) is a crucial part of the investment bank front office. Quants are highly educated individuals, typically holding PhDs in maths or physics, whose job is the creation of complex mathematical models used for pricing existing and innovative financial instruments. Their expertise is best utilised in derivatives markets, as these involve many unknown variables. They tend to specialise in a particular class of products, so the teams are typically sub-divided into interest rate derivatives, equity and FX derivatives, commodity derivatives and credit derivatives. As the interest rate market is the biggest by volume and offers the widest range of derivatives products, the majority of quantitative analytics work is done in this field, which is reflected in the number of quants. Their work is usually complemented by the support of the quant IT team, who ‘translate’ the mathematical models into computer software to be incorporated into a wide range of IT systems used by the bank.

CLASSIFICATION OF PRICING MODELS

There are numerous quantitative models used to price financial instruments. Whilst there is a market convention when it comes to the main concepts and techniques used to price different product classes, most investment banks use proprietary models or bespoke modifications to standard methods to price their products. This chapter offers a classification of the most popular, and publicly known, models used in derivatives pricing. For each model or technique only a brief description, with the pros and cons, is given. This is deliberate: these techniques are very complex, each requiring advanced knowledge of calculus and meriting an entire book in its own right. Hence this chapter aims only to give an overview of the various pricing techniques, as a glimpse into the complexity involved in derivatives valuation.

The main division of existing pricing models is into:

  • Numerical models
  • Analytical models.

The numerical models can be further sub-divided by their use into:

  • Models offering complete solutions by numerical methods
  • Numerical solutions to complex analytical equations.

Whilst the analytical models can be:

  • Equations used for pricing particular instruments
  • Equations describing the behaviour of the underlying variables.

The main numerical methods currently in use in quantitative analysis are: Monte Carlo simulation, binomial and trinomial trees, finite difference methods and interest rate trees.

Key Point: Summary of numerical quantitative methods

Monte Carlo simulation

Trees (binomial, trinomial)

Finite difference methods

Interest rate trees

The main analytical quantitative methods currently in use are: simple pricing equations (money market products, bonds, swaps etc.); vanilla product models (Black–Scholes, Black models); models for exotic products (models of the entire yield curve) – further subdivided into equilibrium models and no-arbitrage models.

Key Point: Summary of analytical quantitative methods

Simple pricing equations (money market products, bonds, swaps etc.)

Vanilla product models (Black–Scholes, Black models)

Exotic products (models of the entire yield curve):

Equilibrium models

No-arbitrage models.

NUMERICAL METHODS

Monte Carlo simulation

General principles

Monte Carlo simulation generates random samples of possible behaviours of a variable. Many samples are generated and an average is calculated as the most likely outcome. For example, a Monte Carlo calculation of the constant π can be done by drawing a circle inside a square and randomly positioning dots inside the whole area of the square. Since the area of a square of side a is well known (A = a²), and the area of the circle inscribed in it is a²π/4, π can be calculated from the ratio of dots that fall inside and outside the circle. Similarly, for the calculation of the payoff of a financial product that depends on the evolution of the entire yield curve, simulating the different paths that the curve can take will provide an average (and most likely) payoff.
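The dot-counting experiment above can be sketched in a few lines of Python, sampling the unit square and counting points inside the inscribed quarter circle (the sample count and seed below are arbitrary choices):

```python
import random

def estimate_pi(n_samples=1_000_000, seed=42):
    """Estimate pi by sampling the unit square and counting the fraction
    of points that land inside the inscribed quarter circle."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    # quarter-circle area / square area = pi / 4, so pi = 4 * fraction inside
    return 4.0 * inside / n_samples
```

The estimate approaches 3.14159… as the number of samples grows, with the error shrinking roughly as the inverse square root of the sample count.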

Pros and cons

The Monte Carlo technique is useful for pricing derivatives where the payoff is dependent on the history of the underlying variable or where there are several underlying variables. It can accommodate complex payoffs and complex stochastic processes. It is numerically very efficient, as the time taken for the simulation increases approximately linearly with the number of variables. Its drawback is that it can be used for European style derivatives only, i.e. no early exercise is allowed.
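As an illustration of a history-dependent payoff, a minimal Monte Carlo sketch for an arithmetic-average (Asian) call is given below. The geometric Brownian motion dynamics, step count and path count are illustrative assumptions, not a production model:

```python
import math
import random

def mc_asian_call(S0, K, T, r, sigma, n_steps=50, n_paths=20_000, seed=1):
    """Monte Carlo price of an arithmetic-average Asian call, assuming the
    underlying follows geometric Brownian motion under risk-neutral drift."""
    rng = random.Random(seed)
    dt = T / n_steps
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        s, avg = S0, 0.0
        for _ in range(n_steps):
            s *= math.exp(drift + vol * rng.gauss(0.0, 1.0))  # one GBM step
            avg += s
        avg /= n_steps                    # payoff depends on the whole path
        total += max(avg - K, 0.0)
    return math.exp(-r * T) * total / n_paths
```

Because the payoff uses the running average, the entire simulated path matters, which is exactly the situation where Monte Carlo is the natural choice.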

Binomial and trinomial trees

General principles

Binomial and trinomial trees are typically used to price options on an underlying variable (e.g. a stock). They rely on the assumption that stock price movements consist of very small binary steps over very short time intervals. The time to maturity is divided into small intervals, and at each interval the stock price can only go up or down by a predetermined amount, with assumed probabilities for the up and down movements. Since the payoff at maturity is known, the option value can be calculated at each terminal node of the tree; working backwards through the nodes then yields the option price today.
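The backward-induction step can be sketched as follows, using the common Cox–Ross–Rubinstein parameterisation (an assumption; the chapter does not prescribe a particular tree construction):

```python
import math

def binomial_call(S0, K, T, r, sigma, steps=200, american=False):
    """Cox-Ross-Rubinstein binomial tree for a call option: build the
    terminal payoffs, then roll back through the nodes to today."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))    # up-move factor
    d = 1.0 / u                            # down-move factor
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)               # one-period discount factor

    # payoff at each terminal node (j up-moves out of `steps`)
    values = [max(S0 * u ** j * d ** (steps - j) - K, 0.0)
              for j in range(steps + 1)]

    # backward induction through the tree
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            if american:  # allow early exercise at every node
                cont = max(cont, S0 * u ** j * d ** (i - j) - K)
            values[j] = cont
    return values[0]
```

With enough steps the European price converges to the Black–Scholes value, and the `american` flag shows how naturally the tree handles early exercise.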

Pros and cons

Trees are useful for pricing products where decisions can be made at each step (e.g. American options) as well as for pricing dividend-paying stocks. Options on indices, currencies and futures contracts can also be valued using this technique. Variables (interest rates, volatility) can be varied between the tree nodes. Binomial and trinomial trees can also be used in conjunction with control variate techniques. For example, the European option price can be calculated using the Black–Scholes formula and compared with the price obtained from the binomial tree. Assuming that the difference is the error that the tree introduces into pricing, the price of an American style option (obtained using the tree) can be adjusted by this amount. This approach can easily be extended to non-recombining trees and trees with more than two movements per step (e.g. trinomial trees that have a mid-path). However, tree building is time-consuming and computationally intensive. Furthermore, its main shortcoming is that it is not suitable for derivatives dependent on past history.

Finite difference methods

General characteristics

Finite difference methods are similar to trees in that the calculation works backwards from the terminal value to the start date. They are used to solve a differential equation that a variable satisfies when the equation does not have an analytical solution or is very difficult to solve. The equation is converted into a set of finite difference equations, which are solved iteratively. For example, a partial differential equation in two variables can be represented by a two-dimensional mesh with one variable on the x-axis and the other on the y-axis. Each (x, y) point represents one state of the world, with the grid boundaries representing the final values. Akin to tree techniques, the value of the derivative is known at expiry (e.g. the value of a call option with strike 20 is 5 when the stock price at expiry is 25), hence it is used as a boundary condition. First, second and higher order derivatives can be approximated by differences between neighbouring grid values. In this way a single differential equation is represented by a difference equation at every grid point, and the set of equations is solved to yield the price of the derivative at inception.
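As an illustration, a minimal explicit finite-difference scheme for the Black–Scholes equation for a European call might look as follows. The grid sizes, the choice of S_max and its approximate boundary condition are all assumptions; an implicit or Crank–Nicolson scheme would be more robust in practice:

```python
import math

def fd_explicit_call(S0, K, T, r, sigma, M=200, N=2000):
    """Explicit finite-difference solution of the Black-Scholes PDE for a
    European call, stepping backwards from the known payoff at expiry.
    Stability roughly requires dt <= 1 / (sigma^2 * M^2 + r)."""
    S_max = 3 * K                  # far boundary, assumed "large enough"
    dS = S_max / M
    dt = T / N
    # terminal condition: the payoff at expiry defines the grid boundary
    V = [max(i * dS - K, 0.0) for i in range(M + 1)]
    for n in range(N):
        tau = (n + 1) * dt         # time to expiry after this backward step
        new = V[:]
        for i in range(1, M):
            S = i * dS
            delta = (V[i + 1] - V[i - 1]) / (2 * dS)
            gamma = (V[i + 1] - 2 * V[i] + V[i - 1]) / dS ** 2
            theta = -0.5 * sigma ** 2 * S ** 2 * gamma - r * S * delta + r * V[i]
            new[i] = V[i] - dt * theta            # one interval back in time
        new[0] = 0.0                              # call is worthless at S = 0
        new[M] = S_max - K * math.exp(-r * tau)   # deep in-the-money boundary
        V = new
    # linear interpolation to the spot S0 (assumes S0 < S_max)
    i = int(S0 / dS)
    w = (S0 - i * dS) / dS
    return (1 - w) * V[i] + w * V[i + 1]
```

Note how the grid holds the option value at every (stock price, time) state, and the known expiry payoff supplies the boundary condition, exactly as described above.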

Pros and cons

Finite difference methods are used for the same types of derivative pricing problems as trees. Hence, they can price both American and European style options, but do not easily handle derivatives with a payoff dependent on historical values. Finite difference methods can also be combined with control variate techniques for improved accuracy. Furthermore, risk parameters (delta, gamma) can be calculated directly from the grid, making this approach very useful in risk evaluation and management. These techniques can be computationally intensive when used for problems involving several state variables.

Interest rate trees

General characteristics

The principle of interest rate tree construction is the same as for stock price trees (described earlier). Whereas in stock price trees the discounting rate is typically constant between the nodes, in interest rate trees the rate varies from node to node. It is often more convenient to use trinomial trees (rather than binomial ones) as they give more freedom. For example, trinomial trees can model mean reversion.
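A minimal sketch of the idea — discounting backwards through a recombining binomial short-rate tree where the rate differs at every node — could look like this. The additive rate grid and equal branch probabilities are illustrative assumptions, not a calibrated model:

```python
import math

def rate_tree_bond_price(r0, dr, steps, dt=1.0, p=0.5, face=100.0):
    """Price a zero-coupon bond on a recombining binomial short-rate tree.
    At node (i, j) (j up-moves out of i steps) the one-period rate is
    r0 + (2 * j - i) * dr, so the discount rate varies from node to node."""
    values = [face] * (steps + 1)        # the bond pays `face` at maturity
    for i in range(steps - 1, -1, -1):
        values = [
            math.exp(-(r0 + (2 * j - i) * dr) * dt)   # node-specific discounting
            * (p * values[j + 1] + (1 - p) * values[j])
            for j in range(i + 1)
        ]
    return values[0]
```

Setting dr = 0 recovers flat-rate discounting, which is the stock-tree situation described above; a non-zero dr makes the discount rate node-dependent.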

Pros and cons

As with other tree-based techniques, interest rate trees are used for pricing products where decisions can be made at each step (e.g. American style exercise), and variables can be varied between the tree nodes; control variate techniques can also be applied. As with all tree-based approaches, they are computationally intensive. Their main advantage is that they can be used to represent many yield curve models; given the flexibility of the method, it can be fitted to any term structure.

Summary of the numerical methods

All of the above methods are reliable and tractable. Which one is used depends mainly on the characteristics of the derivative being valued and the accuracy required. Monte Carlo simulation works forward from the beginning of the contract to maturity. It cannot easily accommodate early exercise. For problems with many variables it is more efficient than tree and grid methods.

On the other hand, tree and finite difference methods work from the expiry backwards in order to evaluate the security in question. Computationally very demanding, these methods can accommodate early exercise. Interest rate trees are effectively just a sub-class of a standard tree, but give a flexibility that accommodates changes of discount rates between the nodes. As they can fit any yield curve term structure required, they can implement many of the analytical models.

ANALYTICAL METHODS

Simple pricing equations

This class of methods covers products that have known payments and no optionality or other special features. Typically these products are:

  • Simple money-market deposits
  • Bonds
  • Swaps.

These products all have known payment dates (simple deposits and zero-coupon bonds typically have only one payment, at maturity) and the payment values are either fixed or linked to the yield curve (floating rate). There is no uncertainty over whether the payments will take place, nor over how much will be paid (we cannot be certain of the future yield curve, but current forward rates are the best estimate of the future cashflows). This is why there are no specific models to price these products: the equations are simply streams of future cashflows discounted to today using discount factors calculated from today’s yield curve. They have all been extensively covered in previous chapters.
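Pricing this ‘simple’ class therefore reduces to one line: multiply each known cashflow by the discount factor for its date and sum. For example (the bond and the discount factors below are purely illustrative):

```python
def present_value(cashflows, discount_factors):
    """Price = sum of known future cashflows, each multiplied by the
    discount factor for its payment date (lists aligned by date)."""
    return sum(cf * df for cf, df in zip(cashflows, discount_factors))

# A 3-year 5% annual-coupon bond with face value 100, priced off
# hypothetical curve-implied discount factors for years 1, 2 and 3.
cfs = [5.0, 5.0, 105.0]
dfs = [0.9710, 0.9420, 0.9130]
price = present_value(cfs, dfs)
```

The same function revalues the position at any later date by passing only the outstanding cashflows and the discount factors from that date's curve.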

The above products have some features that require closer attention. For example:

  • CMS/CMT (constant maturity swaps/constant maturity treasury) swaps have one or both legs linked to a long-term swap rate (rather than the Libor rate whose tenor matches payment frequency). In this case the model has to take into account convexity (non-linear relationship between the yield and the price) adjustment before it can continue pricing the swap in the same manner as its vanilla counterpart.
  • RPI (retail price index – a price index of common household goods, and thus a measure of inflation) swaps have one leg linked to the RPI index. This requires knowledge of forward inflation rates (in the UK modelled using gilts). LPI (limited price indexation) in particular has a feature that collars the movements of the RPI index, and as such has some features of caps and floors. This moves the product out of the ‘simple’ category, as we now have to assume that inflation is stochastic and use Monte Carlo simulation (or another suitable method) to predict the most likely value out of the many paths inflation can take.
  • Quanto swaps (where one or more legs are linked to an index in one currency, but payable in another) require calculation of the Quanto adjustment (correlation between the FX and the interest rate) before continuing with pricing using a ‘vanilla’ equation.

In summary, pricing products with a known number, timing and (often) size of cashflows reduces to calculating their present value at inception. At some future date the value of the position is calculated in the same way, taking into account all the outstanding payments. These products use yield curves (built from the most liquid products in the market) to project and discount the future cashflows. The discounting and projection indices can be, but need not be, the same.

Vanilla product models

This class of models includes, for example, the Black–Scholes (used to price stock options) and Black models (used to price interest rate derivatives), both described in Chapter 9. These models are quick and robust but are not flexible enough to accommodate pricing more exotic products.

Black–Scholes model

Much quicker than any of the numerical methods, the B–S model uses a single equation to price call and put options on stocks. The model assumptions are:

  1. No dividends or accrued interest on the underlying.
  2. No early exercise.
  3. Constant risk-free interest rate.
  4. Constant volatility.
  5. Price of the underlying is log-normally distributed (i.e. returns are normally distributed).
  6. Continuous trading with no transaction costs or taxes.
  7. Short selling of securities is allowed.

The B–S model has been extended (with minor modifications) to cover dividend-paying stocks (known dividend model), American style options (pseudo American model), options on indices, currency options (Garman–Kohlhagen model), options on futures (Black model), American options with only one dividend (Roll–Geske–Whaley model) etc.
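For reference, the Black–Scholes call formula (and the put via put–call parity) can be written directly in code; the only special function needed is the standard normal CDF:

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call on a non-dividend stock."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def bs_put(S, K, T, r, sigma):
    """European put price obtained via put-call parity."""
    return bs_call(S, K, T, r, sigma) - S + K * math.exp(-r * T)
```

Its speed is apparent: a single closed-form evaluation replaces thousands of tree or simulation steps, which is why vanilla books are marked this way.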

Black model

As mentioned above, the Black model is an extension of the B–S model, hence the same assumptions apply. In terms of implementation and the computational speed, it is the same as B–S. It uses the assumption of the B–S model that the security will be held until its maturity, thus the evolution of interest rates during the life of the security is irrelevant and all that matters is its value at maturity. Hence rates are treated as if they are tradable securities.

Pros and cons
  • The model is simple to use, quick to implement and compute.
  • One formula used for pricing a range of derivatives.
  • It is widely used for valuing caps/floors, European bond options, swap options etc.
  • Its drawback is that it does not model yield curve term structure.
  • Due to the constant volatility assumption, it gives rise to the volatility smile (the implied volatility backed out of the B–S and Black models for deep OTM and ITM options is higher than for ATM options).
  • Assumptions behind the model are not realistic.

Exotic products (models of the entire yield curve)

These models attempt to accommodate the different features of products emerging in OTC markets. Their aim is to model the evolution of the entire yield curve (rather than working with the terminal values of interest rates). They can be broadly classified as equilibrium models and no-arbitrage models.

Equilibrium models

Equilibrium models are also called short rate models, as they describe the behaviour of economic variables in terms of an instantaneous short-term rate r (the rate that prevails from one moment to the next). As this rate is not a tradable quantity, it does not describe the real world. The models based on the short rate assume that derivative prices depend only on the process followed by r in a risk-neutral world (where positions are perfectly hedged to create zero profit/loss regardless of market moves). Once the process for r is fully defined, it is assumed to implicitly define the initial term structure and its future evolution. The main disadvantage of equilibrium models is that the initial yield curve term structure is an output of the model, rather than an input to it. Hence calibrating the model outputs to the available market data can be an issue.

One-factor models

In one-factor models the process for the instantaneous short-term rate r has only one source of uncertainty. The drift and the standard deviation are taken to be functions of r, but independent of time. One-factor models imply that, over any short time interval, all rates move in the same direction, albeit by potentially different amounts. This feature still enables modelling of a rich pattern of term structures, but it does not allow points on the curve to move in opposite directions, i.e. the twist seen, for example, when a curve moves towards inversion (where long-term rates are lower than short-term ones).

Examples of one-factor models are:

  • Rendleman and Bartter model
  • Vasicek model
  • Cox, Ingersoll and Ross model.
Rendleman and Bartter model
  • Assumes that the short-term rate behaves like a stock price (geometric Brownian motion).
  • As a consequence, it can easily be represented by a binomial tree.
  • Short-term rates are log-normally distributed (hence cannot be negative).
  • Constant expected growth rate and volatility are implicitly assumed by the model.
  • Mean reversion cannot be incorporated (no pull-back to some long-term average).
Vasicek model
  • Akin to the above, constant expected growth rate and volatility are implicitly assumed.
  • Short-term rates have normal distribution (hence can be negative).
  • Unlike above, this model is extended to support mean reversion, albeit at a constant rate.
  • Can be used to value options on zero-coupon and coupon-bearing bonds.
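The mean-reverting Vasicek dynamics, dr = a(b − r)dt + σ dW, can be sketched with a simple Euler simulation (the parameter values are left to the caller; the normally distributed shocks mean the simulated rate can go negative, as noted above):

```python
import math
import random

def simulate_vasicek(r0, a, b, sigma, T, steps, seed=0):
    """Euler simulation of the Vasicek short rate:
    dr = a * (b - r) * dt + sigma * dW  (mean reversion to b at speed a)."""
    rng = random.Random(seed)
    dt = T / steps
    r, path = r0, [r0]
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))     # Brownian increment
        r += a * (b - r) * dt + sigma * dw     # normal shocks: r may go < 0
        path.append(r)
    return path
```

Setting σ = 0 exposes the deterministic pull of the rate towards the long-term level b, which is the mean reversion feature that distinguishes Vasicek from Rendleman and Bartter.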
Cox, Ingersoll and Ross model
  • The short-term rate is non-negative (the square-root diffusion term prevents it from falling below zero).
  • This approach also supports mean reversion at a constant rate.
  • The standard deviation of the stochastic term is proportional to √r, implying that it increases or decreases with the instantaneous short-term rate.
Two-factor models

Here the process for the instantaneous short-term rate r is assumed to have two sources of uncertainty. The drift and the standard deviation are now taken to be functions of both r and time. This allows modelling of term structures with an even richer pattern than in one-factor models. With two factors, one factor can drive the curve level whilst the second governs its ‘tilt’. Thus points along the curve can move in opposite directions, allowing the curve to twist and, for example, move into inversion. Examples of two-factor models are:

  • Brennan and Schwartz model
  • Longstaff and Schwartz model.
Brennan and Schwartz model
  • The long-term rate is assumed to follow a stochastic process.
  • A yield on a perpetual bond paying $1 annual coupon is used to model the long-term rate.
  • The process governing the behaviour of the short rate reverts to a long-term rate.
  • The analysis is simplified by the fact that the fictive bond used in modelling the long-term rate is a tradable security (i.e. in a risk-neutral world, the rate of increase in bond price must be equal to the risk-free interest rate reduced by the bond yield).
Longstaff and Schwartz model
  • This model assumes stochastic volatility.
  • It is quite tractable, hence favoured by market practitioners.

The disadvantage of all equilibrium models is that they do not automatically fit the market-driven yield curve term structure. However, careful parameter selection enables this class of models to approximately fit most of the term structures encountered in practice (albeit with significant errors). Thus traders and market practitioners are reluctant to rely on these models when pricing derivative securities. This has led to the emergence of the no-arbitrage models described in the following section.

No-arbitrage models

No-arbitrage models implicitly fit today’s term structure. They take today’s term structure for granted and use it as an input to define the process of its evolution over time. Different no-arbitrage models use one of three distinct (but equivalent) approaches:

  • Modelling bond prices
  • Modelling forward rates
  • Modelling the short rate.

Models of bond prices and forward rates (the first two approaches) are generally non-Markov* and have to be solved numerically using Monte Carlo simulation or a non-recombining tree. However, the choice of volatilities used in the model is left to the practitioner, with only one condition – the volatility has to approach zero at maturity.

Models of short rates are usually Markov and analytically tractable. However, unlike the approach above, in this case the practitioner does not have complete freedom of choice of volatility. The initial volatility is typically consistent with the modelled value, but the future model-implied volatility might be inconsistent with the market data.

Non-Markov models

An example of a non-Markov model based on forward rates is the Heath, Jarrow and Morton model. Its main characteristics are:

  • It is a two-factor model of forward rates
  • Rates are assumed to be log-normally distributed (thus non-negative)
  • One factor drives the parallel shift of the curve, whilst the other governs the twist (hence rates can move in opposite directions, allowing for inversion)
  • Due to the model flexibility, a very realistic term structure can be achieved
  • The future rate stochastic behaviour over a short period of time is defined by both the initial value of r as well as the path of its evolution
  • As it is non-Markov, this model cannot be represented by a recombining tree, as an upward move followed by a downward move would not result in the same value as taking the opposite route
  • Due to the above, it is typically implemented using Monte-Carlo simulation.
Markov models

Most Markov models are developed in terms of short rate r, assuming the drift to be a function of time (thus these models are typically consistent with the initial term structure). This makes them a logical extension of the equilibrium models described earlier. The Markov property also makes it possible to use recombining trees.

As these models are explicit functions of short rate r (not observable in practice), they tend to be consistent with only the short end, rather than the entire yield curve.

Some examples of Markov models are:

  • Ho and Lee model
  • Hull and White Model
  • Black, Derman and Toy*
  • Black–Karasinski*.
Ho and Lee model
  • This is a one-factor model, with only one function of time.
  • Interest rates are assumed to be normally distributed (hence can be negative).
  • An exact fit to the current term structure is easily achieved.
  • Volatility structure is governed by the model, rather than left to the practitioner’s choice.
  • Standard deviations of spot and forward rates are identical.
  • Due to only one factor, the model does not allow for mean reversion.
  • The average direction in which the short rate moves from one short period to the next is always the same, independent of the current rate level.
  • The model is easy to apply and tractable, thus favoured by some market practitioners.
Hull and White model
  • This is a one-factor model, with the same characteristics as the Vasicek model plus a time-dependent reversion level.
  • Similarly, it is identical to the Ho and Lee model, with added mean reversion as a second parameter (hence Ho and Lee can be viewed as a special case of Hull and White with mean reversion parameter a = 0).
  • Given their similarities, the analytical tractability of the Hull and White model is on a par with Ho and Lee, despite added complexity.
  • The rate r is modelled so that on average it is consistent with the behaviour of the initial instantaneous rate. If it diverges, it reverts back at the rate a.
  • Just as in Ho and Lee, the volatility is governed by the model, rather than by the user. However, its structure is determined by two factors – instantaneous standard deviation and mean reversion parameter – providing a wider range of structures than Ho and Lee.
Black, Derman and Toy
  • In this model the short rate r is assumed to be log-normally distributed (thus it is non-negative).
  • It is easy to implement and quite tractable.
  • It can be made to fit the initial term structure, as it can be easily calibrated to caplet market prices.
  • However, as the mean reversion rate is governed by the model, rather than the user, it cannot be calibrated to both swaptions and caps simultaneously.
  • Here, several parameters are a function of time, thus allowing for flexibility.
Black and Karasinski
  • This model is very similar to Black, Derman and Toy in that the short rate r is assumed to be log-normally distributed (non-negative).
  • Here, several parameters are a function of time, allowing for flexibility.
  • Calibration to market prices of caplets is simple, hence it can fit the initial term structure.
  • It is tractable and easy to implement.
  • However, unlike Black, Derman and Toy, this model allows the mean reversion rate to be adjusted independently.
  • Can be implemented using binomial trees, whereby varying time periods associated with different steps can lead to rich patterns.

In summary, short rate models are simple, easy to implement and fast to run. However, neither the short rate nor mean reversion and short rate volatility are directly observable in the market. Thus the models suffer from a general lack of transparency and a difficulty in incorporating market changes.

BGM model

The BGM (Brace–Gatarek–Musiela) model is an arbitrage-free model. It allows the arbitrage-free evolution of discrete points on a yield curve, described either in terms of forward rates or swap rates. Unlike the other models described earlier, it does not rely on the dynamics of unobservable or pseudo-observable quantities (e.g. the instantaneous short rate, instantaneous forward rates, the variance of the short rate etc.). Instead, it directly models market-observable variables, such as Libor forward and swap rates and their volatilities. Due to this direct link to market parameters, it is often called a ‘market model’. It allows for straightforward calibration to market rates/prices, whilst the forward and swap rate volatilities can be derived directly from the model, as they are used directly in its construction.

For the derivation of no-arbitrage conditions, the BGM approach requires:

  • Market completeness (all prices and parameters have to be observable)
  • Deterministic volatilities and correlations, or ones dependent on the same stochastic variables (forward or swap rates) incorporated in the yield curve.

Due to these restrictions, BGM cannot be used when volatilities are stochastically independent or when there are very large random jumps.

The BGM model can be used to price:

  • Path-dependent securities (e.g. average rate options, average price options, outperformance options, trigger swaps etc.)
  • Single-look derivatives with complex payoffs (captions, floorptions or swaptions).

Conclusion

The above was a brief summary of the methods and models used by quantitative analysts to price a wide range of derivatives, with the emphasis on interest rate derivatives. Modelling interest rates is equally important for other classes of securities, as funding, cashflow projections and a host of other parameters are ultimately affected by the evolution of the yield curve. However, pricing credit and commodity derivatives is typically done using product-specific models. Due to the range and complexity of these derivatives, their pricing models are outside the scope of this book.

* A Markov process is a particular type of stochastic process where only the present value of a variable is relevant for predicting the future. This implies that all the history is contained in the present state and is irrelevant for the future evolution of the process.

* These are non-stationary models (several parameters are functions of time).
