Chapter 6 Introduction to Practical Risk Modelling

In this chapter, we describe the role of models in the risk management process. We discuss the different questions for which Risk Management is asked to provide answers. The element they all have in common is the need for a measure, or set of measures, that allows the risks from positions in different asset classes or participations in different types of business to be compared to each other and to the expected and actual profits of each. In addition, the measure will need to allow decision makers to aggregate the risks at different levels of their business including an accurate summary at the very top level that includes all positions and business activities. Value at Risk and Expected Tail Loss are such measures. We examine each of them in detail. Lastly, we explain the key steps in implementing risk modelling in practice.

6.1 The Purpose of Risk Management

In general terms, Risk Management is concerned with understanding risks and using this knowledge to help shape a course of action that is best suited to achieve a given objective. Understanding risks means getting to understand how the outcome of a particular action can vary. Shaping a course of action involves not only deciding on an overall path but also on how to make the right decisions for each smaller step along the path toward a given goal.

In a financial or investment context, the purpose of Risk Management is to understand the range of outcomes in monetary terms from taking on particular assets or liabilities in one’s portfolio. This knowledge is then used to help select combinations of assets and liabilities that best meet a particular set of objectives. As such, financial Risk Management is as applicable to running a bank as it is to running an insurance or pension fund, to the trading of financial products, or to providing investment and wealth management services to external clients. Although in each case Risk Management is not the only factor in decisions (other considerations from marketing to regulation also have to be taken into account), it plays a crucial role in each organisation’s survival and success.

Financial or investment decisions often are delegated from one person or entity to another. This leads to two slightly different functions and perspectives for Risk Management when applied to investment decisions. The first function is to use Risk Management as a tool for decision making. The second is to use Risk Management as a framework for accountability when an investment decision is delegated by a principal to another person or entity acting as an agent on behalf of the principal. We will look at each of the two in turn.

6.1.1 Decision Making

Investment decisions are concerned with selecting particular assets or liabilities and deciding on an appropriate overall combination of them (also called portfolio optimisation; see Chapters 8 and 9).

In most cases, each asset or liability has its own risk profile and thus will provide a range of different results depending on future and as yet unknown events. It is useful to distinguish between Trading, Tactical, and Strategic Decisions since the challenges for Risk Management will be somewhat different in each case. We will briefly look at each in turn.

6.1.1.1 Trading Decisions

Trading decisions require fast decision making and are subject to constraints set by overarching tactical and strategic decisions. When deciding on a transaction, the trader faces three questions:

  • How likely is it that this transaction will help maximise profits?
  • How do the potential rewards of this transaction compare with the prospect of potential losses?
  • How likely is it that the losses or required financial resources will be greater than some acceptable limit at any point during the life of the transaction?

Figure 6.1 illustrates the gradual refinement of decisions in the three steps set out above. Since a trading decision maker normally has the maximisation of (expected) profits as a key objective, the first question is of paramount importance to him or her. However, in order to get a reliable answer to the first question, the trader needs robust answers to the other two questions as well.


Figure 6.1: Gradual refinement of decisions.

In particular, the last question is important: although a transaction may promise outstanding profits taken on its own, it may lead to ultimate failure, either because it has the potential to wipe out the decision maker’s basis for doing business or because it requires resources in excess of those the trading decision maker has or can obtain. The second question is about understanding the nature of the transaction and its likely payoffs (or losses) under different future circumstances.

Having a good answer to both questions paves the way to the answer for the first question: How likely is it that a transaction will help maximise profits? If the trader has only one transaction to consider, the answer to the first question is simply the combination of the answers to the second and third. If the decision maker, however, has a portfolio of assets and/or liabilities, he or she will need to use the information from the answers to the second and third questions to see how the transaction would affect his or her portfolio.

Later in this chapter we will introduce two widely used tools, Value at Risk (VAR) and Expected Tail Loss (ETL), and show you how these can be used to help answer all three questions above.

6.1.1.2 Tactical Decisions

Tactical decisions set the frame for trading decisions within given strategic constraints. The framework set by tactical decisions is usually in the form of asset or transaction type preselection (e.g., trades only in Fixed Income, Equity, or FX), budgets, and risk limits. The aim of tactical decisions is to simplify and coordinate trading decisions and to align these with the general enterprise strategy. The tactical decision maker faces three questions very similar to the ones of the trading decision maker:

  • Which types of transactions will most likely help maximise profits?
  • How do the potential rewards of different types of transactions compare with the prospect of any potential losses from them?
  • For any relevant type of transaction, how likely is it that the losses or required financial resources will be greater than some acceptable limit at any point during the life of any such transaction?

Instead of evaluating and comparing individual transactions, tactical decision makers must evaluate different types of transactions available in their scope of business and decide whether any of them are suitable under given circumstances and what general mix of different types is most likely to lead to success. Again, the two tools, VAR and ETL, introduced later in this chapter, will be of help to provide the answer to the previous three questions.

6.1.1.3 Strategic Decisions

Strategic decisions set the frame for both tactical and trading decision making within any given organisational constraints. The framework set by strategic decision makers is usually in the form of a business model, a strategy roadmap, as well as strategic budgets and risk limits. The business model sets out the direction for the business, selecting certain types of transactions and ruling out others. The strategy roadmap sets out the expected evolution of the business model over a medium- to long-term horizon, usually three to five years.

Strategic budgets and risk limits provide concrete details for the implementation of the business model through tactical and trading decisions. They set out planned resources for achieving the objectives in the roadmap and define the organisation’s risk appetite through strategic risk limits.

The strategic decision maker has to find answers to three questions, which are ultimately related to those of the tactical and trading decision makers:

  • Which business model (mix of different types of transactions) will be most likely to maximise profits?
  • How do the potential rewards of different business models available compare with the prospect of any potential losses from them?
  • For any relevant business model(s), how likely is it that the losses or required financial resources will be greater than some acceptable limit at any point during the life of any such business model(s)?

Yet again, the two tools, VAR and ETL, will be shown to help provide the answer to these three strategic questions.

6.1.2 Accountability

When investment decisions are delegated by a principal to another person or entity acting as an agent on behalf of the principal, it is important to have a reliable and fair framework for accountability in place. Such a framework will allow the agent to demonstrate how they discharged their fiduciary duty and will allow the principal to hold the agent to account for his or her actions.

It is worthwhile distinguishing three different contexts, each requiring a different perspective on accountability. The first context is delegation in the sense of an investment mandate or of running a business unit. The second is delegation within a financial services enterprise such as a bank or insurance company. Finally, the third is delegation within a society or within financial markets themselves.

We will briefly look at each context in turn.

6.1.2.1 Business Unit/Mandate

Consider a principal who delegates the management of a portfolio or a financial service business unit to another person or entity. A real-life example is a client entrusting the management of his assets to a private bank. The client most likely has clear expectations about the amount and types of risks with which he would be comfortable, and similarly which he would see as unsuitable or excessive risks.

The person or entity who takes on the task is equally likely to want to have clear, objective, and easy-to-interpret instructions from the principal on which types and levels of risk are acceptable and which are not.

When it comes time for the agent to render an account to the principal on how they have managed the portfolio mandate or business unit, both principal and agent are likely to want a framework that gives a clear, fair, and accurate picture not only of the results for the period in question but also of the risks taken in order to obtain that result.

Later in this chapter you will see that the two mentioned tools, VAR and ETL, provide just such a framework.

6.1.2.2 Enterprise

Within a financial services enterprise such as a bank or insurance company, many business units have to be coordinated and money needs to be allocated between them in such a way as to maximise the likelihood of achieving the highest possible overall return for the enterprise. At the same time, the level of risk for the enterprise as a whole should not exceed pre-agreed limits.

In order to achieve this, a measure is needed that allows decision makers in different parts of the enterprise to compare the risks between proposed business from different units and add up the risks across all business units to arrive at a risk measure for the enterprise as a whole.

As we will show later, ETL is a measure that allows such risk aggregation.

6.1.2.3 Society/Market

Finally, society or the market in general may find it advantageous to put in place some means of controlling the risks undertaken by financial institutions. This is done with a view to minimising the risk of catastrophic collapses of financial institutions, with all the negative effects such collapses have on markets and society as a whole.

Regulators acting on behalf of governments (representing society) or of self-regulatory organs (acting on behalf of market participants) are particularly interested in ensuring that financial institutions put aside sufficient capital. This “regulatory capital” enables them to survive the rare but foreseeable extreme events they are likely to face at some point during their lives.

In addition, regulators may find it useful for fund managers and other financial product providers to label more clearly the risks of the products they provide.

In both cases VAR and in particular ETL are useful measures that can be used to achieve the desired effect.

6.2 Early Approaches to Risk Management

Since the Middle Ages, when early precursors of our modern financial system emerged in the rich port and trading cities of medieval Europe (such as Florence, Genoa, Lyon, or Antwerp), traders and bankers have been acutely aware that mastering the market and credit risks they faced was vital for their survival and success. This was even more so because at that time no social security networks existed and economic survival was closely connected to physical survival.

For much of modern history, however, Risk Management remained a “black art” that adopted new scientific advances only slowly and cautiously. One of the earliest steps toward practical scientific risk measures for investment products was the concept of duration developed by Macaulay in 1938 (see Section 6.2.1). Markowitz’s seminal work on portfolio theory in the 1950s was the next big step, followed by Black, Scholes, and Merton’s foundation of option pricing theory in the early 1970s.

Since then, work on approaches to market and credit risk has flourished and the “black art” of old gradually has been replaced by modern, more scientific measures. The early approaches to risk modelling prevalent in the 1970s, 1980s, and even the beginning of the 1990s can be divided into two groups:

  • (1) Sensitivity Analysis, which was more common in investment markets; and
  • (2) simple Risk Simulations, which were more common in the banking industry.

We will look at each of the two in turn.

6.2.1 Sensitivity Analysis

The most popular and influential measures of Sensitivity Analysis are best grouped by the three asset classes for which they were first developed: sensitivities for bond and money market securities and portfolios, sensitivities for equity portfolios, and sensitivities for options.

6.2.1.1 Bond–Debt–Interest Rate Sensitivity Measures

The most common, useful, and popular sensitivities for interest rate instruments like bonds and money market securities as well as portfolios built from them are Duration, Modified Duration, and Present Value of an 01.

Duration is a measure of the effective or discounted mean term of a bond or other fixed income (or money market) security. It measures the expected time until the investor gets 100% of his or her money back.

The measure itself is the weighted average of cash flow terms (periods measured from the date of calculation until each cash flow payment), using the present values of the instrument’s cash flows as weights for each of the payments up to and including redemption. Table 6.1 shows the actual and discounted cash flows of a hypothetical 7-year bond that we will use as an example to illustrate the calculation of the Duration measure.

Table 6.1: Schedule of cash flows for a hypothetical coupon bond

Term (in years) Cash Flow in % Present Value in %
1 10.000 9.000
2 10.000 8.110
3 10.000 7.300
4 10.000 6.570
5 10.000 5.920
6 10.000 5.330
7 110.000 52.770


Duration = [Σ (t_i × PV_i)] / [Σ PV_i]

where t_i is the term of the i-th cash flow and PV_i its present value.


To illustrate this concept, consider a seven-year bond with an annual 10% coupon, a price of 95.00%, and a yield of 11.063%. The duration measure is calculated at the date of issuance of the bond. At that date, the term to the first (coupon) cash flow is 1 year, to the second 2 years, and so on. Terms are measured in year fractions.


Duration = (1 × 9.000 + 2 × 8.110 + 3 × 7.300 + 4 × 6.570 + 5 × 5.920 + 6 × 5.330 + 7 × 52.770) / 95.000 ≈ 5.31 years


Modified Duration is a measure of the sensitivity of the price of a bond to changes in its (Gross Redemption) Yield.


Modified Duration = Duration / (1 + Yield)


The Gross Redemption Yield is the Internal Rate of Return of all cash flows to maturity, including the redemption of principal.

The Modified Duration allows you to obtain an approximation of a change in bond price given a change in yield.


ΔPrice ≈ −Modified Duration × Price × ΔYield


To illustrate this concept, consider the same seven year bond as earlier, and assume a 1% change in yield from 11.063% to 12.063%:


Modified Duration = 5.31 / (1 + 0.11063) ≈ 4.78


Using


ΔPrice ≈ −Modified Duration × Price × ΔYield


You can calculate the approximate change in price to be


ΔPrice ≈ −4.78 × 95.00 × 0.01 ≈ −4.54


and the new price of the bond will now be


New Price ≈ 95.00 − 4.54 = 90.46


The Present Value of an 01 (PV01) or Dollar Value of an 01 (DV01) is the change in price of a bond due to a one basis point change in yield.

This implies the same formula as for Modified Duration, except that the change in yield is fixed to one basis point. Since a basis point equals 0.01%, the change is equal to 0.0001. Therefore,


PV01 = Modified Duration × Price × 0.0001


To illustrate this concept consider the same seven-year bond as before. The formula for PV01 or DV01 gives


PV01 = 4.78 × 95.00 × 0.0001 ≈ 0.0454 (per 100 nominal)
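
The three measures can be pulled together in a short script. Below is a minimal sketch in Python (the companion labs use R; the function names here are our own illustrations, not the API of FISensitivity.R) that reproduces the worked example for the bond in Table 6.1:

```python
# Duration, Modified Duration, and PV01 for the 7-year 10% coupon bond
# of Table 6.1 (price 95.00%, yield 11.063%). Illustrative sketch only.

def macaulay_duration(cashflows, yld):
    """Weighted average of cash-flow terms, using present values as weights.

    cashflows: list of (term_in_years, cashflow) pairs.
    Returns (duration, price).
    """
    pvs = [cf / (1.0 + yld) ** t for t, cf in cashflows]
    price = sum(pvs)
    duration = sum(t * pv for (t, _), pv in zip(cashflows, pvs)) / price
    return duration, price

def modified_duration(duration, yld):
    """Sensitivity of price to yield: Duration / (1 + Yield)."""
    return duration / (1.0 + yld)

def pv01(mod_dur, price):
    """Price change for a one basis point (0.0001) move in yield."""
    return mod_dur * price * 0.0001

cashflows = [(t, 10.0) for t in range(1, 7)] + [(7, 110.0)]
yld = 0.11063
dur, price = macaulay_duration(cashflows, yld)
mod_dur = modified_duration(dur, yld)

print(round(price, 2))                    # ~95.0
print(round(dur, 2))                      # ~5.31 years
print(round(mod_dur, 2))                  # ~4.78
print(round(pv01(mod_dur, price), 4))     # ~0.0454
print(round(-mod_dur * price * 0.01, 2))  # ~-4.54 for a 1% yield rise
```

The last line reproduces the approximate price change for the 1% yield rise discussed above.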


Again, you are invited to try out the above measures on the companion web site of the book using Lab Exercise 6.1 below.

Lab Exercise 6.1: Fixed Income Sensitivity

  • 1. Open the file FISensitivity.R from the set of sample files from the companion web site for this book (http://modelbook.bancstreet.com/) and explore the R code illustrating the performance calculation shown above.
  • 2. Using the R application load the file FISensitivity.R and run the following examples at the command prompt in your R GUI:

    R> InitialiseFISensitivity()

    R> RunFISensitivity()

    R> FISensitivityReport()

6.2.1.2 Equity Sensitivity Measures

The two most common and popular sensitivities for equity instruments and equity portfolios are the Sharpe Ratio and the Treynor Ratio.

The Sharpe Ratio is a measure of the Risk Adjusted Annualised Excess Return of an equity or equity portfolio. The two cases are nested, as a one-equity portfolio is simply a single equity. We will therefore speak in the following only of equity portfolios.

The Excess Return is the return of an instrument in excess of the risk-free market rate over a given period. The Sharpe Ratio is named after its inventor, the Nobel Laureate William Sharpe. The risk adjustment in the Sharpe Ratio is based on the total risk of the equity or portfolio.


Sharpe Ratio = (APR − RFR) / StdDevAPR


where:

  • APR = Annualised Portfolio Return
  • RFR = Risk Free Interest Rate
  • StdDevAPR = Standard Deviation of the Annualised Portfolio Return

To illustrate the Sharpe Ratio, consider a portfolio with an annualised return of 9.73% and a standard deviation of the annualised portfolio return of 22.48%, and assume a risk-free interest rate of 2%.


Sharpe Ratio = (9.73 − 2.00) / 22.48 ≈ 0.344


The Treynor Ratio is an alternative measure of risk adjusted return for an equity portfolio. It replaces the risk measure of the Sharpe Ratio (the Standard Deviation of the Annualised Portfolio Return) with a measure of systematic or market risk: the Beta estimated for the portfolio via the Capital Asset Pricing Model (CAPM). It is named after Jack Treynor, who first proposed it.


Treynor Ratio = (APR − RFR) / Beta


where:

  • APR = Annualised Portfolio Return
  • RFR = Risk Free Interest Rate
  • Beta = Systematic risk of the stock or portfolio according to the CAPM

To illustrate the Treynor Ratio, consider a portfolio with an annualised return of 9.73% and a Beta of 0.9977, and assume a risk-free interest rate of 2%.


Treynor Ratio = (9.73 − 2.00) / 0.9977 ≈ 7.75


Both the Sharpe and the Treynor Ratio are relative in the sense that their absolute values do not have a direct economic meaning. They mostly serve to compare investments in differently composed equity portfolios.
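
Both ratios are straightforward to compute. The following Python sketch (illustrative function names of our own; the book’s lab code lives in EquitySensitivity.R) reproduces the two worked examples above:

```python
# Sharpe and Treynor ratios for the worked examples: APR 9.73%,
# risk-free rate 2%, StdDevAPR 22.48%, Beta 0.9977. Illustrative sketch.

def sharpe_ratio(apr, rfr, stddev_apr):
    """Risk-adjusted excess return using total risk (standard deviation)."""
    return (apr - rfr) / stddev_apr

def treynor_ratio(apr, rfr, beta):
    """Risk-adjusted excess return using systematic risk (CAPM Beta)."""
    return (apr - rfr) / beta

print(round(sharpe_ratio(9.73, 2.0, 22.48), 3))    # ~0.344
print(round(treynor_ratio(9.73, 2.0, 0.9977), 3))  # ~7.748
```

As noted above, the absolute values carry no direct economic meaning; the ratios are intended for comparing differently composed portfolios with one another.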

We recommend that you study the usage and effect of both ratios using Lab Exercise 6.2.

Lab Exercise 6.2: Equity Sensitivity Measures

  • 1. Open the file EquitySensitivity.R from the set of sample files from the companion web site for this book (http://modelbook.bancstreet.com/) and explore the R code illustrating the performance calculation shown above.
  • 2. Using the R application load the file EquitySensitivity.R and run the following examples at the command prompt in your R GUI:

    R> InitialiseEquitySensitivity()

    R> RunEquitySensitivity ()

    R> EquitySensitivityReport()

6.2.1.3 Option Sensitivity Measures

Sensitivity measures for simple options can be derived from the Black & Scholes option pricing formula. The two most commonly used sensitivities are Delta, which measures the sensitivity of the option price to changes in the underlying price, and Gamma, which measures the sensitivity of the option’s Delta to changes in the underlying price.

In algebraic terms, if V is the value of the option and spot is the price of the underlying, the two being linked via


V = f(spot, strike, t, σ, r)


then Delta is the first derivative of V with respect to spot, and Gamma the second. Gamma hence measures the speed, or velocity, at which the change in option price due to a change in the underlying price increases or decreases as the changes become larger and larger.

We use the Black & Scholes option pricing formula to illustrate these measures.

Black–Scholes Option Pricing Formula


Call = spot × N(d1) − strike × e^(−r × t) × N(d2)


Put = strike × e^(−r × t) × N(−d2) − spot × N(−d1)


d1 = [ln(spot / strike) + (r + σ² / 2) × t] / (σ × √t)


d2 = d1 − σ × √t

where:

  • t: time to expiry as a fraction of year (365 days)
  • σ: annualised volatility of the underlying
  • r: continuously compounded interest rate
  • N(d): standardised normal cumulative probability function

Delta is a measure of the sensitivity of the price of an option to changes in the underlying price. It is the first derivative of V with respect to spot.


Delta = ∂V / ∂spot


Using the Black & Scholes formula this is


Delta(call) = N(d1)



Delta(put) = N(d1) − 1


where:

  • N(d): standardised normal cumulative probability function
  • d1 = [ln(spot / strike) + (r + σ² / 2) × t] / (σ × √t)

Gamma is a measure of the sensitivity of the Delta of an option to changes in the price of the underlying asset. It is the second derivative of V with respect to spot.


Gamma = ∂²V / ∂spot²


Using the Black & Scholes formula this is


Gamma = Z / (spot × σ × √t)


where:

  • Z = (1 / √(2π)) × e^(−d1² / 2), the standard normal density at d1
  • N(d): standardised normal cumulative probability function
  • d1 = [ln(spot / strike) + (r + σ² / 2) × t] / (σ × √t)
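
The pricing formula and both sensitivities can be sketched as follows in Python (the companion labs use R; the function names and the sample parameters, an at-the-money call with six months to expiry, are our own illustrative assumptions):

```python
import math

def d1(spot, strike, t, sigma, r):
    """d1 term of the Black & Scholes formula."""
    return (math.log(spot / strike) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))

def norm_cdf(x):
    """Standardised normal cumulative probability function N(d)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    """Standard normal density (the Z term in the Gamma formula)."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def call_price(spot, strike, t, sigma, r):
    """Black & Scholes value of a European call."""
    d_1 = d1(spot, strike, t, sigma, r)
    d_2 = d_1 - sigma * math.sqrt(t)
    return spot * norm_cdf(d_1) - strike * math.exp(-r * t) * norm_cdf(d_2)

def call_delta(spot, strike, t, sigma, r):
    """First derivative of the call value with respect to spot: N(d1)."""
    return norm_cdf(d1(spot, strike, t, sigma, r))

def gamma(spot, strike, t, sigma, r):
    """Second derivative with respect to spot (identical for calls and puts)."""
    d_1 = d1(spot, strike, t, sigma, r)
    return norm_pdf(d_1) / (spot * sigma * math.sqrt(t))

# At-the-money call: spot = strike = 100, 6 months, 20% vol, 5% rate.
print(round(call_price(100.0, 100.0, 0.5, 0.2, 0.05), 2))  # ~6.89
print(round(call_delta(100.0, 100.0, 0.5, 0.2, 0.05), 3))  # ~0.598
print(round(gamma(100.0, 100.0, 0.5, 0.2, 0.05), 4))       # ~0.0274
```

Note how Delta is close to 0.5 for an at-the-money option, as the formula N(d1) suggests.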

Using option sensitivity measures takes some practice, which you can gain by trying out Lab Exercise 6.3 below.

Lab Exercise 6.3: Option Sensitivity Measures

  • 1. Open the file OptionSensitivity.R from the set of sample files from the companion web site for this book (http://modelbook.bancstreet.com/) and explore the R code illustrating the performance calculation shown above.
  • 2. Using the R application load the file OptionSensitivity.R and run the following examples at the command prompt in your R GUI:

    R> InitialiseOptionSensitivity()

    R> RunOptionSensitivity ()

    R> OptionSensitivityReport()

6.2.2 Risk Simulation

Sensitivities are helpful when comparing individual assets or portfolios but they do not give much insight into the variation of risk through time or under specific circumstances. In order to fill this gap, practitioners have developed a variety of simulation approaches. Two approaches, Gap Analysis and Stylised Scenarios, have found particularly widespread use and we therefore present each of them in the following sections.

6.2.2.1 Gap Analysis

Gap Analysis has its roots in the need of commercial banks to match their lending at different maturities with both equity and debt funding at appropriate maturities, in such a way that the bank remains solvent and profitable.

A Gap Analysis worksheet shows the period-by-period and cumulative gap between an organisation’s financial assets and its liabilities. Although Gap Analysis originated in commercial banks, it is useful for any entity with material and nontrivial financial asset and liability sections in their actual or notional balance sheet.

The technique is best illustrated by way of a worked example.

Assume a newly opened bank with a balance sheet at the end of day one as shown in Table 6.2. The bank has raised a total of $10,000 USD, split between $3,000 USD in equity and $7,000 USD raised by issuing certificates of deposit in its own name in the money market.

Table 6.2: Balance sheet

Item Notes Amount
Assets    
Deposits with Other Banks  
Commercial Loans 1 10,000
Retail Loans  
    10,000
Liabilities    
Deposits from Customers  
Short Term Borrowing 2 7,000
Long Term Borrowing  
Equity 3 3,000
    10,000

Notes

1 . Long-term (15 years) variable rate business loan with an interest rate resetting every 6 months to the current spot market rate for USD denominated 6 Month Certificates of Deposit (CD) plus a 150 Basis Point spread.

2 . Issuance of Certificates of Deposit (CD) maturing in six months’ time.

3 . Equity Shares issued for cash.

The bank has invested these funds by making a variable rate loan to a high credit quality business borrower, maturing in 15 years’ time. The terms of the loan allow the bank to reset the annualised interest rate for the loan every six months to the prevailing interest rate for Certificates of Deposit plus a spread of 150 basis points, or 1.5%.

In the Gap Analysis, the CDs issued by the bank are recorded in the six-month liability bucket. The commercial loan is shown in the six-month asset bucket since its interest rate will reset to the interest rate of six-month instruments every six months. The gap of $3,000 USD in the six-month bucket column reflects the equity funds of $3,000 USD. Equity is not recorded as a liability with a specific maturity and is therefore always reflected in the interest rate gap.

Although crude, Gap Analysis is very useful in illustrating, at a high level, an organisation’s risk exposure to interest rates. Table 6.3 shows the detailed workings for the Gap Analysis example.

Table 6.3: Gap analysis

Interest Sensitivity Gap Analysis
Item 1 Day 8–30 Days 31–60 Days 61–90 Days 3–6 Mth 7–9 Mth 10–12 Mth 2–4 Yrs 5–7 Yrs 8–10 Yrs 11+ Yrs TOTAL
Assets                        
Bank Deposits                       0
Commercial Loans         10000             10000
Retail Loans                       0
  0 0 0 0 10000 0 0 0 0 0 0 10000
Liabilities                        
Customers Deposits                       0
Short Term Borrowing         7000             7000
Long Term Borrowing                       0
  0 0 0 0 7000 0 0 0 0 0 0 7000
Interest Rate Gap 0 0 0 0 3000 0 0 0 0 0 0 3000
Cumulative IR Gap 0 0 0 0 3000 3000 3000 3000 3000 3000 3000 3000
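
The mechanics behind Table 6.3 can be captured in a few lines. The following Python sketch (the bucket names follow the table; the function itself is our own illustration, not part of any lab file) computes the per-bucket and cumulative interest rate gaps for the worked example:

```python
# Interest-sensitivity gap sketch for the worked example of Table 6.3.
# Amounts in USD; buckets as in the table.

BUCKETS = ["1 Day", "8-30 Days", "31-60 Days", "61-90 Days", "3-6 Mth",
           "7-9 Mth", "10-12 Mth", "2-4 Yrs", "5-7 Yrs", "8-10 Yrs", "11+ Yrs"]

def gap_analysis(assets, liabilities):
    """Per-bucket and cumulative interest rate gap (assets minus liabilities)."""
    gaps, cumulative, running = [], [], 0.0
    for bucket in BUCKETS:
        gap = assets.get(bucket, 0.0) - liabilities.get(bucket, 0.0)
        running += gap
        gaps.append(gap)
        cumulative.append(running)
    return gaps, cumulative

# The commercial loan reprices every six months, so it sits in the
# 3-6 month bucket alongside the six-month CDs that fund it.
assets = {"3-6 Mth": 10000.0}
liabilities = {"3-6 Mth": 7000.0}
gaps, cum = gap_analysis(assets, liabilities)
print(gaps[4], cum[-1])  # 3000.0 3000.0
```

The $3,000 USD gap in the 3–6 month bucket, carried through every later bucket cumulatively, matches the equity funding of the example.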

6.2.2.2 Stylised Scenario Analysis

The use of Stylised Scenario Analysis started to grow with the increasing availability of ever cheaper and ever more powerful computer hardware. Stylised Scenario Analysis involves the simulation of the financial status of an organisation under different hypothetical scenarios.

Although particularly popular with banks, this technique is equally applicable to investment portfolios, insurance, or pension funds. Stylised Scenarios are a key element for stress testing where the scenarios focus on extreme events rather than more likely expected changes.

Similar to Gap Analysis, Stylised Scenario Analysis is best illustrated using a worked example.

Assume a bank with a balance sheet and income statement as shown in column 01 of Table 6.4 on page 157. To see how the balance sheet and financial position of the bank would look under different circumstances, the bank needs to go through a number of steps.

  • Step 1: The bank will have to define a range of different scenarios. In the example below, some stylised scenarios are shown in columns 02 to 09 and are briefly characterised by descriptive titles such as “3% Yield curve shift downward.” Each scenario needs to be cast into a form that can be used as parameters for the numerical models that implement the simulation.
  • Step 2: The bank will have to build a model for how their positions in different assets may change under different assumptions. Will lending to business or retail consumers for, say, a given fixed rate or for a certain variable rate loan product go up or down with a rate increase, and if so by how much? Each such question must be cast into a form that can be integrated into a numerical simulation model ultimately covering all assets and liabilities.
  • Step 3: The bank will have to build numerical valuation and income estimation models for all assets that are to be included in the calculations.
  • Step 4: Finally, the bank will have to run the simulation model from step 2 using the different parameters for each scenario from step 1. The results then can be used as the input for the model from step 3 together with the parameters from step 1 for the corresponding scenarios.
  • Step 5: After running the model from step 3 for each scenario, the results for all scenarios from steps 1 to 3 can be summarised and analysed in suitable reports such as the one used as an illustration in this example.

Table 6.4: Scenario simulation summary

Scenario Simulation Summary for Sample Bank Inc.
Item 01 02 03 04 05 06 07 08 09
  Baseline 3% Yield curve shift downward 3% Yield curve shift upward 6% Yield curve shift upward 3% YC twist up at long term 6% YC twist up at long term 3% YC twist up at short term 6% YC twist up at short term 10% Drop in Equity Mkts
Assets                    
MM Assets 65 12 92 127 114 78 73 87 54
Commercial Loans 100 120 80 60 60 90 90 110 95
Retail Loans 27 80 20 18 29 23 19 15 33
Others 8 8 8 5 7 9 8 3 8  
  200 220 200 210 210 200 190 215 190
Liabilities                    
Customers Deposits 60 40 60 50 40 60 60 90 60
Short Term Borrowing 20 60 20 40 50 20 10 5 10
Long Term Borrowing 80 80 80 80 80 80 80 80 80  
Equity 40 40 40 40 40 40 40 40 40
  200 220 200 210 210 200 190 215 190
Income                    
Interest Income 10 9 11 12 9 8 11.5 13 9
Fee Income 9 11 9 8 11 12 8 5 3
Other Income 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9
Admin Expenses (11) (9) (10) (12) (9) (8) (10) (11) (7)
Net Income 8.9 11.9 10.9 8.9 11.9 12.9 10.4 7.9 5.9

Note: All numbers are in terms of millions of USD
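
The five steps above can be sketched as follows. The scenario parameters and the deliberately simple linear “models” below are illustrative stand-ins for the full position and valuation models of steps 2 and 3 (Python is used here; the book’s labs are in R):

```python
# Minimal scenario-simulation sketch. All parameters and model
# coefficients are made up for illustration only.

# Step 1: scenarios as parameter sets (parallel yield curve shifts here).
scenarios = {"Baseline": 0.0, "3% shift down": -0.03, "3% shift up": 0.03}

# Step 2: toy position model driven by the rate shift.
def position_model(rate_shift):
    # Assume loan demand falls as rates rise (purely illustrative).
    return {"Commercial Loans": 100 - 500 * rate_shift}

# Step 3: toy income model for the repricing loan book.
def income_model(positions, rate_shift):
    return positions["Commercial Loans"] * (0.10 + rate_shift)

# Steps 4 and 5: run every scenario and summarise the results.
summary = {}
for name, shift in scenarios.items():
    positions = position_model(shift)
    summary[name] = round(income_model(positions, shift), 2)
print(summary)
```

A real implementation replaces the two toy functions with full balance sheet and income models, and the summary with a report like Table 6.4.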

6.3 Modern Approaches to Risk Management

Although many of the earlier approaches such as Sensitivities, Gap Analysis, or Stylised Scenario Analysis still have their usefulness, none of them can easily be applied across all types of risks to which a financial institution or other organisation is exposed.

In the first part of this chapter we looked at the different questions for which Risk Management is asked to provide answers. The element they all have in common is the need for a measure that can be used across different asset classes or different types of financial services business.

This measure, or set of measures, should allow the risks from positions in different asset classes or participation in different types of business to be compared to each other and to the expected and actual profits of each. In addition, the measure will need to allow decision makers to aggregate the risks at different levels of their business, including an accurate summary at the very top level that includes all positions and business activities.

Value at Risk (VAR) and Expected Tail Loss (ETL) are such measures and we will now look at each in more detail.

6.3.1 Value At Risk (VAR)

Value at Risk (VAR) is a measure that reports the maximum loss from holding an asset (or carrying on a business activity) over a given period at a given probability level; that is, the loss that will not be exceeded with that probability. The easiest way to illustrate what this means is by way of example.

Assume that we are buying 10 shares of a company listed on the Euronext Stock Exchange (Paris) at that exchange’s end-of-day price of 8.55 EUR.

The value of our holding at the end of the first day is therefore 10 stocks at 8.55 EUR or 855.00 EUR in total. We then record the end-of-day price of that stock for each day thereafter for exactly 100 days. Table 6.5 shows the evolution of our position over time assuming that we do not change the number of stocks we hold.

Table 6.5: Value of an initial holding of 10 stocks bought at 8.55 EUR on date t

Day t+i Price Value of Position P/L Result in EUR as compared to value on previous day
1 8.55 EUR 855 EUR N/A
2 8.78 EUR 878 EUR +23.00 EUR
3 8.75 EUR 875 EUR −3.00 EUR
… … … …
95 8.25 EUR 825 EUR −11.00 EUR
96 8.30 EUR 830 EUR +5.00 EUR
97 8.32 EUR 832 EUR +2.00 EUR
98 8.28 EUR 828 EUR −4.00 EUR
99 8.22 EUR 822 EUR −6.00 EUR
100 8.29 EUR 829 EUR +7.00 EUR

All Profit/Loss (P/L) calculations are done with reference to the previous day.

In our example we have:

  • On date t = 2 a stock price of 8.78 EUR per share, leading to a position value of 878.00 EUR and a hypothetical (as not realised) profit of +23.00 EUR on the portfolio compared to date t = 1.
  • On date t = 3 a stock price of 8.75 EUR per share, leading to a position value of 875.00 EUR and a hypothetical (as not realised) loss of −3.00 EUR on the portfolio compared to date t = 2.
  • On date t = 100 a stock price of 8.29 EUR per share, leading to a position value of 829.00 EUR and a hypothetical (as not realised) profit of +7.00 EUR on the portfolio compared to date t = 99.

With these numbers we are able to do simple VAR calculations. Because our valuations, and hence our P/L, were calculated on a daily basis, our data constrains us to a VAR measure for a single day.

If we now sort the 100 P/L results in descending order, as shown in Table 6.6, we can simply read off the one-day VAR for a given probability level by going to the corresponding row, reading the P/L, and multiplying it by −1.

Table 6.6: One-day VAR table for a hypothetical equity position

Row No P/L Result (EUR) = −VAR for a Given Probability Probability Level (P)
1 25.5 1%
2 21.9 2%
3 18.7 3%
… … …
95 −10.0 95%
96 −11.8 96%
97 −13.0 97%
98 −13.3 98%
99 −13.7 99%
100 −20.0 100%

To calculate VAR at a given level, just read the P/L result from the corresponding row and multiply by −1; for example, one-day VAR at 95% is −10.00 EUR × −1, or simply 10.00 EUR, and VAR at 99% is 13.70 EUR.
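The sort-and-read-off procedure is easy to sketch in code. The following is a Python sketch offered purely for illustration (the book's companion examples are in R, and the P/L figures below are made up, not the actual 100-day series from Table 6.5):

```python
# Hypothetical daily P/L results in EUR (illustrative figures only,
# not the actual 100-day series from Table 6.5).
daily_pl = [23.0, 7.0, 5.0, 2.0, -3.0, -4.0, -6.0, -10.0, -11.8, -20.0]

def historical_var(pl, probability):
    """One-day historical VAR: sort the P/L from best to worst, go to
    the row matching the probability level, and multiply by -1."""
    ordered = sorted(pl, reverse=True)             # best result first
    row = int(round(probability * len(ordered)))   # e.g. 95% of 100 rows -> row 95
    return -ordered[row - 1]                       # rows are 1-based, lists 0-based

print(historical_var(daily_pl, 0.9))   # 11.8 -> the 90% one-day VAR in EUR
```

Applied to the full 100-row table in the text, the same function would return the 10.00 EUR we just read off by hand at the 95% level.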

Thus, according to our table, at a probability of 95% the most we can expect to lose over one day is 10.00 EUR.

The beauty of VAR is that we can calculate it for any type of asset class or financial business. We can even calculate it for portfolios of assets or business units. Thus, with VAR we have a measure of risk that can span entire portfolios or enterprises of arbitrary complexity.

VAR of a Portfolio IS NOT EQUAL to the sum of the VAR of its components.

One problem that remains with VAR is that it is not possible simply to obtain the VAR of a portfolio or an enterprise once we have the VAR measures of its component parts. The VAR of a portfolio or enterprise is likely to be very different from the sum of the VAR measures of its parts. The reason is that the P/L of the various parts are likely to be (positively or negatively) correlated. Only if the parts moved in perfect lockstep (perfect positive correlation) would the VAR of the aggregate equal the sum of the VAR of its parts. Otherwise, the aggregate VAR depends on the correlations between the P/L of the parts, not just on their individual VAR measures.

The effect of this is that in some cases, the VAR of a portfolio or enterprise can be significantly higher than the sum of the VAR of its parts, whereas in others it will be significantly less.
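The aggregation effect can be seen in a small numerical experiment. The sketch below (Python, with made-up figures; nothing here comes from the text) builds two hypothetical business units whose daily P/L share a common driver, so they are positively but not perfectly correlated, and compares the combined VAR to the sum of the stand-alone VARs:

```python
import random

random.seed(42)  # reproducible illustration

def historical_var(pl, probability):
    """Read VAR off the sorted results, as in the worked example."""
    ordered = sorted(pl, reverse=True)
    return -ordered[int(round(probability * len(ordered))) - 1]

# Two hypothetical business units whose daily P/L share a common driver,
# making them positively (but not perfectly) correlated.
n = 10_000
common = [random.gauss(0, 1) for _ in range(n)]
pl_a = [10.0 * (0.8 * c + 0.6 * random.gauss(0, 1)) for c in common]
pl_b = [10.0 * (0.8 * c + 0.6 * random.gauss(0, 1)) for c in common]
pl_total = [a + b for a, b in zip(pl_a, pl_b)]

sum_of_vars = historical_var(pl_a, 0.95) + historical_var(pl_b, 0.95)
portfolio_var = historical_var(pl_total, 0.95)

# The combined VAR sits below the sum of the parts (diversification),
# but well above the VAR of either part on its own.
print(portfolio_var < sum_of_vars)   # True
```

With negative correlation the gap would be larger still, while with heavy-tailed P/L the combined VAR can even exceed the sum, which is why the aggregate must always be computed from the combined P/L rather than from the parts' VAR figures.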

If you are now tempted to rush off and implement the method set out in the previous example, you should wait a little. Although the example is correct, it has been constructed to make it simple and easy to understand, and a lot of real-life practicalities are not covered. In Chapter 7 we will look at three different ways of calculating VAR in real-life practice:

6.3.1.1 Historical VAR

Historical VAR is very similar to the method we just used in our example, with the difference that Historical VAR allows us to deal with much bigger data sets than 100 days, and to calculate VAR for horizons other than one day.

6.3.1.2 Parametric VAR

The second method, Parametric VAR, is a simplification in terms of calculation effort that can be used in many circumstances. It reduces the need to keep extensive data sets of historical P/L.

Instead of “reading off ” VAR from tables of historical results, Parametric VAR works by first obtaining parametric statistics (such as mean or variance) that describe the tables of historical results. Having these at hand allows calculating the VAR measures through the use of statistical formulas without the need to refer back to or keep the original tables of results.
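As a concrete sketch, assume that daily P/L is normally distributed with mean μ and standard deviation σ. (The normal model is an assumption made here for illustration only; the text does not prescribe a distribution.) Then VAR at probability p has the closed form VAR_p = z_p·σ − μ, where z_p is the standard normal quantile:

```python
from statistics import NormalDist

def parametric_var(mean_pl, sd_pl, probability):
    """Parametric one-day VAR under an assumed normal P/L model:
    VAR_p = z_p * sigma - mu, with z_p the standard normal quantile."""
    z = NormalDist().inv_cdf(probability)
    return z * sd_pl - mean_pl

# Hypothetical inputs: daily P/L with mean 0.10 EUR and std. dev. 6.00 EUR.
# At 95%, z is about 1.645, so the VAR is roughly 1.645 * 6 - 0.1 = 9.77 EUR.
```

Only the two parameters need to be stored and updated; the historical table itself can be discarded once they have been estimated.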

6.3.1.3 Simulation VAR

The third method, Simulation VAR, is similar to Historical VAR, but instead of using historical results it uses simulation methods like Monte Carlo or Quasi Monte Carlo Simulation to generate the tables of results using parametric statistics and estimations as an input.

Although at first sight this may look like a wasteful combination, it actually is a vital tool to extend VAR into areas where both Historical and Parametric VAR run into unsolvable practical problems.
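A minimal sketch of the idea follows, again assuming a normal P/L model purely for illustration; in practice Simulation VAR earns its keep with models that have no closed form:

```python
import random

random.seed(7)  # reproducible illustration

def simulation_var(mean_pl, sd_pl, probability, n_paths=100_000):
    """Monte Carlo VAR sketch: simulate daily P/L from an assumed model,
    then read VAR off the sorted simulated results exactly as with the
    historical table."""
    simulated = sorted((random.gauss(mean_pl, sd_pl) for _ in range(n_paths)),
                       reverse=True)
    return -simulated[int(round(probability * n_paths)) - 1]

# With enough paths this converges to the parametric answer for the same
# normal model (about 9.77 EUR for mean 0.1, std. dev. 6.0 at 95%).
```

Swapping `random.gauss` for a richer model (fat tails, jumps, path-dependent payoffs) changes nothing else in the procedure, which is the point of the method.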

VAR measures the largest loss only at a given probability level; it ignores the potentially much bigger and more devastating losses in the remainder of the results.

Before we conclude this first introduction to VAR, we need to look at one further important drawback of VAR.

Since VAR gives us only the predicted maximum loss at the given probability, it ignores any bigger losses in the result set. These losses can potentially be much bigger and much more devastating than the loss suggested by the VAR. When using VAR this should always be borne in mind.

However, VAR has a close cousin, Expected Tail Loss (ETL), that overcomes this drawback. The final part of this chapter will give you a brief introduction to ETL.

Lab Exercise 6.4: Simple VAR

  • 1. Open the file SimpleVAR.R from the set of sample files from the companion web site for this book (http://modelbook.bancstreet.com/) and explore the R code illustrating the VAR calculation shown above.
  • 2. Using the R application load the file SimpleVAR.R and run the following examples at the command prompt in your R GUI:

    R> InitialiseSimpleVAR()

    R> RunSimpleVAR()

    R> SimpleVARReport()

6.3.2 Expected Tail Loss (ETL)

Expected Tail Loss (ETL) is a measure that tells us the loss we can expect, on average, in those cases where the loss from holding an asset (or carrying on a business activity) over a given period reaches or exceeds the VAR at a given probability level. The easiest way to illustrate what this means is by resuming our simple example from the section on VAR measures.

Assume we hold the same hypothetical equity position as in Section 6.3.1. Again we perform the same procedure: observe the mark-to-market values of the position, and hence the day-on-day profits or losses, and sort them by size from biggest profit to largest loss. The result is shown in Table 6.7.

Table 6.7: One-day VAR table for a hypothetical equity position

Row No P/L Result (EUR) = −VAR for a Given Probability Probability Level (P)
1 25.5 1%
2 21.9 2%
3 18.7 3%
… … …
95 −10.0 95%
96 −11.8 96%
97 −13.0 97%
98 −13.3 98%
99 −13.7 99%
100 −20.0 100%

To calculate VAR at a given level, just read the P/L result from the corresponding row and multiply by −1; for example, one-day VAR at 95% is −10.00 EUR × −1, or simply 10.00 EUR, and VAR at 99% is 13.70 EUR.

In order to create a “ready reckoner,” or table from which we can read ETL, we need to create a new table very similar to the one we already have but with one important difference.

In the column where we had the P/L we put the ETL for the given probability level. This is calculated from the VAR table by summing all P/L from the row number concerned down to row number 100, dividing the sum by the number of rows added up and multiplying the result by −1.

If we carry out this procedure with the data from the earlier VAR table, we can obtain the ETL table shown in Table 6.8.

Table 6.8: One-day ETL

One-day ETL Table for a Hypothetical Equity Position
Row No ETL for a Given Probability (EUR) Probability Level (P)
95 13.63 95%
96 14.36 96%
97 15.00 97%
98 15.67 98%
99 16.85 99%
100 20.00 100%

To obtain ETL for this position at a given probability level, read the ETL from the corresponding row; for example, one-day ETL for our hypothetical equity position at 95% is 13.63 EUR and ETL at 99% is 16.85 EUR.

Now we can again simply read off the ETL for a desired probability level. Note that at each probability level the ETL is at least as large as the corresponding VAR, because it also averages in the worse outcomes: at 95% the VAR is 10.00 EUR, while the ETL is 13.63 EUR.
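The averaging procedure can be sketched in a few lines as well. The following Python sketch is for illustration only, with a short made-up P/L list rather than the full 100-day table (the last three entries echo the worst rows of the VAR table):

```python
# Hypothetical one-day P/L results in EUR, already sorted best first
# (illustrative figures, not the full 100-day table from the text).
sorted_pl = [23.0, 7.0, 5.0, 2.0, -3.0, -4.0, -6.0, -10.0, -11.8, -20.0]

def historical_etl(sorted_pl, probability):
    """ETL at level p: average the P/L from the row matching p down to
    the last (worst) row, then multiply by -1."""
    start = int(round(probability * len(sorted_pl))) - 1   # 1-based row -> 0-based index
    tail = sorted_pl[start:]
    return -sum(tail) / len(tail)

print(historical_etl(sorted_pl, 0.9))   # averages -11.8 and -20.0 -> 15.9
```

Because the tail average always includes the worst results, `historical_etl` never returns less than `historical_var` at the same probability level.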

As with VAR, this example is a simplification. To get you started with real-life modelling of ETL, the next chapter will also look at three practical ways of calculating ETL, analogous to the corresponding VAR methods: Historical ETL, Parametric ETL, and Simulation ETL.

6.3.2.1 Historical ETL

The first method, Historical ETL, is again very similar to the method we used in our previous example with the difference that Historical ETL allows us to deal with much bigger data sets than 100 days and to calculate ETL for horizons other than one day.

6.3.2.2 Parametric ETL

The second method, Parametric ETL, is again a simplification that can be used in many circumstances and that reduces the need to keep extensive data sets of historical P/L.

Instead of “reading off” ETL from tables of historical results, Parametric ETL works by first obtaining parametric statistics that describe the tables of historical results and then calculating the ETL from those statistics through the use of a formula, without the need to refer back to or keep the original tables of results.
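For the same assumed normal P/L model used in the Parametric VAR sketch (again an assumption for illustration, not a formula given in the text), ETL also has a closed form: ETL_p = σ·φ(z_p)/(1 − p) − μ, where φ is the standard normal density and z_p the standard normal quantile:

```python
import math
from statistics import NormalDist

def parametric_etl(mean_pl, sd_pl, probability):
    """Parametric ETL under an assumed normal P/L model:
    ETL_p = sigma * phi(z_p) / (1 - p) - mu."""
    z = NormalDist().inv_cdf(probability)
    phi_z = math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)  # standard normal density
    return sd_pl * phi_z / (1.0 - probability) - mean_pl

# At 95% the ETL multiplier is about 2.06 standard deviations, against
# about 1.64 for VAR: for the same model ETL always exceeds VAR.
```

As with Parametric VAR, only the mean and standard deviation need to be retained once they have been estimated.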

6.3.2.3 Simulation ETL

The third method, Simulation ETL, is similar to Historical ETL, but instead of using historical results it uses simulation methods like Monte Carlo or Quasi Monte Carlo Simulation to generate the tables of results, using parametric statistics and estimations as an input. Although at first sight this may look like a wasteful combination, it is actually a vital tool for extending ETL into areas where both Historical and Parametric ETL run into unsolvable practical problems.

Lab Exercise 6.5: Simple ETL

  • 1. Open the file SimpleETL.R from the set of sample files from the companion web site for this book (http://modelbook.bancstreet.com/) and explore the R code illustrating the ETL calculation shown above.
  • 2. Using the R application load the file SimpleETL.R and run the following examples at the command prompt in your R GUI:

    R> InitialiseSimpleETL()

    R> RunSimpleETL()

    R> SimpleETLReport()
