© Carlos Oliveira 2016

CARLOS OLIVEIRA, Options and Derivatives Programming in C++, 10.1007/978-1-4842-1814-3_10

10. Algorithms for Numerical Analysis

Carlos Oliveira

(1)Monmouth Junction, New Jersey, USA

Equation solving is one of the main building blocks for financial algorithms used in the analysis of options and financial derivatives. This is a consequence of the nature of options pricing, which is rooted in the Black-Scholes model: many of the techniques involved in options pricing require the efficient solution of equations and other mathematical formulations.

Given the importance of mathematical techniques in the pricing of such derivatives, it is important to be able to calculate the solution for particular mathematical models. Although this is a vast area of numerical programming, I will present a few illustrations of numerical algorithms that can be used as a starting point for developing your own C++ code.

In this chapter, you will see programming examples for a few fundamental algorithms in numerical programming. In particular, you will learn techniques to calculate equation roots and integrate functions in C++, with a discussion of how they work and how they are applied. The chapter also discusses numerical error and stability issues that present a challenge for developers in the area of quantitative financial programming.

  • Mathematical function representation: I initially discuss a representation for mathematical functions that can be used as the starting point for algorithms that manipulate these mathematical abstractions.

  • Root finding algorithms: One of the most common types of numerical algorithms, root finding techniques are used to find one or more roots of an equation, which are the points where the equations have zero value.

  • Integration algorithms: Another common type of numerical algorithms, integration techniques are used to calculate the numerical value of an integral (which can also be described as the area under a function for single dimensional equations).

  • Numerical examples in C++: This chapter also includes C++ code that implements many of these concepts, with concrete examples of how to code these algorithms.

Representing Mathematical Functions

The first step in this short overview of numerical algorithms is to find a reasonable way to represent mathematical functions in C++. As you saw in the previous chapter, functions can be easily represented in C++ using functional objects, which declare a function call operator as one of their member functions. Using this strategy, it is possible to turn a class instance into a callable object, with semantics similar to native functions.

A similar strategy can be used to represent mathematical functions. The main difference between generic C++ functions and mathematical functions is that the latter operate only over numeric domains, most commonly using float or double values.

In the following example, a new MathFunction class is declared using this strategy. The declaration of MathFunction as an abstract interface allows programmers to extend this definition as necessary to represent concrete functions, as you will see next.

The abstract class can be defined as presented in Listing 10-1.

Listing 10-1. Definition for the Abstract Class MathFunction
#include <iostream>
#include <vector>


using std::cout;
using std::endl;


class MathFunction {
public:


    virtual ~MathFunction() {}
    virtual double operator()(double x) = 0;
private:
    // this is just an interface
};
Note

Because MathFunction is a polymorphic base class, it needs to define its own virtual destructor. This is necessary because clients will receive pointers or references to the base class. Without a virtual destructor, deleting a derived object through a base-class pointer results in undefined behavior, and such objects may not be properly cleaned up.

The great thing about using this type of interface class is that once you have a class like MathFunction, you can start writing code that uses it directly. Your code is insulated from any worries about the exact representation of objects. For example, consider a useful class called PolynomialFunction, which implements the interface described by MathFunction:

//
//  Polynomial has the form  c_1 x^n + c_2 x^(n-1) + ... + c_(n-1) x + c_n
//
class PolynomialFunction : public MathFunction {
public:
    PolynomialFunction(const std::vector<double> &coef);
    PolynomialFunction(const PolynomialFunction &p);
    virtual ~PolynomialFunction();
    virtual PolynomialFunction &operator=(const PolynomialFunction &p);


    virtual double operator()(double x) override;
private:
    std::vector<double> m_coefficients;
};

The PolynomialFunction class derives from MathFunction so that it can implement the same interface. However, it is only usable to represent polynomial functions, that is, functions that are determined by a polynomial of the form
$$ f(x)={c}_1{x}^n+{c}_2{x}^{n-1}+\cdots +{c}_{n-1}{x}^1+{c}_n $$

The polynomial is determined using the coefficients passed as vectors to the constructor of PolynomialFunction. The constructors are responsible for updating the m_coefficients data member using this information.

PolynomialFunction::PolynomialFunction(const std::vector<double> &coef)
: m_coefficients(coef)
{
}


PolynomialFunction::PolynomialFunction(const PolynomialFunction &p)
: m_coefficients(p.m_coefficients)
{
}


PolynomialFunction::~PolynomialFunction()
{
}


PolynomialFunction &PolynomialFunction::operator=(const PolynomialFunction &p)
{
    if (this != &p)
    {
        m_coefficients = p.m_coefficients;
    }
    return *this;
}

Using Horner’s Method

The main part of the PolynomialFunction class is the implementation of the function call operator. Since this class represents a polynomial, this operator receives a real number x and evaluates the function at that particular point. This is done using Horner’s method.

Horner’s method is just a quick way to evaluate a polynomial, so that you don’t need to explicitly compute the powers x^i, for i from 1 to n. The evaluation can be done with a loop where, at each step, you add a coefficient and multiply the running result by x. A simple implementation of this idea follows:

double PolynomialFunction::operator()(double x)
{
    int n = (int)m_coefficients.size();
    double y = 0;
    int i;
    for (i=0; i<n-1; ++i)
    {
        y += m_coefficients[i];   // add the next coefficient
        y *= x;                   // shift everything up one power
    }
    if (i < n) {
        y += m_coefficients[i];   // the constant term is not multiplied by x
    }
    return y;
}

To test these classes, you create a sample function that evaluates a polynomial function over a particular range. The function tested here is simply x² over the real range –2 to 2. The function also prints the results so that you can visualize the data.

int test_poly_function()
{
    PolynomialFunction f( { 1, 0, 0 } );


    double begin = -2, end = 2;
    double step = (end - begin) / 100.0;
    for (int i=0; i<100; ++i)
    {
        cout <<  begin + step * i << ", ";
    }
    cout << endl;
    for (int i=0; i<100; ++i)
    {
        cout << f( begin + step * i) << ", ";
    }


    return 0;
}

I ran this function and plotted the results as a graph of the function. Figure 10-1 shows the output of the plot.

Figure 10-1. Plot of results printed by the test_poly_function function

Finding Roots of Equations

Once you have a good representation for mathematical functions, it becomes possible to solve a few numerical problems. The first one I discuss in this section is finding the roots of an equation, a common problem that occurs as part of several numerical algorithms. Finding roots of an equation consists of determining one or more points in a numerical domain (usually the real numbers) where the equation has a value of zero.

This problem has a long history in mathematics, and for some types of equations it is possible to calculate their roots exactly. For example, closed-form solutions exist for polynomials of degree four or less. For other equations, however, the problem is too complicated to solve using analytical methods, which leads to the need for algorithms capable of generating approximate solutions.

A number of numerical algorithms have been proposed in the mathematical literature to find the roots of equations. In this section, you see how to do this using Newton’s method, which is one of the most common algorithms for this problem, and learn how it can be implemented in C++.

Newton’s Method

Newton’s method is based on the use of the derivative as a local approximation to the function in the neighborhood of a point. To understand how this method works, notice that the derivative of a function at a particular point is the slope of the line tangent to the function at that point.

Using this property, the current approximation to the root can be improved by following the tangent line to the point where it crosses zero. Newton’s method essentially iterates this process until the difference between successive approximations is very small.

This method can be readily implemented in C++ using the tools that you already have. The first part consists of creating a class that encapsulates the necessary data for the approximation procedure. Here is the definition for the NewtonMethod class:

#include "MathFunction.hpp"

//
// a Newton method implementation.
//
class NewtonMethod {
public:
    // Takes as parameter the function and its derivatives
    //
    NewtonMethod(MathFunction &f, MathFunction &derivative);
    NewtonMethod(MathFunction &f, MathFunction &derivative, double error);
    NewtonMethod(const NewtonMethod &p);
    virtual ~NewtonMethod();
    NewtonMethod &operator=(const NewtonMethod &p);


    double getFunctionRoot(double initialValue);
private:
    MathFunction &m_f;
    MathFunction &m_derivative;
    double m_error;
};

The NewtonMethod class contains the usual housekeeping member functions and, in addition, provides a function called getFunctionRoot, which receives as a parameter an initial value (a first guess that serves as a starting point).

The class stores as its data a reference to the function for which you want to find roots, and another reference to its derivative. Although it is technically possible to find the derivative for most functions, the techniques to do this in a generic way are beyond the capabilities of this class, so you need to receive the derivative as a constructor parameter and store it.

#include <iostream>
#include <cmath>


using std::endl;
using std::cout;


namespace {
    const double DEFAULT_ERROR = 0.0001;
}


NewtonMethod::NewtonMethod(MathFunction &f, MathFunction &derivative)
: m_f(f),
m_derivative(derivative),
m_error(DEFAULT_ERROR)
{
}


NewtonMethod::NewtonMethod(MathFunction &f, MathFunction &derivative, double error)
: m_f(f),
m_derivative(derivative),
m_error(error)
{
}


NewtonMethod::NewtonMethod(const NewtonMethod &p)
: m_f(p.m_f),
m_derivative(p.m_derivative),
m_error(p.m_error)
{
}


NewtonMethod::~NewtonMethod()
{
}


NewtonMethod &NewtonMethod::operator=(const NewtonMethod &p)
{
    if (this != &p)
    {
        // m_f and m_derivative are references, so they cannot be reseated
        // after construction; only the error tolerance can be copied.
        m_error = p.m_error;
    }
    return *this;
}

These member functions are necessary just to maintain the state of NewtonMethod objects. The m_f member stores a reference to the function that needs to be solved. The m_derivative member stores a reference to the derivative of the main function. You can also tweak the acceptable error of the solutions found by this class using the m_error data member. If the error is not supplied, this class uses the value stored in the DEFAULT_ERROR constant.

Next, you’re ready for the implementation of Newton’s method using the given infrastructure. The getFunctionRoot function provides the necessary code for finding the root of the equation. This member function is essentially a loop in which each step produces a new approximation for the function root. The loop ends when the absolute difference between the two last approximations is no greater than the acceptable error:

double NewtonMethod::getFunctionRoot(double x0)
{
    double x1 = x0;
    do
    {
        x0 = x1;
        cout << " x0 is " << x0 << endl;  // this line just for demonstration
        double d = m_derivative(x0);
        double y = m_f(x0);
        x1 = x0 - y / d;
    }
    while (std::abs(x0 - x1) > m_error);
    return x1;
}

Inside the main loop, the steps are:

  1. Find the value of the derivative at the current estimate, using the m_derivative member.

  2. Find the value of the function itself at the current estimate, using the m_f member.

  3. The derivative gives the slope d of the tangent, which can now be used to calculate another estimate point starting from the previous estimate. The equation for this new estimate is given by

$$ x_1 = x_0 - \frac{f\left(x_0\right)}{f'\left(x_0\right)} $$
where x_0 is the previous estimate and x_1 is the new estimate.

You can use a few sample functions to test the accuracy of this method. I created a SampleFunction class for this purpose. This class publicly inherits the MathFunction interface and can be used to compute the function f(x) = (x–1)³, which has a root at x = 1.

class SampleFunction : public MathFunction {
public:
    virtual ~SampleFunction();
    virtual double operator()(double value);
};


SampleFunction::~SampleFunction()
{
}


double SampleFunction::operator ()(double x)
{
    return (x-1)*(x-1)*(x-1);
}

To use this class with NewtonMethod, you also need to supply its derivative. I have implemented the Derivative class, which again is derived from MathFunction. Simple math shows you that the derivative is given by f'(x) = 3(x–1)².

class Derivative : public MathFunction {
public:
    virtual ~Derivative();
    virtual double operator()(double value);
};


// represents the derivative of the sample function
Derivative::~Derivative()
{
}


double Derivative::operator ()(double x)
{
    return 3*(x-1)*(x-1);
}

With these two classes, you can create a simple main function that puts them together and finds the root of the desired function. This code instantiates both SampleFunction and Derivative objects and creates an object of the NewtonMethod class. Finally, the code prints the root found for a given initial estimate of 100.

int main()
{
    SampleFunction f;
    Derivative df;
    NewtonMethod nm(f, df);
    cout << " the root of the function is " << nm.getFunctionRoot(100) << endl;
    return 0;
}

Running this function gives as a result a set of points, each one closer to the desired equation root. You can view the sequence of results in Table 10-1.

Table 10-1. Sequence of Values Found by Newton’s Method Applied to Function (x–1)³ and with Initial Guess of 100

Iteration   Estimate   Difference
 1          100
 2           67        33.00000
 3           45        22.00000
 4           30.3333   14.66670
 5           20.5556    9.77770
 6           14.037     6.51860
 7            9.69136   4.34564
 8            6.79424   2.89712
 9            4.86283   1.93141
10            3.57522   1.28761
11            2.71681   0.85841
12            2.14454   0.57227
13            1.76303   0.38151
14            1.50868   0.25435
15            1.33912   0.16956
16            1.22608   0.11304
17            1.15072   0.07536
18            1.10048   0.05024
19            1.06699   0.03349
20            1.04466   0.02233
21            1.02977   0.01489
22            1.01985   0.00992
23            1.01323   0.00662
24            1.00882   0.00441
25            1.00588   0.00294
26            1.00392   0.00196
27            1.00261   0.00131
28            1.00174   0.00087
29            1.00116   0.00058
30            1.00077   0.00039
31            1.00052   0.00025
32            1.00034   0.00018
33            1.00023   0.00011
34            1.00015   0.00008

Integration

Another problem that frequently requires the help of mathematical algorithms is the integration of functions. The integral of a function can be visualized as the area under its graph, and it has many applications in finance, engineering, and physics. Several algorithms used in the analysis of options need to evaluate integrals numerically, using techniques similar to the ones covered in this section.

Functions can be integrated analytically or numerically. For some functions, it is possible to find an analytic solution, that is, a closed formula that can be directly evaluated to compute the integral of a function between two points. Polynomial functions, for example, can be easily integrated analytically using the antiderivative: if the function is f(x) = x², the antiderivative $$ F(x)=\frac{x^3}{3}+C $$ can be used to calculate the value of the integral between two points a and b, which becomes F(b) – F(a).

Many functions, however, are too complicated to be integrated analytically. In these cases, you need to use numerical algorithms that slice the function into small parts and calculate the integral, while trying to reduce the error in this process.

In this section, I present an implementation for one of the simplest integration techniques, known as Simpson’s method. Simpson’s method is based on the decomposition of an area that needs to be integrated into a large number of very small pieces.

First, you need to define a class that presents the interface for this solution method. The SimpsonsIntegration class contains data members such as m_f, a reference to the function that will be integrated, and m_numIntervals, the number of intervals used to approximate the integral.

#include "MathFunction.hpp"

class SimpsonsIntegration {
public:
    SimpsonsIntegration(MathFunction &f);
    SimpsonsIntegration(const SimpsonsIntegration &p);
    ~SimpsonsIntegration();
    SimpsonsIntegration &operator=(const SimpsonsIntegration &p);


    double getIntegral(double a, double b);
    void setNumIntervals(int n);
private:
    MathFunction &m_f;
    int m_numIntervals;
};

The implementation for this class is in the next code fragment. The class uses a default number of intervals, in case you don’t want to set up this value. The DEFAULT_NUM_INTERVALS constant is used for this purpose.

#include "Integration.hpp"

#include "MathFunction.hpp"

#include <iostream>
#include <cmath>


using std::cout;
using std::endl;


namespace  {
    const int DEFAULT_NUM_INTERVALS = 100;
}


SimpsonsIntegration::SimpsonsIntegration(MathFunction &f)
: m_f(f),
m_numIntervals(DEFAULT_NUM_INTERVALS)
{
}


SimpsonsIntegration::SimpsonsIntegration(const SimpsonsIntegration &p)
: m_f(p.m_f),
m_numIntervals(p.m_numIntervals)
{
}


SimpsonsIntegration::~SimpsonsIntegration()
{
}


SimpsonsIntegration &SimpsonsIntegration::operator=(const SimpsonsIntegration &p)
{
    if (this != &p)
    {
        m_f = p.m_f;
        m_numIntervals = p.m_numIntervals;
    }
    return *this;
}

The main part of this implementation is the getIntegral member function. The two parameters for this function define the interval in which the integration will be performed. The intSize variable is used to define the size of each interval used for Simpson’s method.

The algorithm operates as follows. For each slice of the required interval, you need to compute the approximate area under the function. The formula used by Simpson’s method is
$$ \frac{b-a}{6}\left[ f(a) + 4f\left(\frac{a+b}{2}\right) + f(b) \right] $$
where a and b are the beginning and end points of the current slice. This rule is regarded as one of the most effective ways to approximate an integral over a short interval.

double SimpsonsIntegration::getIntegral(double a, double b)
{
    double S = 0;
    double intSize = (b - a)/m_numIntervals;
    double x = a;


    for (int i=0; i<m_numIntervals; ++i)
    {
        // Simpson's rule applied to the slice [x, x + intSize]
        S += (intSize / 6) * ( m_f(x) + m_f(x+intSize) + 4 * m_f((x + x + intSize)/2) );
        x += intSize;
    }
    return S;
}

This class also provides a method to change the number of intervals, therefore improving the accuracy of the method (at the expense of additional running time).

void SimpsonsIntegration::setNumIntervals(int n)
{
    m_numIntervals = n;
}

To test the results of this integration method, I provide a simple mathematical function as an example. The function to be integrated here is sin(x).

// Example function

namespace  {

    class SampleFunc : public MathFunction
    {
    public:
        ~SampleFunc();
        double operator()(double x);
    };


    SampleFunc::~SampleFunc()
    {
    }


    double SampleFunc::operator()(double x)
    {
        return sin(x);
    }


}

The main function can be used as a driver to test the SimpsonsIntegration class. It creates an instance of SampleFunc and uses it to initialize a SimpsonsIntegration object. Then, the code calls getIntegral for the interval 0.5 to 2.5, using the default number of intervals. Next, the number of intervals is changed to 200, and the same calculation is performed again.

int main()
{
    SampleFunc f;
    SimpsonsIntegration si(f);
    double integral = si.getIntegral(0.5, 2.5);
    cout << " the integral of the function is " << integral << endl;


    si.setNumIntervals(200);
    integral = si.getIntegral(0.5, 2.5);
    cout << " the integral of the function with 200 intervals is " << integral << endl;
    return 0;
}

The result of this function is the following:

 the integral of the function is 1.67876          
 the integral of the function with 200 intervals is 1.67873

This is a very effective method; even a modest number of intervals is enough to achieve a good approximation in this case.

Conclusion

Numerical algorithms are one of the main parts of an analytical system for options and derivatives. These algorithms have been refined for decades, and many of them have been implemented in C++ for the purpose of options pricing and related tasks.

In this chapter, you saw a few examples of numerical algorithms and learned how they can be efficiently implemented. I started with an explanation of how mathematical functions can be modeled as classes that are independent of the underlying algorithm. You also saw how to create a generic polynomial function class that efficiently computes the value of a function at each point using Horner’s method.

Next, you learned how to find roots of equations using Newton’s method. This traditional method employs the derivative of a function to estimate the value of the root, and continually improves this estimate until a solution is found. You saw how this method can be relatively easily implemented using the tools developed in the previous sections.

Finally, this chapter also covered the important problem of function integration. To find the integral of a function, you need to evaluate a function in a given range and use those values to estimate the area covered by the function graph. Using the algorithmic methods introduced here, you saw how to implement one of the most common techniques for integrating functions, known as Simpson’s method.

While this chapter introduced simple numeric techniques, in the next chapter you will learn how these techniques can be combined to solve some of the complex differential equations that are common when analyzing options and related derivatives.
