Appendix E
Solutions to the Exercises

Chapter 1

  1. 1.1
      1. (a) We have the stationary solution X t  = ∑ i ≥ 0 0.5^i (η t − i  + 1), with mean EX t  = 2 and autocorrelations ρ X (h) = 0.5^h .
      2. (b) We have an ‘anticipative’ stationary solution
        equation
        which is such that EX t  =  − 1 and ρ X (h) = 0.5^h .
      3. (c) The stationary solution
        equation
        is such that EX t  = 2 with ρ X (1) = 2/19 and ρ X (h) = 0.5^{h − 1} ρ X (1) for h > 1.
    2. The compatible models are, respectively, ARMA(1, 2), MA(3) and ARMA(1, 1).
    3. The first noise is strong, and the second is weak because
      equation

      Note that, by Jensen's inequality, this correlation is positive.

  2. 1.2 Without loss of generality, assume images for t < 1 or t > n . We have
    equation

    which gives images , and the result follows.

  3. 1.3 Consider the degenerate sequence (X t ) t = 0, 1, … defined, on a probability space (Ω, 𝒜, ℙ), by X t (ω) = (−1)^t for all ω ∈ Ω and all t ≥ 0. With probability 1, the sequence {(−1)^t } is the realisation of the process (X t ). This process is non‐stationary because, for instance, EX 0 ≠ EX 1 .

    Let U be a random variable, uniformly distributed on {0, 1}. We define the process (Y t ) t = 0, 1, … by

    equation

    for any ω ∈ Ω and any t ≥ 0. The process (Y t ) is stationary. We have in particular EY t  = 0 and Cov(Y t , Y t + h ) = (−1)^h . With probability 1/2, the realisation of the stationary process (Y t ) will be the sequence {(−1)^t } (and with probability 1/2, it will be {(−1)^{t + 1} }).

    This example leads us to think that it is virtually impossible to determine whether a process is stationary or not from the observation of only one trajectory, even of infinite length. However, practitioners do not consider {(−1)^t } as a potential realisation of the stationary process (Y t ). It is more natural, and simpler, to suppose that {(−1)^t } is generated by the non‐stationary process (X t ).

  4. 1.4 The sequence 0, 1, 0, 1, … is a realisation of the process X t  = 0.5(1 + (−1)^t A), where A is a random variable such that P[A = 1] = P[A =  − 1] = 0.5. It can easily be seen that (X t ) is strictly stationary.

    Let Ω* = {ω ∣ X 2t  = 1, X 2t + 1 = 0, ∀ t}. If (X t ) is ergodic and stationary, the empirical means images and images both converge to the same limit P[X t  = 1] with probability 1, by the ergodic theorem. For all ω ∈ Ω*, these means are, respectively, equal to 1 and 0. Thus P(Ω*) = 0. The probability of such a trajectory is thus equal to zero for any ergodic and stationary process.
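    As a small illustration (an R sketch, not part of the original solution), one draw of A produces one of the two alternating trajectories, so a single realisation never reveals the randomness of the process:

    > A <- sample(c(-1, 1), 1)         # P[A = 1] = P[A = -1] = 0.5
    > X <- 0.5 * (1 + (-1)^(0:9) * A)  # either 1,0,1,0,... (A = 1) or 0,1,0,1,... (A = -1)
    > X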

  5. 1.5 We have Eε t  = 0, Var ε t  = 1, and Cov(ε t , ε t − h ) = 0 when h ≠ 0, thus (ε t ) is a weak white noise. We also have images , thus ε t and ε t − 1 are not independent, which shows that (ε t ) is not a strong white noise.
  6. 1.6 Assume h > 0. Define the random variable images where images . It is easy to see that images has the same asymptotic variance (and also the same asymptotic distribution) as images . Using images , stationarity, and Lebesgue's theorem, this asymptotic variance is equal to
    equation

    This value can be arbitrarily larger than 1, which is the value of the asymptotic variance of the empirical autocorrelations of a strong white noise.

  7. 1.7 It is clear that images is a second‐order stationary process. By construction, ε t and ε t − h are independent when h > k , thus images for all h > k . Moreover, images , for h = 0, …, k . In view of Theorem 1.2, images thus follows an MA(k) process. In the case k = 1, we have
    equation

    where ∣b∣ < 1 and (u t ) is a white noise of variance σ 2 . The coefficients b and σ 2 are determined by

    equation

    which gives images and σ 2 = 2/b .

  8. 1.8 Reasoning as in Exercise 1.6, the asymptotic variance is equal to
    equation

    Since images , for k ≠ h the asymptotic variance can be arbitrarily smaller than 1, which corresponds to the asymptotic variance of the empirical autocorrelations of a strong white noise.

  9. 1.9
    1. We have
      equation
      when n > m and m → ∞. The sequence {u t (n)} n defined by images is a Cauchy sequence in L 2 , and thus converges in quadratic mean. A priori,
      equation

      exists in ℝ ∪ {+∞}. Using Beppo Levi's theorem,

      equation

      which shows that the limit images is finite almost surely. Thus, as n → ∞, u t (n) converges, both almost surely and in quadratic mean, to images . Since

      equation

      we obtain, taking the limit as n → ∞ of both sides of the equality, u t  = au t − 1 + η t . This shows that (X t ) = (u t ) is a stationary solution of the AR(1) equation.

      Finally, assume the existence of two stationary solutions (X t ) and (u t ), satisfying X t  = aX t − 1 + η t and u t  = au t − 1 + η t . If images , then

      equation

      which entails

      equation

      This is in contradiction to the assumption that the two sequences are stationary, which shows the uniqueness of the stationary solution.

    2. We have X t  = η t  + aη t − 1  + ⋯ + a^k η t − k  + a^{k + 1} X t − k − 1 . Since ∣a∣ = 1,
      equation

      as k → ∞. If (X t ) were stationary,

      equation

      and we would have

      equation

      This is impossible, because by the Cauchy–Schwarz inequality,

      equation
    3. The argument used in Part 1 shows that
      equation
      almost surely and in quadratic mean. Since
      equation
      for all n , (images ) is a stationary solution (which is called anticipative, because it is a function of the future values of the noise) of the AR(1) equation. The uniqueness of the stationary solution is shown as in Part 1.
    4. The autocovariance function of the stationary solution is
      equation

      We thus have Eε t  = 0 and, for all h > 0,

      equation

      which confirms that ε t is a white noise.

  10. 1.10 In Figure 1.6a, we note that several empirical autocorrelations are outside the 95% significance band, which leads us to think that the series may not be the realisation of a strong white noise. Inspection of Figure 1.6b confirms that the observed series ε1, …, ε n cannot be generated by a strong white noise; otherwise, the series images would also be uncorrelated. Clearly, this is not the case, because several empirical autocorrelations go far beyond the significance band. By contrast, it is plausible that the series is a weak noise. We know that Bartlett's formula giving the limits images is not valid for a weak noise (see Exercises 1.6 and 1.8). On the other hand, we know that the square of a weak noise can be correlated (see Exercise 1.7).
  11. 1.11 Using the relation images , formula (B.18) can be written as
    equation

    With the change of index h = i − ℓ, we obtain

    equation

    which gives (B.14), using the parity of the autocovariance functions.

  12. 1.12 We can assume i ≥ 0 and j ≥ 0. Since γ X (ℓ) = γ ε(ℓ) = 0 for all ℓ ≠ 0, formula (B.18) yields
    equation

    for (i, j) ≠ (0, 0) and

    equation

    Thus

    equation

    In formula (B.15), we have images ij  = 0 when i ≠ j and images ii  = 1. We also have images when i ≠ j and images for all i ≠ 0. Since images , we obtain

    equation

    For significance intervals C h of asymptotic level 1 − α , such that images , we have

    equation

    By definition of C h ,

    equation

    Moreover,

    equation

    We have used the convergence in law of images to a vector of independent variables. When the observed process is not a noise, this asymptotic independence does not hold in general.

  13. 1.13 The probability that all the empirical autocorrelations stay within the asymptotic significance intervals (with the notation of the solution to Exercise 1.12) is, by the asymptotic independence,
    equation

    For m = 20 and α = 5%, this limit is equal to 0.36. The probability of not rejecting the right model is thus low.
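    This value is immediate to check in R (a small numerical check, not part of the original solution):

    > m <- 20; alpha <- 0.05
    > (1 - alpha)^m     # probability that all m autocorrelations stay inside the band: about 0.358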

  14. 1.14 In view of (B.7), we have r X (1) = ρ X (1). Using step (B.8) with k = 2 and a 1, 1 = ρ X (1), we obtain
    equation

    Then, step (B.9) yields

    equation

    Finally, step (B.8) yields

    equation
  15. 1.15 The historical data from 3 January 1950 to 24 July 2009 can be downloaded via the URL http://fr.finance.yahoo.com/q/hp?s=%5EGSPC. We obtain Figure E.1 with the following R code:
    > # reading the SP500 data set
    > sp500data <- read.table("sp500.csv",header=TRUE,sep=",")
    > sp500<-rev(sp500data$Close) # closing price
    > n<-length(sp500)
    > rend<-log(sp500[2:n]/sp500[1:(n-1)]); rend2<-rend^2
    > op <- par(mfrow = c(2, 2)) # 2 × 2 figures per page
    > plot(ts(sp500),main="SP 500 from 1/3/50 to 7/24/09",
    +                ylab="SP500 Prices",xlab="")
    > plot(ts(rend),main="SP500 Returns",ylab="SP500 Returns",
    +                xlab="")
    > acf(rend, main="Autocorrelations of the returns",xlab="",
    +                ylim=c(-0.05,0.2))
    > acf(rend2, main="ACF of the squared returns",xlab="",
    +                ylim=c(-0.05,0.2))
    > par(op)
    

    Figure E.1 Closing prices and returns of the S&P 500 index from 3 January 1950 to 24 July 2009.

Chapter 2

  1. 2.1 This covariance is meaningful only if images and Ef^2(ε t − h ) < ∞. Under these assumptions, the equality is true and follows from E(ε t  ∣ ε u , u < t) = 0.
  2. 2.2 In case (i) the strict stationarity condition becomes α + β < 1. In case (ii) elementary integral computations show that the condition is
    equation
  3. 2.3 Let λ 1, …, λ m be the eigenvalues of A . If A is diagonalisable, there exists an invertible matrix P and a diagonal matrix D such that A = P −1 DP . It follows that, taking a multiplicative norm,
    equation

    For the multiplicative norm ‖A‖ =  ∑ ∣a ij ∣, we have images . The result follows immediately.

    When A is any square matrix, the Jordan representation can be used. Let n i be the multiplicity of the eigenvalue λ i . We have the Jordan canonical form A = P −1 JP , where P is invertible, and J is the block‐diagonal matrix with a diagonal of m matrices J i (λ i ), of size n i  × n i , with λ i on the diagonal, 1 on the superdiagonal, and 0 elsewhere. It follows that A t  = P −1 J t P , where J t is the block‐diagonal matrix whose blocks are the matrices images . We have images , where N i is such that images . It can be assumed that λ 1 ∣  >  ∣ λ 2 ∣  > ⋯ >  ∣ λ m  ∣ . It follows that

    equation

    as t → ∞, and the proof easily follows.
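    As a numerical illustration of this limit (an R sketch with an arbitrarily chosen matrix, not part of the original solution), one can check that ‖A^t‖^{1/t} approaches the spectral radius ρ(A):

    > A <- matrix(c(0.5, 0.3, 0.2, 0.4), 2, 2)   # arbitrary 2 x 2 example
    > rho <- max(abs(eigen(A)$values))           # spectral radius rho(A)
    > normA <- function(M) sum(abs(M))           # the multiplicative norm ||A|| = sum |a_ij|
    > At <- diag(2); out <- numeric(50)
    > for (t in 1:50) { At <- At %*% A; out[t] <- normA(At)^(1/t) }
    > c(rho, out[50])                            # the two values are close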

  4. 2.4 We use the multiplicative norm ‖A‖ =  ∑ ∣a ij ∣. Thus log‖Az t ‖ ≤ log ‖A‖ + log ∣z t ∣; therefore, log^+ ‖Az t ‖ ≤ log^+ ‖A‖ + log^+ ∣z t ∣, which admits a finite expectation by assumption. It follows that γ exists. We have
    equation

    and thus

    equation

    Using Eq. (2.21) and the ergodic theorem, we obtain

    equation

    Consequently, γ < 0 if and only if ρ(A) < exp(−E log  ∣ z t ∣).

  5. 2.5 To show 1, first note that, by stationarity, we have images . The replacement can thus be done in (2.22). To show that it can also be done in (2.23), let us apply Theorem 2.3 to the sequence images defined by images . Noting that images , we have
    equation

    which completes the proof of 1.

    We have shown that, for any images , the stationary sequences images and images have the same top Lyapunov exponent, i.e.

    equation

    The convergence follows by showing that images .

  6. 2.5 For the Euclidean norm, multiplicativity follows from the Cauchy–Schwarz inequality. Since images , we have
    equation

    To show that the norm N 1 is not multiplicative, consider the matrix A whose elements are all equal to 1: we then have N 1(A) = 1 but N 1(A 2) > 1.

  7. 2.6 We have
    equation

    and images

  8. 2.7 We have images , therefore, under the condition α 1 + α 2 < 1, the moment of order 2 is given by
    equation

    (see Theorem 2.5 and Remark 2.6(1)). The strictly stationary solution satisfies

    equation

    in ℝ ∪ {+∞}. Moreover,

    equation

    which gives

    equation

    Using this relation in the previous expression for images , we obtain

    equation

    If images , then the term in brackets on the left‐hand side of the equality must be strictly positive, which gives the condition for the existence of the fourth‐order moment. Note that the condition is not symmetric in α 1 and α 2 . In Figure E.2, the points (α 1, α 2) under the curve correspond to ARCH(2) models with a fourth‐order moment. For these models,

    equation

    Figure E.2 Region of existence of the fourth‐order moment for an ARCH(2) model (when μ 4 = 3).

  9. 2.8 We have seen that images admits the ARMA(1, 1) representation
    equation

    where images is a (weak) white noise. The autocorrelation of images thus satisfies

    E.1 equation

    Using the MA(∞) representation

    equation

    we obtain

    equation

    and

    equation

    It follows that the lag 1 autocorrelation is

    equation

    The other autocorrelations are obtained from (E.1) and images . To determine the autocovariances, all that remains is to compute

    equation

    which is given by

    equation
  10. 2.9 The vectorial representation images is
    equation

    We have

    equation

    The eigenvalues of A (2) are 0, 0, 0 and 3α^2 + 2αβ + β^2 , thus I 4 − A (2) is invertible (0 is an eigenvalue of I 4 − A (2) if and only if 1 is an eigenvalue of A (2) ), and the system (2.63) admits a unique solution. We have

    equation

    The solution to Eq. (2.63) is

    equation

    As the first component of this vector, we recognise images , and the other three components are equal to images . Equation (2.64) yields

    equation

    which gives images , but with more tedious computations than the direct method used in Exercise 2.8.

  11. 2.10 It suffices to show that images for all fixed images . Let images and images for images . For all images , write images with images . We have
    equation

    and the result follows.

  12. 2.10
    1. Subtracting the ( q + 1)th row of (λI p + q  − A) from the first, then expanding the determinant along the first row, and using Eq. (2.32), we obtain
      equation
      and the result follows.
    2. When images , the previous determinant is equal to zero at λ = 1. Thus ρ(A) ≥ 1. Now, let λ be a complex number of modulus strictly greater than 1. Using the inequality ∣a − b∣ ≥ ∣a∣ − ∣b∣, we then obtain
      equation
      It follows that ρ(A) ≤ 1 and thus ρ(A) = 1.
  13. 2.11 For all ε > 0, noting that the function f(t) = P(t −1 ∣ X 1 ∣  > ε) is decreasing, we have
    equation

    The convergence follows from the Borel–Cantelli lemma.

    Now, let (X n ) be an iid sequence of random variables with density f(x) = x^{−2} 𝟙 {x ≥ 1} . For all K > 0, we have

    equation

    The events {n −1 X n  > K} being independent, we can use the counterpart of the Borel–Cantelli lemma: the event {n −1 X n  > K for an infinite number of n} has probability 1. Thus, with probability 1, the sequence (n −1 X n ) does not tend to 0.
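    A quick simulation illustrates the second part (an R sketch, not part of the original solution): with X n  = 1/U n , where the U n are independent uniform variables on (0, 1), the X n have density x^{−2} 𝟙 {x ≥ 1} , and the ratios X n /n keep exceeding any fixed K:

    > set.seed(123)
    > N <- 1e6
    > X <- 1 / runif(N)      # X_n has density x^(-2) on [1, Inf)
    > sum(X / (1:N) > 1)     # exceedances of K = 1 keep occurring (expected count is about log N)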

  14. 2.12 First note that the last r − 1 rows of B t A are the first r − 1 rows of A , for any matrix A of appropriate size. The same property holds true when B t is replaced by E(B t ). It follows that the last r − 1 rows of E(B t A) are the last r − 1 rows of E(B t )E(A). Moreover, it can be shown, by induction on t , that the i th row ℓ i, t − i of B t ⋯B 1 is a measurable function of the η t − j , for j ≥ i . The first row of B t + 1 B t ⋯B 1 is thus of the form a 1(η t )ℓ 1, t − 1 + ⋯ + a r (η t − r )ℓ r, t − r . Since
    equation

    the first row of E(B t + 1 B t ⋯B 1 ) is thus the product of the first row of EB t + 1 and of E(B t ⋯B 1 ). The conclusion follows.

  15. 2.13
    1. For any fixed t , the sequence images converges almost surely (to images ) as K → ∞. Thus
      equation
      and the first convergence follows. Now note that we have
      equation

      The first inequality uses (a + b)^s ≤ a^s + b^s for a, b ≥ 0 and s ∈ (0, 1]. The second inequality is a consequence of images . The second convergence then follows from the dominated convergence theorem.

    2. We have images . The convergence follows from the previous question, and from the strict stationarity, for any fixed integer K , of the sequence images .
    3. We have
      equation

      for any i  = 1, …, ℓ, j  = 1, …, m . In view of the independence between X n and Y , it follows that images almost surely as n → ∞. Since images is a strictly positive number, we obtain images almost surely, for all i , j . Using (a + b)^s ≤ a^s + b^s once again, it follows that

      equation
    4. Note that the previous question does not allow us to affirm that the convergence to 0 of images entails that of E(‖A k A k − 1 ⋯A 1 ‖^s ), because images has zero components. For k large enough, however, we have
      equation

      where images is independent of A k A k − 1 ⋯A N + 1 . The general term a i, j of A N ⋯A 1 is the (i, j)th term of the matrix A N multiplied by a product of images variables. The assumption A N  > 0 entails a i, j  > 0 almost surely for all i and j . It follows that the i th component of Y satisfies Y i  > 0 almost surely for all i . Thus images . Now the previous question allows us to affirm that E(‖A k A k − 1 ⋯A N + 1 ‖^s ) → 0 and, by strict stationarity, that E(‖A k − N A k − N − 1 ⋯A 1 ‖^s ) → 0 as k → ∞. It follows that there exists k 0 such that images

    5. If α 1 or β 1 is strictly positive, the first two components of the vector images are also strictly positive, together with the ( q + 1)th and ( q + 2)th components. By induction, it can be shown that images under this assumption.
    6. The condition images can be satisfied when α 1 = β 1 = 0. It suffices to consider an ARCH(3) process with α 1 = 0, α 2 > 0, α 3 > 0, and to check that images .
  16. 2.14 In the case p = 1, the condition on the roots of 1 − β 1 z implies ∣β 1 ∣ < 1. The positivity conditions on the φ i yield
    equation

    The last inequalities imply β 1 ≥ 0. Finally, the positivity constraints are

    equation

    If q = 2, these constraints reduce to

    equation

    Thus, we can have α 2 < 0.

  17. 2.15 Using the ARCH( q ) representation of the process (images ), together with Proposition 2.2, we obtain
    equation
  18. 2.16 Since images , h > 0, we have images where λ, μ are constants and r 1, r 2 satisfy r 1 + r 2 = α 1 , r 1 r 2 =  − α 2 . It can be assumed that r 2 < 0 and r 1 > 0, for instance. A simple computation shows that, for all h ≥ 0,
    equation

    If the last equality is true, it remains true when h is replaced by h + 1 because images . Since images , it follows that images for all h ≥ 0. Moreover,

    equation

    Since images , if images then we have, for all h ≥ 1, images We have thus shown that the sequence images is decreasing when images . If images , it can be seen that for h large enough, say h ≥ h 0 , we have images , again because of images . Thus, the sequence images is decreasing.

  19. 2.17 Since X n  + Y n  →  − ∞ in probability, for all K we have
    equation

    Since images in probability, there exist K 0 ∈ ℝ and n 0 ∈ ℕ such that P(X n  < K 0/2) ≤ ς < 1 for all n ≥ n 0 . Consequently,

    equation

    as n → ∞, for all K ≤ K 0 , which entails the result.

  20. 2.18 We have
    equation

    as n → ∞. If γ < 0, the Cauchy rule entails that

    equation

    converges almost surely, and the process t ), defined by images , is a strictly stationary solution of model (2.7). As in the proof of Theorem 2.1, it can be shown that this solution is unique, non‐anticipative and ergodic. The converse is proved by contradiction, assuming that there exists a strictly stationary solution images . For all n > 0, we have

    equation

    It follows that a(η −1)…a(η n )ω(η n − 1) converges to zero, almost surely, as n →  ∞ , or, equivalently, that

    E.2 equation

    We first assume that E log {a(η t )} > 0. Then the strong law of large numbers entails images almost surely. For (E.2) to hold true, it is then necessary that log ω(η n − 1) →  − ∞ almost surely, which is precluded since (η t ) is iid and ω(η 0) > 0 almost surely. Assume now that E log {a(η t )} = 0. By the Chung–Fuchs theorem, we have images with probability 1 and, using Exercise 2.17, the convergence (E.2) entails log ω(η n − 1) →  − ∞ in probability, which, as in the previous case, entails a contradiction.
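    As an illustration of the criterion γ < 0 (a sketch assuming the familiar GARCH(1, 1) case, in which a(z) = αz^2 + β and (η t ) is standard Gaussian; these choices are not part of the original solution), the sign of γ = E log a(η t ) can be estimated by Monte Carlo:

    > set.seed(1)
    > alpha <- 0.2; beta <- 0.8         # alpha + beta = 1: no second-order stationarity
    > eta <- rnorm(1e6)
    > mean(log(alpha * eta^2 + beta))   # estimate of gamma: negative, so a strictly stationary solution exists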

  21. 2.19 Letting a(z) = λ + (1 − λ)z^2 , we have
    equation

    Regardless of the value of images , fixed or even random, we have almost surely

    equation

    using the law of large numbers and Jensen's inequality. It follows that images almost surely as t → ∞.

  22. 2.20
    1. Since the φ i are positive and A 1 = 1, we have φ i  ≤ 1, which shows the first inequality. The second inequality follows by convexity of x ↦ x log x for x > 0.
    2. Since A 1 = 1 and A p  < ∞, the function f is well defined for q ∈ [p, 1]. We have
      equation
      The function q ↦ log E|η 0 |^{2q} is convex on [p, 1] if, for all λ ∈ [0, 1] and all q, q * ∈ [p, 1],
      equation

      which is equivalent to showing that

      equation

      with X = |η 0 |^{2q} , images . This inequality holds true by Hölder's inequality. The same argument is used to show the convexity of images . It follows that f is convex, as a sum of convex functions. We have f(1) = 0 and f(p) < 0, thus the left derivative of f at 1 is negative, which gives the result.

    3. Conversely, we assume that there exists p * ∈ (0, 1] such that images and that condition (2.52) is satisfied. The convexity of f on [p *, 1] and (2.52) imply that f(q) < 0 for q sufficiently close to 1. Thus condition (2.41) is satisfied. By convexity of f and since f(1) = 0, we have f(q) < 0 for all q ∈ [p, 1]. It follows that, by Theorem 2.6, E∣ε t ∣^q  < ∞ for all q ∈ [0, 2].
  23. 2.21 Since images , we have a = 0 and b = 1. Using condition (2.60), we can easily see that
    equation

    since the condition for the existence of images is images . Note that when the GARCH effect is weak (that is, α 1 is small), the part of the variance that is explained by this regression is small, which is not surprising. In all cases, the ratio of the variances is bounded by 1/κ η , which is much smaller than 1 for most distributions (1/3 for the Gaussian distribution). Thus, it is not surprising to observe disappointing R^2 values when estimating such a regression on real series.

Chapter 3

  1. 3.1 Given any initial measure, the sequence (X t ) t ∈ ℕ clearly constitutes a Markov chain on (ℝ, 𝓑(ℝ)), with transition probabilities defined by P(x, B) = ℙ(X 1 ∈ B ∣ X 0 = x) = P ε (B − θx).
    1. (a) Since P ε admits a positive density on ℝ, the probability measure P(x, .) is, for all x ∈ E , absolutely continuous with respect to λ and its density is positive on ℝ. Thus any measure ϕ which is absolutely continuous with respect to λ is an irreducibility measure: ∀ x ∈ E ,
      equation

      Moreover, λ is a maximal measure of irreducibility.

    2. (b) Assume, for example, that ε t is uniformly distributed on [−1, 1]. If θ > 1 and X 0 = x 0 > 1/(θ − 1), we have x 0 < X 1 < X 2 < …, regardless of the ε t . Thus there exists no irreducibility measure: such a measure should satisfy ϕ((−∞, x]) = 0, for all x ∈ ℝ, which would imply ϕ = 0.
  2. 3.2 If (X n ) is strictly stationary, X 1 and X 0 have the same distribution, μ , satisfying
    equation

    Thus μ is an invariant probability measure.

    Conversely, suppose that μ is invariant. Using the Chapman–Kolmogorov relation, by which ∀ t ∈ ℕ, ∀ s, 0 ≤ s ≤ t, ∀ x ∈ E, ∀ B ∈ ℰ,

    equation

    we obtain

    equation

    Thus, by induction, for all t , ℙ[X t  ∈ B] = μ(B) (B ∈ ℰ). Using the Markov property, this is equivalent to the strict stationarity of the chain: the distribution of the process (X t , X t + 1, …, X t + k ) is independent of t , for any integer k .

  3. 3.3 We have
    equation

    Thus π is invariant. The third equality is an immediate consequence of the Fubini and Lebesgue theorems.

  4. 3.4 Assume, for instance, θ > 0. Let C = [−c, c], c > 0, and let δ = inf {f(x); x ∈ [−(1 + θ)c, (1 + θ)c]}. We have, for all A ⊂ C and all x ∈ C ,
    equation

    Now let B ∈ ℰ. Then for all x ∈ C ,

    equation

    The measure ν is non‐trivial since ν(E) = δλ(C) = 2δc > 0.

  5. 3.5 It is clear that (X t ) constitutes a Feller chain on ℝ. The λ ‐irreducibility follows from the assumption that the noise has a density which is everywhere positive, as in Exercise 3.1. In order to apply Theorem 3.1, a natural choice of the test function is V(x) = 1 + ∣x∣. We have
    equation

    Thus if K 1 < 1, we have, for K 1 < K < 1 and for g(x) > (K 2 + 1 − K 1)/(K − K 1),

    equation

    If we put A = {x; g(x) = 1 +  ∣ x ∣  ≤ (K 2 + 1 − K 1)/(K − K 1)}, the set A is compact and the conditions of Theorem 3.1 are satisfied, with 1 − δ = K .

  6. 3.6 By summing the first n inequalities of (3.11) we obtain
    equation

    It follows that

    equation

    because V ≥ 1. Thus, there exists κ > 0 such that

    equation

    Note that the positivity of δ is crucial for the conclusion.

  7. 3.7 We have, for any positive continuous function f with compact support,
    equation

    The inequality is justified by (i) and the fact that P f is a continuous positive function. It follows that, for f = 𝟙 C , where C is a compact set, we obtain

    equation

    which shows that,

    equation

    (that is, π is subinvariant) using (ii). If there existed B such that the previous inequality were strict, we would have

    equation

    and since π(E) < ∞ we arrive at a contradiction. Thus

    equation

    which signifies that π is invariant.

  8. 3.8 See Francq and Zakoïan (2006b).
  9. 3.9 If images were infinite then, for any K > 0, there would exist an index n 0 such that images . Then, since the sequence is decreasing, one would have images . Since this should hold for all K > 0, the sequence would not converge. This applies directly to the proof of Corollary A.3 with u n  = {α X (n)}^{ν/(2 + ν)} , which is indeed a decreasing sequence in view of point (v) of Section A.3.1.
  10. 3.10 We have
    equation

    where

    equation

    Inequality (A.8) shows that d 7 is bounded by

    equation

    By an argument used to deal with d 6 , we obtain

    equation

    and the conclusion follows.

  11. 3.11 The chain satisfies the Feller condition (i) because
    equation

    is continuous at x when g is continuous.

    To show that the irreducibility condition (ii) is not satisfied, consider the set of numbers in [0, 1] whose decimal expansion is eventually periodic:

    equation

    For all h ≥ 0, images if and only if images . We thus have,

    equation

    and,

    equation

    This shows that there is no non‐trivial irreducibility measure.

    The drift condition (iii) is satisfied with, for instance, a measure φ such that φ([−1, 1]) > 0, the energy V(x) = 1 + ∣x∣ and the compact set A = [−1, 1]. Indeed,

    equation

    provided

    equation

Chapter 4

  1. 4.1 Note that images is a measurable function of images and of images , that will be denoted by
    equation

    Using the independence between images and the other variables of images , we have, for all images ,

    equation

    when the distribution images is symmetric.

  2. 4.2 A sequence images of independent real random variables such that images with probability images and images with probability images is suitable, because images , images , images and images . We have used images for any decreasing sequence of events, in order to show that
    equation
  3. 4.3 By definition,
    equation

    and, by continuity of the exponential,

    equation

    is finite if and only if the series of general term images converges. Using the inequalities images , we obtain

    equation

    Since the images tend to 0 at an exponential rate and images , the series of general term images converges absolutely, and we finally obtain

    equation

    which is finite under condition (4.12).

  4. 4.4 Note that (4.13) entails that
    equation

    with probability 1. The integral of a positive measurable function being always defined in images , using Beppo Levi's theorem and then the independence of the images , we obtain

    equation

    which is of course finite under condition (4.12). Applying the dominated convergence theorem, and bounding the variables images by the integrable variable images , we then obtain the desired expression for images .

  5. 4.5 Denoting by images the density of images ,
    equation

    and images . With the notation images , it follows that

    equation

    It then suffices to use the fact that images is equivalent to images , and that images is thus equivalent to images , in a neighborhood of 0.

  6. 4.6 We can always assume images . In view of the discussion on pages 79–80, the process images satisfies an ARMAimages representation of the form
    equation

    where images is a white noise with variance images . Using images and

    equation

    the coefficients images and images are such that

    equation

    and

    equation

    When, for instance, images , images , images and images , we obtain

    equation
  7. 4.7 In view of Exercise 3.5, an ARimages process images , images , in which the noise images has a strictly positive density over images , is geometrically images ‐mixing. Under the stationarity conditions given in Theorem 4.1, if images has a density images and if images (defined in (4.10)) is a continuously differentiable bijection (that is, if images ) then images is a geometrically images ‐mixing stationary process. Reasoning as in step (iv) of the proof of Theorem 3.4, it is shown that images , and then images , are also geometrically images ‐mixing stationary processes.
  8. 4.8 Since images and images , we have
    equation

    If the volatility images is a positive function of images that possesses a moment of order 2, then

    equation

    under conditions (4.34). Thus, condition (4.38) is necessarily satisfied. Conversely, under (4.38) the strict stationarity condition is satisfied because

    equation

    and, as in the proof of Theorem 2.2, it is shown that the strictly stationary solution possesses a moment of order 2.

  9. 4.9 Assume the second‐order stationarity condition (4.39). Let
    equation

    and

    equation

    Using images , we obtain

    equation

    We then obtain the autocovariances

    equation

    and the autocorrelations images . Note that images for all images , which shows that images is a weak ARMAimages process. In the standard GARCH case, the calculation of these autocorrelations would be much more complicated because images is not a linear function of images .

  10. 4.10 This is obvious because an APARCHimages with images , images and images corresponds to a TGARCHimages .
  11. 4.11 This is an EGARCHimages with images , images , images and images . It is natural to impose images and images , so that the volatility increases with images . It is also natural to impose images so that the effect of a negative shock is more important than the effect of a positive shock of the same magnitude. There always exists a strictly stationary solution
    equation

    and this solution possesses a moment of order 2 when

    equation

    which is the case, in particular, for images . In the Gaussian case, we have

    equation

    and

    equation

    using the calculations of Exercise 4.5 and

    equation

    Since images is an increasing function, provided images , we observe the leverage effect images .

  12. 4.12 For all ε > 0, we have
    equation

    and the conclusion follows from the Borel–Cantelli lemma.

  13. 4.13 Because the increments of the Brownian motion are independent and Gaussian, we have
    equation

    and

    equation

    as h → 0. The conclusion follows.

  14. 4.14 We have
    equation

    Using the hint, one can check that images and

    equation

    We thus conclude by noting that

    equation

Chapter 5

  1. 5.1 Let (ℱ t ) be an increasing sequence of σ ‐fields such that ε t  ∈ ℱ t and E(ε t  ∣ ℱ t − 1) = 0. For h > 0, we have ε t ε t + h  ∈ ℱ t + h and
    equation

    The sequence (ε t ε t + h , ℱ t + h ) t is thus a stationary sequence of square integrable martingale increments. We thus have

    equation

    where images . To conclude, it suffices to note that

    equation

    in probability (and even in L 2 ).

  2. 5.2 This process is a stationary martingale difference, whose variance is
    equation

    Its fourth‐order moment is

    equation

    Thus,

    equation

    Moreover,

    equation

    Using Exercise 5.1, we thus obtain

    equation
  3. 5.3 We have
    equation

    By the ergodic theorem, the denominator converges in probability (and even a.s.) to γ ε(0) = ω/(1 − α) ≠ 0. In view of Exercise 5.2, the numerator converges in law to images . Cramér's theorem then entails

    equation

    The asymptotic variance is equal to 1 when α = 0 (that is, when ε t is a strong white noise). Figure E.3 shows that the asymptotic distribution of the empirical autocorrelations of a GARCH can be very different from that of a strong white noise.


    Figure E.3 Comparison between the asymptotic variance of images for the ARCH(1) process (5.28) with (η t ) Gaussian (solid line) and the asymptotic variance of images when ε t is a strong white noise (dashed line).
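    A small simulation makes the phenomenon visible (an R sketch, not from the book; the choices ω = 1, α = 0.5 and Gaussian η t are arbitrary): for such an ARCH(1) process, n times the variance of the lag 1 empirical autocorrelation is well above 1, the value corresponding to a strong white noise.

    > set.seed(1)
    > sim.arch1 <- function(n, omega = 1, alpha = 0.5) {
    +   eta <- rnorm(n + 100); eps <- numeric(n + 100)
    +   for (t in 2:(n + 100)) eps[t] <- sqrt(omega + alpha * eps[t - 1]^2) * eta[t]
    +   eps[-(1:100)]                                  # drop the burn-in values
    + }
    > n <- 1000
    > rho1 <- replicate(1000, acf(sim.arch1(n), lag.max = 1, plot = FALSE)$acf[2])
    > n * var(rho1)                                    # clearly larger than 1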

  4. 5.4 Using Exercise 2.8, we obtain
    equation

    In view of Exercises 5.1 and 5.3,

    equation

    for any h ≠ 0.

  5. 5.5 Let ℱ t be the σ ‐field generated by {η u , u ≤ t}. If s + 2 > t + 1, then
    equation

    Similarly, Eε t ε t + 1ε s ε s + 2 = 0 when t + 1 > s + 2. When t + 1 = s + 2, we have

    equation

    because ε t − 1 σ t  ∈ ℱ t − 1 , images , E(η t  ∣ ℱ t − 1) = Eη t  = 0 and images . Using (7.24), the result can be extended to show that Eε t ε t + h ε s ε s + k  = 0 when k ≠ h and (ε t ) follows a GARCH(p, q), with a symmetric distribution for η t .

  6. 5.6 Since Eε t ε t + 1 = 0, we have Cov{ε t ε t + 1, ε s ε s + 2} = Eε t ε t + 1ε s ε s + 2 = 0 in view of Exercise 5.5. Thus
    equation
  7. 5.7 In view of Exercise 2.8, we have
    equation

    with ω = 1, α = 0.3 and β = 0.55. Thus γ ε(0) = 6.667, images , images . Thus

    equation

    for i = 1, …, 5. Finally, using Theorem 5.1,

    equation
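    The value of γ ε (0) is easily checked (a small numerical check, using the GARCH(1, 1) variance formula ω/(1 − α − β)):

    > omega <- 1; alpha <- 0.3; beta <- 0.55
    > omega / (1 - alpha - beta)     # = 6.667, the value of gamma_eps(0) quoted above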
  8. 5.8 Since γ X (ℓ) = 0, for all ∣ℓ ∣  > q , we clearly have
    equation

    Since images , γ ε(0) = ω/(1 − α) and

    equation

    (see, for instance, Exercise 2.8), we have

    equation

    Note that images as i → ∞.

  9. 5.9 Conditionally on initial values, the score vector is given by
    equation

    where ε t (θ) = Y t  − F θ (W t ). We thus have

    equation

    and, when σ 2 does not depend on θ ,

    equation
  10. 5.10 In the notation of Section 5.4.1 and denoting by images the parameter of interest, the log‐likelihood is equal to
    equation

    up to a constant. The constrained estimator is images , with images . The constrained score and the Lagrange multiplier are related by

    equation

    On the other hand, the exact laws of the estimators under H 0 are given by

    equation

    and

    equation

    with

    equation

    For the case images , we can estimate I 22 by

    equation

    The test statistic is then equal to

    E.3 equation

    with

    equation

    and where R 2 is the coefficient of determination (centred if X 1 admits a constant column) in the regression of images on the columns of X 2 . For the first equality of (E.3), we use the fact that, in a regression model of the form Y = Xβ + U , with obvious notation, Pythagoras's theorem yields

    equation

    In the general case, we have

    equation

    Since the residuals of the regression of Y on the columns of X 1 and X 2 are also the residuals of the regression of images on the columns of X 1 and X 2 , we obtain LM n by:

    1. computing the residuals images of the regression of Y on the columns of X 1 ;
    2. regressing images on the columns of X 2 and X 1 , and setting LM n  = nR 2 , where R 2 is the coefficient of determination of this second regression.
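    A minimal R sketch of this two-step computation (Y, X1 and X2 are hypothetical objects standing for the response and the two blocks of regressors; the intercept of lm plays the role of the constant column of X 1 ):

    > e <- residuals(lm(Y ~ X1))                 # step 1: residuals of the regression of Y on X1
    > aux <- lm(e ~ X1 + X2)                     # step 2: regress these residuals on X1 and X2
    > LM <- length(Y) * summary(aux)$r.squared   # LM_n = n R^2 of the second regression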
  11. 5.11 Since images , it is clear that images , with images and R 2 defined in representations (5.29) and (5.30).

    Since T is invertible, we have images , where Col(Z) denotes the vectorial subspace generated by the columns of the matrix Z , and

    equation

    If e ∈ Col(X) then images and

    equation

    Noting that images , we conclude that

    equation

Chapter 6

  1. 6.1 From the observations ε 1 , …, ε n , we can compute images and
    equation

    for h = 0, …, q . We then put

    equation

    and then, for k = 2, …, q (when q > 1),

    equation

    With standard notation, the OLS estimators are then

    equation
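    In practice, the OLS estimator can be computed, up to the treatment of the initial values, by regressing ε t ^2 on a constant and its q lagged values (an R sketch; eps is a hypothetical vector containing the observations ε 1 , …, ε n , and q = 2 is an arbitrary choice):

    > q <- 2
    > eps2 <- eps^2
    > X <- embed(eps2, q + 1)       # columns: eps2_t, eps2_{t-1}, ..., eps2_{t-q}
    > ols <- lm(X[, 1] ~ X[, -1])   # regress eps_t^2 on its q lags (with a constant)
    > coef(ols)                     # estimates of (omega, alpha_1, ..., alpha_q)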
  2. 6.2 The assumption that X has full column rank implies that X′X is invertible. Denoting by 〈⋅, ⋅〉 the scalar product associated with the Euclidean norm, we have
    equation

    and

    equation

    with equality if and only if images , and we are done.

  3. 6.3 We can take n = 2, q = 1, ε0 = 0, ε1 = 1, ε2 = 0. The calculation yields images .
  4. 6.4 Case 3 is not possible, otherwise we would have
    equation

    for all t , and consequently images , which is not possible.

    Using the data, we obtain images , and thus images . Therefore, the constrained estimate must coincide with one of the following three constrained estimates: that constrained by α 2 = 0, that constrained by α 1 = 0, or that constrained by α 1 = α 2 = 0. The estimate constrained by α 2 = 0 is images , and is thus not suitable. The estimate constrained by α 1 = 0 yields the desired estimate images .

  5. 6.5 First note that images . Thus images if and only if η t  = 0. The i th column of X , for i > 1, can be null only if η n − i + 1 = ⋯ = η 2 = η 1 = 0. The probability of this event tends to 0 as n → ∞ because, since images , we have P(η t  = 0) < 1.
  6. 6.6 Introducing an initial value X 0 , the OLS estimator of φ 0 is
    equation

    and this estimator satisfies

    equation

    Under the assumptions of the exercise, the ergodic theorem entails the almost sure convergence

    equation

    and thus the almost sure convergence of images to φ 0 . For the consistency, the assumption images suffices.

    If images , the sequence (η t X t − 1 , ℱ t ) t is a stationary and ergodic square integrable martingale difference, with variance

    equation

    We can see that this expectation exists by expanding the product

    equation

    The CLT of Corollary A.1 then implies that

    equation

    and thus

    equation

    When images , the condition images suffices for asymptotic normality.
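    A quick simulation (an R sketch covering the simplest case of the exercise, with iid Gaussian noise) illustrates the convergence of the OLS estimator:

    > set.seed(1)
    > n <- 5000; phi0 <- 0.6
    > X <- as.numeric(arima.sim(list(ar = phi0), n))   # AR(1) with iid Gaussian noise
    > sum(X[-1] * X[-n]) / sum(X[-n]^2)                # OLS estimate of phi0, close to 0.6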

  7. 6.7 By direct verification, A −1 A = I .
  8. 6.8
    1. Let images Then images solves the model
      equation
      Since the parameter ω 0 vanishes from this equation, the moments of images do not depend on it. It follows that images
    2. and 3. Write images to indicate that a matrix M is proportional to images . Partition the vector images into Z t − 1 = (1, W t − 1) and, accordingly, the matrices A and B of Theorem 6.2. Using the previous question and the notation of Exercise 6.7, we obtain
      equation

      We then have

      equation

      Similarly,

      equation

      It follows that C = A −1 BA −1 is of the form

      equation
  9. 6.9
    1. Let images . Let us show the existence of x * . Let (x n ) be a sequence of elements of C such that, for all n > 0, ‖x − x n ‖^2 < α^2 + 1/n . Using the parallelogram identity ‖a + b‖^2 + ‖a − b‖^2 = 2‖a‖^2 + 2‖b‖^2 , we have
      equation
      the last inequality being justified by the fact that (x m  + x n )/2 ∈ C , the convexity of C and the definition of α . It follows that (x n ) is a Cauchy sequence and, E being a Hilbert space and therefore a complete metric space, x n converges to some point x * . Since C is closed, x * ∈ C and ‖x − x *‖ ≥ α . We also have ‖x − x *‖ ≤ α , taking the limit on both sides of the inequality which defines the sequence (x n ). It follows that ‖x − x *‖ = α , which shows the existence.

      Assume that there exist two solutions of the minimisation problem in C , images and images . Using the convexity of C , it is then easy to see that images satisfies

      equation

      This is possible only if images (once again using the parallelogram identity).

    2. Let λ ∈ (0, 1) and y ∈ C . Since C is convex, (1 − λ)x * + λy ∈ C . Thus
      equation

      and, dividing by λ ,

      equation

      Taking the limit as λ tends to 0, we obtain inequality (6.17).

      Let z be such that, for all y ∈ C , 〈z − x, z − y〉 ≤ 0. We have

      equation

      the last inequality being simply the Cauchy–Schwarz inequality. It follows that ‖x − z‖ ≤ ‖x − y‖, ∀ y ∈ C . Since this property characterises x * in view of part 1, it follows that z = x * .

  10. 6.10
    1. It suffices to show that when C = K , (6.17) is equivalent to (6.18). Since 0 ∈ K , taking y = 0 in (6.17) we obtain 〈x − x *, x *〉 ≤ 0. Since x * ∈ K and K is a cone, 2x * ∈ K . For y = 2x * in (6.17) we obtain 〈x − x *, x *〉 ≥ 0, and it follows that 〈x − x *, x *〉 = 0. The second equation of (6.18) then follows directly from (6.17). The converse, (6.18) ⇒ (6.17), is trivial.
    2. Since x * ∈ K , then z = λx * ∈ K for λ ≥ 0. By (6.18), we have
      equation
      It follows that (λx)* = z and (a) is shown. The properties (b) are obvious, expanding ‖x * + (x − x *)‖^2 and using the first equation of (6.18).
  11. 6.11 The model is written as Y = X (1) θ (1) + X (2) θ (2) + U. Thus, since M 2 X (2) = 0, we have M 2 Y = M 2 X (1) θ (1) + M 2 U. Note that this is a linear model, of parameter θ (1) . Noting that images , since M 2 is an orthogonal projection matrix, the form of the estimator follows.
  12. 6.12 Since J n is symmetric, there exist a diagonal matrix D n and an orthogonal matrix P n such that images . For n large enough, the eigenvalues of J n are positive since images is positive definite. Let λ n be the smallest eigenvalue of J n . Denoting by ‖ ⋅ ‖ the Euclidean norm, we have
    equation

    Since images and images , it follows that images , and thus that X n converges to the zero vector of ℝ^k .

  13. 6.13 Applying the method of Section 6.3.2, we obtain X (1) = (1, 1) and thus, by Theorem 6.8, images

Chapter 7

  1. 7.1
    1. When j < 0, all the variables involved in the expectation, except ε t − j , belong to the σ ‐field generated by {ε t − j − 1 , ε t − j − 2 , …}. We conclude by taking the expectation conditionally on the previous σ ‐field and using the martingale increment property.
    2. For j ≥ 0, we note that images is a measurable function of images and of images Thus images is an even function of the conditioning variables, denoted by images .
    3. It follows that the expectation involved in the property can be written as
      equation
      The latter equality follows from the nullity of the integral, because the distribution of η t is symmetric.
  1. 7.2 By the Borel–Cantelli lemma, it suffices to show that for all real δ > 0, the series of general terms images converges. That is to say,
    equation

    using Markov's inequality, strict stationarity and the existence of a moment of order s > 0 for images .

  2. 7.3 For all κ > 0, the process images is ergodic and admits an expectation. This expectation is finite since images and images . We thus have, by the standard ergodic theorem,
    equation

    When κ → ∞, the variable images increases to X 1 . Thus by Beppo Levi's theorem images converges to E(X 1) =  + ∞. It follows that images tends almost surely to infinity.

  3. 7.4
    1. The assumptions made on f and Θ guarantee that images is a measurable function of η t , η t − 1, …. By Theorem A.1, it follows that (Y t ) is stationary and ergodic.
    2. If we remove condition (7.94), the property may not be satisfied. For example, let Θ = {θ 1, θ 2} and assume that the sequence (X t (θ 1), X t (θ 2)) is independent, with zero mean, each component being of variance 1 and the covariance between the two components being different when t is even and when t is odd. Each of the two processes (X t (θ 1)) and (X t (θ 2)) is stationary and ergodic (as iid processes). However, images is not stationary in general because its distribution depends on the parity of t .
  4. 7.5
    1. In view of (7.30) and of the second part of assumption A1, we have
      E.4 equation
      almost surely. Indeed, on a set of probability 1, we have for all ι > 0,
      E.5 equation

      Note that images and (7.29) entail that images . Since the limit superior in (E.5) is smaller than any positive number, it is zero.

    2. Note that images is the strong innovation of images . We thus have orthogonality between ν t and any integrable variable which is measurable with respect to the σ ‐field generated by θ0 :
      equation
      with equality if and only if images images ‐almost surely, that is, θ = θ 0 (by assumptions A3 and A4; see the proof of Theorem 7.1).
    3. We conclude that images is strongly consistent, as in (d) in the proof of Theorem 7.1, using a compactness argument and applying the ergodic theorem to show that, at any point θ 1 , there exists a neighbourhood V(θ 1) of θ 1 such that
      equation
    4. Since all we have done remains valid when Θ is replaced by any smaller compact set containing θ 0 , for instance Θ c , the estimator images is strongly consistent.
  5. 7.6 We know that images minimises, over Θ,
    equation

    For all c > 0, there exists images such that images for all t ≥ 0. Note that images if and only if c ≠ 1. For instance, for a GARCH(1, 1) model, if images we have images . Let images The minimum of f is obtained at the unique point

    equation

    If images , we have images . It follows that c 0 = 1 with probability 1, which proves the result.

  6. 7.7 The expression for I 1 is a trivial consequence of (7.74) and Covimages . Similarly, the form of I 2 directly follows from (7.38). Now consider the non‐diagonal blocks. Using (7.38) and (7.74), we obtain
    equation

    In view of (7.41), (7.42), (7.79) and (7.24), we have

    equation

    and

    equation

    It follows that

    E.6 equation

    and I is block‐diagonal. It is easy to see that I has the form given in the theorem. The expressions for J 1 and J 2 follow directly from (7.39) and (7.75). The block‐diagonal form follows from (7.76) and (E.6).

  7. 7.8
    1. We have images The parameters to be estimated are a and α , ω being known. We have
      equation
      It follows that
      equation

      Letting ℐ = (ℐ ij ), images and images , we then obtain

      equation
    2. In the case where the distribution of η t is symmetric we have μ 3 = 0 and, using (7.24), images . It follows that
      equation

      The asymptotic variance of the ARCH parameter estimator is thus equal to images : it does not depend on a 0 and is the same as that of the QMLE of a pure ARCH(1) (using computations similar to those used to obtain (7.1.2)).

    3. When α 0 = 0, we have images , and thus images . It follows that
      equation
      equation

      We note that estimating an overly complicated model (the true process being an AR(1) without ARCH effect) does not entail any asymptotic loss of accuracy for the estimation of the parameter a 0 : the asymptotic variance of the estimator is the same, images , as if the AR(1) model were directly estimated. This calculation also allows us to verify the ‘ α 0 = 0’ column in Table 7.3: for the images law we have μ 3 = 0 and κ η  = 3; for the normalized χ^2(1) distribution we find images and κ η  = 15.
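      The moments quoted for the normalized χ^2(1) distribution can be checked by simulation (a small check, not part of the original solution):

      > set.seed(1)
      > eta <- (rchisq(1e6, df = 1) - 1) / sqrt(2)   # chi-squared(1) variable, centred and scaled to variance 1
      > c(mean(eta^3), mean(eta^4))                  # mu_3 close to 2*sqrt(2) and kappa_eta close to 15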

  8. 7.9 Let ε > 0 and V(θ 0) be such that (7.95) is satisfied. Since images almost surely, for n large enough images almost surely. We thus have almost surely
    equation

    It follows that

    equation

    and, since ε can be chosen arbitrarily small, we have the desired result.

    In order to give an example where (7.95) is not satisfied, let us consider the autoregressive model X t  = θ 0 X t − 1 + η t where θ 0 = 1 and (η t ) is an iid sequence with mean 0 and variance 1. Let J t (θ) = X t  − θX t − 1 . Then J t (θ 0) = η t and the first convergence of the exercise holds true, with J = 0. Moreover, for all neighbourhoods of θ 0 ,

    equation

    almost surely because the sum in brackets converges to +∞, X t being a random walk and the supremum being strictly positive. Thus (7.95) is not satisfied. Nevertheless, we have

    equation

    Indeed, images converges in law to a non‐degenerate random variable (see, for instance, Hamilton 1994, p. 406) whereas images in probability since images has a non‐degenerate limit distribution.

  9. 7.10 It suffices to show that images is positive semi‐definite. Note that images . It follows that
    equation

    Therefore images is positive semi‐definite. Thus

    equation

    Setting x = Jy , we then have

    equation

    which proves the result.

  10. 7.11
    1. In the ARCH case, we have images . It follows that
      equation
      or equivalently images , that is, images We also have images , and thus
      equation
    2. Introducing the polynomial images , the derivatives of images satisfy
      equation
      It follows that
      equation
      In view of assumption A2 and Corollary 2.2, the roots of θ (L) are outside the unit disk, and the relation follows.
    3. It suffices to replace θ 0 by images in 1.
  11. 7.12 Only three cases have to be considered, the other ones being obtained by symmetry. If t 1 < min {t 2, t 3, t 4}, the result is obtained from (7.24) with g = 1 and t − j = t 1 . If t 2 = t 3 < t 1 < t 4 , the result is obtained from (7.24) with images t = t 2 and t − j = t 1 . If t 2 = t 3 = t 4 < t 1 , the result is obtained from (7.24) with images j = 0, t 2 = t 3 = t 4 = t and images .
  12. 7.13
    1. It suffices to apply (7.38), and then to apply Corollary 2.1.
    2. The result follows from the Lindeberg central limit theorem of Theorem A.3.
    3. Using Eq. (7.39) and the convergence of images to +∞,
      equation
    4. In view of Eq. (7.50) and the fact that images , we have
      equation
    5. The derivative of the criterion is equal to zero at images . A Taylor expansion of this derivative around α 0 then yields
      equation
      where α * is between images and α 0 . The result easily follows from the previous questions.
    6. When ω 0 ≠ 1, we have
      equation
      with
      equation

      Since d t  → 0 almost surely as t → ∞, the convergence in law of part 2 always holds true. Moreover,

      equation

      with

      equation

      which implies that the result obtained in Part 3 does not change. The same is true for Part 4 because

      equation

      Finally, it is easy to see that the asymptotic behaviour of images is the same as that of images , regardless of the value that is fixed for ω .

    7. In practice ω 0 is not known and must be estimated. However, it is impossible to estimate the whole parameter (ω 0, α 0) without the strict stationarity assumption. Moreover, under condition (7.14), the ARCH(1) model generates explosive trajectories which do not look like typical trajectories of financial returns.
  13. 7.14
    1. Consider a constant images . We begin by showing that images for n large enough. Note that
      equation
      We have
      equation

      In view of the inequality x ≥ 1 + log x for all x > 0, it follows that

      equation

      For all M > 0, there exists an integer t M such that images for all t > t M . This entails that

      equation

      Since M is arbitrarily large,

      E.7 equation

      provided that images . If images is chosen so that the constraint is satisfied, the inequalities

      equation

      and (E.7) show that

      E.8 equation

      We will define a criterion O n asymptotically equivalent to the criterion Q n . Since images a. s. as t → ∞, we have for α ≠ 0,

      equation

      where

      equation

      On the other hand, we have

      equation

      when α 0/α ≠ 1. We will now show that Q n (α) − O n (α) converges to zero uniformly in images . We have

      equation

      Thus for all M > 0 and any ε > 0, almost surely

      equation

      provided n is large enough. In addition to the previous constraints, assume that images . We have images for any images , and

      equation

      for any α ≥ α 0 . We then have

      equation

      Since M can be chosen arbitrarily large and ε arbitrarily small, we have almost surely

      E.9 equation

      For the last step of the proof, let images and images be two constants such that images . It can always be assumed that images . With the notation images , the solution of

      equation

      is images This solution belongs to the interval images when n is large enough. In this case

      equation

      is one of the two extremities of the interval images , and thus

      equation

      This result, (E.9), the fact that min α Q n (α) ≤ Q n (α 0) = 0 and (E.8) show that

      equation

      Since images is an arbitrarily small interval that contains α 0 and images , the conclusion follows.

    2. It can be seen that the constant 1 does not play any particular role and can be replaced by any other positive number ω . However, we cannot conclude that images almost surely because images , but images is not a constant. In contrast, it can be shown that under the strict stationarity condition images the constrained estimator images does not converge to α 0 when ω ≠ ω 0 .

Chapter 8

  1. 8.1 Let λ ∈ ℝ p be the Lagrange multiplier. We have to maximise the Lagrangian
    equation

    Since at the optimum

    equation

    the solution is such that images Since images we obtain images , and then the solution is

    equation
  2. 8.2 Let K be the p × n matrix such that K(1, i 1) = ⋯ = K(p, i p ) = 1 and whose other elements are 0. Using Exercise 8.1, the solution has the form
    E.10 equation

    Instead of the Lagrange multiplier method, a direct substitution method can also be used.

    The constraints images can be written as

    equation

    where H is n × (n − p), of full column rank, and x * is (n − p) × 1 (the vector of the non‐zero components of x ). For instance: (i) if n = 3, x 2 = x 3 = 0 then x * = x 1 and images ; (ii) if n = 3, x 3 = 0 then x * = (x 1, x 2) and images .

    If we denote by Col(H) the space generated by the columns of H , we thus have to find

    equation

    where ‖.‖ J is the norm images .

    This norm defines the scalar product 〈z, y〉 J  = z′Jy . The solution is thus the orthogonal (with respect to this scalar product) projection of x 0 on Col(H). The matrix of such a projection is

    equation

    Indeed, we have P^2 = P , PHz = Hz , thus Col(H) is P ‐invariant, and 〈Hy, (I − P)z〉 J  = y′H′J(I − P)z = y′H′Jz − y′H′JH(H′JH)^{−1} H′Jz = 0, thus z − Pz is orthogonal to Col(H).

    It follows that the solution is

    E.11 equation

    This last expression seems preferable to (E.10) because it only requires the inversion of the matrix H′JH of size n − p , whereas in (E.10) the inverse of J , which is of size n , is required.

  3. 8.3 In case (a), we have
    equation

    and then

    equation
    equation

    and, using (E.11),

    equation

    which gives a constrained minimum at

    equation

    In case (b), we have

    E.12 equation

    and, using (E.11), a calculation which is simpler than the previous one (we do not have to invert any matrix since H′JH is scalar) shows that the constrained minimum is at

    equation

    The same results can be obtained with formula (E.10), but the computations are longer, in particular because we have to compute

    E.13 equation
  4. 8.4 Matrix J −1 is given by (E.13). With the matrix K 1 = K defined by (E.12), and denoting by K 2 and K 3 the first and second rows of K , we then obtain
    equation

    It follows that the solution will be found among (a) λ = Z ,

    equation

    The value of Q(λ) is 0 in case (a), images in case (b), images in case (c) and images in case (d).

    To find the solution of the constrained minimisation problem, it thus suffices to take the value λ which minimizes Q(λ) among the subset of the four vectors defined in (a)–(d) which satisfy the positivity constraints of the two last components.

    We thus find the minimum at λ Λ = Z = (−2, 1, 2) in case (i), at

    equation
    equation

    and

    equation
  5. 8.5 Recall that for a variable Z ∼ 𝒩(0, 1), we have EZ⁺ =  − EZ⁻ = (2π)−1/2 and images . We have
    equation

    It follows that

    equation

    The coefficient of the regression of Z 1 on Z 2 is ω 0 . The components of the vector (Z 1 + ω 0 Z 2, Z 2) are thus uncorrelated and, this vector being Gaussian, they are independent. In particular images , which gives images . We thus have

    equation

    Finally,

    equation

    It can be seen that

    equation

    is a positive semi‐definite matrix.
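    These moment values are easily checked by simulation (a minimal sketch; the hidden second identity recalled at the beginning of this solution is presumably E(Z⁺)² = 1/2):

    set.seed(1)
    z <- rnorm(1e6)
    c(mean(pmax(z, 0)), 1 / sqrt(2 * pi))   # E(Z+) is close to (2*pi)^(-1/2)
    c(mean(pmax(z, 0)^2), 1 / 2)            # E(Z+)^2 is close to 1/2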

  6. 8.6 At the point θ 0 = (ω 0, 0, …, 0), we have
    equation

    and the information matrix (written for simplicity in the ARCH(3) case) is equal to

    equation

    This matrix is invertible (which is not the case for a general GARCH(p, q)). We finally obtain

    equation
  7. 8.7 We have images , and
    equation

    and thus

    equation

    In view of Theorem 8.1 and (8.15), the asymptotic distribution of images is that of the vector λ Λ defined by

    equation

    We have images , thus

    equation

    Since the components of the Gaussian vector (Z 1 + ω 0 Z 2, Z 2) are uncorrelated, they are independent, and it follows that

    equation

    We then obtain

    equation

    Let f(z 1, z 2) be the density of Z , that is, the density of a centred normal with variance (κ η  − 1)J −1 . It is easy to show that the distribution of images admits the density images and to check that this density is asymmetric.

    A simple calculation yields images . From images , we then obtain images . And from images we obtain images . Finally, we obtain

    equation
  8. 8.8 The statistic of the C test is 𝒩(0, 1) distributed under H 0 . The p ‐value of C is thus images . Under the alternative, we have almost surely images as n →  + ∞. It can be shown that log{1 − Φ(x)} ∼ −x²/2 in the neighbourhood of +∞. In Bahadur's sense, the asymptotic slope of the C test is thus
    equation

    The p ‐value of C * is images . Since log[2{1 − Φ(x)}] ∼ −x²/2 in the neighbourhood of +∞, the asymptotic slope of C * is also c *(θ) = θ² for θ > 0. The C and C * tests having the same asymptotic slope, they cannot be distinguished by the Bahadur approach.

    We know that C is uniformly more powerful than C * . The local power of C is thus also greater than that of C * for all τ > 0. This remains true asymptotically as n → ∞, even if the sample is not Gaussian. Indeed, under the local alternatives images , and for a regular statistical model, the statistic images is asymptotically 𝒩(τ, 1) distributed. The local asymptotic power of C is thus γ(τ) = 1 − Φ(c − τ) with c = Φ−1(1 − α). The local asymptotic power of C * is γ *(τ) = 1 − Φ(c * − τ) + Φ(−c * − τ), with c * = Φ−1(1 − α/2). The difference between the two asymptotic powers is

    equation

    and, denoting the 𝒩(0, 1) density by φ(x), we have

    equation

    where

    equation

    Since 0 < c < c * , we have

    equation

    Thus, g(τ) is decreasing on [0, ∞). Note that g(0) > 0 and images . The sign of g(τ), which is also the sign of D′(τ), is positive when τ ∈ [0, a] and negative when τ ∈ [a, ∞), for some a > 0. The function D thus increases on [0, a] and decreases on [a, ∞). Since D(0) = 0 and images , we have D(τ) > 0 for all τ > 0. This shows that, in Pitman's sense, the test C is, as expected, locally more powerful than C * in the Gaussian case, and locally asymptotically more powerful than C * in a much more general framework.
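    The positivity of D(τ) can also be checked numerically (a minimal sketch, taking α = 0.05):

    alpha <- 0.05
    c1 <- qnorm(1 - alpha)          # critical value of the one-sided test C
    c2 <- qnorm(1 - alpha / 2)      # critical value of the two-sided test C*
    tau <- seq(0, 6, by = 0.01)
    pow.C     <- 1 - pnorm(c1 - tau)
    pow.Cstar <- 1 - pnorm(c2 - tau) + pnorm(-c2 - tau)
    D <- pow.C - pow.Cstar
    min(D[tau > 0]) > 0             # TRUE: the local asymptotic power of C dominates
    # plot(tau, D, type = "l")      # D increases, then decreases back towards 0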

  9. 8.9 The Wald test uses the fact that
    equation

    To justify the score test, we remark that the log‐likelihood constrained by H 0 is

    equation

    which gives images as constrained estimator of σ 2 . The derivative of the log‐likelihood satisfies

    equation

    at images . The first component of this score vector is asymptotically 𝒩(0, 1) distributed under H 0 . The third test is of course the likelihood ratio test, because the unconstrained log‐likelihood at the optimum is equal to images whereas the maximal value of the constrained log‐likelihood is images . Note also that images under H 0 .

    The asymptotic level of the three tests is of course α , but using the inequality images for x > 0, we have

    equation

    with almost surely strict inequalities in finite samples, and also asymptotically under H 1 . This leads us to think that the Wald test will reject more often under H 1 .

    Since images is invariant by translation of the X i , images tends almost surely to σ 2 both under H 0 and under H 1 , as well as under the local alternatives images . The behaviour of images under H n (τ) is the same as that of images under H 0 , and because

    equation

    under H 0 , we have images both under H 0 and under H n (τ). Similarly, it can be shown that images under H 0 and under H n (τ). Using these two results and x/(1 + x)∼ log(1 + x) in the neighbourhood of 0, it can be seen that the statistics L n , R n and W n are equivalent under H n (τ). Therefore, the Pitman approach cannot distinguish the three tests.

    Using images for x in the neighbourhood of +∞, the asymptotic Bahadur slopes of the tests C 1 , C 2 and C 3 are, respectively

    equation

    Clearly

    equation

    Thus the ranking of the tests, in increasing order of relative efficiency in the Bahadur sense, is

    equation

    All the foregoing remains valid for a regular non‐Gaussian model.
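    A small simulation illustrates the ordering of the three statistics discussed above (a minimal sketch; the explicit forms W n = n X̄²/σ̂², L n = n log(σ̃²/σ̂²) and R n = n X̄²/σ̃², with σ̂² the unconstrained and σ̃² the constrained variance estimator, are assumptions reconstructed from the standard Gaussian setting, the displayed formulas of the solution not being reproduced here):

    set.seed(2)
    n <- 200
    x <- rnorm(n, mean = 0.1)           # a sample under a mild alternative
    sig2.hat   <- mean((x - mean(x))^2) # unconstrained variance estimator
    sig2.tilde <- mean(x^2)             # variance estimator constrained by the null
    Wn <- n * mean(x)^2 / sig2.hat              # Wald
    Ln <- n * log(sig2.tilde / sig2.hat)        # likelihood ratio
    Rn <- n * mean(x)^2 / sig2.tilde            # score
    c(Wn, Ln, Rn)                       # always ordered Wn >= Ln >= Rn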

  10. 8.10 In Example 8.2, we saw that
    equation

    Note that Var(Z d )c corresponds to the last column of VarZ = (κ η  − 1)J −1 . Thus c is the last column of J −1 divided by the (d, d)th element of this matrix. In view of Exercise 6.7, this element is images . It follows that images and images . By (8.24), we thus have

    equation

    This shows that the statistic 2/(κ η  − 1)L n has the same asymptotic distribution as the Wald statistic W n , that is, the distribution images in the case d 2 = 1.

  11. 8.11 Using (8.29) and Exercise 8.6, we have
    equation

    The result then follows from (8.30).

  12. 8.12 Since XY = 0 almost surely, we have P(XY ≠ 0) = 0. By independence, we have P(XY ≠ 0) = P(X ≠ 0 and Y ≠ 0) = P(X ≠ 0)P(Y ≠ 0). It follows that P(X ≠ 0) = 0 or P(Y ≠ 0) = 0.

Chapter 9

  1. 9.1 Substituting y = x/σ t , and then integrating by parts, we obtain
    equation

    Since images and images belong to the σ ‐field ℱ t − 1 generated by {η u  : u < t}, and since the distribution of ε t given ℱ t − 1 has the density images , we have

    equation

    and the result follows. We can also appeal to the general result that a score vector is centred.

  2. 9.2 It suffices to use integration by parts.
  3. 9.3 We have
    equation

    Thus

    equation

    when X ∼ 𝒩(θ 0, σ 2), and

    equation

    when X ∼ 𝒩(θ, σ 2). Note that

    equation

    as in Le Cam's third lemma.

  4. 9.4 Recall that
    equation

    and

    equation

    Using the ergodic theorem, the fact that (1 − |η t | λ ) is centred and independent of the past, as well as elementary calculations of derivatives and integrals, we obtain

    equation
    equation

    and

    equation

    almost surely.

  5. 9.5 Jensen's inequality entails that
    equation

    where the inequality is strict if σf(ησ)/f(η) is non‐constant. If this ratio of densities were almost surely constant, it would be almost surely equal to 1, and we would have

    equation

    which is possible only when σ = 1.

  6. 9.6 It suffices to note that images .
  7. 9.7 The second‐order moment of the double Γ(b, p) distribution is p(p + 1)/b 2 . Therefore, to have images , the density f of η t must be the double images . We then obtain
    equation

    Thus images and images . We then show that κ η  ≔  ∫ x 4 f p (x)dx = (3 + p)(2 + p)/{p(p + 1)}. It follows that images .

    To compare the ML and Laplace QML, it is necessary to normalise in such a way that E ∣ η t  ∣  = 1, that is, to take the double Γ(p, p) as density f . We then obtain 1 + x f′(x)/f(x) = p − p ∣ x ∣. We always have images , and we have images . It follows that images , which was already known from Exercise 9.6. This allows us to construct a table similar to Table 9.5.

  8. 9.8 Consider the first instrumental density of the table, namely
    equation

    Denoting by c any constant whose value can be ignored, we have

    equation

    and thus

    equation

    Now consider the second density,

    equation

    We have

    equation

    which gives

    equation

    Consider the last instrumental density,

    equation

    We have

    equation

    and thus

    equation

    In each case, images does not depend on the parameter λ of h . We conclude that the estimators images exhibit the same asymptotic behaviour, regardless of the parameter λ . It can even be easily shown that the estimators themselves do not depend on λ .

  9. 9.9
    1. The Laplace QML estimator applied to a GARCH in standard form (as defined in Example 9.4) is an example of such an estimator.
    2. We have
      equation
    3. Since
      equation
      we have
      equation

      It follows that, using obvious notation,

      equation
  10. 9.10 After reparameterisation, the result (9.27) applies with η t replaced by images , and θ 0 by
    equation

    where ϱ =  ∫  ∣ x ∣ f(x)dx . Thus, using Exercise 9.9, we obtain

    equation

    with images .

Chapter 10

  1. 10.1 The number of parameters of the diagonal GARCH(p, q) model is
    equation

    that of the vectorial model is

    equation

    that of the CCC model is

    equation

    that of the BEKK model is

    equation

    For p = q = 1 and m = 2, 3, 5, 10, we obtain Table E.1.

    Table E.1 Number of parameters as a function of m.

    Model       m = 2   m = 3   m = 5   m = 10
    Diagonal        9      18      45      165
    Vectorial      21      78     465     6105
    CCC            19      60     265     2055
    BEKK           11      24      96      186
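    The first two rows of Table E.1 can be reproduced as follows (a minimal sketch; the counting formulas 3m(m + 1)/2 for the diagonal model and m(m + 1)/2 + 2{m(m + 1)/2}² for the vectorial model are assumptions, consistent with the table):

    m <- c(2, 3, 5, 10)
    m.star <- m * (m + 1) / 2
    rbind(diagonal  = 3 * m.star,            # omega, a and b for each pair (i, j)
          vectorial = m.star + 2 * m.star^2) # vech intercept plus two m* x m* matrices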
  2. 10.2 Assume (10.100) and define images . We have (10.101) because
    equation

    and

    equation

    Conversely, it is easy to check that (10.101) implies (10.100).

  3. 10.3
    1. Since images and images are independent, we have
      equation
      which shows that images is constant.
    2. We have
      equation
      which is nonzero only if images , thus images and images take only one value.
    3. Assume that there exist two events images and images such that images and images . The independence then entails that
      equation
      and we obtain a contradiction.
  4. 10.4 For all images , there exists a symmetric matrix images such that images , and we have
    equation
  5. 10.5 The matrix images being symmetric and real, there exist an orthogonal matrix images (images ) and a diagonal matrix images such that images . Thus, denoting by images the (positive) eigenvalues of images , we have
    equation

    where images has the same norm as images . Assuming, for instance, that images we have

    equation

    Moreover, this maximum is reached at images .

    An alternative proof is obtained by noting that images solves the maximization problem of the function images under the constraint images . Introduce the Lagrangian

    equation

    The first‐order conditions yield the constraint and

    equation

    This shows that the constrained optimum is located at a normalized eigenvector images associated with an eigenvalue images of images , images . Since images , we of course have images .
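    A numerical illustration of this extremal property (a minimal sketch):

    set.seed(3)
    m <- 4
    A <- crossprod(matrix(rnorm(m * m), m, m))   # a symmetric positive definite matrix
    e <- eigen(A, symmetric = TRUE)
    quad <- function(x) { x <- x / sqrt(sum(x^2)); drop(t(x) %*% A %*% x) }
    max(replicate(1e4, quad(rnorm(m)))) <= e$values[1]   # TRUE: no unit vector beats lambda_max
    quad(e$vectors[, 1]) - e$values[1]                   # numerically zero: the bound is attained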

  6. 10.6 Since all the eigenvalues of the matrix images are real and positive, the largest eigenvalue of this matrix is less than the sum of all its eigenvalues, that is, of its trace. Using the second equality of (10.67), the first inequality of (10.68) follows. The second inequality follows from the same arguments, and noting that there are images eigenvalues. The last inequality uses the fact that the determinant is the product of the eigenvalues and that each eigenvalue is less than images .

    The first inequality of (10.69) is a simple application of the Cauchy–Schwarz inequality. The second inequality of (10.69) is obtained by twice applying the second inequality of (10.68).

  7. 10.7 For the positivity of images for all images , it suffices to require images to be symmetric positive definite, and the initial values images to be symmetric positive semi‐definite. Indeed, if the images are symmetric and positive semi‐definite then images is symmetric if and only if images is symmetric, and we have, for all images ,
    equation

    We now give a second‐order stationarity condition. If images exists, then this matrix is symmetric positive semi‐definite and satisfies

    equation

    that is,

    equation

    If images is positive definite, it is then necessary to have

    (E.14) equation

    For the converse, we use Theorem 10.5. Since the matrices images are of the form images with images , the condition images is equivalent to (E.14). This condition is thus sufficient to obtain stationarity, under technical condition (ii) of Theorem 10.5 (which can perhaps be relaxed). Let us also mention that, by analogy with the univariate case, it is certainly possible to obtain strict stationarity under a condition weaker than (E.14).

  8. 10.8 For the convergence in images , it suffices to show that images is a Cauchy sequence:
    equation

    when images . To show the almost sure convergence, let us begin by noting that, using Hölder's inequality,

    equation

    with images and images . Let images , images for images , and images which is defined in images , a priori. Since

    equation

    it follows that images is almost surely defined in images and images is almost surely defined in images . Since images , we have images almost surely.

  9. 10.9 It suffices to note that images is the correlation matrix of a vector of the form images , where images and images are independent vectors of the respective correlation matrices images and images .
  10. 10.10 Since the images are linearly independent, there exist vectors images such that images forms a basis of images and such that images for all images and all images . We then have
    equation

    and it suffices to take

    equation

    The conditional covariance between the factors images and images , for images , is

    equation

    which is a nonzero constant in general.

  11. 10.11 As in the proof of Exercise 10.10, define vectors images such that
    equation

    Denoting by images the images th vector of the canonical basis of images , we have

    equation

    and we obtain the BEKK representation with images ,

    equation
  12. 10.12 Consider the Lagrange multiplier images and the Lagrangian images . The first‐order conditions yield
    equation

    which shows that images is an eigenvector associated with an eigenvalue images of images . Left‐multiplying the previous equation by images , we obtain

    equation

    which shows that images must be the largest eigenvalue of images . The vector images is unique, up to its sign, provided that the largest eigenvalue has multiplicity order 1.

    An alternative way to obtain the result is based on the spectral decomposition of the symmetric positive definite matrices

    equation

    Let images , that is, images . Maximizing images is equivalent to maximizing images . The constraint images is equivalent to the constraint images . Denoting by images the components of images , the function images is maximized at images under the constraint, which shows that images is the first column of images , up to the sign. We also see that other solutions exist when images . It is now clear that the vector images contains the images principal components of the variance matrix images .
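    The same conclusion can be checked numerically (a minimal sketch, using an arbitrary covariance matrix in place of the conditional variance matrix of the exercise):

    set.seed(4)
    m <- 3
    S <- cov(matrix(rnorm(200 * m), 200, m) %*% matrix(runif(m * m), m, m))
    e <- eigen(S, symmetric = TRUE)
    u1 <- e$vectors[, 1]                       # maximiser of u'Su under ||u|| = 1 (up to sign)
    c(drop(t(u1) %*% S %*% u1), e$values[1])   # the maximal variance is the largest eigenvalue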

  13. 10.13 All the elements of the matrices images , images and images are positive. Consequently, when images is diagonal, using Exercise 10.4, we obtain
    equation

    element by element. This shows that images is diagonal, and the conclusion easily follows.

  14. 10.14 With the abuse of notation images , the property yields
    equation

    The proof is completed by induction on images .

  15. 10.15 Let images be the symmetric positive definite matrix defined by images . If images is an eigenvector associated with the eigenvalue images of the symmetric positive definite matrix images , then we have images , which shows that the eigenvalues of images and images are the same. Write the spectral decomposition as images where images is diagonal and images . We have images , with images .
  16. 10.16 Let images be a nonzero vector such that
    equation

    On the right‐hand side of the equality, the term in parentheses is nonnegative and the last term is positive, unless images . But in this case images and the term in parentheses becomes images .

  17. 10.17 Take the random matrix images , where images with images . Obviously images is never positive definite because this matrix always possesses the eigenvalue 0 but, for all images , images with probability 1.

Chapter 11

  1. 11.1 For images , we obtain the geometric Brownian motion whose solution, in view of (11.13), is equal to
    equation

    By Itô's formula, the SDE satisfied by images is

    equation

    Using the hint, we then have

    equation

    It follows that

    equation

    The positivity follows.
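    The closed form recalled above, and the positivity, can be illustrated by simulation (a minimal sketch, assuming the geometric Brownian motion dX t  = μX t dt + σX t dW t with exact solution X t  = X 0 exp{(μ − σ²/2)t + σW t }):

    set.seed(5)
    mu <- 0.05; sigma <- 0.2; x0 <- 1
    N <- 1e4; delta <- 1 / N                    # Euler grid on [0, 1]
    dW <- rnorm(N, sd = sqrt(delta))
    x.euler <- x0
    for (dw in dW) x.euler <- x.euler * (1 + mu * delta + sigma * dw)
    x.exact <- x0 * exp((mu - sigma^2 / 2) + sigma * sum(dW))   # exact solution at t = 1
    c(x.euler, x.exact)                         # close to each other, and both positive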

  2. 11.2 It suffices to check the conditions of Theorem 11.1, with the Markov chain images . We have
    equation

    this inequality being uniform on any ball of radius images . The assumptions of Theorem 11.1 are thus satisfied.

  3. 11.3 One may take, for instance,
    equation

    It is then easy to check that the limits in (11.23) and (11.25) are null. The limiting diffusion is thus

    equation

    The solution of the images equation is, using Exercise 11.1 with images ,

    equation

    where images is the initial value. It is assumed that images and images , in order to guarantee the positivity. We have

    equation
  4. 11.4 In view of (11.34), the put price is images . We have seen that the discounted price is a martingale under the risk‐neutral probability. Thus images . Moreover,
    equation

    The result is obtained by multiplying this equality by images and taking the expectation with respect to the probability images .

  5. 11.5 A simple calculation shows that images
  6. 11.6 In view of (11.36), Itô's formula applied to images yields
    equation

    with, in particular, images . In view of Exercise 11.5, we thus have

    equation
  7. 11.7 Given observations images of model (11.31), and an initial value images , the maximum likelihood estimators of images and images are, in view of (11.33),
    equation

    The maximum likelihood estimator of images is then

    equation
  8. 11.8 Denoting by images the density of the standard normal distribution, we have
    equation

    It is easy to verify that images . It follows that

    equation

    The option buyer wishes to be hedged against risk: he thus agrees to pay more when the asset is riskier.
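    This monotonicity is easily visualised (a minimal sketch, assuming the standard Black–Scholes call price C = SΦ(d 1) − Ke^{−rT}Φ(d 2); the exercise's exact notation is not reproduced here):

    bs.call <- function(S, K, r, sigma, mat) {
      d1 <- (log(S / K) + (r + sigma^2 / 2) * mat) / (sigma * sqrt(mat))
      d2 <- d1 - sigma * sqrt(mat)
      S * pnorm(d1) - K * exp(-r * mat) * pnorm(d2)
    }
    sigma <- seq(0.05, 0.6, by = 0.05)
    prices <- bs.call(S = 100, K = 100, r = 0.03, sigma = sigma, mat = 1)
    all(diff(prices) > 0)   # TRUE: the call price increases with the volatility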

  9. 11.9 We have images , where images . It follows that
    equation

    The property immediately follows.

  10. 11.10 The volatility is of the form images with images . Using the results of Chapter 2, the strict and second‐order stationarity conditions are
    equation

    We are in the framework of model (11.44), with images . Thus the risk‐neutral model is given by (11.47), with images and

    equation
  11. 11.11 The constraints (11.41) can be written as
    equation

    It can easily be seen that if images we have, for images and for all images ,

    equation

    Writing

    equation

    we thus obtain

    equation

    and writing

    equation

    we have

    equation

    It follows that

    equation

    Thus

    equation

    There are infinitely many possible choices for images and images . For instance, if images , one can take images and images with images . Then images follows. The risk‐neutral probability images is obtained by calculating

    equation

    Under the risk‐neutral probability, we thus have the model

    (E.15) equation

    Note that the volatilities of the two models (under historical and risk‐neutral probability) do not coincide unless images for all images .

  12. 11.12 We have images . It can be shown that the distribution of images has the density images . At horizon 2, the VaR is thus the solution images of the equation images . For instance, for images we obtain images , whereas images . The VaR is thus underestimated when the incorrect rule is applied, but for other values of images it may be overestimated: images , whereas images .
  13. 11.13 We have
    equation

    Thus, introducing the notation images ,

    equation

    The conditional law of images is thus the images distribution, and (11.58) follows.

  14. 11.14 We have
    equation

    At horizon 2, the conditional distribution of images is not Gaussian if images , because its kurtosis coefficient is equal to

    equation

    There is no explicit formula for images when images .

  15. 11.15 It suffices to note that, conditionally on the available information images , we have
    equation
  16. 11.16 For simplicity, in this proof we will omit the indices. Since images has the same distribution as images , where images denotes a variable uniformly distributed on images , we have
    equation

    Using (11.63), the desired equality follows.

  17. 11.17 The monotonicity, homogeneity and invariance properties follow from (11.62) and from the VaR properties. For images we have
    equation

    Note that

    equation

    because the two bracketed terms have the same sign. It follows that

    equation

    The property is thus shown.
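    The remaining property (presumably subadditivity, the displayed inequalities being hidden) can be illustrated by simulation (a minimal sketch; the expected shortfall of a loss at level α is approximated here by the average of the losses exceeding their empirical (1 − α)‐quantile, a convention that may differ from the book's notation):

    set.seed(6)
    es <- function(loss, alpha) mean(loss[loss >= quantile(loss, 1 - alpha)])
    n <- 1e5; alpha <- 0.05
    l1 <- rt(n, df = 4)                  # two dependent, heavy-tailed losses
    l2 <- 0.5 * l1 + rt(n, df = 4)
    es(l1 + l2, alpha) <= es(l1, alpha) + es(l2, alpha)   # TRUE: subadditivity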

  18. 11.18 The volatility equation is
    equation

    It follows that

    equation

    We have images , the inequality being strict because the distribution of images is nondegenerate. In view of Theorem 2.1, this implies that images a.s., and thus that images a.s., when images tends to infinity.

  19. 11.19 Given, for instance, images , we have images and the distribution of
    equation

    is not normal. Indeed, images and images , but images . Similarly, the variable

    equation

    is centered with variance 2, but is not normally distributed because

    equation

    Note that the distribution is much more leptokurtic when images is close to 0.

Chapter 12

  1. 12.1 Recall that the expectation of an infinite product of independent variables is not necessarily equal to the product of the expectations (see Exercise 4.2). This explains why it seems necessary to impose the finiteness of the product of the E{exp(σ ∣ β i images ∣)} (instead of the α i s).

    We have, using the independence assumptions on the sequences (η t ) and (images ),

    equation

    provided that the expectation of the term between braces exists and is finite. To show this, write

    equation

    We have

    equation

    Thus EZ t, ∞ < ∞. Similarly, the same arguments show that

    equation

    Moreover, for all k > 0,

    equation

    using again the independence between (η t ) and (images ).

  2. 12.2 We have, for any x ≥ 0:
    equation

    where the last equality holds because η t and (h t ) are independent, and because the law of η t is symmetric. The same arguments show that images for x ≥ 0. Thus, there is a one‐to‐one relation between the law of ε t and that of images . In addition the law of ε t is symmetric. Similarly, for any n ≥ 1, one can show that there is a one‐to‐one relation between the law of (ε t , …, ε t + n ) and that of images . When the distribution of η t is not symmetric, the fourth equality of the previous computation fails.

  3. 12.3 Let Y t  = log h t . The process (Y t ) being the solution of the AR(1) model Y t  = ω + βY t − 1 + σv t , its mean and autocovariance function are given by
    equation

    From the independence between (Y t ) and (Z t ), (X t ) is a second‐order process whose mean and autocovariance function are obtained as follows:

    equation

    Since γ X (k) = βγ X (k − 1),  ∀ k > 1, the process (X t ) admits an ARMA(1,1) representation of the form (12.7). The constant α is deduced from the first two autocovariances of (X t ). By Eq. (12.7) we have, denoting by images the variance of the noise in this representation,

    equation

    Hence, if images , the coefficient α is a solution of

    (E.16) equation

    and the solution of modulus less than 1 is given by

    equation

    Moreover, the variance of the noise in model (12.7) is images if β ≠ 0 (and images if β = 0). Finally, if images the relation γ X (k) = βγ X (k − 1) also holds for k = 1 and (X t ) is an AR(1) (i.e. α = 0 in model (12.7)).

    Now, when β ≠ 0 and σ ≠ 0, we get images , using (E.16). It follows that either 0 < α < β < 1/α or 0 > α > β > 1/α . In particular ∣ α ∣  <  ∣ β ∣, which shows that the orders of the ARMA(1,1) representation for X t are exact.
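    The relation γ X (k) = βγ X (k − 1) for k ≥ 2 (but not for k = 1) is easily checked by simulation (a minimal sketch, with an arbitrary iid noise (Z t ) independent of (Y t ); the parameter values are illustrative):

    set.seed(7)
    n <- 5e4; omega <- 0.1; beta <- 0.9; sigma <- 0.5
    y <- numeric(n); y[1] <- omega / (1 - beta)
    v <- rnorm(n)
    for (t in 2:n) y[t] <- omega + beta * y[t - 1] + sigma * v[t]
    x <- y + rnorm(n)                    # X_t = Y_t + Z_t with iid noise Z_t
    g <- acf(x, lag.max = 3, type = "covariance", plot = FALSE)$acf
    c(g[3] / g[2], g[4] / g[3], beta)    # gamma_X(k)/gamma_X(k-1) is close to beta for k >= 2
    # g[2] / g[1] differs from beta: this is the MA part of the ARMA(1,1) representation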

  4. 12.4 By expansion (12.3) and arguments used to prove Proposition 12.7, we obtain
    equation

    We also have,

    equation

    Thus

    equation

    for ρ ≠ 0 and β ∣  < 1.

  5. 12.5 The estimated models on the return series {r t , t = 2, …, 2122} and {r t , t = 2123, …, 4245} have the volatilities
    equation

    Denote by θ (1) = (0.098, 0.087, 0.84) and θ (2) = (0.012, 0.075, 0.919) the parameters of the two models. The estimated values of ω and β seem quite different. Denote by images and images the estimated standard deviations of the estimators of ω and β of Model Mi . It turns out that the confidence intervals

    equation

    and

    equation

    have empty intersection. The same holds true for the confidence intervals

    equation

    and

    equation

    The third graph of Figure E.4 displays the boxplot of the distribution of images over 100 independent simulations of Model M1. The difference θ (2) − θ (1) between the parameters of M1 and M2 is marked by a diamond. This difference is an outlier for the distribution of images , meaning that the GARCH models estimated on the two periods are significantly different.


    Figure E.4 The parameter θ (1) (respectively, θ (2) ) is that of a GARCH(1,1) fitted on the CAC 40 returns from March 1, 1990 to September 3, 1998 (respectively, from September 4, 1998 to December 29, 2006). The box plots display the empirical distributions of the estimated parameters images on 100 simulations of the model fitted on the first part of the CAC.

  6. 12.6 images is a Markov chain, whose initial probability distribution is given by: images , and whose transition matrix is:
    (E.17) equation

    The number of balls in urn images changes parity (from odd to even and back) at each step. For instance images . Thus the chain is irreducible but periodic.

    Using the formula images , it can be seen that images is an invariant law. It follows that images for all images .

    When the initial distribution is the Dirac mass at 0 we have images when images is odd, and images when images is even. Thus images does not exist.

  7. 12.7 Let i and j be two different states, and let d(i) be the period of state i . If the chain is irreducible, there exists an integer m 1 such that images and m 2 such that images . The integer d(i) divides m 1 + m 2 since images . Similarly, d(i) divides m 1 + m + m 2 for all m ∈ {m : p (m)(j, j) > 0}. Writing m 1 + m + m 2 = k 1 d(i) and m 1 + m 2 = k 2 d(i), we obtain m = (m 1 + m + m 2) − (m 1 + m 2) = (k 1 − k 2)d(i), so d(i) divides m for all m ∈ {m : p (m)(j, j) > 0}. Since d(j) is the gcd of {m : p (m)(j, j) > 0}, and we have just shown that d(i) is a common divisor of all the elements of this set, it follows that d(i) ≤ d(j). By symmetry, we also have d(j) ≤ d(i).
  8. 12.8 The key part of the code is the following:
    # one iteration (E step + M step) of the EM algorithm for a
    # Markov-switching normal model with d regimes of variances omega
    EM <- function(omega, pi0, p, y){
      d <- length(omega)
      n <- length(y)                     # y contains the n observations
      vrais <- 0                         # log-likelihood of the current parameters
      pit.t <- matrix(0, nrow = d, ncol = n)        # filtered probabilities P(regime | y_1..y_t)
      pit.tm1 <- matrix(0, nrow = d, ncol = n + 1)  # predicted probabilities P(regime | y_1..y_{t-1})
      vecphi <- rep(0, d)
      pit.tm1[, 1] <- pi0
      # forward pass: filtering and computation of the log-likelihood
      for (t in 1:n) {
        for (j in 1:d) vecphi[j] <- dnorm(y[t], mean = 0, sd = sqrt(abs(omega[j])))
        den <- sum(pit.tm1[, t] * vecphi)
        if (den <= 0) return(Inf)
        pit.t[, t] <- (pit.tm1[, t] * vecphi) / den
        pit.tm1[, t + 1] <- t(p) %*% pit.t[, t]
        vrais <- vrais + log(den)
      }
      # backward pass: smoothed probabilities P(regime | y_1..y_n)
      pit.n <- matrix(0, nrow = d, ncol = n)
      pit.n[, n] <- pit.t[, n]
      for (t in n:2) {
        for (i in 1:d) {
          pit.n[i, t - 1] <- pit.t[i, t - 1] * sum(p[i, 1:d] * pit.n[1:d, t] / pit.tm1[1:d, t])
        }
      }
      # smoothed joint probabilities P(regime i at t-1, regime j at t | y_1..y_n)
      pitm1et.n <- array(0, dim = c(d, d, n))
      for (t in 2:n) {
        for (i in 1:d) {
          for (j in 1:d) {
            pitm1et.n[i, j, t] <- p[i, j] * pit.t[i, t - 1] * pit.n[j, t] / pit.tm1[j, t]
          }
        }
      }
      # M step: update the regime variances, the initial law and the transition matrix
      omega.final <- omega
      pi0.final <- pi0
      p.final <- p
      for (i in 1:d) {
        omega.final[i] <- sum((y[1:n]^2) * pit.n[i, 1:n]) / sum(pit.n[i, 1:n])
        pi0.final[i] <- pit.n[i, 1]
        for (j in 1:d) {
          p.final[i, j] <- sum(pitm1et.n[i, j, 2:n]) / sum(pit.n[i, 1:(n - 1)])
        }
      }
      # return the smoothed probabilities, the updated parameters and the log-likelihood
      liss <- list(probaliss = pit.n, probatransliss = pitm1et.n, vrais = vrais,
                   omega.final = omega.final, pi0.final = pi0.final, p.final = p.final)
      liss
    }
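    A hypothetical driver for this function (a minimal sketch; EM() is the function above, and the two‐regime data below is an ad hoc simulated example):

    # simulate a two-regime Markov-switching normal sample (hypothetical example)
    set.seed(8)
    n <- 500
    p.true <- matrix(c(0.95, 0.05, 0.10, 0.90), 2, 2, byrow = TRUE)
    omega.true <- c(1, 9)                       # regime variances
    state <- numeric(n); state[1] <- 1
    for (t in 2:n) state[t] <- sample(1:2, 1, prob = p.true[state[t - 1], ])
    y <- rnorm(n, sd = sqrt(omega.true[state]))
    # iterate the EM step until the log-likelihood stabilises
    omega <- c(0.5, 5); pi0 <- c(0.5, 0.5); p <- matrix(0.5, 2, 2); old <- -Inf
    for (iter in 1:200) {
      res <- EM(omega, pi0, p, y)
      omega <- res$omega.final; pi0 <- res$pi0.final; p <- res$p.final
      if (abs(res$vrais - old) < 1e-8) break
      old <- res$vrais
    }
    omega; p    # estimated regime variances and transition matrix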
    
  9. 12.9 The last equality of Step 2 of the EM algorithm shows that π t − 1, t ∣ n (i 0, j 0) = 0 for all t . Point 3 then shows that p(i 0, j 0) ≡ 0 in all the subsequent steps of the algorithm.
  10. 12.10 We have the Markovian representation images , with
    equation

    The proof of Theorem 2.4 in Chapter 2 applies directly with this sequence (A t ), showing that there exists a strictly stationary solution if and only if the top Lyapunov exponent of (A t ) is strictly negative. The solution is then unique, non‐anticipative, and ergodic, and takes the form (2.18).

  11. 12.11 As in Example 2.1, the last exercise shows that the necessary and sufficient strict stationarity condition is
    equation

    In the ARCH(1) case with d regimes, we obtain the necessary and sufficient condition

    equation
  12. 12.12 If (ε t ) is a strictly stationary and non‐anticipative solution and if the sequence (Δ t ) is iid, then α(Δ t ) and β(Δ t ) are independent of images and images . If in addition images , then, setting images , we have
    equation

    For the existence of a positive solution to this equation, it is necessary to have

    equation

    Conversely, under this condition, the process

    equation

    is a strictly stationary and non‐anticipative solution which satisfies

    equation
  13. 12.13 Using the elementary inequality log x ≤ x − 1, we have
    equation

    and the result follows.

  14. 12.14 Conditionally on the initial variables images , Equations (12.25), (12.22)–(12.23), (12.26)–(12.28), and (12.29)–(12.31) remain valid, provided the density φ k (ε t ) is replaced by the density φ k (ε t  ∣ ε t − 1, …, ε t − q ) of the Gaussian images distribution (and the notation M(ε t ) by M(ε t  ∣ ε t − 1, …, ε t − q )).

    The EM algorithm cannot be generalised trivially because the maximisation of Eq. (12.32) is replaced by that of

    equation

    which does not admit an explicit form like (12.35) but requires the use of an optimisation algorithm.

  15. 12.15 For the MS‐GARCH(1,1) model images , with
    equation

    we have

    equation

    under conditions entailing the existence of the series. For the alternative model, ε t  = σ t (Δ t )η t with

    equation

    we have

    equation

    Let ℱ t be the sigma‐field generated by the past observations {ε u , u < t} and by the past and present values of the chain {Δ u , u ≤ t}. We have

    equation

    but, given the past observations, images only depends on Δ t , whereas h t also depends on {Δ u , u < t}.

    This entails differences between the two models, in terms of probabilistic properties (the stationarity conditions are easier to obtain for the standard MS‐GARCH model, but they have also been obtained by Liu (2006) for the alternative model), of statistical inference (the fact that images only depends on Δ t renders the alternative model much easier to estimate), and also of dynamic behaviour and of the interpretation of the parameters.

    For instance, for the MS‐GARCH, β(i) can be interpreted as a parameter of inertia of the volatility in regime i : if the volatility h t − 1 is high and β(i) is close to 1, the next volatility h t will remain high in regime i . This interpretation is no longer valid for the alternative model, since images may not be equal to images .
