Non-stationary time series models

In this section, we will look at some models that are non-stationary but nonetheless have certain properties that allow us to either derive a stationary model or model the non-stationary behavior.

Autoregressive integrated moving average models

The random walk process is an example of a time series model that is itself non-stationary, but whose differences between consecutive points, Yt and Yt-1, which we can write as ∆Yt = Yt − Yt-1, form a stationary sequence. For the random walk, this differenced sequence is nothing but the white noise sequence, which we know to be stationary.

If we were to take the difference between consecutive points of this differenced sequence, we would obtain yet another sequence, which we call the second order differenced sequence.

Generalizing this notion of differencing, we can say that a dth order difference is obtained by repeatedly computing differences between consecutive terms d times, to obtain a new sequence with points, Wt, from an original sequence, Yt. We can express this idea as:

\[ W_t = \Delta^d Y_t, \qquad \text{where } \Delta Y_t = Y_t - Y_{t-1} \]
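
In R, the base function diff() computes exactly these differences; here is a minimal sketch on a short hypothetical vector:

> y <- c(2, 5, 4, 8, 9)
> diff(y)                   # first order differences
[1]  3 -1  4  1
> diff(y, differences = 2)  # second order differences
[1] -4  5 -3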

We can consequently define an autoregressive integrated moving average (ARIMA) process as a process in which the dth order difference of terms is a stationary ARMA process. An ARIMA(p, d, q) process requires a dth order differencing, has an MA component of order q, and an AR component of order p. Thus, a regular ARMA(p, q) process is equivalent to an ARIMA(p, 0, q) process. In order to fit an ARIMA model, we need to first determine an appropriate value of d, namely the number of times we need to perform differencing. Once we have found this, we can then proceed with fitting the differenced sequence using the same process as we would use with an ARMA process.
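
Once d is chosen, the fit can be carried out in one step with base R's arima() function, which applies the differencing internally; a minimal sketch, where the series name y and the choice of an ARIMA(1, 1, 1) specification are purely illustrative:

> fit <- arima(y, order = c(1, 1, 1))  # p = 1, d = 1, q = 1
> fit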

One way to find a suitable value for d is to repeatedly difference a time series and after each application of differencing to check whether the resulting time series is stationary. There are a number of tests for stationarity that can be used, and these are often known as unit root tests. A good example is the Augmented Dickey-Fuller (ADF) test. This test builds a regression model as follows:

\[ \Delta Y_t = \alpha + \beta t + \varphi Y_{t-1} + \sum_{i=1}^{k} \psi_i \Delta Y_{t-i} + \varepsilon_t \]

Here, k refers to the maximum number of time lags that will be included in the model. The null hypothesis of the ADF test is that the current time series is non-stationary, in which case the regression model we just saw will estimate the coefficient φ to be approximately zero. If the time series is stationary, leading us to reject the null hypothesis, then the coefficient φ is expected to be below zero.

We can find an implementation of this test in the R package tseries via the function adf.test(). By default, this uses a value of k equal to the largest integer that does not exceed the cube root of one less than the length of the time series under test, that is, trunc((length(x) - 1)^(1/3)). The ADF test produces a p-value for us to examine. Values smaller than 0.05 (or a smaller cutoff, such as 0.01, if we want a higher degree of confidence) typically suggest that the time series in question is stationary.

The following example shows the results of running the ADF test on our simulated random walk, which we know is non-stationary:

> library(tseries)
> adf.test(random_walk, alternative = "stationary")

  Augmented Dickey-Fuller Test

data:  random_walk
Dickey-Fuller = -1.5881, Lag order = 4, p-value = 0.7473
alternative hypothesis: stationary

As expected, our p-value is much higher than 0.05, so we cannot reject the null hypothesis of non-stationarity.
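
Because the first order differences of a random walk are just white noise, applying the test to the differenced series should yield a p-value well below 0.05; a minimal sketch using the diff() function:

> adf.test(diff(random_walk), alternative = "stationary")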

Tip

Another unit root test is the Phillips-Perron test, which in R we can run using the function PP.test(). The Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test is yet another test used to assess stationarity. An implementation of this can also be found in the package tseries, via the function kpss.test(). In contrast with the previous two tests, the null hypothesis of the KPSS test assumes stationarity, so the interpretation of the p-values is inverted.
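
For example, both can be run on the same simulated series (PP.test() ships with base R's stats package, while kpss.test() is in tseries):

> PP.test(random_walk)
> kpss.test(random_walk)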

Autoregressive conditional heteroscedasticity models

The key premise behind ARIMA models is that although the sequence itself is non-stationary, we can apply a particular transformation (in this case, differencing the terms) in order to arrive at a stationary series. Thus, we essentially extended the range of time series we can model using the tools we have learned thus far, namely the autoregressive and moving average processes. Of course, many non-stationary processes cannot be described well by ARIMA models, so there are many other approaches to handling non-stationarity.

One such approach to building a model for a non-stationary time series is to assume that the series is non-stationary because its variance changes over time in a predictable way. It turns out that modeling this change of variance over time as an autoregressive process, thus using tools already familiar to us, results in a model that has important applications in financial econometrics.

This model is known as the autoregressive conditional heteroscedasticity (ARCH) model. Heteroscedasticity refers to a variance that changes over time; it is the opposite of homoscedasticity, which describes constant variance.

The equation for an ARCH model of order p is given by:

\[ \varepsilon_t = w_t \sqrt{a_0 + \sum_{i=1}^{p} a_i \varepsilon_{t-i}^2} \]

In this process, we assume that our series terms, εt, have a zero mean and that the wt terms are white noise. We can compute the variance of this series as follows:

\[ \mathrm{Var}(\varepsilon_t) = E\!\left[\varepsilon_t^2\right] = E\!\left[w_t^2\right] E\!\left[a_0 + \sum_{i=1}^{p} a_i \varepsilon_{t-i}^2\right] = a_0 + \sum_{i=1}^{p} a_i \mathrm{Var}(\varepsilon_{t-i}) \]

As we can see, the variance of the ARCH model at time t is a linearly weighted sum of the variances of the p most recent time periods in the past, and so we have a recognizable AR process of order p. In this example, we've also assumed that the variance of the white noise process, wt, is 1, just to highlight how the process is autoregressive; if it is not 1, this simply introduces a constant multiplicative factor into the result, which does not change the autoregressive nature of the model.
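
To make this concrete, the following minimal sketch simulates an ARCH(1) series by hand; the coefficient values a0 and a1 are illustrative choices, not estimates from any data:

# Simulate an ARCH(1) process: the conditional variance of each term
# depends on the previous squared term (a0 and a1 are illustrative)
set.seed(123)
n <- 1000
a0 <- 0.2
a1 <- 0.5
w <- rnorm(n)          # white noise with unit variance
eps <- numeric(n)
eps[1] <- w[1] * sqrt(a0)
for (t in 2:n) {
  eps[t] <- w[t] * sqrt(a0 + a1 * eps[t - 1] ^ 2)
}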

Generalized autoregressive conditional heteroscedasticity models

ARCH models are very popular in finance, and for this reason they form the basis of many different extensions. We'll mention one extension here, the generalized autoregressive conditional heteroscedasticity (GARCH) model, because its form is essentially an ARCH model with a moving average component added to the variance. More specifically, the general form of a GARCH(p, q) process is:

\[ \varepsilon_t = w_t \sigma_t, \qquad \sigma_t^2 = a_0 + \sum_{i=1}^{p} a_i \varepsilon_{t-i}^2 + \sum_{j=1}^{q} b_j \sigma_{t-j}^2 \]

Again, we can see that this is a clear extension of the ARCH(p) process, which is equivalent to a GARCH(p, 0) process.
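
In practice, we can fit such a model with the garch() function from the tseries package used earlier; a minimal sketch that fits a GARCH(1, 1) specification to the eps series simulated above (the choice of order is illustrative):

> library(tseries)
> garch_fit <- garch(eps, order = c(1, 1))
> summary(garch_fit)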
