Stationarity

We have often seen that in predictive modeling, we need to make certain important but limiting assumptions in order to build practical models. With time series models, one of the most common assumptions, and one that renders the modeling task significantly simpler, is the stationarity assumption.

Stationarity essentially means that the probabilistic behavior of a time series does not change with the passage of time. There are two versions of the stationarity property that are commonly used. A stochastic process is said to be strictly stationary when the joint probability distribution of a sequence of points starting at time t, namely Yt, Yt+1, ..., Yt+n, is the same as the joint probability distribution of another sequence of points starting at a different time T, namely YT, YT+1, ..., YT+n.

To be strictly stationary, this property must hold for any choice of times t and T, and for any sequence length n. In particular, because we can apply it to sequences consisting of a single point, the probability distribution of each individual point in the sequence, also known as the univariate distribution, must be the same at every time. It follows, therefore, that in a strictly stationary time series the mean function is constant over time, as is the variance.

We also use a weaker form of stationarity, which is very often sufficient for our needs. A stochastic process is weakly stationary when the mean function is constant over time and the autocovariance function depends only on the time lag between two points in the sequence. In symbols, this latter property can be written as:

Cov(Yt, Yt+k) = γk, for every time t and every lag k

Additionally, a weakly stationary stochastic process must have finite variance at all time points, something that is not required by the definition of a strictly stationary process. When we know that the variance is finite at all time points, a process that is strictly stationary is always also weakly stationary. As is customary, we will use the term stationary stochastic process or stationary time series to mean weak stationarity, and state strict stationarity explicitly where we require it. Armed with this new concept, we can simplify our formulae for the mean function, the autocovariance, and the autocorrelation of a stationary time series, respectively, as:

μt = E(Yt) = μ
γk = Cov(Yt, Yt+k)
ρk = Corr(Yt, Yt+k) = γk / γ0
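
To make these quantities concrete, here is a minimal R sketch that computes the sample versions of these three formulae by hand and compares them against the output of acf(). The simulated series, the seed, and the helper function sample_gamma() are purely illustrative assumptions, not part of the original example.

    set.seed(9)                          # hypothetical seed, for reproducibility only
    y <- rnorm(500)                      # an illustrative simulated series (Gaussian white noise)
    n <- length(y)

    mu_hat <- mean(y)                    # sample estimate of the constant mean function

    # Sample autocovariance at lag k, using the same 1/n scaling that acf() uses
    sample_gamma <- function(y, k) {
      n <- length(y)
      y_bar <- mean(y)
      sum((y[1:(n - k)] - y_bar) * (y[(1 + k):n] - y_bar)) / n
    }

    gamma_0 <- sample_gamma(y, 0)        # gamma_0 is the sample variance
    gamma_1 <- sample_gamma(y, 1)
    rho_1 <- gamma_1 / gamma_0           # sample autocorrelation at lag 1

    # acf() reports the same quantities: type = "covariance" gives gamma_k,
    # while the default type = "correlation" gives rho_k
    acf(y, lag.max = 1, type = "covariance", plot = FALSE)
    acf(y, lag.max = 1, plot = FALSE)

Under these assumptions, the manually computed gamma_1 and rho_1 should agree with the values reported by acf() up to rounding.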

Note that the autocovariance at a lag of 0, γ0, is just the variance. Let's revisit the two examples of time series that we have seen so far from the perspective of stationarity. White noise is a stationary process, as it has a constant mean, in this case 0, and a constant variance. Note also that the x-axis of the ACF plot displays the time lag k rather than the position in the sequence.
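
As a quick illustration of this point, the short R sketch below simulates a white noise series and plots its ACF; the series length and seed are arbitrary choices made here for the example.

    set.seed(1)                          # arbitrary seed for reproducibility
    wn <- rnorm(500)                     # simulated Gaussian white noise with mean 0

    mean(wn)                             # close to the constant mean of 0
    var(wn[1:250]); var(wn[251:500])     # roughly equal, consistent with a constant variance

    # The x-axis of the resulting plot is the lag k; for white noise, the sample
    # autocorrelations at all nonzero lags should be close to zero
    acf(wn, main = "ACF of simulated white noise")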

In fact, to estimate the ACF, R's acf() function uses all pairs of values separated by a particular lag, irrespective of their position in the sequence, thus implicitly assuming stationarity. The random walk, on the other hand, does have a constant mean (though not in the case of the random walk with drift), but it has a time-varying variance and so it is non-stationary.
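
To see the time-varying variance directly, one option is to simulate many independent random walks and look at how the spread across them changes with time. The following is a sketch of that idea; the number of paths, the number of steps, and the chosen time points are assumptions made only for illustration.

    set.seed(2)                                   # arbitrary seed for reproducibility
    n_steps <- 200
    n_paths <- 1000

    # Each column is one random walk: the cumulative sum of Gaussian white noise
    walks <- replicate(n_paths, cumsum(rnorm(n_steps)))

    time_points <- c(10, 50, 100, 200)
    apply(walks[time_points, ], 1, mean)          # stays close to 0 at every time point
    apply(walks[time_points, ], 1, var)           # grows roughly in proportion to the time index

Since the value of the walk at time t is a sum of t independent noise terms, its variance is proportional to t, which is exactly the time-varying behavior that rules out stationarity.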
