5.1. Components of a Series
Decomposition is a management technique for reducing complexity. Separating a problem into its components, solving each component separately, and then reassembling the parts into a larger decision enables better decision making (e.g., Raiffa 1968). The key idea in our forecasting application is to “decompose” a time series into separate components that can be examined and estimated individually. These components can then be recombined to generate a forecast. Many forecasting methods either require data that has already been decomposed or incorporate a decomposition approach directly into the method.
The components usually examined in a time series analysis are the level, the trend, the seasonality, and random noise. Historically, the business cycle was sometimes seen as an additional component in time series, but since estimating the business cycle and predicting when the cycle turns is inherently very challenging, many time series models do not explicitly consider the business cycle as a separate time series component anymore.
The level of a time series describes the center of the series at any point. That is, if we could imagine a time series where random noise, trend, and seasonality are taken out of the equation, the remainder of the series would be the level.
A trend describes predictable increases or decreases in the level of a series. A time series that grows or declines over a large number of successive time periods has a trend. By definition, trends must be somewhat stable to allow predictability; it is often a challenge to differentiate a trend from abrupt shifts in the level, as we have discussed in Chapter 3. One can only speak of a trend, as opposed to an abrupt level shift, if one has a reasonable expectation that the shift in the level will recur in a similar fashion in the next time period. Long-run persistence of such increases or decreases in the data is necessary to establish that a real trend exists.
A trend in a time series can have many underlying causes. A product at the beginning of its lifecycle will experience a positive trend as more and more customers receive information about the product and decide to buy it. On the upside of the business cycle, gross domestic product expands, making consumers wealthier and more able to purchase. More fundamentally, the world population is currently growing every year. A firm that sells products globally should, to some degree, observe this population growth as a trend in its demand patterns.
Seasonality refers to a pattern of predictable and recurring shifts in the level of a time series. Examples include predictable increases in demand for consumer products every December for the holiday season, or increases in demand for air-conditioning units or ice cream during the summer. A company’s regular promotion event in May will also appear as seasonality. The causes of seasonality often depend on the time frame being studied. Yearly data usually has little seasonality. While leap years create a regularity that reappears every 4 years by including an extra day, this effect is often small enough to be ignored. Monthly or quarterly data is often influenced by “time of year” or temperature effects. Weekly or daily data can additionally be subject to payday, billing cycle, and “end of month” effects (Rickwalder 2006). Hourly data will often have visible “lunchtime” effects. All these effects are treated as seasonality in time series forecasting, since they represent predictable recurring patterns over time.
5.2. Decomposition Methods
When interpreting publicly available time series, such as data from the Bureau of Labor Statistics (www.bls.gov), one needs to carefully understand whether or not seasonality has been taken out of the data already. Most government data is reported as “seasonally adjusted,” implying that the seasonal component has been removed from the series. This decomposition is usually applied to the time series to avoid readers overinterpreting month-to-month changes that are driven by seasonality.
Methods to remove seasonality and trends from a time series are available in most commercial software. Taking seasonality and trends out of a time series is also easy to accomplish in spreadsheet modeling software such as Excel. Take monthly data as an example. As a first step, calculate the average demand over all data points and then calculate the average demand for each month (i.e., average demand in January, February, etc.). Dividing average monthly demand by the overall average demand creates a seasonal index. Then dividing all demand observations in the time series by the corresponding seasonal index creates a deseasonalized series. The results of such a seasonal adjustment process are illustrated in Figure 5.1.
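The seasonal-index calculation described above can be sketched in a few lines of Python. The demand numbers here are made up purely for illustration, and the sketch assumes two complete years of monthly data so that every calendar month has the same number of observations:

```python
# Simple seasonal adjustment, as described in the text:
# 1) overall average, 2) per-month averages, 3) seasonal indices,
# 4) divide each observation by its month's index.
from statistics import mean

# Illustrative monthly demand, two years (values are invented)
demand = [100, 90, 110, 120, 130, 140, 150, 160, 120, 110, 100, 90,   # year 1
          110, 99, 121, 132, 143, 154, 165, 176, 132, 121, 110, 99]   # year 2

overall_avg = mean(demand)

# Average demand for each calendar month (index 0 = January, etc.)
monthly_avg = [mean(demand[m::12]) for m in range(12)]

# Seasonal index: average demand for the month divided by the overall average
seasonal_index = [m / overall_avg for m in monthly_avg]

# Deseasonalized series: each observation divided by its month's index
deseasonalized = [d / seasonal_index[t % 12] for t, d in enumerate(demand)]
```

Note that with equal counts per month, the seasonal indices average to exactly 1, so deseasonalizing preserves the overall scale of the series.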
Figure 5.1 US retail sales 1992–2015 (in $US bn)
In a similar fashion, the data can be detrended by first calculating the average first difference between observations of the series and then subtracting (n-1) times this average from the nth observation in the time series. For example, take the following time series: 100, 120, 140, and 160. The first differences of the series are 20, 20, and 20, with an average of 20. The detrended series is thus 100 - 0 × 20 = 100, 120 - 1 × 20 = 100, 140 - 2 × 20 = 100, and 160 - 3 × 20 = 100.
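The detrending example above translates directly into code:

```python
# Detrending via the average first difference, using the
# example series from the text: 100, 120, 140, 160.
from statistics import mean

series = [100, 120, 140, 160]

# First differences between consecutive observations: 20, 20, 20
diffs = [b - a for a, b in zip(series, series[1:])]
avg_diff = mean(diffs)  # 20

# Subtract (n-1) times the average difference from the nth observation
# (here n runs from 0, matching the "0 x 20, 1 x 20, ..." in the text)
detrended = [y - n * avg_diff for n, y in enumerate(series)]
# detrended is [100, 100, 100, 100]
```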
While these methods of deseasonalizing and detrending data are simple to use and understand, they suffer from several drawbacks. For instance, they do not allow seasonality indices and trends to change over time, which makes their application challenging for longer time series. One can see in Figure 5.1, for example, that while our method of deseasonalizing the data initially removed seasonality from the series, the later part of the series shows a seasonal pattern again, since the seasonal indices have changed. In shorter time series, simple methods can also be strongly influenced by outlier data. For these reasons, more sophisticated (and complex) methods have been developed. The U.S. Census Bureau has developed an algorithm for this purpose called X-13ARIMA-SEATS (www.census.gov/srd/www/x13as/). Software implementing this algorithm is available to download for free from the Bureau’s website.
5.3. Stability of Components
The observation that seasonal components can change leads to an important discussion. The real challenge of time series analysis lies in understanding the stability of the components of a series. A perfectly stable time series is a series whose components do not change as time progresses—the level only increases through the trend, the trend remains constant, and the seasonality remains the same from year to year. If that is the case, the best time series forecasting method works with long-run averages of components. Yet time series are often inherently unstable, and components change over time. The level of a series can abruptly shift as new competitors enter the market. The trend of a series can evolve over time as the product moves through its lifecycle. Even the seasonality of a series can change if underlying consumption or promotion patterns shift throughout the year.
To illustrate what change means for a time series and the implications that change has for time series forecasting, consider the two illustrative and artificially constructed time series in Figure 5.2. Both are time series without trends and seasonality (or that have been deseasonalized and detrended already). Series 1 is a series that is perfectly stable, such that month-to-month variation resembles only random noise. This is an example of data that comes from a “stationary” demand distribution and is typical for mature products. Series 2 is a time series that is highly unstable, such that month-to-month variation resembles only change in the underlying level. This is an example of data that stems from a so-called random walk and is typical for prices in an efficient market. The same random draws were used to construct both series; in series 1, randomness represents just random noise around a stable level (which is at around 500 units). The best forecast for this series would be a long-run average (i.e., 500). In series 2, randomness represents random changes in the unstable level, which by the nature of randomness can push the level of the time series up or down. The best forecast in this series would be the most recent demand observation. In other words, stable time series can make use of all available data to create an estimate of the time series component and thereby create a forecast. In unstable time series, only very recent data is used for estimation, and data that is further in the past is essentially not used at all in the generation of forecasts. Differentiating between stable and unstable components, and thus using or discounting past data, is the key principle underlying exponential smoothing, which is a technique we have examined already in Chapter 3, and which we will explore further in Chapter 6.
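The construction of two such series from the same random draws can be sketched as follows. This is an illustrative simulation, not the exact data behind Figure 5.2; the noise scale and series length are assumptions:

```python
# Two series built from the SAME random draws:
#   series1: stationary - noise around a stable level of 500
#   series2: random walk - the same shocks accumulate into the level
import random

random.seed(42)                                 # arbitrary seed, for reproducibility
draws = [random.gauss(0, 20) for _ in range(60)]

LEVEL = 500
series1 = [LEVEL + e for e in draws]            # stable level, pure noise

series2 = []
walk = LEVEL
for e in draws:
    walk += e                                   # each shock permanently moves the level
    series2.append(walk)

# Best forecasts, per the text:
forecast1 = sum(series1) / len(series1)         # long-run average for the stable series
forecast2 = series2[-1]                         # most recent observation for the random walk
```

The contrast in the last two lines is the whole point: for the stable series every observation is informative about the level, so averaging over all of them is best; for the random walk, past observations describe levels that no longer exist, so only the latest one matters.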
Figure 5.2 A stable and an unstable time series
5.4. Additive and Multiplicative Components
Another key difference to consider in a time series is how the components of the series relate to each other. The question here is whether level, trend, and seasonality are additive or multiplicative. An additive trend implies a linear increase/decrease over time, that is, an increase/decrease in demand by X units in every period. A multiplicative trend implies an exponential increase/decrease over time, that is, an increase/decrease in demand by X percent in every period. Multiplicative trends tend to be easier to interpret, since they correspond to statements like “our business grows by 10 percent every year”; however, if the trend does not change, such a statement implies exponential growth over time. Such growth patterns are common for early stages of a product lifecycle, but time series models using multiplicative trends need to ensure that such growth is not treated as fixed but is given an opportunity to taper off over time. Multiplicative seasonality more naturally incorporates growth, since the seasonal effects then grow with the scale of demand. Additive seasonality usually applies to more mature products with relatively little growth. We illustrate the difference between a stable time series with additive trend and seasonality and a stable time series with multiplicative trend and seasonality in Figure 5.3. Differentiating between these functional forms is important for the general state space modeling framework we will discuss in Chapter 6.
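The two functional forms can be written down explicitly. With made-up parameter values (the base level, trend sizes, and quarterly seasonal effects below are illustrative assumptions), an additive series is built by adding units while a multiplicative series is built by compounding percentages:

```python
# Additive form:        y_t = base + b*t + s_t      (units added)
# Multiplicative form:  y_t = base * g**t * s_t     (percentages compounded)
base = 100
trend_add = 5                        # +5 units per period
trend_mult = 1.05                    # +5 percent per period
season_add = [10, -10, 20, -20]      # quarterly seasonal offsets, in units
season_mult = [1.1, 0.9, 1.2, 0.8]   # quarterly seasonal factors

additive = [base + trend_add * t + season_add[t % 4] for t in range(12)]
multiplicative = [base * trend_mult ** t * season_mult[t % 4] for t in range(12)]
```

In the additive series the seasonal swing stays 40 units wide forever; in the multiplicative series the swing widens as the level grows, which is why the multiplicative form suits growing products.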
Figure 5.3 Additive and multiplicative trends and seasonality
• In time series decomposition, we separate a series into its seasonal, trend, level, and error components. We analyze these components separately and finally put the pieces back together to create a forecast.
• Components can be decomposed and recombined additively or multiplicatively.
• The challenge of time series modeling lies in understanding how much the time series components change over time.
• Multiplicative components lead to accelerating growth/decline patterns over time, whereas additive components imply linear growth/decline.