The need for the Markov model

Given the range of models we are discussing in this book, is there a need to discuss Markov models? When we speak about forecasting, one of the main inputs is historical information, often in the form of a time series. Markov models, however, don't need historical information directly in order to forecast. When we build a Markov model, we are interested in the state (value/behavior/phenomenon) of a subject at the present time, the states that the subject can transition to, and the transition probabilities involved. A textbook definition of a Markov model is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. To understand these terms better, let's look at the states that a car being driven may experience:

[Transition state flow diagram] The transition probabilities out of the Stationary state are:

P(Stationary|Stationary) = 0.3
P(In motion|Stationary) = 0.4
P(Braking|Stationary) = 0.3

The total probability of transition from the Stationary state is 0.3 + 0.4 + 0.3 = 1.

The possible transition states are as follows:

Original state    Transitioned state
Stationary        Stationary
Stationary        In motion
Stationary        Braking
In motion         In motion
In motion         Stationary
In motion         Braking
Braking           Braking
Braking           Stationary
Braking           In motion

Figure 4.1: Inter-state transition probability of a car

While considering transition states, you have to bear in mind that a state can continue to remain the same. Hence, a car in a stationary state can remain in the same state (P(Stationary|Stationary)) or move to a different state, such as in motion (P(In motion|Stationary)). The rules of probability hold true in this case: the transition probabilities from a state always add up to 1. Figure 4.1 contains the flow of the transition probabilities in a diagram and a tabular format. If we want to estimate the probability of reaching a particular state, we can use the transition probabilities. If the car is in motion, what is the probability that it remains in the same state? The answer is P(In motion|In motion) = 0.5. To take this illustration a step further, the probability of the next state being in motion and the one after that being stationary is P(In motion|In motion) * P(Stationary|In motion) = 0.5 * 0.2 = 0.1.
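The arithmetic above can be sketched with a small transition matrix. Only the Stationary row (0.3, 0.4, 0.3) and the two probabilities used in the text (0.5 and 0.2) come from Figure 4.1; the remaining entries are assumed values chosen purely so that each row sums to 1.

```python
states = ["Stationary", "In motion", "Braking"]

# transition[i][j] = P(next state = states[j] | current state = states[i])
transition = [
    [0.3, 0.4, 0.3],  # from Stationary (given in the text)
    [0.2, 0.5, 0.3],  # from In motion (0.2 and 0.5 given; 0.3 assumed)
    [0.4, 0.3, 0.3],  # from Braking (assumed for illustration)
]

# The rules of probability: each row must sum to 1, because the car
# always ends up in *some* state at the next step.
for row in transition:
    assert abs(sum(row) - 1.0) < 1e-9

# Two-step probability: stay in motion, then become stationary.
m, s = states.index("In motion"), states.index("Stationary")
p = transition[m][m] * transition[m][s]  # 0.5 * 0.2
print(p)  # 0.1
```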

Markov models don't need time series data as a direct input. Later on, we will see how, by using data points for only one time period, we can build a forecasting model. However, the model does use a transition matrix of probabilities, and this transition matrix is itself derived from time series data. There is thus an indirect dependency on time series data in certain business situations, and this will be highlighted in our business problem.
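One common way to derive a transition matrix from time series data is to count consecutive state pairs and normalise each row. The sketch below, with a hypothetical driving history, illustrates the idea; the helper name `transition_matrix` is our own, not from any library.

```python
from collections import Counter

def transition_matrix(sequence):
    """Estimate transition probabilities from an observed state sequence
    by counting consecutive pairs and normalising each row."""
    counts = Counter(zip(sequence, sequence[1:]))
    states = sorted(set(sequence))
    matrix = {}
    for s in states:
        total = sum(counts[(s, t)] for t in states)
        matrix[s] = {t: counts[(s, t)] / total if total else 0.0
                     for t in states}
    return matrix

# A hypothetical observed history of the car's states over time.
history = ["Stationary", "In motion", "In motion", "Braking",
           "Stationary", "In motion", "Braking", "Stationary"]
probs = transition_matrix(history)

# "In motion" was followed by "Braking" twice and "In motion" once,
# so those rows normalise to 2/3 and 1/3 respectively.
print(probs["In motion"])
```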

But can Markov models be used only for forecasting? They have various other uses. One issue that modelers have to deal with is missing information for a variable across several time periods, and various imputation methodologies exist for handling it. One of these methods is the Markov model: here, the model doesn't need to be used for forecasting directly but can enable forecasting by helping with imputation. The missing data points can be part of a time series, so the Markov model has an important relationship with time series data. It can be used to impute missing values into a time series, leverage the time series to generate a transition matrix, or serve both purposes.
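A minimal sketch of Markov-based imputation: fill each gap with the most probable successor of the preceding known state. The transition matrix reuses the Stationary row from Figure 4.1; the other rows, the state sequence, and the helper name `impute_missing` are assumptions for illustration, and a real imputation scheme might instead sample from the row's distribution.

```python
# Transition matrix as nested dicts. The Stationary row matches
# Figure 4.1; the other two rows are assumed for illustration.
matrix = {
    "Stationary": {"Stationary": 0.3, "In motion": 0.4, "Braking": 0.3},
    "In motion":  {"Stationary": 0.2, "In motion": 0.5, "Braking": 0.3},
    "Braking":    {"Stationary": 0.4, "In motion": 0.3, "Braking": 0.3},
}

def impute_missing(sequence, matrix):
    """Replace None entries with the most probable successor of the
    previous known (or already imputed) state."""
    filled = list(sequence)
    for i, value in enumerate(filled):
        if value is None and i > 0 and filled[i - 1] is not None:
            prev = filled[i - 1]
            filled[i] = max(matrix[prev], key=matrix[prev].get)
    return filled

observed = ["Stationary", None, None, "Braking"]
print(impute_missing(observed, matrix))
# From Stationary the most likely next state is "In motion" (0.4),
# and from In motion it is "In motion" again (0.5).
```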

This chapter will help to distinguish the relative benefits of the ARIMA and Markov model forecasting techniques. Additionally, we will learn how the Markov method can be used to impute missing values in a time series.
