Markovian models – real-world applications

This book is being written in the year 2018, and Markovian models seem a little overlooked by the general public. There is great hype around deep and adaptive learning, which tends to divert attention from more traditional models such as those that belong to the Markovian family.

Although Markovian models do very well on chronological data, which explains their popularity in finance, they are suitable for exploring any relationship that can be expressed as a set of interconnected states of a system. The list of real-world applications is long but, for simplicity's sake, we will mention only a few, as follows:

  • Typing word prediction: Predicting the next word a user will type, based on the previous word or the letters entered so far, is a task that Markovian models handle well (see the first sketch after this list).
  • Chatbots: Applications that recognize speech and talk back to humans can be built using Markovian models.
  • PageRank: Given these models' properties (more on that later), no matter which web page a person starts on, if the surfing goes on long enough, the probability of being on any specific page converges to a fixed value (see the second sketch after this list). Some would say that Google's PageRank algorithm is a refined application of Markov's theoretical work.
  • Finance: Economists are used to thinking about events as stochastic processes, and Markovian models are an excellent way to model those events. Markovian models are also described as state space models, which are broadly used in industry to produce forecasts and design scenarios.
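
The word-prediction item can be made concrete with a few lines of Python. The following is a minimal sketch, not the implementation used later in the book: it assumes a toy corpus, builds a first-order Markov chain in which the next word depends only on the current one, and samples the next word from the observed transitions.

```python
# Minimal next-word prediction with a first-order Markov chain.
# The toy corpus is an assumption made purely for illustration.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count word-to-word transitions: the chain's states are words, and the
# list attached to each word plays the role of its transition distribution.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def predict_next(word):
    """Sample the next word in proportion to the observed transition counts."""
    candidates = transitions.get(word)
    if not candidates:
        return None
    return random.choice(candidates)

print(predict_next("the"))  # for example 'cat', 'mat', or 'fish'
```

A real predictor would be trained on a far larger corpus and would usually condition on more than one previous word, but the mechanics are the same.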
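The PageRank item rests on the fact that, for a suitable chain, the long-run distribution over pages does not depend on the starting page. The sketch below uses an invented three-page transition matrix (an assumption for illustration only) and shows the same limiting distribution emerging from every starting point.

```python
# Repeatedly applying a transition matrix from different starting pages.
import numpy as np

# P[i, j] = probability of surfing from page i to page j (rows sum to 1).
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])

for start in range(3):
    dist = np.zeros(3)
    dist[start] = 1.0            # start with certainty on one page
    for _ in range(100):         # simulate many steps of surfing
        dist = dist @ P
    print(f"starting at page {start}:", dist.round(4))

# All three starting points print the same stationary distribution,
# which is what lets a single score be attached to each page.
```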

As you may suspect, all of these tasks can also be performed using rival models; deep learning, for example, could handle any of them. Even though different contests have established benchmarks favoring either Markovian or deep learning models, generally speaking, the evidence suggests that no single broad class of model has yet outperformed all of its rivals regardless of the specific problem and scoring measure.

More complicated models won't necessarily outperform simpler ones. 

There are at least two reasons that might reasonably prevent you from deploying a solution built on a complex model. The first is understandability. Some problems require the solution to be easy to understand and interpret, and that rarely comes with complex models, which are usually labeled black-box models, in contrast to simpler ones, called glass-box models.

Whether a model is called a glass box or a black box is not strictly a matter of built-in complexity. These terms refer to whether it is easy (glass box) or hard (black box) to understand how the model makes its decisions, in other words, its inner workings and what is happening inside.

The second reason is that some problems simply don't suit complicated solutions; the problem being too simple or the data being too scarce are very common situations. But there is another, greater reason why no promising forecaster should ignore simpler models.

Collectives beat individuals; combined models frequently perform better than individual ones. In fact, there is a whole field, ensemble learning, that studies how to merge models. Even though a chatbot developed using deep learning might outperform another one built with HMMs, the deep learning model could still be improved if it received the HMM's output as an additional input, as sketched below.
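
As a rough illustration of that last point, the sketch below enriches a feature matrix with the state posteriors of an HMM before training a second model on top of it. It assumes the hmmlearn and scikit-learn packages and invented toy data, and a logistic regression stands in for the deep learning model purely to keep the example short.

```python
# Feeding a Markovian model's output into a second, downstream model.
import numpy as np
from hmmlearn import hmm                                   # pip install hmmlearn
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                              # toy observations
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)       # toy labels

# 1. Fit an HMM on the observation sequence (unsupervised).
markov_model = hmm.GaussianHMM(n_components=3, n_iter=50, random_state=0)
markov_model.fit(X)

# 2. Use the HMM's state posteriors as extra features.
state_posteriors = markov_model.predict_proba(X)           # shape (500, 3)
X_enriched = np.hstack([X, state_posteriors])

# 3. Train the downstream model on the enriched features.
downstream = LogisticRegression(max_iter=1000).fit(X_enriched, y)
print("training accuracy:", downstream.score(X_enriched, y))
```

Moving on, the following paragraphs look at the foundations of the Markov model.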
