Inference Engines

"The first principle is that you must not fool yourself—and you are the easiest person to fool."
- Richard Feynman

So far, we have focused on model building, interpreting results, and criticizing models. We have relied on the magic of the pm.sample function to compute the posterior distributions for us. Now we will focus on some of the details of the inference engines behind this function. The whole point of probabilistic programming tools, such as PyMC3, is that the user should not have to care about how sampling is carried out. Nevertheless, understanding how we get samples from the posterior is important for a full understanding of the inference process, and it can also help us recognize when and how these methods fail and what to do about it. If you are not interested in understanding how the methods for approximating the posterior work, you can skip most of this chapter, but I strongly recommend you at least read the Diagnosing samples section, which provides a few guidelines to help you check whether your posterior samples are reliable.
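As a refresher, here is a minimal sketch of the kind of workflow we have been using so far, where pm.sample quietly does all the inference work. The Beta-Bernoulli coin-flip model and its data are illustrative, not taken from this chapter:

    import numpy as np
    import pymc3 as pm

    # Illustrative coin-flip data: 10 tosses, 6 heads
    data = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 1])

    with pm.Model() as coin_model:
        # Prior over the probability of heads
        theta = pm.Beta('theta', alpha=1., beta=1.)
        # Likelihood of the observed tosses
        y = pm.Bernoulli('y', p=theta, observed=data)
        # pm.sample hides the inference engine from us; by default,
        # PyMC3 assigns a suitable sampler (NUTS, for this continuous
        # model) and returns samples from the posterior
        trace = pm.sample(1000)

This chapter is about what happens inside that single call to pm.sample: which algorithms can be used to obtain the trace, and how to tell whether the samples they return can be trusted.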

There are many methods for computing the posterior distribution. In this chapter, we will discuss some general ideas and focus on the most important methods implemented in PyMC3.

In this chapter, we will learn about:

  • Variational methods
  • Metropolis-Hastings
  • Hamiltonian Monte Carlo
  • Sequential Monte Carlo
  • Diagnosing samples
