Approximate inference: stochastic versus deterministic approaches

For most models of practical relevance, it is not possible to derive the exact posterior distribution analytically, nor to compute expectations of the latent parameters with respect to it. The model may have too many parameters, or the posterior distribution may be too complex for an analytical solution. For continuous variables, the required integrals may lack closed-form solutions, while the dimensionality of the space and the complexity of the integrand can rule out numerical integration. For discrete variables, marginalization means summing over all possible configurations of the hidden variables; although this is always possible in principle, the number of such configurations typically grows exponentially, so exact calculation is prohibitively expensive.
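To make this concrete, consider a standard formulation (the notation here is assumed for illustration rather than taken from the text). Writing \(\mathbf{z}\) for the hidden variables and \(\mathcal{D}\) for the observed data, Bayes' theorem gives

\[
p(\mathbf{z} \mid \mathcal{D}) \;=\; \frac{p(\mathcal{D} \mid \mathbf{z})\, p(\mathbf{z})}{\int p(\mathcal{D} \mid \mathbf{z})\, p(\mathbf{z})\, \mathrm{d}\mathbf{z}},
\]

where the normalizing integral in the denominator (a sum over all hidden configurations in the discrete case) is precisely the quantity that typically has no closed form. The expectations discussed next take the form

\[
\mathbb{E}[f] \;=\; \int f(\mathbf{z})\, p(\mathbf{z} \mid \mathcal{D})\, \mathrm{d}\mathbf{z},
\]

which inherits the same intractability.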

Although for some applications the posterior distribution over the unobserved parameters is itself of interest, more often we primarily need to evaluate expectations, for example in order to make predictions. In such situations, we can rely on approximate inference:

  • Stochastic techniques based on Markov chain Monte Carlo (MCMC) sampling have popularized the use of Bayesian methods across many domains. Given unlimited computational resources they can generate results that converge to the exact answer; in practice, however, sampling methods can be computationally demanding and are often limited to small-scale problems. (A minimal sampler sketch follows this list.)
  • Deterministic methods, known as variational inference or variational Bayes, are based on analytical approximations to the posterior distribution and can scale well to large applications. They rest on simplifying assumptions, for example that the posterior factorizes in a particular way or that it takes a specific parametric form such as a Gaussian. Consequently they do not generate exact results, and their strengths and weaknesses are complementary to those of sampling methods (see the second sketch after this list).
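To illustrate the stochastic route, here is a minimal random-walk Metropolis sampler for a toy one-dimensional posterior, written as a sketch rather than a production implementation: the target log_post, the step size, and the chain length are all assumptions chosen for this example. The key property is that the sampler needs the posterior only up to its normalizing constant, and its estimates converge to the exact answer as the chain grows.

    import math
    import random

    # Unnormalized log-posterior for a toy 1-D problem: a standard-normal
    # prior multiplied by a Gaussian likelihood centred at 2. MCMC needs
    # the density only up to its normalizing constant.
    def log_post(z):
        return -0.5 * z ** 2 - 0.5 * (z - 2.0) ** 2

    def metropolis(n_samples, step=1.0, z0=0.0, seed=0):
        """Random-walk Metropolis: converges to the exact posterior in the
        limit, but each sample costs one evaluation of the posterior."""
        rng = random.Random(seed)
        z, samples = z0, []
        for _ in range(n_samples):
            z_prop = z + rng.gauss(0.0, step)   # symmetric proposal
            delta = log_post(z_prop) - log_post(z)
            # Accept with probability min(1, p(z_prop) / p(z)).
            if delta >= 0.0 or rng.random() < math.exp(delta):
                z = z_prop
            samples.append(z)
        return samples

    samples = metropolis(20000)
    kept = samples[5000:]              # discard an initial burn-in period
    print(sum(kept) / len(kept))       # posterior mean; approx. 1.0 here

For this target (the product of two unit-variance Gaussians centred at 0 and 2), the posterior is Gaussian with mean 1 and variance 1/2, so the Monte Carlo estimate can be checked against the exact value.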
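For the deterministic route, here is a minimal sketch of coordinate-ascent mean-field variational inference applied to a correlated two-dimensional Gaussian posterior (all numbers are assumptions chosen for illustration). The factorized approximation recovers the exact means but, because it ignores the correlation, understates the marginal variances: a concrete instance of the point that these methods trade exactness for speed.

    # True posterior (assumed for the example): a zero-mean 2-D Gaussian
    # with precision matrix Lambda = [[L11, L12], [L12, L22]].
    L11, L22, L12 = 1.0, 1.0, 0.8
    mu1, mu2 = 0.0, 0.0

    # Mean-field approximation q(z1, z2) = q1(z1) * q2(z2). For a Gaussian
    # target each optimal factor is itself Gaussian with precision L11
    # (respectively L22), so only the means m1, m2 remain to be found by
    # coordinate ascent on the evidence lower bound (ELBO).
    m1, m2 = 5.0, -5.0                        # deliberately poor start
    for _ in range(50):
        m1 = mu1 - (L12 / L11) * (m2 - mu2)   # optimal q1 given current q2
        m2 = mu2 - (L12 / L22) * (m1 - mu1)   # optimal q2 given current q1

    print(m1, m2)  # converges rapidly to the exact means (0, 0)

    # The price of the factorization: each factor's variance is 1/L11 = 1.0,
    # while the true marginal variance is L22 / (L11*L22 - L12**2):
    print(L22 / (L11 * L22 - L12 ** 2))  # approx. 2.78; q underestimates it

Each coordinate update is available in closed form here, which is what makes the method deterministic and fast; the cost is a residual bias that no amount of further iteration removes.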