Common problems when computing Bayes factors

A common problem when computing Bayes factors this way is that if one model is better than the other then, by definition, we will spend more time sampling from it than from the other model. This can be problematic because we may undersample the less favored model. Another problem is that the values of the parameters get updated even when they are not being used to fit the data. That is, when model 0 is chosen, the parameters of model 1 are still updated, but since they are not being used to explain the data, they are only constrained by the prior. If the prior is too vague, then by the time model 1 is chosen again, the parameter values may have drifted so far from the previously accepted values that the step is rejected. As a result, we end up with a sampling problem.

If we encounter these problems, we can make two modifications to our model to improve sampling:

  • Ideally, we get better sampling of both models when they are visited roughly equally, so we can adjust the prior probability of each model (the p variable in the previous model) to favor the less likely model and disfavor the more likely one. This will not bias the Bayes factor, because we include those prior probabilities in the computation, as shown in the first sketch after this list.
  • Use pseudo priors, as suggested by Kruschke and others. The idea is simple: if the problem is that the parameters drift away unrestricted when the model they belong to is not selected, then one solution is to restrict them artificially, but only while they are not being used; see the second sketch below. You can find an example of pseudo priors applied to a model used by Kruschke in his book and ported by me to Python/PyMC3 at https://github.com/aloctavodia/Doing_bayesian_data_analysis.
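
As a minimal sketch of the first modification, assume a setup along the lines of the previous model: a Categorical model_index chooses between two sets of prior parameters for a single theta. The data and all numeric values below are hypothetical; the key point is that we divide the posterior odds by the prior odds, so any imbalance we introduce through p cancels out of the Bayes factor:

```python
import numpy as np
import pymc3 as pm

# hypothetical data: 9 heads out of 12 coin tosses
y_data = np.repeat([1, 0], [9, 3])

with pm.Model() as model_bf:
    # unequal prior model probabilities (hypothetical values), chosen to
    # favor the a priori weaker model so that both models are visited often
    p = np.array([0.8, 0.2])
    model_index = pm.Categorical('model_index', p=p)

    # two competing priors for the same parameter theta
    m_0 = (4, 8)  # model 0
    m_1 = (8, 4)  # model 1
    m = pm.math.switch(pm.math.eq(model_index, 0), m_0, m_1)

    theta = pm.Beta('theta', m[0], m[1])
    y = pm.Bernoulli('y', p=theta, observed=y_data)
    trace = pm.sample(5000)

# posterior model probabilities, estimated from the sampled indexes
pM1 = trace['model_index'].mean()
pM0 = 1 - pM1
# Bayes factor of model 0 over model 1: posterior odds divided by prior
# odds; the division removes the effect of the unequal p
BF_01 = (pM0 / pM1) / (p[0] / p[1])
```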
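The following is a minimal sketch of the pseudo-prior idea, not the exact code from Kruschke's book. Each parameter keeps its true prior while its model is selected; otherwise it gets a concentrated pseudo prior. The pseudo-prior values here are hypothetical placeholders that, in practice, we would take from a preliminary run of each model on its own:

```python
import numpy as np
import pymc3 as pm

# hypothetical data: 9 heads out of 12 coin tosses
y_data = np.repeat([1, 0], [9, 3])

with pm.Model() as model_pseudo:
    p = np.array([0.5, 0.5])
    model_index = pm.Categorical('model_index', p=p)
    active_0 = pm.math.eq(model_index, 0)
    active_1 = pm.math.eq(model_index, 1)

    # model 0: true prior Beta(4, 8) while selected, otherwise a pseudo
    # prior Beta(13, 11) concentrated near its posterior (hypothetical)
    theta_0 = pm.Beta('theta_0',
                      pm.math.switch(active_0, 4, 13),
                      pm.math.switch(active_0, 8, 11))
    # model 1: true prior Beta(8, 4) while selected, otherwise a pseudo
    # prior Beta(17, 7) concentrated near its posterior (hypothetical)
    theta_1 = pm.Beta('theta_1',
                      pm.math.switch(active_1, 8, 17),
                      pm.math.switch(active_1, 4, 7))

    # only the selected model's parameter is used to explain the data
    theta = pm.math.switch(active_0, theta_0, theta_1)
    y = pm.Bernoulli('y', p=theta, observed=y_data)
    trace = pm.sample(5000)
```

Because the pseudo priors keep the idle parameter close to where its posterior lives, proposals made right after a model switch are no longer systematically rejected.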