Understanding Bayesian inference

Although Bayesian classifiers are relatively simple to implement, the theory behind them can be quite counter-intuitive at first, especially if you are not yet familiar with probability theory. However, the beauty of Bayesian classifiers is that they capture more about the underlying data than any of the classifiers we have encountered so far. Standard classifiers, such as the k-nearest neighbor algorithm or decision trees, can tell us the target label of a never-before-seen data point, but they have no concept of how likely their predictions are to be right or wrong. We call them discriminative models. Bayesian models, on the other hand, model the underlying probability distribution that generated the data. We call them generative models because they don't just put labels on existing data points—they can also generate new data points with the same statistics.
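To make the idea of a generative model concrete, here is a minimal sketch (not from the text, just an illustration) using a Gaussian naive Bayes assumption: the model fits a per-class, per-feature mean and variance, and because that is a full probability distribution, we can sample brand-new points from it. The toy data and all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: two classes drawn from Gaussians with different means.
X0 = rng.normal(loc=0.0, scale=1.0, size=(100, 2))   # class 0
X1 = rng.normal(loc=3.0, scale=1.0, size=(100, 2))   # class 1

# A Gaussian naive Bayes model learns a per-class, per-feature
# mean and variance -- an explicit distribution over the data.
mu0, var0 = X0.mean(axis=0), X0.var(axis=0)
mu1, var1 = X1.mean(axis=0), X1.var(axis=0)

# Because the model captures the distribution itself, we can *generate*
# new points with the same statistics as each class -- something a
# discriminative model like k-NN or a decision tree cannot do.
new0 = rng.normal(mu0, np.sqrt(var0), size=(5, 2))
new1 = rng.normal(mu1, np.sqrt(var1), size=(5, 2))
```

A discriminative model would only store a decision boundary between the two classes; the generative model above stores enough to recreate the classes themselves.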

If this last paragraph went a bit over your head, you might enjoy the following brief primer on probability theory. It will be important for the upcoming sections.
