A Naive Bayes classifier is a supervised learning classifier that uses Bayes' theorem to build the model. Let's go ahead and build a Naive Bayes classifier.
You can use the naive_bayes.py file that is provided to you as reference. Let's import a couple of things (numpy is also needed later for array handling):

import numpy as np
from sklearn.naive_bayes import GaussianNB
from logistic_regression import plot_classifier
You were provided with a data_multivar.txt file, which contains the data we will use here: comma-separated numerical values, one sample per line. Let's load the data from this file:

input_file = 'data_multivar.txt'

X = []
y = []
with open(input_file, 'r') as f:
    for line in f.readlines():
        data = [float(x) for x in line.split(',')]
        X.append(data[:-1])
        y.append(data[-1])

X = np.array(X)
y = np.array(y)
We have now loaded the input data into X and the labels into y.
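As an aside, the same file can be loaded in fewer lines with NumPy's loadtxt. A minimal sketch, assuming (as above) that the label sits in the last column:

# Load all comma-separated columns into a single 2D array
data = np.loadtxt(input_file, delimiter=',')
X = data[:, :-1]   # every column except the last is a feature
y = data[:, -1]    # the last column holds the labels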
Let's build the Naive Bayes classifier, train it, and predict the labels for the training data:

classifier_gaussiannb = GaussianNB()
classifier_gaussiannb.fit(X, y)
y_pred = classifier_gaussiannb.predict(X)
The GaussianNB class implements the Gaussian Naive Bayes model, which assumes that the feature values within each class follow a Gaussian distribution.
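Because the model is probabilistic, you can also look at per-class probabilities instead of hard predictions. A minimal sketch using scikit-learn's predict_proba, reusing the variables above:

# Class probabilities for the first five samples; the columns are
# ordered according to classifier_gaussiannb.classes_
print(classifier_gaussiannb.classes_)
print(classifier_gaussiannb.predict_proba(X[:5]).round(3))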
Let's compute the accuracy of the classifier:

accuracy = 100.0 * (y == y_pred).sum() / X.shape[0]
print("Accuracy of the classifier =", round(accuracy, 2), "%")
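Equivalently, you could use scikit-learn's built-in metric. A minimal sketch:

from sklearn.metrics import accuracy_score

# accuracy_score returns a fraction in [0, 1]; multiply by 100 for a percentage
print("Accuracy of the classifier =", round(100.0 * accuracy_score(y, y_pred), 2), "%")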
Let's plot the data and the decision boundaries:

plot_classifier(classifier_gaussiannb, X, y)
You should see the following figure:
The decision boundaries are not restricted to be linear here. In the preceding example, we used all the data for training. A good practice in machine learning is to keep the training and testing data nonoverlapping. Ideally, we need some unused data for testing so that we can get an accurate estimate of how the model performs on unseen data. There is a provision in scikit-learn that handles this very well, as shown in the next recipe.
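As a quick preview, here is a minimal sketch of such a split using scikit-learn's train_test_split (the 25% test size and the random_state value are illustrative choices, not part of this recipe):

from sklearn.model_selection import train_test_split

# Hold out 25% of the samples for testing; random_state makes the split reproducible
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=5)

classifier = GaussianNB().fit(X_train, y_train)
print("Test accuracy =", round(100.0 * classifier.score(X_test, y_test), 2), "%")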