The bag-of-words model

In the bag-of-words model, we represent a document as a bag (multiset) of the words it contains; word order is ignored. For each word in the document, we count the number of occurrences. These word counts can then be used for statistical analysis, for instance, to identify spam in e-mail messages.
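As a minimal sketch of the idea, the Python standard library's Counter builds such a bag of word counts directly (the document string here is a made-up example):

```python
from collections import Counter

# A toy document; the bag-of-words model discards word order.
doc = "the cat sat on the mat the cat"
bag = Counter(doc.split())
print(bag["the"])  # 3
```

Counter is a dictionary subclass, so the counts are readily available for further analysis.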

If we have a group of documents, we can view each unique word in the corpus as a feature; here, "feature" means a variable describing the document. Using all the word counts, we can build a feature vector for each document; "vector" is used here in the mathematical sense. If a word is present in the corpus but not in a particular document, the value of the corresponding feature is 0. Surprisingly, NLTK currently doesn't have a handy utility to create such a feature vector. However, the machine learning Python library scikit-learn does have a CountVectorizer class that we can use. In the next chapter, Chapter 10, Predictive Analytics and Machine Learning, we will do more with scikit-learn.
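Before turning to scikit-learn, the mapping from word counts to feature vectors can be sketched by hand on a made-up two-document corpus (the documents and words here are purely illustrative):

```python
# A toy corpus of two documents.
docs = ["hello world", "hello python"]

# Each unique word in the corpus becomes a feature, sorted alphabetically.
vocab = sorted(set(" ".join(docs).split()))  # ['hello', 'python', 'world']

# One feature vector per document; a word absent from a document counts 0.
vectors = [[doc.split().count(word) for word in vocab] for doc in docs]
print(vectors)  # [[1, 0, 1], [1, 1, 0]]
```

Note that "python" gets a 0 in the first vector and "world" a 0 in the second, because each word occurs in the corpus but not in that document. CountVectorizer automates exactly this bookkeeping.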

First, install scikit-learn as follows:

$ pip install scikit-learn
$ pip freeze|grep learn
scikit-learn==0.15.0

Load two text documents from the NLTK Gutenberg corpus:

hamlet = gb.raw("shakespeare-hamlet.txt")
macbeth = gb.raw("shakespeare-macbeth.txt")

Create the feature vectors, omitting English stopwords:

cv = CountVectorizer(stop_words='english')
print "Feature vector", cv.fit_transform([hamlet, macbeth]).toarray()

These are the feature vectors for the two documents:

Feature vector [[ 1  0  1 ..., 14  0  1]
 [ 0  1  0 ...,  1  1  0]]

Print a small selection of the features (unique words) we found:

print "Features", cv.get_feature_names()[:5]

The features are given in alphabetical order:

Features [u'1599', u'1603', u'abhominably', u'abhorred', u'abide']

The code is contained in the bag_words.py file in this book's code bundle:

import nltk
from sklearn.feature_extraction.text import CountVectorizer

gb = nltk.corpus.gutenberg
hamlet = gb.raw("shakespeare-hamlet.txt")
macbeth = gb.raw("shakespeare-macbeth.txt")

cv = CountVectorizer(stop_words='english')
print "Feature vector", cv.fit_transform([hamlet, macbeth]).toarray()
print "Features", cv.get_feature_names()[:5]