Online learning

In some cases, we don't have all of the data we need for training before we start learning. Perhaps we are waiting for new data to arrive, the data we have is too large to fit into memory, or we receive extra data after a prediction has been made. In cases like these, online learning is an option for training models over time.

An introduction to online learning

Online learning is the incremental updating of a model as new data arrives. Algorithms that support online learning can be trained on one or a few samples at a time, and updated as new samples arrive. In contrast, algorithms that are not online require access to all of the data at once. The standard k-means algorithm is like this, as are most of the algorithms we have seen so far in this book.

Online versions of algorithms have a means to partially update their model with only a few samples. Neural networks are a standard example of an algorithm that works in an online fashion. As a new sample is given to the neural network, the weights in the network are updated according to a learning rate, which is often a very small value such as 0.01. This means that any single instance only makes a small (but hopefully improving) change to the model.
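To make this concrete, here is a minimal sketch of such an update for a single linear neuron, using squared error (this is illustrative stochastic gradient descent, not the update rule of any particular library):

import numpy as np

def online_update(weights, sample, target, learning_rate=0.01):
    # Predict with the current weights, then nudge them against the error,
    # scaled by the learning rate
    prediction = np.dot(weights, sample)
    error = prediction - target
    return weights - learning_rate * error * sample

weights = np.zeros(3)
weights = online_update(weights, np.array([1.0, 0.5, -0.2]), target=1.0)

Because the learning rate is small, no single sample can drag the weights far from their current values.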

Neural networks can also be trained in batch mode, where a group of samples is given at once and the training is done in one step. Algorithms are faster in batch mode, but use more memory.

In this same vein, we can slightly update the k-means centroids after a single sample or a small batch of samples. To do this, we apply a learning rate to the centroid movement in the updating step of the k-means algorithm. Assuming that samples are randomly chosen from the population, the centroids should tend to move towards the positions they would have in the standard, offline k-means algorithm.
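The following is a rough sketch of that centroid update (illustrative only; scikit-learn's MiniBatchKMeans uses a refined version of this idea with mini-batches). The per-centroid learning rate shrinks as a centroid accumulates samples, so later samples move it less:

import numpy as np

def online_kmeans_update(centroids, counts, sample):
    # Find the centroid nearest to the incoming sample
    distances = np.linalg.norm(centroids - sample, axis=1)
    nearest = np.argmin(distances)
    # Per-centroid learning rate that shrinks as more samples are assigned to it
    counts[nearest] += 1
    learning_rate = 1.0 / counts[nearest]
    # Move the nearest centroid a little towards the new sample
    centroids[nearest] += learning_rate * (sample - centroids[nearest])
    return centroids, counts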

Online learning is related to streaming-based learning; however, there are some important differences. Online learning is capable of reviewing older samples after they have been used in the model, while a streaming-based machine learning algorithm typically only gets one pass—that is, one opportunity to look at each sample.

Implementation

The scikit-learn package contains the MiniBatchKMeans algorithm, which allows online learning. This class implements a partial_fit function, which takes a set of samples and updates the model. In contrast, calling fit() will remove any previous training and refit the model only on the new data.
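As a quick illustration of that difference (the random matrix here is made up purely for this example), consecutive calls to partial_fit build on each other, whereas a call to fit starts again from scratch:

import numpy as np
from sklearn.cluster import MiniBatchKMeans

X_example = np.random.random((100, 2))
model = MiniBatchKMeans(n_clusters=3, random_state=14)
model.partial_fit(X_example[:50])  # train on the first half
model.partial_fit(X_example[50:])  # update the same model with the second half
model.fit(X_example[50:])          # discard the earlier training and refit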

MiniBatchKMeans follows the same clustering format as other algorithms in scikit-learn, so creating and using it is much the same as other algorithms.

Therefore, we can create a matrix X by extracting features from our dataset using TfidfVectorizer, and then sample from this to incrementally update our model. The code is as follows:

vec = TfidfVectorizer(max_df=0.4)  # ignore words appearing in more than 40 percent of documents
X = vec.fit_transform(documents)

We then import MiniBatchKMeans and create an instance of it:

from sklearn.cluster import MiniBatchKMeans
mbkm = MiniBatchKMeans(random_state=14, n_clusters=3)

Next, we will randomly sample from our X matrix to simulate data coming in from an external source. Each time we get some data in, we update the model:

batch_size = 10
for iteration in range(int(X.shape[0] / batch_size)):
    # Take the next batch_size samples and update the model with them
    start = batch_size * iteration
    end = batch_size * (iteration + 1)
    mbkm.partial_fit(X[start:end])

We can then get the labels for the original dataset by asking the instance to predict:

labels = mbkm.predict(X)

At this stage, though, we can't do this in a pipeline, as TfidfVectorizer is not an online algorithm. To get over this, we use a HashingVectorizer. The HashingVectorizer class is a clever use of hashing algorithms to drastically reduce the memory needed to compute the bag-of-words model. Instead of recording the feature names, such as words found in documents, we record only hashes of those names. This allows us to know our features before we even look at the dataset, as the feature space is simply the set of all possible hash values. This is a very large number, usually on the order of 2^18. Using sparse matrices, we can quite easily store and compute even a matrix of this size, as a very large proportion of the matrix will have the value 0.
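As a brief illustration of this (the two documents here are made up for the example), HashingVectorizer needs no fitting and always produces a sparse matrix with a fixed number of columns, no matter which words actually appear:

from sklearn.feature_extraction.text import HashingVectorizer

example_documents = ["the cat sat on the mat", "the dog chased the cat"]
hash_vec = HashingVectorizer(n_features=2**18)  # fix the feature space at 2^18 columns
X_hashed = hash_vec.transform(example_documents)  # no fit step is needed
print(X_hashed.shape)  # (2, 262144)
print(X_hashed.nnz)    # only a handful of these entries are non-zero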

Currently, the Pipeline class doesn't allow for its use in online learning. There are some nuances in different applications that mean there isn't an obvious one-size-fits-all approach that could be implemented. Instead, we can create our own subclass of Pipeline that allows us to use it for online learning. We first derive our class from Pipeline, as we only need to implement a single function:

class PartialFitPipeline(Pipeline):

We create a partial_fit method, which accepts an input matrix and an optional set of classes (we don't need classes for this experiment, though):

    def partial_fit(self, X, y=None):

A pipeline, which we introduced earlier, is a sequence of transformations in which the input to one step is the output of the previous step. To reproduce this, we set the first input to our X matrix, and then iterate over each of the transformers (every step except the last) to transform this data:

        Xt = X
        for name, transform in self.steps[:-1]:

We then transform our current dataset and continue iterating until we hit the final step (which, in our case, will be the clustering algorithm):

            Xt = transform.transform(Xt)

We then call the partial_fit function on the final step and return the results:

        return self.steps[-1][1].partial_fit(Xt, y=y)

We can now create a pipeline to use our MiniBatchKMeans in online learning, alongside our HashingVectorizer. Other than using our new PartialFitPipeline class and scikit-learn's HashingVectorizer, this is the same process as used in the rest of this chapter, except that we only fit on a few documents at a time. The code is as follows:

from sklearn.feature_extraction.text import HashingVectorizer

pipeline = PartialFitPipeline([('feature_extraction', HashingVectorizer()),
                               ('clusterer', MiniBatchKMeans(random_state=14, n_clusters=3))
                               ])
batch_size = 10
for iteration in range(int(len(documents) / batch_size)):
    start = batch_size * iteration
    end = batch_size * (iteration + 1)
    pipeline.partial_fit(documents[start:end])
labels = pipeline.predict(documents)

There are some downsides to this approach, though. For one, we can't easily find out which words are most important for each cluster. We can get around this by fitting another CountVectorizer and taking the hash of each word. We then look up values by hash rather than by word. This is a bit cumbersome and defeats the memory gains from using HashingVectorizer. Further, we can't use the max_df parameter that we used earlier, as it requires us to know what the features mean and to count them over time.
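A rough sketch of that workaround is given below, assuming documents is the same list used earlier and that the HashingVectorizer settings match those in the pipeline (get_feature_names_out requires a recent version of scikit-learn; older versions use get_feature_names). Because of hash collisions, a single column can map back to several words, so the lookup is only approximate:

from sklearn.feature_extraction.text import CountVectorizer, HashingVectorizer

count_vec = CountVectorizer()
count_vec.fit(documents)

hash_vec = HashingVectorizer()  # must use the same settings as the pipeline's vectorizer
column_to_words = {}
for word in count_vec.get_feature_names_out():
    # Hash each known word on its own to find which column it maps to
    column = hash_vec.transform([word]).nonzero()[1][0]
    column_to_words.setdefault(column, set()).add(word)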

We also can't use tf-idf weighting when training online. It would be possible to approximate this and apply such weighting, but again this is a cumbersome approach. HashingVectorizer is still a very useful class and a great use of hashing algorithms.
