In supervised learning, labels come in a variety of forms: they can be numbers or words. If they are numbers, the algorithm can use them directly. However, labels often need to be human-readable, so training data is usually labeled with words. Label encoding refers to transforming these word labels into numerical form so that algorithms can operate on them. Let's take a look at how to do this. First, import the preprocessing package and create a label encoder:
from sklearn import preprocessing
label_encoder = preprocessing.LabelEncoder()
The label_encoder object knows how to understand word labels. Let's create some labels:

input_classes = ['audi', 'ford', 'audi', 'toyota', 'ford', 'bmw']
We are now ready to encode these labels:

label_encoder.fit(input_classes)
print("\nClass mapping:")
for i, item in enumerate(label_encoder.classes_):
    print(item, '-->', i)

Run the code, and you will see the following output on your Terminal:
Class mapping:
audi --> 0
bmw --> 1
ford --> 2
toyota --> 3
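Note that the classes come out in alphabetical order: under the hood, LabelEncoder sorts the unique labels and assigns each one its index. Here is a minimal sketch of the same mapping in plain Python (illustrative only, not sklearn's actual implementation):

# Sketch of what LabelEncoder.fit computes (illustrative only)
input_classes = ['audi', 'ford', 'audi', 'toyota', 'ford', 'bmw']
classes = sorted(set(input_classes))   # ['audi', 'bmw', 'ford', 'toyota']
mapping = {label: index for index, label in enumerate(classes)}
print(mapping)   # {'audi': 0, 'bmw': 1, 'ford': 2, 'toyota': 3}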
Now, when you encounter a set of word labels, you can simply transform them:

labels = ['toyota', 'ford', 'audi']
encoded_labels = label_encoder.transform(labels)
print("\nLabels =", labels)
print("Encoded labels =", list(encoded_labels))
Here is the output that you'll see on your Terminal:
Labels = ['toyota', 'ford', 'audi']
Encoded labels = [3, 2, 0]
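Keep in mind that transform only understands labels that were seen during fit; passing an unseen label raises a ValueError. A small sketch, assuming the label_encoder fitted above:

# Assumes the label_encoder fitted on input_classes above
try:
    label_encoder.transform(['honda'])   # 'honda' was never seen during fit
except ValueError as error:
    print("Cannot encode unseen label:", error)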
This is much easier than manually maintaining the mapping between words and numbers. You can check the correctness by transforming numbers back into word labels:

encoded_labels = [2, 1, 0, 3, 1]
decoded_labels = label_encoder.inverse_transform(encoded_labels)
print("\nEncoded labels =", encoded_labels)
print("Decoded labels =", list(decoded_labels))
Here is the output:
Encoded labels = [2, 1, 0, 3, 1]
Decoded labels = ['ford', 'bmw', 'audi', 'toyota', 'bmw']
As you can see, the mapping is preserved perfectly.
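As a quick sanity check (not part of the original recipe), you can also verify the round trip programmatically: encoding a list of known labels and then decoding it should return the original list:

labels = ['toyota', 'ford', 'audi', 'bmw']
round_trip = label_encoder.inverse_transform(label_encoder.transform(labels))
assert list(round_trip) == labels   # encode/decode is lossless for known labels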