Implementing your first perceptron

Perceptrons are simple enough to implement from scratch. We can mimic the typical OpenCV or scikit-learn classifier API by creating a perceptron object. This will allow us to initialize new perceptron objects that can learn from data via a fit method and make predictions via a separate predict method.

When we initialize a new perceptron object, we want to pass a learning rate (lr, or η in the previous section) and the number of iterations after which the algorithm should terminate (n_iter):

In [1]: import numpy as np
In [2]: class Perceptron(object):
...         def __init__(self, lr=0.01, n_iter=10):
...             self.lr = lr
...             self.n_iter = n_iter
...

The fit method is where most of the work is done. This method should take as input some data samples (X) and their associated target labels (y). We will then create an array of weights (self.weights), one for each feature (X.shape[1]), initialized to zero. For convenience, we will keep the bias term (self.bias) separate from the weight vector and initialize it to zero as well. The bias is commonly initialized to zero because asymmetry breaking in a network is provided by (randomly initialized) weights; for a single perceptron like ours, initializing the weights to zero works fine, too:

...         def fit(self, X, y):
...             self.weights = np.zeros(X.shape[1])
...             self.bias = 0.0

The predict method should take in a number of data samples (X) and, for each of them, return a target label, either +1 or -1. In order to perform this classification, we need to implement the decision rule ϕ(z) > θ. Here we will choose θ = 0, and the weighted sum z can be computed with NumPy's dot product:

...         def predict(self, X):
...             return np.where(np.dot(X, self.weights) + self.bias >= 0.0,
...                             1, -1)

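As a quick sanity check outside the class, the same decision rule can be tried on a hand-picked weight vector. The numbers below are made up purely for illustration:

```python
import numpy as np

# Hypothetical weights and bias for a two-feature toy problem
weights = np.array([1.0, -1.0])
bias = 0.0

# Two samples, one on each side of the decision boundary
X = np.array([[2.0, 1.0],    # weighted sum = 1.0  -> +1
              [1.0, 2.0]])   # weighted sum = -1.0 -> -1
labels = np.where(np.dot(X, weights) + bias >= 0.0, 1, -1)
print(labels)  # [ 1 -1]
```

Note that np.where applies the threshold condition elementwise, so a whole batch of samples is classified without an explicit Python loop.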
Back in the fit method, we will then calculate the Δw terms for every data sample (xi, yi) in the dataset and repeat this step for a number of iterations (self.n_iter). For this, we need to compare the ground-truth label (yi) to the predicted label (self.predict(xi)). The resulting delta term will be used to update both the weights and the bias term:

...             for _ in range(self.n_iter):
...                 for xi, yi in zip(X, y):
...                     delta = self.lr * (yi - self.predict(xi))
...                     self.weights += delta * xi
...                     self.bias += delta

That's it!

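Putting all of the pieces together, the complete class can be run end to end on a small toy dataset. The data below is invented purely for illustration, two linearly separable clusters on either side of the origin:

```python
import numpy as np

class Perceptron(object):
    def __init__(self, lr=0.01, n_iter=10):
        self.lr = lr
        self.n_iter = n_iter

    def fit(self, X, y):
        # One weight per feature, plus a separate bias term
        self.weights = np.zeros(X.shape[1])
        self.bias = 0.0
        for _ in range(self.n_iter):
            for xi, yi in zip(X, y):
                # Update only when the prediction disagrees with the label
                delta = self.lr * (yi - self.predict(xi))
                self.weights += delta * xi
                self.bias += delta

    def predict(self, X):
        return np.where(np.dot(X, self.weights) + self.bias >= 0.0,
                        1, -1)

# Made-up, linearly separable data: +1 in the upper right, -1 in the lower left
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1, 1, -1, -1])

p = Perceptron(lr=0.1, n_iter=10)
p.fit(X, y)
print(p.predict(X))  # all four samples classified correctly
```

Because the toy data is linearly separable, the perceptron learning rule is guaranteed to converge here well within the ten iterations.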