Perceptron

A perceptron is a basic neural network building block and one of the earliest supervised learning algorithms. It computes a weighted sum of its input features plus an offset value called the bias. Each incoming signal, or input, has its own weight; when an input signal is received, it is multiplied by its assigned weight, and the weighted inputs are then summed together with the bias. The weights and the bias are adjusted continuously during the learning phase, and the adjustment depends on the error of the last result. Training starts with random weights and bias, and with each iteration they are adjusted so that the next result moves closer to the desired output. The function that sums all of this together is called the sum transfer function, and its result is fed into an activation function. With a binary step activation function, the output is 1 if the sum reaches a threshold and 0 otherwise, which gives us a binary classifier. A schematic illustration is shown in the following diagram:

[Diagram: inputs multiplied by their weights, summed with the bias by the sum transfer function, and passed through an activation function to produce the output]
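The forward pass can be written in a few lines. The following is a minimal sketch in Python with NumPy; the function name predict, the argument names, and the threshold of 0 for the binary step are illustrative assumptions, not taken from the original text:

import numpy as np

def predict(x, weights, bias):
    # Sum transfer function: weighted sum of the inputs plus the bias
    weighted_sum = np.dot(x, weights) + bias
    # Binary step activation: 1 if the sum reaches the threshold, otherwise 0
    return 1 if weighted_sum >= 0 else 0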

Training a perceptron involves a fairly simple learning algorithm: it calculates the error between the computed output value and the correct training output value, and uses that error to adjust the weights, thus implementing a form of gradient descent. This algorithm is usually called the delta rule.
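A minimal sketch of this training loop, continuing the previous example; the train function, the learning rate, and the epoch count are illustrative assumptions chosen for the demonstration:

import numpy as np

def train(X, y, learning_rate=0.1, epochs=20):
    rng = np.random.default_rng(seed=0)
    # Start with random weights and a zero bias
    weights = rng.normal(scale=0.1, size=X.shape[1])
    bias = 0.0
    for _ in range(epochs):
        for x_i, target in zip(X, y):
            output = 1 if np.dot(x_i, weights) + bias >= 0 else 0
            # Delta rule: adjust weights and bias by the error of the last result
            error = target - output
            weights = weights + learning_rate * error * x_i
            bias = bias + learning_rate * error
    return weights, bias

# Logical AND is linearly separable, so the perceptron converges on it
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
weights, bias = train(X, y)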

A single-layer perceptron is not very advanced: functions that are not linearly separable, such as XOR, cannot be modeled with it. To address this limitation, a structure with multiple layers of perceptrons was introduced, called the multilayer perceptron, also known as the feedforward neural network. The sketch after this paragraph illustrates the failure on XOR.
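Reusing the hypothetical train function from the previous sketch on XOR targets makes the limitation concrete; no single line separates the two classes, so no choice of weights and bias reproduces all four target values:

# XOR targets for the same four inputs as before
y_xor = np.array([0, 1, 1, 0])
weights, bias = train(X, y_xor)
predictions = [1 if np.dot(x_i, weights) + bias >= 0 else 0 for x_i in X]
print(predictions)  # never matches [0, 1, 1, 0], however long we train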
