Artificial neural networks

The following figure shows a simple biological neuron on the left. The neuron has dendrites that receive signals from other neurons, a cell body that controls activation, and an axon that carries an electrical impulse to the dendrites of other neurons. The artificial neuron on the right has a series of weighted inputs, a summing function that combines those inputs, and a firing mechanism, F(Net), which decides whether the inputs have reached a threshold; if so, the neuron fires:

Neural networks tolerate noisy and distorted images, so they are useful when a black-box classification method is needed for potentially degraded images. The next area to consider is the summation function for the neuron inputs. The following diagram shows the summation function, called Net, for neuron i. The connections between the neurons, which carry the weighting values, contain the stored knowledge of the network. Generally, a network has an input layer, an output layer, and a number of hidden layers. A neuron fires if the sum of its inputs exceeds a threshold:

In the previous equation, the diagram and key show that the input values from a pattern P are passed to the neurons in the input layer of the network. These values become the input layer neuron activation values; they are a special case. The input to neuron i is the sum, over each connected neuron j, of the weighting value for the connection i-j multiplied by the activation of neuron j. The activation at neuron j (if it is not an input layer neuron) is given by F(Net), the squashing function, which is described next.
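As a rough sketch of the Net summation just described (the function and variable names here are illustrative, not from the text), the input to neuron i is the weighted sum of the activations of the neurons feeding into it:

```python
def net(weights_i, activations):
    """Net for neuron i: sum over j of weight w[i][j] times activation of j."""
    return sum(w * a for w, a in zip(weights_i, activations))

# Example: three incoming connections with illustrative weights.
total = net([0.5, -0.2, 0.8], [1.0, 0.5, 0.25])
print(total)  # approximately 0.6
```

This value is then passed to the squashing function F(Net) to produce the neuron's own activation.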

A simulated neuron needs a firing mechanism that decides whether the inputs to the neuron have reached a threshold; if they have, the neuron fires, producing the activation value for that neuron. This firing, or squashing, function can be described by the generalized sigmoid function shown in the following figure:

This function has two constants, A and B. B affects the shape of the activation curve, as shown in the previous graph: the bigger its value, the more closely the function resembles an on/off step. The value of A sets a minimum for the returned activation; in the previous graph, it is zero.

So, this provides a mechanism to simulate a neuron, create weighting matrices as the neuron connections, and manage the neuron activation. How are the networks organized? The next diagram shows a typical architecture: the neural network has an input layer of neurons, an output layer, and one or more hidden layers. Every neuron in each layer is connected to every neuron in the adjacent layers:
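In code, such a fully connected layered network can be represented simply as one weight matrix per pair of adjacent layers. This is an illustrative sketch (the layer sizes and helper name are assumptions for the example):

```python
import random

def make_weights(sizes, seed=0):
    """One weight matrix per adjacent layer pair; sizes lists neurons per layer.

    Each matrix has one row per neuron in the next layer and one column
    per neuron in the previous layer, so every neuron connects to every
    neuron in the adjacent layer.
    """
    rng = random.Random(seed)
    return [[[rng.uniform(-1, 1) for _ in range(n_in)]
             for _ in range(n_out)]
            for n_in, n_out in zip(sizes, sizes[1:])]

# Example: 3 input neurons, one hidden layer of 4, 2 output neurons.
layers = make_weights([3, 4, 2])
print(len(layers))        # 2 weight matrices for 3 layers
print(len(layers[0]))     # 4 rows: one per hidden neuron
print(len(layers[0][0]))  # 3 columns: one per input neuron
```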

During training, activation passes from the input layer through the network to the output layer. Then, the error, or difference between the expected and actual output, causes error deltas to be passed back through the network, altering the weighting matrix values. Once the desired output layer vector is achieved, the knowledge is stored in the weighting matrices, and the network can be trained further or used for classification.
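The training cycle above can be sketched in miniature for a single sigmoid output neuron: a forward pass, an error delta scaled by the sigmoid derivative, and a weight update that reduces the error. This is the output-layer case of backpropagation (the delta rule); the learning rate and starting weights are illustrative assumptions.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(weights, inputs, target, lr=0.5):
    """One forward pass and one weight update for a single output neuron."""
    net = sum(w * a for w, a in zip(weights, inputs))
    out = sigmoid(net)
    # Error delta: (expected - actual) scaled by the sigmoid derivative.
    delta = (target - out) * out * (1.0 - out)
    # Nudge each weight along its input to reduce the error.
    return [w + lr * delta * a for w, a in zip(weights, inputs)]

# Repeated steps drive the output toward the target of 1.0.
w = [0.1, -0.3]
for _ in range(200):
    w = train_step(w, [1.0, 0.5], target=1.0)
print(sigmoid(sum(wi * xi for wi, xi in zip(w, [1.0, 0.5]))))
```

A full network repeats this idea layer by layer, propagating each layer's deltas back through the weight matrices.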

So, the theory behind neural networks has been described in terms of backpropagation. Now it is time to gain some practical knowledge.
