Artificial neurons

Before understanding artificial neural networks (ANNs), let's first understand what neurons are and how the neurons in our brain actually work. A neuron is the basic computational unit of the human brain. Our brain contains approximately 100 billion neurons, and they are connected to one another through synapses. A neuron receives input from the external environment, from sensory organs, or from other neurons through branch-like structures called dendrites. These inputs are strengthened or weakened, that is, they are weighted according to their importance, and then summed together in the soma (cell body). The summed input is then processed in the cell body, travels along the axon, and is sent on to other neurons. A basic biological neuron is shown in the following diagram:

[Figure: a basic biological neuron, showing dendrites, the soma (cell body), and the axon]

Now, how do artificial neurons work? Let's suppose we have three inputs, x1, x2, and x3, to predict the output y. These inputs are multiplied by the weights w1, w2, and w3 and summed together, that is, x1.w1 + x2.w2 + x3.w3. But why do we multiply these inputs by weights? Because not all inputs are equally important in calculating the output y. Let's say that x2 is more important in calculating the output than the other two inputs. Then, we assign a higher value to w2 than to the other two weights, so that, upon multiplying the inputs by the weights, the term x2.w2 contributes more than the other two terms. After multiplying the inputs by the weights, we sum them up and add a value called the bias, b. So, z = (x1.w1 + x2.w2 + x3.w3) + b, that is:

z = Σi xi.wi + b
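To make this concrete, the following is a minimal NumPy sketch of the weighted sum; the input values, weights, and bias below are made-up numbers used only for illustration.

```python
import numpy as np

# Made-up example values for illustration only
x = np.array([0.5, 0.9, 0.2])   # inputs x1, x2, x3
w = np.array([0.3, 0.8, 0.1])   # weights w1, w2, w3 (w2 is the largest, so x2 matters most)
b = 0.4                          # bias

# z = (x1.w1 + x2.w2 + x3.w3) + b
z = np.dot(x, w) + b
print(z)  # 0.5*0.3 + 0.9*0.8 + 0.2*0.1 + 0.4 = 1.29
```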

Doesn't z look like the equation of linear regression? Isn't it just the equation of a straight line, z = mx + b, where m is the weight (coefficient), x is the input, and b is the bias (intercept)? Well, yes. So what, then, is the difference between a neuron and linear regression? In a neuron, we introduce non-linearity to the result, z, by applying a function f() called the activation or transfer function. So, our output is y = f(z). A single artificial neuron is shown in the following diagram:

[Figure: a single artificial neuron, with inputs x1, x2, x3, weights w1, w2, w3, bias b, and activation function f(z) producing the output y]
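To see how the activation function introduces non-linearity, here is a small sketch that uses the sigmoid function as f(z); sigmoid is just one common choice assumed here for illustration, not a choice made in the text.

```python
import numpy as np

def sigmoid(z):
    """Sigmoid activation: squashes any real-valued z into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Without an activation, the output would simply be the linear value z (a straight line in x).
# With the activation, the same z is passed through a non-linear function.
z = 1.29            # the weighted sum computed in the earlier sketch
y = sigmoid(z)
print(y)            # approximately 0.784
```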

In an artificial neuron, we take the inputs x, multiply them by the weights w, add the bias b, and then apply the activation function f(z) to this result to predict the output y.
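Putting the pieces together, a single artificial neuron can be sketched as one small function; as before, the sigmoid activation and the sample numbers are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b, activation=sigmoid):
    """A single artificial neuron: weighted sum plus bias, passed through an activation."""
    z = np.dot(x, w) + b      # z = (x1.w1 + x2.w2 + x3.w3) + b
    return activation(z)      # y = f(z)

# Example usage with made-up values
x = np.array([0.5, 0.9, 0.2])
w = np.array([0.3, 0.8, 0.1])
b = 0.4
y = neuron(x, w, b)
print(y)                      # approximately 0.784
```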
