Emulating a neural network

Let's simplify the preceding diagram of the neural network:

We'll have a circle represent the body of the neuron, and we'll call it the neuron. The "dendrites" of the neuron receive inputs from other neurons (not shown) and add them all up. Each input comes from another neuron; so, if you see three inputs, it means that this neuron is connected to three other neurons.

If the sum of the inputs exceeds a threshold value, then we can say the neuron "fires" or is activated. This simulates the action potential of an actual neuron. For simplicity, let's say if it fires, then the output will be 1; otherwise, it will be 0. Here is a good emulation of it in Go code:

func neuron(threshold int, inputs ...int) int {
    var total int
    for _, in := range inputs {
        total += in
    }
    if total > threshold {
        return 1
    }
    return 0
}
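
For example, a neuron with a threshold of 2 fires only when more than two of its inputs are active (the values here are purely illustrative):

neuron(2, 1, 1, 1) // returns 1: the total of 3 exceeds the threshold of 2
neuron(2, 1, 1, 0) // returns 0: the total of 2 does not exceed 2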

This is generally known as a perceptron, and it's a faithful emulation of how neurons work, if your knowledge of how neurons work is stuck in the 1940s and 1950s.

Here is a rather interesting anecdote: as I was writing this section, King Princess' 1950 started playing in the background, and I thought it would be rather apt to imagine ourselves in the 1950s, developing the perceptron. There remains a problem, though: the artificial neuron we have emulated so far cannot learn! It is programmed to do whatever the inputs tell it to do.

What does it mean for an artificial neural network "to learn", exactly? There's an idea that arose in neuroscience in the late 1940s, called the Hebbian rule, which can be briefly summed up as: neurons that fire together, wire together. This gives rise to the idea that some synapses are thicker and hence have stronger connections, while other synapses are thinner and hence have weaker connections.

To emulate this, we need to introduce the concept of a weight: each input carries a weight that corresponds to the strength of the connection from the other neuron. Here's a good approximation of this idea:

func neuron(threshold int, weights, inputs []int) int {
    if len(weights) != len(inputs) {
        panic("Expected length of weights to be the same as the length of inputs")
    }
    var total int
    for i, in := range inputs {
        total += weights[i] * in
    }
    if total > threshold {
        return 1
    }
    return 0
}
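
For example, a weight of 0 means the connection from that neuron contributes nothing, no matter what that neuron outputs (again, purely illustrative values):

neuron(2, []int{2, 0, 1}, []int{1, 1, 1}) // returns 1: 2*1 + 0*1 + 1*1 = 3, which exceeds 2
neuron(2, []int{2, 0, 1}, []int{0, 1, 1}) // returns 0: 2*0 + 0*1 + 1*1 = 1, which does not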

At this point, if you are familiar with linear algebra, you might think to yourself that total is essentially the dot product of the weights vector and the inputs vector. You would be absolutely correct. Additionally, if the threshold is 0, then you have simply applied the Heaviside step function:

func heaviside(a float64) float64 {
    if a >= 0 {
        return 1
    }
    return 0
}

In other words, we can summarize a single neuron in the following way:

func neuron(weights, inputs []float64) float64 {
    return heaviside(vectorDot(weights, inputs))
}

Note that in the last two examples, I switched over from int to the more canonical float64. The point remains the same: a single neuron is simply a function applied to a dot product.
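
The vectorDot function itself hasn't been shown; it can be sketched out like so, reusing the same length check as before:

func vectorDot(a, b []float64) float64 {
    if len(a) != len(b) {
        panic("Expected length of a to be the same as the length of b")
    }
    var total float64
    for i := range a {
        total += a[i] * b[i] // accumulate the products of matching elements
    }
    return total
}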

A single neuron does not do much. But stack a bunch of them together and arrange them by layers like so, and then suddenly they start to do more:

Now we come to the part that requires a conceptual leap: if a single neuron is essentially just a dot product, then stacking neurons into a layer simply turns the weights into a matrix!

Given that an image can be represented as a flat slice of float64, the vectorDot function is replaced with matVecMul, which is a function that multiplies a matrix and a vector to return a vector. We can write a function representing the neural layer like so:

func affine(weights [][]float64, inputs []float64) []float64 {
    return activation(matVecMul(weights, inputs))
}
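
Neither matVecMul nor activation has been defined yet; they can be sketched out like so: matVecMul computes one vectorDot per row of the weight matrix (each row holds a single neuron's weights), while activation simply applies heaviside element-wise:

func matVecMul(m [][]float64, v []float64) []float64 {
    out := make([]float64, len(m))
    for i, row := range m {
        out[i] = vectorDot(row, v) // each row is one neuron's weight vector
    }
    return out
}

func activation(v []float64) []float64 {
    out := make([]float64, len(v))
    for i, a := range v {
        out[i] = heaviside(a) // fire or don't fire, element-wise
    }
    return out
}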