Learning to use artificial neural networks

Imagine a way to make an enemy or game system emulate the way the brain works. That's the idea behind neural networks. They are built from a basic unit that we call a Perceptron; connecting the inputs and outputs of several Perceptrons is what makes a neural network.

In this recipe, we will learn how to build a neural system, starting with a single Perceptron and then joining several of them in order to create a network.

Getting ready…

We will need a data type for handling raw input; this is called InputPerceptron:

public class InputPerceptron
{
    public float input;
    public float weight;
}

How to do it…

We will implement two main classes: the first is the Perceptron data type itself, and the second is the data type that handles the set of Perceptrons as a neural network:

  1. Implement a Perceptron class derived from the InputPerceptron class that was previously defined:
    public class Perceptron : InputPerceptron
    {
        public InputPerceptron[] inputList;
        public delegate float Threshold(float x);
        public Threshold threshold;
        public float state;
        public float error;    
    }
  2. Implement the constructor for setting the number of inputs:
    public Perceptron(int inputSize)
    {
        inputList = new InputPerceptron[inputSize];
    }
  3. Define the function for processing the inputs:
    public void FeedForward()
    {
        float sum = 0f;
        foreach (InputPerceptron i in inputList)
        {
            sum += i.input * i.weight;
        }
        state = threshold(sum);
    }
  4. Implement the function for adjusting the weights. The original weight update overwrote each weight; here we apply the delta rule, accumulating a change proportional to the error and the input, and we store the error once for later retrieval by the hidden layer:
    public void AdjustWeights(float currentError)
    {
        int i;
        for (i = 0; i < inputList.Length; i++)
        {
            // Delta rule: the change is proportional to the error
            // and to the input that produced the output
            float deltaWeight = currentError * inputList[i].input;
            inputList[i].weight += deltaWeight;
        }
        error = currentError;
    }
  5. Define a function that retrieves the incoming weight from inputs that are themselves Perceptrons:
    public float GetIncomingWeight()
    {
        foreach (InputPerceptron i in inputList)
        {
            if (i is Perceptron)
                return i.weight;
        }
        return 0f;
    }
  6. Create the class for handling a set of Perceptrons as a network:
    using UnityEngine;
    using System.Collections;
    
    public class MLPNetwork : MonoBehaviour
    {
        public Perceptron[] inputPer;
        public Perceptron[] hiddenPer;
        public Perceptron[] outputPer;
    }
  7. Implement the function for transmitting inputs from one end of the neural network to the other:
    public void GenerateOutput(Perceptron[] inputs)
    {
        int i;
        for (i = 0; i < inputs.Length; i++)
            inputPer[i].state = inputs[i].input;
        
        for (i = 0; i < hiddenPer.Length; i++)
            hiddenPer[i].FeedForward();
        
        for (i = 0; i < outputPer.Length; i++)
            outputPer[i].FeedForward();
    }
  8. Define the back-propagation function, which drives the computation that actually emulates learning:
    public void BackProp(Perceptron[] outputs)
    {
        // next steps
    }
  9. Traverse the output layer, computing each Perceptron's error value:
    int i;
    for (i = 0; i < outputPer.Length; i++)
    {
        Perceptron p = outputPer[i];
        float state = p.state;
        float error = state * (1f - state);
        error *= outputs[i].state - state;
        p.AdjustWeights(error);
    }
  10. Traverse the hidden layer (the input layer needs no weight adjustment). Note that each hidden Perceptron sums the errors stored in the output layer, weighted by the incoming connections, before adjusting its own weights:
    for (i = 0; i < hiddenPer.Length; i++)
    {
        Perceptron p = hiddenPer[i];
        float state = p.state;
        float sum = 0f;
        int j;
        for (j = 0; j < outputPer.Length; j++)
        {
            float incomingW = outputPer[j].GetIncomingWeight();
            sum += incomingW * outputPer[j].error;
        }
        float error = state * (1f - state) * sum;
        p.AdjustWeights(error);
    }
  11. Implement a high-level function for ease of use:
    public void Learn(
            Perceptron[] inputs,
            Perceptron[] outputs)
    {
        GenerateOutput(inputs);
        BackProp(outputs);
    }
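
Note that the Threshold delegate declared in step 1 is never assigned anywhere in the recipe, so FeedForward would throw a null-reference error as-is; a sigmoid is a common choice. The following sketch, placed for instance inside a MonoBehaviour's Start method, shows how a single Perceptron could be wired by hand. The sigmoid and the initial weight values here are assumptions for illustration, not part of the recipe:

```csharp
// Hypothetical wiring: one Perceptron fed by two raw inputs
Perceptron p = new Perceptron(2);
p.threshold = x => 1f / (1f + Mathf.Exp(-x)); // sigmoid squashing function
p.inputList[0] = new InputPerceptron { input = 1f, weight = 0.5f };
p.inputList[1] = new InputPerceptron { input = 0f, weight = -0.3f };
p.FeedForward();
// p.state now holds the sigmoid of 1f * 0.5f + 0f * -0.3f
```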

How it works…

We implemented two data types in order to separate the Perceptrons that handle raw external input from the ones that are internally connected to each other; that's why the Perceptron class derives from the simpler InputPerceptron class. The FeedForward function takes the inputs and propagates them along the network. Finally, the back-propagation function is responsible for adjusting the weights, and this weight adjustment is what emulates learning.
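
The error term computed in step 9 comes from the derivative of the sigmoid activation: if state = sigmoid(sum), then the derivative of the sigmoid at sum equals state * (1 - state). A minimal, Unity-free sketch that checks this identity numerically (the class and method names here are illustrative, not part of the recipe):

```csharp
using System;

public static class SigmoidCheck
{
    public static float Sigmoid(float x)
    {
        return 1f / (1f + (float)Math.Exp(-x));
    }

    public static void Main()
    {
        float x = 0.5f;
        float state = Sigmoid(x);
        // Analytic derivative via the identity s'(x) = s(x) * (1 - s(x))
        float analytic = state * (1f - state);
        // Numerical derivative via central differences
        float h = 1e-3f;
        float numeric = (Sigmoid(x + h) - Sigmoid(x - h)) / (2f * h);
        Console.WriteLine(Math.Abs(analytic - numeric) < 1e-4f); // prints True
    }
}
```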
