Understanding the Anatomy of a Neural Network

Let's see what a neural network consists of:

  • Layers: Layers are the core building blocks of a neural network. Each layer is a data-processing module that acts as a filter: it takes one or more inputs, transforms them in a particular way, and produces one or more outputs. Each time data passes through a layer, it is processed in a way that exposes patterns relevant to the business question we are trying to answer.
  • Loss function: A loss function provides the feedback signal that is used in the various iterations of the learning process. It quantifies the deviation between the predicted and expected output for a single example.
  • Cost function: The cost function is the loss function aggregated over a complete set of examples.
  • Optimizer: An optimizer determines how the feedback signal provided by the loss function will be used to adjust the weights.
  • Input data: Input data is the data that is used to train the neural network. In supervised learning, each input example is paired with the target variable the network learns to predict.
  • Weights: The weights are calculated by training the network. They roughly correspond to the importance of each of the inputs. For example, if a particular input is more important than the others, training assigns it a greater weight value, which acts as a multiplier: even a weak signal on that input gains strength from the large weight. Weights thus end up scaling each of the inputs according to its importance.
  • Activation function: The input values are multiplied by their weights and then aggregated. Exactly how they are aggregated and how the aggregated value is interpreted is determined by the chosen activation function.
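The interplay of inputs, weights, and the activation function can be sketched in a few lines of Python. This is a minimal illustration of a single neuron, not a full layer; the input values, weight values, and bias shown here are made up for the example, and the sigmoid is just one common choice of activation function.

```python
import numpy as np

def sigmoid(z):
    # Activation function: squashes the aggregated value into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

inputs = np.array([0.5, 0.3, 0.2])    # one example with three input values
weights = np.array([0.9, 0.1, 0.4])   # learned importance of each input
bias = 0.1

# Each input is scaled by its weight, then the results are aggregated.
z = np.dot(inputs, weights) + bias

# The activation function determines how the aggregated value is interpreted.
output = sigmoid(z)
```

Note how the first input, carrying the largest weight (0.9), contributes the most to the aggregated value `z`, exactly as described above: the weight acts as a multiplier on that input's signal.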

Let's now have a look at a very important aspect of neural network training.

While training a neural network, we process the examples one by one. For each example, we generate an output using the model under training and calculate the difference between the expected output and the predicted output. For an individual example, this difference is called the loss; aggregated across the complete training dataset, it is called the cost. As training proceeds, we aim to find the values of the weights that result in the smallest loss values. Throughout training, we keep adjusting the weights until we find the set of values that yields the minimum possible overall cost. Once we reach that minimum, we consider the model trained.
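The loss/cost distinction and the weight-adjustment loop can be made concrete with a toy example. The sketch below fits a single weight to the hypothetical data y = 2x using gradient descent; the dataset, learning rate, and number of epochs are all illustrative choices, and real networks use many weights and an optimizer library rather than a hand-written update rule.

```python
import numpy as np

# Tiny training set: the target relationship is y = 2x.
X = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])

w = 0.0                # single weight, to be learned
learning_rate = 0.01

for epoch in range(200):
    predictions = w * X
    losses = (predictions - y) ** 2   # loss: deviation per example
    cost = losses.mean()              # cost: loss over the whole dataset
    # Adjust the weight in the direction that reduces the cost.
    gradient = 2 * ((predictions - y) * X).mean()
    w -= learning_rate * gradient
```

After the loop, `w` has converged close to 2.0 and the cost is near its minimum: the model is trained in exactly the sense described above.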
