Loss function

So far, the model has been randomly initialized, and with this initialization we have been able to get an output. To assess whether the actual output is close to the desired output, a loss function is introduced. It quantifies how far the model's actual output is from the desired output, and so tells us how well the model is performing.

Let's have a look at a new table, which lists the actual output alongside the desired output:

Input (X)    Actual Output (Ya)    Desired Output (Y)
    2                 6                     4
    3                 9                     6
    4                12                     8
    5                15                    10
    6                18                    12
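
To make the table concrete, here is a minimal Python sketch. It assumes a toy one-parameter linear model whose randomly initialized weight happens to be 3, an assumption inferred from the table above (every actual output is 3X), while the desired mapping is Y = 2X:

```python
# A toy one-parameter linear model. The weight value 3 is an assumption,
# chosen because it reproduces the "Actual Output" column above (Ya = 3 * X);
# the text only says the model was randomly initialized.
weight = 3

inputs = [2, 3, 4, 5, 6]
desired = [2 * x for x in inputs]        # desired mapping Y = 2X, per the table

actual = [weight * x for x in inputs]    # Ya = weight * X
for x, ya, y in zip(inputs, actual, desired):
    print(f"X={x}  actual={ya}  desired={y}")
```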


If we write the loss function down in its simplest form, it is as follows:

Loss = Desired Output - Actual Output = Y - Ya

However, defining the loss this way yields both negative and positive values. A negative loss means the network is overshooting, as Desired Output < Actual Output, while in the reverse scenario (Desired Output > Actual Output) the network is undershooting. To keep opposite signs from cancelling each other when the losses are summed, we use the absolute loss instead:

Input (X)    Actual Output (Ya)    Desired Output (Y)    Loss = Y - Ya    Absolute Loss
    2                 6                     4                  -2                2
    3                 9                     6                  -3                3
    4                12                     8                  -4                4
    5                15                    10                  -5                5
    6                18                    12                  -6                6

Total Absolute Loss = 20
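
As a quick check, the following plain-Python sketch reproduces the signed and absolute losses from the table, including the total of 20:

```python
actual  = [6, 9, 12, 15, 18]    # Ya column from the table
desired = [4, 6, 8, 10, 12]     # Y column from the table

signed_losses   = [y - ya for y, ya in zip(desired, actual)]  # Loss = Y - Ya
absolute_losses = [abs(loss) for loss in signed_losses]

print(signed_losses)          # [-2, -3, -4, -5, -6]
print(absolute_losses)        # [2, 3, 4, 5, 6]
print(sum(absolute_losses))   # 20 -> total absolute loss
```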

This absolute-loss approach does the model no favors. If we examine the preceding table carefully, the smallest loss is 2 units and the largest is 6 units. The difference between the maximum and minimum loss may not look like much (here, 4 units), but it matters greatly to the model: absolute loss penalizes every error in direct proportion to its size, so large errors do not stand out from small ones. Hence, a different route is taken altogether. Rather than the absolute loss, we take the square of the loss:

Input (X)    Actual Output (Ya)    Desired Output (Y)    Loss = Y - Ya    Square of Loss
    2                 6                     4                  -2                4
    3                 9                     6                  -3                9
    4                12                     8                  -4               16
    5                15                    10                  -5               25
    6                18                    12                  -6               36
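
Total Squared Loss = 90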

Now, the larger the loss, the heavier the penalization: squaring amplifies large errors far more than small ones, making it immediately evident where the model is going most wrong.
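
The same computation with squared losses, again as a plain-Python sketch; summing the last column gives the total of 90, and dividing by the number of samples yields the familiar mean squared error (MSE):

```python
actual  = [6, 9, 12, 15, 18]    # Ya column from the table
desired = [4, 6, 8, 10, 12]     # Y column from the table

squared_losses = [(y - ya) ** 2 for y, ya in zip(desired, actual)]

print(squared_losses)                      # [4, 9, 16, 25, 36]
print(sum(squared_losses))                 # 90 -> total squared loss
print(sum(squared_losses) / len(desired))  # 18.0 -> mean squared error
```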
