Choosing the triplets

How we choose the triplets matters, because it directly affects how well the network learns.

If we choose negative cases randomly, most of them satisfy the triplet condition trivially. The reason is that a randomly picked negative usually looks nothing like the anchor, so the distance between their encodings is already large, and the margin is easily met no matter which image we pick. We can express this condition mathematically as follows (where f is the encoding network, A, P, and N are the anchor, positive, and negative images, and α is the margin):

||f(A) − f(P)||² + α ≤ ||f(A) − f(N)||²

With such easy triplets, the network gets lazy: the condition is already satisfied before any weight update, so later iterations teach it almost nothing. To resolve this, we need to choose triplets that are hard to train on, where the two distances are close to each other:

||f(A) − f(P)||² ≈ ||f(A) − f(N)||²

This means that we need to choose a negative case that is as similar as possible to the anchor image; this leads to smaller anchor-negative distances, forcing the neural network to work hard to satisfy the margin.

The same applies to the positive case: we pick a positive image that is not very similar to the anchor image, making the neural network work for a greater number of iterations to pull the two encodings together.
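This selection strategy is often called hard triplet mining. The sketch below illustrates it with NumPy, assuming we already have a batch of embeddings with identity labels; the function name and toy data are illustrative, not from a specific library:

```python
import numpy as np

def hardest_triplet(embeddings, labels, anchor_idx):
    """For a given anchor, pick the hardest positive (farthest same-label
    embedding) and the hardest negative (closest different-label one)."""
    anchor = embeddings[anchor_idx]
    # Squared Euclidean distance from the anchor to every embedding
    dists = np.sum((embeddings - anchor) ** 2, axis=1)

    same = labels == labels[anchor_idx]
    same[anchor_idx] = False                 # exclude the anchor itself
    diff = labels != labels[anchor_idx]

    pos_idx = np.argmax(np.where(same, dists, -np.inf))  # farthest positive
    neg_idx = np.argmin(np.where(diff, dists, np.inf))   # closest negative
    return pos_idx, neg_idx

# Toy example: four 2-D embeddings, two identities
emb = np.array([[0.0, 0.0], [1.0, 1.0], [0.2, 0.1], [3.0, 3.0]])
lab = np.array([0, 0, 1, 1])
p, n = hardest_triplet(emb, lab, anchor_idx=0)
# p is the farthest image of the same person, n the closest impostor
```

In practice this mining is done per mini-batch, so the "hardest" triplets are recomputed as the embeddings change during training.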

During the training process, we need several positive example images per person. For the 100-employee system mentioned in the previous section, we may need more than 1,000 images, that is, an average of at least 10 positive images per person.

While testing, we can apply the famous one-shot learning: if a new employee joins the company, we don't need to retrain anything, because the network already gives us a general way to encode faces. We simply encode the new employee's face image once, store that encoding in the database, and compare future captures against it.

This resolves the one-shot learning problem, and scaling is solved automatically; but again, during training we still need several positive examples per identity.
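A minimal sketch of that verification step, assuming a plain dictionary as the database and pre-computed embeddings (in a real system these would come from the trained encoding network; the names and threshold below are illustrative):

```python
import numpy as np

# Hypothetical database: name -> stored embedding (one enrollment image each)
database = {
    "alice": np.array([0.1, 0.9, 0.0]),
    "bob":   np.array([0.8, 0.1, 0.2]),
}

def identify(query_embedding, database, threshold=0.5):
    """Return the closest database identity if its distance is below the
    threshold, otherwise report the face as unknown."""
    best_name, best_dist = None, float("inf")
    for name, stored in database.items():
        dist = np.linalg.norm(query_embedding - stored)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else "unknown"

# Enrolling a new employee is just one more entry -- no retraining needed
database["carol"] = np.array([0.5, 0.5, 0.5])

print(identify(np.array([0.12, 0.88, 0.01]), database))  # prints "alice"
```

The threshold trades off false accepts against false rejects and would be tuned on a validation set in practice.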
