There's more...

In the preceding example, without preprocessing, the accuracy on all three data sets is roughly 40 percent. When we add preprocessing, the training accuracy climbs to 90 percent, but the validation and test accuracy remain around 45 percent, which is a classic sign of overfitting.
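Preprocessing here typically means rescaling and standardizing the pixel values. The following is a minimal sketch using NumPy; the 48x48 grayscale shape is an assumption based on the common Kaggle emotion dataset, not a detail taken from the recipe itself:

```python
import numpy as np

def standardize(images):
    """Scale pixels to [0, 1], then subtract the per-image mean and
    divide by the per-image standard deviation."""
    images = images.astype(np.float32) / 255.0
    mean = images.mean(axis=(1, 2), keepdims=True)
    std = images.std(axis=(1, 2), keepdims=True) + 1e-7
    return (images - mean) / std

# Example: a batch of four random 48x48 grayscale images.
batch = np.random.randint(0, 256, size=(4, 48, 48), dtype=np.uint8)
normalized = standardize(batch)
print(normalized.shape)  # (4, 48, 48)
```

Per-image standardization like this keeps the network from having to adapt to brightness and contrast differences between photographs.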

There are several changes you can introduce to improve the result. First, the dataset used in this recipe is a Kaggle dataset of only about 22,000 images. If you inspect these images, you will find that adding a step to filter out everything but the faces improves the result. Another strategy, described in the following paper, is to increase the size of the hidden layers instead of reducing them: https://www.cs.swarthmore.edu/~meeden/cs81/s14/papers/KevinVincent.pdf.

Another change that has proven very successful for identifying emotions is training on facial keypoints instead of the whole face: http://cs229.stanford.edu/proj2010/McLaughlinLeBayanbat-RecognizingEmotionsWithDeepBeliefNets.pdf.
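The keypoint idea replaces raw pixels with geometric features derived from facial landmarks. A minimal sketch, assuming the landmarks are already available as (x, y) coordinates (for example from a landmark detector such as dlib, which is not shown here): the pairwise distances between landmarks form a compact feature vector that captures the face's geometry.

```python
import numpy as np

def keypoint_features(landmarks):
    """Turn an (n, 2) array of facial landmark coordinates into a
    feature vector of all pairwise Euclidean distances."""
    landmarks = np.asarray(landmarks, dtype=np.float64)
    # Broadcasting builds the full (n, n) distance matrix.
    diffs = landmarks[:, None, :] - landmarks[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    # Keep only the upper triangle, excluding the diagonal.
    i, j = np.triu_indices(len(landmarks), k=1)
    return dists[i, j]

# Four toy landmarks: the corners of a unit square.
square = [(0, 0), (1, 0), (0, 1), (1, 1)]
features = keypoint_features(square)
print(len(features))  # 6 distances for 4 points
```

Feeding a vector like this (normalized, e.g., by the inter-ocular distance) to the classifier instead of raw pixels is one way to realize the approach the paper describes.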

With the preceding recipe, you can play around with these changes and explore how performance improves. May the GPU force be with you!
