More about VGG

In 2014, VGG took second place in the ImageNet classification challenge and first place in the ImageNet localization challenge. As we saw, the VGGNet design choice of stacking many small convolution layers allows for a deeper structure that performs better while having fewer parameters (if we remove the unnecessary fully connected layers). This design choice is so effective at creating powerful and efficient networks that pretty much all modern architectures copy the idea and rarely, if ever, use large filters.
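A quick back-of-the-envelope calculation shows why stacking small filters pays off. Two stacked 3x3 convolutions cover the same 5x5 receptive field as one 5x5 convolution, and three cover a 7x7 field, yet with noticeably fewer weights. The sketch below assumes an example channel count of 64 for both input and output and ignores biases:

```python
# Weight counts for receptive-field-equivalent alternatives,
# assuming C input and C output channels per layer (biases ignored).
C = 64  # example channel count, chosen for illustration

# Two stacked 3x3 convolutions cover a 5x5 receptive field.
stacked_3x3 = 2 * (3 * 3 * C * C)   # 73,728 weights
single_5x5 = 5 * 5 * C * C          # 102,400 weights

# Three stacked 3x3 convolutions cover a 7x7 receptive field.
stacked_3x3_deep = 3 * (3 * 3 * C * C)  # 110,592 weights
single_7x7 = 7 * 7 * C * C              # 200,704 weights

print(stacked_3x3, single_5x5)       # 73728 102400
print(stacked_3x3_deep, single_7x7)  # 110592 200704
```

On top of the parameter savings, each extra 3x3 layer inserts an additional nonlinearity, which is part of why the deeper stack also performs better.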

The VGG model has proven to work well on many tasks, and because of its simple architecture, it's a go-to model to start experimenting with or to adapt to the needs of your problem. However, it does have the following issues to be aware of:

  • Using only 3x3 convolutions, especially in the early layers where feature maps are still large, makes the compute cost too high for mobile solutions
  • Even deeper VGG structures don't work as well, due to the vanishing gradient problem mentioned in the previous chapters
  • The huge fully connected layers in the original design are overkill in terms of parameters, which not only slows the model down but also makes it more prone to overfitting
  • The use of many pooling layers, which is currently not considered good design
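To put the fully connected overkill in numbers, we can count the weights of the three FC layers in VGG-16: the first takes the flattened 7x7x512 feature map into 4,096 units, the second maps 4,096 to 4,096, and the last maps 4,096 to the 1,000 ImageNet classes. The sketch below ignores biases, which add a negligible amount:

```python
# Weight counts of VGG-16's three fully connected layers (biases ignored).
fc6 = 7 * 7 * 512 * 4096   # flattened 7x7x512 feature map -> 4096 units
fc7 = 4096 * 4096          # 4096 -> 4096
fc8 = 4096 * 1000          # 4096 -> 1000 ImageNet classes

fc_total = fc6 + fc7 + fc8
print(f"{fc_total / 1e6:.1f}M FC weights")  # 123.6M FC weights
```

That is roughly 123.6 million of VGG-16's approximately 138 million parameters, which is why later architectures replace these layers with global average pooling or drop them entirely.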