Summary

In this chapter, we looked at what artificial neural networks are, traced their history, and examined the reasons for their appearance, rise, and fall, and why they have become one of the most actively developed machine learning approaches today. We compared biological and artificial neurons before learning the basics of the perceptron concept, which was created by Frank Rosenblatt. Then, we discussed the internal features of artificial neurons and networks, such as activation functions and their characteristics, network topology, and the concept of convolution layers. We also learned how to train artificial neural networks with the error backpropagation method and saw how to choose the right loss function for different types of tasks. Finally, we discussed the regularization methods that are used to combat overfitting during training.

Finally, we implemented a simple MLP for a regression task with the Shogun, Dlib, and Shark-ML C++ machine learning libraries. Then, we implemented a more advanced convolutional network for an image classification task with PyTorch, a specialized neural network framework. This showed us the benefits of specialized frameworks over general-purpose libraries.

In the next chapter, we will discuss recurrent neural networks (RNNs), which are one of the most well-known and practical approaches for working with time-series data and natural language processing. The key differences from other types of neural networks are that the connections between RNN elements form a directed sequence for processing sequential data, and that recurrent networks can use internal memory to process sequences of arbitrary length.
