Approaches to transfer learning

Let's look at the different approaches to transfer learning. Different names may be given to these approaches, but the concepts remain the same:

  1. Using a pre-trained model: There are many pre-trained models available that can cover most basic deep learning tasks. Throughout this book, we use a number of pre-trained models and derive our results from them.
  2. Training a model for reuse: Let's assume that you want to solve problem A, but you don't have enough data to achieve good results. If there is a related problem, B, for which enough data is available, you can train a model on problem B and use it as a starting point for problem A. Whether you need to reuse all of the layers or only a few of them depends on the problem you are solving (see the first sketch after this list).
  3. Feature extraction: With deep learning, we can use the network itself to extract features from the dataset. In traditional machine learning, features are mostly handcrafted by developers, whereas a neural network can learn which features to pass on and which ones to discard. For example, we can use only the initial layers to learn the right representation of the features and drop the output layer, because it is likely to be too specific to the original task. We simply feed our data into the network and use one of the intermediate layers as the output layer (see the second sketch after this list).
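
The following is a minimal sketch of the second approach, reusing layers trained on one problem as the starting point for another. The choice of MobileNetV2 with ImageNet weights standing in for the "problem B" model, the input shape, and the 10-class head for "problem A" are illustrative assumptions, not fixed by the text:

```python
import tensorflow as tf

# Layers learned on problem B (ImageNet weights stand in for the problem B
# model here); freeze them so their weights are reused as-is.
base_model = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)
base_model.trainable = False

# Attach a small new head for problem A; only these layers are trained.
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 classes assumed for problem A
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(problem_a_dataset, epochs=5)  # train the new head on the smaller problem A data
```

Unfreezing some of the later layers of the base model and training them with a small learning rate is a common follow-up step when more problem A data is available.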
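
The second sketch illustrates feature extraction: truncating a pre-trained network at an intermediate layer and using its activations as features. The pre-trained network (VGG16) and the layer name used as the cut-off point are illustrative choices:

```python
import tensorflow as tf

# Load a pre-trained network (VGG16, chosen for illustration) without its
# task-specific classification head.
base_model = tf.keras.applications.VGG16(weights="imagenet", include_top=False)

# Treat one of the intermediate layers as the output: its activations serve
# as learned features for a new task.
feature_extractor = tf.keras.Model(
    inputs=base_model.input,
    outputs=base_model.get_layer("block4_pool").output,
)

# Feed data through the truncated network to obtain feature representations.
images = tf.random.uniform((8, 224, 224, 3))   # placeholder batch of images
features = feature_extractor(images)
print(features.shape)                          # e.g. (8, 14, 14, 512)
```

The extracted features can then be passed to a small classifier or any other model trained on the new task.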

With this, we will start building our model using TensorFlow.
