Understanding the pipeline

Let's start by cloning the models repository onto your machine:

git clone https://github.com/tensorflow/models/

Now, let's dive into the pipeline provided in Google's models repository.

If you look at the models/research/slim folder in the repository, you'll see directories named datasets, deployment, nets, preprocessing, and scripts, along with a number of files for building models, files defining the training and testing pipelines, and code for working with the ImageNet dataset as well as a smaller dataset named flowers.
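For orientation, the relevant layout looks roughly like this (abbreviated, and based on the repository layout at the time of writing; the exact contents depend on which revision you cloned):

models/research/slim/
    datasets/          # dataset definitions and download/convert helpers
    deployment/        # utilities for deploying training across devices
    nets/              # network definitions, including inception_v3.py
    preprocessing/     # per-model image preprocessing functions
    scripts/           # example fine-tuning and export scripts
    download_and_convert_data.py
    train_image_classifier.py
    eval_image_classifier.py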

We will use the download_and_convert_data.py script to build our DR dataset. This image classification model library is built on top of the slim library. In this chapter, we will fine-tune the Inception network defined in nets/inception_v3.py (we'll talk more about the network's specifications and the concepts behind it later in this chapter), which includes calculating the loss function, adding the various ops, structuring the network, and more. Finally, the train_image_classifier.py and eval_image_classifier.py files contain the generalized procedures for building the training and testing pipelines for our network, as sketched below.
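To make this concrete, here is roughly how the three scripts are invoked from the models/research/slim directory. This is a sketch, not the final commands we will run: the dataset name dr and all paths are assumptions for illustration (a custom dataset must first be registered in datasets/dataset_factory.py before the scripts will recognize it), and the flags shown are the ones documented in the slim README, which may change between repository revisions:

# Convert raw images into TFRecords (the dataset name "dr" is
# hypothetical; register it in datasets/dataset_factory.py first):
python download_and_convert_data.py \
    --dataset_name=dr \
    --dataset_dir=/path/to/dr_data

# Fine-tune Inception v3 from a pre-trained ImageNet checkpoint,
# restoring everything except the final logits layers:
python train_image_classifier.py \
    --train_dir=/path/to/train_logs \
    --dataset_name=dr \
    --dataset_split_name=train \
    --dataset_dir=/path/to/dr_data \
    --model_name=inception_v3 \
    --checkpoint_path=/path/to/inception_v3.ckpt \
    --checkpoint_exclude_scopes=InceptionV3/Logits,InceptionV3/AuxLogits

# Evaluate the fine-tuned model on the validation split:
python eval_image_classifier.py \
    --checkpoint_path=/path/to/train_logs \
    --dataset_name=dr \
    --dataset_split_name=validation \
    --dataset_dir=/path/to/dr_data \
    --model_name=inception_v3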

For this chapter, due to the complex nature of the network, we will use a GPU-based pipeline to train it. If you want to find out how to install TensorFlow with GPU support on your machine, refer to Appendix A, Advanced Installation, in this book. You will also need about 120 GB of free disk space on your machine to run this code. You can find the final code files in the Chapter 8 folder of this book's code files.
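Before kicking off a long training run, it may be worth confirming that TensorFlow can actually see your GPU. With the TensorFlow 1.x API that the slim library targets, a quick one-line check is:

python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"

If this prints False, revisit the GPU installation steps before starting training.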