Building the CNN graph

Let's go through the build_graph function in detail; it contains the network definition, the loss function, and the optimizer. First, we start the function by defining the placeholders for our inputs. We use two placeholders to supply data and labels to the graph: __x_ and __y_. The placeholder __x_ holds our input RGB images, while the placeholder __y_ holds the one-hot labels of the corresponding classes. We use None for the first (batch) dimension of each placeholder shape, which tells TensorFlow that this dimension can be anything and will be supplied when we execute the graph. We also define a boolean placeholder, __is_training, which lets the graph distinguish training from inference at execution time:

def build_graph(self):
    self.__x_ = tf.placeholder("float", shape=[None, 32, 32, 3], name='X')
    self.__y_ = tf.placeholder("int32", shape=[None, 10], name='Y')
    self.__is_training = tf.placeholder(tf.bool)
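
To make the role of the None dimension concrete, here is a minimal standalone sketch of feeding a batch at execution time; it is separate from the class, and the batch size of 16 and the random data are made up for illustration:

import numpy as np
import tensorflow as tf

# Standalone placeholders mirroring the ones defined in the class
x = tf.placeholder("float", shape=[None, 32, 32, 3])
y = tf.placeholder("int32", shape=[None, 10])

# A dummy batch of 16 images with random one-hot labels:
# the None dimension becomes 16 at execution time
batch_images = np.random.rand(16, 32, 32, 3).astype(np.float32)
batch_labels = np.eye(10, dtype=np.int32)[np.random.randint(0, 10, size=16)]

with tf.Session() as sess:
    # Fetch x itself just to show the resolved shape
    out = sess.run(x, feed_dict={x: batch_images, y: batch_labels})
    print(out.shape)  # (16, 32, 32, 3)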

Then, we define our network within the name scope model. tf.name_scope returns a context manager for use when defining TensorFlow ops. This context manager validates that the given values are from the same graph, makes that graph the default graph, and pushes a name scope in that graph, so every op created inside it gets a model/ prefix in its name.
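
As a quick illustration of what the name scope does (a standalone snippet, not part of the class):

import tensorflow as tf

with tf.name_scope("model") as scope:
    a = tf.constant(1.0, name="a")

print(scope)   # prints "model/"
print(a.name)  # prints "model/a:0"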

For this model, we construct a simple CNN with three convolutional layers, three pooling layers, and two fully connected layers, using the tf.layers API. Each 2 x 2 max-pooling layer with stride 2 halves the spatial dimensions, so the 32 x 32 input becomes 16 x 16, then 8 x 8, then 4 x 4; with 32 filters in the last convolution, tf.reshape therefore flattens each example from the last pooling layer into a vector of 4 * 4 * 32 = 512 elements, which is what the dense layer expects to receive. The output of the final layer is assigned to self.__logits, which is the tensor that will be passed as input to our loss function:

    with tf.name_scope("model") as scope:
        # Conv block 1: 32x32x3 -> 32x32x64 -> 16x16x64
        conv1 = tf.layers.conv2d(inputs=self.__x_, filters=64, kernel_size=[5, 5],
                                 padding="same", activation=tf.nn.relu)
        pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)

        # Conv block 2: 16x16x64 -> 16x16x64 -> 8x8x64
        conv2 = tf.layers.conv2d(inputs=pool1, filters=64, kernel_size=[5, 5],
                                 padding="same", activation=tf.nn.relu)
        pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)

        # Conv block 3: 8x8x64 -> 8x8x32 -> 4x4x32
        conv3 = tf.layers.conv2d(inputs=pool2, filters=32, kernel_size=[5, 5],
                                 padding="same", activation=tf.nn.relu)
        pool3 = tf.layers.max_pooling2d(inputs=conv3, pool_size=[2, 2], strides=2)

        # Flatten to [batch, 4 * 4 * 32] for the dense layers
        pool3_flat = tf.reshape(pool3, [-1, 4 * 4 * 32])

        # FC layers
        FC1 = tf.layers.dense(inputs=pool3_flat, units=128, activation=tf.nn.relu)
        FC2 = tf.layers.dense(inputs=FC1, units=64, activation=tf.nn.relu)
        self.__logits = tf.layers.dense(inputs=FC2, units=10)
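
If you want to verify where 4 * 4 * 32 comes from, the static shapes can be printed right after the layers are created; these values follow from the 32 x 32 input (the ? is the undefined batch dimension):

        print(pool1.get_shape())       # (?, 16, 16, 64)
        print(pool2.get_shape())       # (?, 8, 8, 64)
        print(pool3.get_shape())       # (?, 4, 4, 32)
        print(pool3_flat.get_shape())  # (?, 512)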

The next step is to define the loss function within the name scope loss_func. The loss used here is softmax cross entropy and, as mentioned earlier, we average the loss across the whole batch with tf.reduce_mean. Note that tf.nn.softmax_cross_entropy_with_logits expects floating-point labels, so we cast the int32 one-hot placeholder before passing it in. We create two tensors, the training loss __loss and the validation loss __loss_val (identical computations, logged under different tags), and add both as scalars to the TensorFlow summary data so they can be displayed as separate curves in TensorBoard later:

    with tf.name_scope("loss_func") as scope:
        # Cast the int32 one-hot labels to float, as required by
        # tf.nn.softmax_cross_entropy_with_logits
        labels = tf.cast(self.__y_, tf.float32)
        self.__loss = tf.reduce_mean(
            tf.nn.softmax_cross_entropy_with_logits(logits=self.__logits,
                                                    labels=labels))
        self.__loss_val = tf.reduce_mean(
            tf.nn.softmax_cross_entropy_with_logits(logits=self.__logits,
                                                    labels=labels))

        # Add loss to TensorBoard
        self.__train_summary = tf.summary.scalar("loss_train", self.__loss)
        self.__val_summary = tf.summary.scalar("loss_val", self.__loss_val)
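
For intuition, the fused op above computes the same quantity as the following manual version (illustration only; the names probs and manual_loss are mine, and the fused op is preferred in practice because it is more numerically stable):

        probs = tf.nn.softmax(self.__logits)
        manual_loss = tf.reduce_mean(
            -tf.reduce_sum(tf.cast(self.__y_, tf.float32) * tf.log(probs), axis=1))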

After defining our model and loss function, we need to specify which optimization function we will use to minimize the loss. The optimizer we choose here is Adam, and it is defined within the name scope optimizer.
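
The optimizer code itself is cut off in this excerpt; a minimal sketch consistent with the description would look like the following, where the learning rate value is an assumption rather than the book's actual choice:

    with tf.name_scope("optimizer") as scope:
        # Learning rate is an assumed value; the original may differ
        self.__train_step = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(self.__loss)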
