How to do it...

We proceed with the recipe as follows:

  1. Import a few utils and the core layers for ConvNets: dropout, fully_connected, and max_pool. In addition, import a few modules useful for image preprocessing and image augmentation. Note that TFLearn provides some predefined higher-level layers for ConvNets, which lets us focus on the definition of the network:
from __future__ import division, print_function, absolute_import 
import tflearn 
from tflearn.data_utils import shuffle, to_categorical 
from tflearn.layers.core import input_data, dropout, fully_connected 
from tflearn.layers.conv import conv_2d, max_pool_2d 
from tflearn.layers.estimator import regression 
from tflearn.data_preprocessing import ImagePreprocessing 
from tflearn.data_augmentation import ImageAugmentation 
  2. Load the CIFAR-10 data and split it into X (training images), Y (training labels), X_test (test images), and Y_test (test labels). It is useful to shuffle X and Y so that training does not depend on a particular ordering of the data. The last step is to one-hot encode the labels Y and Y_test:
# Data loading and preprocessing 
from tflearn.datasets import cifar10 
(X, Y), (X_test, Y_test) = cifar10.load_data() 
X, Y = shuffle(X, Y) 
Y = to_categorical(Y, 10) 
Y_test = to_categorical(Y_test, 10)
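The effect of to_categorical can be illustrated with a minimal pure-Python sketch; to_one_hot below is a hypothetical stand-in that mirrors the same idea, turning each integer label in [0, num_classes) into a one-hot vector:

```python
# Minimal sketch of one-hot encoding, as performed by to_categorical.
# Each integer label becomes a vector of zeros with a single 1.0 at the
# index given by the label.
def to_one_hot(labels, num_classes):
    vectors = []
    for label in labels:
        row = [0.0] * num_classes
        row[int(label)] = 1.0
        vectors.append(row)
    return vectors

encoded = to_one_hot([0, 3, 9], 10)
```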
  3. Use ImagePreprocessing() for zero-centering (with the mean computed over the whole dataset) and for STD normalization (with the standard deviation computed over the whole dataset). The TFLearn data stream is designed to speed up training by preprocessing data on the CPU while the GPU performs model training:
# Real-time data preprocessing 
img_prep = ImagePreprocessing() 
img_prep.add_featurewise_zero_center() 
img_prep.add_featurewise_stdnorm()
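The two preprocessing steps amount to subtracting the dataset mean and dividing by the dataset standard deviation. A minimal sketch of that arithmetic on a flat list of values (featurewise_normalize is a hypothetical helper, not part of TFLearn):

```python
# Sketch of featurewise zero-center followed by std normalization:
# subtract the mean of the whole dataset, then divide by its std.
def featurewise_normalize(values):
    mean = sum(values) / len(values)
    centered = [v - mean for v in values]          # zero-center
    variance = sum(c * c for c in centered) / len(centered)
    std = variance ** 0.5
    return [c / std for c in centered]             # std normalization

normalized = featurewise_normalize([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
```

After normalization the values have mean 0 and standard deviation 1, which generally helps gradient-based training converge.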
  4. Augment the dataset with random left-right flips and random rotations. This step is a simple trick used to increase the data available for training:
# Real-time data augmentation 
img_aug = ImageAugmentation() 
img_aug.add_random_flip_leftright() 
img_aug.add_random_rotation(max_angle=25.) 
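A random left-right flip simply mirrors the image horizontally with some probability. A tiny sketch on a nested-list "image" (random_flip_leftright here is a hypothetical illustration, not the TFLearn implementation):

```python
import random

# Sketch of a random left-right flip: with probability 0.5, reverse
# each row of pixels, mirroring the image horizontally.
def random_flip_leftright(image, rng):
    if rng.random() < 0.5:
        return [row[::-1] for row in image]
    return image

image = [[1, 2, 3],
         [4, 5, 6]]
mirrored = [row[::-1] for row in image]   # deterministic flip for illustration
maybe_flipped = random_flip_leftright(image, random.Random(0))
```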
  5. Create the convolutional network using the image preprocessing and augmentation defined earlier. The network consists of three convolutional layers. The first uses 32 convolutional filters of size 3 with a ReLU activation, followed by a max_pool layer for downsampling. Then come two convolutional layers in cascade, each with 64 filters of size 3 and a ReLU activation, followed by another max_pool for downsampling and a fully connected layer with 512 neurons and ReLU activation, with dropout at probability 50 percent applied after it. The last layer is a fully connected layer with 10 neurons and a softmax activation to determine the category of the input image. Note that this particular type of ConvNet is known to be very effective with CIFAR-10. In this particular case, we use the Adam optimizer with categorical_crossentropy loss and a learning rate of 0.001:
# Convolutional network building 
network = input_data(shape=[None, 32, 32, 3], 
                     data_preprocessing=img_prep, 
                     data_augmentation=img_aug) 
network = conv_2d(network, 32, 3, activation='relu') 
network = max_pool_2d(network, 2) 
network = conv_2d(network, 64, 3, activation='relu') 
network = conv_2d(network, 64, 3, activation='relu') 
network = max_pool_2d(network, 2) 
network = fully_connected(network, 512, activation='relu') 
network = dropout(network, 0.5) 
network = fully_connected(network, 10, activation='softmax') 
network = regression(network, optimizer='adam', 
                     loss='categorical_crossentropy', 
                     learning_rate=0.001)
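It can help to trace how the tensor shape evolves through the layers above. The arithmetic below assumes TFLearn's defaults of 'same' padding with stride 1 for conv_2d and stride 2 for max_pool_2d; the helper names are for illustration only:

```python
# Shape walkthrough for the network above.
def conv_same(h, w, c_out):
    return (h, w, c_out)            # 'same' padding keeps the spatial size

def pool2(h, w, c):
    return (h // 2, w // 2, c)      # 2x2 pooling halves each spatial dim

shape = (32, 32, 3)                           # CIFAR-10 input
shape = conv_same(shape[0], shape[1], 32)     # conv, 32 filters -> (32, 32, 32)
shape = pool2(*shape)                         # -> (16, 16, 32)
shape = conv_same(shape[0], shape[1], 64)     # -> (16, 16, 64)
shape = conv_same(shape[0], shape[1], 64)     # -> (16, 16, 64)
shape = pool2(*shape)                         # -> (8, 8, 64)
flattened = shape[0] * shape[1] * shape[2]    # inputs to the 512-unit layer
```

So the fully connected layer sees 8 * 8 * 64 = 4096 inputs before projecting down to 512 and finally to the 10 class scores.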
  6. Instantiate the ConvNet and run the training for 50 epochs with batch_size=96:
# Train using classifier 
model = tflearn.DNN(network, tensorboard_verbose=0) 
model.fit(X, Y, n_epoch=50, shuffle=True, validation_set=(X_test, Y_test), 
          show_metric=True, batch_size=96, run_id='cifar10_cnn') 
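As a back-of-envelope check on how long this run is, assuming the standard 50,000-image CIFAR-10 training split, we can compute the number of gradient steps per epoch and overall:

```python
import math

train_images = 50000                 # standard CIFAR-10 training split
batch_size = 96
n_epoch = 50

steps_per_epoch = math.ceil(train_images / batch_size)
total_steps = steps_per_epoch * n_epoch
```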