Handwritten digits

Let's start with the typical Hello World of machine learning with images: the MNIST handwritten digit classification exercise.

The MNIST database we will use contains 60,000 images for training and another 10,000 for testing. It was originally collected by Chris Burges and Corinna Cortes and enhanced by Yann LeCun. You can find out more about the dataset on Yann LeCun's website (http://yann.lecun.com/exdb/mnist/).
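If you would like to sanity-check those counts yourself, here is a minimal sketch (assuming the TF 1.x tutorials helper, tensorflow.examples.tutorials.mnist, which is separate from the script we are about to modify):

    # Sketch only: the tutorials helper carves a 5,000-image validation
    # split out of the 60,000 training images.
    from tensorflow.examples.tutorials.mnist import input_data

    mnist = input_data.read_data_sets("/tmp/mnist_data")
    print(mnist.train.num_examples)       # 55000
    print(mnist.validation.num_examples)  # 5000
    print(mnist.test.num_examples)        # 10000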

TensorFlow conveniently ships with a test script demonstrating a convolutional neural network on the MNIST handwritten digit dataset, available at https://github.com/tensorflow/models/blob/master/tutorials/image/mnist/convolutional.py.

Let's modify this script to allow TensorBoard usage. If you wish to peek ahead, you can download a finished copy or view the diffs; our full set of changes is available on the book's GitHub repository (https://github.com/mlwithtf/mlwithtf).

For now, we recommend following along and making changes incrementally to understand the process.

Early on in the main body of the script, we will find the definitions of the convn_weights, convn_biases, and other parameters. Directly after those definitions, we will write the following code to add them to the histogram summaries:

    tf.summary.histogram('conv1_weights', conv1_weights) 
    tf.summary.histogram('conv1_biases', conv1_biases) 
    tf.summary.histogram('conv2_weights', conv2_weights) 
    tf.summary.histogram('conv2_biases', conv2_biases) 
    tf.summary.histogram('fc1_weights', fc1_weights) 
    tf.summary.histogram('fc1_biases', fc1_biases) 
    tf.summary.histogram('fc2_weights', fc2_weights) 
    tf.summary.histogram('fc2_biases', fc2_biases) 

The preceding lines capture the values for the HISTOGRAMS tab. Each summary appears as its own subsection of the tab, as shown in the following screenshot:
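For reference, the variables being summarized are created earlier in the script roughly as follows (a sketch of the tutorial's existing definitions, not new code; NUM_CHANNELS, SEED, and data_type() are the script's own names):

    # Sketch of the existing definitions in convolutional.py; the
    # histogram summaries above attach to variables like these.
    conv1_weights = tf.Variable(
        tf.truncated_normal([5, 5, NUM_CHANNELS, 32],  # 5x5 filters, depth 32.
                            stddev=0.1, seed=SEED, dtype=data_type()))
    conv1_biases = tf.Variable(tf.zeros([32], dtype=data_type()))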

Next, let's record some loss figures. We have the following code to start with:

    loss += 5e-4 * regularizers 

We will add a scalar summary for the loss figures after the preceding line:

    tf.summary.scalar("loss", loss) 

Similarly, we will start with the standard code calculating the learning_rate:

    learning_rate = tf.train.exponential_decay(
        0.01,                # Base learning rate.
        batch * BATCH_SIZE,  # Current index into the dataset.
        train_size,          # Decay step.
        0.95,                # Decay rate.
        staircase=True)
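As an aside on what this schedule computes: with staircase=True, the exponent is floored, so the rate drops by a factor of 0.95 once per full pass over the training data. A hand-rolled sketch of the same arithmetic (function and parameter names are ours, for illustration only):

    # Sketch of the decay arithmetic; with staircase=True the exponent
    # is floored, so the rate steps down once per epoch.
    def decayed_rate(current_index, train_size, base_rate=0.01,
                     decay_rate=0.95):
        return base_rate * decay_rate ** (current_index // train_size)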

We will add a scalar summary for the learning_rate figures as follows:

    tf.summary.scalar("learning_rate", learning_rate) 

Just these two preceding lines let us capture these two important scalar metrics on our EVENTS tab:

Finally, let's instruct our script to save the graph setup. Let's find the section of the script which creates the session:

    # Create a local session to run the training. 
    start_time = time.time() 
    with tf.Session() as sess: 

Just after defining the sess handle, we will capture the graph as follows:

    writer = tf.summary.FileWriter("/tmp/tensorlogs", sess.graph)
    merged = tf.summary.merge_all()
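With the writer pointed at /tmp/tensorlogs, you can later launch TensorBoard against that same directory (tensorboard --logdir=/tmp/tensorlogs) and browse to localhost:6006 to view the dashboards.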

We will need to add our merged object when running the session. We originally had the following code:

    l, lr, predictions = sess.run([loss, learning_rate,
                                   train_prediction], feed_dict=feed_dict)

We will extend that call with our merged object as follows:

    # Run the graph and fetch some of the nodes.
    sum_string, l, lr, predictions = sess.run(
        [merged, loss, learning_rate, train_prediction],
        feed_dict=feed_dict)

Finally, we will need to write summaries at specified steps, much as we periodically output validation set accuracy. So, we add one more line after sum_string is computed:

    writer.add_summary(sum_string, step) 
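In context, this call sits inside the training loop at the same cadence as the periodic evaluation; here is a sketch of the surrounding structure (step and EVAL_FREQUENCY come from the tutorial script's loop):

    # Sketch of where the summary write lands in the training loop.
    if step % EVAL_FREQUENCY == 0:
        sum_string, l, lr, predictions = sess.run(
            [merged, loss, learning_rate, train_prediction],
            feed_dict=feed_dict)
        writer.add_summary(sum_string, step)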

That is all! We have just captured our loss and learning rate, key intermediate parameters of our neural network, and the structure of the graph. We have already examined the EVENTS and HISTOGRAMS tabs; now, let's look at the GRAPH tab:
