How to do it...

Here is how we proceed with the recipe:

  1. The first step is, as always, importing the modules needed:
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
  2. We take the MNIST input data from the TensorFlow examples given in the module input_data. The one_hot flag is set to True to enable one-hot encoding of the labels. This results in two tensors, mnist.train.images of shape [55000, 784] and mnist.train.labels of shape [55000, 10]. Each entry of mnist.train.images is a pixel intensity with a value ranging between 0 and 1:
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
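As a quick optional sanity check (a minimal sketch, not part of the original listing), you can confirm the shapes and value range quoted above; the loaded images and labels are plain NumPy arrays:
print(mnist.train.images.shape) # (55000, 784)
print(mnist.train.labels.shape) # (55000, 10)
print(mnist.train.images.min(), mnist.train.images.max()) # intensities lie in [0, 1]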
  3. Create placeholders for the training dataset inputs x and labels y in the TensorFlow graph:
x = tf.placeholder(tf.float32, [None, 784], name='X')
y = tf.placeholder(tf.float32, [None, 10], name='Y')
  4. Create the learning variables: the weights and the bias:
W = tf.Variable(tf.zeros([784, 10]), name='W')
b = tf.Variable(tf.zeros([10]), name='b')
  5. Create the logistic regression model. We first compute the logits and then apply softmax to obtain the predicted probabilities y_hat. The TensorFlow OPs are given the name_scope("wx_b"):
with tf.name_scope("wx_b") as scope:
    logits = tf.matmul(x, W) + b
    y_hat = tf.nn.softmax(logits)
  6. Add summary OPs to collect data while training. We use histogram summaries so that we can see how the weights and the bias change over time. We will be able to see this in the TensorBoard Histograms tab:
w_h = tf.summary.histogram("weights", W)
b_h = tf.summary.histogram("biases", b)
  7. Define the cross-entropy loss function, and add a name scope and summary for better visualization. Note that tf.nn.softmax_cross_entropy_with_logits expects the unscaled logits, not the softmax output y_hat, so we pass it the logits from step 5. Here, we use a scalar summary to obtain the variation in the loss function over time; scalar summaries are visible under the TensorBoard Scalars tab:
with tf.name_scope('cross-entropy') as scope:
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
    tf.summary.scalar('cross-entropy', loss)
  8. Employ the TensorFlow GradientDescentOptimizer with a learning rate of 0.01. Again, for better visualization, we define a name_scope:
with tf.name_scope('Train') as scope:
    optimizer = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
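  9. Define the accuracy op that is evaluated on the test set at the end of training in step 12. A prediction counts as correct when the index of the largest predicted probability matches the index of the one-hot label:
correct_prediction = tf.equal(tf.argmax(y_hat, 1), tf.argmax(y, 1)) # compare predicted and true class indices
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # fraction of correct predictions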
  10. Declare the initializing op for the variables:
# Initializing the variables
init = tf.global_variables_initializer()
  11. We combine all the summary operations:
merged_summary_op = tf.summary.merge_all()
  12. Now, we define the training parameters and the session, and store the summaries in a defined folder. In each epoch we process the whole training set in mini-batches, average the loss over all batches, and finally evaluate the accuracy op from step 9 on the test set:
max_epochs = 100 # number of training epochs; the results below are reported up to 100 epochs
batch_size = 100 # mini-batch size, as used in the next_batch call
with tf.Session() as sess:
    sess.run(init) # initialize all variables
    summary_writer = tf.summary.FileWriter('graphs', sess.graph) # Create an event file
    # Training
    for epoch in range(max_epochs):
        loss_avg = 0
        num_of_batch = int(mnist.train.num_examples / batch_size)
        for i in range(num_of_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size) # get the next batch of data
            _, l, summary_str = sess.run([optimizer, loss, merged_summary_op], feed_dict={x: batch_xs, y: batch_ys}) # Run the optimizer
            loss_avg += l
            summary_writer.add_summary(summary_str, epoch * num_of_batch + i) # Add all summaries per batch
        loss_avg = loss_avg / num_of_batch
        print('Epoch {0}: Loss {1}'.format(epoch, loss_avg))
    print('Done')
    print(sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels}))
  13. We get an accuracy of 86.5 percent after 30 epochs, 89.36 percent after 50 epochs, and, after 100 epochs, the accuracy increases to 90.91 percent.
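To inspect the graph and the collected histogram and scalar summaries, point TensorBoard at the graphs folder used by the FileWriter in step 12 and open the reported URL (http://localhost:6006 by default) in a browser:
tensorboard --logdir=graphs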