Building the TensorFlow model

In this application, we will build a TensorFlow model based on the MNIST dataset and then use it in our Android application. Once we have the TensorFlow model, we will convert it into a TensorFlow Lite model. The step-by-step procedure for downloading the dataset and building the TensorFlow model is as follows.

Here is the architecture diagram showing how our model works. The way to achieve this is explained as follows:

Using TensorFlow, we can download the MNIST data with one line of Python code, as follows:

import tensorflow as tf 
from tensorflow.examples.tutorials.mnist import input_data
#Reading data
mnist = input_data.read_data_sets("./data/", one_hot=True)

The read_data_sets() call in the preceding code downloads the MNIST dataset into the ./data/ directory and reads it into memory, with the labels one-hot encoded.

Now, we can run the script from the console to download the dataset, as follows:

> python mnist.py 
Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Extracting MNIST_data/train-images-idx3-ubyte.gz
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
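Before moving on, it can help to inspect the dataset object that read_data_sets() returns. The following is a minimal sketch, assuming the download above completed successfully; the shapes correspond to the default split of 55,000 training images, 5,000 validation images, and 10,000 test images, with each image flattened into 784 pixel values:

#Inspect the dataset (a minimal sketch, assuming the download succeeded)
print(mnist.train.images.shape)   # (55000, 784) - 28 x 28 pixels flattened
print(mnist.train.labels.shape)   # (55000, 10)  - one-hot encoded digits
print(mnist.test.images.shape)    # (10000, 784)
#Fetch a mini-batch of images and their one-hot labels
batch_images, batch_labels = mnist.train.next_batch(100)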

Once we have the dataset ready, we will add a few variables that we will use in our application, as follows: 

image_size = 28 
labels_size = 10
learning_rate = 0.05
steps_number = 1000
batch_size = 100

We need to define these variables to control the parameters used while building the model, as required by the TensorFlow framework. The classification process itself is simple: a 28 x 28 image contains 784 pixels, so the network has a corresponding number of input units, and the output layer has one unit per digit class. Once we have the architecture set up, we will train the network and evaluate the results to understand the effectiveness and accuracy of the model.

Now, let's use the variables we added in the preceding code block. Depending on whether the model is in the training phase or the testing phase, different data will be passed through the classifier, so we define a placeholder for the input images. The training process also needs the labels so that it can match them against the current predictions. These are defined in the following placeholders:

 #Define placeholders 
training_data = tf.placeholder(tf.float32, [None, image_size*image_size])
labels = tf.placeholder(tf.float32, [None, labels_size])

The placeholders will be filled with data as the computation graph is evaluated. During the training process, we adjust the values of the weights and biases to increase the accuracy of our results. To achieve this, we will define the weight and bias parameters, as follows:

#Variables to be tuned 
W = tf.Variable(tf.truncated_normal([image_size*image_size, labels_size], stddev=0.1))
b = tf.Variable(tf.constant(0.1, shape=[labels_size]))

Once we have variables that can be tuned, we can move on to building the output layer in just one step:

#Build the network 
output = tf.matmul(training_data, W) + b

We have successfully built the output layer of the network with the training data.
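So far, only the forward pass has been built. As a rough sketch of how the remaining training and evaluation steps could look with the variables defined earlier (learning_rate, steps_number, and batch_size), we can attach a loss and an optimizer to the output layer. The specific choices below (softmax cross-entropy with plain gradient descent) are assumptions for illustration, not steps taken from this section:

#Define the loss and the training step (a sketch; loss and optimizer choices are assumptions)
loss = tf.losses.softmax_cross_entropy(onehot_labels=labels, logits=output)
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

#Accuracy: compare the predicted digit with the one-hot label
correct_prediction = tf.equal(tf.argmax(output, 1), tf.argmax(labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

#Run the training loop and evaluate on the test set
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(steps_number):
        batch_images, batch_labels = mnist.train.next_batch(batch_size)
        sess.run(train_step, feed_dict={training_data: batch_images, labels: batch_labels})
    test_accuracy = sess.run(accuracy, feed_dict={training_data: mnist.test.images, labels: mnist.test.labels})
    print("Test accuracy: %g" % test_accuracy)

With steps_number = 1000 and batch_size = 100, the loop feeds roughly 100,000 images through the network, which is just under two passes over the 55,000 training examples.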
