Operations

It's all well and good having variables in our graph, but we also want to do something with them. We can use TensorFlow ops to manipulate our variables.

As explained, our linear classifier is just a matrix multiply, so the first op you will use is, fittingly, the matrix multiply op. Simply call tf.matmul() on the two Tensors you want to multiply together, and the result will be the matrix multiplication of those two Tensors. Simple!
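For example, here is a minimal sketch of tf.matmul in action; the tensor values are purely illustrative:

import tensorflow as tf

# A [2, 2] matrix multiplied by a [2, 1] matrix gives a [2, 1] result.
a = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
b = tf.constant([[5.0],
                 [6.0]])
product = tf.matmul(a, b)

with tf.Session() as sess:
    print(sess.run(product))  # [[17.], [39.]]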

Throughout this book, you will learn about many different TensorFlow ops that you will need to use.

Now that you hopefully have some understanding of variables and ops, let's construct our linear model. We'll define our model within a function. The function will take as input N of our feature vectors or, to be more precise, a batch of size N. As our feature vectors are of length 4, our batch will be a Tensor of shape [N, 4]. The function will then return the output of our linear model. In the following code, we have written our linear model function; it should be self-explanatory, but keep reading if you have not completely understood it yet.

import tensorflow as tf

def linear_model(input):
    # Create variables for our weights and biases.
    my_weights = tf.get_variable(name="weights", shape=[4, 3])
    my_bias = tf.get_variable(name="bias", shape=[3])

    # Create a linear classifier: multiply the input batch by the
    # weights, then add the bias vector to each row of the result.
    linear_layer = tf.matmul(input, my_weights)
    linear_layer_out = tf.nn.bias_add(value=linear_layer, bias=my_bias)
    return linear_layer_out

In this code, we create variables that will store our weights and biases. We give them names and supply the required shapes. Remember, we are using variables because we want to manipulate their values using operations.

Next, we create a tf.matmul node that takes as arguments our input feature matrix and our weight matrix. The result of this op can be accessed through our linear_layer Python variable. This result is then passed to another op, tf.nn.bias_add. This op comes from the NN (neural network) module and is used when we wish to add a bias vector to the result of a calculation. The bias has to be a one-dimensional Tensor.
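To tie this together, here is a sketch of how linear_model might be wired up and evaluated; the placeholder name and the example batch values are assumptions for illustration, not part of the original code:

import tensorflow as tf

# An [N, 4] placeholder: N feature vectors, each of length 4.
x = tf.placeholder(tf.float32, shape=[None, 4], name="input_features")
model_out = linear_model(x)  # an [N, 3] Tensor

with tf.Session() as sess:
    # Variables must be initialized before they can be read.
    sess.run(tf.global_variables_initializer())
    batch = [[5.1, 3.5, 1.4, 0.2],
             [6.7, 3.1, 4.4, 1.4]]  # N = 2, values are illustrative
    print(sess.run(model_out, feed_dict={x: batch}))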
