Getting ready

In the backpropagation algorithm recipe, we defined the layers, weights, loss, gradients, and the gradient-based weight updates manually. Working through the equations by hand is a good exercise for understanding, but it quickly becomes cumbersome as the number of layers in the network increases.

In this recipe, we will use powerful TensorFlow features such as Contrib (Layers) to define neural network layers, and TensorFlow's own optimizer to compute and apply the gradients. We saw in Chapter 2, Regression, how to use different TensorFlow optimizers. Contrib can be used to add various layers to the neural network model, like building blocks. The method we use here is tf.contrib.layers.fully_connected, defined in the TensorFlow documentation as follows:

fully_connected(
    inputs,
    num_outputs,
    activation_fn=tf.nn.relu,
    normalizer_fn=None,
    normalizer_params=None,
    weights_initializer=initializers.xavier_initializer(),
    weights_regularizer=None,
    biases_initializer=tf.zeros_initializer(),
    biases_regularizer=None,
    reuse=None,
    variables_collections=None,
    outputs_collections=None,
    trainable=True,
    scope=None
)

This adds a fully connected layer.

fully_connected creates a variable called weights, representing a fully connected weight matrix, which is multiplied by the inputs to produce a tensor of hidden units. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided, a biases variable is created and added to the hidden units. Finally, if activation_fn is not None, it is applied to the hidden units as well.
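To see how this fits together, here is a minimal sketch of a one-hidden-layer network built with fully_connected and trained with one of TensorFlow's optimizers. It assumes TensorFlow 1.x, where tf.contrib is available; the placeholder names, layer sizes, and learning rate are illustrative choices, not values from the recipe:

import tensorflow as tf

# Illustrative dimensions: 10 input features, 30 hidden units, 1 output.
n_input, n_hidden, n_output = 10, 30, 1

x = tf.placeholder(tf.float32, [None, n_input], name='x')
y = tf.placeholder(tf.float32, [None, n_output], name='y')

# Hidden layer: the weights (Xavier-initialized) and biases variables
# are created for us, and ReLU is applied by default via activation_fn.
hidden = tf.contrib.layers.fully_connected(x, n_hidden)

# Output layer: pass activation_fn=None to get a linear output.
y_hat = tf.contrib.layers.fully_connected(hidden, n_output,
                                          activation_fn=None)

# Mean squared error loss.
loss = tf.reduce_mean(tf.square(y - y_hat))

# TensorFlow's optimizer computes and applies the gradients for us,
# replacing the manual backpropagation of the previous recipe.
train_op = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(loss)

Running train_op in a session then performs one full step of forward pass, gradient computation, and weight update, with no hand-written gradient equations.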