Two-layered neural network

Like MAML, Reptile is compatible with any model that can be trained with gradient descent, so here we use a simple two-layered neural network with 64 hidden units.

First, let's reset the TensorFlow graph:

import tensorflow as tf

tf.reset_default_graph()

We define the network's dimensions: the number of hidden units, output classes, and input features:

num_hidden = 64
num_classes = 1
num_feature = 1

Next, we define the placeholders for our input and output:

X = tf.placeholder(tf.float32, shape=[None, num_feature])
Y = tf.placeholder(tf.float32, shape=[None, num_classes])

We randomly initialize our model parameters:

w1 = tf.Variable(tf.random_uniform([num_feature, num_hidden]))
b1 = tf.Variable(tf.random_uniform([num_hidden]))

w2 = tf.Variable(tf.random_uniform([num_hidden, num_classes]))
b2 = tf.Variable(tf.random_uniform([num_classes]))

Then, we perform the feedforward operation to predict the output, Yhat:

#layer 1
z1 = tf.matmul(X, w1) + b1
a1 = tf.nn.tanh(z1)

#output layer
z2 = tf.matmul(a1, w2) + b2
Yhat = tf.nn.tanh(z2)

We use mean squared error as our loss function:

loss_function = tf.reduce_mean(tf.square(Yhat - Y))

We then minimize the loss using the Adam optimizer:

optimizer = tf.train.AdamOptimizer(1e-2).minimize(loss_function)

Finally, we define the op for initializing all of the TensorFlow variables:

init = tf.global_variables_initializer()
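Before moving on to the Reptile training procedure, we can sanity-check the graph by fitting it to a single toy batch inside a session. The sine-wave batch below is only an illustration (it is not the chapter's task-sampling code), and the number of steps is arbitrary:

import numpy as np

# illustrative toy batch: 50 points from a plain sine wave (an assumption
# made here for the sanity check, not the chapter's sampled tasks)
x_sample = np.random.uniform(-5, 5, size=(50, 1))
y_sample = np.sin(x_sample)

with tf.Session() as sess:
    sess.run(init)
    # run a few gradient steps on this one batch and watch the loss fall
    for step in range(100):
        _, loss = sess.run([optimizer, loss_function],
                           feed_dict={X: x_sample, Y: y_sample})
    print('loss after 100 steps:', loss)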