Dueling network

Now, we build our dueling DQN. It consists of three convolutional layers followed by two fully connected layers, and the final fully connected layer is split into two separate streams: one for the state value and one for the advantage. We then use an aggregate layer, which combines the value stream and the advantage stream, to compute the Q value. The dimensions of these layers are given as follows:

  • Layer 1: 32 8x8 filters with stride 4 + ReLU
  • Layer 2: 64 4x4 filters with stride 2 + ReLU
  • Layer 3: 64 3x3 filters with stride 1 + ReLU
  • Layer 4a: 512-unit fully connected layer + ReLU
  • Layer 4b: 512-unit fully connected layer + ReLU
  • Layer 5a: 1-unit fully connected layer, linear (state value)
  • Layer 5b: fully connected layer with one unit per action, linear (advantage value)
  • Layer 6: Aggregate layer, Q(s,a) = V(s) + (A(s,a) - mean of A(s,a))
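The flattened size of 7*7*64 used in the code that follows assumes the standard 84 x 84 x 4 preprocessed Atari input; with 'VALID' padding, we can verify the spatial dimensions of each convolutional layer with a quick sketch:

```python
def conv_output_size(size, kernel, stride):
    # 'VALID' padding: floor((size - kernel) / stride) + 1
    return (size - kernel) // stride + 1

size = 84  # assumed 84x84 preprocessed Atari frame
for kernel, stride in [(8, 4), (4, 2), (3, 1)]:
    size = conv_output_size(size, kernel, stride)
    print(size)
# 84 -> 20 -> 9 -> 7, so the flattened output is 7*7*64 = 3136 units
```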
class QNetworkDueling(QNetwork):

We define the __init__ method to initialize all layers:


def __init__(self, input_size, output_size, name):
    self.name = name
    self.input_size = input_size
    self.output_size = output_size
    with tf.variable_scope(self.name):

        # Three convolutional layers
        self.W_conv1 = self.weight_variable([8, 8, 4, 32])
        self.B_conv1 = self.bias_variable([32])
        self.stride1 = 4

        self.W_conv2 = self.weight_variable([4, 4, 32, 64])
        self.B_conv2 = self.bias_variable([64])
        self.stride2 = 2

        self.W_conv3 = self.weight_variable([3, 3, 64, 64])
        self.B_conv3 = self.bias_variable([64])
        self.stride3 = 1

        # Two fully connected layers, one for each stream
        self.W_fc4a = self.weight_variable([7*7*64, 512])
        self.B_fc4a = self.bias_variable([512])

        self.W_fc4b = self.weight_variable([7*7*64, 512])
        self.B_fc4b = self.bias_variable([512])

        # Value stream
        self.W_fc5a = self.weight_variable([512, 1])
        self.B_fc5a = self.bias_variable([1])

        # Advantage stream
        self.W_fc5b = self.weight_variable([512, self.output_size])
        self.B_fc5b = self.bias_variable([self.output_size])
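The weight_variable and bias_variable helpers are inherited from the QNetwork base class and are not shown here. As a rough, hypothetical NumPy sketch of what such DQN initializers typically do (a clipped normal as a stand-in for TensorFlow's truncated normal, plus small constant biases):

```python
import numpy as np

def weight_variable(shape, stddev=0.01):
    # Stand-in for a truncated normal: clip samples to two standard
    # deviations so initial activations stay small and centered
    w = np.random.normal(0.0, stddev, size=shape)
    return np.clip(w, -2 * stddev, 2 * stddev)

def bias_variable(shape, value=0.01):
    # Small positive bias helps avoid dead ReLU units early in training
    return np.full(shape, value)
```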


Next, we define the __call__ method, which performs the forward pass through the network:


def __call__(self, input_tensor):
    if isinstance(input_tensor, list):
        input_tensor = tf.concat(input_tensor, 1)

    with tf.variable_scope(self.name):

        # Apply the three convolutional layers
        self.h_conv1 = tf.nn.relu(tf.nn.conv2d(input_tensor, self.W_conv1, strides=[1, self.stride1, self.stride1, 1], padding='VALID') + self.B_conv1)

        self.h_conv2 = tf.nn.relu(tf.nn.conv2d(self.h_conv1, self.W_conv2, strides=[1, self.stride2, self.stride2, 1], padding='VALID') + self.B_conv2)

        self.h_conv3 = tf.nn.relu(tf.nn.conv2d(self.h_conv2, self.W_conv3, strides=[1, self.stride3, self.stride3, 1], padding='VALID') + self.B_conv3)

        # Flatten the convolutional output
        self.h_conv3_flat = tf.reshape(self.h_conv3, [-1, 7*7*64])

        # Fully connected layers, one for each stream
        self.h_fc4a = tf.nn.relu(tf.matmul(self.h_conv3_flat, self.W_fc4a) + self.B_fc4a)

        self.h_fc4b = tf.nn.relu(tf.matmul(self.h_conv3_flat, self.W_fc4b) + self.B_fc4b)

        # Compute the value stream and the advantage stream (linear outputs)
        self.h_fc5a_value = tf.matmul(self.h_fc4a, self.W_fc5a) + self.B_fc5a

        self.h_fc5b_advantage = tf.matmul(self.h_fc4b, self.W_fc5b) + self.B_fc5b

        # Combine the value and advantage streams:
        # Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a))
        self.h_fc6 = self.h_fc5a_value + (self.h_fc5b_advantage - tf.reduce_mean(self.h_fc5b_advantage, axis=[1], keepdims=True))

    return self.h_fc6
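The aggregate layer subtracts the mean advantage so that the value and advantage streams are identifiable. A small NumPy sketch of the same formula, using made-up values:

```python
import numpy as np

value = np.array([[2.0]])                 # V(s), shape (batch, 1)
advantage = np.array([[1.0, -1.0, 3.0]])  # A(s, a), shape (batch, actions)

# Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a)), as in h_fc6 above
q = value + (advantage - advantage.mean(axis=1, keepdims=True))
print(q)  # [[2. 0. 4.]]
```

Subtracting the mean shifts every Q value in a state by the same constant, so the greedy action is unchanged, while the advantages are forced to average to zero and the mean Q value equals V(s).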