Convolution layers

For one-dimensional convolutional layers, we can use keras.layers.Conv1D. We will need MaxPooling1D layers to go along with our Conv1D layers, as shown in the following code:

x = Conv1D(128, 5, activation='relu')(embedding_layer)
x = MaxPooling1D(5)(x)
x = Conv1D(128, 5, activation='relu')(x)
x = MaxPooling1D(5)(x)
x = Conv1D(128, 5, activation='relu')(x)
x = GlobalMaxPooling1D()(x)

For the Conv1D layers, the first integer argument is the number of filters and the second is the kernel size. Our filter slides along only one dimension, hence the name 1D convolution. Our window size in the preceding example is 5.

The MaxPooling1D layers that I'm using also have a pool size of 5. The same rules apply to pooling layers in a 1D implementation as in the 2D case.

After the last convolutional layer, we apply the GlobalMaxPooling1D layer. This layer is a special implementation of max pooling that will take the output of the last Conv1D layer, a [batch x 35 x 128] tensor, and pool it across time steps to [batch x 128]. This is commonly done in NLP networks and is similar in intent to the use of the Flatten() layer in image-based convolutional networks. This layer serves as the bridge between the convolutional layers and the dense layers.
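To see where the 35 time steps come from, we can trace the sequence length through each layer by hand. With the Keras defaults ('valid' padding, stride 1 for Conv1D, and stride equal to pool size for MaxPooling1D), each Conv1D with kernel size 5 shortens the sequence by 4, and each MaxPooling1D(5) divides it by 5 (rounding down). The sketch below assumes an input sequence length of 1000 coming out of the embedding layer; that length is an assumption for illustration, not stated in the code above:

```python
def conv1d_len(n, kernel_size):
    # Conv1D with 'valid' padding and stride 1 shortens the
    # sequence by kernel_size - 1 time steps
    return n - kernel_size + 1

def pool1d_len(n, pool_size):
    # MaxPooling1D with default stride == pool_size and
    # 'valid' padding divides the length, rounding down
    return n // pool_size

n = 1000                 # assumed input sequence length (hypothetical)
n = conv1d_len(n, 5)     # after first Conv1D:  996
n = pool1d_len(n, 5)     # after MaxPooling1D:  199
n = conv1d_len(n, 5)     # after second Conv1D: 195
n = pool1d_len(n, 5)     # after MaxPooling1D:   39
n = conv1d_len(n, 5)     # after third Conv1D:   35
print(n)  # 35 time steps, each a vector of 128 filter activations
```

GlobalMaxPooling1D then takes the maximum over those 35 time steps for each of the 128 filters, collapsing the [batch x 35 x 128] tensor to [batch x 128].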
