A layer of convolutions

We've previously talked about a layer of a deep neural network consisting of multiple units (which we've been calling neurons), each computing a linear function combined with some nonlinearity such as ReLU. In a convolutional layer, each unit is instead a filter, combined with a nonlinearity. For example, a convolutional layer might be defined in Keras as follows:

from keras.layers import Conv2D

# 64 filters, each 3 x 3 spatially, with ReLU applied to every output
Conv2D(64, kernel_size=(3,3), activation="relu", name="conv_1")

In this layer, there are 64 separate units, each a 3 x 3 x 3 filter: 3 x 3 spatially, spanning the 3 input channels (the filter depth always matches the number of channels in the layer's input). After the convolution operation is done, each unit adds a bias and applies a nonlinearity to the output, just as we did in traditional fully connected layers (more on that term in just a moment).
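To make the per-unit computation concrete, here is a minimal NumPy sketch of what a single filter does: slide a 3 x 3 x 3 kernel over the image, take a weighted sum at each position, add the bias, and apply ReLU. The function name, the random kernel, and the bias value are illustrative, not part of any library API.

```python
import numpy as np

def unit_output(image, kernel, bias):
    """One conv unit: slide a 3 x 3 x 3 kernel over an H x W x 3 image,
    add a bias, and apply ReLU. No padding, stride 1."""
    h, w, _ = image.shape
    out = np.empty((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = image[i:i + 3, j:j + 3, :]      # 3 x 3 x 3 window
            out[i, j] = np.sum(patch * kernel) + bias
    return np.maximum(out, 0.0)                     # ReLU

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32, 3))
k = rng.standard_normal((3, 3, 3))
print(unit_output(img, k, bias=0.1).shape)  # (30, 30)
```

A convolutional layer simply runs 64 such units in parallel, each with its own kernel and bias, and stacks their 30 x 30 outputs.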

Before moving on, let's quickly walk through the dimensionality of an example, just so I'm sure we're all on the same page. Imagine we have an input image that is 32 x 32 x 3. We now convolve it with the above convolutional layer. Each 3 x 3 filter fits into a 32 x 32 input at 30 x 30 positions (there is no padding, so the filter can't hang off the edges), and the layer contains 64 filters, so the output is 30 x 30 x 64. Each filter outputs a single 30 x 30 matrix.
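We can check this shape arithmetic directly in Keras. The sketch below assumes TensorFlow's bundled Keras (the standalone `keras` import used above works the same way); the zero-filled input is just a stand-in for a real image.

```python
import numpy as np
from tensorflow.keras.layers import Conv2D

# The layer from the text: 64 filters, each 3 x 3 (x 3 once it sees RGB input).
layer = Conv2D(64, kernel_size=(3, 3), activation="relu", name="conv_1")

# A batch containing one 32 x 32 RGB image.
x = np.zeros((1, 32, 32, 3), dtype="float32")
y = layer(x)

print(y.shape)  # (1, 30, 30, 64): each filter yields one 30 x 30 map
# Each filter has 3*3*3 = 27 weights plus one bias: 64 * 28 = 1792 parameters.
print(layer.count_params())  # 1792
```

The parameter count is worth noticing: it depends only on the filter sizes, not on the 32 x 32 spatial extent of the input, which is a large part of why convolutional layers are so much cheaper than fully connected ones.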
