Fully-connected layers and output

The fully-connected layers are where we map our input (the flattened rows produced by convolving, max-pooling, and flattening our original extracted features) to our target class or classes. Here, each input is connected to every neuron, or node, in the following layer. The strength of these connections, or weights, and a bias term present in each node of the network are parameters of the model, optimized throughout the training process to minimize an objective function.
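
To make the parameter bookkeeping concrete, here is a minimal sketch of a fully-connected stack, written with Keras purely for illustration; the 1,568-element input and the 128- and 64-node layer sizes are assumptions, not values from our architecture. Each dense layer holds one weight per input-to-node connection plus one bias per node, and all of these are learned during training.

    # Illustrative sketch only: Keras is an assumed framework, and the layer
    # sizes below are hypothetical, not taken from the chapter's architecture.
    from tensorflow import keras
    from tensorflow.keras import layers

    model = keras.Sequential([
        layers.Input(shape=(1568,)),           # flattened features from conv + pooling
        layers.Dense(128, activation="relu"),  # 1568 * 128 weights + 128 biases
        layers.Dense(64, activation="relu"),   # 128 * 64 weights + 64 biases
    ])

    model.summary()  # prints the trainable parameter counts optimized during training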

The final layer of our model will be our output layer, which gives us our model predictions. The number of neurons in our output layer and the activation function we apply to it are determined by the kind of problem we're trying to solve: regression, binary classification, or multi-class classification. We'll see exactly how to set up the fully-connected and output layers for a multi-class classification task when we start working with the Zalando Research fashion dataset in the next section.
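
As a rough illustration of how the problem type drives the output layer (again using Keras as an assumed framework, with a hypothetical 10-class case standing in for the fashion dataset's categories):

    from tensorflow.keras import layers

    # Regression: a single neuron with the default linear activation
    regression_output = layers.Dense(1)

    # Binary classification: a single neuron squashed to a probability with a sigmoid
    binary_output = layers.Dense(1, activation="sigmoid")

    # Multi-class classification: one neuron per class, normalized to probabilities with softmax
    multiclass_output = layers.Dense(10, activation="softmax")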

The fully-connected layers and output (that is, the feedforward neural network component of our architecture) are a distinct type of neural network from the convolutional networks we discussed in this section. We briefly described how feedforward networks work here only to provide color on how the classifier component of our architecture works. You can always replace this portion of the architecture with a classifier you are more familiar with, such as logistic regression.
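
For instance, here is a rough sketch of that kind of swap, with the Keras feature extractor, the layer shapes, and scikit-learn's LogisticRegression all chosen as assumptions for illustration: the convolutional portion produces flattened features, and a logistic regression model takes the place of the fully-connected classifier.

    # Hedged sketch: the conv layers act as a feature extractor and logistic
    # regression replaces the fully-connected head. All shapes and data below
    # are placeholders, not the chapter's actual model or dataset.
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers
    from sklearn.linear_model import LogisticRegression

    feature_extractor = keras.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(16, kernel_size=3, activation="relu"),
        layers.MaxPooling2D(pool_size=2),
        layers.Flatten(),
    ])

    # Placeholder data standing in for your images and integer class labels
    X_train = np.random.rand(100, 28, 28, 1)
    y_train = np.random.randint(0, 10, size=100)

    features = feature_extractor.predict(X_train)                    # flattened conv features
    clf = LogisticRegression(max_iter=1000).fit(features, y_train)   # classify with logistic regression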

With this fundamental knowledge, you're now ready to build your network!
