Constructing the layers

We can think of our layer structure as having four parts: three convolutional layers and one fully connected layer. Our first two layers are very similar; they follow the convolution-ReLU-MaxPool-dropout structure that we've described previously:

// Layer 0
if c0, err = gorgonia.Conv2d(x, m.w0, tensor.Shape{5, 5}, []int{1, 1}, []int{1, 1}, []int{1, 1}); err != nil {
    return errors.Wrap(err, "Layer 0 Convolution failed")
}
if a0, err = gorgonia.Rectify(c0); err != nil {
    return errors.Wrap(err, "Layer 0 activation failed")
}
if p0, err = gorgonia.MaxPool2D(a0, tensor.Shape{2, 2}, []int{0, 0}, []int{2, 2}); err != nil {
    return errors.Wrap(err, "Layer 0 Maxpooling failed")
}
if l0, err = gorgonia.Dropout(p0, m.d0); err != nil {
    return errors.Wrap(err, "Unable to apply a dropout to layer 0")
}
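It is worth pausing on what these hyperparameters do to the tensor's shape: with a 5x5 kernel, padding of 1, and stride of 1, the convolution shrinks each spatial dimension by 2, and the 2x2 max-pool with stride 2 then halves it. As a quick sanity check, the standard output-size formula can be sketched in plain Go (the helper name and the 28x28 input are illustrative assumptions of ours, not part of the model above):

```go
package main

import "fmt"

// outSize computes the output spatial dimension of a convolution or
// pooling op: floor((in + 2*pad - kernel) / stride) + 1.
func outSize(in, kernel, pad, stride int) int {
	return (in + 2*pad - kernel) / stride + 1
}

func main() {
	// A hypothetical 28x28 input passing through layer 0:
	h := outSize(28, 5, 1, 1) // 5x5 conv, pad 1, stride 1 -> 26
	h = outSize(h, 2, 0, 2)   // 2x2 max-pool, stride 2 -> 13
	fmt.Println(h)            // 13
}
```

The same arithmetic applies at every layer, which is useful when sizing the weight tensors.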

The next layer is similar; we just need to wire it up to the output of the previous one:

// Layer 1
if c1, err = gorgonia.Conv2d(l0, m.w1, tensor.Shape{5, 5}, []int{1, 1}, []int{1, 1}, []int{1, 1}); err != nil {
    return errors.Wrap(err, "Layer 1 Convolution failed")
}
if a1, err = gorgonia.Rectify(c1); err != nil {
    return errors.Wrap(err, "Layer 1 activation failed")
}
if p1, err = gorgonia.MaxPool2D(a1, tensor.Shape{2, 2}, []int{0, 0}, []int{2, 2}); err != nil {
    return errors.Wrap(err, "Layer 1 Maxpooling failed")
}
if l1, err = gorgonia.Dropout(p1, m.d1); err != nil {
    return errors.Wrap(err, "Unable to apply a dropout to layer 1")
}

The third layer is essentially the same, but with a small change at the end to prepare its output for the fully connected layer:

// Layer 2
if c2, err = gorgonia.Conv2d(l1, m.w2, tensor.Shape{5, 5}, []int{1, 1}, []int{1, 1}, []int{1, 1}); err != nil {
    return errors.Wrap(err, "Layer 2 Convolution failed")
}
if a2, err = gorgonia.Rectify(c2); err != nil {
    return errors.Wrap(err, "Layer 2 activation failed")
}
if p2, err = gorgonia.MaxPool2D(a2, tensor.Shape{2, 2}, []int{0, 0}, []int{2, 2}); err != nil {
    return errors.Wrap(err, "Layer 2 Maxpooling failed")
}

var r2 *gorgonia.Node
b, c, h, w := p2.Shape()[0], p2.Shape()[1], p2.Shape()[2], p2.Shape()[3]
if r2, err = gorgonia.Reshape(p2, tensor.Shape{b, c * h * w}); err != nil {
    return errors.Wrap(err, "Unable to reshape layer 2")
}
if l2, err = gorgonia.Dropout(r2, m.d2); err != nil {
    return errors.Wrap(err, "Unable to apply a dropout on layer 2")
}
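The Reshape above is the key difference: it keeps the batch dimension and collapses channels, height, and width into a single axis, so the 4D pooling output becomes a matrix that can be multiplied with the fully connected weights. A minimal sketch of that shape arithmetic (the concrete values are illustrative assumptions, not the model's real shapes):

```go
package main

import "fmt"

// flatShape collapses a (batch, channels, height, width) shape into
// (batch, channels*height*width), mirroring what the Reshape above
// does before the fully connected layer.
func flatShape(b, c, h, w int) []int {
	return []int{b, c * h * w}
}

func main() {
	// Hypothetical output of the final max-pool: a batch of 32,
	// 128 channels, 3x3 spatial dimensions.
	fmt.Println(flatShape(32, 128, 3, 3)) // [32 1152]
}
```

The second dimension of this flattened shape must match the first dimension of w3, which is why the log.Printf calls in the next snippet are handy while debugging.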

Layer 3 is something we're already very familiar with: the fully connected layer. Here it has a fairly simple structure, though we could certainly add more tiers to it (as many architectures have done, with varying degrees of success). The layer is demonstrated in the following code:

// Layer 3
log.Printf("l2 shape %v", l2.Shape())
log.Printf("w3 shape %v", m.w3.Shape())
if fc, err = gorgonia.Mul(l2, m.w3); err != nil {
    return errors.Wrapf(err, "Unable to multiply l2 and w3")
}
if a3, err = gorgonia.Rectify(fc); err != nil {
    return errors.Wrapf(err, "Unable to activate fc")
}
if l3, err = gorgonia.Dropout(a3, m.d3); err != nil {
    return errors.Wrapf(err, "Unable to apply a dropout on layer 3")
}