Visualizing the weights in the intermediate layers 

Now, let's visualize the weights learned in the intermediate layers. The following Python code visualizes the weights learned by the first 200 hidden units in the first dense layer; each hidden unit's 784 incoming weights are reshaped into a 28 x 28 image:

from keras.models import Model
import matplotlib.pylab as pylab
import numpy as np

# fetch the weight matrix and bias vector of the first dense layer
W = model.get_layer('dense_1').get_weights()
print(W[0].shape)  # (784, 200): one 784-dimensional weight vector per hidden unit
print(W[1].shape)  # (200,): one bias per hidden unit

fig = pylab.figure(figsize=(20,20))
fig.subplots_adjust(left=0, right=1, bottom=0, top=0.95, hspace=0.05, wspace=0.05)
pylab.gray()
# reshape each hidden unit's incoming weights into a 28 x 28 image and plot it
for i in range(200):
    pylab.subplot(15, 14, i+1)
    pylab.imshow(np.reshape(W[0][:, i], (28,28)))
    pylab.axis('off')
pylab.suptitle('Dense_1 Weights (200 hidden units)', size=20)
pylab.show()

This results in the following output:

The following screenshot shows what the neural network sees at the output layer; the code to produce it is left as an exercise to the reader, although a possible starting sketch is given below:
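As a starting point for that exercise, here is a minimal sketch. It assumes the output dense layer is named 'dense_2' and has 10 units, one per digit class, and it visualizes each class by projecting the output-layer weights back onto the 784-dimensional input space through the first layer's weight matrix (ignoring the nonlinearity in between), so the images are only an approximation of what each output unit responds to:

import matplotlib.pylab as pylab
import numpy as np

# weight matrices of the two dense layers (layer names are assumptions)
W1 = model.get_layer('dense_1').get_weights()[0]  # shape (784, 200)
W2 = model.get_layer('dense_2').get_weights()[0]  # shape (200, 10)

# project each output unit's weights back to the input space
proj = np.dot(W1, W2)  # shape (784, 10)

fig = pylab.figure(figsize=(15,6))
fig.subplots_adjust(left=0, right=1, bottom=0, top=0.9, hspace=0.05, wspace=0.05)
pylab.gray()
# one 28 x 28 image per output class
for i in range(10):
    pylab.subplot(2, 5, i+1)
    pylab.imshow(np.reshape(proj[:, i], (28,28)))
    pylab.axis('off')
pylab.suptitle('Output layer weights projected onto the input space', size=20)
pylab.show()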
