Implementation of a sequential network

Now, let's implement one final class that combines multiple dense layer and softmax layer objects into a single coherent feed-forward sequential neural network. This will be implemented as another class, which will subsume the other classes. Let's start by writing the constructor. We can set the maximum batch size here, which determines how much memory is allocated for this network: the allocated input/output buffers for each layer are stored in the list variable network_mem. We will also store the DenseLayer and SoftmaxLayer objects in the list network, and information about each layer in the NN in network_summary. Notice that we can also set up some training parameters here, including delta, the number of streams to use for gradient descent (we'll see this later), and the number of training epochs.

We can also see one other input at the beginning called layers. Here, we can specify the architecture of the NN by describing each layer; the constructor will create the layers by iterating through each element of layers and calling the add_layer method, which we will implement next:

# These imports are assumed to be in scope from earlier in the chapter:
import numpy as np
import pycuda.autoinit
from pycuda import gpuarray
import pycuda.driver as drv

class SequentialNetwork:
    def __init__(self, layers=None, delta=None, stream=None, max_batch_size=32, max_streams=10, epochs=10):

        self.network = []
        self.network_summary = []
        self.network_mem = []

        # Use the given CUDA stream, or create a new one.
        if stream is not None:
            self.stream = stream
        else:
            self.stream = drv.Stream()

        if delta is None:
            delta = 0.0001

        self.delta = delta
        self.max_batch_size = max_batch_size
        self.max_streams = max_streams
        self.epochs = epochs

        # Build the network from the given layer descriptions, if any.
        if layers is not None:
            for layer in layers:
                self.add_layer(layer)
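
Before moving on, note that passing layers to the constructor is just a convenience. In the following hypothetical sketch, layer_dicts stands for a list of the layer-description dictionaries we will define for add_layer shortly; the two ways of building a network are equivalent:

# build the network at construction time...
net = SequentialNetwork(layers=layer_dicts, max_batch_size=32)

# ...or add each layer one at a time:
net = SequentialNetwork(max_batch_size=32)
for layer in layer_dicts:
    net.add_layer(layer)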

Now, let's implement the add_layer method. We will use a dictionary to pass all of the relevant information about a layer to the sequential network, including the type of layer (dense, softmax, and so on), the number of inputs and outputs, and the weights and biases. This method appends the appropriate object and information to the object's network and network_summary lists, and allocates the appropriate gpuarray objects in the network_mem list:

def add_layer(self, layer):   # a method of SequentialNetwork
    if layer['type'] == 'dense':
        # The first layer takes its input size from the dictionary;
        # later layers take it from the previous layer's output size.
        if len(self.network) == 0:
            num_inputs = layer['num_inputs']
        else:
            num_inputs = self.network_summary[-1][2]

        num_outputs = layer['num_outputs']
        sigmoid = layer['sigmoid']
        relu = layer['relu']
        weights = layer['weights']
        b = layer['bias']

        self.network.append(DenseLayer(num_inputs=num_inputs, num_outputs=num_outputs, sigmoid=sigmoid, relu=relu, weights=weights, b=b))
        self.network_summary.append(('dense', num_inputs, num_outputs))

        # Allocate an input buffer if this is the first layer, then an
        # output buffer for this layer.
        if self.max_batch_size > 1:
            if len(self.network_mem) == 0:
                self.network_mem.append(gpuarray.empty((self.max_batch_size, self.network_summary[-1][1]), dtype=np.float32))
            self.network_mem.append(gpuarray.empty((self.max_batch_size, self.network_summary[-1][2]), dtype=np.float32))
        else:
            if len(self.network_mem) == 0:
                self.network_mem.append(gpuarray.empty((self.network_summary[-1][1],), dtype=np.float32))
            self.network_mem.append(gpuarray.empty((self.network_summary[-1][2],), dtype=np.float32))

    elif layer['type'] == 'softmax':

        if len(self.network) == 0:
            raise Exception("Error! Softmax layer can't be first!")

        if self.network_summary[-1][0] != 'dense':
            raise Exception("Error! Need a dense layer before a softmax layer!")

        # Softmax preserves the size of the preceding dense layer's output.
        num = self.network_summary[-1][2]
        self.network.append(SoftmaxLayer(num=num))
        self.network_summary.append(('softmax', num, num))

        if self.max_batch_size > 1:
            self.network_mem.append(gpuarray.empty((self.max_batch_size, self.network_summary[-1][2]), dtype=np.float32))
        else:
            self.network_mem.append(gpuarray.empty((self.network_summary[-1][2],), dtype=np.float32))
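
To see how this fits together, here is a brief usage sketch: a small network with two dense layers followed by a softmax layer. The dictionary keys match those read by add_layer above; passing None for the weights and bias assumes the DenseLayer constructor from earlier falls back to random initialization:

# Usage sketch (assumes the DenseLayer and SoftmaxLayer classes from the
# previous sections): 2 inputs -> 10 ReLU units -> 2 outputs -> softmax.
net = SequentialNetwork(max_batch_size=32)
net.add_layer({'type': 'dense', 'num_inputs': 2, 'num_outputs': 10, 'relu': True, 'sigmoid': False, 'weights': None, 'bias': None})
net.add_layer({'type': 'dense', 'num_inputs': 10, 'num_outputs': 2, 'relu': True, 'sigmoid': False, 'weights': None, 'bias': None})
net.add_layer({'type': 'softmax'})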