Generating music

It's time for the real fun! Let's generate some instrumental music. We will reuse the code from the model setup and training, but instead of executing the training step (our model is already trained), we will load the learned weights that we obtained in the earlier training.

The following code block executes these two steps:

from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM, Activation

model = Sequential()
model.add(LSTM(
    512,
    input_shape=(network_input.shape[1], network_input.shape[2]),
    return_sequences=True
))
model.add(Dropout(0.5))
model.add(LSTM(512, return_sequences=True))
model.add(Dropout(0.3))
model.add(LSTM(512))
model.add(Dense(256))
model.add(Dropout(0.3))
model.add(Dense(n_vocab))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')

# Load the trained weights into the model
model.load_weights('weights_file.hdf5')

With this, we have recreated the same model, this time for prediction purposes, and added one extra line of code to load the trained weights into memory.

Because we need a seed input so that the model can start generating music, we chose a random sequence of notes from our processed files. You can also supply your own notes, as long as you ensure that the sequence length is precisely 100:

import numpy

# Randomly select a seed sequence from our processed data
start = numpy.random.randint(0, len(network_input)-1)
pattern = network_input[start]

# Map the integer-encoded predictions back to note names
int_to_note = dict((number, note) for number, note in enumerate(pitchnames))

prediction_output = []

# Generate 1000 notes of music
for note_index in range(1000):
    prediction_input = numpy.reshape(pattern, (1, len(pattern), 1))
    prediction_input = prediction_input / float(n_vocab)

    prediction = model.predict(prediction_input, verbose=0)

    # Take the most probable note and decode it
    index = numpy.argmax(prediction)
    result = int_to_note[index]
    prediction_output.append(result)

    # Slide the window: drop the first note, append the prediction
    pattern.append(index)
    pattern = pattern[1:len(pattern)]

We ran the generation loop 1,000 times, producing 1,000 notes with the network, which amounts to approximately five minutes of music. To select the input for each iteration, we started with the sequence of notes at the randomly chosen starting index. For each subsequent input sequence, we removed the first note and appended the output of the previous iteration to the end of the sequence. This is a crude but effective technique, known as the sliding window approach. You can also play around and add some randomness to each sequence we select, which could lend more creativity to the generated music.
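One simple way to add such randomness, as a minimal sketch that is not part of the original pipeline, is to sample the next note from the model's softmax output with a temperature parameter instead of always taking the argmax; sample_with_temperature and temperature are names introduced here for illustration:

import numpy

def sample_with_temperature(prediction, temperature=1.0):
    """ Sample an index from the softmax output instead of taking the argmax.
    Higher temperatures flatten the distribution (more surprising notes);
    lower temperatures sharpen it (closer to plain argmax). """
    prediction = numpy.asarray(prediction).astype('float64').flatten()
    # Rescale the log-probabilities by the temperature and re-normalize
    prediction = numpy.log(prediction + 1e-8) / temperature
    exp_prediction = numpy.exp(prediction)
    probabilities = exp_prediction / numpy.sum(exp_prediction)
    return numpy.random.choice(len(probabilities), p=probabilities)

# In the generation loop above, you could replace numpy.argmax(prediction) with:
# index = sample_with_temperature(prediction, temperature=1.2)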

At this point, we have an array of the encoded representations of all the notes and chords. To turn this array back into Note and Chord objects, we need to decode it.

When we detect that the pattern is that of a Chord, we separate the string into an array of notes. We then loop through the string representation of each note, creating a Note object for each item. Finally, a Chord object is created that contains each of these notes.

When the pattern is that of a Note, we use the string representation of the pitch to create a Note object. At the end of each iteration, we increase the offset by 0.5; this value can also be changed, and randomness can be introduced to it, as sketched after the helper function below.

The following function is responsible for determining whether the output is a Note or a Chord. Finally, we use the Music21 Stream object to create the MIDI file. Here are a few samples of generated music: https://github.com/PacktPublishing/Python-Deep-Learning-Projects/tree/master/Chapter%206/Music-ai/generated_music.

To execute these steps, you can use the helper function in the following code block:

from music21 import chord, instrument, note, stream

def create_midi_file(prediction_output):
    """ Convert the output from the prediction to notes and create a MIDI file """
    offset = 0
    output_notes = []

    for pattern in prediction_output:
        # pattern is a chord
        if ('.' in pattern) or pattern.isdigit():
            notes_in_chord = pattern.split('.')
            notes = []
            for current_note in notes_in_chord:
                new_note = note.Note(int(current_note))
                new_note.storedInstrument = instrument.Piano()
                notes.append(new_note)
            new_chord = chord.Chord(notes)
            new_chord.offset = offset
            output_notes.append(new_chord)
        # pattern is a note
        else:
            new_note = note.Note(pattern)
            new_note.offset = offset
            new_note.storedInstrument = instrument.Piano()
            output_notes.append(new_note)

        # increase offset each iteration so that notes do not stack
        offset += 0.5

    midi_stream = stream.Stream(output_notes)

    midi_stream.write('midi', fp='generated.mid')
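Calling create_midi_file(prediction_output) with the array we generated earlier writes the result to generated.mid.

As noted earlier, the fixed offset increment of 0.5 produces perfectly even timing. The following is a minimal sketch, not part of the original code, of how the increment could be randomized; next_offset is a name introduced here for illustration:

import numpy

def next_offset(offset, durations=(0.25, 0.5, 0.75, 1.0)):
    """ Advance the offset by a randomly chosen quarter-length duration
    instead of the fixed 0.5, so the rhythm is less mechanical. """
    return offset + numpy.random.choice(durations)

# In create_midi_file, you could replace `offset += 0.5` with:
# offset = next_offset(offset)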
