Generate Music using Multi-layer LSTM

Our (hypothetical) creative agency client loves the music lyrics we generated for them. Now they want us to create some music as well. We will be using multiple layers of LSTMs, as shown in the following figure. By now we know that RNNs are good for sequential data, and a music track can also be represented as a sequence of notes and chords. In this paradigm, a note becomes a data object containing pitch, octave, and offset information, and a chord becomes a container object holding the combination of notes played at one time.

Pitch is the sound frequency of a note. Musicians represent notes with the letter designations [A, B, C, D, E, F, G], with A being the lowest and G being the highest within a given octave; the letters then repeat in each successive octave.

Octave identifies the set of pitches used at any one time while playing an instrument.

Offset identifies the location of a note in the piece of music.
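
To make these three terms concrete, here is a minimal sketch using the music21 toolkit, a common Python choice for this kind of symbolic music processing; the specific note and chord values are just illustrative:

```python
# A minimal sketch showing how pitch, octave, and offset appear on
# music21 note and chord objects. Values here are examples only.
from music21 import note, chord

# A single note: middle C, placed 2 quarter-note beats into the piece.
n = note.Note("C4")
n.offset = 2.0
print(n.pitch)        # C4  (pitch: letter name plus octave)
print(n.octave)       # 4   (octave: which repetition of the letters A-G)
print(n.offset)       # 2.0 (offset: position of the note in the piece)

# A chord is a container for several notes sounded at the same time.
c = chord.Chord(["C4", "E4", "G4"])
print(c.pitches)      # the Pitch objects for C4, E4, and G4
print(c.normalOrder)  # [0, 4, 7] - integer pitch classes of the chord
```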

In the following section, we will build our intuition for generating music: first we process the sound files, then we convert them into sequential mapping data, and finally we use that data to train the RNN model.
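
As a rough sketch of that preprocessing step, assuming music21 for parsing and a placeholder midi_songs/ directory (not the project's exact paths), the conversion from MIDI files into a token sequence might look like this:

```python
# Sketch: turn a folder of MIDI files into a flat sequence of note/chord
# tokens that an RNN can be trained on. The "midi_songs/" path is a
# placeholder for wherever your training MIDI files live.
import glob
from music21 import converter, instrument, note, chord

notes = []  # one string token per note or chord, in playing order
for file in glob.glob("midi_songs/*.mid"):
    midi = converter.parse(file)
    try:
        # file has instrument parts: take the first part's notes
        parts = instrument.partitionByInstrument(midi)
        elements = parts.parts[0].recurse().notes
    except Exception:
        # file has notes in a flat structure
        elements = midi.flat.notes
    for element in elements:
        if isinstance(element, note.Note):
            notes.append(str(element.pitch))            # e.g. "C4"
        elif isinstance(element, chord.Chord):
            # encode a chord as dot-joined pitch classes, e.g. "0.4.7"
            notes.append('.'.join(str(n) for n in element.normalOrder))
```

Each note is encoded by its pitch string and each chord by the dot-joined pitch classes of its notes, so the whole corpus collapses into a single vocabulary of tokens, exactly the kind of sequential mapping an RNN can learn from.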

So let's do it. You can refer to the Music-ai code found at: https://github.com/PacktPublishing/Python-Deep-Learning-Projects/tree/master/Chapter%206/Music-ai
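
Before stepping through the repository, here is a rough Keras sketch of what the multi-layer (stacked) LSTM model could look like; the layer sizes, dropout rate, sequence length, and vocabulary size are illustrative assumptions, not the project's exact values:

```python
# Sketch of a stacked LSTM that predicts the next note/chord token.
# Hyperparameters (sequence_length, n_vocab, layer sizes, dropout)
# are assumed values for illustration.
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense, Activation

sequence_length = 100   # tokens of context fed in per training example
n_vocab = 300           # number of distinct note/chord tokens (example)

model = Sequential()
# First LSTM layer returns full sequences so the next LSTM can stack on it.
model.add(LSTM(256, input_shape=(sequence_length, 1), return_sequences=True))
model.add(Dropout(0.3))
# Second LSTM layer returns only its final hidden state.
model.add(LSTM(256))
model.add(Dropout(0.3))
# Output layer: a probability distribution over every possible next token.
model.add(Dense(n_vocab))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
```

Stacking a second LSTM on top of the first (via return_sequences=True) is what makes this a multi-layer model: the lower layer learns short-range note patterns while the upper layer can compose them into longer musical structure.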
