There's more...

Once we have learned how to create a compact representation of audio with dilated convolutions, we can play with these learned representations and have fun (a short code sketch of such a dilated stack appears after the following list). You will find some very cool demos on the internet:

  1. For instance, you can see how the model learns the sounds of different musical instruments (https://magenta.tensorflow.org/nsynth).
  2. Then, you can see how a model learned in one context can be re-mixed in another context. For instance, by changing the speaker identity, we can use WaveNet to say the same thing in different voices (https://deepmind.com/blog/wavenet-generative-model-raw-audio/).
  3. Another very interesting experiment is to learn models for musical instruments and then re-mix them in such a way that we can create new musical instruments that have never been heard before. This is really cool, and it opens the path to a new range of possibilities that the ex-radio DJ in me cannot resist being super excited about. For instance, in this example, a sitar is combined with an electric guitar, and the result is a kind of cool new musical instrument. Not excited enough? Then what about combining a bowed bass with a dog's bark (https://aiexperiments.withgoogle.com/sound-maker/view/)? Have fun!
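As a reminder of what the compact representation via dilated convolutions mentioned above might look like in code, here is a minimal Keras sketch. It is only an illustration: the number of layers, the channel width, and the global pooling used to produce the embedding are assumptions made for this example, not the actual WaveNet/NSynth architecture.

```python
from tensorflow.keras import layers, models

def dilated_encoder(input_length=16000, channels=32, num_layers=6):
    # Raw mono audio: one float sample per timestep.
    inputs = layers.Input(shape=(input_length, 1))
    x = inputs
    for i in range(num_layers):
        # The dilation rate doubles at each layer (1, 2, 4, 8, ...),
        # so the receptive field grows exponentially with depth.
        x = layers.Conv1D(filters=channels,
                          kernel_size=2,
                          dilation_rate=2 ** i,
                          padding='causal',
                          activation='relu')(x)
    # Collapse the time axis into a single compact embedding vector.
    embedding = layers.GlobalAveragePooling1D()(x)
    return models.Model(inputs, embedding)

model = dilated_encoder()
model.summary()  # prints the stack of dilated Conv1D layers
```

NSynth builds on the same idea, pairing a dilated convolutional encoder with a WaveNet-style decoder so that the compact embedding can be turned back into audio, which is what makes the instrument-mixing demos above possible.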