Sequence-to-sequence model architecture

The key to understanding the sequence-to-sequence architecture is recognizing that it is built to let the input sequence differ in length from the output sequence. The entire input sequence is consumed first, and then used to predict an output sequence of whatever length is required.

To do that, the network is divided into two separate parts; each part consists of one or more LSTM layers and is responsible for half of the task. We discussed LSTMs back in Chapter 9, Training an RNN from scratch, if you'd like a refresher on their operation. We will learn about each of these two parts in the following sections.
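The split described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the book's implementation: it uses a single hand-rolled LSTM cell with random, untrained weights purely to show the data flow. The first loop (the encoder) consumes a 7-step input sequence and keeps only its final state as a fixed-size summary; the second loop (the decoder) starts from that state and unrolls for 3 steps, so the output length is independent of the input length. All names (`lstm_cell`, `thought_vector`, the dimensions) are illustrative choices, not part of the original text.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_cell(x, h, c, W, U, b):
    """One LSTM step: four gates computed from input x and previous hidden h."""
    z = W @ x + U @ h + b                 # stacked pre-activations, shape (4 * hidden,)
    H = h.shape[0]
    i = 1 / (1 + np.exp(-z[:H]))          # input gate
    f = 1 / (1 + np.exp(-z[H:2 * H]))     # forget gate
    o = 1 / (1 + np.exp(-z[2 * H:3 * H])) # output gate
    g = np.tanh(z[3 * H:])                # candidate cell state
    c = f * c + i * g                     # new cell state
    h = o * np.tanh(c)                    # new hidden state
    return h, c

def init_params(in_dim, hidden):
    """Random (untrained) weights for one LSTM layer."""
    return (rng.normal(0, 0.1, (4 * hidden, in_dim)),
            rng.normal(0, 0.1, (4 * hidden, hidden)),
            np.zeros(4 * hidden))

IN_DIM, HIDDEN, OUT_STEPS = 5, 8, 3
enc_params = init_params(IN_DIM, HIDDEN)
dec_params = init_params(HIDDEN, HIDDEN)

# Encoder: consume a 7-step input sequence, keeping only the final state.
inputs = rng.normal(size=(7, IN_DIM))
h = c = np.zeros(HIDDEN)
for x in inputs:
    h, c = lstm_cell(x, h, c, *enc_params)
thought_vector = h                        # fixed-size summary of the whole input

# Decoder: start from the encoder's state and unroll for 3 steps,
# feeding its own previous output back in as the next input.
outputs = []
x = thought_vector
for _ in range(OUT_STEPS):
    h, c = lstm_cell(x, h, c, *dec_params)
    outputs.append(h)
    x = h

print(len(inputs), len(outputs))  # → 7 3
```

Note that nothing ties the decoder's step count to the encoder's: a 7-step input produces a 3-step output here only because we chose to unroll the decoder three times. In a real model the decoder would emit tokens until it produces an end-of-sequence symbol.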
