Memory and experience replay

A clever solution to these two problems is available when we introduce the concept of a finite memory space in which we store a set of experiences the agent has had. At each step, we can take the opportunity to remember the state, the action taken, the reward received, and the resulting next state. Then, periodically, the agent can replay these experiences by sampling a random minibatch from memory and updating the DQN weights using that minibatch.

This replay mechanism allows the agent to learn from its experiences over the longer term, in a general way, since it samples randomly from the experiences in its memory rather than updating the network using only the most recent experience.
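A minimal sketch of such a replay memory in Python follows. The class name `ReplayMemory`, its method names, and the exact fields stored per transition are illustrative choices, not part of any particular library's API:

```python
import random
from collections import deque


class ReplayMemory:
    """Fixed-size buffer of past transitions (illustrative sketch)."""

    def __init__(self, capacity):
        # A deque with maxlen silently discards the oldest experience
        # once the memory is full, keeping the buffer finite.
        self.buffer = deque(maxlen=capacity)

    def remember(self, state, action, reward, next_state, done):
        # Store one transition observed by the agent.
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Draw a uniform random minibatch; sampling randomly breaks the
        # correlation between consecutive experiences.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```

In a training loop, the agent would call `remember` after every step and, once the buffer holds at least one minibatch worth of transitions, periodically call `sample` and use the resulting minibatch to update the DQN weights.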
