Summary

In this chapter, we learned how DRQN remembers information about previous states and how it overcomes the problem of partially observable MDPs. We saw how to train an agent to play the game Doom using the DRQN algorithm. We also learned about DARQN, an improvement over DRQN that adds an attention layer on top of the convolutional layer. Following this, we looked at the two types of attention mechanism, namely soft and hard attention.
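To recap how the pieces fit together, the following is a minimal sketch (not the chapter's exact code) of a DRQN-style network in Keras: convolutional layers extract features from each game frame, an LSTM layer carries information across time steps, and a dense layer outputs one Q-value per action. The frame size, sequence length, and number of actions used here are illustrative assumptions.

from tensorflow.keras import layers, models

# Illustrative values: sequences of 4 frames of 84x84 RGB images, 4 possible actions
seq_len, height, width, channels, n_actions = 4, 84, 84, 3, 4

model = models.Sequential([
    # The same convolutional feature extractor is applied to every frame in the sequence
    layers.TimeDistributed(layers.Conv2D(32, 8, strides=4, activation='relu'),
                           input_shape=(seq_len, height, width, channels)),
    layers.TimeDistributed(layers.Conv2D(64, 4, strides=2, activation='relu')),
    layers.TimeDistributed(layers.Flatten()),
    # The recurrent layer retains information about previous states,
    # which is how DRQN copes with partial observability
    layers.LSTM(256),
    # One Q-value per action
    layers.Dense(n_actions)
])
model.compile(optimizer='adam', loss='mse')
model.summary()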

In the next chapter, Chapter 10, Asynchronous Advantage Actor Critic Network, we will learn about another interesting deep reinforcement learning algorithm called the Asynchronous Advantage Actor Critic (A3C) network.
