Implementing a deep n-step advantage actor-critic agent

We now have all the background required to implement the deep n-step advantage actor-critic (A2C) agent. Let's walk through a high-level overview of the agent and then jump right into the hands-on implementation.

The following is the high-level flow of our A2C agent:

  1. Initialize the actor's and critic's networks.
  2. Use the current policy of the actor to gather n-step experiences from the environment and calculate the n-step returns (sketched in code after this list).
  3. Calculate the actor's and critic's losses.
  4. Perform the stochastic gradient descent optimization step to update the actor's and critic's parameters.
  5. Repeat from step 2.
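To make steps 2 and 3 concrete, here is a minimal sketch of the return and loss calculations, assuming PyTorch; the function names, the trajectory format (lists of per-step log-probabilities, value estimates, and rewards), and the default discount factor are illustrative assumptions rather than the book's exact code:

    import torch
    import torch.nn.functional as F

    def calculate_n_step_returns(rewards, final_value, gamma=0.99):
        """Step 2: compute the n-step return G_t = r_t + gamma * G_{t+1}
        for every step of the trajectory, bootstrapping from the critic's
        estimate V(s_{t+n}) (final_value, a plain float; pass 0.0 if the
        episode ended within the n steps)."""
        returns, g = [], final_value
        for r in reversed(rewards):
            g = r + gamma * g
            returns.insert(0, g)
        return returns

    def calculate_losses(log_probs, values, returns):
        """Step 3: the actor is trained to increase log pi(a_t|s_t)
        weighted by the advantage A_t = G_t - V(s_t); the critic is
        trained to regress V(s_t) toward the n-step return G_t."""
        values = torch.stack(values)      # 0-dim value tensors -> 1-D tensor
        returns = torch.tensor(returns, dtype=torch.float32)
        # Detach the values inside the advantage so the actor's loss
        # does not backpropagate through the critic's parameters
        advantages = returns - values.detach()
        actor_loss = -(torch.stack(log_probs) * advantages).mean()
        critic_loss = F.mse_loss(values, returns)
        return actor_loss, critic_loss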

We will implement the agent in a Python class named DeepActorCriticAgent. You will find the full implementation in this book's code repository, under the chapter 8 folder: ch8/a2c_agent.py. We will make this implementation flexible so that we can easily extend it to the batched version, as well as to make an asynchronous version of the n-step advantage actor-critic agent.
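As a rough orientation before we dive into the real code, the skeleton below shows one plausible shape for such a class, assuming PyTorch, a discrete action space, and a classic Gym-style env.reset()/env.step() API. Apart from the class name, every method, attribute, and default value here is an illustrative assumption, and it reuses the two helper functions from the sketch above:

    import torch

    class DeepActorCriticAgent:
        def __init__(self, env, actor, critic, gamma=0.99, n_steps=10, lr=1e-3):
            # Step 1: the actor and critic networks are created by the
            # caller and passed in already initialized
            self.env = env
            self.actor = actor    # policy network: observation -> action logits
            self.critic = critic  # value network: observation -> V(s)
            self.gamma = gamma
            self.n_steps = n_steps
            # A single optimizer over both networks makes step 4 one call
            self.optimizer = torch.optim.Adam(
                list(actor.parameters()) + list(critic.parameters()), lr=lr)

        def get_action(self, obs):
            """Sample an action from the actor's current policy and
            return it along with its log-probability."""
            logits = self.actor(torch.as_tensor(obs, dtype=torch.float32))
            dist = torch.distributions.Categorical(logits=logits)
            action = dist.sample()
            return action.item(), dist.log_prob(action)

        def train(self, num_updates):
            """Steps 2-5: alternate n-step rollouts with gradient updates,
            using calculate_n_step_returns/calculate_losses from above."""
            obs = self.env.reset()
            for _ in range(num_updates):
                log_probs, values, rewards, done = [], [], [], False
                for _ in range(self.n_steps):  # step 2: gather experience
                    action, log_prob = self.get_action(obs)
                    values.append(self.critic(
                        torch.as_tensor(obs, dtype=torch.float32)).squeeze())
                    obs, reward, done, _ = self.env.step(action)
                    log_probs.append(log_prob)
                    rewards.append(reward)
                    if done:
                        break
                # Bootstrap from V(s_{t+n}) unless the episode terminated
                final_value = 0.0 if done else self.critic(
                    torch.as_tensor(obs, dtype=torch.float32)).item()
                returns = calculate_n_step_returns(
                    rewards, final_value, self.gamma)
                actor_loss, critic_loss = calculate_losses(
                    log_probs, values, returns)         # step 3: losses
                self.optimizer.zero_grad()              # step 4: one SGD step
                (actor_loss + critic_loss).backward()
                self.optimizer.step()
                if done:
                    obs = self.env.reset()              # step 5: repeat

Keeping the rollout and update logic together in one train() method is also what should make the extensions mentioned above manageable: a batched version can vectorize the rollout across several environments, and an asynchronous version can run several such agents in parallel processes.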
