Chapter 11

  1. The policy gradient is one of the fundamental algorithms in RL, where we directly optimize the policy parameterized by some parameter θ.
  2. Policy gradient methods are effective because we don't need to compute the Q function to find the optimal policy (see the sketch after this list).
  3. The role of the Actor network is to select the best action in a given state by tuning the parameter θ, and the role of the Critic is to evaluate the action produced by the Actor.
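To make answers 1 and 2 concrete, here is a minimal REINFORCE-style sketch, assuming PyTorch and Gymnasium's CartPole-v1 environment; the network size, learning rate, discount factor, and episode count are illustrative choices, not values from the chapter. The update needs only the episode returns and the log-probabilities of the policy's own actions; no Q function is computed anywhere.

```python
# Minimal REINFORCE sketch (assumptions: PyTorch and Gymnasium are installed;
# all hyperparameters below are illustrative).
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
gamma = 0.99

for episode in range(500):
    log_probs, rewards = [], []
    state, _ = env.reset()
    done = False
    while not done:
        logits = policy(torch.as_tensor(state, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        state, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)
        done = terminated or truncated

    # Discounted return G_t for every time step of the episode.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)

    # Gradient ascent on E[G_t * log pi(a_t | s_t)]: the policy is
    # optimized directly, with no Q function in the update.
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In an actor-critic method (answer 3), the Monte Carlo return G_t above would be replaced by the Critic's estimate (for example, a TD error or advantage), so the Critic evaluates the Actor's actions while the Actor keeps the same policy gradient update.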


  1. Refer to the section Trust region policy optimization.
  2. We iteratively improve the policy while imposing the constraint that the Kullback–Leibler (KL) divergence between the old policy and the new policy be less than some constant. This constraint is called the trust region constraint.
  3. PPO modifies the objective function of TRPO by changing the constraint into a penalty term, so that we don't need to perform the conjugate gradient method (see the objective and sketch after this list).
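To relate answers 2 and 3 concretely: TRPO maximizes the surrogate objective

$$\max_\theta \; \mathbb{E}\left[\frac{\pi_\theta(a \mid s)}{\pi_{\theta_\text{old}}(a \mid s)}\, A(s, a)\right] \quad \text{subject to} \quad \mathbb{E}\left[D_\text{KL}\big(\pi_{\theta_\text{old}}(\cdot \mid s) \,\|\, \pi_\theta(\cdot \mid s)\big)\right] \le \delta,$$

while the penalty variant of PPO folds the constraint into the objective as a term $-\beta \, D_\text{KL}$. Below is a minimal sketch of such a penalized loss, assuming PyTorch and a discrete action space; `policy`, `old_policy`, and the batch tensors are hypothetical placeholders, and the fixed coefficient `beta` omits the adaptive adjustment the PPO paper describes.

```python
# Sketch of a PPO penalty-style loss (assumptions: PyTorch; `policy`,
# `old_policy`, `states`, `actions`, and `advantages` are hypothetical
# placeholders for a batch of collected experience).
import torch

def ppo_penalty_loss(policy, old_policy, states, actions, advantages, beta=1.0):
    new_dist = torch.distributions.Categorical(logits=policy(states))
    with torch.no_grad():
        old_dist = torch.distributions.Categorical(logits=old_policy(states))

    # Probability ratio pi_new(a|s) / pi_old(a|s) of the surrogate objective.
    ratio = torch.exp(new_dist.log_prob(actions) - old_dist.log_prob(actions))
    surrogate = (ratio * advantages).mean()

    # TRPO's hard trust region constraint KL(old || new) <= delta becomes a
    # soft penalty term, so a plain first-order optimizer suffices and no
    # conjugate gradient step is required.
    kl = torch.distributions.kl_divergence(old_dist, new_dist).mean()
    return -(surrogate - beta * kl)  # negated because optimizers minimize
```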