
Deep reinforcement learning is a fast-growing discipline that is making a significant impact in fields such as autonomous vehicles, robotics, healthcare, and finance. This book covers deep reinforcement learning using deep Q-learning and policy gradient models, with coding exercises.

You'll begin by reviewing Markov decision processes, Bellman equations, and dynamic programming, which form the core concepts and foundation of deep reinforcement learning. Next, you'll study model-free learning, followed by function approximation using neural networks and deep learning. From there, the book covers various deep reinforcement learning algorithms, such as deep Q-networks (DQN), several flavors of actor-critic methods, and other policy-based methods.
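
As a point of reference for these foundations, the Bellman optimality equation for the state-value function, written here in standard textbook notation (not quoted from the book itself), is:

    V^*(s) = \max_a \sum_{s', r} p(s', r \mid s, a) \bigl[ r + \gamma V^*(s') \bigr]

where γ is the discount factor and p(s', r | s, a) describes the environment's transition dynamics. Dynamic programming methods solve this equation exactly when the dynamics are known; model-free and deep methods approximate its solution from experience.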

You'll also look at the exploration-versus-exploitation dilemma, a key consideration in reinforcement learning algorithms, along with Monte Carlo tree search (MCTS), which played a key role in the success of AlphaGo. The final chapters cover implementing deep reinforcement learning with popular deep learning frameworks such as TensorFlow and PyTorch. By the end, you'll understand deep reinforcement learning and be able to implement deep Q-networks and policy gradient models with TensorFlow, PyTorch, and OpenAI Gym.
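
To give a flavor of the kind of code involved, here is a minimal sketch of the agent-environment loop that OpenAI Gym provides, assuming the classic (pre-0.26) gym step/reset interface and the standard CartPole-v1 environment; it is illustrative, not an excerpt from the book:

    import gym

    # Create a standard control environment and run one episode,
    # using a random policy as a placeholder for a learned agent.
    env = gym.make("CartPole-v1")
    obs = env.reset()
    done = False
    total_reward = 0.0
    while not done:
        # A DQN or policy gradient agent would choose the action
        # from obs here instead of sampling at random.
        action = env.action_space.sample()
        obs, reward, done, info = env.step(action)
        total_reward += reward
    env.close()
    print(f"Episode return: {total_reward}")

Every algorithm in a deep reinforcement learning workflow, from DQN to actor-critic, fills in this same loop with a learned policy and an update rule.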

What You'll Learn
  • Examine the core concepts of deep reinforcement learning
  • Implement deep reinforcement learning algorithms using OpenAI's Gym environments
  • Code your own game-playing agents for Atari using actor-critic algorithms
  • Apply best practices for model building and algorithm training
Who This Book Is For

Machine learning developers and architects who want to stay ahead of the curve in the field of AI and deep learning.

Table of Contents

  1. Introduction to Reinforcement Learning
  2. Markov Decision Processes
  3. Model-Based Algorithms
  4. Model-Free Approaches
  5. Function Approximation
  6. Deep Q-Learning
  7. Policy Gradient Algorithms
  8. Combining Policy Gradient and Q-Learning
  9. Integrated Planning and Learning
  10. Further Exploration and Next Steps