Chapter 3

  1. The Markov property states that the future depends only on the present, not on the past.
  2. The MDP (Markov decision process) is an extension of the Markov chain. It provides a mathematical framework for modeling decision-making situations. Almost all RL problems can be modeled as MDPs.
  3. Refer to the section Discount factor.
  4. The discount factor determines how much importance we give to future rewards relative to immediate rewards.
  5. We use the Bellman equation to solve the MDP.
  6. Refer to the section Deriving the Bellman equation for value and Q functions.
  7. The value function specifies the goodness of a state, and the Q function specifies the goodness of an action in that state.
  8. Refer to the sections Value iteration and Policy iteration.
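The ideas in the answers above (discount factor, Bellman equation, value and Q functions, and value iteration) can be tied together in a short sketch. The following is a minimal illustration on a hypothetical two-state, two-action MDP (the transition table and rewards are invented for this example, not taken from the chapter):

```python
import numpy as np

# Hypothetical MDP: P[s][a] = list of (probability, next_state, reward)
# transitions. The numbers here are illustrative only.
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 2.0)]},
}

def value_iteration(P, gamma=0.9, threshold=1e-6):
    """Repeatedly apply the Bellman optimality backup until convergence."""
    V = np.zeros(len(P))
    while True:
        new_V = np.zeros(len(P))
        for s in P:
            # Q(s, a) = sum over transitions of prob * (reward + gamma * V[s'])
            q_values = [
                sum(prob * (reward + gamma * V[s_next])
                    for prob, s_next, reward in P[s][a])
                for a in P[s]
            ]
            # The value function is the best achievable Q value in this state:
            # V(s) = max_a Q(s, a)
            new_V[s] = max(q_values)
        if np.max(np.abs(new_V - V)) < threshold:
            return new_V
        V = new_V

optimal_V = value_iteration(P)
```

Note how the discount factor `gamma` weights future rewards inside the Bellman backup, and how the value function is obtained by maximizing the Q function over actions, mirroring answers 4, 5, and 7 above.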
