Monte Carlo prediction

In DP, we solve the Markov Decision Process (MDP) using value iteration and policy iteration. Both of these techniques require the transition and reward probabilities to find the optimal policy. But how can we solve an MDP when we don't know the transition and reward probabilities? In that case, we use the Monte Carlo method. The Monte Carlo method requires only sample sequences of states, actions, and rewards, and it applies only to episodic tasks. Since Monte Carlo doesn't require a model of the environment, it is called a model-free learning algorithm.

The basic idea of the Monte Carlo method is very simple. Do you recall how we defined the optimal value function and how we derived the optimal policy in the previous chapter, Chapter 3, Markov Decision Process and Dynamic Programming?

A value function is basically the expected return from a state s under a policy π. Here, instead of the expected return, we use the mean return.

Thus, in Monte Carlo prediction, we approximate the value function by taking the mean return instead of the expected return. 
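
To make this concrete (the notation below is standard, not taken from this excerpt), the value of a state s under a policy π and its Monte Carlo estimate can be written as:

$$V^{\pi}(s) = \mathbb{E}_{\pi}\left[G_t \mid S_t = s\right] \approx \frac{1}{N(s)} \sum_{i=1}^{N(s)} G_i(s)$$

where $G_i(s)$ is the return observed on the i-th visit to state s and $N(s)$ is the number of times s has been visited.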

Using Monte Carlo prediction, we can estimate the value function of any given policy. The steps involved in the Monte Carlo prediction are very simple and are as follows:

  1. First, we initialize our value function with random values
  2. Then we initialize an empty list called returns to store the returns for each state
  3. Then, for each state in the episode, we calculate the return
  4. Next, we append the return to our returns list
  5. Finally, we take the average of the returns as our value function

The following flowchart makes this clearer:
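
These steps translate almost directly into code. Below is a minimal sketch, assuming a `generate_episode()` helper (hypothetical, not from the book's code) that runs one episode under the policy being evaluated and returns a list of `(state, reward)` pairs:

```python
from collections import defaultdict

def mc_prediction(generate_episode, num_episodes, gamma=1.0, first_visit=True):
    """Estimate the value function of a fixed policy by averaging sampled returns."""
    value = defaultdict(float)      # step 1: value estimate for each state
    returns = defaultdict(list)     # step 2: list of observed returns per state

    for _ in range(num_episodes):
        # generate_episode() is assumed to run one episode under the policy
        # and return a list of (state, reward) pairs
        episode = generate_episode()
        states = [s for s, _ in episode]

        # step 3: compute the return G for every time step, working backwards
        G = 0.0
        for t in reversed(range(len(episode))):
            state, reward = episode[t]
            G = gamma * G + reward

            # first-visit MC records G only the first time a state appears
            # in the episode; every-visit MC records it at every occurrence
            if first_visit and state in states[:t]:
                continue

            returns[state].append(G)                                   # step 4
            value[state] = sum(returns[state]) / len(returns[state])   # step 5

    return value
```

Calling `mc_prediction(generate_episode, num_episodes=10000)` returns a dictionary mapping each visited state to its estimated value under the policy.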

The Monte Carlo prediction algorithm is of two types:

  • First visit Monte Carlo
  • Every visit Monte Carlo
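
In first visit Monte Carlo, only the return from the first occurrence of a state in an episode is averaged into its value estimate, whereas in every visit Monte Carlo, the return from every occurrence is averaged. The `first_visit` flag in the sketch above switches between the two.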