Q-learning

This algorithm was proposed by Watkins (in Learning from Delayed Rewards, Watkins C.J.C.H., Ph.D. Thesis, University of Cambridge, 1989; and further analyzed in Watkins C.J.C.H., Dayan P., Technical Note: Q-Learning, Machine Learning 8, 1992) as a more efficient alternative to SARSA. The main feature of Q-learning is that the TD update rule is immediately greedy with respect to the Q(st+1, a) function:

Q(st, at) ← Q(st, at) + α[rt + γ max_a Q(st+1, a) − Q(st, at)]

The key idea is to compare the current Q(st, at) value with the maximum Q value achievable from the successor state. As the policy must be GLIE, the convergence speed can be increased by avoiding incorrect estimates caused by bootstrapping on the Q value of an exploratory action that won't be the final (greedy) choice. By always choosing the maximum Q value, the algorithm moves towards the optimal solution faster than SARSA, and the convergence proof is also less restrictive. In fact, Watkins and Dayan (in the aforementioned papers) proved that, if the rewards are bounded (|ri| < R) and the learning rate α ∈ [0, 1) (in this case, α must always be strictly smaller than 1) satisfies the same constraints imposed for SARSA (Σα = ∞ and Σα² < ∞), then the estimated Q function converges with probability 1 to the optimal one:

Q(s, a) → Q*(s, a) with probability 1, ∀ s ∈ S and ∀ a ∈ A

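To fix the ideas, the single-step update can be sketched in a few lines of Python; the function name, the tabular (NumPy array) representation of Q, and the default values of α and γ are illustrative assumptions, not part of the formal definition:

import numpy as np

def q_learning_update(Q, s, a, r, s_next, done, alpha=0.1, gamma=0.9):
    # Q is a table of shape (n_states, n_actions); s, a, s_next are integer indices
    # Bootstrap on the greedy value of the successor state (0 if the episode ended)
    target = r + (0.0 if done else gamma * np.max(Q[s_next]))
    # Move Q(s, a) towards the target by a fraction alpha of the TD error
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

The only difference with respect to the SARSA update is the bootstrap term: Q-learning uses max_a Q(st+1, a), while SARSA uses Q(st+1, at+1), the value of the action that is actually selected.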
As discussed for SARSA, the conditions on the rewards and the learning rate can be managed by employing a clipping function and a temporal decay, respectively. In almost all deep Q-learning applications, these are extremely important factors in guaranteeing convergence; therefore, I invite the reader to consider them whenever the training process isn't able to converge to an acceptable solution.
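Both corrections can be implemented in a few lines. The following is a minimal sketch, where the bound R, the initial learning rate, and the decay constant k are illustrative assumptions:

import numpy as np

def clip_reward(r, R=1.0):
    # Enforce the bounded-reward condition |r| < R
    return float(np.clip(r, -R, R))

def decayed_alpha(t, alpha0=0.1, k=0.001):
    # Hyperbolic temporal decay: the sum of alpha_t diverges while the sum of
    # alpha_t^2 converges, so the conditions Σα = ∞ and Σα² < ∞ stated above hold
    return alpha0 / (1.0 + k * t)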

The complete Q-learning algorithm (with an optional forced termination of the episode) is:

  1. Set an initial deterministic policy, π(s), initialized with random actions
  2. Set the initial value array, Q(s, a) = 0 ∀ s ∈ S and ∀ a ∈ A
  3. Set the number of episodes, Nepisodes
  4. Set a maximum number of steps per episode, Nmax
  5. Set a constant, α (α = 0.1)
  6. Set a constant, γ (γ = 0.9)
  7. Set an initial exploration factor, ε(0) (ε(0) = 1.0)
  8. Define a policy to let the exploration factor ε decay (linear or exponential)
  9. Set a counter, e = 0
  10. For i = 1 to Nepisodes:
    1. Observe the initial state, si
    2. While si is non-terminal and e < Nmax:
      1. e += 1
      2. Select the action, at = π(si), with an exploration factor ε(e)
      3. Observe the transition (at, si) → (sj, rij)
      4. Select the action, at+1 = π(sj), with an exploration factor ε(e)
      5. Update the Q(st, at) function using max_a Q(st+1, a) (if sj is terminal, set this term to 0)
      6. Set si = sj
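
To make the procedure concrete, here is a minimal Python sketch of the steps listed above. The environment interface (env.reset() returning the initial state, env.step(a) returning the successor state, the reward, and a termination flag), the ε-greedy selection, and the default hyperparameter values are assumptions made only for illustration; the step counter is reset at every episode here (one possible reading of the per-episode limit Nmax), and step 10.2.4 is omitted because the update bootstraps directly on max_a Q(st+1, a):

import numpy as np

def q_learning(env, n_states, n_actions, n_episodes=500, n_max=1000,
               alpha=0.1, gamma=0.9, eps0=1.0, eps_decay=0.999):
    # Step 2: initial value array Q(s, a) = 0 for every state-action pair
    Q = np.zeros((n_states, n_actions))
    eps = eps0                              # Step 7: initial exploration factor
    t = 0                                   # Global step counter driving the epsilon decay (step 8)

    for _ in range(n_episodes):             # Step 10
        s = env.reset()                     # 10.1: observe the initial state
        done = False
        e = 0                               # Per-episode step counter

        while not done and e < n_max:       # 10.2: optional forced termination
            e += 1                          # 10.2.1
            t += 1
            eps = eps0 * (eps_decay ** t)   # Step 8: exponential decay of epsilon

            # 10.2.2: epsilon-greedy selection around the current greedy policy
            if np.random.uniform() < eps:
                a = np.random.randint(n_actions)
            else:
                a = int(np.argmax(Q[s]))

            # 10.2.3: observe the transition (a, s) -> (s_next, r)
            s_next, r, done = env.step(a)

            # 10.2.5: greedy TD update; the bootstrap term is 0 on terminal states
            # (10.2.4 is not needed here: the update uses max_a Q(s_next, a))
            target = r + (0.0 if done else gamma * np.max(Q[s_next]))
            Q[s, a] += alpha * (target - Q[s, a])

            s = s_next                      # 10.2.6: set si = sj
    return Q

The clipping and decay corrections discussed earlier can be plugged into this loop by replacing r and alpha with their clipped and decayed counterparts.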