Exploitation versus exploration 

Generally, we want the agent to follow a greedy policy, meaning it takes the action with the largest Q value. While the network is learning, however, we don't want it to always behave greedily. If it did, it would never explore new options and learn new things. So, we need our agent to occasionally operate off policy.

How best to balance exploration and exploitation is an ongoing research topic, and it has been studied for a very long time. The method we will be using, however, is pretty straightforward. Every time the agent takes an action, we will generate a random number. If that number is equal to or less than some threshold ε, then the agent will take a random action; otherwise, it will act greedily. This is called an ε-greedy policy.
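As a minimal sketch (not taken from any particular library), the selection rule might look like this in Python; the function name and its inputs, q_values and epsilon, are illustrative assumptions:

import numpy as np

def epsilon_greedy_action(q_values, epsilon):
    """Return a random action with probability epsilon, else the greedy action.

    q_values: 1-D array of Q values, one per action, for the current state.
    epsilon:  exploration threshold in [0, 1].
    """
    if np.random.rand() <= epsilon:
        return np.random.randint(len(q_values))  # explore: pick any action
    return int(np.argmax(q_values))              # exploit: pick the best action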

When the agent first starts, it doesn't know much about the world, so it should probably explore more. As the agent gets smarter, it should explore less and rely on its knowledge of the environment more. To do so, we just need to gradually decrease ε as we train. In our example, we will decrease epsilon by a fixed decay rate every turn, so that it decreases linearly with each action.

Putting this together, we have a linearly annealed ε-greedy Q policy, which is both simple and fun to say.
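As a rough sketch of that schedule (the parameter names and values here are illustrative, not taken from any specific library):

EPS_START = 1.0    # explore almost every move at the start of training
EPS_MIN = 0.1      # always keep a small amount of exploration
EPS_DECAY = 1e-4   # amount subtracted from epsilon after every action

def annealed_epsilon(step):
    """Epsilon after `step` actions: decayed linearly, clipped at the floor."""
    return max(EPS_MIN, EPS_START - EPS_DECAY * step)

With these example values, epsilon starts at 1.0, falls to 0.5 after 5,000 actions, and stays at 0.1 from 9,000 actions onward.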
