Using OpenAI gym

Using OpenAI gym makes getting started with deep reinforcement learning much easier. Keras-RL will do most of the hard work, but I think it's worth walking through the gym separately so that you can understand how the agent interacts with the environment.

Environments are objects that can be instantiated. For example, to create a CartPole-v0 environment, we just need to import gym and create the environment, as shown in the following code:

import gym
env = gym.make("CartPole-v0")

Now, if our agent wants to act in that environment, it just needs to send an action and it gets back the next state, a reward, a done flag, and an info dictionary, as follows:

next_state, reward, done, info = env.step(action)
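
The action itself must be a valid element of the environment's action space. If we don't have a trained agent yet, one simple option is to sample a random action, as follows:

action = env.action_space.sample()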

The agent can play through an entire episode by using a loop to interact with the environment. Every iteration of this loop corresponds to a single step in the episode, and the episode is over when the environment returns done as True.
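
Putting these pieces together, the following is a minimal sketch of one complete episode. It uses the random policy from above as a stand-in for a real agent:

import gym

env = gym.make("CartPole-v0")

# Reset the environment to get the initial state of a new episode.
state = env.reset()
done = False
total_reward = 0

while not done:
    # A trained agent would choose an action based on the current state;
    # here we simply sample a random action from the action space.
    action = env.action_space.sample()

    # Take one step: the environment returns the next state, the reward,
    # a done flag, and a diagnostic info dictionary.
    next_state, reward, done, info = env.step(action)

    total_reward += reward
    state = next_state

print("Episode finished with a total reward of", total_reward)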
