Training

Keras-RL provides several Keras-like callbacks that allow for convenient model checkpointing and logging. I'll use both of those callbacks below. If you would like to see the other callbacks Keras-RL provides, they can be found here: https://github.com/matthiasplappert/keras-rl/blob/master/rl/callbacks.py. There you will also find a Callback base class that you can subclass to create your own Keras-RL callbacks.
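To make the callback idea concrete, here is a minimal sketch of what a custom callback might look like. The class below is hypothetical and deliberately self-contained; in real code you would subclass rl.callbacks.Callback, which exposes hooks with names like on_step_end and on_episode_end that Keras-RL invokes during training.

```python
# Hypothetical sketch of a custom Keras-RL-style callback. In practice you
# would inherit from rl.callbacks.Callback; the hook names mirror the ones
# Keras-RL calls during training.
class RewardTracker:
    """Accumulates the reward reported at the end of each episode."""

    def __init__(self):
        self.rewards = []

    def on_episode_end(self, episode, logs=None):
        # Keras-RL passes a logs dict with metrics for the finished episode.
        logs = logs or {}
        self.rewards.append(logs.get('episode_reward', 0.0))


# Simulate the framework invoking the hook once:
tracker = RewardTracker()
tracker.on_episode_end(0, {'episode_reward': 21.0})
print(tracker.rewards)
```

A callback like this could then be appended to the callbacks list returned by build_callbacks below.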

We will use the following code to train our model:

from rl.callbacks import FileLogger, ModelIntervalCheckpoint

def build_callbacks(env_name):
    checkpoint_weights_filename = 'dqn_' + env_name + '_weights_{step}.h5f'
    log_filename = 'dqn_{}_log.json'.format(env_name)
    callbacks = [ModelIntervalCheckpoint(checkpoint_weights_filename, interval=5000)]
    callbacks += [FileLogger(log_filename, interval=100)]
    return callbacks

callbacks = build_callbacks(ENV_NAME)

dqn.fit(env, nb_steps=50000,
        visualize=False,
        verbose=2,
        callbacks=callbacks)

Once the agent's callbacks are built, we can fit the DQNAgent just as we would a Keras model, using its .fit() method. Take note of the visualize parameter in this example. If visualize were set to True, we could watch the agent interact with the environment as it trains; however, this significantly slows down training.
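After training, the JSON file written by FileLogger can be inspected with the standard library to review how the agent progressed. The snippet below is a minimal sketch: the filename and the log structure (a dict of parallel per-episode lists, with key names such as 'episode_reward') are illustrative assumptions, so check the actual file your run produces.

```python
import json

# Illustrative stand-in for the structure FileLogger writes: a dict mapping
# each metric name to a list with one entry per episode. The key names and
# filename here are assumptions for the sake of the example.
sample_log = {'episode': [0, 1, 2], 'episode_reward': [12.0, 35.0, 48.0]}
with open('dqn_example_log.json', 'w') as f:
    json.dump(sample_log, f)

# Read the log back and find the best episode reward seen so far.
with open('dqn_example_log.json') as f:
    data = json.load(f)
best = max(data['episode_reward'])
print(best)
```

In a real run you would point the open() call at the log file build_callbacks created, e.g. dqn_CartPole-v0_log.json.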
