There's more...

Training the agent takes a lot of episodes, which is both time- and memory-consuming. OpenAI Gym provides a wrapper to save the game as a video, so instead of calling render, you can use the wrapper to save videos and later review how the agent learned. AI engineers and enthusiasts can upload these videos to share their results. To do this, we first import wrappers, then create the environment, and finally wrap it with Monitor. By default, it records videos of episodes 1, 8, 27, 64, and so on (episode numbers that are perfect cubes), and then every 1,000th episode; each training run, by default, is saved in its own folder. The code to be added for this is as follows:

import gym
from gym import wrappers

# Create the environment and wrap it with Monitor to record videos to /save-path
env = gym.make('Breakout-v0')
env = wrappers.Monitor(env, '/save-path')
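If you prefer a different recording schedule to the default cubic one, Monitor also accepts a video_callable argument: a function that receives the episode number and returns whether that episode should be recorded. The following is a minimal sketch, assuming you want to record every 100th episode (the '/save-path' directory is the same placeholder as above):

import gym
from gym import wrappers

# Record a video every 100 episodes instead of the default cubic schedule
env = gym.make('Breakout-v0')
env = wrappers.Monitor(env, '/save-path',
                       video_callable=lambda episode_id: episode_id % 100 == 0)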

If you want to reuse the same folder for the next training run, pass force=True to Monitor.
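For example, a minimal sketch that overwrites any previous recordings in the same directory (reusing the '/save-path' placeholder from above):

env = wrappers.Monitor(env, '/save-path', force=True)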
