Now, we have all the pieces to make this DCGAN run! First, take a look at the following Python run script:
#!/usr/bin/env python3
from train import Trainer
# Command Line Argument Method
HEIGHT = 64
WIDTH = 64
CHANNEL = 3
LATENT_SPACE_SIZE = 100
EPOCHS = 100
BATCH = 128
CHECKPOINT = 10
PATH = "/data/church_outdoor_train_lmdb_color.npy"
trainer = Trainer(height=HEIGHT,
                  width=WIDTH,
                  channels=CHANNEL,
                  latent_size=LATENT_SPACE_SIZE,
                  epochs=EPOCHS,
                  batch=BATCH,
                  checkpoint=CHECKPOINT,
                  model_type='DCGAN',
                  data_path=PATH)
trainer.train()
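Before kicking off a long training run, it can be worth sanity-checking that the .npy file actually matches the configured dimensions. Here is a minimal sketch; the stand-in array and the (num_images, height, width, channels) layout are assumptions about how the dataset was exported, and a real run would call np.load(PATH) instead:

```python
import numpy as np

# Stand-in array with the assumed layout (num_images, height, width,
# channels); replace with data = np.load(PATH) for the real dataset.
data = np.zeros((16, 64, 64, 3), dtype=np.uint8)

num_images, height, width, channels = data.shape
assert (height, width, channels) == (64, 64, 3), \
    "dataset does not match HEIGHT/WIDTH/CHANNEL"
print(num_images, height, width, channels)
```

If the assertion fires, fix either the run-script constants or the preprocessing step that produced the .npy file before training.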
Note the following points in the updated Python run script:
- The height and width are 64 x 64, matching the size of the images stored in the npy file
- A batch size of 128 is recommended
- The model_type argument is set to 'DCGAN'
- Each epoch runs through the entire dataset, batch by batch, before moving on to the next epoch; this means each epoch takes roughly data_size / batch_size training steps
- The path in this call must be reachable from inside the Docker container; ensure that the -v volume mount in your Docker run script is configured accordingly
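The per-epoch cost follows directly from the epoch point above: since the whole dataset is consumed each epoch, the number of training steps per epoch is the dataset size divided by the batch size. A quick sketch of that arithmetic; the 126,227-image count is an assumption about the LSUN church_outdoor training split, so substitute data.shape[0] from your own loaded array:

```python
# Hypothetical dataset size; use data.shape[0] from the loaded .npy
# array for the real number.
data_size = 126227
batch_size = 128

full_batches = data_size // batch_size   # complete batches per epoch
leftover = data_size % batch_size        # images left over after the last full batch

print(full_batches, leftover)
```

Doubling the batch size roughly halves the steps per epoch, at the cost of more GPU memory per step, which is one reason 128 is a practical middle ground here.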