Exporting the model for production

def export_model(checkpoint_dir, export_dir, export_name, export_version):
    graph = tf.Graph()
    with graph.as_default():
        # Build an inference-only graph: one image placeholder,
        # evaluation-time preprocessing, and the network logits.
        image = tf.placeholder(tf.float32, shape=[None, None, 3])
        processed_image = datasets.preprocessing(image, is_training=False)
        with tf.variable_scope("models"):
            logits = nets.inference(images=processed_image,
                                    is_training=False)

        model_checkpoint_path = get_model_path_from_ckpt(checkpoint_dir)
        saver = tf.train.Saver()

        config = tf.ConfigProto()
        config.gpu_options.allow_growth = True
        config.gpu_options.per_process_gpu_memory_fraction = 0.7

        with tf.Session(graph=graph, config=config) as sess:
            saver.restore(sess, model_checkpoint_path)
            export_path = os.path.join(export_dir, export_name,
                                       str(export_version))
            export_saved_model(sess, export_path, image, logits)
            print("Exported model at", export_path)

In the export_model method, we create a new graph to run in production. In production, we don't need all the variables that training requires, and we don't need an input pipeline; a single image placeholder is enough. Finally, we export the model with the export_saved_model method, as follows:
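The get_model_path_from_ckpt helper used above is not shown in this section. A minimal sketch, assuming the standard checkpoint file that tf.train.Saver writes into the checkpoint directory (in practice, tf.train.latest_checkpoint(checkpoint_dir) performs the same lookup):

```python
import os

def get_model_path_from_ckpt(checkpoint_dir):
    # The `checkpoint` file written by tf.train.Saver records the most
    # recent checkpoint on its first line, for example:
    #   model_checkpoint_path: "model.ckpt-3000"
    with open(os.path.join(checkpoint_dir, "checkpoint")) as f:
        first_line = f.readline()
    path = first_line.split(":", 1)[1].strip().strip('"')
    # The recorded path may be relative to the checkpoint directory.
    if not os.path.isabs(path):
        path = os.path.join(checkpoint_dir, path)
    return path
```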

def export_saved_model(sess, export_path, input_tensor, output_tensor):
    from tensorflow.python.saved_model import builder as saved_model_builder
    from tensorflow.python.saved_model import signature_constants
    from tensorflow.python.saved_model import signature_def_utils
    from tensorflow.python.saved_model import tag_constants
    from tensorflow.python.saved_model import utils

    builder = saved_model_builder.SavedModelBuilder(export_path)
    # Define the serving signature: the image placeholder as the input
    # and the logits as the output scores.
    prediction_signature = signature_def_utils.build_signature_def(
        inputs={'images': utils.build_tensor_info(input_tensor)},
        outputs={'scores': utils.build_tensor_info(output_tensor)},
        method_name=signature_constants.PREDICT_METHOD_NAME)
    legacy_init_op = tf.group(tf.tables_initializer(),
                              name='legacy_init_op')
    builder.add_meta_graph_and_variables(
        sess, [tag_constants.SERVING],
        signature_def_map={'predict_images': prediction_signature},
        legacy_init_op=legacy_init_op)
    builder.save()

With this method, we create a metagraph of the model for serving in production; we will cover how to serve the model in a later section. Now, let's run the script to automatically train and export the model after 3,000 steps:

python scripts/train.py

On our system, with a Core i7-4790 CPU and one TITAN X GPU, the training routine takes about 24 minutes to finish. Here are a few of the last lines of output in our console:

Steps 3000: Loss = 0.59160 Learning Rate = 0.000313810509397
Test accuracy 0.659375 Train accuracy 0.853125: Loss = 0.25782
Save steps: Test Accuracy 0.859375 is not higher than 0.921875
training: 100%|██████████████████| 3000/3000 [23:40<00:00,  1.27it/s]
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX TITAN X, pci bus id: 0000:01:00.0)
    ('Exported model at', '/home/ubuntu/models/pet-model/1')
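The Save steps line in the output reflects a save-on-best policy: a checkpoint is written only when test accuracy improves. A minimal sketch of that logic (the function and parameter names here are our own, not the book's code):

```python
def maybe_save_checkpoint(test_accuracy, best_accuracy, save_fn):
    # Persist a checkpoint only when the test accuracy improves;
    # otherwise report why nothing was saved, as in the console output.
    if test_accuracy > best_accuracy:
        save_fn()
        return test_accuracy
    print("Save steps: Test Accuracy %g is not higher than %g"
          % (test_accuracy, best_accuracy))
    return best_accuracy
```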

Great! We have a model with 92.18% test accuracy. We also have the exported model as a .pb file. The export_dir folder will have the following structure:

- /home/ubuntu/models/
-- pet-model
---- 1
------ saved_model.pb
------ variables
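The numbered subdirectory (1 here) is the model version, taken from export_version; TensorFlow Serving loads the subdirectory with the highest integer name, so exporting a new version only requires a larger number. A small sketch of that selection rule (the helper name is ours):

```python
import os

def latest_version_dir(model_dir):
    # Pick the numerically largest version subdirectory, mirroring
    # how TensorFlow Serving chooses which export to load.
    versions = [d for d in os.listdir(model_dir) if d.isdigit()]
    if not versions:
        raise ValueError("no version subdirectories in %s" % model_dir)
    return os.path.join(model_dir, max(versions, key=int))
```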