How to do it...

We proceed with the recipe as follows:

  1. Define the greedy strategy for sampling from the decoder. This is straightforward because we can use tf.contrib.seq2seq.GreedyEmbeddingHelper. Since we don't know the exact length of the target sentence in advance, we use a heuristic that limits it to at most twice the length of the source sentence:
# Heuristic: cap decoding at twice the source length
maximum_iterations = tf.round(tf.reduce_max(source_sequence_length) * 2)

# Helper: greedy (argmax) sampling, starting from <sos> and stopping at <eos>
helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
    embedding_decoder,
    tf.fill([batch_size], tgt_sos_id), tgt_eos_id)

# Decoder
decoder = tf.contrib.seq2seq.BasicDecoder(
    decoder_cell, helper, encoder_state,
    output_layer=projection_layer)

# Dynamic decoding
outputs, _ = tf.contrib.seq2seq.dynamic_decode(
    decoder, maximum_iterations=maximum_iterations)
translations = outputs.sample_id
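The greedy loop that GreedyEmbeddingHelper performs under the hood can be sketched in plain NumPy: at each step, feed the previously emitted token back in, take the argmax over the vocabulary, and stop at the end-of-sentence token or after the maximum number of iterations. The `fake_model`, token ids, and vocabulary here are hypothetical stand-ins, not part of the NMT codebase:

```python
import numpy as np

def greedy_decode(step_logits, sos_id, eos_id, max_iterations):
    """Toy greedy decoder: repeatedly pick the highest-scoring token,
    stopping at <eos> or after max_iterations steps."""
    tokens = []
    prev = sos_id
    for _ in range(max_iterations):
        logits = step_logits(prev)     # scores over the vocabulary
        prev = int(np.argmax(logits))  # greedy: take the argmax
        if prev == eos_id:
            break
        tokens.append(prev)
    return tokens

# Hypothetical 5-token vocabulary: 0 = <sos>, 4 = <eos>.
# This fake model always prefers "previous token + 1".
def fake_model(prev):
    logits = np.zeros(5)
    logits[prev + 1] = 1.0
    return logits

# With a source sentence of length 2, the heuristic above caps
# decoding at 2 * 2 = 4 steps.
print(greedy_decode(fake_model, sos_id=0, eos_id=4, max_iterations=4))
# → [1, 2, 3]
```

Greedy sampling is the cheapest inference strategy; beam search, which keeps the top-k partial translations at each step, usually yields better translations at higher cost.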
  2. We can now run the net, giving it as input a sentence it has never seen before (inference_input_file=/tmp/my_infer_file) and letting the network translate it (inference_output_file=/tmp/nmt_model/output_infer):
python -m nmt.nmt \
    --out_dir=/tmp/nmt_model \
    --inference_input_file=/tmp/my_infer_file.vi \
    --inference_output_file=/tmp/nmt_model/output_infer