How to do it...

This is a fairly simple section made up of three primary steps: creating the loss.py file and placing two loss functions in it for us to inherit later in development.

Perform the following steps to create the loss.py file:

  1. Add the python3 interpreter to the top of the file and import tensorflow, as follows:
#!/usr/bin/env python3
import tensorflow as tf
  2. Implement the self-regularization loss suggested by the authors of the paper. The implementation should be presented as follows:

def self_regularization_loss(y_true, y_pred):
    return tf.multiply(0.0002, tf.reduce_sum(tf.abs(y_pred - y_true)))

In the preceding step, you're summing the absolute differences between the predicted and true values during training, then scaling the result by a small regularization factor (0.0002).
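To sanity-check the arithmetic, here's a NumPy stand-in for the same computation (not part of loss.py; the batch values are made up for illustration):

```python
import numpy as np

# A tiny hypothetical batch of "true" and "predicted" values.
y_true = np.array([[0.1, 0.5], [0.3, 0.9]])
y_pred = np.array([[0.2, 0.4], [0.6, 0.9]])

# Same arithmetic as self_regularization_loss:
# scale the sum of absolute differences by 0.0002.
loss = 0.0002 * np.sum(np.abs(y_pred - y_true))
print(loss)  # the absolute differences sum to 0.5, so the loss is 0.0002 * 0.5 = 1e-4
```

Because the factor is so small, this term gently pulls the refined image toward the original simulated input without dominating the adversarial loss.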

  3. Add the local adversarial loss function based on the mathematical presentation in the paper, as follows:

def local_adversarial_loss(y_true, y_pred):
    truth = tf.reshape(y_true, (-1, 2))
    predicted = tf.reshape(y_pred, (-1, 2))

    computed_loss = tf.nn.softmax_cross_entropy_with_logits_v2(
        labels=truth, logits=predicted)
    output = tf.reduce_mean(computed_loss)
    return output

As you can see in the preceding snippet, the softmax_cross_entropy_with_logits_v2 function computes the probability error for discrete classification tasks in which the classes are mutually exclusive; in our case, the two classes are real and simulated images.
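To see what that function actually computes, here is a small NumPy stand-in (not part of loss.py; the label and logit values are hypothetical) that reproduces the softmax cross-entropy arithmetic on two (real, simulated) pairs:

```python
import numpy as np

def softmax_cross_entropy(labels, logits):
    # Numerically stable softmax over the last axis.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    # Per-row cross-entropy: -sum(labels * log(probs)).
    return -(labels * np.log(probs)).sum(axis=-1)

# Two (real, simulated) logit pairs with one-hot truth labels.
truth = np.array([[1.0, 0.0], [0.0, 1.0]])
predicted = np.array([[2.0, 0.0], [0.5, 1.5]])

per_example = softmax_cross_entropy(truth, predicted)
output = per_example.mean()  # the tf.reduce_mean step
```

The per-example values shrink toward zero as the logits become more confidently correct, which is exactly the signal the discriminator trains on.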
