How to do it...

TensorFlow programs generate TensorFlow graphs. With the help of XLA, it is possible to compile and run these graphs on new kinds of devices.

  1. JIT compilation: Turn on JIT compilation at the session level:
# Config to turn on JIT compilation at the session level
import tensorflow as tf

config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1

sess = tf.Session(config=config)
  2. Alternatively, turn on JIT compilation manually for specific parts of the graph:
import numpy as np
import tensorflow as tf

jit_scope = tf.contrib.compiler.jit.experimental_jit_scope

x = tf.placeholder(np.float32)
with jit_scope():
    y = tf.add(x, x)  # The "add" will be compiled with XLA.
  3. We can also run computations via XLA by placing the operator on a specific XLA device, XLA_CPU or XLA_GPU (a runnable end-to-end sketch follows this list):
with tf.device("/job:localhost/replica:0/task:0/device:XLA_GPU:0"):
    output = tf.add(input1, input2)
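
The following is a minimal end-to-end sketch of step 3, combined with the session-level JIT config from step 1. It assumes a TensorFlow build with XLA device support; XLA_CPU is used so that the sketch also runs on machines without a GPU (swap in XLA_GPU:0 where one is available):

import numpy as np
import tensorflow as tf

# Session-level JIT compilation (step 1)
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1

# Explicit XLA device placement (step 3)
input1 = tf.placeholder(np.float32, shape=(2,))
input2 = tf.placeholder(np.float32, shape=(2,))
with tf.device("/job:localhost/replica:0/task:0/device:XLA_CPU:0"):
    output = tf.add(input1, input2)

with tf.Session(config=config) as sess:
    result = sess.run(output, feed_dict={input1: [1.0, 2.0],
                                         input2: [3.0, 4.0]})
    print(result)  # -> [4. 6.]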

AoT compilation: Here, we use tfcompile as a standalone tool to convert TensorFlow graphs into executable code for other devices, such as mobile devices.

TensorFlow.org describes tfcompile as follows:

tfcompile takes a subgraph, identified by the TensorFlow concepts of feeds and fetches, and generates a function that implements that subgraph. The feeds are the input arguments for the function, and the fetches are the output arguments for the function. All inputs must be fully specified by the feeds; the resulting pruned subgraph cannot contain placeholder or variable nodes. It is common to specify all placeholders and variables as feeds, which ensures the resulting subgraph no longer contains these nodes. The generated function is packaged as a cc_library, with a header file exporting the function signature, and an object file containing the implementation. The user writes code to invoke the generated function as appropriate.
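
As a minimal sketch of feeds and fetches, the code below builds a tiny subgraph and serializes it to a .pb file of the kind tfcompile consumes. The node names x_hold, y_hold, and x_y_sum are purely illustrative; in a real build they would be listed as the feed and fetch entries of the tfcompile configuration passed alongside the graph:

import tensorflow as tf

# A tiny subgraph: the two placeholders become the tfcompile feeds
# (the generated function's input arguments) and the add node becomes
# the fetch (its output argument).
x = tf.placeholder(tf.float32, shape=(1, 2), name="x_hold")
y = tf.placeholder(tf.float32, shape=(1, 2), name="y_hold")
s = tf.add(x, y, name="x_y_sum")

# Serialize the GraphDef to a binary .pb file for tfcompile. Because
# the placeholders are declared as feeds, the pruned subgraph that
# tfcompile compiles no longer contains placeholder nodes.
tf.train.write_graph(tf.get_default_graph().as_graph_def(),
                     "/tmp", "test_graph.pb", as_text=False)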

For more advanced usage of tfcompile, refer to https://www.tensorflow.org/performance/xla/tfcompile.
