Working with constants, variables, and placeholders

In the simplest terms, TensorFlow provides a library for defining and performing mathematical operations on tensors. A tensor is essentially an n-dimensional array. All common types of data, that is, scalars, vectors, and matrices, are special cases of tensors:

| Type of data | Tensor | Shape |
| --- | --- | --- |
| Scalar | 0-D tensor | [] |
| Vector | 1-D tensor | [D0] |
| Matrix | 2-D tensor | [D0, D1] |
| Tensor | N-D tensor | [D0, D1, ..., Dn-1] |
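To make these shapes concrete, here is a minimal sketch (the particular values and shapes are illustrative assumptions) that builds a tensor of each rank and prints its shape:

```python
import tensorflow as tf

scalar = tf.constant(5)                          # 0-D tensor, shape []
vector = tf.constant([1, 2, 3])                  # 1-D tensor, shape [3]
matrix = tf.constant([[1, 2], [3, 4]])           # 2-D tensor, shape [2, 2]
cube = tf.constant([[[1], [2]], [[3], [4]]])     # 3-D tensor, shape [2, 2, 1]

print(scalar.shape, vector.shape, matrix.shape, cube.shape)
```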

 

TensorFlow supports three types of tensors:

  • Constants
  • Variables
  • Placeholders

Constants: Constants are tensors whose values cannot be changed once defined.
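A minimal TensorFlow 1.x-style sketch of defining constant tensors (the names and values here are illustrative):

```python
import tensorflow as tf

# Constant tensors: their values are fixed in the graph definition
a = tf.constant(4, name='a')                  # scalar constant
b = tf.constant([1.0, 2.0, 3.0], name='b')    # vector constant
zeros = tf.zeros([2, 3], tf.int32)            # 2x3 constant tensor filled with zeros

with tf.Session() as sess:
    print(sess.run([a, b, zeros]))
```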

Variables: We use variable tensors when values need to be updated within a session. For example, in a neural network the weights must be updated during training, which is achieved by declaring the weights as variables. Variables need to be explicitly initialized before use. Another important point is that constants are stored in the computation graph definition and are loaded every time the graph is loaded, which makes them memory-expensive. Variables, on the other hand, are stored separately; they can exist on a parameter server.
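As an illustration, here is a TensorFlow 1.x-style sketch of declaring weights and a bias as variables and initializing them explicitly; the shapes used are assumptions for the example:

```python
import tensorflow as tf

# Weights and bias declared as variables so they can be updated during training
weights = tf.Variable(tf.random_normal([100, 10], stddev=0.1), name='weights')
bias = tf.Variable(tf.zeros([10]), name='bias')

# Variables must be explicitly initialized before use
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init_op)
    print(sess.run(bias))
```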

Placeholders: These are used to feed values into a TensorFlow graph, together with feed_dict, and are typically used to feed new training examples while training a neural network. A value is assigned to a placeholder when the graph is run in a session, which allows us to create operations and build the computation graph without requiring the data up front. An important point to note is that placeholders hold no data of their own and therefore do not need to be initialized.
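The following TensorFlow 1.x-style sketch (the tensor name, shape, and fed values are illustrative) shows a placeholder being supplied data through feed_dict when the graph is run:

```python
import tensorflow as tf

# A placeholder is filled via feed_dict only when the graph is run
x = tf.placeholder(tf.float32, shape=[None, 3], name='x')   # batch of 3-feature rows
y = tf.reduce_sum(x, axis=1)

with tf.Session() as sess:
    result = sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]})
    print(result)   # [ 6. 15.]
```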
