From the plot, we can see that our simple linear regressor fits a straight line to the given dataset:
![](http://images-20200215.ebookreading.net/16/3/3/9781788293594/9781788293594__tensorflow-1x-deep__9781788293594__assets__58cd208f-6c30-4bfd-a635-015e499001bd.png)
In the following graph, we can see that, as the model learned from the data, the loss function decreased as expected:
![](http://images-20200215.ebookreading.net/16/3/3/9781788293594/9781788293594__tensorflow-1x-deep__9781788293594__assets__a9f46b9c-b011-4396-a196-e1f20f1245a2.png)
The following is the TensorBoard graph of our simple linear regressor:
![](http://images-20200215.ebookreading.net/16/3/3/9781788293594/9781788293594__tensorflow-1x-deep__9781788293594__assets__4b0030fc-dad5-4ad4-9dd7-ee47af77ad05.png)
The graph has two name scope nodes, Variable and Variable_1; they are the high-level nodes representing the bias and weights, respectively. The node named gradient is also a high-level node; expanding it, we can see that it takes seven inputs and computes the gradients, which are then used by GradientDescentOptimizer to compute and apply the updates to the weights and bias:
![](http://images-20200215.ebookreading.net/16/3/3/9781788293594/9781788293594__tensorflow-1x-deep__9781788293594__assets__f9692463-d4f6-4776-b899-087b3dfbac21.png)
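The computation carried out by this graph, computing gradients of the loss with respect to the weights and bias, then applying gradient-descent updates, can be sketched in plain NumPy. This is an illustrative sketch only, not the book's TensorFlow code: the synthetic data, the learning rate, and the variable names `w` and `b` are all assumptions made for the example.

```python
import numpy as np

# Synthetic linear data (assumed for illustration): y = 2x + 1 plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=100)

w, b = 0.0, 0.0        # play the roles of Variable_1 (weights) and Variable (bias)
learning_rate = 0.1    # assumed value, chosen for quick convergence here

def mse(w, b):
    """Mean squared error loss of the linear model w*x + b."""
    return np.mean((w * x + b - y) ** 2)

loss_before = mse(w, b)
for _ in range(200):
    err = w * x + b - y
    grad_w = 2.0 * np.mean(err * x)   # d(loss)/dw
    grad_b = 2.0 * np.mean(err)       # d(loss)/db
    # Gradient-descent update, the step GradientDescentOptimizer applies in the graph
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b
loss_after = mse(w, b)
```

After training, `loss_after` is far smaller than `loss_before`, mirroring the decreasing loss curve shown earlier, and `w` and `b` end up close to the true slope and intercept of the data.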