There's more...

An example of distributed training for MNIST is available online at https://github.com/ischlag/distributed-tensorflow-example/blob/master/example.py.

In addition, note that you can decide to allocate more than one parameter server for efficiency reasons. Using multiple parameter servers provides better network utilization, because the model variables are spread across several machines, and it allows models to scale to a larger number of parallel workers. A minimal sketch of this setup is shown below. The interested reader can have a look at https://www.tensorflow.org/deploy/distributed.
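The following is a minimal sketch, not the full MNIST example, showing how a cluster with two parameter servers might be declared with the TensorFlow 1.x distributed API. The hostnames, ports, and the trainer.py script name are hypothetical placeholders; adapt them to your own cluster. tf.train.replica_device_setter places the variables round-robin across the available parameter servers, which is what improves network utilization:

import tensorflow as tf

# Hypothetical cluster layout: two parameter servers and two workers.
# Hostnames and ports are placeholders for your own machines.
cluster = tf.train.ClusterSpec({
    "ps":     ["ps0.example.com:2222", "ps1.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"]
})

# Each process would be started with its own job name and task index,
# e.g. python trainer.py --job_name=ps --task_index=0
job_name = "worker"   # or "ps"
task_index = 0

server = tf.train.Server(cluster, job_name=job_name, task_index=task_index)

if job_name == "ps":
    # Parameter servers just wait and serve variables to the workers.
    server.join()
else:
    # replica_device_setter spreads the model variables across the
    # parameter servers (round-robin by default), while the ops of
    # this replica stay on the local worker device.
    with tf.device(tf.train.replica_device_setter(
            worker_device="/job:worker/task:%d" % task_index,
            cluster=cluster)):
        x = tf.placeholder(tf.float32, [None, 784])
        W = tf.Variable(tf.zeros([784, 10]))
        b = tf.Variable(tf.zeros([10]))
        y = tf.nn.softmax(tf.matmul(x, W) + b)
        # ... define the loss, the optimizer, and the training loop
        # exactly as in the single-machine recipe ...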
