Preface

TensorFlow as a machine learning (ML) library has matured into a production-ready ecosystem. This beginner's book uses practical examples to enable you to build and deploy TensorFlow models using optimal settings that ensure long-term support without having to worry about library deprecation or being left behind when it comes to bug fixes or workarounds.

The book begins by showing you how to refine your TensorFlow project and set it up for enterprise-level deployment. You'll then learn how to choose the TensorFlow version best suited to your needs. As you advance, you'll find out how to build and deploy models in a robust and stable environment by following recommended practices made available in TensorFlow Enterprise. This book also teaches you how to manage your services better and enhance the performance and reliability of your artificial intelligence (AI) applications. You'll discover how to use various enterprise-ready services to accelerate your ML and AI workflows on Google Cloud. Finally, you'll scale your ML models and handle heavy workloads across CPUs, GPUs, and cloud TPUs.

By the end of this TensorFlow book, you'll have learned the patterns needed for TensorFlow Enterprise model development, data pipelines, training, and deployment.

Who this book is for

This book is for data scientists, ML developers or engineers, and cloud practitioners who want to learn and implement various services and features offered by TensorFlow Enterprise from scratch. Basic knowledge of the ML development process will be useful.

What this book covers

Chapter 1, Overview of TensorFlow Enterprise, illustrates how to set up and run TensorFlow Enterprise in a Google Cloud Platform (GCP) environment. This will give you initial hands-on experience in seeing how TensorFlow Enterprise integrates with other data services in GCP.

Chapter 2, Running TensorFlow Enterprise in Google AI Platform, describes how to use GCP to set up and run TensorFlow Enterprise. As a differentiated TensorFlow distribution, TensorFlow Enterprise can be found on several (but not all) GCP platforms. It is important to use these platforms in order to ensure that the correct distribution is provisioned. 

Chapter 3, Data Preparation and Manipulation Techniques, illustrates how to deal with raw data and format it to uniquely suit consumption by a TensorFlow model training process. We will look at a number of essential TensorFlow Enterprise APIs that convert raw data into Protobuf format for efficient streaming, which is a recommended workflow for feeding data into a training process.
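
To give a flavor of what this looks like in practice, here is a minimal sketch (illustrative only, not taken from the chapter) of serializing records into the Protobuf-based TFRecord format using core TensorFlow APIs; the feature names, values, and output filename are placeholders:

import tensorflow as tf

def serialize_example(feature_vector, label):
    # Wrap raw values in tf.train.Feature protos and bundle them into an Example.
    feature = {
        'features': tf.train.Feature(float_list=tf.train.FloatList(value=feature_vector)),
        'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
    }
    example = tf.train.Example(features=tf.train.Features(feature=feature))
    return example.SerializeToString()

# Write two illustrative records to a TFRecord file for streaming later.
with tf.io.TFRecordWriter('sample.tfrecord') as writer:
    writer.write(serialize_example([0.1, 0.2, 0.3], 1))
    writer.write(serialize_example([0.4, 0.5, 0.6], 0))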

Chapter 4, Reusable Models and Scalable Data Pipelines, describes the different ways in which a TensorFlow Enterprise model may be built or reused. These options provide the flexibility to suit different situational requirements for building, training, and deploying TensorFlow models. Equipped with this knowledge, you will be able to make informed choices and understand the trade-offs among different model development strategies.

Chapter 5, Training at Scale, illustrates the use of TensorFlow Enterprise distributed training strategies to scale your model training to a cluster (either GPU or TPU). This will enable you to build a model development and training process that is robust and takes advantage of all the hardware at your disposal.
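
As a taste of what this looks like, the following sketch (illustrative, not from the chapter) uses tf.distribute.MirroredStrategy to replicate a Keras model across local GPUs; a TPUStrategy would play the analogous role on a Cloud TPU:

import tensorflow as tf

# Replicate training across all visible GPUs on a single machine.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # Variables created inside the scope are mirrored across the replicas.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation='relu', input_shape=(10,)),
        tf.keras.layers.Dense(1)
    ])
    model.compile(optimizer='adam', loss='mse')

# model.fit(train_dataset) then aggregates gradients across the replicas.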

Chapter 6, Hyperparameter Tuning, focuses on hyperparameter tuning as this is a necessary part of model training, especially when building your own model. TensorFlow Enterprise now provides high-level APIs for advanced hyperparameter space search algorithms. Through this chapter, you will learn how to leverage the distributed computing power at your disposal to reduce the training time required for hyperparameter tuning.
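
For orientation, a search of this kind typically looks like the following sketch, which assumes the Keras Tuner package (keras_tuner) is installed; the APIs and search algorithms covered in the chapter may differ:

import keras_tuner as kt
import tensorflow as tf

def build_model(hp):
    # hp.Int and hp.Choice define the hyperparameter search space.
    units = hp.Int('units', min_value=32, max_value=256, step=32)
    learning_rate = hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4])
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(units, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

tuner = kt.RandomSearch(build_model, objective='val_accuracy', max_trials=5)
# tuner.search(x_train, y_train, validation_data=(x_val, y_val)) runs the trials.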

Chapter 7, Model Optimization, examines how lean and efficient your model is. Does your model run as efficiently as possible? If your use case requires the model to run with limited resources (memory, model size, or data type), such as in the case of edge or mobile devices, then it's time to consider model runtime optimization. This chapter discusses the latest means of model optimization through the TensorFlow Lite framework. After this chapter, you will be able to optimize a trained TensorFlow Enterprise model to be as lightweight as possible for inferencing.
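
As a simple illustration of the idea (the SavedModel path and output filename below are placeholders), converting and quantizing a trained model with the TensorFlow Lite converter can look like this:

import tensorflow as tf

# 'saved_model_dir' is a placeholder path to a trained SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model('saved_model_dir')
# Default optimizations apply post-training quantization to shrink the model.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)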

Chapter 8, Best Practices for Model Training and Performance, focuses on two aspects of model training that are universal: data ingestion and overfitting. First, it is necessary to build a data ingestion pipeline that works regardless of the size and complexity of the training data. In this chapter, best practices and recommendations for using TensorFlow Enterprise data preprocessing pipelines are explained and demonstrated. Second, in dealing with overfitting, standard practices of regularization as well as some recently released regularizations by the TensorFlow team are discussed.
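
To preview the first point, a typical tf.data ingestion pipeline (the file pattern below is hypothetical) reads TFRecords and overlaps input preparation with training roughly as follows:

import tensorflow as tf

# Hypothetical file pattern; replace with your own training shards.
files = tf.data.Dataset.list_files('data/train-*.tfrecord')
dataset = (tf.data.TFRecordDataset(files)
           .shuffle(buffer_size=1024)
           .batch(32)
           .prefetch(tf.data.experimental.AUTOTUNE))  # overlap input with model execution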

Chapter 9, Serving a TensorFlow Model, describes the fundamentals of model inferencing as a web service. You will learn how to serve a TensorFlow model using TensorFlow Serving by building a Docker image of the model. In this chapter, you will begin by learning how to make use of saved models in your local environment first. Then you will build a Docker image of the model using TensorFlow Serving as the base image. Finally, you will serve this model as a web service through the RESTful API exposed by your Docker container.
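
As a preview, once such a container is running, querying it over REST can be as simple as the sketch below; the model name 'my_model' is a placeholder and 8501 is TensorFlow Serving's default REST port:

import json
import requests

# Send one instance to the model's predict endpoint and print the result.
payload = {'instances': [[0.1, 0.2, 0.3]]}
response = requests.post('http://localhost:8501/v1/models/my_model:predict',
                         data=json.dumps(payload))
print(response.json())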

To get the most out of this book

It would be very helpful to have a fundamental understanding of, and experience with, the Keras API, as this book is based on TensorFlow 2.x, in which the Keras API is officially supported and adopted as the tf.keras API. In addition, a basic understanding of image classification techniques (convolution and multiclass classification) would be helpful, as this book uses the image classification problem as a vehicle to introduce and explain new features in TensorFlow Enterprise 2. Familiarity with GitHub is also useful: basic experience with cloning repositories and navigating file structures will help you download the source code for this book.

From the ML perspective, having a basic understanding of model architectures, feature engineering processes, and hyperparameter optimization would be helpful. It is also assumed that you are familiar with fundamental Python data structures, including NumPy arrays, tuples, and dictionaries.

If you are using the digital version of this book, we advise you to type the code in yourself or access the code via the GitHub repository (link available in the next section). Doing so will help you avoid any potential errors related to the copying/pasting of code.

Download the example code files

You can download the example code files for this book from GitHub at https://github.com/PacktPublishing/learn-tensorflow-enterprise/. In case there's an update to the code, it will be updated on the existing GitHub repository.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Download the color images

We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: https://static.packt-cdn.com/downloads/9781800209145_ColorImages.pdf

Conventions used

There are a number of text conventions used throughout this book.

Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: 'Just like lxterminal, we can run Linux commands from here too.'

A block of code is set as follows:

p2 = Person()
p2.name = 'Jane'
p2.age = 20
print(p2.name)
print(p2.age)

Any command-line input or output is written as follows:

sudo apt-get install xrdp -y

Bold: Indicates a new term, an important word, or words that you see on screen. For example, words in menus or dialog boxes appear in the text like this. Here is an example: 'Open the Remote Desktop Connection application on your Windows PC.'

Tips or important notes

Appear like this.

Get in touch

Feedback from our readers is always welcome.

General feedback: If you have questions about any aspect of this book, mention the book title in the subject of your message and email us at [email protected].

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata, select your book, click on the Errata Submission Form link, and enter the details.

Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in, and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Reviews

Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!

For more information about Packt, please visit packt.com.
