Our Docker images have to run somewhere, so why not on a collection of Kubernetes pods? This is where the value of distributed cloud computing becomes apparent: working from a central data source, in our case AWS S3, many small instances for training or inference can be spun up on demand. This maximizes AWS resource utilization, saving money while providing the stability and performance needed for enterprise-grade machine learning applications.
First, navigate to the /k8s/ directory in the repository that accompanies these chapters.
We will begin by creating the templates needed to deploy a cluster. In our case, we will interact with the cluster through kubectl, the default Kubernetes command-line tool, which acts as a frontend to the main Kubernetes API.
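As a rough sketch of what such a template looks like, here is a minimal Kubernetes Deployment manifest. The names, image, replica count, and S3 environment variable below are illustrative assumptions, not values taken from the repository:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inference-deployment      # hypothetical name for this example
spec:
  replicas: 3                     # run three identical pods
  selector:
    matchLabels:
      app: inference
  template:
    metadata:
      labels:
        app: inference
    spec:
      containers:
        - name: inference
          image: myregistry/inference:latest   # placeholder image name
          env:
            - name: S3_BUCKET                  # hypothetical variable pointing
              value: my-training-data          # at the central data source
```

A manifest like this would typically be applied with `kubectl apply -f <file>.yaml` and the resulting pods inspected with `kubectl get pods`.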