Lessons learned from production

Kubernetes has been around long enough now that many companies are running it in production. In our day jobs, we've seen Kubernetes deployed across a number of different industry verticals and in numerous configurations. Let's explore what folks across the industry are doing when running customer-facing workloads. At a high level, there are several key areas:

  • Make sure to set limits in your cluster.
  • Use the appropriate workload types for your application.
  • Label everything! Labels are very flexible and can contain a lot of information that can help identify an object, route traffic, or determine placement.
  • Don't run with default values; tweak the defaults for the core Kubernetes components to fit your environment.
  • Use load balancers as opposed to exposing services directly on a node's port.
  • Build your Infrastructure as Code and use provisioning tools such as CloudFormation or Terraform, and configuration tools such as Chef, Ansible, or Puppet.
  • Consider not running stateful workloads in production clusters until you build up expertise in Kubernetes.
  • Investigate higher-function templating languages to maintain the state of your cluster. We'll explore a few options for an immutable infrastructure in the following chapter.
  • Use RBAC, the principle of least privilege, and separation of concerns wherever possible.
  • Use TLS-enabled communications for all intra-cluster chatter. You can set up TLS and certificate rotation for kubelet communication in your cluster.
  • Until you're comfortable with managing Kubernetes, build lots of small clusters. It's more operational overhead, but it will get you into the deep end faster: you'll see more failures and feel the operator burden sooner.
  • As you get better at Kubernetes, build bigger clusters that use namespaces, network segmentation, and the authorization features to break up your cluster into pieces.
  • Once you're running a few large clusters, manage them with kubefed.
  • If you can, use your cloud service provider's built-in high-availability features for Kubernetes. For example, run regional clusters on GCP with GKE. This feature spreads your nodes across several availability zones in a region, which provides resilience against a single-zone failure and the conceptual building blocks for zero-downtime upgrades of your master nodes.
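As an illustration of the first point above, resource requests and limits can be set per container, and a namespace-wide LimitRange can supply defaults for containers that omit them. A minimal sketch, where the `default-limits` name, the `dev` namespace, and the specific values are illustrative placeholders:

```yaml
# Namespace-wide defaults: containers in "dev" that set no
# requests/limits of their own inherit these values.
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: dev
spec:
  limits:
    - type: Container
      defaultRequest:   # applied when a container sets no request
        cpu: 100m
        memory: 128Mi
      default:          # applied when a container sets no limit
        cpu: 500m
        memory: 256Mi
```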
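To illustrate the labeling advice, labels attach arbitrary key/value metadata to objects, and selectors then route traffic or group workloads by those keys. A sketch with hypothetical label values (`app`, `tier`, and `environment` are conventions, not requirements):

```yaml
# A Service routing traffic by label: any pod carrying both
# app=web and tier=frontend receives this traffic.
apiVersion: v1
kind: Service
metadata:
  name: web-frontend
  labels:
    app: web
    tier: frontend
    environment: production
spec:
  selector:
    app: web
    tier: frontend
  ports:
    - port: 80
      targetPort: 8080
```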
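The load-balancer advice amounts to setting a Service's type, assuming you're running on a cloud provider that provisions one; the alternative, `NodePort`, exposes a high port directly on every node. A sketch with an illustrative service name:

```yaml
# Expose a workload through the cloud provider's load balancer
# rather than a NodePort on each node.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 443
      targetPort: 8443
```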
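Following the least-privilege advice, RBAC lets you grant a narrow set of verbs on a narrow set of resources in a single namespace. A sketch in which the role, binding, and service account names are all placeholders:

```yaml
# Read-only access to pods in one namespace, bound to a
# single service account rather than to all users.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
  - kind: ServiceAccount
    name: ci-deployer
    namespace: dev
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```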

In the next section, we'll explore one of these concepts – limits – in more detail.
