

An advanced exploration of the skills and knowledge required to operate Kubernetes clusters, focused on metrics gathering and alerting, with the goal of making clusters, and the applications inside them, autonomous through self-healing and self-adaptation.

Key Features

  • The sixth book in DevOps expert Viktor Farcic's bestselling DevOps Toolkit series, offering an overview of advanced core Kubernetes techniques oriented towards monitoring and alerting
  • Takes a deep dive into monitoring, alerting, logging, auto-scaling, and other subjects aimed at making clusters resilient, self-sufficient, and self-adaptive
  • Discusses how to customize and create dashboards and alerts

Book Description

Building on The DevOps 2.3 Toolkit: Kubernetes, and The DevOps 2.4 Toolkit: Continuous Deployment to Kubernetes, Viktor Farcic continues his exploration of Kubernetes as he records his journey through monitoring, logging, and auto-scaling.

The DevOps 2.5 Toolkit: Monitoring, Logging, and Auto-Scaling Kubernetes: Making Resilient, Self-Adaptive, And Autonomous Kubernetes Clusters is the latest book in Viktor Farcic's series that helps you build a full DevOps Toolkit. It develops the skills needed to operate Kubernetes clusters, with a focus on metrics gathering and alerting, and with the goal of making clusters and the applications inside them autonomous through self-healing and self-adaptation.

Work with Viktor and dive into the creation of self-adaptive and self-healing systems within Kubernetes.

What you will learn

  • Autoscaling Deployments and StatefulSets based on resource usage (see the sketch after this list)
  • Autoscaling nodes of a Kubernetes cluster
  • Debugging issues discovered through metrics and alerts
  • Extending HorizontalPodAutoscaler with custom metrics
  • Visualizing metrics and alerts
  • Collecting and querying logs
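
The list above centers on Kubernetes' HorizontalPodAutoscaler and the metrics that feed it. As a purely illustrative sketch (not an excerpt from the book), the following uses the official Kubernetes Python client to create an HPA that keeps a Deployment between 2 and 6 replicas based on CPU utilization; the go-demo-5 names and the 80% threshold are placeholders, not values taken from the book.

    # Minimal sketch: resource-based autoscaling with the official Kubernetes
    # Python client. Names, namespace, and thresholds are placeholders;
    # adjust them to your own cluster and workload.
    from kubernetes import client, config

    config.load_kube_config()  # use the current kubectl context

    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="go-demo-5"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="go-demo-5"),
            min_replicas=2,
            max_replicas=6,
            # add replicas when average CPU across Pods exceeds 80% of requests
            target_cpu_utilization_percentage=80,
        ),
    )

    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace="go-demo-5", body=hpa)

The same result can be achieved declaratively with a YAML manifest and kubectl apply, which is closer to the style the book itself follows.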

Who this book is for

Readers with an advanced-level understanding of Kubernetes and hands-on experience operating clusters.

Table of Contents

  1. Title Page
  2. Copyright and Credits
    1. The DevOps 2.5 Toolkit
  3. About Packt
    1. Why subscribe?
  4. Dedication
  5. Contributors
    1. About the author
  6. Preface
    1. Overview
    2. Audience
    3. Requirements
      1. Download the example code files
      2. Download the color images
      3. Conventions used
    4. Get in touch
      1. Reviews
  7. Autoscaling Deployments and StatefulSets Based on Resource Usage
    1. Creating a cluster
    2. Observing Metrics Server data
    3. Auto-scaling Pods based on resource utilization
    4. To replicas or not to replicas in Deployments and StatefulSets?
    5. What now?
  8. Auto-scaling Nodes of a Kubernetes Cluster
    1. Creating a cluster
    2. Setting up Cluster Autoscaling
      1. Setting up Cluster Autoscaler in GKE
      2. Setting up Cluster Autoscaler in EKS
      3. Setting up Cluster Autoscaler in AKS
    3. Scaling up the cluster
    4. The rules governing nodes scale-up
    5. Scaling down the cluster
    6. The rules governing nodes scale-down
    7. Can we scale up too much or de-scale to zero nodes?
    8. Cluster Autoscaler compared in GKE, EKS, and AKS
    9. What now?
  9. Collecting and Querying Metrics and Sending Alerts
    1. Creating a cluster
    2. Choosing the tools for storing and querying metrics and alerting
    3. A quick introduction to Prometheus and Alertmanager
    4. Which metric types should we use?
    5. Alerting on latency-related issues
    6. Alerting on traffic-related issues
    7. Alerting on error-related issues
    8. Alerting on saturation-related issues
    9. Alerting on unschedulable or failed pods
    10. Upgrading old Pods
    11. Measuring containers memory and CPU usage
    12. Comparing actual resource usage with defined requests
    13. Comparing actual resource usage with defined limits
    14. What now?
  10. Debugging Issues Discovered Through Metrics and Alerts
    1. Creating a cluster
    2. Facing a disaster
    3. Using instrumentation to provide more detailed metrics
    4. Using internal metrics to debug potential issues
    5. What now?
  11. Extending HorizontalPodAutoscaler with Custom Metrics
    1. Creating a cluster
    2. Using HorizontalPodAutoscaler without metrics adapter
    3. Exploring Prometheus Adapter
    4. Creating HorizontalPodAutoscaler with custom metrics
    5. Combining Metric Server data with custom metrics
    6. The complete HorizontalPodAutoscaler flow of events
    7. Reaching nirvana
    8. What now?
  12. Visualizing Metrics and Alerts
    1. Creating a cluster
    2. Which tools should we use for dashboards?
    3. Installing and setting up Grafana
    4. Importing and customizing pre-made dashboards
    5. Creating custom dashboards
    6. Creating semaphore dashboards
    7. A better dashboard for big screens
    8. Prometheus alerts vs. Grafana notifications vs. semaphores vs. graph alerts
    9. What now?
  13. Collecting and Querying Logs
    1. Creating a cluster
    2. Exploring logs through kubectl
    3. Choosing a centralized logging solution
    4. Exploring logs collection and shipping
    5. Exploring centralized logging through Papertrail
    6. Combining GCP Stackdriver with a GKE cluster
    7. Combining AWS CloudWatch with an EKS cluster
    8. Combining Azure Log Analytics with an AKS cluster
    9. Exploring centralized logging through Elasticsearch, Fluentd, and Kibana
    10. Switching to Elasticsearch for storing metrics
    11. What should we expect from centralized logging?
    12. What now?
  14. What Did We Do?
    1. Contributions
  15. Other Books You May Enjoy
    1. Leave a review - let other readers know what you think