© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2023
N. Tolaram, Software Development with Go, https://doi.org/10.1007/978-1-4842-8731-6_18

18. cAdvisor

Nanik Tolaram
Sydney, NSW, Australia

In this chapter, you will look at an open source project called cAdvisor, which stands for Container Advisor. The complete source code can be found at https://github.com/google/cadvisor. This chapter uses version v0.39.3 of the project. The project is used to collect resource usage and performance data on running containers. cAdvisor supports Docker containers, and this is specifically what you are going to look at in this chapter.

The reason for choosing this project is to explore further the topics we discussed in previous chapters, such as
  • Using system calls to monitor the filesystem

  • Using cgroups

  • Collecting machine information using /proc and /sys

Source Code

The source code for this chapter is available from the https://github.com/Apress/Software-Development-Go repository.

Running cAdvisor

This section walks through how to check out cAdvisor source code to run it locally. Let’s start by checking out the code using the following command:
GO111MODULE=off go get github.com/google/cadvisor
The command uses go get to download the source code from the given URL. The GO111MODULE=off environment variable tells the go tool to operate in GOPATH mode, so the module (google/cadvisor) is stored under your GOPATH directory. Once the module has been downloaded, go to the GOPATH/src/github.com/google/cadvisor directory and you will see something like the following:
drwxrwxr-x 32 nanik nanik  4096 Jun 17 22:31 ./
drwxrwxr-x  9 nanik nanik  4096 Jun 15 22:19 ../
drwxrwxr-x  2 nanik nanik  4096 Jun 17 22:31 accelerators/
-rw-rw-r--  1 nanik nanik   256 Jun 15 22:19 AUTHORS
drwxrwxr-x  4 nanik nanik  4096 Jun 17 22:31 build/
drwxrwxr-x  3 nanik nanik  4096 Jun 15 22:19 cache/
-rw-rw-r--  1 nanik nanik 22048 Jun 15 22:19 CHANGELOG.md
drwxrwxr-x  4 nanik nanik  4096 Jun 17 22:31 client/
drwxrwxr-x  3 nanik nanik  4096 Jun 17 22:32 cmd/
drwxrwxr-x  3 nanik nanik  4096 Jun 17 22:31 collector/
...
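This chapter uses the v0.39.3 tag, while go get fetches whatever the default branch points to at the time you run it. To match the book exactly, you can pin the checkout to that tag. Here is a quick sketch, assuming the default GOPATH layout:
cd $(go env GOPATH)/src/github.com/google/cadvisor
git checkout v0.39.3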
Build the project by changing into the cmd directory and running the following command:
go build -o cadvisor
You will get an executable file called cadvisor. Let’s run the project using the following command to print out the different parameters it can accept:
./cadvisor --help
You will get a printout that looks like the following:
  -add_dir_header
        If true, adds the file directory to the header of the log messages
  ...
  -boot_id_file string
        Comma-separated list of files to check for boot-id. Use the first one that exists. (default "/proc/sys/kernel/random/boot_id")
  ...
  -v value
        number for the log level verbosity
  ...
I will not go through all the different parameters that cAdvisor has; you are just going to use the default values it assigns. cAdvisor requires root access to run, so start it as follows:
sudo ./cadvisor -v 9
By default, the application listens on port 8080, so if another application is already using that port, cAdvisor will fail to start. Use the -port flag to specify a different port number:
sudo ./cadvisor -port <port_number>

When cAdvisor runs, it collects different information related to the machine and containers, which can only be done if it has root access.

Once cAdvisor is up and running, you will see a lot of log information printed out in the terminal.
I0617 23:06:13.122455 2311171 storagedriver.go:55] Caching stats in memory for 2m0s
W0617 23:06:13.122498 2311171 manager.go:159] Cannot detect current cgroup on cgroup v2
I0617 23:06:13.122591 2311171 plugin.go:40] CRI-O not connected: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
...
I0617 23:06:13.139451 2311171 nvidia.go:61] NVIDIA setup failed: no NVIDIA devices found
...
I0617 23:06:13.192306 2311171 manager.go:991] Added container: "/" (aliases: [], namespace: "")
I0617 23:06:13.192340 2311171 handler.go:325] Added event &{/ 2022-06-14 09:44:58.365378218 +1000 AEST containerCreation {<nil>}}
I0617 23:06:13.192356 2311171 manager.go:301] Starting recovery of all containers
I0617 23:06:13.192424 2311171 container.go:527] Start housekeeping for container "/"
...
I0617 23:06:13.197502 2311171 handler.go:325] Added event &{/user.slice/user-1000.slice/[email protected]/app.slice/dbus.socket 2022-06-14 09:43:53.204034932 +1000 AEST containerCreation {<nil>}}
I0617 23:06:13.197513 2311171 factory.go:220] Factory "docker" was unable to handle container "/user.slice/user-1000.slice/[email protected]/app.slice/app-org.gnome.Terminal.slice/vte-spawn-642db2f1-1648-487e-8c09-58ec92a50865.scope"
Open up your browser and type in http://localhost:8080 to access the user interface. You will see something like Figure 18-1.

Figure 18-1. cAdvisor UI

To see the containers running locally, click the Docker Containers link on the main page. You will see a container-specific UI like the one shown in Figure 18-2. My local machine has a Postgres container running, which is why Postgres appears in the figure; you will see whichever containers are running on your own machine.

Figure 18-2. cAdvisor container UI

In the next section, you will explore further the cAdvisor UI and concepts that are related to the project.

Web User Interface

Make sure cAdvisor is running locally, then open your browser to http://localhost:8080. Let's understand some of the data that is presented on the webpage.

Note

The information you see on your local machine might be different from what you read in this book. It depends on the operating system or Linux distribution you are using.

Figure 18-3 shows the subcontainers listed by cAdvisor. These entries provide important statistics and performance information that cAdvisor uses for reporting purposes.

Figure 18-3. Subcontainers view

Click the system.slice link and you will see something like Figure 18-4, which shows the different services running on the local machine.

Figure 18-4. /system.slice view

Figure 18-5 shows gauges of the percentage of memory and disk usage.

Figure 18-5. Memory and disk usage

cAdvisor also shows the different processes that are currently running in your system. Figure 18-6 shows information about the process name, CPU usage, memory usage, running time, and other information.

Figure 18-6. Running process information

Besides the processes that are running on your local machine, cAdvisor also reports information about the different Docker containers that are currently running on your machine. Click the Docker Containers link shown in Figure 18-7.

Figure 18-7. Docker Containers link

After clicking the Docker Containers link, you will be shown the list of containers that you can look into. In my case, as shown in Figure 18-8, there is a Postgres container currently running on my local machine.

Figure 18-8. Docker subcontainers view

Clicking the Postgres container will show the different metrics related to the container, as shown in Figure 18-9.

Figure 18-9. Postgres metrics

In the next section, you will dive into the internals of cAdvisor and learn how its code accomplishes all of this.

Architecture

In this section and the next, you will look at the internals of cAdvisor and how the different components work. cAdvisor supports different containers, but for this chapter you will focus on the code that is relevant to Docker only. Let’s take a look at the high-level component view of cAdvisor shown in Figure 18-10.

Figure 18-10. High-level architecture

Table 18-1 outlines the different components and their usage inside cAdvisor.
Table 18-1

Components

Events Channel: Channel used to report the creation and deletion of containers

InMemoryCache: Cache used to store metric information for all containers being monitored

Container Watcher: Watcher that monitors container activities

Containers: The different containers monitored by cAdvisor

Machine Info: Information related to the local machine that cAdvisor is running on

Plugins: The different container runtimes that cAdvisor supports: Docker, Mesos, CRI-O, systemd, and containerd

Handlers: HTTP handlers that serve requests for metrics and other relevant APIs exposed by cAdvisor

In the next few sections, you will look at different parts of cAdvisor and how they work.

Initialization

Like any other Go application, the entry point of cAdvisor is the main() function in main.go.
func main() {
   ...
  memoryStorage, err := NewMemoryStorage()
  if err != nil {
     klog.Fatalf("Failed to initialize storage driver: %s", err)
  }
   ...
  resourceManager, err := manager.New(memoryStorage, sysFs, housekeepingConfig, includedMetrics, &collectorHttpClient, strings.Split(*rawCgroupPrefixWhiteList, ","), *perfEvents)
   ...
  cadvisorhttp.RegisterPrometheusHandler(mux, resourceManager, *prometheusEndpoint, containerLabelFunc, includedMetrics)
   ...
  rootMux := http.NewServeMux()
   ...
}
The main() function performs the following initialization steps:
  • Setting up cache for storing a container and its metrics

  • Setting up Manager, which performs all the major processing to monitor containers

  • Setting up HTTP handlers to allow the web user interface to get metric data for different containers

  • Starting the Manager, which begins collecting containers and their metrics

Cache management is taken care of by InMemoryCache, which can be found inside memory.go.
type InMemoryCache struct {
      lock              sync.RWMutex
      containerCacheMap map[string]*containerCache
      maxAge            time.Duration
      backend           []storage.StorageDriver
}
func (c *InMemoryCache) AddStats(cInfo *info.ContainerInfo, stats *info.ContainerStats) error {
      ...
}
func (c *InMemoryCache) RecentStats(name string, start, end time.Time, maxStats int) ([]*info.ContainerStats, error) {
      ...
}
func (c *InMemoryCache) Close() error {
      ...
}
func (c *InMemoryCache) RemoveContainer(containerName string) error {
      ...
}
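To make the pattern concrete, the following is a minimal, self-contained sketch of the same idea: per-container slices of samples guarded by a sync.RWMutex, with entries older than maxAge pruned on every write. The Stat type and all names here are stand-ins for illustration, not cAdvisor's actual types:
package main

import (
  "fmt"
  "sync"
  "time"
)

// Stat is a stand-in for cAdvisor's per-container stats type.
type Stat struct {
  Timestamp time.Time
  Value     uint64
}

// statsCache mirrors the shape of InMemoryCache: a map of
// container name to samples, guarded by an RWMutex.
type statsCache struct {
  lock   sync.RWMutex
  stats  map[string][]Stat
  maxAge time.Duration
}

func newStatsCache(maxAge time.Duration) *statsCache {
  return &statsCache{stats: map[string][]Stat{}, maxAge: maxAge}
}

// AddStats appends a sample and drops samples older than maxAge.
func (c *statsCache) AddStats(container string, s Stat) {
  c.lock.Lock()
  defer c.lock.Unlock()
  cutoff := time.Now().Add(-c.maxAge)
  kept := c.stats[container][:0]
  for _, old := range c.stats[container] {
    if old.Timestamp.After(cutoff) {
      kept = append(kept, old)
    }
  }
  c.stats[container] = append(kept, s)
}

// RecentStats returns the samples recorded within [start, end].
func (c *statsCache) RecentStats(container string, start, end time.Time) []Stat {
  c.lock.RLock()
  defer c.lock.RUnlock()
  var out []Stat
  for _, s := range c.stats[container] {
    if !s.Timestamp.Before(start) && !s.Timestamp.After(end) {
      out = append(out, s)
    }
  }
  return out
}

func main() {
  c := newStatsCache(2 * time.Minute)
  c.AddStats("/", Stat{Timestamp: time.Now(), Value: 42})
  fmt.Println(len(c.RecentStats("/", time.Now().Add(-time.Minute), time.Now())))
}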
There are two different HTTP handlers initialized by cAdvisor: API-based HTTP handlers used by the web user interface, and metric HTTP handlers that report metric information in raw format. The following snippet shows the main handler registration code, which registers the different paths that are made available (inside cmd/internal/http/handlers.go):
func RegisterHandlers(mux httpmux.Mux, containerManager manager.Manager, httpAuthFile, httpAuthRealm, httpDigestFile, httpDigestRealm string, urlBasePrefix string) error {
  ...
  if err := api.RegisterHandlers(mux, containerManager); err != nil {
     return fmt.Errorf("failed to register API handlers: %s", err)
  }
  mux.Handle("/", http.RedirectHandler(urlBasePrefix+pages.ContainersPage, http.StatusTemporaryRedirect))
   ...
  return nil
}
The API-based handlers are found inside cmd/internal/api/handler.go, as shown:
func RegisterHandlers(mux httpmux.Mux, m manager.Manager) error {
   ...
  mux.HandleFunc(apiResource, func(w http.ResponseWriter, r *http.Request) {
     err := handleRequest(supportedApiVersions, m, w, r)
     if err != nil {
        http.Error(w, err.Error(), 500)
     }
  })
  return nil
}
The API handlers expose the /api path. To test this handler, make sure cAdvisor is running, then open your browser and enter the URL http://localhost:8080/api/v1.0/containers. You will see something like Figure 18-11.

Figure 18-11. Output of /api/v1.0/containers
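You can also hit the same endpoint programmatically. The following is a small sketch that assumes cAdvisor is running on its default port; it decodes the response into a generic map instead of cAdvisor's own types:
package main

import (
  "encoding/json"
  "fmt"
  "log"
  "net/http"
)

func main() {
  // Query the same URL the browser used.
  resp, err := http.Get("http://localhost:8080/api/v1.0/containers")
  if err != nil {
    log.Fatal(err)
  }
  defer resp.Body.Close()

  // Decode just enough to show the shape of the response.
  var payload map[string]interface{}
  if err := json.NewDecoder(resp.Body).Decode(&payload); err != nil {
    log.Fatal(err)
  }
  fmt.Println("container name:", payload["name"])
}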

Manager

Manager is the main component of cAdvisor. It takes care of the initialization, maintenance, and reporting of the different metrics for the containers it manages. The interface is declared as follows:
type Manager interface {
      Start() error
      Stop() error
      GetContainerInfo(containerName string, query *info.ContainerInfoRequest) (*info.ContainerInfo, error)
      GetContainerInfoV2(containerName string, options v2.RequestOptions) (map[string]v2.ContainerInfo, error)
      SubcontainersInfo(containerName string, query *info.ContainerInfoRequest) ([]*info.ContainerInfo, error)
      AllDockerContainers(query *info.ContainerInfoRequest) (map[string]info.ContainerInfo, error)
      DockerContainer(dockerName string, query *info.ContainerInfoRequest) (info.ContainerInfo, error)
      GetContainerSpec(containerName string, options v2.RequestOptions) (map[string]v2.ContainerSpec, error)
      GetDerivedStats(containerName string, options v2.RequestOptions) (map[string]v2.DerivedStats, error)
      GetRequestedContainersInfo(containerName string, options v2.RequestOptions) (map[string]*info.ContainerInfo, error)
      Exists(containerName string) bool
      GetMachineInfo() (*info.MachineInfo, error)
      GetVersionInfo() (*info.VersionInfo, error)
      GetFsInfoByFsUUID(uuid string) (v2.FsInfo, error)
      GetDirFsInfo(dir string) (v2.FsInfo, error)
      GetFsInfo(label string) ([]v2.FsInfo, error)
      GetProcessList(containerName string, options v2.RequestOptions) ([]v2.ProcessInfo, error)
      WatchForEvents(request *events.Request) (*events.EventChannel, error)
      GetPastEvents(request *events.Request) ([]*info.Event, error)
      CloseEventChannel(watchID int)
      DockerInfo() (info.DockerStatus, error)
      DockerImages() ([]info.DockerImage, error)
      DebugInfo() map[string][]string
}

The interface and its implementation are found inside manager.go.

Manager uses plugins for different container technologies. For example, the Docker plugin is responsible for communicating with the Docker engine. The Docker plugin resides inside the container/docker/plugin.go file. The following is the Docker plugin code:
package docker
import (
  ...
)
const dockerClientTimeout = 10 * time.Second
  ...
func (p *plugin) InitializeFSContext(context *fs.Context) error {
  SetTimeout(dockerClientTimeout)
  // Try to connect to docker indefinitely on startup.
  dockerStatus := retryDockerStatus()
  ...
}
  ...
func retryDockerStatus() info.DockerStatus {
  startupTimeout := dockerClientTimeout
  maxTimeout := 4 * startupTimeout
  for {
       ...
  }
}
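The loop body is elided above, but its shape is a classic retry with backoff. The following self-contained sketch uses the same timeout constants; the doubling policy and the check callback are assumptions for illustration, not cAdvisor's exact logic:
package main

import (
  "errors"
  "log"
  "time"
)

// retryStatus keeps calling check until it succeeds, waiting
// longer between attempts up to a cap.
func retryStatus(check func() error) {
  timeout := 10 * time.Second // mirrors dockerClientTimeout
  maxTimeout := 4 * timeout   // mirrors maxTimeout above
  for {
    if err := check(); err == nil {
      return
    } else {
      log.Printf("docker not ready, retrying in %v: %v", timeout, err)
    }
    time.Sleep(timeout)
    if timeout*2 <= maxTimeout {
      timeout *= 2
    }
  }
}

func main() {
  attempts := 0
  retryStatus(func() error {
    attempts++
    if attempts < 3 {
      return errors.New("engine still starting")
    }
    return nil
  })
  log.Println("docker reachable after", attempts, "attempts")
}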

The main job of Manager is to monitor containers, but before it is able to do that, it needs to find out what containers are available and how to monitor them. Let’s take a look at the first step, which is finding out what containers will be monitored.

As mentioned in the previous section, cAdvisor's notion of a container is not limited to Docker; anything it monitors is treated as a container. Let's take a look at how cAdvisor finds the containers it monitors. They are collected when the Start() function of Manager is called, as shown here:
func (m *manager) Start() error {
  ...
  err := raw.Register(m, m.fsInfo, m.includedMetrics, m.rawContainerCgroupPathPrefixWhiteList)
  if err != nil {
     klog.Errorf("Registration of the raw container factory failed: %v", err)
  }
  rawWatcher, err := raw.NewRawContainerWatcher()
  if err != nil {
     return err
  }
  m.containerWatchers = append(m.containerWatchers, rawWatcher)
  ...
  // Create root and then recover all containers.
  err = m.createContainer("/", watcher.Raw)
  if err != nil {
     return err
  }
  klog.V(2).Infof("Starting recovery of all containers")
  err = m.detectSubcontainers("/")
  if err != nil {
     return err
  }
    ...
  return nil
}
The collection process is performed by the m.createContainer(..) function and Figure 18-12 shows what is created.

Figure 18-12. The createContainer function process

Basically, it does the following:
  • Creating a containerData struct that is populated with container-related information. In this case, it’s populated with information regarding the /sys/fs/cgroup directory.

  • Creating a ContainerHandler and CollectManager that will handle everything related to this particular container (in this case /sys/fs/cgroup) and collecting all the necessary metric information.

  • Once all structs have been initialized successfully, it will call Start() of the containerData struct to start monitoring.

From the steps above, it is clear that cAdvisor is monitoring activities that are happening inside the /sys/fs/cgroup directory. As you learned in Chapter 4, this directory refers to cgroups, which is the cornerstone of Docker containers.

cAdvisor also monitors the subdirectories of /sys/fs/cgroup, which are all treated as containers and will be monitored the same as the main /sys/fs/cgroup directory. This is performed by the detectSubcontainers(..) function, as shown here:
func (m *manager) detectSubcontainers(containerName string) error {
  added, removed, err := m.getContainersDiff(containerName)
  ...
  for _, cont := range added {
     err = m.createContainer(cont.Name, watcher.Raw)
     ...
  }
    ...
  return nil
}
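Here is a minimal sketch of the idea behind getContainersDiff: list the directories currently under a cgroup root, compare them with the set already being watched, and report additions and removals. The function name and the watched-set representation are hypothetical:
package main

import (
  "fmt"
  "os"
  "path/filepath"
)

func diffSubcontainers(root string, watched map[string]bool) (added, removed []string, err error) {
  entries, err := os.ReadDir(root)
  if err != nil {
    return nil, nil, err
  }
  current := map[string]bool{}
  for _, e := range entries {
    if e.IsDir() {
      current[filepath.Join(root, e.Name())] = true
    }
  }
  // Anything present now but not watched yet was added...
  for name := range current {
    if !watched[name] {
      added = append(added, name)
    }
  }
  // ...and anything watched but gone from disk was removed.
  for name := range watched {
    if !current[name] {
      removed = append(removed, name)
    }
  }
  return added, removed, nil
}

func main() {
  added, removed, err := diffSubcontainers("/sys/fs/cgroup", map[string]bool{})
  if err != nil {
    fmt.Println(err)
    return
  }
  fmt.Println("added:", added, "removed:", removed)
}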
Once all the subdirectories of /sys/fs/cgroup have been processed, it adds those containers to be watched by Container Watcher. This is done by the watchForNewContainers() function shown in the following code:
func (m *manager) watchForNewContainers(quit chan error) error {
      ...
      for _, watcher := range m.containerWatchers {
            err := watcher.Start(m.eventsChannel)
            if err != nil {
                  for _, w := range watched {
                        stopErr := w.Stop()
                        ...
                  }
                  return err
            }
            watched = append(watched, watcher)
      }
      err := m.detectSubcontainers("/")
      ...
      return nil
}

After all containers have been set up to be watched, cAdvisor will be informed about any changes to them. This job is done by a goroutine inside the container watcher. In the next section, you will look at how cAdvisor uses inotify, a facility provided by the Linux kernel that lets applications be notified of activity in the directories they watch.

Monitoring Filesystem

cAdvisor uses the inotify API provided by the Linux kernel (https://linux.die.net/man/7/inotify). This API allows applications to monitor filesystem events, such as files being created or deleted. Figure 18-13 shows how cAdvisor uses inotify events.

Figure 18-13. inotify flow in cAdvisor

In the previous section, you learned that cAdvisor monitors and listens for events on /sys/fs/cgroup and its subdirectories. This is how cAdvisor knows when Docker containers are created or deleted. Let's take a look at how it uses inotify for this purpose.
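If you want to experiment with this kernel facility on its own, the following sketch watches a cgroup directory for create and delete events using the third-party github.com/fsnotify/fsnotify library. This library is a stand-in for the demonstration; cAdvisor uses its own inotify bindings, shown next:
package main

import (
  "log"

  "github.com/fsnotify/fsnotify"
)

func main() {
  watcher, err := fsnotify.NewWatcher()
  if err != nil {
    log.Fatal(err)
  }
  defer watcher.Close()

  // Watch the same directory cAdvisor cares about.
  if err := watcher.Add("/sys/fs/cgroup"); err != nil {
    log.Fatal(err)
  }
  for {
    select {
    case event := <-watcher.Events:
      switch {
      case event.Op&fsnotify.Create != 0:
        log.Println("directory created:", event.Name)
      case event.Op&fsnotify.Remove != 0:
        log.Println("directory removed:", event.Name)
      }
    case err := <-watcher.Errors:
      log.Println("watch error:", err)
    }
  }
}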

The code uses an inotify library to listen for events coming from the kernel and a goroutine to process those events. The goroutine is created as part of the initialization process when watchForNewContainers is called; watchForNewContainers in turn calls the Start function inside container/raw/watcher.go, as shown:
func (w *rawContainerWatcher) Start(events chan watcher.ContainerEvent) error {
  watched := make([]string, 0)
  for _, cgroupPath := range w.cgroupPaths {
     _, err := w.watchDirectory(events, cgroupPath, "/")
     ...
     watched = append(watched, cgroupPath)
  }
  go func() {
     for {
        select {
        case event := <-w.watcher.Event():
           err := w.processEvent(event, events)
           if err != nil {
              ...
           }
        case err := <-w.watcher.Error():
           ...
        case <-w.stopWatcher:
           err := w.watcher.Close()
           ...
        }
     }
  }()
  return nil
}
The w.processEvent(..) function takes care of the received inotify event and converts it into its own internal event, as shown:
func (w *rawContainerWatcher) processEvent(event *inotify.Event, events chan watcher.ContainerEvent) error {
  // Convert the inotify event type to a container create or delete.
  var eventType watcher.ContainerEventType
  switch {
  case (event.Mask & inotify.InCreate) > 0:
     eventType = watcher.ContainerAdd
  case (event.Mask & inotify.InDelete) > 0:
     eventType = watcher.ContainerDelete
   ...
  }
  ...
  switch eventType {
  case watcher.ContainerAdd:
     alreadyWatched, err := w.watchDirectory(events, event.Name, containerName)
     ...
  case watcher.ContainerDelete:
     // Container was deleted, stop watching for it.
     lastWatched, err := w.watcher.RemoveWatch(containerName, event.Name)
     ...
  default:
     return fmt.Errorf("unknown event type %v", eventType)
  }
  // Deliver the event.
  events <- watcher.ContainerEvent{
     EventType:   eventType,
     Name:        containerName,
     WatchSource: watcher.Raw,
  }
  return nil
}

The function converts the events received into internal events that the code understands: watcher.ContainerAdd and watcher.ContainerDelete. These events are broadcast internally for other parts of the code to process.

Information from /sys and /proc

In Chapters 2 and 3, you learned about the /sys and /proc filesystems and what kind of system-related information can be found in them. cAdvisor collects machine information in the same way and reports it as part of the metric information.

Manager takes care of collecting and updating machine information, as shown in the following code snippet:
func New(memoryCache *memory.InMemoryCache, sysfs sysfs.SysFs, houskeepingConfig HouskeepingConfig, includedMetricsSet container.MetricSet, collectorHTTPClient *http.Client, rawContainerCgroupPathPrefixWhiteList []string, perfEventsFile string) (Manager, error) {
  ...
  machineInfo, err := machine.Info(sysfs, fsInfo, inHostNamespace)
  ...
}
The primary code that does the collection of machine information can be seen in the following snippet (machine/info.go):
func Info(sysFs sysfs.SysFs, fsInfo fs.FsInfo, inHostNamespace bool) (*info.MachineInfo, error) {
  ...
  clockSpeed, err := GetClockSpeed(cpuinfo)
  ...
  memoryCapacity, err := GetMachineMemoryCapacity()
  ...
  filesystems, err := fsInfo.GetGlobalFsInfo()
  ...
  netDevices, err := sysinfo.GetNetworkDevices(sysFs)
  ...
  topology, numCores, err := GetTopology(sysFs)
  ...
  return machineInfo, nil
}
Here’s the GetMachineMemoryCapacity() function and how it collects memory information using the /proc directory:
func GetMachineMemoryCapacity() (uint64, error) {
  out, err := ioutil.ReadFile("/proc/meminfo")
  if err != nil {
     return 0, err
  }
  memoryCapacity, err := parseCapacity(out, memoryCapacityRegexp)
  if err != nil {
     return 0, err
  }
  return memoryCapacity, err
}
The function reads the /proc/meminfo file and parses the information by calling the parseCapacity() function. The raw information extracted from /proc/meminfo looks like the following:
MemTotal:       16078860 kB
MemFree:          698260 kB
...
Hugepagesize:       2048 kB
Hugetlb:               0 kB
DirectMap4k:      901628 kB
DirectMap2M:    15566848 kB
DirectMap1G:           0 kB
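The following is a minimal sketch of that read-and-parse step. The regular expression is an assumption standing in for cAdvisor's memoryCapacityRegexp; only the /proc/meminfo format itself is guaranteed:
package main

import (
  "fmt"
  "os"
  "regexp"
  "strconv"
)

var memTotalRe = regexp.MustCompile(`MemTotal:\s*([0-9]+) kB`)

func machineMemoryBytes() (uint64, error) {
  out, err := os.ReadFile("/proc/meminfo")
  if err != nil {
    return 0, err
  }
  m := memTotalRe.FindSubmatch(out)
  if m == nil {
    return 0, fmt.Errorf("MemTotal not found in /proc/meminfo")
  }
  kb, err := strconv.ParseUint(string(m[1]), 10, 64)
  if err != nil {
    return 0, err
  }
  return kb * 1024, nil // /proc/meminfo reports kB
}

func main() {
  b, err := machineMemoryBytes()
  if err != nil {
    fmt.Println(err)
    return
  }
  fmt.Println("memory capacity:", b, "bytes")
}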
Let’s look at another function called GetGlobalFsInfo() (fs/fs.go). This function calls another function called GetFsInfoForPath(..) (fs/fs.go), which is shown in the following snippet:
func (i *RealFsInfo) GetFsInfoForPath(mountSet map[string]struct{}) ([]Fs, error) {
  ...
  diskStatsMap, err := getDiskStatsMap("/proc/diskstats")
  ...
  return filesystems, nil
}
It calls getDiskStatsMap(..), passing in /proc/diskstats as the parameter. The function getDiskStatsMap(..) reads and parses the information from that file. The raw information looks like the following:
 ...
 259       0 nvme0n1 17925716 1726716 2140111562 27153144 9657604 6144332 374398866 10096182 1 7081436 37829936 0 0 0 0 666569 580610
 ...
 253       2 dm-2 728297 0 5837468 252644 2635588 0 21084640 7281316 0 334744 7533960 0 0 0 0 0 0
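Each line starts with the device's major and minor numbers and its name, followed by I/O counters in a fixed order. Here is a small sketch of parsing it along the lines of getDiskStatsMap; the field positions come from the kernel's documented format, while the function name is hypothetical:
package main

import (
  "bufio"
  "fmt"
  "os"
  "strings"
)

func printDiskStats() error {
  f, err := os.Open("/proc/diskstats")
  if err != nil {
    return err
  }
  defer f.Close()

  scanner := bufio.NewScanner(f)
  for scanner.Scan() {
    fields := strings.Fields(scanner.Text())
    if len(fields) < 14 {
      continue // skip short or malformed lines
    }
    // fields[0]=major, fields[1]=minor, fields[2]=device name,
    // fields[3]=reads completed, fields[7]=writes completed
    fmt.Printf("%s: reads=%s writes=%s\n", fields[2], fields[3], fields[7])
  }
  return scanner.Err()
}

func main() {
  if err := printDiskStats(); err != nil {
    fmt.Println(err)
  }
}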
Now let's look at how cAdvisor reads information using the /sys directory. The function GetNetworkDevices(..) (utils/sysinfo/sysinfo.go), shown in the following snippet, calls another function to get the information from /sys/class/net.
func GetNetworkDevices(sysfs sysfs.SysFs) ([]info.NetInfo, error) {
  devs, err := sysfs.GetNetworkDevices()
  ...
  return netDevices, nil
}
The sysfs.GetNetworkDevices() (utils/sysfs/sysfs.go) snippet looks like the following:
const (
  ...
  netDir       = "/sys/class/net"
  ...
)
func (fs *realSysFs) GetNetworkDevices() ([]os.FileInfo, error) {
  files, err := ioutil.ReadDir(netDir)
  ...
  var dirs []os.FileInfo
  for _, f := range files {
     ...
  }
  return dirs, nil
}
The function extracts and parses the information, which looks like the following in raw format:
lrwxrwxrwx  1 root root 0 Jun 19 14:09 docker0 -> ../../devices/virtual/net/docker0
...
../../devices/virtual/net/veth710aac6
lrwxrwxrwx  1 root root 0 Jun 19 12:30 veth98e6a97 -> ../../devices/virtual/net/veth98e6a97
lrwxrwxrwx  1 root root 0 Jun 19 14:09 wlp0s20f3 -> ../../devices/pci0000:00/0000:00:14.3/net/wlp0s20f3
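Those entries are symlinks into the device tree, which is why the listing shows link targets. The following small sketch resolves them the same way ls does:
package main

import (
  "fmt"
  "os"
  "path/filepath"
)

func main() {
  const netDir = "/sys/class/net"
  entries, err := os.ReadDir(netDir)
  if err != nil {
    fmt.Println(err)
    return
  }
  for _, e := range entries {
    // Each entry is a symlink to the underlying device path.
    target, err := os.Readlink(filepath.Join(netDir, e.Name()))
    if err != nil {
      continue // not a symlink; skip it
    }
    fmt.Printf("%s -> %s\n", e.Name(), target)
  }
}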

Client Library

In the repository inside the chapter18 folder, there are examples of how to use the cAdvisor client library to communicate with cAdvisor. The examples show how to use the client library to get container information, event streaming from cAdvisor, and so on.
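As a taste of what those examples look like, here is a minimal sketch that uses the client package from the cAdvisor repository to fetch machine information and recent stats for the root container. It assumes cAdvisor is running locally on its default port:
package main

import (
  "fmt"
  "log"

  "github.com/google/cadvisor/client"
  info "github.com/google/cadvisor/info/v1"
)

func main() {
  // Connect to a locally running cAdvisor.
  c, err := client.NewClient("http://localhost:8080/")
  if err != nil {
    log.Fatal(err)
  }

  // Machine-level information collected from /proc and /sys.
  machine, err := c.MachineInfo()
  if err != nil {
    log.Fatal(err)
  }
  fmt.Println("cores:", machine.NumCores, "memory:", machine.MemoryCapacity)

  // The last few stat samples for the root container "/".
  root, err := c.ContainerInfo("/", &info.ContainerInfoRequest{NumStats: 5})
  if err != nil {
    log.Fatal(err)
  }
  fmt.Println("container:", root.Name, "samples:", len(root.Stats))
}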

Summary

In this chapter, you learned about installing and running cAdvisor to monitor metrics of your local machine and Docker containers. The tool provides a lot of information that shows the performance of the different containers that are running on a machine. This chapter discussed how cAdvisor collects metric information for containers and local machines using the knowledge you learned in previous chapters.

cAdvisor provides much more functionality than what was discussed in this chapter. For example, it has built-in support for exporting metrics to Prometheus, and it provides an API that can be used to integrate with third-party or in-house tools to monitor container performance.
