For the last chapter, we have prepared something special. As a long-time Zabbix user, I have not failed to notice the importance of cloud integration for tools such as Zabbix. For some people, the cloud can be daunting, so with this chapter, I want to show you just how easy it can be to start working with the most popular cloud providers and Zabbix.
We are going to start by talking about monitoring the Amazon Web Services (AWS) cloud with Zabbix. Then we will also see how the same things are done using Microsoft Azure so we can clearly see the differences.
After going through these cloud products, we'll also check out container monitoring with Docker, a very popular product that can also benefit greatly from Zabbix monitoring. Follow these recipes closely and you will be able to monitor all of these products easily and extend that monitoring using Zabbix. This chapter comprises the following recipes:
As this chapter focuses on AWS, Microsoft Azure, and Docker monitoring, we are going to need a working AWS, Microsoft Azure, or Docker setup. The recipes do not cover how to set these up, so make sure to have your own infrastructure at the ready.
Furthermore, we are going to need our Zabbix server running Zabbix 6. We will call this server zbx-home in this chapter.
You can download the code files for this chapter from the following GitHub link: https://github.com/PacktPublishing/Zabbix-6-IT-Infrastructure-Monitoring-Cookbook/tree/main/chapter13.
A lot of infrastructure is moving toward the cloud these days and it's important to keep an eye on this infrastructure as much as you would if it were your own hardware. In this recipe, we are going to discover how to monitor Relational Database Service (RDS) instances and S3 buckets with our Zabbix setup.
For this recipe, we are going to need our AWS cloud with some S3 buckets and/or RDS instances in it already. Of course, we will also need our Zabbix server, which we'll call zbx-home in this recipe.
Then, last but not least, we will require some templates and hosts, which we can import. We can download the XML files here: https://github.com/PacktPublishing/Zabbix-6-IT-Infrastructure-Monitoring-Cookbook/tree/main/chapter13.
Important Note
Using Amazon CloudWatch is not free, so you will incur costs. Make sure you check out the Amazon pricing for CloudWatch before proceeding: https://aws.amazon.com/cloudwatch/pricing/.
Setting up AWS monitoring might seem like a daunting task at first, but once we get the hang of the technique it's not that difficult. Let's waste no more time and check out one of the methods we could use:
curl https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip -o "awscliv2.zip"
The result will look like this:
For RHEL-based systems:
dnf install unzip
For Ubuntu systems:
apt install unzip
unzip -q awscliv2.zip
./aws/install
The output will look like this:
mkdir /var/lib/zabbix
chown zabbix:zabbix /var/lib/zabbix/
su -s /bin/bash zabbix
aws configure
exit
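If you prefer not to run `aws configure` interactively, the files it creates can also be written by hand. The following sketch shows the two files the AWS CLI reads from the zabbix user's home directory; the key values and region are placeholders, not real credentials:

```
# /var/lib/zabbix/.aws/credentials
[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>

# /var/lib/zabbix/.aws/config
[default]
region = eu-west-1
output = json
```

Make sure both files are owned by the zabbix user and are not world-readable, as the credentials file contains your secret key.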
cd /usr/lib/zabbix/externalscripts
vim aws_script.sh
#!/bin/bash
instance=$1
metric=$2
now=$(date +%s)
aws cloudwatch get-metric-statistics --metric-name "$metric" --start-time "$((now - 300))" --end-time "$now" --period 300 --namespace AWS/RDS --dimensions Name=DBInstanceIdentifier,Value="$instance" --statistics Average
chown zabbix:zabbix aws_script.sh
chmod 700 aws_script.sh
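Before wiring the script into Zabbix, it is worth sanity-checking the argument handling from the shell. The sketch below stubs out the `aws` command with a shell function so the call can be exercised without an AWS account; the stub's JSON is purely illustrative and not real CloudWatch output:

```shell
#!/bin/bash
# Stub the AWS CLI so the aws_script.sh call can be tested offline.
aws() {
  # $4 is the value passed to --metric-name in the call below.
  echo "{\"Label\": \"$4\", \"Datapoints\": []}"
}

instance="mydb-instance"    # hypothetical RDS identifier
metric="CPUUtilization"
now=$(date +%s)

# The same call aws_script.sh performs; $((now - 300)) needs no bc.
result=$(aws cloudwatch get-metric-statistics --metric-name "$metric" \
    --start-time "$((now - 300))" --end-time "$now" --period 300 \
    --namespace AWS/RDS \
    --dimensions Name=DBInstanceIdentifier,Value="$instance" \
    --statistics Average)
echo "$result"
```

Once the real AWS CLI is configured for the zabbix user, the same invocation returns actual CloudWatch datapoints.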
cd /etc/zabbix/zabbix_agentd.d/
For the Zabbix agent 2:
cd /etc/zabbix/zabbix_agent2.d/
vim userparameter_aws.conf
#Buckets
UserParameter=bucket.discovery,aws s3api list-buckets --query "Buckets[]"
UserParameter=bucket.get[*],aws s3api list-objects --bucket "$1" --output json --query "[sum(Contents[].Size), length(Contents[])]"
#RDS
UserParameter=rds.discovery,aws rds describe-db-instances --output json --query "DBInstances"
UserParameter=rds.metrics.discovery[*],aws cloudwatch list-metrics --namespace AWS/RDS --dimensions Name=DBInstanceIdentifier,Value="$1" --output json --query "Metrics"
zabbix_agentd -R userparameter_reload
Tip
Using user parameters will work with both the Zabbix agent and Zabbix agent 2. This means that irrespective of whether you are required to run the old or new agent, you can set up AWS monitoring without any issues.
That's the final step for this recipe. Check out the new templates, hosts, and how it all fits together in the How it works… section.
Now that we've done all the setup at the Linux CLI level and have imported the templates and hosts, let's see what they do. Under Configuration | Hosts, we can now see two new hosts.
First, let's look at the AWS Bucket discovery host. This host will discover our AWS buckets, such as the S3 bucket. We can see that the host only has one configuration, which is a discovery rule. If we go to this discovery rule, called Bucket discovery, we can see that it uses the item key bucket.discovery. This item key is defined by us in the user parameters and executes the following command:
aws s3api list-buckets --query "Buckets[]"
We do this to get every single AWS bucket and put it in the {#NAME} LLD macro.
Furthermore, there are three item prototypes on the discovery rule. The most important item prototype is the one called {#NAME}, which will use the Zabbix agent to execute the user parameter with the bucket.get item key for every bucket found for the {#NAME} LLD macro we just discovered in the discovery rule.
The bucket.get item key then executes the command we defined, which is the following:
aws s3api list-objects --bucket "$1" --output json --query "[sum(Contents[].Size), length(Contents[])]"
This command gets all of our information from the AWS buckets found and adds it to an item on our AWS Bucket discovery host. We can then use dependent items to extract the individual values from this master item and store them in separate items. Check out Chapter 3, Setting Up Zabbix Monitoring, for more information on dependent items.
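As an illustration, the `[size, count]` array that bucket.get returns can be split with JSONPath preprocessing on two dependent items (the item names here are illustrative, not necessarily those used in the template):

```
Master item    : {#NAME}  (key bucket.get[{#NAME}])        ->  [123456789, 42]
Dependent item : Bucket size   - preprocessing JSONPath $[0]  ->  123456789 (bytes)
Dependent item : Object count  - preprocessing JSONPath $[1]  ->  42 (objects)
```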
We also have our AWS RDS discovery host, which will discover AWS RDS instances. If we check out the host, we can see only one configuration, which is the discovery rule Instance discovery. This discovery rule has one host prototype on it to create new hosts from every RDS found in our AWS setup. It uses the rds.discovery item key, which executes the following command:
aws rds describe-db-instances --output json --query "DBInstances"
This puts every RDS instance in the {#NAME} LLD macro and creates a new Zabbix host for it. After creating the host, it will also make sure that the AWS RDS discovery template is linked to the new host. The template has a discovery rule to get some RDS metrics from the RDS instance using the item key rds.metrics.discovery, which executes the following command:
aws cloudwatch list-metrics --namespace AWS/RDS --dimensions Name=DBInstanceIdentifier,Value="$1" --output json --query "Metrics"
What we are doing here is using the AWS CLI to execute commands on our Zabbix server through the Zabbix agent user parameters. The Zabbix agent runs the AWS CLI command and retrieves data from AWS CloudWatch. These metrics are stored in the database and then used in our items.
Now that we've seen how to use AWS CloudWatch, we can extend the Zabbix agent further by adding extra user parameters or by creating more dependent items and using this information.
It takes time to get started with AWS CloudWatch monitoring, as it requires a good understanding of the AWS CLI commands and of CloudWatch itself. The templates provided in this recipe give you a solid foundation to build on.
Make sure to check out the AWS documentation for more information on the commands that we can use at the following link:
https://docs.aws.amazon.com/cli/latest/reference/#available-services
The Microsoft Azure cloud is a big player in the cloud market these days and it's important to keep an eye on this infrastructure as much as you would your own hardware. In this recipe, we are going to discover how to monitor Azure instances with our Zabbix setup.
For this recipe, we are going to need our Azure cloud with an Azure DB instance in it already. The recipe does not cover how to set up an Azure DB instance, so make sure to have this in advance. We will also need our Zabbix server, which we'll call zbx-home in this recipe.
We have split up the Azure CLI installation aspect into RHEL-based and Ubuntu systems. Make sure to use the guide that is appropriate for you.
Then, last but not least, we will require some templates and hosts, which we can import. We can download the XML files here: https://github.com/PacktPublishing/Zabbix-6-IT-Infrastructure-Monitoring-Cookbook/tree/main/chapter13.
For Azure monitoring, we use some of the same techniques as we do for AWS monitoring. It might look a bit daunting, but it's easier than it seems. Let's check out one of the techniques we can use for Azure monitoring:
Let's cover RHEL-based systems first.
First things first, let's check out the installation process on a RHEL-based host:
rpm --import https://packages.microsoft.com/keys/microsoft.asc
echo -e "[azure-cli]
name=Azure CLI
baseurl=https://packages.microsoft.com/yumrepos/azure-cli
enabled=1
gpgcheck=1
gpgkey=https://packages.microsoft.com/keys/microsoft.asc"| tee /etc/yum.repos.d/azure-cli.repo
dnf -y install azure-cli
Now, let's check out the installation process on a Ubuntu host:
apt install ca-certificates curl apt-transport-https lsb-release gnupg
curl -sL https://packages.microsoft.com/keys/microsoft.asc |gpg --dearmor | tee /etc/apt/trusted.gpg.d/microsoft.gpg > /dev/null
AZ_REPO=$(lsb_release -cs)
echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main" | tee /etc/apt/sources.list.d/azure-cli.list
apt update
apt install azure-cli
Now that we've installed the Azure CLI we can start setting up the Azure monitoring:
su -s /bin/bash zabbix
az login
exit
For the Zabbix agent:
cd /etc/zabbix/zabbix_agentd.d/
For the Zabbix agent 2:
cd /etc/zabbix/zabbix_agent2.d/
vim userparameter_azure.conf
UserParameter=azure.db.discovery,az resource list --resource-type "Microsoft.DBforMySQL/servers"
cd /usr/lib/zabbix/externalscripts/
vim azure_script.sh
#!/bin/bash
id=$1
metric=$2
curr=$(date --utc +%Y-%m-%dT%H:%M:%SZ)
new=$(date --utc -d "$curr -5 minutes" +%Y-%m-%dT%H:%M:%SZ)
az monitor metrics list --resource "$id" --metric "$metric" --start-time "$new" --end-time "$curr" --interval PT5M
chown zabbix:zabbix azure_script.sh
chmod 700 azure_script.sh
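As a quick sanity check of the 5-minute window used by azure_script.sh, the following snippet (assuming GNU date, which the script itself relies on) confirms that the two timestamps it generates are exactly 300 seconds apart:

```shell
#!/bin/bash
# Reproduce the window calculation from azure_script.sh.
curr=$(date --utc +%Y-%m-%dT%H:%M:%SZ)
new=$(date --utc -d "$curr -5 minutes" +%Y-%m-%dT%H:%M:%SZ)

# Convert both back to epoch seconds; the difference must be 300.
diff=$(( $(date --utc -d "$curr" +%s) - $(date --utc -d "$new" +%s) ))
echo "window: $diff seconds"
```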
For the Zabbix agent:
systemctl restart zabbix-agent
For the Zabbix agent 2:
systemctl restart zabbix-agent2
That's the final step for this recipe. Check out the new templates, hosts, and how it all fits together in the How it works… section.
If you've followed the recipe regarding AWS monitoring, you might think that Azure monitoring works the same. In fact, we employ a different Zabbix monitoring technique here, so let's check it out.
After adding the template and the host to our Zabbix server, we can go to Configuration | Hosts and check out our new host. We can see one new host here called Discover Azure DBs. When we examine this host, we can see that it only has one configuration, which is a discovery rule called Discover Azure DBs, like the host.
Under this discovery rule, we find a single host prototype called {#NAME}. This host prototype uses the discovery rule's azure.db.discovery item key to execute the following command:
az resource list --resource-type "Microsoft.DBforMySQL/servers"
With this command, the {#NAME} LLD macro is filled with our Azure DB instances, and we create a new host for every single Azure DB instance found. Each new host then gets the Azure DB template linked to it.
When we check out the template under Configuration | Templates, we can see that it has 12 items. Let's check out the CPU Load item. This item is of the External check type and uses the item key azure_script.sh[{$ID},cpu_percent] to execute the external script azure_script.sh, feeding it the parameters {$ID} and cpu_percent. The script uses the Azure CLI to retrieve the value, which is then stored in the Zabbix database.
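Schematically, the external check resolves as follows (the resource ID shown is a placeholder for the value of the {$ID} macro):

```
Item key : azure_script.sh[{$ID},cpu_percent]
Executes : /usr/lib/zabbix/externalscripts/azure_script.sh "<resource-id>" "cpu_percent"
           ($1 = the {$ID} macro value, $2 = the metric name)
```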
We can discover way more from Azure using the method applied in this recipe. The script we employed is used to get metrics from Azure, which we can put in items by feeding them the correct parameters.
Check out the Azure CLI documentation for more information on the metrics retrieved using the script at the following link:
https://docs.microsoft.com/en-us/cli/azure/monitor/metrics?view=azure-cli-latest
Since the release of Zabbix 5, monitoring your Docker containers has become a lot easier thanks to the introduction of Zabbix agent 2 and its plugins. Using Zabbix agent 2 with Zabbix 6, we are able to monitor our Docker containers out of the box.
In this recipe, we are going to see how to set this up and how it works.
For this recipe, we require some Docker containers. We won't go over the setup of Docker containers, so make sure to do this yourself. Furthermore, we are going to need Zabbix agent 2 installed on the hosts running those Docker containers. The first-generation Zabbix agent does not work for this recipe; Zabbix agent 2 is required.
We also need our Zabbix server to actually monitor the Docker containers. We will call our Zabbix server zbx-home.
Let's waste no more time and dive right into the process of monitoring your Docker setup with Zabbix:
For RHEL-based systems:
rpm -Uvh https://repo.zabbix.com/zabbix/6.0/rhel/8/x86_64/zabbix-release-6.0-1.el8.noarch.rpm
dnf clean all
For Ubuntu systems:
wget https://repo.zabbix.com/zabbix/6.0/ubuntu/pool/main/z/zabbix-release/zabbix-release_6.0-1+ubuntu20.04_all.deb
dpkg -i zabbix-release_6.0-1+ubuntu20.04_all.deb
apt update
For RHEL-based systems:
dnf install zabbix-agent2
For Ubuntu systems:
apt install zabbix-agent2
vim /etc/zabbix/zabbix_agent2.conf
Server=10.16.16.102
systemctl restart zabbix-agent2
gpasswd -a zabbix docker
After adding the zabbix user to the docker group, restart Zabbix agent 2 once more so that the new group membership takes effect.
That's all there is to monitoring Docker containers with Zabbix server. Let's now see how it works.
Docker monitoring in Zabbix these days is easy, due to the new Zabbix agent 2 support and default templates. On occasion though, a default template does not cut it, so let's break down the items used.
Almost all the items we can see on our host are dependent items, most of which are dependent on the master item, Docker: Get info. This master item is the most important item on our Docker template. It executes the docker.info item key, which is built into the new Zabbix agent 2. This item retrieves a list with all kinds of information from our Docker setup. We use the dependent items and preprocessing to get the values we want from this master item.
The Docker template also contains two Zabbix discovery rules, one to discover Docker images and one to discover Docker containers. If we check out the discovery rule for Docker containers called Containers discovery, we can see what happens. Our Zabbix Docker host will use the docker.containers.discovery item key to find every container and put this in the {#NAME} LLD macro. In the item prototypes, we then use this {#NAME} LLD macro to discover statistics with another master item, such as docker.container_info. From this master item, we then use the dependent items and preprocessing again to include this information in other item prototypes as well. We are now monitoring a bunch of statistics straight from our Docker setup.
If you want to get values from Docker that aren't in the default template, check out the information collected with the master items on the template. Use a new dependent item (prototype) and then use preprocessing to get the correct data from the master item.
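For example, a custom dependent item on the Docker: Get info master item could extract the number of running containers. The sketch below is illustrative; the JSONPath field name is taken from Docker's /info API payload and should be verified against the actual master item data on your setup:

```
Master item    : Docker: Get info   (key docker.info, built into Zabbix agent 2)
Dependent item : Containers running (custom item prototype)
Preprocessing  : JSONPath  $.ContainersRunning
```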
If you want to learn more about the Zabbix agent 2 Docker item keys, check out the supported item key list for Zabbix agent 2 in the Zabbix documentation.