Monitoring a single file

Let's start with the simpler case: monitoring a single file. To do so, we'll create a couple of test files. To keep things a bit organized, create a directory, /tmp/zabbix_logmon/, on A test host, and place two files in it, logfile1 and logfile2. Use the same content for both files:

2018-08-13 13:01:03 a log entry
2018-08-13 13:02:04 second log entry
2018-08-13 13:03:05 third log entry
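The setup above can also be scripted; a minimal sketch that creates the directory and both test files with identical content:

```shell
#!/bin/sh
# Create the test directory and the two identical log files
# used in the examples in this section.
mkdir -p /tmp/zabbix_logmon

for f in logfile1 logfile2; do
    cat > "/tmp/zabbix_logmon/$f" <<'EOF'
2018-08-13 13:01:03 a log entry
2018-08-13 13:02:04 second log entry
2018-08-13 13:03:05 third log entry
EOF
done
```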
Active items must be properly configured for log monitoring to work; we did that in Chapter 3, Monitoring with Zabbix Agents and Basic Protocols.

With the files in place, let's proceed to creating items:

  1. Navigate to Configuration | Hosts, click on Items next to A test host, then click on Create item. Fill in the following:
    • Name: First logfile
    • Type: Zabbix agent (active)
    • Key: log[/tmp/zabbix_logmon/logfile1]
    • Type of information: Log
    • Update interval: 1s

  2. When done, click on the Add button at the bottom.

As mentioned earlier, log monitoring only works as an active item, so we used that item type. For the key, the first parameter is required; it's the full path to the file we want to monitor. We also used a special type of information here, log. But what about the update interval, why did we use such a small interval of one second? For log items, this interval isn't about making an actual connection between the agent and the server; it's only about the agent checking whether the file has changed: it does a stat() call, similar to what tail -f does on some platforms/filesystems. A connection to the server is only made when the agent has anything to send in.
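The agent's approach can be illustrated with a small shell sketch. This is an illustration only, not actual Zabbix agent code, and the file names are made up for the demo: remember how much of the file has already been seen, use a cheap size check to detect growth, and read only the newly appended bytes.

```shell
#!/bin/sh
# Illustration only, not actual Zabbix agent code: track how much of
# the file we've already processed, and read only newly appended bytes.
LOGFILE=/tmp/logmon_demo.log        # hypothetical demo file
OFFSET_FILE=/tmp/logmon_demo.offset # hypothetical offset store

printf 'first line\n' > "$LOGFILE"
rm -f "$OFFSET_FILE"

check_log() {
    old_size=$(cat "$OFFSET_FILE" 2>/dev/null || echo 0)
    new_size=$(wc -c < "$LOGFILE")  # cheap size check, like stat()
    if [ "$new_size" -gt "$old_size" ]; then
        # Read only the bytes appended since the last check
        tail -c +"$((old_size + 1))" "$LOGFILE"
        echo "$new_size" > "$OFFSET_FILE"
    fi
}

check_log                           # prints "first line"
printf 'second line\n' >> "$LOGFILE"
check_log                           # prints only "second line"
```

Only when `check_log` finds new data would an agent open a connection and send it to the server; between changes, each check is just a size lookup.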

With active items, log monitoring is both quick to react, as it's checking the file locally, and avoids excessive connections. It could be implemented as a somewhat less efficient passive item, but that's not supported.

With the item in place, it shouldn't take longer than three minutes for the data to arrive, if everything works as expected, of course. Up to one minute could be required for the server to update the configuration cache, and up to two minutes could be required for the active agent to update its list of items. Let's verify this: navigate to Monitoring | Latest data and filter by host, A test host. Our First logfile item should be there, and it should have some value as well:

Even short values are excessively trimmed here. It's hoped that this will be improved in future releases. If the item is unsupported and the configuration section complains about permissions, make sure the permissions actually allow the Zabbix user to access that file. If the permissions on the file itself look correct, check the execute permission on all the upstream directories too. Here and later, keep in mind that unsupported items will take up to 10 minutes to update after the issue has been resolved.
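One quick way to inspect those upstream directories is to walk up from the log file and list each parent; the Zabbix user needs the execute (x) bit on every one of them. This is a generic sketch; the direct test with sudo is shown commented out, since it requires the zabbix user to exist on the system:

```shell
#!/bin/sh
# Print the permissions of every directory above the log file;
# the zabbix user needs execute (x) on all of them to reach the file.
path=/tmp/zabbix_logmon/logfile1    # file from the examples above

dir=$(dirname "$path")
while [ "$dir" != "/" ]; do
    ls -ld "$dir"
    dir=$(dirname "$dir")
done
ls -ld /

# If sudo and the zabbix user are available, the most direct test is:
# sudo -u zabbix cat /tmp/zabbix_logmon/logfile1
```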

As with other non-numeric items, Zabbix knows that it can't graph logs, hence there's a History link on the right-hand side; let's click on it:

All of the lines from our log file are here. By default, Zabbix log monitoring parses whole files from the very beginning. That's good in this case, but what if we wanted to start monitoring some huge existing log file? Not only would that parsing be wasteful, we would also likely send lots of useless old information to the Zabbix server. Luckily, there's a way to tell Zabbix to only parse new data since the monitoring of that log file started. We could try that out with our second file and, to keep things simple, we could also clone our first item. Let's proceed with the following steps:

  1. Navigate to Configuration | Hosts, click on Items next to A test host, then click on First logfile in the Name column. At the bottom of the item configuration form, click on Clone and make the following changes:
    • Name: Second logfile
    • Key: log[/tmp/zabbix_logmon/logfile2,,,,skip]
There are four commas in the item key; this way, we're skipping some parameters and only specifying the first and fifth parameters.
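For reference, the log item key accepts parameters along these lines (based on the Zabbix agent item key documentation for this general version; check the documentation for your exact version, as parameters have been added over releases):

```
log[file,<regexp>,<encoding>,<maxlines>,<mode>,<output>,<maxdelay>]
```

Here, mode is the fifth parameter, and skip is its non-default value; the default parses the file from the very beginning.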
  2. When done, click on the Add button at the bottom.

As before, it might take up to three minutes for this item to start working. Even when it does, there will be nothing to see on the latest data page; we specified the skip parameter, so only new lines will be considered.

Allow at least three minutes to pass after adding the item before executing the following command. Otherwise, the agent won't have the new item definition yet.

To test this, we could add some lines to Second logfile. On A test host, execute the following:

 $ echo "2018-12-1 10:34:05 fourth log entry" >> /tmp/zabbix_logmon/logfile2
This and the following fake log entries increase the timestamp in the line itself; this isn't required, but it looks a bit better. For now, Zabbix ignores that timestamp anyway.

A moment later, this entry should appear in the latest data page:

If we check the item history, it's the only entry, as Zabbix only cares about new lines now.

The skip parameter only affects behavior when Zabbix starts monitoring a new log file. Once a file is already being monitored, the agent does not re-read it, with or without that parameter; it only reads newly added data.