Tuning and Observing netmon Queues

During initial discovery you may notice that netmon's CPU utilization is much lower than expected, that the discovery rate is correspondingly low, or that NNM displays device status changes late, even though overall system utilization remains low. You want to know why netmon isn't discovering devices faster and what you can do to improve the situation.

Enter netmon's optional -q ICMP-queue-length and -Q SNMP-queue-length parameters, set in the netmon.lrf file. Both default to 20 on UNIX systems and to 3 on Windows NT systems. These values may be increased according to the following guidelines:

Increase the ICMP-queue-length parameter only if netmon is falling behind (as indicated by the queue always sitting at its maximum), because the operating system buffer that holds incoming ICMP replies may overflow, resulting in random false status changes. Likewise, increase the SNMP-queue-length parameter only if netmon is falling behind, because the operating system limits the number of open file descriptors per process; this limit is a kernel tunable parameter whose default is often 64. (As an aside, note that the snmpCollect daemon is similarly constrained.) Remember to record changes to the netmon.lrf file with ovaddobj before you stop and restart netmon.
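A minimal sketch of that edit-and-restart sequence follows. The $OV_LRF path and the exact layout of the LRF entry vary by platform and NNM version, so treat them as assumptions and check the netmon.lrf shipped with your installation:

    # Stop netmon, raise the queue lengths, re-register the LRF, restart.
    # $OV_LRF and the argument-field syntax are assumptions; verify locally.
    ovstop netmon
    vi $OV_LRF/netmon.lrf         # add or change "-q 40 -Q 40" in netmon's argument field
    ovaddobj $OV_LRF/netmon.lrf   # record the change with ovaddobj
    ovstart netmon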

The theory behind increasing these maximum queue lengths is that doing so lets netmon issue more outstanding requests. This increases throughput and helps netmon keep up with its polling schedule, because sustainable throughput is roughly the queue depth divided by the average response time. For example, with the default queue length of 20, you can expect netmon to keep up as long as the average response time to SNMP requests is 50 milliseconds or less.
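As a back-of-the-envelope check (a sketch; the numbers are simply the defaults discussed above), the arithmetic can be run from the shell:

    # Sustainable request rate ~= queue depth / average response time
    # (Little's law). depth=20 and rtt=0.050 s match the UNIX defaults.
    awk 'BEGIN { depth = 20; rtt = 0.050;
                 printf "~%.0f requests/second\n", depth / rtt }'

With 20 outstanding requests each completing in 50 milliseconds, netmon can sustain roughly 400 requests per second; slower agents reduce that rate proportionally.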

Increasing the maximum SNMP queue length also increases netmon's resistance to slow and unresponsive SNMP agents. For example, on a bad day netmon may be polling 15 slow devices (perhaps located on a remote LAN across a congested WAN link). Those 15 requests tie up 15 of the default 20 SNMP queue slots while they wait to complete or time out, limiting netmon to polling only five other devices at a time for the duration.

How large may these queue limits be? A rule of thumb is 200. After you increase the queue values, check netmon’s queue behavior often. Be alert for anomalous behavior in discovery and polling, and check the log files for potential problems caused by too high a queue value.

Check netmon's polling activity using the NNM menu Performance:Network Polling Statistics, then wait a minute or so for the 10-second polling samples to show a graphical trend. A command-line method for checking netmon's activity is netmon -a 5, which signals the running netmon daemon to dump the sizes of its ping (ICMP) and SNMP lists to $OV_LOG/netmon.trace. An effective way to use this feature is to open two shells in separate windows: run tail -f $OV_LOG/netmon.trace in the first and netmon -a 5 in the second. Every time you run netmon -a 5, the new output appears in the first window.
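The two-window workflow looks like this (the commands are as given above; only the window arrangement is a suggestion):

    # Window 1: follow the trace file as netmon appends to it
    tail -f $OV_LOG/netmon.trace

    # Window 2: signal the running netmon to dump its ICMP and SNMP
    # queue sizes to the trace file; repeat to sample over time
    netmon -a 5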

The effectiveness of netmon tuning is predicated on the assumption that there are no other inhibitors to its performance. If the NNM disk I/O system is heavily utilized (check this with HP GlancePlus Motif, the top command, or the iostat command), tuning netmon to increase its throughput will yield no benefit. Likewise, if the DNS servers are terribly slow, or if a low-bandwidth, highly utilized WAN link separates the NNM system from its management domain, tuning netmon won't improve performance much.
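For instance (a minimal sketch; the 5-second interval is arbitrary), you can watch for a saturated disk subsystem before spending time on queue tuning:

    # Report disk activity every 5 seconds; interrupt with Ctrl-C.
    # Sustained high utilization means netmon tuning alone won't help.
    iostat 5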

The netmon workhorse is frequently patched, updated, and revised as HP evolves the NNM product. New features and command line arguments come along and some old ones go away. Keep the online manpage handy. Windows NT users should look in the online help for the netmon Reference Page. It is usually the most current documentation available because the HP patches often update manpages along with code changes. In this way, new ways to tune netmon will come to your attention.
