Introduction

There is an often-heard philosophy out there about performance management in the real world:

“To do network performance planning properly you have to measure utilization at all points of the network and model the topology in detail. You don’t have the resources to measure anything, you don’t really know what the network looks like, and it would take forever to simulate it all. Therefore, it’s impractical to do capacity planning, and we should continue to provision bandwidth to deal with performance problems.”

Regardless of the planning methodology used, it’s important to measure network performance in order to manage it. Network troubleshooters need real-time utilization and error data. The help desk needs to view performance data in relation to a user complaint. The network engineering staff needs performance data for capacity planning. The IT group needs data to present at the monthly service level agreement (SLA) meetings.

Providing data for the monthly meetings means configuring NNM to measure the agreed-upon performance metrics. Because end-to-end transaction response time data is difficult to gather, more robust metrics such as line utilization and active user counts are usually more practical.

Determining how long to keep NNM performance data online involves a trade-off between performance, convenience, and cost. Troubleshooters need about an hour’s worth of real-time data, while capacity planners require up to a year’s worth of historical data. Storing more data online can reduce performance and increase system administration overhead unless a more powerful and costly NNM platform is configured.
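
To get a feel for how quickly online history accumulates, consider the rough Python sketch below. The interface count, variables per interface, and per-sample storage size are illustrative assumptions, not NNM's actual collection defaults or storage format:

    # Illustrative estimate of online data growth; all figures are assumptions.
    interfaces       = 500    # interfaces under data collection
    vars_per_iface   = 4      # MIB variables collected per interface
    interval_sec     = 300    # 5-minute sample interval
    bytes_per_sample = 50     # assumed on-disk size of one stored sample

    samples_per_year = (365 * 24 * 3600) // interval_sec   # about 105,000
    total_bytes = interfaces * vars_per_iface * samples_per_year * bytes_per_sample

    print(f"Samples per variable per year: {samples_per_year:,}")
    print(f"Approximate online storage for one year: {total_bytes / 1024**3:.1f} GB")

Even with these modest assumptions, a year of history runs to roughly ten gigabytes, which is why the retention period is a deliberate trade-off rather than a default.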

What is an appropriate SNMP data sample rate? Sampling too quickly may overload some network devices and will certainly increase network management traffic, while overly long sample intervals smooth away the useful variation in the performance metrics. A five-minute sample interval is suggested.

The Heisenberg uncertainty principle of quantum physics can be stretched to explain why excessive SNMP prodding of the network can limit how accurately it can be measured.

How much traffic does an NNM system actually create? You can attempt to quantify this with a simple polling example. Note that configuration checking, status polling, HTTP and Java, X-Windows, ITO, and Measureware contribute traffic as well.
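
As a rough illustration of that polling arithmetic, the Python sketch below estimates steady-state polling load. The interface count, variables per poll, and per-exchange packet size are hypothetical assumptions chosen only to show the method:

    # Back-of-the-envelope estimate of SNMP polling traffic; figures are assumptions.
    interfaces    = 500      # managed interfaces being polled
    vars_per_poll = 6        # MIB variables requested per interface
    bytes_per_var = 200      # approx. request + response bytes per variable
    interval_sec  = 300      # 5-minute sample interval

    bytes_per_cycle = interfaces * vars_per_poll * bytes_per_var
    average_bps     = bytes_per_cycle * 8 / interval_sec

    print(f"Traffic per polling cycle: {bytes_per_cycle / 1024:.0f} KB")
    print(f"Average management load:   {average_bps / 1000:.1f} kbit/s")

The result, a few hundred kilobytes every five minutes, is modest on a LAN but worth checking before polling across slow WAN links.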

Deciding what SNMP data to collect from the hundreds of possible MIB values is best done using the KISS (keep it simple, stupid) principle: a few system and network utilization and error statistics often suffice. MIB expressions are appropriate here because percentages are more useful than raw counter values.

NNM allows you to configure performance thresholds to generate alarms. Thresholds can be established using baselining, or analytical and rule-of-thumb methods. Set threshold events to low priority unless you have a process for dealing with them.
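
The baselining approach can be as simple as deriving a threshold from recent history. The Python sketch below shows one common rule of thumb, mean plus two standard deviations; it illustrates the idea only and is not NNM's internal algorithm, and the sample data is hypothetical:

    import statistics

    def baseline_threshold(samples, k=2.0):
        """Rule-of-thumb baseline: mean plus k standard deviations
        of historical utilization samples (illustrative only)."""
        return statistics.mean(samples) + k * statistics.pstdev(samples)

    # A hypothetical week of peak-hour utilization percentages for one link
    history = [12.0, 15.5, 11.2, 40.3, 38.9, 14.7, 13.1, 42.0]
    print(f"Suggested alarm threshold: {baseline_threshold(history):.1f}% utilization")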

MIB expressions allow you to configure NNM to sample several SNMP variables, evaluate a formula containing them, and return the result. Typically, you want to calculate a percentage; for example, an error counter is meaningless unless it's normalized by the corresponding packet counter and converted to a percentage. NNM provides many predefined MIB expressions that you can use out of the box or as templates.
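
As a concrete illustration of that normalization, the sketch below computes an inbound error percentage from two consecutive samples of standard MIB-2 interface counters (ifInErrors, ifInUcastPkts, ifInNUcastPkts). It mirrors what a typical error-rate MIB expression calculates, but it is written in Python rather than NNM's expression syntax, and the sample values are hypothetical:

    def input_error_percent(prev, curr):
        """Percent of inbound packets in error between two samples of MIB-2 counters."""
        errors  = curr["ifInErrors"] - prev["ifInErrors"]
        packets = ((curr["ifInUcastPkts"]  - prev["ifInUcastPkts"]) +
                   (curr["ifInNUcastPkts"] - prev["ifInNUcastPkts"]))
        return 100.0 * errors / packets if packets else 0.0

    # Hypothetical consecutive samples from one interface
    t0 = {"ifInErrors": 10, "ifInUcastPkts": 50_000, "ifInNUcastPkts": 2_000}
    t1 = {"ifInErrors": 36, "ifInUcastPkts": 63_000, "ifInNUcastPkts": 2_600}
    print(f"Inbound error rate: {input_error_percent(t0, t1):.2f}%")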

Viewing historical performance data online can be done graphically with xnmgraph, or by using the SNMP historical data configuration GUI. Data can be viewed and saved textually using xnmgraph or snmpColDump.

Presenting data offline means taking a screenshot or exporting textual data to a presentation or spreadsheet tool such as Star Office, Wingz, or one of the Windows or Macintosh equivalents.

SNMPv2C supports 64-bit counters, which are essential for managing links operating at 100 megabits per second (Mbps) or faster. NNM automatically detects devices with SNMPv2C support, and the ifXEntry section in the interface portion of MIB-2 defines several 64-bit counter variables.
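
The reason 64-bit counters matter at these speeds is that a 32-bit octet counter such as MIB-2 ifInOctets rolls over quickly on a busy link. The short calculation below shows the wrap time at line rate for a few common speeds:

    # Time for a 32-bit octet counter (e.g., ifInOctets) to wrap at line rate.
    WRAP = 2 ** 32   # counter rolls over after this many octets

    for name, mbps in [("10 Mbps", 10), ("100 Mbps", 100), ("1 Gbps", 1000)]:
        seconds = WRAP / (mbps * 1_000_000 / 8)
        print(f"{name:>8}: wraps after {seconds / 60:6.1f} minutes")

At 100 Mbps the counter can wrap in under six minutes, so a five-minute polling interval leaves almost no margin, and anything faster is hopeless without the 64-bit versions.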

Collecting RMON data is best done with HP NetMetrix. Remote shared-medium Ethernet segments can also be monitored (in a limited way) with NNM directly using the Etherstats group and a few good MIB expressions.
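
For example, shared-segment utilization can be approximated from the RMON etherStats counters. The Python sketch below uses deltas of etherStatsOctets and etherStatsPkts and adds 20 bytes per frame for preamble and interframe gap on 10 Mbps Ethernet; it illustrates the calculation only and is not an actual NNM MIB expression, and the sample deltas are hypothetical:

    def segment_utilization_percent(d_octets, d_pkts, interval_sec, link_bps=10_000_000):
        """Approximate shared-Ethernet utilization from RMON etherStats deltas.
        Adds 20 bytes per frame for preamble and interframe gap."""
        bits = (d_octets + 20 * d_pkts) * 8
        return 100.0 * bits / (interval_sec * link_bps)

    # Hypothetical deltas over a 5-minute sample on a 10 Mbps segment
    print(f"{segment_utilization_percent(90_000_000, 120_000, 300):.1f}% utilization")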

Crossing over to HP NetMetrix means having complete end-to-end traffic flow by application at your fingertips. Network probes or switches and hubs with built-in RMON2 properly situated on the network can collect enterprise-wide performance data that NetMetrix can massage, present, and report.

After you’ve collected NetMetrix network-wide performance data and a meaningful NNM baseline topology, capacity planning follows. HP Service Simulator can import the performance data and topology. Armed with what-if questions, you can use the simulator to verify that the network can meet performance objectives under various conditions you specify.
