How it works…

Here is an example of monitoring the kernel latency value, that is, KAVG/cmd, for the vmhba0 adapter. In the first resxtop screen, the kernel latency value is 0.02 milliseconds (the average per I/O command over the monitoring period). This is a good value because it is nearly zero.
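
To reach this screen yourself, connect resxtop to the host and switch to the disk adapter view. The following is a minimal sketch; the host name esxi01.lab.local is a placeholder, and resxtop will prompt for credentials:

resxtop --server esxi01.lab.local
# Once resxtop is running:
#   press d         -> disk adapter view (lists vmhba0, vmhba1, ...)
#   press f         -> toggle fields if the latency columns are not visible
#   press s, then 2 -> shorten the refresh interval to 2 seconds (optional)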

In the second resxtop screen (press u in the window), which shows the NFS datastore attached to the ESXi host, we can see 18 active I/Os (ACTV) and 0 queued I/Os (QUED). This means there are active I/Os, but no queuing is happening at the VMkernel level.
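
To reproduce this second screen and watch the queue statistics, the same session can be used (again, the host name is a placeholder):

resxtop --server esxi01.lab.local
#   press u -> disk device view, which also lists NFS volumes
#   press f -> enable the queue statistics fields so ACTV and QUED are shown
# ACTV = commands currently being serviced by the storage device
# QUED = commands waiting in the VMkernel queue; a sustained non-zero value
#        here means the host itself is queuing I/O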

Queuing happens when there is excessive I/O against the device and the LUN queue depth setting is not sufficient. The default LUN queue depth is 64, so if more than 64 I/Os arrive to be handled simultaneously, the device is bottlenecked at 64 outstanding I/Os at a time and the excess commands queue in the VMkernel. To resolve this, you would increase the queue depth of the device driver.
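
As a sketch of how that change is made with esxcli, shown here with example identifiers: the device ID, module name (qlnativefc), and parameter name (ql2xmaxqdepth) vary by HBA vendor and driver version, so verify them against your driver's documentation first:

# Check the current maximum queue depth reported for a device
esxcli storage core device list -d naa.600508b1001c3a1d
# Set the LUN queue depth for a QLogic native driver, then reboot the host
esxcli system module parameters set -m qlnativefc -p "ql2xmaxqdepth=128"
# Confirm the parameter after the reboot
esxcli system module parameters list -m qlnativefc
# Optionally raise the per-device limit on outstanding I/Os when several VMs
# share the LUN
esxcli storage core device set -d naa.600508b1001c3a1d -O 128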

Also, look at what esxtop and vCenter report for the device latency per I/O command (DAVG/cmd). This is the average delay in milliseconds per I/O from the time the ESXi host sends the command out until the time the host hears back that the array has completed the I/O.
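
It helps to keep the relationship between the counters in mind: the latency the guest experiences (GAVG/cmd) is the sum of the kernel latency (KAVG/cmd) and this device latency (DAVG/cmd), so a high GAVG with a near-zero KAVG points at the array or the fabric rather than the host. If you would rather capture these counters over time than watch them live, resxtop batch mode can write them to a CSV file; a minimal sketch, with the host name, interval, and sample count as placeholders:

# 60 samples, 5 seconds apart, saved for later analysis in perfmon or a spreadsheet
resxtop --server esxi01.lab.local -b -d 5 -n 60 > storage-latency.csv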
