Most high CPU and memory issues require working with tech support to determine the root cause through code-level analysis. There are, however, actions you can take, including collecting useful information, both to speed up the investigation and to avoid having to wait for the issue to recur before that information can be captured.
NetScaler has two kinds of CPU that do very different things: the packet engine CPUs (NSPPE), which handle all traffic processing, and the management CPU, which runs the FreeBSD userland daemons (such as snmpd). High management CPU usage, unless prolonged, does not impact packet handling, and a momentary spike should be expected when logs are compressed as part of a rollover.

SNMP is the best way to detect high CPU events, as it is not practical to monitor the dashboard constantly. NetScaler sends specific traps when a CPU runs high. You can configure these threshold values by navigating to System | SNMP | Alarms. The following is a typical example:
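The same alarms can also be configured from the CLI. A minimal sketch, assuming the standard CPU-USAGE and MEMORY alarm names (verify the names and parameters with `show snmp alarm` on your build):

```
> set snmp alarm CPU-USAGE -thresholdValue 90 -normalValue 35 -state ENABLED
> set snmp alarm MEMORY -thresholdValue 90 -normalValue 35 -state ENABLED
> show snmp alarm CPU-USAGE
```

Note that the traps only reach you if a trap listener is configured (for example, with `add snmp trap generic <destination-IP>`).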
Consider the following steps when you see the CPU pegged at 100%:
Run stat cpu on the NetScaler CLI to see the actual packet engine CPU consumption. If it shows near 100%, try the following steps to lower the potential impact to traffic. If this is a VPX or SDX, also consider adding packet engines; Citrix article CTX139485 shows how to do this for a VPX.
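A sketch of the inspection involved; stat cpu is the command described above, while the nsconmsg counter group shown is an assumption to verify on your build:

```
> stat cpu                              # current per-CPU packet engine utilization
> shell
# nsconmsg -d current -g cpu_use | more # historical CPU counters from the newnslog data
```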
From the shell, run top and look for processes other than NSPPE that are consuming a high CPU percentage; this can be caused by any of the daemons that run in userland, such as nsaaad and httpd. Save this output to a file (for example, top > /var/top.txt) to include with the case information when engaging Citrix Tech Support. Generate a show techsupport file and share it with Citrix Tech Support to assist with the root cause analysis. The easiest way to do this is from the GUI, under the Diagnostics tab.

Memory build-ups happen more gradually than CPU build-ups. As a result, apart from SNMP monitoring, periodically looking at the dashboard or running stat commands on the NetScaler is a good way to catch them.
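For example, a periodic CLI spot check might look like this (command names as on recent builds; verify with the CLI's built-in help):

```
> stat system memory    # packet engine memory in use versus total
> stat cpu              # confirm whether memory pressure coincides with CPU load
```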
Memory build ups can result from:
To troubleshoot memory issues, start by plotting memory usage against the traffic being handled. The easiest way to do this is to use the CPU versus Memory versus HTTP Requests Rate graph, which you will find on the dashboard:
Collect a techsupport file. For faster identification, it is useful to note details of any features recently enabled or new services created on the NetScaler. Where available, prefer sessionless forms of protection (for example, sessionless form field consistency), as these consume less memory.

Running nsconmsg -s ConMEM=2 -d oldconmsg | more produces a snapshot of the current memory consumption, giving you insight into how much memory each feature is consuming. This will help you understand whether your NetScaler is undersized for the traffic it needs to handle, or whether a particular application is receiving more traffic than you planned for:

Memory issues can also manifest due to failed memory hardware. Since memory is detected at boot time, dmesg is a great place to find this information; use the shell command dmesg | grep memory. If the real memory is less than what was advertised when you purchased the unit, you could be looking at an RMA. A quick way to verify what it should be is to look at the HA peer, since the two units are generally the same model.
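On the FreeBSD shell, the lines of interest look like this (values illustrative, from a hypothetical 16 GB unit):

```
# dmesg | grep memory
real memory  = 17179869184 (16384 MB)
avail memory = 16531562496 (15765 MB)
```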