Raising server limits

A TCP connection utilizes a number of operating system resources. The OS kernel limits these resources by imposing various upper bounds. In this section, we will raise those limits to make more resources available to the server.

The queue size

The TCP stack tries to process data packets as soon as they arrive. If packets arrive faster than they can be processed, they are queued. The kernel limits the total number of packets that can be queued at the server; the limit is specified by the net.core.netdev_max_backlog key:

$ sysctl net.core.netdev_max_backlog
net.core.netdev_max_backlog = 300 

Increase the queue size to a large value, such as 10000:

$ sudo sysctl -w net.core.netdev_max_backlog=10000 
net.core.netdev_max_backlog = 10000
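
Note that values set with sysctl -w do not survive a reboot. To persist a setting, it can also be added to /etc/sysctl.conf (or a file under /etc/sysctl.d/) and reloaded with sysctl -p; the following is a minimal sketch using the value chosen above:

$ echo 'net.core.netdev_max_backlog = 10000' | sudo tee -a /etc/sysctl.conf
net.core.netdev_max_backlog = 10000
$ sudo sysctl -p
net.core.netdev_max_backlog = 10000

The same approach applies to the other keys adjusted in this section.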

The listen socket queue size

The OS kernel defines a limit on the listen socket queue size. The limit is specified by the value of the net.core.somaxconn key:

$ sysctl net.core.somaxconn
net.core.somaxconn = 128

Now, increase the queue size to a large value, such as 2048:

$ sudo sysctl -w net.core.somaxconn=2048
net.core.somaxconn = 2048

Note

It is important to note that this parameter alone will not make the intended impact. NGINX also limits the queue size of pending connections. The limit is defined by the backlog parameter of the listen directive in the NGINX configuration. By default, backlog is set to -1 on the FreeBSD and OS X platforms and to 511 on other platforms. Increase both backlog and net.core.somaxconn to enlarge the pending connections queue.
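
As a sketch of how the two settings interact, the backlog parameter is set on the listen directive in nginx.conf; the port and value below are only examples:

server {
    # backlog sets the pending connection queue for this listening socket;
    # the kernel still caps it at net.core.somaxconn
    listen 80 backlog=2048;
}

On Linux, the backlog actually applied to a listening socket can usually be verified with ss -ltn, where the Send-Q column of the listening socket reports the configured backlog.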

Half-opened connections

When the server receives a connection request (SYN), it replies with its own acknowledgment and waits for the final acknowledgment from the client. Until that arrives, the connection is in a half-open state. The OS kernel limits the total number of connections that can be in this state, and the server will drop new connection requests once the limit is exceeded. The limit is specified by the value of the net.ipv4.tcp_max_syn_backlog key:

$ sysctl net.ipv4.tcp_max_syn_backlog
net.ipv4.tcp_max_syn_backlog = 256

Increase the size to a large value, such as 2048:

$ sudo sysctl -w net.ipv4.tcp_max_syn_backlog=2048
net.ipv4.tcp_max_syn_backlog = 2048
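
To check whether the SYN backlog is actually under pressure, the sockets currently sitting in the half-open (SYN_RECV) state can be counted; a minimal check, assuming the ss utility from the iproute2 package is installed (the output of ss includes one header line):

$ ss -tn state syn-recv | wc -l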

Ephemeral ports

Ephemeral ports are the port numbers an operating system assigns when an application opens a socket for communication. These ports are short-lived and are valid endpoints only for the duration of the communication. The Linux kernel defines the ephemeral port range in the net.ipv4.ip_local_port_range key:

$ sysctl net.ipv4.ip_local_port_range
net.ipv4.ip_local_port_range = 32768   61000

The two values signify the minimum and maximum port numbers out of the 65,535 ports available on any system. The range may look adequately large: 61000 - 32768 = 28,232 available ports. However, 28,232 is the total number of ephemeral ports available on the system, not the number of concurrent connections the server can serve.

As explained in the TCP states section, TCP holds sockets in the TIME_WAIT state for a duration of 2 x MSL. By default, the MSL is 60 seconds, which makes the TIME_WAIT period 120 seconds long. Thus, the server can only guarantee 28232/120 = 235 connections at any moment in time. If the server is acting as a proxy, that is, serving content from an upstream, each client connection also requires an upstream connection, so the number halves to 235/2 = 117. Depending on your service and its load, this may not be a great number to look at!
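
To get a rough idea of how much of the ephemeral range is currently tied up, the socket summary from ss reports, among other counters, the number of sockets in the TIME_WAIT state (the timewait figure in the TCP line); this again assumes the iproute2 package is installed:

$ ss -s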

Tip

The number of ports guaranteed by the server at any moment in time can be increased by lowering the MSL. If the MSL is 30 seconds, the TIME_WAIT period comes down to 60 seconds, which gives 28232/60 = 470 available ports at any moment in time.

The range can be modified by specifying the minimum and maximum ports against the net.ipv4.ip_local_port_range key:

$ sudo sysctl -w net.ipv4.ip_local_port_range='15000 65000' 
net.ipv4.ip_local_port_range = 15000 65000

This makes a total of 50,000 ports for TCP socket use.
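
As a quick sanity check, the size of the configured range can be computed from the live setting; this assumes awk is available:

$ sysctl -n net.ipv4.ip_local_port_range | awk '{print $2 - $1}'
50000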

Open files

The kernel treats each open socket as an open file. It also imposes an upper bound on the number of files a process may have open. By default, the limit is set to 1,024 open files:

$ ulimit -n 
1024

Compared to the ephemeral port range configured previously, this limit is far too low to serve the desired purpose. Under load, it may lead to socket failures, with Too many open files error messages in syslog.

The limits can be modified by changing the values in /etc/security/limits.conf. The file defines a soft limit and a hard limit for each item. Increase these values for the nofile item, using an asterisk (*) to apply them to all users:

* soft nofile 50000
* hard nofile 50000
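
The limits.conf settings are applied by PAM at login, so they only affect new sessions; NGINX must be restarted from such a session to pick them up. The limit a running process actually received can be read from /proc; a minimal check, assuming an nginx master process is running and pgrep is available:

$ grep 'Max open files' /proc/$(pgrep -o nginx)/limits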

Note

The configuration specified previously alters the system-wide PAM limits. NGINX can also raise the limit for its worker processes using the worker_rlimit_nofile configuration directive, covered in Chapter 3, Tweaking NGINX Configuration. It is preferable to modify the limits for NGINX rather than to raise the overall system limits.
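
A minimal nginx.conf sketch of this alternative; the numbers are only examples and should be sized to your workload:

# main context
worker_rlimit_nofile 50000;

events {
    # each worker also needs descriptors for its concurrent connections
    worker_connections 10000;
}

worker_rlimit_nofile raises the open file limit of the worker processes when NGINX starts, without touching the system-wide PAM limits.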
