The Network

The network is a core component of a Ceph cluster, and its performance greatly affects the overall performance of the cluster. 10 GbE should be treated as a minimum; 1 GbE networking will not provide the low latency required for a high-performance Ceph cluster. There are a number of tunings that can help to improve network performance by decreasing latency and increasing throughput.

The first thing to consider is using jumbo frames, with an MTU of 9,000 instead of 1,500, so that each I/O request can be sent using fewer Ethernet frames. As every Ethernet frame carries a small fixed overhead, increasing the maximum frame size to 9,000 bytes reduces that overhead. In practice, gains are normally less than 5% and should be weighed against the disadvantage of having to make sure every device on the network is configured correctly.
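The sub-5% figure can be sanity-checked with some quick arithmetic. The sketch below (plain Python, assuming the standard per-frame costs of Ethernet header, FCS, preamble, and inter-frame gap, plus IPv4 and TCP headers without options) compares wire efficiency at the two MTUs:

```python
# Per-frame costs on the wire, in bytes: Ethernet header (14) + FCS (4)
# + preamble (8) + inter-frame gap (12). The IPv4 (20) and TCP (20)
# headers are carried inside the MTU. Header options are ignored.
ETHERNET_OVERHEAD = 14 + 4 + 8 + 12
IP_TCP_HEADERS = 20 + 20

def wire_efficiency(mtu):
    """Fraction of on-the-wire bytes that are TCP payload."""
    payload = mtu - IP_TCP_HEADERS
    return payload / (mtu + ETHERNET_OVERHEAD)

standard = wire_efficiency(1500)   # ~0.949
jumbo = wire_efficiency(9000)      # ~0.991
gain = (jumbo / standard - 1) * 100
print(f"Throughput gain from jumbo frames: {gain:.1f}%")  # ~4.4%
```

The result, roughly a 4.4% improvement in wire efficiency, lines up with the "less than 5%" figure quoted above.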

The following network options set in your sysctl.conf are recommended to maximize network performance:

#Network buffers
net.core.rmem_max = 56623104
net.core.wmem_max = 56623104
net.core.rmem_default = 56623104
net.core.wmem_default = 56623104
net.core.optmem_max = 40960
net.ipv4.tcp_rmem = 4096 87380 56623104
net.ipv4.tcp_wmem = 4096 65536 56623104

#Maximum connections and backlog
net.core.somaxconn = 1024
net.core.netdev_max_backlog = 50000

#TCP tuning options
net.ipv4.tcp_max_syn_backlog = 30000
net.ipv4.tcp_max_tw_buckets = 2000000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 10

#Don't use slow start on idle TCP connections
net.ipv4.tcp_slow_start_after_idle = 0
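These settings can be applied without a reboot. A minimal sketch, assuming the options above were added to /etc/sysctl.conf (one caveat: net.ipv4.tcp_tw_recycle was removed in Linux 4.12, so sysctl will reject that line on newer kernels):

```shell
# Reload all settings from /etc/sysctl.conf:
sudo sysctl -p

# Alternatively, try a single option at runtime before persisting it:
sudo sysctl -w net.core.rmem_max=56623104

# Confirm the value currently in effect:
sysctl net.core.rmem_max
```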

If you are using IPv6 for your Ceph cluster, make sure you set the appropriate IPv6 sysctl options.
