Memory

The recommendation for BlueStore OSDs is 3 GB of memory for every HDD OSD and 5 GB for every SSD OSD. In truth, a number of variables lead to this recommendation, but suffice it to say that you never want to find yourself in a situation where your OSDs are running low on memory; any excess memory will be used to improve performance.
Aside from the baseline memory usage of the OSD, the main variable affecting memory usage is the number of PGs running on the OSD. While total data size does have an impact on memory usage, it is dwarfed by the effect of the number of PGs. A healthy cluster running within the recommendation of 200 PGs per OSD will probably use less than 4 GB of RAM per OSD.
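
To put these figures together, the following back-of-the-envelope sizing is purely illustrative; the node layout and the headroom figure are assumptions used to show the arithmetic, not recommendations in their own right:

    12 HDD OSDs x 4 GB per OSD             = 48 GB
    Operating system and other daemons     ~  4 GB
    Headroom for recovery and PG growth    ~ 12 GB (roughly 25%)
    ------------------------------------------------
    Total RAM to provision for the node    ~ 64 GB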

However, in a cluster where the number of PGs has been set higher than best practice, memory usage will be higher. It is also worth noting that when an OSD is removed from a cluster, extra PGs will be placed on the remaining OSDs to rebalance the cluster; this, along with the recovery operation itself, will increase memory usage. This spike in memory usage can sometimes be the cause of cascading failures if insufficient RAM has been provisioned. A large swap partition on an SSD should always be provisioned to reduce the risk of the Linux out-of-memory killer randomly killing OSD processes in the event of a low-memory situation.
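
As a minimal sketch of provisioning such a swap area (the device name /dev/sdb2 is hypothetical; substitute a spare partition on one of your SSDs):

    # Format and enable a swap partition on an SSD (hypothetical device)
    mkswap /dev/sdb2
    swapon /dev/sdb2
    # Make it persistent across reboots
    echo '/dev/sdb2 none swap sw 0 0' >> /etc/fstab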

As a minimum, the aim is to provision around 4 GB per OSD for HDDs and 5 GB per OSD for SSDs; this should be treated as the bare minimum, and 5 GB/6 GB (HDD/SSD respectively) per OSD would be the ideal amount. With both BlueStore and filestore, any additional memory installed in the server may be used to cache data, reducing read latency for client operations. Filestore uses the Linux page cache, so spare RAM is utilized automatically. With BlueStore, we need to manually tune the memory limit to assign extra memory to be used as a cache; this will be covered in more detail in Chapter 3, BlueStore.
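
As a brief preview of that chapter, recent Ceph releases expose this limit through the osd_memory_target option; the 6 GB value below is only an example of assigning extra memory for caching, not a recommendation:

    # In ceph.conf, applied to OSDs on restart
    [osd]
    # 6 GiB, expressed in bytes
    osd_memory_target = 6442450944

    # Or, on releases with the centralized configuration database:
    # ceph config set osd osd_memory_target 6442450944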

If your cluster is still running filestore, then depending on your workload and the size of the spinning disks used for the Ceph OSDs, extra memory may be required to ensure that the operating system can sufficiently cache the directory entries and inodes from the filesystem that is used to store the Ceph objects. This may have a bearing on the RAM you wish to configure your nodes with, and is covered in more detail in the tuning section of this book.
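
One related kernel knob is the vm.vfs_cache_pressure sysctl, which controls how readily the kernel reclaims cached directory entries and inodes; the value shown below is purely illustrative, not a tuned recommendation:

    # Check how much memory the dentry/inode caches currently consume
    slabtop -o | grep -E 'dentry|inode'
    # Favour retaining dentries and inodes over page cache (default is 100)
    sysctl -w vm.vfs_cache_pressure=10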

Regardless of the configured memory size, ECC memory should be used at all times.
