To get the best out of Ceph's cache tiering feature, you should use faster disks such as SSDs and create a fast cache pool on top of slower/regular pools made up of HDDs. In Chapter 8, Production Planning and Performance Tuning for Ceph, we covered the process of creating Ceph pools on specific OSDs by modifying the CRUSH map. To set up a cache tier in your environment, you first need to modify your CRUSH map and create a ruleset for the SSD disks. Since we have already covered this in Chapter 8, Production Planning and Performance Tuning for Ceph, we will use the same ruleset for SSDs, which is based on osd.0, osd.3, and osd.6. As this is a test setup and we do not have real SSDs, we will assume that OSDs 0, 3, and 6 are SSDs and will create a cache pool on top of them, as illustrated in this diagram:
Let's check the CRUSH layout using the ceph osd crush rule ls command, as shown in the following screenshot. We already have the ssd-pool CRUSH rule that we created in Chapter 7, Ceph under the Hood. You can get more information on this CRUSH rule by running the ceph osd crush rule dump ssd-pool command:
Create a new pool named cache-pool, and set its crush_ruleset to 1 so that the new pool gets created on the SSD disks:

# ceph osd pool create cache-pool 16 16
# ceph osd pool set cache-pool crush_ruleset 1
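The two 16 arguments are the pool's pg_num and pgp_num; a small value is fine for a test pool. For a production pool, the common Ceph rule of thumb (a general guideline, not something this chapter prescribes) is (number of OSDs × 100) / replica count, rounded up to the next power of two. A minimal sketch of that calculation, assuming the three SSD OSDs of this setup and a replica count of 3:

```shell
# Rule-of-thumb PG count: (OSDs * 100) / replicas, rounded up to the
# next power of two. The values 3 and 3 are assumptions matching this
# test setup (osd.0, osd.3, osd.6; default replication of 3).
osds=3
replicas=3
target=$(( osds * 100 / replicas ))   # 100
pg=1
while [ "$pg" -lt "$target" ]; do
  pg=$(( pg * 2 ))
done
echo "$pg"                            # prints 128
```

Since pg_num cannot be decreased on an existing pool, it is worth sizing it before the pool holds data; the chapter's 16 simply keeps the PG count low for this small test cluster.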
Make sure the pool has been created correctly, that is, it should always store its objects on osd.0, osd.3, and osd.6. List the cache-pool for contents; since it's a new pool, it should not have any content:

# rados -p cache-pool ls

Add a temporary object to the cache-pool to make sure it's storing the object on the correct OSDs:

# rados -p cache-pool put object1 /etc/hosts
# rados -p cache-pool ls
Check the OSD map for cache-pool and object1; the object should be stored on osd.0, osd.3, and osd.6:

# ceph osd map cache-pool object1
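The last two bracketed fields of the ceph osd map output are the up and acting OSD sets, which is where you confirm the placement. If you want to verify this in a script rather than by eye, you can extract the acting set with sed; the line below is a hypothetical sample in the shape this command prints, with a made-up epoch, pool ID, and placement group ID rather than real cluster output:

```shell
# Hypothetical 'ceph osd map' output line; the epoch (e123), pool id,
# and pg id are illustrative placeholders, not captured from a cluster.
line="osdmap e123 pool 'cache-pool' (1) object 'object1' -> pg 1.bac5debc (1.3c) -> up [0,3,6] acting [0,3,6]"
# Pull the comma-separated acting set out of the trailing brackets.
acting=$(printf '%s\n' "$line" | sed -n 's/.*acting \[\([0-9,]*\)\].*/\1/p')
echo "$acting"   # prints 0,3,6
```

If the extracted set matches 0,3,6, the cache-pool is landing on the assumed SSD OSDs as intended.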
Finally, remove the temporary object:

# rados -p cache-pool rm object1