How to use BlueStore

To create a BlueStore OSD, you can use ceph-disk, which fully supports creating BlueStore OSDs with the RocksDB data and WAL either collocated on the data disk or stored on separate disks. The operation is similar to creating a filestore OSD, except that instead of specifying a device to use as the filestore journal, you specify devices for the RocksDB data. As previously mentioned, you can separate the DB and WAL parts of RocksDB if you wish:

ceph-disk prepare --bluestore /dev/sda --block.wal /dev/sdb --block.db /dev/sdb

The preceding command assumes that your data disk, /dev/sda, is a spinning disk and that you have a faster device, such as an SSD, as /dev/sdb. Ceph-disk would create two partitions on the data disk: one for storing the actual Ceph objects and another small XFS partition for storing details about the OSD. It would also create two partitions on the SSD, one for the DB and one for the WAL. You can create multiple OSDs sharing the same SSD for their DB and WAL without fear of overwriting previous OSDs; ceph-disk is smart enough to create new partitions without you having to specify them.
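For example, the sharing behavior described above means you can simply repeat the prepare command for each new data disk against the same SSD. The following is a sketch only; it assumes a second spinning data disk at /dev/sdc, and the exact device names will differ on your hardware:

```shell
# Each run carves out a fresh DB and WAL partition pair on /dev/sdb,
# leaving the partitions created for earlier OSDs untouched.
ceph-disk prepare --bluestore /dev/sda --block.db /dev/sdb --block.wal /dev/sdb
ceph-disk prepare --bluestore /dev/sdc --block.db /dev/sdb --block.wal /dev/sdb

# Afterwards, listing the SSD should show multiple small DB/WAL partitions.
lsblk /dev/sdb
```

Because these commands partition real disks and register OSDs with the cluster, run them only on disks you intend to dedicate to Ceph.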

However, as we discovered in Chapter 2, Deploying Ceph, using a proper deployment tool for your Ceph cluster helps to reduce deployment time and ensures consistent configuration across the cluster. Although the Ceph Ansible modules also support deploying BlueStore OSDs, at the time of writing they did not support deploying separate DB and WAL partitions. To demonstrate BlueStore, we will therefore use ceph-disk to non-disruptively and manually upgrade our test cluster's OSDs from filestore to BlueStore.
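A rough sketch of such an upgrade for a single OSD might look like the following. The OSD ID (0) and device names are assumptions for illustration only, and you should wait for the cluster to return to HEALTH_OK after each OSD before moving on to the next one:

```shell
# Sketch only: rebuild one filestore OSD (id 0, data on /dev/sda) as BlueStore.
ceph osd out 0                           # drain data off the OSD
# ... wait for backfilling to complete (watch ceph -s) ...
systemctl stop ceph-osd@0                # stop the OSD daemon
ceph osd purge 0 --yes-i-really-mean-it  # remove it from the CRUSH and OSD maps
ceph-disk zap /dev/sda                   # wipe the old filestore partitions
ceph-disk prepare --bluestore /dev/sda --block.db /dev/sdb --block.wal /dev/sdb
```

Doing this one OSD at a time keeps enough data replicas online that the upgrade is non-disruptive, at the cost of one full backfill per OSD.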
