Disk performance baseline

The disk performance baseline test will be done in two steps. First, we will measure the performance of a single disk, and after that, we will measure the performance of all the disks connected to one Ceph OSD node simultaneously.

Note

To get realistic results, I am running the benchmarking tests described in this recipe against a Ceph cluster deployed on physical hardware. We can also run these tests on a Ceph cluster hosted on virtual machines, but the results are unlikely to be representative.

Single disk write performance

To get the disk read and write performance, we will use the dd command with oflag set to direct in order to bypass the page cache and get realistic results.

How to do it…

  1. Drop caches:
    # echo 3 > /proc/sys/vm/drop_caches
    
  2. Use dd to write a 10 GB file named deleteme, using /dev/zero as the input file (if), to the directory where the Ceph OSD is mounted, that is, /var/lib/ceph/osd/ceph-0/:
    # dd if=/dev/zero of=/var/lib/ceph/osd/ceph-0/deleteme bs=10G count=1 oflag=direct
    

Ideally, you should repeat Steps 1 and 2 a few times and take the average value. In our case, the average write speed comes out to around 319 MB/s.
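
If you want to automate the repetition, a small script along the following lines can run the write test several times and average the speeds that dd reports. This is only a sketch: it assumes the OSD data directory from Step 2, GNU dd (whose summary line ends in "<speed> MB/s"), and that bc is available; note also that bs=10G makes dd allocate a single 10 GB buffer, so the node needs that much free memory.

    #!/bin/bash
    # Sketch: repeat the cache-cold write test and average the throughput.
    RUNS=3
    TARGET=/var/lib/ceph/osd/ceph-0/deleteme   # OSD data directory from Step 2
    total=0
    for run in $(seq 1 "$RUNS"); do
        echo 3 > /proc/sys/vm/drop_caches
        # GNU dd prints "... copied, <time> s, <speed> MB/s" on stderr
        speed=$(dd if=/dev/zero of="$TARGET" bs=10G count=1 oflag=direct 2>&1 \
                | awk '/copied/ {print $(NF-1)}')
        echo "run $run: $speed MB/s"
        total=$(echo "$total + $speed" | bc)
    done
    echo "average write speed: $(echo "scale=1; $total / $RUNS" | bc) MB/s"
    rm -f "$TARGET"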

Multiple disk write performance

As the next step, we will run dd simultaneously on all the OSD disks used by Ceph on the ceph-node1 node to get the aggregate disk write performance of a single node.

How to do it…

  1. Get the total number of disks in use by Ceph OSDs; in my case, it's 25 disks:
    # mount | grep -i osd | wc -l
    
  2. Drop caches:
    # echo 3 > /proc/sys/vm/drop_caches
    
  3. The following command will execute the dd command on all the Ceph OSD disks:
    # for i in `mount | grep osd | awk '{print $3}'`; do (dd if=/dev/zero of=$i/deleteme bs=10G count=1 oflag=direct &) ; done 
    

To get the aggregated disk write performance, take the average of the speeds reported by all the dd processes. In my case, the per-disk average drops to about 60 MB/s when all 25 disks are written to simultaneously.
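
Rather than copying the speeds from many dd outputs by hand, the loop from Step 3 can be extended so that each background dd writes its reported speed to a temporary file, which is averaged once every process has finished. Again, this is just a sketch, assuming GNU dd output in MB/s and the same OSD mount points used in Step 3.

    #!/bin/bash
    # Sketch: write to every OSD disk in parallel, then average the
    # per-disk speeds reported by dd.
    echo 3 > /proc/sys/vm/drop_caches
    out=$(mktemp)
    for dir in $(mount | grep osd | awk '{print $3}'); do
        ( dd if=/dev/zero of="$dir/deleteme" bs=10G count=1 oflag=direct 2>&1 \
            | awk '/copied/ {print $(NF-1)}' >> "$out" ) &
    done
    wait   # let every background dd finish before averaging
    awk '{sum += $1; n++} END {printf "disks: %d, per-disk average: %.1f MB/s\n", n, sum/n}' "$out"
    rm -f "$out"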

Single disk read performance

To get the single disk read performance, we will again use the dd command.

How to do it…

  1. Drop caches:
    # echo 3 > /proc/sys/vm/drop_caches
    
  2. Use dd to read the deleteme file that we created during the write test, writing it to /dev/null, with iflag set to direct:
    # dd if=/var/lib/ceph/osd/ceph-0/deleteme of=/dev/null bs=10G count=1 iflag=direct
    

Ideally, you should repeat Steps 1 and 2 a few times and take the average value. In our case, the average read speed comes out to around 178 MB/s.
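
As with the write test, the read test can be wrapped in a loop so that every pass starts with a cold cache. A minimal sketch, assuming the deleteme file from the write test is still in place and GNU dd output in MB/s:

    # Three cache-cold read passes over the file written earlier
    for run in 1 2 3; do
        echo 3 > /proc/sys/vm/drop_caches
        speed=$(dd if=/var/lib/ceph/osd/ceph-0/deleteme of=/dev/null bs=10G count=1 iflag=direct 2>&1 \
                | awk '/copied/ {print $(NF-1)}')
        echo "run $run: $speed MB/s"
    done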

Multiple disk read performance

Similar to the single disk read test, we will use dd to get the aggregate read performance of all the OSD disks.

How to do it…

  1. Get the total number of disks in use by Ceph OSDs; in my case, it's 25 disks:
    # mount | grep -i osd | wc -l
    
  2. Drop caches:
    # echo 3 > /proc/sys/vm/drop_caches
    
  3. The following command will execute the dd command on all the Ceph OSD disks:
    # for i in `mount | grep osd | awk '{print $3}'`; do (dd if=$i/deleteme of=/dev/null bs=10G count=1 iflag=direct &); done
    

To get the aggregated disk read performance, take the average of the speeds reported by all the dd processes. In my case, the per-disk average comes out to about 123 MB/s with all the disks reading simultaneously.
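
The same collection trick used for the parallel write test works here: redirect each dd's reported speed into a temporary file and average it after wait returns. A sketch, under the same assumptions as before:

    #!/bin/bash
    # Sketch: read from every OSD disk in parallel and average the speeds.
    echo 3 > /proc/sys/vm/drop_caches
    out=$(mktemp)
    for dir in $(mount | grep osd | awk '{print $3}'); do
        ( dd if="$dir/deleteme" of=/dev/null bs=10G count=1 iflag=direct 2>&1 \
            | awk '/copied/ {print $(NF-1)}' >> "$out" ) &
    done
    wait
    awk '{sum += $1; n++} END {printf "disks: %d, per-disk average: %.1f MB/s\n", n, sum/n}' "$out"
    rm -f "$out"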

Results

Based on the tests that we performed, the results are summarized in the following table. These results vary a lot from environment to environment; the hardware you are using and the number of disks in the OSD node play a big part.

Operation    Per Disk     Aggregate
Read         178 MB/s     123 MB/s
Write        319 MB/s     60 MB/s
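
The per-disk averages can also be turned into a rough node-level estimate by multiplying by the number of disks. The quick calculation below assumes the 25 OSD disks counted earlier and should be read as an upper bound, since the disk controller, journal placement, and network also constrain the aggregate.

    # Back-of-the-envelope node throughput from the per-disk averages above
    echo "estimated node write: $((25 * 60)) MB/s"    # ~1500 MB/s
    echo "estimated node read:  $((25 * 123)) MB/s"   # ~3075 MB/s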
