The disk performance baseline test will be done in two steps. First, we will measure the performance of a single disk, and after that, we will measure the performance of all the disks connected to one Ceph OSD node simultaneously.
To get the disk read and write performance, we will use the dd command with oflag set to direct in order to bypass the disk cache for realistic results.
Drop caches:
# echo 3 > /proc/sys/vm/drop_caches
Use dd to write a file named deleteme of size 10G, filled with zeros from /dev/zero as the input file (if), to the directory where the Ceph OSD is mounted, that is, /var/lib/ceph/osd/ceph-0/:
# dd if=/dev/zero of=/var/lib/ceph/osd/ceph-0/deleteme bs=10G count=1 oflag=direct
Ideally, you should repeat the cache-drop and dd write steps a few times and take the average value. In our case, the average value for write operations comes out to be 319 MB/s.
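If you prefer to script the repetition, a minimal sketch could look like the following; the three-run count and the OSD mount path are assumptions, so adjust them for your environment.

```sh
# Repeat the drop-caches + dd write cycle three times and print only each
# run's throughput summary line, which dd writes to stderr.
for run in 1 2 3; do
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/dev/zero of=/var/lib/ceph/osd/ceph-0/deleteme bs=10G count=1 oflag=direct 2>&1 | grep copied
done
```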
As the next step, we will run dd on all the OSD disks used by Ceph on the node ceph-node1, to get the aggregated disk write performance out of a single node.
Check the total number of disks used by Ceph OSDs on this node:
# mount | grep -i osd | wc -l
Drop caches:
# echo 3 > /proc/sys/vm/drop_caches
Run the dd command on all the Ceph OSD disks:
# for i in `mount | grep osd | awk '{print $3}'`; do (dd if=/dev/zero of=$i/deleteme bs=10G count=1 oflag=direct &) ; done
To get the aggregated disk write performance, take the average of all the write speeds. In my case, the average comes out to be 60 MB/s.
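Next comes the single disk read performance. A minimal sketch, assuming the deleteme file written earlier is still present on the OSD mount, is to drop the caches and read the file back with iflag set to direct:

```sh
# Drop caches, then read the previously written file back with direct I/O,
# discarding the data to /dev/null; dd reports the read throughput.
echo 3 > /proc/sys/vm/drop_caches
dd if=/var/lib/ceph/osd/ceph-0/deleteme of=/dev/null bs=10G count=1 iflag=direct
```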
Ideally, you should repeat the cache-drop and dd read steps a few times and take the average value. In our case, the average value for read operations comes out to be 178 MB/s.
Similar to the single disk read performance, we will use dd to get the aggregated multiple disk read performance.
Check the total number of disks used by Ceph OSDs on this node:
# mount | grep -i osd | wc -l
Drop caches:
# echo 3 > /proc/sys/vm/drop_caches
Run the dd command on all the Ceph OSD disks:
# for i in `mount | grep osd | awk '{print $3}'`; do (dd if=$i/deleteme of=/dev/null bs=10G count=1 iflag=direct &); done
To get the aggregated disk read performance, take the average of all the read speeds. In my case, the average comes out to be 123 MB/s.
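Reading the MB/s figures off several interleaved dd outputs can be awkward, so here is a small sketch that collects them instead; the per-disk log file name, the /var/lib/ceph/osd/ceph-* mount pattern, and the GNU dd output format are assumptions.

```sh
# Run the read test on every OSD disk, sending each dd's summary (written to
# stderr) to a per-disk log so the outputs do not interleave on the terminal.
for i in $(mount | grep osd | awk '{print $3}'); do
    dd if=$i/deleteme of=/dev/null bs=10G count=1 iflag=direct 2> $i/dd-read.log &
done
wait    # wait for all the backgrounded dd runs to finish

# Average the throughput figures; GNU dd ends its summary with e.g.
# "... copied, 87.3 s, 123 MB/s", so the number is the second-to-last field.
grep -h copied /var/lib/ceph/osd/ceph-*/dd-read.log | \
    awk '{sum += $(NF-1); n++} END {print sum/n, "MB/s (average per disk)"}'
```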
Based on the tests that we performed, the results will look like this. These results vary a lot from environment to environment; the hardware that you are using and the number of disks on the OSD node can play a big part.
| Operation | Per Disk | Aggregate |
| --- | --- | --- |
| Read | 178 MB/s | 123 MB/s |
| Write | 319 MB/s | 60 MB/s |
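Once the baseline numbers are recorded, you may want to remove the test files from the OSD mounts; a quick sketch, assuming the deleteme name used in the examples above:

```sh
# Delete the 10G test file from every mounted OSD data directory.
for i in $(mount | grep osd | awk '{print $3}'); do rm -f $i/deleteme; done
```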