Upgrading your Ceph cluster software is relatively simple: you update the Ceph packages, restart the services, and you are done. The upgrade process is sequential and usually requires no downtime for your storage services, provided your cluster is configured for high availability, that is, with multiple OSDs, monitors, MDS, and RADOSGW instances. As a general practice, you should plan cluster upgrades during off-peak hours. The upgrade process upgrades each Ceph daemon one by one. The recommended upgrade sequence for a Ceph cluster is as follows:
You should upgrade all daemons of one type before proceeding to the next type. For example, if your cluster contains three monitor nodes, 100 OSDs, two MDS instances, and a RADOSGW, you should first upgrade all your monitor nodes one by one, followed by all the OSD nodes one by one, then the MDS instances, and finally the RADOSGW. This keeps all daemons of a given type on the same release level.
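The ordering described above can be sketched as a simple driver script. This is only an illustration: the node names are made-up placeholders, and the real upgrade commands are shown as comments while the script itself just prints what it would do.

```shell
#!/bin/sh
# Dry-run sketch of the recommended upgrade order: all monitors first,
# then all OSD nodes, then MDS, then RADOSGW. Node names are placeholders.
MON_NODES="ceph-mon1 ceph-mon2 ceph-mon3"
OSD_NODES="ceph-osd1 ceph-osd2"
MDS_NODES="ceph-mds1 ceph-mds2"
RGW_NODES="ceph-rgw1"

upgrade() {
    daemon_type=$1; shift
    for node in "$@"; do
        # On a real cluster this step would be, for example:
        #   ssh "$node" "yum update ceph && service ceph restart $daemon_type"
        echo "upgrading $daemon_type on $node"
    done
}

upgrade mon $MON_NODES
upgrade osd $OSD_NODES
upgrade mds $MDS_NODES
upgrade radosgw $RGW_NODES
```

The point of the single `upgrade` function is that the sequencing lives in one place: the order of the four calls at the bottom is the upgrade policy.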
Proceed with the following steps to upgrade a monitor:
Since our test cluster setup has the MON and OSD daemons running on the same machine, upgrading the Ceph software binaries to the Firefly release (0.80) will upgrade the MON and OSD daemons in one step. In a production deployment of Ceph, however, the daemons should be upgraded one by one; otherwise, you might face problems.
Point the /etc/yum.repos.d/ceph.repo file that is already present at the Firefly release, and then update the Ceph package:
# yum update ceph
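Updating the repository file usually means changing its baseurl to the Firefly package tree. The following snippet is illustrative only; the exact URL and distribution path (el6 is assumed here) depend on your distribution and should be verified before use:

```ini
[ceph]
name=Ceph packages
# Illustrative baseurl for the Firefly release; confirm the correct
# path for your distribution before relying on it.
baseurl=http://ceph.com/rpm-firefly/el6/x86_64/
enabled=1
gpgcheck=1
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
```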
# service ceph restart mon
# service ceph status mon
# ceph mon stat
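After restarting each monitor, it is worth confirming from the ceph mon stat output that the monitor has rejoined the quorum before touching the next one. The following sketch parses a sample status line; the line shown is a made-up example of the typical output format, and on a real cluster you would capture it with mon_stat=$(ceph mon stat) instead:

```shell
#!/bin/sh
# Count the monitors currently in quorum from a 'ceph mon stat' style line.
# The sample line below is illustrative only.
mon_stat='e1: 3 mons at {a=192.168.1.101:6789/0,b=192.168.1.102:6789/0,c=192.168.1.103:6789/0}, election epoch 4, quorum 0,1,2 a,b,c'

# Extract the comma-separated quorum rank list, then count its fields.
quorum_ids=$(echo "$mon_stat" | sed 's/.*quorum \([0-9,]*\).*/\1/')
in_quorum=$(echo "$quorum_ids" | awk -F, '{print NF}')
echo "monitors in quorum: $in_quorum"
```

If the count does not match the number of monitors you expect, pause the upgrade and investigate before moving to the next node.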
Proceed with the following steps to upgrade OSDs:
# yum update ceph
# service ceph restart osd
# service ceph status osd
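In production, with OSDs spread across separate nodes, you would repeat the update and restart steps node by node and wait for the cluster to return to a healthy state before moving on. A minimal sketch, with hypothetical node names and the real commands shown only as comments so the script stays a dry run:

```shell
#!/bin/sh
# Dry-run sketch of a rolling OSD upgrade. Node names are placeholders,
# and cluster_health is a stand-in for 'ceph health' so the sketch can
# run anywhere; replace both on an actual cluster.
cluster_health() {
    # Real version: ceph health
    echo "HEALTH_OK"
}

for node in ceph-osd1 ceph-osd2 ceph-osd3; do
    # Real version:
    #   ssh "$node" 'yum update ceph && service ceph restart osd'
    echo "upgrading osd on $node"

    # Wait for recovery to finish before upgrading the next node.
    until [ "$(cluster_health)" = "HEALTH_OK" ]; do
        sleep 10
    done
    echo "$node done, cluster healthy"
done
```

The wait loop is the important part: upgrading the next OSD node while the cluster is still recovering risks reducing data redundancy further than intended.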