High availability monitors

The Ceph monitor does not store or serve data to clients; instead, it serves up-to-date cluster maps to clients as well as to other cluster nodes. Clients and other cluster nodes periodically check with the monitors for the most recent copies of the cluster maps. Before Ceph clients can read or write data, they must contact a Ceph monitor to obtain the most recent copy of the cluster map.
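As a minimal illustration of this handshake, the following Python sketch uses the librados bindings to connect to a cluster; the connect() call is the step where the client contacts a monitor (listed in ceph.conf) and retrieves the current cluster map, and only then can any object I/O take place. The configuration path, pool name, and object name here are assumptions made for the example.

    import rados

    # Connecting contacts a monitor listed in ceph.conf and pulls down
    # the current cluster map; no reads or writes are possible until
    # this step succeeds.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed default path
    cluster.connect()

    try:
        print("Cluster FSID:", cluster.get_fsid())
        # Only after the cluster map is known can the client open a
        # pool and perform object I/O.
        ioctx = cluster.open_ioctx('rbd')  # 'rbd' is an assumed pool name
        try:
            ioctx.write_full('hello-object', b'hello ceph')
            print(ioctx.read('hello-object'))
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()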

A Ceph storage cluster can operate with a single monitor; however, this introduces a single point of failure: if the monitor node goes down, Ceph clients cannot read or write data. To overcome this, a typical Ceph cluster runs a cluster of Ceph monitors. A multi-monitor Ceph architecture establishes quorum and provides consensus for distributed decision-making by using the Paxos algorithm. The monitor count in your cluster should be an odd number; the bare minimum is one monitor node, and the recommended count is three. Because the monitors operate in a quorum, more than half of the total monitor nodes must always be available to prevent split-brain problems, as illustrated in the sketch below. Out of all the cluster monitors, one operates as the leader; the other monitor nodes are eligible to become the leader if the current leader is unavailable. A production cluster must have at least three monitor nodes to provide high availability.
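To make the quorum arithmetic concrete, the short Python sketch below (plain arithmetic, not Ceph code) applies the majority rule stated above to a few cluster sizes. It shows how many monitors must be up for quorum and how many failures each size tolerates, and why odd counts are preferred: a fourth monitor adds no fault tolerance over three.

    # Quorum requires a strict majority of the configured monitors.
    def quorum_size(monitors: int) -> int:
        return monitors // 2 + 1

    def failures_tolerated(monitors: int) -> int:
        return monitors - quorum_size(monitors)

    for n in (1, 2, 3, 4, 5):
        print(f"{n} monitor(s): quorum needs {quorum_size(n)}, "
              f"tolerates {failures_tolerated(n)} failure(s)")

    # 3 and 4 monitors both tolerate only a single failure, which is
    # why an odd monitor count (3, 5, ...) is the usual recommendation.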
