For a production environment, it's recommended that you configure the RGW on a physical, dedicated machine. However, if your object storage workload is relatively light, you can consider using one of the monitor machines as an RGW node. The RGW is a separate service that connects externally to a Ceph cluster and provides object storage access to its clients. In a production environment, it's recommended that you run more than one instance of the RGW behind a load balancer, as shown in the following diagram:
Starting with the Firefly release of Ceph, a new RGW frontend was introduced: Civetweb, a lightweight, standalone web server. Civetweb is embedded directly into the ceph-radosgw service, making the Ceph object storage service quicker and easier to deploy.
In the following recipes, we will demonstrate the RGW configuration using Civetweb on a virtual machine that will interact with the same Ceph cluster that we have created in Chapter 1, Ceph – Introduction and Beyond.
To run the Ceph object storage service, we should have a running Ceph cluster and the RGW node should have access to the Ceph network.
As demonstrated in earlier chapters, we will boot up a virtual machine using Vagrant and configure that as our RGW node.
1. Create rgw-node1 using Vagrantfile, as we have done for the Ceph nodes in Chapter 1, Ceph – Introduction and Beyond. Make sure you are on the host machine and under the ceph-cookbook repository before bringing up rgw-node1 using Vagrant:

   # cd ceph-cookbook
   # vagrant up rgw-node1
2. Once rgw-node1 is up, check the Vagrant status and log in to the node:

   $ vagrant status rgw-node1
   $ vagrant ssh rgw-node1
3. Check that rgw-node1 can reach the Ceph cluster nodes:

   # ping ceph-node1 -c 3
   # ping ceph-node2 -c 3
   # ping ceph-node3 -c 3
4. Check the hosts file entry and hostname settings on rgw-node1:

   # cat /etc/hosts | grep -i rgw
   # hostname
   # hostname -f
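The hosts file checks above can be scripted so that every node name is verified in one pass before you start pinging. A minimal sketch, assuming only standard POSIX tools; the check_hosts_entries helper is our own naming, not part of Ceph or Vagrant:

```shell
#!/bin/sh
# Hypothetical helper: check that each required node name appears in a hosts
# file, so name resolution will work before the cluster nodes are contacted.
check_hosts_entries() {
  hosts_file=$1; shift
  missing=0
  for name in "$@"; do
    if grep -qw "$name" "$hosts_file"; then
      echo "OK: $name"
    else
      echo "MISSING: $name" >&2
      missing=1
    fi
  done
  return $missing
}
# Example (on rgw-node1):
#   check_hosts_entries /etc/hosts ceph-node1 ceph-node2 ceph-node3 rgw-node1
```

The function returns nonzero if any name is absent, so it can gate later steps in a provisioning script.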
The previous recipe covered setting up a virtual machine for the RGW. In this recipe, we will learn to set up the ceph-radosgw service on that node.
1. We will install and configure the RGW on rgw-node1. To do this, we will use the ceph-deploy tool from ceph-node1, which is our Ceph monitor node. Log in to ceph-node1 and perform the following commands.
2. Verify that ceph-node1 can reach rgw-node1 over the network using the following command:

   # ping rgw-node1 -c 1
3. Allow ceph-node1 a password-less SSH login to rgw-node1 and test the connection.
4. From ceph-node1, install the Ceph packages and copy the ceph.conf file to rgw-node1:

   # cd /etc/ceph
   # ceph-deploy install rgw-node1
   # ceph-deploy config push rgw-node1
5. Log in to rgw-node1 and install the ceph-radosgw package:

   # yum install ceph-radosgw
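After the package installation it can be worth confirming that the binaries the next recipe depends on are actually on PATH. A minimal sketch; check_installed is our own helper name, not a Ceph tool:

```shell
#!/bin/sh
# Hypothetical post-install check: confirm each named command is on PATH.
check_installed() {
  for cmd in "$@"; do
    if command -v "$cmd" >/dev/null 2>&1; then
      echo "installed: $cmd"
    else
      echo "not found: $cmd" >&2
      return 1
    fi
  done
}
# Example (on rgw-node1): check_installed radosgw ceph
```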
Since we are using the embedded Civetweb web server for the RGW, most of the setup has already been handled by the ceph-radosgw service. In this recipe, we will create Ceph authentication keys for the Ceph RGW user and update the ceph.conf file.
1. Create a keyring on ceph-node1:

   # cd /etc/ceph
   # ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
   # chmod +r /etc/ceph/ceph.client.radosgw.keyring
2. Generate a key for the RGW instance named gateway, grant it the required capabilities, register it with the cluster, and copy the keyring to rgw-node1:

   # ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.gateway --gen-key
   # ceph-authtool -n client.radosgw.gateway --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
   # ceph auth add client.radosgw.gateway -i /etc/ceph/ceph.client.radosgw.keyring
   # scp /etc/ceph/ceph.client.radosgw.keyring rgw-node1:/etc/ceph/ceph.client.radosgw.keyring
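Before copying the keyring across, it can be worth sanity-checking that the expected entity and its key are actually present in the file. A minimal sketch; the check_keyring_entity helper is our own naming, not a Ceph command:

```shell
#!/bin/sh
# Hypothetical helper: verify that a keyring file contains a section for the
# given entity name and a "key = " line, before the file is shipped anywhere.
check_keyring_entity() {
  keyring=$1
  entity=$2
  if grep -qF "[$entity]" "$keyring" && grep -q "key = " "$keyring"; then
    echo "found $entity in $keyring"
  else
    echo "entity $entity missing from $keyring" >&2
    return 1
  fi
}
# Example (on ceph-node1):
#   check_keyring_entity /etc/ceph/ceph.client.radosgw.keyring client.radosgw.gateway
```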
3. Add the client.radosgw.gateway section to ceph.conf on rgw-node1. Make sure that the host value matches the output of the hostname -s command:

   [client.radosgw.gateway]
   host = rgw-node1
   keyring = /etc/ceph/ceph.client.radosgw.keyring
   rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
   log file = /var/log/ceph/client.radosgw.gateway.log
   rgw dns name = rgw-node1.cephcookbook.com
   rgw print continue = false
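If you provision nodes with scripts, the section above can be appended idempotently so that re-running the script doesn't duplicate it. A minimal sketch, assuming the same paths as the configuration shown; add_rgw_section is our own helper name:

```shell
#!/bin/sh
# Hypothetical helper: append the [client.radosgw.gateway] section to a
# ceph.conf only if it is not already present, so re-running is safe.
add_rgw_section() {
  conf=$1
  host=$2
  if grep -qF "[client.radosgw.gateway]" "$conf"; then
    echo "section already present in $conf"
    return 0
  fi
  cat >> "$conf" <<EOF

[client.radosgw.gateway]
host = $host
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
log file = /var/log/ceph/client.radosgw.gateway.log
rgw dns name = $host.cephcookbook.com
rgw print continue = false
EOF
  echo "section added to $conf"
}
# Example (on rgw-node1): add_rgw_section /etc/ceph/ceph.conf rgw-node1
```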
4. The ceph-radosgw startup script executes with the default user, apache. Change the default user from apache to root:

   # sed -i s"/DEFAULT_USER.*=.*'apache'/DEFAULT_USER='root'"/g /etc/rc.d/init.d/ceph-radosgw
5. Start the ceph-radosgw service and check its status:

   # service ceph-radosgw start
   # service ceph-radosgw status
6. The ceph-radosgw daemon should now be listening on the default port, 7480:

   # netstat -nlp | grep -i 7480