Configuring Ceph federated gateways

The Ceph RGW can be deployed in a federated configuration with multiple regions, and with multiple zones per region. As shown in the following diagram, multiple Ceph radosgw instances can be deployed in geographically separated locations. Configuring Ceph object gateway regions and the metadata synchronization agent helps maintain a single namespace, even though the radosgw instances run in different geographic locations or on different Ceph storage clusters.

(Diagram: multiple radosgw instances deployed across geographically separated regions, sharing a single namespace)

Another approach is to deploy one or more geographically separated Ceph radosgw instances within a single region, organized into logical containers known as zones. In this case, a data synchronization agent enables the service to maintain one or more copies of the master zone's data within the region, on the same Ceph cluster. These extra copies of data are important for backup and disaster recovery use cases.

(Diagram: multiple zones within a single region on the same Ceph cluster, with data synchronized from the master zone to the secondary zone)

In this recipe, we will learn to deploy the latter type of Ceph radosgw federation. We will create a master region, us, which will host two zones: the master zone, us-east, containing the RGW instance us-east-1, and the secondary zone, us-west, containing the RGW instance us-west-1. The following are the parameters and values that will be used:

  • Master Region → United States: us
  • Master Zone → United States region-East zone: us-east
  • Secondary Zone → United States region-West zone: us-west
  • Radosgw Instance-1 → United States region-East zone - Instance1: us-east-1
  • Radosgw Instance-2 → United States region-West zone - Instance1: us-west-1

How to do it…

  1. From your host machine, bring up the virtual machines us-east-1 and us-west-1 using Vagrant:
    $ cd ceph-cookbook
    $ vagrant status us-east-1 us-west-1
    $ vagrant up us-east-1 us-west-1
    $ vagrant status us-east-1 us-west-1
    

    From now on, we will execute all the commands from one of the Ceph monitor machines, unless otherwise specified. In our case, we will use ceph-node1. Next, we will create the Ceph pools that will be used to store critical object storage information, such as buckets, bucket indexes, a global catalog, logs, S3 user IDs, Swift user accounts, e-mail addresses, and so on.

  2. Create Ceph pools for the us-east zone:
    # ceph osd pool create .us-east.rgw.root 32 32
    # ceph osd pool create .us-east.rgw.control 32 32
    # ceph osd pool create .us-east.rgw.gc 32 32
    # ceph osd pool create .us-east.rgw.buckets 32 32
    # ceph osd pool create .us-east.rgw.buckets.index 32 32
    # ceph osd pool create .us-east.rgw.buckets.extra 32 32
    # ceph osd pool create .us-east.log 32 32
    # ceph osd pool create .us-east.intent-log 32 32
    # ceph osd pool create .us-east.usage 32 32
    # ceph osd pool create .us-east.users 32 32
    # ceph osd pool create .us-east.users.email 32 32
    # ceph osd pool create .us-east.users.swift 32 32
    # ceph osd pool create .us-east.users.uid 32 32
    
  3. Create Ceph pools for the us-west zone:
    # ceph osd pool create .us-west.rgw.root 32 32
    # ceph osd pool create .us-west.rgw.control 32 32
    # ceph osd pool create .us-west.rgw.gc 32 32
    # ceph osd pool create .us-west.rgw.buckets 32 32
    # ceph osd pool create .us-west.rgw.buckets.index 32 32
    # ceph osd pool create .us-west.rgw.buckets.extra 32 32
    # ceph osd pool create .us-west.log 32 32
    # ceph osd pool create .us-west.intent-log 32 32
    # ceph osd pool create .us-west.usage 32 32
    # ceph osd pool create .us-west.users 32 32
    # ceph osd pool create .us-west.users.email 32 32
    # ceph osd pool create .us-west.users.swift 32 32
    # ceph osd pool create .us-west.users.uid 32 32
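
    If you prefer, the pool creation in the two preceding steps can be scripted. The following is a minimal sketch that produces the same pool names with the same PG count of 32; adjust the pool list or the PG numbers if your layout differs:
    # for zone in us-east us-west; do
        for pool in rgw.root rgw.control rgw.gc rgw.buckets rgw.buckets.index \
                    rgw.buckets.extra log intent-log usage users users.email \
                    users.swift users.uid; do
          ceph osd pool create .${zone}.${pool} 32 32
        done
      done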
    
  4. Verify the newly created Ceph pools:
    # ceph osd lspools
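
    If you want to check the zone pools in one go, rados lspools prints one pool per line, which is easy to filter (a quick sanity check; each zone should show the thirteen pools created previously):
    # rados lspools | grep us-east
    # rados lspools | grep us-west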
    
  5. The RGW instance requires a user and keys to talk to the Ceph storage cluster:
    1. Create a keyring using the following command:
      # ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
      # chmod +r /etc/ceph/ceph.client.radosgw.keyring
      
    2. Generate a gateway username and key for each instance:
      # ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.us-east-1 --gen-key
      # ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.us-west-1 --gen-key
      
    3. Add capabilities to keys:
      # ceph-authtool -n client.radosgw.us-east-1 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
      # ceph-authtool -n client.radosgw.us-west-1 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
      
    4. Add keys to the Ceph storage cluster:
      # ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.us-east-1 -i /etc/ceph/ceph.client.radosgw.keyring
      # ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.us-west-1 -i /etc/ceph/ceph.client.radosgw.keyring
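
    Optionally, confirm that both keys were registered with the expected capabilities; ceph auth get prints the key and caps for a given entity:
      # ceph auth get client.radosgw.us-east-1
      # ceph auth get client.radosgw.us-west-1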
      
  6. Add RGW instances to the Ceph configuration file, that is, /etc/ceph/ceph.conf:
    [client.radosgw.us-east-1]
    host = us-east-1
    rgw region = us
    rgw region root pool = .us.rgw.root
    rgw zone = us-east
    rgw zone root pool = .us-east.rgw.root
    keyring = /etc/ceph/ceph.client.radosgw.keyring
    rgw dns name = us-east-1.cephcookbook.com
    rgw socket path = /var/run/ceph/client.radosgw.us-east-1.sock
    log file = /var/log/ceph/client.radosgw.us-east-1.log
    
    [client.radosgw.us-west-1]
    host = us-west-1
    rgw region = us
    rgw region root pool = .us.rgw.root
    rgw zone = us-west
    rgw zone root pool = .us-west.rgw.root
    keyring = /etc/ceph/ceph.client.radosgw.keyring
    rgw dns name = us-west-1.cephcookbook.com
    rgw socket path = /var/run/ceph/client.radosgw.us-west-1.sock
    log file = /var/log/ceph/client.radosgw.us-west-1.log
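
    The FQDNs used for the gateway endpoints later in this recipe (us-east-1.cephcookbook.com and us-west-1.cephcookbook.com) must be resolvable from the nodes involved, whether through DNS or /etc/hosts entries. A quick way to check name resolution before moving on:
    # getent hosts us-east-1.cephcookbook.com
    # getent hosts us-west-1.cephcookbook.com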
    
  7. Next, we will install Ceph packages on the us-east-1 and us-west-1 nodes using ceph-deploy from the ceph-node1 machine. Finally, we will add configuration files to these nodes:
    1. Allow ceph-node1 to perform a password-less SSH login to the RGW nodes. The default root password is vagrant:
      # ssh-copy-id us-east-1
      # ssh-copy-id us-west-1
      
    2. Install Ceph packages on the RGW instances:
      # ceph-deploy install us-east-1 us-west-1
      
    3. Once Ceph packages are installed on the RGW instances, push the Ceph configuration files to them:
      # ceph-deploy --overwrite-conf config push us-east-1 us-west-1
      
    4. Copy the RGW keyring from ceph-node1 to the gateway instances:
      # scp /etc/ceph/ceph.client.radosgw.keyring us-east-1:/etc/ceph
      # scp /etc/ceph/ceph.client.radosgw.keyring us-west-1:/etc/ceph
      
    5. Next, install the ceph-radosgw and radosgw-agent packages on the us-east-1 and us-west-1 radosgw instances:
      # ssh us-east-1 yum install -y ceph-radosgw radosgw-agent
      # ssh us-west-1 yum install -y ceph-radosgw radosgw-agent
      
    6. For simplicity, we will disable the firewall on these nodes:
      # ssh us-east-1 systemctl disable firewalld
      # ssh us-east-1 systemctl stop firewalld
      # ssh us-west-1 systemctl disable firewalld
      # ssh us-west-1 systemctl stop firewalld
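
    Before moving on, you may want to verify that each gateway node can reach the Ceph cluster with its own key (a quick sanity check using the keyring and user names set up earlier):
      # ssh us-east-1 ceph -s --id radosgw.us-east-1 --keyring /etc/ceph/ceph.client.radosgw.keyring
      # ssh us-west-1 ceph -s --id radosgw.us-west-1 --keyring /etc/ceph/ceph.client.radosgw.keyring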
      
  8. Create the us region. Log in to us-east-1 and execute the following commands:
    1. Create a region infile called us.json under the /etc/ceph directory with the following content. You can refer to the author's version of the us.json file provided with the code bundle of this chapter:
      { "name": "us",
        "api_name": "us",
        "is_master": "true",
        "endpoints": [
          "http://us-east-1.cephcookbook.com:7480/"],
        "master_zone": "us-east",
        "zones": [
          { "name": "us-east",
            "endpoints": [
              "http://us-east-1.cephcookbook.com:7480/"],
              "log_meta": "true",
              "log_data": "true"},
              { "name": "us-west",
                "endpoints": [
                  "http://us-west-1.cephcookbook.com:7480/"],
                "log_meta": "true",
                "log_data": "true"}],
        "placement_targets": [
         {
           "name": "default-placement",
           "tags": []
         }
        ],
      "default_placement": "default-placement"}
      
    2. Create the us region with the us.json infile that you just created:
      # cd /etc/ceph
      # radosgw-admin region set --infile us.json --name client.radosgw.us-east-1
      
    3. Delete the default region if it exists:
      # rados -p .us.rgw.root rm region_info.default --name client.radosgw.us-east-1
      
    4. Set the us region as the default region:
      # radosgw-admin region default --rgw-region=us --name client.radosgw.us-east-1
      
    5. Finally, update the region map:
      # radosgw-admin regionmap update --name client.radosgw.us-east-1
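
    To confirm that the region was stored correctly, you can print it back; the output should resemble the us.json infile, with us as the master region and the us-east and us-west zones listed:
      # radosgw-admin region get --rgw-region=us --name client.radosgw.us-east-1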
      
  9. Generate access_keys and secret_keys for us-east and us-west zones:
    1. Generate an access_key for the us-east zone:
      # < /dev/urandom tr -dc A-Z-0-9 | head -c${1:-20};echo;
      
    2. Generate a secret_key for the us-east zone:
      # < /dev/urandom tr -dc A-Z-0-9-a-z | head -c${1:-40};echo;
      
    3. Generate access_key for the us-west zone:
      # < /dev/urandom tr -dc A-Z-0-9 | head -c${1:-20};echo;
      
    4. Generate secret_key for the us-west zone:
      # < /dev/urandom tr -dc A-Z-0-9-a-z | head -c${1:-40};echo;
      
  10. Create a zone infile called us-east.json for the us-east zone. You can refer to the author's version of the us-east.json file provided with the code bundle of this chapter:
    { "domain_root": ".us-east.domain.rgw",
    "control_pool": ".us-east.rgw.control",
    "gc_pool": ".us-east.rgw.gc",
    "log_pool": ".us-east.log",
    "intent_log_pool": ".us-east.intent-log",
    "usage_log_pool": ".us-east.usage",
    "user_keys_pool": ".us-east.users",
    "user_email_pool": ".us-east.users.email",
    "user_swift_pool": ".us-east.users.swift",
    "user_uid_pool": ".us-east.users.uid",
    "system_key": { "access_key": " XNK0ST8WXTMWZGN29NF9", "secret_key": "7VJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5"},
    "placement_pools": [
    { "key": "default-placement",
    "val": { "index_pool": ".us-east.rgw.buckets.index",
    "data_pool": ".us-east.rgw.buckets"}
    }
    ]
    }
    
  11. Add the us-east zone configuration, using the infile, to both the east and west gateway instances:
    # radosgw-admin zone set --rgw-zone=us-east --infile us-east.json --name client.radosgw.us-east-1
    

    Now, run the following command:

    # radosgw-admin zone set --rgw-zone=us-east --infile us-east.json --name client.radosgw.us-west-1
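
    To verify that the zone configuration was applied, you can read it back from either instance; the output should match the us-east.json infile:
    # radosgw-admin zone get --rgw-zone=us-east --name client.radosgw.us-east-1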
    
  12. Similarly, for the us-west zone, create the us-west.json infile with the following contents. You can refer to the author's version of the us-west.json file provided with the code bundle of this chapter:
    { "domain_root": ".us-west.domain.rgw",
      "control_pool": ".us-west.rgw.control",
      "gc_pool": ".us-west.rgw.gc",
      "log_pool": ".us-west.log",
      "intent_log_pool": ".us-west.intent-log",
      "usage_log_pool": ".us-west.usage",
      "user_keys_pool": ".us-west.users",
      "user_email_pool": ".us-west.users.email",
      "user_swift_pool": ".us-west.users.swift",
      "user_uid_pool": ".us-west.users.uid",
      "system_key": { "access_key": " AAK0ST8WXTMWZGN29NF9", "secret_key": " AAJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5"},
      "placement_pools": [
        { "key": "default-placement",
          "val": { "index_pool": ".us-west.rgw.buckets.index",
                  "data_pool": ".us-west.rgw.buckets"}
        }
      ]
    }
    
  13. Add the us-west zone configuration, using the infile, to both the east and west gateway instances:
     # radosgw-admin zone set --rgw-zone=us-west --infile us-west.json --name client.radosgw.us-east-1
    
    # radosgw-admin zone set --rgw-zone=us-west --infile us-west.json --name client.radosgw.us-west-1
    
  14. Delete the default zone if it exists:
    # rados -p .rgw.root rm zone_info.default --name client.radosgw.us-east-1
    
  15. Update the region map:
    # radosgw-admin regionmap update --name client.radosgw.us-east-1
    
  16. After configuring zones, create zone users:
    1. Create the us-east zone user for the us-east-1 gateway instance. Use the same access_key and secret_key that we generated earlier for the us-east zone:
      # radosgw-admin user create --uid="us-east" --display-name="Region-US Zone-East" --name client.radosgw.us-east-1 --access_key="XNK0ST8WXTMWZGN29NF9" --secret="7VJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5" --system
      
    2. Create the us-west zone user for the us-west-1 gateway instance. Use the same access_key and secret_key that we generated earlier for the us-west zone:
      # radosgw-admin user create --uid="us-west" --display-name="Region-US Zone-West" --name client.radosgw.us-west-1 --access_key="AAK0ST8WXTMWZGN29NF9" --secret="AAJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5" --system
      
    3. Create the us-east zone user for the us-west-1 gateway instance. Use the same access_key and secret_key that we generated earlier for the us-east zone:
      # radosgw-admin user create --uid="us-east" --display-name="Region-US Zone-East" --name client.radosgw.us-west-1 --access_key="XNK0ST8WXTMWZGN29NF9" --secret="7VJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5" --system
      
    4. Create the us-west zone user for the us-east-1 gateway instance. Use the same access_key and secret_key that we generated earlier for the us-west zone:
      # radosgw-admin user create --uid="us-west" --display-name="Region-US Zone-West" --name client.radosgw.us-east-1 --access_key="AAK0ST8WXTMWZGN29NF9" --secret="AAJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5" --system
      
  17. Update the ceph-radosgw init script and set the default user to root. By default, ceph-radosgw runs as the apache user, and you might encounter errors if the apache user is not present:
    # sed -i s"/DEFAULT_USER.*=.*'apache'/DEFAULT_USER='root'"/g /etc/rc.d/init.d/ceph-radosgw
    
  18. Log in to the us-east-1 and us-west-1 nodes and restart the ceph-radosgw service:
    # systemctl restart ceph-radosgw
    
  19. To verify that the region, zone, and radosgw configurations are correct, execute the following commands from the us-east-1 node:
    # radosgw-admin regions list --name client.radosgw.us-east-1
    # radosgw-admin regions list --name client.radosgw.us-west-1
    # radosgw-admin zone list --name client.radosgw.us-east-1
    # radosgw-admin zone list --name client.radosgw.us-west-1
    # curl http://us-east-1.cephcookbook.com:7480
    # curl http://us-west-1.cephcookbook.com:7480
    
  20. Set up multisite data replication by creating the cluster-data-sync.conf file with the following contents:
    src_zone: us-east
    source: http://us-east-1.cephcookbook.com:7480
    src_access_key: XNK0ST8WXTMWZGN29NF9
    src_secret_key: 7VJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5
    dest_zone: us-west
    destination: http://us-west-1.cephcookbook.com:7480
    dest_access_key: AAK0ST8WXTMWZGN29NF9
    dest_secret_key: AAJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5
    log_file: /var/log/radosgw/radosgw-sync-us-east-west.log
    
  21. Start the data synchronization agent. Once the data sync has started, the agent will begin logging its progress to the console and to the log file specified in cluster-data-sync.conf:
    # radosgw-agent -c cluster-data-sync.conf
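
    Once the agent is running, metadata created in the master zone, such as users and buckets, should also become visible in the secondary zone. A simple spot check, assuming the agent has completed at least one sync pass, is to compare the metadata seen by each instance:
    # radosgw-admin metadata list user --name client.radosgw.us-east-1
    # radosgw-admin metadata list user --name client.radosgw.us-west-1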
    