Deployment options
In this chapter we describe the different deployment options for IBM Cloud Object Storage.
4.1 Introduction
The IBM COS appliances can be deployed on physical hardware, as a Docker container, or as a VMware appliance.
4.2, “IBM hardware appliances” and 4.3, “Third-party appliances” describe the IBM COS hardware appliances and the IBM-certified third-party hardware appliances on which IBM COS can be installed.
4.4, “Embedded Accesser” on page 61 explains how to enable the Embedded Accesser service on a Slicestor appliance.
4.5, “IBM Cloud Object Storage system virtual appliances” and 4.6, “Appliance containers on Docker” show how to install and configure the Manager and Accesser appliances as VMware appliances and Docker containers. Docker containers are quick to deploy, require fewer resources than virtual machines, and do not need a hypervisor. On the other hand, a VMware environment provides more security and isolation.
4.2 IBM hardware appliances
As of May 2019, the following IBM hardware appliances (Gen2) have been released for IBM COS:
Manager node Manager M10: Manager designed to support up to 4,500 simultaneous appliances in a single Cloud Object Storage Network.
Accesser node Accesser A10: Provides up to 15% more read and write performance (can complete up to 15% more reads and writes in the same time frame) than the Accesser 3105 appliance.
Slicestor node
The following options are available:
 – Slicestor 12: 3U device that contains 12 drives of 4, 8, or 12 TB.
 – Slicestor 53: 5U device that contains 53 drives of 4, 8, or 12 TB.
 – Slicestor 106: 5U device that contains 106 drives of 4, 8, or 12 TB.
The specifications of these devices are described in detail in Chapter 3, “IBM Cloud Object Storage Gen2 hardware appliances” on page 43.
In addition, the following Gen1 IBM hardware appliances are still supported for IBM COS:
3401/3403 – M01 (Manager 3105)
3401/3403 – A00 (Accesser 3105)
3401/3403 – A01 (Accesser 4105)
3401/3403 – S10 (Slicestor 2212A)
3401/3403 – S01 (Slicestor 2448)
3401/3403 – S02 (Slicestor 3448)
3401/3403 – S03 (Slicestor 2584)
More information regarding the specifications of these devices can be found in IBM Knowledge Center for COS:
4.3 Third-party appliances
In addition to the IBM hardware appliances, IBM certifies a number of third-party servers from vendors such as HPE, Cisco, Dell, and Lenovo. For a complete list of the supported third-party appliances, contact your IBM Sales Representative.
 
Important: Hardware appliances from different hardware vendors can be deployed as part of an IBM COS system. The only limitation is that all of the appliances in a device set must be identical.
4.4 Embedded Accesser
This feature provides customers an opportunity to save on expenses by using one physical appliance for both Accesser and Slicestor appliance functions.
The Embedded Accesser functions can be enabled on an existing storage pool or newly created storage pool. When enabled, all the Slicestor appliances in the storage pool have the Embedded Accesser functions activated. Consider the following items before you use the Embedded Accesser functions:
Hardware with a minimum of 10 GbE interconnect and a RAM capacity of 96 GB is advised for a full-scale deployment of Slicestor devices with Embedded Accesser functions.
Performance impacts. Not all workloads are suited for Embedded Accesser functionality:
 – Spread the load across all of the available Embedded Accesser appliances.
 – There is some degree of performance degradation on all workloads with Embedded Accesser appliances.
 – Some workloads, such as small file writes, are more severely impacted than others.
Service restart. The Slicestor appliance, which handles user I/O traffic, is restarted when this function is enabled.
4.4.1 Enabling Embedded Accesser functions
To enable this feature on storage pools, perform the following steps.
1. Go to the storage pool that is being targeted for Embedded Accesser functionality.
2. Click the Configure button to reconfigure the storage pool.
3. Click the Change button.
4. Select the box labeled Enable the embedded Accesser service on all the Slicestor devices belonging to this storage pool.
5. Click Update.
6. When the configuration change is activated, the Slicestor appliances restart. Wait for all the Slicestor appliances to restart before you resume I/O operations.
4.5 IBM Cloud Object Storage system virtual appliances
IBM COS virtual appliances (vAppliances) are bundled as Open Virtual Appliance (OVA) images to be deployed on VMware vSphere 5.5 or later. Table 4-1 shows the available vAppliance types.
Table 4-1 Available vAppliance types
vAppliance type | Purpose | High availability option
vAccesser | Provides access to Simple Object over HTTP vaults. | Deploy multiple vAccesser appliances with load balancing.
vManager | Configures, monitors, and administers an IBM Cloud Object Storage System. | Install two vManagers for manual failover or use VMware HA for automated failover.
 
Attention: Virtual Slicestor devices are not supported in an IBM COS environment.
4.5.1 Configure the appliance environment
In this section we describe how to configure the appliance environment.
Deploy the OVA template
IBM COS virtual appliances require VMware vSphere Hypervisor ESXi 5.5 or later.
Log in to the vSphere Client:
1. Select File > Deploy OVA Template.
2. Respond to the queries with information that is specific to your deployment.
3. Click Power On After Deployment and click Finish.
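If you prefer a command-line deployment, the same OVA image can also be deployed with VMware’s ovftool utility. The following is a minimal sketch only; the image file name, virtual machine name, datastore, and ESXi host are hypothetical placeholders:
# ovftool --name=cos-vmanager01 -ds=datastore1 --powerOn clevos-manager-vapp.ova vi://root@esxi01.example.com/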
Set virtual machine hardware properties
After deploying the virtual appliance on the host system, the hardware properties can be modified on the virtual Manager and virtual Accesser appliance.
These settings suffice for a demonstration or a lightly loaded production IBM COS. For systems with higher performance expectations, you need to provision more resources.
 
Note: Contact IBM COS Customer Support for advised settings for a particular use case.
Table 4-2 shows the minimum settings for all virtual appliances.
Table 4-2 Minimum virtual appliance requirements
Component | vAccesser | vManager
RAM (GB) | 32 | 64
vCPU | 2 | 4
SCSI Controller 0 | Paravirtual | Paravirtual
Hard disk 1 (OS) | Virtual drive (100 GB) | Virtual drive (100 GB)
Network adapter 1 | 1 GbE | 1 GbE
Start the virtual appliance for the first time
Go to the Console after powering on the virtual appliance. The initial IBM settings appear. See 5.5, “Step 3: Appliance configuration” on page 78 for details about starting the virtual appliance for the first time.
4.6 Appliance containers on Docker
Docker is a set of tools for application layer virtualization that resides on top of a running Linux kernel. Customers can use their familiar hardware and operating systems to run IBM Cloud Object Storage System software alongside existing applications. They can apply the benefits of virtualization with significantly lower overhead than a traditional hypervisor.
4.6.1 Accesser container
The Accesser container can be deployed to take advantage of your existing container environment. The appliance container image can be deployed to customer-managed hardware on a customer-managed operating system. It can be monitored and managed by using the Manager user interface and the Manager API.
Memory requirements
The Accesser container tries to scale its RAM usage automatically based on the amount of system memory. The maximum amount of RAM that is allocated to the Accesser container can be set through the MAX_MEMORY environment variable. Table 4-3 shows the memory requirements for Accesser containers.
Table 4-3 Memory requirements for Accesser containers
Deployment scenario | Suggested system memory (GB) | Suggested MAX_MEMORY setting (MB)
Single site or geo-dispersed | 16+ | 4,000
Two sites mirrored | 32+ | 8,000
 
 
Attention: For scenarios where the file size is large (> 10 GB) and the number of concurrent uploads and downloads is large (> 50), contact IBM customer support for more guidance.
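As an illustration, MAX_MEMORY can be supplied as an environment variable when the container is started. The following is a minimal sketch for a single-site deployment, using the image name that appears in the deployment examples later in this chapter:
# sudo docker run -itd --net=host --env="MAX_MEMORY=4000" clevos-accesser:3.14.3.65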
Storage requirements
The Accesser container needs a modest amount of storage for logs and other state information. IBM recommends 60 GB capacity.
4.6.2 Manager container
The Manager container tries to scale its RAM usage automatically based on the amount of memory in the system.
The maximum amount of RAM allocated to the Manager container can be set through the DS_MYSQL_MAX_MEMORY and DS_MANAGER_MAX_MEMORY environment variables.
Table 4-4 shows the memory requirements for the Manager container.
 
Note: The Manager container has a base memory requirement of approximately 2 GB in addition to the listed settings.
Table 4-4 Memory requirements for Manager containers
Deployment scenario | Suggested system memory (GB) | Suggested DS_MYSQL_MAX_MEMORY setting (MB) | Suggested DS_MANAGER_MAX_MEMORY setting (MB)
All | 16+ | 25% of system RAM | 25% of system RAM
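For example, on a host with 16 GB of system memory, 25% corresponds to roughly 4,096 MB, so both variables might be set as follows when the Manager container is started (a sketch only, using the image name that appears in the deployment examples later in this chapter):
# sudo docker run -itd --net=host --env="DS_MYSQL_MAX_MEMORY=4096" --env="DS_MANAGER_MAX_MEMORY=4096" clevos-manager:3.14.3.65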
Storage requirements
IBM advises 1 TB per 1,000 vaults.
4.6.3 System and network configuration
Servers running COS containers must be configured for clock synchronization through NTP. The host OS should synchronize to the same NTP server as all of the other nodes in the IBM COS system.
 
Important: Unlike the Manager appliance, the Manager container cannot provide NTP synchronization services. Devices that are managed by a Manager container must be configured to use an external NTP server.
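For example, on a host that runs ntpd, pointing the host OS at the same NTP server as the rest of the IBM COS system amounts to a single entry in /etc/ntp.conf; the server name below is a placeholder for your own NTP server:
server ntp.example.com iburst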
More information on NTP synchronization can be found in IBM Knowledge Center for COS:
You can select your current version of the IBM COS software at the top of the page (Change version or product).
Network ports
Appliance containers need connectivity to all system nodes. Table 4-5 shows the port usage for Docker containers.
Table 4-5 Port usage for appliance containers
Destination | Port | Purpose
Slicestor nodes | TCP 5000 | Data operations and registry lookup
Slicestor nodes | TCP 7 (OPEN or REJECT) | Round-trip time calculation
NTP server | TCP or UDP 123 | NTP messaging (configured in the host)
Manager node | TCP 443 (by default) | Manager API vault usage query via HTTPS
Manager CNC | TCP 8088 | Management control port (non-Manager nodes)
 
Note: TCP port 7 can be closed, but any firewall rules should send REJECT messages and not drop packets.
4.6.4 Configure the appliance container
The IBM appliance containers support two Docker network operation modes.
--net="host"
The container shares networking with the host OS. Only one appliance container that uses --net="host" can run at a time on a single host. When you use --net="host", some ports that are used by services inside the container can conflict with ports that are opened by services in the host's network namespace. These ports can be remapped by using the DS_CNC_* environment variables for the container, or reconfigured through the Manager Web Interface for the Accesser node service.
 
Restriction: The Docker host must not use localhost as its hostname when using an appliance container.
--net="bridge"
The container uses a separate network namespace from the host OS. Docker automatically allocates an internal NAT-like network. Some ports must be published from the container to the host.
 
Note: The hostname parameter must be set to a valid host name other than localhost.
Container ports
Appliance containers run services on several ports. Depending on whether --net="host" or --net="bridge" is used, some of these ports might need to be published to host ports with the -p flag so that the services can be accessed from outside the Docker host. Port forwarding is enabled by adding -p to the docker run command, for example -p <CNC_PORT>:8088, where <CNC_PORT> is a user-defined host port to which the container port is forwarded. Any port that is not currently in use can be chosen.
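For example, the following minimal sketch starts an Accesser container in bridge mode and publishes its service ports (listed in Table 4-6) to the same port numbers on the host; the host name, Manager IP address, and image tag are illustrative:
# sudo docker run -itd --net="bridge" --hostname "docker-accesser10" -p 80:80 -p 443:443 -p 8080:8080 -p 8088:8088 -p 8192:8192 -p 8443:8443 --env="DS_MANAGER_IP=192.168.99.11" clevos-accesser:3.14.3.65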
Table 4-6 shows the required ports for running appliance containers.
Table 4-6 Required ports for appliance containers
TCP port | Accesser container purpose | Manager container purpose
80 | Accesser software HTTP | -
443 | Accesser software HTTPS | Manager software HTTPS
8080 | Accesser software HTTP | -
8088 | Manager CNC services | -
8192 | Device API | -
8443 | Accesser software HTTPS | -
4.6.5 Deployment
 
Note: Docker commands must be run with root privileges. In the following examples, this is represented by using the sudo command before each docker command.
Prerequisites
These are the prerequisites for deployment:
Docker-compatible Linux operating system installation
NTP synchronization must be configured on the host operating system
Docker 1.3 or later
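These prerequisites can be verified quickly on the host. The following sketch assumes a systemd-based distribution for the clock synchronization check:
# sudo docker --version
# timedatectl status | grep -i synchronized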
API compatibility
These are the considerations for API compatibility:
All APIs supported by the Accesser appliance are also supported by the Accesser container.
All APIs supported by the Manager appliance are also supported by the Manager container.
Creating a new container
 
Tip: The Docker parameters can be found on the IBM COS Knowledge Center:
You can select your current version of the IBM COS software at the top of the page (Change version or product).
To create a new container, complete the following steps:
1. Load the container image into Docker running on your server:
# cat clevos-3.14.3.65-accesser-container.tar.gz | sudo docker load
2. List the container images to find either the repository/tag pair or image ID to start a container. See Example 4-1.
Example 4-1 List the container images
# sudo docker images
 
REPOSITORY        TAG         IMAGE ID       CREATED       VIRTUAL SIZE
clevos-manager    3.14.3.65   bd60b4c172a3   2 weeks ago   2.64GB
clevos-accesser   3.14.3.65   4556463ecdd9   2 weeks ago   2.05GB
3. Start an appliance container, as shown in Example 4-2.
Example 4-2 Start an appliance container
# sudo docker run -itd --net=host -v /var/lib/accesser-container01:/container-persistence:rw --hostname "docker-accesser9" --name="docker-accesser09" --env="DS_MANAGER_IP=192.168.99.11" --env="DS_MANAGER_AUTOACCEPT=true" clevos-accesser:3.14.3.65
 
bf6077050d639fa8e0c2ef48ab964fcf9d2682645670c3437247d6e2391e5a4c
This is the container ID.
4. Approve the container instance in the Manager web interface. See Figure 4-1.
Figure 4-1 Device approval
Stopping a running container
Enter the docker stop command with the container ID in the command line to stop the container:
# sudo docker stop bf6077050d63
Resuming a stopped container
Enter the docker start command with the container ID in the command line to resume the container:
# sudo docker start bf6077050d63
Executing an interactive shell
To troubleshoot or debug a container, enter the docker exec command with the container ID and a shell file and path in the command line:
# sudo docker exec -it bf6077050d63 /bin/bash
 
Note: If the -i and -t parameters are not specified in the original docker run statement when starting the container, terminal-related error messages might be displayed while trying to use commands inside the container.
Upgrading a container
Containers cannot be upgraded through the Manager device Web Interface. A container must be upgraded on the server on which it runs.
 
Note: The previous container must have been run with a persistent volume mounted into container-persistence with -v, as shown in “Creating a new container” on page 66.
To upgrade a container, complete the following steps:
1. Load the new container image into Docker:
# gunzip -c clevos-3.14.3.81-accesser-container.tar.gz | sudo docker load
2. Stop the old container:
# sudo docker stop 27c60234bf89
3. Run the new container image using the same persistent volume, environment variables, and hostname used for the previous container.
 
Note: If the hostname (--hostname) is not specified when --net="host" is used, the container inherits the hostname of the host OS.
# sudo docker run -itd --env="DS_MANAGER_IP=192.168.99.11" --env="DS_MANAGER_AUTOACCEPT=true" -v /home/data/container-data-1:/container-persistence:rw --net="host" clevos-accesser:3.14.3.81
4. When the new container has started, remove the old container:
# sudo docker rm 27c60234bf89
 
Attention: This removes the container instance from the Docker application, but does not remove the container image from the host operating system. That must be done separately.
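If the old container image is no longer needed, it can be removed from the host separately; for example, using the image name and tag shown in Example 4-1:
# sudo docker rmi clevos-accesser:3.14.3.65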
Converting from a physical or virtual Manager device to a Manager container
Perform the following steps:
1. Back up the Manager appliance.
2. Configure and start a Manager container instance that is running the same software version as the Manager appliance.
3. Restore the backup of the Manager appliance on the Manager container.
4. Reconfigure any appliances joined to the system served by the Manager appliance if the IP address or port from which the Manager services originate has changed.
Converting from Manager container to a physical or virtual Manager appliance
Perform the following steps:
1. Back up the Manager container.
2. Image and configure a Manager appliance running the same software version as the Manager container.
3. Restore the backup of the Manager container on the Manager appliance.
4. Reconfigure any appliances joined to the system served by the Manager container if the IP address or port from which the Manager services originate has changed.
Upgrading a system managed by a Manager container
A Manager container cannot be upgraded through the normal Manager UI orchestration. Upgrading of non-container devices in a Manager container system requires that the Manager container is upgraded externally before using the Manager device Web Interface to upgrade the remaining supported devices.
Perform the following steps:
1. Upgrade the Manager container to the desired version per the procedure in “Upgrading a container” on page 67.
2. Upgrade any Accesser container instances to the desired version per the procedure in “Upgrading a container” on page 67.
3. Use the Manager Web Interface to upgrade the remaining hardware or virtual devices.
Monitoring
Monitoring the appliance container in the Manager web interface is nearly identical to monitoring an appliance.
Because it is a software solution, the appliance container does not provide generalized hardware- and operating system-level monitoring.
Statistics that are not displayed (or provided in the Manager REST API or the Device API) include, but are not limited to, the following data:
CPU temperatures
Disk I/O
Disk temperatures
Fan speeds
Events and monitoring not performed include, but are not limited to, the following information:
RAID monitoring
CPU/Disk temperature alerts
Device reboot events
Kernel dump events (does not apply to containers)
Some graphs and stats for appliance containers might behave differently than expected:
CPU usage and system load
These graphs/stats reflect the CPU usage and system load of the host machine as a whole, not an individual appliance container.
Network I/O
These graphs reflect the interfaces that are visible to the appliance container and differ depending on the container's network settings (docker run --net={"host" | "bridge"}).
Extra rules and restrictions
The following are extra rules and restrictions:
The operator cannot reboot a device from within a container.
The Network Utility Tool (nut) commands do not work in a container.
NTP needs to be configured individually on the host of each Docker appliance container.
With Docker containers, you cannot separate the data channel and the client channel. Only the data channel can be used.
Troubleshooting
Container logs can be viewed in two locations:
On the host system, in the persistent volume that is mounted into the container. If the container was started with -v /home/admin/container-data:/container-persistence:rw, the logs are visible in /home/admin/container-data/logs/.
In the container, either /container-persistence/logs or /var/log.
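For example, with the volume mount shown above, the logs can be listed directly on the host:
# ls /home/admin/container-data/logs/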
In addition to logging, only non-hardware exceptions that occur when the various APIs are used are shown as events in the Manager Web Interface.
For more details about troubleshooting a Docker environment, visit IBM Knowledge Center for COS:
https://www.ibm.com/support/knowledgecenter/STXNRM_3.14.3/coss.doc/applianceContainerAdmin_troubleshooting.html
You can select your current version of the IBM COS software at the top of the page (Change version or product).