Installing IBM Spectrum Scale Container Native Storage Access and Container Storage Interface
This chapter describes how to install IBM Spectrum Scale Container Native Storage Access (CNSA) and Container Storage Interface (CSI) and includes the following topics:
 
2.1 Installing IBM Spectrum Scale CNSA and CSI
For more information about the installation steps for IBM Spectrum Scale CNSA and CSI, see the IBM Spectrum Scale CNSA and CSI topics in IBM Documentation.
This chapter provides an overview of the requirements, options, and steps to set up IBM Spectrum Scale CNSA and CSI. It also summarizes the basic configuration and preparation steps and offers a unified deployment with a central config.yaml file.
The installation of the IBM Spectrum Scale CNSA 5.1.0.3 release with CSI 2.1.0 requires two distinct installation steps because the CSI deployment is a separate step from the CNSA deployment. The IBM Spectrum Scale CSI deployment depends on a few manual steps that must be performed by an administration user after the IBM Spectrum Scale CNSA deployment and before the IBM Spectrum Scale CSI deployment.
These manual steps include:
Creating a local CSI user and password on the running IBM Spectrum Scale CNSA cluster:
oc exec -c liberty ibm-spectrum-scale-gui-0 -- /usr/lpp/mmfs/gui/cli/mkuser csi_admin -p CSI_PASSWORD -g CsiAdmin
Obtaining the local cluster ID of the created IBM Spectrum Scale CNSA cluster:
oc exec <ibm-spectrum-scale-core-pod> -- mmlscluster | grep 'GPFS cluster id'
If you configured your environment and OpenShift cluster to meet all the IBM Spectrum Scale Container Native Storage Access (CNSA) and CSI requirements, skip the next section and see “Deployment steps” on page 16.
2.2 Requirements
To install IBM Spectrum Scale CNSA and CSI on OpenShift 4.6 or higher, the following requirements must be met in addition to the other prerequisites for IBM Spectrum Scale CNSA and CSI:
The CNSA 5.1.0.3 tar archive is extracted on a local installation node with access to the OpenShift cluster (for example, by using oc commands).
A regular OpenShift cluster admin user with the cluster-admin role exists on the OpenShift cluster to deploy CNSA and push the CNSA images to the internal OpenShift image registry. For example, add an identity provider, such as htpasswd, and add a cluster-admin user (see the sketch after this list):
$ oc adm policy add-cluster-role-to-user cluster-admin <user>
Podman is installed on the local installation node to load, tag, and push the IBM Spectrum Scale CNSA images into the internal OpenShift registry or an external registry.
Internet access is available to pull all other dependent images for IBM Spectrum Scale CNSA and CSI from their respective external image registries; for example, quay.io and us.gcr.io.
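The following sketch shows the standard OpenShift htpasswd procedure that is referenced in the requirements above. The file, secret, and provider names are illustrative only and must be adapted to your environment:
$ htpasswd -c -B -b users.htpasswd <user> <password>
$ oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd -n openshift-config
$ oc apply -f - <<EOF
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: htpasswd_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
EOF
After the identity provider is configured, grant the cluster-admin role to the new user as shown in the requirement above.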
2.3 Preinstallation tasks
Complete the following tasks for the IBM Spectrum Scale CNSA and CSI deployment:
Obtain the IBM Spectrum Scale Container Native Storage Access tar archive file from Fix Central or Passport Advantage.
Extract the CNSA .tar archive. For more information, see this web page.
Configure Red Hat OpenShift Container Platform to increase the PIDS_LIMIT, add the kernel-devel extensions (required on OpenShift 4.6 and higher only), and increase the vmalloc kernel parameter (this parameter is required for Linux on System Z only).
Apply the required quota and configuration settings for CSI on the remote IBM Spectrum Scale storage cluster (for more information, see “Applying quota and configuration settings for CSI on the remote IBM Spectrum Scale storage cluster”):
 – --perfileset-quota
 – --filesetdf
 – enforceFilesetQuotaOnRoot
 – controlSetxattrImmutableSELinux
Continue with the following procedure to prepare the OpenShift cluster for the deployment of IBM Spectrum Scale CNSA and CSI.
2.3.1 Uploading IBM Spectrum Scale CNSA images to local image registry
After you extract the IBM Spectrum Scale CNSA tar archive, load, tag, and push the IBM Spectrum Scale CNSA images to a local container image registry.
If you have enabled and exposed the internal Red Hat OpenShift image registry in your OpenShift cluster, push all the IBM Spectrum Scale CNSA images into this registry by following the instructions at this web page.
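As an illustration only, the following sketch assumes that the internal registry is exposed through its default route and that the image archive and repository names are placeholders (the core image name is taken from the images list in 2.5.1); adjust them to the file names that are shipped in your CNSA tar archive:
$ REGISTRY=$(oc get route default-route -n openshift-image-registry -o jsonpath='{.spec.host}')
$ podman login -u $(oc whoami) -p $(oc whoami -t) $REGISTRY --tls-verify=false
$ podman load -i ibm-spectrum-scale-core-v5.1.0.3.tar
$ podman tag ibm-spectrum-scale-core:v5.1.0.3 $REGISTRY/ibm-spectrum-scale-ns/ibm-spectrum-scale-core:v5.1.0.3
$ podman push $REGISTRY/ibm-spectrum-scale-ns/ibm-spectrum-scale-core:v5.1.0.3 --tls-verify=false
Repeat the tag and push steps for the remaining CNSA images (for example, the gui, pmcollector, and monitor images).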
 
Note: A regular production Red Hat OpenShift cluster includes a correctly configured identity provider and a regular cluster admin user other than the default admin users, such as kube:admin or system:admin, which are meant primarily as temporary accounts for the initial deployment. They do not provide a token (oc whoami -t) to access the internal OpenShift image registry.
For more information about creating a regular cluster admin user, see this web page.
If you configured an identity provider, such as htpasswd on your OpenShift cluster, and added a regular OpenShift cluster admin user with cluster-admin role (for example, with oc adm policy add-cluster-role-to-user cluster-admin <user-name>), this admin user can push images to the internal OpenShift registry.
2.3.2 Preparing OpenShift cluster nodes to run IBM Spectrum Scale CNSA
To increase the PIDS_LIMIT limit to a minimum of pidsLimit: 4096 by using the Machine Config Operator (MCO) on OpenShift, apply the provided YAML file in the IBM Spectrum Scale CNSA tar archive, as shown in the following example:
---
apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: increase-pid-limit
spec:
  machineConfigPoolSelector:
    matchLabels:
      pid-crio: config-pid
  containerRuntimeConfig:
    pidsLimit: 4096
Apply it by using the following commands:
# oc create -f <cnsa_extracted_dir>/machineconfig/increase_pid_mco.yaml
# oc label machineconfigpool worker pid-crio=config-pid
 
 
Note: Running this command drives a rolling update across your Red Hat OpenShift Container Platform worker nodes because each worker node is restarted. The update can take more than 30 minutes to complete, depending on the size of the worker node pool. You can check the progress of the update by using the following command:
# oc get MachineConfigPool
Wait until the update finishes successfully.
 
 
 
Note: IBM Cloud Pak for Data requires a higher PID setting of 12288. You can apply this setting at this step by changing pidsLimit: 4096 to pidsLimit: 12288.
Confirm the update, as shown in the following example:
# oc get nodes -lnode-role.kubernetes.io/worker= -ojsonpath="{range .items[*]}{.metadata.name}{' '}" | xargs -I{} oc debug node/{} -T -- chroot /host crio-status config | grep pids_limit
The output for every node should appear as shown in the following example:
# oc get nodes -lnode-role.kubernetes.io/worker= -ojsonpath="{range .items[*]}{.metadata.name}{' '}" |xargs -I{} oc debug node/{} -T -- chroot /host crio-status config | grep pids_limit
Starting pod/worker0cpst-ocp-cluster-bcpst-labno-usersibmcom-debug ...
To use host binaries, run `chroot /host`
pids_limit = 4096
Removing debug pod ...
Starting pod/worker1cpst-ocp-cluster-bcpst-labno-usersibmcom-debug ...
To use host binaries, run `chroot /host`
pids_limit = 4096
Removing debug pod ...
Starting pod/worker2cpst-ocp-cluster-bcpst-labno-usersibmcom-debug ...
To use host binaries, run `chroot /host`
Pod IP: 9.114.194.185
If you don't see a command prompt, try pressing enter.
pids_limit = 4096
If you are running on OpenShift 4.6.6 (or a higher minor level), you must add the kernel-devel extensions by way of the Machine Config Operator by creating a YAML file (here, named machineconfigoperator.yaml), as shown in the following example:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: "worker"
  name: 02-worker-kernel-devel
spec:
  config:
    ignition:
      version: 3.1.0
  extensions:
     - kernel-devel
Apply it by using following command:
# oc create -f <cnsa_extracted_dir>/machineconfig/machineconfigoperator.yaml
Check the status of the update by using the following command:
# oc get MachineConfigPool
Wait until the update finishes successfully.
Validate that the kernel-devel package is successfully applied by running the following command:
# oc get nodes -lnode-role.kubernetes.io/worker= -ojsonpath="{range .items[*]}{.metadata.name}{' '}" | xargs -I{} oc debug node/{} -T -- chroot /host sh -c "rpm -q kernel-devel"
The output for every node should resemble the following example:
# oc debug node/worker0.cpst-ocp-cluster-b.cpst-lab.no-users.ibm.com -T -- chroot /host sh -c "rpm -q kernel-devel"
Starting pod/worker0cpst-ocp-cluster-bcpst-labno-usersibmcom-debug ...
To use host binaries, run `chroot /host`
kernel-devel-4.18.0-193.60.2.el8_2.x86_64
Removing debug pod ...
2.3.3 Labeling OpenShift worker nodes for IBM Spectrum Scale CSI
By using the default configuration for IBM Spectrum Scale CNSA, you must label the worker nodes that are eligible to run IBM Spectrum Scale CSI:
# oc label nodes -l node-role.kubernetes.io/worker scale=true --overwrite=true
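You can verify the result by listing the labeled nodes:
# oc get nodes -l scale=true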
2.4 Deployment steps
In this section, we describe the process to deploy IBM Spectrum Scale CNSA and CSI, which includes the following steps:
1. Prepare IBM Spectrum Scale remote storage cluster, OpenShift namespaces, and secrets.
2. Edit the operator.yaml and ibm_v1_scalecluster_cr.yaml files to reflect your local environment.
3. Deploy IBM Spectrum Scale CNSA (ibm-spectrum-scale-ns).
4. Deploy IBM Spectrum Scale CSI (ibm-spectrum-scale-csi).
2.4.1 Step 1: Preparing IBM Spectrum Scale remote storage cluster, OpenShift namespaces, and secrets
This step prepares required settings and GUI user accounts on the remote IBM Spectrum Scale storage cluster for the IBM Spectrum Scale CNSA and CSI deployment. The following user accounts are needed:
One user account for a CNSA user (here, we use cnsa_admin with password CNSA_PASSWORD)
One user account for a CSI user (here, we use csi_admin with password CSI_PASSWORD)
This step also prepares the namespaces (that is, projects) and creates the Kubernetes secrets in OpenShift for the IBM Spectrum Scale CNSA and IBM Spectrum Scale CSI driver deployment. The secrets include the credentials for the required CNSA and CSI users for the local and remote IBM Spectrum Scale GUIs.
Preparing GUI users for CNSA on the remote IBM Spectrum Scale storage cluster
Complete the following steps:
1. Check the IBM Spectrum Scale remote storage cluster to determine whether the GUI user group ContainerOperator exists by running the following command:
# /usr/lpp/mmfs/gui/cli/lsusergrp ContainerOperator
2. If the GUI user group ContainerOperator does not exist, create it by using the following command:
# /usr/lpp/mmfs/gui/cli/mkusergrp ContainerOperator --role containeroperator
3. Check whether a user for CNSA already exists in the ContainerOperator group:
# /usr/lpp/mmfs/gui/cli/lsuser | grep ContainerOperator
#
Create a user if none exists:
# /usr/lpp/mmfs/gui/cli/mkuser cnsa_admin -p CNSA_PASSWORD -g ContainerOperator
This user is used later by IBM Spectrum Scale CNSA through the cnsa-remote-gui-secret secret.
Preparing GUI user for CSI on the remote IBM Spectrum Scale storage cluster
Complete the following steps:
1. Check the IBM Spectrum Scale remote storage cluster to determine whether the GUI user group CsiAdmin exists by issuing the following command:
# /usr/lpp/mmfs/gui/cli/lsusergrp CsiAdmin
If the GUI user group CsiAdmin does not exist, create it by using the following command:
# /usr/lpp/mmfs/gui/cli/mkusergrp CsiAdmin --role csiadmin
2. Check whether a user for the CSI driver already exists in the CsiAdmin group:
# /usr/lpp/mmfs/gui/cli/lsuser | grep CsiAdmin
#
Create a user if none exists:
# /usr/lpp/mmfs/gui/cli/mkuser csi_admin -p CSI_PASSWORD -g CsiAdmin
This user is used later by the IBM Spectrum Scale CSI driver through the csi-remote-secret secret.
Applying quota and configuration settings for CSI on the remote IBM Spectrum Scale storage cluster
Complete the following steps:
1. Ensure that --perfileset-quota is set to no on the file systems to be used by IBM Spectrum Scale CNSA and CSI. Here, we use ess3000_1M as the file system for IBM Spectrum Scale CNSA and CSI:
# mmlsfs ess3000_1M --perfileset-quota
flag                value                    description
------------------- ------------------------ -----------------------------------
 --perfileset-quota no                       Per-fileset quota enforcement
If it is set to yes, change it to no by using the mmchfs command:
# mmchfs ess3000_1M --noperfileset-quota
2. Enable quota for all the file systems that are used for fileset-based dynamic provisioning with IBM Spectrum Scale CSI by using the mmchfs command:
# mmchfs ess3000_1M -Q yes
Verify that quota is enabled for the file system (in our example, ess3000_1M) by using the mmlsfs command:
# mmlsfs ess3000_1M -Q
flag               value                   description
------------------- ------------------------ -----------------------------------
-Q                  user;group;fileset       Quotas accounting enabled
                    user;group;fileset        Quotas enforced
                    none                      Default quotas enabled
3. Enable quota for the root user by issuing the following command:
# mmchconfig enforceFilesetQuotaOnRoot=yes -i
4. For Red Hat OpenShift, ensure that the controlSetxattrImmutableSELinux parameter is set to yes by issuing the following command:
# mmchconfig controlSetxattrImmutableSELinux=yes -i
5. Display the correct volume size in a container by enabling filesetdf on the file system by using the following command:
# mmchfs ess3000_1M --filesetdf
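As a quick cross-check, you can re-display the settings that were changed in this section; a minimal sketch using the same example file system:
# mmlsfs ess3000_1M -Q --filesetdf --perfileset-quota
# mmlsconfig enforceFilesetQuotaOnRoot
# mmlsconfig controlSetxattrImmutableSELinux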
Preparing namespaces in OpenShift
Log in to the OpenShift cluster as a regular cluster admin user with the cluster-admin role to perform the next steps.
Create the following namespaces (that is, projects) in OpenShift:
One for the IBM Spectrum Scale CNSA deployment; in our example, we use ibm-spectrum-scale-ns as name for the CNSA namespace.
One for the IBM Spectrum Scale CSI driver deployment; in our example, we use ibm-spectrum-scale-csi-driver for the CSI namespace.
If not yet done, create a namespace/project for CNSA:
# oc new-project <ibm-spectrum-scale-ns>
At this time, we also prepare the namespace/project for the IBM Spectrum Scale CSI driver in advance:
# oc new-project <ibm-spectrum-scale-csi-driver>
The oc new-project <my-namespace> command also switches immediately to the newly created namespace/project. Therefore, as a first step of the deployment, you must switch back to the CNSA namespace with oc project <ibm-spectrum-scale-ns>. Alternatively, you can use oc create namespace <my-namespace>, which does not switch to the created namespace.
Creating a secret for CNSA
IBM Spectrum Scale CNSA requires a GUI user account on the remote IBM Spectrum Scale storage cluster. The credentials are provided as username and password through a Kubernetes secret in the CNSA namespace.
Create a Kubernetes secret in the CNSA namespace holding the user credentials from the CNSA GUI user on the remote IBM Spectrum Scale storage cluster:
# oc create secret generic cnsa-remote-gui-secret --from-literal=username='cnsa_admin' --from-literal=password='CNSA_PASSWORD' -n ibm-spectrum-scale-ns
Creating secrets for CSI
CSI requires a GUI user account on the remote IBM Spectrum Scale storage cluster and the local CNSA cluster. The credentials are provided as username and password through Kubernetes secrets in the IBM Spectrum Scale CSI namespace (in our example, ibm-spectrum-scale-csi-driver).
Create and label the Kubernetes secret in the CSI namespace holding the user credentials from the CSI GUI user on the remote IBM Spectrum Scale storage cluster that we created earlier:
# oc create secret generic csi-remote-secret --from-literal=username='csi_admin' --from-literal=password='CSI_PASSWORD' -n ibm-spectrum-scale-csi-driver
# oc label secret csi-remote-secret product=ibm-spectrum-scale-csi -n ibm-spectrum-scale-csi-driver
At this time, we plan ahead and also create the required Kubernetes secret for the CSI admin user in the local CNSA cluster in advance; that is, before we deploy CNSA or create the CSI admin user in the GUI of the local CNSA cluster:
# oc create secret generic csi-local-secret --from-literal=username='csi_admin' --from-literal=password='CSI_PASSWORD' -n ibm-spectrum-scale-csi-driver
# oc label secret csi-local-secret product=ibm-spectrum-scale-csi -n ibm-spectrum-scale-csi-driver
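Both secrets carry the product=ibm-spectrum-scale-csi label, so you can confirm that they exist with a label selector:
# oc get secrets -l product=ibm-spectrum-scale-csi -n ibm-spectrum-scale-csi-driver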
 
Note: The CSI driver user credentials on the local compute (CNSA) and remote storage cluster can be created and configured with different user names and passwords and do not need to be identical.
We use these credentials when creating the CSI admin user in the local CNSA cluster after the IBM Spectrum Scale CNSA deployment.
Verifying access to the remote IBM Spectrum Scale storage cluster GUI
Before moving on, it is a good idea to verify access to the GUI of the remote IBM Spectrum Scale storage cluster with the CNSA admin user and the CSI admin user credentials; for example, from an admin node on the OpenShift cluster network:
# curl -k -u 'csi_admin:CSI_PASSWORD' https://<remote storage cluster GUI host>:443/scalemgmt/v2/cluster
Successfully running this command ensures that the user credentials are correct and that the nodes on the OpenShift network can access the remote IBM Spectrum Scale storage cluster.
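Similarly, you can check the CNSA admin user credentials against the same REST endpoint:
# curl -k -u 'cnsa_admin:CNSA_PASSWORD' https://<remote storage cluster GUI host>:443/scalemgmt/v2/cluster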
2.5 Editing the operator.yaml and ibm_v1_scalecluster_cr.yaml files to reflect your local environment
The operator.yaml file deploys the IBM Spectrum Scale CNSA operator, which orchestrates the configuration activities for the IBM Spectrum Scale CNSA deployment. The operator.yaml file is included in the IBM Spectrum Scale CNSA tar archive.
Make sure to provide the local or external registry where the IBM Spectrum Scale images reside (see 2.3.1, “Uploading IBM Spectrum Scale CNSA images to local image registry” on page 13) in the operator.yaml file:
...
# Replace the value to point at the operator image
# Example using internal image repository: image-registry.openshift-image-registry.svc:5000/ibm-spectrum-scale-ns/ibm-spectrum-scale-core-operator:vX.X.X.X
image: REPLACE_SCALE_CORE_OPERATOR_IMAGE
...
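One way to fill in the placeholder is a simple in-place substitution. The following sketch assumes the internal registry, the ibm-spectrum-scale-ns namespace, and that you replace vX.X.X.X with the operator image tag shipped in your CNSA release; run it from the directory that contains operator.yaml:
# sed -i 's|REPLACE_SCALE_CORE_OPERATOR_IMAGE|image-registry.openshift-image-registry.svc:5000/ibm-spectrum-scale-ns/ibm-spectrum-scale-core-operator:vX.X.X.X|' operator.yaml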
The ibm_v1_scalecluster_cr.yaml holds the configurable parameters for your local environment.
Edit the ibm_v1_scalecluster_cr.yaml to match the configuration of your local environment for the IBM Spectrum Scale CNSA and the IBM Spectrum Scale CSI deployment.
To configure the custom resource YAML files, see the CNSA and CSI topics in IBM Documentation.
2.5.1 Minimum required configuration
At a minimum, you must configure the following parameters for IBM Spectrum Scale Container Native Storage Access (CNSA).
Here, we configure the file system (in the filesystems block) that is to be mounted on the local CNSA cluster from the remote IBM Spectrum Scale storage cluster and that also hosts the primary fileset of IBM Spectrum Scale CSI to store its configuration data:
# -------------------------------------------------------------------------------
# filesystems block is required for Remote Mount
# -------------------------------------------------------------------------------
# filesystems[name].remoteMount.storageCluster refers to the name of a remoteCluster defined in the proceeding block
# note: adding, removing, or updating a filesystem name or mountPoint after first deployment will require manual pod deletions.
filesystems:
  - name: "fs1"
    remoteMount:
      storageCluster: "storageCluster1"
      storageFs: "fs1"
    # mountPoint must start with `/mnt`
    mountPoint: "/mnt/fs1"
The following parameters are used to configure an entry in the filesystems block:
name: Local name of the file system on the IBM Spectrum Scale CNSA cluster
 
Note: This local name must comply with Kubernetes DNS label rules (see DNS Label Names).
mountPoint: Local mount point of the remote file system on OpenShift (must be under /mnt).
storageCluster: Internal object name to reference the remote cluster definition object in the next section.
storageFs: Original name of the file system on the remote IBM Spectrum Scale storage cluster (for example, from mmlsconfig or curl -k -u 'cnsa_admin:CNSA_PASSWORD' https://<remote storage cluster GUI host>:443/scalemgmt/v2/filesystems).
Here, we configure the remoteClusters that provides the file system for the remote mount:
# -------------------------------------------------------------------------------
# The remoteCluster field is required for remote mount
# -------------------------------------------------------------------------------
# A remoteCluster definition provides the name, hostname, its GUI secret, and contact node.
# The remoteCluster name is referenced in the filesystems[name].remoteMount.storageCluster
# used for Remote Mount
remoteClusters:
  - name: storageCluster1
    gui:
      cacert: "cacert-storage-cluster-1"
      host: ""
      secretName: "cnsa-remote-gui-secret"
      insecureSkipVerify: false
    # contactNodes:
    #   - storagecluster1node1
    #   - storagecluster1node2
The following parameters are used to configure remoteClusters:
name: This name is used to identify the remote Storage Cluster.
gui: This information is used to access the remote Storage Cluster’s GUI.
cacert: This name is the name of the Kubernetes configmap that contains the CA certificate for the storage cluster GUI.
host: Hostname for the GUI endpoint on the storage cluster.
 
Note: If insecureSkipVerify is set to false, the hostname that is encoded in the cacert ConfigMap must match the value that is provided for host.
secretName: The name of the Kubernetes secret that was created during the storage cluster configuration.
 
Note: Specify the secret name that you noted in Create Secret.
insecureSkipVerify: Controls whether a client verifies the storage cluster’s GUI certificate chain and hostname. If set to true, TLS connections are susceptible to man-in-the-middle attacks. The default setting is false.
contactNodes (optional): Provide a list of storage nodes to be used as the contact nodes list. If not specified, the operator uses three nodes from the storage cluster.
Based on the registry option, replace the registry in the following section of the ibm_v1_scalecluster_cr.yaml. For example, use image-registry.openshift-image-registry.svc:5000/ibm-spectrum-scale-ns for the internal OpenShift image registry and CNSA namespace ibm-spectrum-scale-ns:
# -------------------------------------------------------------------------------
# images is the list of Docker container images required to deploy and run IBM Spectrum Scale
# -------------------------------------------------------------------------------
# note: changing the following fields after first deployment will require manual pod deletions.
images:
  core: REPLACE_CONTAINER_REGISTRY/ibm-spectrum-scale-core:v5.1.0.3
  coreInit: REPLACE_CONTAINER_REGISTRY/ibm-spectrum-scale-core:v5.1.0.3
  gui: REPLACE_CONTAINER_REGISTRY/ibm-spectrum-scale-gui:v5.1.0.3
  postgres: "docker.io/library/postgres@sha256:a2da8071b8eba341c08577b13b41527eab3968bf1c8d28123b5b07a493a26862"
  pmcollector: REPLACE_CONTAINER_REGISTRY/ibm-spectrum-scale-pmcollector:v5.1.0.3
  sysmon: REPLACE_CONTAINER_REGISTRY/ibm-spectrum-scale-monitor:v5.1.0.3
  logs: "registry.access.redhat.com/ubi8/ubi-minimal:8.3"
For more information about the IBM Spectrum Scale CNSA configuration parameters, see CNSA Operator - Custom Resource.
For more information about the IBM Spectrum Scale CSI driver configuration parameters, see Configuring Custom Resource for CSI driver.
2.5.2 Optional configuration parameters
In this section, we describe the available configuration parameters.
Call Home
You can enable and configure Call Home for IBM Spectrum Scale CNSA in the following section of the ibm_v1_scalecluster_cr.yaml file:
callHome:
# call home functionality is optional #
# # TO ENABLE: Remove the first # character on each line of this section to configure and enable call home
# callhome:
# # By accepting this request, you agree to allow IBM and its subsidiaries to store and use your contact information and your support information anywhere they do business worldwide. For more information, please refer to the Program license agreement and documentation.
# # If you agree, please respond with "true" for acceptance, else with "false" to decline.
# acceptLicense: true | false
# # companyName of the company to which the contact person belongs.
# # This name can consist of any alphanumeric characters and these non-alphanumeric characters: '-', '_', '.', ' ', ','.
# companyName:
# # customerID of the system administrator who can be contacted by the IBM Support.
# # This can consist of any alphanumeric characters and these non-alphanumeric characters: '-', '_', '.'.
# customerID: ""
# # companyEmail address of the system administrator who can be contacted by the IBM Support.
# # Usually this e-mail address is directed towards a group or task e-mail address. For example, [email protected].
# companyEmail:
# # countryCode two-letter upper-case country codes as defined in ISO 3166-1 alpha-2.
# countryCode:
# # Marks the cluster as a "test" or a "production" system. In case this parameter is not explicitly set, the value is set to "production" by default.
# type: production | test
# # Remove or leave the proxy block commented if a proxy should not be used for uploads
# proxy:
# # host of proxy server as hostname or IP address
# host:
# # port of proxy server
# port:
# # secretName of a basic-auth secret, which contains username and password for proxy server
# # Remove the secretName if no authentication to the proxy server is needed.
# secretName:
Host name aliases
The host names of the remote IBM Spectrum Scale storage cluster contact nodes must be resolvable by way of DNS by the OpenShift nodes.
If the IP addresses of these contact nodes cannot be resolved by way of DNS (including a reverse lookup), the hostnames and their IP addresses can be specified in the hostAliases section of the ibm_v1_scalecluster_cr.yaml file, as shown in Example 2-1.
Example 2-1 Specifying hostname and their IP addresses
# hostAliases is used in an environment where DNS cannot resolve the remote (storage) cluster
# note: changing this field after first deployment will require manual pod deletions.
# hostAliases:
#   - hostname: example.com
#     ip: 10.0.0.1
 
2.6 Deploying IBM Spectrum Scale CNSA
Log in to the OpenShift cluster as a regular admin user with the cluster-admin role and switch to the CNSA namespace (here, ibm-spectrum-scale-ns):
# oc project ibm-spectrum-scale-ns
Deploy the Operator by creating the provided yaml files, as shown in Example 2-2.
Example 2-2 Creating the provided yaml files
oc create -f spectrumscale/deploy/crds/ibm_v1_scalecluster_crd.yaml -n ibm-spectrum-scale-ns
oc create -f spectrumscale/deploy/crds/ibm_v1_scalecluster_cr.yaml -n ibm-spectrum-scale-ns
oc create -f spectrumscale/deploy -n ibm-spectrum-scale-ns
 
Verify that the Operator processes the ScaleCluster custom resource and creates the cluster by checking the pods and the Operator log:
Get the pods:
# oc get pods -n ibm-spectrum-scale-ns
Tail the operator log:
# oc logs $(oc get pods -lname=ibm-spectrum-scale-core-operator -n ibm-spectrum-scale-ns -ojsonpath="{range .items[0]}{.metadata.name}") -n ibm-spectrum-scale-ns -f
Sample output:
[root@arcx3650fxxnh ~]# oc get pods -o wide
NAME                                          READY   STATUS    RESTARTS   AGE     IP            NODE                                                   NOMINATED NODE   READINESS GATES
ibm-spectrum-scale-core-5btzt 1/1 Running 0 3h59m 9.11.110.126 worker5.cpst-ocp-cluster-a.cpst-lab.no-users.ibm.com <none> <none>
ibm-spectrum-scale-core-k4gbd 1/1 Running 0 3h59m 9.11.110.157 worker3.cpst-ocp-cluster-a.cpst-lab.no-users.ibm.com <none> <none>
ibm-spectrum-scale-core-q5svl 1/1 Running 0 3h59m 9.11.110.150 worker4.cpst-ocp-cluster-a.cpst-lab.no-users.ibm.com <none> <none>
ibm-spectrum-scale-gui-0 9/9 Running 0 3h59m 10.128.4.9 worker5.cpst-ocp-cluster-a.cpst-lab.no-users.ibm.com <none> <none>
ibm-spectrum-scale-operator-7b7dc6cb5-fjlw2 1/1 Running 0 3h59m 10.131.2.25 worker4.cpst-ocp-cluster-a.cpst-lab.no-users.ibm.com <none> <none>
ibm-spectrum-scale-pmcollector-0 2/2 Running 0 3h59m 10.128.4.8 worker5.cpst-ocp-cluster-a.cpst-lab.no-users.ibm.com <none> <none>
ibm-spectrum-scale-pmcollector-1 2/2 Running 0 3h58m 10.130.2.10 worker3.cpst-ocp-cluster-a.cpst-lab.no-users.ibm.com <none> <none>
Verify that the IBM Spectrum Scale cluster has been created:
oc exec $(oc get pods -lapp=ibm-spectrum-scale-core -ojsonpath="{.items[0].metadata.name}" -n ibm-spectrum-scale-ns) -n ibm-spectrum-scale-ns -- mmlscluster
oc exec $(oc get pods -lapp=ibm-spectrum-scale-core -ojsonpath="{.items[0].metadata.name}" -n ibm-spectrum-scale-ns) -n ibm-spectrum-scale-ns -- mmgetstate -a
Verify that the storage cluster has been configured:
oc exec $(oc get pods -lapp=ibm-spectrum-scale-core -ojsonpath="{.items[0].metadata.name}" -n ibm-spectrum-scale-ns) -n ibm-spectrum-scale-ns -- mmremotecluster show all
Verify that the storage cluster file system has been configured:
oc exec $(oc get pods -lapp=ibm-spectrum-scale-core -ojsonpath="{.items[0].metadata.name}" -n ibm-spectrum-scale-ns) -n ibm-spectrum-scale-ns -- mmremotefs show
Verify that the storage cluster file system has been remotely mounted:
oc exec $(oc get pods -lapp=ibm-spectrum-scale-core -ojsonpath="{.items[0].metadata.name}" -n ibm-spectrum-scale-ns) -n ibm-spectrum-scale-ns -- mmlsmount fs1 -L
 
Note: This fs1 file system is the name of the CNSA cluster’s file system and not the remote cluster’s file system. Therefore, the name of the file system can vary based on the name that you used for the file system.
Verify status and events of the IBM Spectrum Scale Operator:
oc describe gpfs
During the CNSA deployment, several Docker images are pulled. You might exceed the Docker Hub pull rate limit. To prevent this issue, add a Docker registry secret and link it:
oc create secret docker-registry dockerio-secret --docker-server=docker.io --docker-username=<docker-username> --docker-password=<docker-password> --docker-email=<docker-email>
To link the secret for a pod that indicated a Docker pull failure, you can run the following command. In this example, we link it to the GUI service account (see Example 2-3).
Example 2-3 Linking to a GUI pod
oc secrets link ibm-spectrum-scale-gui dockerio-secret --for=pull -n ibm-spectrum-scale-ns
Later, you can delete the failing pod. It is automatically re-created and deploys successfully.
You can check the IBM Spectrum Scale CNSA operator log by using the following command:
# oc logs <ibm-spectrum-scale-operator-pod> -f
Or, you can quickly check for errors by using the following command:
# oc logs <ibm-spectrum-scale-operator-pod> | grep -i error
2.7 Deploying IBM Spectrum Scale CSI
Remain in the ibm-spectrum-scale-ns namespace of the IBM Spectrum Scale CNSA deployment and perform the steps that are described in this section.
Before we can deploy the IBM Spectrum Scale CSI driver, we must create a GUI user for IBM Spectrum Scale CSI on the GUI pod of the local IBM Spectrum Scale CNSA cluster that we just deployed (see Example 2-4). We use the same credentials that we used when creating the csi-local-secret earlier (see “Creating secrets for CSI” on page 18):
 
Example 2-4 Creating a GUI user
# oc -n ibm-spectrum-scale-ns exec -c liberty ibm-spectrum-scale-gui-0 -- /usr/lpp/mmfs/gui/cli/mkuser csi_admin -p CSI_PASSWORD -g CsiAdmin
Deploy the Operator:
# oc create -f https://raw.githubusercontent.com/IBM/ibm-spectrum-scale-csi/v2.1.0/generated/installer/ibm-spectrum-scale-csi-operator.yaml -n ibm-spectrum-scale-csi-driver
 
For the CSI driver, the following custom resource file must be downloaded:
# curl -O https://raw.githubusercontent.com/IBM/ibm-spectrum-scale-csi/v2.1.0/operator/deploy/crds/csiscaleoperators.csi.ibm.com_cr.yaml
After downloading the file, the following parameters must be modified according to the environment:
The path to the file system that is mounted on the IBM Spectrum Scale CNSA cluster (for example, /mnt/fs1):
# =============================================================================
scaleHostpath: "< GPFS FileSystem Path >"
Fill in the information for the IBM Spectrum Scale CNSA cluster in the area that is shown in Example 2-5.
 
Example 2-5 Fulfilling the information for the IBM Spectrum Scale CNSA cluster
==================================================================================
clusters:
  - id: "< Primary Cluster ID - WARNING - THIS IS A STRING NEEDS YAML QUOTES! >"
    secrets: "secret1"
    secureSslMode: false
    primary:
      primaryFs: "< Primary Filesystem >"
      # primaryFset: "< Fileset in Primary Filesystem >" # Optional - default:spectrum-scale-csi-volume-store
      # inodeLimit: "< inode limit for Primary Fileset >" # Optional
      # remoteCluster: "< Remote ClusterID >" # Optional - This is only required if primaryFs is remote cluster's filesystem and this ID should have separate entry in Clusters map too.
    # cacert: "< Name of CA cert configmap for GUI >" # Optional
    restApi:
      - guiHost: "< Primary cluster GUI IP/Hostname >"
#
# In the case we have multiple clusters, specify their configuration below.
# ==================================================================================
 
Fill in the information for the remote IBM Spectrum Scale cluster in this area:
# - id: "< Cluster ID >"
# secrets: "< Secret for Cluster >"
# secureSslMode: false
# restApi:
# - guiHost: "< Cluster GUI IP/Hostname >"
# cacert: "< Name of CA cert configmap for GUI >" # Optional
# Attacher image name, in case we do not want to use default image.
# ==================================================================================
To find the mandatory cluster ID for the IBM Spectrum Scale CNSA cluster, run the following command:
# oc -n ibm-spectrum-scale-ns exec <ibm-spectrum-scale-core-pod> -- curl -s -k https://ibm-spectrum-scale-gui.ibm-spectrum-scale-ns/scalemgmt/v2/cluster -u "cnsa_admin:CNSA_PASSWORD" | grep clusterId
To find the mandatory cluster ID for the remote IBM Spectrum Scale storage cluster, run the following command:
# curl -s -k https://example-gui.com/scalemgmt/v2/cluster -u "csi_admin:CSI_PASSWORD" | grep clusterId
The following parameters are available to remote-mount a file system:
id (Mandatory) Cluster ID of the primary IBM Spectrum Scale cluster. For more information, see mmlscluster command in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
primaryFs (Mandatory) Primary file system name (local CNSA file system name).
primaryFset (Optional) Primary fileset name: This name is created if the fileset does not exist.
Default value: spectrum-scale-csi-volume-store
inodeLimit (Optional) Inode limit for the primary fileset. If not specified, fileset is created with 1 M inodes, which is the IBM Spectrum Scale default.
cacert Mandatory if secureSslMode is true. Name of the pre-created CA certificate configmap that is used to connect to the GUI server (running on the “guiHost”). For more information, see IBM Documentation.
secrets (Mandatory) Name of the pre-created Secret that contains the username and password that are used to connect to the GUI server for the cluster that is specified against the ID parameter. For more information, see IBM Documentation.
guiHost (Mandatory) FQDN or IP address of the GUI node of IBM Spectrum Scale cluster that is specified against the ID parameter.
scaleHostpath (Mandatory) Mount path of the primary file system (primaryFs).
imagePullSecrets (Optional) An array of imagePullSecrets to be used for pulling images from a private registry. This pass-through option distributes the imagePullSecrets array to the containers that are generated by the Operator. For more information about creating imagePullSecrets, see this web page.
Create the custom resource from the csiscaleoperators.csi.ibm.com_cr.yaml file that was downloaded and modified in the previous section:
# oc create -f csiscaleoperators.csi.ibm.com_cr.yaml -n ibm-spectrum-scale-csi-driver
Verify that the IBM Spectrum Scale CSI driver is installed, the Operator and driver resources are ready, and the pods are in the Running state. It might take some time for the CSI driver pods to be scheduled and running:
# oc get pod,daemonset,statefulset -n ibm-spectrum-scale-csi-driver
NAME READY STATUS RESTARTS AGE
pod/ibm-spectrum-scale-csi-8pk49 2/2 Running 0 3m3s
pod/ibm-spectrum-scale-csi-attacher-0 1/1 Running 0 3m12s
pod/ibm-spectrum-scale-csi-b2f7x 2/2 Running 0 3m3s
pod/ibm-spectrum-scale-csi-operator-67448f6956-2xlsv 1/1 Running 0 27m
pod/ibm-spectrum-scale-csi-provisioner-0 1/1 Running 0 3m7s
pod/ibm-spectrum-scale-csi-vjsvc 2/2 Running 0 3m3s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/ibm-spectrum-scale-csi 3 3 3 3 3 scale=true 3m3s
NAME READY AGE
statefulset.apps/ibm-spectrum-scale-csi-attacher 1/1 3m12s
statefulset.apps/ibm-spectrum-scale-csi-provisioner 1/1 3m7s
The deployment is now completed and IBM Spectrum Scale CNSA and CSI are successfully running on your OpenShift cluster.
Now, you can start creating Kubernetes StorageClasses (SCs), persistent volumes (PVs) and persistent volume claims (PVCs) to provide persistent storage to your containerized applications. For more information, see IBM Documentation.
2.8 Removing IBM Spectrum Scale CNSA and CSI deployment
For more information about removing IBM Spectrum Scale Container Native Storage Access, see IBM Documentation.
2.9 Example of use of IBM Spectrum Scale provisioned storage
A set of YAML manifest files is available in the examples directory of this GitHub repository.
These example YAML manifest files are helpful to quickly test dynamic provisioning of persistent volumes with IBM Spectrum Scale CNSA.
These examples feature the following components:
ibm-spectrum-scale-sc.yaml: An SC to allow dynamic provisioning of PVs (created by an admin)
ibm-spectrum-scale-pvc.yaml: A PVC to request a PV from the storage class (issued by a user)
ibm-spectrum-scale-test-pod.yaml: A test pod that is writing a time stamp every 5 seconds into the volume backed by IBM Spectrum Scale (started by user)
An OpenShift admin user must create an SC for dynamic provisioning. In this example, we use an SC that provides dynamic provisioning of persistent volumes that are backed by independent filesets in IBM Spectrum Scale.
The IBM Spectrum Scale CSI driver allows the use of the following types of SCs for dynamic provisioning:
Light-weight volumes that use simple directories in IBM Spectrum Scale
File-set based volumes that use:
 – Independent filesets in IBM Spectrum Scale
 – Dependent filesets in IBM Spectrum Scale
For more information, see IBM Documentation.
Edit the provided storage class ibm-spectrum-scale-sc.yaml and set the values of volBackendFs and clusterId to match your configured environment:
volBackendFs: "<filesystem name of the local CNSA cluster>"
clusterId: "<cluster ID of the remote storage cluster>"
Apply the SC, as shown in Example 2-6.
Example 2-6 Applying the storage class
# oc apply -f ./examples/ibm-spectrum-scale-sc.yaml
storageclass.storage.k8s.io/ibm-spectrum-scale-sc created
# oc get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
ibm-spectrum-scale-sc spectrumscale.csi.ibm.com Delete Immediate false 2s
 
Now, we can switch to a regular user profile in OpenShift, and create a namespace:
# oc new-project test-namespace
Now using project "test-namespace" on server "https://api.ocp4.scale.com:6443".
Then, we issue a request for a PVC by applying ibm-spectrum-scale-pvc.yaml:
# oc apply -f ./examples/ibm-spectrum-scale-pvc.yaml
persistentvolumeclaim/ibm-spectrum-scale-pvc created
# oc get pvc
NAME       STATUS  VOLUME             CAPACITY ACCESS MODES STORAGECLASS          AGE
ibm-spectrum-scale-pvc Bound   pvc-87f18620-9fac-44ce-ad19-0def5f4304a1  1Gi      RWX          ibm-spectrum-scale-sc 75s
Wait until the PVC is bound to a PV. A PVC (like a pod) belongs to a namespace in OpenShift (unlike a PV, which is not a namespaced object).
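For reference, the PVC request in ibm-spectrum-scale-pvc.yaml corresponds to a manifest like the following sketch, inferred from the oc get pvc output above (1 Gi, RWX, ibm-spectrum-scale-sc); the actual example file in the repository may differ slightly:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ibm-spectrum-scale-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ibm-spectrum-scale-sc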
After we see that the PVC is bound to a PV, we can run the test pod by applying ibm-spectrum-scale-test-pod.yaml:
# oc apply -f ./examples/ibm-spectrum-scale-test-pod.yaml
pod/ibm-spectrum-scale-test-pod created
When the pod is running, you can see that a time stamp is written in 5-second intervals to a log file stream1.out in the local /data directory of the pod:
# oc get pods
NAME READY STATUS RESTARTS AGE
ibm-spectrum-scale-test-pod 1/1 Running 0 23s
 
# oc rsh ibm-spectrum-scale-test-pod
/ # cat /data/stream1.out
ibm-spectrum-scale-test-pod 20210215-12:00:29
ibm-spectrum-scale-test-pod 20210215-12:00:34
ibm-spectrum-scale-test-pod 20210215-12:00:39
ibm-spectrum-scale-test-pod 20210215-12:00:44
ibm-spectrum-scale-test-pod 20210215-12:00:49
ibm-spectrum-scale-test-pod 20210215-12:00:54
ibm-spectrum-scale-test-pod 20210215-12:00:59
ibm-spectrum-scale-test-pod 20210215-12:01:04
ibm-spectrum-scale-test-pod 20210215-12:01:09
ibm-spectrum-scale-test-pod 20210215-12:01:14
ibm-spectrum-scale-test-pod 20210215-12:01:19
The pod’s /data directory is backed by the pvc-87f18620-9fac-44ce-ad19-0def5f4304a1/pvc-87f18620-9fac-44ce-ad19-0def5f4304a1-data/ directory in the file system on the remote IBM Spectrum Scale storage cluster:
# cat /<mount point of filesystem on remote storage cluster>/pvc-87f18620-9fac-44ce-ad19-0def5f4304a1/pvc-87f18620-9fac-44ce-ad19-0def5f4304a1-data/stream1.out
ibm-spectrum-scale-test-pod 20210215-12:00:29
ibm-spectrum-scale-test-pod 20210215-12:00:34
ibm-spectrum-scale-test-pod 20210215-12:00:39
ibm-spectrum-scale-test-pod 20210215-12:00:44
ibm-spectrum-scale-test-pod 20210215-12:00:49
ibm-spectrum-scale-test-pod 20210215-12:00:54
ibm-spectrum-scale-test-pod 20210215-12:00:59
ibm-spectrum-scale-test-pod 20210215-12:01:04
2.9.1 Other configuration options
This section describes other available configuration options.
Specify node labels for IBM Spectrum Scale CSI (optional)
IBM Spectrum Scale CSI also makes use of node labels to determine on which OpenShift nodes the attacher, provisioner, and plug-in resources are to run. The default node label that is used is scale=true, which designates the nodes on which IBM Spectrum Scale CSI resources are running. These nodes must be part of a local IBM Spectrum Scale cluster (here, IBM Spectrum Scale CNSA).
For Cloud Pak for Data to function correctly with IBM Spectrum Scale CSI driver provisioner, all worker nodes must be labeled as scale=true.
Label the nodes that are selected to run IBM Spectrum Scale CNSA and IBM Spectrum Scale CSI, as shown in the following example:
# oc label node <worker-node> scale=true --overwrite=true
You can define this label in the IBM Spectrum Scale CSI CR file csiscaleoperators.csi.ibm.com_cr.yaml, as shown in the following example:
# pluginNodeSelector specifies nodes on which we want to run plugin daemonset
# In below example plugin daemonset will run on nodes which have label as
# "scale=true". Can have multiple entries.
# ===========================================================================
pluginNodeSelector:
  - key: "scale"
    value: "true"
Here, we used the default configuration for IBM Spectrum Scale CNSA and CSI and labeled all OpenShift worker nodes with scale=true:
# oc label nodes -l node-role.kubernetes.io/worker scale=true --overwrite=true
# oc get nodes -l scale=true
NAME STATUS ROLES AGE VERSION
worker01.ocp4.scale.com Ready worker 2d22h v1.18.3+65bd32d
worker02.ocp4.scale.com Ready worker 2d22h v1.18.3+65bd32d
worker03.ocp4.scale.com Ready worker 2d1h v1.18.3+65bd32d
Optional: IBM Spectrum Scale CSI also allows the use of more node labels for the attacher and provisioner StatefulSet. These node labels should be used only if running these StatefulSets on specific nodes (for example, highly available infrastructure nodes) is required. Otherwise, the use of a single label, such as scale=true for running StatefulSets and IBM Spectrum Scale CSI driver DaemonSet, is strongly recommended. Nodes that are specifically marked for running StatefulSet must be a subset of the nodes that are marked with the scale=true label.
Managing node annotations for IBM Spectrum Scale CNSA (optional)
The IBM Spectrum Scale CNSA operator automatically adds Kubernetes annotations to the nodes in the OpenShift cluster to designate their specific role with respect to IBM Spectrum Scale; for example, quorum, manager, and collector nodes:
scale.ibm.com/nodedesc=quorum::
scale.ibm.com/nodedesc=manager::
scale.ibm.com/nodedesc=collector::
Supported IBM Spectrum Scale node designations are manager, quorum, and collector. To designate a node with more than one value, add a dash in between the designations, as shown in the following example:
scale.ibm.com/nodedesc=quorum-manager-collector::
Node annotations can be viewed by issuing the oc describe <node> command.
Automatic node designations that are performed by the IBM Spectrum Scale operator are recommended. For manual node designations with annotations, see IBM Documentation.
You can manually add or remove node annotations. To add node annotations, run the following command:
# oc annotate node <node name> scale.ibm.com/nodedesc=quorum-manager::
To remove node annotations, run the following command:
# oc annotate node <node name> scale.ibm.com/nodedesc-
Specifying pod tolerations for IBM Spectrum Scale CSI (optional)
In the csiscaleoperators.csi.ibm.com_cr.yaml for IBM Spectrum Scale CSI, you also can specify Kubernetes tolerations that are applied to IBM Spectrum Scale CSI pods (see Example 2-7).
Example 2-7 Specifying Kubernetes tolerations
# Array of tolerations that will be distributed to CSI pods. Please refer to official
# k8s documentation for your environment for more details.
# https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
# ==================================================================================
# tolerations:
#   - key: "key1"
#     operator: "Equal"
#     value: "value1"
#     effect: "NoExecute"
#     tolerationSeconds: 3600
 
 
 