ECE installation procedures
This chapter provides detailed instructions for the IBM Spectrum Scale ECE installation process by using the automated installation toolkit.
For more information, see the spectrumscale command in the IBM Spectrum Scale Command and Programming Reference in IBM Knowledge Center.
This chapter includes the following topics:
5.1, "Installation overview"
5.2, "IBM Spectrum Scale ECE installation prerequisites"
5.3, "IBM Spectrum Scale ECE installation background"
5.4, "IBM Spectrum Scale ECE installation and configuration"
5.1 Installation overview
The IBM Spectrum Scale installation toolkit can be used to automate the installation and configuration of IBM Spectrum Scale Erasure Code Edition and other IBM Spectrum Scale components, such as the GUI, protocol software, and AFM.
You can use the installation toolkit to accomplish the following tasks:
Verify that a valid hardware configuration is provided for ECE
Create an IBM Spectrum Scale cluster with ECE storage
You can use the IBM Spectrum Scale mmvdisk command to accomplish the following tasks:
Specify and create collections of VDisk NSDs (VDisk sets) from matching physical disks (pdisks)
Create file systems by using these VDisk sets
List ECE components, such as recovery groups, servers, VDisk sets, and pdisks
The rest of this chapter outlines the installation process that uses the installation toolkit, augmented by the mmvdisk command.
5.2 IBM Spectrum Scale ECE installation prerequisites
This section describes the installation prerequisites that must be met to use the installation toolkit.
5.2.1 Minimum requirements for ECE
The following minimum requirements must be met for ECE:
Four or more x86 servers with matching CPU, memory, and storage configurations, with RHEL 7.5 or later; six nodes are used in this demonstration.
A total of 12 or more SCSI or NVMe drives evenly distributed across the servers.
At least 64 GB memory per server for production deployment.
At least 25 Gbps network connection between nodes for production deployment.
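Before you start the installation, it can be useful to confirm the memory and drive inventory on every node. The following is a minimal sketch only (it is not part of the installation toolkit), and it assumes the node names ece1 - ece6 that are used in the example cluster in this chapter:

# Report installed memory and the number of physical disks on every node
for node in ece1 ece2 ece3 ece4 ece5 ece6; do
    echo "=== $node ==="
    ssh "$node" 'free -g | grep ^Mem:'                     # total memory in GB
    ssh "$node" 'lsblk -d -n -o NAME,TYPE | grep -c disk'  # physical disk count
done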
5.2.2 SSH and network setup
The following prerequisites must be met for the SSH and network setup:
DNS is configured such that all host names (short or long) are resolvable.
Passwordless SSH is configured from the admin node to all other nodes and to itself by way of IP, short name, and FQDN.
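A minimal sketch of this setup from the admin node follows. The ssh-keygen and ssh-copy-id commands are standard OpenSSH tools; the node names and the FQDN ece1.example.com are placeholders for your environment:

# Create an SSH key on the admin node (if one does not exist) and distribute it
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
for node in ece1 ece2 ece3 ece4 ece5 ece6; do
    ssh-copy-id root@$node
done

# Verify passwordless access by short name, FQDN, and IP (example values)
ssh -o BatchMode=yes root@ece1 hostname
ssh -o BatchMode=yes root@ece1.example.com hostname
ssh -o BatchMode=yes root@10.168.2.13 hostname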
5.2.3 Repository setup
A Red Hat Enterprise Linux yum repository must be set up on all nodes in the cluster.
For more information, see IBM Knowledge Center.
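To confirm that the repository is usable on every node before you run the toolkit, the following minimal sketch can be used (it is not part of the toolkit and assumes the example node names ece1 - ece6):

# List the enabled yum repositories on every node
for node in ece1 ece2 ece3 ece4 ece5 ece6; do
    echo "=== $node ==="
    ssh "$node" 'yum repolist enabled'
done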
5.3 IBM Spectrum Scale ECE installation background
Download the IBM Spectrum Scale Erasure Code Edition self-extracting package from the IBM Spectrum Scale page on IBM Passport Advantage or IBM Fix Central web sites.
Extracting the contents of the toolkit package places the relevant files in the following directory:
/usr/lpp/mmfs/5.0.3.x/installer/
The installation toolkit options can be displayed by entering the following command:
/usr/lpp/mmfs/5.0.3.x/installer/spectrumscale -h
5.4 IBM Spectrum Scale ECE installation and configuration
Complete the following steps to install IBM Spectrum Scale Erasure Code Edition:
1. Download the IBM Spectrum Scale Erasure Code Edition self-extracting package from the IBM Spectrum Scale page on Passport Advantage® or Fix Central web sites.
2. Extract the installation package. The installation toolkit is extracted to the /usr/lpp/mmfs/5.0.x.x/installer/ directory (see Example 5-1).
 
Note: The license agreement must be accepted during the extraction process.
Example 5-1 Extract the installation package
[root@ece1 ~]# ./Spectrum_Scale_Erasure_Code-5.0.3.1-x86_64-Linux-install --text-only
 
Extracting License Acceptance Process Tool to /usr/lpp/mmfs/5.0.3.1 ...
tail -n +641 ./Spectrum_Scale_Erasure_Code-5.0.3.1-x86_64-Linux-install | tar -C /usr/lpp/mmfs/5.0.3.1 -xvz --exclude=installer --exclude=*_rpms --exclude=*_debs --exclude=*rpm --exclude=*tgz --exclude=*deb --exclude=*tools* 1> /dev/null
 
Installing JRE ...
 
If directory /usr/lpp/mmfs/5.0.3.1 has been created or was previously created during another extraction,
.rpm, .deb, and repository related files in it (if there were) will be removed to avoid conflicts with the ones being extracted.
 
tail -n +641 ./Spectrum_Scale_Erasure_Code-5.0.3.1-x86_64-Linux-install | tar -C /usr/lpp/mmfs/5.0.3.1 --wildcards -xvz ibm-java*tgz 1> /dev/null
tar -C /usr/lpp/mmfs/5.0.3.1/ -xzf /usr/lpp/mmfs/5.0.3.1/ibm-java*tgz
 
Invoking License Acceptance Process Tool ...
/usr/lpp/mmfs/5.0.3.1/ibm-java-x86_64-80/jre/bin/java -cp /usr/lpp/mmfs/5.0.3.1/LAP_HOME/LAPApp.jar com.ibm.lex.lapapp.LAP -l /usr/lpp/mmfs/5.0.3.1/LA_HOME -m /usr/lpp/mmfs/5.0.3.1 -s /usr/lpp/mmfs/5.0.3.1 -text_only
 
LICENSE INFORMATION
 
The Programs listed below are licensed under the following
License Information terms and conditions in addition to the
Program license terms previously agreed to by Client and
IBM. If Client does not have previously agreed to license
terms in effect for the Program, the International Program
License Agreement (Z125-3301-14) applies.
 
Program Name (Program Number):
IBM Spectrum Scale Erasure Code Edition V5.0.2.2 (5737-J34)
 
The following standard terms apply to Licensee's use of the
Program.
 
Press Enter to continue viewing the license agreement, or
enter "1" to accept the agreement, "2" to decline it, "3"
to print it, "4" to read non-IBM terms, or "99" to go back
to the previous screen.
1
 
License Agreement Terms accepted.
 
Extracting Product RPMs to /usr/lpp/mmfs/5.0.3.1 ...
tail -n +641 ./Spectrum_Scale_Erasure_Code-5.0.3.1-x86_64-Linux-install | tar -C /usr/lpp/mmfs/5.0.3.1 --wildcards -xvz installer gui hdfs_debs/ubuntu16/hdfs_3.1.0.x hdfs_rpms/rhel7/hdfs_2.7.3.x hdfs_rpms/rhel7/hdfs_3.0.0.x hdfs_rpms/rhel7/hdfs_3.1.0.x smb_debs/ubuntu/ubuntu16 smb_debs/ubuntu/ubuntu18 zimon_debs/ubuntu/ubuntu16 zimon_debs/ubuntu/ubuntu18 ganesha_debs/ubuntu16 ganesha_rpms/rhel7 ganesha_rpms/sles12 gpfs_debs/ubuntu16 gpfs_rpms/rhel7 gpfs_rpms/sles12 object_debs/ubuntu16 object_rpms/rhel7 smb_rpms/rhel7 smb_rpms/sles12 tools/repo zimon_debs/ubuntu16 zimon_rpms/rhel7 zimon_rpms/sles12 zimon_rpms/sles15 gpfs_debs gpfs_rpms manifest 1> /dev/null
- installer
- gui
- hdfs_debs/ubuntu16/hdfs_3.1.0.x
- hdfs_rpms/rhel7/hdfs_2.7.3.x
- hdfs_rpms/rhel7/hdfs_3.0.0.x
- hdfs_rpms/rhel7/hdfs_3.1.0.x
- smb_debs/ubuntu/ubuntu16
- smb_debs/ubuntu/ubuntu18
- zimon_debs/ubuntu/ubuntu16
- zimon_debs/ubuntu/ubuntu18
- ganesha_debs/ubuntu16
- ganesha_rpms/rhel7
- ganesha_rpms/sles12
- gpfs_debs/ubuntu16
- gpfs_rpms/rhel7
- gpfs_rpms/sles12
- object_debs/ubuntu16
- object_rpms/rhel7
- smb_rpms/rhel7
- smb_rpms/sles12
- tools/repo
- zimon_debs/ubuntu16
- zimon_rpms/rhel7
- zimon_rpms/sles12
- zimon_rpms/sles15
- gpfs_debs
- gpfs_rpms
- manifest
 
Removing License Acceptance Process Tool from /usr/lpp/mmfs/5.0.3.1 ...
rm -rf /usr/lpp/mmfs/5.0.3.1/LAP_HOME /usr/lpp/mmfs/5.0.3.1/LA_HOME
 
Removing JRE from /usr/lpp/mmfs/5.0.3.1 ...
rm -rf /usr/lpp/mmfs/5.0.3.1/ibm-java*tgz
 
==================================================================
Product packages successfully extracted to /usr/lpp/mmfs/5.0.3.1
 
Cluster installation and protocol deployment
To install a cluster or deploy protocols with the Spectrum Scale Install Toolkit: /usr/lpp/mmfs/5.0.3.1/installer/spectrumscale -h
To install a cluster manually: Use the gpfs packages located within /usr/lpp/mmfs/5.0.3.1/gpfs_<rpms/debs>
 
To upgrade an existing cluster using the Spectrum Scale Install Toolkit:
1) Copy your old clusterdefinition.txt file to the new /usr/lpp/mmfs/5.0.3.1/installer/configuration/ location
2) Review and update the config: /usr/lpp/mmfs/5.0.3.1/installer/spectrumscale config update
3) (Optional) Update the toolkit to reflect the current cluster config:
/usr/lpp/mmfs/5.0.3.1/installer/spectrumscale config populate -N <node>
4) Run the upgrade: /usr/lpp/mmfs/5.0.3.1/installer/spectrumscale upgrade -h
 
To add nodes to an existing cluster using the Spectrum Scale Install Toolkit:
1) Add nodes to the clusterdefinition.txt file: /usr/lpp/mmfs/5.0.3.1/installer/spectrumscale node add -h
2) Install GPFS on the new nodes: /usr/lpp/mmfs/5.0.3.1/installer/spectrumscale install -h
3) Deploy protocols on the new nodes: /usr/lpp/mmfs/5.0.3.1/installer/spectrumscale deploy -h
 
To update the toolkit to reflect the current cluster config examples:
/usr/lpp/mmfs/5.0.3.1/installer/spectrumscale config populate -N <node>
1) Manual updates outside of the install toolkit
2) Sync the current cluster state to the install toolkit prior to upgrade
3) Switching from a manually managed cluster to the install toolkit
 
==================================================================================
To get up and running quickly, visit our wiki for an IBM Spectrum Scale Protocols Quick Overview:
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20%28GPFS%29/page/Protocols%20Quick%20Overview%20for%20IBM%20Spectrum%20Scale
3. Change the directory to where the installation toolkit is extracted:
[root@ece1 ~]# cd /usr/lpp/mmfs/5.0.3.1/installer/
4. Specify the installer node and the setup type in the cluster definition file. The setup type must be “ece” for IBM Spectrum Scale Erasure Code Edition:
./spectrumscale setup -s InstallerNodeIP -st ece
Check the IP address of the installer node:
[root@ece1 installer]# ping c72f4m5u13-ib0 -c 3
PING c72f4m5u13-ib0 (10.168.2.13) 56(84) bytes of data.
64 bytes from c72f4m5u13-ib0 (10.168.2.13): icmp_seq=1 ttl=64 time=0.025 ms
64 bytes from c72f4m5u13-ib0 (10.168.2.13): icmp_seq=2 ttl=64 time=0.038 ms
64 bytes from c72f4m5u13-ib0 (10.168.2.13): icmp_seq=3 ttl=64 time=0.040 ms
--- c72f4m5u13-ib0 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.025/0.034/0.040/0.008 ms
Specify the installer node and the setup type as Erasure Code Edition:
[root@ece1 installer]# ./spectrumscale setup -s 10.168.2.13 -st ece
[ INFO ] Installing prerequisites for install node
[ INFO ] Existing Chef installation detected. Ensure the PATH is configured so that chef-client and knife commands can be run.
[ INFO ] Your control node has been configured to use the IP 10.168.2.13 to communicate with other nodes.
[ INFO ] Port 8889 will be used for chef communication.
[ INFO ] Port 10080 will be used for package distribution.
[ INFO ] Install Toolkit setup type is set to ECE (Erasure Code Edition).
[ INFO ] SUCCESS
[ INFO ] Tip : Designate scale out, protocol and admin nodes in your environment to use during install:./spectrumscale node add <node> -p -a -so
5. Add scale-out nodes for IBM Spectrum Scale Erasure Code Edition in the cluster definition file:
./spectrumscale node add NodeName -so
In this example, six storage nodes make up the Erasure Code Edition building block (recovery group):
[root@ece1 installer]# cat ~/nodes
ece1
ece2
ece3
ece4
ece5
ece6
Adding node ece1 as a scale-out node:
[root@ece1 installer]# ./spectrumscale node add ece1 -so
[ INFO ] Adding node ece1 as a GPFS node.
[ INFO ] Setting ece1 as a scale-out node.
[ INFO ] Configuration updated.
Adding node ece2 as a scale-out node:
[root@ece1 installer]# ./spectrumscale node add ece2 -so
[ INFO ] Adding node ece2 as a GPFS node.
[ INFO ] Setting ece2 as a scale-out node.
[ INFO ] Configuration updated.
Adding node ece3 as a scale-out node:
[root@ece1 installer]# ./spectrumscale node add ece3 -so
[ INFO ] Adding node ece3 as a GPFS node.
[ INFO ] Setting ece3 as a scale-out node.
[ INFO ] Configuration updated.
Adding node ece4 as a scale-out node:
[root@ece1 installer]# ./spectrumscale node add ece4 -so
[ INFO ] Adding node ece4 as a GPFS node.
[ INFO ] Setting ece4 as a scale-out node.
[ INFO ] Configuration updated.
Adding node ece5 as a scale-out node:
[root@ece1 installer]# ./spectrumscale node add ece5 -so
[ INFO ] Adding node ece5 as a GPFS node.
[ INFO ] Setting ece5 as a scale-out node.
[ INFO ] Configuration updated.
Adding node ece6 as a scale-out node:
[root@ece1 installer]# ./spectrumscale node add ece6 -so
[ INFO ] Adding node ece6 as a GPFS node.
[ INFO ] Setting ece6 as a scale-out node.
[ INFO ] Configuration updated.
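The six add commands that are shown above can also be scripted against the nodes file; a minimal sketch, assuming the ~/nodes file that is listed earlier in this step:

# Add every node that is listed in ~/nodes as a scale-out node
while read -r node; do
    ./spectrumscale node add "$node" -so
done < ~/nodes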
 
 
Note: IBM Spectrum Scale GUI, Call Home, and other management functions, if required, must be installed on separate nodes. For environments with high-performance requirements, IBM Spectrum Scale Erasure Code Edition storage nodes must not be assigned file audit logging, Call Home, GUI, or protocol node roles.
For example, if you attempt to specify ece1 as a GUI node, it fails:
[root@ece1 installer]# ./spectrumscale node add ece1 -g
[ FATAL ] You cannot add a ece1 node as a gui node because node ece1 is marked as scale-out node.
6. You can use the following command to display the list of nodes that are specified in the cluster definition file and the respective node designations:
./spectrumscale node list
[root@ece1 installer]# ./spectrumscale node list
[ INFO ] List of nodes in current configuration:
[ INFO ] [Installer Node]
[ INFO ] 10.168.2.13
[ INFO ]
[ INFO ] [Cluster Details]
[ INFO ] Name: scale_out.ece
[ INFO ] Setup Type: Erasure Code Edition
[ INFO ]
[ INFO ] [Extended Features]
[ INFO ] File Audit logging : Disabled
[ INFO ] Watch folder : Disabled
[ INFO ] Management GUI : Disabled
[ INFO ] Performance Monitoring : Disabled
[ INFO ] Callhome : Disabled
[ INFO ]
[ INFO ] GPFS Admin Quorum Manager Protocol Scaleout OS Arch
[ INFO ] Node Node Node Node Node Node
[ INFO ] ece1 X X rhel7 x86_64
[ INFO ] ece2 X X rhel7 x86_64
[ INFO ] ece3 X X X rhel7 x86_64
[ INFO ] ece4 X rhel7 x86_64
[ INFO ] ece5 X X X rhel7 x86_64
[ INFO ] ece6 X X rhel7 x86_64
[ INFO ]
[ INFO ] [Export IP address]
[ INFO ] No export IP addresses configured
7. Define the recovery group for IBM Spectrum Scale Erasure Code Edition in the cluster definition file:
./spectrumscale recoverygroup define -N Node1,Node2,...,NodeN
[root@ece1 installer]# ./spectrumscale recoverygroup define -N ece1,ece2,ece3,ece4,ece5,ece6
[ INFO ] Defining nodeclass nc_1 with node ece1,ece2,ece3,ece4,ece5,ece6 into the cluster configuration.
[ INFO ] Defining recovery group rg_1 of nodeclass nc_1 into the cluster configuration.
[ INFO ] Configuration updated
[ INFO ] Tip :If all recovery group definition are complete, define required vdiskset to your cluster definition:./spectrumscale vdiskset define -vs <vdiskset name> -rg <RgName> -code <RaidCode> -bs <blocksize> -ss <alloc size>
[ INFO ] Tip : If all node designations and any required configurations are complete, proceed to check the installation configuration: ./spectrumscale install --precheck
[ INFO ] Tip : if an advanced disk configuration is desired, complete the RG creation by running './spectrumscale install', then move to the CLI 'mmvdisk' command syntax to build advanced configuration.
8. Perform environment prechecks before starting the installation toolkit installation command:
./spectrumscale install --precheck
[root@ece1 installer]# ./spectrumscale install --pre
[ INFO ] Logging to file:
/usr/lpp/mmfs/5.0.3.3/installer/logs/INSTALL-PRECHECK-28-08-2019_07:59:41.log
[ INFO ] Validating configuration
[ INFO ] Performing Chef (deploy tool) checks.
[ WARN ] NTP is not set to be configured with the install toolkit.See './spectrumscale config ntp -h' to setup.
[ WARN ] Install toolkit will not reconfigure Performance Monitoring as it has been disabled. See the IBM Spectrum Scale Knowledge center for documentation on manual configuration.
[ WARN ] No GUI servers specified. The GUI will not be configured on any nodes.
[ INFO ] Install toolkit will not configure file audit logging as it has been disabled.
[ INFO ] Install toolkit will not configure watch folder as it has been disabled.
[ INFO ] Checking for knife bootstrap configuration...
[ INFO ] Performing GPFS checks.
[ INFO ] Running environment checks
[ INFO ] Checking pre-requisites for portability layer.
[ INFO ] GPFS precheck OK
[ INFO ] Performing Erasure Code checks.
[ INFO ] Running environment checks for Erasure Code Edition.
[ INFO ] Erasure Code Edition precheck OK
[ INFO ] Performing RGs checks.
[ INFO ] Performing FILE AUDIT LOGGING checks.
[ INFO ] Running environment checks for file Audit logging
[ INFO ] Network check from admin node c72f4m5u13-ib0 to all other nodes in the cluster passed
[ INFO ] ece1 IBM Spectrum Scale Erasure Code Edition OS readiness version 1.1
[ INFO ] ece1 JSON files versions:
[ INFO ] ece1 supported OS:1.0
[ INFO ] ece1 sysctl: 0.5
[ INFO ] ece1 packages: 1.0
[ INFO ] ece1 SAS adapters: 1.1
[ INFO ] ece1 NIC adapters: 1.0
[ INFO ] ece1 HW requirements: 1.0
[ INFO ] ece1 checking processor compatibility
[ INFO ] ece1 x86_64 processor is supported to run ECE
[ WARN ] Ephemeral port range is not set. Please set valid ephemeral port range using the command ./spectrumscale config gpfs --ephemeral_port_range . You may set the default values as 60000-61000
[ INFO ] The install toolkit will not configure call home as it is disabled. To enable call home, use the following CLI command: ./spectrumscale callhome enable
[ INFO ] Pre-check successful for install.
[ INFO ] Tip : ./spectrumscale install
9. Perform the installation toolkit installation procedure:
./spectrumscale install
[root@ece1 installer]# ./spectrumscale install
[ INFO ] Logging to file:
/usr/lpp/mmfs/5.0.3.3/installer/logs/INSTALL-28-08-2019_09:05:15.log
[ INFO ] Validating configuration
[ WARN ] NTP is not set to be configured with the install toolkit.See './spectrumscale config ntp -h' to setup.
[ WARN ] Install toolkit will not reconfigure Performance Monitoring as it has been disabled. See the IBM Spectrum Scale Knowledge center for documentation on manual configuration.
[ WARN ] No GUI servers specified. The GUI will not be configured on any nodes.
[ INFO ] Install toolkit will not configure file audit logging as it has been disabled.
[ INFO ] Install toolkit will not configure watch folder as it has been disabled.
[ INFO ] Checking for knife bootstrap configuration...
[ INFO ] Running pre-install checks
[ INFO ] Running environment checks
[ INFO ] No GPFS License RPM detected on node ece1 . Ensure the appropriate License RPM is installed to utilize all available functionality.
[ INFO ] Checking pre-requisites for portability layer.
[ INFO ] GPFS precheck OK
[ INFO ] Running environment checks for Erasure Code Edition.
[ WARN ] An odd number of quorum nodes is recommended. 4 quorum nodes are currently configured.
[ WARN ] You have defined only 2 scale-out node as manager nodes, recommended to make all scale-out nodes into manager nodes.
[ INFO ] Erasure Code Edition precheck OK
[ INFO ] Running environment checks for file Audit logging
[ INFO ] Network check from admin node ece1 to all other nodes in the cluster passed
[ WARN ] Ephemeral port range is not set. Please set valid ephemeral port range using the command ./spectrumscale config gpfs --ephemeral_port_range . You may set the default values as 60000-61000
[ INFO ] The install toolkit will not configure call home as it is disabled. To enable call home, use the following CLI command: ./spectrumscale callhome enable
[ INFO ] Preparing nodes for install
[ INFO ] Installing Chef (deploy tool)
[ INFO ] Installing Chef Client on nodes
[ INFO ] Checking for chef-client and installing if required on ece1
[ INFO ] Chef Client 13.6.4 is on node ece1
[ INFO ] Checking for chef-client and installing if required on ece6
[ INFO ] Chef Client 13.6.4 is on node ece6
[ INFO ] Checking for chef-client and installing if required on ece3
[ INFO ] Chef Client 13.6.4 is on node ece3
[ INFO ] Checking for chef-client and installing if required on ece4
[ INFO ] Chef Client 13.6.4 is on node ece4
[ INFO ] Installing GPFS
[ INFO ] GPFS Packages to be installed: gpfs.base, gpfs.gpl, gpfs.msg.en_US, gpfs.docs, and gpfs.gskit
[ INFO ] [ece1 28-08-2019 09:06:16] IBM SPECTRUM SCALE: Generating node description file for cluster configuration (SS03)
[ INFO ] [ece1 28-08-2019 09:06:16] IBM SPECTRUM SCALE: Creating GPFS cluster with default profile (SS04)
[ INFO ] [ece1 28-08-2019 09:06:16] IBM SPECTRUM SCALE:
[ INFO ] [ece1 28-08-2019 09:06:20] IBM SPECTRUM SCALE: Setting ephemeral ports for GPFS daemon communication (SS13)
[ INFO ] [ece5 28-08-2019 09:07:32] IBM SPECTRUM SCALE: Tearing down core gpfs repository (SS06)
[ INFO ] [ece5 28-08-2019 09:07:32] IBM SPECTRUM SCALE: Tearing down GPFS performance monitoring repository (SS35)
[ INFO ] [ece1 28-08-2019 09:07:38] IBM SPECTRUM SCALE: Tearing down GPFS performance monitoring repository (SS35)
[ INFO ] [ece1 28-08-2019 09:07:38] IBM SPECTRUM SCALE: Tearing down core gpfs repository (SS06)
[ INFO ] Installing RGs
[ INFO ] RG rg_1 already exists.
[ INFO ] Installing FILE AUDIT LOGGING
[ INFO ] [ece2 28-08-2019 09:08:15] IBM SPECTRUM SCALE: Removing Yum cache repository (SS229)
[ INFO ] [ece2 28-08-2019 09:08:15] IBM SPECTRUM SCALE: Creating core gpfs repository (SS00)
[ INFO ] [ece2 28-08-2019 09:08:16] IBM SPECTRUM SCALE: Creating core gpfs repository (SS00)
[ INFO ] [ece2 28-08-2019 09:08:17] IBM SPECTRUM SCALE: Creating gpfs kafka repository (SS00)
[ INFO ] [ece2 28-08-2019 09:08:18] IBM SPECTRUM SCALE: Configuring GPFS performance monitoring repository (SS31)
[ INFO ] All services running
[ INFO ] Installation successful. 6 GPFS node active in cluster scale_out.ece. Completed in 30 minutes 13 seconds.
[ INFO ] Tip :If all node designations and any required protocol configurations are complete, proceed to check the deploy configuration:./spectrumscale deploy -precheck
Run the mmlslicense command to see the ECE license:
[root@ece1 installer]# mmlslicense
 
Summary information
---------------------
Number of nodes defined in the cluster: 6
Number of nodes with server license designation: 6
Number of nodes with FPO license designation: 0
Number of nodes with client license designation: 0
Number of nodes still requiring server license designation: 0
Number of nodes still requiring client license designation: 0
This node runs IBM Spectrum Scale Standard Edition.
 
Run the mmgetstate command to verify GPFS states:
[root@ece1 installer]# mmgetstate -a
 
Node number Node name GPFS state
-------------------------------------------
1 ece1 active
2 ece2 active
3 ece3 active
4 ece4 active
5 ece5 active
6 ece6 active
10. Define the VDisk sets and file system for IBM Spectrum Scale Erasure Code Edition in the cluster definition file.
Check whether the current configuration has a single declustered array (DA) or multiple declustered arrays by running the mmvdisk command to display the DA information:
/usr/lpp/mmfs/bin/mmvdisk rg list --rg rg_1 --da
[root@ece1 installer]# /usr/lpp/mmfs/bin/mmvdisk rg list --rg rg_1 --da

declustered  needs    vdisks      pdisks                  capacity            scrub
   array     service  user log  total spare threshold  total raw  free raw   duration  background task
-----------  -------  ---- ---  ----- ----- ---------  ---------  --------   --------  ---------------
DA1          no         12   0     12     2         2   8869 GiB  7339 GiB   14 days   scrub (71%)

mmvdisk: Total capacity is the raw space before any vdisk set definitions.
mmvdisk: Free capacity is what remains for additional vdisk set definitions.
Use the following command to get the recovery group name:
./spectrumscale recoverygroup list
 
[root@ece1 installer]# ./spectrumscale recoverygroup list
[ INFO ] Name nodeclass Server
[ INFO ] rg_1 nc_1 ece1 ,ece2,ece3,ece4,ece5,ece6
a. Single declustered array:
./spectrumscale vdiskset define -rg RgName -code RaidCode -bs BlockSize -ss SetSize
[root@ece1 installer]# ./spectrumscale vdiskset define -rg rg_1 -code 4+3P -bs 4M -ss 100
[ INFO ] The vdiskset vs_1 will be configured with 4M blocksize.
[ INFO ] The vdiskset vs_1 will be configured with 100 setsize.
[ INFO ] The vdiskset vs_1 will be configured with 4+3P erasure code.
[ INFO ] Configuration updated
[ INFO ] Tip : Now that vdisksset is defined, add a new filesystem with
./spectrumscale filesystem define -fs <filesystem name> -vs <vdiskset name>.
[ INFO ] Tip : If all the required configurations are complete, proceed to check the installation configuration: ./spectrumscale install --precheck
Use the following command to list the VDisk set name:
./spectrumscale vdiskset list
[root@ece1 installer]# ./spectrumscale vdiskset list
[ INFO ] name recoverygroup blocksize setsize RaidCode
[ INFO ] vs_1 rg_1 4M 100 4+3P
Use the following command to create a file system with default attributes:
./spectrumscale filesystem define -fs FileSystem -vs VdiskSet
[root@ece1 installer]# ./spectrumscale filesystem define -fs fs_1 -vs vs_1
[ INFO ] The installer will create the new file system fs_1 if it does not exist.
[ INFO ] Configuration updated
[ INFO ] Tip : If all recovery group and vdiskset creation is completed after
./spectrumscale install, please run ./spectrumscale deploy to create filesystem.
If you need a protocol node, run step 11 first and then run the deploy command.
Perform environment prechecks before starting the installation toolkit deploy command:
./spectrumscale deploy --precheck
Perform the installation toolkit deploy operation:
./spectrumscale deploy
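For reference, the single declustered array flow in this sub-step can be run as one short sequence; a minimal sketch that reuses the example names rg_1, vs_1, and fs_1 from above:

# Define the vdisk set and file system, then deploy to create them
./spectrumscale vdiskset define -rg rg_1 -code 4+3P -bs 4M -ss 100
./spectrumscale filesystem define -fs fs_1 -vs vs_1
./spectrumscale deploy --precheck
./spectrumscale deploy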
b. Multiple declustered arrays:
If you have multiple declustered arrays, complete step 12 to create VDisk sets and a file system by using the mmvdisk command.
11. For protocol node configuration, use the following commands. If you do not want a protocol node now, you can complete this process later:
a. Assign cluster export service (CES) protocol service IP addresses:
These addresses are separate from the IP addresses that are used internally by the cluster:
./spectrumscale config protocols -e <list of CES IP>
b. Add nodes as a protocol node in the cluster definition file:
./spectrumscale node add NodeName -p
c. Enable the required protocols (NFS, SMB, or Object):
./spectrumscale enable nfs | smb | object
d. Configure protocol cesSharedRoot file system:
./spectrumscale config protocols -f cesSharedRoot -m /gpfs/cesSharedRoot
Where cesSharedRoot is the name of the file system that is used for the CES shared root (which is needed for protocol configuration), and /gpfs/cesSharedRoot is its mount point.
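For example, the protocol configuration in this step might look like the following minimal sketch. The CES IP addresses (10.168.3.100 and 10.168.3.101), the protocol node name prot1, and the cesSharedRoot file system name are placeholders for your environment:

# Assign CES IP addresses, add a protocol node, enable protocols,
# and configure the CES shared root file system
./spectrumscale config protocols -e 10.168.3.100,10.168.3.101
./spectrumscale node add prot1 -p
./spectrumscale enable nfs
./spectrumscale enable smb
./spectrumscale config protocols -f cesSharedRoot -m /gpfs/cesSharedRoot
./spectrumscale deploy --precheck
./spectrumscale deploy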
12. For multiple declustered arrays, create VDisk sets and file systems by using the mmvdisk command.
List the current recovery group state:
# mmvdisk recoverygroup list
                                                           needs    user
 recovery group  active  current or master server          service  vdisks  remarks
 --------------  ------  --------------------------------  -------  ------  -------
 rg_1            yes     c72f4m5u15-ib0                    no            0
 
# mmvdisk recoverygroup list --recovery-group rg_1 --log-group
 
log group user vdisks log vdisks server
--------- ----------- ---------- ------
root 0 1 c72f4m5u15-ib0
LG001 0 1 c72f4m5u15-ib0
LG002 0 1 c72f4m5u21-ib0
LG003 0 1 c72f4m5u19-ib0
LG004 0 1 c72f4m5u17-ib0
LG005 0 1 c72f4m5u11-ib0
LG006 0 1 c72f4m5u13-ib0
LG007 0 1 c72f4m5u15-ib0
LG008 0 1 c72f4m5u21-ib0
LG009 0 1 c72f4m5u19-ib0
LG010 0 1 c72f4m5u17-ib0
LG011 0 1 c72f4m5u11-ib0
LG012 0 1 c72f4m5u13-ib0
In this example, the cluster has three DAs with different types of disks: NVMe, HDD, and SSD.
 
# mmvdisk recoverygroup list --recovery-group rg_1 --vdisk-set
 
declustered capacity all vdisk sets defined
recovery group array type total raw free raw free% in the declustered array
-------------- ----------- ---- --------- -------- ----- ------------------------
rg_1 DA1 NVMe 8869 GiB 8869 GiB 100% -
rg_1 DA2 HDD 8829 GiB 8829 GiB 100% -
rg_1 DA3 SSD 9173 GiB 9173 GiB 100% -
 
vdisk set map memory per server
node class available required required per vdisk set
---------- --------- -------- ----------------------
nc_1 7690 MiB 385 MiB -
 
Define vdisk sets for each DA.
Define the NVMe vdisk set
 
# mmvdisk vdiskset define --vdisk-set NV-Meta --recovery-group rg_1 --code 4way --block-size 1M --set-size 500G --declustered-array DA1 --nsd-usage metadataonly --storage-pool system
mmvdisk: Vdisk set 'NV-Meta' has been defined.
mmvdisk: Recovery group 'rg_1' has been defined in vdisk set 'NV-Meta'.
 
member vdisks
vdisk set count size raw size created file system and attributes
-------------- ----- -------- -------- ------- --------------------------
NV-Meta 12 41 GiB 170 GiB no -, DA1, 4WayReplication, 1 MiB, metadataOnly, system
 
declustered capacity all vdisk sets defined
recovery group array type total raw free raw free% in the declustered array
-------------- ----------- ---- --------- -------- ----- ------------------------
rg_1 DA1 NVMe 8869 GiB 6829 GiB 76% NV-Meta
 
vdisk set map memory per server
node class available required required per vdisk set
---------- --------- -------- ----------------------
nc_1 7690 MiB 391 MiB NV-Meta (6912 KiB)
 
Define the HDD vdisk set
 
# mmvdisk vdiskset define --vdisk-set HDD-Data --recovery-group rg_1 --code 8+3p --block-size 4M --set-size 100% --declustered-array DA2 --nsd-usage dataonly --storage-pool data01
mmvdisk: Vdisk set 'HDD-Data' has been defined.
mmvdisk: Recovery group 'rg_1' has been defined in vdisk set 'HDD-Data'.
 
member vdisks
vdisk set count size raw size created file system and attributes
-------------- ----- -------- -------- ------- --------------------------
HDD-Data 12 527 GiB 731 GiB no -, DA2, 8+3p, 4 MiB, dataOnly, data01
 
declustered capacity all vdisk sets defined
recovery group array type total raw free raw free% in the declustered array
-------------- ----------- ---- --------- -------- ----- ------------------------
rg_1 DA2 HDD 8829 GiB 51 GiB 0% HDD-Data
 
vdisk set map memory per server
node class available required required per vdisk set
---------- --------- -------- ----------------------
nc_1 7690 MiB 416 MiB HDD-Data (24 MiB), NV-Meta (6912 KiB)
 
Define the SSD vdisk set
 
# mmvdisk vdiskset define --vdisk-set SSD-Data --recovery-group rg_1 --code 8+3p --block-size 2M --set-size 100% --declustered-array DA3 --nsd-usage dataandmetadata --storage-pool system
mmvdisk: Vdisk set 'SSD-Data' has been defined.
mmvdisk: Recovery group 'rg_1' has been defined in vdisk set 'SSD-Data'.
 
member vdisks
vdisk set count size raw size created file system and attributes
-------------- ----- -------- -------- ------- --------------------------
SSD-Data 12 544 GiB 761 GiB no -, DA3, 8+3p, 2 MiB, dataAndMetadata, system
 
declustered capacity all vdisk sets defined
recovery group array type total raw free raw free% in the declustered array
-------------- ----------- ---- --------- -------- ----- ------------------------
rg_1 DA3 SSD 9173 GiB 32 GiB 0% SSD-Data
 
vdisk set map memory per server
node class available required required per vdisk set
---------- --------- -------- ----------------------
nc_1 7690 MiB 502 MiB HDD-Data (24 MiB), NV-Meta (6912 KiB), SSD-Data (85 MiB)
 
Create all vdisks that belong to these vdisk sets
 
# mmvdisk vdiskset create --vdisk-set all
mmvdisk: 12 vdisks and 12 NSDs will be created in vdisk set 'SSD-Data'.
mmvdisk: 12 vdisks and 12 NSDs will be created in vdisk set 'NV-Meta'.
mmvdisk: 12 vdisks and 12 NSDs will be created in vdisk set 'HDD-Data'.
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG001VS003
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG002VS003
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG003VS003
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG004VS003
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG005VS003
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG006VS003
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG007VS003
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG008VS003
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG009VS003
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG010VS003
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG011VS003
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG012VS003
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG001VS001
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG002VS001
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG003VS001
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG004VS001
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG005VS001
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG006VS001
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG007VS001
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG008VS001
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG009VS001
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG010VS001
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG011VS001
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG012VS001
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG001VS002
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG002VS002
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG003VS002
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG004VS002
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG005VS002
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG006VS002
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG007VS002
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG008VS002
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG009VS002
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG010VS002
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG011VS002
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001LG012VS002
mmvdisk: Created all vdisks in vdisk set 'SSD-Data'.
mmvdisk: Created all vdisks in vdisk set 'NV-Meta'.
mmvdisk: Created all vdisks in vdisk set 'HDD-Data'.
mmvdisk: (mmcrnsd) Processing disk RG001LG001VS003
mmvdisk: (mmcrnsd) Processing disk RG001LG002VS003
mmvdisk: (mmcrnsd) Processing disk RG001LG003VS003
mmvdisk: (mmcrnsd) Processing disk RG001LG004VS003
mmvdisk: (mmcrnsd) Processing disk RG001LG005VS003
mmvdisk: (mmcrnsd) Processing disk RG001LG006VS003
mmvdisk: (mmcrnsd) Processing disk RG001LG007VS003
mmvdisk: (mmcrnsd) Processing disk RG001LG008VS003
mmvdisk: (mmcrnsd) Processing disk RG001LG009VS003
mmvdisk: (mmcrnsd) Processing disk RG001LG010VS003
mmvdisk: (mmcrnsd) Processing disk RG001LG011VS003
mmvdisk: (mmcrnsd) Processing disk RG001LG012VS003
mmvdisk: (mmcrnsd) Processing disk RG001LG001VS001
mmvdisk: (mmcrnsd) Processing disk RG001LG002VS001
mmvdisk: (mmcrnsd) Processing disk RG001LG003VS001
mmvdisk: (mmcrnsd) Processing disk RG001LG004VS001
mmvdisk: (mmcrnsd) Processing disk RG001LG005VS001
mmvdisk: (mmcrnsd) Processing disk RG001LG006VS001
mmvdisk: (mmcrnsd) Processing disk RG001LG007VS001
mmvdisk: (mmcrnsd) Processing disk RG001LG008VS001
mmvdisk: (mmcrnsd) Processing disk RG001LG009VS001
mmvdisk: (mmcrnsd) Processing disk RG001LG010VS001
mmvdisk: (mmcrnsd) Processing disk RG001LG011VS001
mmvdisk: (mmcrnsd) Processing disk RG001LG012VS001
mmvdisk: (mmcrnsd) Processing disk RG001LG001VS002
mmvdisk: (mmcrnsd) Processing disk RG001LG002VS002
mmvdisk: (mmcrnsd) Processing disk RG001LG003VS002
mmvdisk: (mmcrnsd) Processing disk RG001LG004VS002
mmvdisk: (mmcrnsd) Processing disk RG001LG005VS002
mmvdisk: (mmcrnsd) Processing disk RG001LG006VS002
mmvdisk: (mmcrnsd) Processing disk RG001LG007VS002
mmvdisk: (mmcrnsd) Processing disk RG001LG008VS002
mmvdisk: (mmcrnsd) Processing disk RG001LG009VS002
mmvdisk: (mmcrnsd) Processing disk RG001LG010VS002
mmvdisk: (mmcrnsd) Processing disk RG001LG011VS002
mmvdisk: (mmcrnsd) Processing disk RG001LG012VS002
mmvdisk: Created all NSDs in vdisk set 'SSD-Data'.
mmvdisk: Created all NSDs in vdisk set 'NV-Meta'.
mmvdisk: Created all NSDs in vdisk set 'HDD-Data'.
List the current vdisk set state
 
# mmvdisk recoverygroup list --recovery-group rg_1 --vdisk-set
 
declustered capacity all vdisk sets defined
recovery group array type total raw free raw free% in the declustered array
-------------- ----------- ---- --------- -------- ----- ------------------------
rg_1 DA1 NVMe 8869 GiB 6829 GiB 76% NV-Meta
rg_1 DA2 HDD 8829 GiB 51 GiB 0% HDD-Data
rg_1 DA3 SSD 9173 GiB 32 GiB 0% SSD-Data
 
vdisk set map memory per server
node class available required required per vdisk set
---------- --------- -------- ----------------------
nc_1 7690 MiB 502 MiB HDD-Data (24 MiB), NV-Meta (6912 KiB), SSD-Data (85 MiB)
 
List the created vdisks
 
# mmvdisk recoverygroup list --recovery-group rg_1 --vdisk
 
declustered array block size and
vdisk and log group activity capacity RAID code checksum granularity remarks
------------------ ------- -------- -------- -------- --------------- --------- --------- -------
RG001LG001LOGHOME DA3 LG001 normal 2048 MiB 4WayReplication 2 MiB 4096 log home
RG001LG002LOGHOME DA3 LG002 normal 2048 MiB 4WayReplication 2 MiB 4096 log home
RG001LG003LOGHOME DA3 LG003 normal 2048 MiB 4WayReplication 2 MiB 4096 log home
RG001LG004LOGHOME DA3 LG004 normal 2048 MiB 4WayReplication 2 MiB 4096 log home
RG001LG005LOGHOME DA3 LG005 normal 2048 MiB 4WayReplication 2 MiB 4096 log home
RG001LG006LOGHOME DA3 LG006 normal 2048 MiB 4WayReplication 2 MiB 4096 log home
RG001LG007LOGHOME DA3 LG007 normal 2048 MiB 4WayReplication 2 MiB 4096 log home
RG001LG008LOGHOME DA3 LG008 normal 2048 MiB 4WayReplication 2 MiB 4096 log home
RG001LG009LOGHOME DA3 LG009 normal 2048 MiB 4WayReplication 2 MiB 4096 log home
RG001LG010LOGHOME DA3 LG010 normal 2048 MiB 4WayReplication 2 MiB 4096 log home
RG001LG011LOGHOME DA3 LG011 normal 2048 MiB 4WayReplication 2 MiB 4096 log home
RG001LG012LOGHOME DA3 LG012 normal 2048 MiB 4WayReplication 2 MiB 4096 log home
RG001ROOTLOGHOME DA3 root normal 2048 MiB 4WayReplication 2 MiB 4096 log home
RG001LG001VS001 DA1 LG001 normal 41 GiB 4WayReplication 1 MiB 8192
RG001LG001VS002 DA2 LG001 normal 527 GiB 8+3p 4 MiB 32 KiB
RG001LG001VS003 DA3 LG001 normal 544 GiB 8+3p 2 MiB 8192
RG001LG002VS001 DA1 LG002 normal 41 GiB 4WayReplication 1 MiB 8192
RG001LG002VS002 DA2 LG002 normal 527 GiB 8+3p 4 MiB 32 KiB
RG001LG002VS003 DA3 LG002 normal 544 GiB 8+3p 2 MiB 8192
RG001LG003VS001 DA1 LG003 normal 41 GiB 4WayReplication 1 MiB 8192
RG001LG003VS002 DA2 LG003 normal 527 GiB 8+3p 4 MiB 32 KiB
RG001LG003VS003 DA3 LG003 normal 544 GiB 8+3p 2 MiB 8192
RG001LG004VS001 DA1 LG004 normal 41 GiB 4WayReplication 1 MiB 8192
RG001LG004VS002 DA2 LG004 normal 527 GiB 8+3p 4 MiB 32 KiB
RG001LG004VS003 DA3 LG004 normal 544 GiB 8+3p 2 MiB 8192
RG001LG005VS001 DA1 LG005 normal 41 GiB 4WayReplication 1 MiB 8192
RG001LG005VS002 DA2 LG005 normal 527 GiB 8+3p 4 MiB 32 KiB
RG001LG005VS003 DA3 LG005 normal 544 GiB 8+3p 2 MiB 8192
RG001LG006VS001 DA1 LG006 normal 41 GiB 4WayReplication 1 MiB 8192
RG001LG006VS002 DA2 LG006 normal 527 GiB 8+3p 4 MiB 32 KiB
RG001LG006VS003 DA3 LG006 normal 544 GiB 8+3p 2 MiB 8192
RG001LG007VS001 DA1 LG007 normal 41 GiB 4WayReplication 1 MiB 8192
RG001LG007VS002 DA2 LG007 normal 527 GiB 8+3p 4 MiB 32 KiB
RG001LG007VS003 DA3 LG007 normal 544 GiB 8+3p 2 MiB 8192
RG001LG008VS001 DA1 LG008 normal 41 GiB 4WayReplication 1 MiB 8192
RG001LG008VS002 DA2 LG008 normal 527 GiB 8+3p 4 MiB 32 KiB
RG001LG008VS003 DA3 LG008 normal 544 GiB 8+3p 2 MiB 8192
RG001LG009VS001 DA1 LG009 normal 41 GiB 4WayReplication 1 MiB 8192
RG001LG009VS002 DA2 LG009 normal 527 GiB 8+3p 4 MiB 32 KiB
RG001LG009VS003 DA3 LG009 normal 544 GiB 8+3p 2 MiB 8192
RG001LG010VS001 DA1 LG010 normal 41 GiB 4WayReplication 1 MiB 8192
RG001LG010VS002 DA2 LG010 normal 527 GiB 8+3p 4 MiB 32 KiB
RG001LG010VS003 DA3 LG010 normal 544 GiB 8+3p 2 MiB 8192
RG001LG011VS001 DA1 LG011 normal 41 GiB 4WayReplication 1 MiB 8192
RG001LG011VS002 DA2 LG011 normal 527 GiB 8+3p 4 MiB 32 KiB
RG001LG011VS003 DA3 LG011 normal 544 GiB 8+3p 2 MiB 8192
RG001LG012VS001 DA1 LG012 normal 41 GiB 4WayReplication 1 MiB 8192
RG001LG012VS002 DA2 LG012 normal 527 GiB 8+3p 4 MiB 32 KiB
RG001LG012VS003 DA3 LG012 normal 544 GiB 8+3p 2 MiB 8192
 
Show the current vdisk fault tolerance
 
# mmvdisk recoverygroup list --recovery-group rg_1 --fault-tolerance
 
declustered VCD spares
configuration data array configured actual remarks
------------------ ----------- ---------- ------ -------
relocation space DA1 3 7 must contain VCD
relocation space DA2 9 13 must contain VCD
relocation space DA3 15 19 must contain VCD
 
configuration data disk group fault tolerance remarks
------------------ --------------------------------- -------
rg descriptor 2 node limiting fault tolerance
system index 2 node limited by rg descriptor
 
vdisk RAID code disk group fault tolerance remarks
------------------ --------------- --------------------------------- -------
RG001LG001LOGHOME 4WayReplication 2 node limited by rg descriptor
RG001LG002LOGHOME 4WayReplication 2 node limited by rg descriptor
RG001LG003LOGHOME 4WayReplication 2 node limited by rg descriptor
RG001LG004LOGHOME 4WayReplication 2 node limited by rg descriptor
RG001LG005LOGHOME 4WayReplication 2 node limited by rg descriptor
RG001LG006LOGHOME 4WayReplication 2 node limited by rg descriptor
RG001LG007LOGHOME 4WayReplication 2 node limited by rg descriptor
RG001LG008LOGHOME 4WayReplication 2 node limited by rg descriptor
RG001LG009LOGHOME 4WayReplication 2 node limited by rg descriptor
RG001LG010LOGHOME 4WayReplication 2 node limited by rg descriptor
RG001LG011LOGHOME 4WayReplication 2 node limited by rg descriptor
RG001LG012LOGHOME 4WayReplication 2 node limited by rg descriptor
RG001ROOTLOGHOME 4WayReplication 2 node limited by rg descriptor
RG001LG001VS001 4WayReplication 2 node limited by rg descriptor
RG001LG001VS002 8+3p 1 node + 1 pdisk
RG001LG001VS003 8+3p 1 node + 1 pdisk
RG001LG002VS001 4WayReplication 2 node limited by rg descriptor
RG001LG002VS002 8+3p 1 node + 1 pdisk
RG001LG002VS003 8+3p 1 node + 1 pdisk
RG001LG003VS001 4WayReplication 2 node limited by rg descriptor
RG001LG003VS002 8+3p 1 node + 1 pdisk
RG001LG003VS003 8+3p 1 node + 1 pdisk
RG001LG004VS001 4WayReplication 2 node limited by rg descriptor
RG001LG004VS002 8+3p 1 node + 1 pdisk
RG001LG004VS003 8+3p 1 node + 1 pdisk
RG001LG005VS001 4WayReplication 2 node limited by rg descriptor
RG001LG005VS002 8+3p 1 node + 1 pdisk
RG001LG005VS003 8+3p 1 node + 1 pdisk
RG001LG006VS001 4WayReplication 2 node limited by rg descriptor
RG001LG006VS002 8+3p 1 node + 1 pdisk
RG001LG006VS003 8+3p 1 node + 1 pdisk
RG001LG007VS001 4WayReplication 2 node limited by rg descriptor
RG001LG007VS002 8+3p 1 node + 1 pdisk
RG001LG007VS003 8+3p 1 node + 1 pdisk
RG001LG008VS001 4WayReplication 2 node limited by rg descriptor
RG001LG008VS002 8+3p 1 node + 1 pdisk
RG001LG008VS003 8+3p 1 node + 1 pdisk
RG001LG009VS001 4WayReplication 2 node limited by rg descriptor
RG001LG009VS002 8+3p 1 node + 1 pdisk
RG001LG009VS003 8+3p 1 node + 1 pdisk
RG001LG010VS001 4WayReplication 2 node limited by rg descriptor
RG001LG010VS002 8+3p 1 node + 1 pdisk
RG001LG010VS003 8+3p 1 node + 1 pdisk
RG001LG011VS001 4WayReplication 2 node limited by rg descriptor
RG001LG011VS002 8+3p 1 node + 1 pdisk
RG001LG011VS003 8+3p 1 node + 1 pdisk
RG001LG012VS001 4WayReplication 2 node limited by rg descriptor
RG001LG012VS002 8+3p 1 node + 1 pdisk
RG001LG012VS003 8+3p 1 node + 1 pdisk
 
Create a file system named gpfs_hd with vdisk set NV-Meta as metadata pool and vdisk set HDD-Data as data pool.
# mmvdisk filesystem create --file-system gpfs_hd --vdisk-set NV-Meta,HDD-Data --mmcrfs -T /gpfs_hd
mmvdisk: Creating file system 'gpfs_hd'.
mmvdisk: The following disks of gpfs_hd will be formatted on node c72f4m5u13-ib0:
mmvdisk: RG001LG001VS001: size 42966 MB
mmvdisk: RG001LG002VS001: size 42966 MB
mmvdisk: RG001LG003VS001: size 42966 MB
mmvdisk: RG001LG004VS001: size 42966 MB
mmvdisk: RG001LG005VS001: size 42966 MB
mmvdisk: RG001LG006VS001: size 42966 MB
mmvdisk: RG001LG007VS001: size 42966 MB
mmvdisk: RG001LG008VS001: size 42966 MB
mmvdisk: RG001LG009VS001: size 42966 MB
mmvdisk: RG001LG010VS001: size 42966 MB
mmvdisk: RG001LG011VS001: size 42966 MB
mmvdisk: RG001LG012VS001: size 42966 MB
mmvdisk: RG001LG001VS002: size 539968 MB
mmvdisk: RG001LG002VS002: size 539968 MB
mmvdisk: RG001LG003VS002: size 539968 MB
mmvdisk: RG001LG004VS002: size 539968 MB
mmvdisk: RG001LG005VS002: size 539968 MB
mmvdisk: RG001LG006VS002: size 539968 MB
mmvdisk: RG001LG007VS002: size 539968 MB
mmvdisk: RG001LG008VS002: size 539968 MB
mmvdisk: RG001LG009VS002: size 539968 MB
mmvdisk: RG001LG010VS002: size 539968 MB
mmvdisk: RG001LG011VS002: size 539968 MB
mmvdisk: RG001LG012VS002: size 539968 MB
mmvdisk: Formatting file system ...
mmvdisk: Disks up to size 594.17 GB can be added to storage pool system.
mmvdisk: Disks up to size 8.11 TB can be added to storage pool data01.
mmvdisk: Creating Inode File
mmvdisk: Creating Allocation Maps
mmvdisk: Creating Log Files
mmvdisk: Clearing Inode Allocation Map
mmvdisk: Clearing Block Allocation Map
mmvdisk: Formatting Allocation Map for storage pool system
mmvdisk: Formatting Allocation Map for storage pool data01
mmvdisk: Completed creation of file system /dev/gpfs_hd.
Create a file system named gpfs_hs with vdisk set SSD-Data
# mmvdisk filesystem create --file-system gpfs_hs --vdisk-set SSD-Data --mmcrfs -T /gpfs_hs
mmvdisk: Creating file system 'gpfs_hs'.
mmvdisk: The following disks of gpfs_hs will be formatted on node c72f4m5u15-ib0:
mmvdisk: RG001LG001VS003: size 557872 MB
mmvdisk: RG001LG002VS003: size 557872 MB
mmvdisk: RG001LG003VS003: size 557872 MB
mmvdisk: RG001LG004VS003: size 557872 MB
mmvdisk: RG001LG005VS003: size 557872 MB
mmvdisk: RG001LG006VS003: size 557872 MB
mmvdisk: RG001LG007VS003: size 557872 MB
mmvdisk: RG001LG008VS003: size 557872 MB
mmvdisk: RG001LG009VS003: size 557872 MB
mmvdisk: RG001LG010VS003: size 557872 MB
mmvdisk: RG001LG011VS003: size 557872 MB
mmvdisk: RG001LG012VS003: size 557872 MB
mmvdisk: Formatting file system ...
mmvdisk: Disks up to size 8.19 TB can be added to storage pool system.
mmvdisk: Creating Inode File
mmvdisk: Creating Allocation Maps
mmvdisk: Creating Log Files
mmvdisk: Clearing Inode Allocation Map
mmvdisk: Clearing Block Allocation Map
mmvdisk: Formatting Allocation Map for storage pool system
mmvdisk: Completed creation of file system /dev/gpfs_hs.
Show the current vdisk sets
# mmvdisk vdiskset list
 
vdisk set created file system recovery groups
---------------- ------- ----------- ---------------
HDD-Data yes gpfs_hd rg_1
NV-Meta yes gpfs_hd rg_1
SSD-Data yes gpfs_hs rg_1
 
Show the file system information
# mmvdisk filesystem list
 
file system vdisk sets
----------- ----------
gpfs_hd HDD-Data, NV-Meta
gpfs_hs SSD-Data
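After the file systems are created, they can be mounted on all nodes and verified by using standard IBM Spectrum Scale commands; a minimal sketch that uses the file system names from this example:

# Mount both file systems on all nodes and confirm the mounts
mmmount gpfs_hd -a
mmmount gpfs_hs -a
mmlsmount all -L
df -h /gpfs_hd /gpfs_hs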
 
 