Configuring IBM Spectrum Archive Enterprise Edition
This chapter provides information about the postinstallation configuration of the IBM Spectrum Archive Enterprise Edition (IBM Spectrum Archive EE).
This chapter includes the following topics:
 
Note: In the lab setup for this book, we used a Red Hat-based Linux system. The screen captures in this chapter are based on Version 1 Release 3 of the product. Although the steps that you perform are the same, you might see slightly different output depending on the version and release of the product that you use.
5.1 Configuration prerequisites
This section describes the tasks that must be completed before IBM Spectrum Archive EE is configured.
Ensure that the following prerequisites are met before IBM Spectrum Archive EE is configured. For more information, see 5.2, “Configuring IBM Spectrum Archive EE” on page 105.
The Configuration worksheet is completed and available during the configuration process.
 
Tip: Table 5-1 on page 98, Table 5-2 on page 99, Table 5-3 on page 99, Table 5-4 on page 100, Table 5-5 on page 100, and Table 5-6 on page 101 provide a set of sample configuration worksheets. You can print and use these samples during your configuration of IBM Spectrum Archive EE.
The key-based login with OpenSSH is configured.
The IBM Spectrum Scale system is prepared and ready for use on your Linux server system.
The control paths (CPs) to the tape library logical libraries are configured and enabled. You need at least one CP per node.
5.1.1 Configuration worksheet tables
Print Table 5-1 on page 98, Table 5-2 on page 99, Table 5-3 on page 99, Table 5-4 on page 100, Table 5-5 on page 100, and Table 5-6 on page 101 and use them as worksheets or as a template to create your own worksheets to record the information you need to configure IBM Spectrum Archive EE.
For more information, see 5.1.2, “Obtaining configuration information” on page 101 and follow the steps to obtain the information that is required to complete your worksheet.
The information in the following tables is required to configure IBM Spectrum Archive EE. Complete Table 5-4 on page 100, Table 5-5 on page 100, and Table 5-6 on page 101 with the required information and refer to this information as necessary during the configuration process, as described in 5.2, “Configuring IBM Spectrum Archive EE” on page 105.
Table 5-1, Table 5-2, and Table 5-3 show example configuration worksheets with the parameters completed for the lab setup that was used to write this book.
Table 5-1 lists the file systems.
Table 5-1 Example IBM Spectrum Scale file systems
IBM Spectrum Scale file systems
File system name | Mount point | Need space management? (Yes or No) | Reserved for IBM Spectrum Archive EE? (Yes or No)
gpfs | /ibm/glues | YES | YES
Table 5-2 lists the logical tape library information.
Table 5-2 Example logical tape library
Logical Tape library
Tape library information
Tape library (L-Frame) Serial Number: 78-A4274
Starting SCSI Element Address of the logical tape library for IBM Spectrum Archive EE (decimal and hex): 1033 dec = 409 hex
Logical tape library serial number (L-Frame S/N + “0” + SCSI starting element address in hex): 78A4274 + “0” + 409 = 78A42740409
Tape Drive information
Drive Serial number | Assigned IBM Spectrum Scale node | CP? (Yes or No) | Linux device name in the node
9A700M0029 | htohru9 | YES | /dev/sgXX
1068000073 | htohru9 | NO | /dev/sgYY
Table 5-3 lists the nodes.
Table 5-3 Example IBM Spectrum Scale nodes
IBM Spectrum Scale nodes
IBM Spectrum Scale node name | Installing IBM Spectrum Archive EE? (Yes or No) | Tape drives assigned to this node (Serial number) | CP enabled tape drive (Serial number)
htohru9 | YES | 9A700M0029, 1068000073 | 9A700M0029
Figure 5-1 shows an example of a TS3500 GUI window that you can use to display the starting SCSI element address of a TS3500 logical library. Record the decimal value (the starting address) so that you can calculate the associated logical library serial number, as shown in Table 5-2 on page 99; a short command sketch follows Figure 5-1. You can open this window by displaying the details of a specific logical library.
Figure 5-1 Obtain the starting SCSI element address of a TS3500 logical library
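The logical library serial number can be derived from the L-Frame serial number and the starting SCSI element address. The following minimal shell sketch uses the sample values from Table 5-2 (L-Frame serial number 78-A4274 and starting element address 1033); substitute the values for your own library.
# sample values from Table 5-2; replace them with the values for your library
LFRAME_SN=78A4274                                  # L-Frame serial number without the dash
START_ADDR_DEC=1033                                # starting SCSI element address (decimal)
START_ADDR_HEX=$(printf '%X' "$START_ADDR_DEC")    # converts 1033 to 409
echo "${LFRAME_SN}0${START_ADDR_HEX}"              # prints 78A42740409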
Table 5-4 shows an example of a blank file systems worksheet.
Table 5-4 Example IBM Spectrum Scale file systems
IBM Spectrum Scale file systems
File system name | Mount point | Need space management? (Yes or No) | Reserved for IBM Spectrum Archive EE? (Yes or No)
(blank rows for your entries)
Table 5-5 shows an example of a blank logical tape library worksheet.
Table 5-5 Example logical tape library
Logical Tape library
Tape library information
Tape library (L-Frame) Serial Number:
Starting SCSI Element Address of the logical tape library for IBM Spectrum Archive EE (decimal and hex):
Logical tape library serial number (L-Frame S/N + “0” + SCSI starting element address in hex):
Tape Drive information
Drive Serial number | Assigned IBM Spectrum Scale node | CP? (Yes or No) | Linux device name in the node
(blank rows for your entries)
Table 5-6 shows an example of a blank nodes worksheet.
Table 5-6 Example IBM Spectrum Scale nodes
IBM Spectrum Scale nodes
IBM Spectrum Scale node name | Installing IBM Spectrum Archive EE? (Yes or No) | Tape drives assigned to this node (Serial number) | CP enabled tape drive (Serial number)
(blank rows for your entries)
5.1.2 Obtaining configuration information
To obtain the information about your environment that is required for configuring IBM Spectrum Archive EE, complete the following steps:
1. Log on to the operating system as a root user.
2. Start GPFS (if it is not started already) by running the following command (see Example 5-1 on page 102):
# mmstartup -a
3. Mount GPFS (if it is not already mounted) by running the following command (see Example 5-1 on page 102):
# mmmount all
4. Obtain a list of all GPFS file systems that exist in the IBM Spectrum Scale cluster by running the following command (see Example 5-1 on page 102):
# mmlsfs all
5. Go to the Configuration worksheet (provided in 5.1.1, “Configuration worksheet tables” on page 98) and enter the list of file system names in the GPFS file systems table.
6. Plan which GPFS file system will be used to store IBM Spectrum Archive EE internal data. For more information, see 5.1.4, “Preparing the IBM Spectrum Scale file system for IBM Spectrum Archive EE” on page 104.
7. Go to the Configuration worksheet and enter the GPFS file system that is used to store IBM Spectrum Archive EE internal data into Table 5-1 on page 98.
8. Obtain a list of all IBM Spectrum Scale nodes in the IBM Spectrum Scale cluster by running the following command (see Example 5-1 on page 102):
# mmlsnode
9. Go to the Configuration worksheet and enter the list of IBM Spectrum Scale nodes, and whether IBM Spectrum Archive EE is installed on each node, in the IBM Spectrum Scale nodes table.
10. Obtain the logical library serial number, as described in the footnote of Table 5-2 on page 99. For more information and support, see the IBM Documentation website for your specific tape library.
11. Go to the Configuration worksheet and enter the logical library serial number that was obtained in the previous step into the Logical Tape Library table.
12. Obtain a list of all tape drives in the logical library that you plan to use for the configuration of IBM Spectrum Archive EE. For more information, see the IBM Documentation website for your specific tape library.
13. Go to the Configuration worksheet and enter the tape drive serial numbers that were obtained in the previous step into the Logical Tape Library table.
14. Assign each drive to one of the IBM Spectrum Archive EE nodes that are listed in the IBM Spectrum Scale nodes table in the Configuration worksheet and add that information to the IBM Spectrum Scale nodes table.
15. Assign at least one CP to each of the IBM Spectrum Archive EE nodes and enter whether each drive is a CP drive in the IBM Spectrum Scale nodes section of the Configuration worksheet.
16. Go to the Configuration worksheet and update the Logical Tape Library table with the tape drive assignment and CP drive information by adding the drive serial numbers in the appropriate columns.
Keep the completed configuration worksheet available for reference during the configuration process. Example 5-1 shows how to obtain the information for the worksheet.
Example 5-1 Obtain the IBM Spectrum Scale required information for the configuration worksheet
[root@ltfs97 ~]# mmstartup -a
Fri Apr 5 14:02:32 JST 2013: mmstartup: Starting GPFS ...
htohru9.ltd.sdl: The GPFS subsystem is already active.
 
[root@ltfs97 ~]# mmmount all
Fri Apr 5 14:02:50 JST 2013: mmmount: Mounting file systems ...
 
[root@ltfs97 ~]# mmlsfs all
 
File system attributes for /dev/gpfs:
=====================================
flag value description
------------------- ------------------------ -----------------------------------
-f 8192 Minimum fragment (subblock) size in bytes
-i 4096 Inode size in bytes
-I 32768 Indirect block size in bytes
-m 1 Default number of metadata replicas
-M 2 Maximum number of metadata replicas
-r 1 Default number of data replicas
-R 2 Maximum number of data replicas
-j cluster Block allocation type
-D nfs4 File locking semantics in effect
-k all ACL semantics in effect
-n 32 Estimated number of nodes that will mount file system
-B 4194304 Block size
-Q none Quotas accounting enabled
none Quotas enforced
none Default quotas enabled
--perfileset-quota no Per-fileset quota enforcement
--filesetdf no Fileset df enabled?
-V 20.01 (5.0.2.0) File system version
--create-time Thu Sep 20 14:19:23 2018 File system creation time
-z yes Is DMAPI enabled?
-L 33554432 Logfile size
-E yes Exact mtime mount option
-S relatime Suppress atime mount option
-K whenpossible Strict replica allocation option
--fastea yes Fast external attributes enabled?
--encryption no Encryption enabled?
--inode-limit 600000512 Maximum number of inodes
--log-replicas 0 Number of log replicas
--is4KAligned yes is4KAligned?
--rapid-repair yes rapidRepair enabled?
--write-cache-threshold 0 HAWC Threshold (max 65536)
--subblocks-per-full-block 512 Number of subblocks per full block
-P system Disk storage pools in file system
--file-audit-log no File Audit Logging enabled?
--maintenance-mode no Maintenance Mode enabled?
-d nsd208_209_1;nsd208_209_2;nsd208_209_3;nsd208_209_4 Disks in file system
-A yes Automatic mount option
-o none Additional mount options
-T /ibm/glues Default mount point
--mount-priority 0 Mount priority
 
[root@ltfs97 ~]# mmlsnode
GPFS nodeset Node list
------------- -------------------------------------------------------
htohru9 htohru9
5.1.3 Configuring key-based login with OpenSSH
IBM Spectrum Archive EE uses the Secure Shell (SSH) protocol for secure file transfer and requires key-based login with OpenSSH for the root user.
To use key-based login with OpenSSH, it is necessary to generate SSH key files and append the public key file from each node (including the local node) to the authorized_keys file in the ~root/.ssh directory.
The following points must be considered:
This procedure must be performed on all IBM Spectrum Archive EE nodes.
After completing this task, a root user on any node in an IBM Spectrum Archive EE cluster can run any commands on any node remotely without providing the password for the root on the remote node. It is preferable that the cluster is built on a closed network. If the cluster is within a firewall, all ports can be opened. For more information, see 4.3.1, “Extracting binary rpm files from an installation package” on page 76 and 4.3.2, “Installing, upgrading, or uninstalling IBM Spectrum Archive EE” on page 78.
To configure key-based login with OpenSSH, complete the following steps (a consolidated command sketch follows the list):
1. If the ~root/.ssh directory does not exist, create it by running the following command:
mkdir ~root/.ssh
2. If the root user does not have SSH keys, generate them by running the ssh-keygen command and pressing Enter at all prompts.
 
Important: You can verify whether the root user has a public key by locating the id_rsa and id_rsa.pub files under the /root/.ssh/ directory. If these files do not exist, you must generate them.
3. After the key is generated, copy the key to each server that requires a key-based login for OpenSSH by running the following command:
ssh-copy-id root@<server>
4. Repeat these steps on each IBM Spectrum Archive EE node.
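The following consolidated sketch shows one non-interactive way to perform steps 1 - 4. The host names eenode1 and eenode2 are placeholders for your IBM Spectrum Archive EE nodes; repeat the ssh-copy-id step for every node in the cluster, including the local node.
# run as root on each IBM Spectrum Archive EE node
mkdir -p ~root/.ssh
# generate an RSA key pair if one does not exist (empty passphrase, default location)
[ -f /root/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
# append this node's public key to authorized_keys on every EE node, including this one
ssh-copy-id root@eenode1
ssh-copy-id root@eenode2
# verify that password-less root login works
ssh root@eenode2 hostname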
5.1.4 Preparing the IBM Spectrum Scale file system for IBM Spectrum Archive EE
Complete this task to create and mount the IBM Spectrum Scale file system before IBM Spectrum Archive EE is configured.
Before you make any system upgrades or major configuration changes to your GPFS or IBM Spectrum Scale cluster, review your GPFS or IBM Spectrum Scale documentation and consult IBM Spectrum Scale frequently asked question (FAQ) information that applies to your version of IBM Spectrum Scale. For more information about the IBM Spectrum Scale FAQ, see IBM Documentation.
Before you begin this procedure, ensure that the following prerequisites are met:
IBM Spectrum Scale is installed on each of the IBM Spectrum Archive EE nodes.
The IBM Spectrum Scale cluster is created and all of the IBM Spectrum Archive EE nodes belong to the cluster.
IBM Spectrum Archive EE requires space for the file metadata, which is stored in the LTFS metadata directory. The metadata directory can be stored in its own GPFS file system, or it can share the GPFS file system that is being space-managed with IBM Spectrum Archive EE.
The file system that is used for the LTFS metadata directory must be created and mounted before the IBM Spectrum Archive EE configuration is performed. The following requirements apply to the GPFS file system that is used for the LTFS metadata directory:
The file system must be mounted and accessible from all of the IBM Spectrum Archive EE nodes in the cluster.
The GPFS file system (or systems) that are space-managed with IBM Spectrum Archive EE must be DMAPI enabled.
To create and mount the GPFS file system, complete the following steps:
1. Create a network shared disk (NSD), if necessary, by running the following command. It is possible to share an NSD with another GPFS file system.
# mmcrnsd -F nsd.list -v no
<<nsd.list>>
%nsd: device=/dev/dm-3
nsd=nsd00
servers=ltfs01,ltfs02,ltfs03,ltfs04
usage=dataAndMetadata
2. Start the GPFS service (if it is not started already) by running the following command:
# mmstartup -a
3. Create the GPFS file system by running the following command. For more information about the file system name and mount point, see 5.1.1, “Configuration worksheet tables” on page 98.
# mmcrfs /dev/gpfs nsd00 -z yes -T /ibm/glues
In this example, /dev/gpfs is the file system name and /ibm/glues is the mount point. For a separate file system that is used only for the LTFS metadata directory, you do not need to use the -z option. Generally, if a GPFS file system is not intended to be managed by IBM Spectrum Archive EE, it should not be DMAPI-enabled; therefore, do not specify the -z option. The preferred configuration is to have one file system with DMAPI enabled. A short verification sketch follows this procedure.
4. Mount the GPFS file system by running the following command:
# mmmount gpfs -a
For more information about the mmmount command, see the following resources:
General Parallel File System Version 4 Release 1.0.4 Advanced Administration Guide, SC23-7032
IBM Spectrum Scale: Administration Guide, which is available at IBM Documentation.
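Before you continue with the configuration, you can verify the file system state with the following short sketch. It assumes the file system name gpfs from the previous example; the -z attribute of mmlsfs reports whether DMAPI is enabled.
# confirm that DMAPI is enabled for the file system (expected value: yes)
mmlsfs gpfs -z
# confirm that the file system is mounted on all IBM Spectrum Archive EE nodes
mmlsmount gpfs -L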
5.2 Configuring IBM Spectrum Archive EE
The topics in this section describe how to use the ltfsee_config command to configure IBM Spectrum Archive EE in a single node or multiple node environment. Instructions for removing a node from an IBM Spectrum Archive EE configuration are also provided.
5.2.1 The ltfsee_config utility
Use the ltfsee_config command-line utility to configure IBM Spectrum Archive EE for a single-node or multiple-node environment. You must have root user authority to use this command. The command can also be used to check an IBM Spectrum Archive EE configuration. The utility operates in interactive mode and guides you step-by-step through the information that you must provide. A short invocation sketch follows the option descriptions.
 
Reminder: All of the command examples use the command without the full file path name because we added the IBM Spectrum Archive EE directory (/opt/ibm/ltfsee/bin) to the PATH variable.
The ltfsee_config command-line tool is shown in the following example and includes the following options:
ltfsee_config -m <mode> [options]
-m
<mode> and [options] can be one of the following items:
 – CLUSTER [-c]
Creates an IBM Spectrum Archive EE cluster environment and configures a user-selected IBM Spectrum Scale (GPFS) file system to be managed by IBM Spectrum Archive EE or used for its metadata. The user must run this command one time from one of the IBM Spectrum Archive EE nodes. Running the command a second time modifies the file system settings of the cluster.
 
 
 – ADD_CTRL_NODE [-g | -c | -a]
Adds the local node as the control (MMM) node to a tape library in an IBM Spectrum Archive EE environment, and configures its drives and node group. There can be one or two control nodes per tape library.
 
Note: Even if you configure two control nodes per tape library, you still only run ADD_CTRL_NODE once per tape library.
 – ADD_NODE [-g | -c| -a]
Adds the local node (as a non-control node) to a tape library, and configures its drives and node group. You choose whether the node becomes a redundant control node.
 – SET_CTRL_NODE
Configures or reconfigures one or two control nodes and selects the node that will be active at the next start of IBM Spectrum Archive EE.
 – UPDATE_FS_INFO
Applies the current IBM Spectrum Scale (GPFS) file system information to the IBM Spectrum Archive EE configuration.
 – REMOVE_NODE [-N <node_id>] [-f]
Removes the node and the drives configured for that node from the configuration.
 – REMOVE_NODEGROUP -l <library> -G <removed_nodegroup>
Removes the node group that is no longer used.
 – INFO
Shows the current configuration of this cluster.
 – LIST_LIBRARIES
Shows the serial numbers of the tape libraries that are configured in the cluster.
 – REPLACE_LIBRARY [-b]
Sets the serial number detected by the node to that of the configured library.
 – LIST_MOVE_POOLS
Shows the pool translation table.
 – PREPARE_MOVE_POOL -p <pool_name> -s <source_library> -d <destination_library> [-G <node_group>] [-b]
Prepares the pool translation table information for pool relocations between libraries.
 – CANCEL_MOVE_POOL -p <pool_name> -s <source_library> [-b]
Cancels the PREPARE_MOVE_POOL operation for pool translation.
 – ACTIVATE_MOVE_POOL -p <pool_name> -s <source_library> -d <destination_library> [-b]
Activates the pool that was relocated to a different library.
 – RECREATE_STATESAVE
Deletes and reinitializes the entire statesave. Running this command removes all history and running task information.
Options:
 – -a
Assigns the IP address of the Admin node name to the control node (MMM) or to the local node. If -a is not used, the IP address of the Daemon node name is assigned.
 – -c
Checks and shows the cluster or node configuration without configuring or modifying it.
 – -g
Assigns the node to a node group that is selected or specified by the user. If -g is not used, the node is added to the default node group, which is named G0 and is created if it does not already exist.
 – -G
Specifies the node group to assign to the pool in the destination library during translation between libraries, or the node group to be removed.
 – -N
Removes a non-local node by specifying its node ID. If -N is not used, the local node is removed.
 – -f
Forces node removal. If -f is not used, an attempt to remove a control node fails and the configuration remains unchanged. When a control node is removed by using -f, other nodes from the same library and the drives that are configured for those nodes are also removed. To avoid removing multiple nodes, consider first setting another configured non-control node from the same library as the control node (SET_CTRL_NODE).
 
Important: When the active control node is removed by using the -f option, the library and pool information that is stored in the internal database is invalidated. If files that are migrated only to that library remain in the system, recalls of those files are no longer possible.
 – -b
Skips restarting the HSM daemon as a post process of the operation.
 – -p
Specifies the name of the pool to be relocated to a different library.
 – -P
Specifies the directory path that stores the SOBAR Support for System Migration result.
 – -s
Specifies the name of the source library for a pool relocation procedure.
 – -d
Specifies the destination library for a pool relocation procedure.
 – -l
Specifies the library from which to remove a node group with the -G option.
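The following invocation sketch shows a few read-only uses of the utility that do not change the configuration. It assumes that /opt/ibm/ltfsee/bin is in the PATH, as noted at the beginning of this section.
# show the current configuration of this cluster
ltfsee_config -m INFO
# check the cluster and file system configuration without modifying it
ltfsee_config -m CLUSTER -c
# list the serial numbers of the tape libraries that are configured in the cluster
ltfsee_config -m LIST_LIBRARIES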
5.2.2 Configuring a single node cluster
Before you begin this procedure, ensure that all of the tasks that are described in 5.1, “Configuration prerequisites” on page 98 are met. Figure 5-2 shows a single-node configuration that is described in this section.
Figure 5-2 IBM Spectrum Archive single-node configuration
The steps in this section must be performed only on one node of an IBM Spectrum Archive EE cluster environment. If you plan to have only one IBM Spectrum Archive EE node, this is a so-called single-node cluster setup.
If you plan to set up a multiple-node cluster environment for IBM Spectrum Archive EE, this configuration mode must be performed once, on one node of your choice in the cluster environment. All other nodes must be added afterward. To do so, see 5.2.3, “Configuring a multiple-node cluster” on page 112.
To configure a single-node cluster for IBM Spectrum Archive EE, complete the following steps:
1. Log on to the operating system as a root user.
2. Start GPFS (if it is not already started) by running the following command:
# mmstartup -a
3. Mount the GPFS file system (if it is not already mounted) by running the following command:
# mmmount all
4. Start the IBM Spectrum Archive EE configuration utility with the -m CLUSTER option by running the following command and answering the prompted questions:
# ltfsee_config -m CLUSTER
Example 5-2 shows a successful run of the ltfsee_config -m CLUSTER command during the initial IBM Spectrum Archive EE configuration on the lab setup that was used for this book. In this example, the process stops and does not proceed to ADD_CTRL_NODE, although you can answer y to continue directly to ADD_CTRL_NODE (step 5).
Example 5-2 Run the ltfsee_config -m CLUSTER command
[root@ltfsml1 ~]# /opt/ibm/ltfsee/bin/ltfsee_config -m CLUSTER
CLUSTER mode starts .
 
## 1. Check whether the cluster is already created ##
Cluster is not configured, configuring the cluster.
 
## 2. Check prerequisite on cluster ##
Cluster name: ltfsml2-ltfsml1.tuc.stglabs.ibm.com
ID: 12003238441805965800
Successfully validated the prerequisites.
 
## 3. List file systems in the cluster ##
Retrieving IBM Spectrum Scale (GPFS) file systems...
** Select a file system for storing IBM Spectrum Archive Enterprise Edition configuration and internal data.
Input the corresponding number and press Enter
or press q followed by Enter to quit.
 
File system
1. /dev/gpfs Mount point(/ibm/gpfs) DMAPI(Yes)
q. Quit
 
Input number > 1
 
** Select file systems to configure for IBM Spectrum Scale (GPFS) file system for Space Management.
Input the corresponding numbers and press Enter
or press q followed by Enter to quit.
Press a followed by Enter to select all file systems.
Multiple file systems can be specified using comma or white space delimiters.
 
File system
1. /dev/gpfs Mount point(/ibm/gpfs)
a. Select all file systems
q. Quit
 
Input number > 1
 
## 4. Configure Space Management ##
Disabling unnecessary daemons...
Editing Space Management Client settings...
Restarting Space Management service...
Terminating dsmwatchd.............
Terminating dsmwatchd.............
Starting dsmmigfs.............................
Configured space management.
 
## 5. Add selected file systems to the Space Management ##
Added the selected file systems to the space management.
 
## 6. Store the file systems configuration and dispatch it to all nodes ##
Storing the file systems configuration...
Copying ltfsee_config.filesystem file...
Stored the cluster configuration and dispatched the configuration file.
 
## 7. Create metadata directories and configuration parameters file ##
Created metadata directories and configuration parameters file.
 
CLUSTER mode completed.
 
Then do you want to perform the ADD_CTRL_NODE mode? [Y/n]: n
 
Important: During the first run of the ltfsee_config -m CLUSTER command, if you see the following error:
No file system is DMAPI enabled.
At least one file system has to be DMAPI enabled to use IBM Spectrum Archive Enterprise Edition.
Enable DMAPI of more than one IBM Spectrum Scale (GPFS) file systems and try again.
Ensure that DMAPI is turned on correctly, as described in 5.1.4, “Preparing the IBM Spectrum Scale file system for IBM Spectrum Archive EE” on page 104. You can use the following command sequence to enable DMAPI support for your GPFS file system (here the GPFS file system name that is used is gpfs):
# mmumount gpfs
mmumount: Unmounting file systems ...
# mmchfs gpfs -z yes
# mmmount gpfs
mmmount: Mounting file systems ...
5. Run the IBM Spectrum Archive EE configuration utility by running the following command and answering the prompted questions:
# ltfsee_config -m ADD_CTRL_NODE
Example 5-3 shows the successful run of the ltfsee_config -m ADD_CTRL_NODE command during initial IBM Spectrum Archive EE configuration on the lab setup that was used for this book.
Example 5-3 Run the ltfsee_config -m ADD_CTRL_NODE command
[root@ltfsml1 ~]# /opt/ibm/ltfsee/bin/ltfsee_config -m ADD_CTRL_NODE
ADD_CTRL_NODE mode starts .
 
## 1. Check whether the cluster is already created ##
Cluster is already created and configuration file ltfsee_config.filesystem exists.
 
## 2. Check prerequisite on node ##
Successfully validated the prerequisites.
 
## 3. IBM Spectrum Scale (GPFS) Configuration for Performance Improvement ##
Setting worker1Threads=400
Setting dmapiWorkerThreads=64
Configured IBM Spectrum Scale (GPFS) performance related settings.
 
## 4. Configure Space Management ##
Disabling unnecessary daemons...
Editing Space Management Client settings...
Restarting Space Management service...
Terminating dsmwatchd.............
Terminating dsmwatchd.............
Starting dsmmigfs.............................
Configured space management.
 
## 5. Add this node to a tape library ##
 
Number of logical libraries with assigned control node: 0
Number of logical libraries available from this node: 1
Number of logical libraries available from this node and with assigned control node: 0
 
** Select the tape library from the following list
and input the corresponding number. Then, press Enter.
 
Model Serial Number
1. 3576-MTL 000001300228_LLC
q. Return to previous menu
 
Input Number > 1
Input Library Name (alpha numeric or underscore, max 16 characters) > lib_ltfsml1
Added this node (ltfsml1.tuc.stglabs.ibm.com, node id 2) to library lib_ltfsml1 as its control node.
 
## 6. Add this node to a node group ##
Added this node (ltfsml1.tuc.stglabs.ibm.com, node id 2) to node group G0.
 
## 7. Add drives to this node ##
 
** Select tape drives from the following list.
Input the corresponding numbers and press Enter
or press q followed by Enter to quit.
Multiple tape drives can be specified using comma or white space delimiters.
 
Model Serial Number
1. ULT3580-TD6 1013000655
2. ULT3580-TD6 1013000688
3. ULT3580-TD6 1013000694
a. Select all tape drives
q. Exit from this Menu
 
Input Number > a
Selected drives: 1013000655:1013000688:1013000694.
Added the selected drives to this node (ltfsml1.tuc.stglabs.ibm.com, node id 2).
## 8. Configure LE+ component ##
Creating mount point...
Mount point folder '/ltfs' exists.
Use this folder for the LE+ component mount point as LE+ component assumes this folder.
Configured LE+ component.
## 9. Enabling system log ##
Restarting rsyslog...
System log (rsyslog) is enabled for IBM Spectrum Archive Enterprise Edition.
 
ADD_CTRL_NODE mode completed.
To summarize, EE Node 1 must run ltfsee_config -m CLUSTER and ltfsee_config -m ADD_CTRL_NODE to complete this single-node configuration.
If you are configuring multiple nodes for IBM Spectrum Archive EE, continue to 5.2.3, “Configuring a multiple-node cluster” on page 112.
5.2.3 Configuring a multiple-node cluster
To add nodes to form a multiple-node cluster configuration after the first node is configured, complete this task. With the release of IBM Spectrum Archive EE V1.2.4.0, a redundant control node can be set for failover scenarios.
When configuring any multiple-node clusters, set a secondary node as a redundant control node for availability features. The benefits of having redundancy are explained in 6.7, “IBM Spectrum Archive EE automatic node failover” on page 153.
Figure 5-3 shows a multiple-node cluster configuration that is described in this section.
Figure 5-3 IBM Spectrum Archive multiple-node cluster configuration
Before configuring more nodes, ensure that all tasks that are described in 5.1, “Configuration prerequisites” on page 98 are completed and that the first node of the cluster environment is configured, as described in 5.2.2, “Configuring a single node cluster” on page 108.
To configure another node for a multi-node cluster setup for IBM Spectrum Archive EE, complete the following steps:
1. Log on to the operating system as a root user.
2. Start GPFS (if it is not already started) by running the following command:
# mmstartup -a
3. Mount the GPFS file system on all nodes in the IBM Spectrum Scale cluster (if it is not already mounted) by running the following command:
# mmmount all -a
4. Start the IBM Spectrum Archive EE configuration utility with the -m ADD_NODE option by running the following command and answering the prompted questions:
# /opt/ibm/ltfsee/bin/ltfsee_config -m ADD_NODE
 
Important: This step must be performed on all nodes, except for the first node that was configured in 5.2.2, “Configuring a single node cluster” on page 108.
Example 5-4 shows how to add a secondary node and set it as a redundant control node by running ltfsee_config -m ADD_NODE. In step 5 of the command output, after you select the library to add the node to, a prompt asks whether to make the node a redundant control node. Enter y to make the second node a redundant control node. Only two nodes per library can be control nodes. If more than two nodes are added to the cluster, enter n for each additional node.
Example 5-4 Adding secondary node as a redundant control node
[root@ltfsml2 ~]# ltfsee_config -m ADD_NODE
The EE configuration script is starting: /opt/ibm/ltfsee/bin/ltfsee_config -m ADD_NODE
ADD_NODE mode starts .
 
## 1. Check to see if the cluster is already created ##
The cluster is already created and the configuration file ltfsee_config.filesystem exists.
 
## 2. Check prerequisite on node ##
Successfully validated the prerequisites.
 
## 3. IBM Spectrum Scale (GPFS) Configuration for Performance Improvement ##
Setting workerThreads=512
Setting dmapiWorkerThreads=64
Configured IBM Spectrum Scale (GPFS) preformance related settings.
 
## 4. Configure space management ##
Disabling unnecessary daemons...
Editing Space Management Client settings...
Deactivating failover operations on the node.
Restarting Space Management service...
Stopping the HSM service.
Terminating dsmwatchd.............
Starting the HSM service.
Starting dsmmigfs..................................
Activating failover operations on the node.
Configured space management.
 
## 5. Add this node to a tape library ##
 
The number of logical libraries with the assigned control node: 2
The number of logical libraries available from this node: 1
The number of logical libraries available from this node and with assigned control node: 1
 
** Select the tape library from the following list
and input the corresponding number. Then press Enter.
 
Library id Library name Control node
1. 0000013FA0520411 ltfsee_lib 9.11.120.198
q. Exit from this Menu
 
Input Number > 1
Add this node as a control node for control node redundancy(y/n)?
 
Input >y
The node ltfsml2(9.11.120.201) has been added as a control node for control node redundancy
Added this node (ltfsml2, node id 2) to library ltfsee_lib.
 
## 6. Add this node to a node group ##
Added this node (ltfsml2, node id 2) to node group G0.
 
## 7. Add drives to this node ##
 
** Select tape drives from the following list.
Input the corresponding numbers and press Enter
or press 'q' followed by Enter to quit.
Multiple tape drives can be specified using comma or white space delimiters.
 
Model Serial Number
1. ULT3580-TD5 1068093078
2. ULT3580-TD5 1068093084
a. Select all tape drives
q. Exit from this Menu
 
Input Number > a
Selected drives: 1068093078:1068093084.
Added the selected drives to this node (ltfsml2, node id 2).
 
## 8. Configure the LE+ component ##
Creating mount point...
Mount point folder '/ltfs' exists.
Use this folder for the LE+ component mount point as LE+ component assumes this folder.
Former saved configuration file exists which holds the following information:
=== difference /etc/ltfs.conf.local.rpmsave from /etc/ltfs.conf.local ===
 
=== end of difference ===
Do you want to use the saved configuration (y/n)?
Input > y
The LE+ component configuration is restored from a saved configuration.
Configured the LE+ component.
 
ADD_NODE mode completed.
To summarize, you ran the following configuration options on EE Node 1 in 5.2.2, “Configuring a single node cluster” on page 108:
ltfsee_config -m CLUSTER
ltfsee_config -m ADD_CTRL_NODE
For each additional IBM Spectrum Archive node in EE Node Group 1, run the ltfsee_config -m ADD_NODE command. For example, in Figure 5-3 on page 112, you must run ltfsee_config -m ADD_NODE on both EE Node 2 and EE Node 3.
If you require multiple tape library attachments, go to 5.2.4, “Configuring a multiple-node cluster with two tape libraries” on page 115.
5.2.4 Configuring a multiple-node cluster with two tape libraries
Starting with IBM Spectrum Archive V1.2, IBM Spectrum Archive supports the Multiple Tape Library Attachment feature in a single IBM Spectrum Scale cluster. This feature allows for data replication to pools in separate libraries for more data resiliency, and allows for total capacity expansion beyond a single library limit.
The second tape library can be the same tape library model as the first tape library or a different tape library model. These two tape libraries can be connected to an IBM Spectrum Scale cluster in a single site, or they can be placed in metro distance (less than 300 km) locations through IBM Spectrum Scale synchronous mirroring (stretched cluster).
For more information about synchronous mirroring by using IBM Spectrum Scale replication, see IBM Documentation.
 
Important: Stretched cluster is available for distances shorter than 300 km. For longer distances, the Active File Management (AFM) feature of IBM Spectrum Scale should be used with IBM Spectrum Archive EE. AFM is used with two different IBM Spectrum Scale clusters, with one instance of IBM Spectrum Archive EE at each site. For more information about IBM Spectrum Scale AFM support, see 2.2.5, “Active File Management” on page 30.
To add nodes to form a multiple-node cluster configuration with two tape libraries after the first node is configured, complete this task. Figure 5-4 shows the configuration with two tape libraries.
Figure 5-4 IBM Spectrum Archive multiple-node cluster configuration across two tape libraries
Before configuring more nodes, ensure that all tasks that are described in 5.1, “Configuration prerequisites” on page 98 are completed and that the first node of the cluster environment is configured, as described in 5.2.2, “Configuring a single node cluster” on page 108.
To configure the nodes at the other location for a multiple-node two-tape library cluster setup for IBM Spectrum Archive EE, complete the following steps:
1. Run the IBM Spectrum Archive EE configuration utility by running the following command and answering the prompted questions:
# /opt/ibm/ltfsee/bin/ltfsee_config -m ADD_CTRL_NODE
Using Figure 5-4 as an example, the ltfsee_config -m ADD_CTRL_NODE command is run on EE Node 4.
2. Run the IBM Spectrum Archive EE configuration utility on all the remaining EE nodes at the other location by running the following command and answering the prompted questions:
# /opt/ibm/ltfsee/bin/ltfsee_config -m ADD_NODE
Using Figure 5-4 as an example, the ltfsee_config -m ADD_NODE command is run on EE Node 5 and EE Node 6.
5.2.5 Modifying a multiple-node configuration for control node redundancy
If you are upgrading to IBM Spectrum Archive EE V1.3.0.0 from a previous version and have a multiple-node configuration with no redundant control nodes, you must manually set a redundant control node. See 4.3.2, “Installing, upgrading, or uninstalling IBM Spectrum Archive EE” on page 78 for how to perform upgrades. To modify the configuration to set a secondary node as a redundant control node, IBM Spectrum Archive EE must not be running.
Run eeadm cluster stop to stop IBM Spectrum Archive EE and the LE+ component. After IBM Spectrum Archive EE stops, run ltfsee_config -m SET_CTRL_NODE to modify the configuration and add a redundant control node. A minimal command sketch of this sequence follows.
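The following minimal sketch shows the stop-and-reconfigure sequence, run as root on a control node. The pidof check is optional; it confirms that the MMM process ended before the reconfiguration starts.
# stop IBM Spectrum Archive EE and confirm that the MMM process ended
eeadm cluster stop
pidof mmm     # no output is expected
# reconfigure the control nodes
ltfsee_config -m SET_CTRL_NODE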
Example 5-5 shows the output of ltfsee_config -m SET_CTRL_NODE when a redundant control node is created. In this example, the cluster has two libraries connected, and a redundant control node is set for only one of the two libraries. Repeat the same steps and select the second library to create a redundant control node for that library.
Example 5-5 Setting node to be redundant control node
[root@ltfsml1 ~]# ltfsee_config -m SET_CTRL_NODE
The EE configuration script is starting: /opt/ibm/ltfsee/bin/ltfsee_config -m SET_CTRL_NODE
SET_CTRL_NODE mode starts .
 
## 1. Check to see if the cluster is already created ##
The cluster is already created and the configuration file ltfsee_config.filesystem exists.
## 2. Control node configuration.
 
** Select a library to set control nodes.
 
Libraries
1:0000013FA0520411
2:0000013FA0520412
q.Quit
 
Input number >1
Set the control nodes for the library 0000013FA0520411.
 
** Select 1 or 2 nodes for redundant control nodes from the following list.
They can be specified using comma or white space delimiters.
Nodes marked [x] are the current redundant configured nodes.
 
Nodes
1:[x]ltfsml1
2:[_]ltfsml2
q.Quit
 
Input number >1 2
 
## 3. Select control node to be active ##
The following nodes are selected as redundant nodes.
Select a node that will be active in the next LTFS-EE run.
 
Nodes
1:ltfsml1
2:ltfsml2
q.Quit
 
Input number >1
 
The node ebisu(9.11.120.198) has been set to be active for library ltfsee_lib
After successfully setting up redundant control nodes for each library in the cluster, start IBM Spectrum Archive EE by running the eeadm cluster start command. Then, run eeadm node list to verify that each node started properly and is available. You should also see two control nodes per library. Example 5-6 shows the output from a multiple-node configuration with two tape libraries.
Example 5-6 eeadm node list
[root@ltfsml1 ~]# eeadm node list
Node ID  State      Node IP       Drives  Ctrl Node    Library      Node Group  Host Name
4        Available  9.11.120.224  2       Yes          ltfsee_lib2  G0          ltfsml4
3        Available  9.11.120.207  2       Yes(Active)  ltfsee_lib2  G0          ltfsml3
2        Available  9.11.120.201  2       Yes          ltfsee_lib1  G0          ltfsml2
1        Available  9.11.120.198  2       Yes(Active)  ltfsee_lib1  G0          ltfsml1
5.3 First-time start of IBM Spectrum Archive EE
To start IBM Spectrum Archive EE the first time, complete the following steps:
1. Check that the following embedded, customized IBM Tivoli® Storage Manager for Space Management (HSM) client components are running on each IBM Spectrum Archive EE node:
# ps -ef|grep dsm
 
If HSM is already running on the system, the output of this command shows the dsm processes, as shown in Example 5-7. If no output is shown, HSM is not running.
For information about how to start HSM, see 6.2.3, “Hierarchical Space Management” on page 138.
 
 
2. Start the IBM Spectrum Archive EE program by running the following command:
/opt/ibm/ltfsee/bin/eeadm cluster start
 
Important: If the eeadm cluster start command does not return after several minutes, it might be either because tapes are being unloaded or because the firewall is running. The firewall service must be disabled on the IBM Spectrum Archive EE nodes. For more information, see 4.3.2, “Installing, upgrading, or uninstalling IBM Spectrum Archive EE” on page 78.
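If a firewall is active on an IBM Spectrum Archive EE node, the following hedged sketch shows one way to stop and disable it on a Red Hat based system. The firewalld service name is an assumption for Red Hat Enterprise Linux 7 and later; the service name can differ in your environment.
# check whether the firewall service is running
systemctl status firewalld
# stop and disable the firewall service on this IBM Spectrum Archive EE node
systemctl stop firewalld
systemctl disable firewalld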
Example 5-7 shows all of the steps and the output when IBM Spectrum Archive EE was started the first time. During the first start, you might discover a warning message, as shown in the following example:
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 8f:56:95:fe:9c:eb:37:7f:95:b1:21:b9:45:d6:91:6b.
Are you sure you want to continue connecting (yes/no)?
This message is normal during the first start and you can easily continue by entering yes and pressing Enter.
Example 5-7 Start IBM Spectrum Archive EE the first time
[root@ltfsml1 ~]# ps -afe | grep dsm
root 14351 1 0 15:33 ? 00:00:01 /opt/tivoli/tsm/client/hsm/bin/dsmwatchd nodetach
root 15131 30301 0 16:33 pts/0 00:00:00 grep --color=auto dsm
root 17135 1 0 15:33 ? 00:00:00 dsmrecalld
root 17160 17135 0 15:33 ? 00:00:00 dsmrecalld
root 17161 17135 0 15:33 ? 00:00:00 dsmrecalld
 
[root@ltfsml1 ~]# eeadm cluster start
Library name: libb, library serial: 0000013400190402, control node (ltfsee_md) IP address: 9.11.244.46.
Starting - sending a startup request to libb.
Starting - waiting for startup completion : libb.
Starting - opening a communication channel : libb.
.
Starting - waiting for getting ready to operate : libb.
........................................................................
Started the IBM Spectrum Archive EE services for library libb with good status.
[root@ltfsml1 ~]# eeadm node list
Node ID State Node IP Drives Ctrl Node Library Node Group Host Name
1 available 9.11.244.46 4 yes(active) libb G0 ltfsml1
Now, IBM Spectrum Archive EE is started and ready for basic usage. For further handling, managing, and operations of IBM Spectrum Archive EE (such as creating pools, adding and formatting tapes, and setting up migration policies), see Chapter 6, “Managing daily operations of IBM Spectrum Archive Enterprise Edition” on page 129.
5.3.1 Configuring IBM Spectrum Archive EE with IBM Spectrum Scale AFM
This section walks through how to set up IBM Spectrum Archive EE and IBM Spectrum Scale AFM to create either a Centralized Archive Repository, or an Asynchronous Archive Replication solution. The steps shown in this section assume that the user has already installed and configured IBM Spectrum Archive EE.
If IBM Spectrum Archive EE has not been previously installed and configured, set up AFM first, then follow the instructions in Chapter 4, “Installing IBM Spectrum Archive Enterprise Edition” on page 73 to install IBM Spectrum Archive EE, and then configure it as described in Chapter 5, “Configuring IBM Spectrum Archive Enterprise Edition” on page 97. If you perform the tasks in that order, you can skip this section. See 7.10.3, “IBM Spectrum Archive EE migration policy with AFM” on page 246 for information about creating a migration policy on cache nodes.
 
Important: Starting with IBM Spectrum Archive EE V1.2.3.0, IBM Spectrum Scale AFM is supported. This support is limited to only one cache mode, independent writer (IW).
For more information about configuring IBM Spectrum Scale AFM, see the AFM documentation at IBM Documentation.
5.3.2 Configuring a Centralized Archive Repository solution
A Centralized Archive Repository solution consists of having IBM Spectrum Archive EE at just the home cluster of IBM Spectrum Scale AFM. The steps in this section show how to set up a home site with IBM Spectrum Archive EE, and how to set up the cache site and link them. For more information on use cases, see Figure 8-14 on page 291.
Steps 1 - 5 demonstrate how to set up an IBM Spectrum Scale AFM home cluster and start IBM Spectrum Archive EE. Steps 6 - 9 show how to set up the IW caches for IBM Spectrum Scale AFM cache clusters:
1. If IBM Spectrum Scale is not already active and GPFS is not already mounted, start IBM Spectrum Scale and wait until the cluster becomes active. Then, mount the file system if it is not set to mount automatically using the commands in Example 5-8.
Example 5-8 Starting and mounting IBM Spectrum Scale and GPFS file system
[root@ltfseehomesrv ~]# mmstartup -a
Tue Mar 21 14:37:57 MST 2017: mmstartup: Starting GPFS ...
[root@ltfseehomesrv ~]# mmgetstate -a
 
Node number Node name GPFS state
------------------------------------------
1 ltfseehomesrv    arbitrating
[root@ltfseehomesrv ~]# mmgetstate -a
 
Node number Node name GPFS state
------------------------------------------
1 ltfseehomesrv    active
[root@ltfseehomesrv ~]# mmmount all -a
Tue Mar 21 14:40:36 MST 2017: mmmount: Mounting file systems ...
[root@ltfseehomesrv ~]# systemctl start hsm
[root@ltfseehomesrv ~]# dsmmigfs start
IBM Spectrum Protect
Command Line Space Management Client Interface
Client Version 8, Release 1, Level 0.0
Client date/time: 03/22/2017 13:41:36
(c) Copyright by IBM Corporation and other(s) 1990, 2016. All Rights Reserved.
 
[root@ltfseehomesrv ~]# dsmmigfs enablefailover
IBM Spectrum Protect
Command Line Space Management Client Interface
Client Version 8, Release 1, Level 0.0
Client date/time: 03/22/2017 13:41:41
(c) Copyright by IBM Corporation and other(s) 1990, 2016. All Rights Reserved.
 
Automatic failover is enabled on this node in mode ENABLED.
 
Note: Step 2 assumes that the user has already created their file set and linked it to the GPFS file system. The following examples use IWhome as the home file set.
2. After IBM Spectrum Scale is active and the GPFS file system is mounted, edit the NFS exports file (/etc/exports) to include the new file set. It is important that the no_root_squash, sync, and rw arguments are used. Example 5-9 shows example content of the exports file for file set IWhome.
Example 5-9 Contents of an exports file
[root@ltfseehomesrv ~]# cat /etc/exports
/ibm/glues/IWhome *(rw,sync,no_root_squash,nohide,insecure,no_subtree_check,fsid=125)
 
Note: The fsid in the exports file needs to be a unique number different than any other export clause within the exports file.
3. After the exports file has been modified to include the file set, start the NFS service. Example 5-10 shows an example of starting and checking the NFS service; a short export verification sketch follows the example.
Example 5-10 Starting and checking the status of NFS service
[root@ltfseehomesrv ~]# systemctl start nfs
[root@ltfseehomesrv ~]# systemctl status nfs
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Active: active (exited) since Tue 2017-03-21 15:58:43 MST; 2s ago
Process: 1895 ExecStopPost=/usr/sbin/exportfs -f (code=exited, status=0/SUCCESS)
Process: 1891 ExecStopPost=/usr/sbin/exportfs -au (code=exited, status=0/SUCCESS)
Process: 1889 ExecStop=/usr/sbin/rpc.nfsd 0 (code=exited, status=0/SUCCESS)
Process: 10062 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
Process: 10059 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 10062 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/nfs-server.service
 
Mar 21 15:58:43 ltfseehomesrv.tuc.stglabs.ibm.com systemd[1]: Starting NFS server and services...
Mar 21 15:58:43 ltfseehomesrv.tuc.stglabs.ibm.com systemd[1]: Started NFS server and services.
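A quick way to verify the export is shown in the following hedged sketch. The exportfs -ra command re-reads /etc/exports, and showmount queries the local NFS server; the IWhome path from Example 5-9 should appear in the export list.
# re-export all entries from /etc/exports (safe to run after editing the file)
exportfs -ra
# list the exports that this NFS server advertises
showmount -e localhost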
4. After NFS has started properly, the final step to configure IBM Spectrum Scale AFM at the home cluster is to enable the exported path. Run mmafmconfig enable <path-to-fileset> to enable the exported file set. Example 5-11 shows the execution of the mmafmconfig command with the IWhome file set.
Example 5-11 Execution of mmafmconfig enable <path-to-fileset>
[root@ltfseehomesrv ~]# mmafmconfig enable /ibm/glues/IWhome/
[root@ltfseehomesrv ~]#
5. After the file set has been enabled for AFM, start IBM Spectrum Archive EE, if it has not been started previously, by running eeadm cluster start.
Run the next four steps on the designated cache nodes:
6. Before starting IBM Spectrum Scale on the cache clusters, determine which nodes will become the gateway nodes and then run the mmchnode --gateway -N <node1,node2,etc..> command to create gateway nodes. Example 5-12 shows the output of running mmchnode on one cache node.
Example 5-12 Setting gateway nodes for cache clusters
[root@ltfseecachesrv ~]# mmchnode --gateway -N ltfseecachesrv
Tue Mar 21 16:49:16 MST 2017: mmchnode: Processing node ltfseecachesrv.tuc.stglabs.ibm.com
[root@ltfseecachesrv ~]#
7. After all the gateway nodes have been set, start IBM Spectrum Scale and mount the file system if it is not done automatically, as shown in Example 5-13:
a. mmstartup -a
b. mmgetstate -a
c. mmmount all -a (optional, only if the GPFS file system is not mounted automatically)
Example 5-13 Starting and mounting IBM Spectrum Scale and GPFS file system
[root@ltfseecachesrv ~]# mmstartup -a
Tue Mar 21 14:37:57 MST 2017: mmstartup: Starting GPFS ...
[root@ltfseecachesrv ~]# mmgetstate -a
 
Node number Node name GPFS state
------------------------------------------
1 ltfseecachesrv   arbitrating
[root@ltfseecachesrv ~]# mmgetstate -a
 
Node number Node name GPFS state
------------------------------------------
1 ltfseecachsrv1   active
[root@ltfseecachesrv ~]# mmmount all -a
Tue Mar 21 14:40:36 MST 2017: mmmount: Mounting file systems ...
[root@ltfseecachesrv ~]#
8. After IBM Spectrum Scale has been started and the GPFS file system is mounted, then create the cache fileset by using mmcrfileset with the afmTarget, afmMode, and inode-space parameters. Example 5-14 shows the execution of mmcrfileset to create a cache fileset.
Example 5-14 Creating a cache fileset that targets the home fileset
[root@ltfseecachesrv ~]# mmcrfileset gpfs iwcache -p afmmode=iw -p afmtarget=ltfseehomesrv:/ibm/glues/IWhome --inode-space=new
Fileset iwcache created with id 1 root inode 4194307.
9. After the fileset is created, it can be linked to a directory in the GPFS file system by running the mmlinkfileset <device> <fileset> -J <gpfs file system/fileset name> command. Example 5-15 shows output of running mmlinkfileset.
Example 5-15 Linking the GPFS fileset to a directory on the GPFS file system
[root@ltfseecachesrv glues]# mmlinkfileset gpfs iwcache -J /ibm/glues/iwcache
Fileset iwcache linked at /ibm/glues/iwcache
Steps 6 - 9 need to be run on each cache cluster that will be linked to the home cluster. After completing these steps, IBM Spectrum Scale AFM and IBM Spectrum Archive EE are set up on the home cluster and IBM Spectrum Scale AFM is set up on each cache cluster. The system is ready to perform centralized archiving and caching.
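As an optional check on a cache cluster, the following hedged sketch shows one way to confirm that the cache fileset can reach the home export. It assumes the device name gpfs and the fileset name iwcache from the previous examples; the exact output format of mmafmctl getstate varies by IBM Spectrum Scale release.
# show the AFM cache state (for example, Active or Inactive) and the queue length
mmafmctl gpfs getstate -j iwcache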
5.3.3 Configuring an Asynchronous Archive Replication solution
An Asynchronous Archive Replication solution consists of having IBM Spectrum Archive EE at both the home and cache cluster for IBM Spectrum Scale AFM. This section demonstrates how to set up IBM Spectrum Scale AFM with IBM Spectrum Archive EE to create an Asynchronous Archive Replication solution. For more information on use cases, see 8.11.2, “Asynchronous archive replication” on page 291.
Steps 1 - 5 demonstrate how to set up an IBM Spectrum Scale AFM home cluster and start IBM Spectrum Archive EE. Steps 6 - 11 demonstrate how to set up the cache clusters, and steps 12 - 15 demonstrate how to reconfigure the IBM Spectrum Archive EE configuration to work with IBM Spectrum Scale AFM.
1. If IBM Spectrum Scale is not already active and GPFS is not already mounted, start IBM Spectrum Scale and wait until the cluster becomes active. Then, mount the file system if it is not set to mount automatically by using the commands in Example 5-16.
Example 5-16 Starting and mounting IBM Spectrum Scale and GPFS file system
[root@ltfseehomesrv ~]# mmstartup -a
Tue Mar 21 14:37:57 MST 2017: mmstartup: Starting GPFS ...
[root@ltfseehomesrv ~]# mmgetstate -a
 
Node number Node name GPFS state
------------------------------------------
1 ltfseehomesrv    arbitrating
[root@ltfseehomesrv ~]# mmgetstate -a
 
Node number Node name GPFS state
------------------------------------------
1 ltfseehomesrv    active
[root@ltfseehomesrv ~]# mmmount all -a
Tue Mar 21 14:40:36 MST 2017: mmmount: Mounting file systems ...
[root@ltfseehomesrv ~]# systemctl start hsm
[root@ltfseehomesrv ~]# dsmmigfs start
IBM Spectrum Protect
Command Line Space Management Client Interface
Client Version 8, Release 1, Level 0.0
Client date/time: 03/22/2017 13:41:36
(c) Copyright by IBM Corporation and other(s) 1990, 2016. All Rights Reserved.
 
[root@ltfseehomesrv ~]# dsmmigfs enablefailover
IBM Spectrum Protect
Command Line Space Management Client Interface
Client Version 8, Release 1, Level 0.0
Client date/time: 03/22/2017 13:41:41
(c) Copyright by IBM Corporation and other(s) 1990, 2016. All Rights Reserved.
 
Automatic failover is enabled on this node in mode ENABLED.
 
Note: Step 2 assumes that the user has already created their fileset and linked it to the GPFS file system. The following examples use IWhome as the home fileset.
2. After IBM Spectrum Scale is active and the GPFS file system is mounted, edit the NFS exports file (/etc/exports) to include the new fileset. It is important that the no_root_squash, sync, and rw arguments are used. Example 5-17 shows example content of the exports file for fileset IWhome.
Example 5-17 Contents of an exports file
[root@ltfseehomesrv ~]# cat /etc/exports
/ibm/glues/IWhome *(rw,sync,no_root_squash,nohide,insecure,no_subtree_check,fsid=125)
 
Note: The fsid in the exports file needs to be a unique number different than any other export clause within the exports file.
3. After the exports file has been modified to include the fileset, start the NFS service. Example 5-18 shows an example of starting and checking the NFS service.
Example 5-18 Starting and checking the status of NFS service
[root@ltfseehomesrv ~]# systemctl start nfs
[root@ltfseehomesrv ~]# systemctl status nfs
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Active: active (exited) since Tue 2017-03-21 15:58:43 MST; 2s ago
Process: 1895 ExecStopPost=/usr/sbin/exportfs -f (code=exited, status=0/SUCCESS)
Process: 1891 ExecStopPost=/usr/sbin/exportfs -au (code=exited, status=0/SUCCESS)
Process: 1889 ExecStop=/usr/sbin/rpc.nfsd 0 (code=exited, status=0/SUCCESS)
Process: 10062 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
Process: 10059 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 10062 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/nfs-server.service
 
Mar 21 15:58:43 ltfseehomesrv.tuc.stglabs.ibm.com systemd[1]: Starting NFS server and services...
Mar 21 15:58:43 ltfseehomesrv.tuc.stglabs.ibm.com systemd[1]: Started NFS server and services.
4. After NFS has properly started, the final step to configure IBM Spectrum Scale AFM at the home cluster is to enable the exported path. Run mmafmconfig enable <path-to-fileset> to enable the exported fileset. Example 5-19 shows the execution of the mmafmconfig command with the IWhome fileset.
Example 5-19 Execution of mmafmconfig enable <path-to-fileset>
[root@ltfseehomesrv ~]# mmafmconfig enable /ibm/glues/IWhome/
[root@ltfseehomesrv ~]#
5. After the fileset has been enabled for AFM, start IBM Spectrum Archive EE, if it has not been started previously, by running eeadm cluster start.
After the home cluster is set up and an NFS export directory is enabled for IBM Spectrum Scale AFM, steps 6 - 11 demonstrate how to set up a Spectrum Scale AFM IW cache fileset at a cache cluster and connect the cache’s fileset with the home’s fileset. Steps 12 - 15 show how to modify IBM Spectrum Archive EE’s configuration to allow cache file sets.
6. If IBM Spectrum Archive EE is active, properly shut it down by using the commands in Example 5-20.
Example 5-20 Shutting down IBM Spectrum Archive EE
[root@ltfseecachesrv ~]# eeadm cluster stop
Library name: libb, library serial: 0000013400190402, control node (ltfsee_md) IP address: 9.11.244.46.
Stopping - sending request and waiting for the completion.
.
Stopped the IBM Spectrum Archive EE services for library libb.
[root@ltfseecachesrv ~]# pidof mmm
[root@ltfseecachesrv ~]# umount /ltfs
[root@ltfseecachesrv ~]# pidof ltfs
[root@ltfseecachesrv ~]#
7. If IBM Spectrum Scale is active, properly shut it down by using the commands in Example 5-21.
Example 5-21 Shutting down IBM Spectrum Scale
[root@ltfseecachesrv ~]# dsmmigfs disablefailover
IBM Spectrum Protect
Command Line Space Management Client Interface
Client Version 8, Release 1, Level 0.0
Client date/time: 03/22/2017 13:31:14
(c) Copyright by IBM Corporation and other(s) 1990, 2016. All Rights Reserved.
 
Automatic failover is disabled on this node.
[root@ltfseecachesrv ~]# dsmmigfs stop
IBM Spectrum Protect
Command Line Space Management Client Interface
Client Version 8, Release 1, Level 0.0
Client date/time: 03/22/2017 13:31:19
(c) Copyright by IBM Corporation and other(s) 1990, 2016. All Rights Reserved.
 
[root@ltfseecachesrv ~]# systemctl stop hsm
[root@ltfseecachesrv ~]# mmumount all -a
Wed Mar 22 13:31:44 MST 2017: mmumount: Unmounting file systems ...
[root@ltfseecachesrv ~]# mmshutdown -a
Wed Mar 22 13:31:56 MST 2017: mmshutdown: Starting force unmount of GPFS file systems
Wed Mar 22 13:32:01 MST 2017: mmshutdown: Shutting down GPFS daemons
ltfseecachesrv.tuc.stglabs.ibm.com: Shutting down!
ltfseecachesrv.tuc.stglabs.ibm.com: 'shutdown' command about to kill process 24101
ltfseecachesrv.tuc.stglabs.ibm.com: Unloading modules from /lib/modules/3.10.0-229.el7.x86_64/extra
ltfseecachesrv.tuc.stglabs.ibm.com: Unloading module mmfs26
ltfseecachesrv.tuc.stglabs.ibm.com: Unloading module mmfslinux
Wed Mar 22 13:32:10 MST 2017: mmshutdown: Finished
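Optionally, confirm that GPFS is down on all nodes before you continue. The mmgetstate -a command reports the GPFS state of each node; the node number and name in the following sketch are illustrative:
[root@ltfseecachesrv ~]# mmgetstate -a

 Node number  Node name        GPFS state
------------------------------------------
       1      ltfseecachesrv   down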
8. With IBM Spectrum Archive EE and IBM Spectrum Scale both shut down, set the gateway nodes, if they were not set when IBM Spectrum Scale was configured, by using the command in Example 5-22.
Example 5-22 Setting a gateway node
[root@ltfseecachesrv ~]# mmchnode --gateway -N ltfseecachesrv
Tue Mar 21 16:49:16 MST 2017: mmchnode: Processing node ltfseecachesrv.tuc.stglabs.ibm.com
[root@ltfseecachesrv ~]#
9. Properly start IBM Spectrum Scale by using the commands in Example 5-23.
Example 5-23 Starting IBM Spectrum Scale
[root@ltfseecachesrv ~]# mmstartup -a
Wed Mar 22 13:41:02 MST 2017: mmstartup: Starting GPFS ...
[root@ltfseecachesrv ~]# mmmount all -a
Wed Mar 22 13:41:22 MST 2017: mmmount: Mounting file systems ...
[root@ltfseecachesrv ~]# systemctl start hsm
[root@ltfseecachesrv ~]# dsmmigfs start
IBM Spectrum Protect
Command Line Space Management Client Interface
Client Version 8, Release 1, Level 0.0
Client date/time: 03/22/2017 13:41:36
(c) Copyright by IBM Corporation and other(s) 1990, 2016. All Rights Reserved.
 
[root@ltfseecachesrv ~]# dsmmigfs enablefailover
IBM Spectrum Protect
Command Line Space Management Client Interface
Client Version 8, Release 1, Level 0.0
Client date/time: 03/22/2017 13:41:41
(c) Copyright by IBM Corporation and other(s) 1990, 2016. All Rights Reserved.
 
Automatic failover is enabled on this node in mode ENABLED.
10. Create the independent-writer fileset by using the command in Example 5-24.
Example 5-24 Creating an IW fileset
[root@ltfseecachesrv ~]# mmcrfileset gpfs iwcache -p afmmode=independent-writer -p afmtarget=ltfseehomesrv:/ibm/glues/IWhome --inode-space=new
Fileset iwcache created with id 1 root inode 4194307.
[root@ltfseecachesrv ~]#
11. Link the fileset to a directory on the node’s GPFS file system by using the command in Example 5-25.
Example 5-25 Linking an IW fileset
[root@ltfseecachesrv ~]# mmlinkfileset gpfs iwcache -J /ibm/glues/iwcache
Fileset iwcache linked at /ibm/glues/iwcache
[root@ltfseecachsrv ~]#
IBM Spectrum Scale AFM is now configured with a working home cluster and IW cache cluster.
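You can verify the relationship from the cache cluster by displaying the AFM attributes of the fileset. The output of the following command (sketched here with the gpfs file system and the iwcache fileset from the previous examples) should show the home target ltfseehomesrv:/ibm/glues/IWhome and the independent-writer mode:
[root@ltfseecachesrv ~]# mmlsfileset gpfs iwcache --afm -L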
12. With IBM Spectrum Archive EE still shut down, obtain the metadata and HSM file systems that are used by IBM Spectrum Archive EE by using the command in Example 5-26.
Example 5-26 Obtaining metadata and HSM file system(s)
[root@ltfseecachesrv ~]# ltfsee_config -m INFO
The EE configuration script is starting: /opt/ibm/ltfsee/bin/ltfsee_config -m INFO
INFO mode starts .
 
## 1. Check to see if the cluster is already created ##
The cluster is already created and the configuration file ltfsee_config.filesystem exists.
Metadata Filesystem:
/ibm/glues
HSM Filesystems:
/ibm/glues
Library: Name=ltfsee_lib1, S/N=0000013400190402
Node Group: Name=ltfseecachesrv
Node: ltfseecachesrv.tuc.stglabs.ibm.com
Drive: S/N=00078D00BC, Attribute='mrg'
Drive: S/N=00078D00BD, Attribute='mrg'
Pool: Name=copy_cache, ID=902b097a-7a34-4847-a346-0e6d97444a21
Tape: Barcode=DV1982L7
Tape: Barcode=DV1985L7
Pool: Name=primary_cache, ID=14adb6cf-d1f5-46ef-a0bb-7b3881bdb4ec
Tape: Barcode=DV1983L7
Tape: Barcode=DV1984L7
13. Modify IBM Spectrum Archive EE’s configuration by using the command that is shown in Example 5-27 with the same file systems that were recorded in step 12.
Example 5-27 Modify IBM Spectrum Archive EE configuration for IBM Spectrum Scale AFM
[root@ltfseecachesrv ~]# ltfsee_config -m UPDATE_FS_INFO
The EE configuration script is starting: /opt/ibm/ltfsee/bin/ltfsee_config -m UPDATE_FS_INFO
UPDATE_FS_INFO mode starts .
## Step-1. Check to see if the cluster is already created ##
The cluster is already created and the configuration file ltfsee_config.filesystem exists.
Successfully validated the prerequisites.
** Select file systems to configure for the IBM Spectrum Scale (GPFS) file system for Space Management.
Input the corresponding numbers and press Enter
or press 'q' followed by Enter to quit.
Press a followed by Enter to select all file systems.
Multiple file systems can be specified using comma or white space delimiters.
File system
1. /dev/flash Mount point(/flash)
2. /dev/gpfs Mount point(/ibm/gpfs)
a. Select all file systems
q. Quit
Input number > a
## Step-2. Add selected file systems to Space Management ##
Added the selected file systems to the space management.
## Step-3. Store the file systems configuration and dispatch it to all nodes ##
Storing the file systems configuration.
Copying ltfsee_config.filesystem file.
Stored the cluster configuration and dispatched the configuration file.
Disabling runtime AFM file state checking.
UPDATE_FS_INFO mode completed.
14. Start IBM Spectrum Archive EE by using the commands in Example 5-28.
Example 5-28 Start IBM Spectrum Archive EE
[root@ltfseecachesrv ~]# eeadm cluster start
Library name: libb, library serial: 0000013400190402, control node (ltfsee_md) IP address: 9.11.244.46.
Starting - sending a startup request to libb.
Starting - waiting for startup completion : libb.
Starting - opening a communication channel : libb.
.
Starting - waiting for getting ready to operate : libb.
........................................................................
Started the IBM Spectrum Archive EE services for library libb with good status.
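Optionally, after the services start, confirm that the nodes report an available status by running eeadm node list (only the command is sketched here because the column layout depends on your IBM Spectrum Archive EE version):
[root@ltfseecachesrv ~]# eeadm node list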
15. When IBM Spectrum Archive EE starts, the AFMSKIPUNCACHEDFILES option in the /opt/tivoli/tsm/client/ba/bin/dsm.sys file should be set to yes. Verify the setting by using the command in Example 5-29. If the option is not set correctly, edit the file so that AFMSKIPUNCACHEDFILES is set to yes.
Example 5-29 Validating AFMSKIPUNCACHEDFILES is set to yes
[root@ltfseecachsrv ~]# grep AFMSKIPUNCACHEDFILES /opt/tivoli/tsm/client/ba/bin/dsm.sys
AFMSKIPUNCACHEDFILES YES
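If the option is missing, add it to dsm.sys. The following sketch shows only the relevant line within a server stanza; the stanza name is illustrative, and the other options in your file remain unchanged:
SErvername server_a
   AFMSKIPUNCACHEDFILES YES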
After successfully completing these steps, IBM Spectrum Archive EE and IBM Spectrum Scale AFM are set up at both the home and cache clusters. They can now be used as an Asynchronous Archive Replication solution.
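As a quick functional check of the asynchronous replication (a sketch that assumes the iwcache fileset and an illustrative file name; how soon the file appears at home depends on the AFM queue), create a file in the cache fileset, check the fileset state, and then confirm that the file arrives under the home export:
[root@ltfseecachesrv ~]# touch /ibm/glues/iwcache/afm_test_file
[root@ltfseecachesrv ~]# mmafmctl gpfs getstate -j iwcache
[root@ltfseehomesrv ~]# ls /ibm/glues/IWhome/afm_test_file
/ibm/glues/IWhome/afm_test_file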
 