Operations
In this chapter, the day-to-day management of the IBM Spectrum Archive Enterprise Edition (IBM Spectrum Archive EE) environment is described.
This chapter includes the following topics:
 
Note: In our lab setup for writing this book, we used a Red Hat-based Linux system. The screen captures within this chapter are based on the Version 1 Release 2 of the product. Even though the steps you perform are the same, you might see slightly different output responses on your screen depending on your version and release of the product.
7.1 Overview
The following terms are specific to IBM Spectrum Archive EE operations and are used in this chapter:
Migration: The movement of files from the IBM Spectrum Scale (GPFS) file system on disk to IBM Linear Tape File System (LTFS) tape cartridges, which leaves behind stub files.
Premigration: The movement of files from GPFS file systems on disk to LTFS tape cartridges without replacing them with stub files on the GPFS file system. Identical copies of the files exist on the GPFS file system and in LTFS storage.
Recall: The movement of migrated files from tape cartridges back to the originating GPFS file system on disk, which is the reverse of migration.
Reconciliation: The process of synchronizing a GPFS file system with the contents of an LTFS tape cartridge and removing old and obsolete objects from the tape cartridge. You must run reconciliation when a GPFS file is deleted, moved, or renamed.
Reclamation: The process of defragmenting a tape cartridge. The space on a tape cartridge that is occupied by deleted files is not reused during normal LTFS operations; new data is always written after the last index on tape. Reclamation is similar to the process of the same name in Tivoli Storage Manager (from the IBM Spectrum Protect family) in that all active files are consolidated onto a second tape cartridge, which improves overall tape usage.
Retrieval: The process of triggering IBM Spectrum Archive EE to retrieve information about physical resources from the tape library. Retrieval is scheduled to occur automatically at regular intervals, but it can also be run manually.
Tape check: The process of checking a tape for errors when it is added to a tape cartridge pool.
Tape repair: The process of repairing any errors that are found on the tape during the tape check process.
Import: The addition of an LTFS tape cartridge to IBM Spectrum Archive EE.
Export: The removal of an LTFS tape cartridge from IBM Spectrum Archive EE.
7.1.1 IBM Spectrum Archive EE command summaries
Use IBM Spectrum Archive EE commands to configure IBM Spectrum Archive EE tape cartridge pools and perform IBM Spectrum Archive EE administrative tasks. The commands use the ltfsee <options> format.
The following ltfsee command options are available. All options, except info, can be run only with root user permissions:
 
ltfsee cleanup
Use this command to clean up scan results that are not removed after a scan or session finishes. For example, you can use this command to clean up any scan numbers that remain after a user runs an ltfsee migrate command and then closes the command window before the command completes.
ltfsee drive
Use this command to add a tape drive to or remove a tape drive from the IBM Spectrum Archive EE system.
ltfsee export
Use this command to remove one or more tape cartridges from the IBM Spectrum Archive EE system by removing files on them from the IBM Spectrum Scale namespace. After you use this command, run the ltfsee tape move ieslot command to move a tape cartridge to the I/O station in the library. The ltfsee export command has two export modes: Normal Export and Offline Export. Normal Export removes the metadata on the GPFS file system and removes the tape from the storage pool. Offline Export keeps the metadata on the file system and keeps the tape in the storage pool as Offline.
ltfsee fsopt
Use this command to query or update the file system level settings: stub size, read start recalls, and preview size.
ltfsee import
Use this command to add one or more tape cartridges to the IBM Spectrum Archive EE system and reinstantiate the files in the IBM Spectrum Scale namespace.
ltfsee info
Use this command to list current information about LTFS EE jobs and scans, and resource inventory for tape cartridges, drives, nodes, tape cartridge pools, and files.
ltfsee migrate
Use this command to migrate files to tape cartridge pools.
ltfsee pool
Use this command to create, delete, and modify IBM Spectrum Archive EE tape cartridge pools, and to format tape cartridges before they are added to the tape cartridge pools.
ltfsee premigrate
Use this command to premigrate files to tape cartridge pools.
ltfsee rebuild
Use this command to rebuild a GPFS file system into the specified directory with the files and file system objects that are found on the specified tapes.
ltfsee recall
Use this command to perform selective bulk recalls from the tape cartridges.
ltfsee recall_deadline
Use this command to display or set the recall deadline value that is applicable to all transparent recalls.
ltfsee reclaim
Use this command to free tape space by removing unreferenced files and unreferenced content from tape cartridges in a tape cartridge pool. The ltfsee reconcile command is a prerequisite for efficient tape reclamation.
ltfsee recover
Use this command to recover files from tape or to remove a tape from IBM Spectrum Archive EE when the tape is in the Critical state or the Write Fenced state.
ltfsee reconcile
Use this command to perform reconciliation tasks between the IBM Spectrum Scale namespace and the LTFS namespace. This command is a prerequisite for efficient tape reclamation.
ltfsee repair
Use this command to repair a file or file system object by changing its state to Resident.
ltfsee retrieve
Use this command to synchronize data in the IBM Spectrum Archive EE inventory with the tape library. The IBM Spectrum Archive EE system synchronizes data automatically, but it might be useful to explicitly trigger this operation if the configuration changes, for example, if a drive is added or removed.
ltfsee save
Use this command to save file system objects (symbolic links, empty files, and empty directories) to tape cartridge pools.
ltfsee start
Use this command to start the IBM Spectrum Archive EE system and define which GPFS file system your IBM Spectrum Archive EE system uses to store configuration information. The LE+ and HSM components must be running before you can use this command. You can run the ltfsee start command on any IBM Spectrum Archive EE node in the cluster.
 
Important: If the ltfsee start command does not return after several minutes, it might be because the firewall is running or tapes are being unmounted from the drives. The firewall service must be disabled on the IBM Spectrum Archive EE nodes. For more information, see 4.3.2, “Installing, upgrading, or uninstalling IBM Spectrum Archive EE” on page 68.
ltfsee status
Use this command to identify the node where the multi-tape management module (MMM) service is started.
ltfsee stop
Use this command to stop the IBM Spectrum Archive EE system. You can run this command on any IBM Spectrum Archive EE node in the cluster.
ltfsee tape move
Use this command to move one or more tape cartridges to an IE slot in the I/O station or to a home slot.
ltfsee threshold
Use this command to specify the limit at which migrations are preferred over recalls.
cleanup_dm_sess
Use this command to remove stale DMAPI sessions.
7.1.2 Using the command-line interface
The IBM Spectrum Archive EE system provides a command-line interface (CLI) that supports the automation of administrative tasks, such as starting and stopping the system, monitoring its status, and configuring tape cartridge pools. The CLI is the primary method for administrators to manage IBM Spectrum Archive EE. There is no GUI available as of this writing.
In addition, the CLI is used by the IBM Spectrum Scale mmapplypolicy command to trigger migrations or premigrations. When this action occurs, the mmapplypolicy command calls IBM Spectrum Archive EE when an IBM Spectrum Scale scan occurs, and passes the file name of the file that contains the scan results and the name of the target tape cartridge pool.
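For illustration, the following is a minimal policy sketch of this integration, not the product's shipped policy. The pool specification (primary_ltfsml1@lib_ltfsml1), the file system mount point (/ibm/glues), the threshold values, and the age condition are assumptions based on the examples in this chapter; verify the OPTS format that your release of IBM Spectrum Archive EE expects before using it.
cat > /tmp/migrate.policy <<'EOF'
/* Hypothetical IBM Spectrum Archive EE migration policy; adjust names to your environment. */
define(exclude_list, (PATH_NAME LIKE '%/.ltfsee/%' OR PATH_NAME LIKE '%/.SpaceMan/%'))
RULE EXTERNAL POOL 'ltfsee_pool'
     EXEC '/opt/ibm/ltfsee/bin/ltfsee'       /* IBM Spectrum Archive EE interface script */
     OPTS '-p primary_ltfsml1@lib_ltfsml1'   /* target tape cartridge pool (assumed format) */
RULE 'migrate_cold_files' MIGRATE FROM POOL 'system'
     THRESHOLD(80,60)                        /* start when 80% full, migrate down to 60% */
     TO POOL 'ltfsee_pool'
     WHERE NOT (exclude_list)
       AND (CURRENT_TIMESTAMP - ACCESS_TIME) > INTERVAL '30' DAYS
EOF
mmapplypolicy /ibm/glues -P /tmp/migrate.policy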
The ltfsee command uses the following syntax:
ltfsee <options>
 
Reminder: All of the command examples use the command without the full file path name because we added the IBM Spectrum Archive EE directory (/opt/ibm/ltfsee/bin) to the PATH variable.
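For example, adding a line similar to the following to the root user's shell profile (the exact profile file depends on your environment) makes the ltfsee command available without typing the full path:
export PATH=$PATH:/opt/ibm/ltfsee/bin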
7.2 Status information
This section describes the process that is used to determine whether each of the major components of IBM Spectrum Archive EE is running correctly. For more information about troubleshooting IBM Spectrum Archive EE, see Chapter 10, “Troubleshooting” on page 249.
The components should be checked in the order that is shown here because a stable, active GPFS file system is a prerequisite for starting IBM Spectrum Archive EE.
7.2.1 IBM Spectrum Scale
The following IBM Spectrum Scale commands are used to obtain cluster state information:
The mmdiag command obtains basic information about the state of the GPFS daemon.
The mmgetstate command obtains the state of the GPFS daemon on one or more nodes.
The mmlscluster and mmlsconfig commands show detailed information about the GPFS cluster configuration.
This section describes how to obtain GPFS daemon state information by running the GPFS command mmgetstate. For more information about the other GPFS commands, see the General Parallel File System Version 4 Release 1.0.4 Advanced Administration Guide, SC23-7032-01, or the IBM Spectrum Scale V4.2.1: Advanced Administration Guide, which is available from the IBM Knowledge Center.
The node on which the mmgetstate command is run must have the GPFS file system mounted. The node must also be able to run remote shell commands on any other node in the GPFS/IBM Spectrum Scale cluster without the use of a password and without producing any extraneous messages.
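You can verify the remote shell requirement with a simple test such as the following, where htohru9 is the node name that is used in the examples in this section. The command should print the date immediately, without prompting for a password and without any extra output:
ssh htohru9 date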
Example 7-1 shows how to get status about the GPFS/IBM Spectrum Scale daemon on one or more nodes.
Example 7-1 Check the GPFS/IBM Spectrum Scale status
[root@ltfs97 ~]# mmgetstate -a
Node number Node name GPFS state
------------------------------------------
1 htohru9 down
The -a argument shows the state of the GPFS/IBM Spectrum Scale daemon on all nodes in the cluster.
 
Permissions: Retrieving the status for GPFS/IBM Spectrum Scale requires root user permissions.
The following GPFS/IBM Spectrum Scale states are recognized and shown by this command:
Active: GPFS/IBM Spectrum Scale is ready for operations.
Arbitrating: A node is trying to form a quorum with the other available nodes.
Down: GPFS/IBM Spectrum Scale daemon is not running on the node or is recovering from an internal error.
Unknown: Unknown value. The node cannot be reached or some other error occurred.
If the GPFS/IBM Spectrum Scale state is not active, attempt to start GPFS/IBM Spectrum Scale and check its status, as shown in Example 7-2.
Example 7-2 Start GPFS/IBM Spectrum Scale
[root@ltfs97 ~]# mmstartup -a
Tue Apr 2 14:41:13 JST 2013: mmstartup: Starting GPFS ...
[root@ltfs97 ~]# mmgetstate -a
Node number Node name GPFS state
------------------------------------------
1 htohru9 active
If the status is active, also check the GPFS/IBM Spectrum Scale mount status by running the command that is shown in Example 7-3.
Example 7-3 Check the GPFS/IBM Spectrum Scale mount status
[root@ltfs97 ~]# mmlsmount all
File system gpfs is mounted on 1 nodes.
The message confirms that the GPFS file system is mounted.
7.2.2 LTFS Library Edition Plus component
IBM Spectrum Archive EE constantly checks to see whether the LTFS Library Edition Plus (LE+) component is running. If the LTFS LE+ component is running correctly, you can see whether the LTFS file system is mounted by running the mount command or the df command, as shown in Example 7-4.
The LTFS LE+ component must be running on all EE nodes.
Example 7-4 Check the LTFS LE+ component status (running)
[root@ltfs97 ~]# df -m
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root
33805 5081 27007 16% /
tmpfs 1963 0 1963 0% /dev/shm
/dev/vda1 485 36 424 8% /boot
/dev/gpfs 153600 8116 145484 6% /ibm/glues
ltfs:/dev/IBMchanger8
2147483648 0 2147483648 0% /ltfs
If the LTFS LE+ component is not running, you do not see any LTFS file system mounted, as shown in Example 7-5.
Example 7-5 Check the LTFS LE+ component status (not running)
[root@ltfs97 ~]# df -m
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root
33805 5081 27007 16% /
tmpfs 1963 0 1963 0% /dev/shm
/dev/vda1 485 36 424 8% /boot
/dev/gpfs 153600 8116 145484 6% /ibm/glues
If the LTFS LE+ component is not running, Example 7-6 shows you how to start LTFS by using the tape library, which is defined in Linux as the device IBMchanger0.
Example 7-6 Starting the LTFS LE+ component
# ltfs /ltfs -o changer_devname=/dev/IBMchanger0
7230 LTFS14000I LTFS starting, LTFS version 2.1.6.0 (9910), log level 2
7230 LTFS14058I LTFS Format Specification version 2.2.0
7230 LTFS14104I Launched by "/opt/IBM/ltfs/bin/ltfs /ltfs -o changer_devname=/dev/IBMchanger0"
7230 LTFS14105I This binary is built for Linux (x86_64)
7230 LTFS14106I GCC version is 4.8.3 20140911 (Red Hat 4.8.3-9)
7230 LTFS17087I Kernel version: Linux version 3.10.0-327.el7.x86_64 ([email protected]) (gcc version 4.8.3 20140911 (Red Hat 4.8.3-9) (GCC) ) #1 SMP Thu Oct 29 17:29:29 EDT 2015 i386
7230 LTFS17089I Distribution: NAME="Red Hat Enterprise Linux Server"
7230 LTFS17089I Distribution: Red Hat Enterprise Linux Server release 7.2 (Maipo)
7230 LTFS17089I Distribution: Red Hat Enterprise Linux Server release 7.2 (Maipo)
7230 LTFS14064I Sync type is "unmount"
7230 LTFS17085I Plugin: Loading "ibmtape" driver
7230 LTFS17085I Plugin: Loading "unified" iosched
7230 LTFS17085I Plugin: Loading "ibmtape" changer
7230 LTFS17085I Plugin: Loading "ondemand" dcache
7230 LTFS17085I Plugin: Loading "memory" crepos
7230 LTFS11593I LTFS starts with a product license version (20130412_2702)
7230 LTFS12165I lin_tape version is 3.0.13
7230 LTFS12118I Changer identification is '3576-MTL '
7230 LTFS12162I Vendor ID is IBM
7230 LTFS12159I Firmware revision is 670G
7230 LTFS12160I Changer serial is 000001300026_LLF
7230 LTFS12196I IOCTL: INQUIRY PAGE -1056947426 returns -20501 (generic 22) 000001300026_LLF
7230 LTFS11578I Inactivating all drives (Skip to scan devices)
7230 LTFS16500I Cartridge repository plugin is initialized (memory, /ibm/gpfs/.ltfsee/meta/000001300026_LLF)
7230 LTFS11545I Rebuilding the cartridge inventory
7230 LTFS11627I Getting Inventory - 000001300026_LLF
7230 LTFS11629I Aqcuireing MoveLock - 000001300026_LLF
7230 LTFS11630I Aqcuired Move Lock (5) - 000001300026_LLF
7230 LTFS11628I Got Inventory - 000001300026_LLF
7230 LTFS11720I Built the cartridge inventory (0)
7230 LTFS14708I LTFS admin server version 2 is starting on port 7600
7230 LTFS14111I Initial setup completed successfully
7230 LTFS14112I Invoke 'mount' command to check the result of final setup
7230 LTFS14113I Specified mount point is listed if succeeded
 
Permissions: Starting the LTFS LE+ component requires root user permissions.
As suggested by the last two messages in Example 7-6, run the mount command and confirm that the LTFS mount point is listed.
7.2.3 Hierarchical Space Management
Hierarchical Space Management (HSM) must be running before you start IBM Spectrum Archive EE. You can verify that HSM is running by checking whether the watch daemon (dsmwatchd) and at least three recall daemons (dsmrecalld) are active. Query the operating system to verify that the daemons are active by running the command that is shown in Example 7-7.
Example 7-7 Check the HSM status by running ps
[root@ltfs97 /]# ps -ef|grep dsm
root 1355 1 0 14:12 ? 00:00:01 /opt/tivoli/tsm/client/hsm/bin/dsmwatchd nodetach
root 5657 1 0 14:41 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld
root 5722 5657 0 14:41 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld
root 5723 5657 0 14:41 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld
The dsmmigfs command also provides the status of HSM, as shown by the output in Example 7-8.
Example 7-8 Check the HSM status by using dsmmigfs
[root@cypress ~]# dsmmigfs query -detail
IBM Tivoli Storage Manager
Command Line Space Management Client Interface
Client Version 7, Release 1, Level 6.3
Client date/time: 10/18/2016 23:17:15
(c) Copyright by IBM Corporation and other(s) 1990, 2016. All Rights Reserved.
 
 
The local node has Node ID: 1
The failover environment is active on the local node.
The recall distribution is enabled.
 
File System Name: /ibm/gpfs
High Threshold: 90
Low Threshold: 80
Premig Percentage: 10
Quota: 227272
Stub Size: 0
Read Starts Recall: no
Preview Size: 0
Server Name: SERVER_A
Max Candidates: 100
Max Files: 0
Read Event Timeout: 600
Stream Seq: 0
Min Partial Rec Size: 0
Min Stream File Size: 0
MinMigFileSize: 0
Preferred Node: cypress.tuc.stglabs.ibm.com Node ID: 1
Owner Node: cypress.tuc.stglabs.ibm.com Node ID: 1
You can also verify that the GPFS file system (named gpfs in this example) is DMAPI enabled, which is required for it to be managed by HSM, by running the command that is shown in Example 7-9.
Example 7-9 Check GPFS file system
[root@ltfs97 /]# mmlsfs gpfs|grep DMAPI
-z Yes Is DMAPI enabled?
To manage a file system with IBM Spectrum Archive EE, the file system must be data management application programming interface (DMAPI) enabled. To place a file system under IBM Spectrum Archive EE management, run the ltfsee_config command, which is described in 6.2, “Configuring IBM Spectrum Archive EE” on page 110.
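If the output shows that DMAPI is not enabled, it can be enabled with the GPFS mmchfs command while the file system is unmounted on all nodes. The following is a hedged sketch that uses the file system name gpfs from the examples in this section:
mmumount gpfs -a     # the file system must be unmounted on all nodes first
mmchfs gpfs -z yes   # enable DMAPI for the file system
mmmount gpfs -a      # mount the file system again on all nodes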
 
Permissions: Starting HSM requires root user permissions.
If the HSM watch daemon (dsmwatchd) is not running, Example 7-10 shows you how to start it.
Example 7-10 Start the HSM watch daemon
[root@ltfsrl1 ~]# systemctl start hsm.service
[root@ltfsrl1 ~]# ps -afe | grep dsm
root 7687 1 0 08:46 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmwatchd nodetach
root 8405 6621 0 08:46 pts/1 00:00:00 grep --color=auto dsm
If the HSM recall daemons (dsmrecalld) are not running, Example 7-11 shows you how to start them.
Example 7-11 Start HSM
[root@ltfsml1 ~]# dsmmigfs start
IBM Tivoli Storage Manager
Command Line Space Management Client Interface
Client Version 7, Release 1, Level 4.80
Client date/time: 02/11/2016 10:51:11
(c) Copyright by IBM Corporation and other(s) 1990, 2015. All Rights Reserved.
If failover operations within the IBM Spectrum Scale cluster are required on the node, run the dsmmigfs enablefailover command after you run the dsmmigfs start command, as in the sketch that follows.
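For example, the sequence on a node where failover is wanted might look like the following sketch, which uses only the two commands described above:
dsmmigfs start            # start the HSM space management daemons on this node
dsmmigfs enablefailover   # allow this node to participate in HSM failover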
7.2.4 IBM Spectrum Archive EE
After IBM Spectrum Archive EE is started, you can retrieve details about the node that the multi-tape management module (MMM) service was started on by running the ltfsee status command. You can also use this command to determine whether the MMM service is running. The MMM is the module that manages configuration data and physical resources of IBM Spectrum Archive EE.
 
Permissions: Retrieving the status for the MMM service requires root user permissions.
If the MMM service is running correctly, you see a message similar to the message that is shown in Example 7-12.
Example 7-12 Check the IBM Spectrum Archive EE status
[root@ltfsml1 ~]# ltfsee status
Ctrl Node Status PID Library
9.11.121.227 Active 5574 lib_ltfsml2
9.11.121.122 Active 13662  lib_ltfsml1
If the MMM service is not running correctly, you see a message that is similar to the message that is shown in Example 7-13.
Example 7-13 Check the IBM Spectrum Archive EE status
[root@ltfsml1 ~]# ltfsee status
Ctrl Node Status PID Library
9.11.121.227 Inactive - lib_ltfsml2
9.11.121.122 Inactive - lib_ltfsml1
The Inactive status indicates that IBM Spectrum Archive EE is not running on that particular EE Control Node. In our example, IBM Spectrum Archive EE is not running and must be started. Example 7-14 shows how to start IBM Spectrum Archive EE.
Example 7-14 Start IBM Spectrum Archive EE
[root@ltfsml1 ~]# ltfsee start
Library name: lib_ltfsml2, library id: 000001300228_LLA, control node (MMM) IP address: 9.11.121.227.
GLESM401I(00253): Loaded the global configuration.
GLESM402I(00264): Created the Global Resource Manager.
GLESM403I(00279): Fetched the node groups from the Global Resource Manager.
GLESM404I(00287): Detected the IP address of the MMM (9.11.121.227).
GLESM405I(00298): Configured the node group (G0).
GLESM406I(00307): Created the unassigned list of the library resources.
GLESL536I(00074): Started the IBM Spectrum Archive EE service (MMM) for library lib_ltfsml2.
Library name: lib_ltfsml1, library id: 000001300228_LLC, control node (MMM) IP address: 9.11.121.122.
GLESM401I(00253): Loaded the global configuration.
GLESM402I(00264): Created the Global Resource Manager.
GLESM403I(00279): Fetched the node groups from the Global Resource Manager.
GLESM404I(00287): Detected the IP address of the MMM (9.11.121.122).
GLESM405I(00298): Configured the node group (G0).
GLESM406I(00307): Created the unassigned list of the library resources.
GLESL536I(00074): Started the IBM Spectrum Archive EE service (MMM) for library lib_ltfsml1.
 
Important: No automatic failover function for IBM Spectrum Archive EE is available. If the IBM Spectrum Scale node that is running the MMM service of IBM Spectrum Archive EE fails, the MMM service must be started manually.
Important: If the ltfsee start command does not return after several minutes, it might be because the firewall is running or tapes are being unmounted from the drives. The firewall service must be disabled on the IBM Spectrum Archive EE nodes. For more information, see 4.3.2, “Installing, upgrading, or uninstalling IBM Spectrum Archive EE” on page 68.
7.3 Upgrading components
The following sections describe the process that is used to upgrade IBM Spectrum Scale and other components of IBM Spectrum Archive EE.
7.3.1 IBM Spectrum Scale
Complete this task if you must update the version of IBM Spectrum Scale that is used with IBM Spectrum Archive EE.
Before any system upgrades or major configuration changes are made to your IBM Spectrum Scale cluster, review your IBM Spectrum Scale documentation and consult the IBM Spectrum Scale frequently asked question (FAQ) information that applies to your version of IBM Spectrum Scale. For more information about the IBM Spectrum Scale FAQ, see the Cluster products IBM Knowledge Center and select the IBM Spectrum Scale release that applies to your installation under the Cluster product libraries topic in the navigation pane.
To update IBM Spectrum Scale, complete the following steps:
1. Stop IBM Spectrum Archive EE by running the command that is shown in Example 7-15.
Example 7-15 Stop IBM Spectrum Archive EE
[root@ltfsml1 ~]# ltfsee stop
Library name: lib_ltfsml2, library id: 000001300228_LLA, control node (MMM) IP address: 9.11.121.227.
Stopped LTFS EE service (MMM) for library lib_ltfsml2.
Library name: lib_ltfsml1, library id: 000001300228_LLC, control node (MMM) IP address: 9.11.121.122.
Stopped LTFS EE service (MMM) for library lib_ltfsml1.
2. Run the pidof mmm command on all EE Control Nodes and wait until all MMM processes have terminated (a small wait loop sketch follows this step).
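A small shell loop such as the following sketch, run on each EE Control Node, waits until the MMM processes are gone. The same pattern, with pidof ltfs, can be used for step 6:
while pidof mmm > /dev/null; do
    sleep 5    # wait until no mmm process remains on this node
done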
3. Stop the Tivoli Storage Manager for Space Management HSM by running the command that is shown in Example 7-16.
Example 7-16 Stop HSM
[root@ltfsml1 ~]# dsmmigfs stop
IBM Tivoli Storage Manager
Command Line Space Management Client Interface
Client Version 7, Release 1, Level 4.80
Client date/time: 02/11/2016 10:57:52
(c) Copyright by IBM Corporation and other(s) 1990, 2015. All Rights Reserved.
This command must be run on every IBM Spectrum Archive EE node.
4. Stop the watch daemon by running the command that is shown in Example 7-17.
Example 7-17 Stop the watch daemon
[root@ltfsml1 ~]# systemctl stop hsm.service
This command must be run on every IBM Spectrum Archive EE node.
5. Unmount the library and stop the LTFS LE+ component by running the command that is shown in Example 7-18.
Example 7-18 Stop the LTFS LE+ component
[root@ltfs97 ~]# umount /ltfs
This command must be run on every IBM Spectrum Archive EE node.
6. Run the pidof ltfs command on all EE nodes and wait until all ltfs processes have terminated (the same wait-loop pattern shown after step 2 can be used with pidof ltfs).
7. Unmount GPFS by running the command that is shown in Example 7-19.
Example 7-19 Stop GPFS
[root@ltfs97 ~]# mmumount all
Tue Apr 16 23:43:29 JST 2013: mmumount: Unmounting file systems ...
If the mmumount all command output shows that the file system is still in use by other processes (as shown in Example 7-20), wait for those processes to finish and then run the mmumount all command again.
Example 7-20 Processes running prevent the unmounting of the GPFS file system
[root@ltfs97 ~]# mmumount all
Tue Apr 16 23:46:12 JST 2013: mmumount: Unmounting file systems ...
umount: /ibm/glues: device is busy.
(In some cases useful info about processes that use
the device is found by lsof(8) or fuser(1))
umount: /ibm/glues: device is busy.
(In some cases useful info about processes that use
the device is found by lsof(8) or fuser(1))
8. Shut down GPFS by running the command that is shown in Example 7-21.
Example 7-21 Shut down GPFS
[root@ltfs97 ~]# mmshutdown -a
Tue Apr 16 23:46:51 JST 2013: mmshutdown: Starting force unmount of GPFS file systems
Tue Apr 16 23:46:56 JST 2013: mmshutdown: Shutting down GPFS daemons
htohru9.ltd.sdl: Shutting down!
htohru9.ltd.sdl: 'shutdown' command about to kill process 3645
htohru9.ltd.sdl: Unloading modules from /lib/modules/2.6.32-220.el6.x86_64/extra
htohru9.ltd.sdl: Unloading module mmfs26
htohru9.ltd.sdl: Unloading module mmfslinux
htohru9.ltd.sdl: Unloading module tracedev
Tue Apr 16 23:47:03 JST 2013: mmshutdown: Finished
9. Download the IBM Spectrum Scale update from IBM Fix Central. Extract the IBM Spectrum Scale rpm files and install the updated rpm files by running the command that is shown in Example 7-22.
Example 7-22 Update IBM Spectrum Scale
rpm -Uvh *.rpm
10. Rebuild and install the IBM Spectrum Scale portability layer by running the command that is shown in Example 7-23.
Example 7-23 Rebuild GPFS
mmbuildgpl
11. Start GPFS by running the command that is shown in Example 7-24.
Example 7-24 Start GPFS
[root@ltfs97 ~]# mmstartup -a
Tue Apr 16 23:47:42 JST 2013: mmstartup: Starting GPFS ...
12. Mount the GPFS file system by running the command that is shown in Example 7-25.
Example 7-25 Mount GPFS file systems
[root@ltfs97 ~]# mmmount all
Tue Apr 16 23:48:09 JST 2013: mmmount: Mounting file systems ...
13. Mount the LTFS file system and start the LTFS LE+ component by running the command that is shown in Example 7-26.
Example 7-26 Start LTFS LE+ component
[root@ltfsml1 ~]# ltfs /ltfs -o changer_devname=/dev/IBMchanger0
7763 LTFS14000I LTFS starting, LTFS version 2.1.6.0 (9601), log level 2
7763 LTFS14058I LTFS Format Specification version 2.2.0
7763 LTFS14104I Launched by "ltfs /ltfs -o changer_devname=/dev/IBMchanger0"
7763 LTFS14105I This binary is built for Linux (x86_64)
7763 LTFS14106I GCC version is 4.8.3 20140911 (Red Hat 4.8.3-9)
7763 LTFS17087I Kernel version: Linux version 3.10.0-229.el7.x86_64 ([email protected]) (gcc version 4.8.3 20140911 (Red Hat 4.8.3-7) (GCC) ) #1 SMP Thu Jan 29 18:37:38 EST 2015 i386
7763 LTFS17089I Distribution: NAME="Red Hat Enterprise Linux Server"
7763 LTFS17089I Distribution: Red Hat Enterprise Linux Server release 7.1 (Maipo)
7763 LTFS17089I Distribution: Red Hat Enterprise Linux Server release 7.1 (Maipo)
7763 LTFS14064I Sync type is "unmount"
7763 LTFS17085I Plugin: Loading "ibmtape" driver
7763 LTFS17085I Plugin: Loading "unified" iosched
7763 LTFS17085I Plugin: Loading "ibmtape" changer
7763 LTFS17085I Plugin: Loading "ondemand" dcache
7763 LTFS17085I Plugin: Loading "memory" crepos
7763 LTFS11593I LTFS starts with a product license version (20130412_2702)
7763 LTFS12165I lin_tape version is 3.0.1
7763 LTFS12118I Changer identification is '3576-MTL '
7763 LTFS12162I Vendor ID is IBM
7763 LTFS12159I Firmware revision is 670G
7763 LTFS12160I Changer serial is 000001300228_LLC
7763 LTFS12196I IOCTL: INQUIRY PAGE -1056947426 returns -20501 (generic 22) 000001300228_LLC
7763 LTFS11578I Inactivating all drives (Skip to scan devices)
7763 LTFS16500I Cartridge repository plugin is initialized (memory, /ibm/gpfs/.ltfsee/meta/000001300228_LLC)
7763 LTFS11545I Rebuilding the cartridge inventory
7763 LTFS11627I Getting Inventory - 000001300228_LLC
7763 LTFS11629I Acquiring MoveLock - 000001300228_LLC
7763 LTFS11630I Acquired Move Lock (5) - 000001300228_LLC
7763 LTFS11628I Got Inventory - 000001300228_LLC
7763 LTFS11720I Built the cartridge inventory (0)
7763 LTFS14708I LTFS admin server version 2 is starting on port 7600
7763 LTFS14111I Initial setup completed successfully
7763 LTFS14112I Invoke 'mount' command to check the result of final setup
7763 LTFS14113I Specified mount point is listed if succeeded
In Example 7-26 on page 135, /dev/IBMchanger0 is the name of your library device. This command must be run on every IBM Spectrum Archive EE node.
14. Start the watch daemon by running the command that is shown in Example 7-27.
Example 7-27 Start the watch daemon
[root@ltfsml1 ~]# systemctl start hsm.service
This command must be run on every IBM Spectrum Archive EE node.
15. Start the HSM by running the command that is shown in Example 7-28.
Example 7-28 Start HSM
[root@ltfsml1 ~]# dsmmigfs start
IBM Tivoli Storage Manager
Command Line Space Management Client Interface
Client Version 7, Release 1, Level 4.80
Client date/time: 02/11/2016 11:06:39
(c) Copyright by IBM Corporation and other(s) 1990, 2015. All Rights Reserved.
This command must be run on every IBM Spectrum Archive EE node.
16. Start IBM Spectrum Archive EE by running the command that is shown in Example 7-29.
Example 7-29 Start IBM Spectrum Archive EE
[root@ltfsml1 ~]# ltfsee start
Library name: lib_ltfsml2, library id: 000001300228_LLA, control node (MMM) IP address: 9.11.121.227.
GLESM401I(00253): Loaded the global configuration.
GLESM402I(00264): Created the Global Resource Manager.
GLESM403I(00279): Fetched the node groups from the Global Resource Manager.
GLESM404I(00287): Detected the IP address of the MMM (9.11.121.227).
GLESM405I(00298): Configured the node group (G0).
GLESM406I(00307): Created the unassigned list of the library resources.
GLESL536I(00074): Started the IBM Spectrum Archive EE service (MMM) for library lib_ltfsml2.
Library name: lib_ltfsml1, library id: 000001300228_LLC, control node (MMM) IP address: 9.11.121.122.
GLESM401I(00253): Loaded the global configuration.
GLESM402I(00264): Created the Global Resource Manager.
GLESM403I(00279): Fetched the node groups from the Global Resource Manager.
GLESM404I(00287): Detected the IP address of the MMM (9.11.121.122).
GLESM405I(00298): Configured the node group (G0).
GLESM406I(00307): Created the unassigned list of the library resources.
GLESL536I(00074): Started the IBM Spectrum Archive EE service (MMM) for library lib_ltfsml1.
Check the status of each component. Optionally, you can check the status of each component as it is started, as described in 7.2, “Status information” on page 127.
7.3.2 LTFS LE+ component
For more information about how to upgrade the LTFS LE+ component, see 4.3.2, “Installing, upgrading, or uninstalling IBM Spectrum Archive EE” on page 68. Because the LTFS LE+ component is a component of IBM Spectrum Archive EE, it is upgraded as part of the IBM Spectrum Archive EE upgrade.
7.3.3 Hierarchical Space Management
For more information about how to upgrade HSM, see 4.3.2, “Installing, upgrading, or uninstalling IBM Spectrum Archive EE” on page 68. Because HSM is a component of IBM Spectrum Archive EE, it is upgraded as part of the IBM Spectrum Archive EE upgrade.
7.3.4 IBM Spectrum Archive EE
For more information about how to upgrade IBM Spectrum Archive EE, see 4.3.2, “Installing, upgrading, or uninstalling IBM Spectrum Archive EE” on page 68.
7.4 Starting and stopping IBM Spectrum Archive EE
This section describes how to start and stop IBM Spectrum Archive EE.
7.4.1 Starting IBM Spectrum Archive EE
You run the ltfsee start command to start the IBM Spectrum Archive EE system. The LTFS LE+ and HSM components must be running before you can use this command. You can run the ltfsee start command on any IBM Spectrum Archive EE node in the cluster.
For example, to start IBM Spectrum Archive EE, run the command that is shown in Example 7-30.
Example 7-30 Start IBM Spectrum Archive EE
[root@ltfsml1 ~]# ltfsee start
Library name: lib_ltfsml2, library id: 000001300228_LLA, control node (MMM) IP address: 9.11.121.227.
GLESM401I(00253): Loaded the global configuration.
GLESM402I(00264): Created the Global Resource Manager.
GLESM403I(00279): Fetched the node groups from the Global Resource Manager.
GLESM404I(00287): Detected the IP address of the MMM (9.11.121.227).
GLESM405I(00298): Configured the node group (G0).
GLESM406I(00307): Created the unassigned list of the library resources.
GLESL536I(00074): Started the IBM Spectrum Archive EE service (MMM) for library lib_ltfsml2.
Library name: lib_ltfsml1, library id: 000001300228_LLC, control node (MMM) IP address: 9.11.121.122.
GLESM401I(00253): Loaded the global configuration.
GLESM402I(00264): Created the Global Resource Manager.
GLESM403I(00279): Fetched the node groups from the Global Resource Manager.
GLESM404I(00287): Detected the IP address of the MMM (9.11.121.122).
GLESM405I(00298): Configured the node group (G0).
GLESM406I(00307): Created the unassigned list of the library resources.
GLESL536I(00074): Started the IBM Spectrum Archive EE service (MMM) for library lib_ltfsml1.
 
Important: If the ltfsee start command does not return after several minutes, it might be because the firewall is running or tapes are being unmounted from the drives. The firewall service must be disabled on the IBM Spectrum Archive EE nodes. For more information, see 4.3.2, “Installing, upgrading, or uninstalling IBM Spectrum Archive EE” on page 68.
You can confirm that IBM Spectrum Archive EE is running by referring to the steps in Example 7-12 on page 132 or by running the command in Example 7-31 on all EE Control Nodes to check running processes in Linux.
Example 7-31 Check the status of IBM Spectrum Archive EE by running the ps command
[root@ltfsml1 ~]# ps -afe | grep mmm
root 3736 1 0 11:08 ? 00:00:00 /opt/ibm/ltfsee/bin/mmm -b -l 000001300228_LLC
root 13675 7556 0 11:12 pts/7 00:00:00 grep --color=auto mmm
 
7.4.2 Stopping IBM Spectrum Archive EE
The ltfsee stop command stops the IBM Spectrum Archive EE system on all EE Control Nodes. To stop IBM Spectrum Archive EE, use the stop option, as shown in Example 7-32.
Example 7-32 Stop IBM Spectrum Archive EE
[root@ltfsml1 ~]# ltfsee stop
Library name: lib_ltfsml2, library id: 000001300228_LLA, control node (MMM) IP address: 9.11.121.227.
Stopped LTFS EE service (MMM) for library lib_ltfsml2.
Library name: lib_ltfsml1, library id: 000001300228_LLC, control node (MMM) IP address: 9.11.121.122.
Stopped LTFS EE service (MMM) for library lib_ltfsml1.
In some cases, you might see the GLESM054I informational message if there are active jobs on the job queue in IBM Spectrum Archive EE:
There are still jobs in progress. To execute please terminate a second time.
If you are sure that you want to stop IBM Spectrum Archive EE, run the ltfsee stop command a second time within 5 seconds, which stops any running IBM Spectrum Archive EE jobs abruptly.
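For example, a forced stop looks similar to the following sketch. Be aware that any jobs that are still running are terminated abruptly:
ltfsee stop   # returns GLESM054I because jobs are still in progress
ltfsee stop   # run again within 5 seconds to force the stop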
7.5 Tape library management
This section describes how to use ltfsee commands to add and remove tape drives and tape cartridges from your LTFS library.
7.5.1 Adding tape cartridges
This section describes how to add tape cartridges in IBM Spectrum Archive EE. An unformatted tape cartridge cannot be added to the IBM Spectrum Archive EE library. However, you can format a tape when you add it to a tape cartridge pool. The process of formatting a tape in LTFS creates the required LTFS partitions on the tape. For more information, see 1.5, “The Linear Tape File System tape format” on page 15.
After tape cartridges are added through the I/O station, or after they are inserted directly into the tape library, you might have to run an ltfsee retrieve command. First, run the ltfsee info tapes command. If the tape cartridges are missing from the output, run the ltfsee retrieve command, which synchronizes the data for these changes between the IBM Spectrum Archive EE system and the tape library. This synchronization normally occurs automatically; however, if a tape does not appear in the ltfsee info tapes command output, you can run ltfsee retrieve to force a rebuild of the inventory (a synchronization of the IBM Spectrum Archive EE inventory with the tape library's inventory).
Data tape cartridge
To add a tape cartridge (that was previously used by LTFS) to the IBM Spectrum Archive EE system, complete the following steps:
1. Insert the tape cartridge into the I/O station.
2. Run the ltfsee info tapes command to see whether your tape appears in the list, as shown in Example 7-33 (in this example, the -l option is used to limit the tapes to one tape library).
Example 7-33 Run the ltfsee info tapes command to check whether a tape cartridge must be synchronized
[root@ltfsml1 ~]# ltfsee info tapes -l lib_ltfsml1
Tape ID Status Type Capacity(GiB) Free(GiB) Unref(GiB) Pool Library Address Drive
IMN807L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4182 -
IMN806L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4181 -
IMN805L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4180 -
IMN838L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4175 -
IMN797L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4174 -
IMN722L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4173 -
IMN799L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4172 -
IMN802L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4169 -
IMN803L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4167 -
IMN798L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4170 -
IMN801L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4166 -
IMN809L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4165 -
IMN808L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4164 -
IMN795L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4163 -
IMN833L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4161 -
IMN727L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4159 -
IMN726L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4158 -
2MA433L5 Valid LTO5 1327 1159 0 primary_ltfsml1 lib_ltfsml1 4176 -
IMN834L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4160 -
IMN836L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4155 -
IMN721L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4168 -
IMN804L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4152 -
IMN792L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4151 -
2MA455L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4147 -
2MA454L5 Valid LTO5 1327 0 2 primary_ltfsml1 lib_ltfsml1 4145 -
IMN794L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4171 -
2MA452L5 Valid LTO5 1327 1327 0 primary_ltfsml1 lib_ltfsml1 4141 -
IMN837L5 Valid LTO5 1327 0 2 primary_ltfsml1 lib_ltfsml1 4150 -
IMN839L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4139 -
2MA444L5 Valid LTO5 1327 874 0 primary_ltfsml1 lib_ltfsml1 4134 -
IMN725L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4157 -
IMN724L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4142 -
IMN831L5 Valid LTO5 1327 0 0 primary_ltfsml1 lib_ltfsml1 4154 -
IMN790L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4136 -
2MA431L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4114 -
2MA430L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4113 -
2MA432L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4138 -
2MA445L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4135 -
2MA442L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4110 -
2MA419L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4109 -
2MA422L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4115 -
2MA418L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4108 -
2MA453L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4146 -
2MA443L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4118 -
2MA417L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4105 -
2MA420L5 Valid LTO5 1327 0 2 primary_ltfsml1 lib_ltfsml1 4178 -
2MA423L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4117 -
2MA429L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4107 -
2MA416L5 Valid LTO5 1327 835 0 primary_ltfsml1 lib_ltfsml1 4104 -
IMN791L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4140 -
IMN720L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4149 -
2MA428L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4106 -
2MA421L5 Valid LTO5 1327 0 2 primary_ltfsml1 lib_ltfsml1 4102 -
IMN835L5  Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4156 -
Tape cartridge 2FC143L5 is not in the list.
3. Because tape cartridge 2FC143L5 is not in the list, synchronize the data in the IBM Spectrum Archive EE inventory with the tape library by running the ltfsee retrieve command, as shown in Example 7-34.
Example 7-34 Synchronize the tape
[root@ltfsml1 ~]# ltfsee retrieve
Library name: lib_ltfsml2, library id: 000001300228_LLA, control node (MMM) IP address: 9.11.121.227.
GLESL036I(00188): Retrieve completed.
Library name: lib_ltfsml1, library id: 000001300228_LLC, control node (MMM) IP address: 9.11.121.122.
GLESL036I(00188): Retrieve completed.
4. Repeating the ltfsee info tapes command shows that the inventory was corrected, as shown in Example 7-35.
Example 7-35 Tape cartridge 2FC143L5 is synchronized
[root@ltfsml1 ~]# ltfsee info tapes -l lib_ltfsml1
Tape ID Status Type Capacity(GiB) Free(GiB) Unref(GiB) Pool Library Address Drive
IMN807L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4182 -
IMN806L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4181 -
IMN805L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4180 -
IMN838L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4175 -
IMN797L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4174 -
IMN722L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4173 -
IMN799L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4172 -
IMN802L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4169 -
IMN803L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4167 -
IMN798L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4170 -
IMN801L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4166 -
IMN809L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4165 -
IMN808L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4164 -
IMN795L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4163 -
IMN833L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4161 -
IMN727L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4159 -
IMN726L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4158 -
2MA433L5 Valid LTO5 1327 1159 0 primary_ltfsml1 lib_ltfsml1 4176 -
IMN834L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4160 -
IMN836L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4155 -
IMN721L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4168 -
IMN804L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4152 -
IMN792L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4151 -
2MA455L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4147 -
2MA454L5 Valid LTO5 1327 0 2 primary_ltfsml1 lib_ltfsml1 4145 -
IMN794L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4171 -
2MA452L5 Valid LTO5 1327 1327 0 primary_ltfsml1 lib_ltfsml1 4141 -
IMN837L5 Valid LTO5 1327 0 2 primary_ltfsml1 lib_ltfsml1 4150 -
IMN839L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4139 -
2MA444L5 Valid LTO5 1327 874 0 primary_ltfsml1 lib_ltfsml1 4134 -
IMN725L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4157 -
IMN724L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4142 -
IMN831L5 Valid LTO5 1327 0 0 primary_ltfsml1 lib_ltfsml1 4154 -
IMN790L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4136 -
2MA431L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4114 -
2MA430L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4113 -
2MA432L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4138 -
2MA445L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4135 -
2MA442L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4110 -
2MA419L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4109 -
2MA422L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4115 -
2MA418L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4108 -
2MA453L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4146 -
2MA443L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4118 -
2MA417L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4105 -
2MA420L5 Valid LTO5 1327 0 2 primary_ltfsml1 lib_ltfsml1 4178 -
2MA423L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4117 -
2MA429L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4107 -
2MA416L5 Valid LTO5 1327 835 0 primary_ltfsml1 lib_ltfsml1 4104 -
IMN791L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4140 -
IMN720L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4149 -
2MA428L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4106 -
2MA421L5 Valid LTO5 1327 0 2 primary_ltfsml1 lib_ltfsml1 4102 -
IMN835L5 Valid LTO5 1327 0 1 primary_ltfsml1 lib_ltfsml1 4156 -
2FC143L5 Unavailable LTO5            0         0        0  -   lib_ltfsml1  4183    -
5. If necessary, move the tape cartridge from the I/O station to a storage slot by running the ltfsee tape move command (see Example 7-36) with the homeslot option.
Example 7-36 Move tape to homeslot
[root@ltfsml1 ~]# ltfsee tape move homeslot -t 2FC143L5 -l lib_ltfsml1
Tape 2FC143L5 is moved successfully.
6. Add the tape cartridge to a tape cartridge pool. If the tape cartridge contains actual data to be added to LTFS, you must import it rather than simply adding it: run the ltfsee import command to add the tape cartridge to the IBM Spectrum Archive EE library and import the files on that tape cartridge into the IBM Spectrum Scale namespace as stub files (a hedged example follows this procedure).
If you have no data on the tape cartridge (but it is already formatted for LTFS), add it to a tape cartridge pool by running the ltfsee pool add command.
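The following import invocation is a hypothetical sketch only. The option style (-t, -p, and -l) is modeled on the other ltfsee commands in this chapter, and the tape VOLSER is made up; confirm the exact syntax of the ltfsee import command for your release before using it:
ltfsee import -t TAPE001L5 -p primary_ltfsml1 -l lib_ltfsml1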
Scratch cartridge
To add a scratch cartridge to the IBM Spectrum Archive EE system, complete the following steps:
1. Insert the tape cartridge into the I/O station.
2. Synchronize the data in the IBM Spectrum Archive EE inventory with the tape library by running the ltfsee retrieve command, as shown in Example 7-34 on page 140.
3. If necessary, move the tape cartridge from the I/O station to a storage slot by running the ltfsee tape move command with the homeslot option, as shown in Example 7-36.
4. To format the tape cartridge while adding it to a tape cartridge pool, use the -f or --format option. For example, Example 7-37 shows the output of the ltfsee pool add command.
Example 7-37 Format a scratch tape
[root@ltfsml1 ~]# ltfsee pool add -p primary_ltfsml1 -t 2FC143L5 -l lib_ltfsml1 -f
GLESL042I(00640): Adding tape 2FC143L5 to storage pool primary_ltfsml1.
Tape 2FC143L5 successfully formatted.
Added tape 2FC143L5 to pool primary_ltfsml1 successfully.
For more information about other formatting options, see 7.5.3, “Formatting tape cartridges” on page 144.
7.5.2 Moving tape cartridges
This section summarizes the IBM Spectrum Archive EE commands that can be used for moving tape cartridges.
Moving to different tape cartridge pools
IBM Spectrum Archive EE does not allow you to move a tape cartridge that contains files from one tape cartridge pool to another. If this move is attempted, you receive an error message, as shown in Example 7-38.
Example 7-38 Error message that is shown when attempting to remove a tape cartridge from a tape cartridge pool with migrated/saved files on tape
[root@ltfsml1 ~]# ltfsee pool remove -p primary_ltfsml1 -t IMN835L5 -l lib_ltfsml1
GLESL043I(00832): Removing tape IMN835L5 from storage pool primary_ltfsml1.
GLESL357E(00868): Tape IMN835L5 has migrated files or saved files. It has not been removed from the pool.
However, you can remove an empty tape cartridge from one tape cartridge pool and add it to another tape cartridge pool, as shown in Example 7-39.
Example 7-39 Remove an empty tape cartridge from one tape cartridge pool and add it to another tape cartridge pool
[root@ltfsml1 ~]# ltfsee pool remove -p primary_ltfsml1 -t 2FC143L5 -l lib_ltfsml1
GLESL043I(00832): Removing tape 2FC143L5 from storage pool primary_ltfsml1.
Removed tape 2FC143L5 from pool primary_ltfsml1 successfully.
 
[root@ltfsml1 ~]# ltfsee pool add -p copy_ltfsml1 -t 2FC143L5 -l lib_ltfsml1
GLESL042I(00640): Adding tape 2FC143L5 to storage pool copy_ltfsml1.
Tape 2FC143L5 successfully checked.
Added tape 2FC143L5 to pool copy_ltfsml1 successfully.
Before you remove a tape cartridge from one tape cartridge pool and add it to another tape cartridge pool, a preferred practice is to reclaim the tape cartridge to ensure that no files remain on the tape when it is removed. For more information, see 7.16, “Reclamation” on page 181.
Moving to the homeslot
To move a tape cartridge from a tape drive to its homeslot in the tape library, use the command that is shown in Example 7-40. You might want to use this command in cases where a tape cartridge is loaded in a tape drive and you want to unload it.
Example 7-40 Move a tape cartridge from a tape drive to its homeslot
[root@ltfsml1 ~]# ltfsee tape move homeslot -t 2FC143L5 -p copy_ltfsml1 -l lib_ltfsml1
GLESL373I(00636): Moving tape 2FC143L5
Tape 2FC143L5 is unmounted because it is inserted into the drive.
Tape 2FC143L5 is moved successfully.
Moving to the I/O station
The command that is shown in Example 7-41 moves a tape cartridge to the ieslot (I/O station). This might be required when tape cartridges are exported (normal export) or exported offline.
Example 7-41 Move a tape cartridge to the ieslot after an export offline operation
[root@ltfsml1 ~]# ltfsee export -p copy_ltfsml1 -l lib_ltfsml1 -t 2FC143L5 -o "LTFS EE Redbooks"
Export of tape 2FC143L5 has been requested...
Export of tape 2FC143L5 complete.
Updated offline state of tape 2FC143L5.
Removing tape 2FC143L5 from storage pool copy_ltfsml1.
 
[root@ltfsml1 ~]# ltfsee tape move ieslot -t 2FC143L5 -l lib_ltfsml1
GLESL373I(00636): Moving tape 2FC143L5
Tape 2FC143L5 is unmounted because it is inserted into the drive.
Tape 2FC143L5 is moved successfully.
The move can be between the home slot and the ieslot, or between a tape drive and the home slot. If the tape cartridge belongs to a tape cartridge pool and is online (not in the Offline state), the request to move it to the ieslot fails. After a tape cartridge is moved to the ieslot, it can no longer be accessed from IBM Spectrum Archive EE. If the tape cartridge contains migrated files, it should not be moved to the ieslot without first being exported.
A tape cartridge in the ieslot cannot be added to a tape cartridge pool. Such a tape cartridge must be moved to its home slot before it is added.
7.5.3 Formatting tape cartridges
This section describes how to format a medium in the library for IBM Spectrum Archive EE. Two options can be used: -f or -F. The -f option formats only a scratch tape; if the tape cartridge is already formatted for LTFS, the format fails, as shown in Example 7-42.
Example 7-42 Format failure
[root@ltfsml1 ~]# ltfsee pool add -p copy_ltfsml1 -l lib_ltfsml1 -t 2FC143L5 -f
GLESL042I(00640): Adding tape 2FC143L5 to storage pool copy_ltfsml1.
GLESL091E(00683): This operation is not allowed to this state of tape. Need to check the status of Tape 2FC143L5 by using the ltfsee info tapes command.
When the formatting is requested, IBM Spectrum Archive EE attempts to mount the target medium to obtain the medium condition. The medium is formatted if the mount command finds any of the following conditions:
The medium was not yet formatted for LTFS.
The medium has an invalid label.
Labels in both partitions do not have the same value.
If none of these conditions are found, LTFS assumes that the medium is already formatted, so it is not formatted by default. However, the -F option formats a tape even if it is already formatted, provided that the tape does not contain any referenced data. Tape cartridges that were used by IBM Spectrum Archive EE must be reclaimed before they can be reformatted because the reclamation process removes the referenced data.
Example 7-43 shows a tape cartridge being formatted by using the -F option.
Example 7-43 Forced format
[root@ltfsml1 ~]# ltfsee pool add -p copy_ltfsml1 -l lib_ltfsml1 -t 2FC143L5 -F
GLESL042I(00640): Adding tape 2FC143L5 to storage pool copy_ltfsml1.
Tape 2FC143L5 successfully formatted.
Added tape 2FC143L5 to pool copy_ltfsml1 successfully.
Multiple tape cartridges can be formatted by specifying multiple tape VOLSERs. Example 7-44 shows three tape cartridges that are formatted, either sequentially or in parallel, depending on the number of available drives.
Example 7-44 Format multiple tape cartridges
[root@ltfsml1 ~]# ltfsee pool add -p copy_ltfsml1 -l lib_ltfsml1 -t 2FC147L5 -t 2FC141L5 -t 2FC140L5 -e
GLESL042I(00640): Adding tape 2FC147L5 to storage pool copy_ltfsml1.
GLESL042I(00640): Adding tape 2FC141L5 to storage pool copy_ltfsml1.
GLESL042I(00640): Adding tape 2FC140L5 to storage pool copy_ltfsml1.
Tape 2FC147L5 successfully formatted.
Added tape 2FC147L5 to pool copy_ltfsml1 successfully.
Tape 2FC141L5 successfully formatted.
Added tape 2FC141L5 to pool copy_ltfsml1 successfully.
Tape 2FC140L5 successfully formatted.
Added tape 2FC140L5 to pool copy_ltfsml1 successfully.
When multiple format jobs are submitted, IBM Spectrum Archive EE uses all available drives with the ‘g’ drive attribute for the format jobs, which are done in parallel.
Active file check before formatting tape cartridges
Some customers (such as those in the video surveillance industry) might want to retain data only for a certain retention period and then reuse the tape cartridges. Running the reconciliation and reclamation commands is the most straightforward method, but it might take a long time if there are billions of small files in GPFS because the command checks every file in GPFS and deletes the files on the tape cartridge one by one. The fastest method is to manage the pool and identify tape cartridges whose data has passed the retention period. Customers can then remove the tape cartridge from its pool and add it to a new pool, reformatting the entire tape cartridge. This approach saves time, but customers need to be certain that the tape cartridge does not contain any active data.
To be sure that a tape cartridge is format-ready, this section introduces a new option, -E, to the ltfsee pool remove command. When run with -E, the command checks whether the tape cartridge contains any active data. If all the files on the tape cartridge have already been deleted in GPFS, the command determines that the tape cartridge is effectively empty and removes it from the pool. If the tape cartridge still has active data, the command does not remove it. No reconciliation is necessary before running this command.
When -E is run, the command performs the following steps:
1. Determine if the specified tape cartridge is in the specified pool and is not mounted.
2. Reserve the tape cartridge so that no migration will occur to the tape.
3. Read the volume cache (GPFS file) for the tape cartridge. If any file entries exist in the volume cache, check whether the corresponding GPFS stub file exists, as-is or renamed. This is the same check routine as the current reclaim -q (quick reconcile) command.
4. If the tape cartridge is empty or has files but all of them have already been deleted in GPFS (not renamed), remove the tape cartridge from the pool.
Example 7-45 shows the output of the active file check command -E with three tape cartridges. VTAPE3L5 is an empty tape cartridge while VTAPE4L5 and VTAPE5L5 have files migrated to them and all corresponding GPFS stub files deleted, but reconciliation has not yet been performed.
Example 7-45 Removing tape cartridges from pool with active file check
[root@io0 src]# /opt/ibm/ltfsee/bin/ltfsee pool remove -p pool1 -l lib1 -t VTAPE3L5 -t VTAPE4L5 -t VTAPE5L5 -E
GLESL584I(00820): Reserving tapes.
GLESS135I(00851): Reserved tapes: VTAPE3L5 VTAPE4L5 VTAPE5L5 .
GLESL043I(00881): Removing tape VTAPE3L5 from storage pool pool1.
Removed tape VTAPE3L5 from pool pool1 successfully.
GLESL043I(00881): Removing tape VTAPE4L5 from storage pool pool1.
GLESL575I(00907): Tape VTAPE4L5 contains files but all of them have been deleted in GPFS.
GLESL572I(00938): Removed tape VTAPE4L5 from pool pool1 successfully. Format the tape when adding back to a pool.
GLESL043I(00881): Removing tape VTAPE5L5 from storage pool pool1.
GLESL575I(00907): Tape VTAPE5L5 contains files but all of them have been deleted in GPFS.
GLESL572I(00938): Removed tape VTAPE5L5 from pool pool1 successfully. Format the tape when adding back to a pool.
GLESS137I(00980): Removing tape reservations.
5. If the tape cartridge has a valid, active file, the check routine aborts on the first hit and goes on to the next specified tape cartridge. The command will not remove the tape cartridge from the pool.
In Example 7-46, VTAPE7L5 has files migrated to the tape cartridge but all corresponding GPFS stub files have been deleted. VTAPE8L5 has active renamed data while VTAPE9L5 has active data. Upon running the -E command, only VTAPE7L5 is removed. VTAPE8L5 and VTAPE9L5 will not be removed.
Example 7-46 Tape cartridges containing active data will not be removed from pool
[root@io0 archive]# ltfsee pool remove -p pool1 -t VTAPE7L5 -t VTAPE8L5 -t VTAPE9L5 -E
GLESL584I(00820): Reserving tapes.
GLESS135I(00851): Reserved tapes: VTAPE7L5 VTAPE8L5 VTAPE9L5 .
GLESL043I(00881): Removing tape VTAPE7L5 from storage pool pool1.
GLESL575I(00907): Tape VTAPE7L5 contains files but all of them have been deleted in GPFS.
GLESL572I(00938): Removed tape VTAPE7L5 from pool pool1 successfully. Format the tape when adding back to a pool.
GLESL043I(00881): Removing tape VTAPE8L5 from storage pool pool1.
GLESL357E(00890): Tape VTAPE8L5 has migrated files or saved files. It has not been removed from the pool.
GLESL043I(00881): Removing tape VTAPE9L5 from storage pool pool1.
GLESL357E(00890): Tape VTAPE9L5 has migrated files or saved files. It has not been removed from the pool.
GLESS137I(00980): Removing tape reservations.
The active file check applies to all data types that IBM Spectrum Archive EE can store on a tape cartridge:
Normal migrated files
Saved objects, such as empty directories, empty regular files, and symbolic links
Another approach is to run mmapplypolicy to list all files that have been migrated to the designated tape cartridge ID. However, if the Spectrum Scale file system has over 1 billion files, the mmapplypolicy scan might take a long time.
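If that scan is needed anyway, a LIST rule can narrow the output to files whose IBM Spectrum Archive EE DMAPI attribute references the designated tape. The following is a minimal sketch only: it assumes that the tape ID is recorded in the dmapi.IBMTPS extended attribute, and the file system, policy file name, and tape ID are illustrative, so verify the attribute name in your environment before relying on it.
cat <<'EOF' > /root/files_on_tape.policy
/* List migrated or premigrated files that reference tape VTAPE4L5 (illustrative) */
RULE EXTERNAL LIST 'on_tape' EXEC ''
RULE 'files_on_tape' LIST 'on_tape'
WHERE XATTR('dmapi.IBMTPS') LIKE '%VTAPE4L5%'
EOF
mmapplypolicy /ibm/gpfs -P /root/files_on_tape.policy -I defer -f /tmp/on_tape
With -I defer and -f, the matching path names are written to a list file under /tmp instead of being acted on.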
7.5.4 Removing tape drives
When LTFS mounts the library, all tape drives are inventoried by default. Use the following procedure when a tape drive requires replacement or repair and must be physically removed from the library. The same process must also be carried out when the tape drive firmware is upgraded. If a tape is in the drive and a job is in progress, the tape is unloaded automatically when the job completes.
After mounting the library, the user can run ltfsee commands to manage the library and to correct a problem if one occurs.
To remove a tape drive from the library, complete the following steps:
1. Remove the tape drive from the IBM Spectrum Archive EE inventory by running the ltfsee drive remove command, as shown in Example 7-47. A medium in the tape drive is automatically moved to the home slot (if one exists).
Example 7-47 Remove a tape drive
[root@ltfsml1 ~]# ltfsee drive remove -d 1013000655 -l lib_ltfsml1
GLESL121I(00282): Drive serial 1013000655 is removed from LTFS EE drive list.
 
[root@ltfsml1 ~]# ltfsee info drives
Drive S/N Status Type Role Library Address Node ID Tape Node Group
1013000667 Not mounted LTO6 mrg lib_ltfsml2 256 1 - G0
1013000110 Not mounted LTO6 mrg lib_ltfsml2 257 1 - G0
1013000692 Not mounted LTO6 mrg lib_ltfsml2 258 1 - G0
1013000694 Not mounted LTO6 mrg lib_ltfsml1 256 2 - G0
1013000688 Not mounted LTO6 mrg lib_ltfsml1 257 2 - G0
1013000655 Stock UNKNOWN  ---   lib_ltfsml1    258 - - -
2. Physically remove the tape drive from the tape library.
For more information about how to remove tape drives (including control path considerations), see IBM System Storage TS3500 Tape Library with ALMS Operator Guide, GA32-0594.
7.5.5 Adding tape drives
Add the tape drive to the LTFS inventory by running the ltfsee drive add command, as shown in Example 7-48 on page 148.
Optionally, drive attributes can be set when adding a tape drive. Drive attributes are the logical OR of the attributes: migrate(4), recall(2), generic(1). If the individual attribute is set, any corresponding jobs on the job queue can be run on that drive. The drive attributes can be specified after the ‘:’ following the drive serial and must be a decimal number.
In Example 7-48, ‘6’ is the logical OR of migrate(4) and recall(2), so migration jobs and recall jobs can be performed on this drive. For more information, see 7.19, “Drive Role settings for job assignment control” on page 192.
The node ID is required for the add command.
Example 7-48 Add a tape drive
[root@ltfsml1 ~]# ltfsee drive add -d 1013000655:6 -n 2 -l lib_ltfsml1
GLESL119I(00174): Drive 1013000655:6 added successfully.
 
[root@ltfsml1 ~]# ltfsee info drives
Drive S/N Status Type Role Library Address Node ID Tape Node Group
1013000667 Not mounted LTO6 mrg lib_ltfsml2 256 1 - G0
1013000110 Not mounted LTO6 mrg lib_ltfsml2 257 1 - G0
1013000692 Not mounted LTO6 mrg lib_ltfsml2 258 1 - G0
1013000694 Not mounted LTO6 mrg lib_ltfsml1 256 2 - G0
1013000688 Not mounted LTO6 mrg lib_ltfsml1 257 2 - G0
1013000655 Not mounted LTO6 mrg lib_ltfsml1 258 2 - G0
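For reference, the drive role value is the sum of the attributes that you want to enable, so a value of 7 (migrate 4 + recall 2 + generic 1) allows all job types on the drive. The following sketch reuses the drive serial, node ID, and library name from Example 7-48 and is illustrative only; if the drive is already configured, remove it first as described in 7.5.4, “Removing tape drives”.
ltfsee drive add -d 1013000655:7 -n 2 -l lib_ltfsml1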
7.6 Tape storage pool management
This section describes how to use the ltfsee pool command to manage tape cartridge pools with IBM Spectrum Archive EE.
 
Permissions: Managing tape cartridge pools by running the ltfsee pool command requires root user permissions.
To perform file migrations, it is first necessary to create and define tape cartridge pools, which are the targets for migration. It is then possible to add or remove tape cartridges to or from the tape cartridge pools.
Consider the following rules and recommendations for tape cartridge pools:
Before a tape cartridge is added to a tape cartridge pool, it must first be in the homeslot of the tape library. For more information about moving tape cartridges to the homeslot, see 7.5.2, “Moving tape cartridges” on page 142.
Multiple jobs can be performed in parallel when more than one tape cartridge is defined in a tape cartridge pool. Have multiple tape cartridges in each tape cartridge pool to increase performance.
The maximum number of drives in a node group that is used for migration for a particular tape cartridge pool can be limited by setting the mountlimit attribute for the tape cartridge pool. The default is 0, which is unlimited. For more information about the mountlimit attribute, see 8.2, “Maximizing migration performance with redundant copies” on page 207.
After a file is migrated to a tape cartridge pool, it cannot be migrated again to another tape cartridge pool before it is recalled.
When tape cartridges are removed from a tape cartridge pool but not exported from IBM Spectrum Archive EE, they are no longer targets for migration or recalls.
When tape cartridges are exported from IBM Spectrum Archive EE system by running the ltfsee export command, they are removed from their tape cartridge pool and the files are not accessible for recall.
7.6.1 Creating tape cartridge pools
This section describes how to create tape cartridge pools for use with IBM Spectrum Archive EE. Tape cartridge pools are logical groupings of tape cartridges within IBM Spectrum Archive EE. The groupings might be based on their intended function (for example, OnsitePool and OffsitePool) or based on their content (for example, MPEGpool and JPEGpool). However, you must create at least one tape cartridge pool.
You create tape cartridge pools by using the create option of the ltfsee pool command. For example, the command that is shown in Example 7-49 creates the tape cartridge pool named MPEGpool.
Example 7-49 Create a tape cartridge pool
[root@ltfs97 /]# ltfsee pool create -p MPEGpool
For single tape library systems, the -l option (library name) can be omitted. For two-tape-library systems, the -l option must be specified to identify the library.
For single node group systems, the -g option (node group) can be omitted. For multiple node group systems, the -g option must be specified to identify the node group.
The default tape cartridge pool type is a regular pool. If a WORM pool is wanted, supply the --worm physical option.
The pool names are case-sensitive and can be duplicated in different tape libraries. No informational messages are shown at the successful completion of the command. However, you can confirm that the pool was created by running the ltfsee info pools command.
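For example, on a system with multiple tape libraries and node groups, the create command and the confirmation step might look like the following sketch (the pool name, library name, and node group name are illustrative):
ltfsee pool create -p OnsitePool -l lib_ltfsml1 -g G0
ltfsee info pools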
7.6.2 Deleting tape cartridge pools
This section describes how to delete tape cartridge pools for use with IBM Spectrum Archive EE. Delete tape cartridge pools by using the delete option of the ltfsee pool command. For example, the command in Example 7-50 deletes the tape cartridge pool that is named MPEGpool.
Example 7-50 Delete a tape cartridge pool
[root@ltfs97 /]# ltfsee pool delete -p MPEGpool
For single tape library systems, the -l option (library name) can be omitted. For two-tape-library systems, the -l option must be specified to identify the library.
When deleting a tape cartridge pool, the -g option (node group) can be omitted.
No informational messages are shown after the successful completion of the command.
 
Important: If the tape cartridge pool contains tape cartridges, the tape cartridge pool cannot be deleted until the tape cartridges are removed.
You cannot use IBM Spectrum Archive EE to delete a tape cartridge pool that still contains data. Example 7-51 shows the attempt to delete a tape cartridge pool with tape cartridges in it and its resulting error message.
Example 7-51 Delete a tape cartridge pool containing data
[root@ltfsml1 ~]# ltfsee pool delete -p copy_ltfsml1 -l lib_ltfsml1
GLESL044E(00467): Failed to delete storage pool. The pool copy_ltfsml1 is not empty. Tapes must be removed from the storage pool before it can be deleted.
To allow the deletion of the tape cartridge pool, you must remove all tape cartridges from it by running the ltfsee pool remove command, as described in 7.5.2, “Moving tape cartridges” on page 142.
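A minimal sketch of that sequence follows, assuming that a single tape cartridge (the tape ID shown is illustrative) remains in the pool:
ltfsee pool remove -p copy_ltfsml1 -l lib_ltfsml1 -t TAPE001L6
ltfsee pool delete -p copy_ltfsml1 -l lib_ltfsml1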
7.7 Migration
The migration process is the most significant reason for using IBM Spectrum Archive EE. Migration is the movement of files from IBM Spectrum Scale (on disk) to LTFS tape cartridges in tape cartridge pools, which leaves behind a small stub file on the disk. This reduces the usage of the IBM Spectrum Scale file system and lets you move less frequently accessed data to lower-cost, lower-tier tape storage, from where it can be easily recalled.
IBM Spectrum Scale policies are used to specify files in the IBM Spectrum Scale namespace (through a GPFS scan) to be migrated to the LTFS tape tier. For each specified GPFS file, the file content, GPFS path, and user-defined extended attributes are stored in LTFS so that they can be re-created at import. Empty GPFS directories are not migrated.
In addition, it is possible to migrate an arbitrary list of files directly by running the ltfsee migrate command. This task is done by specifying the file name of a scan list file that lists the files to be migrated and specifying the designated pools as command options.
 
Important: Running the Tivoli Storage Manager for Space Management dsmmigrate command directly is not supported.
To migrate files, the following configuration and activation prerequisites must be met:
Ensure that the MMM service is running on an LTFS node. For more information, see 7.2.4, “IBM Spectrum Archive EE” on page 132.
Ensure that tape cartridge pools are created and contain tape cartridges. For more information, see 7.6.1, “Creating tape cartridge pools” on page 149.
Ensure that space management is turned on. For more information, see 7.2.3, “Hierarchical Space Management” on page 130.
Activate one of the following mechanisms to trigger migration:
 – Automated IBM Spectrum Scale policy-driven migration that uses thresholds.
 – Manual policy-based migration by running the mmapplypolicy command.
 – Manual migration by running the ltfsee migrate command and a prepared list of files and tape cartridge pools.
IBM Spectrum Archive EE uses a semi-sequential fill policy for tapes that enables multiple files to be written in parallel by using multiple tape drives within the tape library. Jobs are put on the queue and the scheduler looks at the queue to decide which jobs should be run. If one tape drive is available, all of the migration goes on one tape cartridge. If there are three tape drives available, the migrations are spread among the three tape drives. This configuration improves throughput and is a more efficient usage of tape drives.
With the recent changes in Spectrum Archive EE V1.2.2.0, individual files are no longer scheduled on the job queue. Spectrum Archive EE now internally groups files into file lists and schedules these lists on the job queue. The lists are then distributed to available drives to perform the migrations.
The grouping is done by using two parameters: a total file size and a total number of files. The default limits for a file list are 20 GB and 20,000 files, which means that a file list is filled until it reaches either 20 GB of data or 20,000 files, whichever comes first, before a new file list is created. For example, if you have 10 files to migrate and each file is 10 GB, Spectrum Archive EE internally generates five file lists that contain two files each, because two files reach the 20 GB limit of a file list. It then schedules those file lists to the job queue for the available drives. For performance references, see 3.4.4, “Performance” on page 53.
 
Note: Starting with Spectrum Archive EE V1.2.2.0, recently created files must wait two minutes before they can be migrated. Otherwise, the migrations fail.
Example 7-52 shows the output of running the mmapplypolicy command that uses a policy file called sample_policy.txt.
Example 7-52 mmapplypolicy
[root@ltfsml1 ~]# mmapplypolicy /ibm/gpfs/production/ -P sample_policy.txt
[I] GPFS Current Data Pool Utilization in KB and %
Pool_Name KB_Occupied KB_Total Percent_Occupied
system 861676032 12884901888 6.687486172%
[I] 6314402 of 104857600 inodes used: 6.021883%.
[I] Loaded policy rules from sample_policy.txt.
Evaluating policy rules with CURRENT_TIMESTAMP = 2016-02-12@21:56:04 UTC
Parsed 3 policy rules.
 
RULE 'SYSTEM_POOL_PLACEMENT_RULE' SET POOL 'system'
 
RULE EXTERNAL POOL 'COPY_POOL'
EXEC '/opt/ibm/ltfsee/bin/ltfsee'
OPTS '-p copy_ltfsml1@lib_ltfsml1'
 
RULE 'LTFS_EE_FILES' MIGRATE FROM POOL 'system' TO POOL 'COPY_POOL'
WHERE FILE_SIZE > 0
AND ((NOT MISC_ATTRIBUTES LIKE '%M%') OR (MISC_ATTRIBUTES LIKE '%M%' AND MISC_ATTRIBUTES NOT LIKE '%V%'))
AND NOT ((PATH_NAME LIKE '/ibm/gpfs/.ltfsee/%' OR PATH_NAME LIKE '%/.SpaceMan/%'))
[I] 2016-02-12@21:56:05.015 Directory entries scanned: 4.
[I] Directories scan: 3 files, 1 directories, 0 other objects, 0 'skipped' files and/or errors.
[I] 2016-02-12@21:56:05.019 Sorting 4 file list records.
[I] Inodes scan: 3 files, 1 directories, 0 other objects, 0 'skipped' files and/or errors.
[I] 2016-02-12@21:56:05.040 Policy evaluation. 4 files scanned.
[I] 2016-02-12@21:56:05.044 Sorting 3 candidate file list records.
[I] 2016-02-12@21:56:05.045 Choosing candidate files. 3 records scanned.
[I] Summary of Rule Applicability and File Choices:
Rule# Hit_Cnt KB_Hit Chosen KB_Chosen KB_Ill Rule
0 3 46392 3 46392 0 RULE 'LTFS_EE_FILES' MIGRATE FROM POOL 'system' TO POOL 'COPY_POOL' WHERE(.)
 
[I] Filesystem objects with no applicable rules: 1.
 
[I] GPFS Policy Decisions and File Choice Totals:
Chose to migrate 46392KB: 3 of 3 candidates;
Predicted Data Pool Utilization in KB and %:
Pool_Name KB_Occupied KB_Total Percent_Occupied
system 861629640 12884901888 6.687126122%
GLESL167I(00509): A list of files to be migrated has been sent to LTFS EE using scan id 1425344513.
GLESL038I(00555): Migration result: 3 succeeded, 0 failed, 0 duplicate, 0 duplicate wrong pool, 0 not found, 0 too small to qualify for migration.
[I] 2016-02-12@21:57:55.981 Policy execution. 3 files dispatched.
[I] A total of 3 files have been migrated, deleted or processed by an EXTERNAL EXEC/script;
0 'skipped' files and/or errors.
7.7.1 Managing file migration pools
A file can be migrated to one pool or to multiple pools if replicas are configured. However, after the file is in the migrated state, it cannot be migrated again to other tape cartridge pools before it is recalled and made resident again by using the ltfsee repair command. (For more information about creating replicas, see 7.7.4, “Replicas and redundant copies” on page 161.) Recalling the file into resident state invalidates the LTFS copy from the reconcile and export perspective.
7.7.2 Threshold-based migration
This section describes how to use IBM Spectrum Scale policies for threshold-based migrations with IBM Spectrum Archive EE.
Automated IBM Spectrum Scale policy-driven migration is a standard IBM Spectrum Scale migration procedure that allows file migration from IBM Spectrum Scale disk pools to external pools. IBM Spectrum Archive EE is configured as an external pool to IBM Spectrum Scale by using policy statements.
After you define an external tape cartridge pool, migrations or deletion rules can refer to that pool as a source or target tape cartridge pool. When the mmapplypolicy command is run and a rule dictates that data should be moved to an external pool, the user-provided program that is identified with the EXEC clause in the policy rule starts. That program receives the following arguments:
The command to be run. IBM Spectrum Scale supports the following subcommands:
 – LIST: Provides arbitrary lists of files with no semantics on the operation.
 – MIGRATE: Migrates files to external storage and reclaims the online space that is allocated to the file.
 – PREMIGRATE: Migrates files to external storage, but does not reclaim the online space.
 – PURGE: Deletes files from both the online file system and the external storage.
 – RECALL: Recall files from external storage to the online storage.
 – TEST: Tests for presence and operation readiness. Returns zero for success and returns nonzero if the script should not be used on a specific node.
The name of a file that contains a list of files to be migrated.
 
Important: IBM Spectrum Archive EE supports only the MIGRATE, PREMIGRATE, and RECALL subcommands.
Any optional parameters that are specified with the OPTS clause in the rule. These parameters are not interpreted by the IBM Spectrum Scale policy engine; they are the mechanism that IBM Spectrum Archive EE uses to pass the tape cartridge pools to which the files are migrated.
To set up automated IBM Spectrum Scale policy-driven migration to IBM Spectrum Archive EE, you must configure IBM Spectrum Scale to be managed by IBM Spectrum Archive EE. In addition, a migration callback must be configured. Callbacks are provided primarily as a method for system administrators to be notified when important IBM Spectrum Scale events occur. A callback registers a user-defined command that IBM Spectrum Scale runs when certain events occur. For example, an administrator can use the low disk space event callback to inform system administrators when a file system is getting full.
The migration callback is used to register the policy engine to be run if a high threshold in a file system pool is met. For example, after your pool usage reaches 80%, you can start the migration process. You must enable the migration callback by running the mmaddcallback command.
In the mmaddcallback command in Example 7-53, the --command option points to the executable or script that is run when the event occurs, in this example /usr/lpp/mmfs/bin/mmapplypolicy. Before you run this command, you must ensure that the specified file exists. The --event option registers the events for which the callback is configured, such as the “low disk space” events that are shown in the command example. For more information about how to create and set a fail-safe policy, see 8.10, “Real world use cases for mmapplypolicy” on page 214.
Example 7-53 mmaddcallback example
mmaddcallback MIGRATION --command /usr/lpp/mmfs/bin/mmapplypolicy --event lowDiskSpace --params "%fsName -B 10000 -m <2x the number of drives>"
For more information, see the following publications:
IBM Spectrum Scale V4.2.1: Administration and Programming Reference, SA23-1452
Tivoli Field Guide - TSM for Space Management for UNIX-GPFS Integration white paper, found at:
After the file system is configured to be managed by IBM Spectrum Archive EE and the migration callback is configured, a policy can be set up for the file system. The placement policy that defines the initial placement of newly created files and the rules for placement of restored data must be installed into IBM Spectrum Scale by using the mmchpolicy command. If a Spectrum Scale file system does not have a placement policy installed, all the data is stored in the system storage pool.
You can define the file management rules and install them in the file system together with the placement rules by running the mmchpolicy command. You also can define these rules in a separate file and explicitly provide them to the mmapplypolicy command by using the -P option. The latter option is described in 7.7.3, “Manual migration” on page 157.
In either case, policy rules for placement or migration can be intermixed. Over the life of the file, data can be migrated to a different tape cartridge pool any number of times, and files can be deleted or restored.
The policy must define IBM Spectrum Archive EE (/opt/ibm/ltfsee/bin/ltfsee) as an external tape cartridge pool.
 
Tip: Only one IBM Spectrum Scale policy, which can include one or more rules, can be set up for a particular GPFS file system.
After a policy is entered into a text file (such as policy.txt), you can apply the policy to the file system by running the mmchpolicy command. You can check the syntax of the policy before you apply it by running the command with the -I test option, as shown in Example 7-54.
Example 7-54 Test an IBM Spectrum Scale policy
mmchpolicy /dev/gpfs policy.txt -t "System policy for LTFS EE" -I test
After you test your policy, run the mmchpolicy command without the -I test option to set the policy. After a policy is set for the file system, you can check it by displaying it with the mmlspolicy command, as shown in Example 7-55. This policy migrates files under the /ibm/glues/archive directory to tape in groups of 20 GiB when file system usage reaches or exceeds the 80% threshold, and it continues until usage drops to 50%.
Example 7-55 List an IBM Spectrum Scale policy
[root@ltfs97]# mmlspolicy /dev/gpfs -L
/* LTFS EE - GPFS policy file */
 
define(
user_exclude_list,
PATH_NAME LIKE '/ibm/glues/0%'
OR NAME LIKE '%&%')
 
define(
user_include_list,
FALSE)
 
define(
exclude_list,
NAME LIKE 'dsmerror.log')
 
/* define is_premigrated uses GPFS inode attributes that mark a file
as a premigrated file. Use the define to include or exclude premigrated
files from the policy scan result explicitly */
define(
is_premigrated,
MISC_ATTRIBUTES LIKE '%M%' AND MISC_ATTRIBUTES NOT LIKE '%V%')
 
/* define is_migrated uses GPFS inode attributes that mark a file
as a migrated file. Use the define to include or exclude migrated
files from the policy scan result explicitly */
define(
is_migrated,
MISC_ATTRIBUTES LIKE '%V%')
 
RULE 'SYSTEM_POOL_PLACEMENT_RULE' SET POOL 'system'
 
RULE EXTERNAL POOL 'Archive_files'
EXEC '/opt/ibm/ltfsee/bin/ltfsee'
OPTS -p 'pool1'
SIZE(21474836480)
 
RULE 'ARCHIVE_FILES' MIGRATE FROM POOL 'system'
THRESHOLD(80,50)
TO POOL 'Archive_files'
WHERE PATH_NAME LIKE '/ibm/glues/archive/%'
AND NOT (exclude_list)
AND (NOT (user_exclude_list) OR (user_include_list))
AND (is_migrated OR is_premigrated)
To ensure that a specified IBM Spectrum Scale file system is migrated only once, run the mmapplypolicy command with the --single-instance option. If this option is not used, IBM Spectrum Archive EE attempts to start another migration process every two minutes. The policy is called every two minutes because the threshold continues to be exceeded while the migrations that are already running are still freeing up space.
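A sketch of such an invocation follows; the device name, policy file name, and the -B and -m values are illustrative and should be tuned for your environment:
mmapplypolicy gpfs -P /root/archive_policy.txt --single-instance -B 10000 -m 16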
As a preferred practice, do not use overlapping IBM Spectrum Scale policy rules within different IBM Spectrum Scale policy files that select the same files for migration to different tape cartridge pools. If a file is already migrated, later migration attempts fail, which is the standard HSM behavior. However, this situation does not normally occur when thresholds are used.
 
Important: If a single Spectrum Scale file system is used and the metadata directory is stored in the same file system that is space-managed with IBM Spectrum Archive EE, migration of the metadata directory must be prevented. The name of metadata directory is /ibm/gpfs/.ltfsee/meta/.
By combining the THRESHOLD and WEIGHT attributes in IBM Spectrum Scale policies, you have a great deal of control over the migration process. When an IBM Spectrum Scale policy is applied, each candidate file is assigned a weight (based on the WEIGHT attribute). All candidate files are sorted by weight, and the highest-weight files are chosen for migration until the low occupancy percentage (based on the THRESHOLD attribute) is reached or there are no more candidate files.
Example 7-56 shows a policy that starts migration of all files when the file system pool named “system” reaches 80% full (refer to the THRESHOLD attribute), and continues migration until the pool is reduced to 60% full or less by using a weight that is based on the date and time that the file was last accessed (refer to the ACCESS_TIME attribute). The file system usage is checked every two minutes. All files to be migrated must have more than 5 MB of disk space allocated for the file (refer to the KB_ALLOCATED attribute). The migration is performed to an external pool, represented by IBM Spectrum Archive EE (/opt/ibm/ltfsee/bin/ltfsee), and the migrated data is sent to the IBM Spectrum Archive EE tape cartridge pool named “Tapepool1”. In addition, this example policy excludes some system files and directories.
Example 7-56 Threshold-based migration in an IBM Spectrum Scale policy file
define
(
exclude_list,
(
PATH_NAME LIKE '%/.SpaceMan/%'
OR PATH_NAME LIKE '%/.ctdb/%'
OR PATH_NAME LIKE '/ibm/glues/.ltfsee/%'
OR NAME LIKE 'fileset.quota%'
OR NAME LIKE 'group.quota%'
)
)
RULE EXTERNAL POOL 'ltfsee'
EXEC '/opt/ibm/ltfsee/bin/ltfsee'
OPTS '-p Tapepool1' /* This is our pool in LTFS Enterprise Edition */
SIZE(21474836480)
 
/* The following statement is the migration rule */
RULE 'ee_sysmig' MIGRATE FROM POOL 'system'
 
THRESHOLD(80,60)
WEIGHT(CURRENT_TIMESTAMP - ACCESS_TIME)
TO POOL 'ltfsee'
WHERE (KB_ALLOCATED > 5120)
AND NOT (exclude_list)
 
/* The following statement is the default placement rule that is required for a system migration */
RULE 'default' set pool 'system'
In addition to monitoring the file system’s overall usage, as in Example 7-56, you can monitor how frequently files are accessed with IBM Spectrum Scale policies. A file’s access temperature is an attribute for a policy that provides a means of optimizing tiered storage. File temperature is a relative attribute that indicates whether a file is “hotter” or “colder” than the others in its pool. The policy can be used to migrate hotter files to higher tiers and colder files to lower tiers. The access temperature is an exponential moving average of the accesses to the file. As files are accessed, the temperature increases; likewise, when the access stops, the file cools. File temperature is intended to optimize nonvolatile storage, not memory usage; therefore, cache hits are not counted. In a similar manner, only user accesses are counted.
The access counts to a file are tracked as an exponential moving average. A file that is not accessed loses a percentage of its accesses each period. The loss percentage and period are set through the configuration variables fileHeatLossPercent and fileHeatPeriodMinutes. By default, the file access temperature is not tracked.
To use access temperature in policy, the tracking must first be enabled. To do this, set the following configuration variables:
fileHeatLossPercent
The percentage (0 - 100) of file access temperature that is dissipated over the fileHeatPeriodMinutes time. The default value is 10.
fileHeatPeriodMinutes
The number of minutes that is defined for the recalculation of file access temperature. To turn on tracking, fileHeatPeriodMinutes must be set to a nonzero value from the default value of 0. You use WEIGHT(FILE_HEAT) with a policy MIGRATE rule to prioritize migration by file temperature.
The following example sets fileHeatPeriodMinutes to 1440 (24 hours) and fileHeatLossPercent to 10, meaning that unaccessed files lose 10% of their heat value every 24 hours, or approximately 0.4% every hour (because the loss is continuous and “compounded” geometrically):
mmchconfig fileheatperiodminutes=1440,fileheatlosspercent=10
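The following sketch shows how WEIGHT and FILE_HEAT might be combined in a migration rule. The pool names and the size filter are illustrative. Because mmapplypolicy chooses the highest-weight files first, FILE_HEAT is negated here so that the coldest files are moved to tape first; if your IBM Spectrum Scale level does not accept the arithmetic expression, WEIGHT(FILE_HEAT) selects the hottest files first instead.
cat <<'EOF' > /root/fileheat_policy.txt
/* Illustrative external pool definition, as in the earlier examples */
RULE EXTERNAL POOL 'ltfsee'
EXEC '/opt/ibm/ltfsee/bin/ltfsee'
OPTS '-p Tapepool1'
/* Coldest files receive the highest weight, so they are migrated first */
RULE 'ee_heatmig' MIGRATE FROM POOL 'system'
THRESHOLD(80,60)
WEIGHT(0 - FILE_HEAT)
TO POOL 'ltfsee'
WHERE (KB_ALLOCATED > 5120)
RULE 'default' SET POOL 'system'
EOF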
 
Note: If the updating of the file access time (atime) is suppressed or if relative atime semantics are in effect, proper calculation of the file access temperature might be adversely affected.
These examples provide only an introduction to the wide range of file attributes that migration can use in IBM Spectrum Scale policies. IBM Spectrum Scale provides a range of other policy rule statements and attributes to customize your IBM Spectrum Scale environment, but a full description of all these is outside the scope for this publication. For syntax definitions for IBM Spectrum Scale policy rules, which correspond to constructs in this script (such as EXEC, EXTERNAL POOL, FROM POOL, MIGRATE, RULE, OPTS, THRESHOLD, TO POOL, WEIGHT, and WHERE), see the information about Policy rule syntax definitions in IBM Spectrum Scale V4.2.1: Advanced Administration Guide, SC23-7032. Also, see 8.4, “Setting mmapplypolicy options for increased performance” on page 209.
For more information about IBM Spectrum Scale SQL expressions for policy rules, which correspond to constructs in this script (such as CURRENT_TIMESTAMP, FILE_SIZE, MISC_ATTRIBUTES, NAME, and PATH_NAME), see the information about SQL expressions for policy rules in IBM Spectrum Scale V4.2.1: Advanced Administration Guide, SC23-7032. IBM Spectrum Scale V4.2.1 documentation is available at the Cluster products IBM Knowledge Center at this website:
7.7.3 Manual migration
In contrast to the threshold-based migration process, which is controlled from within IBM Spectrum Scale, the manual migration of files from IBM Spectrum Scale to LTFS tape cartridges can be accomplished by running the mmapplypolicy command or the ltfsee command. The use of these commands is documented in this section. Manual migration is more likely to be used for ad hoc migration of a file or group of files that do not fall within the standard IBM Spectrum Scale policy that is defined for the file system.
Using mmapplypolicy
This section describes how to manually start file migration while using an IBM Spectrum Scale policy file for file selection.
You can apply a manually created policy by manually running the mmapplypolicy command, or by scheduling the policy with the system scheduler. You can have multiple different policies, which can each include one or more rules. However, only one policy can be run at a time.
 
Important: Prevent migration of the .SPACEMAN directory of a GPFS file system by excluding the directory with an IBM Spectrum Scale policy rule.
You can accomplish manual file migration for a Spectrum Scale file system that is managed by IBM Spectrum Archive EE by running the mmapplypolicy command. This command runs a policy that selects files according to certain criteria, and then passes these files to IBM Spectrum Archive EE for migration. As with automated IBM Spectrum Scale policy-driven migrations, the name of the target IBM Spectrum Archive EE tape cartridge pool is provided as the first option of the pool definition rule in the IBM Spectrum Scale policy file.
The following phases occur when the mmapplypolicy command is started:
Phase one: Selecting candidate files
In this phase of the mmapplypolicy job, all files within the specified GPFS file system device (or below the input path name) are scanned. The attributes of each file are read from the file’s GPFS inode structure.
Phase two: Choosing and scheduling files
In this phase of the mmapplypolicy job, some or all of the candidate files are chosen. Chosen files are scheduled for migration, accounting for the weights and thresholds that are determined in phase one.
Phase three: Migrating and premigrating files
In the third phase of the mmapplypolicy job, the candidate files that were chosen and scheduled by the second phase are migrated or premigrated, each according to its applicable rule.
For more information about the mmapplypolicy command and other information about IBM Spectrum Scale policy rules, see IBM Spectrum Scale V4.2.1: Advanced Administration Guide, SC23-7032, or the IBM Spectrum Scale cluster products IBM Knowledge Center at this website:
 
Important: In a multicluster environment, the scope of the mmapplypolicy command is limited to the nodes in the cluster that owns the file system.
Hints and tips
Before you write and apply policies, consider the following points:
It is advised that you always test your rules by running the mmapplypolicy command with the -I test option and the -L 3 (or higher) option before they are applied in a production environment, which helps you understand which files are selected as candidates and which candidates are chosen.
To view all of the files that are chosen for migration, run the mmapplypolicy command with the -I defer and -f /tmp options. The -I defer option runs the policy without actually moving any data, and the -f /tmp option specifies a directory or file name prefix for the output of each migration rule. This option is helpful when you are dealing with large numbers of files.
Do not apply a policy to an entire file system of vital files until you are confident that the rules correctly express your intentions. To test your rules, find or create a subdirectory with a modest number of files, some that you expect to be selected by your SQL policy rules and some that you expect are skipped. Run the following command:
mmapplypolicy /ibm/gpfs/TestSubdirectory -L 6 -I test
The output shows you exactly which files are scanned and which ones match rules or not.
Testing an IBM Spectrum Scale policy
Example 7-57 shows an mmapplypolicy command that tests, but does not apply, an IBM Spectrum Scale policy by using the sample_policy.txt policy file.
Example 7-57 Test an IBM Spectrum Scale policy
[root@ltfsml1 ~]# mmapplypolicy /ibm/gpfs/production/ -P sample_policy.txt -I test
[I] GPFS Current Data Pool Utilization in KB and %
Pool_Name KB_Occupied KB_Total Percent_Occupied
system 861650688 12884901888 6.687289476%
[I] 6314406 of 104857600 inodes used: 6.021887%.
[I] Loaded policy rules from sample_policy.txt.
Evaluating policy rules with CURRENT_TIMESTAMP = 2016-02-13@16:17:57 UTC
Parsed 3 policy rules.
 
RULE 'SYSTEM_POOL_PLACEMENT_RULE' SET POOL 'system'
 
RULE EXTERNAL POOL 'COPY_POOL'
EXEC '/opt/ibm/ltfsee/bin/ltfsee'
OPTS '-p copy_ltfsml1@lib_ltfsml1'
SIZE(21474836480)
 
RULE 'LTFS_EE_FILES' MIGRATE FROM POOL 'system'
THRESHOLD(50,0)
TO POOL 'COPY_POOL'
WHERE FILE_SIZE > 5242880
AND NAME LIKE '%.IMG'
AND ((NOT MISC_ATTRIBUTES LIKE '%M%') OR (MISC_ATTRIBUTES LIKE '%M%' AND MISC_ATTRIBUTES NOT LIKE '%V%'))
AND NOT ((PATH_NAME LIKE '/ibm/gpfs/.ltfsee/%' OR PATH_NAME LIKE '%/.SpaceMan/%'))
[I] 2016-02-13@16:17:58.356 Directory entries scanned: 5.
[I] Directories scan: 4 files, 1 directories, 0 other objects, 0 'skipped' files and/or errors.
[I] 2016-02-13@16:17:58.360 Sorting 5 file list records.
[I] Inodes scan: 4 files, 1 directories, 0 other objects, 0 'skipped' files and/or errors.
[I] 2016-02-13@16:17:58.376 Policy evaluation. 5 files scanned.
[I] 2016-02-13@16:17:58.380 Sorting 1 candidate file list records.
[I] 2016-02-13@16:17:58.380 Choosing candidate files. 1 records scanned.
[I] Summary of Rule Applicability and File Choices:
Rule# Hit_Cnt KB_Hit Chosen KB_Chosen KB_Ill Rule
0 1 20736 1 20736 0 RULE 'LTFS_EE_FILES' MIGRATE FROM POOL 'system' TO POOL 'COPY_POOL' WHERE(.)
 
[I] Filesystem objects with no applicable rules: 4.
 
[I] GPFS Policy Decisions and File Choice Totals:
Chose to migrate 20736KB: 1 of 1 candidates;
Predicted Data Pool Utilization in KB and %:
Pool_Name KB_Occupied KB_Total Percent_Occupied
system 861629952 12884901888 6.687128544%
The policy in Example 7-57 on page 159 is configured to select .IMG files that are larger than 5 MB for migration to the IBM Spectrum Archive EE tape cartridge pool named copy_ltfsml1 in the library named lib_ltfsml1 when the file system usage exceeds 50%.
Using ltfsee
The ltfsee migrate command requires a migration list file that contains the list of files to be migrated, along with the name of the target tape cartridge pool. Unlike migrating files by using an IBM Spectrum Scale policy, it is not possible to use wildcards in place of file names; the name and path of each file to be migrated must be specified in full. The file must be in the following format:
-- /ibm/glues/file1.mpeg
-- /ibm/glues/file2.mpeg
 
Note: Make sure that there is a space before and after the “--”.
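For example, a scan list in this format can be generated with a short shell command (the file names are the same illustrative ones that are shown above):
printf ' -- %s\n' /ibm/glues/file1.mpeg /ibm/glues/file2.mpeg > gpfs-scan.txt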
Example 7-58 shows the output of running such a migrate command.
Example 7-58 Manual migration by using a scan result file
[root@ltfs97 /]# ltfsee migrate gpfs-scan.txt -p MPEGpool
GLESL167I(00400): A list of files to be migrated has been sent to LTFS EE using scan id 1842482689.
GLESL038I(00448): Migration result: 2 succeeded, 0 failed, 0 duplicate, 0 duplicate wrong pool, 0 not found, 0 too small to qualify for migration, 0 too early for migration.
Using a cron job
Migrations that use the ltfsee and mmapplypolicy commands can be automated by scheduling cron jobs. A cron job can periodically trigger migrations by calling mmapplypolicy with ltfsee as an external program; in this case, the full path to ltfsee must be specified. The following steps start the crond process and create a cron job:
1. Start the crond process by running /etc/rc.d/init.d/crond start or /etc/init.d/crond start.
2. Create a crontab job by opening the crontab editor with the crontab -e command. If using VIM to edit the jobs, press “i” to enter insert mode to start typing.
3. Enter the frequency and command that you would like to run.
4. After entering the jobs that you want to run, exit the editor. If you are using VIM, press the Escape key and enter “:wq”. If you are using nano, press Ctrl + x, which opens the save options. Then, press y to save the file and press Enter to confirm the file name.
5. View that the cron job has been created by running crontab -l.
The syntax for a cron job is m h dom mon dow command. In this syntax, m stands for minutes, h stands for hours, dom stands for day of month, mon stands for month, and dow stands for day of week. The hour parameter is in a 24 hour period, so 0 represents 12 AM and 12 represents 12 PM.
Example 7-59 shows how to start the crond process and create a single cron job that performs migrations every 6 hours.
Example 7-59 Creating a cron job that runs migrations every 6 hours
[root@ltfseesrv ~]# /etc/rc.d/init.d/crond start
Starting crond: [ OK ]
 
[root@carbite ~]# crontab -e
00 0,6,12,18 * * * /usr/lpp/mmfs/bin/mmapplypolicy gpfs -p /root/premigration_policy.txt -B 10000 -m 16
crontab: installing new crontab
 
[root@carbite ~]# crontab -l
00 0,6,12,18 * * * /usr/lpp/mmfs/bin/mmapplypolicy gpfs -p /root/premigration_policy.txt -B 10000 -m 16
Thresholds
By default, recall jobs have a higher priority than migration jobs. A threshold parameter is available as an option of the ltfsee command; it determines the percentage usage of the IBM Spectrum Scale file system at which migrations are preferred over recalls. The default value is 95%. If this value is exceeded for one of the managed file systems, migrations are given a higher priority. Recalls are preferred again after the file system usage drops by 5%. For example, if a threshold of 93% is selected, recalls are preferred again when the file system usage is at or below 88%. In most environments, you do not need to change this setting; however, Example 7-60 shows an example of reducing the threshold from 95% to 80%.
Example 7-60 Threshold
[root@ltfs97 ~]# ltfsee threshold
Current threshold: 95%
 
[root@ltfs97 ~]# ltfsee threshold 80
 
[root@ltfs97 ~]# ltfsee threshold
Current threshold: 80%
7.7.4 Replicas and redundant copies
This section introduces how replicas and redundant copies are used with IBM Spectrum Archive EE and describes how to create replicas of migrated files during the migration process.
Overview
IBM Spectrum Archive EE enables the creation of a replica of each IBM Spectrum Scale file during the migration process. The purpose of the replica function is to enable the creation of multiple LTFS copies of each GPFS file during migration, which can be used for disaster recovery, including across two tape libraries at two different locations.
The first replica is the primary copy, and additional replicas are called redundant copies. Redundant copies must be created in tape cartridge pools that are different from the pool of the primary replica and from the pools of other redundant copies. Up to two redundant copies can be created (for a total of three copies of the file on different tapes). The tape cartridge where the primary replica is stored and the tape cartridges that contain the redundant copies are referenced in the GPFS inode with an IBM Spectrum Archive EE DMAPI attribute. The primary replica is always listed first.
For transparent recalls, such as when a file is double-clicked or read by an application, IBM Spectrum Archive EE always performs the recall by using the primary copy tape. The primary copy is on the first tape cartridge pool that is defined by the migration process. If the primary copy tape cannot be accessed, including when a recall fails, IBM Spectrum Archive EE automatically retries the recall job by using the remaining replicas, if they were created during the initial migration process. This automatic retry operation is transparent to the transparent recall requester.
For selective recalls that are initiated by the ltfsee recall command, an available copy is selected from the available replicas and the recall job is generated against the selected tape cartridge. There are no retries. The selection is based on the available copies in the tape library, which is supplied by the -l option in a two-tape-library environment.
When a migrated file is recalled for a write operation or truncated, the file is marked as resident and the pointers to tape are dereferenced. The remaining copies are no longer referenced and are removed during the reconciliation process. A truncation to 0 does not generate a recall from tape; the truncated file is marked as resident only.
Redundant copies are written to their corresponding tape cartridges in the IBM Spectrum Archive EE format. These tape cartridges can be reconciled, exported, reclaimed, or imported by using the same commands and procedures that are used for standard migration without replica creation.
Creating replicas and redundant copies
You can create replicas and redundant copies during automated IBM Spectrum Scale policy-based migrations or during manual migrations by running the ltfsee migrate (or ltfsee premigrate) command.
If an IBM Spectrum Scale scan policy file is used to specify files for migration, it is necessary to modify the OPTS line of the policy file to specify the tape cartridge pool for the primary replica and different tape cartridge pools for each redundant copy. The tape cartridge pool for the primary replica (including the primary library) is listed first, followed by the tape cartridge pools for each copy (including a secondary library, if used). A pool cannot be listed more than once in the OPTS line; if a pool is listed more than once per line, the file is not migrated. Example 7-61 shows the OPTS line in a policy file that makes replicas of files in two tape cartridge pools in a single tape library.
Example 7-61 Extract from IBM Spectrum Scale policy file for replicas
OPTS '-p PrimPool@PrimLib CopyPool@PrimLib'
For more information about IBM Spectrum Scale policy files, see 7.7.2, “Threshold-based migration” on page 152.
If you are running the ltfsee migrate (or ltfsee premigrate) command, a scan list file must be passed along with the designated pools that the user wants to migrate the files to. This process can be done by calling ltfsee migrate -s <scan list> -p <pool1> <pool2> <pool3> (or ltfsee premigrate).
Example 7-62 shows what the scan list looks like when selecting files to migrate.
Example 7-62 Example scan list file
[root@ltfs97 /]# cat migrate.txt
 -- /ibm/glues/document10.txt
 -- /ibm/glues/document20.txt
Example 7-63 shows how to run a manual migration by using the ltfsee migrate command on a scan list like the one in Example 7-62, specifying two tape cartridge pools so that a replica is created.
Example 7-63 Creation of replicas during migration
[root@ltfs97]# ltfsee migrate mig -p primary_ltfs@lib_lto copy_ltfs@lib_lto
GLESL167I(00400): A list of files to be migrated has been sent to LTFS EE using scan id 3190476033.
GLESL038I(00448): Migration result: 2 succeeded, 0 failed, 0 duplicate, 0 duplicate wrong pool, 0 not found, 0 too small to qualify for migration, 0 too early for migration.
IBM Spectrum Archive EE attempts to create redundant copies as efficiently as possible with a minimum number of mount and unmount steps. For example, if all tape drives are loaded with tape cartridges that belong only to the primary copy tape cartridge pool, data is written to them before IBM Spectrum Archive EE begins loading the tape cartridges that belong to the redundant copy tape cartridge pools. For more information, see 3.4.2, “IBM Spectrum Archive EE metadata file system” on page 51.
By monitoring the ltfsee info jobs command as the migration is running, you can observe the status of the migration and migration (copy) jobs changing, as shown in Example 7-64.
Example 7-64 Job status during migration
[root@carbite ~]# ltfsee info jobs
Job Type Status Idle(sec) Scan ID Tape Pool Library Node File Name or inode
Migration In-progress 20 852297473 IM1178L6 primary lto_ts4500 1 /ibm/gpfs/LTFS_EE_FILE_Fa43WzQKcAeP2_j2XT9t.bin
Migration In-progress 20 852297473 IM1178L6 primary lto_ts4500 1 /ibm/gpfs/LTFS_EE_FILE_JmkgSoqPX9FhRoDlgTdCnDttA3Ee0A24pfQfbeva2ruRvTp_mXYbKW.bin
Migration In-progress 20 852297473 IM1178L6 primary lto_ts4500 1 /ibm/gpfs/LTFS_EE_FILE_lYO8ucO68bAzXBgPOTSKWHhU2lSv_LFFDAd.bin
 
[root@carbite ~]# ltfsee info jobs
Job Type Status Idle(sec) Scan ID Tape Pool Library Node File Name or inode
Migration Copied 20 852297473 IM1178L6 primary lto_ts4500 1 /ibm/gpfs/LTFS_EE_FILE_Fa43WzQKcAeP2_j2XT9t.bin
Migration Copied 20 852297473 IM1178L6 primary lto_ts4500 1 /ibm/gpfs/LTFS_EE_FILE_JmkgSoqPX9FhRoDlgTdCnDttA3Ee0A24pfQfbeva2ruRvTp_mXYbKW.bin
Migration Copied 20 852297473 IM1178L6 primary lto_ts4500 1 /ibm/gpfs/LTFS_EE_FILE_lYO8ucO68bAzXBgPOTSKWHhU2lSv_LFFDAd.bin
For more information and command syntax, see the ltfsee migrate command in 7.7, “Migration” on page 150.
Considerations
Consider the following points when replicas are used:
Redundant copies must be created in different tape cartridge pools. The pool of the primary replica must be different from the pool for the first redundant copy, which, in turn, must be different from the pool for the second redundant copy.
The migration of a premigrated file does not create replicas.
If offsite tapes are required, redundant copies can be exported out of the tape library and shipped to an offsite location after an offline export is run. A second option is to create the redundant copy in a different tape library.
7.7.5 Migration hints and tips
This section provides preferred practices for successfully managing the migration of files.
Overlapping IBM Spectrum Scale policy rules
After a file is migrated to a tape cartridge pool and is in the migrated state, it cannot be migrated to other tape cartridge pools (unless it is first recalled). It is preferable that you do not use overlapping IBM Spectrum Scale policy rules within different IBM Spectrum Scale policy files that can select the same files for migration to different tape cartridge pools. If a file is already migrated, a later migration fails.
In the following example, an attempt is made to migrate four files to the tape cartridge pool CopyPool. Before the migration attempt, tape 2MA260L5, which is defined in a different tape cartridge pool (PrimPool), already contains three of the four files. The state of the files before the migration attempt is shown by the ltfsee info files command in Example 7-65.
Example 7-65 Before migration
[root@ltfs97 gpfs]# ltfsee info files *.ppt
Name: fileA.ppt
Tape id:2MA260L5@lib_lto Status: migrated
Name: fileB.ppt
Tape id:2MA260L5@lib_lto Status: migrated
Name: fileC.ppt
Tape id:2MA260L5@lib_lto Status: migrated
Name: fileD.ppt
Tape id:- Status: resident
The attempt to migrate the files to a different tape cartridge pool produces the results that are shown in Example 7-66.
Example 7-66 Attempted migration of already migrated files
[root@ltfs97 gpfs]# ltfsee migrate mig -p CopyPool@lib_lto
GLESL167I(00400): A list of files to be migrated has been sent to LTFS EE using scan id 3353727489.
GLESL159E(00440): Not all migration has been successful.
GLESL038I(00448): Migration result: 1 succeeded, 3 failed, 0 duplicate, 0 duplicate wrong pool, 0 not found, 0 too small to qualify for migration, 0 too early for migration
If the IBM Spectrum Archive EE log is viewed, the error messages that are shown in Example 7-67 explain the reason for the failures.
Example 7-67 Migration errors reported in the IBM Spectrum Archive EE log file
2016-12-13T21:17:41.554526-07:00 ltfs97 mmm[3743]: GLESM148E(00538): File /ibm/gpfs/fileA.ppt is already migrated and will be skipped.
2016-12-13T21:17:41.555037-07:00 ltfs97 mmm[3743]: GLESM148E(00538): File /ibm/gpfs/fileB.ppt is already migrated and will be skipped.
2016-12-13T21:17:41.555533-07:00 ltfs97 mmm[3743]: GLESM148E(00538): File /ibm/gpfs/fileC.ppt is already migrated and will be skipped.
The files on tape 2MA260L5 (fileA.ppt, fileB.ppt, and fileC.ppt) are already in storage pool PrimPool. Therefore, the attempt to migrate them to storage pool CopyPool produces a migration result of “Failed”. Only the attempt to migrate the resident file fileD.ppt succeeds.
If the aim of this migration was to make redundant replicas of the four PPT files in the CopyPool tape cartridge pool, the method that is described in 7.7.4, “Replicas and redundant copies” on page 161 must be followed instead.
IBM Spectrum Scale policy for the .SPACEMAN directory
Prevent migration of the .SPACEMAN directory of an IBM Spectrum Scale file system by excluding the directory with an IBM Spectrum Scale policy rule. An example is shown in Example 7-56 on page 156.
Automated IBM Spectrum Scale policy-driven migration
To ensure that a specified GPFS file system is migrated only once, run the mmapplypolicy command with the --single-instance option. The --single-instance option ensures that multiple mmapplypolicy commands are not running in parallel, because it might take longer than two minutes to migrate a list of files to tape cartridges.
Tape format
For more information about the format of tapes that are created by the migration process, see 11.2, “Formats for IBM Spectrum Scale to IBM Spectrum Archive EE migration” on page 348.
Migration Policy
A migration policy simplifies ongoing operations. When the policy is run, IBM Spectrum Scale scans the IBM Spectrum Scale namespace and identifies all candidate files to be migrated onto tape. This process saves considerable time because users do not need to manually search the file system for migration candidates, which is especially valuable when millions of files have been created. For use cases on migration policies, see 8.10, “Real world use cases for mmapplypolicy” on page 214.
7.8 Premigration
A premigrated file is a file that is on both disk and tape. To change a file to a premigrated state, you have two options:
Recalling migrated files:
a. The file initially is only on a disk (the file state is resident).
b. The file is migrated to tape by running ltfsee migrate, after which the file is a stub on the disk (the file state is migrated) and the IDs of the tapes containing the redundant copies are written to an IBM Spectrum Archive EE DMAPI attribute.
c. The file is recalled from tape by using a recall for read when a client attempts to read from the file, and it is both on disk and tape (the file state is premigrated).
Premigrating files:
a. The file initially is only on disk (the file state is resident).
b. The file is premigrated to tape by running ltfsee premigrate. The IDs of the tapes containing the redundant copies are written to an IBM Spectrum Archive EE DMAPI attribute.
Premigration works similarly to migration:
1. The premigration scan list file has the same format as the migration scan list file.
2. Up to two more redundant copies are allowed (the same as with migration).
3. Manual premigration is available by running either ltfsee premigrate or mmapplypolicy.
4. Automatic premigration is available by running ltfsee premigrate through the mmapplypolicy/mmaddcallback command or a cron job.
5. Migration hints and tips are applicable to premigration.
For the ltfsee migrate command, each migrate job is achieved internally by splitting the work into three steps:
1. Writing the content of the file to tapes, including redundant copies
2. Writing the IDs of the tapes containing the redundant copies of the file to an IBM Spectrum Archive EE DMAPI attribute
3. Stubbing the file on disk
For premigration, step 3 is not performed, and the omission of this step is the only difference between premigration and migration.
7.8.1 Premigration with the ltfsee premigrate command
The ltfsee premigrate command is used to premigrate non-empty regular files to tape. The command syntax is the same as for the ltfsee migrate command. The following is an example of the syntax:
ltfsee premigrate -s <GPFS scan list file> -p <target tape cartridge pool 1> <target tape cartridge pool 2> <target tape cartridge pool 3>
The <GPFS scan list file> file includes the list of non-empty regular files to be premigrated. Each line of this file must end with -- <full path filename>. All file system objects are saved to the specified target tape cartridge pool. Optionally, the target tape cartridge pool can be followed by up to two more tape cartridge pools (for redundant copies) separated by spaces.
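A minimal sketch of a direct premigration follows; the file paths, scan list name, and pool and library names are illustrative:
printf ' -- %s\n' /ibm/gpfs/projectA/data1.bin /ibm/gpfs/projectA/data2.bin > /root/premig_list.txt
ltfsee premigrate -s /root/premig_list.txt -p PrimPool@lib_lto CopyPool@lib_lto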
7.8.2 Premigration running the mmapplypolicy command
To perform premigration by running the mmapplypolicy command, the THRESHOLD clause is used to determine the files for premigration. There is no IBM Spectrum Scale premigrate command, and the default behavior is to not premigrate files.
The THRESHOLD clause can have the following parameters to control migration and premigration:
THRESHOLD (high percentage, low percentage, premigrate percentage)
If no premigrate threshold is set with the THRESHOLD clause or a value is set greater than or equal to the low threshold, then the mmapplypolicy command does not premigrate files. If the premigrate threshold is set to zero, the mmapplypolicy command premigrates all files.
For example, the following rule premigrates all files if the storage pool occupancy is 0 - 30%. When the storage pool occupancy is 30% or higher, files are migrated until the storage pool occupancy drops below 30%. Then, it continues by premigrating all files:
RULE 'premig1' MIGRATE FROM POOL 'system' THRESHOLD (0,30,0) TO POOL 'ltfs'
The rule in the following example takes effect when the storage pool occupancy is higher than 50%. Then, it migrates files until the storage pool occupancy is lower than 30%, after which it premigrates the remaining files:
RULE 'premig2' MIGRATE FROM POOL 'system' THRESHOLD (50,30,0) TO POOL 'ltfs'
The rule in the following example is configured so that if the storage pool occupancy is below 30%, it selects all files that are larger than 5 MB for premigration. Otherwise, when the storage pool occupancy is 30% or higher, the policy migrates files that are larger than 5 MB until the storage pool occupancy drops below 30%. Then, it continues by premigrating all files that are larger than 5 MB:
RULE 'premig3' MIGRATE FROM POOL 'system' THRESHOLD (0,30,0) TO POOL 'ltfs' WHERE( AND (KB_ALLOCATED > 5120))
The rule in the following example is the preferred rule when performing premigrations only. It requires a callback to perform the stubbing. If the storage pools occupancy is below 100%, it selects all files larger than 5 MB for premigration. By setting the threshold to 100% the storage pools occupancy will never exceed this value. Therefore, migrations will not be performed, and a callback is needed to run the stubbing. For an example of a callback, see 8.10.2, “Creating active archive system policies” on page 215.
RULE 'premig4' PREMIGRATE FROM POOL 'system' THRESHOLD (0,100,0) TO POOL 'ltfs' WHERE (FILE_SIZE > 5242880)
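A policy file that contains one of these rules can be applied with the mmapplypolicy command. The following sketch assumes that the rule is stored in a file named /root/premig.policy and that the file system device is gpfs; both names are placeholders for your environment:
mmapplypolicy gpfs -P /root/premig.policy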
7.9 Preserving file system objects on tape
Symbolic links, empty regular files, and empty directories are file system objects that do not contain data or content. You cannot use the migration or premigration commands to save these types of file system objects. HSM is used to move data to and from tapes, that is, for space management; because these file system objects have no data, they cannot be processed by migration or premigration. A new driver (called the save driver) was introduced to save these file system objects to tape.
The following items (data and metadata that is associated with an object) are written and read to and from tapes:
File data for non-empty regular files
Path and file name for all objects
Target name, only for symbolic links
User-defined extended attributes for all objects except symbolic links
The following items are not written and read to and from tapes:
Timestamps
User ID and group ID
ACLs
To save these file system objects on tape, you have two options:
Calling the ltfsee save command directly with a scan list file
An IBM Spectrum Scale policy with the mmapplypolicy command
7.9.1 Saving file system objects with the ltfsee save command
The ltfsee save command is used to save symbolic links, empty regular files, and empty directories to tape. The command syntax is the same as the ltfsee migrate or ltfsee premigrate commands. The following is the syntax of the ltfsee save command:
ltfsee save -s <GPFS scan list file> -p <target tape cartridge pool 1> <target tape cartridge pool 2> <target tape cartridge pool 3>
The <GPFS scan list file> file includes the list of file system objects (symbolic links, empty regular files, and empty directories) to be saved. Each line of this file must end with -- <full path file system object name>. All file system objects are saved to the specified target tape cartridge pool. Optionally, the target tape cartridge pool can be followed by up to two more tape cartridge pools (for redundant copies) separated by spaces.
 
Note: This command is not applicable for non-empty regular files.
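As a minimal sketch, assuming a scan list file named /tmp/save.scan (each line ending with -- <full path file system object name>) and a pool named PrimPool, a save request might look like the following; both names are placeholders:
ltfsee save -s /tmp/save.scan -p PrimPool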
7.9.2 Saving file system objects with policies
Migration and premigration cannot be used for file system objects that do not occupy space for data. To save file system objects, such as symbolic links, empty regular files, and empty directories, with an IBM Spectrum Scale policy, the IBM Spectrum Scale list rule must be used.
A working policy sample of IBM Spectrum Scale list rules to save these file system objects without data to tape can be found in the /opt/ibm/ltfsee/data/sample_save.policy file. The only change that is required to the following sample policy file is the specification of the tape cartridge pool (sample_pool in the OPTS line). These three list rules can be integrated into existing IBM Spectrum Scale policies. Example 7-68 shows the sample policy.
Example 7-68 Sample policy to save file system objects without data to tape
/*
Sample policy rules
to save
symbolic links,
empty directories and
empty regular files
*/
 
RULE
EXTERNAL LIST 'emptyobjects'
EXEC '/opt/ibm/ltfsee/bin/ltfseesave'
OPTS '-p sample_pool'
 
define(DISP_XATTR,
CASE
WHEN XATTR($1) IS NULL
THEN '_NULL_'
ELSE XATTR($1)
END
)
 
RULE 'symboliclinks'
LIST 'emptyobjects'
DIRECTORIES_PLUS
/*
SHOW ('mode=' || SUBSTR(MODE,1,1) ||
' stime=' || DISP_XATTR('dmapi.IBMSTIME') ||
' ctime=' || VARCHAR(CHANGE_TIME) ||
' spath=' || DISP_XATTR('dmapi.IBMSPATH'))
*/
WHERE
( /* if the object is a symbolic link */
MISC_ATTRIBUTES LIKE '%L%'
)
AND
(
PATH_NAME NOT LIKE '%/.SpaceMan/%'
)
AND
(
( /* if the object has not been saved yet */
XATTR('dmapi.IBMSTIME') IS NULL
AND
XATTR('dmapi.IBMSPATH') IS NULL
)
OR
( /* if the object is modified or renamed after it was saved */
TIMESTAMP(XATTR('dmapi.IBMSTIME')) < TIMESTAMP(CHANGE_TIME)
OR
XATTR('dmapi.IBMSPATH') != PATH_NAME
)
)
 
RULE 'directories'
LIST 'emptyobjects'
DIRECTORIES_PLUS
/*
SHOW ('mode=' || SUBSTR(MODE,1,1) ||
' stime=' || DISP_XATTR('dmapi.IBMSTIME') ||
' ctime=' || VARCHAR(CHANGE_TIME) ||
' spath=' || DISP_XATTR('dmapi.IBMSPATH'))
*/
WHERE
( /* if the object is a directory */
MISC_ATTRIBUTES LIKE '%D%'
)
AND
(
PATH_NAME NOT LIKE '%/.SpaceMan'
AND
PATH_NAME NOT LIKE '%/.SpaceMan/%'
)
AND
(
( /* directory's emptiness is checked in the later processing */
/* if the object has not been saved yet */
XATTR('dmapi.IBMSTIME') IS NULL
AND
XATTR('dmapi.IBMSPATH') IS NULL
)
OR
( /* if the object is modified or renamed after it was saved */
TIMESTAMP(XATTR('dmapi.IBMSTIME')) < TIMESTAMP(CHANGE_TIME)
OR
XATTR('dmapi.IBMSPATH') != PATH_NAME
)
)
 
RULE 'emptyregularfiles'
LIST 'emptyobjects'
/*
SHOW ('mode=' || SUBSTR(MODE,1,1) ||
' stime=' || DISP_XATTR('dmapi.IBMSTIME') ||
' ctime=' || VARCHAR(CHANGE_TIME) ||
' spath=' || DISP_XATTR('dmapi.IBMSPATH'))
*/
WHERE
( /* if the object is a regular file */
MISC_ATTRIBUTES LIKE '%F%'
)
AND
(
PATH_NAME NOT LIKE '%/.SpaceMan/%'
)
AND
(
( /* if the size = 0 and the object has not been saved yet */
FILE_SIZE = 0
AND
XATTR('dmapi.IBMSTIME') IS NULL
AND
XATTR('dmapi.IBMSPATH') IS NULL
)
OR
( /* if the object is modified or renamed after it was saved */
FILE_SIZE = 0
AND
(
TIMESTAMP(XATTR('dmapi.IBMSTIME')) < TIMESTAMP(CHANGE_TIME)
OR
XATTR('dmapi.IBMSPATH') != PATH_NAME
)
)
)
7.10 Restoring non-empty regular files and file system objects from tape
The ltfsee rebuild command rebuilds a GPFS file system by restoring migrated files and saved file system objects (symbolic links, empty files, and empty directories) from tapes. The migrated files and saved file system objects are restored to a specified directory by using the files that are found on the specified tapes. If multiple versions or generations of a file are found, the latest version or generation is selected. If any of the versions or generations cannot be determined, the file that most recently was imported is renamed and two (or more) versions or generations are rebuilt or recovered.
 
Note: The use of ltfsee rebuild is for disaster scenarios, such as if the Spectrum Scale file system is lost and a rebuild is required from tape.
The ltfsee rebuild command has the following syntax:
ltfsee rebuild -P <pathName> -p <poolName> -l <libraryName> <-t tape_id_1 tape_id_2 ... tape_id_N>
<pathName> specifies the directory where the Spectrum Scale file system is rebuilt to, and <-t tape_id_1 tape_id_2 ... tape_id_N> specifies the tapes from the pool that is specified by <poolName> from the tape library that is specified by <libraryName> to search for the files to rebuild the Spectrum Scale file system.
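For example, the following sketch rebuilds files into the /ibm/gpfs/rebuild directory from two tapes in pool PrimPool of library lib_lto; the path, pool, library, and tape IDs are placeholders:
ltfsee rebuild -P /ibm/gpfs/rebuild -p PrimPool -l lib_lto -t TAPE01L5 TAPE02L5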
7.11 Recall
In space management solutions, there are two different types of recall possibilities: transparent and selective recall processing. Both are possible with the current IBM Spectrum Archive EE implementation.
Transparent recalls are initiated by an application that tries to read, write, or truncate a migrated file while not being aware that it was migrated. The specific I/O request that initiated the recall of the file is fulfilled, with a possible delay because the file data is not available immediately (it is on tape).
For transparent recalls, it is difficult to optimize because it is not possible to predict when the next transparent recall will happen. Some optimization is already possible because, within the IBM Spectrum Archive EE job queue, the requests are run in an order that is based on the tape and the starting block to which a file was migrated. This becomes effective only if requests happen close together in time. Furthermore, with the default IBM Spectrum Archive EE settings, at most 60 transparent recalls can be on the IBM Spectrum Archive EE job queue at a time. A 61st request is queued only after one of the previous 60 transparent recall requests completes. Therefore, the ordering can happen only within this small subset of 60 transparent recalls. It is up to the application to send transparent recalls in parallel so that multiple transparent recalls run at the same time.
Selective recalls are initiated by users that are aware that the file data is on tape and they want to transfer it back to disk before an application accesses the data. This action avoids delays within the application that is accessing the corresponding files.
Contrary to transparent recalls, the performance objective for selective recalls is to provide the best possible throughput for the complete set of files that is being recalled, disregarding the response time for any individual file. However, to provide reasonable response times for transparent recalls in scenarios where a recall of many files is in progress, the processing of transparent recalls is given higher priority than selective recalls. Selective recalls are performed differently from transparent recalls, so they do not have the same queue limitation as transparent recalls.
Recalls have higher priority than other IBM Spectrum Archive EE operations. For example, if there is a recall request for a file on a tape cartridge that is being reclaimed or for a file on the tape cartridge that is being used as the reclamation target, the reclamation job is stopped, the recall or recalls from that tape cartridge are served, and then the reclamation resumes automatically.
Recalls also have higher priority than tape premigration processing. Recall requests are optimized across tapes and within each tape that is used for premigration activities, and recalls that are in close proximity on tape are given priority.
7.11.1 Transparent recall
Transparent recall processing automatically returns migrated file data to its originating local file system when you access it. After the data is recalled by reading the file, the HSM client leaves the copy of the file in the tape cartridge pool, but changes it to a premigrated file because an identical copy exists on your local file system and in the tape cartridge pool. If you do not modify the file, it remains premigrated until it again becomes eligible for migration. A transparent recall process waits for a tape drive to become available.
If you modify or truncate a recalled file, it becomes a resident file. The next time your file system is reconciled, MMM marks the stored copy for deletion.
The order of selection from the replicas is always the same: the primary copy is always selected first for the recall. If the recall from the primary copy tape fails or the tape is not accessible, IBM Spectrum Archive EE automatically retries the transparent recall operation against the other replicas, if they exist.
 
Note: Transparent recall is used most frequently because it is activated when you access a migrated file, such as opening a file.
7.11.2 Selective recall
Use selective recall processing if you want to return specific migrated files to your local file system. The access time (atime) changes to the current time when you selectively recall a migrated file.
To selectively recall files, run the tail command or any similar command. For example, the command that is shown in Example 7-69 recalls a file that is named file6.img to the /ibm/glues directory.
Example 7-69 Recall a single file
[root@ltfs97 glues]# tail /ibm/glues/file6.img
No message is displayed to confirm the successful recall of the file; however, if there is an error message, it is logged to the dsmerror.log file. The ltfsee info files command can be used to verify a successful recall. After a successful recall, the file status changes from migrated to premigrated.
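For example, you can check the state of the file after the recall with the following command; the file name matches Example 7-69, and the Status field should change from migrated to premigrated after a successful recall:
ltfsee info files /ibm/glues/file6.img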
7.11.3 Recalling files with the ltfsee recall command
The ltfsee recall command performs selective recalls of migrated files to the local file system. This command performs selective recalls in multiple ways:
Using a recall list file
Using an IBM Spectrum Scale scan list file
From the output of another command
Using an IBM Spectrum Scale scan list file that is generated through an IBM Spectrum Scale policy and the mmapplypolicy command
With multiple tape libraries configured, the ltfsee recall command requires the -l option to specify the tape library from which to recall. When a file is recalled, the recall can occur on any of the tapes (that is, either primary or redundant copies) from the specified tape library. The following conditions are applied to determine the best replica:
The condition of the tape
If a tape is mounted
If a tape is mounting
If there are jobs that are assigned to a tape
If conditions are equal between certain tapes, the primary tape is preferred over the redundant copy tapes, and the secondary tape is preferred over the third tape. These rules are necessary to make the tape selection predictable. However, there are no automatic retries like there are with transparent recalls.
For example, if a primary tape is not mounted but a redundant copy is, the redundant copy tape is used for the recall job to avoid unnecessary mount operations.
If the specified tape library does not have any replicas, IBM Spectrum Archive EE automatically resubmits the request to the other tape library to process the bulk recalls:
Three copies: TAPE1@Library1 TAPE2@Library1 TAPE3@Library2
 – If -l Library1 => TAPE1 or TAPE2
 – If -l Library2 => TAPE3
Two copies: TAPE1@Library1 TAPE2@Library1
 – If -l Library1 => TAPE1 or TAPE2
 – If -l Library2 => TAPE1 or TAPE2
The ltfsee recall command
The ltfsee recall command is used to recall non-empty regular files from tape. The command syntax is the same as the ltfsee migrate command. Here are some examples of the syntax:
ltfsee recall -l <library_name> -f <recall list file>
The <recall list file> file includes a list of non-empty regular files to be recalled. Each line contains the file name with an absolute path or a relative path based on the working directory.
ltfsee recall -l <library_name> -s <GPFS scan list file>
The <GPFS scan list file> file includes the list of non-empty regular files to be recalled. Each line of this file must end with “-- <full path filename>”.
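As a minimal sketch, a recall list file contains one file name per line, with an absolute path or a path relative to the working directory; the list file name, library name, and paths below are placeholders:
cat /tmp/recall.list
/ibm/glues/project/data1.bin
/ibm/glues/project/data2.bin
ltfsee recall -l lib_ltfsml1 -f /tmp/recall.list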
The ltfsee recall command with the output of another command
The ltfsee recall command can take as input the output of other commands through a pipe. In Example 7-70, all files with names ending with .bin are recalled under the /ibm/gpfs/production directory, including subdirectories. Thus, it is convenient to recall whole directories with a simple command.
Example 7-70 ltfsee recall command with the output of another command
[root@ltfsml1 ~]# find /ibm/gpfs/production -name "*.bin" -print | ltfsee recall -l lib_ltfsml1
GLESL277I(00318): The ltfsee recall command is called without specifying an input file waiting for standard input.
If necessary, press ^D to exit.
GLESL268I(00142): 4 file name(s) have been provided to recall.
GLESL263I(00191): Recall result: 4 succeeded, 0 failed, 0 duplicate, 0 not migrated, 0 not found.
Example 7-71 shows the output of the command that is run in Example 7-70.
Example 7-71 ltfsee info jobs command output from the ltfsee recall command in Example 7-70
[root@ltfsml1 ~]# ltfsee info jobs
Job Type Status Idle(sec) Scan ID Tape Pool Library Node File Name or inode
Selective Recall In-progress 2 3108835585 2FC140L5 copy_ltfsml1 lib_ltfsml1 2 10374858
Selective Recall In-progress 2 3108835585 2FC141L5 copy_ltfsml1 lib_ltfsml1 2 10374888
Selective Recall Unscheduled 2 3108835585 2FC140L5 copy_ltfsml1 lib_ltfsml1 - 10374875
Selective Recall Unscheduled 2 3108835585 2FC140L5 copy_ltfsml1 lib_ltfsml1 - 10374877
7.11.4 The ltfsee recall_deadline command
When multiple recalls occur against one tape, the recalls are added to the IBM Spectrum Archive EE job queue and reordered according to their placement on tape. This reordering is done to optimize the order of file recalls. During the recall processing, new recall requests can occur. A new recall is added to the queue and then all the recalls on queue are reordered. Because of this continual reordering, some recall requests can stay in the queue as unscheduled for a long time.
Transparent (or automatic) recalls are processed with higher priority than selective recalls because they have stronger requirements regarding response time. Transparent recalls can be even further prioritized based on the recall_deadline setting, but selective recalls always are processed according to their placement on tape.
IBM Spectrum Archive EE has a recall feature that optimizes the order of file recalls from tapes. This process orders recall requests according to the starting block of each file on tape. To prevent any single transparent recall request from being starved, a recall deadline timeout value is implemented, which defaults to 120 seconds. If a transparent recall request remains unscheduled for more than the recall deadline timeout value longer than the average recall request, it is considered to have timed out according to the recall deadline and is processed with a higher priority. Adjusting the default recall deadline timeout value provides better transparent recall performance and allows flexible adjustment of transparent recall behavior based on observed performance.
If this option is set to 0, the recall deadline queue is disabled, meaning that all transparent recalls are processed according to the starting block of the file on tapes.
 
Note: The recall deadline handling applies only to transparent recalls.
Example 7-72 shows the command to view the current recall deadline value.
Example 7-72 View the current recall deadline value
[root@ltfsml1 ~]# ltfsee recall_deadline
Library name: lib_ltfsml2, library id: 000001300228_LLA, control node (MMM) IP address: 9.11.121.227.
GLESL300I(00247): Recall deadline timeout is set to 120.
Library name: lib_ltfsml1, library id: 000001300228_LLC, control node (MMM) IP address: 9.11.121.122.
GLESL300I(00247): Recall deadline timeout is set to 120.
Example 7-73 shows the command to set the recall deadline to a new value.
Example 7-73 Set the recall deadline to a new value
[root@ltfsml1 ~]# ltfsee recall_deadline 240
Library name: lib_ltfsml2, library id: 000001300228_LLA, control node (MMM) IP address: 9.11.121.227.
GLESL300I(00247): Recall deadline timeout is set to 240.
Library name: lib_ltfsml1, library id: 000001300228_LLC, control node (MMM) IP address: 9.11.121.122.
GLESL300I(00247): Recall deadline timeout is set to 240.
7.11.5 Read Starts Recalls: Early trigger for recalling a migrated file
IBM Spectrum Archive EE can define a stub size for migrated files so that the initial bytes of a migrated file, up to the stub size, are kept on disk while the entire file is migrated to tape. The migrated file bytes that are kept on disk are called the stub. Reading from the stub does not trigger a recall of the rest of the file. After the file is read beyond the stub, the recall is triggered, and it might take a long time until the entire file is read from tape (because a tape mount might be required, and it takes time to position the tape before data can be recalled).
When Read Start Recalls (RSR) is enabled for a file, the first read from the stub file triggers a recall of the complete file in the background (asynchronous). Reads from the stubs are still possible while the rest of the file is being recalled. After the rest of the file is recalled to disks, reads from any file part are possible.
With the Preview Size (PS) value, a preview size can be set to define the initial part of the file for which reads from the resident file part do not trigger a recall. Typically, the PS value is set large enough that an application can determine whether it needs the rest of the file without triggering a recall every time a stub is read. This is important to prevent unintended massive recalls. The PS value can be set only to a value smaller than or equal to the stub size.
This feature is useful, for example, when playing migrated video files. While the initial stub size part of a video file is played, the rest of the video file can be recalled to prevent a pause when it plays beyond the stub size. You must set the stub size and preview size to be large enough to buffer the time that is required to recall the file from tape without triggering recall storms.
For more usage details, see “ltfsee fsopt” on page 324.
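As an illustrative sketch only, the stub size, preview size, and Read Starts Recalls setting might be configured per file system along the following lines. The option names and values shown here are assumptions; see “ltfsee fsopt” on page 324 for the authoritative syntax:
ltfsee fsopt update -S 10485760 -P 1048576 -R yes -g /ibm/glues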
7.12 Repairing files to their resident state
This section describes the ltfsee repair command. However, this command should rarely be used by itself; the recover command in section 7.13, “Handling of write-failure tapes” on page 178 internally uses this command to fix the file states when recovering from a write failure. The ltfsee repair command is used to repair a file or object by changing its state to Resident when the tape (or tapes) that were used for migration, premigration, or save are not available. The ltfsee repair command is available only for files in the Premigrated state. This option removes the metadata on IBM Spectrum Scale that is used to keep the file or object state. Here is an example of the syntax:
ltfsee repair <pathName>
<pathName> specifies the path name of the file to be repaired to the Resident state.
A typical usage of the ltfsee repair command is when a tape malfunctions and the files on the tape must be marked Resident again to allow for migration, premigration, or save again. After they are migrated, premigrated, or saved again, the files are on a primary tape and redundant copy tapes.
When a tape goes to the Critical, Write Fenced, or Warning state, the number of tapes that can be used for recalls is reduced. For example, if a file is migrated to two tapes (one primary and one redundant copy tape) and one of those tapes malfunctions, the file now can only be recalled from the remaining tape (redundancy is reduced).
 
Note: To prevent further damage to a tape when redundant copies exist, recalls favor valid tapes over Critical, Write Fenced, or Warning tapes. When there is only one copy, recalls occur on that tape.
Use the following procedure:
1. Recall the migrated files on the malfunctioning tape by using the remaining tape.
2. Mark those recalled files as Resident.
3. Migrate the files to tape again, which regains the two copy redundancy.
After recalling files, the files are in the Premigrated state (the data content is on disk and on tape). Now that the data content is on disk, you can mark the files as Resident by removing all metadata from the IBM Spectrum Scale files by running the ltfsee repair command. This task can be easily done by using an IBM Spectrum Scale policy that selects files that are in the Premigrated state and are on the bad tape, and then calls an external script that runs the ltfsee repair command.
A sample IBM Spectrum Scale policy to select all premigrated files that are on the malfunctioning tape is shown in Example 7-74. In this sample policy, replace the tape VOLSER ID with your tape VOLSER ID.
Example 7-74 Sample IBM Spectrum Scale policy
/*
Sample policy rules to make premigrated files to resident
*/
 
define(is_premigrated,(MISC_ATTRIBUTES LIKE '%M%' AND MISC_ATTRIBUTES NOT LIKE '%V%'))
 
RULE
EXTERNAL LIST 'premigrated_files'
EXEC './make_resident.sh'
OPTS 'premig_to_resident.list'
 
RULE 'rule_premigrated_files'
LIST 'premigrated_files'
WHERE PATH_NAME NOT LIKE '%/.SpaceMan/%'
AND is_premigrated
AND XATTR('dmapi.IBMTPS') LIKE '%VOLSER ID%'
AND XATTR('dmapi.IBMTPS') LIKE '%:%'
Example 7-75 shows the make_resident.sh script that is referenced in Example 7-74. The script calls the ltfsee repair command.
Example 7-75 The make_resident.sh script
#!/bin/bash
# $1: TEST or LIST command
# $2: GPFS policy scan result file
# $3: File name to backup GPFS policy scan result file
 
rc_latch=0
 
# Do nothing for TEST command
if [ "$1" = "TEST" ]
then
exit 0
elif [ "$1" != "LIST" ]
then
echo "usage $0 <TEST|LIST>"
exit 1
fi
 
#Save GPFS policy scan result if $3 is specified
if [ "$3" != "" ]
then
cp $2 $3
fi
 
# Obtain the premigrated file names from the GPFS scan result file and make them resident.
# Process substitution is used instead of a pipe so that rc_latch set inside the loop
# is visible to the final exit statement (a piped while loop runs in a subshell).
while read -r premig_file
do
/opt/ibm/ltfsee/bin/ltfsee repair "$premig_file"
if [ $? != 0 ]
then
rc_latch=1
fi
done < <(sed -e "s/.*-- //" "$2")
 
exit ${rc_latch}
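Assuming that the policy in Example 7-74 is saved as make_resident.policy in the same directory as make_resident.sh, it can be applied with the mmapplypolicy command; the policy file name and the device name gpfs are placeholders for your environment:
mmapplypolicy gpfs -P make_resident.policy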
At this stage, all previously migrated files on the bad tape are in the Resident state. Because these files are now resident again, the currently set IBM Spectrum Scale policy or manual operation can migrate these files to tape again (allowing for redundancy on multiple tapes).
7.13 Handling of write-failure tapes
When a tape has a write failure, the two possible outcomes for the tape are Critical and Write Fenced. In the first outcome, the tape fails to sync and the index is not written to the tape, so the tape goes into the Critical state. In the second outcome, starting with IBM Spectrum Archive EE V1.2.2.0, a write failure might still be followed by a successful sync, with the index successfully written for all files that were migrated to the tape right before the write failure. In this scenario, the tape goes into the Write Fenced state. If the tape has a write failure and the tape is in the Write Fenced state, the drive does not go into a Locked state. This behavior allows the Write Fenced tape to be unloaded, where it remains Write Fenced, so that other IBM Spectrum Archive jobs can access that drive. For both Critical and Write Fenced tapes, the preferred action is to copy the data off the failing tape and onto another tape to avoid future failures. “Recover the data” means recalling migrated files and saved objects from the tape to IBM Spectrum Scale, and copying the other files from the tape to IBM Spectrum Scale or local disk, as specified by the user.
To recover the data from the Critical and Write Fenced tapes and remove the Critical tape from LTFS LE+ to make the tape drive available again, run the ltfsee recover command. The ltfsee recover command can be used only when the tape is in the Critical state or the Write Fenced state.
For steps to recover from a write failure tape, see 10.3, “Recovering data from a write failure tape” on page 261.
7.14 Handling of read-failure tapes
When a tape has a read failure, its state is changed to Warning. Starting with Spectrum Archive EE V1.2.2, tapes that have a read failure are no longer candidates for future migration jobs, due to the poor read quality on one or more Warning tapes. It is important to copy the data on the Warning tape with the read failure over to a valid tape within the same pool to prevent further read failures or permanent damage to the Warning tape. After the data is copied over to a new tape, you can discard the Warning tape.
For steps to recover data from a read failure, see 10.4, “Recovering data from a read failure tape” on page 263.
7.15 Reconciliation
This section describes file reconciliation with IBM Spectrum Archive EE and presents considerations for the reconciliation process.
HSM is not notified upon moves, renames, or deletions of files in IBM Spectrum Scale. Therefore, over time the metadata of migrated files on IBM Spectrum Scale can diverge from their equivalents on LTFS. The goal of the reconciliation function is to synchronize the IBM Spectrum Scale namespace with the corresponding LTFS namespace (per tape cartridge) and the corresponding LTFS attributes (per tape cartridge).
The reconciliation process resolves any inconsistencies that develop between files in IBM Spectrum Scale and their equivalents in IBM Spectrum Archive EE. When files are deleted, moved, or renamed in IBM Spectrum Scale, the metadata of those files becomes out of sync with their copies in LTFS. By performing file reconciliation, it is possible to synchronize the namespace and attributes that are stored in LTFS (on tape cartridges) with the current IBM Spectrum Scale namespace and attributes. However, reconciliation works only on tape cartridges that were used in IBM Spectrum Archive EE. Tapes that were not used in LTFS Library Edition (LE) cannot be reconciled.
For each file that was deleted in IBM Spectrum Scale, the reconciliation process deletes the corresponding LTFS files and symbolic links. If the parent directory of the deleted symbolic link is empty, the parent directory is also deleted. This frees memory resources that were needed for storing the LTFS index entries of those deleted files.
For each IBM Spectrum Scale file that was moved or renamed in IBM Spectrum Scale, the reconciliation process updates, for each LTFS instance (replica) of that file, the LTFS extended attribute that contains the IBM Spectrum Scale path and the LTFS symbolic link.
Reconciliation can be performed on one or more GPFS file systems, one or more tape cartridge pools, or a set of tape cartridges. When the reconciliation process involves multiple tape cartridges, multiple IBM Spectrum Archive EE nodes and tape drives can be used in parallel. However, because recall jobs have priority, only available tape drives are used for reconciliation. After reconciliation is started, the tape cartridge cannot be unmounted until the process completes.
The following list presents limitations of the reconciliation process:
1. Only one reconciliation process can be started at a time. If an attempt is made to start a reconciliation process while another process is running, the ltfsee reconcile command fails and the following failure message appears:
GLESL098E(00774): Another reconciliation, reclamation or export job is currently executing. Wait for completion of the executing process and try again.
2. After a reconciliation process is started, new migration jobs are prevented until the reconciliation process completes on the reconciling tapes. However, if any migration jobs are running, the reconciliation process does not begin until all migration jobs complete.
3. Recalls from a tape cartridge being reconciled are not available while the reconciliation process is updating the index for that tape cartridge, which is a short step in the overall reconciliation process.
The command outputs in the following examples show the effect that reconciliation has on a file after that file is renamed in the GPFS file system. Example 7-76 shows the initial state with a single file that is called file1.img on tape.
Example 7-76 List the file on LTFS tape
[root@ltfs97 /]# ls -la /ltfs/058AGWL5/ibm/glues/file1.img*
lrwxrwxrwx 1 root root 87 Apr 9 13:40 /ltfs/058AGWL5/ibm/glues/file1.img -> /ltfs/058AGWL5/.LTFSEE_DATA/1066406549503693876-17750464391302654144-190660251-134206-0
The file is also present on the GPFS file system, as shown in Example 7-77.
Example 7-77 List the file on the GPFS file system
[root@ltfs97 /]# ls -la /ibm/glues/file1.img*
-r--r--r-- 1 root root 137248768 Mar 27 16:28 /ibm/glues/file1.img
IBM Spectrum Archive EE considers the file to be in a Premigrated state, as shown in Example 7-78.
Example 7-78 List the file in IBM Spectrum Archive EE
[root@ltfs97 /]# ltfsee info files /ibm/glues/file1.img*
File name: /ibm/glues/file1.img
Tape id:058AGWL5@lib_ltfsml1 Status: premigrated
The file is renamed to .old, as shown in Example 7-79.
Example 7-79 Rename the file
[root@ltfs97 /]# mv /ibm/glues/file1.img /ibm/glues/file1.img.old
 
[root@ltfs97 /]# ls -la /ibm/glues/file1.img*
-r--r--r-- 1 root root 137248768 Mar 27 16:28 /ibm/glues/file1.img.old
However, the file on the tape cartridge is not immediately and automatically renamed as a result of the previous change to the file in IBM Spectrum Scale. This can be confirmed by running the command that is shown in Example 7-80, which still shows the original file name.
Example 7-80 List the file on the LTFS tape cartridge
[root@ltfs97 /]# ls -la /ltfs/058AGWL5/ibm/glues/file1.img*
lrwxrwxrwx 1 root root 87 Apr 9 13:40 /ltfs/058AGWL5/ibm/glues/file1.img -> /ltfs/058AGWL5/.LTFSEE_DATA/1066406549503693876-17750464391302654144-190660251-134206-0
If you perform a reconciliation of the tape now, IBM Spectrum Archive EE synchronizes the file in IBM Spectrum Scale with the file on tape, as shown in Example 7-81.
Example 7-81 Reconcile the LTFS tape
[root@ltfs97 /]# ltfsee reconcile -t 058AGWL5 -p copy_ltfsml1 -l lib_ltfsml1
GLESS016I(00109): Reconciliation requested
GLESS049I(00610): Tapes to reconcile: 058AGWL5
GLESS050I(00619): GPFS filesystems involved: /ibm/glues
GLESS053I(00647): Number of pending migrations: 0
GLESS054I(00651): Creating GPFS snapshots:
GLESS055I(00656): Creating GPFS snapshot for /ibm/glues ( /dev/gpfs )
GLESS056I(00724): Scanning GPFS snapshots:
GLESS057I(00728): Scanning GPFS snapshot of /ibm/glues ( /dev/gpfs )
GLESS058I(00738): Removing GPFS snapshots:
GLESS059I(00742): Removing GPFS snapshot of /ibm/glues ( /dev/gpfs )
GLESS060I(00760): Processing scan results:
GLESS061I(00764): Processing scan results for /ibm/glues ( /dev/gpfs )
GLESS063I(00789): Reconciling the tapes:
GLESS001I(00815): Reconciling tape 058AGWL5 has been requested
GLESS002I(00835): Reconciling tape 058AGWL5 complete
GLESL172I(02984): Synchronizing LTFS EE tapes information
If you list the files on the tape cartridge, you can see that the file name changed. Compare Example 7-82 with the output from Example 7-80 on page 180. Even when files have been moved or renamed, recalls of those files continue to work.
Example 7-82 List the files on the LTFS tape cartridge
[root@ltfs97 /]# ls -la /ltfs/058AGWL5/ibm/glues/file1.img*
lrwxrwxrwx 1 root root 87 Apr 9 13:44 /ltfs/058AGWL5/ibm/glues/file1.img.old -> /ltfs/058AGWL5/.LTFSEE_DATA/1066406549503693876-17750464391302654144-190660251-134206-0
7.16 Reclamation
The space on tape that is occupied by deleted files is not reused during normal IBM Spectrum Archive EE operations. New data is always written after the last index on tape. The process of reclamation is similar to the same-named process in an IBM Spectrum Protect™ (formerly Tivoli Storage Manager) environment in that all active files are consolidated onto a new, empty, second tape cartridge, which improves overall tape utilization.
When files are deleted, overwritten, or edited on IBM Spectrum Archive EE tape cartridges, it is possible to reclaim the space. The reclamation function of IBM Spectrum Archive EE frees tape space that is occupied by non-referenced files and non-referenced content that is present on the tape. The reclamation process copies the files that are referenced by the LTFS index of the tape cartridge being reclaimed to another tape cartridge, updates the GPFS/IBM Spectrum Scale inode information, and then reformats the tape cartridge that is being reclaimed.
Reclamation considerations
The following considerations should be reviewed before the reclamation function is used:
Reconcile before reclaiming tape cartridges
It is preferable to perform a reconciliation of the set of tape cartridges that are being reclaimed before the reclamation process is initiated. For more information, see 7.12, “Repairing files to their resident state” on page 176. If this is not done, the reclamation might fail with the following message:
GLESL086I(01990): Reclamation has not completed since at least tape 058AGWL5 needs to be reconciled.
Scheduled reclamation
It is preferable to schedule periodic reclamation for the IBM Spectrum Archive EE tape cartridge pools.
Recall priority
Recalls are prioritized over reclamation. If there is a recall request for a file on a tape cartridge that is being reclaimed or for a file on the tape cartridge being used as the reclamation target, the reclamation job is stopped for the recall. After the recall is complete, the reclamation resumes automatically.
One tape cartridge at a time
Only one tape cartridge is reclaimed at a time. The reclamation function does not support parallel use of drives for reclaiming multiple tape cartridges simultaneously.
Use the reclaim option of the ltfsee command to start reclamation of a specified tape cartridge pool or of certain tape cartridges within a specified tape cartridge pool. The ltfsee reclaim command is also used to specify thresholds that indicate when reclamation is performed by the percentage of the available capacity on a tape cartridge.
Example 7-83 shows the results of reclaiming a single tape cartridge 058AGWL5.
Example 7-83 Reclamation of a single tape cartridge
[root@ltfs97 glues]# ltfsee reclaim -p myfirstpool -l myfirstlib -t 058AGWL5
Start reclaiming the following 1 tapes:
058AGWL5
Tape 058AGWL5 successfully reclaimed, formatted, and removed from storage pool myfirstpool.
Reclamation complete. 1 tapes reclaimed, 1 tapes removed from the storage pool.
At the end of the process, the tape cartridge is reformatted and removed from the tape cartridge pool only if the -t or -n options are used. For more information, see “ltfsee premigrate command” on page 336.
Reclamation performance
A reclaim operation might take a long time to copy all required files from one tape to another, especially when the number of files to be copied is large. In a lab demonstration, reclamation of a tape with only 2,000 remaining 1 MiB files took almost 2,000 seconds. As a rough estimate, a 2.5 TB LTO tape can hold 2,500,000 1 MiB files. Assuming that 10% of them remain on the tape when a user requests reclamation, the operation takes 250,000 seconds, that is, around 70 hours. Fitting such a time window into the schedule of a production system that uses IBM Spectrum Archive EE is difficult for common use cases. For this reason, improvements to the execution time of the reclaim operation were introduced with IBM Spectrum Archive V1R1.1.3.
A new command-line option was introduced to limit the number of files to be moved within a reclaim operation so that a user can manage the length of time of a reclaim operation to fit the given time window in the production system operations.
Another new function called quick reconcile was added so that a user does not need to run reconcile before the reclamation task. The quick reconcile operation is performed implicitly at the start of the reclaim operation, and the reclaim operation now can run without a reconcile operation before it.
 
Note: Even if the quick reconcile function is introduced, a full reconcile might be required in some rare cases before running the reclaim operation.
The chart that is shown in Figure 7-1 indicates how many seconds are required to reclaim all files on the source tape. With prior LTFS EE versions, it takes around 1 second per file. Starting with IBM Spectrum Archive (LTFS) EE V1R1.1.3 (PGA2.2), it is almost 10 times faster when the source and the target tape cartridges are on the same node.
Figure 7-1 Reclamation performance enhancement by using “quick reconcile”
The yellow line shows, for reference, the performance when the scp and ssh rm commands are used on the same node by IBM Spectrum Archive EE. When the source and the target tapes are on different nodes, the performance is worse than this reference because metadata access is required to get the list of files on the source tape. In this case, with the quick reconcile operation, the total reclamation time might become longer than without the quick reconcile. The time that is required for the quick reconcile varies with the number of files to be reconciled. The quick reconcile includes removing files that are no longer required from an IBM Spectrum Archive EE tape.
New messages were added to the log so that a user can see the progress of the operation. Some examples are shown in Example 7-84.
Example 7-84 New log message for reclamation enhancement
GLESR051I(01439): 17608 of 22000 files have been processed.
GLESR052I(01505): Removing files from source tape TYO118L5.
GLESR053I(01528): 1000 of 22000 files have been removed.
When the option to limit the number of files for reclamation is used and one or more files remain in the source tape, the following message is shown in the log file:
GLESL332I(ltfsee:4404): Reclamation has been partially performed. Run the reclamation again for tape JCA811JC.
Two option switches were added to the reclaim command starting with IBM Spectrum Archive (LTFS) EE V1R1.1.3 and later:
-L: Limit the number of files to be processed by the reclaim command.
The -L option can be used with the -t option when the -t option specifies only one tape_id. When the number of files that are stored on the tape to be reclaimed is more than the value that is provided with the -L option, the reclaim command completes its operation after the number of files that is specified by the -L option is processed. You might not regain any unreferenced capacity at this point because another reclaim operation might be required. However, this option is useful for limiting the time that is taken by one reclaim command. For example, when only a limited time window is available for the reclaim operation, a user can limit the number of files by using the -L option with a small number. After the command with the -L option completes, the user can run the command again if more time is available.
When the number of files that are stored on the tape is less than the number provided by the -L option, the reclaim operation completes as though the reclaim option was run without the -L option. If this option is specified without any value, the default value is 100,000.
-q: Quick Reconcile.
This option enables the quick reconcile feature of the reclaim command. The quick reconcile feature handles basic inconsistencies between the files on LTFS and IBM Spectrum Scale. The basic file inconsistencies have the following conditions:
 – A file on IBM Spectrum Scale was removed.
 – A file on IBM Spectrum Scale was renamed.
When a file with these conditions is found, the reclaim operation without the -q option does not touch those files and reports to the user that the reconcile command must be run. With the -q option, those files are processed without running the reconcile operation in advance. When there is a file that cannot be processed by the quick reconcile feature, the reclaim command continues the operation for the other files and reports at the end of operation that a reconcile is required. Only the files that are not handled by the reclaim command are left on the tape.
To use the quick reconcile operation, all of the DMAPI enabled file systems should be mounted. Otherwise, the reclaim operation fails. Files that are imported from outside of IBM Spectrum Archive EE by the import command are not handled by the quick reconcile feature.
The -q option can be used in combination with the other options, but it must be placed after the -t or -n option when either of those options is used.
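For example, a time-boxed reclamation with quick reconcile might look like the following sketch; the pool, library, tape ID, and file limit are placeholders:
ltfsee reclaim -p myfirstpool -l myfirstlib -t JCA811JC -L 20000 -q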
7.17 Checking and repairing
You can run the ltfsee pool add command with a check option or a deep recovery option to check the medium when one or more tape cartridges in the library are inconsistent and it becomes necessary to perform a check and recover operation.
Example 7-85 shows the output of the check and recovery on a single tape cartridge.
Example 7-85 Check and recovery of a tape
[root@ltfs97 glues]# ltfsee pool add -p CopyPool -t 058AGWL5 -c
Tape 058AGWL5 successfully checked.
Adding tape 058AGWL5 to storage pool CopyPool
Most corrupted tapes can be repaired by using the --check (or -c) option. If the command fails, it might be because the tape is missing an end-of-data (EOD) mark. Try to repair the tape again by running the ltfsee pool add command with the --deep_recovery (or -d) option, as shown in Example 7-86.
Example 7-86 Deep recovery of a tape
[root@ltfs97 glues]# ltfsee pool add -p CopyPool -t 058AGWL5 -d
Tape 058AGWL5 successfully checked.
Adding tape 058AGWL5 to storage pool CopyPool
7.18 Importing and exporting
The import and export processes are the mechanisms for moving existing data on the LTFS written tape cartridges into or out of the IBM Spectrum Archive EE environment.
7.18.1 Importing
Import tape cartridges to your IBM Spectrum Archive EE system by running the ltfsee import -p <pool> -t <tape> command.
When you import a tape cartridge, the ltfsee import -p <pool> -t <tape> command adds the specified tape cartridge to the IBM Spectrum Archive EE library, assigns the tape to the designated pool, and imports the files on that tape cartridge into the IBM Spectrum Scale namespace. This process puts the stub files back in the GPFS file system, and the imported files are in the migrated state, which means that the data remains on the tape cartridge. The data portion of a file is not copied to disk during the import.
Example 7-87 shows the import of an LTFS tape cartridge that was created on a different LTFS system into a directory that is called 075AGWL5 in the /ibm/glues file system.
Example 7-87 Import an LTFS tape cartridge
[root@ltfs97 /]# ltfsee import -p myPool -t 075AGWL5 -P /ibm/glues
Import of tape 075AGWL5 has been requested...
Import of tape 075AGWL5 complete.
Importing file paths
The default import file path for the ltfsee import command is /{GPFS file system}/IMPORT. As shown in Example 7-88, if no other parameters are specified on the command line, all files are restored to the ../IMPORT/{VOLSER} directory under the GPFS file system.
Example 7-88 Import by using default parameters
[root@ltfs97 glues]# ltfsee import -p myPool -t 037AGWL5
Import of tape 037AGWL5 has been requested...
Import of tape 037AGWL5 complete.
 
[root@ltfs97 glues]# ls -las /ibm/glues/IMPORT/037AGWL5
total 0
0 drwxr-xr-x 2 root root 512 Apr 18 16:22 .
0 drwxr-xr-x 3 root root 512 Apr 18 16:22 ..
0 -rw------- 1 root root 104857600 Apr 18 16:22 file10.img
0 -rw------- 1 root root 104857600 Apr 18 16:22 file9.img
0 -rw------- 1 root root 104857600 Apr 18 16:22 fileA.ppt
0 -rw------- 1 root root 104857600 Apr 18 16:22 fileB.ppt
0 -rw------- 1 root root 104857600 Apr 18 16:22 fileC.ppt
0 -rw------- 1 root root 104857600 Apr 18 16:22 offsite1.mpeg
0 -rw------- 1 root root 104857600 Apr 18 16:22 offsite2.mpeg
0 -rw------- 1 root root 104857600 Apr 18 16:22 offsite3.mpeg
0 -rw------- 1 root root 104857600 Apr 18 16:22 offsite4.mpeg
0 -rw------- 1 root root 104857600 Apr 18 16:22 offsite5.mpeg
Example 7-89 shows the use of the -P parameter, which can be used to redirect the imported files to an alternative directory. The VOLSER is still used in the directory name, but you can now specify a custom import file path by using the -P option. If the specified path does not exist, it is created.
Example 7-89 Import by using the -P parameter
[root@ltfs97 glues]# ltfsee import -p myPool -t 037AGWL5 -P /ibm/glues/alternate
Import of tape 037AGWL5 has been requested...
Import of tape 037AGWL5 complete.
 
[root@ltfs97 glues]# ls -las /ibm/glues/alternate/037AGWL5
total 32
0 drwxr-xr-x 2 root root 512 Apr 18 16:24 .
32 drwxr-xr-x 11 root root 32768 Apr 18 16:24 ..
0 -rw------- 1 root root 104857600 Apr 18 16:24 file10.img
0 -rw------- 1 root root 104857600 Apr 18 16:24 file9.img
0 -rw------- 1 root root 104857600 Apr 18 16:24 fileA.ppt
0 -rw------- 1 root root 104857600 Apr 18 16:24 fileB.ppt
0 -rw------- 1 root root 104857600 Apr 18 16:24 fileC.ppt
0 -rw------- 1 root root 104857600 Apr 18 16:24 offsite1.mpeg
0 -rw------- 1 root root 104857600 Apr 18 16:24 offsite2.mpeg
0 -rw------- 1 root root 104857600 Apr 18 16:24 offsite3.mpeg
0 -rw------- 1 root root 104857600 Apr 18 16:24 offsite4.mpeg
0 -rw------- 1 root root 104857600 Apr 18 16:24 offsite5.mpeg
Example 7-90 shows the use of the -R parameter during the import, which has the effect of importing files to the root of the file system that is specified and not creating a VOLSER directory.
Example 7-90 Import by using the -R parameter
[root@ltfs97 glues]# ltfsee import -p myPool -t 037AGWL5 -P /ibm/glues -R
Import of tape 075AGWL5 has been requested...
Import of tape 075AGWL5 complete.
 
[root@ltfs97 glues]# ls -las /ibm/glues
total 1225525
32 drwxr-xr-x 10 root root 32768 Apr 18 18:03 .
4 drwxr-xr-x 4 root root 4096 Apr 3 15:00 ..
102400 -rw------- 1 root root 104857600 Apr 18 18:02 file10.img
102400 -rw------- 1 root root 104857600 Apr 16 14:51 file1.img
102400 -rw------- 1 root root 104857600 Apr 16 14:51 file3.img
102400 -rw------- 1 root root 104857600 Apr 18 18:02 file9.img
102400 -rw------- 1 root root 104857600 Apr 18 18:02 fileA.ppt
102400 -rw------- 1 root root 104857600 Apr 18 18:02 fileB.ppt
102400 -rw------- 1 root root 104857600 Apr 18 18:02 fileC.ppt
102400 -rw------- 1 root root 104857600 Apr 18 18:03 offsite1.mpeg
102400 -rw------- 1 root root 104857600 Apr 18 18:03 offsite2.mpeg
102400 -rw------- 1 root root 104857600 Apr 18 18:03 offsite3.mpeg
100864 -rw------- 1 root root 104857600 Apr 18 18:03 offsite4.mpeg
100608 -rw------- 1 root root 104857600 Apr 18 18:03 offsite5.mpeg
1 dr-xr-xr-x 2 root root 8192 Apr 18 17:58 .snapshots
8 drwxrwsr-x 6 bin bin 8192 Apr 18 17:58 .SpaceMan
With each of these parameters, you have the option of overwriting, ignoring, or renaming existing files by using the -o, -i, or -r parameters.
Handling import file name conflicts
This section describes how you can use the rename, overwrite, and ignore options of the ltfsee import command to handle import file name conflicts. The default behavior is to rename a file being imported if that file name exists in the full target import path. When you import files by running the ltfsee import command, you can use the following ltfsee import command options to affect command processing when the file name of an import file exists in the full target path:
-r, --rename
This is the default setting. All existing files are kept and any import files with conflicting names are renamed. Files are renamed by appending the suffix _i, where i is a number 1 - n. For example, a file that is named file1.txt is renamed file1.txt_1.
-o, --overwrite
All import files are imported and any existing files with conflicting names are overwritten.
-i, --ignore
All existing files are kept and any import files with conflicting names are ignored.
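For example, to re-import a tape and overwrite any files that already exist in the target import path, the overwrite option can be added to the import command; the pool, tape ID, and path are placeholders based on Example 7-89:
ltfsee import -p myPool -t 037AGWL5 -P /ibm/glues -o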
Importing offline tape cartridges
For more information about offline tape cartridges, see 7.18.2, “Exporting” on page 188. Offline tape cartridges can be reimported to the IBM Spectrum Scale namespace by running the ltfsee import command with the --offline option. If an offline exported tape cartridge was modified by the user while it was outside the library, the ltfsee import --offline command fails and the user must reimport the tape cartridge by using another ltfsee import option.
When the tape cartridge is offline and outside the library, the IBM Spectrum Scale offline files on disk or the files on tape cartridge should not be modified.
Problems that are caused by trying to import a tape cartridge that was exported with the --offline option can be solved by reimporting the tape cartridge by using the other options that are available. By using import with --recreate and --overwrite (assuming that the original path is used), some of the inodes on IBM Spectrum Scale are overwritten and new inodes are created for the files that did not have one. At the end of this process, all the files on the tape cartridge have an inode on IBM Spectrum Scale.
Example 7-91 shows an example of importing an offline tape cartridge.
Example 7-91 Import an offline tape cartridge
[root@ltfssn1 ~]# ltfsee import -p JZJ5WORM -t JZ0072JZ --offline
Import of tape JZ0072JZ has been requested.
Import of tape JZ0072JZ complete.
Updated offline state of tape JZ0072JZ.
7.18.2 Exporting
Export tape cartridges from your IBM Spectrum Archive EE system by running the ltfsee export command. When you perform a normal export of a tape cartridge, the ltfsee export command removes the tape cartridge from the IBM Spectrum Archive EE library. The tape cartridge is reserved so that it is no longer a target for file migrations. It is then reconciled to remove any inconsistencies between it and IBM Spectrum Scale. The export process then removes all files from the Spectrum Scale file system that exist on the exported tape cartridge. The files on the tape cartridge are unchanged by the export and remain accessible by other LTFS systems.
If the --offline option is specified, all files from the tape cartridges or tape cartridge pool that are specified are moved to an offline status and those files cannot be accessed. However, the corresponding inode of each file is kept in IBM Spectrum Scale, and those files can be brought back to the IBM Spectrum Scale namespace by reimporting the tape cartridge by using the --offline option. This --offline option can be used, for example, when exporting tape cartridges containing redundant copies to an off-site storage location for disaster recovery purposes.
 
Important: If the --offline option is omitted in the export command, all files on the exported tape cartridge are removed from the GPFS file system.
Export considerations
Consider the following information when planning IBM Spectrum Archive EE export activities:
If you put different logical parts of an IBM Spectrum Scale namespace (such as the project directory) into different LTFS tape cartridge pools, you can export tape cartridges that contain all and only the files from that specific part of the IBM Spectrum Scale namespace. Otherwise, you must first recall all the files from the namespace of interest (such as the project directory), then migrate the recalled files to an empty tape cartridge pool, and then export that tape cartridge pool.
Reconcile occurs automatically before the export is processed.
Although the practice is not preferable, tape cartridges can be physically removed from IBM Spectrum Archive EE without exporting them, in which case no changes are made to the IBM Spectrum Scale inode. The following results can occur:
A file operation that requires access to the removed tape cartridge fails. No information about the location of the tape cartridge is available.
Files on an LTFS tape cartridge can be replaced in IBM Spectrum Archive EE without reimporting (that is, without updating anything in IBM Spectrum Scale). This is equivalent to a library going offline and then being brought back online without taking any action in the IBM Spectrum Scale namespace or management.
 
Important: If a tape cartridge is removed from the library without the use of the export utility, modified, and then reinserted in the library, the behavior can be unpredictable.
Exporting tape cartridges
The normal export of an IBM Spectrum Archive EE tape cartridge first reconciles the tape cartridge to correct any inconsistencies between it and IBM Spectrum Scale. Then, it removes all files from the Spectrum Scale file system that exist on the exported tape cartridge.
Example 7-92 shows the typical output from the export command.
Example 7-92 Export a tape cartridge
 
# ltfsee export -p PrimPool -l lib_lto -t 2MA262L5
GLESS016I(00184): Reconciliation requested.
GLESM401I(00194): Loaded the global configuration.
GLESS049I(00637): Tapes to reconcile: 2MA262L5 .
GLESS050I(00644): GPFS file systems involved: /ibm/gpfs .
GLESS134I(00666): Reserving tapes for reconciliation.
GLESS135I(00699): Reserved tapes: 2MA262L5 .
GLESS054I(00737): Creating GPFS snapshots:
GLESS055I(00742): Deleting the previous reconcile snapshot and creating a new one for /ibm/gpfs ( gpfs ).
GLESS056I(00763): Scanning GPFS snapshots:
GLESS057I(00768): Scanning GPFS snapshot of /ibm/gpfs ( gpfs ).
GLESS060I(00844): Processing scan results:
GLESS061I(00849): Processing scan results for /ibm/gpfs ( gpfs ).
GLESS141I(00862): Removing stale DMAPI attributes:
GLESS142I(00867): Removing stale DMAPI attributes for /ibm/gpfs ( gpfs ).
GLESS063I(00900): Reconciling the tapes:
GLESS001I(00994): Reconciling tape 2MA262L5 has been requested.
GLESS002I(01013): Reconciling tape 2MA262L5 complete.
GLESS137I(01134): Removing tape reservations.
GLESS058I(02320): Removing GPFS snapshots:
GLESS059I(02327): Removing GPFS snapshot of /ibm/gpfs ( gpfs ).
Export of tape 2MA262L5 has been requested...
GLESL074I(00649): Export of tape 2MA262L5 complete.
GLESL373I(00805): Moving tape 2MA262L5.
Tape 2MA262L5 is unmounted because it is inserted into the drive.
GLESL043I(00151): Removing tape 2MA262L5 from storage pool PrimPool.
GLESL490I(00179): Export command completed successfully.
Example 7-93 shows how the normal exported tape is displayed as exported by running the ltfsee info tapes command.
Example 7-93 Display status of normal export tape cartridge
# ltfsee info tapes
Tape ID Status Type Capacity(GiB) Free(GiB) Unref(GiB) Pool Library Address Drive
1IA105L5 Valid LTO5 1327 1327 0 PrimPool lib_lto 4141 -
D00397L5 Valid LTO5 1327 1327 0 PrimPool lib_lto 4143 -
D00454L5 Valid LTO5 1327 1327 0 PrimPool lib_lto 4136 -
2MA262L5 Exported LTO5 0 0 0 - lib_lto 4138 -
If errors occur during the export phase, the tape goes to the export state. However, some of the files that belong to that tape might still remain in the file system with a reference to that tape. Such an error can occur when files that belong to the tape being exported are modified while the reconciliation phase of the export is running. In such a scenario, see 10.5, “Handling export errors” on page 264 for how to clean up the remaining files on the Spectrum Scale file system.
Exporting offline tape cartridges
If you want to move tape cartridges to an off-site location for DR purposes but still retain the files in the Spectrum Scale file system, follow the procedure that is described here. In Example 7-94, tape JZ0072JZ contains redundant copies of files that must be moved off-site.
Example 7-94 Export an offline tape cartridge
[root@ltfssn1 ~]# ltfsee export -p JZJ5WORM -t JZ0072JZ -o "Export offline"
 
Export of tape JZ0072JZ has been requested...
GLESL074I(00649): Export of tape JZ0072JZ complete.
Updated offline state of tape JZ0072JZ to OFFLINE.
GLESL487I(00139): Tape JZ0072JZ stays in pool JZJ5WORM while it is offline exported.
GLESL373I(00805): Moving tape JZ0072JZ.
Tape JZ0072JZ is unmounted because it is inserted into the drive.
Offline export is a much quicker export function than normal export because no reconciliation is run. If you run the ltfsee info tapes command, you can see the offline status of the tape cartridge as shown in Example 7-95.
Example 7-95 Display status of offline tape cartridges
[root@ltfssn1 ~]# ltfsee info tapes
Tape ID Status Type Capacity(GiB) Free(GiB) Unref(GiB) Pool Library Address Drive
JYA825JY Valid TS1140(J5) 6292 6292 0 JY         TS4500_3592 1035 -
JCC541JC Unknown TS1140 0 0 0 J5         TS4500_3592 0 -
JZ0072JZ Offline TS1150(J5) 9022 9022 0 JZJ5WORM   TS4500_3592 258 0000078D8320
It is now possible to physically remove tape JZ0072JZ from the tape library so it can be sent to the off-site storage location.
7.18.3 Import/Export enhancements
Starting with IBM Spectrum Archive (LTFS) EE V1R1.1.3, the Import/Export function was enhanced and improved. Besides general performance improvements for Import/Export, these changes address the following three aspects of prior LTFS EE releases:
1. Offline export of a redundant copy (the second or third copy) can be used to move a tape or a set of tapes to a different location. In this case, even if the redundant copy is offline exported, the file can be recalled only from the primary copy.
2. Only when all replicas are exported or offline exported is the file reported as exported or offline exported.
3. Exporting a tape might take a long time because LTFS EE runs reconcile before exporting the tape.
Therefore, the following Import/Export enhancements were introduced in IBM Spectrum Archive (LTFS) EE V1R1.1.3 and later:
Stubbing of a premigrated file is allowed even if a redundant copy is offline exported.
Full replica support for ease of use.
The ltfsee info tapes command shows an offline tape list.
To view the offline message, run the ltfsee tape show command with the -a offline option.
The ltfsee info files command does not show an offline message for files that are migrated to an offline exported tape.
Reconciliation can be skipped when you run an offline export, which improves performance.
The stub is not updated for offline export or import, which improves performance.
The offline exported information (tape ID and offline message) is kept in MMM to avoid updating the stub.
Regarding full replica support, Export/Import does not depend on the primary/redundant copy. When all copies are exported, the file is exported.
Table 7-1 shows a use case example in which a file was migrated to three physical tapes: TAPE1, TAPE2, and TAPE3. The table shows how the file behaves for each export operation.
Table 7-1 Export operations use case scenario of file with three tapes
Operation                                                  File
TAPE1 is offline exported.                                 File is available (can recall).
TAPE1/TAPE2 are offline exported.                          File is available (can recall).
TAPE1/TAPE2/TAPE3 are offline exported.                    File is offline (cannot recall).
TAPE1 is exported.                                         File is available (IBMTPS has TAPE2/TAPE3).
TAPE1/TAPE2 are exported.                                  File is available (IBMTPS has TAPE3).
TAPE1/TAPE2/TAPE3 are exported.                            File is removed from GPFS.
TAPE1/TAPE2 are offline exported, then TAPE3 is exported.  File is offline (IBMTPS has TAPE1/TAPE2).
The ltfsee info files command does not show the offline message even if the file is exported. It shows only the offline status, as in Example 7-96.
Example 7-96 The ltfsee info files command not showing the offline message for exported file
# ltfsee info files /mnt/gpfs/test/1MData.001
Name: /mnt/gpfs/test/1MData.001
Tape id:D00600L5 Status: offline
7.19 Drive Role settings for job assignment control
Before IBM Spectrum Archive (LTFS) EE V1R1.1.3, the physical drive allocation logic was simple: IBM Spectrum Archive (LTFS) EE picked the first idle tape drive that it found in the list of available drives. If the tape to be mounted was already in a drive, IBM Spectrum Archive (LTFS) EE picked that drive preferentially. This simple drive allocation logic is sometimes not good enough for client use cases that require the following capabilities:
Reserve drives for potential recalls, for example, drives that are not used for any purpose other than quick dispatch of recall jobs.
Assign dedicated drives to migration jobs. Without this kind of drive assignment, the control node scheduler might switch the tape in a drive from a primary tape to a copy tape, or in the opposite direction.
So, it can be beneficial for users to configure each tape drive's capabilities to allow or disallow each type of job on a per-drive basis.
Configurable tape drive attributes were introduced in IBM Spectrum Archive (LTFS) EE version V1R1.1.3 and later. Each of the attributes corresponds to the tape drive's capability to perform a specific type of job. Here are the attributes:
Migration
Recall
Generic
Table 7-2 describes each of the available IBM Spectrum Archive (LTFS) EE drive attributes for the attached physical tape drives.
Table 7-2 IBM Spectrum Archive (LTFS) EE drive attributes
Attribute   Description
Migration   If the Migration attribute is set for a drive, that drive can process migration jobs. If not, IBM Spectrum Archive (LTFS) EE never runs migration jobs by using that drive. Save jobs are also allowed or disallowed through this attribute setting. It is preferable that at least one tape drive has the Migration attribute set.
Recall      If the Recall attribute is set for a drive, that drive can process recall jobs. If not, IBM Spectrum Archive (LTFS) EE never runs recall jobs by using that drive. Both automatic file recall and selective file recall are enabled or disabled by this single attribute; there is no way to enable or disable one of these two recall types selectively. It is preferable that at least one tape drive has the Recall attribute set.
Generic     If the Generic attribute is set for a drive, that drive can process generic jobs. If not, IBM Spectrum Archive (LTFS) EE never runs generic jobs by using that drive. IBM Spectrum Archive (LTFS) EE creates and runs miscellaneous generic jobs for administrative purposes, such as formatting, checking, reconciling, reclaiming, and validating a tape. Some of those jobs are run internally as part of user operations. It is preferable that at least one tape drive has the Generic attribute set. Reclaiming a tape requires at least two tape drives, so at least two drives need the Generic attribute.
To set these attributes for a tape drive, specify them when you add the tape drive to IBM Spectrum Archive (LTFS) EE. Use the following command syntax:
ltfsee drive add -d <drive serial[:attr]> [-n <node_id>]
The attr option is an optional decimal parameter that can be specified after the colon (:). It is the logical OR of the values of the three attributes: Migrate (4), Recall (2), and Generic (1). For example, an attr value of 6 allows migration and recall jobs while generic jobs are disallowed. All of the attributes are set by default. If the tape drive to update already has attributes that were set by IBM Spectrum Archive (LTFS) EE, you must remove the drive by using the ltfsee drive remove command before you add it again with the new attributes.
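For example, the following command (a sketch that reuses a drive serial number and node ID from Example 7-97; substitute the values from your environment) adds a drive that is allowed to run migration and recall jobs but no generic jobs, because attr 6 is the OR of Migrate (4) and Recall (2):
[root@ltfsml1 ~]# ltfsee drive add -d 1013000667:6 -n 1
If the drive is already part of the system, remove it first by running the ltfsee drive remove command as described above.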
To check the current active drive attributes, the ltfsee info drives command is useful. This command shows each tape drive’s attributes, as shown in Example 7-97.
Example 7-97 Check current IBM Spectrum Archive EE drive attributes
[root@ltfsml1 ~]# ltfsee info drives
Drive S/N Status Type Role Library Address Node ID Tape Node Group
1013000667 Not mounted LTO6 mrg lib_ltfsml2 256 1 - G0
1013000110 Not mounted LTO6 mrg lib_ltfsml2 257 1 - G0
1013000692 Not mounted LTO6 mrg lib_ltfsml2 258 1 - G0
1013000694 Mounted LTO6 mrg lib_ltfsml1 256 2 2FC140L5 G0
1013000688 Mounted LTO6 mrg lib_ltfsml1 257 2 2FC141L5 G0
1013000655 Mounted LTO6 mrg lib_ltfsml1 258 2 2FC147L5 G0
The letters m, r, and g are shown when the corresponding Migration, Recall, and Generic attributes are set. If an attribute is not set, a hyphen (-) is shown instead.
 
Hint for the drive attribute settings: In a multiple-node environment, reclamation is expected to run faster if the two tape drives that are used for reclamation are assigned to a single node. For that purpose, assign the tape drives that have the Generic attribute to a single node, and do not set the Generic attribute on the drives of the remaining nodes.
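Following this hint, and as an illustration only (the drive serial numbers are placeholders, not values from this chapter), the drives of node 1 could keep all three roles while the drives of node 2 are limited to migration and recall jobs (attr 6 = Migrate 4 + Recall 2). Drives that are already configured must first be removed with the ltfsee drive remove command:
[root@ltfsml1 ~]# ltfsee drive add -d DRIVE_SERIAL_A:7 -n 1
[root@ltfsml1 ~]# ltfsee drive add -d DRIVE_SERIAL_B:7 -n 1
[root@ltfsml1 ~]# ltfsee drive add -d DRIVE_SERIAL_C:6 -n 2
[root@ltfsml1 ~]# ltfsee drive add -d DRIVE_SERIAL_D:6 -n 2
With this assignment, reclamation, which requires two drives with the Generic attribute, is always scheduled on node 1.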
7.20 Tape drive intermix support
This section describes the physical tape drive intermix support that is available with IBM Spectrum Archive EE V1R2.0.0 and later.
This enhancement has these objectives:
Use IBM LTO-7 tapes and drives in mixed configuration with older IBM LTO (LTO-6 and 5) generations
Use 3592 JC/JD cartridges along with IBM TS1140 and TS1150 drives in mixed environments
 
Note: An intermix of LTO and TS11xx tape drive technology and media is not supported by IBM Spectrum Archive EE.
The following main use cases are expected for this new feature:
Tape media and technology migration (from old to new generation tapes).
Continue using prior generation formatted tapes (read or write) with the current technology tape drive generation.
To create and use a mixed tape drive environment, you must define the different LTO or TS11xx drive types when you create the logical library partition (within your tape library) that is used with your IBM Spectrum Archive EE setup. For more information about how to create and define a logical tape library partition, see the example shown in 4.5, “Creating a logical library and defining tape drives” on page 76.
When LTO-7, LTO-6, and LTO-5 tapes are used in a tape library, IBM Spectrum Archive EE selects the correct cartridges and drives to read or write the required data, which includes using an LTO-7 drive to recall data from LTO-5 tapes.
When 3592 JC or 3592 JD tapes are used in a tape library and both IBM TS1140 and TS1150 drives are used for the JC tapes, IBM Spectrum Archive EE selects the correct tapes and drives to read or write the required data.
With this new function, data migration between different generations of tape cartridges can be achieved. You can select and configure which TS11xx format (TS1140 or TS1150) IBM Spectrum Archive EE uses for operating 3592 JC tapes. The default for IBM Spectrum Archive EE is always to use and format to the highest available capacity. Also, when you import 3592 JC media, you can specify the format in which IBM Spectrum Archive EE uses those tapes.
The ltfsee pool add command may be used for configuration when new physical tape cartridges are added to your IBM Spectrum Archive EE setup:
ltfsee pool add -p poolname -t tapename <-f | -F> -T <format_type>
The -T option of the ltfsee pool add command accepts a format_type when the tapes in use are 3592 JC or 3592 JD tapes. The format type can be either J4 or J5. When the format type is provided, the tape is formatted in that format. When the tapes in use are LTO tapes and a format type is specified, the command fails. The command also fails when the tapes do not match the devtype property of the pool. When the format type is not provided for 3592 JC or 3592 JD tapes, the highest capacity format (J5) is used.
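For example, the following command (a sketch; the pool name and cartridge ID are placeholders) adds a 3592 JC cartridge to a pool and requests the J4 format, with the -f flag taken from the syntax that is shown above:
[root@ltfssn1 ~]# ltfsee pool add -p JCpool -t JCA001JC -f -T J4
If -T is omitted for a JC or JD cartridge, the highest capacity format (J5) is used instead.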
Table 7-3 shows the valid format types that you can define for 3592 tape media.
Table 7-3 Valid format types for 3592 tape media
Format type   Tape JB, JC, JK, and JY   Tape JD, JL, and JZ   Pool attribute
J4            Valid                     Not Valid             3592-4C
J5            Valid (1)                 Valid (1)             3592-5C

(1) Highest capacity format type
For more information about adding tape cartridges within IBM Spectrum Archive EE, see 7.5.1, “Adding tape cartridges” on page 139.
7.21 Write Once Read Many support for IBM TS1140/TS1150 tape drives
From a long-term archive perspective, there is sometimes a requirement to ensure through the system that files are stored without any modification. Starting with IBM Spectrum Archive EE V1R2.0.0 and later, you can deploy Write Once Read Many (WORM) tape cartridges in your IBM Spectrum Archive EE setup. Only 3592 WORM tapes that can be used with IBM TS1140 or IBM TS1150 drives are supported.
 
Note: LTO WORM tapes are not supported for IBM Spectrum Archive EE.
For more information about IBM tape media and WORM tapes, see the following website:
Objective for WORM tape support
The objective is to ensure through the system that files are stored without any modification, but with the following limitations:
The file on tape is ensured to be immutable only if the user uses IBM Spectrum Archive EE exclusively:
 – IBM Spectrum Archive EE does not detect the case where a modified index is appended at the end of the tape by using a direct SCSI command.
 – From the LTFS format perspective, this case can be detected, but doing so requires time to scan every index on the tape. This feature is not provided in the release of IBM Spectrum Archive EE on which this book is based.
IBM Spectrum Archive EE does not ensure that the file cannot be modified through GPFS by the following sequence of operations:
 – Migrate the immutable files to tape.
 – Recall the immutable files to disk.
 – Change the immutable attribute of the file on disk and modify.
Function overview for WORM tape support
To support 3592 WORM tapes, IBM Spectrum Archive EE, starting with release V1R2.0.0, provides the following new features:
Introduces a WORM attribute to the IBM Spectrum Archive EE pool attributes.
A WORM pool can have only WORM cartridges.
Files that have GPFS immutable attributes can still be migrated to normal pools.
Example 7-98 shows how to set the WORM attribute to an IBM Spectrum Archive EE pool by using the ltfsee pool command.
Example 7-98 Set the WORM attribute to an IBM Spectrum Archive EE pool
[root@ltfssn1 ~]# ltfsee pool create -p myWORM --worm physical
There also is an IBM Spectrum Scale layer that can provide a certain level of immutability for files within the GPFS file system. You can apply immutable and appendOnly restrictions either to individual files within a file set or to a directory. An immutable file cannot be changed or renamed. An appendOnly file allows append operations, but not delete, modify, or rename operations. An immutable directory cannot be deleted or renamed, and files cannot be added or deleted under such a directory. An appendOnly directory allows new files or subdirectories to be created with 0-byte length; all such newly created files and subdirectories are marked as appendOnly automatically.
The immutable flag and the appendOnly flag can be set independently. If both immutability and appendOnly are set on a file, immutability restrictions are in effect.
To set or unset these attributes, use the following IBM Spectrum Scale command options:
mmchattr -i yes|no
This command sets or unsets a file to or from an immutable state:
 – -i yes
Sets the immutable attribute of the file to yes.
 – -i no
Sets the immutable attribute of the file to no.
mmchattr -a yes|no
This command sets or unsets a file to or from an appendOnly state:
 – -a yes
Sets the appendOnly attribute of the file to yes.
 – -a no
Sets the appendOnly attribute of the file to no.
 
Note: Before an immutable or appendOnly file can be deleted, you must change it to mutable or set appendOnly to no (by using the mmchattr command).
Storage pool assignment of an immutable or appendOnly file can be changed; an immutable or appendOnly file is allowed to transfer from one storage pool to another.
To display whether a file is immutable or appendOnly, run this command:
mmlsattr -L myfile
The system displays information similar to the following output:
file name: myfile
metadata replication: 2 max 2
data replication: 1 max 2
immutable: no
appendOnly: no
flags:
storage pool name: sp1
fileset name: root
snapshot name:
creation Time: Wed Feb 22 15:16:29 2012
Windows attributes: ARCHIVE
The effects of file operations on immutable and appendOnly files
After a file is set as immutable or appendOnly, the following file operations and attributes work differently from the way they work on regular files:
delete
An immutable or appendOnly file cannot be deleted.
modify/append
An immutable file cannot be modified or appended. An appendOnly file cannot be modified, but it can be appended.
 
Note: The immutable and appendOnly flag check takes effect after the file is closed; therefore, the file can be modified if it is opened before the file is changed to immutable.
mode
An immutable or appendOnly file's mode cannot be changed.
ownership, acl
These attributes cannot be changed for an immutable or appendOnly file.
extended attributes
These attributes cannot be added, deleted, or modified for an immutable or appendOnly file.
timestamp
The time stamp of an immutable or appendOnly file can be changed.
directory
If a directory is marked as immutable, no files can be created, renamed, or deleted under that directory. However, a subdirectory under an immutable directory remains mutable unless it is explicitly changed by the mmchattr command.
If a directory is marked as appendOnly, no files can be renamed or deleted under that directory. However, 0-byte length files can be created.
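The following short sequence (the file name is a placeholder) illustrates the appendOnly behavior that is described above. After the file is marked as appendOnly, appending data succeeds, but any attempt to modify, rename, or delete the file is rejected until the attribute is set back to no:
[root@ltfsee_node0]# mmchattr -a yes /ibm/gpfs/audit.log
[root@ltfsee_node0]# echo "new record" >> /ibm/gpfs/audit.log
[root@ltfsee_node0]# mmchattr -a no /ibm/gpfs/audit.log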
For more information about IBM Spectrum Scale V4.1.1 immutability and appendOnly limitations, see the IBM Knowledge Center website:
Example 7-99 shows the screen output that you receive when you display and change the IBM Spectrum Scale immutable or appendOnly file attributes.
Example 7-99 Set or change an IBM Spectrum Scale file immutable file attribute
[root@ltfsee_node0]# echo "Jan" > jan_jonas.out
[root@ltfsee_node0]# mmlsattr -L -d jan_jonas.out
file name: jan_jonas.out
metadata replication: 1 max 2
data replication: 1 max 2
immutable: no
appendOnly: no
flags:
storage pool name: system
fileset name: root
snapshot name:
creation time: Mon Aug 31 15:40:54 2015
Windows attributes: ARCHIVE
Encrypted: yes
gpfs.Encryption: 0x454147430001008C525B9D470000000000010001000200200008000254E60BA4024AC1D50001000100010003000300012008921539C65F5614BA58F71FC97A46771B9195846A9A90F394DE67C4B9052052303A82494546897FA229074B45592D61363532323261642D653862632D346663632D383961332D346137633534643431383163004D495A554E4F00
EncPar 'AES:256:XTS:FEK:HMACSHA512'
type: wrapped FEK WrpPar 'AES:KWRAP' CmbPar 'XORHMACSHA512'
KEY-a65222ad-e8bc-4fcc-89a3-4a7c54d4181c:ltfssn2
 
[root@ltfsee_node0]# mmchattr -i yes jan_jonas.out
 
[root@ltfsee_node0]# mmlsattr -L -d jan_jonas.out
file name: jan_jonas.out
metadata replication: 1 max 2
data replication: 1 max 2
immutable: yes
appendOnly: no
flags:
storage pool name: system
fileset name: root
snapshot name:
creation time: Mon Aug 31 15:40:54 2015
Windows attributes: ARCHIVE READONLY
Encrypted: yes
gpfs.Encryption: 0x454147430001008C525B9D470000000000010001000200200008000254E60BA4024AC1D50001000100010003000300012008921539C65F5614BA58F71FC97A46771B9195846A9A90F394DE67C4B9052052303A82494546897FA229074B45592D61363532323261642D653862632D346663632D383961332D346137633534643431383163004D495A554E4F00
EncPar 'AES:256:XTS:FEK:HMACSHA512'
type: wrapped FEK WrpPar 'AES:KWRAP' CmbPar 'XORHMACSHA512'
KEY-a65222ad-e8bc-4fcc-89a3-4a7c54d4181c:ltfssn2
 
[root@ltfsee_node0]# echo "Jonas" >> jan_jonas.out
-bash: jan_jonas.out: Read-only file system
[root@ltfsee_node0]#
These immutable and appendOnly file attributes can be changed at any time by the IBM Spectrum Scale administrator, so by themselves they cannot provide ultimate immutability.
If you are working with IBM Spectrum Archive EE and IBM Spectrum Scale and you plan to implement a WORM solution along with WORM tape cartridges, these two main assumptions apply:
Only files that have the IBM Spectrum Scale immutable attribute set are ensured not to be modified.
The IBM Spectrum Scale immutable attribute is not changed after it is set unless it is changed by an administrator.
Consider the following limitations when using WORM tapes together with IBM Spectrum Archive EE:
WORM tapes are supported only with IBM TS1140 and TS1150 tape drives (3592 JY, JZ).
If IBM Spectrum Scale immutable attributes are changed after migration, the next migration fails against the same WORM pool.
IBM Spectrum Archive EE supports the following operations with WORM media:
 – Migrate
 – Recall
 – Offline export and offline import
IBM Spectrum Archive EE does not support the following operations with WORM media:
 – Reclaim
 – Reconcile
 – Export and Import
For more information about the IBM Spectrum Archive EE commands, see 11.1, “Command-line reference” on page 322.
7.22 Obtaining the location of files and data
This section describes how to obtain information about the location of files and data by using IBM Spectrum Archive EE.
You can use the files option of the ltfsee info command to discover the physical location of files. To help with the management of replicas, this command also indicates which tape cartridges are used by a particular file. To quickly determine the file states for many files, refer to the IBM Spectrum Scale wiki:
Example 7-100 shows the typical output of the ltfsee info files command. Some files are on multiple tape cartridges, some are in a migrated state, and others are premigrated only.
Example 7-100 Files location
[root@ltfs97 glues]# ltfsee info files offsite*
File name: offsite1.mpeg
Tape id:037AGWL5 Status: premigrated
File name: offsite2.mpeg
Tape id:037AGWL5:051AGWL5 Status: migrated
File name: offsite3.mpeg
Tape id:037AGWL5:051AGWL5 Status: migrated
File name: offsite4.mpeg
Tape id:037AGWL5 Status: migrated
File name: offsite5.mpeg
Tape id:037AGWL5:051AGWL5 Status: migrated
Although IBM Spectrum Scale supports file names of up to 255 bytes, the LTFS LE+ component supports only 255 characters and does not support colons or slashes in file names. If you create a file on IBM Spectrum Scale whose name exceeds the LTFS LE+ limits or contains a colon, it cannot be migrated by using IBM Spectrum Archive EE, as shown in Table 7-4.
Table 7-4 Comparison of file system limits
Setting                               IBM Spectrum Scale   LTFS LE+ component             LTFS LE+ component multinode
Maximum length of file name           255 bytes            255 characters                 255 characters
Supported characters                  Anything             Anything but colon and slash   Anything but colon and slash
Maximum length of the path name       1024 bytes           Unlimited                      1024 - 39 bytes
Maximum size of extended attributes   64 KB                4 KB                           64 KB
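Because file names that contain a colon cannot be migrated, it can be useful to check for such names before migration is attempted. The following command (the file system path is a placeholder) lists the affected files:
[root@ltfs97 ~]# find /ibm/gpfs -name '*:*'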
7.23 Obtaining inventory, job, and scan status
This section describes how to obtain resource inventory information or information about ongoing migration and recall jobs or scans with IBM Spectrum Archive EE. You can use the info option of the ltfsee command to obtain information about current jobs and scans and resource inventory information about tape cartridges, drives, nodes, tape cartridge pools, and files. Example 7-101 shows the command that is used to display all IBM Spectrum Archive EE tape cartridge pools.
Example 7-101 Tape cartridge pools
[root@ltfsml1 ~]# ltfsee info pools
Pool Name Total(TiB) Free(TiB) Unref(TiB) Tapes Type Library Node Group
primary_ltfsml2 75.2 1.1 0.1 58 LTO lib_ltfsml2 G0
primary_ltfsml1 70.0 4.1 0.1 54 LTO lib_ltfsml1 G0
copy_ltfsml1 5.2 5.2 0.0 4 LTO lib_ltfsml1 G0
Example 7-102 shows the serial numbers and status of the tape drives that are used by IBM Spectrum Archive EE.
Example 7-102 Drives
[root@ltfsml1 ~]# ltfsee info drives
Drive S/N Status Type Role Library Address Node ID Tape Node Group
1013000667 Not mounted LTO6 mrg lib_ltfsml2 256 1 - G0
1013000110 Not mounted LTO6 mrg lib_ltfsml2 257 1 - G0
1013000692 Not mounted LTO6 mrg lib_ltfsml2 258 1 - G0
1013000694 Mounted LTO6 mrg lib_ltfsml1 256 2 2FC140L5 G0
1013000688 Mounted LTO6 mrg lib_ltfsml1 257 2 2FC141L5 G0
1013000655 Mounted LTO6 mrg lib_ltfsml1 258 2 2FC147L5 G0
To view all the IBM Spectrum Archive EE tape cartridges, run the command that is shown in Example 7-103.
Example 7-103 Tape cartridges
root@ltfsml1 ~]# ltfsee info tapes
Tape ID Status Type Capacity(GiB) Free(GiB) Unref(GiB) Pool Library Address Drive
IMN793L5 Valid LTO5 1327 0 1 primary_ltfsml2 lib_ltfsml2 4256 -
IMN830L5 Valid LTO5 1327 0 1 primary_ltfsml2 lib_ltfsml2 4226 -
IMN783L5 Valid LTO5 1327 696 0 primary_ltfsml2 lib_ltfsml2 4225 -
IMN784L5 Valid LTO5 1327 0 1 primary_ltfsml2 lib_ltfsml2 4224 -
IMN753L5 Valid LTO5 1327 0 1 primary_ltfsml2 lib_ltfsml2 4223 -
IMN754L5 Valid LTO5 1327 0 1 primary_ltfsml2 lib_ltfsml2 4222 -
IMN728L5 Valid LTO5 1327 0 1 primary_ltfsml2 lib_ltfsml2 4221 -
IMN729L5 Valid LTO5 1327 0 1 primary_ltfsml2 lib_ltfsml2 4220 -
IMN772L5 Valid LTO5 1327 0 1 primary_ltfsml2 lib_ltfsml2 4219 -
IMN750L5 Valid LTO5 1327 0 1 primary_ltfsml2 lib_ltfsml2 4216 -
IMN786L5 Valid LTO5 1327 400 0 primary_ltfsml2 lib_ltfsml2 4213 -
IMN785L5 Valid LTO5 1327 0 1 primary_ltfsml2 lib_ltfsml2 4212 -
IMN757L5 Valid LTO5 1327 0 1 primary_ltfsml2 lib_ltfsml2 4206 -
IMN773L5 Valid LTO5 1327 0 1 primary_ltfsml2 lib_ltfsml2 4218 -
IMN789L5 Valid LTO5 1327 0 1 primary_ltfsml2 lib_ltfsml2 4205 -
IMN751L5 Valid LTO5 1327 0 1 primary_ltfsml2 lib_ltfsml2 4215 -
IMN787L5 Valid LTO5 1327 0 1 primary_ltfsml2 lib_ltfsml2 4203 -
IMN758L5 Valid LTO5 1327 0 1 primary_ltfsml2 lib_ltfsml2 4207 -
IMN770L5 Valid LTO5 1327 0 1 primary_ltfsml2 lib_ltfsml2 4202 -
IMN756L5 Valid LTO5 1327 0 1 primary_ltfsml2 lib_ltfsml2 4201 -
IMN759L5 Valid LTO5 1327 0 1 primary_ltfsml2 lib_ltfsml2 4200 -
IMN779L5 Valid LTO5 1327 0 1 primary_ltfsml2 lib_ltfsml2 4198 -
Regularly monitor the output of the ltfsee info jobs and ltfsee info scans commands to ensure that jobs and scans are progressing as expected. Use the command syntax that is shown in Example 7-104.
Example 7-104 Jobs
[root@ltfs97] ltfsee info jobs
Job Type Status Idle(sec) Scan ID Tape Pool Library Node File Name or inode
Selective Recall In-progress 0 3108835585 2FC140L5 copy_ltfsml1 lib_ltfsml1 2 10374858
Selective Recall In-progress 0 3108835585 2FC141L5 copy_ltfsml1 lib_ltfsml1 2 10374888
Selective Recall Unscheduled 0 3108835585 2FC140L5 copy_ltfsml1 lib_ltfsml1 - 10374877
Selective Recall Unscheduled 0 3108835585 2FC140L5 copy_ltfsml1 lib_ltfsml1 - 10374877
The ltfsee info nodes command (see Example 7-105) provides a summary of the state of each LTFS LE+ component node.
Example 7-105 ltfsee info nodes
[root@ltfsml1 ~]# ltfsee info nodes
Node ID Status Node IP Drives Ctrl Node Library Node Group Host Name
1 Available 9.11.121.227 3 Yes lib_ltfsml2 G0 ltfsml2.tuc.stglabs.ibm.com
2 Available 9.11.121.122 3 Yes lib_ltfsml1 G0 ltfsml1.tuc.stglabs.ibm.com
The ltfsee info scans command (see Example 7-106) provides a summary of the current active scans in IBM Spectrum Archive EE. The scan number corresponds to the job number that is reported by the ltfsee info jobs command.
Example 7-106 ltfsee info scans
Scan ID Parent Queued Succeed Failed Canceled Library Ctrl Node
3587183105 - 1 0 0 0 lib_ltfsml1 ltfsml1.tuc.stglabs.ibm.com
3587183361 3587183105 1 0 0 0 lib_ltfsml1 ltfsml1.tuc.stglabs.ibm.com
7.24 Cleaning up a scan or session
This section describes how to perform a cleanup if a scan or session is complete, but was not removed by the calling process. An example is when a window was closed while the migrate command was running.
The cleanup option of the ltfsee command can clean up a scan or session that finished but was not removed by its calling process. If the scan is not finished, you cannot clean it up, as shown in Example 7-107.
Example 7-107 Clean up a running scan
[root@ltfs97 ~]# ltfsee cleanup 2282356739
Session or scan with id 2282356739 is not finished.
7.25 Monitoring the system with SNMP
You can use SNMP traps to receive notifications about system events. There are many processes that should be reported through SNMP. However, for the first release of IBM Spectrum Archive EE, only the following critical state changes are reported by SNMP traps:
Starting IBM Spectrum Archive EE
Stopping IBM Spectrum Archive EE
Unexpected termination or failure of migration, recall, reclaim, reconcile, import, or export processes
The MIB file is installed in /opt/ibm/ltfsee/data/LTFSEE-MIB.txt on each node. It should be loaded on the SNMP server.
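For example, the MIB file can be copied to the SNMP management host and verified with the standard Net-SNMP tools. The following commands are only a sketch; the node name and the destination MIB directory are placeholders for your environment:
[root@snmpserver ~]# scp eenode1:/opt/ibm/ltfsee/data/LTFSEE-MIB.txt /usr/share/snmp/mibs/
[root@snmpserver ~]# snmptranslate -m +LTFSEE-MIB -IR ltfseeErrJobTrap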
7.25.1 Configuring Net-SNMP
It is necessary to modify the /etc/snmp/snmpd.conf configuration file to receive SNMP traps. This file is on the node where the MMM is running. For more information, see 10.6.2, “SNMP” on page 268.
To configure Net-SNMP, complete the following steps:
1. Open the /etc/snmp/snmpd.conf configuration file.
2. Add the following entry to the file:
master agentx
trap2sink <managementhost>
The variable <managementhost> is the host name or IP address of the host to which the SNMP traps are sent.
3. Restart the SNMP daemon by running the following command:
[root@ltfs97 ~]# systemctl restart snmpd.service
7.25.2 Starting and stopping the snmpd daemon
Before IBM Spectrum Archive EE is started, it is necessary to start the snmpd daemon on the node where IBM Spectrum Archive EE is running.
To start the snmpd daemon, run the following command:
[root@ltfs97 ~]# systemctl start snmpd.service
To stop the snmpd daemon, run the following command:
[root@ltfs97 ~]# systemctl stop snmpd.service
To restart the snmpd daemon, run the following command:
[root@ltfs97 ~]# systemctl restart snmpd.service
7.25.3 Example of an SNMP trap
Example 7-108 shows the type of trap information that is received by the SNMP server.
Example 7-108 Example SNMP trap
Received 119 bytes from UDP: [9.11.121.209]:35054->[9.11.121.209]
0000: 30 75 02 01 01 04 06 70 75 62 6C 69 63 A7 68 02 0u.....public.h.
0016: 04 02 14 36 30 02 01 00 02 01 00 30 5A 30 0F 06 ...60......0Z0..
0032: 08 2B 06 01 02 01 01 03 00 43 03 02 94 E6 30 19 .+.......C....0.
0048: 06 0A 2B 06 01 06 03 01 01 04 01 00 06 0B 2B 06 ..+...........+.
0064: 01 04 01 02 06 81 76 02 03 30 11 06 0C 2B 06 01 ......v..0...+..
0080: 04 01 02 06 81 76 01 01 00 02 01 01 30 19 06 0C .....v......0...
0096: 2B 06 01 04 01 02 06 81 76 01 02 00 04 09 47 4C +.......v.....GL
0112: 45 53 4C 31 35 39 45 ESL159E
 
2013-04-18 23:10:22 xxx.storage.tucson.ibm.com [UDP: [9.11.121.209]:35054->[9.11.121.209]]:
SNMPv2-SMI::mib-2.1.3.0 = Timeticks: (169190) 0:28:11.90 SNMPv2-SMI::snmpModules.1.1.4.1.0 = OID: LTFSEE-MIB::ltfseeErrJobTrap LTFSEE-MIB::ltfseeJob.0 = INTEGER: migration(1) LTFSEE-MIB::ltfseeJobInfo.0 = STRING: "GLESL159E"
The last line in the raw message shows that it is an IBM Spectrum Archive EE Job Trap that relates to migration. The error code that is displayed, GLESL159E, confirms that it relates to a failed migration process.
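To receive and decode such traps with Net-SNMP on the management host, the trap daemon must accept the community string that is used by the traps (public in Example 7-108) and must have access to the LTFSEE MIB. The following lines are only a sketch of such a configuration; adapt them to your own trap receiver setup:
[root@snmpserver ~]# echo "authCommunity log public" >> /etc/snmp/snmptrapd.conf
[root@snmpserver ~]# snmptrapd -m +LTFSEE-MIB -Lsd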
 