Hints, tips, and preferred practices
This chapter provides hints, tips, and preferred practices for IBM Spectrum Archive Enterprise Edition (IBM Spectrum Archive EE). It covers various aspects of IBM Spectrum Scale, reuse of tape cartridges, scheduling, and disaster recovery (DR). Some aspects might overlap with functions that are described in Chapter 7, “Operations” on page 123 and Chapter 10, “Troubleshooting” on page 249. However, it is important to list them here in the context of hints, tips, and preferred practices.
This chapter includes the following topics:
Backing up file systems that are not managed by IBM Spectrum Archive EE
 
Important: All of the command examples in this chapter use the command without the full file path name because we added the IBM Spectrum Archive EE directory (/opt/ibm/ltfsee/bin) to the PATH variable of the operating system.
8.1 Preventing migration of the .SPACEMAN and metadata directories
This section describes an IBM Spectrum Scale policy rule that you should have in place to help ensure the correct operation of your IBM Spectrum Archive EE system.
You can prevent migration of the .SPACEMAN directory and the IBM Spectrum Archive EE metadata directory of a GPFS file system by excluding these directories with an IBM Spectrum Scale policy rule. Example 8-1 shows what an exclude statement can look like in an IBM Spectrum Scale migration policy file where the metadata directory starts with the path “/ibm/glues/.ltfsee”.
Example 8-1 IBM Spectrum Scale sample directory exclude statement in the migration policy file
define(
user_exclude_list,
(
PATH_NAME LIKE '/ibm/glues/.ltfsee/%'
OR PATH_NAME LIKE '%/.SpaceMan/%'
OR PATH_NAME LIKE '%/.snapshots/%'
)
)
8.2 Maximizing migration performance with redundant copies
To minimize drive mounts/unmounts and to maximize performance with multiple copies, you should set the mount limit per tape cartridge pool to equal the number of tape drives in the node group divided by the number of copies. The mount limit attribute of a tape cartridge pool specifies the maximum allocated number of drives used for migration for the tape cartridge pool. A value of 0 means no limit and is also the default value.
For example, if there are four drives and two copies initially, set the mount limit to 2 for the primary tape cartridge pool and 2 for the copy tape cartridge pool. This setting maximizes the migration performance because both the primary and copy jobs are run in parallel by using two tape drives each for each tape cartridge pool. This action also avoids unnecessary mounts/unmounts of tape cartridges.
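As a quick sanity check, the sizing rule above can be expressed as a small shell calculation. The drive and copy counts below are assumptions for illustration; substitute the values for your own node group:

```shell
drives=4    # tape drives in the node group (assumed value)
copies=2    # number of copies (primary pool + copy pool)

# Mount limit per pool = drives / copies
mountlimit=$(( drives / copies ))
echo "Set mountlimit to ${mountlimit} for each tape cartridge pool"
```

With four drives and two copies, this yields a mount limit of 2 per pool, matching the example above.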
To show the current mount limit setting for a tape cartridge pool, run the following command:
ltfsee pool show -p <poolname> [-l <libraryname>]
To set the mount limit setting for a tape cartridge pool, run the following command:
ltfsee pool set -p <poolname> [-l <libraryname>] -a <attribute> -v <value>
To set the mount limit attribute to 2, run the ltfsee pool show and ltfsee pool set commands, as shown in Example 8-2.
Example 8-2 Set the mount limit attribute to 2
[root@ltfsml1 ~]# ltfsee pool show -p primary_ltfsml1 -l lib_ltfsml1
Attribute Value
poolname primary_ltfsml1
poolid 17fba0a9-a49b-4f16-baf5-5e0395136468
devtype LTO
format 0xFFFFFFFF
worm no (0)
nodegroup G0
fillpolicy Default
owner System
mountlimit 0
 
[root@ltfsml1 ~]# ltfsee pool set -p primary_ltfsml1 -l lib_ltfsml1 -a mountlimit -v 2
 
[root@ltfsml1 ~]# ltfsee pool show -p primary_ltfsml1 -l lib_ltfsml1
Attribute Value
poolname primary_ltfsml1
poolid 17fba0a9-a49b-4f16-baf5-5e0395136468
devtype LTO
format 0xFFFFFFFF
worm no (0)
nodegroup G0
fillpolicy Default
owner System
mountlimit 2
8.3 Changing the SSH daemon settings
The default values for MaxSessions and MaxStartups are too low and must be increased to allow for successful operations with IBM Spectrum Archive EE:
MaxSessions specifies the maximum number of open sessions that is permitted per network connection. The default is 10.
MaxStartups specifies the maximum number of concurrent unauthenticated connections to the SSH daemon. Additional connections are dropped until authentication succeeds or the LoginGraceTime expires for a connection. The default is 10:30:100.
To change MaxSessions and MaxStartups values to 1024, complete the following steps:
1. Edit the /etc/ssh/sshd_config file to set the MaxSessions and MaxStartups values to 1024:
MaxSessions = 1024
MaxStartups = 1024
2. Restart the sshd service by running the following command:
systemctl restart sshd.service
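The edit in step 1 can also be scripted. The following sketch works on a scratch copy of the configuration file so that nothing changes until you review the result; the sed expressions assume the two directives already exist in the file:

```shell
cfg=$(mktemp)
# Stand-in for /etc/ssh/sshd_config with the default values.
printf 'MaxSessions 10\nMaxStartups 10:30:100\n' > "$cfg"

# Raise both limits to 1024 (writes a reviewed copy, not the live file).
sed -e 's/^MaxSessions .*/MaxSessions 1024/' \
    -e 's/^MaxStartups .*/MaxStartups 1024/' "$cfg" > "$cfg.new"

updated=$(grep -c '1024' "$cfg.new")
rm -f "$cfg" "$cfg.new"
```

After reviewing the copy, apply the same edits to /etc/ssh/sshd_config and restart sshd as shown in step 2.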
8.4 Setting mmapplypolicy options for increased performance
The default values of the mmapplypolicy command options should be changed when running with IBM Spectrum Archive EE. Adjust the following options for enhanced performance:
-B MaxFiles
Specifies how many files are passed for each invocation of the EXEC script. The default value is 100. If the number of files exceeds the value that is specified for MaxFiles, mmapplypolicy starts the external program multiple times.
The preferred value for IBM Spectrum Archive EE is 10000.
-m ThreadLevel
The number of threads that are created and dispatched within each mmapplypolicy process during the policy execution phase. The default value is 24.
The preferred value for IBM Spectrum Archive EE is 2x the number of drives.
--single-instance
Ensures that, for the specified file system, only one instance of mmapplypolicy that is started with the --single-instance option can run at one time. If another instance of mmapplypolicy is started with the --single-instance option, this invocation does nothing and terminates.
As a preferred practice, set the --single-instance option when running with IBM Spectrum Archive EE.
-s LocalWorkDirectory
Specifies the directory to be used for temporary storage during mmapplypolicy command processing. The default directory is /tmp. The mmapplypolicy command stores lists of candidate and chosen files in temporary files within this directory.
When you run mmapplypolicy, it creates several temporary files and file lists. If the specified file system or directories contain many files, this process can require a significant amount of temporary storage. The required storage is proportional to the number of files (NF) being acted on and the average length of the path name to each file (AVPL). To make a rough estimate of the space required, estimate NF and assume an AVPL of 80 bytes. With an AVPL of 80, the space required is roughly (300 X NF) bytes of temporary space.
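The rough estimate in the preceding paragraph is simple to compute. The file count below is an assumption for illustration:

```shell
nf=1000000                   # estimated number of files acted on (assumption)
bytes=$(( 300 * nf ))        # ~300 bytes of temporary space per file
mib=$(( bytes / 1024 / 1024 ))
echo "Plan for roughly ${mib} MiB in the -s LocalWorkDirectory"
```

For one million files, this works out to roughly 286 MiB of temporary space.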
8.5 Setting the inode size for the GPFS file system for increased performance
When you create the GPFS file systems, there is an option that is called -i InodeSize for the mmcrfs command. The option specifies the byte size of inodes. The supported inode sizes are 512, 1024, and 4096 bytes. The preferred inode size is 4096 (the default for newly created IBM Spectrum Scale file systems) for all IBM Spectrum Scale file systems for IBM Spectrum Archive EE. This size should include both user data file systems and the IBM Spectrum Archive EE metadata file system.
8.6 Determining the file states for all files within the GPFS file system
Typically, to determine the state of a file and to which tape cartridges the file is migrated, you run the ltfsee info files command. However, it is not practical to run this command for every file on the GPFS file system.
In Example 8-3, the file is in the migrated state and is only on the tape cartridge JC0090JC.
Example 8-3 Example of the ltfsee info files command
[root@ltfssn2 ~]# ltfsee info files /ibm/glues/LTFS_EE_FILE_v2OmH1Wq_gzvnyt.bin
Name: /ibm/glues/LTFS_EE_FILE_v2OmH1Wq_gzvnyt.bin
Tape id:JC0090JC Status: migrated
Thus, use list rules in an IBM Spectrum Scale policy instead. Example 8-4 is a sample set of list rules to display files and file system objects. For files in the migrated or premigrated state, the output line contains the tape cartridges on which the file resides.
Example 8-4 Sample set of list rules to display the file states
define(
user_exclude_list,
(
PATH_NAME LIKE '/ibm/glues/.ltfsee/%'
OR PATH_NAME LIKE '%/.SpaceMan/%'
OR PATH_NAME LIKE '%/lost+found/%'
OR NAME = 'dsmerror.log'
)
)
 
define(
is_premigrated,
(MISC_ATTRIBUTES LIKE '%M%' AND MISC_ATTRIBUTES NOT LIKE '%V%')
)
 
define(
is_migrated,
(MISC_ATTRIBUTES LIKE '%V%')
)
 
define(
is_resident,
(NOT MISC_ATTRIBUTES LIKE '%M%')
)
 
define(
is_symlink,
(MISC_ATTRIBUTES LIKE '%L%')
)
 
define(
is_dir,
(MISC_ATTRIBUTES LIKE '%D%')
)
 
RULE 'SYSTEM_POOL_PLACEMENT_RULE' SET POOL 'system'
 
RULE EXTERNAL LIST 'file_states'
EXEC '/root/file_states.sh'
 
RULE 'EXCLUDE_LISTS' LIST 'file_states' EXCLUDE
WHERE user_exclude_list
 
RULE 'MIGRATED' LIST 'file_states'
FROM POOL 'system'
SHOW('migrated ' || xattr('dmapi.IBMTPS'))
WHERE is_migrated
 
RULE 'PREMIGRATED' LIST 'file_states'
FROM POOL 'system'
SHOW('premigrated ' || xattr('dmapi.IBMTPS'))
WHERE is_premigrated
 
RULE 'RESIDENT' LIST 'file_states'
FROM POOL 'system'
SHOW('resident ')
WHERE is_resident
AND (FILE_SIZE > 0)
 
RULE 'SYMLINKS' LIST 'file_states'
DIRECTORIES_PLUS
FROM POOL 'system'
SHOW('symlink ')
WHERE is_symlink
 
RULE 'DIRS' LIST 'file_states'
DIRECTORIES_PLUS
FROM POOL 'system'
SHOW('dir ')
WHERE is_dir
AND NOT user_exclude_list
 
RULE 'EMPTY_FILES' LIST 'file_states'
FROM POOL 'system'
SHOW('empty_file ')
WHERE (FILE_SIZE = 0)
The policy runs a script that is named file_states.sh, which is shown in Example 8-5.
Example 8-5 Example of file_states.sh
#!/bin/bash
# Invoked by mmapplypolicy: $1 is the phase (TEST or LIST);
# in the LIST phase, $2 is the name of the file list to process.
if [[ $1 == 'TEST' ]]; then
rm -f /root/file_states.txt
elif [[ $1 == 'LIST' ]]; then
cat "$2" >> /root/file_states.txt
fi
To run the IBM Spectrum Scale policy, run the mmapplypolicy command with the -P option and the file states policy. This action produces a file that is called /root/file_states.txt, as shown in Example 8-6.
Example 8-6 Sample output of the /root/file_states.txt file
58382 1061400675 0 migrated 1 JC0090JC -- /ibm/glues/LTFS_EE_FILE_yLNo6EzFmWGRV5IiwQj7Du5f48DGlh5146TY6HQ4Ua_0VfnnC.bin
58385 1928357707 0 dirs -- /ibm/glues/empty_dir_2
58388 1781732144 0 migrated 1 JC0090JC -- /ibm/glues/LTFS_EE_FILE_lnBQlgqyQzH4QVkH54RXbX69KKdtul4HQBOu595Yx_78n50Y.bin
58389 1613275299 0 dirs -- /ibm/glues/empty_dir
58390 711115255 0 migrated 1 JC0090JC -- /ibm/glues/LTFS_EE_FILE_nJsfmmOLnmYN8kekbp4z_VJQ7Oj.bin
58391 386339945 0 migrated 1 JC0090JC -- /ibm/glues/LTFS_EE_FILE_0FbAueZL_9vYKRd.bin
58392 1245114627 0 dirs -- /ibm/glues/.empty_dir_3
58394 1360657693 0 migrated 1 JC0090JC -- /ibm/glues/LTFS_EE_FILE_JZg280w78VtJHG5mi25_l3hZhP.bin
58395 856815733 0 migrated 1 JC0090JC -- /ibm/glues/LTFS_EE_FILE_I9UIwx4ZmVNwZsGMTo3_qlRjKL.bin
Within the /root/file_states.txt file, the file states and file system objects can be easily identified for all IBM Spectrum Scale files, including the tape cartridges on which the files or file system objects reside.
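The resulting list is easy to post-process. The following sketch counts the migrated entries in a file_states.txt written in the format of Example 8-6; the sample lines and path names are hypothetical:

```shell
out=$(mktemp)
# Hypothetical sample lines in the Example 8-6 format.
cat > "$out" <<'EOF'
58382 1061400675 0 migrated 1 JC0090JC -- /ibm/glues/file1.bin
58385 1928357707 0 dirs -- /ibm/glues/empty_dir_2
58388 1781732144 0 migrated 1 JC0090JC -- /ibm/glues/file2.bin
EOF

# Field 4 holds the state keyword written by the SHOW clauses.
migrated=$(awk '$4 == "migrated"' "$out" | wc -l)
rm -f "$out"
```

The same pattern works for the premigrated, resident, and other state keywords.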
8.7 Memory considerations on the GPFS file system for increased performance
To make IBM Spectrum Scale more resistant to out-of-memory scenarios, adjust the vm.min_free_kbytes kernel tunable. This tunable controls the amount of free memory that the Linux kernel keeps available (that is, not used in any kernel caches).
When vm.min_free_kbytes is set to its default value, some configurations might encounter memory exhaustion symptoms when free memory should in fact be available. Setting vm.min_free_kbytes to a higher value of 5-6% of the total amount of physical memory helps to avoid such a situation.
To modify vm.min_free_kbytes, complete the following steps:
1. Check the total memory of the system by running the following command:
#free -k
2. Calculate 5-6% of the total memory in kilobytes.
3. Add vm.min_free_kbytes = <value from step 2> to the /etc/sysctl.conf file.
4. Run sysctl -p /etc/sysctl.conf to permanently set the value.
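Step 2 is a one-line calculation. The total memory below is an assumed example; substitute the value reported by free -k on your system:

```shell
total_kb=16384000                     # example: ~16 GB of RAM (assumption)
min_free=$(( total_kb * 5 / 100 ))    # 5% of total memory, in kilobytes
echo "vm.min_free_kbytes = ${min_free}"
```

For a 16 GB system, 5% works out to 819200 KB, which is the value to place in /etc/sysctl.conf.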
8.8 Increasing the default maximum number of inodes in IBM Spectrum Scale
The IBM Spectrum Scale default maximum number of inodes is sufficient for most configurations. However, for large systems with millions of files or more, the maximum number of inodes might need to be set at file system creation time or increased afterward. The maximum number of inodes must be larger than the expected sum of files and file system objects being managed by IBM Spectrum Archive EE (including the IBM Spectrum Archive EE metadata files if there is only one GPFS file system).
Inodes are allocated when they are used. When a file is deleted, the inode is reused, but inodes are never deallocated. When setting the maximum number of inodes in a file system, there is an option to preallocate inodes. However, in most cases there is no need to preallocate inodes because by default inodes are allocated in sets as needed. If you do decide to preallocate inodes, be careful not to preallocate more inodes than will be used; otherwise, the allocated inodes unnecessarily consume metadata space that cannot be reclaimed.
Further considerations when managing inodes:
1. For file systems that support parallel file creates, file system access can slow down as the total number of free inodes drops below 5% of the total number of inodes. Take this situation into consideration when creating or changing your file system.
2. Excessively increasing the value for the maximum number of files might cause the allocation of too much disk space for control structures.
To view the current number of used inodes, number of free inodes, and maximum number of inodes, run the following command:
mmdf Device
To set the maximum inode limit for the file system, run the following command:
mmchfs Device --inode-limit MaxNumInodes[:NumInodesToPreallocate]
8.9 Configuring IBM Spectrum Scale settings for performance improvement
The performance results in 3.4.4, “Performance” on page 53 were obtained by modifying the following IBM Spectrum Scale configuration attributes to optimize Spectrum Scale I/O. The following values were found to be optimal in our lab environment. They might need to be modified to fit your environment.
pagepool = 8 GB
workerThreads = 1024
nsdBufSpace = 50
nsdMaxWorkerThreads = 1024
nsdMinWorkerThreads = 1024
nsdMultiQueue = 64
nsdMultiQueueType = 1
nsdSmallThreadRatio = 1
nsdThreadsPerQueue = 48
numaMemoryInterleave = yes
maxStatCache = 0
ignorePrefetchLUNCount = yes
logPingPongSector = no
scatterBufferSize = 256k
The pagepool should be at least 4 GB but preferably around 8 GB depending on the system configuration. The file system block size should be 2 MB due to a disk subsystem of eight data disks plus one parity disk with a stripe size of 256k.
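The 2 MB block size follows directly from the disk layout that is described above. The arithmetic, with the disk counts taken from the text:

```shell
data_disks=8      # data disks in the RAID stripe (from the text)
stripe_kb=256     # stripe size per disk, in KiB

block_kb=$(( data_disks * stripe_kb ))    # suggested file system block size
echo "File system block size: ${block_kb} KiB (2 MiB)"
```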
8.10 Real world use cases for mmapplypolicy
Typically, customers who use Spectrum Archive with Spectrum Scale manage one of two types of archive systems. The first is a traditional archive configuration, where files are rarely accessed or updated. This configuration is intended for users who plan to keep all their data on tape only. The second is an active archive configuration, intended for users who continuously access the files. Each use case requires the creation of different Spectrum Scale policies.
8.10.1 Creating a traditional archive system policy
A traditional archive system uses a single policy that scans the Spectrum Scale namespace for any files over 5 MB in size and migrates them to tape. This process immediately frees disk space for newly generated files. See “Using a cron job” on page 160 for information about automating the periodic execution of this policy.
 
Note: In the following policies, some optional attributes are added to provide efficient (pre)migration, such as the SIZE attribute. This attribute specifies the aggregate amount of file data, in bytes, that is passed to the EXEC script in each invocation. The preferred setting, which is used in the examples below, is 20 GiB (21474836480 bytes).
Example 8-7 shows a simple migration policy that chooses files greater than 5 MB to be candidate migration files and stubs them to tape. This is a good base policy to start off and modify to your specific needs. For instance, if you need to have files on three storage pools, modify the OPTS parameter to include a third <pool>@<library>.
Example 8-7 Simple migration file
define(user_exclude_list,(PATH_NAME LIKE '/ibm/gpfs/.ltfsee/%' OR PATH_NAME LIKE '/ibm/gpfs/.SpaceMan/%'))
define(is_premigrated,(MISC_ATTRIBUTES LIKE '%M%' AND MISC_ATTRIBUTES NOT LIKE '%V%'))
define(is_migrated,(MISC_ATTRIBUTES LIKE '%V%'))
define(is_resident,(NOT MISC_ATTRIBUTES LIKE '%M%'))
 
RULE 'SYSTEM_POOL_PLACEMENT_RULE' SET POOL 'system'
 
RULE EXTERNAL POOL 'LTFSEE_FILES'
EXEC '/opt/ibm/ltfsee/bin/ltfsee'
OPTS '-p primary@lib_ltfseevm copy@lib_ltfseevm'
SIZE(21474836480)
 
RULE 'LTFSEE_FILES_RULE' MIGRATE FROM POOL 'system'
TO POOL 'LTFSEE_FILES'
WHERE FILE_SIZE > 5242880
AND (CURRENT_TIMESTAMP - MODIFICATION_TIME > INTERVAL '5' MINUTES)
AND (is_resident OR is_premigrated)
AND NOT user_exclude_list
8.10.2 Creating active archive system policies
An active archive system requires two policies to maintain the system. The first is a premigration policy that will select all files over 5 MB to premigrate to tape, allowing users to still quickly obtain their files from disk. To see how to place this premigration policy into a cron job to run every 6 hours, see “Using a cron job” on page 160.
Example 8-8 shows a simple premigration policy for files greater than 5 MB.
Example 8-8 Simple premigration policy for files greater than 5MB
define(user_exclude_list,(PATH_NAME LIKE '/ibm/gpfs/.ltfsee/%' OR PATH_NAME LIKE '/ibm/gpfs/.SpaceMan/%'))
define(is_premigrated,(MISC_ATTRIBUTES LIKE '%M%' AND MISC_ATTRIBUTES NOT LIKE '%V%'))
define(is_migrated,(MISC_ATTRIBUTES LIKE '%V%'))
define(is_resident,(NOT MISC_ATTRIBUTES LIKE '%M%'))
 
RULE 'SYSTEM_POOL_PLACEMENT_RULE' SET POOL 'system'
 
RULE EXTERNAL POOL 'LTFSEE_FILES'
EXEC '/opt/ibm/ltfsee/bin/ltfsee'
OPTS '-p primary@lib_ltfseevm copy@lib_ltfseevm'
SIZE(21474836480)
 
RULE 'LTFSEE_FILES_RULE' PREMIGRATE FROM POOL 'system'
THRESHOLD(0,100,0)
TO POOL 'LTFSEE_FILES'
WHERE FILE_SIZE > 5242880
AND (CURRENT_TIMESTAMP - MODIFICATION_TIME > INTERVAL '5' MINUTES)
AND is_resident
AND NOT user_exclude_list
The second policy is a fail-safe policy that is called when a low disk space event is triggered. Adding the WEIGHT attribute to the policy lets the user choose whether to start stubbing the largest files first or the least recently used files. When the fail-safe policy runs, it frees disk space down to a set percentage.
The following are the commands for setting a fail-safe policy and calling mmadcallback:
mmchpolicy gpfs failsafe_policy.txt
mmaddcallback MIGRATION --command /usr/lpp/mmfs/bin/mmapplypolicy --event lowDiskSpace --parms "%fsName -B 10000 -m <2x the number of drives>"
After setting the policy with the mmchpolicy command, run mmaddcallback with the fail-safe policy. This policy will run periodically and check whether the disk space has reached the threshold where stubbing is required to free up space.
Example 8-9 shows a simple failsafe_policy.txt, which is triggered when the Spectrum Scale file system reaches 80% occupancy and stubs the least recently used files until occupancy drops to 50%.
Example 8-9 failsafe_policy.txt
define(user_exclude_list,(PATH_NAME LIKE '/ibm/gpfs/.ltfsee/%' OR PATH_NAME LIKE '/ibm/gpfs/.SpaceMan/%'))
define(is_premigrated,(MISC_ATTRIBUTES LIKE '%M%' AND MISC_ATTRIBUTES NOT LIKE '%V%'))
define(is_migrated,(MISC_ATTRIBUTES LIKE '%V%'))
define(is_resident,(NOT MISC_ATTRIBUTES LIKE '%M%'))
 
RULE 'SYSTEM_POOL_PLACEMENT_RULE' SET POOL 'system'
 
RULE EXTERNAL POOL 'LTFSEE_FILES'
EXEC '/opt/ibm/ltfsee/bin/ltfsee'
OPTS '-p primary@lib_ltfsee copy@lib_ltfsee copy2@lib_ltfsee'
SIZE(21474836480)
 
RULE 'LTFSEE_FILES_RULE' MIGRATE FROM POOL 'system'
THRESHOLD(80,50)
WEIGHT(CURRENT_TIMESTAMP - ACCESS_TIME)
TO POOL 'LTFSEE_FILES'
WHERE FILE_SIZE > 5242880
AND is_premigrated
AND NOT user_exclude_list
8.11 Capturing a core file on RHEL with abrtd
The Automatic Bug Reporting Tool (ABRT) consists of the abrtd daemon and a number of system services and utilities to process, analyze, and report detected problems. The daemon runs silently in the background most of the time, and springs into action when an application crashes or a kernel fault is detected. The daemon then collects the relevant problem data, such as a core file if there is one, the crashing application’s command-line parameters, and other data of forensic utility.
For abrtd to work with IBM Spectrum Archive EE, two configuration directives must be modified in the /etc/abrt/abrt-action-save-package-data.conf file:
OpenGPGCheck = yes/no
Setting the OpenGPGCheck directive to yes, which is the default setting, tells ABRT to analyze and handle only crashes in applications that are provided by packages that are signed by the GPG keys, which are listed in the /etc/abrt/gpg_keys file. Setting OpenGPGCheck to no tells ABRT to catch crashes in all programs.
ProcessUnpackaged = yes/no
This directive tells ABRT whether to process crashes in executable files that do not belong to any package. The default setting is no.
Here are the preferred settings:
OpenGPGCheck = no
ProcessUnpackaged = yes
8.12 Antivirus considerations
Although IBM Spectrum Archive EE is tested in depth with many industry-leading antivirus products, there are a few considerations to review periodically:
Configure any antivirus software to exclude IBM Spectrum Archive EE and HSM work directories:
 – The library mount point (the /ltfs directory).
 – All IBM Spectrum Archive EE space-managed GPFS file systems (which includes the .SPACEMAN directory).
 – The IBM Spectrum Archive EE metadata directory (the GPFS file system that is reserved for IBM Spectrum Archive EE internal usage).
Use antivirus software that supports sparse or offline files. Be sure that it has a setting that allows it to skip offline or sparse files to avoid unnecessary recall of migrated files.
8.13 Automatic email notification with rsyslog
Rsyslog and its mail output module (ommail) can be used to send syslog messages from IBM Spectrum Archive EE through email. Each syslog message is sent in its own email, so pay special attention to applying the correct amount of filtering to prevent heavy spamming. The ommail plug-in is primarily meant for alerting users of certain conditions and should be used in a limited number of cases. For more information, see the rsyslog ommail documentation.
Here are two examples of how rsyslog ommail can be used with IBM Spectrum Archive EE by modifying the /etc/rsyslog.conf file:
If users want to send an email on all IBM Spectrum Archive EE registered error messages, the regular expression is “GLES[A-Z][0-9]*E”, as shown in Example 8-10.
Example 8-10 Email for all IBM Spectrum Archive EE registered error messages
$ModLoad ommail
$ActionMailSMTPServer us.ibm.com
$ActionMailFrom ltfsee@ltfsee_host1.tuc.stglabs.ibm.com
$ActionMailTo [email protected]
$template mailSubject,"LTFS EE Alert on %hostname%"
$template mailBody,"%msg%"
$ActionMailSubject mailSubject
:msg, regex, "GLES[A-Z][0-9]*E" :ommail:;mailBody
If users want to send an email on any failed migration per scan list, the regular expression is “[1-9][0-9]* failed”, as shown in Example 8-11.
Example 8-11 Email for any failed migration per scan list
$ModLoad ommail
$ActionMailSMTPServer us.ibm.com
$ActionMailFrom ltfsee@ltfsee_host1.tuc.stglabs.ibm.com
$ActionMailTo [email protected]
$template mailSubject,"LTFS EE Alert on %hostname%"
$template mailBody,"%msg%"
$ActionMailSubject mailSubject
:msg, regex, " [1-9][0-9]* failed" :ommail:;mailBody
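Before wiring either filter into rsyslog, you can sanity-check the two regular expressions against representative message lines with grep. The sample lines below are hypothetical:

```shell
# Hypothetical message lines of the kinds the two filters target.
err='GLESL159E(00440): Not all migration has been successful.'
res='GLESL038I(00448): Migration result: 1 succeeded, 3 failed, 0 duplicate'

echo "$err" | grep -qE 'GLES[A-Z][0-9]*E' && m1=match    # error-message filter
echo "$res" | grep -qE ' [1-9][0-9]* failed' && m2=match # failed-migration filter
```

Note that the second pattern matches only a nonzero failure count, so fully successful migration results do not generate email.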
8.14 Overlapping IBM Spectrum Scale policy rules
This section describes how you can avoid migration failures during your IBM Spectrum Archive EE system operations by having only non-overlapping IBM Spectrum Scale policy rules in place.
After a file is migrated to a tape cartridge pool and is in the migrated state, it cannot be migrated to other tape cartridge pools (unless it is first recalled from physical tape back to file system space). Do not use overlapping IBM Spectrum Scale policy rules within different IBM Spectrum Scale policy files that can select the same files for migration to different tape cartridge pools. If a file is already migrated, a later migration attempt for that file fails.
In Example 8-12, an attempt is made to migrate four files to tape cartridge pool pool2. Before the migration attempt, the files on Tape ID 2MA260L5 are already in tape cartridge pool pool1, and Tape ID 2MA262L5 holds one migrated and one premigrated file. The state of the files on these tape cartridges before the migration attempt is shown by the ltfsee info files command in Example 8-12.
Example 8-12 Display the state of files on IBM Spectrum Archive EE tape cartridges through the ltfsee info command
# ltfsee info files *.ppt
Name: fileA.ppt
Tape id:2MA260L5@lib_lto Status: migrated
Name: fileB.ppt
Tape id:2MA260L5@lib_lto Status: migrated
Name: fileC.ppt
Tape id:2MA262L5@lib_lto Status: migrated
Name: fileD.ppt
Tape id:2MA262L5@lib_lto Status: premigrated
The mig scan list file that is used in this example contains these entries, as shown in Example 8-13.
Example 8-13 Sample content of a scan list file
-- /ibm/gpfs/fileA.ppt
-- /ibm/gpfs/fileB.ppt
-- /ibm/gpfs/fileC.ppt
-- /ibm/gpfs/fileD.ppt
 
The attempt to migrate the files produces the results that are shown in Example 8-14.
Example 8-14 Migration of files by running the ltfsee migration command
# ltfsee migrate mig -p pool2@lib_lto
GLESL167I(00400): A list of files to be migrated has been sent to LTFS EE using scan id 3428780289.
GLESL159E(00440): Not all migration has been successful.
GLESL038I(00448): Migration result: 1 succeeded, 3 failed, 0 duplicate, 0 duplicate wrong pool, 0 not found, 0 too small to qualify for migration, 0 too early for migration.
The files on Tape ID 2MA260L5 (fileA.ppt and fileB.ppt) already are in tape cartridge pool pool1. Therefore, the attempt to migrate them to tape cartridge pool pool2 fails. For the files on Tape ID 2MA262L5, the attempt to migrate fileC.ppt also fails because that file already is migrated. Only the attempt to migrate the premigrated fileD.ppt file succeeds. So, one operation succeeded and three other operations failed.
8.15 Storage pool assignment
This section describes how you can facilitate your IBM Spectrum Archive EE system export activities by using different storage pools for logically different parts of an IBM Spectrum Scale namespace.
If you put different logical parts of an IBM Spectrum Scale namespace (such as the project directory) into different LTFS tape cartridge pools, you can Normal Export tape cartridges that contain only the files from that specific part of the IBM Spectrum Scale namespace (such as project abc). Otherwise, you must first recall all the files from the namespace of interest (such as the project directory of all projects), migrate the recalled files to an empty tape cartridge pool, and then Normal Export that tape cartridge pool.
The concept of different tape cartridge pools for different logical parts of an IBM Spectrum Scale namespace can be further isolated by using IBM Spectrum Archive node groups. A node group consists of one or more nodes that are connected to the same tape library. When tape cartridge pools are created, they can be assigned to a specific node group. For migration purposes, it allows certain tape cartridge pools to be used with only drives within the owning node group.
8.16 Tape cartridge removal
This section describes the information that must be reviewed before you physically remove a tape cartridge from the library of your IBM Spectrum Archive EE environment.
8.16.1 Reclaiming tape cartridges before you remove or export them
To avoid failed recall operations, reclaim a tape cartridge by running the ltfsee reclaim command before you remove it from the LTFS file system (by running the ltfsee tape move command) or Normal Export it from the LTFS library (by running the ltfsee export command without the --offline option).
8.16.2 Exporting tape cartridges before physically removing them from the library
A preferred practice is always to export a tape cartridge before it is physically removed from the library. If a removed tape cartridge is modified and then reinserted in the library, unpredictable behavior can occur.
8.17 Reusing LTFS formatted tape cartridges
In some scenarios, you might want to reuse tape cartridges for your IBM Spectrum Archive EE setup, which were used before as an LTFS formatted media in another LTFS setup.
Because these tape cartridges still might contain data from the previous usage, IBM Spectrum Archive EE recognizes the old content because LTFS is a self-describing format.
Before such tape cartridges can be reused within your IBM Spectrum Archive EE environment, they must be reformatted before they are added to an IBM Spectrum Archive EE tape cartridge pool. This task can be done by running ltfsee commands (reconcile or reclaim, plus an implicit format through the pool add command).
8.17.1 Reformatting LTFS tape cartridges through ltfsee commands
If a tape cartridge was used as an LTFS tape, you can check its contents after it is added to the IBM Spectrum Archive EE system and loaded to a drive. You can run the ls -la command to display content of the tape cartridge, as shown in Example 8-15.
Example 8-15 Display content of a used LTFS tape cartridge (non-IBM Spectrum Archive EE)
[root@ltfs97 ~]# ls -la /ltfs/153AGWL5
total 41452613
drwxrwxrwx 2 root root 0 Jul 12 2012 .
drwxrwxrwx 12 root root 0 Jan 1 1970 ..
-rwxrwxrwx 1 root root 18601 Jul 12 2012 api_test.log
-rwxrwxrwx 1 root root 50963 Jul 11 2012 config.log
-rwxrwxrwx 1 root root 1048576 Jul 12 2012 dummy.000
-rwxrwxrwx 1 root root 21474836480 Jul 12 2012 perf_fcheck.000
-rwxrwxrwx 1 root root 20971520000 Jul 12 2012 perf_migrec
lrwxrwxrwx 1 root root 25 Jul 12 2012 symfile -> /Users/piste/mnt/testfile
You can also discover whether it was previously an IBM Spectrum Archive EE tape cartridge or just a standard LTFS tape cartridge that was used by an LTFS LE or SDE release. Review the hidden directory .LTFSEE_DATA, as shown in Example 8-16. Its presence indicates that this cartridge was previously used as an IBM Spectrum Archive EE tape cartridge.
Example 8-16 Display content of a used LTFS tape cartridge (IBM Spectrum Archive EE)
[root@ltfs97 ~]# ls -la /ltfs/051AGWL5
total 0
drwxrwxrwx 4 root root 0 Apr 9 18:00 .
drwxrwxrwx 11 root root 0 Jan 1 1970 ..
drwxrwxrwx 3 root root 0 Apr 9 18:00 ibm
drwxrwxrwx 2 root root 0 Apr 9 18:00 .LTFSEE_DATA
The procedure for reuse and reformatting of a previously used LTFS tape cartridge depends on whether it was used before as an LTFS SDE/LE tape cartridge or as an IBM Spectrum Archive EE tape cartridge.
Before you start with the reformat procedures and examples, confirm the starting point: the tape cartridges that you want to reuse appear with the status “Unknown” in the output of the ltfsee info tapes command, as shown in Example 8-17.
Example 8-17 Output of the ltfsee info tapes command
[root@ltfs97 ~]# ltfsee info tapes
tape id device type size free space lib address node id drive serial storage pool status
037AGWL5 LTO 5 0GB 0GB 1038 - - - Unknown
051AGWL5 LTO 5 1327GB 1326GB 1039 - - PrimaryPool Valid LTFS
055AGWL5 LTO 5 1327GB 1327GB 1035 - - MPEGpool Valid LTFS
058AGWL5 LTO 5 1327GB 1326GB 1036 - - myfirstpool Valid LTFS
059AGWL5 LTO 5 1327GB 1327GB 258 1 1068000073 myfirstpool Valid LTFS
075AGWL5 LTO 5 1327GB 1325GB 1040 - - - Valid LTFS
YAM049L5 LTO 5 1327GB 1326GB 1042 - - - Valid LTFS
YAM086L5 LTO 5 1327GB 1327GB 1034 - - CopyPool Valid LTFS
YAM087L5 LTO 5 1327GB 1327GB 1033 - - MPEGpool Valid LTFS
 
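As a convenience, the status column can be filtered with standard shell tools. The following sketch is an assumption-laden illustration: it presumes the output layout of Example 8-17, with the tape ID in the first column and the status in the last field.

```shell
# filter_unknown: print the IDs of tapes whose status is "Unknown".
# Assumes the column layout of Example 8-17 (tape id first,
# status last); adjust the field numbers if your output differs.
filter_unknown() {
  awk '$NF == "Unknown" { print $1 }'
}

# Usage (assumption, not verified here):
#   ltfsee info tapes | filter_unknown
```

This keeps the filtering logic in one place so that the same check can be reused before each reformat run.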
Reformatting and reusing an LTFS SDE/LE tape cartridge
In this case, you run the ltfsee pool command to add this tape cartridge to an IBM Spectrum Archive EE tape cartridge pool and format it at the same time with the -F option, as shown in the following example and in Example 8-18:
ltfsee pool add myfirstpool <tapeID> -F
Example 8-18 Reformat a used LTFS SDE/LE tape cartridge
[root@ltfs97 ~]# ltfsee pool add myfirstpool 037AGWL5 -F
Tape 037AGWL5 successfully formatted.
Adding tape 037AGWL5 to storage pool myfirstpool
Reformatting and reusing an IBM Spectrum Archive EE tape cartridge
If you want to reuse an IBM Spectrum Archive EE tape cartridge, the ltfsee pool add -p myfirstpool <tapeID> -F command does not work by itself because IBM Spectrum Archive EE detects that the tape cartridge was previously used by IBM Spectrum Archive EE. Before you add it to a pool with the -F format option, you must resolve and clean up the relationships between the old files on the tape cartridge and your IBM Spectrum Archive EE environment. You can do this task by first running the ltfsee reconcile command against the specific tape cartridge, as shown in the following example and in Example 8-19:
ltfsee reconcile -t <tapeID> -p <poolName>
ltfsee pool add -p myfirstpool <tapeID> -F
Example 8-19 Reconcile and reformat a used IBM Spectrum Archive EE tape cartridge
[root@ltfs97 ~]# ltfsee reconcile -t 058AGWL5 -p myfirstpool
GLESS016I(00109): Reconciliation requested
GLESS049I(00610): Tapes to reconcile: 058AGWL5
GLESS050I(00619): GPFS filesystems involved: /ibm/glues
GLESS053I(00647): Number of pending migrations: 0
GLESS054I(00651): Creating GPFS snapshots:
GLESS055I(00656): Creating GPFS snapshot for /ibm/glues ( /dev/gpfs )
GLESS056I(00724): Scanning GPFS snapshots:
GLESS057I(00728): Scanning GPFS snapshot of /ibm/glues ( /dev/gpfs )
GLESS058I(00738): Removing GPFS snapshots:
GLESS059I(00742): Removing GPFS snapshot of /ibm/glues ( /dev/gpfs )
GLESS060I(00760): Processing scan results:
GLESS061I(00764): Processing scan results for /ibm/glues ( /dev/gpfs )
GLESS063I(00789): Reconciling the tapes:
GLESS001I(00815): Reconciling tape 058AGWL5 has been requested
GLESS002I(00835): Reconciling tape 058AGWL5 complete
GLESL172I(02984): Synchronizing LTFS EE tapes information
[root@ltfs97 ~]# ltfsee pool add -p myfirstpool 058AGWL5 -F
Tape 058AGWL5 successfully formatted.
Adding tape 058AGWL5 to storage pool myfirstpool
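The two-step sequence above (reconcile, then add with format) can be wrapped in a small helper. This is a sketch only: it assumes the ltfsee syntax shown in Example 8-19, and the LTFSEE variable exists solely so the helper can be exercised against a stub instead of a live library.

```shell
# reuse_ee_tape: reconcile a previously used IBM Spectrum Archive EE
# tape, then add it to a pool and format it (-F), as in Example 8-19.
# LTFSEE points at the real command by default; it can be overridden
# with a stub (for example, LTFSEE=echo) for dry runs.
LTFSEE=${LTFSEE:-ltfsee}

reuse_ee_tape() {
  pool=$1
  tape=$2
  # Stop immediately if reconciliation fails; formatting an
  # unreconciled EE tape is exactly what this procedure avoids.
  "$LTFSEE" reconcile -t "$tape" -p "$pool" || return 1
  "$LTFSEE" pool add -p "$pool" "$tape" -F
}

# Usage (assumption): reuse_ee_tape myfirstpool 058AGWL5
```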
8.18 Reusing non-LTFS tape cartridges
In some scenarios, you might want to reuse tape cartridges for your IBM Spectrum Archive EE setup, which were used before as non-LTFS formatted media in another server setup behind your tape library (such as backup tape cartridges from a Tivoli Storage Manager Server or an IBM Spectrum Protect environment).
Although these tape cartridges still might contain data from the previous usage, they can be used within IBM Spectrum Archive EE the same way as new, unused tape cartridges. For more information about how to add new tape cartridge media to an IBM Spectrum Archive EE tape cartridge pool, see 7.5.1, “Adding tape cartridges” on page 139.
8.19 Moving tape cartridges between pools
This section describes preferred practices to consider when you want to move a tape cartridge between tape cartridge pools. This information also relates to the function that is described in 7.5.2, “Moving tape cartridges” on page 142.
8.19.1 Avoiding changing assignments for tape cartridges that contain files
If a tape cartridge contains any files, a preferred practice is to not move the tape cartridge from one tape cartridge pool to another. If you remove the tape cartridge from one tape cartridge pool and then add it to another, the tape cartridge contains files that are targeted for multiple pools. In such a scenario, before you export the files that you want from that tape cartridge, you must recall any files that are not supposed to be exported.
8.19.2 Reclaiming a tape cartridge and changing its assignment
Before you remove a tape cartridge from one tape cartridge pool and add it to another tape cartridge pool, a preferred practice is to reclaim the tape cartridge so that no files remain on the tape cartridge when it is removed. This action prevents the scenario that is described in 8.19.1, “Avoiding changing assignments for tape cartridges that contain files” on page 222.
8.20 Offline tape cartridges
This section describes how you can help maintain the file integrity of offline tape cartridges by not modifying the files of offline exported tape cartridges. Also, a reference to information about solving import problems that are caused by modified offline tape cartridges is provided.
8.20.1 Do not modify the files of offline tape cartridges
When a tape cartridge is offline and outside the library, do not modify its IBM Spectrum Scale offline files on disk and do not modify its files on the tape cartridge. Otherwise, some files that exist on the tape cartridge might become unavailable to IBM Spectrum Scale.
8.20.2 Solving problems
For more information about solving problems that are caused by trying to import an offline exported tape cartridge that was modified while it was outside the library, see “Importing offline tape cartridges” on page 187.
8.21 Scheduling reconciliation and reclamation
This section provides information about scheduling regular reconciliation and reclamation activities.
The reconciliation process resolves any inconsistencies that develop between files in IBM Spectrum Scale and their equivalents in LTFS. The reclamation function frees up tape cartridge space that is occupied by non-referenced files and non-referenced content, that is, inactive data that still occupies space on the physical tape.
Schedule reconciliation and reclamation periodically, ideally during off-peak hours and at a frequency that ensures consistency between files and efficient use of the tape cartridges in your IBM Spectrum Archive EE environment.
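One way to schedule these activities is through cron. The following fragment is only an illustration: the schedule, the pool name (PrimaryPool), and the exact reconcile and reclaim options are assumptions that you must adapt to your environment and verify against the command-line reference in Chapter 11.

```shell
# /etc/cron.d/ltfsee-maintenance (illustrative fragment)
# Reconcile every Sunday at 01:00 and reclaim at 03:00 (off-peak).
# The pool name and command options are placeholders; verify them
# against the ltfsee command-line reference before use.
0 1 * * 0 root /opt/ibm/ltfsee/bin/ltfsee reconcile -p PrimaryPool
0 3 * * 0 root /opt/ibm/ltfsee/bin/ltfsee reclaim -p PrimaryPool
```

Running reconcile a couple of hours before reclaim gives the reconciliation jobs time to finish, so that reclamation operates on consistent index information.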
8.22 License Expiration Handling
License validation is done by the IBM Spectrum Archive EE program. If the license covers only a certain period (as is the case for the IBM Spectrum Archive EE Trial Version, which is available for three months), it expires when that period has passed. After expiration, the behavior of IBM Spectrum Archive EE changes in the following ways:
The state of the nodes changes to the following defined value:
NODE_STATUS_LICENSE_EXPIRED
The node status can be determined through an internal node command, which is also used in other parts of IBM Spectrum Archive EE.
When the license is expired, IBM Spectrum Archive EE can still read data, but it cannot write or migrate data. In that case, not all IBM Spectrum Archive EE commands are usable.
When the expired license is detected by the scheduler of the main IBM Spectrum Archive EE management component (MMM), MMM shuts down. This shutdown allows a proper cleanup if some jobs are still running or unscheduled, and it makes the user aware that IBM Spectrum Archive EE does not function because of the license expiration.
To give users access to files that were previously migrated to tape, IBM Spectrum Archive EE can be restarted, but it operates with limited functions: all functions that write to tape cartridges are unavailable. During the start of IBM Spectrum Archive EE (through MMM), nodes in the status NODE_STATUS_LICENSE_EXPIRED are detected.
IBM Spectrum Archive EE fails the following commands immediately:
migrate
import
export
reconcile
reclaim
These commands can write to a tape cartridge in certain cases, so they fail with an error message. Transparent access to a migrated file is not affected, but the deletion of the link and the data file on a tape cartridge as a result of a write or truncate recall is omitted.
In summary, the following steps occur after expiration:
1. The status of the nodes changes to the state NODE_STATUS_LICENSE_EXPIRED.
2. IBM Spectrum Archive EE shuts down to allow a proper clean-up.
3. IBM Spectrum Archive EE can be started again with limited functions.
8.23 Reassigning physical tape cartridges to another logical tape library
If you insert tape cartridges to your physical tape library, you must assign them to a specific logical tape library partition within this tape library. Then, these tape cartridges can be used exclusively by this logical library partition. The Advanced Library Management System (ALMS) feature of the IBM TS3500 tape library can assist you and provide different methods for assigning new tape cartridges.
The following methods that are used are the most popular for assigning a tape cartridge to a logical library:
Automatic tape cartridge assignment through a predefined Cartridge Assignment Policy (CAP).
Manual tape cartridge assignment through the appropriate functions of the IBM TS3500 Tape Library Specialist web interface (web GUI) by the user.
If you inserted new, unknown tape cartridges into the tape library (such as through the TS3500 I/O Station), this assignment does not change the logical location of the tape cartridge within the tape library. The following logical positions are available where a tape cartridge can be in a tape library (which are known as storage element addresses):
Home Slot (starting at address 1025)
I/O Station (real station or virtual station) (769 - 1023)
Tape drive (starting at 257)
When ALMS is active and Virtual I/O is enabled, a tape cartridge logically stays in the I/O Station, whether it is a physical I/O Station or a virtual I/O Station. Even after ALMS moves the tape cartridge to its final slot position within the library, the cartridge keeps its virtual I/O Station address (such as 769).
The server application behind the logical tape library must recognize new tape cartridges and move them from the virtual or physical I/O Station to a home slot. In the context of this book, IBM Spectrum Archive EE is the server application that is responsible for the management of the physical tapes. Therefore, IBM Spectrum Archive EE must detect tape cartridges in any I/O Station and move them for use into a home storage slot under its control.
 
Important: In IBM Spectrum Archive EE, the I/O Station slots are named IE slots.
The following methods are available to IBM Spectrum Archive EE to recognize new tape cartridges that are waiting in the (logical) tape library I/O Station before they can be moved to home slots and registered into IBM Spectrum Archive EE:
The manual method retrieves the tape library inventory by running the ltfsee retrieve command. This method can be used at any time, such as after a tape cartridge is assigned to the logical library of IBM Spectrum Archive EE.
The automated method uses SCSI Unit Attention reporting from the tape library whenever a new assignment is done (through a CAP or by the user through the web-based GUI).
The automated reporting method of new tape cartridge assignments through SCSI Unit Attention reporting from the tape library must be enabled. You can enable this function through the TS3500 Tape Library Specialist web interface. Open the web interface and click Cartridges → Cartridge Assignment Policy. Select the All other XXX cartridges option, where XXX stands for LTO or 3592 media depending on your specific setup. From the drop-down menu, choose Modify and then select Go, as shown in Figure 8-1.
Figure 8-1 General setting for Cartridge Assignment Policies
A new window opens in which you can enable the function for Enable Reporting of Unit Attentions for All New Assignments for a specific logical library (as shown in Figure 8-2). Activate the feature, select your IBM Spectrum Archive EE logical library partition (such as in our lab setup LTFS_L05), and click Apply.
Figure 8-2 Change Logical Library owning All Other LTO VOLSERs window
For more information, see the IBM TS3500 tape library IBM Knowledge Center at this website:
8.24 Disaster recovery
This section describes the preparation of an IBM Spectrum Archive EE DR setup and the steps that you must perform before and after a disaster to recover your IBM Spectrum Archive EE environment.
8.24.1 Tiers of disaster recovery
Understanding DR strategies and solutions can be complex. To help categorize the various solutions and their characteristics (for example, costs, recovery time capabilities, and recovery point capabilities), tier levels with their required components were defined. The idea behind such a classification is to help those concerned with DR determine the following issues:
What solution they have
What solution they require
What it requires to meet greater DR objectives
In 1992, the SHARE user group in the United States, together with IBM, defined a set of DR tier levels to describe and quantify the various methodologies for successful mission-critical computer system DR implementations. Within the IT Business Continuance industry, the tier concept continues to be used, and it remains useful for describing today’s DR capabilities.
The tiers’ definitions are designed so that emerging DR technologies can also be applied, as shown in Table 8-1.
Table 8-1 Summary of disaster recovery tiers (SHARE)
Tier  Description
6     Zero data loss
5     Two-site two-phase commit
4     Electronic vaulting to hotsite (active secondary site)
3     Electronic vaulting
2     Offsite vaulting with a hotsite (PTAM + hot site)
1     Offsite vaulting (Pickup Truck Access Method (PTAM))
0     No offsite data
In the context of the IBM Spectrum Archive EE product, this section focuses only on the tier 1 strategy because this is the only supported solution that you can achieve with a product that handles physical tape media (offsite vaulting).
For more information about the other DR tiers and general strategies, see Disaster Recovery Strategies with Tivoli Storage Management, SG24-6844.
Tier 1: Offsite vaulting
A tier 1 installation is defined as having a disaster recovery plan (DRP) that backs up and stores its data at an offsite storage facility, and determines some recovery requirements. As shown in Figure 8-3, backups are taken that are stored at an offsite storage facility. This environment also can establish a backup platform, although it does not have a site at which to restore its data, nor the necessary hardware on which to restore the data, for example, compatible tape devices.
Figure 8-3 Tier 1 - offsite vaulting (PTAM)
Because vaulting and retrieval of data is typically handled by couriers, this tier is described as PTAM. PTAM is used by many sites because it is a relatively inexpensive option. However, it can be difficult to manage: it is hard to know exactly where the data is at any point, and usually only selected data is saved. Certain requirements are determined and documented in a contingency plan, and optional backup hardware and a backup facility are available.
Recovery depends on when hardware can be supplied, or possibly when a building for the new infrastructure can be located and prepared. Although some customers are on this tier and seemingly can recover if there is a disaster, one factor that is sometimes overlooked is the recovery time objective (RTO). For example, although it is possible to recover data eventually, it might take several days or weeks. An outage of business data for this long can affect business operations for several months or even years (if not permanently).
 
Important: With IBM Spectrum Archive EE, the recovery time can be improved because after the import of the vaulting tape cartridges into a recovered production environment, the user data is immediately accessible without the need to copy back content from the tape cartridges into a disk or file system.
8.24.2 Preparing IBM Spectrum Archive EE for a tier 1 disaster recovery strategy (offsite vaulting)
IBM Spectrum Archive EE has all the tools and functions that you need to prepare a tier 1 DR strategy for offsite vaulting of tape media.
The fundamental concept is based on the IBM Spectrum Archive EE function to create replicas and redundant copies of your file system data to tape media during migration (see 7.7.4, “Replicas and redundant copies” on page 161). IBM Spectrum Archive EE enables the creation of a replica plus two more redundant replicas (copies) of each IBM Spectrum Scale file during the migration process.
The first replica is the primary copy, and other replicas are called redundant copies. Redundant copies must be created in tape cartridge pools that are different from the tape cartridge pool of the primary replica and different from the tape cartridge pools of other redundant copies. Up to two redundant copies can be created, which means that a specific file from the GPFS file system can be stored on three different physical tape cartridges in three different IBM Spectrum Archive EE tape cartridge pools. The tape cartridge where the primary replica is stored and the tapes that contain the redundant copies are referenced in the IBM Spectrum Scale inode with an IBM Spectrum Archive EE DMAPI attribute. The primary replica is always listed first.
Redundant copies are written to their corresponding tape cartridges in the IBM Spectrum Archive EE format. These tape cartridges can be reconciled, exported, reclaimed, or imported by using the same commands and procedures that are used for standard migration without replica creation.
Redundant copies must be created in tape cartridge pools that are different from the pool of the primary replica and different from the pools of other redundant copies. Therefore, create a DR pool named DRPool that exclusively contains the media you plan to Offline Export for offline vaulting. You also must plan for the following issues:
Which file system data is migrated (as another replica) to the DR pool?
How often do you plan the export and removal of physical tapes for offline vaulting?
How do you handle media lifecycle management with the tape cartridges for offline vaulting?
What are the DR steps and procedure?
When the redundant copy is created and stored at an external site (offline vaulting) for DR purposes, the primary replica, the IBM Spectrum Archive EE server, and IBM Spectrum Scale might no longer exist after a disaster. Section 8.24.3, “IBM Spectrum Archive EE tier 1 DR procedure” on page 229 describes the steps to recover (import) the offline vaulting tape cartridges into a newly installed IBM Spectrum Archive EE environment, re-create the GPFS file system information, and regain access to your IBM Spectrum Archive EE data.
 
Important: The migration of a pre-migrated file does not create new replicas.
Example 8-20 shows you a sample migration policy to migrate all files to three pools. To have this policy run periodically, see “Using a cron job” on page 160.
Example 8-20 Sample of a migration policy
define(user_exclude_list,(PATH_NAME LIKE '/ibm/gpfs/.ltfsee/%' OR PATH_NAME LIKE '/ibm/gpfs/.SpaceMan/%'))
define(is_premigrated,(MISC_ATTRIBUTES LIKE '%M%' AND MISC_ATTRIBUTES NOT LIKE '%V%'))
define(is_migrated,(MISC_ATTRIBUTES LIKE '%V%'))
define(is_resident,(NOT MISC_ATTRIBUTES LIKE '%M%'))
 
RULE 'SYSTEM_POOL_PLACEMENT_RULE' SET POOL 'system'
 
RULE EXTERNAL POOL 'LTFSEE_FILES'
EXEC '/opt/ibm/ltfsee/bin/ltfsee'
OPTS '-p primary@lib_ltfseevm copy@lib_ltfseevm DR@lib_ltfseevm'
SIZE(21474836480)
 
RULE 'LTFSEE_FILES_RULE' MIGRATE FROM POOL 'system'
TO POOL 'LTFSEE_FILES'
WHERE is_premigrated
AND NOT (user_exclude_list)
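A policy file like Example 8-20 takes effect when it is applied with the IBM Spectrum Scale mmapplypolicy command. The sketch below wraps that call; the device and policy file paths are examples only (not from this setup), and MMAPPLYPOLICY can be pointed at a stub for dry runs.

```shell
# run_migration_policy: apply a GPFS / IBM Spectrum Scale policy file
# to a file system device with mmapplypolicy.
# MMAPPLYPOLICY can be overridden (for example, MMAPPLYPOLICY=echo)
# to rehearse the invocation without touching the cluster.
MMAPPLYPOLICY=${MMAPPLYPOLICY:-mmapplypolicy}

run_migration_policy() {
  device=$1
  policy=$2
  "$MMAPPLYPOLICY" "$device" -P "$policy"
}

# Usage (paths are examples):
#   run_migration_policy /dev/gpfs /root/migrate_replicas.policy
```

To run the policy periodically, a cron entry can call this helper, as suggested in “Using a cron job” on page 160.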
After you create redundant replicas of your file system data on different IBM Spectrum Archive EE tape cartridge pools for offline vaulting, you can Normal Export the tape cartridges by running the IBM Spectrum Archive EE export command. For more information, see 7.18.2, “Exporting” on page 188.
 
Important: The IBM Spectrum Archive EE export command does not eject the tape cartridge to the physical I/O station of the attached tape library. To eject the DR tape cartridges from the library to take them out for offline vaulting, you can run the ltfsee tape move command with the option ieslot. For more information, see 7.5, “Tape library management” on page 139 and 11.1, “Command-line reference” on page 322.
8.24.3 IBM Spectrum Archive EE tier 1 DR procedure
To perform a DR to restore a destroyed IBM Spectrum Archive EE server and IBM Spectrum Scale with the offline vaulting tape cartridges, complete the following steps:
1. Before you start a DR, a set of IBM Spectrum Archive EE exported tape cartridges for an Offline vault must be created and a “new” Linux server and an IBM Spectrum Archive EE cluster environment, including IBM Spectrum Scale, must be set up.
2. Confirm that the new installed IBM Spectrum Archive EE cluster is running and ready for the import operation by running the following commands:
 – # ltfsee status
 – # ltfsee info tapes
 – # ltfsee info pools
3. Insert the tape cartridges for DR into the tape library, such as the IBM TS3500 I/O station.
4. By using the IBM TS3500 user interface (web-based GUI) or a set CAP, assign the DR tape cartridges to the IBM Spectrum Archive EE logical tape library partition of your new IBM Spectrum Archive EE server.
5. From the IBM Spectrum Archive EE program, retrieve the updated inventory information from the logical tape library by running the following command:
# ltfsee retrieve
6. Import the DR tape cartridges into the IBM Spectrum Archive EE environment by running the ltfsee rebuild command. The ltfsee rebuild command features various options that you can specify. Therefore, it is important to become familiar with these options, especially when you are performing DR. For more information, see Chapter 11, “Reference” on page 321.
When you rebuild from one or more tape cartridges, the ltfsee rebuild command adds the specified tape cartridges to the IBM Spectrum Archive EE library and imports the files on those tape cartridges into the IBM Spectrum Scale namespace. This process puts the stub files back into IBM Spectrum Scale, but the imported files stay in a migrated state, which means that the data remains on tape. The data portion of a file is not copied to disk during the import.
Tape cartridges that are rebuilt from are left outside any tape cartridge pool and cannot be targeted for migration. If you want a rebuilt tape cartridge to be available for migration, you must assign it to a tape cartridge pool.
Restoring file system objects and files from tape
If a GPFS file system fails, the migrated files and the saved file system objects (empty regular files, symbolic links, and empty directories) can be restored from tapes by running the ltfsee rebuild command.
The ltfsee rebuild command operates similarly to the ltfsee import command. It reinstantiates the stub files in IBM Spectrum Scale for migrated files, and the state of those files changes to the migrated state. Additionally, the ltfsee rebuild command re-creates the file system objects in IBM Spectrum Scale for saved file system objects.
 
Note: When a symbolic link is saved to tape and then restored by the ltfsee rebuild command, the target of the symbolic link is kept. It can cause the link to break. Therefore, after a symbolic link is restored, it might need to be moved manually to its original location on IBM Spectrum Scale.
Typical recovery procedure by using the ltfsee rebuild command
Here is a typical user scenario for recovering migrated files and saved file system objects from tape by running the ltfsee rebuild command:
1. Re-create the GPFS file system or create a GPFS file system.
2. Restore the migrated files and saved file system objects from tape by running the ltfsee rebuild command:
ltfsee rebuild -P /gpfs/ltfsee/rebuild -p PrimPool -t LTFS01L6 LTFS02L6 LTFS03L6
/gpfs/ltfsee/rebuild is the directory in IBM Spectrum Scale to restore to, PrimPool is the storage pool into which the tapes are imported, and LTFS01L6, LTFS02L6, and LTFS03L6 are tapes that contain migrated files or saved file system objects.
Rebuild processing for unexported tapes that are not reconciled
The ltfsee rebuild command might encounter tapes that are not reconciled when it is applied to tapes that were not exported from IBM Spectrum Archive EE. In that case, the following situations can occur while files and file system objects are restored, and each should be handled as described:
The tapes might have multiple generations of a file or a file system object. If so, the ltfsee rebuild command restores an object from the latest one that is on the tapes that are specified from the command.
The tapes might not reflect the latest file information from IBM Spectrum Scale. If so, the ltfsee rebuild command might restore files or file system objects that were removed from IBM Spectrum Scale.
Rebuild and restore considerations
Observe these additional considerations when you run the ltfsee rebuild command:
While the ltfsee rebuild command is running, do not modify or access the files or file system objects to be restored. During the rebuild process, an old generation of the file can appear on IBM Spectrum Scale.
Avoid running the ltfsee rebuild command with any tape that is being used by IBM Spectrum Archive EE. Otherwise, significant errors are likely to occur because two stub files that are migrated to one tape are created in IBM Spectrum Scale.
 
Note: Observe the following advice if you run the ltfsee rebuild command with a tape in use by IBM Spectrum Archive EE.
You must resolve the situation (two stub files for one migrated file) before any other operations are made against the files or file system objects that correspond to the tape.
For example, consider the case where a file is migrated to a tape, and then the file accidentally is removed from IBM Spectrum Scale. To restore the file from the tape, you can run the ltfsee rebuild command with the tape, which rebuilds all files and file system objects in the tape to a specified directory. Next, you must choose the file to be restored and move it to the original location in IBM Spectrum Scale. Then, you must remove all other files and file system objects in the specified directory before any actions (migration, recall, save, reconcile, reclaim, import, export, or rebuild) are made against the tape.
Important: With the first release of IBM Spectrum Archive EE, access control list (ACL) file system information is not supported and not recovered if you rebuild the IBM Spectrum Scale data after a disaster through the replica tape cartridges. All recovered files under IBM Spectrum Scale have a generic permission setting, as shown in the following example:
-rw------- 1 root root 104857600 Apr 16 14:51 file1.img
The IBM Spectrum Scale Extended Attributes (EAs) are supported by IBM Spectrum Archive EE and are restored. These EAs are imported because they are used by various applications to store more file information. For more information about limitations, see Table 7-4 on page 200.
8.25 IBM Spectrum Archive EE problem determination
If you discover an error message or a problem while you are running and operating the IBM Spectrum Archive EE program, you can check the IBM Spectrum Archive EE log file as a starting point for problem determination.
The IBM Spectrum Archive EE log file can be found in the following directory:
/var/log/ltfsee.log
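When you scan the log, it can help to display only error-level messages. IBM Spectrum Archive EE message IDs end in a severity letter (for example, GLESL159E for an error and GLESL038I for an informational message), so a simple filter such as the following sketch can narrow the output:

```shell
# show_errors: print only error-level (...E) IBM Spectrum Archive EE
# messages from a log file. Message IDs look like GLESL159E: the
# trailing letter indicates the severity (I=info, W=warning, E=error).
show_errors() {
  grep -E 'GLES[A-Z][0-9]+E' "$1"
}

# Usage: show_errors /var/log/ltfsee.log
```

Filtering this way is usually the fastest path from a failed operation to the specific GLES message that explains it.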
In Example 8-21, we attempted to migrate two files (document10.txt and document20.txt) to a pool (myfirstpool) that contained two newly formatted and added physical tapes (055AGWL5 and 055AGWL5). Only one file was migrated successfully, so we checked the ltfsee.log file to determine why the other file was not migrated.
Example 8-21 Check the ltfsee.log file
GLESL167I(00400): A list of files to be migrated has been sent to LTFS EE using scan id 1887703553.
GLESL159E(00440): Not all migration has been successful.
GLESL038I(00448): Migration result: 1 succeeded, 1 failed, 0 duplicate, 0 duplicate wrong pool, 0 not found, 0 too small to qualify for migration, 0 too early for migration.
[root@ltfs97 glues]#
[root@ltfs97 glues]# vi /var/log/ltfsee.log
2016-12-14T09:17:08.538724-07:00 ltfs97 mmm[7889]: GLESM148E(00538): File /ibm/gpfs/document20.txt is already migrated and will be skipped.
2016-12-14T09:17:18.540054-07:00 ltfs97 ltfsee[1948]: GLESL159E(00440): Not all migration has been successful.
In Example 8-21, the message in the IBM Spectrum Archive EE log file shows that one of the files we tried to migrate was already in a migrated state and was therefore skipped, as shown in the following example:
GLESM148E(00883): File /ibm/glues/document20.txt is already migrated and will be skipped.
For more information about problem determination, see Chapter 10, “Troubleshooting” on page 249.
8.26 Collecting IBM Spectrum Archive EE logs for support
If you discover a problem with your IBM Spectrum Archive EE program and open a ticket at the IBM Support Center, you might be asked to provide a package of IBM Spectrum Archive EE log files.
A Linux script that is shipped with IBM Spectrum Archive EE collects all of the needed files and logs so that you can conveniently provide them to IBM Support. The script also compresses the files into a single package.
To generate the compressed .tar file and provide it on request to IBM Support, run the following command:
ltfsee_log_collection
Example 8-22 shows the output of the ltfsee_log_collection command. During the log collection run, you are asked what information you want to collect. If you are unsure, enter y to collect all of the information. At the end of the output, you can find the file name and location of the stored log package.
Example 8-22 ltfsee_log_collection command
[root@ltfs97 ~]# ltfsee_log_collection
 
LTFS Enterprise Edition - log collection program
 
This program collects the following information from your GPFS cluster.
(1) Log files that are generated by GPFS, LTFS Enterprise Edition
(2) Configuration information that are configured to use GPFS and LTFS
Enterprise Edition
(3) System information including OS distribution and kernel, and hardware
information (CPU and memory)
 
If you want to collect all the above information, input 'y'.
If you want to collect only (1) and (2), input 'p' (partial).
If you don't want to collect any information, input 'n'.
 
The collected data will be zipped in the ltfsee_log_files_<date>_<time>.tar.gz.
You can check the contents of it before submitting to IBM.
 
Input > y
Create a temporary directory '/root/ltfsee_log_files'
copy all log files to the temporary directory
/var/adm/ras/mmfs.log.latest
/var/adm/ras/mmfs.log.previous
/opt/tivoli/tsm/client/hsm/bin/dsmerror.log
/var/log/ltfs.log
/var/log/ltfsee.log
/etc/ltfs.conf
/etc/ltfs.conf.local
/etc/rc.gpfshsm
/opt/tivoli/tsm/client/ba/bin/dsm.opt
/opt/tivoli/tsm/client/ba/bin/dsm.sys
/var/log/messages
get gpfs/hsm configuration
get running process information
get system configuration
list all nodes in the cluster
zip temporary directory
ltfsee_log_files/
ltfsee_log_files/reconcile/
ltfsee_log_files/reconcile/gpfs/
ltfsee_log_files/reconcile/gpfs/gpfs_lists_per_tape/
ltfsee_log_files/reconcile/gpfs/gpfs_lists_per_tape/058AGWL5
ltfsee_log_files/reconcile/gpfs/gpfs_lists_per_tape/YAM049L5
ltfsee_log_files/reconcile/gpfs/gpfs_policies/
ltfsee_log_files/reconcile/gpfs/gpfs_policies/17750464391302654144
ltfsee_log_files/reconcile/gpfs/ltfs_lists/
ltfsee_log_files/reconcile/gpfs/ltfs_lists/YAM049L5
ltfsee_log_files/reconcile/gpfs/gpfs_fss/
ltfsee_log_files/reconcile/gpfs/gpfs_fss/gpfs_fss_invlvd
ltfsee_log_files/reconcile/gpfs/gpfs_fss/gpfs_fss_all
ltfsee_log_files/reconcile/gpfs/gpfs_fss/gpfs_fss_all_dmapi_mntd
ltfsee_log_files/reconcile/gpfs/gpfs_fss/gpfs_fss_all_dmapi
ltfsee_log_files/reconcile/gpfs/gpfs_lists/
ltfsee_log_files/reconcile/gpfs/gpfs_lists/17750464391302654144
ltfsee_log_files/htohru9.ltd.sdl/
ltfsee_log_files/htohru9.ltd.sdl/process.log
ltfsee_log_files/htohru9.ltd.sdl/mmfs.log.latest
ltfsee_log_files/htohru9.ltd.sdl/ltfs.conf
ltfsee_log_files/htohru9.ltd.sdl/ltfsee.log
ltfsee_log_files/htohru9.ltd.sdl/messages
ltfsee_log_files/htohru9.ltd.sdl/dsm.sys
ltfsee_log_files/htohru9.ltd.sdl/dsmerror.log
ltfsee_log_files/htohru9.ltd.sdl/ltfs.log
ltfsee_log_files/htohru9.ltd.sdl/ltfsee_info.log
ltfsee_log_files/htohru9.ltd.sdl/information
ltfsee_log_files/htohru9.ltd.sdl/ltfs.conf.local
ltfsee_log_files/htohru9.ltd.sdl/rc.gpfshsm
ltfsee_log_files/htohru9.ltd.sdl/mmfs.log.previous
ltfsee_log_files/htohru9.ltd.sdl/dsm.opt
ltfsee_log_files/mmlsfs.log
ltfsee_log_files/mmlsconfig.log
ltfsee_log_files/dsmmigfs_query.log
remove temporary directory
The log files collection process is completed
zipped file name: ltfsee_log_files_20130410_145325.tar.gz
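For unattended collection (for example, from a scheduled job), the interactive prompt can in principle be answered from standard input. The following sketch is an assumption, not a documented interface: whether ltfsee_log_collection accepts its y/p/n answer from a pipe must be verified in your environment before you script it. The destination directory name is also only an example.

```shell
# Sketch: unattended log collection. The piped 'y' answer is an
# ASSUMPTION about ltfsee_log_collection's stdin handling - verify first.
# Build a date-stamped working directory for the resulting archive.
DEST=/var/tmp/ltfsee_logs_$(date +%Y%m%d)
mkdir -p "$DEST"
# Real invocation (commented out; requires an IBM Spectrum Archive EE node):
# cd "$DEST" && echo y | ltfsee_log_collection
echo "collecting into $DEST"
```

The tar.gz package is written to the current working directory, which is why the sketch changes into the date-stamped directory before running the tool.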
8.27 Backing up file systems that are not managed by IBM Spectrum Archive EE
The Tivoli Storage Manager Backup/Archive client and the Tivoli Storage Manager hierarchical storage management (HSM) client from the IBM Spectrum Protect family are components of IBM Spectrum Archive EE and are installed as part of the IBM Spectrum Archive EE installation process. Therefore, you can use them to back up local server file systems that are not part of the GPFS or IBM Spectrum Scale cluster for IBM Spectrum Archive EE. However, backing up a GPFS or IBM Spectrum Scale file system that is used by IBM Spectrum Archive EE is not supported.
 
8.27.1 Considerations
Consider the following points when you are using the Tivoli Storage Manager Backup/Archive client in the IBM Spectrum Archive EE environment:
Licensing
You must purchase a separate Tivoli Storage Manager client license from the IBM Spectrum Protect family to back up your server by using the Tivoli Storage Manager Backup/Archive client that is supplied with IBM Spectrum Archive EE.
Compatibility
The Tivoli Storage Manager Backup/Archive client Version 7.1.1.3 from the IBM Spectrum Protect family is installed with IBM Spectrum Archive EE. This version is required by IBM Spectrum Archive EE and was modified to run with IBM Spectrum Archive EE. Therefore, it cannot be upgraded or downgraded independently of the IBM Spectrum Archive EE installation.
This version might cause compatibility issues if, for example, you are running a Tivoli Storage Manager V5.5 server. For more information about the current versions of Tivoli Storage Manager server that are compatible with this Tivoli Storage Manager client, see this website:
8.27.2 Backing up a GPFS or IBM Spectrum Scale environment
This section describes the standard method of backing up a GPFS or IBM Spectrum Scale environment (GPFS file system) to a Tivoli Storage Manager server of the IBM Spectrum Protect family with and without HSM. For more information, see General Parallel File System Version 4 Release 2.1 Advanced Administration Guide, SC23-7032-01 or see IBM Spectrum Scale V4.2.1: Advanced Administration Guide, which is available at this website:
The backup of a GPFS file system that is managed by the IBM Spectrum Archive EE environment is not supported in the Version 1 release of IBM Spectrum Archive EE. The primary reason is that attempting to back up the stub of a file that was migrated to LTFS causes it to be automatically recalled from LTFS (tape) to the IBM Spectrum Scale file system. This is not an efficient way to perform backups, especially when you are dealing with large numbers of files.
However, a GPFS file system that is independent of IBM Spectrum Archive EE and not managed by IBM Spectrum Archive EE can be backed up. Normally, the mmbackup command is used to back up the files of a GPFS file system to Tivoli Storage Manager servers by using the Tivoli Storage Manager Backup/Archive client of the IBM Spectrum Protect family. In addition, the mmbackup command can be used together with regular Tivoli Storage Manager backup commands. After a file system is backed up, you can restore files by using the interfaces that are provided by Tivoli Storage Manager of the IBM Spectrum Protect family.
If HSM is also installed, Tivoli Storage Manager and HSM coordinate to back up and migrate data. If you back up and migrate files to the same Tivoli Storage Manager server, the HSM client can verify that current backup versions of your files exist before you migrate them. If they are not backed up, they must be backed up before migration (if the MIGREQUIRESBKUP=Yes option is set for management classes).
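The mmbackup flow described above can be sketched as follows. The file system name /ibm/data is a placeholder for your own unmanaged GPFS file system, and the actual invocation is commented out because it requires a configured GPFS cluster and a reachable Tivoli Storage Manager server.

```shell
# Hypothetical GPFS file system that is NOT managed by
# IBM Spectrum Archive EE (placeholder path - adjust for your site).
FS=/ibm/data
# Incremental backup of the file system to the Tivoli Storage Manager
# server that is configured in dsm.sys. The real call is commented out;
# it needs a GPFS cluster and a TSM server:
# mmbackup "$FS" -t incremental
CMD="mmbackup $FS -t incremental"
echo "$CMD"
```

With -t incremental, mmbackup scans the file system for changes since the last backup and drives the Backup/Archive client under the covers; -t full forces a backup of all files.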
8.27.3 Backing up non-IBM Spectrum Scale file systems
The backup of non-GPFS file systems can be accomplished with minor modifications to the embedded Tivoli Storage Manager configuration files that are set up during a standard IBM Spectrum Archive EE installation. Example 8-23 shows the default dsm.sys file that is created after the installation of IBM Spectrum Archive EE.
Example 8-23 Default dsm.sys file
SErvername server_a
COMMMethod TCPip
TCPPort 1500
TCPServeraddress node.domain.company.COM
 
HSMBACKENDMODE TSMFREE
ERRORLOGNAME /opt/tivoli/tsm/client/hsm/bin/dsmerror.log
To permit the backup of a non-GPFS file system to a Tivoli Storage Manager server of the IBM Spectrum Protect family, another server stanza must be added to the file, as shown in Example 8-24.
Example 8-24 Modified dsm.sys file
MIGRATESERVER server_a
 
SErvername server_a
COMMMethod TCPip
TCPPort 1500
TCPServeraddress node.domain.company.COM
 
HSMBACKENDMODE TSMFREE
ERRORLOGNAME /opt/tivoli/tsm/client/hsm/bin/dsmerror.log
 
SErvername TSM
COMMMethod TCPip
TCPPort 1500
TCPServeraddress 192.168.10.20
ERRORLOGNAME /opt/tivoli/tsm/client/ba/bin/dsmerror.log
PASSWORDACCESS GENERATE
EXCLUDE.DIR /ibm/glues
These are the minimum configuration options that are needed to perform a backup. However, if there are other parameters that are required in your environment, you can add them to the “SErvername TSM” stanza.
The MIGRATESERVER option was added to the beginning of the file, outside any server stanza. This option specifies the name of the server to which you want to migrate files from your client node. If you do not specify a server with the MIGRATESERVER option, your files migrate to the server that you specify with the DEFAULTSERVER option. If you specify neither option, your files migrate to the server that is identified in the first stanza of your dsm.sys file.
You must also exclude the GPFS file system to prevent it from being backed up as part of an incremental backup. In our case, the file system is named /ibm/glues/.
Example 8-25 shows the default dsm.opt file that was created after the installation of IBM Spectrum Archive EE.
Example 8-25 Default dsm.opt file
* SErvername A server name defined in the dsm.sys file
 
HSMDISABLEAUTOMIGDAEMONS YES
Although it is possible to add another server stanza to the default dsm.opt file, it is safer to create a separate dsm.opt file specifically for Tivoli Storage Manager backups. In Example 8-26, we created a file that is called dsm_tsm.opt with the same SErvername as specified in the dsm.sys file.
Example 8-26 Separate dsm_tsm.opt file
SErvername TSM
To perform a backup of a non-GPFS file system to a Tivoli Storage Manager server of the IBM Spectrum Protect family, the Tivoli Storage Manager Backup/Archive client must be started by using the -optfile option, as shown in Example 8-27.
Example 8-27 Start the Tivoli Storage Manager Backup/Archive Client
[root@ltfs97 bin]# dsmc -optfile=dsm_tsm.opt
IBM Tivoli Storage Manager
Command Line Space Management Client Interface
Client Version 7, Release 1, Level 0.3
Client date/time: 09/23/2014 13:45:36
(c) Copyright by IBM Corporation and other(s) 1990, 2014. All Rights Reserved.
 
Node Name: LTFS97
Session established with server TSM: Windows
Server Version 7, Release 1, Level 0
Server date/time: 09/23/2014 13:45:42 Last access: 09/23/2014 13:43:32
tsm>
tsm> q inclexcl
*** FILE INCLUDE/EXCLUDE ***
Mode Function Pattern (match from top down) Source File
---- --------- ------------------------------ -----------------
No exclude filespace statements defined.
Excl Directory /.../.TsmCacheDir TSM
Excl Directory /.../.SpaceMan Operating System
Excl Directory /ibm/glues /opt/tivoli/tsm/client/ba/bin/dsm.sys
Exclude HSM /etc/adsm/SpaceMan/config/.../* Operating System
Exclude HSM /.../.SpaceMan/.../* Operating System
Exclude Restore /.../.SpaceMan/.../* Operating System
Exclude Archive /.../.SpaceMan/.../* Operating System
No DFS include/exclude statements defined.
tsm>
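Beyond the interactive session shown in Example 8-27, dsmc can also run a one-shot incremental backup of a local, non-GPFS file system by passing the operation and the option file on the same command line. The path /data below is a placeholder, and the invocation itself is commented out because it requires a reachable Tivoli Storage Manager server.

```shell
# Placeholder local (non-GPFS) file system to back up.
LOCAL_FS=/data
# The separate option file created in Example 8-26.
OPTFILE=dsm_tsm.opt
# One-shot incremental backup using the separate option file
# (commented out; requires a configured TSM server):
# dsmc incremental "$LOCAL_FS/" -optfile="$OPTFILE"
echo "dsmc incremental $LOCAL_FS/ -optfile=$OPTFILE"
```

Run from the Backup/Archive client directory (or with OPTFILE as a full path), this form is suitable for cron-driven scheduled backups because it needs no interactive prompt.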
For more information about how to back up a server by using Tivoli Storage Manager of the IBM Spectrum Protect family, see the client manual found at this website:
8.27.4 IBM TS4500 Automated Media Verification with IBM Spectrum Archive EE
In some use cases where IBM Spectrum Archive EE is deployed, you might be required to verify periodically that the files and data that were migrated from the IBM Spectrum Scale file system to physical tape are still readable and can be recalled back to the file system without error. Especially in long-term archival environments, a function that checks the physical media on a user-defined schedule is highly valuable.
Starting with Release 2 of the IBM TS4500 Tape Library, the TS4500 provides such a fully transparent function, named policy-based automatic media verification. This function is hidden from any ISV software, similar to automatic cleaning, so no ISV certification is required. It can be enabled or disabled per logical library, with additional settings to define the verification period (for example, every 6 months) and the first verification date.
One or more designated media verification drives (MVDs) must be assigned to a logical library in order for the verification to take place. A preferred practice is to have two MVDs assigned at a time to ensure that no false positives occur because of a faulty tape drive. Figure 8-4 shows an example of such a setup.
Figure 8-4 TS4500 with one logical library showing two MVDs configured
 
Note: MVDs defined within a logical library are not accessible for production use by the host or application that uses the drives of this particular logical library.
Verify results are a simple pass/fail, but verification failures are retried on a second physical drive, if available, before being reported. A failure is reported through all normal notification options (email, syslog, and SNMP). MVDs are not reported as mount points (SCSI DTEs) to the ISV application, so MVDs do not need to be connected to the SAN.
During this process, whenever access from the application or host is required to the physical media under media verification, the tape library stops the current verification process, dismounts the needed tape from the MVD, and mounts it to a regular tape drive within the same logical library for access by the host application to satisfy the requested mount. At a later point, the media verification process continues.
The library GUI Cartridges page adds columns for last verification date/time, verification result, and next verification date/time (if automatic media verification is enabled). If a cartridge being verified is requested for ejection or mounting by the ISV software (which thinks the cartridge is in a storage slot), the verify task is automatically canceled, a checkpoint occurs, and the task resumes later (if/when the cartridge is available). The ISV eject or mount occurs with a delay comparable to a mount to a drive being cleaned (well within the preferred practice SCSI Move Medium timeout values). The GUI also supports a manual stop of the verify task.
The last verification date/time is written in the cartridge memory (CM) and read upon first mount after being newly inserted into a TS4500, providing persistence and portability (similar to a cleaning cartridge usage count).
All verify mounts are recorded in the mount history CSV file, allowing for more granular health analysis (for example, outlier recovered error counts) by using Tape System Reporter (TSR) or Rocket Server graph.
The whole media verification process is transparent to IBM Spectrum Archive EE as the host. No definitions and configurations need to be done within IBM Spectrum Archive EE. All setup activities are done only through the TS4500 management interface.
Figure 8-5 to Figure 8-8 on page 241 show screen captures from the TS4500 tape library web interface that illustrate how to assign an MVD to a logical library. It is a two-step process: you first assign the drive to the logical library (if it was not assigned before), and then you define the drive to be an MVD.
1. You can select the menu option Drives by Logical Library to assign an unassigned drive to a logical library by right-clicking the unassigned drive icon. A menu opens where you select Assign, as shown in Figure 8-5.
Figure 8-5 Assign a tape drive to a logical library through the TS4500 web interface - step 1
2. Another window opens, where you must select the specific logical library to which the unassigned drive is supposed to be added, as shown in Figure 8-6.
Figure 8-6 Assign a tape drive to a logical library through the TS4500 web interface - step 2
3. If the drive to be used as an MVD is configured within the logical library, change its role, as shown in Figure 8-7 and Figure 8-8 on page 241.
Figure 8-7 Reserve a tape drive as the media verification drive through the TS4500 web interface
4. You must right-click the assigned drive within the logical library. A menu opens and you select Use for Media Verification from the list of the provided options. A confirmation dialog box opens. Click Yes to proceed.
5. After making that configuration change to the drive, you see a new icon in front of it to show you the new role (Figure 8-8).
Figure 8-8 Display a tape drive as the media verification drive through the TS4500 web interface
 
Note: The MVD flag for a tape drive is a global setting, which means that after it is assigned, the drive keeps its role as an MVD even if it is unassigned and later assigned to a new logical library. Unassigning does not disable this role.
To unassign a drive from being an MVD, follow the same procedure again, and select (after the right-click) Use for Media Access. This action changes the drive role back to normal operation for the attached host application to this logical library.
Figure 8-9 shows you the TS4500 web interface dialog box for enabling automatic media verification on an existing logical library. You must go to the Cartridges by Logical Library page. Then, select Modify Media Verification for the selected logical library. The Automatic Media Verification dialog box opens and you can enter the media verification schedule.
Figure 8-9 Modify Media Verification dialog box to set up a schedule
By using this dialog box, you can enable or disable an automatic media verification schedule. Then, you can configure how often the media should be verified and the first verification date. Finally, you can select the MVDs that the library uses to perform the scheduled media verification.
If you go to the Cartridges by Logical Library page and select Properties for the selected logical library, a dialog box opens in which you can see the current media verification configuration for that logical library, as shown in Figure 8-10.
Figure 8-10 TS4500 properties for a logical library
For more information about the usage of the TS4500 R2 media verification functions, see IBM TS4500 R3 Tape Library Guide, SG24-8235, and the IBM TS4500 IBM Knowledge Center, found at: