Troubleshooting
This chapter describes the process that you can use to troubleshoot issues with Linear Tape File System (LTFS) Enterprise Edition (IBM Spectrum Archive EE).
This chapter includes the following topics:
10.1 Overview
This section provides a simple health check procedure for IBM Spectrum Archive EE.
10.1.1 Quick health check
If you are having issues with an existing IBM Spectrum Archive EE environment, Figure 10-1 shows a simple flowchart that you can follow as the first step to troubleshooting problems with the IBM Spectrum Archive EE components.
Figure 10-1 Quick health check procedure
Figure 10-1 uses a tape library with the device name IBMchanger9 and the IBM Spectrum Scale (General Parallel File System (GPFS)) file system /ibm/glues as examples, which you must change to match your environment.
If your issue remains after you perform these simple checks, follow the procedures that are described in the remainder of this chapter to perform more detailed troubleshooting.
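The flowchart's basic checks can also be scripted. The following is a minimal sketch, not a product tool: `check` is a hypothetical helper, and MOUNT_POINT and DEVICE use the example names from Figure 10-1, which you must adjust for your environment.

```shell
# Hypothetical quick health-check sketch for IBM Spectrum Archive EE.
# MOUNT_POINT and DEVICE are the Figure 10-1 examples; change them for your site.
MOUNT_POINT=${MOUNT_POINT:-/ibm/glues}
DEVICE=${DEVICE:-/dev/IBMchanger9}

check() {
    # check <label> <command...>: run the command and print "<label>: OK" or "<label>: FAIL"
    label=$1; shift
    if "$@" >/dev/null 2>&1; then
        echo "$label: OK"
    else
        echo "$label: FAIL"
    fi
}

check "GPFS file system mounted" mountpoint -q "$MOUNT_POINT"
check "Library changer device present" test -e "$DEVICE"
check "EE service (MMM) running" pidof mmm
```

Each FAIL line points to the component to investigate first with the procedures in this chapter.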
10.2 Hardware
The topics in this section provide information that can help you to identify and resolve problems with the hardware that is used by IBM Spectrum Archive EE.
10.2.1 Tape library
If the TS4500 tape library has a problem, it reports an error in the events page on the TS4500 Management GUI. When an error occurs, IBM Spectrum Archive might not work. Figure 10-2 shows an example of a library error.
Figure 10-2 Tape library error log
For more information about how to solve tape library errors, see the IBM TS4500 R3 Tape Library Guide, SG24-8235.
10.2.2 Tape drives
If an LTO tape drive has a problem, it reports the error on its single-character display (SCD). If a TS1140 (or later) tape drive has a problem, it reports the error on its 8-character message display. When this occurs, IBM Spectrum Archive might not work. To obtain information about a drive error, determine which drive is reporting the error, and then access the events page in the TS4500 Management GUI to see the error.
Figure 10-3 shows an example from the web interface of a tape drive that has an error and is no longer responding.
Figure 10-3 Tape drive error
If you right-click the event and select Display fix procedure, another window opens and shows suggestions about how to fix the problem. If a drive display reports a specific drive error code, see the tape drive maintenance manual for a solution. For more information about analyzing the operating system error logs, see 10.6.1, “Linux” on page 266.
If a problem is identified in the tape drive and the tape drive must be repaired, the drive must first be removed from the IBM Spectrum Archive EE system. For more information, see “Taking a tape drive offline” on page 253.
Managing tape drive dump files
This section describes how to manage the automatic erasure of drive dump files. IBM Spectrum Archive automatically generates two tape drive dump files in the /tmp directory when it receives unexpected sense data from a tape drive. Example 10-1 shows the format of the dump files.
Example 10-1 Dump files
[root@ltfs97 tmp]# ls -la *.dmp
-rw-r--r-- 1 root root 3681832 Apr 4 14:26 ltfs_1068000073_2013_0404_142634.dmp
-rw-r--r-- 1 root root 3681832 Apr 4 14:26 ltfs_1068000073_2013_0404_142634_f.dmp
-rw-r--r-- 1 root root 3681832 Apr 4 14:42 ltfs_1068000073_2013_0404_144212.dmp
-rw-r--r-- 1 root root 3697944 Apr 4 14:42 ltfs_1068000073_2013_0404_144212_f.dmp
-rw-r--r-- 1 root root 3697944 Apr 4 15:45 ltfs_1068000073_2013_0404_154524.dmp
-rw-r--r-- 1 root root 3683424 Apr 4 15:45 ltfs_1068000073_2013_0404_154524_f.dmp
-rw-r--r-- 1 root root 3683424 Apr 4 17:21 ltfs_1068000073_2013_0404_172124.dmp
-rw-r--r-- 1 root root 3721684 Apr 4 17:21 ltfs_1068000073_2013_0404_172124_f.dmp
-rw-r--r-- 1 root root 3721684 Apr 4 17:21 ltfs_1068000073_2013_0404_172140.dmp
-rw-r--r-- 1 root root 3792168 Apr 4 17:21 ltfs_1068000073_2013_0404_172140_f.dmp
The size of each drive dump file is approximately 2 MB. By managing the drive dump files that are generated, you can save disk space and enhance IBM Spectrum Archive performance.
It is not necessary to keep dump files after they are used for problem analysis. Likewise, the files are not necessary if the problems are minor and can be ignored. A script program that is provided with IBM Spectrum Archive EE periodically checks the number of drive dump files and their date and time. If some of the dump files are older than two weeks or if the number of dump files exceeds 1000 files, the script program erases them.
The script file is started by using Linux crontab features. A cron_ltfs_limit_dumps.sh file is in the /etc/cron.daily directory. This script file is started daily by the Linux operating system. The interval to run the script can be changed by moving the cron_ltfs_limit_dumps.sh file to other cron folders, such as cron.weekly. (For more information about how to change the crontab setting, see the manual for your version of Linux.) In the cron_ltfs_limit_dumps.sh file, the automatic drive dump erase policy is specified by the option of the ltfs_limit_dump.sh script file, as shown in the following example:
/opt/IBM/ltfs/bin/ltfs_limit_dumps.sh -t 14 -n 1000
You can modify the policy by editing the options in the cron_ltfs_limit_dumps.sh file. The expiration period, in days, is set by the -t option; in the example, a drive dump file is erased when it is more than 14 days old. The number of files to keep is set by the -n option; in the example, if the number of files exceeds 1,000, older files are erased so that the 1,000-file maximum is not exceeded. If either option is deleted, the dump files are deleted according to the remaining policy. By editing these options in the cron_ltfs_limit_dumps.sh file, you can control how long dump files are kept and how many are stored.
Although not recommended, you can disable the automatic erasure of drive dump files by removing the cron_ltfs_limit_dumps.sh file from the cron folder.
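The age-and-count policy that the cron script applies can be approximated with standard tools. The following is a hedged sketch of that policy, not the product script: `prune_dumps` is a hypothetical helper that mimics the behavior of ltfs_limit_dumps.sh -t <days> -n <count>.

```shell
# Hypothetical sketch of the dump cleanup policy: delete ltfs_*.dmp files
# older than a given number of days, then trim the rest to the newest N files.
# This mimics ltfs_limit_dumps.sh -t <days> -n <count>; it is not the product script.
prune_dumps() {
    dir=$1; days=$2; keep=$3
    # Remove dump files older than $days days.
    find "$dir" -maxdepth 1 -name 'ltfs_*.dmp' -mtime +"$days" -exec rm -f {} +
    # Of the remaining dump files, keep only the $keep newest.
    ls -1t "$dir"/ltfs_*.dmp 2>/dev/null | tail -n +"$((keep + 1))" | xargs -r rm -f
}

# Example: prune_dumps /tmp 14 1000   # the same policy as the cron script above
```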
Taking a tape drive offline
This section describes how to take a drive offline from the IBM Spectrum Archive EE system to perform diagnostic operations while the IBM Spectrum Archive EE system stays operational. To accomplish this task, use software such as the IBM Tape Diagnostic Tool (ITDT) or the IBM LTFS Format Verifier, which are described in 11.3, “System calls and IBM tools” on page 349.
 
Important: If the diagnostic operation you intend to perform requires that a tape cartridge be loaded into the drive, ensure that you have an empty non-pool tape cartridge available in the logical library of IBM Spectrum Archive EE. If a tape cartridge is in the tape drive when the drive is removed, the tape cartridge is automatically moved to the home slot.
To perform diagnostic tests, complete the following steps:
1. Identify the node ID number of the drive to be taken offline by running the ltfsee info drives command. Example 10-2 shows the tape drives in use by IBM Spectrum Archive EE.
Example 10-2 Identify the tape drive to remove
[root@ltfssn1 ~]# ltfsee info drives
Drive S/N Status Type Role Library Address Node ID Tape Node Group
000000014A00 Not mounted TS1140 mrg TS4500_3592 257 1 - SINGLE_NODE
0000078D8320 Mounted TS1150 mrg TS4500_3592 258 1 JCC539JC SINGLE_NODE
In this example, we take the tape drive with serial number 000000014A00 on cluster node 1 offline.
2. Remove the tape drive from the IBM Spectrum Archive EE inventory by running the ltfsee drive remove -d <drive serial number> command. Example 10-3 shows the removal of a single tape drive from IBM Spectrum Archive EE.
Example 10-3 Remove the tape drive
[root@ltfssn1 ~]# ltfsee drive remove -d 000000014A00
GLESL121I(00282): Drive serial 000000014A00 is removed from LTFS EE drive list.
3. Check the success of the removal. Run the ltfsee info drives command and verify that the output shows that the MMM attribute for the drive is in the “stock” state. Example 10-4 shows the status of the drives after the drive is removed from IBM Spectrum Archive EE.
Example 10-4 Check the tape drive status
[root@ltfssn1 ~]# ltfsee info drives
Drive S/N Status Type Role Library Address Node ID Tape Node Group
0000078D8320 Mounted TS1150 mrg TS4500_3592 258 1 JCC539JC SINGLE_NODE
000000014A00 Stock UNKNOWN --- TS4500_3592 257 - - -
4. Identify the primary device number of the drive for subsequent operations by running the cat /proc/scsi/IBMtape command. The command output lists the device number in the “Number” field. Example 10-5 shows the output of this command; the offline tape drive is device number 13.
Example 10-5 List the tape drives in Linux
[root@ltfs97 /]# cat /proc/scsi/IBMtape
lin_tape version: 1.76.0
lin_tape major number: 248
Attached Tape Devices:
Number model S/N HBA SCSI FO Path
0 ULT3580-TD5 00078A218E qla2xxx 2:0:0:0 NA
1 ULT3580-TD5 1168001144 qla2xxx 2:0:1:0 NA
2 ULTRIUM-TD5 9A700L0077 qla2xxx 2:0:2:0 NA
3 ULT3580-TD6 1013000068 qla2xxx 2:0:3:0 NA
4 03592E07 0000013D0485 qla2xxx 2:0:4:0 NA
5 ULT3580-TD5 00078A1D8F qla2xxx 2:0:5:0 NA
6 ULT3580-TD6 00013B0037 qla2xxx 2:0:6:0 NA
7 03592E07 001013000652 qla2xxx 2:0:7:0 NA
8 03592E07 0000078DDAC3 qla2xxx 2:0:8:0 NA
9 03592E07 001013000255 qla2xxx 2:0:9:0 NA
10 ULT3580-TD5 1068000073 qla2xxx 2:0:10:0 NA
11 03592E07 0000013D0734 qla2xxx 2:0:11:0 NA
12 ULT3580-TD6 00013B0084 qla2xxx 2:0:12:0 NA
13 03592E07 000000014A00 qla2xxx 2:0:13:0 NA
14 ULT3580-TD5 1068000070 qla2xxx 2:0:14:0 NA
15 ULT3580-TD5 1068000016 qla2xxx 2:0:15:0 NA
16 03592E07 0000013D0733 qla2xxx 2:0:19:0 NA
17 03592E07 0000078DDBF1 qla2xxx 2:0:20:0 NA
18 ULT3580-TD5 00078AC0A5 qla2xxx 2:0:21:0 NA
19 ULT3580-TD5 00078AC08B qla2xxx 2:0:22:0 NA
20 03592E07 0000013D0483 qla2xxx 2:0:23:0 NA
21 03592E07 0000013D0485 qla2xxx 2:0:24:0 NA
22 03592E07 0000078D13C1 qla2xxx 2:0:25:0 NA
5. If your diagnostic operations require a tape cartridge to be loaded into the drive, complete the following steps. Otherwise, you are ready to perform diagnostic operations on the drive, which has the drive address /dev/IBMtapenumber, where number is the device number that is obtained in step 4:
a. Move the tape cartridge to the drive from the I/O station or home slot. You can move the tape cartridge by using ITDT (in which case the drive must have the control path), or the TS4500 Management GUI.
b. Perform the diagnostic operations on the drive, which has the drive address /dev/IBMtapenumber, where number is the device number that is obtained in step 4.
c. When you are finished, return the tape cartridge to its original location.
6. Add the drive back to the IBM Spectrum Archive EE inventory by running the ltfsee drive add -d drive_serial -n node_id command, where node_id identifies the same node to which the drive was originally assigned in step 1 on page 253.
Example 10-6 shows the tape drive that is re-added to IBM Spectrum Archive EE.
Example 10-6 Re-add the tape drive
[root@ltfssn1 ~]# ltfsee drive add -d 000000014A00 -n 1
GLESL119I(00174): Drive 000000014A00 added successfully.
Running the ltfsee info drives command again shows that the tape drive is no longer in a “stock” state. Example 10-7 shows the output of this command.
Example 10-7 Check the tape drive status
[root@ltfssn1 ~]# ltfsee info drives
Drive S/N Status Type Role Library Address Node ID Tape Node Group
000000014A00 Not mounted TS1140 mrg TS4500_3592 257 1 - SINGLE_NODE
0000078D8320 Mounted TS1150 mrg TS4500_3592 258 1 JCC539JC SINGLE_NODE
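The mapping in step 4 from a drive serial number to its /dev/IBMtape device number can be automated. The following sketch assumes the lin_tape table layout shown in Example 10-5; `serial_to_device` is a hypothetical helper, not a product command.

```shell
# Hypothetical helper: find the /dev/IBMtape device number for a drive serial
# by parsing the lin_tape device table (column 1 = Number, column 3 = S/N).
serial_to_device() {
    serial=$1
    table=${2:-/proc/scsi/IBMtape}   # the second argument is for testing only
    awk -v sn="$serial" '$3 == sn { print "/dev/IBMtape" $1 }' "$table"
}

# Example: serial_to_device 000000014A00
# On the system shown in Example 10-5, this prints /dev/IBMtape13.
```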
10.2.3 Tape cartridge
Table 10-1 shows all possible status conditions for an IBM Spectrum Archive tape cartridge as displayed by the ltfsee info tapes command.
Table 10-1 Tape cartridge status

Each entry lists the file system access for the status, a description, the effect, how to recover the tape cartridge, and the valid IBM Spectrum Archive EE commands.

Valid
File system access: Yes
Description: The Valid status indicates that the cartridge is valid.
Effect: N/A
How to recover: N/A
Valid commands: All but import, recover, and pool add

Exported
File system access: No
Description: The Exported status indicates that the cartridge is valid and is exported.
Effect: N/A
How to recover: N/A
Valid commands: ltfsee import, ltfsee tape move

Offline
File system access: No
Description: The Offline status indicates that the cartridge is valid and is exported offline.
Effect: N/A
How to recover: N/A
Valid commands: ltfsee import with the offline option, ltfsee tape move

Unknown
File system access: Unknown
Description: The tape cartridge contents are unknown.
Effect: The index file must be read on the tape medium before most file system requests can be processed.
How to recover: The status changes after the tape is used.
Valid commands: ltfsee pool remove, ltfsee reclaim (source/target), ltfsee export, ltfsee rebuild, ltfsee recall

Write Protected
File system access: Read-only
Description: The tape cartridge is physically (or logically) in a write-protected state.
Effect: No LTFS write operations are allowed.
How to recover: Remove the physical write-protection.
Valid commands: ltfsee pool remove, ltfsee recall

Critical
File system access: Read-only
Description: The tape had a write failure. The index in memory is dirty.
Effect: No LTFS operations are allowed.
How to recover: Critical and Warning tapes must be recovered manually; there is no automated process. Because a series of complicated steps must be performed to repair tapes in these states, contact the IBM Spectrum Archive EE development team for help with recovering the data. An automated process might be available in a future release of IBM Spectrum Archive EE.
Valid commands: ltfsee recover, ltfsee recall

Warning
File system access: Read-only
Description: The tape had a read failure.
Effect: No LTFS operations are allowed.
How to recover: Critical and Warning tapes must be recovered manually; there is no automated process. Because a series of complicated steps must be performed to repair tapes in these states, contact the IBM Spectrum Archive EE development team for help with recovering the data. An automated process might be available in a future release of IBM Spectrum Archive EE.
Valid commands: ltfsee pool remove, ltfsee reconcile, ltfsee reclaim (source), ltfsee rebuild, ltfsee recall

Unavailable
File system access: No
Description: The cartridge is not available in the IBM Spectrum Archive EE system. A tape that is newly inserted into the tape library is in this state.
Effect: No LTFS operations are allowed.
How to recover: Add the tape cartridge to LTFS by running the import command if it contains data.
Valid commands: ltfsee pool remove, ltfsee pool add, ltfsee import

Invalid
File system access: No
Description: The tape cartridge is inconsistent with the LTFS format and must be checked by using the -c option.
Effect: No LTFS operations are allowed.
How to recover: Check the tape cartridge.
Valid commands: ltfsee pool add, ltfsee pool remove

Unformatted
File system access: No
Description: The tape cartridge is not formatted and must be formatted by using the -f option.
Effect: No LTFS operations are allowed.
How to recover: Format the tape cartridge.
Valid commands: ltfsee pool add, ltfsee pool remove

Inaccessible
File system access: No
Description: The tape cartridge is not allowed to move in the library or might be stuck in the drive.
Effect: No LTFS operations are allowed.
How to recover: Remove the stuck tape cartridge and fix the cause.
Valid commands: ltfsee pool remove

Error
File system access: No
Description: The tape cartridge reported a medium error.
Effect: No LTFS operations are allowed.
How to recover: The tape cartridge status returns to “Valid” when you physically remove the medium from the library and then add it to the library again. If this state occurs again, do not use the tape cartridge.
Valid commands: ltfsee pool remove

Not supported
File system access: No
Description: The tape cartridge is an older generation or an LTO write-once, read-many (WORM) tape cartridge.
Effect: No LTFS operations are allowed.
How to recover: LTFS supports the following tape cartridges: LTO-7, LTO-6, LTO-5, 3592 Extended data (JD), 3592 Advanced data (JC), 3592 Extended data (JB), 3592 Economy data (JK), 3592 Economy data (JL), 3592 Extended WORM data (JZ), and 3592 Advanced WORM data (JY).
Valid commands: None

Duplicate
File system access: No
Description: The tape cartridge has the same bar code as another tape cartridge.
Effect: No LTFS operations are allowed.
How to recover: Remove one of the duplicate tape cartridges from the library.
Valid commands: None

Cleaning
File system access: No
Description: The tape cartridge is a cleaning cartridge.
Effect: No LTFS operations are allowed.
How to recover: Remove the cleaning cartridge from the library.
Valid commands: None

In Progress
File system access: No
Description: The In Progress status indicates that the LE component is moving, mounting, or unmounting this tape cartridge.
Effect: N/A
How to recover: N/A
Valid commands: ltfsee pool remove

Disconnected
File system access: No
Description: The Disconnected status indicates that the EE and LE components that are used by this tape cannot communicate. The admin channel connection might be disconnected.
Effect: No LTFS operations are allowed.
How to recover: Check whether the EE and LE components are running.
Valid commands: ltfsee pool remove

Exporting
File system access: No
Description: The Exporting status indicates that EE is exporting this tape cartridge.
Effect: N/A
How to recover: N/A
Valid commands: ltfsee pool remove

Unusable
File system access: No
Description: The tape has become unusable.
Effect: Recalls can still be performed on the tape.
How to recover: Run pool add with the -c option to attempt to recover the tape.
Valid commands: ltfsee recall, ltfsee export, ltfsee rebuild, ltfsee reclaim (source/target), ltfsee reconcile, ltfsee pool remove

Write Fenced
File system access: Read-only
Description: The tape had a write failure, but the index was successfully written.
Effect: Limited LTFS operations are allowed: recall, recover, repair, and pool remove.
How to recover: Run the recover command with the -c option to generate a scan list and bring the files back to the resident state, then run the recover command with the -r option to double-check for any missed files. The -r option removes the tape from the LTFS EE pool if no new files are detected remaining on the tape. Save the tape in case there are issues recovering files, and contact IBM Spectrum Archive support.
Valid commands: ltfsee pool remove, ltfsee recover, ltfsee tape move, ltfsee recall
Unknown status
This status is a temporary condition that can occur when a new tape cartridge is added to the tape library but has not yet been mounted in a tape drive to load the index.
Write-protected status
This status is caused by setting the write-protection tag on the tape cartridge. If you want to use this tape cartridge in IBM Spectrum Archive EE, you must remove the write protection because a write-protected tape cartridge cannot be added to a tape cartridge pool. After the write-protection is removed, you must run the ltfsee retrieve command to update the status of the tape to “Valid LTFS”.
Critical or Warning status
This status can be caused by actual physical errors on the tape. Automatic recovery was added in IBM Spectrum Archive V1.2.2. See 10.3, “Recovering data from a write failure tape” on page 261 and 10.4, “Recovering data from a read failure tape” on page 263 for recovery procedures.
Unavailable status
This status is caused by a tape cartridge being removed from LTFS. The process of adding it to LTFS (see 7.5.1, “Adding tape cartridges” on page 139) changes the status back to “Valid LTFS”; therefore, this message requires no other corrective action.
Invalid LTFS status
If an error occurs while writing to a tape cartridge, it might be displayed with an Invalid LTFS status that indicates an inconsistent LTFS format. Example 10-8 shows an “Invalid LTFS” status.
Example 10-8 Check the tape cartridge status
[root@ltfssn1 ~]# ltfsee info tapes
Tape ID Status Type Capacity(GiB) Free(GiB) Unref(GiB) Pool Library Address Drive
JZ0072JZ Offline TS1150 0 0 0 JZJ5WORM TS4500_3592 -1 -
JYA825JY Valid TS1140(J5) 6292 6292 0 JY TS4500_3592 1035 -
JCC539JC Valid TS1140(J5) 6292 6292 0 primary_ltfssn1 TS4500_3592 258 0000078D8320
JCC541JC Invalid TS1140 0 0 0 - TS4500_3592 0 -
JD0226JD Unavailable TS1150 0 0 0 - TS4500_3592 1033 -
JD0427JD Unavailable TS1150 0 0 0 - TS4500_3592 1032 -
JY0321JY Unavailable TS1140 0 0 0 - TS4500_3592 1028 -
This status can be changed back to Valid LTFS by checking the tape cartridge. To do so, run the command that is shown in Example 10-9.
Example 10-9 Add tape cartridge to pool with check option
[root@ltfssn1 ~]# ltfsee pool add -p J5 -t JCC541JC -c
GLESL042I(00640): Adding tape JCC541JC to storage pool J5.
Tape JCC541JC successfully checked.
Added tape JCC541JC to pool J5 successfully.
Example 10-10 shows the updated tape cartridge status for JCC541JC.
Example 10-10 Check the tape cartridge status
[root@ltfssn1 ~]# ltfsee info tapes
Tape ID Status Type Capacity(GiB) Free(GiB) Unref(GiB) Pool Library Address Drive
JCC541JC Valid TS1140(J5) 6292 6292 0 J5 TS4500_3592 1030 -
JZ0072JZ Offline TS1150 0 0 0 JZJ5WORM TS4500_3592 -1 -
JYA825JY Valid TS1140(J5) 6292 6292 0 JY TS4500_3592 1035 -
JCC539JC Valid TS1140(J5) 6292 6292 0 primary_ltfssn1 TS4500_3592 1031 -
JD0226JD Unavailable TS1150 0 0 0 - TS4500_3592 1033 -
JD0427JD Unavailable TS1150 0 0 0 - TS4500_3592 1032 -
JY0321JY Unavailable TS1140 0 0 0 - TS4500_3592 1028 -
Unformatted status
This status usually is observed when a scratch tape is added to LTFS without formatting it. It can be fixed by removing and readding it with the -format option, as described in 7.5.3, “Formatting tape cartridges” on page 144.
If the tape cartridge was imported from another system, the IBM LTFS Format Verifier can be useful for checking the tape format. For more information about performing diagnostic tests with the IBM LTFS Format Verifier, see 11.3.2, “Using the IBM LTFS Format Verifier” on page 350.
Inaccessible status
This status is most often the result of a stuck tape cartridge. Removing the stuck tape cartridge and then moving it back to its homeslot, as shown in 7.5.2, “Moving tape cartridges” on page 142, should correct the “Inaccessible” status.
Error status
Tape cartridges with an Error status are often the result of media errors on the tape. The cartridge cannot be used until the condition is cleared. Stop IBM Spectrum Archive EE, clear the dcache for the files on the tape, and then restart IBM Spectrum Archive EE, as described in 7.4, “Starting and stopping IBM Spectrum Archive EE” on page 137.
Not-supported status
Only LTO-7, LTO-6, LTO-5, and 3592-JB, JC, JD, JK, JL, JY, and JZ tape cartridges are supported by IBM Spectrum Archive EE. This message indicates that the tape cartridge is not one of these 10 types and should be removed from the tape library.
Cleaning status
This status is caused by having a cleaning tape in LTFS. Remove the tape cartridge from LTFS and ensure that it is defined as a cleaning tape rather than data tape in the tape library.
Write Fenced status
This status is caused by actual physical errors on the tape. However, the index was successfully written on one of the tape’s partitions, which allows for easier recovery than with Critical tapes. The recovery process removes the tape from the LTFS EE pool, but the tape is still marked as Write Fenced. Save the tape for future reference in case the recovery was unsuccessful, and contact IBM Spectrum Archive support. See 10.3, “Recovering data from a write failure tape” on page 261 for the steps to recover a Write Fenced tape.
10.3 Recovering data from a write failure tape
The following steps recover data from a tape with a write failure; they apply to both Critical and Write Fenced tapes:
1. Verify that the tape is in either the Critical or the Write Fenced state by running ltfsee info tapes.
2. Run ltfsee recover -c on the Critical/Write Fenced tape to recall all the files on the tape and make them resident again.
3. Run ltfsee recover -r on the Critical/Write Fenced tape to perform a final file check on the tape and finally remove the tape from the pool.
If the tape was Critical, the drive is unlocked for further use after step 3.
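Step 1 can be scripted by matching the status column of the ltfsee info tapes output. The following is a minimal sketch under that assumption: `is_write_failure` is a hypothetical helper, and it assumes the single-space column layout shown in the examples in this chapter.

```shell
# Hypothetical check: does a tape's status in `ltfsee info tapes` output
# indicate a write failure (Critical or Write Fenced)? Reads the command
# output from stdin, or from a file given as the second argument (for testing).
is_write_failure() {
    tape=$1
    grep -E "^$tape +(Critical|Write Fenced) " "${2:-/dev/stdin}" >/dev/null
}

# Example: if ltfsee info tapes | is_write_failure IM1196L6; then echo "recover it"; fi
```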
Example 10-11 shows the commands and output to recover a Write Fenced tape.
Example 10-11 Recovering data from a write fenced tape
[root@ltfssrv18 ~]# ltfsee info tapes | grep IM1196L6
IM1196L6 Write Fenced LTO6 0 0 0 primary lib_lib0 4112 -
 
[root@ltfssrv18 ~]# ltfsee recover -p primary -t IM1196L6 -c
Scanning GPFS file systems for finding migrated/saved objects in tape IM1196L6.
Tape IM1196L6 has 101 files to be recovered. The list is saved to /tmp/ltfssrv18.17771.ibm.gpfs.recoverlist.
Bulk recalling files in tape IM1196L6
GLESL268I(00151): 101 file name(s) have been provided to recall.
GLESL263I(00207): Recall result: 101 succeeded, 0 failed, 0 duplicate, 0 not migrated, 0 not found.
Making 101 files resident in tape IM1196L6.
Changed to resident: 10/101.
Changed to resident: 20/101.
Changed to resident: 30/101.
Changed to resident: 40/101.
Changed to resident: 50/101.
Changed to resident: 60/101.
Changed to resident: 70/101.
Changed to resident: 80/101.
Changed to resident: 90/101.
Changed to resident: 100/101.
Scanning remaining objects migrated/saved in tape IM1196L6.
Scanning non EE objects in tape IM1196L6.
Recovery of tape IM1196L6 is successfully done. 101 files are recovered. The list is saved to /tmp/ltfssrv18.17771.ibm.gpfs.recoverlist.
 
[root@ltfssrv18 ~]# ltfsee recover -p primary -t IM1196L6 -r
Scanning GPFS file systems for finding migrated/saved objects in tape IM1196L6.
Tape IM1196L6 has no file to be recovered.
Removed the tape IM1196L6 from the pool write_fenced.
[root@ltfssrv18 ~]# ltfsee info tapes | grep IM1196L6
IM1196L6 Write Fenced LTO6 0 0 0 - lib_ltfssrv18 4112 -
Example 10-12 shows the commands and output to recover data from a Critical tape.
Example 10-12 Recovering data from a critical tape
[root@ltfssrv18 ~]# ltfsee info tapes | grep Critical
JD0335JD Critical TS1150(J5) 9022 9022 0 PrimPool lib_ltfssrv18 260 0000078D8322
 
[root@ltfssrv18 ~]# ltfsee recover -p PrimPool -t JD0335JD -c
Scanning GPFS file systems for finding migrated/saved objects in tape JD0335JD.
Tape JD0335JD has 100 files to be recovered. The list is saved to /tmp/ltfssrv18.tuc.stglabs.ibm.com.18168.ibm.glues.recoverlist.
Bulk recalling files in tape JD0335JD
GLESL268I(00151): 100 file name(s) have been provided to recall.
GLESL263I(00207): Recall result: 100 succeeded, 0 failed, 0 duplicate, 0 not migrated, 0 not found.
Making 100 files resident in tape JD0335JD.
Changed to resident: 10/100.
Changed to resident: 20/100.
Changed to resident: 30/100.
Changed to resident: 40/100.
Changed to resident: 50/100.
Changed to resident: 60/100.
Changed to resident: 70/100.
Changed to resident: 80/100.
Changed to resident: 90/100.
Changed to resident: 100/100.
Scanning remaining objects migrated/saved in tape JD0335JD.
Scanning non EE objects in tape JD0335JD.
Recovery of tape JD0335JD is successfully done. 100 files are recovered. The list is saved to /tmp/ltfssrv18.tuc.stglabs.ibm.com.18168.ibm.glues.recoverlist.
 
[root@ltfssrv18 ~]# ltfsee recover -p PrimPool -t JD0335JD -r
Scanning GPFS file systems for finding migrated/saved objects in tape JD0335JD.
Tape JD0335JD has no file to be recovered.
Removed the tape JD0335JD from the pool PrimPool.
 
[root@ltfssrv18 ~]# ltfsee info tapes | grep JD0335JD
JD0335JD Error TS1150 0 0 0 - lib_ltfssrv18 1063 -
10.4 Recovering data from a read failure tape
Copy migrated files from a Warning tape to a valid tape within the same pool by completing the following steps:
1. Identify a tape with a read failure by running ltfsee info tapes to locate the Warning tape.
2. After the Warning tape has been identified, run the relocate_replica.sh script to copy the files from the Warning tape to a new tape within the same pool.
3. After a successful copy, remove the Warning tape from the library and discard it.
The syntax for the script is as follows:
relocate_replica.sh -t <tape id> -p <pool name>@<library name>:<pool name>@<library name>[:<pool name>@<library name>] -P <path name>
-t <tape id>
Specifies the tape ID from which to relocate replicas.
-p <pool name>@<library name>:[<pool name>@<library name>]
Specifies the pool names and library names that store the replicas after the script runs.
-P <path name>
Specifies the path to the GPFS file system to scan.
Example 10-13 shows system output of the steps to recover data from a read failure tape.
Example 10-13 Recovering data from a read failure
[root@ltfssrv18 ~]# ltfsee info tapes | grep primary
JD2065JD Warning TS1150(J5) 9022 9021 0 primary lib0 1036 -
JD2067JD Valid TS1150(J5) 9022 9021 0 primary lib0 1038 -
JD2066JD Valid TS1150(J5) 9022 9022 0 primary lib0 1037 -
 
[root@ltfssrv18 ~]# ltfsee info files `ls | head -1`
Name: LTFS_EE_FILE_0qIc9FO1LsaBmyTvA_b1ljgl.bin
Tape id:JD2065JD@lib0:JD2070JD@lib0:JD2064JD@lib0 Status: premigrated
 
[root@ltfssrv18 ~]# ./relocate_replica.sh -t JD2065JD -p primary@lib0:copy@lib0:copy2@lib0 -P /ibm/gpfs/
1. Getting pool name and library name to which tape JD2065JD belongs
2. Removing specified tape JD2065JD from pool primary@lib0
3. Creating policy file...
4. Performing policy scan...
5. Recalling migrated files to premigrated state
6. Removing replica in target tape JD2065JD
7. Creating replica in alternative tape in pool primary@lib0
8. Creating policy file...
9. Performing policy scan...
All of replicas in tape JD2065JD have been relocated successfully.
 
[root@ltfssrv18 ~]# ltfsee info tapes | grep JD2065JD
JD2065JD Unavailable TS1150 0 0 0 - lib0 1036 -
 
[root@ltfssrv18 ~]# ltfsee info files `ls | head -1`
Name: LTFS_EE_FILE_0qIc9FO1LsaBmyTvA_b1ljgl.bin
Tape id:JD2066JD@lib0:JD2070JD@lib0:JD2064JD@lib0 Status: premigrated
 
Note: To obtain the relocate_replica.sh script, see Appendix A, “Additional material” on page 353.
10.5 Handling export errors
The following steps clean up files that reference exported tapes on the IBM Spectrum Archive file system when an export error occurs:
1. Stop the LTFS EE service by running ltfsee stop.
2. After the process stops (verify that pidof mmm returns no output), gather all of the IBMTPS attributes from the failed export message.
3. Run ltfsee_export_fix -T <IBMTPS_attribute> [<IBMTPS_attribute>].
4. Start the LTFS EE service by running ltfsee start.
Example 10-14 shows a typical export error, and then follows the steps above to resolve the problem.
Example 10-14 Fix IBMTPS file pointers on the gpfs file system
[root@ltfssrv18 ~]# ltfsee export -p PrimPool -t JD3592JD
GLESS016I(00184): Reconciliation requested.
GLESM401I(00194): Loaded the global configuration.
GLESS049I(00636): Tapes to reconcile: JD3592JD .
GLESS050I(00643): GPFS file systems involved: /ibm/glues .
GLESS134I(00665): Reserving tapes for reconciliation.
GLESS135I(00698): Reserved tapes: JD3592JD .
GLESS054I(00736): Creating GPFS snapshots:
GLESS055I(00741): Deleting previous reconcile snapshot and creating a new one for /ibm/glues ( gpfs ).
GLESS056I(00762): Scanning GPFS snapshots:
GLESS057I(00767): Scanning GPFS snapshot of /ibm/glues ( gpfs ).
GLESS060I(00843): Processing scan results:
GLESS061I(00848): Processing scan results for /ibm/glues ( gpfs ).
GLESS141I(00861): Removing stale DMAPI attributes:
GLESS142I(00866): Removing stale DMAPI attributes for /ibm/glues ( gpfs ).
GLESS063I(00899): Reconciling the tapes:
GLESS086I(00921): Reconcile is skipped for tape JD3592JD because it is already reconciled.
GLESS137I(01133): Removing tape reservations.
GLESS058I(02319): Removing GPFS snapshots:
GLESS059I(02326): Removing GPFS snapshot of /ibm/glues ( gpfs ).
Export of tape JD3592JD has been requested...
GLESL075E(00660): Export of tape JD3592JD completed with errors. Some GPFS files still refer files in the exported tape.
GLESL373I(00765): Moving tape JD3592JD.
Tape JD3592JD is unmounted because it is inserted into the drive.
GLESL043I(00151): Removing tape JD3592JD from storage pool replicate.
GLESL631E(00193): Failed to export some tapes.
Tapes (<no tape>) were successfully exported.
Tapes (<no tape>) are still in the pool and needs a retry to export them.
Tapes (JD3592JD) are in Exported state but some GPFS files may still refer files in these tapes. TPS list to fix are: JD3592JD@65fb41ec-b42d-4fc5-8957-d57e1567aac1@0000013FA0020404
 
[root@ltfssrv18 ~]# ltfsee stop
Library name: lib_ltfssrv18, library id: 0000013FA0020404, control node (MMM) IP address: 9.11.121.249.
Stopped LTFS EE service (MMM) for library lib_ltfssrv18.
 
[root@ltfssrv18 ~]# pidof mmm
[root@ltfssrv18 ~]# ltfsee_export_fix -T JD3592JD@65fb41ec-b42d-4fc5-8957-d57e1567aac1@0000013FA0020404
Please make sure that IBM Spectrum Archive EE is not running on any node in this cluster.
And please do not remove/rename any file in all the DMAPI-enabled GPFS if possible.
Type "yes" to continue.
?
yes
GLESY016I(00545): Start finding strayed stub files and fix them for GPFS gpfs (id=14593091431837534985)
GLESY020I(00558): Listing up files that needs to be fixed for GPFS gpfs.
GLESY015I(00531): Fix of exported files completes. Total=985, Succeeded=985, Failed=0
GLESY018I(00603): Successfully fixed files in GPFS gpfs.
GLESY025I(00615): ltfsee_export_fix exits with RC=0
[root@ltfssrv18 ~]# ltfsee start
Library name: lib_ltfssrv18, library id: 0000013FA0020404, control node (MMM) IP address: 9.11.121.249.
GLESM397I(00221): Configuration option: DISABLE_AFM_CHECK yes.
GLESM401I(00264): Loaded the global configuration.
GLESM402I(00301): Created the Global Resource Manager.
GLESM403I(00316): Fetched the node groups from the Global Resource Manager.
GLESM404I(00324): Detected the IP address of the MMM (9.11.121.249).
GLESM405I(00335): Configured the node group (G0).
GLESM406I(00344): Created the unassigned list of the library resources.
GLESL536I(00080): Started the Spectrum Archive EE service (MMM) for library lib_ltfssrv18.
10.6 Software
IBM Spectrum Archive EE is composed of four major components, each with its own set of log files, so problem analysis is slightly more involved than with other products. This section describes how to troubleshoot issues with each component in turn, and also covers the Linux operating system and Simple Network Management Protocol (SNMP) alerts.
10.6.1 Linux
The log file /var/log/messages contains global Linux system messages, including messages that are logged during system start and messages that are related to LTFS and IBM Spectrum Archive EE functions. However, three specific log files are also created:
ltfs.log
ltfsee.log
ltfsee_trc.log
Unlike with previous LTFS/IBM Spectrum Archive products, there is no need to enable system logging on Linux because it is configured automatically during the installation process. Example 10-15 shows the changes to the rsyslog.conf file and the locations of the three log files.
Example 10-15 rsyslog.conf
[root@ltfssn1 ~]# cat /etc/rsyslog.conf | grep ltfs
:msg, startswith, "GLES," /var/log/ltfsee_trc.log;gles_trc_template
:msg, startswith, "GLES" /var/log/ltfsee.log;RSYSLOG_FileFormat
:msg, regex, "LTFS[ID0-9][0-9]*[EWID]" /var/log/ltfs.log;RSYSLOG_FileFormat
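These three rules route messages by matching the start of the message text. As a rough sketch of the same selection logic (the real filtering is done by rsyslog, not grep), you can test sample message bodies against each selector:

```shell
# Sample message bodies as rsyslog would see them (illustrative only)
cat > /tmp/sample_msgs.txt <<'EOF'
GLES,0123,info,trace record destined for the trace log
GLESL159E(00440): Not all migration has been successful.
LTFS11545I Rebuilding the cartridge inventory
unrelated kernel message
EOF

# startswith "GLES," -> ltfsee_trc.log (machine-readable trace)
grep '^GLES,' /tmp/sample_msgs.txt

# startswith "GLES" but not "GLES," -> ltfsee.log (human-readable)
grep '^GLES' /tmp/sample_msgs.txt | grep -v '^GLES,'

# regex "LTFS[ID0-9][0-9]*[EWID]" -> ltfs.log (LE+ component)
grep -E 'LTFS[ID0-9][0-9]*[EWID]' /tmp/sample_msgs.txt
```

Note that this is only an approximation of the selectors; rsyslog evaluates the rules in order against every message.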
By default, after the ltfs.log, ltfsee.log, and ltfsee_trc.log files reach 1 MB, they are rotated and four copies are kept. Example 10-16 shows the log file rotation settings. These settings can be adjusted as needed within the /etc/logrotate.d/syslog control file.
Example 10-16 Syslog rotation
[root@ltfssn1 ~]# cat /etc/logrotate.d/syslog
/var/log/cron
/var/log/maillog
/var/log/messages
/var/log/secure
/var/log/spooler
{
sharedscripts
postrotate
/bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
endscript
}
/var/log/ltfs.log {
size 1M
rotate 4
missingok
compress
sharedscripts
postrotate
/bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
/bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true
endscript
}
/var/log/ltfsee.log {
size 1M
rotate 4
missingok
compress
sharedscripts
postrotate
/bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
/bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true
endscript
}
/var/log/ltfsee_trc.log {
size 1M
rotate 4
missingok
compress
sharedscripts
postrotate
/bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
/bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true
endscript
}
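If the default 1 MB size or four rotated copies is too small for your workload, the stanzas can be edited in place. The following is a sketch of a modified ltfsee.log stanza that keeps eight compressed 5 MB copies (the values are illustrative; choose limits that match your retention needs):

```
/var/log/ltfsee.log {
size 5M
rotate 8
missingok
compress
sharedscripts
postrotate
/bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
/bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true
endscript
}
```

After editing, running logrotate -d /etc/logrotate.d/syslog parses the file and reports what would be done without rotating anything.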
These log files (ltfs.log, ltfsee.log, ltfsee_trc.log, and /var/log/messages) are invaluable for troubleshooting LTFS problems. Because the ltfsee.log file contains only warning and error messages, it is a good place to start looking for the reason for a failure. For example, a failed file migration might return the messages that are shown in Example 10-17.
Example 10-17 Simple migration with informational messages
# ltfsee migrate mig -p PrimPool@lib_lto
GLESL167I(00400): A list of files to be migrated has been sent to LTFS EE using scan id 1842482689.
GLESL159E(00440): Not all migration has been successful.
GLESL038I(00448): Migration result: 0 succeeded, 1 failed, 0 duplicate, 0 duplicate wrong pool, 0 not found, 0 too small to qualify for migration, 0 too early for migration.
From the GLESL159E message, you know that the migration was unsuccessful, but not why. To find the reason, examine the ltfsee.log file. Example 10-18 shows the end of the ltfsee.log file immediately after the failed migrate command is run.
Example 10-18 ltfsee.log file
# tail /var/log/ltfsee.log
2016-12-14T09:05:38.494320-07:00 ltfs97 mmm[7889]: GLESM600E(01691): Failed to migrate/premigrate file /ibm/gpfs/file1.mpeg. The specified pool name does not match the existing replica copy.
2016-12-14T09:05:48.500848-07:00 ltfs97 ltfsee[29470]: GLESL159E(00440): Not all migration has been successful.
In this case, the migration of the file was unsuccessful because it was previously migrated/premigrated to a different tape cartridge.
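Because error-level GLES message IDs always end in the letter E, a quick grep can summarize the errors in the log. The following sketch runs against a re-created sample of the lines above; on a live system, point the final command at /var/log/ltfsee.log instead:

```shell
# Re-create the sample log lines from Example 10-18 (illustrative input)
cat > /tmp/ltfsee_sample.log <<'EOF'
2016-12-14T09:05:38.494320-07:00 ltfs97 mmm[7889]: GLESM600E(01691): Failed to migrate/premigrate file /ibm/gpfs/file1.mpeg. The specified pool name does not match the existing replica copy.
2016-12-14T09:05:48.500848-07:00 ltfs97 ltfsee[29470]: GLESL159E(00440): Not all migration has been successful.
EOF

# Error-level GLES messages end in E: GLES + component letter + digits + E
grep -oE 'GLES[A-Z][0-9]+E' /tmp/ltfsee_sample.log | sort | uniq -c
```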
IBM Spectrum Archive EE provides two logging facilities. One is in a human-readable format that can be monitored by users; the other is in a machine-readable format that is used for further problem analysis. The former is written to /var/log/ltfsee.log through the “user” syslog facility and contains only warnings and errors. The latter is written to /var/log/ltfsee_trc.log through the “local2” syslog facility.
The messages in machine-readable format can be converted into human-readable format with the ltfsee_catcsvlog tool, which is run as follows:
/opt/ibm/ltfsee/bin/ltfsee_catcsvlog /var/log/ltfsee_trc.log
The ltfsee_catcsvlog command accepts multiple log files as command-line arguments. If no argument is specified, ltfsee_catcsvlog reads from stdin.
Persistent problems
This section describes ways to solve persistent IBM Spectrum Archive EE problems.
If an unexpected and persistent condition occurs in the IBM Spectrum Archive EE environment, contact your IBM service representative and provide the following information to help IBM re-create and solve the problem:
Machine type and model of your IBM tape library in use for IBM Spectrum Archive EE
Machine type and model of the tape drives that are embedded in the tape library
Specific IBM Spectrum Archive EE version
Description of the problem
System configuration
Operation that was performed at the time the problem was encountered
The operating system automatically generates system log files after the initial configuration of IBM Spectrum Archive EE. Provide the results of the ltfsee_log_collection command to your IBM service representative.
10.6.2 SNMP
This section describes how to troubleshoot SNMP traps that are received for the IBM Spectrum Archive EE system.
Traps are sent when migration, recall, reclaim, reconcile, import, or export processes stop unexpectedly because of an error condition. Each trap contains a message ID. Example 10-19 shows a trap that was generated by a failed migration attempt.
Example 10-19 Example of an SNMP trap
Received 119 bytes from UDP: [9.11.11.2]:35054->[9.11.11.2]
0000: 30 75 02 01 01 04 06 70 75 62 6C 69 63 A7 68 02 0u.....public.h.
0016: 04 02 14 36 30 02 01 00 02 01 00 30 5A 30 0F 06 ...60......0Z0..
0032: 08 2B 06 01 02 01 01 03 00 43 03 02 94 E6 30 19 .+.......C....0.
0048: 06 0A 2B 06 01 06 03 01 01 04 01 00 06 0B 2B 06 ..+...........+.
0064: 01 04 01 02 06 81 76 02 03 30 11 06 0C 2B 06 01 ......v..0...+..
0080: 04 01 02 06 81 76 01 01 00 02 01 01 30 19 06 0C .....v......0...
0096: 2B 06 01 04 01 02 06 81 76 01 02 00 04 09 47 4C +.......v.....GL
0112: 45 53 4C 31 35 39 45 ESL159E
 
2013-04-18 23:10:22 xyz.storage.tucson.ibm.com [UDP: [9.11.11.2]:35054->[9.11.11.2]]:
SNMPv2-SMI::mib-2.1.3.0 = Timeticks: (169190) 0:28:11.90 SNMPv2-SMI::snmpModules.1.1.4.1.0 = OID: LTFSEE-MIB::ltfseeErrJobTrap LTFSEE-MIB::ltfseeJob.0 = INTEGER: migration(1) LTFSEE-MIB::ltfseeJobInfo.0 = STRING: "GLESL159E"
The reported IBM Spectrum Archive EE error message is GLESL159E. You can investigate the error further by searching for this ID in the /var/log/ltfsee.log file. More information can also be obtained from the corresponding message ID in “Messages reference” on page 273, which shows that GLESL159E indicates that “Not all migration has been successful.”
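If you feed traps to a handler script (for example, through an snmptrapd traphandle entry), the message ID can be extracted from the ltfseeJobInfo string and then looked up in the log. The following is a sketch that assumes the decoded trap text shown above:

```shell
# Decoded trap line as in Example 10-19 (illustrative input)
trap_line='LTFSEE-MIB::ltfseeJobInfo.0 = STRING: "GLESL159E"'

# Extract the GLES message ID from the quoted STRING value
msg_id=$(printf '%s\n' "$trap_line" | grep -oE 'GLES[A-Z][0-9]+[EWID]')
echo "$msg_id"

# Then search the human-readable log for the surrounding context:
# grep "$msg_id" /var/log/ltfsee.log
```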
10.6.3 IBM Spectrum Scale
IBM Spectrum Scale writes operational messages and error data to the IBM Spectrum Scale log file. The IBM Spectrum Scale log can be found in the /var/adm/ras directory on each node. The IBM Spectrum Scale log file is named mmfs.log.date.nodeName, where date is the time stamp when the instance of IBM Spectrum Scale started on the node and nodeName is the name of the node. The latest IBM Spectrum Scale log file can be found by using the symbolic file name /var/adm/ras/mmfs.log.latest.
The IBM Spectrum Scale log from the prior start of IBM Spectrum Scale can be found by using the symbolic file name /var/adm/ras/mmfs.log.previous. All other files have a time stamp and node name that is appended to the file name.
At IBM Spectrum Scale startup, log files that were not accessed during the last 10 days are deleted. If you want to save old log files, copy them elsewhere.
Example 10-20 shows normal operational messages that appear in the IBM Spectrum Scale log file.
Example 10-20 Normal operational messages in an IBM Spectrum Scale log file
[root@ltfs97 ]# cat /var/adm/ras/mmfs.log.latest
Wed Apr 3 13:25:04 JST 2013: runmmfs starting
Removing old /var/adm/ras/mmfs.log.* files:
Unloading modules from /lib/modules/2.6.32-220.el6.x86_64/extra
Loading modules from /lib/modules/2.6.32-220.el6.x86_64/extra
Module Size Used by
mmfs26 1749012 0
mmfslinux 311300 1 mmfs26
tracedev 29552 2 mmfs26,mmfslinux
Wed Apr 3 13:25:06.026 2013: mmfsd initializing. {Version: 3.5.0.7 Built: Dec 12 2012 19:00:50} ...
Wed Apr 3 13:25:06.731 2013: Pagepool has size 3013632K bytes instead of the requested 29360128K bytes.
Wed Apr 3 13:25:07.409 2013: Node 192.168.208.97 (htohru9) is now the Group Leader.
Wed Apr 3 13:25:07.411 2013: This node (192.168.208.97 (htohru9)) is now Cluster Manager for htohru9.ltd.sdl.
 
Starting ADSM Space Management daemons
Wed Apr 3 13:25:17.907 2013: mmfsd ready
Wed Apr 3 13:25:18 JST 2013: mmcommon mmfsup invoked. Parameters: 192.168.208.97 192.168.208.97 all
Wed Apr 3 13:25:18 JST 2013: mounting /dev/gpfs
Wed Apr 3 13:25:18.179 2013: Command: mount gpfs
Wed Apr 3 13:25:18.353 2013: Node 192.168.208.97 (htohru9) appointed as manager for gpfs.
Wed Apr 3 13:25:18.798 2013: Node 192.168.208.97 (htohru9) completed take over for gpfs.
Wed Apr 3 13:25:19.023 2013: Command: err 0: mount gpfs
Wed Apr 3 13:25:19 JST 2013: finished mounting /dev/gpfs
The amount of time that is needed to start IBM Spectrum Scale varies with the size and complexity of your system configuration. If you cannot access a mounted file system (whether it was mounted automatically or by running a mount command) after a reasonable amount of time, examine the log file for error messages.
The IBM Spectrum Scale log is a repository of error conditions that were detected on each node, and operational events, such as file system mounts. The IBM Spectrum Scale log is the first place to look when you are attempting to debug abnormal events. Because IBM Spectrum Scale is a cluster file system, events that occur on one node might affect system behavior on other nodes, and all IBM Spectrum Scale logs can have relevant data.
10.6.4 IBM Spectrum Archive LE+ component
This section describes the options that are available to analyze problems that are identified by the LTFS logs. It also provides links to messages and actions that can be used to troubleshoot the source of an error.
The messages that are referenced in this section provide possible actions only for solvable error codes. The error codes that are reported by the LTFS program can be retrieved from the terminal console or log files. For more information about retrieving error messages, see 10.6.1, “Linux” on page 266.
When multiple errors are reported, LTFS attempts to find a message ID and an action for each error code. If you cannot locate a message ID or an action for a reported error code, LTFS encountered a critical problem. If you retry the suggested action and the error persists, LTFS also encountered a critical problem. In either case, contact your IBM service representative for more support.
Message ID strings start with the keyword LTFS and are followed by a four- or five-digit value. However, some message IDs include the uppercase letter I or D after LTFS, but before the four- or five-digit value. When an IBM Spectrum Archive EE command is run and returns an error, check the message ID to ensure that you do not mistake the letter I for the numeral 1.
A complete list of all LTFS messages can be found in the IBM Spectrum Archive EE IBM Knowledge Center, which is available at this website:
At the end of the message ID, the following single capital letters indicate the importance of the problem:
E: Error
W: Warning
I: Information
D: Debugging
When you troubleshoot, check messages for errors only.
Example 10-21 shows a problem analysis procedure for LTFS.
Example 10-21 LTFS messages
cat /var/log/ltfs.log
LTFS11571I State of tape '' in slot 0 is changed from 'Not Initialized' to 'Non-supported'
LTFS14546W Cartridge '' does not have an associated LTFS volume
LTFS11545I Rebuilding the cartridge inventory
LTFSI1092E This operation is not allowed on a cartridge without a bar code
LTFS14528D [localhost at [127.0.0.1]:34888]: --> CMD_ERROR
In each line, the first 10 characters are the message ID, and the text that follows describes the operational state of LTFS. The fourth message ID (LTFSI1092E) in this list indicates that an error was generated because the last character is the letter E. The character immediately following LTFS is the letter I. The complete message, including an explanation and appropriate course of action for LTFSI1092E, is shown in the following example:
LTFSI1092E This operation is not allowed on a cartridge without a bar code
Explanation
A bar code must be attached to the medium to perform this operation.
Action
Attach a bar code to the medium and try again.
Based on the description that is provided here, the tape cartridge in the library does not have a bar code. Therefore, the operation is rejected by LTFS. The required user action to solve the problem is to attach a bar code to the medium and try again.
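The classification by trailing severity letter can be scripted so that the I or D that sometimes follows LTFS is not mistaken for the severity. A minimal sketch:

```shell
# Classify an LTFS message ID by its trailing severity letter (sketch)
ltfs_severity() {
  case "$1" in
    *E) echo Error ;;
    *W) echo Warning ;;
    *I) echo Information ;;
    *D) echo Debugging ;;
    *)  echo Unknown ;;
  esac
}

ltfs_severity LTFSI1092E   # an error, despite the I that follows LTFS
ltfs_severity LTFS11571I   # informational
```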
10.6.5 Hierarchical storage management
During installation, hierarchical storage management (HSM) is configured to write log entries to the /opt/tivoli/tsm/client/hsm/bin/dsmerror.log log file. Example 10-22 shows the contents of this file.
Example 10-22 dsmerror.log
[root@ltfs97 /]# cat dsmerror.log
03/29/2013 15:24:28 ANS9101E Migrated files matching '/ibm/glues/file1.img' could not be found.
03/29/2013 15:24:28 ANS9101E Migrated files matching '/ibm/glues/file2.img' could not be found.
03/29/2013 15:24:28 ANS9101E Migrated files matching '/ibm/glues/file3.img' could not be found.
03/29/2013 15:24:28 ANS9101E Migrated files matching '/ibm/glues/file4.img' could not be found.
03/29/2013 15:24:28 ANS9101E Migrated files matching '/ibm/glues/file5.img' could not be found.
03/29/2013 15:24:28 ANS9101E Migrated files matching '/ibm/glues/file6.img' could not be found.
04/02/2013 16:24:06 ANS9510E dsmrecalld: cannot get event messages from session 515A6F7E00000000, expected max message-length = 1024, returned message-length = 144. Reason : Stale NFS file handle
04/02/2013 16:24:06 ANS9474E dsmrecalld: Lost my session with errno: 1 . Trying to recover.
04/02/13 16:24:10 ANS9433E dsmwatchd: dm_send_msg failed with errno 1.
04/02/2013 16:24:11 ANS9433E dsmrecalld: dm_send_msg failed with errno 1.
04/02/2013 16:24:11 ANS9433E dsmrecalld: dm_send_msg failed with errno 1.
04/02/2013 16:24:11 ANS9433E dsmrecalld: dm_send_msg failed with errno 1.
04/03/13 13:25:06 ANS9505E dsmwatchd: cannot initialize the DMAPI interface. Reason: Stale NFS file handle
04/03/2013 13:38:14 ANS1079E No file specification entered
04/03/2013 13:38:20 ANS9085E dsmrecall: file system / is not managed by space management.
The HSM log contains information about file migration and recall, threshold migration, reconciliation, and starting and stopping the HSM daemon. You can analyze the HSM log to determine the current state of the system. For example, the logs can indicate when a recall has started but not finished within the last hour. The administrator can analyze a particular recall and react accordingly. In addition, an HSM log might be analyzed by an administrator to optimize HSM usage. For example, if the HSM log indicates that 1,000 files are recalled at the same time, the administrator might suggest that the files can be first compressed into one .tar file and then migrated.
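Analysis like this can start with a simple frequency count of ANS message IDs per day. The following sketch runs against a re-created sample of the lines above; on a live system, run the awk command against /opt/tivoli/tsm/client/hsm/bin/dsmerror.log instead:

```shell
# Re-create a few sample dsmerror.log lines (illustrative input)
cat > /tmp/dsmerror_sample.log <<'EOF'
03/29/2013 15:24:28 ANS9101E Migrated files matching '/ibm/glues/file1.img' could not be found.
03/29/2013 15:24:28 ANS9101E Migrated files matching '/ibm/glues/file2.img' could not be found.
04/02/2013 16:24:06 ANS9474E dsmrecalld: Lost my session with errno: 1 . Trying to recover.
EOF

# Count occurrences of each ANS message ID per day
# (field 1 is the date, field 3 is the message ID)
awk '{ count[$1 " " $3]++ } END { for (k in count) print count[k], k }' \
    /tmp/dsmerror_sample.log | sort
```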
10.6.6 IBM Spectrum Archive EE logs
This section describes IBM Spectrum Archive EE logs and message IDs and provides some tips for dealing with failed recalls and lost or strayed files.
IBM Spectrum Archive EE log collection tool
IBM Spectrum Archive EE writes its logs to the files /var/log/ltfsee.log and /var/log/ltfsee_trc.log. These files can be viewed in a text editor for troubleshooting purposes. Use the IBM Spectrum Archive EE log collection tool to collect data that you can send to IBM Support.
The ltfsee_log_collection tool is in the /opt/ibm/ltfsee/bin folder. To use the tool, complete the following steps:
1. Log on to the operating system as the root user and open a terminal.
2. Start the tool by running the following command:
# /opt/ibm/ltfsee/bin/ltfsee_log_collection
3. When the following message is displayed, read the instructions, and then enter y or p to continue:
LTFS Enterprise Edition - log collection program
This program collects the following information from your GPFS cluster.
a. Log files that are generated by GPFS, LTFS Enterprise Edition
b. Configuration information that is configured to use GPFS and LTFS Enterprise Edition
c. System information including OS distribution and kernel, and hardware information (CPU and memory)
If you want to collect all the information, enter y.
If you want to collect only a and b, enter p (partial).
If you do not want to collect any information, enter n.
The collected data is compressed in the ltfsee_log_files_<date>_<time>.tar.gz file. You can check the contents of the file before submitting it to IBM.
4. Make sure that a packed file with the name ltfsee_log_files_[date]_[time].tar.gz is created in the current directory. This file contains the collected log files.
5. Send the tar.gz file to your IBM service representative.
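To check the contents of the archive before submitting it, list it with tar. The sketch below builds a small stand-in archive so that the commands are runnable anywhere; with a real collection, substitute your own ltfsee_log_files_[date]_[time].tar.gz file name:

```shell
# Build a stand-in archive for illustration (a real collection is
# produced by ltfsee_log_collection in the current directory)
mkdir -p /tmp/loginspect && cd /tmp/loginspect
echo demo > ltfsee.log
tar -czf ltfsee_log_files_20130403_132500.tar.gz ltfsee.log

# List the archive contents without extracting anything
tar -tzf ltfsee_log_files_20130403_132500.tar.gz
```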
Messages reference
For IBM Spectrum Archive EE, message ID strings start with the keyword GLES and are followed by a single letter and then by a three-digit value. The single letter indicates which component generated the message. For example, GLESL is used to indicate all messages that are related to the IBM Spectrum Archive EE command. At the end of the message ID, the following single uppercase letter indicates the importance of the problem:
E: Error
W: Warning
I: Information
D: Debugging
When you troubleshoot, check messages for errors only. Example 10-23 shows the complete list of messages that were reported by IBM Spectrum Archive EE and its components.
Example 10-23 List of messages
GLESA002E "Failed to access attribute (%s)."
GLESA003E "Failed to evaluate the status of file %s."
GLESA004E "Unable to get the DMAPI handle for file %s (errno: %d)."
GLESA005E "Failed to parse IBMSTIME DMAPI attribute (value: %s)."
GLESA006E "Failed to access file on tape %s."
GLESA007E "Failed to get the status of the file system on which the file %s exists in (errno: %d)."
GLESA008E "Failed to get the status of file %s (errno: %d)."
GLESA009E "Failed to get parent directory of file %s."
GLESA010E "Unexpected command output (command: %s, output: %s)."
GLESA011E "Failed to invoke command %s (errno: %d)."
GLESA012E "Cannot invoke command %s by current user (uid: %d)."
GLESA013E "Error writing %s (errno: %d)."
GLESA014E "Buffer too short."
GLESA015E "Failed to open mtab file (errno: %d)."
GLESA016E "Failed to open file %s (errno: %d)."
GLESA017E "Failed to sync file %s (errno: %d)."
#GLESA018E "Failed to find mmm config file on GPFS."
GLESA019E "Failed to open mmm config file (%s)."
GLESA020E "Failed to parse mmm config file (%s)."
GLESA021E "Failed to find GPFS filesystem working with LTFS Enterprise Edition."
GLESA022E "Failed to create tape repos file %s (errno: %d)."
GLESA023E "Failed to write tape repos file %s (errno: %d)."
GLESA024E "Failed to unlink tape repos file %s (errno: %d)."
GLESA025E "Failed to open tape repos file %s (errno: %d)."
GLESA026E "Failed to read tape repos file %s (errno: %d)."
GLESA027E "Failed to set tape state information %s for tape %s (errno: %d)."
GLESA028E "Failed to get tape state information %s for tape %s (errno: %d)."
GLESA029E "Failed to parse extended attribute (name: %s value: %s)."
GLESA030E "Failed to create a directory on tape %s. (errno:%d)"
GLESA031E "Failed to access .LTFSEE_DATA directory on tape %s. (errno:%d)"
GLESA032E "Failed to get remaining capacity for tape %s. (%d)"
GLESA033E "Failed to check validity for tape %s. (%d)"
GLESA034E "Failed to create tape repos file for tape %s. (%d)"
GLESA035E "Failed to acquire DMAPI access right to a file. (%d)"
GLESA036E "Failed to open a configuration file or a configuration file /var/opt/ibm/ltfsee/local/ltfsee_config.filesystem is missing (errno: %d)."
GLESA037E "Failed to parse the configuration file."
GLESA038E "Exception from configuration database (%s)."
GLESA039I "Failed to get DMAPI attribute %s (errno:%d)."
GLESA040E "Failed to parse DMAPI attribute (name: %s, value: %s)."
GLESA041E "Failed to release DMAPI access right to a file. (%d)"
GLESA042E "Unable to unlink lock file file %s. System error code being returned: %d."
GLESA043E "Unable to open lock file %s. System error code being returned: %d."
GLESA044E "Lock file %s has not been opened."
GLESA045E "Failed to close lock file %s. System error code being returned: %d."
GLESA046E "Unable to create a lock on lock file %s. System error code being returned: %d."
GLESA047E "Unable to unlock lock file %s. System error code being returned: %d."
GLESA048W "Generation number is maximal value (%s)."
GLESA049E "Failed to write %s DMAPI attribute."
GLESA050E "Failed to create scan list file %s (errno: %d)."
GLESA051E "Failed to open scan list file %s (errno: %d)."
GLESA052E "Failed to get file status for scan list file %s (errno: %d)."
GLESA053E "The offset location of scan list file %s is wrong (actual location: %lu, expected location: %lu)."
GLESA054E "Failed to get %s advisory lock for scan file %s (errno: %d)."
GLESA055E "Failed to release advisory lock for scan file %s (errno: %d)."
GLESA056E "Failed to write scan file %s (errno: %d)."
GLESA057E "Failed to seek scan file %s (errno: %d)."
GLESA058E "Specified scan file index %lu is out of range (max: %lu)."
GLESA059E "Failed to read scan file %s (errno: %d)."
GLESA060E "Failed to truncate scan file %s (errno: %d)."
GLESA061E "Failed to close scan file %s (errno: %d)."
GLESA062W "The size of scan list file %s is wrong (actual size: %lu, expected size: %lu)."
GLESA063E "Failed to determine configuration path."
GLESA064E "Pool ID for Pool name (%s) is not found."
GLESA065E "Error looking up pool ID for pool with name '%s'."
GLESA066E "No library found with name '%s'."
GLESA067E "Error looking up library with name '%s'."
GLESA068E "Error looking up default library."
GLESA069I "RPC send/receive error (Function: %s, Line: %d, errno:%d)."
GLESA070E "Error opening library configuration database."
GLESA071I "The MMM for tape library (%s) is not running."
GLESA072I "Failed to check MMM (%d)."
GLESA073E "Unable to parse IBMTPS (%s)."
GLESA074E "Failed to get the filesystem's snap handle of file %s (errno:%d)."
GLESA075E "Failed to open GPFS i-node scan (file name: %s, errno:%d)."
GLESA076E "Failed to open dsm.sys file (errno: %d)."
GLESA077E "More than two 'AFMSKIPUNCACHEDFILES' directives exist in dsm.sys"
GLESA078E "No additional option is specified for the 'AFMSKIPUNCACHEDFILES' directive in dsm.sys"
GLESA079E "Invalid option (%s) is specified for the 'AFMSKIPUNCACHEDFILES' directive in dsm.sys"
GLESA080E "Failed to write dsm.sys file (errno:%d)."
GLESA081E "Failed to open %s file (errno:%d)."
GLESA082E "Failed to rename dsm.sys file to %s (errno:%d)."
GLESA083E "Invalid name (%s) was specified to create DMAPI session."
GLESA084E "Failed to access DMAPI session directory (errno: %d)."
GLESA085E "Failed to create DMAPI session directory (errno: %d)."
GLESA086E "Cannot create DMAPI session directory %s because its name is already in use."
GLESA087E "Failed to create DMAPI session %s (errno: %d)."
GLESA088E "Failed to open DMAPI session file %s (errno: %d)."
GLESA089E "Failed to open local node configuration file %s (errno: %d)."
GLESA090E "Failed to write local node configuration file %s (errno: %d)."
GLESA091E "Failed to parse local node configration file %s (line: %s)."
GLESA092E "Failed to parse local node configration file %s (value: %s)."
GLESA093E "Unable get tape ID (%d)."
GLESA094E "File %s has been previously premigrated to storage pools %s, which are not matched to specified pools %s."
GLESA095E "PRESTIME%d is not valid."
GLESA096E "Unable to access file %s (errno:%d)."
GLESA097E "File has been updated before stubbing."
GLESA098E "Failed stubbing for pool id (%s). The file (%s) has been migrated to pool id(%s)."
GLESA099E "Unable to migrate file %s because all tapes are offline exported."
GLESA100E "Failed to complete migration or premigration of file %s. At least one of the copy jobs for this file failed."
GLESA101W "Failed to remove DMAPI attributes (%s) for file %s."
GLESA102I "Signal recieved (%d). The job will be terminated."
GLESA103E "Unable to internal lock file (%s) (errno: %d)."
GLESA104E "Unable to lock internal file (%s) (errno: %d)."
GLESA105E "Unable to unlock internal file (%s) (errno: %d)."
GLESA106I "Internal file (%s) was locked."
GLESA107E "Failed to get a list of extended attributes for file %s."
GLESA108E "Failed to read extended attribute values for file %s."
GLESA109E "Failed to resolve network address %s."
 
 
GLESA500E "Failed to open file %s for reading extended attributes (errno:%d)."
GLESA501E "Failed to get list of extended attributes for file %s (errno:%d)."
GLESA502E "Failed to read extended attribute values for file %s (errno:%d)."
GLESA503E "Failed to allocate memory to read extended attributes for file %s."
GLESA504E "EA names conflict with Spectrum Archive EE reserved EA names."
GLESA505E "User-defined EA values have a size above the Spectrum Archive EE individual EA size limit."
GLESA506E "User-defined EA values have an aggregated size above the Spectrum Archive EE aggregated EAs size limit."
GLESA507E "Unable to get the IP address for tape library (%s)."
GLESA508W "Failed to connect to Spectrum Archive EE service for tape library (%s). Reason:%d."
GLESA509W "Unable to get read advisory lock. (%d)"
GLESA510W "Unable to release the advisory lock. (%d)"
GLESA511W "Unable to connect to the Spectrum Archive EE service for tape library (%s) to check tape state for (%s)."
GLESA512E "Unable to connect to Spectrum Archive EE (Function: %s, Line: %d, RPC error:%d)."
GLESA513E "Unable to request exclusive DMAPI rights for file system object %s, DMAPI error code: %d."
GLESA514E "Unable to update DMAPI Attribute for file system object %s."
GLESA515E "Unable to remove DMAPI Attribute for file system object %s."
GLESA516I "Unable to update DMAPI Attribute %s (errno:%d)."
GLESA517E "Failed to allocate memory to read DMAPI attributes %s."
GLESA518I "Unable to remove DMAPI Attribute %s (errno:%d)."
GLESA519E "Failed to retrieve tape library information. Tape library control (MMM) node might not be configured (ltfsee_config -m ADD_CTRL_NODE)."
GLESA520E "Library configuration database file %s is not present or tape library control node (MMM) is not be configured (ltfsee_config -m ADD_CTRL_NODE)."
GLESA521E "Database access timed out."
GLESA522E "Base64 decoder: empty input."
GLESA523E "Base64 decoder: invalid character in the input."
GLESA524E "Base64 decoder: input length is not a multiple of 4."
GLESA525E "Failed to allocate memory."
GLESA526E "Base64 decoding failed attribute %s ( %s )."
GLESA527E "General error accessing database, rv: %d, err: %d, msg: '%s'."
GLESA528I "Failed to get library name from library id ( %s )."
GLESA529E "Drive not found '%s'."
GLESA530E "Tape not found '%s'."
GLESA531E "Error looking up tape '%s'."
GLESA532E "Cannot get default library name (%d)."
GLESA533I "Retry to connect to Spectrum Archive EE service for tape library (%s). (%d)"
GLESA534E "Failed to remove an extended attribute %s from file %s. (errno=%d)"
GLESA535I "ltfs.sync start for tape %s."
GLESA536I "ltfs.sync comp for tape %s."
GLESA537E "Failed to synchronize tape %s, errno=%d"
GLESA538I ".LTFSEE_DATA directory is not found on tape %s. Creating the directory."
GLESA539I "Setting tape state information for tape %s into EAs of %s."
GLESA540I "Extended attribute of file %s: key=%s, value=%s, rc=%d, errno=%d"
GLESA541W "Found an empty IBMTPS."
GLESA542E "Failed to invoke a snapshot-related process. (op=%s, GPFS=%s, snapshot=%s)"
GLESA543E "Failed to create a GPFS snapshot. (GPFS=%s, snapshot=%s, rc=%d)"
GLESA544E "Failed to delete a GPFS snapshot. (GPFS=%s, snapshot=%s, rc=%d)"
GLESA545E "Failed to create a directory at %s. (reason=%s)"
GLESA546E "Failed to delete a directory at %s. (reason=%s)"
GLESA547E "Failed to get the GPFS device name that is mounted at %s."
GLESA548E "Failed to get the GPFS ID that is mounted at %s."
GLESA549E "Failed to remove at least one tape from IBMTPS attr of %s."
GLESA550E "Failed to write an extended attribute %s (value=%s) to file %s. (errno=%d)"
GLESA551W "Failed to get export version, using verison of "0.0"."
GLESA552I "ExportVersion of importing tape %s is %s."
GLESA553W "Retried to connect to Spectrum Archive EE for tape library (%s), and successfully connected."
 
# MMM related messages:
GLESM001E "Unable to get ip addresses of cluster nodes."
GLESM002E "Unable to get node ids of cluster nodes."
GLESM003E "Unable to ltfs port information for node %s."
GLESM004E "Could not connect to LTFS on any of the nodes."
GLESM005E "Could not connect to LTFS on one or more of the nodes."
GLESM006E "Could not retrieve drive inventory from some or all of the nodes."
GLESM007E "Could not retrieve tape inventory from some or all of the nodes."
GLESM008E "Tape %s does not exist."
GLESM009I "Tape %s is in default storage pool: removing from list."
GLESM010E "Unable to open file %s that is providing a list of files to be migrated (%d)."
GLESM011E "Unable to get size for file %s. Please verify whether this file exists within a GPFS file system."
GLESM012E "Unable to open the configuration file %s."
GLESM013E "There is not enough space available on the tapes belonging to storage pool %s to perform the migration of file %s."
GLESM014I "removing file %s from queue."
GLESM015E "Unable to write LTFS EE lock files."
GLESM016I "recall of file with inode %lu has been scheduled."
GLESM017E "BAD ASSIGNMENT OF FILE TO BE RECALLED: inode of file: %lu."
GLESM018I "mount initiated for tape %s on drive %s."
GLESM019I "unmounting tape %s on drive %s."
GLESM020I "The remaining space on tape %s is critical, free space on tape: %lu, size of file: %lu."
GLESM021I "starting migration for file %s on node with ip %s and tape with id %s."
##### DO NOT REMOVE!! this msg is parsed by log analyzer #####
GLESM022I "The file to recall with inode %llu on tape %s will be recalled."
GLESM023E "error detaching LTFS EE."
GLESM024E "The file system that has been specified does not exist or the file system type is not local GPFS."
GLESM025E "config file %s not found."
GLESM026I "exiting LTFS EE."
GLESM027I "migration started for file %s."
GLESM028I "A migration phase for file %s is finished."
GLESM029I "recall started for file with inode %lu."
GLESM030I "recall finished for file with inode %lu."
##### DO NOT REMOVE!! this msg is parsed by log analyzer #####
GLESM031I "A list of %d file(s) has been added to the migration and recall queue."
GLESM032I "Initialization of the LTFS EE service finished."
GLESM033E "The Spectrum Archive EE service has been already started."
GLESM035E "usage: %s [-d <debug level:1-3>] [-b] -l <library_id>."
GLESM036E "migration for file %s timed out."
GLESM037E "recall timed out for file with inode %lu."
GLESM038E "file %s not found in migration and recall queue."
GLESM039W "Unable to find file %s in internal list."
GLESM040I "mount initiated for tape %s(%s) on drive %s for file %s(%s)."
GLESM041I "unmounting tape %s(%s) on drive %s for file %s(%s)."
GLESM042E "error obtaining resources."
GLESM043E "unable to allocate memory."
GLESM044E "tape %s has been incorrectly inventoried: tape mounted on drive %s."
GLESM045E "tape %s has been incorrectly inventoried: tape is not mounted."
GLESM046E "unable to obtain free capacity for tape %s."
GLESM047E "unable to obtain mount information for test %s."
GLESM048E "HSM is not operating correctly on host %s. LTFS EE will shut down."
GLESM049E "Unable to migrate file %s: storage pool %s does not exist or has no tapes assigned."
GLESM050E "Maximum length of line specified in configuration file is 1024. Exiting LTFS EE service."
GLESM051E "Maximum length of storage pool specified in configuration file is 32. Exiting LTFS EE service."
GLESM052E "Invalid storage pool name specified in configuration file. Only alpha-numeric or underscore characters are allowed. Exiting LTFS EE service."
GLESM053E "Unable to get the status of a Spectrum Archive EE node (%s)."
GLESM054I "There are still scans in progess. To stop the Spectrum Archive service, run ltfsee stop again."
GLESM055I "Tape with ID %s is in unknown state. Determining state."
GLESM057E "Tape with ID %s is not usable. Please check if it is ltfs formatted."
GLESM058I "File with name %s already part of the job list. File has not been added."
GLESM059E "File with name %s already part of the job list but assigned to a different storage pool. File has not been added."
GLESM060E "Unable to determine file system containing file %s. File has not been added."
GLESM061E "The file system containing file %s is not managed by HSM. The file has not been added."
GLESM062E "Generic session with number %u not found."
GLESM063E "Generic request with the same identifier %s is already within the migration and recall queue."
GLESM064I "New generic request with identifier %s on tape %s."
GLESM065I "Generic request with identifier %s for tape %s started."
GLESM066I "Generic request with identifier %s for tape %s finished."
GLESM067E "Unable to find generic request identifier %s in internal list."
GLESM068I "Generic request with identifier %s for tape %s and drive %s has been scheduled."
GLESM069E "An identifier for a generic job needs to consist of printable characters and may not contain white-spaces. '%s' is not allowed."
GLESM070E "The size of file %s has changed and therefore skipped for migration."
GLESM071I "protocol version: %s."
GLESM072E "Unable to determine protocol version information from node %s."
GLESM073E "A different protocol versions found on node %s."
GLESM074I "Unable to determine protocol version information from node %s - skipping node."
GLESM075I "No LTFS directory found for tape %s, removing from list."
GLESM076E "Unable to find job for session %llu and id %s in queue."
GLESM077E "File %s has been already migrated."
# used to print any message produced by a sub-command
GLESM078I "%s"
GLESM079E "Unable to write configuration file %s."
GLESM080E "Tape %s is not available. Recall request for file with i-node %llu will not be processed."
GLESM081E "Recall item with i-node %llu not found."
GLESM082E "File %s occupies no blocks within the file system. This file is be skipped for migration."
+ " A reason could be that this file has a zero size, is already migrated, or it is a sparse file."
GLESM083E "Could not get port from node id %d."
GLESM084E "Could not get ip address from node id %d."
GLESM085E "Could not get node id from drive %s."
GLESM086E "Could not get drive state or state internal, drive %s."
GLESM087I "Drive %s was added successfully."
GLESM088E "Could not set drive %s remove requested."
GLESM089E "Could not get tape id from drive %s."
GLESM090I "Could not get drive state. Remove requested drive does not exist. Drive serial %s."
GLESM091I "Drive %s was prepared to be removed from LTFS EE."
GLESM092E "Drive %s remove request cleared."
GLESM093E "Invalid node id from drive serial %s, node id %d."
GLESM094E "Could not get drive %s remove request."
GLESM095E "Could not set drive %s internal state %d."
GLESM096I "Library address mismatch for tape %s: address stored internally: %d - address provided by ltfs %d: synchronizing now."
GLESM097E "No valid tape is found to recall the file with i-node %lu."
GLESM098E "Available drive %s on multiple node. node %d is ignored."
GLESM099E "Failed to move tape %s. Tape does not exist in the specified pool."
GLESM100E "Tape %s is failed to move because job for this tape remains."
GLESM101E "Tape %s is failed to move because this tape is moving now."
GLESM102E "Tape %s is failed to move."
GLESM103E "Tape %s is successfully moved, but failed to add to the system."
GLESM104E "Tape %s is failed to remove from tape file system."
GLESM105E "Tape %s is failed to move because this tape is in use."
GLESM106E "Tape %s is failed to move because this tape belongs to a pool."
GLESM107E "Fail to call ltfseetrap (%d)."
GLESM108W "SNMP trap is not available (%d)."
GLESM109E "File "%s" has not been migrated since its file name is too long. Only 1024 characters are allowed."
GLESM110W "Tape %s got critical."
GLESM111I "File %s is too small to qualify for migration."
GLESM112E "Server error in scheduler."
GLESM113E "Storage pool %s has been provided twice for file %s."
GLESM114E "Unable to execute command %s."
GLESM115E "Unable to determine if process %d exists on node woth ip address %s."
GLESM116I "The size of the blocks used for file %s is smaller or equal the stub file size. The file will be skipped."
GLESM117E "More than three pools specified for file %s. This file will be skipped."
GLESM118E "Unmount of tape %s failed (drive %s). Check the state of tapes and drives."
GLESM119E "Mount of tape %s failed. Please check the state of tapes and drives."
GLESM120E "generic job with identifier %s(%u) timed out."
GLESM121E "Redundant copy request for file %s timed out at %s phase (waited for %s)."
GLESM122E "Error reading from tape %s: synchonizing now."
GLESM123I "Stubbing file %s."
GLESM124I "Setting debug level to %d."
GLESM125E "Masking file %s will exceed the maximum number of characters to be allowed."
GLESM126E "Unable to initialize DMAPI. Exiting LTFS EE."
GLESM127E "Unable to release DMAPI right on file %s, errno %d."
GLESM128E "Unable to get DMAPI handle for file system object %s.(errno:%d)"
GLESM129E "Unable to determine version of LTFS EE installation package on host %s."
GLESM130I "Host %s node is a non LTFS EE node. If this host should be part of the LTFS EE cluster please check that LTFS EE is installed."
GLESM131E "Unable to determine the version of LTFS EE installation package on the local host. Please verify if the installation is correct."
GLESM132E "A different version %s of LTFS EE is installed on host %s. The version on the local host is %s. This host will be skipped."
GLESM133I "LTFS EE version installed on host %s: %s."
##### DO NOT REMOVE!! this msg is parsed by log analyzer #####
GLESM134I "Migration result: %ld succeeded, %ld failed, %ld duplicate, %ld duplicate wrong pool, %ld not found, %ld too small to qualify for migration, %ld too early for migration."
GLESM135I "Synchronizing tape %s."
GLESM136I "Premigration for file %s has finished %d seconds ago. This file will be stubbed now."
GLESM137E "File system object %s is not a regular file. It has not been added to the migration and recall queue."
GLESM138E "File system object %s is not a regular file. It will be removed from the migration and recall queue."
GLESM139E "A Spectrum Archive node is out of sync (%s)."
GLESM140E "Could not retrieve node status from some or all of the nodes."
GLESM141E "Unable to connect to node: %s, error:%d."
GLESM142E "Unable to retrieve drive inventory from node: %s, error:%d."
GLESM143E "Unable to retrieve tape inventory from node: %s, error:%d."
GLESM144E "Unable to retrieve node status from node: %s, error:%d."
GLESM147E "Unable to determine the migration state of file %s."
GLESM148E "File %s is already migrated and will be skipped."
GLESM149I "File %s is premigrated."
GLESM150E "Unable to determine the tape id information for premigrated file %s. This file will be skipped."
GLESM151E "File %s has been previously premigrated to a storage pool other than the specified pool. This file will be skipped."
GLESM152E "The synchronization of tape %s failed."
GLESM153E "The synchronization of at least one tape failed. No premigrated files will be stubbed."
GLESM154E "File %s has not been found for stubbing."
GLESM155E "The LTFS's mount point on host %s is not '/ltfs'. LTFS EE will shut down."
GLESM156E "Tape %s got critical. File %s will not be stubbed."
GLESM157E "Drive %s is not available anymore. File %s will not be stubbed."
GLESM158W "File %s has changed before stubbing and will be skipped."
GLESM159E "An invalid TSM option was specified on node %s. This needs to be corrected to process further migration and recall jobs."
GLESM160E "Communication disrupted probably caused by a time out."
GLESM161I "Tape %s is skipped to move because it is already in the target slot."
GLESM162E "Tape %s is failed to move because it is moving between slots."
GLESM163E "A license is expired on a node (%s). LTFS EE will shut down."
GLESM164E "License is expired. The following functions and commands will not be available:"
+ " ltfsee migrate"
+ " ltfsee import"
+ " ltfsee export"
+ " ltfsee reconcile"
+ " ltfsee reclaim"
GLESM165E "Unable to request DMAPI right on file %s, errno %d."
GLESM166E "Unable to sync file %s, errno %d."
GLESM167E "Unable to get attr for file %s, errno %d."
GLESM168E "Tape %s in pool %s is not usable (%d). Please ensure that it is LTFS formatted and undamaged."
GLESM169E "Unable to determine the UID information for premigrated file %s. This file will be skipped."
GLESM170E "The number of copy pools specified (%d) does not match the number of copy tapes (%d) for premigrated files %s. This file will be skipped."
GLESM171E "File %s has been previously copied to tape %s. This tape cannot be found. This file will be skipped."
GLESM172E "File %s contains tape %s in it's attributes which is part of storage pool %s. This storage pool has not been specified for migration. This file will be skipped."
GLESM173W "For file %s to be recalled a migration is currently in progress. This migration request will be canceled."
GLESM174I "Stopping RPC server."
GLESM175I "Unable to connect to RPC server. Retring ..."
GLESM176W "Unable to connect to RPC server. Exiting LTFS EE service."
GLESM177E "Cannot resolve LTFS node name: %s."
GLESM178W "Failed to create socket for family/type/protocol: %d/%d/%d. Will retry."
GLESM179E "Error setting socket flags."
GLESM180W "Attempt to connect to LTFS on node %s failed: %s. Will retry."
GLESM181W "Timeout while connecting to LTFS on node %s. Will retry."
GLESM182W "Socket select failed while connecting to LTFS on node %s: %s. Will retry."
GLESM183W "Error reading socket state while connecting to LTFS on node %s: %s. Will retry."
GLESM184W "Socket error while connecting to LTFS on node %s: %d. Will retry."
GLESM185W "Retry connecting to LTFS on node %s."
GLESM186E "Failed to connect to LTFS on node %s, giving up."
GLESM187E "Failed to send LTFS logout message."
GLESM188E "Socket select failed while attempting to send message to LTFS via OOB protocol. Error: %s."
GLESM189E "Error reading socket state while sending a message to LTFS via OOB protocol. Error: %s."
GLESM190E "Socket error while attempting to send a message to LTFS via OOB protocol. Socket error code: %d."
GLESM191E "Socket error while sending a message to LTFS via OOB protocol. Error: %s."
GLESM192E "Timeout while sending a message to LTFS via OOB protocol."
GLESM193E "Socket select failed while attempting to receive message from LTFS via OOB protocol. Error: %s."
GLESM194E "Error reading socket state while receiving a message from LTFS via OOB protocol. Error: %s."
GLESM195E "Socket error while attempting to receive a message from LTFS via OOB protocol. Socket error code: %d."
GLESM196E "Socket error while receiving a message from LTFS via OOB protocol. Error: %s."
GLESM197E "Timeout while receiving a message from LTFS via OOB protocol."
GLESM198E "Socket error while receiving a message from LTFS via OOB protocol. Error: connection closed, peer has performed a shutdown."
GLESM199E "Recalls are not possible since HSM is not operating correctly. Please check if HSM is working correctly on all LTFS EE nodes."
GLESM200W "Recall request for i-node %llu is removed from the job list."
GLESM201I "Recalls will be possible again."
GLESM202E "Recalls will be disabled since HSM is not operating correctly."
GLESM203E "Unable to retrieve all DMAPI sessions."
GLESM204E "No DMAPI session found for recall daemon on node %d."
##### DO NOT REMOVE!! this msg is parsed by log analyzer #####
GLESM205I "Mount request for tape %s on drive %s will be sent to LTFS sub system."
GLESM207I "State of tape %s after mount: %s."
GLESM208I "Mount request for tape %s on drive %s is completed."
##### DO NOT REMOVE!! this msg is parsed by log analyzer #####
GLESM209I "An unmount request for tape %s on drive %s will be sent to the LTFS sub system."
GLESM211W "Library address for tape %s has still not changed after unmount. Please pay attention."
GLESM212I "Unmount request for tape %s is completed."
GLESM213I "Retrieving information from the LTFS sub system."
GLESM214E "Retrieving information from the LTFS sub system failed."
GLESM215I "Retrieving information from the LTFS sub system completed."
GLESM216I "Retrieving information from the LTFS sub system."
GLESM217E "Retrieving information from the LTFS sub system failed."
GLESM218I "Retrieving information from the LTFS sub system completed."
GLESM219E "Session %lu to be queried if done does not exist."
GLESM220E "Generic job with identfier %s does not exist."
GLESM221E "Generic job with identfier %s failed."
GLESM222I "Generic job with identfier %s was successful."
GLESM223E "Not all generic requests for session %u have been successful: %ld failed."
GLESM224I "All generic requests for session %u have been successful."
GLESM225I "New session %u created for new generic job with identifier %s."
GLESM226I "Existing session %u used for new generic job with identifier %s."
GLESM227I "Updating storage pool %s has been requested."
GLESM228I "Adding or removing storage pool %s has been requested."
GLESM229I "Synchronizing LTFS EE has been requested."
GLESM230I "The termination of LTFS EE has been requested."
GLESM231I "Information for LTFS EE %s has been requested."
GLESM232I "A new list of files to be migrated has been submitted to LTFS EE."
GLESM233I "Migration jobs will be added using scan id %lu."
GLESM234I "Migration job for file %s has been added using scan id %lu."
GLESM235I "Stub size for file system %s is %d."
GLESM236I "The recall for file with i-node %lu has a status of scheduled and can start now."
GLESM237I "New recall request for file system %s and for i-node %llu."
GLESM238I "A migration phase for file %s has been finished with an internal result of %d."
GLESM239I "The stubbing migration phase for file %s has been finished."
GLESM240I "The redundant copy job %s has been finished."
GLESM241I "The following numbers will be added to the primary session with id %lu: succeeded %d - failed %d."
GLESM242I "Cleaning up DMAPI sessions on all nodes."
GLESM243E "File %s will not be stubbed since redundant copy processes failed."
GLESM244E "The inventory for drive %s was inconsistent and will be corrected."
GLESM245E "Attribute %s of tape %s is invalid."
GLESM246E "Failed to update the number of tape blocks on MMM."
GLESM247E "Failed to get attribute %s of tape %s (errno: %d)."
GLESM248E "Failed to invoke attr command at node %s."
GLESM249E "Failed to parse output stream from attr command (errno=%d)."
GLESM250E "The attribute %s of tape %s was invalid."
GLESM251I "Unmounting tape %s on drive %s for (copy job) file %s."
GLESM252E "Tape %s is not mounting."
GLESM253E "Failed to get status for tape %s."
GLESM254E "Failed to get the node id of the node where tape %s is mounted."
GLESM255I "Tape %s(%s) mounted on drive %s will be unmounted for a (copy job) file %s(%s)."
GLESM256E "File %s is designated to storage tape %s and storage pool "%s". Tape %s now belongs to storage pool "%s"."
GLESM257W "Timing out %s requests that were scheduled to tape %s."
GLESM258I "The recall request for inode %llu for tape %s and drive %s has been scheduled."
GLESM259I "Data transfer for recall request for inode %llu has finished successfully."
GLESM260E "Data transfer for recall request for inode %llu failed."
GLESM261E "Size of the DMAPI handle exceeds the maximum size of 4096. The recall cannot be processed."
GLESM262I "Recall request for inode %llu has been finished successfully."
GLESM263E "Recall request for file %s (inode %llu) failed."
GLESM264I "There are %d copies available on the following tapes for the recall of a file with i-node %lu: %s."
GLESM265W "Reservations cleanup processing does not work properly. ret=%d"
GLESM266W "NO copy found on tape %s for file with i-node %lu."
GLESM267I "Tape %s selected to recall file with i-node %lu."
GLESM268I "Cannot find ssh service in /etc/services file. Use default port number 22."
GLESM269E "Item too long: %s. Maximum length of item specified in configuration file is %d. Exiting LTFS EE service."
GLESM270E "File %s is already premigrated and will be skipped."
GLESM271E "Failed to move tape %s. This tape is in a critical state."
##### DO NOT REMOVE!! this msg is parsed by log analyzer #####
GLESM272I "Migration job (%s) is scheduled for tape %s and drive %s. These jobs will be run."
GLESM273E "File %s has a future timestamp. The file is not added."
GLESM274W "The DMAPI session or DMAPI token for this process appeared invalid and got renewed."
GLESM275E "Reference counter for tape %s needs to be cleared."
GLESM276I "Configuration file is not new format."
GLESM277W "Ignoring unknown configuration directive '%s'."
GLESM278W "Invalid value for limit job: '%s'."
GLESM279W "Cannot parse configuration file: line too long."
GLESM280W "Cannot parse configuration file: not directive %s."
GLESM281W "Specify a decimal value between 0 and %d for the stubbing threshold."
GLESM282W "Duplicate directive for tape in configuration file: '%s'."
GLESM283W "Duplicate directive for pool in configuration file: '%s'."
GLESM284W "Duplicate directive for %s in configuration file"
GLESM285E "Premigration or migration of file %s failed with the following HSM error code: %s."
GLESM286E "LTFS node %s is out of sync."
GLESM287W "Invalid value for the immediate stubbing threshold: '%s'."
GLESM288W "No running recall daemon found, recalls will not be possible."
GLESM289E "Internal error counting the migration or copy commands. For tape %s mounted on drive %s the counting will be reset."
GLESM290I "Recall job for i-node %llu already removed from queue."
GLESM291I "Drive %s is still locked by a recall job for i-node %llu. Removing the lock."
GLESM292I "Starting the save process for file system object %s on node with ip %s, tape %s and drive %s."
GLESM293E "No save jobs found to copy to tape for scan id %u."
GLESM294E "Unable to retrieve list for file system objects to save on tape for scan id %u."
GLESM295E "File list %s for scan id %u to pass file system objects to save on tape does not exist."
GLESM296E "Inconsistency: save jobs stored for scan %u: %d, jobs deleted: %d."
GLESM297E "Job %s had a wrong state, current %s, expected: %s."
GLESM298I "Failing all save jobs for scan %u."
GLESM299E "There is a difference of %d of the number of outstanding jobs for scan %u."
GLESM300E "Save jobs for scan %u failed."
GLESM301I "Save jobs for scan %u succeeded."
GLESM302E "Unable to find scan %u. Processing file system object list %s aborted."
GLESM303E "Unable to add file system object %s to job queue."
GLESM304I "Job %s timed out. All jobs for the corresponding scan %u file get treated as failed."
GLESM305E "Incorrectly formatted line within save list file %s."
GLESM306E "Unable to determine the drive used for the current save jobs. It is not possible to make this drive available again."
GLESM307E "Storage pool list to migrate file %s to is too long, the rest of the files in the migration list will be skipped."
GLESM308I "Recall result: %ld succeeded, %ld failed, %ld duplicate, %ld not migrated, %ld not found."
GLESM309E "Unable to open file %s that is providing a list of files to be recalled."
GLESM310I "New recall request for file system %s and for file %s."
GLESM311I "Recall for i-node %llu already in LTFS EE job queue."
GLESM312E "No selective recall can be performed since HSM is not operating correctly. File %s recall failed."
GLESM313E "For i-node %lu IBMSGEN#: %llu differs to ibm.ltfsee.objgen on tape %s: %llu."
GLESM314E "There are %d copies of the file with i-node %lu where the comparison of the generation numbers failed."
GLESM315E "The generation number on tape %s does not exist while it is available on disk of file with i-node %lu: %llu."
GLESM316E "Unable to save file system objects: storage pool %s does not exist."
GLESM317E "Unable to save file system objects: storage pool %s has no tapes assigned."
GLESM318E "Masking failed for file %s."
GLESM319E "File %s has been removed or replaced. The corresponding recall request will fail."
GLESM320E "Recall for a file with inode %llu failed."
GLESM321I "The number of commands executed in parallel on drive %s has changed to %d."
GLESM322I "Unable to change the number of commands executed in prallel on drive %s."
GLESM323I "Premigrated files have been added to the LTFS EE job queue. Stubbing will be initiated immediately."
GLESM324E "The primary copy of file %s resides on an offline exported tape and cannot be recalled."
GLESM325W "A Spectrum Archive EE node recovered (%s)."
GLESM326E "Node ID not correctly assigned to tape %s. synchronzing now."
GLESM327E "Tape %s is inaccessible or does not exist. The job for inode or identifier %llu will be failed."
GLESM328E "File system object %s has changed and therefore its transfer to tape is skipped."
GLESM329W "Invalid value for recall deadline: '%s'."
GLESM330W "Please specify a decimal value between 0 and %d for recall deadline."
GLESM331E "File %s not found (errno:%d)."
GLESM332W "File %s is not migrated."
GLESM333E "Unable to determine the IBMTPS attribute for file %s."
GLESM334E "Unable to determine the IBMUID attribute for file %s."
GLESM335E "Tape %s has an invalid type or no matching drive found."
GLESM336E "Failed to add drive %s. Drive does not existed or the drive failed to be added to the configuration db."
GLESM337I "%d recall jobs have been added to the LTFS EE job queue for scan %lu."
GLESM338W "Tape %s is in wrong state. Removed from storage pool %s."
GLESM340E "Unable to determine file system name and stub size for file %s, failing the migration of this file."
GLESM341I "Clearing scheduled size component of tape %s, remaining: %llu bytes."
GLESM342I "Clearing scheduled size component of tape %s was not necessary."
GLESM343I "Update physical tape information for tape %s."
GLESM344I "%lld bytes cleared too much for tape %s."
GLESM345I "%lld bytes cleared for tape %s."
GLESM346I "Synchrozing tapes is initiated due to exceeding job queue threshold (Total limit: %lu, Threshold: %lu%%, Waiting jobs: %u)."
GLESM347E "TS1140 cannot be used with TS1150. Exiting LTFS EE."
GLESM348E "Determining the file inode number while processing (distributing). File recall failed (%s)."
GLESM359I "The file %s is already premigrated. This file will be stubbed now."
GLESM360W "Tape %s is not usable in storage pool %s."
GLESM361E "The specified option %s is too long. Maximum length is %d."
GLESM362I "Stubbing is initiated immediately to complete migration (scan id: %lu)."
GLESM363W "A line in the configuration file exceeded maximum length. (key: %s, length: %d)"
GLESM364W "An invalid drive role (%s) is specified for drive (%s). The drive role must be decimal numeric smaller than 16. 15 is applied to this drive."
GLESM365W "There are no drives that can do migration jobs. It is recommended that you set the migration drive role on at least one drive."
# GLESM366W "There are no drives which can handle copy job. It is recommended that you set copy drive role at least to one drive."
GLESM367W "There are no drives that can handle the recall job. It is recommended that you set the recall drive role on at least one drive."
GLESM368W "There are no drives that can handle a generic job. It is recommended that you set the generic drive role on at least one drive."
GLESM369W "Invalid value for ltfs sync period: '%s'."
GLESM370W "Invalid value for ltfs sync threshold: '%s'."
GLESM371W "Specify a decimal value between 0 and 100 for ltfs sync threshold."
GLESM372E "Failed to create temporary file %s (errno: %d)."
GLESM373W "Results for copy job(s) already have been set."
GLESM374E "Unable to create and open temporary list of files for the copy operation into %s. Exiting Spectrum Archive EE service."
GLESM375E "Unable to write object name %s to temporary list of files for the copy operation. Exiting Spectrum Archive EE service."
GLESM376W "Unable to flush temporary list of files for the copy operation."
GLESM377E "Tape %s does not exist or is already assigned to another storage pool"
GLESM378E "Storage pool %s does not existed."
GLESM379E "Failed to update MMM process ID in configuration database."
GLESM380E "Failed to update MMM startup time in configuration database."
GLESM381E "Failed to update MMM watchdog feed time in configuration database."
GLESM382E "Duplicate job already exists in local MMM (scan id: %u)."
GLESM383E "Duplicated job already exists in remote MMM (MMM: %s)."
GLESM384W "Job management information remains in the DMAPI attribute, but seems to be no longer valid. The DMAPI attribute has been overwritten."
GLESM385E "Unable to create migration lock file. Exiting LTFS EE service."
GLESM386E "Unable to remove migration lock file. Exiting Spectrum Archive EE service."
GLESM387E "Failed to open PID file (errno: %d)."
GLESM388E "Failed to lock PID file (errno: %d)."
GLESM389E "Unable to stub files in the scan %u becase temprary file was failed to create."
##### DO NOT REMOVE!! this msg is parsed by log analyzer #####
GLESM390I "Command to execute... (%s)."
GLESM391I "Could not remove the drive configuration. %s."
GLESM392E "Could not add the drive %s."
GLESM393E "Failed to get the resource to determine format type. The job for inode %llu will be failed."
GLESM394E "Failed to migrate/premigrate/recall a file due to a timeout when acquiring DMAPI access rights. The file %s is exclusively locked."
GLESM395E "Failed to migrate/premigrate file %s because file status cannot be checked."
GLESM396I "Ignored the internal error: %s."
GLESM397I "Configuration option: %s."
GLESM398I "Quick add of tape %s to pool %s."
GLESM399I "Removing tape %s from pool %s (%s)."
GLESM400E "Tape %s is being processed by another job."
GLESM401I "Loaded the global configuration."
GLESM402I "Created the Global Resource Manager."
GLESM403I "Fetched the node groups from the Global Resource Manager."
GLESM404I "Detected the IP address of the MMM (%s)."
GLESM405I "Configured the node group (%s)."
GLESM406I "Created the unassigned list of the library resources."
GLESM407E "Failed to configure the node group (%s)."
GLESM408E "Cannot communicate with all LE nodes for the library '%s'."
GLESM409E "MMM not properly initialized."
GLESM410E "No nodegroups configured."
GLESM411E "Unable to get the IP address of the MMM for tape library with ID '%s'."
GLESM412W "Invalid value for the maximum length of a file list: '%s'."
GLESM413W "Invalid value for the maximum number of migration threads: '%s'."
GLESM414W "Invalid value for the maximum number of files processed by a copy command: '%s'."
GLESM415W "Invalid value for the maximum number of parallel copy commands for a single drive: '%s'."
GLESM416E "Tape(%s) has been moved from pool ID(%s) to pool ID (%s)."
GLESM417W "Invalid value for db busy timeout '%s'"
GLESM418E "Failed to move tape %s. The tape is in the unassigned tape list."
GLESM419E "Nodegroup not specified."
GLESM420E "Failed to call dsmmigrate. Migration job (%d) failed."
GLESM421E "Failed to aquire read lock for tape (%s)."
GLESM422E "The MMM for tape library (%s) is not running."
GLESM423I "Command (%s) did not succefully finish (%d)."
GLESM424W "Invalid value for migration delay '%s'."
GLESM425W "Invalid value for bulk recall timeout '%s'."
GLESM426E "Cannot create the unassigned list."
GLESM427E "Unable to start RPC."
GLESM428E "Unable to create an RPC server thread."
GLESM429W "Invalid value for disable afm check: '%s'."
GLESM430E "Failed to migrate/premigrate/recall file (%s) due to a failure to acquire DMAPI access rights."
GLESM431E "A duplicate job for file (%s) already exists in the local MMM."
GLESM432E "Failed to migrate/premigrate/recall file (%s) due to a DMAPI access failure."
GLESM433E "Failed to initialize DMAPI (errno: %d)."
GLESM434E "Failed to create token (errno: %d)."
GLESM435E "Migration requests are not acceptable due to DMAPI failure."
GLESM436E "Failed to respond to a user event (errno: %d)."
GLESM437E "Recall requests are not acceptable due to DMAPI failure."
GLESM438I "The pool (%s) is almost full - no space is available on tapes in the storage slot."
GLESM439E "Failed to obtain file system usage for %s (errno:%d)."
GLESM440I "File system (%s) usage is at %d. Recall for i-node %llu will be deferred."
GLESM441I "The migration job for %d-th copy for i-node %llu is already removed from the queue."
GLESM442I "The cache file is not in a valid format. (%s). Skip this line."
GLESM443E "Tape %s must be in a home slot to be added to the specified pool."
GLESM444E "Cannot register RPC service %s:%d (errno: %d)."
GLESM445I "RPC service %s:%d(%d) successfully started."
GLESM446E "Cannot create tcp tranport for RPC service %s:%d (errno: %d)."
GLESM447E "Failed to create RPC client (errno: %d)."
GLESM448E "Failed to find nodegroup for node %d (%d)."
GLESM449E "Failed to find RPC server for node %d."
GLESM450E "Unable to add file (%s) to internal list."
GLESM451I "The file %s was updated within 2 minutes. The file will be skipped."
GLESM452W "Invalid value for total size threshold: '%s'."
GLESM453E "Unable to find migration job for %s."
GLESM454I "A migration phase for file %s (%d-th in %s) has been finished with an internal result of %d."
GLESM455E "Unable to find internal migration list for %s."
GLESM456E "Unable to find file information in internal migration list %s."
GLESM457E "Unable to find file information for %s in internal migration list %s."
GLESM458E "Unable to access internal migration list %s."
GLESM459I "File %s is added to internal migration list %s."
GLESM460W "Unable to find tape drive %s in internal list."
GLESM461W "Unable to find tape in tape drive %s."
GLESM462I "Sync job for tape %s used drive %s."
GLESM463I "Sync job for tape %s finished."
GLESM464I "Recall job for tape %s used drive %s."
GLESM465I "Generic job for tape %s used drive %s."
GLESM466I "Save job for tape %s used drive %s."
GLESM467I "Tape %s is mounted on the drive %s."
GLESM469I "Migration job for tape %s used drive %s."
GLESM470I "Job for tape %s used drive %s."
GLESM471E "Tape %s is in a bad state. The job for inode or identifier %llu will fail."
GLESM472E "Failed to change the replication factor of file %s."
GLESM473E "Cannot add a new replica for file %s, which is already in a migrated state."
GLESM474E "Unable to create internal directory (%s). (%d)"
GLESM475E "Unable to find internal directory (%s). (%d)"
GLESM476E "Unable to access internal directory (%s). (%d)"
GLESM477I "Unable to remove old internal file (%s). (%d)"
GLESM478E "Unable to create internal file (%s). (%d)"
GLESM479E "Unable to lock internal file (%s). (%d)"
GLESM480E "Unable to unlock internal file (%s). (%d)"
GLESM481I "Add copy job (scan:%u, pool:%s, file:%s, number of files:%d, total size:%lu)"
GLESM482E "Unable to migrate files under Spectrum Archive EE's work directory."
GLESM483I "Drive %s is not in a MOUNTED state, but a sync job for tape %s will be run (%d)."
GLESM484E "Failed to get the resource to determine format type. The job for identifier %s will fail."
GLESM485I "Finishing export job. Change the tape (%s) to be unusable."
GLESM486E "Unable to find save job for %s."
GLESM487E "Unable to save file system object %s: storage pool %s does not exist or has no tapes assigned."
 
GLESM500E "File (%s) is premigrated and all tapes (%s) are offline."
GLESM501E "Unable to get UID for file (%s)."
GLESM502E "Unable to create temporary file to copy the GPFS scan result in %s.(%d)"
GLESM503E "Unable to create temp file: %s. (%d)"
GLESM504E "Unable to write file path(%s) to temp file (%s)."
GLESM505E "Unable to open temp file (%s) (%d)."
GLESM506E "File %s was modified after migration started."
GLESM507I "One of the copy threads failed."
GLESM508W "Copy job for file (%s) was not found in internal list."
GLESM509E "Migration requests for pool %s are currently not accepted."
GLESM510I "Sync request for tape %s has been scheduled."
GLESM511E "Unable to determine tape ID for file %s."
GLESM512E "Pool ID for pool name (%s) is not found."
GLESM513E "Unable to connect to MMM for tape library (%s)."
GLESM514E "Unable to connect to the Spectrum Archive EE service for tape library (%s). Check if the Spectrum Archive EE for the tape library has been started (%d)."
GLESM515E "Submission of the migration request for tape library (%s) failed."
GLESM516I "A list of files to be migrated for tape library (%s) has been sent to Spectrum Archive EE using scan id %u."
GLESM517W "Unable to get the DMAPI handle for i-node %llu.(errno:%d)"
GLESM518E "Node Group for pool name (%s) is not found."
GLESM519I "Connecting to MMM for library (%s)."
GLESM520I "Send migration request for pool (%s) to MMM for library (%s)."
GLESM521E "Unable to determine nodegroup (%s) for the operation."
GLESM522E "Unable to process RPC call: insufficient parameters."
##### DO NOT REMOVE!! this msg is parsed by log analyzer #####
GLESM523I "A migration phase for pool %s has been finished."
GLESM524I "Migration for pool %s is requested from remote MMM."
GLESM525E "File (%s) on tape (%s) was updated before stubbing. The file size changed from %s to %s."
GLESM526E "File (%s) on tape (%s) was not successfully written."
GLESM527E "Unable to find premigrated file(%s) on tape (%s), pool(%s), library (%s)."
GLESM528E "Unable to create file (%s) (errno:%d)."
GLESM529W "Unable to get starting offset for file (%s) on tape (%s)."
GLESM530E "Unable to determine nodegroup for drive (%s)."
GLESM531E "Error looking up nodegroup for drive (%s)."
GLESM532E "Unable to determine nodegroup for node (%d)."
GLESM533E "Error looking up nodegroup for node (%d)."
GLESM534E "Unable to determine nodegroup for pool with id '%s'."
GLESM535E "Error looking up nodegroup for pool with id '%s'."
GLESM536E "Unable to determine nodegroup for pool with name '%s'."
GLESM537E "Error looking up nodegroup for pool with name '%s'."
GLESM538E "No library found with name '%s'."
GLESM539E "Error looking up library with name '%s'."
GLESM540E "Unable to open file (%s) (errno:%d)."
GLESM541E "Unable to create temporary file to recall file (%s)."
GLESM542E "Unable to write file name %s to temporary file."
GLESM543E "Submission of the recall request for tape library (%s) failed."
GLESM544I "A list of files to be recalled for tape library (%s) has been sent to Spectrum Archive EE using scan id %u."
GLESM545E "Unable to get filesystem ID, inode number, or inode generation (errno: %d)."
GLESM546E "Unable to write DMAPI handle file (errno: %d)."
GLESM547E "Failed to parse DMAPI handle file (value: %s)."
GLESM548E "Failed to save DMAPI file into internal file. All jobs in the scan (id: %lu) have failed."
GLESM549E "Tape %s cannot be added to the storage pool %s. The tape type does not match the pool type."
GLESM550E "Tape %s cannot be formatted with format type %02x."
GLESM551E "Unable to check whether the file (%s) is migrated to WORM pool or not."
GLESM552E "File (%s) was already migrated to WORM pool (%s)."
GLESM553E "Failed to migrate/premigrate all of the files that belong to scan %u."
GLESM554E "Failed to add tape (%s)."
GLESM555E "Failed to format tape (%s)."
GLESM556W "Tape (%s) is not available to recall file (%s) (reason:%d)."
GLESM557E "Failed to parse pid string %s."
GLESM558E "Failed to write PID string to PID file."
GLESM559E "Pool '%s' is a WORM pool, but tape '%s' is not a WORM tape."
GLESM560I "Updating tape status for %s."
GLESM561I "Sync reference = %d for tape %s, but tape internal state is %d."
GLESM562E "File (%s) on tape (%s) was not successfully written due to the tape state."
GLESM563E "Database '%s' is using outdated database schema. Upgrade schema, or erase the database file and run ltfsee_config to reconfigure system."
GLESM564E "Error updating threshold."
GLESM565E "MMM internal database error (%s)."
GLESM566E "Library resources database path and filename not set."
GLESM567E "Error looking up pool ID for pool with name '%s'."
GLESM568I "Unmounted the tape and tried to check synced files, but failed to open file (%s) (errno:%d)."
GLESM569E "Failed to recall the file with i-node %llu because tape %s is not valid."
GLESM570E "Failed to add node %s, because LE+ is not configured correctly. Configure LE+ using ltfsee_config -m ADD_NODE for this node, and restart LE+ (work_dir: %s)."
GLESM571W "General lock is acquired for %d sec (Function: %s, Line: %d)."
GLESM572W "A node disappeared unexpectedly (%s). Skipping this node."
GLESM573W "A node is not in the resource (%s:%s). Skipping this node."
GLESM574I "Unmounting tape %s in the resource cleaning sequence."
GLESM575I "Unmounting tape %s in the resource releasing sequence."
GLESM576E "Invalid or no node ID specified (%d)."
GLESM577E "No libraries are configured."
GLESM578E "Invalid or no port specified (%d)."
GLESM579E "Failed to check AFM status of file %s."
GLESM580E "Unable to migrate file %s due to its AFM status (AFM status: %s). This file will be skipped for migration."
GLESM581E "Failed to check the AFM fileset (errno: %d)."
GLESM582E "Failed to start up the MMM. AFM filesets found (filesets: %s). Run the ltfsee_config -m CLUSTER command to enable AFM file state checking."
GLESM583E "Failed to start the migration or premigration process. AFM filesets found (filesets: %s). Run ltfsee_config -m CLUSTER command to enable AFM file state checking."
GLESM584W "AFM file state checking is enabled, but AFM filesets were not found. Run ltfsee_config -m CLUSTER to improve migration performance."
GLESM585E "Cannot capture tape state information for %s."
GLESM586E "Offline import failed because %s is not in the offline exported state."
GLESM587E "Offline import failed because %s is exported in another GPFS cluster: %ld."
GLESM588E "Offline import failed because %s is in an exporting state."
GLESM589E "Import failed because tape %s in pool %s is offline exported."
GLESM590E "Tape %s is not offline in pool %s."
GLESM591E "Tape %s is offline in pool %s."
GLESM592E "Tape %s does not exist in the pool %s."
GLESM593E "Failed to update remote node (%s) configuration (exit: %d)."
GLESM594W "RPC Server (%s): Time-consuming service function (%d) detected (elapsed time: %d sec)."
GLESM595E "The tape %s containing a copy of file %s does not belong to any pool."
GLESM596E "Failed to check the consistency of existing replica copies and the migration target pool id (file: %s)."
GLESM597I "Nothing to do for a migration or premigration request for file %s."
GLESM598I "Reusing existing copy of file %s (tape id: %s, pool id: %s, library id: %s)."
GLESM599E "Cannot move tape %s because the status is "Critical"."
GLESM600E "Failed to migrate/premigrate file %s. The specified pool name does not match the existing replica copy."
GLESM601I "The node that issued the reserve (%s) is not reachable. Canceling."
GLESM602I "The process that issued the reserve does not exist. Canceling. %s, %d, %d"
GLESM603E "Failed to obtain IP address."
GLESM604I "Reserve received from node ID %d, IP %s."
 
# messages for LEcontrol
GLESM700I "Connecting to %s:%d."
GLESM701I "Connected to %s:%d (%d)."
GLESM702I "Reconnecting to %s:%d."
GLESM703I "Reconnected to %s:%d (%d)."
GLESM704I "Disconnecting from %s:%d (%d)."
GLESM705I "Disconnected from %s:%d."
GLESM706I "Getting %s inventory from %s:%d (%d)."
GLESM707I "Got %s inventory from %s:%d (%d)."
GLESM708E "Got %d nodes from %s:%d (%d)."
GLESM709E "%s %s command error: %s:%d (%d): %s."
GLESM710W "Cannot find %s on %s:%d (%d)."
GLESM711I "Assigning %s to %s:%d (%d)."
GLESM712I "Assigned %s to %s:%d (%d)."
GLESM713I "Unassigning %s from %s:%d (%d)."
GLESM714I "Unassigned %s from %s:%d (%d)."
GLESM715I "Object %s was already unassigned: %s:%d (%d)."
GLESM716E "Got %d tapes from %s:%d (%d)."
GLESM717I "Mounting tape %s to %s on %s:%d (%d)."
GLESM718I "Mounted tape %s to %s on %s:%d (%d)."
GLESM719I "Unmounting tape %s on %s:%d (%d)."
GLESM720I "Unmounted tape %s on %s:%d (%d)."
GLESM721I "Syncing tape %s on %s:%d (%d)."
GLESM722I "Synced tape %s on %s:%d (%d)."
GLESM723I "Failed to remove %s on %s:%d (%d), but will try to %s."
GLESM724I "Formatting tape %s to %s on %s:%d (%d)."
GLESM725I "Formatted tape %s to %s on %s:%d (%d)."
GLESM726I "Recovering tape %s to %s on %s:%d (%d)."
GLESM727I "Recovered tape %s to %s on %s:%d (%d)."
GLESM728I "Moving tape %s to %s on %s:%d (%d)."
GLESM729I "Moved tape %s to %s on %s:%d (%d)."
 
 
GLESM998E "Unknown - internal use only"
GLESM999I "Timer %s --- %ld.%09ld"
 
# messages related to the ltfsee command
GLESL001I "Usage: ltfsee <command> <options>"
+ ""
+ "Commands:"
+ " cleanup - Removes a specified job scan or session."
+ " drive - Adds or removes a drive to or from a node, and sets"
+ " the drive role attributes."
+ " export - Exports tapes out of the IBM Spectrum Archive system."
+ " fsopt - Queries or updates file system level settings for"
+ " stub size, 'read starts recall', and preview size."
+ " help - This help information for ltfsee command."
+ " import - Imports tapes into the IBM Spectrum Archive system."
+ " info - Prints list and status information for the jobs or a"
+ " resource(libraries, node groups, nodes, drives, pools,"
+ " or tapes)."
+ " migrate - Migrates files to tape storage pools."
+ " pool - Creates or deletes a tape storage pool, adds or removes"
+ " tapes to or from a storage pool."
+ " premigrate - Premigrates files to tape storage pools."
+ " rebuild - Rebuilds files from specified tapes."
+ " recall - Recalls files data from tapes."
+ " recall_deadline - Enables, disables, sets, or gets recall deadline time"
+ " value to control delay of tape mounts for recall jobs."
+ " reclaim - Reclaims unused space on tapes by moving only the"
+ " referenced data to other tapes and removing the"
+ " unreferenced data."
+ " reconcile - Reconciles tapes metadata against the Spectrum Scale"
+ " (GPFS) file systems metadata."
+ " recover - Recovers files from a tape or remove a critical-state"
+ " tape."
+ " repair - Repairs a file or object that is in strayed state."
+ " save - Saves empty files, empty directories, and symbolic links"
+ " objects to tape storage pools."
+ " start - Starts the IBM Spectrum Archive services for one or"
+ " multiple tape libraries."
+ " status - Shows the status of the IBM Spectrum Archive services."
+ " stop - Stops the IBM Spectrum Archive services for one or"
+ " multiple tape libraries."
+ " tape - Moves tape to home slot or IE slot (without exporting"
+ " it)"
+ " threshold - Sets or shows the Spectrum Scale (GPFS) file systems"
+ " usage threshold at which migrations are preferred over"
+ " recalls."
+ ""
+ "To get the syntax and explanation for a specific command, run:"
+ " ltfsee help <command>"
+ ""
GLESL002E "Unable to open migration policy file: %s."
GLESL003E "Too many migration policies in migration policy file: %s (max. is %d)."
GLESL004E "Invalid format, migration policy file: %s."
GLESL005E "Unable to open a GPFS scan result file for reading: %s. Specify correct scan result file."
GLESL006E "Unable to load migration policies from file: %s."
GLESL007I "%d migration policies found:"
GLESL008I "policy %d:"
GLESL009I " regexp: %s"
GLESL010I " pool: %s"
GLESL011E "Unable to compile regular expression: %s."
GLESL012I "Files to be migrated (filename, storage pool):"
GLESL013E "Unable to create temp file: %s."
GLESL014I "%s %s."
GLESL015E "Unable to create temp file: %s."
GLESL016I ""
GLESL017I "Submitting files for migration..."
GLESL018E "Submission of the migration request failed. Please check the file provided to this command."
GLESL019I "."
GLESL020E "Failed to retrieve migration request."
GLESL021I "done."
GLESL023E "Unable to find required components for starting LTFS EE. Please check if LTFS EE is correctly installed."
GLESL024E "Error occurred: errno %d."
GLESL025E "Command failed. Non-root user specified the "%s" option. Log in as root and try the command again."
GLESL026E "Unable to compile regular expression: %s."
GLESL027I "No match."
GLESL028I "Match."
GLESL029E "Spectrum Archive EE system not running. Use 'ltfsee start' command first."
GLESL030E "Unable to connect to the LTFS EE service. Please check if the LTFS EE has been started."
GLESL031E "Storage pool %s already exists. Specify a unique storage pool to create."
GLESL032E "Tape %s is already defined within one storage pool."
GLESL033E "Invalid operation on storage pool: %s."
GLESL034E "Invalid operation specified for info command: %s."
GLESL035I "Invalid file system %s specified to store configuration file."
GLESL036I "Retrieve completed."
GLESL037E "Invalid command: %s."
##### DO NOT REMOVE!! this msg is parsed by log analyzer #####
GLESL038I "Migration result: %ld succeeded, %ld failed, %ld duplicate, %ld duplicate wrong pool, %ld not found, %ld too small to qualify for migration, %ld too early for migration."
GLESL039I "%d tape inventories are corrected."
GLESL040E "Storage pool %s does not exist. Specify a valid storage pool."
GLESL041E "Tape %s does not exist in storage pool %s or is in an invalid state. Specify a valid tape ID."
GLESL042I "Adding tape %s to storage pool %s."
GLESL043I "Removing tape %s from storage pool %s."
GLESL044E "Failed to delete storage pool. The pool %s is not empty. Tapes must be removed from the storage pool before it can be deleted."
GLESL045E "Tape %s cannot be removed from storage pool %s since it is currently mounted or a mount/unmount is in progress."
GLESL046E "Tape %s does not exist in the library. Specify a valid tape ID."
GLESL047I "The LTFS EE service has been started on the node with ip address %s and pid %ld."
GLESL048E "Maximum length of storage pool is 16 characters."
GLESL049E "Only alpha-numeric or underscore characters are allowed for a storage pool name."
GLESL050I "There are still scans in progress. To terminate Spectrum Archive EE for this library, run the ltfsee stop command again."
GLESL051E "Unable to copy migration file list temporarily to gpfs file system %s."
GLESL053E "Unable to create temporary file to copy the GPFS scan result in %s."
GLESL054I "Tape %s has previously been assigned to storage pool %s and now moved to storage pool %s."
GLESL055E "Check that tape %s exists and is in the state "Valid LTFS" or "Unknown". This tape has not been added."
GLESL056E "Tape %s is already in a storage pool. Remove the tape from that storage pool before adding it to another storage pool."
GLESL057E "Unable to set threshold %d."
GLESL058I "Current threshold: %d%%."
GLESL059I "Migration requests are currently not accepted."
GLESL060E "Invalid tape ID: %s. Tape IDs must be eight characters long."
GLESL061E "Invalid tape ID: %s. The tape is not found in the specified pool."
GLESL062E "Tape with ID: %s is in an invalid state. Tapes must be either in state "Valid LTFS" or "Unknown"."
GLESL063I "Import of tape %s has been requested."
GLESL064I "Import of tape %s complete."
GLESL065E "Import of tape %s failed. Please consult the log files /var/log/ltfsee.log on the LTFS EE nodes."
GLESL066E "The --offline option is mutually exclusive with the %s options."
GLESL067E "Import/Rebuild failed. Option --path/-P is required since there are multiple GPFS file systems mounted. Try again using the --path/-P <pathName> option."
GLESL068E "Options %s are mutually exclusive and must occur only once."
GLESL069E "The path name character length exceeds the maximum length of %d. Shorten the length of the mount point path of the GPFS file system."
GLESL070E "Storage pool %s is empty or does not exist. Specify an existing storage pool that is not empty."
GLESL071E "The target file system is not a Spectrum Archive EE managed file system. Use option --path/-P to specify an appropriate import path."
GLESL072W "Import of tape %s completed with errors. Some files could not be imported correctly, but the tape was added to the pool. See the log files /var/log/ltfsee.log on the LTFS EE nodes."
GLESL073I "Export of tape %s has been requested..."
GLESL074I "Export of tape %s complete."
GLESL075E "Export of tape %s completed with errors. Some GPFS files still refer to files in the exported tape."
GLESL076E "Export of tape %s failed. Please consult the log files /var/log/ltfsee.log on the LTFS EE nodes."
GLESL077E "%s --> no information (error: %s)."
GLESL078I "%s --> not migrated."
GLESL079I "%s --> migrated to tape %s."
GLESL080I "Reclamation complete. %d tapes reclaimed, %d tapes removed from the storage pool."
GLESL081I "Tape %s successfully reclaimed, formatted, and removed from storage pool %s."
GLESL082E "Reclamation failed while reclaiming tape %s to target tape %s."
GLESL083I "Cannot apply threshold to tape %s, tape will not be reclaimed. Use the command without a threshold option to reclaim the tape."
GLESL084I "Start reclaiming the following %d tapes:"
GLESL085I "Tape %s successfully reclaimed, it remains in storage pool %s."
GLESL086I "Reclamation has completed, but some files remain and a reconcile is required. At least tape %s must be reconciled."
GLESL087I "Tape %s successfully formatted."
GLESL088I "Tape %s successfully checked."
GLESL089E "Skipped adding tape %s to storage pool because the tape is already formatted. Ensure the specified tape ID to format is valid, and specify the -F option (instead of -f) to force formatting."
GLESL090E "Skipped adding tape %s to storage pool because the tape is write protected. Ensure the specified tape ID to format is valid, and check the write protect switch."
GLESL091E "This operation is not allowed for the current state of the tape. Check the status of tape %s by using the ltfsee info tapes command."
GLESL092E "Tape %s is missing an EOD mark. Use the -d option (instead of -c) to repair this tape."
GLESL093E "Tape %s failed to format due to a fatal error."
GLESL094E "Tape %s failed to check due to a fatal error."
GLESL095E "Tape %s already belongs to a pool."
GLESL096E "Unable to write lock for reconciliation."
GLESL097E "Unable to write lock for reclamation."
GLESL098E "Another reconciliation, reclamation or export process is currently executing. Wait for completion of the executing process and try again."
GLESL099E "Failed to move tape %s because the tape is being formatted."
GLESL100E "Tape %s has failed to move."
GLESL101E "Failed to remove tape %s from this system."
GLESL102E "Failed to move tape %s because there is a pending job for this tape. Wait until the job completes, and then try to move the tape again."
GLESL103I "Tape %s is moved successfully."
GLESL104E "Cannot move the tape %s because it is in the drive."
GLESL105I "The Reclamation process will be stopped."
GLESL106I "The Reclamation process has stopped because of an interruption."
GLESL107E "Unable to setup signal handling."
GLESL108E "Memory allocation error - %s."
GLESL109E "Failed to access attribute (%s) of file %s."
GLESL110E "The specified file does not exist. Check that the file name is correct."
GLESL111E "Failed to access the specified file."
GLESL112E "Specified file is not regular file, directory or symbolic link."
GLESL113E "Failed to invoke dsmls command."
GLESL115E "Failed to parse dsmls command's output."
GLESL116E "Reclamation failed because less than two tapes are available in storage pool %s and reclamation requires at least two tapes. Add a tape to this storage pool, and then try the reclamation again."
GLESL117E "Could not get drive and node info, drive serial %s, node_id %d, rpc_rc %d, ret %d."
GLESL118E "Failed to add a drive %s to LTFS, node %d, port %d, ip_address %s, rc %d."
GLESL119I "Drive %s added successfully."
GLESL120E "Could not set remove request flag for drive serial %s."
GLESL121I "Drive serial %s is removed from LTFS EE drive list."
GLESL122E "Failed to remove a drive %s from LTFS, node %d, port %d, ip_address %s, rc %d."
GLESL123E "Could not check drive remove status. ret = %d, drive serial %s."
GLESL124E "Invalid node ID %d for the add drive request."
GLESL125E "Source tape %s is not in state 'Valid LTFS'."
GLESL126E "Export failed."
GLESL127E "Session or scan with id %lu does not exist. Specify a valid scan id."
GLESL128I "Session or scan with id %lu has been removed."
GLESL129I "Session or scan with id %lu is not finished."
GLESL130E "RPC error, rc %d."
GLESL131E "Failed to add drive %s because the drive is not in the 'stock' state. The LTFS drive status is: %d."
GLESL132E "Could not remove drive %s. The drive is not in the mount or not-mounted state. LTFS status: %d."
GLESL133E "Failed to move tape %s because a tape mount is in progress for this tape. Wait for the completion of the mount, and then try the move again."
GLESL134E "Failed to move tape %s because this tape is in use. Wait until the jobs for this tape are complete, and then try the move again."
GLESL135E "Failed to move tape %s because this tape belongs to a storage pool. Remove the tape from the pool, and then try the move again."
GLESL136I "Reclamation terminated with tape %s. No more free space on other tapes in the pool to move data to. %d tapes reclaimed, %d tapes removed from the storage pool."
GLESL137E "Failed to check tape contents. Check the status of tape %s and take an appropriate action."
GLESL138E "Failed to format the tape %s, because it contains migrated file data. Execute reconcile or reclaim before formatting tape."
GLESL139E "Option %s is mutually exclusive with other options."
GLESL140E "Offline message is too short or too long, it must be 1-63 characters long."
GLESL141E "Tape %s is already in storage pool %s. Tapes to be imported must not be in any storage pool. Remove the tape from the storage pool, and then try the import again."
GLESL142E "Cannot move the critical tape %s from the drive to the home slot (%d)."
GLESL143I "Usage: "
+ " reconcile [ -t <tape_id_1 tape_id_2 ..> ] -p <poolName> -l <libraryName> [-P] [-u] [-w <wait_time>] [-g <gpfs_fs_1 gpfs_fs_2 ..>]"
+ ""
+ " Starts reconciliation of all or selected tapes or storage pools against all or selected GPFS file systems."
+ ""
+ " -t <tape_ids> Reconcile specified tapes."
+ " Can be combined with -p and -g."
+ ""
+ " -p <poolName> The name of the pool to reconcile. If -t isn't specified, reconcile all tapes in specified pool."
+ " Can be combined with -t and -g."
+ ""
+ " -l <libraryName> The name of the library that the pool is or"
+ " will be associated with. If only a single"
+ " library is configured for the system, this"
+ " option can be omitted."
+ ""
+ " -P Partial reconcile - if not all the requested tapes can be reserved for reconcile, reconcile the tapes that can be reserved."
+ ""
+ " -u Skip reconcile pre-check so that tapes get mounted for reconcile regardless of need."
+ ""
+ " -w <wait_time> Maximum time (in seconds) that the reconcile process can spend trying to reserve the requested tapes."
+ " Default value is 300 seconds."
+ ""
+ " -g <gpfs_fss> Reconcile tapes (defined by other options) against the specified GPFS file systems."
+ " Can be combined with any other reconcile option."
+ " If not specified, tapes are reconciled against all GPFS file systems."
+ ""
+ ""
GLESL144E "Failed to reconcile for export. Some of the tapes might have been moved out of their storage pool."
GLESL145E "The reclamation failed (rc:%d). Review logs for further information."
GLESL146I "An export for tape %s has already been started but not finished: skipping reconcile."
GLESL147E "Tape %s is already exported. Verify that the tape ID specified with the export command is correct."
GLESL148E "No valid tapes specified for export."
GLESL149E "Tape %s is not an LTFS EE tape."
GLESL150E "Import of tape %s failed. The tape includes currently migrated files. Please export first."
GLESL151E "Import of tape %s failed. The tape includes offline exported files. Please use the --offline option."
GLESL152E "The number of tapes specified with this command is more than the maximum of 1024."
GLESL153E "For each device type for all tapes within pool %s, it is necessary to have two drives that match these device types."
GLESL154E "For the reclamation of tape %s, the drives available do not match the required device type for the remaining tapes within pool %s."
GLESL155E "Error obtaining the list of tapes for pool %s."
GLESL156E "Error allocating memory in LTFS EE. LTFS EE is shutting down."
GLESL157E "Masking file %s exceeds the maximum allowed number of characters."
GLESL158E "Failed to add a tape (%s) that is already exported. Before this tape can be added, it must be imported."
GLESL159E "Not all migrations were successful."
GLESL160E "Unable to open file %s.(%d)"
GLESL161E "Invalid operation specified for localnode command: %s."
GLESL162E "Unable to get ltfs port information (%d)."
GLESL163I "Node status: %s."
GLESL164E "Unable to get ltfs node status (%d)."
GLESL165I "Force to sync successfully."
GLESL166E "Unable to sync for out-of-sync node (%d)."
##### DO NOT REMOVE!! this msg is parsed by log analyzer #####
GLESL167I "A list of files to be migrated has been sent to LTFS EE using scan id %u."
GLESL168E "Tape %s does not appear to have been used for LTFS EE before."
GLESL169I "Skipped moving tape %s because it is already in the target slot."
GLESL170E "Failed to move tape %s because the tape is assigned to a pool and is not offline."
GLESL171E "This function is not available because the license is expired."
GLESL172I "Synchronizing LTFS EE tapes information."
GLESL173E "No file name is specified. Specify at least one file for this command."
GLESL174E "Unable to write lock for export."
GLESL175E "GPFS scan result file (%s) is empty or has an invalid format."
GLESL176I "Destroying DMAPI session %s."
GLESL177E "Unable to get DMAPI session information."
GLESL178E "Unable to get DMAPI session information for session %s."
GLESL179I "Number of DMAPI sessions adjusted: %u -> %u."
GLESL180I "%u DMAPI sessions detected."
GLESL181I "DMAPI session %llu has been created by process %lu."
GLESL182I "DMAPI session %llu seems to be an orphan and will be destroyed."
GLESL183I "Orphan DMAPI session file %s found. This file will be deleted."
GLESL184I "DMAPI session found %s."
GLESL185I "Determining the amount of space to be preserved for tape %s. This processing can take some time."
GLESL186E "Tape %s is not exported yet or exported without the offline option. Verify that the tape ID specified with the export command is correct."
GLESL187E "Cannot import tape %s because it is exported from another GPFS cluster. Verify that the tape ID specified with the export command is correct."
GLESL188E "Export of tape %s is requested but not completed yet. Wait for the tape export operation to complete and then try the import operation again."
GLESL189I "Tape %s has no migrated file."
GLESL190E "Unable to open directory %s (errno: %d)."
GLESL191I "File %s has no GPFS path (%d)."
GLESL192E "Unable to get the DMAPI handle for file %s (errno: %d)."
GLESL193I "File %s has neither a GPFS stub nor a saved object."
GLESL194I "Files (on tape %s) that have neither a migrated stub nor a saved object in GPFS:"
GLESL195E "Unable to get file information for %s (errno: %d)."
GLESL196E "Unable to get symbolic link information for %s (errno: %d)."
GLESL197I "Bulk recalling files on tape %s."
GLESL198I "Found that file %s has a GPFS stub."
GLESL199E "Unable to call recall command for file %s (errno: %d)."
GLESL200E "Unable to call recall file %s (%s in tape) (err: %s)."
GLESL201I "File %s is recalled."
GLESL202I "File system object %s is already resident."
GLESL203E "Failed to call command (%s) (%d)."
GLESL204E "Unable to copy file %s."
GLESL205E "Unable to remove tape %s. (%d)"
GLESL206E "Unable to get the node id for tape %s."
GLESL207E "Tape %s is not mounted."
GLESL208E "Unable to get the status for tape %s."
GLESL209E "Unable to get the IP address for the node that is mounting tape %s."
GLESL210E "Tape %s is not in the "Critical" state or in the "Write Fenced" state."
GLESL211E "Unable to get the local node id."
GLESL212E "Tape %s is mounted by node (id:%d, IP address:%s). Run ltfsee recover on the node."
GLESL213I "Copied file %s from tape %s."
GLESL214I "Recovery completed. Recovered %llu files. Failed to recover %llu files."
GLESL215I "There are %llu files on tape %s that are migrated or saved from the current GPFS file system."
GLESL216I "There are %llu files on tape %s that are not migrated nor saved from the current GPFS file system."
GLESL217I "Removed the tape %s from the pool %s."
# GLESL218I "Replica copy time is set to %d."
# GLESL219I "Current replica copy time is %d."
GLESL220E "Directory name is too long."
GLESL221E "Specify a decimal value between 0 and %d."
GLESL222W "Unable to remove internal extended attribute (ibm.ltfsee.gpfs.path) from %s. (%d)"
GLESL223E "Unexpected destination type was specified."
GLESL224W "Unable to remove internal attribute (LTFSEE DMAPI) from %s."
GLESL225I "LTFS EE debug level has been set to %d."
GLESL226E "Failed to invoke dsmmigfs command."
GLESL227I "Updated the settings for file system %s."
GLESL228E "-s option of 'ltfsee fsopt' command can be specified only once per command."
GLESL229E "-r option of 'ltfsee fsopt' command can be specified only once per command."
GLESL230E "-p option of 'ltfsee fsopt' command can be specified only once per command."
GLESL231E "-g option of 'ltfsee fsopt' command can be specified only once per command."
GLESL232I "Usage: "
+ " ltfsee fsopt { query [-G <gpfs_filesystem(s)>] } |"
+ " { update [ -S <stub_size> ] [ -R <read_starts_recall> ] [-P <preview_size>] [-F] -G <gpfs_filesystem(s)> }"
+ ""
+ "Queries or updates file system level settings for stub size, 'read starts recall', and preview size."
+ ""
+ "-S <stub_size> Defines the size of the initial file part that is kept resident on disk for migrated files."
+ "Possible values: 0 - 1073741824. A value must be a multiple of the file system block size and larger than or equal to the preview size."
+ ""
+ "-R <read_starts_recall> When this feature is set to yes, reading from the resident file part starts a background recall of the file."
+ "During the background recall data from the resident part can be read. The rest of the file can be read upon recall."
+ "Possible values: yes|no|undef"
+ ""
+ "-P <preview_size> Defines the initial file part size for which reads from the resident file part do not trigger a recall."
+ "Possible values: 0 - 1073741824. A value must be smaller than or equal to the stub size."
+ ""
+ "-F Forces the settings to update even if a stub size change is requested while there are ongoing migrations."
+ "Use of this option might result in some ongoing migrations that use the old stub size while others use the new stub size."
+ ""
+ "-G <gpfs_filesystem(s)> One or more GPFS file systems for which to query or update settings. File system names must be separated by a space."
+ ""
+ "Examples:"
+ " ltfsee fsopt update -S 10485760 -P 81920 -R yes -G /ibm/gpfs"
+ " ltfsee fsopt query -G /ibm/gpfs"
+ ""
+ ""
GLESL233E "ltfsee fsopt: wrong command syntax."
GLESL234I "ANSxxxxx messages are TSM warning and error handling messages."
+ " Refer to the TSM documentation if a corrective action is needed and is not clear from the displayed GLES and ANS messages."
GLESL235E "ltfsee fsopt update: at least one of the -s, -r, or -p options must be specified."
GLESL236E "Failed to execute command for determining DMAPI management node."
GLESL237E "Could not determine DMAPI management node."
GLESL238E "Tape %s is exported with the offline option. Verify that the tape ID specified with the export command is correct."
GLESL239E "Unable to get size for file %s. Verify whether this file exists within a GPFS file system."
GLESL240E "File %s has a future timestamp. The file is not added."
GLESL242E "Scan ID number mismatch for save scan file %s."
GLESL243E "Error sending file system object save file list to MMM."
GLESL244E "Unable to open file system object save file list %s for writing."
GLESL245E "Error writing filename %s to file system object pool list file %s."
# OLD_SYNTAX: GLESL246I "Usage: %s <option>"
# OLD_SYNTAX: + "Available options are:"
# OLD_SYNTAX: + " list <gpfs_scan_result_file> <target_storage_pool_name> [<redundant_copy_pool_1> [<redundant_copy_pool_2>]]"
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " Where the gpfs_scan_result_file file includes the list of file system objects"
# OLD_SYNTAX: + " (empty files, empty directories, symbolic links) that are to be saved."
# OLD_SYNTAX: + " Each line of this file must end with " -- <filename>". All file system"
# OLD_SYNTAX: + " objects will be saved to the specified target storage pool. Optionally,"
# OLD_SYNTAX: + " redundant copies can be created in up to two additional storage pools."
# OLD_SYNTAX: + " This command does not complete until all file system objects have been saved."
# OLD_SYNTAX: + " Examples:"
# OLD_SYNTAX: + " %s list ./gpfsscan.txt poolA"
# OLD_SYNTAX: + " %s list ./gpfsscan.txt poolA@library1 poolB@library2"
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + ""
GLESL246I "Usage: %s <option>"
+ "Available options are:"
+ " list -s <gpfs_scan_result_file> -p <target_storage_pool_name> [<redundant_copy_pool_1> [<redundant_copy_pool_2>]]"
+ ""
+ " Where the gpfs_scan_result_file file includes the list of file system objects"
+ " (empty files, empty directories, symbolic links) that are to be saved."
+ " Each line of this file must end with ' -- <filename>'. All file system"
+ " objects will be saved to the specified target storage pool. Optionally,"
+ " redundant copies can be created in up to two additional storage pools."
+ " This command does not complete until all file system objects have been saved."
+ " Examples:"
+ " %s list -s ./gpfsscan.txt -p poolA"
+ " %s list -s ./gpfsscan.txt -p poolA@library1 poolB@library2"
+ ""
+ ""
GLESL247E "Unable to get DMAPI handle for file %s."
GLESL248E "Tape %s is in storage pool %s and failed to be removed from the pool."
GLESL249I "Rebuild of a file system into path %s is started by importing tapes."
GLESL250I "Rebuild of a file system into path %s is finished."
GLESL251E "Could not determine DMAPI management node for %s."
GLESL252E "File system settings are not updated because stub size change is requested while there are ongoing migrations (for %d files)."
GLESL253I "To force updating file system settings when stub size change is requested while there are ongoing migrations, use the -f command option."
GLESL254I "Non-empty regular file %s is in resident state. Repair is not needed."
GLESL255E "Non-empty regular file %s is in migrated state. The file cannot be repaired to resident state."
GLESL256E "Non-empty regular file %s was in premigrated state. The file could not be repaired to resident state. (HSM error)"
GLESL257I "Non-empty regular file %s was in premigrated state. The file is repaired to resident state."
GLESL258E "An error was detected while repairing non-empty regular file %s. (%d)"
GLESL259I "File system object %s is in resident state. Repair is not needed."
GLESL260E "File system object %s was in saved state. The file could not be repaired to resident state correctly. (DMAPI error)"
GLESL261I "File system object %s was in saved state. The file is repaired to resident state."
GLESL262E "An error was detected while repairing file system object %s. (%d)"
##### DO NOT REMOVE!! this msg is parsed by log analyzer #####
GLESL263I "Recall result: %ld succeeded, %ld failed, %ld duplicate, %ld not migrated, %ld not found."
GLESL264E "Failed to retrieve recall request."
GLESL265E "Not all recalls have been successful."
GLESL266E "Unable to create temporary file for recalls."
GLESL267E "No file names have been provided to recall."
GLESL268I "%d file name(s) have been provided to recall."
GLESL269E "Unable to add any recall job to the LTFS EE job queue."
GLESL270E "Unable to get the absolute path name for file %s, error code: %d."
GLESL271E "Invalid recall list: '%s' was provided."
GLESL272E "Unable to open input file %s for mass recalls."
GLESL273E "Directory for session pool lists for save command does not exist and could not be created: %s."
GLESL276I "Usage: %s <option>"
+ "Available options are:"
+ " recall <gpfs_scan_result_file>"
+ ""
+ " Where the gpfs_scan_result_file file includes the list of files that"
+ " are to be recalled. Each line of this file must end with ' -- <file name>'."
+ ""
+ ""
+ " recall <recall_list_file>"
+ ""
+ " Where the recall_list_file file includes the list of files to be"
+ " recalled. Each line contains a file name with an absolute path or"
+ " a relative path based on the working directory. It is also possible"
+ " to pass the output of another command to this command."
+ ""
+ " Examples:"
+ " ltfsee recall ./recall_list.txt"
+ " find . -type f |ltfsee recall"
GLESL277I "The ltfsee recall command was called without specifying an input file; waiting for standard input."
+ " If necessary, press ^D to exit."
GLESL278E "ltfsee fsopt update: at least one file system must be specified."
GLESL279I "Save result: %ld succeeded, %ld failed, %ld duplicate, %ld duplicate wrong pool, %ld not found, %ld skipped."
GLESL280E "Not all saves have been successful."
GLESL281E "The command 'ltfsee_sub' was invoked without effective root privileges."
GLESL282E "Unable to get the absolute path name for %s. (%d)"
GLESL283E "Failed to get GPFS cluster id."
GLESL284E "Unable to get internal lock. (%d)"
GLESL285E "Import or rebuild job failed. A job of a similar type is in progress on the node. Wait until the job that is in progress completes, and try the job that failed again."
GLESL286E "Unable to set internal lock. (%d)"
GLESL287E "Unable to create internal directory. (%d)"
GLESL288E "Unable to open lock file %s. (%d)"
GLESL289I "Effecting the new settings for file system %s (preparing)."
GLESL290I "Effecting the new settings for file system %s (executing)."
GLESL291E "Failed to determine or create a temporary directory."
GLESL292E "Failed to determine file system id."
GLESL293E "Failed to determine ltfsee metadata directory."
GLESL294E "Failed to effectuate new settings for file system %s."
GLESL295E "Unable to recall file %s (HSM error)."
GLESL296I "File system object %s is already resident."
GLESL297E "Unable to recover file %s (DMAPI error)."
GLESL298I "File system object %s is recovered."
GLESL299W "No job added for scan %u so far. %d retries left."
GLESL300I "Recall deadline timeout is set to %d."
GLESL301I "Recall deadline timeout is %d."
GLESL302E "Unable to open directory %s, it will not be saved."
GLESL303I "Directory %s is not empty, it will be skipped."
GLESL304E "Regular file %s is not empty, it will not be saved."
GLESL305E "%s is not a directory, regular file or a symbolic link, it will not be saved."
GLESL306E "%s has already been saved, it will not be saved again."
GLESL307E "%s will not be saved; a storage pool name with more than 32 characters was specified (%s)."
GLESL308E "Error disabling stdout: %d."
GLESL309E "Error enabling stdout: %d."
GLESL310E "Tape %s failed to move because it is mounted on a drive that is currently not accessible."
GLESL311I "Tape %s in pool %s has been exported, and removed from the reclaim target list."
GLESL312E "Failed to check tape status in storage pool %s."
GLESL313I "Tape %s is not in the 'Valid LTFS' state. Removed from storage pool %s."
GLESL314I "Effecting the new settings for file system %s (done)."
GLESL315E "Failed to add drive %s. Consult the log files /var/log/ltfs.log on the LTFS EE nodes."
GLESL316E "File %s does not exist."
GLESL317E "File %s is not a regular file."
GLESL318E "Failed to get information for file %s."
GLESL319E "File %s is already migrated and will be skipped."
GLESL320I "File %s is strayed. Trying to repair."
GLESL321E "File %s is strayed. Unable to repair automatically."
GLESL322I "File %s is repaired."
GLESL323E "Unable to repair file %s (DMAPI error)."
GLESL324E "File %s is not resident, it will not be saved."
GLESL325E "File %s is not resident or premigrated, it will not be migrated."
GLESL326E "The storage pool list for migrating file %s is too long. The rest of the files in the migration list will be skipped."
GLESL327E "Failed to determine DMAPI management node. Look for previous messages."
GLESL328E "Masking file %s would exceed the maximum number of characters allowed."
GLESL329E "Reclamation failed because at least one additional tape is required in storage pool %s to move specified tape contents. Add a tape to this storage pool, then try the reclamation again."
GLESL330E "An invalid drive role (%s) is specified. The drive role must be a decimal number smaller than 8."
GLESL331E "Failed to set a drive role."
GLESL332I "Reclamation has been partially performed. Run the reclamation again for tape %s."
GLESL333E "The '-L' option requires the '-t' option with one tape specified."
GLESL334E "Cannot change offline state of tape %s."
GLESL335I "Updated offline state of tape %s %s."
GLESL336E "Tape %s is already offline exported."
GLESL337E "Tape %s is not found in the offline tape list."
GLESL338E "File %s is offline exported, it will not be saved."
GLESL339E "Unable to perform quick reconcile."
GLESL340W "No drives can handle a migration job. It is recommended that you set the migration drive role (m) on at least one drive."
# not used GLESL341W "There is no drives which can handle copy job. It is recommended to set copy drive role (c) at least to one drive."
GLESL342W "No drives can handle a recall job. It is recommended that you set the recall drive role (r) on at least one drive."
GLESL343W "No drives can handle a generic job. It is recommended that you set the generic drive role (g) on at least one drive."
GLESL344E "Failed to get requested information from GPFS temporary file %s (errno: %d). Check the GPFS file system status."
GLESL345I "Refresh pool completed."
GLESL346E "No tapes are found in the unassigned list. %s is already assigned to another storage pool, is in an IE slot, or does not exist in the library."
GLESL347E "The storage pool %s is not found for adding %s."
GLESL348E "Tape %s cannot be formatted by the specified format type."
GLESL349E "Pool type of %s is not matched to the tape type of %s."
GLESL350E "Tape %s cannot be formatted by the specified format type."
GLESL351E "Job (%s) timed out."
GLESL352E "Failed to add tapes to pool %s for rebuild."
GLESL353I "Tape status of %s is not unknown. Tape validation is not required."
GLESL354I "Tape %s is unmounted because it is inserted into the drive."
GLESL355E "Tape %s is in a drive or IE slot. Move the tape to the home slot before it is removed."
GLESL356E "Tape %s is offline. It might have migrated files or saved files."
GLESL357E "Tape %s has migrated files or saved files. It has not been removed from the pool."
GLESL358E "Error on processing tape %s (%d)."
GLESL359I "Removed tape %s from pool %s successfully."
GLESL360I "Added tape %s to pool %s successfully."
GLESL361W "Tape %s was removed from pool %s forcefully. Cannot recall files on tape %s. Add the tape to the same pool again if you need to recall files from the tape without formatting."
GLESL362E "Tape %s is being processed by another job."
GLESL363E "The pool name must be provided to run the export."
GLESL364E "Parameter is not correct. Check the command syntax."
GLESL365E "Unable to export because tape %s is not found in pool %s or is in an invalid state."
GLESL366E "Error removing tape %s from pool %s (rc=%d)."
GLESL367E "Export has been done but failed to remove tape (%s) from pool (%s)."
GLESL368E "Invalid characters in offline message '%s' (only alphanumeric characters, hyphens, and spaces are allowed)."
GLESL369E "Invalid tape ID: %s. The tape is not found in the unassigned list."
GLESL370E "A pool must not be specified for moving unassigned tapes (%s)."
GLESL371E "Cannot remove the critical tape %s from the pool %s (%d)."
GLESL372I "Validating tape %s in pool %s."
GLESL373I "Moving tape %s."
GLESL374E "Unable to offline import because pool %s is empty or does not exist."
GLESL375E "Unable to offline import because tape %s is not found in pool %s."
GLESL376E "Tape %s in pool %s does not exist in the library. Use the --force_remove option to forcibly remove the tape from the pool."
GLESL377E "Failed to sync resource status. One or more nodes are down."
GLESL378E "Tape %s in pool %s is already assigned to a pool. The '--offline' option is needed if it is an offline tape."
GLESL379E "Tape %s is not in an offline state in pool %s. Cannot perform offline import."
GLESL380E "Tape %s does not exist in the pool %s."
GLESL381E "Cannot add the offline exported tape %s."
GLESL382E "Failed to update the node configuration."
GLESL383I "Node configuration is successfully updated."
 
GLESL400E "File is not found (%s)."
GLESL401E "File is not valid, or too busy for other filesystem operations (%s)."
GLESL402E "Pool name (%s) is too long."
GLESL403E "Storage pool %s has been provided twice."
GLESL404E "Invalid parameter (%s). Pool name and library name should be specified as POOLNAME@LIBRARYNAME."
GLESL405E "Library name (%s) is too long."
GLESL406E "Illegal characters in nodegroup '%s'."
GLESL407E "Nodegroup '%s' is too long (max %d characters)."
GLESL408E "Illegal characters in library name '%s'; only alphanumeric characters and underscores are allowed."
GLESL409E "Invalid value for WORM specified: '%s'."
GLESL410E "Invalid device type specified: '%s'."
GLESL411E "Multiple libraries configured; a library name must be specified for the command."
GLESL412E "Multiple libraries with name '%s' found, verify configuration."
GLESL413E "No library found with name '%s'."
GLESL414E "No libraries configured."
GLESL415E "Error accessing database file '%s' for open/read/write/close."
GLESL416E "General error accessing database, rv: %d, err: %d, msg: '%s'."
GLESL417E "Node ID not specified."
GLESL418E "Internal database error in file '%s'. Restore database from backup or recreate using ltfsee_config."
GLESL419E "Unable to determine nodegroup for drive '%s'."
GLESL420E "Unable to get the nodegroup ID for tape %s."
GLESL421E "Failed to move tape %s. The tape does not exist, or the correct pool must be specified if the tape is a member of a pool."
GLESL422E "Unable to determine nodegroup for pool '%s'."
GLESL423E "Error looking up library with name '%s'."
GLESL424E "Invalid argument(s)."
GLESL425I "Usage: %s <options>"
GLESL426I ""
+ " pool show -p <poolname> [-l <libraryname>] [-a <attribute>]"
+ ""
+ " This option is used to show the configuration attributes of a pool."
+ ""
+ " -p <poolname> The name of the pool whose properties are"
+ " shown."
+ " -l <libraryname> The name of the library with which the pool"
+ " is associated. If only a single library is"
+ " configured in the system, this option can"
+ " be omitted."
+ " -a <attribute> The attribute to show. If this option is"
+ " omitted, all attributes of the pool are"
+ " shown. Valid pool attributes are:"
+ " poolname Human-readable name"
+ " poolid Unique identifier"
+ " devtype Device type"
+ " format Preferred tape format"
+ " worm Write-once, read-many"
+ " nodegroup Associated nodegroup"
+ " fillpolicy Tape fill policy"
+ " owner Owner"
+ " mountlimit Maximum number of drives that can be used for"
+ " migration (0: unlimited)"
GLESL427I ""
+ " node show -n <nodeid> [-l <libraryname>] [-a attribute]"
+ ""
+ " This option is used to show the configuration attributes of a node."
+ ""
+ " -n <nodeid> The ID of the node whose properties are"
+ " shown."
+ " -l <libraryname> The name of the library with which the node"
+ " is associated. If only a single library is"
+ " configured in the system, this option can"
+ " be omitted."
+ " -a <attribute> The attribute to show. If this option is"
+ " omitted, all attributes of the node are"
+ " shown."
GLESL428I ""
+ " pool <create|delete> -p <poolname> [-g <nodegroup>] -l <libraryname>"
+ " [--worm <worm>]"
+ ""
+ " This option is used to create or delete the specified Spectrum "
+ " Archive EE storage pool."
+ ""
+ " -p <poolname> The name of the pool to create or delete."
+ " -g <nodegroup> The nodegroup that the pool is/will be"
+ " associated with. For creating new pools, this"
+ " option can be omitted if only a single"
+ " nodegroup is configured for the library."
+ " When deleting a pool, the nodegroup can"
+ " always be omitted."
+ " -l <libraryname> The name of the library that the pool is, or"
+ " will be, associated with. If only a single"
+ " library is configured for the system, this"
+ " option may be omitted."
+ " --worm <worm> Specifies if the new pool will be a WORM"
+ " (write-once, read-many) pool, or not. Valid"
+ " options are:"
+ " physical Physical WORM"
+ " no No WORM (default)"
+ ""
+ " Examples:"
+ " ltfsee pool create -p mypool --worm no"
+ " ltfsee pool create -p mypool -g mynodegroup -l mylibrary"
+ " ltfsee pool delete -p mypool -l mylibrary"
GLESL429I ""
+ " pool <add|remove> -p <poolname> -t <tape_id_1> [ ... <tape_id_N>]"
+ " [-l <libraryname>] [-f | -F | -e | -c | -d | -r | -E] [-T <format_type>]"
+ ""
+ " This option is used to add or remove one or more tapes from the"
+ " specified storage pool. When you add a tape to a pool, you can run"
+ " formatting or repairing options on the tape before it is added to"
+ " the pool."
+ ""
+ " -p <poolname> The name of the tape pool to which the tapes"
+ " are to be added or removed."
+ " -t <tapeid> The ID of the tape to add or remove from "
+ " the pool. Multiple tapes can be added or "
+ " removed with a single command by specifying"
+ " the -t option multiple times."
+ " -l <libraryname> The name of the library that the pool is"
+ " associated with. If only a single library is"
+ " configured for the system, this option can"
+ " be omitted."
+ " -f, --format Format the specified tape before adding it"
+ " to the pool."
+ " -F, --force_format Force format of the tape. Required to format a"
+ " tape that is already formatted."
+ " -e, --exception_format Exceptionally format the tape. Required to"
+ " format a tape that contains migrated data."
+ " This option must be used for re-adding a"
+ " tape back to a pool when the tape was previously"
+ " removed by the ltfsee pool remove command"
+ " with the -E option."
+ " -c, --check Check and repair the specified tape before"
+ " adding it to a pool."
+ " -d, --deep_recovery Specify this option instead of -c to repair"
+ " a tape that is missing an EOD mark."
+ " -T <format_type> Format the tape using the specified format."
+ " Valid values are: J4, J5. The format type"
+ " must match the format specified for the"
+ " pool (if any)."
+ " -r, --force_remove Try to remove the tape even if there are"
+ " active files on the tape."
+ " -E, --empty_remove Remove the tape even if the tape has not been"
+ " reconciled and still contains stale file data"
+ " that has already been deleted on GPFS."
+ " Note that this option does not carry out"
+ " reconciliation. The removed tape is to be"
+ " formatted with the -e option when adding it back"
+ " to a pool."
+ ""
+ " Examples:"
+ " ltfsee pool add -p mypool -t 327AAQL5 -t 277BAXL5 -t 329AAQL5"
+ " ltfsee pool add -p mypool -t 327AAQL5 -f"
+ " ltfsee pool add -p mypool -t 327AAQL5 -c"
GLESL430I ""
+ " pool set -p <poolname> [-l <libraryname>] -a <attribute> -v <value>"
+ ""
+ " This option is used to set the configuration attributes of a pool."
+ ""
+ " -p <poolname> The name of the pool on which to set the "
+ " properties."
+ " -l <libraryname> The name of the library with which the pool"
+ " is associated. If only a single library is"
+ " configured in the system, this option can"
+ " be omitted."
+ " -a <attribute> The attribute to set."
+ " -v <value> The value that you can assign to the attribute."
+ " The list of attributes that can be set, and"
+ " their valid values are:"
+ " poolname 16 characters, ASCII alpha-"
+ " numerics and underscores"
+ " format ultrium5c, ultrium6c,"
+ " ultrium7c, 3592-4C, 3592-5C"
+ " mountlimit Integer; 0 for no limit"
GLESL431E "Pool name not specified."
GLESL432E "No tapes specified."
GLESL433E "Multiple nodegroups configured; a nodegroup must be specified for the command (-g)."
GLESL434E "Error looking up default nodegroup."
GLESL435E "Nodegroup does not exist."
GLESL436E "Only one job type can be specified."
GLESL437E "Illegal characters in attribute name '%s'."
GLESL438E "Attribute name specified is too long (max %d characters)."
GLESL439E "Illegal characters in attribute value '%s'."
GLESL440E "Attribute value specified is too long (max %d characters)."
GLESL441E "Invalid node ID specified."
GLESL442I ""
+ " start [-l <library_name>]"
+ ""
+ "This option starts the Spectrum Archive EE system, or the part of it related to one library."
+ ""
+ "The -l option allows you to specify the name of the library for which Spectrum Archive"
+ " EE will be started."
+ "If the -l option is omitted, Spectrum Archive EE will be started for all the configured libraries."
+ ""
+ "Examples:"
+ "ltfsee start -l LIB1"
+ "ltfsee start"
+ ""
GLESL443I ""
+ " stop [-l <library_name>]"
+ ""
+ "This option stops the Spectrum Archive EE system, or the part of it related to one library."
+ ""
+ "The -l option allows you to specify the name of the library for which Spectrum Archive EE"
+ " will be stopped."
+ "If the -l option is omitted, Spectrum Archive EE will be stopped for all the configured libraries."
+ ""
+ "Examples:"
+ "ltfsee stop -l LIB1"
+ "ltfsee stop"
GLESL444E "Library name is too long, length: %d, max. allowed: %d."
GLESL445I ""
+ " Usage:"
+ " ltfsee info <resource> [-l <library_name>]"
+ " ltfsee info files -f <filepath_regular_expression> [-q]"
+ " ltfsee info jobs [-l <library_name>]"
+ " ltfsee info scans [-l <library_name>]"
+ ""
+ " The 'info' option of the ltfsee command shows information about Spectrum"
+ " Archive EE resources, files, jobs, or scans (groups of jobs) of the whole"
+ " Spectrum Archive EE system or the part of it related to one tape library."
+ ""
+ " <resource> must be one of:"
+ " libraries (without the -l option)"
+ " tapes"
+ " pools"
+ " nodes"
+ " nodegroups"
+ " drives"
+ ""
+ " Additional options:"
+ " -l Show the information for the specified library. If -l is omitted,"
+ " show the information for the whole system."
+ " -q Do not check to see if the file exists on the tape."
+ ""
+ " Examples:"
+ " ltfsee info libraries"
+ " ltfsee info pools -l LIB1"
+ " ltfsee info files /gpfs/dir1/*.jpg"
GLESL446E "Invalid WORM attribute is specified."
GLESL447E "WORM is only supported on the 3592 pool."
GLESL448E "Node group %s does not exist."
GLESL449E "Invalid attribute name specified: '%s'."
GLESL450E "Failed to import tape (%s)."
# OLD_SYNTAX: GLESL451I ""
# OLD_SYNTAX: + " Old syntax (supported only in single library LTFS EE cluster):"
# OLD_SYNTAX: + " ltfsee reclaim <stg_pool> [-n <number> | -t <tape_id_1 tape_id_2 ... tape_id_N> [-l [file_number_th]]] [-r [remaining_capacity_th]] [-g [space_gain_th]] [-q]"
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " Starts reclamation of one or more tapes within the given storage pool."
# OLD_SYNTAX: + " Optionally it can be defined which tapes to be reclaimed and removed from the storage pool or"
# OLD_SYNTAX: + " how many tapes are to be reclaimed and removed from the storage pool. Furthermore, the set of tapes to be"
# OLD_SYNTAX: + " reclaimed can be further refined by setting one or both remaining-capacity and space gain thresholds."
# OLD_SYNTAX: + " Besides, the number of files to be reclaimed can be defined. It can be used if only one tape is defeined to"
# OLD_SYNTAX: + " be reclaimed."
# OLD_SYNTAX: + " The storage pool to be reclaimed must be specified as the first parameter."
# OLD_SYNTAX: + " If other parameters are not specified all the tapes from the storage pool are reclaimed and"
# OLD_SYNTAX: + " remain in the storage pool."
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " -n <number> Reclaim and remove <number> tapes from the storage pool. Up to <number> of tapes"
# OLD_SYNTAX: + " from the storage pool are reclaimed and removed from the storage pool. If less then"
# OLD_SYNTAX: + " <number> of tapes are successfully reclaimed, only the reclaimed tapes are removed."
# OLD_SYNTAX: + " The -n and -t options are mutually exclusive but they can be combined with -r."
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " -t <tape_ids> Reclaim and remove listed tapes from the storage pool. The listed tapes"
# OLD_SYNTAX: + " from the storage pool are reclaimed and removed from the storage pool. If not all"
# OLD_SYNTAX: + " listed tapes are successfully reclaimed, only the reclaimed tapes are removed."
# OLD_SYNTAX: + " The -t and -n options are mutually exclusive but they can be combined with -r."
# OLD_SYNTAX: + " All the listed tapes must be members of the storage pool."
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " -l [threshold] Defines how many files to be reclaimed in this command execution. It works with the -t"
# OLD_SYNTAX: + " option with only one tape_id. When this option is used, The tape may not be removed and"
# OLD_SYNTAX: + " capacity of storage pool may not be gained in that case, while unreferenced capacity of"
# OLD_SYNTAX: + " the tape will be increased. The tape needs to be reclaimed again to complete the"
# OLD_SYNTAX: + " reclaim operation. This option may be used when a user needs to limit the time of the"
# OLD_SYNTAX: + " command execution."
# OLD_SYNTAX: + " If this option is specified without a threshold value, the default value is 100,000."
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " -r [threshold] Defines the remaining capacity threshold to qualify for reclamation."
# OLD_SYNTAX: + " The remaining tape capacity threshold defines a percentage of the total tape capacity."
# OLD_SYNTAX: + " If a tape has more free space than this threshold it will not be reclaimed."
# OLD_SYNTAX: + " If this option is specified without a threshold value, the default value 10 (10%%) is used."
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " -g [threshold] Defines the expected gain threshold to qualify for reclamation."
# OLD_SYNTAX: + " The expected gain threshold defines a percentage of the total tape capacity."
# OLD_SYNTAX: + " If a tape has less expected gain space than this threshold it will not be reclaimed."
# OLD_SYNTAX: + " If this option is specified without a threshold value, the default value 75 (75%%) is used."
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " -q Quick reconcile is performed before reclaim. All DMAPI enabled GPFS filesystem have to"
# OLD_SYNTAX: + " be mounted to use this option. The -q option solves simple inconsistency between GPFS"
# OLD_SYNTAX: + " and LTFS. When there are files which quick reconcile cannot solve, reclaim operation will"
# OLD_SYNTAX: + " leave those files on source tape. In that case, reconcile command needs to run."
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " Examples:"
# OLD_SYNTAX: + " ltfsee reclaim pool1 -r -g"
# OLD_SYNTAX: + " ltfsee reclaim pool1 -n 2"
# OLD_SYNTAX: + " ltfsee reclaim pool1 -t D00279L5 1FA682L5"
# OLD_SYNTAX: + " ltfsee reclaim pool1 -t TYO164JC -l 10000"
# OLD_SYNTAX: + ""
GLESL452I ""
+ " ltfsee reclaim -p <poolname> [-l <libraryname>] [-n <number> | -t <tape_id_1 tape_id_2 ... tape_id_N>"
+ " [-L[file_number_th]]] [-R[remaining_capacity_th]] [-G[space_gain_th]] [-q]"
+ ""
+ " Starts reclamation of one or more tapes within the given storage pool."
+ " Optionally, you can define which tapes to reclaim and remove from the storage pool, or"
+ " how many tapes to reclaim and remove from the storage pool. The set of tapes to be"
+ " reclaimed can be further refined by setting one or both remaining-capacity and space gain thresholds."
+ " Also, the number of files to be reclaimed can be defined, but only when one tape is defined to"
+ " be reclaimed."
+ " The storage pool to be reclaimed must be specified with the -p option."
+ " If other parameters are not specified all the tapes from the storage pool are reclaimed and"
+ " the tapes remain in the storage pool."
+ ""
+ " -p <poolname> The name of the tape storage pool"
+ ""
+ " -l <libraryname> The name of the tape library to which the tape pool and tapes to be reclaimed belong."
+ " This option may be omitted in single-library systems."
+ ""
+ " -n <number> Reclaim and remove <number> tapes from the storage pool. Up to <number> tapes"
+ " from the storage pool are reclaimed and removed from the storage pool. If fewer than"
+ " <number> tapes are successfully reclaimed, only the reclaimed tapes are removed."
+ " The -n and -t options are mutually exclusive but they can be combined with -R."
+ ""
+ " -t <tape_ids> Reclaim and remove listed tapes from the storage pool. The listed tapes"
+ " from the storage pool are reclaimed and removed from the storage pool. If not all"
+ " listed tapes are successfully reclaimed, only the reclaimed tapes are removed."
+ " The -t and -n options are mutually exclusive but they can be combined with -R."
+ " All the listed tapes must be members of the storage pool."
+ ""
+ " -L[threshold] Defines how many files to reclaim with this command. It works with the -t"
+ " option when only one tape_id is specified. When this option is used,"
+ " the tape might not be removed from the storage pool and the storage pool might not"
+ " gain capacity, although the unreferenced capacity of the tape increases."
+ " The tape needs to be reclaimed again to complete the reclaim operation."
+ " This option can be used to limit the amount of processing time that the reclaim"
+ " operation takes."
+ " If this option is specified without a threshold value, the default value is 100,000."
+ ""
+ " -R[threshold] Defines the remaining capacity threshold to qualify for reclamation."
+ " The remaining tape capacity threshold defines a percentage of the total tape capacity."
+ " If a tape has more free space than this threshold it will not be reclaimed."
+ " If this option is specified without a threshold value, the default value 10 (10%%) is used."
+ ""
+ " -G[threshold] Defines the expected gain threshold to qualify for reclamation."
+ " The expected gain threshold defines a percentage of the total tape capacity."
+ " If a tape has less expected gain space than this threshold it will not be reclaimed."
+ " If this option is specified without a threshold value, the default value 75 (75%%) is used."
+ ""
+ " -q Quick reconcile is performed before reclaim. All DMAPI-enabled GPFS filesystems must"
+ " be mounted to use this option. The -q option resolves inconsistencies between GPFS and"
+ " Spectrum Archive EE caused by removed files. Other inconsistencies remain as is."
+ " To solve inconsistencies that quick reconcile does not handle, the reconcile command"
+ " needs to be run on those files."
+ ""
+ " Examples:"
+ " ltfsee reclaim -p pool1 -l lib1 -R -G60"
+ " ltfsee reclaim -p pool1 -l lib1 -n 2"
+ " ltfsee reclaim -p pool1 -l lib1 -t D00279L5 1FA682L5"
+ " ltfsee reclaim -p pool1 -l lib1 -t TYO164JC -L10000"
+ ""
+ "   Note: For -L, -R, and -G, when the argument value is provided, it must not be"
+ "         separated from the option by a space. For example, see -G60 and -L10000 in the examples above."
+ ""
GLESL453E "Value of -n argument must be a positive integer."
GLESL454E "Value of -R argument must be between 0 and 100."
GLESL455E "Value of -G argument must be between 0 and 100."
GLESL456E "Value of -L argument must be a positive integer."
# OLD_SYNTAX: GLESL457I ""
# OLD_SYNTAX: + " Old syntax (supported only in single library Spectrum Archive EE cluster):"
# OLD_SYNTAX: + " ltfsee export {-t <tape_id_1 tape_id_2 ... tape_id_N> | -s <poolName> } [-o <message>]"
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " (NOTE: -t option in old syntax is not supported on this version. Use new syntax to export specific tapes.)"
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " Removes one or more tapes from the Spectrum Archive EE system by removing them from the GPFS namespace."
# OLD_SYNTAX: + " The tapes will be removed from their storage pools such that they are no longer a target for file"
# OLD_SYNTAX: + " migrations, then they get reconciled, and finally inodes referencing files on these tapes get"
# OLD_SYNTAX: + " removed from GPFS. If the --offline option is specified, the inodes referencing the files on"
# OLD_SYNTAX: + " the tapes will not be removed but just set to an offline status. This means that the files"
# OLD_SYNTAX: + " remain visible in the GPFS namespace but are no longer accessible. Tapes that have been"
# OLD_SYNTAX: + " exported with the --offline option can be re-imported using the --offline option of the"
# OLD_SYNTAX: + " import command."
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " -t, --tape <tape_ids> List of tapes to be exported. (not supported)"
# OLD_SYNTAX: + " -s, --storagepool <poolName> Storage pool defining the list of tapes to be exported."
# OLD_SYNTAX: + " -o, --offline <message> Indicates that the tapes are to be exported into offline mode"
# OLD_SYNTAX: + " meaning that the files remain visible in the GPFS namespace."
# OLD_SYNTAX: + " The given message, which must not exceed 500 characters, will"
# OLD_SYNTAX: + " be stored in a GPFS extended attribute."
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " Examples:"
# OLD_SYNTAX: + " ltfsee export -t D00279L5 1FA682L5 (not supported)"
# OLD_SYNTAX: + " ltfsee export -s pool1"
# OLD_SYNTAX: + " ltfsee export -s pool1 -o "Moved to storage room B""
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + ""
GLESL458I ""
+ " ltfsee export -p <pool_name> [ -l <library_name> ] [ -t <tape_id_1 tape_id_2 ... tape_id_N> ] [-o <message>]"
+ ""
+ " Removes one or more tapes from the Spectrum Archive EE system by removing them from the GPFS namespace."
+ " The tapes are removed from their storage pools, so they are no longer a target for file"
+ " migrations. Then, the tapes get reconciled. Finally, the inodes that refer to files on the tapes are"
+ "   removed from GPFS. If the --offline option is specified, the inodes that refer to the files on"
+ "   the tapes are not removed but are set to an offline status. This means that the files"
+ " remain visible in the GPFS namespace but are no longer accessible. Tapes that have been"
+ " exported with the --offline option can be imported using the --offline option of the"
+ " import command."
+ ""
+ " -t, --tape <tape_ids> List of tapes to export."
+ " -p, --pool <pool_name> Storage pool defining the list of tapes to export."
+ " -l, --library <library_name> The library name, which can be omitted on single-library systems."
+ " -o, --offline <message> Indicates that the tapes are exported in offline mode,"
+ " meaning that the files remain visible in the GPFS namespace."
+ " The given message, which must not exceed 500 characters, is"
+ " stored in a GPFS extended attribute."
+ ""
+ " Examples:"
+ " ltfsee export -p pool1 -t D00279L5 1FA682L5"
+ " ltfsee export -p pool1 -l lib1"
+ " ltfsee export -p pool1 -o "Moved to storage room B""
+ ""
# OLD_SYNTAX: GLESL459I "Usage (old syntax):"
# OLD_SYNTAX: + " Old syntax is supported only in single library Spectrum Archive EE cluster."
# OLD_SYNTAX: + " ltfsee import <tape_id_1 tape_id_2 ... tape_id_N> { [[ -p <pathName> ] [ -o | -i | -r ] [ -R ] ] | [ --offline ] }"
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " Adds one or more tapes to the Spectrum Archive EE system and reinstates the files in the"
# OLD_SYNTAX: + " GPFS namespace. Imported files will be in migrated state meaning that the data "
# OLD_SYNTAX: + " will remain on tape. Tapes must not be in any storage pool such that they"
# OLD_SYNTAX: + " are only available for recalling file data. To make them a target for migrations,"
# OLD_SYNTAX: + " the tapes must be added to a storage pool after importing them."
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " -p, --path <pathName> Specifies the path name where the files are to be imported."
# OLD_SYNTAX: + " If omitted, import will use default path (/<GPFS_Filesystem>/IMPORT)."
# OLD_SYNTAX: + " If the path does not exist, it will be created. If there are multiple"
# OLD_SYNTAX: + " GPFS file systems mounted, this option is mandatory."
# OLD_SYNTAX: + " -o, --overwrite Indicates that files are to be overwritten if they exist already."
# OLD_SYNTAX: + " This option is mutually exclusive with -i, -r and --offline."
# OLD_SYNTAX: + " -i, --ignore Indicates that files are to be ignored if they exist already."
# OLD_SYNTAX: + " This option is mutually exclusive with -o, -r and --offline."
# OLD_SYNTAX: + " -r, --rename Indicates that files are to be renamed if they exist already."
# OLD_SYNTAX: + " This option is mutually exclusive with -i, -o and --offline."
# OLD_SYNTAX: + " -R, --recreate Indicates that files will be imported in the specified path, without"
# OLD_SYNTAX: + " creating a tape ID directory."
# OLD_SYNTAX: + " --offline Indicates that the tapes were exported with the offline option. All"
# OLD_SYNTAX: + " files will be restored to the original exported path. If any file"
# OLD_SYNTAX: + " was modified in GPFS namespace or on tape, it will not be imported."
# OLD_SYNTAX: + " This option is mutually exclusive with -p, -o, -i, -r and -R."
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " Examples:"
# OLD_SYNTAX: + " ltfsee import D00279L5"
# OLD_SYNTAX: + " ltfsee import D00279L5 1FA682L5 -p /importUser"
# OLD_SYNTAX: + " ltfsee import D00279L5 1FA682L5 -r"
# OLD_SYNTAX: + " ltfsee import D00279L5 1FA682L5 -R -o -p /importUser"
# OLD_SYNTAX: + " ltfsee import D00279L5 1FA682L5 --offline"
# OLD_SYNTAX: + ""
GLESL460I "Usage:"
+ " ltfsee import -p <pool_name> [ -l <library_name> ] -t <tape_id_1 tape_id_2 ... tape_id_N>"
+ " { [[ -P <pathName> ] [ -o | -i | -r ] [ -R ]] | [ --offline ] }"
+ ""
+ " Adds one or more tapes to the Spectrum Archive EE system and reinstates the files in the"
+ "   GPFS namespace. Imported files will be in a migrated state, meaning that the data"
+ " will remain on tape. The tape must be added to a storage pool."
+ ""
+ " -p, --pool <pool_name> The storage pool to which the imported tapes are assigned."
+ " -l, --library <library_name> The library name, which can be omitted on single-library systems."
+ " -t, --tape <tape_ids> The names (IDs) of the tapes to import."
+ " -P, --path <pathName> Specifies the path name where the files are imported."
+ " If omitted, import will use the default path (/<GPFS_Filesystem>/IMPORT)."
+ " If the path does not exist, it will be created. If there are multiple"
+ " GPFS file systems mounted, this option is mandatory."
+ " -o, --overwrite Indicates that files are overwritten if they exist already."
+ " This option is mutually exclusive with -i, -r and --offline."
+ " -i, --ignore Indicates that files are ignored if they exist already."
+ " This option is mutually exclusive with -o, -r and --offline."
+ " -r, --rename Indicates that files are renamed if they exist already."
+ " This option is mutually exclusive with -i, -o and --offline."
+ " -R, --recreate Indicates that files are imported in the specified path, without"
+ " creating a tape ID directory."
+ " --offline Indicates that the tapes were exported with the offline option. All"
+ " files are restored to the original exported path. If any file"
+ " was modified in the GPFS namespace or on tape, it will not be imported."
+ " This option is mutually exclusive with -p, -o, -i, -r and -R."
+ ""
+ " Examples:"
+ "     ltfsee import -p pool1 -l lib1 -t D00279L5   (single- or multiple-library system)"
+ "     ltfsee import -p pool1 -t D00279L5           (single-library system; the library name may be omitted)"
+ " ltfsee import -p pool1 -l lib1 -t D00279L5 1FA682L5 -P /importUser"
+ " ltfsee import -p pool1 -l lib1 -t D00279L5 1FA682L5 -r"
+ " ltfsee import -p pool1 -l lib1 -t D00279L5 1FA682L5 -R -o -P /importUser"
+ " ltfsee import -p pool1 -l lib1 -t D00279L5 1FA682L5 --offline"
+ ""
# OLD_SYNTAX: GLESL461I "Usage (old syntax):"
# OLD_SYNTAX: + " Old syntax is supported only in single library LTFS EE cluster."
# OLD_SYNTAX: + " 'ltfsee migrate' can be invoked in two ways:"
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " ltfsee migrate <gpfs_scan_result_file> <target_storage_pool_name> [<redundant_copy_pool_1> [<redundant_copy_pool_2>]]"
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " Where the gpfs_scan_result_file file includes the list of files that"
# OLD_SYNTAX: + " are to be migrated. Each line of this file must end with " -- <filename>"."
# OLD_SYNTAX: + " All files will be migrated to the specified target storage pool. Optionally,"
# OLD_SYNTAX: + " redundant copies can be created in up to two additional storage pools."
# OLD_SYNTAX: + " This command does not complete until all files have been migrated."
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " migrate <migration_list_file>"
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " Where the migration_list_file file includes the list of files to be"
# OLD_SYNTAX: + " migrated. Each entry must consist of two lines where the first line"
# OLD_SYNTAX: + " defines the file name and the second line defines the target storage pool"
# OLD_SYNTAX: + " to which the file is to be migrated. Optionally, the target storage pool can be"
# OLD_SYNTAX: + " followed by up to two additional storage pools in which redundant copies"
# OLD_SYNTAX: + " are to be created. Entries in the file must be terminated/separated by a blank line."
# OLD_SYNTAX: + " This command does not complete until all files have been migrated."
# OLD_SYNTAX: + " Examples:"
# OLD_SYNTAX: + " ltfsee migrate ./gpfsscan.txt mypool"
# OLD_SYNTAX: + " ltfsee migrate ./migration_list.txt"
# OLD_SYNTAX: + ""
GLESL462I "Usage:"
+ " ltfsee migrate -s <gpfs_scan_result_file> -p <pool_name1[@library_name1]> [<pool_name2[@library_name2]> [<pool_name3[@library_name3]>]]"
+ ""
+ " Where the gpfs_scan_result_file file includes the list of files"
+ " to migrate. Each line of this file must end with " -- <filename>"."
+ " All files are migrated to the specified target storage pool <pool_name1>."
+ " Optionally, redundant copies can be created in up to two additional storage"
+ " pools (pool_name2, pool_name3). The pools can be in the same or different "
+ " tape libraries. In single-library systems, the library name can be"
+ "   omitted in the input parameters. In multiple-library systems, it must be specified."
+ " This command does not complete until all files have been migrated."
+ ""
+ " Examples:"
+ " ltfsee migrate -s ./gpfsscan.txt -p mypool"
+ " ltfsee migrate -s ./gpfsscan.txt -p poolA@library1 poolB@library2"
+ ""
+ " Note: if using policy based migration, the syntax for the OPTS parameter"
+ "   value is: '-p <pool1[@lib1]> [<pool2[@lib2]>]'. Example:"
+ " RULE EXTERNAL POOL 'rule1' EXEC '/opt/ibm/ltfsee/bin/ltfsee' OPTS '-p pool1@lib1 pool2@lib2'"
+ ""
GLESL463E "'%s %s' command options and parameters are not provided using the correct syntax."
GLESL464E "Invalid nodegroup specified."
GLESL465E "The Spectrum Archive EE service has been started on the node with IP address %s, but it is not yet ready to accept requests (initialization). Check again later."
GLESL466E "Failed to confirm the pool type of pool (%s)."
GLESL467E "A WORM pool cannot be used with the operation (%s)."
GLESL468E "J4 or J5 are the only available format specifiers."
GLESL469E "Some tapes were added to the pool without importing the tape contents."
GLESL470E "Tape (%s) cannot be added to pool (%s) for import."
GLESL471E "MMM internal database error."
GLESL472E "The migration list file is empty, or no valid file with its target pools or number of replicas is specified in the migration list file."
GLESL473E "Specified format type does not match pool format type."
# OLD_SYNTAX: GLESL474I "Usage (old syntax):"
# OLD_SYNTAX: + " Old syntax is supported only in single library LTFS EE cluster."
# OLD_SYNTAX: + " 'ltfsee premigrate' can be invoked in two ways:"
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " ltfsee premigrate <gpfs_scan_result_file> <target_storage_pool_name> [<redundant_copy_pool_1> [<redundant_copy_pool_2>]]"
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " Where the gpfs_scan_result_file file includes the list of files that"
# OLD_SYNTAX: + " are to be premigrated. Each line of this file must end with " -- <filename>"."
# OLD_SYNTAX: + " All files will be premigrated to the specified target storage pool. Optionally,"
# OLD_SYNTAX: + " redundant copies can be created in up to two additional storage pools."
# OLD_SYNTAX: + " This command does not complete until all files have been premigrated."
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " premigrate <migration_list_file>"
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " Where the migration_list_file file includes the list of files to be"
# OLD_SYNTAX: + " premigrated. Each entry must consist of two lines where the first line"
# OLD_SYNTAX: + " defines the file name and the second line defines the target storage pool"
# OLD_SYNTAX: + " to which the files are premigrated. Optionally, the target storage pool can be"
# OLD_SYNTAX: + " followed by up to two additional storage pools in which redundant copies"
# OLD_SYNTAX: + " are created. Entries in the file must be terminated or separated by a blank line."
# OLD_SYNTAX: + " This command does not complete until all files have been premigrated."
# OLD_SYNTAX: + " Examples:"
# OLD_SYNTAX: + " ltfsee premigrate ./gpfsscan.txt mypool"
# OLD_SYNTAX: + " ltfsee premigrate ./migration_list.txt"
# OLD_SYNTAX: + ""
GLESL475I "Usage:"
+ " ltfsee premigrate -s <gpfs_scan_result_file> -p <pool_name1[@library_name1]> [<pool_name2[@library_name2]> [<pool_name3[@library_name3]>]]"
+ ""
+ " Where the gpfs_scan_result_file file includes the list of files to premigrate."
+ " Each line of this file must end with " -- <filename>"."
+ " All files are premigrated to the specified target storage pool <pool_name1>."
+ " Optionally, redundant copies can be created in up to two additional storage"
+ " pools (pool_name2, pool_name3). The pools can be in the same or in (up to two)"
+ " different tape libraries. In single-library systems, the library name can be"
+ "   omitted in the input parameters. In multiple-library systems, it must be specified."
+ " This command does not complete until all files have been premigrated."
+ ""
+ " Examples:"
+ " ltfsee premigrate -s ./gpfsscan.txt -p mypool"
+ " ltfsee premigrate -s ./gpfsscan.txt -p poolA@library1 poolB@library2"
+ ""
+ " Note: if using policy based premigration, the syntax for the OPTS parameter"
+ "   value is: '-p <pool1[@lib1]> [<pool2[@lib2]>]'. Example:"
+ " RULE EXTERNAL POOL 'rule1' EXEC '/opt/ibm/ltfsee/bin/ltfsee' OPTS '-p pool1@lib1 pool2@lib2'"
+ ""
GLESL476E "A duplicate storage pool name was specified for the migration target."
GLESL477E "A duplicate storage pool name was specified to migrate file %s."
GLESL478E "Invalid storage pool name specified: '%s'."
GLESL479E "Invalid library name specified: '%s'."
GLESL480E "Device types of pool '%s' and tape '%s' do not match."
GLESL481E "Unable to determine device type for tape '%s'."
GLESL482E "Pool '%s' is a WORM pool, but tape '%s' is not a WORM tape."
GLESL483I "Usage:"
+ " ltfsee save -s <gpfs_scan_result_file> -p <target_storage_pool_name> [<redundant_copy_pool_1> [<redundant_copy_pool_2>]]"
+ ""
+ " Where the gpfs_scan_result_file file includes the list of file system objects"
+ " (empty files, empty directories, and symbolic links) to save."
+ " Each line of this file must end with " -- <filename>". All file system"
+ " objects are saved to the specified target storage pool. Optionally,"
+ " redundant copies can be created in up to two additional storage pools."
+ " If more than one library is configured, then the pool must be specified"
+ " as poolname@libraryname. This command does not complete until all file"
+ " system objects have been saved."
+ " Examples:"
+ " ltfsee save -s ./gpfsscan.txt -p poolA"
+ " ltfsee save -s ./gpfsscan.txt -p poolA@library1 poolB@library2"
+ " ltfsee save -s ./gpfsscan.txt -p mypool@mylib"
GLESL484I "Usage:"
+ "   ltfsee rebuild -P <pathName> -p <poolName> -l <libraryName> -t <tapeId_1 ... tapeId_N>"
+ ""
+ " Rebuilds a GPFS file system into the specified directory with"
+ "   the files that are found on the specified tapes. The specified"
+ "   tapes must be in a single pool and must not be offline."
+ " Rebuild is performed by import, internally. If multiple versions"
+ "   or generations of a file are found, the latest version or"
+ " generation is selected. If any of the versions or generations"
+ " cannot be determined, the file most recently imported is renamed"
+ " and two (or more) versions or generations are rebuilt or recovered."
GLESL485E "Only tapes from a single pool can be used with the rebuild command. (Tapes from pools '%s' and '%s' were specified.)"
GLESL486E "Unable to determine the pool for tape '%s'."
GLESL487I "Tape %s stays in pool %s while it is offline exported."
##### DO NOT REMOVE!! this msg is parsed by log analyzer #####
GLESL488I "The EE CLI is starting: %s"
GLESL489E "Unable to open temporary file %s (errno: %d)."
GLESL490I "Export command completed successfully."
GLESL491E "Failed to get the real path of file %s (ret: %d)."
GLESL492E "Failed to update inventory information."
 
GLESL500I "Migration thread is busy. Will retry."
# OLD_SYNTAX: GLESL501I "Usage (old syntax):"
# OLD_SYNTAX: + " Old syntax is supported only in single library Spectrum Archive EE cluster."
# OLD_SYNTAX: + " 'ltfsee recall' can be invoked in multiple ways:"
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " ltfsee recall <recall_list_file>"
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " Where the recall_list_file file includes the list of files to be"
# OLD_SYNTAX: + " recalled. Each line contains a file name with an absolute path or"
# OLD_SYNTAX: + " a relative path based on the working directory. It is also possible"
# OLD_SYNTAX: + " to pass the output of another command"
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " ltfsee recall <gpfs_scan_result_file>"
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " Where the gpfs_scan_result_file file includes the list of files that"
# OLD_SYNTAX: + " are to be recalled. Each line of this file must end with " -- <filename>"."
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " Examples:"
# OLD_SYNTAX: + " ltfsee recall ./recall_list.txt"
# OLD_SYNTAX: + " find . -type f |ltfsee recall"
# OLD_SYNTAX: + ""
GLESL502I "Usage:"
+ " 'ltfsee recall' can be invoked in multiple ways:"
+ ""
+ " ltfsee recall [-l library_name] <recall_list_file>"
+ ""
+ " Where the recall_list_file file includes the list of files to"
+ " recall. Each line contains a file name with an absolute path or"
+ " a relative path that is based on the working directory. In single-library"
+ " systems, the library name can be omitted in the input parameters."
+ "   In multiple-library systems, it must be specified. It is also possible"
+ " to pass the output of another command."
+ ""
+ " ltfsee recall [-l library_name] <gpfs_scan_result_file>"
+ ""
+ " Where the gpfs_scan_result_file file includes the list of files"
+ " to recall. Each line of this file must end with " -- <filename>"."
+ " In single-library systems, the library name can be omitted in the input"
+ "   parameters. In multiple-library systems, it must be specified."
+ ""
+ " Examples:"
+ " ltfsee recall ./recall_list.txt"
+ " find . -type f |ltfsee recall"
+ " ltfsee recall -l library2 ./recall_list.txt"
+ " find . -type f |ltfsee recall -l library2"
+ ""
# OLD_SYNTAX: GLESL503I "Usage (old syntax):"
# OLD_SYNTAX: + " Old syntax is supported only in single library Spectrum Archive EE cluster."
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " ltfsee drive <add | remove> <drive_serial[:attr]> [node_id]"
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " Adds a drive to the node or removes a drive."
# OLD_SYNTAX: + " If a tape is in the drive and a job is in progress, the tape is unloaded automatically when the job completes."
# OLD_SYNTAX: + " Maximum length of drive_serial is 32 characters. node_id is required for add command."
# OLD_SYNTAX: + " Optionally drive attributes can be set when adding tape drive. Drive attributes is logical OR of the attributes -"
# OLD_SYNTAX: + " migrate(4), recall(2), and generic(1). If set, the corresponding job can be executed on that drive."
# OLD_SYNTAX: + " Drive attributes can be specified after ':' following to drive serial, and must be decimal numeric."
# OLD_SYNTAX: + " In the examples below, '6' is logical OR of migrate(4) and recall(2), so migration jobs and recall jobs are allowed"
# OLD_SYNTAX: + " to execute on this drive."
# OLD_SYNTAX: + " If omitted, all of attributes are enabled by default."
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " Examples:"
# OLD_SYNTAX: + " ltfsee drive add 1068002111 1"
# OLD_SYNTAX: + " ltfsee drive remove 1068002111"
# OLD_SYNTAX: + " ltfsee drive add 1068002111:6 1"
# OLD_SYNTAX: + ""
GLESL504I "Usage:"
+ ""
+ " ltfsee drive <add | remove> -d <drive_serial[:role]> [-n <node_id>] [-l <library_name>]"
+ ""
+ " Adds a drive to the node or removes a drive."
+ " If a tape is in the drive and a job is in progress, the tape is unloaded automatically when the job completes."
+ " The maximum length of the drive_serial is 32 characters. The node_id is required for the add command."
+ " Optionally, drive roles can be set when adding a tape drive. Drive roles are the logical OR of the roles -"
+ " migrate(4), recall(2), and generic(1). If set, the corresponding job can be run on that drive."
+ " Drive roles can be specified after ':' following the drive serial, and must be decimal numeric."
+ " In the examples below, '6' is the logical OR of migrate(4) and recall(2), so migration jobs and recall jobs are allowed"
+ " to run on these drives."
+ " If omitted, all of the roles are enabled by default."
+ " Option -l must be provided on multi-library systems and may be omitted on single-library systems."
+ ""
+ " Examples:"
+ " ltfsee drive add -d 1068002111 -n 1"
+ " ltfsee drive add -d 1068002113:6 -n 3 -l library2"
+ " ltfsee drive remove -d 1068002111"
+ ""
# OLD_SYNTAX: GLESL505I "Usage (old syntax):"
# OLD_SYNTAX: + " Old syntax is supported only in single library Spectrum Archive EE cluster."
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " ltfsee cleanup <scan number>"
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " If scans or sessions are finished but the calling process is not able to remove the session, the cleanup command"
# OLD_SYNTAX: + " provides a possibility to cleanup this scan or session."
# OLD_SYNTAX: + ""
# OLD_SYNTAX: + " Example:"
# OLD_SYNTAX: + " ltfsee cleanup 3603955717"
# OLD_SYNTAX: + ""
GLESL506I "Usage:"
+ ""
+ " ltfsee cleanup -s <scan number> [-l library_name]"
+ ""
+ " If scans or sessions are finished but the calling process is not able to remove the session, the cleanup command"
+ " provides a possibility to clean up this scan or session."
+ " Option -l must be provided on multi-library systems and may be omitted on single-library systems."
+ ""
+ " Example:"
+ " ltfsee cleanup -s 3603955717"
+ " ltfsee cleanup -s 3603955717 -l library2"
+ ""
GLESL507I "Usage:"
+ ""
+ " ltfsee recall_deadline [-T <time (in seconds)>] [-l library_name]"
+ ""
+ "   This command enables (or disables) the recall deadline and sets the recall deadline value."
+ " 120 (120 seconds) is the default setting."
+ " With this option enabled, if a recall job has been in the queue for recall_deadline"
+ " seconds more than the average of all recall jobs in the queue, it is eligible for"
+ " high-priority processing from the queue of recall requests that have expired."
+ " If this option is set to 0, the recall deadline queue is disabled, meaning that all"
+ " file recall requests are processed according to the starting block of the file on"
+ " tape."
+ ""
+ " -T <time (in seconds)> Specify a value of 0 to disable the recall deadline queue"
+ " Specify a value in the range 1 - 1440 to enable the"
+ " recall deadline queue and set the number of seconds older"
+ " that a recall request must be (compared to the average of"
+ " all recall requests in the queue) before it is eligible for"
+ " prioritized recall processing."
+ "                            If the -T option is omitted, the currently set value is displayed."
+ ""
+ " -l <library_name> Get or set the recall deadline for the specified library."
+ "                            If the -l option is omitted, get or set the deadline for all active libraries."
+ ""
+ " Examples:"
+ " ltfsee recall_deadline"
+ " ltfsee recall_deadline -T 240"
+ " ltfsee recall_deadline -T 240 -l library1"
+ ""
+ " Note: older syntax is also supported:"
+ " ltfsee recall_deadline <time (in seconds)>"
+ ""
# GLESL508I "Usage:"
#+ ""
#+ " ltfsee replica_copy_time [-T <time (in minutes)>] [ -l library_name ]"
#+ ""
#+ " This option enables (or disables) replica copy timing and sets the replica copy timing value."
#+ " 0 (disabled) is the default setting."
#+ " With this option enabled, if the target tape for a copy job is not already"
#+ " mounted and is in the unscheduled state longer than the replica copy time"
#+ " value, a tape is selected from the primary pool and scheduled for an unmount."
#+ " Then, mount of the target tape is scheduled."
#+ ""
#+ " -T <time (in minutes)> Specify a value of 0 to disable replica copy timing."
#+ " Specify a value in the range 1 through 1440 to"
#+ " enable replica copy timing and set the number of"
#+ " minutes to use as the replica copy timing value."
#+ "                            If the -T option is omitted, the currently set value"
#+ " displays."
#+ ""
#+ " -l <library_name>          Get or set the replica copy time for the specified library."
#+ "                            If the -l option is omitted, get or set the replica copy time"
#+ " for all active libraries."
#+ ""
#+ " Examples:"
#+ " ltfsee replica_copy_time -T 5"
#+ " ltfsee replica_copy_time -T 5 -l library1"
#+ ""
#+ " Note: older syntax is also supported on single tape library systems:"
#+ " ltfsee replica_copy_time <time (in minutes)>"
#+ ""
GLESL509I "Usage:"
+ ""
+ " ltfsee retrieve [-l <library_name>]"
+ ""
+ " Triggers the Spectrum Archive EE system to retrieve information about physical resources"
+ "   from the tape library. The Spectrum Archive EE system periodically does this automatically,"
+ " but it can be useful to explicitly trigger this operation if the"
+ " configuration has changed, for example, if a drive has been added/removed."
+ ""
+ "   -l <library_name>        Retrieve the information for the specified library."
+ "                            If the -l option is omitted, retrieve the information"
+ " for all active libraries."
+ ""
+ " Examples:"
+ " ltfsee retrieve"
+ " ltfsee retrieve -l library1"
+ ""
GLESL510E "Failed to get file status because the attempt to acquire the DMAPI access right timed out. The file %s is exclusively locked."
GLESL511E "Illegal characters in pool name '%s'; only alphanumeric characters and underscores are allowed."
GLESL512I "Usage:"
+ ""
+ " ltfsee recover -t <tape_id> [-p <pool_name>] [-l <library_name>] [-f <list_file>] <-s | -r | -c>"
+ ""
+ "   This command is used to recover files from a tape or to remove a tape from Spectrum Archive EE"
+ "   when the tape is in a critical state or in a write fenced state."
+ "   It follows the GNU getopt behavior if two of the -s, -c, and -r parameters are specified at the same time."
+ ""
+ " -t <tape_id> Specifies the tape ID to recover."
+ ""
+ "   -p <pool_name>       The pool name. It must be provided."
+ ""
+ "   -l <library_name>    The library name. It must be provided on multi-library systems but can be omitted on single-library systems."
+ ""
+ "   -f <list_file>       Specifies the output file, which contains the list of files to recover and a list of non-EE files."
+ "                        By default, this command uses /tmp/[hostname].[pid].[first-mountpoint-GPFS].recoverlist."
+ ""
+ " -s List the files that have corresponding migrated stubs or saved objects in GPFS."
+ " This option is mutually exclusive with -c and -r."
+ ""
+ " -c Recover files that have corresponding migrated stubs or saved objects in GPFS."
+ " This option is mutually exclusive with -s and -r."
+ ""
+ " -r Remove the tape from the LTFS inventory if no recoverable files exist."
+ "                        This option is mutually exclusive with -s and -c."
+ ""
+ " Example:"
+ " ltfsee recover -t D00279L5 -p pool1 -l library2 -s -f /home/foo/recover_list.txt"
+ " ltfsee recover -t D00279L5 -p pool1 -l library2 -c"
+ " ltfsee recover -t D00279L5 -p pool1 -l library2 -r"
+ ""
GLESL513I ""
+ "   ltfsee tape move homeslot | ieslot -t <tape_id_1 tape_id_2 ... tape_id_N> [-p <pool_name>] [-l <library_name>]"
+ ""
+ " Moves the specified tape between its home slot and an IE slot in the I/O station."
+ "   If the tape belongs to a storage pool and is online (not in the offline state), a request to move it to an IE slot fails."
+ " A tape in an IE slot cannot be added to a storage pool. A tape must be moved to its home slot"
+ "   before it can be added to a pool."
+ ""
+ " -t <tape_id> Specifies the tape ID to move."
+ ""
+ "   -p <pool_name>       The pool name. It must be provided if the target tapes are assigned to a storage pool and must be omitted for"
+ "                        unassigned tapes."
+ ""
+ "   -l <library_name>    The library name. It must be provided on multi-library systems but can be omitted on single-library systems."
+ ""
+ " Examples:"
+ " ltfsee tape move homeslot -t D00279L5 1FA682L5 -p pool1 -l library1"
+ " ltfsee tape move homeslot -t D00279L5 1FA682L5 -l library1"
+ " ltfsee tape move ieslot -t D00279L5 1FA682L5 -p pool1 -l library2"
+ ""
+ " Note: older syntax is also supported on single tape library systems:"
+ " ltfsee tape move homeslot | ieslot <tape_id_1 tape_id_2 ... tape_id_N>"
+ ""
GLESL514E "Format type has not been specified for pool with name '%s'."
GLESL515E "Error determining the threshold."
GLESL516I "The 'ltfsee localnode' command is obsolete. Issuing a separate command to synchronize a node with LE+ is not needed any more."
GLESL517E "Migration supports creating up to 3 tape replicas. A migration request for %d tape replicas was submitted."
GLESL518E "Invalid data in database (%s)."
GLESL519I "Spectrum Archive EE service (MMM) for library %s is already running."
GLESL520W "Unable to check whether the Spectrum Archive EE service (MMM) of library %s is already running. Skip it."
GLESL521E "File system %s does not exist or is not configured for use with Spectrum Archive EE configuration and metadata."
GLESL522E "%s: option -%c requires an argument."
GLESL523E "%s: unknown/ambiguous option -%c."
GLESL524I "The Spectrum Archive EE service (MMM) of library %s is configured but not running. Skip it."
GLESL525W "Failed to get the configuration information for library %s. Skip it."
GLESL526W "Failed to connect to the Spectrum Archive EE service (MMM) of library %s. Skip it."
GLESL527E "Failed to get the configuration information for library %s, exiting."
GLESL528E "Failed to connect to the Spectrum Archive EE service (MMM) of library %s."
GLESL529E "Unable to open migration list file %s."
GLESL530E "Unable to determine library information from migration list file %s."
GLESL531E "Library name and pool name must be specified (pool_name@library_name) for migration if more than one library is configured. There are %d libraries configured."
GLESL532E "Library name must be specified (-l option) if more than one library is configured. There are %d libraries configured."
GLESL533E "Library name and pool name must be specified (-l and -p options) for this command if more than one library is configured. There are %d libraries configured."
GLESL534E "Old command syntax to start Spectrum Archive EE service (MMM) is supported only with one library. There are %d libraries configured."
GLESL535E "Unable to start the Spectrum Archive EE service (MMM) for library %s (unable to get a file descriptor)."
GLESL536I "Started the Spectrum Archive EE service (MMM) for library %s."
GLESL537E "Unable to start the Spectrum Archive EE service (MMM) for library %s."
GLESL538I "The Spectrum Archive EE service (MMM) of library %s is configured but not running. Nothing to do."
GLESL539I "Usage:"
+ ""
+ " ltfsee status"
+ ""
+ " Provides the status of the Spectrum Archive EE services (MMMs) and the access information for the nodes that host those services."
+ " Examples:"
+ " ltfsee status"
+ ""
GLESL540I "Library name: %s, library id: %s, control node (MMM) IP address: %s."
GLESL541I "Stopped LTFS EE service (MMM) for library %s."
GLESL542E "Old syntax is not supported for tape import."
GLESL543E "Pool name not specified."
# GLESL544I is a copy of the old GLESL001I, currently it is not used
GLESL544I "Usage: %s <option>"
+ "Available options are:"
+ " migrate <gpfs_scan_result_file> <target_storage_pool_name> [<redundant_copy_pool_1> [<redundant_copy_pool_2>]]"
+ ""
+ " Where the gpfs_scan_result_file file includes the list of files that"
+ " are to be migrated. Each line of this file must end with " -- <filename>"."
+ " All files will be migrated to the specified target storage pool. Optionally,"
+ " redundant copies can be created in up to two additional storage pools."
+ " This command does not complete until all files have been migrated."
+ ""
+ ""
+ " migrate <migration_list_file>"
+ ""
+ " Where the migration_list_file file includes the list of files to be"
+ " migrated. Each entry must consist of two lines where the first line"
+ " defines the file name and the second line defines the target storage pool"
+ " to which the file is to be migrated. Optionally, the target storage pool can be"
+ " followed by up to two additional storage pools in which redundant copies"
+ " are to be created. Entries in the file must be terminated/separated by a blank line."
+ " This command does not complete until all files have been migrated."
+ " Examples:"
+ " ./ltfsee migrate ./gpfsscan.txt mypool"
+ " ./ltfsee migrate ./migration_list.txt"
+ ""
+ ""
+ " save <gpfs_scan_result_file> <target_storage_pool_name> [<redundant_copy_pool_1> [<redundant_copy_pool_2>]]"
+ ""
+ " Where the gpfs_scan_result_file file includes the list of file system objects"
+ " (empty files, empty directories, symbolic links) that are to be saved."
+ " Each line of this file must end with " -- <filename>". All file system"
+ " objects will be saved to the specified target storage pool. Optionally,"
+ " redundant copies can be created in up to two additional storage pools."
+ " This command does not complete until all file system objects have been saved."
+ ""
+ ""
+ " save <save_list_file>"
+ ""
+ " Where the save_list_file file includes the list of file system objects"
+ " (empty files, empty directories, symbolic links) that are to be saved."
+ " Each entry must consist of two lines where the first line defines the name"
+ " of the file system object and the second line defines the target storage pool"
+ " to which the file system object is to be saved. Optionally, the target"
+ " storage pool can be followed by up to two additional storage pools in which"
+ " redundant copies are to be created. Entries in the file must be terminated/
+ " separated by a blank line. This command does not complete until all file
+ " system objects have been saved."
+ " Examples:"
+ " ./ltfsee save ./gpfsscan.txt mypool"
+ " ./ltfsee save ./save_list.txt"
+ ""
+ ""
+ " premigrate <gpfs_scan_result_file> <target_storage_pool_name> [<redundant_copy_pool_1> [<redundant_copy_pool_2>]]"
+ ""
+ " Where the gpfs_scan_result_file file includes the list of files that"
+ " are to be premigrated. Each line of this file must end with " -- <filename>"."
+ " All files will be premigrated to the specified target storage pool. Optionally,"
+ " redundant copies can be created in up to two additional storage pools."
+ " This command does not complete until all files have been premigrated."
+ ""
+ ""
+ " premigrate <premigration_list_file>"
+ ""
+ " Where the premigration_list_file file includes the list of files to be"
+ " premigrated. Each entry must consist of two lines where the first line"
+ " defines the file name and the second line defines the target storage pool"
+ " to which the file is to be premigrated. Optionally, the target storage pool can be"
+ " followed by up to two additional storage pools in which redundant copies"
+ " are to be created. Entries in the file must be terminated/separated by a blank line."
+ " This command does not complete until all files have been premigrated."
+ " Examples:"
+ " ./ltfsee premigrate ./gpfsscan.txt mypool"
+ " ./ltfsee premigrate ./premigration_list.txt"
+ ""
+ ""
+ " pool <create|delete> <poolname>"
+ " pool <add|remove> <poolname> <tape_id_1 tape_id_2 ... tape_id_N> [-f | -F | -e | -c | -d]"
+ ""
+ " This option is used to create or delete the specified LTFS storage pool"
+ " or to add or remove one or more tapes from the specified storage pool."
+ " When adding tape to pool, formatting or repairing option can be added"
+ " to execute these operation to tape before adding it."
+ ""
+ " -f, --format Format the specified tape before adding it to pool."
+ " -F, --force_format Force format of the tape. Required to format 'already formatted' tape"
+ " -e, --exception_format Exceptionally format the tape. Required to format 'containing migrated file data' tape"
+ " -c, --check Check and repair the specified tape before adding it to pool."
+ " -d, --deep_recovery Specify this option instead of -c to repair a tape that is missing an EOD mark."
+ ""
+ " Examples:"
+ " ./ltfsee pool create mypool"
+ " ./ltfsee pool add mypool 327AAQL5 277BAXL5 329AAQL5"
+ " ./ltfsee pool add mypool 327AAQL5 -f"
+ " ./ltfsee pool add mypool 327AAQL5 -c"
+ ""
+ ""
+ " info <jobs|tapes|drives|nodes|pools|scans|files -f <file_1 file_2 ...> [-q]>"
+ ""
+ " This option prints current LTFS EE resource inventory information or"
+ " information about ongoing migration and recall jobs/scans. Whereas a job"
+ " refers to the migration or recall of a single file, a scan refers to a list"
+ " of such jobs, usually originating from a GPFS policy scan, that has been"
+ " submitted via the migrate option."
+ " Examples:"
+ " ./ltfsee info pools"
+ " ./ltfsee info scans"
+ ""
+ ""
+ " localnode <status|sync>"
+ ""
+ " This option initiates an operation against the local node."
+ ""
+ " <status> Shows the current status of the local node for a multi-nodes cluster. "
+ " If it shows 'Out of sync', the node is in an error state. "
+ " To recover from the error state, issue the 'ltfsee localnode sync' command. "
+ " <sync> Forces a synchronization (sync) operation against the local node when it is out of sync in a multi-node cluster."
+ " Examples:"
+ " ./ltfsee localnode status"
+ " ./ltfsee localnode sync"
+ ""
+ ""
+ " start [ -l <library_name> ]"
+ ""
+ " This option starts the LTFS EE system and defines which library"
+ " is to be used by LTFS EE."
+ " Examples:"
+ " ./ltfsee start -l LIB01"
+ "Note: on a single library system the old command syntax can also be used: ltfsee start <gpfs file system>"
+ ""
+ ""
+ " status"
+ ""
+ " This option provides the IP address of the node where the LTFS EE service has been"
+ " started and its corresponding process id."
+ " Examples:"
+ " ./ltfsee status"
+ ""
+ ""
+ " stop -l <library_name> "
+ ""
+ " This option stops the LTFS EE system."
+ " Examples:"
+ " ./ltfsee stop LIB01"
+ ""
+ ""
+ " retrieve"
+ ""
+ " Triggers the LTFS EE system to retrieve information about physical resources"
+ " from the tape library. The LTFS EE systems does this automatically from time"
+ " to time but it might be useful to explicitly trigger this operation if the"
+ " configuration has changed, for example, if a drive has been added/removed."
+ " Examples:"
+ " ./ltfsee retrieve"
+ ""
+ ""
+ " threshold [percentage]"
+ ""
+ " The threshold parameter determines the limit of the file system usage, by percentage"
+ " at which migrations are preferred over recalls. The default value is 95 percent."
+ " If the threshold value is passed for one of the managed file systems recalls are"
+ " preferred again when the file system usage drops by 5 percent. For example, if"
+ " a threshold of 93 percent is chosen, recalls are preferred again"
+ " when the file system usage is at or below 88 percent."
+ " If no threshold value is specified, the current value is shown."
+ " Examples:"
+ " ./ltfsee threshold 22"
+ ""
+ ""
+ " import <tape_id_1 tape_id_2 ... tape_id_N> { [[ -p <pathName> ] [ -o | -i | -r ] [ -R ] ] | [ --offline ] }"
+ ""
+ " Adds one or more tapes to the LTFS EE system and reinstantiates the files in the"
+ " GPFS namespace. Imported files will be in migrated state meaning that the data "
+ " will remain on tape. Tapes must not be in any storage pool such that they"
+ " are only available for recalling file data. To make them a target for migrations,"
+ " the tapes must be added to a storage pool after importing them."
+ ""
+ " -p, --path <pathName> Specifies the path name where the files are to be imported."
+ " If omitted, import will use default path (/<GPFS_Filesystem>/IMPORT)."
+ " If the path does not exist, it will be created. If there are multiple"
+ " GPFS file systems mounted, this option is mandatory."
+ " -o, --overwrite Indicates that files are to be overwritten if they exist already."
+ " This option is mutually exclusive with -i, -r and --offline."
+ " -i, --ignore Indicates that files are to be ignored if they exist already."
+ " This option is mutually exclusive with -o, -r and --offline."
+ " -r, --rename Indicates that file are to be renamed if they exist already."
+ " This option is mutually exclusive with -i, -o and --offline."
+ " -R, --recreate Indicates that files will be imported in the specified path, without"
+ " creating a tape ID directory."
+ " --offline Indicates that the tapes were exported with the offline option. All"
+ " files will be restored to the original exported path. If any file"
+ " was modified in GPFS namespace or on tape, it will not be imported."
+ " This option is mutually exclusive with -p, -o, -i, -r and -R."
+ ""
+ " Examples:"
+ " ./ltfsee import D00279L5"
+ " ./ltfsee import D00279L5 1FA682L5 -p /importUser"
+ " ./ltfsee import D00279L5 1FA682L5 -r"
+ " ./ltfsee import D00279L5 1FA682L5 -R -o -p /importUser"
+ " ./ltfsee import D00279L5 1FA682L5 --offline"
+ ""
+ ""
+ " export {-t <tape_id_1 tape_id_2 ... tape_id_N> | -s <poolName> } [-o <message>]"
+ ""
+ " Removes one or more tapes from the LTFS EE system by removing them from the GPFS namespace."
+ " The tapes will be removed from their storage pools such that they are no longer a target for file"
+ " migrations, then they get reconciled, and finally inodes referencing files on these tapes get"
+ " removed from GPFS. If the --offline option is specified, the inodes referencing the files on"
+ " the tapes will not be removed but just set to an offline status. This means that the files"
+ " remain visible in the GPFS namespace but are no longer accessible. Tapes that have been"
+ " exported with the --offline option can be re-imported using the --offline option of the"
+ " import command."
+ ""
+ " -t, --tape <tape_ids> List of tapes to be exported."
+ " -s, --storagepool <poolName> Storage pool defining the list of tapes to be exported."
+ " -o, --offline <message> Indicates that the tapes are to be exported into offline mode"
+ " meaning that the files remain visible in the GPFS namespace."
+ " The given message, which must not exceed 500 characters, will"
+ " be stored in a GPFS extended attribute."
+ ""
+ " Examples:"
+ " ./ltfsee export -t D00279L5 1FA682L5"
+ " ./ltfsee export -s pool1"
+ " ./ltfsee export -s pool1 -o "Moved to storage room B""
+ ""
+ ""
+ " tape move homeslot | ieslot <tape_id_1 tape_id_2 ... tape_id_N>"
+ ""
+ " Moves the specified tape between its home slot and an IE slot in the I/O station."
+ " If the tape belongs to a storage pool, a request to move it to an IE slot is failed."
+ " Once a tape is moved to an IE slot, the tape cannot be accessed. If the tape contains migrated"
+ " files, the tape should not be moved to an IE slot without exporting."
+ " A tape in an IE slot cannot be added to storage pool. Such a tape must be moved to its home slot"
+ " before it can be added"
+ ""
+ " Examples:"
+ " ./ltfsee tape move homeslot D00279L5 1FA682L5"
+ " ./ltfsee tape move ieslot D00279L5 1FA682L5"
+ ""
+ ""
+ " reconcile [ -t <tape_id_1 tape_id_2 ..> ] -p <poolName> -l <libraryName> [-P] [-u] [-w <wait_time>] [-g <gpfs_fs_1 gpfs_fs_2 ..>]"
+ ""
+ " Starts reconciliation of all or selected tapes or storage pools against all or selected GPFS file systems"
+ ""
+ " -t <tape_ids> Reconcile specified tapes."
+ " Can be combined with -s and -g."
+ ""
+ " -p <poolName> The name of the pool to reconcile. If -t isn't specified, reconcile all tapes in specified pool"
+ " Can be combined with -t and -g."
+ ""
+ " -l <libraryName> The name of the library that the pool is/"
+ " will be associated with. If only a single"
+ " library is configured for the system, this"
+ " option may be omitted."
+ ""
+ " -P Partial reconcile - if not all the requested tapes can be reserved for reconcile, reconcile the tapes that can be reserved."
+ ""
+ " -u Skip reconcile pre-check so that tapes get mounted for reconcile regardless of need."
+ ""
+ " -w <wait_time> Maximum time (in seconds) that the reconcile process can spend trying to reserve the requested tapes."
+ " Default value is 300 seconds."
+ ""
+ " -g <gpfs_fss> Reconcile tapes (defined by other options) against specified GPFS file systems."
+ " Can be combined with any other reconcile option."
+ " If not specified, tapes are reconciled against all GPFS file systems."
+ ""
+ ""
+ " reclaim <stg_pool> [-n <number> | -t <tape_id_1 tape_id_2 ... tape_id_N> [-l [file_number_th]]] [-r [remaining_capacity_th]] [-g [space_gain_th]] [-q]"
+ ""
+ " Starts reclamation of one or more tapes within the given storage pool."
+ " Optionally it can be defined which tapes to be reclaimed and removed from the storage pool or"
+ " how many tapes are to be reclaimed and removed from the storage pool. Furthermore, the set of tapes to be"
+ " reclaimed can be further refined by setting one or both remaining-capacity and space gain thresholds."
+ " Besides, the number of files to be reclaimed can be defined. It can be used only one tape is defeined to"
+ " be reclaimed."
+ " The storage pool to be reclaimed must be specified as the first parameter."
+ " If other parameters are not specified all the tapes from the storage pool are reclaimed and"
+ " remain in the storage pool."
+ ""
+ " -n <number> Reclaim and remove <number> tapes from the storage pool. Up to <number> of tapes"
+ " from the storage pool are reclaimed and removed from the storage pool. If less then"
+ " <number> of tapes are successfully reclaimed, only the reclaimed tapes are removed."
+ " The -n and -t options are mutually exclusive but they can be combined with -r."
+ ""
+ " -t <tape_ids> Reclaim and remove listed tapes from the storage pool. The listed tapes"
+ " from the storage pool are reclaimed and removed from the storage pool. If not all"
+ " listed tapes are successfully reclaimed, only the reclaimed tapes are removed."
+ " The -t and -n options are mutually exclusive but they can be combined with -r."
+ " All the listed tapes must be members of the storage pool."
+ ""
+ " -l [threshold] Defines how many files to be reclaimed in this command execution. It works with the -t"
+ " option with only one tape_id. When this option is used, The tape may not be removed and"
+ " capacity of storage pool may not be gained in that case, while unreferenced capacity of"
+ " the tape will be increased. The tape needs to be reclaimed again to complete the"
+ " reclaim operation. This option may be used when a user needs to limit the time of the"
+ " command execution."
+ " If this option is specified without a threshold value, the default value is 100,000."
+ ""
+ " -r [threshold] Defines the remaining capacity threshold to qualify for reclamation."
+ " The remaining tape capacity threshold defines a percentage of the total tape capacity."
+ " If a tape has more free space than this threshold it will not be reclaimed."
+ " If this option is specified without a threshold value, the default value 10 (10%) is used."
+ ""
+ " -g [threshold] Defines the expected gain threshold to qualify for reclamation."
+ " The expected gain threshold defines a percentage of the total tape capacity."
+ " If a tape has less expected gain space than this threshold it will not be reclaimed."
+ " If this option is specified without a threshold value, the default value 75 (75%) is used."
+ ""
+ " -q Quick reconcile is performed before reclaim. All DMAPI enabled GPFS filesystem have to"
+ " be mounted to use this option. The -q option solves simple inconsistency between GPFS"
+ " and LTFS. When there are files which quick reconcile cannot solve, reclaim operation will"
+ " leave those files on source tape. In that case, reconcile command needs to run."
+ ""
+ " Examples:"
+ " ./ltfsee reclaim pool1 -r -g"
+ " ./ltfsee reclaim pool1 -n 2"
+ " ./ltfsee reclaim pool1 -t D00279L5 1FA682L5"
+ " ./ltfsee reclaim pool1 -t TYO164JC -l 10000"
+ ""
+ " drive <add | remove> <drive_serial[:attr]> [node_id]"
+ ""
+ " Adds a drive to the node or removes a drive."
+ " If a tape is in the drive and a job is in progress, the tape is unloaded automatically when the job completes."
+ " Maximum length of drive_serial is 32 characters. node_id is required for add command."
+ " Optionally drive attributes can be set when adding tape drive. Drive attributes is logical OR of the attributes -"
+ " migrate(8), recall(2), and generic(1). If set, the corresponding job can be executed on that drive."
+ " Drive attributes can be specified after ':' following to drive serial, and must be decimal numeric."
+ " In the exmaples below, '12' is logical OR of migrate(8) and copy(4), so migration jobs and copy jobs are allowed"
+ " to execute on this drive."
+ " If omitted, all of attributes are enabled by default."
+ ""
+ " Examples:"
+ " ./ltfsee drive add 1068002111 1"
+ " ./ltfsee drive remove 1068002111"
+ " ./ltfsee drive add 1068002111:10 1"
+ ""
+ ""
+ " cleanup <scan number>"
+ ""
+ " If scans or sessions are finished but the calling process is not able to remove the session, the cleanup command"
+ " provides a possibility to cleanup this scan or session."
+ ""
+ " Example:"
+ " ./ltfsee cleanup 3603955717"
+ ""
+ ""
+ " recover -t <tape_id> <-s | -S | -r | -c | -C [-o <directory>] >"
+ ""
+ " This option is used to recover files from a tape or to remove a tape from LTFS EE when the tape is in critical state."
+ " To remove or recover a tape, this command must run on the node where the tape is mounted."
+ " If the command is run on another node, the mount node ID and IP address for the specified tape is shown."
+ ""
+ " -t <tape_id> Specifies the tape ID to be recovered."
+ ""
+ " -s List the files that have corresponding migrated stubs or saved objects in GPFS."
+ " This option is mutually exclusive with -S, -c, -C, -r and -o."
+ ""
+ " -S List the files that do not have corresponding migrated stubs or saved objects in GPFS."
+ " This option is mutually exclusive with -s, -c, -C, -r and -o."
+ ""
+ " -c Recover files that have corresponding migrated stubs or saved objects in GPFS."
+ " This option is mutually exclusive with -s, -S, -C, -r and -o."
+ ""
+ " -C Copy the files that do not have corresponding migrated stub or saved objects from the tape. "
+ " If -o is not specified, the files are copied under /tmp directory."
+ " If -o is specified, the files are copied to the specified directory. "
+ " This option is mutually exclusive with -s, -S, -c and -r."
+ ""
+ " -o <directory> Specifies the directory name where the files, "
+ " which do not have corresponding migrated stubs or saved objects in GPFS, are to be copied."
+ " -C must also be specified."
+ " This option is mutually exclusive with -s, -S, -c and -r."
+ ""
+ " -r Remove the tape from the LTFS inventory. "
+ " This option is mutually exclusive with -s, -S, -c, -C and -o. "
+ ""
+ " Example:"
+ " ./ltfsee recover -t D00279L5 -s"
+ " ./ltfsee recover -t D00279L5 -S"
+ " ./ltfsee recover -t D00279L5 -c"
+ " ./ltfsee recover -t D00279L5 -C -o /tmp/work"
+ " ./ltfsee recover -t D00279L5 -r"
+ ""
+ " repair <file_name>"
+ ""
+ " This option is used to repair a file or object in strayed state by changing it"
+ " to the resident state when the tape (or tapes) used for premigration, migration,"
+ " or save are not available."
+ " This option does not check tape availability."
+ " This option removes metadata, on GPFS, used for keeping the file/object state."
+ ""
+ " Example:"
+ " ./ltfsee repair file123"
+ ""
+ " replica_copy_time <time (in minutes)>"
+ ""
+ " This option enables (or disables) replica copy timing and sets the replica copy timing value."
+ " 0 (disabled) is the default setting."
+ " With this option enabled, if the target tape for a copy job is not already"
+ " mounted and is in the unscheduled state longer than the replica copy time"
+ " value, a tape is selected from the primary pool and scheduled for unmount."
+ " Then, mount of the target tape is scheduled."
+ ""
+ " <time (in minutes)> Specify a value of 0 to disable replica copy timing."
+ " Specify a value in the range 1 through 1440 to"
+ " enable replica copy timing and set the number of"
+ " minutes to use as the replica copy timing value."
+ ""
+ " Example:"
+ " ./ltfsee replica_copy_time 5"
+ ""
+ ""
+ " recall_deadline <time (in seconds)>"
+ ""
+ " This option enables (or disables) the recall deadline and sets the recall deadline value."
+ " 120 (120 seconds) is the default setting."
+ " With this option enabled, if a recall job has been in the queue for recall_deadline"
+ " seconds more than the average of all recall jobs in the queue, it is eligible for"
+ " high-priority processing from the queue of recall requests that have expired."
+ " If this option is set to 0, the recall deadline queue is disabled, meaning that all"
+ " file recall requests will be processed according to the starting block of the file on"
+ " tape."
+ ""
+ " <time (in seconds)> Specify a value of 0 to disable the recall deadline queue"
+ " Specify a value in the range 1 - 1440 to enable the"
+ " recall deadline queue and set the number of seconds older"
+ " that a recall request must be (compared to the average of"
+ " all recall requests in the queue) before it is eligible for"
+ " prioritized recall processing."
+ ""
+ ""
+ " fsopt { query [ -g <gpfs_filesystem(s)> ] } |"
+ " { update [ -s <stub_size> ] [ -r <read_starts_recall> ] [-p <preview_size>] [-f] -g <gpfs_filesystems> }"
+ ""
+ "The 'ltfsee fsopt' command queries or updates file system level settings for stub size, 'read starts recall', and preview size."
+ ""
+ "-s <stub_size> Defines the size of the initial file part that is kept resident on disk for migrated files."
+ "Possible values: 0 - 1073741824. A value must be a multiple of the file system block size and larger than or equal to the preview size."
+ ""
+ "-r <read_starts_recall> When this feature is set to yes, reading from the resident file part starts a background recall of the file."
+ "During the background recall, data from the resident part can be read. The rest of the file can be read upon recall."
+ "Possible values: yes|no|undef"
+ ""
+ "-p <preview_size> Defines initial file part size for which reads from the resident file part do not trigger recall."
+ "Possible values: 0 - 1073741824. A value must be smaller than or equal to the stub size."
+ ""
+ "-f Force the settings update even if stub size change is requested while there are ongoing migrations."
+ "Use of this option might result in some ongoing migrations that use the old stub size while others use the new stub size."
+ ""
+ "-g <gpfs_filesystems> One or more GPFS file systems for which to query or update settings. File system names must be separated by a space."
+ ""
+ "Examples:"
+ " ltfsee fsopt update -s 10485760 -p 81920 -r yes -g /ibm/gpfs"
+ " ltfsee fsopt query -g /ibm/gpfs"
+ ""
+ ""
+ " rebuild <pathName> <tape_id_1 tape_id_2 ... tape_id_N>"
+ ""
+ " Rebuilds a GPFS file system into the specified directory with the files found on the specified tapes."
+ " Rebuild is performed by import, internally."
+ " If multiple versions or generations of a file are found, the latest version or generation is selected."
+ " If any of the versions or generations cannot be determined, the file most recently imported is renamed"
+ " and two (or more) versions or generations are rebuilt or recovered."
+ ""
+ ""
+ " recall <gpfs_scan_result_file>"
+ ""
+ " Where the gpfs_scan_result_file file includes the list of files that"
+ " are to be recalled. Each line of this file must end with " -- <filename>"."
+ ""
+ ""
+ " recall <recall_list_file>"
+ ""
+ " Where the recall_list_file file includes the list of files to be"
+ " recalled. Each line contains a file name with an absolute path or"
+ " a relative path based on the working directory. It is also possible"
+ " to pass the output of another command"
+ ""
+ " Examples:"
+ " ltfsee recall ./recall_list.txt"
+ " find . -type f |ltfsee recall"
+ ""
GLESL555I "Usage:"
+ ""
+ " ltfsee threshold [percentage]"
+ ""
+ " The threshold parameter determines the limit of the file system usage, by percentage"
+ " at which migrations are preferred over recalls. The default value is 95 percent."
+ " If the threshold value is passed for one of the managed file systems, recalls are"
+ " preferred again when the file system usage drops by 5 percent. For example, if"
+ " a threshold of 93 percent is chosen, recalls are preferred again"
+ " when the file system usage is at or below 88 percent."
+ " If no threshold value is specified, the current value is shown."
+ " Examples:"
+ " ltfsee threshold 22"
+ ""
GLESL556I "Usage:"
+ ""
+ " ltfsee repair <file_name>"
+ ""
+ " This command is used to repair a file or object in the strayed state by changing it"
+ " to the resident state, when the tape (or tapes) used for premigration, migration,"
+ " or save are not available."
+ " This option does not check tape availability."
+ " This option removes metadata, on GPFS, used for keeping the file/object state."
+ ""
+ " Example:"
+ " ltfsee repair file123"
+ ""
GLESL557I "Usage:"
+ ""
+ " ltfsee cleanup_dm_sess"
+ ""
+ " This option is used to cleanup stale DMAPI sessoins."
+ ""
+ " Example:"
+ " ./ltfsee cleanup_dm_sess"
+ ""
GLESL558E "More than one pool name is specified."
GLESL559E "More than one tape is specified."
GLESL560E "More than one node is specified."
GLESL561E "Too many replicas (%d replicas) were specified to migrate file %s, up to %d tape replicas are supported (target tape pools)."
GLESL562I ""
+ " tape show -t <tape_id> [-l <libraryname> ] [-a <attribute>]"
+ ""
+ " This option is used to show the configuration attributes of a tape."
+ ""
+ " -t <tape_id> The name (ID) of the tape of which to show the properties."
+ " -l <libraryname> The name of the library with which the tape is associated."
+ " If only a single library is configured in the system, this"
+ " option can be omitted."
+ " -a <attribute> The attribute to show. If this option is omitted, all"
+ " attributes of the tape are shown."
+ " Valid tape attributes are:"
+ " tapeid Name (ID) of the tape"
+ " poolname Name of the pool to which the tape is assigned"
+ " offline Offline message of the tape that was"
+ " offline-exported."
+ ""
GLESL563E "Tape %s is not found in any storage pools of the Spectrum Archive EE system."
GLESL564I "Usage: ltfsee <command> <options>"
+ ""
+ "Commands:"
+ " help - This help information for ltfsee command."
+ " info - Prints list and status information for the jobs or a resource"
+ " (libraries, node groups, nodes, drives, pools, or tapes)."
+ ""
+ "To get the syntax and explanation for a specific command, run:"
+ " ltfsee help <command>"
+ ""
GLESL565E "Failed to start the migration or premigration process. AFM filesets were found. Run the ltfsee_config -m CLUSTER command to enable AFM file state checking."
GLESL566E "Failed to import tape %s. Attempted to import files to the AFM cache fileset."
GLESL567E "Failed to import/rebuild. The specified path %s is in an AFM cache fileset."
GLESL568E "Failed to import/rebuild. Unable to check the status of path %s."
GLESL569E "The configuration of Spectrum Archive EE system is not completed."
GLESL570I ""
+ " tape validate -p <pool_name> [-l <library_name>] -t <tape_id>"
+ ""
+ " This option is used to test the real status of the specified tape which is assigned to a pool but the status is "UNKNOWN""
+ ""
+ " -p <pool_name> The pool name, which must be provided."
+ ""
+ " -l <library_name> The library name, which must be provided on multi-library systems, but can be omitted on single-library systems."
+ ""
+ " -t <tape_id> Specifies the tape ID to validate. The tape must be in the "UNKNOWN" statem, and must be assigned to a pool."
+ ""
+ " Example:"
+ " ltfsee tape validate -l library1 -p pool1 -t D00279L5"
+ ""
GLESL571I "Usage:"
+ ""
+ " ltfsee update_config [-l <library_name>]"
+ ""
+ " This option is used to update the local node configration."
+ ""
+ " Example:"
+ " ./ltfsee update_config"
+ ""
GLESL572I "Removed tape %s from pool %s successfully. Format the tape when adding it back to a pool."
GLESL573E "Invalid internal parameter"
GLESL574E "Unable to initialize DMAPI."
GLESL575I "Tape %s contains files but all of them have been deleted in GPFS."
GLESL576I "A valid renamed file found in tape:%s, ltfs:%s, gpfs:%s"
GLESL577I "A valid file found in tape:%s, ltfs:%s, gpfs:%s"
GLESL578I "Unable to stat a file on tape:%s, ltfs:%s, gpfs:%s"
GLESL579I "Orphan file found in tape:%s, ltfs:%s, gpfs:%s, uid:%s"
GLESL580E "Error occurred rc = %d on tape:%s, ltfs:%s, gpfs:%s"
GLESL581E "Tape %s needs to be reconciled."
GLESL582I "%d removed files found on tape %s"
GLESL583E "The --force_remove and --empty_remove options are mutually exclusive."
GLESL584I "Reserving tapes."
GLESL585E "Failed to reserve tapes."
GLESL586I "Tapes that failed to be reserved: %s"
GLESL587I "Continuing with the successfully reserved tapes."
GLESL588E "No valid tape to remove."
GLESL589E "Reserve failed for all valid tapes."
GLESL590E "Failed to import/rebuild. Given path %s is not under GPFS."
GLESL591E "Failed to import/rebuild. Blanks cannot be included in the target path."
GLESL592E "Failed to import/rebuild. Target path is not a directory."
GLESL593E "Import of tape %s completed with errors. See the log files /var/log/ltfsee.log for details. Try again after fixing the issues."
GLESL594E "Import of tape %s completed with errors. See the log files /var/log/ltfsee.log for details. Try again after fixing the issues and removing the target dir for import."
GLESL595E "Invalid scan %u was specified to show the migration history. The scan must be completed to show the migration history."
GLESL596E "Internal lock file error, file %s, operation %s, errno %d"
GLESL597E "Internal conf file error, file %s, operation %s, errno %d"
GLESL598E "Unable to obtain IP address."
GLESL599I "Deleting internal conf file %s"
 
# Messages for ltfsee recovery messages
GLESL600I "Scanning GPFS file systems to find migrated/saved objects in tape %s."
GLESL601I "Making %d files resident in tape %s."
GLESL602I "Scanning remaining objects migrated/saved in tape %s."
GLESL603I "Scanning non-EE objects in tape %s."
GLESL604E "Scan failed for tape %s (%s)."
GLESL605I "Tape %s has files to be recovered. The list is saved to %s."
GLESL606I "Tape %s has no files to be recovered."
GLESL607E "Recall failed for tape %s (%s)."
GLESL608E "Failed to make files resident for tape %s (%s)."
GLESL609E "There are still some files to be recovered on tape %s. Try the ltfsee recover command again."
GLESL610I "Recovery of tape %s was successful. %d files were recovered. The list is saved to %s."
GLESL611I "Tape %s has %d files to be recovered. The list is saved to %s."
GLESL612I "Changed to resident: %d/%d."
GLESL613E "Cannot remove tape %s because there are files to be recovered."
GLESL614E "Cannot open %s for write. (%d)"
GLESL615E "Cannot open %s for read. (%d)"
GLESL616E "Cannot append %s to %s. (%d)"
GLESL617E "Scan failed for filesystem %s."
GLESL618E "Cannot read %s correctly. (%d)"
GLESL619E "Cannot write %s correctly. (%d)"
GLESL620I "Recovery of tape %s was successful, but the operation failed to find non-EE files on the tape."
GLESL621I "%d files were recovered. The list is saved to %s."
GLESL622I "%d non-EE files are found on tape %s. The list is saved to %s. Salvage the files by using Spectrum Archive LE or SDE after the tape is removed."
GLESL623E "Cannot write the non-EE file list %s correctly. (%d)"
GLESL624E "There is no local work directory %s for the GPFS policy scan."
GLESL625I "Tape %s is already recovered."
GLESL626E "Mount check for DMAPI-enabled file systems failed at %s"
GLESL627E "Found unmounted DMAPI-enabled fs, mntpt:%s"
GLESL628I "Existing DMAPI-enabled fs mntpt:%s"
GLESL629E "All DMAPI-enabled file systems need to be mounted."
GLESL630E "Cannot move tape %s because the status is 'Critical'."
GLESL631E "Failed to export some tapes."
+ "Tapes (%s) were successfully exported."
+ "Tapes (%s) are still in the pool and need a retry to export them."
+ "Tapes (%s) are in the Exported state but some GPFS files may still refer to files on these tapes. TPS list to fix is: %s"
GLESL632I "Reconcile as a part of an export has finished."
GLESL633E "Export of tape %s completed with errors. Tape was not exported and needs a retry."
GLESL634I "ltfsee export runs in special mode (%s)."
GLESL635I "Detected stale internal conf file. Overwriting."
GLESL636I "Internal conf file, pid %d, node_id %d, ip_address %s"
GLESL637I "Writing to internal conf file %s, pid %d, node_id %d, ip_address %s"
 
# messages related to migration driver (G stands for Glue, D is taken by GLESDEBUG)
##### DO NOT REMOVE!! this msg is parsed by log analyzer #####
GLESG001I "Migrating data of GPFS file %s to %s."
GLESG002I "Updating in GPFS the LTFS EE metadata related to GPFS file %s and LTFS file %s."
GLESG003E "Migration target tape not provided."
GLESG004E "Unable to get GPFS node id for this node."
GLESG005E "Could not obtain a node (id) for distributing a file recall."
GLESG006E "Could not obtain the debug level from LTFS EE."
GLESG007I "%d copy processes outstanding."
GLESG008E "Unable to determine the cluster ID."
GLESG009E "The migration and recall driver is not able to connect to Spectrum Archive EE (%d)."
GLESG010E "Unable to change status of the migration job for file %s."
GLESG011W "Synchronizing LTFS EE failed."
GLESG012E "Error in file status change for file %s, status change flag: %d."
GLESG013I "Current data of GPFS file %s previously migrated to %s, skipping new data transfer."
GLESG014E "Migrating data of GPFS file %s: unable to write to tape %s and LTFS file %s (errno:%d)."
GLESG015E "Updating LTFS EE metadata of GPFS file %s: unable to write to tape %s and LTFS file %s (errno:%d)."
GLESG016I "%s, file path: %s, file inode number: %s."
GLESG017I "Signal %d (%s) received."
GLESG018I "Removing LTFS EE functionality from GPFS file system %s."
GLESG019E "Failed to get LTFS EE IP address, error return code: %d."
GLESG020E "Unable to connect to LTFS EE service."
GLESG021E "Unable to recall file with inode number: %llu from tape %s. The tape might not be available."
GLESG022E "Processing recall for file with inode: %llu. Tape %s became unavailable. The recall fails."
GLESG023E "Processing recall for file with inode: %llu. Unable to open data file %s (errno:%d)."
GLESG024E "Processing recall for file with inode: %llu. Request for exclusive DMAPI write on the file failed."
##### DO NOT REMOVE!! this msg is parsed by log analyzer #####
GLESG025I "Processing recall for file with inode: %llu. Started copying data from %s."
GLESG026E "Processing recall for file with inode: %llu. Copying data from %s failed."
##### DO NOT REMOVE!! this msg is parsed by log analyzer #####
GLESG027I "Processing recall for file with inode: %llu. Recalled data from %s to GPFS (Size: %ld, R: %ld.%09ld, W: %ld.%09ld, T: %ld.%09ld)."
GLESG028E "Processing recall for file with inode: %llu. Releasing exclusive DMAPI write on the file failed."
GLESG029W "Processing recall for file with inode: %llu. Failed to remove LTFS file %s."
GLESG030I "Processing recall for file with inode: %llu. Removed LTFS file %s."
GLESG031W "Determining file inode number while processing a file state change failed."
GLESG032E "Failed to read LTFS EE DMAPI attribute IBMTPS when processing migration of file %s."
# TBD: should GLESG033 be W?
GLESG033E "Removing LTFS EE DMAPI attributes failed."
GLESG034E "Processing recall for file with inode: %llu. Reading data from LTFS file %s failed."
GLESG035E "Unable to allocate memory."
GLESG036E "Error in notifying and processing file state change, file path: %s, file inode number: %lu."
GLESG037E "Unable to read attributes from LTFS EE data file %s. (errno:%d)"
GLESG038I "File %s has been replaced after migration. The redundant copy or stubbing process will be stopped."
GLESG039E "dm_remove_dmattr() failed, errno: %d, error: %s."
GLESG040I "externalNotifyFileStateChange() invoked with: sid=%d, handle=%d, hlen=%d, token=%d, evType=%s, path=%s, target=%s, options=%s, flags=%d."
GLESG041I "File %s has been modified after migration. The redundant copy process will be stopped."
GLESG042E "Migration of GPFS file %s: LTFS data file path/name too long: %s/%s/%s."
##### DO NOT REMOVE!! this msg is parsed by log analyzer #####
GLESG043I "Data move completed (Size: %ld, R: %ld.%09ld, W: %ld.%09ld, T: %ld.%09ld). Updating the Spectrum Archive EE metadata related to GPFS file %s and Spectrum Archive EE file %s."
GLESG044E "Determining the file inode number while processing (distributing) a file recall failed."
GLESG045E "Processing recall for file with inode: %s, file cannot be recalled because it is offline, offline message: %s."
GLESG046E "Processing recall for file with inode: %s, fatal error while determining the file offline status."
GLESG047I "File %s has been modified after premigration. The stubbing process will be stopped."
GLESG048E "File %s cannot be migrated because the presence of the premigrated file on LTFS could not be verified."
GLESG049W "LTFS EE appears to not be running. Failed to get the LTFS EE IP address, error return code: %d."
GLESG050W "LTFS EE appears to not be running. Failed to connect to the LTFS EE service."
GLESG051W "LTFS EE appears to not be running. A query to the LTFS EE service failed."
GLESG052E "Unable to get file UID (%d)."
GLESG053E "Unable to get tape ID (%d)."
GLESG054E "Unable to get file system ID (%d)."
GLESG055E "Unable to get file system name (%d)."
GLESG056E "Unable to get node ID for recall."
GLESG057E "Failed to change the file (name:%s, inode:%llu) from migrated to premigrated."
GLESG058E "Failed to change the file (name:%s, inode:%llu) from migrated to resident."
GLESG059E "Failed to change the file (name:%s, inode:%llu) from premigrated to resident."
#GLESG060E "Failed to change the file (name:%s, inode:%s) from resident to premigrated."
#GLESG061E "Failed to change the file (name:%s, inode:%s) from resident to premigrated and removing LTFSEE DMAPI attributes failed."
#GLESG062E "Failed to change the file (name:%s, inode:%s) from resident to migrated."
GLESG063E "Failed to change the file (name:%s, inode:%s) from resident to migrated and removing LTFSEE DMAPI attributes failed."
GLESG064E "File (name:%s, inode:%llu) changed from migrated to resident but removing the LTFSEE DMAPI attributes failed."
GLESG065E "File (name:%s, inode:%llu) changed from premigrated to resident but removing the LTFSEE DMAPI attributes failed."
GLESG066E "File (name:%s, inode:%llu) truncated to zero size and changed to resident but removing the LTFSEE DMAPI attributes failed."
GLESG067E "Failed to truncate the file (name:%s, inode:%llu) to zero size."
GLESG068E "Preparing the file (name:%s, inode:%s) for migration state change failed."
GLESG069E "Premigrating file %s failed due to a read error in GPFS (dm_read_invis() errno: %d)."
GLESG070E "Not enough space to recall file with i-node %lu."
GLESG071E "Cannot migrate file %s because its EA names conflict with LTFS EE reserved EA names."
GLESG072E "Cannot migrate file %s because some of its user-defined EA values have size above LTFS EE individual EA size limit."
GLESG073E "Cannot migrate file %s because its user-defined EA values have aggregated size above LTFS EE aggregated EAs size limit."
GLESG074E "Failed to open file %s for reading and migrating its extended attributes."
GLESG075E "Failed to get list of extended attributes while migrating file %s."
GLESG076E "Failed to read extended attribute values while migrating file %s."
GLESG077E "Failed to allocate memory for reading extended attributes while migrating file %s."
GLESG078E "Failed to write user-defined EAs to LTFS while migrating file %s."
GLESG079I "Synchronizing LTFS EE tapes information."
GLESG080E "Synchronizing LTFS EE failed."
GLESG081E "Migrating data of GPFS file %s: write failed to tape %s and LTFS file %s (data length: %d, rc: %d, errno: %d)."
GLESG082W "Processing recall for file with inode: %llu. Failed to get ibm.ltfsee.gpfs.path attribute of LTFS file %s."
GLESG083W "Processing recall for file with inode: %llu. The symbolic link %s refers to the wrong LTFS file %s."
GLESG084W "Processing recall for file with inode: %llu. Failed to remove symbolic link %s (unlink() errno: %d)."
GLESG085W "Processing recall for file with inode: %llu. Failed to remove directory %s (rmdir() errno: %d)."
GLESG086I "Processing recall for file with inode: %llu, removed LTFS symbolic link %s."
GLESG087E "Failed to reliably read user-defined EAs while migrating file %s, error code: %d."
GLESG088I "LTFS EE service seems to be running."
GLESG089I "LTFS EE service for tape library (%s) seems to not be running."
GLESG090E "Failed to communicate to LTFS EE service."
GLESG091E "There are ongoing migrations (%d), failing removal of LTFS EE functionality for GPFS filesystem %s."
GLESG092E "Failed to prepare temporary directory to be used for storing GPFS scan result."
GLESG093I "Scanning GPFS filesystem %s for migrated and premigrated files."
GLESG094E "Failed to scan GPFS filesystem %s for migrated and premigrated files."
GLESG095I "Readmitting migration requests for library (%s)."
GLESG096E "Failed to readmit migration requests."
GLESG097I "Proceeding to remove LTFS EE functionality for GPFS filesystem %s."
GLESG098E "Failed to remove LTFS EE functionality for GPFS filesystem %s because it has migrated and/or premigrated files ( listed in /tmp/ltfsee/remmgmt/scanresult )."
GLESG099I "Stopping acceptance of new migration requests for library (%s)."
GLESG100E "Cannot find the file (%s) for recall (errno: %d)."
GLESG101E "Cannot find linked file from (%s) (errno: %d)."
GLESG102E "The attribute %s of tape %s was invalid."
GLESG103E "Failed to update the number of tape blocks on MMM."
GLESG104E "Failed to get attribute %s of tape %s (errno: %d)."
GLESG105E "Error creating tape locking directory (errno: %d)."
GLESG106E "Path of tape locking directory is not a directory."
GLESG107E "Unable to open tape lock (errno: %d)."
GLESG108E "Unable to lock tape (filename: %s, errno: n/a)."
GLESG109E "Unable to unlock tape."
GLESG110E "Tape %s has changed its storage pool. Processing of file %s terminated."
GLESG111I "File with inode %lld successfully changed to premigrated state."
GLESG112I "File with inode %lld successfully changed to resident state."
GLESG113E "Unable to add recall job for i-node %llu."
GLESG114W "Generation number is at its maximum value (%s)."
GLESG115I "Not migrating %s because the file changed from empty to non-empty."
GLESG116I "Not saving %s because the file changed from non-empty to empty."
GLESG117E "Failed to open %s for reading and saving its extended attributes."
GLESG118E "Failed to get list of extended attributes while saving %s."
GLESG119E "Failed to read extended attribute values while saving %s."
GLESG120E "Failed to allocate memory for reading extended attributes while saving %s."
GLESG121E "Failed to reliably read user-defined EAs while saving file %s, error code: %d."
GLESG122E "Cannot save %s because its EA names conflict with LTFS EE reserved EA names."
GLESG123E "Cannot save %s because some of its user-defined EA values have size above LTFS EE individual EA size limit."
GLESG124E "Cannot save %s because its user-defined EA values have aggregated size above LTFS EE aggregated EAs size limit."
GLESG125E "Saving data of GPFS file %s: unable to write to tape %s and LTFS file %s (errno:%d)."
GLESG126E "Failed to create symbolic link %s."
GLESG127E "Failed to write user-defined EAs to LTFS while saving file %s."
GLESG128E "Failed to write EA %s to %s."
GLESG129E "Failed to write %s DMAPI attribute to %s."
GLESG130E "Save of GPFS file %s: LTFS data file path/name too long: %s/%s/%s."
GLESG131E "The save driver is not able to connect to LTFS EE."
GLESG132I "Cannot save %s because its file type is incorrect."
GLESG133I "Cannot save %s because the state is offline."
GLESG134E "Failed to create directory on tape. (errno:%d)"
GLESG135I "%s has been modified after save. The save process will be stopped."
GLESG136W "Failed to save %s because symbolic link path is too long."
GLESG137E "Failed to get real path from %s, error code: %d."
GLESG138I "Not saving %s because the directory changed from non-empty to empty."
GLESG139E "Unable to process the job status change within MMM."
GLESG140E "Unable to request exclusive DMAPI rights for file system object %s, DMAPI error code: %d."
GLESG141E "Unable to release exclusive DMAPI rights for file system object %s, DMAPI error code: %d."
GLESG142E "Unable to clean up DMAPI data structures."
GLESG143E "Incorrectly formatted line within save list file %s."
GLESG144E "File access failed because file is offline (%s), file inode number: %llu."
GLESG145E "File access failed because file is offline (%s) and failed to determine file inode number."
GLESG146E "Failed to check offline status for file with inode number %llu."
GLESG147E "Failed to check offline status for file and failed to determine file inode number."
GLESG148E "Unable to read IBMUID for file with i-node number %llu. Recall for this file failed."
GLESG149E "Failed to get GPFS cluster id."
GLESG150E "Processing %s failed because tape %s is not accessible."
GLESG151W "Unable to obtain the modification time value for file %s."
GLESG152I "File %s has been modified while transferring to tape. The copy process will be stopped."
GLESG153E "Tape %s has access error. Processing of file %s terminated."
GLESG154E "File access failed because file is offline, file inode number: %llu."
GLESG155E "Unable to add recall job for i-node %llu from tape %s."
GLESG156E "Unable to start recall driver (argc:%d)."
GLESG157E "Unable to start recall driver (event:%s)."
GLESG158E "Unable to initialize DMAPI."
GLESG159I "Recall %llu from tape (%s)."
GLESG160E "Unable to finish the recall job for i-node %llu."
GLESG161I "Recalled replica:Tape(%s),Pool(%s),Library(%s),i-node:%llu"
GLESG162I "The RPC server is temporarily unavailable. Will retry (%d)."
GLESG163I "Tape %s is offline. Skipping validity check for file (%s) for stubbing."
GLESG164E "MMM is not functional. Terminating the job."
GLESG165E "Unexpected error occurred."
 
GLESG500E "Unable to get file UID (file:%s)."
GLESG501E "Failed to create user-defined EA map for file %s."
GLESG502E "Unable to migrate file %s because EA is not acceptable."
GLESG503E "Unable to access %s. (errno:%d)"
GLESG504E "Unable to set EA (%s) for file %s. (errno:%d)"
GLESG505E "Spectrum Archive EE Data Directory is too long (%d). (%s/%s/%s/%s)"
GLESG506E "Migration file (%s) to tape %s failed (%d)."
GLESG507E "Failed to read Spectrum Archive EE DMAPI attribute (%s) (errno:%d)."
GLESG508E "Failed to stub file %s."
GLESG509E "Premigrated file %s does not have valid tape."
#GLESG510E "PRESTIME%d is not valid."
#GLESG511E "Unable to access file %s (errno:%d)."
#GLESG512E "File has been updated before stubbing."
#GLESG513W "Failed to remove DMAPI attributes (%s) for file %s."
GLESG514W "Unable to identify the file (inode number: %llu)."
GLESG515E "Unable to update the DMAPI attribute for file (inode number: %llu)."
# not used GLESG516W "Unable to remove the DMAPI attribute for file (inode number: %llu)."
GLESG517I "%s, file path: %s, file inode number: %llu."
GLESG518I "Recall initiated:(inode number: %llu)."
#GLESG519E "Failed to stubbing for pool id (%s). The file (%s) has been migrated to pool id(%s)."
GLESG520E "Internal parameter was invalid (%s)."
GLESG521W "Unable to connect to the Spectrum Archive EE service for tape library (%s) to check the tape state for (%s)."
GLESG522E "Unable to get MMM information for library (%s). Failed to recall (inode number: %llu)."
GLESG523I "Updating immutable information in GPFS for the Spectrum Archive EE metadata that is related to GPFS file %s and LTFS file %s."
GLESG524E "Cannot readmit migration requests for library (%s) (%d)."
GLESG525I "Updating tape information for %s."
GLESG526E "Failed to update tape information for %s (%d)."
GLESG527E "Failed to check the AFM status of file %s."
GLESG528E "Unable to migrate file %s due to its AFM status (AFM status: %s). This file is skipped for migration."
#GLESG529E "Unable to migrate file %s because all tapes are offline exported."
GLESG530E "Unable to confirm tape %s in tape library (%s) is valid for stubbing file (%s)."
GLESG531E "Tape %s in tape library (%s) is not valid."
 
GLESG545W "Unknown return code detected (ret: %d)."
 
 
# messages related to reconciliation driver
GLESS001I "Reconciling tape %s has been requested."
GLESS002I "Reconciling tape %s complete."
GLESS003E "Reconciling tape %s failed due to a generic error."
GLESS004E "Unable to allocate memory."
GLESS005E "The reconcile process is unable to connect to the LTFS EE service."
GLESS006I "Loading GPFS info for tape %s."
GLESS007I "Reconcile tape %s process has started."
GLESS008E "The Spectrum Archive EE system is not running. Use the 'ltfsee start' command first."
GLESS009E "Unable to connect to the LTFS EE service. Check if LTFS EE has been started."
GLESS010I "Loading information for tape %s."
GLESS011E "Failed to check tape %s files against existing GPFS file systems."
GLESS012E "The reconcile process is unable to connect to the LTFS EE service and/or obtain debug level."
GLESS013E "A tape reconcile process is unable to connect to the LTFS EE service and/or obtain debug level."
GLESS014W "GPFS orphan found. GPFS file %s was migrated to LTFS file %s, but the LTFS file does not exist."
GLESS015E "Specified reconcile options are not valid."
GLESS016I "Reconciliation requested."
GLESS018E "Reconciliation process could not determine LTFS EE metadata directory."
GLESS019E "Failed to get LTFS EE IP address, error return code: %d."
GLESS020E "Unable to connect to LTFS EE service."
GLESS021E "%s is migrated, premigrated, or saved to LTFS but does not have consistent LTFS EE DMAPI attributes."
GLESS022E "Found one or more migrated or saved GPFS files or empty objects with inconsistent Spectrum Archive EE DMAPI attributes."
+ "Continuing reconciliation can result in data loss."
+ "The problematic files or empty objects are listed in the GLESS021E messages in the ltfsee.log."
+ "It is necessary to fix or remove the problematic files or empty objects for the reconciliation to work."
GLESS023E "Reconcile option -t cannot be combined with options -T or -S."
GLESS024E "Reconcile option -s cannot be combined with options -T or -S."
GLESS025E "Reconcile options -T and -S cannot be combined."
GLESS026E "Reconcile option -t requires arguments."
GLESS027E "Reconcile option -s requires arguments."
GLESS028E "Reconcile option -g requires arguments."
GLESS029E "Reconcile option --delete_files_from_other_file systems is not supported."
GLESS030E "One or more specified GPFS file systems do not exist: %s."
GLESS031E "One or more specified GPFS file systems are not DMAPI enabled (HSM managed): %s."
GLESS032E "One or more specified GPFS file systems are not mounted: %s."
GLESS033E "Specified tape is not a potentially valid tape visible to LTFS EE: %s."
GLESS034E "The specified pool %s is not a valid pool."
GLESS035I "Reconciling data files on tape %s."
GLESS036I "LTFS data file %s matches GPFS file %s."
GLESS037I "Updating ibm.ltfsee.gpfs.path attribute of LTFS data file %s to %s."
GLESS038I "Removing LTFS data file %s (EA ibm.ltfsee.gpfs.path: %s)."
GLESS039I "Reconciled data files on tape %s."
GLESS040I "Reconciling symbolic links (namespace) on tape %s."
GLESS041I "symlink --> data file : %s --> %s."
GLESS042I "Reconciled symbolic links (namespace) on tape %s."
GLESS043E "Failed creating a snapshot of GPFS file system %s."
GLESS044W "Unable to determine whether LTFS data file %s belongs to GPFS FSs the tape is being reconciled against - leaving the file untouched (EA ibm.ltfsee.gpfs.path: %s)."
GLESS045E "Found data files on tape %s for which it cannot be determined whether they belong to GPFS file systems the tape is being reconciled against. Reconcile fails."
GLESS046I "Tape %s has been exported. Reconcile will be skipped for that tape."
GLESS047I "No valid tapes for reconciliation found."
GLESS048E "Fatal error while creating snapshot of GPFS file system %s."
GLESS049I "Tapes to reconcile: %s."
GLESS050I "GPFS file systems involved: %s."
GLESS051E "Reconciliation cannot proceed. Make sure LTFS EE resources are not reserved by another process, then try the reconciliation again."
GLESS052I "Need to wait for pending migrations to finish. Number of pending migrations: %d."
GLESS053I "Number of pending migrations: %d."
GLESS054I "Creating GPFS snapshots:"
GLESS055I "Deleting the previous reconcile snapshot and creating a new one for %s ( %s )."
GLESS056I "Scanning GPFS snapshots:"
GLESS057I "Scanning GPFS snapshot of %s ( %s )."
GLESS058I "Removing GPFS snapshots:"
GLESS059I "Removing GPFS snapshot of %s ( %s )."
GLESS060I "Processing scan results:"
GLESS061I "Processing scan results for %s ( %s )."
GLESS062E "Error in processing scan results for %s ( %s )."
GLESS063I "Reconciling the tapes:"
GLESS064E "Cannot readmit migration requests. Check if LTFS EE is running."
GLESS065E "Failed to determine the GPFS file systems involved (1)."
GLESS066E "Failed to determine the GPFS file systems involved (2)."
GLESS067E "Reconciliation: failed to determine the state of tapes involved."
GLESS068E "Creating set of all GPFS file systems failed (%d)."
GLESS069E "Fatal error while creating set of all GPFS file systems (%d)."
GLESS070E "Creating set of DMAPI enabled GPFS file systems failed (%d)."
GLESS071E "Fatal error while creating set of DMAPI enabled GPFS file systems (%d)."
GLESS072E "Creating set of mounted DMAPI enabled GPFS file systems failed (%d)."
GLESS073E "Fatal error while creating set of mounted DMAPI enabled GPFS file systems (%d)."
GLESS074E "Failed to call command (%s) (%d)."
GLESS075E "Removing directory for reconcile failed."
GLESS076E "Creating directory for reconcile failed."
GLESS077E "Scanning GPFS snapshot failed."
GLESS078E "Reconciling ibm.ltfsee.gpfs.path EA (LTFS EE metadata) of LTFS file %s corresponding to GPFS file %s failed, setxattr() return value: %d, errno: %d (%s)."
GLESS079I "Reconcile is skipped for tape %s because the tape does not contain LTFSEE data."
GLESS080E "The reconcile process is unable to connect to LTFS EE service to check reconcile job status for tape %s."
GLESS081E "Failed to get storage pools information from LTFS EE service."
GLESS082E "Failed to determine tapes belonging to one or more storage pools."
GLESS083E "Symlink is not created for file %s because EA ibm.ltfsee.gpfs.path is absent or invalid (%s)."
GLESS084I "Reconciling EAs of LTFS data file %s corresponding to GPFS file %s."
GLESS085E "Reconciling EAs of LTFS data file %s corresponding to GPFS file %s failed."
GLESS086I "Reconcile is skipped for tape %s because it is already reconciled."
GLESS087E "Arguments too long (arguments string maximum size is 8KB)."
GLESS088E "Failed to create temporary GPFS scan policy file (open() failed)."
GLESS089E "Failed to read temporary GPFS scan result file (open() failed)."
GLESS090E "Failed creating temporary files containing GPFS filesystems lists."
GLESS091W "Removing GPFS snapshot of %s ( %s ) failed."
GLESS092E "Failed to open the temporary GPFS list file for tape %s."
GLESS093E "Failed to load GPFS scan result for tape %s."
GLESS094E "Failed to create and load the information for tape %s. Ret:%d"
GLESS095E "Failed to determine involved GPFS filesystems when reconciling tape %s."
GLESS096E "Failed loading GPFS filesystems lists (open() failed) when reconciling tape %s."
GLESS097E "Failed to create symlink %s, symlink() rc:%d errno:%d."
GLESS098E "Reconciling tape %s failed because orphaned (lost or strayed) files are found in GPFS."
GLESS099E "Reconciling tape %s failed because of conflicts during the setting symbolic links."
GLESS100E "Reconciling tape %s failed because setting symbolic links failed."
GLESS101E "Reconciling tape %s failed because there are files on the tape that are not from the GPFS file systems the tape is being reconciled against, and EA ibm.ltfsee.gpfs.path is absent or invalid for some of those files. Those files are listed in ltfsee.log files in GLESS083E messages."
GLESS102E "Reconciling EAs of LTFS data file %s corresponding to GPFS file %s failed because opening GPFS file for read (from snapshot) failed."
GLESS103E "Reconciling EAs of LTFS data file %s corresponding to GPFS file %s failed because reading GPFS file EA names (from snapshot) failed."
GLESS104E "Reconciling EAs of LTFS data file %s corresponding to GPFS file %s failed because reading GPFS file EAs (from snapshot) failed."
GLESS105E "Reconciling EAs of LTFS data file %s corresponding to GPFS file %s failed because allocating memory for reading GPFS file EAs failed."
GLESS106E "Reconciling EAs of LTFS data file %s corresponding to GPFS file %s failed because GPFS file EA names conflict with EA names reserved for LTFS EE metadata."
GLESS107E "Reconciling EAs of LTFS data file %s corresponding to GPFS file %s failed because some EAs of GPFS file are individually too long."
GLESS108E "Reconciling EAs of LTFS data file %s corresponding to GPFS file %s failed because the total size of GPFS EAs is too large."
GLESS109E "Reconciling EAs of LTFS data file %s corresponding to GPFS file %s failed because opening LTFS data file for read failed."
GLESS110E "Reconciling EAs of LTFS data file %s corresponding to GPFS file %s failed because reading LTFS data file EA names failed."
GLESS111E "Reconciling EAs of LTFS data file %s corresponding to GPFS file %s failed because reading LTFS data file EAs failed."
GLESS112E "Reconciling EAs of LTFS data file %s corresponding to GPFS file %s failed because allocating memory for reading LTFS data file EAs failed."
GLESS113E "Reconciling EAs of LTFS data file %s corresponding to GPFS file %s failed because removing LTFS data file stale EAs failed."
GLESS114E "Reconciling EAs of LTFS data file %s corresponding to GPFS file %s failed because (re)writing LTFS data file EAs failed."
GLESS115E "Reconciling EAs of LTFS data file %s corresponding to GPFS file %s failed because opening LTFS data file for writing (EAs) failed."
GLESS116E "Reconciling tape %s failed because opening GPFS file for read (from snapshot) failed. File information is in GLESS102E in ltfsee.log files."
GLESS117E "Reconciling tape %s failed because reading GPFS file EA names (from snapshot) failed. File information is in GLESS103E in ltfsee.log files."
GLESS118E "Reconciling tape %s failed because reading GPFS file EAs (from snapshot) failed. File information is in GLESS104E in ltfsee.log files."
GLESS119E "Reconciling tape %s failed because allocating memory for reading GPFS file EAs failed. File information is in GLESS105E in ltfsee.log files."
GLESS120E "Reconciling tape %s failed because GPFS file EA names conflict with EA names reserved for LTFS EE metadata. File information is in GLESS106E in ltfsee.log files."
GLESS121E "Reconciling tape %s failed because some EAs of GPFS file are individually too long. File information is in GLESS107E in ltfsee.log files."
GLESS122E "Reconciling tape %s failed because the total size of GPFS EAs is too large. File information is in GLESS108E in ltfsee.log files."
GLESS123E "Reconciling tape %s failed because opening LTFS data file for read failed. File information is in GLESS109E in ltfsee.log files."
GLESS124E "Reconciling tape %s failed because reading LTFS data file EA names failed. File information is in GLESS110E in ltfsee.log files."
GLESS125E "Reconciling tape %s failed because reading LTFS data file EAs failed. File information is in GLESS111E in ltfsee.log files."
GLESS126E "Reconciling tape %s failed because allocating memory for reading LTFS data file EAs failed. File information is in GLESS112E in ltfsee.log files."
GLESS127E "Reconciling tape %s failed because removing LTFS data file stale EAs failed. File information is in GLESS113E in ltfsee.log files."
GLESS128E "Reconciling tape %s failed because (re)writing LTFS data file EAs failed. File information is in GLESS114E in ltfsee.log files."
GLESS129E "Reconciling tape %s failed because opening LTFS data file for writing (EAs) failed. File information is in GLESS115E in ltfsee.log files."
GLESS130E "Reconciling tape %s failed because reconciling a file EAs failed. File information is in GLESS085E in ltfsee.log files."
GLESS131E "Failed to reserve tapes for reconciliation."
GLESS132I "Tapes that could not be reserved for reconciliation: %s."
GLESS133I "Continuing reconciliation for successfully reserved tapes."
GLESS134I "Reserving tapes for reconciliation."
GLESS135I "Reserved tapes: %s."
GLESS136E "Failed to cancel tape reservations."
GLESS137I "Removing tape reservations."
GLESS138E "Reconcile option -w arguments are missing or invalid."
GLESS139E "Scanning GPFS snapshot was terminated (%d)."
GLESS140E "Error opening directory %s."
GLESS141I "Removing stale DMAPI attributes:"
GLESS142I "Removing stale DMAPI attributes for %s ( %s )."
GLESS143E "Failed removing stale DMAPI attributes for %s ( %s )."
GLESS144E "Failed to determine snapshot time."
GLESS145E "Unexpected end of XML stream."
GLESS146E "Failed to read XML file (%d)."
GLESS147E "Unexpected XML Syntax."
GLESS148E "Failed to get file name."
GLESS149I "Can't skip to ltfsee directory."
GLESS150E "Unexpected format of IBMTPS ( %s )."
GLESS151I "Can't find the ltfsee directory in XML."
GLESS152E "Cannot create an LTFS label parser for file %s."
GLESS153E "Failed to get an attribute."
GLESS154E "Reconcile option -p requires arguments."
GLESS155E "Reconcile option -l requires arguments."
GLESS156E "Reconcile needs option -p."
GLESS157E "Cannot get default library name (%d)."
GLESS158E "Cannot get library name (%d)."
GLESS159I "File %s size is 0."
GLESS160E "The synchronization of tape %s failed."
GLESS161I "Reconcile %s precheck started."
GLESS162I "Reconcile %s precheck completed. Precheck count: %u"
GLESS163I "Loading of GPFS info completed for tape %s."
GLESS164I "Loading of Spectrum Archive info completed for tape %s."
GLESS165I "Reconcile all data files completed for tape %s."
GLESS166I "Reconcile %s progress: %u/%u ltfs files (~%d%%)"
GLESS167I "Reconcile %s precheck: GPFS info size: %u Spectrum Archive info size: %u"
GLESS168I "Reconcile %s precheck: found a Spectrum Archive file that does not exist in GPFS: %s"
GLESS169I "Reconcile %s precheck: found a renamed file: %s"
GLESS170W "Reconcile %s precheck: failed to obtain UEAs from GPFS %s rc: %d"
GLESS171I "Reconcile %s precheck: UEA num differ for %s and %s"
GLESS172I "Reconcile %s precheck: UEA differ for %s and %s"
GLESS173I "Reconcile %s precheck: symbolic link check started"
GLESS174I "Reconcile %s precheck: found a file with no sym link: %s"
GLESS175I "Reconcile %s precheck: found a sym link mismatch: %s"
GLESS176I "Reconcile %s precheck: symbolic link check completed"
GLESS177E "XML read error %d during %s for tape %s"
GLESS178E "Can't get node name."
GLESS179E "Can't get node type."
GLESS180E "Can't get a target of symlink."
GLESS181E "Failed to create symlink directory %s, errno %d"
GLESS182E "Reconciling tape %s failed due to symbolic link creation failure. The tape is possibly full."
GLESS183I "Tape:%s, free_capacity:%lu, ref_block_num:%lu, total_block_num:%lu"
GLESS184E "Reconcile %s failed due to not enough remaining capacity."
GLESS185I "Loading %s symlink."
GLESS186I "Initializing reconcile working directory."
GLESS187I "Detected state changing files for %s."
GLESS188I "Detected following fs have state changing files: %s"
GLESS189I "Waiting for %d sec."
GLESS190I "Checking and waiting for the state changes to complete for %s."
GLESS191I "Detected timeout for %s. No further fs will be checked. Proceeding to retry."
GLESS192I "Checking complete. No retry is needed."
GLESS193I "Checking the state changing files for %s."
GLESS194E "Valid state changing files detected for %s. This is the second time, giving up."
GLESS195I "Checking done for %s."
GLESS196E "State change file load failed. fs=%s %s"
GLESS197E "QEDMMM check failed. %s"
GLESS198I "Wait state change %s complete"
GLESS199I "Wait state change %s started"
GLESS200I "Wait state change %s sleeping %d sec"
GLESS201I "Wait state change %s vecsize=%lu, loop_cnt=%u"
GLESS202I "Wait state change %s all QEDMMM are stale"
GLESS203I "Wait state change %s timeout"
GLESS204E "Found invalid format GPFS list file for tape %s at %s"
GLESS205I "Checked QEDMMM %s for %s, result=%s"
GLESS206E "Found invalid format GPFS list file for gpfs %s at %s"
GLESS207I "Loaded %s size: %u"
GLESS208I "Could not load Spectrum Archive list for tape %s (ret=%d). Starting a reconcile job."
GLESS209I "Submit tape reserve: tape=%s, res_uid=%s, pid=%d, stime=%lu, nodeid=%d, nodeip=%s"
GLESS210I "Valid tapes in the pool: %s"
 
 
 
#messages related to reclamation
GLESR001I "Reclamation started, source tape: %s, target tape: %s."
GLESR002I "Reclamation successful."
#GLESR003I removed: a message has been changed to dbgprintf
GLESR004E "Processing file %s failed: exiting the reclamation driver."
#GLESR005I removed: a message has been changed to dbgprintf
GLESR006E "Unable to format tape %s."
GLESR007E "Unable to receive storage pool information for tape %s to remove it from its pool."
GLESR008E "Error removing tape %s from pool %s (rc=%d)."
GLESR009I "Unable to find file %s: skipping %s for reclamation."
GLESR010E "Unable to determine GPFS path for file %s: skipping this file for reclamation."
GLESR011I "Orphan data for file %s found. UID stored for this file: %s."
GLESR012E "Unable to get information for file %s on tape %s."
GLESR013E "Unable to read data directory on tape %s."
GLESR014I "The reclamation process has been interrupted.(%d)"
GLESR015E "Unable to determine the start block for file %s: skipping this file for reclamation."
GLESR016I "Source tape %s has no data to transfer to target tape. (%d)"
GLESR017E "unable to allocate memory."
GLESR018E "Tape %s not found."
GLESR019E "Job with id %s and session %u has not been found."
GLESR020E "The %s reclamation process is unable to connect to the Spectrum Archive EE service."
GLESR021E "Unable to determine the IP address of the LTFS EE service."
GLESR022I "The reclamation process needs to be performed on another target tape since the current tape is full."
GLESR023I "The reclamation process has been interrupted by a recall."
GLESR024E "The reclamation process failed. There was an issue with the source tape (%d)."
+ " Please have a look for previous messages."
GLESR025E "The reclamation process failed. There was an issue with the target tape (%d)."
+ " Please have a look for previous messages."
GLESR026E "The reclamation process failed (%d)."
+ " Please have a look for previous messages."
GLESR027I "The reclamation process was not completed. The source tape needs to be reconciled.(%d)"
+ " Please have a look for previous messages."
GLESR028I "The reclamation process was interrupted."
GLESR029E "The reclamation process failed because the target tape was not a valid LTFS tape."
GLESR030E "The reclamation process failed. (%d)"
+ " Please have a look for previous messages."
GLESR031E "The reclamation process failed because it was not possible to initialize a DMAPI session."
GLESR032E "The reclamation process failed because there was an error synchronizing tape %s."
GLESR033E "Unable to synchronize LTFS EE. The reclamation process will be stopped."
GLESR034E "The reclamation process failed because there was an error changing the DMAPI attributes of file %s."
GLESR035E "The copy process from source to destination tape failed for the file %s."
GLESR036E "Unable to determine file size for file %s on target tape."
GLESR037E "Unable to determine file size for file %s on source tape."
GLESR038E "File sizes differ for file %s on source and target tapes. (source:%d target:%d)"
GLESR039E "Error obtaining extended attributes from file %s. File will be copied to target tape without extended attributes."
GLESR040E "Error converting extended attributes from file %s to user attributes. File will be copied to target tape without extended attributes."
GLESR041E "Error setting extended attributes on target tape for file %s."
GLESR042E "Error opening file %s."
GLESR043E "Error opening directory %s."
GLESR044E "Error removing IBMTPS, IBMSPATH, IBMSTIME and/or IBMUID attributes from %s."
GLESR045E "Error reading extended attribute %s from file %s (errno: %d)."
GLESR046E "Error writing extended attribute %s value %s to file %s (errno: %d)."
GLESR047E "Error removing data for file %s and link %s on source (%d)."
#GLESR048E "Error removing link %s on source (%d)."
GLESR049I "Getting the list of files from the source tape %s."
GLESR050I "Copy process of %d files started."
GLESR051I "%d of %d files have been processed."
GLESR052I "Removing files from source tape %s."
GLESR053I "%d of %d files have been removed."
GLESR054I "Formatting the source tape %s."
GLESR055E "Failed to copy all files."
GLESR056I "Limit option is used. %d files remained."
GLESR057I "%d files remained on the source tape which needs to be reconciled."
GLESR058E "Error creating directory for Spectrum Archive EE data on target tape %s. (errno: %d)"
GLESR059E "Failed to get the DMAPI handle for file %s (rc:%d:%d)."
GLESR060E "Error when checking if file %s must be reconciled."
GLESR061I "Orphan data for file %s found. Tape IDs stored for this file: %s."
GLESR062I "%d files are found. Continue to get the rest of the files."
GLESR063E "Following command failed with (rc:%d:%d) : %s."
GLESR064I "%d of %d files have been removed during a reconcile operation."
GLESR065I "Removeing %d files that have been removed on gpfs."
GLESR066E "Error removing a file during a reconcile operation."
GLESR067I "The copy process from source to destination tape failed for a set of files. Start retry for each files."
GLESR068E "Error finding the file (%s - %s) by the DMAPI handle."
GLESR069I "1 of 3 steps of %d files have been done."
GLESR070I "2 of 3 steps of %d files have been done."
GLESR071E "The reclamation process failed to get file information on the source tape (%d)."
GLESR072E "Failed to get volume cache file %s."
GLESR073E "Error of file size zero for volume cache %s."
GLESR074I "Source tape %s may not have any data to transfer to the target tape."
GLESR075I "A cp command is copying %lu bytes."
 
# messages related to the ltfseetrap command
GLEST001E "Invalid argument."
GLEST002E "Unknown trap (%c)."
GLEST003E "Unknown job (%d)."
GLEST004E "Unable to get job information."
 
# messages related to the runjob command
GLESJ001E "The runjob process is unable to communicate with the LTFS EE service."
GLESJ002E "Error when setting extended attributes for the Spectrum Archive EE data directory (tape=%s)."
GLESJ003E "Error missing arguments."
GLESJ004E "The runjob process is unable to connect to the LTFS EE service."
GLESJ005E "Generic job not found by LTFS EE."
GLESJ006E "Failed to open shell to run the job."
GLESJ007E "Failed to get LTFS EE IP address, error return code: %d."
GLESJ008I "Unable to get the extended attributes for the LTFS EE data directory."
GLESJ009E "The attribute %s of tape %s was invalid."
GLESJ010E "Failed to update the number of tape blocks on MMM."
GLESJ011E "Failed to get attribute %s of tape %s (errno: %d)."
 
# messages related to processing of other HSM calls such as externalRemoveManagement()
GLESH001I "LTFS EE service seems to be running."
GLESH002I "LTFS EE service seems to not be running."
GLESH003E "Failed to communicate to LTFS EE service."
GLESH004E "There are ongoing migrations (%d), HSM managmement removal is not possible."
GLESH005E "Failed to prepare temporary directory to be used for storing GPFS scan result."
GLESH006I "Scanning the gpfs filesystem %s for migrated and premigrated files."
GLESH007E "Failed to scan gpfs filesystem %s for migrated and premigrated files."
GLESH008I "Readmiting migration requests."
GLESH009E "Failed to readmit migration requests."
GLESH010I "Ok to proceed removing LTFS EE management for filesystem %s."
GLESH011E "Failed to remove LTFS EE management for filesystem %s because it has migrated and/or premigrated files ( listed in /tmp/ltfsee/remmgmt/scanresult )."
 
# messages related to the ltfseecp command
GLESC001I "Data for copy job %s will now be transferred to tape mounted on %s."
GLESC002E "Unable to do data management operations for file %s."
GLESC003E "Redundant copy for file %s to tape %s failed."
GLESC004E "Unable to open temp file (%s) (%d)."
GLESC005E "The object name %s is provided in a wrong format."
GLESC006E "Malformed option string provided to the copy command:%s."
GLESC007E "Unable to allocate memory, operation is ending."
GLESC008E "Unable to connect to the Spectrum Archive EE service."
GLESC009E "The copy operation is not able to connect to Spectrum Archive EE."
GLESC010E "Unable to initialize DMAPI (errno: %d)."
GLESC011I "The copy process for file %s has been interupted by another operation and will be rescheduled."
GLESC012E "Unable to access file %s to copy (errno: %d)."
GLESC013E "Invalid number of arguments in a parameter (%d)."
GLESC014I "Migration driver start for %s (%s)."
GLESC015E "Failed to get HSM status for file %s (ret: %d)."
GLESC016I "Number of files: %d, Data size: %ld"
##### DO NOT REMOVE!! this msg is parsed by log analyzer #####
GLESC017I "Data move completed for tape %s, R: %ld.%09ld, W: %ld.%09ld, T: %ld.%09ld, A: %ld.%09ld, B: %ld.%09ld, C: %ld.%09ld, D: %ld.%09ld, E: %ld.%09ld, F: %ld.%09ld"
 
# messages related to import and export
# some IDs were used in old python codes and are not needed any more
# don't re-use these ids (those are ### <id>)
 
GLESI001E "Directory %s should exist but it was not found or not a directory."
### GLESI002E "Tape %s does not exist in LTFS EE"
### GLESI003E "The tape %s has the status: %s ."
### GLESI004E "Invalid option %s ."
### GLESI005E "Must specify almost one tape id"
### GLESI007E "Option %s incorrect."
### GLESI008E "The %s option(s) are mutually exclusive"
### GLESI010E "The storage pool name %s not exist in LTFS EE"
### GLESI011E "Cannot get %s attribute of this %s file."
### GLESI012E "There are not valid tape(s) to export."
### GLESI013E "The offline option is mutually exclusive with %s options"
### GLESI014E "File %s not found in GPFS"
GLESI015E "The destination path is not given but there are multiple GPFS file systems mounted."
GLESI016E "The file system %s is not managed by HSM"
GLESI017W "Destination path %s does not exist. This path will be created."
GLESI018W "Importing file %s: will be overwritten."
GLESI019W "Importing file %s: a file already exists at destination path so will be imported as %s."
GLESI020W "Importing file %s: a file already exists at destination path so won't be imported."
### GLESI021W "There are multiple file system in GPFS"
### GLESI022W "The file %s does not have offline attribute or not exist"
### GLESI023W "Some files may not be imported properly, refer to the log '/var/log/ltfsee.log' for more details."
### GLESI024I "The Tape %s is Valid"
GLESI025I "Export start for tape: %s."
### GLESI026I "Remove offline attribute of %s"
### GLESI027I "The command was successful."
### GLESI029I "%s"
GLESI030E "Cannot get %s attribute of %s"
GLESI031E "The path %s is not under GPFS managed space."
### GLESI032E "The tape %s is a migrated tape"
### GLESI033E "The specified tape %s is currently offline. You should import the tape with offline option"
### GLESI034I "The file %s has been assigned to the UID; %s"
### GLESI036W "The file %s does not have ibm.ltfsee.gpfs.path attribute or does not exist"
### GLESI037W "The file %s is in a premigrated state. This will be changed to the migrated state."
### GLESI038E "The specified tape %s is not currently offline. You should import the tape without offline option"
### GLESI039W "The file %s was added to specified tape %s."
### GLESI040E "Cannot set %s attribute to this file: %s"
### GLESI041I "The file %s was changed to offline status."
### GLESI042I "Creating symlink --> data file: %s --> %s"
### GLESI043I "The file %s has been assigned to the tape with the ID: %s "
### GLESI044E "Unable to determine the GPFS node id of this node."
### GLESI045I "Removing file %s from GPFS."
### GLESI046E "Unable to determine the UID for this file: %s"
GLESI047E "Unable to create a UID file for this file: %s"
### GLESI048E "Unable to import for this tape: %s"
GLESI049E "Unable to create %s directory. (tape=%s, reason=%s)"
GLESI050E "Unable to rename %s to %s. (tape=%s, reason=%s)"
### GLESI051E "Unable to create file/directory for this : %s"
GLESI052E "Unable to remove %s. (reason=%s)"
GLESI053E "Unable to update UID file for %s. (tape=%s)"
### GLESI054E "Unable to determine the GPFS cluster id."
GLESI055W "The file %s is the same or an older generation. (tape=%s)"
GLESI056W "The file %s is a newer generation file. (tape=%s)"
### GLESI057W "Offline file %s has different tape ID (%s) from imported tape (%s)."
### GLESI058W "Offline file %s has different UID (%s) from imported file (%s)."
GLESI059W "Unable to recreate the link path for %s. (tape=%s)"
GLESI060I "Import start for tape: %s."
GLESI061E "The tape %s is not valid or is in an invalid state."
GLESI062E "Unable to import file/dir %s. Unable to convert it into a Spectrum Archive format. (tape=%s)"
GLESI063E "Unable to lock file for %s. (tape=%s, reason=%s)"
GLESI064W "Unable to import symlink %s (-> %s). The symlink appears to have been created outside of the Spectrum Archive environment. (tape=%s)"
### GLESI065E "Must specify at least a library ID."
GLESI066E "Unable to import tape %s. Destination path %s is under an AFM cache fileset."
GLESI067E "Unable to init RPC client. (reason=%s)"
GLESI068E "DMAPI API call %s failed. (reason=%s)"
GLESI069E "Unable to get extended attributes from the file %s."
GLESI070E "Unable to %s DMAPI attr %s for %s. (reason=%s)"
GLESI071E "Unable to export a file %s. (tape=%s)"
GLESI072E "Extended attribute %s is not found in %s."
GLESI073E "Unable to create symlink %s that points to %s. (tape=%s, reason=%s)"
GLESI074E "Destination path %s is found but it is not a directory. (tape=%s)"
GLESI075E "No GPFS file system is found. (tape=%s)"
GLESI076E "Unable to run command %s. (tape=%s, reason=%s)"
GLESI077E "Trying to generate a new name because %s already exists, but a path or filename that is too long is found. (tape=%s)"
GLESI078E "Failed to remove the tape information from an unknown GPFS stub file. (tape=%s, uid_path=%s)"
GLESI079E "OVERWRITE is specified but found an existing non-empty directory at dest_path %s. Can't import %s"
GLESI080E "Unable to get file size of %s. (tape=%s)"
GLESI081E "Unable to create stub file at %s for %s. (tape=%s)"
GLESI082E "Unable to create GPFS stub file for %s. (reason=%s)"
GLESI083E "Unable to sync LTFS index for tape %s. (reason=%s)"
GLESI084E "Dynamic link error. %s. (reason=%s)"
GLESI085E "External command %s is failed. (reason=%s)"
GLESI086E "Need to create a directory at %s but a non-directory file already exists. (tape=%s)"
GLESI087E "Unable to import a file %s and this tape won't be added to the pool. (tape=%s)"
GLESI088W "Unable to import a file %s but this tape might be added to the pool. (tape=%s)"
GLESI089E "Failed to open file %s. (reason=%s)"
GLESI090E "Failed to export some files."
GLESI091E "Failed to remove the tape information from the GPFS stub file at %s. (tape=%s, uid_path=%s)"
GLESI092E "Failed to remove the GPFS stub file at %s. (tape=%s, uid_path=%s)"
GLESI093I "Tape %s is not ready for export. Waiting another 15 seconds."
GLESI094E "Timed out waiting for tape %s to get ready for export."
GLESI095E "Tape %s is not usable for export."
GLESI096I "Writing export version into the tape. (version=%s, tape=%s)"
GLESI097E "Failed to write the export version but the tape will be exported."
GLESI099E "Failed to remove the tape information from the GPFS stub file at %s. Possible error reason is that the file has been renamed/deleted during exporting. (tape=%s, uid_path=%s)"
### Progress indicator for export/import
GLESI950I "Progress of export for %s: started. Number of files to process: %u"
GLESI951I "Progress of export for %s: %u/%u files have been exported."
GLESI952I "Progress of export for %s: completed. RC=%d"
GLESI953I "Progress of import for %s (phase #1): started. Number of files to process: %u"
GLESI954I "Progress of import for %s (phase #1): %u/%u files have been converted into Spectrum Archive format."
GLESI955I "Progress of import for %s (phase #1): completed. RC=%d"
GLESI956I "Progress of import for %s (phase #2): started. Number of files to process: %u"
GLESI957I "Progress of import for %s (phase #2): %u/%u files have been imported."
GLESI958I "Progress of import for %s (phase #2): completed. RC=%d"
 
# special message to be used for debug purpose only
##### DO NOT REMOVE!! this msg is parsed by log analyzer #####
GLESZ999D "%s"
 
# reserved messages for scripts [range of digits]
#GLESBxxxx for ltfsee_config [000-299,701,702,751,752,801,802,999]
#GLESBxxxx for ltfsee_config_save [500-699]
#GLESExxxx for ltfsee_install
#GLESFxxxx for ltfsee_log_collection
 
#GLESYxxxx for ltfsee_export_fix
GLESY000I "Invoked ltfsee_export_fix: argv=%s"
GLESY001E "Failed to open the lock file %s. (errno=%d)"
GLESY002E "Failed to acquire lock from the lock file %s. (errno=%d)"
GLESY003E "Failed to create a GPFS snapshot. (GPFS=%s, snapshot=%s)"
GLESY004E "Failed to scan a GPFS snapshot. (GPFS=%s, snapshot=%s)"
GLESY005E "Failed to delete a GPFS snapshot. (GPFS=%s, snapshot=%s)"
GLESY006E "Failed to open a GPFS policy file %s. (errno=%d)"
GLESY007E "Failed to list all the DMAPI-enabled GPFS filesystems."
GLESY008E "No tape is given."
GLESY009E "Tape %s is in an invalid format."
GLESY010E "Failed to parse a line=%s)"
GLESY011E "No valid DMAPI-enabled GPFS is found."
GLESY012E "Failed to remove the tape information from the GPFS stub file at %s."
GLESY013E "Failed to remove the tape information from the GPFS stub file at %s. Possible reason is that the file has been renamed/deleted during fixing."
GLESY014E "Failed to remove the GPFS stub file at %s."
GLESY015I "Fix of exported files completes. Total=%lu, Succeeded=%lu, Failed=%lu"
GLESY016I "Start finding strayed stub files and fix them for GPFS %s (id=%s)"
GLESY017E "Failed to fix files in GPFS %s. Retry is needed to fix completely."
GLESY018I "Successfully fixed files in GPFS %s."
GLESY019E "Tape %s@%s@%s is not in the Exported state."
GLESY020I "Listing files that need to be fixed for GPFS %s."
GLESY021E "Failed to open gpfs list path %s for GPFS %s. (errno=%d)"
GLESY022E "Failed to set up. Retry the operation."
GLESY023E "Failed to aquire a lock. Another ltfsee_export_fix process is still running."
GLESY024E "Invalid argument(s)."
GLESY025I "ltfsee_export_fix exits with RC=%d"
GLESY026I "No file needing fixed was found for GPFS %s. Skipping it."
GLESY027I "Usage: ltfsee_export_fix -T <tape_1> [ <tape_2> ... <tape_N> ]"
+ " [ -g <gpfs_1> [ <gpfs_2> ... <gpfs_N> ] ]"
+ ""
+ " * Each tape must be in TPS format (tape@pool_id@lib_id)."
GLESY028I "Ensure that IBM Spectrum Archive EE is not running on any node in this cluster."
+ "Do not remove or rename any file in any of the DMAPI-enabled GPFS file systems, if possible."
+ "Type "yes" to continue."
GLESY029I "Canceled by a user."
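The message IDs in the listing above end in a severity letter, which is useful when filtering logs. The E/W/I/D mapping below is inferred from the listing itself (for example, GLESR008E, GLESI017W, GLESY025I, and the debug-only GLESZ999D); it is an assumption, so verify it against the product documentation before relying on it.

```shell
#!/bin/sh
# Sketch: classify a Spectrum Archive EE message ID by its trailing
# severity letter. The E=error, W=warning, I=information, D=debug
# mapping is inferred from the message listing, not documented API.
msg_severity() {
    case "$1" in
        *E) echo "error" ;;
        *W) echo "warning" ;;
        *I) echo "information" ;;
        *D) echo "debug" ;;
        *)  echo "unknown" ;;
    esac
}
```

For example, a pattern such as `grep -oE 'GLES[A-Z][0-9]+E'` run against ltfsee.log would pull out only the error-severity message IDs.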
Failed reconciliations
Failed reconciliations are usually indicated by the GLESS003E error message, which has the following description:
Reconciling tape %s failed due to a generic error.
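The tape ID is substituted into the %s placeholder at run time, so the affected tapes can be recovered from the log text. A minimal sketch, assuming the expanded message text appears on the same line as the GLESS003E message ID (adjust the pattern to the actual log line layout in your environment):

```shell
#!/bin/sh
# Sketch: extract the tape IDs named in GLESS003E reconcile failures
# from an ltfsee.log-style file. The assumption is that each failure
# line contains "Reconciling tape <ID> failed"; verify against your
# actual log format before use.
failed_reconcile_tapes() {
    # $1: path to the log file
    grep 'GLESS003E' "$1" \
        | sed -n 's/.*Reconciling tape \([^ ]*\) failed.*/\1/p' \
        | sort -u
}
```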
Lost or strayed files
Table 10-2 describes the six possible status codes for files in IBM Spectrum Archive EE. They can be viewed for individual files by running the ltfsee info files command.
Table 10-2 Status codes for files in IBM Spectrum Archive EE
Resident: The file is resident in the IBM Spectrum Scale namespace and has not been migrated or premigrated to tape.
Migrated: The file was migrated. The file was copied from the GPFS file system to tape and exists only as a stub file in the IBM Spectrum Scale namespace.
Premigrated: The file was partially migrated. An identical copy exists both in the local file system and on tape.
Offline: The file was migrated to a tape cartridge, and the tape cartridge was then exported offline.
Lost: The file was in the migrated state, but it is not accessible from IBM Spectrum Archive EE because the tape cartridge that the file is supposed to be on is not accessible. The file might be lost because of tape corruption, or because the tape cartridge was removed from the system without first being exported.
Strayed: The file was in the premigrated state, but it is not accessible from IBM Spectrum Archive EE because the tape cartridge that the file is supposed to be on is not accessible. The file might be lost because of tape corruption, or because the tape cartridge was removed from the system without first being exported.
The only two status codes that indicate an error are lost and strayed. Files in these states should be fixed where possible by returning the missing tape cartridge to the tape library or by attempting to repair the tape corruption (for more information, see 7.17, “Checking and repairing” on page 184). If this is not possible, they should be restored from a redundant copy (for more information, see 7.11.2, “Selective recall” on page 172). If a redundant copy is unavailable, the stub files must be deleted from the GPFS file system.
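To find the files that need this treatment, the `ltfsee info files` output can be filtered down to the two error states. A sketch, assuming each output line carries a "State:" field (the exact field layout varies by EE version, so check your output first):

```shell
#!/bin/sh
# Sketch: keep only the lines of an `ltfsee info files` listing whose
# state is Lost or Strayed -- the two states that indicate an error.
# The per-line "State: <value>" layout is an assumption.
files_needing_repair() {
    # reads an `ltfsee info files` listing on stdin
    awk 'tolower($0) ~ /state: *(lost|strayed)/'
}
```

For example, `ltfsee info files /ibm/glues/project/* | files_needing_repair` would list only the files that must be fixed or restored.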
10.7 Recovering from system failures
The system failures that are described in this section are the result of hardware failures or temporary outages that result in IBM Spectrum Archive EE errors.
10.7.1 Power failure
When a library power failure occurs, the data on the tape cartridge that is actively being written is probably left in an inconsistent state.
To recover a tape cartridge from a power failure, complete the following steps:
1. Create a mount point for the tape library. For more information, see the appropriate procedure, as described in 7.2.2, “LTFS Library Edition Plus component” on page 128.
2. If you do not know which tape cartridges are in use, try to access all tape cartridges in the library. If you do know which tape cartridges are in use, try to access the tape cartridge that was in use when the power failure occurred.
3. If a tape cartridge is damaged, it is identified as inconsistent and the corresponding subdirectories disappear from the file system. You can confirm which tape cartridges are damaged or inconsistent by running the ltfsee info tapes command. The displayed list of tape cartridges includes the volume name, which is helpful in identifying the inconsistent tape cartridge. For more information, see 7.17, “Checking and repairing” on page 184.
4. Recover the inconsistent tape cartridge by running the ltfsee pool add command with the -c option. For more information, see 7.17, “Checking and repairing” on page 184.
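Steps 3 and 4 can be scripted around the `ltfsee info tapes` listing. A hedged sketch, assuming the tape ID is the first column and the status appears later on each line (both are assumptions about the listing format, so verify against your output):

```shell
#!/bin/sh
# Sketch: print the IDs of tapes whose status suggests they need
# repair after a power failure. Reads an `ltfsee info tapes`-style
# listing on stdin; the column layout is illustrative only.
inconsistent_tapes() {
    awk 'tolower($0) ~ /(inconsistent|invalid|error)/ {print $1}'
}
```

Each reported tape ID could then be repaired with the ltfsee pool add command and the -c option, as described in step 4.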
10.7.2 Mechanical failure
When a library receives an error message from one of its mechanical parts, the process to move a tape cartridge cannot be performed.
 
Important: A drive in the library normally continues to operate despite a library failure, so ongoing access to an open file on the loaded tape cartridge is not interrupted or damaged.
To recover a library from a mechanical failure, complete the following steps:
1. Run the umount or fusermount -u command to start a tape cartridge unload operation for each tape cartridge in each drive.
 
Important: The umount command might encounter an error when it tries to move a tape cartridge from a drive to a storage slot.
To confirm the status of all tape cartridges, view them from the library operator panel or the web user interface.
2. Turn off the power to the library and remove the source of the error.
3. Follow the procedure that is described in 10.7.1, “Power failure” on page 319.
 
Important: One or more inconsistent tape cartridges might be found in the storage slots and might need to be made consistent by following the procedure that is described in “Unavailable status” on page 259.
10.7.3 Inventory failure
When a library cannot read the tape cartridge bar code for any reason, an inventory operation for the tape cartridge fails. The corresponding media folder does not display, but a specially designated folder that is named UNKN0000 is listed instead. This designation indicates that a tape cartridge is not recognized by the library. If the user attempts to access the tape cartridge contents, the media folder is removed from the file system. The status of any library tape cartridge can be determined by running the ltfsee info tapes command. For more information, see 7.23, “Obtaining inventory, job, and scan status” on page 200.
To recover from an inventory failure, complete the following steps:
1. Remove any unknown tape cartridges from the library by using the operator panel or Tape Library Specialist web interface, or by opening the door or magazine of the library.
2. Check all tape cartridge bar code labels.
 
Important: If the bar code is removed or about to peel off, the library cannot read it. Replace the label or firmly attach the bar code to fix the problem.
3. Insert the tape cartridge into the I/O station.
4. Check to determine whether the tape cartridge is recognized by running the ltfsee info tapes command.
5. Add the tape cartridge to the LTFS inventory by running the ltfsee pool add command.
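Because an unrecognized cartridge surfaces as the special UNKN0000 folder, the inventory state can also be pre-checked at the file-system level before step 4. A minimal sketch (the mount point argument is an example; substitute your own):

```shell
#!/bin/sh
# Sketch: report whether an LTFS mount point shows the special
# UNKN0000 folder that indicates an unrecognized tape cartridge.
# Returns 0 (true) when the folder is present.
has_unknown_cartridge() {
    # $1: LTFS mount point, for example /ltfs
    [ -d "$1/UNKN0000" ]
}
```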
10.7.4 Abnormal termination
If LTFS terminates because of an abnormal condition, such as a system hang-up or after the user initiates a kill command, the tape cartridges in the library might remain in the tape drives. If this occurs, LTFS locks the tape cartridges in the drives and the following command is required to release them:
# ltfs release_device -o changer_devname=[device name]
 