Linux computers

Supported operating systems (x86_64):
• Red Hat Enterprise Linux Server 6.7
• Red Hat Enterprise Linux Server 6.8
• Red Hat Enterprise Linux Server 7.1
• Red Hat Enterprise Linux Server 7.2
• SUSE Linux Enterprise Server 11 SP4

Supported tape libraries:
• A single logical library within an IBM TS4500 tape library for each Linear Tape File System (LTFS) EE Control Node
• A single logical library within an IBM TS3500 tape library for each LTFS EE Control Node
• A single logical library within an IBM TS3310 tape library for each LTFS EE Control Node
Supported tape drives (a firmware check sketch follows the media list below):
• IBM TS1140 tape drive with drive firmware level D3I3_B0A or later
• IBM TS1150 tape drive with drive firmware level D3I4_70E or later
• LTO-5 full-high tape drive with drive firmware level LTO5_G9N0 or later
• LTO-6 full-high tape drive with drive firmware level LTO6_G9P0 or later
• LTO-7 full-high tape drive with drive firmware level LTO7_G9Q0 or later
Supported tape media:
• TS1140 media: JB, JC, JK, and JY
• TS1150 media: JC, JD, JK, JL, JY, and JZ
• LTO media: LTO 7, LTO 6, and LTO 5
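The tape drive entries above specify minimum firmware levels. As a minimal, hedged sketch of how to check the installed level from the Linux host (it assumes the lsscsi and sg3_utils packages are installed, and /dev/sg3 is a hypothetical SCSI generic device for one of the drives):

   # List tape drives and their SCSI generic device nodes
   lsscsi -g | grep tape

   # Print the drive's SCSI inquiry data; "Product revision level" is the firmware level
   sg_inq /dev/sg3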
Server

Processor:
  Minimum: An x86_64 processor
  Recommended: A single-socket server with the latest chipset

Memory:
  Minimum: (d x f) + 1 GB of RAM available for the IBM Spectrum Archive EE program, where:
  • d: Number of tape drives
  • f: Number of millions of files/directories on the tape cartridges
  In addition, IBM Spectrum Scale must be configured with adequate RAM.
  Example: With six tape drives in the system and three million files stored on the tape cartridges, the required RAM is 19 GB (6 x 3 + 1 = 19; a calculation sketch follows these server requirements).
  Recommended: 64 GB of RAM or more
HBA:
  Minimum: A Fibre Channel Host Bus Adapter supported by the TS1150, TS1140, LTO-7, LTO-6, and LTO-5 tape drives
  Recommended: An 8 Gbps or 16 Gbps dual-port or quad-port Fibre Channel Host Bus Adapter

Network:
  A TCP/IP-based protocol network

Disk device for LTFS EE tape file system metadata:
  For more information, see 3.4.2, “IBM Spectrum Archive EE metadata file system” on page 51.

One or more disk devices for the GPFS file system:
  The amount of disk space that is required depends on the IBM Spectrum Scale settings that are used.

Note: The required RPMs listed above were gathered from a RHEL 6.8 machine; depending on the version of your RHEL OS, your RPM versions may differ.
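As a minimal sketch of the memory rule above, using the hypothetical values from the example (d = 6 drives, f = 3 million files):

   # d = number of tape drives, f = millions of files/directories on tape
   d=6; f=3
   echo "Minimum RAM for IBM Spectrum Archive EE: $((d * f + 1)) GB"   # prints 19 GB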
Node type                     | IBM General Parallel File System | IBM Spectrum Archive internal Hierarchical Storage Manager (HSM) | LTFS LE+ | Multi-tape management module (MMM)
IBM Spectrum Scale only node  | Yes                              | No                                                                | No       | No
IBM Spectrum Archive EE node  | Yes                              | Yes                                                               | Yes      | Yes
Note: On servers with multiple sockets, the additional processors can be physically removed.
Note: The example uses an x3650 M5 rack server because all standard models come by default with an Intel Xeon E5-2600 v4 single-socket processor with four PCIe slots dedicated to the socket. For more information about the x3650 M5 server, see this website:
Note: Typically, when using RAID 5 with eight data disks and a stripe size of 256 KB, the full stripe width is 8 x 256 KB = 2 MB, so the IBM Spectrum Scale file system block size should be 2 MB.
Important: The choice of block size is critical for large file systems. For file systems larger than 100 TB, use a block size of at least 256 KB.
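As a worked instance of the two notes above, here is a hedged sketch of setting the block size at file system creation time (the device name gpfs, the stanza file nsd.stanza, and the mount point /ibm/glues are hypothetical; as the following table shows, -B cannot be changed after creation):

   # Full RAID 5 stripe: 8 data disks x 256 KB stripe size = 2 MB
   # Create the IBM Spectrum Scale file system with a matching 2 MB block size
   mmcrfs gpfs -F nsd.stanza -B 2M -T /ibm/glues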
The following table lists the options for the mmcrfs and mmchfs commands; an X indicates that the command supports the option, and the last column gives the default value.

Options | mmcrfs | mmchfs | Default value
Device name of the file system | X | X | none
DiskDesc for each disk in your file system | X | Issue the mmadddisk or mmdeldisk command to add or delete disks from the file system | none
-F StanzaFile to specify a file that contains a list of NSD stanzas | X | Issue the mmadddisk or mmdeldisk command to add or delete disks as indicated in the stanza file | none
-A {yes|no|automount} to determine when to mount the file system | X | X | yes
-B BlockSize to set the data block size: 64K, 128K, 256K, 512K, 1M, 2M, 4M, 8M, or 16M | X | This value cannot be changed without re-creating the file system | 256K
-D {posix|nfs4} to set the semantics for a deny-write open lock | X | X | nfs4
-E {yes|no} to report exact mtime values | X | X | yes
-i InodeSize to set the size of inodes: 512, 1024, or 4096 bytes | X | This value cannot be changed | 4096
-j {cluster|scatter} to determine the block allocation map type | X | N/A | See Block allocation map
-k {posix|nfs4|all} to determine the authorization types that are supported by the file system | X | X | all
-K {no|whenpossible|always} to enforce strict replication | X | X | whenpossible
-L LogFileSize to specify the size of the internal log file | X | This value cannot be changed | 4 MB
-m DefaultMetadataReplicas | X | X | 1
-M MaxMetadataReplicas | X | This value cannot be changed | 2
-n NumNodes that will mount the file system | X | X | 32
-o MountOptions to be passed to the mount command | N/A | X | none
-Q {yes|no} to activate quotas | X | X | no
-r DefaultDataReplicas | X | X | 1
-R MaxDataReplicas | X | This value cannot be changed | 2
-S {yes|no|relatime} to control how the atime value is updated | X | X | no
-t DriveLetter | X | X | none
-T Mountpoint | X | X | /gpfs/DeviceName
-V {full|compat} to change the file system format to the latest level | N/A | X | none
-W NewDeviceName to assign a new device name to the file system | N/A | X | none
-z {yes|no} to enable DMAPI | X | X | no
--inode-limit MaxNumInodes[:NumInodestoPreallocate] to determine the maximum number of files in the file system | X | X | File system size/1 MB
--perfileset-quota to enable or disable per-fileset user and group quotas | X | X | --noperfileset-quota
--filesetdf to specify (when quotas are enforced for a fileset) whether the df command reports numbers based on the quotas for the fileset rather than for the total file system | X | X | --nofilesetdf
--metadata-block-size MetadataBlockSize to specify the block size for the system storage pool | X | N/A | Same as the value set by -B BlockSize
--mount-priority Priority to control the order in which the individual file systems are mounted at daemon start or when one of the all keywords is specified on the mmmount command | X | X | 0
--version VersionString to enable only the file system features that are compatible with the specified release | X | N/A | 3.5.0.0
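As a hedged illustration of how to read the table, reusing the hypothetical device name and stanza file from the earlier sketch: options marked X can be set by that command, while options whose mmchfs cell says the value cannot be changed must be chosen at creation time.

   # At creation: block size (-B) and inode size (-i) are fixed for the life of the
   # file system; -z yes enables DMAPI, which IBM Spectrum Archive EE space
   # management relies on
   mmcrfs gpfs -F nsd.stanza -B 2M -i 4096 -z yes -T /ibm/glues

   # Later: options marked X under mmchfs can be changed on the existing file
   # system, for example activating quotas
   mmchfs gpfs -Q yes

   # Verify the resulting settings
   mmlsfs gpfs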
Note: It is recommended to have just a single file system.

Important: If there is more than one tape library, the number of cartridges in your calculation must be the total number of cartridges across all libraries.

Note: The most common setup is to have two copies, that is, one primary pool and one copy pool with the same number of tapes.

Note: Several components of the reference architecture affect the overall migration performance of the solution, including back-end disk speeds, SAN connectivity, NUMA node configuration, and the amount of memory. Thus, this migration performance data should be used as a guideline only, and any final migration performance measurements should be done on the actual customer hardware.

Note: Figure 3-3 shows a single Fibre Channel cable going to the drive zones; however, a second Fibre Channel cable is recommended for failover scenarios. For the same reason, the zone that goes to the external storage unit has two Fibre Channel cables, one from an HBA on each server.
Node/Drive Configuration | 5 MiB | 10 MiB | 100 MiB | 1 GiB  | 10 GiB
8 Drives (2N4D8T)        | 369.0 | 577.3  | 1473.4  | 2016.4 | 2315.3
6 Drives (2N3D6T)        | 290.6 | 463.2  | 1229.5  | 1656.2 | 1835.5
4 Drives (1N4D4T)        | 211.3 | 339.3  | 889.2   | 1148.4 | 1244.9
3 Drives (1N3D3T)        | 165.3 | 267.1  | 701.8   | 870.6  | 931.0
2 Drives (1N2D2T)        | 114.0 | 186.3  | 465.8   | 583.6  | 624.3

Node/Drive Configuration | 5 MiB | 10 MiB | 100 MiB | 1 GiB  | 10 GiB
8 Drives (2N4D8T)        | 365.4 | 561.8  | 1287.8  | 1731.8 | 1921.7
6 Drives (2N3D6T)        | 286.3 | 446.5  | 1057.9  | 1309.4 | 1501.7
4 Drives (1N4D4T)        | 208.5 | 328.1  | 776.7   | 885.6  | 985.1
3 Drives (1N3D3T)        | 162.9 | 254.4  | 605.5   | 668.3  | 749.1
2 Drives (1N2D2T)        | 111.0 | 178.4  | 406.7   | 439.2  | 493.7