Linux computers

Supported operating systems (x86_64):
•Red Hat Enterprise Linux Server 7.1
•Red Hat Enterprise Linux Server 7.2
•Red Hat Enterprise Linux Server 7.3
•Red Hat Enterprise Linux Server 7.4

Supported tape libraries:
•A single logical library within a supported tape library for each IBM Spectrum Archive EE Control Node.

Supported tape drives:
•IBM TS1140 tape drive
•IBM TS1150 tape drive
•IBM TS1155 tape drive
•LTO-5 full-high tape drive
•LTO-6 full-high tape drive
•LTO-7 full-high tape drive
•LTO-8 full-high tape drive

Supported tape media:
•TS1140 media: JB, JC, JK, and JY
•TS1155/TS1150 media: JC, JD, JK, JL, JY, and JZ
•LTO media: LTO 8, LTO 8 M8, LTO 7, LTO 6, and LTO 5

Server

Processor
Minimum: An x86_64 processor.
Preferred: A dual-socket server with the latest chipset.
Memory
Minimum: (d) x (f) + 1 GB of RAM available for the IBM Spectrum Archive EE program, where:
•d: Number of tape drives
•f: Number of millions of files/directories on the tape cartridges
In addition, IBM Spectrum Scale must be configured with adequate RAM.
Example: There are six tape drives in the system and three million files are stored on the tape cartridges. The required RAM is 19 GB (6 x 3 + 1 = 19).
Preferred: 64 GB of RAM or greater.
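
The sizing rule above can be sketched as a quick shell calculation; the drive and file counts below are the values from the example and are illustrative only:

```shell
#!/bin/sh
# Illustrative check of the sizing rule: RAM (GB) = d x f + 1.
# d and f are the example values from the text, not a recommendation.
d=6   # number of tape drives
f=3   # millions of files/directories on the tape cartridges
ram_gb=$(( d * f + 1 ))
echo "Required RAM: ${ram_gb} GB"   # prints "Required RAM: 19 GB"
```

Remember that this is only the memory reserved for the IBM Spectrum Archive EE program; IBM Spectrum Scale needs its own RAM on top of this.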

HBA
Minimum: A Fibre Channel Host Bus Adapter supported by the TS1155, TS1150, TS1140, LTO-8, LTO-7, LTO-6, and LTO-5 tape drives.
Preferred: An 8 Gbps or 16 Gbps dual-port or quad-port Fibre Channel Host Bus Adapter.

Network
A TCP/IP-based protocol network.

Disk device for LTFS EE tape file system metadata
For more information, see 3.4.2, “IBM Spectrum Archive EE metadata file system” on page 46.

One or more disk devices for the GPFS file system
The amount of disk space that is required depends on the IBM Spectrum Scale settings that are used.

Note: The required RPMs above were gathered from an RHEL 7.2 machine. Your RPM versions might differ depending on the version of your RHEL OS.

| Node type | IBM Spectrum Scale | IBM Spectrum Archive internal Hierarchical Storage Manager (HSM) | IBM Spectrum Archive LE+ | Multi-tape management module (MMM) |
|---|---|---|---|---|
| IBM Spectrum Scale only node | Yes | No | No | No |
| IBM Spectrum Archive EE node | Yes | Yes | Yes | Yes |

Note: Servers with multiple sockets can have the sockets physically removed.

Note: The example uses an x3650 M5 rack server because all standard models come by default with an Intel Xeon E5-2600 v4 single-socket processor with 4 PCIe slots dedicated to the socket. For more information about the x3650 M5 server, see this website:

Note: Typically, when using RAID 6 with eight data disks and a stripe size of 256 KB, the full stripe is 2 MB. Therefore, the IBM Spectrum Scale file system block size should be 2 MB.

Important: The choice of block size is important for large file systems. For file systems larger than 100 TB, use a block size of at least 256 KB.

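The 2 MB figure follows directly from the RAID geometry: matching the file system block size to the full-stripe write size keeps writes full-stripe. A minimal sketch of the arithmetic, using the eight-disk, 256 KB example from the note:

```shell
#!/bin/sh
# Full-stripe size for RAID 6 = (number of data disks) x (stripe size).
# Values below are the example geometry from the text.
data_disks=8
stripe_kb=256
block_kb=$(( data_disks * stripe_kb ))
echo "Suggested block size: $(( block_kb / 1024 )) MB"   # 2048 KB = 2 MB
```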
Options for mmcrfs and mmchfs, with default values:

•Device name of the file system (mmcrfs: X; mmchfs: X; default: none)
•DiskDesc for each disk in your file system (mmcrfs: X; mmchfs: issue the mmadddisk or mmdeldisk command to add or delete disks from the file system; default: none)
•-F StanzaFile to specify a file that contains a list of NSD stanzas (mmcrfs: X; mmchfs: issue the mmadddisk or mmdeldisk command to add or delete disks as indicated in the stanza file; default: none)
•-A {yes | no | automount} to determine when to mount the file system (mmcrfs: X; mmchfs: X; default: yes)
•-B BlockSize to set the data block size: 64K, 128K, 256K, 512K, 1M, 2M, 4M, 8M, or 16M (mmcrfs: X; mmchfs: this value cannot be changed without re-creating the file system; default: 256K)
•-D {posix | nfs4} to set the semantics for a deny-write open lock (mmcrfs: X; mmchfs: X; default: nfs4)
•-E {yes | no} to report exact mtime values (mmcrfs: X; mmchfs: X; default: yes)
•-i InodeSize to set the size of inodes: 512, 1024, or 4096 bytes (mmcrfs: X; mmchfs: this value cannot be changed; default: 4096)
•-j {cluster | scatter} to determine the block allocation map type (mmcrfs: X; mmchfs: N/A; default: see Block allocation map)
•-k {posix | nfs4 | all} to determine the authorization types that are supported by the file system (mmcrfs: X; mmchfs: X; default: all)
•-K {no | whenpossible | always} to enforce strict replication (mmcrfs: X; mmchfs: X; default: whenpossible)
•-L LogFileSize to specify the size of the internal log file (mmcrfs: X; mmchfs: this value cannot be changed; default: 4 MB)
•-m DefaultMetadataReplicas (mmcrfs: X; mmchfs: X; default: 1)
•-M MaxMetadataReplicas (mmcrfs: X; mmchfs: this value cannot be changed; default: 2)
•-n NumNodes, the estimated number of nodes that will mount the file system (mmcrfs: X; mmchfs: X; default: 32)
•-o MountOptions to be passed to the mount command (mmcrfs: N/A; mmchfs: X; default: none)
•-Q {yes | no} to activate quotas (mmcrfs: X; mmchfs: X; default: no)
•-r DefaultDataReplicas (mmcrfs: X; mmchfs: X; default: 1)
•-R MaxDataReplicas (mmcrfs: X; mmchfs: this value cannot be changed; default: 2)
•-S {yes | no | relatime} to control how the atime value is updated (mmcrfs: X; mmchfs: X; default: no)
•-t DriveLetter (mmcrfs: X; mmchfs: X; default: none)
•-T Mountpoint (mmcrfs: X; mmchfs: X; default: /gpfs/DeviceName)
•-V {full | compat} to change the file system format to the latest level (mmcrfs: N/A; mmchfs: X; default: none)
•-W NewDeviceName to assign a new device name to the file system (mmcrfs: N/A; mmchfs: X; default: none)
•-z {yes | no} to enable DMAPI (mmcrfs: X; mmchfs: X; default: no)
•--inode-limit MaxNumInodes[:NumInodestoPreallocate] to determine the maximum number of files in the file system (mmcrfs: X; mmchfs: X; default: file system size/1 MB)
•--perfileset-quota to enable or disable per-fileset user and group quotas (mmcrfs: X; mmchfs: X)
•--filesetdf to specify (when quotas are enforced for a fileset) whether the df command reports numbers based on the quotas for the fileset and not for the total file system (mmcrfs: X; mmchfs: X; default: --nofilesetdf)
•--metadata-block-size MetadataBlockSize to specify the block size for the system storage pool (mmcrfs: X; mmchfs: N/A; default: the same as the value set by -B BlockSize)
•--mount-priority Priority to control the order in which the individual file systems are mounted at daemon startup or when the all keyword is specified on the mmmount command (mmcrfs: X; mmchfs: X; default: 0)
•--version VersionString to enable only the file system features that are compatible with the specified release (mmcrfs: X; mmchfs: N/A; default: 3.5.0.0)
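
As a sketch of how these options combine, the following hypothetical mmcrfs invocation creates a DMAPI-enabled file system with a 2 MB block size; the device name (gpfs), stanza file name (nsd.stanza), and mount point (/ibm/gpfs) are placeholders, not values from this document:

```shell
# Hypothetical example only; substitute your own device, stanza file,
# and mount point. -z yes enables DMAPI, which IBM Spectrum Archive EE
# requires on the space-managed file system.
mmcrfs gpfs -F nsd.stanza -B 2M -z yes -A yes -T /ibm/gpfs
```

Because -B BlockSize cannot be changed without re-creating the file system, choose the block size up front based on your RAID geometry.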

Note: It is recommended to have just a single file system.

Important: If there is more than one tape library, the number of cartridges in your calculation must be the total number of cartridges in all libraries.

Note: The most common setup is to have two copies, that is, one primary pool and one copy pool with the same number of tapes.
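
Putting the two notes above together, a cartridge-count sketch under an assumed two-library, two-copy setup (the per-library count of 50 is purely hypothetical):

```shell
#!/bin/sh
# Illustrative cartridge totals for sizing calculations.
# All counts below are hypothetical example values.
primary_per_library=50   # primary-pool cartridges in each library
libraries=2              # total across ALL libraries must be used
copies=2                 # one primary pool plus one equal-sized copy pool
total=$(( primary_per_library * libraries * copies ))
echo "Total cartridges to plan for: ${total}"
```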

Note: Several components of the reference architecture affect the overall migration performance of the solution, including back-end disk speeds, SAN connectivity, NUMA node configuration, and the amount of memory. Thus, this migration performance data should be used as a guideline only, and any final migration performance measurements should be done on the actual customer hardware.
For more migration performance information, see the IBM Spectrum Archive Enterprise Edition V1.2.2 Performance white paper, which is available at:

Note: Figure 3-3 shows a single Fibre Channel cable going to the drive zones. However, generally you should have a second Fibre Channel cable for failover scenarios. For the same reason, the zone that goes to the external storage unit has two Fibre Channel cables from one HBA on each server.

Migration performance by node/drive configuration and file size:

| Node/drive configuration | 5 MiB | 10 MiB | 100 MiB | 1 GiB | 10 GiB |
|---|---|---|---|---|---|
| 8 drives (2N4D8T) | 369.0 | 577.3 | 1473.4 | 2016.4 | 2315.3 |
| 6 drives (2N3D6T) | 290.6 | 463.2 | 1229.5 | 1656.2 | 1835.5 |
| 4 drives (1N4D4T) | 211.3 | 339.3 | 889.2 | 1148.4 | 1244.9 |
| 3 drives (1N3D3T) | 165.3 | 267.1 | 701.8 | 870.6 | 931.0 |
| 2 drives (1N2D2T) | 114.0 | 186.3 | 465.8 | 583.6 | 624.3 |

| Node/drive configuration | 5 MiB | 10 MiB | 100 MiB | 1 GiB | 10 GiB |
|---|---|---|---|---|---|
| 8 drives (2N4D8T) | 365.4 | 561.8 | 1287.8 | 1731.8 | 1921.7 |
| 6 drives (2N3D6T) | 286.3 | 446.5 | 1057.9 | 1309.4 | 1501.7 |
| 4 drives (1N4D4T) | 208.5 | 328.1 | 776.7 | 885.6 | 985.1 |
| 3 drives (1N3D3T) | 162.9 | 254.4 | 605.5 | 668.3 | 749.1 |
| 2 drives (1N2D2T) | 111.0 | 178.4 | 406.7 | 439.2 | 493.7 |