Planning
This chapter provides planning information that is related to the IBM Spectrum Archive Enterprise Edition (EE). Review the Planning section within the IBM Spectrum Archive EE IBM Knowledge Center website:
The most current information for IBM Spectrum Archive EE hardware and software configurations, notices, and limitations can always be found in the readme file of the software package.
This chapter includes the following topics:
3.1 System requirements
IBM Spectrum Archive EE supports the Linux operating systems and hardware platforms that are shown in Table 3-1.
Table 3-1 Linux system requirements
Linux computers
Supported operating systems (x86_64)
Red Hat Enterprise Linux Server 7.1.
Red Hat Enterprise Linux Server 7.2.
Red Hat Enterprise Linux Server 7.3.
Supported tape libraries
A single logical library within an IBM TS4500 tape library for each Linear Tape File System (LTFS) EE Control Node.
A single logical library within an IBM TS3500 tape library for each LTFS EE Control Node.
A single logical library within an IBM TS3310 tape library for each LTFS EE Control Node.
Supported tape drives
IBM TS1140 tape drive with drive firmware level D3I3_B87 or later.
IBM TS1150 tape drive with drive firmware level D3I4_7A4 or later.
IBM TS1155 tape drive with drive firmware level D3I4_7A4 or later.
LTO-5 full-high tape drive with drive firmware level LTO5_H27E or later.
LTO-6 full-high tape drive with drive firmware level LTO6_H4T0 or later.
LTO-7 full-high tape drive with drive firmware level LTO7_H5B0 or later.
Supported tape media
TS1140 media: JB, JC, JK, and JY.
TS1155/TS1150 media: JC, JD, JK, JL, JY, and JZ.
LTO media: LTO 7, LTO 6, and LTO 5.
Server
Processor
Minimum: An x86_64 processor.
Recommended: A single socket server with the latest chipset.
Memory
Minimum: (d) x (f) + 1 GB of RAM available for the IBM Spectrum Archive EE program. In addition, IBM Spectrum Scale must be configured with adequate RAM.
d: Number of tape drives.
f: Number of millions of files/directories on the tape cartridges.
Example: There are six tape drives in the system and three million files are stored on the tape cartridges. The required RAM is 19 GB (6 x 3 + 1 = 19).
Recommended: 64 GB RAM and above.
HBA
Minimum: Fibre Channel Host Bus Adapter supported by TS1155, TS1150, TS1140, LTO-7, LTO-6, and LTO-5 tape drives.
Recommended: 8 Gbps or 16 Gbps dual-port or quad-port Fibre Channel Host Bus Adapter.
Network
A TCP/IP-based network.
Disk device for LTFS EE tape file system metadata
One or more disk devices for the GPFS file system
The amount of disk space that is required depends on the IBM Spectrum Scale settings that are used.
3.2 Required software
This section describes the required software for IBM Spectrum Archive EE on Red Hat systems. The requirements for SUSE systems are similar, so they are not described separately in this section.
The following RPM Package Manager (RPM) packages must be installed on a Red Hat Enterprise Linux system before installing IBM Spectrum Archive EE V1.2.2:
Latest Fix Pack for GPFS V4.2.2.x or V4.2.3.x
pyOpenSSL-0.13.1-2.e or later
fuse-2.8.3-5 or later
fuse-libs-2.8.3-5 or later
libxml2-2.7.6-21 or later
libuuid-2.17.2-12.24 or later
libicu-4.2.1-14 or later
glibc-2.12-1.192.el6.x86_64 or later (prerequisite for 64-bit BA client)
nss-softokn-freebl-3.14.3-23.3 or later
rpcbind-0.2.0-12 or later
python-2.6.6-66 or later, but earlier than 3.0
IBM tape device driver for Linux (lin_tape) 3.0.20 or later
A device driver for the host bus adapter that is used to connect to tape library and attach to tape drives
Net-snmp 5.5-57 or later
Net-snmp-libs 5.5-57 or later
attr 2.4.44-7 or later
boost-thread-1.41.0-28 or later
boost-filesystem-1.41.0-28 or later
 
Note: The required RPMs listed above were gathered from a RHEL 6.8 machine. Depending on the version of your RHEL OS, your RPM versions might differ.
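Before installation, you can verify that these packages are present and at a sufficient level. The following is a minimal sketch that assumes a RHEL system with yum repositories configured; exact package names and versions can differ between RHEL releases.
# Query the installed versions of the prerequisite packages (missing packages are reported as not installed)
rpm -q pyOpenSSL fuse fuse-libs libxml2 libuuid libicu nss-softokn-freebl rpcbind python net-snmp net-snmp-libs attr boost-thread boost-filesystem
# Install any packages that are missing from the configured repositories
yum install -y fuse fuse-libs libxml2 libuuid libicu nss-softokn-freebl rpcbind net-snmp net-snmp-libs attr boost-thread boost-filesystem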
3.3 Hardware and software setup
Valid combinations of IBM Spectrum Archive EE components in an IBM Spectrum Scale cluster are shown in Table 3-2.
Table 3-2 Valid combinations for types of nodes in the IBM Spectrum Scale cluster
Node type: IBM Spectrum Scale only node
 – IBM General Parallel File System: Yes
 – IBM Spectrum Archive internal Hierarchical Storage Manager (HSM): No
 – LTFS LE+: No
 – Multi-tape management module (MMM): No
Node type: IBM Spectrum Archive EE node
 – IBM General Parallel File System: Yes
 – IBM Spectrum Archive internal Hierarchical Storage Manager (HSM): Yes
 – LTFS LE+: Yes
 – Multi-tape management module (MMM): Yes
All other combinations are invalid as an IBM Spectrum Archive EE system. IBM Spectrum Archive EE nodes have connections to the IBM tape libraries and drives.
Multiple IBM Spectrum Archive EE nodes enable access to the same set of IBM Spectrum Archive EE tapes. The purpose of enabling this capability is to increase the performance of the host migrations and recalls by assigning fewer tape drives to each IBM Spectrum Archive EE node. The number of drives per node depends on the HBA/switch/host combination. The idea is to have the maximum number of drives on the node such that all drives on the node can be writing or reading at their maximum speeds.
The following hardware/software/configuration setup must be prepared before IBM Spectrum Archive EE is installed:
IBM Spectrum Scale is installed on each of the IBM Spectrum Archive EE nodes.
The IBM Spectrum Scale cluster is created and all of the IBM Spectrum Archive EE nodes belong to the cluster.
A single NUMA node is preferable for better performance. For servers that contain multiple CPU sockets, the key is to move the memory from the other sockets and group it under a single socket so that the server presents a single NUMA node (see the example that follows this list for checking the topology). This configuration allows all the CPUs to access the shared memory, resulting in higher read/write performance between the disk storage and tape storage.
FC switches can be added between the host and tape drives as well as between the host and the disk storage to create a storage area network (SAN) to further expand storage needs as required.
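To verify the NUMA topology and the NUMA node to which each FC HBA is attached, commands such as the following can be used. This is a minimal sketch; the PCI address shown is a hypothetical placeholder for one of your HBAs.
# Show the NUMA nodes and the CPUs and memory that are assigned to each node
numactl --hardware
# Identify the PCI addresses of the Fibre Channel HBAs
lspci | grep -i fibre
# Show the NUMA node of a specific HBA (0000:08:00.0 is a hypothetical PCI address)
cat /sys/bus/pci/devices/0000:08:00.0/numa_node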
If you plan to configure your own server, an example configuration for a Lenovo x3650 M5 rack server (System x3650 M5 - 8871KXU) is listed below:
Intel Xeon Processor E5-2640 v4 10C 2.4 GHz 25 MB Cache 2133 MHz 90 W (Standard)
16 GB TruDDR4 Memory (2Rx4, 1.2 V) PC4-19200 CL17 2400 MHz LP RDIMM (Standard) x 4
ServeRAID M5210 SAS/SATA Controller (Standard)
System x Enterprise Slides Kit (Standard)
System x 900 W High Efficiency Platinum AC Power Supply (Standard)
System x 900 W High Efficiency Platinum AC Power Supply (Standard)
System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots)
 – QLogic 8 Gb FC Dual-port HBA (16 Gb preferred)
 – QLogic 8 Gb FC Dual-port HBA (16 Gb preferred)
System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots)
 – QLogic 8 Gb FC Dual-port HBA (16 Gb preferred)
 – QLogic 8 Gb FC Dual-port HBA (16 Gb preferred)
x3650 M5 PCIe x8
 – QLogic 8 Gb FC Dual-port HBA (16 Gb preferred)
x3650 M5 PCIe x8
 – QLogic 8 Gb FC Dual-port HBA (16 Gb preferred)
 
Note: Servers with multiple sockets can have the sockets physically removed.
Figure 3-1 provides an example of an IBM Spectrum Archive EE node hardware setup with a x3650 M5 rack server.
Figure 3-1 IBM Spectrum Archive EE node hardware setup with a x3650 M5 rack server
 
Note: The example uses an x3650 M5 rack server because all standard models come by default with an Intel Xeon E5-2600 v4 series single-socket processor with 4 PCIe slots dedicated to the socket. For more information about the x3650 M5 server, see this website:
3.4 Sizing and settings
There are many considerations that are required when you are planning for an IBM Spectrum Scale file system, including the IBM Spectrum Archive EE HSM-managed file system and the IBM Spectrum Archive EE metadata file system. Thus, this section describes the IBM Spectrum Scale file system aspects to help avoid the need to make changes later.
3.4.1 IBM Spectrum Scale block size
The size of data blocks in a file system can be specified at file system creation by using the -B option on the mmcrfs command or allowed to default to 256 KiB. This value cannot be changed without re-creating the file system.
When deciding on a block size for a file system, consider these points:
Supported block sizes are 64 KiB, 128 KiB, 256 KiB, 512 KiB, 1 MiB, 2 MiB, 4 MiB, 8 MiB, and 16 MiB.
The GPFS block size determines the following factors:
 – The minimum disk space allocation unit. The minimum amount of space that file data can occupy is a subblock. A subblock is 1/32 of the block size.
 – The maximum size of a read or write request that IBM Spectrum Scale sends to the underlying disk driver.
From a performance perspective, it is a preferred practice that you set the IBM Spectrum Scale block size to match the application buffer size, the RAID stripe size, or a multiple of the RAID stripe size. If the IBM Spectrum Scale block size does not match the RAID stripe size, performance might be severely degraded, especially for write operations. If Spectrum Scale RAID is in use, the block size must equal the VDisk track size.
For more information about IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration, SC27-6658.
 
Note: Typically, when using RAID 5 with eight data disks and a stripe size of 256 KB, the optimal block size is 2 MB, and thus the IBM Spectrum Scale file system block size should be 2 MB.
 
 
The effect of block size on file system performance largely depends on the application I/O pattern:
 – A large block size is often beneficial for large sequential read and write workloads.
 – A smaller block size is likely to offer better performance for small file, small random read and write, and metadata-intensive workloads.
The efficiency of many algorithms that rely on caching file data in an IBM Spectrum Scale page pool depends more on the number of blocks cached than on the absolute amount of data. For a page pool of a given size, a larger file system block size means that fewer blocks are cached. Therefore, when a file system is created with a block size larger than the default of 256 KiB, it is a preferred practice to increase the page pool size in proportion to the block size.
The file system block size must not exceed the value of the IBM Spectrum Scale maximum file system block size. The default maximum block size is 1 MiB. If a larger block size is wanted, use the mmchconfig command to increase the maxblocksize configuration parameter before starting IBM Spectrum Scale.
This value should be specified with the character K or M, for example, 512K or 4M. Choose a block size that is based on the application set that you plan to support and whether you are using RAID hardware (a command example follows this list):
The 64-KB block size offers a compromise if there is a mix of many files of approximately 64 K or less in size. It makes more efficient use of disk space than 256 KB, while allowing faster sequential I/O operations than 16 KB.
The 256-KB block size is the default block size and normally is the best block size for file systems that have a mixed usage or wide range of file size from small to large files.
The 1-MB block size can be more efficient if the dominant I/O pattern is sequential access to large files (1 MB or more).
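The following is a minimal sketch of creating a file system with a 2 MiB block size. The device name gpfs0, the NSD stanza file, and the mount point are placeholders, and the maxblocksize attribute can be changed only while IBM Spectrum Scale is down on the nodes.
# Raise the cluster-wide maximum block size (run while GPFS is stopped)
mmchconfig maxblocksize=2M
# Create the file system with a 2 MiB data block size
mmcrfs gpfs0 -F nsd_stanzas.txt -B 2M -T /ibm/gpfs0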
The --metadata-block-size option on the mmcrfs command allows a different block size to be specified for the system storage pool, provided its usage is set to metadataOnly. This can be especially beneficial if the default block size is larger than 1 MB. Valid values are the same as those listed for the -B option.
If you plan to use RAID devices in your file system, a larger block size might be more effective and help avoid the penalties that are involved in small block write operations to RAID devices. For example, in a RAID configuration that uses eight data disks and one parity disk (an 8+P RAID 5 configuration) with a 256-KB stripe size, the optimal file system block size is an integral multiple of 2 MB (eight data disks × 256-KB stripe size = 2 MB).
A block size of an integral multiple of 2 MB results in a single data write that encompasses the eight data disks and a parity write to the parity disk. If a block size smaller than 2 MB (such as 64 KB) is used with the same RAID configuration, write performance is degraded by the read-modify-write behavior: a 64-KB block size results in a single disk writing 64 KB and a subsequent read from the remaining data disks to compute the parity that is then written to the parity disk. The extra read degrades performance.
The choice of block size also affects the performance of certain metadata operations, in particular, block allocation performance. The GPFS block allocation map is stored in blocks, which are similar to regular files. When the block size is small, the following points should be considered:
It takes more blocks to store a given amount of data, which results in more work to allocate those blocks.
One block of allocation map data contains less information.
 
Important: The choice of block size is important for large file systems. For file systems larger than 100 TB, you should use a block size of at least 256 KB.
Fragments and subblocks
This section describes file system planning considerations and provides a description of how IBM Spectrum Scale manages fragments and subblocks.
File system creation considerations
File system creation involves anticipating usage within the file system and considering your hardware configurations. Before you create a file system, consider how much data is stored and the amount of demand for the files in the system.
Each of these factors can help you to determine how much disk resource to devote to the file system, which block size to choose, where to store data and metadata, and how many replicas to maintain. For the current supported file system size, see the current IBM Spectrum Scale FAQ at this website:
Your IBM Spectrum Scale file system is created by running the mmcrfs command. Table 3-3 details the file system creation options that are specified in the mmcrfs command, which options can be changed later by using the mmchfs command, and the default values.
For more information about moving an existing file system into a new GPFS or IBM Spectrum Scale cluster, see Exporting file system definitions between clusters in the IBM Spectrum Scale V4.1.1: Advanced Administration Guide, which is available at this website:
Table 3-3 IBM Spectrum Scale file system configurations
The list shows, for each option, whether it can be specified on the mmcrfs command, whether it can be changed later with the mmchfs command (an X indicates that the option is available on that command), and its default value.
Device name of the file system (mmcrfs: X; mmchfs: X; default: none)
DiskDesc for each disk in your file system (mmcrfs: X; mmchfs: issue the mmadddisk or mmdeldisk command to add or delete disks from the file system; default: none)
-F StanzaFile to specify a file that contains a list of NSD stanzas (mmcrfs: X; mmchfs: issue the mmadddisk or mmdeldisk command to add or delete disks as indicated in the stanza file; default: none)
-A {yes | no | automount} to determine when to mount the file system (mmcrfs: X; mmchfs: X; default: yes)
-B BlockSize to set the data block size: 64K, 128K, 256K, 512K, 1M, 2M, 4M, 8M, or 16M (mmcrfs: X; mmchfs: cannot be changed without re-creating the file system; default: 256K)
-D {posix | nfs4} to set the semantics for a deny-write open lock (mmcrfs: X; mmchfs: X; default: nfs4)
-E {yes | no} to report exact mtime values (mmcrfs: X; mmchfs: X; default: yes)
-i InodeSize to set the size of inodes: 512, 1024, or 4096 bytes (mmcrfs: X; mmchfs: cannot be changed; default: 4096)
-j {cluster | scatter} to determine the block allocation map type (mmcrfs: X; mmchfs: N/A; default: see Block allocation map)
-k {posix | nfs4 | all} to determine the authorization types that are supported by the file system (mmcrfs: X; mmchfs: X; default: all)
-K {no | whenpossible | always} to enforce strict replication (mmcrfs: X; mmchfs: X; default: whenpossible)
-L LogFileSize to specify the size of the internal log file (mmcrfs: X; mmchfs: cannot be changed; default: 4 MB)
-m DefaultMetadataReplicas (mmcrfs: X; mmchfs: X; default: 1)
-M MaxMetadataReplicas (mmcrfs: X; mmchfs: cannot be changed; default: 2)
-n NumNodes to mount the file system (mmcrfs: X; mmchfs: X; default: 32)
-o MountOptions to be passed to the mount command (mmcrfs: N/A; mmchfs: X; default: none)
-Q {yes | no} to activate quota (mmcrfs: X; mmchfs: X; default: no)
-r DefaultDataReplicas (mmcrfs: X; mmchfs: X; default: 1)
-R MaxDataReplicas (mmcrfs: X; mmchfs: cannot be changed; default: 2)
-S {yes | no | relatime} to control how the atime value is updated (mmcrfs: X; mmchfs: X; default: no)
-t DriveLetter (mmcrfs: X; mmchfs: X; default: none)
-T Mountpoint (mmcrfs: X; mmchfs: X; default: /gpfs/DeviceName)
-V {full | compat} to change the file system format to the latest level (mmcrfs: N/A; mmchfs: X; default: none)
-W NewDeviceName to assign a new device name to the file system (mmcrfs: N/A; mmchfs: X; default: none)
-z {yes | no} to enable DMAPI (mmcrfs: X; mmchfs: X; default: no)
--inode-limit MaxNumInodes[:NumInodestoPreallocate] to determine the maximum number of files in the file system (mmcrfs: X; mmchfs: X; default: file system size/1 MB)
--perfileset-quota to enable or disable per-fileset user and group quotas (mmcrfs: X; mmchfs: X)
--filesetdf to specify (when quotas are enforced for a fileset) whether the df command reports numbers based on the quotas for the fileset and not for the total file system (mmcrfs: X; mmchfs: X; default: --nofilesetdf)
--metadata-block-size MetadataBlockSize to specify the block size for the system storage pool (mmcrfs: X; mmchfs: N/A; default: the same as the value set by -B BlockSize)
--mount-priority Priority to control the order in which the individual file systems are mounted at daemon startup or when one of the all keywords is specified on the mmmount command (mmcrfs: X; mmchfs: X; default: 0)
--version VersionString to enable only the file system features that are compatible with the specified release (mmcrfs: X; mmchfs: N/A; default: 3.5.0.0)
Specifying the maximum number of files that can be created
The maximum number of files that can be created can be specified by using the --inode-limit option on the mmchfs command. Allowable values, which range from the current number of created inodes (determined by running the mmdf command with the -F option) through the maximum number of files that are supported, are constrained by the following formula:
Maximum number of files = (total file system space) / (inode size + subblock size)
You can determine the inode size (-i) and subblock size (value of the -B parameter / 32) of a file system by running the mmlsfs command. The maximum number of files in a file system can be specified at file system creation by using the --inode-limit option on the mmcrfs command, or it can be increased later by using --inode-limit on the mmchfs command. This value defaults to the size of the file system at creation divided by 1 MB and cannot exceed the architectural limit. When the file system is created, it uses the default inode size of 4096 bytes to determine how many inodes can be allocated on the file system.
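For example, the current values and the inode limit can be checked and adjusted as shown in the following sketch; gpfs0 and the new inode limit are placeholders.
# Display the inode size (-i) and the block size (-B) of the file system
mmlsfs gpfs0 -i -B
# Display the number of allocated and free inodes
mmdf gpfs0 -F
# Increase the maximum number of inodes (files) in the file system
mmchfs gpfs0 --inode-limit 20000000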
The --inode-limit option applies only to the root file set. When there are multiple inode spaces, use the --inode-space option of the mmchfileset command to alter the inode limits of independent file sets. The mmchfileset command can also be used to modify the root inode space. The --inode-space option of the mmlsfs command shows the sum of all inode spaces.
Inodes are allocated when they are used. When a file is deleted, the inode is reused, but inodes are never deallocated. When the maximum number of inodes in a file system is set, there is the option to preallocate inodes. However, in most cases there is no need to preallocate inodes because, by default, inodes are allocated in sets as needed. If you do decide to preallocate inodes, be careful not to preallocate more inodes than are used; otherwise, the allocated inodes unnecessarily use metadata space that cannot be reclaimed.
These options limit the maximum number of files that can actively exist within a file system. However, the maximum number of files in the file system can be restricted further by IBM Spectrum Scale so that the control structures that are associated with each file do not use all of the file system space.
When you are managing inodes, the following points should be considered:
For file systems that are supporting parallel file creates, as the total number of free inodes drops below 5% of the total number of inodes, there is the potential for slowdown in the file system access. Take this into consideration when you are creating or changing your file system. Run the mmdf command to show the number of free inodes.
Excessively increasing the value for the maximum number of files can cause the allocation of too much disk space for control structures.
Fragments and subblocks
IBM Spectrum Scale divides each block into 32 subblocks. Files smaller than one block size are stored in fragments, which are made up of one or more subblocks. Large files are stored in a number of full blocks and zero or more subblocks to hold the data at the end of the file.
The block size is the largest contiguous amount of disk space that is allocated to a file and is the largest amount of data that can be accessed in a single I/O operation. The subblock is the smallest unit of disk space that can be allocated. For a block size of 256 KB, IBM Spectrum Scale reads as much as 256 KB of data in a single I/O operation and small files can occupy as little as 8 KB of disk space. With a block size of 16 KB, small files occupy as little as 512 bytes of disk space (not including the inode), but IBM Spectrum Scale cannot read more than 16 KB in a single I/O operation.
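As a worked example, with a 2 MB block size the subblock size is 2 MB / 32 = 64 KB. In that case, a single I/O operation can read or write up to 2 MB, and a small file occupies at least 64 KB of disk space (not including the inode).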
IBM Spectrum Scale settings for performance improvement
The following IBM Spectrum Scale configuration attributes can be tuned to optimize performance. The attributes need to be adjusted per node and cluster configuration, so the values used in the lab environment might differ from the ones in your environment. See 8.9, “Configuring IBM Spectrum Scale settings for performance improvement” on page 247 for the values used in the lab environment.
Here are some of the IBM Spectrum Scale settings that can affect the performance of IBM Spectrum Archive EE (these settings are not comprehensive):
pagepool
workerThreads
nsdBufSpace
nsdMaxWorkerThreads
nsdMinWorkerThreads
nsdMultiQueue
nsdMultiQueueType
nsdSmallThreadRatio
nsdThreadsPerQueue
numaMemoryInterleave
maxStatCache
ignorePrefetchLUNCount
logPingPongSector
scatterBufferSize
These settings can be retrieved by running the mmlsconfig command and can be updated by running the mmchconfig command. With a 2 MB block size and workerThreads set to 1024, set the pagepool to at least 4 GB; a value between 4 GB and 7 GB is a good starting point for better performance. An example follows.
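The following is a sketch only. The values are starting points, not recommendations for every cluster, and a change to workerThreads takes effect only after the GPFS daemon is restarted.
# Display the current values of the attributes
mmlsconfig pagepool
mmlsconfig workerThreads
# Set the page pool and worker threads cluster-wide
mmchconfig pagepool=4G,workerThreads=1024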
The following settings are changed to these values by running the ltfsee_config command:
worker1Threads 512
dmapiWorkerThreads 64
Settings that are too high or too low can also affect IBM Spectrum Archive EE performance.
For example, if the system memory that is available for IBM Spectrum Archive EE is not large enough for a 28 GB page pool and 15 GB of token memory, unexpected Linux memory swapping occurs, which degrades system performance. In such a case, these settings should be reduced to fit the available memory space.
For more information about the parameters, see this website:
3.4.2 IBM Spectrum Archive EE metadata file system
IBM Spectrum Archive EE requires space for the file metadata that is stored on an IBM Spectrum Scale file system. If this metadata file system is separate from the IBM Spectrum Scale space-managed file systems, you must ensure that the size and number of inodes of the metadata file system is large enough to handle the number of migrated files.
The IBM Spectrum Archive EE metadata directory can be stored in its own IBM Spectrum Scale file system or it can share the IBM Spectrum Scale file system that is being space-managed.
When the IBM Spectrum Archive EE metadata directory shares the IBM Spectrum Scale file system that is being space-managed, it has the advantage of flexibility through shared resources: the space-managed data and the IBM Spectrum Archive EE metadata can accommodate each other by growing and shrinking as needed. Thus, a single file system is recommended. For metadata optimization, it is preferable to put the GPFS metadata and the IBM Spectrum Archive metadata on SSDs or flash storage.
 
Note: It is recommended just to have a single file system.
The size requirements of the IBM Spectrum Scale file system that is used to store the IBM Spectrum Archive EE metadata directory depends on the block size and the number of files that are migrated to IBM Spectrum Archive EE.
The following calculation produces an estimate of the minimum number of inodes that the IBM Spectrum Scale file system must have available, and it depends on the number of cartridges:
Number of inodes = 500 + (15 x c) (Where c is the number of cartridges.)
 
 
 
Important: If there is more than one tape library, the number of cartridges in your calculation must be the total number of cartridges in all libraries.
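For example, a configuration with a total of 1,000 cartridges across all libraries requires at least 500 + (15 x 1000) = 15,500 inodes in the metadata file system.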
The following calculation produces an estimate of the size of the metadata that the IBM Spectrum Scale file system must have available:
Number of GBs = 10 + (3 x F x N)
where:
F is the number of files, in millions, to migrate.
N is the number of replicas to create.
For example, to migrate 50 million files to two tape storage pools, 310 GB of metadata is required:
10 + (3 x 50 x 2) = 310 GB
3.4.3 Redundant copies
The purpose of redundant copies is to enable the creation of multiple LTFS copies of each GPFS file during migration. One copy is considered to be the primary and the other copies are considered the redundant copies. The redundant copies can be created only in pools that are different from the pool of the primary copy and different from the pools of other redundant copies. The maximum number of redundant copies is two. The primary copy and redundant copies can be in a single tape library or spread across two tape libraries.
Thus, to ensure that file migration can occur on redundant copy tapes, the number of tapes in the redundant copy pools must be the same or greater than the number of tapes in the primary copy pool. For example, if the primary copy pool has 10 tapes, the redundant copy pools also should have at least 10 tapes. For more information about redundant copies, see 7.10.4, “Replicas and redundant copies” on page 186.
 
Note: The most common setup is to have two copies, that is, one primary pool and one copy pool with the same number of tapes.
3.4.4 Performance
Performance planning is an important aspect of any IBM Spectrum Archive EE implementation, specifically migration performance. Migration performance is the rate at which IBM Spectrum Archive EE can move data from disk to tape and then free up space on disk. The number of tape drives (including the tape drive generation) and servers that are required for the configuration can be determined based on the amount of data that needs to be moved per day and an estimate of the average file size of that data.
 
Note: Several components of the reference architecture affect the overall migration performance of the solution, including backend disk speeds, SAN connectivity, NUMA node configuration, and amount of memory. Thus, this migration performance data should be used as a guideline only and any final migration performance measurements should be done on the actual customer hardware.
For additional migration performance information, see the IBM Spectrum Archive Enterprise Edition V1.2.2 Performance White Paper at:
The configuration shown in Figure 3-2 was used to run the lab performance test in this section.
Figure 3-2 IBM Spectrum Archive EE configuration used in lab performance
The performance data shown in this section was derived by using two x3850 X5 servers, each with multiple QLogic QLE2562 8 Gb FC HBA cards. The servers were also modified by moving all the RAM to a single CPU socket to create a multi-CPU, single NUMA node, and the HBAs were relocated so that they are all on that NUMA node.
This modification was made so that memory can be shared locally to improve performance. The switch used in the lab was an IBM 2498 model B40 8 Gb FC switch that was zoned so that it had a zone for each HBA in each node. An IBM Storwize® V7000 disk storage unit was used for the disk space, and either TS1150 or TS1070 drives were used in a TS4500 tape library.
Figure 3-2 on page 57 shows the example configuration. This is one of many configurations. Yours might be different.
Figure 3-3 shows the basic internal configuration of Figure 3-2 on page 57 in more detail. The two x3850 X5 servers use a single NUMA node (NUMA 0) and have all the HBAs running off it, which allows all CPUs within the servers to work more efficiently. The 8 Gb Fibre Channel switch is zoned so that each zone handles a single HBA on each server.
Figure 3-3 Internal diagram of two nodes running off NUMA 0 connected to an 8 Gb switch with four drives per node and one external storage unit
 
Note: Figure 3-3 shows a single Fibre Channel cable going to each drive zone; however, it is recommended to have a second Fibre Channel cable for failover scenarios. For the same reason, the zone that goes to the external storage unit has two Fibre Channel cables from one HBA on each server.
This diagram should be used as a guide for setting up your own environment for the best performance. It is one of many possible configurations. For instance, if you have more drives, you can add more tape drives per zone, or add more HBAs on each server and create a new zone.
Table 3-4 and Table 3-5 show the raw data of the total combined transfer rate in MiB/s (the combined transfer rate of all drives) for multiple node configurations with various file sizes. In these tables, N represents the number of nodes, D represents the number of drives per node, and T represents the total number of drives for the configuration.
With a TS1150 configuration of 1N4D4T, you can expect to see a combined total transfer rate of 1244.9 MiB/s for 10 GiB files. If that configuration is doubled to 2N4D8T, the total combined transfer rate is nearly doubled to 2315.3 MiB/s for 10 GiB files. With this information, you can estimate the total combined transfer rate for your configuration.
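For example, assuming that the 2N4D8T TS1150 configuration sustains 2315.3 MiB/s for 10 GiB files around the clock, it could migrate roughly 2315.3 x 86,400 seconds, or about 190 TiB, per day. Actual daily throughput is lower because of tape mounts and other overhead, so treat such estimates as upper bounds.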
Table 3-4 TS1150 raw performance data for multiple node/drive configurations with 5 MiB, 10 MiB, 100 MiB, 1 GiB, and 10 GiB files
Node/drive configuration, by file size: 5 MiB | 10 MiB | 100 MiB | 1 GiB | 10 GiB (values in MiB/s)
8 Drives (2N4D8T): 369.0 | 577.3 | 1473.4 | 2016.4 | 2315.3
6 Drives (2N3D6T): 290.6 | 463.2 | 1229.5 | 1656.2 | 1835.5
4 Drives (1N4D4T): 211.3 | 339.3 | 889.2 | 1148.4 | 1244.9
3 Drives (1N3D3T): 165.3 | 267.1 | 701.8 | 870.6 | 931.0
2 Drives (1N2D2T): 114.0 | 186.3 | 465.8 | 583.6 | 624.3
Table 3-5 shows the raw performance for the LTO7 drives.
Table 3-5 LTO7 raw performance data for multiple node/drive configurations with 5 MiB, 10 MiB, 100 MiB, 1 GiB, and 10 GiB files
Node/drive configuration, by file size: 5 MiB | 10 MiB | 100 MiB | 1 GiB | 10 GiB (values in MiB/s)
8 Drives (2N4D8T): 365.4 | 561.8 | 1287.8 | 1731.8 | 1921.7
6 Drives (2N3D6T): 286.3 | 446.5 | 1057.9 | 1309.4 | 1501.7
4 Drives (1N4D4T): 208.5 | 328.1 | 776.7 | 885.6 | 985.1
3 Drives (1N3D3T): 162.9 | 254.4 | 605.5 | 668.3 | 749.1
2 Drives (1N2D2T): 111.0 | 178.4 | 406.7 | 439.2 | 493.7
Figure 3-4 shows a comparison line graph of the raw performance data obtained for the TS1150 and LTO7 drives by using the same configurations. For small files, the difference between the two types of drives is minimal. However, when migrating file sizes of 1 GiB and greater, there is a noticeable difference. Comparing the largest configuration of 2N4D8T, the LTO7 configuration peaks at a total combined transfer rate of 1921.7 MiB/s, whereas the same configuration with TS1150 drives peaks at a total combined transfer rate of 2315.3 MiB/s.
Figure 3-4 Comparison between TS1150 and LTO-7 drives using multiple node/drive configurations
 