Overview of the IBM Storwize V5000 Gen2 system
This chapter provides an overview of the IBM Storwize V5000 Gen2 architecture and includes a brief explanation of storage virtualization.
Specifically, this chapter provides information about the following topics:
Overview
Terminology
Models
IBM Storwize V5000 Gen1 and Gen2 compatibility
Hardware
Terms
Features
1.1 IBM Storwize V5000 Gen2 overview
The IBM Storwize V5000 Gen2 solution is a modular entry-level and midrange storage solution. The IBM Storwize V5000 Gen2 includes the capability to virtualize its own internal Redundant Array of Independent Disks (RAID) storage and existing external storage area network (SAN)-attached storage (the Storwize V5030 only).
The three IBM Storwize V5000 Gen2 models (Storwize V5010, Storwize V5020, and Storwize V5030) offer a range of performance scalability and functional capabilities. Table 1-1 shows a summary of the features of these models.
Table 1-1 IBM Storwize V5000 Gen2 models
 
Feature | Storwize V5010 | Storwize V5020 | Storwize V5030
CPU cores | 2 | 2 | 6
Cache | 16 GB | Up to 32 GB | Up to 64 GB
Supported expansion enclosures | 10 | 10 | 20
External storage virtualization | No | No | Yes
Compression | No | No | Yes
Encryption | No | Yes | Yes
For a more detailed comparison, see Table 1-3 on page 6.
IBM Storwize V5000 Gen2 features the following benefits:
Enterprise technology available to entry and midrange storage
Expert administrators are not required
Easy client setup and service
Simple integration into the server environment
Ability to grow the system incrementally as storage capacity and performance needs change
The IBM Storwize V5000 Gen2 addresses the block storage requirements of small and midsize organizations. The IBM Storwize V5000 Gen2 consists of one 2U control enclosure and, optionally, up to ten 2U expansion enclosures on the Storwize V5010 and Storwize V5020 systems and up to twenty 2U expansion enclosures on the Storwize V5030 systems. The control and expansion enclosures are connected by serial-attached SCSI (SAS) cables and make up one system that is called an I/O group.
With the Storwize V5030 systems, two I/O groups can be connected to form a cluster, which provides a maximum of two control enclosures and 40 expansion enclosures. With the High Density expansion drawers, up to 16 expansion enclosures can be attached to a cluster.
The control and expansion enclosures are available in the following form factors, and they can be intermixed within an I/O group:
12 x 3.5-inch (8.89-centimeter) drives in a 2U unit
24 x 2.5-inch (6.35-centimeter) drives in a 2U unit
92 x 3.5-inch drives, or 2.5-inch drives in carriers, in a 5U unit
Two canisters are in each enclosure. Control enclosures contain two node canisters, and expansion enclosures contain two expansion canisters.
The IBM Storwize V5000 Gen2 supports up to 1,520 x 2.5-inch drives, 3.5-inch drives, or a combination of both drive form factors for the internal storage in a two-I/O-group Storwize V5030 cluster.
SAS, Nearline (NL)-SAS, and flash drive types are supported.
The IBM Storwize V5000 Gen2 is designed to accommodate the most common storage network technologies to enable easy implementation and management. It can be attached to hosts through a Fibre Channel (FC) SAN fabric, an Internet Small Computer System Interface (iSCSI) infrastructure, or SAS. Hosts can be attached directly or through a network.
Important: For more information about supported environments, configurations, and restrictions, see the IBM System Storage Interoperation Center (SSIC) website.
The IBM Storwize V5000 Gen2 is a virtualized storage solution that groups its internal drives into RAID arrays, which are called managed disks (MDisks). MDisks can also be created on the Storwize V5030 systems by importing logical unit numbers (LUNs) from external FC SAN-attached storage. These MDisks are then grouped into storage pools. Volumes are created from these storage pools and provisioned out to hosts.
Storage pools are normally created with MDisks of the same drive type and drive capacity. Volumes can be moved non-disruptively between storage pools with differing performance characteristics. For example, a volume can be moved from a storage pool that is made up of NL-SAS drives to a storage pool that is made up of SAS drives to improve performance.
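A migration of this kind can also be started from the CLI. The following is a minimal sketch; the volume name vol01 and the target pool name Pool_SAS are examples only:
    svctask migratevdisk -mdiskgrp Pool_SAS -vdisk vol01
The migration runs in the background, and its progress can be checked with the lsmigrate command.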
The IBM Storwize V5000 Gen2 system provides several configuration options to simplify the implementation process, and automated wizards that are called Directed Maintenance Procedures (DMPs) to help resolve any events that might occur.
Included with an IBM Storwize V5000 Gen2 system is a simple and easy-to-use graphical user interface (GUI) to enable storage to be deployed quickly and efficiently. The GUI runs on any supported browser. The management GUI contains a series of preestablished configuration options that are called presets that use commonly used settings to quickly configure objects on the system. Presets are available for creating volumes and IBM FlashCopy® mappings and for setting up a RAID configuration.
You can also use the command-line interface (CLI) to set up or control the system.
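For example, the following CLI sketch, run over an SSH connection to the management IP address, creates a volume and maps it to a host. The pool, volume, and host names are examples only:
    svctask mkvdisk -mdiskgrp Pool0 -iogrp io_grp0 -size 100 -unit gb -name vol01
    svctask mkvdiskhostmap -host host01 -scsi 0 vol01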
1.2 IBM Storwize V5000 Gen2 terminology
The IBM Storwize V5000 Gen2 system uses terminology that is consistent with the entire IBM Storwize family and the IBM SAN Volume Controller. The terms are defined in Table 1-2. More terms can be found in Appendix B, “Terminology” on page 803.
Table 1-2 IBM Storwize V5000 Gen2 terminology
IBM Storwize V5000 Gen2 term
Definition
Battery
Each control enclosure node canister in an IBM Storwize V5000 Gen2 contains a battery.
Chain
Each control enclosure has either one or two chains, which are used to connect expansion enclosures to provide redundant connections to the inside drives.
Clone
A copy of a volume on a server at a particular point in time. The contents of the copy can be customized and the contents of the original volume are preserved.
Control enclosure
A hardware unit that includes a chassis, node canisters, drives, and power sources.
Data migration
IBM Storwize V5000 Gen2 can migrate data from existing external storage to its internal volumes.
Distributed RAID (DRAID)
No dedicated spare drives are in an array. The spare capacity is distributed across the array, which allows faster rebuild of the failed disk.
Drive
IBM Storwize V5000 Gen2 supports a range of hard disk drives (HDDs) and Flash Drives.
Event
An occurrence that is significant to a task or system. Events can include the completion or failure of an operation, a user action, or the change in the state of a process.
Expansion canister
A hardware unit that includes the SAS interface hardware that enables the control enclosure hardware to use the drives of the expansion enclosure. Each expansion enclosure has two expansion canisters.
Expansion enclosure
A hardware unit that includes expansion canisters, drives, and power supply units.
External storage
MDisks that are SCSI logical units (LUs) that are presented by storage systems that are attached to and managed by the clustered system.
Fibre Channel port
Fibre Channel ports are connections through which hosts access the IBM Storwize V5000 Gen2.
Host mapping
The process of controlling which hosts can access specific volumes within an IBM Storwize V5000 Gen2.
Internal storage
Array MDisks and drives that are held in enclosures that are part of the IBM Storwize V5000 Gen2.
iSCSI (Internet Small Computer System Interface)
Internet Protocol (IP)-based storage networking standard for linking data storage facilities.
Managed disk (MDisk)
A component of a storage pool that is managed by a clustered system. An MDisk is part of a RAID array of internal storage or a SCSI LU for external storage. An MDisk is not visible to a host system on the SAN.
Node canister
A hardware unit that includes the node hardware, fabric, and service interfaces, SAS expansion ports, and battery. Each control enclosure contains two node canisters.
PHY
A single SAS lane. Four PHYs are in each SAS cable.
Power Supply Unit
Each enclosure has two power supply units (PSUs).
Quorum disk
A disk that contains a reserved area that is used exclusively for cluster management. The quorum disk is accessed when it is necessary to determine which half of the cluster continues to read and write data.
Serial-Attached SCSI (SAS) ports
SAS ports are connections for expansion enclosures and direct attachment of hosts to access the IBM Storwize V5000 Gen2.
Snapshot
An image backup type that consists of a point-in-time view of a volume.
Storage pool
An amount of storage capacity that provides the capacity requirements for a volume.
Strand
The SAS connectivity of a set of drives within multiple enclosures. The enclosures can be control enclosures or expansion enclosures.
Thin provisioning or thin provisioned
The ability to define a storage unit (full system, storage pool, or volume) with a logical capacity size that is larger than the physical capacity that is assigned to that storage unit.
Traditional RAID (TRAID)
Traditional RAID uses the standard RAID levels.
Volume
A discrete unit of storage on disk, tape, or other data recording medium that supports a form of identifier and parameter list, such as a volume label or input/output control.
Worldwide port names
Each Fibre Channel port and SAS port is identified by its physical port number and worldwide port name (WWPN).
1.3 IBM Storwize V5000 Gen2 models
The IBM Storwize V5000 Gen2 platform consists of different models. Each model type supports a different set of features, as shown in Table 1-3.
Table 1-3 IBM Storwize V5000 feature comparison
Feature | V5000 Gen1 | V5010 | V5020 | V5030
Cache | 16 GB | 16 GB | 16 GB or 32 GB | 32 GB or 64 GB
CPU | 4-core Ivy Bridge Xeon, 2 GHz | 2-core Broadwell-DE Celeron, 1.2 GHz | 2-core Broadwell-DE Xeon, 2.2 GHz, Hyper-Threading | 6-core Broadwell-DE Xeon, 1.9 GHz, Hyper-Threading
Compression | None | None | None | Licensed (with 64 GB cache only)
DRAID | Yes | Yes | Yes | Yes
SAS HW Encryption | None | None | Licensed | Licensed
External Virtualization | Licensed | Data Migration Only | Data Migration Only | Licensed
IBM Easy Tier® | Licensed | Licensed | Licensed | Licensed
FlashCopy | Licensed | Licensed | Licensed | Licensed
HyperSwap | Yes | No | No | Yes
Remote Copy | Licensed | Licensed | Licensed | Licensed
Thin Provisioning | Yes | Yes | Yes | Yes
Traditional RAID | Yes | Yes | Yes | Yes
Volume Mirroring | Yes | Yes | Yes | Yes
VMware Virtual Volumes (VVols) | Yes | Yes | Yes | Yes
More information: For more information about the features, benefits, and specifications of IBM Storwize V5000 Gen2 models, see the following website:
The information in this book is accurate at the time of writing. However, as the IBM Storwize V5000 Gen2 matures, expect to see new features and enhanced specifications.
The IBM Storwize V5000 Gen2 models are described in Table 1-4. All control enclosures have two node canisters. F models are expansion enclosures.
Table 1-4 IBM Storwize V5000 Gen2 models
Model | Description | Cache | Drive slots

One-year warranty
2077-112 | IBM Storwize V5010 large form factor (LFF) Control Enclosure | 16 GB | 12 x 3.5-inch
2077-124 | IBM Storwize V5010 small form factor (SFF) Control Enclosure | 16 GB | 24 x 2.5-inch
2077-212 | IBM Storwize V5020 LFF Control Enclosure | 16 GB or 32 GB | 12 x 3.5-inch
2077-224 | IBM Storwize V5020 SFF Control Enclosure | 16 GB or 32 GB | 24 x 2.5-inch
2077-312 | IBM Storwize V5030 LFF Control Enclosure | 32 GB or 64 GB | 12 x 3.5-inch
2077-324 | IBM Storwize V5030 SFF Control Enclosure | 32 GB or 64 GB | 24 x 2.5-inch
2077-AF3 | IBM Storwize V5030F All-Flash Array Control Enclosure | 64 GB | 24 x 2.5-inch
2077-12F | IBM Storwize V5000 LFF Expansion Enclosure | N/A | 12 x 3.5-inch
2077-24F | IBM Storwize V5000 SFF Expansion Enclosure | N/A | 24 x 2.5-inch
2077-AFF | IBM Storwize V5030F SFF Expansion Enclosure | N/A | 24 x 2.5-inch
2077-A9F | IBM Storwize V5030F High Density LFF Expansion Enclosure | N/A | 92 x 3.5-inch

Three-year warranty
2078-112 | IBM Storwize V5010 LFF Control Enclosure | 16 GB | 12 x 3.5-inch
2078-124 | IBM Storwize V5010 SFF Control Enclosure | 16 GB | 24 x 2.5-inch
2078-212 | IBM Storwize V5020 LFF Control Enclosure | 16 GB or 32 GB | 12 x 3.5-inch
2078-224 | IBM Storwize V5020 SFF Control Enclosure | 16 GB or 32 GB | 24 x 2.5-inch
2078-312 | IBM Storwize V5030 LFF Control Enclosure | 32 GB or 64 GB | 12 x 3.5-inch
2078-324 | IBM Storwize V5030 SFF Control Enclosure | 32 GB or 64 GB | 24 x 2.5-inch
2078-AF3 | IBM Storwize V5030F All-Flash Array Control Enclosure | 64 GB | 24 x 2.5-inch
2078-12F | IBM Storwize V5000 LFF Expansion Enclosure | N/A | 12 x 3.5-inch
2078-24F | IBM Storwize V5000 SFF Expansion Enclosure | N/A | 24 x 2.5-inch
2078-AFF | IBM Storwize V5030F SFF Expansion Enclosure | N/A | 24 x 2.5-inch
2078-A9F | IBM Storwize V5030F High Density LFF Expansion Enclosure | N/A | 92 x 3.5-inch
Storwize V5030F control enclosures support only the attachment of Storwize V5030F expansion enclosures (Models AFF and A9F). Storwize V5000 expansion enclosures (Models 12E, 24E, 12F, 24F, and 92F) are not supported with Storwize V5030F control enclosures.
Storwize V5030F expansion enclosures are only supported for attachment to Storwize V5030F control enclosures. Storwize V5000 control enclosures (Models 12C, 24C, 112, 124, 212, 224, 312, and 324) do not support the attachment of Storwize V5030F expansion enclosures.
Table 1-5 shows the mix rules for 2U expansion enclosures and 5U expansion enclosures. The table shows the maximum number of drive slots per SAS expansion string, excluding any drives in the control enclosure itself.
Table 1-5 2U expansion enclosures and 5U expansion enclosure mix rules
 
Rows show the number of 5U enclosures, columns show the number of 2U enclosures, and each cell shows the total number of drive slots.

5U \ 2U | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
0 | 0 | 24 | 48 | 72 | 96 | 120 | 144 | 168 | 192 | 216 | 240
1 | 92 | 116 | 140 | 164 | 188 | 212 | 236 | 260 | - | - | -
2 | 184 | 208 | 232 | 256 | 280 | 304 | - | - | - | - | -
3 | 276 | 300 | 324 | - | - | - | - | - | - | - | -
4 | 368 | - | - | - | - | - | - | - | - | - | -
The Storwize V5030 systems can be added to an existing IBM Storwize V5000 Gen1 cluster to form a two-I/O group configuration. This configuration can be used as a migration mechanism to upgrade from the IBM Storwize V5000 Gen1 to the IBM Storwize V5000 Gen2.
The IBM Storwize V5000 Gen1 models are described in Table 1-6 for completeness.
Table 1-6 IBM Storwize V5000 Gen1 models
Model | Cache | Drive slots

One-year warranty
2077-12C | 16 GB | 12 x 3.5-inch
2077-24C | 16 GB | 24 x 2.5-inch
2077-12E | N/A | 12 x 3.5-inch
2077-24E | N/A | 24 x 2.5-inch

Three-year warranty
2078-12C | 16 GB | 12 x 3.5-inch
2078-24C | 16 GB | 24 x 2.5-inch
2078-12E | N/A | 12 x 3.5-inch
2078-24E | N/A | 24 x 2.5-inch
Figure 1-1 shows the front view of the 2077/2078-12 and 12F enclosures.
Figure 1-1 IBM Storwize V5000 Gen2 front view for 2077/2078-12 and 12F enclosures
The drives are positioned in four columns of three horizontally mounted drive assemblies. The drive slots are numbered 1 - 12, starting at the upper left and moving left to right, top to bottom.
Figure 1-2 shows the front view of the 2077/2078-24 and 24F enclosures.
Figure 1-2 IBM Storwize V5000 Gen2 front view for 2077/2078-24 and 24F enclosure
The drives are positioned in one row of 24 vertically mounted drive assemblies. The drive slots are numbered 1 - 24, starting from the left. A vertical center drive bay molding is between slots 12 and 13.
1.3.1 IBM Storage Utility Offerings
The IBM 2078 Model U5A is the IBM Storwize V5030 with a three-year warranty, for use in the Storage Utility Offering space. These models are physically and functionally identical to the Storwize V5030 model 324, except for target configurations and variable capacity billing. The variable capacity billing uses IBM Spectrum Control™ Storage Insights to monitor the system usage, which allows allocated storage usage above a base subscription rate to be billed per TB, per month.
Allocated storage is identified as storage that is allocated to a specific host (and unusable to other hosts), whether data is written or not. For thin-provisioning, the data that is actually written is considered used. For thick provisioning, total allocated volume space is considered used.
IBM Storage Utility Offerings include the IBM FlashSystem® 900 (9843-UF3), IBM Storwize V5030 (2078-U5A), and Storwize V7000 (2076-U7A) storage utility models that enable variable capacity usage and billing.
These models provide a fixed total capacity, with a base and variable usage subscription of that total capacity. IBM Spectrum Control Storage Insights is used to monitor the system capacity usage. It is used to report on capacity that is used beyond the base subscription capacity, which is referred to as variable usage. The variable capacity usage is billed on a quarterly basis. This approach enables customers to grow or shrink their usage and pay for only the configured capacity.
IBM Storage Utility Offering models are provided for customers who can benefit from a variable capacity system, where billing is based on actually provisioned space above the base. The base subscription is covered by a three-year lease that entitles the customer to utilize the base capacity at no additional cost. If storage needs increase beyond the base capacity, usage is billed based on the average daily provisioned capacity per terabyte, per month, on a quarterly basis.
Example
A customer has a Storwize V5030 utility model with 2 TB nearline disks, for a total system capacity of 48 TB. The base subscription for such a system is 16.8 TB. During the months where the average daily usage is below 16.8 TB, there is no additional billing.
The system monitors daily provisioned capacity and averages those daily usage rates over the month term. The result is the average daily usage for the month.
If a customer uses 25 TB, 42.6 TB, and 22.2 TB in three consecutive months, Storage Insights calculates the overage, rounded to the nearest terabyte, as shown in Table 1-7.
Table 1-7 Overage calculation
Average daily usage (TB) | Base (TB) | Overage (TB) | To be billed (TB)
25.0 | 16.8 | 8.2 | 8
42.6 | 16.8 | 25.8 | 26
22.2 | 16.8 | 5.4 | 5
The capacity billed at the end of the quarter will be a total of 39 TB-months in this example.
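The quarterly total follows directly from rounding each monthly overage to the nearest terabyte and summing the results:
\[ \sum_{m=1}^{3} \operatorname{round}(\text{avg}_m - 16.8) = 8 + 26 + 5 = 39 \ \text{TB-months} \]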
Disk expansions (2076-24F for the Storwize V7000 and 2078-24F for the Storwize V5030) may be ordered with the system at the initial purchase, but may not be added through MES. The expansions must have like-type and capacity drives, and must be fully populated.
For example, on a Storwize V7000 utility model with twenty-four 7.68 TB flash drives in the controller, a 2076-24F with twenty-four 7.68 TB drives may be configured with the initial system. Expansion drawers do not apply to FlashSystem 900 (9843-UF3). Storwize V5030 and Storwize V7000 utility model systems support up to 760 drives in the system.
The usage data collected by Storage Insights is used by IBM to determine the actual physical data provisioned in the system. This data is compared to the base system capacity subscription, and any provisioned capacity beyond that base subscription is billed per terabyte, per month, on a quarterly basis. The calculated usage is based on the average use over a given month.
In a highly variable environment, such as managed or cloud service providers, this enables the system to be utilized only as much as is necessary during any given month. Usage can increase or decrease, and is billed accordingly. Provisioned capacity is considered capacity that is reserved by the system. In thick-provisioned environments (available on FlashSystem 900 and Storwize), this is the capacity that is allocated to a host whether it has data written.
For thin-provisioned environments (available on the Storwize system), this is the data that is actually written and used. This is because of the different ways in which thick and thin provisioning utilize disk space.
These systems are available worldwide, but there are specific client and program differences by location. Consult with your IBM Business Partner or sales person for specifics.
1.4 IBM Storwize V5000 Gen1 and Gen2 compatibility
The Storwize V5030 systems can be added into existing Storwize V5000 Gen1 clustered systems. All systems within a cluster must use the same version of Storwize V5000 software, which is version 7.6.1 or later.
 
Restriction: The Storwize V5010 and Storwize V5020 are not compatible with V5000 Gen1 systems because they are not able to join an existing I/O group.
A single Storwize V5030 control enclosure can be added to a single Storwize V5000 cluster to bring the total number of I/O groups to two. They can be clustered by using either Fibre Channel (FC) or Fibre Channel over Ethernet (FCoE). The possible I/O group configuration options for all Storwize V5000 models are shown in Table 1-8.
Table 1-8 IBM Storwize V5000 I/O group configurations
I/O group 0 | I/O group 1
V5010 | N/A
V5020 | N/A
V5030 | N/A
V5030 | V5030
V5030 | V5000 Gen1
V5000 Gen1 | V5030
V5000 Gen1 | N/A
V5000 Gen1 | V5000 Gen1
1.5 IBM Storwize V5000 Gen2 hardware
The IBM Storwize V5000 Gen2 solution is a modular storage system that is built on a common enclosure platform that is shared by the control enclosures and expansion enclosures.
Figure 1-3 shows an overview of hardware components of the IBM Storwize V5000 Gen2 solution.
Figure 1-3 IBM Storwize V5000 Gen2 hardware components
Figure 1-4 shows the control enclosure rear view of an IBM Storwize V5000 Gen2 enclosure (the Storwize V5020).
Figure 1-4 Storwize V5020 control enclosure rear view
In Figure 1-4, you can see two power supply slots at the bottom of the enclosure. The power supplies are identical and exchangeable. Two canister slots are at the top of the chassis.
In Figure 1-5, you can see the rear view of an IBM Storwize V5000 Gen2 expansion enclosure.
Figure 1-5 IBM Storwize V5000 Gen2 expansion enclosure rear view
You can see that the only difference between the control enclosure and the expansion enclosure is the canister. The canisters of the expansion enclosure have only two SAS ports.
For more information about the expansion enclosure, see 1.5.5, “Expansion enclosure” on page 17.
1.5.1 Control enclosure
Each IBM Storwize V5000 Gen2 system has one control enclosure that contains two node canisters (nodes), disk drives, and two power supplies.
The two node canisters act as a single processing unit and form an I/O group that is attached to the SAN fabric, an iSCSI infrastructure, or that is directly attached to hosts through FC or SAS. The pair of nodes is responsible for serving I/O to a volume. The two nodes provide a highly available fault-tolerant controller so that if one node fails, the surviving node automatically takes over. Nodes are deployed in pairs that are called I/O groups.
One node is designated as the configuration node, but each node in the control enclosure holds a copy of the control enclosure state information.
The Storwize V5010 and Storwize V5020 support a single I/O group. The Storwize V5030 supports two I/O groups in a clustered system.
The terms node canister and node are used interchangeably throughout this book.
The battery is used if power is lost. The IBM Storwize V5000 Gen2 system uses this battery to power the canister while the cache data is written to the internal system flash. This memory dump is called a fire hose memory dump.
 
Note: The batteries of the IBM Storwize V5000 Gen2 can process two fire hose memory dumps in a row. After two consecutive dumps, you cannot power up the system immediately. You must wait until the batteries are charged above the level that allows them to run the next fire hose memory dump.
After the system is up again, this data is loaded back to the cache for destaging to the disks.
1.5.2 Storwize V5010
Figure 1-6 shows a single Storwize V5010 node canister.
Figure 1-6 Storwize V5010 node canister
Each Storwize V5010 node canister contains the following hardware:
Battery
Memory: 8 GB
One 12 Gbps SAS port for expansion enclosures
Two 10/100/1000 Mbps Ethernet ports
One USB 2.0 port that is used to gather system information
System flash
Host interface card (HIC) slot (different options are possible)
Figure 1-6 shows the following features that are provided by the Storwize V5010 node canister:
Two 10/100/1000 Mbps Ethernet ports. Port 1 must be used for management, and port 2 can optionally be used for management. Port 2 serves as a technician port (as denoted by the white box with “T” in it) for system initialization and service.
 
Note: All three models use a technician port to perform the initial setup. The implementation of the technician port varies between the models. On the Storwize V5010 and V5020, the second 1 GbE port (labeled T) is initially enabled as a technician port. After the cluster is created, this port is disabled and can then be used for I/O or management.
On the Storwize V5030, the onboard 1 GbE port (labeled T) is permanently enabled as a technician port. Connecting the technician port to the LAN disables the port. The Storwize V5010 and V5020 technician port can be re-enabled after the initial setup.
The following commands are used to enable or disable the technician port:
satask chserviceip -techport enable -force
satask chserviceip -techport disable
Both ports can be used for iSCSI traffic and IP replication. For more information, see Chapter 5, “Host configuration” on page 199 and Chapter 10, “Copy services” on page 481.
One USB port for gathering system information.
 
 
System initialization: Unlike the Storwize V5000 Gen1, you must perform the system initialization of the Storwize V5010 by using the technician port instead of the USB port.
One 12 Gbps serial-attached SCSI (SAS 3.0) port to connect to the optional expansion enclosures. The Storwize V5010 supports up to 10 expansion enclosures.
 
Important: The canister SAS port on the Storwize V5010 does not support SAS host attachment. The Storwize V5010 supports SAS hosts by using an optional host interface card. For more information, see 1.5.6, “Host interface cards” on page 18.
Do not use the port that is marked with a wrench. This port is a service port only.
1.5.3 Storwize V5020
Figure 1-7 shows a single Storwize V5020 node canister.
Figure 1-7 Storwize V5020 node canister
Each node canister contains the following hardware:
Battery
Memory: 8 GB upgradable to 16 GB
Three 12 Gbps SAS ports (two for host attachment and one for expansion enclosures)
Two 10/100/1000 Mbps Ethernet ports
One USB 2.0 port that is used to gather system information
System flash
HIC slot (different options are possible)
Figure 1-7 shows the following features that are provided by the Storwize V5020 node canister:
Two 10/100/1000 Mbps Ethernet ports. Port 1 must be used for management, and port 2 can optionally be used for management. Port 2 serves as a technician port (as denoted by the white box with “T” in it) for system initialization and service.
 
Note: All three models use a technician port to perform the initial setup. The implementation of the technician port varies between the models. On the Storwize V5010 and V5020, the second 1 GbE port (labeled T) is initially enabled as a technician port. After the cluster is created, this port is disabled and can then be used for I/O or management.
On the Storwize V5030, the onboard 1 GbE port (labeled T) is permanently enabled as a technician port. Connecting the technician port to the LAN disables the port. The Storwize V5010 and V5020 technician port can be re-enabled after the initial setup.
The following commands are used to enable or disable the technician port:
satask chserviceip -techport enable -force
satask chserviceip -techport disable
Both ports can be used for iSCSI traffic and IP replication. For more information, see Chapter 5, “Host configuration” on page 199 and Chapter 10, “Copy services” on page 481.
One USB port for gathering system information.
 
 
System initialization: Unlike the Storwize V5000 Gen1, you must perform the system initialization of the Storwize V5020 by using the technician port instead of the USB port.
Three 12 Gbps serial-attached SCSI (SAS 3.0) ports. The ports are numbered 1 - 3 from left to right. Port 1 is used to connect to the optional expansion enclosures. Ports 2 and 3 can be used to connect directly to SAS hosts. (Both 6 Gb and 12 Gb hosts are supported.) The Storwize V5020 supports up to 10 expansion enclosures.
 
 
Service port: Do not use the port that is marked with a wrench. This port is a service port only.
1.5.4 Storwize V5030
Figure 1-8 shows a single Storwize V5030 node canister.
Figure 1-8 Storwize V5030 node canister
Each node canister contains the following hardware:
Battery
Memory: 16 GB upgradable to 32 GB
Two 12 Gbps SAS ports for expansion enclosures
One 10/100/1000 Mbps Ethernet technician port
Two 1/10 Gbps Ethernet ports
One USB 2.0 port that is used to gather system information
System flash
HIC slot (different options are possible)
Figure 1-8 shows the following features that are provided by the Storwize V5030 node canister:
One Ethernet technician port (as denoted by the white box with “T” in it). This port can be used for system initialization and service only; it cannot be used for anything else. For more information, see Chapter 1, “Overview of the IBM Storwize V5000 Gen2 system” on page 1.
Two 1/10 Gbps Ethernet ports. These ports are Copper 10GBASE-T with RJ45 connectors. Port 1 must be used for management. Port 2 can optionally be used for management. Both ports can be used for iSCSI traffic and IP replication. For more information, see Chapter 5, “Host configuration” on page 199 and Chapter 10, “Copy services” on page 481.
 
Important: The 1/10 Gbps Ethernet ports do not support speeds less than 1 Gbps
(100 Mbps is not supported).
Ensure that you use the correct port connectors. The Storwize V5030 canister 10 Gbps connectors appear the same as the 1 Gbps connectors on the other Storwize V5000 models. These RJ45 connectors differ from the optical small form-factor pluggable (SFP+) connectors on the optional 10 Gbps HIC. When you plan to implement the Storwize V5030, ensure that any network switches provide the correct connector type.
One USB port to gather system information.
 
 
System initialization: Unlike the Storwize V5000 Gen1, you must perform the system initialization of the Storwize V5030 by using the technician port instead of the USB port.
Two 12 Gbps serial-attached SCSI (SAS 3.0) ports. The ports are numbered 1 and 2 from left to right to connect to the optional expansion enclosures. The Storwize V5030 supports up to 20 expansion enclosures. Ten expansion enclosures can be connected to each port.
 
Important: The canister SAS ports on the Storwize V5030 do not support SAS host attachment. The Storwize V5030 supports SAS hosts by using an HIC. See 1.5.6, “Host interface cards” on page 18.
Do not use the port that is marked with a wrench. This port is a service port only.
1.5.5 Expansion enclosure
The optional IBM Storwize V5000 Gen2 expansion enclosure contains two expansion canisters, disk drives, and two power supplies. Four types of expansion enclosures are available: the large form factor (LFF) 2U Expansion Enclosure Model 12F, the small form factor (SFF) 2U Expansion Enclosure Model 24F, the SFF 2U Expansion Enclosure Model AFF for flash drives, and the high-density 5U LFF Model 92F or its flash version, Model A9F. They are available with a one-year or three-year warranty.
Figure 1-9 shows the rear of the 2U expansion enclosure.
Figure 1-9 2U expansion enclosure of the IBM Storwize V5000 Gen2
The expansion enclosure power supplies are the same as the control enclosure power supplies. A single power lead connector is on each power supply unit.
Each expansion canister provides two SAS interfaces that are used to connect to the control enclosure and any further optional expansion enclosures. The ports are numbered 1 on the left and 2 on the right. SAS port 1 is the IN port, and SAS port 2 is the OUT port.
The use of SAS connector 1 is mandatory because the expansion enclosure must be attached to a control enclosure or another expansion enclosure further up in the chain. SAS connector 2 is optional because it is used to attach to further expansion enclosures down the chain.
The Storwize V5010 and Storwize V5020 support a single chain of up to 10 expansion enclosures that attach to the control enclosure. The Storwize V5030 supports up to 40 expansion enclosures in a configuration that consists of two control enclosures, which are each attached to 20 expansion enclosures in two separate chains.
Table 1-9 shows the maximum number of supported expansion enclosures and the drive limits for each model.
Table 1-9 Expansion enclosure and drive limits
 
Limit | V5010 | V5020 | V5030
Maximum number of supported expansion enclosures | 10 | 10 | 40
Maximum number of supported drives | 392 | 392 | 1,520
Each port includes two LEDs to show the status. The first LED indicates the link status and the second LED indicates the fault status.
For more information about LEDs and ports, see this website:
 
Restriction: The IBM Storwize V5000 Gen2 expansion enclosures can be used only with an IBM Storwize V5000 Gen2 control enclosure. The IBM Storwize V5000 Gen1 expansion enclosures cannot be used with an IBM Storwize V5000 Gen2 control enclosure.
1.5.6 Host interface cards
All IBM Storwize V5000 Gen2 models support Ethernet ports as standard for iSCSI connectivity. For the Storwize V5010 and Storwize V5020, these Ethernet ports are 1 GbE ports. For the Storwize V5030, these Ethernet ports are 10 GbE ports. The Storwize V5020 also includes 12 Gb SAS ports for host connectivity as standard.
Additional host connectivity options are available through an optional adapter card. Table 1-10 shows the available configurations for a single control enclosure.
Table 1-10 IBM Storwize V5000 Gen2 configurations available
 
Model | 1 Gb Ethernet (iSCSI) | 10 Gb Ethernet Copper 10GBASE-T (iSCSI) | 12 Gb SAS | 16 Gb FC | 10 Gb Ethernet Optical SFP+ (iSCSI/FCoE)
V5030 | 8 ports (with optional adapter card) | 4 ports (standard) | 8 ports (with optional adapter card) | 8 ports (with optional adapter card) | 8 ports (with optional adapter card)
V5020 | 4 ports (standard), plus 8 ports (with optional adapter card) | N/A | 4 ports (standard), plus 8 ports (with optional adapter card) | 8 ports (with optional adapter card) | 8 ports (with optional adapter card)
V5010 | 4 ports (standard), plus 8 ports (with optional adapter card) | N/A | 8 ports (with optional adapter card) | 8 ports (with optional adapter card) | 8 ports (with optional adapter card)
 
Optional adapter cards: Only one pair of identical adapter cards is allowed for each control enclosure.
1.5.7 Disk drive types
IBM Storwize V5000 Gen2 enclosures support Flash Drives, SAS, and Nearline SAS drive types. Each drive has two ports (two PHYs) to provide fully redundant access from each node canister. I/O can be issued down both paths simultaneously.
Table 1-11 shows the IBM Storwize V5000 Gen2 disk drive types, disk revolutions per minute (RPMs), and sizes that are available at the time of writing.
Table 1-11 IBM Storwize V5000 Gen2 disk drive types
Form factor | Drive type | RPM | Size
2.5-inch | Flash Drive | N/A | 400 GB, 800 GB, 1.6 TB, and 3.2 TB
2.5-inch | Read Intensive (RI) Flash Drive | N/A | 1.92 TB, 3.84 TB, and 7.68 TB
2.5-inch | SAS | 10,000 | 900 GB, 1.2 TB, and 1.8 TB
2.5-inch | SAS | 15,000 | 300 GB, 600 GB, and 900 GB
2.5-inch | Nearline SAS | 7,200 | 2 TB
3.5-inch | SAS | 10,000 | 900 GB, 1.2 TB, and 1.8 TB (a)
3.5-inch | SAS | 15,000 | 300 GB, 600 GB, and 900 GB (a)
3.5-inch | Nearline SAS | 7,200 | 4 TB, 6 TB, 8 TB, and 10 TB

(a) A 2.5-inch drive in a 3.5-inch drive carrier.
1.6 IBM Storwize V5000 Gen2 terms
In this section, we introduce the terms that are used for the IBM Storwize V5000 Gen2 throughout this book.
1.6.1 Hosts
A host system is a server that is connected to the IBM Storwize V5000 Gen2 through a Fibre Channel connection, an iSCSI connection, or a SAS connection.
Hosts are defined on IBM Storwize V5000 Gen2 by identifying their WWPNs for Fibre Channel and SAS hosts. The iSCSI hosts are identified by using their iSCSI names. The iSCSI names can be iSCSI qualified names (IQNs) or extended unique identifiers (EUIs). For more information, see Chapter 5, “Host configuration” on page 199.
Hosts can be Fibre Channel-attached through an existing Fibre Channel network infrastructure or direct-attached, iSCSI-attached through an existing IP network, or directly attached through SAS.
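As an illustration, hosts are defined on the CLI with the mkhost command. A minimal sketch; the host names, WWPN, and IQN that are shown are examples only:
    svctask mkhost -name fcserver1 -fcwwpn 2100000E1E09A3C8
    svctask mkhost -name iscsiserver1 -iscsiname iqn.1994-05.com.redhat:server1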
1.6.2 Node canister
A node canister provides host interfaces, management interfaces, and SAS interfaces to the control enclosure. A node canister has the cache memory, the internal storage to store software and logs, and the processing power to run the IBM Storwize V5000 Gen2 virtualization and management software. A clustered system consists of one or two node pairs. Each node pair forms one I/O group. I/O groups are explained in 1.6.3, “I/O groups” on page 20.
One of the nodes within the system, which is known as the configuration node, manages configuration activity for the clustered system. If this node fails, the system nominates the other node to become the configuration node.
1.6.3 I/O groups
Within the IBM Storwize V5000 Gen2, one or two pairs of node canisters are known as I/O groups. The IBM Storwize V5000 Gen2 supports either two or four node canisters in a clustered system, which provides either one or two I/O groups, depending on the model. See Table 1-8 on page 11 for more details.
When a host server performs I/O to one of its volumes, all of the I/O for that volume is directed to the I/O group where the volume was defined. Under normal conditions, these I/Os are also always processed by the same node within that I/O group.
Both nodes of the I/O group act as preferred nodes for their own specific subset of the total number of volumes that the I/O group presents to the host servers (a maximum of 2,048 volumes for each host). However, both nodes also act as a failover node for the partner node within the I/O group. Therefore, a node takes over the I/O workload from its partner node (if required) without affecting the server’s application.
In an IBM Storwize V5000 Gen2 environment (which uses active-active architecture), the I/O handling for a volume can be managed by both nodes of the I/O group. The I/O groups must be connected to the SAN so that all hosts can access all nodes. The hosts must use multipath device drivers to handle this capability.
Up to 256 host server objects can be defined in a one-I/O-group system, or 512 host server objects in a two-I/O-group system. More information about I/O groups is in Chapter 6, “Volume configuration” on page 287.
 
Important: The active/active architecture provides the availability to process I/Os for both controller nodes and allows the application to continue to run smoothly, even if the server has only one access route or path to the storage controller. This type of architecture eliminates the path/LUN thrashing that is typical of an active/passive architecture.
1.6.4 Clustered system
A clustered system consists of one or two pairs of node canisters. Each pair forms an I/O group. All configuration, monitoring, and service tasks are performed at the system level. The configuration settings are replicated across all node canisters in the clustered system. To facilitate these tasks, one or two management IP addresses are set for the clustered system. By using this configuration, you can manage the clustered system as a single entity.
A process exists to back up the system configuration data on to disk so that the clustered system can be restored in a disaster. This method does not back up application data. Only IBM Storwize V5000 Gen2 system configuration information is backed up.
 
System configuration backup: After the system configuration is backed up, save the backup data on to your local hard disk (or at the least outside of the SAN). If you are unable to access the IBM Storwize V5000 Gen2, you do not have access to the backup data if it is on the SAN. Perform this configuration backup after each configuration change to be safe.
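On the CLI, a configuration backup of this kind can be created and then copied off the system. A minimal sketch, assuming a Windows management station with the pscp tool (the cluster IP address and target directory are examples only):
    svcconfig backup
    pscp -unsafe superuser@cluster_ip:/tmp/svc.config.backup.* c:\v5000backup\
The svcconfig backup command writes the configuration backup files to the /tmp directory on the configuration node.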
The system can be configured by using the IBM Storwize V5000 Gen2 management software (GUI), CLI, or USB key.
1.6.5 RAID
The IBM Storwize V5000 Gen2 contains several internal drive objects, but these drives cannot be directly added to the storage pools. Drives need to be included in a Redundant Array of Independent Disks (RAID) to provide protection against the failure of individual drives.
These drives are referred to as members of the array. Each array has a RAID level. RAID levels provide various degrees of redundancy and performance. The maximum number of members in the array varies based on the RAID level.
Traditional RAID (TRAID) has the concept of hot spare drives. When an array member drive fails, the system automatically replaces the failed member with a hot spare drive and rebuilds the array to restore its redundancy. Candidate and spare drives can be manually exchanged with array members.
Apart from traditional disk arrays, IBM Spectrum™ Virtualize V7.6 introduced Distributed RAID (DRAID). Distributed RAID improves the recovery time after a disk drive failure in an array by distributing the spare capacity between the primary disks, rather than dedicating a whole spare drive for replacement.
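For illustration, a distributed array can be created from the CLI with the mkdistributedarray command. A sketch, assuming that drive class 0 contains at least 12 candidate drives and that a pool named Pool0 exists:
    svctask mkdistributedarray -level raid6 -driveclass 0 -drivecount 12 -stripewidth 10 Pool0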
Details about traditional and distributed RAID arrays are described in Chapter 4, “Storage pools” on page 143.
1.6.6 Managed disks
A managed disk (MDisk) refers to the unit of storage that IBM Storwize V5000 Gen2 virtualizes. This unit can be a logical volume on an external storage array that is presented to the IBM Storwize V5000 Gen2 or a (traditional or distributed) RAID array that consists of internal drives. The IBM Storwize V5000 Gen2 can then allocate these MDisks into storage pools.
An MDisk is invisible to a host system on the storage area network because it is internal to the IBM Storwize V5000 Gen2 system. An MDisk features the following modes:
Array
Array mode MDisks are constructed from internal drives by using the RAID functionality. Array MDisks are always associated with storage pools.
Unmanaged
LUNs that are presented by external storage systems to IBM Storwize V5000 Gen2 are discovered as unmanaged MDisks. The MDisk is not a member of any storage pools, which means that it is not used by the IBM Storwize V5000 Gen2 storage system.
Managed
Managed MDisks are LUNs, which are presented by external storage systems to an IBM Storwize V5000 Gen2, that are assigned to a storage pool and provide extents so that volumes can use them. Any data that might be on these LUNs when they are added is lost.
Image
Image MDisks are LUNs that are presented by external storage systems to an IBM Storwize V5000 Gen2 and assigned directly to a volume with a one-to-one mapping of extents between the MDisk and the volume. For more information, see Chapter 6, “Volume configuration” on page 287.
1.6.7 Quorum disks
A quorum disk is an MDisk that contains a reserved area for use exclusively by the system. In the IBM Storwize V5000 Gen2, internal drives can be considered as quorum candidates. The clustered system uses quorum disks to break a tie when exactly half the nodes in the system remain after a SAN failure.
The clustered system automatically forms the quorum disk by taking a small amount of space from an MDisk. It allocates space from up to three different MDisks for redundancy, although only one quorum disk is active.
To avoid the possibility of losing all of the quorum disks because of a failure of a single storage system if the environment has multiple storage systems, you need to allocate the quorum disk on different storage systems. You can manage the quorum disks by using the CLI.
IP quorum support provides an alternative for Storwize V5000 IBM HyperSwap® implementations. Instead of Fibre Channel storage on a third site, the IP network is used for communication between the IP quorum application and the node canisters in the system to cope with tie-break situations if the inter-site link fails. The IP quorum application is a Java application that runs on a host at the third site. The IP quorum application enables the use of a lower-cost IP network-attached host as a quorum disk for simplified implementation and operation.
 
Note: IP Quorum allows the user to replace a third-site Fibre Channel-attached quorum disk with an IP Quorum application. The Java application runs on a Linux host and is used to resolve split-brain situations. Quorum disks are still required in sites 1 and 2 for cookie crumb and metadata. The application can also be used with clusters in a standard topology configuration, but the primary use case is a customer with a cluster split over two sites (stretched or HyperSwap).
You need Java to run the IP quorum application. Your network must provide less than 80 ms round-trip latency. All nodes need a service IP address, and all service IP addresses must be pingable from the quorum host.
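The deployment can be sketched in three steps: generate the application on the cluster, copy it to the third-site host, and start it under Java. The file name follows the convention that is used by the mkquorumapp command:
    svctask mkquorumapp
    (copy /dumps/ip_quorum.jar to the third-site host)
    java -jar ip_quorum.jar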
1.6.8 Storage pools
A storage pool (up to 1,024 per system) is a collection of MDisks (up to 128 per pool) that are grouped to provide capacity for volumes. All MDisks in the pool are split into extents of the same size. Volumes are then allocated out of the storage pool and are mapped to a host system.
MDisks can be added to a storage pool at any time to increase the capacity of the pool. MDisks can belong in only one storage pool. For more information, see Chapter 4, “Storage pools” on page 143.
Each MDisk in the storage pool is divided into a number of extents. The size of the extent is selected by the administrator when the storage pool is created and cannot be changed later. The size of the extent ranges from 16 MB to 8 GB.
 
Default extent size: The GUI of IBM Storwize V5000 Gen2 has a default extent size value of 1024 MB when you define a new storage pool.
The extent size directly affects the maximum volume size and storage capacity of the clustered system.
A system can manage 2^22 (4,194,304) extents. For example, with a 16 MB extent size, the system can manage up to 16 MB x 4,194,304 = 64 TB of storage.
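In general, the relationship between the extent size and the maximum manageable capacity is:
\[ \text{maximum capacity} = \text{extent size} \times 2^{22} \]
For example, the default 1024 MB extent size yields 1024 MB x 4,194,304 = 4 PB.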
The effect of extent size on the maximum volume and cluster size is shown in Table 1-12.
Table 1-12 Maximum volume and cluster capacity by extent size
Extent size (MB) | Maximum volume capacity for normal volumes (GB) | Maximum storage capacity of cluster
16 | 2,048 (2 TB) | 64 TB
32 | 4,096 (4 TB) | 128 TB
64 | 8,192 (8 TB) | 256 TB
128 | 16,384 (16 TB) | 512 TB
256 | 32,768 (32 TB) | 1 PB
512 | 65,536 (64 TB) | 2 PB
1024 | 131,072 (128 TB) | 4 PB
2048 | 262,144 (256 TB) | 8 PB
4096 | 262,144 (256 TB) | 16 PB
8192 | 262,144 (256 TB) | 32 PB
Use the same extent size for all storage pools in a clustered system. This rule is a prerequisite if you want to migrate a volume between two storage pools. If the storage pool extent sizes are not the same, you must use volume mirroring to copy volumes between storage pools, as described in Chapter 4, “Storage pools” on page 143.
You can set a threshold warning for a storage pool that automatically issues a warning alert when the used capacity of the storage pool exceeds the set limit.
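On the CLI, this threshold can be set with the chmdiskgrp command. A minimal sketch; the pool name is an example only:
    svctask chmdiskgrp -warning 80% Pool0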
Child storage pools
Instead of being created directly from MDisks, child pools are created from existing capacity that is allocated to a parent pool. As with parent pools, volumes can be created that specifically use the capacity that is allocated to the child pool. Parent pools grow automatically as more MDisks are allocated to them. However, child pools provide a fixed capacity pool of storage. You can use a child pool to manage a quota of storage for a particular purpose.
Child pools can be created by using the management GUI, CLI, or IBM Spectrum Control when you create VMware vSphere virtual volumes. For more information about child pools, see Chapter 4, “Storage pools” on page 143.
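For example, a child pool can be carved out of a parent pool on the CLI with the mkmdiskgrp command. A sketch; the pool names and size are examples only:
    svctask mkmdiskgrp -parentmdiskgrp Pool0 -size 500 -unit gb -name Pool0_child0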
Single-tiered storage pool
MDisks that are used in a single-tiered storage pool must have the following characteristics to prevent performance problems and other problems:
They must have the same hardware characteristics, for example, the same RAID type, RAID array size, disk type, and disk revolutions per minute (RPMs).
The disk subsystems that provide the MDisks must have similar characteristics, for example, maximum input/output operations per second (IOPS), response time, cache, and throughput.
You need to use MDisks of the same size and ensure that the MDisks provide the same number of extents. If this configuration is not feasible, you must check the distribution of the volumes’ extents in that storage pool.
Multi-tiered storage pool
A multi-tiered storage pool has a mix of MDisks with more than one type of disk, for example, a storage pool that contains a mix of generic_hdd and generic_ssd MDisks.
A multi-tiered storage pool contains MDisks with different characteristics, unlike the single-tiered storage pool. MDisks with similar characteristics then form the tiers within the pool. However, each tier needs to have MDisks of the same size that provide the same number of extents.
A multi-tiered storage pool is used to enable automatic migration of extents between disk tiers by using the IBM Storwize V5000 Gen2 IBM Easy Tier function, as described in Chapter 9, “Advanced features for storage efficiency” on page 431.
This functionality can help improve the performance of host volumes on the IBM Storwize V5000.
1.6.9 Volumes
A volume is a logical disk that is presented to a host system by the clustered system. In our virtualized environment, the host system has a volume that is mapped to it by the IBM Storwize V5000 Gen2. The IBM Storwize V5000 Gen2 translates this volume into a number of extents, which are allocated across MDisks. The advantage of storage virtualization is that the host is decoupled from the underlying storage, so the virtualization appliance can move the extents around without affecting the host system.
The host system cannot directly access the underlying MDisks in the same manner as it can access RAID arrays in a traditional storage environment.
The following types of volumes are available:
Striped
A striped volume is allocated one extent in turn from each MDisk in the storage pool. This process continues until the space that is required for the volume is satisfied.
It also is possible to supply a list of MDisks to use.
Figure 1-10 shows how a striped volume is allocated, assuming that 10 extents are required.
Figure 1-10 Striped volume
Sequential
A sequential volume is a volume in which the extents are allocated one after the other from one MDisk to the next MDisk, as shown in Figure 1-11.
Figure 1-11 Sequential volume
Image mode
Image mode volumes are special volumes that have a direct relationship with one MDisk. They are used to migrate existing data into and out of the clustered system to or from external FC SAN-attached storage.
When the image mode volume is created, a direct mapping is made between extents that are on the MDisk and the extents that are on the volume. The logical block address (LBA) x on the MDisk is the same as the LBA x on the volume, which ensures that the data on the MDisk is preserved as it is brought into the clustered system, as shown in Figure 1-12.
Figure 1-12 Image mode volume
Certain virtualization functions are not available for image mode volumes, so it is often useful to migrate the volume into a new storage pool. After it is migrated, the MDisk becomes a managed MDisk.
If you want to migrate data from an existing storage subsystem, use the storage migration wizard, which guides you through the process.
If you add an MDisk that contains data to a storage pool, any data on the MDisk is lost. If you are presenting externally virtualized LUNs that contain data to an IBM Storwize V5000 Gen2, import them as image mode volumes to ensure data integrity or use the migration wizard.
1.6.10 iSCSI
iSCSI is an alternative method of attaching hosts to the IBM Storwize V5000 Gen2. The iSCSI function is a software function that is provided by the IBM Storwize V5000 Gen2 code, not hardware. In the simplest terms, iSCSI allows the transport of SCSI commands and data over an Internet Protocol network that is based on IP routers and Ethernet switches.
iSCSI is a block-level protocol that encapsulates SCSI commands into TCP/IP packets and uses an existing IP network instead of requiring FC host bus adapters (HBAs) and a SAN fabric infrastructure. Concepts of names and addresses are carefully separated in iSCSI.
An iSCSI name is a location-independent, permanent identifier for an iSCSI node. An iSCSI node has one iSCSI name, which stays constant for the life of the node. The terms initiator name and target name also refer to an iSCSI name.
An iSCSI address specifies the iSCSI name of an iSCSI node and a location of that node. The address consists of a host name or IP address, a TCP port number (for the target), and the iSCSI name of the node. An iSCSI node can have any number of addresses, which can change at any time, particularly if they are assigned by way of Dynamic Host Configuration Protocol (DHCP). An IBM Storwize V5000 node represents an iSCSI node and provides statically allocated IP addresses.
Each iSCSI node, that is, an initiator or target, has a unique IQN, which can have a size of up to 255 bytes. The IQN is formed according to the rules that were adopted for Internet nodes. The IQNs can be abbreviated by using a descriptive name, which is known as an alias. An alias can be assigned to an initiator or a target.
For more information about configuring iSCSI, see Chapter 4, “Storage pools” on page 143.
1.6.11 Serial-attached SCSI
The serial-attached SCSI (SAS) standard is an alternative method of attaching hosts to the IBM Storwize V5000 Gen2. The IBM Storwize V5000 Gen2 supports direct SAS host attachment to address easy-to-use, affordable storage needs. Each SAS port device has a worldwide unique 64-bit SAS address and operates at 12 Gbps.
1.6.12 Fibre Channel
Fibre Channel (FC) is the traditional method that is used for data center storage connectivity. The IBM Storwize V5000 Gen2 supports FC connectivity at speeds of 4, 8, and 16 Gbps. Fibre Channel Protocol is used to encapsulate SCSI commands over the FC network. Each device in the network has a unique 64-bit worldwide port name (WWPN). The IBM Storwize V5000 Gen2 supports FC connections directly to a host server or to external FC switched fabrics.
1.7 IBM Storwize V5000 Gen2 features
In this section, we describe the features of the IBM Storwize V5000 Gen2. Different models offer a different range of features. See Table 1-3 on page 6 for a comparison.
1.7.1 Mirrored volumes
IBM Storwize V5000 Gen2 provides a function that is called storage volume mirroring, which enables a volume to have two physical copies. Each volume copy can belong to a different storage pool and be on a different physical storage system to provide a high-availability (HA) solution. Each mirrored copy can be either a generic, thin-provisioned, or compressed volume copy.
When a host system issues a write to a mirrored volume, the IBM Storwize V5000 Gen2 writes the data to both copies. When a host system issues a read to a mirrored volume, the IBM Storwize V5000 Gen2 reads the data from the primary copy.
If one of the mirrored volume copies is temporarily unavailable, the IBM Storwize V5000 Gen2 automatically uses the alternative copy without any outage for the host system. When the mirrored volume copy is repaired, IBM Storwize V5000 Gen2 synchronizes the data again.
A mirrored volume can be converted into a non-mirrored volume by deleting one copy or by splitting away one copy to create a non-mirrored volume.
The use of mirrored volumes can also assist with migrating volumes between storage pools that have different extent sizes. Mirrored volumes can also provide a mechanism to migrate fully allocated volumes to thin-provisioned or compressed volumes without any host outages.
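A sketch of such a conversion on the CLI: add a thin-provisioned copy in the target pool, wait for the copies to synchronize, and then remove the original copy. The volume and pool names are examples only, and the original copy is assumed to have copy ID 0:
    svctask addvdiskcopy -mdiskgrp Pool1 -rsize 2% -autoexpand vol01
    svcinfo lsvdisksyncprogress vol01
    svctask rmvdiskcopy -copy 0 vol01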
The Volume Mirroring feature is included as part of the base software, and no license is required.
1.7.2 Thin provisioning
Volumes can be configured to be thin-provisioned or fully allocated. A thin-provisioned volume behaves as though it were a fully allocated volume in terms of read/write I/O. However, when a volume is created, the user specifies two capacities: the real capacity of the volume and its virtual capacity.
The real capacity determines the quantity of MDisk extents that are allocated for the volume. The virtual capacity is the capacity of the volume that is reported to IBM Storwize V5000 Gen2 and to the host servers.
The real capacity is used to store the user data and the metadata for the thin-provisioned volume. The real capacity can be specified as an absolute value or a percentage of the virtual capacity.
The Thin Provisioning feature can be used on its own to create over-allocated volumes, or it can be used with FlashCopy. Thin-provisioned volumes can be used with the mirrored volume feature, also.
A thin-provisioned volume can be configured to autoexpand, which causes the IBM Storwize V5000 Gen2 to automatically expand the real capacity of a thin-provisioned volume as it is used. This feature prevents the volume from going offline. Autoexpand attempts to maintain a fixed amount of unused real capacity on the volume. This amount is known as the contingency capacity.
When a thin-provisioned volume is created, the IBM Storwize V5000 Gen2 initially allocates only 2% of the virtual capacity in real physical storage. The contingency capacity and autoexpand features seek to preserve this 2% of free space as the volume grows.
If the user modifies the real capacity, the contingency capacity is reset to be the difference between the used capacity and the real capacity. In this way, the autoexpand feature does not cause the real capacity to grow much beyond the virtual capacity.
A volume that is created with a zero contingency capacity goes offline when it must expand. A volume with a non-zero contingency capacity stays online until the contingency capacity is used up.
To support the automatic expansion of thin-provisioned volumes, the volumes themselves have a configurable warning capacity. When the used capacity of the volume exceeds the warning capacity, a warning is logged.
For example, if a warning of 80% is specified, the warning is logged when 20% of the free capacity remains. This approach is similar to the capacity warning that is available on storage pools.
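For example, the following minimal CLI sketch (the pool and volume names are assumptions) creates a 100 GB thin-provisioned volume that starts with 2% real capacity, expands automatically, and logs a warning at the 80% threshold:
mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -warning 80% -name thinvol01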
A thin-provisioned volume can be converted to either a fully allocated volume or a compressed volume by using volume mirroring (and vice versa).
The Thin Provisioning feature is included as part of the base software, and no license is required.
1.7.3 Real-time Compression
The Storwize V5030 model can create compressed volumes, allowing more data to be stored in the same physical space. IBM Real-time Compression™ (RtC) can be used for primary active volumes and with mirroring and replication (FlashCopy/Remote Copy). RtC is available on the Storwize V5030 model only.
Existing volumes can take advantage of Real-time Compression for an immediate capacity saving. An existing volume can be converted to a compressed volume by creating a compressed volume copy of the original volume and then deleting the original copy.
No changes to the existing environment are required to take advantage of RtC. It is transparent to hosts while the compression occurs within the IBM Storwize V5000 Gen2 system.
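As a sketch of this conversion (the pool and volume names are assumptions), a compressed copy is added to the existing volume, synchronization is monitored, and the original uncompressed copy is then deleted:
Add a compressed copy of the volume:
addvdiskcopy -mdiskgrp Pool0 -rsize 2% -autoexpand -compressed volume01
Monitor synchronization progress:
lsvdisksyncprogress volume01
Delete the original copy after synchronization completes:
rmvdiskcopy -copy 0 volume01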
Software-only compression: The use of RtC on the Storwize V5030 requires dedicated CPU resources from the node canisters. If more performance is required for deploying RtC, consider purchasing the Storwize V7000 system. The Storwize V7000 system uses dedicated hardware options for compression acceleration.
For RtC, the Storwize V5030 model requires the additional memory upgrade (32 GB for each node canister). When the first compressed volume is created, four of the six CPU cores are allocated to RtC. Of the 32 GB of memory in each node canister, roughly 9 - 10 GB is allocated to RtC. Unlike the Storwize V7000 Gen2, no hardware compression accelerators are available; the LZ4 compression is performed by the CPUs, as was the case with the Storwize V7000 Gen1.
Table 1-13 shows how the cores are used with RtC.
Table 1-13   Core usage with RtC

Model    Compression disabled              Compression enabled
         Normal processing    RtC          Normal processing    RtC
V5010    2 cores              N/A          N/A                  N/A
V5020    2 cores              N/A          N/A                  N/A
V5030    6 cores              0 cores      2 cores              4 cores + HT
The faster CPU with more cores, the extra memory, and the hyper-threading capability of the Storwize V5030, together with improvements to the RtC software, result in good performance for the smaller configurations that are common in the market segment that this product serves. The feature is licensed per enclosure. Real-time Compression is not available on the Storwize V5010 or Storwize V5020 models.
1.7.4 Easy Tier
IBM Easy Tier provides a mechanism to seamlessly migrate extents to the most appropriate tier within the IBM Storwize V5000 Gen2 solution. This migration can be to different tiers of internal drives within IBM Storwize V5000 Gen2 or to external storage systems that are virtualized by IBM Storwize V5000 Gen2, for example, an IBM FlashSystem 900.
The Easy Tier function can be turned on or turned off at the storage pool and volume level.
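For example (the pool and volume names are assumptions), Easy Tier can be controlled from the CLI as follows:
Turn on Easy Tier for a storage pool:
chmdiskgrp -easytier on Pool0
Turn off Easy Tier for an individual volume:
chvdisk -easytier off volume01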
You can demonstrate the potential benefit of Easy Tier in your environment before you install flash drives by using the IBM Storage Tier Advisor Tool. For more information about Easy Tier, see Chapter 9, “Advanced features for storage efficiency” on page 431.
The IBM Easy Tier feature is licensed per enclosure.
1.7.5 Storage Migration
By using the IBM Storwize V5000 Gen2 Storage Migration feature, you can easily move data from existing Fibre Channel-attached external storage to the internal capacity of the IBM Storwize V5000 Gen2. Migrating data onto the IBM Storwize V5000 Gen2 lets you realize its benefits, such as the easy-to-use GUI, internal virtualization, thin provisioning, and copy services.
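The migration wizard in the GUI drives this process. Conceptually, the external LUN is imported as an image-mode volume and its extents are then migrated into an internal pool; the following CLI sketch shows the equivalent steps with assumed MDisk, pool, and volume names:
Discover the external LUNs as MDisks:
detectmdisk
Import an external LUN as an image-mode volume:
mkvdisk -mdiskgrp MigrationPool -iogrp 0 -vtype image -mdisk mdisk10 -name legacy_vol01
Migrate the volume to an internal storage pool:
migratevdisk -vdisk legacy_vol01 -mdiskgrp Pool0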
The Storage Migration feature is included in the base software, and no license is required.
1.7.6 FlashCopy
The FlashCopy feature copies a source volume onto a target volume. The original contents of the target volume are lost. After the copy operation starts, the target volume has the contents of the source volume as it existed at a single point in time. Although the copy operation completes in the background, the resulting data at the target appears as though the copy was made instantaneously. FlashCopy is sometimes described as an instance of a time-zero (T0) copy or point-in-time (PiT) copy technology.
FlashCopy can be performed on multiple source and target volumes. FlashCopy permits the management operations to be coordinated so that a common single point in time is chosen for copying target volumes from their respective source volumes.
IBM Storwize V5000 Gen2 also permits multiple target volumes to be FlashCopied from the same source volume. This capability can be used to create images from separate points in time for the source volume, and to create multiple images from a source volume at a common point in time. Source and target volumes can be any volume type (generic, thin-provisioned, or compressed).
Reverse FlashCopy enables target volumes to become restore points for the source volume without breaking the FlashCopy relationship and without waiting for the original copy operation to complete. IBM Storwize V5000 Gen2 supports multiple targets and multiple rollback points.
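The following brief CLI sketch (the volume and mapping names are assumptions) creates a FlashCopy mapping with a 50% background copy rate and then prepares and starts it:
Create the mapping between source and target:
mkfcmap -source volume01 -target volume01_t0 -copyrate 50 -name fcmap01
Prepare and start the mapping:
startfcmap -prep fcmap01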
The FlashCopy feature is licensed per enclosure.
For more information about FlashCopy copy services, see Chapter 10, “Copy services” on page 481.
1.7.7 Remote Copy
Remote Copy can be implemented in one of two modes, synchronous or asynchronous.
With the IBM Storwize V5000 Gen2, Metro Mirror and Global Mirror are the IBM branded terms for synchronous Remote Copy and asynchronous Remote Copy, respectively.
By using the Metro Mirror and Global Mirror copy services features, you can set up a relationship between two volumes so that updates that are made by an application to one volume are mirrored on the other volume. The volumes can be in the same system or on two different systems.
For both Metro Mirror and Global Mirror copy types, one volume is designated as the primary and the other volume is designated as the secondary. Host applications write data to the primary volume, and updates to the primary volume are copied to the secondary volume. Normally, host applications do not perform I/O operations to the secondary volume.
The Metro Mirror feature provides a synchronous copy process. When a host writes to the primary volume, it does not receive confirmation of I/O completion until the write operation completes for the copy on the primary and secondary volumes. This design ensures that the secondary volume is always up-to-date with the primary volume if a failover operation must be performed.
The Global Mirror feature provides an asynchronous copy process. When a host writes to the primary volume, confirmation of I/O completion is received before the write operation completes for the copy on the secondary volume. If a failover operation is performed, the application must recover and apply any updates that were not committed to the secondary volume. If I/O operations on the primary volume are paused for a brief time, the secondary volume can become an exact match of the primary volume.
Global Mirror can operate with or without cycling. When it is operating without cycling, write operations are applied to the secondary volume as soon as possible after they are applied to the primary volume. The secondary volume is less than 1 second behind the primary volume, which minimizes the amount of data that must be recovered in a failover. However, this approach requires that a high-bandwidth link is provisioned between the two sites.
When Global Mirror operates in cycling mode, changes are tracked and, where needed, copied to intermediate change volumes. Changes are transmitted to the secondary site periodically. The secondary volumes are much further behind the primary volume, and more data must be recovered in a failover. Because the data transfer can be smoothed over a longer time period, lower bandwidth is required to provide an effective solution.
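The following CLI sketch outlines a Global Mirror with Change Volumes configuration; it assumes that a partnership with the remote system (remote_sys) already exists, and the volume and relationship names are examples. The auxiliary change volume is assigned on the remote system in the same way:
Create an asynchronous (Global Mirror) relationship:
mkrcrelationship -master volume01 -aux volume01_dr -cluster remote_sys -global -name gmrel01
Assign a change volume to the master volume:
chrcrelationship -masterchange volume01_cv gmrel01
Switch the relationship to cycling mode with a 300-second cycle period:
chrcrelationship -cyclingmode multi gmrel01
chrcrelationship -cycleperiodseconds 300 gmrel01
Start the relationship:
startrcrelationship gmrel01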
For more information about the IBM Storwize V5000 Gen2 copy services, see Chapter 10, “Copy services” on page 481.
The IBM Remote Copy feature is licensed for each enclosure.
1.7.8 IP replication
IP replication enables the use of lower-cost Ethernet connections for remote mirroring. The capability is available as a chargeable option on all Storwize family systems.
The function is transparent to servers and applications in the same way that traditional Fibre Channel-based mirroring is transparent. All remote mirroring modes (Metro Mirror, Global Mirror, and Global Mirror with Change Volumes) are supported.
Configuration of the system is straightforward. The Storwize family systems normally find each other in the network, and they can be selected from the GUI.
IP replication includes Bridgeworks SANSlide network optimization technology at no additional charge. Remote Mirror remains a chargeable option, but its price does not change with IP replication, and existing Remote Mirror users can use the function at no additional charge.
IP connections that are used for replication can have long latency (the time to transmit a signal from one end to the other), which can be caused by distance or by many “hops” between switches and other appliances in the network. Traditional replication solutions transmit data, wait for a response, and then transmit more data, which can result in network utilization as low as 20% (based on IBM measurements). This effect worsens as the latency increases.
Bridgeworks SANSlide technology that is integrated with the IBM Storwize family requires no separate appliances, no additional cost, and no configuration steps. It uses artificial intelligence (AI) technology to transmit multiple data streams in parallel, adjusting automatically to changing network environments and workloads.
SANSlide improves network bandwidth utilization by up to 3x, so clients can deploy a less costly network infrastructure or take advantage of faster data transfer to speed up replication cycles, improve remote data currency, and recover faster.
IP replication can be configured to use any of the available 1 GbE or 10 GbE Ethernet ports (apart from the technician port) on the IBM Storwize V5000 Gen2. See Table 1-10 on page 18 for port configuration options.
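After the Ethernet ports are assigned to remote-copy port groups, an IP partnership is created with the remote system. The following one-line sketch uses an example cluster IP address and link bandwidth:
mkippartnership -type ipv4 -clusterip 203.0.113.10 -linkbandwidthmbits 100 -backgroundcopyrate 50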
Copy services configuration limits
For the most up-to-date list of these limits, see the following website:
1.7.9 External virtualization
By using this feature, you can consolidate FC SAN-attached disk controllers from various vendors into pools of storage. In this way, the storage administrator can manage and provision storage to applications from a single user interface and use a common set of advanced functions across all of the storage systems under the control of the IBM Storwize V5000 Gen2. External virtualization is only available for the IBM Storwize V5030.
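The following CLI sketch (the pool and MDisk names are assumptions) shows how discovered external LUNs are gathered into a pool on the Storwize V5030:
Discover LUNs that are presented by the external controller:
detectmdisk
List the unmanaged MDisks:
lsmdisk -filtervalue mode=unmanaged
Create a pool and add the external MDisks to it:
mkmdiskgrp -name ExternalPool -ext 1024
addmdisk -mdisk mdisk8:mdisk9 ExternalPool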
The External Virtualization feature is licensed per disk enclosure.
1.7.10 Encryption
IBM Storwize V5000 Gen2 provides optional encryption of data-at-rest functionality, which protects against the potential exposure of sensitive user data and user metadata that is stored on discarded, lost, or stolen storage devices. Encryption can be enabled and configured only on the Storwize V5020 and Storwize V5030 enclosures that support encryption. The Storwize V5010 does not offer encryption functionality.
Encryption is a licensed feature that requires a license key to enable it before it can be used.
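As a hedged sketch (the license key shown is a placeholder), the license is activated from the CLI with the activatefeature command; encryption is then enabled and configured with the chencryption command or through the GUI wizard:
activatefeature -licensekey 0123-4567-89AB-CDEF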
1.8 Problem management and support
In this section, we introduce problem management and support topics.
1.8.1 IBM Support assistance
To use IBM Support assistance, you must have access to the internet. Support assistance enables support personnel to access the system to complete troubleshooting and maintenance tasks. You can configure either local support assistance, where support personnel visit your site to fix problems with the system, or remote support assistance.
Both local and remote support assistance use secure connections to protect the data exchange between the support center and the system. The system administrator can add more access controls. You can use the management GUI or the command-line interface to view support assistance settings.
1.8.2 Event notifications
IBM Storwize V5000 Gen2 can use Simple Network Management Protocol (SNMP) traps, syslog messages, and email to notify you and the IBM Support Center when significant events are detected. Any combination of these notification methods can be used simultaneously.
You can configure IBM Storwize V5000 Gen2 to send different types of notification to specific recipients and choose the alerts that are important to you. When you configure Call Home to the IBM Support Center, all events are sent through email only.
1.8.3 SNMP traps
SNMP is a standard protocol for managing networks and exchanging messages. IBM Storwize V5000 Gen2 can send SNMP messages that notify personnel about an event. You can use an SNMP manager to view the SNMP messages that IBM Storwize V5000 Gen2 sends. You can use the management GUI or the IBM Storwize V5000 Gen2 CLI to configure and modify your SNMP settings.
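For example (the IP address and community name are assumptions), an SNMP server that receives error and warning notifications can be defined as follows:
mksnmpserver -ip 203.0.113.20 -community public -error on -warning on -info off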
You can use the Management Information Base (MIB) file for SNMP to configure a network management program to receive SNMP messages that are sent by the IBM Storwize V5000 Gen2. This file can be used with SNMP messages from all versions of IBM Storwize V5000 Gen2 software.
1.8.4 Syslog messages
The syslog protocol is a standard protocol for forwarding log messages from a sender to a receiver on an IP network. The IP network can be IPv4 or IPv6. IBM Storwize V5000 Gen2 can send syslog messages that notify personnel about an event. IBM Storwize V5000 Gen2 can transmit syslog messages in expanded or concise format. You can use a syslog manager to view the syslog messages that IBM Storwize V5000 Gen2 sends. IBM Storwize V5000 Gen2 uses the User Datagram Protocol (UDP) to transmit the syslog message. You can use the management GUI or the CLI to configure and modify your syslog settings.
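For example (the IP address is an assumption), a syslog server that receives all notification types can be defined as follows:
mksyslogserver -ip 203.0.113.21 -facility 0 -error on -warning on -info on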
1.8.5 Call Home email
The Call Home feature transmits operational and error-related data to you and IBM through a Simple Mail Transfer Protocol (SMTP) server connection in the form of an event notification email. When configured, this function alerts IBM service personnel about hardware failures and potentially serious configuration or environmental issues. You can use the Call Home function if you have a maintenance contract with IBM or if the IBM Storwize V5000 Gen2 is within the warranty period.
To send email, you must configure at least one SMTP server. You can specify as many as five other SMTP servers for backup purposes. The SMTP server must accept the relaying of email from the IBM Storwize V5000 Gen2 clustered system IP address. You can then use the management GUI or the CLI to configure the email settings, including contact information and email recipients.
Set the reply address to a valid email address. Send a test email to check that all connections and infrastructure are set up correctly. You can disable the Call Home function at any time by using the management GUI or the CLI.
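The following CLI sketch outlines this setup; the server address, contact details, and recipient address are placeholders, and the actual IBM Call Home address is provided when you configure Call Home in the GUI:
Define the SMTP server:
mkemailserver -ip 203.0.113.25 -port 25
Set the contact information and reply address:
chemail -reply storage.admin@example.com -contact "Storage Admin" -primary 5550100 -location "Data center 1"
Define a support recipient for error and inventory notifications:
mkemailuser -address callhome@example.com -usertype support -error on -inventory on
Send a test email to the configured recipient:
testemail emailuser0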
1.9 More information resources
This section describes resources that are available for more information.
1.9.1 Useful IBM Storwize V5000 Gen2 websites
For more information about IBM Storwize V5000 Gen2, see the following websites:
The IBM Storwize V5000 Gen2 home page:
IBM Storwize V5000 Gen2 Knowledge Center:
IBM Storwize V5000 Gen2 Online Announcement Letter:
The Online Information Center also includes a Learning and Tutorial section where you can obtain videos that describe the use and implementation of the IBM Storwize V5000 Gen2.