IBM SAN Volume Controller and IBM Storwize family
The IBM Storwize family provides hybrid solutions with common functionality, management, and flexibility. It includes built-in functions such as Real-time Compression and Easy Tier technology, which optimize flash and hard disk drives to deliver extraordinary levels of efficiency and high performance. Available in a wide range of storage systems, the IBM Storwize family delivers sophisticated capabilities that are easy to deploy and help control costs for growing businesses.
The all-flash V5000F and enterprise-class V7000F solutions deliver high performance.
This chapter includes the following sections:
4.1 IBM SAN Volume Controller
Built with IBM Spectrum Virtualize software, part of the IBM Spectrum Storage family, IBM SAN Volume Controller (SVC) is a dependable system that improves data value, security, and simplicity for new and existing storage infrastructure. Proven over 12 years in thousands of deployments, its innovative data virtualization capabilities help organizations achieve better data economics by supporting new workloads that are critical to their success. SVC systems can handle the massive volumes of data from mobile and social applications, enable rapid and flexible cloud services deployments, and deliver the performance and scalability that are needed to gain insights from the latest analytics technologies.
IBM SAN Volume Controller enhances storage capabilities with sophisticated virtualization, management, and functionality, and provides the following benefits:
Enhance data storage functions, economics, and flexibility with sophisticated virtualization
Use hardware-accelerated data compression for efficiency and performance
Use encryption to help improve security for data on existing storage systems
Optimize tiered storage, including flash storage, automatically with IBM Easy Tier
Improve network utilization for remote mirroring with innovative replication technology
Implement multi-site configurations for high availability and data mobility between data centers
Move data among virtualized storage systems without disruptions
4.1.1 SAN Volume Controller Software
IBM SAN Volume Controller is built with IBM Spectrum Virtualize software, which is part of the IBM Spectrum Storage family.
The software provides these functions for the host systems that attach to SAN Volume Controller:
Creates a single pool of storage
Provides logical unit virtualization
Manages logical volumes
Mirrors logical volumes
The SAN Volume Controller system also provides these functions:
Large scalable cache
Copy Services
 – IBM FlashCopy (point-in-time copy) function, including thin-provisioned FlashCopy to make multiple targets affordable
 – IBM HyperSwap (active-active copy) function
 – Metro Mirror (synchronous copy)
 – Global Mirror (asynchronous copy)
 – Data migration
Space management
 – IBM Easy Tier function to migrate the most frequently used data to higher-performance storage
 – Metering of service quality when combined with IBM Spectrum Control
 – Thin-provisioned logical volumes
 – Compressed volumes to consolidate storage
For more information about IBM Spectrum Virtualize software, see 2.3.1, “IBM Spectrum Virtualize” on page 29.
4.1.2 IBM SAN Volume Controller architecture
The SAN Volume Controller combines software and hardware into a comprehensive, modular appliance that uses symmetric virtualization.
Symmetric virtualization is achieved by creating a pool of managed disks (MDisks) from the attached storage systems. That pool is then mapped to a set of volumes for use by attached host systems. System administrators can view and access a common pool of storage on the storage area network (SAN). This functionality helps administrators use storage resources more efficiently and provides a common base for advanced functions.
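The extent-pooling idea behind symmetric virtualization can be sketched in a few lines of Python. This is a conceptual illustration only; the extent size, class names, and structure are assumptions for this sketch, not the product's internal design:

```python
# Conceptual sketch: MDisks contribute fixed-size extents to a pool;
# a volume is a list of extents drawn from that shared pool, hiding
# which back-end MDisk each extent lives on.

EXTENT_MB = 1024  # hypothetical extent size


class StoragePool:
    def __init__(self):
        self.free_extents = []          # (mdisk_name, extent_index)

    def add_mdisk(self, name, capacity_mb):
        # every managed disk is carved into same-size extents
        for i in range(capacity_mb // EXTENT_MB):
            self.free_extents.append((name, i))

    def create_volume(self, size_mb):
        needed = -(-size_mb // EXTENT_MB)   # round up
        if needed > len(self.free_extents):
            raise ValueError("pool exhausted")
        # the volume's extent map may span several back-end systems
        return [self.free_extents.pop(0) for _ in range(needed)]


pool = StoragePool()
pool.add_mdisk("mdisk0", 4096)   # from storage system A
pool.add_mdisk("mdisk1", 4096)   # from storage system B
vol = pool.create_volume(6144)   # transparently spans both systems
```

The host sees only the volume; which storage system backs each extent is an internal detail of the pool.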
A SAN is a high-speed Fibre Channel network that connects host systems and storage devices. In a SAN, a host system can be connected to a storage device across the network. The connections are made through units such as routers and switches. The area of the network that contains these units is known as the fabric of the network.
Figure 4-1 shows only one SAN fabric and two zones: host and storage. In a real environment, the preferred practice is to use two redundant SAN fabrics. The SVC can be connected to up to four fabrics.
Figure 4-1 SVC conceptual and topology overview
Volumes
A system of SAN Volume Controller nodes presents volumes to the hosts. Most of the advanced functions that SAN Volume Controller provides are defined on volumes. These volumes are created from MDisks that are presented by the RAID storage systems or by RAID arrays that are provided by flash drives in an expansion enclosure such as SAN Volume Controller 2145-24F. All data transfer occurs through the SAN Volume Controller node, which is described as symmetric virtualization.
Figure 4-2 shows the data flow across the fabric.
Figure 4-2 Data flow in a SAN Volume Controller system
The nodes in a system are arranged into pairs that are known as I/O groups. A single pair is responsible for serving I/O on a volume. Because a volume is served by two nodes, no loss of availability occurs if one node fails or is taken offline.
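The availability property of a node pair can be illustrated with a short sketch (the class and method names here are invented for illustration and are not product APIs):

```python
# Conceptual sketch: a volume is served by one I/O group of two nodes.
# Either node can service I/O, so losing one node does not lose access.

class IOGroup:
    def __init__(self, node_a, node_b):
        self.nodes = {node_a: True, node_b: True}   # name -> online?

    def fail_node(self, name):
        self.nodes[name] = False

    def serve_io(self):
        online = [n for n, up in self.nodes.items() if up]
        if not online:
            raise RuntimeError("I/O group offline: volume unavailable")
        return online[0]     # any surviving node can serve the volume


iogrp = IOGroup("node1", "node2")
assert iogrp.serve_io() == "node1"
iogrp.fail_node("node1")
assert iogrp.serve_io() == "node2"   # access continues on the partner
```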
Volume types
You can create the following types of volumes on the system:
Basic volumes, where a single copy of the volume is cached in one I/O group. Basic volumes can be established in any system topology. Figure 4-3 shows a standard system topology.
Figure 4-3 Example of a basic volume
Mirrored volumes, where copies of the volume can either be in the same storage pool or in different storage pools. The volume is cached in a single I/O group. Typically, mirrored volumes are established in a standard system topology. Figure 4-4 shows an example of mirrored volumes.
Figure 4-4 Example of mirrored volumes
Stretched volumes, where copies of a single volume are in different storage pools at different sites. The volume is cached in one I/O group. Stretched volumes are only available in stretched topology systems. Figure 4-5 shows an example of a stretched volume.
Figure 4-5 Example of stretched volumes
HyperSwap volumes, where copies of a single volume are in different storage pools that are on different sites. The volume is cached in two I/O groups that are on different sites. These volumes can be created only when the system topology is HyperSwap. Figure 4-6 shows an example of a HyperSwap volume.
Figure 4-6 Example of HyperSwap volumes
System topology
The topology property of a SAN Volume Controller system can be set to one of the following states:
The standard topology, where all nodes are at the same site. Higher availability can be achieved by using Global Mirror or Metro Mirror to maintain a copy of a volume on a different system at a remote site. The copy at the remote site can be used for disaster recovery. Figure 4-7 shows an example of a standard system topology.
Figure 4-7 Example of a standard system topology
The stretched topology, where each node of an I/O group is at a different site. Access to a volume can continue when one site is not available, but with reduced performance. Figure 4-8 shows an example of a stretched system topology.
Figure 4-8 Example of a stretched system topology
The HyperSwap topology, where each I/O group is at a different site. A volume can be active on two I/O groups so that it can be accessed immediately through the other site when one site is not available. Figure 4-9 shows an example of a HyperSwap system topology.
Figure 4-9 Example of a HyperSwap system topology
 
Note: You cannot mix I/O groups of different topologies in the same system.
Table 4-1 summarizes the types of volumes that can be associated with each system topology.
Table 4-1 Summary of system topology and volumes
 
Topology    Basic   Mirrored   Stretched   HyperSwap   Custom
Standard    X       X          -           -           X
Stretched   X       -          X           -           X
HyperSwap   X       -          -           X           X
System management
The SAN Volume Controller nodes in a clustered system operate as a single system and present a single point of control for system management and service. System management and error reporting are provided through an Ethernet interface to one of the nodes in the system, which is called the configuration node. The configuration node runs a web server and provides a command-line interface (CLI). The configuration node is a role that any node can take. If the current configuration node fails, a new configuration node is selected from the remaining nodes. Each node also provides a command-line interface and web interface for initiating hardware service actions.
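The configuration-node role described above can be sketched as follows (a simplified illustration; the real election mechanism is internal to the product and the names here are assumptions):

```python
# Conceptual sketch: one node in the cluster holds the configuration
# role; if it fails, a surviving node takes over the role so that
# management access continues.

class Cluster:
    def __init__(self, nodes):
        self.online = list(nodes)
        self.config_node = self.online[0]   # one node holds the role

    def node_failed(self, name):
        self.online.remove(name)
        if name == self.config_node and self.online:
            # re-select the role from the remaining nodes
            self.config_node = self.online[0]


cluster = Cluster(["node1", "node2", "node3", "node4"])
cluster.node_failed("node1")
# management and error reporting continue through the new config node
```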
Fabric types
I/O operations between hosts and SAN Volume Controller nodes, and between SAN Volume Controller nodes and RAID storage systems, use the SCSI standard. The SAN Volume Controller nodes communicate with each other through private SCSI commands.
Fibre Channel over Ethernet connectivity is supported on SAN Volume Controller models 2145-DH8 and 2145-CG8.
Table 4-2 shows the fabric type that can be used for communicating between hosts, nodes, and RAID storage systems. These fabric types can be used at the same time.
Table 4-2 SAN Volume Controller communications types
Communications type                   Host to SVC   SVC to storage system   SVC to SVC
Fibre Channel SAN                     Yes           Yes                     Yes
iSCSI (1 Gbps or 10 Gbps Ethernet)    Yes           No                      No
Fibre Channel over Ethernet SAN
(10 Gbps Ethernet)                    Yes           Yes                     Yes
Flash drives
Some SAN Volume Controller nodes contain flash drives or are attached to expansion enclosures that contain flash drives. These flash drives can be used to create RAID MDisks that in turn can be used to create volumes. In SAN Volume Controller 2145-DH8 nodes, the flash drives are in an expansion enclosure that is connected to both nodes of an I/O group.
Flash drives provide host servers with a pool of high-performance storage for critical applications. Figure 4-10 shows this configuration. MDisks on flash drives can also be placed in a storage pool with MDisks from regular RAID storage systems. IBM Easy Tier performs automatic data placement within that storage pool by moving high-activity data onto better performing storage.
Figure 4-10 SAN Volume Controller nodes with internal Flash drives
Easy Tier
SAN Volume Controller includes IBM Easy Tier, a function that responds to the presence of flash drives in a storage pool that also contains hard disk drives (HDDs). The system automatically and nondisruptively moves frequently accessed data from HDD MDisks to flash drive MDisks, thus placing such data in a faster tier of storage.
Easy Tier eliminates manual intervention when you assign highly active data on volumes to faster-responding storage. In this dynamically tiered environment, data movement is seamless to the host application regardless of the storage tier in which the data belongs. Manual controls exist so that you can change the default behavior, for example, by turning off Easy Tier on pools that have any combination of the three types of MDisks.
The system supports these tiers:
Flash tier
The flash tier exists when flash MDisks are in the pool. The flash MDisks provide greater performance than enterprise or nearline MDisks.
Enterprise tier
The enterprise tier exists when enterprise-class MDisks are in the pool, such as those built from serial-attached SCSI (SAS) drives.
Nearline tier
The nearline tier exists when nearline-class MDisks are used in the pool, such as those built from nearline SAS drives.
All MDisks belong to one of the tiers, which includes MDisks that are not yet part of a pool.
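The placement decision that Easy Tier automates can be modeled in miniature. The thresholds, names, and selection rule below are assumptions for illustration, not the product's actual heuristics:

```python
# Toy model of tiered placement: track I/O activity per extent and
# plan promotion of the hottest non-flash extents into the flash tier.

def plan_migrations(extents, flash_slots):
    """extents: {extent_id: (tier, io_count)}.
    Returns the IDs of the hottest non-flash extents to promote."""
    candidates = [(io, eid) for eid, (tier, io) in extents.items()
                  if tier != "flash"]
    candidates.sort(reverse=True)                 # hottest first
    return [eid for _, eid in candidates[:flash_slots]]


extents = {
    "e1": ("enterprise", 9500),   # very hot -> promote
    "e2": ("enterprise", 120),    # cold -> leave in place
    "e3": ("nearline",   4000),   # warm -> promote if room remains
    "e4": ("flash",      8000),   # already on flash
}
promote = plan_migrations(extents, flash_slots=2)
```

As in the real function, data movement would happen in the background and be invisible to the host application.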
Encryption
The SAN Volume Controller 2145-DH8 system provides optional encryption of data at rest. This encryption protects against the potential exposure of sensitive user data and user metadata that is stored on discarded, lost, or stolen storage devices. Encryption of system data and system metadata is not required, so system data and metadata are not encrypted. Encryption is performed by the SVC Storage Engine for data stored within the SVC Expansion Enclosures and externally virtualized storage subsystems. Encryption is enabled on the SVC Storage Engine through the acquisition of the Encryption Enablement feature on each SVC DH8 Storage Engine.
The Encryption USB Flash Drives (Pair) feature is required when the Encryption Enablement feature is acquired. This feature provides two USB flash drives for storing the encryption master access key.
Encryption by Spectrum Virtualize is transparent to hosts and applications, and the data is encrypted and protected across the network between the Storwize and SVC controllers and one or more storage devices.
IBM SAN Volume Controller hardware
Each SAN Volume Controller node is an individual server in a SAN Volume Controller clustered system on which the SAN Volume Controller software runs.
The nodes are always installed in pairs, with a minimum of one and a maximum of four pairs of nodes constituting a system. Each pair of nodes is known as an I/O group.
I/O groups take the storage that is presented to the SAN by the storage systems as MDisks and translate it into logical disks (volumes) that are used by applications on the hosts. A node is in only one I/O group and provides access to the volumes in that I/O group.
Figure 4-11 shows the front-side view of the SVC 2145-DH8 node.
Figure 4-11 SVC 2145-DH8 storage engine
If you have a SAN Volume Controller 2145-DH8 model, you can add expansion enclosures to expand the available capacity of the system. Each system can have a maximum of four I/O groups with two expansion enclosures attached to each I/O group. Other system models do not support expansion enclosures.
An expansion enclosure houses the following additional hardware: Power supply units (PSUs), canisters, and drives. Enclosure objects report the connectivity of the enclosure. Each expansion enclosure is assigned a unique ID. It is possible to change the enclosure ID later.
 
Note: A maximum of one expansion enclosure is allowed for each SAS chain. SAN Volume Controller 2145-DH8 nodes support a maximum of two SAS chains.
The SAN Volume Controller 2145-DH8 node has the following features:
A 19-inch rack-mounted enclosure
At least one Fibre Channel adapter or one 10 Gbps Ethernet adapter
Optional second, third, and fourth Fibre Channel adapters
32 GB memory per processor
One or two eight-core processors
Dual redundant power supplies
Dual redundant batteries for better reliability, availability, and serviceability than for a SAN Volume Controller 2145-CG8 with an uninterruptible power supply
Up to two SAN Volume Controller 2145-24F expansion enclosures to house up to 24 flash drives each
iSCSI host attachment (1 Gbps Ethernet and optional 10 Gbps Ethernet)
Supports optional IBM Real-time Compression
A dedicated technician port for local access to the initialization tool or the service assistant interface.
Model DH8 includes three 1 Gb Ethernet ports standard for iSCSI connectivity. It can be configured with up to four I/O adapter features providing up to sixteen 16 Gb FC ports, up to sixteen 8 Gb FC ports, or up to four 10 Gb Ethernet (iSCSI/Fibre Channel over Ethernet (FCoE)) ports.
Using compression reduces the amount of physical storage across your environment. Compressed volumes are a special type of volume where data is compressed as it is written to disk, saving more space. To use the compression function, you must obtain the IBM Real-time Compression license. In addition, the hardware level for both nodes within the I/O group must be either SAN Volume Controller 2145-DH8, 2145-CG8, or 2145-CF8 for that I/O group to support compression. SAN Volume Controller 2145-DH8 nodes must have two CPUs and at least one compression acceleration adapter installed to use compression. Each compression accelerator increases the speed of I/O transfers between nodes and compressed volumes. Enabling compression on SAN Volume Controller 2145-DH8 nodes does not affect non-compressed host to disk I/O performance.
4.1.3 Copy Services functions
IBM SAN Volume Controller provides Copy Services functions that can be used to improve availability and support disaster recovery. The following Copy Services functions are available for all supported hosts that are connected to SAN Volume Controller:
IBM FlashCopy function makes an instant, point-in-time copy from a source volume to a target volume.
Metro Mirror provides a consistent copy of a source volume on a target volume. Data is written to the target volume synchronously after it is written to the source volume so that the copy is continuously updated.
Global Mirror provides a consistent copy of a source volume on a target volume. The data is written to the target volume asynchronously and the copy is continuously updated. However, the copy might not contain the most recent updates when a disaster recovery operation is performed.
Global Mirror with change volumes provides support for Global Mirror with higher recovery point objectives (RPOs) by the use of change volumes. This function is for environments that have less bandwidth available between the sites than the update rate of the replicated workload.
IBM HyperSwap function provides dual-site, active-active access to a volume. Data is written to the copies on both sites synchronously so that both copies are continuously updated and can be used immediately if the other copy becomes unavailable.
Metro Mirror, Global Mirror, Global Mirror with change volumes, and HyperSwap functions are types of remote copy or remote replication.
FlashCopy function
The FlashCopy function creates a point-in-time copy of data stored on a source volume to a target volume.
In its basic mode, the IBM FlashCopy function copies the contents of a source volume to a target volume. Any data that existed on the target volume is lost and is replaced by the copied data. After the copy operation completes, the target volume contains the contents of the source volume as it existed at a single point in time, unless target writes have been processed. The FlashCopy function is sometimes described as an instance of a time-zero (T0) or point-in-time copy technology. Although the FlashCopy operation takes some time to complete, the resulting data on the target volume is presented so that the copy appears to have occurred immediately.
Although it is difficult to make a consistent copy of a data set that is constantly updated, point-in-time copy techniques help solve this problem. If a copy of a data set is created using a technology that does not provide point-in-time techniques and the data set changes during the copy operation, the resulting copy might contain data that is not consistent. For example, if a reference to an object is copied earlier than the object itself and the object is moved before it is copied, the copy contains the referenced object at its new location, but the copied reference still points to the previous location.
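The point-in-time behavior can be sketched with a minimal copy-on-write model. This is conceptual only; FlashCopy's actual grain-based implementation differs, and the names here are invented for the sketch:

```python
# Conceptual copy-on-write sketch: the target preserves the source's
# contents as of T0, even while the source keeps changing.

class FlashCopyMap:
    def __init__(self, source):
        self.source = source
        self.preserved = {}              # blocks saved before overwrite

    def write_source(self, block, data):
        # save the T0 contents before the source block is overwritten
        if block not in self.preserved:
            self.preserved[block] = self.source[block]
        self.source[block] = data

    def read_target(self, block):
        # target returns preserved T0 data, else the unchanged source block
        return self.preserved.get(block, self.source[block])


source = {0: "A", 1: "B", 2: "C"}
fc = FlashCopyMap(source)        # point-in-time copy taken here (T0)
fc.write_source(1, "B'")         # source changes after T0
assert fc.read_target(1) == "B"  # target still shows the T0 image
assert source[1] == "B'"         # source carries on with new data
```

Because unchanged blocks are served from the source, the "copy" is usable immediately even though little data has physically moved.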
More advanced FlashCopy functions allow operations to occur on multiple source and target volumes. FlashCopy management operations are coordinated to provide a common, single point-in-time for copying target volumes from their respective source volumes. This process creates a consistent copy of data that spans multiple volumes. The FlashCopy function also allows multiple target volumes to be copied from each source volume. This function can be used to create images from different points in time for each source volume.
HyperSwap, Metro Mirror, and Global Mirror
Metro Mirror and Global Mirror are two types of remote-copy operations. You can use these functions to set up a relationship between two volumes, where updates made to one volume are mirrored on the other volume. For Metro Mirror and Global Mirror operations, the volumes can be on two different systems (intersystem). Metro Mirror can also support volumes that reside on the same system (intrasystem).
Although data is only written to a single volume, the system maintains two copies of the data. If the copies are separated by a significant distance, the Metro Mirror and Global Mirror copies can be used as a backup for disaster recovery. A prerequisite for Metro Mirror and Global Mirror operations between systems over Fibre Channel connections is that the SAN fabric to which they are attached provides adequate bandwidth between the systems. SAN fabrics are not required for IP-only connections.
For both Metro Mirror and Global Mirror copy types, one volume is designated as the primary and the other volume is designated as the secondary. Host applications write data to the primary volume, and updates to the primary volume are copied to the secondary volume. Normally, host applications do not run I/O operations to the secondary volume.
Metro Mirror is a type of remote copy that creates a synchronous copy of data from a primary volume to a secondary volume. A secondary volume can either be on the same system or on another system. With synchronous copies, host applications write to the primary volume but do not receive confirmation that the write operation has completed until the data is written to the secondary volume. This process ensures that both the volumes have identical data when the copy operation completes. After the initial copy operation completes, the Metro Mirror function always maintains a fully synchronized copy of the source data at the target site.
The Global Mirror function provides an asynchronous copy process. When a host writes to the primary volume, confirmation of I/O completion is received before the write operation completes for the copy on the secondary volume.
When Global Mirror operates without cycling, write operations are applied to the secondary volume as soon as possible after they are applied to the primary volume. The secondary volume is generally less than 1 second behind the primary volume, which minimizes the amount of data that must be recovered if a failover occurs. However, a high-bandwidth link must be provisioned between the two sites.
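The two write paths described above can be contrasted in a short sketch (function names and structures are illustrative assumptions, not product interfaces): Metro Mirror acknowledges the host only after the secondary holds the data, while Global Mirror acknowledges first and ships the update afterward.

```python
# Conceptual sketch of synchronous vs. asynchronous remote copy.

def metro_write(primary, secondary, block, data):
    primary[block] = data
    secondary[block] = data        # synchronous: copy before the ack
    return "ack"

def global_write(primary, secondary, pending, block, data):
    primary[block] = data
    pending.append((block, data))  # asynchronous: ack now, ship later
    return "ack"

def global_drain(secondary, pending):
    while pending:                 # background replication to the target
        block, data = pending.pop(0)
        secondary[block] = data


p1, s1 = {}, {}
metro_write(p1, s1, 0, "X")        # secondary identical at ack time

p2, s2, queue = {}, {}, []
global_write(p2, s2, queue, 0, "X")
assert s2 == {}                    # secondary briefly behind the primary
global_drain(s2, queue)
assert s2 == p2                    # converges once updates are shipped
```

This is why Metro Mirror guarantees identical copies at the cost of write latency over distance, while Global Mirror keeps host latency low but may lose the most recent updates in a disaster.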
The system supports the following types of relationships and consistency groups:
Active-active (for HyperSwap volumes)
Metro Mirror
Global Mirror without change volumes (cycling mode set to None)
Global Mirror with change volumes (cycling mode set to Multiple)
If necessary, you can change the copy type of a Metro Mirror or Global Mirror remote-copy relationship or consistency group without re-creating the relationship or consistency group with the different type. For example, if the latency of the long-distance link affects host performance, you can change the copy type to Global Mirror to improve host performance over high latency links. For Global Mirror relationships with multiple cycling mode, changes are tracked and copied to intermediate change volumes. The changes are transmitted to the secondary site periodically to lower bandwidth requirements.
 
Note: You cannot change the type of an active-active relationship or consistency group.
The Metro Mirror and Global Mirror functions support the following operations:
Intersystem copying of a volume, in which one volume belongs to a system and the other volume belongs to a different system.
 
Note: A system can participate in active Metro Mirror and Global Mirror relationships with itself and up to three other systems.
Intersystem and intrasystem Metro Mirror relationships can be used concurrently on the same system.
Bidirectional links are supported for intersystem relationships. This configuration means that data can be copied from system A to system B for one pair of volumes while data is copied from system B to system A for a different pair of volumes.
The copy direction can be reversed for a consistent relationship.
You can change the copy type for relationships and consistency groups between Metro Mirror and Global Mirror with or without change volumes.
Consistency groups are supported to manage a group of relationships that must be kept synchronized for the same application. This configuration also simplifies administration because a command that is issued to the consistency group is applied to all the relationships in that group.
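The administrative convenience of consistency groups can be sketched as a simple fan-out (class and method names are invented for illustration):

```python
# Conceptual sketch: one command issued to a consistency group is
# applied to every relationship in the group, keeping related volumes
# (for example, a database's data and log volumes) mutually consistent.

class Relationship:
    def __init__(self, name):
        self.name, self.state = name, "stopped"

    def start(self):
        self.state = "copying"


class ConsistencyGroup:
    def __init__(self, relationships):
        self.relationships = relationships

    def start(self):
        for rel in self.relationships:   # one command, all relationships
            rel.start()


group = ConsistencyGroup([Relationship("db_data"), Relationship("db_log")])
group.start()    # both relationships change state together
```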
The system supports a maximum of 8192 Metro Mirror and Global Mirror relationships per system.
For more information about the HyperSwap function, see the IBM Knowledge Center:
4.1.4 More information
For more information, see the following IBM Redbooks publications:
Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum Virtualize V7.6, SG24-7933
IBM SAN Volume Controller 2145-DH8 Introduction and Implementation, SG24-8229
IBM System Storage SAN Volume Controller and Storwize V7000 Best Practices and Performance Guidelines, SG24-7521
IBM SAN Volume Controller and Storwize Family Native IP Replication, REDP-5103
IBM Spectrum Virtualize and SAN Volume Controller Enhanced Stretched Cluster with VMware, SG24-8211
Also, see these websites:
SAN Volume Controller product page
SAN Volume Controller V7.6 Recommended Software Levels
SAN Volume Controller information in the IBM Knowledge Center
SAN Volume Controller Support
4.2 IBM Storwize V3700
IBM Storwize V3700 is a powerful, easy-to-use, self-optimizing, and affordable Storwize system that brings Storwize V7000 value to entry-level clients.
IBM Storwize V3700 supports dual redundant RAID controllers in either 12-drive or 24-drive configurations. There are two types of expansion enclosures, each with either 12 or 24 drives. IBM Storwize V3700 can scale up to a maximum of 10 enclosures with up to 240 drives, depending on the configuration. Built on the proven Storwize V7000 technology, IBM Storwize V3700 provides the following features:
Great consolidation flexibility through support for the following host attachment protocols: iSCSI, FC, FCoE, and SAS
The same management interface as Storwize V7000, providing a familiar experience
Advanced virtualization features that were previously only in midrange systems, such as IBM Storwize V7000
Sophisticated data protection that uses FlashCopy technology with optional upgrades
Six Gbps SAS drive attachment technology
Multiple internal drive options: Flash drives, 15k rpm SAS, 10k rpm SAS, and nearline SAS
Figure 4-12 shows the 12-slot model.
Figure 4-12 IBM Storwize V3700 12-slot control or expansion enclosure
Figure 4-13 shows the 24-slot model.
Figure 4-13 IBM Storwize V3700 24-slot control or expansion enclosure
Table 4-3 lists the IBM Storwize V3700 specifications.
Table 4-3 IBM Storwize V3700 specifications
Component
Description
Model
2072-12C - V3700 Control Enclosure 12 x 3.5” drives
2072-24C - V3700 Control Enclosure 24 x 2.5” drives
2072-12E - V3700 Expansion Enclosure 12 x 3.5” drives
2072-24E - V3700 Expansion Enclosure 24 x 2.5” drives
2072-2DC - V3700 SFF DC Dual Control Enclosure
2072-2DE - V3700 SFF DC Dual Expansion Enclosure
RAID controller
Dual active, hot-swappable controllers
Cache per controller
4 GB cache per controller with 8 GB upgrade (battery-backed)
Host interface
Six 6 Gb SAS and four 1 Gb Ethernet ports that are standard for SAS and iSCSI connectivity. In addition, select from these options (per one node canister):
Four 8 Gbps Fibre Channel ports
Four 1 Gbps iSCSI ports
Two 10 Gbps iSCSI ports
Four 6 Gbps SAS ports
Supported drives
6 Gbps SAS 3.5-inch drives:
2 TB, 3 TB, 4 TB, 6 TB, and 8 TB 7.2k rpm nearline
900 GB, 1.2 TB, and 1.8 TB 10k rpm
300 GB and 600 GB 15k rpm
 
6 Gbps SAS 2.5-inch drives:
300 GB 15k rpm
600 GB 15k rpm
600 GB 10k rpm
900 GB 10k rpm
1 TB 7.2k rpm nearline
2 TB 7.2k rpm nearline
1.2 TB 10k rpm
1.8 TB 10k rpm
 
2.5-inch Flash drives:
200 GB, 400 GB, 800 GB, and 1.6 TB
RAID
RAID 0, 1, 5, 6, and 10
Maximum drives supported
Up to 240 drives (high-performance enterprise-class disk drives, high-capacity archival-class nearline disk drives, and Flash drives)
Fans and power supplies
Fully redundant, hot-swappable
Rack support
Standard 19-inch rack-mount enclosure
Warranty
3-year limited warranty, customer-replaceable units, onsite service, next-business-day 9x5, service upgrades available
For the latest specification information about the IBM Storwize V3700, see this website:
4.2.1 IBM Storwize V3700 components
IBM Storwize V3700 is a 2U rack mountable enclosure with up to two node or expansion canisters, 12 or 24 drives, and two power supplies that contain fans.
Node canisters contain 4 GB of cache, which can be upgraded to 8 GB, providing up to 16 GB of cache for the system. Each node canister contains a battery to provide cache backup during a power failure. Six Gbps SAS ports are available for expansion connectivity. Node canisters can be upgraded with host interface cards (HICs) to accommodate your host environment:
Four port 8 Gbps Fibre Channel HIC
Four port 1 Gbps iSCSI HIC
Four port 6 Gbps SAS HIC
Two port 10 Gbps iSCSI/FCoE HIC
 
Attachment: IBM Storwize V3700 allows directly attached FC hosts. A maximum of four redundant FC hosts can attach directly.
An expansion canister houses two 6 Gbps SAS ports that are used for connection to the control enclosure and additional expansions.
IBM Storwize V3700 uses new mini SAS high-density (HD) connectors to connect the expansions to the control enclosure. Figure 4-14 shows the difference between mini SAS cables, which are used by IBM Storwize V7000, and mini SAS HD cables, which are used by IBM Storwize V3700.
Figure 4-14 Comparison of mini SAS and mini SAS HD cables
IBM Storwize V3700 has a single expansion SAS chain that supports up to nine expansion enclosures. Node canisters use SAS port 4 for the expansion connection.
Figure 4-15 shows the IBM Storwize V3700 expansion cabling topology.
Figure 4-15 IBM Storwize V3700 expansion cabling topology
Most of the V3700 components are customer-replaceable units (CRUs), except for the enclosure midplane:
Power supplies (with fans)
Node and expansion canisters
Battery Backup Unit (BBU)
Cache memory
HIC
All disk drives
4.2.2 IBM Storwize V3700 software and configuration
IBM Storwize V3700 software is based on the SAN Volume Controller and V7000 software stack. However, unlike the IBM SAN Volume Controller and V7000, V3700 software is distributed as Licensed Machine Code (LMC). Therefore, users do not have to purchase and install a separate license to use the system.
The following functions are included with every Storwize V3700 system:
RAID levels 0, 1, 5, 6, and 10 provide the flexibility to choose the level of data protection that is required.
Virtualization of internal storage enables rapid, flexible provisioning and simple configuration changes.
Thin provisioning optimizes efficiency by allocating disk storage space in a flexible manner among multiple users, based on the minimum space that is required by each user at any time. With thin provisioning, applications use only the space they need, not the total space that is allocated to them.
Data migration enables easy and nondisruptive moves of volumes from another storage system onto the Storwize V3700 system with Fibre Channel (FC) or SAS connectivity. You can easily migrate data from DS3200/DS3500 to Storwize V3700. For more information, see “Migrating” in the IBM Knowledge Center:
FlashCopy creates copies of data for backup, parallel processing, testing, and development. The copies are available almost immediately. Storwize V3700 supports up to 64 FlashCopy targets per system.
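The thin provisioning function listed above can be sketched in miniature (a conceptual model with an assumed grain size and invented names, not the product's allocation code): a volume advertises a large virtual size but consumes back-end space only for grains that have actually been written.

```python
# Conceptual sketch of thin provisioning: allocate real capacity
# grain by grain, on first write, rather than up front.

GRAIN_KB = 32   # hypothetical grain size


class ThinVolume:
    def __init__(self, virtual_kb):
        self.virtual_kb = virtual_kb
        self.grains = {}                 # grain index -> data

    def write(self, offset_kb, data):
        self.grains[offset_kb // GRAIN_KB] = data   # allocate on write

    def used_kb(self):
        return len(self.grains) * GRAIN_KB          # real space consumed


vol = ThinVolume(virtual_kb=1024 * 1024)   # presents 1 GB to the host
vol.write(0, "boot")
vol.write(64, "data")
# only two grains are backed by physical storage so far
```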
Several licensed functions have 90-day trial licenses available. You can use the trial licenses to determine whether the function works as expected, create a business justification for a licensed function, or test the benefit in your environment. If you use a trial license, the system warns you at regular intervals as the trial approaches expiration. If a license is not purchased and activated before the trial period expires, the system automatically suspends any configuration that is associated with the function.
The following additional licenses can be purchased to expand the capabilities of your system:
Easy Tier
Easy Tier responds to pools with a mixture of flash, enterprise, and nearline storage in any combination. The system automatically and nondisruptively moves data between the MDisks to optimize performance. For example, Easy Tier moves frequently accessed data from enterprise or nearline MDisks to flash drive MDisks, placing the data in a faster tier of storage. A 90-day trial version of this function is available.
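The promotion decision that Easy Tier automates can be sketched roughly as follows. This is a hedged illustration of the idea only; the tier names, heat counts, and selection policy are invented for the sketch and do not reflect the actual Easy Tier algorithm:

```python
# Hedged sketch of the Easy Tier idea: promote the most frequently
# accessed extents to the fastest tier that has free capacity.
# Heat values and capacities below are illustrative, not product values.

def plan_promotions(extent_heat, flash_free_extents):
    """Return the hottest extents, up to the flash capacity available.

    extent_heat: dict of extent id -> access count over the last period
    """
    hottest = sorted(extent_heat, key=extent_heat.get, reverse=True)
    return hottest[:flash_free_extents]

heat = {"e1": 5, "e2": 120, "e3": 40, "e4": 300}
print(plan_promotions(heat, 2))   # ['e4', 'e2']
```

The real function works continuously and nondisruptively on MDisk extents; the sketch only shows why access frequency, not logical placement, drives the data movement.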
Remote copy
With the remote copy function, you can set up a relationship between two volumes so that updates that are made by an application to one volume are mirrored on the other volume. The volumes can be in the same system or on two systems. The license settings apply only to the system on which you are configuring license settings. For remote copy partnerships, a license is also required on any remote systems that are in the partnership. A 90-day trial version of this function is available.
FlashCopy upgrade
The FlashCopy upgrade extends the base FlashCopy function that is included with the product. The base version of FlashCopy limits the system to 64 target volumes. With the FlashCopy upgrade license activated on the system, an unlimited number of FlashCopy mappings are allowed. If you reach the limit that is imposed by the base function before you activate the upgrade license, you cannot create any more FlashCopy mappings.
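The licensing behavior described above can be modeled in a short sketch. The class and limit constant are invented for illustration; only the 64-mapping base limit comes from the text:

```python
# Sketch of the described licensing behavior: the base FlashCopy function
# caps the number of mappings; the upgrade license lifts the cap.

BASE_FLASHCOPY_LIMIT = 64

class FlashCopyAdmin:
    def __init__(self, upgrade_licensed=False):
        self.upgrade_licensed = upgrade_licensed
        self.mappings = []

    def create_mapping(self, source, target):
        if not self.upgrade_licensed and len(self.mappings) >= BASE_FLASHCOPY_LIMIT:
            raise RuntimeError("base FlashCopy limit reached; activate the upgrade license")
        self.mappings.append((source, target))

admin = FlashCopyAdmin()
for i in range(64):
    admin.create_mapping("src%d" % i, "tgt%d" % i)
try:
    admin.create_mapping("src64", "tgt64")   # the 65th mapping is refused
except RuntimeError:
    print("blocked")                          # prints: blocked
```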
Turbo Performance
The Turbo Performance function provides enhanced performance for the system. A 90-day trial version of this function is available.
An IBM Storwize family system operates in one of two layers: the replication layer or the storage layer. The system layer affects how the system interacts with the SAN Volume Controller and with external Storwize family systems. The SVC can be in the replication layer only. The IBM Storwize V3700 and V7000 can be in the storage layer, which means that they can be placed behind a SAN Volume Controller or behind another V3700 or V7000. They can also be in the replication layer, which means that they can virtualize another V3700 or V7000 that is used as the storage layer. This layering allows for flexible organization and virtualization configurations.
 
Important: Only image mode disks can be virtualized by IBM Storwize V3700.
The IBM Storwize V3700 uses basic storage units that are called managed disks and collects them into one or more storage pools. These storage pools then provide the physical capacity to create volumes for use by hosts.
The Storwize V3700 supports hot-spare drives. Spare drives are global spares, which means that any spare that is at least as large as the drive that is being replaced can be used in an array. The spare system prefers the best possible match based on the following factors: technology (SAS, Flash, or nearline SAS), speed, capacity, and location.
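The best-match preference described above can be sketched as a ranking over candidate spares. The scoring order follows the factors listed in the text, but the weights and data structures are invented for illustration:

```python
# Sketch of global hot-spare selection per the factors listed above:
# prefer matching technology, then matching speed, then the smallest
# adequate capacity, then the same enclosure. Illustrative only.

def choose_spare(failed, spares):
    candidates = [s for s in spares if s["capacity"] >= failed["capacity"]]

    def score(s):
        return (
            s["tech"] != failed["tech"],            # matching technology first
            s["rpm"] != failed["rpm"],              # then matching speed
            s["capacity"] - failed["capacity"],     # then smallest adequate size
            s["enclosure"] != failed["enclosure"],  # then same enclosure
        )

    return min(candidates, key=score) if candidates else None

failed = {"tech": "SAS", "rpm": 10000, "capacity": 900, "enclosure": 1}
spares = [
    {"id": "A", "tech": "NL-SAS", "rpm": 7200, "capacity": 2000, "enclosure": 1},
    {"id": "B", "tech": "SAS", "rpm": 10000, "capacity": 1200, "enclosure": 2},
    {"id": "C", "tech": "SAS", "rpm": 10000, "capacity": 900, "enclosure": 1},
]
print(choose_spare(failed, spares)["id"])   # C
```

Because spares are global, any adequate drive qualifies; the ranking only decides which qualifying spare is preferred.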
4.2.3 More information
For more information, see Implementing the IBM Storwize V3700, SG24-8107.
Also, see the IBM Storwize V3700 information in the IBM Knowledge Center:
4.3 IBM Storwize V5000 Gen2
Designed for software-defined environments and built with IBM Spectrum Virtualize software, the IBM Storwize family is an industry-leading solution for storage virtualization. It includes technologies to complement and enhance virtual environments, delivering a simpler, more scalable, and cost-efficient IT infrastructure.
IBM Storwize V5000 is a highly flexible, easy-to-use, virtualized hybrid storage system that is designed to enable midsized organizations to overcome their storage challenges with advanced functionality. IBM Storwize V5000 second-generation models offer improved performance and efficiency, enhanced security, and increased scalability, with a set of three models that deliver a range of performance, scalability, and functional capabilities.
4.3.1 IBM Storwize V5000 Gen2 models
IBM Storwize V5000 second-generation models deliver a range of performance, scalability, and functional capabilities.
Table 4-4 provides an overview of IBM Storwize V5000.
Table 4-4 IBM Storwize V5000 model overview
Model | Processor | Expansion enclosures supported | Additional available advanced functions (1)
IBM Storwize V5030 | Two 6-core processors and up to 64 GB of cache | Up to 20 IBM Storwize V5000 expansion enclosures | External storage virtualization; IBM Real-time Compression; encryption of data at rest
IBM Storwize V5020 | Two 2-core, 4-thread processors and up to 32 GB of cache | Up to 10 IBM Storwize V5000 expansion enclosures | Encryption of data at rest
IBM Storwize V5010 | Two 2-core, 2-thread processors and 16 GB of cache | Up to 10 IBM Storwize V5000 expansion enclosures | None

1 In addition to IBM Spectrum Virtualize functions that are supported by the entire family. See “More detailed information” on page 106.
All IBM Storwize V5000 second-generation models include these features:
I/O connectivity options for 16 Gb FC, 12 Gb SAS, 10 Gb iSCSI / FCoE, and 1 Gb iSCSI
Twelve 3.5-inch large form factor (LFF) or twenty-four 2.5-inch small form factor (SFF) drive slots within the enclosure
Support for the attachment of second-generation IBM Storwize V5000 12 Gb SAS expansion enclosures
Support for IBM Spectrum Virtualize functions, including thin provisioning, Easy Tier, FlashCopy, and remote mirroring
A 2U, 19-inch rack mount enclosure with either AC or DC power
A one- or three-year warranty
For a detailed list of specifications, see the following website:
For more information, see Implementing the IBM Storwize V5000, SG24-8162.
IBM Storwize V5030
IBM Storwize V5030 control enclosure models offer the highest levels of performance, scalability, and functionality with these features:
Two 6-core processors and up to 64 GB of cache
Support for 504 drives per system with the attachment of 20 IBM Storwize V5000 expansion enclosures and 1,008 drives with a two-way clustered configuration
External virtualization to consolidate and provide IBM Storwize V5000 capabilities to existing storage infrastructures
Real-time Compression for improved storage efficiency
Encryption of data at rest stored within the IBM Storwize V5000 system and externally virtualized storage systems
An all-flash enterprise class solution option
Connectivity:
 – Standard: 10 Gb iSCSI, 1 Gb iSCSI
 – Optional: 16 Gb Fibre Channel, 12 Gb SAS, 10 Gb iSCSI / Fibre Channel over Ethernet (FCoE), 1 Gb iSCSI
Storwize V5030 systems can be clustered to help deliver even greater performance, bandwidth, and storage capacity. With two-way system clustering, the maximum size of a Storwize V5030 system increases to 1,008 drives. A Storwize V5030 system can be added into existing Storwize V5000 clustered systems that include previous generation Storwize V5000 systems.
Further scalability of a Storwize V5030 system can be achieved with virtualization of external storage. When Storwize V5030 virtualizes an external disk system, capacity in the external system inherits the functional richness and ease of use of Storwize V5030.
IBM Storwize V5020
IBM Storwize V5020 control enclosure models offer mid-level performance, scalability, and functionality with these features:
Two 2-core, 4-thread processors and up to 32 GB of cache
Support for 264 drives per system with the attachment of 10 IBM Storwize V5000 expansion enclosures
Encryption of data at rest stored within the IBM Storwize V5000 system
Connectivity:
 – Standard: 12 Gb SAS, 1 Gb iSCSI
 – Optional: 16 Gb Fibre Channel, 12 Gb SAS, 10 Gb iSCSI / FCoE, 1 Gb iSCSI
IBM Storwize V5010
Storwize V5010 control enclosure models offer entry-level performance, scalability, and functionality with:
Two 2-core, 2-thread processors and 16 GB of cache
Support for 264 drives per system with the attachment of 10 Storwize V5000 expansion enclosures
Connectivity:
 – Standard: 1 Gb iSCSI
 – Optional: 16 Gb Fibre Channel, 12 Gb SAS, 10 Gb iSCSI / FCoE, 1 Gb iSCSI
4.3.2 Components
This section describes the IBM Storwize V5000 Gen2 hardware.
Enclosures
An enclosure is the rack-mounted hardware that contains all the main components of the system: Canisters, drives, and power supplies. The term enclosure is also used to describe the hardware and other parts that are plugged into the enclosure.
The system has two different types of enclosures: control enclosures and expansion enclosures. A control enclosure manages your storage systems, communicates with the host, and manages interfaces. In addition, each control enclosure can have multiple attached expansion enclosures, which expand the available capacity of the existing control enclosure. The system supports up to two control enclosures; the number of expansion enclosures that each control enclosure supports depends on the model, as described in the note that follows.
An expansion enclosure houses the following extra hardware: PSUs, canisters, and drives. Enclosure objects report the connectivity of the enclosure.
Storwize V5000 drives can only be used in Storwize V5000 enclosures. Storwize V7000, Flex System V7000 Storage Node, Storwize V3500, and Storwize V3700 drives cannot be used in a Storwize V5000 enclosure.
 
Note: IBM Storwize V5010 and IBM Storwize V5020 control enclosures systems can support up to 10 expansion enclosures on one chain. Storwize V5030 systems can support two chains, and each chain can support up to 10 expansion enclosures.
A Storwize V5000 expansion enclosure can only be used with a Storwize V5000 control enclosure. A Storwize V5000 control enclosure can only manage a Storwize V5000 expansion enclosure.
For more information about enclosures, including ports, see the IBM Storwize V5000 Gen2 hardware components information in the IBM Knowledge Center:
For more information about supported configurations, see the Storwize V5000 supported environment information in the IBM Knowledge Center:
Supported drives
The IBM Storwize V5000 Gen2 enclosures support a range of enterprise-class, nearline-class, and flash drives as shown in Table 4-5.
Table 4-5 Drive types
Form factor | Drive type | Speed | Size
2.5-inch | SSD | N/A | 400 GB, 800 GB, 1.6 TB, 3.2 TB
2.5-inch | SAS | 15,000 rpm | 300 GB, 600 GB
2.5-inch | SAS | 10,000 rpm | 900 GB, 1.2 TB, 1.8 TB
2.5-inch | Nearline SAS | 7,200 rpm | 2 TB
3.5-inch | SAS | 15,000 rpm | 300 GB, 600 GB (1)
3.5-inch | SAS | 10,000 rpm | 900 GB, 1.2 TB, 1.8 TB (1)
3.5-inch | Nearline SAS | 7,200 rpm | 4 TB, 6 TB, 8 TB

1 2.5-inch drive in a 3.5-inch drive carrier
More detailed information
For more information about the hardware, including enclosures, host interface adapter ports and indicators, and other components, see the IBM Storwize V5000 Gen2 hardware components information in the IBM Knowledge Center:
4.3.3 Functional capabilities
Storwize V5000 uses IBM Spectrum Virtualize V6.1 or later software to deliver innovative, proven, and comprehensive capabilities. The following functions are supported on all Storwize V5000 second-generation models, except as noted:
Virtualization of internal storage: Enables rapid, flexible provisioning and simple configuration changes
External storage virtualization: Virtualizes existing IBM and non-IBM storage to make it part of the IBM Storwize V5000 system where it inherits the advantages of the IBM Storwize V5000 (IBM Storwize V5030 only)
Thin provisioning: Supports business applications that need to grow dynamically, while consuming only the space that is actually used
Encryption of data at rest: Helps protect data from unauthorized access when disks are physically removed from the IBM Storwize system or from externally virtualized storage systems (IBM Storwize V5030 and IBM Storwize V5020 only)
Real-time Compression: Improves efficiency by storing more active primary data in the same physical disk space (IBM Storwize V5030 only)
FlashCopy: Enables copies of data to be created with minimal effect to production environments
Easy Tier: Enables automatic and intelligent migration of frequently accessed data elements to high-performing drives or flash storage (an additional license is required)
Remote mirroring: Enables synchronous or asynchronous data replication between IBM Storwize V5000 systems or with any other IBM Storwize family system, including IBM Storwize V7000 and IBM SAN Volume Controller
Copy Services: Improves availability and supports disaster recovery (additional licenses are required):
 – Metro Mirror provides a consistent copy of a source volume on a target volume. Data is written to the target volume synchronously after it is written to the source volume so that the copy is continuously updated.
 – Global Mirror provides a consistent copy of a source volume on a target volume. The data is written to the target volume asynchronously and the copy is continuously updated. However, the copy might not contain the most recent updates when a disaster recovery operation is performed.
HyperSwap: Delivers high-availability configurations for resilient virtualized environments (IBM Storwize V5030 only)
Dynamic migration: Delivers efficiency and business value through a nondisruptive migration function. A Fibre Channel connection is required to import data from an existing storage controller.
IBM Storage Mobile Dashboard: Offers basic monitoring capabilities to securely check the health and performance of IBM Storwize family systems
Graphical user interface: Delivers intuitive data management designed with point-and-click system management capabilities
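The contrast between the two copy services described above, synchronous Metro Mirror and asynchronous Global Mirror, can be sketched in a simple model. This is an illustration of the acknowledgement semantics only, not product code; the class and method names are invented:

```python
# Contrast sketch: Metro Mirror copies to the target before acknowledging
# the write; Global Mirror acknowledges first and replicates in the
# background, so the target may lag behind the source.

class MirrorRelationship:
    def __init__(self, mode):
        self.mode = mode          # "metro" (synchronous) or "global" (asynchronous)
        self.source, self.target = {}, {}
        self.pending = []         # writes not yet applied to the target

    def write(self, lba, data):
        self.source[lba] = data
        if self.mode == "metro":
            self.target[lba] = data           # synchronous: copied before the ack
        else:
            self.pending.append((lba, data))  # asynchronous: queued for later

    def drain(self):
        """Background replication catching up (simplified)."""
        while self.pending:
            lba, data = self.pending.pop(0)
            self.target[lba] = data

metro = MirrorRelationship("metro")
metro.write(1, "a")
print(metro.target)   # {1: 'a'} -- always current

glob = MirrorRelationship("global")
glob.write(1, "a")
print(glob.target)    # {} -- the target lags until replication catches up
glob.drain()
print(glob.target)    # {1: 'a'}
```

The pending queue is why a Global Mirror target "might not contain the most recent updates" at the moment a disaster recovery operation is performed.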
For more information about Storwize V5000 functional capabilities and software, see the IBM Storwize V5000 Gen2 overview in the IBM Knowledge Center:
4.3.4 VersaStack Solution for IBM Storwize V5000
The VersaStack solution by Cisco and IBM can help you accelerate the deployment of your data centers. It reduces costs by more efficiently managing information and resources while still maintaining your ability to adapt to business change.
The VersaStack solution is based on Cisco UCS Mini. It integrates Cisco Unified Computing System (Cisco UCS) Mini blade servers, IBM Storwize V5000 disk system, and UCS Central software as a single system, providing versatility, efficiency, and simplicity for data centers.
For more information, see the following website:
4.3.5 More information
For more information, see the following links:
IBM Storwize V5000 information in the IBM Knowledge Center
IBM Storwize V5000 product page
4.4 IBM Storwize V7000 Unified, IBM Storwize V7000, and IBM Storwize V7000F
IBM Storwize V7000 Unified and IBM Storwize V7000 are virtualized, enterprise-class hybrid storage systems that provide the foundation for implementing an effective storage infrastructure and transforming the economics of data storage. With industry-first hardware accelerated Real-time Compression, they can reduce the cost of storage by up to 80 percent while maintaining application performance.
IBM Storwize V7000 supports block workloads. It can be combined with Storwize V7000 File Modules to create a Storwize V7000 Unified solution, which consolidates block and file workloads into a single storage system for greater simplicity and efficiency.
Built with IBM Spectrum Virtualize software, IBM Storwize V7000 Unified and Storwize V7000 provide the latest storage technologies for unlocking the business value of stored data, including virtualization and Real-time Compression. In addition, the systems include a powerful hardware platform that can support the massive volumes of data created by today’s demanding cloud, analytics, and traditional applications. They are designed to deliver outstanding efficiency, ease of use, and dependability for organizations of all sizes.
IBM Storwize V7000 Unified and IBM Storwize V7000 provide the following key benefits:
Hardware accelerated Real-time Compression reduces storage acquisition costs by using up to 80% less disk and flash capacity
File support consolidates block and file storage in a system for simplicity and greater efficiency (Storwize V7000 Unified only)
Active File Management enables highly efficient policy-based management of files to reduce costs through use of tiered storage (Storwize V7000 Unified only)
Ability for block systems to both scale up and out for performance and capacity with clustered systems
IP replication reduces remote mirroring costs with innovative network optimization
Available IBM performance and capacity guarantees help you focus on your business, not your storage, and deploy with confidence
Automated storage tiering with IBM Easy Tier provides advanced technology for automatically migrating data between storage tiers based on real-time usage analysis patterns
New generation GUI provides easy-to-use data management designed with a graphical user interface and point-and-click system management capabilities
It is part of VersaStack, which is an integrated infrastructure solution jointly developed by IBM and Cisco to provide faster application and workload delivery, and IT agility
4.4.1 IBM Storwize V7000 Unified
The IBM Storwize V7000 Unified Disk System integrates the serving of storage and file-related services, such as file sharing and file transfer capabilities, in one system. The Storwize V7000 Unified system can provide storage system virtualization as well, using the mature virtualization capabilities of the IBM SAN Volume Controller. It is an integrated storage server, storage virtualization, and file server appliance.
To serve logical volumes and files, the hardware and software to provide these services are integrated into one product. Viewed from its clients, one part of the Storwize V7000 Unified Disk System is a storage server and the other part is a file server. Therefore, it is called Unified.
IBM Storwize V7000 Unified system is a single, integrated storage infrastructure with unified central management that simultaneously supports Fibre Channel, IP storage area network (iSCSI), and network-attached storage (NAS) data formats.
Storwize V7000 Unified storage subsystem: The Storwize V7000
The V7000 Unified system uses internal storage to generate and provide logical volumes to storage clients, so it is a storage system. It can manage the virtualization of external storage systems as well.
The storage subsystem consists of the hardware and software of the IBM Storwize V7000 storage system (Storwize V7000). At the time of writing, it runs the IBM Spectrum Virtualize 7.6 software.
The storage subsystem is used for these functions:
Provision of logical volumes to external storage clients
Provision of logical volumes to the internal storage clients, the file modules
Storwize V7000 Unified system file server subsystem: The file modules
In addition to providing logical volumes, the Storwize V7000 Unified Disk System provides access to file system space and to files in those file systems. It uses file sharing, file access, and file transfer or file copy protocols, so it acts as a file server.
The file server subsystem of the V7000 Unified system consists of two IBM Storwize V7000 file modules, a clustered pair of units that provide file systems for use by NAS clients. The file modules use the Storwize V7000 storage system to provide them with volumes. Other volumes, which are block volumes, are presented on the SAN to hosts. The file system is based on IBM General Parallel File System (GPFS) technology.
The IBM Storwize V7000 File Module Software within the Storwize V7000 Unified system contains the interface node and management node functions. The management node function allows you to configure, administer, and monitor a system. The interface node function connects a system to an Internet Protocol (IP) network. You can connect by using any of the following protocols:
Common Internet File System (CIFS)
Network File System (NFS)
File Transfer Protocol (FTP)
Hypertext Transfer Protocol Secure (HTTPS)
Secure Copy Protocol (SCP)
The Storwize V7000 Unified Disk System consists of these components:
A single IBM Storwize V7000 control enclosure
0 - 20 Storwize V7000 expansion enclosures (storage server subsystem)
Two file modules (file server subsystem)
The connecting cables
You can add up to three control enclosures for more I/O groups. Each additional control enclosure, together with the associated expansion enclosures (up to twenty per control enclosure), provides a new volume I/O group. The file modules remain directly connected to the original control enclosure, which presents I/O group 0. That is the default configuration. Plan to add only block volumes in the new I/O groups. File volumes that are created for you when a new file system is created must continue to be in I/O group 0.
 
Note: Adding more file modules to the system is not supported.
Storwize V7000 Unified system file server subsystem configuration
The file server subsystem of the V7000 Unified system consists of two file modules, which are IBM System x3650 M3 servers.
The following details are for one file module:
Form factor: 2U
Processor: Single four-core Intel Xeon C3539 2.13 GHz, 8 MB L3 cache (or similar)
Cache: 72 GB
Storage: Two 600 GB 10 K SAS drives, RAID 1
Power supply units: Two (redundant), 675 W
The following interfaces are on one file module:
Four 1 GbE ports
 – Two ports for external connectivity to file clients and file level remote copy
 – Two ports for the management network between the file modules for unified system clustering
Two 10 GbE ports for external connectivity to file clients and file level remote copy
Two 8 Gb FC ports, one of which is internally connected to each Storwize V7000 node canister
The internal and external interfaces of the V7000 Unified system, including the optional 10 GbE interfaces on the Storwize V7000 subsystem, are shown in Figure 4-16.
Figure 4-16 Storwize V7000 Unified configuration1
For more information about IBM Storwize V7000 Unified, see Implementing the IBM Storwize V7000 Unified Disk System, SG24-8010.
4.4.2 IBM Storwize V7000
IBM Storwize V7000 is a storage system that uses an in-band block virtualization process, in which intelligent functionality, including advanced storage functions, is available for internal storage and for any virtualized storage device.
IBM Storwize V7000 next-generation models offer increased performance and connectivity, integrated compression acceleration, and additional scalability with the following features:
Two node canisters, each with an eight-core processor and integrated hardware-assisted compression acceleration
64 GB cache (per system) standard, with optional 128 GB cache (per system) for Real-time Compression workloads
16 Gb FC, 8 Gb FC, 10 Gb iSCSI / FCoE, and 1 Gb iSCSI connectivity options
12 Gb SAS expansion enclosures supporting twelve 3.5-inch LFF or twenty-four 2.5-inch SFF drives
Scaling for up to 504 drives per system with the attachment of 20 Storwize V7000 expansion enclosures and up to 1,056 drives in a four-way clustered configuration
The ability to be added into existing clustered systems with previous generation Storwize V7000 systems
Compatibility with IBM Storwize V7000 Unified File Modules for unified storage capability
Storwize V7000 can improve the use of your storage resources, simplify your storage management, and improve the availability of your applications. It can also reduce the number of separate environments that need to be managed to a single environment, with a single interface for storage management. After the initial configuration of the storage subsystems that you are going to virtualize, all of the day-to-day storage management operations are performed from Storwize V7000.
IBM Storwize V7000 architecture
Storwize V7000 is a SAN block aggregation virtualization appliance that is designed for attachment to various host computer systems.
Two major approaches currently exist that you should consider for the implementation of block-level aggregation and virtualization:
Symmetric: In-band appliance
The device is a SAN appliance that sits in the data path, and all I/O flows through the device. This implementation is referred to as symmetric virtualization or in-band.
The device is both target and initiator. It is the target of I/O requests from the host perspective, and the initiator of I/O requests from the storage perspective. The redirection is performed by issuing new I/O requests to the storage. Storwize V7000 uses symmetric virtualization.
Asymmetric: Out-of-band or controller-based
The device is usually a storage controller that provides an internal switch for external storage attachment. In this approach, the storage controller intercepts and redirects I/O requests to the external storage as it does for internal storage. The actual I/O requests are themselves redirected. This implementation is referred to as asymmetric virtualization or out-of-band.
Figure 4-17 shows variations of the two virtualization approaches.
Figure 4-17 Overview of block-level virtualization architectures
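The symmetric (in-band) approach that Storwize V7000 uses can be sketched as follows. The classes are invented for illustration; the point is the dual role the text describes, target toward hosts and initiator toward back-end storage:

```python
# Sketch of symmetric (in-band) virtualization: the appliance sits in the
# data path, accepting host I/O as a target and reissuing it to the
# back-end storage as an initiator.

class BackendController:
    """Stand-in for an external (or internal) storage controller."""
    def __init__(self):
        self.data = {}

    def write(self, lun, lba, payload):
        self.data[(lun, lba)] = payload

class InBandVirtualizer:
    def __init__(self):
        self.vdisk_map = {}   # vdisk name -> (backend, lun)

    def map_vdisk(self, vdisk, backend, lun):
        self.vdisk_map[vdisk] = (backend, lun)

    def host_write(self, vdisk, lba, payload):
        backend, lun = self.vdisk_map[vdisk]   # target role: accept host I/O
        backend.write(lun, lba, payload)       # initiator role: reissue to storage

be = BackendController()
v = InBandVirtualizer()
v.map_vdisk("vdisk0", be, lun=3)
v.host_write("vdisk0", 42, b"block")
print(be.data[(3, 42)])   # b'block'
```

Because every I/O flows through the appliance, the virtualizer can redirect, cache, or transform it, which is what enables functions such as migration and copy services to work uniformly across heterogeneous back ends.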
The IBM Storwize V7000 solution provides a modular storage system that includes the capability to virtualize both external SAN-attached storage and its own internal storage. This solution is built upon the IBM SAN Volume Controller technology base, and uses technology from the IBM System Storage DS8000 family.
IBM Storwize V7000 system provides several configuration options that are aimed at simplifying the implementation process. It also provides automated wizards, called Directed Maintenance Procedures (DMP), to help resolve any events that might occur. A Storwize V7000 system is a midrange, clustered, scalable, and external virtualization device.
Included with a Storwize V7000 system is a graphical user interface (GUI) that enables storage to be deployed quickly and efficiently. The GUI runs on the Storwize V7000 system, so a separate console is not needed. The management GUI contains a series of preestablished configuration options that are called presets, and that use common settings to quickly configure objects on the system. Presets are available for creating volumes and FlashCopy mappings, and for setting up a RAID configuration.
The Storwize V7000 solution provides a choice of up to 1,056 SAS drives for the internal storage in a clustered system. It uses SAS cables and connectors to attach to the optional expansion enclosures. In a clustered system, the Storwize V7000 can provide about 4 pebibytes (PiB) of internal raw capacity.
The Storwize V7000 solution consists of 1 - 4 control enclosures and, optionally, up to 80 expansion enclosures. It also supports the intermixing of the different expansion enclosures. Within each enclosure are two canisters. Control enclosures contain two node canisters, and expansion enclosures contain two expansion canisters.
4.4.3 IBM Storwize V7000 components
Storwize V7000 provides the following benefits:
Brings enterprise technology to midrange storage.
Specialty administrators are not required.
Client setup and service can be done easily.
The system can grow incrementally as storage capacity and performance needs change.
Multiple storage tiers are in a single system with nondisruptive migration between them.
Simple integration can be done into the server environment.
The Storwize V7000 subsystem consists of a set of drive enclosures. Control enclosures contain disk drives and two nodes (an I/O Group), which are attached to the SAN fabric. Expansion enclosures contain drives, and are attached to control enclosures.
The simplest use of Storwize V7000 is as a traditional RAID subsystem. The internal drives are configured into RAID arrays and virtual disks created from those arrays. IBM Storwize V7000 can also be used to virtualize other storage controllers.
IBM Storwize V7000 supports regular and solid-state drives (SSDs) and uses IBM System Storage Easy Tier to automatically place volume hot spots on better-performing storage. This section briefly explains the basic architecture components of Storwize V7000.
Nodes
Each IBM Storwize V7000 hardware controller is called a node or node canister. The node provides the virtualization for a set of volumes, cache, and copy services functions. Nodes are deployed in pairs, and multiple pairs make up a clustered system (also referred to simply as a system). A system can consist of 1 - 4 Storwize V7000 node pairs.
One of the nodes within the system is known as the configuration node. The configuration node manages the configuration activity for the system. If this node fails, the system chooses a new node to become the configuration node.
Because the nodes are installed in pairs, each node provides a failover function to its partner node during a node failure.
I/O Groups
IBM Storwize V7000 can have 1 - 4 pairs of node canisters known as I/O Groups. Storwize V7000 supports eight node canisters in the clustered system, which provides four I/O Groups.
When a host server performs I/O to one of its volumes, all of the I/Os for a specific volume are directed to the I/O Group. Also, under normal conditions, the I/Os for that specific volume are always processed by the same node within the I/O Group.
Both nodes of the I/O Group act as preferred nodes for their own specific subset of the total number of volumes that the I/O Group presents to the host servers (a maximum of 2048 volumes per I/O Group). However, each node also acts as a failover node for its partner node within the I/O Group. Therefore, a node takes over the I/O workload from its partner node, if required, with no effect to the server’s application.
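The preferred-node and failover behavior described above can be sketched with a small model. The balancing policy shown (even volume IDs to one node, odd to the other) is invented for the sketch; only the failover semantics come from the text:

```python
# Sketch of I/O group behavior: each volume has a preferred node; if that
# node is offline, the partner node services the volume's I/O instead.

class IOGroup:
    def __init__(self):
        self.online = {"node1": True, "node2": True}

    def preferred_node(self, volume_id):
        # Volumes are balanced across the two nodes (illustrative policy).
        return "node1" if volume_id % 2 == 0 else "node2"

    def serving_node(self, volume_id):
        pref = self.preferred_node(volume_id)
        if self.online[pref]:
            return pref
        partner = "node2" if pref == "node1" else "node1"
        return partner if self.online[partner] else None

iog = IOGroup()
print(iog.serving_node(4))     # node1 (the preferred node)
iog.online["node1"] = False
print(iog.serving_node(4))     # node2 (failover to the partner)
```

Because either node can service any volume in the group, hosts need multipath device drivers to follow the I/O to whichever node is currently serving it.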
In an IBM Storwize V7000 environment that uses an active/active architecture, the I/O handling for a volume can be managed by both nodes of the I/O Group. Therefore, servers that are connected through FC must use multipath device drivers to handle this capability.
The IBM Storwize V7000 I/O Groups are connected to the SAN so that all application servers that access volumes from the I/O Group have access to them. Up to 2048 host server objects can be defined in four I/O Groups.
If required, host servers can be mapped to more than one I/O Group in the IBM Storwize V7000 system. Therefore, they can access volumes from separate I/O Groups. You can move volumes between I/O Groups to redistribute the load between the I/O Groups.
However, moving volumes between I/O Groups cannot always be done concurrently with host I/O, and in some cases requires a brief interruption to remap the host. On the following website, you can check the compatibility of the IBM Storwize V7000 Non-Disruptive Volume Move (NDVM) function with your hosts:
 
Important: The active/active architecture provides availability to process I/Os for both controller nodes, and enables the application to continue running smoothly, even if the server has only one access route or path to the storage controller. This type of architecture eliminates the path and LUN thrashing typical of an active/passive architecture.
System
The system or clustered system consists of 1 - 4 I/O Groups. Certain configuration limitations are then set for the individual system. For example, the maximum number of volumes that are supported per system is 8192 (having a maximum of 2048 volumes per I/O Group), or the maximum managed disk supported is 32 petabytes (PB) per system.
All configuration, monitoring, and service tasks are performed at the system level. Configuration settings are replicated to all nodes in the system. To facilitate these tasks, a management Internet Protocol (IP) address is set for the system.
A process is provided to back up the system configuration data onto disk so that it can be restored during a disaster. Note that this method does not back up application data. Only IBM Storwize V7000 system configuration information is backed up. For the purposes of remote data mirroring, two or more systems must form a partnership before creating relationships between mirrored volumes.
 
System configuration backup: After backing up the system configuration, save the backup data on your hard disk (or at the least outside of the SAN). If you are unable to access IBM Storwize V7000, you do not have access to the backup data if it is on the SAN.
For details about the maximum configurations that are applicable to the system, I/O Group, and nodes, see the following link:
RAID
The IBM Storwize V7000 setup contains several internal drive objects, but these drives cannot be directly added to storage pools. The drives need to be included in a RAID to provide protection against the failure of individual drives.
These drives are referred to as members of the array. Each array has a RAID level. RAID levels provide various degrees of redundancy and performance, and have various restrictions regarding the number of members in the array.
IBM Storwize V7000 supports hot spare drives. When an array member drive fails, the system automatically replaces the failed member with a hot spare drive and rebuilds the array to restore its redundancy. Candidate and spare drives can be manually exchanged with array members.
Each array has a set of goals that describe the location and performance of each array. A sequence of drive failures and hot spare takeovers can leave an array unbalanced (with members that do not match these goals). The system automatically rebalances such arrays when the appropriate drives are available.
The following RAID levels are available:
RAID 0 (striping, no redundancy)
RAID 1 (mirroring between two drives)
RAID 5 (striping, can survive one drive fault)
RAID 6 (striping, can survive two drive faults)
RAID 10 (RAID 0 on top of RAID 1)
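As a rough illustration of the trade-offs in the list above, the following Python sketch computes usable capacity and survivable drive faults for each level. This is not Storwize code; it assumes equal-size member drives, and the drive counts and sizes in the example are made up.

```python
# Illustrative only: usable capacity and fault tolerance per RAID level,
# assuming equal-size member drives.

def raid_usable(level: str, members: int, drive_tb: int) -> tuple[int, int]:
    """Return (usable capacity in TB, number of drive faults survivable)."""
    if level == "RAID 0":          # striping, no redundancy
        return members * drive_tb, 0
    if level == "RAID 1":          # mirroring between two drives
        return drive_tb, 1
    if level == "RAID 5":          # one drive's worth of capacity for parity
        return (members - 1) * drive_tb, 1
    if level == "RAID 6":          # two drives' worth of capacity for parity
        return (members - 2) * drive_tb, 2
    if level == "RAID 10":         # striping over mirrored pairs
        return (members // 2) * drive_tb, 1
    raise ValueError(f"unknown RAID level: {level}")

print(raid_usable("RAID 6", 8, 2))   # (12, 2)
print(raid_usable("RAID 10", 8, 2))  # (8, 1)
```

Note that RAID 10 can often survive more than one fault if the failures land in different mirrored pairs; the sketch reports only the guaranteed minimum.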
MDisks
An MDisk is the unit of storage that IBM Storwize V7000 virtualizes. This unit could be a logical volume on an external storage array presented to IBM Storwize V7000, or a RAID array that consists of internal drives. IBM Storwize V7000 can then allocate these MDisks into various storage pools. An MDisk is not visible to a host system on the SAN because it is internal or zoned only to the IBM Storwize V7000 system.
The MDisks are placed into storage pools where they are divided into several extents, which can be 16 - 8192 megabytes (MB), as defined by the storage administrator. See the following link for an overview of the total storage capacity that is manageable per system regarding the selection of extents:
A volume is host-accessible storage that has been provisioned out of one storage pool or, if it is a mirrored volume, out of two storage pools. The maximum size of an MDisk is 1 PB. IBM Storwize V7000 system supports up to 4096 MDisks (including internal RAID arrays).
At any point in time, an MDisk is in one of the following four modes:
Array
Array mode MDisks are constructed from drives by using the RAID function. Array MDisks are always associated with storage pools.
Unmanaged MDisk
An MDisk is reported as unmanaged when it is not a member of any storage pool. An unmanaged MDisk is not associated with any volumes, and has no metadata stored on it. Storwize V7000 does not write to an MDisk that is in unmanaged mode, except when it attempts to change the mode of the MDisk to one of the other modes. Storwize V7000 can see the resource, but the resource is not assigned to a storage pool.
Managed MDisk
Managed mode MDisks are always members of a storage pool, and they contribute extents to the storage pool. Volumes (if not operated in image mode) are created from these extents. MDisks operating in managed mode might have metadata extents allocated on them, and can be used as quorum disks. This mode is the most common and normal mode for an MDisk.
Image mode MDisk
Image mode provides a direct block-for-block translation from the MDisk to the volume by using virtualization. This mode is provided to satisfy three major usage scenarios:
 – Image mode enables the virtualization of MDisks already containing data that was written directly, and not through IBM Storwize V7000. Rather, it was created by a direct-connected host. This mode enables a client to insert Storwize V7000 into the data path of an existing storage volume or LUN with minimal downtime. The image mode is typically used for data migration from old storage systems to new.
 – Image mode enables a volume that is managed by IBM Storwize V7000 to be used with the native copy services function provided by the underlying RAID controller. To avoid the loss of data integrity when IBM Storwize V7000 is used in this way, it is important that you disable the IBM Storwize V7000 cache for the volume.
 – IBM Storwize V7000 provides the ability to migrate to image mode, which enables IBM Storwize V7000 to export volumes and access them directly from a host without the IBM Storwize V7000 in the path.
Each MDisk presented from an external disk controller has an online path count that is the number of nodes that have access to that MDisk. The maximum count is the maximum number of paths detected at any point in time by the system. The current count is what the system sees currently. A current value less than the maximum can indicate that SAN fabric paths have been lost.
SSDs (flash drives) that are in IBM Storwize V7000 are presented to the cluster as MDisks. To determine whether the selected MDisk is a flash drive, click the link on the MDisk name to display the Viewing MDisk Details pane. The Viewing MDisk Details pane displays values for the Node ID, Node Name, and Node Location attributes.
Quorum disk
A quorum disk is an MDisk that contains a reserved area for use exclusively by the system. The system uses quorum disks to break a tie when exactly half the nodes in the system remain after a SAN failure. This situation is referred to as split brain. Quorum functionality is not supported on flash drives in IBM Storwize V7000. There are three candidate quorum disks. However, only one quorum disk is active at any time.
Disk tier
It is likely that the MDisks (LUNs) presented to the IBM Storwize V7000 system have various performance attributes because of the type of disk or RAID on which they reside. The MDisks can be on 15,000 revolutions per minute (RPM) Fibre Channel or SAS disks, Nearline SAS or Serial Advanced Technology Attachment (SATA) disks, or even flash drives.
Therefore, a storage tier attribute is assigned to each MDisk, with the default being enterprise. A tier 0 (zero)-level disk attribute (ssd) is available for flash drives, and a tier 2-level disk attribute (nearline) is available for nl-sas.
Storage pool
A storage pool is a collection of up to 128 MDisks that provides the pool of storage from which volumes are provisioned. A single system can manage up to 128 storage pools. The size of these pools can be changed (expanded or shrunk) at run time by adding or removing MDisks, without taking the storage pool or the volumes offline.
At any point in time, an MDisk can only be a member in one storage pool, except for image mode volumes.
Each MDisk in the storage pool is divided into several extents. The size of the extent is selected by the administrator when the storage pool is created, and cannot be changed later. The size of the extent can be 16 - 8192 MB.
It is a leading practice to use the same extent size for all storage pools in a system. This approach is a prerequisite for supporting volume migration between two storage pools. If the storage pool extent sizes are not the same, you must use volume mirroring.
Figure 4-18 shows an overview of a Storwize clustered system with an I/O Group.
Figure 4-18 IBM Storwize V7000 clustered system
IBM Storwize V7000 limits the number of extents in a system to 2^22 (about 4 million). Because the number of addressable extents is limited, the total capacity of a Storwize V7000 system depends on the extent size that is chosen by the Storwize V7000 administrator. The capacity numbers that are specified in Table 4-6 for a Storwize V7000 system assume that all of the defined storage pools have been created with the same extent size.
Table 4-6 Extent size-to-addressability matrix
Extent size    Maximum system capacity    Extent size    Maximum system capacity
16 MB          64 terabytes (TB)          512 MB         2 petabytes (PB)
32 MB          128 TB                     1024 MB        4 PB
64 MB          256 TB                     2048 MB        8 PB
128 MB         512 TB                     4096 MB        16 PB
256 MB         1 PB                       8192 MB        32 PB
For most systems, a capacity of 1 - 2 PB is sufficient. In a Storwize V7000 environment, generally use the default size of 1 gigabyte (GB) as the standard extent size.
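The capacities in Table 4-6 follow directly from the 2^22 extent limit: maximum system capacity equals maximum extents multiplied by extent size. The following short Python sketch (illustrative only) reproduces the table values:

```python
# The Table 4-6 values follow from the 2**22 (~4 million) extent limit:
# system capacity = maximum extents * extent size.
MAX_EXTENTS = 2 ** 22

def system_capacity_tb(extent_mb: int) -> int:
    """Maximum addressable system capacity in TB for a given extent size in MB."""
    return MAX_EXTENTS * extent_mb // (1024 * 1024)   # MB -> TB

for extent_mb in (16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192):
    print(f"{extent_mb:>5} MB extent -> {system_capacity_tb(extent_mb):>6} TB")
```

For example, the default 1 GB (1024 MB) extent size yields 4096 TB, that is, the 4 PB shown in the table.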
Volumes
Volumes are logical disks that are presented to the host or application servers by Storwize V7000. The hosts cannot see the MDisks. They can only see the logical volumes created from combining extents from a storage pool.
There are three types of volumes: Striped, sequential, and image. These types are determined by how the extents are allocated from the storage pool:
A volume created in striped mode has extents allocated from each MDisk in the storage pool in a round-robin fashion.
With a sequential mode volume, extents are allocated sequentially from an MDisk.
Image mode is a one-to-one mapped extent mode volume.
Using striped mode is the best method to use for most cases. However, sequential extent allocation mode can slightly increase the sequential performance for certain workloads.
Figure 4-19 shows the striped volume mode and sequential volume mode, and illustrates how the extent allocation from the storage pool differs.
Figure 4-19 Storage pool extents overview
You can allocate the extents for a volume in many ways. The process is under full user control at volume creation time, and can be changed at any time by migrating single extents of a volume to another MDisk within the storage pool.
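The difference between the striped and sequential allocation modes can be sketched as follows. This is illustrative Python, not the actual Storwize allocator, and the MDisk names and extent counts are made up:

```python
# Illustrative sketch of how striped vs sequential volumes draw extents
# from the MDisks in a storage pool.

def allocate(mdisks: list[str], n_extents: int, mode: str) -> list[str]:
    """Return the MDisk that supplies each extent of a new volume."""
    if mode == "striped":      # round-robin across all MDisks in the pool
        return [mdisks[i % len(mdisks)] for i in range(n_extents)]
    if mode == "sequential":   # all extents taken from a single MDisk
        return [mdisks[0]] * n_extents
    raise ValueError(f"unknown mode: {mode}")

pool = ["mdisk0", "mdisk1", "mdisk2"]
print(allocate(pool, 5, "striped"))     # ['mdisk0', 'mdisk1', 'mdisk2', 'mdisk0', 'mdisk1']
print(allocate(pool, 3, "sequential"))  # ['mdisk0', 'mdisk0', 'mdisk0']
```

Image mode is not shown because it is a one-to-one mapping of a whole MDisk to a volume rather than an extent-allocation policy.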
Hosts
Volumes can be mapped to a host to allow access for a specific server to a set of volumes. A host in Storwize V7000 is a collection of host bus adapter (HBA) worldwide port names (WWPNs) or iSCSI qualified names (IQNs) defined on the specific server. Note that iSCSI names are internally identified by “fake” WWPNs, or WWPNs that are generated by the IBM Storwize V7000.
Volumes can be mapped to multiple hosts, for example, a volume that is accessed by multiple hosts of a server system. iSCSI is an alternative means of attaching hosts. However, all communication with back-end storage subsystems, and with other Storwize V7000 systems, is still through FC.
Node failover can be handled without installing a multipath driver on the iSCSI server. After a node failover, an iSCSI-attached server can simply reconnect to the original target IP address, which is then presented by the partner node. However, to protect the server against link failures during network or HBA failures, using a multipath driver is mandatory.
Volumes are LUN-masked to the host’s HBA WWPNs by a process called host mapping. Mapping a volume to the host makes it accessible to the WWPNs or IQNs that are configured on the host object. For a SCSI over Ethernet connection, the IQN identifies the iSCSI target (destination) adapter. Host objects can have both IQNs and WWPNs.
Easy Tier
Easy Tier is a performance function that automatically migrates extents of a volume from one MDisk storage tier to another. In Storwize V7000, Easy Tier also automatically moves extents between highly used and less-used MDisks within the same storage tier. This function is called Storage Pool Balancing, and it is enabled by default without any need for licensing.
Balancing cannot be disabled by the user. Easy Tier monitors the host I/O activity and latency on the extents of all volumes with the Easy Tier function turned on in a multitier storage pool, over a 24-hour period.
 
New in Storwize family software V7.3: Easy Tier V3 integrates the automatic functionality to balance the workloads between highly used and less-used MDisks within the same tier. It is enabled by default, cannot be disabled by the user, and does not need an Easy Tier license.
Next, it creates an extent migration plan based on this activity, and then dynamically moves high-activity (or hot) extents to a higher disk tier in the storage pool. It also moves extents whose activity has dropped off (or cooled) from the high-tier MDisks back to a lower-tiered MDisk.
 
Easy Tier: The Easy Tier function can be turned on or off at the storage pool level and the volume level. It supports any combination of three tiers within the system. Flash drives are always marked as Tier 0. Turning off Easy Tier does not disable Storage Pool Balancing.
To experience the potential benefits of using Easy Tier in your environment before installing expensive flash drives, turn on the Easy Tier function for a single-level storage pool. Next, turn on the Easy Tier function for the volumes within that pool. Easy Tier then starts monitoring activity on the volume extents in the pool.
Easy Tier creates a report every 24 hours, providing information about how Easy Tier behaves if the tier were a multitiered storage pool. So, even though Easy Tier extent migration is not possible within a single-tiered pool, the Easy Tier statistical measurement function is available.
The Easy Tier function can make it more appropriate to use smaller storage pool extent sizes. The usage statistics file can be off-loaded from Storwize V7000. You can then use the IBM Storage Tier Advisor Tool to create a summary report.
Thin provisioning
Volumes can be configured to be either thin-provisioned or fully allocated. A thin-provisioned volume behaves regarding application reads and writes as though they were fully allocated. When creating a thin-provisioned volume, the user specifies two capacities:
The real physical capacity allocated to the volume from the storage pool
The virtual capacity available to the host
In a fully allocated volume, these two values are the same.
Therefore, the real capacity determines the quantity of MDisk extents that is initially allocated to the volume. The virtual capacity is the capacity of the volume reported to all other Storwize V7000 components (for example, FlashCopy, Cache, and remote copy), and to the host servers. The real capacity is used to store both the user data and the metadata for the thin-provisioned volume. The real capacity can be specified as an absolute value, or a percentage of the virtual capacity.
Thin-provisioned volumes can be used as volumes assigned to the host, by FlashCopy to implement thin-provisioned FlashCopy targets, and also with the mirrored volumes feature. When a thin-provisioned volume is initially created, a small amount of the real capacity is used for initial metadata.
A write I/O to a grain of the thin volume that has not been written to before causes real capacity to be used to store the metadata and the user data. A write I/O to a grain that has been written to before updates the grain where the data was previously written. The grain size is defined when the volume is created, and can be 32 kilobytes (KB), 64 KB, 128 KB, or 256 KB. The default grain size is 256 KB, which is the strongly suggested option. If you select 32 KB for the grain size, the volume size cannot exceed 260,000 GB.
The grain size cannot be changed after the thin-provisioned volume has been created. Generally, smaller grain sizes save space, but require more metadata access, which can adversely affect performance. If you are not going to use the thin-provisioned volume as a FlashCopy source or target volume, use 256 KB to maximize performance. If you are going to use the thin-provisioned volume as a FlashCopy source or target volume, specify the same grain size for the volume and for the FlashCopy function.
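The grain-based accounting described above can be sketched as follows. This is an illustrative Python model, not the Storwize implementation, and the volume and write sizes are made up:

```python
# Illustrative model of thin provisioning: real capacity is consumed
# one grain at a time, only when a grain is first written.

class ThinVolume:
    def __init__(self, virtual_mb: int, grain_kb: int = 256):
        self.virtual_mb = virtual_mb      # capacity reported to the host
        self.grain_kb = grain_kb          # default grain size is 256 KB
        self.written = set()              # grain numbers that hold user data

    def write(self, offset_kb: int, length_kb: int) -> None:
        first = offset_kb // self.grain_kb
        last = (offset_kb + length_kb - 1) // self.grain_kb
        self.written.update(range(first, last + 1))

    def used_kb(self) -> int:
        """Real capacity consumed by user data (metadata not modeled)."""
        return len(self.written) * self.grain_kb

vol = ThinVolume(virtual_mb=10240)        # 10 GB presented to the host
vol.write(0, 4)                           # a 4 KB write consumes one 256 KB grain
vol.write(300, 4)                         # lands in the second grain
print(vol.used_kb())                      # 512
```

The sketch shows why a smaller grain size saves space for scattered small writes: each first write consumes a whole grain of real capacity regardless of the write length.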
Figure 4-20 illustrates the thin-provisioning concept.
Figure 4-20 Conceptual diagram of a thin-provisioned volume
Thin-provisioned volumes store both user data and metadata. Each grain of data requires metadata to be stored. Therefore, the I/O rates that are obtained from thin-provisioned volumes are less than the I/O rates that are obtained from fully allocated volumes.
The storage used for metadata is never greater than 0.1% of the user data, and this fixed resource use is independent of the virtual capacity of the volume. If you are using thin-provisioned volumes in a FlashCopy map, use the same grain size as the map grain size for the best performance. If you are using the thin-provisioned volume directly with a host system, use a small grain size.
The real capacity of a thin volume can be changed if the volume is not in image mode. Increasing the real capacity enables a larger amount of data and metadata to be stored on the volume. Thin-provisioned volumes use the real capacity that is provided in ascending order as new data is written to the volume. If the user initially assigns too much real capacity to the volume, the real capacity can be reduced to free storage for other uses.
A thin-provisioned volume can be configured to autoexpand. This feature causes Storwize V7000 to automatically add a fixed amount of extra real capacity to the thin volume as required. Autoexpand attempts to maintain a fixed amount of unused real capacity for the volume, which is known as the contingency capacity.
The contingency capacity is initially set to the real capacity that is assigned when the volume is created. If the user modifies the real capacity, the contingency capacity is reset to be the difference between the used capacity and the real capacity. A volume that is created without the autoexpand feature, and therefore has a zero contingency capacity, goes offline as soon as the real capacity is fully used and needs to expand.
Autoexpand does not cause the real capacity to grow much beyond the virtual capacity. The real capacity can be manually expanded to more than the maximum that is required by the current virtual capacity, and the contingency capacity is recalculated.
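The contingency-capacity behavior described above can be sketched as follows. This is an illustrative model; the step size and all capacity values are assumptions, not the actual Storwize expansion granularity:

```python
# Illustrative sketch: autoexpand tries to keep (real - used) capacity
# at the contingency value, without growing past the virtual capacity.

def autoexpand(real_gb: int, used_gb: int, contingency_gb: int,
               virtual_gb: int, step_gb: int = 1) -> int:
    """Return the new real capacity after an autoexpand check."""
    while real_gb - used_gb < contingency_gb and real_gb < virtual_gb:
        real_gb += step_gb            # add a fixed amount of real capacity
    return real_gb

# 20 GB real, 18 GB used, 5 GB contingency target, 100 GB virtual:
print(autoexpand(real_gb=20, used_gb=18, contingency_gb=5, virtual_gb=100))  # 23
```

A volume created without autoexpand corresponds to a zero contingency in this model: the loop never runs, and the volume goes offline when the real capacity is fully used.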
To support the auto expansion of thin-provisioned volumes, the storage pools from which they are allocated have a configurable capacity warning. When the used capacity of the pool exceeds the warning capacity, a warning event is logged. For example, if a warning threshold of 80% is specified, the event is logged when less than 20% of the pool capacity remains free.
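A minimal sketch of that warning check, with illustrative pool sizes and the 80% threshold from the example:

```python
# Illustrative check for the pool capacity warning: the event is raised
# once used capacity crosses the configured percentage of the pool.

def warning_raised(used_gb: float, pool_gb: float, warning_pct: int) -> bool:
    return used_gb / pool_gb * 100 >= warning_pct

print(warning_raised(810, 1000, 80))  # True: less than 20% of the pool is free
print(warning_raised(790, 1000, 80))  # False
```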
A thin-provisioned volume can be converted nondisruptively to a fully allocated volume, or vice versa, by using the volume mirroring function. For example, you can add a thin-provisioned copy to a fully allocated primary volume, and then remove the fully allocated copy from the volume after they are synchronized.
The fully allocated to thin-provisioned migration procedure uses a zero-detection algorithm so that grains containing all zeros do not cause any real capacity to be used.
Real-time Compression
Compressed volumes are a special type of volume where data is compressed as it is written to disk, saving additional space. To use the compression function, you must obtain the IBM Real-time Compression license. With the IBM Storwize V7000 model (2076-524), you already have one compression acceleration adapter included in the base product, and you can add one more.
It is also suggested to upgrade your memory to 64 GB for best use of Real-time Compression. Enabling compression on Storwize V7000 nodes does not affect non-compressed host-to-disk I/O performance. Like thin-provisioned volumes, compressed volumes have virtual, real, and used capacities. Use the following guidelines before working with compressed volumes:
Real capacity is the extent space that is allocated from the storage pool. The real capacity is also set when the volume is created and, like thin-provisioned volumes, can be expanded or shrunk down to the used capacity.
Virtual capacity is available to hosts. The virtual capacity is set when the volume is created, and can be expanded or shrunk afterward.
Used capacity is the amount of real capacity that is used to store client data and metadata after compression.
Capacity before compression is the amount of client data that has been written to the volume and then compressed. The capacity before compression does not include regions where zero data is written to unallocated space.
An I/O Group can contain a maximum of 200 compressed volumes and compressed volume mirrors.
You can also monitor information about compression usage to determine the savings to your storage capacity when volumes are compressed. To monitor system-wide compression savings and capacity, select Monitoring > System and either select the system name or Compression View. You can compare the amount of capacity that is used before compression is applied to the capacity that is used for all compressed volumes.
In addition, you can view the total percentage of capacity savings when compression is used on the system. Furthermore, you can also monitor compression savings across individual pools and volumes. For volumes, you can use these compression values to determine which volumes have achieved the highest compression savings.
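The savings percentage shown in these views is the reduction from the capacity before compression to the used capacity after compression. A minimal sketch, with made-up capacities:

```python
# Illustrative computation of the compression savings percentage:
# savings = (1 - used / before) * 100.

def savings_pct(before_gb: float, used_gb: float) -> float:
    return round((1 - used_gb / before_gb) * 100, 1)

# 1000 GB of client data stored in 200 GB after compression:
print(savings_pct(1000, 200))  # 80.0
```

An 80% saving corresponds to the "up to five times as much data in the same physical drive space" claim made for Real-time Compression.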
Cache
The primary benefit of storage cache is to improve I/O response time. Reads and writes to a magnetic disk drive suffer from both seek and latency time at the drive level, which can result in response times of 1 - 10 milliseconds (ms) for an enterprise-class disk.
The IBM Storwize V7000 nodes, combined with IBM Spectrum Virtualize software V7.6, provide 32 GB of memory per node (optionally 64 GB), which is 64 GB (128 GB) per I/O Group and 256 GB (512 GB) per system. The Storwize V7000 provides a semi-flexible cache model: the node's memory can be used as read or write cache for the I/O workload. The write cache is limited to a maximum of 12 GB of the node's memory. The remaining memory is split between the read cache allocation and the compression allocation.
When data is written by the host, the preferred node saves the data in its cache. Before the cache returns completion to the host, the write must be mirrored to the partner node, or copied into the cache of its partner node, for availability reasons. After having a copy of the written data, the cache returns completion to the host. A volume that has not received a write update during the last two minutes will automatically have all modified data destaged to disk.
 
Note: Optional cache upgrade of 32 GB on Storwize V7000 is reserved for RtC, and it is not used when RtC is disabled.
Starting with Storwize family software V7.3, the cache architecture has changed. Storwize V7000 now distinguishes between an upper and a lower cache, which enables the system to be more scalable:
Required for support beyond 8192 volumes
Required for support beyond 8 node clusters
Required for 64-bit addressing beyond 28 GB
Required for larger memory in nodes
Required for more processor cores
Required for improved performance and stability
The architectural overview is shown in Figure 4-21.
Figure 4-21 New cache architecture
If one node of an I/O Group is missing, due to a restart or a hardware failure, the remaining node empties all of its write cache and continues operation in what is referred to as write-through mode. A node operating in write-through mode writes data directly to the disk subsystem before sending an I/O completion status message back to the host. Running in this mode can degrade the performance of the specific I/O Group.
Write cache is partitioned by storage pool. This feature restricts the maximum amount of write cache that a single storage pool can allocate in a system. Table 4-7 shows the upper limit of write-cache data that a single storage pool in a system can occupy.
Table 4-7 Upper limit of write cache per storage pool
One storage pool                100%
Two storage pools               66%
Three storage pools             40%
Four storage pools              33%
More than four storage pools    25%
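Table 4-7 can be expressed as a simple lookup (illustrative Python, not Storwize code):

```python
# Table 4-7 as a lookup: upper limit of write cache that a single
# storage pool may occupy, as a percentage of the system write cache.

def write_cache_limit_pct(n_pools: int) -> int:
    limits = {1: 100, 2: 66, 3: 40, 4: 33}
    return limits.get(n_pools, 25)    # more than four pools: 25%

print(write_cache_limit_pct(3))   # 40
print(write_cache_limit_pct(10))  # 25
```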
Storwize V7000 treats part of its physical memory as non-volatile, which means that its contents are preserved across power losses and resets. The bitmaps for FlashCopy and Remote Mirroring relationships, the virtualization table, and the write cache are kept in the non-volatile memory.
During a disruption or external power loss, the physical memory is copied to a file in the file system on the node’s internal disk drive so that the contents can be recovered when external power is restored. The functionality of uninterruptible power supply units is provided by internal batteries, which are delivered with each node’s hardware. The batteries ensure that there is sufficient internal power to keep a node operational to perform this dump when the external power is removed. After dumping the content of the non-volatile part of the memory to disk, Storwize V7000 shuts down.
IBM Storwize V7000F
IBM Storwize V7000F is an all-flash, highly scalable, virtualized, enterprise-class storage system that is designed to consolidate workloads into a single system for ease of management, reduced costs, superior performance, and high availability.
IBM Storwize all-flash storage systems
Today’s organizations need to meet ever-changing demands for storage while also improving data economics. IT staff must deliver more services faster and more efficiently, enable real-time insight, and support more customer interaction. All-flash storage can meet these requirements for fast, reliable, and consistent access to data. The technology is accessible to more businesses than ever.
IBM Storwize family offers the new IBM Storwize V7000F and IBM Storwize V5000F systems as all-flash, virtualized, enterprise-class storage systems designed to deliver the high performance needed to derive real-time insights from business data, combined with advanced management functions. These all-flash solutions address the need to handle the massive volumes of data created by today's demanding cloud, analytics, and traditional applications.
Built with IBM Spectrum Virtualize software, Storwize V7000F and Storwize V5000F provide the latest storage technologies for unlocking the business value of stored data. These technologies include virtualization, IBM Real-time Compression, thin provisioning, snapshots, cloning, replication, and high-availability configurations. They are designed to deliver outstanding performance, efficiency, ease of use, and dependability for organizations of all sizes.
The IBM Storwize all-flash storage systems have the following highlights:
High performance
The all-flash Storwize solutions deliver both high performance and dramatically improved data economics. They help accelerate multiple workloads for faster decision making with best-of-class performance. The solutions are most often deployed to support high-performance enterprise applications, such as databases like IBM DB2, Oracle, and SAP, as well as virtual desktop infrastructure (VDI) and server virtualization.
Efficiency
IBM Spectrum Virtualize software is designed to deliver extraordinary levels of efficiency, helping to revolutionize data economics and drive down costs for cloud, analytics, virtual server, and other enterprise-class deployments. As a result, organizations no longer have to choose between performance and efficiency. They can get both within a single storage solution.
IBM Real-time Compression with hardware acceleration, available in Storwize V7000F, enables the system to deliver higher performance for compressed data than traditional systems offer for uncompressed data. This performance enables its use for practically all data types. It is designed to enable storing up to five times as much data in the same physical drive space by compressing data as much as 80 percent. The benefits include reduced acquisition cost (because less hardware is required), reduced rack space, and lower power and cooling costs throughout the lifetime of the system. And, when combined with external data virtualization, Real-time Compression can significantly enhance the usable capacity of existing storage systems, extending their useful life even further.
When replicating block data for business continuity, Storwize V7000F and Storwize V5000F can use IP network connections for simplicity and lower cost. Integrated Bridgeworks SANrockIT technology helps improve network utilization up to three times compared with traditional approaches, which can help reduce networking costs and accelerate replication cycles.
Data virtualization
IBM Spectrum Virtualize data virtualization technology helps insulate applications from physical storage. This characteristic enables applications to run without disruption, even when changes are made to the storage infrastructure.
Storwize V7000F and Storwize V5000F also extend data virtualization to other storage systems. When virtualized, data in a disk system becomes part of the Storwize system, and it can be managed in the same way as internal drives. Data in external disk systems inherits all the Storwize functional richness and ease-of-use features, including advanced replication, high performance thin provisioning, data migration, and Real-time Compression. Virtualizing external storage helps improve administrator productivity and boost storage utilization while also enhancing and extending the value of existing storage investments.
Moving data is one of the most common causes of planned downtime. Data virtualization enables moving data from existing storage into the new system or between arrays, while maintaining access to the data. This function can be used when replacing older storage with newer storage, as part of load-balancing work, or when moving data in a tiered storage infrastructure from disk drives to flash.
Data virtualization can improve efficiency and business value. Nondisruptive migration can speed time-to-value from weeks or months to days, minimize downtime for migration, eliminate the cost of add-on migration tools, and can help avoid penalties and additional maintenance charges for lease extensions. The result can be real cost savings for your business.
Ease of use
Storwize V7000F and Storwize V5000F are easy to use from the start. An intuitive management interface enables administrators to easily manage the solution. IBM Spectrum Control, based on IBM Tivoli Storage Productivity Center, can also provide organizations with an end-to-end view of storage health, long-term performance analytics, and capacity statistics for Storwize V7000F, Storwize V5000F, and the surrounding storage infrastructure.
What’s more, IBM Spectrum Virtualize technologies, including Real-time Compression and IP replication with Bridgeworks SANrockIT technology, operate automatically and require little or no customization.
Dependability
Storwize V7000F and Storwize V5000F are part of the proven IBM Storwize family, with more than 90,000 enclosures and 2 exabytes of capacity deployed in organizations worldwide.
With their virtualized storage design and tight affinity with IBM PowerVM, OpenStack, Microsoft ODX, VMware vSphere v6, and VMware vSphere Virtual Volumes (VVOL), Storwize V7000F and Storwize V5000F are an ideal complement for virtualized servers that are at the heart of cloud deployments.
4.4.4 VersaStack Solution for IBM Storwize V7000 and V7000 Unified
The VersaStack solution combines the performance and innovation of Cisco UCS Integrated Infrastructure, which includes the Cisco Unified Computing System (Cisco UCS), Cisco Nexus and Cisco MDS 9000 Family switches, and Cisco UCS Director, with the performance and efficiency of the IBM Storwize storage system. The IBM Storwize V7000 includes technologies that both complement and enhance virtual environments with built-in functions such as IBM Data Virtualization, Real-time Compression, and Easy Tier that deliver extraordinary levels of performance and efficiency.
For more information, see this website:
4.4.5 More information
For more information about the IBM Storwize family, see the following materials:
Implementing the IBM Storwize V7000 and IBM Spectrum Virtualize V7.6, SG24-7938
Implementing the IBM Storwize V7000 Gen2, SG24-8244
IBM Real-time Compression in IBM SAN Volume Controller and IBM Storwize V7000, REDP-4859
IBM System Storage SAN Volume Controller and Storwize V7000 Best Practices and Performance Guidelines, SG24-7521
IBM SAN Volume Controller and Storwize Family Native IP Replication, REDP-5103
Implementing the IBM Storwize V7000 Unified Disk System, SG24-8010
IBM Storwize V7000, Spectrum Virtualize, HyperSwap, and VMware Implementation, SG24-8317
 

1. The 10 Gbps Ethernet ports cannot be used for management.