Host configuration
This chapter describes the host configuration procedures to attach supported hosts to IBM FlashSystem V9000.
This chapter includes the following topics:
7.1 Host attachment overview
The IBM FlashSystem V9000 can be attached to a client host by using three interface types:
Fibre Channel (FC)
Fibre Channel over Ethernet (FCoE)
IP-based Small Computer System Interface (iSCSI)
Always check the IBM System Storage Interoperation Center (SSIC) to get the latest information about supported operating systems, hosts, switches, and so on.
If a configuration that you want is not available on the SSIC, a Solution for Compliance in a Regulated Environment (SCORE) or request for price quotation (RPQ) must be submitted to IBM requesting approval. To submit a SCORE/RPQ, contact your IBM FlashSystem marketing representative or IBM Business Partner.
IBM FlashSystem V9000 supports 16 gigabits per second (Gbps) direct FC attachment to client hosts. Verify your environment and check its compatibility for 16 Gbps direct attachment to the host by accessing the IBM System Storage Interoperation Center (SSIC) web page:
For details, see the IBM FlashSystem V9000 web page in IBM Knowledge Center:
Also see the “Host attachment” topic in IBM Knowledge Center:
IBM FlashSystem V9000 supports a wide range of host types (both IBM and non-IBM), which makes it possible to consolidate storage in an open systems environment into a common pool of storage. The pooled storage can then be managed more efficiently as a single entity from a central point on the storage area network (SAN).
The ability to consolidate storage for attached open systems hosts provides these benefits:
Unified, single-point storage management
Increased utilization rate of the installed storage capacity
Data mobility for sharing storage technologies between applications
Advanced copy services functions offered across storage systems from separate vendors
Only one kind of multipath driver to consider for attached hosts
7.2 IBM FlashSystem V9000 setup
In most IBM FlashSystem V9000 environments where high performance and high availability requirements exist, hosts are attached through a SAN using the Fibre Channel Protocol (FCP). Even though other supported SAN configurations are available, for example, single fabric design, the preferred practice and a commonly used setup is for the SAN to consist of two independent fabrics. This design provides redundant paths and prevents unwanted interference between fabrics if an incident affects one of the fabrics.
Internet Small Computer System Interface (iSCSI) connectivity provides an alternative method to attach hosts through an Ethernet local area network (LAN). However, any communication within the IBM FlashSystem V9000 system, and between IBM FlashSystem V9000 and its storage, takes place solely through FC.
IBM FlashSystem V9000 also supports FCoE, by using 10 gigabit Ethernet (GbE) lossless Ethernet.
Redundant paths to volumes can be provided for SAN-attached, FCoE-attached, and iSCSI-attached hosts. Figure 7-1 shows the types of attachment that are supported by IBM FlashSystem V9000.
Figure 7-1 IBM FlashSystem V9000 host attachment overview
7.2.1 Fibre Channel and SAN setup overview
Host attachment to IBM FlashSystem V9000 with FC can be made through a SAN fabric or direct host attachment. For IBM FlashSystem V9000 configurations, the preferred practice is to use two redundant SAN fabrics. Therefore, IBM advises that you have each host equipped with a minimum of two host bus adapters (HBAs) or at least a dual-port HBA with each HBA connected to a SAN switch in either fabric.
IBM FlashSystem V9000 imposes no particular limit on the actual distance between IBM FlashSystem V9000 and host servers. Therefore, a server can be attached to an edge switch in a core-edge configuration, with IBM FlashSystem V9000 at the core of the fabric.
For host attachment, IBM FlashSystem V9000 supports up to three inter-switch link (ISL) hops in the fabric, which means that connectivity between the server and IBM FlashSystem V9000 can be separated by up to five FC links, four of which can be 10 km long (6.2 miles) if longwave small form-factor pluggables (SFPs) are used.
The zoning capabilities of the SAN switches are used to create distinct zones for host connectivity, for connectivity to the IBM FlashSystem V9000 storage enclosures in scalable building blocks, and to any external storage arrays virtualized with IBM FlashSystem V9000.
IBM FlashSystem V9000 supports 2 Gbps, 4 Gbps, 8 Gbps, or 16 Gbps FC fabrics, depending on the hardware configuration and on the switch where the IBM FlashSystem V9000 is connected. In an environment where you have a fabric with multiple-speed switches, the preferred practice is to connect the IBM FlashSystem V9000 and any external storage systems to the switch that is operating at the highest speed.
For more details about SAN zoning and SAN connections, see the topic about planning and configuration in Implementing the IBM System Storage SAN Volume Controller V7.4, SG24-7933.
IBM FlashSystem V9000 contains shortwave small form-factor pluggables (SFPs). Therefore, they must be within 300 meters (984.25 feet) of the switch to which they attach. The IBM FlashSystem V9000 shortwave SFPs can be replaced with longwave SFPs, which extends the connectivity distance to the switches to that of the SFP specification, typically 5, 10, or 25 kilometers.
Table 7-1 shows the fabric types that can be used for communicating between hosts, nodes, and RAID storage systems. These fabric types can be used at the same time.
Table 7-1 IBM FlashSystem V9000 communication options
Communication type                   Host to             FlashSystem V9000      FlashSystem V9000 to
                                     FlashSystem V9000   to external storage    FlashSystem V9000
Fibre Channel SAN (FC)               Yes                 Yes                    Yes
iSCSI (1 Gbps or 10 Gbps Ethernet)   Yes                 Yes (1)                No
FCoE (10 Gbps Ethernet)              Yes                 No                     Yes

(1) Starting with Version 7.7.1.1 of the IBM FlashSystem V9000 software, IBM FlashSystem A9000 and XIV Gen 3 can also be attached via iSCSI.
To avoid latencies that lead to degraded performance, avoid ISL hops when possible. That is, in an optimal setup, the servers connect to the same SAN switch as the IBM FlashSystem V9000.
The following guidelines apply when you connect host servers to an IBM FlashSystem V9000:
Up to 512 hosts per building block are supported, which results in a total of 2,048 hosts for a fully scaled system.
If the same host is connected to multiple building blocks of a cluster, it counts as a host in each building block.
A total of 2,048 distinct, configured host worldwide port names (WWPNs) are supported per building block, for a total of 8,192 for a fully scaled system.
This limit is the sum of the FC host ports and the host iSCSI names (an internal WWPN is generated for each iSCSI name) that are associated with all of the hosts that are associated with a building block.
7.2.2 Fibre Channel SAN attachment
Switch zoning on the SAN fabric defines the access from a server to IBM FlashSystem V9000.
Consider the following rules for zoning hosts with IBM FlashSystem V9000:
Homogeneous HBA port zones
Switch zones that contain HBAs must contain HBAs from similar host types and similar HBAs in the same host. For example, AIX and Microsoft Windows hosts must be in separate zones, and QLogic and Emulex adapters must also be in separate zones.
 
Important: A configuration that breaches this rule is unsupported because it can introduce instability to the environment.
HBA to IBM FlashSystem V9000 port zones
Place each host’s HBA in a separate zone along with one or two IBM FlashSystem V9000 ports. If there are two ports, use one from each controller in the building block. Do not place more than two IBM FlashSystem V9000 ports in a zone with an HBA, because this design results in more than the advised number of paths, as seen from the host multipath driver.
 
Number of paths: For n + 1 redundancy, use the following number of paths:
With two HBA ports, zone HBA ports to IBM FlashSystem V9000 ports 1:2 for a total of four paths.
With four HBA ports, zone HBA ports to IBM FlashSystem V9000 ports 1:1 for a total of four paths.
Optional (n+2 redundancy): With four HBA ports, zone HBA ports to IBM FlashSystem V9000 ports 1:2 for a total of eight paths. The term HBA port is used here to describe the SCSI initiator, and the term IBM FlashSystem V9000 port describes the SCSI target.
Maximum host paths per logical unit (LU)
For any volume, the number of paths through the SAN from IBM FlashSystem V9000 to a host must not exceed eight. For most configurations, four paths to a building block (four paths to each volume that is provided by this building block) are sufficient.
 
Important: The maximum number of host paths per LUN should not exceed eight.
Balanced host load across HBA ports
To obtain the best performance from a host with multiple ports, ensure that each host port is zoned with a separate group of IBM FlashSystem V9000 ports.
Balanced host load across IBM FlashSystem V9000 ports
To obtain the best overall performance of the system and to prevent overloading, the workload to each IBM FlashSystem V9000 port must be equal. You can achieve this balance by zoning approximately the same number of host ports to each IBM FlashSystem V9000 port.
When possible, use the minimum number of paths that are necessary to achieve a sufficient level of redundancy. For IBM FlashSystem V9000, no more than four paths per building block are required to accomplish this layout.
All paths must be managed by the multipath driver on the host side. If you assume that a server is connected through four ports to IBM FlashSystem V9000, each volume is seen through eight paths. With 125 volumes mapped to this server, the multipath driver must support handling up to 1,000 active paths (8 x 125).
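The following sketch illustrates this zoning guidance on a Brocade-based fabric; the switch vendor, alias names, and WWPNs are assumptions for illustration only. One host HBA port is zoned with one port from each controller of the building block, and the zone is then added to the active zoning configuration:

alicreate "HOST1_HBA0", "10:00:00:00:c9:32:a8:65"
alicreate "V9000_CTRL1_P1", "50:05:07:68:0c:11:22:33"
alicreate "V9000_CTRL2_P1", "50:05:07:68:0c:11:22:44"
zonecreate "HOST1_HBA0_V9000", "HOST1_HBA0; V9000_CTRL1_P1; V9000_CTRL2_P1"
cfgadd "FABRIC_A_CFG", "HOST1_HBA0_V9000"
cfgenable "FABRIC_A_CFG"

A second zone for the host's other HBA port is created in the same way in the other fabric, which yields the four paths per volume described previously.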
IBM FlashSystem V9000 8-port and 12-port configurations provide an opportunity to isolate traffic on dedicated ports, thereby providing a level of protection against misbehaving devices and workloads that can compromise the performance of the shared ports.
There is a benefit in isolating remote replication traffic on dedicated ports, to ensure that problems that affect the cluster-to-cluster interconnect do not adversely affect ports on the primary cluster and, as a result, the performance of workloads running on the primary cluster. Migrating from existing configurations with only four ports, or later from 8-port or 12-port configurations, to configurations with additional ports can reduce the effect of performance issues on the primary cluster by isolating remote replication traffic.
7.2.3 Fibre Channel direct attachment
If you attach the IBM FlashSystem V9000 directly to a host, the host must be attached to both controllers of a building block. If the host is not attached to both controllers, the host is shown as degraded.
If you use SAN attachment and direct attachment simultaneously on an IBM FlashSystem V9000, the direct-attached host is shown as degraded. Using a switch enforces the switch rule for all attached hosts, which requires that a host port be connected to both IBM FlashSystem V9000 controllers. Because a direct-attached host cannot connect one port to both controllers, it does not meet the switch rule and its state is degraded.
 
Note: You can attach a host through a switch and simultaneously attach a host directly to the IBM FlashSystem V9000. But then, the direct-attached host is shown as degraded.
7.3 iSCSI
The iSCSI protocol is a block-level protocol that encapsulates SCSI commands into TCP/IP packets and, therefore, uses an existing IP network rather than requiring the FC HBAs and SAN fabric infrastructure. The iSCSI standard is defined by RFC 3720. iSCSI connectivity is a software feature that is provided by IBM FlashSystem V9000.
The iSCSI-attached hosts can use a single network connection or multiple network connections.
 
iSCSI for external storage: In earlier versions, only hosts could attach to IBM FlashSystem V9000 through iSCSI. However, starting with Version 7.7.1.1 of the IBM FlashSystem V9000 software, IBM FlashSystem A9000 and XIV Gen 3 can be attached and virtualized through iSCSI.
Each IBM FlashSystem V9000 controller is equipped with four onboard Ethernet ports, which can operate at a link speed of up to 1 Gbps for the AC2 controllers and up to 10 Gbps for the AC3 controllers. One of these Ethernet ports is the technician port and cannot be used for iSCSI traffic. Each controller’s Ethernet port that is numbered 1 is used as the primary cluster management port.
One additional AH12 four-port 10 Gbps Ethernet PCIe adapter with four preinstalled SFP+ transceivers can be added to each IBM FlashSystem V9000 control enclosure. This adds iSCSI and FCoE connectivity to the system.
For optimal performance, IBM advises that you use 10 Gbps Ethernet connections between IBM FlashSystem V9000 and iSCSI-attached hosts.
7.3.1 Initiators and targets
An iSCSI client, which is known as an (iSCSI) initiator, sends SCSI commands over an IP network to an iSCSI target. A single iSCSI initiator or iSCSI target is referred to as an iSCSI node.
You can use the following types of iSCSI initiators in host systems:
Software initiator: Available for most operating systems; for example, AIX, Linux, and Windows.
Hardware initiator: Implemented as a network adapter with an integrated iSCSI processing unit, which is also known as an iSCSI HBA.
For more information about the supported operating systems for iSCSI host attachment and the supported iSCSI HBAs, see the following web pages:
IBM System Storage Interoperation Center (SSIC) for the IBM FlashSystem V9000 interoperability matrix:
IBM FlashSystem V9000 web page at IBM Knowledge Center:
An iSCSI target refers to a storage resource that is on an iSCSI server. It also refers to one of potentially many instances of iSCSI nodes that are running on that server.
7.3.2 iSCSI nodes
One or more iSCSI nodes exist within a network entity. The iSCSI node is accessible through one or more network portals. A network portal is a component of a network entity that has a TCP/IP network address and can be used by an iSCSI node.
An iSCSI node is identified by its unique iSCSI name, which is referred to as an iSCSI qualified name (IQN). The purpose of this name is only the identification of the node, not the node’s address. In iSCSI, the name is separated from the addresses. This separation enables multiple iSCSI nodes to use the same addresses or, as implemented in IBM FlashSystem V9000, the same iSCSI node to use multiple addresses.
7.3.3 iSCSI qualified name
An IBM FlashSystem V9000 can provide up to eight iSCSI targets, one per controller. Each IBM FlashSystem V9000 controller has its own IQN, which by default is in the following form:
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
An iSCSI host in IBM FlashSystem V9000 is defined by specifying its iSCSI initiator names. The following example shows an IQN of a Windows server’s iSCSI software initiator:
iqn.1991-05.com.microsoft:itsoserver01
During the configuration of an iSCSI host in IBM FlashSystem V9000, you must specify the host’s initiator IQNs.
An alias string can also be associated with an iSCSI node. The alias enables an organization to associate a string with the iSCSI name. However, the alias string is not a substitute for the iSCSI name.
 
Note: Ethernet link aggregation (port trunking) or channel bonding for the Ethernet ports of the IBM FlashSystem V9000 controllers is not supported.
7.3.4 iSCSI set up of IBM FlashSystem V9000 and host server
You must perform the following procedure when you are setting up a host server for use as an iSCSI initiator with IBM FlashSystem V9000 volumes. The specific steps vary depending on the particular host type and operating system that you use.
To configure a host, first select a software-based iSCSI initiator or a hardware-based iSCSI initiator. For example, the software-based iSCSI initiator can be a Linux or Windows iSCSI software initiator. The hardware-based iSCSI initiator can be an iSCSI HBA inside the host server.
To set up your host server for use as an iSCSI software-based initiator with IBM FlashSystem V9000 volumes, complete the following steps (the CLI is used in this example; a sample command sequence is shown after the steps):
1. Set up your IBM FlashSystem V9000 cluster for iSCSI:
a. Select a set of IPv4 or IPv6 addresses for the Ethernet ports on the nodes that are in the building block that use the iSCSI volumes.
b. Configure the node Ethernet ports on each IBM FlashSystem V9000 controller by running the cfgportip command.
c. Verify that you configured the Ethernet ports of IBM FlashSystem V9000 correctly by reviewing the output of the lsportip command and lssystemip command.
d. Use the mkvdisk command to create volumes on IBM FlashSystem V9000 clustered system.
e. Use the mkhost command to create a host object on IBM FlashSystem V9000. The mkhost command defines the host’s iSCSI initiator to which the volumes are to be mapped.
f. Use the mkvdiskhostmap command to map the volume to the host object in IBM FlashSystem V9000.
2. Set up your host server:
a. Ensure that you configured your IP interfaces on the server.
b. Ensure that your iSCSI HBA is ready to use, or install the software for the iSCSI software-based initiator on the server, if needed.
c. On the host server, run the configuration methods for iSCSI so that the host server iSCSI initiator logs in to IBM FlashSystem V9000 and discovers IBM FlashSystem V9000 volumes. The host then creates host devices for the volumes.
After the host devices are created, you can use them with your host applications.
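The following command sequence is a minimal sketch of step 1; the pool name, IP addresses, volume name, host name, and host IQN are illustrative assumptions only:

svctask cfgportip -node 1 -ip 10.10.10.11 -mask 255.255.255.0 -gw 10.10.10.1 2
svctask cfgportip -node 2 -ip 10.10.10.12 -mask 255.255.255.0 -gw 10.10.10.1 2
svctask mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -name iscsi_vol01
svctask mkhost -name linuxhost01 -iscsiname iqn.1994-05.com.redhat:linuxhost01
svctask mkvdiskhostmap -host linuxhost01 iscsi_vol01

The two cfgportip commands assign an iSCSI address to Ethernet port 2 of each controller, mkvdisk creates the volume, mkhost defines the host object by its iSCSI initiator name, and mkvdiskhostmap maps the volume to that host. You can then verify the result with the lsportip and lshostvdiskmap commands.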
7.3.5 Volume discovery
Hosts can discover volumes through one of the following three mechanisms:
Internet Storage Name Service (iSNS)
IBM FlashSystem V9000 can register with an iSNS name server; the IP address of this server is set by using the chsystem command. A host can then query the iSNS server for available iSCSI targets.
Service Location Protocol (SLP)
Each IBM FlashSystem V9000 controller runs an SLP daemon, which responds to host requests. This daemon reports the available services on the node. One service is the CIM object manager (CIMOM), which runs on the configuration controller; the iSCSI I/O service can now also be reported.
SCSI Send Target request
The host can also send a Send Target request by using the iSCSI protocol to the iSCSI TCP/IP port (port 3260). You must define the network portal IP addresses of the iSCSI targets before a discovery can be started.
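For example, with the Linux open-iscsi software initiator, a Send Target discovery can be issued against one of the configured IBM FlashSystem V9000 iSCSI port addresses, followed by a login to the discovered target; the IP address shown is an assumption:

iscsiadm -m discovery -t sendtargets -p 10.10.10.11:3260
iscsiadm -m node -p 10.10.10.11:3260 --login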
7.3.6 Authentication
The authentication of hosts is optional; by default, it is disabled. The user can choose to enable Challenge Handshake Authentication Protocol (CHAP) authentication, which involves sharing a CHAP secret between the cluster and the host. If the correct key is not provided by the host, IBM FlashSystem V9000 does not allow it to perform I/O to volumes. Also, you can assign a CHAP secret to the IBM FlashSystem V9000.
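For example, a CHAP secret can be assigned to an existing host object with the chhost command; the secret and the host name shown here are assumptions:

svctask chhost -chapsecret my_chap_secret linuxhost01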
7.3.7 Target failover
A feature of iSCSI is the option to move iSCSI target IP addresses between IBM FlashSystem V9000 controllers in a building block. IP addresses are moved only from one controller to its partner controller if a controller goes through a planned or unplanned restart. If the Ethernet link to IBM FlashSystem V9000 fails because of a cause outside of IBM FlashSystem V9000 (such as disconnection of the cable or failure of the Ethernet router), IBM FlashSystem V9000 makes no attempt to fail over an IP address to restore IP access to the system. To enable validation of the Ethernet access to the controllers, IBM FlashSystem V9000 responds to ping with the standard one-per-second rate without frame loss.
For handling the iSCSI IP address failover, a clustered Ethernet port is used. A clustered Ethernet port consists of one physical Ethernet port on each controller in the IBM FlashSystem V9000. The clustered Ethernet port contains configuration settings that are shared by all of these ports.
An iSCSI target node failover happens during a planned or unplanned node restart in an IBM FlashSystem V9000 building block. This example refers to IBM FlashSystem V9000 controllers with no optional 10 GbE iSCSI adapter installed:
1. During normal operation, one iSCSI target node instance is running on each IBM FlashSystem V9000 controller. All of the IP addresses (IPv4 and IPv6) that belong to this iSCSI target (including the management addresses if the controller acts as the configuration controller) are presented on the two ports of a controller.
2. During a restart of an IBM FlashSystem V9000 controller, the iSCSI target, including all of its IP addresses defined on Ethernet ports 1, 2, and 3, and the management IP addresses (if it acted as the configuration controller), fail over to Ethernet ports 1, 2, and 3 of the partner controller within the building block. An iSCSI initiator that is running on a server reconnects to its iSCSI target at the IBM FlashSystem V9000, but now the same IP addresses are presented by the other controller of the IBM FlashSystem V9000 building block.
3. When the controller finishes its restart, the iSCSI target node (including its IP addresses) that is running on the partner controller fails back. Again, the iSCSI initiator that is running on a server runs a reconnect to its iSCSI target. The management addresses do not fail back. The partner controller remains in the role of the configuration controller for this IBM FlashSystem V9000.
7.3.8 Host failover
From a host perspective, a multipathing I/O (MPIO) driver is not required to handle an IBM FlashSystem V9000 controller failover. In the case of an IBM FlashSystem V9000 controller restart, the host reconnects to the IP addresses of the iSCSI target node that reappear after several seconds on the ports of the partner node.
A host multipathing driver for iSCSI is required in the following situations:
To protect a host from network link failures, including port failures on IBM FlashSystem V9000 controllers
To protect a host from a HBA failure (if two HBAs are in use)
To protect a host from network failures, if it is connected through two HBAs to two separate networks
To provide load balancing on the server’s HBA and the network links
The commands for the configuration of the iSCSI IP addresses are separated from the configuration of the cluster management IP addresses. The following commands are used for managing iSCSI IP addresses:
The lsportip command lists the iSCSI IP addresses that are assigned for each port on each controller in the IBM FlashSystem V9000.
The cfgportip command assigns an IP address to each controller’s Ethernet port for iSCSI I/O.
The following commands are used for viewing and configuring IBM FlashSystem V9000 cluster management IP addresses:
The lssystemip command returns a list of the IBM FlashSystem V9000 management IP addresses that are configured for each port.
The chsystemip command modifies the IP configuration parameters for the IBM FlashSystem V9000.
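The following sketch contrasts the two command sets: lsportip lists the iSCSI port addresses, lssystemip lists the management addresses, and chsystemip changes the management address that is presented on port 1; the address values are assumptions:

svcinfo lsportip -delim :
svcinfo lssystemip
svctask chsystemip -clusterip 10.10.20.5 -gw 10.10.20.1 -mask 255.255.255.0 -port 1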
The parameters for remote services (SSH and web services) remain associated with the IBM FlashSystem V9000 object. During an IBM FlashSystem V9000 code upgrade, the configuration settings for the IBM FlashSystem V9000 are applied to the controller Ethernet port one.
For iSCSI-based access, the use of redundant network connections and separating iSCSI traffic by using a dedicated network or virtual LAN (VLAN) prevents any NIC, switch, or target port failure from compromising the host server’s access to the volumes.
Three of the four onboard Ethernet ports of an IBM FlashSystem V9000 controller can be configured for iSCSI. For each Ethernet port, a maximum of one IPv4 address and one IPv6 address can be designated. Each port can be simultaneously used for management, remote copy over IP and as an iSCSI target for hosts.
When using the IBM FlashSystem V9000 onboard Ethernet ports for iSCSI traffic, the suggestion is to dedicate port one to IBM FlashSystem V9000 management and ports two and three to iSCSI usage. By using this approach, ports two and three can be connected to a dedicated network segment or VLAN for iSCSI.
Because IBM FlashSystem V9000 does not support the use of VLAN tagging to separate management and iSCSI traffic, you can assign the LAN switch port to a dedicated VLAN to separate IBM FlashSystem V9000 management and iSCSI traffic.
7.4 File alignment for the best RAID performance
File system alignment can improve performance for storage systems that use a RAID storage architecture. File system alignment is a technique that matches file system I/O requests with important block boundaries in the physical storage system. Alignment is important in any system that implements a RAID layout. I/O requests that fall within the boundaries of a single stripe have better performance than an I/O request that affects multiple stripes. When an I/O request crosses the endpoint of one stripe and into another stripe, the controller must modify both stripes to maintain their consistency.
Unaligned accesses include requests that start at an address that is not divisible by 4 KB, or that are not a multiple of 4 KB in size. These unaligned accesses are serviced at much higher response times, and they can also significantly reduce the performance of aligned accesses that were issued in parallel.
 
Note: Format all client host file systems on the storage system at 4 KB or at a multiple of 4 KB. This preference applies for both 512-byte and 4096-byte sector sizes. For example, file systems that are formatted at an 8 KB allocation size or a 64 KB allocation size are satisfactory because they are a multiple of 4 KB.
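As an illustration, on a Linux host a partition can be created on a 1 MiB boundary and the file system formatted with a 4 KB block size; the device name /dev/sdb is an assumption:

parted -s -a optimal /dev/sdb mklabel gpt mkpart primary 1MiB 100%
mkfs.xfs -b size=4096 /dev/sdb1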
7.5 AIX: Specific information
This section describes specific information that relates to the connection of IBM AIX based hosts in an IBM FlashSystem V9000 environment.
 
Note: In this section, the IBM System p information applies to all AIX hosts that are listed on IBM FlashSystem V9000 interoperability support website, including IBM System i partitions and IBM JS blades.
7.5.1 Optimal logical unit number configurations for AIX
The number of logical unit numbers (LUNs) that you create on the IBM FlashSystem V9000 (as well as on IBM FlashSystem 900) can affect the overall performance of AIX. Applications perform optimally if at least 32 LUNs are used in a volume group. If fewer volumes are required by an application, use the Logical Volume Manager (LVM) to map fewer logical volumes to 32 logical units. This does not affect performance in any significant manner (LVM resource requirements are small).
 
Note: Use at least 32 LUNs in a volume group because this number is the best tradeoff between good performance (the more queued I/Os, the better the performance) and minimizing resource use and complexity.
7.5.2 Configuring the AIX host
To attach IBM FlashSystem V9000 volumes to an AIX host, complete these steps:
1. Install the HBAs in the AIX host system.
2. Ensure that you installed the correct operating systems and version levels on your host, including any updates and authorized program analysis reports (APARs) for the operating system.
3. Connect the AIX host system to the FC switches.
4. Configure the FC switch zoning.
5. Install and configure the Subsystem Device Driver Path Control Module (SDDPCM).
6. Perform the logical configuration on IBM FlashSystem V9000 to define the host, volumes, and host mapping.
7. Run the cfgmgr command to discover and configure IBM FlashSystem V9000 volumes.
 
7.5.3 Configuring fast fail and dynamic tracking
For hosts that are running AIX V5.3 or later operating systems, enable both fast fail and dynamic tracking.
Complete these steps to configure your host system to use the fast fail and dynamic tracking attributes:
1. Run the following command to set the fast fail attribute of the FC SCSI I/O Controller Protocol Device for each adapter:
chdev -l fscsi0 -a fc_err_recov=fast_fail
That command is for adapter fscsi0. Example 7-1 shows the command for both adapters on a system that is running IBM AIX V6.1.
Example 7-1 Enable fast fail
#chdev -l fscsi0 -a fc_err_recov=fast_fail
fscsi0 changed
#chdev -l fscsi1 -a fc_err_recov=fast_fail
fscsi1 changed
2. Run the following command to enable dynamic tracking for each FC device:
chdev -l fscsi0 -a dyntrk=yes
This command is for adapter fscsi0.
Example 7-2 shows the command for both adapters in IBM AIX V6.1.
Example 7-2 Enable dynamic tracking
#chdev -l fscsi0 -a dyntrk=yes
fscsi0 changed
#chdev -l fscsi1 -a dyntrk=yes
fscsi1 changed
 
 
Note: The fast fail and dynamic tracking attributes do not persist through an adapter delete and reconfigure operation. Therefore, if the adapters are deleted and then configured back into the system, these attributes are lost and must be reapplied.
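If an adapter is busy and cannot be changed online, both attributes can instead be set in the ODM only by using the -P flag, so that they take effect at the next restart of the system; this sketch assumes adapter fscsi0:

chdev -l fscsi0 -a fc_err_recov=fast_fail -a dyntrk=yes -P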
Host adapter configuration settings
You can display the availability of installed host adapters by using the command that is shown in Example 7-3.
Example 7-3 FC host adapter availability
#lsdev -Cc adapter |grep fcs
fcs0 Available 1Z-08 FC Adapter
fcs1 Available 1D-08 FC Adapter
You can display the WWPN, along with other attributes, including the firmware level, by using the command that is shown in Example 7-4. The WWPN is represented as the Network Address.
Example 7-4 FC host adapter settings and WWPN
#lscfg -vpl fcs0
fcs0 U0.1-P2-I4/Q1 FC Adapter
 
Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A68D
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number.................. 00P4495
Network Address.............10000000C932A7FB
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951
Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C932A7FB
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U0.1-P2-I4/Q1
7.5.4 Subsystem Device Driver Path Control Module (SDDPCM)
The SDDPCM is a loadable path control module for supported storage devices to supply path management functions and error recovery algorithms. When the supported storage devices are configured as Multipath I/O (MPIO) devices, SDDPCM is loaded as part of the AIX MPIO FCP or AIX MPIO serial-attached SCSI (SAS) device driver during the configuration.
The AIX MPIO device driver automatically discovers, configures, and makes available all storage device paths. SDDPCM then manages these paths to provide the following functions:
High availability and load balancing of storage I/O
Automatic path-failover protection
Concurrent download of supported storage devices’ licensed machine code
Prevention of a single-point failure
The AIX MPIO device driver along with SDDPCM enhances the data availability and I/O load balancing of IBM FlashSystem V9000 volumes.
SDDPCM installation
Download the appropriate version of SDDPCM and install it by using the standard AIX installation procedure. The latest SDDPCM software versions are available at the following web page:
Check the driver readme file and make sure that your AIX system meets all prerequisites.
Example 7-5 shows the appropriate version of SDDPCM that is downloaded into the /tmp/sddpcm directory. From here, you extract it and run the inutoc command, which generates a .toc file that is needed by the installp command before SDDPCM is installed. Finally, run the installp command, which installs SDDPCM onto this AIX host.
Example 7-5 Installing SDDPCM on AIX
# ls -l
total 3232
-rw-r----- 1 root system 1648640 Jul 15 13:24 devices.sddpcm.61.rte.tar
# tar -tvf devices.sddpcm.61.rte.tar
-rw-r----- 271001 449628 1638400 Oct 31 12:16:23 2007 devices.sddpcm.61.rte
# tar -xvf devices.sddpcm.61.rte.tar
x devices.sddpcm.61.rte, 1638400 bytes, 3200 media blocks.
# inutoc .
# ls -l
total 6432
-rw-r--r-- 1 root system 531 Jul 15 13:25 .toc
-rw-r----- 1 271001 449628 1638400 Oct 31 2007 devices.sddpcm.61.rte
-rw-r----- 1 root system 1648640 Jul 15 13:24 devices.sddpcm.61.rte.tar
# installp -ac -d . all
Example 7-6 shows the lslpp command that checks the version of SDDPCM that is installed.
Example 7-6 Checking SDDPCM device driver
# lslpp -l | grep sddpcm
devices.sddpcm.61.rte 2.2.0.0 COMMITTED IBM SDD PCM for AIX V61
devices.sddpcm.61.rte 2.2.0.0 COMMITTED IBM SDD PCM for AIX V61
For more information about how to enable the SDDPCM web interface, see 7.13, “Using SDDDSM, SDDPCM, and SDD web interface” on page 319.
7.5.5 Configuring the assigned volume by using SDDPCM
This example uses an AIX host with host name Atlantic to demonstrate attaching IBM FlashSystem V9000 volumes to an AIX host. Example 7-7 shows host configuration before IBM FlashSystem V9000 volumes are configured. The lspv output shows the existing hdisks and the lsvg command output shows the existing volume group (VG).
Example 7-7 Status of AIX host system Atlantic
# lspv
hdisk0 0009cdcaeb48d3a3 rootvg active
hdisk1 0009cdcac26dbb7c rootvg active
hdisk2 0009cdcab5657239 rootvg active
# lsvg
rootvg
Identifying the WWPNs of the host adapter ports
Example 7-8 shows how the lscfg commands can be used to list the WWPNs for all installed adapters. The WWPNs are used later for mapping IBM FlashSystem V9000 volumes.
Example 7-8 HBA information for host Atlantic
# lscfg -vl fcs* |egrep "fcs|Network"
fcs1 U0.1-P2-I4/Q1 FC Adapter
Network Address.............10000000C932A865
Physical Location: U0.1-P2-I4/Q1
fcs2 U0.1-P2-I5/Q1 FC Adapter
Network Address.............10000000C94C8C1C
Displaying IBM FlashSystem V9000 configuration
You can use the CLI to display host configuration on the IBM FlashSystem V9000 and to validate the physical access from the host to IBM FlashSystem V9000.
Example 7-9 shows the use of the lshost and lshostvdiskmap commands to obtain the following information:
That a host definition was properly defined for the host Atlantic.
That the WWPNs (listed in Example 7-8) are logged in, with two logins each.
That Atlantic has three volumes assigned, and the volume serial numbers are listed.
Example 7-9 IBM FlashSystem V9000 definitions for host system Atlantic
IBM_2145:ITSO_V9000:admin>svcinfo lshost Atlantic
id 8
name Atlantic
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 10000000C94C8C1C
node_logged_in_count 2
state active
WWPN 10000000C932A865
node_logged_in_count 2
state active
 
IBM_2145:ITSO_V9000:admin>svcinfo lshostvdiskmap Atlantic
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
8 Atlantic 0 14 Atlantic0001 10000000C94C8C1C 6005076801A180E90800000000000060
8 Atlantic 1 22 Atlantic0002 10000000C94C8C1C 6005076801A180E90800000000000061
8 Atlantic 2 23 Atlantic0003 10000000C94C8C1C 6005076801A180E90800000000000062
Discovering and configuring LUNs
The cfgmgr command discovers the new LUNs and configures them into AIX. The cfgmgr command probes the devices on the adapters individually:
# cfgmgr -l fcs1
# cfgmgr -l fcs2
The following command probes the devices sequentially across all installed adapters:
# cfgmgr -vS
The lsdev command (Example 7-10) lists the three newly configured hdisks that are represented as MPIO FC 2145 devices.
Example 7-10 Volumes from IBM FlashSystem V9000
# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1D-08-02 MPIO FC 2145
hdisk4 Available 1D-08-02 MPIO FC 2145
hdisk5 Available 1D-08-02 MPIO FC 2145
Now, you can use the mkvg command to create a VG with the three newly configured hdisks, as shown in Example 7-11.
Example 7-11 Running the mkvg command
# mkvg -y itsoaixvg hdisk3
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg
# mkvg -y itsoaixvg1 hdisk4
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg1
# mkvg -y itsoaixvg2 hdisk5
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg2
The lspv output now shows the new VG label on each of the hdisks that were included in the VGs (Example 7-12).
Example 7-12 Showing the vpath assignment into the Volume Group (VG)
# lspv
hdisk0 0009cdcaeb48d3a3 rootvg active
hdisk1 0009cdcac26dbb7c rootvg active
hdisk2 0009cdcab5657239 rootvg active
hdisk3 0009cdca28b589f5 itsoaixvg active
hdisk4 0009cdca28b87866 itsoaixvg1 active
hdisk5 0009cdca28b8ad5b itsoaixvg2 active
7.5.6 Using SDDPCM
You administer SDDPCM by using the pcmpath command. You use this command to perform all administrative functions, such as displaying and changing the path state. The pcmpath query adapter command displays the current state of the adapters.
Example 7-13 shows that both adapters are in the optimal state: State is NORMAL and Mode is ACTIVE.
Example 7-13 SDDPCM commands that are used to check the availability of the adapters
# pcmpath query adapter
 
Active Adapters :2
Adpt# Name State Mode Select Errors Paths Active
0 fscsi1 NORMAL ACTIVE 407 0 6 6
1 fscsi2 NORMAL ACTIVE 425 0 6 6
The pcmpath query device command displays the current state of the devices. Example 7-14 shows the State and Mode of each path for each of the defined hdisks; all paths show the optimal status of State OPEN and Mode NORMAL. Additionally, an asterisk (*) that is displayed next to a path indicates an inactive path that is configured to the non-preferred IBM FlashSystem V9000 controller.
Example 7-14 SDDPCM commands that are used to check the availability of the devices
# pcmpath query device
Total Devices : 3
DEV#: 3 DEVICE NAME: hdisk3 TYPE: 2145 ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000060
==========================================================================
Path# Adapter/Path Name State Mode Select Errors
0 fscsi1/path0 OPEN NORMAL 152 0
1* fscsi1/path1 OPEN NORMAL 48 0
2* fscsi2/path2 OPEN NORMAL 48 0
3 fscsi2/path3 OPEN NORMAL 160 0
 
DEV#: 4 DEVICE NAME: hdisk4 TYPE: 2145 ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000061
==========================================================================
Path# Adapter/Path Name State Mode Select Errors
0* fscsi1/path0 OPEN NORMAL 37 0
1 fscsi1/path1 OPEN NORMAL 66 0
2 fscsi2/path2 OPEN NORMAL 71 0
3* fscsi2/path3 OPEN NORMAL 38 0
 
DEV#: 5 DEVICE NAME: hdisk5 TYPE: 2145 ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000062
==========================================================================
Path# Adapter/Path Name State Mode Select Errors
0 fscsi1/path0 OPEN NORMAL 66 0
1* fscsi1/path1 OPEN NORMAL 38 0
2* fscsi2/path2 OPEN NORMAL 38 0
3 fscsi2/path3 OPEN NORMAL 70 0
7.5.7 Creating and preparing volumes for use with AIX and SDDPCM
This section demonstrates how to create and prepare volumes for use with SDDPCM in AIX V6.1 and later.
The itsoaixvg volume group (VG) is created with hdisk3. A logical volume and a JFS2 file system with the mount point /itsoaixvg are then created in this VG, as shown in Example 7-15.
Example 7-15 Host system new VG and file system configuration
# lsvg -o
itsoaixvg2
itsoaixvg1
itsoaixvg
rootvg
# crfs -v jfs2 -g itsoaixvg -a size=3G -m /itsoaixvg -p rw -a agblksize=4096
File system created successfully.
3145428 kilobytes total disk space.
New File System size is 6291456
# lsvg -l itsoaixvg
itsoaixvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
loglv00 jfs2log 1 1 1 closed/syncd N/A
fslv00 jfs2 384 384 1 closed/syncd /itsoaixvg
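To complete the procedure, the new file system can then be mounted and verified; this sketch assumes the /itsoaixvg mount point that was created by the crfs command in Example 7-15:

# mount /itsoaixvg
# df -g /itsoaixvg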
7.5.8 Expanding an AIX volume
AIX supports dynamic volume expansion starting at IBM AIX 5L™ version 5.2. By using this capability, a volume’s capacity can be increased by the storage subsystem while the volumes are actively in use by the host and applications. The following guidelines apply:
The volume cannot belong to a concurrent-capable VG.
The volume cannot belong to a FlashCopy, Metro Mirror, or Global Mirror relationship.
The following steps expand a volume on an AIX host when the volume is on the IBM FlashSystem V9000 (a sample command sequence is shown after the steps):
1. Display the current size of the IBM FlashSystem V9000 volume by using the IBM FlashSystem V9000 lsvdisk <VDisk_name> CLI command. The capacity of the volume, as seen by the host, is displayed in the capacity field (as GBs) of the lsvdisk output.
2. Identify the corresponding AIX hdisk by matching the vdisk_UID from the lsvdisk output with the SERIAL field of the pcmpath query device output.
3. Display the capacity that is configured in AIX by using the lspv hdisk command. The capacity is shown in the TOTAL PPs field in MBs.
4. To expand the capacity of IBM FlashSystem V9000 volume, use the expandvdisksize command.
5. After the capacity of the volume is expanded, AIX must update its configured capacity. To start the capacity update on AIX, use the chvg -g vg_name command, where vg_name is the VG in which the expanded volume is found.
If AIX does not return any messages, the command was successful and the volume changes in this VG were saved.
If AIX cannot see any changes in the volumes, it returns an explanatory message.
6. Display the new AIX configured capacity by using the lspv hdisk command. The capacity (in MBs) is shown in the TOTAL PPs field.
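The following sketch ties these steps together; the first two commands run on the IBM FlashSystem V9000 CLI and the last two on the AIX host. The volume name, VG name, and hdisk are taken from the earlier examples, and the additional capacity of 10 GB is an assumption:

IBM_2145:ITSO_V9000:admin>svcinfo lsvdisk Atlantic0001
IBM_2145:ITSO_V9000:admin>svctask expandvdisksize -size 10 -unit gb Atlantic0001
# chvg -g itsoaixvg
# lspv hdisk3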
7.5.9 Running IBM FlashSystem V9000 commands from AIX host system
To run CLI commands, install and prepare the SSH client system on the AIX host system. For AIX 5L V5.1 and later, you can get OpenSSH from the Bonus Packs. You also need its prerequisite, OpenSSL, from the AIX toolbox for Linux applications for IBM Power Systems:
The AIX Open SSH installation images are available at this website:
Complete the following steps:
1. To generate the key files on AIX, run the following command:
ssh-keygen -t rsa -f filename
The -t parameter specifies the type of key to generate: rsa1, rsa2, or dsa. To generate an rsa2 key, specify the value rsa; for an rsa1 key, specify rsa1. When you create the key for IBM FlashSystem V9000, use an rsa2 key.
The -f parameter specifies the file names of the private and public keys on the AIX server (the public key has the .pub extension after the file name).
2. Install the public key on IBM FlashSystem V9000 by using the GUI.
3. On the AIX server, make sure that the private key and the public key are in the .ssh directory and in the home directory of the user.
4. To connect to IBM FlashSystem V9000 and use a CLI session from the AIX host, run the following command:
ssh -l admin -i filename V9000
5. You can also run the commands directly on the AIX host, which is useful when you are making scripts. To run the commands directly on the AIX host, add IBM FlashSystem V9000 commands to the previous command. For example, to list the hosts that are defined on IBM FlashSystem V9000, enter the following command:
ssh -l admin -i filename V9000 lshost
In this command, -l admin is the user name that is used to log in to IBM FlashSystem V9000, -i filename is the file name of the private key that is generated, and V9000 is the host name or IP address of IBM FlashSystem V9000.
7.6 IBM i: Specific information
This section describes specific information that relates to the connection of IBM i hosts in an IBM FlashSystem V9000 environment.
 
7.6.1 Connection of IBM FlashSystem V9000 to IBM i
IBM FlashSystem V9000 can be attached to IBM i in the following ways:
Native connection without using Virtual I/O Server (VIOS)
Connection with VIOS in N_Port ID Virtualization (NPIV) mode
Connection with VIOS in virtual SCSI (VSCSI) mode
Requirements
Table 7-2 lists the basic requirements.
Table 7-2 Basic requirements
Attachment type             Requirements
Native connection           IBM i logical partition (LPAR) must reside in an IBM POWER7® system or later.
                            When implemented in POWER7, requires IBM i V7.1 Technology Release (TR) 7 or later.
                            When implemented in an IBM POWER8® system, requires IBM i V7.1 TR 8 or later.
Connection with VIOS NPIV   IBM i partition must reside in a POWER7 system or later.
                            When implemented in POWER7, requires IBM i V7.1 TR 6 or later.
                            When implemented in POWER8, requires IBM i V7.1 TR 8 or later.
Connection with VIOS VSCSI  IBM i partition must reside in a POWER7 system or later.
                            When implemented in POWER7, requires IBM i V6.1.1 or later.
                            When implemented in POWER8, requires IBM i V7.1 TR 8 or later.
For detailed information about requirements, see the following resources:
IBM System Storage Interoperation Center (SSIC):
IBM i POWER® External Storage Support Matrix Summary:
Native connection: Implementation considerations
Native connection with SAN switches can be done with these adapters:
4 Gb Fibre Channel (FC) adapters, feature number 5774 or 5276
8 Gb FC adapters, feature number 5735 or 5273
16 Gb FC adapters, feature number EN0A or EN0B
Direct native connection without SAN switches can be done with these adapters:
4 Gb FC adapters in IBM i connected to 8 Gb adapters in IBM FlashSystem V9000
16 Gb adapters in IBM i connected to 16 Gb adapters in IBM FlashSystem V9000
You can attach a maximum 64 LUNs to a port in an IBM i adapter. The LUNs report in IBM i as disk units with type 2145.
IBM i enables SCSI command tag queuing in the LUNs from natively connected IBM FlashSystem V9000; the queue depth on a LUN with this type of connection is 16.
Connection with VIOS in NPIV: Implementation considerations
The following rules are for mapping server virtual FC adapters to the ports in VIOS when implementing IBM i in VIOS NPIV connection:
Map a maximum of one virtual FC adapter from an IBM i LPAR to a port in VIOS.
You can map up to 64 virtual FC adapters, each from a different IBM i LPAR, to the same port in VIOS.
You can use the same port in VIOS for both NPIV mapping and connection with VIOS VSCSI.
You can attach a maximum of 64 LUNs to a port in virtual FC adapter in IBM i. The LUNs report in IBM i as disk units with type 2145.
IBM i enables SCSI command tag queuing in the LUNs from VIOS NPIV connected IBM FlashSystem V9000; the queue depth on a LUN with this type of connection is 16.
For details about the VIOS NPIV connection, see IBM PowerVM Virtualization Introduction and Configuration, SG24-7940.
Connection with VIOS VSCSI: Implementation considerations
Up to 4,095 LUNs per target and up to 510 targets per port can be connected in a physical adapter in VIOS.
With IBM i release 7.2 and later, you can map a maximum of 32 LUNs to a virtual SCSI adapter in IBM i. With IBM i releases prior to 7.2, a maximum of 16 LUNs can be mapped to an IBM i virtual SCSI adapter. The LUNs report in IBM i as disk units of the type 6B22.
IBM i enables SCSI command tag queuing in the LUNs from a VIOS virtual SCSI-connected IBM FlashSystem V9000. The queue depth on a LUN with this type of connection is 32.
For more information about VIOS VSCSI connection, see IBM PowerVM Virtualization Introduction and Configuration, SG24-7940.
7.6.2 Block translation
IBM i disk units have a block size of 520 bytes. The IBM FlashSystem V9000 is formatted with a block size of 512 bytes, so a translation or mapping is required to attach these storage units to IBM i.
IBM i performs the following change of the data layout to support 512 byte blocks (sectors) in external storage:
For every page (8 * 520 byte sectors), it uses an additional ninth sector.
It stores the 8-byte headers of the 520-byte sectors in the ninth sector, and therefore changes the previous 8 * 520 byte blocks to 9 * 512 byte blocks.
The data that was previously stored in 8 sectors is now spread across 9 sectors, so the required disk capacity on IBM FlashSystem V9000 is 9/8 of the IBM i usable capacity.
Conversely, the usable capacity in IBM i is 8/9 of the allocated capacity in these storage systems.
Therefore, when attaching an IBM FlashSystem V9000 to IBM i, plan for this capacity overhead on the storage system; only 8/9 of the provisioned capacity is usable by IBM i.
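For example, to provide 800 GB of usable IBM i capacity, approximately 800 x 9/8 = 900 GB must be provisioned on IBM FlashSystem V9000; conversely, a 900 GB LUN yields approximately 800 GB of usable capacity in IBM i.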
The performance impact of block translation in IBM i is negligible.
7.6.3 IBM i LUNs and capacity
The size of an IBM i LUN can vary from 160 MB to 2 TB (both limits excluded). When defining LUNs for IBM i, account for the minimum capacity of the load source (boot disk) LUN:
With IBM i release 7.1, the minimum capacity is 20 GB.
With IBM i release 7.2 prior to TR1, the minimum capacity is 80 GB in IBM i.
With IBM i release 7.2 TR1 and later, the minimum capacity is 40 GB in IBM i.
For performance reasons, the preferred approach is to define LUNs of capacity 40 GB to 200 GB, and to define a minimum of 8 LUNs for an IBM i host. Another preferred practice is to define all LUNs in an IBM i Auxiliary Storage Pool (ASP) of the same capacity.
The capacity of the corresponding disk unit in IBM i is 8/9 of the LUN capacity, because of the IBM i block translation described in 7.6.2, “Block translation” on page 275.
 
Note: When calculating LUN capacity, take into account that IBM i reports capacity in decimal notation (GB), whereas IBM FlashSystem V9000 reports capacity by default in binary notation (GiB).
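For example, a LUN configured as 40 GiB on IBM FlashSystem V9000 corresponds to approximately 42.9 GB (40 x 1.073741824) in the decimal notation that IBM i uses.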
7.6.4 Data layout
A storage pool is a collection of managed disks from which volumes are created and presented to the IBM i system as LUNs. The primary property of a storage pool is the extent size, which can be 16 MB to 8192 MB. The default extent size in IBM FlashSystem V9000 is 1024 MB. The extent is the smallest unit of allocation from the pool and determines the maximum storage capacity that can be managed by the IBM FlashSystem V9000. For further details, see the V7.6 Configuration Limits and Restrictions for IBM FlashSystem V9000 support document:
When defining IBM FlashSystem V9000 LUNs for IBM i, use the default extent size of 1024 MB. Another good practice is to use the default option of cache mode enabled on the LUNs for IBM i.
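For example, a storage pool for IBM i LUNs can be created with the default extent size by using the mkmdiskgrp command; the pool name and MDisk name are assumptions:

svctask mkmdiskgrp -name IBMi_Pool -ext 1024 -mdisk mdisk0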
Sharing a storage pool among workloads means spreading workloads across all the available resources of the storage system. Consider the following guidelines for sharing a disk pool:
With traditional disk drives, performance problems can possibly arise when sharing resources because of contention on these resources. For this reason, a good practice is to isolate the important production workloads to separate disk pools.
With IBM FlashSystem V9000, the data is distributed in a disk pool that can have one or more MDisks, with each MDisk created from one background IBM FlashSystem V9000.
 – If multiple IBM FlashSystem V9000 exist, consider separating them for different workloads.
 – If one IBM FlashSystem V9000 is in the background, a sensible approach is for IBM i LPARs to share the storage pool among them.
7.6.5 Thin provisioning and IBM Real-time Compression
IBM i can take advantage of thin provisioning and Real-time Compression in IBM FlashSystem V9000 because these functions are transparent to the host server.
With thin provisioning, IBM i 7.1 and later do not pre-format LUNs, so initial allocations of LUNs can be thin provisioned. However, there is no space reclamation, so the effectiveness of thin provisioning might decline over time.
IBM Real-time Compression allows the use of less physical space on disk than is presented to the IBM i host. The capacity needed on the IBM FlashSystem V9000 is reduced due to both compression and thin provisioning. However, Real-time Compression typically has a latency impact on I/O service times and a throughput impact.
Plan carefully before using thin provisioning or Real-time Compression with IBM i. For information about Real-time Compression see Accelerate with IBM FlashSystem V840 Compression, REDP-5147.
7.6.6 Multipath
Multipath provides greater resiliency for SAN attached storage. IBM i supports up to eight active paths to each LUN. In addition to the availability considerations, lab performance testing has shown that two or more paths provide performance improvements when compared to a single path.
With traditional disk drives, two paths to a LUN is typically the ideal balance of price and performance. With IBM FlashSystem V9000 you can expect higher access density (I/O per second per GB) than with traditional disk drives; consider three to four active paths per LUN.
Multipath for a LUN is achieved by connecting the LUN to two or more ports in different physical or virtual adapters in the IBM i partition:
With native connection to IBM FlashSystem V9000, the ports for multipath must be in different physical adapters in IBM i.
With VIOS NPIV in dual path, the virtual Fibre Channel adapters for multipath must be assigned to different Virtual I/O Servers. With more than two paths, use at least two Virtual I/O Servers and spread the virtual FC adapters evenly among the Virtual I/O Servers.
With VIOS VSCSI connection in dual path, the virtual SCSI adapters for multipath must be assigned to different VIOS. With more than two paths, use at least two VIOS and spread the VSCSI adapters evenly among the VIOS.
Every LUN in IBM FlashSystem V9000 uses one control enclosure as the preferred node; the I/O rate to or from the particular LUN normally goes through the preferred node. If the preferred node fails, the I/O operations are transferred to the remaining node.
With IBM i multipath, all the paths to a LUN through the preferred node are active and the paths through the non-preferred node are passive. Multipath employs the load balancing among the paths to a LUN that go through the node that is preferred for that LUN. For more information, see Implementing the IBM System Storage SAN Volume Controller V7.4, SG24-7933.
7.6.7 Fibre Channel adapters in IBM i partition
The following Fibre Channel adapters can be used in IBM i when connecting IBM FlashSystem V9000 in native mode:
16 Gb PCIe2 2-port FC adapter, feature number EN0A or feature number EN0B (low-profile)
8 Gb PCIe 2-port FC adapter, feature number 5735 or feature number 5273 (low-profile)
4 Gb PCIe 2-port FC adapter, feature number 5774 or feature number 5276 (low-profile)
The following Fibre Channel adapters can be used in IBM i VIOS when connecting IBM FlashSystem V9000 to IBM i client in VIOS NPIV mode:
16 Gb PCIe2 2-port FC adapter, feature number EN0A or feature number EN0B (low-profile)
8 Gb PCIe 2-port FC adapter, feature number 5735 or feature number 5273 (low-profile)
8 Gb PCIe2 4-Port FC adapter, feature number 5729
The following Fibre Channel adapters can be used in IBM i VIOS when connecting IBM FlashSystem V9000 to IBM i client in VIOS Virtual SCSI mode:
16 Gb PCIe2 2-port FC adapter, feature number EN0A or feature number EN0B (low-profile)
8 Gb PCIe 2-port FC adapter, feature number 5735 or feature number 5273 (low-profile)
8 Gb PCIe2 4-port FC adapter, feature number EN0Y (low-profile)
When you size the number of FC adapters for an IBM i workload for native or VIOS NPIV connection, account for the maximum I/O rate (I/O per second) and data rate (MB per second) that a port in a particular adapter can sustain at 70% utilization, and the I/O rate and data rate of the IBM i workload.
If multiple IBM i partitions connect through the same port in VIOS NPIV, account for the maximum rate at the port at 70% utilization and the sum of I/O rates and data rates of all connected LPARs.
7.6.8 Zoning SAN switches
This section provides guidelines for zoning the SAN switches with IBM FlashSystem V9000 connection to IBM i. It describes zoning guidelines for different types of connection, for one IBM FlashSystem V9000 I/O group, and for an IBM FlashSystem V9000 cluster.
Native or VIOS NPIV connection, one IBM FlashSystem V9000 I/O group
With native connection and the connection with VIOS NPIV, zone the switches so that one World Wide Port Name (WWPN) of one IBM i port is in a zone with two ports of IBM FlashSystem V9000, each port from one control enclosure. This approach improves resiliency for the I/O rate to or from a LUN assigned to that WWPN. If the preferred controller node for that LUN fails, the I/O rate will continue using the non-preferred controller node.
 
Note: For a VIOS NPIV configuration, a virtual FC client adapter for IBM i has two WWPNs. For connecting external storage, the first WWPN is used, and the second WWPN is used for Live Partition Mobility. Therefore, a good practice is to zone both WWPNs if you plan to use Live Partition Mobility; otherwise, zone only the first WWPN.
Native or VIOS NPIV connection, two IBM FlashSystem V9000 I/O groups in a cluster
With two I/O groups in the cluster, zone half of the IBM i ports (either in physical or in virtual FC adapter) with one I/O group, each IBM i port zoned with both control enclosures of the I/O group.
Zone half of IBM i ports with the other I/O group, each port zoned with both control enclosures of the I/O group. Assign a LUN to an IBM i port (either in physical or in virtual FC adapter) that is zoned with the caching I/O group for this LUN. In multipath, assign the LUN to two or more ports, all of them zoned with the caching I/O group of this LUN.
In some cases, such as preparing for LUN migration, preparing for HyperSwap or for additional resiliency, consider zoning each IBM i port with both I/O groups of the IBM FlashSystem V9000 cluster.
For more information, see IBM i and IBM Storwize Family: A Practical Guide to Usage Scenarios, SG24-8197.
VIOS VSCSI connection
When connecting with VIOS virtual SCSI (VSCSI), zone one physical port in VIOS with all available IBM FlashSystem V9000 ports, or with as many ports as possible to allow load balancing. Keep in mind that a maximum of eight paths are available from VIOS to IBM FlashSystem V9000. The IBM FlashSystem V9000 ports that are zoned with one VIOS port should be spread evenly across the IBM FlashSystem V9000 control enclosures.
7.6.9 Boot from SAN
The IBM i boot disk (load source disk) resides on an IBM FlashSystem V9000 LUN. The suggestion is that the load source LUN be the same size as the other LUNs in the system disk pool (ASP1). No special requirements or guidelines exist for the layout or connection of the load source LUN.
When you install the IBM i operating system with disk capacity on IBM FlashSystem V9000, the installation prompts you to select one of the available IBM FlashSystem V9000 LUNs for the load source.
7.6.10 IBM i mirroring
Some organizations prefer the additional resiliency of the IBM i mirroring function. For example, they use mirroring between two IBM FlashSystem V9000 systems, each connected through one VIOS.
When connecting with VIOS, start IBM i mirroring by following these steps:
1. Add the LUNs from two virtual adapters, each adapter connecting one half of the LUNs to be mirrored.
2. After mirroring is started for those LUNs, add the LUNs from two new virtual adapters, each adapter again connecting one half of the LUNs to be mirrored.
3. Continue this way until the mirroring is started for all LUNs.
By following these steps, you ensure that mirroring is started between the two halves that you want to be mirrored. For example, this approach ensures that mirroring is started between the two IBM FlashSystem V9000 systems and not among LUNs in the same IBM FlashSystem V9000.
7.6.11 Migration
This section describes ways to migrate an IBM i partition to disk capacity on IBM FlashSystem V9000.
Migration with ASP balancing and copying load source
This migration approach can be used to migrate IBM i disk capacity from internal disk, or from any storage system, to IBM FlashSystem V9000. It requires relatively short downtime, but it might require temporarily connecting additional FC adapters to IBM i.
Use the following steps to perform the migration:
1. Connect IBM FlashSystem V9000 along with existing internal disk or storage system to the IBM i LPAR.
2. By using the ASP balancing function, migrate the data from the currently used disks or LUNs, except the load source, to the IBM FlashSystem V9000 LUNs. ASP balancing does not require any downtime; it is done while the IBM i partition is running. Depending on the installation needs, you can perform the balancing relatively quickly with some performance impact, or slowly with minimal performance impact.
3. After the data (except the load source) is migrated to IBM FlashSystem V9000, use the IBM i Dedicated Service Tools function to copy the load source to an IBM FlashSystem V9000 LUN, which must be at least as large as the current load source. This action is disruptive and requires careful planning and execution.
4. After the load source is copied to an IBM FlashSystem V9000 LUN, IBM i works with its entire disk capacity on IBM FlashSystem V9000.
5. Disconnect the previous storage system from IBM i, or remove the internal disk.
For information about ASP balancing, see the IBM i web page:
For information about copying load source, see IBM i and IBM System Storage: A Guide to Implementing External Disks on IBM i, SG24-7120.
Migration with save and restore
Migration by saving the IBM i system to tape and restoring it from tape to an LPAR with disk capacity on IBM FlashSystem V9000 can be used in any scenario. This migration is straightforward and does not require any additional resources; however, it requires a relatively long downtime.
Migration from one to another type of connection
You can migrate IBM i from any type of IBM FlashSystem V9000 connection to any other type of connection by disconnecting the LUNs and reconnecting them the new way. IBM i must be powered down during these actions. Table 7-3 shows all possible combinations of connections for this type of migration.
Table 7-3 Supported migration with disconnecting and reconnecting LUNs
Migration supported    To Native    To VIOS NPIV    To VIOS VSCSI
From Native            Yes          Yes             Yes
From VIOS NPIV         Yes          Yes             Yes
From VIOS VSCSI        Yes          Yes             Yes
Use the following steps to perform this type of migration:
1. Power down IBM i.
2. Disconnect the IBM FlashSystem V9000 LUNs from the IBM i partition.
3. Connect the IBM FlashSystem V9000 LUNs to the IBM i partition in the new way.
4. IPL IBM i.
7.7 Windows: Specific information
This section describes specific information about the connection of Windows-based hosts to the IBM FlashSystem V9000 environment.
7.7.1 Configuring Windows Server 2008 and 2012 hosts
To attach IBM FlashSystem V9000 to a host that is running Windows Server 2008, Windows Server 2008 R2, or Windows Server 2012, you must install the IBM SDDDSM multipath driver so that the Windows server can handle the volumes that are presented by IBM FlashSystem V9000.
 
Note: With Windows Server 2012, you can use the native Microsoft device drivers, but the strong suggestion is to install the IBM SDDDSM drivers.
Before you attach IBM FlashSystem V9000 to your host, make sure that all of the following requirements are fulfilled:
Check all prerequisites that are provided in section 2.0 of the SDDDSM readme file.
Check the LUN limitations for your host system. Ensure that there are enough FC adapters installed in the server to handle the total number of LUNs that you want to attach.
7.7.2 Configuring Windows
To configure the Windows hosts, complete the following steps:
1. Make sure that the current OS service pack and fixes are applied to your Windows server system.
2. Use the current supported firmware and driver levels on your host system.
3. Install the HBA or HBAs on the Windows server, as described in 7.7.4, “Installing and configuring the host adapter” on page 282.
4. Connect the Windows Server FC host adapters to the switches.
5. Configure the switches (zoning).
6. Install the FC host adapter driver, as described in 7.7.3, “Hardware lists, device driver, HBAs, and firmware levels” on page 282.
7. Configure the HBA for hosts that are running Windows, as described in 7.7.4, “Installing and configuring the host adapter” on page 282.
8. Check the HBA driver readme file for the required Windows registry settings, as described in 7.7.3, “Hardware lists, device driver, HBAs, and firmware levels” on page 282.
9. Check the disk timeout on Windows Server, as described in 7.7.5, “Changing the disk timeout on Windows Server” on page 282.
10. Install and configure SDDDSM.
11. Restart the Windows Server host system.
12. Configure the host, volumes, and host mapping in IBM FlashSystem V9000.
13. Use Rescan disk in Computer Management of the Windows Server to discover the volumes that were created on IBM FlashSystem V9000.
7.7.3 Hardware lists, device driver, HBAs, and firmware levels
For more information about the supported hardware, device driver, and firmware, see the SSIC web page:
There, you can find the hardware list for supported HBAs and the driver levels for Windows. Check the supported firmware and driver level for your HBA and follow the manufacturer’s instructions to upgrade the firmware and driver levels for each type of HBA. The driver readme files from most manufacturers list the instructions for the Windows registry parameters that must be set for the HBA driver.
7.7.4 Installing and configuring the host adapter
Install the host adapters in your system. See the manufacturer’s instructions for the installation and configuration of the HBAs.
Also, check the documentation that is provided for the server system for the installation guidelines of FC HBAs regarding the installation in certain PCI(e) slots, and so on.
The detailed configuration settings that you must make for the various vendor FC HBAs are available at the IBM FlashSystem V9000 web page of IBM Knowledge Center. Search for Configuring and then select Host attachment → Fibre Channel host attachments → Hosts running the Microsoft Windows Server operating system.
7.7.5 Changing the disk timeout on Windows Server
This section describes how to change the disk I/O timeout value on Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012 systems.
On your Windows Server hosts, complete the following steps to change the disk I/O timeout value to 60 in the Windows registry:
1. In Windows, click Start, and then select Run.
2. In the dialog text box, enter regedit and press Enter.
3. In the registry browsing tool, locate the following key:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk\TimeOutValue
4. Confirm that the value for the key is 60 (decimal value) and, if necessary, change the value to 60, as shown in Figure 7-2 on page 283.
Figure 7-2 Regedit
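As an alternative to editing the value interactively in regedit, the same setting can be applied and verified from an elevated command prompt. This is a sketch that uses the standard reg.exe utility:
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue /t REG_DWORD /d 60 /f
reg query "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue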
7.7.6 Installing the SDDDSM multipath driver on Windows
This section describes how to install the SDDDSM driver on a Windows Server 2008 R2 host and Windows Server 2012.
Windows Server 2012 (R2), Windows Server 2008 (R2), and MPIO
Microsoft Multipath I/O (MPIO) is a generic multipath driver that is provided by Microsoft, which does not form a complete solution. It works with device-specific modules (DSMs), which usually are provided by the vendor of the storage subsystem. This design supports the parallel operation of multiple vendors’ storage systems on the same host without interfering with each other, because the MPIO instance interacts only with that storage system for which the DSM is provided.
MPIO is not installed with the Windows operating system, by default. Instead, storage vendors must pack the MPIO drivers with their own DSMs. IBM SDDDSM is the IBM Multipath I/O solution that is based on Microsoft MPIO technology. It is a device-specific module that is designed specifically to support IBM storage devices on Windows Server 2008 (R2), and Windows 2012 (R2) servers.
The intention of MPIO is to achieve better integration of multipath storage with the operating system. It also supports the use of multipathing in the SAN infrastructure during the boot process for SAN boot hosts.
SDDDSM for IBM FlashSystem V9000
SDDDSM installation is a package for IBM FlashSystem V9000 device for the Windows Server 2008 (R2), and Windows Server 2012 (R2) operating systems. Together with MPIO, SDDDSM is designed to support the multipath configuration environments in IBM FlashSystem V9000. SDDDSM is in a host system along with the native disk device driver and provides the following functions:
Enhanced data availability
Dynamic I/O load-balancing across multiple paths
Automatic path failover protection
Enabled concurrent firmware upgrade for the storage system
Path-selection policies for the host system
Table 7-4 lists the SDDDSM driver levels that are supported at the time of this writing.
Table 7-4 Currently supported SDDDSM driver levels
Windows operating system                                   SDD level
Windows Server 2012 R2 (x64)                               2.4.7.1
Windows Server 2012 (x64)                                  2.4.7.1
Windows Server 2008 R2 (x64)                               2.4.7.1
Windows Server 2008 (32-bit)/Windows Server 2008 (x64)     2.4.7.1
For more information about the levels that are available, see this web page:
 
Note: At the time of writing, IBM FlashSystem V9000 is not explicitly listed in that SDDDSM support matrix. IBM FlashSystem V9000 follows the SAN Volume Controller levels in this case, and is therefore supported with SDDDSM.
To download SDDDSM, see this web page:
After you download the appropriate archive (.zip file), extract it to your local hard disk and start setup.exe to install SDDDSM. A command prompt window opens (Figure 7-3). Confirm the installation by entering Y.
Figure 7-3 SDDDSM installation
After the setup completes, enter Y again to confirm the reboot request (Figure 7-4).
Figure 7-4 Restart system after installation
After the restart, the SDDDSM installation is complete. You can verify the installation completion in Device Manager because the SDDDSM device appears (Figure 7-5) and the SDDDSM tools are installed (Figure 7-6 on page 286).
Figure 7-5 SDDDSM installation
The SDDDSM tools are installed, as shown in Figure 7-6.
Figure 7-6 SDDDSM installation
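To confirm from the command line which SDDDSM level was installed, you can also run the datapath query version command from the SDDDSM command prompt (the exact output format depends on the SDDDSM level):
C:\Program Files\IBM\SDDDSM>datapath query version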
7.7.7 Attaching IBM FlashSystem V9000 volumes to Windows Server 2008 R2 and Windows Server 2012 R2
Create the volumes on IBM FlashSystem V9000 and map them to the Windows Server 2008 R2 or Windows Server 2012 R2 host.
This example maps three IBM FlashSystem V9000 disks to the Windows Server 2008 R2 host that is named Diomede, as shown in Example 7-16.
Example 7-16 SVC host mapping to host Diomede
IBM_2145:ITSO_V9000:admin>lshostvdiskmap Diomede
id name SCSI_id vdisk_id vdisk_name       wwpn            vdisk_UID
0 Diomede 0        20     Diomede_0001 210000E08B0541BC 6005076801A180E9080000000000002B
0 Diomede 1        21      Diomede_0002 210000E08B0541BC 6005076801A180E9080000000000002C
0 Diomede 2        22      Diomede_0003 210000E08B0541BC 6005076801A180E9080000000000002D
Complete the following steps to use the devices on your Windows Server 2008 R2 host:
1. Click Start → Run.
2. Run the diskmgmt.msc command, and then click OK. The Disk Management window opens.
3. Select Action → Rescan Disks, as shown in Figure 7-7 on page 287.
Figure 7-7 Windows Server 2008 R2: Rescan disks
IBM FlashSystem V9000 disks now appear in the Disk Management window (Figure 7-8).
Figure 7-8 Windows Server 2008 R2 Disk Management window
After you assign IBM FlashSystem V9000 disks, they are also available in Device Manager. The three assigned drives are represented by SDDDSM/MPIO as IBM-2145 Multipath disk devices in the Device Manager (Figure 7-9).
Figure 7-9 Windows Server 2008 R2 Device Manager
4. To check that the disks are available, select Start → All Programs → Subsystem Device Driver DSM, and then click Subsystem Device Driver DSM (Figure 7-10). The SDDDSM Command Line Utility is displayed.
Figure 7-10 Windows Server 2008 R2 Subsystem Device Driver DSM utility
5. Run the datapath query device command and press Enter. This command displays all disks and available paths, including their states (Example 7-17).
Example 7-17 Windows Server 2008 R2 SDDDSM command-line utility
Microsoft Windows [Version 6.0.6001]
Copyright (c) 2006 Microsoft Corporation. All rights reserved.
 
C:\Program Files\IBM\SDDDSM>datapath query device
 
Total Devices : 3
 
DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002B
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0
1 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 1429 0
2 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 1456 0
3 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0
 
DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002C
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 1520 0
1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 0 0
2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 1517 0
 
DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002D
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 27 0
1 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 1396 0
2 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 1459 0
3 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0
 
C:\Program Files\IBM\SDDDSM>
 
 
SAN zoning: When the SAN zoning guidance is followed, with one volume and a host with two HBAs, the expected result is (number of volumes) x (number of paths per building block per HBA) x (number of HBAs) = 1 x 2 x 2 = 4 paths.
 
6. Right-click the disk in Disk Management and then select Online to place the disk online (Figure 7-11).
Figure 7-11 Windows Server 2008 R2: Place disk online
7. Repeat step 6 for all of your attached IBM FlashSystem V9000 disks.
8. Right-click one disk again and select Initialize Disk (Figure 7-12 on page 290).
Figure 7-12 Windows Server 2008 R2: Initialize Disk
9. Mark all of the disks that you want to initialize and then click OK, (Figure 7-13).
Figure 7-13 Windows Server 2008 R2: Initialize Disk
10. Right-click the unallocated disk space and then select New Simple Volume (Figure 7-14).
Figure 7-14 Windows Server 2008 R2: New Simple Volume
7.7.8 Extending a volume
Using IBM FlashSystem V9000 with Windows Server 2008 and later gives you the ability to extend volumes while they are in use.
You can expand a volume in the IBM FlashSystem V9000 cluster even if it is mapped to a host, because Windows Server 2008 and later can handle an expanded volume even while applications are running on the host.
A volume that is defined in a FlashCopy, Metro Mirror, or Global Mirror mapping on IBM FlashSystem V9000 cannot be expanded. Therefore, the FlashCopy, Metro Mirror, or Global Mirror mapping on that volume must be deleted before the volume can be expanded.
If the volume is part of a Microsoft Cluster (MSCS), Microsoft advises that you shut down all but one MSCS cluster node. Also, you must stop the applications in the resource that access the volume to be expanded before the volume is expanded. Applications that are running in other resources can continue to run. After the volume is expanded, start the applications and the resource, and then restart the other nodes in the MSCS.
To expand a volume in use on a Windows Server host, you use the Windows DiskPart utility.
To start DiskPart, select Start → Run, and enter DiskPart.
DiskPart was developed by Microsoft to ease the administration of storage on Windows hosts. It is a command-line interface (CLI) that you can use to manage disks, partitions, and volumes by using scripts or direct input on the command line. You can list disks and volumes, select them, and then retrieve more detailed information, create partitions, extend volumes, and so on. For more information about DiskPart, see this website:
For more information about expanding partitions of a cluster-shared disk, see this web page:
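As a brief sketch of the DiskPart flow for extending a basic volume after the underlying IBM FlashSystem V9000 volume is expanded (the volume number 3 is hypothetical; confirm the correct number with list volume first):
DISKPART> list volume
DISKPART> select volume 3
DISKPART> extend
The extend command without parameters grows the selected volume into the contiguous unallocated space that follows it.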
Dynamic disks can be expanded by expanding the underlying IBM FlashSystem V9000 volume. The new space appears as unallocated space at the end of the disk.
In this case, you do not need to use the DiskPart tool. Instead, you can use Windows Disk Management functions to allocate the new space. Expansion works irrespective of the volume type (simple, spanned, mirrored, and so on) on the disk. Dynamic disks can be expanded without stopping I/O, in most cases.
 
Important: Never try to upgrade your basic disk to dynamic disk or vice versa without backing up your data. This operation is disruptive for the data because of a change in the position of the logical block address (LBA) on the disks.
7.7.9 Removing a disk from Windows
To remove a disk from Windows, when the disk is an IBM FlashSystem V9000 volume, follow the standard Windows procedure to ensure that no data that you want to preserve exists on the disk, that no applications are using the disk, and that no I/O is going to the disk. After completing this procedure, remove the host mapping on IBM FlashSystem V9000. Ensure that you are removing the correct volume. To confirm, use Subsystem Device Driver (SDD) to locate the serial number of the disk. On IBM FlashSystem V9000, run the lshostvdiskmap command to find the volume’s name and number, and check that the SDD serial number on the host matches the UID of the volume on IBM FlashSystem V9000.
After the host mapping is removed, perform a rescan for the disk. Disk Management on the server then removes the disk, and the vpath goes to CLOSE status on the server. Verify these actions by running the datapath query device SDD command; however, the closed vpath is removed only after a restart of the server.
The following examples show how to remove an IBM FlashSystem V9000 volume from a Windows server. Although the examples show a Windows Server 2008 operating system, the steps also apply to Windows Server 2008 R2 and Windows Server 2012.
Before you remove the disk (Disk1), find the volume information for the disk device to be removed by running datapath query device from the SDDDSM CLI, as shown in Example 7-18.
Example 7-18 Removing IBM FlashSystem V9000 disk from the Windows server
C:\Program Files\IBM\SDDDSM>datapath query device
 
Total Devices : 3
 
DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000000F
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 1471 0
1 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0
2 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 1324 0
 
DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 20 0
1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 94 0
2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 55 0
3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0
 
DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000011
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 100 0
1 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 0 0
2 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 69 0
Knowing the Serial/UID of the volume and that the host name is Senegal, identify the host mapping to remove by running the lshostvdiskmap command on IBM FlashSystem V9000.
Then, remove the host mapping by running the rmvdiskhostmap command, as shown in Example 7-19.
Example 7-19 Finding and removing the host mapping
IBM_2145:ITSO_V9000:admin>lshostvdiskmap Senegal
id name SCSI_id vdisk_id vdisk_name    wwpn              vdisk_UID
1 Senegal 0     7    Senegal_bas0001 210000E08B89B9C0 6005076801A180E9080000000000000F
1 Senegal 1     8    Senegal_bas0002 210000E08B89B9C0 6005076801A180E90800000000000010
1 Senegal 2     9    Senegal_bas0003 210000E08B89B9C0 6005076801A180E90800000000000011
 
IBM_2145:ITSO_V9000:admin>rmvdiskhostmap -host Senegal Senegal_bas0001
 
IBM_2145:ITSO_V9000:admin>lshostvdiskmap Senegal
id name SCSI_id vdisk_id vdisk_name    wwpn              vdisk_UID
1 Senegal 1     8    Senegal_bas0002 210000E08B89B9C0 6005076801A180E90800000000000010
1 Senegal 2     9    Senegal_bas0003 210000E08B89B9C0 6005076801A180E90800000000000011
The output shows that the volume mapping is removed. On the server, perform a disk rescan in Disk Management, and verify that the correct disk (Disk1) was removed (Figure 7-15).
Figure 7-15 Disk Management: Disk is removed
SDDDSM now shows that the status of the paths to Disk1 changed to CLOSE because the disk is not available, as shown in Example 7-20.
Example 7-20 SDD: Closed path
C:\Program Files\IBM\SDDDSM>datapath query device
 
Total Devices : 3
 
DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000000F
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk1 Part0 CLOSE NORMAL 1471 0
1 Scsi Port2 Bus0/Disk1 Part0 CLOSE NORMAL 0 0
2 Scsi Port3 Bus0/Disk1 Part0 CLOSE NORMAL 0 0
3 Scsi Port3 Bus0/Disk1 Part0 CLOSE NORMAL 1324 0
 
DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 20 0
1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 124 0
2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 72 0
3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0
 
DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000011
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 134 0
1 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 0 0
2 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 82 0
The disk (Disk1) is now removed from the server. However, to remove the SDDDSM information about the disk, you must restart the server at a convenient time.
7.7.10 Using IBM FlashSystem V9000 CLI from a Windows host
To run CLI commands, you must install and prepare the SSH client system on the Windows host system.
You can install the PuTTY SSH client software on a Windows host by using the PuTTY installation program. You can download PuTTY from this web page:
Cygwin software features an option to install an OpenSSH client. You can download Cygwin from this website:
7.7.11 Microsoft 2012 and Offloaded Data Transfer (ODX)
Microsoft ODX is a copy offload capability that is embedded in the operating system and passes copy command sets through an API, reducing CPU utilization during copy operations. Rather than buffering the read and write operations on the host, Microsoft ODX initiates the copy operation with an offload read and receives a token that represents the data.
Next, the API initiates the offload write command that requests the movement of the data from the source to destination storage volume. The copy manager of the storage device then performs the data movement according to the token.
Client/server data movement is greatly reduced, freeing CPU cycles, because the actual data movement is performed on the back-end storage device and does not traverse the storage area network, which further reduces traffic. Use cases include large data migrations and tiered storage support; ODX can also reduce overall hardware cost and deployment effort.
ODX and IBM FlashSystem V9000 offer an excellent combination of server and storage integration to reduce CPU usage, and to take advantage of the speed of IBM FlashSystem V9000 all-flash storage arrays and IBM FlashCore technology.
Starting with IBM FlashSystem V9000 software version 7.5, ODX is supported with Microsoft 2012, including the following platforms:
Clients: Windows
Servers: Windows Server 2012
The following functions are included:
The ODX feature is embedded in the copy engine of Windows, so there is no additional software to install.
Both the source and destination storage device LUNs must be ODX-compatible.
If an ODX copy operation fails, the traditional Windows copy is used as a fallback.
Drag-and-drop and Copy-and-Paste actions can be used to initiate the ODX copy.
 
Note: By default, the ODX capability of IBM FlashSystem V9000 is disabled. To enable the ODX function from the CLI, issue the chsystem -odx on command on the Config Node.
For more details about offloaded data transfer, see the following website:
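For reference, this is a hedged sketch of enabling ODX from the IBM FlashSystem V9000 CLI and checking the setting afterward; it assumes that the odx field is reported in the lssystem output at your software level:
IBM_2145:ITSO_V9000:admin>chsystem -odx on
IBM_2145:ITSO_V9000:admin>lssystem
In the lssystem output, the odx field should then show on.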
7.7.12 Microsoft Volume Shadow Copy (VSS)
IBM FlashSystem V9000 supports the Microsoft Volume Shadow Copy Service (VSS). The Microsoft Volume Shadow Copy Service can provide a point-in-time (shadow) copy of a Windows host volume while the volume is mounted and the files are in use.
This section describes how to install support for the Microsoft Volume Shadow Copy Service. The following operating system versions are supported:
Windows Server 2008 with SP2 (x86 and x86_64)
Windows Server 2008 R2 with SP1
Windows Server 2012
The following components are used to support the service:
IBM FlashSystem V9000
IBM VSS Hardware Provider, which is known as the IBM System Storage Support for Microsoft VSS
Microsoft Volume Shadow Copy Service
IBM VSS Hardware Provider is installed on the Windows host.
To provide the point-in-time shadow copy, the components follow this process:
1. A backup application on the Windows host starts a snapshot backup.
2. The Volume Shadow Copy Service notifies IBM VSS that a copy is needed.
3. IBM FlashSystem V9000 prepares the volume for a snapshot.
4. The Volume Shadow Copy Service quiesces the software applications that are writing data on the host and flushes file system buffers to prepare for a copy.
5. IBM FlashSystem V9000 creates the shadow copy by using the FlashCopy Service.
6. The VSS notifies the writing applications that I/O operations can resume and notifies the backup application that the backup was successful.
The VSS maintains a free pool of volumes for use as FlashCopy targets and a reserved pool of volumes. These pools are implemented as virtual host systems on IBM FlashSystem V9000.
You can download the installation archive from IBM Support and extract it to a directory on the Windows server where you want to install IBM VSS:
7.8 Linux: Specific information
This section describes specific information that relates to the connection of Intel-based hosts running Linux to the IBM FlashSystem V9000 environment.
7.8.1 Configuring the Linux host
Complete the following steps to configure the Linux host:
1. Use the current firmware levels on your host system.
2. Install the HBA or HBAs on the Linux server, as described in 7.7.4, “Installing and configuring the host adapter” on page 282.
3. Install the supported HBA driver or firmware and upgrade the kernel, if required.
4. Connect the Linux server FC host adapters to the switches.
5. Configure the switches (zoning), if needed.
6. Configure DM Multipath for Linux, as described in 7.8.3, “Multipathing in Linux”.
7. Configure the host, volumes, and host mapping in the IBM FlashSystem V9000.
8. Rescan for LUNs on the Linux server to discover the volumes that were created on IBM FlashSystem V9000.
7.8.2 Supported Linux distributions
IBM FlashSystem V9000 supports hosts that run the following Linux distributions:
Red Hat Enterprise Linux (RHEL)
SUSE Linux Enterprise Server
Ensure that your hosts running the Linux operating system use the correct HBAs and host software.
The IBM SSIC web page has current interoperability information for HBA and platform levels:
Check the supported firmware and driver level for your HBA, and follow the manufacturer’s instructions to upgrade the firmware and driver levels for each type of HBA.
7.8.3 Multipathing in Linux
You must configure and enable multipathing software on all hosts that are attached to IBM FlashSystem V9000. The following software provides multipathing support for hosts that run the Linux operating system:
SUSE Linux Enterprise Server version 9 and Red Hat Enterprise Linux version 4 support both SDD and native multipathing support that is provided by the operating system.
SUSE Linux Enterprise Server versions 10 and later and Red Hat Enterprise Linux versions 5 and later support only native multipathing that is provided by the operating system.
Device mapper multipathing for Red Hat Enterprise Linux 7 (RHEL7)
Device mapper multipathing (DM Multipath) allows you to configure multiple I/O paths between server nodes and storage arrays into a single device. These I/O paths are physical SAN connections that can include separate cables, switches, and controllers. Multipathing aggregates the I/O paths, creating a new device that consists of the aggregated paths.
Multipath settings for specific Linux distributions
Different Linux distributions require different multipath configurations. Figure 7-16 on page 298 and Figure 7-17 on page 299 show the device configurations that are required for /etc/multipath.conf for the Linux versions.
Red Hat Linux versions 5.x, 6.0, and 6.1
 
vendor "IBM"
product "2145"
path_grouping_policy "group_by_prio"
path_selector "round-robin 0"
prio_callout "/sbin/mpath_prio_alua /dev/%n" #Used by Red Hat 5.x
prio "alua"
path_checker "tur"
failback "immediate"
no_path_retry 5
rr_weight uniform
rr_min_io 1000
dev_loss_tmo 120
 
Red Hat Linux versions 6.2 and higher and 7.x
 
vendor "IBM"
product "2145"
path_grouping_policy "group_by_prio"
path_selector "round-robin 0"
# path_selector "service-time 0" # Used by Red Hat 7.x
prio "alua"
path_checker "tur"
failback "immediate"
no_path_retry 5
rr_weight uniform
rr_min_io_rq "1"
dev_loss_tmo 120
Figure 7-16 Device configurations required for /etc/multipath.conf (Part 1 of 2)
SUSE Linux Versions 10.x and 11.0 and 11SP1
 
vendor "IBM"
product "2145"
path_grouping_policy "group_by_prio"
path_selector "round-robin 0"
prio "alua"
path_checker "tur"
failback "immediate"
no_path_retry 5
rr_weight uniform
rr_min_io 1000
dev_loss_tmo 120
 
SUSE Linux Versions 11SP2 and higher
 
vendor "IBM"
product "2145"
path_grouping_policy "group_by_prio"
path_selector "round-robin 0" # Used by SLES 11 SP2
# path_selector "service-time 0" # Used by SLES 11 SP3+
prio "alua"
path_checker "tur"
failback "immediate"
no_path_retry 5
rr_weight uniform
rr_min_io_rq "1"
dev_loss_tmo 120
Figure 7-17 Device configurations required for /etc/multipath.conf (Part 2 of 2)
Setting up DM Multipath
Before setting up DM Multipath on your system, ensure that your system is updated and includes the device-mapper-multipath package.
You set up multipath with the mpathconf utility, which creates the /etc/multipath.conf multipath configuration file. Consider this information:
If the /etc/multipath.conf file already exists, the mpathconf utility updates it.
If the /etc/multipath.conf file does not exist, the mpathconf utility creates it by using a default built-in template. This template does not include settings for IBM FlashSystem V9000, which must then be added to the configuration.
 
Note: The examples in this section are based on the Red Hat Enterprise Linux 7 DM Multipath document:
Configure and enable DM Multipath
To configure and enable DM Multipath on a Red Hat Enterprise Linux 7 (RHEL7) host, complete the following steps:
1. Enable DM Multipath by running the following command:
mpathconf --enable
In this example, no /etc/multipath.conf file exists yet, so the mpathconf command creates it when the DM Multipath daemon is enabled (Example 7-21).
Example 7-21 Configure DM Multipath
[root@rhel7 ~]# mpathconf --enable
[root@rhel7 ~]#
2. Open the multipath.conf file and insert the appropriate definitions for the operating system as specified in “Multipath settings for specific Linux distributions” on page 297.
The multipath.conf file is in the /etc directory. Example 7-22 shows editing the file by using vi.
Example 7-22 Editing the multipath.conf file
[root@rhel7 etc]# vi multipath.conf
3. Add the following entry to the multipath.conf file:
device {
vendor "IBM"
product "2145"
path_grouping_policy "group_by_prio"
path_selector "round-robin 0"
# path_selector "service-time 0" # Used by Red Hat 7.x
prio "alua"
path_checker "tur"
failback "immediate"
no_path_retry 5
rr_weight uniform
rr_min_io_rq "1"
dev_loss_tmo 120
}
4. Start the DM Multipath daemon by running the following command:
service multipathd start
When the DM Multipath daemon is started, it loads the newly edited /etc/multipath.conf file (Example 7-23).
Example 7-23 Starting the DM Multipath daemon
[root@rhel7 ~]# service multipathd start
[root@rhel7 ~]#
5. Check the DM Multipath configuration by running the following commands:
multipathd show config      (This command is shown in Example 7-24 on page 301.)
multipathd -k
multipathd> show config
Example 7-24 Show the current DM Multipath configuration (output shortened for clarity)
[root@rhel7 ~]# multipathd show config
device {
vendor "IBM"
product "2145"
path_grouping_policy "group_by_prio"
path_selector "round-robin 0"
path_checker "tur"
features "1 queue_if_no_path"
hardware_handler "0"
prio "alua"
failback immediate
rr_weight "uniform"
no_path_retry 5
rr_min_io_rq 1
dev_loss_tmo 120
}
[root@rhel7 ~]
6. Run the multipath -dl command to see the MPIO configuration. You see two groups with two paths each. All paths must have the state [active][ready], and one group shows [enabled].
7. Run the fdisk command to create a partition on IBM FlashSystem V9000. Use this procedure to improve performance by aligning a partition in the Linux operating system.
The Linux operating system defaults to a 63-sector offset. To align a partition in Linux using fdisk, complete the following steps:
a. At the command prompt, enter # fdisk /dev/mapper/<device>.
b. To change the listing of the partition size to sectors, enter u.
c. To create a partition, enter n.
d. To create a primary partition, enter p.
e. To specify the partition number, enter 1.
f. To set the base sector value, enter 128.
g. Press Enter to use the default last sector value.
h. To write the changes to the partition table, enter w.
 
Note: In step a, <device> is the IBM FlashSystem V9000 volume.
The newly created partition now has an offset of 64 KB and works optimally with an aligned application.
8. If you are installing the Linux operating system on the storage system, create the partition scheme before the installation process. For most Linux distributions, this process requires starting at the text-based installer and switching consoles (press Alt+F2) to get the command prompt before you continue.
9. Create a file system by running the mkfs command (Example 7-25).
Example 7-25 The mkfs command
[root@rhel7 ~]# mkfs -t ext3 /dev/dm-2
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
518144 inodes, 1036288 blocks
51814 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1061158912
32 block groups
32768 blocks per group, 32768 fragments per group
16192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736
 
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
 
This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@rhel7 ~]#
10. Create a mount point and mount the drive (Example 7-26).
Example 7-26 Mount point
[root@rhel7 ~]# mkdir /svcdisk_0
[root@rhel7 ~]# cd /svcdisk_0/
[root@rhel7 svcdisk_0]# mount -t ext3 /dev/dm-2 /svcdisk_0
[root@rhel7 svcdisk_0]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
73608360 1970000 67838912 3% /
/dev/hda1 101086 15082 80785 16% /boot
tmpfs 967984 0 967984 0% /dev/shm
/dev/dm-2 4080064 73696 3799112 2% /svcdisk_0
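To mount the new file system automatically at boot time, an entry can be added to /etc/fstab. The following line is only a sketch based on the preceding example; mpatha is a hypothetical multipath alias (check multipath -ll for the actual name), and using the persistent /dev/mapper name or a UUID is preferable to /dev/dm-2 because dm-N device numbers can change between restarts:
/dev/mapper/mpatha   /svcdisk_0   ext3   defaults   0 0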
See the following resources:
To configure Linux hosts with multipathing on IBM FlashSystem V9000, see the “Hosts that run the Linux operating system” topic in IBM Knowledge Center:
For more information about Red Hat Enterprise Linux 7 Multipath configurations see Red Hat Enterprise Linux 7 DM Multipath:
For more information of SUSE Linux Enterprise Server 11 Multipath configurations, see Managing Multipath I/O for Devices:
7.9 VMware: Configuration information
This section describes the requirements and other information for attaching VMware hosts and operating systems to IBM FlashSystem V9000.
For more details about the best practices for configuring, attaching, and operating IBM FlashSystem V9000 in a VMware environment, see IBM FlashSystem V9000 and VMware Best Practices Guide, REDP-5247.
7.9.1 Configuring VMware hosts
To configure the VMware hosts, complete the following steps:
1. Install the HBAs in your host system.
2. Connect the server FC host adapters to the switches.
3. Configure the switches (zoning), as described in 7.9.4, “VMware storage and zoning guidance” on page 304.
4. Install the VMware operating system (if not already installed) and check the HBA timeouts.
5. Configure the host, volumes, and host mapping in the IBM FlashSystem V9000, as described in 7.9.6, “Attaching VMware to volumes” on page 304.
7.9.2 Operating system versions and maintenance levels
For more information about VMware support, see the IBM SSIC web page:
At the time of this writing, the following versions are supported:
ESXi V6.x
ESXi V5.x
ESX / ESXi V4.x (no longer supported by VMware)
7.9.3 HBAs for hosts that are running VMware
Ensure that your hosts that are running on VMware operating systems use the correct HBAs and firmware levels. Install the host adapters in your system. See the manufacturer’s instructions for the installation and configuration of the HBAs.
For more information about supported HBAs for older ESX/ESXi versions, see this web page:
Mostly, the supported HBA device drivers are included in the ESXi server build. However, for various newer storage adapters, you might be required to load additional ESXi drivers. If you must load a custom driver for your adapter, see the following VMware web page:
After the HBAs are installed, load the default configuration of your FC HBAs. You must use the same model of HBA with the same firmware in one server. Configuring Emulex and QLogic HBAs to access the same target in one server is not supported.
If you are unfamiliar with the VMware environment and the advantages of storing virtual machines and application data on a SAN, it is useful to get an overview about VMware products before you continue.
VMware documentation is available at this web page:
7.9.4 VMware storage and zoning guidance
The VMware ESXi server can use a Virtual Machine File System (VMFS). VMFS is a file system that is optimized to run multiple virtual machines as one workload to minimize disk I/O. It also can handle concurrent access from multiple physical machines because it enforces the appropriate access controls. Therefore, multiple ESXi hosts can share the set of LUNs.
Theoretically, you can run all of your virtual machines on one LUN. However, for performance reasons in more complex scenarios, it can be better to load balance virtual machines over separate LUNs.
The use of fewer volumes has the following advantages:
More flexibility to create virtual machines without creating space on IBM FlashSystem V9000
More possibilities for taking VMware snapshots
Fewer volumes to manage
The use of more and smaller volumes has the following advantages:
Separate I/O characteristics of the guest operating systems
More flexibility (the multipathing policy and disk shares are set per volume)
Microsoft Cluster Service requires its own volume for each cluster disk resource
For more information about designing your VMware infrastructure, see these web pages:
7.9.5 Multipathing in ESXi
The VMware ESXi server performs native multipathing. You do not need to install another multipathing driver, such as SDDDSM.
 
Guidelines: ESXi server hosts that use shared storage for virtual machine failover or load balancing must be in the same zone. You can create only one VMFS data store per volume.
7.9.6 Attaching VMware to volumes
This section describes how to attach VMware to volumes.
First, make sure that the VMware host is logged in to the IBM FlashSystem V9000. These examples use the VMware ESXi server V6 and the host name of Nile.
Enter the following command to check the status of the host:
lshost <hostname>
Example 7-27 shows that host Nile is logged in to IBM FlashSystem V9000 with two HBAs.
Example 7-27 The lshost Nile
IBM_2145:ITSO_V9000:admin>lshost Nile
id 1
name Nile
port_count 2
type generic
mask 1111
iogrp_count 2
WWPN 210000E08B892BCD
node_logged_in_count 4
state active
WWPN 210000E08B89B8C0
node_logged_in_count 4
state active
 
Tips:
If you want to use features such as high availability (HA), the volumes that contain the VMDK files must be visible to every ESXi host that can run the virtual machine.
In IBM FlashSystem V9000, select Allow the virtual disks to be mapped even if they are already mapped to a host.
The volume should have the same SCSI ID on each ESXi host.
In some configurations, such as MSCS in-guest clustering, the virtual machines must share raw device mapping (RDM) disks for clustering purposes. In this case, a consistent SCSI ID across all ESXi hosts in the cluster is required.
For this configuration, one volume was created and mapped to the ESXi host (Example 7-28).
Example 7-28 Mapped volume to ESXi host Nile
IBM_2145:ITSO_V9000:admin>lshostvdiskmap Nile
id name  SCSI_id vdisk_id vdisk_name     wwpn             vdisk_UID
1   Nile    0      12      VMW_pool 210000E08B892BCD 60050768018301BF2800000000000010
ESXi does not automatically scan for SAN changes (except when rebooting the entire ESXi server). If you made any changes to your IBM FlashSystem V9000 or SAN configuration, complete the following steps (see Figure 7-18 on page 306 for an illustration):
1. Open your VMware vSphere Client.
2. Select the host.
3. In the Hardware window, choose Storage.
4. Click Rescan.
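If you prefer the ESXi command line (ESXi 5.x and later), a rescan can also be triggered with esxcli; this is a minimal sketch:
~ # esxcli storage core adapter rescan --all
~ # esxcli storage filesystem list
The first command rescans all HBAs for new devices; the second lists the VMFS data stores that the host currently sees.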
To configure a storage device to use it in VMware, complete the following steps:
1. Open your VMware vSphere Client.
2. Select the host for which you want to see the assigned volumes and click the Configuration tab.
3. In the Hardware window on the left side, click Storage (Figure 7-18 on page 306).
4. To create a storage datastore, select Add storage.
5. The Add Storage wizard opens. Select Create Disk/LUN, and then click Next.
6. Select the IBM FlashSystem V9000 volume that you want to use for the datastore, and then click Next.
7. Review the disk layout, and then click Next.
8. Enter a datastore name, and then click Next.
9. Enter the size of the new partition, and then click Next.
10. Review your selections, and then click Finish.
Now, the created VMFS data store is listed in the Storage window (Figure 7-18). You see the details for the highlighted datastore. Check whether all of the paths are available and that the Path Selection is set to Round Robin.
Figure 7-18 VMware storage configuration
If not all of the paths are available, check your SAN and storage configuration. After the problem is fixed, click Rescan All to perform a path rescan. The view is updated to the new configuration.
The preferred practice is to use the Round Robin Multipath Policy for IBM FlashSystem V9000. If you need to edit this policy, complete the following steps:
1. Highlight the datastore.
2. Click Properties.
3. Click Managed Paths.
4. Click Change.
5. Select Round Robin.
6. Click OK.
7. Click Close.
Now, your VMFS data store is created and you can start using it for your guest operating systems. Round Robin distributes the I/O load across all available paths. If you want to use a fixed path, the Fixed policy setting also is supported.
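The same Round Robin setting can be checked or applied from the ESXi command line with the native multipathing (NMP) commands. This is a sketch only; the naa identifier shown is the example volume UID used earlier in this chapter and must be replaced with your own device identifier:
~ # esxcli storage nmp device list
~ # esxcli storage nmp device set --device naa.60050768018301bf2800000000000010 --psp VMW_PSP_RR
The device list output shows the Path Selection Policy for each device; VMW_PSP_RR is the Round Robin policy.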
7.9.7 Volume naming in VMware
In the VMware vSphere Client, a device is identified either by its volume name, if one was specified during creation on the IBM FlashSystem V9000, or by its serial number, as shown in Figure 7-19.
Figure 7-19 V9000 device, volume name
 
Disk partition: The number of the disk partition (this value never changes). If the last number is not displayed, the name stands for the entire volume.
7.9.8 Extending a VMFS volume
VMFS volumes can be extended while virtual machines are running. First, you must extend the volume on IBM FlashSystem V9000, and then you can extend the VMFS volume.
 
Note: Before you perform the steps that are described here, back up your data.
Complete the following steps to extend a volume:
1. Expand the volume by running the expandvdisksize -size <size> -unit gb <VDisk_name> command (Example 7-29).
Example 7-29 Expanding a volume on the IBM FlashSystem V9000
IBM_2145:ITSO_V9000:admin>lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 60.0GB
...
IBM_2145:ITSO_V9000:admin>expandvdisksize -size 5 -unit gb VMW_pool
IBM_2145:ITSO_V9000:admin>lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 65.0GB
2. Open the Virtual Infrastructure Client.
3. Select the host.
4. Select Configuration.
5. Select Storage Adapters.
6. Click Rescan.
7. Make sure that the Scan for new Storage Devices option is selected, and then click OK. After the scan completes, the new capacity is displayed in the Details section.
8. Click Storage.
9. Right-click the VMFS volume and click Properties.
10. Click Add Extent.
11. Select the new free space, and then click Next.
12. Click Next.
13. Click Finish.
The VMFS volume is now extended and the new space is ready for use.
7.9.9 Removing a data store from an ESXi host
Before you remove a data store from an ESXi host, you must migrate or delete all of the virtual machines that are on this data store.
To remove the data store, complete the following steps:
1. Back up the data.
2. Open the Virtual Infrastructure Client.
3. Select the host.
4. Select Configuration.
5. Select Storage.
6. Right-click the datastore that you want to remove.
7. Click Unmount (this must be done on every ESXi host that has the datastore mounted).
8. Click Delete to remove.
9. Read the warning, and if you are sure that you want to remove the data store and delete all data on it, click Yes.
10. Select the Devices view, right-click the device that you want to detach, and then click Detach.
11. Remove the host mapping on IBM FlashSystem V9000, or delete the volume, as shown in Example 7-30.
Example 7-30 Host mapping: Delete the volume
IBM_2145:ITSO_V9000:admin>rmvdiskhostmap -host Nile VMW_pool
IBM_2145:ITSO_V9000:admin>rmvdisk VMW_pool
12. In the VI Client, select Storage Adapters.
13. Click Rescan.
14. Make sure that the Scan for new Storage Devices option is selected and click OK.
15. After the scan completes, the disk is removed from the view.
Your data store is now removed successfully from the system.
For more information about supported software and driver levels, see the SSIC web page:
7.10 Oracle (Sun) Solaris: Configuration information
At the time of writing, Oracle (Sun) Solaris hosts (SunOS) versions 8, 9, 10, 11, and 12 are supported by IBM FlashSystem V9000. However, SunOS 8 and SunOS 9 (Solaris 8 and Solaris 9) are in the sustaining support phase of their lifecycles.
7.10.1 MPxIO dynamic pathing
Solaris provides its own MPxIO multipath support in the operating system, so you do not have to install another device driver. Alternatively, Veritas DMP can be used.
Veritas Volume Manager with dynamic multipathing
Veritas Volume Manager (VM) with dynamic multipathing (DMP) automatically selects the next available I/O path for I/O requests without action from the administrator. VM with DMP is also informed when you repair or restore a connection, and when you add or remove devices after the system is fully booted (if the operating system recognizes the devices correctly). The JNI HBA drivers support the host mapping of new volumes without rebooting the Solaris host.
The support characteristics are as follows:
Veritas VM with DMP supports load balancing across multiple paths with IBM FlashSystem V9000.
Veritas VM with DMP does not support preferred pathing with IBM FlashSystem V9000.
OS cluster support
Solaris with Symantec Cluster V4.1, Symantec SFHA, and SFRAC V4.1/5.0, and Solaris with Sun Cluster V3.1/3.2 are supported at the time of this writing.
SAN boot support
Boot from SAN is supported under Solaris 10 and later running Symantec Volume Manager or MPxIO.
Aligning the partition for Solaris
For ZFS, no alignment is needed if a disk is added directly to a ZFS pool by using the zpool utility. The utility creates the partition starting at sector 256, which automatically creates a properly aligned partition.
7.11 Hewlett-Packard UNIX: Configuration information
For information about what is supported with Hewlett-Packard UNIX (HP-UX), see the IBM SSIC web page:
7.11.1 Operating system versions and maintenance levels
At the time of this writing, the following HP-UX operating systems (64-bit only) are supported with IBM FlashSystem V9000:
HP-UX V11iv1 (11.11)
HP-UX V11iv2 (11.23)
HP-UX V11iv3 (11.31)
7.11.2 Supported multipath solutions
For HP-UX version 11.31, HP does not require a separate multipath driver. As part of this version, a native multipathing solution is supported through the mass storage stack feature.
For releases of HP-UX before 11.31, multipathing support is available by using either of the following software products:
IBM System Storage Multipath Subsystem Device Driver (SDD)
HP PVLinks
The IBM FlashSystem V9000 documentation provides more information about multipathing:
7.11.3 Clustered-system support
HP-UX version 11.31 supports ServiceGuard 11.18, which provides a locking mechanism called cluster lock LUN. On IBM FlashSystem V9000, specify the block device name of a volume for the CLUSTER_LOCK_LUN variable in the configuration ASCII file. The lock LUN must point to the same volume on all system nodes. This consistency can be ensured by determining the worldwide ID (WWID) of the volume. The cluster lock LUN cannot be used for multiple-system locking and cannot be used as a member of a Logical Volume Manager (LVM) volume group or VxVM disk group.
7.11.4 Support for HP-UX with greater than eight LUNs
HP-UX does not recognize more than eight LUNs per port when the generic SCSI behavior is used. If you want to use more than eight LUNs per SCSI target, you must set the type attribute to hpux when you create the host object. You can use the IBM FlashSystem V9000 command-line interface or the management GUI to set this attribute.
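For example, this is a hedged sketch of creating an HP-UX host object with the hpux type from the IBM FlashSystem V9000 CLI, and of changing the type of an existing host; the host name and WWPN are hypothetical:
IBM_2145:ITSO_V9000:admin>mkhost -name hpux01 -fcwwpn 10000000C9123456 -type hpux
IBM_2145:ITSO_V9000:admin>chhost -type hpux hpux01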
7.12 Using NPIV functionality
With IBM FlashSystem V9000 version 7.7.1 and later, N_Port ID Virtualization (NPIV) is available. Configuring IBM FlashSystem V9000 target ports for NPIV has the advantage that host connectivity is not reduced during firmware updates and maintenance actions that make a controller unavailable. When NPIV is enabled, the target ports are virtualized, and all of the worldwide names (WWNs) that represent the ports remain available during controller outages.
7.12.1 How NPIV works
This section demonstrates how NPIV works, how to migrate from traditional physical port WWNs, and how to enable NPIV, starting with “Check the existing non-NPIV environment” on page 312. It also demonstrates the failover scenario where one controller is removed from the system while all WWNs remain available to the attached host.
The scenario for the demonstration is a Microsoft Windows 2008 Server, which connects to IBM FlashSystem V9000 through a dual port HBA connected to two Brocade SAN switches. The SAN switches have zoning enabled for traditional non-NPIV access.
Check the existing non-NPIV environment
To check the existing environment, perform the following tasks:
1. The Microsoft Windows 2008 Server has access to a single 100 GB volume from IBM FlashSystem V9000. View this information in the Disk Management window of the server OS (Figure 7-20).
Figure 7-20 Server has access to a single IBM FlashSystem V9000 volume
2. To check the current target port mode, go to the IBM FlashSystem V9000 GUI and select Settings → System → I/O Groups. Target port mode is disabled (Figure 7-21), which means that the Fibre Channel ports on IBM FlashSystem V9000 are physical ports, each with a WWN connecting to the SAN switches.
Figure 7-21 Target port mode is disabled
3. By logging on to the SAN switches, review the zoning configuration either by using CLI or GUI (as in this example). The physical WWNs can be reviewed while they are connected to the SAN switch ports. IBM FlashSystem V9000 control enclosures connect with two connections to SAN switch ports 10 and 11, as shown in Figure 7-22 on page 313. A similar configuration exists on the second switch fabric, which is not shown here.
Figure 7-22 SAN switch ports 10 and 11 shows which WWNs are connected
In the zoning configuration, two WWNs are configured for the alias V9000. Both WWNs are shown in blue in the right pane, indicating that they are online. The WWNs are grayed out in the left pane because they are active in the selected alias V9000.
Change to Target Port Mode Transitional
Enabling NPIV is done in two steps, so that the SAN zoning can be modified with the new virtual WWNs before the physical port WWNs stop serving host I/O when NPIV is fully enabled.
To enable Target Port Mode Transitional, perform the following steps (a CLI alternative that uses the chiogrp command is sketched at the end of this section):
1. From the IBM FlashSystem V9000 GUI, select Settings → System → I/O Groups. Select the I/O Group to be changed and select Actions → Change Target Port Mode as shown in Figure 7-23.
Figure 7-23 Change Target Port Mode
2. The Change Target Port Mode wizard offers only a change from Disabled to Transitional (Figure 7-24). In Transitional mode, both physical and virtual ports are in use. Click Continue to enable Transitional mode.
Figure 7-24 Change to Transitional
Viewing the SAN switches after the change to Transitional mode now shows additional WWNs (Figure 7-25). These are the virtualized NPIV ports that must be added to the zoning configuration so that hosts keep access to IBM FlashSystem V9000 storage when the configuration is finalized by the change to Target Port Mode Enabled. A CLI alternative for changing the target port mode is shown after this procedure.
Figure 7-25 Additional virtual ports are available
3. Prepare the zoning configuration for NPIV use. Figure 7-26 shows that the new virtual NPIV ports were added to the alias V9000 so that both the physical WWNs and the virtual WWNs are in the zone. Save the configuration and enable it.
Figure 7-26 The virtual NPIV ports are added to alias V9000
In the final zoning configuration, the original physical port WWNs must be removed from zoning so that only the new virtual port WWNs exist in the V9000 zone.
4. To ensure nondisruptive storage access, save and enable the SAN zoning configuration that contains both the physical WWNs and the virtual WWNs. Then, have the host rediscover its storage and verify connectivity to the new NPIV WWNs. The method for doing so depends on the operating system. In this example, SDDDSM on the Windows 2008 server is used to check the number of paths.
Example 7-31 shows the output (shortened here for clarity) of the SDDDSM datapath query device command before the NPIV WWNs are added to the zoning configuration.
Example 7-31 Paths to the IBM FlashSystem V9000 before adding NPIV WWNs
DEV#: 1 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: LEAST I/O AND WEIGHT
SERIAL: 60050768028183D58000000000000036 Reserved: No LUN SIZE: 100.0GB
HOST INTERFACE: FC
===========================================================================================
Path#   Adapter/Hard Disk              State   Mode     Select  Errors
    0   Scsi Port2 Bus0/Disk3 Part0    OPEN    NORMAL      215       0
    1 * Scsi Port2 Bus0/Disk3 Part0    OPEN    NORMAL        0       0
    2   Scsi Port3 Bus0/Disk3 Part0    OPEN    NORMAL      184       0
    3 * Scsi Port3 Bus0/Disk3 Part0    OPEN    NORMAL        0       0
Example 7-32 shows the output (shortened here for clarity) of the SDDDSM datapath query device command after the NPIV WWNs are added to the zoning configuration.
Example 7-32 Paths to the IBM FlashSystem V9000 after adding NPIV WWNs
DEV#: 1 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: LEAST I/O AND WEIGHT
SERIAL: 60050768028183D58000000000000036 Reserved: No LUN SIZE: 100.0GB
HOST INTERFACE: FC
===========================================================================================
Path#   Adapter/Hard Disk              State   Mode     Select  Errors
    0 * Scsi Port2 Bus0/Disk3 Part0    OPEN    NORMAL      215       0
    1 * Scsi Port2 Bus0/Disk3 Part0    OPEN    NORMAL        0       0
    2 * Scsi Port3 Bus0/Disk3 Part0    OPEN    NORMAL      184       0
    3 * Scsi Port3 Bus0/Disk3 Part0    OPEN    NORMAL        0       0
    4 * Scsi Port2 Bus0/Disk3 Part0    OPEN    NORMAL        0       0
    5   Scsi Port2 Bus0/Disk3 Part0    OPEN    NORMAL        0       0
    6 * Scsi Port3 Bus0/Disk3 Part0    OPEN    NORMAL        0       0
    7   Scsi Port3 Bus0/Disk3 Part0    OPEN    NORMAL        0       0
The number of paths increased from four to eight, indicating that the host connects to IBM FlashSystem V9000 on the NPIV paths.
5. Remove the original physical port WWNs from the zoning configuration (Figure 7-27). Save the zoning configuration and enable it.
Figure 7-27 The new NPIV WWNs are now in the V9000 alias instead of the original physical WWNs
6. At this point, go to your host and rescan for disks to verify that the host communicates correctly with IBM FlashSystem V9000 through the NPIV ports.
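As a CLI alternative to steps 1 and 2 of this procedure, the target port mode can be changed to Transitional and then verified with commands similar to the following minimal sketch (I/O group 0 is assumed; confirm the syntax against the command reference for your code level):
chiogrp -fctargetportmode transitional 0    (places I/O group 0 into transitional mode)
lstargetportfc                              (confirms that the virtualized NPIV WWPNs are now reported)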
Change to Target Port Mode Enabled
To enable NPIV, perform the following steps:
1. Enable the target port mode. During this step, IBM FlashSystem V9000 stops using the physical ports for host I/O and uses only the virtual NPIV ports. From Settings → System → I/O Groups, select the I/O group to be changed and select Actions → Change Target Port Mode. A CLI alternative for this step is shown after this procedure.
In the Change Target Port Mode dialog (Figure 7-28), select Enabled, select the Force change check box, and then click Continue.
Figure 7-28 Enable NPIV
A message warns that host operations can be disrupted (Figure 7-29). At this point, if the zoning configuration has not been changed to include the virtualized NPIV WWNs, the host loses access to IBM FlashSystem V9000 storage. Click Yes to continue.
Figure 7-29 Warning - host access may be disrupted
The CLI command runs and NPIV is enabled. IBM FlashSystem V9000 no longer uses the original physical WWNs to present its volumes to the Windows 2008 server host.
2. Click Close to finish the configuration change (Figure 7-30).
Figure 7-30 CLI executes: NPIV is now enabled
 
Warning: Make sure that the new NPIV WWNs are included in the zoning configuration, or the host will lose access to IBM FlashSystem V9000 volumes.
3. Review the Target Port Mode, which now indicates Enabled (Figure 7-31).
Figure 7-31 NPIV is now enabled for I/O group 0
4. Check again that the host is functional and connects to its IBM FlashSystem V9000 disks. In this example, the Windows 2008 server continues to have access to IBM FlashSystem V9000 storage during the transition from physical to virtualized NPIV ports.
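As a CLI alternative to step 1 of this procedure, the target port mode can be set to Enabled and then verified with commands similar to the following minimal sketch (I/O group 0 is assumed; the GUI Force change option is not reflected here, so confirm the syntax against the command reference for your code level):
chiogrp -fctargetportmode enabled 0    (switches host I/O for I/O group 0 to the virtual NPIV ports only)
lsiogrp 0                              (the fctargetportmode field should now show enabled)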
Simulating IBM FlashSystem V9000 controller outage
IBM FlashSystem V9000 controllers reboot during firmware updates. During such a reboot, the physical Fibre Channel connections and their WWNs are unavailable if the traditional target port mode is used. When Target Port Mode is Enabled, NPIV is active and the WWNs are virtualized, which means that they can move between controllers when needed.
Figure 7-32 shows an IBM FlashSystem V9000 controller that is placed in service state to simulate a controller outage. This controller is connected to switch port 10, which before the outage showed two WWNs (one physical and one virtual). When the controller is placed in service state, both virtualized NPIV ports are represented on port 11, as shown in the left pane. In the right pane, the two NPIV ports remain blue and online.
Figure 7-32 SAN ports remain online with IBM FlashSystem V9000 controller down
For more information about how to enable and verify NPIV, see 9.5.7, “I/O Groups: Enable and disable NPIV” on page 464.
For more information about NPIV, see the “N_Port ID Virtualization configuration” topic in IBM Knowledge Center:
7.13 Using SDDDSM, SDDPCM, and SDD web interface
After the SDDDSM or SDD driver is installed, specific commands are available. To open a command window for SDDDSM or SDD, select Start → Programs → Subsystem Device Driver → Subsystem Device Driver Management from the desktop.
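From that command window, the following commonly used SDDDSM commands are a brief sketch of what is typically run; see the user's guide that is referenced next for the complete syntax and options:
datapath query version    (displays the installed SDDDSM version)
datapath query adapter    (displays the state of each host adapter)
datapath query device     (displays devices and their path states, as shown in Example 7-31)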
For more information about the command documentation for the various operating systems, see Multipath Subsystem Device Driver User’s Guide, S7000303:
You can also configure SDDDSM to offer a web interface that provides basic information. Before this interface can be used, you must configure it. SDDSRV does not bind to any TCP/IP port by default, but it allows port binding to be dynamically enabled or disabled.
The multipath driver package includes an sddsrv.conf template file named sample_sddsrv.conf. On all UNIX platforms, the sample_sddsrv.conf file is in the /etc directory. On Windows platforms, it is in the directory where SDDDSM was installed.
To create the sddsrv.conf file, copy sample_sddsrv.conf to a file named sddsrv.conf in the same directory. You can then dynamically change the port binding by editing sddsrv.conf and setting the Enableport and Loopbackbind parameters to True.
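For example, the relevant portion of sddsrv.conf might look like the following minimal sketch; the parameter names are taken from the description above, and any other parameters and the exact casing that are present in the template file should be left as delivered:
Enableport = True
Loopbackbind = True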
Figure 7-33 shows the start window of the multipath driver web interface.
Figure 7-33 SDD web interface
7.14 More information
For more information about host attachment, storage subsystem attachment, and troubleshooting, see IBM FlashSystem V9000 web page in IBM Knowledge Center:
 