Host configuration
In this chapter, we describe the host configuration procedures that attach supported hosts to IBM FlashSystem V9000.
This chapter includes the following topics:
7.1 Host attachment overview
The IBM FlashSystem V9000 can be attached to a client host by using three interface types:
Fibre Channel (FC)
Fibre Channel over Ethernet (FCoE)
IP-based Small Computer System Interface (iSCSI)
Always check the IBM System Storage Interoperation Center (SSIC) to get the latest information about supported operating systems, hosts, switches, and so on.
If a configuration that you want is not available on the SSIC, a Solution for Compliance in a Regulated Environment (SCORE) or request for price quotation (RPQ) must be submitted to IBM requesting approval. To submit a SCORE/RPQ, contact your IBM FlashSystem marketing representative or IBM Business Partner.
With FlashSystem V9000 software version 7.5, 16 gigabits per second (Gbps) FC attachment in point-to-point topology (direct connection to client hosts) is supported. At the time that this book was written, IBM AIX did not yet support point-to-point FC direct connections. Check your environment against the SSIC before you use 16 Gbps direct attachment to the host, at the following website:
Also see the FlashSystem V9000 IBM Knowledge Center for details:
See the “Host attachment” topic at the IBM FlashSystem V9000 web page of the IBM Knowledge Center:
IBM FlashSystem V9000 supports a wide range of host types (both IBM and non-IBM), which makes it possible to consolidate storage in an open systems environment into a common pool of storage. The pooled storage can then be managed more efficiently as a single entity from a central point on the storage area network (SAN).
The ability to consolidate storage for attached open systems hosts provides these benefits:
Unified, single-point storage management
Increased utilization rate of the installed storage capacity
Data mobility, so that storage technologies can be shared between applications
Advanced copy services functions offered across storage systems from separate vendors
Only one kind of multipath driver to consider for attached hosts
7.2 IBM FlashSystem V9000 setup
In most IBM FlashSystem V9000 environments where high performance and high availability requirements exist, hosts are attached through a SAN using the Fibre Channel Protocol (FCP). Even though other supported SAN configurations are available, for example, single fabric design, the preferred practice and a commonly used setup is for the SAN to consist of two independent fabrics. This design provides redundant paths and prevents unwanted interference between fabrics if an incident affects one of the fabrics.
Internet Small Computer System Interface (iSCSI) connectivity provides an alternative method to attach hosts through an Ethernet local area network (LAN). However, all communication within the IBM FlashSystem V9000 system, and between FlashSystem V9000 and its storage, takes place solely through FC.
FlashSystem V9000 also supports FCoE, using only 10 gigabit Ethernet (GbE) lossless Ethernet.
Redundant paths to volumes can be provided for both SAN-attached and iSCSI-attached hosts. Figure 7-1 shows the types of attachment that are supported by IBM FlashSystem V9000.
Figure 7-1 IBM FlashSystem V9000 host attachment overview
7.2.1 Fibre Channel and SAN setup overview
Host attachment to IBM FlashSystem V9000 with FC can be made through a SAN fabric or direct host attachment. For FlashSystem V9000 configurations, the preferred practice is to use two redundant SAN fabrics. Therefore, we advise that you have each host equipped with a minimum of two host bus adapters (HBAs) or at least a dual-port HBA with each HBA connected to a SAN switch in either fabric.
IBM FlashSystem V9000 imposes no particular limit on the actual distance between FlashSystem V9000 and host servers. Therefore, a server can be attached to an edge switch in a core-edge configuration, with IBM FlashSystem V9000 at the core of the fabric.
For host attachment, FlashSystem V9000 supports up to three inter-switch link (ISL) hops in the fabric, which means that the server to IBM FlashSystem V9000 can be separated by up to five FC links, four of which can be 10 km long (6.2 miles) if longwave small form-factor pluggables (SFPs) are used.
The zoning capabilities of the SAN switch are used to create three distinct zones. FlashSystem V9000 supports 2 Gbps, 4 Gbps, 8 Gbps, or 16 Gbps FC fabrics, depending on the hardware configuration and on the switch where the FlashSystem V9000 is connected. In an environment with a fabric of multiple-speed switches, the preferred practice is to connect the FlashSystem V9000 and any external storage systems to the switches that operate at the highest speed.
For more details about SAN zoning and SAN connections, see Implementing the IBM System Storage SAN Volume Controller V7.4, SG24-7933, Chapter 3, Planning and Configuration.
IBM FlashSystem V9000 contains shortwave small form-factor pluggables (SFPs). Therefore, it must be within 300 m (984 feet) of the switch to which it attaches.
Table 7-1 shows the fabric type that can be used for communicating between hosts, nodes, and RAID storage systems. These fabric types can be used at the same time.
Table 7-1 IBM FlashSystem V9000 communication options
Communication type                   Host to             FlashSystem V9000      FlashSystem V9000 to
                                     FlashSystem V9000   to external storage    FlashSystem V9000
Fibre Channel SAN (FC)               Yes                 Yes                    Yes
iSCSI (1 Gbps or 10 Gbps Ethernet)   Yes                 No                     No
FCoE (10 Gbps Ethernet)              Yes                 No                     Yes
To avoid latencies that lead to degraded performance, we suggest that you avoid ISL hops whenever possible. That is, in an optimal setup, the servers connect to the same SAN switch as the FlashSystem V9000.
The following guidelines apply when you connect host servers to a FlashSystem V9000:
Up to 256 hosts per building block are supported, which results in a total of 1,024 hosts for a fully scaled system.
If the same host is connected to multiple building blocks of a cluster, it counts as a host in each building block.
A total of 512 distinct, configured, host worldwide port names (WWPNs) are supported per building block.
This limit is the sum of the FC host ports and the host iSCSI names (an internal WWPN is generated for each iSCSI name) for all of the hosts that are associated with a building block.
7.2.2 Fibre Channel SAN attachment
Switch zoning on the SAN fabric defines the access from a server to FlashSystem V9000.
Consider the following rules for zoning hosts with FlashSystem V9000:
Homogeneous HBA port zones
Switch zones that contain HBAs must contain HBAs from similar host types and similar HBAs in the same host. For example, AIX and Microsoft Windows hosts must be in separate zones, and QLogic and Emulex adapters must also be in separate zones.
 
Important: A configuration that breaches this rule is unsupported because it can introduce instability to the environment.
HBA to FlashSystem V9000 port zones
Place each host’s HBA in a separate zone along with one or two IBM FlashSystem V9000 ports. If there are two ports, use one from each controller in the building block. Do not place more than two FlashSystem V9000 ports in a zone with an HBA, because this design results in more than the advised number of paths, as seen from the host multipath driver.
 
Number of paths: For n + 1 redundancy, use the following number of paths:
With two HBA ports, zone HBA ports to FlashSystem V9000 ports 1:2 for a total of four paths.
With four HBA ports, zone HBA ports to FlashSystem V9000 ports 1:1 for a total of four paths.
Optional (n+2 redundancy): With four HBA ports, zone HBA ports to IBM FlashSystem V9000 ports 1:2 for a total of eight paths. Here, we use the term HBA port to describe the SCSI initiator and IBM FlashSystem V9000 port to describe the SCSI target.
Maximum host paths per logical unit (LU)
For any volume, the number of paths through the SAN from FlashSystem V9000 to a host must not exceed eight. For most configurations, four paths to a building block (four paths to each volume that is provided by this building block) are sufficient.
 
Important: The maximum number of host paths per LUN should not exceed eight.
Balanced host load across HBA ports
To obtain the best performance from a host with multiple ports, ensure that each host port is zoned with a separate group of FlashSystem V9000 ports.
Balanced host load across FlashSystem V9000 ports
To obtain the best overall performance of the system and to prevent overloading, the workload to each IBM FlashSystem V9000 port must be equal. You can achieve this balance by zoning approximately the same number of host ports to each FlashSystem V9000 port.
When possible, use the minimum number of paths that are necessary to achieve a sufficient level of redundancy. For FlashSystem V9000, no more than four paths per building block are required to accomplish this layout.
All paths must be managed by the multipath driver on the host side. If we assume that a server is connected through four ports to FlashSystem V9000, each volume is seen through eight paths. With 125 volumes mapped to this server, the multipath driver must support handling up to 1,000 active paths (8 x 125).
FlashSystem V9000 8-port and 12-port configurations provide an opportunity to isolate traffic on dedicated ports, thereby providing a level of protection against misbehaving devices and workloads that can compromise the performance of the shared ports.
There is a benefit in isolating remote replication traffic on dedicated ports, to ensure that problems that affect the cluster-to-cluster interconnect do not adversely affect ports on the primary cluster and, as a result, the performance of workloads running on the primary cluster. Migrating from an existing 4-port configuration, or later from an 8-port or 12-port configuration, to a configuration with additional ports can reduce the effect of performance issues on the primary cluster by isolating remote replication traffic.
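As an illustration of the single-initiator zoning rule described earlier in this section, the following sketch shows one possible zone definition. It assumes a Brocade FOS switch; the host WWPN is taken from the AIX examples later in this chapter, and the two FlashSystem V9000 port WWPNs (one per controller) and the fabric configuration name are hypothetical placeholders:
zonecreate "z_Atlantic_fcs1_V9000", "10:00:00:00:c9:32:a8:65; 50:05:07:68:0b:21:22:f8; 50:05:07:68:0b:22:22:f8"
cfgadd "ITSO_FABRIC_A_cfg", "z_Atlantic_fcs1_V9000"
cfgenable "ITSO_FABRIC_A_cfg"
This zones one HBA port with one port from each controller of the building block, which gives the advised two paths per HBA port.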
7.2.3 Fibre Channel direct attachment
If you attach the FlashSystem V9000 directly to a host, the host must be attached to both controllers of a building block. If the host is not attached to both controllers, the host is shown as degraded.
If you use SAN attachment and direct attachment simultaneously on a FlashSystem V9000, the direct-attached host state will be degraded. Using a switch enforces the switch rule for all attached hosts, which means that a host port must be connected to both FlashSystem canisters. Because a direct-attached host cannot connect one port to both canisters, it does not meet the switch rule and its state will be degraded.
 
Note: You can attach a host through a switch and simultaneously attach a host directly to the FlashSystem V9000. But then, the direct-attached host is shown as degraded.
7.3 iSCSI
The iSCSI protocol is a block-level protocol that encapsulates SCSI commands into TCP/IP packets and, therefore, uses an existing IP network rather than requiring the FC HBAs and SAN fabric infrastructure. The iSCSI standard is defined by RFC 3720. iSCSI connectivity is a software feature that is provided by IBM FlashSystem V9000.
The iSCSI-attached hosts can use a single network connection or multiple network connections.
 
SAN for external storage: Only hosts can iSCSI-attach to FlashSystem V9000. FlashSystem V9000 external storage must use SAN.
Each IBM FlashSystem V9000 controller is equipped with two onboard Ethernet network interface cards (NICs), which can operate at a link speed of 10 Mbps, 100 Mbps, or 1000 Mbps. Both cards can be used to carry iSCSI traffic. Each controller's NIC that is numbered 1 is used as the primary FlashSystem V9000 cluster management port.
For optimal performance, we advise that you use a 1 Gbps Ethernet connection between FlashSystem V9000 and iSCSI-attached hosts when the IBM FlashSystem V9000 controller's onboard NICs are used.
An optional 2-port 10 Gbps Ethernet adapter, with the required 10 Gbps shortwave SFPs, is available. The 10 GbE option is used solely for iSCSI traffic.
7.3.1 Initiators and targets
An iSCSI client, which is known as an (iSCSI) initiator, sends SCSI commands over an IP network to an iSCSI target. We refer to a single iSCSI initiator or iSCSI target as an iSCSI node.
You can use the following types of iSCSI initiators in host systems:
Software initiator: Available for most operating systems; for example, AIX, Linux, and Windows.
Hardware initiator: Implemented as a network adapter with an integrated iSCSI processing unit, which is also known as an iSCSI HBA.
For more information about the supported operating systems for iSCSI host attachment and the supported iSCSI HBAs, see the following web pages:
IBM System Storage Interoperation Center (SSIC) for the IBM FlashSystem V9000 interoperability matrix:
IBM FlashSystem V9000 web page at the IBM Knowledge Center:
An iSCSI target refers to a storage resource that is on an iSCSI server. It also refers to one of potentially many instances of iSCSI nodes that are running on that server.
7.3.2 iSCSI nodes
One or more iSCSI nodes exist within a network entity. The iSCSI node is accessible through one or more network portals. A network portal is a component of a network entity that has a TCP/IP network address and can be used by an iSCSI node.
An iSCSI node is identified by its unique iSCSI name, which is referred to as an iSCSI qualified name (IQN). The purpose of this name is only the identification of the node, not the node's address. In iSCSI, the name is separated from the addresses. This separation enables multiple iSCSI nodes to use the same addresses or, as implemented in FlashSystem V9000, the same iSCSI node to use multiple addresses.
7.3.3 iSCSI qualified name
An IBM FlashSystem V9000 can provide up to eight iSCSI targets, one per controller. Each FlashSystem V9000 controller has its own IQN, which by default is in the following form:
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
An iSCSI host in FlashSystem V9000 is defined by specifying its iSCSI initiator names. The following example shows an IQN of a Windows server’s iSCSI software initiator:
iqn.1991-05.com.microsoft:itsoserver01
During the configuration of an iSCSI host in FlashSystem V9000, you must specify the host’s initiator IQNs.
An alias string can also be associated with an iSCSI node. The alias enables an organization to associate a string with the iSCSI name. However, the alias string is not a substitute for the iSCSI name.
A host that is accessing FlashSystem V9000 volumes through iSCSI connectivity uses one or more Ethernet adapters or iSCSI HBAs to connect to the Ethernet network.
Both onboard Ethernet ports of a FlashSystem V9000 controller can be configured for iSCSI. If iSCSI is used for host attachment, we advise that you dedicate Ethernet port 1 to IBM FlashSystem V9000 management and port 2 to iSCSI use. This way, port 2 can be connected to a separate network segment or virtual LAN (VLAN) for iSCSI, because IBM FlashSystem V9000 does not support the use of VLAN tagging to separate management and iSCSI traffic.
 
Note: Ethernet link aggregation (port trunking) or “channel bonding” for IBM FlashSystem V9000 controller’s Ethernet ports is not supported for the 1 Gbps ports.
For each IBM FlashSystem V9000 controller, that is, for each instance of an iSCSI target node in FlashSystem V9000, you can define two IPv4 and two IPv6 addresses or iSCSI network portals.
7.3.4 iSCSI set up of FlashSystem V9000 and host server
You must perform the following procedure when you are setting up a host server for use as an iSCSI initiator with IBM FlashSystem V9000 volumes. The specific steps vary depending on the particular host type and operating system that you use.
To configure a host, first select a software-based iSCSI initiator or a hardware-based iSCSI initiator. For example, the software-based iSCSI initiator can be a Linux or Windows iSCSI software initiator. The hardware-based iSCSI initiator can be an iSCSI HBA inside the host server.
To set up your host server for use as an iSCSI software-based initiator with FlashSystem V9000 volumes, complete the following steps (the CLI is used in this example; an illustrative command sequence follows the steps):
1. Set up your FlashSystem V9000 cluster for iSCSI:
a. Select a set of IPv4 or IPv6 addresses for the Ethernet ports on the nodes that are in the building block that use the iSCSI volumes.
b. Configure the node Ethernet ports on each FlashSystem V9000 controller by running the cfgportip command.
c. Verify that you configured the Ethernet ports of FlashSystem V9000 correctly by reviewing the output of the lsportip command and lssystemip command.
d. Use the mkvdisk command to create volumes on FlashSystem V9000 clustered system.
e. Use the mkhost command to create a host object on FlashSystem V9000. The mkhost command defines the host’s iSCSI initiator to which the volumes are to be mapped.
f. Use the mkvdiskhostmap command to map the volume to the host object in FlashSystem V9000.
2. Set up your host server:
a. Ensure that you configured your IP interfaces on the server.
b. Ensure that your iSCSI HBA is ready to use, or install the software for the iSCSI software-based initiator on the server, if needed.
c. On the host server, run the configuration methods for iSCSI so that the host server iSCSI initiator logs in to IBM FlashSystem V9000 and discovers FlashSystem V9000 volumes. The host then creates host devices for the volumes.
After the host devices are created, you can use them with your host applications.
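The following is a sketch of the FlashSystem V9000 side of this procedure (step 1). The pool name, IP addresses, gateway, host name, and IQN are hypothetical; substitute the values for your environment:
svctask cfgportip -node 1 -ip 192.168.70.121 -mask 255.255.255.0 -gw 192.168.70.1 2
svctask cfgportip -node 2 -ip 192.168.70.122 -mask 255.255.255.0 -gw 192.168.70.1 2
svctask mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -name iscsi_vol01
svctask mkhost -name linuxiscsi01 -iscsiname iqn.1994-05.com.redhat:linuxiscsi01
svctask mkvdiskhostmap -host linuxiscsi01 iscsi_vol01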
7.3.5 Volume discovery
Hosts can discover volumes through one of the following three mechanisms (a host-side discovery sketch follows the list):
Internet Storage Name Service (iSNS)
IBM FlashSystem V9000 can register with an iSNS name server; the IP address of this server is set by using the chsystem command. A host can then query the iSNS server for available iSCSI targets.
Service Location Protocol (SLP)
The IBM FlashSystem V9000 controller runs an SLP daemon, which responds to host requests. This daemon reports the available services on the node. One service is the CIM object manager (CIMOM), which runs on the configuration controller; the iSCSI I/O service can now also be reported.
iSCSI Send Target request
The host can also send a Send Target request by using the iSCSI protocol to the iSCSI TCP/IP port (port 3260). You must define the network portal IP addresses of the iSCSI targets before a discovery can be started.
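As an illustration of the Send Target discovery method, the following sketch assumes a Linux host with the open-iscsi software initiator and a hypothetical FlashSystem V9000 iSCSI portal address:
# iscsiadm -m discovery -t sendtargets -p 192.168.70.121:3260
# iscsiadm -m node -p 192.168.70.121:3260 --login
After the login completes, the discovered volumes appear as SCSI disks on the host.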
7.3.6 Authentication
The authentication of hosts is optional; by default, it is disabled. The user can choose to enable Challenge Handshake Authentication Protocol (CHAP) authentication, which involves sharing a CHAP secret between the cluster and the host. If the correct key is not provided by the host, IBM FlashSystem V9000 does not allow it to perform I/O to volumes. You can also assign a CHAP secret to the FlashSystem V9000 itself.
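As a sketch only, the following commands show how a CHAP secret might be assigned to a host object and to the system. The host name and secrets are hypothetical, and the exact parameters should be verified against the CLI reference for your FlashSystem V9000 code level:
svctask chhost -chapsecret itsohostsecret linuxiscsi01
svctask chsystem -iscsiauthmethod chap -chapsecret itsoclustersecret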
7.3.7 Target failover
A feature of iSCSI is the option to move iSCSI target IP addresses between FlashSystem V9000 controllers in a building block. IP addresses are moved from one controller to its partner controller only if a controller goes through a planned or unplanned restart. If the Ethernet link to FlashSystem V9000 fails because of a cause outside of FlashSystem V9000 (such as disconnection of the cable or failure of the Ethernet router), FlashSystem V9000 makes no attempt to fail over an IP address to restore IP access to the system. To enable validation of Ethernet access to the controllers, FlashSystem V9000 responds to ping at the standard one-per-second rate without frame loss.
For handling iSCSI IP address failover, a clustered Ethernet port is used. A clustered Ethernet port consists of one physical Ethernet port on each controller in the FlashSystem V9000. The clustered Ethernet port contains configuration settings that are shared by all of these ports.
An iSCSI target node failover happens during a planned or unplanned node restart in a FlashSystem V9000 building block. This example refers to FlashSystem V9000 controllers with no optional 10 GbE iSCSI adapter installed:
1. During normal operation, one iSCSI target node instance is running on each IBM FlashSystem V9000 controller. All of the IP addresses (IPv4/IPv6) that belong to this iSCSI target (including the management addresses, if the controller acts as the configuration controller) are presented on the two ports of the controller.
2. During a restart of a FlashSystem V9000 controller, the iSCSI target node, including all of the IP addresses that are defined on its Port 1/Port 2 and the management IP addresses (if it acted as the configuration controller), fails over to Port 1/Port 2 of the partner controller within the building block. An iSCSI initiator that is running on a server reconnects to its iSCSI target, that is, to the same IP addresses, which are now presented by the other controller of FlashSystem V9000.
3. When the controller finishes its restart, the iSCSI target node (including its IP addresses) that is running on the partner controller fails back. Again, the iSCSI initiator that is running on a server reconnects to its iSCSI target. The management addresses do not fail back; the partner controller remains in the role of the configuration controller for this FlashSystem V9000.
7.3.8 Host failover
From a host perspective, a multipathing I/O (MPIO) driver is not required to handle an IBM FlashSystem V9000 controller failover. In the case of a FlashSystem V9000 controller restart, the host reconnects to the IP addresses of the iSCSI target node that reappear after several seconds on the ports of the partner node.
A host multipathing driver for iSCSI is required in the following situations:
To protect a host from network link failures, including port failures on FlashSystem V9000 controllers.
To protect a host from an HBA failure (if two HBAs are in use).
To protect a host from network failures, if it is connected through two HBAs to two separate networks.
To provide load balancing on the server’s HBA and the network links.
The commands for the configuration of the iSCSI IP addresses are separate from the commands for the configuration of the cluster IP addresses.
The following commands are used for managing iSCSI IP addresses:
The lsportip command lists the iSCSI IP addresses that are assigned for each port on each controller in the FlashSystem V9000.
The cfgportip command assigns an IP address to each controller’s Ethernet port for iSCSI I/O.
The following commands are used for managing FlashSystem V9000 IP addresses:
The lssystemip command returns a list of the FlashSystem V9000 management IP addresses that are configured for each port.
The chsystemip command modifies the IP configuration parameters for the FlashSystem V9000.
The parameters for remote services (SSH and web services) remain associated with the FlashSystem V9000 object. During a FlashSystem V9000 code upgrade, the configuration settings for the FlashSystem V9000 are applied to the controller Ethernet port 1.
For iSCSI-based access, the use of redundant network connections and separating iSCSI traffic by using a dedicated network or virtual LAN (VLAN) prevents any NIC, switch, or target port failure from compromising the host server’s access to the volumes.
Port 1 can be configured for both management and iSCSI. However, the advice is to dedicate port 1 to FlashSystem V9000 management, and ports 2 and 3 to iSCSI use. With this approach, ports 2 and 3 can be connected to a dedicated network segment or VLAN for iSCSI. Because FlashSystem V9000 does not support the use of VLAN tagging to separate management and iSCSI traffic, you can assign the appropriate LAN switch ports to a dedicated VLAN to separate FlashSystem V9000 management and iSCSI traffic.
7.4 File alignment for the best RAID performance
File system alignment can improve performance for storage systems by using a RAID storage mode. File system alignment is a technique that matches file system I/O requests with important block boundaries in the physical storage system. Alignment is important in any system that implements a RAID layout. I/O requests that fall within the boundaries of a single stripe have better performance than an I/O request that affects multiple stripes. When an I/O request crosses the endpoint of one stripe and into another stripe, the controller must then modify both stripes to maintain their consistency.
Unaligned accesses include requests that start at an address that is not divisible by 4 KB, or whose size is not a multiple of 4 KB. These unaligned accesses are serviced at much higher response times, and they can also significantly reduce the performance of aligned accesses that are issued in parallel.
 
Note: Format all client host file systems on the storage system at 4 KB or at a multiple of 4 KB. This applies to both 512-byte and 4096-byte sector sizes. For example, file systems that are formatted with an 8 KB or a 64 KB allocation size are satisfactory because these sizes are multiples of 4 KB.
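For example, on a Windows host, the NTFS allocation unit size can be set to 4 KB at format time; the drive letter in this sketch is a hypothetical example. (On AIX, the agblksize=4096 option of crfs serves the same purpose, as shown in Example 7-15 later in this chapter.)
format E: /FS:NTFS /A:4096 /Q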
7.5 AIX: Specific information
This section describes specific information that relates to the connection of IBM AIX based hosts in an IBM FlashSystem V9000 environment.
 
Note: In this section, the IBM System p information applies to all AIX hosts that are listed on the FlashSystem V9000 interoperability support website, including IBM System i partitions and IBM JS blades.
7.5.1 Optimal logical unit number configurations for AIX
The number of logical unit numbers (LUNs) that you create on the IBM FlashSystem V9000 (as well as on IBM FlashSystem 900) can affect the overall performance of AIX. Applications perform optimally if at least 32 LUNs are used in a volume group. If fewer volumes are required by an application, use the Logical Volume Manager (LVM) to map fewer logical volumes to 32 logical units. This does not affect performance in any significant manner (LVM resource requirements are small).
 
Note: Use at least 32 LUNs in a volume group because this number is the best tradeoff between good performance (the more queued I/Os, the better the performance) and minimizing resource use and complexity.
7.5.2 Configuring the AIX host
To attach FlashSystem V9000 volumes to an AIX host, complete these steps:
1. Install the HBAs in the AIX host system.
2. Ensure that you installed the correct operating systems and version levels on your host, including any updates and authorized program analysis reports (APARs) for the operating system.
3. Connect the AIX host system to the FC switches.
4. Configure the FC switch zoning.
5. Install the 2145 host attachment support package. For more information, see 7.5.4, “Installing the 2145 host attachment support package” on page 226.
6. Install and configure the Subsystem Device Driver Path Control Module (SDDPCM).
7. Perform the logical configuration on FlashSystem V9000 to define the host, volumes, and host mapping.
8. Run the cfgmgr command to discover and configure FlashSystem V9000 volumes.
 
7.5.3 Configuring fast fail and dynamic tracking
For hosts that are running AIX V5.3 or later operating systems, enable both fast fail and dynamic tracking.
Complete these steps to configure your host system to use the fast fail and dynamic tracking attributes:
1. Run the following command to set the FC SCSI I/O Controller Protocol Device to each adapter:
chdev -l fscsi0 -a fc_err_recov=fast_fail
That command is for adapter fscsi0. Example 7-1 shows the command for both adapters on a system that is running IBM AIX V6.1.
Example 7-1 Enable fast fail
#chdev -l fscsi0 -a fc_err_recov=fast_fail
fscsi0 changed
#chdev -l fscsi1 -a fc_err_recov=fast_fail
fscsi1 changed
2. Run the following command to enable dynamic tracking for each FC device:
chdev -l fscsi0 -a dyntrk=yes
This command is for adapter fscsi0.
Example 7-2 shows the command for both adapters in IBM AIX V6.1.
Example 7-2 Enable dynamic tracking
#chdev -l fscsi0 -a dyntrk=yes
fscsi0 changed
#chdev -l fscsi1 -a dyntrk=yes
fscsi1 changed
 
 
Note: The fast fail and dynamic tracking attributes do not persist through an adapter delete and reconfigure operation. Therefore, if the adapters are deleted and then configured back into the system, these attributes are lost and must be reapplied.
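To confirm that the attributes are still set after an adapter is reconfigured, you can check them with the lsattr command. This sketch uses adapter fscsi0, which matches the earlier examples; the output should show fc_err_recov set to fast_fail and dyntrk set to yes:
# lsattr -El fscsi0 | egrep "fc_err_recov|dyntrk"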
Host adapter configuration settings
You can display the availability of installed host adapters by using the command that is shown in Example 7-3.
Example 7-3 FC host adapter availability
#lsdev -Cc adapter |grep fcs
fcs0 Available 1Z-08 FC Adapter
fcs1 Available 1D-08 FC Adapter
You can display the WWPN, along with other attributes, including the firmware level, by using the command that is shown in Example 7-4. The WWPN is represented as the Network Address.
Example 7-4 FC host adapter settings and WWPN
#lscfg -vpl fcs0
fcs0 U0.1-P2-I4/Q1 FC Adapter
 
Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A68D
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number.................. 00P4495
Network Address.............10000000C932A7FB
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951
Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C932A7FB
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U0.1-P2-I4/Q1
7.5.4 Installing the 2145 host attachment support package
To configure IBM FlashSystem V9000 volumes on an AIX host with the proper device type, you must install the host attachment support file set before you run the cfgmgr command. Running the cfgmgr command before you install the host attachment support file set results in the LUNs being configured as “Other SCSI Disk Drives”, which are not recognized by SDDPCM. To correct the device type, you must delete the hdisks by using the rmdev -dl hdiskX command and then rerun the cfgmgr command.
Complete the following steps to install the host attachment support package (an illustrative installation sequence follows the steps):
1. See the following web page:
2. Download the appropriate host attachment package archive for your AIX version; the file set that is contained in the package is devices.fcp.disk.ibm.mpio.rte.
3. Follow the instructions that are provided on the web page and the readme files to install the script.
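As an illustration, the following sketch installs the downloaded file set from a hypothetical /tmp/hostattach directory and then verifies the installation:
# cd /tmp/hostattach
# inutoc .
# installp -acgXd . devices.fcp.disk.ibm.mpio.rte
# lslpp -l devices.fcp.disk.ibm.mpio.rte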
7.5.5 Subsystem Device Driver Path Control Module (SDDPCM)
The SDDPCM is a loadable path control module for supported storage devices to supply path management functions and error recovery algorithms. When the supported storage devices are configured as Multipath I/O (MPIO) devices, SDDPCM is loaded as part of the AIX MPIO FCP or AIX MPIO serial-attached SCSI (SAS) device driver during the configuration.
The AIX MPIO device driver automatically discovers, configures, and makes available all storage device paths. SDDPCM then manages these paths to provide the following functions:
High availability and load balancing of storage I/O
Automatic path-failover protection
Concurrent download of supported storage devices’ licensed machine code
Prevention of a single-point failure
The AIX MPIO device driver along with SDDPCM enhances the data availability and I/O load balancing of IBM FlashSystem V9000 volumes.
 
SDDPCM installation
Download the appropriate version of SDDPCM and install it by using the standard AIX installation procedure. The latest SDDPCM software versions are available at the following website:
Check the driver readme file and make sure that your AIX system meets all prerequisites.
Example 7-5 on page 227 shows the appropriate version of SDDPCM downloaded into the /tmp/sddpcm directory. From there, we extract it and run the inutoc command, which generates a .toc file that is needed by the installp command before SDDPCM is installed. Finally, we run the installp command, which installs SDDPCM onto this AIX host.
Example 7-5 Installing SDDPCM on AIX
# ls -l
total 3232
-rw-r----- 1 root system 1648640 Jul 15 13:24 devices.sddpcm.61.rte.tar
# tar -tvf devices.sddpcm.61.rte.tar
-rw-r----- 271001 449628 1638400 Oct 31 12:16:23 2007 devices.sddpcm.61.rte
# tar -xvf devices.sddpcm.61.rte.tar
x devices.sddpcm.61.rte, 1638400 bytes, 3200 media blocks.
# inutoc .
# ls -l
total 6432
-rw-r--r-- 1 root system 531 Jul 15 13:25 .toc
-rw-r----- 1 271001 449628 1638400 Oct 31 2007 devices.sddpcm.61.rte
-rw-r----- 1 root system 1648640 Jul 15 13:24 devices.sddpcm.61.rte.tar
# installp -ac -d . all
Example 7-6 shows the lslpp command that checks the version of SDDPCM that is installed.
Example 7-6 Checking SDDPCM device driver
# lslpp -l | grep sddpcm
devices.sddpcm.61.rte 2.2.0.0 COMMITTED IBM SDD PCM for AIX V61
devices.sddpcm.61.rte 2.2.0.0 COMMITTED IBM SDD PCM for AIX V61
For more information about how to enable the SDDPCM web interface, see 7.11, “Using SDDDSM, SDDPCM, and SDD web interface” on page 259.
7.5.6 Configuring the assigned volume by using SDDPCM
We use an AIX host with host name Atlantic to demonstrate attaching IBM FlashSystem V9000 volumes to an AIX host. Example 7-7 shows host configuration before FlashSystem V9000 volumes are configured. The lspv output shows the existing hdisks and the lsvg command output shows the existing volume group (VG).
Example 7-7 Status of AIX host system Atlantic
# lspv
hdisk0 0009cdcaeb48d3a3 rootvg active
hdisk1 0009cdcac26dbb7c rootvg active
hdisk2 0009cdcab5657239 rootvg active
# lsvg
rootvg
Identifying the WWPNs of the host adapter ports
Example 7-8 shows how the lscfg commands can be used to list the WWPNs for all installed adapters. We use the WWPNs later for mapping IBM FlashSystem V9000 volumes.
Example 7-8 HBA information for host Atlantic
# lscfg -vl fcs* |egrep "fcs|Network"
fcs1 U0.1-P2-I4/Q1 FC Adapter
Network Address.............10000000C932A865
Physical Location: U0.1-P2-I4/Q1
fcs2 U0.1-P2-I5/Q1 FC Adapter
Network Address.............10000000C94C8C1C
Displaying FlashSystem V9000 configuration
You can use IBM FlashSystem V9000 CLI to display the host configuration on FlashSystem V9000 and to validate the physical access from the host to FlashSystem V9000. Example 7-9 shows the use of the lshost and lshostvdiskmap commands to obtain the following information:
That a host definition was properly created for the host Atlantic.
That the WWPNs that are listed in Example 7-8 on page 227 are logged in, with two logins each.
That Atlantic has three volumes assigned to each WWPN, and that the volume serial numbers are listed.
Example 7-9 IBM FlashSystem V9000 definitions for host system Atlantic
IBM_2145:ITSO_V9000:admin>svcinfo lshost Atlantic
id 8
name Atlantic
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 10000000C94C8C1C
node_logged_in_count 2
state active
WWPN 10000000C932A865
node_logged_in_count 2
state active
 
IBM_2145:ITSO_V9000:admin>svcinfo lshostvdiskmap Atlantic
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
8 Atlantic 0 14 Atlantic0001 10000000C94C8C1C 6005076801A180E90800000000000060
8 Atlantic 1 22 Atlantic0002 10000000C94C8C1C 6005076801A180E90800000000000061
8 Atlantic 2 23 Atlantic0003 10000000C94C8C1C 6005076801A180E90800000000000062
Discovering and configuring LUNs
The cfgmgr command discovers the new LUNs and configures them into AIX. The following command probes the devices on the adapters individually:
# cfgmgr -l fcs1
# cfgmgr -l fcs2
The following command probes the devices sequentially across all installed adapters:
# cfgmgr -vS
The lsdev command (Example 7-10) lists the three newly configured hdisks that are represented as MPIO FC 2145 devices.
Example 7-10 Volumes from IBM FlashSystem V9000
# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1D-08-02 MPIO FC 2145
hdisk4 Available 1D-08-02 MPIO FC 2145
hdisk5 Available 1D-08-02 MPIO FC 2145
Now, you can use the mkvg command to create a VG with the three newly configured hdisks, as shown in Example 7-11.
Example 7-11 Running the mkvg command
# mkvg -y itsoaixvg hdisk3
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg
# mkvg -y itsoaixvg1 hdisk4
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg1
# mkvg -y itsoaixvg2 hdisk5
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg2
The lspv output now shows the new VG label on each of the hdisks that were included in the VGs (Example 7-12).
Example 7-12 Showing the vpath assignment into the Volume Group (VG)
# lspv
hdisk0 0009cdcaeb48d3a3 rootvg active
hdisk1 0009cdcac26dbb7c rootvg active
hdisk2 0009cdcab5657239 rootvg active
hdisk3 0009cdca28b589f5 itsoaixvg active
hdisk4 0009cdca28b87866 itsoaixvg1 active
hdisk5 0009cdca28b8ad5b itsoaixvg2 active
7.5.7 Using SDDPCM
You administer SDDPCM by using the pcmpath command. You use this command to perform all administrative functions, such as displaying and changing the path state. The pcmpath query adapter command displays the current state of the adapters.
Example 7-13 shows that both adapters are in the optimal state: State is NORMAL and Mode is ACTIVE.
Example 7-13 SDDPCM commands that are used to check the availability of the adapters
# pcmpath query adapter
 
Active Adapters :2
Adpt# Name State Mode Select Errors Paths Active
0 fscsi1 NORMAL ACTIVE 407 0 6 6
1 fscsi2 NORMAL ACTIVE 425 0 6 6
The pcmpath query device command displays the current state of the devices. Example 7-14 shows the State and Mode of each path for each of the defined hdisks; all paths show the optimal status of State OPEN and Mode NORMAL. Additionally, an asterisk (*) that is displayed next to a path indicates an inactive path that is configured to the non-preferred FlashSystem V9000 controller.
Example 7-14 SDDPCM commands that are used to check the availability of the devices
# pcmpath query device
Total Devices : 3
DEV#: 3 DEVICE NAME: hdisk3 TYPE: 2145 ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000060
==========================================================================
Path# Adapter/Path Name State Mode Select Errors
0 fscsi1/path0 OPEN NORMAL 152 0
1* fscsi1/path1 OPEN NORMAL 48 0
2* fscsi2/path2 OPEN NORMAL 48 0
3 fscsi2/path3 OPEN NORMAL 160 0
 
DEV#: 4 DEVICE NAME: hdisk4 TYPE: 2145 ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000061
==========================================================================
Path# Adapter/Path Name State Mode Select Errors
0* fscsi1/path0 OPEN NORMAL 37 0
1 fscsi1/path1 OPEN NORMAL 66 0
2 fscsi2/path2 OPEN NORMAL 71 0
3* fscsi2/path3 OPEN NORMAL 38 0
 
DEV#: 5 DEVICE NAME: hdisk5 TYPE: 2145 ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000062
==========================================================================
Path# Adapter/Path Name State Mode Select Errors
0 fscsi1/path0 OPEN NORMAL 66 0
1* fscsi1/path1 OPEN NORMAL 38 0
2* fscsi2/path2 OPEN NORMAL 38 0
3 fscsi2/path3 OPEN NORMAL 70 0
7.5.8 Creating and preparing volumes for use with AIX V6.1 and SDDPCM
The itsoaixvg volume group (VG) is created with hdisk3. A logical volume is created by using the VG. Then, a JFS2 file system is created on it and mounted at /itsoaixvg, as shown in Example 7-15.
Example 7-15 Host system new VG and file system configuration
# lsvg -o
itsoaixvg2
itsoaixvg1
itsoaixvg
rootvg
# crfs -v jfs2 -g itsoaixvg -a size=3G -m /itsoaixvg -p rw -a agblksize=4096
File system created successfully.
3145428 kilobytes total disk space.
New File System size is 6291456
# lsvg -l itsoaixvg
itsoaixvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
loglv00 jfs2log 1 1 1 closed/syncd N/A
fslv00 jfs2 384 384 1 closed/syncd /itsoaixvg
7.5.9 Expanding an AIX volume
AIX supports dynamic volume expansion starting at IBM AIX 5L™ version 5.2. By using this capability, a volume’s capacity can be increased by the storage subsystem while the volumes are actively in use by the host and applications.
The following guidelines apply:
The volume cannot belong to a concurrent-capable VG.
The volume cannot belong to a FlashCopy, Metro Mirror, or Global Mirror relationship.
These are the steps to expand a volume on an AIX host when the volume is on the FlashSystem V9000 (an illustrative command sequence follows the steps):
1. Display the current size of the IBM FlashSystem V9000 volume by using the FlashSystem V9000 lsvdisk <VDisk_name> CLI command. The capacity of the volume, as seen by the host, is displayed in the capacity field (in GB) of the lsvdisk output.
2. Identify the corresponding AIX hdisk by matching the vdisk_UID from the lsvdisk output with the SERIAL field of the pcmpath query device output.
3. Display the capacity that is configured in AIX by using the lspv hdisk command. The capacity is shown in the TOTAL PPs field in MBs.
4. To expand the capacity of FlashSystem V9000 volume, use the expandvdisksize command.
5. After the capacity of the volume is expanded, AIX must update its configured capacity. To start the capacity update on AIX, use the chvg -g vg_name command, where vg_name is the VG in which the expanded volume is found.
If AIX does not return any messages, the command was successful and the volume changes in this VG were saved.
If AIX cannot see any changes in the volumes, it returns an explanatory message.
6. Display the new AIX-configured capacity by using the lspv hdisk command. The capacity (in MBs) is shown in the TOTAL PPs field.
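The following sketch ties these steps together. It reuses the Atlantic0001 volume and the itsoaixvg VG from the earlier examples, and the 10 GB increment is an arbitrary example value:
IBM_2145:ITSO_V9000:admin>expandvdisksize -size 10 -unit gb Atlantic0001
# chvg -g itsoaixvg
# lspv hdisk3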
7.5.10 Running FlashSystem V9000 commands from AIX host system
To run CLI commands, install and prepare the SSH client system on the AIX host system. For AIX 5L V5.1 and later, you can get OpenSSH from the Bonus Packs. You also need its prerequisite, OpenSSL, from the AIX toolbox for Linux applications for IBM Power Systems:
The AIX installation images are available at this website:
Complete the following steps:
1. To generate the key files on AIX, run the following command:
ssh-keygen -t rsa -f filename
The -t parameter specifies the type of key to generate: rsa1, rsa2, or dsa. To generate an rsa2 key, specify rsa as the type; for rsa1, specify rsa1. When you are creating the key for FlashSystem V9000, use type rsa2 (that is, specify rsa). The -f parameter specifies the file names of the private and public keys on the AIX server (the public key has the .pub extension after the file name).
2. Install the public key on FlashSystem V9000 by using the GUI.
3. On the AIX server, make sure that the private key and the public key are in the .ssh directory and in the home directory of the user.
4. To connect to FlashSystem V9000 and use a CLI session from the AIX host, run the following command:
ssh -l admin -i filename V9000
5. You can also run the commands directly on the AIX host, which is useful when you are making scripts. To run the commands directly on the AIX host, add IBM FlashSystem V9000 commands to the previous command. For example, to list the hosts that are defined on FlashSystem V9000, enter the following command:
ssh -l admin -i filename V9000 lshost
In this command, -l admin is the user name that is used to log in to FlashSystem V9000, -i filename is the filename of the private key that is generated, and V9000 is the host name or IP address of FlashSystem V9000.
7.6 Windows: Specific information
This section describes specific information about the connection of Windows-based hosts to IBM FlashSystem V9000 environment.
7.6.1 Configuring Windows Server 2008 and 2012 hosts
For attaching IBM FlashSystem V9000 to a host that is running Windows Server 2008, Windows Server 2008 R2, or Windows Server 2012, you must install the IBM SDDDSM multipath driver to make the Windows server capable of handling volumes that are presented by FlashSystem V9000.
 
Note: With Windows Server 2012, you can use the native Microsoft device drivers, but we strongly suggest that you install the IBM SDDDSM drivers.
Before you attach FlashSystem V9000 to your host, make sure that all of the following requirements are fulfilled:
Check all prerequisites that are provided in section 2.0 of the SDDDSM readme file.
Check the LUN limitations for your host system. Ensure that there are enough FC adapters installed in the server to handle the total number of LUNs that you want to attach.
7.6.2 Configuring Windows
To configure the Windows hosts, complete the following steps:
1. Make sure that the current OS service pack and fixes are applied to your Windows server system.
2. Use the current supported firmware and driver levels on your host system.
3. Install the HBA or HBAs on the Windows server, as described in 7.6.4, “Installing and configuring the host adapter” on page 233.
4. Connect the Windows Server FC host adapters to the switches.
5. Configure the switches (zoning).
6. Install the FC host adapter driver, as described in 7.6.3, “Hardware lists, device driver, HBAs, and firmware levels” on page 233.
7. Configure the HBA for hosts that are running Windows, as described in 7.6.4, “Installing and configuring the host adapter” on page 233.
8. Check the HBA driver readme file for the required Windows registry settings, as described in 7.6.3, “Hardware lists, device driver, HBAs, and firmware levels” on page 233.
9. Check the disk timeout on Windows Server, as described in 7.6.5, “Changing the disk timeout on Windows Server” on page 233.
10. Install and configure SDDDSM.
11. Restart the Windows Server host system.
12. Configure the host, volumes, and host mapping in FlashSystem V9000.
13. Use Rescan disk in Computer Management of the Windows Server to discover the volumes that were created on FlashSystem V9000.
7.6.3 Hardware lists, device driver, HBAs, and firmware levels
For more information about the supported hardware, device driver, and firmware, see this web page:
There, you can find the hardware list for supported HBAs and the driver levels for Windows. Check the supported firmware and driver level for your HBA and follow the manufacturer’s instructions to upgrade the firmware and driver levels for each type of HBA. The driver readme files from most manufacturers list the instructions for the Windows registry parameters that must be set for the HBA driver.
7.6.4 Installing and configuring the host adapter
Install the host adapters in your system. See the manufacturer’s instructions for the installation and configuration of the HBAs.
Also, check the documentation that is provided for the server system for the installation guidelines of FC HBAs regarding the installation in certain PCI(e) slots, and so on.
The detailed configuration settings that you must make for the various vendors’ FC HBAs are available at the IBM FlashSystem V9000 web page of the IBM Knowledge Center. Select Configuring → Host attachment → Fibre Channel host attachments → Hosts running the Microsoft Windows Server operating system.
7.6.5 Changing the disk timeout on Windows Server
This section describes how to change the disk I/O timeout value on Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012 systems.
On your Windows Server hosts, complete the following steps to change the disk I/O timeout value to 60 in the Windows registry (a command-line alternative follows the steps):
1. In Windows, click Start, and then select Run.
2. In the dialog text box, enter regedit and press Enter.
3. In the registry browsing tool, locate the following key:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk\TimeOutValue
4. Confirm that the value for the key is 60 (decimal value) and, if necessary, change the value to 60, as shown in Figure 7-2.
Figure 7-2 Regedit
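As an alternative to editing the registry interactively, the same value can be set from an elevated command prompt. This sketch sets TimeOutValue to 60 (decimal):
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue /t REG_DWORD /d 60 /f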
7.6.6 Installing the SDDDSM multipath driver on Windows
This section describes how to install the SDDDSM driver on a Windows Server 2008 R2 host and Windows Server 2012.
Windows Server 2012, Windows Server 2008 (R2), and MPIO
Microsoft Multipath I/O (MPIO) is a generic multipath driver that is provided by Microsoft, which does not form a complete solution. It works with device-specific modules (DSMs), which usually are provided by the vendor of the storage subsystem. This design supports the parallel operation of multiple vendors’ storage systems on the same host without interfering with each other, because the MPIO instance interacts only with that storage system for which the DSM is provided.
MPIO is not installed with the Windows operating system, by default. Instead, storage vendors must pack the MPIO drivers with their own DSMs. IBM SDDDSM is the IBM Multipath I/O solution that is based on Microsoft MPIO technology. It is a device-specific module that is designed specifically to support IBM storage devices on Windows Server 2008 (R2), and Windows 2012 servers.
The intention of MPIO is to achieve better integration of multipath storage with the operating system. It also supports the use of multipathing in the SAN infrastructure during the boot process for SAN boot hosts.
SDDDSM for IBM FlashSystem V9000
SDDDSM is an installation package for the FlashSystem V9000 device for the Windows Server 2008 (R2) and Windows Server 2012 operating systems. Together with MPIO, SDDDSM is designed to support the multipath configuration environments in FlashSystem V9000. SDDDSM resides in a host system along with the native disk device driver and provides the following functions:
Enhanced data availability
Dynamic I/O load-balancing across multiple paths
Automatic path failover protection
Enabled concurrent firmware upgrade for the storage system
Path-selection policies for the host system
Table 7-2 lists the SDDDSM driver levels that are supported at the time of this writing.
Table 7-2 Currently supported SDDDSM driver levels
Windows operating system                                  SDD level
Windows Server 2012 (x64)                                 2.4.5.1
Windows Server 2008 R2 (x64)                              2.4.5.1
Windows Server 2008 (32-bit)/Windows Server 2008 (x64)    2.4.5.1
For more information about the levels that are available, see this web page:
To download SDDDSM, see this web page:
After you download the appropriate archive (.zip file), extract it to your local hard disk and start setup.exe to install SDDDSM. A command prompt window opens (Figure 7-3). Confirm the installation by entering Y.
Figure 7-3 SDDDSM installation
After the setup completes, enter Y again to confirm the reboot request (Figure 7-4).
Figure 7-4 Restart system after installation
After the restart, the SDDDSM installation is complete. You can verify the installation completion in Device Manager because the SDDDSM device appears (Figure 7-5) and the SDDDSM tools are installed (Figure 7-6).
Figure 7-5 SDDDSM installation
The SDDDSM tools are installed, as shown in Figure 7-6.
Figure 7-6 SDDDSM installation
7.6.7 Attaching FlashSystem V9000 volumes to Windows Server 2008 R2 and Windows Server 2012
Create the volumes on IBM FlashSystem V9000 and map them to the Windows Server 2008 R2 or 2012 host.
In this example, we mapped three FlashSystem V9000 disks to the Windows Server 2008 R2 host that is named Diomede, as shown in Example 7-16.
Example 7-16 SVC host mapping to host Diomede
IBM_2145:ITSO_V9000:admin>lshostvdiskmap Diomede
id name SCSI_id vdisk_id vdisk_name       wwpn            vdisk_UID
0 Diomede 0        20     Diomede_0001 210000E08B0541BC 6005076801A180E9080000000000002B
0 Diomede 1        21      Diomede_0002 210000E08B0541BC 6005076801A180E9080000000000002C
0 Diomede 2        22      Diomede_0003 210000E08B0541BC 6005076801A180E9080000000000002D
Complete the following steps to use the devices on your Windows Server 2008 R2 host:
1. Click Start → Run.
2. Run the diskmgmt.msc command, and then click OK. The Disk Management window opens.
3. Select Action → Rescan Disks, as shown in Figure 7-7.
Figure 7-7 Windows Server 2008 R2: Rescan disks
FlashSystem V9000 disks now appear in the Disk Management window (Figure 7-8).
Figure 7-8 Windows Server 2008 R2 Disk Management window
After you assign FlashSystem V9000 disks, they are also available in Device Manager. The three assigned drives are represented by SDDDSM/MPIO as IBM-2145 Multipath disk devices in the Device Manager (Figure 7-9).
Figure 7-9 Windows Server 2008 R2 Device Manager
4. To check that the disks are available, select Start → All Programs → Subsystem Device Driver DSM, and then click Subsystem Device Driver DSM (Figure 7-10). The SDDDSM Command Line Utility is displayed.
Figure 7-10 Windows Server 2008 R2 Subsystem Device Driver DSM utility
5. Run the datapath query device command and press Enter. This command displays all disks and available paths, including their states (Example 7-17).
Example 7-17 Windows Server 2008 R2 SDDDSM command-line utility
Microsoft Windows [Version 6.0.6001]
Copyright (c) 2006 Microsoft Corporation. All rights reserved.
 
C:\Program Files\IBM\SDDDSM>datapath query device
 
Total Devices : 3
 
DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002B
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0
1 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 1429 0
2 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 1456 0
3 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0
 
DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002C
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 1520 0
1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 0 0
2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 1517 0
 
DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002D
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 27 0
1 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 1396 0
2 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 1459 0
3 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0
 
C:\Program Files\IBM\SDDDSM>
 
 
SAN zoning: When the SAN zoning guidance is followed, you see this result for one volume and a host with two HBAs: (number of volumes) x (number of paths per building block per HBA) x (number of HBAs) = 1 x 2 x 2 = four paths.
 
6. Right-click the disk in Disk Management and then select Online to place the disk online (Figure 7-11).
Figure 7-11 Windows Server 2008 R2: Place disk online
7. Repeat step 6 for all of your attached FlashSystem V9000 disks.
8. Right-click one disk again and select Initialize Disk (Figure 7-12).
Figure 7-12 Windows Server 2008 R2: Initialize Disk
9. Mark all of the disks that you want to initialize and then click OK, (Figure 7-13).
Figure 7-13 Windows Server 2008 R2: Initialize Disk
10. Right-click the unallocated disk space and then select New Simple Volume (Figure 7-14).
Figure 7-14 Windows Server 2008 R2: New Simple Volume
7.6.8 Extending a volume
Using IBM FlashSystem V9000 and Windows Server 2008 and later gives you the ability to extend volumes while they are in use.
You can expand a volume in the FlashSystem V9000 cluster even if it is mapped to a host, because Windows Server (Windows Server 2000 and later) can handle volumes that are expanded even while the host has applications running.
A volume, which is defined to be in a FlashCopy, Metro Mirror, or Global Mirror mapping on FlashSystem V9000, cannot be expanded. Therefore, the FlashCopy, Metro Mirror, or Global Mirror on that volume must be deleted before the volume can be expanded.
If the volume is part of a Microsoft Cluster (MSCS), Microsoft advises that you shut down all but one MSCS cluster node. Also, you must stop the applications in the resource that access the volume to be expanded before the volume is expanded. Applications that are running in other resources can continue to run. After the volume is expanded, start the applications and the resource, and then restart the other nodes in the MSCS.
To expand a volume in use on a Windows Server host, you use the Windows DiskPart utility.
To start DiskPart, select Start → Run, and enter DiskPart.
DiskPart was developed by Microsoft to ease the administration of storage on Windows hosts. It is a command-line interface (CLI) that you can use to manage disks, partitions, and volumes by using scripts or direct input on the command line. You can list disks and volumes, select them and, after selecting them, get more detailed information, create partitions, extend volumes, and so on. For more information about DiskPart, see this website:
For more information about expanding partitions of a cluster-shared disk, see this website:
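The following DiskPart sketch extends a basic volume after the underlying FlashSystem V9000 volume has been expanded. The volume number is a hypothetical example; check the list volume output for the correct number in your environment:
C:\> diskpart
DISKPART> list volume
DISKPART> select volume 2
DISKPART> extend
DISKPART> exit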
Dynamic disks can be expanded by expanding the underlying FlashSystem V9000 volume. The new space appears as unallocated space at the end of the disk.
In this case, you do not need to use the DiskPart tool. Instead, you can use Windows Disk Management functions to allocate the new space. Expansion works irrespective of the volume type (simple, spanned, mirrored, and so on) on the disk. Dynamic disks can be expanded without stopping I/O, in most cases.
 
Important: Never try to upgrade your basic disk to dynamic disk or vice versa without backing up your data. This operation is disruptive for the data because of a change in the position of the logical block address (LBA) on the disks.
7.6.9 Removing a disk on Windows
To remove a disk from Windows when the disk is a FlashSystem V9000 volume, follow the standard Windows procedure to ensure that no data that you want to preserve is on the disk, that no applications are using the disk, and that no I/O is going to the disk. After completing this procedure, remove the host mapping on FlashSystem V9000. Ensure that you are removing the correct volume: use the Subsystem Device Driver (SDD) to locate the serial number of the disk, and on FlashSystem V9000 run the lshostvdiskmap command to find the volume’s name and number. Also check that the SDD serial number on the host matches the UID of the volume on IBM FlashSystem V9000.
When the host mapping is removed, perform a rescan for the disk. Disk Management on the server then removes the disk, and the vpath goes into the CLOSE state on the server. Verify these actions by running the SDD datapath query device command; however, the closed vpath is removed only after the server is restarted.
In the following examples, we show how to remove an IBM FlashSystem V9000 volume from a Windows server. We show it on a Windows Server 2008 operating system, but the steps also apply to Windows Server 2008 R2 and Windows Server 2012.
Example 7-18 on page 243 shows the Disk Manager before removing the disk.
We now remove Disk 1. To find the correct volume information, we find the serial or UID number by using SDD, as shown in Example 7-18.
Example 7-18 Removing IBM FlashSystem V9000 disk from the Windows server
C:\Program Files\IBM\SDDDSM>datapath query device
 
Total Devices : 3
 
DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000000F
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 1471 0
1 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0
2 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 1324 0
 
DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 20 0
1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 94 0
2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 55 0
3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0
 
DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000011
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 100 0
1 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 0 0
2 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 69 0
Knowing the Serial/UID of the volume and that the host name is Senegal, we identify the host mapping to remove by running the lshostvdiskmap command on FlashSystem V9000. Then, we remove the actual host mapping, as shown in Example 7-19.
Example 7-19 Finding and removing the host mapping
IBM_2145:ITSO_V9000:admin>lshostvdiskmap Senegal
id name SCSI_id vdisk_id vdisk_name    wwpn              vdisk_UID
1 Senegal 0     7    Senegal_bas0001 210000E08B89B9C0 6005076801A180E9080000000000000F
1 Senegal 1     8    Senegal_bas0002 210000E08B89B9C0 6005076801A180E90800000000000010
1 Senegal 2     9    Senegal_bas0003 210000E08B89B9C0 6005076801A180E90800000000000011
 
IBM_2145:ITSO_V9000:admin>rmvdiskhostmap -host Senegal Senegal_bas0001
 
IBM_2145:ITSO_V9000:admin>lshostvdiskmap Senegal
id name SCSI_id vdisk_id vdisk_name    wwpn              vdisk_UID
1 Senegal 1     8    Senegal_bas0002 210000E08B89B9C0 6005076801A180E90800000000000010
1 Senegal 2     9    Senegal_bas0003 210000E08B89B9C0 6005076801A180E90800000000000011
Here, we can see that the volume mapping is removed. On the server, we then perform a disk rescan in Disk Management, and we now see that the correct disk (Disk1) was removed, as shown in Figure 7-15.
Figure 7-15 Disk Management: Disk is removed
SDDDSM also shows us that the status for all paths to Disk1 changed to CLOSE because the disk is not available, as shown in Example 7-20 on page 245.
Example 7-20 SDD: Closed path
C:\Program Files\IBM\SDDDSM>datapath query device
 
Total Devices : 3
 
DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000000F
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk1 Part0 CLOSE NORMAL 1471 0
1 Scsi Port2 Bus0/Disk1 Part0 CLOSE NORMAL 0 0
2 Scsi Port3 Bus0/Disk1 Part0 CLOSE NORMAL 0 0
3 Scsi Port3 Bus0/Disk1 Part0 CLOSE NORMAL 1324 0
 
DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 20 0
1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 124 0
2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 72 0
3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0
 
DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000011
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 134 0
1 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 0 0
2 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 82 0
The disk (Disk1) is now removed from the server. However, to remove the SDDDSM information about the disk, you must restart the server at a convenient time.
7.6.10 Using FlashSystem V9000 CLI from a Windows host
To run CLI commands, we must install and prepare an SSH client on the Windows host system.
We can install the PuTTY SSH client software on a Windows host by using the PuTTY installation program. You can download PuTTY from this website:
SSH client alternatives for Windows are available at this website:
Cygwin software features an option to install an OpenSSH client. You can download Cygwin from this website:
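For example, after an SSH key pair is created and the public key is associated with a FlashSystem V9000 user, a single CLI command can be run non-interactively from the Windows command prompt by using the plink utility that is included with PuTTY. The key file path and cluster address in this sketch are placeholders:
C:\>plink -i C:\keys\v9000_key.ppk superuser@<cluster_ip> lshost
The command opens an SSH session, runs lshost on the system, prints the output in the command window, and exits.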
7.6.11 Microsoft 2012 and ODX (Offloaded Data Transfer)
Microsoft ODX (Offloaded Data Transfer) is a copy offload capability that is embedded in the operating system. It passes copy command sets through an API to reduce CPU utilization during copy operations. Rather than buffering the read and write operations, Microsoft ODX initiates the copy operation with an offload read and gets a token that represents the data.
Next, the API initiates the offload write command that requests the movement of the data from the source to destination storage volume. The copy manager of the storage device then performs the data movement according to the token.
Client/server data movement is greatly reduced, which frees CPU cycles, because the actual data movement takes place on the back-end storage device rather than traversing the storage area network, which also reduces SAN traffic. Use cases include large data migrations and tiered storage support; ODX can also reduce overall hardware and deployment costs.
ODX and FlashSystem V9000 offer an excellent combination of server and storage integration to reduce CPU usage, and to take advantage of the speed of FlashSystem V9000 all-flash storage arrays and IBM FlashCore technology.
Starting with IBM FlashSystem V9000 software version 7.5, ODX is supported with Microsoft 2012, including the following platforms:
Clients: Windows
Servers: Windows Server 2012
The following functions are included:
The ODX feature is embedded in the copy engine of Windows, so there is no additional software to install.
Both the source and destination storage device LUNs must be ODX-compatible.
If an ODX copy operation fails, the traditional Windows copy is used as a fallback.
Drag-and-drop and Copy-and-Paste actions can be used to initiate the ODX copy.
 
Note: By default the ODX capability of FlashSystem V9000 is disabled. To enable the ODX function from the CLI, issue the chsystem -odx on command on the Config Node.
For more details about Microsoft Offloaded Data transfers, see the following website:
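A minimal sketch of enabling and checking ODX from the CLI follows; the assumption here is that the lssystem output includes the current ODX state:
IBM_2145:ITSO_V9000:admin>chsystem -odx on
IBM_2145:ITSO_V9000:admin>lssystem
...
odx on
...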
7.6.12 Microsoft Volume Shadow Copy
IBM FlashSystem V9000 supports the Microsoft Volume Shadow Copy Service (VSS). The Microsoft Volume Shadow Copy Service can provide a point-in-time (shadow) copy of a Windows host volume while the volume is mounted and the files are in use.
In this section, we describe how to install support for the Microsoft Volume Shadow Copy Service. The following operating system versions are supported:
Windows Server 2008 with SP2 (x86 and x86_64)
Windows Server 2008 R2 with SP1
Windows Server 2012
The following components are used to support the service:
IBM FlashSystem V9000
IBM System Storage hardware provider, which is known as the IBM System Storage Support for Microsoft VSS
Microsoft Volume Shadow Copy Service
IBM VSS is installed on the Windows host.
To provide the point-in-time shadow copy, the components follow this process:
1. A backup application on the Windows host starts a snapshot backup.
2. The Volume Shadow Copy Service notifies IBM VSS that a copy is needed.
3. FlashSystem V9000 prepares the volume for a snapshot.
4. The Volume Shadow Copy Service quiesces the software applications that are writing data on the host and flushes file system buffers to prepare for a copy.
5. FlashSystem V9000 creates the shadow copy by using the FlashCopy Service.
6. The VSS notifies the writing applications that I/O operations can resume and notifies the backup application that the backup was successful.
The VSS maintains a free pool of volumes for use as a FlashCopy target and a reserved pool of volumes. These pools are implemented as virtual host systems on FlashSystem V9000.
You can download the installation archive from the following IBM web page and extract it to a directory on the Windows server where you want to install IBM VSS:
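After IBM VSS is installed on the Windows host, you can confirm that the hardware provider is registered with the Volume Shadow Copy Service by running the standard Windows vssadmin command; the exact provider name that is listed depends on the installed version:
C:\>vssadmin list providers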
 
7.7 Linux (on x86/x86_64): Specific information
This section describes specific information that relates to the connection of Linux on Intel-based hosts to the IBM FlashSystem V9000 environment.
7.7.1 Configuring the Linux host
Complete the following steps to configure the Linux host:
1. Use the current firmware levels on your host system.
2. Install the HBA or HBAs on the Linux server, as described in 7.6.4, “Installing and configuring the host adapter” on page 233.
3. Install the supported HBA driver or firmware and upgrade the kernel, if required.
4. Connect the Linux server FC host adapters to the switches.
5. Configure the switches (zoning), if needed.
6. Configure DMMP for Linux, as described in 7.7.3, “Multipathing in Linux” on page 248.
7. Configure the host, volumes, and host mapping in the FlashSystem V9000.
8. Rescan for LUNs on the Linux server to discover the volumes that were created on FlashSystem V9000.
7.7.2 Configuration information
IBM FlashSystem V9000 supports hosts that run the following Linux distributions:
Red Hat Enterprise Linux (RHEL)
SUSE Linux Enterprise Server (SLES)
For the current information, see this web page:
This web page provides the hardware list for supported HBAs and device driver levels for Linux. Check the supported firmware and driver level for your HBA, and follow the manufacturer’s instructions to upgrade the firmware and driver levels for each type of HBA.
7.7.3 Multipathing in Linux
Red Hat Enterprise Linux 5 (RHEL5) and later, and SUSE Linux Enterprise Server 10 (SLES10) and later, provide their own multipath support in the operating system.
Device Mapper Multipath (DM-MPIO)
Because RHEL5 (and later) and SLES10 (and later) provide their own multipath support for the operating system, you do not have to install another device driver. Always check whether your operating system includes one of the supported multipath drivers. This information is available by using the link that is provided in 7.7.2, “Configuration information” on page 248.
In SLES10, the multipath drivers and tools are installed by default. However, for RHEL5, the user must explicitly choose the multipath components during the operating system installation to install them. Each of the attached IBM FlashSystem V9000 LUNs has a special device file in the Linux /dev directory.
Hosts that use 2.6 kernel Linux operating systems can have as many FC disks as FlashSystem V9000 allows. The following website provides the most current information about the maximum configuration for the FlashSystem V9000:
Creating and preparing DM-MPIO volumes for use
First, you must start the multipath I/O (MPIO) daemon on your system. Run the following SUSE Linux Enterprise Server or RHEL commands on your host system:
1. Enable MPIO for SUSE Linux Enterprise Server by running the following commands:
/etc/init.d/boot.multipath {start|stop}
/etc/init.d/multipathd
{start|stop|status|try-restart|restart|force-reload|reload|probe}
 
Tip: Run insserv boot.multipath multipathd to automatically load the multipath driver and multipathd daemon during start.
2. Enable MPIO for RHEL by running the following commands:
modprobe dm-multipath
modprobe dm-round-robin
service multipathd start
chkconfig multipathd on
Example 7-21 shows the commands that are run on a Red Hat Enterprise Linux 6.3 operating system.
Example 7-21 Starting MPIO daemon on Red Hat Enterprise Linux
[root@palau ~]# modprobe dm-round-robin
[root@palau ~]# service multipathd start
[root@palau ~]# chkconfig multipathd on
[root@palau ~]#
Complete the following steps to enable multipathing for IBM devices:
1. Open the multipath.conf file and follow the instructions. The file is in the /etc directory. Example 7-22 shows editing by using vi.
Example 7-22 Editing the multipath.conf file
[root@palau etc]# vi multipath.conf
2. Add the following entry to the devices section of the multipath.conf file:
device {
    vendor "IBM"
    product "2145"
    path_grouping_policy group_by_prio
    prio_callout "/sbin/mpath_prio_alua /dev/%n"
}
 
Note: You can download example multipath.conf files from the following IBM Subsystem Device Driver for Linux website:
3. Restart the multipath daemon, as shown in Example 7-23.
Example 7-23 Stopping and starting the multipath daemon
[root@palau ~]# service multipathd stop
Stopping multipathd daemon: [ OK ]
[root@palau ~]# service multipathd start
Starting multipathd daemon: [ OK ]
4. Run the multipath -dl command to see the MPIO configuration. You see two groups with two paths each. All paths must have the state [active][ready], and one group shows [enabled].
5. Run the fdisk command to create a partition on FlashSystem V9000. Use this procedure to improve performance by aligning a partition in the Linux operating system.
The Linux operating system defaults to a 63-sector offset. To align a partition in Linux using fdisk, complete the following steps:
a. At the command prompt, enter # fdisk /dev/mapper/<device>.
b. To change the listing of the partition size to sectors, enter u.
c. To create a partition, enter n.
d. To create a primary partition, enter p.
e. To specify the partition number, enter 1.
f. To set the base sector value, enter 128.
g. Press Enter to use the default last sector value.
h. To write the changes to the partition table, enter w.
 
Note: The <device> is the FlashSystem V9000 volume.
The newly created partition now has an offset of 64 KB and works optimally with an aligned application.
6. If you are installing the Linux operating system on the storage system, create the partition scheme before the installation process. For most Linux distributions, this process requires starting at the text-based installer and switching consoles (press Alt+F2) to get the command prompt before you continue.
7. Create a file system by running the mkfs command, as shown in Example 7-24.
Example 7-24 The mkfs command
[root@palau ~]# mkfs -t ext3 /dev/dm-2
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
518144 inodes, 1036288 blocks
51814 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1061158912
32 block groups
32768 blocks per group, 32768 fragments per group
16192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736
 
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
 
This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@palau ~]#
8. Create a mount point and mount the drive, as shown in Example 7-25.
Example 7-25 Mount point
[root@palau ~]# mkdir /svcdisk_0
[root@palau ~]# cd /svcdisk_0/
[root@palau svcdisk_0]# mount -t ext3 /dev/dm-2 /svcdisk_0
[root@palau svcdisk_0]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
73608360 1970000 67838912 3% /
/dev/hda1 101086 15082 80785 16% /boot
tmpfs 967984 0 967984 0% /dev/shm
/dev/dm-2 4080064 73696 3799112 2% /svcdisk_0
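To make the mount persistent across restarts, an entry can be added to /etc/fstab. The following sketch uses the file system UUID rather than the /dev/dm-2 name, because device-mapper numbering can change between restarts; the UUID value is a placeholder:
[root@palau ~]# blkid /dev/dm-2
/dev/dm-2: UUID="<filesystem-uuid>" TYPE="ext3"
[root@palau ~]# vi /etc/fstab
UUID=<filesystem-uuid> /svcdisk_0 ext3 defaults 0 0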
7.8 VMware: Configuration information
This section describes the requirements and other information for attaching IBM FlashSystem V9000 to various guest host operating systems that are running on the VMware operating system.
For more details about the best practices for configuring, attaching, and operating IBM FlashSystem V9000 in a VMware environment, see the IBM Redpaper™, FlashSystem V9000 in a VMware Environment - Best Practices Guide, REDP-5247.
7.8.1 Configuring VMware hosts
To configure the VMware hosts, complete the following steps:
1. Install the HBAs in your host system.
2. Connect the server FC host adapters to the switches.
3. Configure the switches (zoning), as described in 7.8.4, “VMware storage and zoning guidance” on page 252.
4. Install the VMware operating system (if not already installed) and check the HBA timeouts.
5. Configure the host, volumes, and host mapping in the FlashSystem V9000, as described in 7.8.6, “Attaching VMware to volumes” on page 252.
7.8.2 Operating system versions and maintenance levels
For more information about VMware support, see this web page:
At the time of this writing, the following versions are supported:
ESXi V6.x
ESXi V5.x
ESX/ESXi V4.x (no longer supported by VMware)
7.8.3 HBAs for hosts that are running VMware
Ensure that your hosts that are running on VMware operating systems use the correct HBAs and firmware levels. Install the host adapters in your system. See the manufacturer’s instructions for the installation and configuration of the HBAs.
For more information about supported HBAs for older ESX/ESXi versions, see this web page:
In most cases, the supported HBA device drivers are included in the ESXi server build. However, for some newer storage adapters, you might be required to load additional ESXi drivers. Check the following VMware HCL to determine whether you must load a custom driver for your adapter:
After the HBAs are installed, load the default configuration of your FC HBAs. You must use the same model of HBA with the same firmware in one server. Configuring Emulex and QLogic HBAs to access the same target in one server is not supported.
If you are unfamiliar with the VMware environment and the advantages of storing virtual machines and application data on a SAN, it is useful to get an overview about VMware products before you continue.
VMware documentation is available at this web page:
7.8.4 VMware storage and zoning guidance
The VMware ESXi server can use a Virtual Machine File System (VMFS). VMFS is a file system that is optimized to run multiple virtual machines as one workload to minimize disk I/O. It also can handle concurrent access from multiple physical machines because it enforces the appropriate access controls. Therefore, multiple ESXi hosts can share the set of LUNs.
Theoretically, you can run all of your virtual machines on one LUN. However, for performance reasons in more complex scenarios, it can be better to load balance virtual machines over separate LUNs.
The use of fewer volumes has the following advantages:
More flexibility to create virtual machines without having to provision more space on FlashSystem V9000
More possibilities for taking VMware snapshots
Fewer volumes to manage
The use of more and smaller volumes has the following advantages:
Separate I/O characteristics of the guest operating systems
More flexibility (the multipathing policy and disk shares are set per volume)
Microsoft Cluster Service requires its own volume for each cluster disk resource
For more information about designing your VMware infrastructure, see these web pages:
7.8.5 Multipathing in ESXi
The VMware ESXi server performs native multipathing. You do not need to install another multipathing driver, such as SDDDSM.
 
Guidelines: ESXi server hosts that use shared storage for virtual machine failover or load balancing must be in the same zone. You can create only one VMFS datastore per volume.
7.8.6 Attaching VMware to volumes
This section details the steps to attach VMware to volumes.
First, we make sure that the VMware host is logged in to the FlashSystem V9000. In our examples, we use the VMware ESXi server V6 and the host name Nile.
Enter the following command to check the status of the host:
lshost <hostname>
Example 7-26 shows that the host Nile is logged in to FlashSystem V9000 with two HBAs.
Example 7-26 The lshost Nile
IBM_2145:ITSO_V9000:admin>lshost Nile
id 1
name Nile
port_count 2
type generic
mask 1111
iogrp_count 2
WWPN 210000E08B892BCD
node_logged_in_count 4
state active
WWPN 210000E08B89B8C0
node_logged_in_count 4
state active
 
Tips:
If you want to use features, such as high availability (HA), the volumes that own the VMDK file must be visible to every ESXi host that can host the virtual machine.
In FlashSystem V9000, select Allow the virtual disks to be mapped even if they are already mapped to a host.
The volume should have the same SCSI ID on each ESXi host.
In some configurations, such as MSCS in-guest clustering, the virtual machines must share raw device mapping (RDM) disks for clustering purposes. In this case, a consistent SCSI ID is required across all ESXi hosts in the cluster.
For this configuration, we created one volume and mapped it to our ESXi host, as shown in Example 7-27.
Example 7-27 Mapped volume to ESXi host Nile
IBM_2145:ITSO_V9000:admin>lshostvdiskmap Nile
id name  SCSI_id vdisk_id vdisk_name     wwpn             vdisk_UID
1   Nile    0      12      VMW_pool 210000E08B892BCD 60050768018301BF2800000000000010
ESXi does not automatically scan for SAN changes (except when rebooting the entire ESXi server). If you made any changes to your IBM FlashSystem V9000 or SAN configuration, complete the following steps (see Figure 7-16 on page 254 for an illustration):
1. Open your VMware vSphere Client.
2. Select the host.
3. In the Hardware window, choose Storage.
4. Click Rescan.
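Alternatively, on ESXi 5.0 and later, the same rescan can be triggered from the ESXi shell, which is convenient when several hosts must be rescanned. This is a sketch of the relevant commands:
~ # esxcli storage core adapter rescan --all
~ # vmkfstools -V
The first command rescans all host bus adapters for new devices; the second rescans for new VMFS datastores.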
To configure a storage device to use it in VMware, complete the following steps:
1. Open your VMware vSphere Client.
2. Select the host for which you want to see the assigned volumes and click the Configuration tab.
3. In the Hardware window on the left side, click Storage, as shown in Figure 7-16.
4. To create a storage datastore, select Add storage.
5. The Add storage wizard opens.
6. Select Disk/LUN as the storage type, and then click Next.
7. Select the FlashSystem V9000 volume that you want to use for the datastore, and then click Next.
8. Review the disk layout. Click Next.
9. Enter a datastore name. Click Next.
10. Enter the size of the new partition. Click Next.
11. Review your selections. Click Finish.
Now, the created VMFS data store is listed in the Storage window (Figure 7-16). You see the details for the highlighted datastore. Check whether all of the paths are available and that the Path Selection is set to Round Robin.
Figure 7-16 VMware storage configuration
If not all of the paths are available, check your SAN and storage configuration. After the problem is fixed, click Rescan All to perform a path rescan. The view is updated to the new configuration.
The preferred practice is to use the Round Robin Multipath Policy for FlashSystem V9000. If you need to edit this policy, complete the following steps:
1. Highlight the datastore.
2. Click Properties.
3. Click Managed Paths.
4. Click Change.
5. Select Round Robin.
6. Click OK.
7. Click Close.
 
Note: Since ESXi 5.5, the Round Robin policy is set by default for FlashSystem V9000 volumes.
Now, your VMFS data store is created and you can start using it for your guest operating systems. Round Robin distributes the I/O load across all available paths. If you want to use a fixed path, the Fixed policy setting also is supported.
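If you prefer to set the path selection policy from the ESXi shell instead of the vSphere Client, a sketch such as the following can be used. Take the naa device identifier from the esxcli storage nmp device list output; the value shown here is a placeholder:
~ # esxcli storage nmp device list
~ # esxcli storage nmp device set --device naa.<device_id> --psp VMW_PSP_RR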
7.8.7 Volume naming in VMware
In the VMware vSphere Client, a device is identified either by its volume name, if one was specified during creation on the V9000, or by its serial number, as shown in Figure 7-17.
Figure 7-17 V9000 device, volume name
 
Disk partition: The number of the disk partition (this value never changes). If the last number is not displayed, the name refers to the entire volume.
7.8.8 Extending a VMFS volume
VMFS volumes can be extended while virtual machines are running. First, you must extend the volume on FlashSystem V9000, and then you can extend the VMFS volume.
 
Note: Before you perform the steps that are described here, backup your data.
Complete the following steps to extend a volume:
1. Expand the volume by running the expandvdisksize -size 5 -unit gb <VDiskname> command, as shown in Example 7-28.
Example 7-28 Expanding a volume on the FlashSystem V9000
IBM_2145:ITSO_V9000:admin>lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 60.0GB
...
IBM_2145:ITSO_V9000:admin>expandvdisksize -size 5 -unit gb VMW_pool
IBM_2145:ITSO_V9000:admin>lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 65.0GB
2. Open the Virtual Infrastructure Client.
3. Select the host.
4. Select Configuration.
5. Select Storage Adapters.
6. Click Rescan.
7. Make sure that the Scan for new Storage Devices option is selected, and then click OK. After the scan completes, the new capacity is displayed in the Details section.
8. Click Storage.
9. Right-click the VMFS volume and click Properties.
10. Click Add Extent.
11. Select the new free space, and then click Next.
12. Click Next.
13. Click Finish.
The VMFS volume is now extended and the new space is ready for use.
7.8.9 Removing a data store from an ESXi host
Before you remove a data store from an ESXi host, you must migrate or delete all of the virtual machines that are on this data store.
To remove the data store, complete the following steps:
1. Back up the data.
2. Open the Virtual Infrastructure Client.
3. Select the host.
4. Select Configuration.
5. Select Storage.
6. Right-click the datastore that you want to remove.
7. Click Unmount (this must be done on every ESXi host that has the datastore mounted).
8. Click Delete to remove.
9. Read the warning, and if you are sure that you want to remove the data store and delete all data on it, click Yes.
10. Select Devices view, right-click the device you want to detach, then click Detach.
11. Remove the host mapping on FlashSystem V9000, or delete the volume, as shown in Example 7-29.
Example 7-29 Host mapping: Delete the volume
IBM_2145:ITSO_V9000:admin>rmvdiskhostmap -host Nile VMW_pool
IBM_2145:ITSO_V9000:admin>rmvdisk VMW_pool
12. In the VI Client, select Storage Adapters.
13. Click Rescan.
14. Make sure that the Scan for new Storage Devices option is selected and click OK.
15. After the scan completes, the disk is removed from the view.
Your data store is now removed successfully from the system.
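The unmount (step 7) and detach (step 10) operations can also be performed from the ESXi shell. The datastore name and device identifier in this sketch are placeholders:
~ # esxcli storage filesystem list
~ # esxcli storage filesystem unmount -l <datastore_name>
~ # esxcli storage core device set --device naa.<device_id> --state=off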
For more information about supported software and driver levels, see the following web page:
7.9 Oracle (Sun) Solaris hosts
At the time of writing, Oracle (Sun) Solaris hosts (SunOS) versions 8, 9, 10, and 11 are supported by FlashSystem V9000. However, SunOS 5.8 (Solaris 8) is discontinued.
7.9.1 MPxIO dynamic pathing
Solaris provides its own multipath support, MPxIO, in the operating system. Therefore, you do not have to install another device driver. Alternatively, Veritas DMP can be used.
Veritas Volume Manager with dynamic multipathing
Veritas Volume Manager (VM) with dynamic multipathing (DMP) automatically selects the next available I/O path for I/O requests without action from the administrator. VM with DMP is also informed when you repair or restore a connection, and when you add or remove devices after the system is fully booted (if the operating system recognizes the devices correctly). The JNI host bus adapter drivers support the host mapping of new volumes without rebooting the Solaris host.
The support characteristics are as follows:
Veritas VM with DMP supports load balancing across multiple paths with FlashSystem V9000.
Veritas VM with DMP does not support preferred pathing with FlashSystem V9000.
OS cluster support
Solaris with Symantec Cluster V4.1, Symantec SFHA, and SFRAC V4.1/5.0, and Solaris with Sun Cluster V3.1/3.2 are supported at the time of this writing.
SAN boot support
Boot from SAN is supported under Solaris 10 and later running Symantec Volume Manager or MPxIO.
Aligning the partition for Solaris
For ZFS, no manual alignment is needed if a disk is added directly to a ZFS pool by using the zpool utility. The utility creates a partition that starts at sector 256, which is automatically properly aligned.
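As an illustration, adding a whole MPxIO device to a pool with the zpool utility might look like the following sketch; the pool name and device name are placeholders:
# zpool create v9000pool c0t<volume_UID>d0
# zpool status v9000pool
Because the whole disk is given to ZFS, the utility writes an EFI label with a properly aligned partition, so no manual partitioning is required.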
7.10 Hewlett-Packard UNIX: Configuration information
For more information about Hewlett-Packard UNIX (HP-UX) support, see the following web page:
7.10.1 Operating system versions and maintenance levels
At the time of this writing, HP-UX V11.0 and V11i v1/v2/v3 are supported (64-bit only).
7.10.2 Supported multipath solutions
For HP-UX version 11.31, HP does not require a separate multipath driver to be installed. In this version, a native multipathing solution is provided as part of the mass storage stack feature.
For releases of HP-UX before 11.31, multipathing support is available using either of the following software:
IBM System Storage Multipath Subsystem Device Driver (SDD)
HP PVLinks
For more information see the FlashSystem V9000 web page in the IBM Knowledge Center.
7.10.3 Clustered-system support
HP-UX version 11.31 supports ServiceGuard 11.18, which provides a locking mechanism called cluster lock LUN. On the FlashSystem V9000, specify the block device name of a volume for the CLUSTER_LOCK_LUN variable in the cluster configuration ASCII file. The lock LUN must point to the same volume on all system nodes. This consistency can be ensured by determining the worldwide ID (WWID) of the volume. The cluster lock LUN cannot be used for multiple system locking and cannot be used as a member of a Logical Volume Manager (LVM) volume group or VxVM disk group.
7.10.4 Support for HP-UX with greater than eight LUNs
HP-UX does not recognize more than eight LUNs per port when the generic SCSI behavior is used. If you want to use more than eight LUNs per SCSI target, you must set the type attribute to hpux when you create the host object. You can use the FlashSystem V9000 command-line interface or the management GUI to set this attribute.
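A sketch of creating the HP-UX host object with the hpux type from the FlashSystem V9000 CLI follows; the host name and WWPN are examples only:
IBM_2145:ITSO_V9000:admin>mkhost -name hpux_host -hbawwpn 10000000C9123456 -type hpux
IBM_2145:ITSO_V9000:admin>lshost hpux_host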
7.11 Using SDDDSM, SDDPCM, and SDD web interface
After the SDDDSM or SDD driver is installed, specific commands are available. To open a command window for SDDDSM or SDD, select Start → Programs → Subsystem Device Driver → Subsystem Device Driver Management from the desktop.
For more information about the command documentation for the various operating systems, see Multipath Subsystem Device Driver User’s Guide, S7000303:
It is also possible to configure SDDDSM to offer a web interface that provides basic information. Before this can work, you must configure the web interface: sddsrv does not bind to any TCP/IP port by default, but it does allow port binding to be dynamically enabled or disabled.
The multipath driver package includes an sddsrv.conf template file that is named the sample_sddsrv.conf file. On all UNIX platforms, the sample_sddsrv.conf file is in the /etc directory. On Windows platforms, it is in the directory in which SDDDSM was installed.
Create the sddsrv.conf file by copying the sample_sddsrv.conf file in the same directory and naming the copy sddsrv.conf. You can then dynamically change the port binding by modifying the parameters in the sddsrv.conf file and setting the values of enableport and loopbackbind to true.
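For example, on a Windows host with SDDDSM installed in the default directory, the configuration might be done as in this sketch; the installation path is an example, and the parameter names follow the description above:
C:\Program Files\IBM\SDDDSM>copy sample_sddsrv.conf sddsrv.conf
C:\Program Files\IBM\SDDDSM>notepad sddsrv.conf
In sddsrv.conf, set enableport = true and loopbackbind = true and save the file; as noted above, sddsrv can enable the port binding dynamically.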
Figure 7-18 shows the start window of the multipath driver web interface.
Figure 7-18 SDD web interface
7.12 More information
For more information about host attachment, storage subsystem attachment, and troubleshooting, see the FlashSystem V9000 web page in the IBM Knowledge Center:
For more information about SDDDSM configuration, see IBM System Storage Multipath Subsystem Device Driver User’s Guide, S7000303, which is available from this address: