Configuring hosts
This chapter provides general guidelines and best practices for configuring host systems. The primary reference for host configuration is available at this IBM Documentation web page.
For more information about host attachment, see this IBM Documentation web page.
For more information about hosts that are connected by using Fibre Channel, see Chapter 2, SAN Design. Host connectivity is a key consideration in overall SAN design.
Before attaching a new host, confirm that the host is supported by the IBM Spectrum Virtualize storage. For more information, see IBM System Storage Interoperation Center (SSIC).
The host configuration guidelines apply equally to all IBM Spectrum Virtualize systems. Therefore, the product is often referred to generically as an IBM Spectrum Virtualize system.
This chapter includes the following topics:
8.1 General configuration guidelines
In this section, we discuss some general configuration guidelines. The information that is presented here complements the content in Chapter 2, “Connecting IBM Spectrum Virtualize and IBM Storwize in storage area networks” on page 37.
8.1.1 Number of paths
It is generally recommended that the total number of Fibre Channel paths per volume be limited to four. For HyperSwap and Stretch Cluster configurations, eight paths per volume are recommended. Adding more paths does not significantly increase redundancy, but it does increase the path-management overhead on the host. Too many paths might also increase failover time.
8.1.2 Host ports
Each host should use two ports from two different host bus adapters (HBAs). These ports should connect to separate SAN fabrics and be zoned to one target port of each node or node canister. When volumes are created, they are assigned to an I/O group, and the resulting path count between the volume and the host should be four.
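For example, such a host can be defined on the system with one WWPN from each HBA and fabric. The following sketch uses a hypothetical host name and WWPNs:
svctask mkhost -name APP_HOST1 -hbawwpn 10000000C9AAAA01:10000000C9BBBB01
With the host zoned to one target port of each node in the I/O group, each mapped volume then presents four paths to the host.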
Preferred practice: Keep Fibre Channel tape (including Virtual Tape Libraries) and Fibre Channel disks on separate HBAs. These devices have two different data patterns when operating in their optimum mode. Switching between them can cause unwanted processor usage and performance slowdown for the applications.
8.1.3 Port masking
In general, Fibre Channel ports should be dedicated to specific functions. Hosts must be zoned to only ports that are designated for host I/O.
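As an illustration, Fibre Channel port masks can be applied with the chsystem command, where the rightmost bit of the mask represents port 1 and a 1 enables the port for that traffic type. The masks that follow are hypothetical examples only:
svctask chsystem -localfcportmask 000011 (node-to-node traffic on ports 1 and 2 only)
svctask chsystem -partnerfcportmask 001100 (replication traffic on ports 3 and 4 only)
Host traffic is then confined by zoning the hosts only to the remaining ports that are designated for host I/O.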
8.1.4 N-port ID virtualization
IBM Spectrum Virtualize now uses N-port ID virtualization (NPIV) by default. NPIV reduces failover time and enables features, such as hot spare nodes.
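The NPIV state and the virtualized target ports can be checked from the CLI, as in the following sketch, which assumes I/O group 0 (the transitional step applies when NPIV is enabled on an existing system):
lsiogrp 0 (check the fctargetportmode field)
lstargetportfc (lists physical and virtualized NPIV WWPNs)
chiogrp -fctargetportmode transitional 0
chiogrp -fctargetportmode enabled 0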
8.1.5 Host to I/O group mapping
An I/O group consists of two nodes or node canisters that share the management of volumes within the cluster. Use a single I/O group (iogrp) for all volumes that are allocated to a specific host. This guideline results in the following benefits:
Minimizes port fan-outs within the SAN fabric.
Maximizes the potential host attachments to IBM Spectrum Virtualize because maximums are based on I/O groups.
Reduces the number of target ports that must be managed within the host.
8.1.6 Volume size versus quantity
In general, each storage LUN that is mapped to a host consumes host resources, such as memory and processing time. Each extra path consumes more memory and some extra processing time. You can control this effect by using fewer, larger LUNs rather than many small LUNs. However, you might need to tune queue depths and I/O buffers to use memory and processing time efficiently.
If a host does not have tunable parameters, as on the Windows operating system, the host does not benefit from large volume sizes. AIX benefits greatly from larger volumes, with a smaller number of volumes and paths presented to it.
8.1.7 Host volume mapping
Host mapping is the process of controlling which hosts can access specific volumes within the system. IBM Spectrum Virtualize always presents a specific volume with the same SCSI ID on all host ports. When a volume is mapped, IBM Spectrum Virtualize software automatically assigns the next available SCSI ID if none is specified. In addition, each volume carries a unique identifier, called the UID.
Allocate the operating system volume of a SAN boot host as the lowest SCSI ID (zero for most hosts), and then allocate the various data disks. If you share a volume among multiple hosts, control the SCSI ID so that the IDs are identical across the hosts. This consistency eases management at the host level and prevents potential issues during IBM Spectrum Virtualize updates and node reboots, particularly for ESXi hosts.
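To control the SCSI ID explicitly, specify it when the volume is mapped, as in this sketch with hypothetical host and volume names:
svctask mkvdiskhostmap -host ESX_HOST1 -scsi 0 SAN_BOOT_VOL
svctask mkvdiskhostmap -host ESX_HOST1 -scsi 1 DATA_VOL1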
If you are using image mode to migrate a host to IBM Spectrum Virtualize, allocate the volumes in the same order that they were originally assigned on the host from the back-end storage.
The lshostvdiskmap command displays a list of VDisks (volumes) that are mapped to a host. These volumes are recognized by the specified host. Example 8-1 shows the syntax of the lshostvdiskmap command, which is used to determine the SCSI ID and the UID of volumes.
Example 8-1 The lshostvdiskmap command
svcinfo lshostvdiskmap -delim delimiter host_name
Example 8-2 shows the results of using the lshostvdiskmap command.
Example 8-2 Output of using the lshostvdiskmap command
svcinfo lshostvdiskmap -delim : HG-ESX6
id:name:SCSI_id:vdisk_id:vdisk_name:vdisk_UID:IO_group_id:IO_group_name:mapping_type:host_cluster_id:host_cluster_name:protocol
3:HG-ESX6:0:5:DB_Volume:60050768108104A2F000000000000037:0:io_grp0:private:::scsi
3:HG-ESX6:1:15:Infra_Volume:60050768108104A2F000000000000041:0:io_grp0:private:::scsi
3:HG-ESX6:2:43:onprem_volume_Ansible:60050768108104A2F000000000000081:0:io_grp0:private:::scsi
3:HG-ESX6:3:14:Volume IP Replication:60050768108104A2F000000000000040:0:io_grp0:private:::scsi
3:HG-ESX6:4:48:ansible:60050768108104A2F000000000000086:0:io_grp0:private:::scsi
3:HG-ESX6:5:49:ansible2:60050768108104A2F000000000000087:0:io_grp0:private:::scsi
3:HG-ESX6:6:34:Onprem_Demo_Ansible_Vol:60050768108104A2F00000000000009F:0:io_grp0:private:::scsi
3:HG-ESX6:7:50:vol_HG-ESX6_1:60050768108104A2F0000000000000A5:0:io_grp0:private:::scsi
3:HG-ESX6:8:51:vol_HG-ESX6_10:60050768108104A2F0000000000000A8:0:io_grp0:private:::scsi
Example 8-3 shows the lsvdiskhostmap command.
Example 8-3 The lsvdiskhostmap command
svcinfo lsvdiskhostmap -delim delimiter vdisk_name
Example 8-4 shows the results of using the lsvdiskhostmap command.
Example 8-4 Output of using the lsvdiskhostmap command
svcinfo lsvdiskhostmap -delim : EEXCLS_HBin01
id:name:SCSI_id:host_id:host_name:wwpn:vdisk_UID
950:EEXCLS_HBin01:14:109:HDMCENTEX1N1:10000000C938CFDF:600507680191011D4800000000000466
950:EEXCLS_HBin01:14:109:HDMCENTEX1N1:10000000C938D01F:600507680191011D4800000000000466
950:EEXCLS_HBin01:13:110:HDMCENTEX1N2:10000000C938D65B:600507680191011D4800000000000466
950:EEXCLS_HBin01:13:110:HDMCENTEX1N2:10000000C938D3D3:600507680191011D4800000000000466
950:EEXCLS_HBin01:14:111:HDMCENTEX1N3:10000000C938D615:600507680191011D4800000000000466
950:EEXCLS_HBin01:14:111:HDMCENTEX1N3:10000000C938D612:600507680191011D4800000000000466
950:EEXCLS_HBin01:14:112:HDMCENTEX1N4:10000000C938CFBD:600507680191011D4800000000000466
950:EEXCLS_HBin01:14:112:HDMCENTEX1N4:10000000C938CE29:600507680191011D4800000000000466
950:EEXCLS_HBin01:14:113:HDMCENTEX1N5:10000000C92EE1D8:600507680191011D4800000000000466
950:EEXCLS_HBin01:14:113:HDMCENTEX1N5:10000000C92EDFFE:600507680191011D4800000000000466
Note: Example 8-4 shows the same volume that is mapped to five different hosts, but host 110 features a different SCSI ID than the other four hosts. This example is a non-recommended practice that can lead to loss of access in some situations because of SCSI ID mismatch.
8.1.8 Server adapter layout
If your host system includes multiple internal I/O buses, place the two adapters that are used for IBM Spectrum Virtualize cluster access on two different I/O buses to maximize the availability and performance. When purchasing a server, always have two cards instead of one. For example, two dual-port HBA cards are preferred over one quad-port HBA card because you can spread the I/O and add redundancy.
8.1.9 Host status improvements
IBM Spectrum Virtualize provides an alternative for reporting host status.
Previously, a host was marked as degraded if one of the host ports logged off the fabric. However, in some situations a logged-off port is expected, and the degraded status can cause confusion.
At the host level, a new status_policy setting is available that includes the following settings:
The complete setting uses the original host status definitions.
By using the redundant setting, a host is reported as degraded only if it does not have enough logged-in ports to provide redundancy.
8.1.10 NVMe over Fibre Channel host attachments considerations
IBM Spectrum Virtualize now supports a single host initiator port that uses SCSI and NVMe connections to the storage.
Asymmetric Namespace Access was added to the FC-NVMe protocol standard, which gives it functions that are similar to Asymmetric Logical Unit Access (ALUA). As a result, FC-NVMe can now be used in stretched clusters.
IBM Spectrum Virtualize code 8.4.2 allows a maximum of 64 NVMe hosts per system and 16 hosts per I/O group, if no other types of hosts are attached. IBM Spectrum Virtualize code does not monitor or enforce these limits.
For more information if you are planning to use NVMe hosts with IBM FlashSystem, see this IBM Support web page.
Note: Do not map the same volumes to SCSI and NVMe hosts concurrently. Also, take care not to add NVMe hosts and SCSI hosts in the same host cluster.
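When one initiator is used for both protocols, define separate host objects for its SCSI WWPNs and for its NVMe qualified name (NQN), and map different volumes to each. The names and identifiers in the following sketch are hypothetical:
svctask mkhost -name linuxhost1_scsi -hbawwpn 10000000C9CCCC01
svctask mkhost -name linuxhost1_nvme -nqn nqn.2014-08.com.example:nvme:linuxhost1 -protocol nvme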
8.1.11 iSER host attachment considerations
On the IBM FlashSystem, iSCSI Extensions for RDMA (iSER) hosts with different operating systems can be attached to the system. iSER is a network protocol that extends the iSCSI to use Remote Direct Memory Access (RDMA).
If you are planning to use iSER hosts with your IBM FlashSystem, see the following links as you plan your environment:
8.2 IP multitenancy
IP support for all IBM Spectrum Virtualize products previously allowed only a single IPv4 and IPv6 address per port for use with Ethernet connectivity protocols (iSCSI, iSER).
As of 8.4.2, IBM Spectrum Virtualize removed that limitation and supports an increased limit of 64 IP addresses per port (IPv4, IPv6, or both). The increased IP limit also scales the VLAN support, which can be configured per IP address as required.
The object-based access control (OBAC) model, which provides per-tenant administration and partitioning for multitenant cloud environments, was also added to the Ethernet configuration management. For cloud platforms and environments, each port supports a maximum of two IP addresses and VLANs for multiple clients or tenants that share storage resources.
IBM Spectrum Virtualize code 8.4.2, with its new IP object model, introduced a feature that is named the portset. The portset object is a group of logical addresses that represents a typical IP function and traffic type. Portsets can be used for traffic types, such as host attachment, back-end storage connectivity (iSCSI only), or IP replication.
The following commands can be used to manage IP/Ethernet configuration:
lsportset
mkportset
chportset
rmportset
lsip (lsportip deprecated)
mkip (cfgportip deprecated)
rmip (rmportip deprecated)
lsportethernet (lsportip deprecated)
chportethernet (cfgportip deprecated)
mkhost (with parameter -portset to bind the host to portset)
chhost (with parameter -portset to bind the host to a portset)
A host can access storage through the IP addresses that are included in the portset that is mapped to the host. The process to bind a host to a portset includes the following steps:
1. Create portset.
2. Configure IPs with the portset.
3. Create a host object.
4. Bind the host to the portset.
5. Discover and login from the host.
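The following CLI sketch illustrates these steps for an iSCSI host. The portset name, IP addresses, VLAN, and IQN are hypothetical, and the exact parameters should be verified in IBM Documentation for your code level:
svctask mkportset -name tenantA -type host
svctask mkip -node node1 -port 1 -portset tenantA -ip 192.168.10.11 -prefix 24 -vlan 100
svctask mkip -node node2 -port 1 -portset tenantA -ip 192.168.10.12 -prefix 24 -vlan 100
svctask mkhost -name tenantA_host1 -iscsiname iqn.1994-05.com.example:hostA -portset tenantA
The host then discovers and logs in only through the IP addresses that belong to the tenantA portset.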
IP portsets can be added by using the management GUI or the command-line interface (CLI). You can configure portsets by using the GUI and selecting Settings → Network → Portsets.
Example 8-5 shows the results of using the lsportset command.
Example 8-5 Output of using the lsportset command
svcinfo lsportset
id name type port_count host_count lossless owner_id owner_name
0 portset0 host 2 8 no
1 portset1 replication 2 0
2 portset2 replication 0 0
3 portset3 storage 0 0
4 myportset host 0 0
After the portsets are created, IP addresses can be assigned by using the management GUI or the CLI. You can configure portsets by using the GUI and selecting Settings → Network → Ethernet Ports.
Example 8-6 shows the results of using the lsip command.
Example 8-6 Output of using the lsip command
svcinfo lsip
id node_id node_name port_id portset_id portset_name IP_address prefix vlan gateway owner_id owner_name
0 1 node1 1 0 portset0 10.0.240.110 24 10.0.240.9
1 1 node1 1 1 portset1 10.0.240.110 24 10.0.240.9
2 2 node2 1 0 portset0 10.0.240.111 24 10.0.240.9
3 2 node2 1 1 portset1 10.0.240.111 24 10.0.240.9
8.2.1 Considerations and limitations
Consider the following points about IP multitenancy:
Multiple hosts can be mapped to a single portset.
A single host cannot be mapped to multiple portsets.
IP addresses can belong to multiple portsets.
Port masking is used to enable or disable each port per feature for specific traffic types (host, storage, and replication).
Portset 0, Portset 3, and the replication portset are predefined.
When an IP address or host is configured, a portset must be specified.
Portset 0 is the default portset that is automatically configured when the system is updated or created and cannot be deleted.
Portset 0 allows administrators to continue to use an original configuration that does not require multi-tenancy.
After an update, all configured host objects are automatically mapped to Portset 0.
Portset 3 is used for iSCSI back-end storage virtualization.
Unconfigured logins are rejected upon discovery.
The iSNS function registers only the IP addresses in Portset 0 with the iSNS server.
Each port can be configured with only one unique routable IP address (gateway specified).
8.3 CSI Block Driver
Container Storage Interface (CSI) enables container orchestration platforms to perform actions on storage systems. The CSI Block Driver connects Kubernetes (K8s) and Red Hat OpenShift Container Platform (OCP) to IBM block storage devices (Spectrum Virtualize, FlashSystem, and DS8K). Persistent volumes (PVs) are used to dynamically provision block storage for stateful containers. Provisioning can be fully automated to scale, deploy, and manage containerized applications. The CSI driver enables hybrid multicloud environments for modern infrastructures.
To use IBM block storage CSI driver, complete the following steps:
1. Create an array secret.
2. Create a storage class.
3. Create a PersistentVolumeClaim (PVC) that is 1 Gb.
4. Display the PVC and the created persistent volume (PV).
5. Create a StatefulSet.
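The following Kubernetes sketch outlines steps 2 and 3. The provisioner name and parameter keys are assumptions that are based on the IBM block storage CSI driver documentation and must be verified there; all object names are hypothetical, and the secret-related parameters are omitted:
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibm-block-gold
provisioner: block.csi.ibm.com
parameters:
  pool: Pool0
  # secret parameters that reference the array secret go here (see the driver documentation)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: ibm-block-gold
EOF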
For more information about installing, configuring, and the use of CSI Block Driver, see this IBM Documentation web page.
8.4 Host pathing
Each host mapping associates a volume with a host object and allows all HBA ports in the host object to access the volume. You can map a volume to multiple host objects.
When a mapping is created, multiple paths normally exist across the SAN fabric from the hosts to the IBM Spectrum Virtualize system. Most operating systems present each path as a separate storage device. Therefore, multipathing software is required on the host. The multipathing software manages the paths that are available to the volume, presents a single storage device to the operating system, and provides failover if a path is lost.
If your IBM Spectrum Virtualize system uses NPIV, path failures that occur because of an offline node are masked from host multipathing.
8.4.1 Path selection
I/O for a specific volume is handled exclusively by the nodes in a single I/O group. Although both nodes in the I/O group can service I/O for the volume, the system prefers to use a consistent node, which is called the preferred node. The primary purposes of the preferred node are load balancing and determining which node destages writes to the back-end storage.
When a volume is created, an I/O group and a preferred node are defined; optionally, they can be set by the administrator. The owner node for a volume is the preferred node when both nodes are available.
IBM Spectrum Virtualize uses Asymmetric Logical Unit Access (ALUA), as do most multipathing drivers. Therefore, the multipathing driver gives preference to paths to the preferred node. Most modern storage systems use ALUA.
 
Note: Some competitors claim that ALUA means that IBM Spectrum Virtualize is effectively an active-passive cluster. This claim is not true. Both nodes in IBM Spectrum Virtualize can and do service I/O concurrently.
In the unlikely event that an I/O goes to the non-preferred node, that node services the I/O without issue.
8.5 I/O queues
The host operating system and HBA software must have a way to fairly prioritize I/O to the storage. The host bus might run faster than the I/O bus or the external storage; therefore, you must have a way to queue I/O to the devices. Each operating system and host adapter uses its own method to control the I/O queue.
The I/O queue can be controlled by using one of the following unique methods:
Host adapter-based
Memory and thread resources-based
Based on the number of commands that are outstanding for a device
8.5.1 Queue depths
Queue depth is used to control the number of concurrent operations that occur on different storage resources. Queue depth is the number of I/O operations that can be run in parallel on a device.
Queue depths apply at various levels of the system:
Disk or flash
Storage controller
Per volume and HBA on the host
For example, each IBM Spectrum Virtualize node has a queue depth of 10,000. A typical disk drive operates efficiently at a queue depth of 8. Most host volume queue depth defaults are around 32.
The guidance for limiting queue depths in large SANs that was described in previous documentation was replaced with calculations that consider the overall queue depth per I/O group.
No set rule is available for setting a queue depth value per host HBA or per volume. The requirements for your environment are driven by the intensity of each workload.
Ensure that one application or host cannot monopolize the entire controller queue. However, if a specific host application requires the lowest latency and highest throughput, consider giving it a proportionally larger share than the others.
Consider the following points:
A single IBM Spectrum Virtualize Fibre Channel port accepts a maximum concurrent queue depth of 2048.
A single IBM Spectrum Virtualize node accepts a maximum concurrent queue depth of 10,000. After this depth is reached, it reports a full status for the queue.
Host HBA queue depths must be set to the maximum (typically, 1024).
Host queue depth must be controlled through the per volume value:
 – A typical random workload volume must use approximately 32
 – To limit the workload of a volume use 4 or less
 – To maximize throughput and give a higher share to a volume, use 64
The total workload capability can be calculated by multiplying the number of volumes by their respective queue depths and summing the results. With low-latency storage, a workload of over 1 million IOPS can be achieved with a concurrency of 1,000 on a single I/O group.
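As an illustrative calculation with assumed values: a host with 20 volumes at a queue depth of 32 can have 20 x 32 = 640 concurrent I/Os outstanding. Ten such hosts on one I/O group total 6,400 concurrent I/Os, which fits within the combined 2 x 10,000 node limit of the I/O group, but the 2048-per-port limit and the number of hosts that share each target port must still be considered.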
For more information about queue depths, see the following IBM Documentation web pages:
8.6 Host clusters
IBM Spectrum Virtualize supports host clusters. This feature allows multiple hosts to access the same set of volumes.
Volumes that are mapped to that host cluster are assigned to all members of the host cluster with the same SCSI ID. A typical use case is to define a host cluster that contains all the WWPNs that belong to the hosts that are participating in a host operating system-based cluster, such as IBM PowerHA®, Microsoft Cluster Server (MSCS), or VMware ESXi clusters.
The following commands can be used to manage host clusters:
lshostcluster
lshostclustermember
lshostclustervolumemap
addhostclustermember
chhostcluster
mkhost (with parameter -hostcluster to create the host in one cluster)
mkhostcluster
mkvolumehostclustermap
rmhostclustermember
rmhostcluster
rmvolumehostclustermap
Host clusters can be added by using the GUI. When you use the GUI, the system assigns the SCSI IDs for the volumes (you can also assign them manually). For ease of management, it is suggested to use separate ranges of SCSI IDs for hosts and host clusters.
For example, you can use SCSI IDs 0 - 99 for non-cluster host volumes, and greater than 100 for the cluster host volumes. When you choose the System Assign option, the system automatically assigns the SCSI IDs starting from the first available in the sequence.
If you choose Self Assign, the system enables you to select the SCSI IDs manually for each volume. On the right side of the window, the SCSI IDs are shown that are used by the selected host or host cluster (see Figure 8-1 on page 377).
Figure 8-1 SCSI ID assignment on volume mappings
 
Note: Although extra care is always recommended when dealing with hosts, IBM Spectrum Virtualize does not allow you to join a host into a host cluster if it includes a volume mapping with a SCSI ID that also exists in the host cluster:
IBM_2145:ITSO-SVCLab:superuser>addhostclustermember -host ITSO_HOST3 ITSO_CLUSTER1
CMMVC9068E Hosts in the host cluster have conflicting SCSI ID's for their private mappings.
IBM_2145:ITSO-SVCLab:superuser>
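The following CLI sketch (with hypothetical names) creates a host cluster, adds a member, and maps a volume with an explicit SCSI ID in the suggested host cluster range:
svctask mkhostcluster -name ITSO_CLUSTER2
svctask addhostclustermember -host ITSO_HOST5 ITSO_CLUSTER2
svctask mkvolumehostclustermap -hostcluster ITSO_CLUSTER2 -scsi 100 Cluster_Vol_01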
8.6.1 Persistent reservations
To prevent hosts from sharing storage inadvertently, establish a storage reservation mechanism. The mechanisms for restricting access to IBM Spectrum Virtualize volumes use the SCSI-3 persistent reserve commands or the SCSI-2 reserve and release commands.
The host software uses several methods to implement host clusters. These methods require sharing the volumes on IBM Spectrum Virtualize between hosts. To share storage between hosts, maintain control over accessing the volumes. Some clustering software uses software locking methods.
Other methods of control can be chosen, by the clustering software or by the device drivers, that use the SCSI architecture reserve and release mechanisms. The multipathing software can change the type of reserve that is used from an earlier reserve to a persistent reserve, or remove the reserve.
Persistent reserve refers to a set of SCSI-3 standard commands and command options that provide SCSI initiators with the ability to establish, preempt, query, and reset a reservation policy with a specified target device. The functions that are provided by the persistent reserve commands are a superset of the original reserve or release commands.
The persistent reserve commands are incompatible with the earlier reserve or release mechanism. Also, target devices can support only reservations from the earlier mechanism or the new mechanism. Attempting to mix persistent reserve commands with earlier reserve or release commands results in the target device returning a reservation conflict error.
Earlier reserve and release mechanisms (SCSI-2) reserved the entire LUN (volume) for exclusive use down a single path. This approach prevents access from any other host or even access from the same host that uses a different host adapter. The persistent reserve design establishes a method and interface through a reserve policy attribute for SCSI disks. This design specifies the type of reservation (if any) that the operating system device driver establishes before it accesses data on the disk.
The following possible values are supported for the reserve policy:
No_reserve: No reservations are used on the disk.
Single_path: Earlier reserve or release commands are used on the disk.
PR_exclusive: Persistent reservation is used to establish exclusive host access to the disk.
PR_shared: Persistent reservation is used to establish shared host access to the disk.
When a device is opened (for example, when the AIX varyonvg command opens the underlying hdisks), the device driver checks the object data manager (ODM) for a reserve_policy and a PR_key_value. The driver then opens the device. For persistent reserve, each host that is attached to the shared disk must use a unique registration key value.
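On AIX, the reserve policy and registration key of an hdisk can be checked and changed by using lsattr and chdev, as in this sketch. The hdisk number and key value are hypothetical, and -P defers the change until the device is next opened:
lsattr -El hdisk4 -a reserve_policy -a PR_key_value
chdev -l hdisk4 -a reserve_policy=PR_shared -a PR_key_value=0x10001 -P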
8.6.2 Clearing reserves
It is possible to accidentally leave a reserve on the IBM Spectrum Virtualize volume or on the IBM Spectrum Virtualize MDisk during migration into IBM Spectrum Virtualize, or when disks are reused for another purpose. Several tools are available from the hosts to clear these reserves.
Instances exist in which a host image mode migration appears to succeed; however, problems occur when the volume is opened for read or write I/O. The problems can result from not removing the reserve on the MDisk before image mode migration is used in IBM Spectrum Virtualize.
You cannot clear a leftover reserve on an IBM Spectrum Virtualize MDisk from IBM Spectrum Virtualize. You must clear the reserve by mapping the MDisk back to the owning host and clearing it through host commands, or through back-end storage commands as advised by IBM technical support.
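For example, on a Linux host the sg_persist utility (part of the sg3_utils package) can display and, when IBM Support advises it, clear persistent reservations. The device name and reservation key are hypothetical:
sg_persist --in --read-keys /dev/sdX
sg_persist --in --read-reservation /dev/sdX
sg_persist --out --clear --param-rk=0x10001 /dev/sdX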
8.7 AIX hosts
This section discusses support and considerations for AIX hosts.
For more information about configuring AIX hosts, see this IBM Documentation web page.
8.7.1 Multipathing support
Subsystem Device Driver Path Control Module (SDD PCM) is no longer supported. Use the default AIX PCM. For more information, see this IBM Support web page.
8.7.2 AIX configuration recommendations
These device settings can be changed by using the chdev AIX command:
reserve_policy=no_reserve
The default reserve policy is single_path (SCSI-2 reserve). Unless a specific need exists for reservations, use no_reserve:
algorithm=shortest_queue
If coming from SDD PCM, AIX defaults to fail_over. You cannot set the algorithm to shortest_queue unless the reservation policy is no_reserve:
queue_depth=32
The default queue depth is 20. IBM recommends 32:
rw_timeout=30
The default for SDD PCM is 60; for AIX PCM, the default is 30. IBM recommends 30.
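A combined chdev sketch for these settings follows. The hdisk name is a placeholder, and -P defers the change until the device is next opened or the system is restarted:
chdev -l hdisk2 -a reserve_policy=no_reserve -a algorithm=shortest_queue -a queue_depth=32 -a rw_timeout=30 -P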
For more information about configuration best practices, see AIX Multi Path Best Practices.
8.8 Virtual I/O server hosts
This section discusses support and considerations for virtual I/O server hosts.
For more information about configuring VIOS hosts, see this IBM Documentation web page.
8.8.1 Multipathing support
Subsystem Device Driver Path Control Module (SDDPCM) is no longer supported. Use the default AIX PCM.
For more information, see this IBM Support web page. Where Virtual I/O Server SAN Boot or dual Virtual I/O Server configurations are required, see IBM System Storage Interoperation Center (SSIC).
For more information about VIOS, see this IBM Documentation web page.
8.8.2 VIOS configuration recommendations
These device settings can be changed by using the chdev AIX command:
reserve_policy=single_path
The default reserve policy is single_path (SCSI-2 reserve).
algorithm=fail_over
If coming from SDD PCM, AIX defaults to fail_over.
queue_depth=32
The default queue depth is 20. IBM recommends 32:
rw_timeout=30
The default for SDD PCM is 60; for AIX PCM, the default is 30. IBM recommends 30.
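From the VIOS restricted shell, the current values can be listed and changed with lsdev and chdev, as in this sketch with a placeholder hdisk name (-perm defers the change until the device is next opened):
lsdev -dev hdisk3 -attr
chdev -dev hdisk3 -attr queue_depth=32 rw_timeout=30 -perm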
8.8.3 Physical and logical volumes
Virtual SCSI (VSCSI) is based on a client/server relationship. The Virtual I/O Server (VIOS) owns the physical resources and acts as the server or target device.
Physical storage with attached disks (in this case, volumes on IBM Spectrum Virtualize) on the VIOS partition can be shared by one or more client logical partitions. These client logical partitions contain a virtual SCSI client adapter (scsi initiator) that detects these virtual devices (virtual scsi targets) as standard SCSI-compliant devices and LUNs.
You can create the following types of volumes on a VIOS:
Physical volume (PV) VSCSI hdisks
Logical volume (LV) VSCSI hdisks
PV VSCSI hdisks are entire LUNs from the VIOS perspective. If you are concerned about failure of a VIOS and configured redundant VIOSs for that reason, you must use PV VSCSI hdisks. An LV VSCSI hdisk cannot be served up from multiple VIOSs.
LV VSCSI hdisks are in LVM volume groups on the VIOS and must not span PVs in that volume group or be striped LVs. Because of these restrictions, use PV VSCSI hdisks.
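A PV VSCSI hdisk is exported to a client partition from the VIOS by using the mkvdev command, as in this sketch with hypothetical device and adapter names:
mkvdev -vdev hdisk5 -vadapter vhost0 -dev vtscsi_db01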
8.8.4 Identifying a disk for use as a virtual SCSI disk
The VIOS uses the following methods to uniquely identify a disk for use as a virtual SCSI disk:
Unique device identifier (UDID)
Physical volume identifier (PVID)
IEEE volume identifier
Each of these methods can result in different data formats on the disk. The preferred disk identification method for volumes is the use of UDIDs. For more information about how to determine your disk identifiers, see this IBM Documentation web page.
8.9 Microsoft Windows hosts
This section discusses support and considerations for Microsoft Windows hosts, including Microsoft Hyper-V.
For more information about configuring Windows hosts, see this IBM Documentation web page.
8.9.1 Multipathing support
For multipathing support, use Microsoft MPIO with the Microsoft Device Specific Module (MSDSM), which is included in the Windows Server operating system. The older Subsystem Device Driver Device Specific Module (SDDDSM) is no longer supported. For more information, see this IBM Support web page.
The Windows multipathing software supports the following maximum configuration:
Up to 8 paths to each volume
Up to 2048 volumes per Windows server or host
Up to 512 volumes per Hyper-V host
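MPIO can be directed to claim IBM Spectrum Virtualize devices with the mpclaim command, as in this sketch. The vendor and product string is space-padded (the vendor field is eight characters), so verify the exact string for your environment:
mpclaim -r -i -d "IBM     2145"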
8.9.2 Windows and Hyper-V configuration recommendations
Ensure the following components are configured:
Operating system service packs and patches and clustered-system software
HBAs and HBA device drivers
Multipathing drivers (MSDSM)
Regarding Disk Timeout for Windows Servers, change the disk I/O timeout value to 60 in the Windows registry.
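For example, the timeout can be set from an elevated command prompt; a restart might be required for the new value to take effect:
reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 60 /f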
8.10 Linux hosts
This section discusses support and considerations for Linux hosts.
For more information about configuring Linux hosts, see this IBM Documentation web page.
8.10.1 Multipathing support
IBM Spectrum Virtualize supports Linux hosts that use the native Device Mapper Multipathing (DM-MP) support in the operating system.
 
Note: Occasionally, we see storage administrators modify parameters in the multipath.conf file to address some perceived shortcoming in the DM-MP configuration. These modifications can create unintended and unexpected behaviors. The recommendations that are provided in IBM Documentation are optimal for most configurations.
8.10.2 Linux configuration recommendations
Consider the following points about configuration settings for Linux:
Settings and udev rules can be edited in /etc/multipath.conf.
Some Linux levels require polling_interval to be under the defaults section. If polling_interval is under the device section, comment it out by using the # key, as shown in the following example:
# polling_interval
Use default values as described at this IBM Documentation web page.
The dev_loss_tmo setting controls how long to wait before failed devices or paths are removed. If the value is too short, paths might be removed before they become available again. IBM recommends 120 seconds for this setting.
Preferred practice: The scsi_mod.inq_timeout should be set to 70. If this timeout is set incorrectly, it can cause paths to not be rediscovered after a node is restarted.
For more information about this setting and other attachment requirements, see this IBM Documentation web page.
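The effective values can be verified on the host, as in this sketch, which assumes that DM-MP is running and that the scsi_mod parameter is exposed in sysfs:
multipathd show config | grep -E 'dev_loss_tmo|polling_interval'
cat /sys/module/scsi_mod/parameters/inq_timeout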
8.11 Oracle Solaris hosts support
This section discusses support and considerations for Oracle hosts. SAN boot and clustering support is available for Oracle hosts.
For more information about configuring Solaris hosts, see this IBM Documentation web page.
8.11.1 Multipathing support
IBM Spectrum Virtualize supports multipathing for Oracle Solaris hosts through Oracle Solaris MPxIO, Symantec Veritas Volume Manager Dynamic Multipathing (DMP), and the Native Multipathing Plug-in (NMP). Specific configurations are dependent on file system requirements, HBA, and operating system level.
Note: The Native Multipathing Plug-In (NMP) does not support the Solaris operating system in a clustered-system environment. For more information about your supported configuration, see IBM System Storage Interoperation Center (SSIC).
8.11.2 Solaris MPxIO configuration recommendations
IBM Spectrum Virtualize software supports load balancing of the MPxIO software. Ensure the host object is configured with the type attribute set to tpgs as shown in the following example:
svctask mkhost -name new_name_arg -hbawwpn wwpn_list -type tpgs
In this command, -type specifies the type of host. Valid entries are hpux, tpgs, generic, openvms, adminlun, and hide_secondary. The tpgs host type enables extra target port unit attentions required by Solaris hosts.
Complete your configuration by using the following process:
1. Configure host objects with host type tpgs.
2. Install the latest Solaris host patches.
3. Copy the /kernel/drv/scsi_vhci.conf file to the /etc/driver/drv/scsi_vhci.conf file.
4. Set the load-balance="round-robin" parameter.
5. Set the auto-failback="enable" parameter.
6. Comment out the device-type-scsi-options-list = "IBM 2145", "symmetric-option" parameter.
7. Comment out the symmetric-option = 0x1000000 parameter.
8. Reboot hosts or run stmsboot -u based on the host level.
9. Verify changes by running luxadm display /dev/rdsk/cXtYdZs2 where cXtYdZs2 is your storage device.
10. Check that preferred node paths are primary and online and non-preferred node paths are secondary and online.
8.11.3 Symantec Veritas DMP configuration recommendations
When you are managing IBM Spectrum Virtualize storage in Symantec volume manager products, you must install an Array Support Library (ASL) on the host so that the volume manager is aware of the storage subsystem properties (active/active or active/passive).
If the suitable ASL is not installed, the volume manager does not claim the LUNs. The ASL is required to enable the special failover or failback multipathing that IBM Spectrum Virtualize requires for error recovery.
Use the commands that are shown in Example 8-7 to determine the basic configuration of a Symantec Veritas server.
Example 8-7 Determining the Symantec Veritas server configuration
pkginfo -l (lists all installed packages)
showrev -p |grep vxvm (to obtain version of volume manager)
vxddladm listsupport (to see which ASLs are configured)
vxdisk list
vxdmpadm listctrl all (shows all attached subsystems, and provides a type where possible)
vxdmpadm getsubpaths ctlr=cX (lists paths by controller)
vxdmpadm getsubpaths dmpnodename=cxtxdxs2 (lists paths by LUN)
The commands that are shown in Example 8-8 on page 384 and Example 8-9 on page 384 determine whether the IBM Spectrum Virtualize is correctly connected. They also show which ASL is used: native Dynamic Multi-Pathing (DMP), ASL, or SDD ASL.
Example 8-8 on page 384 shows what you see when Symantec Volume Manager correctly accesses IBM Spectrum Virtualize by using the SDD pass-through mode ASL.
Example 8-8 Symantec Volume Manager that uses SDD pass-through mode ASL
# vxdmpadm list enclosure all
ENCLR_NAME ENCLR_TYPE ENCLR_SNO STATUS
============================================================
OTHER_DISKS OTHER_DISKS OTHER_DISKS CONNECTED
VPATH_SANVC0 VPATH_SANVC 0200628002faXX00 CONNECTED
Example 8-9 shows what you see when IBM Spectrum Virtualize is configured by using native DMP ASL.
Example 8-9 IBM Spectrum Virtualize that is configured by using native ASL
# vxdmpadm list enclosure all
ENCLR_NAME ENCLR_TYPE ENCLR_SNO STATUS
============================================================
OTHER_DISKS OTHER_DISKS OTHER_DISKS CONNECTED
SAN_VC0 SAN_VC 0200628002faXX00 CONNECTED
For more information about the latest ASL levels to use native DMP, see the array-specific module table that is available at this Veritas web page.
To check the installed Symantec Veritas version, enter the following command:
showrev -p |grep vxvm
To check which IBM ASLs are configured into the Volume Manager, enter the following command:
vxddladm listsupport |grep -i ibm
After you install a new ASL by using the pkgadd command, restart your system or run the vxdctl enable command. To list the ASLs that are active, enter the following command:
vxddladm listsupport
8.12 HP 9000 and HP Integrity hosts
This section discusses support and considerations for HP 9000 and HP Integrity hosts. SAN boot is supported for all HP-UX 11.3x releases on both HP 9000 and HP Integrity servers.
For more information about configuring HP-UX hosts, see this IBM Documentation web page.
8.12.1 Multipathing support
IBM Spectrum Virtualize supports multipathing for HP-UX hosts through HP PVLinks and the Native Multipathing Plug-in (NMP). Dynamic multipathing is available when you add paths to a volume or when you present a new volume to a host.
To use PVLinks while NMP is installed, ensure that NMP did not configure a vpath for the specified volume.
For more information about a list of configuration maximums, see this IBM Documentation web page.
8.12.2 HP configuration recommendations
Consider the following configuration recommendations for HP:
HP-UX versions 11.31 September 2007 and later 0803 releases are supported.
HP-UX version 11.31 contains Native Multipathing as part of the mass storage stack feature.
Native Multipathing Plug-in supports only HP-UX 11iv1 and HP-UX 11iv2 operating systems in a clustered-system environment.
SCSI targets that use more than 8 LUNs must have the host object type attribute set to hpux.
Ensure the host object is configured with the type attribute set to hpux as shown in the following example:
svctask mkhost -name new_name_arg -hbawwpn wwpn_list -type hpux
Configure the Physical Volume timeout for NMP for 90 seconds.
Configure the Physical Volume timeout for PVLinks for 60 seconds (the default is 4 minutes).
8.13 VMware ESXi server hosts
This section discusses considerations for VMware hosts.
For more information about configuring VMware hosts, see this IBM Documentation web page.
To determine the various VMware ESXi levels that are supported, see the IBM System Storage Interoperation Center (SSIC).
8.13.1 Multipathing support
VMware features a built-in multipathing driver that supports IBM Spectrum Virtualize ALUA-preferred path algorithms.
The VMware multipathing software supports the following maximum configuration:
A total of 256 SCSI devices
Up to 32 paths to each volume
Up to 4096 paths per server
 
Tip: Each path to a volume equates to a single SCSI device.
For more information about a complete list of maximums, see VMware Configuration Maximums.
8.13.2 VMware configuration recommendations
For more information about specific configuration best practices for VMware, see this IBM Documentation web page.
Consider and verify the following settings:
The storage array type plug-in should be ALUA (VMW_SATP_ALUA).
Path selection policy should be RoundRobin (VMW_PSP_RR).
The Round Robin IOPS should be changed from 1000 to 1 so that I/Os are evenly distributed across as many ports on the system as possible. For more information about how to change this setting, see this web page.
If preferred, all VMware I/O paths (active optimized and active non-optimized) can be used by issuing the following esxcli command:
esxcli storage nmp psp roundrobin deviceconfig set --useano=1 -d <naa of the device>
For more information about active optimized and active non-optimized paths, see this IBM Documentation web page.
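The Round Robin IOPS value that is described above can be applied per device from the ESXi CLI, or as a claim rule so that newly discovered IBM 2145 devices inherit it. The device identifier is a placeholder, and the rule options should be verified for your ESXi level:
esxcli storage nmp psp roundrobin deviceconfig set --device <naa of the device> --type iops --iops 1
esxcli storage nmp satp rule add --satp VMW_SATP_ALUA --psp VMW_PSP_RR --psp-option "iops=1" --vendor IBM --model 2145 --claim-option tpgs_on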
Note: For more information about IBM i-related considerations, see Appendix A, “IBM i considerations” on page 601.