iSCSI virtualization overview
This chapter provides an overview of how to virtualize back-end storage controllers behind SAN Volume Controller or IBM Storwize systems that are connected over Internet Small Computer System Interface (iSCSI).
This chapter uses the term iSCSI controllers to refer to back-end storage systems that are connected over iSCSI. The term Fibre Channel (FC) controller is used to refer to back-end storage systems that are connected over FC.
This chapter details the planning considerations that are needed when virtualizing an iSCSI controller. It starts with describing the fundamental aspects that differentiate FC controllers from iSCSI controllers, and how to model connectivity to iSCSI controllers. The connectivity options, security aspects, and configuration limits are described, and followed by detailed steps to virtualize iSCSI controllers.
This chapter contains the following topics:
11.1 Planning considerations for iSCSI virtualization
The iSCSI protocol differs from FC in many ways, and virtualizing iSCSI controllers requires a different set of considerations. This chapter starts with recognizing these differences, presents a way to model connectivity to iSCSI controllers, and lays out various considerations when you plan to virtualize storage controllers over iSCSI.
11.1.1 Fibre Channel versus iSCSI virtualization
Because an FC fabric and an Ethernet network differ in some basic ways, the process of discovery and session establishment is different for iSCSI controllers.
Discovery and session establishment in Fibre Channel
Discovery in a switched FC fabric always happens through a name server. One of the switches in the fabric acts as a name server. When new entities (servers, storage, or virtualization appliances) are connected to the switch over FC, they automatically log in to the name server. When FC controllers join the fabric, they register their target capability. When an initiator logs in to the fabric, it queries the name server for targets on the fabric. It uses the discovery response to log in to the list of discovered targets. Therefore, discovery and session establishment is an automated process in the case of FC controllers. The only prerequisite is to ensure that the initiators and targets are part of the same zone.
Discovery and session establishment in iSCSI
The discovery process refers to several things:
Discovering controllers that are available in the network
The usage of name servers is optional for iSCSI. iSCSI initiators can be configured to directly connect to a target iSCSI controller.
Although name servers are used and supported by iSCSI, this function is not embedded in the Ethernet switches as it is with FC. The name servers must be configured as separate entities on the same Ethernet network as the iSCSI initiators and targets. An iSCSI target should explicitly register with an Internet Storage Name Service (iSNS) server, and the IP address of the name server must be explicitly configured on the target iSCSI controller. To discover targets, the initiator must also be explicitly configured with the name server address before discovery. Therefore, the discovery process in iSCSI is not automated as it is with FC.
After the initiator connects to the target, it automatically sends a SendTargets request to the target controller to get a list of the target IP addresses and IQNs. This step does not require any manual actions.
Getting a list of the capabilities of the target
The iSCSI initiator and target exchange capabilities as part of the operational negotiation phase of the iSCSI login process. This process does not require any manual actions.
Getting a list of available LUNs from the target
This is a SCSI workflow that is independent of the transport protocol that is used. All LUNs that are mapped to the initiator by using the target-specific LUN mapping rules are reported to the initiator by using SCSI commands.
Establishing sessions to iSCSI controllers from SAN Volume Controller or IBM Storwize storage systems
With FC connectivity, after an initiator discovers the target through the fabric name server, it automatically attempts to log in and establish sessions to the target ports through all zoned ports that are connected to the fabric.
When iSCSI connectivity is used to connect to back-end iSCSI controllers, sessions must be set up manually.
Figure 11-1 depicts the overall differences while discovering and establishing sessions to iSCSI controllers versus FC controllers.
Figure 11-1 Differences while discovering and establishing sessions to iSCSI controllers versus Fibre Channel controllers
11.1.2 Storage port configuration model
This section describes some of the basic concepts that you must know to virtualize iSCSI controllers behind SAN Volume Controller or IBM Storwize.
Source Ethernet ports
Whenever a storage controller is virtualized behind SAN Volume Controller or IBM Storwize for high availability, it is expected that every node in the initiator system has at least one path to the target controller. More paths between each initiator node and the target controller help increase the available bandwidth and enable more redundancy for the initiator node/target connectivity.
The first step in the iSCSI controller configuration is to select the Ethernet port on the nodes of the initiator system. To ensure symmetric connectivity from every node, the configuration CLI for iSCSI virtualization refers to the source Ethernet port, which is the port number of the Ethernet port on every node in the system, as shown in Figure 11-2.
Figure 11-2 SAN Volume Controller and IBM Storwize family iSCSI initiator
Storage ports
A storage port is a target controller iSCSI endpoint. It is identified by a target controller IQN that is accessible through an IP address on an Ethernet port of the target controller. This concept is fundamental to the configuration interface for iSCSI virtualization. While discovering target controllers and establishing connectivity between the initiator and target, sessions are established between source Ethernet ports (as described in “Source Ethernet ports” on page 215) and target storage ports (as described in this section).
The total candidate storage ports on a controller can be calculated by using the following equation:
Number of candidate storage ports = (number of target IQNs) x (number of configured Ethernet ports)
Here are two examples:
Example 1
An iSCSI controller has a single IQN per controller node and multiple Ethernet ports that are configured with IP addresses for iSCSI connectivity. All the LUNs from the controller are accessible through the same target IQN. If there are four IP addresses that are configured, then the target controller has four candidate storage ports.
Example 2
A controller has four Ethernet ports that are configured for iSCSI connectivity, but associates a single IQN with each exported LUN. If there are four LUNs (and therefore four IQNs), and all IQNs are accessible through all Ethernet ports, then the number of candidate storage ports on the controller is 16.
To establish a connection from source Ethernet ports to target storage ports, the initiator must send a discovery request from the source Ethernet ports to a specific storage port. The discovery output is a subset of the overall storage port candidates for the target controller.
Selecting the I/O group versus system-wide connectivity
There are iSCSI controllers that associate a different target IQN with every LUN that is exported, which requires a separate session between the initiator nodes and target nodes for every LUN that is mapped to the initiator by the target.
Because there are limits on the total number of sessions that the initiators and targets support, you are not required to have connectivity to an iSCSI target controller from every node in the initiator system. When working with controllers that associate an IQN with each LUN, you can configure connectivity to a target from the nodes of a single I/O group only, which reduces session sprawl for a target controller while still ensuring redundant connectivity through the nodes of that I/O group.
Before you decide whether to use I/O group versus system-wide connectivity to a target controller, see the target controller documentation to understand how many IQNs are exported by the controller and how many LUNs can be accessed through the target. Also, you must consider the maximum number of sessions that are allowed by the initiator (SAN Volume Controller or IBM Storwize storage system) and target controller.
By default, system-wide connectivity is assumed.
11.1.3 Controller considerations
All iSCSI controllers provide access to storage as SCSI LUNs. There are various implementations of iSCSI controllers, which might require different configuration steps to virtualize the controller behind SAN Volume Controller or IBM Storwize storage systems.
Here are the primary external characteristics that differ across controllers:
Number of target IQNs that are reported to initiators
An iSCSI initiator discovers an iSCSI target by sending a SendTargets request. The target responds with a list of the IQNs and IP addresses that are configured on the controller and to which the initiator is granted access.
Some controllers export only a single target IQN through multiple Ethernet ports. All the mapped LUNs from the controller are accessible to the initiator if the initiator logs in to the target IQN.
There are other controllers that export multiple target IQNs. Each IQN maps to a different LUN.
Number of IP addresses that are visible to initiators
All iSCSI controllers enable connectivity through multiple Ethernet ports to enable redundant connections and to increase the throughput of the data transfer. When an initiator must connect to a target, it must specify a target IP address. Although most controllers enable setting up connections from initiator ports to target ports through the different target IP addresses, some controllers recommend establishing connections to a virtual IP address to enable the controller to load balance connections across the different Ethernet ports.
Depending on how many IQNs are available from a target and how many target IP addresses initiators can establish connections to, the configuration model from initiator to target can be chosen to provide sufficient redundancy and maximum throughput.
Having an IQN per LUN leads to many sessions being established, especially if there are many LUNs that are available from the storage controller. There are limits on the number of sessions that can be established from every node and the number of controllers that can be virtualized behind SAN Volume Controller or IBM Storwize storage systems.
Traditionally, every back-end controller (independent of the type of connectivity that is used) must have at least one session from every initiator node. To avoid session sprawl, for iSCSI controllers that map a target IQN for every LUN that is exported, there is an option that is available to establish sessions from a single IO group only. Also, if the target controller recommends connecting through only a single virtual IP of the target, the session creation process should consider distributing the sessions across the available initiator ports for maximum throughput.
Some controllers map a single target IQN for a group of clustered iSCSI controllers. In such a case, it is not possible to distinguish individual target controllers within the clustered system. Based on the number of Ethernet ports that are available on initiator nodes, the number of sessions can be configured to connect across Ethernet ports of multiple target controllers in the clustered system.
11.1.4 Stretched cluster and HyperSwap topology
iSCSI back-end controllers are supported in both stretched cluster and HyperSwap system topologies.
In a stretched system configuration, each node on the system can be on a different site. If one site experiences a failure, the other site can continue to operate without disruption. You can create an IBM HyperSwap topology system configuration where each I/O group in the system is physically on a different site. When used with active-active relationships to create HyperSwap volumes, these configurations can be used to maintain access to data on the system when power failures or site-wide outages occur.
In both HyperSwap and stretched configurations, each site is defined as an independent failure domain. If one site experiences a failure, then the other site can continue to operate without disruption. You must also configure a third site to host a quorum device or IP quorum application that automatically breaks a tie in case of a link failure between the two main sites.
Controllers that are connected through iSCSI can be placed in a system with a stretched or HyperSwap topology. When a controller is placed in such a system, it should be visible and have connectivity only to nodes at a particular site, regardless of whether I/O group or cluster-wide connectivity was set while establishing sessions by using the addiscsistorageport CLI.
To enable connectivity only to nodes of a particular site, you can use the optional site parameter of addiscsistorageport. When you run addiscsistorageport, you can specify the site_name or site_id that is shown by the lssite CLI. The lsiscsistorageport command also shows which site (if any) was used to establish connectivity.
Use a third site to house a quorum disk or IP quorum application. Quorum disks cannot be on iSCSI-attached storage systems; therefore, iSCSI storage cannot be configured on a third site.
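As a short, hedged illustration of restricting connectivity to a single site (the site names returned by lssite depend on your topology configuration, and the discovery row ID 0 is only an example of a previously discovered storage port):
IBM_2145:Redbooks_cluster1:superuser>lssite
IBM_2145:Redbooks_cluster1:superuser>addiscsistorageport -site site1 0
The lsiscsistorageport output then reports the site_id and site_name for the sessions that were established. The full discovery and session-establishment workflow is described in 11.2, “iSCSI external virtualization steps”.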
11.1.5 Security
The security features for connectivity to back-end controllers can be categorized into two areas:
Authentication
A target controller might require that initiator nodes present authentication credentials with every discovery or normal iSCSI session. To enable this function, one-way CHAP authentication is supported by most iSCSI controllers. The CLIs for discovery and session establishment can specify the user name and CHAP secret (see the sketch after this list). These credentials, if specified, are stored persistently and used every time that a session is reestablished from initiator source ports to target ports.
Access control
Access control is enabled at each target controller. A target controller creates host objects and maps volumes to hosts. Some controllers present multiple volumes that are mapped to a host through a single target IQN, and other controllers might present a unique IQN for each LUN. In the latter case, a discovery request from the initiator system to the target lists multiple target IQNs, and a separate session that is initiated by addiscsistorageport must be established for every discovered target IQN. In the former case, a single invocation of addiscsistorageport is sufficient to provide the initiator nodes access to all volumes that are exported by the target controller to the host.
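As a hedged sketch of how the CHAP credentials that are described above are supplied during discovery and session establishment (the -username parameter and the credential values shown here are illustrative assumptions, and the discovery row ID 0 refers to a previously discovered storage port):
IBM_2145:Redbooks_cluster1:superuser>detectiscsistorageportcandidate -srcportid 3 -targetip 192.168.104.190 -username iscsiuser1 -chapsecret secret1
IBM_2145:Redbooks_cluster1:superuser>addiscsistorageport -username iscsiuser1 -chapsecret secret1 0
If no user name is supplied, the initiator node IQN is used as the default user name (see 11.2.6, “Discovering storage ports from the initiator”).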
11.1.6 Limits and considerations
Quorum disks cannot be placed on iSCSI-attached storage systems. The following limits and considerations apply:
iSCSI storage cannot be configured on a third site when using stretched or HyperSwap topology. Using IP quorum is preferred as a tie breaker in a stretched or HyperSwap topology.
An iSCSI MDisk should not be selected when choosing a quorum disk.
SAN Volume Controller iSCSI initiators show only the first 64 discovered IQNs that are sent by the target controller in response to a discovery request from the initiator.
Connections can be established only from an initiator source to the first 64 IQNs for any back-end iSCSI controller.
A maximum of 256 sessions can be established from each initiator node to one or more target storage ports.
When a discovery request is sent from initiator source ports to target storage ports, the target sends a list of IQNs. In controllers that export a unique IQN per LUN, the list might depend on the number of configured LUNs in the target controller. Because each target IQN requires one or more redundant sessions to be established from all nodes in an initiator IO group / cluster, this situation can lead to session sprawl both at the initiator and target. A SAN Volume Controller connectivity limit of 64 target IQNs per controller helps manage this sprawl.
11.2 iSCSI external virtualization steps
This section describes the steps that are required to configure an IBM Storwize storage system as an iSCSI initiator to connect to an IBM Storwize or non-IBM Storwize iSCSI back-end controller.
11.2.1 Port selection
List the Ethernet ports on the Storwize nodes, which act as the initiator, and the Ethernet ports on the iSCSI controller, which serves as the back end or target. Choose ports with the same capability or speed to avoid bottlenecks on the I/O path during data migration and virtualization. The port listing on the initiator nodes can be viewed by using the svcinfo lsportip command. See the documentation for the target controller to learn how to view its Ethernet ports and speeds.
 
Note: The port speed for IBM Storwize ports is visible only when the port is cabled and the link is active.
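As a minimal sketch, the following commands list the Ethernet ports and show detailed information for a selected port ID (the exact output columns vary by code level, but include the port state, speed, and IP configuration):
IBM_2145:Redbooks_cluster1:superuser>lsportip
IBM_2145:Redbooks_cluster1:superuser>lsportip 3
The detailed view for a specific port ID helps confirm that the link is active on every node before the port is configured for storage connectivity.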
11.2.2 Source port configuration
The Ethernet ports on the initiator SAN Volume Controller system can be configured for host attachment, back-end storage attachment, or IP replication. Back-end storage connectivity is turned off by default for SAN Volume Controller Ethernet ports. Configure the initiator ports by using the GUI or the CLI (svctask cfgportip) and set the storage flag (or storage_6 for IPv6) to yes to enable the ports to be used for back-end storage connectivity.
Example 11-1 shows an example for enabling a source port on the initiator system for back-end storage controller connectivity by using the IPv4 address that is configured on the port.
Example 11-1 Source port configuration
IBM_2145:Redbooks_cluster1:superuser>cfgportip -node node1 -ip 192.168.104.109 -mask 255.255.0.0 -gw 192.168.100.1 -storage yes 3
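If the connectivity to the back-end controller uses IPv6, a similar hedged sketch applies (the address values are placeholders, and it is assumed that the IPv6 variants of the parameters, including -storage_6, are available at your code level):
IBM_2145:Redbooks_cluster1:superuser>cfgportip -node node1 -ip_6 2001:db8::109 -prefix_6 64 -gw_6 2001:db8::1 -storage_6 yes 3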
11.2.3 Target port configuration
Configure the selected ports on the target controller for host connectivity. For specific steps to configure ports on different back-end controllers that can be virtualized by the IBM Storwize initiator system, see the following chapters.
11.2.4 Host mapping and authentication settings on target controllers
You must create host mappings on the target system so that the target system recognizes the initiator system and presents volumes to it to be virtualized. For a target IBM Storwize controller, this process involves creating a host object, associating the IQNs of the initiator system’s nodes with that host object, and mapping volumes to that host object (these are the volumes that the initiator system virtualizes). See the specific target controller documentation for host attachment guidance. The IQN of a node in an initiator Storwize system can be found by using the svcinfo lsnodecanister command.
Optionally, if the target controller requires authentication during discovery or login from the initiator, the user name and CHAP secret must be configured on the target system. For more information about how to accomplish this task on different controllers, see the following chapters.
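For a target IBM Storwize controller, a minimal hedged sketch of this sequence follows. The host object name, volume name, CHAP secret, and target cluster prompt are illustrative, and the IQNs must be the initiator node IQNs that are reported by svcinfo lsnodecanister:
IBM_2145:Redbooks_backend:superuser>mkhost -name Redbooks_cluster1 -iscsiname iqn.1986-03.com.ibm:2145.redbookscluster1.node1
IBM_2145:Redbooks_backend:superuser>addhostport -iscsiname iqn.1986-03.com.ibm:2145.redbookscluster1.node2 Redbooks_cluster1
IBM_2145:Redbooks_backend:superuser>chhost -chapsecret secret1 Redbooks_cluster1
IBM_2145:Redbooks_backend:superuser>mkvdiskhostmap -host Redbooks_cluster1 vdisk0
The chhost command is needed only if CHAP authentication is required. Repeat addhostport for the remaining initiator node IQNs and mkvdiskhostmap for each volume that is to be virtualized.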
11.2.5 Understanding the storage port model for a back-end controller
Different controllers expose a different number of IQNs and target port IP addresses. As a result, there might be a different number of storage ports, as described in “Storage ports” on page 216. The number of storage ports depends on the controller type. To identify which storage port model suits the back-end controller that is being virtualized, see 11.1.3, “Controller considerations” on page 217. This section also enables you to understand the discovery output and how sessions can be set up between the initiator source ports and target storage ports by running multiple iterations of discovery and configuration.
You should decide which iSCSI sessions to establish during the planning phase because this decision constrains physical aspects of your configuration, such as the number of Ethernet switches and physical ports that are required.
11.2.6 Discovering storage ports from the initiator
After all the initiator source ports and target storage ports are configured, you must discover the target storage ports by using the initiator source ports. This discovery can be achieved either by using the system management GUI or by using the svctask detectiscsistorageportcandidate command. The command specifies the source port number and the target IP address to be discovered.
Optional parameters that can be specified include the user name and CHAP secret for controllers that require CHAP authentication during discovery. If a CHAP secret is specified without a user name, the initiator node IQN is used as the default user name.
Discovery can be done in two modes:
Cluster wide: Discovery is done from the numbered source port on all nodes in the initiator system to a target port IP. This is the default mode for discovery, unless specified otherwise.
I/O Group wide: A discovery request to target controller storage ports can be sent only from a selected source port of a specified I/O group in the initiator system.
The SAN Volume Controller iSCSI initiator sends an iSCSI SendTargets discovery request to the back-end controller target port IP that is specified. If sessions must be established to multiple target ports, a discovery request must be sent to each target port, and session establishment must be completed for that port before the next discovery is started, because a new discovery request purges the previous discovery output.
Example 11-2 shows an example of triggering a cluster-wide discovery from source port 3 of the initiator system. The discovery request is sent to the controller with target port 192.168.104.190. The target controller requires authentication, so chapsecret is specified. Because the user name (which is optional) is not specified, the node IQN is used to set the user name while sending the discovery request.
Example 11-2 Triggering a cluster-wide discovery
IBM_2145:Redbooks_cluster1:superuser>detectiscsistorageportcandidate -srcportid 3 -targetip 192.168.104.190 -chapsecret secret1
Example 11-3 shows an example of triggering an I/O group-specific discovery from source port 3 of the initiator system. The example is similar to Example 11-2 except that the discovery request is initiated only through the nodes of I/O group 2, assuming that the initiator system has an I/O group with I/O group ID 2. Also, in this example, the target controller does not require any authentication for discovery, so the user name and CHAP secret are not required.
Example 11-3 Triggering an I/O group discovery
IBM_2145:Redbooks_cluster1:superuser>detectiscsistorageportcandidate -iogrp 2 -srcportid 3 -targetip 192.168.104.198
11.2.7 Viewing the discovery results
The output of the successful discovery can be seen in the management GUI or by running svcinfo lsiscsistorageportcandidate. For each target storage port to which the discovery request is sent, the discovery output shows multiple rows of output, one per discovered target IQN.
Example 11-4 shows the result of the successful discovery. The discovered controller has only a single target IQN.
Example 11-4 Discovery results
IBM_2145:Redbooks_cluster1:superuser>lsiscsistorageportcandidate
id src_port_id target_ipv4 target_ipv6 target_iscsiname iogroup_list configured status site_id site_name
0 3 192.168.104.190 iqn.1986-03.com.ibm:2145.redbooksbackendcluster.node1 1:1:-:- no full
The first column in the discovery output shows the discovery row ID. This row ID is used in the next step while setting up sessions to the discovered target controller. The source port ID and target IP that are specified in the discovery request are shown next. The IQN of the discovered controller is shown under the target_iscsiname column.
The iogroup_list column shows a colon-separated list of per-I/O group flags that indicate through which I/O groups the target port was discovered.
The configured column indicates whether sessions are already established to the discovered target IQN as a result of a previous discovery and session establishment. The values are yes and no (no by default).
The status column indicates whether the discovery was successful.
The results from a discovery request are valid only until the following actions occur:
A new discovery is issued, which purges the previous discovery results before sending a new discovery request.
T2/T3/T4 cluster recovery is done, which purges the previous discovery results.
In general, it is preferable to run discovery immediately before adding a session.
11.2.8 Adding sessions to discovered storage ports
The next step after viewing the discovery results is to establish sessions between the initiator source ports and the target storage ports that are shown in 11.2.7, “Viewing the discovery results” on page 222. This task can be done from the management GUI by selecting the discovered storage ports, or by running the svctask addiscsistorageport command and referencing the row number from the output of the svcinfo lsiscsistorageportcandidate command.
If the discovery output shows multiple IQNs corresponding to multiple LUNs on the target controller, run addiscsistorageport once for every discovered storage port, referencing the row number of that storage port in the output of the svcinfo lsiscsistorageportcandidate command.
There are several optional parameters that can be specified while adding sessions to target storage ports. A user name and CHAP secret can be specified if the back-end controller being virtualized requires CHAP authentication for session establishment. If a CHAP secret is specified without a user name, the initiator node IQN is used as the user name by default.
Sessions can be configured in two modes:
Cluster-wide: Sessions are established from the numbered source port on all nodes in the initiator system to the specified target storage port (referenced by the discovery output row number). This is the default mode for session establishment, unless specified otherwise.
I/O group-wide: Sessions to a target controller storage port are established only from the selected source port of a specified I/O group in the initiator system. This is the recommended mode when virtualizing controllers that expose every LUN as a unique IQN. Specifying a single I/O group for target controller connectivity helps ensure that a single target controller does not consume a large share of the overall allowed back-end session count.
Another option that can be specified when establishing sessions from initiator source ports to target storage ports is the site. This option is useful when an initiator system is configured in a stretched cluster or HyperSwap configuration. In such configurations, a controller should be visible only to nodes in a particular site. The allowed options for storage controller connectivity are site 1 and site 2. iSCSI controllers are not supported on site 3, which is used for tie-breaking in split-brain scenarios.
Example 11-5 shows how to establish connectivity to the target controller that was discovered in Example 11-4 on page 222.
Example 11-5 Establishing connectivity to the target controller
IBM_2145:Redbooks_cluster1:superuser>addiscsistorageport -chapsecret secret1 0
The addiscsistorageport command can specify a site name or site ID in case the system is configured with multiple sites, as in the case of a stretched cluster or HyperSwap topology.
Example 11-6 shows how to establish connectivity to the target controller that was discovered in Example 11-4 on page 222 through nodes only in site 1.
Example 11-6 Establishing connectivity to the target controller
IBM_2145:Redbooks_cluster1:superuser>addiscsistorageport -site 1 -chapsecret secret1 0
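If connectivity must instead be restricted to a single I/O group (for example, for controllers that export an IQN per LUN), a hedged sketch of the I/O group-wide form follows. The -iogrp parameter is assumed to mirror the discovery command in Example 11-3, and I/O group 2 and discovery row 0 are illustrative:
IBM_2145:Redbooks_cluster1:superuser>addiscsistorageport -iogrp 2 -chapsecret secret1 0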
11.2.9 Viewing established sessions to storage ports
After sessions are established to the storage ports of a back-end controller, they can be viewed either in the management GUI or by using the lsiscsistorageport command, which provides two views: a concise view and a detailed view.
The concise view provides a consolidated listing of all sessions from the initiator system to all back-end target iSCSI controllers. Each row of output refers to connectivity from the initiator system nodes to a single target storage port; each invocation of the svctask addiscsistorageport command results in a separate row in the output of the lsiscsistorageport command.
The detailed view can be used to see the connectivity status from every initiator node port to the target storage ports for a selected row of the output of lsiscsistorageport.
Example 11-7 shows the concise lsiscsistorageport view on the initiator system in the example that is illustrated by Example 11-1 on page 220 through Example 11-4 on page 222. The steps in the preceding examples are repeated to add connectivity to node2 of the target controller through source ports 3 and 4.
Example 11-7 Output of the lsiscsistorageport command
IBM_2145:Redbooks_cluster1:superuser>lsiscsistorageport
id src_port_id target_ipv4 target_ipv6 target_iscsiname controller_id iogroup_list status site_id site_name
1 3 192.168.104.190 iqn.1986-03.com.ibm:2145.redbooksbackendcluster.node1 4 1:1:-:- full
2 3 192.168.104.192 iqn.1986-03.com.ibm:2145.redbooksbackendcluster.node2 4 1:1:-:- full
3 4 192.168.104.191 iqn.1986-03.com.ibm:2145.redbooksbackendcluster.node1 4 1:1:-:- full
4 4 192.168.104.193 iqn.1986-03.com.ibm:2145.redbooksbackendcluster.node2 4 1:1:-:- full
The first column specifies the row ID and denotes the sessions that are established from the specified initiator node ports to a back-end controller target iSCSI qualified name (IQN) through a target Internet Protocol (IP) address. The value is 0 - 1024.
The column src_port_id is the source port identifier for the node Ethernet port number that is displayed in the lsportip output.
The columns target_ipv4 / target_ipv6 indicate the IPv4/ IPv6 address of the iSCSI back-end controller target port to which connectivity is established.
The target_iscsiname indicates the IQN of the iSCSI back-end controller to which the connectivity is established.
The controller_id indicates the controller ID that is displayed in the lscontroller output.
The iogroup_list indicates the colon-separated list of I/O groups through which connectivity is established to the target port. The values are 0, 1, and -:
0 indicates that the I/O group is available in the system, but discovery is either not triggered through the I/O group or discovery through the I/O group fails.
1 indicates that the I/O group is present and discovery is successful through the I/O group.
- indicates that the I/O group is not valid or is not present in the system.
Status indicates the connectivity status from all nodes in the system to the target port. Here are the values:
Full: If you specify a single I/O group by using the addiscsistorageport command and you establish the session from all nodes in the specified I/O group, the status is full.
Partial: If you specify a single I/O group by using the addiscsistorageport command and you establish the session from a single node in the specified I/O group, the status is partial.
None: If you specify a single I/O group by using the addiscsistorageport command and you do not establish the session from any node in the specified I/O group, the status is none.
The site_id and site_name parameters indicate the site ID / site name (if the nodes being discovered belong to a site). These parameters apply to stretched and HyperSwap systems.
Example 11-8 provides sample output of the detailed view of the lsiscsistorageport command. It shows connectivity that is established to a back-end IBM Storwize controller; however, the output is similar for other target controllers.
Example 11-8 Output of the lsiscsistorageport command
IBM_2145:Redbooks_cluster1:superuser>lsiscsistorageport 1
id 1
src_port_id 3
target_ipv4 192.168.104.190
target_ipv6
target_iscsiname iqn.1986-03.com.ibm:2145.redbooksbackendcluster.node1
controller_id 4
iogroup_list 1:1:-:-
status full
site_id
site_name
node_id 1
node_name node1
src_ipv4 192.168.104.199
src_ipv6
src_iscsiname iqn.1986-03.com.ibm:2145.redbookscluster1.node1
connected yes
node_id 2
node_name node2
src_ipv4 192.168.104.197
src_ipv6
src_iscsiname iqn.1986-03.com.ibm:2145.redbookscluster1.node2
connected yes
node_id 3
node_name node3
src_ipv4 192.168.104.198
src_ipv6
src_iscsiname iqn.1986-03.com.ibm:2145.redbookscluster1.node3
connected yes
node_id 4
node_name node4
src_ipv4 192.168.104.196
src_ipv6
src_iscsiname iqn.1986-03.com.ibm:2145.redbookscluster1.node4
connected yes
The detailed view enumerates all the fields that are visible in the concise view. While each row in the concise view sums up the connectivity status across all nodes in an I/O group or all I/O groups of the initiator system, the detailed view provides a connectivity view from each initiator node that is part of the site, I/O group, or cluster that is indicated in the concise view output. It helps to view and isolate connectivity issues to an individual initiator node and source port.
The node_name / node_id indicates the node name and node ID of the initiator node through which a session is established to a storage port that is identified by target_ipv4 / target_ipv6 and target_iscsiname. The node_name and node_id are as referenced in other clustered system CLIs.
src_ipv4 / src_ipv6 refers to the IPv4 / IPv6 address of the source port that is referred to by src_port_id on the initiator node that is referenced by node_name / node_id.
src_iscsiname refers to the IQN of the source node that is referenced by node_name / node_id.
connected indicates whether the connection is successfully established from the specified source port (src_port_id) of the initiator node that is referenced by node_name / node_id, which has IP address src_ipv4 / src_ipv6, to the target storage port with IQN target_iscsiname and IP address target_ipv4 / target_ipv6. The values are yes and no.
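After the connected column shows yes for all expected node ports, the back-end controller is presented to the virtualization layer. As a minimal hedged sketch of this verification step, the controller should appear in the lscontroller output (controller ID 4 in the preceding examples), and the volumes that the target maps to the initiator surface as MDisks, which can be rediscovered manually if they do not appear automatically:
IBM_2145:Redbooks_cluster1:superuser>lscontroller
IBM_2145:Redbooks_cluster1:superuser>detectmdisk
IBM_2145:Redbooks_cluster1:superuser>lsmdisk
The discovered MDisks can then be placed into storage pools in the same way as MDisks that are presented by FC-attached controllers.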