External virtualization of IBM Storwize storage systems
This chapter contains a detailed description of using an IBM Storwize system as a back-end storage controller connected through Internet Small Computer System Interface (iSCSI).
In such a configuration, there are two Storwize systems: one that is acting as a back-end storage controller, and another that is virtualizing it. The system that is acting as the back end is an iSCSI target, and the system that is virtualizing it is an iSCSI initiator.
To avoid confusion, this chapter uses the term target system to refer to the system that is acting as the back end, and the term initiator system to refer to the system that is virtualizing it.
This chapter describes the following topics:
- 12.1, "Planning considerations"
- 12.2, "Target configuration"
- 12.3, "Initiator configuration"
- 12.4, "Configuration validation"
12.1 Planning considerations
This section describes the things that you should consider before using an IBM Storwize system as an iSCSI-connected back end.
You should decide which iSCSI sessions to establish during the planning phase because this decision constrains physical aspects of your configuration, such as the number of Ethernet switches and physical ports that are required.
Because each node has a unique IQN in SAN Volume Controller and IBM Storwize systems, it is possible to have sessions from each node of the initiator system to each node of the target system. To achieve maximum availability, each node in the initiator system should establish at least one session with each node in the target system. This is also known as cluster-wide connectivity.
Furthermore, for redundancy, each pair of nodes should have two sessions, with each session using a link through a different Ethernet switch. Each SAN Volume Controller or IBM Storwize system has a limit on the maximum number of external virtualization iSCSI sessions. In addition, there is no performance benefit from having more than two sessions between a pair of nodes. Therefore, do not establish more than two sessions between each pair of nodes.
To avoid bottlenecks in the paths between the initiator system and the target system, ensure that all of the ports that are involved in a connection are at the same speed. For example, when using a 10 Gbps source port on the initiator system, ensure that the target port is also 10 Gbps, and that the connection goes through a 10 Gbps switch. When establishing cluster-wide connectivity, the initiator system uses the same source port from each initiator-system node to connect to a given target port. Therefore, you should also ensure that each source port with a given index on the initiator system has the same speed.
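One way to check port speeds on either system is the lsportip command, whose output includes a speed column for each Ethernet port. The following is a minimal sketch using the system names from this chapter's example; compare the speed entries for the ports that you plan to use on both systems.
IBM_2145:Redbooks_cluster1:superuser>lsportip
IBM_Storwize:Redbooks_Backend_cluster:superuser>lsportip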
This chapter uses the configuration that is shown in Figure 12-1 as an example, showing the steps that are required to set up a simple configuration that includes a Storwize target system.
Figure 12-1 An example configuration with an iSCSI-connected Storwize target system
In Figure 12-1 on page 228, each node of the initiator system has two redundant paths to each node of the target system, with one using Ethernet Switch 1 (in orange) and one using Ethernet Switch 2 (in blue). The initiator system has cluster-wide connectivity to each node of the target system for maximum availability. Each source port of the initiator system can have at most one connection to each port of the target system.
12.1.1 Limits and considerations
From the perspective of the target system, the initiator system is an iSCSI-attached host. Therefore, the limits and considerations that apply to the use of a Storwize system as the target system include the iSCSI host-attachment limits and considerations for IBM Storwize systems. Section 4.5, “IBM Storwize family and iSCSI limits” on page 53 gives references to the detailed lists of limits and considerations for SAN Volume Controller and IBM Storwize products. Also, the limits and considerations for iSCSI external virtualization in general still apply. Section 11.1.6, “Limits and considerations” on page 219 gives references to detailed lists of these limits and restrictions.
12.1.2 Performance considerations
To maximize the performance benefit from using jumbo frames, use the largest possible MTU size in any given network configuration. There is no performance benefit from increasing the MTU size unless all the components in the path use the increased MTU size, so the optimal MTU size setting depends on the MTU size settings of the whole network. The largest MTU size that is supported by IBM Storwize systems is 9000. You can change the MTU size that is configured on a port on either the initiator system or the target system by using the cfgportip command. For more information, see “Ethernet Jumbo frames” on page 208.
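For example, assuming that every switch and port in the path supports an MTU of 9000, the MTU of port 3 on each node of the target system in this chapter's example could be set as follows. This is a sketch only; there is no feedback from a successful invocation.
IBM_Storwize:Redbooks_Backend_cluster:superuser>cfgportip -node node1 -mtu 9000 3
IBM_Storwize:Redbooks_Backend_cluster:superuser>cfgportip -node node2 -mtu 9000 3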
12.2 Target configuration
This section describes steps that must be carried out on the target system to enable it to be used as a back-end storage controller that is connected by iSCSI. This section assumes that the back-end system already has volumes set up to be virtualized by the initiator system. For more information, see Implementing the IBM Storwize V5000 Gen2 (including the Storwize V5010, V5020, and V5030), SG24-8162.
From the perspective of the target system, the initiator system is a host that is attached by iSCSI. For this reason, configuring the target system in this case is the same as configuring an IBM Storwize system for iSCSI-connected host attachment, but using the initiator system as the “host”. For more information, see Chapter 7, “Configuring the IBM Storwize storage system and hosts for iSCSI” on page 83.
12.2.1 System layer
If the initiator system is a SAN Volume Controller or is in the replication layer, the Storwize target system must be in the storage layer. You can change the system layer by using the chsystem command. For more information about system layers, see IBM Knowledge Center.
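As a sketch, you can check the current layer by looking at the layer field in the lssystem output and, if required, change it with chsystem. The system must satisfy certain conditions (for example, it must not have layer-dependent relationships with other systems) before the layer can be changed, so treat the following as illustrative only. There is no feedback from a successful chsystem invocation.
IBM_Storwize:Redbooks_Backend_cluster:superuser>lssystem
IBM_Storwize:Redbooks_Backend_cluster:superuser>chsystem -layer storage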
12.2.2 Host mappings
You must create host mappings so that the target system recognizes the initiator system and presents volumes to it to be virtualized. This task involves creating a host object, associating the initiator system’s IQN with that host object, and mapping volumes to that host object (these are the volumes that the initiator system virtualizes).
In Figure 12-1 on page 228, the initiator system has four IBM Storwize node canisters. Because in SAN Volume Controller and IBM Storwize systems each node has its own IQN, the system has four IQNs. The target system should represent the initiator system with one host object, and associate all four IQNs with that host object.
Example 12-1 shows how to set up the host mappings in this example by using the CLI:
1. Create a host object and associate the initiator system’s four IQNs with that host object by using mkhost.
2. Map a volume to that host object by using mkvdiskhostmap.
Example 12-1 Steps that are required to set up host mappings on the target system by using the CLI
IBM_Storwize:Redbooks_Backend_cluster:superuser>mkhost -name Redbooks_initiator_system -iscsiname iqn.1986-03.com.ibm:2145.redbookscluster1.node1,iqn.1986-03.com.ibm:2145.redbookscluster1.node2,iqn.1986-03.com.ibm:2145.redbookscluster1.node3,iqn.1986-03.com.ibm:2145.redbookscluster1.node4 -iogrp 0
Host, id [0], successfully created
IBM_Storwize:Redbooks_Backend_cluster:superuser>mkvdiskhostmap -host Redbooks_initiator_system Vdisk_0
Virtual Disk to Host map, id [0], successfully created
The -iogrp 0 option configures the target system to present volumes only from I/O group 0 to the host. In this case, this is the only I/O group in the target system. You can find the initiator system’s IQNs by using the lsnode command on the initiator system. It is also possible to add IQNs to an existing host object by using addhostport.
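As a sketch, the following commands show how to list the IQNs on the initiator system (the iscsi_name field of the lsnode output shows each node's IQN) and how a further IQN could later be added to the host object on the target system. The node name node5 is purely hypothetical; in the example configuration, all four IQNs were already associated with the host object by mkhost. There is no feedback from a successful addhostport invocation.
IBM_2145:Redbooks_cluster1:superuser>lsnode
IBM_Storwize:Redbooks_Backend_cluster:superuser>addhostport -iscsiname iqn.1986-03.com.ibm:2145.redbookscluster1.node5 Redbooks_initiator_system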
12.2.3 Authentication
SAN Volume Controller and IBM Storwize systems support only one-way (the target authenticates the initiator) CHAP authentication for external virtualization by using iSCSI. Therefore, you should configure only one-way CHAP authentication on the target system, although it is possible to configure two-way CHAP authentication on it (because SAN Volume Controller and IBM Storwize systems support two-way CHAP authentication for host attachment).
To configure one-way CHAP authentication on the target system, set a CHAP secret for the host object that represents the initiator system. This can be done either by using the chhost command or by using the iSCSI configuration pane in the GUI. For more information, see 5.2, “Configuring CHAP for an IBM Storwize storage system” on page 57.
Example 12-2 on page 231 demonstrates how to set a CHAP secret, by using chhost, for the host object that represents the initiator system in the example that is illustrated by Figure 12-1 on page 228. There is no feedback from a successful invocation.
Example 12-2 Configuring a CHAP secret on the target system for one-way (target authenticates initiator) CHAP authentication
IBM_Storwize:Redbooks_Backend_cluster:superuser>chhost -chapsecret secret1 Redbooks_initiator_system
After a CHAP secret is set, the target system automatically uses it to authenticate the initiator system. Therefore, you must specify the target system’s CHAP secret both when you discover its ports and when you establish sessions with it from the initiator system. These procedures are described in 12.3, “Initiator configuration” on page 232.
12.2.4 Port configuration
For the initiator system to establish iSCSI sessions with the target system, the target system’s iSCSI ports must have IP addresses. For more information about how to set the target system’s iSCSI ports’ IP addresses, see 7.1.1, “Setting the IBM Storwize iSCSI IP address” on page 84. While following these instructions, ensure that you note the IP addresses being set and to which ports they belong. You need this information when discovering the ports from the initiator and establishing sessions in 12.3, “Initiator configuration” on page 232.
In the example that is illustrated by Figure 12-1 on page 228, each node in the target system has four Ethernet ports: two 1 Gbps ports and two 10 Gbps ports. Each connection should be between ports of the same speed to maximize the link performance.
In Figure 12-1 on page 228, the connections are from 10 Gbps ports on the initiator system to the 10 Gbps ports on the target system. These are ports 3 and 4 on each node of the target system. Therefore, you must assign IP addresses to these four 10 Gbps ports. Example 12-3 shows how to do this with the CLI. Use the cfgportip command to configure IP addresses for the iSCSI ports.
 
Remember: There is no feedback from a successful invocation.
Tip: In this example, the ports do not use VLAN tagging. If you require VLAN tagging, for example, to use Priority Flow Control (PFC), you also must use the -vlan or -vlan6 options.
Example 12-3 Configuring IP addresses for the target system’s iSCSI ports
IBM_Storwize:Redbooks_Backend_cluster:superuser>cfgportip -node node1 -ip 192.168.104.190 -mask 255.255.0.0 -gw 192.168.100.1 3
IBM_Storwize:Redbooks_Backend_cluster:superuser>cfgportip -node node1 -ip 192.168.104.191 -mask 255.255.0.0 -gw 192.168.100.1 4
IBM_Storwize:Redbooks_Backend_cluster:superuser>cfgportip -node node2 -ip 192.168.104.192 -mask 255.255.0.0 -gw 192.168.100.1 3
IBM_Storwize:Redbooks_Backend_cluster:superuser>cfgportip -node node2 -ip 192.168.104.193 -mask 255.255.0.0 -gw 192.168.100.1 4
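If the ports required VLAN tagging (as mentioned in the Tip before Example 12-3), the same commands could include the -vlan option. The following sketch uses a purely illustrative VLAN ID of 100; use the VLAN ID that your network requires. After configuration, you can confirm the port settings by using lsportip with the port ID.
IBM_Storwize:Redbooks_Backend_cluster:superuser>cfgportip -node node1 -ip 192.168.104.190 -mask 255.255.0.0 -gw 192.168.100.1 -vlan 100 3
IBM_Storwize:Redbooks_Backend_cluster:superuser>lsportip 3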
12.3 Initiator configuration
This section describes the steps that must be carried out on the initiator system to virtualize LUNs that are presented by an iSCSI-connected IBM Storwize target system. Part 3, “iSCSI virtualization” on page 211 contains instructions for configuring a SAN Volume Controller or IBM Storwize initiator system for external virtualization of a storage controller that is connected by using iSCSI. Those instructions are generic with regard to the back-end controller that is used as the target system. The instructions in this section are specific to an IBM Storwize target system, and they refer to the example that is illustrated by Figure 12-1 on page 228.
12.3.1 Establishing connections and sessions
The initiator system can establish sessions with the target system by using either the CLI or the GUI. You should decide which connections to establish during the planning phase of an installation because the network architecture depends on the planned iSCSI connections. Section 12.1, “Planning considerations” on page 228 describes the connections that should be established between a SAN Volume Controller or IBM Storwize initiator system and an IBM Storwize target system.
In the example that is shown in Figure 12-1 on page 228, there are 16 iSCSI sessions to establish (two sessions from each of the four nodes in the initiator system to each of the two nodes in the target system). The initiator system treats these 16 sessions as four groups of four sessions; each group of four sessions is encapsulated in an iSCSI storage port object.
For an IBM Storwize target system, you should configure cluster-wide connectivity to achieve maximum availability. Therefore, each iSCSI storage port object in the example includes a session from each node of the initiator system, of which there are four. Table 12-1 shows the details of the four iSCSI storage port objects in this example.
Table 12-1 The iSCSI storage port objects in the example that is illustrated by Figure 12-1 on page 228
Initiator port ID   Initiator I/O group   Target node name   Target port ID   Target port IP address
3                   All                   node1              3                192.168.104.190
4                   All                   node1              4                192.168.104.191
3                   All                   node2              3                192.168.104.192
4                   All                   node2              4                192.168.104.193
You can configure these iSCSI sessions by using either the CLI or the GUI on the initiator system. Example 12-4 on page 233 shows how to configure one of the iSCSI storage port objects from Table 12-1, in particular the one that is described in the first row of the table.
With both the detectiscsistorageportcandidate and the addiscsistorageport commands, the -chapsecret option specifies the CHAP secret that is used for one-way (the target authenticates the initiator) CHAP authentication. Include this option to use the CHAP secret that is set on the target system in 12.2.3, “Authentication” on page 230, or omit it if the target system has no CHAP secret. To configure cluster-wide connectivity, do not use the -iogrp option with either command because doing so configures connectivity to only one I/O group.
Example 12-4 Creating an iSCSI storage port with the CLI
IBM_2145:Redbooks_cluster1:superuser>detectiscsistorageportcandidate -srcportid 3 -targetip 192.168.104.190 -chapsecret secret1
IBM_2145:Redbooks_cluster1:superuser>lsiscsistorageportcandidate
id src_port_id target_ipv4 target_ipv6 target_iscsiname iogroup_list configured status site_id site_name
0 3 192.168.104.190 iqn.1986-03.com.ibm:2145.redbooksbackendcluster.node1 1:1:-:- no full
IBM_2145:Redbooks_cluster1:superuser>addiscsistorageport -chapsecret secret1 0
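The remaining three iSCSI storage port objects from Table 12-1 can be created in the same way by repeating the discovery and addition for each source port and target IP address pair. The following is a sketch; after each discovery, verify the candidate ID with lsiscsistorageportcandidate before passing it to addiscsistorageport (0 is used here for illustration).
IBM_2145:Redbooks_cluster1:superuser>detectiscsistorageportcandidate -srcportid 4 -targetip 192.168.104.191 -chapsecret secret1
IBM_2145:Redbooks_cluster1:superuser>addiscsistorageport -chapsecret secret1 0
IBM_2145:Redbooks_cluster1:superuser>detectiscsistorageportcandidate -srcportid 3 -targetip 192.168.104.192 -chapsecret secret1
IBM_2145:Redbooks_cluster1:superuser>addiscsistorageport -chapsecret secret1 0
IBM_2145:Redbooks_cluster1:superuser>detectiscsistorageportcandidate -srcportid 4 -targetip 192.168.104.193 -chapsecret secret1
IBM_2145:Redbooks_cluster1:superuser>addiscsistorageport -chapsecret secret1 0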
Figure 12-2 and Figure 12-3 on page 234 show the steps that are required on the initiator system’s GUI to configure all of the iSCSI storage port objects that are described in Table 12-1 on page 232. Figure 12-2 shows the Add External iSCSI Storage wizard. Access this wizard by clicking Add External iSCSI Storage in the upper left of the Pools → External Storage pane.
Figure 12-2 The first stage of the Add External iSCSI Storage wizard
In Figure 12-2, click IBM Storwize and then Next to configure the iSCSI storage port objects to virtualize an IBM Storwize target system.
Figure 12-3 shows the second step of the Add External iSCSI Storage wizard. The details that are entered into the wizard in Figure 12-3 result in the initiator system creating all of the iSCSI storage port objects that are described in Table 12-1 on page 232, which configures all of the iSCSI sessions in the example that is illustrated by Figure 12-1 on page 228.
Figure 12-3 The second step of the Add External iSCSI Storage wizard
The CHAP secret field sets the CHAP secret that is used for one-way (the target authenticates the initiator) CHAP authentication, both when discovering and when establishing sessions with the target system. Enter the CHAP secret that was set on the target system in 12.2.3, “Authentication” on page 230, or leave the field blank if the target system has no CHAP secret configured.
Each Target port on remote storage field corresponds to an iSCSI storage port object that the wizard creates. Each such object has cluster-wide connectivity to maximize availability. The source port on the initiator system for each iSCSI storage port object will be the port that is selected in the Select source port list. The target IP address for each iSCSI storage port object will be the IP address that is entered into the Target port on remote storage field.
To maximize availability and have redundancy in case of a path failure, you should configure two redundant connections to each node of the target system. To enforce this setting, the wizard does not allow you to continue unless all four Target port on remote storage fields are complete.
12.4 Configuration validation
When you complete the necessary steps to configure both the target system and the initiator system, you can verify the configuration by using the CLI on both systems.
You can use the lsiscsistorageport command to view information about the configured iSCSI storage port objects on the initiator system. Example 12-5 shows the concise lsiscsistorageport view on the initiator system in the example that is illustrated by Figure 12-1 on page 228.
Example 12-5 The lsiscsistorageport concise view
IBM_2145:Redbooks_cluster1:superuser>lsiscsistorageport
id src_port_id target_ipv4 target_ipv6 target_iscsiname controller_id iogroup_list status site_id site_name
1 3 192.168.104.190 iqn.1986-03.com.ibm:2145.redbooksbackendcluster.node1 4 1:1:-:- full
2 3 192.168.104.192 iqn.1986-03.com.ibm:2145.redbooksbackendcluster.node2 4 1:1:-:- full
3 4 192.168.104.191 iqn.1986-03.com.ibm:2145.redbooksbackendcluster.node1 4 1:1:-:- full
4 4 192.168.104.193 iqn.1986-03.com.ibm:2145.redbooksbackendcluster.node2 4 1:1:-:- full
The view in Example 12-5 shows all of the configured iSCSI storage port objects, each of which is characterized by a source port ID, target IP address, and target IQN. The view shows some basic connectivity and configuration information for each object. Each object in this view corresponds to a row in Table 12-1 on page 232. In this example, the entry in the I/O group list field for each object is 1:1:-:- and the entry in the status field for each object is full, which indicates that the initiator system has good connectivity to the target system.
The entries in the I/O group list fields are colon-separated lists of keys. Each place in the list describes the connectivity status of an I/O group. The 1 key in the first two places indicates that the first two I/O groups are both meant to have connectivity, and actually do have connectivity. The minus (-) key in the final two places indicates that the final two I/O groups are not meant to have connectivity. In this case, this is because the iSCSI storage port object is configured to have cluster-wide connectivity, but the initiator system has only two I/O groups.
A 0 key in any place indicates that some nodes from the associated I/O group are meant to have connectivity but do not. The first place in the list always refers to I/O group 0, the second place to I/O group 1, and so on; this is regardless of which I/O groups are actually present in the system.
The full entries in the status fields indicate that every node that should have connectivity does have connectivity. An entry of partial indicates that only some nodes that are meant to have connectivity do have connectivity. An entry of none indicates that no nodes that are meant to have connectivity do have connectivity.
The lsiscsistorageport command also allows a detailed view that gives more detailed information about a specific iSCSI storage port object. If the concise view shows that the initiator system does not have full connectivity to the target system, you can use the detailed view to see which nodes do not have connectivity.
Example 12-6 shows the lsiscsistorageport detailed view for the first iSCSI storage port object on the initiator system in the example that is illustrated by Figure 12-1 on page 228.
Example 12-6 The lsiscsistorageport detailed view
IBM_2145:Redbooks_cluster1:superuser>lsiscsistorageport 1
id 1
src_port_id 3
target_ipv4 192.168.104.190
target_ipv6
target_iscsiname iqn.1986-03.com.ibm:2145.redbooksbackendcluster.node1
controller_id 4
iogroup_list 1:1:-:-
status full
site_id
site_name
node_id 1
node_name node1
src_ipv4 192.168.104.199
src_ipv6
src_iscsiname iqn.1986-03.com.ibm:2145.redbookscluster1.node1
connected yes
node_id 2
node_name node2
src_ipv4 192.168.104.197
src_ipv6
src_iscsiname iqn.1986-03.com.ibm:2145.redbookscluster1.node2
connected yes
node_id 3
node_name node3
src_ipv4 192.168.104.198
src_ipv6
src_iscsiname iqn.1986-03.com.ibm:2145.redbookscluster1.node3
connected yes
node_id 4
node_name node4
src_ipv4 192.168.104.196
src_ipv6
src_iscsiname iqn.1986-03.com.ibm:2145.redbookscluster1.node4
connected yes
Example 12-6 shows the detailed view for the iSCSI storage port object with ID 1, which is the object that is described in the first row of Table 12-1 on page 232. In addition to the fields in the concise view, the detailed view also gives per-node connectivity information. In this case, the entry in the connected field in each block of the output is yes, indicating that every node in the initiator system has connectivity to the target port.
For a full description of the lsiscsistorageport command, see IBM Knowledge Center.
From the target system, you can use the lshost command to view information about the host object corresponding to the initiator system. Example 12-7 shows the detailed lshost view on the target system for the host corresponding to the initiator system in the example that is shown in Figure 12-1 on page 228.
Example 12-7 The lshost detailed view
IBM_Storwize:Redbooks_Backend_cluster:superuser>lshost 0
id 0
name Redbooks_initiator_system
port_count 4
type generic
mask 1111111111111111111111111111111111111111111111111111111111111111
iogrp_count 1
status online
site_id
site_name
host_cluster_id
host_cluster_name
iscsi_name iqn.1986-03.com.ibm:2145.redbookscluster1.node4
node_logged_in_count 4
state active
iscsi_name iqn.1986-03.com.ibm:2145.redbookscluster1.node3
node_logged_in_count 4
state active
iscsi_name iqn.1986-03.com.ibm:2145.redbookscluster1.node2
node_logged_in_count 4
state active
iscsi_name iqn.1986-03.com.ibm:2145.redbookscluster1.node1
node_logged_in_count 4
state active
The view on the target system that is shown in Example 12-7 provides detailed information about the host object corresponding to the initiator system. The value in the port count field is 4, which is the number of IQNs that are associated with this host object. This value is correct because there are four nodes in the initiator system. The value in the I/O group count field is 1 because only one I/O group from the target system is configured to present volumes to the initiator system.
Note: Although the target system has only one I/O group, it is possible to configure a host object with associated I/O groups that are not yet present in the target system, which results in a value other than 1 in the I/O group count field.
For each block of the output that relates to a specific IQN of the host system, the value in the node_logged_in_count field is 4, and the value in the state field is active, which indicates that there is good connectivity to each node of the initiator system. The node_logged_in_count field gives the number of logins that nodes from the target system have with that IQN from the initiator system.
Because there are two nodes in the target system and all the connections are dual-redundant, the value of 4 indicates full connectivity. The value active in the state field indicates that the only I/O group in the target system (which, in this example, has volume mappings to the host) has at least one iSCSI session with that node of the initiator system.
For a full description of the lshost command, see IBM Knowledge Center.
In addition to the information in the lsiscsistorageport and lshost views, there is further relevant information in the lscontroller and lsportip views. The lscontroller view on the initiator system contains information about the controller object that represents the target system. It is documented in IBM Knowledge Center.
The lsportip view on either the initiator system or the target system contains information about the IP ports on that system. It is documented in IBM Knowledge Center.
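As a sketch, the following commands show where to find this information in the example configuration. The controller ID of 4 is taken from the lsiscsistorageport output in Example 12-5; the controller name is assigned automatically by the initiator system unless it is changed.
IBM_2145:Redbooks_cluster1:superuser>lscontroller
IBM_2145:Redbooks_cluster1:superuser>lscontroller 4
IBM_2145:Redbooks_cluster1:superuser>lsportip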
 