iSCSI storage

iSCSI is a different way to implement SAN storage: instead of using a dedicated network stack such as FC, iSCSI relies on the standard TCP/IP stack. As with the FC protocol, there are two leading roles: the initiator (on the host side) and the target (on the storage side). There is also, of course, the fabric, which in this case is a traditional Ethernet network (possibly with newer protocols, such as Data Center Bridging (DCB)).

ESXi can be one of the following iSCSI initiator types:

  • Software iSCSI adapter: This uses one or more VMkernel network interfaces and the virtual switches to manage the entire iSCSI traffic. With the software iSCSI adapter, you can use iSCSI technology without purchasing specialized hardware.
  • Dependent hardware iSCSI adapter: This adapter depends on VMware for its iSCSI management and configuration interfaces, and part of the networking must still be implemented at the virtual switch level. Ethernet NICs with iSCSI offload capabilities fall into this category. At the ESXi level, those NICs are presented as two different components: a hardware iSCSI adapter and a corresponding standard network NIC.
  • Independent hardware iSCSI adapter or iSCSI HBA: This is similar to an FC HBA: the entire network stack is implemented in hardware inside the adapter. On the ESXi side, you will see one or more vmhba adapters, as with all other block storage adapters. Network configuration must be performed at the card level, using its BIOS management or specific tools (there are also vCenter plugins to manage the configuration from inside vSphere).

The main difference between one mode and another lies in how the network is configured: for independent hardware iSCSI adapters, you configure it at the adapter firmware level; for a software initiator, you have to build a proper virtual network configuration. Performance can vary slightly across these modes, but in most cases it remains similar; host CPU load, however, usually decreases when moving from the software initiator to an HBA.
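
The following is a minimal sketch of what the first step of the software initiator setup looks like from the command line (assuming ESXi Shell or SSH access; the adapter name assigned to the software initiator, such as vmhba65, varies by host):

    # Enable the software iSCSI adapter (this creates a new vmhba on the host)
    esxcli iscsi software set --enabled=true

    # Confirm that it is active and find the assigned adapter name (for example, vmhba65)
    esxcli iscsi software get
    esxcli iscsi adapter list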

Some iSCSI storage arrays work with a network topology exactly like an FC fabric: two different switches with isolated networks. That means two different logical networks and two different IP subnets. This solution does not require any inter-switch connection and provides better resilience (the switches are fully independent and isolated from each other). For example, the iSCSI versions of Dell-EMC VNX or Compellent storage work in this way. If you are using a software initiator, you need at least two different VMkernel interfaces, one on each logical network.
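
As an illustrative sketch only (the port group names and IP subnets below are hypothetical), the two VMkernel interfaces for this two-network topology could be created with esxcli as follows:

    # First iSCSI VMkernel interface, on the first logical network
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-A
    esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static \
      --ipv4=192.168.10.11 --netmask=255.255.255.0

    # Second iSCSI VMkernel interface, on the second, isolated logical network
    esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-B
    esxcli network ip interface ipv4 set --interface-name=vmk2 --type=static \
      --ipv4=192.168.20.11 --netmask=255.255.255.0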

However, there is also another possibility: a single flat network at both layer 2 and layer 3. That means the physical switches (to provide resilience and redundancy, you want at least two) must be in the same broadcast domain and must be (directly or indirectly) interconnected. For example, Dell-EMC EqualLogic needs this kind of network configuration. Using stacking, a virtual chassis, or similar functions to build a single logical switch could be an option, mainly to simplify management; however, plan it carefully to ensure the right level of network resilience (for example, some stacked switches must reboot the entire stack during a firmware upgrade). Also, in this kind of network topology, with a software initiator, more than one VMkernel interface may be needed on the same subnet, and in this case you have to bind all of them to the iSCSI adapter, as shown in the sketch below.
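
The following is a hedged sketch of the port binding itself, assuming a software iSCSI adapter named vmhba65 and two VMkernel interfaces (vmk1 and vmk2) on the same flat iSCSI network; check the real adapter name on your host first:

    # Each bound VMkernel interface must be backed by exactly one active uplink
    esxcli iscsi networkportal add --adapter=vmhba65 --nic=vmk1
    esxcli iscsi networkportal add --adapter=vmhba65 --nic=vmk2

    # Verify the binding
    esxcli iscsi networkportal list --adapter=vmhba65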

There isn't a specific service type to tag a VMkernel interface for iSCSI network traffic; the proper interface is chosen based on your routing table. For this reason, be sure to use network ranges dedicated to iSCSI only. When you have multiple interfaces on the same network, you need iSCSI port binding; otherwise, only one interface will be used.
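
To check which interface will actually be used (the target address below is hypothetical), you can inspect the host routing table and test reachability from a specific VMkernel interface:

    # Show the routing table used to select the outgoing VMkernel interface
    esxcli network ip route ipv4 list

    # Test reachability of the iSCSI target from a specific VMkernel interface
    vmkping -I vmk1 192.168.10.100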

Compared to FC storage, there are several possible tweaks and optimizations for iSCSI, but always check what your storage vendor recommends:

  • Jumbo frames (9,000-byte Ethernet frames): iSCSI traffic can usually benefit from jumbo frames, but only if they are enabled end-to-end between initiator and target; that means at the VMkernel and virtual switch level (configured through the MTU settings, as shown in the sketch after this list), at the physical switch level (for all the ports used by iSCSI), and at the storage level.
  • DCB: If you use converged networks and your storage supports it, DCB can provide Quality of Service (QoS) for storage traffic. It's usually configured on the ESXi side on CNA adapters.
  • iSCSI initiator advanced setting delayed ACK: Some storage vendors suggest disabling this setting (see the sketch after this list).
  • iSCSI initiator advanced setting login timeout: The default value is quite low; some storage vendors suggest increasing it (for example, to 60 seconds).
  • TSO and LRO of the physical NICs: Sometimes you have to change these settings; see KB 2055140, Understanding TCP Segmentation Offload (TSO) and Large Receive Offload (LRO) in a VMware environment (https://kb.vmware.com/kb/2055140).
  • TCO of the physical NICs: Sometimes you have to change this setting; see KB 2052904, Understanding TCP Checksum Offloading (TCO) in a VMware Environment (https://kb.vmware.com/kb/2052904).
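
The following is a minimal sketch of some of these tweaks with esxcli, assuming a software iSCSI adapter named vmhba65, a standard virtual switch named vSwitch1, and a VMkernel interface vmk1 dedicated to iSCSI (all hypothetical names). The physical switch and storage sides still have to be configured separately, and the exact parameter names and namespaces can differ between ESXi releases, so verify them against your version before applying anything:

    # Jumbo frames: raise the MTU on the virtual switch and on the VMkernel interface
    esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
    esxcli network ip interface set --interface-name=vmk1 --mtu=9000

    # Disable delayed ACK on the initiator (only if your storage vendor recommends it;
    # some ESXi versions expose this only in the vSphere Client advanced options)
    esxcli iscsi adapter param set --adapter=vmhba65 --key=DelayedAck --value=false

    # Increase the login timeout (for example, to 60 seconds)
    esxcli iscsi adapter param set --adapter=vmhba65 --key=LoginTimeout --value=60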

Note that iSCSI can provide initiator (and also target) authentication in different ways:

  • IP-based: With some storage arrays, you can add a list of authorized IPs (or networks).
  • iSCSI Qualified Name (IQN): Each initiator and target has at least one IQN that can be used to authorize specific hosts. Note that the default ESXi software initiator identifier is generated when you activate the software iSCSI adapter and is based on the hostname followed by a random suffix, such as iqn.1998-01.com.vmware:esx01-789fac05; you can change it (this requires a host reboot) or add an alias to use a different string.
  • Challenge Handshake Authentication Protocol (CHAP): This is real authentication based on a shared secret, and it can also be mutual, so not only does the storage authenticate the host, but the host can also authenticate the storage (see the sketch after this list).
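
As a hedged sketch (the adapter name, IQN, alias, and credentials below are placeholders), the initiator name, alias, and CHAP settings can be managed with esxcli; the exact option values, such as the CHAP level names, may vary between ESXi versions:

    # Set a custom initiator IQN and a friendlier alias
    # (as noted above, a host reboot may be needed for the name change to take effect)
    esxcli iscsi adapter set --adapter=vmhba65 --name=iqn.1998-01.com.vmware:esx01-iscsi
    esxcli iscsi adapter set --adapter=vmhba65 --alias=esx01-iscsi

    # Require unidirectional CHAP, so the target authenticates the host
    esxcli iscsi adapter auth chap set --adapter=vmhba65 --direction=uni \
      --level=required --authname=esx01 --secret=MySharedSecret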
