How it works…

Performance, availability, and costs are all factors that should be considered when choosing a storage connectivity option. The following table provides a quick overview of the different storage connectivity options and how they compare with each other in terms of performance, availability, and costs:

Protocol          Performance    Availability    Costs
Local storage     Good           Fair            Low
Fibre Channel     Excellent      Excellent       High
iSCSI             Good           Excellent       Medium
NFS               Good           Good            Low
FCoE              Excellent      Excellent       High

Direct attached or local storage is storage attached directly to a single host. Because this storage is not shared between hosts, many VMware features are not available to virtual machines hosted on it.

Best practices when using direct attached or local storage are as follows:

  • Configure RAID to provide protection against a hard disk failure
  • Use a hardware RAID controller that is on the VMware HCL
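
If it is unclear whether a device is direct attached or shared, the device and adapter properties can be checked from the ESXi shell. This is a minimal sketch; the devices and adapters listed will of course vary per host:

    # List storage devices; the Is Local field shows whether a device is direct attached
    esxcli storage core device list

    # List the host's storage adapters; the local RAID controller appears here
    esxcli storage core adapter list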

Fibre Channel (FC) is a block-level, low-latency, high-performance storage network protocol that is well suited for workloads with high I/O requirements. The FC protocol encapsulates SCSI commands in FC frames. A Fibre Channel Host Bus Adapter (HBA) is required to connect the host to the storage network or fabric. FC HBAs can provide a throughput of 2, 4, 8, or 16 Gbps, depending on the capabilities of the HBA and the FC network. FC uses zoning and LUN masking to control which hosts can connect to which targets on the SAN.

The cost of deploying FC-connected storage can be significantly higher than that of other options, especially if an FC infrastructure does not already exist.

The best practices when using FC are as follows:

  • Use multiple HBAs in the host to provide multiple paths for load balancing and redundancy (the resulting paths can be verified as shown in the sketch after this list).
  • Ensure all HBAs and switches are configured for the same speed. Mixing the speed of HBAs and switches can produce contention at the FC switch and SAN.
  • Use single-initiator, single-target zoning: a single HBA (the initiator) is zoned to a single array port (the target), and a separate zone is created for each host HBA.
  • Use LUN masking so that LUNs presented to ESXi hosts are not visible to other devices.
  • Ensure firmware levels on FC switches and HBAs are up-to-date and compatible.
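
Zoning and masking are configured on the fabric and the array, but the resulting paths can be verified from the ESXi shell. This is a minimal sketch: the naa identifier is a placeholder, and Round Robin is shown only as an example path selection policy; follow the array vendor's recommendation before changing it.

    # List the FC adapters (HBAs) the host has discovered
    esxcli storage san fc list

    # Show every path from the host to its storage devices
    esxcli storage core path list

    # Show the path selection policy (PSP) in use for each device
    esxcli storage nmp device list

    # Example only: switch a device to Round Robin (placeholder identifier)
    esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR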

iSCSI provides block-level storage access by encapsulating SCSI commands in TCP/IP. iSCSI storage can be accessed with the software iSCSI initiator included with ESXi, which uses a standard network adapter, or with a dependent or independent iSCSI HBA:

  • A dependent iSCSI adapter depends on VMware networking and iSCSI configuration for connectivity and management.
  • An independent iSCSI HBA provides its own networking and configuration for connectivity and management. Configuration is done directly on the HBA through its own configuration interface.

Throughput is based on the network bandwidth, the speed of the network interface card (1 GbE or 10 GbE), and the CPU resources required to encapsulate the SCSI commands into TCP/IP packets.
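
As a rough sketch of bringing up the software iSCSI initiator from the ESXi shell, the adapter name (vmhba65) and the target address used here are placeholders that will differ per host and array:

    # Enable the software iSCSI initiator
    esxcli iscsi software set --enabled=true

    # Confirm it is enabled and note the vmhba name it was assigned
    esxcli iscsi software get
    esxcli iscsi adapter list

    # Point dynamic (Send Targets) discovery at the array (placeholder values)
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=192.168.50.10:3260

    # Rescan the adapter to pick up the presented LUNs
    esxcli storage core adapter rescan --adapter=vmhba65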

The cost of implementing iSCSI is typically significantly less than that of implementing FC. Standard network adapters and network switches can be used to provide iSCSI connectivity. Using dedicated iSCSI HBAs increases performance but also increases cost. The price of 10 GbE switches and adapters continues to drop as their deployment becomes more widespread.

The best practices when using iSCSI are as follows:

  • Configure multiple VMkernel ports (vmk) bound to multiple physical adapters (vmnic) to provide load balancing and redundancy for iSCSI connections (see the port-binding sketch after this list).
  • Use network cards with TCP/IP Offload Engine (TOE) enabled to reduce the stress on the host CPU.
  • Use a physically separate network for iSCSI traffic. If a physically separate network is not available, use VLANs to separate iSCSI traffic from other network traffic.
  • Enable jumbo frames (MTU 9000) on the iSCSI network.
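
The port binding and jumbo frame items above can be applied from the ESXi shell roughly as follows. The vSwitch, VMkernel port, and adapter names are assumptions for this sketch, and the 9000-byte MTU must also be configured end to end on the physical switches and the array:

    # Raise the MTU on the vSwitch carrying iSCSI traffic
    esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

    # Raise the MTU on the iSCSI VMkernel interfaces
    esxcli network ip interface set --interface-name=vmk1 --mtu=9000
    esxcli network ip interface set --interface-name=vmk2 --mtu=9000

    # Bind both VMkernel ports to the software iSCSI adapter for multipathing
    esxcli iscsi networkportal add --adapter=vmhba65 --nic=vmk1
    esxcli iscsi networkportal add --adapter=vmhba65 --nic=vmk2

    # Verify the bindings
    esxcli iscsi networkportal list --adapter=vmhba65

For port binding to work, each bound VMkernel port should have exactly one active uplink, with the other uplinks set to unused.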

The Network File System (NFS) protocol can be used to access virtual machine files stored on a Network Attached Storage (NAS) device. Virtual machine configuration files, disk (VMDK) files, and swap (.vswp) files can be stored on NAS storage. vSphere 5.5 supports NFS version 3 over TCP, and vSphere 6 added support for NFS v4.1. The capabilities and limitations of NFS v4.1 will be discussed in a separate recipe later in this chapter.
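
As a sketch of mounting NFS exports as datastores from the ESXi shell, the NAS address, export paths, and datastore names below are placeholders:

    # Mount an NFS version 3 export as a datastore (placeholder values)
    esxcli storage nfs add --host=192.168.60.20 --share=/vol/datastore01 --volume-name=NFS_DS01

    # On vSphere 6 and later, mount an NFS 4.1 export instead
    esxcli storage nfs41 add --hosts=192.168.60.20 --share=/vol/datastore02 --volume-name=NFS41_DS02

    # List the NFS datastores mounted on the host
    esxcli storage nfs list
    esxcli storage nfs41 list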

Throughput is based on the network bandwidth, the speed of the network interface card (1 GbE or 10 GbE), and the processing speed of the NAS. Multiple paths can be configured for high availability, but load balancing across multiple paths is not supported with NFS version 3.

The cost of implementing NFS connectivity is similar to that of iSCSI. No specialized network hardware is required: standard network switches and network adapters are used, and there is no need for specialized HBAs.

The best practices when using NFS-connected storage are as follows:

  • Use a physically separate network for NFS traffic. If a physically separate network is not available, use VLANs to separate NFS traffic from other network traffic (see the sketch after this list).
  • Hosts must mount NFS version 3 shares and non-Kerberos NFS version 4.1 shares with root access.
  • Enable jumbo frames (MTU 9000) on the NFS network.
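
The VLAN separation and jumbo frame items above can be checked from the ESXi shell along these lines; the port group name, VLAN ID, VMkernel port, and NAS address are placeholders for this sketch:

    # Tag the NFS port group with a dedicated VLAN (placeholder values)
    esxcli network vswitch standard portgroup set --portgroup-name=NFS --vlan-id=120

    # Test that jumbo frames pass end to end: 8972-byte payload (9000 minus IP/ICMP headers)
    # with fragmentation disabled
    vmkping -I vmk2 -d -s 8972 192.168.60.20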

Fibre Channel over Ethernet (FCoE) encapsulates Fibre Channel frames in Ethernet frames. A Converged Network Adapter (CNA) that supports FCoE is required, or a network adapter with FCoE capabilities can be used with the software FCoE initiator included with ESXi.
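
When the software FCoE initiator is used with an FCoE-capable network adapter, it can be activated from the ESXi shell roughly as follows; the vmnic name is a placeholder:

    # List network adapters that are FCoE capable
    esxcli fcoe nic list

    # Activate software FCoE on a capable adapter (placeholder name)
    esxcli fcoe nic discover --nic-name=vmnic4

    # Confirm that an FCoE adapter has been created
    esxcli fcoe adapter list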

A common implementation of FCoE is with Cisco UCS blade chassis. Connectivity for TCP/IP network traffic and FCoE storage traffic is converged between the chassis and the Fabric Interconnects. The Fabric Interconnects split out the traffic and provide the connectivity paths to the TCP/IP network and the storage network fabrics.

The best practices when using FCoE are as follows:

  • Disable the Spanning Tree Protocol (STP) on the switch ports connected to FCoE adapters
  • Ensure that the latest microcode is installed on the FCoE network adapter
  • If the FCoE network adapter has multiple ports, configure each port on a separate vSwitch