Storage design

Choosing the right storage for your needs has no simple answer, because there are so many different types of storage solutions. As usual, you have to consider availability, scalability, performance, and manageability, as well as newer capabilities such as data protection, data migration, and security.

Traditional IOPS sizing can be too limited, considering that most enterprise storage works with more complex concepts, such as data tiering, data reduction, and data locality; for this reason, it is always advisable to make a capacity and performance estimate using vendor-specific tools.
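To illustrate what traditional IOPS sizing looks like, here is a minimal back-of-envelope sketch. The workload figures and RAID penalties are illustrative assumptions, not vendor data; real arrays with tiering, caching, and data reduction will behave differently, which is exactly why vendor tools are preferable:

```python
# Back-of-envelope IOPS sizing (the traditional method referred to above).
# All workload numbers here are hypothetical assumptions for illustration.

def frontend_to_backend_iops(total_iops, read_ratio, raid_write_penalty):
    """Convert host-facing (frontend) IOPS into backend disk IOPS.

    Writes are amplified by the RAID level: each frontend write costs
    `raid_write_penalty` backend operations (commonly 2 for RAID 10,
    4 for RAID 5, 6 for RAID 6).
    """
    reads = total_iops * read_ratio
    writes = total_iops * (1 - read_ratio)
    return reads + writes * raid_write_penalty

# Hypothetical workload: 200 VMs at ~50 IOPS each, 70% reads, on RAID 5.
backend = frontend_to_backend_iops(200 * 50, read_ratio=0.7, raid_write_penalty=4)
print(backend)  # 19000.0 backend IOPS for 10,000 frontend IOPS
```

The write penalty alone nearly doubles the backend requirement in this example; flash tiers, write-back caches, and deduplication then shift the result again, so treat this kind of estimate only as a starting point.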

In most cases you will have storage with some flash technologies:

  • AFA (all-flash array): This is the choice where performance and storage latency are critical and you want storage with predictable throughput and latency.
  • Hybrid flash: This is the choice where you also need some low-cost capacity tiers (or simply capacity datastores), but still with the benefit of flash memory for hot data.

Choosing between NAS and SAN is more complex; VMware vSphere abstracts VMs as files, so both can work well and neither is inherently better than the other. Historically, NFS datastores had some performance issues and limits (for example, only thin disks without VAAI). But NFS storage can be more VM-aware and permits better integration (for example, see Tintri storage, where you have full VM visibility from the storage side as well). NFS datastores also do not have the SCSI reservation issue of VMFS datastores, where the number of VMs and VMDKs on each datastore is limited to avoid excessive SCSI reservations (note that VAAI has mitigated this issue).

Anyway, if you are using VVols, the difference between NAS and SAN becomes almost irrelevant. If you are choosing a SAN, there can be some concerns related to the frontend protocols, especially when choosing between FC, FCoE, and iSCSI. In the case of the FC protocol:

  • It remains more efficient and scalable than IP-based protocols for storage; for example, path failover usually takes less than 30 seconds, compared with less than 60 seconds for iSCSI.
  • It is also a little more efficient, both at the protocol level (8 Gbps FC can be comparable with 10 Gbps iSCSI) and because FC uses HBA hardware, which means lower host CPU consumption.
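The efficiency comparison above can be sketched with some rough wire-level arithmetic. The figures below are approximations for illustration only; they cover encoding and header overhead, while the host CPU cost of TCP/iSCSI processing (which HBA offload avoids and which is a large part of why 8 Gb FC is considered comparable to 10 Gb iSCSI) is not modeled:

```python
# Rough wire-efficiency comparison: 8 Gb FC vs iSCSI on 10 GbE.
# Approximations for illustration, not measurements.

def fc8_usable_gbps():
    # 8GFC line rate is 8.5 GBaud with 8b/10b encoding:
    # only 8 of every 10 bits on the wire carry payload.
    return 8.5 * 8 / 10

def iscsi10_usable_gbps(mtu=1500):
    # 10GbE has a 10 Gbps data rate (64b/66b coding is already
    # accounted for in that figure). Per standard frame, roughly
    # 38 bytes of Ethernet overhead (header, FCS, preamble,
    # inter-frame gap) surround the frame, and 40 bytes of
    # TCP/IP headers sit inside it.
    payload = mtu - 40       # bytes left for data after TCP/IP headers
    on_wire = mtu + 38       # bytes actually occupying the wire
    return 10 * payload / on_wire

print(round(fc8_usable_gbps(), 2))      # 6.8
print(round(iscsi10_usable_gbps(), 2))  # 9.49
```

On raw wire numbers iSCSI looks faster, but without TCP offload the host pays for every segment in CPU cycles, and iSCSI PDU headers, latency, and congestion behavior narrow the gap further in practice.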

Anyway, the choice depends mostly on the storage type: some arrays have limited options for frontend interface types, and your hosts' ability to expand matters too; for example, blade servers may not have native FC options. For converged networks, remember that you have to plan network capacity and QoS carefully using DCB (Data Center Bridging).

Converged infrastructure doesn't change these considerations much, except that you will probably always have a fixed storage solution in your stack. HCI solutions are quite different; they will be discussed later in this chapter.
