ESXi host

In this case, you can size the ESXi hosts by considering the total required CPU and RAM, defining the growth margin you want to plan for (or that is required by business goals), and reserving part of the capacity for high availability (as we will discuss in Chapter 13, Advanced Availability in vSphere 6.5), considering not only vSphere HA, but also vSphere Fault Tolerance (FT), if used. With these numbers, you can define a building block (or use a vendor-predefined building block, such as a Nutanix or vSAN ready node) for the single ESXi host and work out how many blocks are needed to reach the required amount of CPU and RAM.
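
The following Python sketch is a minimal illustration of this kind of sizing exercise: it estimates how many building blocks are needed for a given workload. All of the figures (workload totals, growth margin, and the per-host building block) are hypothetical placeholders, not recommendations.

```python
import math

# Hypothetical workload totals (the sum of all planned VMs)
required_cpu_ghz = 400.0    # total CPU demand in GHz
required_ram_gb = 3072.0    # total RAM demand in GB

growth_margin = 0.25        # 25% planned growth
ha_reserve_hosts = 1        # N+1: one host reserved for vSphere HA failover

# Hypothetical building block (one ESXi host)
host_cpu_ghz = 2 * 16 * 2.1   # 2 sockets x 16 cores x 2.1 GHz
host_ram_gb = 512.0

# Apply the growth margin to the raw demand
cpu_needed = required_cpu_ghz * (1 + growth_margin)
ram_needed = required_ram_gb * (1 + growth_margin)

# The host count is driven by the more demanding of the two resources
hosts_for_cpu = math.ceil(cpu_needed / host_cpu_ghz)
hosts_for_ram = math.ceil(ram_needed / host_ram_gb)
hosts = max(hosts_for_cpu, hosts_for_ram) + ha_reserve_hosts

print(f"CPU-driven host count: {hosts_for_cpu}")
print(f"RAM-driven host count: {hosts_for_ram}")
print(f"Total hosts, including the HA reserve: {hosts}")
```

In a real design, you would also factor in CPU overcommitment ratios, virtualization overhead, and the FT requirements mentioned previously, and then compare the result against both scale-up and scale-out alternatives.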

For the hosts, you can use different types of approaches for cluster design (a quick comparison sketch follows this list):

  • Scale-out: More hosts in a single cluster, each mid-sized
  • Scale-up: Fewer hosts in a single cluster, each usually with a lot of resources
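
To make the trade-off concrete, the short sketch below (using hypothetical cluster sizes) compares how much capacity an N+1 vSphere HA policy reserves in a small scale-up cluster versus a larger scale-out cluster that delivers roughly the same total resources.

```python
def ha_overhead(total_hosts: int, reserved_hosts: int = 1) -> float:
    """Fraction of cluster capacity reserved for HA with an N+reserved policy."""
    return reserved_hosts / total_hosts

# Hypothetical designs delivering roughly the same total capacity
scale_up_hosts = 4     # few large hosts
scale_out_hosts = 10   # more mid-sized hosts

print(f"Scale-up  ({scale_up_hosts} hosts):  {ha_overhead(scale_up_hosts):.0%} reserved for HA")
print(f"Scale-out ({scale_out_hosts} hosts): {ha_overhead(scale_out_hosts):.0%} reserved for HA")

# A host failure also has a bigger impact in the scale-up design, while the
# scale-out design typically costs more in licenses, network ports, and rack space.
```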

More details will be provided in Chapter 4, Deployment Workflow and Component Installation, but usually a trade-off between the two approaches is used. For a traditional compute server, you mainly have to define the type of CPU, the amount of RAM, and the number and type of I/O ports.

Common processors now have a lot of cores (a typical number is 16, but it's growing), and considering that VMware ESXi licensing is normally per socket (except in some specific cases), you may prefer fewer processors with more cores each. In fact, the dual-processor (two-socket) configuration is the most common for virtualization hosts. But also keep other licenses in mind, such as the new licensing model for Windows Server 2016, where the number of cores is also counted.
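
To illustrate why the core count matters for guest licensing as well, the following sketch estimates the number of Windows Server 2016 core licenses for a single physical host, using the publicly documented minimums of 8 core licenses per processor and 16 per server; the host configurations are hypothetical, and you should always verify against Microsoft's current licensing terms.

```python
def ws2016_core_licenses(sockets: int, cores_per_socket: int) -> int:
    """Windows Server 2016 core licenses required for one physical host.

    Assumed rules: all physical cores must be licensed, with a minimum of
    8 core licenses per processor and 16 core licenses per server.
    """
    per_socket = max(cores_per_socket, 8)
    return max(sockets * per_socket, 16)

# Two hypothetical two-socket hosts
print(ws2016_core_licenses(sockets=2, cores_per_socket=8))   # 16 (the server minimum)
print(ws2016_core_licenses(sockets=2, cores_per_socket=16))  # 32

# The per-socket ESXi license count stays the same for both hosts, while the
# Windows core license count doubles with the higher core count.
```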

New server series based on the Intel Skylake family (https://ark.intel.com/en/products/codename/37572/Skylake) are already available on the market (such as the Dell PowerEdge Gen14 line or the HPE ProLiant Gen10 server series). The sweet spot is around 16 cores per processor, but there are also Xeon Platinum models with up to 28 cores.

Considering that the processor could be the costliest part of a server, try to balance performance, budget, and licensing constraints. For example, the Intel® Xeon® Platinum 8180M 2.50 GHz, 28C/56T, 10.4GT/s 3UPI, 38 MB Cache costs more than $13,000 (https://ark.intel.com/products/series/125191/Intel-Xeon-Scalable-Processors).

Within a single cluster, you will need compatible processors (at least compatible at the Enhanced vMotion Compatibility (EVC) level); there is no strict requirement for all hosts to have identical configurations. However, in order to simplify resource distribution and make Host Profiles more effective, it's recommended to keep the hosts inside the same cluster as similar as possible.

Intel has been supplying brand new Xeon processor models (codename Everest) designed for customers in the finance sector, where milliseconds on a transaction are crucial because they can mean profit or loss. Server vendors are trying to provide similar features on traditional Xeon processors as well; for example, a new option in Dell 14G servers called Dell Processor Acceleration Technology (PAT).

Regarding the choice between Intel and AMD: in practice, there is no real choice at all. The server market is dominated by Intel Xeon, and it will be difficult for AMD to close this gap in the near future, especially considering that you need processor compatibility for vMotion mobility inside a cluster, across clusters, across vCenter Servers, and also across clouds.

RAM is now reasonably cheap, and the best price per gigabyte is for 16 or 32 GB modules (two 8 GB dual inline memory modules (DIMMs) cost more than a single 16 GB module). Consider that each motherboard has an optimal configuration in which the DIMMs work at maximum speed, and that with more DIMMs per channel the speed may decrease. Always try to match the optimal configuration suggested by the vendor (note that the new Intel processors have six memory channels per socket, instead of four as in the previous models).
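
As a simple illustration of the balanced-population idea (with hypothetical module sizes), the following sketch computes the capacity of a configuration that keeps one DIMM per channel on a two-socket host with six memory channels per socket; always check the vendor's population guidelines for the actual rules.

```python
# Hypothetical balanced memory configuration for a two-socket host
sockets = 2
channels_per_socket = 6      # six channels on the newer Intel Xeon processors
dimms_per_channel = 1        # one DIMM per channel usually preserves maximum speed
dimm_size_gb = 32

total_dimms = sockets * channels_per_socket * dimms_per_channel
total_ram_gb = total_dimms * dimm_size_gb

print(f"{total_dimms} DIMMs, {total_ram_gb} GB in total")
# 12 DIMMs, 384 GB in total: growing to 768 GB means either 64 GB modules
# (keeping one DIMM per channel) or a second DIMM per channel, which may
# lower the memory speed depending on the platform.
```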

Of course, there are other aspects to consider, such as the form factor (blade, rack, or other formats, although rack is the most common), the vendor, or the presence of an out-of-band management card (such as iLO for HPE or iDRAC for Dell servers).

For hyper-converged infrastructure (more details will be provided in Chapter 7, Advanced Storage Management), the ESXi node size must also account for the required storage capacity and performance (in those cases, good sizing tools are usually available from the specific vendors). Otherwise, you have to size your external shared storage using vendor-specific criteria (again, good sizing tools are available from most vendors).

In both cases, you will also have to decide which boot device will be used, as discussed in Chapter 4, Deployment Workflow and Component Installation. Common choices are two hard disks in RAID 1 or redundant SD cards. But new servers may also provide a SATA DOM device or a new controller card with two redundant M.2 sticks (this option is actually quite expensive when compared with the dual SD card option).

For the I/O cards, it all depends on your network and storage requirements and design; we will discuss this further in the related chapters.

All hardware must be VMware vSphere certified, that is, it must match the VMware Hardware Compatibility List (HCL) for the combination of hardware components, firmware versions, drivers, and the specific VMware vSphere version.

Currently, there are two types of drivers: native and vmkLinux-based. VMware plans to deprecate the vmkLinux APIs and the associated driver ecosystem with the next release of VMware vSphere. This means that in future versions, several drivers could change.

For now, vSphere 6.5 supports I/O drivers built and certified on ESXi 5.5, ESXi 6.0, and ESXi 6.5. For more information, see KB 2147697 (ESXi 6.5 I/O driver information: certified 5.5 and 6.0 I/O drivers are compatible with vSphere 6.5) at https://kb.vmware.com/kb/2147697.

Regarding the form factor: this is mostly dictated by space requirements or vendor options (for example, Cisco has a few fixed form factor options in its UCS family). But also consider the flexibility for future expansion of memory, devices, and cards, which can make it easier to scale or extend the life cycle of a server. Blade or modular solutions could be an option, but they usually make sense only above a minimum number of hosts, which can vary from vendor to vendor (usually more than four hosts). For hyper-converged solutions, it is quite common to have building blocks that are already sized and come in a (small) number of fixed form factors.
