You can choose among the VMDK provisioning types described earlier. Eager-zeroed thick (EZT) disks provide the best performance (including for first writes), but they occupy their entire provisioned capacity, including unused space, so they are the least space-efficient. Lazy-zeroed thick (LZT) is the most common choice. Thin disks require more management attention to avoid excessive storage over-provisioning.
For more considerations on the performance of the different virtual disk types, see the blog post at https://blogs.vmware.com/vsphere/2014/05/thick-vs-thin-disks-flash-arrays.html.
For the different virtual storage controllers, the following table recaps the types and their possible use cases:
| Controller type | VM type | Minimum virtual hardware | Use cases |
| --- | --- | --- | --- |
| BusLogic | Server | - | Very old Windows OS |
| LSI Logic Parallel (formerly LSI Logic) | Server/Desktop | - | Legacy Windows OS (2003) |
| LSI Logic SAS | Server/Desktop | VH7 | Windows OS (2008 and later) |
| PVSCSI | Server | VH7 | I/O-intensive workloads |
| AHCI/SATA | Server/Desktop | VH10 | Large number of virtual disks, but with limited performance |
| NVMe | Server | VH13 | Fast, low-latency storage |
Remember that a single VM supports a maximum of 4 SCSI controllers (15 devices per controller), 4 SATA controllers (30 devices per controller), and 4 NVMe controllers (60 devices per controller).
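As a minimal sketch of how these limits interact, the helper below (a hypothetical function, not part of any VMware API or tool) validates a proposed per-VM storage layout against the maximums stated above: 4 controllers per type, with 15, 30, and 60 devices per SCSI, SATA, and NVMe controller respectively.

```python
# Hypothetical validation helper; limits taken from the text above.
LIMITS = {
    "scsi": {"max_controllers": 4, "devices_per_controller": 15},
    "sata": {"max_controllers": 4, "devices_per_controller": 30},
    "nvme": {"max_controllers": 4, "devices_per_controller": 60},
}

def validate_layout(layout):
    """layout maps a controller type to a list of per-controller device
    counts, e.g. {"scsi": [15, 3], "nvme": [10]}.
    Returns a list of violation messages (empty if the layout is valid)."""
    problems = []
    for ctype, counts in layout.items():
        limit = LIMITS.get(ctype)
        if limit is None:
            problems.append(f"unknown controller type: {ctype}")
            continue
        if len(counts) > limit["max_controllers"]:
            problems.append(
                f"{ctype}: {len(counts)} controllers exceeds the maximum "
                f"of {limit['max_controllers']}")
        for i, n in enumerate(counts):
            if n > limit["devices_per_controller"]:
                problems.append(
                    f"{ctype} controller {i}: {n} devices exceeds the "
                    f"maximum of {limit['devices_per_controller']}")
    return problems
```

For example, `validate_layout({"scsi": [15, 3]})` returns an empty list (valid), while `validate_layout({"scsi": [16]})` reports that the first SCSI controller exceeds its 15-device maximum.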
Using multiple storage controllers has the advantage of separate I/O queues per controller, which can be useful for specific VMDKs that require more performance (for example, by placing them on PVSCSI or NVMe controllers).
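To illustrate the idea of spreading disks across controllers so that each controller's queue serves only a share of the I/O, here is a small sketch (the function name and layout are illustrative assumptions, not a VMware API) that distributes virtual disks round-robin over a given number of controllers:

```python
# Illustrative sketch: round-robin placement of virtual disks across
# controllers, so no single controller queue carries all the I/O.
def assign_disks(disks, num_controllers):
    """Return {controller_index: [disk names]} using round-robin placement."""
    placement = {i: [] for i in range(num_controllers)}
    for idx, disk in enumerate(disks):
        placement[idx % num_controllers].append(disk)
    return placement
```

For instance, `assign_disks(["data1.vmdk", "data2.vmdk", "log1.vmdk"], 2)` places `data1.vmdk` and `log1.vmdk` on the first controller and `data2.vmdk` on the second; in practice you would also separate disks by workload profile, not only by count.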
Note that adding different types of storage controllers to a VM that uses BIOS firmware can cause operating system boot issues. For more information, see SCSI and SATA Storage Controller Conditions, Limitations, and Compatibility at https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vm_admin.doc/GUID-5872D173-A076-42FE-8D0B-9DB0EB0E7362.html. VMs with EFI firmware are not affected. To manage different performance profiles for each VM, you can use shares or limits, with SIOC enabled.