I often get questions about vSAN performance. Unfortunately it is not an easy thing to quantify because of the variables involved: servers, disks, IO controllers, networks, and continuous improvements in the software itself.
I have seen some impressive performance numbers from certified vSAN Ready Nodes that rival the best traditional SAN arrays. The wide adoption and falling cost of SSD and flash capacity have definitely changed the HCI landscape in general.
Just to give a feel for what vSAN can do, I decided to run some benchmarks on my spanking new SuperMicro boxes. These boxes are by no means “Enterprise”; they run consumer-grade NVMe M.2 and SATA 6Gbps SSDs. To be fair, it's not even top-of-the-line consumer gear — these are, at best, mid-range consumer flash drives.
Performance benchmarking was done using IOMeter with the following setup:
- vSphere 6.5 U1 & vSAN 6.6.1
- 3 x Windows Server VMs (each with 5 x VMDKs)
- FTT = 1, RAID 1, Default vSAN Storage Policy
- Single Disk Group per Host (1 x OCZ 128GB RD400 NVMe for cache & 1 x Samsung 950 Evo 512GB SSD for capacity)
- 1 x 10GbE Buffalo Switch
- 6 x Worker Threads per VMDK
- 8 x Outstanding IOs per Worker Thread
- Profile 1 : 8k, 100% Random Read (Hero Number)
- Profile 2 : 8k, 70% Read 30% Write, 70% Random 30% Sequential (Simulating Real-World Workloads)
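To put the setup above in perspective, the aggregate load the cluster sees can be worked out directly from the listed parameters. A quick back-of-the-envelope sketch (all figures taken from the list, nothing assumed beyond it):

```python
# Aggregate IOMeter load generated by the benchmark setup above.
vms = 3                 # Windows Server VMs
vmdks_per_vm = 5        # VMDKs per VM
workers_per_vmdk = 6    # IOMeter worker threads per VMDK
oio_per_worker = 8      # outstanding IOs per worker thread

total_workers = vms * vmdks_per_vm * workers_per_vmdk
total_outstanding_io = total_workers * oio_per_worker

print(f"{total_workers} worker threads, {total_outstanding_io} IOs in flight")
```

That works out to 90 concurrent worker threads keeping 720 IOs in flight across the cluster — a fairly heavy queue depth for three small hosts.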
To be honest, these are respectable numbers given the limited home lab setup. Again, it is all mid-range consumer-grade kit.
Important to note: this setup is a NON-CERTIFIED vSAN Ready Node, and none of the hardware used is officially on the VMware vSAN HCL. I would NOT recommend this for production use.
This benchmark run was my way of kicking the tyres on the new vSAN stack (and my home lab). I hope it has given you some insight into what vSAN can do, even on generic, non-optimised hardware. Imagine a world with NVMe & NVDIMM as standard...