Caveat : Running Nested vSAN on a vSAN Cluster

Big fan of William Lam's PowerCLI script for deploying multiple Nested ESXi VMs for demo and POC purposes (link). I had to tweak it slightly for what I wanted it to do. The script ran like a charm, with one exception.

A little background: the current physical ESXi environment hosting the Nested ESXi hosts is already running on vSAN. What I was trying to achieve was to create a bunch of Nested ESXi hosts and have them form a virtual vSAN cluster. Effectively, a Nested vSAN on vSAN.

There's no issue creating the Nested ESXi hosts with virtual drives. The usual settings apply: thin provisioning, default FTT, and so on. As part of the script, the ESXi hosts are added to a new Nested vSAN cluster, and the process "Auto Claims" the virtual drives to form the Cache and Capacity tiers.
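For reference, the same disk-group creation can also be done manually from the shell of a nested host. A minimal sketch, assuming one small cache disk and one larger capacity disk; the device names below are examples, not the actual devices from my lab:

```shell
# List the storage devices the nested host can see
# (nested virtual disks typically show up as mpx.* devices)
esxcli storage core device list

# Create a disk group manually:
#   -s = the disk to claim for the cache tier
#   -d = the disk to claim for the capacity tier
esxcli vsan storage add -s mpx.vmhba0:C0:T1:L0 -d mpx.vmhba0:C0:T2:L0
```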

That is where I ran into an error. The drives were never successfully claimed, and the disk groups were not created.

Disk Group screen shows none of the disks are in use

When I try to create a Disk Group and add the disks manually, it spits out an error that says "Failed to Reserve Disks". Another post addresses this SCSI reservation issue in nested ESXi environments (link) and recommended running the following command.

esxcli system settings advanced set -o /VSAN/FakeSCSIReservations -i 1
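To confirm the setting actually took effect on each nested host, the advanced option can be read back with a standard esxcli query:

```shell
# Read back the advanced option; "Int Value" should now show 1
esxcli system settings advanced list -o /VSAN/FakeSCSIReservations
```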

I tried again, but hit the same error.

Failed to reserve disk error

After a little troubleshooting, I finally found the issue. It turns out you can't create a Nested vSAN on top of an existing vSAN datastore. The Nested VMDKs have to live on a different datastore, be it NFS, iSCSI, or another vSAN datastore that is not hosting that particular VM.

Once I vMotioned the Nested VMs onto another datastore, the Disk Groups were created successfully. Something to note for those building nested vSAN environments.
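After moving the VMs, a quick sanity check from the shell of each nested host confirms the cluster membership and that the disks were finally claimed; these are standard esxcli vsan queries:

```shell
# Confirm the nested host has joined the vSAN cluster
esxcli vsan cluster get

# List the claimed disks; each disk in a healthy
# disk group should report "In CMMDS: true"
esxcli vsan storage list
```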