Just like standard physical servers, virtual machines have one or more virtual disks attached to them, formatted with whatever file system suits the particular workload. The default virtual disk type provided by vSphere is suitable for most virtual machines and is a safe way to get started, but it is important to understand the other methods and what is specific to each of them.

There are two ways to provision a virtual machine with storage, and the difference between them lies in how the virtual disk is presented to the VM.


Now let’s look at the VMware virtual disk types in detail.

Flat Disks

A flat disk is the traditional method used in most cases for a VM. Here the virtual disk and its data are stored on a VMFS datastore. The path to the disk configured in the VM points to a vmdk file named “vmname.vmdk”. This file does not hold any data and is called a descriptor file. Its purpose is to define which disk file to write to, which is how a disk can have an ordered chain of snapshots. The actual data of the disk is located in a file called “vmname-flat.vmdk”; there is one flat file per descriptor file.
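For reference, here is a minimal, illustrative example of what a descriptor file typically contains; the size, geometry and names below are made up, and the exact set of fields varies with the vSphere version:

    # Disk DescriptorFile
    version=1
    CID=fffffffe
    parentCID=ffffffff
    createType="vmfs"

    # Extent description: RW <size in 512-byte sectors> VMFS <file holding the data>
    RW 83886080 VMFS "vmname-flat.vmdk"

    # The Disk Data Base
    ddb.adapterType = "lsilogic"
    ddb.geometry.cylinders = "5221"
    ddb.geometry.heads = "255"
    ddb.geometry.sectors = "63"
    ddb.virtualHWVersion = "14"

The extent line is what ties the descriptor to the -flat file that actually holds the blocks; snapshot delta disks get their own descriptors whose parentCID points back to this one.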



Flat disks are very flexible: you can easily expand them, migrate them to another datastore and, of course, take snapshots. This is the recommended approach unless there is a specific requirement for an RDM (more on that later).

Now within the “flat” disk family, there are two storage provisioning methods that define how space is allocated on the datastore. It is very important to understand how this works, as it will dramatically alter the way you manage storage in your environment.

Thick Provisioning

With this method, you estimate how much storage the virtual machine will need for its entire life cycle, and the entire provisioned space is committed to the virtual disk. The benefit of such disks is that there is no risk of filling up an over-subscribed datastore, so they require less attention when managing storage allocation and usage. The main drawback is that they can quickly lead to underutilization of datastore capacity, as many VMs have a lot of free space on their volumes.


It is also worth noting that there are two ways of doing the initial provisioning of a thick disk. In both cases, the provisioned space is reserved on the datastore; the sketch after this list shows how each option maps to the virtual disk backing flags.

  • Thick provision lazy zeroed: space is allocated up front but each block is zeroed only on first write. Faster to provision initially, with a slight first-write latency penalty that is largely theoretical on modern storage
  • Thick provision eager zeroed: the empty space is zeroed out straight away, so provisioning may take a little longer
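As a rough illustration only, the following pyVmomi sketch builds a device spec for a new flat disk and shows how the provisioning choice maps to the backing flags (thin sets thinProvisioned, eager-zeroed thick sets eagerlyScrub, lazy-zeroed thick leaves both off). The controller key, unit number and sizes are placeholders and would need to match your VM:

    from pyVmomi import vim

    def new_disk_spec(size_gb, provisioning="lazy", controller_key=1000, unit_number=1):
        """Build an add-disk device spec; keys and numbers are placeholders."""
        backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
        backing.diskMode = "persistent"
        backing.thinProvisioned = (provisioning == "thin")
        backing.eagerlyScrub = (provisioning == "eager")  # thick eager zeroed
        # provisioning == "lazy" leaves both flags off -> thick lazy zeroed

        disk = vim.vm.device.VirtualDisk()
        disk.backing = backing
        disk.capacityInKB = size_gb * 1024 * 1024
        disk.controllerKey = controller_key
        disk.unitNumber = unit_number

        spec = vim.vm.device.VirtualDeviceSpec()
        spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
        spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
        spec.device = disk
        return spec

    # Applied to a VM object obtained through pyVmomi, for example:
    # vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[new_disk_spec(40, "eager")]))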

Thin Provisioning

To help avoid over-allocating storage, thin disks use only as much capacity as is currently needed and grow as more space is required over time. For a thin virtual disk, ESXi provisions the entire space required for the disk’s current and future activities but commits only as much storage as the disk needs for its initial operations.


Although you will save a lot of space on disk, there is a risk associated with thin disks that is important to understand. Because you can over-subscribe a datastore (meaning you can provision more storage than its physical capacity), if all VMs were to use up all, or nearly all, of their space, the datastore would fill up completely while the VMs still think there is space available. This situation causes the IOs of all thin provisioned disks to fail; to fix it you can either migrate virtual machines off the datastore or expand its capacity. Thick provisioned disks, however, retain access to their capacity, as the space was physically reserved.
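The vSphere API makes it fairly easy to keep an eye on this: a datastore’s summary reports its physical capacity, free space and the uncommitted space of thin disks. Here is a small pyVmomi sketch, assuming an existing service instance connection called si, that prints a rough over-commitment figure per datastore:

    from pyVmomi import vim

    def datastore_overcommit(si):
        """Print provisioned space versus physical capacity for each datastore."""
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.Datastore], True)
        for ds in view.view:
            s = ds.summary
            used = s.capacity - s.freeSpace
            provisioned = used + (s.uncommitted or 0)  # thin space not yet written
            ratio = 100.0 * provisioned / s.capacity
            print("%s: %.1f%% of physical capacity provisioned" % (s.name, ratio))
        view.DestroyView()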


Unmap

I also wanted to add a quick word about unmap here. The datastore only allocates what the VM is actually using, but this is only strictly true when the disk is first created. As the disk fills up, the datastore allocates space accordingly. However, when the guest inside the virtual machine deallocates space (by deleting a big file, for example), the freed space becomes available in the guest but is not reclaimed on the datastore, which is unaware of what just happened.
The process of reclaiming unused space is called unmap, and it needs to happen at every level of the storage stack for each layer to have an accurate picture of usage. You can find more information about unmap in this Vembu blog.
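As an illustration, on older VMFS-5 datastores the reclamation has to be triggered manually from the ESXi shell, while VMFS-6 (vSphere 6.5 and later) can reclaim space automatically; the guest also has to release its deleted blocks. The datastore name below is a placeholder:

    # Manual space reclamation on a VMFS-5 datastore, run from the ESXi shell
    esxcli storage vmfs unmap -l MyDatastore

    # Inside a Linux guest, release deleted blocks back to the thin virtual disk
    fstrim -v /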

Raw Device Mapping

These disks, also known as RDMs, are a lot less flexible, but they cover scenarios that are otherwise impossible with traditional virtual disks. An RDM provides a mechanism for a virtual machine to have direct access to a LUN on the physical storage layer. It is actually a mapping file stored on a VMFS datastore that acts as a proxy for a raw physical storage device, much like a symbolic link from a VMFS volume to a raw LUN. The mapping file contains only metadata for managing and redirecting disk access to the LUN; it does not hold any data.


While you would only use raw devices on rare occasions, there are a few use cases that cannot be handled any other way. Note that there can be other uses for RDMs, but these are, I believe, the most common ones.

SAN management software in a virtual machine

Some storage arrays require the use of what are referred to as gatekeeper LUNs. These LUNs serve as a way to interact with a storage array when it does not have a management interface. This is the case for EMC’s VMAX systems.

Windows Server Failover Clustering (WSFC)
Formerly known as Microsoft Cluster Server, WSFC requires an RDM to build a Windows cluster between two virtual machines. The active node of a Windows cluster needs to be able to lock a clustered volume to ensure consistency. This operation is performed using SCSI lock commands, which are intercepted by VMFS-5/6 when a traditional virtual disk is used. This interception mechanism was introduced with VMFS-5 to fix a design flaw of VMFS-3, where a VM would lock the whole LUN (datastore) during a write operation.

There are two compatibility modes for raw devices, and there is actually a VMware KB article dedicated to differentiating them. A short pyVmomi sketch after the two lists below shows how each mode is selected when an RDM disk is added.

Physical compatibility mode (pass-through RDMs)

Also known as a pass-through RDM, this compatibility mode gives the VM the most direct access to the LUN:

  • Lower-level control over the LUN, as more SCSI commands are passed through; this can be required by SAN management software, and all the underlying hardware characteristics are exposed
  • All SCSI commands are passed through except REPORT LUNS
  • The LUN and the in-guest volume can be expanded without a reboot
  • Virtual machine snapshots and cloning are not supported
  • Can be used for clusters spanning physical and virtual machines

Virtual compatibility mode

This mode is a little less common among RDM use cases, as it sits in between traditional disks and pass-through RDMs.

  • Only READ and WRITE commands are sent to the mapped device; the rest are virtualized by the VMkernel
  • From the guest’s point of view, the disk appears as if it were a traditional flat virtual disk, and the hardware characteristics are hidden
  • Expanding the LUN capacity requires a reboot of the virtual machine
  • Can be converted to a flat disk with a Storage vMotion operation
  • Snapshots, cloning and advanced file locking for data protection are supported
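To show how the two modes differ at the API level, here is a minimal pyVmomi sketch that builds an RDM device spec; the device path, controller key and unit number are placeholders, and the only difference between the two modes is the compatibilityMode value on the raw disk mapping backing:

    from pyVmomi import vim

    def new_rdm_spec(device_name, physical=True, controller_key=1000, unit_number=2):
        """Build an add-RDM device spec; the path and keys are placeholders."""
        backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo()
        backing.deviceName = device_name  # e.g. "/vmfs/devices/disks/naa.<id>"
        backing.compatibilityMode = "physicalMode" if physical else "virtualMode"
        backing.diskMode = "independent_persistent" if physical else "persistent"

        disk = vim.vm.device.VirtualDisk()
        disk.backing = backing
        disk.controllerKey = controller_key
        disk.unitNumber = unit_number

        spec = vim.vm.device.VirtualDeviceSpec()
        spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
        spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
        spec.device = disk
        return spec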

