Since VMware's early days as a virtualization company, one of the selling features that made vSphere stand out from its competitors has been vSphere High Availability. Simple to configure yet powerful, its ability to automatically restart virtual machines when a host fails is still, in my opinion, the most important part of the SDDC.

While this may seem nice and simple, dealing with high availability means managing failover resources. You will not be able to restart the virtual machines of a failed host if all your servers are running at 90% capacity (unless you have a very large cluster or a dodgy heterogeneous environment).


In this article, we will look into Admission Control, a safeguard feature of vSphere HA that helps manage failover resources in the cluster.

What are failover resources?

Let’s go back to the basics for a bit and talk about High Availability as a whole and where failover resources fit into it.

The point of vSphere High Availability is to automatically restart the VMs that were running on a host that was detected as down. vSphere HA has a few mechanisms to identify failures, such as:

  • Network heartbeat (Host monitoring): ESXi hosts exchange regular heartbeats with each other and detect when a host stops answering
  • Datastore heartbeat: ESXi hosts also communicate by writing to a special file stored on a datastore. This helps determine whether a host is actually down or merely unreachable because of a management network issue

On top of that, it can detect network isolation by pinging an isolation address, storage device and path failures thanks to various timeouts, and even VM and application failures by interacting with VMware Tools.

Now, when such an event happens and the response is configured accordingly, the virtual machines running on the failed host are automatically registered on other hosts by vSphere HA and restarted there. However, if not enough resources are available on the remaining hosts, vSphere HA won’t be able to restart the VMs. This is where failover resources come into play.

Failover resources are CPU and memory resources that are calculated by Admission Control and reserved for failure events, so the cluster always has enough capacity to restart failed VMs. This means these resources can’t be used to schedule workloads and will remain unused.

Admission Control often creates confusion among vSphere newcomers, but the best way to describe it would be:

“Admission Control simply prevents you from powering on or migrating virtual machines, or increasing memory or CPU reservations, if doing so meant that a host failure would leave you unable to restart all VMs.”

How are failover resources calculated by Admission Control?

In order to reserve resources for failover, Admission Control needs to know how much capacity to dedicate to it, and this is based on the concept of failures to tolerate (FTT) at the host level. In other words, how many concurrent host failures do you want to be able to tolerate?

Choosing this figure will mostly depend on your use case and the size of your cluster. While different environments will have different needs, here are a few considerations to help you make this decision:

  • How many hosts are in the cluster? You will not get the same choices whether you run a 2-node or a 20-node cluster. For instance, in a 2-node cluster you will be limited to FTT=1 (loss of 1 host) with no failover resources left after a failover; in a 3-node cluster you will be limited to FTT=2 with the same outcome, and so on
  • Is the cluster homogeneous? A homogeneous cluster means all the hosts have the same hardware configuration, as opposed to heterogeneous clusters where some hosts have larger amounts of memory or different CPUs, meaning more cores with different clock speeds. The issue with heterogeneous clusters is that Admission Control will always try to tolerate the loss of the biggest host. So, if you run a bunch of servers equipped with 64GB of RAM alongside a couple of bigger servers with 512GB of RAM, Admission Control will reserve a large portion of the cluster for failover, which will hurt overall resource usage
  • What level of resilience do you need? As mentioned above, Admission Control reserves resources based on the number of host failures to tolerate. How many hosts can you afford to lose given the criticality of the workloads running in the cluster?

When it comes to configuring the failures to tolerate, there are three different calculation policies you can choose from, although I suspect most folks out there use the cluster resource percentage policy.

Cluster resource percentage

As the name implies, this method reserves a percentage of the cluster’s resources for failover, which won’t be available to power on VMs.

For instance, say you are currently running workloads that use 50% of the cluster’s resources and Admission Control determines it needs to reserve 25% of the cluster’s resources for failover: you only have an extra 25% of resources available to use, even though cluster usage shows 50%.
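In PowerShell terms, the arithmetic of that example looks like this (all figures are hypothetical):

```powershell
# All figures are hypothetical and mirror the example above
$totalCapacityGB = 1024   # total usable memory across the cluster
$usedPercent     = 50     # consumed by running workloads
$reservedPercent = 25     # set aside by Admission Control for failover

$failoverReserveGB = $totalCapacityGB * ($reservedPercent / 100)
$availableGB       = $totalCapacityGB * ((100 - $usedPercent - $reservedPercent) / 100)

"Reserved for failover: $failoverReserveGB GB"
"Left to power on new VMs: $availableGB GB (25% of the cluster, not 50%)"
```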

[Image: Admission Control percentage policy]

Back in the day, most environments used the default slot policy, but it has limitations. As an alternative you could use the percentage-based policy, but you had to manually set the CPU and memory percentages to roughly match the equivalent of one host. This was not ideal because the figure wasn’t dynamic: if you added or removed hosts, or even added memory DIMMs, the percentage previously configured would be off and would either no longer protect you against a failure or would waste resources by reserving too much for failover. I used to have a custom PowerCLI script that calculated the ideal value and emailed me if the cluster was off, but that was a bit hacky.

Luckily, VMware improved the mechanism by adding automatic calculation in the form of a number of host failures to tolerate in the cluster resource percentage policy. As a result, you simply configure how many hosts you can tolerate losing, and Admission Control automatically calculates the percentage of cluster resources to reserve for failover capacity. This is, in my opinion, the best policy to use in most environments.
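As far as I know, Set-Cluster doesn’t expose the percentage policy directly, so one way to configure it is through the vSphere API objects from PowerCLI. A minimal sketch, assuming a hypothetical cluster name and that the ClusterFailoverResourcesAdmissionControlPolicy properties behave as I describe (verify in a lab first):

```powershell
# Sketch: switch a cluster to the percentage policy with automatic
# calculation based on host failures to tolerate (vSphere 6.5+).
# Assumes an existing Connect-VIServer session; 'Prod-Cluster' is a placeholder.
$cluster = Get-Cluster -Name 'Prod-Cluster'

$policy = New-Object VMware.Vim.ClusterFailoverResourcesAdmissionControlPolicy
$policy.AutoComputePercentages = $true   # let HA derive the percentages
$policy.FailoverLevel          = 1       # number of host failures to tolerate

$spec = New-Object VMware.Vim.ClusterConfigSpecEx
$spec.DasConfig = New-Object VMware.Vim.ClusterDasConfigInfo
$spec.DasConfig.AdmissionControlPolicy  = $policy
$spec.DasConfig.AdmissionControlEnabled = $true

# $true means 'modify': merge with the existing cluster configuration
$cluster.ExtensionData.ReconfigureComputeResource($spec, $true)
```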

Note that if a virtual machine has no reservations (i.e., a reservation of 0), defaults of 0MB of memory and 32MHz of CPU are applied.

Slot policy

The slot policy in vSphere Admission Control is a bit trickier to wrap your head around. It used to be the go-to way to configure clusters until the percentage-based policy became more user friendly. Here we also specify a number of host failures to tolerate, but the resources to reserve are computed differently.

In this mode, vSphere HA uses the concept of slots, a logical representation of memory and CPU resources, and calculates how many slots each host can hold. The principle is that there must be enough slots in the cluster to accommodate the loss of x hosts.

The slot size is made of a CPU and a memory component, calculated as follows:

  • CPU: Admission Control takes the CPU reservation of each powered-on virtual machine and selects the largest value. If no CPU reservation is set for a virtual machine, a default of 32MHz is used (this can be changed with the das.vmcpuminmhz advanced option, as shown in the sketch after this list)
  • Memory: Admission Control takes the memory reservation plus memory overhead of each powered-on virtual machine and selects the largest value; the default here is 0MB
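If 32MHz is too conservative for your workloads, das.vmcpuminmhz can be raised as an HA advanced option. A minimal PowerCLI sketch, assuming a hypothetical cluster name and that your PowerCLI version exposes the ClusterHA setting type:

```powershell
# Raise the CPU slot component default from 32MHz to 512MHz for VMs
# without a CPU reservation. 'Prod-Cluster' is a placeholder name.
$cluster = Get-Cluster -Name 'Prod-Cluster'
New-AdvancedSetting -Entity $cluster -Type ClusterHA -Name 'das.vmcpuminmhz' -Value 512 -Confirm:$false
```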

Note that if a few of your VMs are an order of magnitude larger than the others, the slot size will be much larger in order to protect those large VMs, which significantly increases the reserved failover capacity and reduces usable resources. This is one of the flaws of this mechanism. You can set an upper limit on the slot size, but you would then jeopardize the large VMs in case of failure.

Admission Control then calculates how many slots each host can hold by dividing the host’s available resources (CPU and memory) by the corresponding slot component size (rounded down); the smaller of the two numbers is the number of slots available on the host.

As you can see in the diagram below, the CPU reservation drastically impacts the slot size, which in turn impacts how many slots hosts can hold.

[Image: Admission Control slot size]

The failover capacity is then determined by finding out how many hosts (from largest to smallest, up to the number of failures to tolerate you set) can fail while still leaving enough slots to run all of the powered-on virtual machines.
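To make the algorithm concrete, here is a self-contained PowerShell sketch with made-up reservations and host sizes that mirrors the logic described above; it is a teaching aid, not what vSphere runs internally:

```powershell
# Made-up inventory: per-VM reservations (CPU in MHz, memory in MB
# including overhead). Not real cmdlet output.
$vms = @(
    @{ CpuMhz = 500;  MemMB = 1024 },
    @{ CpuMhz = 2000; MemMB = 4096 },  # largest reservations drive the slot size
    @{ CpuMhz = 0;    MemMB = 0    }   # no reservation -> defaults apply
)

# Slot size: largest CPU reservation (32MHz default) and largest
# memory reservation + overhead (0MB default)
$slotCpu = ($vms | ForEach-Object { [Math]::Max($_.CpuMhz, 32) } | Measure-Object -Maximum).Maximum
$slotMem = ($vms | ForEach-Object { $_.MemMB } | Measure-Object -Maximum).Maximum

# Made-up host capacities (available MHz / MB)
$vmHosts = @(
    @{ Name = 'esx01'; CpuMhz = 24000; MemMB = 65536  },
    @{ Name = 'esx02'; CpuMhz = 24000; MemMB = 65536  },
    @{ Name = 'esx03'; CpuMhz = 48000; MemMB = 131072 }
)

# Slots per host = min(floor(cpu / slotCpu), floor(mem / slotMem))
$slotsPerHost = foreach ($h in $vmHosts) {
    [PSCustomObject]@{
        Name  = $h.Name
        Slots = [Math]::Min([Math]::Floor($h.CpuMhz / $slotCpu), [Math]::Floor($h.MemMB / $slotMem))
    }
}

# Failover capacity: remove the largest hosts first and count how many
# can fail while the remaining slots still cover all powered-on VMs
$needed = $vms.Count
$sorted = $slotsPerHost | Sort-Object Slots -Descending
$failoverCapacity = 0
for ($i = 1; $i -lt $sorted.Count; $i++) {
    $remaining = ($sorted | Select-Object -Skip $i | Measure-Object -Property Slots -Sum).Sum
    if ($remaining -ge $needed) { $failoverCapacity = $i } else { break }
}

"Slot size: ${slotCpu}MHz / ${slotMem}MB"
"Failover capacity: $failoverCapacity host(s)"
```

With these made-up numbers, the 2000MHz/4096MB reservations define the slot size, and the cluster can tolerate two host failures while still holding enough slots for all three VMs.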

Dedicated failover hosts

The last vSphere Admission Control policy is a lot simpler to understand, as you simply dedicate specific hosts to failover. No virtual machines will be started on them during normal operation; once a host fails, the impacted virtual machines will be restarted on the dedicated failover hosts.
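On the configuration side, this policy also maps to a vSphere API object. A minimal PowerCLI sketch, with a hypothetical cluster and failover host name (treat the object and property names as assumptions to verify):

```powershell
# Sketch: designate a dedicated failover host via the vSphere API.
# 'Prod-Cluster' and 'esx04.lab.local' are placeholder names.
$cluster = Get-Cluster -Name 'Prod-Cluster'
$standby = Get-VMHost -Name 'esx04.lab.local'

$policy = New-Object VMware.Vim.ClusterFailoverHostAdmissionControlPolicy
$policy.FailoverHosts = @($standby.ExtensionData.MoRef)   # one or more hosts

$spec = New-Object VMware.Vim.ClusterConfigSpecEx
$spec.DasConfig = New-Object VMware.Vim.ClusterDasConfigInfo
$spec.DasConfig.AdmissionControlPolicy = $policy

$cluster.ExtensionData.ReconfigureComputeResource($spec, $true)
```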

[Image: Admission Control dedicated failover host]

While this is straightforward and provides good visibility on failover resources, it concentrates the virtual machines on fewer hosts, which brings a few concerns:

  • Higher consolidation ratio, which can lead to resource contention as the workloads are not balanced as well
  • Higher impact in case of host failure: because more VMs run on each host, more VMs are impacted when one fails
  • Hardware component wear will be uneven between the hosts running workloads and the idle failover hosts

The risks of disabling Admission Control

vSphere Admission Control is often a blurry concept for starters, and for good reason. The key is to understand that Admission Control doesn’t perform any high availability tasks itself; it simply acts as a guardrail to prevent you from starting more VMs than your environment could run should you lose one or more vSphere hosts.

Note that vSphere HA Admission Control can be deactivated, but this is not advised in production environments. It can be required at times, especially in smaller clusters when you need to perform maintenance operations, for instance. However, without it you have no assurance that the expected number of virtual machines can be restarted after a failure. As a result, it is highly recommended to re-enable Admission Control afterwards and not leave it in a deactivated state.

The impact of disabling Admission Control often materializes long after you disabled it and completely forgot about it. During this period, any VM can be powered on without guardrails, and people will keep provisioning resources as the hosts still look fine resource-wise; there are no alerts, so there is no reason to panic.

The problem is that when one of the hosts fails and your cluster only has enough resources to restart 25% of the VMs that were running on it, the remaining 75% will stay unavailable until you somehow bring more capacity into the cluster.

To make sure this doesn’t happen, you can implement a way to programmatically check that Admission Control is enabled on all clusters. One option is to run a PowerCLI script as a scheduled task or cron job that emails you should a cluster be detected with Admission Control disabled.
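As an illustration, here is a minimal sketch of such a check; the SMTP server and email addresses are placeholders, and it assumes an existing Connect-VIServer session:

```powershell
# Flag clusters where HA is on but Admission Control is deactivated.
$offenders = Get-Cluster | Where-Object {
    $_.HAEnabled -and -not $_.HAAdmissionControlEnabled
}

if ($offenders) {
    Send-MailMessage -SmtpServer 'smtp.example.com' `
        -From 'monitor@example.com' -To 'vsphere-admins@example.com' `
        -Subject 'vSphere HA Admission Control disabled' `
        -Body "Admission Control is disabled on: $($offenders.Name -join ', ')"
    # To remediate: $offenders | Set-Cluster -HAAdmissionControlEnabled $true -Confirm:$false
}
```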

Wrap up

vSphere High Availability is an awesome feature of the product and packs tons of capabilities and settings to play with. One of the best is Admission Control, which should, in my opinion, be enabled on every production cluster! Leaving it disabled can be fine in test clusters, for instance, where you want as many resources as you can get and don’t really care if things go down.

Admission Control is an exciting topic that can get quite complicated once you get interested in what is happening under the hood. I believe most environments nowadays use the percentage-based policy, but to each their own, and some environments may benefit from the slot policy or from dedicating hosts to failover. In any case, it is good to understand what Admission Control is and how it works.
