In this blog, we will discuss the remarkable ascent of cloud native technology and how it fits into the virtualization ecosystem. In recent years, the IT landscape has witnessed a profound transformation, with organizations of all sizes adopting cloud-native principles and solutions to become more agile and ship software faster. Born out of the need to address the limitations of traditional infrastructure, the rise of cloud native has revolutionized how software is developed, deployed, and operated.

At its core, cloud native is not just a buzzword but a holistic approach that embraces containerization, microservices architecture, continuous integration and deployment, and dynamic orchestration. By leveraging the power of public cloud providers and on-premises platforms, this paradigm enables businesses to achieve unprecedented scalability, flexibility, and efficiency. With a relentless focus on resilience, automation, and fast-paced innovation, cloud native has become the preferred choice for modern application development, empowering enterprises to stay competitive in a rapidly evolving digital landscape.

Drawbacks of running Kubernetes on Bare-Metal vs. Virtual Machines

As organizations increasingly embrace Kubernetes as the de facto container orchestration platform, the choice between deploying it on bare metal, in the cloud, or in virtual machines is an architectural decision with several deciding factors. All three approaches have their advantages, but in this section we'll focus on the drawbacks of running Kubernetes on bare metal compared to virtual machines.

Resource Utilization and Scalability

One of the significant challenges of running Kubernetes on bare metal is resource utilization and scalability. Bare-metal servers have a fixed set of physical resources, which can lead to inefficient utilization when running containerized workloads with varying resource demands. In contrast, while virtual machines carry the slight added overhead of the hypervisor, they allow for more flexible allocation of resources, enabling better utilization and scalability by dynamically adjusting resource allocations based on workload requirements.

Projects like cluster-autoscaler even allow an operator to dynamically add nodes to the cluster based on demand, thanks to integrations with a wide variety of providers.
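
For illustration, here is a minimal sketch of how cluster-autoscaler might be deployed; the image tag, the AWS provider choice, the node-group name, and the service account are assumptions to adapt to your own environment (RBAC wiring is omitted):

```yaml
# Minimal cluster-autoscaler Deployment sketch (illustrative only).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      serviceAccountName: cluster-autoscaler   # assumes RBAC is set up separately
      containers:
        - name: cluster-autoscaler
          image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.27.2
          command:
            - ./cluster-autoscaler
            - --cloud-provider=aws              # one of many supported providers
            - --nodes=3:10:my-node-group        # min:max:node-group name (hypothetical)
            - --scale-down-unneeded-time=10m
```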

Hardware Management and Maintenance

Managing bare-metal infrastructure requires substantial effort and expertise. Keeping firmware and operating system versions current while balancing return on investment (ROI) against modern hardware is no small feat. It involves tasks such as hardware provisioning, software updates, and physical replacements in case of failures. Virtual machines, on the other hand, abstract the underlying hardware, simplifying infrastructure management and reducing the burden of hardware maintenance.

High Availability and Fault Tolerance

While Kubernetes offers solid high availability capabilities by rescheduling workloads onto healthy nodes, bare metal makes fault tolerance harder to achieve. When a bare-metal node fails, the entire set of containers running on it goes down with it, and replacement capacity cannot be provisioned instantly, resulting in significant downtime. Virtual machines, however, can be spread across multiple physical hosts, making it easier to implement high availability and fault tolerance by migrating VMs to healthy hosts in case of failures.

Deployment Time and Flexibility

Setting up and deploying Kubernetes on bare metal can be a time-consuming process. Provisioning and configuring the hardware, installing the OS, and configuring the networking stack can add considerable lead time before Kubernetes becomes operational. You can speed up the process considerably by leveraging Infrastructure-as-Code (IaC) automation tools such as Ansible together with PXE booting, but maintaining that tooling adds significant overhead of its own. Conversely, virtual machines can be spun up quickly from pre-configured templates, and their hardware configurations can be tailored to specific needs dynamically, thus reducing deployment time and increasing flexibility.
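
As a sketch of what that automation might look like, here is a hypothetical Ansible play that prepares freshly PXE-booted bare-metal hosts for Kubernetes; the host group and package names are assumptions, and the actual cluster bootstrap (kubeadm init/join, CNI setup) is omitted:

```yaml
# Hypothetical node-preparation play for bare-metal Kubernetes hosts.
- name: Prepare bare-metal Kubernetes nodes
  hosts: k8s_nodes
  become: true
  tasks:
    - name: Disable swap (required by the kubelet)
      ansible.builtin.command: swapoff -a

    - name: Install container runtime and Kubernetes packages
      ansible.builtin.apt:
        name:
          - containerd
          - kubelet
          - kubeadm
        state: present
        update_cache: true

    - name: Enable and start the kubelet
      ansible.builtin.systemd:
        name: kubelet
        enabled: true
        state: started
```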

Resource Overhead

While virtual machines provide advantages in resource allocation and isolation, they also introduce some resource overhead. The additional layer of virtualization can consume extra CPU and memory compared to running directly on bare-metal. However, modern virtualization platforms such as vSphere 8.0 have significantly reduced this overhead compared to traditional virtualization approaches.

How and where to run Kubernetes

Most software vendors operating in the virtualization space now offer Kubernetes support in one way or another. In this section, we will cover various infrastructure providers supporting Kubernetes.

VMware Tanzu

After acquiring Pivotal and several other cloud native companies, VMware rebranded all of these products under the Tanzu umbrella. There are several implementations of Tanzu on vSphere, which can make it quite confusing to understand which one you need. VMware has also extended the portfolio with satellite products such as Tanzu Application Platform (TAP), Tanzu Application Service (TAS), and so on.

VMware Cloud Director

VMware Cloud Director (VCD) is the private cloud platform most commonly used by VCPP cloud providers nowadays. The Container Service Extension (CSE) plugin for VCD offers a UI inside VMware Cloud Director to create and manage Kubernetes clusters. Note that since CSE 4.0, it is based on the open-source Cluster API project, which lets you manage Kubernetes clusters from another Kubernetes cluster (the management cluster).
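
To give a feel for this declarative model, here is a minimal sketch of a Cluster API Cluster object as you might apply it on a management cluster; all names are hypothetical, and the exact infrastructure kind and API versions depend on the Cluster API provider release in use:

```yaml
# Minimal Cluster API "Cluster" sketch applied on a management cluster.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: tenant-cluster-01        # hypothetical workload cluster name
  namespace: org1
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["100.96.0.0/11"]
    services:
      cidrBlocks: ["100.64.0.0/13"]
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: tenant-cluster-01-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: VCDCluster             # from the Cluster API provider for VCD; version may differ
    name: tenant-cluster-01
```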

OpenStack

The famous open-source private cloud platform also supports Kubernetes through Cluster API. The OpenStack Cluster API provider (CAPO) is even more powerful, as it can automatically provision the networks and routers needed to isolate Kubernetes clusters from each other.

OpenShift

Backed by Red Hat, OpenShift is a unified platform to build and deploy applications at scale. OpenShift was among the early players in the Kubernetes ecosystem and has established itself as one of the market leaders, with both a cloud service and a self-managed edition of the product.

Managed Kubernetes

Another way to run Kubernetes in various environments is to use a managed Kubernetes offering from a specialized cloud native company. As opposed to the likes of AKS or EKS, the service provider installs the cloud native platform on your infrastructure and manages it for you, essentially acting as an extension of your platform engineering team. Giant Swarm is a great example of a managed Kubernetes provider that supports many infrastructure platforms.

Persistent Storage

Containers are ephemeral by design, so there is no real notion of "restarting" a container; instead, containers are destroyed and recreated, meaning whatever data is stored in the container's writable layer is lost should the pod be deleted. As a result, Kubernetes supports the Container Storage Interface (CSI) as a standard to interface with infrastructure providers and leverage their storage capabilities.

For instance, the CSI for VMware Cloud Director lets you create and attach named disks to the VMs backing your pods, the CSI for vSphere creates Cloud Native Storage (CNS) volumes, and OpenStack offers the Cinder CSI to provision volumes.
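
As a sketch of how this looks in practice, the following StorageClass targets the vSphere CSI driver; the storage policy name is hypothetical, and the provisioner string changes per platform (for example, cinder.csi.openstack.org on OpenStack):

```yaml
# Example StorageClass backed by the vSphere CSI driver (sketch).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-fast
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "K8s Gold Policy"   # hypothetical vSphere storage policy
reclaimPolicy: Delete
allowVolumeExpansion: true
```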

As an example, here is the workflow of the VMware Cloud Director CSI (a manifest sketch follows the list):

  1. Install and configure the Named Disks Container Storage Interface (CSI) driver in the Kubernetes Cluster
  2. Create a StorageClass that references the CSI driver
  3. Create a PersistentVolumeClaim (PVC) to request a disk using the StorageClass
  4. The CSI driver will request VCD to create a Named Disk
  5. VCD creates a Named Disk
  6. Attach the PV to a Pod
  7. The CSI driver attaches the named disk to the VM on which the pod is running
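
In practice, steps 3 and 6 boil down to two manifests. The sketch below assumes a StorageClass named vcd-named-disk backed by the VCD CSI driver; all names are hypothetical:

```yaml
# Request a disk through the (hypothetical) vcd-named-disk StorageClass...
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: vcd-named-disk
  resources:
    requests:
      storage: 10Gi
---
# ...then mount the resulting volume in a pod.
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: nginx:1.25
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc
```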

The case of Load balancers

When it comes to Kubernetes, load balancers are critical. There are two main use cases for them:

  • Offer a single point of contact to the Kubernetes API when running three control plane nodes (highly recommended)
  • Create Services of type LoadBalancer in the cluster. These are most often used to expose ingress controllers (Nginx, Traefik…); a sketch follows below
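
As a minimal sketch, a LoadBalancer Service exposing an ingress controller might look like this; the namespace and selector labels are assumptions that depend on how the controller was installed:

```yaml
# Minimal Service of type LoadBalancer in front of an ingress controller.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx      # assumes the controller lives here
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # hypothetical controller labels
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```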

Like the CSI for storage, the implementation of load balancers is specific to the infrastructure the cluster runs on. The Cloud Provider Interface (CPI) is the component that interacts with the infrastructure endpoint to drive the creation of the required resources (load balancer pools, backend services, networks).

One caveat I'd like to highlight here concerns vSphere. As you know, VMware vSphere is only a hypervisor, meaning it has no notion of load balancers. Customers usually implement NSX-T to achieve these capabilities. As a workaround, the maintainers included kube-vip to expose the Kubernetes API across multiple control plane nodes.
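
For reference, here is a hedged sketch of kube-vip running as a static pod on a control plane node; the VIP address, interface, and image tag are assumptions, and the manifest generated by the kube-vip tooling itself should be preferred:

```yaml
# Sketch of kube-vip advertising a virtual IP for the Kubernetes API.
apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
    - name: kube-vip
      image: ghcr.io/kube-vip/kube-vip:v0.6.4
      args: ["manager"]
      env:
        - name: address
          value: "192.168.1.100"   # hypothetical VIP for the API server
        - name: vip_interface
          value: "eth0"            # assumed host NIC
        - name: vip_arp
          value: "true"
        - name: cp_enable          # enable control plane VIP mode
          value: "true"
      securityContext:
        capabilities:
          add: ["NET_ADMIN", "NET_RAW"]
```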

To take the example of VCD again, when a Service of type LoadBalancer is created, the CPI will:

  • Grab an IP from the external IP pool
  • Create a load balancer pool containing the IPs of all the worker nodes
  • Create a virtual service with the selected IP and attach it to the pool.

Wrap up

The combination of Kubernetes and virtualization brings together the best of two powerful worlds in the realm of modern IT infrastructure. Through this blog, we've explored the symbiotic relationship between Kubernetes, the industry standard for container orchestration, and virtualization, a space largely pioneered by VMware.

Whether you’re a seasoned IT professional or just embarking on your containerization journey, the combination of Kubernetes on a virtualization platform opens up a world of possibilities. It simplifies the complexities of managing containers, offers reliable resource allocation, and provides a robust foundation for building and running applications at scale.

Read More:
Virtualization Trends Series: Top 5 VMware-Led Open-Source Projects: Part 11
