Read on:

Virtualization Trends Series Part 1: A Brief History of Virtualization

Welcome back to part 2 of our virtualization trends series. In this article, we discuss where hypervisors fit in the ever-evolving IT landscape. Gartner estimates that, by 2025, over 95% of new digital workloads will be deployed on cloud-native platforms, up from 30% in 2021. As a result, it is only fair to wonder whether this shift from virtual machines to microservices will diminish the use and adoption of virtualization, and what to expect from it.


Hypervisors have long been a critical component of on-premise infrastructure environments, allowing organizations to manage and abstract physical resources and run multiple operating systems and applications on a single physical machine. However, with the growing popularity of cloud services and cloud native technologies, the role of hypervisors with respect to on-premise infrastructure is being called into question.

The Evolution from Traditional Virtualization to Cloud Services and Cloud Native

In the past decade, there has been a shift from traditional virtualization to the use of cloud services and cloud native technologies. The evolution from traditional monolithic applications to cloud native workloads has been driven by the increasing adoption of cloud services and the need for organizations to build and deploy applications in a more agile and scalable manner.

Traditionally, monolithic applications were built as a single, self-contained unit that included all the necessary code, libraries, and dependencies to run the application. These applications were often difficult to update and scale, as any changes to the codebase required the entire application to be redeployed.


In contrast, cloud native workloads are designed to be built and deployed in a more modular and scalable manner. Cloud native applications are typically composed of small, independent microservices that each handle a specific function or aspect of the application. This allows organizations to update and deploy individual microservices independently, rather than having to redeploy the entire application.

Cloud native technologies, such as containers and Kubernetes, have also made it easier for organizations to build and deploy cloud native applications in a more agile and scalable manner. Containers allow organizations to package and deploy applications in a lightweight and portable manner, while Kubernetes provides a platform for managing and orchestrating containerized applications.
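To make this concrete, here is a minimal, hypothetical Kubernetes Deployment manifest (the service name, image, and replica count are placeholders). It illustrates how a single microservice is packaged as a container and scaled independently of the rest of the application:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout-service        # one microservice, deployed independently
spec:
  replicas: 3                   # scale this service without touching others
  selector:
    matchLabels:
      app: checkout
  template:
    metadata:
      labels:
        app: checkout
    spec:
      containers:
        - name: checkout
          image: registry.example.com/shop/checkout:1.4.2  # hypothetical image
          ports:
            - containerPort: 8080
```

Updating just this microservice is a matter of changing the image tag and re-applying the manifest; Kubernetes rolls out the new version without redeploying the rest of the application.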

In short, the move from traditional monolithic applications to cloud native workloads is about agility and scalability, enabled by the adoption of cloud services and cloud native technologies such as containers and Kubernetes.

Hypervisors vs container hosts

One way in which hypervisors and cloud native technologies differ is in how they abstract physical resources. Hypervisors abstract physical resources at the hardware level, allowing multiple operating systems to run on a single physical machine. Container hosts, on the other hand, abstract resources at the operating system level, allowing multiple containers to run on a single operating system while sharing the host kernel, libraries, and binaries.

[Image: Virtualization vs containerization]

Technologies such as Kubernetes and containers make it easier for organizations to build and deploy applications with far more flexibility, which may also reduce the need for traditional virtualization as the number of mixed virtual machine workloads should logically go down. On the other hand, one of the main benefits of virtualization is the ability to easily allocate and reallocate resources to different virtual machines, depending on the needs of their applications. This can be particularly useful for running Kubernetes clusters, as it allows organizations to scale their clusters up or down as needed without having to provision new physical servers, which is cumbersome and time-consuming.

As mentioned previously, virtualization abstracts the physical hardware by offering a pool of resources from which to provision virtual machines, on which operating systems and applications run. This can make it easier to manage and maintain Kubernetes clusters, as it allows organizations to focus on managing the virtual machines rather than the underlying hardware (lifecycle, firmware updates, and so on). Overall, the flexibility and resource allocation capabilities of virtualization make it a good fit for running Kubernetes clusters.

While the cloud native buzzword is used everywhere, many organizations think they need Kubernetes to be more agile and deploy faster. However, this comes at a cost, as Kubernetes is a highly complex solution that encompasses every area of the application stack, from the low-level infrastructure up to building and running the app. Managing everything from start to finish is a massive undertaking; in fact, hundreds of tasks that each require expertise in their own areas, such as connectivity, observability, security, delivery, and operations. It may be worth it for large companies to invest in a team to manage this area of the SDDC; in most cases, however, it will be more beneficial to either go with a managed Kubernetes provider or use hyperscaler cloud services directly. It can also be argued that small development teams with unclear roadmaps may benefit from going the legacy monolithic route first to get the business logic down, which can later be refactored into microservices.

Should we discuss virtualization or virtualization admins?

It is important to note that, while the future of hypervisors in on-premise infrastructure may be uncertain, the role of virtualization admins is not going away. Virtualization admins are responsible for managing and maintaining virtualized environments, and they will continue to play a critical role in ensuring the smooth operation of on-premise infrastructure, which remains necessary in a number of cases such as data locality, large compute requirements, low latency, and cost.

Note that the role of virtualization admins is not limited to virtualized environments. VI admins are also responsible for ensuring the smooth operation of an organization’s IT infrastructure as a whole, which includes tasks such as monitoring systems, troubleshooting issues, and implementing security measures.

Therefore, while the adoption of cloud services and cloud native technologies may change the way that virtualization is used in some cases, it is unlikely to jeopardize these functions. However, it will be important for virtualization admins to stay up to date with IT trends in order to remain relevant and future-proof their skills.

VMware has actually played a role in that transition by pushing its Tanzu portfolio, which got a wider population of VI admins interested in the Kubernetes ecosystem. However, we do recommend that you start your Kubernetes learning journey with the base (vanilla) product rather than Tanzu, as the latter obfuscates a lot of what happens behind the scenes, which will hinder your learning process.

What should Virtualization admins learn?

There are several things that virtualization admins can do to remain relevant in the evolving IT landscape:

  1. Learn about cloud services and how to integrate them with their existing on-premise infrastructure: As more organizations adopt cloud services, it will be important for virtualization admins to have a good understanding of these technologies and how to integrate them with their environment. This may involve learning about different cloud providers, such as Amazon Web Services (AWS) and Microsoft Azure, as well as tools and technologies for managing and deploying applications in the cloud.
  2. Learn about containerization and Kubernetes: Containerization and Kubernetes are becoming increasingly popular as a way to build and deploy applications in a more agile and scalable manner. Virtualization admins who are familiar with these technologies will be well-positioned to support and manage containerized applications in their organization.
  3. Stay up to date with developments in security: Cybersecurity is a critical concern for all organizations, and virtualization admins play an important role in ensuring the security of their organization’s IT infrastructure. Staying up-to-date with developments in security, such as new threats and best practices for protecting against them, will help virtualization admins remain relevant in this increasingly important area.
  4. Learn about automation and infrastructure as code: Automation and infrastructure as code are becoming increasingly important as organizations look to streamline and automate their IT operations. Virtualization admins who are familiar with these concepts will be well-positioned to help their organization implement and manage these technologies.
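As a flavor of what infrastructure as code looks like in practice, here is a short, hypothetical Ansible playbook sketch (the `mgmt_vms` inventory group and the specific tasks are assumptions). Instead of logging into each VM by hand, the desired state is declared once and applied to the whole fleet:

```yaml
# Hypothetical playbook; assumes an inventory group named "mgmt_vms"
# containing Debian/Ubuntu guest VMs.
- name: Keep management VMs patched and configured
  hosts: mgmt_vms
  become: true
  tasks:
    - name: Apply pending package updates
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist

    - name: Ensure time synchronization is running
      ansible.builtin.service:
        name: chrony
        state: started
        enabled: true
```

Because the playbook is idempotent, re-running it only changes hosts that have drifted from the declared state, which is the core idea behind infrastructure as code.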

Overall, it will be important for virtualization admins to stay up-to-date with developments in cloud services, containerization, security, and automation in order to remain relevant in the evolving IT landscape.

Hypervisors to run Kubernetes

As you may have gathered, cloud services and Kubernetes are two areas of the evolving IT landscape that can affect how hypervisors, as we know them, are needed and used. However, just as hypervisors have been rock-solid providers of compute, storage, and networking to virtual machines, they can also serve cloud-native workloads by running Kubernetes nodes in VMs. The benefits include unparalleled flexibility and a lot of leeway when things go wrong during upgrades or other infrastructure events, as you can easily re-provision the nodes.

If we take a VMware environment running K8s nodes as an example, below you can see the VMware view of the setup, which looks similar to any other infrastructure, except that virtual disks are created slightly differently, through a Kubernetes CSI (Container Storage Interface) driver.

[Image: VMware vSphere running kubernetes nodes as VMs]
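For illustration, this is roughly what that storage request looks like from the Kubernetes side; the StorageClass name below is an assumption, but a claim like this is what ultimately causes the CSI driver to create a virtual disk on the vSphere datastore:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: vsphere-csi   # assumed StorageClass backed by the vSphere CSI driver
  resources:
    requests:
      storage: 5Gi                # provisioned as a virtual disk on the datastore
```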

And below you can see the Kubernetes view and what it looks like from the point of view of the Kubernetes cluster.

[Image: View from the Kubernetes cluster]

Cluster API is the most popular open-source project for provisioning Kubernetes clusters onto specific infrastructures. You can read our blog on how it relates to the now-defunct Tanzu Community Edition.

[Image: Cluster API in Tanzu Community Edition]

Companies like Giant Swarm that offer managed Kubernetes to their clients rely heavily on Cluster API to provision Kubernetes clusters on infrastructure providers such as GCP, AWS, Azure, vSphere, VMware Cloud Director, OpenStack and so on.
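As a rough sketch of how this declarative provisioning works (names are hypothetical and this is far from a complete set of manifests), a Cluster API workload cluster on vSphere pairs a generic Cluster object with a provider-specific infrastructure object:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster              # hypothetical cluster name
spec:
  infrastructureRef:              # delegates machine provisioning to the vSphere provider
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: VSphereCluster
    name: demo-cluster
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereCluster
metadata:
  name: demo-cluster
spec:
  server: vcenter.example.com     # assumed vCenter endpoint
```

The same Cluster object can point at a different provider (AWS, Azure, OpenStack, and so on), which is what makes Cluster API attractive to managed Kubernetes companies targeting many infrastructures.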

Conclusion

In conclusion, while the future of hypervisors in on-premise infrastructure may be uncertain, it is clear that virtualization admins will continue to play a critical role in managing and maintaining virtualized environments. As cloud services and cloud native technologies become more prevalent, it will be important for virtualization admins to stay up to date with these technologies and expand their reach from the purely infrastructure realm a little further into the software lifecycle world. So, the future of hypervisors in on-premise infrastructure looks bright, as they will continue to play a crucial role in running cloud-native workloads for those customers who stick with them.
