2022 came to a close with a number of big events in the IT infrastructure landscape, the biggest obviously being the acquisition of VMware by Broadcom. The announcements made by the big players, and the strategic directions they have taken, reflect a shift that has happened over the last few years, broadening the horizon of virtual environments into areas like cloud computing and cloud native workloads, among other things.

In this new series of articles, we will discuss trends in the virtualization landscape and touch on various topics related to this area of the SDDC. Due to my background in VMware environments, the content will be largely biased toward this vendor, but most concepts translate to its commercial competitors; that is how trends work. When someone does something new that works, the others follow with a competing product that brings added features and improvements, feeding the cycle of continuous innovation across the board.


However, before diving into the cutting-edge stuff and where we are going, it is useful to have a grasp of where we come from and how things evolved up to now. You will probably notice that the history of virtualization, while it goes back further, is tightly coupled with the history of cloud computing, as both pursue a similar end goal through different fundamental use cases.

What is virtualization?

First, let's quickly talk about what virtualization actually is. We'll see in more detail how it came about, but I want to get everyone on the same page before cracking on.

When non-IT folks see a rack of servers, they assume only a dozen or so servers are running, since that is how many physical machines they can spot in the rack. However, chances are you would be looking at 200 or more virtual machines (VMs) running on that hardware.


Virtualization lets the hardware resources of the underlying physical server be shared and consumed by the virtual servers running on it. This is beneficial in 95% of cases (wet-finger estimate), unless the application running on the server can concurrently make use of all the physical cores of the CPU(s) and all the memory installed. In the majority of cases, though, not all servers run hot at the same time: some are only used at a fraction of their capacity, some are only required for a limited period of time… In which case, sharing resources with a hypervisor offers the flexibility to assign compute and memory on an as-needed basis.
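To make the resource-sharing idea concrete, here is a minimal, purely illustrative Python sketch of the kind of capacity check an admin does when consolidating workloads onto one host. All host and VM figures (core counts, RAM sizes, the 4:1 vCPU overcommit ratio) are hypothetical assumptions, not sizing guidance.

```python
# Toy capacity check: can these VMs share one physical host?
# All numbers below are hypothetical and for illustration only.

host_cores = 32          # physical CPU cores on the host
host_ram_gb = 256        # physical RAM on the host
cpu_overcommit = 4.0     # assumed tolerable vCPU:pCPU ratio

# (name, vCPUs, RAM in GB, average CPU utilization)
vms = [
    ("web-01",   4, 16, 0.20),
    ("db-01",    8, 64, 0.35),
    ("app-01",   4, 32, 0.15),
    ("batch-01", 8, 32, 0.10),
]

total_vcpus = sum(vcpu for _, vcpu, _, _ in vms)
total_ram_gb = sum(ram for _, _, ram, _ in vms)
avg_core_demand = sum(vcpu * util for _, vcpu, _, util in vms)

print(f"vCPUs allocated: {total_vcpus} on {host_cores} cores "
      f"({total_vcpus / host_cores:.1f}:1, limit {cpu_overcommit:.0f}:1)")
print(f"RAM allocated:   {total_ram_gb} of {host_ram_gb} GB")
print(f"Average demand:  {avg_core_demand:.1f} busy cores")

fits = (total_vcpus <= host_cores * cpu_overcommit
        and total_ram_gb <= host_ram_gb)
print("These VMs fit on the host" if fits else "Another host is needed")
```

The point is not the arithmetic itself but the principle: because the VMs rarely peak at the same time, far more virtual capacity can be allocated than physically exists, and the hypervisor schedules the real cores and memory as demand arrives.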

The benefits of virtualization include:

  • Consolidation with better resource usage
  • Reduced energy consumption
  • Centralized management
  • Easier server provisioning and lifecycle management
  • High availability in clusters

“Virtualization improves resource utilization.”

Mainframe Virtualization

Back in the 1960s, computers and servers as we know them today didn't exist yet. Everything was centralized in a large and expensive mainframe computer, costing north of $2M. Mainframes were introduced by IBM in the early 1950s and came with no operating system; operators had to manually write jobs to be scheduled in a work queue. After customers developed a first iteration of an operating system for an IBM mainframe, Big Blue built on it, which eventually led to the famous System/360 and its operating system. This was the first mainframe able to address both commercial and scientific applications across a wide variety of use cases.

In the early 1970s, IBM created a new operating system called VM (what you know nowadays as z/VM), introducing the first concept of a hypervisor, the Control Program. This is where virtualization first happened: it offered a way to partition the mainframe into multiple virtual machines to enable multi-tasking. The benefit was better utilization of the (very expensive) mainframe resources and the ability for users to work concurrently.

Rise of x86 architecture

When the x86 architecture became mainstream with cheap processors, along with the rise of Unix OSes and client-server applications, innovation in the virtualization space somewhat stagnated for a while as the use cases could be addressed in a different, cheaper way.

Regardless, there were significant developments with:

  • Simultask, a Virtual Machine Monitor (VMM) developed in the mid-1980s by Locus Computing Corporation with AT&T
  • SoftPC by Insignia Solutions in the late 1980s, which let you run MS-DOS applications on UNIX and Mac OS, and later Windows software as well
  • OS/2 Virtual DOS Machine (VDM) by IBM in the early 1990s, capable of virtualizing DOS, Windows and other 16-bit OSes

While this worked out very well for most of the 1990s and into the early 2000s, running bare-metal servers dedicated to a specific service or application brought up several challenges:

  • Wasted resources, as most physical servers sit at around 15% utilization on average (see the rough estimate after this list)
  • Increasing rack space required to scale out, meaning more real estate for infrastructure and higher energy bills to power all these servers
  • Sprawling hardware to manage (device failures, BIOS/firmware upgrades, large inventories…)
  • Complicated backup, restore and disaster recovery procedures
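As a back-of-the-envelope illustration of that first bullet (again with made-up numbers), here is what the 15% average utilization figure implies for consolidation; the fleet size and the 65% target host utilization are assumptions chosen for the example.

```python
import math

# Hypothetical fleet: 100 bare-metal servers averaging 15% utilization.
physical_servers = 100
avg_utilization = 0.15
target_host_utilization = 0.65   # leave headroom for peaks and failover

# Real work being done, expressed in "fully busy server" equivalents
useful_work = physical_servers * avg_utilization

hosts_needed = math.ceil(useful_work / target_host_utilization)
print(f"{useful_work:.0f} servers' worth of actual load "
      f"could run on roughly {hosts_needed} virtualization hosts")
```

Even this crude estimate gives a consolidation ratio of roughly 4:1 with conservative assumptions (often higher in practice), which is why rack space and power bills drop so sharply once a fleet is virtualized.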

Modernization of Virtualization

VMware was founded in 1998 and presented Workstation 1.0 in 1999, which allowed a user to run multiple operating systems as virtual machines on a single PC. It supported virtual machines with up to 2GB of RAM, a significant amount of memory back then, and could run several OSes such as MS-DOS 6, Windows 95, 98 and NT, Red Hat 5.0, SuSE Linux 5.3, and FreeBSD 2.2.8, 3.0, and 3.1.

In 2000, Workstation 2.0 brought the Suspend and Instant Restore feature, along with the shrink disk feature and bridged and host networking. New versions of VMware Workstation kept being released, up to VMware Workstation 17, which came out in November 2022.

However, the release that would really disrupt the industry was VMware GSX Server in 2001. Like Workstation, GSX was a hosted (type-2) hypervisor, but this one was aimed at server virtualization on Windows-based Intel servers. The aim was to address server consolidation, capacity planning and rapid provisioning. You can find the news release on VMware's news website.

VMware GSX had a good few years until it was replaced by ESX Server and then vSphere ESXi. While VMware had the most advanced hypervisor on the market by a long shot, an important selling point for enterprise customers lay in the ecosystem, starting with the management layer, a.k.a. vCenter Server, which centralizes and simplifies the management of several hosts and enables clustering capabilities like vMotion, DRS and so on.
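To give a feel for what that centralized management looks like in practice, here is a minimal sketch using the open-source pyvmomi Python SDK to ask a vCenter Server for every VM it manages, regardless of which host or cluster the VM runs on. The hostname and credentials are placeholders, and production code would need proper certificate handling and error checking.

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details, for illustration only.
VCENTER = "vcenter.example.com"
USER = "administrator@vsphere.local"
PASSWORD = "changeme"

# Lab-only shortcut: skip certificate verification.
context = ssl._create_unverified_context()

si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=context)
try:
    content = si.RetrieveContent()
    # A single container view covers every VM vCenter knows about,
    # across all ESXi hosts and clusters it manages.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        print(f"{vm.name}: {vm.runtime.powerState}")
    view.Destroy()
finally:
    Disconnect(si)
```

That single management endpoint is also what third-party tools and automation plug into, which is a big part of why the ecosystem, rather than the hypervisor alone, became the selling point.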

The importance of ecosystems and communities

A friend of mine who gives talks about VMware Tanzu came up with an interesting analogy with Apple products. Taken by itself, an iPhone objectively isn't any better than other smartphones on the market and offers much the same features. However, when you pair it with Apple iCloud, a MacBook, AirPods and so on, the seamless integration between the products makes for a really good experience. Of course, Apple products aren't exactly cheap, and this is usually the most criticized aspect, but that premium buys you into a great ecosystem. You can achieve the same end result with Android, but getting there may require more custom work and third-party products with their own lifecycles, offering a less polished user experience.

It is somewhat similar for VMware products. Competitors often point the finger at the license prices, and I will be the first to agree that they are steep (especially with the recent rumours of Broadcom jacking them up, but we aren't here to speculate). While running VMware products is expensive, you do get a lot for your money.

Easy to set up: From the point of view of an IT pro, configuring VMware infrastructure is a breeze compared to open-source projects, where a lot of steps and in-depth OS-level work are required just to get the basics down.

Support: Product versions and interoperability are validated through testing on the vendor side to make sure you are running a supported environment. Resources include the HCL (Hardware Compatibility List), the interoperability matrix, the end-of-life matrix and so on.

Large community of users: VMware benefits from a large pool of IT professionals sharing knowledge and helping each other on platforms like VMTN or Slack channels. The vExpert program entices the community to produce content about the ecosystem to spread the word and help fellow users.

Industry standard: By being one of the leaders in the enterprise market, a lot of third-party companies make it a point to be compatible with VMware products. This means more options to choose from and more tried-and-tested solutions, like BDRSuite for data protection.

I am taking VMware as an example because it fits my use case perfectly. Competitors like OpenStack are great solutions if you have advanced in-house Linux skills and plenty of time to set up and manage the platform. Microsoft Hyper-V works just fine and pairs with the many cloud services provided by Microsoft Azure.

I brought up this topic to discuss side products that aren't directly virtualization products but are tightly coupled with them, such as monitoring, automation, logging, management, auditing, containerization and so on. VMware is great in that regard, as it has a rock-solid ecosystem with tight integration (even though marketing-driven decisions have muddied the waters a bit over the last few years by constantly renaming products and editions), which is why we compared it to Apple's expensive products. OpenStack, on the other hand, offers advanced capabilities at no cost but with less integration and more complex processes to set up and run the platform, which is why we compared it to the less expensive Android ecosystem.

Who are the virtualization players?

When virtualization comes up, most folks think about VMware vSphere or Microsoft Hyper-V, but other, cheaper options exist that you may have heard of.

Product             Licensing     License
VMware vSphere      Paid          Proprietary
Microsoft Hyper-V   Paid          Proprietary
OpenStack Nova      Free          Open-Source
KVM                 Free          Open-Source
Proxmox             Free          Open-Source
RHEV                Paid          Proprietary
Xen / XenServer     Free / Paid   Open-Source / Proprietary

Wrap up

This wraps up the first part of our Virtualization Trends series. There is a lot more to say about virtualization in general, but it wouldn't fit in a single article. In this post we scratched the surface of what virtualization is about and where it comes from.

Note that we focused on the hypervisor aspect of the technology and didn’t touch on trends moving towards cloud services, multi-cloud or cloud native workloads. While VMware remained the leader in this space for a long time, a shift in how applications are built at their core is evening out the playing field and changing the IT landscape of the past 15 years.

Stay tuned for more discussions around Virtualization Trends in the next part.

