The origins of the hypervisor can be traced back to IBM’s mainframe systems. Big Blue implemented something approximating a virtualisation platform as an experimental system in the mid-sixties, but it wasn’t until 1985 that the idea of the logical partition (or LPAR) on the hardware lines that would become the pSeries and zSeries delivered something recognisable as the hypervisor technologies we see today.
Processor capacity is shared among LPAR workloads based on their entitlements, and pools of logical partitions can be allocated supersets of processor and memory capacity – this should be starting to sound familiar – much like reservations and resource pools in the parlance of the modern hypervisor.
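As a rough illustration – the partition names and figures below are invented, not drawn from any real IBM system – the entitlement arithmetic amounts to a proportional split of the pool:

```python
# Toy sketch of entitlement-based capacity sharing, loosely modelled on
# LPAR entitlements (and, equivalently, hypervisor resource-pool shares).
# All names and numbers are illustrative.

def share_capacity(pool_cpus, entitlements):
    """Split a pool of processor capacity among partitions in
    proportion to their entitlement weights."""
    total = sum(entitlements.values())
    return {name: pool_cpus * weight / total
            for name, weight in entitlements.items()}

if __name__ == "__main__":
    # A 16-CPU pool shared by three hypothetical partitions:
    # "prod" carries twice the entitlement of the other two.
    shares = share_capacity(16, {"prod": 4, "test": 2, "dev": 2})
    print(shares)  # prod gets half the pool; test and dev a quarter each
```

Real schedulers add caps, uncapped "borrowing" of idle capacity and the like, but the proportional split is the heart of it.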
When VMware introduced ESX (originally short for Elastic Sky X) in 2001, it followed the pattern of the LPAR: a resource distribution and partitioning system – built at the time around a Linux-based service console alongside VMware’s own vmkernel – as a method of slicing larger servers into smaller workloads, or virtual machines.
The technology changed a lot over the following years, however, and the modern era brings with it a number of alternatives.
The other commercial hypervisors
Aside from VMware ESXi, the two main commercial alternatives are, of course, Citrix XenServer and Microsoft’s Hyper-V.
XenServer is the commercial offering based on the open-source Xen hypervisor, first released by the University of Cambridge Computer Laboratory in 2003. Citrix acquired XenSource in 2007 and has offered its enterprise-focused XenServer (along with commercial packages) while funding and supporting the open-source development of the Xen Project.
While Xen itself relies on a Linux-based control domain (dom0) for management, the commercial XenServer offering is packaged as a fully featured bare-metal hypervisor similar to VMware’s ESXi, and for the most part has been the “go to” alternative for most sysadmins.
The feature set follows closely (if perpetually a little) behind VMware’s core offering and includes snapshots, high availability, site recovery and heterogeneous pools – even XenMotion, analogous to VMware’s much-lauded live migration technology, vMotion.
Microsoft’s Hyper-V takes a very different road. Introduced with Windows Server 2008, Hyper-V is installed as a role from Server Manager and runs atop the Windows Server operating system.
There is a “bare metal” version known as Hyper-V Server – essentially a cut-down Windows Server with Hyper-V baked straight into it – but most sysadmins run production Hyper-V installed on top of Windows. Most of them will be doing so on Server 2012 or 2012 R2, as the latest iterations of the software (known as Hyper-V v3) are vastly more stable and functional than their predecessors.
While its market share has improved and the feature set is close to comparable with the Linux-based hypervisors, the mistrust of running a virtual environment on what is fundamentally the Windows operating system – Patch Tuesdays, BSoDs and all – is still enough to put off a large portion of the sysadmin community.
If you prefer your hypervisor to be open, then you might want to consider Xen itself – the open-source project upon which XenServer is built – developed and maintained by the Xen Project as a Linux Foundation Collaborative Project.
Unchained from the commercial interests of Citrix, the Xen Project has been free to pursue other interesting aspects of development. This includes fundamental support for cloud platforms such as CloudStack, Eucalyptus and OpenStack, which led to widespread adoption of Xen by Amazon’s Elastic Compute Cloud (EC2) and Rackspace, among others.
It also encouraged the ambitious port of Xen to ARM platforms, seen as a major first step towards realistically allowing anyone to build vast compute clusters out of low-cost, low-power ARM servers, as Facebook and Google have been doing in their private data centres.
Some hypervisors aggregate and compile other technologies into a single solution, rather than operating as a new and discrete platform in their own right. Oracle purchased Virtual Iron Software in mid-2009, and its core product was quickly disassembled and remade as the Oracle VM/Sun xVM platform.
Not to be confused with Oracle’s VM VirtualBox application (a desktop virtualisation product similar to VMware Workstation or Parallels), the Oracle VM platform – built on the now-defunct Virtual Iron software – looks like a distinct product. However, a quick scratch of the surface reveals it is actually a custom implementation of Xen under a coat of Oracle paint.
The Xen management API and its XAPI toolstack generally play very nicely with others, and it is for this reason that a wide array of other technologies operate with Xen as the core framework – which largely explains why the Xen Project found it relatively easy to achieve such wide adoption by emerging cloud technologies.
Another open-source virtualisation platform is KVM – that’s Kernel-based Virtual Machine, not Keyboard, Video and Mouse – a module built into the Linux kernel that turns the host operating system itself into a hypervisor. From Debian to Red Hat, and Gentoo to Ubuntu, KVM is the hypervisor of choice for those Penguin-fanciers who prefer to virtualise atop a system they are familiar with.
It’s a similar argument to those core Windows sysadmins who favour Hyper-V for their data centres. They would far rather accept the limitations of a platform they know intimately, can troubleshoot easily and tinker under the hood of, than work with a system that might have a better reputation for reliability or perhaps more functionality, but is ultimately a closed black box – a difference engine whose inner workings they simply do not understand.
Like Xen, KVM finds itself very popular among the virtualisation community at large as a result of its propensity for interfacing well with others. Quick Emulator (QEMU for short) is a perfectly good piece of software in its own right, chiefly used for emulating whole machines – and individual user-level processes – entirely in software.
But when combined with a hypervisor such as Xen or KVM, which handles CPU and memory virtualisation natively, QEMU supplies the device emulation, and the functionality of both is increased and improved.
Libvirt, on the other hand, is a library that presents a common management API across a range of virtualisation systems. It is on this basis – extending the functionality and management abilities of a series of hypervisors – that the modern cloud stack as we know it came to pass: OpenStack and CloudStack form the basis of many of the largest cloud platforms.
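By way of illustration, libvirt describes guests in one XML domain format regardless of the hypervisor underneath. A minimal KVM guest definition – the guest name and disk path here are invented for the example – looks something like this:

```xml
<domain type='kvm'>
  <name>demo-guest</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <!-- A single virtio disk; the image path is hypothetical -->
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/demo-guest.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
```

Swap `type='kvm'` for `type='xen'` and much the same definition, and the same management tooling, carries over – which is precisely the appeal to cloud-stack builders.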
The real benefit of these cloud platforms based upon Xen and KVM – when they are well written and implemented with an easy-to-use, self-service interface at any rate – is that they democratise the use and spread of virtualisation platforms, allowing those other than hardened sysadmins and virtualisation specialists to reap the benefits of the technology.
The new (old) way – containers and docks
The next big alternative to the classic hypervisor comes in the form of the container.
If FreeBSD is your operating system of choice, then the comically named BSD Jail mechanism will be familiar to you as a method of virtualisation. Rather than creating a discrete virtual machine – complete with paravirtualised hardware, device drivers and a guest operating system – the BSD admin can subdivide the core operating system’s resources into compartmentalised workloads known as "jails", allowing fine-grained segregation of physical resources, data and security among these sub-machines.
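On a modern FreeBSD system those compartments are typically declared in /etc/jail.conf; a minimal, purely illustrative entry (hostname, path and address all invented) looks like this:

```
# /etc/jail.conf -- one compartmentalised workload per block
www {
    path = "/usr/jail/www";            # the jail's slice of the filesystem
    host.hostname = "www.example.org"; # hypothetical hostname
    ip4.addr = 192.0.2.10;             # documentation-range address
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
}
```

Each block is a self-contained workload sharing the one FreeBSD kernel – no guest OS, no emulated hardware.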
If you prefer penguins (Linux) to small red devils (the Unix-like BSD) then you might want to try OpenVirtuozzo – or OpenVZ to its friends – as your package of choice for compartmentalising workloads on CentOS, Debian, Fedora, SUSE or Ubuntu.
This method of operating system-level virtualisation – known as containers – is becoming increasingly popular, and the cool new kid on the block is called Docker.
While most virtualisation technologies aim at the machine level, Docker is designed as an application container, offering the same levels of abstraction and segregation of resources, workloads and security to the application environment.
It is still a Linux-based operating system-level virtualisation technology (similar to OpenVZ) but the management interface and operating parameters have been adjusted to make it fit better with software as the target.
Docker lets you segregate software implementations by customer without needing to go to the additional effort of virtualising guest operating systems. This suits development shops perfectly, granting the ability to differentiate different forks and branches or live builds and test environments with the minimum of fuss and bluster, in much the same way that well-implemented public clouds allow easy access into operating-system virtualisation.
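A hypothetical development shop’s workflow can hang off a recipe of only a few lines – the application, files and commands below are invented for illustration, not a real project:

```dockerfile
# One image per branch or customer, built from the same short recipe
FROM python:3
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt   # assumes the project ships one
CMD ["python", "app.py"]
```

Build it with a branch- or customer-specific tag (for example `docker build -t myapp:feature-x .`) and each fork gets its own disposable, isolated environment, with no guest operating system to install or patch.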
Docker might well be seen as picking low-hanging fruit or selective virtualisation, but it’s a solution that works: why spend time, energy and money virtualising more than you have to?
IBM Unix/AIX sysadmins are already fairly hipster-like thanks to a certain taste for bushy beards and plaid shirts, but they will take any opportunity to remind you that they were virtualising “before it was cool” with the original virtual containers – LPARs – on IBM pSeries/zSeries hardware.
With the rise of the container as a method of virtualisation I’m reminded of Big Blue’s LPAR system and how logical partitions kick-started the technologies we have come to rely on, and can’t help but think we are coming full circle. ®