What’s new in Hyper-V in Windows Server 2016?

From nested virtualisation to shielded virtual machines: Lots to chew on here

Microsoft is busy reshaping Windows Server for the cloud era, and the Hyper-V hypervisor is changing accordingly.

The first release of Hyper-V was with Windows Server 2008. It was a solid and reliable product from the beginning, but with limited features compared to its competition, especially VMware.

The technology is strategic for Microsoft though, and each new release of Windows Server has brought significant improvements, among them:

  • Live Migration
  • Hot add and remove of virtual SCSI storage
  • Dynamic memory
  • Hyper-V Replica for easily configured resilience
  • A PowerShell module for command-line and scripted administration (see the sketch after this list)
  • Shared virtual hard drives to enable clustered virtual machines (VMs)
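
For a flavour of that PowerShell module, here is a minimal sketch; the cmdlets are standard Hyper-V module fare, but the VM name, sizes and paths are ours, purely for illustration:

    # List the VMs on the local host, with their state and resource usage
    Get-VM

    # Create and start a new VM; name, memory size and disk path are illustrative
    New-VM -Name "TestVM" -MemoryStartupBytes 1GB -NewVHDPath "C:\VMs\TestVM.vhdx" -NewVHDSizeBytes 40GB
    Start-VM -Name "TestVM"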

Server 2012 R2 introduced Generation 2 VMs, which remove legacy hardware emulation such as BIOS, PCI bus and IDE controllers to improve performance and enable features like UEFI (Unified Extensible Firmware Interface) Secure Boot.
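
The generation is fixed when the VM is created. Creating a Generation 2 VM and checking Secure Boot from PowerShell looks roughly like this (a sketch; the name and path are ours):

    # -Generation 2 gives UEFI firmware rather than emulated BIOS; it cannot be changed later
    New-VM -Name "Gen2VM" -Generation 2 -MemoryStartupBytes 2GB -NewVHDPath "C:\VMs\Gen2VM.vhdx" -NewVHDSizeBytes 60GB

    # Secure Boot is on by default for Generation 2 VMs; confirm it
    Get-VMFirmware -VMName "Gen2VM" | Select-Object SecureBoot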

The scalability of Hyper-V VMs has also improved: since Server 2012 R2 you can configure up to 64 virtual processors, 1TB of RAM, 64TB virtual hard drives, and up to 256 virtual SCSI disks per VM.
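
Configured with the PowerShell module, those maxima look something like this (a sketch; the VM name and path are ours, and the VM must be powered off to change the processor count):

    # Scale the VM up to the Server 2012 R2 limits
    Set-VMProcessor -VMName "BigVM" -Count 64
    Set-VMMemory -VMName "BigVM" -StartupBytes 1TB

    # A 64TB disk requires the VHDX format, which New-VHD infers from the extension
    New-VHD -Path "C:\VMs\BigData.vhdx" -SizeBytes 64TB
    Add-VMHardDiskDrive -VMName "BigVM" -Path "C:\VMs\BigData.vhdx"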

In Windows Server 2016 Microsoft is adding more features, and the changes are significant. Many of them are already available in Windows 10, for development and testing. The goal of Windows Server architect Jeffrey Snover is to make Windows a “cloud OS”, which includes the notion of on-demand compute resources: VMs that spin up or down as needed.

Improvements in Hyper-V are an immediate benefit to Microsoft’s Azure cloud platform and its users, as well as to those deploying Azure Stack, which offers a subset of Azure features for deployment on premises.

Two complementary Server 2016 features are also worth noting. The first is Nano Server, a stripped-down edition of Windows Server optimised for hosting Hyper-V, running in a VM, or running a single application. There is no desktop or even a local log-on, since it is designed to be automated with PowerShell.

The benefits include faster restarts, a smaller attack surface, and the ability to run more VMs on the same physical hardware. Fewer features also mean fewer patches, and fewer forced reboots. In Server 2016, Microsoft recommends Nano Server as the default host for Hyper-V.
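
Nano Server is not installed from a setup wizard; an image is built with PowerShell and booted as a VM. A sketch, assuming the NanoServerImageGenerator module shipped on the Server 2016 media (the paths are ours, and parameter names have shifted between preview builds):

    # Build a Nano Server VHDX for use as a Hyper-V guest, with the Hyper-V role included
    Import-Module C:\NanoServer\NanoServerImageGenerator\NanoServerImageGenerator.psd1
    New-NanoServerImage -Edition Standard -DeploymentType Guest `
        -MediaPath D:\ -BasePath C:\NanoBase -TargetPath C:\VMs\Nano01.vhdx `
        -ComputerName "Nano01" -Compute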

The second feature is containers. A container packages an application together with its resources and dependencies, so that deployment can be automated. Containers go hand in hand with microservices: the concept of decomposing applications into small units, each of which runs separately.

Microsoft’s new operating system supports both Windows Server Containers, which use shared OS files and memory, and Hyper-V containers, which have their own OS kernel files and memory. The idea is that Hyper-V containers have greater isolation and security, at the expense of efficiency.
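
With the Docker engine installed, the difference between the two is a single flag at run time. A sketch, assuming the microsoft/nanoserver base image is available:

    # Shared-kernel Windows Server Container
    docker run -it microsoft/nanoserver cmd

    # The same image under Hyper-V isolation, with its own kernel
    docker run -it --isolation=hyperv microsoft/nanoserver cmd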

Nested Virtualisation

Nested VMs in Hyper-V 2016

Top of the what’s new list is nested virtualisation, the ability to run VMs in VMs. This is a catch-up with competing hypervisors that already have this feature, but an essential one, since it allows Hyper-V to be used even when your server infrastructure is virtualised on the Azure cloud or elsewhere.

Hyper-V depends on CPU extensions, Intel VT-x or AMD-V, and nested virtualisation includes these extensions in the virtual CPU presented to the guest OS, enabling guests to run their own hardware-based hypervisor. The feature could also help developers working in a VM, since device emulators that rely on these extensions may then work.

Nested virtualisation works in the latest preview of Windows Server 2016 (currently Technical Preview 4) and in recent builds of Windows 10. You have to run PowerShell scripts to enable the feature on both the host and the VM, as sketched below. There are currently some limitations: dynamic memory, live migration and checkpoints do not work on VMs which have the feature enabled, though they do work in the innermost guest VMs.
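
The essential settings can also be applied directly with the Hyper-V module. A minimal sketch, run on the host with the VM powered off (the VM name is ours):

    # Expose the hardware virtualisation extensions to the guest
    Set-VMProcessor -VMName "OuterVM" -ExposeVirtualizationExtensions $true

    # Dynamic memory is unsupported on nested hosts, so fix the allocation
    Set-VMMemory -VMName "OuterVM" -DynamicMemoryEnabled $false

    # Enable MAC address spoofing so that nested guests get network access
    Get-VMNetworkAdapter -VMName "OuterVM" | Set-VMNetworkAdapter -MacAddressSpoofing On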

Shielded VMs

One of the disadvantages of cloud computing is that physical access to your infrastructure is in the hands of a third party, with obvious security implications. The idea of Shielded VMs is to mitigate that risk with VMs that cannot be accessed even by host administrators.

Shielded VMs use Microsoft’s BitLocker encryption, Secure Boot and a virtual TPM (Trusted Platform Module), and require a new feature called the Host Guardian Service. Once configured, a Shielded VM will only run on designated hosts. The VM is encrypted, as is network traffic for features like Live Migration.
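
The virtual TPM part can be tried in isolation. A sketch using a local key protector rather than a full Host Guardian Service deployment (only suitable for a test lab; the VM name is ours):

    # Protect the VM's TPM state with a local key, then enable the virtual TPM
    Set-VMKeyProtector -VMName "SecureVM" -NewLocalKeyProtector
    Enable-VMTPM -VMName "SecureVM"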

Running a Shielded VM has annoyances. You cannot access the VM from the Hyper-V manager, and you cannot mount its virtual disk drive from outside the VM. There is also, according to Microsoft, up to a 10 per cent performance impact because of the encryption.
