It looks like Microsoft's Hyper-V server virtualization hypervisor is maturing enough with the jump to Windows 8 Server next year that Cisco Systems needs to make it a full peer to VMware's ESXi hypervisor, which has been the preferred virtualization layer on Cisco's "California" Unified Computing System servers for the past two years.
Right out of the chute when they were announced in March 2009, the Cisco blade servers embraced VMware's ESXi hypervisor while merely running the Hyper-V, KVM, and Xen hypervisors. VMware's hypervisor was integrated tightly with the extended fabric at the heart of the UCS system, called the Virtual Machine Fabric Extender, or VM-FEX for short. And VMware and Cisco, which are partners with EMC in peddling vBlock preconfigured cloudy infrastructure, worked together so that Cisco's Nexus 1000V virtual switch could replace the distributed virtual switch embedded in the ESXi hypervisor.
The Nexus 1000V runs Cisco's NX-OS networking operating system, which Cisco uses on its Nexus data center switches and which is distinct from the IOS operating system that runs on its routers and Catalyst switches.
Up until now, Cisco didn't have to support anything but the ESXi hypervisor with the Nexus 1000V virtual switch or with VM-FEX switching, which are alternative ways of coping with virtual machines on UCS machinery. That was partly because VMware was the dominant supplier of server virtualization in the corporate data centers that Cisco is targeting with the UCS iron, partly because VMware is its designated virtualization partner, and partly because Microsoft's Hyper-V was not on par in terms of features.
But with Microsoft coming on strong with Hyper-V 3.0 next year and VMware charging a premium for its vSphere virtualization stack, Cisco can't afford to let ESXi have premier status among hypervisors.
Cisco's two approaches to switching for virtual machines
With the Nexus 1000V distributed virtual switch, Cisco is running a virtual switch inside of the hypervisor that, as far as the virtual machines and the network admins are concerned, looks exactly like any other NX-OS-based switch from Cisco. What this means is that Cisco network administrators can use the same Data Center Network Manager tool to manage the Nexus 1000V as they would use to manage any other Cisco switch. This virtual switch runs inside of a VMware virtual machine itself and can provide virtual switching to multiple physical servers inside of a UCS chassis.
All of Cisco's other virtual networking services – Virtual Security Gateway, Adaptive Security Appliance, Wide Area Application Services, and Network Analysis Module – can plug into the Nexus 1000V. You get all the benefits of a Cisco switch without actually having to buy Cisco switching gear – and you can keep the network administrators managing both physical and virtual networks and get the server and hypervisor administrators back on the other side of the switch where they belong.
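To give a feel for why network admins like this arrangement: policy on the Nexus 1000V is defined in port-profiles, which read like ordinary NX-OS switch config and get applied to VM virtual ports automatically. A minimal sketch – the profile name and VLAN number here are invented for illustration, not taken from any real deployment:

```
! Hypothetical Nexus 1000V port-profile for a group of web-tier VMs.
! Once defined, vSphere admins see it as a port group they can
! assign to VMs; the policy follows the VM if it migrates.
port-profile type vethernet WebTier
  vmware port-group
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled
```

The point is that this is the same CLI idiom admins already use on physical Nexus gear, which is what preserves the separation of duties Cisco is selling.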
"Separation of duties is very important for compliance and other reasons," explains Prashant Gandhi, senior director of product marketing at Cisco's Server Access and Virtualization Technology Group.
By the way, the Nexus 1000V switch is not limited to running on Cisco's own blade and rack servers and can run on any x64-based machine that has ESXi as its hypervisor. And next year, it will run on any machine supporting Hyper-V 3.0 from Microsoft. Gandhi tells El Reg that the company has over 4,000 customers using the Nexus 1000V, and while the lack of Hyper-V support has probably not held back Nexus 1000V installations so far, it soon would.
And that is why Cisco is working with Microsoft to allow for the integrated virtual switch at the heart of Hyper-V to be swapped out for Nexus 1000V with the 3.0 release of the Microsoft hypervisor. At the moment, only VMware's ESXi 4.0, ESX Server 4.0, and ESXi 4.1 hypervisors can talk to the Nexus 1000V distributed virtual switch.
Let's get physical with virtual
For UCS customers, there is another way to virtualize their links between virtual machines and switching capacity, and that is to have the hypervisor get a virtual Ethernet port on the extended switching fabric embodied in the UCS blade servers. In this case, using the VM-FEX, a virtual interface card can be allocated to a VM that has full, line-rate 10 Gigabit Ethernet performance (or less than that if you want to chop a VIC up into multiple bits for multiple VMs).
The network is external to the VMs and is managed by the same UCS Manager software that is used to control the servers and the chassis switch at the center of the UCS design. At the moment, only the ESXi hypervisor can reach out and talk to the VM-FEX extended fabric and its virtual interfaces, but Gandhi says that due to customer demand it will soon support the KVM hypervisor being championed by commercial Linux distributor Red Hat. Gandhi refused to comment on the possibility of the KVM hypervisor getting support for the Nexus 1000V virtual switch. Cisco didn't want to talk about Xen, either.
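For a sense of what hypervisor-side attachment to an external fabric looks like on KVM, the generic mechanism is a libvirt "direct" (macvtap) interface, which hands a VM a physical or virtual NIC without passing traffic through a software switch. This is a hedged, illustrative sketch of the general technique, not Cisco's actual VM-FEX integration; the device name is an assumption:

```xml
<!-- Hypothetical libvirt domain XML fragment: attach a guest NIC
     directly to a host interface in passthrough mode, bypassing
     any in-hypervisor software switch. "eth2" is a placeholder
     for whichever host-side interface backs the VM's port. -->
<interface type='direct'>
  <source dev='eth2' mode='passthrough'/>
  <model type='virtio'/>
</interface>
```

The appeal for Cisco is the same as on ESXi: the switching decision moves out of the hypervisor and into fabric that UCS Manager already controls.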
While Cisco is working with Microsoft to support the Nexus 1000V and VM-FEX technologies with the future Hyper-V 3.0 hypervisor for Windows 8 Server, Gandhi tells El Reg not to expect Cisco to port these virtual networking methods back to the current Hyper-V 2008 R2. Gandhi says that the upcoming hypervisor has a much-improved "extensibility framework" that made hooking the Nexus 1000V and VM-FEX into Hyper-V easier. While the older Hyper-V could be retrofitted to support these networking mechanisms, it would be difficult, says Gandhi.
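That extensibility framework surfaces in the Windows 8 Server beta as PowerShell cmdlets for managing switch extensions, which is roughly how a third-party forwarding extension would be slotted in. A hedged sketch – the switch name and extension string below are placeholders, not Cisco's actual product identifiers:

```powershell
# List the extensions registered against a virtual switch
# ("VSwitch01" is a hypothetical switch name).
Get-VMSwitchExtension -VMSwitchName "VSwitch01"

# Enable a third-party forwarding extension on that switch;
# the extension name here is illustrative only.
Enable-VMSwitchExtension -VMSwitchName "VSwitch01" -Name "Cisco Nexus 1000V"
```

Swapping the forwarding behavior this way, rather than replacing the whole switch binary, is what makes the Hyper-V 3.0 integration tractable where a 2008 R2 retrofit was not.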
Cisco is beta testing Nexus 1000V and VM-FEX with Microsoft's Hyper-V 3.0 now. No word on when it will come to market, but Gandhi says Cisco will be ready to ship whenever Microsoft is. ®