Given the radical changes in features, pricing, and packaging that VMware made when it kicked out the vSphere 4.0 server virtualization toolset - which is basically the ESX Server 4.0 hypervisor and its related vCenter management console - it is not reasonable to expect a huge amount of change fifteen months later as VMware revs up the products to the vSphere 4.1 release. But there are always incremental improvements to be made, and there are a bunch of them in vSphere 4.1, announced today.
Not only is VMware tweaking the virtualization stack, but it is also rejiggering some of its vSphere packaging to make its products more appealing to both small and medium businesses and compute cloud builders.
First, the new features. When you get right down to it, there are only two key products in the vSphere stack - the ESX Server hypervisor and the vCenter management console - plus a bunch of homegrown and acquired software that gets wrapped around them. Like every other software vendor, VMware lowers prices on the key products in its stack as it deactivates features, which gives the impression of a broader product line and also gives customers different features and price points. (How many versions of Windows and SQL Server are there?)
The basic feeds and speeds of the ESX Server 4.1 hypervisor have not changed with vSphere 4.1, according to Bogomil Balkansky, vice president of marketing at VMware. The ESX Server 4.0 hypervisor, announced last April and shipping since May, was able to create virtual machines that could span eight CPUs and 255 GB of memory (that's not a typo). This has not changed. The maximum size of the host computer that the ESX Server hypervisor can span in terms of physical CPUs has increased (doubled, to 128 threads), but the memory that the hypervisor can span stays the same at 1 TB. You can see the official vSphere 4.1 maximum configuration documentation here.
While there are some eight-socket Xeon 7500 machines coming down the pike that will push these limits - with 128 threads and 2 TB of memory - the scalability of the VMware hypervisor in the older 4.0 release was more than good enough for the vast majority of shops.
What VMware is also doing with ESX Server 4.1 is making it faster and adding a number of new features. The first is memory compression for virtual machines. The hypervisor already had memory overcommit to allow for better consolidation ratios for VMs, as well as memory ballooning and page sharing to speed up processing for VMs with heavy memory loads, and now VMware is adding memory compression. The memory overcommit alone allowed for as much as 50 per cent higher numbers of virtual machines to be slapped onto a particular piece of iron, says Balkansky, and slapping on memory compression can increase consolidation ratios by somewhere between 10 and 15 per cent, he estimates. The memory compression also has a performance aspect, and on systems under heavy load can preserve the existing oomph, which VMware says yields a 25 per cent performance boost. Well, no. What it does is avoid a 25 per cent performance haircut. Which is still a good thing. It is not clear if memory compression helps or hurts performance on machines not under heavy load, but it is an option and you can turn it on and off to see which is best for your workloads.
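For those keeping score at home, the compounding of those percentages works out like this. A rough back-of-the-envelope sketch, assuming a hypothetical baseline of 20 VMs per host (the baseline is ours, not VMware's; only the percentage uplifts come from Balkansky's figures):

```python
# Back-of-the-envelope sketch of the consolidation gains cited above.
# The 20-VM baseline is a made-up starting point for illustration.
baseline_vms = 20                       # hypothetical VMs per host, no memory tricks
with_overcommit = baseline_vms * 1.50   # up to 50 per cent more VMs from memory overcommit
with_compression_low = with_overcommit * 1.10    # plus 10 per cent from memory compression
with_compression_high = with_overcommit * 1.15   # plus 15 per cent from memory compression

print(f"overcommit only:  {with_overcommit:.0f} VMs per host")
print(f"plus compression: {with_compression_low:.0f} to {with_compression_high:.1f} VMs per host")
```

Note that the gains multiply rather than add, since the compression uplift applies to the already-overcommitted VM count.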
The number of machines that can be lashed together into a virtualized compute pool has gone up dramatically with vSphere 4.1, and this is more important to the cloud customers that VMware is chasing than is the scalability of the individual VMs or hypervisor. (For now, at least.) The number of physical machines that can be supported in a vSphere cluster remains the same, at 32 boxes, with the 4.1 release. But the number of virtual machines supported per cluster has been increased to 3,000 (up from 1,280 with the 4.0 release). vCenter 4.1 also gets some serious scalability enhancements, with a single instance spanning 1,000 hosts (up from 300 with vCenter 4.0) and the number of registered VMs managed by a single vCenter 4.1 instance rising to 15,000 (more than triple the 4,500 of the vCenter 4.0 tool). The number of VMs that can be activated at the same time and still be managed by a single vCenter 4.1 instance has also been expanded to 10,000, up from 3,000 with vCenter 4.0. Basically, customers can now build a much larger private cloud or service providers can build much larger public clouds and do so without giving VMware as much money as they had to last year.
The vSphere 4.1 stack has a number of other tweaks. The VMotion live migration feature can now teleport a live VM from one physical machine to another five times as fast as with the 4.0 release, and vCenter can now manage as many as eight concurrent VMotions for each physical server in a compute farm. In addition, vSphere 4.1 has a set of storage and network I/O controls that complement the Distributed Resource Scheduler, which maintains quality of service for compute and memory resources across a pool of servers. The new storage and network I/O controls (which do not have a separate name and which are not lumped under DRS) provide quality of service for storage and network performance. Now, cloud providers and internal IT shops alike can do QoS for all the elements of a modern system. All you have to do is pay a premium for the QoS, and you can be guaranteed it.
Concurrent with the launch of vSphere 4.1, VMware is also repackaging three of the Ionix tools that it acquired from parent company EMC back in February for $200m and selling them as vCenter add-ons. The Ionix Application Stack Manager (formerly FastScale) and Ionix Server Configuration Manager have been merged to become vCenter Configuration Manager; the Ionix Application Discovery Manager loses the Ionix brand and adds the vCenter brand. These products will be priced on a per-VM basis, counting up the number of VMs under management by each instead of the raw number of sockets. In a typical base configuration, says VMware, these Ionix-derived vCenter add-ons will cost $50,000.
In a move that shows VMware is serious about helping companies build clouds, all of the vCenter management widgets are now shifting to a per-VM pricing scheme like the rebranded Ionix tools. This was done, says Balkansky, at the request of customers.
"The virtual machine is now the unit of measure for how people count things in the data centers," declares Balkansky, and it is hard to argue that point. "The value people get scales with the number of virtual machines, not the number of physical machines it runs on."
Of course, you don't see VMware charging $100 per hypervisor and then a nominal fee per VM inside the hypervisor, now do you? But that day could yet come.
VMware did not say what the per-VM charges would be on the vCenter add-ons, but did say that the new pricing scheme would go into effect on September 1 and that features would be sold in "VM packs," which means forced bundles. The vCenter AppSpeed and ChargeBack features, which were announced last July, will get the per-VM pricing first, along with Site Recovery Manager. The vCenter CapacityIQ capacity planning tool gets per-VM pricing in late 2010 or early 2011, says Balkansky. It's not clear that vCenter Lab Manager, the VM jukeboxing and staging tool, will get per-VM pricing, but there is no reason why it shouldn't.
Rejiggering entry vSphere tools for SMBs
Just to be consistent with the vSphere brand and to try to wipe out the ESX name from the VMware vocabulary, the freebie ESXi embedded hypervisor (which is used in conjunction with the VMware Go online VM management tool and which was originally intended to be stashed on baby flash drives in servers) is now being called the VMware vSphere Hypervisor. And like the hosted VMware Server (formerly GSX Server) variant of VMware's hypervisor, ESXi is free. ESXi is just the ESX Server hypervisor with the console manager ripped out of it, giving it a more streamlined memory footprint on servers and, significantly, one that could fit on what were relatively skinny flash sticks back in 2007. It was intended to be sold on an OEM basis, but two years ago VMware slashed the price to zero as a freebie counterpunch to the free Xen and Hyper-V hypervisors from Citrix Systems and Microsoft, respectively. The product is still free, so a lot of users won't care what VMware calls it, and will no doubt still call it ESXi no matter what VMware says.
To continue to compete against XenServer and Hyper-V in the SMB space, VMware has also taken a promotional price on its entry vSphere Essentials packaging and made it permanent. The promotional price, which took effect in March, chopped the price of the vSphere Essentials tool from $995 to $495. The Essentials package provides basic server virtualization for up to three two-socket x64 servers. This is the most popular choice for VMware customers that are virtualizing under 30 applications, according to Balkansky.
One other change in packaging with vSphere 4.1 is that VMotion live migration is now being added to the vSphere Essentials Plus and vSphere Standard editions of the software. (You can get the full scoop on the vSphere 4.1 editions and their features here and a detailed explanation of pricing there.) To get VMotion before, you needed to have vSphere Advanced or higher, with vSphere Advanced costing $2,245 per processor socket. Now, Essentials Plus (which includes ESX Server 4.1, a patch manager, management agents, and high availability and data protection features) has VMotion, too. But don't think it is free. The vSphere 4.0 Essentials Plus cost $2,995 across three machines, but the 4.1 release costs $3,495 across three machines. So you are paying $83 per socket for VMotion.
Ditto for vSphere 4.1 Standard, which now has VMotion unlike its 4.0 predecessor. The Standard Edition is sold on a per-socket basis, and now costs $995 instead of the prior $795. So VMotion costs $200 per socket with this license.
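The arithmetic behind those per-socket figures is straightforward, assuming the Essentials Plus bundle covers three two-socket servers as described above:

```python
# Effective per-socket price of VMotion, using only the list prices quoted above.

# Essentials Plus: priced per three-host bundle, two sockets per host
ep_40_price, ep_41_price = 2995, 3495
sockets_in_bundle = 3 * 2                # three two-socket x64 servers
vmotion_per_socket_ep = (ep_41_price - ep_40_price) / sockets_in_bundle  # $500 / 6

# Standard edition: priced per socket, so the delta is the VMotion premium directly
std_40_price, std_41_price = 795, 995
vmotion_per_socket_std = std_41_price - std_40_price

print(f"Essentials Plus: ${vmotion_per_socket_ep:.0f} per socket for VMotion")
print(f"Standard:        ${vmotion_per_socket_std} per socket for VMotion")
```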
Yes, this was just a clever way for VMware to have a price increase at SMB shops. When you are the industry juggernaut, you can do that.
Bootnote: This story originally said what VMware said in a prebriefing ahead of the vSphere 4.1 launch, which is that the ESX Server 4.1 hypervisor could only span 64 threads in a host machine. But the documentation shows that it can support 128 threads, which it needs to do to run on the biggest Xeon 7500 iron today. ®