Server makers are embracing Intel's "Westmere-EPs" Xeons.
Intel's chip cadence has a tick-tock rhythm, where the process shrink that leads to more cores, larger caches, and sometimes faster clocks comes on the "tick," and the microarchitecture is updated on the "tock." Servers based on Intel's Xeon chips follow the same meter, of course, but it is the sockets and chipsets that are ticking and tocking.
When the quad-core "Nehalem-EP" Xeon 5500s were launched, a lot of the server makers kept essentially the same lineup, allowing customers to keep buying the prior year's x64 boxes. But this time around, they're slapping the six-core "Westmere-EP" Xeon 5600s into their boxes instead of last year's Xeon 5500s, the chips that reinvigorated the server market in the wake of the economic meltdown as 2009 was coming to a close.
As Intel's sales pitch for the Xeon 5600s explained, mainstream commercial applications - ERP, database, Java applications, etc. - saw anywhere from a 27 to 42 per cent performance bump. And with roughly the same chip prices (clock for clock, Intel is sometimes charging a slight premium with the Xeon 5600s and sometimes charging less), this is an easy upgrade for server makers to sell.
That's even if Intel was using an apples-to-applesauce comparison, stacking 2.93 GHz, 95 watt, quad-core Xeon 5500s up against 3.33 GHz, 130 watt, six-core Xeon 5600s instead of comparing systems using processors running at the same clock speed or wattage. The 130 watt parts are utterly useless in a lot of data centers and would normally have been restricted to workstations, where power and cooling are not a big issue.
As far as the server makers that El Reg spoke to in the wake of the Xeon 5600 announcements can tell, these 130 watt parts are not expected to be popular with customers. The reason they exist in the server lineup with the Westmere-EPs, and did not with the Nehalem-EPs, is that Intel needs a top-end server part to compare to the impending top-end twelve-core Opteron 6100 from rival Advanced Micro Devices.
The other Xeon 5600 parts are being embraced enthusiastically by the top-tier server makers, although with the two-socket boxes having so much more oomph, you might think they'd be at least a little ambivalent, because virtualization atop these machines will cause even more footprint compression.
Hewlett-Packard, the volume leader in the x64 server racket, has rolled the Xeon 5600s into 18 of its existing ProLiant G6 machines, and the chips will be available on March 29. Specifically, the ProLiant BL280c, BL2x220c, BL460c, and BL490c blade servers, the ProLiant DL120, DL160, DL170h, DL180, DL320, and DL380 rack-mounted servers, and the ProLiant ML110, ML330, ML350, and ML370 tower servers can all be equipped with the new chips.
Krista Satterthwaite, product marketing manager at HP's Industry Standard Servers unit, says that customers with ProLiant G4 systems will be taking a pretty hard look at the G6 machines with Westmere-EPs. With last year's Nehalem-EP launch, HP reckoned that it could compress around 13 ProLiant G4 servers into a single ProLiant G6 box. With the Westmere-EPs, HP is now figuring it is more like 20 to 1 compression. That is a 27 times improvement in performance per watt, and instead of a three month return on investment (taking into account software licenses, power and cooling, floor space, and other factors) as on the Xeon 5500s, the Xeon 5600s in the ProLiant G6s can get the money back in two months.
The ProLiant G6s sporting their Xeon 5500s had the fastest transition of any x86 or x64 server product launch in HP's history, according to Satterthwaite, and the company expects the momentum to continue to build with the Xeon 5600s. The support for new 40 watt and 60 watt parts and low-voltage DDR3 memory is also going to push ProLiant G6s into places where they might not have been able to go before because of power and cooling issues.
HP is not just interested in selling new ProLiant boxes based on the Xeon 5600s. Satterthwaite says that for some customers, where the payback on a performance improvement as big as the jump from Xeon 5500s to Xeon 5600s is on the order of half an hour - think risk analysis and trading systems at financial services companies, where microseconds are millions of dollars - HP is expecting to sell quite a number of processor upgrade kits. Some big HPC supercomputing labs will also be interested in upgrading.
Sally Stevens, vice president of PowerEdge server marketing at Dell, concurs, saying that there are always some "heat seekers" in the customer base who want more performance, and Dell therefore is expecting to sell a fair number of processor upgrades even though customers may have only bought their PowerEdge 11G machines last year.
"In the old days, when CPU performance was maybe on the order of 6 to 10 per cent higher, this didn't make a lot of sense," says Stevens. "But at around a 50 per cent jump, upgrading becomes a lot more attractive."
And at Big Blue...
Over at IBM, the company formally launched the HS22V blade server, which El Reg told you about back in February; this machine is a peppier version of the existing HS22 blade that IBM debuted last March. The HS22V blade has 18 DDR3 memory slots to be shared by its dual processor sockets, compared to 12 in the regular HS22 blade. The HS22V blade also uses 1.8-inch Micro-SATA 50 GB solid state disks. It is this extra memory and flash that makes the HS22V blade suitable for virtualization, according to Big Blue, which says the HS22V can host about 50 per cent more virtual machines than the HS22.
The company also slapped an M3 designation on its System x3400 and x3500 tower and x3550 and x3650 rack servers as part of its Xeon 5600 upgrade. These machines are going to be equipped with 16 GB DDR3 DIMMs starting in the second quarter, but the memory controllers don't allow more than a 50 per cent increase in capacity compared to using the 8 GB DIMMs that were the top-end parts in the M2 machines that launched last March and April.
Depending on the server model, IBM has also jacked up the disk storage capacity on the System x M3 machines, by between 60 and 100 per cent. The iDataPlex dx360 M3 server, which is kinda halfway between a blade and a rack server, now has redundant power supplies but is otherwise unchanged except for the Xeon 5600 upgrade and support for low-voltage DDR3 memory.
Oracle, which has newly come to the server business thanks to its acquisition of Sun Microsystems in late January, didn't say squat about the Xeon 5600s, but its Sun Blade X6270 and X6275 blade servers and Sun Fire X2270, X4170, X4270 and X4275 rack-mounted servers can, in theory, support the new processors as well as low-voltage memory. Oracle has put some Sun server spec sheets up, but the Sun products are not yet integrated into the Oracle online store. The servers are still sold over at a Sun address here, though. And the machines have not been updated since the merger and do not have the Xeon 5600s.
Fujitsu actually seems to have decided that the Xeon 5600 launch was the perfect time to roll out a new Primergy cloudy rack product line, the CX1000 - not to be confused with the Cray CX1000s announced this week, which are a series of midrange supercomputers based on blade form factors and using Intel's Xeon 5600 and imminent "Nehalem-EX" Xeon 7500 high-end chips.
The Fujitsu CX1000s are a back-to-back compact rack design with a central chimney to remove heat from the rack and to also get rid of the hot aisle typical in data centers. (Or more precisely, to shrink it and contain it between two racks that are back to back.) This is akin to similar rack designs from Silicon Graphics (formerly Rackable Systems) that have been around for years. The CX1000 rack is 43U high and has slots for 38 server trays, each one holding a single two-socket Xeon 5600 server. The CX1000 rack has three vertical 2U bays on the right-hand side of the servers, where switches can be mounted, and the back end of the rack has some big fans to pull air through the rack servers and push it up to exhaust ports above the racks. The hot air is never let loose in the data center, which makes sense.
The Primergy CX1000 supports a tray server called the CX120 S1, which has two Xeon 5600s, up to 64 GB of memory capacity (a little on the skinny side for a six-core chip, perhaps), and two Gigabit Ethernet ports. Each tray server has room for two 2.5-inch SATA drives (not hot pluggable) and one low-profile PCI-Express 2.0 x16 peripheral slot. While the CX120 S1 is interesting enough because of the rack it fits into, it seems to be a little light on memory and storage. It would be interesting to see this machine configured with drawers of two-socket, half-width servers with more memory and room for more disks. But making room for the central chimney meant cutting back on server capacity if Fujitsu was to keep the rack size the same.
Silicon Graphics, as previously reported, put out a new baby Xeon blade box called the Origin 400, and is ramping up the use of the Xeon 5600s in its Rackable rack, CloudRack cookie sheet, and Altix ICE blade servers, too. Rival Cray did the CX1000 midrange supers mentioned above, and reminded everyone that its CX1 baby super cluster can support the Xeon 5600s as well.
Server wannabe Cisco Systems has its B-Series horizontal blade servers (the half-width B200-M1 and the full-width B250-M1) and its C-Series rack servers (the C200-M1, C210-M1, and C250-M1), all based on the Xeon 5500s and all shipping since last year. The B250 and C250 sport Cisco's own memory extension ASIC, which allows the two-socket boxes to support up to 384 GB of DDR3 memory, 2.7 times the maximum allowed using Intel's own 5520 chipset.
Cisco did not make a formal announcement for its upgraded machines using the Xeon 5600 processors - that's the B200-M2 and B250-M2 blades and the C200-M2, C210-M2, and C250-M2 rack servers - but did make some noise about VMmark server virtualization benchmark results on the boxes. After comparing spec sheets, the M1 and M2 machines look to differ primarily in terms of the processors supported. Cisco has not divulged pricing on the B-Series and C-Series M2 servers. ®