
HP welds data center containers into EcoPODs

69 kilowatts per rack

Hewlett-Packard is still one of the big believers in containerized data centers, and the reason is simple: A select number of customers who are focused on power efficiency, speed of deployment, or both pay HP money to weld these things together and slap a coat of paint on them.

At the HP Discover customer and partner extravaganza today in Las Vegas, the company announced its latest iteration in containerized data centers, called the Performance Optimized Datacenter (POD) 240a, also known as the EcoPOD because of the efficiency it allows.

Hewlett-Packard's EcoPOD data center

After trying to cram a cold and hot aisle into a single container, which is a bit on the cramped side for human beings, HP has figured out that if you put two containers side by side and then open them up, you can run a hallway down the middle and use that as a shared hot aisle with roughly the same dimensions as in an actual brick-and-mortar data center. The 240 part of the product name refers to the fact that it is based on two 40-foot shipping containers, the kind that come from China to the United States and Europe in far larger numbers than go in the reverse direction.

The POD 240a can be equipped with two rows of 50U equipment racks, which are 8U taller than a standard 42U rack. There is enough room in the two rows for 44 racks – 22 per side – which gives you a total of 2,200U of aggregate rack space, or enough room for 4,400 server nodes if you use 2U tray servers with half-width server motherboards (which are common these days), putting four nodes in a chassis. Those server nodes can house over 24,000 3.5-inch disk drives in aggregate. If you use the densest servers that HP sells, you can crank the number of server nodes in an EcoPOD up to 7,040.
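
For those who like to check the sums, here is a quick back-of-envelope sketch in Python using the figures above (the 7,040-node count for HP's densest gear comes straight from HP and is not derived here):

```python
# Rack and node arithmetic for the POD 240a, using the figures quoted above
racks = 44               # 22 per side, two rows
units_per_rack = 50      # 50U racks, 8U taller than a standard 42U rack
total_units = racks * units_per_rack                 # 2,200U of aggregate rack space

chassis_height_u = 2     # 2U tray server chassis
nodes_per_chassis = 4    # four half-width nodes per chassis
nodes = (total_units // chassis_height_u) * nodes_per_chassis

print(f"{total_units}U total rack space, {nodes} server nodes")  # 2200U, 4400 nodes
```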

The cold aisle in the HP EcoPOD

Jim Ganthier, vice president of marketing for HP's Industry Standard Servers division, says that the typical brick-and-mortar data center built to house this equipment would take about 10,000 square feet of floor space and might be able to deliver a power density of somewhere between 6 and 8 kilowatts per 42U rack. But because of the integrated cooling in the POD 240a, this containerized data center can deliver 44 kilowatts of juice per 50U rack, with a peak thermal density of 69 kilowatts per rack. (Hard to believe, isn't it?)

Thermal density is one thing customers want, but they are also looking for free air cooling and a killer power usage effectiveness (PUE) rating. With direct expansion (DX) chillers installed to cool the air that is pumped down to the two cold aisles on the internal sides of the POD 240a, a fully loaded containerized data center has a PUE rating of 1.15 to 1.3, says Ganthier. If the outside air is cool enough, you can push the PUE down to 1.05, which is as good as any cutting-edge Google, Yahoo!, or Facebook data center. The POD 240a can also be operated in a mixed mode called DX Assist that uses both outside air and chillers. In free air mode, the POD 240a pulls 3,818 cubic feet per minute of air through the racks, and in DX mode the fans get a bit of a break and only need to pull 3,272 cubic feet per minute.

PUE is the ratio of the total power pumped into the data center divided by the power actually consumed by the IT gear as it does its work. Getting that ratio as close to 1.0 as possible is the goal of data center designers. The typical brick-and-mortar data center runs somewhere between an awful 1.7 and a truly terrible 2.0 to 2.4.
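
To put those ratings in concrete terms, here is a small illustrative sketch; the 44 kilowatt figure is the per-rack number quoted above, and treating it as a steady draw is a simplification on our part:

```python
# PUE = total facility power / power consumed by the IT gear
def facility_power_kw(it_kw: float, pue: float) -> float:
    """Total power the facility must draw to feed a given IT load at a given PUE."""
    return it_kw * pue

it_kw = 44.0  # one fully loaded 50U EcoPOD rack
for pue in (1.05, 1.3, 2.0):
    overhead_kw = facility_power_kw(it_kw, pue) - it_kw
    print(f"PUE {pue}: {overhead_kw:.1f} kW of cooling and distribution overhead per {it_kw:.0f} kW of IT")
```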

The hot aisle in the HP EcoPOD data center

One reason to operate in free air mode is that it leaves more power available to be dedicated to the IT equipment. In free air mode, running at the average capacity of 44 kilowatts per rack, the POD 240a has a load limit of 2.3 megawatts. But when the containerized data center is operating with DX chillers, its load limit falls to 1.5 megawatts.
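
The mechanism is straightforward: cooling draws from the same feed as the servers, so the lower the PUE, the more of that feed is left for compute. Here is a minimal sketch assuming a hypothetical 2.4 megawatt feed – that figure is ours, not HP's, and the real DX limit also reflects the capacity of the chillers themselves:

```python
# With a fixed power feed, usable IT capacity is roughly feed / PUE --
# whatever cooling and distribution take, the servers lose.
# The 2.4 MW feed is an assumed illustration, not an HP specification.
feed_mw = 2.4
for mode, pue in (("free air", 1.05), ("DX chillers", 1.3)):
    it_mw = feed_mw / pue
    print(f"{mode}: roughly {it_mw:.2f} MW left over for IT gear")
```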

The POD 240a costs somewhere between $5m and $8m, depending on features. Fire suppression, power distribution, cooling units, onsite support for the electromechanicals, and some planning services are included in the price. Building a 10,000 square foot data center would run something on the order of $33m. You need to pour a concrete slab for this bad boy, which weighs in at around 425,000 pounds; HP's prior PODs could be rolled just about anywhere the ground was flat, and initially, according to Ganthier, they were deployed outside in the elements.

Because of the efficiency of cooling in the POD 240a and the use of outside air cooling, a 1.2 megawatt configuration of the POD would cost about $552,000 a year to operate. This compares to $15.4m with a brick-and-mortar data center using chillers.
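
As a back-of-envelope check on how a number like that is arrived at, here is a hedged sketch; the $0.05 per kWh tariff is our assumption, not a figure HP published, though at that rate the free-air sum lands very close to the $552,000 quoted above:

```python
# Annual energy bill = IT load x PUE x hours per year x electricity tariff.
# The tariff is an assumed placeholder, not an HP or Register figure.
def annual_energy_cost_usd(it_load_kw: float, pue: float, usd_per_kwh: float) -> float:
    hours_per_year = 8760
    return it_load_kw * pue * hours_per_year * usd_per_kwh

print(f"${annual_energy_cost_usd(1200, 1.05, 0.05):,.0f}")  # ~$552,000 for the free-air EcoPOD case
```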

The POD 240a is manufactured on the POD Works assembly line at the tail end of HP's server and PC factory in Houston, Texas. The POD Works was launched last October and can deliver a completely rigged POD, in either 20-foot or 40-foot variants, in somewhere between six and twelve weeks, depending on what gear is being preconfigured inside the containers. The more complex POD 240a takes about twelve weeks to deliver, considerably less time than the 24 months it would take to build a 10,000 square foot data center.

The POD Works really can do mass delivery, brags Ganthier. Microsoft ordered 22 PODs for one of its data centers, equipped with 40,000 server nodes, which HP delivered in nine and a half weeks (Kim Basinger, haven't thought about her for years). ®
