
HPC cluster maker sets x64 chips a-fighting

Xeons and Opterons duke it out

Impatient to see benchmark results comparing the performance of Intel's quad-core Xeon 5500 processors to Advanced Micro Devices' six-core Opteron 2400s on supercomputing workloads - and perfectly happy to sell either kind of box to its customers - cluster maker Advanced Clustering Technologies has put the High Performance Linpack benchmark through its paces on two of its Pinnacle rack-mounted, two-socket server nodes.

This implementation of the Linpack Fortran benchmark is the same test that's used by the nerds who put together the semi-annual Top 500 super rankings, the latest listing of which was announced at the International Supercomputing Conference in Hamburg, Germany this week.

While those rankings look at the system-level performance of machines with hundreds, thousands, or tens of thousands of clustered server nodes, the performance of such machines - and any cluster using the message passing interface (MPI) protocol to run simulations - is greatly affected by the networking fabrics that lash the nodes together. To do a proper evaluation, you want to stress-test the server nodes and then look at system-level performance using various interconnects.

To help its own sales force get a handle on the pros and cons of the Istanbul Opterons and the Nehalem EP Xeons, ACT cluster engineer Shane Corder slapped the Linpack test onto two-socket servers using these chips and the same software stack, and ran them head to head.

You can see Corder's report on the results here. And because Corder understands that computing is about money as much as it is about performance, he priced up the configured systems and did his own price/performance analysis. However, Corder didn't worry about the power draw of the servers tested, a concern for many HPC shops.

In one corner, a Pinnacle rack server equipped with two quad-core 2.66GHz Xeon X5550s, which each have a 95 watt thermal envelope. (I would have probably chosen the 2.53GHz E5540 with an 80 watt power dissipation, which will probably be more popular for HPC clusters). This machine was equipped with 12GB of 1.33GHz DDR3 main memory.

In the other corner, another Pinnacle rack server, but one with two six-core 2.6GHz Opteron 2435s and 16 GB of 800MHz DDR2 main memory. This chip is rated at a much lower 75 watts. (Another reason why the E5540 might have been a better choice for a comparison, but any comparison has compromises.)

Corder used the same power supply, hard drive, and operating system - unspecified, but almost certainly Linux - on both boxes. The amount of memory differed because of the memory speeds and the number of memory channels each chip architecture supports: the Xeons have three channels per socket that run faster than the Opterons' two channels per socket, so Corder reckoned the latter should get a little more memory to even things out.

Linpack, says Corder, performs best when all of the available memory in the system is used, so the benchmark lets the Fortran test scale its matrix size to fill it; ACT's tests ran a slightly larger problem size on the Opteron box than on the Xeon box.
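
If you're curious how that sizing works in practice, here's a minimal sketch - not ACT's actual configuration - of the usual rule of thumb for picking the HPL problem size N: the benchmark factors an N x N matrix of 8-byte double-precision numbers, and you normally target something like 80 per cent of total memory so the OS and MPI buffers still have room to breathe. The memory sizes are those of the two test boxes; the 80 per cent target and the blocking factor are assumptions on our part.

```python
import math

def hpl_problem_size(total_mem_gb, mem_fraction=0.8, block_size=192):
    """Rule-of-thumb HPL problem size: the N x N matrix of 8-byte doubles
    should fill roughly mem_fraction of memory, rounded down to a multiple
    of the blocking factor NB (block_size here is just an assumed value)."""
    usable_bytes = total_mem_gb * 1024**3 * mem_fraction
    n = int(math.sqrt(usable_bytes / 8))      # 8 bytes per double-precision element
    return (n // block_size) * block_size     # align N to the block size

# Memory sizes from the two test nodes
print("Xeon box, 12GB of DDR3:    N ~", hpl_problem_size(12))
print("Opteron box, 16GB of DDR2: N ~", hpl_problem_size(16))
```

Run it and the Opteron box gets the bigger matrix, just as ACT's tests did.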

Interestingly, Corder says that he chose the same compiler and math libraries - those supplied by Intel - for both machines. The reason is that of all the stacks ACT has tested in its customer engagements - including the open source GNU compilers, those from the Portland Group, the AMD Core Math Library, and the libGOTO library from the University of Texas - the Intel stack performs the best.

Here's what happened. The Xeon 5550 box had a peak theoretical number-crunching performance of 85.12 gigaflops, and delivered 74.03 gigaflops on ACT's Linpack run. That means the machine delivered 86.97 per cent of the theoretical performance on the actual workload. The Intel box cost around $3,800 as configured, which worked out to $51.33 per gigaflop.

With six cores running at almost the same clock speed, you'd expect the AMD box to do better than this, and indeed it did. The Opteron 2435 machine had a peak theoretical performance of 124.8 gigaflops, and it delivered 99.38 gigaflops on the Linpack run. While this was only a 79.63 per cent efficiency, more is more. That's a nice right hook, and the fact that the Opteron node costs only $3,500 is a solid uppercut, yielding a much lower $35.21 per gigaflop.
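
The arithmetic behind those figures is straightforward to reproduce: both chips can retire four double-precision floating point operations per core per clock (a standard assumption for Nehalem and Istanbul, not something ACT spells out), so peak gigaflops is sockets x cores x GHz x 4, and the efficiency and price/performance numbers fall straight out of the measured Linpack results. A quick sketch:

```python
# Back-of-the-envelope check on ACT's numbers.
# Assumption: 4 double-precision flops per core per clock on both chips.
FLOPS_PER_CLOCK = 4

def node_stats(name, sockets, cores, ghz, rmax_gflops, price_usd):
    rpeak = sockets * cores * ghz * FLOPS_PER_CLOCK      # theoretical peak, gigaflops
    efficiency = 100.0 * rmax_gflops / rpeak             # Linpack efficiency, per cent
    print(f"{name}: Rpeak {rpeak:.2f} GF, {efficiency:.2f}% efficient, "
          f"${price_usd / rmax_gflops:.2f} per gigaflop")
    return rmax_gflops, price_usd

# Prices are "around" the configured figures quoted in the report
xeon = node_stats("Xeon X5550", 2, 4, 2.66, 74.03, 3800)
opteron = node_stats("Opteron 2435", 2, 6, 2.60, 99.38, 3500)

# Relative advantage of the Opteron node
print(f"Opteron: {100 * (opteron[0] / xeon[0] - 1):.1f}% more gigaflops, "
      f"{100 * (1 - opteron[1] / xeon[1]):.1f}% lower price")
```

That also gives you the 34.2 per cent performance and 7.9 per cent price deltas quoted below.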

What seems clear is that the six-core chip really needs three memory channels, while the four-core chip is fine with two - well, add a "maybe" to that last one. Either way, contention for access to memory is probably what makes the Istanbul chip less efficient on this test.
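
A rough bandwidth-per-core comparison makes the point. Peak channel bandwidth is the transfer rate times the 8-byte bus width; the figures below are theoretical peaks based on the memory configurations above, not measured numbers, and sustained bandwidth will be lower on both platforms.

```python
def peak_bw_per_core(channels, megatransfers, cores):
    """Theoretical peak memory bandwidth for one socket, in GB/s,
    assuming 64-bit (8-byte) wide channels."""
    socket_bw = channels * megatransfers * 8 / 1000.0   # GB/s per socket
    return socket_bw, socket_bw / cores

configs = [("Xeon X5550, 3 x DDR3-1333",  3, 1333, 4),
           ("Opteron 2435, 2 x DDR2-800", 2,  800, 6)]

for name, channels, rate, cores in configs:
    socket_bw, per_core = peak_bw_per_core(channels, rate, cores)
    print(f"{name}: {socket_bw:.1f} GB/s per socket, {per_core:.1f} GB/s per core")
```

On paper, each Istanbul core gets to share roughly a quarter of the per-core bandwidth a Nehalem core enjoys, which is consistent with the lower Linpack efficiency.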

But with 34.2 per cent better actual performance, a 7.9 per cent lower system price, and probably a few tens of watts of lower power consumption at the system level, ACT is making a pretty strong case for Opteron nodes.

Guess who is probably going to get a pretty good discount from AMD on Istanbul Opterons and who is probably going to get a phone call (or worse) from Intel? ®
