
Mellanox uncloaks future InfiniBand switches

FDR afterburners

ISC'11 Mellanox is previewing its next-generation InfiniBand switches with FDR afterburners that boost bandwidth up to 56Gb/sec.

Two new InfiniBand switches made their debut on Monday at the International Supercomputing Conference (ISC) in Hamburg, Germany. The switchmaker is the result of the merger of the old Mellanox – which makes InfiniBand silicon, switches, and converged Ethernet/InfiniBand server adapters – and Voltaire – which made both InfiniBand and Ethernet switches.

Mellanox not only believes in making its own silicon to gain an edge in the fiercely competitive switch arena, but has also spent several years creating a converged switching ASIC called SwitchX. The chip is a switch hitter: today it implements either Ethernet or InfiniBand, and eventually – when used in conjunction with Mellanox's ConnectX-3 adapters – it will be able to flip between the two protocols on the fly as server workloads dictate.

The SwitchX chip was announced in April, and the ConnectX-3 silicon debuted in early June, with general availability for LAN-on-motherboard or PCI-Express cards within a few months.

Mellanox debuted the first Ethernet switch based on the SwitchX chip, the SX1000 40Gb/sec Ethernet switch, back in May. From the outside, this 36-port switch is virtually indistinguishable from the new FDR InfiniBand switches. Basically, they're like twin sisters who wear the same outfits and husband-swap by switching their names and trying to keep their stories straight.

SwitchX meets FDR (no, not the prez)

FDR is short for Fourteen Data Rate, and is an upgrade from the Quad Data Rate (QDR) 40Gb/sec InfiniBand switches already on the market.
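
As a back-of-the-envelope check – our own lane arithmetic, not figures from Mellanox's announcement – the jump from 40Gb/sec to 56Gb/sec comes from pushing each of the four lanes in a standard 4X port from roughly 10Gb/sec to roughly 14Gb/sec:

```python
# Rough InfiniBand lane math (assumption: standard 4X ports and nominal lane rates)
LANES_PER_PORT = 4

qdr_lane_gbps = 10   # QDR: roughly 10Gb/sec per lane
fdr_lane_gbps = 14   # FDR: roughly 14Gb/sec per lane, hence "Fourteen Data Rate"

print(f"QDR 4X port: {LANES_PER_PORT * qdr_lane_gbps} Gb/sec")   # 40 Gb/sec
print(f"FDR 4X port: {LANES_PER_PORT * fdr_lane_gbps} Gb/sec")   # 56 Gb/sec
```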

Mellanox is kicking out two new FDR InfiniBand switches based on the SwitchX chips. The first is the SX6025, which comes in a 1U chassis and has 36 QSFP ports. The SwitchX chip delivers just over 4Tb/sec of aggregate switching bandwidth, and the port-to-port hopping latency is 165 nanoseconds.
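
That 4Tb/sec figure squares with the port count if you assume, as switch vendors usually do, that aggregate bandwidth counts both directions of every full-duplex port – a rough sketch:

```python
# Sanity check on the quoted aggregate bandwidth (assumption: both
# directions of each full-duplex port are counted, as is customary)
ports = 36
port_gbps = 56       # FDR 4X rate per direction
directions = 2       # full duplex

aggregate_tbps = ports * port_gbps * directions / 1000.0
print(f"{aggregate_tbps:.2f} Tb/sec")   # ~4.03 Tb/sec, "just over 4Tb/sec"
```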

The SX6025 is a basic, unmanaged switch; you plug and go. The chassis has redundant power supplies and cooling fans, burns 127 watts using passive cables and 231 watts using active cables, and comes with rear-to-front or front-to-rear cooling options.

The new switch adheres to the InfiniBand Trade Association 1.2.1 and 1.3 specs, and offers support for forward error correction (FEC), adaptive routing, congestion control, and port mirroring. Mellanox says that this switch is designed to handle converged server and storage traffic and can be used as a top-of-rack switch in a three-tier network or as a leaf in a flatter leaf-spine network.


The Mellanox SX6025 (top) and SX6036 (bottom) FDR InfiniBand switches

The SX6036 FDR InfiniBand switch looks just like the SX1000 40Gb/sec Ethernet switch announced back in May. The SX6036 has the same basic guts as the SX6025 – over 4Tb/sec of aggregate bandwidth across 36 ports and the same 165 nanosecond latency – but it's a managed switch, with a USB port, two Gigabit Ethernet management ports, and the ability to be carved into as many as eight partitions. It also has a subnet manager built in that can run an InfiniBand fabric with up to 648 server nodes attached to it, right out of the box.
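
The 648-node ceiling is what you would expect from a two-tier, non-blocking fat tree built entirely out of 36-port switch ASICs – a quick sketch of that arithmetic, on our assumptions rather than Mellanox's stated design:

```python
# Two-tier, non-blocking fat tree built from 36-port switch ASICs
# (assumption: each leaf splits its ports evenly between servers and spine uplinks)
radix = 36
down_ports = radix // 2   # 18 server-facing ports per leaf, 18 uplinks
max_leaves = radix        # each of the 18 spines connects once to every leaf

max_nodes = max_leaves * down_ports
print(max_nodes)          # 648 server nodes
```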

The SX6036 also includes APIs for integration with third-party network-management tools, and has hooks into the Unified Fabric Manager that Voltaire created for its own switches before it was acquired by Mellanox.

Fibre or copper – your call

The SX6000 series of switches is being launched alongside passive copper cables (up to five meters long) as well as active fibre optic cables for longer runs. Mellanox has these cables made to spec with super-low bit error rates.

The ConnectX-3 adapter cards, FDR copper and optical cables, and SX6000-series switches are all sampling now, and will be generally available sometime in the second half of this year. Pricing will be announced at that time.

The SX6536 director switch

At ISC, Germany's Leibniz Supercomputing Centre (Leibniz-Rechenzentrum, or LRZ) said that it would use Mellanox FDR InfiniBand switches to build its three-petaflops "SuperMUC" massively parallel super, which will likely be based on future Intel "Sandy Bridge" Xeon processors as implemented in 84 IBM iDataPlex racks.

SuperMUC will be operational in 2012, according to LRZ, and there is an outside chance that it might use Intel's "Ivy Bridge" chips, which will pack more cores on a die than the future Xeon E5s due around the third quarter of this year.

LRZ said it would be using a combination of IBM server nodes with onboard ConnectX-3 ports, the SX6036 managed switches, and an unnamed 648-port InfiniBand director switch to lash together over 110,000 cores.

This monster weighs in at 29U of rack space and packs an aggregate of 72.5Tb/sec of bandwidth into the box. Port-to-port latencies range from 165 nanoseconds to 495 nanoseconds, depending on how many hops are necessary across the director's backplane: a single hop within one leaf blade yields the lower figure, while traffic that crosses three hops – leaf to spine and back down to another leaf – sees the higher one.

Like the managed SX6036 switch, the SX6536 director will eventually be partitionable into eight separate switch partitions, but that capability is not coming until a future release of the hardware. The SX6536 leaf blades have 18 ports of FDR InfiniBand each, and two columns of 18 of these leaves can be put into the 29U chassis.
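
Those leaf counts line up with the bandwidth claim: two columns of 18 leaf blades at 18 FDR ports apiece gives 648 external ports, and double-counting full-duplex traffic as before lands close to the quoted 72.5Tb/sec – again our own sketch of the arithmetic:

```python
# Director-class port count and aggregate bandwidth (same assumptions as above)
leaf_blades = 2 * 18              # two columns of 18 leaf blades
ports_per_leaf = 18               # FDR ports per leaf blade

total_ports = leaf_blades * ports_per_leaf
aggregate_tbps = total_ports * 56 * 2 / 1000.0

print(total_ports)                      # 648 FDR ports
print(f"{aggregate_tbps:.1f} Tb/sec")   # ~72.6 Tb/sec, close to the quoted 72.5
```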

Mellanox has not said when the SX6536 FDR InfiniBand director switch will be available. ®
