
Mellanox forges switch-hitting ConnectX-3 adapters

The server companion to SwitchX switch chips

Networking chip, adapter card and switch maker Mellanox is rounding out its converged InfiniBand-Ethernet product line with the debut of its ConnectX-3 integrated circuits and of network adapter cards built around those chips.

Mellanox has been selling multi-protocol chips and adapter cards for servers for a number of years, and back in April the company announced its first switch-hitting chips, called SwitchX, to implement both 40Gb/sec Ethernet and 56Gb/sec InfiniBand on the same piece of silicon. Those SwitchX chips came to market in May at the heart of the SX1000 line of 40GE switches. Later this year, the SwitchX silicon will be used to make a line of InfiniBand switches and eventually, when the multiprotocol software is fully cooked, will come out in a line of switches that can dynamically change between Ethernet and InfiniBand on a port-by-port basis.

The long-term goal at Mellanox – and one of the reasons it bought two-timing InfiniBand and Ethernet switch maker Voltaire back in November for $218m – is to allow customers to wire once and switch protocols on the server and switch as required by workloads. Mellanox can presumably charge a premium for such capability, and both the SwitchX and ConnectX-3 silicon allow Mellanox to create fixed-protocol adapters and switches at specific speeds to target specific customer needs and lower price points, too.

The Mellanox ConnectX-3 adapter chip

The ConnectX-3 silicon announced today is the first Fourteen Data Rate (FDR, running at 56Gb/sec) InfiniBand adapter chip to come to market. When running the InfiniBand protocol, it supports Remote Direct Memory Access (RDMA); Fibre Channel over InfiniBand (FCoIB); and Ethernet over InfiniBand (EoIB). RDMA is the key feature that lowers latencies on server-to-server links because it allows a server to bypass the entire network stack and reach right into the main memory of an adjacent server over InfiniBand links and grab some data.
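
For the curious, here is a minimal sketch of what that looks like from the software side, using the open libibverbs API rather than anything specific to ConnectX-3: it registers a buffer with the adapter and prints the keys a remote peer would need in order to read or write that memory directly. Queue-pair setup and the exchange of addresses between hosts are left out, and the code assumes an RDMA-capable adapter and the verbs library are present (build with gcc rdma_sketch.c -libverbs).

    /* Minimal libibverbs sketch: register a buffer for RDMA access.
       Illustrative only; queue pairs and connection setup are omitted. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) {
            fprintf(stderr, "no RDMA-capable devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
        if (!ctx || !pd) {
            fprintf(stderr, "failed to open device or allocate PD\n");
            return 1;
        }

        /* Pin and register 4KB so the adapter can read and write it
           directly, without going through the kernel network stack. */
        size_t len = 4096;
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr) {
            perror("ibv_reg_mr");
            return 1;
        }

        /* A remote peer armed with this rkey and the buffer's address
           can issue RDMA reads and writes against it. */
        printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
               len, mr->lkey, mr->rkey);

        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }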

The ConnectX-3 chip supports InfiniBand running at 10Gb/sec, 20Gb/sec, 40Gb/sec, and 56Gb/sec speeds. On the Ethernet side, the ConnectX-3 chip implements the 10GE or 40GE protocols and supports RDMA over Converged Ethernet (RoCE), Fibre Channel over Ethernet (FCoE), and Data Center Bridging (DCB). The new silicon also supports SR-IOV – the PCI I/O virtualization and isolation standard that allows multiple operating systems to share a single PCI device – and IEEE 1588, a standard for synchronizing host server clocks to a master data center clock.
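
To illustrate the SR-IOV part of that list, the sketch below pokes the generic sysfs knob that later Linux kernels expose for carving a physical adapter into virtual functions. The PCI address is invented, the virtual-function count is arbitrary, and early Mellanox drivers used module parameters instead, so treat it as a sketch of the idea rather than a recipe.

    /* Hedged sketch: ask the kernel to create SR-IOV virtual functions
       for one PCI device via the generic sysfs interface (run as root).
       The PCI address below is made up for illustration. */
    #include <stdio.h>

    int main(void)
    {
        const char *path =
            "/sys/bus/pci/devices/0000:03:00.0/sriov_numvfs";

        FILE *f = fopen(path, "w");
        if (!f) {
            perror("open sriov_numvfs");
            return 1;
        }

        /* Four virtual functions, each of which can be passed through
           to a separate guest operating system. */
        fprintf(f, "4\n");
        fclose(f);
        return 0;
    }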

John Monson, vice president of marketing at Mellanox, tells El Reg that the important thing about the ConnectX-3 adapter card chip is that it is tuned to match the bandwidth of the forthcoming PCI-Express 3.0 bus. PCI-Express 3.0 slots are expected to arrive with the next generation of servers later this year, and Ethernet and InfiniBand adapter cards are usually designed for x8 slots. The ConnectX-3 chip can also be implemented on PCI-Express 1.1 or 2.0 peripherals if companies want to make cards that run at lower speeds on slower buses.

The ConnectX-3 chip is small enough to be implemented as a single-chip LAN-on-motherboard (LOM) module, which is perhaps the most important factor in enabling widespread adoption of 10GE and, later, 40GE networking in data centers. The ConnectX-3 chip includes PHY networking features, so you don't have to add these to the LOM; all you need are some capacitors and resistors and you are good to go, says Monson. The ConnectX-3 chip will also be used in PCI adapter cards and in mezzanine cards that slide into special slots on blade servers. Hewlett-Packard, IBM, Dell, Fujitsu, Oracle, and Bull all OEM Mellanox silicon, adapter cards, or mezz cards for their respective server lines to support InfiniBand, Ethernet, or converged protocols. It is not entirely clear if blade server makers will go with their current mezz card designs or implement LOM for 10GE networking. "It will be interesting to see how this will play out," Monson says.

The ConnectX-3 chip has enough oomph to implement two 56Gb/sec InfiniBand ports, two 40Gb/sec Ethernet ports, or one of each. Obviously, with an x8 PCI-Express 3.0 slot running at 8GT/sec, you have a peak of 64Gb/sec across eight lanes on the bus, and with encoding and protocol overhead you might be down somewhere around 56Gb/sec of usable bandwidth for a single x8 slot. So putting two FDR InfiniBand or 40GE ports on the same bus could saturate it, depending on the workload. (You might wonder why network cards for HPC servers are not made to plug into x16 slots, but for whatever reason, they are not.)
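
The back-of-the-envelope arithmetic behind that saturation claim looks like this. The figures are nominal PCI-Express 3.0 numbers, and the roughly 12 per cent protocol overhead at the end is an assumed ballpark, not a measured value.

    /* Rough PCI-Express 3.0 x8 bandwidth arithmetic. Nominal figures
       only; the protocol overhead factor is an assumed ballpark. */
    #include <stdio.h>

    int main(void)
    {
        double gt_per_lane = 8.0;         /* PCIe 3.0 signalling, GT/sec   */
        int lanes = 8;                    /* an x8 slot                    */
        double encoding = 128.0 / 130.0;  /* PCIe 3.0 line encoding        */

        double raw = gt_per_lane * lanes;      /* ~64Gb/sec on the bus     */
        double coded = raw * encoding;         /* ~63Gb/sec after encoding */
        double usable = coded * 0.88;          /* assumed ~12% overhead for
                                                  TLP headers, flow control */

        printf("raw: %.1f Gb/sec, after encoding: %.1f Gb/sec, "
               "usable: ~%.0f Gb/sec\n", raw, coded, usable);
        printf("two FDR InfiniBand ports could ask for %.0f Gb/sec\n",
               2 * 56.0);
        return 0;
    }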

Mellanox is happy to sell its ConnectX-3 silicon to anyone who wants to make network adapters, but is keen on selling its own adapters, of course. The ConnectX-3 chip is sampling now and will be generally available in a few months.

