Mellanox enables server adapter sharing

At InfiniBand edge

Mellanox has provided a way for servers to share Ethernet and Fibre Channel adapters slotted into a BridgeX product at the edge of an InfiniBand network.

The Mellanox view of the world is that, currently, servers in enterprise and high-performance computing (HPC) data centres can have as many as three different adapter cards: an InfiniBand Host Channel Adapter (HCA), an Ethernet Network Interface Card (NIC), and a Fibre Channel Host Bus Adapter (HBA). Each one has to be acquired, managed and powered, adding to the data centre costs.

By positioning these adapters in a bridge device at the edge of an InfiniBand network, servers on that network can use InfiniBand as a unified networking backbone and share the adapters, increasing utilisation, simplifying cabling, lowering power consumption and reducing cost.

The Mellanox BridgeX product connects to an InfiniBand switch and provides protocol translation so that servers can talk to Ethernet and/or Fibre Channel resources across a 40Gbit/s InfiniBand backbone via their ConnectX HCAs.

Hedging its bets, Mellanox has made the BridgeX product suitable for either InfiniBand or Ethernet consolidation. Eyal Waldman, Mellanox's chairman, president and CEO, said: "BridgeX delivers a fabric-agnostic I/O consolidation solution that allows end-users to optimise the data centre infrastructure to run applications over high-performance, lossless, low-latency 40Gb/s InfiniBand or 10 Gigabit Ethernet." It supports Fibre Channel over Ethernet (FCoE) and is being sold to Ethernet switch vendors as a kind of FCoE gateway.

The main BridgeX I/O ports can be configured for either InfiniBand or Ethernet, and the product supports 2, 4 and 8Gbit/s Fibre Channel, 10Gbit/s Ethernet and 40Gbit/s InfiniBand.
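Mellanox has not published the gateway's management interface in this announcement, so purely as an illustration of that per-port flexibility, here is a minimal Python sketch with invented names; every identifier in it is hypothetical:

```python
# Hypothetical sketch of BridgeX-style per-port configuration. Mellanox's
# real management interface is not described in the announcement, so all
# names here are invented for illustration.

SUPPORTED = {
    "infiniband": {40},          # 40Gbit/s InfiniBand
    "ethernet": {10},            # 10Gbit/s Ethernet
    "fibre_channel": {2, 4, 8},  # 2, 4 and 8Gbit/s Fibre Channel
}

def configure_port(port_id: int, protocol: str, speed_gbps: int) -> dict:
    """Validate a requested protocol/speed pair and return a config record."""
    if protocol not in SUPPORTED:
        raise ValueError(f"unknown protocol: {protocol}")
    if speed_gbps not in SUPPORTED[protocol]:
        raise ValueError(f"{protocol} does not support {speed_gbps}Gbit/s")
    return {"port": port_id, "protocol": protocol, "speed_gbps": speed_gbps}

# One main I/O port facing the InfiniBand fabric, plus downlinks towards
# Ethernet and Fibre Channel resources:
fabric = configure_port(0, "infiniband", 40)
lan    = configure_port(1, "ethernet", 10)
san    = configure_port(2, "fibre_channel", 8)
```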

In a blade server environment, BridgeX can be paired with a 10GigE or InfiniBand switch blade to give servers Fibre Channel SAN connectivity through an FCoE or Fibre Channel over InfiniBand (FCoIB) adapter.

Mellanox is keen to point out that non-InfiniBand protocols are encapsulated for transfer across the InfiniBand fabric rather than terminated, which, it says, preserves full wire-speed networking.
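The principle is that the original frame rides inside an InfiniBand message as an opaque payload and emerges byte-for-byte identical at the far side. The toy Python sketch below shows the idea only; the tunnel header layout is invented and is not the real EoIB or FCoIB wire format:

```python
import struct

# Toy encapsulation sketch: wrap a complete Ethernet frame in a simplified
# tunnel header for transport across an InfiniBand fabric. The real EoIB and
# FCoIB wire formats are more involved; this only illustrates that the inner
# frame is carried opaquely rather than terminated and re-originated.

TUNNEL_HDR = struct.Struct("!HI")  # (protocol id, payload length) - invented layout
PROTO_ETHERNET = 0x0001

def encapsulate(frame: bytes) -> bytes:
    """Prepend a tunnel header; the frame itself is untouched."""
    return TUNNEL_HDR.pack(PROTO_ETHERNET, len(frame)) + frame

def decapsulate(message: bytes) -> bytes:
    """Strip the tunnel header and recover the original frame verbatim."""
    proto, length = TUNNEL_HDR.unpack_from(message)
    assert proto == PROTO_ETHERNET
    return message[TUNNEL_HDR.size:TUNNEL_HDR.size + length]

eth_frame = bytes.fromhex("ffffffffffff") + b"\x00" * 54  # dummy 60-byte frame
assert decapsulate(encapsulate(eth_frame)) == eth_frame   # byte-for-byte identical
```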

Mellanox's hope is that data centres will enjoy network consolidation benefits - lower cost, simpler management, lower power needs and fewer cables - by using InfiniBand as the converged network backbone, but it is also aiming to play nicely in an Ethernet backbone environment.

The BridgeX-based BX line of gateway systems is available for purchase today, starting at a list price of $9,995. ®
