Mellanox gives InfiniBand a 5 BILLION PACKET/sec cloud dose

100 Gbps switch, HPC software, drinks less juice too

Networking tech firm Mellanox is broadening its 100 Gbps InfiniBand portfolio – and is hoping that the new products and capabilities will help it expand its footprint outside its supercomputer home territory.

Following on from its demo of 100 Gbps cables last March, the company is using ISC 2014 in Germany to launch the Switch-IB, which has 36 ports at 100 Gbps apiece, a claimed aggregate throughput of 5.4 billion packets per second, and 130ns latency. The company's marketing VP, Gilad Shainer, told The Register that Mellanox will also be pushing a low-power theme for the switch at the conference.
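A quick back-of-the-envelope check – The Register's arithmetic, not Mellanox's – suggests the headline figure is roughly what 36 ports at 100 Gbps can do with minimum-size packets, assuming Ethernet-style framing of 64-byte frames plus 20 bytes of preamble and inter-frame gap (InfiniBand's wire overheads differ, but the order of magnitude holds):

    #include <stdio.h>

    int main(void)
    {
        const double link_bps    = 100e9;   /* 100 Gbps per port */
        const double frame_bytes = 64 + 20; /* assumed min frame + preamble/IFG */
        const int    ports       = 36;

        /* packets per second one port can carry at line rate */
        double pps_per_port = link_bps / (frame_bytes * 8.0);
        double pps_total    = pps_per_port * ports;

        printf("per port:  %.1f Mpps\n", pps_per_port / 1e6);     /* ~148.8 */
        printf("aggregate: %.2f billion pps\n", pps_total / 1e9); /* ~5.36  */
        return 0;
    }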

Since the 100 Gbps systems can connect more servers to fewer elements in a switch fabric, “for any switching infrastructure 100 Gbps will be a money-saver,” Shainer said. “Even if your application doesn't need 100 Gbps today, you can go and build the network, and that allows you to reduce the number of switches you need, your power consumption, and real estate”.

For a rack shifting 7Tbps of data, Mellanox claims the switch occupies one-quarter the real estate of a Cisco Nexus 6004 switch, at one-twelfth the power consumption.
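Where the saving comes from is simple link arithmetic. Here's a hedged sketch – the 7.2 Tbps aggregate is our illustrative figure, not a Mellanox one – showing that moving a fixed amount of inter-switch traffic at 100 Gbps needs well under half the links, and therefore ports and boxes, that 40 Gbps does:

    #include <stdio.h>

    /* Links (and hence switch ports) needed to carry a given aggregate,
       rounded up to a whole link. Illustrative arithmetic only. */
    static int links_needed(double aggregate_gbps, double link_gbps)
    {
        int links = (int)(aggregate_gbps / link_gbps);
        if (links * link_gbps < aggregate_gbps)
            links++;
        return links;
    }

    int main(void)
    {
        const double aggregate = 7200.0; /* assumed 7.2 Tbps between tiers */

        printf("at  40 Gbps: %d links\n", links_needed(aggregate, 40.0));  /* 180 */
        printf("at 100 Gbps: %d links\n", links_needed(aggregate, 100.0)); /*  72 */
        return 0;
    }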

All of this lies behind the expectation that Mellanox can ramp up its push beyond the HPC market: “there are more applications now that depend on fast analysis of data – national security, health care, Web applications.”

The high-performance demands of cloud providers also put a premium on good RDMA performance, Shainer said, and the need for solutions that can scale is creating a growing market opportunity for InfiniBand.

“And we support all CPU architectures – for example, IBM has demonstrated multiple cases to take advantage of InfiniBand and RDMA, in which they're getting ten times better performance out of NoSQL,” he said.
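For readers outside the HPC bubble: RDMA's trick is that data lands in a remote machine's memory without the remote CPU lifting a finger, which is exactly the property the NoSQL workloads Shainer mentions exploit. Below is a minimal sketch of a one-sided RDMA write using the standard libibverbs API – queue-pair setup, completion polling, and the out-of-band exchange of the peer's address and rkey are all omitted, so treat it as an illustration rather than a recipe:

    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <string.h>

    /* Post a one-sided RDMA write: the contents of buf land at
     * remote_addr on the peer with no involvement from its CPU.
     * Assumes qp and mr were set up earlier, and that remote_addr
     * and rkey were exchanged out of band (e.g. over TCP). */
    static int rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                          void *buf, uint32_t len,
                          uint64_t remote_addr, uint32_t rkey)
    {
        struct ibv_sge sge = {
            .addr   = (uint64_t)(uintptr_t)buf,
            .length = len,
            .lkey   = mr->lkey,   /* from ibv_reg_mr() */
        };
        struct ibv_send_wr wr, *bad_wr = NULL;

        memset(&wr, 0, sizeof(wr));
        wr.opcode              = IBV_WR_RDMA_WRITE;
        wr.sg_list             = &sge;
        wr.num_sge             = 1;
        wr.send_flags          = IBV_SEND_SIGNALED; /* ask for a completion */
        wr.wr.rdma.remote_addr = remote_addr;
        wr.wr.rdma.rkey        = rkey;

        return ibv_post_send(qp, &wr, &bad_wr);
    }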

The switches, he said, will be followed by NICs in the 2014/15 timeframe.

For its more traditional customer base, Mellanox is also launching a bunch of HPC software libraries, the HPC-X Scalable Software Toolkit, to make it easier and quicker to get supercomputer software to make the most of the underlying networking fabric.

The open software-based HPC-X has libraries for MPI, SHMEM and PGAS, along with a bunch of accelerators. The toolkit includes Open MPI, a parallel programming language based on Berkeley UPC, support for MPI-3, and a variety of monitoring and benchmarking tools.
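For a flavour of what's being accelerated: since HPC-X's MPI is based on Open MPI, a bog-standard MPI program such as the allreduce below – our example, not Mellanox's – should be able to pick up the stack's fabric optimisations simply by being compiled with its mpicc and launched with mpirun, no source changes required:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Each rank contributes a value; the allreduce sums them across
         * every process -- the kind of collective a fabric-offload
         * library tries to keep off the host CPU. */
        int local = rank, sum = 0;
        MPI_Allreduce(&local, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d ranks: %d\n", nprocs, sum);

        MPI_Finalize();
        return 0;
    }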

The HPC-X toolkit is not confined to InfiniBand – Shainer said it also works over underlying Ethernet fabrics.

Cloud reference architecture

The other two key announcements to be made at the conference are Cloud-X, a reference architecture for cloud environments, and a joint demonstration with AppliedMicro's X-Gene 64-bit ARM platform. Shainer told The Register the Cloud-X reference design is intended to simplify the deployment of Mellanox's technologies into the cloud. The recipes, he said, will be based on off-the-shelf servers and storage, with a choice of OpenStack or commercial cloud stacks.

“We're also providing software that allows you to create a system in one click,” he said. There will also be an incubator for proofs of concept, and demonstration environments. ®
