Liquid Computing to float slushy Intel servers

x64-commodity drip effect

Liquid Computing is moving further away from its home-grown server design and more towards commodity x64 iron as it tries to ride the unified computing wave.

At this month's Supercomputing 09 (SC09) conference in Portland, Oregon, Liquid Computing will launch a cluster of rack servers using its variation on the unified server and storage fabric scheme, now also based on 10 Gigabit Ethernet after the company's proprietary and very slick interconnect was ripped out of the LiquidIQ product line in a relaunch last September.

That relaunch, with the LiquidIQ 2.0 products, had the blade-based Opteron compute nodes (ten in the front and ten in the back of a 20U chassis) bridging their HyperTransport links to a home-grown Ethernet switch built on 10 GE silicon from Fulcrum Microsystems, the same chips many switch makers are using these days.

A year ago, Liquid Computing was looking ahead to Intel's Nehalem family of chips and their QuickPath Interconnect, which is a lot like HyperTransport and would therefore allow the LiquidIQ gear to offer Xeon processor options.

According to Vikram Desai, the company's president and chief executive officer, these Opteron-based LiquidIQ 2.0 machines are still available, but after consulting with customers, the company was basically told to adopt more standard server and storage technologies.

And so the new Liquid Elements product line that will debut at SC09 is based on Intel's custom Marble Valley half-width rack servers - meaning you can put two of them in a single 1U chassis. Liquid Computing has also tapped partner Network Appliance to supply iSCSI storage arrays, although any iSCSI storage will work.

Basically, with the Liquid Elements product line, what was once a very high-bandwidth, proprietary interconnect for virtualizing processors, memory, and storage has become a 4U 10 Gigabit Ethernet switch with some sophisticated virtualization and provisioning software embedded in it, plugging into plain vanilla x64 servers fitted with Liquid adapters.

Although the technology differs a bit from what Cisco Systems is doing with its Nexus unified fabric switches and the "California" Unified Computing System blade servers that embed the Nexus converged storage and server fabrics on a 10 GE backbone, the approach is similar: lash the servers to a switch with one and only one wire that has enough bandwidth to handle both server and storage traffic, and make the whole thing manageable from the switch.
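
To put the one-wire pitch in perspective, here's a toy cable tally; the per-server cable counts are illustrative assumptions on our part, not figures from Liquid or Cisco:

    # Toy cable count for a rack of 16 servers, before and after convergence.
    # The per-server counts are illustrative assumptions only.
    servers = 16
    legacy_cables = 2 + 2 + 1     # 2x GbE data, 2x Fibre Channel, 1x management
    converged_cables = 1          # one 10 GE wire for server and storage traffic

    print(f"Legacy:    {servers * legacy_cables} cables")      # 80
    print(f"Converged: {servers * converged_cables} cables")   # 16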

The thing is, the IQInterconnect, which was built by a bunch of telecom nerds, as El Reg explained here, had 100 GB/sec of bandwidth between compute modules and the central switch with the Liquid 1.0 products, while the aggregate bandwidth on the Ethernet links coming out of the compute modules in the Liquid 2.0 products dropped to 84 Gb/sec. That first number is gigabytes, not gigabits.
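
For the unit-wary, a quick back-of-the-envelope conversion (assuming the two quoted figures are comparable aggregate numbers) shows the size of that drop:

    # Convert the Liquid 1.0 figure from gigabytes to gigabits and compare.
    liquid_1_gbps = 100 * 8   # 100 GB/sec = 800 Gb/sec
    liquid_2_gbps = 84        # aggregate Ethernet bandwidth per compute module

    print(f"Liquid 1.0: {liquid_1_gbps} Gb/sec")
    print(f"Liquid 2.0: {liquid_2_gbps} Gb/sec")
    print(f"Drop: {liquid_1_gbps / liquid_2_gbps:.1f}x")   # roughly 9.5x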

With the shift to the Ethernet interconnect in the Liquid 2.0 products, both Windows and Linux were supported, as were the ESX Server, Hyper-V, and Xen hypervisors. But these hypervisors are a little more rigid than the pooling and virtualization software in the Liquid 1.0 products, which allowed virtual SMP servers to be created on the fly.

Hence, these are more slushy than liquid, and the size of a server node is really limited to the motherboard. The old Liquid 1.0 products spanned motherboards, much as 3Leaf Systems and ScaleMP do with their respective hardware and software solutions that were announced this week as well.

The core of the forthcoming Liquid Elements line is the 4U fabric module, which has redundant switches for high availability and runs Liquid's Fabric Control Software, an out-of-band management tool embedded in the switch.

The Marble Valley servers, which you can learn more about here, put two slide-in systems into a single 1U rack enclosure, each with a two-socket Xeon 5500 motherboard from Intel and room for 18 DDR3 DIMMs - a maximum memory capacity of 144GB per server node, which works out to 8GB sticks in every slot. The enclosure can hold four 2.5-inch SATA drives, two for each server.

Each Marble Valley server has a single PCI-Express 2.0 x8 peripheral slot and an I/O expansion module that can support either 10 Gigabit Ethernet or quad data rate (QDR) InfiniBand links. Liquid supplies its own network adapters, which plug into the system boards and talk back to its switches, which in turn link out to storage and networks.
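
A quick sanity check on that slot, assuming standard PCI-Express 2.0 signalling rates, shows why x8 is the floor for those link speeds:

    # PCI-Express 2.0 runs 5 GT/sec per lane; 8b/10b encoding leaves
    # 4 Gb/sec (500 MB/sec) of usable bandwidth per lane, per direction.
    lanes = 8
    usable_gbps_per_lane = 4
    slot_gbps = lanes * usable_gbps_per_lane

    print(f"PCIe 2.0 x8: {slot_gbps} Gb/sec per direction")   # 32 Gb/sec
    # That comfortably feeds a 10 GE port. QDR InfiniBand signals at
    # 40 Gb/sec but nets out at 32 Gb/sec after its own 8b/10b encoding,
    # so the slot just barely covers it.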

The Liquid Elements setup that will come to market in the third week of December can scale to 16 rack servers - a total of 32 nodes - for each Liquid switch. A configuration with 20 server nodes, plus the switch and peripherals to back them up, will run about $190,000.
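
Back of the envelope, using only the round numbers quoted above and assuming the $190,000 covers the whole shebang:

    # Per-node cost of the quoted 20-node Liquid Elements configuration.
    total_cost = 190_000
    nodes = 20
    print(f"${total_cost / nodes:,.0f} per node, fabric included")   # $9,500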

Unified computing ain't cheap. But that is what happens when you can demonstrate operational cost savings, like the ten-to-one reduction in server provisioning time, the cut in cabling, and the two-to-one reduction in server space requirements with the Marble Valley gear.

IT vendors shift some of those savings to capital expenditures and charge a profit. That is certainly what Cisco and its partners EMC and VMware are trying to do with the combination of California blades and switches, vSphere hypervisors, and storage. ®
