Oracle and HP's database machine predicated on Voltaire

Israeli firm says InfiniBand is best of all possible IOs

By Chris Mellor, 4th October 2008 06:02

Interview "Common sense is not so common" is a quotation attributed to Voltaire, the 18th century French philosopher, essayist, writer and wit. So, how sensible were HP and Oracle in basing their database machine around an IO fabric that has struggled to find favour beyond the HPC market?

The fabric in question is InfiniBand, and the switches come from Voltaire - the Israeli firm, rather than the philosopher. InfiniBand is rated at 20Gbit/s, and Voltaire would like the idea that speed and daring are involved in Oracle and HP's decision to go with InfiniBand rather than 10GigE, Fibre Channel or a SAS fabric.

High-performance computing (HPC) applications use InfiniBand as a cluster or grid interconnect. Its bandwidth is so high, with 40Gbit/s possible, that its backers tout it as a data centre convergence fabric candidate with other protocols - Ethernet, Fibre Channel, whatever - running on top of it.

That's a theoretical possibility, but is it practicable? Can InfiniBand break out of its HPC niche into general data centre use? We asked Asaf Somekh, Voltaire's VP for strategic alliances, some questions about the Database Machine's use of InfiniBand and what it might mean. Asaf's answers have not been edited at all.

El Reg: There are two grids in the Oracle Database Machine; one for the database servers and one for the Exadata storage servers. How is InfiniBand used within and between these grids?

Asaf Somekh: The HP Oracle Database Machine is designed to run large, multi-terabyte data warehouses with 10x faster performance than conventional database solutions. The HP Oracle Database Machine uses 14 Oracle Exadata Servers along with 8 Oracle RAC 11g database servers connected using four Voltaire Grid Director 24-port InfiniBand switches. The four Voltaire switches form a unified fabric which is used both for server and storage communication (see the diagram below). Separate grids for servers and storage are not required – because of the high throughput that InfiniBand provides, the same InfiniBand wires are used for running both types of traffic over a single fabric.

[Diagram: InfiniBand fabric layout in the Oracle Database Machine]
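
As a back-of-envelope check on that single-fabric claim, here is a sketch of the port arithmetic. The two-links-per-node (dual-rail) figure is our illustrative assumption, not a documented Oracle or HP configuration:

```c
/* Back-of-envelope port budget for the fabric described above: 14 Exadata
 * storage servers and 8 RAC database servers on four 24-port switches.
 * The two-links-per-node (dual-rail) figure is an illustrative assumption,
 * not a documented Oracle/HP configuration. */
#include <stdio.h>

int main(void)
{
    const int storage_nodes    = 14;
    const int db_nodes         = 8;
    const int switches         = 4;
    const int ports_per_switch = 24;
    const int links_per_node   = 2;   /* assumption: dual-connected HCAs */

    const int total_ports = switches * ports_per_switch;
    const int node_ports  = (storage_nodes + db_nodes) * links_per_node;

    printf("fabric ports: %d, node links: %d, left for inter-switch/expansion: %d\n",
           total_ports, node_ports, total_ports - node_ports);
    return 0;
}
```

On those assumptions the 22 nodes consume 44 of the 96 ports, leaving 52 for the inter-switch links that stitch the four switches into a single fabric, plus expansion.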

Not only does the HP Oracle Database Machine take advantage of InfiniBand's 20Gbit/s bandwidth and low latency, it also uses Reliable Datagram Sockets (RDS) and Remote Direct Memory Access (RDMA), which enable zero-loss, zero-copy transfers, in order to deliver extreme I/O performance.
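
For readers who haven't met RDS, here is a minimal sketch of what an RDS endpoint looks like on Linux. The socket family and type are the real kernel interface, but the port number is a hypothetical example and error handling is pared to the bone:

```c
/* Minimal sketch: a Reliable Datagram Sockets (RDS) endpoint on Linux.
 * Assumes the rds kernel module is available; the port number below is
 * a hypothetical example. RDS delivers reliable, in-order datagrams from
 * a single socket per node, so the kernel (and, over InfiniBand, the RDMA
 * engine) handles loss and copying rather than the application. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#ifndef PF_RDS
#define PF_RDS 21   /* AF_RDS/PF_RDS on Linux; see <linux/socket.h> */
#endif

int main(void)
{
    int fd = socket(PF_RDS, SOCK_SEQPACKET, 0);
    if (fd < 0) { perror("socket(PF_RDS)"); return 1; }

    struct sockaddr_in sin;
    memset(&sin, 0, sizeof sin);
    sin.sin_family      = AF_INET;
    sin.sin_addr.s_addr = htonl(INADDR_ANY);
    sin.sin_port        = htons(18634);   /* hypothetical port */
    if (bind(fd, (struct sockaddr *)&sin, sizeof sin) < 0) {
        perror("bind");
        return 1;
    }

    /* Datagrams are then exchanged with plain sendto()/recvfrom(). Unlike
     * UDP they arrive reliably; unlike TCP, no per-peer connection or
     * socket is needed - one socket reaches every node in the fabric. */
    return 0;
}
```

That one-socket-per-node design, rather than one connection per peer, is what suits a RAC cluster exchanging many small messages between every pair of nodes.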

El Reg: Is the Oracle Database Machine just another example of a specialised enterprise HPC environment or does it have a more general significance?

Asaf Somekh: Yes and no. The majority of our customers today use InfiniBand in an HPC-like configuration to improve the performance of a single application that’s critical to their specific business objectives. One could make the case that the HP Oracle Database Machine does this for Oracle and data warehouses. This is not a bad thing: enterprises can learn a lot from the HPC space in terms of how to architect clusters and grids that combine servers and storage communications to optimise application performance.

What makes this solution more generally significant to a broader data center audience is that Oracle and HP chose to architect the system with a unified fabric for server and storage networking using InfiniBand. The storage traffic and inter-processor communication runs on a single wire. More and more enterprises are now talking about further data center consolidation and building their “next generation” data centers.

Typically, these new data centers consist of large grids of commodity servers coupled with scalable shared file system solutions and connected by a single unified fabric. InfiniBand is the only technology that can enable a unified fabric today. Ethernet can't do this yet because it depends on future technologies such as FCoE, which many say is still 2-3 years away.

El Reg: Why isn't InfiniBand used for the Database Machine's accessing application servers?

Asaf Somekh: The Database Machine solution covers the backend part of a data warehouse environment, and InfiniBand is used as its backbone. However, the solution is architected so that the door is open for customers to use Voltaire to extend the HP Oracle Database Machine's performance with InfiniBand connectivity all the way out to the application servers. In fact, some of our data warehouse customers identified InfiniBand for the application servers as the key factor in achieving their application speed-up.

El Reg: How does the cost of a server InfiniBand port compare to the cost of a GigE and a 10GigE port? What are the price trends with these types of port?

Asaf Somekh: InfiniBand is far less expensive than 10 Gigabit Ethernet. 20Gbit/s InfiniBand director-class switches cost about US$800 per port (list price). 10 Gigabit Ethernet ranges from about US$1,500 to US$3,000 per port, and a GigE port averages roughly US$200 per switch port. New Data Center Ethernet (DCE) solutions are priced even higher, at between US$2,600 and US$3,600 per port.
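
Taking those list prices at face value, here is a quick sketch of what they imply for a 96-port fabric like the Database Machine's. The totals cover switch ports only - HCAs/NICs, cables and servers excluded - and are illustrative, not quoted system prices:

```c
/* Illustrative fabric-cost comparison using the per-port list prices quoted
 * above, applied to a 96-port fabric (4 x 24-port switches). Switch-port
 * costs only; adapters, cables and servers are not included. */
#include <stdio.h>

int main(void)
{
    const int ports = 4 * 24;

    const double ib_per_port = 800.0;    /* 20Gbit/s InfiniBand (list) */
    const double tenge_lo    = 1500.0;   /* 10 Gigabit Ethernet, low end */
    const double tenge_hi    = 3000.0;   /* 10 Gigabit Ethernet, high end */
    const double dce_lo      = 2600.0;   /* Data Center Ethernet, low end */
    const double dce_hi      = 3600.0;   /* Data Center Ethernet, high end */

    printf("InfiniBand: US$%.0f\n", ports * ib_per_port);
    printf("10GigE:     US$%.0f to US$%.0f\n", ports * tenge_lo, ports * tenge_hi);
    printf("DCE:        US$%.0f to US$%.0f\n", ports * dce_lo, ports * dce_hi);
    return 0;
}
```

On those numbers, the InfiniBand fabric's switch ports come to about US$76,800, against US$144,000 to US$288,000 for 10GigE and US$249,600 to US$345,600 for DCE.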

El Reg: How can InfiniBand move into the general data centre server networking space?

Asaf Somekh: Solutions like the HP Oracle Database Machine are helping InfiniBand gain more share of the data center networking market. But there are many other things happening at a technology level and a macroeconomic level that will help InfiniBand proliferate in the data center. For example, Voltaire is doing a lot of work with leading enterprise applications such as Oracle and VMware to ensure tight integration, better I/O throughput, scalability and performance boosts for these applications.

The needs of the applications will be the drivers for InfiniBand adoption. At a business level, customers are telling us their charter from management is to get the most performance for their business-critical applications at the lowest cost. Many are turning to clusters and scale-out architectures to help them do more with less. InfiniBand is the best technology for delivering application performance in scale-out environments. It's also about 50 per cent more energy-efficient than 10 Gigabit Ethernet, which provides additional savings on power and cooling.

Comment

Iain Stephens, HP's VP for industry standard servers in EMEA, says he's seeing customers use InfiniBand to link servers with direct-attached storage together, and to connect them to the accessing servers, because that's the most cost-effective way to get the network I/O speed they need. That adds another strand to the InfiniBand-use-is-spreading view.

Asaf's cost point above contradicts the Cisco Ethernet economics story, but I'd guess Cisco would say that's only a minor and temporary blip, as the Ethernet tide washes over every alternative network technology in its path over the next few years.

It's likely that applications needing 10GigE performance or more right now, plus the losslessness and lower latency that InfiniBand provides, will have a peek at using it. As - or if - CIOs get responsibility for energy costs, that might encourage more InfiniBand take-up.

The common sense view is that Ethernet will be the prevailing interconnect in data centres. There's no InfiniBand revolution in prospect but its use does look to be spreading, particularly amongst customers who, we might say, are daring enough to use its speed.

Mind you, Voltaire isn't a reliable inspiration. The man described Canada as "a few acres of snow". ®
