
By Philip Howard, 29th March 2005 15:33

A case for software benchmarking

Wheat and chaff

Comment I do not normally have much truck with benchmarking, at least for such public tests as TPC-C (the standard for transaction processing) and TPC-H (for data warehousing). These are typically marketing figures – the tests are artificial and limited, and vendors with big bucks can throw enough money at them to ensure that they do well. Thus, for example, I did not publicise IBM's TPC-C figure for the latest release of DB2 last year, despite the fact that it was more than twice as good as any previous figure. No, to my mind the only useful benchmarks are those that are performed specifically for individual customers, with their data and their workload. But these, of course, are expensive to run.

This, at least, was my view until recently, when I discussed benchmarking with one of the founders of the discipline, who was at the University of Wisconsin back in the 80s. The point was made to me that in the early days benchmarks really did serve a useful purpose. For example, it turned out that some of the databases tested back then could not handle such things as three-way joins: the optimiser just thrashed about and produced nothing. The contention, therefore, is that benchmarking can be useful where there is nothing but uncertainty, doubt and marketing hype about performance – in other words, in the early days of a technology.
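
By way of illustration only (this is not one of the Wisconsin tests, and the table names are invented), the sort of three-way join that reportedly defeated some early optimisers looks like the sketch below; any modern engine, here SQLite driven from Python, plans it without fuss.

    # Three tables, one query: the optimiser must choose a join order
    # across all of them - the step where some early products thrashed.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customer (c_id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders   (o_id INTEGER PRIMARY KEY, c_id INTEGER, o_date TEXT);
        CREATE TABLE lineitem (l_id INTEGER PRIMARY KEY, o_id INTEGER, amount REAL);
        INSERT INTO customer VALUES (1, 'Acme');
        INSERT INTO orders   VALUES (10, 1, '2005-03-01');
        INSERT INTO lineitem VALUES (100, 10, 42.0);
    """)

    rows = conn.execute("""
        SELECT c.name, o.o_date, SUM(l.amount)
        FROM customer c
        JOIN orders o   ON o.c_id = c.c_id
        JOIN lineitem l ON l.o_id = o.o_id
        GROUP BY c.name, o.o_date
    """).fetchall()
    print(rows)  # [('Acme', '2005-03-01', 42.0)]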

Is there such a technology? Yes, there is. Vendors (recent examples include Software AG, Sunopsis and Informatica) are pouring into the EII (enterprise information integration) space to offer federated query capability: that is, support for real-time queries against heterogeneous data sources.
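
To make that concrete, here is a minimal sketch (my own, not any vendor's architecture, and with invented schemas and source names) of what a federated query does: fetch from two unrelated sources at query time and join the results in the integration layer, rather than copying everything into a warehouse first.

    import csv, io, sqlite3

    # Source 1: a relational database.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE customers (id INTEGER, region TEXT)")
    db.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "EMEA"), (2, "APAC")])

    # Source 2: a flat file, standing in for any non-relational source.
    orders_csv = io.StringIO("customer_id,total\n1,250.0\n2,99.5\n1,17.25\n")
    orders = list(csv.DictReader(orders_csv))

    # The "federated" part: join across both sources at query time.
    regions = dict(db.execute("SELECT id, region FROM customers"))
    totals = {}
    for row in orders:
        region = regions.get(int(row["customer_id"]))
        totals[region] = totals.get(region, 0.0) + float(row["total"])

    print(totals)  # {'EMEA': 267.25, 'APAC': 99.5}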

To date, much of the focus in this market has been on how you set up and define these queries, and only a handful of vendors (notably Composite Software, Callixa and IBM) have put much emphasis on performance. But performance is a big issue. Indeed, if you don't get good performance across a spectrum of queries, then it is not worth investing in the technology in the first place. In fact, I know of one major company (which will remain nameless) that, to its credit, has not introduced a product into this space precisely because it cannot get it to perform.

I am inclined to think that some benchmarking, perhaps based on a distributed TPC-H model, would be of benefit in this instance, at least in the short term. Vendors can currently wave performance figures around willy-nilly without fear of contradiction, and it is pretty much certain that some federated query projects will fail because the selected solution fails to perform. While benchmarks are only a snapshot, in this case they may serve as an initial means of sorting the wheat from the chaff (if only because the chaff will not participate). To that extent I am in favour of them.

© IT-Analysis.com

Related stories

Sun could quell database hunger with Unify buy
IBM benchmark leaves server rivals breathless
IBM and HP take phony benchmark war up several notches
