
By Philip Howard, 29th March 2005 15:33

A case for software benchmarking

Wheat and chaff

Comment I do not normally have much truck with benchmarking, at least with such public tests as TPC-C (the standard for transaction processing) and TPC-H (for data warehousing). These are typically marketing figures: the tests are artificial and limited, and vendors with big budgets can throw enough money at them to ensure that they do well. Thus, for example, I did not publicise IBM's TPC-C figure for the latest release of DB2 last year, despite the fact that it was more than twice as good as any previous figure. No, to my mind the only useful benchmarks are those performed specifically for individual customers, with their data and their workload. But these, of course, are expensive to run.

This, at least, was my view until recently, when I discussed the subject with one of the pioneers of database benchmarking, then at the University of Wisconsin, back in the 1980s. The point made to me was that, in those early days, benchmarks really did serve a useful purpose. It turned out, for example, that some of the databases tested back then could not handle a three-way join: the optimiser simply thrashed about and produced nothing. The contention, therefore, is that benchmarking is useful where there is uncertainty, doubt and a lot of marketing hype about performance; in other words, in the early days of a technology.
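For readers who have never seen one fail, the query in question is unremarkable today. A minimal sketch, using a hypothetical three-table schema and SQLite purely for illustration, shows the kind of three-way join that early optimisers reportedly could not plan:

```python
import sqlite3

# Hypothetical schema to illustrate a three-way join -- the sort of
# query that, per the anecdote, early-1980s optimisers choked on.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
    CREATE TABLE items (order_id INTEGER, product TEXT);
    INSERT INTO customers VALUES (1, 'Acme');
    INSERT INTO orders VALUES (10, 1);
    INSERT INTO items VALUES (10, 'widget');
""")

# A modern optimiser plans this routinely; the early benchmarks'
# value lay in exposing engines where it thrashed and returned nothing.
rows = conn.execute("""
    SELECT c.name, i.product
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    JOIN items i ON i.order_id = o.id
""").fetchall()
print(rows)  # [('Acme', 'widget')]
```

The benchmark's job was not to time this query but to establish whether the engine could answer it at all.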

Is there such a technology today? Yes, there is. Vendors (recent examples include Software AG, Sunopsis and Informatica) are pouring into the EII (enterprise information integration) space to offer federated query capability: that is, support for real-time queries against heterogeneous data sources.
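To make the idea concrete: a federated query joins data from disparate sources at query time, without first copying everything into a central warehouse. A minimal sketch, with hypothetical sources (a relational table and a CSV feed) standing in for the heterogeneous systems an EII product would federate:

```python
import csv
import io
import sqlite3

# Hypothetical source 1: a relational table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE products (sku TEXT, name TEXT)")
db.execute("INSERT INTO products VALUES ('A1', 'widget')")

# Hypothetical source 2: a CSV feed, read at query time.
csv_feed = io.StringIO("sku,stock\nA1,42\n")
stock = {row["sku"]: int(row["stock"]) for row in csv.DictReader(csv_feed)}

# The join itself, performed in the federation layer rather than in
# either source system.
result = [
    (name, stock.get(sku, 0))
    for sku, name in db.execute("SELECT sku, name FROM products")
]
print(result)  # [('widget', 42)]
```

Real EII products do this declaratively and at scale, which is exactly why their join and optimisation performance matters so much.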

To date, much of the focus in this market has been on how you set up and define these queries, and only relatively few vendors (notably Composite Software, Callixa and IBM) have put much emphasis on performance. But performance is a big issue: if you do not get good performance across a spectrum of queries, the technology is not worth investing in in the first place. In fact, I know of one major company (which will remain nameless) that, to its credit, has not introduced a product into this space precisely because it cannot get it to perform.

I am inclined to think that some benchmarking, perhaps based on a distributed TPC-H model, would be of benefit here, at least in the short term. Vendors can currently wave performance figures around willy-nilly without fear of contradiction, and it is all but certain that some federated query projects will fail because the selected product cannot perform. While benchmarks are only a snapshot, in this case they may serve as an initial way of sorting the wheat from the chaff (if only because the chaff will not participate). To that extent I am in favour of them.
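The core of any such benchmark is mundane: run a fixed suite of queries against each candidate and compare latencies. A minimal sketch, in which the query labels are TPC-H-style placeholders and `run_query` is a hypothetical stub standing in for executing a federated query:

```python
import statistics
import time

def run_query(name: str) -> None:
    # Stand-in for dispatching a federated query to the system under test.
    time.sleep(0.001)

def benchmark(queries, repeats=5):
    """Return the median latency, in seconds, for each query."""
    results = {}
    for q in queries:
        timings = []
        for _ in range(repeats):
            start = time.perf_counter()
            run_query(q)
            timings.append(time.perf_counter() - start)
        results[q] = statistics.median(timings)
    return results

latencies = benchmark(["Q1", "Q5", "Q18"])  # TPC-H-style query labels
print({q: round(t, 4) for q, t in latencies.items()})
```

A real distributed TPC-H exercise would of course fix the data, the sources and the rules; the point is only that even a crude, repeatable harness beats uncontradicted vendor figures.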

© IT-Analysis.com
