Larry Ellison, Oracle's chief executive officer, has ants in his pants. Or something.
Instead of waiting until October 14 to roll out the official benchmark tests on Sparc-based servers that show a Sparc cluster can scale farther and cost less than a big Power 595 SMP server from IBM - as the company had been promising in ads that ran back in late August and early September - Oracle cranked out a press release late Sunday night, after day one of the OpenWorld customer events in San Francisco had wound down, announcing the first TPC-C online transaction processing benchmark test on a Sun system in over a decade. (Sun stopped using this OLTP test with the launch of its UltraSparc-III processors and their related "Serengeti" servers, which were announced in September 2001.)
Everyone was expecting a two-digit performance number because of the Oracle advertisement touting the performance of Sparc-based machinery on the TPC-C, which showed "XX" million TPM (short for transactions per minute) against IBM's 6 million TPM on a Power 595 system. What Oracle didn't show in those ads was that there was a decimal point in its two digits, and that the Sparc cluster was able to crank through 7.7 million TPM.
As El Reg anticipated, Oracle slapped together a bunch of Sparc T5440 four-socket servers using the latest Sparc T2+ processors, which run at 1.6 GHz, and the F5100 flash array that we told you about a month and a half ago and which Sun announced at OpenWorld today as the second iteration of its FlashFire storage.
The first FlashFire product was the PCI-Express flash cards in the Exadata V2 x64 cluster that Sun and Oracle announced in mid-September, and which Ellison had been bragging about as being twice as fast as IBM's Power 595 - a claim Oracle could not back up, and for which it was fined $10,000 and issued a cease and desist order by the TPC.
It is interesting to note that Oracle has still not proven that any cluster based on Sun iron offers twice as much oomph as IBM's biggest box, a claim that is at the heart of the $10m Exadata Challenge that Ellison threw down at the Fortune 1000's feet last week. Larry's cluster might indeed be bigger than IBM's SMP, but only by 26.8 per cent, considering IBM's Power 595 was able to hit just under 6.1 million TPM in June 2008.
It is unclear how far the Real Application Clusters clustering extensions for Oracle 11g can push performance, but the word on the street is that after about eight database nodes, it really starts to run out of gas. (There are no benchmarks to prove that, just chatter among server makers who still like to sell SMP boxes for beaucoup money and who therefore have a vested interest in undermining - but of course never publicly - Oracle RAC.) But Oracle needs to kick it up another notch to reach 12 million TPM on the TPC-C test, and it is a bit of a wonder why it didn't do that this week. Perhaps there is an Exadata V2 benchmark test coming out later this week with this kind of scale. And if there is, that will only make people wonder why Sun didn't push the Sparc boxes to the same limits as it did the x64 boxes.
Here's the Oracle and Sun Sparc setup used in the TPC-C test.
It starts with a dozen Sparc T5440 servers, which have a total of 256 threads and a list price of just under $2.8m with the four processors, 512 GB of main memory, two SAS disks, and four Fibre Channel adapters configured in them. Oracle slaps a 25.4 per cent discount on this (you can't see this from Oracle's filing; you have to do the math yourself to figure out the discounts).
Solaris 10 Unix is, of course, free on these boxes, but for some reason the Solaris support, which most definitely is not free, was not added into the TPC-C configuration. (Whoops.) So let me tell you, Larry: for a dozen T5440s with 24x7 premium support for three years, you're talking $5,961.60 per system, or $71,539.20 in total. Yeah, I know this is a rounding error in the system price, but it also explains why Sun is not a profitable company: on a cluster whose hardware alone has a list price of $23.6m, the operating system support contract is worth only three-tenths of a per cent of the cost of the server, storage, and the stuff linking it all together.
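For those who want to check the support math, it is simple enough to work out in a few lines of Python, using only the figures quoted above:

```python
# Solaris support arithmetic: 12 T5440s, three years of 24x7 premium
# support at $5,961.60 per system, set against the $23.6m hardware-only
# list price of the cluster.
per_system = 5_961.60
systems = 12

support_total = per_system * systems
hardware_list = 23.6e6

print(f"support bill: ${support_total:,.2f}")                          # $71,539.20
print(f"share of hardware list: {support_total / hardware_list:.1%}")  # 0.3%
```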
You wanna know why Oracle cut the price of its database software for the Sparc T2+ processors by 33 per cent a few weeks ago? (And only on the T2+ chips used in the T5440, mind you, not on the T2 chips.) Well, it was to shave $3.9m out of the cost of this Sparc cluster. By dropping its core fudge factor on the Sparc T2+ chips from 0.75 times the per core price to 0.5, it was able to get the cost of Oracle 11g, RAC, and its database partitioning tools down to just under $7.9m.
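Because the Oracle license bill scales linearly with the core fudge factor, the size of that price cut can be sanity-checked from the article's own numbers. The per-core list prices are not spelled out here, so this sketch works backwards from the $7.9m total:

```python
# 12 T5440s x 4 sockets x 8 cores per Sparc T2+ chip
cores = 12 * 4 * 8

old_factor = 0.75   # previous core fudge factor for the T2+
new_factor = 0.50   # factor after the price cut a few weeks ago

cost_after_cut = 7.9e6   # Oracle 11g + RAC + partitioning, as cited above
# The license bill scales linearly with the core factor, so the old bill was:
cost_before_cut = cost_after_cut * (old_factor / new_factor)

print(f"licensable cores: {int(cores * new_factor)} (was {int(cores * old_factor)})")
print(f"saving: ${(cost_before_cut - cost_after_cut) / 1e6:.2f}m")
```

That spits out a saving of just under $4m, in line with the $3.9m cited above (the exact figure depends on the precise license total in the filing).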
As is always the case with the TPC-C test, the storage is where the big money is spent, basically because an OLTP test is really just a measure (in theory, at least) of how quickly you can get random data from disks to processors and back. If you are thinking that those nifty F5100 FlashFire arrays are going to improve performance and bang for the buck, well, you got it half right. These boxes are loaded with I/O, but the 60 of them that Sun and Oracle used in the TPC-C test to boost I/O performance and cut back on thermals still cost $9.6m, with Fibre Channel adapters and SATA disk arrays bringing the list price of storage in the cluster up to $12.2m. After discounts of 31.3 per cent, the storage in the cluster - a mere 859.4 TB of it - cost $8.36m.
Various other client hardware running the TPC-C application suite, plus three years of support, drove the overall price tag at list price to $24.7m. Taking off the discounts above, plus a $2m thingie Oracle referred to as the "mandatory e-business discount," the three-year price of the Sparc-F5100 cluster came to just over $18m, a 26.9 per cent discount off list price for the whole shebang. At just over 7.7 million TPM, that gives the Sparc T5440 cluster a rating of $2.34 per TPM. Without that price cut on Oracle software two weeks ago, this configuration would have cost $2.71 per TPM.
That was just not a big enough gap to brag against IBM's Power 595 behemoth, which had a list price of $37.4m including server, storage, software, client drivers, and maintenance, but which had its price tag reduced to $17.1m after a whopping 54.2 per cent discount. At 6,085,166 TPM, this setup delivered $2.81 per TPM.
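The dollars-per-TPM figures for the two setups fall out of a one-line division over the discounted three-year prices. Here is the check using the rounded numbers above (the official TPC filings carry more precision):

```python
def dollars_per_tpm(three_year_price, tpm):
    # Price/performance the way the TPC reports it: total system cost
    # over three years, divided by transactions per minute.
    return three_year_price / tpm

sun = dollars_per_tpm(18.0e6, 7.7e6)        # Sparc T5440 cluster, after discounts
ibm = dollars_per_tpm(17.1e6, 6_085_166)    # Power 595, after its 54.2% discount

print(f"Sun: ${sun:.2f}/TPM vs IBM: ${ibm:.2f}/TPM")  # $2.34 vs $2.81
```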
The Oracle/Sun TPC-C test results were made public by Oracle last night, but the full benchmark report is not yet available online. The report will eventually be posted on the TPC's website, where all TPC-C results live.
In theory, this should be setting up an epic battle between the new thinking of horizontal scaling (well, really more like diagonal scaling) through the use of clustered servers and Oracle RAC and the old thinking of tightly coupled processors using symmetric multiprocessing (SMP) inside of a single box with a shared memory space for an operating system and its database.
But in practice, IBM has also announced its own clustering technologies for online transaction processing - that would be the DB2 PureScale extensions for DB2 running on AIX atop Power-based servers, announced last week and shipping in December, and a similar AIX-Power cluster called the Smart Analytics System, announced in late July and tuned for data warehousing and analytics on mountains of data.
Oracle is obviously pretty impressed with itself and the Sun hardware, and was touting that its setup consumed one-quarter the energy of IBM's box, even though it did more work. And Oracle was quick to point out that response times were 16 times faster on the Oracle/Sun setup than on the IBM box. Well, no kidding. No one had enterprise-class flash drives in the summer of 2008, when IBM was putting together its test, so this is an apples-to-pears comparison. IBM could, and probably should, get a cluster of Power 550 machines tested using its own solid state disks and DB2 PureScale clustering software. And it should put some SSDs in the Power 595 to see how that lowers costs and improves response times.
Your move, IBM. ®