IBM ships SSDs for Power Systems

Lots of I/O and I owe

IBM today begins selling its first solid state disks for its Power Systems boxes, the machines it uses to attack the Unix, Linux, and OS/400 installed bases.

The SSDs are not the same units it announced in March for its System x and BladeCenter x64-based servers. IBM is taking a 128 GB Zeus-IOPS flash drive from STEC, with which it inked a partnership earlier this month, and formatting it down to 69 GB. That overprovisioning gives IBM the headroom to do "wear levelling" on the flash memory cells in the SSD, spreading writes across the spare cells and remapping data away from cells as they wear out, thereby extending the life of the overall drive significantly.
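
To picture what that spare capacity buys, here is a minimal sketch in Python of the overprovisioning idea: a toy flash translation layer that exposes 69 "blocks" to the host but steers every write to the least-worn of 128 raw blocks. The block counts and the placement policy are invented for illustration - this is not STEC's or IBM's actual firmware logic.

    # Toy flash translation layer: 128 raw blocks, only 69 exposed to the host.
    RAW_BLOCKS = 128
    EXPOSED_BLOCKS = 69

    class ToyFTL:
        def __init__(self):
            self.erase_counts = [0] * RAW_BLOCKS   # wear per physical block
            self.mapping = {}                      # logical block -> physical block

        def write(self, logical_block):
            # Steer each write to the least-worn physical block that is free, so
            # wear spreads over all 128 raw blocks even though the host only
            # ever addresses 69 of them.
            in_use = set(self.mapping.values())
            free = [p for p in range(RAW_BLOCKS) if p not in in_use]
            target = min(free, key=lambda p: self.erase_counts[p])
            self.erase_counts[target] += 1
            self.mapping[logical_block] = target

    ftl = ToyFTL()
    for i in range(100_000):
        ftl.write(i % EXPOSED_BLOCKS)              # hammer the same 69 logical blocks
    print(min(ftl.erase_counts), max(ftl.erase_counts))   # wear stays roughly even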

The IBM SSD, which comes in a 2.5-inch form factor, delivers about 220 MB/sec of sustained throughput on reads and about 122 MB/sec on writes, and can perform about 28,000 I/O operations per second (IOPS) on random transactional processing. The SSD has a 3 Gb/sec SAS interface and an average access time of 20 to 120 microseconds, depending on where the data is located on the SSD. According to IBM's specs, the SSD has about 87 times the I/O operations per second of a 15K RPM SAS drive and yet consumes about one-fifth of the power.
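
That 87X figure is easy to sanity-check. Assuming a 15K RPM SAS drive is good for roughly 320 random IOPS - a common rule of thumb, not a number from IBM's spec sheet - the arithmetic lines up:

    ssd_iops = 28_000        # IBM's figure for the Zeus-IOPS drive
    sas_15k_iops = 320       # assumed rule-of-thumb figure for a 15K RPM SAS drive
    print(ssd_iops / sas_15k_iops)   # 87.5, in line with IBM's "about 87 times" claim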

You can see why server makers would, on the face of it, be thrilled about SSDs. But not so fast on that conclusion. Up until now, server makers selling systems for online transaction processing have been able to sell customers large arrays of disks and controllers to back-end their systems, and these arrays can be very expensive compared to the cost of the server itself. In big OLTP environments, storage can account for half the cost of the complete system.

So boosting I/O performance using SSDs is going to cut into server makers' revenue streams bigtime. You can see now why IBM is charging customers using its entry Power 520 and 550 rack servers and BladeCenter JS23 and JS43 blade servers $10,000 a pop for a single SSD, and why those using its larger Power 560 and 570 systems have to pay $13,235 for a single drive. As far as I can ascertain, the older JS12 and JS22 blade servers and the high-end Power 595 machines are not being given support for the SSD, but there is no technical reason for this.

Compare those SSD prices to a 139 GB (for i 6.1) or 146 GB (for AIX and Linux) 15K RPM SAS drive on Power Systems, which has a list price of $498. So IBM is charging 20 to 26.5 times the cost of a disk for its SSD, which, by IBM's own specs, delivers 87 times the I/O operations per second of that disk.

That sounds like a reasonable deal until you compare it to the 32 GB X-25E SATA-style flash drive made by Intel and put into servers by Sun Microsystems back in March. Sun resells Intel's X-25E drive for $1,199 - about twice Intel's list price for the drive. And while the X-25E is a SATA, not SAS, SSD and can handle only 3,300 IOPS on writes (compared to 35,000 IOPS on reads), the question is whether more capacity, wear levelling, and better write performance are worth eight to eleven times the dough.
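
The ratios quoted above fall straight out of the list prices, and a quick per-gigabyte comparison derived from the same numbers makes the gap even starker:

    ssd_entry, ssd_big = 10_000, 13_235   # IBM SSD price on Power 520/550 vs 560/570
    sas_disk = 498                        # 15K RPM SAS drive on Power Systems
    sun_x25e = 1_199                      # Sun's price for Intel's 32 GB X-25E

    print(ssd_entry / sas_disk, ssd_big / sas_disk)   # ~20.1x and ~26.6x a SAS disk
    print(ssd_entry / sun_x25e, ssd_big / sun_x25e)   # ~8.3x and ~11x Sun's SSD
    print(ssd_entry / 69, sun_x25e / 32)              # ~$145/GB vs ~$37/GB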

Of course, IBM is not giving Power Systems customers that kind of choice. The Zeus-IOPS SSD is the only option.

The SSDs that IBM is shipping today on its Power Systems are supported on Big Blue's own AIX 5.3 and 6.1 Unix variants as well as its proprietary i 6.1 environment (but not on its predecessor, i5/OS V5R4, which also runs on Power5 and Power6 iron). The SSDs can also run on Power Systems configured with Novell's SUSE Linux Enterprise Server 10 SP2 or later, or Red Hat's Enterprise Linux 4.7 or later, or 5.2 or later.

None of these operating systems has features that automatically move frequently used data and programs from disk drives to SSDs, so IBM has created a program called SSD Data Balancer that lets system administrators analyze their running workloads and then move data to the SSDs to boost system performance. Over time, IBM expects to put features in AIX, i, and Linux that do this work automatically, tuning data sets to take advantage of the SSDs in systems.
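
For a sense of what that analyze-and-move step involves, here is a rough sketch of the general idea - rank datasets by how hard they are hit, then fill the SSD with the hottest ones. It is an illustration of the concept only, not IBM's SSD Data Balancer or its interface, and the workload numbers are made up:

    def pick_for_ssd(datasets, ssd_capacity_gb):
        """datasets: (name, size_gb, reads_per_hour) tuples from a workload trace."""
        placed, used = [], 0.0
        # Hottest data first, measured as reads per gigabyte so small, busy datasets win.
        for name, size, reads in sorted(datasets, key=lambda d: d[2] / d[1], reverse=True):
            if used + size <= ssd_capacity_gb:
                placed.append(name)
                used += size
        return placed

    workload = [("orders_index", 5, 90_000), ("orders_table", 40, 30_000),
                ("audit_log", 60, 500), ("archive", 120, 50)]
    print(pick_for_ssd(workload, 69))   # the busiest datasets fit on one 69 GB SSD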

The i operating system, of course, already does a lot of hierarchical storage management and will be the first to get such features, followed by AIX and Linux. The microcode inside IBM's disk arrays performs this function automatically, and mainframes that have SSDs installed on DS8000 disk arrays from IBM can move datasets to SSDs transparently using IBM's Data Facility Storage Management Subsystem (DFSMS). This mainframe hierarchical storage management code is older than dirt, and is just as useful to a server farm as real dirt is to a real farm. ®
