
Pillar embraces Intel SSDs

Chipzilla's June debut

Marking Intel's first appearance in the enterprise storage array solid state disk (SSD) market, Pillar Data is launching SSD enclosures for its Axiom arrays in June.

Axiom is Pillar's array line, offering different levels of storage service through an application-aware quality of service (QoS) system. The arrays use storage bricks as drive enclosures. June will see the arrival of an SSD brick holding twelve 64GB single-level cell Intel X25-E SATA-interface SSDs, providing 768GB of capacity accessed through dual RAID controllers. Axiom 600, 600MC and 500 arrays can have up to four of these SSD bricks per controller, for a total of 3.072TB.
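
For anyone checking the sums, here is a quick sketch using only the figures Pillar quotes above (per-drive capacity, drives per brick, bricks per controller):

```python
# Capacity arithmetic for Pillar's SSD brick, using the figures quoted above.
SSD_CAPACITY_GB = 64        # Intel X25-E, single-level cell
SSDS_PER_BRICK = 12
BRICKS_PER_CONTROLLER = 4   # Axiom 600, 600MC and 500

brick_capacity_gb = SSD_CAPACITY_GB * SSDS_PER_BRICK               # 768 GB per brick
total_capacity_tb = brick_capacity_gb * BRICKS_PER_CONTROLLER / 1000

print(f"Per brick: {brick_capacity_gb} GB")        # Per brick: 768 GB
print(f"Per controller: {total_capacity_tb} TB")   # Per controller: 3.072 TB
```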

Previously, STEC SSDs have been used in storage arrays from EMC and HDS, for example. Pillar CEO Mike Workman says STEC's SSD technology "is good stuff (but) it's expensive and Pillar wants to maximise cost-effectiveness per dollar." He says the advantage of SSDs is not sheer bandwidth but microsecond-class latency. The millisecond-class latency and bandwidth of SATA spindles are fine, and more cost-effective, for video streaming; you would reserve SSDs for applications needing blistering latency.

There are five bands in the Pillar QoS system, with SSDs occupying the premium band and successive levels of disk technology filling the high, medium, low and archive/WORM bands. The QoS system guarantees that high-priority filesystems and LUNs get SSD I/O service from the shared Axiom storage pool, and ensures that low-priority I/O requests cannot "steal" SSD resources from the system. None of this required any change to the Axiom array's architecture, and application-aware storage templates will be able to accommodate the SSD bricks.
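
Pillar hasn't published how the band-to-tier routing works internally; as a purely illustrative sketch of the idea, the band names below come from the article while the routing and "no stealing" rule are hypothetical:

```python
# Hypothetical sketch of priority-band routing; not Pillar's actual implementation.
# Band names are from the article; the routing and admission logic are illustrative only.
BANDS = ["premium", "high", "medium", "low", "archive/WORM"]  # premium = SSD brick

def tier_for(band: str) -> str:
    """Premium-band filesystems/LUNs land on the SSD brick; the rest stay on disk bricks."""
    return "SSD brick" if band == "premium" else "disk brick"

def may_use_ssd(band: str) -> bool:
    """Low-priority I/O is never allowed to consume SSD resources."""
    return band == "premium"

for band in BANDS:
    print(f"{band:>12} -> {tier_for(band)}, SSD access: {may_use_ssd(band)}")
```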

Read IOPS per SSD brick are 16 times higher than for a SATA brick and five times higher than for a Fibre Channel drive brick. Write IOPS are 12 times higher than for a SATA brick and four times higher than for an FC brick.
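
Pillar gives only the multipliers, not baseline IOPS figures, but the ratios also imply how an FC brick stacks up against a SATA brick:

```python
# Derived purely from Pillar's stated multipliers; no baseline IOPS figures were quoted.
ssd_vs_sata_read, ssd_vs_fc_read = 16, 5
ssd_vs_sata_write, ssd_vs_fc_write = 12, 4

fc_vs_sata_read = ssd_vs_sata_read / ssd_vs_fc_read     # FC brick reads ~3.2x a SATA brick
fc_vs_sata_write = ssd_vs_sata_write / ssd_vs_fc_write  # FC brick writes ~3x a SATA brick
print(fc_vs_sata_read, fc_vs_sata_write)
```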

According to Pillar, small-block and random-read operations benefit most from an SSD power infusion, with examples being:

- Internet Retail: “List all of the DVDs starring Al Pacino”
- Search Engine: “Show me all references to Mike Workman’s blog”
- Business Intelligence: “List all customers in 94070 zip code”
- Database indexing operations.

Workman says he agrees in theory with people who say you can solve most latency and IOPS limitations by throwing disk spindles at the problem and striping across all of them, as 3PAR does, for example. But in many instances this is a preposterous idea: data centres often don't have the space for hundreds of disk spindles, and it makes more sense to add an SSD performance tier to an array, providing the low latency needed by specific applications far more space-efficiently.
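
As a rough back-of-the-envelope illustration of the spindle-count argument, the per-device IOPS figures below are assumptions for illustration only, not Pillar or Intel specifications:

```python
import math

# Rough illustration of "throwing spindles at the problem" versus a small SSD tier.
# Per-device random-read IOPS figures are illustrative assumptions only.
TARGET_RANDOM_READ_IOPS = 100_000
FC_15K_SPINDLE_IOPS = 180      # assumed for a 15K Fibre Channel spindle
SLC_SSD_READ_IOPS = 35_000     # assumed for an SLC SSD such as the X25-E

spindles_needed = math.ceil(TARGET_RANDOM_READ_IOPS / FC_15K_SPINDLE_IOPS)  # 556 spindles
ssds_needed = math.ceil(TARGET_RANDOM_READ_IOPS / SLC_SSD_READ_IOPS)        # 3 SSDs
print(spindles_needed, ssds_needed)
```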

There is a power argument in favour of SSDs too. Comparing striping across a hundred drives versus a single shelf of SSDs, Workman said: "I think our solution is about 20 times more power efficient than using disk."
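That 20x figure is plausible on a back-of-the-envelope basis; the per-device wattage figures below are assumptions for illustration, not vendor specifications:

```python
# Rough check of the "about 20 times more power efficient" claim.
# Per-device wattage figures are illustrative assumptions only.
FC_DRIVE_WATTS = 15          # assumed for a 15K Fibre Channel drive
SSD_WATTS = 2.5              # assumed active power for an SLC SATA SSD
SHELF_OVERHEAD_WATTS = 30    # assumed controller/enclosure overhead

hundred_drives_watts = 100 * FC_DRIVE_WATTS                  # 1500 W
ssd_shelf_watts = 12 * SSD_WATTS + SHELF_OVERHEAD_WATTS      # 60 W
print(hundred_drives_watts / ssd_shelf_watts)                # roughly 25x, in the same ballpark
```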

According to Pillar's worldwide marketing VP, Bob Maness, the total cost of ownership of an SSD brick can be less than that of four Fibre Channel disk bricks over time.

Pillar thinks it can drive the dollars per IOPS of SSDs down to half that of traditional Fibre Channel drives. It also cites an 85 per cent reduction in power and cooling costs versus Fibre Channel drives, and reckons an SSD has a 42 per cent better mean time between failures (MTBF) rating than a 450GB Fibre Channel drive.
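
For scale, the 42 per cent MTBF claim is roughly what falls out of typical quoted figures; the numbers below are assumptions for illustration, not taken from Pillar's material:

```python
# Rough check of the MTBF comparison. Both MTBF figures are assumptions for
# illustration, not drawn from Pillar's announcement.
SSD_MTBF_HOURS = 2_000_000       # assumed, typical for an SLC enterprise SSD
FC_450GB_MTBF_HOURS = 1_400_000  # assumed, typical for a 15K 450GB FC drive

improvement = SSD_MTBF_HOURS / FC_450GB_MTBF_HOURS - 1
print(f"{improvement:.0%}")      # ~43%, close to the claimed 42 per cent
```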

Workman said SSD bricks were the first iteration of SSD technology use at Pillar, and that level-2 cache will probably follow, the idea being that you apply SSD technology wherever it best solves cost-per-IOPS problems.

Beta testing of Pillar's SSD bricks will run from April through May with general availability in June. Pricing will be announced then. ®
