Analysis: The storage heartland is under attack from six armies, and each one aims to wipe out some or all of the core networked file and block arrays that form the bulk of the installed storage array base.
These invading forces are exploiting weaknesses in block and file arrays centred on data access latency; expense; limitations in capacity and performance scaling; the increasing unsuitability of RAID protection as disk capacity and array disk numbers rise; and the inability of a general-purpose array to be optimised for specific use cases.
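The RAID point is easily illustrated with back-of-envelope arithmetic: rebuild time grows roughly linearly with disk capacity, while sustained disk throughput has barely moved. The figures below are illustrative assumptions, not numbers from this article:

```python
# Rough RAID rebuild-time estimate. The 150 MB/s sustained rebuild
# rate is an assumed, typical figure for a nearline disk; real rebuilds
# are often slower because the array keeps serving I/O at the same time.

def rebuild_hours(capacity_tb: float, throughput_mb_s: float = 150.0) -> float:
    """Hours needed to re-write one failed disk at a sustained rate."""
    capacity_mb = capacity_tb * 1_000_000  # decimal TB -> MB
    return capacity_mb / throughput_mb_s / 3600

for tb in (1, 4, 10):
    print(f"{tb} TB disk: ~{rebuild_hours(tb):.1f} hours to rebuild")
```

A 10 TB disk takes roughly ten times as long to rebuild as a 1 TB one, and the array runs degraded (and exposed to a second failure) for that whole window, which is why bigger disks and bigger disk counts push buyers towards erasure-coded scale-out stores instead.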
There are six technology invaders, each aiming to suck data off the legacy/installed base arrays and destroy their business case:
- All-flash arrays for lower latency — such as Pure, SolidFire, XtremIO, FlashSystem, FlashRay, etc.
- Hybrid arrays for lower latency and better value for money in newer use cases, such as VDI (Nimble Storage, Tegile, Tintri).
- Go-faster HPC-style arrays — such as DDN, Panasas, Seagate/Xyratex, SGI, etc.
- Hyper scale-out stores for vastly increased scale and avoidance of RAID protection schemes — Amplidata, Caringo, Cleversafe, Scality, Atmos, etc.
- Server-side storage in converged/hyper-converged systems for lower latency and simplified purchase and management — such as Nutanix, Maxta, ScaleIO, Simplivity, VSAN.
- The public cloud for significantly lower cost and better scalability — such as Amazon, Azure, Google, and a few more.
The all-flash arrays, hybrid arrays, HPC-style arrays and hyper scale-out systems can all be classed as the shared/networked array re-invented; the same dog-and-pony show but with stronger dogs and bigger ponies, maybe even herds of ponies.
The other two invaders want to replace the core arrays with something completely different. Converged and hyper-converged server/storage/network systems want to provide a virtual shared resource from each component node's direct-attached storage (DAS), aggregating it together to provide a virtual SAN with, sometimes, a NAS head for file access.
The cloudy companies want to store all data in their remote data centres (the cloud) and have computing done there too, constituting a clear and present danger to all enterprise data centre server and storage suppliers, which includes the converged/hyper-converged system startups as well.
The positioning of all these ambitious attackers vis-a-vis legacy arrays can be summarised in a chart:
Six NAS/block heartland attacking technologies
The storage heartland arrays from suppliers such as Dell, EMC, Fujitsu, HDS, HP, IBM and NetApp have extended their functionality in the face of these threats, with:
- All-flash configurations and new all-flash product lines
- Hybrid flash/disk configs
- Scale-out through clustering and parallel file system/object storage product lines
- And most emphatically, hybrid cloud systems featuring cloud-style storage service for users and links to/integration with the terrible trio of public cloud suppliers — Amazon, Azure and Google.
Invading tech weaknesses
So far the invaders haven't felt any impact on their business growth from these reactive actions, and legacy array sales growth has faltered. Not that the invaders don't have their own weaknesses. For example:
- All-flash arrays — immature data services, raw cost, separate silo management (except HDS and HP), justified for primary data only
- Hybrid arrays — separate silo management, immature data services
- Go-faster HPC-style arrays — high cost, complexity, multiple SW technologies
- Hyper scale-out (object) stores — limited performance
- Server-side storage converged/hyper-converged systems — unproven scalability of capacity and storage performance, immature data services
- The public cloud — data access latency, large file transfer costs, security, reliability
Let's take a 10-year view and ask where this is going.
There won't be one replacement for legacy storage arrays. Instead each of the six invading technologies will get their piece of the pie, but not all of it. A substantial chunk will be left for the legacy array suppliers, but not a big enough piece to support all the products pitched at it by the suppliers listed above.
HP and NetApp are in good shape as they each essentially have one unified file/block product line: 3PAR StoreServ and ONTAP FAS respectively.
Dell is moving that way by combining the EqualLogic and Compellent lines into a single product set. IBM has a problem: too many overlapping products. Storwize will probably emerge as the main product, with the DS8000 safeguarded by mainframe customers. XIV and DS-whatever-else disk arrays will probably fall away over time.
EMC has a problem with VMAX, VNX and Isilon all being classic file/block arrays. It's conceivable that VMAX and VNX will merge and become scale-out in nature, eroding the role of Isilon, unless EMC sends it towards the hyper-scale-out/object area.
HDS will likewise have to merge VSP and HUS. Fujitsu will have a tough time with its ETERNUS arrays but may, we stress may, eventually walk away from the classic file/block array space, concentrating on the newer areas, such as hyper-scale storage with its latest Ceph-using CD 10000 array.
All this is speculation of course, and in-house speculation at that, coming from the Vulture staring at its own navel through the feathers. What a prospect. ®