You the expert

We talked to four Reg-reading storage gurus about where and when to use 3.5-inch hard disk drives and when to use 2.5-inch ones. Each admits a role for the smaller form factor drives – with one view being that they could take over completely as their capacities increase. But not one of our quartet thinks that 2.5-inch drive use will decrease.
This is what they had to say:
Mikel Kirk - Special Projects Co-ordinator and storage consultant
For the most part, people I know have used 2.5-inch HDDs (SFF) as the boot O/S drives for servers for the last couple of years – the 6Gbit/s SAS ones now. Usually a redundant pair (RAID 1) for reliability, 10K or 15K rpm, and a smallish size like 146GB. 2U servers come with up to 25 of these, but people mostly use just one pair and put their data on external storage – usually a SAN, except for SMBs, where they use an appliance. The servers are still available with 3.5-inch (LFF) drives – up to 14 for a 2U box – and I see these in the SMB space where they only have one or two servers and want to put all the storage in the box. Customers like to standardise on the SFF drives because they're more flexible – you can still get the spindle count up on a 1U box if you need to. And of course the advantage of having a standard size is that it reduces the cost of the cold spare pool.
Most of the people I deal with now are doing or have done server consolidation, so the boot drives in the server are fairly irrelevant – they could boot the VM host from SAN or SD card. The drives are there for tradition, and because that's easier to set up than anything else. They have the VM host boot on them and nothing else. For these people the physical drive size is irrelevant. The cheapest drive will do, because the VM host typically loads into RAM once at boot and the drives are idle the rest of the time, except mostly for VM logs. That may be selection bias, so I wouldn't read too much of a general industry trend into it yet – I work for a company that's been big in both VMware and Citrix consulting for many years, and our customers may be selecting us for that reason.
Big drives or small, you still have to match performance specs with the price data to hit the sweet spot on any given day for a particular solution. You have to look at the bottlenecks of the whole system to make sure it will meet its design goals. It's usually obvious, but it's the exceptions that make the checking worthwhile.
On desktop machines, the LFF SATA drive still rules the roost, one per system. I'm working on a project today with a thousand of those. The enterprise is selecting the smallest available drive because it doesn't want people storing stuff on it anyway, and because, at the other end of the lifecycle, large drives take a long time to wipe. Unfortunately for them, the minimum size is apparently 320GB. They might benefit from moving to SFF or SSD just to get the wiping cost down. I'd like to see more SMBs adopt hybrid drives in the next year – they can boost productivity and the cost is low – but they're not yet available as an option from the big system vendors, and they're new and relatively untried. The hybrid drives so far aren't working out for RAID.
Speaking of RAID, there's a trap with the really huge and slow 3TB SATA drives. The streaming write bandwidth on these drives is about 120MB/s, which makes it almost seven hours to write the whole drive. In practice that equates to over a day, maybe several, to rebuild an array. That's too long. A lot can go wrong in a day, and a larger array nearing end of life might enter a semi-permanent rebuild state where drives fail as often as the array rebuilds – or worse: rebuild stress induces a multi-drive failure during the rebuild, resulting in data loss. This situation will only get worse as drive capacities continue to grow faster than streaming write bandwidth. These drives are at a sweet density for backup targets, though. They're the new tape. But as always with backups: inspect what you expect. An untested backup isn't a backup at all – it's just wasted time and money.
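The arithmetic here is easy to sketch. A minimal back-of-envelope calculation, assuming the 3TB/120MB/s figures quoted above; the 25 per cent online-rebuild efficiency is an illustrative assumption, not a measured or vendor figure:

```python
# Estimate how long a full-drive write (and hence a best-case rebuild)
# takes at a given sustained rate. The "efficiency" factor models a
# rebuild throttled by ongoing array I/O - an assumption for illustration.

def full_write_hours(capacity_tb, write_mb_s, efficiency=1.0):
    """Hours needed to write every sector at a sustained rate."""
    capacity_mb = capacity_tb * 1_000_000  # decimal TB, as drive vendors count
    seconds = capacity_mb / (write_mb_s * efficiency)
    return seconds / 3600.0

print(full_write_hours(3, 120))                   # ~6.9 hours, best case
print(full_write_hours(3, 120, efficiency=0.25))  # ~27.8 hours - over a day
```

Even a dedicated, full-speed write takes the best part of seven hours; throttle the rebuild to a quarter of the drive's bandwidth so the array can keep serving I/O and you are already past the one-day mark.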
For drives, if you want the most storage for the dollar today in business class, it's hard to beat the 2TB LFF SAS drives. The lower-end SAS drives cost about the same as server SATA drives from the big vendors. In an odd aside, I'm also seeing big SATA appliances like Drobo in the enterprise where you normally wouldn't expect them, and more home-brew Linux file servers and iSCSI solutions than ever before. School districts, where dollars are rare and tolerance for risk is high, seem to be taking to OpenFiler. People are wising up to the idea that highly redundant cheap stuff can be more cost-effective than, and just as reliable as, "Enterprise class" gear. Of course this stuff is external to the server in a dedicated filer or SAN (or a server purposed as such, which is the same thing really). Don't get me wrong: "more than ever before" is not anywhere close to "most of the time". It's changing, but it's not close to taking over yet.
Mikel Kirk is currently a PC deployment project manager for a 3,000+ unit desktop hardware refresh with a Windows 7 migration. When that's complete, it's back to speccing and servicing servers, storage and networking. He is also on a multi-year 10,000+ seat VDI project for a local school district – quite large for a VDI deployment at this point in time. He says: "Disclaimer: I work for a company. They sell stuff, including some of the products mentioned here. Their opinion is not mine, and mine is not theirs."
Chris Evans - Independent storage consultant
The discussion on when 2.5-inch hard drives will usurp their ubiquitous 3.5-inch swarm of brothers has been raging for some time. The 3.5-inch form factor was a standard adopted many years ago in enterprise, mid-range and consumer devices. The use of 2.5-inch drives has obvious appeal in areas where power and space are an issue; laptops are a good example. We've also seen 2.5-inch drives implemented in rack-mounted servers, again for the space and power benefits. But why has adoption in enterprise storage arrays taken so long?
An argument put forward to me by EMC some years ago was the lack of a dual supplier: EMC wanted to ensure drives were available from multiple sources in the event of quality or supply issues. This no longer holds water, for two reasons: firstly, EMC happily provides customers with solid state drives from a sole supplier (STEC), and secondly, there are multiple suppliers of 2.5-inch drives in the market today. Perhaps there's an issue with the drive characteristics? This too is no longer a problem; 2.5-inch drives are as reliable as their 3.5-inch counterparts. They are as performant as 3.5-inch drives (in both throughput and interface speeds) and in fact have other benefits in power, space and cooling.
A typical 2.5-inch 10K drive requires around 8 watts in typical operation, a 3.5-inch drive around 10 watts. This may not seem like a huge saving, but measured across a large array it becomes significant. Power savings also imply cooling savings, and these are achieved on two further fronts. First, for the same volume of space more drives can be deployed, providing better airflow; second, the surface-area-to-volume ratio is better for 2.5-inch drives (4.66 against 2.84), meaning better heat dissipation characteristics.
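These figures are easy to sanity-check. The sketch below scales the quoted per-drive wattages across a hypothetical 480-drive array (the array size is my assumption) and recomputes the surface-area-to-volume ratios from nominal form-factor dimensions in inches, which lands close to, though not exactly on, the 4.66 and 2.84 figures:

```python
# Power saving from the per-drive figures quoted above (~8W SFF vs
# ~10W LFF), across a hypothetical 480-drive array.
SFF_WATTS, LFF_WATTS, DRIVES = 8, 10, 480
print(DRIVES * (LFF_WATTS - SFF_WATTS))  # 960W saved, before cooling overhead

# Surface-area-to-volume ratio for a rectangular box, fed with nominal
# drive dimensions in inches (exact sizes vary by model and height).
def sa_to_vol(l, w, h):
    area = 2 * (l * w + l * h + w * h)
    return area / (l * w * h)

sff = sa_to_vol(3.94, 2.75, 0.59)    # 2.5-inch, 15mm-high enterprise drive
lff = sa_to_vol(5.75, 4.00, 1.03)    # 3.5-inch drive
print(round(sff, 2), round(lff, 2))  # roughly 4.6 vs 2.8
```

A kilowatt saved per large array, and roughly 1.65 times the surface area per unit volume for the smaller drive, which is where the dissipation advantage comes from.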
So overall the 2.5-inch drive holds up well – until we start to discuss numbers of drives. For the same volume of space, more 2.5-inch drives can be deployed. This means more physical spindles across which to spread I/O, and of course the counter-argument that more drives mean more failures. However, this isn't necessarily a problem. Hard drives still continue to be configured in RAID groups, and data loss is only an issue if two drives fail in the same RAID group. Where drive numbers do count is in cost. Deploying more drives in an array obviously costs more money, and this premium arose because more 2.5-inch drives were required to deliver the same physical capacity as 3.5-inch drives in an array. A look at the drive models available today shows that for 15K drives, 2.5-inch models still lag behind 3.5-inch drives. However, the sweet spot for capacity is 10K, with both 2.5-inch and 3.5-inch models supporting up to 600GB. This means that for the same capacity, 2.5-inch drives can be deployed in less space, require less power and cooling, and provide the same levels of availability and performance.
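Combining the shared 600GB capacity point with the 2U chassis figures quoted earlier in the piece (up to 25 SFF or 14 LFF drives per 2U) gives a feel for the trade-off. The per-drive wattages are the ones quoted above; treat this as a sketch rather than a sizing exercise:

```python
# Same-chassis comparison at the 600GB/10K capacity point both form
# factors share, using the 2U drive counts quoted earlier: 25 SFF vs 14 LFF.
DRIVE_TB = 0.6

def summarise(name, drives_per_2u, watts_per_drive):
    capacity = drives_per_2u * DRIVE_TB
    watts = drives_per_2u * watts_per_drive
    print(f"{name}: {drives_per_2u} spindles, {capacity:.1f}TB, {watts}W per 2U")
    return capacity

sff_tb = summarise("2.5-inch", 25, 8)   # 15.0TB per 2U
lff_tb = summarise("3.5-inch", 14, 10)  # 8.4TB per 2U
print(f"SFF packs {sff_tb / lff_tb:.1f}x the capacity per 2U")
```

With both form factors topping out at the same per-drive capacity, the smaller drive wins on every axis at once: nearly twice the capacity and almost twice the spindle count per rack unit, at a lower power draw per drive.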
I believe we are seeing the implementation of 2.5-inch drives by certain storage array vendors due to two factors: the cost/benefit ratio for 2.5-inch drives has been reached, and arrays are being implemented with advanced tiering technology that enables these drives to be used to the fullest extent. As a consequence, it is no longer necessary to purchase large numbers of faster (and high capacity) 15K drives, but a blended approach of SSD, SAS and SATA can provide effective performance at a lower cost.
I believe as the introduction of advanced tiering technology takes hold, we will see 3.5-inch drives relegated to providing mass, SATA-based storage, with 2.5-inch SSD and HDD drives delivering the bulk of I/O to the enterprise.
Chris M Evans is a founding director of Langton Blue Ltd. He has over 22 years' experience in IT, mostly as an independent consultant to large organisations. Chris's blogged musings on storage and virtualisation can be found at www.thestoragearchitect.com.
Evan Unrue - Product Specialist at Magirus UK
Apart from the obvious space-saving benefits of a smaller form factor – which allows a higher density of disks in less physical footprint when building RAID arrays – 2.5-inch drives do appear to have some performance gains over their bigger 3.5-inch brothers, at least on manufacturers' published specifications.
Full-stroke and track-to-track seek times on 2.5-inch drives come out at nearly half those of 3.5-inch drives in most cases, which one can only assume is down to the smaller platter and actuator arm. This clearly benefits systems with random access patterns to disk. That said, there is an argument that 3.5-inch disks could see higher performance in systems with highly sequential access patterns, thanks to the age-old marvel that is zoned bit recording – since, pragmatically speaking, we should have more sectors on the outer tracks of a 3.5-inch drive than of a 2.5-inch drive, due to the larger platter.
With 3.5-inch disks, more sectors pass under the read/write heads per revolution, resulting in a higher data read/write rate. However, this benefit only appears when data is read or written in a purely sequential fashion, and that typically means very specific use cases such as media streaming, backup to disk and the like (not what we would consider a general workload). Essentially, 2.5-inch drives see slower transfer speeds than 3.5-inch drives, but faster access times.
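A crude service-time model shows why the seek advantage matters more than raw transfer rate for small random I/O. All the drive numbers below are illustrative assumptions, not manufacturer specifications:

```python
# Per-operation service time for a small random read:
# seek + average rotational latency (half a revolution) + transfer.
def random_iops(seek_ms, rpm, xfer_mb_s, io_kb=4):
    rot_ms = (60_000 / rpm) / 2               # average half-revolution wait
    xfer_ms = (io_kb / 1024) / xfer_mb_s * 1000
    return 1000 / (seek_ms + rot_ms + xfer_ms)

# Hypothetical 15K drives: LFF with the faster transfer rate,
# SFF with the shorter seek.
lff = random_iops(seek_ms=3.4, rpm=15000, xfer_mb_s=160)
sff = random_iops(seek_ms=2.0, rpm=15000, xfer_mb_s=120)
print(round(lff), round(sff))  # the shorter-seeking SFF drive wins
```

For 4KB random reads the transfer term is a few dozen microseconds against milliseconds of seek and rotation, so the shorter-seeking drive delivers more IOPS despite its lower transfer rate; for a long sequential stream, seek and latency amortise away and the 3.5-inch drive's transfer rate dominates instead.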
With all that being said, we are in an age of miniaturisation, and things that come in smaller packages are typically less power-hungry. From what I've seen, a 2.5-inch drive will typically consume 40 per cent less power than a 3.5-inch drive of comparable rotational speed.
Looking at the enterprise storage vendors, one of the things that appears to have let 3.5-inch drives hang in there is the lingering of Fibre Channel drives in the market, which I don't believe helped the introduction of the smaller form factor into the mainstream. Now that 6Gbit/s SAS drives are out, it looks like Fibre Channel drives, which only support a 4Gbit/s interface speed, may get the chop, clearing the way for SAS and SATA drives across the board (one would think), and potentially allowing for a standardised smaller form factor in the enterprise storage arena.
Claus Egge - Storage Consultant
There is no real dilemma between 2.5-inch hard disk drives (HDDs) and 3.5-inch HDDs, since a tiered and well-balanced storage pool will accommodate both, as well as solid state drives (SSDs). A detailed analysis weighing up all the factors of cost, performance, energy consumption, physical footprint and reliability would be the ideal way to choose a storage hierarchy for a particular data storage requirement. But in reality, customers' choice of disk arrays owes a lot to historical decisions.
This can be an inhibiting factor in the acceptance of 2.5-inch HDD shelves and arrays. It will take time for customers affected by it to understand and accept the differentiated attributes of 2.5-inch drives.
Small is often good in disk drive engineering, where vibration, power consumption and overall I/O speeds all benefit from the use of 2.5-inch HDDs. Having more spindles in a shelf means more read/write heads can be simultaneously active. And as the bits-per-square-inch number keeps on growing, the question becomes: what is the best-sized basket for my storage eggs?
Some people worry about their HDDs being too big while others dislike large fragmented clusters of smaller HDDs. The 2.5-inch form factor will grow in enterprise arrays exactly because its spin speeds and capacity levels offer greater choice at better price points. There is still a role for 3.5-inch drives, such as providing bulk capacity data storage, but expect this form factor to play a lesser role in high performance storage array applications.
Whichever format dominates in the future, the magnetic spinning disk still has a future ahead of it, although it is being complemented by faster (SSD) alternatives. However, the design of future disk arrays will be dictated by new error correction philosophies, because RAID runs out of steam as disk numbers and capacities increase. But that is a different story...
The consensus is that 2.5-inch drives will predominate in performance-focused storage array applications, with 3.5-inch ones preferred for bulk capacity. There is general agreement that RAID rebuild times on high-capacity 3.5-inch drives are becoming untenable, and that is a problem that needs to be sorted. This could mean that enterprise servers and storage arrays move to 2.5-inch SATA drives for bulk data storage while desktops stick with 3.5-inch drives. Maybe they will go 2.5-inch too, though, with bulk data stored at the other end of a LAN or WAN link. As ever, we will have to wait and see. ®