Deep dive El Reg has teamed up with the Storage Networking Industry Association (SNIA) for a series of deep dive articles. Each month, the SNIA will deliver a comprehensive introduction to basic storage networking concepts. This month the SNIA examines data deduplication.
This article, derived from existing SNIA material, describes the different places where deduplication can be done; explores the differences between compression, single-instance files, and deduplication; and looks at the different ways sub-file level deduplication can be carried out. It also explains what kind of data is well-suited to deduplication, and what is not.
Data deduplication has become a very popular topic and commercial offering in the storage industry because of its potential for very large reductions in acquisition and running costs, as well as the efficiency gains it brings. With the explosive growth of data – a problem that, according to a recent Gartner survey, nearly half of all data-centre managers rate as one of their top three challenges – deduplication offers an easy route to relieving pressure on storage budgets and coping with further growth.
While seen as primarily a capacity-optimisation technology, deduplication also brings performance benefits – with less data stored, there is less data to move.
Deduplication technologies are offered at various points in the data life cycle, from deduplication at the source, through deduplication of data in transit, to deduplication of data at rest at the storage destination. The technologies are also being applied at all storage tiers: backup, archive, and primary storage.
Regardless of what method is used, deduplication (often shortened to "dedupe") is the process of recognising identical data at various levels of granularity, and replacing it with pointers to shared copies in order to save both storage space and the bandwidth required to move this data.
The deduplication process includes tracking and identifying all the eliminated duplicate data, and identifying and storing only data that is new and unique. The end user of the data should be completely unaware that the data may have been deduplicated and reconstituted many times in its life.
There are different ways of deduplicating data. Single Instance Storage (SIS) is a form of deduplication at the file or object level; duplicate copies are replaced by one instance with pointers to the original file or object.
Sub-file data deduplication operates at a more granular level than the file or object. Two flavours of this technology are commonly found: fixed-block deduplication, where data is broken into fixed length sections or blocks, and variable-length segments, where data is deduplicated based on a sliding window.
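As a rough sketch of how a fixed-block scheme might work (the function names and 4 KiB block size here are illustrative, not any vendor's implementation), data can be split into fixed-size blocks, each fingerprinted with a cryptographic hash, with only the first copy of each unique block actually stored:

```python
import hashlib

BLOCK_SIZE = 4096   # illustrative fixed block size
store = {}          # SHA-256 fingerprint -> the one stored copy of that block

def write_object(data: bytes):
    """Split data into fixed-size blocks; keep only one copy of each."""
    pointers = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # new block: store it; duplicate: skip
        pointers.append(digest)           # the object becomes a list of pointers
    return pointers

def read_object(pointers):
    """Rehydrate an object from its list of pointers."""
    return b"".join(store[d] for d in pointers)

# Two 12 KiB objects that share two of their three blocks:
obj1 = b"A" * 4096 + b"B" * 4096 + b"C" * 4096
obj2 = b"A" * 4096 + b"B" * 4096 + b"D" * 4096
p1, p2 = write_object(obj1), write_object(obj2)
assert read_object(p1) == obj1    # rehydration is lossless
print(len(store))                 # 4 unique blocks stored, rather than 6
```

Variable-length schemes differ only in how the block boundaries are chosen: a sliding-window function picks content-dependent cut points, so an insertion near the start of a file does not shift every subsequent block's fingerprint.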
Compression is the encoding of data to reduce its size; it can also be applied to data once it is deduplicated to further reduce storage consumption. Deduplication and compression are different and complementary – for example, data may deduplicate well but compress poorly.
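A quick way to see why the two techniques are complementary is to compress a highly repetitive buffer and a random one (a sketch using Python's standard zlib module; random bytes stand in for already-compressed or high-entropy data):

```python
import os
import zlib

repetitive = b"A" * 4096          # compresses extremely well
random_data = os.urandom(4096)    # effectively incompressible

print(len(zlib.compress(repetitive)))   # a few dozen bytes
print(len(zlib.compress(random_data)))  # roughly 4096 bytes or slightly more
```

The random buffer gains nothing from compression, yet two identical copies of it would still deduplicate perfectly – which is the sense in which data "may deduplicate well but compress poorly".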
In addition, deduplication can be performed in-line, as the data is being written to the target, or as a post-process, once the data has been written and is at rest on disk.
An example of deduplication
As a simplified example of deduplication, let's say we have two objects or files made up of blocks. These are depicted in the diagram below. The units could equally be variable, window-based segments, fixed blocks, or whole files – the same principle applies. Each object in this example contains blocks identified here by letters of the alphabet.
Sub-file level data deduplication (SNIA)
The first object is made up of blocks ABCZDYEF, the second of blocks ABDGHJECF; therefore the common blocks are ABCDEF. The original data would have taken eight plus nine blocks, for a total of 17 blocks. The deduplicated data requires just two blocks (Z and Y) plus three blocks (G, H and J) for the unique blocks in each object, and six for common blocks, plus some overhead for pointers and other data to help rehydrate, for a total of 11 blocks.
If we add a third file – say, a modification of the first file that after an edit becomes XBCZDYEF – then only one new block (X) is required. Twelve blocks plus pointers are sufficient to store all the information needed for these three different objects. Compression can further reduce the deduplicated data: depending on the type of data, a further reduction of up to 50 per cent of the original size is typical, which would bring the 17 original blocks in this example down to six or so.
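The arithmetic above can be checked in a few lines of Python. Each letter stands in for one block's fingerprint (a real system would hash the block contents, as discussed earlier):

```python
object1 = "ABCZDYEF"    # 8 blocks
object2 = "ABDGHJECF"   # 9 blocks
object3 = "XBCZDYEF"    # object1 after an edit: A replaced by X

store = set()           # the unique blocks actually kept on disk
for obj in (object1, object2, object3):
    for block in obj:
        store.add(block)    # duplicate blocks collapse to one stored copy

print(len(object1) + len(object2))   # 17 blocks for the first two objects alone
print(len(store))                    # 12 unique blocks cover all three objects
```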
Deduplication use cases
There are many types of data that can benefit from this impressive capacity-reduction potential. Backups are a prime example: each stream of backup data is very similar to the last, with only a small percentage of data changing between backups, so backups commonly show deduplication ratios of 20 to one or greater. Virtual-machine images, where each image is largely similar to every other, also deduplicate well, with savings of 90 per cent or more in practice.
Deduplication can be used for backup, primary storage, WAN optimisation, archiving, and disaster recovery. In fact, any point where data is stored and transmitted is a candidate.
Points to consider
Deduplication looks like a winner – but, like all technologies, getting the best from it requires an understanding of where it works well and where it is not effective, as well as of the flavours offered by the various vendors.
Not all data types deduplicate equally well. Some, such as video streams or geophysical data, are problematic: they contain little to no repetitive data, and may already be compressed. On the other hand, backups – which contain large amounts of data that does not change between runs – deduplicate well regardless of the underlying data type.
But generally, most data types and sources of data have some potential for deduplication – home directories and VM images, for example. Deduplicated data may also be slower to access, because reconstituting it (sometimes referred to as "rehydration") may demand more processing resources on the storage system, typically extra CPU cycles, than reading a file that has not been deduplicated.
On the other hand, deduped data may be faster to access since less data movement from slow disks is involved. Caching at the storage controller on flash storage devices or in the network itself can considerably reduce the overall I/O load on the disk subsystem. But your mileage may vary, and evaluation of the benefits needs an understanding of the service you are delivering and the data you are managing.
Most data types will benefit from deduplication, as the overheads are small and outweighed by the significant savings, but high-performance applications that require very fast access to their data are not generally good candidates for deduplication.
The bottom line
Data deduplication helps manage data growth and reduces network bandwidth requirements, improving both capacity and performance efficiency. Significant cost reductions can be made, from lower administration costs (there's less to manage) to space, power, and cooling outgoings – deduplication helps data centres become greener by reducing the carbon footprint per stored byte.
When evaluating deduplication the answer to the question "Will it benefit my data centre?" generally is: "It will." The success of deduplication technologies to date should encourage every storage administrator to "go forth and deduplicate". ®
This article was written by Alex McDonald, SNIA Europe UK country committee member, NetApp, based on existing SNIA material. To explore deduplication further, check out this SNIA tutorial: Advanced deduplication concepts (pdf).
To view all of the SNIA tutorials on Data Protection and Management, visit the SNIA Europe website at www.snia-europe.org/en/technology-topics/snia-tutorials/data-protection-and-management.cfm.