
HDS unwraps 'data ingestor'

Cloud info gobbler functions like data bee swarm

Hitachi Data Systems has revved its cloud object storage on-ramp, the file data-gulping HDI, to share data, restore old versions of files, and continuously migrate new files into a Hitachi-based cloud.

Hitachi Data Ingestor (HDI) is a physical or virtual caching appliance that presents a NAS interface through which file data from remote and branch offices or other cloud storage users can be fed to Hitachi's Content Platform (HCP), a virtualised and scalable object store.

Multiple HDI appliances can surround a central HCP site, much as bees swarm around a hive delivering pollen to it to make honey.
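For readers who want a concrete picture of that pattern, here is a minimal Python sketch of a write-through edge cache feeding a central object store. It is illustrative only: the class names (EdgeGateway, ObjectStore), the in-memory store and the file layout are hypothetical stand-ins, not HDS or HCP APIs.

```python
# Illustrative write-through edge cache: files written at the edge are kept
# in a local cache (fast reads for the branch office) and a copy is pushed
# to a central object store. ObjectStore stands in for HCP; EdgeGateway
# stands in for an HDI appliance. These are hypothetical names, not HDS APIs.
from pathlib import Path


class ObjectStore:
    """Stand-in for a central object store keyed by namespace/path."""

    def __init__(self):
        self.objects = {}  # key -> bytes

    def put(self, key: str, data: bytes) -> None:
        self.objects[key] = data

    def get(self, key: str) -> bytes:
        return self.objects[key]


class EdgeGateway:
    """Caches file writes locally and forwards each one to the central store."""

    def __init__(self, cache_dir: Path, store: ObjectStore, namespace: str):
        self.cache_dir = cache_dir
        self.store = store
        self.namespace = namespace

    def write_file(self, rel_path: str, data: bytes) -> None:
        local = self.cache_dir / rel_path
        local.parent.mkdir(parents=True, exist_ok=True)
        local.write_bytes(data)  # serve later reads from the edge cache
        self.store.put(f"{self.namespace}/{rel_path}", data)  # write through to the centre

    def read_file(self, rel_path: str) -> bytes:
        local = self.cache_dir / rel_path
        if local.exists():  # cache hit: no trip to the centre
            return local.read_bytes()
        data = self.store.get(f"{self.namespace}/{rel_path}")  # miss: pull from the centre
        local.parent.mkdir(parents=True, exist_ok=True)
        local.write_bytes(data)  # repopulate the edge cache
        return data
```

Because every write lands in the central store, the local cache can be treated as disposable, which is what lets HDS pitch the edge box as something that never needs its own backup.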

The new features are:

- Content Sharing - enables “edge-dispersion” of data across a network of HDI systems. Multiple HDI systems can read from a single HCP namespace, giving each HDI system access to data written by the others, so users can build a wide-area content distribution framework.
- File Restore - users can retrieve previous versions of files, as well as deleted files, while file and directory access control is maintained via Active Directory and LDAP (a sketch of the versioning idea follows this list).
- NAS Migration - enables transparent migration of data from NAS devices and Windows servers to HDI, with automated throttling and continuous migration of data into HDI.
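The File Restore feature depends on the central store keeping older versions of each object. A rough sketch of that idea, under the assumption of a simple versioned key-value store (VersionedStore is a hypothetical stand-in, not the HCP interface):

```python
# Rough sketch of versioned restore: the store keeps every version of each
# object, so an earlier (or deleted) copy can be handed back to the edge.
# VersionedStore is a hypothetical stand-in, not the HCP API.
from collections import defaultdict


class VersionedStore:
    """Keeps every version of every object, oldest first."""

    def __init__(self):
        self.versions = defaultdict(list)  # key -> [v1_bytes, v2_bytes, ...]

    def put(self, key: str, data: bytes) -> None:
        self.versions[key].append(data)

    def latest(self, key: str) -> bytes:
        return self.versions[key][-1]

    def restore(self, key: str, version: int) -> bytes:
        """Return a specific earlier version (1-based index)."""
        return self.versions[key][version - 1]


# Overwrite a file, then pull back the original.
store = VersionedStore()
store.put("branch-01/finance/q1.xls", b"draft")
store.put("branch-01/finance/q1.xls", b"final")
assert store.restore("branch-01/finance/q1.xls", version=1) == b"draft"
```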

HDS says that, using its HDI/HCP combo, customers no longer have to back up data at edge locations such as remote and branch offices. Instead, data can be sent to a central, object-based cloud vault, from where it can be safeguarded, retrieved and shared.

This helps to simplify the IT infrastructure at edge locations by taking advantage of sophisticated facilities in the centre, such as multi-tenancy and multiple namespaces. ®
