Making Storage Bigger on the Inside

HPC storage

This sponsored post from HPE delves into how tools like HPE Data Management Framework are working to make HPC storage “bigger on the inside” and streamline data workflows.

Unfortunately, storage tends to get the short end of the stick when it comes to budget and capital expenditure. There’s never enough money to accommodate an ever-expanding ocean of data. Ideally, as viewers of the TV series Doctor Who know, the trick is to “Make it bigger on the inside.” In the show, what looks like a blue police box, as typically seen in the U.K., actually has multiple levels of space inside, including a library and a swimming pool. If only some of that sci-fi futuristic technology could be applied to HPC storage.


Ideally, as viewers of the TV series Doctor Who know, the trick is to “Make it bigger on the inside.” (Photo: Shutterstock/Hethers)

HPE Data Management Framework (DMF) isn’t science fiction, but it does make storage “bigger on the inside.” DMF has comprehensive metadata capabilities that allow it to mirror the file system and maintain an image of the file system over time. Working in conjunction with these metadata capabilities are a policy engine, which allows a tiered data model to be defined and used, and high-speed data movement functionality.
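To make the idea of a policy engine driving a tiered data model concrete, here is a minimal, hypothetical sketch in Python. It is not the DMF API; all names (`FileMeta`, `PolicyEngine`, the age thresholds) are illustrative assumptions showing how age-based policies might demote files from hot to warm to cold tiers.

```python
# Hypothetical sketch of a tiering policy engine. Illustrative only --
# these class and field names are assumptions, not the HPE DMF API.
from dataclasses import dataclass

@dataclass
class FileMeta:
    path: str
    size: int
    last_access: float   # epoch seconds, from the mirrored metadata
    tier: str = "hot"

class PolicyEngine:
    """Demote files to colder tiers as they age past configured thresholds."""

    def __init__(self, warm_after_s: float, cold_after_s: float):
        self.warm_after_s = warm_after_s
        self.cold_after_s = cold_after_s

    def target_tier(self, meta: FileMeta, now: float) -> str:
        age = now - meta.last_access
        if age >= self.cold_after_s:
            return "cold"    # e.g. tape
        if age >= self.warm_after_s:
            return "warm"    # e.g. hard drives
        return "hot"         # e.g. flash

    def apply(self, files: list, now: float) -> list:
        """Return (path, new_tier) moves and update each file's tier."""
        moves = []
        for f in files:
            target = self.target_tier(f, now)
            if target != f.tier:
                moves.append((f.path, target))
                f.tier = target
        return moves

# Usage: demote to "warm" after an hour idle, "cold" after a day.
engine = PolicyEngine(warm_after_s=3600, cold_after_s=86400)
now = 100_000.0
files = [FileMeta("/proj/a.dat", 10, last_access=now - 10),
         FileMeta("/proj/b.dat", 10, last_access=now - 7200),
         FileMeta("/proj/c.dat", 10, last_access=now - 90_000)]
print(engine.apply(files, now))
# → [('/proj/b.dat', 'warm'), ('/proj/c.dat', 'cold')]
```

A real data manager would of course trigger actual data movement and support richer criteria (size, project, path patterns); the sketch only shows the policy-evaluation shape of the idea.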

These features combine to allow active use of cold or archived data: just because data is archived doesn’t mean it’s inaccessible. DMF seamlessly moves data between tiers, whether they’re “hot” tiers based on flash storage, “warm” tiers based on hard drives, or “cold” tiers based on tape. When an application or user requests data from DMF, it is retrieved and can be read from primary storage as soon as the first data arrives, while the remainder of the file is still being recalled. Partial file recalls are also possible, allowing specified parts of a file to be recalled to primary storage while the entire file remains on the secondary tier. A user or application can review the content and determine whether a full recall is necessary.
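The partial-recall idea can be sketched in a few lines of Python. This is a toy model, not the DMF implementation: the `partial_recall` function and the in-memory “archive” are assumptions used to show why recalling just a byte range is useful before committing to a full recall.

```python
# Hypothetical sketch of a partial file recall: only a requested byte
# range is copied from the secondary (archive) tier, while the full file
# stays archived. Illustrative only -- not the HPE DMF API.
import io

def partial_recall(archive: io.BytesIO, offset: int, length: int) -> bytes:
    """Recall only `length` bytes starting at `offset` from the archived copy."""
    archive.seek(offset)
    return archive.read(length)

# A 1 MiB "archived" file whose first 4 bytes are a format header; the
# user needs only the header to decide whether a full recall is worthwhile.
archived = io.BytesIO(b"HDR1" + b"\x00" * (1024 * 1024 - 4))
header = partial_recall(archived, 0, 4)
print(header)
# → b'HDR1'
```

Inspecting a small header or index region this way avoids pulling a multi-terabyte file off tape only to discover it wasn’t the one needed.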


This fluid movement of data is further enhanced in DMF v7.1 with the concept of dynamic namespaces. A namespace is essentially a file structure, like a directory, that groups related data. DMF can create, manage, and delete namespaces, which gives great flexibility in managing data: everything that matches specified criteria can be gathered, moved, and archived as needed. For instance, all the data for a project can be gathered, processed, and then archived. That data can then be deleted from the active file system to free up resources, yet easily restored thanks to the metadata stored in the DMF metadata repository.
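The gather/release/restore lifecycle described above can be illustrated with a small Python sketch. Again, this is a hypothetical model: `NamespaceManager` and its methods are invented names showing the pattern, not DMF’s actual interface.

```python
# Hypothetical sketch of dynamic namespaces: gather files matching a
# criterion into a named collection, free the active-tier copies, and
# later restore them from retained metadata. Not the HPE DMF API.
class NamespaceManager:
    def __init__(self):
        self.active = {}    # path -> data on the active file system
        self.archive = {}   # namespace name -> {path: data}

    def create_namespace(self, name, predicate):
        """Gather all active files matching `predicate` into a namespace."""
        members = {p: d for p, d in self.active.items() if predicate(p)}
        self.archive[name] = members
        return sorted(members)

    def release(self, name):
        """Delete the active copies; the archived namespace keeps them restorable."""
        for path in self.archive[name]:
            self.active.pop(path, None)

    def restore(self, name):
        """Bring the namespace's files back onto the active file system."""
        self.active.update(self.archive[name])

# Usage: archive a finished project, free its space, then restore it.
mgr = NamespaceManager()
mgr.active = {"/proj/x/run1.dat": b"a",
              "/proj/x/run2.dat": b"b",
              "/other/z.dat": b"c"}
mgr.create_namespace("proj-x", lambda p: p.startswith("/proj/x/"))
mgr.release("proj-x")
print(sorted(mgr.active))
# → ['/other/z.dat']
mgr.restore("proj-x")
print(len(mgr.active))
# → 3
```

The key design point the sketch mirrors is that deletion from the active tier is safe because the namespace’s membership and contents survive in the metadata repository and archive tier.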

Since all data appears online to all users all the time, active storage becomes a window onto total storage capacity. Effectively, active storage has become bigger on the inside: whatever data is needed can be accessed. This tiering and data management capability also allows maximum utilization of the storage infrastructure, since inactive data can be flushed to warm tiers or to archive without any loss of accessibility. DMF can also streamline data workflows. Read the links below to learn more about HPE Data Management Framework.

Read about next generation data management in a Hyperion Research Technology Spotlight:

https://www.hpe.com/us/en/resources/solutions/hpc-hyperion-dmf.html

(Registration required to download)

Blog post: HPE Data Management Framework: Making HPC storage bigger on the inside

https://community.hpe.com/t5/Servers-The-Right-Compute/HPE-Data-Management-Framework-Making-storage-bigger-on-the/ba-p/7015671

To learn more about HPE Data Management Framework:

www.hpe.com/storage/dmf