The Hyper-Converged Approach to Storage – a Natural Next Step

In this week’s Industry Perspectives, Laura Shepard, Sr. Director Vertical and Product Marketing at DDN, looks at trends in commodity hardware, specifically a converged approach to storage and the many benefits it offers to supercomputing professionals.

Laura Shepard, Sr. Director Vertical and Product Marketing at DDN

With increased deployment of virtualized hardware and the use of cloud resources, the perceived value of differentiated hardware has diminished over the past decade. I’ve sat in many meetings in which complete multi-million-dollar infrastructures were contemplated and designed without any mention of hardware specifics. Industry vendors have also been supporting this direction, moving to commodity hardware with differentiation almost exclusively in software.

This is not a surprise. The computing industry moves in cycles in which compute, networking, and I/O capabilities leapfrog (most) user requirements, and then innovation slows as industry, government, and academia change their workflows to take maximum advantage of new developments. After the early-adopter phase, the race is on to commoditize these advances to drive cost out of new infrastructure for the main market. Once the majority of the market has adjusted to these advances, early adopters outstrip the capabilities of the previous round of innovation, and we begin again.

And that brings us to the present – a new cycle is beginning, but it comes with some new twists. Not only are we facing rapid innovation across all major computing areas at roughly the same time, but user requirements have also changed across multiple axes – performance and capacity of course, but also in terms of what users want out of their data and how they share data and results across and between organizations.

Huge data growth has been with us for years now, and it shows no sign of slowing. There is an increasing expectation among end users that, instead of working hard just to keep pace with their data as it grows, they should be able to leverage it effectively for strategic or competitive advantage. So the challenge becomes how best to balance new technologies to capture new data, incorporate non-traditional data, analyze across multiple sources for actionable insight, share data and results broadly and securely, and do all of that cost-effectively.

Convergence and hyper-convergence on the compute side – especially in the enterprise – are broadly seen as agents of commoditization. On the storage side of the industry, however – especially in data-intensive environments – the hyper-convergence of compute, networking, and even applications into the storage device can provide a way to incorporate new technology and balance it for maximum performance. By bringing the latest SAS- and NVMe-connected SSDs, spinning media, and the newest processors into the same system, and balancing them with full-bandwidth NVMe and SAS networks, a converged system delivers the performance of new components with much less latency than a non-converged approach. The traditional convergence benefits also apply, of course, including fewer components to buy and manage.

Hyper-convergence has a role to play in delivering the performance, scale, and data management capabilities needed to tame massive data growth and take advantage of it. Embedding applications within the storage system itself simplifies management and reduces data center footprint, but more importantly for data-intensive environments, it significantly reduces latency by removing networking hops between the application and its data.
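To make the hop-removal point concrete, here is a minimal sketch of a toy latency model in Python. The hop names and per-hop figures are illustrative assumptions for this sketch, not measurements of any particular product or network.

    # Toy model: request latency as a sum of per-hop delays.
    # All figures are illustrative assumptions in microseconds,
    # not measurements of any particular product or network.
    HOP_LATENCY_US = {
        "client_nic": 5,    # client network interface
        "switch": 2,        # one switch traversal
        "server_nic": 5,    # storage server network interface
        "pcie_bus": 1,      # internal PCIe transfer
        "nvme_read": 80,    # NVMe SSD read service time
    }

    def request_latency_us(path):
        """Total latency for a request traversing the given hops."""
        return sum(HOP_LATENCY_US[hop] for hop in path)

    # Application on a separate server: the request crosses the network.
    external = ["client_nic", "switch", "server_nic", "pcie_bus", "nvme_read"]

    # Application embedded in the storage system: the network hops disappear.
    embedded = ["pcie_bus", "nvme_read"]

    print(request_latency_us(external))  # 93 microseconds
    print(request_latency_us(embedded))  # 81 microseconds

Under these toy numbers the saving looks modest for a single large read, but for metadata-heavy or small-I/O workloads, where media service time is small, the removed network hops account for a much larger share of total latency.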

Several classes of application are ideal for hyper-converged storage:

  • Parallel file systems like Lustre and GPFS (example: GPFS at Virginia Bioinformatics)
  • Data organization, migration, and metadata cataloging tools (example: iRODS at the University of Florida)
  • Bandwidth optimization tools for WAN data access and transfers (example: dCache at TRIUMF, a CERN Tier-1 data center)

The hyper-converged approach to storage is a natural next step.  It gives the end user a simple and cost-effective way to harness the latest technology while also providing a route to the highest possible performance.

There will be a lot of new product announcements around the upcoming Supercomputing ’15 conference in Austin, Texas, the week of November 16. DDN will be there with the brand-new SFA14K and SFA14KE – the world’s fastest hybrid SSD-and-disk, high-density embedded storage platform. Delivering 60 GB/s and 6 million IOPS in a single 4U system, and over 7 PB per rack, the SFA14K and SFA14KE are many times faster and significantly denser than competing offerings. Come visit us at booth #633 or online at ddn.com.

Submitted by: Laura Shepard, Senior Director Vertical and Product Marketing at DDN.