In this special guest feature from Scientific Computing World, Robert Roe writes that the era of data-centric HPC is upon us. He then investigates how data storage companies are rising to the challenge. In August 2014, a ‘Task Force on High Performance Computing’ reported to the US Department of Energy that data-centric computing will be […]
In this video from the HPC Advisory Council Spain Conference, Jose Carreira from Panasas presents: Panasas HPC Storage — Simplicity and Performance. “NAS products for technical enterprise and research environments must deliver fast time to results and efficiently and linearly scale to extremely high levels of aggregate performance. While performance is critical, performance that comes at the expense of manageability can hamper workflows and impact productivity.”
Storage and data management have arguably become HPC's most pressing pain points, with access density (I/O operations per second per unit of capacity) a particularly troubling issue. Many HPC sites are doubling their storage capacities every two to three years, but adding capacity does not address the access-density, data-movement, and related storage issues many HPC buyers face. Left unaddressed, these bottlenecks choke off your investments in processing, networking, middleware, and applications. If you're looking to maximize the throughput of your technical computing infrastructure, storage performance often holds the key.
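To see why capacity alone doesn't help, think of access density as IOPS per terabyte: adding more of the same drives scales capacity and aggregate IOPS together, so the ratio never improves. A back-of-the-envelope sketch in Python, using hypothetical drive numbers:

```python
# Back-of-the-envelope sketch (hypothetical numbers): doubling raw
# capacity with the same drive type leaves access density flat, so
# per-TB I/O performance does not improve.

def access_density(total_iops: float, capacity_tb: float) -> float:
    """Access density: I/O operations per second per terabyte."""
    return total_iops / capacity_tb

# Assume a nearline disk delivering ~150 random IOPS at 8 TB each.
DRIVE_IOPS = 150
DRIVE_TB = 8

for n_drives in (100, 200):  # capacity doubles, drive type unchanged
    iops = n_drives * DRIVE_IOPS
    tb = n_drives * DRIVE_TB
    print(f"{n_drives} drives: {tb} TB, {iops} IOPS, "
          f"{access_density(iops, tb):.1f} IOPS/TB")
```

In both configurations the access density stays at 18.8 IOPS/TB even though capacity (and aggregate IOPS) doubled, which is why workloads that are limited by I/O per terabyte see no relief from capacity upgrades alone.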
“My story in a nutshell is that as things get larger, we actually have pretty good technology for dealing with size. We suffer primarily from scale: the sheer number of components that can fail, and keeping consistency across them. The consistency issue is a serious one for storage systems that are always available.”
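The point about scale can be made concrete with elementary probability: if component failures are independent, the chance that at least one of N components fails is 1 - (1 - p)^N, which approaches certainty as N grows. A short Python illustration, assuming a hypothetical per-drive annual failure rate:

```python
# Quick illustration (assumed numbers) of why scale, not size, hurts:
# the chance that at least one component fails grows rapidly with
# component count, even when each component is individually reliable.

def p_any_failure(n_components: int, p_component: float) -> float:
    """Probability that at least one of n independent components fails."""
    return 1.0 - (1.0 - p_component) ** n_components

# Suppose each drive has a 1% chance of failing in a given year.
P_DRIVE = 0.01

for n in (10, 100, 1000, 10000):
    print(f"{n:>6} drives: P(at least one failure) = "
          f"{p_any_failure(n, P_DRIVE):.3f}")
```

With 10 drives a failure is a rare event; with 10,000 drives, at least one failure per year is a near certainty. An always-available storage system therefore has to preserve consistency while components are continuously failing and being replaced.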