In this video, Oklahoma State Director of HPC Dana Brunson describes how the Cowboy supercomputer powers research. “High performance computing is often used for simulations that may be too big, too small, too fast, too slow, too dangerous or too costly. Another thing it’s used for involves data. You may remember the Human Genome Project: it took nearly a decade and cost a billion dollars. These sorts of things can now be done over a weekend for under a thousand dollars. Our current supercomputer is named Cowboy; it was funded by a 2011 National Science Foundation grant and it has been serving us very well.”
Today Panasas announced that it has joined the iRODS Consortium as a contributing member. The iRODS Consortium leads development and support of the Integrated Rule-Oriented Data System (iRODS), free open source software for data discovery, workflow automation, secure collaboration, and data virtualization.
Today Panasas announced ActiveStor 18, its latest generation hybrid scale-out NAS appliance. By adopting 8 terabyte drive technology, ActiveStor 18 increases scalability to more than 20 petabytes and 200 gigabytes per second. According to Panasas, ActiveStor 18 also offers increased CPU power and twice the storage cache capacity to further accelerate mixed workload performance.
In this special guest feature from Scientific Computing World, Robert Roe writes that the era of data-centric HPC is upon us. He then investigates how data storage companies are rising to the challenge. In August 2014, a ‘Task Force on High Performance Computing’ reported to the US Department of Energy that data-centric computing will be […]
This is the fifth article in a series from the editors of insideHPC on HPC storage. This week we look at different approaches to data storage. A different approach to data protection is clearly required if the limitations of hardware-based RAID are to be overcome.
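As a loose illustration of the underlying idea (not Panasas’s implementation or any specific product’s scheme), the sketch below shows the parity principle that both hardware RAID and newer software-based protection schemes build on: a lost data block can be reconstructed from the surviving blocks plus a parity block. The block contents and sizes are made up for the example.

```python
# Minimal sketch of parity-based data protection (illustrative only).
# One parity block is the XOR of the data blocks, so any single lost
# block can be rebuilt from the surviving blocks plus parity.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Hypothetical 4 KB data blocks striped across devices.
data_blocks = [bytes([d]) * 4096 for d in (1, 2, 3, 4)]
parity = xor_blocks(data_blocks)

# Simulate losing block 2, then rebuild it from the survivors plus parity.
survivors = [blk for i, blk in enumerate(data_blocks) if i != 2]
rebuilt = xor_blocks(survivors + [parity])
assert rebuilt == data_blocks[2]
print("lost block reconstructed:", rebuilt == data_blocks[2])
```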
Today Panasas announced that its ActiveStor 16 hybrid scale-out NAS appliance is now shipping.
In scale-out NAS, HPC storage performance can be driven by several factors including elimination of file processing bottlenecks, parallel data paths and more. Learn how to maintain storage performance as you scale.
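As a rough sketch of what “parallel data paths” means in practice (simulated here with in-memory buffers rather than a real parallel file system client), a file striped across several storage nodes can be read from all of them concurrently, so aggregate throughput grows with the number of nodes instead of funneling through a single server. The node count and stripe size below are arbitrary.

```python
# Toy illustration of parallel data paths in a striped, scale-out design.
# A "file" is striped round-robin across simulated storage nodes; the client
# reads all stripes concurrently and reassembles them in order.
from concurrent.futures import ThreadPoolExecutor

NUM_NODES = 4          # simulated storage nodes
NUM_STRIPES = 8        # stripes in the file
STRIPE_SIZE = 1 << 16  # 64 KiB stripes (arbitrary for the example)

file_data = bytes(range(256)) * (NUM_STRIPES * STRIPE_SIZE // 256)

# Round-robin striping: stripe i lives on node i % NUM_NODES.
nodes = [dict() for _ in range(NUM_NODES)]
for i in range(NUM_STRIPES):
    nodes[i % NUM_NODES][i] = file_data[i * STRIPE_SIZE:(i + 1) * STRIPE_SIZE]

def read_stripe(i):
    """Fetch one stripe from the node that owns it."""
    return i, nodes[i % NUM_NODES][i]

# Issue all stripe reads in parallel rather than through one data path.
with ThreadPoolExecutor(max_workers=NUM_NODES) as pool:
    stripes = dict(pool.map(read_stripe, range(NUM_STRIPES)))

reassembled = b"".join(stripes[i] for i in range(NUM_STRIPES))
assert reassembled == file_data
```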
In this video from the HPC Advisory Council Spain Conference, Jose Carreira from Panasas presents: Panasas HPC Storage — Simplicity and Performance. “NAS products for technical enterprise and research environments must deliver fast time to results and efficiently and linearly scale to extremely high levels of aggregate performance. While performance is critical, performance that comes at the expense of manageability can hamper workflows and impact productivity.”
Storage and data management have arguably already become the most important HPC “pain points,” with access density a particularly troubling issue. Many HPC sites are doubling their storage capacities every two to three years, but adding capacity does not address the access density, data movement, and related storage issues many HPC buyers face. When that happens, your investments in processing, networking, middleware and applications are choked off by bottlenecks in your storage infrastructure. If you’re looking to maximize throughput of your technical computing infrastructure, storage performance often holds the key.
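To make the access-density point concrete, here is a back-of-the-envelope calculation using hypothetical round numbers: if a site doubles capacity each refresh by moving to larger drives while per-drive bandwidth stays roughly flat, the achievable bandwidth per terabyte stored keeps falling.

```python
# Back-of-the-envelope illustration of falling access density.
# All figures are hypothetical round numbers, not measurements of any product.

drive_bandwidth_mbs = 200.0   # per-drive bandwidth, roughly flat across generations
num_drives = 500              # drive count stays the same at each refresh

for cap_tb in [2, 4, 8, 16]:  # drive size doubles each refresh
    site_capacity_tb = num_drives * cap_tb
    aggregate_gbs = num_drives * drive_bandwidth_mbs / 1000
    density = aggregate_gbs / site_capacity_tb  # GB/s per TB stored
    print(f"{cap_tb:>2} TB drives: {site_capacity_tb / 1000:4.1f} PB total, "
          f"{aggregate_gbs:.0f} GB/s aggregate, {density:.4f} GB/s per TB")
```

Capacity grows from 1 PB to 8 PB across the refreshes, yet aggregate bandwidth stays at 100 GB/s, so access density drops by a factor of eight.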
“My story in a nutshell is that as things get larger, if they get larger and we operate them on larger sizes, we actually have pretty good technology for dealing with size. We suffer primarily from scale and the number of components that can fail, and keeping consistency on those. The consistency issue is a serious one for storage systems that are always available.”
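The scale argument in the quote can be made concrete with a simple probability sketch (the failure rate is a hypothetical round number): even when each individual component is reliable, the chance that at least one of thousands of components fails in a given year approaches certainty, which is why large, always-available storage systems must treat failure handling and consistency as the normal case.

```python
# Probability that at least one component fails, assuming independent failures.
# The 2% annual failure rate is a hypothetical figure for illustration only.

annual_failure_rate = 0.02  # per-component probability of failing within a year

for num_components in [10, 100, 1_000, 10_000]:
    p_any_failure = 1 - (1 - annual_failure_rate) ** num_components
    print(f"{num_components:>6} components: "
          f"P(at least one failure per year) = {p_any_failure:.4f}")
```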