In this WGRZ video, researchers describe supercomputing at the Center for Computational Research at the University at Buffalo. “The Center’s extensive computing facilities, which are housed in a state-of-the-art 4,000 sq ft machine room, include a generally accessible (to all UB researchers) Linux cluster with more than 8,000 processor cores and QDR InfiniBand, a subset (32) of which contain (64) NVIDIA Tesla M2050 ‘Fermi’ graphics processing units (GPUs).”
“Over the last several years, an enormous amount of development effort has gone into Lustre to address users’ enterprise-related requests. This work is not only keeping Lustre extremely fast (the Spider II storage system at the Oak Ridge Leadership Computing Facility (OLCF) that supports OLCF’s Titan supercomputer delivers 1 TB/s, and Data Oasis, supporting the Comet supercomputer at the San Diego Supercomputer Center (SDSC), serves thousands of users with 300 GB/s throughput) but also making it an enterprise-class parallel file system that has since been deployed for many mission-critical applications, such as seismic processing and analysis, regional climate and weather modeling, and banking.”
“Ngenea’s blazingly-fast on-premises storage stores frequently accessed active data on the industry’s leading high performance file system, IBM Spectrum Scale (GPFS). Less frequently accessed data, including backup, archival data and data targeted to be shared globally, is directed to cloud storage based on predefined policies such as age, time of last access, frequency of access, project, subject, study or data source. Ngenea can direct data to specific cloud storage regions around the world to facilitate remote low latency data access and empower global collaboration.”
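The predefined-policy idea described above (tiering on age, time of last access, and so on) can be sketched as a simple rule function. This is a minimal illustration only; the function name, thresholds, and interface below are hypothetical and are not part of Ngenea's actual API:

```python
from datetime import datetime, timedelta

# Hedged sketch of an age/last-access tiering rule. The names and
# thresholds here are assumptions for illustration, not Ngenea's API.
ARCHIVE_AGE = timedelta(days=90)   # assumed: tier out anything older than 90 days
IDLE_AGE = timedelta(days=30)      # assumed: tier out anything untouched for 30 days

def should_tier_to_cloud(created: datetime, last_access: datetime, now: datetime) -> bool:
    """Return True if a file qualifies for migration to cloud storage
    under the age-or-idleness policy described above."""
    return (now - created) > ARCHIVE_AGE or (now - last_access) > IDLE_AGE

# A file created a year ago is tiered out on age even if it was read yesterday:
now = datetime(2016, 1, 1)
print(should_tier_to_cloud(datetime(2015, 1, 1), datetime(2015, 12, 31), now))  # True
```

In a Spectrum Scale environment such rules would typically be expressed in the file system's ILM policy language rather than in application code, but the decision logic is the same shape.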
In this video from SC15, Larry Jones from Seagate provides an overview of the company’s revamped HPC storage product line, including a new 10,000 RPM ClusterStor hard disk drive tailor-made for the HPC market. “ClusterStor integrates the latest in Big Data technologies to deliver class-leading ingest speeds, massively scalable capacities to more than 100PB and the ability to handle a variety of mixed workloads.”
Today ArcaStream and General Atomics introduced Ngenea, the world’s first data-aware cloud storage gateway. Ready for quick deployment, Ngenea seamlessly integrates with popular cloud and object storage providers such as Amazon S3, Google GCS, Scality, Cleversafe and Swift and is optimized for data-intensive workflows in life science, education, research, and oil and gas exploration.
“Cyfronet recently celebrated the launch of Poland’s fastest supercomputer. As the world’s largest deployment of the HP Apollo 8000 platform, the 1.68 Petaflop Prometheus system is powered by 41,472 Intel Haswell cores, 216 Terabytes of memory, and 10 Petabytes of storage.”
Today Seagate announced it is integrating IBM Spectrum Scale software, based upon GPFS technology, with its ClusterStor HPC storage to deliver a new software-defined storage appliance. The new appliance will help users manage the demands of data-intensive workloads, such as genomic research, computer-aided design, digital media, data analytics, financial model analysis, and electronic design simulations.
Recently, insideHPC featured an interview with IBM’s Jay Muelhoefer on the topic of software-defined infrastructure. To learn more about the storage side of that coin, we caught up with Jim Gutowski and Scott Fadden from IBM.
“IBM Spectrum Scale is a proven, scalable, high-performance data and file management solution (based upon IBM General Parallel File System or GPFS, formerly known by the code name Elastic Storage). IBM Spectrum Scale provides world-class storage management with extreme scalability, flash-accelerated performance, and automatic policy-based storage tiering from flash through disk to tape. IBM Spectrum Scale reduces storage costs by up to 90% while improving security and management efficiency in cloud, big data, and analytics environments.”
“I came to IBM via the acquisition of Platform Computing. There have also been other IBM assets around HPC, namely GPFS. What’s been the evolution of those items, how they come together under this concept of software-defined infrastructure, and how we’re now taking these capabilities and expanding them into other initiatives that have sort of bled into the HPC space.”