
Consolidating Storage for Scientific Computing

In this special guest feature from Scientific Computing World, Shailesh M Shenoy from the Albert Einstein College of Medicine in New York discusses the challenges faced by large medical research organizations in the face of ever-growing volumes of data. “In short, our challenge was that we needed the ability to collaborate within the institution and with colleagues at other institutes – we needed to maintain that fluid conversation that involves data, not just the hypotheses and methods.”

Video: Supercomputing at the University at Buffalo

In this WGRZ video, researchers describe supercomputing at the Center for Computational Research at the University at Buffalo. “The Center’s extensive computing facilities, which are housed in a state-of-the-art 4000 sq ft machine room, include a generally accessible (to all UB researchers) Linux cluster with more than 8000 processor cores and QDR InfiniBand, a subset (32) of which contain (64) NVIDIA Tesla M2050 “Fermi” graphics processing units (GPUs).”

Lustre: This is Not Your Grandmother’s (or Grandfather’s) Parallel File System

“Over the last several years, an enormous amount of development effort has gone into Lustre to address users’ enterprise-related requests. This work is not only keeping Lustre extremely fast (the Spider II storage system at the Oak Ridge Leadership Computing Facility (OLCF), which supports OLCF’s Titan supercomputer, delivers 1 TB/s, and Data Oasis, supporting the Comet supercomputer at the San Diego Supercomputer Center (SDSC), supports thousands of users with 300 GB/s of throughput), but also making it an enterprise-class parallel file system that has since been deployed for many mission-critical applications, such as seismic processing and analysis, regional climate and weather modeling, and banking.”

Video: General Atomics Delivers Data-Aware Cloud Storage Gateway with ArcaStream

“Ngenea’s blazingly fast on-premises storage stores frequently accessed active data on the industry’s leading high-performance file system, IBM Spectrum Scale (GPFS). Less frequently accessed data, including backups, archival data, and data targeted to be shared globally, is directed to cloud storage based on predefined policies such as age, time of last access, frequency of access, project, subject, study, or data source. Ngenea can direct data to specific cloud storage regions around the world to facilitate low-latency remote data access and empower global collaboration.”
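The excerpt does not expose Ngenea’s actual policy engine or API, but the kind of predefined placement policy it describes (age and time of last access deciding whether data stays on the fast on-premises tier or is directed to cloud storage) can be sketched in a few lines of Python. The threshold and tier names below are illustrative assumptions, not product defaults:

    # Illustrative sketch only -- not Ngenea's policy engine or API.
    # Mimics the age / last-access placement policies described above.
    import os
    import sys
    import time

    ARCHIVE_AFTER_DAYS = 90        # assumed threshold: idle 90+ days -> cloud tier
    SECONDS_PER_DAY = 86400

    def choose_tier(path, now=None):
        """Return 'cloud' or 'on-premises' based on the file's last access time."""
        now = now or time.time()
        idle_days = (now - os.stat(path).st_atime) / SECONDS_PER_DAY
        return "cloud" if idle_days > ARCHIVE_AFTER_DAYS else "on-premises"

    if __name__ == "__main__":
        for f in sys.argv[1:]:
            print(f, "->", choose_tier(f))

A real gateway would also weigh the project, study, or data-source attributes mentioned above, and would carry out the actual data movement to the chosen cloud region rather than merely reporting a label.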

Slidecast: Seagate Beefs Up ClusterStor at SC15

In this video from SC15, Larry Jones from Seagate provides an overview of the company’s revamped HPC storage product line, including a new 10,000 RPM ClusterStor hard disk drive tailor-made for the HPC market. “ClusterStor integrates the latest in Big Data technologies to deliver class-leading ingest speeds, massively scalable capacities to more than 100 PB, and the ability to handle a variety of mixed workloads.”

World’s First Data-Aware Cloud Storage Gateway Coming to SC15

Today ArcaStream and General Atomics introduced Ngenea, the world’s first data-aware cloud storage gateway. Ready for quick deployment, Ngenea seamlessly integrates with popular cloud and object storage providers such as Amazon S3, Google GCS, Scality, Cleversafe and Swift and is optimized for data-intensive workflows in life science, education, research, and oil and gas exploration.

Video: Data Storage Infrastructure at Cyfronet

“Cyfronet recently celebrated the launch of Poland’s fastest supercomputer. As the world’s largest deployment of the HP Apollo 8000 platform, the 1.68 petaflop Prometheus system is powered by 41,472 Intel Haswell cores, 216 terabytes of memory, and 10 petabytes of storage.”
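As a back-of-the-envelope check of those figures (assuming Haswell’s 16 double-precision flops per core per cycle and a core clock near 2.53 GHz, neither of which is stated in the excerpt), the core count lines up with the quoted 1.68 petaflops:

    # Rough peak-performance sanity check for the quoted Prometheus figures.
    # Assumptions (not from the excerpt): 16 DP flops/cycle/core on Haswell
    # (two 256-bit FMA units), core clock of about 2.53 GHz.
    cores = 41_472
    flops_per_cycle = 16
    clock_hz = 2.53e9

    peak = cores * flops_per_cycle * clock_hz
    print(f"Theoretical peak: {peak / 1e15:.2f} PFlop/s")  # ~1.68 PFlop/s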

Seagate Adopts IBM Spectrum Scale, based on GPFS

Today Seagate announced it is integrating IBM Spectrum Scale software, based upon GPFS technology, with its ClusterStor HPC storage to deliver a new software defined storage appliance. The new appliance will help users manage the demands of data-intensive workloads, such as genomic research, computer aided design, digital media, data analytics, financial model analysis and electronic design simulations.

Interview: Software Defined Storage for Bridging HPC and Big Data Analytics

Recently, insideHPC featured an interview with IBM’s Jay Muelhoefer on the topic of software defined infrastructure. To learn more about the storage side of that coin, we caught up with Jim Gutowski and Scott Fadden from IBM.

Slidecast: Bridging HPC and Big Data Analytics with Software Defined Storage

“IBM Spectrum Scale is a proven, scalable, high-performance data and file management solution (based upon IBM General Parallel File System, or GPFS, formerly known by the code name Elastic Storage). IBM Spectrum Scale provides world-class storage management with extreme scalability, flash-accelerated performance, and automatic policy-based storage tiering from flash through disk to tape. IBM Spectrum Scale reduces storage costs by up to 90% while improving security and management efficiency in cloud, big data, and analytics environments.”
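Spectrum Scale drives that tiering through a policy engine whose rule language is beyond this excerpt; as a rough sketch of the underlying idea of threshold-driven migration from a fast pool to a slower one, with hypothetical watermarks and candidate format rather than the real policy language:

    # Minimal sketch of watermark-driven tier migration, in the spirit of the
    # policy-based tiering described above. Thresholds and the candidate
    # format are assumptions, not the Spectrum Scale policy language.
    def plan_migration(pool_used_pct, candidates, high=80.0, low=60.0):
        """Once the fast pool passes the high-water mark, pick the least
        recently accessed files to move down a tier until usage would fall
        back to the low-water mark.

        candidates: iterable of (path, atime, share_of_pool_pct) tuples.
        Returns the list of paths selected for migration.
        """
        selected = []
        if pool_used_pct <= high:
            return selected
        for path, atime, share_pct in sorted(candidates, key=lambda c: c[1]):
            selected.append(path)
            pool_used_pct -= share_pct
            if pool_used_pct <= low:
                break
        return selected

The real system expresses rules like this declaratively and applies them automatically across flash, disk, and tape pools; the sketch only shows the selection logic.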