
Debugging Slow Buffered Reads on the Lustre File System

“Buffered read performance under Lustre has been inexplicably slow when compared to writes or even direct IO reads. A balanced FDR-based Object Storage Server can easily saturate the network or backend disk storage using O_DIRECT-based IO. However, buffered IO reads remain at 80% of write bandwidth. In this presentation we will characterize the problem, discuss how it was debugged, and present the proposed resolution. The format will be a presentation followed by Q&A.”
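The buffered-versus-direct distinction the abstract draws can be illustrated with a minimal Linux sketch (the file name is hypothetical, and this does not reproduce any Lustre-specific behavior): a buffered read goes through the kernel page cache, while an `O_DIRECT` read bypasses it and therefore requires block-aligned buffers.

```python
import os
import mmap

BLOCK = 4096  # O_DIRECT requires block-aligned offsets, sizes, and buffers
path = "iotest.dat"  # hypothetical scratch file for illustration

# Create a one-block test file.
with open(path, "wb") as f:
    f.write(b"x" * BLOCK)

# Buffered read: data passes through the kernel page cache.
with open(path, "rb") as f:
    buffered = f.read(BLOCK)

# Direct read: bypasses the page cache. The user buffer must be aligned,
# so we use an anonymous mmap, which is page-aligned by construction.
# os.O_DIRECT is Linux-specific.
try:
    fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
    buf = mmap.mmap(-1, BLOCK)
    n = os.preadv(fd, [buf], 0)
    direct = bytes(buf[:n])
    os.close(fd)
except OSError:
    direct = None  # O_DIRECT is unsupported on some filesystems (e.g. tmpfs)

print("buffered bytes:", len(buffered))
print("direct bytes:", len(direct) if direct is not None else "n/a")
os.remove(path)
```

Benchmarking loops over reads like these (dropping the page cache between buffered runs) is the usual way to expose the gap the presentation describes between buffered and direct read bandwidth.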

Industry Experts Discuss Accelerating Science with Storage Systems Research

In this special guest feature, Ken Strandberg describes the highlights of a panel discussion on high performance storage at SC15. “There was significant discussion about identifying the most important workflows, e.g. will checkpoint/restart continue to dominate I/O demands, difficult-to-analyze scientific datasets, or some new emerging science workflows. In identifying these workflows, we expect to learn where to focus storage research.”

High-Performance Lustre* Storage Solution Helps Enable the Intel® Scalable System Framework

“Intel has incorporated Intel Solutions for Lustre Software as part of the Intel SSF because it provides the performance to move data and minimize storage bottlenecks. Lustre is also open source based, and already enjoys a wide foundation of deployments in research around the world, while gaining significant traction in enterprise HPC. Intel’s version of Lustre delivers a high-performance storage solution in the Intel SSF that next-generation HPC needs to move toward the era of Exascale.”

ECMWF to Upgrade Cray XC Supercomputers for Weather Forecasting

Today Cray announced a $36 million contract to upgrade and expand the Cray XC supercomputers and Cray Sonexion storage system at the European Centre for Medium-Range Weather Forecasts (ECMWF). When the project is completed, the enhanced systems will allow the world-class numerical weather prediction and research center to continue to drive improvements in its highly-complex models to provide more accurate weather forecasts.

The Impact of HPC on Music

Many will be familiar with HPC and industrial or scientific applications, but now HPC is making its impact on something that touches the soul of millions and millions of people every day — music. In an interview with the inventor of HPC for Music, Antonis Karalis shared a brief explanation of how the future of music has been compromised and what steps are being taken to revolutionize music composition, the creative workflow, and deliver new entertainment experiences. Along the way, Karalis is applying cutting edge computing technologies including Intel Optane 3D memory and the Scalable System Framework.

Lustre and Persistent Storage

Lustre was originally developed as the fastest scratch file system that supercomputer centers could get for HPC workloads, but over the years it has matured into an enterprise-class parallel file system supporting mission-critical workloads. Unfortunately, in spite of Lustre having become extremely attractive to enterprises and adopted by IT departments, some naysayers continue to claim that Lustre is still just a scratch file system. We in the Lustre community see quite a different picture.

Lustre: This is Not Your Grandmother’s (or Grandfather’s) Parallel File System

“Over the last several years, an enormous amount of development effort has gone into Lustre to address users’ enterprise-related requests. Their work is not only keeping Lustre extremely fast (the Spider II storage system at the Oak Ridge Leadership Computing Facility (OLCF) that supports OLCF’s Titan supercomputer delivers 1 TB/s; and Data Oasis, supporting the Comet supercomputer at the San Diego Supercomputer Center (SDSC), supports thousands of users with 300 GB/s throughput) but also making it an enterprise-class parallel file system that has since been deployed for many mission-critical applications, such as seismic processing and analysis, regional climate and weather modeling, and banking.”

Video: Diving into Intel’s HPC Scalable System Framework Plans

“In July, Intel announced plans for the HPC Scalable System Framework – a design foundation enabling a wide range of highly workload-optimized solutions. This talk will delve into aspects of the framework and highlight the relationship and benefits to application development and execution.”

Towards the Convergence of HPC and Big Data — Data-Centric Architecture at TACC

Dan Stanzione from TACC presented this talk at the DDN User Group at SC15. “TACC is an advanced computing research center that provides comprehensive advanced computing resources and support services to researchers in Texas and across the USA. The mission of TACC is to enable discoveries that advance science and society through the application of advanced computing technologies. Specializing in high performance computing, scientific visualization, data analysis & storage systems, software, research & development and portal interfaces, TACC deploys and operates advanced computational infrastructure to enable computational research activities of faculty, staff, and students of UT Austin.”

Setting a Path for the Next-Generation of High-Performance Computing Architecture

At SC15, Intel talked about some transformational high-performance computing technologies and the architecture that ties them together: the Intel® Scalable System Framework (Intel® SSF). Intel describes Intel SSF as “an advanced architectural approach for simplifying the procurement, deployment, and management of HPC systems, while broadening the accessibility of HPC to more industries and workloads.” Intel SSF is designed to eliminate the traditional bottlenecks: the so-called power, memory, storage, and I/O walls that system builders and operators have run into over the years.