Europe’s most powerful supercomputer, Piz Daint, is being upgraded, a move expected to at least double its computing power. ETH Zurich is investing around CHF 40 million to allow researchers to perform simulations, data analyses, and visualizations even more efficiently in the future. “Although slightly reduced in physical size, Piz Daint will become considerably more powerful as a result of the upgrade, particularly because we will be able to significantly increase bandwidth in the most important areas,” says CSCS Director Thomas Schulthess.
“Trends in computer memory/storage technology are in flux, perhaps more so now than in the last two decades. Economic analysis of HPC storage hierarchies has led to new tiers of storage being added to the next fleet of supercomputers, including Burst Buffers (in-system solid-state storage) and Campaign Storage. This talk will cover the background that brought us these new storage tiers and postulate what the economic crystal ball looks like for the coming decade. Further, it will suggest methods of leveraging HPC workflow studies to inform the continued evolution of the HPC storage hierarchy.”
“DDN’s IME14K revolutionizes how information is saved and accessed by compute. IME software allows data to reside next to compute in a very fast, shared pool of non-volatile memory (NVM). This new data adjacency significantly reduces latency by allowing IME software’s revolutionary, fast data communication layer to pass data without the file locking contention inherent in today’s parallel file systems.”
“There are a number of exciting technologies we should see in 2016, and a leader will be Intel’s next-generation Xeon Phi coprocessor – a hybrid between an accelerator and general purpose processor. This new class of processors will have a large impact on the industry with its innovative design that combines a many-core architecture with general-purpose productivity. Cray, for example, will be delivering Intel Xeon Phi processors with some of our largest systems, including those going to Los Alamos National Labs (the “Trinity” supercomputer) and NERSC (the “Cori” supercomputer).”
Tommaso Cecchi from DDN presented this talk at the HPCAC Spain Conference. “IME unleashes a new I/O provisioning paradigm. This breakthrough, software-defined storage application introduces a whole new tier of transparent, extendable, non-volatile memory (NVM) that provides game-changing latency reduction and greater bandwidth and IOPS performance for the next generation of performance-hungry scientific, analytic, and big data applications – all while offering significantly greater economic and operational efficiency than today’s traditional disk-based and all-flash array storage approaches that are currently used to scale performance.”
Hussein Harake from CSCS presented this talk at the HPC Advisory Council Spain Conference.
NERSC has selected a number of HPC research projects to participate in the center’s new Burst Buffer Early User Program, where they will be able to test and run their codes using the new Burst Buffer feature on the center’s newest supercomputer, Cori.
Nathan Rutman from Seagate presented this talk at the LAD’15 Conference. “So why is a spinning disk company talking about Flash? Last year, Seagate acquired Avago LSI’s flash division. We now have an array of flash-based storage. So I have nothing against Flash. This presentation is really on: Where does Flash make sense? I also have a personal agenda because I hate the term “Burst Buffer.” Everyone says “Burst Buffer” instead of saying “Flash.” It drives me crazy. So I’m going to explain what a Burst Buffer is and what it is not.”
This Week in HPC: Cray Unveils Next Generation Supercomputer and DDN Extends Product Line with Scale-out Appliance
In this episode of This Week in HPC, Michael Feldman and Addison Snell from Intersect360 Research discuss the next-generation Cray XC40 supercomputer with DataWarp technology, as well as the introduction of a new Scale-out Storage Appliance from DDN based on GPFS.
“For those who haven’t been following the details of one of DOE’s more recent procurement rounds, the NERSC-8 and Trinity request for proposals (RFP) explicitly required that all vendor proposals include a burst buffer to address the capability of multi-petaflop simulations to dump tremendous amounts of data in very short order. The target use case is for petascale checkpoint-restart, where the memory of thousands of nodes (hundreds of terabytes of data) needs to be flushed to disk in an amount of time that doesn’t dominate the overall execution time of the calculation.”
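The checkpoint-restart requirement above is fundamentally a bandwidth calculation: total memory to flush divided by an acceptable checkpoint window gives the aggregate I/O rate the burst buffer must sustain. A minimal back-of-envelope sketch, using hypothetical numbers (not actual Trinity or Cori specifications):

```python
# Back-of-envelope burst buffer bandwidth sizing for petascale
# checkpoint-restart. All figures below are hypothetical examples,
# not the specs of any real system.

nodes = 10_000            # hypothetical node count
mem_per_node_gb = 100     # hypothetical memory dumped per node, in GB
target_seconds = 5 * 60   # hypothetical checkpoint window (5 minutes)

# Total checkpoint size across all nodes, in TB.
checkpoint_tb = nodes * mem_per_node_gb / 1000

# Aggregate bandwidth needed to flush it within the window, in TB/s.
bandwidth_tb_s = checkpoint_tb / target_seconds

print(f"Checkpoint size: {checkpoint_tb:.0f} TB")
print(f"Required bandwidth: {bandwidth_tb_s * 1000:.1f} GB/s")
```

With these example figures, a full-system dump is 1,000 TB and must move at over 3 TB/s to finish in five minutes, which is why the RFP pushes this traffic to a flash tier rather than the disk-based parallel file system.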