Video: Best Practices in HPC Software Development

“Scientific code developers have increasingly been adopting software processes derived from the mainstream (non-scientific) community. Software practices are typically adopted when continuing without them becomes impractical. However, many software best practices need modification and/or customization, partly because the codes are used for research and exploration, and partly because of the combined funding and sociological challenges. This presentation will describe the lifecycle of scientific software and important ways in which it differs from other software development. We will provide a compilation of software engineering best practices that have generally been found to be useful by science communities, and we will provide guidelines for adoption of practices based on the size and the scope of the project.”

NERSC Paper on Burst Buffers Recognized at Cray User Group

A new paper outlining NERSC's Burst Buffer Early User Program and the center's pioneering efforts in recent months to test-drive the technology using real science applications on Cori Phase 1 has won the Best Paper award at this year's Cray User Group (CUG) meeting.

Superfacility – How New Workflows in the DOE Office of Science are Changing Storage Requirements

Katie Antypas from NERSC presented this talk at the 2016 MSST conference. "Katie is the Project Lead for the NERSC-8 system procurement, a project to deploy NERSC's next-generation supercomputer in mid-2016. The system, named Cori (after Nobel Laureate Gerty Cori), will be a Cray XC system featuring 9,300 Intel Knights Landing processors. The Knights Landing processors will have over 60 cores with 4 hardware threads each and 512-bit vector units. It will be crucial that users can exploit both thread and SIMD vectorization to achieve high performance on Cori."
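The thread-plus-SIMD requirement is easy to picture in code. Below is a minimal OpenMP sketch, not taken from the talk: a hypothetical scale-and-add kernel whose iterations are spread across hardware threads while the simd clause asks the compiler to vectorize each thread's chunk for the 512-bit vector units. The kernel and array size N are illustrative assumptions.

```c
#include <stdio.h>
#include <stdlib.h>

#define N 10000000  /* illustrative problem size */

int main(void) {
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    if (!a || !b) return 1;

    for (long i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }

    /* "parallel for" spreads iterations across hardware threads;
     * "simd" asks the compiler to vectorize each thread's chunk. */
    #pragma omp parallel for simd
    for (long i = 0; i < N; i++)
        a[i] = 2.5 * a[i] + b[i];

    printf("a[0] = %f\n", a[0]);
    free(a);
    free(b);
    return 0;
}
```

With the Intel compilers one would typically build with -qopenmp and a KNL target such as -xMIC-AVX512, though the exact flags depend on the toolchain.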

Edison Supercomputer Helps Find Roots of MJO Modeling Mismatches

The Madden-Julian Oscillation (MJO) occurs on its own timetable, every 30 to 60 days, but its worldwide impact spurs scientists to unlock its secrets. The ultimate answer? Timely preparation for the precipitation havoc it brings, and insight into how it will behave when pressured by a warming climate.

Video: Optimizing Applications for the CORI Supercomputer at NERSC

In this video from SC15, NERSC shares its experience on optimizing applications to run on the new Intel Xeon Phi processors (code name Knights Landing) that will empower the Cori supercomputer by the summer of 2016. "A key goal of the Cori Phase 1 system is to support the increasingly data-intensive computing needs of NERSC users. Toward this end, Phase 1 of Cori will feature more than 1,400 Intel Haswell compute nodes, each with 128 gigabytes of memory. The system will provide about the same sustained application performance as NERSC's Hopper system, which will be retired later this year. The Cori interconnect will have a dragonfly topology based on the Aries interconnect, identical to NERSC's Edison system."

Agenda Posted for HPC User Forum in Tucson, April 11-13

IDC has published the agenda for its next HPC User Forum. The event will take place April 11-13 in Tucson, AZ. "Don't miss the chance to hear top experts on these high-innovation, high-growth areas of the HPC market. At this meeting, you'll also hear about government initiatives to get ready for future-generation supercomputers, machine learning, and High Performance Data Analytics."

Time-lapse Video: Edison Supercomputer Moves to Berkeley

In this video, engineers move the NERSC Edison supercomputer from Oakland to Berkeley. The week-long move is condensed into 41 seconds in this time-lapse video, shot over the entire process. Edison is a Cray XC30 with a peak performance of 2.57 petaflops, 133,824 compute cores, 357 terabytes of memory, and 7.56 petabytes of disk.

Video: Enabling Application Portability across HPC Platforms

"In this presentation, we will discuss several important goals and requirements of portable standards in the context of OpenMP. We will also encourage audience participation as we discuss and formulate the current state of the art in this area and our hopes and goals for the future. We will start by describing the current and next-generation architectures at NERSC and OLCF and explain how the differences require different general programming paradigms to facilitate high-performance implementations."
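As a small illustration of what portability means here, the hedged sketch below (not from the presentation) runs unchanged on a multicore Xeon node like Edison's or a manycore KNL node like Cori's; only runtime settings such as OMP_NUM_THREADS and thread affinity change between machines, not the source.

```c
#include <omp.h>
#include <stdio.h>

/* The same OpenMP source compiles and runs on either architecture;
 * the thread count is chosen at run time by the environment. */
int main(void) {
    #pragma omp parallel
    {
        #pragma omp single
        printf("Running with %d OpenMP threads\n", omp_get_num_threads());
    }
    return 0;
}
```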

Video: Using OpenMP at NERSC

"This presentation will describe how OpenMP is used at NERSC. NERSC is the primary supercomputing facility for the Office of Science in the US Department of Energy (DOE). Our next production system will be an Intel Xeon Phi Knights Landing (KNL) system, with 60+ cores per node and 4 hardware threads per core. The recommended programming model is hybrid MPI/OpenMP, which also promotes portability across different system architectures."
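To make the recommended model concrete, here is a minimal hybrid MPI/OpenMP hello-world sketch, an illustration rather than NERSC code: a few MPI ranks per node, each spawning OpenMP threads within the node.

```c
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank;

    /* MPI_THREAD_FUNNELED is a common choice when only the master
     * thread of each rank makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each rank spawns an OpenMP team; work is split across ranks
     * between nodes and across threads within a node. */
    #pragma omp parallel
    {
        printf("Rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```

Built with an MPI compiler wrapper (for example, cc -qopenmp on a Cray system) and launched with one rank per NUMA domain, the same source runs on a Xeon node or a KNL node; only the rank and thread counts change.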

Video: High Performance Clustering for Trillion Particle Simulations

"Modern cosmology and plasma physics codes are capable of simulating trillions of particles on petascale systems. Each time step generated from such simulations is on the order of tens of terabytes. Summarizing and analyzing raw particle data is challenging, and scientists often focus on density structures for follow-up analysis. We develop a highly scalable version of the clustering algorithm DBSCAN and apply it to the largest particle simulation datasets. Our system, called BD-CATS, is the first to perform end-to-end clustering analysis of trillion-particle simulation output. We demonstrate clustering analysis of a 1.4 trillion particle dataset from a plasma physics simulation, and a 10,240^3 particle cosmology simulation utilizing ~100,000 cores in 30 minutes. BD-CATS has enabled scientists to ask novel questions about acceleration mechanisms in particle physics, and has demonstrated qualitatively superior results in cosmology. Clustering is one example of a scientific data analytics problem. This talk will conclude with a broad overview of other leading data analytics challenges across scientific domains, and joint efforts between NERSC and Intel Research to tackle some of these challenges."
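For readers unfamiliar with the underlying algorithm, here is a minimal serial DBSCAN sketch in C. It shows the density-based idea (core points, border points, noise) that BD-CATS scales up with distributed, parallel data structures; the point data, eps, and min_pts values are illustrative, and the O(n^2) neighbor search below is exactly what a scalable implementation must avoid.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define UNVISITED -2
#define NOISE     -1

typedef struct { double x, y; } Point;

static double dist2(Point a, Point b) {
    double dx = a.x - b.x, dy = a.y - b.y;
    return dx * dx + dy * dy;
}

/* Collect indices of all points within eps of p[idx] (excluding idx). */
static int neighbors(const Point *p, int n, int idx, double eps, int *out) {
    int count = 0;
    for (int j = 0; j < n; j++)
        if (j != idx && dist2(p[idx], p[j]) <= eps * eps)
            out[count++] = j;
    return count;
}

void dbscan(const Point *p, int n, double eps, int min_pts, int *label) {
    int *queue = malloc(n * sizeof(int));
    int *nbrs  = malloc(n * sizeof(int));
    for (int i = 0; i < n; i++) label[i] = UNVISITED;

    int cluster = 0;
    for (int i = 0; i < n; i++) {
        if (label[i] != UNVISITED) continue;
        int k = neighbors(p, n, i, eps, nbrs);
        if (k < min_pts) { label[i] = NOISE; continue; }

        /* i is a core point: grow a new cluster via a work queue.
         * Points are labeled when enqueued, so each enters at most once. */
        label[i] = cluster;
        int head = 0, tail = 0;
        for (int j = 0; j < k; j++) {
            if (label[nbrs[j]] == NOISE) label[nbrs[j]] = cluster; /* border */
            if (label[nbrs[j]] == UNVISITED) {
                label[nbrs[j]] = cluster;
                queue[tail++] = nbrs[j];
            }
        }
        while (head < tail) {
            int q = queue[head++];
            int kq = neighbors(p, n, q, eps, nbrs);
            if (kq < min_pts) continue;      /* q is a border point */
            for (int j = 0; j < kq; j++) {   /* q is core: keep expanding */
                if (label[nbrs[j]] == NOISE) label[nbrs[j]] = cluster;
                if (label[nbrs[j]] == UNVISITED) {
                    label[nbrs[j]] = cluster;
                    queue[tail++] = nbrs[j];
                }
            }
        }
        cluster++;
    }
    free(queue);
    free(nbrs);
}

int main(void) {
    Point pts[] = { {0.0, 0.0}, {0.1, 0.0}, {0.0, 0.1}, {0.1, 0.1}, /* clump A */
                    {5.0, 5.0}, {5.1, 5.0}, {5.0, 5.1},             /* clump B */
                    {9.0, 9.0} };                                   /* isolated */
    int n = sizeof pts / sizeof pts[0];
    int label[8];

    /* Note: min_pts here counts neighbors excluding the point itself. */
    dbscan(pts, n, 0.5, 2, label);
    for (int i = 0; i < n; i++)
        printf("point %d -> label %d\n", i, label[i]);  /* -1 means noise */
    return 0;
}
```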