
Edison Supercomputer Helps Find Roots of MJO Modeling Mismatches

The Madden-Julian Oscillation (MJO) occurs on its own timetable—every 30 to 60 days—but its worldwide impact spurs scientists to unlock its secrets. The ultimate answer? Timely preparation for the precipitation havoc it brings—and insight into how it will behave when pressured by a warming climate.

Video: Optimizing Applications for the CORI Supercomputer at NERSC

In this video from SC15, NERSC shares its experience optimizing applications to run on the new Intel Xeon Phi processors (code-named Knights Landing) that will power the Cori supercomputer by the summer of 2016. “A key goal of the Cori Phase 1 system is to support the increasingly data-intensive computing needs of NERSC users. Toward this end, Phase 1 of Cori will feature more than 1,400 Intel Haswell compute nodes, each with 128 gigabytes of memory per node. The system will provide about the same sustained application performance as NERSC’s Hopper system, which will be retired later this year. The Cori interconnect will have a dragonfly topology based on the Aries interconnect, identical to NERSC’s Edison system.”

Agenda Posted for HPC User Forum in Tucson, April 11-13

IDC has published the agenda for their next HPC User Forum. The event will take place April 11-13 in Tucson, AZ. “Don’t miss the chance to hear top experts on these high-innovation, high-growth areas of the HPC market. At this meeting, you’ll also hear about government initiatives to get ready for future-generation supercomputers, machine learning, and High Performance Data Analytics.”

Time-lapse Video: Edison Supercomputer Moves to Berkeley

In this video, engineers move the NERSC Edison supercomputer from Oakland to Berkeley. The week-long move is condensed into 41 seconds of time-lapse footage shot over the entire process. Edison is a Cray XC30 with a peak performance of 2.57 petaflops, 133,824 compute cores, 357 terabytes of memory, and 7.56 petabytes of disk.

Video: Enabling Application Portability across HPC Platforms

“In this presentation, we will discuss several important goals and requirements of portable standards in the context of OpenMP. We will also encourage audience participation as we discuss and formulate the current state-of-the-art in this area and our hopes and goals for the future. We will start by describing the current and next generation architectures at NERSC and OLCF and explain how the differences require different general programming paradigms to facilitate high-performance implementations.”

Video: Using OpenMP at NERSC

“This presentation will describe how OpenMP is used at NERSC. NERSC is the primary supercomputing facility for the Office of Science in the U.S. Department of Energy (DOE). Our next production system will be an Intel Xeon Phi Knights Landing (KNL) system, with 60+ cores per node and 4 hardware threads per core. The recommended programming model is hybrid MPI/OpenMP, which also promotes portability across different system architectures.”

Video: High Performance Clustering for Trillion Particle Simulations

“Modern cosmology and plasma physics codes are capable of simulating trillions of particles on petascale systems. Each time step generated from such simulations is on the order of 10s of TBs. Summarizing and analyzing raw particle data is challenging, and scientists often focus on density structures for follow-up analysis. We develop a highly scalable version of the clustering algorithm DBSCAN and apply it to the largest particle simulation datasets. Our system, called BD-CATS, is the first to perform end-to-end clustering analysis of trillion particle simulation output. We demonstrate clustering analysis of a 1.4 trillion particle dataset from a plasma physics simulation, and a 10,240^3 particle cosmology simulation utilizing ~100,000 cores in 30 minutes. BD-CATS has enabled scientists to ask novel questions about acceleration mechanisms in particle physics, and has demonstrated qualitatively superior results in cosmology. Clustering is one example of a scientific data analytics problem. This talk will conclude with a broad overview of other leading data analytics challenges across scientific domains, and joint efforts between NERSC and Intel Research to tackle some of these challenges.”
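The density-based rule at the heart of DBSCAN (a point with at least `min_pts` neighbors within distance `eps` is a core point and seeds a cluster; low-density points reachable from a core point become border points; everything else is noise) can be sketched in a few lines of plain Python. This is a serial toy version for intuition only, not the scalable BD-CATS implementation described in the talk:

```python
import math

def dbscan(points, eps, min_pts):
    """Return a cluster label per point; labels start at 0, -1 means noise."""
    labels = [None] * len(points)

    def neighbors(i):
        # brute-force range query; real implementations use spatial indexes
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # noise (may later be claimed as a border point)
            continue
        labels[i] = cluster          # new core point: start a cluster
        seeds = list(nbrs)
        while seeds:                 # expand the cluster outward
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise reachable from a core point -> border
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:   # j is itself a core point: keep expanding
                seeds.extend(jn)
        cluster += 1
    return labels

# two dense groups and one outlier (hypothetical data)
pts = [(0, 0), (0, 0.5), (0.5, 0),
       (10, 10), (10, 10.5), (10.5, 10),
       (50, 50)]
labels = dbscan(pts, eps=1.0, min_pts=3)
```

The brute-force neighbor search is O(n^2); the point of BD-CATS is replacing exactly this step with distributed spatial queries so the same logic scales to trillions of particles.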

With APEX, National Labs Collaborate to Develop Next-Gen Supercomputers

Today Los Alamos, Lawrence Berkeley, and Sandia national laboratories announced the Alliance for Application Performance at Extreme Scale (APEX). The new collaboration will focus on the design, acquisition and deployment of future advanced technology high performance computing systems.

Cray, AMPLab, NERSC Collaborate on Spark Performance for HPC Platforms

Today NERSC announced a collaboration with UC Berkeley’s AMPLab and Cray to design large-scale data analytics stacks. “Analytics workloads will be an increasingly important workload on our supercomputers and we are thrilled to support and participate in this key collaboration,” said Ryan Waite, senior vice president of products at Cray. “As Cray’s supercomputing platforms enable researchers and scientists to model reality ever more accurately using high-fidelity simulations, we have long seen the need for scalable, performant analytic tools to interpret the resulting data. The Berkeley Data Analytics Stack (BDAS) and Spark, in particular, are emerging as a de facto foundation of such a toolset because of their combined focus on productivity and scalable performance.”
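Spark's appeal for this kind of work is its functional style: analytics are expressed as map and reduce transformations that the engine then distributes. As a rough illustration of that pattern in plain Python (hypothetical data, no Spark dependency), a word count written as explicit map and reduce-by-key steps:

```python
from collections import Counter
from itertools import chain

# hypothetical input lines; in Spark this would be a distributed dataset (RDD)
lines = ["edison cori nersc", "cori spark", "spark spark analytics"]

# "map" step: flatten every line into (word, 1) pairs
pairs = ((word, 1) for word in chain.from_iterable(l.split() for l in lines))

# "reduceByKey" step: sum the counts per word
counts = Counter()
for word, n in pairs:
    counts[word] += n
```

In actual Spark the same pipeline runs unchanged across many nodes, which is why pairing it with Cray's interconnect and NERSC's data volumes is the interesting engineering question here.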

Users to Test DataWarp Burst Buffer on Cori Supercomputer

NERSC has selected a number of HPC research projects to participate in the center’s new Burst Buffer Early User Program, where they will be able to test and run their codes using the new Burst Buffer feature on the center’s newest supercomputer, Cori.