Agenda Posted for HPC User Forum in Tucson, April 11-13

IDC has published the agenda for their next HPC User Forum. The event will take place April 11-13 in Tucson, AZ. “Don’t miss the chance to hear top experts on these high-innovation, high-growth areas of the HPC market. At this meeting, you’ll also hear about government initiatives to get ready for future-generation supercomputers, machine learning, and High Performance Data Analytics.”

Time-lapse Video: Edison Supercomputer Moves to Berkeley

In this video, engineers move the NERSC Edison Supercomputer from Oakland to Berkeley. The week-long move is condensed into 41 seconds in this time-lapse video, shot over the entire process. Edison is a Cray XC30 with a peak performance of 2.57 petaflops, 133,824 compute cores, 357 terabytes of memory, and 7.56 petabytes of disk.

Video: Enabling Application Portability across HPC Platforms

“In this presentation, we will discuss several important goals and requirements of portable standards in the context of OpenMP. We will also encourage audience participation as we discuss and formulate the current state-of-the-art in this area and our hopes and goals for the future. We will start by describing the current and next generation architectures at NERSC and OLCF and explain how the differences require different general programming paradigms to facilitate high-performance implementations.”
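To make the portability question concrete, here is a small sketch (not taken from the talk; the saxpy kernel, sizes, and values are invented) of the same loop written once with classic OpenMP host threading and once with OpenMP 4.x target offload directives, one way a single source can serve both multicore and accelerator-based systems:

    /* Portable OpenMP sketch in C (illustrative; not from the presentation).
       The same saxpy kernel is written for host threads and for OpenMP 4.x
       device offload. */
    #include <stdio.h>

    #define N 1000000

    void saxpy_host(float a, const float *x, float *y) {
        /* Classic shared-memory threading across the cores of one node. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            y[i] += a * x[i];
    }

    void saxpy_offload(float a, const float *x, float *y) {
        /* Same kernel expressed with target directives; it runs on an
           attached device if one exists, otherwise on the host. */
        #pragma omp target teams distribute parallel for \
                map(to: x[0:N]) map(tofrom: y[0:N])
        for (int i = 0; i < N; i++)
            y[i] += a * x[i];
    }

    int main(void) {
        static float x[N], y[N];
        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }
        saxpy_host(2.0f, x, y);      /* y = 2 + 2*1 = 4    */
        saxpy_offload(0.5f, x, y);   /* y = 4 + 0.5*1 = 4.5 */
        printf("y[0] = %f\n", y[0]);
        return 0;
    }

On a compiler without device offload support, the target version simply falls back to running on the host, which is part of what makes the directive-based approach attractive for portability.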

Video: Using OpenMP at NERSC

“This presentation will describe how OpenMP is used at NERSC. NERSC is the primary supercomputing facility for the Office of Science in the US Department of Energy (DOE). Our next production system will be an Intel Xeon Phi Knights Landing (KNL) system, with 60+ cores per node and 4 hardware threads per core. The recommended programming model is hybrid MPI/OpenMP, which also promotes portability across different system architectures.”
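As a minimal illustration of that hybrid model (this is not NERSC code; the array size and the reduction are invented), each MPI rank in the sketch below runs an OpenMP thread team over its own slice of the work, which maps naturally onto a many-core node such as KNL with a few ranks per node and many threads per rank:

    /* Hybrid MPI + OpenMP sketch in C: threads share work within a rank,
       ranks combine results with MPI. Illustrative only. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        /* MPI_THREAD_FUNNELED: only the main thread of each rank calls MPI. */
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        int rank, nranks;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        const long n_local = 1000000;   /* elements owned by this rank */
        double local_sum = 0.0;

        /* OpenMP threads split this rank's slice of the global sum. */
        #pragma omp parallel for reduction(+:local_sum)
        for (long i = 0; i < n_local; i++)
            local_sum += (double)(rank * n_local + i);

        double global_sum = 0.0;
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                   0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("%d ranks x %d threads, sum = %.0f\n",
                   nranks, omp_get_max_threads(), global_sum);

        MPI_Finalize();
        return 0;
    }

The number of ranks and the OMP_NUM_THREADS setting can then be tuned per machine without changing the source, which is the portability argument made in the talk.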

Video: High Performance Clustering for Trillion Particle Simulations

“Modern Cosmology and Plasma Physics codes are capable of simulating trillions of particles on petascale systems. Each time step generated from such simulations is on the order of 10s of TBs. Summarizing and analyzing raw particle data is challenging, and scientists often focus on density structures for follow-up analysis. We develop a highly scalable version of the clustering algorithm DBSCAN and apply it to the largest particle simulation datasets. Our system, called BD-CATS, is the first one to perform end-to-end clustering analysis of trillion particle simulation output. We demonstrate clustering analysis of a 1.4 Trillion particle dataset from a plasma physics simulation, and a 10,240^3 particle cosmology simulation utilizing ~100,000 cores in 30 minutes. BD-CATS has enabled scientists to ask novel questions about acceleration mechanisms in particle physics, and has demonstrated qualitatively superior results in cosmology. Clustering is an example of one scientific data analytics problem. This talk will conclude with a broad overview of other leading data analytics challenges across scientific domains, and joint efforts between NERSC and Intel Research to tackle some of these challenges.”
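For readers unfamiliar with DBSCAN itself: the algorithm treats a point as a core point if at least min_pts neighbors lie within radius eps, grows clusters outward from core points, and marks everything unreachable as noise. The sequential O(n^2) sketch below is purely illustrative (the 2-D points, eps, and min_pts are invented); BD-CATS is a distributed, highly optimized relative of this idea, not this code:

    /* Minimal sequential DBSCAN in C with brute-force neighbor search. */
    #include <stdio.h>
    #include <stdlib.h>

    #define UNVISITED -2
    #define NOISE     -1

    /* Squared Euclidean distance between two 2-D points. */
    static double dist2(const double *a, const double *b) {
        double dx = a[0] - b[0], dy = a[1] - b[1];
        return dx * dx + dy * dy;
    }

    /* Store indices of all points within eps of point p in out; return count. */
    static int region_query(double (*pts)[2], int n, int p, double eps, int *out) {
        int count = 0;
        for (int i = 0; i < n; i++)
            if (dist2(pts[p], pts[i]) <= eps * eps)
                out[count++] = i;
        return count;
    }

    /* label[i] becomes a cluster id (0, 1, ...) or NOISE. */
    void dbscan(double (*pts)[2], int n, double eps, int min_pts, int *label) {
        int *neigh = malloc(sizeof(int) * n);
        int *queue = malloc(sizeof(int) * n);
        for (int i = 0; i < n; i++) label[i] = UNVISITED;

        int cluster = 0;
        for (int p = 0; p < n; p++) {
            if (label[p] != UNVISITED) continue;
            if (region_query(pts, n, p, eps, neigh) < min_pts) {
                label[p] = NOISE;                 /* not a core point (so far) */
                continue;
            }
            /* p is a core point: start a new cluster, grow it breadth-first. */
            label[p] = cluster;
            int head = 0, tail = 0;
            queue[tail++] = p;
            while (head < tail) {
                int q = queue[head++];
                int n_q = region_query(pts, n, q, eps, neigh);
                if (n_q < min_pts) continue;      /* q is only a border point */
                for (int s = 0; s < n_q; s++) {
                    int r = neigh[s];
                    if (label[r] == UNVISITED) {  /* newly reachable: queue it */
                        label[r] = cluster;
                        queue[tail++] = r;
                    } else if (label[r] == NOISE) {
                        label[r] = cluster;       /* noise upgraded to border */
                    }
                }
            }
            cluster++;
        }
        free(neigh);
        free(queue);
    }

    int main(void) {
        /* Two small blobs plus one outlier; eps and min_pts chosen to match. */
        double pts[][2] = { {0,0}, {0.1,0}, {0,0.1}, {5,5}, {5.1,5}, {5,5.1}, {9,0} };
        int n = sizeof(pts) / sizeof(pts[0]);
        int label[7];
        dbscan(pts, n, 0.5, 3, label);
        for (int i = 0; i < n; i++)
            printf("point %d -> %d\n", i, label[i]);  /* -1 means noise */
        return 0;
    }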

With APEX, National Labs Collaborate to Develop Next-Gen Supercomputers

Today Los Alamos, Lawrence Berkeley, and Sandia national laboratories announced the Alliance for Application Performance at Extreme Scale (APEX). The new collaboration will focus on the design, acquisition and deployment of future advanced technology high performance computing systems.

Cray, AMPLab, NERSC Collaborate on Spark Performance for HPC Platforms

Today NERSC announced a collaboration with UC Berkeley’s AMPLab and Cray to design large-scale data analytics stacks. “Analytics workloads will be an increasingly important workload on our supercomputers and we are thrilled to support and participate in this key collaboration,” said Ryan Waite, senior vice president of products at Cray. “As Cray’s supercomputing platforms enable researchers and scientists to model reality ever more accurately using high-fidelity simulations, we have long seen the need for scalable, performant analytic tools to interpret the resulting data. The Berkeley Data Analytics Stack (BDAS) and Spark, in particular, are emerging as a de facto foundation of such a toolset because of their combined focus on productivity and scalable performance.”

Users to Test DataWarp Burst Buffer on Cori Supercomputer

NERSC has selected a number of HPC research projects to participate in the center’s new Burst Buffer Early User Program, where they will be able to test and run their codes using the new Burst Buffer feature on the center’s newest supercomputer, Cori.
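As a rough sketch of what user codes might do with such an allocation, the C fragment below writes a small checkpoint to a directory supplied by the batch system through an environment variable. DW_JOB_STRIPED is used here as the name commonly associated with DataWarp allocations under SLURM, but the variable name, the fallback path, and the checkpoint contents should all be read as illustrative assumptions, not Cori specifics:

    /* Burst buffer I/O sketch in C: write a checkpoint to a fast scratch
       directory supplied by the batch system. Illustrative only. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* Fall back to /tmp so the sketch also runs on a workstation. */
        const char *bb = getenv("DW_JOB_STRIPED");
        if (bb == NULL) bb = "/tmp";

        char path[4096];
        snprintf(path, sizeof(path), "%s/checkpoint.dat", bb);

        /* A toy "checkpoint"; a real code would write this periodically and
           stage it out to the parallel file system before the job ends. */
        double state[1024];
        for (int i = 0; i < 1024; i++) state[i] = (double)i;

        FILE *f = fopen(path, "wb");
        if (!f) { perror("fopen"); return 1; }
        fwrite(state, sizeof(double), 1024, f);
        fclose(f);

        printf("checkpoint written to %s\n", path);
        return 0;
    }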

Cray Scales Fluent to 129,000 Compute Cores

Today Cray announced a world record by scaling ANSYS Fluent to 129,000 compute cores. “Less than a year ago, ANSYS announced Fluent had scaled to 36,000 cores with the help of NCSA. While the nearly 4x increase over the previous record is significant, it tells only part of the story. ANSYS has broadened the scope of simulations allowing for applicability to a much broader set of real-world problems and products than any other company offers.”

Accelerating Science with SciDB from NERSC

Over at NERSC, Linda Vu writes that the SciDB open source database system is a powerful tool for helping scientists wrangle Big Data. “SciDB is an open source database system designed to store and analyze extremely large array-structured data—like pictures from light sources and telescopes, time-series data collected from sensors, spectral data produced by spectrometers and spectrographs, and graph-like structures that illustrate relationships between entities.”