Pagoda Project Rolls Out First Software Libraries for Exascale

The Pagoda Project—a three-year Exascale Computing Project software development program based at Lawrence Berkeley National Laboratory—has reached a major milestone: making its open source software libraries publicly available as of September 30, 2017. “Our job is to ensure that the exascale applications reach key performance parameters defined by the DOE,” said project lead Scott Baden.
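
The libraries in this first release are GASNet-EX, a communication runtime, and UPC++, a C++ interface to the PGAS (Partitioned Global Address Space) programming model. As a flavor of that model, here is a minimal UPC++ sketch (our illustration, not project code, assuming the publicly documented upcxx API): rank 0 allocates an integer in the shared address space and every rank reads it with a one-sided get.

```cpp
#include <upcxx/upcxx.hpp>
#include <iostream>

int main() {
  upcxx::init();

  // Rank 0 allocates an integer in its share of the global address space.
  upcxx::global_ptr<int> gp;  // default-constructed: null
  if (upcxx::rank_me() == 0) gp = upcxx::new_<int>(42);

  // Make the global pointer known to every rank.
  gp = upcxx::broadcast(gp, 0).wait();

  // Any rank can now read the value with a one-sided remote get.
  int v = upcxx::rget(gp).wait();
  std::cout << "rank " << upcxx::rank_me() << " read " << v << std::endl;

  upcxx::finalize();
  return 0;
}
```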

Designing HPC, Big Data, & Deep Learning Middleware for Exascale

DK Panda from Ohio State University presented this talk at the HPC Advisory Council Spain Conference. “This talk will focus on challenges in designing HPC, Big Data, and Deep Learning middleware for Exascale systems with millions of processors and accelerators. For the HPC domain, we will discuss the challenges in designing runtime environments for MPI+X (PGAS: OpenSHMEM/UPC/CAF/UPC++, OpenMP, and CUDA) programming models. Features and sample performance numbers from the MVAPICH2 libraries will be presented.”
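
To make “MPI+X” concrete, here is a minimal hybrid sketch (an illustration of the pattern, not code from the talk): MPI moves data between processes while OpenMP, playing the role of “X”, parallelizes the computation within each process.

```cpp
#include <mpi.h>
#include <omp.h>
#include <cstdio>

int main(int argc, char** argv) {
  // Request threaded MPI so OpenMP threads can coexist with the library.
  int provided;
  MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // "X" = OpenMP: threads share the node-local loop.
  double local = 0.0;
  #pragma omp parallel for reduction(+:local)
  for (int i = 0; i < 1000000; ++i)
    local += 1.0 / (1.0 + i);

  // MPI performs the inter-process reduction.
  double global = 0.0;
  MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
  if (rank == 0) std::printf("global sum = %f\n", global);

  MPI_Finalize();
  return 0;
}
```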

Hyperion Research Breakfast Briefing Returns to Denver at SC17

Hyperion Research will host its annual Breakfast Briefing at SC17 in Denver. The event takes place at the Denver Marriott City Center from 7:30am to 9:30am on Tuesday, Nov. 14. “We will be presenting an update on HPC market results, key technology issues & trends, results of new research studies in AI & Big Data, new ROI results, and the mapping of HPC centers across the US.”

Video: The Era of Data-Centric Data Centers

Gilad Shainer gave this talk at the HPC Advisory Council Spain Conference. “The latest revolution in HPC is the move to a co-design architecture, a collaborative effort among industry, academia, and manufacturers to reach Exascale performance. By taking a holistic system-level approach to fundamental performance improvements, co-design architectures exploit system efficiency and optimize performance by creating synergies between the hardware and the software.”

SC17 Session Preview: “Taking the Nanoscale to the Exascale”

Brian Ban continues his series of SC17 Session Previews with a look at an invited talk on nanotechnology. “This talk will focus on the challenges that computational chemistry faces in taking the equations that model the very small (molecules and the reactions they undergo) to efficient and scalable implementations on the very large computers of today and tomorrow.”

Exascale to Enable Smart Cities

Over at Argonne, Charlie Catlett describes how the advent of Exascale computing will enable Smart Cities designed to improve the quality of life for urban dwellers. Catlett will moderate a panel discussion on Smart Cities at the SC17 Plenary session, which kicks off the conference on Monday, Nov. 13 in Denver.

Video: Silicon Photonics for Extreme Computing

Keren Bergman from Columbia University gave this talk at the 2017 Argonne Training Program on Extreme-Scale Computing. “Exaflop machines would represent a thousand-fold improvement over the current standard, the petaflop machines that first came online in 2008. But while exaflop computers already appear on funders’ technology roadmaps, making the exaflop leap on the short timescales of those roadmaps constitutes a formidable challenge.”

HPC I/O for Computational Scientists

Phil Carns from Argonne gave this talk at the 2017 Argonne Training Program on Extreme-Scale Computing. “Darshan is a scalable HPC I/O characterization tool. It captures an accurate but concise picture of application I/O behavior with minimal overhead. Darshan was originally developed on the IBM Blue Gene series of computers deployed at the Argonne Leadership Computing Facility, but it is portable across a wide variety of platforms, including the Cray XE6, Cray XC30, and Linux clusters. Darshan routinely instruments jobs using up to 786,432 compute cores on the Mira system at ALCF.”
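
Darshan requires no changes to application source; it is typically enabled by linking or preloading its shared library at job launch, and the resulting per-job log is summarized with tools such as darshan-parser. The sketch below (a hypothetical example, not from the talk) shows the kind of MPI-IO code whose operation counts, sizes, and timing Darshan would record.

```cpp
#include <mpi.h>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // Each rank writes its id at a disjoint offset in a shared file;
  // Darshan characterizes these calls transparently when its library
  // is linked in or preloaded (e.g. via LD_PRELOAD) at run time.
  MPI_File fh;
  MPI_File_open(MPI_COMM_WORLD, "demo.out",
                MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
  int value = rank;
  MPI_File_write_at(fh, (MPI_Offset)rank * sizeof(int), &value, 1, MPI_INT,
                    MPI_STATUS_IGNORE);
  MPI_File_close(&fh);

  MPI_Finalize();
  return 0;
}
```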

How Manufacturing will Leap Forward with Exascale Computing

In this special guest feature, Jeremy Thomas from Lawrence Livermore National Lab writes that exascale computing will be a vital boost to the U.S. manufacturing industry. “This is much bigger than any one company or any one industry. If you consider any industry, exascale is truly going to have a sizeable impact, and if a country like ours is going to be a leader in industrial design, engineering and manufacturing, we need exascale to keep the innovation edge.”

Supercomputing Earthquakes in the Age of Exascale

Tomorrow’s exascale supercomputers will enable researchers to accurately simulate the ground motions of regional earthquakes quickly and in unprecedented detail. “Simulations of high-frequency earthquakes are more computationally demanding and will require exascale computers,” said David McCallen, who leads the ECP-supported effort. “Ultimately, we’d like to get to a much larger domain, higher frequency resolution and speed up our simulation time.”
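
A back-of-envelope argument (our illustration, assuming an explicit wave-propagation solver on a regular grid, not a claim from the article) shows why frequency drives the computational demand: resolving a maximum frequency f requires grid spacing proportional to the shortest wavelength, so the number of grid points grows as f cubed, and the CFL stability limit shrinks the time step by another factor of f.

```latex
\mathrm{cost} \;\propto\; \underbrace{f^{3}}_{\text{grid points}} \times \underbrace{f}_{\text{time steps}} \;=\; f^{4},
\qquad \text{so doubling the resolved frequency costs roughly } 2^{4} = 16\times \text{ the compute.}
```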