
Video: Unified Memory on Summit (Power9 + V100)

Jeff Larkin from NVIDIA gave this talk at the Summit Application Readiness Workshop. "The event had the primary objective of providing the detailed technical information and hands-on help required for select application teams to meet the scalability and performance metrics required for Early Science proposals. Technical representatives from the IBM/NVIDIA Center of Excellence will be delivering a few plenary presentations, but most of the time will be set aside for the extended application teams to carry out hands-on technical work on Summit."

OpenACC Helps Scientists Port Their Code at the Center for Accelerated Application Readiness (CAAR)

In this video, Jack Wells from the Oak Ridge Leadership Computing Facility and Duncan Poole from NVIDIA describe how OpenACC enabled them to port their codes to the new Summit supercomputer. “In preparation for next-generation supercomputer Summit, the Oak Ridge Leadership Computing Facility (OLCF) selected 13 partnership projects into its Center for Accelerated Application Readiness (CAAR) program. A collaborative effort of application development teams and staff from the OLCF Scientific Computing group, CAAR is focused on redesigning, porting, and optimizing application codes for Summit’s hybrid CPU–GPU architecture.”

Video: 25 Years of Supercomputing at Oak Ridge

"Since its early days, the OLCF has consistently delivered supercomputers of unprecedented capability to the scientific community on behalf of DOE—contributing to a rapid evolution in scientific computing that has produced a millionfold increase in computing power. This rise has included the launch of the first teraflop system for open science, the science community's first petaflop system, and two top-ranked machines on the TOP500 list. The next chapter in the OLCF's legacy is set to begin with the deployment of Summit, a pre-exascale system capable of more than five times the performance of Titan."

DOE Awards 1 Billion Hours of Supercomputer Time for Research

The DOE has awarded 1 billion CPU hours of compute time on Oak Ridge supercomputers to a set of important research projects vital to our nation's future. ALCC allocations for 2017 continue the tradition of innovation and discovery, with awards ranging from 2 million to 300 million processor hours per project.

DOE to Invest $16 Million in Supercomputing Materials

Today the U.S. Department of Energy announced that it will invest $16 million over the next four years to accelerate the design of new materials through the use of supercomputers. "Our simulations will rely on current petascale and future exascale capabilities at DOE supercomputing centers. To validate the predictions about material behavior, we'll conduct experiments and use the facilities of the Advanced Photon Source, Spallation Neutron Source and the Nanoscale Science Research Centers."

Video: Developing, Configuring, Building, and Deploying HPC Software

“The process of developing HPC software requires consideration of issues in software design as well as practices that support the collaborative writing of well-structured code that is easy to maintain, extend, and support. This presentation will provide an overview of development environments and how to configure, build, and deploy HPC software using some of the tools that are frequently used in the community.”

INCITE Seeking Proposals to Advance Science with Leadership Computing

The DOE INCITE program is now accepting proposals for high-impact, computationally intensive research campaigns in a broad array of science, engineering, and computer science domains.

Call for Submissions: GPU Hackathon at the University of Delaware

The Call for Submissions is open for the upcoming GPU Programming Hackathon at the University of Delaware (UDEL). The event takes place May 2-6, 2016, at UDEL in Newark, Delaware.

Lustre: This is Not Your Grandmother’s (or Grandfather’s) Parallel File System

"Over the last several years, an enormous amount of development effort has gone into Lustre to address users' enterprise-related requests. Their work is not only keeping Lustre extremely fast (the Spider II storage system at the Oak Ridge Leadership Computing Facility (OLCF) that supports OLCF's Titan supercomputer delivers 1 TB/s, and Data Oasis, supporting the Comet supercomputer at the San Diego Supercomputer Center (SDSC), supports thousands of users with 300 GB/s throughput) but also making it an enterprise-class parallel file system that has since been deployed for many mission-critical applications, such as seismic processing and analysis, regional climate and weather modeling, and banking."

DDNtool Streamlines File System Monitoring at Oak Ridge

Over at Oak Ridge, Eric Gedenk writes that monitoring the status of complex supercomputer systems is an ongoing challenge. Now, Ross Miller from OLCF has developed DDNtool, which provides a single interface to 72 controllers in near real time.