Tackling Turbulence on the Summit Supercomputer

Researchers at the Georgia Institute of Technology have achieved world-record performance on the Summit supercomputer using a new algorithm for turbulence simulation. “The team identified the most time-intensive parts of a base CPU code and set out to design a new algorithm that would reduce the cost of these operations, push the limits of the largest problem size possible, and take advantage of the unique data-centric characteristics of Summit, the world’s most powerful and smartest supercomputer for open science.”

Applying Cloud Techniques to Address Complexity in HPC System Integrations

Arno Kolster from Providentia Worldwide gave this talk at the HPC User Forum. “OLCF and technology consulting company Providentia Worldwide recently collaborated to develop an intelligence system that combines real-time updates from the IBM AC922 Summit supercomputer with local weather and operational data from its adjacent cooling plant, with the goal of optimizing Summit’s energy efficiency. The OLCF proposed the idea and provided facility data, and Providentia developed a scalable platform to integrate and analyze the data.”

Video: Unified Memory on Summit (Power9 + V100)

Jeff Larkin from NVIDIA gave this talk at the Summit Application Readiness Workshop. The event’s primary objective was to provide the detailed technical information and hands-on help required for select application teams to meet the scalability and performance metrics required for Early Science proposals. Technical representatives from the IBM/NVIDIA Center of Excellence delivered a few plenary presentations, but most of the time was set aside for the extended application teams to carry out hands-on technical work on Summit.
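The subject of Larkin’s talk, CUDA Unified Memory, gives the Power9 CPUs and Tesla V100 GPUs in each Summit node a single shared address space, with pages migrating between host and device memory on demand. As a generic illustration of the programming model (a minimal sketch, not code from the talk), a managed allocation from cudaMallocManaged can be touched through the same pointer by both the CPU and a GPU kernel:

    #include <stdio.h>
    #include <cuda_runtime.h>

    /* Trivial kernel: scale a vector in place on the GPU. */
    __global__ void scale(double *x, double a, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= a;
    }

    int main(void)
    {
        const int n = 1 << 20;
        double *x = NULL;

        /* One allocation visible to both CPU and GPU; pages migrate on demand. */
        cudaMallocManaged((void **)&x, n * sizeof(double));

        for (int i = 0; i < n; i++)   /* first touched on the CPU */
            x[i] = 1.0;

        scale<<<(n + 255) / 256, 256>>>(x, 2.0, n);  /* then touched on the GPU */
        cudaDeviceSynchronize();      /* wait before reading on the CPU again */

        printf("x[0] = %f\n", x[0]);  /* prints 2.000000 */
        cudaFree(x);
        return 0;
    }

On Summit’s Power9 nodes the GPUs connect to the CPUs over NVLink rather than PCIe, which makes this on-demand page migration considerably cheaper; when to rely on it versus explicit data movement is exactly the kind of trade-off application-readiness talks like this one addressed.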

OpenACC Helps Scientists Port Their Code at the Center for Accelerated Application Readiness (CAAR)

In this video, Jack Wells from the Oak Ridge Leadership Computing Facility and Duncan Poole from NVIDIA describe how OpenACC enabled them to port their codes to the new Summit supercomputer. “In preparation for next-generation supercomputer Summit, the Oak Ridge Leadership Computing Facility (OLCF) selected 13 partnership projects into its Center for Accelerated Application Readiness (CAAR) program. A collaborative effort of application development teams and staff from the OLCF Scientific Computing group, CAAR is focused on redesigning, porting, and optimizing application codes for Summit’s hybrid CPU–GPU architecture.”
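To make the directive-based approach concrete (a generic sketch, not code from any of the 13 CAAR projects), OpenACC lets a team offload an existing C loop to the GPU by annotating it rather than rewriting it in a GPU-specific language:

    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static double x[N], y[N];
        const double a = 2.0;

        for (int i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }

        /* The only change to the original CPU loop is the directive, which
           asks the compiler to parallelize the loop on an accelerator,
           copying x to the device and y in both directions. */
        #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
        for (int i = 0; i < N; i++)
            y[i] = a * x[i] + y[i];

        printf("y[0] = %f\n", y[0]);  /* prints 4.000000 */
        return 0;
    }

Because the directive is a pragma, the same source still compiles as ordinary serial C when OpenACC is disabled, one reason the approach suits porting efforts that must keep a working CPU code throughout.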

Video: 25 Years of Supercomputing at Oak Ridge

“Since its early days, the OLCF has consistently delivered supercomputers of unprecedented capability to the scientific community on behalf of DOE—contributing to a rapid evolution in scientific computing that has produced a millionfold increase in computing power. This rise has included the launch of the first teraflop system for open science, the science community’s first petaflop system, and two top-ranked machines on the TOP500 list. The next chapter in the OLCF’s legacy is set to begin with the deployment of Summit, a pre-exascale system capable of more than five times the performance of Titan.”

DOE Awards 1 Billion Hours of Supercomputer Time for Research

The DOE has awarded 1 billion CPU hours of compute time on Oak Ridge supercomputers to a set of important research projects vital to our nation’s future. The ALCC allocations for 2017 continue the tradition of innovation and discovery, with project awards ranging from 2 million to 300 million processor hours.

DOE to Invest $16 Million in Supercomputing for Materials Design

Today the U.S. Department of Energy announced that it will invest $16 million over the next four years to accelerate the design of new materials through the use of supercomputers. “Our simulations will rely on current petascale and future exascale capabilities at DOE supercomputing centers. To validate the predictions about material behavior, we’ll conduct experiments and use the facilities of the Advanced Photon Source, Spallation Neutron Source and the Nanoscale Science Research Centers.”

Video: Developing, Configuring, Building, and Deploying HPC Software

“The process of developing HPC software requires consideration of issues in software design as well as practices that support the collaborative writing of well-structured code that is easy to maintain, extend, and support. This presentation will provide an overview of development environments and how to configure, build, and deploy HPC software using some of the tools that are frequently used in the community.”

INCITE Seeking Proposals to Advance Science with Leadership Computing

The DOE INCITE program is now accepting proposals for high-impact, computationally intensive research campaigns in a broad array of science, engineering, and computer science domains.

Call for Submissions: GPU Hackathon at the University of Delaware

The Call for Submissions is open for the upcoming GPU Programming Hackathon at the University of Delaware (UDEL). The event takes place May 2-6, 2016, at UDEL in Newark, Delaware.