ORNL Taps D-Wave for Exascale Computing Project

Today Oak Ridge National Laboratory (ORNL) announced that it is bringing on D-Wave to explore quantum computing as an accelerator for the Exascale Computing Project. “Advancing the problem-solving capabilities of quantum computing takes dedicated collaboration with leading scientists and industry experts,” said Robert “Bo” Ewald, president of D-Wave International. “Our work with ORNL’s exceptional community of researchers and scientists will help us understand the potential of new hybrid computing architectures, and hopefully lead to faster and better solutions for critical and complex problems.”
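
Problems offloaded to a quantum annealer such as D-Wave’s are typically cast as QUBO (quadratic unconstrained binary optimization) objectives. As a rough illustration only, here is the kind of objective such a hybrid accelerator would minimize; the coefficients are invented and the brute-force solver is a classical stand-in, not D-Wave’s toolchain:

    #include <stdio.h>

    /* Hypothetical 3-variable QUBO: minimize x^T Q x over x in {0,1}^3.
       A quantum annealer samples low-energy states of such an objective;
       here we simply enumerate all 2^3 assignments classically. */
    static const double Q[3][3] = {
        { -1.0,  2.0,  0.0 },   /* assumed example coefficients */
        {  0.0, -1.0,  2.0 },
        {  0.0,  0.0, -1.0 },
    };

    int main(void) {
        double best = 1e9;
        int best_x = 0;
        for (int x = 0; x < 8; x++) {          /* all bit patterns */
            double e = 0.0;
            for (int i = 0; i < 3; i++)
                for (int j = 0; j < 3; j++)
                    e += Q[i][j] * ((x >> i) & 1) * ((x >> j) & 1);
            if (e < best) { best = e; best_x = x; }
        }
        printf("min energy %.1f at x = {%d,%d,%d}\n",
               best, best_x & 1, (best_x >> 1) & 1, (best_x >> 2) & 1);
        return 0;
    }

In a hybrid architecture, the classical host would formulate the QUBO and the annealer would replace the enumeration step, which grows exponentially with problem size.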

Podcast: A Retrospective on Great Science and the Stampede Supercomputer

TACC will soon deploy Phase 2 of the Stampede II supercomputer. In this podcast, they celebrate by looking back on some of the great science computed on the original Stampede machine. “In 2017, the Stampede supercomputer, funded by the NSF, completed its five-year mission to provide world-class computational resources and support staff to more than 11,000 U.S. users on over 3,000 projects in the open science community. But what made it special? Stampede was like a bridge that moved thousands of researchers off of soon-to-be decommissioned supercomputers, while at the same time building a framework that anticipated the imminent trends that came to dominate advanced computing.”

Intel’s Xeon Scalable Processors Provide Cooling Challenges for HPC

Unless you reduce node and rack density, the wattages of today’s high-powered CPUs and GPUs are simply no longer addressable with air cooling alone. Asetek explores how new processors, such as Intel’s Xeon Scalable processors, often call for more than just air cooling. “The largest Xeon Phi direct-to-chip cooled system today is the Oakforest-PACS system in Japan. The system is made up of 8,208 computational nodes using Asetek Direct-to-Chip liquid cooled Intel Xeon Phi high performance processors with Knights Landing architecture. It is the highest performing system in Japan and #7 on the Top500.”
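
A back-of-the-envelope rack-power calculation shows why density runs into the air-cooling wall; the node count, per-socket TDP, and air-cooling ceiling below are assumed round numbers for illustration, not Asetek’s figures:

    #include <stdio.h>

    int main(void) {
        /* Assumed example figures for illustration only. */
        const int nodes_per_rack   = 72;     /* dense half-width nodes */
        const int sockets_per_node = 2;
        const double tdp_watts     = 205.0;  /* e.g. a high-bin Xeon Scalable SKU */
        const double air_limit_kw  = 25.0;   /* assumed practical air-cooled rack ceiling */

        double cpu_kw = nodes_per_rack * sockets_per_node * tdp_watts / 1000.0;
        printf("CPU power per rack: %.1f kW (air-cooling ceiling ~%.0f kW)\n",
               cpu_kw, air_limit_kw);
        if (cpu_kw > air_limit_kw)
            printf("-> exceeds air cooling alone, before counting memory, "
                   "fabric, and fans; liquid cooling or lower density needed\n");
        return 0;
    }

With these assumptions the CPUs alone draw roughly 29.5 kW per rack, already past the assumed air-cooled ceiling, which is the gap direct-to-chip liquid cooling is meant to close.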

Surprising Stories from 17 National Labs in 17 Minutes

In this video, the U.S. Department of Energy gives a quick tour of all 17 National Labs. Each one comes with a surprising story on what these labs do for us as a Nation. “And they all do really different stuff. Think of a big scientific question or challenge, and one or more of the labs is probably working on it.”

Radio Free HPC Looks at AI Ethics and a Tale of Henry’s Super Heroism

In this podcast, the Radio Free HPC team learns about Henry’s first exploit as an Ethical Superhero. “After witnessing a hit-and-run fender bender, Henry confronted the culprit and ensured that the miscreant left a note on the victim’s windshield. And while we applaud Henry for his heroism, we are also very grateful that he was not shot in the process. This tale leads us into a discussion of AI ethics and how we won’t have this problem in the coming era of self-driving cars.”

LANL Adds Capacity to Trinity Supercomputer for Stockpile Stewardship

Los Alamos National Laboratory has boosted the computational capacity of their Trinity supercomputer with a merger of two system partitions. “With this merge completed, we have now successfully released one of the most capable supercomputers in the world to the Stockpile Stewardship Program,” said Bill Archer, Los Alamos ASC program director. “Trinity will enable unprecedented calculations that will directly support the mission of the national nuclear security laboratories, and we are extremely excited to be able to deliver this capability to the complex.”

A Scalable Object Store for Meteorological and Climate Data

Simon D. Smart gave this talk at the PASC17 conference. “Numerical Weather Prediction (NWP) and Climate simulations sit in the intersection between classically understood HPC and the Big Data communities. Driven by ever more ambitious scientific goals, both the size and number of output data elements generated as part of NWP operations has grown by several orders of magnitude, and will continue to grow into the future. This poses significant scalability challenges for the data processing pipeline, and the movement of data through and between stages is one of the most significant factors in this.”
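
One way to picture the approach (the field names and interface below are hypothetical, invented for illustration rather than taken from the talk): an NWP object store addresses each output field by its scientific metadata rather than by file path, so the metadata tuple itself becomes the key a scalable key-value backend can hash and distribute:

    #include <stdio.h>

    /* Hypothetical metadata key for one NWP output field. */
    typedef struct {
        char param[16];   /* e.g. "temperature" */
        int  level;       /* model or pressure level */
        char date[9];     /* YYYYMMDD */
        int  step;        /* forecast step in hours */
    } FieldKey;

    /* Render the key as a canonical string a key-value backend could hash. */
    static void key_to_string(const FieldKey *k, char *buf, size_t n) {
        snprintf(buf, n, "param=%s,level=%d,date=%s,step=%d",
                 k->param, k->level, k->date, k->step);
    }

    int main(void) {
        FieldKey k = { "temperature", 850, "20170717", 12 };
        char buf[96];
        key_to_string(&k, buf, sizeof buf);
        printf("object key: %s\n", buf);   /* put/get would use this key */
        return 0;
    }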

Survey: Training and Support #1 Concern for the HPC Community

Initial results of the Scientific Computing World (SCW) HPC readership survey show that training and support for HPC resources are the number-one concern both for those who operate and manage HPC facilities and for researchers using HPC resources. “Several themes have emerged as a priority to both HPC managers and users/researchers. Respondents cite that training and support are essential parameters compared to performance, hardware or the availability of HPC resources.”

Job of the Week: Software Engineer at NCAR

NCAR in Boulder is seeking a Software Engineer in our Job of the Week. “This position focuses primarily on the development of tools to meet the needs of the NCAR/IT community, and the design, writing, implementation, and support for systems monitoring tools necessary for the management of the computer infrastructure. Support will also be provided to the research community for the development of web-based analysis tools and general web programming.”

OpenACC Brings Directives to Accelerated Computing at ISC 2017

In this video from ISC 2017, Sunita Chandrasekaran and Michael Wolfe describe how OpenACC makes GPU-accelerated computing more accessible to scientists and engineers. “OpenACC is a user-driven directive-based performance-portable parallel programming model designed for scientists and engineers interested in porting their codes to a wide variety of heterogeneous HPC hardware platforms and architectures with significantly less programming effort than required with a low-level model.”
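
To give a minimal flavor of the model, here is a standard saxpy loop offloaded with a single directive; compiled with an OpenACC compiler (for example, pgcc with the -acc flag) the loop runs on the accelerator, while without it the pragma is ignored and the code compiles as plain C:

    #include <stdio.h>

    /* saxpy: y = a*x + y. The directive asks an OpenACC compiler to
       parallelize and offload the loop (e.g. to a GPU), moving x in
       and y in/out of device memory. */
    void saxpy(int n, float a, float *x, float *y) {
        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    int main(void) {
        enum { N = 1000 };
        float x[N], y[N];
        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }
        saxpy(N, 3.0f, x, y);
        printf("y[0] = %.1f\n", y[0]);  /* expect 5.0 */
        return 0;
    }

This is the appeal of the directive approach: the loop stays ordinary C, and the same source can target CPUs, GPUs, or other accelerators by changing compiler flags rather than rewriting the kernel in a low-level model.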