
Gabriel Broner Joins Rescale as VP & GM of HPC

Today HPC Cloud provider Rescale announced that Gabriel Broner has joined the company as Vice President and General Manager of High Performance Computing. “Rescale offers HPC users the possibility to instantly run simulations on large systems with the architecture of their choice, which enables companies to accelerate the pace of innovation,” said Broner. “I am very excited to join this talented group of people at Rescale who are driving the next big disruption in HPC.”

ORNL Taps D-Wave for Exascale Computing Project

Today Oak Ridge National Laboratory (ORNL) announced they’re bringing on D-Wave to use quantum computing as an accelerator for the Exascale Computing Project. “Advancing the problem-solving capabilities of quantum computing takes dedicated collaboration with leading scientists and industry experts,” said Robert “Bo” Ewald, president of D-Wave International. “Our work with ORNL’s exceptional community of researchers and scientists will help us understand the potential of new hybrid computing architectures, and hopefully lead to faster and better solutions for critical and complex problems.”

Podcast: A Retrospective on Great Science and the Stampede Supercomputer

TACC will soon deploy Phase 2 of the Stampede II supercomputer. In this podcast, they celebrate by looking back on some of the great science computed on the original Stampede machine. “In 2017, the Stampede supercomputer, funded by the NSF, completed its five-year mission to provide world-class computational resources and support staff to more than 11,000 U.S. users on over 3,000 projects in the open science community. But what made it special? Stampede was like a bridge that moved thousands of researchers off of soon-to-be decommissioned supercomputers, while at the same time building a framework that anticipated the imminent trends that came to dominate advanced computing.”

Supercomputers Turn the Clock Back on Storms with “Hindcasting”

Researchers are using supercomputers at LBNL to determine how global climate change has affected the severity of storms and resultant flooding. “The group used the publicly available model, which can be used to forecast future weather, to ‘hindcast’ the conditions that led to the Sept. 9-16, 2013 flooding around Boulder, Colorado.”

Intel’s Xeon Scalable Processors Provide Cooling Challenges for HPC

Unless you reduce node and rack density, the wattages of today’s high-powered CPUs and GPUs are simply no longer addressable with air cooling alone. Asetek explores how new processors, such as Intel’s Xeon Scalable processors, often call for more than just air cooling. “The largest Xeon Phi direct-to-chip cooled system today is the Oakforest-PACS system in Japan. The system is made up of 8,208 computational nodes using Asetek Direct-to-Chip liquid cooled Intel Xeon Phi high performance processors with Knights Landing architecture. It is the highest performing system in Japan and #7 on the Top500.”

Surprising Stories from 17 National Labs in 17 Minutes

In this video, the U.S. Department of Energy gives a quick tour of all 17 National Labs. Each one comes with a surprising story on what these labs do for us as a Nation. “And they all do really different stuff. Think of a big scientific question or challenge, and one or more of the labs is probably working on it.”

Radio Free HPC Looks at AI Ethics and a Tale of Henry’s Super Heroism

In this podcast, the Radio Free HPC team learns about Henry’s first exploit as an Ethical Superhero. “After witnessing a hit-and-run fender bender, Henry confronted the culprit and ensured that the miscreant left a note on the victim’s windshield. And while we applaud Henry for his heroism, we are also very grateful that he was not shot in the process. This tale leads us into a discussion of AI ethics and how we won’t have this problem in the coming era of self-driving cars.”

LANL Adds Capacity to Trinity Supercomputer for Stockpile Stewardship

Los Alamos National Laboratory has boosted the computational capacity of their Trinity supercomputer with a merger of two system partitions. “With this merge completed, we have now successfully released one of the most capable supercomputers in the world to the Stockpile Stewardship Program,” said Bill Archer, Los Alamos ASC program director. “Trinity will enable unprecedented calculations that will directly support the mission of the national nuclear security laboratories, and we are extremely excited to be able to deliver this capability to the complex.”

A Scalable Object Store for Meteorological and Climate Data

Simon D. Smart gave this talk at the PASC17 conference. “Numerical Weather Prediction (NWP) and Climate simulations sit in the intersection between classically understood HPC and the Big Data communities. Driven by ever more ambitious scientific goals, both the size and number of output data elements generated as part of NWP operations has grown by several orders of magnitude, and will continue to grow into the future. This poses significant scalability challenges for the data processing pipeline, and the movement of data through and between stages is one of the most significant factors in this.”

Survey: Training and Support #1 Concern for the HPC Community

Initial results of the Scientific Computing World (SCW) HPC readership survey show that training and support for HPC resources are the number one concern both for those who operate and manage HPC facilities and for researchers who use them. “Several themes have emerged as a priority to both HPC managers and users/researchers. Respondents cite that training and support are essential parameters compared to performance, hardware or the availability of HPC resources.”